HARTFORD -- First off, what is Artificial Intelligence (AI)? To find out a bit more, we talked to professor Mark Hoffman from Quinnipiac University.
"Basically, you're making machines that have human characteristics, intelligence being one of those characteristics," said Hoffman.
We use AI all the time, many of us without even knowing it. Your TV has it, your coffeemaker has it, your car uses it. Think anti-lock brakes or traction control. The iPhone's Siri and Amazon's Alexa are obvious examples as well. These are all called narrow AI, because they're really good at one task. But if you asked your anti-lock brakes to solve a math problem, well, they would have some trouble.
So that's narrow AI, but there are two more levels. Artificial general intelligence, called AGI, is when a computer is on par with human intelligence. Experts differ on when this will be reached, but many point to around the year 2040.
After AGI, the next level is artificial superintelligence, called ASI. That's pretty self-explanatory.
Think of how much smarter we are than an ant. That's how much smarter ASI would be compared to us. There already are computers that are being taught some pretty complex things.
"I would think that if there was something for us to be concerned about, it might be the ability for these to communicate and then be able to learn," said Hoffman.
Not taking it seriously? Maybe that's because it still feels so fictional, having appeared in many movies, from I, Robot to WarGames. Superintelligence is fiction for now, but then again, so was space travel until we did it.
And now futurists and scientists are warning us that the time frame may be closer than we think. Let's talk scenarios. What are some potential good outcomes?
Well, something like nanobots traveling through your bloodstream to literally eat cancer cells, or clean out a blockage in your arteries. Or how about reversing aging with a method humans couldn't even have conceived of? But every major advance can have negative side effects.
"We developed atomic energy with the best of intentions and it turns out you can make a bomb,"said Hoffman.
Now on to the bad outcomes for us. The doomsday AI, if you will. It could be something as simple as a misinterpretation by the computer, like the robot uprising in I, Robot. The AI's instructions are to keep humans safe, and the central mind believes the only way to do that is by essentially keeping us prisoner so nobody gets hurt.
Or what if superintelligence decides that humans are simply no longer needed, and we're just an unnecessary annoyance. Alright, what can we do now? Well, not much, other than being aware of it.
SpaceX and Tesla founder Elon Musk helped form OpenAI, a nonprofit that aims to further beneficial AI. It publishes its results online for everyone to see.
Google has invested billions of dollars into advancing computer power in a safe way.
So as computing power continues to accelerate, the best we can hope is that the advance of AI looks more like The Jetsons and less like The Terminator.