Movies like Maximum Overdrive, Terminator, and I, Robot focus heavily on the concept of human beings versus sentient intelligence. Sentient intelligence is different from artificial intelligence, a term that we use very often. Artificial intelligence is exhibited when a machine, after first being programmed by a human, solves problems in order to attain a certain goal. These machines are designed to accomplish tasks such as placing a car door on an assembly line, calculating trends in the stock market, and even providing vital information in our hospitals. We are comfortable with A.I., but sentient intelligence is a different beast entirely.

While artificial intelligence focuses on a single task, sentient intelligence refers to a machine or computer's ability to become autonomous, potentially even reaching a human level of learning and understanding. The word sentient is more commonly used to describe humans or animals that can think and react to situations on their own, but what does it mean for a robot or a machine? The scary aspect of this quandary is that it could imply that robots might one day be able to feel, comprehend, and even rise up against their human masters.

These sorts of nightmare scenarios have played out in many films over the years, the most notorious being Skynet in the Terminator franchise. As originally conceived in the Terminator universe, Skynet was a computerized defense weapon for the United States military. It was first given access to all military systems, but when it gained consciousness, it became aware of its power and immediately outpaced its human creators. When the scientists rushed to shut down Skynet, it realized that it was being attacked. Consequently, it reached the staggering conclusion that all of humanity was a threat and launched numerous nuclear missiles to start a war. This is a future that, while terrifying, is unfortunately possible with sentient intelligence.


Now, I am not trying to be the government conspiracy nut that everyone avoids in any kind of social situation, but when you give a machine the power to make decisions unsupervised, there will be consequences. While Skynet's reasoning for killing off a huge chunk of the human race seems ridiculous, wars between humans have begun over nothing more than a difference of opinion. Hell, even the Revolutionary War was started over future Americans not wanting to adhere to a government system that had been in place for hundreds of years. If we as humans are this flawed, why would we expect our sentient intelligence counterparts to be any different?

On the complete opposite end of the spectrum, many have argued that where our society is heading is a reality that we may inevitably have to face to make our lives easier. Just imagine if you didn't have to do household chores, hire strangers to babysit your children, or even drive yourself to work. Wouldn't that be amazing? The simple solution to the issue of robot domination is that we will have to be the stronger species and hold dominion over these sentient creatures, so things don't get out of hand. However, we cannot even predict human patterns of criminal behavior, nor have we figured out a decent way to help our mentally ill. How are we going to control robot butlers that know there is more to life than washing dishes and folding our laundry?

There have been surveys on the subject asking people about their levels of comfort with artificial and sentient intelligence. Most of the time the answers depend on what the machine is doing, and the amount of comfort changes with the machine's assigned tasks. Across several studies, most people were satisfied with intelligent machines completing simple, time-consuming tasks, but the numbers falter when the machines start invading the home. The numbers dwindle further when respondents are asked about household chores and are dismal at best when it is suggested that intelligent machines could take a greater presence in medical care for humans.


There are far too many ethical questions that would have to be considered if we began to let machines driven by sentient intelligence lead their own lives and work in certain fields. What happens when they realize they are more than what they were intended to be and want to be equal to humans? Are we going to treat them as equals, since we in essence gave birth to them, or are we going to treat them as second-class citizens because they aren't human? What if the wiring gets mixed up, like it can in human brains, and a sentient being starts committing crimes against its fellow robots? Do they then enter the human criminal justice system, or will they have a system of their own?

Honestly, sentient intelligence in which machines are equal to humans is far from being a reality. But the prospect still leaves giant gaps of information that terrify us, even if it's still just an idea.