Sentient AI: We Are Getting Closer

In November, experts writing a commentary for the scientific journal Nature outlined a scenario in which a rogue artificial intelligence (AI) hijacks a brain-computer interface (BCI), manipulating a person’s thoughts and making decisions against the person’s will, with physical consequences:

A paralyzed man participates in a clinical trial of a brain-computer interface (BCI). A computer connected to a chip in his brain is trained to interpret the neural activity resulting from his mental rehearsals of an action. The computer generates commands that move a robotic arm. One day, the man feels frustrated with the experimental team. Later, his robotic hand crushes a cup after taking it from one of the research assistants, and hurts the assistant. Apologizing for what he says must have been a malfunction of the device, he wonders whether his frustration with the team played a part.

A similar take on the uncontrollable AI plays the idea as (for now) a ridiculous hypothetical. In the TV sitcom Ghosted, a shadowy government organization dedicated to working cases of the supernatural installs a helpful AI to facilitate collaboration and organization. It has a face and a name ('Sam'), and before long he’s popping up unannounced on people’s computer screens to chat, offering relationship advice and wading into inter-office politics. Hooked into every layer of the organization’s data and networks, Sam quickly grows drunk with power and reveals his true goal: world domination.

The AI gone bad is obviously not a new trope, but the idea of embedding one into human cognition adds a new layer of ethical concern.

“Technological developments mean that we are on a path to a world in which it will be possible to decode people’s mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions; where individuals can communicate with others simply by thinking; and where powerful computational systems linked directly to people’s brains facilitate their interactions with the world such that their mental and physical abilities are greatly enhanced,” the researchers wrote in Nature.

Now imagine the implications of being able to hack a BCI, or to 'woo' a self-aware, system-level AI to your cause.

Sure, it all sounds futuristic, but the caution comes at a time when AI is enjoying a chasm-crossing moment. Northeastern University and Gallup just released a fascinating new survey of 3,300 US adults that gauges public perceptions about AI and the impact it will have. It found that most Americans believe AI will fundamentally change the way they work and live in the next decade, with 77% saying it will have a positive impact. That compares with the 23% who believe AI is a threat to their job, another common trope in the pop-culture AI annals.

Work is also underway on the neural networking side that could see a 'thinking' AI arrive sooner rather than later: one that goes beyond data-crunching, analytics and cross-referencing. In August, Microsoft announced that its neural network for recognizing conversational speech had matched the abilities of trained professionals. Meanwhile, using electroencephalogram (EEG) data, researchers at the University of Freiburg in Germany showed that AI can decode “planning-related brain activity,” giving disabled people with limited communication the ability to control robots.
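
To make that last idea concrete, here is a minimal, hypothetical sketch of such a decoding pipeline in PyTorch: a small convolutional network that maps a windowed, multi-channel EEG signal to a discrete movement command. The channel count, window length, class labels and architecture are all assumptions for illustration; this is not the Freiburg group’s actual model.

```python
# Illustrative sketch only (NOT the Freiburg team's published model):
# classify short multi-channel EEG windows into movement-intent classes
# with a small 1-D convolutional network. All shapes and labels are
# hypothetical, chosen just to make the idea runnable.
import torch
import torch.nn as nn

N_CHANNELS = 32   # hypothetical EEG electrode count
WINDOW = 256      # hypothetical samples per decoding window
N_CLASSES = 4     # e.g. left / right / forward / rest (illustrative)

class EEGDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 16, kernel_size=7, padding=3),  # temporal filters
            nn.ReLU(),
            nn.AvgPool1d(4),                                      # downsample in time
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                              # one value per filter
            nn.Flatten(),
            nn.Linear(32, N_CLASSES),                             # class scores
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.net(x)

# Synthetic stand-in for a batch of preprocessed EEG windows.
x = torch.randn(8, N_CHANNELS, WINDOW)
logits = EEGDecoder()(x)
intent = logits.argmax(dim=1)  # predicted command per window
print(intent)
```

In a real BCI, each predicted class would be translated into a robot-control command, which is exactly why the Nature authors worry about what happens when the software layer between brain activity and actuator misbehaves.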

So perhaps now is the time to figure out what the ethical constructs and security best practices should be around programming advanced (read: sentient) AI.

“The possible clinical and societal benefits of neurotechnologies are vast,” the BCI researchers wrote, not putting too fine a point on it. “To reap them, we must guide their development in a way that respects, protects and enables what is best in humanity.”
