
AI Researcher: Let's Not Create Robot Overlords

Just as a new Terminator film gets ready to hit theaters, a new warning on artificial intelligence is here: The great robot overlords are coming to enslave us.

Or at least, enfeeble us.

That’s the word from AI researcher Stuart Russell, professor of Computer Science at the University of California, Berkeley, and co-author of Artificial Intelligence: A Modern Approach.

In an interview with Science, Russell warns that what starts out as good, helpful, interesting tech can evolve into something that threatens our very existence, if left unchecked, or developed without stringent ethical research parameters.

He compares the dangers to nuclear fission.

“From the beginning, the primary interest in nuclear technology was the inexhaustible supply of energy... I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence,” he said. “Both seem wonderful until one thinks of the possible risks.”

Specifically, a top risk is that super-intelligent AI systems could be developed with an eye to making them self-aware, capable of learning and contextual inference, and able to evolve over time. All of this would give AI systems the tools to outstrip whatever purpose they were originally intended for.

The essential (and existential) problem is that we as humans will not be able to compete – AI could in those circumstances grow its brain capacity at a much faster rate than any other “species” has ever done, relegating us to an inferior, second-class status, ripe for enslavement – or genocide.

Or, in a less extreme scenario, we could become so dependent on the technology that we couldn’t survive “off the grid” at all. That would leave us, as a species, weak and, Darwinistically speaking, ill-suited to the planet’s ecosystem.

As we have covered before, AI dangers have the attention of a range of big names—Elon Musk, Bill Gates, Stephen Hawking.

“First the machines will do a lot of jobs for us and not be super-intelligent. That should be positive if we manage it well,” Gates said during a Reddit Ask Me Anything Q&A in January. “A few decades after that though, the intelligence is strong enough to be a concern.”

To head all of this off at the pass, “It is timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI,” Russell and other experts in the field explained in an open letter at the beginning of the year. “Such considerations…constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”

As in, not turn us into helpless drones.

Musk, the Tesla Motors and SpaceX entrepreneur who’s often characterized as a real-life Tony Stark, called AI “our biggest existential threat,” comparing it to a demon that, once summoned, cannot be controlled. And Hawking has been even more to the point. “The development of full artificial intelligence could spell the end of the human race,” he told the BBC.

All of that said, AI remains one of the least understood global challenges, and there is much uncertainty as to the development timeline. Russell and others are urging caution now—before things get out of hand.
