Artificial Intelligence, Hungering for Human Extinction

With a brand-new Terminator movie set to debut this summer, the perils of artificial intelligence (AI) are bound to roar back into geek-pop culture. But Skynet and its ilk are just sci-fi fantasy tropes – aren’t they?

Researchers at Oxford University say that AIs with malicious intent are actually “a threat to human civilization, or even possibly to all human life.”

In a paper on emerging extinction threats, they outlined the issue. “Extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations.

“And if these motivations do not detail the survival and value of humanity, the intelligence will be driven to construct a world without humans. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts.”

MORE likely?

So, let’s nutshell this. In addition to the super-volcano under Yellowstone, unsecured nuclear stockpiles and asteroids, we need to fear independently thinking computers that aren’t really down with the concept of “that’s against my programming.”

Rise of the machines, indeed.

The Oxford researchers aren’t alone in their assessment. Elon Musk, the Tesla Motors and SpaceX entrepreneur who’s often characterized as a real-life Tony Stark, called AI “our biggest existential threat,” comparing it to a demon that, once summoned, cannot be controlled.

No less a personage than Bill Gates is concerned as well. “First the machines will do a lot of jobs for us and not be super-intelligent. That should be positive if we manage it well,” he said during a Reddit Ask Me Anything Q&A in January. “A few decades after that though, the intelligence is strong enough to be a concern.”

Stephen Hawking has been even more to the point. “The development of full artificial intelligence could spell the end of the human race,” he told the BBC.

The essential (and existential) problem is that we as humans will not be able to compete – AI will evolve and grow its brain capacity at a much faster rate than any other “species” ever has, relegating us to an inferior, second-class status, ripe for enslavement – or genocide.

All of that said, AI remains one of the least understood global challenges, and there is much uncertainty as to the development timeline.

“There is considerable uncertainty on what timescales an AI could be built, if at all, with expert opinion shown to be very unreliable in this domain,” the Oxford report noted. “This uncertainty is bi-directional: AIs could be developed much sooner or much later than expected.”
