#IPEXPO: Prof Nick Bostrom on Controlling 'Super Intelligent' AI

How can we get something that is far smarter than us to do what we want, but still be under our control?

This was the question posed by IP Expo Europe opening keynote speaker Professor Nick Bostrom, who discussed the impact artificial intelligence (AI) will have on humanity in the future. According to Bostrom, whilst superintelligent AI has the potential to change the world for the better, it could also slip beyond our control for the worse if steps are not taken to prevent it.

The goal of value alignment for AI is to “build something that wants the same as we want”, but if some parameters of human values are omitted from the objective, the optimal policy often sets those parameters to extreme values, which can have damaging ramifications.
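A minimal sketch of the failure mode Bostrom is describing (this example is illustrative and not from the keynote; the `BUDGET`, `true_value` and `proxy_reward` names are hypothetical): an agent allocates a fixed budget of effort across three things humans care about, but its reward function only mentions the first. The policy that is optimal under the proxy drives the omitted dimensions to an extreme.

```python
BUDGET = 10  # total units of effort the agent can allocate

def true_value(a, b, c):
    # What we actually want: balanced progress on all three dimensions.
    return min(a, b, c)

def proxy_reward(a, b, c):
    # What the agent was told to maximise: dimensions b and c were omitted.
    return a

# Enumerate every whole-unit allocation of the budget and pick the one
# that is optimal under the proxy reward.
best = max(
    ((a, b, BUDGET - a - b)
     for a in range(BUDGET + 1)
     for b in range(BUDGET + 1 - a)),
    key=lambda alloc: proxy_reward(*alloc),
)

print("proxy-optimal allocation:", best)                     # (10, 0, 0)
print("true value of that allocation:", true_value(*best))   # 0
```

The allocation that maximises the proxy, (10, 0, 0), is the worst possible outcome under the true value function: the two omitted parameters are pushed to an extreme (here, zero) because nothing in the objective says otherwise.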

The key is “looking for control methods that are scalable as our AI systems become smarter and smarter, but there are a bunch of control methods that don’t take scale into account,” he added.

If control methods are not scalable and AI becomes superintelligent, it might seize control for itself and start to manipulate things we do not want interfered with, such as our applications. We must not assume that AI systems are incapable of strategic behavior of the kind humans display.

Tackling the issue of artificial morality is also key, but to do that we first need to solve the technical control questions.

“We don’t have all the solutions to that yet,” he concluded, but it is encouraging that more investment is being made in this area, and that a growing community is thinking about how to control machine intelligence so that it becomes the best thing ever to happen to humanity, and not the worst.
