Security Community Must Grapple with AI's Ascent

As a computer science student back in the 1980s, I was assigned to write a program that could play the ancient Chinese game of Go.

It was a real challenge, and I remember being humbled by the experience. While my program could play the game, and even beat several of the other programs written for the assignment, the most novice of human players could still defeat it.

The experience underscored how far away we were from building artificial intelligence (AI) systems that could mimic human behavior. In the years that followed, AI fell out of favor, and the quickest way to get funding denied for a proposed IT or development project was to mention “artificial intelligence,” which was perceived to be less reliable and less likely to succeed than conventional programming.

Given that backdrop, I was especially awestruck when AlphaGo, part of Google’s DeepMind project, defeated a Go grandmaster in 2016. Engineering a system that can beat a grandmaster reflects AI mimicking and surpassing human behavior; Go has more possible board positions than there are atoms in the observable universe, so victory cannot be achieved by brute force of calculation alone.

This was a huge moment for AI, since defeating a Go grandmaster was considered by some to be the field’s holy grail. It signaled that AI is advancing much faster than many people realize.

In the past decade, AI has seen a huge resurgence of interest. Credit the researchers and technology professionals who did not give up; their persistence has positioned the technology on the verge of transforming many aspects of society.

Consider the swelling popularity of Apple’s Siri and Amazon’s Alexa, and then realize that even more transformative innovations are coming in the not-too-distant future; I expect self-driving cars to outperform human drivers within the next decade.

These advancements come with enormous challenges for technology professionals. While a Terminator-like doomsday scenario might seem like Hollywood fantasy to some, the reality is that it will be critically important to put controls and safeguards in place to protect us from the unintended consequences of AI technology.

For example, an AI system charged with improving urban traffic might conclude that orchestrating a series of accidents is one strategy for removing vehicles from the road. Tesla CEO Elon Musk has gone as far as to characterize AI as a fundamental threat to humankind, and more than 60% of respondents to ISACA’s State of Cybersecurity research believe AI will increase security risks in the long term.
 
Personally, I take a more optimistic view, as I believe technology innovators are capable of building the safeguards needed to keep AI under control. That doesn’t mean it will be smooth sailing. There is no questioning the potential for AI to go awry; traditional techniques for identifying software vulnerabilities and auditing systems might not work well for systems based on AI. In fact, we likely will need to turn to AI technology itself to carry out these new and critically important tasks.

These are challenges that we as technology professionals must grapple with, and quickly. As if AlphaGo’s 2016 triumph over a human grandmaster wasn’t impressive enough, late last year AlphaGo was overtaken by AlphaGo Zero, which was given only the basic rules of Go and then learned by playing itself millions of times, with no human training data involved. After just a few days, AlphaGo Zero reigned supreme and could be considered the best Go player on the planet.
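For readers who want a concrete feel for what “learning by self-play” means, the minimal Python sketch below illustrates the idea. It is emphatically not DeepMind’s method (AlphaGo Zero pairs deep neural networks with Monte Carlo tree search); it is a toy tabular learner for single-pile Nim, a game simple enough that the optimal strategy is known (always leave your opponent a multiple of four sticks), so we can check that self-play alone rediscovers it. All names and parameters here are illustrative.

    # A minimal sketch of self-play reinforcement learning, in the spirit of
    # (but far simpler than) AlphaGo Zero: the agent is given only the rules
    # of a game and improves by playing against itself. The game is single-pile
    # Nim: take 1-3 sticks per turn; whoever takes the last stick wins.
    import random
    from collections import defaultdict

    random.seed(0)               # make the demo reproducible
    ACTIONS = (1, 2, 3)          # legal moves: take 1, 2, or 3 sticks
    ALPHA, EPSILON = 0.1, 0.1    # learning rate and exploration rate
    Q = defaultdict(float)       # Q[(sticks_left, action)] -> estimated value

    def legal(state):
        return [a for a in ACTIONS if a <= state]

    def choose(state):
        """Epsilon-greedy move selection over the shared value table."""
        moves = legal(state)
        if random.random() < EPSILON:
            return random.choice(moves)
        return max(moves, key=lambda a: Q[(state, a)])

    def self_play_episode(start=21):
        """One game of the agent against itself; both sides update Q."""
        state, history = start, []       # history holds (state, action) per move
        while state > 0:
            action = choose(state)
            history.append((state, action))
            state -= action
        # The player who made the last move took the final stick and won.
        outcome = 1.0
        for s, a in reversed(history):   # walk back through alternating sides
            Q[(s, a)] += ALPHA * (outcome - Q[(s, a)])
            outcome = -outcome           # the losing side's moves lose value

    for _ in range(50_000):
        self_play_episode()

    # After training, the greedy policy should leave a multiple of four:
    for sticks in (5, 6, 7, 10):
        best = max(legal(sticks), key=lambda a: Q[(sticks, a)])
        print(f"{sticks} sticks left -> take {best} (leaves {sticks - best})")

Run it and the greedy policy converges on the multiple-of-four rule within seconds, having never seen a human game. Scale the game up to Go and swap the lookup table for a deep neural network guided by tree search, and you have the general shape of what AlphaGo Zero accomplished.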

We are headed toward a fascinating future when it comes to AI’s imprint on society, and we are getting there faster than many believe. As technology professionals, we face pressing questions that should not be ignored. Is AI the future of hacking? Is AI the future of cybersecurity? Is AI the future of audit?

This much appears certain: AI has come too far to be brushed aside, and bragging rights in Go will not be this technology’s last claim to fame. 
