Artificial Intelligence Risks Call For Fresh Approaches, Expanded Collaboration

New technologies inevitably introduce new risks for security professionals to mitigate. While this is often challenging, practitioners generally have been able to adjust to the evolving threat landscape, drawing upon frameworks, solid risk management fundamentals and training to help keep their enterprises on secure footing.

Those time-tested approaches remain important, but traditional methodology will not be enough when it comes to the transformative wave of artificial intelligence that is poised to have a profound impact on our professional and personal lives in the near future.

While we can’t be sure of the exact course AI will take in the coming years, its growing capability to imitate intelligent human behavior has the potential to reshape society in ways that are both exhilarating and menacing to consider.

Existing examples of AI such as fraud detection and virtual personal assistants have proven popular and useful, but they are just scratching the surface of the applications for AI that will be deployed in the not-too-distant future. 

Along with driving promising technological advancements, the more potent AI capabilities of the future will pose significant challenges to our physical security, political security and digital security. In the physical security realm, envision swarms of micro-drones reconfigured to harm, rather than help, farmers’ crops.

We can envision even more chilling consequences, such as attacks on self-driving vehicles and damage inflicted by autonomous weapons. How do we know AI would not set such calamitous courses of action in motion? Who is responsible for hard-coding into the AI that no action will be taken resulting in the harm of human beings, and who – or what – will provide the needed assurance that those essential security controls exist and have been adequately tested?

AI also has the potential to wreak havoc on our political systems, creating targeted propaganda and capitalizing on improved capabilities to analyze and predict voter behavior, potentially by manipulating the underlying data. The digital implications of AI are equally vast, including the ability to ease the traditional tradeoff attackers face between the scale of their campaigns and the efficacy of individual attacks.

It is clear that the information security community is not yet ready for what is coming, and, in some cases, what already is here. ISACA digital transformation research shows that the majority of industry professionals are not confident in their organization’s ability to accurately assess the security of systems that are based on AI and machine learning. This is a glaring concern that must be given the highest priority by security leaders going forward, including determining how to harness AI technology as a solution to the very threats that the technology introduces.

However, security practitioners cannot go it alone given the scope of this challenge. With these unprecedented capabilities comes the need for far greater collaboration than we have traditionally seen among enterprises, security researchers, regulators, legislators and other stakeholders who must be involved in the ethical evolution of AI.

For example, policymakers should collaborate closely with technical researchers to investigate, prevent and mitigate potential malicious uses of AI, such as social engineering, data poisoning and political propaganda.
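To make one of those threats concrete, the sketch below illustrates data poisoning in miniature: an attacker who can inject mislabeled records into a training set can change what a model later predicts. The classifier, data points, and labels are all hypothetical, chosen only to demonstrate the idea; real poisoning attacks target far larger datasets and models.

```python
# Toy illustration of training-data poisoning: an attacker injects a
# mislabeled point into the dataset used by a 1-nearest-neighbor
# classifier. All names and data here are hypothetical.

def predict(training_data, x):
    """Classify x with the label of its nearest training point."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = min(training_data, key=lambda point: dist2(point[0], x))
    return nearest[1]

# Clean training set: two well-separated classes.
clean = [
    ((0.0, 0.0), "benign"), ((0.2, 0.1), "benign"),
    ((5.0, 5.0), "malicious"), ((5.2, 4.9), "malicious"),
]

# Poisoned copy: the attacker plants one mislabeled point
# deep inside the benign region.
poisoned = clean + [((0.1, 0.1), "malicious")]

probe = (0.1, 0.1)  # an input that is clearly benign
print(predict(clean, probe))     # prints "benign"
print(predict(poisoned, probe))  # prints "malicious"
```

A single planted record is enough to flip the prediction for inputs near it, which is why provenance and integrity controls on training data belong on the same agenda as the policy collaboration described above.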

Likewise, researchers and AI engineers should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant stakeholders to address harmful applications.

In his article “How the Enlightenment Ends,” former US Secretary of State Henry Kissinger contends that AI is “inherently unstable,” and therefore likely to achieve unintended results that could bring about civilization-altering consequences.

Regardless of whether you share Kissinger’s level of consternation about how AI could recalibrate society and modify human consciousness, there is little question that AI calls for a concerted, more collaborative approach if we are going to accentuate this technology’s tremendous upside and minimize the corresponding risks. The sooner we embrace this challenge, the better our chances of rising to the occasion.
