It took a six-second recording to give Sarah Ezekiel, who has been living with motor neurone disease (MND) since 2000, her voice back. Like many others with MND, she uses a computer-generated voice to help her communicate. Now, for the first time in over 20 years, she sounds like herself again, using an AI model trained on a six-second recording of her. I defy anyone to find a more personal and inspiring story of how AI can change lives.
This year we are celebrating the 75th anniversary of the Imitation Game, a concept introduced by Alan Turing in 1950. More often called the ‘Turing test’, it pits an AI against a real human in conversation; if another human cannot tell which is which, the AI wins.
More recently, AI has felt like a ‘digital gold rush’. In health, machine learning models are detecting cancers earlier. In finance, AI algorithms are flagging fraudulent transactions in real time. In climate science, AI is helping model complex systems to predict extreme weather events.
AI is accelerating scientific discovery: from protein-folding breakthroughs to novel antibiotic discovery. In cybersecurity, AI is defending critical infrastructure against increasingly sophisticated threats – some of which are now AI-generated.
Yet alongside its promise lies a growing chorus of concern about bias, surveillance, misinformation and existential risk that just won’t go away. Nor should it. The challenge we face is not whether to embrace AI, but how to do so safely and securely – and how to do it with our eyes wide open to the benefits and risks.

The Main Risks of AI
There are arguably three main risks in AI. First, there are the risks that are inherent in the technology itself. AI models are becoming more explainable and better understood all the time, but they still make things up and they still get things wrong.
They do things we cannot predict and, annoyingly, that we cannot always explain. We are used to expecting our code to be reliable and predictable. And even when it is not, we are pretty good at fixing it. AI is not like that. Modern AI systems are a complex function of model design, training data, guardrails, testing and prompt engineering.
Then there are the risks in the way AI is used and adopted. Just like any new system or service, it is important that we – users and developers – understand who is using it and for what purpose. We need to know what it is doing and whether that is what we expected.
We need to know where our data is. We need to know that the AI system is secure. Those things have always been important in our increasingly digital lives. But they are especially important when we are deploying AI that might do things we do not expect. We have to design for that and we have to make sure the wider system can cope with it, very often with a ‘human in the loop’. It is more like hiring a keen and enthusiastic new starter than a worldly-wise experienced hire.
Finally, there are a whole set of perception and consequence risks. Will AI take my job? Will AI invade my privacy? Will it discriminate against me? Is it telling me the truth? Who is responsible if AI-driven cars cause an accident? Those risks are real. And those risks are important.
How to Manage AI Risks
So, we need to manage those risks – but how?
Research and Development
Research into AI safety and security must go hand in hand with AI development. Safety research helps us understand and mitigate risks like bias, system failures, adversarial attacks and misalignment with human values. We have a world-class and growing AI research community, and we need to invest in it.
Transparency
Transparency is equally vital. AI systems must be increasingly explainable so we can all understand how conclusions are reached. AI cannot become a black box of unaccountable decisions. We need to publish and share these insights.
Diverse Expertise
We need to include diverse voices in AI research and development. Interdisciplinary collaboration between technologists, ethicists, sociologists and policy makers will help to ensure that AI works better for everyone.
Public Understanding
Increased public understanding of AI is more important than ever. Our formal education systems need to evolve to teach not just coding and data science, but also the ethical, social and philosophical dimensions of AI. AI learning does not stop at school. We need our curiosity about AI to be a lifelong learning habit.
Building a Community
We need to build a more informed and engaged sense of community around AI. Media and civil society play a vital role in holding AI systems accountable. Investigative journalism has uncovered algorithmic bias and surveillance risk.
Advocacy groups have pushed for fairer AI policies. We need that constructive dialogue and debate across the spectrum to help us learn and use AI safely.
Human-Centred Mitigations
All of this will take leaders. Expert and thought leaders. Leaders in industry, academia, government and civil society. Leaders who can engage and inspire and bring that community of practice together. Leaders who can help us to make sense of complexity and move forward with confidence.
And, of course, we need governance and regulation. But given the pace of change and the complexity, we must be proactive and enabling in our controls. Over-regulating AI will result in shadow AI and that will help no one.
Ironically, all of these mitigations are deeply human-centred. AI will help us to manage the risks in AI. But it is humankind that has the agency to make that work well. The future is still very much ours to decide. It will require courage, humility and a strong dose of curiosity in all of us. But if we get it right, AI will be one of the most powerful forces for good that we have ever known.
It will pass the Turing test and go even further. And it will do it in a way that we can all understand and trust.
