Navigating the Future with Generative AI: Treating AI as a Trusted Colleague

Imagine you have a new colleague: a bright spark, talented and capable of achieving great things, full of ideas and with the potential to relieve the workload of existing staff members.

As an employer, you’d welcome this person with open arms. What you probably wouldn’t do is give them the keys to the company safe on day one or allow them the autonomy to make decisions affecting the organization’s future.

Well, meet generative AI – your newest colleague. The start of 2023 has seen frenzied discussion about ChatGPT, Bard and other generative AI products, with predictions about how the technology will shape the future coming thick and fast.

In my view, some of the enthusiasm is warranted, while some is hyperbole. Either way, the everyday use of generative AI is growing, and it is going to have an impact on the way organizations operate. Yet as an open letter signed earlier this year by technology luminaries including Steve Wozniak sets out, the uncontrolled introduction of AI carries huge risks for organizations and individuals alike.

Treat AI as an Employee

This is why it’s key to think of AI not only as a disruptive technology, but also as you would any employee hired to help your organization succeed. If you are using AI in your organization, why wouldn’t you expect AI – and the developers behind it – to adhere to your policies and procedures like any other employee?

Think of confidentiality agreements, fair treatment of team members, or cultural expectations: it’s no stretch to say that employees are given guidance on these issues. Likewise, to safeguard the use of AI, having a framework in place can help build digital trust and determine when AI can be used effectively and what other considerations apply.

"Transparency is vital."

Equally, as with any employee, an audit trail is important. AI is not simply code or an algorithm; it is also the people behind the code. This means understanding what the code is doing, what the models are creating, and how different weightings are being applied to the different attributes of the data being fed in. Transparency is vital so that anybody affected by the AI can query an outcome and organizations are crystal clear on how a result has been reached.

A Call for Regulation

Take the financial services industry, for example. Rightly, given the impact it has on people’s lives, it is highly regulated, but it also stands to benefit from AI automating some of its more functional tasks. Yet practitioners need to be able to assure the regulator that they understand how the AI works, what data is being put in, and why they can be certain it leads to the correct conclusion or the right risk rating. All of this needs to be documented, because it is not just about AI being able to deliver, but about being able to tell the story of why the AI is reliable and why its outputs can be trusted. In-built bias is a major concern here – we need confidence that there are processes in place to identify and flag data biases that could lead to a flawed model.

The crux of this is about applying judgement. Yes, AI offers immense opportunity to unlock “a new wave of productivity growth”, as suggested by Microsoft, but organizations that integrate a risk-based approach to its use will maximize its benefit. Especially in their first few months, a new employee isn’t (usually) given unfettered opportunity to shape commercial strategy or independently deal with a reputational issue. They’d be expected to work with a manager and collaborate with more experienced colleagues. The same approach can be taken with AI.

"When we apply AI, we need to test it out."

A case in point is using AI to make decisions about the future of individuals or teams. The risks of incorrect application would far outweigh the potential benefits of expediting the HR process. So, when we apply AI, we need to test it out, consider the answers it provides, then apply our own skill and judgement on top. After all, we are more experienced in the human aspects of our organizations than AI is, however quickly it can learn.

What the AI future looks like remains to be seen, but even the biggest sceptics would find it hard to deny its enormous potential. Behind it, though, we can’t forget that even when the technology can do the job, human beings are still needed to train the system to do it well. And unintentional human biases will naturally feed through.

AI has the potential to be a hugely valuable addition to any organization – but rather than assuming AI is magic, we need to build greater trust in its benefits by acknowledging that it’s a machine, with an army of people and their human limitations behind it.

That’s why introducing frameworks to embed digital trust matters, so AI can be deployed in a credible, authentic, well-executed, and well-governed way. A risk-based approach, involving checks, balances and controls, will pave the way to making AI a truly valuable addition to any team and organization.