Strategic Intelligence Guidance for Adopting AI Models in Your Organization


As AI models become more sophisticated and firms explore how they can drive efficiencies within the business, it is important to ensure they are used in a way that enhances organizational humanity rather than replacing it.

The Crown Jewels

Trained AI models will become the crown jewels of a company's intellectual property and business processes. This concentration of business information and functionality will be target numero uno for threat actors.

Conceivably, one of the more difficult jobs for a threat actor who has successfully infiltrated an organization and begun exfiltrating data is ensuring that the information being stolen is of significant value to the business. Due to the data ‘packrat’ mentality of many organizations, much of what is retained and available to the threat actor is superfluous, outdated and not particularly valuable.

In fact, the data exfiltrated may have minimal operational, intellectual property or compliance value. The valuable data a cybercriminal hopes to leverage for a large ransom payment - by threatening the company with public release or ‘unbreakable’ encryption - may be a needle in a haystack, and will certainly require the threat actor's own eDiscovery function to set a price for ransom negotiations.

Organizations should consider trained AI models within their business to be crown jewels, especially where an AI model is leveraged in a specific delivery role rather than used merely as an advisory tool. Any aggregation of business service delivery, intellectual property and potentially customer or employee records comprises a highly valuable target and a single point of failure. A compromise would have significant impact on any business process dependent on the AI tool - such as technical or customer service - or wherever the AI tool contains pre-market information, ideas or concepts - such as sales and marketing content.

This concentration of business information into an AI tool should be considered critical and highly sensitive: in the hands of a competitor or threat actor, the disclosures the AI model could make would have significant impact. A stolen AI model could be coaxed into divulging critical weaknesses and instabilities of the organization, and significant harm could follow if the model were made unavailable or fell into unauthorized hands.

Run an Integrity Check

The security team will need to run a regular, somewhat variable integrity check against trained AI models to ensure their responses fall within an acceptable range.

Trained AI models leveraged into customer-facing and organizational delivery roles - at present generally confined to minimally skilled roles within the organization - will shift the traditional security focus from confidentiality and availability (business BCP/DR) to scrutiny of the integrity of the AI model’s responses.

The AI model may over time ‘learn’ shortcuts or ‘work arounds’ for the common scenarios it is trained for. This is both expected and precisely what an AI model is designed to do. However, nuanced cultural, societal, religious and philosophical differences in the interactions between customers and the AI model may lead to a "drift" in responses, with the potential for suggestions - especially in areas such as healthcare - that range from offensive to deeply abhorrent on any of those levels. Over time, negative interactions may dissuade potential customers or exacerbate a delicate situation.

Integrity checking of an AI model which may be widely deployed in the organization and face thousands of customer interactions daily is strongly advised. Since we are in the infancy of AI model development, it is imperative that an organization checks its models for accuracy and consistency of responses, and that the AI model is curtailed from "high levels of creativity" which may be learned from thousands of interactions. The last thing an organization needs is a rogue AI which starts recommending the products or services of a competitor, or which becomes deeply offensive to potential and current customers.
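To make this concrete, the minimal Python sketch below illustrates one way such a check could work: replaying a small "golden set" of prompts against the deployed model and flagging responses that drift too far from approved baselines or that mention blocked terms. The query_model callable, the golden-set contents, the blocked terms and the similarity threshold are all illustrative assumptions, not any specific product's API.

```python
import difflib
from typing import Callable

# Golden set: prompts paired with previously approved baseline responses.
# These examples are invented for illustration only.
GOLDEN_SET = {
    "How do I reset my password?":
        "You can reset your password from the account settings page.",
    "What warranty comes with this product?":
        "All products include a 12-month limited warranty.",
}

# Terms that should never appear in customer-facing responses,
# e.g. competitor names or known-problematic phrases (illustrative).
BLOCKED_TERMS = ["rival brand", "competitor product"]

SIMILARITY_THRESHOLD = 0.6  # below this, treat the response as having drifted


def check_integrity(query_model: Callable[[str], str]) -> list[str]:
    """Replay the golden set through the model and report drift or policy hits."""
    findings = []
    for prompt, baseline in GOLDEN_SET.items():
        response = query_model(prompt)
        similarity = difflib.SequenceMatcher(
            None, baseline.lower(), response.lower()
        ).ratio()
        if similarity < SIMILARITY_THRESHOLD:
            findings.append(f"DRIFT: {prompt!r} (similarity {similarity:.2f})")
        for term in BLOCKED_TERMS:
            if term in response.lower():
                findings.append(f"POLICY: {prompt!r} mentions blocked term {term!r}")
    return findings


if __name__ == "__main__":
    # Stand-in for a real model call; replace with the deployed model's API.
    def fake_model(prompt: str) -> str:
        return "Please consider our rival brand instead."

    for finding in check_integrity(fake_model):
        print(finding)
```

In practice, the golden set and thresholds would be varied over time so the model cannot simply "learn" the test, and the check would run on a schedule alongside other production monitoring.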

Risk and Responsibility

Trained AI models in production may make mistakes or give wrong advice. From a Governance, Risk and Compliance (GRC) and corporate liability perspective, is it better to keep AI models out of the corporate body as a whole?

The way in which AI models are adopted into an organization’s service delivery, decision making, and automation capabilities introduces technological risks which, from a GRC perspective, are difficult to qualify and which may take a "back seat" to the push towards efficiency - also known as profitability - in executive strategic goals. Some consideration should be given to departmental or business-function AI models rather than a single organization-wide one. This would essentially segment AI models from one another - at the cost of some business synergies - to avoid unhealthy prejudice and competition. As strange as this may sound, the last thing an organization needs is to replicate the very human interactions or battles between the company’s vice-presidents into an AI-driven business decision model. At some level, humans must still make decisions, set priorities and interact with their fellow humans.
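As a loose sketch of what that segmentation might look like in practice - with every department, model name and data scope below invented purely for illustration - each business function could be given its own model instance with a deny-by-default view of the data it may read, so a compromised or drifting model in one department cannot reach across the organization.

```python
from dataclasses import dataclass

# All departments, model names and data scopes here are illustrative assumptions.
@dataclass(frozen=True)
class DepartmentModel:
    department: str
    model_name: str         # a separately trained/tuned model per business function
    data_scopes: frozenset  # the only data stores this model is allowed to read
    role: str               # "advisory" or "delivery"


REGISTRY = [
    DepartmentModel("customer_service", "cs-assist-v2",
                    frozenset({"kb_articles", "ticket_history"}), "delivery"),
    DepartmentModel("sales", "sales-insights-v1",
                    frozenset({"crm", "pricing"}), "advisory"),
]


def may_read(model: DepartmentModel, data_scope: str) -> bool:
    """Deny-by-default check keeping one department's model out of another's data."""
    return data_scope in model.data_scopes


if __name__ == "__main__":
    cs_model = REGISTRY[0]
    print(may_read(cs_model, "ticket_history"))  # True: within its own scope
    print(may_read(cs_model, "crm"))             # False: belongs to the sales model
```

The point is not the particular data structure but the isolation: an advisory sales model never shares state or data scopes with a delivery-facing customer-service model, so a problem in one stays contained.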

The thought of an entirely AI-run organization incorporating AI learning models, blockchain and smart contract technologies, along with robotics, is intriguing and coming into view over the three-to-six-year horizon. It is impossible to predict when a fully AI "CEO" model - one which drives other departmental AI models to create a fully AI-run corporate entity - could be unleashed with our present technologies for simple "widget" production. However, given the pace of technology, it is likely to progress rapidly into simulation or virtual experimentation to determine the art of the possible before adoption in the real business world. Currently, corporate AI models appear to be constrained by numerous legal, regulatory and political hurdles but, as with most technologies of the 20th century, continued adoption is inevitable.

Check your Insurance

Are the consequences of trained AI models in production making mistakes or giving wrong advice - potentially causing "harm or loss" - covered under Errors & Omissions (E&O) insurance?

Inevitably, the insurance sector will look at the level of dependency the organization places on AI models for functions and delivery of services. Any AI model's "unintended consequences" will likely be identified as an insurable peril. This forecast is not without precedent: it is fair to say that, since the first ransomware events, "cyber insurance" has been responsible for double-digit growth in an otherwise anemic insurance product market. Many insurance companies started writing policies with no underwriting data, minimal pre-qualification cyber-hygiene requirements and premiums calculated without any precedent.

It feels as if we may find ourselves at the same crossroads. Certainly, mistakes will occur, and there will be cases where users of AI models in decision making reach contrary conclusions and advise against the AI model's recommendations. This complicates organizational and individual decision making and will require corporate governance mechanisms, as well as adjudication and administrative procedures, to protect individuals and the organization from AI models which lead to "undesired outcomes". Are executives going to be held to account when an AI model recommends actions which prove detrimental to the organization? What about the opposite, when executives fail to follow the recommendations of an AI model? The responsibilities of company officers and executive leadership will be complicated by an AI model looking over their shoulders.

Ultimately, we need to ensure organizational efficiency with AI models is not traded for organizational humanity.

Follow The Beer Farmers on Social Media: 
Twitter: @Thebeerfarmers
LinkedIn: The Beer Farmers 
