The Impact of Conversational AI on the GRC Workforce: Training Our Next-Generation Workers


The world has been changing before our eyes. The pandemic, the opening salvo of our entrance into the Fourth Industrial Revolution, triggered a wave of disruptive transformation of which we are only scratching the surface.

The integration of newly instrumented physical, biological and digital worlds has given rise to an unprecedented number of 'big bang disruptions,' the breadth and depth of which will herald the transformation of entire systems, creating and destroying product lines, markets and ecosystems. We are also entering the third wave of artificial intelligence (AI), an era in which we imbue virtual assistants with human perception capabilities, enabling them to deliver personalized experiences that span multiple worlds.

We have entered the 'Age of Intelligent Ecosystem.'

Traditionally, attackers have outperformed the cybersecurity and governance, risk and compliance (GRC) industries in their ability to adapt, leverage and exploit disruption. Their speed, agility and creativity have been defining factors giving them a significant edge in the global fight to protect and defend the world's most critical systems.

Industries, especially the GRC sector, have often struggled to keep pace with leveraging and integrating emergent technologies into their practices. Yes, research groups and committees have been established and have published well-defined position papers. However, entry-level curriculums have not yet mastered the use of extended reality, the Internet of Things or AI in creating the learning experiences necessary to operate in this new world.

The GRC for Intelligent Ecosystem (GRCIE) foundation, affectionately pronounced 'Gracie,' is an academy that specializes in teaching and training cybersecurity and GRC analysts using emergent technologies. By working with organizations such as ISACA's Emerging Trends Group and the Cloud Security Alliance, we have been able to leverage research that informs how we train our students using a VR-dominant curriculum. We also leverage frameworks from the Information Commissioner's Office (ICO) to teach students the fundamentals of auditing and assessing explainability and algorithms at the beginning of their careers.

Yet we were surprised by the speed of adoption we are experiencing with the rise of conversational AI. 

In early February 2023, we sat down as a team to determine how we would tackle the viral nature in which conversational AI and large language models have become part of the lexicon. 

We wanted to explore how systems that leverage large language models, such as ChatGPT, would alter the information security and GRC workforce. What roles will be created, and which ones will be automated away? What are the skills that will be required to fulfill these new jobs? How do we train people to work alongside AI systems? What are the unseen risks that these technologies expose?

So, one Sunday morning, a team of security analysts – Amanda Lyking, Nicholas Smith, Todd Williams, senior controls analyst Rashida Thomas and I – gathered to discuss our learnings from trends we are witnessing and experiencing. 

What is a Large Language Model?

A large language model is a statistical tool that ingests massive amounts of text to learn the probability distribution over sequences of words, allowing it to predict the next word in a sentence. It can ingest, summarize and translate text and, by repeatedly predicting the next word, produce sentences that mimic how humans naturally speak and write conversationally.
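To make the idea of 'predicting the probability distribution over sequences of words' concrete, here is a minimal, purely illustrative Python sketch: a toy bigram model built from an invented three-sentence corpus. Real large language models such as GPT-3 learn these distributions with neural networks across billions of parameters, but the underlying notion of estimating a next-word distribution and choosing a likely word is the same.

```python
# Toy illustration only: a bigram model that estimates the probability
# distribution over the next word from counts in a tiny, invented corpus.
from collections import Counter, defaultdict

corpus = (
    "the auditor reviews the system . "
    "the auditor documents the decision . "
    "the analyst reviews the decision ."
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_distribution(prev):
    """Return P(next word | previous word) as a dict of probabilities."""
    counts = following[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

dist = next_word_distribution("the")
print(dist)                     # e.g. {'auditor': 0.33, 'system': 0.17, 'decision': 0.33, 'analyst': 0.17}
print(max(dist, key=dist.get))  # the single most likely next word
```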

For example, the large language model used by San Francisco-based OpenAI's ChatGPT is the Generative Pre-trained Transformer (GPT-3). GPT-3 has been trained on enormous data sets ingested from various sources, including the public internet. It is one of the largest-scale and most powerful large language models to date, with 175 billion parameters, and according to Wikipedia, it "forms part of a trend in natural language processing systems of pre-trained language representations."

GPT-3 and other similar language models are refined using a form of training called reinforcement learning from human feedback (RLHF). This method trains the system on human feedback and intuition, enabling it to produce more human-like responses. These large language models form part of the foundation for chatbots and conversational AI.

"Formal AI governance must be a part of the entry-level GRC curriculum"

The Difference Between Conversational AI Systems and Chatbots

While chatbots and conversational AI systems may share similarities, the difference lies in their purpose and method of engagement. At their core, both are conversational in that they try to understand and engage with humans contextually. However, conversational AI systems are platforms built with tools, chatbots and even virtual assistants that enable them to interact with humans and mimic and carry out conversational experiences. Chatbots are systems that interact with people but may or may not use conversational AI or machine learning. That does not make them any less powerful; we have seen systems such as ChatGPT successfully used to generate entire training curriculums, pass employer coding exams, write poetry and even compose music.

So how may this rise in conversational AI systems impact the GRC and cybersecurity workforces?

Each of GRCIE's analysts was tasked with analyzing a different aspect of the organizational use of conversational systems, looking specifically for insights that would require a change in the way we design and deliver our curriculum.

Once we synthesized our analysis, we found a few key themes.  

Formal AI governance must be a part of the entry-level GRC curriculum, including a focus on conversational systems, their information supply chains and the psychological safety of the humans involved in the training of AI systems.

Each of the analysts has spent considerable time advocating for social justice and was interested in how these systems are trained, as they often reflect the bias of those involved in the system's design and training, or the bias embedded in the mechanisms by which the ingested information was originally collected or synthesized.

Given the viral nature of ChatGPT, it was clear that many organizations will have to reckon with its use as part of operations, especially when its outputs make their way into critical decision-making processes.

Amanda Lyking, a security analyst and GRCIE alumna, stated that AI governance needs to be at the forefront of all GRC workers' core training, which must include a repeatable framework for assessing the explainability of AI systems used in their organizations. Explainable AI encompasses all the processes, methods and documentation necessary to ensure that the outputs and results produced by machine learning algorithms can be understood by most people. Because GRC controls analysts, auditors or compliance personnel may be responsible for understanding the impact and risk of interfacing with conversational systems, they will need frameworks for understanding and assessing the explainability of those systems.

Nicholas Brown, a security analyst and GRCIE alumnus, surmised that if a company intends to use, for example, ChatGPT as part of a critical business process, then an explainability assessment could be conducted as part of a business impact assessment.

Foundationally, six types of explainability align with four fundamental principles:  

Four Principles of Explainability 

  • Transparency: What are our documented processes around the use of conversational AI-enabled decisions, including both when and why they are used?
  • Accountability: Who is responsible for managing and overseeing the explainability requirements around the organization's use of the conversational system?
  • Context: When the organization plans to use conversational AI to help make decisions that impact or influence critical processes, how are we considering the setting in which we will do this and the potential impact of the decisions we and the system will make?
  • Reflect on Impacts: What are the consequences in areas such as the physical, emotional and sociological effects on free will, privacy and the workforce?

"Conversational interfaces into systems, such as GRC platforms and SIEMs, would enable better and faster insights into the state of security"

The Six Types of Explainability

While the six types of explainability apply to all forms of AI, in our context the GRC analyst would evaluate them through the lens of decisions made involving a conversational system that may impact a critical process, users, systems or information. A simple checklist sketch combining the principles and explanation types follows the list below.

  • Rationale explanation: The reasons that led to a decision, delivered in an accessible and non-technical way.
  • Responsibility explanation: Who is involved in developing, managing and implementing decisions that involve a conversational AI system, and who can be contacted for a human-only review of a decision?
  • Data explanation: What information was used in a specific decision, and how was it used?
  • Fairness explanation: What steps were taken across the design and use of a conversational AI system to ensure that the decisions it supports are unbiased and fair, and has an individual been treated fairly?
  • Safety and performance explanation: What steps were taken across the design and use of a conversational AI system to maximize the accuracy, reliability, security and performance of its decisions and behaviors?
  • Impact explanation: How have we considered and monitored the impacts that our use of the conversational system and its decisions may have on our workforce or even broader society? This includes understanding the psychological safety of the human supply chain involved in training these systems – for example, workers exposed to psychologically unsafe content to train an algorithm to be unbiased.
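As an illustration only (not an ICO artifact, nor GRCIE's actual assessment methodology), the sketch below shows one way a GRC team might capture the four principles and six explanation types as a structured checklist that travels with a business impact assessment. Every class, field and example value here is hypothetical.

```python
# Hypothetical sketch: a structured explainability checklist that could be
# attached to a business impact assessment (BIA). All names are invented.
from dataclasses import dataclass, field

PRINCIPLES = ["transparency", "accountability", "context", "reflect_on_impacts"]
EXPLANATION_TYPES = [
    "rationale", "responsibility", "data",
    "fairness", "safety_and_performance", "impact",
]

@dataclass
class ExplainabilityAssessment:
    system_name: str        # the conversational AI system under review
    business_process: str   # the critical process the system supports
    owner: str              # who is accountable for explainability
    principles: dict = field(default_factory=lambda: {p: "" for p in PRINCIPLES})
    explanations: dict = field(default_factory=lambda: {t: "" for t in EXPLANATION_TYPES})

    def unanswered(self):
        """Return every principle or explanation type still missing evidence."""
        gaps = [p for p, note in self.principles.items() if not note.strip()]
        gaps += [t for t, note in self.explanations.items() if not note.strip()]
        return gaps

# Example usage inside a BIA workflow:
assessment = ExplainabilityAssessment(
    system_name="ChatGPT (vendor-hosted)",
    business_process="Third-party risk triage",
    owner="GRC controls analyst",
)
assessment.principles["transparency"] = "Prompt/response logs retained for 12 months."
print(assessment.unanswered())  # everything the analyst still needs to evidence
```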

While our most significant 'aha' moment was how early in a GRC worker's career they must be trained to audit and analyze explainability in AI systems, there were a few other vital learnings.

GRC Workers Should Be Formally Taught How to Work Alongside Conversational Systems 

Rashida Thomas, an experienced technology and risk leader, noted that interfacing with and using conversational systems might help improve our critical thinking, but it requires the school to be thoughtful about how it trains students to ask the right questions of a conversational system and to determine the accuracy of the information being returned.

For example, ChatGPT first made its way organically into GRCIE through our risk assessment training. The students were being trained to conduct application-specific risk assessments, and one went straight to ChatGPT to research the applications, the companies, the probability of breaches, etc. Soon after, we discovered students using ChatGPT to help generate practice exam questions for flashcards. ChatGPT had caught fire. The problem was that some of the responses from ChatGPT were not entirely accurate or relevant to the questions posed. This was partly due to the way the questions were formulated and the system's inability to derive the goal of the questioner. The ability to iteratively ask the right questions and determine the validity of the responses must be a formally trained skill.

Ultimately, these real-world examples solidified the need to train our students to interface with conversational systems and document their use of the system and the decisions they made using it. Their decisions based on interactions with a conversational system may impact the organization's overall risk.
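One lightweight way to support that documentation habit is an append-only log of each prompt, the response received, and the decision the analyst ultimately made. The sketch below is a hypothetical example of such a log; the file name, fields and example values are all invented for illustration and do not represent GRCIE's tooling.

```python
# Hypothetical sketch of an append-only decision log for conversational-AI
# interactions. File name and field names are invented for illustration.
import json
from datetime import datetime, timezone

LOG_FILE = "conversational_ai_decision_log.jsonl"

def log_interaction(prompt, response, decision, validated_by):
    """Append one prompt/response pair and the resulting human decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response_excerpt": response[:500],   # keep the log compact
        "decision": decision,                 # what the analyst actually did
        "validated_by": validated_by,         # who checked the output for accuracy
    }
    with open(LOG_FILE, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

log_interaction(
    prompt="Summarize publicly reported breaches involving Vendor X since 2020.",
    response="(model output would appear here)",
    decision="Raised vendor risk rating pending manual verification of sources.",
    validated_by="GRC analyst",
)
```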

Teaching Cybersecurity and GRC Workers How to Interface with Conversational Systems May Reduce Burnout 

One of our team members, Todd Williams, is a formally trained social worker who comes from an industry rife with burnout, with turnover of upwards of 33-34% every year. He noted that GRC, cybersecurity and social services workers are all required to make split-second decisions, with heavy documentation requirements, where the implications of action or inaction can be costly. And all three industries operate heavily in the unknown. The stress across the cybersecurity and GRC industries is real.

According to Mimecast's State of Ransomware Readiness 2022 report, one-third of cybersecurity teams experience increased absences due to burnout following an attack, and one-third are considering leaving their role in the next two years due to stress or burnout. The team believes that conversational interfaces into systems such as GRC platforms and SIEMs would enable better and faster insights into the state of security around those systems. This could have real-world impact, reducing containment times and limiting burnout in response teams. Teams could also leverage conversational systems for communications guidance – for example, how to present information customized to the audience, such as communicating security risk to front-line supply chain management versus the board of directors.

By training cybersecurity and GRC workers on how to interact with conversational systems, how to ask the right questions and how to identify bias and inaccuracies, there is a real possibility of an emotional benefit in minimizing stress.

AI systems are no longer just emergent technology, and we cannot hope that the use of these technologies is years away, off on the horizon. They are being used right now in ways we cannot predict. Because the workforce of the future is expected to protect and defend AI systems, we need to teach the students of today how to work alongside them.
