How to Navigate the Risks of Generative AI

Consumers are increasingly using generative AI tools in their day-to-day lives, from drafting meal plans to seeking medical advice to generating videos. While these tools offer immense potential, recent findings from KPMG highlight a concerning trend: consumers are willingly inputting sensitive information into these AI systems, potentially exposing themselves to security and fraud risks.

This could have negative consequences not only for individuals but also for businesses and the broader cybersecurity landscape.

Consumers Throw Caution to the Wind – Phishing 2.0

KPMG's poll revealed that a significant proportion of consumers engage in risky behavior when using generative AI tools, entering sensitive information without considering the repercussions.

Over a third (38%) of respondents admitted to inputting financial information, while 27% disclosed personal details such as their address or date of birth. This information serves as a goldmine for cybercriminals, who can exploit it for identity theft, financial fraud and other malicious activities.

Furthermore, 42% of individuals acknowledged entering work-related information into these AI tools, raising concerns for businesses. Confidential data entered into these tools may be retained and used to train future versions of the model, posing a significant threat to corporate security.

Generative AI tools are already being used by cybercriminals to strengthen their attack strategies. By offering up confidential information so readily, consumers are adding fuel to the fire. Combined with emerging threats such as deepfake impersonations, this data can be leveraged to create bespoke attack campaigns, the dream of the digital confidence trickster.

A GenAI Knowledge Gap

Despite the complexity of generative AI tools, a majority of users (59%) rated their knowledge of the technology as excellent or good. However, this self-assessed understanding does not appear to translate into a firm grasp of the dangers of using these tools.

While some consumers exhibited a worrying lack of caution, others expressed valid concerns about the safety of generative AI. Over half (52%) worried about the potential for criminal misuse, while 51% feared the misappropriation of the information they input.

Additionally, 50% were apprehensive about third-party data usage, and 47% expressed concerns about the sharing of confidential information. It is at least encouraging that some individuals are wary of the dangers of this emerging technology.

Implications for Businesses

These findings underscore the urgent need for individuals and businesses to exercise caution when using generative AI tools. Consumers must be made aware of the inherent risks of inputting sensitive information into these systems.

It is commonplace for the terms and conditions of AI tools and systems to pass liability for error or failure to the purchaser or user. This risk should be front of mind for both businesses and end-users.

For businesses, the implications are equally significant. Among other risks, organizations must recognize the potential exposure of confidential data through employee use of generative AI tools.

Implementing robust security measures, educating employees about the dangers, and establishing clear policies for handling sensitive information are crucial steps in mitigating these risks. Those organizations with strong cybersecurity foundations will be better positioned to address the evolving threats associated with technologies like generative AI.

The technology firms that create these models also have a critical role to play in deterring risky behavior by users. While warnings against inputting confidential information are a step in the right direction, the findings suggest that stronger procedures are necessary.

Enhancing user education, implementing strong data protection mechanisms and fostering a culture of cybersecurity awareness are essential actions for technology companies to take.
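
To make this concrete, the Python snippet below is a minimal, purely illustrative sketch of one such data protection mechanism: a pre-submission filter that redacts likely-sensitive details before a prompt reaches a model. The patterns, placeholder tags and `redact` function are this sketch's own assumptions, not a description of any vendor's actual safeguards.

```python
import re

# Illustrative patterns only; they will both over- and under-match.
# Production data protection tooling is far more sophisticated.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "DATE_OF_BIRTH": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings with placeholder tags
    before the prompt is sent on to a generative AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    text = "Email me at jane@example.com, card 4111 1111 1111 1111, DOB 12/04/1985."
    print(redact(text))
    # Email me at [EMAIL REDACTED], card [CARD_NUMBER REDACTED], DOB [DATE_OF_BIRTH REDACTED].
```

Pattern matching of this kind only catches the most obvious identifiers, which is why the findings above argue for layered safeguards and user education rather than warnings alone.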

The Role of Regulation

The poll also sheds light on public opinion regarding the regulation of generative AI. Financial services (25%), healthcare (18%) and government (15%) emerged as the sectors perceived to require the strictest regulation. However, a significant 20% of respondents believe that all sectors should be subject to equal regulation.

With the UK government calling for regulators to publish their strategic approaches to AI by the end of April, it will be interesting to see how their attitudes differ, or whether there are synergies across sectors. Given that the EU's AI Act has now been passed, each sectoral approach is expected to incorporate elements of these new rules.

That said, guaranteeing the safety of generative AI cannot be left entirely to the government and regulators. Consumers must take some responsibility for their actions, just as they are responsible for their own safety behind the wheel of a car.

Generative AI is relatively new, and its applications and risks are still being discovered. However, this research should serve as a wake-up call, highlighting the urgent need for increased vigilance and enhanced security measures in the realm of generative AI.

As these tools become more prevalent, it is imperative to adopt a more holistic approach, with individuals, businesses and technology firms working together to mitigate the threats and ensure the safe and responsible use of generative AI.

By raising awareness, implementing robust regulations, and promoting cybersecurity best practices, we can navigate the transformative power of generative AI while safeguarding sensitive information and preserving digital security.
