AI Adoption Surges But Security Awareness Lags Behind

A new ExtraHop survey of more than 1,200 global security and IT leaders has provided fresh insights into how organizations adopt and manage generative AI tools such as ChatGPT and Google Bard.

Security is reportedly not the primary concern for organizations using these tools; respondents are more worried about inaccurate responses (40%) than about the exposure of customer and employee personally identifiable information (PII) (36%), the disclosure of trade secrets (33%) or financial loss (25%).

Basic security practices are lacking, however: 82% of respondents are confident in their security stacks, yet fewer than half invest in technology to monitor generative AI use, exposing them to data loss risks. Only 46% have established security policies governing data sharing.
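Monitoring does not have to be elaborate to begin closing this gap. The Python sketch below illustrates one simple starting point: scanning web proxy logs for traffic to generative AI services. The log format, column names and domain watchlist here are illustrative assumptions, not part of the ExtraHop report.

```python
# A minimal sketch of monitoring generative AI use via web proxy logs.
# Assumes a CSV log with "user" and "dest_host" columns; the domain
# watchlist below is a hypothetical example, not an exhaustive list.
import csv

GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com"}

def flag_genai_traffic(log_path: str) -> list[dict]:
    """Return log rows whose destination host is on the watchlist."""
    with open(log_path, newline="") as f:
        return [row for row in csv.DictReader(f)
                if row.get("dest_host", "").lower() in GENAI_DOMAINS]

if __name__ == "__main__":
    for hit in flag_genai_traffic("proxy_log.csv"):
        print(f"{hit['user']} contacted {hit['dest_host']}")
```

A dedicated network monitoring tool would go much further, but even a periodic log sweep like this gives security teams basic visibility into who is using which services.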

“Data privacy is a pivotal concern. Interaction with such models can inadvertently lead to the sharing or exposure of sensitive information. This necessitates robust data handling and processing frameworks to prevent data leaks and ensure privacy,” warned Craig Jones, vice president of security operations at Ontinue.

“Organizations need to rigorously assess and control how LLMs [large language models] handle data, ensuring alignment with GDPR, HIPAA, CCPA, etc. This involves employing strong encryption, consent mechanisms and data anonymization techniques, alongside regular audits and updates to data handling practices to remain compliant.”
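As a concrete illustration of the anonymization techniques Jones mentions, the sketch below redacts common PII patterns from a prompt before it is sent to an external LLM. The regex patterns are deliberately incomplete examples chosen for illustration; a production system would rely on a dedicated PII detection service rather than hand-rolled patterns.

```python
# A minimal sketch of prompt anonymization before data leaves the
# organization. The patterns are illustrative only and will miss many
# forms of PII; this is not a compliance-grade solution.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),        # likely payment card numbers
]

def redact_pii(prompt: str) -> str:
    """Replace common PII patterns with placeholder tokens."""
    for pattern, token in REDACTIONS:
        prompt = pattern.sub(token, prompt)
    return prompt

print(redact_pii("Refund jane.doe@example.com, card 4111 1111 1111 1111"))
# -> "Refund [EMAIL], card [CARD]"
```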

Additionally, the ExtraHop report shows that despite 32% of organizations banning generative AI tools, only 5% report zero usage, indicating that employees find ways around these bans and that retrieving any data shared in the process is challenging.

There’s a strong desire for external guidance, with 74% planning to invest in generative AI protection and 60% believing that governments should establish clear AI regulations for businesses.

More generally, the ExtraHop report shows that use of generative AI tools is growing. To ensure safe and productive usage, the firm recommended clear policies, monitoring tools and employee training.

“The use of generative AI tools is ultimately still in its infancy, and there are still many questions that need to be addressed to help ensure data privacy is respected and organizations can remain compliant,” commented John Allen, vice president of cyber risk & compliance at Darktrace.

“We all have a role to play in better understanding the potential risks and ensuring that the right guardrails and policies are put in place to protect privacy and keep data secure.”
