NCSC Calms Fears Over ChatGPT Threat

A leading UK security agency has claimed there’s a low risk of ChatGPT and tools like it effectively democratizing cybercrime for the masses, but it warned that they could be useful for those with “high technical capabilities.”

National Cyber Security Centre (NCSC) tech director for platforms research, David C, and tech director for data science research, Paul J, acknowledged fears over the security implications of large language models (LLMs) like ChatGPT.

Some security experts have suggested that the tool could lower the barrier to entry for less technically capable threat actors by providing information on how to design ransomware and other threats.

Read more on ChatGPT threats: Experts Warn ChatGPT Could Democratize Cybercrime.

However, the NCSC argued that LLMs are likely to be more useful for saving hacking experts time than teaching novices how to carry out sophisticated attacks.

“There is a risk that criminals might use LLMs to help with cyber-attacks beyond their current capabilities, in particular once an attacker has accessed a network. For example, if an attacker is struggling to escalate privileges or find data, they might ask an LLM and receive an answer that’s not unlike a search engine result, but with more context,” the agency claimed.

“Current LLMs provide convincing-sounding answers that may only be partially correct, particularly as the topic gets more niche. These answers might help criminals with attacks they couldn’t otherwise execute, or they might suggest actions that hasten the detection of the criminal.”

LLMs could also be deployed to help technically proficient threat actors with poor linguistic skills to craft more convincing phishing emails in multiple languages, it warned.

However, the NCSC added that there is currently “a low risk of a lesser skilled attacker writing highly capable malware.”

The agency also warned of potential privacy issues arising from queries submitted by corporate users, which are stored and made available for the LLM provider or its partners to view.

“A question might be sensitive because of data included in the query, or because [of] who is asking the question (and when),” it said.

“Examples of the latter might be if a CEO is discovered to have asked ‘how best to lay off an employee?,’ or somebody asking revealing health or relationship questions. Also bear in mind aggregation of information across multiple queries using the same login.”

Queries stored online, including potentially sensitive personal information, might be hacked or accidentally leaked, the NCSC added.

As a result, terms of use and privacy policies need to be “thoroughly understood” before using LLMs, it argued.
