Infostealers Spread Via AI-Generated YouTube Videos


Cybersecurity researchers have observed a 200–300% month-on-month increase in YouTube videos containing links to information stealer (infostealer) malware in their descriptions. A growing number of these videos were generated using artificial intelligence (AI) programs such as Synthesia and D-ID.

The findings were described in a new report by Pavan Karthick, a threat intelligence research intern at CloudSEK.

“It is well known that videos featuring humans, especially those with certain facial features, appear more familiar and trustworthy,” reads the document.

“Hence, there has been a recent trend of videos featuring AI-generated personas across languages and platforms (Twitter, YouTube, Instagram), providing recruitment details, educational training, promotional material, etc. And threat actors have also now adopted this tactic.”

Infostealers observed to be delivered via these videos included Vidar, RedLine and Raccoon. Many of the videos drew hundreds or even thousands of views.

“[For instance], a Hogwarts [Legacy] crack download video generated using d-id.com was uploaded to a YouTube channel with 184,000 subscribers. And within a few minutes of being uploaded, the video had nine likes and 120+ views,” Karthick wrote.

According to the security researcher, this trend shows the threat of infostealers is rapidly evolving and becoming more sophisticated.

“String-based rules will prove ineffective against malware that dynamically generates strings and/or uses encrypted strings. Encryption and encoding methods differ from sample to sample (e.g., new versions of Vidar, Raccoon, etc.),” Karthick explained.

“In addition, they will only be able to detect the malware family when the sample is unpacked, which is almost never the case in a malware campaign.”
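Karthick's point about string-based rules can be illustrated with a minimal Python sketch (not from the report; the indicator strings, function names and XOR key below are hypothetical). A naive rule that searches a sample's bytes for known plaintext indicators catches the unpacked payload but misses the very same payload once it is encoded, while the malware itself can trivially reverse the encoding at runtime:

```python
# Minimal sketch (not from the CloudSEK report) of why static string rules
# fail against encoded strings. All indicators and strings are hypothetical.

RULE_STRINGS = [b"stealer.example/c2", b"GrabPasswords"]  # hypothetical indicators

def string_rule_matches(sample: bytes) -> bool:
    """Naive string-based detection: flag a sample if any known
    indicator appears verbatim in its bytes."""
    return any(indicator in sample for indicator in RULE_STRINGS)

def xor_encode(data: bytes, key: int) -> bytes:
    """Single-byte XOR, a trivial stand-in for the per-sample
    encryption/encoding schemes the report describes."""
    return bytes(b ^ key for b in data)

# A "packed" sample: the indicators exist only in encoded form on disk.
plain = b"...payload...GrabPasswords...stealer.example/c2..."
packed = xor_encode(plain, key=0x5A)

print(string_rule_matches(plain))   # True  - the unpacked sample is caught
print(string_rule_matches(packed))  # False - same payload, encoded, evades the rule

# At runtime the malware simply reverses the encoding (single-byte XOR is
# its own inverse), so the strings never appear in plaintext on disk:
assert xor_encode(packed, key=0x5A) == plain
```

Real-world engines such as YARA are far more expressive than this toy matcher, but they face the same fundamental limitation: if the indicator strings only ever exist in encrypted or dynamically generated form, a static pattern has nothing to match against until the sample is unpacked.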

Read more on Raccoon here: Credential Stealer Malware Raccoon Updated to Obtain Passwords More Efficiently

To defend against threats like this, Karthick advised companies to adopt adaptive threat monitoring tools.

“Apart from this, it is recommended that users enable multi-factor authentication and refrain from clicking on unknown links and emails. Additionally, avoid downloading or using pirated software because the risks greatly outweigh the benefits,” concluded the advisory.

AI tools are also often associated with data privacy concerns. For more about this trend, read this analysis by Infosecurity deputy editor James Coker.
