What the FBI Hoax Blast Reveals About Email Deception


In today’s digital world, distinguishing between friend and foe has never been more challenging. For example, imagine that you receive an email from a legitimate FBI address and server warning of an imminent threat to your infrastructure – would your first impulse be to assume the FBI sender was really a hacker? Probably not.

Over 100,000 recipients recently faced this exact question when they received an email blast from ‘eims@ic.fbi.gov,’ an address belonging to the FBI’s Criminal Justice Information Services. Because the email came from a legitimate FBI server, it slipped right past legacy email security tools and reached inboxes, justifiably alarming recipients.

But the email was not sent by the FBI. Instead, it was reportedly sent by a hacker with the apparent intention to damage a security researcher’s reputation, falsely identifying him in the message as a threat actor.

Compromised Supply Chain Email… the Scariest Spearphish 

One of the most sophisticated email attack types is hijacking a trusted third-party account and using it as a genuine, validated point of contact. The technique is alarming because the attack on a specific organization and individual arrives from a trusted, vetted contact, taking digital deception to another level: it easily deceives the human victim and slips past legacy email security technologies.

In this case, the emails came from a genuine FBI email server rather than a spoofed FBI address: the threat actor abused a coding error on an FBI website to blast out the hoax message. Even with good cyber-hygiene, a traditional email gateway in place and anti-phishing training, it is becoming an almost impossible task for a human to distinguish a legitimate message from a malicious one, particularly when a trusted supplier or partner is being spoofed or has been compromised.
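This is a key reason legacy controls failed here. As a minimal illustrative sketch (not tied to any particular gateway product; the header values, the receiving mail server name and the subject line are invented for the example), a filter that trusts a message because it authenticates cleanly will pass a hoax that genuinely originates from the domain owner's own infrastructure:

```python
# Illustrative sketch: a gateway-style check that relies only on the
# SPF/DKIM/DMARC results recorded in the Authentication-Results header.
# A hoax sent from the domain owner's own server passes all three checks,
# so a purely authentication-based filter has no reason to quarantine it.
from email import message_from_string

RAW = """\
Authentication-Results: mx.example.net;
 spf=pass smtp.mailfrom=ic.fbi.gov;
 dkim=pass header.d=ic.fbi.gov;
 dmarc=pass header.from=ic.fbi.gov
From: eims@ic.fbi.gov
Subject: Urgent: threat actor in systems

(hoax body)
"""

def passes_authentication(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    results = msg.get("Authentication-Results", "").lower()
    return all(f"{check}=pass" in results for check in ("spf", "dkim", "dmarc"))

print(passes_authentication(RAW))  # True: the hoax authenticates cleanly
```

The point of the sketch is not that authentication is useless, but that sender authentication and reputation alone say nothing about whether the content or behavior of a message is normal for that sender.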

In essence, attackers today are forcing organizations to choose between blindly trusting their suppliers or potentially risking business disruption. This security challenge is simply no longer a human-scale problem, and businesses must rely on advanced and innovative technologies to protect their digital environments as targeted attacks increase in number, scale and complexity.

In one similar incident, Darktrace discovered an attack in which threat actors hijacked the accounts of several trusted suppliers and used the compromised credentials to send tailored phishing emails to the target company. The emails appeared to contain file storage links to RFP documents from trusted partners, but the links were actually a means of further Microsoft 365 credential harvesting. Because the organization was running Darktrace autonomous response for email, Antigena Email (AGE), in passive mode rather than active mode, the attackers were able to use one of these malicious emails to compromise an internal account, elicit sensitive information from it and then pivot to sending further malicious emails across the business with the stolen credentials.

Self-Learning AI: Understanding Communications, Context and Behaviors

Rather than looking at historical attacks and categorizing email addresses into binary lists of ‘good’ and ‘bad,’ organizations should adopt security approaches that review myriad aspects of each email, from the details of the message’s content to a rich understanding of the typical communication habits of its sender and recipients, to determine what should and should not reach inboxes.

Organizations cannot rely on education or on traditional email security tools that only stop human-observable anomalies or known historical threats. Today’s security tools need to understand a business’s communications, context and behaviors to combat sophisticated, socially engineered attacks. Fortunately, even when these emails come from legitimate addresses at authoritative government agencies like the FBI, self-learning AI can autonomously assess many factors, including subtle deviations in language, to sniff out when something is not quite right.
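To make the idea concrete, one can picture this behavioral approach as scoring each inbound message against a per-sender profile learned from past traffic. The sketch below is illustrative only: it is not Darktrace’s algorithm, and the features, weights and example addresses are invented for the purpose of the example.

```python
# Illustrative sketch: score an inbound email against a per-sender profile
# learned from past traffic, instead of a binary allow/block list.
# Higher scores mean greater deviation from the sender's normal behavior.
from dataclasses import dataclass, field

@dataclass
class SenderProfile:
    usual_recipients: set = field(default_factory=set)    # addresses this sender normally writes to
    usual_link_domains: set = field(default_factory=set)  # domains this sender normally links to
    usual_send_hours: set = field(default_factory=set)    # hours (UTC) this sender is normally active

def anomaly_score(profile: SenderProfile, recipients: set,
                  link_domains: set, send_hour: int) -> float:
    """Weighted sum of simple behavioral deviations; the weights are arbitrary."""
    score = 0.0
    new_recipients = recipients - profile.usual_recipients
    score += 0.4 * (len(new_recipients) / max(len(recipients), 1))
    new_domains = link_domains - profile.usual_link_domains
    score += 0.4 * (len(new_domains) / max(len(link_domains), 1))
    if send_hour not in profile.usual_send_hours:
        score += 0.2
    return score

# Example: a known supplier account suddenly mails the whole finance team
# at 03:00 with links to a file-sharing domain it has never used before.
profile = SenderProfile({"alice@customer.example"}, {"supplier.example"}, {9, 10, 14})
print(anomaly_score(profile, {"finance-all@customer.example"},
                    {"files.unknown.example"}, 3))
# -> 1.0 (maximally anomalous under this toy scoring)
```

Even in this toy form, the message that sails through an authentication-only check stands out immediately once the question becomes “is this normal for this sender?” rather than “is this sender on a bad list?”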
