Insider Threat Mitigation: The Role of AI and ML

There is little need to explain how damaging insider threats can be. According to the Bitglass 2020 Insider Threat Report, their most significant impacts are the loss of critical data and operational disruption. Insider threats can also damage a company’s reputation and erode its competitive edge.

Insider threat mitigation is difficult because the actors are trusted agents who often have legitimate access to company data. With most legacy tools falling short, many cybersecurity experts agree that it is time to move on. Artificial intelligence (AI) and machine learning (ML) are among the most promising technologies for cybersecurity in the coming years. What roles do they play in addressing insider threats?

Authentication

The bulk of insider threats facing businesses are due to negligence. While the workforce can be trained in cybersecurity awareness, human error can never be ruled out entirely. Cybersecurity is therefore about reducing risk, which includes shrinking the attack surface so that, if a breach does occur, it can be contained easily.

In authentication, this is implemented through the principle of least privilege: no worker gains access to more data than is needed to perform their tasks at any point.
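As a minimal sketch of what deny-by-default least privilege looks like in practice, consider the following. The role names and permission strings are hypothetical examples, not from any particular product:

```python
# Minimal sketch of a least-privilege access check (deny by default).
# Role names, permissions and resources are hypothetical examples.

ROLE_PERMISSIONS = {
    "analyst":  {"read:reports"},
    "engineer": {"read:reports", "write:configs"},
    "admin":    {"read:reports", "write:configs", "manage:users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant access only if the action is explicitly in the role's set."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "read:reports"))   # True
print(is_allowed("analyst", "write:configs"))  # False: not in the role's set
```

The key design choice is that anything not explicitly granted is denied, including requests from unknown roles.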

Signature-based cybersecurity tools fail to help organizations implement this principle properly because they lack context. This is a gap that AI-backed risk-based authentication (RBA) fills conveniently. RBA does more than verify a person’s identity; it also analyzes the context of each access attempt to detect anomalies. An RBA solution combines behavioral analytics and machine learning to establish a pattern of user behavior and enforce threat-aware policies.

In addition to verifying identity, RBA collects information about a user’s location, device, time of access and so on to determine whether a breach is being attempted. Rather than prompting for two-factor authentication on every login, RBA estimates a risk score from login behavior. Suspicious behavior, such as an attempt to gain access through an unknown device, prompts the system to request additional verification; if the risk is high enough, access is denied altogether.
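The three-tier decision described above can be sketched as follows. The signal names, weights and thresholds here are purely illustrative assumptions, not taken from any real RBA product:

```python
# Hedged sketch of a risk-based authentication decision.
# Signal names, weights and thresholds are illustrative assumptions.

def risk_score(login: dict, profile: dict) -> float:
    """Sum weighted penalties for context signals that deviate from the
    user's established behavioral profile."""
    score = 0.0
    if login["device_id"] not in profile["known_devices"]:
        score += 0.4   # unknown device
    if login["country"] != profile["usual_country"]:
        score += 0.3   # unusual location
    if login["hour"] not in profile["usual_hours"]:
        score += 0.2   # atypical time of access
    return score

def decide(score: float) -> str:
    if score >= 0.7:
        return "deny"          # high risk: no access at all
    if score >= 0.3:
        return "step_up_auth"  # request additional verification
    return "allow"             # context matches the profile

profile = {"known_devices": {"laptop-1"}, "usual_country": "US",
           "usual_hours": set(range(8, 19))}
login = {"device_id": "laptop-1", "country": "US", "hour": 10}
print(decide(risk_score(login, profile)))  # allow
```

A familiar device at a familiar time scores zero and passes silently; an unknown device triggers step-up verification; multiple anomalies at once push the score past the deny threshold.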

False Positives

At least 43% of organizations encounter false positives in more than 20% of cases. The problem is that many companies still rely on signature-based (or rule-based) cybersecurity tools in an age when cyber-attacks have transformed. Cyber-criminals now use artificial intelligence to scale their attacks and operate with greater precision, deftness and sophistication. Unfortunately, legacy security tools are weak in the face of such advanced threats.

Organizations must combat AI-based cyber-attacks with even more robust AI tools. Using machine learning, these tools establish a baseline of normal behavior against which future activity is assessed to detect unusual or suspicious behavior. Combined with predictive analytics, this ensures that IT is alerted before an attack occurs.
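To make the idea of a learned baseline concrete, here is a deliberately simplified sketch using a single feature and a z-score test. The numbers are invented, and real tools model many features with trained ML models rather than one statistic:

```python
import statistics

# Toy sketch of behavioral baselining: learn the mean and standard
# deviation of one user metric (daily MB downloaded, values invented),
# then flag observations far outside that baseline.

baseline = [12, 15, 11, 14, 13, 16, 12, 15]  # MB downloaded per day

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(value: float, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the mean."""
    return abs(value - mean) / stdev > threshold

print(is_anomalous(14))   # False: within the user's normal range
print(is_anomalous(250))  # True: suspicious spike worth alerting on
```

The point of baselining is that "normal" is learned per user rather than written as a fixed rule, which is exactly what signature-based tools cannot do.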

These stronger defenses protect organizations from zero-day exploits and other novel attacks. Legacy cybersecurity tools need to be told what is benign and what is malicious; they grant or restrict access to a network based on those rules. Even without AI, such an approach can be bypassed by exploiting an unknown vulnerability or by tricking the system into classifying malware as safe. AI-based tools ‘know’ better: machine learning can reduce false positives by 50% to 90%.

Phishing Prevention

Insider threat actors are of three kinds:

  • Malicious users – intentional data breaches
  • Careless users – accidental data breaches
  • Compromised users – accidental data breaches instigated by an external actor (this occurs when a user falls for a spear-phishing attack and clicks on a malicious link)

The two previous sections address how AI and ML protect organizations from malicious and careless users. This last section focuses on compromised users.

Phishing prevention is another area where behavioral analytics is useful. AI can analyze emails from seemingly trusted sources to determine whether the messaging is consistent with the sender’s previous emails, uncovering the often subtle inconsistencies in syntax, word choice and writing style that escape human attention. It can also scan links and attachments in advance to ensure that they are authentic and safe to access.
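One simple signal such a system might use is how closely a new email's word usage matches the sender's history. The sketch below compares word-frequency vectors with cosine similarity; the example emails are invented, and real detectors combine many richer features with trained models:

```python
import math
from collections import Counter

# Illustrative sketch of one stylometric signal for phishing detection:
# cosine similarity between word-frequency vectors of two texts.
# Example emails are invented; real systems use many more features.

def cosine_similarity(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values())) *
            math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

history = "hi team please find the quarterly report attached thanks"
suspect = "URGENT verify your account credentials immediately click here"
print(round(cosine_similarity(history, history), 2))  # 1.0
print(round(cosine_similarity(history, suspect), 2))  # 0.0: flag for review
```

A message whose vocabulary barely overlaps with the sender's established style scores near zero and can be escalated for closer inspection, which is the kind of subtle inconsistency that slips past human readers.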

Automating this entire process spares human analysts from having to sift through reams of data to identify potentially malicious code and media.

Over the coming years, AI, ML and cybersecurity will become ever more intertwined, but their combination is not a magic potion that automatically eliminates all our cybersecurity problems. Don’t forget that cyber attackers are also equipped with AI tools with which they can enhance their attacks.

So the trouble persists. It is best to think of AI as one more tool in your box that, together with the others, creates a strong, formidable defense.


Michael Usiagwu is the CEO of Visible links Pro, a premium digital marketing agency committed to helping brands, companies and products gain the right visibility on search engines. He has been featured on Innovation Enterprise, Hackernoon, Readwrite and Bizcommunity.
