For Phishing Protection, Rely on More than Users

In a recent report, Wombat Security showed that 42% of UK respondents know what ransomware is, and that 17% know how to spot a phishing attack. In a chat with Infosecurity, the company went on to explain the numbers, claiming that protection is “down to people”.

If protection against phishing and ransomware really is “down to people”, as is claimed by many security professionals in addition to Wombat, we should just give up the fight. If we truly believe that the only way to spot and stop these kinds of threats is to have employees do the job, I can confidently tell you that not only will we fail, but science can also explain why.

Just ask any social engineer: they will happily explain exactly how to exploit the human weaknesses, mental models and behavioral patterns that make humans function. I tend to claim that the only way to make a person safe from social engineering is to have that person lobotomized. The reason behind the claim is not (only) to shock: psychology teaches us that the human mind functions in a particular way, one that allows humans to form strong and large social networks.

Let me share some insights, hopefully sparking your curiosity, so we can debunk the myth that cybersecurity is “down to people” and move the discussion to how to improve our technology instead.

Humans are wired to function in a social setting. In fact, we cannot survive alone; our survival depends on others. If you don’t believe this, consider a newborn. How old must a child be before it can survive on its own: six months, two years, ten years?

One of the mechanisms humans use is reciprocity. If I extend my hand to you, you are more likely than not to shake it. If I give you a gift, you are more likely than not to want to return one. This is why it is so easy to make people feel obliged to give you their passwords or information, or to hold the door for you: we are made to return a favor.

Holding the door also taps into another extremely strong part of being human: in-group bias. If I follow you towards a door, you are likely to assume that I belong in that building, which your brain translates to “belonging in the same group”. We, you and I, are on the same team, your brain decides, and so you will be nicer to me, extend favors to me and follow my suggestions, especially if I make negative comments about someone who does not belong in “our” group. Your brain also tricks you into fearing exclusion from “our” group should you fail to comply with its norms and social behaviors.

You may claim that you would never hold the door open for me, because you would spot an imposter from far away. You may be right, too, because you are a security professional, and some of us have been trained to spot these imposters. That does not change the fact that most people are susceptible to these mechanisms most of the time.

Still not convinced? There are plenty of real-life examples of people gaining access to systems and information they should not have had, in organizations that do know better. Such people tend to be called spies or criminals when they are on the other team; on our team, they are called undercover agents. These are people who deliberately play on the social rules and psychological models of humans in order to get what they want.

Can you think of others who deliberately play on social rules and psychological models to get what they want? Of course you can. Social engineers should pop into your mind. Criminals who spread ransomware should be just as easy. What about that sales guy who tries to make you buy that new car/house/firewall? If he is any good, he will play your mind.

What about your better half, then? What games does he or she play? The simple truth is that we are all subject to these human behaviors. We, as organisms, are created and tuned to function in groups with others. We are not created to function as computers, as some seem to believe.

The problem of cybersecurity threats targeting employees is not solved by throwing phishing assessments and awareness training at people. The problem must be defined as what it is, and then dealt with accordingly. It is a technological one: with cheap email distribution, anyone can send anything to anyone, including phishing and spam.

A technical problem cannot be solved by throwing people solutions at it. It must be dealt with by technology. Phishing, for example, follows particular patterns, patterns that computers can identify with ease using big data and monitoring. There are products on the market to identify and filter out these kinds of threats. Tools like Google’s Gmail come with spam, fraud and phishing filters built in, for free. Gmail also flags suspicious messages when it is in doubt, blocks attempts to open certain attachments, and offers a default preview mode for documents.

These are great examples of technical measures to reduce phishing and email threats. They apply the strong points of computing and provide an aid that supports the human model.
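To make the “patterns” point concrete, here is a minimal sketch in Python of the kind of heuristic scoring a filter might do. The signals (mismatched Reply-To domain, urgency language, link text that shows one domain but points at another) are well-known phishing indicators, but the weights, keyword list and threshold here are illustrative assumptions, not any vendor’s actual rules:

```python
# Minimal sketch of heuristic phishing scoring. Signals and weights are
# illustrative assumptions, not a real product's implementation.
import re
from urllib.parse import urlparse

URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "act now"}

def phishing_score(sender: str, reply_to: str, subject: str, body_html: str) -> int:
    score = 0

    # Signal 1: Reply-To domain differs from the sender's domain.
    if sender.split("@")[-1].lower() != reply_to.split("@")[-1].lower():
        score += 2

    # Signal 2: urgency language in the subject line.
    if any(word in subject.lower() for word in URGENCY_WORDS):
        score += 1

    # Signal 3: anchor text that looks like a domain but links elsewhere.
    for href, text in re.findall(r'<a\s+href="([^"]+)"[^>]*>([^<]+)</a>', body_html):
        link_domain = urlparse(href).netloc.lower()
        if "." in text and link_domain not in text.lower():
            score += 3

    return score  # higher means more suspicious; a filter would threshold this

# Example: a message whose link text claims one site but points at another.
print(phishing_score(
    sender="it-support@example.com",
    reply_to="attacker@evil.example.net",
    subject="URGENT: verify your account immediately",
    body_html='<a href="http://evil.example.net/login">www.example.com</a>',
))  # prints 6 (2 + 1 + 3), well above a plausible alert threshold
```

Real filters combine thousands of such signals, trained on billions of messages, which is exactly the kind of pattern matching computers do tirelessly and humans do not.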

Of course, there are also products that try to solve this technical problem by blaming people for their behaviors. That makes perfect sense from the vendor’s business perspective. The question you should be asking is this: does it make business sense for your organization?
