People in Security: The Weakest Link vs Innocent and On Your Team

People in Security: The Weakest Link, by Bruce Hallas, Author and Host, Re-thinking the Human Factor (book and podcast)

If the measure of weakness in security is the frequency with which humans make choices that result in risky behavior (often the root cause of data security and privacy breaches), then people are, without doubt, the weakest link.

A key part of the issue is that decision making is not a binary process for mere mortals. In many circumstances, it is not even a conscious one. Whilst we all like to consider ourselves free spirits, able to choose as we please, we might be surprised to learn that science has debunked that illusion.

Organizations invest in developing and embedding processes to identify risks to their cash flow and profitability. For some of us, that luckily includes information security.

Having documented the unacceptable risks, organizations then identify and implement controls to manage them in line with the board’s appetite for risk, which is often defined and recorded in a range of organizational policies.

These controls are there to reduce the risk associated, in many cases, with human behavior when users interact with information assets, systems and even the physical locations where these are found.

Many of these controls are enforced using technology. However, many controls rely on employees making a discretionary choice whether to comply with policy or not.

To increase the chances that employees will behave in line with an organization’s expectations, we invest in education and awareness programs and initiatives. These are designed to make employees aware of their roles and responsibilities, and of the serious consequences – to them, the organization and even customers – of failing to comply.

We support these awareness activities by asking employees to make affirmations about their willingness to comply with policy. In addition, we increasingly assess employees’ competency to fulfil their roles and responsibilities. So why is it that after all that effort, people still choose to behave in a way which is contrary to expectation?

The root of the problem lies in one of several assumptions that people make when looking to influence behavior: that information communicated to an audience, through whatever channel, is an effective means of driving behavioral change.

It assumes that people will use the information given to them to weigh up the ‘pros and cons’ of choosing to comply with policy and make decisions based on logic. There’s a name for this assumption – ‘utility theory’ – and it has been around in various guises since the mid-18th century. Another label we, as an industry, impose on this is ‘rational behavior,’ and when people don’t comply, we label them and their choices as ‘irrational.’

“How humans make judgements and decisions can be interpreted as not only a weakness, but possibly the greatest weakness by far within the human factor of security”

This seems, at first, to be sound reasoning, but science disproved it as far back as the 1970s. So what does the science, and a few decades of applied research, tell us?

The brain isn’t a logic gate; there are plenty of shades of grey before a decision is made. Furthermore, the brain’s ability to perform at its best depends on a good supply of energy, and the brain accounts for a disproportionate share of the body’s daily energy consumption.

In our evolutionary past, particularly as hunter-gatherers, we struggled to eat enough consistently. Our brains and bodies evolved to handle this in many ways. One way the brain coped was to become what some call ‘lazy.’ By lazy, we actually mean more energy efficient.

To do this, the brain evolved two systems. One, sometimes called the ‘lizard brain,’ delivers quick, even automatic, thinking; this can be thought of as your gut instinct or unconscious thinking. The second system performs what is called cognitive thinking, where we consciously think things through. However, instinct requires far less energy than deliberation, and conserving energy is exactly what the brain needs to do to survive.

To save energy, your system ‘one’ brain relies on what behavioral scientists call ‘cognitive biases and heuristics.’ You can think of them as short cuts that enable quick, sometimes unconscious, decision making.

These short cuts have been embedded in human DNA over millennia. They served us well through the evolutionary process; however, the brain hasn’t evolved sufficiently to handle the world we live in today.

We receive excessive quantities of information every day, and the environment in which we make decisions is often stressful.

Against this backdrop of information overload, employees must make decisions about a topic which is, rightly or wrongly, perceived as complex and unfamiliar, or one that incites ambivalence or pain.

So, instead of being processed by our system ‘two’ thinking brain, this information is often received or reviewed by system ‘one,’ where our cognitive short cuts are at work. These short cuts served us well when we were hunter-gatherers on the savannah, but they aren’t as effective in the world we find ourselves in today. This makes them, a part of human DNA, a vulnerability. How humans make judgements and decisions can be interpreted as not only a weakness, but possibly the greatest weakness by far within the human factor of security.

People in Security: Innocent and On Your Team, by Wendy Nather, Director of Advisory CISOs, Duo Security

The infosecurity industry has an unfortunate tendency to blame users for its security problems. If only users were less careless or better trained, we say, then we could prevent the majority of breaches. Yet, as our users continue to fall victim to attacks such as phishing and drive-by malware, what this really shows is the inadequacy of cybersecurity design and implementation.

Infosec puts too much of the burden on users to make up for badly designed security systems. Users of a system shouldn’t need to care about how it works; IT and cybersecurity teams need to make infrastructure and systems seamless enough for the less tech-savvy people to still be secure.

Security shouldn’t require a user to learn multiple interfaces and download several apps and accessories just to perform one simple task. Instead of interrupting or working against the natural workflow, security needs to perform smoothly in the background. By its very nature, bolt-on security tends to add friction to the user experience, and users react by avoiding it, evading it or turning it off entirely. This is not malice; it’s human nature.

Technology products often come with a whole host of features built in, and then we scold customers for actually using them – macros in Office applications, for example. Banning the use of legitimate software functionality because it’s vulnerable to misuse puts the onus on the wrong stakeholder to fix the problem. Try telling someone in finance and accounting not to use half of the spreadsheet functions they rely on for calculations, and see what they say. The web is designed around clicking on links, while the most popular (and pithiest) security advice is ‘don’t click on links.’ Not only is this admonition unhelpful, but it signals to users that we don’t understand or appreciate their needs and priorities.

It is easy to assume that organizations and their users are careless, or that security isn’t a priority for them. The truth is that security is only one of many concerns on their radar; it’s about understanding all the dynamics that play into what companies have to prioritize in order to stay in business. Fake moralizing – calling users lazy, stupid or evil when they don’t do exactly as we say – is the biggest disservice we do as security professionals.

“We must shift the view of security from a control function to a service function”

Furthermore, it’s not even clear whether most organizations can achieve effective security at all. One of the hardest tasks in cybersecurity is figuring out how to implement, in any given environment, what conventional wisdom says we should. How do you get access to network traffic data when you don’t run your own network? What do you do when you find a vulnerability in software that the vendor refuses to patch unless you pay for it as custom development work?

Enterprises are complex and fragile, and the older they are, the more likely they are to have legacy systems with their own technical inertia that can’t be overcome without years of steady investment. Making the business case for that investment, above and beyond what’s needed to run the business, isn’t as simple as saying ‘security best practice’ or even ‘compliance,’ particularly if you can’t tell your management what the final price tag will be. Users can only work with whatever the organization manages to provide for them.

Take, for example, credential stuffing attacks that result from users creating and reusing weak passwords. On the surface, it seems logical to blame users for their actions and for the results, but that ignores the root cause driving those actions. My belief is that we should never have designed the primary credential to reside in fallible organic memory (i.e. the brain). Sure, it seemed like a good idea back when there was only one login account per person, but today, given that most technologically active people around the globe have literally dozens or hundreds of accounts, it’s ridiculous to tell them to memorize unique, complex character strings and never write them down.
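To see how untenable that demand is, here is a back-of-envelope sketch in Python; the account count and password length are illustrative assumptions, not figures from this article:

```python
# Rough arithmetic on the memorization burden of unique, complex passwords.
# All numbers here are illustrative assumptions.
import math

accounts = 100   # assumed number of accounts for an active user
length = 16      # assumed length of a 'complex' password
alphabet = 94    # printable ASCII characters to draw from

bits_per_password = length * math.log2(alphabet)  # ~105 bits of entropy each
total_chars = accounts * length                   # 1,600 random characters

print(f"Entropy per password: {bits_per_password:.0f} bits")
print(f"Random characters a user must memorize: {total_chars}")
```

Even under these conservative assumptions, a user would need to hold 1,600 random characters in memory – exactly the design flaw described above.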

The good news is that we are starting to design programmatic layers to shield users from the malignant growth of passwords. The bad news is that we are still far from being able to deploy these shields uniformly across all systems everywhere. Users will continue to struggle with an untenable technology design, and some of us will continue to beat them up about it, until they openly revolt. I predict that the revolution isn’t too far off.
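One such programmatic layer is the password-manager pattern: software generates and recalls a unique high-entropy secret per site, so human memory never holds the credential. The sketch below is a minimal, hypothetical illustration in Python – the vault file, site name and password length are assumptions, and a real tool would encrypt the vault under a master secret or hardware key rather than writing plain JSON:

```python
# Minimal sketch of the password-manager pattern: generate and store a
# unique secret per site so the user never has to memorize it.
# Illustrative only; a real vault would be encrypted at rest.
import json
import secrets
import string
from pathlib import Path

VAULT = Path("vault.json")  # assumed location; real tools encrypt this file
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def get_password(site: str, length: int = 20) -> str:
    vault = json.loads(VAULT.read_text()) if VAULT.exists() else {}
    if site not in vault:
        # First visit: create a fresh high-entropy secret for this site.
        vault[site] = "".join(secrets.choice(ALPHABET) for _ in range(length))
        VAULT.write_text(json.dumps(vault))
    return vault[site]

print(get_password("example.com"))  # unique per site, never memorized
```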

We need to create security that our customers want to use, not just technology that IT people want to buy. More fundamentally, we must shift the view of security from a control function to a service function. By collaborating with our customers, and democratizing security so that it’s easier to use, we can stop the blame game and start winning together as part of the same team.
