Are Humans the Weakest Link in the Security Chain?

In March 2017, Britain's National Cyber Security Centre (NCSC) set out a new challenge to security practitioners: lose the habit of calling users ‘the weakest link’ when considering security-related behavior in the workplace. Instead, NCSC recommends recognizing users as the strongest link and an important part of any organization’s defenses, arguing that devising security policies that work for, rather than against, users is crucial to making security work.

To users frustrated by corporate security, the approach seems logical enough. Usability experts have long argued that computer security is dominated by policies and rules that appear to have been formulated without considering the fact that most people’s primary jobs are not security-related. Soaking up users’ time and attention with complex policies they struggle to comply with not only frustrates them (and inspires risky workarounds) but also imposes substantial productivity, and therefore financial, costs on the business.

One of the first pieces of this new approach is the NCSC’s 2015 password guidance, which recommends doing away with complexity requirements and ceasing to require frequent changes. The first makes passwords hard to type correctly; both make them hard to remember. The US National Institute of Standards and Technology (NIST) has followed suit, and in July 2017 the original author of the NIST standard that included the now-dropped complexity requirements publicly apologized in the Wall Street Journal for making users’ lives unnecessarily difficult.
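As a rough illustration of the direction of travel (a sketch, not code taken from the NCSC or NIST documents), a password check in this spirit might enforce sensible length limits and screen candidates against a list of known-weak passwords, while deliberately omitting composition rules and expiry logic. The file name, thresholds and function names below are assumptions made for the example.

```python
# Illustrative sketch of a password check aligned with the newer NCSC/NIST
# direction: length and known-bad-password screening instead of composition
# rules and forced rotation. Names and thresholds are assumptions, not taken
# from either body's guidance.

MIN_LENGTH = 8    # assumed minimum; long passphrases are encouraged
MAX_LENGTH = 64   # generous ceiling so passphrases are not rejected


def load_blocklist(path="common-passwords.txt"):
    """Load known-compromised or common passwords, one per line (hypothetical file)."""
    try:
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}
    except FileNotFoundError:
        return set()


def check_password(candidate, blocklist):
    """Return a list of problems; an empty list means the password is acceptable."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append(f"must be at least {MIN_LENGTH} characters")
    if len(candidate) > MAX_LENGTH:
        problems.append(f"must be at most {MAX_LENGTH} characters")
    if candidate.lower() in blocklist:
        problems.append("appears on a list of commonly used or breached passwords")
    # Deliberately absent: 'must contain a digit/symbol/uppercase' checks and
    # any expiry logic - the requirements the guidance moved away from.
    return problems


if __name__ == "__main__":
    print(check_password("password1", {"password1", "qwerty"}))
```

The point is as much what is absent as what is present: the burden shifts from users memorizing arbitrary rules to the system rejecting genuinely weak choices.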

Nonetheless, the shift represents a profound cultural change for many security practitioners, particularly as they have typically been recruited for their technical knowledge rather than their interpersonal ‘soft’ skills. At security conferences, it is not uncommon to find practitioners who bond over their frustration with ‘lusers’.

“We’re really looking at challenging the slogan,” says the NCSC’s Susan A, “because it’s just not a useful way of looking at the issue.” Instead, she says, “the argument to have is how to ensure we can properly manage risk in the workplace. If we work with people, we can get a better risk management solution.” The NCSC intends to publish new guidance later this year to explain the thinking further and showcase examples of organizations that have begun to work differently.

“If you compare modern business culture and modern security culture,” she adds, “business knows that people are its greatest asset.” Security, by contrast, has more of a command-and-control culture. “It’s not really tenable going forwards to have two incompatible cultures in the same organization.” The picture that emerged from a research study at the Department for Work and Pensions was one of users trying their hardest, who had to navigate “slightly flawed systems” in a large, complex organization with a lot of legacy IT. “For me it was a very powerful thing, and it almost rewrote in my head how I thought about the problem.”

“It’s not really tenable going forwards to have two incompatible cultures in the same organization”

A New Approach

The new approach – which Susan A calls “just the start” and admits requires a profound change in mindset – necessitates developing a better understanding of where the problems lie. Today, the common approaches are awareness training, phishing training and other efforts to ‘fix the users’, as UCL professor Angela Sasse likes to put it. However, better value for money may come from fixing underlying problems such as an aging, and therefore vulnerable, IT estate, or from reducing stress and overwork so that employees make fewer mistakes. Identifying the cause of the behavior you want to discourage is an important part of that work.

The Research Institute in Science of Cyber Security (RISCS), which Sasse leads, is an important part of the inspiration for the NCSC’s new approach. Sasse’s 1999 paper ‘Users Are Not the Enemy’ was commissioned by a group at BT who asked Sasse to find out ‘why these stupid users can’t remember their passwords’. The push to commission it, however, came from the company’s accountants, who warned that the relentless rise in password help desk costs threatened to bankrupt the company.

Sasse finds two crucial problems with the old approach: first, security people tend to regard users’ time as a free resource that is theirs to command as they wish; second, much security advice is not worth the trouble that users are being ordered to take.

Recently, Sasse asked 250 experts to name their top three security tips; they collectively produced 152 unique pieces of advice. “The fault,” she says, “lies with the people who design security mechanisms that are too effortful and ineffective. The fact that they can’t agree among themselves what the methods should be shows what a mess the whole field really is.” Instead, security people need to learn to respect that, “sometimes what people value is different. Security experts tend to be very sniffy about it and insist that users should value what they value. That really has to change.”

An added piece of complexity is that these systems are, as you might say, humans all the way down. Wendy Nather, principal security strategist at US-based Duo Security, which makes two-factor authentication systems, agrees, but adds: “I would say that humans are always the weakest link, but they’re not necessarily the users but the builders and designers of the technology. One of the silliest things is that the web is built for people to click on links, it’s all links – and yet we go back to users and say, ‘don’t click on links!’”

The web is only one example; ‘security’ would have us avoid many features that have been built into technology and have important uses – and then call us stupid for using them. Nather advocates seeing the security function as a service organization and approaching the rest of the company as “we’re here to help you do what you need to do in a secure way that matches your risk tolerance.” That way, “you get along a lot better.”

A particular issue is frequent ‘warnings’ about technological problems users can’t solve. Take certificate warnings: faced with these, most people see only two choices – proceed anyway, or give up. People quickly become habituated to ignoring them. Nather once found her three-year-old grandson downloading a browser toolbar; he couldn’t read, but he knew that clicking the highlighted button made annoying blockages go away. “People who design things that way are not putting themselves in their users’ shoes.” As independent consultant Dr Jessica Barker asks: “How do human resources and the company accountants do their jobs if they can’t click on attachments to open CVs and invoices?”

Similarly, Morey Haber, vice-president of technology for BeyondTrust, which makes privileged access management and vulnerability management products, argues that where the early days of computing saw fully closed mainframes that were opened only as necessary, home computers took the opposite approach, and that change made humans the problem. “Ever since, we’ve been shoe-horning security back in.” He counts designers among the humans causing the problems: “In some cases the end user is not at fault, but the life cycle is.” Many vendors, however, see the new approach as an opportunity: their technology can help protect companies from their users.

“It’s a balance to get appropriate security controls and make people understand why they need to follow those controls and why they’re important”

Finding a Balance

Amanda Finch, general manager of the IISP (Institute of Information Security Professionals), views people as both the strongest and the weakest link. “It’s really getting the balance right,” she says. “It’s a balance to get appropriate security controls and make people understand why they need to follow those controls and why they’re important, but if it’s impossible or it makes their lives very difficult, then they need to work together to make it very tenable.”

One issue that Dr Barker often finds when she is brought into companies to design and deliver awareness-raising training packages is that companies want a quick fix. “They think they can tick a box that the training is done and that means people won't make any mistakes,” she says. “Or they will tell me the user needs to be scared into behaving the right way. It’s common to hear, but it’s not a helpful approach to take.”

Worse, training is very often delivered as a set of rules that tell people what not to do. This approach omits two important principles: tell people what they should do, and explain why it’s important. “If you explain and make the training relevant, they’re far more likely to follow the processes and understand why they’re doing something and more likely to engage,” says Barker.

Nather also raises these issues, comparing them to teaching driving: “we don’t just teach people how to avoid different ways to crash.” Nather has had success conducting training in which attendees are encouraged to try to trick each other. “Immediately you start getting them in the mindset of thinking like the attackers, and, knowing what they know about their classmate, asking what their classmate would fall for.” Ultimately, Nather concludes, “we should not expect users to know as much as we do, because it took us decades to learn it.”
