Blaming Users for Security Fails: Oh, Yes We Should vs. Oh, No We Shouldn’t

Ira Winkler believes you should attribute appropriate blame and penalties to users in cases of clear negligence, while Wendy Nather argues this approach is counterproductive

Ira Winkler, chief security architect, Walmart

Oh, Yes We Should

While speaking with three other information security executives on the keynote panel of a pre-pandemic ISACA conference, I mentioned that users have to be held responsible for clear policy violations. Another panelist immediately interrupted with the typical line: “You can’t blame the user!” My reply was: “Why not?”

In cybersecurity, ‘you can’t blame the user’ has become a blind mantra, repeated regardless of the circumstances. We hear experts recite it mindlessly. You don’t, however, hear this statement in other fields. For example, you don’t hear a CFO state that you can’t blame the user when a user’s action causes a financial loss. You don’t hear a COO claim you can’t blame a user when a user’s action causes a large shutdown. It is the same for safety-related issues. If a user watches pornography on a company computer, that person will be fired. So why should a cybersecurity-related incident be treated any differently?

Before I go on, I should say that I fully agree that a single user action should not be able to result in significant damage. For a loss to occur, the organization has to have given the user the ability to create damage and allowed that damage to turn into a loss. Users can theoretically only do what you give them the ability to do. So, even if a user is to blame at some level, it is not solely their fault.

At the same time, cybersecurity and IT professionals are, in general, poor at providing awareness training. They provide inadequate protections and don’t consider all of the capabilities they give users. With all of this in mind, I want to be clear that I don’t believe it is always appropriate to blame users.

In my book, You Can Stop Stupid, I wrote about the concept of a ‘just culture,’ which I adopted from safety science. In safety science, a user is as much a part of the system as the tools; in this case, the computer. Any safety incident results from a failure of the entire system, not just the user; the user is merely the proximate cause of the error, and user error is a symptom of what is wrong with the system. Ironically for this discussion, a ‘just culture’ is also referred to as a ‘no blame culture’. In such a culture, users are encouraged to report safety failings without fear of retribution. It is designed to encourage a safer environment that remains functional. It is not, however, a get-out-of-jail-free card.

Users are still responsible for willful misconduct and gross negligence. Using a common cybersecurity example, if a user knowingly violates policy by using a USB drive on a company computer and creates a malware incident, the user can, and should, be held responsible. Likewise, if users install unauthorized software on company computers against policy, they can, and again should, be held responsible. For example, in one case, a guard used the guardhouse computer to download pirated videos with embedded malware. Do you not blame the user here for destroying a safety-related system and placing the organization in legal jeopardy?

Where is the line? In a ‘just culture’, the line is relatively clear, because a ‘just culture’ has specific characteristics: users are 1) provided with clear guidance on how to perform their job functions properly, 2) given the resources to perform those functions properly and 3) provided with a work environment that supports doing their jobs properly.

Does this mean that we blame a user for clicking on a phishing message? Clearly not. Do we blame people for accidents? In the absence of clear negligence, it is not even considered. If users have been given poor training, they cannot be blamed. If they are overworked or not given the appropriate resources to address the issues, again, we do not blame them.

This does, however, mean that if a user is unusually susceptible to phishing messages, falling for them repeatedly, that person should be treated as a constant risk. This is the same as considering disciplinary action against a well-meaning cashier who makes frequent counting errors or a well-meaning nurse who often makes errors in delivering prescribed treatment.

Likewise, if there is a willful violation of security policies without a truly compelling justification in a ‘just culture,’ there should be blame and penalties. According to a recent study published in Harvard Business Review, there was a willful and conscious failure to comply with security policies in 5% of job tasks. Policies are put in place to avoid incidents, adhere to regulations and reduce losses. Organizations are not only suffering tens of millions of dollars in losses due to employees’ failure to comply with policies, with consumers paying the true price; they are also being fined similar and larger amounts around the world for such failures. Such willful failure to adhere to policies would not be tolerated in any other business function.

Even when there is clear negligence or willful misconduct on the part of a user, it doesn’t mean that the system is not a contributing factor. However, it does mean that you should attribute the appropriate blame and penalties to a user when you have a ‘just culture.’


Wendy Nather, head of advisory CISOs, Duo Security (part of Cisco)

Oh, No We Shouldn't

So there I was, years ago, sitting in the office with my iron-jawed deputy, the scary Sergeant Skaarup, talking about the best tools for user awareness training and policy enforcement.

“Aluminum bats are easier to clean the blood off than wooden ones,” Sgt. Skaarup said, “so they’re more hygienic.”

“But if you use them right, there’s nothing to clean up,” I said. “Internal bleeding is carpet-friendly.”

Needless to say, I have since evolved my thinking about security awareness and policy enforcement. Yes, it was fun to talk about wall-to-wall counseling, but let’s face it: as security professionals, we lead an arcane technical function that even other technical colleagues don’t understand. It was one-sided from the start to expect everyone else to adapt to our models and our viewpoint.

The truth is that technology keeps evolving, and we run around begging people to stop using it because OMG HAX. The entire World Wide Web was built to be clicked on, and our response is to say to users, “Not that one! Oh no, don’t click that! That one’s okay. NOT THAT. Click here to download our 30-page white paper on how to stop clicking on links!” There are enormous financial applications that make heavy use of Excel macros, and it’s a preposterous answer on our side to say, “Just disable macros. It’s safer that way.”

Not only is it a ridiculous mission to attempt, but it’s self-defeating. It sets us up against the users, who are simply trying to use technology as it was designed to get their work done. Once we’re the opposition, it can only lead to the users cleverly working around the obstacles we set in their way, rather than coming to us with suggestions on improving the designed controls to accomplish our shared goals. The customers we serve end up becoming another threat vector for us to consider in our risk models, rather than allies against the adversaries we already have.

The reflex to blame the user, rather than the security design, runs so deep that it’s inherent in the conventional wisdom that says: all we have to do is train users enough, and they’ll see things our way. If they don’t, it must mean they still don’t understand, and we must train them more often and LOUDER. What was that definition of insanity again? Trying the same thing over and over and expecting a different result?

If we still face the same dynamics after all these decades, it’s time to ask ourselves why. It’s time to examine our hidden assumptions and biases. The first step is to acknowledge that the use of technology is no longer the exclusive province of technologists. It belongs to everyone, and there are no qualifications for entry. Since technology has become democratized, it follows that security must be democratized too.

Democratizing security means accepting that everyone has a say in how it operates for them. This is a radical departure from the traditional authoritarian model of security, in which an employer was the only source of technology and thus managed it, controlled it and enforced rules around its use. People have their own consumer use of technology and must make their own risk decisions every day; they then bring these judgments into the workplace. This is how it should be: users who are already practiced in assessing personal risk can be more engaged in the process of protecting corporate resources. They need to be consulted as peers, however, not treated as ignorant users who need to obey.

Beyond the principle that users should be encouraged to participate in risk management rather than treated as pets to be trained, there are practical reasons to avoid punishing them for mistakes: it incentivizes the wrong behavior.

At Duo Security (part of Cisco), we structure our awareness education as a user-centric exercise, and we celebrate every success when someone reports a phishing attempt, even if (and especially if) they fell for it. We encourage participation and dialogue with the security team, and that’s how we measure the program’s accomplishments. We don’t want users to be ashamed and hide their mistakes. Instead, we want to partner with them to reduce errors next time, whether that means giving the user information they didn’t have before or redesigning a control to make mistakes less likely or less impactful.

The bottom line is that your users are your most valuable allies. They are the only way for you to scale your protection, detection and response; they fill in gaps with their reasoning and institutional knowledge that automation will never cover. As a former CISO, I can tell you that the best relationship to have with a user is one where they will come into your office, close the door and say, “I think there’s something you need to know.” Don’t waste those golden opportunities to build the strongest possible security program. Also, for crying out loud, put down the bat.
