Cybersecurity Folklore - Separating Fact from Fiction

Although I am a software engineer by training, my education focused on chemistry more than on any other subject. Primo Levi's "The Periodic Table" is essential reading for everyone, but the chapter "Chromium" holds particular significance for me. It tells of a formula for anti-rust paint that contained ammonium chloride for no apparent reason – and of how only Levi knew what lay behind it.

What on earth does anti-rust paint have to do with information security? Three of the minor themes in "Chromium" stand out here: the professional satisfaction of solving difficult technical problems, the loss of organizational memory and the need to check your facts.  

While the Internet has enabled urban myths to spread, it has also helped to debunk them. Yet urban myths still thrive in the cybersecurity sector. Many security policies suffer from security folklore: we do this because we have always done it (even though we may not remember why), or because it is "best practice".

Password policies are notorious: the passwords they mandate are difficult for users to remember and not all that secure – yet it took a public mea culpa from Bill Burr of NIST to bring this to worldwide headlines.

For more than a decade, security specialists have been saying that the "three strikes and you're out" password rule isn't best practice for enterprise authentication. In fact, it's ludicrous. Back in 2003, a research team from UCL predicted that requests for password reminders could be reduced by up to 44% just by increasing the number of strikes from three to ten. Yet the magic number three was plucked from thin air, and many companies continue to enforce it.
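
To make the point concrete, here is a minimal sketch in Python of a lockout counter whose threshold is an explicit, documented parameter rather than an inherited magic number. The threshold of ten reflects the UCL finding above; the class and method names are hypothetical and not taken from any real product.

```python
# Minimal sketch only: names and structure here are hypothetical.
LOCKOUT_THRESHOLD = 10  # ten strikes, per the UCL research, rather than the folkloric three


class LockoutCounter:
    """Tracks consecutive failed logins per user."""

    def __init__(self) -> None:
        self._failures: dict[str, int] = {}

    def record_failure(self, username: str) -> bool:
        """Record a failed login attempt; return True if the account should now lock."""
        self._failures[username] = self._failures.get(username, 0) + 1
        return self._failures[username] >= LOCKOUT_THRESHOLD

    def record_success(self, username: str) -> None:
        """Reset the counter after a successful login."""
        self._failures.pop(username, None)
```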

Another example of security folklore is anti-phishing advice. We tell users that if it's misspelt, it's a phishing e-mail. That particular piece of advice once worked. Then, two years ago, it stopped working – phishers got clever and digital marketers got lazy. Nowadays a misspelt e-mail is just as likely to be promotional as it is to be a phishing attempt.

Now let's move on from security folklore about people to an example about technology: dummy security settings. Complex software products often have obscure security settings with impressive-sounding names. These settings once did something, but now they simply have no effect. Just consider Microsoft Windows: after the operating system has been improved or a feature has been replaced, certain settings are no longer needed.

Yet out-of-date technical security policies are often littered with stern, apparently authoritative – but now false – references to these ineffective settings. This confuses people and creates the risk that other effective and important settings will be overlooked in all the clutter.

Yet another area of technology security folklore is cryptography, and TLS in particular. The strength of cryptographic algorithms erodes over time, so they need to be replaced with more modern ones. Beware of old reference books that say "RC4 is strong"; that may have been true once, but not any longer. Here there is no excuse: cryptography doesn't become weak overnight – businesses have years of advance warning. Check your gateways and system configurations for modern TLS settings.
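
If you want to see what your servers actually negotiate today, here is a minimal sketch using only the Python standard library; the host name is just an example, and in practice you would point it at your own gateways and internal endpoints.

```python
import socket
import ssl


def report_tls(host: str, port: int = 443) -> None:
    """Connect to host:port and report the negotiated TLS version and cipher suite."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cipher_name, _, bits = tls.cipher()
            print(f"{host}: {tls.version()}, {cipher_name} ({bits}-bit)")


if __name__ == "__main__":
    report_tls("example.com")  # example host; substitute your own endpoints
```

If the output reports anything older than TLS 1.2, that is exactly the kind of finding an evidence-based policy review should pick up.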

Security policies need to be grounded in fact, not folklore. Businesses can follow two key steps to make this change.

First, IT must apply what politicians call "evidence-based policy". This means finding evidence grounded in proper scientific research and validated in real organizations, rather than relying on someone's idea of "best practice".

This is why the old password guidance was changed – research showed that it was counterproductive in practice. A thriving UK research community focused on security and human behavior now exists, and I hope its findings will be just as influential on security policies over the next few years. Suppose part of your security policy were changed to its exact opposite? Imagine telling employees to share passwords (securely, of course). Maybe that would be more productive, and would reflect what staff actually do. Researchers are actively studying this very topic.

Second, organizations need to keep track of the changing world. This extends beyond changes in people, process and technology to the surrounding environment. We now live in a world where attackers reverse-engineer patches to create new exploits, often within a few hours. It is no longer just the threat of an occasional zero-day vulnerability we have to worry about – how does your patching policy deal with the monthly batch of same-day vulnerabilities?

Many organizations are wasting their users' time and effort with ineffective security policies. If policies are not practical, users will bypass them. To keep security policies up-to-date, CIOs need to ask two key questions at their regular security policy reviews: where is the evidence that the policy is both effective and efficient? What has changed since the previous review?

With this information, the IT department can update security policies so that employees can easily comply by using modern devices and apps, while technology can be readily updated or replaced to deal with emerging threats and reduce costs. This insight will also help IT ensure processes are easily adaptable to match changing business demands.

Check both past and present facts – and when the facts have changed, change your security policies.

In "Chromium", Primo Levi had a shrewd employer who spotted his talent. He saved them a lot of money, and they increased his salary. I don't know if Bill Burr has read "Chromium" - but I suspect he has. You should, too.