To Inform or Not to Inform, There Should be No Question

Sometimes we need help to make sense of the world. In "The Man Who Mistook His Wife for a Hat", the 'lost mariner' tried to construct rational but flawed explanations for his life, despite having amnesia so extreme that he could not retain recent events for more than a few minutes. Oliver Sacks offered sympathetic and practical help to his patients who had untreatable neurological disorders. His writings expand our view of the world.

As Brian Boyd said: "We engage in a kind of social confabulation rather than having to confront our failure to understand."

Neurotypical people make mistakes too – few of us understand how most technology works. To use devices, we form 'folk theories': mental models based on practical (rather than scientific) reasoning. Sometimes a flawed mental model just wastes energy. Perhaps your father, like mine, had to explain to you that turning up the thermostat wouldn't warm the house faster.

These folk theories of the humble thermostat are discussed in Don Norman's "The Design of Everyday Things", drawing on Willett Kempton's classic cognitive science research. The same social science research techniques have been used in the design of safety-critical systems for many years.

Folk theories of security
The fast-expanding field of sociotechnical security (security and human behaviour) now also studies folk theories of security. Just in time, too – everyday things now have serious security and privacy implications. Smartphones, home computers, and healthcare sensor devices are just the start. Here are some common flawed beliefs:

  1. "I'm not a target"
  2. "It's the other vendor's devices that are vulnerable – not mine"
  3. "There's nothing I can do to protect myself"
  4. "If it's says it safe, it is safe"

These security folk theories of people, processes and technology carry over into the workplace. Criminals and the unscrupulous know this, and exploit flawed beliefs as part of social engineering.

The onus is on security specialists
Security specialists have a different kind of challenge. We rely on mental models too; system threat models and deployment models are two examples. These models tend to be incomplete, rather than flawed in the way folk theories are. So we need to check these models for gaps. 

Security products and services are often opaque black boxes that their vendors discourage opening. When a wise customer asks "How does this work?", the vendor should provide a mental model of their black box.

This may take the form of a top-level block diagram or architectural description of no more than a couple of pages. The description needs only enough detail to support an accurate mental model, so that customers know how to use the product or service securely.

A vendor may say that the information isn't available, or that it will be difficult to maintain. It shouldn't be – and if the information does change, customers need to know. Often it just needs to be extracted from training material, patent applications or Common Criteria documentation.

For cloud services, providing this information is good practice. It is part of service transparency, and doesn't mean revealing sensitive or proprietary information. Let's hope that one day, it will be a standard part of a product data sheet. In the meantime, security professionals need to both:

  1. Understand their users' folk theories by recognizing how they make security decisions. When a folk theory gets in the way, it isn't enough just to give practical advice; you must expand and adjust people's understanding.
  2. Ask their suppliers for up-to-date architectural descriptions of their products and services.

Someone said recently that security professionals should be like family doctors, dispensing kindly advice to all who call. That's an idea we can all live with; I wish I'd thought of it myself.

Brian Boyd went on to say: "We have coped with our anxiety about not knowing enough by inventing stories involving agents who know what we don't. [...] Unlike stories with agents, which we have evolved to be predisposed to, the agentless explanations of scientific stories seem draining both emotionally, in that they require us to put our best explanations to the test, and imaginatively, in that they require us to think about mechanisms not at the level of agency".

Scientific stories don't have to be like that. In this series, I have tried to show how they can obliquely illuminate security matters, as fulfilling stories in which the scientists appear as they should be: the helpful experts who know what others don't.