Cybersecurity's Image Problem

Benjamin David investigates whether prevailing cyber imagery and rhetoric are dangerous for the cybersecurity industry and explores how we can fix them

When you think of the word cybersecurity, what comes to mind? For many, the answer will be hooded hackers, motherboards, padlocks and binary code. For Dr Victoria Baines, a research fellow at Bournemouth University, author and speaker, it’s a three-part answer: faceless, hoodie-wearing cyber-attackers; military imagery and warfare rhetoric; and cyborgs or galactic fighting machines. While cybersecurity experts have decried these representations in equal measure, the exaggerated imagery and rhetoric show no sign of ebbing.

Interestingly, these depictions have attracted growing academic attention in recent years, with the discourse shifting to the role the human now plays within an increasingly faceless, militarized, unreal and AI-led industry. Indeed, for academics like Baines, these depictions actually imperil the industry by making the public feel disempowered. What exactly are these depictions as Baines understands them, and why does the cybersecurity industry continue to use them? How do they leave ordinary people feeling disempowered? Lastly, and perhaps most importantly, how can cybersecurity be remodeled for the better?

The Hoodie-Wearing Cyber-Attacker

Before we consider the representation of the information security industry and its professionals, let’s consider the adversary. We are all familiar with the image of a hoodie-wearer against cascading binary code. An image that has permeated journalism and cybersecurity marketing in equal measure, the hooded specter evokes fear of the unknown, a supposedly modern representation of the cyber-attacker. The point it conveys is that cybercrime has ominous overtones, operates in the darkest recesses of society and is unavoidably coming for us all, ready to wage a kind of war.

Baines says that while the hooded figure is a stock feature of contemporary representations of malicious hacking, it is also a feature of “an older Western heritage of death symbolized as the Grim Reaper.” Either way, the evocation of enmity is unavoidable: “the hacking-hoodie is coming for you is the message conveyed,” Baines tells Infosecurity.

James Shires, in his article Cyber-Noir: Cybersecurity and Popular Culture, captures this deftly: “Cybersecurity experts foster a perception of cybersecurity as a gloomy underworld in which the good guys must resort to unconventional tactics to keep at bay a motley group of threats to the digital safety of unsuspecting individuals, businesses and governments.”

But why is this depiction used? Baines remarks that the enduring impression is that “cybercrime is something that is very threatening (perhaps mortally so), committed with impunity.” Paradoxically, it conveys the idea that people are powerless against cyber-criminals. As Baines worries, what’s especially spine-chilling is the obscurity, the facelessness: “This is disempowering to most people since it portrays the idea that threat actors are mysterious and unknown.” Baines asks, “would you feel confident defending yourself against such a faceless, unknowable attacker as represented by the hooded hacker?”

The Militarization of Cybersecurity

Military imagery and the language of warfare have become commonplace in the cybersecurity industry. Think “cyber-attack,” “defense-in-depth,” “red and blue teaming,” “weaponization” and “kill chains,” to name but a few. Cybersecurity vendor IronNet captures this trend, calling its products IronDefense and IronDome, the latter alluding to the Iron Dome air-defense system designed to intercept and destroy short-range rockets and artillery shells. Additionally, “many cybersecurity vendors are known to have hired military experts,” remarks Baines, framing cybersecurity as a binary opposition: the good guys vs. the bad guys.

Yet, has this framework been used justifiably? According to Sir Jeremy Fleming, director of GCHQ, the answer is no. In October 2021, while dispelling any idea of the UK building a “cyber warfare center,” Fleming cautioned: “with due respect to all of my military colleagues on both sides of the pond, there is a real danger of over-militarizing the cyber-domain.”

"Would you feel confident defending yourself against such a faceless, unknowable attacker as represented by the hooded hacker?"

Baines warns that modeling the cybersecurity industry along these lines has dire consequences. She stresses that this can negatively impact “a community’s ability to respond to cyber-threats” because of its remoteness: “There is a kind of distancing; cybersecurity feels very remote.” She points to actual war: “For families without serving military personnel and outside of conflict zones, war is remote, both physically and conceptually.” Therefore, militarizing cybersecurity and using the language of warfare is not only hyperbolic: it distances cybersecurity from ordinary human experience. This makes consumers of cybersecurity products feel as though “the war is out of their control,” Baines warns.

The Unreal

Aside from militarization and the inflated use of hoodie imagery, technological advances within cybersecurity also carry a sense of the unreal: “The association with fantasy and science fiction is a key feature of cybersecurity’s foundation myth,” Baines comments. Many in the industry will be familiar with how omnipresent cyborgs are in cybersecurity marketing. Futuristic technology has become ubiquitous, depicting cybersecurity as a cacophonous battle between machines akin to a Hollywood scene – think T-800 vs. T-1000 in Terminator 2: Judgment Day. Curiously, Skynet, the fictional superintelligence that serves as the antagonistic force in the film, “is a name purportedly used by the US National Security Agency’s program for analyzing communications metadata.”

Baines points to the second half of the twentieth century, noting a surge of popular science fiction replete with cybernetic organisms, recognized by the portmanteau ‘cyborg.’ She notes that Darth Vader of Star Wars, The Terminator and RoboCop all introduced cyborgs to the public before cybersecurity gained popularity. Again, the choice of rhetoric matters, since “modern cyber-words are confections, neologisms,” Baines says. She adds that this etymology accords with the dominant rhetoric in the industry: cybersecurity issues are deemed “exclusive, inaccessible and almost dystopian.”

Dehumanization

It’s safe to say that cybersecurity has an image problem, rues Baines. She notes that cybersecurity today has been stripped of its rich tapestry of strictly human qualities, replaced with “conceptually remote, unreal and sensationalist” imagery. She also stresses that this conveys the message that cyber-threats are “huge, serious, complex and frightening, and [portrays] a public that is powerless.”

Another concern that ostensibly links the three depictions already explored can be summarized as a type of dehumanization.

The argument is that cybersecurity today is disempowering, negating a sense of agency through the faceless hoodies, the militarization and warfare rhetoric, and the cyborg and dystopian imagery abounding in the industry. Additionally, security professionals feel a receding sense of human-centric community. Baines summarizes this point when she says that when you are deep in the cybersecurity model, you enter a kind of world that feels devoid of human-like and relatable qualities. Indeed, the human-centric world undergoes a type of recession.

The work of Herbert Kelman, Professor of Social Ethics, Emeritus at Harvard University, is helpful here. In his work on dehumanization, humanness has two features: identity (i.e., a perception of the person “as an individual, independent and distinguishable from others, capable of making choices”) and community (i.e., a perception of the person as “part of an interconnected network of individuals who care for each other”). Using this framework, we can state the problem more clearly: prevailing cyber imagery and rhetoric have the potential to make those working outside the industry feel disempowered and without choices, and those within it feel a weakening sense of interconnected, human-based community.

Dehumanization and Artificial Intelligence

Marie Oldfield, CEO at Oldfield Consultancy and Kuinua Coaching, argues along similar lines to Baines, focusing on AI. In her article Cyber Security and Dehumanisation, she argues that certain types of dehumanization can cause humans to devalue technology and other humans. The type of dehumanization on which she focuses is a “human reaction to overused anthropomorphism and a lack of social contact caused by excessive interaction with, or addiction to, technology.” She explains that Siri and Alexa are good examples of products specifically designed to have human-like qualities. “AI is, of course, a sterling example I focus on,” she notes. By humanizing AI, people will likely trust it. This marketing choice is understandable since “AI can be very complicated for most people, so humanizing these products makes sense.”

"AI can be very complicated for most people, so humanizing these products makes sense"

AI is now fundamental in technology, and it’s “expected to make decisions, from deciding who gets a credit card to cancer diagnosis.” These decisions affect most, if not all, of society, Oldfield says. Yet this is also where dehumanization takes root: with AI-based social robots and chatbots, given their anthropomorphism, unwitting customers develop an emotional investment.

Oldfield’s concerns echo a rousing TED talk by Sherry Turkle, Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology at the Massachusetts Institute of Technology, titled ‘Connected, But Alone?’: “We expect more from technology and less from each other. We create technology to provide the illusion of companionship without the demands of friendship.” The emotional investment results in negative psychological impact due to the “lack of social contact caused by excessive interaction with [technology],” warns Oldfield. The picture here gets more harrowing since Oldfield argues that what follows is “the abuse of technology [and] devaluing other humans.” Strikingly, “this is a contradiction of the use of ‘social robots’ and ‘chatbots,’” since they were created for the purposes of increasing socialization.

In cybersecurity, however, it’s difficult to imagine how excessive interaction with an AI product would lead to the kinds of negative psychological impacts that Oldfield mentions. Consider an AI cybersecurity product such as an anomaly-based intrusion detection system. Such a system lacks the social-robotics or chatbot functions that Oldfield discusses, and might therefore seem immune to the concerns she picks out. “Yet, it’s important to remember that the types of functions that such a system is advertised as doing are conceived in anthropomorphic terms,” Oldfield argues, even when futuristic, cyborg-like marketing is used. Indeed, it’s often advertised as being able to perform human activities, namely “detect, investigate and remediate threats.”

This view isn’t too farfetched since AI-based cybersecurity solutions are often conceived along anthropomorphic lines. Quoting Milad Aslaner, senior director of cyber defense strategy at SentinelOne, from his opinion article in Infosecurity’s 2021 Q4 edition, AI can support a SOC team “in the same way as adding dozens of new colleagues.”

Time to Move Forward

Rethinking cybersecurity, particularly the images and rhetoric the industry consciously chooses and their impact on audiences, can help us remodel the industry so that it resonates with ordinary people while empowering them to protect themselves, argues Baines. This can also benefit those working within the industry. Yet discarding existing stereotypes is one thing; providing an alternative framework for cybersecurity that’s both catchy and advertisable is quite another.

One way we can collectively humanize cybersecurity is to adopt a public-health framework: “comparing communicable and non-communicable cyber-threats to communicable and non-communicable diseases,” as Baines says. Additionally, “it’s not far stretched to compare cyber-risk behaviors to public-health risk behaviors,” she notes. Since the status quo is disempowering, a public-health approach could adopt the language of virology. Being such a relatable and familiar framework, it could reverse the sense of disempowerment that abounds within cybersecurity, restoring a sense of agency in the minds of ordinary people. The point Baines is trying to capture is that when we speak of diseases, people’s reactions aren’t strictly “I have no control of this,” but instead “what can I do about this?”

Although many might consider this public-health framework common parlance, since viruses and infections are terms used pervasively in the industry, metamorphosing the “framing of disease into one of community disease control will see cyber public health empower individuals to actively engage in it without feeling overwhelmed by technical jargon or faceless, militarized and unreal imagery,” says Baines. It’s also easy to see how this framework could engender a sense of belonging by remodeling the industry in human-focused terms, which would be especially beneficial for building a human-based community for those working in it. It could help effectuate a world replete with strictly “human-like and relatable qualities.”

The prevailing imagery and rhetoric in cybersecurity have been effective, “since it drove consumers to buy anti-virus software, preventing some cyber-attacks from succeeding,” Baines remarks. Those wanting to improve the industry, however, must remember that to empower people, provide a sense of belonging and forestall cyber-attackers’ targeted influence operations, “the current approach will no longer do.”
