Online Harms - Damned if They Do, Damned if They Don’t?

Long in gestation and eagerly awaited, the draft Online Safety Bill (‘Bill’) has finally been published, delivering on the government’s manifesto commitment to make the UK “the safest place in the world to be online” whilst also seeking to defend freedom of expression. The Bill’s genesis lies in the Online Harms White Paper, published over two years ago in response to widespread concern about the malign underbelly of the internet, after a series of high-profile tragedies. Following two years of passionate public lobbying from both sides of the argument, however, is the result a Bill which, while well-intentioned, has tried so hard to please all interested parties that it ends up satisfying no-one?

Elusive Duty of Care

The cornerstone of the Bill is a new “duty of care” placed on service providers to protect individuals from ‘harm’. It will apply both to providers based in the UK and, more nebulously, to those having ‘links’ here. In the government’s sights is the gamut of illegal and legal online content, from child sexual exploitation material and terrorist activity to cyber-bullying and trolling.

The ‘duty of care’ will apply to search engines and providers of internet services which allow individuals to upload and share user-generated content. In practical terms, this net will catch social media giants such as Facebook, Twitter and Instagram as well as public discussion forums and blogs.  

As regards illegal content, the duty will force all in-scope companies to take proportionate steps to reduce and manage the risk of harm to individuals using their platforms. They must employ proportionate systems and processes to minimise illegal content on their services and, where such content has been uploaded, to limit its dissemination, minimise the time for which it is visible and remove it as soon as possible.

Certain high-risk ‘category 1’ providers – the big tech titans with large user bases or wide content-sharing functionality – will have the additional burden of tackling content that, though lawful, is deemed harmful. Into this camp fall the likes of encouragement of self-harm and misinformation. These service providers will be expected to set out clearly in their terms and conditions what legal content is unacceptable to them and to take consistent enforcement measures accordingly.

Adding a further level of complexity, the regulatory framework will apply to both public communication channels and services where users expect a greater degree of privacy, for example online instant messaging services and closed social media groups.

Quite how service providers will be expected to meet these onerous new obligations is not specified in the Bill and, instead, they must wait for full Codes of Practice to be issued. Companies wishing to prepare for the new regulatory landscape must therefore sit on their hands for now.

Rabbits from the Hat

Sensitive to public pressure, the government has built on early iterations of its proposals to include new measures addressing concerns raised during the consultation process over freedom of expression, democratic debate and online scams.

The initial release of the Online Harms White Paper in 2019 triggered a furore over the potential threat to freedom of speech, with campaigners fearing the proposals would have a chilling effect on public discourse as service providers self-censored rather than face swingeing regulatory penalties for breaches in relation to ill-defined harms. In response to such concerns, service providers will now be expected to have regard to the importance of protecting users’ rights to freedom of expression and of protecting users from unwarranted infringements of privacy when deciding on and implementing their safety policies and procedures.

Concern has been building for some time about the influence which the largest social media companies potentially wield over political debate and the electoral process. This was seen most starkly in the US during the recent Presidential election, where some platforms may have felt that they had become a political football in their own right. While there are only distant echoes of that here, the role which social media plays in UK democratic events has attracted attention and, in a nod to this, the government has proposed a new duty on category 1 providers to protect “content of democratic importance.” In what might euphemistically be described as opaque, the Bill defines such content as “content that is, or appears to be, specifically intended to contribute to democratic political debate in the United Kingdom…” Service providers affected may well be left scratching their heads about quite how they are supposed to interpret and satisfy this obligation, and it is to be hoped that the eventual Codes of Practice will provide some much-needed clarity. Absent such guidance, the risk is that they will be condemned from all sides.

Erecting a further bulwark to protect the democratic process, the Bill’s new measures to protect journalists are sufficiently wide to cover online contributions by individuals to blogs and the like. Indeed, the press release issued by the government as the Bill was published trumpets that content produced by ‘citizen journalists’ will enjoy the same protection as that of their professional counterparts. Service providers must additionally offer a dedicated and expedited complaints procedure to those who feel that journalistic content has been unfairly removed; where such complaints are upheld, the material must be swiftly reinstated.

Following a vocal campaign from consumer groups, industry bodies and Parliamentarians, the government appears to have capitulated to pressure to bring online scams within the scope of the Bill. E-commerce fraud is estimated to be up 179% over the last decade, with romance scams alone resulting in UK losses of £60 million in 2019/20; all service providers will be required to take measures against these illegal online scourges. Commentators have noted, though, that frauds committed via online advertising, cloned websites and emails will remain outside the Bill’s ambit, leaving many investors still vulnerable to the lure of sham investment schemes.

A Regulator with Teeth?

This ground-breaking regulatory regime will be enforced by a ‘beefed-up’ Office of Communications (“Ofcom”), which will wield an arsenal of new enforcement powers including fines and, as a last resort, business disruption measures. Penalties of up to £18m or 10% of annual global turnover (whichever is the greater) will be at the regulator’s disposal. Those calling for senior management liability will, however, be disappointed; the Bill will not impose criminal liability on named senior managers of in-scope services, though the Secretary of State has reserved the power to introduce penalties on senior officers who fail to co-operate with Ofcom in the future. The government openly admits that the desire to attract inward investment to the UK by tech companies lies at the heart of its hesitation in this regard.
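
To illustrate how the ‘whichever is the greater’ cap operates in practice, here is a minimal sketch in Python; the turnover figures used are hypothetical examples, not drawn from the Bill or from any real provider.

FIXED_CAP_GBP = 18_000_000   # fixed £18m ceiling under the draft Bill
TURNOVER_RATE = 0.10         # 10% of annual global turnover

def max_penalty(annual_global_turnover_gbp: float) -> float:
    """Return the maximum fine available to Ofcom: the greater of the two caps."""
    return max(FIXED_CAP_GBP, TURNOVER_RATE * annual_global_turnover_gbp)

# A smaller provider: 10% of £50m is £5m, so the fixed £18m cap is the greater.
print(f"£{max_penalty(50_000_000):,.0f}")      # £18,000,000
# A tech titan: 10% of £70bn is £7bn, dwarfing the fixed cap.
print(f"£{max_penalty(70_000_000_000):,.0f}")  # £7,000,000,000

As the second case shows, for the largest platforms the turnover-based limb will dominate, which is precisely why the fixed figure matters mainly for smaller in-scope services.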

Given the severity of the threat, the legislation will also give Ofcom the power to require businesses to use highly accurate technology to identify and remove tightly defined categories of illegal material relating to child sexual exploitation and abuse on public and, where proportionate, private channels. This may raise concerns that the dominance of large service providers is being reinforced, with smaller service providers lacking the deep pockets needed to meet the costs of such measures.

Conclusion

It remains to be seen how the tension between online safety on the one hand and freedom of expression and democracy on the other will play out. Service providers and Ofcom alike will no doubt have their hands full trying to decipher just how to moderate lawful but harmful online content whilst also ensuring that users’ freedom of expression and democratic debate are not adversely affected.
