The Social Media Censor-Ship Has Sailed

Criticism of social media’s role in disseminating extremist material is a persistent theme in the news at the moment. Companies like Facebook come under fire when their users post graphic video content or comments inciting racial or religious hatred. Sometimes it even emerges that the perpetrators of terrorist acts communicated their plans via social media, unbeknownst to the authorities – the Lee Rigby case being a prominent example.

When this happens, web communication providers are castigated for their failure or inability to share such information with the authorities before the crime was committed. Many politicians seize on these occasions to advocate greater monitoring of social media users, while liberal commentators and tech experts tend to argue the opposite: that such proposals are unethical or impracticable.

But arguing against state snooping does not make you a terrorist sympathizer. Indeed, even the most liberal individuals are unlikely to argue that extremists should be free to spread vile material online with impunity. Infringing the freedoms of the entire populace in the name of rooting out a few bad apples, however, is not a proportionate or realistic response – nor are demands that social media giants find a silver bullet to stop offensive material being uploaded. It’s a nice idea, but it’s just not going to happen.

As the Guardian reports, Google’s policy manager Verity Harding argued at a hearing of MEPs last week that the sheer volume of material being uploaded to YouTube makes stopping the initial publication of extremist material simply unrealistic. This is unlikely to appease those who argue for a tightly policed internet. Nonetheless, it’s an uncomfortable truth that the quantity of data passing through social media and content-sharing sites makes effective filtering of such material extremely difficult.

"Infringing the freedoms of the entire populace in the name of rooting out a few bad apples is not proportionate"

As Harding said, “We have 300 hours of content uploaded every minute, and to pre-screen those videos before they are uploaded would be like screening a phone call before it was made. It wouldn’t allow YouTube to be this flourishing platform, so instead what we do is rely on our billion-strong community to help us flag violations of our policies.”

To put that figure in perspective, 300 hours of video arriving every minute equates to 18,000 minutes of footage per minute of real time – pre-screening it all would require the equivalent of 18,000 reviewers watching around the clock just to keep pace.

There are several parallels here with cybersecurity, where the ambition has shifted from completely preventing attacks to ensuring best practice in detection and response. You can’t always stop the initial breach, but you can limit the damage it causes by responding effectively.

The security community also aims to share intelligence in an effort to mitigate risk and limit the damage that certain vulnerabilities or attack methods can cause. This is the model that major content platforms are adopting. YouTube strives to remove graphic and hate-inciting material once it is flagged. Facebook also has automatic detection processes, though it has courted controversy in the past for its reluctance to remove certain offensive content.

Trying to solve this problem with legislation is not the way forward. Indeed, at Google’s hearing with European officials, the EU’s own counter-terrorism co-ordinator Gilles de Kerchove said, “We can contemplate legislation but I suspect it would be an awfully monumental exercise.” This is from a parliament with a reputation for churning out complex legislation.

For the authorities to admit that this issue cannot be actively policed is significant: it represents a rhetorical shift away from blaming social media giants for failing to stop abusive content being uploaded.

Instead, more focus must now be placed on how the online community can help tech companies detect dangerous and hate-inciting material that violates their policies. That will require more effective education, in schools and businesses, about the risks of extremist material online.

Bombarding technology companies with legislation and penalties distracts from the underlying issues at stake and wastes time and energy that could be better spent elsewhere.
