Praise for Online Harms Plan, Action Needed on Fake News

Speaking at the Westminster eForum policy conference on identifying and tackling the key issues in the online space and assessing the industry’s response so far, Professor Victoria Nash, deputy director, associate professor and senior policy fellow at the Oxford Internet Institute, said she admired the Online Harms whitepaper but “was anxious about the breadth” of it and the lack of distinction between legal and illegal online harms.

She said she had nonetheless been very pleased to see a “clear distinction between the attention that will be given to the illegal harms and an approach in the context of legal but harmful which focuses more on procedure and governance and encouraging responsible behaviors by companies rather than focusing on specific pieces of content and having them removed.”

In particular, she argued there was room to establish a role for the regulator that can consider how to credit technology companies which act proactively, as well as take action on problematic issues.

Highlighting recent events, Nash said these illustrate the issues regulators and technology companies will face going forward. She pointed to hate speech, as reports continue of advertisers pulling their adverts from Facebook over what she called “a failure to deal with the rise in hateful content,” and she said the Oxford Internet Institute’s own research has observed a rise in hate speech since the COVID-19 pandemic began.

“At a time when we are asking companies to do more and to step up and reduce this content online, the nature of that content continues to advance and change, which poses challenges,” she said. “The other thing we need to bear in mind is that there is a tension between the need to remove content rapidly and the fact that perhaps we give companies less credit for doing so accurately.”

Discussing the challenges posed by disinformation, Nash said the importance of this has been “magnified over the past few months.” She said that, as an academic, she has monitored the spread of this issue, but “the spread of junk news may reach more individuals” than a genuine news story.

“While tackling it is a challenge and we understand its spread, we don’t understand its effects,” she stated. “So if companies are taking a proportionate and risk-based approach to removing content on their platforms, what does that look like in regard to disinformation? Does it mean removing it, does it mean de-ranking it, does it mean flagging it?”

She said there are no clear answers to those questions yet, but the whitepaper, the regulator and technology companies will need to address them.

“Whilst we’re closer to having a policy framework that is appropriate and likely to be effective in reducing our exposure to online harms, the nature of the challenge is not becoming any less complex,” she said, adding that, in particular, support for the technology companies will be necessary.

Responding to a question from Infosecurity about the need for human moderators to work alongside AI and machine learning to flag harmful content, Susie Hargreaves, chief executive of the Internet Watch Foundation, said it was important to have human moderation even while the technology improves, as there is no “magic bullet” yet. “We are at a stage where the technology is developing, but we cannot get away from the need for human moderation,” she said.
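
Hargreaves’s point about pairing automated detection with human judgement corresponds to a common human-in-the-loop pattern: a classifier scores each item, clear-cut cases are actioned automatically, and borderline cases are queued for a human moderator. The Python sketch below is purely illustrative; the score_content stub, the thresholds and the queue are assumptions made for this example and do not describe any speaker’s or organization’s actual system.

from dataclasses import dataclass, field
from typing import List

# Illustrative thresholds; these numbers are assumptions for the sketch,
# not values used by any real moderation system mentioned in this article.
AUTO_ACTION_THRESHOLD = 0.9   # at or above this, content is actioned automatically
HUMAN_REVIEW_THRESHOLD = 0.5  # between the thresholds, a human moderator decides


def score_content(text: str) -> float:
    """Stand-in for a machine learning classifier returning a probability
    that the text is harmful. A real deployment would call a trained model;
    this stub just counts a few keywords so the example runs end to end."""
    suspicious = ("hateful", "scam", "fake cure")
    hits = sum(word in text.lower() for word in suspicious)
    return min(1.0, 0.5 * hits)


@dataclass
class ModerationQueue:
    """Holds items the classifier is not confident enough to action alone."""
    pending: List[str] = field(default_factory=list)

    def add(self, text: str) -> None:
        self.pending.append(text)


def triage(text: str, queue: ModerationQueue) -> str:
    """Route content: automatic action, human review, or no action."""
    score = score_content(text)
    if score >= AUTO_ACTION_THRESHOLD:
        return "removed automatically"
    if score >= HUMAN_REVIEW_THRESHOLD:
        queue.add(text)  # a human moderator makes the final call
        return "sent to human review"
    return "no action"


if __name__ == "__main__":
    queue = ModerationQueue()
    samples = [
        "Local charity raises funds for food banks",
        "This hateful scam promises a fake cure",
        "Hateful comments reported on a community page",
    ]
    for item in samples:
        print(f"{item!r}: {triage(item, queue)}")
    print(f"Items awaiting human review: {len(queue.pending)}")

The gap between the two thresholds is where the tension Nash described plays out: acting automatically is fast, while routing borderline items to people trades speed for accuracy.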

Ben Bradley, head of digital regulation at techUK, said there are technical solutions for disinformation that allow you to see, detect and disrupt actions, but the larger challenge is how misinformation develops over time. “While you can build the tools, it does emphasize the need for greater thinking around this,” he said.
