Identifying and Defending Against Harmful Content

In recent times, the perceived apathy of social media executives when it comes to harmful and inappropriate content on their platforms has been well-publicized and heavily criticized.

In response, the UK Government has proposed a ‘duty of care’ regime. Nicky Morgan, the culture secretary, says the aim is to “concentrate the minds” of social media executives and force them to address the issue of potentially damaging images being shared on their platforms. Under the proposals, tech bosses would face the same sanctions as senior bankers and other finance executives, who can be fined for data breaches.

However, the responsibility for protecting individuals from harmful or offensive images does not just fall on social media giants. Organizations across all industries face increasing pressure to protect their employees, customers or people in their care from dangerous content.

This is particularly true because, under the Obscene Publications Act, they can be held vicariously liable for the actions of their employees if they fail to demonstrate that they have taken all reasonable steps to protect against a hostile working environment.

The rise of cloud transformation and of messaging platforms such as WhatsApp and Slack in the workplace has added fuel to the fire, increasing the number of ways that pictures and videos can be transferred and shared. This makes ‘Not Safe For Work’ content a more prevalent and complex issue than ever before.

Apathy over harmful image content is a risky business 
According to a recent report, 57 percent of employees store photos, and 29 percent store videos, in cloud applications associated with their enterprise. That is before you consider the 54 percent of stored data that is ‘dark data’: unclassified, invisible to administrators, and potentially containing pornography or other offensive imagery. Whether there is malicious intent or not, there could be a plethora of explosive material simmering below the surface of an organization.

Traditionally, businesses have tried to manage the issue of harmful content with an Acceptable Use Policy. Unfortunately, the truth is that simply telling your employees what they are permitted to send and upload doesn’t prove effective as a method of protection.

This is demonstrated by our 2018 survey, which found that one in ten employees admit to visiting adult websites on a company device or on the company network. The onus is now on security teams to accept and account for the ever-present factor of human fallibility and be more proactive about protecting their business, or face the wrath of the regulators.

The problem in doing so lies in the fact that harmful image content is what is known as “unstructured data”. Unlike offensive words, which can be easily read, identified and blocked, offensive videos and images are much more complex to identify and block. Fortunately, although advances in technology have accelerated the proliferation of offensive content, they now also provide the solution to help businesses combat it.
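To illustrate the difference, a keyword blocklist is enough to catch offensive text, but an image is just an array of pixels with nothing to match against, so a trained model has to score it instead. The sketch below is purely illustrative: the blocklist terms and the classify_image() helper are placeholders, not any particular product’s API.

```python
# A minimal sketch of why text filtering is simple and image filtering is not.
import re

# Structured data: offensive words can be matched directly against a blocklist.
BLOCKLIST = re.compile(r"\b(term1|term2|term3)\b", re.IGNORECASE)

def is_text_allowed(message: str) -> bool:
    return BLOCKLIST.search(message) is None

def classify_image(path: str) -> float:
    # Placeholder: a real ICA engine would run a trained model here
    # (see the classifier sketch later in the article).
    raise NotImplementedError("requires a trained image classifier")

def is_image_allowed(path: str) -> bool:
    # Unstructured data: raw pixels contain no keywords to search, so the
    # decision rests on a model's confidence score and a chosen threshold.
    return classify_image(path) < 0.5
```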

AI vs offensive material 
Image Content Analysis (ICA) does exactly what it says on the tin, analyzing images and videos to classify them as either ‘good’ or ‘bad’. Although this technology has been around for years, it is only the recent advances in AI that have greatly improved the ability of ICA to accurately identify NSFW or harmful images and discount false positives.

Historically, this technology was accurate only around half the time, but the addition of supervised machine learning techniques means that accuracy rates are now over 95 percent.
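As a rough illustration of how a supervised approach can work, the sketch below scores an image with a convolutional network whose final layer has been retrained on labelled ‘safe’ and ‘not safe’ examples. The ResNet-18 backbone, the two-class head and the ica_weights.pt file are assumptions made for illustration, not a description of any vendor’s actual model.

```python
# A minimal sketch of ICA-style scoring with a fine-tuned CNN (assumptions:
# a ResNet-18 backbone and hypothetical fine-tuned weights in ica_weights.pt).
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # 'safe' vs 'not safe'
model.load_state_dict(torch.load("ica_weights.pt"))   # hypothetical weights file
model.eval()

def score_image(path: str) -> float:
    """Return the model's probability that an image is not safe for work."""
    batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    return probs[0, 1].item()
```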

Previously, the technology analyzed the pixel content of individual images and was prone to a high number of false positives. ICA providers opted for a more cautious approach, which meant images that could be mistaken for harmful content were also blocked. While it did have an application in some especially sensitive industries, the overzealousness of previous iterations of ICA made it unworkable in most businesses.

However, this is one area of security where advances in machine learning techniques - in this case supervised machine learning - have had a notable and proven impact. ICA now identifies harmful content far more accurately than ever before.
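One practical consequence is that the blocking threshold can be tuned against a labelled hold-out set rather than being set defensively low. The sketch below, which assumes scores produced by a classifier like the one above, picks the most permissive threshold that still meets a target precision; the 0.95 target echoes the accuracy figure quoted earlier but is an illustrative choice, not a benchmark.

```python
# A sketch of threshold selection on a labelled hold-out set, assuming
# 'scores' are model confidences and 'labels' are 1 for harmful images.
from sklearn.metrics import precision_recall_curve

def pick_threshold(labels, scores, target_precision=0.95):
    precision, recall, thresholds = precision_recall_curve(labels, scores)
    # precision/recall have one more entry than thresholds; scan candidate
    # thresholds in ascending order and keep the first (most permissive)
    # one that reaches the target precision.
    for p, t in zip(precision[:-1], thresholds):
        if p >= target_precision:
            return float(t)
    return float(thresholds[-1])  # fall back to the strictest threshold
```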

As a result, ICA now has the potential to provide value across an organization and to protect employees from inappropriate visual content that may be delivered over various channels – including email, web and cloud applications. Businesses now have the power to enforce their Acceptable Use Policies and, beyond this, to identify any employees that may be misusing their computers.

In the past, organizations have been at the mercy of their employees’ actions, but this technology offers visibility and control, as well as an automatic audit trail to demonstrate how and why they have acted.

The improvement of ICA through advances in machine learning could not come at a better time, as we witness a cultural backlash against the proliferation of harmful content. From the scrutiny of the social media giants to the #MeToo movement, there is a clear message from the public that obscenity is not to be tolerated, especially not in the workplace.

With the Obscene Publications Act remaining a constant source of anxiety for organizations, the use of ICA demonstrates that they are taking the online welfare of employees seriously and that all reasonable steps have been taken to protect people from a hostile working environment.
