Facebook Fights Terrorism with AI; Is It Censorship?

The rise of the machines continues: Facebook recently said that it will use artificial intelligence (AI) to find and remove terrorist content across its many millions of posts. The news comes as the US Supreme Court hands down a decision confirming that there is no “hate speech exception” to the First Amendment, setting the stage for some careful line-drawing by the social network.

“We want to find terrorist content immediately, before people in our community have seen it,” the company said in a blog post. “Although our use of AI against terrorism is fairly recent, it’s already changing the ways we keep potential terrorist propaganda and accounts off Facebook. We are currently focusing our most cutting-edge techniques to combat terrorist content about ISIS, Al Qaeda and their affiliates, and we expect to expand to other terrorist organizations in due course.”

This is in many ways a welcome policy, given the recent spate of vigilante-style terrorist attacks in London, but done incorrectly it could amount to a form of censorship. Obviously, behavior that incites hatred and violence against others is unacceptable, but under the First Amendment in the United States, much of the terrorist content that Facebook identifies could be considered protected speech, no matter how unsavory it is.

Claire Stead, online safety expert at Smoothwall, said she welcomed the use of smart filtering and monitoring tools to identify inappropriate content, but noted that “there is a fine line between freedom of speech and potentially harmful behavior, and so this needs to be assessed carefully.”

The US Supreme Court today ruled in Matal v. Tam that the government could not refuse to register a band’s name, The Slants, as a trademark. The US Patent and Trademark Office had previously declined the registration on the grounds that the name might be seen as demeaning to Asian-Americans.

“[The idea that the government may restrict] speech expressing ideas that offend … strikes at the heart of the First Amendment,” reads the opinion by Justice Samuel Alito (for four justices). “Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.’”

Justice Anthony Kennedy added in a separate opinion: “A law found to discriminate based on viewpoint is an egregious form of content discrimination, which is presumptively unconstitutional… A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all. The First Amendment does not entrust that power to the government’s benevolence. Instead, our reliance must be on the substantial safeguards of free and open discussion in a democratic society.”

Yet, incitement to violence is the line for First Amendment protection. It goes back to a watershed Supreme Court case in 1969, Brandenburg v. Ohio. Clarence Brandenburg, a Ku Klux Klan (KKK) leader in rural Ohio, led a typical Klan rally that was covered by the local news: Men in robes and hoods, some carrying firearms, first burning a cross and then making speeches. One of the speeches made reference to the possibility of "revengeance" against "ni****s", Jews and those who supported them. Brandenburg was charged with advocating violence under Ohio's criminal syndicalism statute. After the case made it to the Supreme Court, the justices reversed the conviction, and held that government cannot punish inflammatory speech unless that speech is "directed to inciting or producing imminent lawless action and is likely to incite or produce such action."

That line can be fairly straightforward: Consider the stomach-turning case of Michelle Carter, a teenager in Massachusetts who sent dozens and dozens of texts to her boyfriend encouraging him to kill himself, which he ultimately did. The coverage of the case is not for the faint of heart; she pressured him into doing it in no uncertain terms. This budding sociopath was arrested, tried and convicted of involuntary manslaughter. She faces up to 20 years in prison, but when the verdict was handed down last week, her defense promised to appeal on the grounds that you can’t kill someone with words. Given the current stance of the court in light of Matal, it will be interesting to see how this plays out.

Back to Facebook: it said in its blog announcement that when it receives reports of potential terrorism posts, it reviews them with urgency and scrutiny. In the “rare cases” when it uncovers evidence of imminent harm, the company said, “we promptly inform authorities.”

So is all of the other speech protected, even if it’s ISIS propaganda? Watch this space. The stakes are high, according to Stead: “If censorship on mainstream social sites emerges it is highly likely that extremists will move to different platforms or the dark web. I doubt it will ever be eradicated completely. There is still a lot more work that network providers, social media companies, need to do but Facebook’s measure is a step in the right direction in protecting the public from harm.”

On the technical front, Facebook noted: “We’ve been cautious, in part because we don’t want to suggest there is any easy technical fix. It is an enormous challenge to keep people safe on a platform used by nearly 2 billion every month, posting and commenting in more than 80 languages in every corner of the globe. And there is much more for us to do.”

And overall, the direction is promising, said Homer Strong, director of data science at Cylance, via email. “A major issue with using humans to provide ground truth for AI is that humans are not perfect either. There needs to be processes for evaluating human judgement in parallel to machine judgement. Otherwise the AI can end up learning the subjectivities of individual reviewers, distracting the AI from learning properly.”
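Strong’s point about checking the checkers can be made concrete. The snippet below is a minimal sketch, not anything Facebook has described: the reviewer labels, the toy data and the use of Cohen’s kappa are all assumptions chosen for illustration. It compares agreement between two human reviewers with agreement between each reviewer and the model, so a model that has merely learned one reviewer’s quirks stands out.

```python
# Minimal sketch (not Facebook's actual pipeline): compare pairwise agreement
# between two human reviewers and a model using Cohen's kappa, so that a model
# that simply mirrors one reviewer's idiosyncrasies can be spotted.

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / (n * n)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

# Hypothetical labels for ten posts: 1 = "terrorist content", 0 = benign.
reviewer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
reviewer_2 = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
model      = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # agrees suspiciously well with reviewer_1

print("human vs human :", round(cohens_kappa(reviewer_1, reviewer_2), 2))
print("model vs rev. 1:", round(cohens_kappa(model, reviewer_1), 2))
print("model vs rev. 2:", round(cohens_kappa(model, reviewer_2), 2))
# If model/reviewer_1 agreement far exceeds the human/human baseline, the model
# may have learned that reviewer's subjectivity rather than the policy itself.
```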

Plus, not to rain on anyone’s parade, but even sufficiently sophisticated AI can be bypassed: adversarial learning techniques can manipulate both a model’s confidence and its decisions.

“A terrorist who is blocked by Facebook is more likely to switch to some other platform rather than bypass the AI, but Facebook can never completely remove terrorist content,” Strong noted. 
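To see how low the bar for evasion can be, consider this deliberately crude sketch. It is an illustration only: the blocklist words and the character substitutions are invented, and real adversarial attacks perturb inputs against statistical models rather than keyword lists. The spirit is the same, though: small changes that a human barely notices can flip the system’s decision.

```python
# Toy illustration only (not Facebook's system): a naive keyword filter and a
# trivial character-substitution "attack" that slips past it.

BLOCKLIST = {"propaganda", "recruitment"}   # invented terms for the sketch

def naive_filter(text: str) -> bool:
    """Return True if the post should be blocked."""
    return any(word in text.lower() for word in BLOCKLIST)

def evade(text: str) -> str:
    """Swap a few Latin letters for visually similar Cyrillic ones."""
    homoglyphs = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}
    return "".join(homoglyphs.get(ch, ch) for ch in text)

post = "join our recruitment drive"
print(naive_filter(post))         # True  -> blocked
print(naive_filter(evade(post)))  # False -> the same message slips through
```

The takeaway matches Strong’s caveat: filters can raise the cost of posting such content, but they cannot guarantee its removal.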
