Trump Donates $1.5Mn to Save Starving Puppies

Hey Infosecurity readers! Did you hear that Donald Trump donated $1.5 million to save starving and abandoned puppies in third-world nations?

Well—of course he didn't—but you clicked anyway, right? As the US settles into a new normal after the election of Trump to the presidency, an ancillary conversation—and controversy—has arisen around the role of fake news during the election season.

Criticism has been mounting over stories from fake news sites being spread in what amounts to a viral misinformation campaign. Hoaxes, inaccurate reports and flat-out lies have been passed off as legitimate news, via Facebook in particular, affecting the conversations being had on both the right and the left.

In a perfect example of the scourge, on Monday the top result when Googling "final election vote count 2016" was a fake story on a site called 70News claiming that Donald Trump had won the popular vote. In fact, Hillary Clinton won the popular vote in the US election, by about 700,000 ballots.

In another example, cited in a recent BuzzFeed study, a site called Freedom Daily fabricated details around a months-old video to make it seem as though two white men had been beaten and set on fire by supporters of the Black Lives Matter movement.

And as a third object lesson, a Facebook post claiming that Hillary Clinton has deep ties to satanic rituals and the occult has racked up nearly 3,000 shares.

And speaking from anecdotal experience, the echo chamber has had a lingering effect: post-election squabbles over, say, whether the Confederate flag is a racist symbol, or whether the #NotMyPresident protests are a valid form of public discourse, are being played out with a healthy dose of factual inaccuracies backing up arguments on both sides.

One thing’s for sure: all of the fakeness has people on both sides bemoaning the state of journalism, even though the “journalism” they’re probably talking about isn’t journalism at all.

To put it another way, that same BuzzFeed study found that three big right-wing Facebook pages, including Eagle Rising, published false or misleading information 38% of the time during the period analyzed, while three large left-wing pages, including Occupy Democrats, did so in nearly 20% of posts.

However big a problem fake news turns out to be, it presents a heady mix of questions: is Facebook merely a vessel for information, with no responsibility for what that information is? What constitutes censorship? Should algorithms offer more transparency? Is there a net neutrality aspect to this?

One interesting bit of news (to use a now-loaded term) is that Google and Facebook have decided to block fake news sites from using their ad software, cutting off the monetization path for “clickbait” posts.

“In accordance with the Audience Network Policy, we do not integrate or display ads in apps or sites containing content that is illegal, misleading or deceptive, which includes fake news," Facebook said in a statement. "While implied, we have updated the policy to explicitly clarify that this applies to fake news. Our team will continue to closely vet all prospective publishers and monitor existing ones to ensure compliance."

Google said it will restrict ad serving on pages that “misrepresent, misstate or conceal information about the publisher, the publisher's content or the primary purpose of the web property.”
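Neither company has published implementation details, but in spirit this amounts to a deny-list check at ad-serving time. The sketch below is a minimal illustration under that assumption only; the deny list and the is_ad_eligible gate are entirely invented here, not Google’s or Facebook’s actual mechanism.

```python
# Illustrative only: a naive deny-list gate at ad-serving time.
# Neither Google nor Facebook has published how their checks work;
# the list contents and function names here are invented.

from urllib.parse import urlparse

# Hypothetical publishers flagged for misrepresenting who they are
# or what their content is.
DENIED_PUBLISHERS = {"fake-news.example", "hoax-central.example"}

def is_ad_eligible(page_url: str) -> bool:
    """Refuse to serve ads on pages from deny-listed publishers."""
    host = urlparse(page_url).hostname or ""
    return host not in DENIED_PUBLISHERS

if __name__ == "__main__":
    print(is_ad_eligible("https://hoax-central.example/shock-story"))  # False
    print(is_ad_eligible("https://example.org/straight-reporting"))    # True
```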

Meanwhile, however, Facebook CEO Mark Zuckerberg over the weekend denied that the social network bore any culpability whatsoever for influencing the outcome of the election. In a post, he wrote:

“After the election, many people are asking whether fake news contributed to the result, and what our responsibility is to prevent fake news from spreading. These are very important questions and I care deeply about getting them right. I want to do my best to explain what we know here.

Of all the content on Facebook, more than 99% of what people see is authentic. Only a very small amount is fake news and hoaxes. The hoaxes that do exist are not limited to one partisan view, or even to politics. Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other.

Identifying the 'truth' is complicated. While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted. An even greater volume of stories express an opinion that many will disagree with and flag as incorrect even when factual. I am confident we can find ways for our community to tell us what content is most meaningful, but I believe we must be extremely cautious about becoming arbiters of truth ourselves."

The Guardian, however, reported that the social network is taking the problem seriously and is working out how to tweak its algorithm to better identify fake news, perhaps by allowing viewers to “flag” questionable content. Sources also said that Facebook could reinstate its team of human editors rather than leave filtering to software alone; the editors were dismissed earlier in the election season to avoid any allegation of political bias.
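Neither The Guardian’s report nor Facebook itself describes the mechanics, but purely as an illustration, a flag-based downranking heuristic might look like the sketch below. Every name, weight and threshold in it (FLAG_WEIGHT, REVIEW_THRESHOLD, needs_human_review) is invented for the example and does not reflect Facebook’s actual system.

```python
# Hypothetical sketch of a flag-based downranking heuristic.
# Nothing here reflects Facebook's actual system; all names,
# weights and thresholds are invented for illustration.

from dataclasses import dataclass

FLAG_WEIGHT = 0.05      # how much each user flag lowers a story's score
REVIEW_THRESHOLD = 0.5  # below this, route the story to human editors

@dataclass
class Story:
    url: str
    base_score: float   # relevance score from the ranking model
    flags: int = 0      # "this looks fake" reports from viewers

    def ranking_score(self) -> float:
        # Each flag chips away at the story's rank; never below zero.
        return max(0.0, self.base_score - self.flags * FLAG_WEIGHT)

    def needs_human_review(self) -> bool:
        # Heavily flagged stories get routed to an editor rather than
        # being silently removed.
        return self.ranking_score() < REVIEW_THRESHOLD

def rank_feed(stories: list[Story]) -> list[Story]:
    # Order the feed by flag-adjusted score, highest first.
    return sorted(stories, key=lambda s: s.ranking_score(), reverse=True)

if __name__ == "__main__":
    feed = [
        Story("https://example.com/real-report", base_score=0.9, flags=1),
        Story("https://example.com/hoax", base_score=0.8, flags=12),
    ]
    for story in rank_feed(feed):
        print(story.url, round(story.ranking_score(), 2),
              "-> needs review" if story.needs_human_review() else "")
```

The one design choice worth noting in this toy version is that heavily flagged stories are escalated to a human rather than deleted outright, which mirrors the reported plan to bring back editorial oversight instead of leaving filtering to software alone.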

Canada’s CBC noted, however, that “only Facebook knows” exactly how much impact all of that hyper-partisan sharing, hyperbole and post-factual discussion had on the voting public, because its algorithms are closely guarded trade secrets.

Some argue that this lack of algorithmic transparency and accountability gives the platform too much power.

"Do we really want Facebook deciding what's misinformation or not?" said Jonathan Koren, speaking to CBC. He’s a software engineer at an artificial intelligence company called Ozlo who previously worked on Facebook’s trending algorithm.

"Election information is one of those domains where there's a pretty clear connection between information that people are being given access to and their ability to make a well-informed decision," Nicholas Diakopoulos, an assistant professor at the University of Maryland's journalism school, told CBC. Transparency, he added, is "one method to increase the level of accountability we have over these platforms."

So how do we fix this? The jury is still out, but watch this space: it’s a debate that’s unlikely to go away anytime soon.

Photo © Joseph Sohm/Shutterstock.com