OpenAI Announces Plans to Combat Misinformation Amid 2024 Elections

OpenAI, the developer of the AI chatbot ChatGPT and the image generator DALL-E, has announced new measures to prevent abuse and misinformation ahead of big elections this year.

In a January 15 post, the firm announced that it was collaborating with the National Association of Secretaries of State (NASS), the oldest non-partisan professional organization for public officials in the US, to prevent the use of ChatGPT for misinformation ahead of the US Presidential Election in November.

For instance, when asked questions about the election, such as where to vote, OpenAI’s chatbot will direct users to CanIVote.org, the authoritative website on US voting information.

“Lessons from this work will inform our approach in other countries and regions,” the firm added.

Fighting Deepfakes with Cryptographic Watermarking

To prevent deepfakes, OpenAI also said it will implement the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials for images generated by DALL-E 3, the latest version of its AI-powered image generator.

C2PA is a project of the Joint Development Foundation, a Washington-based non-profit that aims to tackle misinformation and manipulation in the digital age by implementing cryptographic content provenance standards.
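In practice, C2PA credentials are embedded in the image file itself: in JPEGs, the signed provenance manifest travels in APP11 marker segments as a JUMBF (JPEG Universal Metadata Box Format) box. As a rough illustration of the concept, the sketch below scans a JPEG's marker segments for such a box. It is a minimal assumption-laden sketch, not the official C2PA tooling; detecting the box only shows metadata is present, while real verification means cryptographically validating the manifest with a C2PA SDK.

```python
def has_c2pa_metadata(jpeg_bytes: bytes) -> bool:
    """Scan a JPEG's marker segments for an APP11 (0xFFEB) segment
    carrying a JUMBF box, where C2PA manifests are stored.
    Sketch only: presence of the box is not proof of a valid,
    signed manifest."""
    if jpeg_bytes[:2] != b"\xff\xd8":      # SOI marker: not a JPEG at all
        return False
    pos = 2
    while pos + 4 <= len(jpeg_bytes):
        if jpeg_bytes[pos] != 0xFF:        # lost sync with the marker stream
            break
        marker = jpeg_bytes[pos + 1]
        if marker in (0xD9, 0xDA):         # EOI or start-of-scan: stop parsing
            break
        # Segment length is big-endian and includes the two length bytes
        length = int.from_bytes(jpeg_bytes[pos + 2:pos + 4], "big")
        payload = jpeg_bytes[pos + 4:pos + 2 + length]
        if marker == 0xEB and b"jumb" in payload:  # APP11 + JUMBF box type
            return True
        pos += 2 + length
    return False
```

A real workflow would instead hand the file to the coalition's reference tooling, which checks the manifest's signature chain rather than merely spotting the container.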

Its main initiatives are the Content Authenticity Initiative (CAI) and Project Origin.

Several major companies, including Adobe, X and The New York Times – which has recently sued OpenAI and Microsoft for copyright infringement – are members of the coalition and actively support the development of the standard.

Finally, OpenAI said it was experimenting with a provenance classifier, a new tool for detecting images generated by DALL-E.

“Our internal testing has shown promising early results, even where images have been subject to common types of modifications. We plan to soon make it available to our first group of testers – including journalists, platforms, and researchers – for feedback.”

Google DeepMind has developed a similar tool, SynthID, for digitally watermarking AI-generated images and audio. Meta is also experimenting with a watermarking tool for its image generator, although Mark Zuckerberg’s company has shared little information about it.

"Prior to releasing new systems, we red team them, engage users and external partners for feedback, and build safety mitigations to reduce the potential for harm," OpenAI added in its post.


A Move in the Right Direction

Speaking to Infosecurity, Alon Yamin, co-founder and CEO of AI-based text analysis platform Copyleaks, welcomed OpenAI’s commitment to fighting misinformation but warned it could be challenging to implement.

“Going into this election year, considered one of the biggest in recent history, and not just in America but worldwide, there is a lot of concern about how AI will be misused for political campaigns, etc., and that concern is fully justified. So, to see OpenAI taking initial steps to remove potential AI abuse is encouraging. But as we’ve witnessed with social media over the years, these actions can be difficult to implement due to the vast size of a user base,” he said.

In the UK, where the next general election should be held between mid-2024 and January 2025, the Information Commissioner’s Office (ICO) launched a consultation series on generative AI on January 15.

The first chapter of the consultation is open for responses until March 1.
