Romantic AI Chatbots Fail the Security and Privacy Test

Experts have warned users of AI-powered “relationship” chatbots that their data and privacy are at risk, after all 11 apps they tested failed basic security and privacy checks.

Non-profit Mozilla chose Valentine’s Day to release new research into the chatbots as part of its long-running *Privacy Not Included series of reports.

Since generative AI (GenAI) burst onto the scene, there has been an explosion in romantic or relationship chatbots marketed as providing companionship to lonely hearts.

However, in reality, they either deliberately or negligently ignore privacy and security best practices, Mozilla argued.

“To be perfectly blunt, AI girlfriends are not your friends,” said Mozilla researcher Misha Rykov.

“Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you.”

All 11 chatbots assessed in these tests – including titles such as Romantic AI, Talkie Soulful AI and EVA AI Chat Bot & Soulmate – were slapped with a *Privacy Not Included warning label. That singles the category out as one of the worst the non-profit has ever reviewed for privacy.

Read more on Privacy Not Included: Mental Health and Prayer Apps Fail the Privacy Test

The chatbots have collectively been downloaded over 100 million times from Google Play, but most come with a string of privacy and security issues including:

  • No public information on how they manage security vulnerabilities (73%)
  • No clear information about encryption and whether they use it (64%)
  • Permission to use weak passwords, including “11111” (45%)
  • Selling user data, sharing it for targeted advertising, or not providing enough information in their privacy policy to confirm they don’t (90%)
  • Forbidding deletion of personal data (54%)
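One of the findings above is that 45% of the apps permit weak passwords such as “11111”. As an illustration only, the sketch below shows the kind of minimal password-strength validation these apps omit; the rules (minimum length, repeated characters, character mix) are hypothetical examples, not any vendor’s actual policy.

```python
import re

MIN_LENGTH = 8  # hypothetical minimum; real policies vary

def is_weak_password(password: str) -> bool:
    """Return True if the password fails basic strength rules."""
    if len(password) < MIN_LENGTH:
        return True
    # Reject passwords built from a single repeated character, e.g. "11111111"
    if len(set(password)) == 1:
        return True
    # Require at least one letter and at least one digit
    if not re.search(r"[A-Za-z]", password) or not re.search(r"\d", password):
        return True
    return False

print(is_weak_password("11111"))          # the password Mozilla cites: rejected
print(is_weak_password("correcthorse1"))  # passes these basic rules
```

Even checks this simple would block the “11111” example cited in the report; that several popular apps accept it suggests no server-side password policy is enforced at all.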

“Today we’re in the Wild West of AI relationship chatbots. Their growth is exploding and the amount of personal information they need to pull from you to build romances, friendships, and sexy interactions is enormous. And yet, we have little insight into how these AI relationship models work,” warned *Privacy Not Included director, Jen Caltrider.

“One of the scariest things about the AI relationship chatbots is the potential for manipulation of their users. What is to stop bad actors from creating chatbots designed to get to know their soulmates and then using that relationship to manipulate those people to do terrible things, embrace frightening ideologies, or harm themselves or others? This is why we desperately need more transparency and user control in these AI apps.”
