Regulator Halts AI Chatbot Over GDPR Concerns

The Italian privacy regulator has ordered a popular AI chatbot to stop processing the data of Italian citizens after finding it breached GDPR rules.

Replika is marketed by San Francisco-based developer Luka as “the AI companion who cares” – a virtual “friend” for its users.

However, the Italian data protection regulator, the GPDP, said late last week that the app does not comply with the law’s transparency requirements and that it processes the personal data of children unlawfully.

Specifically, there is no age verification mechanism to prevent children from signing up, and the AI bot’s “replies” to users have been flagged as unsuitable for younger users. The GPDP said some app store reviews had noted sexually inappropriate content generated by the bot.

“The ‘virtual friend’ is said to be capable of improving users’ emotional well-being and helping users understand their thoughts and calm anxiety through stress management, socialization and the search for love,” the regulator said.

“These features entail interactions with a person’s mood and can bring about increased risks to individuals who have not yet grown up or else are emotionally vulnerable.”

Luka has been ordered to stop processing Italians’ data within 20 days or risk a fine of up to €20m or 4% of annual global turnover.

The app’s links to Russia are also attracting scrutiny, according to Jonathan Armstrong of legal advisory service Cordery. Luka was founded by two Russians, Eugenia Kuyda and Philip Dudchuk.

“Whilst this does not seem to have featured in the Italian investigation, there have been concerns that Luka has also used Replika to broadcast Russian propaganda messages,” Armstrong wrote.

“Researchers have included screenshots of chats where Replika seems to say that it collects information for the Russian authorities. Currently Replika seems to have around 10 million users, bringing Luka an estimated $1m per month in download upgrade fees.”

The GPDP ruling may lead to similar scrutiny of popular AI bots like ChatGPT, he added.

“There have been concerns about the transparency of the ChatGPT application including allegations that some of the information it has provided has been inaccurate, for example in connection with the Elon Musk acquisition of Twitter,” Armstrong concluded.
