Overcoming the AI Privacy Predicament

Many of us are familiar with the privacy paradox. Put simply, although consumers say they care about privacy, they willingly consent to sharing their personal data on a regular basis, either unaware of or unconcerned with the privacy risks of doing so.

It is a popular argument, but one that has been dissected at length. One corollary is that it blames individuals for the privacy problems they experience and absolves of blame anyone who mishandles their data. Yet it isn't paradoxical to care about something and be unable to protect it. That describes an unfortunate circumstance, a predicament, and it is one that the popularization of artificial intelligence only deepens.

In my recent International Association of Privacy Professionals (IAPP) report on Consumer Perspectives of Privacy and Artificial Intelligence, I set out to understand what consumers think about AI's privacy risks. In other words, are individuals hesitant to adopt AI because of its potential privacy invasiveness? If so, is this true across all domains (e.g., e-health, fintech, self-driving vehicles)? And what, if anything, can be done to assuage consumers' fears?

While many may fear the worst, a sizeable segment of consumers is still uncertain or undecided about how their privacy will be affected by the advent of AI. According to one study by Brookings, 57% of consumers felt that AI would have a net negative impact on privacy, while 34% were unsure how it would affect their privacy.

Indeed, AI evokes a mixed set of thoughts and emotions in consumers. For most people, the promise of AI is clear: increasing efficiency, automating mundane tasks to free up time for creative work, and improving outcomes in areas such as healthcare and education.

Yet many are apprehensive about the privacy-related risks these technologies entail, such as automated decision-making based on the massive amounts of personal data they collect.

Growing Consumer Distrust in Businesses’ Data Practices

It's also important to remember that AI is growing in popularity at a moment when trust in data collection and processing is at historical lows. As revealed by the IAPP Privacy and Consumer Trust Report 2023, 68% of consumers globally are either somewhat or very concerned about their privacy online. 

Over the past several decades, consumers’ trust in companies and governments to respect their privacy and use their data responsibly has waned. And consumer distrust only grows when AI is incorporated into a company’s business practice. A Pew Research survey conducted in May 2023 found that among Americans who had heard of AI, 70% said they had very little or no trust at all in companies to use AI responsibly.

In addition, consumers often feel they lack the ability to exercise much control over their privacy, and in many cases they genuinely do. In Europe, cookie fatigue is a real phenomenon being taken seriously by the European Commission and the European Data Protection Board.

In the US, privacy policy fatigue is no less demotivating. According to one recent study, it would take 47 hours per month – basically, an entire workweek – to read the privacy policies of all the websites that one typically visits in a month.

In the realm of AI, the lack of trust is significant. Indeed, 81% of consumers think the information collected by AI companies will be used in ways people are uncomfortable with, as well as in ways that were not originally intended.

Consumers are thus put in a seemingly impossible predicament regarding their privacy, with little choice but to either consent or forgo use of the product or service altogether. Both options leave consumers wanting more from the digital economy.

When a new technology has negative implications for privacy, consumers have shown they are willing to engage in privacy-protective behaviors, such as deleting an app, withholding personal information, or abandoning an online purchase altogether.

Thus, to promote uptake of their new technologies, producers of AI-driven products and services must genuinely help consumers understand and make effective choices regarding their privacy.

To realize the full potential of AI, organizations must prioritize building trust and addressing consumers’ privacy concerns by design, rather than as an afterthought. It’s the only way out of the privacy predicament.
