The field of privacy has become a critical business issue, requiring substantial attention and investment. This trend has been driven by surging digital data collection and a plethora of data protection legislation designed to govern how vast quantities of sensitive information are managed and secured.
The growing use of generative AI in businesses has added another dimension to managing personal and other sensitive data.
As a result of these trends, the role of privacy professionals has rapidly evolved in terms of scope and importance.
Infosecurity Magazine spoke to Box’s global head of public policy and chief privacy officer (CPO) Leah Perry about the changing role of privacy professionals in this landscape and overcoming the significant privacy challenges posed by rising AI usage.

Infosecurity Magazine: How has the role of chief privacy officer evolved over recent years?
Leah Perry: The role of the chief privacy officer (CPO) has grown significantly with the rapid evolution of global privacy and data protection laws. As a result, it’s evolved from a compliance-focused function to a dynamic leadership position integral to key operations and business functions.
Across product development, risk, strategy, governance, sales and marketing, among other business functions, privacy now has a pivotal seat at the table. Today’s CPOs must not only ensure adherence to evolving global privacy laws, but also actively collaborate with product teams to embed privacy-by-design into the lifecycle of each product, support cybersecurity, manage privacy incidents, and more.
The rise of AI adds further complexity to today’s growing regulatory landscape.
CPOs now lead teams spanning legal, compliance, operations and incident response. They work to address privacy matters across the business – marketing, use of chatbots, cookies, contractual agreements, data processing agreements, data transfers, compliance and certifications, as well as support M&A and third-party due diligence.
This requires an elevated and expanded role for the CPO compared to the past. At Box, this is reflected in our reporting structure, as I report directly to our chief legal officer and corporate secretary.
As both global head of public policy and chief privacy officer, I have a front-row seat to global public policy, privacy legal and privacy compliance developments. This enables Box to anticipate regulatory changes, ensuring preparedness for laws impacting data privacy, cybersecurity, AI and related matters.
By staying ahead, we can assess the potential impact to our business, our customers and third-party vendors. We can engage with regulators more proactively and maintain readiness as the regulatory landscape continues to evolve.
IM: What have been the key strategies for ensuring compliance with data privacy legislation across the different regions that Box operates in?
LP: Box’s key strategy for maintaining compliance with data privacy laws across the globe lies in establishing a comprehensive, scalable framework. Our approach includes robust security measures, transparent data governance, AI governance and proactive risk management to safeguard customer data.
We empower users with clear privacy controls and tools to manage their information while maintaining strict oversight through audits, employee training and vendor assessments.
For instance, through this framework we’ve created and implemented an AI governance program in partnership with key stakeholders, developing and updating policies and tailored approaches to address jurisdictional nuances.
By leveraging standards like the US National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, we’ve applied a proactive approach to mapping, measuring and managing risks throughout the AI lifecycle.
In doing so, we’ve made privacy central to the AI Governance program at Box as we continue to integrate AI into our product offering and services. Critical cross-functional peers like our CIO, CTO, CISO and product leaders are represented in the AI Governance steering committee, which reports regularly to our executive team and board.
Partnering with advocacy groups like the Business Software Alliance (BSA) and Global Data Alliance is also important for staying up to date on the changing regulatory environment across the markets in which we operate.
IM: What have been the biggest privacy challenges you have faced with the Box AI product? How were these managed?
LP: The biggest privacy challenge we’ve faced with developing Box AI was also a benefit: keeping up with the rapid pace of change when it comes to AI. Almost two years before our announcement of Box AI, there was internal discussion about it.
I’d read the tea leaves from an AI Governance perspective, and it took around 1.5 years for our team to implement an AI Governance Framework.
This required extensive cross-functional collaboration to ensure we were forward-thinking and future-proofing our approach to integrating AI into our product offerings. It was a lot of work with many moving parts, working with stakeholders across the business.
We referenced the EU AI Act while still in draft, alongside the Organization for Economic Cooperation and Development (OECD) AI Principles and NIST AI Risk Management Framework, during the planning and eventual launch of Box AI.
Any business will face the same challenges with the rapid pace of AI. One second, you’re looking at one use case, and it evolves quickly into something you hadn’t even considered. Such is the nature of a new technology. That makes it difficult to know where the goalposts are, because the technology is evolving so quickly.
Yet, this rapid evolution also drives AI’s transformative potential. AI innovation is critical to improving business efficiency and effectiveness, growing economies globally and creating more jobs.
IM: What will be the role of privacy professionals in building consumer trust with AI tools going forward?
LP: The role of privacy expands as regulation expands. CPOs and their respective teams are at the helm of ensuring compliance when adopting AI within products or services.
Transparency with AI usage is key. Companies must disclose how AI tools use data, what data they are collecting, data sets being used to train the model and if consent is lawfully obtained.
Further down the chain, there are considerations if you are integrating with an LLM, and whether you’re making a substantial change to its model. Certain requirements will apply based on jurisdiction.
Understanding the use case is also important. You could be at a higher risk from a privacy compliance perspective, for example, if you’re using AI for decision making on lending, home loans, or employment. Privacy professionals must navigate these complexities, advocating for best practices and standards to minimize regulatory and reputational risks.
Globally, harmonization of privacy and AI laws is essential. Efforts like the Council of Europe’s push for multilateral agreements across nation states highlight the importance of shared frameworks to ensure trust and ethical AI usage. We need a level of harmonization – agreed practices and standards of what we will and won’t do with AI.
IM: What are your biggest concerns in the privacy field today?
LP: My biggest concern in the privacy field today is that we will not reach the level of regulatory harmonization needed to drive technological advancements in AI while safeguarding consumer and data protection rights.
Instead, companies will have to navigate a patchwork of laws across the US, EU, UK and elsewhere.
In the US, regulating AI starts with passing a comprehensive federal privacy law that strikes the right balance of maintaining cohesion with our closest allies’ laws, such as the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act (DPA).
IM: What are the biggest successes the privacy industry is experiencing today?
LP: There’s never been a better time to be a privacy professional. The bright spot in the privacy field is that so much of what is happening in technology is tied to privacy. There is a growing demand for CPOs and privacy experts to guide the implementation, use and integration of AI while ensuring oversight and compliance.
As a privacy professional, you’re already in a proactive position to work on an organization’s most critical decisions concerning AI.
IM: If you could give one piece of advice to fellow chief privacy officers, what would it be?
LP: Learn AI now. Learn the law, the standards and best practices. Work with your colleagues in privacy to cross-compare learnings. Take the International Association of Privacy Professionals’ (IAPP) AI Governance Training and other similar programs.
Learn AI now, because if you don’t you’ll only be playing catch-up later. AI is here to stay – it’s not going away. It will only increase in adoption and complexity, including the level of change, usage, and the laws that come with it.