As cyber threats grow in complexity and regulatory scrutiny intensifies, the role of executive leadership in cybersecurity is undergoing a profound change.
Organizations can no longer afford for security to remain a purely technical concern; it demands informed, active engagement from the boardroom to ensure resilience and business continuity.
At the Gartner Security & Risk Management Summit 2025, Jonathan Monk, CIO at the Institute of Cancer Research, a UK charity, discussed a novel approach security leaders can use to support executives in making informed and active decisions about implementing security controls and managing exposure.
Infosecurity spoke to Monk about this approach and how to execute it effectively and gain boardroom buy-in for cybersecurity.
Monk also took us behind the scenes of how sensitive medical research data is both protected and carefully shared for the advancement of science.

Infosecurity Magazine: At the Gartner Summit, you discussed how you transitioned the dialogue with executives to enable them to make informed, active cybersecurity decisions. How did you achieve this?
Jonathan Monk: This relates to the discussions with executives and researchers we’ve had about protection level agreements (PLAs) for base-level controls.
The idea is to move from something that is highly qualitative to something that is more quantitative, particularly when we talk about risk.
The purpose of moving to a set of PLAs is to agree on the level of friction within the organization, agree on the cost, for example of implementing multifactor authentication (MFA), and estimate the time it takes. You can then quantify the friction and cost levels against better security.
For example, do you want MFA on a few apps, or do you want MFA on everything, every hour? You can be creative about what those control levels are.
At the Institute of Cancer Research (ICR), we put those controls into four scenarios, from very limited protection and low friction but high risk of exposure, all the way to best practice security and low risk of exposure, but high friction.
We get the executives to vote on each scenario and show them the distribution of who voted for which option.
That focuses the discussion on something tangible rather than becoming an argument.
Through that dialogue, what we created was a vehicle to discuss, in relatively sophisticated terms, cybersecurity controls with non-specialists. We write them in natural language, but we are not afraid to be specific. For example, you will get an MFA challenge every 24 hours or every eight hours. You can write that down and people will understand it; you don’t have to dumb it down.
This allows executives to make investments in protection, ultimately diverting money from ICR scientific research, and they can see progress. They can see that a number has gone from ‘there to there’, and that is a positive thing.
The idea is to be transparent about what the controls are and how they work, and to give executives credit for being able to understand them.
Executives are smart people; they like to make informed decisions and don’t like being railroaded.
IM: How well has this approach been received by executives and the wider organization?
JM: The audit committee were very positive; they liked the transparency of it. We’ve agreed PLAs and then added one every quarter. That’s another key piece of advice: take your time.
Focus on one PLA per quarter; that gives clarity to your deployment teams, because you’re able to give them some sense of direction.
IM: What are the most notable changes you have observed in the tactics cybercriminals use to target the medical research sector?
JM: What we are seeing is that financially motivated cybercriminals, the ransomware-as-a-business operations, don’t care that we are a charity doing medical research. We are seen as an organization that may potentially pay out; the ransomware business model has no moral compass.
We can’t argue that ransomware actors wouldn’t target a charitable research institute that’s focused on helping people. That would be highly naïve.
IM: Does using medical research data present any unique privacy and security challenges?
JM: When we talk about protecting our data, our purview is not just the electronic IT side. It is also about securing the technology that maintains frozen medical samples, because they’re just as important as electronic data.
The second thing is we have a lot of data; we have petabytes (PB) of it. We have very long-running longitudinal studies. For example, we have a study called the Generations Study, which follows over 100,000 women with the intention of tracking them for 40 years.
We’ve been researching them for around 25 years so far. This allows us to build up a really good view of the potential environmental causes of cancer in women.
If you lose that data, it’s 25 years to start again. Those longitudinal datasets are important.
The final part is that we are required to keep data that has been used in a medical trial for 20 years after it was last referenced.
Further to all of that, science is based on sharing and on the open interpretation and analysis of data. Simply locking it in a vault and air-gapping it doesn’t do anyone any good.
The data is highly sensitive, massively important and difficult to regenerate, so it not only has to be kept incredibly secure, it also needs to be open. That’s how science works, through sharing and people repeating experiments.
IM: What strategies do you employ to protect sensitive cancer research data while also enabling transparency and data sharing practices?
JM: The key is we put immutable backups on everything now, even 8PB-scale storage arrays.
We still use offline tapes, because the storage is good, but also because it’s easier to protect and you get a fast recovery time. It works out in our favor for long-term archiving.
We also make sure that data is in multiple locations, so if something does happen to one location, we’ve got it in another.
There are also the usual security measures, such as access controls.
When it comes to sharing, we partner with various companies to provide trusted research environments.
IM: Have you met any resistance from the medical research community around data security measures?
JM: Most of the research with highly sensitive medical data involves medical professionals, and there is a lot of guidance, training and clinical practice that they are very familiar with. You’re not having to explain these concepts to a group of people who don’t understand the need.
Equally, when they do their research, they don’t want to be constrained in the same way that they are when they’re working in a clinic. They want the freedom to try new algorithms, to try different software or code on their datasets.
It’s much more about making an informed, risk-based decision, rather than applying black-and-white rules.
The second aspect is that not all our research is patient-centric. We’re interested in the reasons why cancers occur, including aspects like cells and environmental factors. A petri dish has no sensitive data in it; if it’s just cells or chemicals, that’s a very different situation.
Therefore, we must stratify our controls depending on the sensitivity of the data. Our key focus is not having a one-size-fits-all approach, but having a clear set of stratifications: segmenting our environment into several different areas and deciding the appropriate set of controls for the sensitivity of the data in each.
