The head of the UK’s national cybersecurity agency is calling for security professionals to “seize the disruptive vibe coding opportunity” to make software more secure.
However, this must be coupled with the rapid development of vibe coding safeguards for AI code-generation tools to become “a net positive for security”.
Delivering a keynote speech during the RSA Conference in San Francisco on March 24, Richard Horne, chief executive of the UK’s National Cyber Security Centre (NCSC), said the cybersecurity industry should leverage the exploding use of AI-assisted software development – also known as vibe coding – to reduce the collective vulnerability to cyber-attacks.

Whilst software produced without human review could propagate vulnerabilities, well-trained AI tooling that writes secure-by-design software could transform cybersecurity outcomes.
“The attractions of vibe coding are clear. Disrupting the status quo of manually produced software that is consistently vulnerable is a huge opportunity, but not without risk of its own,” he said.
“The AI tools we use to develop code must be designed and trained from the outset so that they do not introduce or propagate unintended vulnerabilities.”
NCSC’s Secure Vibe Coding Commandments
In parallel, David C, CTO for architecture at NCSC, published a blog on March 24 arguing that, while AI-generated code currently poses intolerable risks for many organizations, vibe coding shows “glimpses of a new paradigm” allowing “experienced developers to massively increase their productivity.”
The CTO predicted the business benefits of using AI to write code will drive up adoption. He argued it is vital that security professionals start engaging with the risks now and embed core security principles that will make software less vulnerable to attack.
His suggested commandments for securing vibe coding include:
- Integrate secure by default coding practices into vibe coding tools: AI models must generate safe, hardened code out of the box
- Adopt a ‘trust but verify’ approach: demand provable model provenance to ensure no malicious backdoors in AI-generated code
- Perform AI-powered code reviews: use AI to audit all code (human-written and AI-generated) and scan for vulnerabilities
- Implement deterministic guardrails: enforce strict, rule-based controls to limit what code can do, even if it’s compromised
- Secure hosting platforms: build environments that sandbox and protect against bad code, AI-generated or not
- Automate security hygiene: let AI handle docs, tests, fuzzing, and threat modeling for every piece of software
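The "deterministic guardrails" commandment above can be illustrated with a minimal sketch. The blog post does not specify an implementation, so the policy below is purely hypothetical: a rule-based static check, using Python's standard `ast` module, that rejects generated code importing modules outside an approved set. Unlike a probabilistic model, the same input always yields the same verdict.

```python
import ast

# Hypothetical policy for illustration: modules that generated code
# may not import. A real deployment would define its own rules.
DISALLOWED_MODULES = {"subprocess", "os", "socket", "ctypes"}

def violates_guardrail(source: str) -> list[str]:
    """Return the disallowed imports found in the given source.

    A deterministic, rule-based control: it inspects the code's
    syntax tree rather than trusting the model that produced it.
    """
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.split(".")[0] in DISALLOWED_MODULES:
                    violations.append(alias.name)
        elif isinstance(node, ast.ImportFrom) and node.module:
            if node.module.split(".")[0] in DISALLOWED_MODULES:
                violations.append(node.module)
    return violations

print(violates_guardrail("import subprocess\nimport json"))  # -> ['subprocess']
```

A check like this would sit in the pipeline between code generation and execution, so even compromised or hallucinated code is constrained by rules the organization, not the model, controls.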
The NCSC’s CTO emphasized the need to start implementing some of these guardrails now, “without waiting five years for the vibe future.”
“As just one example, the ability to use AI to harden the hosting or code of a legacy (even end-of-life) critical application would pay off a lot of technical and security debt carried by an organization,” he said.
He also highlighted that AI could help secure coding practices, from small tasks, like maintaining the allow-list of URLs an application is permitted to talk to, to bigger ones, like rewriting critical components in a framework that protects against common security issues by default, or in a memory-safe language.
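The URL allow-list he mentions is simple to express in code. This is a minimal sketch, not anything published by the NCSC; the host names are invented for illustration, and it uses only Python's standard `urllib.parse`.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of hosts this application may contact.
ALLOWED_HOSTS = {"api.example.com", "auth.example.com"}

def is_allowed(url: str) -> bool:
    """Permit an outbound request only to allow-listed hosts over HTTPS."""
    parsed = urlparse(url)
    # Require HTTPS and an exact host match; anything else is rejected.
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

print(is_allowed("https://api.example.com/v1/data"))  # True
print(is_allowed("https://evil.example.net/steal"))   # False
```

Keeping such a list current across an evolving codebase is exactly the kind of tedious, mechanical task the CTO suggests AI could take over, while the enforcement itself stays deterministic.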
He envisaged “a possible future” where AI-generated code ends up far more restricted and locked down by default than the best on-premises or software-as-a-service (SaaS) product.
“Ironically, it may even present a solution to organizations still worried about the old concerns with cloud services, who have avoided migrating in all these years,” he added.
