What Is Vibe Coding? Collins’ Word of the Year Spotlights AI’s Role and Risks in Software

Collins Dictionary has named ‘vibe coding’ its Word of the Year 2025. At its core, vibe coding refers to the use of AI, specifically large language models (LLMs), to turn natural language prompts into computer code, reducing the need for deep knowledge of programming languages like JavaScript or C++.

This lowers the barrier to entry for software development, allowing more people to build their own apps and websites.

However, as a Venafi survey revealed, reliance on AI-generated code has already raised fears that it could lead to a major security incident.

Dr Andrew Bolster, senior manager of research and development at Black Duck, noted that the rise of vibe coding has already resulted in many people questioning the entire field of software engineering.

“If you can ‘just’ ask for what you want and get a safe, secure, maintainable and sellable product, what on Earth are we doing with all these software engineers, product managers, release managers, quality assurance professionals, etc,” he said.

“However, we’re already seeing the pitfalls of this supposedly magical and ‘freeing’ approach to the occasionally opaque world of software engineering,” he added.

Read more: Vibe Coding - Managing the Strategic Security Risks of AI-Accelerated Development

Vibe coding also allows threat actors to leverage AI to develop malicious code.

LLMs can help attackers find security weaknesses and refine hacking techniques by automating tasks such as completing code, spotting bugs or even creating malware tailored to specific systems.

Google recently shared how AI tools have allowed threat actors to dynamically generate malicious code and evasion capabilities for malware.

New Threats Emerge from Depending on LLMs for Code

More sophisticated risks have begun to emerge as a result of developers relying on LLMs to build code.

One such issue arises when attackers register software packages under names hallucinated by AI systems.

This has become known as ‘slopsquatting’, a type of supply-chain threat in AI-powered workflows: malicious actors register the hallucinated package names, which are plausible but previously non-existent, and use them to deliver malware.

Slopsquatting is a play on ‘typosquatting,’ a popular tactic used by threat actors in phishing campaigns, where they register slightly misspelled versions of legitimate domains.
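
To make the defensive side concrete, here is a minimal Python sketch, assuming a hypothetical dependency list and helper function (neither comes from the article’s sources), that queries the public PyPI JSON API to confirm a package suggested by an LLM actually exists before anyone runs pip install:

    import requests

    def exists_on_pypi(name: str) -> bool:
        """Return True if `name` is a registered PyPI package.

        A hallucinated name returns HTTP 404, a strong hint that the
        dependency came from an LLM rather than a vetted ecosystem.
        """
        resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
        return resp.status_code == 200

    # Hypothetical dependencies lifted from AI-generated code; the second
    # name is invented for this example.
    for pkg in ["requests", "flask-auth-helperz"]:
        if not exists_on_pypi(pkg):
            print(f"WARNING: '{pkg}' is not on PyPI -- possible slopsquatting bait")

Existence alone proves nothing, of course: the whole point of slopsquatting is that attackers register these names first, so checks on package age, maintainers and download history belong in the same gate.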

The Hidden Risks of AI-Generated Software

It’s not just hackers using AI tools to conduct exploits. There are also risks in deploying legitimate code produced through vibe coding, which may introduce security weaknesses.

Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, commented, “The main security implications of vibe coding are that without discipline, documentation and review, such code often fails under attack.”

Daniel dos Santos, head of research at Forescout, noted that AI-generated configurations often contain weak security settings such as easy-to-guess passwords or overly generous access rights.
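
As a hedged illustration of that point (the values below are invented for this example, not drawn from Forescout’s findings), an assistant will often propose Python settings like the first block, which a human reviewer should tighten into the second:

    import os

    # Typical AI-suggested defaults -- each one is a finding in waiting.
    insecure = {
        "admin_password": "admin123",  # easy-to-guess, hard-coded credential
        "bind_address": "0.0.0.0",     # listens on every network interface
        "upload_dir_mode": 0o777,      # world-writable: overly generous access
    }

    # Hardened equivalents a reviewer should insist on.
    hardened = {
        "admin_password": os.environ["ADMIN_PASSWORD"],  # injected secret
        "bind_address": "127.0.0.1",   # local only unless exposure is needed
        "upload_dir_mode": 0o700,      # owner-only access
    }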

AI models also often generate code that is vulnerable to long-known security issues such as SQL injection and cross-site scripting (XSS).
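
The pattern is easy to reproduce. The sketch below (the table and column names are hypothetical) shows the kind of string-built lookup an LLM will happily emit, next to the parameterized version that closes the hole:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "' OR '1'='1"  # attacker-controlled value

    # Vulnerable: concatenation lets the input rewrite the query.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print(rows)  # every row comes back -- the injection worked

    # Safe: a placeholder treats the input as data, never as SQL.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(rows)  # empty -- no user is literally named "' OR '1'='1"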

“Vibe coding brings a whole new scale to these security issues. Developers can write code faster, automate repetitive tasks and solve complex problems with AI support. However, this increase in productivity comes at a price: security vulnerabilities are being introduced at a rate that overwhelms human capacity to detect and fix them,” he noted.

“The impact of this problem is not limited to individual companies. As AI-generated code ends up in open source libraries and shared components, vulnerabilities can cascade through the entire software ecosystem,” he added.

How to Secure AI-Generated Code

Organizations should not avoid AI-powered development tools altogether, but they should take a proactive, security-conscious approach to them, argued dos Santos.

Forescout provided Infosecurity with the following points which can help to minimize the risks of vibe coding:

  • Rigorous code review: Human oversight is critical to detect security vulnerabilities that AI models miss. Four-eyes principles should be strictly adhered to, especially for safety-critical components
  • Automated security testing: Integrating security analysis tools directly into the development pipeline can help identify vulnerabilities in real time. Tools such as static code analysis, software composition analysis and dynamic security tests should be implemented as standard (a minimal sketch of one such pipeline check follows this list)
  • Developer training: Raising developer awareness of the potential pitfalls of AI-generated code and encouraging healthy skepticism is essential. Developers should understand that AI is a tool, not a substitute for security expertise
  • Dependency checking: Implement strict controls for code dependencies to prevent supply chain attacks. Validate all external libraries and packages before integrating them into your projects
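
As one concrete way to wire such checks into a pipeline, the sketch below runs Bandit, a widely used open source static analyzer for Python (the src directory and the fail-on-high policy are assumptions for the example), and breaks the build on high-severity findings:

    import json
    import subprocess
    import sys

    # Scan the (hypothetical) src/ tree recursively, emitting JSON.
    scan = subprocess.run(
        ["bandit", "-r", "src", "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(scan.stdout)

    for issue in report["results"]:
        print(f"{issue['filename']}:{issue['line_number']} "
              f"[{issue['issue_severity']}] {issue['issue_text']}")

    # Fail the pipeline if anything high-severity slipped through.
    high = [r for r in report["results"] if r["issue_severity"] == "HIGH"]
    sys.exit(1 if high else 0)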

Drew Streib, VP of engineering at Black Duck, told Infosecurity, “Today, vibe coding is rapidly moving from experimentation to production and enterprises must ensure that their processes continue to drive rigorous testing and security of these systems to keep up with fast-evolving tooling changes. The promise is real, but so are the risks, and process is the key to unlocking innovation safely.”
