AI Is Writing the Next Wave of Software Vulnerabilities — Are We “Vibe Coding” Our Way to a Cyber Crisis?

Posted on October 15, 2025
Author: Joseph M. Saunders

Artificial intelligence (AI) is reshaping how software is written and how vulnerabilities emerge. Developers are no longer limited to reusing open-source components or third-party libraries; instead, they are asking AI to build code on demand. This “vibe coding” revolution is powerful but perilous.

For decades, cybersecurity relied on shared visibility into common codebases. When a flaw was found in OpenSSL or Log4j, the community could respond: identify, share, patch, and protect.

AI-generated code breaks that model. Instead of reusing an open-source component and complying with its license, a developer can ask AI to generate a near-identical variant that never touches the original code.

I recently attended SINET New York 2025, joining dozens of CISOs and security leaders to discuss how AI is reshaping our threat landscape. One key concern surfaced repeatedly: Are we vibe coding our way to a crisis?

Losing the Commons of Vulnerability Intelligence

At the SINET New York event, Tim Brown, VP Security & CISO at SolarWinds, pointed out that with AI coding, we could lose insights into common third-party libraries.

He’s right. If every team builds bespoke code through AI prompts, producing components that resemble open-source libraries without matching them, there’s no longer a shared foundation. Vulnerabilities become one-offs. If we are not using the same components, we lose the ability to share vulnerability intelligence, and you can end up with a flaw in your product that nobody else knows to look for.

The ripple effect is enormous. Without shared components, there’s no community-driven detection, no coordinated patching, and no visibility into risk exposure across the ecosystem. Every organization could be on its own island of unknown code.

AI Multiplies Vulnerabilities

Even more concerning, AI doesn’t “understand” secure coding the way experienced engineers do. It generates code based on probabilities and its training data. A known vulnerability could easily reappear in AI-generated code, alongside any new issues.

Veracode’s 2025 GenAI Code Security Report found that “across all models and all tasks, only 55% of generation tasks result in secure code.” That means that “in 45% of the tasks the model introduces a known security flaw into the code.”

For those of us at RunSafe, where we focus on eliminating memory safety vulnerabilities, that statistic is especially concerning. Memory-handling errors — buffer overflows, use-after-free bugs, and heap corruptions — are among the most dangerous software vulnerabilities in history, behind incidents like Heartbleed, URGENT/11, and the ongoing Volt Typhoon campaign.

Now, the same memory errors could appear in countless unseen ways. AI is multiplying risk one line of insecure code at a time.
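
To make that concrete, here is a hypothetical sketch in C (illustrative only, not drawn from any model’s output) showing two of the bug classes named above: a strcpy into a fixed-size buffer (buffer overflow) and a read through a freed pointer (use-after-free).

    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>

    /* Illustration only: the kind of classic memory errors that can
     * quietly resurface in generated code. Do not use in production. */

    void copy_username(const char *input) {
        char buf[16];
        strcpy(buf, input);          /* buffer overflow: no bounds check on input */
        printf("hello, %s\n", buf);
    }

    int main(void) {
        char *session = malloc(32);
        if (!session) return 1;
        strcpy(session, "token-1234");
        free(session);
        printf("%s\n", session);     /* use-after-free: reads freed memory */

        copy_username("a-name-much-longer-than-sixteen-bytes");
        return 0;
    }

Both defects are decades old and well documented, which is exactly why they sit in the training data that code-generation models learn from.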

Signature Detection Can’t Keep Up

Nick Kotakis, former SVP and Global Head of Third-Party Risk at Northern Trust Corporation, underscored another emerging problem: signature detection can’t keep up with AI’s ability to obfuscate its code.

Traditional signature-based defenses depend on pattern recognition — identifying threats by their known fingerprints. But AI-generated code mutates endlessly. Each new build can behave differently and conceal new attack vectors.

In this environment, reactive defenses like signature detection or rapid patching simply can’t scale. By the time a signature exists, the exploit may already have evolved.
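
To see why, consider a hypothetical sketch (the function names check_a and check_b are illustrative): the two C functions below behave identically, but a byte- or string-level signature written for the first never matches the second. AI can churn out such variants indefinitely.

    #include <string.h>

    /* Variant A: a naive credential check a signature might fingerprint. */
    int check_a(const char *pw) {
        return strcmp(pw, "letmein") == 0;
    }

    /* Variant B: same behavior, different bytes. A pattern match on
     * variant A's instructions or its string literal never fires here,
     * because the literal is stored XOR-encoded and decoded on the fly. */
    int check_b(const char *pw) {
        const char enc[] = {'l'^7, 'e'^7, 't'^7, 'm'^7, 'e'^7, 'i'^7, 'n'^7};
        size_t i;
        for (i = 0; i < sizeof(enc); i++) {
            if ((char)(enc[i] ^ 7) != pw[i]) return 0;
        }
        return pw[i] == '\0';
    }
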

Tackling the Memory Safety Challenge

So how do we protect against vulnerabilities that no one has seen — and may never report?

At RunSafe, we focus on one of the most persistent and damaging categories of software risk: memory safety vulnerabilities. Our goal is to address two of the core challenges introduced by AI-generated code:

  • Lack of standardization, as every AI-written component can be unique
  • No available patches, as many vulnerabilities may never be disclosed

By embedding runtime exploit prevention directly into applications and devices, RunSafe prevents the exploitation of memory-based vulnerabilities, including unknown and zero-day flaws.

That means even before a patch exists, and even before a vulnerability is discovered, RunSafe Protect keeps code secure whether it’s written by humans, AI, or both.
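
RunSafe’s implementation isn’t described in this post, so as a rough analogue only (not RunSafe Protect itself), the sketch below shows one familiar form of memory-layout randomization: compiled as a position-independent executable under standard ASLR, it prints code, stack, and heap addresses that change on every run, which is why exploits that hard-code addresses miss their target.

    #include <stdio.h>
    #include <stdlib.h>

    /* Rough analogue only: standard OS-level ASLR, not RunSafe's product.
     * Build with: cc -fPIE -pie aslr_demo.c -o aslr_demo
     * Each run prints different addresses, so an exploit that hard-codes
     * a code or heap address fails to land. */

    void target(void) { puts("target reached"); }

    int main(void) {
        int stack_var = 0;
        void *heap = malloc(1);

        printf("code : %p\n", (void *)&target);
        printf("stack: %p\n", (void *)&stack_var);
        printf("heap : %p\n", heap);

        free(heap);
        return 0;
    }
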

Building AI Code Safely

AI-generated code is here to stay. It has the potential to speed up development, lower costs, and unlock new capabilities that would have taken teams months or years to build manually.

However, when every product’s codebase is unique, traditional defenses — shared vulnerability intelligence, signature detection, and patch cycles — can’t keep up. The diversity that makes AI powerful also makes it unpredictable.

That’s why building secure AI-driven systems requires a new mindset that assumes vulnerabilities will exist and designs in resilience from the start. Whether it’s runtime protection, secure coding practices, or proactive monitoring, security must evolve alongside AI.

At RunSafe, we’re focused on one critical piece of that puzzle: protecting software from memory-based exploits before they can be weaponized. As AI continues to redefine how we write code, it’s our responsibility to redefine how we protect it.

Learn more about Protect, RunSafe’s code protection solution built to defend software at runtime against both known and unknown vulnerabilities long after the last patch is available.
