Software security continues to lag behind the pace of development, with vulnerabilities accumulating faster than they are being resolved, according to the latest State of Software Security report from Veracode. The annual report, based on analysis of 1.6 million applications tested on its cloud platform, paints a picture of increasing “security debt” – unresolved vulnerabilities lingering for over a year – exacerbated by the accelerating adoption of artificial intelligence in the software development lifecycle.
Currently, 82 percent of companies are grappling with security debt, a significant increase from 74 percent just a year ago. More concerning is the rise in high-risk vulnerabilities – flaws that are both easily exploitable and capable of causing significant damage – which have jumped from 8.3 percent to 11.3 percent. These figures are derived from a comprehensive suite of testing methodologies including static analysis (examining source code for potential weaknesses), dynamic analysis (testing application behavior during runtime), software composition analysis (identifying vulnerabilities in third-party components and libraries), and manual penetration testing performed by security experts.
While the overall trend is concerning, the report isn’t entirely bleak. Veracode observed a reduction in the number of applications containing known vulnerabilities in open-source components, decreasing from 70 percent to 62 percent. Overall “flaw prevalence” – the percentage of applications with any kind of vulnerability – also saw a slight decline, moving from 80 percent to 78 percent. However, researchers caution that this improvement may be partially attributable to increased testing coverage, meaning more vulnerabilities are being *discovered* rather than fewer being *created*.
The Velocity Problem: Development Outpacing Security
A key driver of the growing security debt is the increasing speed of software development. The report highlights that new code is being added to applications at a rate that outstrips the ability to address existing vulnerabilities. This is particularly acute in organizations embracing agile and DevOps methodologies, which prioritize rapid iteration and frequent releases. The pressure to deliver features quickly often leads to security considerations being deferred or overlooked.
Adding to this complexity is the growing use of AI-generated code. While AI tools can potentially assist in identifying vulnerabilities and automating remediation efforts, they also introduce new challenges. The report notes that AI-generated code can be more difficult to analyze and understand, making it harder to identify and fix security flaws. The potential for malicious actors to leverage AI for penetration testing or to manipulate AI models through techniques like prompt injection adds another layer of risk.
The impact of AI on security is a complex and evolving issue. While AI-powered security tools are emerging, their effectiveness is still being evaluated. The report acknowledges that human oversight remains crucial, but the practical implications of that oversight – particularly in the context of rapidly evolving AI technologies – are not yet fully defined. The recent example of Cloudflare’s AI-driven application development, where a significant application was built in a week with minimal human code review, underscores the potential for security to be compromised in the pursuit of speed.
The Rise of False Positives and the Remediation Gap
The increasing reliance on automated testing tools also contributes to the challenge of managing vulnerabilities. While these tools are valuable for identifying potential issues, they often generate false positives – identifying code as vulnerable when it is not. This creates a significant burden for security teams, who must spend time investigating and dismissing these false alarms. The report doesn’t quantify the rate of false positives, but acknowledges that they can skew the overall vulnerability numbers.
The core issue, however, isn’t simply the *number* of vulnerabilities, but the widening “remediation gap” – the difference between the rate at which vulnerabilities are discovered and the rate at which they are fixed. Veracode’s report states that this gap has reached “crisis proportions,” and that incremental improvements are no longer sufficient. “Transformational change” is required, but the report offers little concrete guidance on what that change should entail.
A Cycle of Tooling and Unfulfilled Promises?
The report’s conclusion – “The velocity of development in the AI era makes comprehensive security unattainable” – is a stark assessment of the current state of affairs. It suggests that traditional security approaches are struggling to keep pace with the demands of modern software development. There’s a risk that the industry will respond by doubling down on AI-powered security tooling, despite evidence suggesting that these tools are not yet capable of solving the underlying problem.
The challenge lies in finding a way to integrate security seamlessly into the development process, rather than treating it as an afterthought. This requires a shift in mindset, a commitment to security best practices, and a willingness to invest in the tools and training necessary to build secure software. It also requires a realistic assessment of the limitations of AI and a recognition that human expertise remains essential. Without a fundamental change in approach, the cycle of accumulating security debt is likely to continue, leaving organizations increasingly vulnerable to attack.
