Managed security service providers (MSSPs) should prepare for a surge of application vulnerabilities, flaws that attackers are likely to exploit faster than ever.

AI-generated code: The new normal

A Sapio Research survey of 450 IT professionals in the U.S. and Europe found that 69 percent have discovered vulnerabilities in AI-generated code, and 20 percent report serious incidents as a result.

Conducted on behalf of Aikido Security, the survey reveals that, on average, 24 percent of code running in production environments comes from AI tools. It also shows that 92 percent of respondents worry about vulnerabilities in AI-generated code, and 25 percent are seriously concerned.

The core issue lies in how large language models (LLMs) are trained to generate code. Providers such as OpenAI and Anthropic train their LLMs on vast, varied collections of code gathered from across the Web, and the insecure patterns in that code resurface in what AI coding tools produce. Security teams catch and fix many of these vulnerabilities before they reach production, but the rapid pace of application development overwhelms them. As more applications roll out, the total number of exploitable vulnerabilities climbs.
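
To make that mechanism concrete, here is a minimal, hypothetical Python sketch of the kind of insecure pattern that circulates widely in scraped training code and therefore tends to resurface in AI-generated suggestions, alongside the parameterized alternative a security scanner would typically recommend. The function and table names are illustrative only.

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Insecure pattern: user input is interpolated directly into the SQL
    # string, so input like "x' OR '1'='1" changes the query's meaning.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer equivalent: a parameterized query lets the driver handle
    # escaping, closing off the injection path.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()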

The double-edged sword of AI

Even more troubling is AI’s double-edged nature. Cybercriminals now use the same tools to reverse-engineer exploits, signaling a dramatic rise in the number of vulnerabilities that will actually be attacked. The result could be a tsunami of attacks for MSSPs to thwart, many of them zero-days with no immediate patch available.

While developers should rely on testing tools to identify and remediate issues, scanning tools have long produced many false positives, causing alerts to be ignored more often than not. The survey notes that software engineers spend an average of 6.1 hours per week checking and triaging security tool alerts, with 72 percent of that time wasted on false positives. Nearly two-thirds of respondents (65 percent) said teams bypass security checks, delay fixes, or dismiss findings due to alert fatigue.
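
As a rough illustration of what that triage work involves, the sketch below shows one way a team might thin a scanner's output before a human looks at it: drop low-confidence findings, collapse duplicates, and sort what remains by severity. The Finding structure and field names are assumptions made for illustration, not the schema of any particular tool.

from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str      # e.g. "sql-injection"
    severity: str     # "critical", "high", "medium", or "low"
    confidence: str   # the scanner's own confidence rating
    location: str     # file and line, e.g. "app/db.py:42"

def triage(findings: list[Finding]) -> list[Finding]:
    # Drop low-confidence findings, collapse duplicates of the same rule
    # at the same location, and surface the most severe issues first.
    order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    seen: set[tuple[str, str]] = set()
    kept = []
    for f in findings:
        if f.confidence == "low":
            continue
        key = (f.rule_id, f.location)
        if key in seen:
            continue
        seen.add(key)
        kept.append(f)
    return sorted(kept, key=lambda f: order.get(f.severity, 4))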

A sliver of optimism

On the plus side, respondents are hopeful the situation will improve. Seventy-nine percent say their organization is relying more on AI to help fix vulnerabilities. About 96 percent believe AI will eventually write secure code, though only 21 percent think that can happen without human oversight. A total of 90 percent expect AI to reduce the need for humans to conduct penetration testing.

Nevertheless, 79 percent say remediation takes longer than a day, with backlogs everywhere.

In the short term, at least, application security is likely to worsen before it improves. MSSPs should plan for the worst while hoping for the best and begin preparing now for the challenges ahead.

Photo: TROFNOM / Shutterstock


Posted by Mike Vizard

Mike Vizard has covered IT for more than 25 years, and has edited or contributed to a number of tech publications including InfoWorld, eWeek, CRN, Baseline, ComputerWorld, TMCNet, and Digital Review. He currently blogs for IT Business Edge and contributes to CIOinsight, The Channel Insider, Programmableweb and Slashdot. Mike blogs about emerging cloud technology for Smarter MSP.
