AI Code Assistants Make Developers More Efficient At Creating Security Problems
AI coding assistants allow developers to move fast and break things, which may not be ideal.
Application security firm Apiiro says it analyzed code from tens of thousands of repositories and several thousand developers affiliated with Fortune 50 enterprises to better understand the impact of AI code assistants like Anthropic’s Claude Code, OpenAI’s GPT-5, and Google’s Gemini 2.5 Pro.
AI is fixing the typos but creating the timebombs
The firm found that AI-assisted developers produced three to four times more code than their unassisted peers, but also generated ten times more security issues.
“Security issues” here doesn’t mean exploitable vulnerabilities; rather, it covers a broad set of application risks, including added open source dependencies, insecure code patterns, exposed secrets, and cloud misconfigurations.
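For a sense of what such findings look like in practice, here is a minimal, hypothetical sketch (not code from Apiiro's data set) of one classic insecure pattern scanners flag, string-built SQL, alongside the parameterized form a reviewer or AppSec tool would ask for:

```python
import sqlite3

# Toy database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(username: str):
    # Insecure code pattern: string-built SQL. Input like
    # "' OR '1'='1" returns every row instead of one user.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(username: str):
    # The parameterized equivalent, which treats input as data, not SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # [('alice',)] -- injection succeeds
print(find_user_safe("' OR '1'='1"))    # [] -- input treated as a literal string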
As of June 2025, AI-generated code had introduced over 10,000 new “security findings” per month in Apiiro’s repository data set, representing a 10x increase from December 2024, the biz said.
“AI is multiplying not one kind of vulnerability, but all of them at once,” said Apiiro product manager Itay Nussbaum, in a blog post.
“The message for CEOs and boards is blunt: if you’re mandating AI coding, you must mandate AI AppSec in parallel. Otherwise, you’re scaling risk at the same pace you’re scaling productivity.”
The AI assistants generating code for the repos in question also tended to pack more code into fewer pull requests, making code reviews more complicated because the proposed changes touch more parts of the codebase. In one instance, Nussbaum said, an AI-driven pull request altered an authorization header across multiple services, and when a downstream service wasn’t updated, that created a silent authentication failure.
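Nussbaum didn't publish the code, but the failure mode is easy to sketch. In the hypothetical Python below (service and header names are invented for illustration, not taken from the report), a gateway updated by such a change forwards the token under a new header name while a downstream service still reads the old one, so valid requests silently fail authentication:

```python
def gateway_forward(request_headers: dict) -> dict:
    # Updated by the AI-driven pull request: the token now
    # travels under X-Auth-Token instead of Authorization.
    return {"X-Auth-Token": request_headers.get("Authorization", "")}

def downstream_is_authorized(headers: dict) -> bool:
    # Never updated: still reads the old header, so it sees an empty token.
    token = headers.get("Authorization", "")
    return token.startswith("Bearer ")

incoming = {"Authorization": "Bearer valid-token"}
forwarded = gateway_forward(incoming)
print(downstream_is_authorized(forwarded))  # False: the request silently fails auth
```

Nothing crashes and no error is logged at the point of the change, which is why a review that only skims a large multi-service diff can miss it.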
The AI code helpers aren’t entirely without merit. They reduced syntax errors by 76 percent and logic bugs by 60 percent, but at a cost: a 322 percent increase in privilege escalation paths and a 153 percent increase in architectural design flaws.
“In other words, AI is fixing the typos but creating the timebombs,” said Nussbaum.
Apiiro’s analysis also found that developers relying on AI help exposed sensitive cloud credentials and keys nearly twice as often as their DIY colleagues.
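Again as a hypothetical illustration rather than anything from the study, the first pattern below is the kind of exposed secret those findings describe; the second is the conventional fix of resolving the credential at runtime:

```python
import os

# What scanners flag: a live-looking credential committed to source control.
# (The value here is a made-up placeholder, not a real key.)
HARDCODED_API_KEY = "sk-hypothetical-123456"  # flagged as an exposed secret

def get_api_key() -> str:
    # The usual remediation: resolve the credential from the environment
    # (or a secrets manager) so it never enters version control.
    key = os.environ.get("SERVICE_API_KEY")  # hypothetical variable name
    if key is None:
        raise RuntimeError("SERVICE_API_KEY is not set")
    return key
```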
The firm’s findings echo the work of other researchers. For example, in May 2025, computer scientists from the University of San Francisco, the Vector Institute for Artificial Intelligence in Canada, and the University of Massachusetts Boston determined that allowing AI models to iteratively improve code samples degrades security.
This shouldn’t be surprising given that AI models ingest vulnerabilities in training data and tend to repeat those flaws when generating code. At the same time, AI models are being used to find zero-day vulnerabilities in Android apps.
Apiiro’s observation that AI-assisted developers produce code faster than unassisted ones appears to contradict recent research from Model Evaluation & Threat Research (METR), which found that AI coding tools made software developers slower. It may be, however, that Apiiro is counting only the time required to generate code, not the time required to iron out the flaws.
Apiiro, based in Israel, wasn’t immediately available to respond. ®