A recent report has found that one in five security breaches can now be attributed to vulnerabilities in AI-generated code. According to the State of AI in Security & Development report from Aikido Security, 69% of organizations have discovered flaws in AI-produced code, even though AI accounts for only 24% of production code globally.
The findings highlight a growing concern within the tech industry. Companies are increasingly adopting AI to boost efficiency and output, yet when issues arise, blame falls on security teams, developers, and the engineers who merge the code. Specifically, 53% of security teams, 45% of developers, and 42% of those who merged the code have been held accountable for problems linked to AI-generated code. This trend raises significant questions about who bears responsibility for these vulnerabilities, complicating efforts to track and address them effectively.
Research indicates that nearly half of all AI-generated code contains security flaws, including code produced by large language models (LLMs). The speed at which AI generates code introduces security risks of its own, as noted by Aikido CISO Mike Wilkes: “Developers didn’t write the code, infosec didn’t get to review it and legal is unable to determine liability should something go wrong. It’s a real nightmare of risk.” Wilkes emphasized the ambiguity over who is accountable when a breach stems from AI-generated code.
The situation varies significantly between regions. In Europe, 20% of companies have reported serious security incidents, while the figure in the United States is considerably higher at 43%. Aikido attributes this disparity to two main factors: a greater tendency among U.S. developers to bypass security controls—72% compared to 61% in Europe—and the stricter compliance regulations present in European countries. Nevertheless, 53% of European firms acknowledge having experienced near misses related to security breaches.
The AI tools themselves may not be the problem; rather, the complexity of the ecosystem around them may be a contributing factor. The report revealed that 90% of organizations using six to eight different tools experienced security incidents, compared with 64% of those employing just one or two tools. Remediation time also varies significantly: organizations using one or two tools resolve issues in an average of 3.3 days, compared with 7.8 days for those using five or more.
Despite the challenges, the outlook for AI in code development remains optimistic. An overwhelming 96% of respondents believe that AI will be able to produce secure, reliable code within the next five years, and nearly as many, 90%, are confident that AI could manage penetration testing within 5.5 years. Importantly, only 21% of those surveyed anticipate that this progress will occur without human oversight, underscoring the ongoing necessity for human expertise in the development process.
As organizations navigate the complexities of integrating AI into their coding practices, the balance between leveraging technology for efficiency and ensuring robust security measures will remain a critical focus. The evolving landscape calls for clear guidelines on accountability and best practices to mitigate risks associated with AI-generated code.
