AI/ML, AI benefits/risks, Generative AI

Is AI-Generated code safe?

Share
AI-generated code

As artificial intelligence (AI) weaves into the fabric of daily operations, its ability to automate tasks, from mundane to complex, sets new standards of efficiency and innovation.

We’re already getting a glimpse into a world where AI-driven algorithms optimize logistics, forecast trends, and even craft code, unlocking unprecedented productivity gains across various sectors.

Yet, beneath this promised progress lies a critical vulnerability: the security of AI-generated code. This paradox, where the very technology designed to propel us forward could also expose us to new risks, sets the stage for a nuanced exploration of AI's double-edged sword.

But all its marvels aside, the fast adoption of AI raises a pressing question: How do we harness the power of this new technology while safeguarding our growing attack surface?

It’s an especially critical question with code development, which sits at the foundation of our digital infrastructure.

There’s a stark contrast between using AI for simple tasks and its application in the complex domain of software development. AI excels at automating routine, straightforward tasks with a clear set of rules.

However, software development, characterized by its intricate nature, requires a deep understanding of logic, context, and creativity, areas where AI's capabilities are still evolving.

Insights from recent research highlight that while AI-generated code can significantly speed-up development processes, it also introduces unique vulnerabilities. This concern is underscored by findings in the "2023 Snyk AI-Generated Code Security Report," which highlights the efficiency of AI in coding, but also points to a significant increase in security vulnerabilities because of reduced human oversight and AI's inherent limitations in understanding complex security requirements.

True, advancements in large language models (LLMs) have significantly improved the ability of AI to understand and generate code, leveraging LLMs’ strengths in processing language-based information. However, even with these advancements, the challenges related to security vulnerabilities remain pertinent. LLMs, while adept at generating syntactically correct code, may not fully grasp the nuanced security contexts or the specific secure coding practices required for different applications. A 2023 McKinsey report supports this, noting that AI doesn’t contribute as much when it comes to more complex coding tasks.

Evolving security risks with AI

From the exploitation of AI system weaknesses by savvy hackers, to the challenges of managing rapidly produced AI-generated code and navigating the risks of unsanctioned AI tool usage, organizations need to prepare. Each of these raises different challenges that teams need to address:

  • Exploitation of AI vulnerabilities by hackers: As AI systems become more sophisticated, hackers increasingly exploit vulnerabilities in these systems. This includes manipulating AI algorithms to behave unpredictably or to bypass security measures, potentially leading to unauthorized access or data breaches. Forms of attacks include prompt injection, indirect prompt injection and training data poisoning.
  • Challenges of managing AI-generated code: The automated nature of AI-generated code can introduce security gaps, especially if the code does not get thoroughly reviewed and tested. The speed at which AI can produce code might outpace the ability of security teams to audit and validate it for vulnerabilities. Stanford University research found that AI-assisted code development produced buggier code. With more code, delivered faster and with more vulnerabilities, already stretched and stressed application security teams will surely be overwhelmed.
  • Blind spots in unsanctioned AI tool usage: Employees using AI tools without proper oversight can inadvertently introduce risk. Gartner reports that by 2027, 75% of employees will acquire, modify or create technology outside IT’s visibility – up from 41% in 2022. AI-generated code from those tools may be hard, if not impossible, to track as it makes its way from local development environments to the central repository.

Balancing innovation with security

Is AI-generated code safe? No, but with AI now part of the software development landscape, organizations have to accept that reality and adapt. They need to work hard to ensure their AI-enhanced software is made safe with proper oversight. This demands a security-first mindset, and proactively engaging with the complexities and risks intrinsic to AI-generated code. More code, generated faster, and likely with more vulnerabilities is a reality that will need to be embraced.

Here are four actions teams can take now to start that process:

  • Educate and train: Train development and security teams on the capabilities and limitations of AI-generated code.
  • Adopt secure coding standards for AI: Develop and enforce secure coding practices tailored to AI-generated code.
  • Practice DevSecOps: Foster a collaborative culture that brings together developers using AI, security teams, and operational teams.
  • Automate vulnerability management processes: Automate remediation workflows to scale and accelerate the company’s remediation operations so they can keep pace with AI-generated code, and reduce the window of opportunity for attackers.

AI presents many potential benefits -- and pitfalls -- but with some proactive approaches, organizations can get the most out of this exciting new technology.

Yoran Sirkis, co-founder and CEO, Seemplicity

Is AI-Generated code safe?

No, it's not safe, but now’s the time to take proactive steps and adapt.

Get daily email updates

SC Media's daily must-read of the most current and pressing daily news

By clicking the Subscribe button below, you agree to SC Media Terms of Use and Privacy Policy.