A survey of developers who use AI in their work found that 42% said at least half of their codebase is AI-generated. Only 67% said they review AI-generated code before every deployment, despite 79.2% saying they believe AI will exacerbate open-source malware threats, according to Cloudsmith’s Artifact Management Report 2025, published Wednesday.

The survey of 307 participants, all of whom use AI during their daily software development, DevOps or continuous integration/continuous deployment (CI/CD) workflows, reveals the need for software artifact management and security measures to catch up with the growing prevalence of AI-generated code.

“With escalating software supply chain threats and the meteoric adoption of GenAI-powered coding practices, organizations are being forced to rethink how they manage, secure and scale their software artifact infrastructure,” Cloudsmith CEO Glenn Weinstein wrote in the report.

The report reflects both concern about AI risks and varying levels of trust in AI-generated code among respondents. While 30% of respondents believe AI will significantly increase open-source malware threats, and 41% said code generation was the greatest area of risk when it comes to AI-generated input in the software development cycle, only 59% applied additional reviews to AI-generated packages, with 16% treating them like any other package.

Additionally, a fifth of respondents said they fully trusted AI outputs without any scrutiny, while two-thirds said they only trusted AI-generated code after a manual review. More than 20% of respondents said a majority or all of their codebase was generated by AI.

Most organizations, 86%, saw an increase in AI-influenced dependencies within the last year, and 40% reported a significant increase in these dependencies. While concerns about the risks of AI-generated code in the open-source ecosystem are high, only 29% of respondents felt very confident that they could detect malicious code in open-source libraries.

AI-assisted code generation opens up a new attack vector for malicious actors and poses the risk of inadvertently introducing security weaknesses. Backslash recently found that “vibe coding” with popular models such as OpenAI’s ChatGPT can produce code vulnerable to up to nine of the top 10 Common Weakness Enumeration (CWE) flaws. Using security-minded prompts was found to reduce the incidence of these weaknesses, however.

AI-generated code could also become a target for malicious manipulation. For example, Pillar Security warned that attackers could influence AI-generated code by tricking developers into using malicious rule configuration files for popular AI coding assistants like GitHub Copilot and Anysphere’s Cursor.

Additionally, researchers from the University of Texas at San Antonio, Virginia Tech and the University of Oklahoma found that 20% of the packages referenced in AI-generated Python and JavaScript code were hallucinated. This has raised concern about potential “slopsquatting,” in which the names of hallucinated packages are claimed by malicious actors to introduce harmful dependencies.

Cloudsmith notes that artifact management solutions can play a role in addressing emerging AI risks through automated policy enforcement to detect unreviewed AI-generated artifacts and provenance tracking to distinguish AI-authored code from human-authored code. By integrating trust signals directly into the development pipeline, these solutions can reduce the burden on developers to manually review potentially risky AI-generated code.

“Automated checks and use of curated artifact repositories can help developers spot issues early in the development lifecycle,” Weinstein said in a statement.
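The report does not prescribe a specific enforcement mechanism, but one way to picture such a policy gate is a pipeline step that reads provenance metadata for each artifact and blocks any artifact marked as AI-generated that lacks a recorded human review. The sketch below is a hypothetical illustration only: the metadata file name, its fields ("generator", "reviewed_by") and the policy are assumptions for the example, not Cloudsmith’s implementation or anything detailed in the report.

```python
#!/usr/bin/env python3
"""Minimal sketch of a CI policy gate for AI-generated artifacts.

Hypothetical example: the metadata file, its fields and the policy
are assumptions, not a real artifact-management product's API.
"""
import json
import sys
from pathlib import Path

# Assumed per-build provenance record listing artifacts and how they were authored.
METADATA_FILE = Path("artifact-metadata.json")


def check_artifacts(metadata_path: Path) -> int:
    records = json.loads(metadata_path.read_text())
    failures = []
    for artifact in records:
        # Provenance tracking: flag artifacts marked AI-generated with no recorded reviewer.
        if artifact.get("generator") == "ai" and not artifact.get("reviewed_by"):
            failures.append(artifact.get("name", "<unnamed>"))
    if failures:
        print(f"Blocked {len(failures)} unreviewed AI-generated artifact(s): "
              + ", ".join(failures))
        return 1  # non-zero exit fails the pipeline stage
    print("All AI-generated artifacts have a recorded human review.")
    return 0


if __name__ == "__main__":
    sys.exit(check_artifacts(METADATA_FILE))
```

Run as an early pipeline stage, a check like this surfaces unreviewed AI-generated artifacts before they are promoted, rather than relying on developers to remember a manual review at deployment time.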