TechCrunch reports that applications developed with code-generating artificial intelligence systems are more likely to contain security vulnerabilities.
A Stanford study found that software developers given access to OpenAI's Codex code-generating system were more likely to produce incorrect and insecure programming solutions. Developers using Codex were also more likely than those in the control group to rate insecure answers as secure. "Code-generating systems are currently not a replacement for human developers. Developers using them to complete tasks outside of their own areas of expertise should be concerned, and those using them to speed up tasks that they are already skilled at should carefully double-check the outputs and the context that they are used in in the overall project," said study lead co-author Neil Perry. However, study co-author Megha Srivastava said that code-generating systems may be beneficial for low-risk tasks, such as exploratory research code. "Companies that develop their own [systems], perhaps further trained on their in-house source code, may be better off as the model may be encouraged to generate outputs more in-line with their coding and security practices," said Srivastava.
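To make the kind of flaw the study describes more concrete, here is a hypothetical sketch, not drawn from the study's actual tasks, contrasting an injection-prone SQL query of the sort an AI assistant might suggest with the parameterized form a careful reviewer would substitute; the table schema and helper names are invented for illustration.

import sqlite3

# Illustrative example only (assumed scenario, not from the Stanford study).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_insecure(name: str):
    # String interpolation places attacker-controlled input directly into SQL.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(name: str):
    # Parameterized query: the driver treats the value as data, defeating injection.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_insecure(payload))  # returns every row -- the injection succeeds
print(find_user_secure(payload))    # returns [] -- the payload is treated as plain data

The insecure version is the sort of output the study's participants tended to accept as safe; the fix is a one-line change, but only if the developer double-checks the generated code.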