AI security defenses potentially circumvented via picklescan flaws

Hackread reports that open-source artificial intelligence models could be stealthily compromised through the exploitation of four security flaws in picklescan, a tool for detecting malicious code within Python pickle files, potentially resulting in arbitrary code execution and system takeovers.
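Pickle deserialization is risky because a pickled object's `__reduce__` method can instruct the unpickler to call an arbitrary function on load. The minimal Python sketch below (the class name and echoed command are illustrative assumptions, not details from the reported flaws) shows the behavior class that scanners such as picklescan aim to flag before a model file is loaded:

```python
import os
import pickle

# Hypothetical demonstration class: __reduce__ tells the unpickler
# which callable to invoke during deserialization.
class MaliciousPayload:
    def __reduce__(self):
        # On load, the unpickler will call os.system("echo compromised").
        return (os.system, ("echo compromised",))

blob = pickle.dumps(MaliciousPayload())

# Merely loading the untrusted bytes executes the embedded call --
# no attribute access or method invocation by the victim is needed.
pickle.loads(blob)  # prints "compromised"
```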
Among the discovered and already patched picklescan vulnerabilities were CVE-2025-1716, which could allow evasion of picklescan's security checks; CVE-2025-1889, which could prevent identification of concealed malicious files; CVE-2025-1944, which could disrupt picklescan's operation via ZIP archive filename alterations; and CVE-2025-1945, which could prevent the discovery of malicious files through ZIP archive modifications, according to an analysis from Sonatype.
Averting potential risks requires not only avoiding pickle files from untrusted sources and loading such files only in controlled environments but also using cryptographic signatures and checksums to verify AI model integrity, according to Sonatype Chief Product Officer Mitchell Johnson. Organizations were also urged to implement multiple security scanning systems.
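One way to act on the checksum recommendation is to compare a model file's digest against a value published out of band before loading it. The sketch below is a hypothetical illustration, not part of Sonatype's guidance; the file name, helper function, and digest placeholder are all assumptions:

```python
import hashlib
from pathlib import Path

# Hypothetical helper: compare a model file's SHA-256 digest against a
# known-good value obtained from the model's distributor.
def verify_model_checksum(path: str, expected_sha256: str) -> bool:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large model files do not exhaust memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

model_file = "model.pkl"                      # illustrative filename
published = "replace-with-published-digest"   # placeholder value

if Path(model_file).exists() and verify_model_checksum(model_file, published):
    print("Checksum matches; load only in a controlled environment.")
else:
    print("Checksum mismatch or file missing; do not load the model.")
```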