Artificial intelligence systems could be targeted with remote code execution intrusions through the exploitation of the now-addressed high-severity Meta Llama large language model framework vulnerability, tracked as CVE-2024-50050, The Hacker News reports.
The RCE flaw, which Snyk rated as critical, impacts the Llama Stack component, particularly the reference Python Inference API implementation, which automatically deserializes Python objects using the risky pickle format, according to an analysis from Oligo Security. "In scenarios where the ZeroMQ socket is exposed over the network, attackers could exploit this vulnerability by sending crafted malicious objects to the socket. Since recv_pyobj will unpickle these objects, an attacker could achieve arbitrary code execution (RCE) on the host machine," said Oligo Security researcher Avi Lumelsky.
Oligo Security's findings come after security researcher Benjamin Flesch reported that websites could be subjected to distributed denial-of-service attacks facilitated by the exploitation of a high-severity OpenAI ChatGPT crawler issue.
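For services that receive Python objects over a ZeroMQ socket as described above, the following added example (not Llama Stack's code and not its actual patch) shows a strict decoder applied to the raw bytes from socket.recv() in place of recv_pyobj, which in pyzmq runs pickle.loads on the received frame. The request schema, field names, size cap and token range are assumptions made for illustration.

```python
import json
import pickle

MAX_FRAME = 64 * 1024   # assumed size cap for one request frame
PICKLE_PROTO = b"\x80"  # first byte of every protocol 2+ pickle stream
# Hypothetical request schema: exact field set and exact types.
SCHEMA = {"model": str, "prompt": str, "max_tokens": int}


class RejectedFrame(ValueError):
    """Raised for any frame that is not a well-formed request."""


def decode_request(frame: bytes) -> dict:
    """Decode one raw frame as data only; no object reconstruction."""
    if len(frame) > MAX_FRAME:
        raise RejectedFrame("frame too large")
    if frame[:1] == PICKLE_PROTO:
        # A legacy or hostile peer is still sending pickles: refuse and log.
        raise RejectedFrame("pickle frame refused")
    try:
        msg = json.loads(frame.decode("utf-8"))
    except (UnicodeDecodeError, json.JSONDecodeError, RecursionError) as exc:
        raise RejectedFrame("not valid JSON") from exc
    if not isinstance(msg, dict) or set(msg) != set(SCHEMA):
        raise RejectedFrame("unexpected fields")
    for key, typ in SCHEMA.items():
        # type() rather than isinstance(): bool must not pass as int.
        if type(msg[key]) is not typ:
            raise RejectedFrame(f"bad type for {key}")
    if not 1 <= msg["max_tokens"] <= 4096:  # assumed range
        raise RejectedFrame("max_tokens out of range")
    return msg


def verdict(frame: bytes) -> str:
    """Return 'accepted' or the rejection reason, for logging."""
    try:
        decode_request(frame)
        return "accepted"
    except RejectedFrame as exc:
        return str(exc)


# Constructed sample frames (not captured traffic).
REQUEST = {"model": "example-model", "prompt": "hi", "max_tokens": 32}
GOOD = json.dumps(REQUEST).encode()
PICKLED = pickle.dumps(REQUEST)  # what a recv_pyobj peer would send

if __name__ == "__main__":
    for name, frame in (("json", GOOD), ("pickle", PICKLED)):
        print(name, "->", verdict(frame))
```

Because the decoder only ever builds dicts, strings and integers, a frame cannot cause object construction on the receiving host, and the explicit pickle check gives a clear log line when an old client or an unexpected peer connects.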