It’s time we wake up. While AI’s promise to drive efficiencies presents an attractive proposition, we need to stay hyper-aware of AI’s inherent risks. The emergence of AI systems reminds me of the early days of cloud. Driven by the allure of getting products to market faster and cheaper, organizations went on a purchasing frenzy, piling systems on top of systems without proper cross-team coordination, strategy, or real purpose.Over time, it became clear that this web of cloud-based systems was unsafe, exposing security gaps and widening our attack surfaces. We’ve reached a point where organizations have enough AI systems that it’s creating another layer, an AI Attack Surface. Security pros are right to sound the alarm about the sophistication of threats emerging right before our eyes. Today, an organization's most precious assets extend beyond data, SaaS systems, and databases to a multitude of AI models and the systems they run. And unlike the days of cloud and SaaS, AI spreads much, much faster, outpacing the security team’s ability to prepare for unforeseen issues. Not only that, AI models are now making real decisions, not just storing or presenting information.Prompt Injection: Where attackers manipulate an AI model's output by embedding specific instructions in the prompt, similar to SQL injection in databases. Prompt Leaking: A subset of prompt injection designed to expose a model's internal workings or sensitive data, posing risks to data privacy and security. Data Training Poisoning: Involves corrupting a machine learning model's training dataset with malicious data to influence its future behavior, akin to an indirect form of prompt injection. Jailbreaking Models: Refers to bypassing the safety and moderation features of AI chatbots, such as ChatGPT or Google’s Bard, using prompt injection techniques that resemble social engineering. Model Theft: The unauthorized replication of an AI model by capturing a vast number of input-output interactions to train a similar model, which can lead to intellectual property theft. Model Inversion Attacks and Data Extraction Attacks: An AI model gets queried with crafted inputs to reconstruct or extract sensitive training data, threatening confidentiality. Membership Inference Attacks: Designed to determine if specific data was used in training a model by analyzing its responses, potentially breaching data privacy. Regardless of the attack, the threat actors aim to find a backdoor to an organization’s data and assets. And attackers will go to great lengths to attack these models to expose weaknesses. As a way to counteract these attacks, Adversarial Testing has grown in popularity, a methodology used to evaluate the robustness of algorithms by intentionally feeding them deceptive or malicious input. Inputs are crafted to cause a model to make a mistake; they look almost identical to genuine inputs, but are altered in ways that lead models to make incorrect predictions or classifications. These alterations are designed to be undetectable or insignificant to human observers, highlighting the discrepancy between how AI models and humans perceive data.Exposure Management has also grown in popularity among CISOs in recent years as a way to bolster an organization's security. As noted by Gartner, maintaining a continuously-updated inventory of an organization's attack surface, including the AI layer, has become imperative to security. Exposure Management already helps organize, test and prioritize any potential gaps in a traditional attack surface. This framework can easily extend to AI systems. By emphasizing automated asset discovery, evaluation of business significance, and regular assessment for potential vulnerabilities, it can let security teams not only identify, but also prioritize risks associated with AI systems and models. While not a silver bullet, it’s a systematic approach that can help security teams ensure AI assets are under continuous scrutiny in order to safeguard against any incoming threats. Unlike the cloud days, it’s not too late. These are still the early days of AI and there’s time to put proper measures in place while our “AI stacks” are still somewhat manageable.Security teams must proactively address these emerging threats by implementing robust security frameworks, investing in continuous monitoring, and fostering a culture of exposure management throughout our organizations. As we navigate these new waters, our collective effort in securing the AI attack surface will protect our assets and also ensure the responsible and safe evolution of AI technologies for future generations. By taking decisive action today, we can mitigate risks, safeguard our innovations, and harness the full potential of AI with confidence.Rob Gurzeev, chief executive officer, CyCognito
AI/ML, AI benefits/risks
Seven AI attack threats and what to do about them

(Adobe Stock)
An In-Depth Guide to AI
Get essential knowledge and practical strategies to use AI to better your security program.
Get daily email updates
SC Media's daily must-read of the most current and pressing daily news
You can skip this ad in 5 seconds