Security faced significant challenges in the past year. On the threat side, exploits and numerous zero-day attacks sent security teams scrambling. On the regulation side, the U.S. SEC instituted stringent incident reporting requirements, while in the EU, government bodies proposed new frameworks like the Artificial Intelligence Act in response to rising AI threats. Compounding these issues, the rise of novel AI threats has further complicated the work of the security industry. Organizations are now under pressure not only to defend against these advanced threats, but also to harness the power of AI for their own security programs.

Today, AI has been woven into the fabric of entire organizational systems, and its presence only continues to grow, blurring the lines of responsibility and ownership within a company. It's reminiscent of the early days of cloud and SaaS. As organizations transitioned to that new way of computing, new security threats emerged, leaving many unsure of who should bear responsibility. This uncertainty prompted calls for a more unified and collaborative approach to security, recognizing that protecting against evolving threats required collective effort.

It's no different with AI. However, AI moves and grows faster than anything we experienced with cloud-based technologies, which will require security leaders to think and act faster to get ahead of emerging threats. Three threat areas stand out:

Training on protected information: AI systems rely on data, often pulled from public sources. However, when private information gets used to train models, we need to take a very different approach to securing data. We must isolate the model so that data put in does not get served up to unauthorized individuals. This includes both the data we feed the model and the data the model collects as part of its normal functionality. Teams need to ensure that only authorized individuals can access the data served back (a simplified sketch of this control appears after the recommendations below).

Trusting AI to do too much, too early: While AI creates efficiencies in many everyday activities, it can also make errors. Over-reliance on AI without proper controls can lead to significant issues. For example, Air Canada's chatbot misinformed a customer about a bereavement policy, resulting in legal consequences for the company. This incident highlights the importance of ensuring that AI systems, especially those interacting with customers, are thoroughly checked and controlled to prevent misinformation.

Use of AI by malicious adversaries: New technology often attracts malicious adversaries who find ways to use it for harmful purposes. Bad actors might use AI to create deepfakes or misinformation campaigns. They could also leverage AI to develop sophisticated malware or highly effective phishing campaigns targeting companies.

To get ahead of these threats, security teams can take the following steps:

Understand vendor usage of AI: Identify the vendors that are leveraging AI in their software, and ask specific questions to understand how AI gets applied to the company's data. Determine whether vendors are training models on the data the company provides and what that means for protecting company data. The Cloud Security Alliance (CSA) offers excellent resources, such as its AI Safety Initiative, which includes valuable research and education on AI safety and security.

Demand transparency and control: Ensure transparency in how AI gets used in the products the company uses. For example, at our company, we are very transparent about our use of AI and even let customers turn it off if they are not comfortable using the technology.
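To make these controls concrete, here is a minimal Python sketch of the two patterns described above: a per-customer switch that disables AI features entirely, and an authorization filter applied to data before it ever reaches the model. Every name here (Tenant, Document, call_model, and so on) is hypothetical, illustrating the pattern rather than any vendor's actual API.

```python
# Minimal sketch of two controls: a per-tenant AI opt-out, and ACL
# filtering applied to data *before* it reaches the model.
# All names are hypothetical, not a real vendor API.

from dataclasses import dataclass, field


@dataclass
class Document:
    text: str
    allowed_users: set[str]  # ACL: user IDs permitted to read this content


@dataclass
class Tenant:
    ai_enabled: bool  # customer-facing opt-out: AI can be turned off entirely
    documents: list[Document] = field(default_factory=list)


def call_model(prompt: str, context: list[str]) -> str:
    # Stand-in for a real LLM call; a production system would invoke a
    # model API here with the pre-filtered context.
    return f"[answer to {prompt!r} from {len(context)} authorized document(s)]"


def answer_with_ai(tenant: Tenant, user_id: str, query: str) -> str:
    if not tenant.ai_enabled:
        # Respect the tenant's decision to disable AI features.
        return "AI features are disabled for this account."
    # Enforce ACLs before retrieval: the model only ever sees content the
    # requesting user is already authorized to read, so it cannot serve
    # protected data back to unauthorized individuals.
    context = [d.text for d in tenant.documents if user_id in d.allowed_users]
    return call_model(prompt=query, context=context)


# Usage: "bob" gets an answer built only from the public FAQ; the board
# deck, which he is not authorized to read, never reaches the model.
tenant = Tenant(
    ai_enabled=True,
    documents=[
        Document("Q3 board deck", allowed_users={"alice"}),
        Document("Public FAQ", allowed_users={"alice", "bob"}),
    ],
)
print(answer_with_ai(tenant, "bob", "summarize our plans"))
```

The key design choice is enforcing authorization before retrieval: the model can only echo content the requesting user was already allowed to read, rather than relying on filtering its output after the fact.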
These are the choices we should demand from our products. Find out which vendors are moving to a model where they train the AI on the company's sensitive data. That approach carries risk, and security teams need to decide on their own level of comfort.

Follow evolving community frameworks: There are many frameworks being developed, but two that I recommend looking at now are the NIST AI RMF and ISO 42001. Other resources, such as the OWASP AI Security and Privacy Guide and MITRE ATLAS, will help teams stay up to date on the latest developments.

AI security will require a collaborative approach and clear ownership of responsibilities within organizations. By understanding how vendors use AI, insisting on transparency, and keeping up with community resources, security teams can better protect their organizations from emerging threats.

Jadee Hanson, chief information security officer, Vanta