AI/ML, Data Security

Shadow AI emerges as growing security concern

Security leaders face a growing challenge as employees create and use artificial intelligence applications without IT oversight, a practice known as shadow AI, according to an article in VentureBeat.

These applications, designed to streamline tasks and enhance productivity, often operate without security controls, exposing organizations to risks such as data breaches, compliance violations, and reputational harm. According to security experts, many of these apps default to training on any data they receive, potentially leaking proprietary information into public models like OpenAI’s ChatGPT and Google Gemini. A recent audit of a financial firm found 65 unauthorized AI tools in use, far exceeding the security team’s estimates.

Research suggests that most shadow AI adoption stems from employees seeking efficiency rather than malicious intent. However, the unchecked use of these tools poses regulatory and cybersecurity risks, especially as governments impose stricter AI-related compliance requirements.

To mitigate these risks, experts advocate for centralized AI governance, enhanced security controls, and employee education. Establishing a vetted AI tool inventory, conducting audits, and integrating AI oversight with governance, risk, and compliance frameworks can help organizations balance security with innovation. Instead of banning AI outright, businesses are encouraged to implement secure, sanctioned solutions that meet operational needs while protecting sensitive data.
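The audit and inventory steps lend themselves to simple automation. Below is a minimal sketch of one such discovery pass, assuming a CSV export of web proxy logs with "user" and "domain" columns; the file name, keyword list, and allowlist entry are illustrative assumptions, not details from the article or from any specific vendor tool.

    import csv
    from collections import Counter

    # Hypothetical allowlist of sanctioned AI services (assumption, not from the article).
    SANCTIONED = {"copilot.example-internal.com"}

    # Hypothetical keywords for spotting AI-related domains in proxy logs.
    AI_KEYWORDS = ("openai", "chatgpt", "gemini", "anthropic", "huggingface")

    def find_shadow_ai(log_path):
        """Count requests to AI-looking domains that are not on the allowlist."""
        hits = Counter()
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                domain = row["domain"].lower()
                if domain in SANCTIONED:
                    continue
                if any(keyword in domain for keyword in AI_KEYWORDS):
                    hits[(row["user"], domain)] += 1
        return hits

    if __name__ == "__main__":
        # Print the ten heaviest user/domain pairs for follow-up review.
        for (user, domain), count in find_shadow_ai("proxy_log.csv").most_common(10):
            print(f"{user} -> {domain}: {count} requests")

A pass like this is only a starting point: feeding its results into an existing governance, risk, and compliance workflow, as the article recommends, is what turns one-off discovery into durable policy.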
