
Study: GenAI tools raise risk of sensitive data exposure

A recent study by Harmonic Security highlights significant risks of sensitive data exposure through generative AI tools such as OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, and others, reports Cybernews. The study, which analyzed tens of thousands of prompts, found that nearly 8.5% of business users' prompts may have disclosed sensitive information. Customer data, such as billing and authentication details, appeared in 46% of those incidents, while employee-related data, including payroll and performance reviews, accounted for more than a quarter of cases. Legal, financial, and proprietary security details sought after by threat actors were also frequently exposed, and sensitive code, including access keys and proprietary source code, made up the remainder. The study also found that many employees use free versions of these tools, which often lack adequate security controls. Despite these risks, the majority of generative AI use was deemed safe, centering on tasks such as text summarization, editing, and coding documentation. Experts stress that proper training and technical safeguards are essential to minimizing exposure and ensuring secure AI use.
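
One practical safeguard consistent with that advice is screening prompts for sensitive strings before they ever reach a GenAI service. The Python sketch below illustrates the idea with a handful of regex patterns; the pattern set, pattern names, and blocking policy are illustrative assumptions for demonstration, not details from the Harmonic Security study or any particular vendor's tooling.

```python
import re

# Illustrative patterns only; a real deployment would use a much
# broader, tuned rule set (and likely ML-based classifiers as well).
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    # Deliberately loose card-number check; real tools also validate checksums.
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

# Uses AWS's documented example access key ID, not a real credential.
prompt = "Summarize this deploy script: key=AKIAIOSFODNN7EXAMPLE"
findings = scan_prompt(prompt)
if findings:
    # Block or redact before the prompt ever reaches the GenAI service.
    print("Blocked: prompt contains", ", ".join(findings))
else:
    print("Prompt passed basic screening")
```

Enterprise data loss prevention products apply far richer detection than simple regexes, but even a basic pre-submission screen like this can stop access keys or payment card numbers from leaving the organization in a free-tier chatbot prompt.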
