38% of AI-using employees admit to sending sensitive work data

More than half of employees report receiving no training on secure AI use.

More than a third of employees who use AI for work admit to sending sensitive work information to AI applications without their employer’s knowledge, a survey by the National Cybersecurity Alliance (NCA) and CybSafe found.

The “Oh, Behave!” 4th Annual Cybersecurity Attitudes and Behaviors Report 2024-2025, which incorporated quantitative and qualitative insights from more than 7,000 participants across five generations and seven countries, included a section on AI for the first time this year.

The report, published Thursday, detailed the prevalence of personal and workplace AI use, attitudes toward AI use and AI-generated content, and the state of AI security training, finding that only 48% of employees had received any type of AI training.

The study, conducted in March and April of 2024, found that more than a quarter (27%) of respondents use AI tools at work, with OpenAI’s ChatGPT being the most popular tool, used by 65% of AI users. More than a third of employees, 38%, said they believed AI would increase their work productivity.

Most alarmingly, 38% of employed respondents who use AI said they have submitted sensitive work-related information to AI tools without their employer knowing, demonstrating the ongoing risk of “shadow AI.” Younger generations — Gen Z and Millennials — were more likely to use AI at work and more likely to submit sensitive work information to AI tools (46% and 43%, respectively).

Previous research has found shadow AI use and data leakage through AI are becoming more common in workplaces. A report published by Cyberhaven in May revealed that the vast majority of workplace AI use was through personal accounts rather than corporate accounts (73.8% for ChatGPT and 94.4% for Google Gemini), and that 27.4% of data sent to chatbots was sensitive — a 156% increase from the previous year’s report.

“While the security community is well aware of AI-related threats, this awareness hasn’t yet translated into consistent security practices across the workforce,” CybSafe CEO and Founder Oz Alashe said in a statement.

Respondents concerned about AI scams, election influence

Survey respondents also expressed concern about AI-related cybercrime as well as AI-generated content, such as phishing content and election-related misinformation. A majority — 65% — said they were concerned about AI-driven cybercrime, 55% said AI would make it harder to stay secure online and 52% said they believed AI would make online scams harder to detect.

With two major elections upcoming at the time the survey was conducted, the U.K. general election in July and the U.S. presidential election in November, more than a third of respondents (36%) said the rise of generative AI (GenAI) would influence their perceptions of what is real and fake online during election campaigns.

Respondents’ confidence in their ability to detect AI-generated content was mixed, with 36% expressing high confidence and 35% expressing low confidence. Confidence decreased with age, with 53% of Gen Z and Millennials having high confidence but only 17% of Baby Boomers and 8% of the Silent Generation having similar confidence.

There was a similar split in respondents’ trust in companies to implement AI responsibly, with 36% expressing high trust and 35% low trust; trust likewise decreased with age. Asked who should be most responsible for overseeing and regulating generative AI, 77% of respondents pointed to tech giants, 73% to national regulators, and 70% to the government.

In addition to insights about AI, the 139-page report covered many other aspects of respondents’ attitudes toward cybersecurity and online safety, including responsiveness to security training and confidence in identifying phishing. With AI likely to play a more significant role in these areas in the future, the NCA and CybSafe urged organizations to get on top of AI governance quickly.

The report concluded with recommendations to accept the reality of GenAI use in the workplace and prioritize helping employees use these tools safely. The authors also recommended that AI security training make the potential consequences of unsafe use clear, providing the necessary motivation to drive behavior change, rather than simply “dropping a brick of information.”

“While AI presents unique and urgent challenges, the core risks remain the same. Many employees understand what’s required to safeguard their workplace against cyber threats, but the key to strengthening organizational resilience lies in transforming that knowledge into regular, safe behavior,” Alashe stated.
