
‘Shadow SaaS’ used by two-thirds of security pros, survey finds


More than two-thirds of security professionals admitted to using unauthorized software-as-a-service (SaaS) tools, also known as “shadow SaaS,” in a survey published this week by Next DLP.

Of the more than 250 security professionals surveyed at RSA Conference 2024 and Infosecurity Europe 2024, 73% said they had used unauthorized tools, despite most respondents acknowledging the risks of data loss, data breaches and lack of visibility associated with shadow SaaS.

The Next DLP survey also touched on how generative AI tools, such as OpenAI’s ChatGPT, contribute to employees’ use of shadow SaaS and explored how organizations are addressing the problem of shadow SaaS and shadow AI.

“The pervasiveness of shadow SaaS and shadow AI use, even among security professionals, is alarming but not surprising. It reflects the broader trend we’re seeing in SaaS adoption across organizations,” Guy Rosenthal, vice president of product at DoControl, told SC Media in an email.

DoControl’s State of SaaS Data Security 2024 Report found that companies created an average of 14.9 million new SaaS assets in 2023, a 189% increase from the previous year, Rosenthal said. Add the boom of GenAI applications into the mix, and the risk of data compromise from shadow IT becomes especially potent.

“The recent incident where Samsung employees accidentally leaked private code through ChatGPT highlights the risks associated with unsanctioned AI use. It’s not just about data loss but potentially exposing our defenses to threat actors,” Netenrich CISO Chris Morales told SC Media.

Shadow IT, GenAI defenses lacking at many organizations

The security professionals surveyed by Next DLP reported a lack of organization-wide awareness and robust policies to address the problems posed by uncontrolled use of third-party SaaS and GenAI tools. Only 37% said their organization had developed clear policies and consequences for using these tools, and only 28% said it promoted approved alternatives to employees, according to Next DLP.

While half of the security professionals surveyed said AI use at their organization was restricted to certain roles, and 16% said it was banned outright, 40% said they did not believe staff at their organization sufficiently understood the risks associated with unauthorized AI and other SaaS apps.

Additionally, only half said they had received any guidance or updated policies on shadow IT, including AI, within the past six months, and one-fifth said they had never received guidance on these issues.

“For AI specifically, it’s crucial to establish a framework for safe usage rather than imposing blanket bans. These tools are becoming integral to our work, and we must adapt our security practices accordingly,” Morales said. “To my peers in the security field: We must set the standard. If we bypass security measures, it undermines our entire security posture.”

The potential consequences of shadow IT are not hypothetical: one-tenth of survey respondents reported that their organization had experienced a data breach or data loss due to the use of unauthorized applications.

Shadow IT can also haunt organizations after employees leave: DoControl’s State of SaaS Data Security 2024 Report found that at 90% of organizations, former employees continued to access assets stored in SaaS applications, sometimes up to two years after their departure, Rosenthal said.

Strategies to combat shadow SaaS and GenAI risks

Given the increasing prevalence of third-party SaaS tools and GenAI apps in the workplace, organizations need more robust and more frequently updated policies to combat unauthorized use, along with measures to build employees’ awareness of the associated risks.

“To CISOs grappling with this issue: This is an opportunity to align security with innovation. Talk to your teams about why people use ‘shadow IT’ and devise plans that balance security with the need to run operations smoothly,” Morales said.

Morales said Netenrich is addressing the issue proactively with an approach that includes implementing systems to identify shadow IT, streamlining approval processes for new applications, providing safe alternatives to popular but risky tools, and developing clear guidelines and training for GenAI use in the workplace.
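As a rough illustration of what “systems to identify shadow IT” can look like in practice, discovery often starts by cross-referencing outbound traffic against a sanctioned-app list. The Python sketch below is purely hypothetical and is not drawn from Netenrich’s or Next DLP’s tooling; the file names, CSV layout and “.internal” suffix are assumptions for the example.

```python
# Hypothetical sketch: flag SaaS domains seen in web-proxy logs that are not on
# the organization's sanctioned-app list. Assumes a CSV export with a "domain"
# column and a plain-text allowlist; a real deployment would pull from a CASB,
# secure web gateway, or DNS resolver instead.
import csv
from collections import Counter

SANCTIONED_FILE = "sanctioned_domains.txt"   # e.g. "office.com", "salesforce.com"
PROXY_LOG_FILE = "proxy_log_export.csv"      # columns: timestamp, user, domain, ...

def load_sanctioned(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def find_shadow_saas(log_path: str, sanctioned: set[str]) -> Counter:
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            # Count any domain that is neither sanctioned nor an internal host.
            if domain and domain not in sanctioned and not domain.endswith(".internal"):
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    sanctioned = load_sanctioned(SANCTIONED_FILE)
    for domain, count in find_shadow_saas(PROXY_LOG_FILE, sanctioned).most_common(20):
        print(f"{count:6d}  {domain}")
```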

“Fostering a culture of responsible innovation can help us harness the power of new technologies while maintaining robust security practices,” Morales said.

Rosenthal likewise acknowledged that understanding employees’ reasons for turning to tools like GenAI is part of building an effective response to shadow IT.

“Employees, including security staff, are under pressure to be productive and innovative. They’re naturally drawn to tools that can help them work more efficiently, even if those tools aren’t officially sanctioned,” Rosenthal said.

With the rapid growth of SaaS and employees’ tendency to “overshare” potentially sensitive information with GenAI apps and other third-party services, organizations need effective strategies that allow them to catch up with their employees’ SaaS adoption.

“It’s crucial for companies to implement robust SaaS Security Posture Management (SSPM) solutions to gain visibility into all SaaS usage, including shadow IT, and enforce consistent security policies,” Rosenthal concluded.
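As a loose sketch of what “consistent security policies” could look like once that visibility exists, the snippet below maps discovered SaaS apps to policy decisions. The policy categories, app domains and default behavior are invented for illustration and are not taken from DoControl’s SSPM product or the Next DLP survey.

```python
# Hypothetical sketch: map each discovered SaaS app to a policy decision.
from enum import Enum

class Policy(Enum):
    SANCTIONED = "allow"
    TOLERATED = "allow, pending security review and DLP monitoring"
    BANNED = "block and point the user to an approved alternative"

# Invented policy table for illustration; an SSPM platform would build its app
# inventory from OAuth grants, SSO logs and API integrations rather than a
# hard-coded list.
POLICY_TABLE = {
    "salesforce.com": Policy.SANCTIONED,
    "notion.so": Policy.TOLERATED,
    "chat.openai.com": Policy.TOLERATED,
    "randomfilesharing.example": Policy.BANNED,
}

def decide(app_domain: str) -> Policy:
    # Unknown apps default to TOLERATED so they surface for review
    # rather than silently slipping through or breaking workflows.
    return POLICY_TABLE.get(app_domain, Policy.TOLERATED)

for app in ("salesforce.com", "chat.openai.com", "randomfilesharing.example", "newtool.example"):
    print(f"{app:30s} -> {decide(app).value}")
```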

