AI/ML, Generative AI

How to keep innovating with GenAI without compromising security  

Share
GenAI

The generative artificial intelligence (GenAI) innovations and advancements over the past 18 months have been unmatched. Gartner predicts that by 2026, more than 80% of enterprises will have deployed GenAI applications in production environments and/or used GenAI application programming interfaces or models. This number jumped from less than 5% in 2023. 

But GenAI application development and deployment has been heavily laden with security risks, and security has not kept pace with the speed of innovation. It’s part of a broader trend when it comes to AI security: 82% of respondents to a recent IBM Institute for Business Value study acknowledged that secure and trustworthy AI is now essential to the success of their business, but 69% of those surveyed still said that innovation precedes security.

How can organizations derive value from GenAI application development and deployment without compromising security?

Risks and opportunities building GenAI apps

Organizations can derive the most value by customizing AI models with proprietary data; taking a generic model and using it off-the-shelf adds limited value. Gartner predicts that by 2027, more than 50% of the GenAI models that enterprises use will be specific to either an industry or business function – up from about 1% in 2023. 

Many organizations use the retrieval augmented generation (RAG) architecture for their GenAI apps. RAG architecture supplies an organization’s proprietary data to a large language model (LLM), helping to train such models, as well as create prompts for the LLM to generate accurate, desired outputs. This approach lets teams develop an application that’s specific to their organization and its unique needs. 

But this creates the risk of confidential data flowing out of the organization and even being used to train other models. Proprietary data gets supplied in the LLM in the form of a prompt, which can then be subject to attack. 

The risk of sensitive or confidential data exfiltration has been top-of-mind when using these models, but we don’t have to live this way. Setting governance and security controls at both the ingress and egress level helps address top security issues. We’ll need fine-grain controls to regulate the data and traffic coming in from the internet and external sources, as well as for the traffic leaving the GenAI application. The main consideration there is that specific data does not leave the enterprise in an unregulated way. 

Tension builds between developers and security teams  

Organizations looking to develop and deploy GenAI applications must empower their developers and give them the freedom to experiment. But the platform engineers and security teams tasked with avoiding and mitigating security risks want to establish as many controls as possible. This leads to tension between these two equally important groups. 

We must wire security at every level of the GenAI development and deployment cycle to avoid potentially devastating consequences. Today, teams find setting governance and security guardrails a big challenge. Platform and/or security engineers must set granular controls at the GenAI application level, both on the ingress and egress side, to regulate what can come in and what can go out. Without the guardrails, it’s impossible to empower and let developers securely experiment and innovate.  

To drive mutually-beneficial relationships, security leaders should include developers in their security efforts, allowing them to participate in setting their own security controls, and providing the proper tools to accomplish their goals.

Where does open source fit in?

Open source creates further opportunities and risks. Presuming organizations have multiple GenAI applications – and each application is composed of multiple services – they are likely using a healthy dose of open source, both in their applications, and some off-the-shelf open source models.

While users can pull something off-the-shelf and leverage it, there are risks in these models, as it’s impossible to guarantee that these models have not been breached. It’s critical to enforce some level of multi-tenancy, or some level of isolation between the applications. That way, if one application has been compromised, the boundaries are in place to ensure that the rest of the applications do not get compromised. Being able to enforce those controls, to establish isolation at the application level – or even at the namespace level – becomes a must-have. These security guardrails are used as a preventative measure to ensure that if something does get compromised, the user can contain the breach with a limited blast radius.

There’s an incredible amount of excitement and energy around GenAI, but we can’t sacrifice security for all this added innovation. A decade ago, we saw an implosion of SaaS companies, and we’re starting to witness something similar: thousands of GenAI companies are being established to tackle different problems and take on niche categories.

But they themselves have to establish security controls and guardrails to prevent security incidents, and to protect data so that enterprises using their services don’t run into security issues. Now’s the time for everyone to start prioritizing security because it’s critical to the future success of secure GenAI application development and deployment.

Ratan Tipirneni, president and CEO, Tigera

How to keep innovating with GenAI without compromising security  

The innovation we’ve seen around GenAI has been very exciting, but for AI to truly succeed, teams need to get a handle on securing all these new apps.

Get daily email updates

SC Media's daily must-read of the most current and pressing daily news

By clicking the Subscribe button below, you agree to SC Media Terms of Use and Privacy Policy.