It’s hard, if not impossible, to find a product that has swept the technological zeitgeist and reached more people faster than generative AI. This rapid adoption brings promises of increased worker productivity and innovation. It also carries risks: data leakage, data poisoning, attackers armed with technology that amplifies their efforts, and more. There are also bound to be new threats and risks from this technology that we have yet to understand. Amid that uncertainty, one thing is clear: enterprises had better form a generative AI policy sooner rather than later.
If there’s one thing previous waves of transformative technology have taught us, it’s that the companies that succeed in managing a new technology are not the ones that attempt to ban it outright. Enterprises with sensible AI policies will succeed. Sophos recently published a framework to help organizations craft an appropriate-use policy.
Consider generative AI uses carefully
As Sophos’ policy guidance suggests, using generative AI in the enterprise demands careful consideration. The policy’s scope should be defined, along with responsibility for securing and managing the overall program, the data sources, and the classification of data used by the generative AI models.
Managing and securing generative AI starts with policy: the rules or guidelines an organization establishes to govern how generative AI is used within it. The policy’s scope must be determined, and organizations must also define to whom it applies. Such a policy should also mitigate risks such as inaccurate or unreliable outputs, biased or inappropriate outputs, security vulnerabilities, IP and privacy concerns, legal uncertainties, and vendor license terms and conditions that may be unacceptable.
Next, the AI training data should be collected, stored, managed, and used by the models in a way that complies with government regulations and company security policy. Finally, organizations must put in place the ability to continuously monitor the behavior of their AI models, the data fed into them, and how they are being used, so that any model can be shut down if it behaves suspiciously or maliciously or begins leaking sensitive data.
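That monitor-and-shut-down capability can be sketched as a simple output guardrail. This is only an illustrative sketch, not part of any Sophos tooling; the names (`ModelMonitor`, `SENSITIVE_PATTERNS`) and the regex patterns are hypothetical stand-ins for whatever detection rules an organization actually adopts.

```python
import re

# Hypothetical patterns suggesting sensitive data in a model's output.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US-SSN-like numbers
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # leaked API keys
]

class ModelMonitor:
    """Tracks a model's outputs and disables the model on a policy breach."""

    def __init__(self, model_name: str, breach_threshold: int = 1):
        self.model_name = model_name
        self.breach_threshold = breach_threshold
        self.breaches = 0
        self.enabled = True

    def check_output(self, text: str) -> bool:
        """Return True if the output is clean; disable the model otherwise."""
        if any(p.search(text) for p in SENSITIVE_PATTERNS):
            self.breaches += 1
            if self.breaches >= self.breach_threshold:
                self.enabled = False  # "shut down" the model pending review
            return False
        return True

monitor = ModelMonitor("internal-chat-model")
assert monitor.check_output("The quarterly report is ready.")
assert not monitor.check_output("Customer SSN: 123-45-6789")
assert not monitor.enabled  # model is now disabled pending review
```

In practice the breach threshold, the pattern set, and the shutdown action (revoking API credentials, paging an on-call team) would all be dictated by the organization’s policy rather than hard-coded.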
AI hygiene
The hygiene of the data that feeds the models is also critical. Organizations should detail the process for cleaning, enriching, and validating the data used to train their generative AI models, and define who is responsible for these processes and the protocols they are expected to follow.
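A cleaning-and-validation step like the one described above might look like the following minimal sketch. The helper names, the email-redaction rule, and the length threshold are all assumptions for illustration; a real pipeline would implement whatever hygiene protocol the policy specifies.

```python
import re

def clean_record(text: str) -> str:
    """Normalize whitespace and redact email addresses before training."""
    text = re.sub(r"\s+", " ", text).strip()
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED_EMAIL]", text)

def validate_record(text: str, min_length: int = 10) -> bool:
    """Accept only non-trivial records; reject empty or very short ones."""
    return len(text) >= min_length

def build_training_set(raw_records: list[str]) -> list[str]:
    """Clean, validate, and deduplicate raw records for training."""
    seen: set[str] = set()
    result = []
    for rec in (clean_record(r) for r in raw_records):
        if validate_record(rec) and rec not in seen:
            seen.add(rec)
            result.append(rec)
    return result

records = [
    "Contact sales@example.com   for pricing details.",
    "Contact sales@example.com   for pricing details.",  # duplicate
    "too short",
]
dataset = build_training_set(records)
# dataset holds one cleaned, deduplicated, email-redacted record
```

The design point is that every record passes through the same documented clean-validate-dedupe path, which makes the hygiene process auditable.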
Data security is also essential: organizations must define security and access policies for their existing models as well as for model development, and the data that feeds the models must itself be secured.
Don’t forget bring-your-own AI
Of course, the generative AI policy can’t just cover models developed in-house; it must also mitigate the risks of bring-your-own AI. To ensure, as far as is realistically possible, that AI services chosen by staff don’t leak sensitive data or return inaccurate, biased, or malicious results, an internal team must vet each AI service. Organizations should define an approval process not only for adopting a generative AI platform but also for each new use case of generative AI. Sophos suggests that a “prohibited by default, approved by exception” policy could be useful. Once services are approved, they can be added to an internal “GenAI” store. Organizations can also consider developing their own training or certification for staff who will use generative AI.
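The “prohibited by default, approved by exception” rule maps naturally onto a default-deny allowlist. The sketch below is hypothetical (the service names and use cases are invented), but it shows the shape of the check: a request passes only if both the service and the specific use case have been vetted.

```python
# Hypothetical allowlist for a "prohibited by default, approved by exception"
# policy. Keys are vetted services; values are the use cases approved for each.
APPROVED_SERVICES = {
    "internal-summarizer": {"meeting-notes", "report-drafting"},
    "vendor-codegen": {"unit-test-generation"},
}

def is_request_allowed(service: str, use_case: str) -> bool:
    """Default-deny: only vetted service/use-case pairs pass."""
    return use_case in APPROVED_SERVICES.get(service, set())

assert is_request_allowed("internal-summarizer", "meeting-notes")
assert not is_request_allowed("internal-summarizer", "customer-data-analysis")
assert not is_request_allowed("unvetted-chatbot", "meeting-notes")
```

Because anything absent from the allowlist is denied, each new service or use case forces a trip through the approval process rather than slipping in silently.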
Finally, Sophos suggests reducing risk while encouraging exploration, staff curiosity, and trial and error: the hallmark traits of the organizations that succeed most with generative AI. Companies should design their generative AI policies to capture the most benefit from AI while balancing risk according to the nature of the organization. The policy should also be integrated with other policies throughout the organization, such as security, data privacy and management, ethics, regulatory compliance, and whatever else is appropriate to the business and its industry.
Consider forming a steering committee, and remember to schedule regular audits, risk assessments, and ongoing policy refinement.
One thing is sure: crafting successful generative AI usage and security policies will take experimentation, regularly dropping the parts of the policy that don’t work and tightening areas that prove too lenient. But taking a solid framework and tailoring it today is the best first step an organization can take toward adopting generative AI responsibly. For a more detailed look at Sophos’ framework for building a use policy for generative AI, visit their post here.