We’re all still coming to grips with the exciting possibilities of generative AI (GenAI), a technology that can create realistic and novel content such as images, text, audio, and video. Its use cases span the enterprise: it can enhance creativity, improve productivity, and generally help people and businesses work more efficiently. Few technologies on the horizon stand to transform how we work more drastically.
However, GenAI also poses significant cybersecurity and data risks. From seemingly innocuous user prompts that contain sensitive information (which AI systems can collect and store) to large-scale malware campaigns, generative AI has nearly single-handedly expanded the ways modern enterprises can lose sensitive data.
Most LLM providers are only now starting to treat data security as part of their strategy and customer commitments. Businesses must adapt their security strategies accordingly, because GenAI risks are proving to be multi-faceted threats rooted in how users both inside and outside the organization interact with these tools.
What we know so far
GenAI systems can collect, store, and process large amounts of data from various sources, including user prompts. This ties into five primary risks organizations face today:
- Data leaks: If employees enter sensitive data into GenAI prompts, such as unreleased financial statements or intellectual property, the enterprise takes on third-party risk akin to storing that data on a file-sharing platform. Tools such as ChatGPT or Copilot could also leak that proprietary data when answering prompts from users outside the organization (a simple pre-submission screening sketch follows this list).
- Malware attacks: GenAI can generate new and complex types of malware that evade conventional detection methods, and organizations may face a wave of new zero-day attacks as a result. Without purpose-built defence mechanisms in place, IT teams will struggle to keep pace with threat actors. Security products need to apply the same technologies at scale to stay ahead of these sophisticated attack methods.
- Phishing attacks: The technology excels at creating convincing fake content that mimics real content but contains false or misleading information. Attackers can use this fake content to trick users into revealing sensitive information or performing actions that compromise the security of the business. Threat actors can create new phishing campaigns, complete with believable stories, pictures, and video, in minutes, and businesses will likely see a higher volume of phishing attempts as a result. Deepfakes are already being used to spoof voices in targeted social engineering attacks and have proven highly effective.
- Bias: LLMs can produce biased responses, returning misleading or incorrect information when the underlying models were trained on biased data.
- Inaccuracies: We’ve also seen that LLMs can accidentally deliver the wrong answer when analysing a question, because they lack human understanding and the full context of a situation.
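Of these risks, data leaks are the one where technical controls are most concrete today. The following is a minimal sketch, assuming a simple regex-based screening layer that inspects prompts before they leave the organization; the patterns, placeholder names, and `redact_prompt` helper are illustrative assumptions for this article, not a production DLP policy or any vendor’s actual API.

```python
# A minimal sketch of pre-submission prompt screening.
# Assumption: a simple regex policy; real DLP tooling uses far richer
# classifiers (ML-based entity detection, document fingerprinting, etc.).
import re

# Illustrative patterns only; a real policy would be organization-specific.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Summarize Q3 results. Card on file: 4111 1111 1111 1111."
    clean, hits = redact_prompt(raw)
    print(clean)  # card number replaced with a placeholder
    print(hits)   # ['credit_card']
```

The design point is that screening happens before the prompt reaches any third-party tool: once sensitive data is submitted, the organization has effectively lost control of it. Regex matching is only a first line of defence; mature controls layer on contextual classifiers and human review for borderline cases.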