There’s been any number of news releases around artificial intelligence (AI) this week, as the industry and government look to chart a path forward with these new technologies.From a hands-on industry perspective, Google announced its new bug bounty program in which it aims to take a fresh look at how bugs are categorized and reported.The United Nations and OpenAI also announced that they plan to study AI in the coming months, with OpenAI focused on what they called “catastrophic risk.” All of this comes on top of the Biden administration expected to roll out an executive order (EO) around AI sometime this coming week.In a blog post Oct. 26, Google pointed out that generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation, or misinterpretations of data (hallucinations). “As we continue to integrate generative AI into more products and features, our Trust and Safety teams are leveraging decades of experience and taking a comprehensive approach to better anticipate and test for these potential risks,” said Google’s Laurie Richardson and Royal Hansen in the blog. “But we understand that outside security researchers can help us find, and address, novel vulnerabilities that will in turn make our generative AI products even safer and more secure.”Google plans to expand its vulnerability rewards program (VRP) to include attack scenarios around prompt injections, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks that trigger misclassification, and model theft.Alex Rice, co-founder and CTO of HackerOne, said Google’s expansion of its bug bounty program is a signal for where all bug bounty programs are headed. Rice said the ethical hacker community is a great resource to explore emerging technology because they’re often at the forefront of researching how these kinds of technologies can be exploited.“I foresee GenAI becoming a significant target for hackers and a growing category of bounty programs,” said Rice.Rice pointed out that research from HackerOne validates this: 55% of the hacker community on the HackerOne platform say GenAI tools will become a major target for them in the coming years, and 61% say they plan to use and develop hacking tools that employ GenAI to find more vulnerabilities. Another 62% of hackers also said they plan to specialize in the OWASP Top 10 for large language models (LLMs). Casey Ellis, founder and CTO at Bugcrowd, added that Bugcrowd has been active in AI and ML testing as far back as 2018, involved in the AI Village and Generative AI Red Teaming events, and working with a number of the leading AI players that popped into the public collective consciousness in 2022-2023. Ellis said AI has captured the imagination of hackers, with more than 90% reporting that they use AI in their hacking toolchains as per a recent Bugcrowd survey.“AI testing mostly augments, rather than replaces, traditional vulnerability research and bug hunting for those who are already experienced in the latter,” said Ellis. “The part that's exciting is that the barrier to entry for AI testing is much lower for a very large number of people, since the only language a prospective hacker needs to know in order to get started is the one they're probably already using.”
AI benefits/risks, Generative AI
Google launches AI bug bounty program as organizations plan to study risks

OpenAI said it plans to study what they called “catastrophic risk” associated with generative AI. (Adobe Stock)
Get daily email updates
SC Media's daily must-read of the most current and pressing daily news
Related Terms
AlgorithmYou can skip this ad in 5 seconds