Leading AI firms volunteer security commitments to Biden administration

President Joe Biden speaks about artificial intelligence at July 21 meeting at the White House with, from left to right: Adam Selipsky, CEO of Amazon Web Services; Greg Brockman, president of OpenAI; Nick Clegg, president of Meta; and Mustafa Suleyman, CEO of Inflection AI. (Photo by Andrew Caballero-Reynolds/AFP via Getty Images)
Security was a critical component of the “voluntary commitments” around artificial intelligence that the Biden administration said it obtained from seven leading AI companies that met with the president at the White House on Friday.

In a fact sheet, the Biden administration said it plans to develop an executive order and pursue bipartisan legislation to help the United States take the lead in AI innovation, including red teaming, deploying watermarks, and monitoring for insider threats.

The seven AI companies that met with the administration included representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.

Under the commitments, the companies agreed that AI firms have a duty to build systems that put security first. That means safeguarding their AI models against cyber and insider threats, and sharing best practices and standards to prevent misuse, reduce risks to society and protect national security.

The AI companies also recognized that it’s important for people to know when audio or visual content is AI-generated. To advance this goal, they agreed to develop “watermarking systems” for audio or visual content created by any of their publicly available systems. They also agreed to develop APIs to determine whether a particular piece of content was created with their systems.

These “watermarks” are considered important because, although invisible to the human eye, they let computers detect that text more than likely comes from an AI system. If watermarks are embedded in large language models (LLMs), industry experts believe they could help defenders stop attacks.

Impact on offense and defense: Both malicious actors and defenders will use AI in their activities. However, whether AI will favor the attacker or the defender is still up for debate, and it might end up being a wash. For example, while generative AI will let malicious actors write better phishing emails, defenders can also use AI to help detect phishing emails.
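As a rough illustration of how a text watermark can be machine-detectable while invisible to a human reader, here is a toy sketch in the style of published “green list” watermarking research. It is an assumption for illustration only, not any vendor’s actual scheme: the generator prefers tokens from a pseudorandom subset of the vocabulary seeded by the previous token, and a detector recomputes that subset and scores how often the text lands in it.

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary
GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def green_list(prev_token: str) -> set:
    """Derive the 'green' subset of the vocabulary from a hash of the
    previous token, so generator and detector agree without shared state."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate_watermarked(length: int, seed: int = 0) -> list:
    """Toy 'model' that always picks its next token from the green list."""
    rng = random.Random(seed)
    tokens = [rng.choice(VOCAB)]
    for _ in range(length - 1):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens

def detect(tokens: list) -> float:
    """Return a z-score for the count of green tokens: ordinary text scores
    near zero, while watermarked text scores far above it."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_list(prev))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / (variance ** 0.5)
```

A detection API of the kind the companies describe could wrap a check like `detect(...)` behind a threshold, returning “likely AI-generated” when the score is statistically improbable for human text.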
Daniel said we’ll need more analysis to determine whether AI will tilt the balance toward attackers or defenders.

Security of AI itself: Because AI tools are still relatively new, we don’t know the most effective practices for protecting them from disruption or manipulation. Data poisoning and algorithmic tampering are hard to identify: has the AI been corrupted, or is it just hallucinating? Right now, there’s a lot we still don’t know about how to secure AI systems themselves. However, there are steps we can take where the problem looks more like traditional cybersecurity. For example, accounts with administrative privileges on AI systems should use multi-factor authentication and be limited in the activities they can perform.

Impact on the broader ecosystem: As generative AI comes into widespread use, some organizations are hardening their APIs to prevent data scraping. However, these actions reduce the data available for any purpose, including estimating how widespread a vulnerability is across the ecosystem.

“Ultimately, we will need to address all three of these security issues,” said Daniel. “While it’s still not clear what kind of testing will be involved, we can imagine testing that would be typical for any network application: does the company monitor and log code updates?”

Mike Britton, chief information security officer at Abnormal Security, said he believes it’s still an open question whether the federal government will need to regulate AI.

“Some will say voluntary systems have been proven, such as in the ad-tech space, but others argue that regulations such as GDPR were necessary because ad-tech didn’t do a good enough job of policing itself,” said Britton. “The most significant regulation will be around ethics, transparency and assurances in how the AI operates, and having some mechanism that still requires a human component.
Any good AI solution should also enable a human to make the final decision when it comes to executing — and potentially undoing — any actions taken by AI.”

Cybersecurity pros have also wondered whether it’s possible to make AI easier for industry professionals to use for defensive purposes, but harder for threat actors to leverage for malicious ones.

“In a word: ‘no,’” said Mike Parkin, senior technical engineer at Vulcan Cyber. “Cybersecurity professionals will generally bind themselves to the rules and commit to doing their job legally and ethically. Malicious actors put themselves under no such constraint. While they may have a challenge accessing some of the larger commercial engines that follow the guidelines, there’s nothing to keep them from investing in their own, or hostile nation-states from creating purpose-built engines for the task.”

Damir Brescic, chief information security officer at Inversion6, added that the commitments highlight the importance of data privacy and security.

“As a cybersecurity expert, I appreciate the focus on safeguarding personal and sensitive data in AI systems,” said Brescic. “Developers and organizations are urged to implement robust data protection measures, including encryption and access controls, to prevent unauthorized access or misuse of the data.

“However, the guidelines should have done more to emphasize the need for ongoing monitoring and vulnerability assessments to identify and mitigate potential security risks associated with AI systems,” he continued. “More work here is clearly going to be needed. I wouldn’t be surprised if a net new AI certification process evolved out of this initiative.”
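The traditional controls Daniel cites for AI-system administrative accounts, multi-factor authentication plus least privilege, can be sketched as a simple audit check. The account fields and the privilege baseline below are hypothetical, intended only to show the shape of such a check:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Account:
    name: str
    is_admin: bool
    mfa_enabled: bool
    allowed_actions: frozenset

# Hypothetical least-privilege baseline for AI-platform admin accounts.
ADMIN_BASELINE = frozenset({"deploy_model", "rotate_keys", "view_audit_log"})

def audit(accounts):
    """Flag admin accounts that lack MFA or hold actions beyond the baseline."""
    findings = []
    for acct in accounts:
        if not acct.is_admin:
            continue  # least-privilege review here targets admin accounts only
        if not acct.mfa_enabled:
            findings.append((acct.name, "MFA disabled"))
        extra = acct.allowed_actions - ADMIN_BASELINE
        if extra:
            findings.append((acct.name, f"excess privileges: {sorted(extra)}"))
    return findings
```

In practice a check like this would run against an export from the organization’s identity provider, with the baseline set per role rather than hard-coded.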