
Cybersecurity experts praise veto of California’s AI safety bill


A controversial artificial intelligence safety bill, Senate Bill 1047, was vetoed by Gov. Gavin Newsom on Sunday after passing through the California legislature in late August.

SB 1047 would have placed safety-testing requirements on developers of AI models trained using computing power greater than 10^26 floating-point operations and at a cost of at least $100 million, making those developers potentially civilly liable for AI-related mass casualty events or incidents causing more than $500 million in damages.
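For a rough sense of what that compute trigger means, a common back-of-the-envelope estimate puts dense-transformer training compute at about 6 x parameters x tokens. The sketch below uses that approximation with hypothetical model sizes (not figures from the bill) to show which kinds of training runs would have crossed the 10^26 threshold.

```python
# Back-of-the-envelope check against SB 1047's training-compute trigger.
# Uses the common ~6 * parameters * tokens approximation for training FLOPs;
# the model sizes below are hypothetical examples, not figures from the bill.

SB1047_FLOP_THRESHOLD = 1e26  # the bill's training-compute trigger


def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Standard dense-transformer estimate: roughly 6 FLOPs per parameter per token."""
    return 6 * num_parameters * num_training_tokens


examples = {
    "70B params, 15T tokens": (70e9, 15e12),
    "1T params, 15T tokens": (1e12, 15e12),
}

for name, (params, tokens) in examples.items():
    flops = estimated_training_flops(params, tokens)
    status = "over" if flops >= SB1047_FLOP_THRESHOLD else "under"
    print(f"{name}: ~{flops:.2e} FLOPs -> {status} the 1e26 threshold")
```

Under this approximation, a 70-billion-parameter model trained on 15 trillion tokens lands well below the trigger, while a trillion-parameter run on the same data approaches it, illustrating why the bill's critics said it targeted only the largest frontier training runs.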

Despite some changes made to the bill prior to its passage, such as the removal of potential criminal liabilities, critics remained unsatisfied, saying SB 1047 could put an undue burden on startups and open-source model providers while focusing on theoretical futuristic scenarios rather than current, realistic AI risks and threats.

In his letter returning the bill without signature to members of the California State Senate, Newsom recognized the need for AI safety regulations and noted previous AI-related bills he has signed while addressing what he considered to be SB 1047’s pitfalls.

“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” Newsom wrote. “Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancements in favor of the public good.”

AI, cybersecurity experts largely echo Newsom’s stance on SB 1047

Several AI and cybersecurity professionals who spoke with SC Media about the news saw the veto as a wise decision, perceiving SB 1047’s approach to AI safety as misguided.

“While the bill had good intentions, it was too flawed to pass. Much of what it aimed to regulate dealt with the ‘foreseeable future’ rather than the immediate concerns. There’s plenty to address today, such as establishing bias guidelines for training data, ensuring privacy within datasets, and making it clear what data was used to train the models currently in use,” said Jim Liddle, chief innovation officer of data intelligence and AI at hybrid cloud data platform company Nasuni.

Newsom has signed multiple AI privacy and safety-related bills into law previously, including 17 signed over the past month related to deepfakes, AI watermarking and AI-generated misinformation.

SB 1047, however, ran into trouble due to its focus only on developers of high-cost models, its potential threats to innovation, and a lack of empirical evidence backing the problem and solution it attempted to present, according to Newsom’s statements.

“There is not a direct correlation between model size and risk, and these bills mistakenly address the computational power required to train Large Language Models (LLMs) and overlook that small, specialized, and powerful models may be far more equipped to do harm than large natural language processing (NLP) models,” noted David Brauchler, technical director at cybersecurity consulting firm NCC Group, echoing Newsom’s sentiment.

“The risk of automated spam is far more real and ongoing than the risk of severe injury to person or property caused by AI. Safety risks most often arise due to poor model integration (e.g. implementing a poor-performance model in a self-driving car) rather than emerge from the model itself,” Brauchler added.

Experts also viewed the regulation as premature, arguing it was drafted without a full understanding of how the rapidly evolving technology can best be tested and secured.

“The key takeaway for tech companies and lawmakers is the importance of investing in a better understanding of generative AI and its safety requirements. Academic research plays a critical role in this, and government partnerships with academic institutions are essential to generate the knowledge needed to establish effective safeguards and guide responsible regulation,” said Manasi Vartak, chief AI architect at hybrid data company Cloudera.

What’s next after SB 1047’s veto?

The failure of SB 1047 does not spell the end for AI safety regulation in California or the rest of the United States. Even those opposed to the bill are hoping that more nuanced and evidence-based regulations will soon take its place.

“Governor Newsom’s rejection of SB1047 demonstrates commendable foresight in AI regulation, but it must be the prelude to swift, targeted action,” said James White, president and chief technology officer at AI security company CalypsoAI. “Now, Newsom and legislators must capitalize on this moment to expedite right-sized legislation. We need smart, adaptable laws that account for business size, potential impact, and the crucial distinctions between AI training and inference.”  

Many more laws regulating AI are still coming down the pike across the country, with OneTrust Chief Ethics and Compliance Officer Jisha Dymond noting more than 80 state and 30 federal draft laws have been considered. Additionally, regulatory bodies like the Federal Trade Commission are taking AI safety into their own hands to deal with pressing issues like data privacy in AI training.

“In the continued absence of federal privacy or AI legislation in the US, the FTC has presented algorithmic disgorgement, or model deletion, as a powerful enforcement tool requiring companies to delete data and associated products built on unlawfully obtained information,” said Dymond. “Sectors like healthcare, financial services, and law enforcement are under the microscope due to the high stakes involved with sensitive data and potential harm.”

National and international guidelines and standards, like the National Institute of Standards and Technology (NIST) AI Risk Management Framework, while not legally enforceable, are also positive steps toward improving AI security. However, experts concerned with ongoing AI safety risks emphasize laws will be needed to ensure widespread compliance with security testing and risk management standards.

“Elements like third-party audits and senior-level accountability measures are positive steps that promote responsible and trustworthy AI frameworks, aligning with AI standards such as ISO 42001, the first global standard for AI management systems. On the other hand, the opportunity to refine and improve the bill is a chance to better balance innovation with safety, especially considering the evolving nature of AI technology,” said Danny Manimbo, AI assessment leader and principal at IT compliance attestation company Schellman.

“Most of the big companies developing AI are already putting a big focus on ‘do no harm’ — for example, as shared by Anthropic in their detailed papers about Claude models, and Google about Gemini models, which even have safety settings for different topics showing how much Google cares about this topic,” noted AppOmni Artificial Intelligence Director Melissa Ruzzi. “But to make sure all players will follow the rules, laws are needed. This removes the uncertainty and fear from end users about which AI is being used in an application.”
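The per-topic safety settings Ruzzi references are exposed directly to developers. As one illustration, a minimal sketch using the google-generativeai Python SDK is shown below; the category and threshold names reflect that SDK and the model name is only an example, so details may differ in newer Gemini SDK releases.

```python
# Minimal sketch of per-category safety settings via the google-generativeai SDK.
# Assumes `pip install google-generativeai` and a GOOGLE_API_KEY environment variable;
# category/threshold names follow that SDK and may differ in newer Gemini SDKs.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel(
    "gemini-1.5-flash",
    safety_settings=[
        # Each entry tunes how aggressively one harm category is filtered.
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
        {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_LOW_AND_ABOVE"},
    ],
)

response = model.generate_content("Summarize best practices for safe LLM deployment.")
print(response.text)
```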

