AI/ML, Governance, Risk and Compliance, Government Regulations, Threat Intelligence

‘Big Beautiful Bill’ could bring new challenges for AI, security pros


The second Trump administration has brought no shortage of controversy, along with questions about the future of the enterprise technology space.

Along with major initiatives, the administration has proposed cuts to a number of federal organizations and programs, leaving pillars of the information technology community, such as MITRE and the CVE program, up in the air.

The cuts and sweeping policy changes have sent organizations scrambling to cope with the new reality and brace for additional actions from the administration and its appointed officials.

The crown jewel for the new administration is set to be a legislative act known as H.R. 1, the “One Big Beautiful Bill Act.”

The bill includes sweeping changes to a number of government agencies and a reset of many government policy positions.

One such change will be to artificial intelligence, where a major investment is planned to advance the technology in the service of government agencies looking to automate day-to-day tasks. The bill currently calls for “$500,000,000, to remain available until September 30, 2034, to modernize and secure Federal information technology systems through the deployment of commercial artificial intelligence.”

Under the bill, a whopping $124 million will be earmarked to “Test Resource Management Center artificial intelligence capabilities,” while another $145 million would go into the development of “artificial intelligence to enable one-way attack unmanned aerial systems and naval systems.”

For those that don’t speak bureaucrat, that means money for AI-powered human resources management and more money to build suicide drones.

Most notably, the bill includes a 10-year moratorium barring states from passing any new legislation that would affect the development of AI technologies. The move would essentially make the federal government the sole authority when it comes to regulation of artificial intelligence.

The bill also includes $685 million for “military cryptographic modernization activities” and $250 million that would be spent on the Quantum Benchmarking Initiative.

On the surface, the numbers themselves may seem mind-boggling to those unfamiliar with the U.S. government budget, but as the old saying goes: a billion here, a billion there, and pretty soon you’re talking real money.

The bill and the overall government budget have been subject to criticism. Senate Democrats have sought protections for the Cyber Safety Review Board (CSRB), the body that oversees reviews of cyberattacks and data breach incidents.

Critics have suggested that, without committed funding for the board, key checks and reviews of organizations that expose customer data to threat actors will no longer be possible.

Legislators, including House Republicans, have expressed concern that the Trump administration’s proposed budget could bring about as much as a 20% cut in funding for the Cybersecurity and Infrastructure Security Agency (CISA) and cuts to critical defenses against foreign threat actors.

Similarly, there is concern among critics that the bill and the Trump administration's agenda neglect security concerns, including the vulnerability records the cybersecurity industry depends on. The future of MITRE’s CVE database was recently thrown into doubt, and while the database was ultimately funded for another year, the long-term outlook for the organization and its database remains uncertain.

Not surprisingly, the One Big Beautiful Bill has come under some scrutiny and criticism, though not from a source most people would have expected. Trump advisor and financial backer Elon Musk appears to have some buyer’s remorse and is now lamenting the bill and the increase in debt it would create.

Federal management of AI?

There are, however, some portions of the bill that could be good news for IT vendors and enterprises. The Big Beautiful Bill’s provisions for AI could provide some much-needed guidance on how AI could be managed.

Dustin Sachs, a cybersecurity expert and chief technologist for the CyberRisk Collaborative, told SC Media that by taking responsibility for AI policy, the federal government can clarify policy that would otherwise vary from state to state.

“One of the complaints we have had about data privacy has been that there hasn’t been a federal response,” Sachs said.

“From an enterprise standpoint, there is an advantage to have the federal government involved and not the states.”

Additionally, Sachs noted that the introduction of AI might not necessarily mean a lapse in accountability and vigilance for federal agencies.

“There has been talk about the need for modernization for years,” the analyst noted. “There is a misconception that it is about replacing human decision-making; it is about easing human decision-making.”

Satyam Sinha, CEO of AI security provider Acuvity, similarly noted the need for a unified policy regarding AI management.

"AI is incredibly transformative as a technology which can make incredible progress for a nation or for that matter in businesses becoming more efficient," Sinha told SC Media.

"It's in the nascent stages and fragmented regulations the way they are can stifle the growth, which can be a setback. If we look at innovation in the biometrics area, it becomes very hard for businesses to apply fragmented rules."

Not everyone is sold on the plans for AI management. Virginia Sen. Mark Warner told SC Media that by putting AI regulations in the hands of Congress instead of the states, the Trump administration could be preventing much-needed oversight of the market.

"The underlying technology behind ChatGPT and most of the frontier models has been out since 2017 and it has been nearly three years since ChatGPT’s release captured the attention of Americans — both for its positive applications and also its many obvious misuses. In that time, Congress has consistently failed to address even the clearest areas of abuse — including misuse of AI tools in political campaigns, online harassment and stalking, and for market manipulation," Warner said.

"While my preference is — as with long-overdue privacy or data breach protections — for a strong federal standard, in the face of continued inaction by Congress it’s incumbent on us to allow for states to fill this void.”

Shaun Nichols

A career IT news journalist, Shaun has spent 17 years covering the industry with a specialty in the cybersecurity field.
