AI/ML, Generative AI, Government Regulations

US House forbids staff members from using AI chatbot Microsoft Copilot

Microsoft's planned April 1 release of Copilot for Security hit a speed bump when the House of Representatives on March 29 banned House staffers from using the software maker's alternative to OpenAI's ChatGPT.

This step by the House follows an edict last summer in which the chamber also restricted staffers' use of ChatGPT, limiting use of the paid version and banning the free version outright.

An Axios report said the House Office of Cybersecurity has deemed Microsoft Copilot a risk to users because of the threat of leaking House data to non-House-approved cloud services. The guidance added that Copilot would be removed from and blocked on all House Windows devices.

Both of these moves by the House come at a time when the federal government has been grappling with how to regulate AI. The Biden administration released an Executive Order on AI last fall, while Vice President Kamala Harris on March 28 announced an OMB policy to advance governance, innovation, and risk management for AI at federal agencies — an action that aims to put tighter controls on AI.

According to the Axios report, Microsoft responded that it plans to address the security and compliance concerns by releasing a more secure government version of Copilot this summer.

Feds cautious about AI as Microsoft works to address concerns

“The ban on congressional staffers' use of Microsoft Copilot highlights the government's careful approach to AI while trying to regulate it,” said Callie Guenther, senior manager, cyber threat research at Critical Start, and an SC Media columnist. “The risks include data security, potential bias, dependence on external platforms, and opaque AI processes.”

Guenther said the industry must enhance security, improve transparency, develop government-specific solutions, and support ongoing evaluation to address these concerns.

“Congress might reconsider its stance if these issues are effectively addressed, especially with government-tailored AI versions demonstrating high security and ethical standards,” said Guenther. “The future of AI in government will depend on the industry's response to these challenges.”

Narayana Pappu, chief executive officer at Zendata, added that if companies can offer opt-out from training, transparency on sources used to generate the output, and a way to evaluate the output for bias, it will allow for a level of structure, trust, and transparency that the current process lacks.

“Without these controls in place, it would be difficult for any government agency to certify or provide guidance on any of these tools,” said Pappu. “If anything, going into this year, the concerns and control bans are only going to increase.”

Kevin Surace, chair at Token, said Microsoft already runs its own isolated version of GPT-4 in Copilot and contends that no corporate data is shared with anyone else. However, Surace said there's some risk of the model learning from prompts and exposing prompt information in certain circumstances, which is why Microsoft has already announced this summer's release of a government version.

“An updated government version would surely isolate every request from every other, even if the system were jailbroken,” said Surace. “While the current risk appears low, Microsoft knows very well how to appease government officials and will do so later this year, rendering this congressional action obsolete quite quickly.”
