Biden’s AI executive order: A small step in the right direction

Today’s columnist, Mike Britton of Abnormal Security, writes about how the Biden administration’s new executive order on AI represents a good first step in creating a collaborative relationship between government and industry around AI. (Photo by Chip Somodevilla/Getty Images)
The White House released a new executive order (EO) this week that seeks to increase federal oversight of rapidly expanding AI systems, promote the safety and security of AI development, and reduce its risks for consumers and national security.

The EO has arrived at a critical time, as artificial “general” intelligence has become a reality faster than many expected. Many were surprised this year by the transformational power of ChatGPT, but advances in AI promise to be exponentially more powerful in the year ahead. The implications of these intelligent systems are world-changing – in good ways and bad – and the government needs to act fast if it hopes to effectively manage those impacts.

The release of the EO stands as an important step in the right direction, setting us on a path to harness the enormous potential of AI to make our lives better while keeping security and safety top of mind. It introduces several components that are certain to improve the way we create and interact with AI, but it also leaves a few gaps and areas for continued development as the order’s guidelines are implemented by public and private sector organizations.

Greater protections for consumers: Machine learning models rely on vast amounts of data to generate useful outputs, but these large data volumes come with inherent privacy risks. Consumers are increasingly aware of – and concerned about – how their data gets collected, stored, and used. The pressure on organizations to respect consumer privacy will only continue to mount, so more stringent guidelines that drive transparency around how data privacy gets prioritized will be important for building consumer trust in AI. Additionally, by establishing watermarking guidelines that help distinguish AI-generated content, the EO encourages greater consumer protection against AI-enabled fraud and deception. Today’s widespread use of ChatGPT means that we constantly encounter – and sometimes unwittingly use – content that’s inauthentic. These guidelines should promote greater transparency around content origins, minimizing misinformation and fraudulent activity.

An AI talent surge: Though the use and adoption of AI has accelerated, the development of AI skills and talent has lagged behind. Studies have shown that more than half of organizations don’t have the right mix of skilled AI talent, and that this shortage has become the leading barrier to progressing their AI initiatives. The EO should start to reverse the trend by expanding the nation’s pool of skilled AI workers, both through grants to develop domestic talent and through initiatives to attract and retain foreign talent.

Increased demand for AI-native products in the tech industry: While the EO mainly targets federal agencies, technology companies in the private sector will soon see benefits trickle down. Once centralized government funding is allocated against the EO’s guidelines, federal agencies will be able to acquire specific AI products and services faster and more cheaply through rapid and efficient contracting. This should bolster demand for AI-native products and ultimately increase both productivity and security for the agencies that rely on these tools.