AI-generated media to be regulated under new bill

Mounting cybersecurity threats stemming from artificial intelligence-based deepfakes, including a recording mimicking the voice of President Joe Biden, have prompted a bipartisan group of House representatives to introduce new legislation that would mandate the identification and labeling of AI-generated media, reports The Associated Press. Under the bill, such labeling would be required not only of AI developers but also of online platforms, including Facebook, TikTok, and YouTube, and violators could face civil lawsuits should the measure become law.

"We've seen so many examples already, whether it's voice manipulation or a video deepfake. I think the American people deserve to know whether something is a deepfake or not. To me, the whole issue of deepfakes stands out like a sore thumb. It needs to be addressed, and in my view the sooner we do it the better," said bill co-sponsor Rep. Anna Eshoo, D-Calif.

The bill has drawn support from various groups calling for stronger AI protections, as well as from AI developers.