
AI security protections advanced by US, allies


Growing concerns over the potential impact of vulnerable artificial intelligence models on critical infrastructure have prompted the FBI, the Cybersecurity and Infrastructure Security Agency, the National Security Agency, and cybersecurity agencies in Australia, New Zealand, and the UK to issue joint guidance on strengthening protections for AI training data and infrastructure, according to Cybersecurity Dive.

According to the advisory, AI system developers should not only ensure information security across the AI lifecycle by leveraging digital signatures, trusted infrastructure, and risk assessments, but also work to curb accidental or deliberate data quality issues. Beyond using cryptographic hashes and anomaly detection algorithms to help ensure data integrity and reliability, AI developers should address inaccurate information, statistical bias, duplicate records, and input data degradation. "The principles outlined in this information sheet provide a robust foundation for securing AI data and ensuring the reliability and accuracy of AI-driven outcomes," the agencies said.
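As a rough illustration of two of the controls the advisory mentions, the sketch below shows one way a training pipeline might verify dataset files against published cryptographic hashes and flag exact duplicate records before training. It is not taken from the guidance itself; the file name, the expected-hash manifest, and the CSV layout are all assumptions made for the example.

```python
"""
Illustrative sketch only: hash-based integrity checks and duplicate-record
detection on a training data file. The manifest values and file names are
hypothetical placeholders, not part of the joint guidance.
"""
import csv
import hashlib
from pathlib import Path

# Hypothetical manifest of expected SHA-256 digests published alongside the dataset.
EXPECTED_HASHES = {
    "train.csv": "0000000000000000000000000000000000000000000000000000000000000000",
}


def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_integrity(path: Path) -> bool:
    """Compare the file's digest against the published manifest entry."""
    expected = EXPECTED_HASHES.get(path.name)
    return expected is not None and sha256_of(path) == expected


def find_duplicate_rows(path: Path) -> list[int]:
    """Return row numbers whose full contents repeat an earlier row."""
    seen, duplicates = set(), []
    with path.open(newline="") as fh:
        for row_number, row in enumerate(csv.reader(fh), start=1):
            key = tuple(row)
            if key in seen:
                duplicates.append(row_number)
            seen.add(key)
    return duplicates


if __name__ == "__main__":
    dataset = Path("train.csv")
    if not verify_integrity(dataset):
        raise SystemExit(f"{dataset} failed hash verification; refusing to train on it")
    dupes = find_duplicate_rows(dataset)
    if dupes:
        print(f"Found {len(dupes)} duplicate records at rows: {dupes[:10]}")
```

In practice, the hash manifest would itself be signed (for example with a detached digital signature) so that tampering with both the data and its manifest is detectable, which is the role digital signatures play in the lifecycle controls the advisory describes.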
