Growing concerns over the potentially significant impact that vulnerable artificial intelligence models could have on critical infrastructure have prompted the FBI, the Cybersecurity and Infrastructure Security Agency, the National Security Agency, and cybersecurity agencies in Australia, New Zealand, and the UK to issue joint guidance aimed at improving protections for AI training data and infrastructure, according to Cybersecurity Dive.
AI system developers should not only ensure information security across the AI lifecycle by leveraging digital signatures, trusted infrastructure, and risk evaluations, but also work to curb accidental or deliberate data quality issues, the advisory said. Beyond harnessing cryptographic hashes and anomaly detection algorithms to help ensure data integrity and reliability, AI developers should also address imprecise information, statistical bias, duplicate records, and input data degradation. "The principles outlined in this information sheet provide a robust foundation for securing AI data and ensuring the reliability and accuracy of AI-driven outcomes," the agencies said.