
AI won’t save identity if it can’t tell a dog’s paw from a fingerprint


Everyone wants to believe AI will rescue us from the complexity of modern identity. But if it can’t tell a dog’s paw from a fingerprint, what else is it getting wrong?

That was the blunt warning from Diana Kelley, chief information security officer at Protect AI, during her keynote at Identiverse 2025. She told the crowd a now-famous anecdote: a well-known foundation model, asked about biometric data, confidently replied that dog paw prints are unique and reliable. The truth? Most dog paw prints are nearly identical. It’s the nose prints that are unique.

"AI is confident. It will lie to you with charm," Kelley said. "If you don't know the answer ahead of time, how do you know when it's wrong?"

This is the risk when AI enters the identity stack. The hallucination isn’t a punchline. It’s a threat. In a world where machine-made decisions determine access, privilege, and authentication, mistaken certainty becomes a security liability.

Identity at scale, and out of control

With agentic AI on the rise, the number of nonhuman identities — bots, agents, scripts, systems — is exploding. Kelley noted there may already be 50 machine identities for every human one, and that number will grow as AI begins autonomously chaining tasks across services and environments.

That kind of scale makes traditional identity governance unsustainable. AI will be necessary to keep up, particularly in access reviews, privilege scoring, and onboarding. But Kelley warned that adding AI into a broken identity foundation will only accelerate risk.

“You can’t automate a broken system,” she said. “Fix it before you scale it.”
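To make the idea concrete, here is a minimal sketch of machine-assisted access review, in the spirit of what Kelley described rather than anything she demonstrated: each entitlement a user holds is scored by how rare it is among role peers, and only the outliers get escalated to a human reviewer. All names, data, and thresholds are illustrative.

```python
from collections import Counter

# Illustrative sketch (hypothetical, not from the talk): peer-group rarity
# scoring for access reviews. Entitlements that few role peers hold score
# near 1.0 and are routed to a human instead of being auto-certified.

def privilege_scores(user_entitlements, peer_entitlements):
    """Score each entitlement by how rare it is among role peers (0..1)."""
    peer_count = len(peer_entitlements)
    holders = Counter()
    for ents in peer_entitlements:
        holders.update(ents)
    return {
        ent: 1.0 - holders[ent] / peer_count  # rare among peers -> near 1.0
        for ent in user_entitlements
    }

peers = [  # entitlements held by others in the same role
    {"jira", "github", "vpn"},
    {"jira", "github", "vpn", "grafana"},
    {"jira", "github", "vpn"},
    {"jira", "vpn"},
]
user = {"jira", "github", "prod-db-admin"}

for ent, score in sorted(privilege_scores(user, peers).items(), key=lambda kv: -kv[1]):
    action = "escalate to human review" if score > 0.5 else "auto-certify"
    print(f"{ent:14s} score={score:.2f} -> {action}")
```

The point of the design is the one Kelley made: the machine handles the volume, but anything it is uncertain about still lands in front of a person.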

The mirage of intelligence

The core of Kelley’s talk was not anti-AI. It was anti-naivete. She called on organizations to treat AI as software, not sorcery.

That means validating models, understanding inputs, documenting behavior, and monitoring outputs as you would with any high-risk code. AI may feel like a black box, but the consequences of misconfiguration are very real. This is especially true in identity, where trust is the currency.
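What "treating AI as software" could look like in practice, offered as a rough sketch rather than anything from the talk: gate every model deployment on a versioned golden test set of questions with known answers. The stand-in model below answers "yes" to everything, which is exactly the kind of confident error the paw-print anecdote describes, and exactly what this check would catch.

```python
# Hypothetical sketch: gate a model deploy on a golden test set, the way
# any other high-risk code path would be gated on its test suite.

GOLDEN_SET = [
    ("Are dog paw prints unique enough to identify a dog?", "no"),
    ("Are dog nose prints unique enough to identify a dog?", "yes"),
    ("Are human fingerprints unique enough to identify a person?", "yes"),
]

def model_answer(question: str) -> str:
    """Stand-in for a call to the model under test."""
    return "yes"  # confidently answers "yes" to everything, like the anecdote

def validate(min_accuracy: float = 1.0) -> bool:
    correct = sum(1 for q, expected in GOLDEN_SET if model_answer(q) == expected)
    accuracy = correct / len(GOLDEN_SET)
    print(f"golden-set accuracy: {accuracy:.0%}")
    return accuracy >= min_accuracy

if validate():
    print("ship it")
else:
    print("block the deploy: the model fails on facts we can check")
```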

“Identity has gotten really complicated,” Kelley said. “Even at a 150-person startup, it’s nothing like it was 20 years ago. We need help, but we need the right kind of help.”

She also emphasized that pre-deployment testing is not enough. AI systems need to be continually evaluated after they go live, because their performance can drift as environments change.

“Monitoring AI is just as critical as testing it,” she said. “The inputs shift, the data evolves, and the risks don’t stop.”

Without ongoing scrutiny, even the most effective model can become inaccurate — or dangerous — over time.
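One common way to operationalize that ongoing scrutiny, shown here as an illustrative sketch rather than Kelley's prescription, is to compare the live distribution of a model's risk scores against a baseline captured at launch, using a statistic such as the population stability index (PSI). A PSI above roughly 0.2 is a conventional signal that the inputs have shifted and the model needs a closer look.

```python
import math

# Hypothetical sketch: post-deployment drift check comparing live model
# scores against a launch-time baseline via the population stability index.

def psi(baseline, live, bins=10):
    """Population stability index between two score samples in [0, 1)."""
    def frac(sample, lo, hi):
        n = sum(1 for s in sample if lo <= s < hi)
        return max(n / len(sample), 1e-6)  # avoid log(0) for empty bins
    total = 0.0
    for i in range(bins):
        lo, hi = i / bins, (i + 1) / bins
        b, l = frac(baseline, lo, hi), frac(live, lo, hi)
        total += (l - b) * math.log(l / b)
    return total

baseline_scores = [0.1, 0.2, 0.2, 0.3, 0.4, 0.4, 0.5, 0.6]   # at launch
live_scores     = [0.5, 0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9]   # this week

drift = psi(baseline_scores, live_scores)
print(f"PSI={drift:.2f}", "-> investigate drift" if drift > 0.2 else "-> stable")
```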

A smarter, safer path

Kelley was clear that AI does have a role to play. Its ability to handle fuzzy data, identify weak signals, and improve decision velocity could be transformative if implemented responsibly.

Used carefully, AI can eliminate low-value tasks like manual user access reviews, dynamically trigger step-up authentication, and reduce human bottlenecks in high-volume identity operations.
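As a hypothetical sketch of dynamic step-up, not a production design: a risk score computed from session signals decides whether a login is allowed, challenged with MFA, or blocked outright. In a real deployment the toy linear score below would be a trained model's output; the signals and thresholds here are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch: a risk score selects the authentication response,
# so step-up challenges land only on risky sessions, not every login.

@dataclass
class LoginContext:
    known_device: bool
    usual_location: bool
    usual_hours: bool

def risk_score(ctx: LoginContext) -> float:
    """Toy linear score in [0, 1]; a real system would use a trained model."""
    score = 0.0
    if not ctx.known_device:
        score += 0.5
    if not ctx.usual_location:
        score += 0.3
    if not ctx.usual_hours:
        score += 0.2
    return score

def auth_decision(ctx: LoginContext) -> str:
    score = risk_score(ctx)
    if score >= 0.7:
        return "deny and alert"
    if score >= 0.3:
        return "step-up: require MFA challenge"
    return "allow with session token"

print(auth_decision(LoginContext(known_device=True,  usual_location=True,  usual_hours=True)))
print(auth_decision(LoginContext(known_device=False, usual_location=True,  usual_hours=True)))
print(auth_decision(LoginContext(known_device=False, usual_location=False, usual_hours=False)))
```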

But Kelley urged transparency.

“Tell your users when you’re using AI,” she said. “Include it in your responsible disclosure. Bake it into your privacy policies. Trust starts with honesty.”

She closed by reminding attendees that this is not about resisting innovation. It’s about building it right.

“AI isn’t magic. It’s math. And identity is too important to get wrong.”

The adversary is already using AI

Kelley also warned that while defenders debate governance frameworks and disclosure standards, attackers are already fluent in AI. Threat actors are generating synthetic identities, crafting linguistically precise phishing emails, and even using deepfake voicemails to impersonate executives and bypass verification.

“Once a synthetic identity is accepted into the system, it becomes much harder to distinguish,” she said.

This underscores why verification at the front door remains critical. Organizations cannot rely on behavioral analytics alone to detect fraud if the identity has already been approved.

AI may eventually help close those gaps, Kelley noted, but only if it is trained and monitored with the same discipline defenders apply to code, infrastructure, and policy. Anything less risks accelerating the very threats we are trying to contain.

Tom Spring, Editorial Director

Tom Spring is Editorial Director for SC Media and is based in Boston, MA. For two decades he has worked at national publications in leadership roles: publisher at Threatpost, executive news editor at PCWorld/Macworld, and technical editor at CRN. He is a seasoned cybersecurity reporter, editor, and storyteller who always aims for truth and clarity.
