
My AI-generated voice is my (scammer's) password: Now what?

AI-generated voice scams

We all know about financial fraud; it’s not new. Neither are scams that use impersonation to take over accounts or make online purchases. What’s new is the proliferation and power of generative AI technology used to perpetrate payments fraud.

Already weaponized, generative AI poses significant risks as the use of biometric-dependent voice-based payment methods grows. Maybe we should reconsider using authentication tactics that we know will become increasingly suspect.

Voice generation tools now require just a few seconds of a target victim’s recorded voice sample — easily obtainable on social media — to produce a voice deepfake that will “say” whatever the fraudster wants. Voice deepfakes pose significant risks to manual reviews of high-value payments because it’s so easy to impersonate a person of authority, as in the case of a bank swindled out of $35 million.

Realistic video deepfakes require more sample material, time and skill to produce, but the tools are getting faster and easier to use. While a deepfake of a real customer is needed to fool facial recognition or take over an existing account, scammers could use synthetic video, or alternatively faked photo ID credentials, to open new accounts and run up significant credit card debt. Net-net: Deepfakes can cause big losses to the payments industry.

The fraudster trifecta: GenAI, social engineering and stolen data

Payments fraud gets even more effective when cybercriminals combine AI tools with social engineering and stolen credentials or credit card data. Scammers take phishing to a whole new level using AI text generators like ChatGPT to create emails with perfect grammar, spelling and punctuation, making them much more likely to persuade targets to share account information or unwittingly provide a voice sample over the phone.

Armed with account information and deepfake identities, scammers can game multi-factor authentication systems. Biometric authentication has been considered the gold standard for identifying a trusted user, but now that AI can spoof biometric factors, it behooves payments professionals to pause any new voice authentication projects and develop a contingency plan that doesn’t depend solely on identity.

The European Payment Services Directive (PSD2) mandates strong customer authentication that consists of two out of the three categories of authentication: something you are, something you have and something you know. But with the right amount of time, technology and skills, virtually anything is hackable — even multi-factor authentication.
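
To make the two-of-three rule concrete, here is a minimal sketch in Python. The function name, the factor-to-category mapping and the examples are assumptions made for illustration; they are not drawn from the PSD2 text or any vendor API.

```python
# PSD2-style strong customer authentication: factors must come from at least
# two of the three categories (knowledge, possession, inherence).
FACTOR_CATEGORIES = {"knowledge", "possession", "inherence"}

def meets_sca(presented_factors: dict[str, str]) -> bool:
    """Return True if the presented factors span at least two distinct categories."""
    categories = {cat for cat in presented_factors.values() if cat in FACTOR_CATEGORIES}
    return len(categories) >= 2

# A password (knowledge) plus a registered phone (possession) satisfies the rule;
# a voiceprint alone -- the factor a deepfake can spoof -- does not.
print(meets_sca({"password": "knowledge", "registered_phone": "possession"}))  # True
print(meets_sca({"voiceprint": "inherence"}))                                  # False
```

The article's point still stands: passing this check guarantees little once one of the categories can itself be spoofed.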

OK, so, how can financial institutions (FIs) mitigate deepfake-powered fraud?

By focusing on intent, not identification. Many FIs can do this by mining the data they already have, or can easily start collecting with existing tools. To thwart fraud that uses deepfakes to authenticate, FIs need to focus not on individual alerts within a single session, but on deviations from a user's established patterns. What merchants do they usually engage with? What constitutes an abnormally large transaction? Have they ever added a new user to a card before?

Because trusted patterns of user behavior are difficult for a deepfake to mimic, they offer a foundation for trust decisions. If deepfake technology lets fraudsters convincingly authenticate themselves, FIs need to find new and better ways to verify the intent of seemingly legitimate users.
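
As a sketch of what such pattern-deviation checks could look like, the Python below scores a transaction against a user's own history. The `Transaction` fields, the three-standard-deviation cut-off and the signal wording are illustrative assumptions, not a description of any particular FI's fraud model.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    merchant: str
    amount: float
    added_new_card_user: bool = False  # did this session add a new authorized user?

def intent_risk_signals(history: list[Transaction], current: Transaction) -> list[str]:
    """Compare a transaction against the user's established behavior patterns.

    Returns human-readable deviation signals rather than a single pass/fail,
    so a reviewer can weigh them even when authentication already succeeded.
    """
    signals = []

    # Has this user ever engaged with this merchant before?
    known_merchants = {t.merchant for t in history}
    if current.merchant not in known_merchants:
        signals.append(f"first transaction with merchant '{current.merchant}'")

    # Is the amount abnormally large relative to the user's own history?
    amounts = [t.amount for t in history]
    if len(amounts) >= 2:
        cutoff = mean(amounts) + 3 * stdev(amounts)  # illustrative threshold
        if current.amount > cutoff:
            signals.append(f"amount {current.amount:.2f} is far above the usual range")

    # Has the user ever added a new user to a card before?
    if current.added_new_card_user and not any(t.added_new_card_user for t in history):
        signals.append("first time adding a new user to the card")

    return signals

# Example: a session that authenticated cleanly but behaves unusually
history = [Transaction("grocer", 54.10), Transaction("coffee", 6.40),
           Transaction("grocer", 61.75), Transaction("grocer", 58.20)]
print(intent_risk_signals(history,
                          Transaction("wire-transfer", 9500.00, added_new_card_user=True)))
```

Keeping the output as a list of signals rather than a hard block reflects the intent-first framing: the question is whether the behavior fits the person, not whether the session passed authentication.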

Start experimenting now. Fraudsters are used to being several steps ahead; through continuous trial and error, we can work on closing the gap. We must try, because intent matters.

Alisdair Faulkner, co-founder and CEO, Darwinium
