COMMENTARY: There’s a new reality of cybercrime we must face.
Generative AI (GenAI) no longer represents a distant, hypothetical threat waiting for us in the future — cybercriminals are now actively leveraging it to fuel a wide range of malicious schemes.
Amid the buzz surrounding AI’s potential, it’s crucial to acknowledge its darker implications: AI makes it easier to impersonate humans and steal their identities.
With AI advancing weekly — if not daily — now’s a critical moment to understand how AI impacts our online safety and take proactive steps to protect ourselves.
The threat from sophisticated spear-phishing
AI has taken phishing to a whole new level, revolutionizing its sophistication, personalization and scale. Cybercriminals use AI tools to collect, analyze and link data — from information resellers to social media platforms, the dark web and beyond — to build detailed profiles on potential victims. This now takes mere minutes with AI automation instead of hours of manual labor. In addition, scammers have shifted from operating individually to forming sophisticated global criminal organizations that collaborate on attacks.
The result is highly personalized spear-phishing. While traditional phishing scams reach large groups in hopes that a small number of recipients will respond, spear-phishing messages are targeted at specific individuals and far more convincing.
Phishing has evolved from obvious scams — like fake bank emails asking users to verify an account they don’t have — to sophisticated attacks, such as a well-crafted message from a "boss" asking an employee to download documents for a legitimate, recent project.
This challenge has been compounded by the increasing availability of personal information on the deep and dark web. High-profile data breaches like those at AT&T and National Public Data have contributed to a surge in exposed Social Security numbers. According to LifeLock, nearly 75% of Americans are at risk of having their Social Security numbers compromised because of data breaches, a stark reminder that even the savviest among us are far more vulnerable than they realize.
Users can keep personal information out of the wrong hands by reducing their digital footprints: limit the personal data shared online, regularly review privacy settings on all social media platforms, and provide only the details that are strictly necessary when filling out forms.
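One concrete, proactive step is checking whether your own credentials already appear in known breach corpora. The minimal sketch below (an illustration, not part of any vendor's product) queries the free Have I Been Pwned "Pwned Passwords" range API. It uses a k-anonymity scheme: only the first five characters of the password's SHA-1 hash are sent, so the password itself never leaves your machine.

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in known breach data,
    via the Have I Been Pwned k-anonymity range API."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]

    # The API returns every hash suffix sharing our 5-character prefix,
    # one "SUFFIX:COUNT" pair per line.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8")

    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0  # not found in any known breach

if __name__ == "__main__":
    # Example string only -- never hard-code real credentials.
    hits = breach_count("correct horse battery staple")
    print(f"Seen {hits} times in breaches" if hits else "Not found in known breaches")
```

A password that turns up even once here should be considered burned and rotated; the same range-query design means the check can be run routinely without ever disclosing the secret being checked.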
Voice cloning: A new frontier for scammers
AI has not only transformed text-based scams; it’s also taking fraud into the audio realm, where scammers now use it to create eerily natural-sounding voice replications.
Voice cloning technology lets scammers replicate pitch, tone and speech patterns with as little as three seconds of audio. This means something as simple as a voicemail greeting can offer cybercriminals the material they need to convincingly impersonate someone.
Once a voice has been cloned, scammers can exploit it in various ways: posing as a family member in distress, impersonating a coworker to extract sensitive company information, or even bypassing voice-verification security measures at financial institutions.
The FBI has advised individuals to limit the amount of voice content they share online. For instance, users should stay mindful of what they say in voicemail greetings, avoid posting audio or video files with unnecessary personal details, and use scam blockers to stop suspicious callers. It’s also a good idea to establish a “safe word” that family members can use to confirm the legitimacy of a call. This simple but effective measure can minimize a user’s vulnerability to AI-powered voice scams.
The emerging risk of AI agents
While AI agents are still in their early days, they pose a unique set of risks when misused. These agents are programmed to follow commands precisely, often without fully understanding the broader context or consequences of a request. This makes them a powerful yet dangerous tool when they fall into the wrong hands.
As AI agents become more sophisticated and widely adopted, their potential for misuse in identity theft could escalate significantly. Attackers can use them to bypass CAPTCHA verifications or exploit vulnerabilities in websites that rely on bot-detection systems, and the agents themselves can inadvertently expose sensitive information.
Businesses and individuals alike must implement safeguards such as activity monitoring to mitigate the risks associated with AI agents.
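What activity monitoring might look like in practice: the sketch below is a hypothetical illustration (the tool names, deny-list, and approval policy are assumptions, not any specific product's behavior) of routing an agent's tool calls through an audit trail, with sensitive actions blocked until a human signs off.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("agent-audit")

# Hypothetical policy: tools an agent may never invoke unattended.
SENSITIVE_TOOLS = {"send_wire_transfer", "export_customer_records"}

def monitored_call(tool_name: str, tool_fn, *args, **kwargs):
    """Run an agent tool call through an audit log and a deny-list check."""
    stamp = datetime.now(timezone.utc).isoformat()
    audit.info(f"{stamp} AGENT_ACTION tool={tool_name} args={args} kwargs={kwargs}")

    if tool_name in SENSITIVE_TOOLS:
        # Flag instead of executing: sensitive actions need human sign-off.
        audit.warning(f"{stamp} BLOCKED tool={tool_name} (requires human approval)")
        raise PermissionError(f"{tool_name} blocked pending human review")

    result = tool_fn(*args, **kwargs)
    audit.info(f"{stamp} COMPLETED tool={tool_name}")
    return result

if __name__ == "__main__":
    # Harmless stand-in tool for demonstration.
    lookup = lambda q: f"results for {q!r}"
    print(monitored_call("web_search", lookup, "phishing trends"))
    try:
        monitored_call("send_wire_transfer", lambda amount: amount, 10_000)
    except PermissionError as err:
        print(f"Denied: {err}")
```

The design point is that every action an agent takes leaves a timestamped record, and the deny-list turns "the agent did something irreversible" into "the agent asked and was refused," which is exactly the gap activity monitoring is meant to close.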
Five years ago, a voicemail greeting might have seemed harmless. Today, we know this brief recording can give scammers everything they need to launch a highly realistic voice-cloning attack against a family member.
As scammers’ tactics evolve, so must our approaches to online safety. Users must reduce their digital footprint, stay informed about emerging threats, and adopt proactive security measures to protect their identities in an increasingly AI-driven world.
By recognizing the risks and taking deliberate actions, we can stay ahead of cybercriminals and safeguard our identities in this new threat landscape.
Ian Bednowitz, general manager, identity and privacy, Gen

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.