AI Spurs Wave of Identity Theft Scams & Deepfakes: How to Protect Yourself

Voice, photo and video deepfakes are becoming more realistic, and scammers are taking advantage of AI advancements to approach more people with targeted threats. Learn how to protect yourself from AI identity theft scams.


Written by Jackie Leavitt (Editor at Large)

Reviewed by Aleksander Hougen (Chief Editor)

As artificial intelligence and chatbot use becomes more widespread, the underbelly of the internet is also embracing these AI tools. The 2025 Trends in Identity Report notes rising concern that AI is allowing identity thieves to target victims more precisely and efficiently, while also letting them operate at a larger scale.

Of all the scams reported to the Identity Theft Resource Center (ITRC), impersonation scams and job scams ranked at the top. Tactics included using AI to spoof websites, posting search engine ads with fake customer service numbers, and sending realistic-looking emails or text messages that appear to come from large companies or other legitimate sources.

Additionally, AI-based apps are enabling “new and increasingly effective ways” for malicious actors to make money through deepfake-enabled social engineering and misinformation, according to Trend Micro’s July 2025 report.

Just this week, a mother in Buffalo, New York, received a ransom phone call that used her son’s voice — luckily, she discovered that the voice was a deepfake and that her son was safe at a football game.

Additionally, some tech companies may leave security holes in their rush to gather data for AI training. This week, the Neon app — which pays people to record their conversations for AI training — became the second most popular social app in the App Store, and sixth overall.

Shortly after, TechCrunch reported a security flaw: any logged-in Neon user could access someone else’s data through the servers, potentially exposing users to future AI deepfake scams.

Learning to spot red flags — or even yellow flags — is critical in preventing identity theft, especially when scammers use AI-generated audio, video and text. 

How to Protect Yourself From AI Identity Theft Scams

It’s best to be prepared before a scam occurs. You can limit exposure of your voice and image by making your social media accounts private and restricting how much of either appears online.

Create a secret word or phrase to verify the identity of family members and friends. You can also create pre-planned questions with fake answers, giving the caller a choice of two wrong options.

Default to suspicion. Don’t share sensitive information online or over the phone unless you initiated the interaction through official channels. If you receive a suspicious call, hang up and call the organization or person back at a number you know to be genuine to verify the caller’s identity.

When it comes to AI-generated phone scams, where scammers may use an AI-generated clone of a loved one’s voice, listen for lag time, as well as unusual tone and word choice, to distinguish vocal cloning from the real person. If you share phone locations with your family, check to see where they actually are.

If you receive suspected AI-generated images or videos, study them for subtle imperfections, such as distorted hands or accessories, inaccurate shadows, watermarks, or unrealistic movements and actions.

Remember that scammers thrive on fear. The best thing you can do is take a deep breath and avoid responding without thinking. For more tips, read our online scams and cybercrime guides.
