You pick up the phone and hear your daughter screaming, begging for help. You might listen to kidnappers asking for ransom or other demands that you’re willing to fulfil in a state of panic because you keep hearing your child call out in suffering.
However, it’s not really your child. It’s just an AI replica of their voice.
Cases like these have been on the rise, with a whopping 241% increase post-pandemic, when most of us relied on online communication. The FTC reported that $12.5 billion was lost to scams in 2024, up several billion from 2023.
Tech advancements that opened doors for innovation are now being used to develop increasingly complex fraud systems. One such category is AI voice tools, which benefit podcasters, animators, YouTubers, and anyone using vocal components as part of their job.
Unfortunately, scammers are using these AI voice tools to impersonate loved ones or official authorities to extort money and steal your identity. With AI becoming more publicly available, anyone can use these tools, and 2025 is set to take AI deepfake voice scams to a whole new level.
We tracked thousands of such AI deepfake voice scams reported between 2021 and May 2025. Our team analyzed the top scam types, who’s being targeted, how these attacks work, and what you can do to protect yourself.
Scam Reports Over Time: How Deepfake Attacks Have Surged
Let’s walk through the numbers, but more importantly, let’s connect the dots between when these scams surged and why. These aren’t isolated incidents. They’re closely tied to major leaps in generative AI and tech adoption.
Reported deepfake AI scams increased by 33% from 2021 to 2022, but the biggest jump came in 2023, when the numbers went up by over 241%.
This massive growth is likely due to the pandemic, when everyone was working and communicating online. We got used to texting, calling, and video conferencing, so AI voice cloning and video deepfakes flourished.
It also coincides with the rapid proliferation of AI tools like ChatGPT and image generators, which exploded in popularity and put complex tech in the hands of everyone, for free. Suddenly, anyone could start creating content without much specialized training.
This made it easier for scammers to create realistic-sounding messages, craft fake identities, and run entire conversations without requiring much human input. AI started to be used to write emotionally manipulative messages, mimic the tone and writing style of loved ones, and automate scam scripts at scale.
In the wrong hands, these tools become multipliers for deception, blurring the line between real and fake faster than most people can keep up.
Things are getting worse in 2025: halfway through the year, there are already more reported cases than in all of the previous year. Current numbers show at least a 100% growth rate, with reports reaching an all-time high.
And these are just the ones we know about. Many cases go unreported, dismissed as technical glitches or mistaken identity. As the technology gets more convincing and more accessible, we’re not just watching scam numbers rise; we’re witnessing trust itself erode in real time.
The question isn’t if this will affect you. It’s when. And whether you’ll recognize it when it does.
Top 5 Types of AI Deepfake Voice Scams in 2025
As AI becomes more capable, so do the people looking to exploit it. Over the last few years, fraudsters have developed a wide range of scam techniques to trap even the most technically savvy.
Here are the most common types our study found.
Family distress and kidnapping scams
This one is among the most common because it tugs at your heartstrings. Which parent can resist their child’s screams for help?
Here, scammers typically scrape a few seconds of your child’s voice from content they may have downloaded from social media. This is used to generate an AI clone of their voice to call you and convince you to pay the ransom for their safety.
In June last year, a mother was asked for a $1 million payment as she heard her daughter being threatened. Her husband, however, found their daughter safe at home in her bed.
This tactic can also target younger family members, as seen in a May 2024 case, when a 19-year-old was scammed out of $1,000 after being told his younger sister was in grave danger.
The emotional shock of hearing a loved one in trouble can cloud judgment and lead to quick, costly decisions.
Silent ‘Hello’ and voiceprint capture
This scam takes Adele’s “Hello from the other side” to a whole new level.
Imagine someone calls you. You keep saying ‘hello’ but no one responds. You think there might be issues with the network. Maybe they can’t hear you well, so you keep trying to communicate.
Actually, there is no one on the other side. It’s just a ploy to capture your voice so a replica can be created.
The more you speak, awaiting a response, the more your voice is scraped for future scams.
You may be asked, “Is this [your name]?” That clip of you saying “yes” or your name can be used to trick voice-activated banking systems or other security measures.
Romance scams
One of the most popular and fastest-growing categories, romance scams, led people to lose over $1 billion in 2023 alone. The Federal Trade Commission found that victims lose $2,000 on average, the highest reported losses for any form of imposter scam.
Romance scams can take many forms, but a common tactic is: You find someone on a dating app or a charming stranger DMs you. You build a connection and develop trust over time. You spend hours texting, calling, and even video chatting, but none of it is real.
You’re talking to a mix of AI chatbots, voice cloning, and deepfake videos. It’s so convincing that you get invested in the relationship and send them gifts or money for fake emergencies. They might introduce you to risky or downright fake investments, and you might be willing to trust them due to the bond built over time.
It’s getting more difficult to spot these as growing AI capabilities make it easier to impersonate someone or build a fake persona. As AI becomes increasingly human-like, there’s a stronger likelihood that more people might get swept up in the fantasy until they find their bank account empty.
Mimicking businesses and authorities
In February 2024, a finance worker got a call from what sounded like the company’s chief financial officer asking for a $25 million transfer. The worker then attended a video conference with several staff members, not knowing that none of them was real. It was all done through deepfake videos and AI voice cloning.
This isn’t a one-off case. With the widespread use of AI voice tech, such scams are growing worldwide.
Here’s how it typically works: You get a call from your superior. Scammers replicate their voice and mine key data from public company information to make the call seem legitimate. The AI might reference specific details about the company, like when the next event is due or figures from a recently published report, to trick you into believing it’s real.
You might be asked to pay for fake invoices, share account details, or buy fraudulent products and services. People are likely to comply even if they are a little suspicious because they see their boss on the screen or hear their voice on the phone.
Extortion through fake evidence
One of the most psychologically damaging scams is where you get a call saying there’s evidence against you. You might receive audio clips that appear to capture your voice saying incriminating things—offensive remarks, illegal behavior, or confessions.
Some callers pretend to be police or lawyers, others claim to be “concerned parties” warning you of reputational damage. The voice sounds like yours and the details feel oddly specific.
You can be certain you never said anything of the sort, but the “evidence” in front of you is hard to deny. Many people panic and pay up to prevent the release of content that could cost them their job, relationships, and a reputation built over years.
These are the most common AI deepfake voice scams we discovered, but there are many more, and the types continue to expand as innovations in tech enable scammers to develop increasingly advanced techniques.
Most Frequently Targeted Groups
AI voice scams don’t just target the random and unlucky—they’re strategic. Certain demographics get hit harder than others because of predictable behaviors, emotional vulnerabilities, or a lack of familiarity with the tech involved.
Here’s who scammers are focusing on in 2025.
Elderly
Elderly people are the most targeted group due to:
- Not being tech-savvy.
- Having larger savings built over the years.
- Participating in government assistance and insurance programs.
- Living alone or having little in-person support.
- Having strong family ties.
Older people tend to be unfamiliar with AI, caller ID spoofing, and modern scam tactics, so they can be easier to trick than younger people who spend more time online and are often aware of cyber risks.
They also tend to have larger financial accounts compared to people just entering the job market. Retirement accounts, pension programs, and healthcare payouts can be a prime target.
Older people also actively use medical assistance programs, so scammers impersonate those agencies to extort money.
In many cases, older people live alone or have limited real-time support, so they can be easier to scam without the presence of someone who can guide them through the technical stuff.
Scammers also prey on their strong family connections by impersonating loved ones seeking help. Older people might receive a call saying, “Grandma, please don’t tell Mom. I need money now.”
Parents
Parents are often targeted with kidnapping scams as they’re prone to rash, emotional, irrational decision-making when they believe their child is in danger.
Modern parents often post their children’s names, voices, and videos online, giving scammers data to build a realistic AI replica. It becomes more convincing as scammers scrape key family data through social media posts.
Dating app and social media users
People desperate for connection online are often targeted with romance scams. A combination of AI chatbots, video deepfakes, and a convincing persona built from scraping publicly available information is used to trick someone into getting emotionally invested in a fake relationship.
People who frequently talk to new people on dating apps and social media may be more open to DMs from a stranger. They’re also used to online communication and more likely to get attached over texts and calls than the older generation who did everything in person.
Foreign language speakers
People who aren’t fluent in English may be more vulnerable to scams as they struggle with understanding the language and verifying complex documentation. They might be prone to panic when scammers impersonate authority figures and are likely to comply.
Knowing this, scammers usually impersonate embassies or tax authorities to create a fake sense of danger.
Nowadays, as AI becomes more advanced, any voice in any language can be replicated with the right accent and tone, making it harder to determine what’s real and what’s not, especially if you already struggle with the language.
How to Protect Yourself From AI Deepfake Voice Scams
The threat is real, and it’s coming for everyone. But there are steps you can take to increase your chances of staying ahead of the cons and protecting yourself.
Let unknown calls go to voicemail
If you don’t recognize the number, don’t answer—even if it looks local or familiar. Let it go to voicemail. This will let you see who called. Real people with a genuine reason to contact you might leave a message or get in touch another way.
Voicemail gives you time to assess the message calmly, without being emotionally manipulated in real time.
Bonus: Letting it go to voicemail also protects your voice, since capturing it is the primary goal of “hello” scams.
Never say “Yes” to a stranger on the phone
Avoid saying:
- “Yes”
- “That’s me”
- “Correct”
- “Sure”
Avoid these even if the caller sounds friendly or insistent. If they’re trying to “confirm your identity,” don’t comply until you’ve verified their legitimacy.
Tip: If the call claims to be from a bank, hospital, or government agency, hang up and call the official number from their website directly.
Warn your loved ones
Elderly individuals are often targeted because they may be less familiar with how advanced these scams have become.
Teach them that if a call begins with just “hello” and silence, it could be a setup. Scammers might be recording their voice to use in future fraud attempts. Encourage them to avoid giving out personal information and to always double-check before reacting to an emotional plea.
Have a family verification code
This is especially useful for parents, grandparents, and close-knit families.
Create a secret question-answer or phrase that only your real loved ones would know. For example:
“What did we name the dog in 2012?”
“What did Mom cook for your graduation?”
If someone calls claiming to be your child in danger and can’t answer that question, it’s likely a scam.
Confirm emergencies through multiple channels
Scammers rely on panic to short-circuit your judgment. If you get an urgent call claiming that a loved one is in danger, hang up and try to contact them directly through their real number, text, or social media. You can also check with someone close to them or use location-sharing apps if available.
Use available verification tools
Telecom providers are improving caller ID systems and can sometimes flag calls as “spam risk” or “likely AI-generated.” Enable these features on your phone and consider using apps that provide enhanced call filtering or reverse-number lookup.
These tools can act as your first line of defense, helping you identify suspicious calls before you even pick up.
Report it even if you’re not sure
Even if the scam didn’t succeed, your report could protect someone else.
Where to report:
- FTC.gov/complaint
- ReportFraud.ftc.gov
- ReversePhone.com (add details to help flag AI-generated voice cases)
- Your local police or cybercrime unit
Every report builds a stronger public pattern and helps researchers track scam trends.
Methodology
This data study analyzed over 1,000 user-submitted phone scam stories and reports between 2021 and May 2025. Reports were filtered for clear markers of AI, deepfake voice use, voice changers, or “voiceprint theft” patterns. Data sources include open phone lookup forums and ReversePhone’s community database. Actual attack numbers are likely significantly higher, as many victims never report their experiences.
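For readers curious about the mechanics, here is a minimal sketch, in Python, of how this kind of keyword filtering can work. The marker list and report format are illustrative assumptions, not the exact pipeline used in this study.

```python
import re

# Illustrative marker list (assumption): phrases that suggest AI voice use in a report.
AI_VOICE_MARKERS = [
    r"\bdeepfake\b",
    r"\bAI[- ]generated\b",
    r"\bvoice clon(?:e|ed|ing)\b",
    r"\bvoice changer\b",
    r"\bvoiceprint\b",
    r"\bsynthetic voice\b",
]
MARKER_PATTERN = re.compile("|".join(AI_VOICE_MARKERS), flags=re.IGNORECASE)

def is_ai_voice_report(report_text: str) -> bool:
    """Return True if a scam report mentions any AI/deepfake voice marker."""
    return bool(MARKER_PATTERN.search(report_text))

# Example usage with made-up reports
reports = [
    "Caller sounded exactly like my son. I think it was a voice clone.",
    "Robocall about an expired car warranty.",
]
flagged = [r for r in reports if is_ai_voice_report(r)]
print(f"{len(flagged)} of {len(reports)} reports flagged as possible AI voice scams")
```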
Customer quotes featured in the article may have been edited for clarity.