From Celebrity Deepfakes to Avatar Farms, How Scammers Industrialize Trust
When the face is familiar but the person isn’t real
Published
March 26, 2026
Read time
10 Minutes

Written by

The deepfake story you think you know is incomplete.
When most people hear “deepfake,” they picture a fake face.
One important nuance: a deepfake face does not automatically mean a scam, and a scam does not require a deepfake face at all. Visuals can be a useful clue, but identifying synthetic media is getting more complex, and simplistic “looks fake” judgments are increasingly unreliable. That is why we lean on multiple signals, including audio analysis and transcripts, to reason about intent and behavior.
Connecting the term deepfake with deceit makes sense: it is the most visual part of the problem, and it has become a cultural shorthand for deception. But in real-world scam campaigns, the face is often just packaging. The engine is usually the message: the script, the call to action, the pressure, and the promise.
That's why our detection starts where the persuasion lives: the content, especially the audio. Scam videos often reuse the same pitches, again and again, such as “guaranteed returns,” “limited-time access,” “you’ve been selected,” “act before it’s too late.” Even when the visuals look polished, even when the footage looks familiar, the audio can give away the intent.
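The phrase reuse described above can be sketched as a toy transcript flagger. To be clear, this is an illustrative assumption, not the actual detection logic: the phrase list comes from the examples in this post, and the scoring is a deliberately naive substring count.

```python
# Toy sketch: flag transcripts that reuse common scam pitches.
# The phrase list and scoring are illustrative, not a real detector.
SCAM_PHRASES = [
    "guaranteed returns",
    "limited-time access",
    "you've been selected",
    "act before it's too late",
]

def scam_phrase_score(transcript: str) -> int:
    """Count how many known scam pitches appear in a transcript."""
    text = transcript.lower()
    return sum(phrase in text for phrase in SCAM_PHRASES)

pitch = ("Congratulations, you've been selected for guaranteed "
         "returns. Act before it's too late!")
print(scam_phrase_score(pitch))  # 3 phrases matched
```

A real pipeline would work on speech-to-text output and combine many weaker signals rather than exact phrase matches, but the intuition is the same: scam scripts repeat, and repetition is detectable.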
As we continued our research on what makes a deepfake a scam, we started asking another question that changed what we looked for:
Once a video is already labeled as scam content by our audio and content analysis, which faces keep showing up across those scam videos?
The answer was not only “famous people.”
It was actually a set of VIPs that do not exist.
Why we looked for “VIPs” in scam videos
In scam ecosystems, repetition is rarely an accident. Criminals optimize what works, then scale it.
So instead of asking “who is this person?” we asked “how often does this face appear across scam-labeled videos, and where does it show up?” If the same presenter is repeatedly used as a delivery vehicle for scam scripts, that recurrence becomes a signal of reuse, coordination, and industrialization.
To do that, our proof of concept takes scam-labeled videos, extracts faces, generates compact visual descriptors, then groups them by similarity. The most prevalent identities in that scam-labeled set become “VIPs,” meaning faces that appear unusually often across scam content. We use faces here as an additional signal for grouping and speed, to connect related scam videos and spot scale. It complements audio and transcript analysis, it is not the primary factor deciding whether a video is a scam.
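The grouping step can be sketched as a greedy similarity match over descriptor vectors. This is a minimal toy, built on assumptions: real descriptors would come from a face-embedding model rather than hand-written tuples, and the cosine-similarity threshold here is arbitrary.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def group_faces(descriptors, threshold=0.9):
    """Greedily assign each descriptor to the first identity
    whose representative it closely matches; otherwise start
    a new identity. Returns one identity label per input."""
    identities = []  # one representative vector per identity
    labels = []
    for d in descriptors:
        for i, rep in enumerate(identities):
            if cosine(d, rep) >= threshold:
                labels.append(i)
                break
        else:
            identities.append(d)
            labels.append(len(identities) - 1)
    return labels

# Toy descriptors: three near-identical faces plus one outlier.
faces = [(1.0, 0.0), (0.99, 0.05), (0.98, 0.1), (0.0, 1.0)]
counts = Counter(group_faces(faces))
print(counts.most_common(1))  # → [(0, 3)]: identity 0 appears 3 times
```

The most frequent labels in a scam-labeled corpus are the “VIPs”: identities that recur far more often than chance would suggest.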
We expected the usual outcome: a list dominated by high-profile public figures that scammers frequently impersonate.
And yes, those faces are there. Public figures like Elon Musk have become recurring bait in investment-themed scams, giveaways, and “exclusive opportunity” narratives.
But the most important finding was what appeared alongside the celebrity targets.
VIPs that don’t exist
Some of the most prevalent “presenters” in scam videos are not real people.
They look like ordinary talking-head hosts: friendly expression, neutral styling, camera-ready delivery. They speak clearly, they project confidence, and they often appear in exactly the kind of format people have learned to trust: a calm explainer, a “news-style” segment, a professional endorsement, a casual recommendation.
But they are AI avatars, synthetic presenters generated to be reused. To be clear, an AI avatar is not automatically a scam; what makes these specific presenters relevant is that we found them recurring inside videos already labeled as scams based on the message and behavior.
In general, proving that a person on screen is synthetic can require detailed and expensive analysis, things like lip-sync consistency, motion artifacts, and other forensic techniques. Our approach is different: we identify and aggregate recurring avatar identities from scam-labeled videos using lightweight visual descriptors and similarity grouping. That gives us a faster, cheaper way to recognize AI avatar presenters that are not yet “known,” but are already showing up frequently in the wild.
We are confident about this classification. These are not ambiguous “maybe enhanced” clips. They are AI avatar outputs, and we have also seen the supply chain behind them, including avatar creation services marketed openly on freelance marketplaces like Fiverr.
This matters because it changes the threat model.
If you only think about deepfakes as “someone impersonating a celebrity,” you miss the bigger shift: scammers are increasingly manufacturing credibility without impersonating anyone at all.
That might sound like a minor distinction, but it changes the incentives.
- A celebrity deepfake creates obvious controversy and is easier for platforms and victims to challenge.
- A synthetic presenter has no “real person” attached, no victim to complain, and no public figure to debunk the content.
- The same synthetic presenter can be reused endlessly, across topics and languages, with minimal cost.
In short, these “VIPs” are not important because they are recognized; they are VIPs because they are operational assets in scam production.
From celebrity deepfakes to avatar farms
Celebrity impersonation is still here, and it will remain attractive. Reputation is a powerful shortcut to trust.
But impersonation has trade-offs. It is high-profile, easier to report, and riskier for scammers. It can also be brittle: a single debunk can collapse a campaign.
Synthetic presenters reduce that risk. You do not have to mimic a specific person; you just have to look credible enough to keep someone listening for 20 seconds.
This is why the “avatar farms” mental model fits.
Instead of crafting a one-off forgery, scammers can run a production line:
- Create or acquire a synthetic presenter that looks broadly trustworthy
- Generate scripts at volume, tailored to whatever is trending
- Swap voiceovers, captions, and visual overlays
- Publish, measure engagement, then publish again with minor variations
That production line scales. It is designed for experimentation, iteration, and reach.
And it works because most scams do not require cinematic realism. They require momentum.
Synthetic trust, the new social engineering
Humans are wired to treat faces and voices as social proof. A confident presenter can make a claim feel less risky, even when the claim is outrageous. A calm delivery can make urgency feel “reasonable.” A familiar format can make a stranger’s instructions feel like a normal next step.
That is the core idea behind synthetic trust.
AI avatars are not trying to be memorable. They are trying to be acceptable. They are engineered to pass a quick gut check (“this seems legitimate”) long enough to drive a click, a download, a payment, or a message.
For scammers, that is an upgrade. A synthetic presenter does not get tired. It does not demand a fee. It does not have a moral objection. It can be generated in many languages, in endless variations, and deployed in parallel across platforms.
You may still see celebrity bait at the top of the funnel. But underneath it, a lot of scam content is shifting toward manufactured presenters that are easier to scale.
The twist: sometimes the video is real and only the audio is weaponized
There is another reason audio-first detection matters: not every scam video relies on a fake face.
Sometimes criminals take legitimate footage, a real interview, a real tutorial, a real product explainer, and overlay a scam pitch on top. The visuals can be authentic. The deception sits in the narration, the instructions, and the call to action.
This technique does two things for scammers:
- It reduces production cost, because they can recycle existing footage
- It reduces suspicion, because the visuals look “normal” at a glance
If defenses focus only on “does the face look fake?”, this kind of repurposed content can slip through. That is why we treat the message as primary. Then, once content is labeled as scam, we look for the repeated visual infrastructure that carries those messages, including recurring AI avatar presenters.
A note on hired actors
Not every recurring presenter in scam content is synthetic. We have also seen scams that use hired actors or commissioned videos. Sometimes it is a human reading a script, sometimes it is a human clip later repurposed with different narration or overlays.
In other words: the “avatar farm” trend does not replace all old techniques. It sits alongside them.
That is also why we avoid simplistic rules like “this face is fake, therefore it is a scam.” What matters is intent and behavior. Faces are a signal that helps us connect the dots faster; they are not the verdict.
Practical tips: how to protect yourself from scam videos
If a video is pushing you toward money, logins, downloads, or urgent verification, treat the presenter as part of the packaging, not proof.
- Follow the action, not the face. If a video is pressing for urgency, secrecy, or immediate payment, pause. That pressure is often the scam.
- Verify outside the platform. Search for the claim through independent sources, not links in the description or comments.
- Be suspicious of “too smooth” certainty. Scam scripts often sound confident, generic, and strangely frictionless, as if risk does not exist.
- Watch for mismatch. If the visuals feel unrelated to the pitch, or the audio feels pasted on, treat it as a warning sign.
- Avoid installs prompted by videos. “Install this to verify,” “download this viewer,” “run this command”: these are common routes into malware and credential theft.
- Use protection tools. Tools like Norton’s AI-powered Scam Protection and Avast Deepfake Guard can help spot what’s easy to miss, including deepfake audio in videos and scam signals across the web, SMS, and more.
The bottom line
Deepfakes may get the headlines, but they are only part of the story. From impersonating real people to manufacturing trust at scale, faces, voices, scripts, formats: all of it can be generated, reused, and optimized.
Which means the real skill is not spotting what looks fake; it is recognizing how scams behave.
Follow the pressure. Question the promise. Verify outside the moment.