From helpful AI to harmful deepfakes: Where video scams cross the line



We’re immersed in a digital feed that never sleeps. In just a few years, our habits have shifted from group chats and photo sharing to an always-on stream of video. Short clips, social feeds, and streaming platforms now blend into a single, continuous online viewing experience.
At the same time, AI-generated synthetic media, commonly known as deepfakes, has quietly moved from spectacle to infrastructure. What once felt like a novelty, reserved for viral clips and obvious hoaxes, is now part of everyday video production. Today’s deepfakes are often subtle, designed to blend seamlessly into the videos people already watch, making it hard to tell what’s real and what’s fake.
This seamlessness, and the uncertainty it creates about authenticity, matters because video has become the default way people learn, relax, and make decisions. In August 2025, YouTube accounted for 13.4 percent of all U.S. television viewing. In the UK, regulators report that 41 percent of YouTube viewing already happens on TV sets. Around the world, streaming continues to expand its share of home TV use, bringing “what to watch next” algorithms, suggested videos, autoplay queues, and targeted ads directly into living rooms.
So, what’s the problem? People instinctively trust what they see and hear. And while deepfakes aren’t inherently malicious, AI-generated video and audio can convincingly mimic real people, making false information feel authentic and emotionally persuasive. This blurs the line between reality and fabrication, and it opens a new frontier for scams, delivered faster and with greater impact than ever before.
How deepfake scams hide in plain sight
The tools AI creators rely on for voice cloning, synthetic narration, automated editing, and composite visuals are now standard features across many popular platforms and production workflows.
Adobe’s latest global survey found that 86 percent of creators use generative AI somewhere in their process. Analysts expect a growing share of outbound marketing messages to be synthetically generated, enabling persuasion at scale, delivered faster and with fewer human bottlenecks.
In practice, this makes deepfakes harder to spot. A video may look ordinary while the audio has been cloned. A familiar voice may be paired with stock footage. A stitched interview clip may borrow credibility from a real person without ever showing their face. Most of these techniques are not harmful by design, but criminals have learned to turn them to malicious ends by combining them with scam narratives that feel personal and believable. Criminals do not need a million views. They need the right viewer at the right moment.
How we’re protecting people from deepfake scams
In the second half of 2025, Norton introduced proactive on-device detection of malicious AI-generated audio in video content on platforms including YouTube, Facebook, Instagram, X, TikTok, Vimeo, Twitch, and DailyMotion. We began by delivering this capability on AI PCs in partnership with chipmakers like Intel and Qualcomm, where on-device processing makes it possible to analyze audio without sending sensitive data to the cloud. The capability has since expanded to traditional high-end PCs as well, making this early protection accessible to a broader audience.
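To make the on-device idea concrete, here is a minimal sketch in Python of how locally run synthetic-audio screening can work. It is not Norton’s implementation; the model file, input name, and threshold below are hypothetical, and only the general pattern (extract audio features, run a local classifier, never upload the audio) reflects the approach described above.

```python
# A minimal sketch of on-device synthetic-audio screening, not Norton's actual
# implementation. It assumes a locally stored classifier exported to ONNX
# (the model path, input name, and threshold are hypothetical).
import numpy as np
import librosa              # audio loading and feature extraction
import onnxruntime as ort   # runs the model locally; nothing leaves the device

MODEL_PATH = "synthetic_voice_detector.onnx"   # hypothetical local model file
THRESHOLD = 0.8                                # hypothetical decision threshold

def score_clip(audio_path: str) -> float:
    """Return the model's probability that the clip's voice track is synthetic."""
    # Load mono audio at 16 kHz and compute a log-mel spectrogram, a common
    # input representation for audio classifiers.
    waveform, sr = librosa.load(audio_path, sr=16000, mono=True)
    mel = librosa.feature.melspectrogram(y=waveform, sr=sr, n_mels=80)
    log_mel = librosa.power_to_db(mel).astype(np.float32)

    # Run inference entirely on-device; no audio or features are uploaded.
    session = ort.InferenceSession(MODEL_PATH)
    input_name = session.get_inputs()[0].name
    outputs = session.run(None, {input_name: log_mel[np.newaxis, ...]})
    return float(np.ravel(outputs[0])[0])

if __name__ == "__main__":
    if score_clip("suspect_video_audio.wav") >= THRESHOLD:
        print("Likely AI-generated voice: warn the viewer before they act on it.")
```

Because both feature extraction and inference happen locally, the clip never has to leave the machine, which is why AI PC hardware acceleration is a natural fit for this workload.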
At the same time, we released a preliminary version of cloud-based video manipulation detection and started collecting telemetry to improve accuracy and coverage over time. Today, at the Consumer Electronics Show (CES) in Las Vegas, we are showcasing an early preview of the next evolution of this protection: on-device video manipulation detection for AI PCs, in partnership with Intel.
Using Intel’s Panther Lake processor and our new, in-development image analysis tool, we can now scan video and detect malicious deepfakes of well-known public figures used by criminals in scam campaigns. This happens directly on the device, delivering faster and more private protection and setting a new benchmark for the industry. Over time, we expect the technology to cover more than celebrities and other public figures; it will also help protect against more devastating scams such as family member impersonation.
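Conceptually, public-figure detection can be sketched as a matching problem: compare a face embedding extracted from a video frame against verified reference embeddings, and raise an alert only when a known face coincides with a visual-manipulation signal. The sketch below is illustrative only; the gallery, embedding size, and thresholds are invented, and it is not Gen’s or Intel’s implementation.

```python
# A conceptual sketch, not the shipping implementation: match a face embedding
# from a video frame against a small gallery of verified reference embeddings
# for public figures, then combine the match with a separate manipulation
# score. The gallery, embeddings, and thresholds are invented for illustration.
import numpy as np

# Hypothetical verified reference gallery: name -> unit-length face embedding.
_rng_a, _rng_b = np.random.default_rng(0), np.random.default_rng(1)
REFERENCE_GALLERY = {
    "public_figure_a": _rng_a.standard_normal(128),
    "public_figure_b": _rng_b.standard_normal(128),
}
REFERENCE_GALLERY = {k: v / np.linalg.norm(v) for k, v in REFERENCE_GALLERY.items()}

MATCH_THRESHOLD = 0.7          # hypothetical cosine-similarity cutoff
MANIPULATION_THRESHOLD = 0.8   # hypothetical cutoff for the visual-manipulation model

def identify(face_embedding: np.ndarray):
    """Return the best-matching public figure (or None) and the similarity score."""
    face_embedding = face_embedding / np.linalg.norm(face_embedding)
    best_name, best_score = None, -1.0
    for name, reference in REFERENCE_GALLERY.items():
        score = float(np.dot(face_embedding, reference))
        if score > best_score:
            best_name, best_score = name, score
    return (best_name if best_score >= MATCH_THRESHOLD else None), best_score

def is_public_figure_deepfake(face_embedding: np.ndarray, manipulation_score: float) -> bool:
    """Flag only when a known face is recognized AND the frame looks manipulated."""
    name, _ = identify(face_embedding)
    return name is not None and manipulation_score >= MANIPULATION_THRESHOLD
```

Relying on verified reference data is what makes this class of detection more reliable for well-known faces than for arbitrary ones, which is also why family member impersonation remains the harder, later step.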
What the preliminary data shows
As of now, two patterns stand out. YouTube accounts for the largest share of blocked deepfake-enabled scam videos in our telemetry, followed by Facebook and then X. Additionally, the risk skews toward longer viewing sessions rather than short clips. This aligns with how people consume video today, especially on TVs and PCs, where extended watch time gives scammers more opportunity to persuade, and it mirrors YouTube’s prominence as the default destination for long-form, recommendation-driven viewing.
By country, the United States accounts for the largest share of intercepted deepfake scam videos, followed by Germany, the United Kingdom, Canada, and Australia. All other countries combined make up a larger share than any single market. These figures show where our protection intercepted scams, not where overall deepfake risk is highest. Results are influenced by feature availability and regional usage patterns. As adoption expands, we will add per-user risk metrics to provide more context.
Out of all the data, one detail is especially important: most deepfake scams were spotted in real time, during playback. They appeared as part of normal viewing, not as downloads, attachments, or links sent separately. This means scam deepfakes hide in plain sight, woven into normal video consumption. They feel routine and familiar, not disruptive, and serve as the delivery mechanism inside a broader attempt to steal money.
Our Deepfake Protection is built to stop the financial scams that cause the most harm, but it also guards against other scam types, grouping them into a broader scam category. Most blocked videos fell into this broader category, with financial and cryptocurrency scams following closely behind.
Deepfakes are neutral. Intent is not.
There is far more AI-generated and deepfake-assisted video online than outright scam content. Deepfake techniques are now widely used for legitimate purposes, including accessibility, localization, editing efficiency, and creative expression.
The presence of a deepfake alone is not the risk. Risk emerges when deepfake capabilities are paired with intent: urgent financial requests, promises of guaranteed returns, pressure to act quickly, or instructions to move conversations or payments off platform.
Our telemetry focuses on that overlap: deepfake-assisted media that is also a scam.
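A toy example of that overlap, with made-up cues and weights rather than our actual detection rules: a clip is flagged only when a manipulation signal and scam-intent language appear together.

```python
# A simplified illustration of the "overlap" idea: a clip is treated as a
# deepfake scam only when a manipulation signal coincides with scam-intent
# cues in its transcript. The cue list, weights, and thresholds are
# illustrative, not the product's actual rules.
import re

INTENT_CUES = {
    r"guaranteed (returns|profit)": 0.4,
    r"(act now|limited time|only today)": 0.3,
    r"(send|transfer) (crypto|bitcoin|money)": 0.4,
    r"(whatsapp|telegram|dm) (me|us)": 0.3,   # moving the conversation off platform
}

def intent_score(transcript: str) -> float:
    """Sum the weights of intent cues found in the transcript, capped at 1.0."""
    text = transcript.lower()
    total = sum(w for pattern, w in INTENT_CUES.items() if re.search(pattern, text))
    return min(total, 1.0)

def is_deepfake_scam(manipulation_prob: float, transcript: str) -> bool:
    """Both signals must be present: manipulated media AND scam intent."""
    return manipulation_prob >= 0.8 and intent_score(transcript) >= 0.5

# Example: a cloned voice promising guaranteed returns and pushing urgency.
print(is_deepfake_scam(0.93, "Guaranteed returns if you act now and send Bitcoin."))
```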
Most detections fall into a broad manipulated media category, often voice-led clips designed to sound authoritative, familiar, or reassuring. Within the smaller, clearly labeled set, the dominant themes remain finance, investment, and cryptocurrencies.
As deepfakes become normal in media production, the right way to judge risk is by behavior and outcome, not by whether AI or deepfake tools were involved.
What the signals suggest
Several themes emerge clearly.
First, scammers follow attention. Deepfake scams appear wherever people spend time watching video. If they can blend into ordinary playlists on major platforms, they do not need mass reach. Believability matters more than scale.
Second, finance remains the primary lure. Investment advice, trading schemes, and giveaways dominate. This mirrors broader trends in digital advertising, where AI-targeted calls to action increasingly appear across social video and connected TV.
Third, deepfakes scale because AI use is normal. With creator adoption this high, criminal groups naturally use the same tools. The difference is intent, and our detections focus squarely on deepfake-enabled deception tied to scams.
What people can do right now
Do not trust a video on its own. If it asks you to move money or share sensitive information, pause and verify through an official website or contact method you look up yourself.
Ignore urgency and exclusivity. Limited-time offers, countdowns, and “only for subscribers” language are common deepfake scam tactics.
Listen closely to the audio. Many deepfake scams rely on cloned voices. Watch for unnatural pacing, missing pauses, or mismatched sound environments.
Report and move on. Reporting helps platforms downrank deepfake scam content faster and protects others. Scams scale when people share them, so reporting matters.
How good technology is now playing a role
At the same time, defensive technology is evolving just as quickly. At Gen, we are embracing and innovating with AI to counter AI-driven deception through deepfake protection built into our Norton and Avast products, including work that analyzes both audio and visual signals for signs of manipulation. This includes detecting synthetic voices and identifying subtle visual inconsistencies that indicate digital alteration. The initial proof of concept at CES is the first step in applying our technology to detect well-known public figures, where verified reference data makes detection more reliable.
When a suspected scam deepfake is detected, our products do more than flag it. They provide contextual guidance, helping people understand what they are seeing and what steps to take next. The goal is not to label every AI-edited video as dangerous, but to focus on intent and harm, intervening when manipulated media is being used to deceive, pressure, or defraud. As deepfakes blend more seamlessly into everyday viewing, protections like these help restore balance by bringing scrutiny to moments where trust is most easily exploited.
Deepfakes work because they align what we hear with what we expect to see. Audio carries warmth, authority, or urgency, but visuals often provide the final proof. A familiar face or trusted brand image can override doubt in seconds, even when the message itself makes little sense. Presented as interviews or tutorials, these scams feel helpful, not harmful. This is why detecting visual manipulation is becoming just as important as analyzing synthetic audio, especially as scam campaigns increasingly rely on recognizable faces to create instant credibility. It also explains why nearly all interceptions in our data occur during playback rather than through manual checks. Protection has to operate where attention lives.
Deepfakes are now part of modern media production, and that will not change. The question is not whether deepfake technology is involved, but whether it is being used to drive a fraudulent action.
This is our first share-based look at deepfake-enabled scam videos across platforms and countries. It establishes a baseline and moves the conversation from anecdotes to patterns. As adoption grows, we will continue to highlight where people are actually encountering deepfake scams and which lures are costing them real money.
To learn more about deepfakes and how Gen and Intel are fighting back, visit https://norton.com/feature/ai-scam-protection.