Research

The Scam Ad Machine

Nearly one in three Meta ads found to point to a scam, phishing or malware
Written by Luis Corrons, Efe Karabeyli, Daniil Khmelnytskyi, Thomas Bühler, Michalis Pachilakis
Published
February 1, 2026
Read time
9 Minutes

    Over a 23-day period, Gen Threat Labs analyzed 14.5 million ads running on Meta platforms across the EU and UK, representing more than 10.7 billion impressions. Nearly one in three of those ads (30.99%) pointed to a scam, phishing or malware link. In total, scam ads generated more than 300 million impressions in less than a month. The activity was highly concentrated, with just 10 advertisers responsible for over 56% of all observed scam ads. Repeated campaign clusters were traced to shared payment and infrastructure linked to China and Hong Kong, indicating organized, industrial-scale operations rather than isolated bad actors. Read on to learn how this ecosystem operates, why existing enforcement fails to contain it and what the data reveals about who is driving scam advertising at scale.

    Advertising built to persuade

    Today, social advertising is being used to defraud at scale across some of the largest platforms. 

    This is not about isolated policy failures or a handful of malicious advertisers slipping through moderation. It is about an advertising system that is structurally attractive to criminals and consistently delivers results for them.

    Across the internet, ads have quietly become one of the most efficient delivery mechanisms for scams, phishing and malware. Today, dangerous ads don’t look suspicious; they look professional, familiar and seem to target your exact needs. On social networks, the same optimization engines designed to maximize engagement and conversion are being repurposed to maximize victimization. And that’s not accidental.

    When advertising becomes the attack channel

    Most people still picture cyberattacks starting with a sketchy download or a suspicious email or text. Increasingly, they start with something far more normal: an ad. Malvertising (malicious advertising) has surged because it gives criminals what every marketer wants: instant reach, precise targeting and scale. Gen telemetry shows that malvertising has become the single largest threat to individuals, accounting for 41% of all cyberattacks.

    Malvertising has become the top threat to consumers because malicious ads no longer look the way people expect. Many still picture loud pop-ups that scream, “I’m a virus,” or fake subscription traps. The modern model is quieter and more effective. It is a social engineering toolkit that blends into whatever people already trust and are currently paying attention to. We see this clearly in “scam-yourself” attacks like FakeCaptcha and ClickFix, where victims are nudged into doing the attacker’s job for them: approving a browser prompt, enabling push notifications, copying and pasting commands or “verifying” something that feels routine. Browser push notifications in particular have become a reliable hook, as one click can turn a normal website visit into a persistent stream of scam prompts and redirects.

    Attackers also borrow directly from legitimate marketing. They ride trends, exploit urgency, and add credibility signals. That includes pairing malicious ads with deepfakes and timely themes, especially in investment and crypto scams. Gen researchers have documented this extensively in CryptoCore campaigns, where deepfake videos and hijacked accounts were used to promote fraudulent investment schemes at scale. 

    Another persistent and successful tactic is impersonation. Cybercriminals buy ads on major ad networks (including search ads) to pose as trusted brands. The outcomes range from malicious redirects to phishing sites to drive-by downloads where malware is installed as part of the ad’s click path, often without the victim realizing what happened until later. 

    And malvertising is not confined to shady corners of the web. Even the most trusted websites can unknowingly display malicious ads because ads are delivered through complex, automated supply chains. The net effect is that malicious advertising has become a fast, efficient way to scam and phish people. Gen research on social media platforms shows that malicious advertising comprises roughly 30% of scam incidents observed across social networks, making it one of the most common threats users encounter in feeds and ads.

    If malvertising is now one of the primary ways criminals reach victims, then measuring what is happening inside major ad ecosystems is not just interesting; it is necessary.

    When online fraud metastasizes

    What we’ve discovered is not just growth; it’s an uncontrolled spread. What starts as an isolated scam tucked into an obscure corner of the web does not stay there. It spreads, mutates, and embeds itself into channels billions of people use every day. Online advertising, once a tool to connect consumers with products and services, has become part of the attack surface itself.

    Meta’s family of apps, including Facebook, Instagram, Threads, WhatsApp and Messenger, reaches an astonishing number of people. In Q2 2025, Meta reported that daily active users across its platforms reached about 3.48 billion worldwide. When a system that large becomes a gateway for scams, the impact is no longer isolated.

    Scam infrastructure does not operate in parallel to these platforms. It operates through them, leveraging their trust signals, engagement mechanics and targeting capabilities to propagate at speed.

    That is the context for what we measured.

    How we measured the problem

    Gen Threat Labs knew Meta ads were a problem and set out to measure the exact scale. Rather than chasing individual scam examples, we built a large-scale measurement pipeline around Meta’s Ad Transparency API, which provides visibility into ads that are currently or recently active on Meta platforms. Each day, we collected ads that contained English ad text and were delivered to users in regions covered by Meta’s Ad Transparency Library, primarily the EU and UK. While the ads themselves often promoted content intended for a global audience, our analysis focused on ads visible through the transparency framework rather than attempting to infer targeting beyond what the API explicitly exposes. We intentionally avoided narrowing the scope further in order to capture how scam advertising behaves at scale, not just in obvious edge cases.
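    As a rough illustration (not Gen’s actual pipeline), the daily filtering step can be sketched as a pure function over records returned by an ad transparency API. The field names (`languages`, `delivery_by_region`) and the abbreviated region set below are assumptions for illustration, not the study’s exact schema:

```python
# Illustrative sketch: keep English-text ads delivered in EU/UK regions.
# Field names and the region set are assumptions, not the study's schema.
EU_UK = {"GB", "DE", "FR", "ES", "IT", "NL", "PL"}  # abbreviated for brevity

def keep_ad(record: dict) -> bool:
    """True if the ad has English text and was delivered in a covered region."""
    has_english = "en" in record.get("languages", [])
    regions = {r.get("region") for r in record.get("delivery_by_region", [])}
    return has_english and bool(regions & EU_UK)

def filter_page(page: list[dict]) -> list[dict]:
    """Apply the filter to one page of API results."""
    return [ad for ad in page if keep_ad(ad)]
```

    Running this filter daily over every page of results, then deduplicating across days, yields the kind of longitudinal dataset the analysis below is built on.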

    This work exists because regulation forced visibility. In the EU and UK, ad transparency requirements expose advertiser behavior in ways that cannot be replicated elsewhere, enabling independent measurement of scam advertising at an industrial scale. Where such regulation is absent, scam activity does not disappear; it simply becomes harder to measure. Transparency does not create abuse; it reveals it.

    For this analysis, we focused on consumer-facing ads and did not intentionally collect or analyze political advertising. Political ads are self-declared by advertisers, so while some may still appear in the dataset due to misclassification, they were not the target of this study.

    Even with these constraints, the dataset quickly became massive.

    14.5 million ads, billions of impressions, one pattern

    Over a 23-day period, we collected 14.57 million ads, representing 10.76 billion impressions delivered across the EU and UK. Even as a partial view of the ecosystem, the scale was impossible to ignore.

    Using intelligence and classification technology, we identified ads pointing to infrastructure associated with e-commerce scams, phishing campaigns, malware distribution and other consumer-facing threats. This allowed us to move beyond ad text and examine what users were actually being sent to.

    The results were not subtle. 4.51 million ads in our dataset were identified as scam-related, meaning nearly one in three ads (30.99%) pointed to scam infrastructure. In total, these scam ads generated 143.8 million impressions in the EU and 304.11 million impressions across the EU and UK in less than a month.

    This is not an edge case. It is a staggering volume.

    Why takedowns don’t scale

    Volume alone does not explain why this problem persists. So, we looked at behavior over time.

    Across the dataset, we repeatedly observed scammers reusing the same infrastructure, identical domains, and near-identical ad text across many campaigns. In multiple cases, when one ad disappeared from the ad library for policy violations, other ads using the same domain and messaging remained active until their campaigns naturally expired.
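    That reuse pattern (one ad removed while siblings on the same domain stay live) is straightforward to surface once ads are grouped by landing domain. A minimal sketch, assuming each ad record carries hypothetical `url` and `status` fields:

```python
from collections import defaultdict
from urllib.parse import urlparse

def surviving_clusters(ads: list[dict]) -> dict[str, int]:
    """Return {domain: count of still-ACTIVE ads} for every landing domain
    that had at least one ad REMOVED, i.e. clusters where a takedown did
    not generalize. The "url" and "status" keys are illustrative."""
    by_domain = defaultdict(list)
    for ad in ads:
        by_domain[urlparse(ad["url"]).hostname].append(ad["status"])
    return {
        domain: statuses.count("ACTIVE")
        for domain, statuses in by_domain.items()
        if "REMOVED" in statuses and "ACTIVE" in statuses
    }
```

    Any non-empty result is direct evidence that removal of one ad did not propagate to campaigns sharing the same infrastructure.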

    From the outside, enforcement of scam ads appears reactive. Ads seem to be removed one at a time, often following reports or reviews, while related campaigns using the same infrastructure continue running. Even when scam components are reused at scale by the same advertiser, takedowns appear slow to generalize from one removed ad to its siblings.

    We cannot see Meta’s internal detection signals, so we are careful not to assign intent. But the observable outcome is clear: known scam building blocks often remain usable long after one instance is taken down.

    At this scale, that distinction matters.

    A small number of advertisers cause outsized harm

    Scale tells us how big the problem is. Enforcement shows us why it persists.
    But to understand who is actually driving the harm, we asked a different question: are scam ads driven by countless small actors or by a smaller number of highly active criminal groups?

    The answer was clear.

    Scam advertising was highly concentrated. The top 10 scam advertisers alone accounted for 56.1% of all scam ads, representing 2.53 million unique scam advertisements and 57.92 million impressions. A relatively small number of advertiser entities were responsible for a majority of observed harm.
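    The concentration figure itself is simple arithmetic: count scam ads per advertiser and take the share held by the top n. A sketch (with made-up IDs, not the study’s data):

```python
from collections import Counter

def top_n_share(advertiser_ids: list[str], n: int = 10) -> float:
    """Fraction of all scam ads attributable to the n most active advertisers.

    advertiser_ids: one entry per scam ad, naming the advertiser behind it.
    """
    counts = Counter(advertiser_ids)
    top_total = sum(count for _, count in counts.most_common(n))
    return top_total / len(advertiser_ids)
```

    Applied to the observed dataset with n = 10, this is the computation behind the 56.1% figure above.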

    These advertisers are not anonymous hobbyists. They are organized, persistent operators running industrial-scale campaigns.

    We repeatedly traced clusters of campaigns back to payers and infrastructure linked to China and Hong Kong, operating fleets of short-lived pages created almost exclusively to run ads. Western brand names and English storefronts cycled rapidly, while payers and beneficiaries rotated or disappeared from disclosures altogether. What did not rotate were the mechanics: the same domains, URL patterns and ad behaviors resurfaced across supposedly unrelated advertisers. 

    What comes next

    The numbers alone are unsettling. Millions of scam ads. Hundreds of millions of impressions. A small number of actors driving disproportionate harm, repeatedly and at scale.

    This isn’t about imperfect moderation. It’s about a system that, in practice, allows attackers to move faster than constraints can keep up. In our next blog post, we’ll move this research from measurement to mechanics. We’ll break down the techniques scammers use to hide malicious ads in plain sight, including how they manipulate displayed URLs, chain redirects, blend malicious links with legitimate ones and exploit the limits of transparency tools themselves.

    Because understanding the size of the problem is only the first step. Understanding how it persists is what allows it to be disrupted.

    Luis Corrons
    Security Evangelist at Gen
    At Gen, Luis tracks evolving threats and trends, turning research into actionable safety advice. He has worked in cybersecurity since 1999. He chairs the AMTSO Board and serves on the Board of MUTE.
    Efe Karabeyli
    Senior Principal Research Engineer
    Efe Karabeyli is a Senior Principal Research Engineer at Gen Digital based in Berlin. His expertise includes security research, large-scale systems and applied machine learning for real-world products.
    Daniil Khmelnytskyi
    Junior Data Scientist
    Daniil Khmelnytskyi is a Junior Data Scientist at Gen Digital. He works with data, machine learning, and AI to build practical solutions that strengthen cybersecurity and protect users worldwide.
    Thomas Bühler
    AI Researcher
    Thomas Bühler is an AI Researcher at Gen, developing large-scale machine learning systems to protect customers from malware, scams and evolving cyberthreats.
    Michalis Pachilakis
    Research Manager
    Michalis Pachilakis is a Research Manager at Gen Digital, with a background in online transparency, advertising transparency, and security.