The Scam Ad Machine: Part II

How scam ads hide in plain sight, survive takedowns, and keep coming back
Written by Luis Corrons, Efe Karabeyli, Daniil Khmelnytskyi, Thomas Bühler, Michalis Pachilakis
Published: March 20, 2026
Read time: 12 minutes

    Scam advertising does not thrive on social platforms because it is invisible. It thrives because it is designed to look legitimate, just long enough to reach victims before detection catches up.

    In our first investigation, The Scam Ad Machine, we showed the scale of the problem and how widespread scam ads have become on Meta platforms, with roughly 1 in 3 ads approved for publication in the UK and EU being a scam. This second part builds on that foundation, focusing on how these campaigns actually operate in practice.

    Across our investigation of Meta platforms, we saw the same pattern again and again: scammers combining normal advertising features, deceptive presentation tricks, and short-lived campaign tactics to stay in circulation. This is not a single evasion trick. It is an operating model.

    Below, we break down the techniques we found at scale, and why many of them are difficult to catch through automated moderation, manual review, or public transparency tools.

    How scammers hide in plain sight

    Multi-facet ads: hiding one scam among legitimate links

    One of the most common techniques we uncovered is what we call multi-facet advertisements.

    These ads bundle multiple images, products, and links into a single ad. Most of the links point to legitimate marketplaces or well-known ecommerce platforms such as Amazon, eBay, AliExpress, or Etsy. Hidden among them is a single malicious or scam-related link, often leading to fake ecommerce stores, fraudulent service pages, account scams, or impersonation pages tied to public figures.

    The trick is simple: make the scam look like part of a normal shopping experience.

    By surrounding one malicious destination with a cluster of legitimate brands and products, the ad looks ordinary at first glance. That lowers suspicion for users and can also make review harder if the ad is assessed superficially. It also creates collateral damage: legitimate brands and products end up visually associated with fraudulent campaigns.

    From a detection perspective, this should be a solvable problem. A robust review process could inspect the full set of linked domains and compare what is shown with what is actually being promoted. In practice, however, we identified large volumes of these campaigns running simultaneously from different advertisers on Meta platforms, often linked to newly created advertisers and low-history accounts.
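    The domain-level check described above can be sketched in a few lines. This is a minimal illustration, not a production detector: the allowlist, the function names, and the example links are all hypothetical, and a real system would use a public-suffix list (e.g. via the `tldextract` package) instead of the rough two-label heuristic shown here.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of marketplaces commonly used as cover in multi-facet ads.
KNOWN_MARKETPLACES = {"amazon.com", "ebay.com", "aliexpress.com", "etsy.com"}

def registrable_domain(url: str) -> str:
    """Very rough registrable-domain extraction: keep the last two labels.
    A real system would consult a public-suffix list instead."""
    host = urlparse(url).netloc.lower().split(":")[0]
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def flag_outliers(ad_links: list[str]) -> list[str]:
    """Return links whose domain is not on the marketplace allowlist.
    In a multi-facet ad, a single outlier among trusted domains is suspicious."""
    return [u for u in ad_links if registrable_domain(u) not in KNOWN_MARKETPLACES]

links = [
    "https://www.amazon.com/dp/B000000",
    "https://www.ebay.com/itm/12345",
    "https://best-crypto-returns.example/offer",  # the one non-marketplace link
]
print(flag_outliers(links))  # only the outlier is flagged
```

    The point is that the mismatch is mechanically visible: an ad whose creative shows four marketplaces but whose link set contains one unknown domain is a strong review signal.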

    An example of financial fraud hidden among Amazon products. To make the offer more credible, the scammers use a fake BBC publication.
    The advertisement mentioning Elon Musk in order to make the product appear credible

    URL masking: when the ad shows one domain and sends users to another

    This is one of the most effective deception techniques we observed.

    Scammers make an ad appear to link to a trusted domain, such as a major marketplace or brand, while the click sends the user somewhere else. The ad displays a familiar name, but the landing page belongs to unrelated infrastructure controlled by the attacker.

    For users, especially on mobile, that visible domain cue is often enough. Many people do not inspect the final URL before clicking, and once they land on an attractive page with compelling offers or urgency signals, many will continue browsing.

    In our observations, Meta’s API may expose the domain shown in the ad, not necessarily the final destination the user reaches after clicking. That makes this technique harder to detect programmatically and significantly complicates analysis by transparency researchers, watchdogs, journalists, regulators, and law enforcement.

    In other words, what the ad appears to be, and where it actually leads, can diverge in ways that are difficult to verify at scale.

    The actual link leads to a website selling products completely unrelated to the advertised ones.
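    Checking for this divergence is conceptually simple once the final landing URL is known: compare it against the domain the ad displays. The sketch below assumes the redirect chain has already been resolved (in practice that may require an instrumented browser, since plain HTTP clients can miss JavaScript-based redirects); `hosts_match` and the example URLs are illustrative names, not part of any real API.

```python
from urllib.parse import urlparse

def _normalize(host: str) -> str:
    """Lower-case and drop a leading 'www.' so comparisons ignore it."""
    host = host.lower()
    return host[4:] if host.startswith("www.") else host

def hosts_match(display_domain: str, final_url: str) -> bool:
    """True if the landing host equals the displayed domain or is one of its
    subdomains. `final_url` is assumed to be the URL reached after all
    redirects have been followed."""
    landing = _normalize(urlparse(final_url).netloc.split(":")[0])
    shown = _normalize(display_domain)
    return landing == shown or landing.endswith("." + shown)

# A masked ad: the creative shows a marketplace, the click lands elsewhere.
print(hosts_match("amazon.com", "https://cheap-deals.example/landing"))  # False
print(hosts_match("amazon.com", "https://www.amazon.com/dp/B0"))         # True
```

    The hard part is not the comparison but obtaining the true final URL, which is exactly the information that transparency tooling often lacks.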

    Typosquatting, deceptive subdomains, and misleading previews

    Why abandon a tactic that still works?

    A classic technique remains highly effective: registering domains that closely resemble legitimate brands. A small spelling change, an extra character, or a visually similar character can be enough to fool users at a glance. In fast-scrolling environments, these differences are easy to miss.
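    One-character squats of this kind can be surfaced with a plain edit-distance check against a brand list. This is a minimal sketch under stated assumptions: the `BRANDS` set, the threshold, and the function names are illustrative, and a real pipeline would also exclude other known brands from matching each other and handle visually confusable characters separately.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

BRANDS = {"amazon.com", "ebay.com", "etsy.com"}  # illustrative brand list

def looks_like_typosquat(domain: str, max_dist: int = 2) -> bool:
    """Flag domains within a small edit distance of a known brand,
    excluding exact matches (which are the brands themselves)."""
    return any(0 < edit_distance(domain, b) <= max_dist for b in BRANDS)

print(looks_like_typosquat("amaz0n.com"))  # True: one substitution away
print(looks_like_typosquat("amazon.com"))  # False: exact match
```

    Edit distance alone will not catch homoglyph swaps (a Cyrillic letter in place of a Latin one has distance 1 but is invisible to the eye), which is why confusable-character normalization is usually layered on top.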

    Scammers also use deceptive subdomain structures to look more credible. Instead of using a plainly suspicious domain, they may deploy subdomains such as go.[domain], shop.[domain], or offers.[domain]. These can look more legitimate to users and may also create inconsistencies in detection coverage. In some cases, we observed situations where one domain variant was flagged while a related subdomain remained active.

    Another layer of deception appears in the creative itself. In ads using domain impersonation or typosquatting, we also observed mismatches between the ad’s visual preview and its actual content. For example, a video thumbnail may show a legitimate product, a game scene, or unrelated imagery, while the underlying content promotes a scam. If neither an automated system nor a human reviewer clicks through or expands the media, the discrepancy may go unnoticed.

    This combination, deceptive domain naming plus misleading visuals, increases both click-through potential and evasion resilience.

    An example of an advertisement using the prefix “go.”. The link “GO.DATING4SINGLES.COM” does not lead to the legitimate website, but “DATING4SINGLES.COM” does.

    Unicode text obfuscation that breaks detection, not readability

    Sometimes the URL is not the main problem; the text is.

    Many scam ads rely on familiar triggers: high-return investments, miracle products, fake financial services, urgent offers, or celebrity-backed promises. In plain text, these patterns should be relatively easy to flag for additional review.

    Scammers increasingly avoid this by using obfuscated characters and Unicode tricks. To a human reader, the text may look normal. To an automated system, parts of the same string may be transformed into unusual Unicode sequences, invisible joiners, or look-alike characters that break keyword matching and reduce classifier reliability.

    The result is elegant from the scammer’s perspective: readable to people, unreliable for machines.

    We observed this technique at scale in scam ads circulating across Meta platforms, often in combination with other evasive tactics such as domain impersonation and celebrity references.
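    Much of this obfuscation can be undone before keyword matching runs, by applying Unicode compatibility normalization (NFKC) and stripping zero-width characters. The sketch below is a simplified illustration: the `defang` name and example string are ours, and NFKC alone does not handle cross-script confusables such as a Cyrillic “о”, for which a confusables table (e.g. Unicode TS #39 skeletons) would also be needed.

```python
import unicodedata

# Common zero-width characters used to split keywords invisibly.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def defang(text: str) -> str:
    """Normalize compatibility look-alikes (NFKC), drop zero-width
    characters, and lower-case, so matching sees what the user sees."""
    text = unicodedata.normalize("NFKC", text)
    return "".join(ch for ch in text if ch not in ZERO_WIDTH).lower()

# "Free" in mathematical bold letters, plus a zero-width space inside "guaranteed".
obfuscated = "\U0001d405\U0001d42b\U0001d41e\U0001d41e money guar\u200banteed"

print("guaranteed" in obfuscated)          # False: the raw string evades matching
print("guaranteed" in defang(obfuscated))  # True after normalization
```

    To a reader the raw string still says “Free money guaranteed”; only the machine-readable form was broken, and normalization restores it.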

    Hidden impersonation: names that look real to people but not to machines

    Impersonation remains one of the fastest ways to manufacture trust in scam advertising.

    Scammers know that names associated with wealth, authority, or credibility can dramatically increase engagement, whether it is a celebrity, a business leader, or a title such as “Dr.” or “PhD.” Those same names should also be obvious candidates for automated scrutiny.

    Scammers work around this by manipulating text so that it looks normal to a user but is encoded differently for machines. One common technique we observed uses special spacing characters or joiners that visually preserve a name while changing its underlying structure. To the naked eye, it may read as a well-known person’s name. To a detection system, it may be processed as a different token entirely.

    This technique was especially common in scam ads designed to manufacture credibility through famous names, credentials, or authority signals.
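    The same class of defense applies to name matching: strip invisible format characters before comparing ad text against a watchlist of frequently impersonated names. This is a hedged sketch, not a deployed detector; `visible_form`, the watchlist entry, and the ad text are illustrative, and we use Unicode’s “Cf” (format) category, which covers zero-width joiners and word joiners.

```python
import unicodedata

def visible_form(text: str) -> str:
    """Strip format-category (Cf) characters such as zero-width joiners
    and apply NFKC, so a watchlist comparison sees the text the way a
    human reader does."""
    cleaned = "".join(ch for ch in text if unicodedata.category(ch) != "Cf")
    return unicodedata.normalize("NFKC", cleaned)

WATCHLIST = {"Dr. Margaret Hunt"}  # illustrative impersonation watchlist

# A zero-width joiner hidden inside the name defeats naive substring matching.
ad_text = "Dr. Mar\u200dgaret Hunt recommends this product"

raw_hit = any(name in ad_text for name in WATCHLIST)
clean_hit = any(name in visible_form(ad_text) for name in WATCHLIST)
print(raw_hit, clean_hit)  # False True
```

    To the eye both strings read identically; only the stripped form matches the watchlist entry, which is precisely the gap the scammers exploit.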

    This advertisement uses the name of dentist Dr. Margaret Hunt to gain users' trust. At the same time, a special character is used in the name to avoid detection.

    Why scam ads keep coming back

    The problem is not only that scam ads evade detection. It is that they regenerate faster than enforcement can disrupt the operation behind them.

    This is the Hydra problem of scam advertising. Remove one ad, and several variants can reappear through new accounts, duplicate creatives, and short-lived campaigns built to expire before meaningful intervention happens.

    Our investigation did not only examine what scam ads looked like. It also examined what happened to them over time, how they were removed, how quickly similar variants reappeared, and how easily the underlying operations continued.

    What we observed suggests a recurring pattern: many interventions remove individual ad instances, while the broader scam operation remains fully capable of regeneration.

    Takedowns that remove ads, but not the operation

    One of the first patterns we noticed was that removals often appeared to happen one ad instance at a time.

    We observed cases where the same scam ad, or functionally identical copies of it, continued running even after one instance had been removed. This happened across different advertisers and, in some cases, within the same scam-linked advertiser account. The text, target audience, visuals, and destination could remain effectively the same, while only one instance was taken down.

    Based on these observations, moderation often appears to act on the ad object that was detected, rather than reliably escalating to advertiser-level or campaign-cluster disruption.

    For scammers, this is a gift. If enforcement targets copies one by one, duplication becomes a business advantage.

    Short-lived campaigns: high volume beats persistence

    Scammers do not need every ad to survive. They need enough ads to run long enough.

    A common pattern we observed was the use of many short-lived advertisements, often with small budgets and short runtimes. Instead of relying on one long campaign, scammers distribute risk across large numbers of disposable ads. If some are removed, others remain active and continue reaching users.

    This strategy fits the takedown pattern above. If moderation is slower than campaign turnover, short-lived scam ads become highly efficient. The ad dies, the operation continues.

    We also repeatedly observed advertiser pages with very limited organic activity, minimal followers, random page names, and behavior patterns consistent with throwaway infrastructure, yet still capable of launching scam ads.

    Permanence is not the business model; throughput is.

    User reporting is too slow for disposable scam ads

    We also tested the user reporting path directly by reporting scam ads from platform accounts.

    In multiple cases, we observed the same ads still active after reporting, and in some cases visible again on subsequent checks. This does not necessarily mean reports are ignored, but it does suggest that user reporting alone is too slow or too inconsistent to function as an effective frontline defense against high-volume scam advertising.

    This becomes even more problematic when campaigns are designed to run briefly and aggressively. By the time an ad is reviewed manually, it may already have completed its useful life, reached a large audience, and been replaced by multiple variants.

    At this scale, platform moderation faces a difficult asymmetry. Scammers can generate and rotate ads faster than manual review can reasonably keep up.

    Advertiser accounts are cheap to replace

    Another structural issue is the ease of creating or replacing advertiser accounts.

    Even when accounts are blocked, scammers can often resume operations quickly by shifting to new accounts, new pages, or repurposed profiles. During our investigation, we observed large numbers of suspicious advertisers with limited page history, low follower counts, recently created pages, and naming patterns consistent with disposable setups.

    This creates a persistent regeneration advantage. If the cost of losing an advertiser account is low, and the cost of creating a replacement is also low, enforcement becomes a recurring operating expense for the scammer, not a meaningful deterrent.

    That is the Hydra problem in practical terms: remove one head, and the infrastructure behind it keeps producing more.

    Scam ads are engineered for resilience

    Scam advertising is not static. It is adaptive, distributed, and engineered for resilience.

    From URL presentation tricks and Unicode obfuscation to disposable ad waves and rapid advertiser regeneration, scammers exploit structural weaknesses in how social advertising ecosystems are built, moderated, and audited.

    Reactive takedowns matter, but they are not enough when enforcement is slower than replication.

    Meaningful progress requires systemic changes: stronger advertiser verification, better destination-level transparency, campaign-level enforcement, and faster cross-platform disruption mechanisms.

    The Hydra metaphor matters here because it captures the real failure mode. Removing individual ads without disrupting the regeneration model does not solve the problem; it manages the symptoms while the system keeps producing new heads.

    Luis Corrons
    Security Evangelist at Gen
    At Gen, Luis tracks evolving threats and trends, turning research into actionable safety advice. He has worked in cybersecurity since 1999. He chairs the AMTSO Board and serves on the Board of MUTE.
    Efe Karabeyli
    Senior Principal Research Engineer
    Efe Karabeyli is a Senior Principal Research Engineer at Gen Digital based in Berlin. His expertise includes security research, large-scale systems and applied machine learning for real-world products.
    Daniil Khmelnytskyi
    Junior Data Scientist
    Daniil Khmelnytskyi is a Junior Data Scientist at Gen Digital. He works with data, machine learning, and AI to build practical solutions that strengthen cybersecurity and protect users worldwide.
    Thomas Bühler
    AI Researcher
    Thomas Bühler is an AI Researcher at Gen, developing large-scale machine learning systems to protect customers from malware, scams and evolving cyberthreats.
    Michalis Pachilakis
    Research Manager
    Michalis Pachilakis is a Research Manager at Gen Digital, with a background in online transparency, advertising transparency, and security.