Gen Q4/2025 Threat Report


Foreword
It’s time to take a closer look at the trends shaping the cyber safety landscape. Welcome to the Gen Q4 2025 Threat Report. In Q4, we blocked 1.43 billion attacks, and our global risk ratio rose 17.6% quarter over quarter.
If you spent any time online this quarter, the pattern will feel familiar. You open a browser to look something up. A friend messages you. A bank or payment app asks you to confirm something small. Somewhere in that stream is a link to click, a QR code to scan, a pairing request to approve, or a code to enter. What looks like ordinary digital noise is often the setup for a scam.
The center of gravity hasn’t shifted away from people. It has spread across the places they spend their digital lives: browsers, chats and money apps. What changed this quarter is how often that tiny, routine action is the scam. The attack no longer comes later. It happens the moment the door is quietly opened. We also see more campaigns that only work if the victim does the “technical” step themselves. Fake tutorials and download sites persuade users to fetch and run the payload. Device linking tricks people into adding an attacker’s browser as a trusted companion to their messaging app. Verification screens quietly turn into pairing flows that mirror conversations and contacts. Infostealers, spyware and other classics are still there, but some of the most damaging incidents start from what looks like a harmless confirmation dialog.
AI sits underneath a lot of this, usually without being named. It is behind fluent, local chats in cloned Steam profiles that sound exactly like a real friend. It can also turn just a few seconds of audio into a believable emergency call from a relative. It lets small groups produce convincing videos that look like investment advice, giveaways or charity requests and drop them into the same feeds people already trust for entertainment and news. At the sharper edge, we have seen state-backed actors using AI systems to handle much of the reconnaissance and scripting work in intrusions, and experimental malware that asks a model how to rewrite itself each time it runs. On the defensive side, AI is just as present in our own security stack, from spotting strange login journeys to flagging scam videos during playback, but there is still a gap between what security technology can see and what people feel they can understand.
Privacy and identity continue to act as long-term fuel for abuse. A single weakness in WhatsApp’s number lookup exposed how easy it was to map billions of phone numbers to active accounts before the issue was fixed. A support vendor for a popular chat platform leaked tens of thousands of government ID photos collected for age checks. Data brokers handed out “samples” that revealed precise daily movements of European officials, easily linked to homes and workplaces. End-to-end encryption is essential, but it can’t protect you once spyware has compromised the device itself. The Landfall campaign showed again that endpoint compromise bypasses privacy promises made at the network layer. Across our own identity alerts and in external research, the pattern is consistent. Breaches, leaks and data sales rarely end with a single notification email. Instead, they resurface months later as account takeovers, fake professionals, synthetic identities and long, difficult recovery cycles.
Financial threats are now visible nearly every time people make daily money decisions. They show up when someone opens a new account, tips a creator, pays a bill or reacts to a text that looks like it came from their bank. Stolen identity data is used to pose as the “ideal” new customers applying for loans. Payment features designed for convenience, such as tipping and post settlement adjustments, are used to turn one-dollar transactions into five-figure debits. Microtransactions in games and social apps are increasingly abused for chargeback and refund fraud, where attackers reclaim purchases that were unauthorized after digital goods have already been spent or transferred, shifting the loss onto platforms and merchants. Old-fashioned SMS phishing, upgraded with better language and more convincing lures, still drives some of the largest direct losses when it persuades people to hand over one-time codes that bypass nearly every other safeguard.
As we wrapped up 2025, the most striking changes we see are no longer brand new types of malware; it is how tightly attacks are woven into every fiber of our everyday digital routines. Browsers, SMS, chat apps, social media and the tools people use to manage their money now form one continuous surface where trust, identity, privacy and finances intersect. Attackers lean on that continuity, on familiar brands and interfaces, then add automation and AI to reach more people with less effort and less noise.
Our job, and the purpose of this report, is to spot those patterns early, measure how they evolve from quarter to quarter and keep shifting protections closer to the small moments where people actually make these choices. Thank you for taking the time to read it, and I hope you enjoy the report.
Luis Corrons, Security Evangelist (aka Threat Whisperer)
Threat Landscape Highlights: Q4/2025
Security Threats & Scams
Across Q4, we saw the same core pattern as earlier in the year: most attacks today are not about exotic exploits; they are about getting people to lower their own defenses. What changed last quarter is how many of those scripts now hopped between devices and channels. It started in the browser, continued on the phone, then maybe even landed directly in the calendar instead of the inbox. In parallel, mobile spyware and enterprise extortion continued to evolve under pressure from law enforcement and falling ransom payments, while platforms and business software remained a very profitable surface for scams.
Scam-Yourself Attacks
The Scam-Yourself pattern we highlighted in the past quarters is still very much alive, now with cleaner and more focused campaigns that are easy to see in our data. In November, a single “fake tutorial” domain drove a sharp spike in Scam-Yourself detections. The lure is familiar: users search for tips or cracked software, and land on a page that pretends to walk them through an installation. The twist is that the instructions explicitly ask them to “scan” the screen with their phone camera to continue, pushing the second half of the attack chain onto mobile, where they are more likely to grant extra permissions or install side-loaded apps. A separate campaign in the same period, again dominated by a single domain, used similar tactics to pull people into bogus investment apps on mobile. In both cases, the criminals are not fighting the security model; they are persuading users to carry the payload across device boundaries for them.

A New Frontier: Calendar Invite Scams
We also saw classic tech support scams arrive through a new front door: calendar invites. In this scam, instead of a spam email, victims receive an unexpected calendar event that claims a security product subscription has just renewed for hundreds of dollars and includes a phone number “for support.” In our telemetry, tens of thousands of these invites were blocked in a single month. When our testers called one of the numbers, the script followed the same path we have seen for years in phone-based tech support fraud: the agent anchored the fake charge, insisted on remote access to “fix” the issue, and walked the victim through installing a remote access tool such as AnyDesk. The branding was almost incidental. In one call, the calendar mentioned Norton while the scammer talked about “your McAfee subscription.” The details may change; however, the core playbook is stable: steer the conversation to remote control and financial data as quickly as possible.
Mobile Threats
On mobile, spyware continued to trend upward in the first half of Q4 and became more invasive. Two Android families, Tambir and SpyMax, dominated our spyware detections in Q4, with a clear peak in November. SpyMax, commonly masquerading as legitimate apps such as Chrome or Netflix, requested network and package-installation permissions, and then used those to silently install additional components with full spying and monitoring capabilities. The impact was global but not uniform, with hotspots in countries such as Morocco, Yemen, Azerbaijan, Portugal and Türkiye. In one of the more sophisticated spyware campaigns of the quarter, researchers uncovered a new commercial-grade Android spyware family called Landfall. This spyware exploited a zero-day vulnerability in Samsung’s image processing library (CVE-2025-21042), which allowed attackers to compromise Galaxy devices via malicious image files, including files sent over apps such as WhatsApp. At this scale, a person’s trust becomes part of the attack surface. While encryption protects data transit, it cannot defend a device that has already been compromised.

The flaw was quickly added to CISA’s Known Exploited Vulnerabilities catalog and patched, but it underlines how messaging and media apps are now both social-engineering channels and mechanisms for exploit delivery.

The economic and legal pressure around spyware increased as well. Apple doubled its top bug bounty award to 2 million dollars for exploit chains that can achieve mercenary-spyware-level access, with bonuses that can push the total award above 5 million. In parallel, a US court granted a permanent injunction barring NSO Group from targeting WhatsApp users with spyware, after finding the company liable for hacking hundreds of accounts, although it significantly reduced an earlier damages award. Combined with the Landfall case and our own spyware telemetry, the picture is clear: phones are now the primary surveillance target for both everyday stalkerware-like campaigns and highly resourced actors, and the market value of mobile zero-days has risen accordingly.
Ransomware
Ransomware and extortion remained active but showed signs of stress. On the consumer side, ransomware encounters in our telemetry declined 6.8% year over year in 2025, staying well below the elevated levels we saw during the Magniber-driven highs that began in 2024 and persisted until this summer, when the campaign was stopped. We still observed smaller, more targeted incidents, including new Trinity-family ransomware samples going after small and mid-sized businesses, but not a broad return of mass-spread ransomware lockers.
Externally, incident response data points in the same direction. Coveware reported that only about 23 percent of ransomware victims paid in Q3 2025, a historical low, with average and median payments dropping by roughly two-thirds compared to the previous quarter as big enterprises increasingly refuse to pay and mid-market firms negotiate smaller amounts. Chainalysis, looking at blockchain flows, estimated that global ransomware payments in 2024 fell from 1.25 billion dollars to around 813 million, roughly a one-third drop year on year, a proof point that consumers and businesses alike are becoming more educated on how to manage ransomware extortion. Taken together, this suggests that the traditional encrypt-and-extort model is under real pressure, even though the number of groups and leak sites remains high.
Infostealers, Vulnerabilities and Other Exploits
Law enforcement pressure also spilled over into commodity crimeware infrastructure. In November, Operation Endgame, a coordinated international action, disrupted infrastructure used by infostealers such as Rhadamanthys, VenomRAT and Elysium, taking large numbers of servers and several domains offline. In our telemetry, Rhadamanthys detections dropped sharply after mid-November and remained low throughout the rest of the quarter. This marks a rare instance where a public takedown aligns with a clear and sustained decline in activity from a specific threat family.
The broader infostealer ecosystem, however, did not disappear. And other stealer families continued to operate. Something to think about here: do high-profile disruptions like this meaningfully change attacker behavior over the long term, or do they simply create space for the next malware-as-a-service brand to emerge?

One structural shift that happened in the background during Q4 is the nearing of the end of support for Windows 10, which will leave millions of still-active PCs on an increasingly unpatched operating system. That long tail of unsupported devices keeps older exploit chains profitable and turns legacy Windows into a standing pool of soft targets, from ransomware affiliates and botnet herders to infostealer operators.
In the world of extortion, this quarter, the Cl0p group and other actors ran large-scale extortion campaigns against organizations using Oracle E-Business Suite, exploiting a critical unauthenticated remote code execution flaw, CVE-2025-61882, to break into internet-facing instances and exfiltrate data. Victims ranged from universities and airlines to health providers and media outlets, many of whom only discovered the breach when they received extortion emails or saw their data listed on leak sites. Separately, the Scattered Lapsus$ Hunters collective claimed to have stolen more than a billion customer records from companies that rely on Salesforce, and published samples tied to firms in sectors such as insurance, airlines and credit reporting. In both cases, the pattern is similar: rather than encrypting endpoints, attackers look for central business platforms, exploit a single point of failure and then treat stolen data as the primary ransom lever.
Scams
Last but certainly not least, scams continued to scale by riding on top of mainstream platforms and trusted infrastructure. Scams, overall, continue to be a leading problem for consumer cyber safety. In 2025, there were 41 scams blocked every second, on average.
In our Q4 analysis of scams originating on social platforms, one theme stands out: concentration. Facebook alone accounts for 78.04% of social-origin scam blocks on desktop. When combined with YouTube, that figure rises to 95.71%. In plain terms, the vast majority of risky scam clicks begin in just two places: the social feed and the video loop.
The second defining pattern is what those clicks are designed to do. In Q4, e-shop scams dominate at 65.4% of social-origin blocks. Phishing followed at 15.09%, then generic scams (7.58%), tech support scams (6.28%), dating scams (3.45%) and financial scams (2.12%).
Each scam type also has its own “platform fingerprint.” For example, tech support scams are almost entirely Facebook-driven (99.25%), while phishing spreads more widely across Facebook (77.34%), YouTube (13.66%) and Reddit (3.94%).
Where the scam click starts (platforms)
- Facebook, 78.04%
- YouTube, 17.67%
- Instagram, 2.16%
- Reddit, 0.65%
- X, 0.60%
- TikTok, 0.38%
What the scam is trying to do (types)
- E-shop scams, 65.40%
- Phishing, 15.09%
- Generic scams, 7.58%
- Tech support scams, 6.28%
- Dating scams, 3.46%
- Financial scams, 2.12%
Our telemetry also showed a marked increase in e-shop scams in Q4 on both desktop and mobile. In fact, roughly half of all e-shop scam blocks we recorded in 2025 occurred in Q4, reflecting a surge in blocked fraudulent shops. The data is unsurprising considering the holiday period and public reports showing that Meta made around 10 percent of its annual revenue in 2024, roughly 16 billion dollars, from fraudulent ads and banned product listings, based on internal documents discussed in recent investigations. Those reports describe billions of scam-related ads per day, with some internal systems rating ads as highly likely to be fraudulent yet allowing them to run. For ordinary users, the distinction between an “ad platform” and a “scam delivery system” is increasingly academic. On the supply-chain side, the discovery of the ShaiHulud v2 malware in npm packages reinforced how popular developer ecosystems can be abused to push info-stealers and backdoors into thousands of build environments with minimal friction.

In short, Q4’s security threats and scams were less about brand-new malware families and more about the reuse of proven scripts and approaches across new channels: fake tutorials that jump from screen to phone, tech support scams that land in calendars instead of inboxes, spyware that hides inside image files and extortion crews that skip encryption and go straight for the data. Law-enforcement takedowns and falling ransom payments are clearly having an impact; the broader crime stack, from Scam-Yourself flows to platform-scale scams and enterprise app exploits, remains very active and continues to evolve.
AI-Driven Attacks & Defenses
Across nearly all threat types we monitored this quarter, AI stopped being a side topic and became part of the default background for both attackers and defenders. We saw three clear shifts: 1. Threat actors are starting to treat LLM exploits and “agentic” workflows as ordinary tools, 2. Social-engineering scams are being upgraded with AI voices, faces and scripts, and 3. On the defensive side, both platform providers and security products are racing to build AI-native protections.
On the offensive side, LLM abuse is moving from experiments into the real economy. Recent investigations into LLM black markets describe sellers offering jailbreak prompt packs, prompt injection recipes, and access to leaked models or stolen API keys, packaged much like exploit kits or malware builders. Rather than inventing their own jailbreaks, criminals can now buy prompts that work against the most popular models and plug them into phishing frameworks, fraud bots or custom tools.

The most visible AI abuse leap in Q4 came from the nation-state space. Anthropic disclosed that a China-linked group jailbroke its Claude Code model and used it to orchestrate a multi-step espionage campaign against roughly thirty organizations, including financial and government-related targets. According to Anthropic, the AI handled approximately eighty to ninety percent of the workflow, from reconnaissance and code generation to log parsing and data staging, with humans mostly approving steps and correcting mistakes. This is no longer “AI helped me write some malware”; it is AI acting as a junior operator across an entire campaign.
Google’s Threat Intelligence Group reported something similar at the tooling level. Its AI Threat Tracker documents experimental malware such as PROMPTFLUX and PROMPTSTEAL, droppers that call an LLM at runtime to rewrite their own Visual Basic Script, change obfuscation and adjust evasion on the fly. Instead of shipping a static payload, attackers ship a template that asks an AI model how to change itself every time it runs. This matches the “just in time” self-modification pattern we highlighted in earlier quarters, but now in code that actually runs on victim machines.
Operating systems are also becoming more “agentic,” which opens a new front. Microsoft’s new agent workspace features for Windows 11 are designed to let AI assistants perform tasks for the user with their own desktop, local account and access to folders such as Documents, Downloads and Desktop, when enabled. In its own security guidance, Microsoft warns that these agents introduce novel risks such as cross-prompt injection, where malicious content embedded in a document or user interface can override agent instructions and trigger data exfiltration or software installation. For now, these features are disabled by default and limited to preview builds, but the direction is clear. If you let an AI agent act inside the OS, anything that can steer it becomes part of the attack surface.

Where AI touches people directly, the main shift is in social engineering, which already accounted for over 90% of all blocked attacks in 2025. Voice cloning has already changed “grandparent scams” and family emergency fraud. Police in Canada, among others, have warned about an uptick in cases where criminals use a few seconds of audio from social media or voicemail to generate a convincing copy of a relative’s voice, pair it with a scripted accident or kidnapping story, and pressure victims into urgent transfers. In our featured story Love at machine speed, we zoom in on this broader pattern. AI makes romance scams and sextortion campaigns feel like full-on relationships, with coherent photos, voice notes, short videos and even documents that all feel fully authentic. The scam does not have to be linguistically brilliant; it just has to feel consistent and emotionally tuned, then pivot into money asks, “verification” flows or intimate exchanges that can be turned into extortion.
We also see smaller, more personal versions of these emotional, relational scams happening on gaming platforms. On Steam, for example, we have observed waves of friend requests from cloned or lookalike accounts that mirror real contacts, followed by smooth, localized chat messages that clearly have an LLM “feel” to them. The goal is not to impress anyone with language skills. It is to lower suspicion in your native language, then steer the conversation toward off-platform links, giveaways or trading “opportunities”.
Our featured story How AI made video scams personal, one viewer at a time looks at the same trend from the video side. In 2025, YouTube and other platforms became fixtures of living-room TV time, not just phone habits, and video recommendation engines now shape a large share of what people watch. That shift is exactly why we built Deepfake Protection around the intersection of manipulated media and scam intent, using multi-modal signals from the clip itself (including indicators of cloned or heavily edited audio) plus contextual cues like money requests, urgency scripts, and off-platform handoffs. Early telemetry shows that most blocked AI scam videos cluster on a handful of major platforms, with YouTube leading by share of blocks and Facebook and X behind it. The majority are not spectacular, viral deepfakes. They are ordinary-looking clips, often with cloned or heavily edited audio, tied to financial and cryptocurrency lures. That is why the story argues that AI in a clip is not a risk signal by itself; the meaningful signal is AI paired with a request for money, an off-platform handoff or a time pressure script.
AI also shows up around account takeovers. In The code that steals your WhatsApp, we follow a device-linking abuse pattern – what we call GhostPairing – where a simple “I found your photo” message lures victims to a fake login page that uses WhatsApp’s own pairing flow to add the attacker’s browser as a ghost device on the account. The core trick is pure social engineering around a legitimate feature; however, the consequences line up with our AI concerns. Once a WhatsApp account is silently mirrored, attackers can read private chats, learn which contacts are most trusted, and harvest voice notes and photos that can later be fed into voice cloning and synthetic media tools. The attack itself is not “AI-malware," but it builds the raw material that AI-powered fraud will later exploit.
Outside of personal relationships and messaging, AI is also being used to impersonate professionals and institutions. Reporting from the Bureau of Investigative Journalism found dozens of Fiverr listings where scammers posed as real UK solicitors (attorneys), stealing names and registration numbers from the Solicitors Regulation Authority and pairing them with AI-generated headshots and boilerplate text. These profiles advertised legal services and, in some cases, boasted about using AI to draft contracts. The same playbook maps easily onto other roles, such as investment advisors or “remote employees.” We deliberately handle those themes more fully in the Trust and Identity section, but they belong in the AI picture as well.
Meanwhile, defenders are adapting, although not always as quickly as attackers. On the platform side, Anthropic, Google and Microsoft all treat AI misuse as a security problem, not just a policy issue. Anthropic’s disruption of the Claude-driven espionage campaign fed directly into new monitoring and safeguards against similar abuse. Google’s AI Threat Tracker is building a public catalogue of how state-backed and financially motivated actors use AI across reconnaissance, exploitation and post-compromise stages. Microsoft’s agentic security guidance, while still evolving, at least names cross-prompt injection and agent containment as core design problems for the Windows ecosystem.
In our own security stack, AI runs on both sides of the line. The same technology that helps criminals localize scripts and industrialize lures also helps our protections spot subtle combinations of signals that only make sense in context, not as single keyword rules. That includes login and device linking flows that deviate from legitimate patterns, payment journeys that show telltale friction points and coercion cues, and video or audio that pairs polished, synthetic-sounding narration with high-pressure financial pitches and off-platform handoffs. In Q4, on devices where our video scam detection feature was enabled, we detected 159,378 unique deepfake scam video instances that matched this intersection of manipulated media and scam intent. The three featured stories in this report are case studies of how this plays out in the real world: deepfake-style media stitched into financial scams, AI-orchestrated romance and sextortion scripts, and messaging account takeovers that quietly gather the raw material for future AI misuse.
The common thread in most research on AI is that it has become part of the normal toolkit, not a special effect. Some campaigns, such as the Anthropic case and PROMPTFLUX, use AI inside the malware and command chain. Others, such as AI video scams, romance fraud and GhostPairing, use AI to shape what victims see and hear. On the defensive side, both cloud providers and endpoint protection are starting to respond in kind. For people and organizations, the practical rule is straightforward. Any tool that can act on your behalf or strongly shape what you see and hear now deserves the same caution as a high privilege application: less access, explicit approval for sensitive actions, and a healthy suspicion of any AI-shaped request that tries to move money, data or trust onto someone else’s schedule.
Privacy
This quarter also demonstrated how fragile privacy still is when convenience, third-party vendors and opaque data markets collide. Even as lawmakers tighten some rules and platforms announce new “privacy-preserving” architectures, our own telemetry and outside incidents point to the same pattern: data collected for one purpose keeps landing in the wrong hands, and the basic identifiers that anchor digital life, especially phone numbers and device IDs, remain sought after.
Data keeps leaking through the side doors

Two incidents captured the structural risk of outsourcing sensitive data handling, from identity verification to location tracking, to third parties. Discord disclosed that a breach at a customer-support vendor exposed around 70,000 government ID photos used for age verification, meaning a single helpdesk relationship turned into a trove of passport-level data.
In Europe, investigative reporters demonstrated how easy it is to buy precise mobile location trails from data brokers. A “free sample” dataset contained hundreds of millions of location points that could be tied back to individual EU officials simply by matching daily patterns to home and work addresses. Both cases illustrate the same trend: once data is collected for “support” or “advertising,” it can be repackaged, resold, and analyzed in ways users never anticipated.

The biggest privacy shock of the quarter came from WhatsApp’s contact discovery system. Researchers from the University of Vienna and SBA Research showed that by abusing an under-protected API, they could enumerate roughly 3.5 billion WhatsApp accounts, confirming whether a given phone number is registered and, for many users, retrieving profile photos and “about” text. Meta has now tightened rate limiting, and our own blog, The WhatsApp privacy scare you probably missed, walks through why “no evidence of abuse” is not the same as “no risk” and how a single phone number can be the thread that links old and new data leaks together.
Commercial spyware campaigns offered another stark reminder that once a device is compromised, theoretical privacy guarantees collapse. The Landfall spyware abused a zero-day in Samsung’s image processing to infect Galaxy phones via malicious pictures sent in WhatsApp, then silently accessed photos, messages, contacts, call logs, the microphone and precise location for months. At that level of compromise, encryption and consent banners offer little more than the appearance of protection.
Our telemetry: trackers stabilize while breaches continue to climb
On the tracking side, our Norton AntiTrack Windows telemetry shows that classic browser tracking and fingerprinting remain pervasive, but there are early signs of stabilization. The number of trackers and fingerprinting attempts detected per device on Windows flattened and even decreased in Q4 compared with earlier quarters, although overall volumes remain high. In practical terms, this means that the underlying tracking ecosystem has not gone away, even as some platforms talk about moving “beyond cookies.” Tools that actively block trackers and fingerprinting continue to do real work in the background.

Global privacy rules are drifting in different directions
Regulators did not stand still this quarter, but the net effect is patchwork rather than a clear global standard.
In the EU, the Commission’s proposed “Digital Omnibus” package would delay key “high-risk” AI rules until late 2027 and relax how companies can use sensitive data, including health and biometric information, for AI training under “legitimate interest.” Heated discussions are expected ahead as data access will need to be balanced with privacy needs.
India moved in the opposite direction by activating its Digital Personal Data Protection Rules 2025, which formalize a consent-centric regime with tighter rules on purpose limitation and breach notification for everyday apps and digital services. In the United States, the proposed Health Information Privacy Reform Act (HIPRA) would extend HIPAA-style protections to health and fitness apps that have historically lived in a legal grey zone, reflecting concern over the volume of sensitive health telemetry collected by wearables and wellness platforms.
At the state level, California continued to blaze new frontiers. The new “Opt Me Out Act” (AB 566) will require browsers to offer a built-in global opt-out signal by 2027, and websites subject to the California Consumer Privacy Act (CCPA) will have to honor that signal as a valid “do not sell or share” request. The California Privacy Protection Agency is also progressing separate work on deletion and opt-out platforms and child-focused privacy safeguards. Together, these moves suggest that browser-level privacy controls and automated opt-out signals are becoming a default expectation in key US states.
Big tech recalibrates privacy promises, but tracking lives on
Platform responses to these pressures have been mixed. Google effectively shut down most of its Privacy Sandbox initiative, acknowledging that the proposed replacements for third-party cookies saw limited adoption and heavy scrutiny. In practice, this means that conventional tracking cookies and related techniques will stick around longer than many anticipated, which matches what we continue to see in our AntiTrack telemetry.

At the same time, Google introduced its Private AI Compute architecture, a cloud environment that promises to run powerful Gemini models on user data inside hardware-isolated, encrypted enclaves that even Google cannot directly inspect. Apple, for its part, updated App Store Review Guidelines so that apps must explicitly tell users and obtain permission before sending personal data to third-party AI services.
These are meaningful steps, but they do not remove the core risk that sensitive data is leaving the device. They shift the conversation from “whether” to share data with cloud AI to “under what rules and guarantees”. As long as traditional web tracking also persists, we expect anti-tracking and anti-fingerprinting controls to remain critical scaffolding for any realistic privacy posture.
What this adds up to
Across all of these signals, the privacy story this quarter is less about a single spectacular leak and more about accumulation. Third-party vendors and data brokers expand the surface where sensitive information can escape. Messaging flaws and commercial spyware show how metadata and on-device compromise can undo the comfort of end-to-end encryption. Regulators are pushing in different directions, with some jurisdictions tightening controls while others are making AI training easier. Platforms are quietly backing away from ambitious tracking replacements while introducing new AI-centric data flows.
For our customers, that combination means privacy is no longer a static setting. It is a moving target that depends on how many third parties touch their data, how their devices are secured, and whether they run tools that can strip out trackers, flag breaches early and warn when their identifiers, from phone numbers to government IDs, are being reused in risky ways. Increasingly, that also includes services like Privacy Monitor Assistant, which automatically scans data broker sites for their personal information and requests its removal when it resurfaces.
Trust & Identity
Trust used to be about whether a website had a padlock or a caller knew your name. In Q4, it is increasingly about whether the person, service or document on the other side is even real. Identity is no longer just a target; it is the main attack surface. Our data and external research both show the same pattern: more identity-based fraud, more synthetic and stolen personas, and a fast-growing ecosystem of tools that claim to protect people from exactly that.
Identity fraud moves from edge case to default risk
Across industries, businesses report that fraud is increasingly identity-led rather than purely transactional. AuthenticID’s latest analysis of enterprise customers shows a 42 percent year-over-year surge in fake IDs and suspicious biometric matches, with overall fraud rates hitting 2.10 percent, the highest level in three years. In the same survey, nearly 70 percent of organizations said they were concerned about generative AI making identity fraud harder to spot, and more than two-thirds had already experienced workforce-related fraud in the last year, often involving compromised or fabricated identities rather than simple account misuse.
The 2025 Global Identity and Fraud report tells a similar story. Around 90 percent of businesses say they are worried about fraud, and almost 60 percent report higher fraud losses year on year, with identity theft, account takeover and payment fraud at the top of the list. Many have already increased their fraud prevention budgets and expect AI-generated deepfakes and synthetic identities to be major challenges by 2026. Consumers, however, are not reassured. A majority still feel uneasy about having more of their lives online, and there is a noticeable gap between the confidence businesses have in their own controls and the level of trust people actually report.
In short, identity is where attackers invest and where defenses are playing catch-up. That is exactly what we see in our own telemetry.
What our identity alerts say about real people
Our LifeLock data looks at consumer-focused identity events over time: new records, suspicious transactions, dark web exposure and scam-driven alerts, all converted into risk ratios so we can see what share of our customers are actually at risk during a given period.
In Q4 2025, several types of alerts climbed noticeably, the figures below show the quarter-over-quarter change in average monthly alerted users:
- Property-related record alerts (unauthorized filings): +252.27%
- Breach alerts (possibly data exposure, even when the source isn’t yet known): +221.72%
- Bank account activity alerts (suspicious activity beyond credit lines): +112.34%
We also introduced and started tracking a set of new alerts that monitor identity abuse deeper in the financial system. These include:
- Alerts on suspicious activity within credit-based accounts, such as credit cards or lines of credit.
- Alerts on new applications for installment loans, leases and retail cards opened in a customer’s name.
- Transaction-level alerts on credit cards and installment loans that flag unusual or high-risk payments.
These alerts do not just add volume. They move protection closer to the first moment where a stolen or synthetic identity turns into a concrete financial obligation: a new loan, a new lease, a new credit line, or a suspicious transaction. The rising risk ratios in these categories reinforce what external reports suggest: identity fraud is getting more layered, touching property records, deposits, credit instruments and scam-driven social engineering in parallel.

Data breach events accelerated sharply in Q4, both in frequency and scale (based on our product users). Monthly breach events rose from a low of 307 in January to a peak of 3k+ in November, before closing the year at 2,243 incidents in December, representing a +175.23% quarter-over-quarter increase in total breach events.
The volume of exposed data followed a similar trajectory: breached records climbed from 557k in July to more than 2.09 million records in December, driven by a sharp spike in November (1.4 million records). Overall, the amount of breached data increased +157.36% QoQ, underscoring not just more frequent breaches, but larger and more consequential exposure events as the quarter progressed.
To interpret the identity alert ratio correctly, it helps to separate volume from severity. The identity alert ratio shows the share of active users who received at least one alert of a given type in the period, including early warning signals as well as higher-impact events. Some alerts are low-friction indicators; others point to concrete exposure, such as an unrequested loan application, suspicious account activity, or confirmed data exposure. Taken together, the trend suggests identity-related issues are affecting a growing portion of our user base, not just a small unlucky subset.

Personal identifiers are becoming skeleton keys
The WhatsApp incident covered in our blog, “The WhatsApp privacy scare you probably missed”, was a clear reminder that a single identifier can unlock a lot more than people realize. The underlying lesson – and risk – is not limited to WhatsApp. Phone numbers and similar identifiers are increasingly treated as universal keys across messaging apps, banking, recovery flows and two-factor prompts. When they leak, attackers can combine them with old breach data, open profile information and cheap lookup tools to reconstruct profiles of who you are, where you live, who you talk to and which services you most likely use. These profiles make it much easier to craft believable phishing, SIM-swap attempts, fake support calls or password reset tricks that look “just personalized enough."
We see the same pattern in our LifeLock alerts when scam or phishing events spike at the same time as breach notifications and new financial applications. Your identity is usually not compromised in a single dramatic event. It is probed, enriched and reused over time, with each exposed data point making the next attack more convincing.
Two of this quarter’s featured stories highlight this trajectory at a personal level. The GhostPairing WhatsApp takeover piece shows how a single pairing code can silently add a “ghost” device to your account, turning your own chat history, voice notes, and contacts into tools for further impersonation. The AI-assisted romance scams story shows how photos, voice and social profiles can be stitched into complete synthetic partners that feel real to victims while they are being nudged toward verification pages, investment portals or intimate exchanges. Both cases build on the same idea: your identifiers and traces become the raw material for someone else’s script.
Synthetic professionals and ghost employees
It is not only individuals who are dealing with identity risk. Organizations are also being fooled by fake colleagues and professionals who exist only on paper and in pixels.
In the United States, several defendants pleaded guilty in a case that revealed how North Korean IT workers, operating under false names and using stolen or borrowed US identities, managed to secure remote jobs at hundreds of companies. They funneled salaries and back to North Korea, bypassing sanctions and giving access to corporate networks and codebases were being handled by people very different from the resumes on file.
On the services side, investigations into the freelance marketplace Fiverr exposed dozens of profiles that impersonated real UK solicitors using stolen Solicitors Regulation Authority credentials paired with AI-generated headshots. Some of these fake “lawyers” openly admitted to using AI tools to generate legal documents for clients, while regulators reported more than 1,400 scams involving impersonation during the year.
The common thread in these cases is not just that criminals are lying about who they are. It is that they are hijacking institutional trust signals: bar numbers, professional registries, corporate HR processes, video calls and polished headshots. When these markers are cheap to fake and hard to verify at scale, both individuals and organizations end up making high-stakes decisions based on identities that never really existed.
Again, our internal telemetry aligns with this. The uptick in new applications for credit products and leases under LifeLock monitoring is consistent with synthetic identities being used to open lines of credit or rent assets that will never be paid for.
Defenses are catching up, but trust is fragile
The defensive side of this picture is evolving quickly. Global investment in digital identity protection, fraud detection and financial crime technology continues to grow. Analyst reports on the identity theft protection market, for example, expect the global sector to rise from roughly 14.5 billion dollars in 2025 to more than 23 billion by 2029, with double-digit annual growth as more consumers pay for dedicated monitoring and restoration services.
In regulated industries like banking and insurance, digital identity protection is now treated as core infrastructure rather than a nice-to-have. Everest Group’s recent work on digital identity and financial crime notes that financial institutions are moving toward layered systems that combine identity verification, strong authentication and compliance automation, increasingly with AI and biometrics in the loop.
The challenge is that technology alone does not fix the trust gap. An Experian survey shows that more than 80 percent of people expect companies to take active steps to protect their identity and privacy, yet far fewer actually trust that those companies are doing enough. Many people want both stronger protection and less friction, which is a hard combination to deliver. When identity checks are too intrusive or wrongly block legitimate users, people quickly lose patience. When checks are invisible and permissive, fraud finds the gaps.
Our own experience echoes that tension. Adding new alert types makes it easier to spot the early stages of identity abuse, but it also increases how often we ask people to verify themselves. If every warning feels urgent, users tune out. If one critical alert looks like every marketing email, it gets ignored – our products work hard to balance these tensions to ensure people are alerted appropriately.
Where this leaves people
The trend in Trust & Identity this quarter is clear: more of the risk people face is expressed as identity events rather than isolated malware infections. Stolen or fabricated identities show up as new property records, breach notifications, suspicious deposits, new credit lines and scam-driven alerts in customers’ accounts. Businesses are ramping up their use of digital identity tools and AI-assisted fraud detection, yet still struggle to keep pace with criminals who can cheaply clone faces, voices, documents and resumes.
For individuals, the practical implications are simple but uncomfortable:
- Your phone number, email, government IDs and professional credentials are treated as universal keys across many systems. When one of them leaks, it is not a minor nuisance; it is a long-term risk factor.
- “Verified” is not binary anymore. A bar number, a corporate email or a polished profile can be perfectly fake, especially when AI does the heavy lifting.
- Digital identity protection is becoming something you must maintain continuously, not a one-time fix after a breach notice.
In the remainder of this report, we will show how these abstract trends play out in concrete stories, from account takeovers that add ghost devices to your chats, to AI-orchestrated romance scripts that wrap themselves around your emotions, finances and documents. Taken together, these stories show that the real battleground is not a login page; it is the evolving relationship between who you think you are online and who your data claims you are.
Financial Wellness
Financial scams follow money habits. They show up wherever people check bank balances, apply for loans, move money between accounts, or try to stretch a paycheck. Until now, our view of that space came mostly from classic cyber telemetry, for example, banking phishing, fake investment portals and payment fraud.
In 2025, Gen added MoneyLion to its portfolio, empowering consumers to confidently manage and protect their digital and financial lives. MoneyLion democratizes access to financial tools like credit-building, savings, and personalized advice through a single mobile app. With MoneyLion now part of Gen, we have a new vantage point on how financial risk appears in real life, at the exact moment when people are choosing products and making decisions about their money.
Across Q4, four patterns stand out. They do not look like traditional “banking Trojans,” but together they show how fraud is threading itself into everyday financial behavior.
- Stolen identities are pushed into onboarding, not just into logins
In November 2025, MoneyLion’s data science team flagged a burst of new account applications that looked legitimate on the surface but failed deeper checks. Follow-up analysis showed three constants:
- The identity data looked real, not invented, and was good enough to pass basic Know Your Customer (KYC) processes.
- Deeper checks on devices, IP addresses and declared locations did not line up, with discrepancies between IP and claimed home address or very recent sightings of those devices.
- A clear clustering around the same external bank as the linked account.
In other words, this was not a random spray of fake applications. It was a low-volume, high-quality push using stolen identity kits and existing bank accounts to try to pass as “perfect new customers” for a banking-style product. MoneyLion’s layered controls, including step-up identity verification backed by risk scoring, stopped all of these attempts, so there were no recorded losses from this wave.
What we learned from this pattern is important. It repeats behavior seen earlier against other US institutions, such as Citizens Bank (2023) and Varo Bank (2024): once attackers have clean identity and account data, the next logical step is to walk that data through fintech and credit products, not just use it for brute force logins. Fraud volumes stay deliberately low to avoid bot and velocity checks, which means the quality of each identity matters more than the number of attempts.
- Tipping and post-settlement edits are an attractive fraud surface
A second pattern involves how cards handle tips and post-settlement changes. MoneyLion has been fighting a recurring fraud method where a fraudster:
- Acquires or controls a bank account with a small balance, for example, five dollars.
- Seeks authorization for a tiny transaction, often one dollar, which easily passes.
- After authorization, the settlement amount dramatically increased, for example, to fifty thousand dollars.
Because the authorization has already gone through, the system often treats the later settlement as approved and pushes the burden of recovery into slow dispute processes. In Q4, two incidents illustrate the impact:
- In October 2025, one actor used this method to create potential losses of around 33,000 dollars.
- In November 2025, a single user caused about 16,000 dollars of loss in a single day while on a cruise.
On a monthly basis, forced post-settlement abuse still represents a small share of overall revenue, measured in a few basis points. This approach exploits a feature originally designed for hospitality and tipping, where post-transaction adjustments are normal. If platforms let new or lightly vetted merchants perform large modifications without extra checks, a handful of actors can repeatedly convert one-dollar tests into five-figure hits.

3. Microtransaction disputes blur the line between customer and fraudster
Microtransaction abuse shows up at the intersection of creators, social platforms and financial apps. The basic approach looks like this:
- A user connects a bank account to a money app,
- They send a high volume of small tips or purchases to a creator on platforms such as TikTok or Meta,
- Once the funds have cleared, they dispute all the transactions, claiming they were unauthorized or that goods were not received.
Sometimes the creator is an unwitting victim, seeing their income clawed back. In other cases, there is a collusion between the account holder and the creator, who split the proceeds. Either way, the economic incentive for banks and processors is often to accept the disputes, because fighting thousands of small chargebacks costs more than writing them off.
MoneyLion’s view is that, since January 2025, the use of microtransactions has been rising across gaming and social platforms, and in some months, dispute volumes linked to this behavior can balloon into the 20,000 to 30,000 dollar range if all those transactions are auto-approved and later contested. That is real money at the scale of a single merchant or creator ecosystem.
From a financial wellness perspective, this is where first-party fraud stops being a rounding error. People are being coached online to treat chargeback abuse as a “trick” against big platforms, not as fraud that raises costs for everyone. At the same time, some genuine consumers, including families managing kids’ in-app spending, are nudged into overspending through manipulative in-app purchase design, then fall back on disputes when bills arrive. Both sides push more friction and higher fees into the system.

4. Phishing and account takeover still drive the biggest direct losses
The most traditional pattern in MoneyLion’s data is also the most expensive: SMS phishing that leads to full account takeover. The recent waves share a familiar anatomy:
- Attackers obtain phone numbers and partial identity data for real customers, often from breaches at other financial institutions or data brokers.
- They send localized SMS messages that mimic MoneyLion alerts and direct people to a fake login page,
- The fake page collects credentials and one-time passcodes, which are then replayed immediately against the real service.
This method has the potential to cause substantial losses, even when baseline verification and risk systems block many attempts. In response, MoneyLion applies a layered approach that escalates checks at the moments of highest risk, including targeted re-verification when activity deviates from legitimate patterns. The tradeoff is friction; stronger checks can occasionally surface for legitimate customers. Looking ahead, that balance will get harder as AI-assisted fraud attempts improve at spoofing identity signals, increasing the need for resilient liveness and deepfake-aware verification. Across incidents, device profiling also showed consistent environment indicators, including repeated locale and input-setting artifacts, but these signals should be treated as investigative context, not attribution.
These campaigns also tie back to the broader identity picture in this report. The same compromised phone numbers and partial profiles that make WhatsApp or banking phishing so effective are often sourced from earlier breaches or data sales. Once an attacker can get a customer to hand over a temporary code, every other safeguard has to be fast and precise enough to recognise the session as abnormal in real time.
MoneyLion’s answer has been to invest in stronger multi-factor orchestration and alternative authentication methods that are harder to replay through phishing proxies, for example, device-bound approvals and more aggressive step-up checks when fresh devices or locations appear.
What this adds up to for financial wellness
Taken together, these patterns show financial fraud moving closer to everyday behavior:
- Stolen identity data is being spent first on high-quality onboarding attempts, not only on brute force logins.
- Legacy payment features such as tipping and post-settlement edits are being bent into tools for outsized paydays.
- Microtransactions and creator economies create fertile ground for chargeback abuse and collusive “friendly” fraud.
- Classic phishing and account takeover remain a primary driver of direct dollar losses, powered by recycled identity data and convincing lures.
For Gen, MoneyLion’s signals let us see those risks at the moment they touch real customers: when an application is blocked before funds move, when a one-dollar test suddenly settles as a five-figure charge, when dozens of micro tips are quietly disputed, or when an SMS lure turns into a drained account. That is where financial wellness is decided in practice, not in a spreadsheet of average balances, but in the repeated small moments where people either keep control of their money or quietly lose it.
Patrik Holop, Associate Manager, Data Science
Luis Corrons, Security Evangelist
Featured Stories: how current threats play out in real life
The numbers in this report show how large the problem has become. These stories show how threats come to life in our daily lives. Each of the following featured stories zooms in on a specific campaign or technique we investigated in depth, from account takeovers spreading through trusted apps to AI-shaped scams and media manipulation designed to make people hand over control of their device or account with a single tap or scan. The common thread is not a particular piece of malware, but a pattern of behavior. Attackers exploit familiar brands, everyday tools and believable scenarios, then use automation and AI to scale those tricks across millions of people. These cases reveal where attackers are concentrating their efforts, what defenses have proved effective, and which risks are likely to accelerate in the year ahead.
How AI made video scams personal, one viewer at a time

We live in a digital feed that never sleeps. Within a few years, our habits have shifted from group chats and photos to an always-on stream of short and long videos, social posts and streaming. YouTube has become a fixture of living-room TV time in many countries, and is no longer just a mobile pastime. In August 2025, it captured 13.4% of all U.S. television viewing, its second-highest share on record, a clear sign of how mainstream video creators have become stars in homes across the country. In the UK, regulators report that 41% of YouTube viewing happens on TV sets, another sign that social video has migrated from phones to sofas. Streaming overall keeps expanding its share of home TV use, and that shift brings more advertising and recommendation engines into connected TV, shaping what people see next.
At the same time, AI has quietly carved out a key role in video production. AI is no longer a novelty technology; it is often the most important tool for creators and marketers. Adobe’s latest global survey found 86% of creators are using generative AI somewhere in their workflow, and most say it allows them to create content they never could before. Analysts forecast that a growing share of outbound marketing messages will be synthetically generated, which means persuasion at scale will arrive with fewer human bottlenecks. Put simply, a larger portion of the videos that reach people will have AI fingerprints on them, whether that is to clean up audio, generate B-roll, clone a voice, or stitch a composite face or content that combines all of the above.
Criminals always follow the audience and the tooling. The same advances that let a small team produce studio-grade content also let a small criminal crew fabricate a convincing plea for money or a fake endorsement. That does not always mean chasing virality. It often means crafting clips that look and sound intimate enough to convert a small group of potential victims.
What the data shows
In the second half of 2025, we added on-device video scam detection to our Windows protection. The goal is simple: focus on content intending harm, not just novelty, and flag videos where manipulated media and scam intent come together.
The feature is new and not on every device yet, so results reflect where we have coverage, for example, which platforms and countries have more users with it. To keep the picture fair, we show shares by platform and country, not absolute numbers.
YouTube dominates the overall share of blocked AI scam videos, followed by Facebook as a distant second, and X in third place. The most danger comes from longer, engaged watches as opposed to short snippets. This lines up with where attention sits today, many people consume creator video in lean-back sessions on TVs and PCs, which gives scammers longer windows to persuade.

In our Q4 telemetry from Windows devices where this feature was enabled, the distribution of blocked AI scam videos was highly concentrated on a small set of platforms. YouTube accounted for 64.8% of blocks, followed by Facebook at 11.1%. Other platforms each represented a small share in this dataset, including X (0.5%), TikTok (0.4%), Vimeo (0.2%), and Instagram (0.1%). These shares should be read as where blocks occurred within covered Windows PCs, not as a ranking of overall platform risk across the internet. Mobile-first viewing and app-only behaviors are not captured here, which can materially underrepresent platforms that are primarily consumed on phones.
This is not a verdict on those platforms at large; it is a reflection of where our customers actually encountered scam videos when on their PC. The dominance of YouTube likely reflects where people spend their video hours, and how recommendations string clips together in longer sessions on TVs and PCs.
By country share of blocked AI scam videos, the United States leads, followed by Australia, the United Kingdom, Canada and Germany. All other countries together make up a bigger share than any single country. We treat this as a ranking of where our protection intercepted scams, not a map of overall risk, because results still depend on where the feature is installed and most widely used. As adoption grows, we will add a per-user risk view, users with at least one block divided by active users with the feature.
We stopped most attacks in real time during playback. These scams appeared during playback – while you watched the video – not in downloads or attachments.
Most scam videos blocked were general scam content, with financial and cryptocurrency lures close behind. In other words, AI manipulation is not the entire scam but a working part of a larger money play. We are continuously working on refining the automatic classification of these scams to have more granular insights.
AI itself is neutral. Danger comes from intent
There is far more AI-generated or AI-edited video in the wild than outright scam video. That is expected and healthy. The presence of AI in a clip is not a useful risk signal on its own. Much of what AI does is benign or even helpful, for example, noise reduction, captioning, translation, and providing stock footage. The risk appears when AI capability is paired with a request for money, data, an urgent ask or an off-platform handoff – when AI and the scam meet.
This is why our telemetry tracks only the intersection: AI-made or AI-assisted media that is also a scam. In Q4, on devices where the feature was enabled, we detected 159,378 unique deepfake video scam attempts within this intersection, most of which blended into ordinary content, often voice-led clips embedded in otherwise normal videos. In the smaller, clearly labeled set, the top themes are money-related, finance and crypto. As AI becomes the default ingredient in media production, the right way to judge risk is by intent and behavior, not by whether AI touched the file.

What the signals suggest:
- Scammers follow the audience. Wherever people spend time watching videos, that is where scammers go. If they can slip their content into normal playlists on the biggest platforms, they do not need huge view counts. They only need to look believable enough for a small percentage of viewers to fall for it.
- Finance remains the North Star. Our detections show a strong concentration of investment, trading, and giveaway content. This mirrors broader shifts in digital media spend, where AI-targeted ads and calls to action are flooding connected TV and social video. The same tools that amplify legitimate campaigns are also lowering the barrier for fraud.
- AI is normal, so AI misuse is normal too. With creator-level adoption this widespread, it would be surprising if criminal crews were not using the same tools. The difference is intent. Our detections focus specifically on AI-touched media that is also a scam, not every video that happens to include AI.
Why this matters for consumer harm
The risk is not just the astonishing fake. It is the ordinary-looking clip that blends seamlessly into your subscriptions. A cloned voice reading a script over stock footage is enough to move money when it is seen next to a trusted logo or comes from a familiar face or a creator you already follow. This represents a very different problem from yesterday’s focus on spotting viral deepfakes.
What people can do right now
Beyond using deepfake protection products, here’s a list of actions that can help protect you across platforms:
- Do not trust a clip to stand on its own. If it asks you to move money, pause and switch channels. Verify through a known contact method or an official website you type in yourself.
- Ignore urgency and exclusivity. Timers, limited slots, or “only for subscribers” offers are classic pressure cues for investment and giveaway scams.
- Check audio more than visuals. Many scams hinge on cloned or stitched audio. Listen for pacing that never breaks, awkward breaths, and mismatched tone or echoes.
- Treat QR codes and pinned comments as links from strangers. Search for the destination yourself instead of scanning codes and clicking links.
- Report and move on. If you spot a scam, report it. Platform reporting helps downrank bad content faster and reduces the chance others will see it.
Criminals do not need a million views; they need the right viewer at the right time. Audio is the quiet driver. Cloned or heavily edited narrations can project authority or urgency, while ordinary visuals avoid attention. Together, this lowers skepticism, exploits fluency bias and presents the message as guidance rather than persuasion. A clip labeled as an interview or a tutorial feels safe, especially if the voice sounds familiar, even if the visuals are generic or the proof is just an overlay. That dynamic helps explain why almost all intercepts happen during playback, showing where protection is most critical.
AI is now normal in content production, and there is likely no turning back from this – rather, the inclusion of AI in the media we consume will only increase. The question is not whether AI is in the loop; it is whether the clip uses AI to push a fraudulent action. This is our first share-based read on AI-made or AI-assisted scam videos across platforms and countries, and it lets us set a baseline and move from anecdotes to patterns. You will see these figures evolve as adoption grows, and we will keep calling out two things clearly in future quarters: where people are actually getting hit and which lures are costing them money.
Love at machine speed
At one time, romance scams were clumsy cons, full of spelling mistakes and obvious lies. That changed the moment AI learned to play humans. Today, a scam can arrive in your inbox as a fully formed character, complete with a believable face, a convincing voice, a life history, and a pattern of small, attentive gestures that feel, to the target, exactly like falling in love.

Imagine the scene: You match with someone on a dating app. Their profile pictures are glossy but intimate, a mix of sunny holidays, a coffee shop selfie, a dog framed by a living room bookcase. Their messages are clever – they remember details about your weekend that you barely mentioned, and they send sweet voice notes that sound warm and somehow familiar. Over a few weeks, the conversation deepens, the stranger remembers that you prefer black coffee, asks about the scar on your knee, and sends a private clip where they say your name. It all feels painfully real. Unfortunately, what you are experiencing is not human love; it is an engineered performance, orchestrated by AI tools that fill in every gap where human deception used to stumble.
What makes today’s romance scams so effective is not just the synthetic photos or the cloned voices. It is the emotional precision. In psychology, you sometimes see this described as a “dark empath,” someone who can read your cues and mirror your emotions but uses that insight instrumentally to manipulate you. AI helps scammers mimic that behavior at scale, remembering tiny details, matching tone, and timing warmth or vulnerability so it feels like real care, right up until the ask arrives.
Although we often speak about this type of con in relation to dating apps, the same playbook runs across mainstream social media. You can meet an almost perfect partner on Facebook, trade DMs on Instagram, stumble into a flirty TikTok live, or move a chat from X into a private channel, and the mechanics do not change. AI supplies the photos, voice notes, and short videos. The account looks lived in with a few reposts and friendly comments, and the conversation builds the same rhythm of attention and trust. The pivot comes later – a request for money, a verification link, a parcel fee, or an intimate exchange that later turns into extortion. Regardless of the platform, the script is the same.
Different stories, same end goal
Before naming specific schemes, it helps to recognize the pattern. First, the relationship feels real. The details stay consistent across photos, voice notes, short videos, and even documents. Then the story branches. Some paths guide you toward “safe” investments. Others steer you to verification pages that harvest IDs or payment information. Some rely on urgency, using a voice that sounds like a loved one, while others turn intimacy into pressure and threats. These flavors often overlap. Criminals pivot as soon as you hesitate. The goal is always the same: turn trust and emotion into money or data. Below are the most common variants, with plain explanations and simple examples you can recognize in the wild.
- The classic romance turned investment trap – pig butchering
What starts as a genuine emotional bond gradually shifts into financial manipulation. The scammer, posing as a caring partner, introduces the victim to a safe investment or crypto opportunity. The app or website looks legitimate, shows fake profits, and once the victim invests more, everything disappears.
Example: After weeks of flirting, your new partner persuades you to test an exclusive trading platform. You top up, and the site disappears with your money.
- Deepfake lover or influencer impersonation
Scammers use AI-generated photos, videos, and voice clones to impersonate attractive strangers or even celebrities. These fakes feel personal and intimate, convincing victims they are speaking to someone real.
Example: A short clip arrives with a familiar face saying your name, inviting you to a private chat that leads to paid perks.
- Sextortion through synthetic intimacy
Here, the scam escalates quickly. After some flirtation or exchange of images, the victim receives a doctored photo or AI generated video that appears to show them in a compromising situation. The criminal then demands payment to keep it private.
Example: A blurred video appears to show your face on someone else’s body, followed by a threat: pay or I will send this to your contacts.
- Reciprocal sextortion, AI-assisted trust building
The scammer sends AI-created nudes first to lower defenses, promises reciprocity, then pressures the victim to send real images. Once the victim complies, the scammer extorts them using those real photos.
Example: Someone on a dating app offers to prove trust by sending a nude; it looks real thanks to AI. You send one back, then receive demands to pay or it will be shared with your contacts.
- Voice cloned loved one in distress
A few seconds of audio are enough to create a believable call from a partner or family member, claiming to be in an emergency and needing immediate help.
Example: A frantic call that sounds exactly like your sibling, claiming they are stuck overseas and need cash for a hotel or medical bill.
- Fake verification or dating platform scams
The lure is safety and trust. The scammer asks you to verify your identity on a separate site, which actually collects payment details and personal information.
Example: To keep us both safe, please verify your profile here. The site looks official, but it is a front to steal your card data.
What falling for these scams actually costs
The damage is wider than a single transfer. It usually includes direct financial loss, ongoing blackmail or harassment, identity theft from “verification” flows, account takeovers, and a long tail of recovery costs and stress. At the population level, the losses are substantial. The FBI’s Internet Crime Complaint Center recorded more than 16 billion dollars in reported cybercrime losses in 2024, a 33 percent jump over 2023, and notes that the true figure is likely higher. The U.S. Federal Trade Commission reported 12.5 billion dollars in total fraud losses in 2024, up 25 percent year over year, and highlighted that scams starting online account for billions of those losses. Within that, romance scams remain among the costliest imposter scams, with 2023 reports alone totaling 1.14 billion dollars and a median loss of 2,000 dollars per person. Older adults are hit hard: an FBI report tallied more than 3.4 billion dollars stolen from Americans 60 and over in the latest year, with thousands losing over 100,000 dollars each.
Loss is not only about money. Sextortion cases bring sustained threats, reputational harm, and doxxing attempts. Identity documents uploaded to fake “safety checks” can be reused for new accounts and loans. Voice-clone emergencies corrode trust inside families and, in extreme cases, trigger dangerous real-world confrontations. Finally, victims are often targeted again by “recovery” scammers who promise refunds for a fee.
What our telemetry shows in 2025
Many attacks that start with a romance setup pivot quickly into other scams, for example, investment portals, sextortion, refund or parcel fees, or fake verification that harvests IDs. We tag an event as “dating-scam” only when the lure or page clearly uses a romance or dating pretext. If a conversation shifts early into investments, parcel fees, refund tricks, fake verification, or generic phishing, it is blocked under other categories. In other words, the dating-scam data understates the true volume of attacks that begin with a romantic hook.
In recent months, we have seen a steady rise in romance-themed scam attempts in our telemetry, as reflected in the below graph. There were over 17 million dating scam attacks blocked in Q4 2025, an over 19% increase from the same period in 2024.

How to stay safe, without killing the mood
You may be thinking: Do careful people still fall for these types of scams? The answer is a resounding yes. These scams target judgment, not knowledge. When photos, voice notes, videos, and paperwork all line up, our brains overweigh coherence. AI is excellent at making everything feel seamless. The fix is not paranoia; it is adding small moments of friction where reality is forced to show up.
First, treat any money request inside a romantic context as high risk. Pause between the message and any payment, even if the amount is small or framed as a refundable fee. If you need to verify a person, do it with a live, unpredictable check that is hard to pre-record. Ask them to join a video, write a phrase you choose on paper, hold it with both hands visible, then pan to a nearby window or street sign. If the connection is always too poor for that, keep your guard up. When someone invites you to invest, only use platforms you already trust and reach on your own, not links sent in chat. Refuse off-platform verification pages. For parcels, type the courier website yourself, and do not click tracking links sent in DMs.
Set a personal rule not to send explicit content. Scammers now send convincing AI nudes first to lower defenses. The goal is to make you send something real, then quickly change the conversation to threats. If you already shared something intimate and are being extorted, stop contact, save evidence, report the account on the platform, and file a report with local authorities. Payment rarely ends harassment and often increases demands. Tell a trusted friend so shame cannot be used against you.
Real love online exists. That is exactly why these scams work. You do not need to switch off your heart to stay safe. Just keep three habits at the top of your mind: pause before paying, verify on a channel you control, and refuse high-risk moves on someone else’s schedule. These small steps are enough to break most scripts.
The code that steals your WhatsApp
How device linking enables takeover, persistence, and rapid spread across your contacts
You are chopping vegetables when your phone buzzes. A WhatsApp from a friend you trust.
“Hey, I just found your photo!” The preview shows what looks like a Facebook post. You tap.

The page looks familiar, clean and a bit boring, which is exactly the point. There is a Facebook login box, then a blue button that says you need to verify to view the photo. When you press it, the next screen shows a six-digit code and a short instruction telling you to open WhatsApp and enter it to confirm the login.
You want to get back to dinner, so you do what the page tells you. A second later, nothing obvious happens, so you put the phone down.
What actually happened is the whole story.
What the attacker just got
That code was not a Facebook thing. It came from WhatsApp’s own device linking feature and it belonged to the attacker’s browser. When you entered it in WhatsApp, you quietly granted the attacker a live session to your account. They can now read your chats, see new messages arrive in real time, and write to your contacts as if they were you. No SIM swap, no code theft, no forced logout on your phone. It feels invisible because WhatsApp treats that browser as one of your own devices.
How the trick actually works
Most people think linking WhatsApp Web only works by scanning a QR code. WhatsApp actually has a second method for pairing devices, one almost nobody knows about: linking a device using only your phone number and a numeric code sent inside the app. It was designed as a convenience feature, but it looks so similar to a standard security confirmation that attackers have turned it into the perfect social engineering tool.
Here is what really happens in the background. When you land on the fake Facebook page, the site forwards your phone number to the real WhatsApp Web linking flow. WhatsApp then generates a pairing code that the rightful user must enter in the app to approve the new device. Instead of keeping that code on their own screen, the attacker simply prints it on the fake page and tells you that you must “confirm this login in WhatsApp to view the photo.” To most people, this looks like a normal verification step, the kind you see every time you log in somewhere.

The moment you enter that number into WhatsApp, you are not verifying anything related to Facebook. You are authorizing a new linked device and giving the attacker’s browser the same level of access as your own laptop. That is why this takeover feels so clean. There is no password theft, no malware, no obvious alert. The attacker just slips into your account and quietly becomes you.
Technically, the same abuse is also possible with QR-based linking. In practice, however, that is much harder to drive at scale when both the browser and WhatsApp run on the same phone. Asking people to deal with a numeric code on screen is far more realistic, which is why the campaigns we see rely on the phone number plus pairing code flow.
For a deeper technical walkthrough of the device-linking abuse, including additional examples and infrastructure details, see our supporting blog post, “GhostPairing Attacks: from phone number to full access: abusing WhatsApp’s device linking.”
How the snowball forms
While you finish dinner, the attacker gets busy. Your family group receives the same short message and link. So do your school chats, your work chats, and the neighbor who checks your dog. The words feel like you wrote them, short and casual, so people click. Each new victim goes through the same “verification” step, entering their own pairing code in WhatsApp and linking a new attacker-controlled browser. Each account becomes another megaphone. That is why this spreads fast. It is not a random number texting you. It is a friend.
If anyone reports you for spam, WhatsApp may temporarily limit your account. That helps, but it does not remove the attacker’s linked browser. When the limit expires, the session is still there unless you manually revoke it. Persistence is part of the playbook.
Why this technique matters
End-to-end encryption protects messages in transit, but device linking changes the threat model. A single scan or code approval gives the attacker the same view of your chats that you see on your laptop. Once inside, they do not need to rush. They can watch, learn who your closest contacts are, and wait for the perfect moment. A voice note you recorded last week becomes training data for a voice clone. A holiday photo becomes the backdrop of a payment request that looks and sounds exactly like you.
We refer to this pattern as a GhostPairing Attack. The attacker’s browser becomes a kind of ghost device on your account, present in every conversation, even though you never see it on your phone.
The scam starts as a two-minute trick. It evolves into slow, patient social engineering supported by the kind of personal data only a compromised WhatsApp account can provide.
Inside the kit
Everything about this flow suggests a packaged kit. A short lure localized for a specific country. A generic Facebook skin. A reusable linking step that connects directly to WhatsApp’s legitimate device pairing logic. The domains are disposable and interchangeable, from photobox[.]life to yourphoto[.]world to postsphoto[.]life. Tomorrow, there will be new ones with the same HTML and a different logo. This is production line fraud.
What to do right now
- Open WhatsApp, go to Settings, Linked devices, and log out of any device you do not recognize. If you see a desktop browser you never use, remove it.
- Turn on Two-Step Verification in WhatsApp and set a unique 6-digit PIN. Add a recovery email so you can reset the PIN if needed.
- Treat any code or QR that asks you to link WhatsApp as sensitive. If a website says you must link WhatsApp to view a photo or parcel, close the tab.
- Share a simple warning in your family or school groups. People listen when it comes from someone they know. A single screenshot of the lure plus the linked devices instructions is often enough to stop the chain.
What platforms can fix
This attack succeeds because pairing is too easy to misdirect and too sticky once granted. Three changes would reduce the impact significantly.
• Show a clear warning before linking that names the browser, location, and context.
• Rate limit linking attempts by browser fingerprint and region to stop mass campaigns.
• Auto-unlink unknown sessions when an account is rate-limited for spam.
The takeaway
What makes this scam dangerous is not the message or even the fake page. It is how invisible the compromise is. One quick scan links your WhatsApp to someone else’s browser, no malware required. From there, every private chat becomes an open door, every trusted contact a new target. The same mechanism could be reused anywhere device linking exists, from messaging apps to productivity tools.
The moment you approve the link, your account becomes a broadcast channel and a data source for future pressure. The defense is simple once you know it exists. Check linked devices, remove strangers, and think twice before entering a code or scanning a QR that is not inside the official WhatsApp app.
Luis Corrons, Security Evangelist
Martin Chlumecký, Malware Researcher
Patrik Holop, Associate Manager, Data Science
In Closing
As we close this report, one thing is clear: the threat landscape is not chaotic; it is consistent. The tools and brands may change, but the core playbook does not. Criminals lean on trust, routine and speed. They show up in the same browser where you watch tutorials, in the same chats where you talk to family, in the same apps where you check your balance. The most damaging attacks are the ones that feel like everyday tasks until one small decision goes wrong.
Across the telemetry and examples in this report, there are three main threads. First, many attacks only work if people do the last step themselves: installing the file, scanning the QR code, approving the pairing, and entering the code. Second, AI has become part of the plumbing on both sides, quietly shaping scams and defenses rather than acting as a headline feature. Third, the harm is increasingly counted in identities and money, fake partners, fake professionals, bad loans, drained accounts and long recovery journeys, not just in machines cleaned and files restored.
The good news is that the same patterns that make these attacks scalable also give us repeatable ways to break them. Small moments of friction, clearer warnings around device linking, stronger defaults for privacy and tracking, better authentication flows, and protections that watch what content is trying to make you do, not only where it came from. Our own data from browsers, devices, identity alerts and day-to-day money decisions shows that when those safeguards are in place, many of the “Scam Yourself” tricks simply fail to land.
None of this removes the need for skepticism. There will always be a new lure, a fresh logo, a more polished voice or video. But you do not need to chase every novelty. If you treat rushed requests, unexpected verifications and off-platform handoffs as high risk by default, you already cut off a large share of what we see here. A short pause, a second channel to verify, and a habit of saying no to surprise links are still some of the most effective defenses most people have.
Thank you for taking the time to explore this report. The work behind it comes from researchers, analysts, engineers and writers across Gen who spend their days trying to translate complex signals into something useful for real people. We hope it helps you understand not just what criminals are doing, but how to stay one step ahead of them in the places where you actually live your digital life.

Download the Q4/2025 Threat Report Key Highlights
Visit our Glossary and Taxonomy for clear definitions and insights into how we classify today’s cyberthreats.
