Gen Q3/2025 Threat Report


Foreword
The third quarter of 2025 confirmed what many of us have felt all year: the center of gravity in cybercrime is people. Criminals are not only trying to break into devices. They are working to walk users into traps that look and feel legitimate, at scale and with speed. Generative AI has lowered the cost of persuasion and raised the polish of deception. The result is a threat landscape where a convincing message, a nicely branded web page, or a friendly voice can be as dangerous as a zero-day exploit.
The patterns we highlighted earlier this year have become more pronounced. “Scam-Yourself” techniques continue to spread across everyday channels like SMS, email, and social media. Phishing moved further from crude copies to what we call VibeScams, where the trick is less in the code and more in the feeling a site gives you at first glance. Attackers are also blending formats. A text message can point to a look-alike login, which then routes a victim to a live chat or voice follow-up. The route is simple. The coordination behind it is criminal.
Defenders are adapting too. Our Gen Threat Labs work on both sides of the problem: understanding the criminal playbook and delivering protection that helps real people in real time. On any given week, that can mean reverse engineering a ransomware family to help victims recover their files without paying, training models to spot hidden scam patterns inside inboxes, or publishing practical guidance that families can act on in minutes. We believe research must translate into impact.
What you will find in this report reflects that approach. Our Feature Stories highlight real-world examples of what we see, the first of which examines VibeScams, the rise of AI-built phishing factories, and why “looking right” for two seconds is now enough to steal money or credentials. The second story zooms in on SMS – text messages remain one of the most universal and most exploited channels, and this quarter brought fresh tactics that pair automation with human-style tone and timing. The third story covers a new decryptor release and what it revealed about attacker tradecraft, economics, and mistakes. It is a reminder that ransomware is not only about prevention. It is also about giving victims a way back when the worst has already happened.
Beyond the Feature Stories, the Threat Landscape connects the dots from first touch to final loss. A sponsored link leads to a look-alike page, an SMS nudges the next click, and an inbox “verification” finishes the job. We document these handoffs with telemetry and case studies, highlight malvertising as a frequent starting point, and show the growing link between social browsing and off-platform fraud. Where numbers can be confusing, we add short notes on definitions and scope.
We also track social-to-commerce abuse, where “shop now” and creator tagging push users to off-platform checkout traps; the growing role of AI in the loop, with voice clones and live-chat assistants paired with SMS or email to close the scam; and shifts in payment rails toward instant transfers and refund bait that compress the recovery window.
Thank you to everyone who contributed, and to every reader using this report to protect families and teams. Criminals trade on trust. We work to deny that trade, click by click.
Read on, stay skeptical, and keep the advantage.
Luis Corrons, Security Evangelist (aka Threat Whisperer)
Threat Landscape Highlights: Q3/2025
In Q3 2025, cybersecurity threats continued to evolve across mobile and desktop platforms, with mobile scams and adware dominating the landscape. Mobile users were particularly targeted by Scam-Yourself Attacks and HiddenAds adware disguised as fake apps, often distributed through official stores. On desktops, Remote Access Trojans like Wincir remained active, while sextortion, gambling scams, and TechScam campaigns persisted in various regions.
Beyond device-level threats, data breaches rose in frequency, with financial data remaining an important concern, led by payday loan misuse, suspicious bank activity, and emerging web skimming and invoice scams.
Security Threats & Scams
Q3 highlighted how closely security threats and scams are now intertwined, evolving together across mobile, desktop, and beyond.
Adware shifted from desktop to mobile this quarter. After last quarter’s desktop spike and the return of Dealply, we saw a clear increase on Android. The number of blocked adware attacks rose by 77% quarter over quarter on mobile, driven mainly by HiddenAds-style apps that use obfuscation and hide after install, often posing as simple games. A notable share came through the official Google Play Store, which helps explain the growth, since people trust the store and many devices still run without active protection.

On desktop, Remote Access Trojans (RATs) remained persistent with a 10.67% Q/Q increase in risk ratio. The Wincir-driven rise from last quarter continued after a mid-year lull, and we saw noticeable RAT spikes in regions like Chile (+77.89%), Spain (+43.87%) and Brazil (+40.75%).
Some attacks now travel through trusted update channels. In August, criminals briefly slipped a bad update into a popular developer tool (Nx) that tried to copy passwords and other secrets from the computer. The takeaway is simple. Even software from familiar places can be harmful if a bad update runs. Basics still help: keep automatic updates on, install fewer add-ons, stick to well-known publishers, and if something looks odd, change your passwords and sign out of connected apps.
These risks are not abstract; they spill into daily life when critical systems go down. We are more dependent on connected systems than ever, which makes outages and attacks felt in the real world. Two Q3 examples show this clearly. Jaguar Land Rover’s UK plants paused production for more than three weeks after a cyberattack, with officials now assessing the wider supply chain fallout. And on September 20, a cyberattack on Collins Aerospace’s MUSE check-in and boarding software disrupted operations at major European airports including Heathrow, Brussels, Berlin, and Dublin, forcing manual check-ins and causing delays and cancellations.
Scam-Yourself Attacks, a threat trend we have tracked in previous quarters on web and desktop, clearly extended to mobile in Q3. On Android and iOS we observed a rise in Fake Scans, a type of Scam-Yourself Attack that imitates system warnings, claiming malware was found and steering users to click malicious links. Today's mobile threats remain mostly scam-driven rather than exploit-driven.
Sextortion activity clustered in August, reaching levels similar to previous peaks. Gambling scams also flared briefly, luring users to place bets then refusing payouts, often baiting victims with URLs containing words like "luck". TechScam continued climbing, with notable spikes in Mexico (1,650%), Chile (1,326%), Czech Republic (974%) and Colombia (700%), typically fronted by fake prompts asking for one-time PINs to "restore" access to messaging or social media.
Scams are everywhere, and they move across borders as easily as a text message or email. Law enforcement and security companies are pushing back to stop or at least minimize the impact. A good example is an INTERPOL-coordinated sweep this summer called Operation Serengeti 2.0. Running from June to August across 18 African countries with support from the UK, it led to 1,209 arrests, disrupted nearly 88,000 victimizations, recovered 97.4 million USD, and dismantled 11,432 malicious infrastructures used for scams and cybercrime.
The Operation Serengeti 2.0 cases ranged from day-to-day fraud to large networks. In Zambia, authorities broke up a crypto investment scheme tied to 65,000 victims and an estimated 300 million USD in losses. In Angola, police shut 25 illicit crypto mining centers and seized 45 illegal power stations. In Côte d’Ivoire, investigators took down a transnational inheritance scam with losses around 1.6 million USD. INTERPOL says the operation worked because police shared indicators like suspicious IPs, domains, and command servers ahead of time, with private-sector partners contributing threat intelligence and training. Results like these show that coordinated action can claw back funds, take down infrastructure, and make the next wave of scams a little less effective.
Coordinated action cannot stop every scam, but it raises costs and removes infrastructure that would otherwise keep hitting consumers.
AI-Driven Attacks & Defenses
In Q3, the AI layer moved from novelty and research to a serious part of the real attack surface. Three trends stood out: attackers manipulating AI assistants, targeting add-ons and extensions that power AI tools, and experimenting with ways to embed AI directly into attacks. These trends have been building all year, helping explain why scams now feel faster and more convincing.

The most visible pressure point was assistant and agent abuse. Researchers demonstrated ways to nudge or quietly redirect popular enterprise assistants into revealing information or taking unintended actions. The lesson is simple. If an assistant can access your files or calendar, it deserves the same level of care as any powerful app, with strict permissions and a healthy habit of saying no to unexpected prompts.
We also saw supply-chain risk emerge inside AI tooling. In July, the Amazon Q Developer extension for Visual Studio Code briefly shipped with malicious commands after an attacker tampered with the code path. Amazon fixed the issue quickly, but the incident showed that even trusted AI extensions can become a delivery route for harmful actions through normal updates. Extensions and plugins should be treated like apps: keep them updated, choose verified publishers, and remove what you do not use.
In an example of how AI is shifting traditional threats, our previous Threat Report covered FunkSec, the first ransomware where criminals used AI to help them write ransomware code. This quarter, researchers reported a proof-of-concept (not real malware found in the wild) called PromptLock that goes one step further. Instead of using AI during development, it uses a small model on the device during the attack to generate commands in real time. This makes the behavior more flexible and, in theory, harder for older, static detection methods to catch. Even as a demo, it points to how ransomware might evolve as AI matures.
Persuasion research also stood out this quarter. Simple framing cues like authority and urgency made chatbots more likely to answer questions they should refuse. That same psychology fuels today’s “vibe-correct” phishing, where fake pages look and feel legitimate at first glance. AI helps criminals scale the look, the language, and the timing, which is why our VibeScams featured story is so timely.
We are also seeing state actors adopt the same playbook. In mid-July, security researchers reported a Kimsuky campaign that used ChatGPT to generate realistic images of South Korean military ID cards to make spear-phishing emails look official; the link delivered malware, not an ID. The same report ties the activity to earlier ClickFix-style lures and shows how AI is used to polish the “front end” of an attack. Separately, Anthropic documented North Korean operators using Claude to fabricate identities and pass coding tests for remote jobs. Together, these cases show AI as a force multiplier for social engineering, not just code writing.
The same AI boost shows up in text messages. Our SMS Threats featured story explains how AI tools now draft and localize smishing messages that match brand tone, include believable details, and land at the right hour. This is why delivery notices, bank alerts, and account lock texts feel more urgent and legitimate than they used to.
As mentioned previously, AI-enabled scams are now a real part of the threat landscape. Similarly, we are starting to see early signs of video scams and deepfakes in our Gen telemetry. Q3 is our first quarter with telemetry from Media Shield, our new on-device deepfake protection for Windows AI PCs. The feature is still in early rollout with a limited installed base, so these results are indicative rather than market-wide. Even so, some patterns stand out. Most detections were flagged as deepfake audio, with a smaller share of scam videos. Within those scams, cryptocurrency and financial lures dominated, while the rest were more generic fraud attempts. Importantly, almost all of the scam clips were low-reach, with very few above 10,000 views, suggesting that deepfake scams are more common in niche content than in widely viral videos.
These are early indicators, not population-wide trends, but they show how manipulated media is being used today: less about chasing virality, more about convincing smaller audiences to part with money.
The picture that emerges is clear. Attackers are getting results through orchestration and persuasion, not only through clever code. Defenders are moving from written rules to built-in protections that act while you use the product: clearer guardrails in the chat, safer default settings, and tighter limits on what assistant tools can access. For consumers, the practical takeaway is to be extra careful with anything that asks an assistant to reach into your files, messages, or finances, to limit what plugins can do, and to treat perfect-looking pages with healthy doubt, especially when they ask for passwords or payments.
Privacy
When you use digital products, you leave a trail. Sites and apps collect details about your device, browser, and how you move around a page. Combined, these signals form a “digital fingerprint” that can identify a person across different sites and sessions. Fingerprinting is widely used on the web and can track people even without cookies, which is why it matters for consumers who think clearing cookies is enough.
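To make this concrete, here is a minimal sketch (in Python, with illustrative signal names and values, not any real site's collection code) of how a handful of ordinary browser signals, none of which is identifying on its own, can be combined into a stable identifier:

```python
import hashlib
import json

def fingerprint(signals: dict) -> str:
    """Combine browser/device signals into a stable identifier.

    Each signal (user agent, screen size, timezone, fonts, ...) is
    common on its own, but the combination is often distinctive enough
    to re-identify a visitor across sites, with no cookies involved.
    """
    canonical = json.dumps(signals, sort_keys=True)  # order-independent
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Signals a page might read via JavaScript (illustrative values)
visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen": "1920x1080x24",
    "timezone": "Europe/Prague",
    "language": "en-US",
    "installed_fonts": ["Arial", "Calibri", "Segoe UI"],
}

print(fingerprint(visitor))  # same signals tomorrow -> same identifier
```

Because the hash is deterministic, the same device produces the same identifier on every visit, which is exactly why clearing cookies does not reset the trail.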

Our telemetry shows that since July 2025 we have blocked an average of 247 million trackers per month, and we are seeing almost 37 million digital fingerprints per month, a 1% quarter-over-quarter increase. This tells us that tracking is alive and growing. Fingerprinting keeps you identifiable even without cookies, letting companies and data brokers stitch together a detailed record of what you read, buy, and search. That profile can shape the prices you see, fuel relentless targeting, and give scammers enough detail to make fake messages feel real.
While digital fingerprints allow your every move to be tracked online, encryption keeps everyday chats, photos, and backups private – and it can also frustrate criminal investigations. Some policymakers argue for special access for law enforcement, while security and privacy experts warn that any built-in workaround weakens protection for everyone. This quarter, in the UK, encryption policy reminded everyone that this is not a settled debate. In February, Apple removed its Advanced Data Protection option for iCloud in the UK after receiving a secret legal order under the Investigatory Powers Act and took the matter to the Investigatory Powers Tribunal. In August, the tide appeared to turn when the U.S. Director of National Intelligence said the UK had agreed to drop the backdoor demand. For consumers, the signal is simple: strong encryption still has heavyweight defenders, but it remains a live policy fight.
Across the EU, governments are still arguing over the proposal nicknamed “Chat Control,” which would let authorities issue detection orders that require services to scan for child-abuse material and grooming. Parliament’s position rejects generalized scanning and seeks to protect end-to-end encryption, while the Council remains split, with several countries including Germany opposing client-side scanning and others backing it ahead of scheduled votes. The upshot is that messaging privacy rules are still being decided, and outcomes could vary by country in the months ahead.
Another privacy reminder came to us this quarter from Sam Altman, OpenAI’s CEO. He has warned that therapy-style chats with ChatGPT are not protected like conversations with a doctor or lawyer, and that transcripts could be requested in a lawsuit. Treat AI chats like any other online form: if you would not say it in a crowded room, do not paste it into a chatbot, especially anything about health, legal matters, or finances.
Trust and Identity
Digital identity theft is clustering around fast-cash products and routine account probing. Criminals are applying for short-term loans and new cards in victims’ names, then testing access to existing bank accounts.
This quarter we saw more breach events, but smaller ones. The total number of breach events increased by 76%, while the overall volume of leaked records fell by 81%. In simple terms, there were more incidents, but each affected fewer people. What makes this trend concerning is the type of data exposed. The share of breaches containing passwords rose sharply to over 83%, even as breaches with email addresses declined. That means the "quality" of stolen data improved from a criminal's perspective, as credentials are far more valuable than contact lists.
A breach event represents each separate case where data was exposed, while breached data refers to the total number of individual records leaked. So even if the total number of records goes down, more events mean more unique services and users potentially affected. The combination of smaller, more frequent breaches with richer data shows that attackers are focusing on precision and value rather than scale.
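The arithmetic behind that shift is worth a quick check. Combining the quarter's two reported figures shows the average breach size fell far more steeply than either number suggests on its own (a back-of-the-envelope sketch):

```python
events_change = +0.76   # breach events up 76% quarter over quarter
records_change = -0.81  # total leaked records down 81%

# Average records per event, relative to last quarter:
# (total-records ratio) divided by (event-count ratio)
avg_size_ratio = (1 + records_change) / (1 + events_change)

print(f"average breach size: {avg_size_ratio:.2f}x last quarter "
      f"(roughly {1 - avg_size_ratio:.0%} smaller)")
```

In other words, the typical breach this quarter exposed only about a tenth as many records as the typical breach last quarter.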

Across financial alerts in Q3, payday loans represent the largest share of unique alerted users at 32%, followed by new credit card applications at 15%, typically fraudulent attempts to open cards in a victim’s name. Protective lock blocks on payday loans accounted for 10%, reflecting successful blocks applied after a fraud attempt. Existing bank-account updates (unauthorized changes) and new auto applications each represent 6%.
On the transaction side, alerts are led by suspicious bank-account activity (42%) and credit card activity (20%). Smaller but notable shares include gray charges (small, technically authorized recurring fees like trial rollovers and add-on "clubs") at 9%, unusual charges (8%), and login errors at financial institutions (7%). The pattern reflects a familiar playbook: move fast to turn access into cash, then test the limits of existing accounts.

Alert patterns mirror a broader trend in which third-party access and support systems serve as frequent entry points. This quarter, a wave of attacks exploited company Salesforce ecosystems, often through connected third-party integrations. The FBI issued an advisory on two clusters—UNC6040 and UNC6395—describing tactics such as vishing for support credentials and abusing compromised OAuth tokens from the Salesloft–Drift integration to export customer data. Those exports can include support ticket text containing names, contact details, and even embedded secrets. The result: highly convincing phishing that references real cases.
Several firms confirmed exposure tied to this activity. Cloudflare reported exfiltration from its Salesforce tenant between August 12 and 17, 2025, limited to support case objects. Trade and tech press covered similar confirmations and mitigation steps across the industry.
A number of breaches also impacted well-known, non-tech brands. Air France–KLM said a third-party platform used by its contact centers was compromised in the week of July 28, exposing customer contact and loyalty details. These incidents do not always include passwords or card numbers, but they give scammers exactly what they need to personalize lures and create successful account-reset attacks.
Luxury retail provided another example of why “profile” data matters. In mid-September, Kering confirmed a June intrusion that led to theft of customer data for brands including Gucci and Balenciaga. Reports noted contact details and total spend amounts, which can be used to target high-value buyers with bespoke fraud.
Financial Wellness
Criminals are shifting financial theft to where payments actually happen: online checkout pages and accounts payable. We saw two key patterns emerge in financial theft in Q3: more attempts to skim card data directly from merchant payment pages and a rebound in fake or altered invoices that reroute money during vendor payments.
On desktop, web skimming attempts increased quarter over quarter. These are attacks that inject code into a store’s checkout page to capture card numbers and billing details as you type. We also saw invoice-scam activity rise again after a summer dip, with renewed waves targeting people who handle bills and supplier payments.

Independent threat reporting says web skimming levels remained high through 2025, with thousands of ecommerce sites affected in H1 and beyond. One July campaign hit OpenCart shops by hiding skimmers among analytics tags like Google Tag Manager and Meta Pixel. Researchers also described variants that validate cards via a legacy Stripe API and even move skimming into malicious browser extensions. When looking at these trends together, it is clear that client-side checkout is a hot target.
Payment industry standards have reacted. PCI DSS 4 emphasizes script authorization and change detection for checkout pages, reflecting how much risk now sits in third-party and client-side code. For consumers, the implication is simple: a site can look trustworthy and still be compromised on the page where you pay.
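The PCI DSS 4 requirement boils down to knowing which scripts are supposed to run on the payment page and noticing when that set changes. A simplified merchant-side check might look like this sketch (the allowlist and page contents are hypothetical; real deployments lean on Content-Security-Policy headers and integrity hashes rather than string scanning):

```python
import re

# Scripts the merchant has reviewed and authorized for the checkout page
AUTHORIZED_SCRIPTS = {
    "https://js.stripe.com/v3/",
    "https://shop.example.com/static/checkout.js",
}

def unauthorized_scripts(page_html: str) -> set:
    """Return script sources on the page that are not on the allowlist.

    This illustrates the change-detection idea: any script the merchant
    did not explicitly authorize is a candidate skimmer.
    """
    found = set(re.findall(r'<script[^>]+src="([^"]+)"', page_html))
    return found - AUTHORIZED_SCRIPTS

checkout = '''
<script src="https://js.stripe.com/v3/"></script>
<script src="https://shop.example.com/static/checkout.js"></script>
<script src="https://cdn.evil-analytics.example/tag.js"></script>
'''
print(unauthorized_scripts(checkout))  # flags the injected third-party tag
```

The point for consumers stands either way: the page can only be vouched for by controls running on the merchant's side, not by how trustworthy it looks.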
Invoice scams also increased in Q3 after a short summer lull, with attackers impersonating trusted vendors and official bodies to reroute payments. This echoes wider signals from regulators and law enforcement, including a Europol–EUIPO report in May that flagged a significant rise in fraudulent invoices across the EU, often boosted by email spoofing and lookalike domains, and the FBI’s latest IC3 report noting record losses with business email compromise still driving invoice diversion and vendor payment fraud.

Patrik Holop, Principal Data Scientist
Luis Corrons, Security Evangelist
Peter Kováč, Lead Data Scientist
Jakub Vávra, Sr Threat Analysis Engineer
Featured Stories: Consumers Under Siege
Cyber threats don’t usually announce themselves with flashing warnings or complex exploits. Increasingly, they arrive in the everyday moments of people’s lives: a website that “feels” right, a text that buzzes during morning coffee, a file that suddenly locks on your desktop. What unites these attacks isn’t just the technology behind them, but the way they prey on routine, trust, and timing.
This quarter’s featured stories spotlight three very different fronts of this same battle. Our story on VibeScams shows how AI-powered web builders are turning phishing into a plug-and-play service, where the danger isn’t sloppy code but a flawless “vibe.” The SMS Threats feature reveals how the tiniest of texts can funnel victims into large-scale fraud, evolving into one of the most persistent and underestimated attack vectors. And our report on Midnight Ransomware reminds us that even hardened cyber extortion can falter: when copy-paste criminals forget the math, defenders can seize the opportunity to turn the tables.
Together, these stories capture the reality of consumers under siege: threats are faster, more deceptive, and closer to daily life than ever before, but also more fragile when exposed to the right defenses.
VibeScams - Phishing factories powered by AI
A new kind of phishing
We call them VibeScams because the trick isn’t in the code, it’s in the feeling. These sites pass the “vibe check”: the colors, the spacing, the logo placement, even the tiny footer links nobody reads but everybody recognizes. They feel right at first glance, and that’s enough to fool people into handing over money or credentials.
What makes VibeScams different is that the criminals behind them often aren’t technical experts. Just like VibeCoding lets anyone create software without coding, these platforms let anyone create scams without phishing know-how or web-design skills. A few words typed into an AI-powered builder are enough to generate a polished, brand-like page that can be abused straight away.
You open a text that says your DHL or FedEx parcel needs a small fee for delivery. You need your package, so you quickly click the link. The page you see passes a brand audit: the yellow (or purple, depending on brand), the spacing. Tomorrow it’s a Coinbase “wallet verification” or a Microsoft login. Everything looks great, right up until your money’s gone.
Thanks to AI, the entire phishing supply chain has leveled up. AI web-builders have turned “make me a page that feels like X” into a one-line prompt. Some even spin up a site from a screenshot. No HTML, no design school, no expensive dev hours: just a credible, on-brand clone in minutes. That shift drops the barrier to entry for scammers and multiplies the risk for everyone else.

Why this works
We stress-tested this ourselves. Using free offerings, our researchers asked a handful of popular website builders to help recreate the vibe of TikTok, Microsoft, Instagram, Coinbase, and Binance. The results were alarmingly realistic: polished layouts, familiar components, auto-generated copy that felt “official enough,” and one-click publishing. In other words: plug-and-play phishing.
And it’s not a fringe phenomenon. We’re seeing a surge of AI-built fakes across well-known platforms: think delivery brands, mobile carriers, crypto wallets, insurers, social media apps. Many are near-indistinguishable at a glance, especially on mobile where the URL bar is tiny and the design does most of the convincing. Since January 2025, we have blocked approximately 140,000 different AI-generated websites, roughly 580 new malicious sites every day on average. In the same period, we protected nearly 190,000 of our users, with the U.S., France, Brazil, and Germany among the most affected. That’s a lot of “almost-right” pages catching people on a busy Tuesday.

Behind the curtain
Why is this working so well on otherwise careful people? Because the old red flags (typos, clunky layouts, weird spacing) don’t show up when the page is composed by an AI design assistant. The scam succeeds on vibe: the color, the logo, the tone of the microcopy, the way the “Continue” button sits exactly where your thumb expects it. Localization is trivial now, too. If a campaign targets Spanish speakers in Madrid today and German speakers in Munich tomorrow, the builder simply re-spins the template with new text. And because set-up is cheap or free, criminals can A/B test headlines, icons, and prompts like growth teams do, then keep only what converts.

Under the hood, the workflow is painfully simple. Step one: prompt the builder to generate a page that looks and feels like a trusted brand or upload a screenshot for it to mimic. Step two: “wire up” the forms, either with built-in tools or by exporting the template and asking another AI to handle submission and exfiltration. Step three: publish, measure, iterate, repeat. When a host removes one site, a fresh copy appears somewhere else in minutes. It’s phishing as a service, and the inventory never runs out.
To understand what criminals were actually using these AI-generated sites for, we went beyond blocking the URLs. Our team examined the detected domains and analyzed the structure and behavior of the pages. That included looking at the forms they contained, the data they attempted to capture, and the redirection flows that followed. In some cases, the purpose was obvious: credential fields cloned from bank or email logins. In others, links led to cryptocurrency “investment” portals, payment card skimming forms, or remote-support software downloads. And in the future, attackers could just as easily deploy fake crypto “verification” pages that drain seed phrases. By classifying each site this way, we built the threat type breakdown shown in the chart, with phishing dominating, crypto scams close behind, and a smaller slice made up of tech-support and other fraud attempts.

Real-world outcomes follow predictable patterns. Credential-harvesting logins feed account takeovers. Crypto-themed scams try to funnel victims into fake investments. Delivery-fee pages skim cards for small “tests” before bigger hits. Tech-support look-alikes push remote-control installs. The specifics differ, but the mechanics are the same: a believable façade buys a few seconds of trust, just long enough to steal something valuable.
Turning the tables
Here’s the good news: this is not a losing game. On our side, we’re blocking the funnel. We detect and stop known-bad builder subdomains and typosquats before the page loads, so the “vibe” never gets a chance to trick you.
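One signal we can describe in public is look-alike distance: a domain label that sits one or two edits away from a well-known brand, without actually being that brand, deserves suspicion. A simplified sketch of the idea (the brand list and threshold are illustrative, not our production logic):

```python
from difflib import SequenceMatcher

KNOWN_BRANDS = ["coinbase", "microsoft", "instagram", "binance", "tiktok"]

def looks_like_typosquat(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that closely resemble a known brand without matching it."""
    label = domain.split(".")[0].lower()  # first label, e.g. "c0inbase"
    for brand in KNOWN_BRANDS:
        if label == brand:
            return False  # exact match: the genuine brand label
        if SequenceMatcher(None, label, brand).ratio() >= threshold:
            return True   # close but not equal: suspicious look-alike
    return False

print(looks_like_typosquat("c0inbase.app"))   # one character swapped
print(looks_like_typosquat("coinbase.com"))   # the real label
print(looks_like_typosquat("mycatblog.net"))  # unrelated domain
```

In production this kind of heuristic is only one feature among many (domain age, hosting platform, page content, and reputation all feed the verdict), but it shows why a fresh builder subdomain imitating a brand name can be blocked before the page ever renders.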
Some builders are stepping up, too. When we flagged abusive sites, providers like Lovable, Elementor, Flazio, Softr, Webflow, and WebWave moved quickly to remove them. It shows that responsible reporting works, and that user vigilance plays a role as well. The faster suspicious sites are reported, the faster they’re taken down. Still, the ease of spinning up new clones means takedowns alone can’t win the race; prevention and protection remain critical.
That’s why we also track bursts of brand impersonation across multiple builders and work directly with providers to limit abuse. And in our consumer products, features like Scam Protection and Scam Guardian help people sanity-check links and pages in the moment they need it most.
What should readers do differently? Start by distrusting design. If a page “feels” right but the request is unusual—seed phrases, 2FA codes, urgent refunds—treat that mismatch as a siren. Don’t follow login links from texts or DMs; use your own bookmark or the official app. For deliveries, jump to the known URL yourself rather than tapping through. And keep unique passwords with MFA turned on, so one mistake doesn’t unlock everything.
The bottom line
VibeScams don’t mean the web is broken. They mean phishing has matured. Speed, scale, and accessibility are now the attacker’s edge; precision, verification, and layered protection must be ours.
Phishing no longer requires phishers, just a prompt. If design can be faked in seconds, then trust has to live somewhere harder to fake—like the URL you typed yourself, the security tools that block bad pages on the way in, and the habit of pausing when something feels a little too perfect.
Want to dig deeper? Our full blog post on VibeScams explores detailed case studies, examples of AI-built sites we uncovered, and how criminals are adapting their tactics.
SMS threats: the tiny texts criminals love
A buzz at 8:12 a.m.
The message lands while you’re making coffee:
“Your account was accessed from a new device. If this wasn’t you, call us now: <Phone Number>”
It even appears inside the same branded thread your bank uses. In under a minute you could be talking to a convincing “agent” and handing over one-time codes. After all, who ignores a security warning?
That’s the power of SMS as an attack vector: it’s cheap, instant and feels personal. And while headlines chase flashier threats, text-initiated fraud keeps quietly scaling. In 2024 alone, people reported $470 million in losses that began with a text, 5 times the amount reported in 2020. The U.S. regulator’s summary is blunt: texting is “cheap and easy,” and scammers know the ding is hard to ignore.
Carriers and industry monitors don’t expect relief soon. Research with global operators finds over half anticipate more SMS fraud this year, not less, as criminals exploit pricing shifts, routing weaknesses and artificially-inflated traffic. Meanwhile, awareness lags. In the UK, just 15% of mobile users even know about the free 7726 number to report scam texts; only 6% have used it, meaning networks miss signals that would help them filter faster. And as law enforcement has warned, waves like toll/parking smishing can hop state-to-state in weeks, keeping victims and defenders off-balance.
This quarter’s SMS hall of shame
This quarter we analyzed millions of malicious SMS messages hitting users around the globe. Below are the five most widespread campaigns we tracked. Each item refers to one concrete campaign, and the example shown comes from that campaign. The percentages are the share of all scam SMS attributed to that exact campaign in our telemetry during the period; this is not a ranking of scam types overall. Together, these five campaigns account for roughly 26% of all malicious SMS seen this quarter:
Employment Scam (8% of all scam SMS)
Fake Refund (7%)
Tax Refund / Fine (6%)
Investment (3%)
Delivery (3%)

How the tiny text does outsized damage
The trick isn’t high-end malware. It’s friction—or rather, the lack of it. SMS lives where real life does: delivery updates, two-factor codes, appointment reminders. Scammers hide in that noise and win the first five seconds: a short link, a short deadline, and a decision before coffee kicks in. Industry surveys suggest four in ten smartphone users encountered smishing attempts in 2023, a reminder that the top of the funnel is broad, even before under-reporting narrows official stats.
Two dynamics keep SMS underrated and under-discussed. First, it’s cross-channel by design. Many campaigns quickly push you off SMS (onto a callback number or a messaging app) where pressure works better and defenses are weaker. Mobile operators/carriers can filter texts at the network level, and in some markets, regulators require them to do so; they can’t police your phone call in real time. Second, the signals are subtle. Phones can group messages by sender name, so a fake bank alert may land inside your real bank thread. Add a shortened URL and a convincing domain, and most of us are one tap from trouble.
The tricks behind the texts
Smishing doesn’t win because attackers out-engineer the phone. It wins because the channel feels routine and the disguises are simple but effective. Most texts arrive in the same inbox as your delivery updates and one-time codes; criminals hide in that noise and ask for something small before your brain has switched from “admin mode” to “security mode.” The playbook is a blend of human psychology (urgency, authority, convenience) and tiny technical tricks that make a dangerous message look ordinary for the three seconds it takes to tap.
Under the hood there’s no magic, just throwaway domains, and tricks that defeat quick visual checks and basic filters. When a lure needs more time or pressure, the conversation jumps channels—to a phone call or a WhatsApp chat—where defenses are weaker and a calm “agent” can walk a victim through the rest. And if someone hesitates or realizes too late, a second wave appears offering “recovery services” that charge a fee to “get your money back.”
Here are the mechanics doing most of the damage right now, and how they slip past both filters and busy humans:
- URL masking: public shorteners (e.g., cutt.ly, tinyurl.com, rb.gy) and throwaway domains hide destinations long enough to win a tap.
- Brand-ish web addresses: subdomains that look official at a glance, like flhsmv.govturim[.]vip, exploit our habit of reading only the start of a URL.
- Username-in-URL phishing: https://www.fedex.com@… still misleads in SMS because the real hostname is everything after the @.
- Unicode obfuscation: SMS supports the UCS-2 (UTF-16) character set, so look-alike characters turn Amazon into Αmаzon; filters and humans both slip.
- Pivots to voice/messengers: moving victims to WhatsApp/Telegram or a phone call segments the attack across platforms: harder to detect, easier to pressure. Public warnings around toll/parking smishing illustrate how fast these waves evolve.
- The “rescue” after the scam: recovery-service pitches (“Scammed? We can help—chat here”) target people already hurt, extracting a second fee.
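The username-in-URL trick above is easy to demonstrate with a standard URL parser: everything between the scheme and an `@` is treated as "userinfo," not the destination. The domain `evil.example` below is a hypothetical attacker-controlled host, used only for illustration:

```python
from urllib.parse import urlsplit

def real_hostname(url: str) -> str:
    """Return the hostname a browser would actually connect to.

    Anything between the scheme and an '@' is userinfo, not the
    destination, so a link that begins with a trusted brand name
    can still point somewhere else entirely.
    """
    return urlsplit(url).hostname

# The brand name before '@' is just a decoy; the real destination
# is the (hypothetical) attacker-controlled domain after it.
print(real_hostname("https://www.fedex.com@evil.example/track"))  # evil.example
print(real_hostname("https://www.fedex.com/track"))               # www.fedex.com
```

Reading only the start of the link, as most of us do on a phone, is exactly what this layout exploits.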
For readers who want to see these categories in more detail, our blog post “SMS threats: the many faces of a tiny text” expands this list with screenshots, examples, and step-by-step advice.
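The Unicode obfuscation described above is invisible to the eye but trivial to expose programmatically. A minimal sketch: list every non-ASCII character in a string along with its official Unicode name, which reveals the script-mixing that makes "Αmаzon" look like "Amazon":

```python
import unicodedata

def flag_lookalikes(text: str):
    """List non-ASCII characters in text with their Unicode names.

    Homoglyph spoofing swaps Latin letters for visually identical
    characters from other scripts; printing the official names makes
    the substitution obvious even when the eye can't see it.
    """
    return [(ch, unicodedata.name(ch, "UNKNOWN"))
            for ch in text if ord(ch) > 127]

# "Αmаzon" mixes a Greek capital alpha and a Cyrillic small a
# in with genuine Latin letters.
for ch, name in flag_lookalikes("Αmаzon"):
    print(repr(ch), name)
```

Production filters use normalization and confusable-character tables rather than a simple ASCII check, but the principle is the same: the machine can name the disguise that the human eye cannot.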
When AI enters the picture
Artificial intelligence is quietly reshaping this landscape. Where criminals once leaned on clumsy, typo-filled texts, they can now generate fluent, localized messages at scale, each one tailored to the language and phrasing of its target country. AI can also spin up endless variations, making filters that rely on spotting repeated patterns far less effective.
And it’s not just text. Attackers are beginning to pair SMS with AI-generated voice calls or chatbot follow-ups on WhatsApp, creating a seamless multi-channel fraud that feels eerily professional. The result: scams that used to raise suspicion now read like the messages you really do get from banks, couriers, or tax offices. For defenders, this means the bar keeps rising; for consumers, it means skepticism has to become second nature.
SMS may not trend like ransomware or zero-day headlines, but its quiet efficiency is exactly the point. It reaches everyone, blends with daily admin, and routes victims into higher-value fraud with almost no friction. Until awareness (and reporting) catches up, the smallest message on your phone will keep doing outsized damage—one buzz at a time.
Cracking Midnight: when copy-paste ransomware forgets the math
Ransomware has long been treated as unbeatable: files are locked, the encryption is sound, and victims face a terrible choice between paying and losing their data. Midnight ransomware tells a different story.
Built on Babuk’s leaked source code, Midnight is part of a new wave of copy-paste ransomware projects. But instead of improving on the original, its authors weakened it. In analyzing the code, our researchers uncovered a critical weakness in Midnight’s cryptography, the kind of mistake that is rarely seen in modern ransomware. By identifying and leveraging this flaw, the team was able to develop a free decryptor that gives victims a way out without paying.
Midnight in the wild
At first glance, Midnight doesn’t look much different from other ransomware. It locks files, drops a ransom note called How To Restore Your Files.txt, and demands payment. In most cases, victims see their files renamed with the .Midnight or .endpoint extension, a tell-tale sign of the infection.

Earlier versions of Midnight mainly went after high-value data such as databases and backups, but more recent ones broadened the scope to nearly all common file types, leaving only program files untouched. To move quickly, it also uses a trick called intermittent encryption, touching just enough of a file to make it useless while speeding through the system.
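The effect of intermittent encryption can be sketched with a few lines of arithmetic: encrypt a block, skip ahead, repeat. The block and skip sizes below are illustrative assumptions, not Midnight's actual parameters:

```python
def intermittent_ranges(file_size: int, encrypt_len: int = 64 * 1024,
                        skip_len: int = 192 * 1024):
    """Byte ranges an intermittent-encryption scheme would overwrite.

    Instead of ciphering a whole file, the malware encrypts one block,
    skips ahead, and repeats. Only a fraction of the bytes change, but
    the damage is spread across the entire file. Block sizes here are
    illustrative, not Midnight's actual parameters.
    """
    ranges, offset = [], 0
    while offset < file_size:
        end = min(offset + encrypt_len, file_size)
        ranges.append((offset, end))
        offset = end + skip_len
    return ranges

# For a 1 MiB file, only four 64 KiB blocks (25% of the bytes) are
# encrypted, yet no long stretch of the file survives intact.
ranges = intermittent_ranges(1024 * 1024)
touched = sum(end - start for start, end in ranges)
print(len(ranges), touched / (1024 * 1024))  # 4 0.25
```

The payoff for the attacker is speed: a quarter of the I/O buys damage that is, for practical purposes, total.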
On the surface, all of this looks like business as usual for ransomware. But underneath, Midnight’s authors made critical mistakes in the encryption design, mistakes that turned into opportunities for defenders.
From Babuk to broken forks
The 2021 Babuk leak was a turning point for the ransomware ecosystem. Suddenly, sophisticated source code was available to anyone who wanted to try their hand at cyber extortion. That leak spawned numerous spin-offs, some more successful than others. Midnight is one of these, and its story illustrates an important trend.
Rather than giving rise to a new “super strain,” the Babuk leak produced a wave of rushed, amateurish forks. Midnight is a textbook case: criminals re-used the code, made “improvements” they didn’t fully understand, and ended up with a product that sabotaged itself.
Our researchers uncovered weaknesses in its cryptographic design, rare flaws that opened the door to building a working decryptor. For defenders, even a single mistake of this kind is valuable; in Midnight’s case, the opportunity was even greater.
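We are deliberately not reproducing Midnight's specific flaw here; the sketch below is a generic illustration of the class of mistake that makes decryptors possible, not Midnight's actual scheme. A classic example is keystream reuse: if a hypothetical fork encrypts every file with the same XOR keystream, recovering one file's plaintext reveals the keystream, and with it every other file:

```python
import os

def xor(data: bytes, keystream: bytes) -> bytes:
    """XOR data against a keystream (truncated to the shorter length)."""
    return bytes(a ^ b for a, b in zip(data, keystream))

# Hypothetical broken fork: one keystream reused for every file.
keystream = os.urandom(32)
c1 = xor(b"known backup header bytes 123456", keystream)
c2 = xor(b"victim's secret spreadsheet data", keystream)

# With one known plaintext, the keystream falls out by XOR,
# and with it every other ciphertext on the machine:
recovered_ks = xor(c1, b"known backup header bytes 123456")
print(xor(c2, recovered_ks))  # b"victim's secret spreadsheet data"
```

Mistakes of this family, reused keys, predictable random number generators, or keys left recoverable in memory or on disk, are exactly what decryptor authors hunt for.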
A free way out
Armed with these insights, we developed a free Midnight Ransomware Decryptor. Victims can use it to recover files safely without paying ransom.
Rather than duplicating the technical detail here, our colleague Samuel Vojtáš has published a full analysis on the Gen blog, including indicators of compromise, screenshots, and step-by-step instructions for using our free decryptor. His article walks through the ransomware’s evolution from Babuk and highlights its quirks; this report tells the bigger story. You can read the deep dive here: Decrypted: Midnight Ransomware
Why Midnight matters
Midnight may not be the most widespread ransomware of 2025. But its lessons resonate far beyond its victim count:
- Copy-paste criminals. Leaked source code is fueling countless ransomware spinoffs, but not all of them are stronger. Midnight proves that copy-paste shortcuts can weaken attackers.
- Rare crypto flaws. Exploitable vulnerabilities in ransomware encryption are uncommon, but Midnight proves that criminals don’t always get the math right.
- Hope for victims. Midnight shows that ransomware isn’t always the end of the road. Free decryptors exist, and paying isn’t the only option.
- AI hype vs. reality. While headlines spotlight AI-assisted ransomware experiments, Midnight shows the other side of the ecosystem — rushed projects built on leaked code, riddled with rookie mistakes. The ransomware landscape is splitting: some groups experiment with AI, while others sabotage themselves. That fragmentation creates new openings for defenders.
The bigger picture
The ransomware ecosystem is fragmenting. Instead of a few large, disciplined groups dominating the field, we’re seeing a growing number of buggy forks, rushed experiments, and copycats built on leaked code. That’s good news for defenders: the more mistakes criminals make, the more opportunities we have to neutralize them.
Midnight is a case in point, ransomware that tried to build on Babuk’s legacy, but ended up undermining itself. For victims, our decryptor means recovery without ransom. For the industry, it’s a reminder that ransomware isn’t always unbreakable. And for criminals, the message is clear: recycling someone else’s code won’t guarantee success.
Branislav Kramár, Threat Analysis Engineer
Daniel Beneš, Senior Threat Researcher
Jan Rubín, Threat Research Team Lead
Ladislav Zezula, Senior Malware Researcher
Luis Corrons, Security Evangelist
Samuel Vojtáš, Threat Analysis Engineer
Tadeáš Zíka, Threat Analyst
In Closing
This quarter confirmed the shift we have tracked all year: cybercrime targets people first. AI has made scams look right and feel right, from VibeScam lookalikes to SMS lures that hand victims to live “support.” Ransomware remains loud, yet the quiet wins come from shrinking the gap between first contact and first protection. Our focus stays on real-time defenses inside the places people already talk and shop, with clear help in the moment of doubt.
With the holiday season nearing comes the peak shopping season. Expect scams tied to deliveries, order issues, too-good-to-be-true deals, fake support, and “in-platform” fraud across marketplaces and social. The playbook for readers is simple: slow one beat, check the sender and the URL, never share one-time codes, use 2FA or passkeys, and pay only inside trusted checkouts. We will meet criminals in the conversation with protections that spot the con and stop the loss. See you next quarter with the results.

Download the Q3/2025 Threat Report Key Highlights
Visit our Glossary and Taxonomy for clear definitions and insights into how we classify today’s cyberthreats.
