Leadership Perspectives

AI Generated Personas: Meet Gen Cyber Safety CTO's Digital Twin

Transparency is key to building trust in today's digital world
Siggi Stefnisson
Cyber Safety Chief Technology Officer
Published
June 23, 2025
Read time
5 Minutes

    Imagine being able to show up in more places and reach more people at once. With AI, that future is now. At Gen, we’re introducing the use of AI Avatars – or as we call them, Digital Twins – to create smart, fast ways to share information and connect in new ways. 

    But we’re also clear-eyed. The sophistication of this technology can be astounding – even misleading – because not everyone will use it for good. So, while we’re all in on the responsible use of AI, we’re also committed to fighting back against the wave of AI-powered scams popping up across the digital world. 

    Digital Twins vs. Deepfakes 

    First, let’s break down the difference between a digital twin and a deepfake. A digital twin is a virtual version of a real person, built with full consent and transparency. Think of it as a communication tool – one that lets us show up in more places, reach more people and share what matters, more clearly and faster. 

    A deepfake, on the other hand, is designed to mislead people, often with the intent of scamming them. It looks like someone you trust, but it’s not. 

    We’re drawing a clear line between the two. At Gen, we use AI avatars to inform, connect and educate. Not to deceive. And we believe that transparency should be the foundation for any trusted AI-powered interaction. That’s why you’ll see a disclaimer on any AI avatar videos we create. 

    Using AI Avatars for Good 

    My personal digital twin will share real cybersecurity tips from me. That’s not just cool tech; it’s a real way we’re using AI to make information more accessible, consistent and immediate. 

    Instead of flying across time zones or waiting on schedules, an avatar lets us communicate important updates quickly. That means faster security awareness, better reach and less friction. 

    AI is powerful. But “with great power comes great responsibility.” That’s why we have guardrails around how we use synthetic media: 

    • Only create digital versions of people with full consent.
    • Make it easy for viewers to know when they’re seeing an AI-generated persona.
    • Be vocal about calling out misuse when we see it. We’ll share findings in our threat reports or even through upcoming Deepfake Detection features in our products. 

    Because we don’t just want to build cool things; we want to build the right things. 

    What We’re Seeing in the Wild 

    While we’re building digital twins with care, we’re also tracking how scammers are misusing similar technology to do the opposite. 

    In our Q1/2025 Threat Report, we detailed a rise in deepfake scams: AI-generated people posing as financial experts, executives or even family members. These videos look polished and sound convincing, but they’re designed to trick people into installing malware or handing over money. 

    In one case, a group called CryptoCore used deepfakes and hijacked YouTube accounts to promote fake investments. The result? Millions of dollars were lost across thousands of incidents. 

    That’s not innovation. That’s abuse. And it’s why we’re doubling down on both education and prevention. 

    How We’re Protecting People 

    We’ve built tools like Norton Genie to help spot scams in real time, even the sophisticated ones. Genie flags suspicious content fast, giving people clarity when things look iffy. It’s simple, smart and built for today’s threats. 

    And soon, we’re taking that protection even further with Norton deepfake detection. This new capability will detect fraudulent AI-generated content in YouTube videos by analyzing audio and video for signs of synthetic speech and facial manipulation that may indicate a scam. Initially, Norton’s deepfake detection will work on Microsoft Copilot+ PCs on Windows, protecting people from financial scams, including investment, crypto and giveaway scams. Norton’s technology works quickly and quietly in the background, helping people spot deceptive videos and scam messages on the fly, without their private data ever leaving the device. 

    We’re also helping people spot red flags on their own: 

    • If something sounds too good to be true, pause.
    • Check the source. Do they exist outside the video? Are they verified?
    • Be wary of urgent instructions about crypto wallets, downloads or personal info.
    • Use trusted protection. Our security tools block shady links, fake updates and sketchy sites. 

    The more people know, the harder it becomes for scammers to succeed. 

    Let’s Build a Future Worth Trusting 

    Here’s the good news: while this technology is evolving fast, so are we. 

    At Gen, we believe that AI can be a force for good when used with intention. We’re already seeing how digital twins can improve education, accessibility, and communication. And we’re excited about what’s next. 

    We’re still early in this journey. There’s a lot to figure out collectively. But we know this for sure: trust isn’t built by accident. It’s built by design. And if something doesn’t feel right online? Trust your instincts and your tools. 

    At Gen, we’re committed to keeping people informed, protected, and empowered as the digital world keeps evolving. AI is here to stay. And we’re committed to making sure it works for us, not against us. 

    Siggi Stefnisson
    Cyber Safety Chief Technology Officer