The future of cybersecurity lives inside AI conversations


“Is this a scam?” has become one of the defining questions of the digital era.
What is changing is not just the volume of these messages. It is how people respond to them. Instead of asking a friend this question, people are now turning to ChatGPT. This shift marks a profound inflection point in human-technology interaction. AI is no longer just a productivity tool or a content assistant. It is becoming a decision layer. A reasoning partner. A first checkpoint before action.
At Gen, we see this as more than a trend. It signals the emergence of a new trust architecture for the internet.
AI is becoming the first line of defense
According to the Gen Threat Report, more than 90 percent of threats targeting people in 2025 stemmed from scams, phishing and deceptive advertising. The dominant risk online today is not malware quietly infecting a system. It is manipulation persuading a person.
In parallel, AI tools like ChatGPT are becoming deeply embedded in everyday workflows. People use them to draft emails, compare products, research decisions and increasingly to validate suspicious messages.
That convergence creates both opportunity and responsibility. If AI is where decisions are being shaped, then security intelligence must live there too.
From reactive security to decision-centric protection
Within the Norton integration in ChatGPT, AI evaluates patterns such as impersonation tactics, urgency cues, requests for sensitive information and domain anomalies. Norton Genie, the world’s first AI-powered scam detector, pairs technical signals with behavioral indicators because modern fraud operates on both layers.
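To make the idea concrete, here is a minimal sketch of what signal-based scam evaluation can look like in principle: a few textual cues (urgency language, requests for sensitive information, unusual domains) are checked independently and combined into a verdict. The cue lists, thresholds and function names below are illustrative assumptions for this post, not Norton Genie's actual detection logic, which is far more sophisticated.

```python
import re

# Illustrative heuristics only -- NOT Norton's actual detection logic.
URGENCY_CUES = ["act now", "immediately", "within 24 hours", "account suspended"]
SENSITIVE_REQUESTS = ["password", "social security", "verify your card", "one-time code"]
SUSPICIOUS_TLDS = (".xyz", ".top", ".click")  # hypothetical example list

def scam_signals(message: str) -> dict:
    """Check a message against a few common scam indicators."""
    text = message.lower()
    # Pull out the host portion of any URLs in the message.
    hosts = re.findall(r"https?://([^/\s]+)", text)
    return {
        "urgency": any(cue in text for cue in URGENCY_CUES),
        "sensitive_request": any(req in text for req in SENSITIVE_REQUESTS),
        "odd_domain": any(host.endswith(SUSPICIOUS_TLDS) for host in hosts),
    }

def verdict(message: str) -> str:
    """Combine individual signals into a simple overall judgment."""
    hits = sum(scam_signals(message).values())
    if hits >= 2:
        return "likely scam"
    return "suspicious" if hits == 1 else "no obvious red flags"

print(verdict("Your account suspended! Verify your card at http://secure-bank.xyz/login"))
print(verdict("Lunch tomorrow?"))
```

The point of the sketch is the architecture, not the rules: each signal is cheap and interpretable on its own, and the verdict stays explainable because it can always be traced back to which cues fired.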
But this product moment sits within a much larger transformation.
The rise of autonomous AI and the trust imperative
Our OpenClaw research on autonomy risks explores how AI agents with increasing independence can introduce new security challenges. When AI begins to act on our behalf, make decisions or initiate actions, trust can no longer be assumed. It must be engineered.
Autonomy without guardrails amplifies risk. Autonomy with transparency amplifies trust.
That is why Gen has introduced the AI Agent Trust Hub, a framework and resource center focused on building responsible AI ecosystems. The Trust Hub emphasizes governance, explainability, accountability and human oversight as core design principles. This is not theoretical. As AI agents begin handling financial transactions, personal data and operational workflows, they will inevitably become targets themselves. The next phase of cybersecurity will not only protect humans from malicious actors. It will protect AI systems from manipulation and misuse. And it will protect humans from over-reliance on AI without understanding its limitations.
Designing AI that augments human judgment
The future of AI interaction is not about replacing decision-making. It is about strengthening it.
When someone pastes a suspicious message into ChatGPT and asks “Is this legit?” they are not outsourcing responsibility. They are seeking clarity. They are extending their cognitive reach.
That balance between automation and empowerment is critical.
AI must be capable enough to identify emerging global scam patterns, including fake delivery notifications, banking impersonation attempts and AI-generated voice phishing. But it must also remain interpretable. Transparent. Accountable. The Norton app within ChatGPT can be exactly that.
Building the next trust layer of the internet
We are entering an era where AI will mediate much of our online experience. It will draft our communications. Curate our information. Screen our interactions. Execute transactions. The question is not whether AI will become part of our security posture. It already is.
The real question is whether we build it with trust at the core.
At Gen, our approach is grounded in three ideas:
- Innovation must move at the speed of AI growth.
- Security must integrate into the environments where decisions happen.
- Trust must scale alongside autonomy.
The integration of AI-powered scam detection into conversational platforms is one step in a broader journey. It reflects a belief that protection should not sit on the sidelines. It should sit inside the interaction.
“Is this a scam?” may have been a common question in 2025. But tomorrow the more important question will be: Can we trust the AI systems helping us answer it?
The future of cybersecurity will be defined by how well we design trust into every layer of AI interaction.
And that future is being built now.