Research

OpenClaw: Handing AI the keys to your digital life

What autonomous AI agents can do and why they introduce a new class of security risk
Luis Corrons's photo
Luis Corrons
Security Evangelist at Gen
Published
February 2, 2026
Read time
11 Minutes
OpenClaw: Handing AI the keys to your digital life
Written by
Luis Corrons
Security Evangelist at Gen
Published
February 2, 2026
Read time
11 Minutes
OpenClaw: Handing AI the keys to your digital life
    Share this article

    AI assistants are no longer just advising users. They are starting to act. Platforms like OpenClaw make it easy for anyone to deploy autonomous AI agents that can operate across inboxes, browsers, files and workflows.

    The real risk is not artificial general intelligence. It’s autonomy plus permissions. 

    When AI can read messages, click links, approve workflows and act across accounts, a single compromise stops being “one bad click” and becomes a compromised process.

    Recent security incidents and early research show these risks are no longer theoretical. As agent ecosystems scale, vulnerabilities, misconfigurations and unintended interactions become more likely. 

    Experiments like Moltbook, where AI agents interact publicly, highlight another risk: autonomy combined with fluent language can create the illusion of intention and judgment, encouraging users to over-delegate trust. 

    This post explains what OpenClaw is, why AI agents are going viral, the security and privacy risks they introduce and how to use them safely before convenience turns into exposure.

    From concept to reality 

    Imagine having a digital personal assistant that doesn’t just chat, but actually gets things done. That’s the promise of OpenClaw, an open-source AI assistant that you run on your own computer or server. Unlike Siri or Alexa, OpenClaw runs locally. It lives on your device, connects to your favorite messaging apps and can automate tasks across your digital life. This combination has made it wildly popular and deeply controversial.

    What is OpenClaw?

    OpenClaw is a self-hosted personal AI assistant designed to run 24/7 on your machine. You interact with it through apps like iMessage, WhatsApp, Telegram, Signal or Discord, as if you were texting a helpful friend. Under the hood, OpenClaw connects large language models (like OpenAI’s or Anthropic’s AI) with the tools that let it act. It can remember past conversations (persistent memory), send proactive messages, browse the web, manage files and execute commands. Unlike traditional chatbots, it’s built to take initiative. With the right permissions, OpenClaw can manage calendars, triage email, research online, run code and automate workflows using community-built “skills”. Because it runs locally, user data stays on the device rather than being routed through central cloud services. You may also see the names Clawdbot or Moltbot. These were earlier names for the same project before it was rebranded to OpenClaw.Why Is OpenClaw Going Viral?

    In the span of days, OpenClaw skyrocketed from an obscure project to a tech phenomenon. Its rapid rise is driven by a few key factors:

    It’s autonomous and proactive.

    Unlike traditional assistants, OpenClaw doesn’t just respond. It remembers context, reaches out unprompted and chains tasks together. For many, this feels like a leap from chatbot to coworker.

    It actually does things.

    OpenClaw’s tagline is “the AI that actually does things”. With sufficient access, it can book travel, manage files, place orders or even call businesses using voice tools. One viral example showed it calling a restaurant to make a reservationwhen online booking failed. Impressive, but also unsettling.  

    It blends seamlessly into daily life.

    Because OpenClaw lives inside familiar messaging apps, it feels less like software and more like a persistent presence. Users control complex workflows just by texting. 

    It offers ownership and control.

    As an open-source, self-hosted tool, OpenClaw appeals to users who want AI infrastructure they control, not a black box run by a large company.

    It’s highly customizable.

    A fast-growing community is constantly adding new skills and integrations. Users aren’t just adopting OpenClaw, they’re co-building it.

    Together, these factors have led some to call OpenClaw the biggest shift in personal AI since ChatGPT.

    Why this is a tipping point

    OpenClaw makes autonomous agents accessible to anyone, and that speed matters.

    Recent research found more than 1,800 exposed OpenClaw instances leaking credentials and the number has risen to over 10,000 at time of writing, with thousands of instances added each day. The analysis of agent ecosystems suggests that around a quarter of autonomous-agent skills contain security weaknesses. At scale, these are not rare mistakes; they are expected failure modes.

    This marks a shift in cybersecurity. When actions are delegated to software that can operate independently, a single breach stops being an error and becomes a compromised process.

    This is a fundamental shift in how risk works.

    The core risk: Autonomy without accountability

    OpenClaw is not dangerous because AI is becoming sentient. It’s dangerous because fluent language plus autonomy can create the illusion of judgment. 

    This is what Gen calls Artificial Mindless Intelligence (AMI): systems that sound confident and intentional but lack understanding, grounding and accountability. 

    A chatbot that hallucinates is annoying. An autonomous agent that hallucinates authority can cause real harm. 

    From a security perspective, there are several areas where users should be especially careful.

    • Expanded attack surface: Running an AI agent that can read messages, browse the web and execute commands increases the number of ways something can go wrong. Instead of attacking the user directly, an attacker could target the OpenClaw assistant itself. Misconfigurations, exposed interfaces or weak authentication can effectively turn a personal AI assistant into an unintended remote access point.
    • Prompt injection and malicious instructions: Because OpenClaw processes untrusted text from emails, websites, and messages, it can be tricked into following hidden instructions embedded in that content (called prompt injection). Researchers have shown that carefully crafted prompts can hijack the agent into taking destructive actions, such as deleting files or running unsafe commands. Unlike traditional chatbots, prompt injection here isn’t just misleading; it can directly cause real-world damage.
    • Plugins and skills as a new risk layer: OpenClaw’s extensibility is also one of its biggest risks. Community-built skills are not centrally vetted and malicious plugins have already been discovered. A single untrusted skill can quietly exfiltrate data, steal credentials or enroll a system into a botnet. Installing a skill from an unknown source can be equivalent to installing malware.
    • “Root” level access and system-wide impact: Running OpenClaw with elevated permissions effectively gives an AI control over your system. That creates “unknown unknowns”: bugs, misinterpretations or compromises can have system-wide consequences. If an attacker gains control of the agent, they gain the same power as the user — access to files, credentials, keystrokes and downloads. As some have put it, this is like hiring a brilliant assistant who also has the keys to your house.
    • The Moltbook Effect: illusion of intention at scale: Projects like Moltbook, a Reddit-style social feed where AI agents post, comment and respond to one another, show how autonomous agents interacting with each other can quickly appear coordinated, intentional or self-directed. In reality, these systems are performing pattern completion, not reasoning. The risk is not that agents become sentient, but that fluent language and autonomy create the illusion of judgment. When systems feel intentional, users are more likely to over-trust them, grant broader permissions, and delegate decisions they shouldn’t.

    These risks are no longer theoretical. Instances of exposed configurations, prompt-based data leaks, and rogue plugins have already been observed in the wild. OpenClaw can be powerful but it must be treated like any high-risk system: deployed carefully, isolated where possible and guarded with strong controls.

    How to use OpenClaw safely 

    OpenClaw can be used safely, but it requires you to take some precautions and be a responsible “operator”. 

    • Run it in isolation: Don’t just install OpenClaw on your primary laptop with all your personal data. Consider running it in an isolated environment. For example, set it up on a spare computer or in a virtual machine (VM), or use Docker containers. This way, even if something goes wrong, the damage is contained to that sandbox.
    • Don’t expose it directly to the internet: By default, OpenClaw runs a local service (for its interface and API). Make sure that it isn’t accessible to the whole world. Avoid forwarding OpenClaw’s port (18789) openly online. If you want to access it remotely, use secure methods such as a mesh VPN or an authenticated tunnel (for instance, Tailscale or Cloudflare Tunnel), so the service is reachable only through a controlled path.
    • Start with limited permissions and expand slowly: You don’t have to give OpenClaw the master keys on day one. Start small. Maybe let it read your calendar but not your entire documents folder. Or allow it to suggest shell commands but not run them without approval. OpenClaw allows configuring which tools it can use. It’s wise to begin in a “read-only” mode and then gradually grant more permissions as you gain trust.
    • Be careful what accounts you connect: OpenClaw can integrate with lots of your online accounts (email, social media, banking, etc.) but think twice about each connection. The safest route is to avoid linking it to your primary personal accounts until you’re confident in its security. For testing, you might use a secondary email or a dummy account. If something requires access to sensitive accounts (like your main Google account), ensure you’ve followed the above isolation steps first.
    • Keep humans in the loop: OpenClaw is autonomous, but you don’t want it running completely unsupervised, especially at the start. Monitor its activity logs and outputs. If it’s set to perform tasks like sending emails or posting messages, review those actions initially. You can ask OpenClaw to explain why it wants to do something if you’re unsure. By regularly checking in (the OpenClaw community suggests reviewing OpenClaw doctor diagnostics often), you can catch odd behavior early.
    • Use safe channels only: One simple but important tip — don’t add OpenClaw to any group chat with strangers or untrusted people. Keep its conversations one-on-one with you or in groups of people you trust. Why? Because if random users can talk to your OpenClaw, they might try those prompt injection tricks or give it malicious commands. OpenClaw has a feature called DM pairing, which means it won’t obey unknown senders without approval.
    • Be cautious with third-party skills and plugins: Just like you’d be cautious installing unknown apps on your phone, scrutinize any OpenClaw “skill” before installing it. Stick to plugins from the official marketplace or well-known developers and read the code or community feedback if you can. By following these best practices, you can dramatically reduce the risks of using OpenClaw.

    A bright future, if used wisely

    OpenClaw represents an exciting leap forward in personal AI assistants. It’s easy to see the appeal: a tireless helper that can coordinate our digital lives, all under our own roof (or in our own device). The convenience and “wow” factor of telling OpenClaw to handle something and watching it actually get done is driving its fast adoption. We are witnessing the early days of what could be a new normal, where many of us have AI agents working alongside us, personalized to our lives.

    At the same time, OpenClaw is a powerful double-edged sword. In its current early form, using it means accepting some responsibility for security and privacy risks that never existed with simpler tools. The phrase “with great power comes great responsibility” definitely applies here. In the coming months and years, we can expect these AI assistants to get more user-friendly and secure. Guardrails will improve and best practices will become standard. For now, if you’re an adventurous consumer eager to try the future of AI automation, OpenClaw offers a thrilling glimpse; just be sure to follow safety tips and proceed with a strong level of caution.  

    The future of AI assistants will be shaped not just by what they can do, but by how carefully we decide to trust them.

    Luis Corrons
    Security Evangelist at Gen
    At Gen, Luis tracks evolving threats and trends, turning research into actionable safety advice. He has worked in cybersecurity since 1999. He chairs the AMTSO Board and serves on the Board of MUTE.
    Follow us for more