The Security Question Every Self-Hoster Asks

When people first hear about RunLobster — managed cloud hosting for OpenClaw, the open-source AI agent with 145,000+ GitHub stars — the most common pushback is: "But I don't want my agent's data in the cloud."

It's a fair concern. Your OpenClaw instance knows a lot: your integrations, your API keys, your Notion workspace, your email patterns, your calendar. If that data were pooled in a shared database or accessible to other users, that would be a serious problem.

RunLobster's architecture was built specifically to address this. Here's exactly how it works.

Per-User Compute Isolation

The most important architectural decision RunLobster made is giving every user their own isolated compute container. Your OpenClaw instance does not share a process, a filesystem, or a network namespace with any other user's instance.

This matters because OpenClaw, by design, has access to your tools and integrations. An agent that can write to your Notion, send messages on your Slack, and read your calendar needs to be completely isolated from other agents — not just at the data layer, but at the execution layer.

RunLobster achieves this through container-level isolation with the following guarantees:

  • Each instance runs in its own namespace with no shared memory
  • Network egress is controlled — your agent can only reach the integration endpoints you've explicitly enabled
  • The filesystem is ephemeral and user-scoped; no cross-user filesystem access is possible
  • Resource limits (CPU, RAM, network) are enforced per container to prevent noisy-neighbor issues
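The guarantees above can be sketched as a per-instance spec. The `InstanceSpec` shape, the `spec_for_user` helper, and the specific hostnames and limits below are illustrative assumptions, not RunLobster's actual configuration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InstanceSpec:
    """Hypothetical per-user container spec, for illustration only."""
    user_id: str
    cpu_limit: float         # cores
    memory_limit_mb: int
    egress_allowlist: tuple  # only explicitly enabled integration endpoints

def spec_for_user(user_id: str, enabled_integrations: list) -> InstanceSpec:
    # Example integration API hosts; egress to anything else is blocked.
    endpoints = {
        "slack": "slack.com",
        "notion": "api.notion.com",
        "telegram": "api.telegram.org",
    }
    allowlist = tuple(endpoints[i] for i in enabled_integrations if i in endpoints)
    return InstanceSpec(user_id=user_id, cpu_limit=2.0,
                        memory_limit_mb=2048, egress_allowlist=allowlist)

spec = spec_for_user("user-123", ["slack", "notion"])
# Agent traffic to any host outside spec.egress_allowlist is dropped.
```

The key property is that the allowlist is derived only from integrations the user explicitly enabled — there is no shared default route out of the container.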

Encryption at Rest

All persistent data associated with your OpenClaw instance — including agent memory, conversation history, integration configurations, and stored context — is encrypted at rest using AES-256.

The encryption key hierarchy works as follows:

Master Key (HSM-stored, rotated quarterly)
  └── Account Key (derived per user account)
        └── Data Key (derived per data category)
              ├── Memory store
              ├── Integration credentials
              └── Conversation history

Your account key is derived from your account identity and is never stored in plaintext. This means that even a full database backup would contain only ciphertext — useless without the corresponding key material.
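A hierarchy like this is typically built with HKDF-style key derivation. The sketch below uses a single HMAC-SHA256 step per level to show the shape; the actual derivation scheme, context labels, and key storage are assumptions here — in particular, the real master key never leaves the HSM:

```python
import hashlib
import hmac
import os

def derive(parent_key: bytes, context: bytes) -> bytes:
    # HKDF-style one-step expansion: child = HMAC-SHA256(parent, context).
    # Same parent + same context always yields the same child key.
    return hmac.new(parent_key, context, hashlib.sha256).digest()

master_key = os.urandom(32)  # stand-in: the real key lives in the HSM
account_key = derive(master_key, b"account:user-123")
memory_key = derive(account_key, b"category:memory-store")
creds_key = derive(account_key, b"category:integration-credentials")
```

Because each child key is derived rather than stored, revoking an account key (or rotating the master key) invalidates every key beneath it without touching the ciphertext.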

API Key Handling

This is where most managed platforms cut corners, and it's worth being explicit about RunLobster's approach.

When you add an integration — say, your OpenAI API key, your Notion token, or your Telegram bot token — RunLobster stores it in an encrypted secrets vault that is separate from the main application database. The secrets vault:

  • Is never logged in plaintext (not in application logs, not in access logs, not in error traces)
  • Is only decrypted in-memory at the moment your OpenClaw instance needs to make an API call
  • Is inaccessible to RunLobster support staff via normal tooling — access requires an explicit audit-logged override
  • Is backed up in encrypted form only — decryption keys are not included in backups

Your API keys are treated as secrets, not as configuration. There is no UI that shows you your raw key after initial entry — by design.
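The decrypt-in-memory guarantee amounts to a borrow-and-wipe pattern: plaintext exists only for the duration of the call, then is zeroed. In this sketch, `vault_decrypt` is a placeholder stub, not a real vault API:

```python
from contextlib import contextmanager

def vault_decrypt(ciphertext: bytes) -> bytearray:
    # Stub: a real vault would decrypt with the account's data key.
    # Reversing the bytes here just stands in for decryption.
    return bytearray(ciphertext[::-1])

@contextmanager
def borrowed_secret(ciphertext: bytes):
    plaintext = vault_decrypt(ciphertext)
    try:
        yield bytes(plaintext)
    finally:
        # Best-effort wipe so the secret doesn't linger in memory.
        for i in range(len(plaintext)):
            plaintext[i] = 0

with borrowed_secret(b"tercesym") as api_key:
    pass  # make the API call here; the key is never logged or persisted
```

The same discipline is what keeps secrets out of logs and error traces: nothing outside the `with` block ever holds the plaintext.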

Encryption in Transit

All communication to and from your RunLobster-hosted OpenClaw instance is encrypted in transit:

  • TLS 1.3 for all web dashboard and API traffic
  • mTLS for internal service-to-service communication within RunLobster's infrastructure
  • Certificate pinning on the RunLobster mobile and desktop clients
  • HSTS headers enforced on all endpoints to prevent protocol downgrade attacks

When your OpenClaw instance makes outbound calls to your integrations (e.g., posting to Slack, writing to Notion), those calls go out over TLS to the respective integration's API. RunLobster does not terminate or inspect outbound integration traffic — it passes through from your isolated container directly.
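In Python's standard `ssl` module, the client-side equivalent of the TLS 1.3 guarantee looks like this — a minimal sketch, since RunLobster's actual TLS handling is assumed to live at its infrastructure edge rather than in application code:

```python
import ssl

# Client context that refuses anything older than TLS 1.3,
# mirroring the transport guarantees described above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Certificate verification and hostname checking stay on (the defaults),
# so downgrade and man-in-the-middle attempts fail the handshake.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname
```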

What "Bring Your Own API Keys" Means for Security

RunLobster operates on a bring-your-own-API-keys (BYOK) model. This means RunLobster itself never holds a shared OpenAI key or a shared Anthropic key that your agent uses. Every user brings their own keys.

The security implication: RunLobster has no visibility into the content of your agent's LLM calls. The request goes from your isolated container, encrypted in transit, directly to OpenAI/Anthropic/your chosen provider. RunLobster sees only metadata (call timestamp, duration, token count for billing) — not the prompt content or response.

This is a deliberate architectural choice that limits RunLobster's data exposure surface significantly.
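The metadata-only claim can be made concrete as a record type that has no content fields at all. `LLMCallRecord` and `record_call` are hypothetical names for illustration, not RunLobster's actual schema:

```python
import time
from dataclasses import dataclass

@dataclass
class LLMCallRecord:
    """What the platform records per LLM call: metadata only, no content."""
    timestamp: float
    duration_ms: int
    token_count: int  # for billing
    # Deliberately no fields for prompt or response text.

def record_call(duration_ms: int, token_count: int) -> LLMCallRecord:
    return LLMCallRecord(time.time(), duration_ms, token_count)
```

Enforcing the limit at the schema level means there is nowhere for prompt content to land even accidentally.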

Daily Backups and Data Retention

RunLobster performs daily encrypted backups of your agent's state — memory, configuration, and integration settings. Backup retention is 30 days on all plans.

Backups are stored in a geographically separate region from your primary instance. The encryption keys used for backups are stored separately from the backup data, in a different cloud provider — so a compromise of the backup storage does not yield readable data.

On account deletion, RunLobster's data retention policy is:

Immediate:   Container destroyed, secrets vault entry deleted
24 hours:    Primary database records purged
7 days:      Backup snapshots rotated out
30 days:     All audit logs anonymized
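The schedule above can be expressed as a deletion timeline computed from the moment of account deletion. The step list mirrors the published policy; the helper name is an assumption:

```python
from datetime import datetime, timedelta

# Retention steps from the published deletion policy.
RETENTION_STEPS = [
    (timedelta(0), "container destroyed, secrets vault entry deleted"),
    (timedelta(hours=24), "primary database records purged"),
    (timedelta(days=7), "backup snapshots rotated out"),
    (timedelta(days=30), "audit logs anonymized"),
]

def deletion_schedule(deleted_at: datetime):
    """Return (deadline, action) pairs for a given deletion time."""
    return [(deleted_at + offset, action) for offset, action in RETENTION_STEPS]
```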

OpenClaw's Open-Source Advantage

One underappreciated security benefit of building on OpenClaw is that the agent's core logic is open source and auditable. With 145,000+ GitHub stars and an active security research community, OpenClaw's codebase receives more scrutiny than most proprietary AI agent frameworks.

RunLobster tracks OpenClaw's upstream releases and applies security patches within 24 hours of disclosure. You don't have to monitor CVEs or manage upgrade windows — that's part of what managed hosting means.

What RunLobster Cannot See

To be direct: RunLobster's infrastructure team can see that your container is running and consuming resources. They can see aggregate metadata (API call counts, integration types enabled). They cannot see:

  • The content of your agent's conversations or memory
  • Your raw API keys or integration tokens
  • The content of your LLM prompts or responses
  • Data written to your external integrations (Notion, Slack, etc.)

Conclusion

Running OpenClaw on RunLobster is not the same as putting your data in a shared database. The architecture — per-user isolated compute, AES-256 encryption at rest, TLS 1.3 in transit, BYOK for LLM access, and a separate encrypted secrets vault — is designed to give you the operational simplicity of managed hosting without compromising on data isolation.

If you've been hesitating to move your OpenClaw instance to the cloud for security reasons, the architecture described here should address the core concerns. And if you have more detailed questions, RunLobster's security documentation is available at runlobster.com/security.