ZeroClaw Overview

ZeroClaw is the agent runtime that powers every Crawbl user's AI swarm.

ZeroClaw is the small program that actually does the agent work for one user. The rest of the platform decides what should happen around it, but ZeroClaw is the part that runs prompts, tools, and memory for that user's agents.

It is a ~5 MB Rust binary that runs inside a Kubernetes pod, isolated to a single user. Each ZeroClaw instance can call LLMs, execute tools, maintain persistent memory, and handle multiple agent roles.

Why ZeroClaw?

The choice of ZeroClaw comes down to density.

If each user's runtime is tiny, Crawbl can afford to give every user their own isolated runtime instead of forcing everyone through one large shared process.

Running thousands of users means running thousands of agent containers. A Python-based agent framework might consume 200-500 MB per instance. ZeroClaw uses roughly 5 MB of RAM, which means you can pack hundreds of agents onto a single Kubernetes node.
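
The density claim can be sanity-checked with simple arithmetic. The node size below is an illustrative assumption; the per-instance memory figures come from the paragraph above:

```python
# Rough capacity math for one Kubernetes node, using the figures above.
# The 16 GB node size is a hypothetical assumption for illustration.
node_memory_mb = 16 * 1024     # hypothetical 16 GB node
python_agent_mb = 350          # midpoint of the 200-500 MB range
zeroclaw_mb = 30               # ZeroClaw's upper bound under load

python_agents_per_node = node_memory_mb // python_agent_mb
zeroclaw_agents_per_node = node_memory_mb // zeroclaw_mb

print(python_agents_per_node)    # 46
print(zeroclaw_agents_per_node)  # 546
```

Even using ZeroClaw's loaded footprint rather than its idle one, the density advantage is roughly an order of magnitude.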

ZeroClaw is a Crawbl fork of an upstream open-source runtime. The fork adds native delegate agents, orchestration-specific integration hooks, and runtime behavior tailored to Crawbl's deployment model.

Crawbl Fork

The Crawbl fork of ZeroClaw is maintained as a private fork. The goal is to stay close to upstream while carrying the minimum runtime changes needed for Crawbl's deployment model.

  • upstream remote is kept for regular syncs
  • tags follow a versioned convention for tracking upstream compatibility
  • changes are intended to stay additive and backward compatible

Native Agent Model

The main Crawbl-specific runtime change is native multi-agent support inside a single user pod.

ZeroClaw reads real delegate-agent definitions directly from its runtime configuration.

Manager

Every runtime has a base Manager agent.

The Manager is defined by the shared runtime identity files:

  • SOUL.md
  • IDENTITY.md
  • TOOLS.md

It handles the default path when the orchestrator sends a message without agent_id. It is also the coordination layer that can delegate work to specialists via ZeroClaw's native delegation tool.

Delegate Agents

The operator generates real delegate-agent sections in the runtime configuration. Each delegate agent definition includes:

  • the delegate agent slug
  • provider and model fallback
  • system prompt
  • tool restrictions
  • skill directory

This means the runtime itself owns the full identity and capabilities of each delegate agent.
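
A delegate-agent section covering those fields might look like the following sketch. The file layout, key names, and values are illustrative assumptions, not the actual ZeroClaw schema:

```toml
# Hypothetical delegate-agent section; key names are illustrative.
[[agents.delegates]]
slug = "wally"
provider = "openai"                      # primary provider
model = "gpt-4o"
model_fallback = "gpt-4o-mini"           # used if the primary model fails
system_prompt = "You are wally, a research specialist."
allowed_tools = ["web_search_tool", "web_fetch", "memory_recall"]
skills_directory = "workspace/agents/wally"
```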

Default Runtime Shape

The default shape is:

  • one Manager base agent
  • one default delegate agent, wally
  • room for more delegate agents from operator config over time

The orchestrator stores workspace agents in the database and chooses who should answer. Once it selects an agent slug, ZeroClaw already knows what that agent is.

Per-Agent Memory And Skills

Each agent in a swarm gets isolated storage for sessions and long-lived memory. The durable volume survives pod restarts, so agent conversation history and memory persist across runtime restarts.

Backend Coordination

The backend provides the coordination layer for multi-agent operation:

  • workspace-visible agents and their slugs are tracked in the database
  • default agent blueprints seed the workspace view
  • message routing uses agent_id
  • the base Manager handles swarm-level turns when no specific responder is selected

The important split is:

  • the orchestrator decides which agent should answer
  • ZeroClaw owns what that agent is through native config and skill files

Default Tools

ZeroClaw ships with built-in tools that run locally inside the pod, with no external API keys required:

Tool                          What It Does
web_search_tool               Searches the web via DuckDuckGo
web_fetch                     Fetches a URL and converts HTML to readable text
file_read / file_write        Reads and writes files in the user's workspace
memory_store / memory_recall  Stores and retrieves persistent per-user memory
shell                         Runs allowed terminal commands inside the pod

Tools listed in the auto-approve configuration run without user confirmation. Everything else requires explicit approval from the user via the mobile app.
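
The approval rule reduces to a simple gate. Which tools are auto-approved is operator configuration; the list and function below are illustrative assumptions:

```python
# Sketch of the tool-approval gate described above. The auto-approve
# set is an illustrative assumption, not the actual configuration.
AUTO_APPROVE = {"web_search_tool", "web_fetch", "memory_recall"}

def needs_user_approval(tool_name: str) -> bool:
    """True if the call must be confirmed by the user via the mobile app."""
    return tool_name not in AUTO_APPROVE

print(needs_user_approval("web_search_tool"))  # False
print(needs_user_approval("shell"))            # True
```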

MCP Tools

The orchestrator embeds an MCP (Model Context Protocol) server. ZeroClaw agent runtimes connect as MCP clients to access platform capabilities without holding credentials.

If MCP is new to you, the short version is: it is the contract ZeroClaw uses to ask the backend for platform-owned capabilities.

Tool                    Description                              Input
send_push_notification  Send push notification to user's phone   title, message
get_user_profile        User name, email, preferences            (none)
get_workspace_info      Workspace details + agent list           (none)
list_conversations      All conversations in workspace           (none)
search_past_messages    Keyword search in conversation           conversation_id, query, limit
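
MCP requests are JSON-RPC 2.0 messages using the standard `tools/call` method. A sketch of what a `search_past_messages` request body could look like, with argument names taken from the Input column above and illustrative values:

```python
import json

# Sketch of an MCP tools/call request body for search_past_messages.
# The JSON-RPC envelope follows the MCP spec; values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_past_messages",
        "arguments": {
            "conversation_id": "conv-123",   # illustrative value
            "query": "deployment schedule",
            "limit": 10,
        },
    },
}

print(json.dumps(request, indent=2))
```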

MCP Configuration

Each runtime receives a generated MCP configuration with the orchestrator URL and scoped credentials at provisioning time.

Key configuration decisions:

  • Eager tool loading — Tools are loaded at startup rather than deferred, because LLMs do not reliably activate deferred tool stubs.
  • Streamable HTTP transport — Uses ZeroClaw's HTTP transport for communication.
  • Scoped authentication — Each pod gets a unique token encoding the user ID and workspace ID, authenticated with HMAC-scoped tokens.
  • All MCP tools are auto-approved because they are already scoped to the authenticated user.
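
A minimal sketch of what an HMAC-scoped token could look like. The token layout, field separator, and key handling are assumptions; the real scheme may differ:

```python
import hashlib
import hmac

# Sketch: a token binding user ID and workspace ID with an HMAC tag, so
# the orchestrator can verify scope without a database lookup.
# The "user:workspace:tag" layout is an illustrative assumption.
SECRET = b"orchestrator-signing-key"  # illustrative; never hardcode in practice

def mint_token(user_id: str, workspace_id: str) -> str:
    payload = f"{user_id}:{workspace_id}".encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{user_id}:{workspace_id}:{tag}"

def verify_token(token: str) -> bool:
    user_id, workspace_id, tag = token.rsplit(":", 2)
    expected = mint_token(user_id, workspace_id).rsplit(":", 1)[1]
    return hmac.compare_digest(tag, expected)

token = mint_token("user-42", "ws-7")
print(verify_token(token))  # True
```

Because the user and workspace IDs are baked into the signed payload, a token stolen from one pod cannot be replayed for another user's scope.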

Audit Logging

Every MCP tool call is recorded in an audit log for compliance and debugging:

  • User and workspace context
  • Tool name, input, and output
  • Success or failure status, error details, and duration

System Prompt Assembly

ZeroClaw does not use a single static system prompt.

In plain language, the final instructions the model sees are assembled from shared runtime files plus native agent config. The normal chat path uses the runtime's built-in agent definitions instead of reconstructing agent identity on each request.

It assembles the final prompt from these layers, in order:

Step 1: Load personality files

SOUL.md, IDENTITY.md, and TOOLS.md are mounted read-only from a Kubernetes ConfigMap. They define the Manager's personality, identity, and tool guidance.

Step 2: Inject tool definitions

ZeroClaw's built-in tool registry is added by the prompt builder so the model knows what tools exist and how to call them.

Step 3: Apply safety rules

The runtime configuration adds forbidden paths, allowed shell commands, autonomy rules, provider defaults, MCP settings, and native agent sections.

Step 4: Load delegate-agent skill files

The init container expands flat ConfigMap entries into PVC-backed directories like workspace/agents/wally/. ZeroClaw then loads those via each agent's skills_directory.

Step 5: Select the active agent

If the orchestrator sends agent_id, ZeroClaw activates the matching native delegate agent. If not, the Manager handles the turn.

This layered approach means agent identity lives with the runtime itself. The orchestrator routes traffic, but it does not have to recreate the delegate agent definition every turn.
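
The layering can be sketched as straightforward concatenation. The file names come from the steps above; the assembly function itself is an illustrative assumption (the real runtime does this natively in Rust), and only the ordering matters:

```python
# Sketch of the layered prompt assembly described above.
def assemble_system_prompt(
    personality_files: dict[str, str],  # SOUL.md, IDENTITY.md, TOOLS.md
    tool_definitions: str,              # built-in tool registry
    safety_rules: str,                  # forbidden paths, allowed commands, ...
    agent_skills: str,                  # active agent's skill files, if any
) -> str:
    layers = [
        personality_files.get("SOUL.md", ""),
        personality_files.get("IDENTITY.md", ""),
        personality_files.get("TOOLS.md", ""),
        tool_definitions,
        safety_rules,
        agent_skills,
    ]
    # Empty layers (e.g. no delegate skills on a Manager turn) are dropped.
    return "\n\n".join(part for part in layers if part)

prompt = assemble_system_prompt(
    {"SOUL.md": "Be helpful.", "IDENTITY.md": "You are the Manager."},
    "Tools: web_search_tool, web_fetch",
    "Never write outside the workspace.",
    "",  # Manager turn: no delegate skills loaded
)
print(prompt.startswith("Be helpful."))  # True
```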

Secret Delivery

ZeroClaw pods need API keys like OPENAI_API_KEY to call LLM providers. Those secrets follow this pipeline:

Step 1: Store the secret in a secrets manager

Provider credentials are stored in an external secrets manager, separate from the cluster.

Step 2: Sync it into Kubernetes

An operator syncs secrets from the external store into Kubernetes secrets automatically.

Step 3: Deliver it to the pod

The UserSwarm operator wires the secret into the pod as environment variables.

Step 4: Read it at startup

ZeroClaw reads the environment variable when the runtime starts.

At no point does a secret appear in Git, in a ConfigMap, or in the container image.
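
The final step is a plain environment read. A fail-fast sketch (the helper function is illustrative; only the environment-variable delivery is from the pipeline above):

```python
import os

# Sketch of the startup read: secrets arrive only as environment
# variables, so the runtime should fail fast if one is missing.
def require_secret(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required secret: {name}")
    return value

os.environ["OPENAI_API_KEY"] = "sk-test"  # illustrative; set by the operator
print(require_secret("OPENAI_API_KEY") == "sk-test")  # True
```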

Resource Footprint

Metric             Typical Value
Binary size        ~5 MB
Memory at idle     ~5 MB
Memory under load  ~15-30 MB (depends on tool execution)
Startup time       < 2 seconds

This minimal footprint, combined with KEDA-driven autoscaling and Kubernetes bin packing, is what allows the platform to run thousands of agents on a modest cluster.

Image Build Pipeline

Step 1: Trigger the build

CI runs on version tag pushes or manual dispatch.

Step 2: Publish the image

The built image is pushed to the platform container registry.

Step 3: Update the GitOps repository

CI updates the GitOps configuration repository with the new image digest.

Step 4: Let ArgoCD roll it out

ArgoCD picks up that Git change and deploys the new runtime image into the cluster.

What's Next

The orchestrator communicates with ZeroClaw over a versioned webhook API.