System Overview

This page assumes you have already read the Getting Started overview.

It does not repeat the broad platform-shape diagrams. Instead, it zooms in on the layer-specific details that are more useful once you already know the high-level model.

Layer by Layer

Clients

[Diagram: Clients Layer Detail]

The top layer is everything users interact with: a Flutter mobile app and a REST/WebSocket API. A web dashboard and user-facing CLI are planned but not yet built.

The important idea is that all clients talk to the same backend. There is not one backend for the mobile app and a different backend for the API. If the backend learns a new capability, every client can use it.

Why this matters: when you add a new capability to the orchestrator, every client gets it for free. The platform has a single security boundary at the API layer.

Backend Coordination

[Diagram: Control Plane Layer Detail]

The middle layer is a Go service called the orchestrator. If you see the term "control plane" elsewhere in the docs, this is what it means: the central backend that coordinates the rest of the system.

It handles:

  • Auth and provisioning — Signing users up, creating workspaces, and spinning up their agent runtimes
  • Request routing — Taking a user message and forwarding it to the right agent pod
  • LLM mediation — Choosing which LLM provider to call (Claude, OpenAI, Gemini), enforcing cost limits, and tracking usage per user
  • Integration adapters — Managing OAuth connections to external apps like Gmail, Slack, and Asana
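The LLM-mediation step above can be sketched in a few lines of Go: check the user's spend against their limit, then pick a provider. The provider names come from this page; the `Usage` struct, function names, and fallback choice are invented for illustration and are not Crawbl's real API.

```go
package main

import (
	"errors"
	"fmt"
)

// Usage tracks per-user LLM spend against a cost limit (illustrative).
type Usage struct {
	SpentUSD float64
	LimitUSD float64
}

var providers = map[string]bool{"claude": true, "openai": true, "gemini": true}

// pickProvider enforces the cost limit before choosing a provider.
func pickProvider(u Usage, preferred string) (string, error) {
	if u.SpentUSD >= u.LimitUSD {
		return "", errors.New("cost limit reached")
	}
	if providers[preferred] {
		return preferred, nil
	}
	return "claude", nil // illustrative fallback, not a documented default
}

func main() {
	p, err := pickProvider(Usage{SpentUSD: 1.2, LimitUSD: 5}, "gemini")
	fmt.Println(p, err)

	_, err = pickProvider(Usage{SpentUSD: 5, LimitUSD: 5}, "claude")
	fmt.Println(err)
}
```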

The orchestrator talks to agent runtimes over internal HTTP calls and never exposes them to the public internet. This is a deliberate security boundary: public traffic ends at Envoy, and the orchestrator finds the target runtime over cluster-internal networking rather than exposing ZeroClaw pods directly.
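As a rough sketch of that internal hop, the orchestrator can address a runtime by a cluster-internal Service DNS name and forward the user's message over plain HTTP. The DNS suffix, port, path, and function names below are assumptions for illustration; the real cluster layout may differ.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// runtimeURL resolves a user's agent pod via a cluster-internal Service
// name (assumed naming scheme). This address is never routable from the
// public internet; external traffic stops at Envoy.
func runtimeURL(userID string) string {
	return fmt.Sprintf("http://zeroclaw-%s.agents.svc.cluster.local:8080/message", userID)
}

// forwardMessage builds the internal request for a user message. In the
// orchestrator it would be executed with an http.Client inside the cluster.
func forwardMessage(userID string, body []byte) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPost, runtimeURL(userID), bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := forwardMessage("u42", []byte(`{"text":"hi"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
}
```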

Runtime Layer

[Diagram: Runtime Layer Detail]

The bottom layer is where AI agents actually run. Each user gets an isolated ZeroClaw container. In practice, that means one small runtime per user, with its own memory and storage, instead of one huge shared agent process for everyone.

Supporting infrastructure includes:

  • PostgreSQL for platform-wide data (users, workspaces, conversations)
  • Redis for real-time event fan-out (Socket.IO pub/sub, typing indicators, new messages)
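The fan-out role Redis plays can be shown with a minimal in-process stand-in: subscribers register on a topic, and publishing an event delivers it to all of them. This is only a sketch of the publish/subscribe pattern; the real system uses Redis precisely so fan-out works across multiple backend instances, and the `Broker` type here is invented.

```go
package main

import (
	"fmt"
	"sync"
)

// Broker is an in-process stand-in for Redis pub/sub (illustrative only).
type Broker struct {
	mu   sync.Mutex
	subs map[string][]chan string // topic -> subscriber channels
}

func NewBroker() *Broker {
	return &Broker{subs: make(map[string][]chan string)}
}

// Subscribe registers a listener on a topic, e.g. a workspace channel.
func (b *Broker) Subscribe(topic string) <-chan string {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan string, 8) // buffered so Publish never blocks here
	b.subs[topic] = append(b.subs[topic], ch)
	return ch
}

// Publish fans an event out to every subscriber and reports how many.
func (b *Broker) Publish(topic, event string) int {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, ch := range b.subs[topic] {
		ch <- event
	}
	return len(b.subs[topic])
}

func main() {
	b := NewBroker()
	a := b.Subscribe("workspace:w1")
	c := b.Subscribe("workspace:w1")
	n := b.Publish("workspace:w1", "typing:u42")
	fmt.Println(n, <-a, <-c) // both subscribers receive the event
}
```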

The runtime layer scales independently of the backend coordination layer. You can run many ZeroClaw pods on a modest cluster because each one uses minimal resources.

Why This Separation Matters

This three-layer design gives Crawbl several properties that would be hard to achieve otherwise:

1. Security isolation

Agent pods never talk to the internet directly. The orchestrator mediates everything, so you can audit and control all external access in one place.

2. Independent scaling

The control plane and runtime layer scale separately. A spike in active agents does not require more orchestrator capacity.

3. Client flexibility

New interfaces, like a Slack bot or a VS Code extension, just need to talk to the orchestrator API.

4. Cloud portability

The runtime layer is pure Kubernetes. Swapping cloud providers means changing the cluster configuration, not rewriting agent logic.

What's Next

The codebase follows this layered structure closely, with each layer in its own Go package.