
Deployment Shape

Crawbl is designed for self-hosted deployment. This page covers infrastructure requirements and deployment models.

Deployment Models

Deploy Crawbl in your own infrastructure:

| Aspect | Details |
|---|---|
| Control | Full ownership of data and configuration |
| Security | Data never leaves your environment |
| Compliance | Meet regional data residency requirements |
| Cost | Predictable infrastructure costs |

Supported Platforms

| Platform | Status | Notes |
|---|---|---|
| Any managed Kubernetes | ✅ Supported | Reference platform |
| AWS EKS | ✅ Supported | Requires minor config changes |
| Google GKE | 🔜 Planned | Similar to EKS |
| Azure AKS | 🔜 Planned | Similar to EKS |
| On-Premises | ✅ Supported | Any Kubernetes cluster |

Infrastructure Requirements

Minimum Specifications

For development/testing:

| Component | Specification |
|---|---|
| Kubernetes | 1.28+ |
| Worker Nodes | 2 nodes, 4 CPU, 8GB RAM each |
| Storage | 50GB SSD per node |
| Database | PostgreSQL 15+, 2GB RAM |
| Redis | 1GB RAM |

Production Specifications

For production workloads:

| Component | Specification |
|---|---|
| Kubernetes | 1.28+ with HA control plane |
| Worker Nodes | 3+ nodes, 8 CPU, 16GB RAM each |
| Storage | 200GB SSD per node, with PV provisioning |
| Database | PostgreSQL 15+, 8GB RAM, with replication |
| Redis | 4GB RAM, with persistence |
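As a concrete starting point, the production sizing above might translate into container resource settings like the following. This is a sketch only: the Deployment name, image, port-free container spec, and the request/limit values are illustrative, not part of the Crawbl spec.

```yaml
# Hypothetical Deployment sized against the production baseline above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crawbl-api              # illustrative name
spec:
  replicas: 3                   # one replica per node at the 3+ node baseline
  selector:
    matchLabels:
      app: crawbl-api
  template:
    metadata:
      labels:
        app: crawbl-api
    spec:
      containers:
        - name: api
          image: crawbl/api:latest   # illustrative image reference
          resources:
            requests:                # leave headroom for system daemons
              cpu: "2"
              memory: 4Gi
            limits:
              cpu: "4"
              memory: 8Gi
```

Requests deliberately sit below the per-node capacity so the scheduler can co-locate components and the kubelet retains headroom for system daemons.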

Architecture Components

Infrastructure view of the Crawbl platform

Deployment Process

Bootstrap Sequence

Infrastructure resources used to bootstrap Crawbl

Required Secrets

Before deployment, configure:

| Secret | Purpose |
|---|---|
| `anthropic-api-key` | Claude access |
| `firebase-project-id` | Mobile authentication |
| `database-url` | PostgreSQL connection |
| `redis-url` | Redis connection |
| `oauth-providers` | OAuth app credentials |

Secrets are injected at runtime via the platform's secrets management layer. See the Security Model for details on how secrets are stored and rotated.
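In the simplest setup, the values above can be carried in a standard Kubernetes `Secret`. The manifest below is a sketch: the Secret name and all placeholder values are illustrative, and in practice a secrets-management operator may generate this object instead of it being committed anywhere.

```yaml
# Hypothetical Secret holding the required values; do not commit real values to Git.
apiVersion: v1
kind: Secret
metadata:
  name: crawbl-secrets          # illustrative name
type: Opaque
stringData:                     # stringData avoids manual base64 encoding
  anthropic-api-key: "<redacted>"
  firebase-project-id: "<redacted>"
  database-url: "<redacted>"    # PostgreSQL connection string
  redis-url: "<redacted>"       # Redis connection string
  oauth-providers: "<redacted>" # OAuth app credentials
```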

GitOps Workflow

Crawbl uses a GitOps continuous deployment model:

GitOps and CI/CD flow for Crawbl deployments

CI/CD Pipeline

  1. Code is pushed to the main branch
  2. CI builds container images
  3. CI updates image tags in the deployment repository
  4. GitOps controller auto-syncs changes to the cluster
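Assuming an Argo CD-style GitOps controller, the auto-sync step above might be declared with an `Application` resource like this. The repository URL, path, and namespaces are illustrative; the actual controller and repository layout are internal to the platform.

```yaml
# Hypothetical Argo CD Application wiring the deployment repo to the cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: crawbl
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/crawbl/deploy.git  # illustrative repo
    targetRevision: main
    path: overlays/production                       # illustrative path
  destination:
    server: https://kubernetes.default.svc
    namespace: crawbl
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift in the cluster
```

With `automated` sync enabled, step 4 needs no human action: the controller detects the updated image tags in Git and reconciles the cluster to match.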

Monitoring & Observability

Health Checks

All components expose:

  • /health - Liveness probe
  • /ready - Readiness probe
  • /metrics - Prometheus metrics
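These endpoints map directly onto Kubernetes probe configuration. A minimal sketch, assuming a component listens on port 8080 (the port and timings are illustrative):

```yaml
# Hypothetical probe config for a component container.
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10   # allow time for startup
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5          # gate traffic quickly on dependency failure
```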

Logging

  • Structured JSON logs
  • Correlation IDs for request tracing
  • Aggregated by the cluster logging stack

Metrics

  • Request latency (p50, p95, p99)
  • Error rates by endpoint
  • Resource utilization
  • LLM token usage
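If the cluster's Prometheus uses annotation-based discovery (an assumption; a `ServiceMonitor`-based setup would differ), a pod template might advertise the `/metrics` endpoint like this, with the port being illustrative:

```yaml
# Hypothetical pod annotations for Prometheus annotation-based scraping.
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "8080"
```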

Deep Dive

Detailed deployment guides and infrastructure references are available to platform operators.