Run the Backend

This page gets you to a working local backend as fast as possible.

In plain language, "run the backend" means: start the API and its local dependencies on your machine, hit a health check, and confirm tests can run before you touch cluster infrastructure.

Quickest Path: make setup

Step 1: Bootstrap the repo-local CLI

From crawbl-backend, run make setup. This builds the repo-managed launcher, installs hooks, and checks your machine before you start anything.

Step 2: Load your environment

Source .env so the local commands can see the values they need.
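If you want every value in .env exported to child processes as well, a small defensive sketch (this assumes .env contains simple KEY=value lines; "set -a" is the POSIX flag that auto-exports assignments):

```shell
# Load .env into the current shell, exporting each assignment so the
# crawbl commands (and anything they spawn) can see the values.
if [ -f .env ]; then
  set -a     # every variable assigned from here on is exported
  . ./.env
  set +a     # restore normal assignment behavior
fi
```

The guard makes this a no-op when .env is absent, so it is safe to drop into a shell profile or wrapper script.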

Step 3: Start the stack

Run ./crawbl dev start to bring up the API, database, Redis, MCP, and other local services through Docker Compose.

Step 4: Verify it is alive

Hit the health endpoint and run tests once the stack is up. That confirms the local backend is actually usable.

The backend repo ships a local launcher at ./crawbl. It builds bin/crawbl on first run and refreshes it when the CLI source changes, so you do not need a separate global install.

Other useful commands:

./crawbl dev stop            # Stop containers
./crawbl dev start --clean   # Wipe DB and start fresh
./crawbl dev migrate         # Run migrations only
./crawbl dev verify          # Manual full local check
./crawbl test e2e            # Run E2E tests against the local stack

Check the health endpoint

This is the quickest proof that the backend is up and answering requests.

curl http://localhost:7171/v1/health

You should get a JSON response indicating the service is healthy.

If you see connection refused, wait a few seconds for the containers to finish starting and try again.
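Rather than retrying by hand, you can poll the endpoint in a small loop. A sketch (the URL matches the health check above; the 30-attempt budget is an arbitrary choice):

```shell
# Poll a URL once per second until it answers successfully or we give up.
wait_for_health() {
  url=${1:-http://localhost:7171/v1/health}
  attempts=${2:-30}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gave up after ${attempts} attempts" >&2
  return 1
}
```

Called with no arguments, wait_for_health polls the local backend for up to 30 seconds and exits nonzero if it never comes up, which makes it easy to use in scripts.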

Without Docker

If you prefer running the server outside Docker (for example, to use a debugger or quickly test if your code compiles):

cd crawbl-backend
./crawbl dev start --database-only   # Launches Postgres in the background and runs migrations
./crawbl platform orchestrator       # Starts the orchestrator on your host

./crawbl platform orchestrator runs the repo-local binary on your machine. It connects to the Dockerized Postgres that ./crawbl dev start --database-only started, so you still get a real database without putting the app itself in a container.

This option is useful when you want to:

  • Attach a debugger (like Delve)
  • See faster restart times during development
  • Run with custom Go build flags
  • Quickly check that your code compiles

Run the tests

With the stack running, you can verify everything works end-to-end:

./crawbl test e2e

This runs the full end-to-end test suite against your local stack.

Note: The tests exercise authentication, user creation, runtime lifecycle, and API contract validation.

For faster feedback during development, you can also run just the unit tests:

./crawbl test unit

Useful Commands

Here is a quick reference for the commands you will use daily:

Command                              What it does
./crawbl dev start                   Start the full local stack via Docker Compose
./crawbl dev start --database-only   Start Postgres + run migrations without launching the orchestrator
./crawbl platform orchestrator       Run the orchestrator binary on your host
./crawbl dev stop                    Stop all Docker Compose containers
./crawbl dev start --clean           Wipe DB and start fresh (full reset)
./crawbl test unit                   Run unit tests
./crawbl test e2e                    Run end-to-end tests against the local stack
./crawbl dev verify                  Run formatting, linting, and tests together
./crawbl dev lint --fix              Auto-fix linting issues
make ci-check                        Run the repo pre-push checks
go mod tidy                          Sync go.mod and go.sum

How Local Mode Works

When running locally, the backend uses a fake runtime driver instead of a real Kubernetes connection.

That means local development focuses on backend logic, not real cluster provisioning. You can still test API behavior, storage, and most request flows without waiting for pods to be created.

  • You do not need a running cluster to develop locally
  • API endpoints, database queries, and auth flows all work normally
  • UserSwarm lifecycle calls are simulated, so they return success without creating real pods
  • In local and test modes, the auth middleware injects a default principal instead of requiring Firebase or E2E auth

This lets you iterate quickly on API logic without waiting for cloud infrastructure. When you are ready to test against real infrastructure, continue to Bootstrap Cluster.

Troubleshooting

Port 7171 already in use: Find it with lsof -i :7171 and stop it, or change the port in .env.
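A sketch of that cleanup (review the PID before killing anything; the lsof flags used here are the common Linux/macOS ones):

```shell
# Find the process listening on port 7171 and stop it if one exists.
pid=$(lsof -ti :7171 2>/dev/null || true)
if [ -n "$pid" ]; then
  echo "stopping PID $pid on port 7171"
  kill "$pid"
else
  echo "port 7171 is free"
fi
```

lsof's -t flag prints bare PIDs, which makes the output safe to feed to kill; the `|| true` keeps the script going when nothing holds the port.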

Postgres connection refused: Make sure Docker is running and ./crawbl dev start completed without errors. Check with docker ps.

Migration errors: Full reset with ./crawbl dev start --clean.


What's next: If you need to deploy to a real cluster, continue to Bootstrap Cluster. If you only need local development, you are all set — check out the Core Concepts to learn more about how the codebase is organized.