Dev Environment
There is already a shared dev environment. In most cases, use that first instead of creating your own infrastructure.
In plain language, this page answers one question: "Before I build my own setup, what shared environment already exists, and how do I safely use it?" Concretely, these are the questions people usually have on day one:
- Which URLs matter?
- How do I test without the mobile app?
- Which namespace should I look at?
- When do I need my own deploy?
Find the shared URLs
Start with the public entry points: API, docs, ArgoCD, and the database UI. These are the services you will check most often while working in dev.
Check the backend health
Use the /v1/health endpoint first. If that is down, there is no point debugging higher-level behavior yet.
Smoke-test auth
If CRAWBL_E2E_TOKEN is set, make one authenticated request without the mobile app. That tells you whether the non-production auth path is wired correctly.
Run the live test suite
Use `./crawbl test e2e --base-url https://dev.api.crawbl.com -v` when you want the suite to exercise the deployed cluster end to end.
Prerequisites
```sh
# Install mise (version manager) and all required tools
curl https://mise.run | sh
eval "$(~/.local/bin/mise activate zsh)"
cd crawbl-backend && mise install
```
This installs the main developer tools at the versions the repo expects. See .mise.toml for the exact list.
You will also need the shared secrets from crawbl-backend/.env, because some commands and test paths read values from that file.
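Since commands that depend on `.env` values tend to fail with confusing errors when a variable is missing, a quick preflight check can save time. Below is a minimal sketch; `check_env` is an illustrative helper, not a script that exists in the repo:

```sh
# check_env: hypothetical helper that fails fast when required
# variables are missing from the current environment.
check_env() {
  missing=0
  for v in "$@"; do
    if [ -z "$(printenv "$v")" ]; then
      echo "missing: $v" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example: verify the E2E token is available before smoke-testing auth.
# check_env CRAWBL_E2E_TOKEN && echo "env ok"
```

Run it right after sourcing `.env` so a forgotten variable surfaces before, not during, a test run.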
Endpoints
| Service | URL |
|---|---|
| API | https://dev.api.crawbl.com |
| Docs | https://dev.docs.crawbl.com |
| ArgoCD | https://dev.argocd.crawbl.com |
| Database UI | https://dev.postgres.crawbl.com |
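The four endpoints above can be swept in one loop when you want a quick "is anything obviously down" signal. This is a sketch, assuming an HTTP status code (or `000` on connection failure) is enough information; `check_endpoints` is an illustrative name, not an existing script:

```sh
# check_endpoints: hypothetical helper printing "<status> <url>" per URL;
# curl yields 000 when the host is unreachable.
check_endpoints() {
  for url in "$@"; do
    code=$(curl -s -o /dev/null -m 5 -w '%{http_code}' "$url" 2>/dev/null) || code=000
    echo "$code $url"
  done
}

# Example sweep over the shared dev services:
# check_endpoints https://dev.api.crawbl.com/v1/health \
#                 https://dev.docs.crawbl.com \
#                 https://dev.argocd.crawbl.com \
#                 https://dev.postgres.crawbl.com
```

Anything other than a 2xx/3xx status on a service you need is worth investigating before debugging your own change.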
Fast Checks
These checks answer three different questions:
- Is the API up?
- Can I make an authenticated request without the mobile app?
- Can I run the test suite against the shared environment?
```sh
# Health check
curl https://dev.api.crawbl.com/v1/health

# Authenticated request (non-production E2E bypass)
curl -i -X POST https://dev.api.crawbl.com/v1/auth/sign-in \
  -H "X-E2E-Token: $CRAWBL_E2E_TOKEN" \
  -H "X-E2E-UID: test-user" \
  -H "X-E2E-Email: test@crawbl.test" \
  -H "X-E2E-Name: Test User" \
  -H "X-Device-Info: CLI/test/test" \
  -H "X-Device-ID: cli" \
  -H "X-Version: 1.0.0 (1)" \
  -H "X-Timezone: UTC"

# Run E2E tests against dev
./crawbl test e2e --base-url https://dev.api.crawbl.com -v
```
Expected results:
- `/v1/health` returns HTTP 200
- `POST /v1/auth/sign-in` returns HTTP 204 when `CRAWBL_E2E_TOKEN` is set correctly
- `./crawbl test e2e` starts the end-to-end test suite against the live cluster
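During a deploy, a single health check can race the rollout and fail spuriously. A short polling loop avoids that; `wait_for_health` below is an illustrative helper, not part of the crawbl CLI:

```sh
# wait_for_health: hypothetical poller that retries the health endpoint
# until it returns HTTP 200 or the attempt budget runs out.
wait_for_health() {
  url="$1"
  tries="${2:-10}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    code=$(curl -s -o /dev/null -m 5 -w '%{http_code}' "$url" 2>/dev/null) || code=000
    if [ "$code" = "200" ]; then
      echo "healthy after $((i + 1)) attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "still unhealthy after $tries attempt(s)" >&2
  return 1
}

# Example: give a fresh deploy up to 30 seconds to come up.
# wait_for_health https://dev.api.crawbl.com/v1/health 30
```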
Auth Methods
You do not need to memorize all of this up front. The short version is:
- normal app traffic uses Firebase-based auth
- tests and smoke checks use a separate E2E test token in non-production
| Method | When to use |
|---|---|
| Firebase JWT (X-Token or Authorization: Bearer) | Mobile app and normal user traffic |
| E2E Token (X-E2E-Token) | CI, automated tests, and manual smoke checks in non-production |
The E2E token lives in .env as CRAWBL_E2E_TOKEN. It is a dev and test shortcut, not a normal product auth flow.
Secrets
All secrets are in crawbl-backend/.env and AWS Secrets Manager. Source them before testing:
```sh
cd crawbl-backend
set -a && source .env && set +a
```
Using .env with CLI Commands: any ./crawbl command that requires environment variables (especially infrastructure commands like infra plan and infra update) must be run with the .env file sourced. Use this pattern:

```sh
set -a && source .env && set +a && ./crawbl <command>
```

For example:

```sh
set -a && source .env && set +a && ./crawbl infra plan
```
This ensures that all required credentials and configuration values are available to the command.
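If you find yourself typing that sourcing pattern often, it can be wrapped in a small function so it is harder to forget. A sketch assuming a POSIX-ish shell run from crawbl-backend; `with_env` is an illustrative name, not an existing command:

```sh
# with_env: hypothetical wrapper that sources ./.env with auto-export
# enabled, then runs the given command with those variables in scope.
with_env() {
  set -a
  . ./.env
  set +a
  "$@"
}

# Example:
# with_env ./crawbl infra plan
```

`set -a` makes every variable assigned while sourcing exported automatically, which is exactly what the one-liner pattern above relies on.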
kubectl Access
kubectl is the command-line tool for inspecting the Kubernetes cluster. A "namespace" is just a named group of resources inside that cluster.
```sh
doctl kubernetes cluster kubeconfig save crawbl-dev
kubectl get pods -n backend
kubectl get pods -n userswarm-controller
kubectl get pods -n userswarms
```
What lives where:
- `backend`: the main backend services and shared support services
- `userswarm-controller`: the controller that turns `UserSwarm` records into actual runtime resources
- `userswarms`: the per-user runtime workloads and their storage
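To glance at all three namespaces in one pass, a loop like the following can help. This is a sketch: `pods_all_ns` is an illustrative helper, and the KUBECTL override exists only so the underlying command can be stubbed or prefixed:

```sh
# pods_all_ns: hypothetical helper listing pods in each dev namespace.
# KUBECTL can override the binary (e.g. "kubectl --context crawbl-dev").
pods_all_ns() {
  kubecmd="${KUBECTL:-kubectl}"
  for ns in backend userswarm-controller userswarms; do
    echo "== $ns =="
    $kubecmd get pods -n "$ns"
  done
}

# Example:
# pods_all_ns
```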
What's Deployed
CI auto-deploys every push to main in crawbl-backend.
A normal push to the backend repo updates the shared dev cluster automatically.
1. Build images: CI packages the backend into container images.
2. Publish them: the images are pushed to DigitalOcean Container Registry.
3. Update ArgoCD: CI updates image references in crawbl-argocd-apps, and ArgoCD syncs the cluster.
4. Run E2E checks: the live deployment is exercised with end-to-end checks after sync.
See dev services reference for full details on each endpoint, credentials, and TLS.