Verify Deployment
These steps can change shared cluster state. Double-check the target stack, namespace, and credentials before running mutating commands.
Your cluster is up and secrets are populated. Now check that the real pieces are alive and reachable.
In plain language, this page is the "did the deploy actually work?" checklist.
Check the pods
Look at backend, then userswarm-controller. If the main services or the controller are not running, stop here and inspect logs.
Hit the health endpoint
The public health check tells you whether DNS, TLS, routing, and the orchestrator are all answering together.
Smoke-test auth
If CRAWBL_E2E_TOKEN is available, make one signed request without the mobile app. That confirms the dev-only auth path is working.
Troubleshoot or tear down
Use the troubleshooting commands if something looks wrong. If you are done with the environment, destroy it from the backend repo.
Check pod status
Most of the app workloads run in backend. Metacontroller runs in userswarm-controller.
If those namespace names are unfamiliar, focus on the practical question first: are the main backend services running, and is the runtime controller running?
kubectl get pods -n backend
You should see pods for the orchestrator, userswarm webhook, PostgreSQL, Redis, docs, and any other enabled workloads. If any pods show Pending or CrashLoopBackOff, see the troubleshooting section below.
kubectl get pods -n userswarm-controller
This should show the Metacontroller pod in Running state.
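If you want to script this check instead of reading the table by eye, you can filter the JSON that `kubectl get pods -o json` emits. This is a sketch, not part of the deploy tooling; it assumes `jq` is installed, and the `not_ready_count` helper name is made up here:

```shell
# Sketch: count pods whose phase is neither Running nor Succeeded.
# Assumes jq is installed; pipe `kubectl get pods -o json` into it.
not_ready_count() {
  jq -r '[.items[] | select(.status.phase != "Running" and .status.phase != "Succeeded")] | length'
}

# Example: a non-zero count means something in the namespace needs attention.
# kubectl get pods -n backend -o json | not_ready_count
```

A count of `0` for both namespaces means every pod has reached a healthy phase and you can move on to the endpoint checks.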
Hit the live health endpoint
The orchestrator exposes a health check on the public API domain:
curl -s https://dev.api.crawbl.com/v1/health
A healthy response means DNS resolves, TLS terminates, the gateway routes traffic, and the orchestrator is responding.
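Right after a deploy the endpoint can lag behind the pods by a minute or two, so a retry loop is more reliable than a single `curl`. A minimal sketch; the `wait_for_health` name and the retry/delay defaults are arbitrary choices, not part of the project:

```shell
# Sketch: poll a health URL until it responds successfully or we give up.
wait_for_health() {
  url="$1"; attempts="${2:-10}"; delay="${3:-3}"
  i=1
  while [ "$i" -le "$attempts" ]; do
    # -f makes curl fail on HTTP errors, -sS stays quiet except for real errors.
    if curl -fsS "$url" > /dev/null 2>&1; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    [ "$i" -lt "$attempts" ] && sleep "$delay"
    i=$((i + 1))
  done
  echo "still unhealthy after $attempts attempt(s)" >&2
  return 1
}

# Usage:
# wait_for_health https://dev.api.crawbl.com/v1/health 20 5
```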
Optional: test an authenticated request in dev
If CRAWBL_E2E_TOKEN is configured, you can smoke-test auth without the mobile app:
curl -i -X POST https://dev.api.crawbl.com/v1/auth/sign-in \
-H "X-E2E-Token: $CRAWBL_E2E_TOKEN" \
-H "X-E2E-UID: smoke-user-1" \
-H "X-E2E-Email: smoke@example.com" \
-H "X-E2E-Name: Smoke Test" \
-H "X-Device-Info: my-laptop" \
-H "X-Device-ID: smoke-001" \
-H "X-Version: 0.1.0+dev" \
-H "X-Timezone: UTC"
You should get HTTP 204 No Content. That confirms the non-production E2E auth path is working end to end.
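In a script it is easier to compare the status code than to read headers. A sketch of that pattern; `expect_status` is a hypothetical helper, and the commented-out curl just restates the request above with `-w '%{http_code}'` to capture only the code:

```shell
# Sketch: compare an HTTP status code against the expected one.
# expect_status is a made-up helper, not part of the crawbl CLI.
expect_status() {
  want="$1"; got="$2"
  if [ "$got" = "$want" ]; then
    echo "OK: got $got"
  else
    echo "FAIL: got $got, want $want" >&2
    return 1
  fi
}

# status=$(curl -s -o /dev/null -w '%{http_code}' -X POST \
#   https://dev.api.crawbl.com/v1/auth/sign-in \
#   -H "X-E2E-Token: $CRAWBL_E2E_TOKEN" ...)   # remaining headers as above
# expect_status 204 "$status"
```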
Troubleshooting
Pods stuck in CrashLoopBackOff
Check the logs to see what went wrong:
kubectl logs -n backend <pod-name> --previous
The --previous flag shows logs from the last crash, which usually contains the error message.
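To find which pods are crash-looping without scanning the table by eye, you can filter on container restart counts. A sketch assuming `jq` is installed; the `restarting_pods` name and the threshold default are illustrative:

```shell
# Sketch: list pods with any container restartCount above a threshold.
# Assumes jq; pipe `kubectl get pods -n backend -o json` into it.
restarting_pods() {
  threshold="${1:-0}"
  jq -r --argjson t "$threshold" \
    '.items[] | select(any(.status.containerStatuses[]?; .restartCount > $t)) | .metadata.name'
}

# Example: name every pod that has restarted more than 3 times.
# kubectl get pods -n backend -o json | restarting_pods 3
```

Feed each name it prints into the `kubectl logs --previous` command above.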
Secrets not syncing
If pods are failing because of missing environment variables, the ExternalSecret resources might not have synced yet:
kubectl get externalsecret -n backend
Look for resources with a SecretSynced condition of False. This usually means the secret does not exist in AWS Secrets Manager yet. Go back to Bootstrap Cluster and create the missing secret.
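The same kind of JSON filter works here. This sketch assumes `jq` and the standard Kubernetes `status.conditions` layout on ExternalSecret resources; it prints the name of any resource with a condition that is not `True`:

```shell
# Sketch: print ExternalSecrets whose sync condition is not True.
# Assumes jq; pipe `kubectl get externalsecret -n backend -o json` into it.
unsynced_secrets() {
  jq -r '.items[]
         | select(any(.status.conditions[]?; .status != "True"))
         | .metadata.name'
}

# kubectl get externalsecret -n backend -o json | unsynced_secrets
```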
Image pull errors
DigitalOcean Container Registry pull secrets expire weekly. If you see ImagePullBackOff errors, refresh your credentials:
doctl registry login
DNS not resolving
DNS propagation can take a few minutes after cluster creation. If curl returns a DNS error, wait 5 minutes and try again. You can also check the Cloudflare dashboard to confirm the records were created.
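Rather than re-running curl by hand, you can wait for the record in a loop. A sketch: `getent` ships with glibc-based Linux (on macOS substitute `host` or `dig +short`), and the retry/delay defaults are arbitrary:

```shell
# Sketch: retry until a hostname resolves, then report success.
wait_for_dns() {
  name="$1"; tries="${2:-10}"; delay="${3:-30}"
  i=1
  while [ "$i" -le "$tries" ]; do
    if getent hosts "$name" > /dev/null 2>&1; then
      echo "$name resolves"
      return 0
    fi
    [ "$i" -lt "$tries" ] && sleep "$delay"
    i=$((i + 1))
  done
  echo "$name still not resolving after $tries tries" >&2
  return 1
}

# wait_for_dns dev.api.crawbl.com 10 30
```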
Teardown
If you need to destroy the dev environment (for example, to save costs when you are not actively using it), run:
cd crawbl-backend
crawbl infra destroy
This tears down resources in reverse order: edge, platform, then cluster. It takes a few minutes.
Some long-lived resources like the DOCR registry and VPC may not be destroyed automatically if they were created outside Pulumi. Check the DigitalOcean dashboard for any leftover resources.
To roll back a single component without destroying everything, use ArgoCD or Helm directly:
# Re-sync a single app
argocd app sync <app-name>
# Roll back a Helm release
helm rollback <release-name> -n <namespace>
You are all set
Congratulations — you have a fully working Crawbl development environment. From here, you can:
- Push code to `main` in `crawbl-backend` and watch CI deploy it automatically
- Read the Architecture docs to understand how the codebase is organized
- Check the API Reference for the full list of endpoints
- See the Infrastructure docs for details on the cluster setup
What's next: Explore the Core Concepts to understand how the backend codebase is organized and the patterns it follows.