Dev Environment

There is already a shared dev environment. In most cases, use that first instead of creating your own infrastructure.

In plain language, this page answers one question: "Before I build my own setup, what shared environment already exists and how do I safely use it?"

Specifically, it covers the questions people usually have on day one:

  • Which URLs matter?
  • How do I test without the mobile app?
  • Which namespace should I look at?
  • When do I need my own deploy?

Step 1

Find the shared URLs

Start with the public entry points: API, docs, ArgoCD, and the database UI. These are the services you will check most often while working in dev.

Step 2

Check the backend health

Use the /v1/health endpoint first. If that is down, there is no point debugging higher-level behavior yet.

Step 3

Smoke-test auth

If CRAWBL_E2E_TOKEN is set, make one authenticated request without the mobile app. That tells you whether the non-production auth path is wired correctly.

Step 4

Run the live test suite

Use ./crawbl test e2e --base-url https://dev.api.crawbl.com -v when you want the suite to exercise the deployed cluster end to end.

Prerequisites

# Install mise (version manager) and all required tools
curl https://mise.run | sh
eval "$(~/.local/bin/mise activate zsh)"
cd crawbl-backend && mise install

This installs the main developer tools at the versions the repo expects. See .mise.toml for the exact list.

You will also need the shared secrets from crawbl-backend/.env, because some commands and test paths read values from that file.

Endpoints

Service       URL
API           https://dev.api.crawbl.com
Docs          https://dev.docs.crawbl.com
ArgoCD        https://dev.argocd.crawbl.com
Database UI   https://dev.postgres.crawbl.com
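
If you script against these endpoints often, a tiny lookup helper saves retyping the URLs. The function name dev_url and its short service keys below are our own shorthand, not part of the crawbl CLI:

```shell
# Hypothetical helper mapping shorthand names to the shared dev URLs.
dev_url() {
  case "$1" in
    api)    echo "https://dev.api.crawbl.com" ;;
    docs)   echo "https://dev.docs.crawbl.com" ;;
    argocd) echo "https://dev.argocd.crawbl.com" ;;
    db)     echo "https://dev.postgres.crawbl.com" ;;
    *)      echo "unknown service: $1" >&2; return 1 ;;
  esac
}

# Example: curl "$(dev_url api)/v1/health"
```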

Fast Checks

These checks answer three different questions:

  • Is the API up?
  • Can I make an authenticated request without the mobile app?
  • Can I run the test suite against the shared environment?

# Health check
curl https://dev.api.crawbl.com/v1/health

# Authenticated request (non-production E2E bypass)
curl -i -X POST https://dev.api.crawbl.com/v1/auth/sign-in \
-H "X-E2E-Token: $CRAWBL_E2E_TOKEN" \
-H "X-E2E-UID: test-user" \
-H "X-E2E-Email: test@crawbl.test" \
-H "X-E2E-Name: Test User" \
-H "X-Device-Info: CLI/test/test" \
-H "X-Device-ID: cli" \
-H "X-Version: 1.0.0 (1)" \
-H "X-Timezone: UTC"

# Run E2E tests against dev
./crawbl test e2e --base-url https://dev.api.crawbl.com -v

Expected results:

  • /v1/health returns HTTP 200
  • POST /v1/auth/sign-in returns HTTP 204 when CRAWBL_E2E_TOKEN is set correctly
  • ./crawbl test e2e starts the end-to-end test suite against the live cluster

Auth Methods

You do not need to memorize all of this up front. The short version is:

  • Normal app traffic uses Firebase-based auth
  • Tests and smoke checks use a separate E2E test token in non-production

Method                                            When to use
Firebase JWT (X-Token or Authorization: Bearer)   Mobile app and normal user traffic
E2E Token (X-E2E-Token)                           CI, automated tests, and manual smoke checks in non-production

The E2E token lives in .env as CRAWBL_E2E_TOKEN. It is a dev and test shortcut, not a normal product auth flow.
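
For repeated smoke checks, the authenticated request from Fast Checks can be wrapped in a function so the headers stay consistent. e2e_sign_in is a hypothetical name of ours; it assumes CRAWBL_E2E_TOKEN is already exported (for example from crawbl-backend/.env):

```shell
# Sketch of a reusable smoke-check helper for the E2E bypass headers.
# Defaults to the shared dev API; pass another base URL as the first argument.
e2e_sign_in() {
  local base_url="${1:-https://dev.api.crawbl.com}"
  curl -i -X POST "$base_url/v1/auth/sign-in" \
    -H "X-E2E-Token: $CRAWBL_E2E_TOKEN" \
    -H "X-E2E-UID: test-user" \
    -H "X-E2E-Email: test@crawbl.test" \
    -H "X-E2E-Name: Test User" \
    -H "X-Device-Info: CLI/test/test" \
    -H "X-Device-ID: cli" \
    -H "X-Version: 1.0.0 (1)" \
    -H "X-Timezone: UTC"
}
```

A 204 response means the non-production bypass is wired correctly, matching the expected results listed above.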

Secrets

All secrets are in crawbl-backend/.env and AWS Secrets Manager. Source them before testing:

cd crawbl-backend
set -a && source .env && set +a

Using .env with CLI Commands: Any ./crawbl command that requires environment variables (especially infrastructure commands like infra plan, infra update, etc.) must be run with the .env file sourced. Use this pattern:

set -a && source .env && set +a && ./crawbl <command>

For example:

set -a && source .env && set +a && ./crawbl infra plan

This ensures that all required credentials and configuration values are available to the command.
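
The source pattern can also be wrapped in a small function so secrets stay scoped to a single command instead of persisting in your interactive session. with_env is our own name, not a crawbl command:

```shell
# Hypothetical wrapper: source .env in a subshell, run one command, exit.
# The subshell means the exported secrets do not leak into your shell.
with_env() {
  ( set -a && . ./.env && set +a && "$@" )
}

# Usage: with_env ./crawbl infra plan
```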

kubectl Access

kubectl is the command-line tool for inspecting the Kubernetes cluster. A "namespace" is just a named group of resources inside that cluster.

doctl kubernetes cluster kubeconfig save crawbl-dev
kubectl get pods -n backend
kubectl get pods -n userswarm-controller
kubectl get pods -n userswarms

What lives where:

  • backend: the main backend services and shared support services
  • userswarm-controller: the controller that turns UserSwarm records into actual runtime resources
  • userswarms: the per-user runtime workloads and their storage

What's Deployed

CI auto-deploys every push to main in crawbl-backend.

A normal push to the backend repo updates the shared dev cluster automatically.

Step 1

Build images

CI packages the backend into container images.

Step 2

Publish them

The images are pushed to DigitalOcean Container Registry.

Step 3

Update ArgoCD

CI updates image references in crawbl-argocd-apps, and ArgoCD syncs the cluster.

Step 4

Run E2E checks

The live deployment is exercised with end-to-end checks after sync.
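
The four steps above can be sketched in shell form for orientation. Everything here is illustrative: the registry path, the image_ref helper, and the commented commands are our guesses at the pipeline's shape, not its actual definition:

```shell
# Hypothetical helper: compose a registry image reference for a git SHA.
# The repository path below is an assumption, not the real DOCR location.
image_ref() {
  echo "registry.digitalocean.com/crawbl/backend:$1"
}

# Steps 1-2 (run by CI): build the image and publish it to the registry
#   docker build -t "$(image_ref "$(git rev-parse --short HEAD)")" .
#   docker push   "$(image_ref "$(git rev-parse --short HEAD)")"
# Step 3: CI commits the new tag to crawbl-argocd-apps; ArgoCD syncs it
# Step 4: E2E checks run against the freshly synced cluster
```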

See dev services reference for full details on each endpoint, credentials, and TLS.