Sayna ships as a single Axum binary that runs anywhere you can schedule containers. The sections below walk through the recommended workflow for container builds, Docker Compose, and Kubernetes.

1. Prerequisites

  • LiveKit cluster reachable from the Sayna pod/container (LAN or VPC). Set LIVEKIT_URL to the internal address and LIVEKIT_PUBLIC_URL to what clients should use.
  • Provider credentials for the STT/TTS services you plan to use:
    • DEEPGRAM_API_KEY for Deepgram STT/TTS
    • ELEVENLABS_API_KEY for ElevenLabs TTS
  • Optional: S3-compatible bucket for LiveKit recording egress (RECORDING_S3_*).
  • Optional: Authentication settings from the Authentication guide.
  • Persistent volume (or host path) for CACHE_PATH when you want cached audio and turn-detector assets to survive container restarts.
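Before moving on, it can help to confirm that the network where Sayna will run can actually reach LiveKit. A quick reachability probe (the hostname and port here match the Compose example later in this guide; substitute your own):

```shell
# Probe the LiveKit signaling port. "livekit" and 7880 are the
# Compose-example values from this guide, not universal defaults.
nc -z -w 3 livekit 7880 && echo "LiveKit reachable"
```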

2. Build the container image

Use the provided multi-stage Dockerfile (Rust toolchain + runtime slim image):
```shell
docker build -t sayna:latest .
```
Key build and runtime notes:
  • --build-arg RUST_VERSION=1.75.0 pins the toolchain when you need reproducibility.
  • sayna init runs during the image build to pre-download turn-detection assets when CACHE_PATH is set. Re-run sayna init at runtime if you mount a fresh cache volume.
  • Sayna listens on port 3001 by default; override via -e PORT=XXXX.
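Putting those notes together: a pinned, reproducible build followed by a cache re-initialization run. The `init` invocation assumes the image entrypoint exposes the `sayna` binary directly; adjust if your entrypoint differs.

```shell
# Pin the Rust toolchain so the image builds reproducibly.
docker build --build-arg RUST_VERSION=1.75.0 -t sayna:latest .

# If you later attach a fresh, empty cache volume, re-run `sayna init`
# so turn-detection assets are downloaded before serving traffic.
docker run --rm \
  -e CACHE_PATH=/data/cache \
  -v sayna-cache:/data/cache \
  sayna:latest init
```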

3. Runtime environment

| Variable | Purpose | Example |
| --- | --- | --- |
| HOST | Bind address inside the container. | 0.0.0.0 |
| PORT | Axum listener port. | 3001 |
| CACHE_PATH | Directory that stores cached audio and turn-detector assets. | /data/cache |
| DEEPGRAM_API_KEY | Enables Deepgram STT/TTS. | dg-secret |
| ELEVENLABS_API_KEY | Enables ElevenLabs TTS. | el-secret |
| LIVEKIT_URL | Server-to-server WebSocket URL (internal). | ws://livekit:7880 |
| LIVEKIT_PUBLIC_URL | URL clients should dial. | https://rtc.yourdomain.com |
| LIVEKIT_API_KEY / LIVEKIT_API_SECRET | Credentials used to mint /livekit/token responses and join rooms. | lk_key / lk_secret |
| RECORDING_S3_* | Bucket configuration for LiveKit recordings. | bucket, region, endpoint, access key, secret key |
| AUTH_* | Optional shared-secret or JWT auth; see Authentication. | – |

4. Local Docker run

```shell
docker run --rm \
  -p 3001:3001 \
  -e HOST=0.0.0.0 \
  -e PORT=3001 \
  -e CACHE_PATH=/data/cache \
  -e DEEPGRAM_API_KEY=dg-secret \
  -e ELEVENLABS_API_KEY=el-secret \
  -e LIVEKIT_URL=ws://livekit:7880 \
  -e LIVEKIT_PUBLIC_URL=https://rtc.localhost \
  -e LIVEKIT_API_KEY=lk_key \
  -e LIVEKIT_API_SECRET=lk_secret \
  -v sayna-cache:/data/cache \
  sayna:latest
```
Mounting sayna-cache ensures cached voices and turn detection assets persist between runs.
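To confirm the cache actually persists, you can inspect the named volume between runs (standard Docker CLI; the mountpoint shown depends on your Docker installation):

```shell
# Show where Docker stores the named volume on the host.
docker volume inspect -f '{{ .Mountpoint }}' sayna-cache
```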

5. Docker Compose example

```yaml
version: "3.9"
services:
  livekit:
    image: livekit/livekit-server:latest
    environment:
      - LIVEKIT_KEYS=lk_key:lk_secret
    ports:
      - "7880:7880"
      - "7881:7881"

  sayna:
    image: sayna:latest
    depends_on:
      - livekit
    ports:
      - "3001:3001"
    environment:
      HOST: 0.0.0.0
      PORT: 3001
      CACHE_PATH: /data/cache
      DEEPGRAM_API_KEY: ${DEEPGRAM_API_KEY}
      ELEVENLABS_API_KEY: ${ELEVENLABS_API_KEY}
      LIVEKIT_URL: ws://livekit:7880
      LIVEKIT_PUBLIC_URL: https://rtc.example.com
      LIVEKIT_API_KEY: lk_key
      LIVEKIT_API_SECRET: lk_secret
      RECORDING_S3_BUCKET: sayna-egress
      RECORDING_S3_REGION: us-east-1
      RECORDING_S3_ENDPOINT: https://s3.amazonaws.com
      RECORDING_S3_ACCESS_KEY: ${S3_ACCESS_KEY}
      RECORDING_S3_SECRET_KEY: ${S3_SECRET_KEY}
    volumes:
      - sayna-cache:/data/cache

volumes:
  sayna-cache: {}
```
Compose networking keeps LIVEKIT_URL=ws://livekit:7880 internal while LIVEKIT_PUBLIC_URL remains externally accessible.
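Compose resolves the ${VAR} references above from the invoking shell or from a .env file in the project directory. A minimal .env sketch (placeholder values; keep the real file out of version control):

```shell
# .env — read automatically by Docker Compose from the project directory.
DEEPGRAM_API_KEY=dg-secret
ELEVENLABS_API_KEY=el-secret
S3_ACCESS_KEY=changeme
S3_SECRET_KEY=changeme
```

Once the file is in place, start the stack with `docker compose up -d`.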

6. Kubernetes deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sayna
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sayna
  template:
    metadata:
      labels:
        app: sayna
    spec:
      containers:
        - name: sayna
          image: ghcr.io/your-org/sayna:latest
          ports:
            - containerPort: 3001
          envFrom:
            - secretRef:
                name: sayna-secrets
            - configMapRef:
                name: sayna-config
          volumeMounts:
            - name: cache
              mountPath: /data/cache
      volumes:
        - name: cache
          persistentVolumeClaim:
            claimName: sayna-cache-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: sayna
spec:
  selector:
    app: sayna
  ports:
    - name: http
      port: 3001
      targetPort: 3001
  type: ClusterIP
```
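The Deployment mounts sayna-cache-pvc, which is not defined above. A minimal claim to pair with it (the storage size and access mode are placeholders; choose a storage class that fits your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sayna-cache-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```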
Recommended ConfigMap snippet:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sayna-config
data:
  HOST: "0.0.0.0"
  PORT: "3001"
  CACHE_PATH: "/data/cache"
  LIVEKIT_URL: "ws://livekit.livekit.svc.cluster.local:7880"
  LIVEKIT_PUBLIC_URL: "https://rtc.example.com"
```
Store DEEPGRAM_API_KEY, ELEVENLABS_API_KEY, LIVEKIT_API_KEY, LIVEKIT_API_SECRET, and S3 credentials inside sayna-secrets.
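One way to create that Secret without committing credentials to manifests (the values mirror the placeholder examples used throughout this guide):

```shell
kubectl create secret generic sayna-secrets \
  --from-literal=DEEPGRAM_API_KEY=dg-secret \
  --from-literal=ELEVENLABS_API_KEY=el-secret \
  --from-literal=LIVEKIT_API_KEY=lk_key \
  --from-literal=LIVEKIT_API_SECRET=lk_secret \
  --from-literal=RECORDING_S3_ACCESS_KEY=changeme \
  --from-literal=RECORDING_S3_SECRET_KEY=changeme
```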

7. LiveKit configuration checklist

  1. Networking – ensure the Sayna pod can reach the LiveKit signaling endpoint specified in LIVEKIT_URL (Compose service name, Kubernetes service DNS, etc.).
  2. Credentials – set LIVEKIT_API_KEY / LIVEKIT_API_SECRET. Sayna uses these to mint:
    • Agent tokens during the WebSocket config workflow (livekit block).
    • User tokens returned by the /livekit/token REST endpoint.
  3. Client workflow – a typical config message includes:
```json
{
  "type": "config",
  "stream_id": "support-room-42-2024-01-31",
  "audio": true,
  "stt_config": { "provider": "deepgram", "language": "en-US", "sample_rate": 16000 },
  "tts_config": { "provider": "deepgram", "voice_id": "aura-aria" },
  "livekit": {
    "room_name": "support-room-42",
    "enable_recording": true,
    "sayna_participant_identity": "sayna-ai",
    "sayna_participant_name": "Sayna AI",
    "listen_participants": ["agent-1", "customer-42"]
  }
}
```
When enable_recording=true, LiveKit needs a valid egress target. Set a session-level stream_id to control the {server_prefix}/{stream_id}/audio.ogg path, or omit it to let the server generate one. Ensure RECORDING_S3_* or other LiveKit egress settings are populated.
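For reference, the tokens Sayna mints are standard LiveKit access tokens: HS256 JWTs signed with LIVEKIT_API_SECRET. The sketch below shows the general shape only; the claim names follow LiveKit's access-token format, while the identity, room, and one-hour TTL are made-up example values. In practice, request tokens from /livekit/token rather than building them by hand.

```shell
# Illustrative only: an HS256 JWT of the kind /livekit/token returns.
# Identity ("agent-1"), room, and TTL below are example values.
API_KEY=lk_key
API_SECRET=lk_secret

# Base64url-encode stdin (standard base64, then URL-safe alphabet, no padding).
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

header=$(printf '%s' '{"alg":"HS256","typ":"JWT"}' | b64url)
now=$(date +%s)
payload=$(printf '{"iss":"%s","sub":"agent-1","nbf":%d,"exp":%d,"video":{"room":"support-room-42","roomJoin":true}}' \
  "$API_KEY" "$now" $((now + 3600)) | b64url)
sig=$(printf '%s.%s' "$header" "$payload" | openssl dgst -sha256 -hmac "$API_SECRET" -binary | b64url)

echo "$header.$payload.$sig"
```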