@sayna-ai/js-sdk is the browser-oriented companion to Sayna. It follows the official README and focuses on product builders who need to drop token fetching, microphone publishing, and remote audio playback into a front-end app without touching low-level WebRTC primitives.

What it solves

  • Token orchestration – Fetch /livekit/token responses from your backend via a simple tokenUrl, or provide a custom tokenFetchHandler if you already control auth.
  • Room lifecycle – connect(), publishMicrophone(), and disconnect() manage the entire LiveKit flow and keep track of connection state for you.
  • Remote playback – The SDK can create and manage an <audio> element automatically, or you can pass your own element to fit your UI.
  • Framework agnostic – It’s a plain ES module, so you can import it in React, Vue, Svelte, or a vanilla JS project alike.
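
Because the package is a plain ES module, the same client drops into any component model. As a hedged illustration of the framework-agnostic point, here is a minimal React hook that connects on mount and disconnects on unmount; the hook name and the token path are examples, not part of the SDK.

import { useEffect } from "react";
import { SaynaClient } from "@sayna-ai/js-sdk";

// Example integration only – the hook name and /api/sayna/token path are placeholders.
export function useSaynaCall() {
  useEffect(() => {
    const client = new SaynaClient({ tokenUrl: "/api/sayna/token" });

    // Join the room, then publish the microphone once connected.
    client
      .connect()
      .then(() => client.publishMicrophone())
      .catch((err) => console.error("Sayna call failed to start", err));

    // Clean up when the component unmounts.
    return () => {
      client.disconnect();
    };
  }, []);
}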

Install

npm install @sayna-ai/js-sdk

Typical flow

import { SaynaClient } from "@sayna-ai/js-sdk";

// 1) Configure how tokens are fetched (either tokenUrl or tokenFetchHandler is required)
const client = new SaynaClient({
  tokenUrl: "/api/sayna/token",
  enableAudioPlayback: true, // auto-create an <audio> tag for remote audio
});

// 2) Join the room Sayna already participates in
await client.connect();

// 3) Publish microphone audio so humans can converse with the Sayna session
await client.publishMicrophone();

// 4) Tear down when the call ends
await client.disconnect();

If you need to plug in your own auth logic, supply a tokenFetchHandler instead of tokenUrl. The handler must resolve to { token, liveUrl }, mirroring the backend response from the /livekit/token endpoint.
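
For example, a custom handler might call your own auth endpoint and reshape its response. The sketch below assumes a hypothetical /api/auth/sayna-token route and bearer-token storage; the only hard requirement is that the promise resolves to { token, liveUrl }.

import { SaynaClient } from "@sayna-ai/js-sdk";

// The endpoint and Authorization header below are placeholders for your own auth.
async function fetchSaynaToken() {
  const response = await fetch("/api/auth/sayna-token", {
    headers: { Authorization: `Bearer ${sessionStorage.getItem("appToken")}` },
  });
  if (!response.ok) {
    throw new Error(`Token request failed with status ${response.status}`);
  }
  // Must match the /livekit/token response shape: { token, liveUrl }.
  const { token, liveUrl } = await response.json();
  return { token, liveUrl };
}

const client = new SaynaClient({
  tokenFetchHandler: fetchSaynaToken,
  enableAudioPlayback: true,
});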

Constructor options

  • tokenUrl (string | URL) – Relative URLs resolve against window.location.
  • tokenFetchHandler (() => Promise<{ token: string; liveUrl: string }>) – Overrides tokenUrl when provided.
  • audioElement (HTMLAudioElement) – Reuse an existing element for remote playback styling.
  • enableAudioPlayback (boolean) – Defaults to true; set to false if you only need microphone capture.

Provide at least one of tokenUrl or tokenFetchHandler.
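
For instance, you can hand the client an <audio> element that already lives in your layout, or disable playback entirely for capture-only pages. In this sketch the element id is an assumption about your markup, not something the SDK requires.

import { SaynaClient } from "@sayna-ai/js-sdk";

// Reuse an <audio> element you already render instead of letting the SDK create one.
const existingAudio = document.getElementById("sayna-audio"); // example id

const client = new SaynaClient({
  tokenUrl: "/api/sayna/token",
  audioElement: existingAudio,
});

// Capture-only variant: keep the microphone flow but skip remote playback.
const captureOnlyClient = new SaynaClient({
  tokenUrl: "/api/sayna/token",
  enableAudioPlayback: false,
});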

Lifecycle API

  • await client.connect(connectOptions?) – Fetches a token and joins the LiveKit room. Resolves to the underlying Room instance if you need fine-grained control.
  • await client.publishMicrophone(audioOptions?) – Requests browser microphone permissions and publishes the track into the room. Call only after connect().
  • await client.disconnect() – Leaves the room, detaches remote audio, and cleans up listeners. Safe to call multiple times.
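
Put together, a guarded call flow can look like the sketch below; the error handling and the beforeunload hook are illustrative choices, not SDK requirements.

import { SaynaClient } from "@sayna-ai/js-sdk";

const client = new SaynaClient({ tokenUrl: "/api/sayna/token" });

async function startCall() {
  // connect() resolves to the underlying LiveKit Room if you need lower-level access.
  const room = await client.connect();
  console.log("Joined LiveKit room", room);

  try {
    // Request the microphone only after the connection is established.
    await client.publishMicrophone();
  } catch (err) {
    // Most commonly a denied permission prompt – leave the room cleanly.
    await client.disconnect();
    throw err;
  }
}

async function endCall() {
  // disconnect() is safe to call repeatedly, even if startCall never finished.
  await client.disconnect();
}

// Illustrative teardown so a closing tab releases the microphone.
window.addEventListener("beforeunload", () => {
  client.disconnect();
});

// Wire startCall/endCall to your own UI controls.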

Helpful properties

  • client.currentRoom – Read-only reference to the current LiveKit Room (or null while disconnected).
  • client.isConnected – Boolean flag to toggle UI states.
  • client.playbackElement – The <audio> element used for remote playback, useful if you want to embed it in your layout.
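
As an illustration, these properties can drive simple UI state. The button ids and the #call-panel container below are assumptions about your page, not part of the SDK.

import { SaynaClient } from "@sayna-ai/js-sdk";

const client = new SaynaClient({ tokenUrl: "/api/sayna/token" });

function refreshButtons() {
  // isConnected toggles which controls are enabled.
  document.getElementById("join-btn").disabled = client.isConnected;
  document.getElementById("leave-btn").disabled = !client.isConnected;
}

async function joinCall() {
  await client.connect();
  await client.publishMicrophone();

  // currentRoom exposes the LiveKit Room while connected, null otherwise.
  console.log("Current room:", client.currentRoom);

  // Embed the SDK-managed <audio> element wherever it fits your layout.
  if (client.playbackElement) {
    document.getElementById("call-panel").appendChild(client.playbackElement);
  }

  refreshButtons();
}

async function leaveCall() {
  await client.disconnect();
  refreshButtons();
}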

When to reach for it

  • You expose /livekit/token from your backend (often via Sayna’s REST endpoint) and want browsers to join the same room.
  • You’d rather not manage raw LiveKit SDK calls, autoplay policies, or microphone publishing by hand.
  • You need an SDK that matches Sayna’s recommended flow without bringing in framework-specific dependencies.