<VoiceVisualizer />

A complete guide to the voice visualizers in TurboStarter AI, covering the six web visualizer styles, the mobile bar visualizer, and how each one is configured.

TurboStarter AI ships with a small family of voice visualizers rather than one fixed component. On web, the voice experience can render six different styles from packages/ui/web, while mobile uses a focused bar visualizer from packages/ui/mobile.

Web

The web side is the more flexible implementation. It includes six distinct visualizers, and the app-level voice screen selects between them based on the current visualizer settings.

Web voice visualizer demo

The web package includes six visualizer styles. They all react to the live voice session's audio and state, but each one gives the interface a different character.

| Visualizer | Component | Best fit | Notes |
| --- | --- | --- | --- |
| Orb | `Orb` | Hero-style voice sessions | A shader-driven focal point with blended colors and volume-driven motion. |
| Bar | `AudioVisualizerBar` | Clear, familiar voice UI | The most direct option when you want a classic speech-bar treatment. |
| Grid | `AudioVisualizerGrid` | Structured layouts | Animates a matrix of cells and works well in more system-like interfaces. |
| Radial | `AudioVisualizerRadial` | Circular layouts | Wraps bars around a center point for a more ambient feel. |
| Wave | `AudioVisualizerWave` | Minimal wide layouts | Uses a shader-based waveform that feels clean and elegant. |
| Aura | `AudioVisualizerAura` | Premium, immersive surfaces | Renders a soft glowing field that feels more atmospheric than literal. |

In the app, the selected visualizer shape is read from the voice settings store and mapped to the matching primitive from @workspace/ui-web/voice/*.
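That selection step can be sketched as a small lookup. Everything here is illustrative: the shape names mirror the table above, but the function and its fallback behavior are assumptions, not the actual TurboStarter code.

```typescript
// Hypothetical sketch of the shape -> visualizer selection step.
// The shape names mirror the web visualizer family; falling back to
// "orb" for unknown values is an assumption, not documented behavior.
const VISUALIZER_SHAPES = ["orb", "bar", "grid", "radial", "wave", "aura"] as const;

type VisualizerShape = (typeof VISUALIZER_SHAPES)[number];

function resolveVisualizerShape(stored: string | undefined): VisualizerShape {
  const value = stored ?? "";
  // Only accept values that name a known visualizer style.
  return (VISUALIZER_SHAPES as readonly string[]).includes(value)
    ? (value as VisualizerShape)
    : "orb";
}
```

Once the shape is resolved, the wrapper renders the matching primitive from `@workspace/ui-web/voice/*` (for example, `wave` would map to `AudioVisualizerWave`).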

Mobile

The mobile implementation is intentionally simpler. Instead of exposing a full visualizer family, it uses a single bar visualizer that stays clear and readable on a smaller screen.

Mobile voice visualizer demo

The mobile package currently exposes one voice visualizer primitive, and the app-level mobile voice screen follows that same direction.

| Visualizer | Component | Best fit | Notes |
| --- | --- | --- | --- |
| Bar | `AudioVisualizerBar` | Native full-screen voice sessions | A five-bar layout with animated idle and speaking states, optimized for compact mobile UI. |

That keeps the mobile experience consistent and easy to place next to transcript, controls, and the rest of the session UI.
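To make the idea concrete, the five-bar treatment can be approximated with a pure function that turns a volume level into per-bar scales. This is only an illustrative sketch of the concept, not the actual native implementation.

```typescript
// Illustrative sketch only: derive bar scales (0..1) from a volume level,
// taller toward the center, with a visible idle baseline. The real mobile
// visualizer animates natively; this just shows the shape of the idea.
function barScales(volume: number, bars = 5): number[] {
  const level = Math.min(1, Math.max(0, volume)); // clamp to 0..1
  const center = (bars - 1) / 2;
  const idle = 0.15; // minimum height so the idle state never collapses
  return Array.from({ length: bars }, (_, i) => {
    const falloff = 1 - Math.abs(i - center) / bars; // center bars reach higher
    return idle + (1 - idle) * level * falloff;
  });
}
```

Each scale could then drive a bar's animated height, with the idle baseline keeping the visualizer visible between utterances.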

What it brings to the session

A good voice interface needs more than controls and transcript text. The visualizer is what makes the session feel active before the next response is read or heard.

Makes state visible

Listening, thinking, and speaking each feel different, so the session never looks idle or frozen.

Adds polish without extra clutter

It creates a strong visual focal point without introducing more buttons, labels, or status chips.

Scales across platforms

The idea stays consistent between web and mobile, even though each platform renders it differently.

Usage

If you are building a custom voice surface, it is often better to use the primitives directly instead of relying on the app-level wrapper. Each example below stays minimal, but it uses the available props so you can see how the visualizer is meant to be configured.

The orb is the most configurable visualizer in the set. It works best when the visualizer is the centerpiece of the screen rather than a supporting detail.

import { Orb } from "@workspace/ui-web/voice/orb";
import { useRef } from "react";

export function OrbVisualizer() {
  const colorsRef = useRef<["#93c5fd", "#1d4ed8"]>(["#93c5fd", "#1d4ed8"]);
  const inputVolumeRef = useRef(0.2);
  const outputVolumeRef = useRef(0.45);

  return (
    <Orb
      colors={["#93c5fd", "#1d4ed8"]}
      colorsRef={colorsRef}
      resizeDebounce={0}
      seed={7}
      agentState="thinking"
      volumeMode="manual"
      manualInput={0.2}
      manualOutput={0.45}
      inputVolumeRef={inputVolumeRef}
      outputVolumeRef={outputVolumeRef}
      getInputVolume={() => 0.2}
      getOutputVolume={() => 0.45}
    />
  );
}

| Prop | Purpose |
| --- | --- |
| `colors` | Sets the base gradient pair. |
| `colorsRef` | Updates colors dynamically without remounting. |
| `resizeDebounce` | Controls how quickly the canvas reacts to resize changes. |
| `seed` | Keeps the visual pattern deterministic. |
| `agentState` | Drives the current animation state. |
| `manualInput` / `manualOutput` | Pass explicit input and output levels. |
| `volumeMode` | Chooses automatic or manual volume control. |
| `inputVolumeRef` / `outputVolumeRef` | Provide refs for external live volume data. |
| `getInputVolume` / `getOutputVolume` | Pull volume from callbacks instead of refs. |
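If you drive the orb manually, you need a live volume number to feed it. One common approach (sketched here as an assumption, since the starter may compute volume differently) is to derive an RMS level from a Web Audio `AnalyserNode` buffer:

```typescript
// Sketch: a 0..1 volume level from Web Audio time-domain bytes, usable
// in a getInputVolume / getOutputVolume callback. The helper name is
// ours, not a library API.
function rmsVolume(samples: Uint8Array): number {
  if (samples.length === 0) return 0;
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    // getByteTimeDomainData centers silence at 128.
    const centered = (samples[i] - 128) / 128;
    sum += centered * centered;
  }
  return Math.sqrt(sum / samples.length);
}
```

In the browser you would refill the buffer each frame with `analyser.getByteTimeDomainData(buffer)` and pass `getInputVolume={() => rmsVolume(buffer)}` to the orb.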

Under the hood

Although the public API is intentionally small, the visualizer system is doing real session work for you. It ties motion to actual voice state instead of treating animation as decoration.

  • On web, the wrapper chooses a visualizer style from the active voice settings and adapts it to the current theme, session state, and live input and output volume.
  • On mobile, the implementation stays closer to a single native pattern and focuses on keeping the visualization readable and stable in a compact layout.
  • Both versions react to the assistant lifecycle, so connecting, listening, thinking, and speaking can each look distinct.
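A minimal sketch of that lifecycle mapping, assuming simple session flags (the flag names and function are hypothetical, not the wrapper's real API):

```typescript
// Hypothetical sketch: the state names match the lifecycle above, but
// the flags and this helper are illustrative, not the actual wrapper.
type AgentState = "connecting" | "listening" | "thinking" | "speaking";

function deriveAgentState(session: {
  connected: boolean;
  assistantSpeaking: boolean;
  awaitingResponse: boolean;
}): AgentState {
  if (!session.connected) return "connecting";
  if (session.assistantSpeaking) return "speaking"; // assistant audio wins
  if (session.awaitingResponse) return "thinking"; // waiting on the model
  return "listening"; // default: the mic is open
}
```

The returned value can be passed straight into a visualizer's `agentState` prop so each phase gets a distinct animation.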

<VoiceVisualizer /> usually lives at the center of a broader voice session. The related voice-session guides are the most useful companions when you are building out the rest of that surface.
