Chat

Build a powerful AI assistant with multiple LLMs, generative UI, web browsing, and image analysis.

The Chat demo application showcases an advanced AI assistant capable of engaging in complex conversations, browsing the web, and working with file attachments. It integrates multiple large language models (LLMs), supports reasoning-enabled models, and streams responses in real time.


Features

The chat app offers a variety of capabilities for an enhanced conversational experience:

Multi-model integration

Switch between models from providers like OpenAI, Anthropic, Google AI, xAI, and DeepSeek from one consistent chat interface.

Deep reasoning

Use reasoning-enabled models that work through complex questions step by step and deliver thoughtful, nuanced responses.

Live web information

Access up-to-the-minute information from the web through the integrated search capability powered by Tavily AI.

File sharing

Enrich conversations by sharing and analyzing file attachments directly in the chat interface.

Instant response delivery

Enjoy natural, fluid conversations with responses that stream in real time, eliminating waiting periods.

Conversation history

Seamlessly manage your conversation history with features to save, organize, and revisit previous discussions.

Setup

To implement your advanced AI assistant, you'll need several services configured. If you haven't set these up yet, start with the sections below.

AI models

Different models offer varying capabilities for tool calling, reasoning, and file processing. Consider these differences when selecting the optimal model for your specific use case.

The Chat app uses the AI SDK to support multiple language and vision-capable models, so you can switch models based on your needs and explore the providers most relevant to your use case.

For detailed configuration of specific providers and other supported models, refer to the AI SDK documentation.
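To make model switching concrete, here is a minimal, self-contained sketch of a capability-aware model registry. The model ids, provider names, and the `pickModel` helper are illustrative assumptions, not the actual TurboStarter code; the real app wires providers through the AI SDK.

```typescript
// Illustrative model registry, not the actual TurboStarter code.
// Each entry records the provider and the capabilities that matter
// when picking a model (tool calling, reasoning, vision).
type Capability = "tools" | "reasoning" | "vision";

interface ModelEntry {
  provider: "openai" | "anthropic" | "google" | "xai" | "deepseek";
  capabilities: Capability[];
}

// Hypothetical model ids, for illustration only.
const MODELS: Record<string, ModelEntry> = {
  "gpt-4o": { provider: "openai", capabilities: ["tools", "vision"] },
  "claude-sonnet": {
    provider: "anthropic",
    capabilities: ["tools", "reasoning", "vision"],
  },
  "deepseek-r1": { provider: "deepseek", capabilities: ["reasoning"] },
};

// Pick the first registered model that supports every required capability.
function pickModel(required: Capability[]): string | undefined {
  return Object.keys(MODELS).find((id) =>
    required.every((cap) => MODELS[id].capabilities.includes(cap)),
  );
}

console.log(pickModel(["reasoning", "vision"])); // "claude-sonnet"
```

A registry like this keeps the "which model can do what" decision in one place, which is useful when a conversation needs tool calling or file analysis that only some models support.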

Web browsing

The chat app uses Tavily AI to provide real-time web search capabilities. Tavily is a search engine optimized for LLM and agent workflows, returning structured, AI-friendly search results.

We selected Tavily because it dramatically simplifies the integration of current web data into AI applications through a single API call that returns comprehensive, AI-ready search results.

Free tier available

Tavily offers a generous free tier with 1,000 API credits per month without requiring credit card information. A basic search consumes 1 credit, while an advanced search uses 2 credits. Paid plans are available for higher volume usage.

To enable web browsing, follow these steps:

Get Tavily API Key

Sign up or log in at the Tavily Platform to obtain your API key from the dashboard.

Add API Key to Environment

Add your API key to your project's .env file (e.g., in apps/web):

.env
TAVILY_API_KEY=tvly-your-api-key

With the API key properly configured, the chat app can use Tavily for searches when contextually appropriate.
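As a rough illustration, a Tavily search is a single HTTP POST. The sketch below only builds the request; the endpoint URL and body fields are assumptions based on Tavily's REST API, so verify them against the current Tavily documentation before relying on this.

```typescript
// Sketch of calling Tavily's search endpoint directly via fetch.
// Endpoint URL and body fields are assumptions; check Tavily's docs.
interface TavilySearchRequest {
  url: string;
  init: { method: "POST"; headers: Record<string, string>; body: string };
}

function buildTavilySearch(
  apiKey: string,
  query: string,
  depth: "basic" | "advanced" = "basic", // basic = 1 credit, advanced = 2
): TavilySearchRequest {
  return {
    url: "https://api.tavily.com/search",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ query, search_depth: depth }),
    },
  };
}

// Usage (requires a valid TAVILY_API_KEY):
// const { url, init } = buildTavilySearch(process.env.TAVILY_API_KEY!, "latest AI news");
// const results = await fetch(url, init).then((r) => r.json());
```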

Data persistence

User interactions and chat history are persisted to ensure a continuous experience across sessions.

Database

Learn more about the database service in TurboStarter AI.

Conversation data is organized within a dedicated PostgreSQL schema named chat to maintain clear separation from other application data.

  • chat: stores records for each conversation session, including metadata like userId, name, and timestamps.
  • message: stores individual messages linked to a parent chat.
  • part: stores structured message parts, including text parts and file parts.
  • usage: stores model/provider usage metadata for assistant responses.
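The relationships between these tables can be sketched as TypeScript types. The field names below are inferred from the descriptions above, not copied from the real schema, and the `threadChat` helper is hypothetical.

```typescript
// Sketch of the chat schema as TypeScript types. Field names are
// inferred from the table descriptions, not the actual schema.
interface Chat {
  id: string;
  userId: string;
  name: string;
  createdAt: Date;
}

interface Message {
  id: string;
  chatId: string; // foreign key to the parent chat
  role: "user" | "assistant";
}

interface Part {
  messageId: string; // foreign key to the parent message
  type: "text" | "file";
  content: string; // text body, or a storage key for file parts
}

// Reassemble a conversation: messages for one chat, each with its parts.
function threadChat(chatId: string, messages: Message[], parts: Part[]) {
  return messages
    .filter((m) => m.chatId === chatId)
    .map((m) => ({ ...m, parts: parts.filter((p) => p.messageId === m.id) }));
}
```

Splitting messages into typed parts is what lets a single assistant turn mix text, file attachments, and tool output without overloading one content column.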

Storage

Learn more about the cloud storage service in TurboStarter AI.

Files shared within conversations are uploaded to cloud storage (S3-compatible), with attachment metadata stored in message parts and signed URLs generated when the files need to be read back.
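The bookkeeping around attachments can be illustrated with two small helpers. Both are hypothetical sketches: the real app delegates key generation and URL signing to its S3-compatible storage service, and the key layout below is an assumption.

```typescript
// Hypothetical helpers for attachment storage keys and signed-URL expiry.
// The real signing happens in the storage service; this only illustrates
// the bookkeeping around it.

// Derive a deterministic, user-scoped object key for an uploaded file.
function attachmentKey(userId: string, chatId: string, fileName: string): string {
  // Normalize the file name so it is safe to embed in an object key.
  const safe = fileName.replace(/[^a-zA-Z0-9._-]/g, "_");
  return `chat/${userId}/${chatId}/${safe}`;
}

// Check whether a signed URL issued at `issuedAt` with a TTL of
// `ttlSeconds` is still valid at `now`.
function isSignedUrlValid(issuedAt: Date, ttlSeconds: number, now: Date): boolean {
  return now.getTime() < issuedAt.getTime() + ttlSeconds * 1000;
}
```

Scoping keys by user and chat keeps access checks simple, and short-lived signed URLs mean the bucket itself never has to be public.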

Devtools

TurboStarter AI includes built-in devtools designed to help you inspect, debug, and understand all aspects of the AI chat experience. When you run the development server, they become available at http://localhost:3001.

The devtools provide a detailed view into chat request/response flows, message payloads, model invocations, and step-by-step assistant function calls as they occur.


You can monitor live chat events, observe intermediate reasoning traces, and troubleshoot issues, making it much easier to build, test, and optimize AI-powered conversations with full transparency.

Structure

The Chat functionality is distributed across shared packages and platform-specific modules for web and mobile, ensuring strong code reuse and a consistent product experience.

Core

The shared chat logic lives in @workspace/ai-chat, implemented in packages/ai/chat/src. It includes:

  • Zod schemas for chat payloads and options
  • Model definitions and provider strategy wiring
  • Chat persistence helpers for messages, parts, attachments, and usage
  • Tavily-powered web search tooling
  • Streamed AI responses built on the AI SDK

API

Built with Hono, the packages/api package exposes the chat backend through packages/api/src/modules/ai/chat.ts.

That module validates incoming payloads, applies shared middleware like authentication and credit deduction, and then forwards the request into @workspace/ai-chat, where the chat stream, persistence, attachment handling, and model/tool execution actually happen.
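The validate, authenticate, deduct-credits, then forward flow can be modeled as a plain middleware pipeline. This is a pure-TypeScript sketch of the idea, not the actual Hono code; all names and the one-credit cost are illustrative assumptions.

```typescript
// Pure-TypeScript sketch of the request pipeline described above:
// validate the payload, authenticate, deduct credits, then forward
// to the chat handler. Names are illustrative, not the real API code.
interface ChatPayload {
  chatId: string;
  message: string;
  model: string;
}

interface Ctx {
  userId?: string;
  credits: number;
  payload: ChatPayload;
}

type Middleware = (ctx: Ctx) => Ctx;

const validate: Middleware = (ctx) => {
  if (!ctx.payload.message.trim()) throw new Error("empty message");
  return ctx;
};

const authenticate: Middleware = (ctx) => {
  if (!ctx.userId) throw new Error("unauthorized");
  return ctx;
};

const deductCredits: Middleware = (ctx) => {
  if (ctx.credits < 1) throw new Error("insufficient credits");
  return { ...ctx, credits: ctx.credits - 1 };
};

// Run the middleware chain in order, then hand off to the handler.
function handleChat(ctx: Ctx, handler: (ctx: Ctx) => string): string {
  const final = [validate, authenticate, deductCredits].reduce(
    (acc, mw) => mw(acc),
    ctx,
  );
  return handler(final);
}
```

Keeping each concern as a separate middleware is what lets the same authentication and credit logic be shared with other API modules.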

Web

The Next.js web application in apps/web implements the user-facing chat experience:

  • src/app/[locale]/(apps)/chat/**: route entry points for the chat app
  • src/modules/chat/**: the actual feature modules for composer, history, conversation UI, web search rendering, and attachment handling

Mobile

The Expo/React Native mobile application in apps/mobile delivers a native chat experience:

  • src/app/(apps)/chat/**: route entry points for the mobile chat app
  • src/modules/chat/**: mobile-native chat modules for composer, history, and conversation UI
  • API interaction: uses the same shared Hono client as the web app for consistent backend communication
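Because web and mobile call the same backend, the request-building logic can live in shared code. The sketch below is platform-agnostic and illustrative only; the endpoint path is a hypothetical placeholder, and the real apps use the generated Hono client instead.

```typescript
// Sketch of a platform-agnostic request builder that web and mobile
// could share. The endpoint path is hypothetical, for illustration.
interface SendMessageInput {
  chatId: string;
  message: string;
}

function buildChatRequest(baseUrl: string, input: SendMessageInput) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/api/ai/chat`, // hypothetical path
    init: {
      method: "POST" as const,
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(input),
    },
  };
}
```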

This modular structure promotes separation of concerns and facilitates independent development and scaling of different parts of the application.
