Chatbot

Build a powerful AI assistant with multiple LLMs, generative UI, web browsing, and image analysis.

The Chatbot demo application showcases an advanced AI assistant capable of engaging in complex conversations, performing web searches, and understanding context. It integrates multiple large language models (LLMs) and allows users to attach files to the chat window.


Features

The chatbot offers a variety of capabilities for an enhanced conversational experience:

Multi-model integration

Switch effortlessly between leading AI providers like OpenAI and Anthropic within a single, consistent chat interface.

Deep reasoning

Experience an AI that truly understands complex questions and delivers thoughtful, nuanced responses based on comprehensive reasoning.

Live web information

Access up-to-the-minute information directly from the web through the integrated search capability powered by Tavily AI.

File sharing

Enrich conversations by sharing and analyzing files, images, or web links directly within the chat interface for contextual discussion.

Instant response delivery

Enjoy natural, fluid conversations with responses that stream in real time as they are generated, rather than arriving all at once.

Conversation history

Seamlessly manage your conversation history with features to save, organize, and revisit previous discussions.

Setup

To implement your advanced AI assistant, you'll need several services configured. If you haven't set them up yet, work through the sections below.

AI models

Different models offer varying capabilities for tool calling, reasoning, and file processing. Consider these differences when selecting the optimal model for your specific use case.

The Chatbot leverages the AI SDK to support various language and vision models, so you can easily switch between models based on your needs.

For detailed configuration of specific providers and other supported models, refer to the AI SDK documentation.
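As a rough illustration of how model switching can work, the sketch below maps a user-selected model ID to an AI SDK provider instance and streams a completion. The map contents, model IDs, and function names are illustrative assumptions, not the actual TurboStarter implementation.

models.ts (illustrative sketch)
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { streamText } from "ai";

// Map user-facing model IDs to AI SDK provider instances.
const models = {
  "gpt-4o": openai("gpt-4o"),
  "claude-3-5-sonnet": anthropic("claude-3-5-sonnet-latest"),
} as const;

export type ModelId = keyof typeof models;

// Stream a completion with whichever provider the user picked in the UI.
export const streamCompletion = (modelId: ModelId, prompt: string) =>
  streamText({
    model: models[modelId],
    prompt,
  });

Switching providers then comes down to passing a different key from the map, while the rest of the pipeline stays unchanged.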

Web browsing

The chatbot utilizes Tavily AI to provide real-time web search capabilities. Tavily is a specialized search engine optimized for LLMs and AI agents, designed to deliver highly relevant results by automatically handling the complexities of web scraping, filtering, and content extraction.

We selected Tavily because it dramatically simplifies the integration of current web data into AI applications through a single API call that returns comprehensive, AI-ready search results.

Free tier available

Tavily offers a generous free tier with 1,000 API credits per month without requiring credit card information. A basic search consumes 1 credit, while an advanced search uses 2 credits. Paid plans are available for higher volume usage.

To enable web browsing, follow these steps:

Get Tavily API Key

Sign up or log in at the Tavily Platform to obtain your API key from the dashboard.

Add API Key to Environment

Add your API key to your project's .env file (e.g., in apps/web):

.env
TAVILY_API_KEY=tvly-your-api-key

With the API key properly configured, the chatbot will automatically utilize Tavily for searches when contextually appropriate.
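To give a sense of how such a search tool might be wired up, here is a minimal sketch that exposes Tavily through the AI SDK's tool() helper. The tool name, parameter shape, and error handling are simplified assumptions; check Tavily's API reference for the exact request and response format.

search.tool.ts (illustrative sketch)
import { tool } from "ai";
import { z } from "zod";

// Hypothetical web-search tool backed by Tavily's /search endpoint.
export const webSearch = tool({
  description: "Search the web for up-to-date information.",
  parameters: z.object({
    query: z.string().describe("The search query to run"),
  }),
  execute: async ({ query }) => {
    const response = await fetch("https://api.tavily.com/search", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.TAVILY_API_KEY}`,
      },
      body: JSON.stringify({
        query,
        search_depth: "basic", // "basic" uses 1 credit, "advanced" uses 2
        max_results: 5,
      }),
    });

    if (!response.ok) {
      throw new Error(`Tavily search failed with status ${response.status}`);
    }

    return response.json();
  },
});

A tool defined this way can be passed to the model call (for example via the tools option of streamText), letting the model decide when a search is warranted.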

Data persistence

User interactions and chat history are persisted to ensure a continuous experience across sessions.

Database

Learn more about the database service in TurboStarter AI.

Conversation data is organized within a dedicated PostgreSQL schema named chat to maintain clear separation from other application data. It contains the following tables, sketched below:

  • chats: stores records for each conversation session, including essential metadata like user ID and creation timestamp.
  • messages: maintains the content of individual messages exchanged within conversations, linked to their parent chat session.
  • parts: handles complex message structures by breaking down content into smaller components, particularly useful for generative UI elements or multi-modal content.
  • tool_invocations: records instances where the AI model invokes external tools (such as web search or function calls), tracking both inputs and outputs.
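For orientation, such a schema could be declared with, for example, Drizzle ORM roughly as follows. The column names and types are illustrative assumptions and will not match the real tables exactly.

chat.schema.ts (illustrative sketch)
import { pgSchema, uuid, text, jsonb, timestamp } from "drizzle-orm/pg-core";

// Dedicated "chat" schema keeps conversation data separate from other tables.
const chat = pgSchema("chat");

export const chats = chat.table("chats", {
  id: uuid("id").primaryKey().defaultRandom(),
  userId: text("user_id").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});

export const messages = chat.table("messages", {
  id: uuid("id").primaryKey().defaultRandom(),
  chatId: uuid("chat_id").references(() => chats.id).notNull(),
  role: text("role").notNull(), // e.g. "user" | "assistant"
  createdAt: timestamp("created_at").defaultNow().notNull(),
});

export const parts = chat.table("parts", {
  id: uuid("id").primaryKey().defaultRandom(),
  messageId: uuid("message_id").references(() => messages.id).notNull(),
  type: text("type").notNull(), // e.g. "text", "file", "tool-result"
  content: jsonb("content").notNull(),
});

export const toolInvocations = chat.table("tool_invocations", {
  id: uuid("id").primaryKey().defaultRandom(),
  messageId: uuid("message_id").references(() => messages.id).notNull(),
  toolName: text("tool_name").notNull(),
  input: jsonb("input"),
  output: jsonb("output"),
});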

Storage

Learn more about the cloud storage service in TurboStarter AI.

Files shared within conversations (such as images or documents) are uploaded to cloud storage (S3-compatible), with references to these attachments stored within the message content or parts.
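As a rough illustration of that relationship, a file part might store only a pointer to the uploaded object rather than the file itself. The shape below is an assumption for clarity, not the actual part format used by the app.

attachment.types.ts (illustrative sketch)
// Hypothetical shape of a file part persisted alongside a message.
// Only a reference to the uploaded object is stored, never the file bytes.
export interface FilePart {
  type: "file";
  /** Key of the object in the S3-compatible bucket */
  storageKey: string;
  /** MIME type used to decide how to render or analyze the attachment */
  mediaType: string;
  /** Original filename shown in the chat UI */
  name: string;
}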

Structure

The Chatbot functionality is thoughtfully distributed across shared packages and platform-specific code for web and mobile, ensuring optimal code reuse and consistency.

Core

The @turbostarter/ai package, located in packages/ai, contains the central chat functionality in the src/chat directory. It includes:

  • Essential constants, types, and validation schemas for chat interactions
  • Core API logic for managing conversations and messages
  • Comprehensive chat history persistence and retrieval functionality
  • AI model provider configuration and initialization
  • Integrations for external tools like web search

API

Built with Hono, the packages/api package defines all API endpoints. Chat-specific routes are organized under src/modules/ai/chat:

  • chat.router.ts: establishes Hono RPC routes, handles input validation, and connects frontend requests to the core AI logic in packages/ai (sketched below)
  • Manages authentication, request processing, and database interactions through the core package
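A route along these lines might look roughly like the following; the paths, validation schema, and the streamChat helper are illustrative placeholders rather than the actual contents of chat.router.ts.

chat.router.ts (illustrative sketch)
import { Hono } from "hono";
import { zValidator } from "@hono/zod-validator";
import { z } from "zod";

// Hypothetical core helper; in TurboStarter the equivalent logic lives in packages/ai.
declare function streamChat(input: {
  chatId: string;
  message: string;
  model: string;
}): Promise<{ toTextStreamResponse: () => Response }>;

const sendMessageSchema = z.object({
  chatId: z.string(),
  message: z.string().min(1),
  model: z.string(),
});

// Validate the incoming request, delegate to the core AI logic, and stream the reply.
export const chatRouter = new Hono().post(
  "/",
  zValidator("json", sendMessageSchema),
  async (c) => {
    const input = c.req.valid("json");
    const result = await streamChat(input);
    return result.toTextStreamResponse();
  },
);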

Web

The Next.js web application in apps/web implements the user-facing chat interface:

  • src/app/[locale]/(apps)/chat/**: contains the Next.js App Router pages and layouts dedicated to the chat experience
  • src/components/chat/**: houses reusable React components for the chat interface (message bubbles, input area, model selector, etc.); a minimal example follows
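For a sense of how such a component typically talks to the API, here is a minimal sketch built on the AI SDK's useChat hook (v4-style API). The /api/ai/chat path and component structure are assumptions, not the actual TurboStarter components.

chat-panel.tsx (illustrative sketch)
"use client";

import { useChat } from "@ai-sdk/react";

// Hypothetical chat panel wired to the backend route via the AI SDK's useChat hook.
export const ChatPanel = () => {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/ai/chat", // assumed path; point it at your actual chat route
  });

  return (
    <div>
      {messages.map((message) => (
        <p key={message.id}>
          <strong>{message.role}:</strong> {message.content}
        </p>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask anything..."
        />
      </form>
    </div>
  );
};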

Mobile

The Expo/React Native mobile application in apps/mobile delivers a native chat experience:

  • src/app/chat/**: defines the primary screens for the mobile chat interface
  • src/components/chat/**: contains React Native components styled to match the web version, optimized for mobile interaction
  • API interaction: utilizes the same Hono RPC client (packages/api) as the web app for consistent backend communication (illustrated below)
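As an illustration of that shared client, Hono's hc helper can call the typed routes from React Native just as the web app does. The AppType import path and route shape below are assumptions for the sketch.

api-client.ts (illustrative sketch)
import { hc } from "hono/client";
// Hypothetical import: the exported router type from the shared API package.
import type { AppType } from "@turbostarter/api";

// One typed client works for both web and mobile, so requests stay consistent.
const client = hc<AppType>("https://your-api-url.example.com");

// Example call against an assumed chat route; adjust the path to your router.
export const sendMessage = (chatId: string, message: string, model: string) =>
  client.ai.chat.$post({ json: { chatId, message, model } });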

This modular structure promotes separation of concerns and facilitates independent development and scaling of different parts of the application.
