API
Overview of the API service in TurboStarter AI, including its architecture, technology stack, and core functionalities.
For the complete documentation index, see llms.txt. Prefer markdown by appending .md to documentation URLs or sending Accept: text/markdown.
The API service acts as the central hub for all backend logic within TurboStarter AI. It handles interactions with AI models, data processing, and communication between the frontend and backend systems.
Technology
We use Hono, a fast, TypeScript-first web framework. This ensures efficient handling of API requests, especially for AI interactions like streaming responses.
Importantly, this single API layer serves both web and mobile applications, guaranteeing consistent business logic and data handling across all platforms.
In the AI kit, the API is mounted under /api (base path), and routes are grouped by module, for example:
- /api/ai/* for AI features (chat, RAG, image, voice, TTS)
- /api/auth/* for authentication helpers
- /api/storage/* for upload and signed URL helpers
AI integration
While the API package (@workspace/api) exposes the endpoints, most AI functionality is implemented in dedicated AI packages and imported into the API routes. In practice:
- @workspace/ai contains shared AI primitives like credit costs and server helpers (credit balance, deductions, etc.)
- Each demo app has its own AI package (e.g. @workspace/ai-chat, @workspace/ai-rag, @workspace/ai-image, @workspace/ai-tts, @workspace/ai-voice) that contains the module-specific API functions, schemas, and strategy/provider wiring
The AI packages are responsible for:
- Communicating with various AI providers and models (OpenAI, Anthropic, Google AI, etc.)
- Processing and formatting data specifically for AI interactions
- Parsing responses from AI models and producing consistent outputs
- Reading/writing AI module data (chat history, RAG documents/embeddings, image generations, etc.) via @workspace/db and @workspace/storage where needed
The API layer itself focuses on registering Hono routes, applying middleware (auth, validation, credits, etc.), and exposing these AI features to web and mobile clients.
This separation ensures AI-specific logic remains modular and reusable, while the API package stays focused on request handling and routing.
API keys for AI services are managed securely on the backend within these packages, ensuring they never appear client-side.
Middlewares
Hono middlewares streamline request handling by tackling common tasks before the main logic runs. In TurboStarter AI, they handle:
- Authentication: verifying user sessions before allowing access to protected routes (the AI kit starts with anonymous sessions by default)
- Validation: validating query params and JSON bodies with Zod; validation errors can be localized using the i18n layer
- Rate limiting: restricting request frequency (for example, for costly operations like image generation or RAG ingestion)
- Credits management: checking a user's credit balance and deducting costs before running an AI operation
- Localization: detecting the user's locale (cookie / Accept-Language) so API errors and validation messages can be translated
- Security: CORS and CSRF protections where appropriate
These middlewares keep core route logic clean and focused, while consistently enforcing security, usage limits, and data integrity across the API.
Core API documentation
For general information about the API setup, architecture, authentication integration, and how to add new endpoints, please refer to the Core API documentation.
API documentation
Learn about the general API setup, structure, and best practices in the core TurboStarter documentation.
Specific configurations related to AI providers or templates can be found in their respective documentation sections.