file: ./src/content/docs/(ai)/architecture.mdx
meta: {
"title": "Architecture",
"description": "A quick overview of the different parts of the TurboStarter AI.",
"icon": "StrategyIcon"
}
TurboStarter AI integrates several best-in-class open source libraries to power its diverse functionalities, including authentication, data persistence, text generation, and more. Here's a concise overview of the architecture that makes everything work together.
## Application framework
The project leverages a [monorepo structure](https://turbo.build/repo) powered by [Turborepo](https://turbo.build/) to enable efficient code sharing and consistent tooling across the entire application ecosystem. This approach creates a single source of truth for shared code and dramatically simplifies dependency management.
### Web
Built with [Next.js](https://nextjs.org) and [React](https://react.dev), the web application leverages server-side rendering and static site generation for optimal performance and SEO. The UI is styled with [Tailwind CSS](https://tailwindcss.com) and [shadcn/ui](https://ui.shadcn.com) components for rapid development and consistent design. API routes are handled by [Hono](https://hono.dev) for edge computing, chosen for its minimal overhead and excellent TypeScript support.
### Mobile
The mobile application uses [React Native](https://reactnative.dev) with [Expo](https://expo.dev) for cross-platform development. This combination was selected for its ability to share up to 90% of code between platforms while maintaining native performance. The integration with the monorepo allows seamless sharing of business logic and types with the web application.
## API
The API is implemented as a dedicated package using [Hono](https://hono.dev), a lightweight framework optimized for edge computing. This architectural decision creates a clear separation between frontend and backend logic, enhancing maintainability and testability.
Hono's exceptional TypeScript support ensures type safety across all endpoints, while its minimal footprint and edge-first design deliver outstanding performance.
## Model providers
TurboStarter AI seamlessly integrates with leading AI model providers including [OpenAI](/ai/docs/openai), [Anthropic](/ai/docs/anthropic), [Google AI](/ai/docs/google), [xAI](/ai/docs/xai), and more. The architecture employs [AI SDK](https://sdk.vercel.ai/) to create a unified interface across diverse providers, simplifying experimentation with different models.
The platform strategically utilizes specialized models for distinct AI tasks:
* **Text generation** models for conversational AI and content creation
* **Structured output** models for precise data extraction and formatting
* **Image generation** models for visual content creation
* **Voice synthesis** models for natural audio production
* **Embedding** models for semantic search and information retrieval
Switching models requires just a **one-line code change**, allowing you to rapidly adapt to emerging models or change providers based on your specific requirements. This flexibility ensures your application can leverage the latest AI advancements without extensive refactoring.
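The "one-line change" idea can be sketched as a small model registry: all call sites resolve models through one map, so swapping providers touches a single line. The provider and model ids below are illustrative, not the starter kit's actual configuration.

```typescript
// Minimal sketch of a task-to-model registry enabling one-line model swaps.
// Provider/model ids are illustrative placeholders.
type Task = "text" | "image" | "embedding";

const models: Record<Task, string> = {
  text: "openai:gpt-4o", // swap to "anthropic:claude-3-5-sonnet" with one edit
  image: "openai:dall-e-3",
  embedding: "openai:text-embedding-3-small",
};

// Callers resolve the configured model for a task instead of
// hard-coding a provider, so a registry edit propagates everywhere.
function modelFor(task: Task): string {
  return models[task];
}
```

In practice the registry entries would be AI SDK model instances rather than strings, but the indirection is the same.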
## Authentication
The applications use [Better Auth](https://www.better-auth.com/) for authentication, providing a secure and flexible authentication system. By default, the AI implementation creates an anonymous user session at startup, which is then used for all subsequent queries and interactions with the AI models. This approach maintains user context across sessions while minimizing friction.
For more sophisticated authentication requirements, you can easily extend the flow by leveraging the [Core implementation](/docs/web/auth/overview), which supports email/password authentication, magic links, OAuth providers, and more. This modular design lets you implement precisely the level of security your application demands.
## Persistence
Persistence in TurboStarter AI refers to the system's ability to store and retrieve data from a database. The application uses [PostgreSQL](https://www.postgresql.org/) as its primary database to store critical information such as:
* Chat history and conversation context
* User accounts and preference settings
* Vector embeddings for retrieval-augmented generation
To interact with the database from route handlers and server actions, TurboStarter AI leverages [Drizzle ORM](https://orm.drizzle.team/), a high-performance TypeScript ORM that provides type-safe database operations. This ensures robust data integrity and simplified query construction throughout the application.
A key advantage of Drizzle is its compatibility with multiple database providers including [Neon](https://neon.tech/), [Supabase](https://supabase.com/), and [PlanetScale](https://planetscale.com/). This flexibility allows seamless switching between providers based on your specific requirements without modifying queries or schema definitions, making your application highly adaptable to evolving infrastructure needs.
## Blob storage
File storage is managed through S3-compatible services, providing scalable, reliable storage for diverse file types. The system efficiently handles user-uploaded images, AI-generated content, and document files. This approach ensures optimal file management and straightforward integration with various storage providers including [AWS S3](https://aws.amazon.com/s3/), [Cloudflare R2](https://www.cloudflare.com/products/r2/), or [MinIO](https://min.io/).
## Security
Security is implemented comprehensively to protect both the application and its users. All API endpoints incorporate **rate limiting** to prevent abuse and ensure fair resource allocation.
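The rate-limiting idea can be sketched as a fixed-window counter per user. This is a minimal in-memory illustration, not the starter kit's implementation; a real deployment would back the counters with shared storage such as Redis.

```typescript
// Fixed-window rate limiter sketch: allow at most MAX_REQUESTS
// per user per window. Limits are illustrative.
const WINDOW_MS = 60_000; // 1-minute window
const MAX_REQUESTS = 10;

const hits = new Map<string, { windowStart: number; count: number }>();

// Returns true if the request is within the user's current window budget.
function allowRequest(userId: string, now = Date.now()): boolean {
  const entry = hits.get(userId);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    // New window: reset the counter for this user.
    hits.set(userId, { windowStart: now, count: 1 });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS;
}
```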
The system uses a **credits-based access** control system, where each user has a limited number of credits for AI operations, preventing resource exhaustion and enabling monetization options.
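Credits-based access control amounts to checking and deducting a per-operation cost before running an AI task. The costs and in-memory balance store below are illustrative assumptions, not the kit's actual values.

```typescript
// Sketch of credits-based access control: each AI operation has a fixed
// cost and is rejected once the user's balance runs out.
const COSTS = { chat: 1, image: 5, tts: 2 } as const; // illustrative costs
type Operation = keyof typeof COSTS;

const balances = new Map<string, number>();

function grantCredits(userId: string, amount: number): void {
  balances.set(userId, (balances.get(userId) ?? 0) + amount);
}

// Deducts the cost and returns true if the user can afford the operation.
function consumeCredits(userId: string, op: Operation): boolean {
  const balance = balances.get(userId) ?? 0;
  const cost = COSTS[op];
  if (balance < cost) return false;
  balances.set(userId, balance - cost);
  return true;
}
```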
All external API interactions, including those with AI model providers, occur exclusively server-side. This ensures that sensitive API keys are **never exposed** to client-side code, significantly reducing vulnerability to unauthorized access or credential theft.
Additionally, the system implements industry-standard security practices including thorough input validation, proper authentication enforcement, and regular dependency security audits.
file: ./src/content/docs/(ai)/index.mdx
meta: {
"title": "Get started",
"description": "An overview of the TurboStarter AI starter kit.",
"icon": "PlayIcon",
"index": true
}
TurboStarter AI is a **starter kit with ready-to-use demo apps** that helps you quickly build powerful AI applications without starting from scratch. Whether you're launching a small side project or a full-scale enterprise solution, it provides the structure you need to jump right into building your own unique AI application.
## Features
TurboStarter AI comes packed with features designed to accelerate your development process:
* Core framework
* AI
* Data storage
* Authentication
* User interface
## Demo apps
TurboStarter AI includes several production-ready demo applications that showcase diverse AI capabilities. Use these examples to understand implementation patterns and jumpstart your own projects.
## Scope of this documentation
This documentation focuses specifically on the AI features, architecture, and demo applications included in the **TurboStarter AI** kit. While we provide comprehensive coverage of AI integrations, for information about core framework elements (authentication, billing, etc.), please refer to the [Core documentation](/docs/web).
Our goal is to guide you through setting up, customizing, and deploying the AI starter kit efficiently. Where relevant, we include links to official documentation for the integrated AI providers and libraries.
## Setup
Getting started with TurboStarter AI requires configuring the core applications first. For detailed setup instructions, refer to the [Core documentation](/docs/web) guides for the web and mobile applications.
After establishing the core applications, you can configure specific AI providers and demo applications using the dedicated sections in this documentation (see sidebar). For a quick start, you might also want to check our [TurboStarter CLI guide](/blog/the-only-turbo-cli-you-need-to-start-your-next-project-in-seconds) to bootstrap your project in seconds.
When working with the AI starter kit, remember to use the `ai` repository instead of `core` for Git commands. For example, use `git clone https://github.com/turbostarter/ai` rather than `git clone https://github.com/turbostarter/core`.
## Deployment
Deploying TurboStarter AI follows the same process as deploying the core web application. Ensure you configure all necessary environment variables, including those for your selected AI providers (like [OpenAI](/ai/docs/openai), [Anthropic](/ai/docs/anthropic), etc.), in your deployment environment.
For comprehensive deployment instructions across various platforms, consult the core deployment guides; for mobile app store deployment, refer to the mobile publishing guides.
Each AI demo app may have specific deployment considerations, so check their dedicated documentation sections for additional guidance.
## `llms.txt`
Access the complete TurboStarter documentation in Markdown format at [/api/llms.txt](/api/llms.txt). This file contains all documentation in an LLM-friendly format, enabling you to ask questions about TurboStarter using the most current information.
### Example usage
To query an LLM about TurboStarter:
1. Copy the documentation contents from [/api/llms.txt](/api/llms.txt)
2. Use this prompt format with your preferred LLM:
```
Documentation:
{paste documentation here}
---
Based on the above documentation, answer the following:
{your question}
```
## Let's build amazing AI!
We're excited to help you create innovative AI-powered applications quickly and efficiently. If you have questions, encounter issues, or want to showcase your creations, connect with our community:
* [Follow updates on X](https://x.com/turbostarter_)
* [Join our Discord](https://discord.gg/KjpK2uk3JP)
* [Report issues on GitHub](https://github.com/turbostarter)
* [Contact us via email](mailto:hello@turbostarter.dev)
Happy building!
file: ./src/content/docs/(ai)/stack.mdx
meta: {
"title": "Tech stack",
"description": "Learn which tools and libraries power TurboStarter AI.",
"icon": "Tools"
}
## Turborepo
[Turborepo](https://turbo.build/) is a high-performance monorepo tool that optimizes dependency management and script execution across your project. We chose this monorepo setup to simplify feature management and enable seamless code sharing between packages.
## Next.js
[Next.js](https://nextjs.org) is a powerful [React](https://react.dev) framework that delivers server-side rendering, static site generation, and more. We selected Next.js for its exceptional flexibility and developer experience. It also serves as the foundation for our serverless API.
## React Native + Expo
[React Native](https://reactnative.dev/) is a leading open-source framework created by Facebook that enables building native mobile applications using [React](https://react.dev). It provides access to native platform capabilities while maintaining the development efficiency of React.
[Expo](https://expo.dev/) extends React Native with a comprehensive toolkit that streamlines development, building, and deployment of iOS, Android, and web apps from a single codebase.
## AI SDK
[Vercel AI SDK](https://sdk.vercel.ai/) provides a robust toolkit for building AI-powered applications. It offers essential utilities and components for integrating advanced AI features, including streaming responses, interactive chat interfaces, and more.
## LangChain
[LangChain](https://js.langchain.com/) is a sophisticated framework designed for language model-powered applications. It delivers critical abstractions and tools for building complex AI systems, including prompt management, memory systems, and agent architectures.
## Hono
[Hono](https://hono.dev) is an ultrafast, lightweight web framework optimized for edge computing. It includes a type-safe RPC client for secure function calls from the frontend. We leverage Hono to create efficient serverless API endpoints.
## Tailwind CSS
[Tailwind CSS](https://tailwindcss.com) is a utility-first CSS framework that accelerates UI development without writing custom CSS. We complement it with [Radix UI](https://radix-ui.com), a collection of accessible headless components, and [shadcn/ui](https://ui.shadcn.com), which lets you generate beautifully designed components with a single command.
## Drizzle
[Drizzle](https://orm.drizzle.team/) is a type-safe, high-performance [ORM](https://orm.drizzle.team/docs/overview) (Object-Relational Mapping) for modern database management. It generates TypeScript types from your schema and enables fully type-safe queries.
We use [PostgreSQL](https://www.postgresql.org) as our default database, but Drizzle's flexibility allows you to easily switch to MySQL, SQLite, or any [other supported database](https://orm.drizzle.team/docs/connect-overview) by updating a few configuration lines.
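Those "few configuration lines" live in the Drizzle Kit config. A minimal sketch, assuming Drizzle Kit's `defineConfig` API and hypothetical file paths; changing the `dialect` (plus the matching driver and connection string) is the bulk of a provider switch:

```typescript
// drizzle.config.ts (sketch; paths and env var names are assumptions)
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "postgresql", // switch to "mysql" or "sqlite" here
  schema: "./src/db/schema.ts",
  out: "./drizzle",
  dbCredentials: {
    url: process.env.DATABASE_URL!,
  },
});
```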
file: ./src/content/docs/(ai)/(apps)/agents.mdx
meta: {
"title": "Agents",
"description": "Build powerful, autonomous AI agents capable of performing complex tasks within your web and mobile applications.",
"icon": "WorkflowCircle01Icon"
}
This feature is currently under development and will be available in a future release. [See roadmap](https://github.com/orgs/turbostarter/projects/1).
The AI Agents demo will showcase how to create intelligent, autonomous agents capable of executing complex tasks within your web and mobile applications.
These agents will leverage advanced AI techniques to interact with users, tools, and data sources.
## Features
* Design agents once and deploy them seamlessly across multiple platforms including React, React Native, Expo, and Next.js through a unified architecture.
* Implement sophisticated context retention that allows agents to maintain state and recall critical information across conversations and devices with perfect continuity.
* Enable agents to take meaningful actions by integrating with external tools, accessing APIs, and executing functions dynamically within secure, controlled environments.
* Leverage the [Model Context Protocol](https://modelcontextprotocol.io/) to standardize context delivery between agents and Large Language Models (LLMs). This enables frictionless connections to diverse data sources and tools, dramatically enhancing agent capabilities.
* Orchestrate complex workflows combining Retrieval-Augmented Generation (RAG), tool utilization, and MCP server interactions to solve sophisticated tasks that previously required human intervention.
Stay tuned for the release of this exciting functionality!
file: ./src/content/docs/(ai)/(apps)/chat.mdx
meta: {
"title": "Chatbot",
"description": "Build a powerful AI assistant with multiple LLMs, generative UI, web browsing, and image analysis.",
"icon": "Chatting01Icon"
}
The [Chatbot](https://ai.turbostarter.dev/chat) demo application showcases an advanced AI assistant capable of engaging in complex conversations, performing web searches, and understanding context. It integrates multiple large language models (LLMs) and allows users to attach files to the chat window.
## Features
The chatbot offers a variety of capabilities for an enhanced conversational experience:
* Switch effortlessly between leading AI providers like [OpenAI](/ai/docs/openai) and [Anthropic](/ai/docs/anthropic) within a single, consistent chat interface.
* Experience an AI that truly understands complex questions and delivers thoughtful, nuanced responses based on comprehensive reasoning.
* Access up-to-the-minute information directly from the web through the integrated search capability powered by [Tavily AI](https://tavily.com/).
* Enrich conversations by sharing and analyzing files, images, or web links directly within the chat interface for contextual discussion.
* Enjoy natural, fluid conversations with responses that stream in real time, eliminating waiting periods.
* Seamlessly manage your conversation history with features to save, organize, and revisit previous discussions.
## Setup
To implement your advanced AI assistant, you'll need several services configured. If you haven't set these up yet, start with the sections below.
### AI models
Different models offer varying capabilities for tool calling, reasoning, and file processing. Consider these differences when selecting the optimal model for your specific use case.
The Chatbot leverages the AI SDK to support various language and vision models, so you can easily switch between models based on your needs. Provider guides are available for [OpenAI](/ai/docs/openai), [Anthropic](/ai/docs/anthropic), [Google AI](/ai/docs/google), and [xAI](/ai/docs/xai).
For detailed configuration of specific providers and other supported models, refer to the [AI SDK documentation](https://sdk.vercel.ai/providers/ai-sdk-providers).
### Web browsing
The chatbot utilizes [Tavily AI](https://tavily.com/) to provide real-time web search capabilities. Tavily is a specialized search engine optimized for LLMs and AI agents, designed to deliver highly relevant search results by automatically handling the complexities of web scraping, filtering, and extracting relevant information.
We selected Tavily because it dramatically simplifies the integration of current web data into AI applications through a single API call that returns comprehensive, AI-ready search results.
Tavily offers a generous free tier with [1,000 API credits per month](https://docs.tavily.com/documentation/api-credits) without requiring credit card information. A basic search consumes 1 credit, while an advanced search uses 2 credits. Paid plans are available for higher volume usage.
To enable web browsing, follow these steps:
#### Get Tavily API Key
Sign up or log in at the [Tavily Platform](https://app.tavily.com/sign-in) to obtain your API key from the dashboard.
#### Add API Key to Environment
Add your API key to your project's `.env` file (e.g., in `apps/web`):
```bash title=".env"
TAVILY_API_KEY=tvly-your-api-key
```
With the API key properly configured, the chatbot will automatically utilize Tavily for searches when contextually appropriate.
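A sketch of how a search request body might be assembled before hitting Tavily's `POST /search` endpoint. The exact payload fields are an assumption drawn from Tavily's public API; verify them against the official reference before relying on this shape.

```typescript
// Sketch of building a Tavily search request body.
// Field names follow Tavily's documented search API, but treat this
// shape as an assumption and check the current API reference.
interface TavilySearchRequest {
  query: string;
  search_depth: "basic" | "advanced"; // basic = 1 credit, advanced = 2
  max_results: number;
}

function buildSearchRequest(query: string, advanced = false): TavilySearchRequest {
  return {
    query,
    search_depth: advanced ? "advanced" : "basic",
    max_results: 5, // illustrative default
  };
}

// The actual call would POST this body to https://api.tavily.com/search
// with the TAVILY_API_KEY from your environment.
```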
## Data persistence
User interactions and chat history are persisted to ensure a continuous experience across sessions.
Conversation data is organized within a dedicated PostgreSQL schema named `chat` to maintain clear separation from other application data. It includes the following tables:
* `chats`: stores records for each conversation session, including essential metadata like user ID and creation timestamp.
* `messages`: maintains the content of individual messages exchanged within conversations, linked to their parent chat session.
* `parts`: handles complex message structures by breaking down content into smaller components, particularly useful for generative UI elements or multi-modal content.
* `tool_invocations`: records instances where the AI model invokes external tools (such as web search or function calls), tracking both inputs and outputs.
Files shared within conversations (such as images or documents) are uploaded to [cloud storage](/ai/docs/storage) (S3-compatible), with references to these attachments stored within the message content or parts.
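The relational layout described above can be modeled as plain TypeScript types. Field names here are illustrative; the actual Drizzle schema in `packages/ai` may differ.

```typescript
// Illustrative model of the chat persistence relations:
// a chat owns messages, and parts decompose a message.
interface Chat {
  id: string;
  userId: string;
  createdAt: number;
}
interface Message {
  id: string;
  chatId: string; // foreign key to its parent chat
  role: "user" | "assistant";
  content: string;
}
interface Part {
  id: string;
  messageId: string; // foreign key to its parent message
  type: "text" | "image";
  payload: string;
}

// Collect the parts that belong to a given message.
function partsOf(message: Message, parts: Part[]): Part[] {
  return parts.filter((p) => p.messageId === message.id);
}
```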
## Structure
The Chatbot functionality is thoughtfully distributed across shared packages and platform-specific code for web and mobile, ensuring optimal code reuse and consistency.
### Core
The `@turbostarter/ai` package, located in `packages/ai`, contains the central chat functionality in the `src/chat` directory. It includes:
* Essential constants, types, and validation schemas for chat interactions
* Core API logic for managing conversations and messages
* Comprehensive chat history persistence and retrieval functionality
* AI model provider configuration and initialization
* Integrations for external tools like web search
### API
Built with Hono, the `packages/api` package defines all API endpoints. Chat-specific routes are organized under `src/modules/ai/chat`:
* `chat.router.ts`: establishes Hono RPC routes, handles input validation, and connects frontend requests to the core AI logic in `packages/ai`
* Manages authentication, request processing, and database interactions through the core package
### Web
The Next.js web application in `apps/web` implements the user-facing chat interface:
* `src/app/[locale]/(apps)/chat/**`: contains the Next.js App Router pages and layouts dedicated to the chat experience
* `src/components/chat/**`: houses reusable React components for the chat interface (message bubbles, input area, model selector, etc.)
### Mobile
The Expo/React Native mobile application in `apps/mobile` delivers a native chat experience:
* `src/app/chat/**`: defines the primary screens for the mobile chat interface
* `src/components/chat/**`: contains React Native components styled to match the web version, optimized for mobile interaction
* **API interaction**: utilizes the same Hono RPC client (`packages/api`) as the web app for consistent backend communication
This modular structure promotes separation of concerns and facilitates independent development and scaling of different parts of the application.
file: ./src/content/docs/(ai)/(apps)/image.mdx
meta: {
"title": "Image Generation",
"description": "Learn how to generate images using AI models within the TurboStarter AI demo application.",
"icon": "Image02Icon"
}
The [Image Generation](https://ai.turbostarter.dev/image) demo application allows users to create unique visuals from textual descriptions using various AI models. It provides a simple interface to input prompts, select models, and view generated images.
## Features
Explore the capabilities of the AI-powered image generation tool:
* Create images simply by describing what you want to see in text.
* Choose from different AI image generation models offered by various providers.
* Select the desired aspect ratio for your generated images (e.g. square, landscape, portrait).
* Create multiple design variations from a single prompt simultaneously, accelerating your creative workflow.
* Access and reference your complete generation history, including all prompts and resulting images for continued iteration.
## Setup
To implement image generation in your application, you'll need to configure the necessary backend services.
You'll also need API keys for your preferred AI models. Follow the detailed setup instructions in the provider documentation linked below.
## AI models
The Image Generation app leverages the AI SDK to support various models capable of creating images from text. Configure the providers for the models you wish to use.
For detailed implementation guidance, refer to the [AI SDK documentation](https://sdk.vercel.ai/docs/ai-sdk-core/image-generation) covering the `generateImage` function and supported providers.
## Data persistence
Details about image generation requests and the resulting images are stored to maintain user history.
Data is organized within a dedicated PostgreSQL schema named `image`:
* `generations`: captures detailed information about each generation request, including the `prompt`, selected `model`, `aspectRatio`, requested image `count`, `userId`, and precise timestamps.
* `images`: stores complete metadata for each generated image, linked to its parent `generation` record via `generationId` and maintaining the `url` reference to the stored image file.
The generated image files are securely stored in [cloud storage](/ai/docs/storage) (S3-compatible). Each image's location is tracked via the `url` field in the `images` table for reliable retrieval.
## Structure
The Image Generation feature is architected across the monorepo for optimal code organization and reusability.
### Core
The `@turbostarter/ai` package (`packages/ai`) contains the essential logic under `modules/image`:
* Comprehensive types, validation schemas (for prompts, aspect ratios, etc.), and constants
* Core API logic for processing image generation requests and interfacing with AI models
* Database operations for recording generation details and image metadata
* Utilities for uploading generated images to cloud storage
### API
The `packages/api` package defines the backend API endpoints using Hono:
* `src/modules/ai/image/image.router.ts`: implements Hono RPC routes for image generation, handles input validation, applies necessary middleware (authentication, credit management), and invokes the core logic from `@turbostarter/ai`.
### Web
The Next.js application (`apps/web`) delivers an intuitive user interface:
* `src/app/[locale]/(apps)/image/**`: contains the Next.js App Router pages and layouts for the image generation experience
* `src/components/image/**`: houses reusable React components tailored to the image generation UI (prompt input, model selector, image gallery, etc.)
### Mobile
The Expo/React Native application (`apps/mobile`) provides a native mobile experience:
* `src/app/image/**`: defines the screens for the mobile image generation interface
* `src/components/image/**`: contains React Native components optimized for mobile interaction
* **API integration**: utilizes the same Hono RPC client (`packages/api`) as the web app for consistent backend communication
This architecture ensures perfect consistency across platforms while enabling tailored UI implementations optimized for each environment.
file: ./src/content/docs/(ai)/(apps)/pdf.mdx
meta: {
"title": "Chat with PDF",
"description": "Engage in conversations with your PDF documents using AI to extract insights and answer questions.",
"icon": "File01Icon"
}
The [Chat with PDF](https://ai.turbostarter.dev/pdf) demo application enables intelligent interaction with document content through a conversational AI interface. Upload PDF files and instantly engage in natural dialogue about their contents, asking questions, requesting summaries, and extracting key information with remarkable accuracy.
## Features
Transform how you interact with document content through these powerful capabilities:
* Easily upload PDF files directly into the application for analysis.
* Chat with an AI that understands the content of your uploaded PDF, providing relevant answers based on the text.
* Quickly find specific information, key points, or summaries within the document through natural language queries.
* Visualize exactly which document sections informed the AI's responses with precise source highlighting.
* Conduct sophisticated conversations spanning multiple uploaded documents, enabling cross-document analysis and comparison.
## Setup
To implement the "Chat with PDF" application in your project, configure these essential backend services:
* Set up PostgreSQL with the `pgvector` extension to efficiently store conversation history, document metadata, and vector embeddings for semantic search.
* Configure S3-compatible cloud storage for secure management of uploaded PDF documents.
You'll also need to obtain API keys for both the conversational AI models and the embedding models used for text processing.
## AI models
This application leverages two complementary AI model types working together:
1. **Large Language Models (LLMs):** Provide sophisticated natural language understanding to interpret your questions and generate contextually appropriate responses based on document content.
2. **Embedding Models:** Convert document text segments into numerical vector representations that enable efficient semantic similarity search and [Retrieval-Augmented Generation (RAG)](https://en.wikipedia.org/wiki/Retrieval-augmented_generation).
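Retrieval works by ranking stored embeddings against the query embedding. The core primitive is cosine similarity, sketched below; in production, `pgvector` performs the equivalent comparison inside the database rather than in application code.

```typescript
// Cosine similarity between two embedding vectors: the ranking
// primitive behind semantic search in a RAG pipeline.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; // accumulate dot product
    normA += a[i] * a[i]; // accumulate squared magnitudes
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Identical directions score 1, orthogonal vectors score 0, so the top-scoring chunks are the ones fed to the LLM as context.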
Configure the providers for the models you wish to use:
} />
} />
} />
} />
For comprehensive configuration details, consult the [AI SDK documentation](https://sdk.vercel.ai/docs) covering provider setup and model selection.
## Data persistence
The application stores data related to chats, documents, and embeddings to provide a persistent experience.
Application data is organized within a dedicated PostgreSQL schema named `pdf`:
* `chats`: captures essential metadata for each document-specific conversation session.
* `messages`: stores all user queries and AI responses within conversation threads.
* `documents`: maintains comprehensive tracking of uploaded PDF files, including filenames and storage locations.
* `embeddings`: contains text segments extracted from PDFs along with their vector representations (using [`pgvector`](https://github.com/pgvector/pgvector)'s `vector` data type). To optimize similarity searches critical for RAG processing, the system creates an index (`embeddingIndex` using [HNSW](https://github.com/pgvector/pgvector#hnsw)) on the `embedding` column.
The PDF files uploaded by users are securely stored in your configured [cloud storage](/ai/docs/storage) bucket. The `path` field in the `documents` table maintains the precise reference to each file's location.
## Structure
The "Chat with PDF" feature is architected across the monorepo for optimal organization and code reuse:
### Core
The `@turbostarter/ai` package (`packages/ai`) contains the essential logic under `modules/pdf`:
* Comprehensive types, validation schemas, and constants specific to PDF processing
* Advanced document parsing, text segmentation, and embedding generation utilities
* Core API logic for managing conversations, performing RAG-based lookups, and interacting with LLMs
* Database operations for storing and retrieving conversations, documents, and embeddings
* Shared utilities for managing PDF file uploads and downloads
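The text segmentation step mentioned above can be sketched as fixed-size chunking with overlap, the common preprocessing step before computing embeddings. Chunk sizes here are illustrative, not the kit's actual parameters.

```typescript
// Minimal sketch of fixed-size text chunking with overlap.
// Overlap preserves context across chunk boundaries so a sentence
// split at a boundary still appears whole in at least one chunk.
function chunkText(text: string, size = 500, overlap = 50): string[] {
  if (overlap >= size) throw new Error("overlap must be smaller than size");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}
```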
### API
The `packages/api` package defines the backend API endpoints using [Hono](https://hono.dev/):
* `src/modules/ai/pdf/pdf.router.ts`: implements Hono RPC routes for document upload and conversation management, handles input validation, applies middleware (authentication, credit management), and invokes the core functionality from `@turbostarter/ai`.
### Web
The [Next.js](https://nextjs.org/) application (`apps/web`) delivers an intuitive user interface:
* `src/app/[locale]/(apps)/pdf/**`: contains the Next.js App Router pages and layouts for the document conversation experience
* `src/components/pdf/**`: houses reusable React components specific to the PDF interaction UI (document upload, conversation interface, message display)
### Mobile
The [Expo](https://expo.dev/)/[React Native](https://reactnative.dev/) application (`apps/mobile`) provides a native mobile experience:
* `src/app/pdf/**`: defines the screens for the mobile document conversation interface
* `src/components/pdf/**`: contains React Native components optimized for mobile document interaction
* **API integration**: utilizes the same Hono RPC client (`packages/api`) as the web app for consistent backend communication
This architecture ensures that core AI processing and data handling logic is shared across platforms, while enabling optimized UI implementations tailored to each environment.
file: ./src/content/docs/(ai)/(apps)/tts.mdx
meta: {
"title": "Text to Speech",
"description": "Convert text into natural-sounding speech using advanced AI voice synthesis models.",
"icon": "AudioWaves"
}
The [Text to Speech (TTS)](https://ai.turbostarter.dev/tts) demo application transforms written text into high-quality spoken audio. It leverages state-of-the-art AI models to generate lifelike voices in various languages and styles.
## Features
Discover the powerful capabilities of this AI-powered voice synthesis solution:
* Access a wide range of voices from providers like [ElevenLabs](https://elevenlabs.io/), including different accents, ages, and emotional tones, to find the perfect match for your content.
* Experience near-instantaneous audio generation with streaming delivery, providing immediate feedback as your content comes to life.
* Enjoy a full-featured playback interface with precise controls for playback speed and convenient options to download generated audio files.
* Fine-tune your audio output with adjustable parameters for pitch, speed, and pauses, creating the most natural and engaging delivery possible (available options vary by provider).
* Benefit from a thoughtfully designed interface that makes transforming text to speech effortless and efficient, even for first-time users.
## AI models
This application primarily utilizes specialized text-to-speech models from [Eleven Labs](https://elevenlabs.io/).
For comprehensive information about available voices and advanced customization techniques, consult the [ElevenLabs SDK documentation](https://elevenlabs.io/docs/overview).
## Structure
The Text-to-Speech feature is organized across the monorepo for maximum flexibility and maintainability:
### Core
The `@turbostarter/ai` package (`packages/ai`) contains the essential logic under `modules/tts`:
* Comprehensive types, validation schemas, and constants specific to TTS functionality
* Core API logic for processing text-to-speech requests and interfacing with AI models
* Robust handling of generated audio file uploads to cloud storage
### API
The `packages/api` package defines the backend API endpoints using [Hono](https://hono.dev/):
* `src/modules/ai/tts/tts.router.ts`: implements Hono RPC routes for TTS generation, handles input validation, applies critical middleware (authentication, credit management), and invokes the core functionality from `@turbostarter/ai`.
### Web
The [Next.js](https://nextjs.org/) application (`apps/web`) provides the user interface:
* `src/app/[locale]/(apps)/tts/**`: contains the Next.js App Router pages and layouts for the TTS experience
* `src/components/tts/**`: houses reusable React components specific to the TTS interface (text input area, voice selector, audio player, etc.)
### Mobile
The [Expo](https://expo.dev/)/[React Native](https://reactnative.dev/) application (`apps/mobile`) provides the native mobile experience:
* `src/app/tts/**`: defines the screens for the mobile TTS interface
* `src/components/tts/**`: contains React Native components optimized for the mobile experience
* **API interaction**: utilizes the same Hono RPC client (`packages/api`) as the web app for consistent communication with the backend
This architecture ensures perfect consistency between platforms while allowing for optimized UI implementations tailored to each environment.
file: ./src/content/docs/(ai)/(providers)/anthropic.mdx
meta: {
"title": "Anthropic",
"description": "Setup Anthropic provider and learn how to use it in the starter kit.",
"icon": "Anthropic"
}
The [Anthropic](https://www.anthropic.com) provider integrates Anthropic's powerful Claude models into your application through the AI SDK, with an emphasis on safety, helpfulness, and natural interactions.

## Setup
### Generate API Key
Visit the [Anthropic Console](https://console.anthropic.com/) to create an account and generate a new API key for your project.
### Add API Key to Environment
Add your generated API key to your project's `.env` file (e.g., in `apps/web`):
```bash title=".env"
ANTHROPIC_API_KEY=your-api-key
```
### Configure Provider (Optional)
The starter kit automatically uses the `ANTHROPIC_API_KEY` environment variable. For advanced configurations (such as proxies or custom headers), refer to the [AI SDK Anthropic documentation](https://sdk.vercel.ai/providers/ai-sdk-providers/anthropic#provider-instance).
## Features
* Leverage Anthropic's state-of-the-art Claude models for sophisticated conversational AI, creative text generation, in-depth analysis, and more through the intuitive Messages API.
* Enable models to understand and process image inputs alongside text for multimodal applications.
* Allow models to interact with external tools and APIs to perform actions and retrieve real-time information.
* Create structured data outputs (like JSON) from natural language prompts, streamlining the integration of AI capabilities with your existing systems.
* Access detailed insights into the model's thought process, enhancing transparency, debuggability, and trust in AI-generated responses.
* (Experimental) Allow models to directly interact with computer desktop environments to complete complex, multi-step tasks autonomously.
## Use Cases
* Craft intelligent, context-aware chatbots capable of nuanced conversations and sophisticated task completion. Experience this capability in our [Chat Demo](/ai/docs/chat).
* Generate high-quality text for various purposes, or summarize long documents and conversations accurately.
* Extract structured information from unstructured text or analyze complex data sets combined with visual inputs for comprehensive insights.
* Seamlessly integrate Claude models with your existing tools via function calling to automate complex business processes and tasks. Explore [Agents](/ai/docs/agents) for advanced implementation options.
## Links
* [Anthropic Website](https://www.anthropic.com)
* [Anthropic Documentation](https://docs.anthropic.com)
* [AI SDK - Anthropic Provider Docs](https://sdk.vercel.ai/providers/ai-sdk-providers/anthropic)
file: ./src/content/docs/(ai)/(providers)/deepseek.mdx
meta: {
"title": "DeepSeek",
"description": "Integrate DeepSeek's powerful AI models into your applications with minimal setup.",
"icon": "DeepSeekMonochrome"
}
The [DeepSeek](https://www.deepseek.com/) provider delivers access to DeepSeek's advanced AI models through the AI SDK, bringing reasoning capabilities to your applications.

## Setup
### Generate API Key
Visit the [DeepSeek Platform](https://platform.deepseek.com/) and navigate to the API keys section to create your personal secret key.
### Add API Key to Environment
Add your generated API key to your project's `.env` file (e.g., in `apps/web`):
```bash title=".env"
DEEPSEEK_API_KEY=your-api-key
```
### Configure Provider (Optional)
The starter kit automatically utilizes the `DEEPSEEK_API_KEY` environment variable. For advanced configurations, consult the comprehensive [AI SDK DeepSeek documentation](https://sdk.vercel.ai/providers/ai-sdk-providers/deepseek#provider-instance).
## Features
* Utilize DeepSeek's language models, known for their deep reasoning capabilities, for tasks like text generation, translation, and conversational AI applications.
* Tap into models with reasoning abilities designed specifically for complex problem-solving, logical deduction, and analytical tasks that require deep understanding.
* Enable language models to interact with external tools and functions, allowing for more complex and automated task execution.
## Use Cases
* Create intelligent chatbots that engage in natural, meaningful conversations and assist users with a wide range of tasks. Experience this capability in our [Chat Demo](/ai/docs/chat).
* Produce diverse, high-quality creative text content including articles, summaries, code explanations, and marketing copy with language understanding.
* Integrate language models with other tools via function calling to automate processes like data analysis or report generation.
## Links
* [DeepSeek Website](https://www.deepseek.com/)
* [DeepSeek Platform](https://platform.deepseek.com/)
* [AI SDK - DeepSeek Provider Docs](https://sdk.vercel.ai/providers/ai-sdk-providers/deepseek)
file: ./src/content/docs/(ai)/(providers)/eleven-labs.mdx
meta: {
"title": "Eleven Labs",
"description": "Setup ElevenLabs and learn how to integrate its AI audio capabilities into the starter kit.",
"icon": "ElevenLabs"
}
[ElevenLabs](https://elevenlabs.io/) stands at the forefront of AI audio innovation, specializing in ultra-realistic Text-to-Speech (TTS), voice cloning, and advanced audio generation. While not a native provider within the AI SDK core, ElevenLabs' powerful services integrate seamlessly with AI applications to deliver exceptional voice experiences.

## Setup
Integrating ElevenLabs involves using their purpose-built SDKs (Python, TypeScript/JavaScript) alongside your application logic:
### Generate API Key
Visit the [ElevenLabs website](https://elevenlabs.io/), create an account or sign in, then navigate to your profile settings to generate your unique API key.
### Add API Key to Environment
Add your API key to your project's `.env` file (e.g., in `apps/web` or the appropriate package):
```bash title=".env"
ELEVENLABS_API_KEY=your-api-key
```
### Configure SDK
Initialize the ElevenLabs client with your API key:
```typescript title="client.ts"
import { ElevenLabsClient } from "elevenlabs";

import { env } from "../../env";

export const client = new ElevenLabsClient({
  apiKey: env.ELEVENLABS_API_KEY,
});

// Now use the client object...
```
For comprehensive implementation details, refer to the [ElevenLabs Quickstart Guide](https://elevenlabs.io/docs/quickstart).
## Features
ElevenLabs offers a comprehensive suite of AI audio technologies:
* Transform written text into remarkably natural speech across numerous languages, voices, and styles, with flexible options for quality or low-latency delivery.
* Transcribe spoken audio into text accurately, supporting multiple languages and providing features like speaker diarization.
* Create stunningly accurate digital replicas of voices from audio samples, with both instant and professional-grade options to suit your needs.
* Craft entirely new, unique synthetic voices based on descriptive parameters, enabling custom voice creation without requiring sample recordings.
* Build and deploy end-to-end conversational voice agents, integrating STT, LLMs (like GPT, Claude, Gemini), TTS, and turn-taking logic.
* Automatically dub audio or video content into different languages while preserving the original voice characteristics.
* Create custom sound effects and ambient audio from simple text descriptions, adding rich audio elements to your applications.
* Access an extensive collection of pre-made, ready-to-use voices contributed by the ElevenLabs community.
## Use Cases
* Power conversational AI applications like customer service bots, virtual assistants, or interactive characters with low-latency TTS.
* Create professional-quality narration for audiobooks, articles, videos, and e-learning content in multiple languages and voices. Experience this in the [TTS Demo](/ai/docs/tts).
* Enhance digital accessibility by converting text content into natural speech, making your applications more inclusive for users with visual impairments or reading difficulties.
* Deliver dynamic, personalized audio experiences with custom-designed or cloned voices, creating unique and engaging user interactions.
* Utilize dubbing and multilingual TTS to easily adapt content for international audiences.
* Generate character voices, ambient sounds, and dynamic audio for immersive experiences.
## Links
* [ElevenLabs Website](https://elevenlabs.io/)
* [ElevenLabs Documentation](https://elevenlabs.io/docs)
* [Developer Quickstart](https://elevenlabs.io/docs/quickstart)
* [API Reference](https://elevenlabs.io/docs/api-reference/introduction)
* [Pricing](https://elevenlabs.io/pricing)
file: ./src/content/docs/(ai)/(providers)/google.mdx
meta: {
"title": "Google AI",
"description": "Setup Google Generative AI provider and learn how to use its models like Gemini in the starter kit.",
"icon": "GoogleMonochrome"
}
The [Google Generative AI](https://ai.google/) provider integrates Google's state-of-the-art models, including the versatile Gemini family, into your applications through the AI SDK.

## Setup
### Generate API Key
Visit the [Google AI Studio](https://aistudio.google.com/app/apikey) to create your API key. For enterprise applications using Google Cloud, you can alternatively configure authentication via Application Default Credentials or service accounts.
### Add API Key to Environment
Add your API key to your project's `.env` file (e.g., in `apps/web`):
```bash title=".env"
GOOGLE_GENERATIVE_AI_API_KEY=your-api-key
```
If using Google Cloud credentials instead, ensure they're properly configured in your environment.
### Configure Provider (Optional)
The starter kit automatically uses the `GOOGLE_GENERATIVE_AI_API_KEY` environment variable. For advanced configurations (such as proxies, custom API versions, or specific headers), you can create a tailored provider instance using `createGoogleGenerativeAI`. See the [AI SDK Google documentation](https://sdk.vercel.ai/providers/ai-sdk-providers/google-generative-ai#provider-instance) for comprehensive details.
## Features
* Leverage Google's advanced Gemini models for chat, text generation, reasoning, and complex instruction following.
* Utilize text embedding models to convert text into numerical representations for tasks like semantic search, clustering, and RAG.
* Analyze and understand various file types (including images and PDFs) alongside text prompts, enabling rich multimodal applications with comprehensive content understanding.
* Empower models to interact seamlessly with external tools and APIs, allowing them to perform real-world actions and retrieve up-to-date information for more capable applications.
* Configure safety thresholds to control model responses regarding harmful content categories. Access safety ratings in the response metadata.
* Cache content to optimize context reuse and potentially reduce latency and costs for repeated queries with similar context.
* (With compatible models) Ground responses in real-time search results, dramatically enhancing factual accuracy and providing up-to-date information on current topics.
## Use Cases
* Create sophisticated conversational agents powered by Gemini models that can engage in natural dialogue and handle complex, multi-step tasks. Experience this in our [Chat Demo](/ai/docs/chat).
* Generate diverse text formats, from creative writing and marketing copy to code explanations and summaries.
* Build applications that seamlessly analyze and understand images, documents, and other file types alongside text, creating richer, more contextual user experiences.
* Implement powerful search capabilities or sophisticated Retrieval-Augmented Generation systems using Google's high-performance embedding models for more accurate information retrieval.
* Streamline operations by connecting language models to external tools and APIs through function calling, automating complex business processes and repetitive tasks with minimal human intervention.
## Links
* [Google AI](https://ai.google/)
* [Google AI Studio](https://aistudio.google.com/)
* [Google Generative AI Documentation](https://ai.google.dev/docs)
* [AI SDK - Google Provider Docs](https://sdk.vercel.ai/providers/ai-sdk-providers/google-generative-ai)
file: ./src/content/docs/(ai)/(providers)/meta.mdx
meta: {
"title": "Meta",
"description": "Setup Meta's Llama models and learn how to use them in the starter kit via various hosting providers.",
"icon": "MetaMonochrome"
}
The [Meta](https://ai.meta.com/) provider integration brings Meta's cutting-edge Llama family of open-weight models to your applications through the AI SDK. Renowned for their exceptional performance across diverse tasks, these models deliver state-of-the-art capabilities for your AI solutions.

## Setup
Deploying Llama models in your applications involves leveraging a third-party hosting provider that integrates seamlessly with the AI SDK, such as DeepInfra, Fireworks AI, Amazon Bedrock, Baseten, and others.
### Choose a hosting provider & get API Key
Select a trusted provider that hosts Llama models (e.g., [DeepInfra](https://deepinfra.com/), [Fireworks AI](https://fireworks.ai/), or [Amazon Bedrock](https://aws.amazon.com/bedrock/)). Register with your preferred provider and generate a secure API key through their platform console.
### Add API Key to environment
Add your provider-specific API key to your project's `.env` file (e.g., in `apps/web`). Use the appropriate environment variable for your chosen provider:
```bash title=".env"
# Example for DeepInfra
DEEPINFRA_API_KEY=your-deepinfra-api-key
# Example for Fireworks AI
FIREWORKS_API_KEY=your-fireworks-api-key
# Example for Amazon Bedrock (requires AWS credentials)
# AWS_ACCESS_KEY_ID=...
# AWS_SECRET_ACCESS_KEY=...
# AWS_REGION=...
```
### Configure provider
When implementing AI SDK functions (`generateText`, `streamText`, etc.), initialize the client for your selected provider and specify the appropriate Llama model identifier:
```ts
import { generateText } from "ai";
import { deepinfra } from "@ai-sdk/deepinfra";
// Or: import { fireworks } from '@ai-sdk/fireworks';
// Or: import { bedrock } from '@ai-sdk/amazon-bedrock';

const { text } = await generateText({
  // Example using DeepInfra
  model: deepinfra("meta-llama/Meta-Llama-3.1-8B-Instruct"),
  // Example using Fireworks AI
  // model: fireworks('accounts/fireworks/models/llama-v3p1-8b-instruct'),
  // Example using Amazon Bedrock
  // model: bedrock('meta.llama3-1-8b-instruct-v1:0'),
  prompt: "Why is the sky blue?",
});
```
For comprehensive implementation details, consult the AI SDK documentation for your specific provider: [DeepInfra](https://sdk.vercel.ai/providers/ai-sdk-providers/deepinfra), [Fireworks AI](https://sdk.vercel.ai/providers/ai-sdk-providers/fireworks), [Amazon Bedrock](https://sdk.vercel.ai/providers/ai-sdk-providers/amazon-bedrock), etc.
## Features
Llama models accessible through the AI SDK offer a range of powerful capabilities, with specific features varying based on model version and hosting provider implementation.
* Utilize Llama's instruction-tuned models for dialogue generation, translation, reasoning, and other conversational tasks. Available in various sizes (e.g., 8B, 70B, 405B).
* Empower Llama models to interact with external tools and functions, enabling complex, multi-step task execution and real-world system integration. (Capabilities may vary depending on your selected provider.)
* Leverage Llama's capabilities for complex reasoning problems and generating code snippets in various programming languages.
## Use Cases
* Create intelligent, responsive chatbots capable of natural conversations, accurate information retrieval, and efficient task execution. Experience this capability in our [Chat Demo](/ai/docs/chat).
* Produce diverse, high-quality text content spanning articles, summaries, creative narratives, marketing copy, and more, tailored to your specific requirements.
* Boost developer productivity with AI-powered code generation, insightful code explanations, effective debugging assistance, and programming guidance across multiple languages.
* Streamline operations by combining Llama models with tool usage capabilities to automate complex business processes and seamlessly interact with your existing systems.
## Links
* [Meta AI](https://ai.meta.com/)
* [Meta Llama Models](https://ai.meta.com/llama/)
* [AI SDK - Llama 3.1 Guide](https://sdk.vercel.ai/docs/guides/llama-3_1)
* [AI SDK - Providers](https://sdk.vercel.ai/providers) (Find hosting provider docs here)
file: ./src/content/docs/(ai)/(providers)/openai.mdx
meta: {
"title": "OpenAI",
"description": "Setup OpenAI provider and learn how to use it in the starter kit.",
"icon": "OpenAI"
}
The [OpenAI](https://openai.com) provider integrates OpenAI's powerful suite of language models, image generation capabilities, and embedding technologies into your application through the AI SDK.

## Setup
### Generate API Key
Visit the [OpenAI API keys page](https://platform.openai.com/api-keys) to create your personal secret key for API access.
### Add API Key to Environment
Add your API key to your project's `.env` file (e.g., in `apps/web`):
```bash title=".env"
OPENAI_API_KEY=your-api-key
```
### Configure Provider (Optional)
By default, the starter kit automatically uses the `OPENAI_API_KEY` environment variable. For advanced configurations (such as using a proxy or specific organization ID), you can customize the provider instance. For detailed options, refer to the [AI SDK OpenAI documentation](https://sdk.vercel.ai/providers/ai-sdk-providers/openai#provider-instance).
## Features
* Leverage state-of-the-art models for building sophisticated conversational AI, generating creative text formats, and answering complex questions.
* Transform text into rich numerical representations with powerful models like `text-embedding-3-large`, enabling advanced semantic search, intelligent text clustering, and highly personalized recommendation systems.
* Generate unique images from textual descriptions using OpenAI's DALL·E models, enabling creative applications and content generation.
* Convert written text into natural-sounding human speech with various voices using Text-to-Speech (TTS) models, ideal for accessibility features or voice interfaces.
* Empower models like GPT-4o or GPT-4 Turbo with Vision capabilities to understand, analyze, and describe the content of images provided in prompts.
* Allow language models to intelligently interact with your external tools, APIs, and custom functions, orchestrating complex multi-step tasks and creating powerful AI agents that can take actions in the real world.
## Use Cases
* Create intelligent, context-aware conversational agents that engage in natural dialogue, answer complex questions, and complete sophisticated tasks based on user needs. Experience this capability in our [Chat Demo](/ai/docs/chat).
* Automate the creation of diverse text-based content, including blog posts, marketing copy, emails, code snippets, and creative writing pieces.
* Build advanced search systems that truly understand the meaning behind user queries, enhanced with Retrieval-Augmented Generation (RAG) for delivering exceptionally accurate, contextually relevant answers from your data.
* Develop applications that can generate images from text prompts or analyze and interpret the content of existing images for tagging, description, or moderation. Check out the [Image Generation Demo](/ai/docs/image).
* Design engaging voice-enabled experiences, including lifelike virtual assistants, expressive audiobook narration, real-time translation services, and accessibility tools that convert text to natural speech for visually impaired users.
* Transform business processes by connecting powerful language models to your existing tools and systems through function calling, automating complex workflows for data processing, report generation, customer support, and more.
## Links
* [OpenAI Website](https://openai.com/)
* [OpenAI API Documentation](https://platform.openai.com/docs)
* [AI SDK - OpenAI Provider Docs](https://sdk.vercel.ai/providers/ai-sdk-providers/openai)
file: ./src/content/docs/(ai)/(providers)/replicate.mdx
meta: {
"title": "Replicate",
"description": "Setup Replicate provider and learn how to use it in the starter kit.",
"icon": "Replicate"
}
The [Replicate](https://replicate.com) provider unlocks access to an extensive library of open-source AI models through a streamlined cloud API, seamlessly integrated with the AI SDK. It's particularly well-known for image generation capabilities.

## Setup
### Generate API Key
Visit the [Replicate website](https://replicate.com/), create an account or sign in, then navigate to your account settings to generate your personal API token.
### Add API Key to Environment
Add your API token to your project's `.env` file (e.g., in `apps/web`):
```bash title=".env"
REPLICATE_API_TOKEN=your-api-key
```
### Configure Provider (Optional)
The starter kit automatically uses the `REPLICATE_API_TOKEN` environment variable. For advanced configurations (such as proxies or custom headers), you can create a tailored provider instance. For comprehensive details, refer to the [AI SDK Replicate documentation](https://sdk.vercel.ai/providers/ai-sdk-providers/replicate#provider-instance).
## Features
* Gain instant access to a diverse ecosystem of community-contributed models spanning text generation, image creation, audio processing, video synthesis, and numerous other AI capabilities.
* Create stunning visuals using various state-of-the-art open-source models directly through the AI SDK's intuitive `generateImage` function, with support for specific model versions and custom parameters.
* Fine-tune model behavior by passing specific parameters via `providerOptions.replicate`, allowing precise control over generation settings according to each model's unique capabilities.
## Use Cases
* Create unique visuals, artwork, or variations based on text prompts using a diverse set of image models. Check out the [Image Generation Demo](/ai/docs/image).
* Utilize specialized open-source models for specific tasks that might not be available through other major providers.
* Quickly experiment with different community-published models for various AI tasks without managing infrastructure.
## Links
* [Replicate Website](https://replicate.com)
* [Replicate Documentation](https://replicate.com/docs)
* [AI SDK - Replicate Provider Docs](https://sdk.vercel.ai/providers/ai-sdk-providers/replicate)
file: ./src/content/docs/(ai)/(providers)/xai.mdx
meta: {
"title": "xAI Grok",
"description": "Setup xAI provider and learn how to use it in the starter kit.",
"icon": "XAI"
}
The [xAI](https://x.ai) provider integrates Grok models into your application using the AI SDK.

## Setup
### Generate API Key
Visit the [xAI website](https://x.ai) to create an account. After signing in, navigate to your account settings to generate an API key.
### Add API Key to Environment
Once you've acquired an API key, add it to your project's `.env` file (e.g., in `apps/web`):
```bash title=".env"
XAI_API_KEY=your-api-key
```
### Configure Provider (Optional)
The starter kit automatically uses the `XAI_API_KEY` environment variable. For advanced configurations and customization options, refer to the comprehensive [AI SDK xAI documentation](https://sdk.vercel.ai/providers/ai-sdk-providers/xai#provider-instance).
## Features
* Utilize xAI's language models for conversational AI, text generation, and other natural language processing tasks.
* Enable language models to interact with external tools and functions, allowing for more complex and automated task execution.
* Generate images based on textual descriptions using xAI's models.
## Use Cases
* Create intelligent chatbots that engage users in natural, informative conversations powered by xAI's Grok models, delivering responsive and contextually relevant interactions. Experience this capability in our [Chat Demo](/ai/docs/chat).
* Produce diverse, high-quality text content across various formats and styles, harnessing the unique characteristics and capabilities of Grok models for creative and informational outputs.
* Streamline operations by connecting xAI's language models with your existing tools through function calling, enabling sophisticated automation of complex business processes and repetitive tasks.
* Design striking visuals and artwork directly from text descriptions using xAI's image generation capabilities, enabling creative applications and rich visual content. Explore our [Image Generation Demo](/ai/docs/image) to see these features in action.
## Links
* [xAI Website](https://x.ai)
* [AI SDK - xAI Provider Docs](https://sdk.vercel.ai/providers/ai-sdk-providers/xai)
file: ./src/content/docs/(ai)/(services)/api.mdx
meta: {
"title": "API",
"description": "Overview of the API service in TurboStarter AI, including its architecture, technology stack, and core functionalities.",
"icon": "CloudIcon"
}
The API service acts as the central hub for all backend logic within TurboStarter AI. It handles interactions with AI models, data processing, and communication between the frontend and backend systems.
## Technology
We use [Hono](https://hono.dev), a lightning-fast web framework optimized for edge computing. This ensures efficient handling of API requests, which is particularly critical for real-time AI interactions like streaming responses.
**Importantly, this single API layer serves both web and mobile applications, guaranteeing consistent business logic and data handling across all platforms.**
## AI integration
While the API package (`@turbostarter/api`) exposes the endpoints, the core AI logic lives in a dedicated package: `@turbostarter/ai`. This package is strictly responsible for:
* Communicating with various AI providers and models ([OpenAI](/ai/docs/openai), [Anthropic](/ai/docs/anthropic), [Google AI](/ai/docs/google), etc.)
* Processing and formatting data specifically for AI interactions
* Parsing responses from AI models
* Handling AI-specific data storage or retrieval when necessary
The `@turbostarter/api` package utilizes `@turbostarter/ai` to perform these AI tasks. The API layer itself focuses on registering Hono routes, applying middlewares (like authentication and validation), and exposing AI functionalities to the frontend applications.
This separation ensures AI-specific logic remains modular and reusable, while the API package stays focused on request handling and routing.
API keys for AI services are managed securely on the backend within these packages, ensuring they never appear client-side.
## Middlewares
Hono middlewares streamline request handling by tackling common tasks before the main logic runs. In TurboStarter AI, they handle:
* **Authentication:** verifying user sessions to protect routes, ensuring only logged-in users access certain features
* **Validation:** using schemas to check if incoming request data (like query parameters or JSON bodies) matches expected formats, preventing invalid data from reaching route handlers
* **Rate limiting:** shielding the API from abuse by restricting the number of requests a user or IP address can make within a given timeframe
* **Credits management:** automatically checking if a user has enough credits for an AI operation and deducting the cost before proceeding
* **Localization:** detecting the user's preferred language to deliver localized responses and error messages
These middlewares keep core route logic clean and focused, while consistently enforcing security, usage limits, and data integrity across the API.
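To illustrate the rate-limiting idea in isolation, here is a hypothetical in-memory fixed-window limiter (a simplified sketch, not the kit's actual middleware, which would need a distributed-safe store when running at the edge):

```typescript
// Hypothetical fixed-window limiter: at most `limit` requests
// per `windowMs` for a given key (user id or IP address).
const hits = new Map<string, { count: number; resetAt: number }>();

export function allowRequest(
  key: string,
  limit = 10,
  windowMs = 60_000,
  now = Date.now(),
): boolean {
  const entry = hits.get(key);
  if (!entry || now >= entry.resetAt) {
    // Start a fresh window for this key.
    hits.set(key, { count: 1, resetAt: now + windowMs });
    return true;
  }
  entry.count += 1;
  return entry.count <= limit;
}
```

A real middleware would call a function like this before the route handler and return HTTP 429 when it yields `false`.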
## Core API documentation
For general information about the API setup, architecture, authentication integration, and how to add new endpoints, please refer to the [Core API documentation](/docs/web/api/overview).
Specific configurations related to AI providers or demo apps can be found in their respective documentation sections.
file: ./src/content/docs/(ai)/(services)/auth.mdx
meta: {
"title": "Authentication",
"description": "Learn about the authentication flow in TurboStarter AI.",
"icon": "Auth"
}
TurboStarter AI implements a streamlined authentication approach powered by [Better Auth](https://www.better-auth.com/). Since the primary focus is showcasing AI capabilities, we've kept the initial authentication simple, allowing you to quickly integrate and experiment with AI features.
## Anonymous sessions
When someone first visits the AI application, an **anonymous session** is automatically created. This establishes a unique user identity without requiring login credentials.
These anonymous sessions serve two critical purposes:
1. **Persistence:** links data like chat history or generated content to specific users in your database
2. **Usage control:** enables tracking for rate limiting and the credits system, ensuring fair AI resource usage even for anonymous visitors
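Conceptually, the flow reduces to "mint an id on first visit, reuse it afterwards." A hypothetical sketch (cookie storage is simplified to a `Map` here; Better Auth handles the real session plumbing):

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical sketch: mint an anonymous user id on first visit and
// reuse it afterwards, so chats and credits can be linked to it.
export function getOrCreateAnonymousId(
  cookies: Map<string, string>,
): string {
  const existing = cookies.get("anonymous_id");
  if (existing) return existing;
  const id = randomUUID();
  cookies.set("anonymous_id", id);
  return id;
}
```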
## Extending authentication
While the default anonymous setup provides a frictionless initial experience, TurboStarter is built for growth. The authentication logic uses Better Auth in the shared `packages/auth` package, ensuring consistency between web and mobile applications.
Your project may eventually need more sophisticated authentication features, such as:
* Email/Password login
* Magic links
* Social logins (OAuth)
* Multi-factor authentication
You can easily integrate these by leveraging the comprehensive authentication system in the [TurboStarter Core kit](/docs/web). The underlying structure is already in place, making this transition straightforward.
For detailed implementation guides, check out the core documentation.
By starting with anonymous sessions, the AI kit lets you focus on building compelling AI features first, while providing a clear path to implement advanced user management and security as your application evolves.
file: ./src/content/docs/(ai)/(services)/billing.mdx
meta: {
"title": "Billing",
"description": "Discover how to manage billing and payment methods for AI features.",
"icon": "Money"
}
TurboStarter AI includes a straightforward middleware setup to manage user credits for AI features. This lets you control access based on available credits without complex payment integrations.
## Credit-based access
A focused middleware verifies if users have enough credits before allowing them to access specific AI-powered routes or actions.
```ts title="ai.router.ts"
export const aiRouter = new Hono().post(
"/chat",
rateLimiter,
validate("json", chatMessageSchema),
deductCredits({
amount: 10, // [!code highlight]
}),
streamChat,
);
```
This example shows how the `deductCredits` middleware subtracts a specific amount (10 credits) for each request to the `/chat` endpoint.
## Coming soon
We're actively expanding the billing capabilities for AI services, including:
* **Usage-based billing:** implementing a system where users pay based on their actual consumption of AI resources (tokens used, API calls made, etc.)
* **Payment provider integration:** connecting with popular services like [Stripe](/docs/web/billing/stripe), [Lemon Squeezy](/docs/web/billing/lemon-squeezy), and more for hassle-free payment processing
## Extending billing
For more advanced billing scenarios or immediate needs, you can tap into the core TurboStarter billing features. The main documentation provides detailed guidance on setting up and managing billing with third-party providers.
Stay tuned for updates as we enhance the AI-specific billing functionalities!
file: ./src/content/docs/(ai)/(services)/database.mdx
meta: {
"title": "Database",
"description": "Overview of the database service in TurboStarter AI.",
"icon": "Database02Icon"
}
The database service, managed within the `packages/db` directory (as `@turbostarter/db`), stores data essential for both core application functions and AI features. It ensures that information like user profiles, conversation history, and AI-generated content is reliably preserved and efficiently accessed.
## Technology
We've chosen [PostgreSQL](https://www.postgresql.org) as our primary relational database for its exceptional reliability, extensibility (including powerful tools like `pgvector` for similarity searches), and proven track record in production environments.
Database interactions are handled through [Drizzle ORM](https://orm.drizzle.team/), a cutting-edge TypeScript ORM that offers outstanding type safety (generating types directly from your schema), high performance, and a developer-friendly API.
For detailed guidance on setup, configuration, schema management (including migrations), and general usage patterns of Drizzle and PostgreSQL in the TurboStarter ecosystem, check out our core documentation:
## What is stored in the database?
Beyond standard application data (like users and accounts), the database plays a crucial role in storing AI-specific information:
* **Chat history**: saves conversations between users and AI models (including reasoning and usage details), enabling continuous conversations and history features
* **Vector embeddings**: stores numerical representations (vectors) of text data (like document chunks) that power Retrieval-Augmented Generation (RAG) techniques, allowing features like [Chat with PDF](/ai/docs/pdf) to quickly find relevant context from large document collections
* **Document references**: tracks metadata and storage identifiers (paths in [Blob Storage](/ai/docs/storage)) for files like uploaded PDFs or AI-generated images, connecting them to relevant user interactions
* **Tool calls & results**: records actions (such as [web searches](/ai/docs/chat) or calculations) that AI models ([Agents](/ai/docs/agents)) perform, along with their outcomesβvaluable for debugging, auditing, and improving agent capabilities
## Schema
The core database schema, defined in `packages/db/src/schema`, contains essential tables for the overall application (users, accounts, sessions, etc.).
To maintain clarity as AI features grow, tables specifically related to AI demo applications (like chat history for the [PDF app](/ai/docs/pdf)) are often placed in dedicated [PostgreSQL schemas](https://www.postgresql.org/docs/current/ddl-schemas.html) (e.g. a schema named `pdf`).
This logical separation helps manage complexity and isolates feature-specific data structures. You'll typically find AI-specific schema definitions either alongside the relevant demo app code or within the main `packages/db/src/schema` directory, clearly labeled and organized.
file: ./src/content/docs/(ai)/(services)/internationalization.mdx
meta: {
"title": "Internationalization",
"description": "Learn how we manage internationalization in TurboStarter AI.",
"icon": "Globe2"
}
TurboStarter AI builds on the core internationalization (i18n) setup from the main TurboStarter framework. The shared `@turbostarter/i18n` package in `packages/i18n` handles translation management across platforms.
This gives you the benefit of a proven system using [i18next](https://www.i18next.com/) for managing translations on both web and mobile apps. Plus, the AI models and LLMs integrated within TurboStarter AI generally support multiple languages, enabling interactions beyond what's covered by UI translations alone.
For detailed information on configuring languages, adding translations, or using the `useTranslation` hook, check out the core documentation:
## AI-specific translations
While most translations are shared across the platform, TurboStarter AI introduces a dedicated `ai` namespace within translation files. This namespace contains strings specifically for AI features, demo applications, and UI elements unique to the AI starter kit.
```json title="packages/i18n/locales/en/ai.json"
{
"chat": {
"title": "AI Chatbot",
"description": "Engage in intelligent conversations."
},
"image": {
"title": "Image Generation",
"description": "Create stunning visuals with AI."
}
// ... other AI-specific translations
}
```
When adding translations for new AI features or modifying existing ones, place them within the `ai` namespace in the appropriate language files (e.g., `en/ai.json`, `es/ai.json`). This keeps AI-related text organized and separate from core application translations.
file: ./src/content/docs/(ai)/(services)/security.mdx
meta: {
"title": "Security",
"description": "Learn about the security measures implemented in TurboStarter AI.",
"icon": "Shield01Icon"
}
Remember to regularly review your security implementations and update them as needed.
The starter kit incorporates several security measures to protect your application and users when interacting with AI services.
## Authenticated endpoints
All AI operation endpoints require user authentication. This is enforced through middleware that verifies the user's session before granting access to any AI features.
The system creates anonymous sessions by default, but you can implement stronger authentication using the core framework's capabilities or the dedicated [authentication setup](/docs/web/auth/overview).
## Credit-based access
To prevent AI resource abuse, TurboStarter AI includes a credit-based system. Users receive a limited number of credits that are consumed when using AI features.
This approach avoids misuse while enabling potential monetization. Learn about the implementation details in the [Core billing documentation](/docs/web/billing/overview).
## Rate limiting
API endpoints are guarded by rate limiting to prevent abuse and ensure fair usage. This protects your application from potential denial-of-service attacks and excessive request volumes.
We use [`hono-rate-limiter`](https://github.com/rhinobase/hono-rate-limiter), which supports various storage options including [Redis](https://redis.io/), [Cloudflare KV](https://developers.cloudflare.com/workers/runtime-apis/kv/), and [Memcached](https://memcached.org/) for distributed rate limiting.
## Secure API key handling
Sensitive API keys for AI providers ([OpenAI](/ai/docs/openai), [Anthropic](/ai/docs/anthropic), [Google AI](/ai/docs/google), etc.) are managed exclusively on the backend.
They are **NEVER** exposed to client-side code, dramatically reducing the risk of key leakage or unauthorized usage.
## AI service abuse protection
While TurboStarter AI provides application-level safeguards like credit limits and rate limiting, it's essential to implement additional protection directly with your AI providers.
Always configure spending limits, usage quotas, and monitoring alerts in your
AI provider dashboards (e.g., [OpenAI](/ai/docs/openai),
[Anthropic](/ai/docs/anthropic), [Google AI](/ai/docs/google)). These serve as
critical safety nets against unexpected costs or potential abuse that might
bypass your application-level controls.
By combining application-level security with provider-level controls, you'll build truly robust and secure AI applications.
file: ./src/content/docs/(ai)/(services)/storage.mdx
meta: {
"title": "Storage",
"description": "Explore cloud storage services for AI applications.",
"icon": "FolderFileStorageIcon"
}
Blob storage in TurboStarter AI offers a scalable solution for handling the diverse file types essential to modern AI applications. It works seamlessly with S3-compatible services including [AWS S3](https://aws.amazon.com/s3/), [Cloudflare R2](https://www.cloudflare.com/products/r2/), and [MinIO](https://min.io/).
## Use cases
Blob storage powers several key AI functions:
* **Managing user uploads:** safely storing files like PDFs or images that users upload for AI processing, as seen in the ["Chat with PDF" demo](/ai/docs/pdf) and image analysis features
* **Preserving AI-generated content:** storing outputs from AI models, such as images from the [Image Generation demo](/ai/docs/image) or audio files from the [Text-to-Speech demo](/ai/docs/tts)
* **Powering RAG systems:** housing documents and files that serve as knowledge sources for Retrieval-Augmented Generation, used in demos like [Chat with PDF](/ai/docs/pdf) and intelligent [Agents](/ai/docs/agents)
## Security
Properly configuring bucket permissions for your storage provider is critical. Always restrict access based on the principle of least privilege:
* Buckets containing user uploads or sensitive RAG documents should typically **not** be publicly accessible
* Set precise permissions that allow your application server (API) to read/write as needed while blocking unauthorized access
Refer to your provider's documentation ([AWS S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html), [Cloudflare R2](https://developers.cloudflare.com/r2/data-access/r2-api-tokens/), [MinIO](https://min.io/docs/minio/linux/administration/identity-access-management/policy-based-access-control.html)) for specific guidance on securing your storage buckets.
## Storage documentation
For detailed setup instructions, configuration options for different storage providers, and implementation best practices, check out the core storage documentation:
In summary, blob storage is essential for building sophisticated AI applications - enabling you to handle user uploads, store AI-generated files, and manage RAG document collections.
file: ./src/content/docs/(ai)/(services)/ui.mdx
meta: {
"title": "UI",
"description": "Learn more about UI components and design system in AI starter kit.",
"icon": "Palette"
}
TurboStarter AI builds on the core TurboStarter UI foundation to create engaging interfaces for all AI features.
The UI architecture uses shared components and styles with platform-specific implementations:
* **`@turbostarter/ui`**: includes shared assets, themes, and fundamental styles
* **`@turbostarter/ui-web`**: contains web components built with [Tailwind CSS](https://tailwindcss.com), [Radix UI](https://www.radix-ui.com/), and [shadcn/ui](https://ui.shadcn.com)
* **`@turbostarter/ui-mobile`**: delivers mobile components using [Nativewind](https://www.nativewind.dev/) and [react-native-reusables](https://rnr-docs.vercel.app/)
This approach maximizes code reuse while optimizing for each platform's unique capabilities.
## UI in AI applications
The AI starter kit leverages this foundation to create intuitive interfaces for various features and demo apps:
Components for displaying conversations, user input, and streaming responses
(used in [Chatbot](/ai/docs/chat) and [Chat with PDF](/ai/docs/pdf) demos).
Displaying AI-generated images as masonry grids with options for interaction
(used in [Image Generation](/ai/docs/image) demo).
Structured forms for configuring AI tasks (e.g., selecting models, adjusting
parameters, modifying prompts).
Visual feedback during AI processing, such as loading spinners or progress
indicators (e.g. [Text to Speech](/ai/docs/tts) voice avatar animation).
UI elements for users to rate or provide feedback on AI outputs. This can
include thumbs up/down buttons or text input fields for comments.
Components for displaying error messages or alerts when AI tasks fail or
encounter issues.
Ensuring that all UI components are usable for individuals with
disabilities, including keyboard navigation and screen reader support.
Components for displaying data or model outputs visually, such as charts,
graphs, or progress bars.
## Generative UI
A standout aspect of AI applications is their ability to dynamically create or modify UI elements based on AI responses. TurboStarter AI enables this through:
* **AI SDK components**: libraries like the [AI SDK](https://sdk.vercel.ai/docs/introduction) provide specialized components and hooks (like `useActions` and `useUIState`) designed to render UI based on AI actions or structured data. This creates interactive elementsβbuttons, forms, or visualizationsβthat appear dynamically within conversations or workflows.
* **Structured output**: AI models can return data in specific formats (such as JSON) that your frontend parses to render appropriate components, display information, or trigger actions. For example, an AI might return product details that automatically render as interactive cards.
* **Conditional rendering**: the platform uses standard React patterns for showing, hiding, or transforming UI components based on AI interaction states. This creates smooth transitions between loading states, results displays, and follow-up options tailored to AI suggestions.
This approach delivers truly responsive user experiences where interfaces adapt intelligently to ongoing AI processes. The [Chat demo app](/ai/docs/chat) showcases these generative UI capabilities in action.
## Customization and further details
Customizing appearance (themes, styling) or adding new UI components follows the same process as core TurboStarter applications. For complete guides on styling, theme management, and component development, see our core documentation:
By leveraging the core UI system, TurboStarter AI ensures consistent user experiences across platforms while letting you focus on creating unique AI functionalities.
file: ./src/content/docs/(core)/extension/ai.mdx
meta: {
"title": "AI",
"description": "Leverage AI in your TurboStarter extension."
}
When it comes to AI within the browser extension, we can differentiate two approaches:
* **Server + client**: Traditional implementation, same as for [web](/docs/web/ai/overview) and [mobile](/docs/mobile/ai), used to stream responses generated on the server to the client.
* **Chrome built-in AI**: An [experimental implementation](https://developer.chrome.com/docs/ai/built-in) of [Gemini Nano](https://blog.google/technology/ai/google-gemini-ai/#performance) that's built into new versions of the Google Chrome browser.
We recommend relying more on the traditional server + client approach, as it's more versatile and easier to implement. Chrome's built-in AI is a nice feature, but it's still experimental and has some limitations.
Of course, you can always implement a *hybrid* approach which combines both solutions to achieve the best results.
## Server + client
The traditional usage of AI integration in the browser extension is the same as for [web app](/docs/web/ai/configuration#client-side) and [mobile app](/docs/mobile/ai). We use the exact same [API endpoint](/docs/web/ai/configuration#api-endpoint), and we leverage streaming to display answers incrementally to the user as they're generated.
```tsx title="main.tsx"
import { useChat } from "ai/react";
const Popup = () => {
const { messages } = useChat({
api: "/api/ai/chat",
});
return (
{messages.map((message) => (
{message.content}
))}
);
};
export default Popup;
```
It's the most reliable and recommended way to use AI in the browser extension. Feel free to reuse or modify it to suit your specific needs.
## Chrome built-in AI
Chrome's implementation of [built-in AI with Gemini Nano](https://developer.chrome.com/docs/ai/built-in) is experimental and will change as they test and address feedback.
Chrome's built-in AI is a preview feature. To use it, you need Chrome version 127 or greater and you must enable these flags:
* [chrome://flags/#prompt-api-for-gemini-nano](chrome://flags/#prompt-api-for-gemini-nano): `Enabled`
* [chrome://flags/#optimization-guide-on-device-model](chrome://flags/#optimization-guide-on-device-model): `Enabled BypassPrefRequirement`
* [chrome://components/](chrome://components/): Click `Optimization Guide On Device Model` to download the model.
Once enabled, you'll be able to use `window.ai` to access the built-in AI and do things like this:

You can even use a [dedicated provider](https://sdk.vercel.ai/providers/community-providers/chrome-ai) from the Vercel AI SDK ecosystem to simplify its usage. Please remember that this API is still in its early stages and might change in the future.
The best thing is that you can use this API in every part of your extension, e.g., popup, background service worker, etc.
It's completely safe to use on the client-side, as we're not exposing any sensitive data to the user (such as the API key in the traditional server + client approach).
To learn more, please check out the official [Chrome documentation](https://developer.chrome.com/docs/ai/built-in) and the articles listed below.
file: ./src/content/docs/(core)/extension/billing.mdx
meta: {
"title": "Billing",
"description": "Get started with billing in TurboStarter."
}
As you could guess, there is no sense in implementing the whole billing process inside the browser extension, so we're relying on the [web app](/docs/web/billing/overview) to handle it.
> You probably won't display pricing tables inside a popup window, right?
You can customize the whole flow and onboarding process when a user purchases a plan in your [web app](/docs/web/billing/overview).
Then you would be able to easily fetch customer data to ensure that the user has access to correct extension features.
## Fetching customer data
When your user has purchased a plan from your landing page or web app, you can easily fetch their data using the [API](/docs/extension/api/client).
To do so, just invoke the `getCustomer` query on the `billing` router:
```tsx title="customer-screen.tsx"
import { api } from "~/lib/api";
export default function CustomerScreen() {
const { data: customer, isLoading } = useQuery({
queryKey: ["customer"],
queryFn: () => handle(api.billing.customer.$get()),
});
if (isLoading) return
Loading...
;
return
{customer?.plan}
;
}
```
You may also want to ensure that user is logged in before fetching their billing data to avoid unnecessary API calls.
```tsx title="header.tsx"
import { api } from "~/lib/api";
export const User = () => {
const {
data: { user },
} = useSession();
const { data: customer } = useQuery({
queryKey: ["customer"],
queryFn: () => handle(api.billing.customer.$get()),
enabled: !!user, // [!code highlight]
});
if (!user || !customer) {
return null;
}
return (
{user.email}
{customer.plan}
);
};
```
Read more about [auth in extension](/docs/extension/auth/overview).
file: ./src/content/docs/(core)/extension/cli.mdx
meta: {
"title": "CLI",
"description": "Start your new app project with a single command.",
"icon": "Command",
"mirror": "../web/cli.mdx"
}
file: ./src/content/docs/(core)/extension/database.mdx
meta: {
"title": "Database",
"description": "Get started with the database."
}
To enable communication between your WXT extension and the server in a production environment, the web application with Hono API must be deployed first.
As browser extensions use only client-side code, **there's no way to interact with the database directly**.
Also, you should avoid any workarounds to interact with the database directly, because it can lead to leaking your database credentials and other security issues.
## Recommended approach
You can safely use the [API](/docs/extension/api/overview) and invoke procedures which will run queries on the database.
To do this you need to set up the database on the [web, server side](/docs/web/database/overview) and then use the [API client](/docs/extension/api/client) to interact with it.
Learn more about its configuration in the web part of the docs, especially in the following sections:
file: ./src/content/docs/(core)/extension/extras.mdx
meta: {
"title": "Extras",
"description": "See what you get together with the code.",
"icon": "Gift",
"mirror": "../web/extras.mdx"
}
file: ./src/content/docs/(core)/extension/faq.mdx
meta: {
"title": "FAQ",
"description": "Find answers to common technical questions.",
"icon": "Question",
"mirror": "../web/faq.mdx"
}
file: ./src/content/docs/(core)/extension/index.mdx
meta: {
"title": "Introduction",
"description": "Get started with TurboStarter extension kit.",
"icon": "Home",
"index": true,
"mirror": "../web/index.mdx"
}
file: ./src/content/docs/(core)/extension/internationalization.mdx
meta: {
"title": "Internationalization",
"description": "Learn how to internationalize your extension."
}
Turbostarter's extension uses [i18next](https://www.i18next.com/) and web cookies to store the language preference of the user. This allows the extension to be fully internationalized.
We use i18next because it's a robust and widely-adopted internationalization framework that works seamlessly with React.
The combination with web cookies allows us to persistently store language preferences across all extension contexts and share it with the web app while maintaining excellent performance and browser compatibility.

## Configuration
The global configuration is defined in the `@turbostarter/i18n` package and shared across all applications. You can read more about it in the [web configuration](/docs/web/internationalization/configuration) documentation.
By default, the locale is automatically detected based on the user's device settings. You can override it and set the default locale of your mobile app in the [app configuration](/docs/extension/configuration/app) file.
Also, the locale configuration is **shared between the web app and the extension** (same as [session](/docs/extension/auth/session)), which means that changing the locale in one place will automatically update it in the other. It's a common pattern for modern apps, simplifying the user experience and reducing the maintenance effort.
### Cookies
When a user first opens the [web app](/docs/web), the locale is detected and a cookie is set. This cookie is used to remember the user's language preference.
You can find its value in the *Cookies* tab of the developer tools of your browser:

To enable your extension to read the cookie and that way share the locale settings with the web app, you need to set the cookies permission in the `wxt.config.ts` under `manifest.permissions` field:
```ts
export default defineConfig({
manifest: {
permissions: ["cookies"],
},
});
```
And to be able to read the cookie from your app url, you need to set host\_permissions, which will include your app url:
```ts
export default defineConfig({
manifest: {
host_permissions: ["http://localhost/*", "https://your-app-url.com/*"],
},
});
```
Then you would be able to share the cookie between your apps and also read its value using `browser.cookies` API.
Avoid using `` in `host_permissions`. It affects all urls and may cause security issues, as well as a [rejection](https://developer.chrome.com/docs/webstore/review-process#review-time-factors) from the destination store.
## Translating extension
To translate individual components and screens, you can use the well-known `useTranslation` hook.
```tsx
import { useTranslation } from "@turbostarter/i18n";
export const Popup = () => {
const { t } = useTranslation();
return
{t("hello")}
;
};
```
That's the recommended way to translate stuff inside your extension.
### Store presence
As we saw in the [manifest](/docs/extension/configuration/manifest#locales) section, you can also localize your extension's store presence (like title, description, and other metadata). This allows you to customize how your extension appears in different web stores based on the user's language.
Each store has specific requirements for localization:
* [Chrome Web Store](https://developer.chrome.com/docs/webstore/cws-dashboard-listing/) requires a `_locales` directory with JSON files for each language
* [Firefox Add-ons](https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Internationalization) uses a similar structure but with some differences in the manifest
Although most of the config is abstracted behind common structure, please follow the store-specific guides below for detailed instructions on setting up localization for your extension's store listing.
## Language switcher
TurboStarter ships with a language switcher component that allows users to switch between languages in your extension. You can import and use the `LocaleSwitcher` component in your popup, options page, or any other extension view:
```tsx
import { LocaleSwitcher } from "@turbostarter/ui-web";
export const Popup = () => {
return ;
};
```
As the web app and extension share the same i18n configuration (cookie), changing the language in one will affect the other. **This is intentional** and ensures a consistent experience across both platforms, since your extension likely serves as a companion to the web app and should maintain the same language preferences.
## Best practices
Here are key best practices for managing translations in your browser extension:
* Use descriptive, hierarchical translation keys
```ts
// β Good
"popup.settings.language";
"content.toolbar.save";
// β Bad
"saveButton";
"text1";
```
* Organize translations by extension views and features
```
_locales/
βββ en/
β βββ messages.json
β βββ popup.json
β βββ options.json
βββ es/
βββ messages.json
βββ popup.json
βββ options.json
```
* Handle fallback languages gracefully
* Keep manifest descriptions localized for store listings
* Consider context in translations:
```ts
// Context-aware messages
t("button.save", { context: "document" }); // "Save document"
t("button.save", { context: "settings" }); // "Apply changes"
```
* Use placeholders for dynamic content:
```ts
// With variables
t("status.saved", { time: "2 minutes ago" }); // "Last saved 2 minutes ago"
// With plurals
t("items", { count: 5 }); // "5 items"
```
* Keep translations in sync between extension views
* Cache translations for offline functionality
file: ./src/content/docs/(core)/extension/marketing.mdx
meta: {
"title": "Marketing",
"description": "Learn how to market your mobile application."
}
As you saw in the [Extras](/docs/extension/extras) section, TurboStarter comes with a lot of tips and tricks to make your product better and help you launch your extension faster with higher traffic.
The same applies to [submission tips](/docs/extension/extras#submission-tips) to help you get your extension approved by the browser stores faster.
We'll talk more about the whole process of deploying and publishing your extension in the [Publishing](/docs/extension/publishing/checklist) section, here we'll go through some guidelines that you need to follow to make your store's visibility higher.
## Before you submit
To help your extension approval go as smoothly as possible, review the common missteps listed below that can slow down the review process or trigger a rejection. This doesn't replace the official guidelines or guarantee approval, but making sure you can check every item on the list is a good start.
Make sure you:
* Test your extension thoroughly for crashes and bugs
* Ensure that all extension information and metadata is complete and accurate
* Update your contact information in case the review team needs to reach you
* Provide clear instructions on how to use your extension, including any special setup required
* If your extension requires an account, provide a demo account or a way to test all features without signing up
* Enable and test all backend services to ensure they're live and accessible during review
* Include detailed explanations of non-obvious features in the extension description
* Ensure your extension complies with the specific browser store's policies (e.g., [Chrome Web Store](https://developer.chrome.com/docs/webstore/program-policies/best-practices), [Firefox Add-ons](https://extensionworkshop.com/documentation/publish/add-on-policies/), etc.)
* Remove any references to features not supported in browser extensions (e.g., in-app purchases)
Following these basic steps during development and before submission will help you get your extension approved faster and with fewer issues.
## Guidelines
Each store has slightly different guidelines, but some of them are general and can be applied to all stores:
* **Security**: Your extension must not contain malicious code or behavior that can harm users' devices or data.
* **Performance**: Your extension must be performant and stable, with a smooth user experience.
* **Privacy**: Your extension must respect user privacy and not collect unnecessary data without explicit consent.
* **Compliance**: Your extension must comply with all relevant laws and regulations.
You can read more about official guidelines for each store in the following links:
* [Chrome Web Store](https://developer.chrome.com/docs/webstore/program-policies/best-practices)
* [Firefox Add-ons](https://extensionworkshop.com/documentation/publish/add-on-policies/)
## Common mistakes
There are a few common mistakes that you should avoid to make sure your extension can be accepted in the stores. The most common ones are:
* **Not enough description** - make sure to describe all the features of your extension and how it works in your store listing, that way users won't be confused about what your extension does. Also include detailed information in the single purpose field regarding your extension's primary functionality.
* **Privacy issues** - respect user privacy and require as least permissions as possible, don't ask for permissions that are not necessary for your extension to work
* **Customer support** - provide a way to contact you in case the user has any issues with your extension
* **Stay up-to-date** - keep your extension and its documentation up-to-date to ensure a smooth user experience and to prevent issues during the review process.
file: ./src/content/docs/(core)/extension/stack.mdx
meta: {
"title": "Tech Stack",
"description": "A detailed look at the technical details.",
"icon": "Tools"
}
## Turborepo
[Turborepo](https://turbo.build/) is a monorepo tool that helps you manage your project's dependencies and scripts. We chose a monorepo setup to make it easier to manage the structure of different features and enable code sharing between different packages.
} />
## WXT (Vite)
> It's like Next.js for browser extensions.
[WXT](https://www.wxt.dev/) is a very lightweight and powerful framework (based on [Vite](https://vite.dev/)) for building browser extensions using most popular frontend tools. It provides a modern development experience with features like hot module reloading, TypeScript support, and automatic manifest generation.
WXT simplifies the process of creating cross-browser extensions, allowing you to focus on your extension's functionality rather than boilerplate setup.
## React
[React](https://reactjs.org/) is a JavaScript library for building user interfaces. It's the core technology we use for creating the UI of our browser extension, allowing for efficient updates and rendering of components.
## Tailwind CSS
[Tailwind CSS](https://tailwindcss.com) is a utility-first CSS framework that helps you build custom designs without writing any CSS. We also use [Radix UI](https://radix-ui.com) for our headless components library and [shadcn/ui](https://ui.shadcn.com) which enables you to generate pre-designed components with a single command.
## Hono
[Hono](https://hono.dev) is a small, simple, and ultrafast web framework for the edge. It provides tools to help you build APIs and web applications faster. It includes an RPC client for making type-safe function calls from the frontend. We use Hono to build our serverless API endpoints.
## Better Auth
[Better Auth](https://www.better-auth.com) is a modern authentication library for fullstack applications. It provides ready-to-use snippets for features like email/password login, magic links, OAuth providers, and more. We use Better Auth to handle all authentication flows in our application.
## Drizzle
[Drizzle](https://orm.drizzle.team/) is a super fast [ORM](https://orm.drizzle.team/docs/overview) (Object-Relational Mapping) tool for databases. It helps manage databases, generate TypeScript types from your schema, and run queries in a fully type-safe way.
We use [PostgreSQL](https://www.postgresql.org) as our default database, but thanks to Drizzle's flexibility, you can easily switch to MySQL, SQLite or any [other supported database](https://orm.drizzle.team/docs/connect-overview) by updating a few configuration lines.
file: ./src/content/docs/(core)/mobile/ai.mdx
meta: {
"title": "AI",
"description": "Learn how to use AI integration in your mobile app."
}
As AI integration for [web](/docs/web/ai/overview), [extension](/docs/extension/ai), and mobile is based on the same battle-tested [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction), the implementation is very similar across platforms.
In this section, we'll focus on how to consume AI responses in the mobile app. For server-side implementation details, please refer to the [web documentation](/docs/web/ai/overview).
## Features
The most common AI integration features are also supported in the mobile app:
* **Chat**: Build chat interfaces inside native mobile apps.
* **Streaming**: Receive AI responses as soon as the model starts generating them, without waiting for the full response to be completed.
* **Image generation**: Generate images based on a given prompt.
You can easily compose your application using these building blocks or extend them to suit your specific needs.
## Usage
The usage of AI integration in the mobile app is the same as for [web app](/docs/web/ai/configuration#client-side) and [browser extension](/docs/extension/ai#server--client). We use the exact same [API endpoint](/docs/web/ai/configuration#api-endpoint), and since TurboStarter ships with built-in support for streaming on mobile, we can leverage it to display answers incrementally to the user as they're generated.
```tsx title="ai.tsx"
import { ScrollView, Text } from "react-native";

import { useChat } from "ai/react";

const AI = () => {
  const { messages } = useChat({
    api: "/api/ai/chat",
  });

  return (
    <ScrollView>
      {messages.map((message) => (
        <Text key={message.id}>{message.content}</Text>
      ))}
    </ScrollView>
  );
};

export default AI;
```
By leveraging this integration, we can easily manage the state of the AI request and update the UI as soon as the response is ready.
TurboStarter ships with a ready-to-use implementation of AI chat, allowing you to see this solution in action. Feel free to reuse or modify it according to your needs.
file: ./src/content/docs/(core)/mobile/billing.mdx
meta: {
"title": "Billing",
"description": "Get started with billing in TurboStarter."
}
For now, billing has limited functionality on mobile; we mostly rely on the [web app](/docs/web/billing/overview) to handle billing.
We are working on fully-featured mobile billing to help you monetize your mobile app more easily. Stay tuned for updates.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
## Fetching customer data
When your user purchased a plan from your landing page or web app, you can easily fetch their data using the [API](/docs/mobile/api/client).
To do so, just call the `/api/billing/customer` endpoint:
```tsx title="customer-screen.tsx"
import { useQuery } from "@tanstack/react-query";
import { Text } from "react-native";

import { handle } from "@turbostarter/api/utils";

import { api } from "~/lib/api";

export default function CustomerScreen() {
  const { data: customer, isLoading } = useQuery({
    queryKey: ["customer"],
    queryFn: () => handle(api.billing.customer.$get()),
  });

  if (isLoading) return <Text>Loading...</Text>;

  return <Text>{customer?.plan}</Text>;
}
```
You may also want to ensure that the user is logged in before fetching their billing data, to avoid unnecessary API calls.
```tsx title="customer-screen.tsx"
import { useQuery } from "@tanstack/react-query";
import { Text, View } from "react-native";

import { handle } from "@turbostarter/api/utils";

import { api } from "~/lib/api";
// adjust the import path to wherever your auth hooks live
import { useSession } from "~/lib/auth";

export default function CustomerScreen() {
  const {
    data: { user },
  } = useSession();

  const { data: customer } = useQuery({
    queryKey: ["customer"],
    queryFn: () => handle(api.billing.customer.$get()),
    enabled: !!user, // [!code highlight]
  });

  if (!user || !customer) {
    return null;
  }

  return (
    <View>
      <Text>{user.email}</Text>
      <Text>{customer.plan}</Text>
    </View>
  );
}
```
Be mindful when implementing payment-related features in your mobile app. Apple has strict guidelines regarding external payment systems and **may reject your app** if you aggressively redirect users to web-based payment flows. Make sure to review the [App Store Review Guidelines](https://developer.apple.com/app-store/review/guidelines/#payments) carefully and consider implementing native in-app purchases for iOS users to ensure compliance.
We are currently working on a fully native payments system that will make it easier to comply with Apple's guidelines - stay tuned for updates!
file: ./src/content/docs/(core)/mobile/cli.mdx
meta: {
"title": "CLI",
"description": "Start your new project with a single command.",
"icon": "Command",
"mirror": "../web/cli.mdx"
}
file: ./src/content/docs/(core)/mobile/database.mdx
meta: {
"title": "Database",
"description": "Get started with the database."
}
To enable communication between your Expo app and the server in a production environment, the web application with Hono API must be deployed first.
As a mobile app uses only client-side code, **there's no way to interact with the database directly**.
Also, you should avoid any workarounds to interact with the database directly, because it can lead to leaking your database credentials and other security issues.
## Recommended approach
You can safely use the [API](/docs/mobile/api/overview) and call the endpoints which will run queries on the database.
To do this you need to set up the database on the [web, server side](/docs/web/database/overview) and then use the [API client](/docs/mobile/api/client) to interact with it.
Learn more about its configuration in the web part of the docs, especially in the following sections:
file: ./src/content/docs/(core)/mobile/extras.mdx
meta: {
"title": "Extras",
"description": "See what you get together with the code.",
"icon": "Gift",
"mirror": "../web/extras.mdx"
}
file: ./src/content/docs/(core)/mobile/faq.mdx
meta: {
"title": "FAQ",
"description": "Find answers to common technical questions.",
"icon": "Question",
"mirror": "../web/faq.mdx"
}
file: ./src/content/docs/(core)/mobile/index.mdx
meta: {
"title": "Introduction",
"description": "Get started with TurboStarter mobile kit.",
"icon": "Home",
"index": true,
"mirror": "../web/index.mdx"
}
file: ./src/content/docs/(core)/mobile/internationalization.mdx
meta: {
"title": "Internationalization",
"description": "Learn how to internationalize your mobile app."
}
TurboStarter mobile uses [i18next](https://www.i18next.com/) and [expo-localization](https://docs.expo.dev/versions/latest/sdk/localization/) for internationalization. This powerful combination allows you to leverage both i18next's mature translation framework and Expo's native device locale detection.
While i18next handles the translation management, expo-localization provides
seamless integration with the device's locale settings. This means your app
can automatically detect and adapt to the user's preferred language, while
still maintaining the flexibility to override it when needed.
The mobile app's internationalization is configured to work out of the box with:
* Automatic device language detection
* Right-to-left (RTL) layout support
* Locale-aware date and number formatting
* Fallback language handling
You can read more about the underlying technologies in their documentation:
* [i18next documentation](https://www.i18next.com/overview/getting-started)
* [expo-localization documentation](https://docs.expo.dev/versions/latest/sdk/localization/)

## Configuration
The global configuration is defined in the `@turbostarter/i18n` package and shared across all applications. You can read more about it in the [web configuration](/docs/web/internationalization/configuration) documentation.
By default, the locale is automatically detected based on the user's device settings. You can override it and set the default locale of your mobile app in the [app configuration](/docs/mobile/configuration/app) file.
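As a purely hypothetical sketch of such an override (the actual shape and field names of the app configuration file may differ in your kit, so treat this as an illustration only):

```typescript
// A hypothetical app configuration — check your kit's app config file
// for the actual shape and field names.
export const appConfig = {
  name: "TurboStarter",
  // used as the default instead of relying solely on device detection
  locale: "en",
} as const;
```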
## Translating app
To translate individual components and screens, you can use the `useTranslation` hook.
```tsx
import { Text } from "react-native";

import { useTranslation } from "@turbostarter/i18n";

export default function MyComponent() {
  const { t } = useTranslation();

  return <Text>{t("hello")}</Text>;
}
```
It's the recommended way to translate your app.
### Store presence
If you plan on shipping your app to different countries or regions or want it to support various languages, you can provide localized strings for things like the display name and system dialogs.
To do so, check the [official Expo documentation](https://docs.expo.dev/guides/localization/) as it requires modifying your app configuration (`app.config.ts`).
You can find the resources below helpful in this process:
## Language switcher
TurboStarter ships with a language switcher component that allows you to switch between languages. You can import and use the `LocaleSwitcher` component and drop it anywhere in your application to allow users to change the language seamlessly.
```tsx
import { LocaleSwitcher } from "@turbostarter/ui-mobile";

export default function MyComponent() {
  return <LocaleSwitcher />;
}
```
The component automatically displays all languages configured in your i18n settings. When a user switches languages, it will be reflected in the app and saved into persistent storage to keep the language across app restarts.
## Best practices
Here are key best practices for managing translations in your mobile app:
* Use clear, hierarchical translation keys for easy maintenance
```ts
// ✅ Good
"screen.home.welcome";
"component.button.submit";

// ❌ Bad
"welcomeText";
```
* Organize translations by app screens and features
```
translations/
├── en/
│   ├── layout.json
│   └── common.json
└── es/
    ├── layout.json
    └── common.json
```
* Consider device language settings and regional formats
* Cache translations locally for offline access
* Handle dynamic content for mobile contexts:
```ts
// Device-specific messages
t("errors.noConnection"); // "Check your internet connection"
// Dynamic values
t("storage.space", { gb: 2.5 }); // "2.5 GB available"
```
* Keep translations concise - mobile screens have limited space
* Test translations with different screen sizes and orientations
file: ./src/content/docs/(core)/mobile/marketing.mdx
meta: {
"title": "Marketing",
"description": "Learn how to market your mobile application."
}
As you saw in the [Extras](/docs/mobile/extras) section, TurboStarter comes with a lot of tips and tricks to make your product better and help you launch your app faster with higher traffic.
The same applies to [submission tips](/docs/mobile/extras#submission-tips) to help you get your app approved by Apple and Google faster.
We'll talk more about the whole process of deploying and publishing your app in the [Publishing](/docs/mobile/publishing/checklist) section; here we'll go through some guidelines you need to follow to increase your app's visibility in the stores.
## Before you submit
To help your app approval go as smoothly as possible, review the common missteps listed below that can slow down the review process or trigger a rejection. This doesn't replace the official guidelines or guarantee approval, but making sure you can check every item on the list is a good start.
Make sure you:
* Test your app for crashes and bugs
* Ensure that all app information and metadata is complete and accurate
* Update your contact information in case App Review needs to reach you
* Provide App Review with full access to your app. If your app includes account-based features, provide either an active demo account or fully-featured demo mode, plus any other hardware or resources that might be needed to review your app (e.g. login credentials or a sample QR code)
* Enable backend services so that they're live and accessible during review
* Include detailed explanations of non-obvious features and in-app purchases in the App Review notes, including supporting documentation where appropriate
Following these basic steps during development and before submission will help you get your app approved faster.
## App Store (iOS)
Apple reviews are much stricter than Google reviews, so you need to make sure your app is ready for the App Store.
### Guidelines
Apple has a set of [guidelines](https://developer.apple.com/app-store/review/guidelines/) that you need to follow to make sure your app can be accepted in the App Store.
These include:
* **Safety**: Your app must not contain content or behavior that is harmful, abusive, or threatening.
* **Performance**: Your app must be performant and stable, with a smooth user experience.
* **Business**: Your app must not engage in unethical or deceptive practices.
* **Design**: Your app must have a clean and intuitive design.
* **Legal**: Your app must comply with all relevant laws and regulations.
You can read more about each guideline in the [official App Review Guidelines](https://developer.apple.com/app-store/review/guidelines/).
### Search optimization
App store optimization is the process of increasing an app or game's visibility in an app store, with the objective of increasing organic app downloads. Apps are more visible when they rank high on a wide variety of search terms, maintain a high position in the top charts, or get featured on the store.
There are a few actions that you can take to improve your app's visibility in the App Store:
* **Choose accurate keywords**: Use relevant keywords in your app's store listing.
* **Create a compelling app name, subtitle, and description**: Your app's title should be catchy and descriptive, the same applies to the subtitle and description.
* **Assign the right categories**: Make sure your app is categorized in the right category, this will help you reach the right audience.
* **Foster positive ratings**: Ratings and reviews appear on your product page and influence how your app ranks in search results. They can encourage people to engage with your app, so focus on providing a great app experience that motivates users to leave positive reviews.
* **Publish in-app events**: You can publish in-app events to promote your app and encourage users to engage with your app. (e.g. game competitions)
* **Promote in-app purchases**: Your promoted in-app purchases appear in search results on the App Store. Tapping an in-app purchase leads to your product page, which displays your app's description, screenshots, app previews, and in-app events, and lets people initiate an in-app purchase.
Read more about App Store Optimization in the [official documentation](https://developer.apple.com/app-store/search/).
## Google Play Store (Android)
Google reviews are less stringent than Apple reviews and usually take less time to review, but you still need to make sure your app is ready for the Play Store.
### Guidelines
Google has its own guidelines that apps must adhere to. Some important aspects to consider include:
* **Spam, functionality, and user experience**: Your app must not be spammy, must work as expected and must provide a good user experience.
* **Restricted content**: Before submitting an app to Google Play, ensure it complies with these content policies and with local laws.
* **Privacy**: Apps that are deceptive, malicious, or intended to abuse or misuse any network, device, or personal data are strictly prohibited.
* **Monetization**: Your app must not engage in unethical or deceptive practices.
For more detailed information and an interactive checklist, check the [Google requirements page](https://developers.google.com/workspace/marketplace/about-app-review).
### Search optimization
Ensuring that your app and store listing is thorough and optimized is an important factor in getting discovered by users on Google Play.
Follow these steps to optimize your app's visibility on Google Play:
* **Build a comprehensive store listing**: This includes providing accurate **title**, **description** and **promo text**.
* **Use high-quality graphics and images**: App icons, images, and screenshots help make your app stand out in search results, categories, and featured app lists.
* **Diversify your audience**: Google provides automated machine translations of store listings that you don't explicitly define for your app. However, using a professional translation service for your *Description* can lead to better search results and discoverability for worldwide users.
* **Create a great user experience**: Google Play search factors in the overall experience of your app based on user behavior and feedback. Apps are ranked based on a combination of ratings, reviews, downloads, and other factors.
## Common mistakes
There are a few common mistakes that you should avoid to make sure your app can be accepted in the stores. Apple reports that, on average, over **40%** of unresolved issues relate to [guideline 2.1: App Completeness](https://developer.apple.com/app-store/review/guidelines/#2.1), so make sure to avoid these:
* **Crashes and bugs**
* **Broken links**
* **Placeholder content**
* **Incomplete information**
* **Privacy policy issues**
* **Inaccurate screenshots**
* **Repeated submission of similar apps**
Don't worry if your first submission is rejected, improve it, fix all the mentioned issues and try again.
file: ./src/content/docs/(core)/mobile/push-notifications.mdx
meta: {
"title": "Push notifications",
"description": "Engage your users with personalized notifications."
}
We are working on push notifications to help you engage your users. Stay tuned for updates.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
file: ./src/content/docs/(core)/mobile/stack.mdx
meta: {
"title": "Tech Stack",
"description": "A detailed look at the technical details.",
"icon": "Tools"
}
## Turborepo
[Turborepo](https://turbo.build/) is a monorepo tool that helps you manage your project's dependencies and scripts. We chose a monorepo setup to make it easier to manage the structure of different features and enable code sharing between different packages.
## React Native + Expo
[React Native](https://reactnative.dev/) is an open-source mobile application development framework created by Facebook. It is used to develop applications for Android and iOS by enabling developers to use [React](https://react.dev) along with native platform capabilities.
> It's like Next.js for mobile development.
[Expo](https://expo.dev/) is a framework and a platform built around React Native. It provides a set of tools and services that help you develop, build, deploy, and quickly iterate on iOS, Android, and web apps from the same JavaScript/TypeScript codebase.
## Tailwind CSS
[NativeWind](https://www.nativewind.dev/) uses Tailwind CSS as scripting language to create a universal style system for React Native. It allows you to use Tailwind CSS classes in your React Native components, providing a familiar styling experience for web developers. We also use [React Native Reusables](https://github.com/mrzachnugent/react-native-reusables) for our headless components library with support of CLI to generate pre-designed components with a single command.
## Hono
[Hono](https://hono.dev) is a small, simple, and ultrafast web framework for the edge. It provides tools to help you build APIs and web applications faster. It includes an RPC client for making type-safe function calls from the frontend. We use Hono to build our serverless API endpoints.
## Better Auth
[Better Auth](https://www.better-auth.com) is a modern authentication library for fullstack applications. It provides ready-to-use snippets for features like email/password login, magic links, OAuth providers, and more. We use Better Auth to handle all authentication flows in our application.
## Drizzle
[Drizzle](https://orm.drizzle.team/) is a super fast [ORM](https://orm.drizzle.team/docs/overview) (Object-Relational Mapping) tool for databases. It helps manage databases, generate TypeScript types from your schema, and run queries in a fully type-safe way.
We use [PostgreSQL](https://www.postgresql.org) as our default database, but thanks to Drizzle's flexibility, you can easily switch to MySQL, SQLite or any [other supported database](https://orm.drizzle.team/docs/connect-overview) by updating a few configuration lines.
## EAS (Expo Application Services)
[EAS](https://expo.dev/eas) is a set of cloud services provided by Expo for React Native app development. It includes tools for building, submitting, and updating your app, as well as over-the-air updates and analytics.
file: ./src/content/docs/(core)/web/cli.mdx
meta: {
"title": "CLI",
"description": "Start your new project with a single command.",
"icon": "Command"
}
To help you get started with TurboStarter **as quickly as possible**, we've developed a [CLI](https://www.npmjs.com/package/@turbostarter/cli) that enables you to create a new project (with all the configuration) in seconds.
The CLI is a set of commands that will help you create a new project, generate code, and manage your project efficiently.
Currently, the following action is available:
* **Starting a new project** - Generate starter code for your project with all necessary configurations in place (billing, database, emails, etc.)
**The CLI is in beta**, and we're actively working on adding more commands and actions. Soon, the following features will be available:
* **Translations** - Translate your project, verify translations, and manage them effectively
* **Installing plugins** - Easily install plugins for your project
* **Dynamic code generation** - Generate dynamic code based on your project structure
## Installation
You can run commands using `npx`:
```bash
npx turbostarter
npx @turbostarter/cli@latest
```
If you don't want to install the CLI globally, you can replace `turbostarter` in the examples below with `npx @turbostarter/cli@latest`.
This also allows you to always run the latest version of the CLI without having to update it.
## Usage
Running the CLI without any arguments will display the general information about the CLI:
```bash
Usage: turbostarter [options] [command]

Your Turbo Assistant for starting new projects, adding plugins and more.

Options:
  -v, --version   display the version number
  -h, --help      display help for command

Commands:
  new             create a new TurboStarter project
  help [command]  display help for command
```
You can also display help for it or check the actual version.
### Starting a new project
Use the `new` command to initialize configuration and dependencies for a new project.
```bash
npx turbostarter new
```
You will be asked a few questions to configure your project:
```bash
✓ All prerequisites are satisfied, let's start!

? What do you want to ship? ›
  Web app
  Mobile app
❯ Browser extension

? Enter your project name. ›

? How do you want to use database? ›
  Local (powered by Docker)
  Cloud

? What do you want to use for billing? ›
  Stripe
  Lemon Squeezy

...

You can now get started. Open the project and just ship it!

Problems? https://www.turbostarter.dev/docs
```
It will create a new project, configure providers, install dependencies and start required services in development mode.
file: ./src/content/docs/(core)/web/extras.mdx
meta: {
"title": "Extras",
"description": "See what you get together with the code.",
"icon": "Gift"
}
## Tips and Tricks
In many places, next to the code you will find some marketing tips, design suggestions, and potential risks. This is to help you build a better product and avoid common pitfalls.
```tsx title="Hero.tsx"
import Hero from "~/components/Hero";

export const Home = () => {
  return (
    <Hero>
      {/* 💡 Use something that user can visualize e.g.
          "Ship your startup while on the toilet" */}
    </Hero>
  );
};
```
It's using the `@tanstack/react-query` [useQuery API](https://tanstack.com/query/latest/docs/framework/react/reference/useQuery), so you shouldn't have any trouble with it.
## Mutations
If you want to perform a mutation in your extension code, you can use the `useMutation` hook that comes straight from the integration with [Tanstack Query](https://tanstack.com/query):
```tsx title="components/popup/form.tsx"
import { useMutation, useQueryClient } from "@tanstack/react-query";

import { api } from "~/lib/api";

export const CreatePost = () => {
  const queryClient = useQueryClient();

  const { mutate } = useMutation({
    mutationFn: async (post: PostInput) => {
      const response = await api.posts.$post(post);

      if (!response.ok) {
        throw new Error("Failed to create post!");
      }
    },
    onSuccess: () => {
      toast.success("Post created successfully!");
      queryClient.invalidateQueries({ queryKey: ["posts"] });
    },
  });

  // `newPost` comes from your form state
  return <button onClick={() => mutate(newPost)}>Create post</button>;
};
```
Here, we're also invalidating the query after the mutation is successful. This is a very important step to make sure that the data is updated in the UI.
## Handling responses
As you can see in the examples above, the [Hono RPC](https://hono.dev/docs/guides/rpc) client returns a plain `Response` object, which you can use to get the data or handle errors. However, implementing this handling in every query or mutation can be tedious and will introduce unnecessary boilerplate in your codebase.
That's why we've developed the `handle` function that unwraps the response for you, handles errors, and returns the data in a consistent format. You can safely use it with any procedure from the API client:
```tsx
// [!code word:handle]
import { useQuery } from "@tanstack/react-query";

import { handle } from "@turbostarter/api/utils";

import { api } from "~/lib/api";

export const Posts = () => {
  const { data: posts, isLoading } = useQuery({
    queryKey: ["posts"],
    queryFn: handle(api.posts.$get),
  });

  if (isLoading) {
    return <div>Loading...</div>;
  }

  /* do something with the data... */
  return <div>{JSON.stringify(posts)}</div>;
};
```
```tsx
// [!code word:handle]
import { useMutation, useQueryClient } from "@tanstack/react-query";

import { handle } from "@turbostarter/api/utils";

import { api } from "~/lib/api/client";

export const CreatePost = () => {
  const queryClient = useQueryClient();

  const { mutate } = useMutation({
    mutationFn: handle(api.posts.$post),
    onSuccess: () => {
      toast.success("Post created successfully!");
      queryClient.invalidateQueries({ queryKey: ["posts"] });
    },
  });

  // `newPost` comes from your form state
  return <button onClick={() => mutate(newPost)}>Create post</button>;
};
```
With this approach, you can focus on the business logic instead of repeatedly writing code to handle API responses in your browser extension components, making your extension's codebase more readable and maintainable.
The same error handling and response unwrapping benefits apply whether you're building web, mobile, or extension interfaces - allowing you to keep your data fetching logic consistent across all platforms.
file: ./src/content/docs/(core)/extension/api/overview.mdx
meta: {
"title": "Overview",
"description": "Get started with the API.",
"index": true
}
To enable communication between your WXT extension and the server in a production environment, the web application with Hono API must be deployed first.
TurboStarter is designed to be a scalable and production-ready full-stack starter kit. One of its core features is a dedicated and extensible API layer. To enable this in a type-safe manner, we chose [Hono](https://hono.dev) as the API server and client library.
Hono is a small, simple, and ultrafast web framework that gives you a way to
define your API endpoints with full type safety. It provides built-in
middleware for common needs like validation, caching, and CORS. It also
includes an [RPC client](https://hono.dev/docs/guides/rpc) for making
type-safe function calls from the frontend. Being edge-first, it's optimized
for serverless environments and offers excellent performance.
All API endpoints and their resolvers are defined in the `packages/api/` package. Here you will find a `modules` folder that contains the different feature modules of the API. Each module has its own folder and exports all its resolvers.
For each module, we create a separate Hono route in the `packages/api/index.ts` file and aggregate all sub-routers into one main router.
The API is then exposed as a route handler that will be provided as a Next.js API route:
```ts title="apps/web/src/app/api/[...route]/route.ts"
import { handle } from "hono/vercel";
import { appRouter } from "@turbostarter/api";
const handler = handle(appRouter);
export { handler as GET, handler as POST };
```
Learn more about how to use the API in your browser extension code in the following sections:
file: ./src/content/docs/(core)/extension/auth/overview.mdx
meta: {
"title": "Overview",
"description": "Learn how to authenticate users in your extension."
}
TurboStarter uses [Better Auth](https://better-auth.com) to handle authentication. It's a secure, production-ready authentication solution that integrates seamlessly with many frameworks and provides enterprise-grade security out of the box.
One of the core principles of TurboStarter is to do things **as simple as possible**, and to make everything **as performant as possible**.
Better Auth provides an excellent developer experience with minimal configuration required, while maintaining enterprise-grade security standards. Its framework-agnostic approach and focus on performance makes it the perfect choice for TurboStarter.

You can read more about Better Auth in the [official documentation](https://better-auth.com/docs).
To keep things simple and secure, **the extension shares the same authentication session with your web app.**
This is a common approach used by popular services like [Notion](https://www.notion.so) and [Google Workspace](https://workspace.google.com/). The benefits include:
* Users only need to sign in once through the web app
* The extension automatically inherits the authenticated session
* Sign out actions are synchronized across platforms
* Reduced security surface area and complexity
Before setting up extension authentication, make sure to first [configure authentication for your web app](/docs/web/auth/overview) and then head back to the extension code.
The following sections cover everything you need to know about authentication in your extension:
file: ./src/content/docs/(core)/extension/auth/session.mdx
meta: {
"title": "Session",
"description": "Learn how to manage the user session in your extension."
}
We're not implementing fully-featured auth flow in the extension. Instead, **we're sharing the same auth session with the web app.**
It's a common practice in the industry used e.g. by [Notion](https://www.notion.so) and [Google Workspace](https://workspace.google.com/).
That way, when the user is signed in to the web app, the extension can use the same session to authenticate them, so they don't have to sign in again. Signing out from the extension will likewise affect both platforms.
For browser extensions, we need to define an [authentication trusted origin](https://www.better-auth.com/docs/reference/security#trusted-origins) using an extension scheme.
Extension schemes (like `chrome-extension://...`) are used for redirecting users to specific screens after authentication and sharing the auth session with the web app.
To find your extension ID, open Chrome and go to `chrome://extensions/`, enable Developer Mode in the top right, and look for your extension's ID. Then add it to your auth server configuration:
```tsx title="server.ts"
export const auth = betterAuth({
  ...
  trustedOrigins: ["chrome-extension://your-extension-id"],
  ...
});
```
Adding your extension scheme to the trusted origins list is crucial for security - it prevents CSRF attacks and blocks malicious open redirects by ensuring only requests from approved origins (your extension) are allowed through.
[Read more about auth security in Better Auth's documentation.](https://www.better-auth.com/docs/reference/security)
## Cookies
When the user signs in to the [web app](/docs/web) through our [Better Auth API](/docs/web/auth/configuration#api), the web app sets a cookie with the session token under your app's domain, which is later used to validate the session on the server.
You can find your cookie in the *Cookies* tab of the browser's developer tools (make sure you're logged in to the app to see it):

To enable your extension to read the cookie, and thereby share the session with the web app, you need to set the `cookies` permission in `wxt.config.ts` under the `manifest.permissions` field:
```ts title="wxt.config.ts"
export default defineConfig({
  manifest: {
    permissions: ["cookies"],
  },
});
```
To be able to read the cookie from your app's URL, you also need to set `host_permissions` to include your app's URL:
```ts title="wxt.config.ts"
export default defineConfig({
  manifest: {
    host_permissions: ["http://localhost/*", "https://your-app-url.com/*"],
  },
});
```
You can then share the cookie with API requests and read its value using the `browser.cookies` API.
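For illustration, here is a hypothetical sketch of reading that cookie via the `browser.cookies` API. The cookie name and app URL below are assumptions that depend on your Better Auth and domain setup, not TurboStarter's actual implementation:

```ts
import { browser } from "wxt/browser";

// Hypothetical: the cookie name and URL depend on your setup.
export const getSessionCookie = async () => {
  const cookie = await browser.cookies.get({
    url: "https://your-app-url.com",
    name: "better-auth.session_token",
  });

  return cookie?.value ?? null;
};
```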
Avoid using `<all_urls>` in `host_permissions`. It affects all URLs and may cause security issues, as well as a [rejection](https://developer.chrome.com/docs/webstore/review-process#review-time-factors) from the destination store.
## Reading session
You **don't** need to worry about reading, parsing, or validating the session cookie. TurboStarter comes with a pre-built solution that ensures your session is correctly shared with the web app.
It also ensures that appropriate cookies are passed to [API](/docs/web/api/overview) requests, so you can safely use [protected endpoints](/docs/web/api/protected-routes) (that require authentication) in your extension.
To get session details in your extension code (e.g., inside a popup window), you can leverage the `useSession` hook provided by the [auth client](https://www.better-auth.com/docs/basic-usage#client-side) (which is also used in the web and mobile apps):
```tsx title="user.tsx"
import { useSession } from "~/lib/auth";

const User = () => {
  const {
    data: { user, session },
    isPending,
  } = useSession();

  if (isPending) {
    return <div>Loading...</div>;
  }

  /* do something with the session data... */
  return <div>{user?.email}</div>;
};
```
That's how you can access user details right in your extension.
## Signing out
Signing out from the extension uses the same well-known `signOut` function derived from our [auth client](https://www.better-auth.com/docs/basic-usage#signout):
```tsx title="logout.tsx"
import { signOut } from "~/lib/auth";

export const Logout = () => {
  // Trigger sign-out on click
  return <button onClick={() => signOut()}>Sign out</button>;
};
```
The session is automatically invalidated, so the next use of `useSession` or any other query that depends on the session will return `null`. The UI for both the extension and the web app will be updated to show the user as logged out.
As the web app uses the same session cookie, the user will be signed out from the web app as well. **This is intentional**: your extension will most likely serve as an add-on for the web app, and it doesn't make sense to keep the user signed in there if the extension is not used.

file: ./src/content/docs/(core)/extension/configuration/app.mdx
meta: {
"title": "App configuration",
"description": "Learn how to setup the overall settings of your extension."
}
The application configuration is set at `apps/extension/src/config/app.ts`. This configuration stores some overall variables for your application.
This allows you to host multiple apps in the same monorepo, as every application defines its own configuration.
The recommendation is to **not update this directly** - instead, please define the environment variables and override the default behavior. The configuration is strongly typed, so you can use it safely across your codebase - it'll be validated at build time.
```ts title="apps/extension/src/config/app.ts"
import { env } from "~/lib/env";

export const appConfig = {
  name: env.VITE_PRODUCT_NAME,
  url: env.VITE_SITE_URL,
  locale: env.VITE_DEFAULT_LOCALE,
  theme: {
    mode: env.VITE_THEME_MODE,
    color: env.VITE_THEME_COLOR,
  },
} as const;
```
For example, to set the extension default theme color, you'd update the following variable:
```dotenv title=".env.local"
VITE_THEME_COLOR="yellow"
```
Do NOT use `process.env` to get the values of the variables. Variables
accessed this way are not validated at build time, and thus the wrong variable
can be used in production.
## WXT config
To configure framework-specific settings, you can use the `wxt.config.ts` file. You can configure a lot of options there, such as [manifest](/docs/extension/configuration/manifest), [project structure](https://wxt.dev/guide/essentials/project-structure.html) or even [underlying Vite config](https://wxt.dev/guide/essentials/config/vite.html):
```ts title="wxt.config.ts"
import { defineConfig } from "wxt";

export default defineConfig({
  srcDir: "src",
  entrypointsDir: "app",
  outDir: "build",
  modules: [],
  manifest: {
    // Put manifest changes here
  },
  vite: () => ({
    // Override config here, same as `defineConfig({ ... })`
    // inside vite.config.ts files
  }),
});
```
Make sure to set it up correctly, as it's the main source of configuration for your development, build, and publishing processes.
file: ./src/content/docs/(core)/extension/configuration/environment-variables.mdx
meta: {
"title": "Environment variables",
"description": "Learn how to configure environment variables."
}
Environment variables are defined in the `.env` file in the root of the repository and in the root of the `apps/extension` package.
* **Shared environment variables**: Defined in the **root** `.env` file. These are shared between environments (e.g., development, staging, production) and apps (e.g., web, extension).
* **Environment-specific variables**: Defined in `.env.development` and `.env.production` files. These are specific to the development and production environments.
* **App-specific variables**: Defined in the app-specific directory (e.g., `apps/extension`). These are specific to the app and are not shared between apps.
* **Bundle-specific variables**: Specific to the [bundle target](https://wxt.dev/guide/essentials/config/environment-variables.html#built-in-environment-variables) (e.g. `.env.safari`, `.env.firefox`) or [bundle tag](https://wxt.dev/guide/essentials/config/environment-variables.html#built-in-environment-variables) (e.g. `.env.testing`)
* **Build environment variables**: Not stored in the `.env` file. Instead, they are stored in the environment variables of the CI/CD system.
* **Secret keys**: They're not stored on the extension side, instead [they're defined on the web side.](/docs/web/configuration/environment-variables#secret-keys)
## Shared variables
Here you can add all the environment variables that are shared across all the apps.
To override these variables in a specific environment, please add them to the specific environment file (e.g. `.env.development`, `.env.production`).
```dotenv title=".env.local"
# Shared environment variables
# The database URL is used to connect to your database.
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres"
# The name of the product. This is used in various places across the apps.
PRODUCT_NAME="TurboStarter"
# The url of the web app. Used mostly to link between apps.
URL="http://localhost:3000"
...
```
## App-specific variables
Here you can add all the environment variables that are specific to the app (e.g. `apps/extension`).
You can also override the shared variables defined in the root `.env` file.
```dotenv title="apps/extension/.env.local"
# App-specific environment variables
# Env variables extracted from shared to be exposed to the client in WXT (Vite) extension
VITE_SITE_URL="${URL}"
VITE_DEFAULT_LOCALE="${DEFAULT_LOCALE}"
# Theme mode and color
VITE_THEME_MODE="system"
VITE_THEME_COLOR="orange"
...
```
To make environment variables available in the browser extension code, you need to prefix them with `VITE_`. They will be injected into the code during the build process.
Only environment variables prefixed with `VITE_` will be injected.
[Read more about Vite environment variables.](https://vite.dev/guide/env-and-mode.html#env-files)
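For illustration, here is a minimal sketch of how a `VITE_`-prefixed variable might be read and validated at startup. The `requireEnv` helper below is hypothetical, not part of TurboStarter's actual env validation:

```typescript
// Hypothetical helper: fail fast when a client-exposed variable is missing.
const requireEnv = (
  env: Record<string, string | undefined>,
  key: `VITE_${string}`,
): string => {
  const value = env[key];
  if (!value) {
    throw new Error(`Missing environment variable: ${key}`);
  }
  return value;
};

// In extension code you would pass `import.meta.env` here:
const siteUrl = requireEnv(
  { VITE_SITE_URL: "http://localhost:3000" },
  "VITE_SITE_URL",
);
console.log(siteUrl); // "http://localhost:3000"
```

In the real starter, this kind of validation is handled by the typed `~/lib/env` module, so you rarely need to write it yourself.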
## Bundle-specific variables
WXT also provides environment variables specific to a certain [build target](https://wxt.dev/guide/essentials/config/environment-variables.html#built-in-environment-variables) or [build tag](https://wxt.dev/guide/essentials/config/environment-variables.html#built-in-environment-variables) when creating the final bundle. Given the following build command:
```json title="package.json"
"scripts": {
  "build": "wxt build -b firefox --mode testing"
}
```
The following env files will be considered, ordered by priority:
* `.env.firefox`
* `.env.testing`
* `.env`
You shouldn't worry much about this, as TurboStarter comes with preconfigured build processes for all the major browsers.
## Build environment variables
To allow your extension to build properly on CI, you need to define your environment variables in your CI/CD system (e.g. [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/environment-variables)).
TurboStarter comes with a predefined GitHub Actions workflow used to build and submit your extension to the stores. It's located in the `.github/workflows/publish-extension.yml` file.
To correctly set up the build environment variables, you need to define them under the `env` section and then add them as [secrets](https://docs.github.com/en/actions/security-guides/encrypted-secrets-and-variables) to your repository.
```yaml title="publish-extension.yml"
...
jobs:
  extension:
    name: Publish extension
    runs-on: ubuntu-latest
    environment: Production
    env:
      VITE_SITE_URL: ${{ secrets.SITE_URL }}
...
```
We'll go through the whole process of building and publishing the extension in the [publishing guide](/docs/extension/publishing/checklist).
## Secret keys
Secret keys and sensitive information must **never** be stored in the extension code.
This means you need to add the secret keys to the **web app, where the API is deployed.**
The browser extension should only communicate with the backend API, which is typically part of the web app. The web app is responsible for handling sensitive operations and storing secret keys securely.
[See web documentation for more details.](/docs/web/configuration/environment-variables#secret-keys)
This is not a TurboStarter-specific requirement, but a security best practice for any
application. Ultimately, it's your choice.
file: ./src/content/docs/(core)/extension/configuration/manifest.mdx
meta: {
"title": "Manifest",
"description": "Learn how to configure the manifest of your extension."
}
As a requirement from web stores, every extension must have a `manifest.json` file in its root directory that lists important information about the structure and behavior of that extension.
It's a JSON file that contains metadata about the extension, such as its name, version, and permissions.
You can read more about it in the [official documentation](https://developer.chrome.com/docs/extensions/reference/manifest).
## Where is the `manifest.json` file?
WXT **abstracts away** the manifest file. The framework generates the manifest under the hood based on your source files and configurations you export from your code, similar to how Next.js abstracts page routing and SSG with the file system and page components.
That way, you don't have to manually create the `manifest.json` file and worry about correctly setting all the fields.
Most of the common properties are taken from the `package.json` and `wxt.config.ts` files:
| Manifest Field | Abstractions |
| ------------------------ | ------------------------------------------------------------- |
| icons | Auto generated with the `icon.png` in the `/assets` directory |
| action, browser\_actions | Popup window |
| options\_ui | Options page |
| content\_scripts | Content scripts |
| background | Background service worker |
| version                  | Set by the `version` field in `package.json`                  |
| name                     | Set by the `name` field in `wxt.config.ts`                    |
| description              | Set by the `description` field in `wxt.config.ts`             |
| author                   | Set by the `author` field in `wxt.config.ts`                  |
| homepage\_url            | Set by the `homepage` field in `wxt.config.ts`                |
The WXT build process centralizes common metadata and automatically resolves any static file references (such as the popup, background, and content scripts).
This enables you to focus on the metadata that matters, such as name, description, OAuth, and so on.
## Overriding manifest
Sometimes, you want to override the default manifest fields (e.g. because you need to add a new permission that is required for your extension to work).
You'll need to modify your project's `wxt.config.ts` like so:
```ts title="apps/extension/wxt.config.ts"
export default defineConfig({
  manifest: {
    permissions: ["activeTab"],
  },
});
```
Then, your settings will be merged with the settings auto-generated by WXT.
### Environment variables
You can use environment variables inside the manifest overrides:
```ts title="apps/extension/wxt.config.ts"
export default defineConfig({
  manifest: {
    browser_specific_settings: {
      gecko: {
        id: import.meta.env.VITE_FIREFOX_EXT_ID,
      },
    },
  },
});
```
If the environment variable cannot be found, the field will be removed completely from the manifest.
### Locales
TurboStarter extension supports [extension localization](https://developer.chrome.com/docs/extensions/reference/api/i18n) out-of-the-box. You can customize e.g. your extension's name and description based on the language of the user's browser.
Locales are defined in the `/public/_locales` directory. The directory should contain a `messages.json` file for each language you want to support (e.g. `/public/_locales/en/messages.json` and `/public/_locales/es/messages.json`).
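For example, a minimal English locale file might look like this (the message keys and values below are illustrative):

```json title="public/_locales/en/messages.json"
{
  "extensionName": {
    "message": "TurboStarter",
    "description": "The name of the extension."
  },
  "extensionDescription": {
    "message": "Your extension description.",
    "description": "The description of the extension."
  }
}
```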
By default, the first locale available alphabetically is used as the default. However, you can specify a `default_locale` in your manifest like so:
```ts title="apps/extension/wxt.config.ts"
export default defineConfig({
  manifest: {
    default_locale: "en",
  },
});
```
To reference a locale string inside your manifest overrides, wrap the key in `__MSG_` and `__` (e.g. `__MSG_extensionName__`):
```ts title="apps/extension/wxt.config.ts"
export default defineConfig({
  manifest: {
    name: "__MSG_extensionName__",
    description: "__MSG_extensionDescription__",
  },
});
```
Apart from this, we also configure [in-extension internationalization](/docs/extension/internationalization) out-of-the-box to easily translate your components and views.
file: ./src/content/docs/(core)/extension/customization/add-app.mdx
meta: {
"title": "Adding apps",
"description": "Learn how to add apps to your Turborepo workspace."
}
This is an **advanced topic** - you should only follow these instructions if you are sure you want to add a new app to your TurboStarter project within your monorepo and want to keep pulling updates from the TurboStarter repository.
In some ways - creating a new repository may be the easiest way to manage your application. However, if you want to keep your application within the monorepo and pull updates from the TurboStarter repository, you can follow these instructions.
To pull updates into a separate application outside of `extension` - we can use [git subtree](https://www.atlassian.com/git/tutorials/git-subtree).
Basically, we will create a subtree at `apps/extension` and create a new remote branch for the subtree. When we create a new application, we will pull the subtree into the new application. This allows us to keep it in sync with the `apps/extension` folder.
To add a new app to your TurboStarter project, you need to follow these steps:
## Create a subtree
First, we need to create a subtree for the `apps/extension` folder. We will create a branch named `extension-branch` and create a subtree for the `apps/extension` folder.
```bash
git subtree split --prefix=apps/extension --branch extension-branch
```
## Create a new app
Now, we can create a new application in the `apps` folder.
Let's say we want to create a new app `ai-chat` at `apps/ai-chat` with the same structure as the `apps/extension` folder (which acts as the template for all new apps).
```bash
git subtree add --prefix=apps/ai-chat origin extension-branch --squash
```
You should now be able to see the `apps/ai-chat` folder with the contents of the `apps/extension` folder.
## Update the app
When you want to update the new application, follow these steps:
### Pull the latest updates from the TurboStarter repository
The command below will update all the changes from the TurboStarter repository:
```bash
git pull upstream main
```
### Push the `extension-branch` updates
After you have pulled the updates from the TurboStarter repository, you can split the branch again and push the updates to the extension-branch:
```bash
git subtree split --prefix=apps/extension --branch extension-branch
```
Now, you can push the updates to the `extension-branch`:
```bash
git push origin extension-branch
```
### Pull the updates to the new application
Now, you can pull the updates to the new application:
```bash
git subtree pull --prefix=apps/ai-chat origin extension-branch --squash
```
That's it! You now have a new application in the monorepo.
file: ./src/content/docs/(core)/extension/customization/add-package.mdx
meta: {
"title": "Adding packages",
"description": "Learn how to add packages to your Turborepo workspace."
}
This is an **advanced topic** - you should only follow these instructions if you are sure you want to add a new package to your TurboStarter application, instead of adding a folder to your application in `apps/web` or modifying existing packages under `packages`. You don't need to do this to add a new page or component to your application.
To add a new package to your TurboStarter application, you need to follow these steps:
## Generate a new package
First, enter the command below to create a new package in your TurboStarter application:
```bash
turbo gen
```
Turborepo will ask you for the name of the package you want to create. Enter the name and press enter.
If you don't want to add dependencies to your package, you can skip that step by pressing enter.
The command will generate a new package under `packages` named `@turbostarter/<name>`. For example, if you name it `example`, the package will be named `@turbostarter/example`.
## Export a module from your package
By default, the package exports a single module using the `index.ts` file. You can add more exports by creating new files in the package directory and exporting them from the `index.ts` file or creating export files in the package directory and adding them to the `exports` field in the `package.json` file.
### From `index.ts` file
The easiest way to export a module from a package is to create a new file in the package directory and export it from the `index.ts` file.
```ts title="packages/example/src/module.ts"
export function example() {
  return "example";
}
```
Then, export the module from the `index.ts` file.
```ts title="packages/example/src/index.ts"
export * from "./module";
```
### From `exports` field in `package.json`
**This can be very useful for tree-shaking.** Assuming you have a file named `module.ts` in the package directory, you can export it by adding it to the `exports` field in the `package.json` file.
```json title="packages/example/package.json"
{
  "exports": {
    ".": "./src/index.ts",
    "./module": "./src/module.ts"
  }
}
```
**When to do this?**
1. When exporting two modules that don't share dependencies, to ensure better tree-shaking - for example, if your exports contain both client and server modules.
2. For better organization of your package.
For example, create two exports `client` and `server` in the package directory and add them to the `exports` field in the `package.json` file.
```json title="packages/example/package.json"
{
  "exports": {
    ".": "./src/index.ts",
    "./client": "./src/client.ts",
    "./server": "./src/server.ts"
  }
}
```
1. The `client` module can be imported using `import { client } from '@turbostarter/example/client'`
2. The `server` module can be imported using `import { server } from '@turbostarter/example/server'`
## Use the package in your extension
You can now use the package in your extension by importing it using the package name:
```ts title="app/popup/index.tsx"
import { example } from "@turbostarter/example";

console.log(example());
```
Et voilà! You have successfully added a new package to your TurboStarter extension.
file: ./src/content/docs/(core)/extension/customization/components.mdx
meta: {
"title": "Components",
"description": "Manage and customize your extension components.",
"mirror": "../../web/customization/components.mdx"
}
file: ./src/content/docs/(core)/extension/customization/styling.mdx
meta: {
"title": "Styling",
"description": "Get started with styling your extension."
}
To build the extension interface, TurboStarter comes with [Tailwind CSS](https://tailwindcss.com/) and [Radix UI](https://www.radix-ui.com/) pre-configured.
The combination of Tailwind CSS and Radix UI gives you ready-to-use, accessible UI components that can be fully customized to match your brand's design.
## Tailwind configuration
In the `tooling/tailwind/config` directory you will find shared Tailwind CSS configuration files. To change some global styles you can edit the files in this folder.
Here is an example of a shared Tailwind configuration file:
```ts title="tooling/tailwind/config/base.ts"
import type { Config } from "tailwindcss";

export default {
  darkMode: "class",
  content: ["src/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: {
        ...
        primary: {
          DEFAULT: "hsl(var(--color-primary))",
          foreground: "hsl(var(--color-primary-foreground))",
        },
        secondary: {
          DEFAULT: "hsl(var(--color-secondary))",
          foreground: "hsl(var(--color-secondary-foreground))",
        },
        success: {
          DEFAULT: "hsl(var(--color-success))",
          foreground: "hsl(var(--color-success-foreground))",
        },
        ...
      },
    },
  },
  plugins: [animate, containerQueries, typography],
} satisfies Config;
```
For the colors, we rely strictly on [CSS Variables](https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_custom_properties) in [HSL](https://en.wikipedia.org/wiki/HSL_and_HSV) format to allow for easy theme management without the need for any JavaScript.
Also, each app has its own `tailwind.config.ts` file which extends the shared config and allows you to override the global styles.
Here is an example of an app's `tailwind.config.ts` file:
```ts title="apps/mobile/tailwind.config.ts"
import type { Config } from "tailwindcss";
import { fontFamily } from "tailwindcss/defaultTheme";

import baseConfig from "@turbostarter/tailwind-config/web";

export default {
  // We need to append the path to the UI package to the content array so that
  // those classes are included correctly.
  content: [
    ...baseConfig.content,
    "../../packages/ui/{shared,web}/src/**/*.{ts,tsx}",
  ],
  presets: [baseConfig],
  theme: {
    extend: {
      fontFamily: {
        sans: ["DM Sans", ...fontFamily.sans],
        mono: ["DM Mono", ...fontFamily.mono],
      },
    },
  },
} satisfies Config;
```
That way we can have a separation of concerns and a clear structure for the Tailwind CSS configuration.
## Themes
TurboStarter comes with **9+** predefined themes which you can use to quickly change the look and feel of your app.
They're defined in `packages/ui/shared/src/styles/themes` directory. Each theme is a set of variables that can be overridden:
```ts title="packages/ui/shared/src/styles/themes/colors/orange.ts"
export const orange = {
  light: {
    background: [0, 0, 1],
    foreground: [240, 0.1, 0.039],
    card: [0, 0, 1],
    "card-foreground": [240, 0.1, 0.039],
    ...
  },
} satisfies ThemeColors;
```
Each variable is stored as an [HSL](https://en.wikipedia.org/wiki/HSL_and_HSV) array, which is converted to a CSS variable at build time (by our custom build script). That way we ensure full type-safety and can reuse themes across parts of our apps (e.g. use the same theme in emails).
Feel free to add your own themes or override the existing ones to match your brand's identity.
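To illustrate the idea, here is a minimal sketch of such a conversion. The `toCssVar` helper and the exact output format are assumptions for illustration, not TurboStarter's actual build script:

```typescript
// Hypothetical sketch of the HSL-array-to-CSS-variable conversion.
type Hsl = [number, number, number];

// [hue, saturation (0-1), lightness (0-1)] -> "--color-name: H S% L%;"
const toCssVar = (name: string, [h, s, l]: Hsl): string =>
  `--color-${name}: ${h} ${(s * 100).toFixed(1)}% ${(l * 100).toFixed(1)}%;`;

const light: Record<string, Hsl> = {
  background: [0, 0, 1],
  foreground: [240, 0.1, 0.039],
};

const css = Object.entries(light)
  .map(([name, hsl]) => toCssVar(name, hsl))
  .join("\n");

console.log(css);
// --color-background: 0 0.0% 100.0%;
// --color-foreground: 240 10.0% 3.9%;
```

The generated variables are then consumed by the Tailwind config shown above via `hsl(var(--color-...))`.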
To apply a theme to your app, you can use the `data-theme` attribute on your layout wrapper for each part of the extension:
```tsx title="components/layout/layout.tsx"
import { StorageKey, useStorage } from "~/lib/storage";

export const Layout = ({ children }: { children: React.ReactNode }) => {
  const { data } = useStorage(StorageKey.THEME);

  return (
    <div data-theme={data?.color}>
      {children}
    </div>
  );
};
```
In TurboStarter, we're using [Storage API](/docs/extension/structure/storage) to persist the user's theme selection and then apply it to the root `div` element.
## Dark mode
The starter kit comes with built-in dark mode support.
Each theme has corresponding dark mode variables which are used to change the theme to its dark mode counterpart.
```ts title="packages/ui/shared/src/styles/themes/colors/orange.ts"
export const orange = {
  light: {},
  dark: {
    background: [0, 0, 1],
    foreground: [240, 0.1, 0.039],
    card: [0, 0, 1],
    "card-foreground": [240, 0.1, 0.039],
    ...
  },
} satisfies ThemeColors;
```
As we define the `darkMode` as `class` in the shared Tailwind configuration, we need to add the `dark` class to the root wrapper element to apply the dark mode styles.
As with the theme, we're using the [Storage API](/docs/extension/structure/storage) to persist the user's dark mode selection and then apply the correct class name to the root `div` element:
```tsx title="components/layout/layout.tsx"
import { StorageKey, useStorage } from "~/lib/storage";

export const Layout = ({ children }: { children: React.ReactNode }) => {
  const { data } = useStorage(StorageKey.THEME);

  return (
    <div className={data?.mode === "dark" ? "dark" : ""}>
      {children}
    </div>
  );
};
```
We can define the default theme mode and color in [app configuration](/docs/extension/configuration/app).
file: ./src/content/docs/(core)/extension/installation/clone.mdx
meta: {
"title": "Cloning repository",
"description": "Get the code to your local machine and start developing your extension.",
"mirror": "../../web/installation/clone.mdx"
}
file: ./src/content/docs/(core)/extension/installation/commands.mdx
meta: {
"title": "Common commands",
"description": "Learn about common commands you need to know to work with the extension project.",
"mirror": "../../web/installation/commands.mdx"
}
file: ./src/content/docs/(core)/extension/installation/conventions.mdx
meta: {
"title": "Conventions",
"description": "Some standard conventions used across TurboStarter codebase.",
"mirror": "../../web/installation/conventions.mdx"
}
file: ./src/content/docs/(core)/extension/installation/dependencies.mdx
meta: {
"title": "Managing dependencies",
"description": "Learn how to manage dependencies in your project.",
"mirror": "../../web/installation/dependencies.mdx"
}
file: ./src/content/docs/(core)/extension/installation/development.mdx
meta: {
"title": "Development",
"description": "Get started with the code and develop your browser extension."
}
## Prerequisites
To get started with TurboStarter, ensure you have the following installed and set up:
* [Node.js](https://nodejs.org/en) (20.x or higher)
* [Docker](https://www.docker.com) (only if you want to use local database)
* [pnpm](https://pnpm.io)
## Project development
### Install dependencies
Install the project dependencies by running the following command:
```bash
pnpm i
```
[pnpm](https://pnpm.io) is a fast, disk-space-efficient package manager that uses hard links and symlinks to store each version of a module only once on disk. It also has great [monorepo support](https://pnpm.io/workspaces). Of course, you can switch to [Bun](https://bun.sh), [yarn](https://yarnpkg.com), or [npm](https://www.npmjs.com) with minimal effort.
### Setup environment variables
Create `.env.local` files from the `.env.example` files and fill in the required environment variables.
Check [Environment variables](/docs/extension/configuration/environment-variables) for more details on setting up environment variables.
### Start database
If you want to use a local database (**recommended for development purposes**), ensure Docker is running, then set up your database with:
```bash
pnpm db:setup
```
This command initiates the PostgreSQL container and runs migrations, ensuring your database is up to date and ready to use.
### Start development server
To start the application development server, run:
```bash
pnpm dev
```
Your development server should now be running.
WXT will create a dev bundle for your extension and a live-reloading development server, automatically updating your extension bundle on file changes and reloading your browser on source code changes.
It also makes the icon grayscale to distinguish between development and production extension bundles.
### Load the extension
Head over to `chrome://extensions` and enable **Developer Mode**.

Click on "Load Unpacked" and navigate to your extension's `apps/extension/build/chrome-mv3` directory.
To see your popup, click on the puzzle piece icon on the Chrome toolbar, and click on your extension.

Pin your extension to the Chrome toolbar for easy access by clicking the pin button.
Head over to `about:debugging` and click on "This Firefox".
Click on "Load Temporary Add-on" and navigate to your extension's `apps/extension/build/firefox-mv2` directory. Pick any file to load the extension.

The extension now installs, and remains installed until you restart Firefox.
To see your popup, click on your extension icon on the Firefox toolbar.

The loaded extension starts pinned on the Firefox toolbar. Don't remove it, so you can easily access it later.
You can easily configure your development server to automatically start the browser alongside the server. To do so, create a `web-ext.config.ts` file in the root of your extension and configure it with your browser [binaries](https://wxt.dev/guide/essentials/config/browser-startup.html#set-browser-binaries) and [arguments](https://wxt.dev/guide/essentials/config/browser-startup.html#persist-data).
Learn more in the [official documentation](https://wxt.dev/guide/essentials/config/browser-startup.html).
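As a rough sketch, such a file might look like this. The binary name and argument below are example values only; check the WXT documentation for the exact options available in your version:

```ts title="web-ext.config.ts"
import { defineWebExtConfig } from "wxt";

export default defineWebExtConfig({
  // Example binary; point this at the browser you want to auto-start.
  binaries: {
    firefox: "firefoxdeveloperedition",
  },
  // Example argument to persist the Chrome profile between runs.
  chromiumArgs: ["--user-data-dir=./.wxt/chrome-data"],
});
```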
### Publish to stores
When you're ready to publish the project to the stores, follow the [guidelines](/docs/extension/marketing) and [checklist](/docs/extension/publishing/checklist) to ensure everything is set up correctly.
file: ./src/content/docs/(core)/extension/installation/editor-setup.mdx
meta: {
"title": "Editor setup",
"description": "Learn how to set up your editor for the fastest development experience.",
"mirror": "../../web/installation/editor-setup.mdx"
}
file: ./src/content/docs/(core)/extension/installation/structure.mdx
meta: {
"title": "Project structure",
"description": "Learn about the project structure and how to navigate it."
}
The main directories in the project are:
* `apps` - the location of the main apps
* `packages` - the location of the shared code and the API
### `apps` Directory
This is where the apps live. It includes web app (Next.js), mobile app (React Native - Expo), and the browser extension (WXT - Vite + React). Each app has its own directory.
### `packages` Directory
This is where the shared code and the API for packages live. It includes the following:
* shared libraries (database, mailers, cms, billing, etc.)
* shared features (auth, mails, billing, AI, etc.)
* UI components (buttons, forms, modals, etc.)
All apps can use and reuse the API exported from the `packages` directory. This makes it easy to have one or many apps in the same codebase sharing the same code.
## Repository structure
By default, the monorepo contains the following apps and packages:
## Browser extension application structure
The browser extension application is located in the `apps/extension` folder. It contains the following folders:
file: ./src/content/docs/(core)/extension/installation/update.mdx
meta: {
"title": "Updating codebase",
"description": "Learn how to update your codebase to the latest version.",
"mirror": "../../web/installation/update.mdx"
}
file: ./src/content/docs/(core)/extension/publishing/checklist.mdx
meta: {
"title": "Checklist",
"description": "Let's publish your TurboStarter extension to stores!"
}
When you're ready to publish your TurboStarter extension to stores, follow this checklist.
This process may take a few hours and some trial and error, so buckle up: you're almost there!
## Create database instance
**Why is it necessary?**
A production-ready database instance is essential for storing your application's data securely and reliably in the cloud. [PostgreSQL](https://www.postgresql.org/) is the recommended database for TurboStarter due to its robustness, features, and wide support.
**How to do it?**
You have several options for hosting your PostgreSQL database:
* [Supabase](https://supabase.com/) - Provides a fully managed Postgres database with additional features
* [Vercel Postgres](https://vercel.com/storage/postgres) - Serverless SQL database optimized for Vercel deployments
* [Neon](https://neon.tech/) - Serverless Postgres with automatic scaling
* [Turso](https://turso.tech/) - Edge database built on libSQL with global replication
* [DigitalOcean](https://www.digitalocean.com/products/managed-databases) - Managed database clusters with automated failover
Choose a provider based on your needs for:
* Pricing and budget
* Geographic region availability
* Scaling requirements
* Additional features (backups, monitoring, etc.)
## Migrate database
**Why is it necessary?**
Pushing database migrations ensures that your database schema in the remote database instance is configured to match TurboStarter's requirements. This step is crucial for the application to function correctly.
**How to do it?**
You have two options for running the migration:
TurboStarter comes with a predefined GitHub Action to handle database migrations. You can find its definition in the `.github/workflows/publish-db.yml` file.
What you need to do is set your `DATABASE_URL` as a [secret for your GitHub repository](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions).
Then, you can run the workflow which will publish the database schema to your remote database instance.
[Check how to run GitHub Actions workflow.](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow)
You can also run your migrations locally, although this is not recommended for production.
To do so, set the `DATABASE_URL` environment variable to your database URL (that comes from your database provider) in the `.env.local` file and run the following command:
```bash
pnpm db:migrate
```
This command will run the migrations and apply them to your remote database.
[Learn more about database migrations.](/docs/web/database/migrations)
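For local migrations, the `.env.local` entry is simply the connection string you get from your database provider. The values below are placeholders, not real credentials:

```shell
# .env.local — replace with the connection string from your provider
DATABASE_URL="postgresql://user:password@host:5432/dbname"
```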
## Set up web backend API
**Why is it necessary?**
Setting up the backend is necessary to have a place to store your data and to have other features work properly (e.g., auth, billing).
**How to do it?**
Please refer to the [web deployment checklist](/docs/web/deployment/checklist) on how to set up and deploy the web app backend to production.
## Environment variables
**Why is it necessary?**
Setting the correct environment variables is essential for the extension to function correctly. These variables include API keys, database URLs, and other configuration details required for your extension to connect to various services.
**How to do it?**
Use our `.env.example` files to get the correct environment variables for your project. Then add them to your CI/CD provider as a [secret](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions).
## Build your app
**Why is it necessary?**
Building your extension is necessary to create a standalone extension bundle that can be published to the stores.
**How to do it?**
You have two options for building a bundle for your extension:
TurboStarter comes with a predefined GitHub Action to handle building your extension for submission. You can find its definition in the `.github/workflows/publish-extension.yml` file.
[Check how to run GitHub Actions workflow.](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow)
This will also save the `.zip` file as an [artifact](https://docs.github.com/en/actions/guides/storing-workflow-data-as-artifacts) of the workflow run, so you can download it from there and submit your extension to stores (if configured).
You can also run your build locally, although this is not recommended for production.
To do it, run the following command:
```bash
pnpm turbo build --filter=extension
```
This will build the extension and package it into a `.zip` file. You can find the output in the `build` folder.
## Submit to stores
**Why is it necessary?**
Submitting your extension to the stores makes it available to your users; it's the only way to get it in front of them.
**How to do it?**
We've prepared dedicated guides for each store that TurboStarter supports out-of-the-box, please refer to the following pages:
That's it! Your extension is now live and accessible to your users. Good job!
A few final recommendations before you wrap up:
* Optimize your store listing description, keywords, and other relevant information for the stores.
* Remove the placeholder content in the extension or replace it with your own.
* Update the favicon, scheme, store images, and logo with your own branding.
file: ./src/content/docs/(core)/extension/publishing/chrome.mdx
meta: {
"title": "Chrome Web Store",
"description": "Publish your extension to Google Chrome Web Store."
}
[Chrome Web Store](https://chromewebstore.google.com/) is the most popular store for browser extensions, as it makes them available in any Chromium-based browser, including Google Chrome, Edge, Brave, and many others.
To submit your extension to Chrome Web Store, you'll need to complete a few steps. Here, we'll go through them.
Make sure your extension follows the [guidelines](/docs/extension/marketing) and other requirements to increase your chances of getting approved.
## Developer account
Before you can publish items on the Chrome Web Store, you must register as a CWS developer and pay a one-time registration fee. You must provide a developer email when you create your developer account.
To register, just access the [developer console](https://chrome.google.com/webstore/devconsole). The first time you do this, the following registration screen will appear. First, agree to the developer agreement and policies, then pay the registration fee.

Once you pay the registration fee and agree to the terms, your account will be created, and you'll be able to proceed to fill out additional information about it.

There are a few fields that you'll need to fill in:
* **Publisher name**: Appears under the title of each of your extensions. If you are a verified publisher, you can display an official publisher URL instead.
* **Verified email**: Verifying your contact email address is required when you set up a new developer account. It's only displayed under your extensions' contact information. Any notifications will be sent to your Chrome Web Store developer account email.
* **Physical address**: Only items that offer functionality to purchase items, additional features, or subscriptions must include a physical address.
## Submission
After registering your developer account, setting it up, and preparing your extension, you're ready to publish it to the store.
You can submit your extension in two ways:
* **Manually**: By uploading your extension's bundle directly to the store.
* **Automatically**: By using GitHub Actions to submit your extension to the stores.
**The first submission must be done manually, while subsequent updates can be submitted automatically.** We'll go through both approaches.
### Manual submission
To manually submit your extension to stores, you will first need to get your extension bundle. If you ran the build step locally, you should already have the `.zip` file in your extension's `build` folder.
If you used GitHub Actions to build your extension, you can find the results in the workflow run. Download the artifacts and save them on your local machine.
Then, use the following steps to upload your item:
1. Go to the [Chrome Web Store Developer Dashboard](https://chrome.google.com/webstore/devconsole/).
2. Sign in to your developer account.
3. Click on the *Add new item* button.
4. Click *Choose file* > *your zip file* > *Upload*. If your item's manifest and other contents are valid, you will see a new item in the dashboard.

After you upload the bundle, you'll need to fill in the extension's details, such as the icons, privacy settings, permissions justification, and other information.
Please refer to the official guides on how to set up your extension's details.
### Automated submission
The first submission of your extension to Chrome Web Store must be done manually because you need to provide the store's credentials and extension ID to automation, which will be available only after the first bundle upload.
TurboStarter comes with a pre-configured GitHub Actions workflow to submit your extension to web stores automatically. It's located in the `.github/workflows/publish-extension.yml` file.
What you need to do is fill in the environment variables with your store's credentials and extension details, and set them as [secrets in your GitHub repository](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions) under the correct names:
```yaml title="publish-extension.yml"
env:
  CHROME_EXTENSION_ID: ${{ secrets.CHROME_EXTENSION_ID }}
  CHROME_CLIENT_ID: ${{ secrets.CHROME_CLIENT_ID }}
  CHROME_CLIENT_SECRET: ${{ secrets.CHROME_CLIENT_SECRET }}
  CHROME_REFRESH_TOKEN: ${{ secrets.CHROME_REFRESH_TOKEN }}
```
Please refer to the [official guide](https://github.com/PlasmoHQ/bms/blob/main/tokens.md#chrome-web-store-api) to learn how to get these credentials correctly.
That's it! You can [run the workflow](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow) and it will submit your extension to the Chrome Web Store.
This workflow will also try to submit your extension for review, but this isn't guaranteed to succeed. You need to have all the required information filled in on your extension's details page to make it possible.
Even then, when you introduce a **breaking change** (e.g., adding another permission), you'll need to update your extension's store metadata, and automatic submission won't be possible.
To opt out of this behavior (and only upload to the store automatically, without sending for review), add the `--chrome-skip-submit-review` flag to the `wxt submit` command in the `publish-extension.yml` file:
```yaml title="publish-extension.yml"
# [!code word:--chrome-skip-submit-review]
- name: Publish!
  run: |
    npx wxt submit \
      --chrome-zip apps/extension/build/*-chrome.zip --chrome-skip-submit-review
```
Then, your extension bundle will be uploaded to the store, but you will need to send it for review manually.
Check out the [official documentation](https://wxt.dev/api/cli/wxt-submit) for more customization options.
## Review
After filling out the information about your item, you are ready to send it for review. Click the *Submit for review* button and confirm that you want to submit your item in the following dialog:

The confirmation dialog shown above also lets you control the timing of your item's publishing. If you uncheck the checkbox, your item will **not** be published immediately after its review is complete. Instead, you'll be able to manually publish it at a time of your choosing once the review is complete.
After you submit the item for review, it will undergo a review process. The time for this review depends on the nature of your item. See [Understanding the review process](https://developer.chrome.com/docs/webstore/review-process) for more details.
There are important emails, like takedown or rejection notifications, that are enabled by default. To receive an email notification when your item is published or staged, you can enable notifications on the *Account page*.

The review status of your item appears in the [developer dashboard](https://chrome.google.com/webstore/devconsole) next to each item. The status can be one of the following:
* **Published**: Your item is available to all users.
* **Pending**: Your item is under review.
* **Rejected**: Your item was rejected by the store.
* **Taken Down**: Your item was taken down by the store.

You'll receive an email notification when the status of your item changes.
If your extension has been determined to violate one or more terms or policies, you will receive an email notification that contains the violation description and instructions on how to rectify it.
If you did not receive an email within a week, check the status of your item. If your item has been rejected, you can see the details on the *Status* tab of your item.

You'll need to fix the issues and upload a new version of your extension. Make sure to follow the [guidelines](/docs/extension/marketing) or check [publishing troubleshooting](/docs/extension/troubleshooting/publishing) for more info.
If you have been informed about a violation and you do not rectify it, your item will be taken down. See [Violation enforcement](https://developer.chrome.com/docs/webstore/review-process#enforcement) for more details.
You can learn more about the review process in the official guides listed below.
file: ./src/content/docs/(core)/extension/publishing/mozilla.mdx
meta: {
"title": "Mozilla Add-ons",
"description": "Publish your extension to Mozilla Add-ons."
}
Mozilla Firefox doesn't share extensions with [Google Chrome](/docs/extension/publishing/chrome), so you'll need to publish your extension to it separately.
Here, we'll go through the process of publishing an extension to [Mozilla Add-ons](https://addons.mozilla.org/).
Make sure your extension follows the [guidelines](/docs/extension/marketing) and other requirements to increase your chances of getting approved.
## Developer account
Before you can publish items on Mozilla Add-ons, you must register a developer account. Unlike the Chrome Web Store, Mozilla Add-ons doesn't charge a registration fee.
To register, go to [addons.mozilla.org](https://addons.mozilla.org/) and click on the *Register* button.

It's important to set at least a display name on your profile to increase transparency with users, add-on reviewers, and the greater community.
You can do this in the *Edit My Profile* section:

## Submission
After registering your developer account, setting it up, and preparing your extension, you're ready to publish it to the store.
You can submit your extension in two ways:
* **Manually**: By uploading your extension's bundle directly to the store.
* **Automatically**: By using GitHub Actions to submit your extension to the stores.
**The first submission must be done manually, while subsequent updates can be submitted automatically.** We'll go through both approaches.
### Manual submission
To manually submit your extension to stores, you will first need to get your extension bundle. If you ran the build step locally, you should already have the `.zip` file in your extension's `build` folder.
If you used GitHub Actions to build your extension, you can find the results in the workflow run. Download the artifacts and save them on your local machine.
Then, use the following steps to upload your item:
#### Sign in to your developer account
Go to the [Add-ons Developer Hub](https://addons.mozilla.org/developers/) and sign in to your developer account.
#### Choose distribution method
You should reach the following page:

Here, you have two ways of distributing your extension:
* **On this site**, if you want your add-on listed on AMO (addons.mozilla.org).
* **On your own**, if you plan to distribute the add-on yourself and don't want it listed on AMO.
We recommend going with the first option, as it will allow you to reach more users and get more feedback. If you decide to go with the second option, please refer to the [official documentation](https://extensionworkshop.com/documentation/publish/self-distribution/) for more details.
#### Submit your extension
On the next page, click on *Select file* and choose your extension's `.zip` bundle.

Once you upload the bundle, the validator checks the add-on for issues and the page updates:

If your add-on passes all the checks, you can proceed to the next step.
You may receive a message that you only have warnings. It's advisable to address these warnings, particularly those flagged as security or privacy issues, as they may result in your add-on failing review. However, **you can continue with the submission**.
If the validation fails, you'll need to address the issues and upload a new version of your add-on.
#### Submit source code (if needed)
You'll need to indicate whether you need to provide the source code of your extension:

If you select *Yes*, a section displays describing what you need to submit. Click *Browse* and locate and upload your source code package. See [Source code submission](https://extensionworkshop.com/documentation/publish/source-code-submission/) for more information.
#### Add metadata
On the next page, you'll need to provide the following additional information about your extension:

* **Name**: Your add-on's name.
* **Add-on URL**: The URL for your add-on on AMO. A URL is automatically assigned based on your add-on's name. To change this, click Edit. The URL must be unique. You will be warned if another add-on is using your chosen URL, and you must enter a different one.
* **Summary**: A useful and descriptive short summary of your add-on.
* **Description**: A longer description that provides users with details of the extension's features and functionality.
* **This add-on is experimental**: Indicate if your add-on is experimental or otherwise not ready for general use. The add-on will be listed but with reduced visibility. You can remove this flag when your add-on is ready for general use.
* **This add-on requires payment, non-free services or software, or additional hardware**: Indicate if your add-on requires users to make an additional purchase for it to work fully.
* **Select up to 2 Firefox categories for this add-on**: Select categories that describe your add-on.
* **Select up to 2 Firefox for Android categories for this add-on**: Select categories that describe your add-on.
* **Support email and Support website**: Provide an email address and website where users can get in touch when they have questions, issues, or compliments.
* **License**: Select the appropriate license for your add-on. Click Details to learn more about each license.
* **This add-on has a privacy policy**: If any data is being transmitted from the user's device, a privacy policy explaining what is being sent and how it's used is required. Check this box and provide the privacy policy.
* **Notes for Reviewers**: Provide information to assist the AMO reviewer, such as login details for a dummy account, source code information, or similar.
#### Finalize the process
Once you're ready, click on the *Submit Version* button.

You can still edit your add-on's details from the dedicated page after submission.
The first submission of your extension to Mozilla Add-ons must be done manually because you need to provide the store's credentials and extension ID to automation, which will be available only after the first bundle upload.
TurboStarter comes with a pre-configured GitHub Actions workflow to submit your extension to web stores automatically. It's located in the `.github/workflows/publish-extension.yml` file.
What you need to do is fill in the environment variables with your store's credentials and extension details, and set them as [secrets in your GitHub repository](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions) under the correct names:
```yaml title="publish-extension.yml"
env:
  FIREFOX_EXTENSION_ID: ${{ secrets.FIREFOX_EXTENSION_ID }}
  FIREFOX_JWT_ISSUER: ${{ secrets.FIREFOX_JWT_ISSUER }}
  FIREFOX_JWT_SECRET: ${{ secrets.FIREFOX_JWT_SECRET }}
```
Please refer to the [official guide](https://github.com/PlasmoHQ/bms/blob/main/tokens.md#firefox-add-ons-api) to learn how to get these credentials correctly.
That's it! You can [run the workflow](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow) and it will submit your extension to Mozilla Add-ons.
This workflow will also try to submit your extension for review, but this isn't guaranteed to succeed. You need to have all the required information filled in on your extension's details page to make it possible.
Even then, when you introduce some **breaking change** (e.g., add another permission), you'll need to update your extension store metadata and automatic submission won't be possible.
## Review
Once you submit your extension bundle, it's automatically sent to review and will undergo a review process. The time for this review depends on the nature of your item.
The add-on review process includes the following phases:
1. **Automatic Review**: Upon upload, the add-on undergoes several automatic validation steps to ensure its general safety.
2. **Content Review**: Shortly after submission, a human reviewer inspects the add-on to ensure that the listing adheres to content review guidelines, including metadata such as the add-on name and description.
3. **Technical Code Review**: The add-on's source code is examined to ensure compliance with review policies.
4. **Basic Functionality Testing**: After the source code is verified as safe, the add-on undergoes basic functionality testing to confirm it operates as described.
There are important emails like takedown or rejection notifications that are enabled by default. To receive an email notification when your item is published or staged, you can enable notifications in the *Account Settings*.

The review status of your item appears in the [developer hub](https://addons.mozilla.org/en-US/firefox/) next to each item.

You'll receive an email notification when the status of your item changes.
If your extension has been determined to violate one or more terms or policies, you will receive an email notification that contains the violation description and instructions on how to rectify it.
You can also check the reason behind the rejection on the *Status* page of your item.

You'll need to fix the issues and upload a new version of your extension. Make sure to follow the [guidelines](/docs/extension/marketing) or check [publishing troubleshooting](/docs/extension/troubleshooting/publishing) for more info.
You can learn more about the review process in the official guides listed below.
file: ./src/content/docs/(core)/extension/publishing/updates.mdx
meta: {
"title": "Updates",
"description": "Learn how to update your published extension."
}
When you publish your extension to the stores, you can update it later to ship new features and bug fixes to your users.
TurboStarter comes with a ready-to-use way to update your extension; we'll quickly go through it.
## Uploading new version
The best way to update your extension is to submit a new version to the stores. This is the most reliable way to update your extension, but it can take a while for the new version to be approved and available to the users.
To submit a new version, you just need to change the version number in the `package.json` file:
```json title="package.json"
{
  ...
  "version": "1.0.1",
  ...
}
```
Then, follow the exact same steps as [when you initially published your extension](/docs/extension/publishing/checklist) and, when you submit your extension for review, provide release notes for the new version.
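If you cut releases from CI, the bump itself is easy to script. A minimal sketch (the `bumpPatch` helper is hypothetical, not part of TurboStarter):

```typescript
// Hypothetical helper: bump the patch component of a semver string,
// e.g. before writing it back into package.json in a release script.
const bumpPatch = (version: string): string => {
  const [major, minor, patch] = version.split(".").map(Number);
  return `${major}.${minor}.${patch + 1}`;
};

bumpPatch("1.0.0"); // "1.0.1"
```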
file: ./src/content/docs/(core)/extension/structure/background.mdx
meta: {
"title": "Background service worker",
"description": "Configure your extension's background service worker."
}
An extension's service worker is a powerful script that runs in the background, separate from other parts of the extension. It's loaded when it is needed, and unloaded when it goes dormant.
Once loaded, an extension service worker generally runs as long as it is actively receiving events, though it [can shut down](https://developer.chrome.com/docs/extensions/develop/concepts/service-workers/lifecycle#idle-shutdown). Like its web counterpart, an extension service worker cannot access the DOM, though you can work around this if needed with [offscreen documents](https://developer.chrome.com/docs/extensions/reference/api/offscreen).
Extension service workers are more than network proxies (as web service workers are often described); they run in a separate [service worker context](https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers). For example, in this context you no longer need to worry about CORS and can fetch resources from any origin.
In addition to the [standard service worker events](https://developer.mozilla.org/docs/Web/API/ServiceWorkerGlobalScope#events), they also respond to extension events such as navigating to a new page, clicking a notification, or closing a tab. They're also registered and updated differently from web service workers.
**It's common to offload heavy computation to the background service worker**, so you should perform resource-expensive operations there and send the results to other parts of the extension using the [Messaging API](/docs/extension/structure/messaging).
Code for the background service worker lives in the `src/app/background` directory. You need to use `defineBackground` in the `index.ts` file inside it so that WXT includes your script in the build.
```ts title="src/app/background/index.ts"
import { defineBackground } from "wxt/sandbox";

const main = () => {
  console.log(
    "Background service worker is running! Edit `src/app/background` and save to reload.",
  );
};

export default defineBackground(main);
```
To see the service worker in action, reload the extension, then open its "Service Worker inspector":

You should see what we've logged in the console:

To communicate with the service worker from other parts of the extension, you can use the [Messaging API](/docs/extension/structure/messaging).
## Persisting state
Service workers in `dev` mode always remain in the `active` state.
In production, the worker becomes idle after a few seconds of inactivity, and the browser kills its process entirely after about 5 minutes. This means all state (variables, etc.) is lost unless you use a storage engine.
The simplest way to persist your background service worker's state is to use the [storage API](/docs/extension/structure/storage).
The more advanced way is to send the state to a remote database via our [backend API](/docs/extension/api/overview).
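Whichever engine you choose, the pattern is the same: re-read persisted state on startup and write through on every change. A framework-free sketch of that pattern, where the in-memory `storage` object is a stand-in for the real extension storage API, used here only so the example is self-contained:

```typescript
// In-memory stand-in for the extension storage API (illustration only).
const backing = new Map<string, unknown>();

const storage = {
  getItem: async <T>(key: string): Promise<T | undefined> =>
    backing.get(key) as T | undefined,
  setItem: async (key: string, value: unknown): Promise<void> => {
    backing.set(key, value);
  },
};

// Write-through counter: the value survives a service worker restart
// because every change is persisted and each read starts from storage,
// never from an in-memory variable.
const incrementVisits = async (): Promise<number> => {
  const visits = (await storage.getItem<number>("local:visits")) ?? 0;
  const next = visits + 1;
  await storage.setItem("local:visits", next);
  return next;
};
```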
file: ./src/content/docs/(core)/extension/structure/content-scripts.mdx
meta: {
"title": "Content scripts",
"description": "Learn more about content scripts."
}
Content scripts run in the context of web pages in an isolated world. This allows multiple content scripts from various extensions to coexist without conflicting with each other's execution and to stay isolated from the page's JavaScript.
A script that ends with `.ts` will not have a front-end runtime (e.g., React) bundled with it and won't be treated as a UI script, while a script that ends in `.tsx` will be.
There are many use cases for content scripts:
* Injecting a custom stylesheet into the page
* Scraping data from the current web page
* Selecting, finding, and styling elements from the current web page
* Injecting UI elements into the current web page
Code for the content scripts lives in the `src/app/content` directory. You need to define a `.ts` or `.tsx` file inside and use `defineContentScript` so that WXT includes your script in the build.
```ts title="src/app/content/index.ts"
import { defineContentScript } from "wxt/sandbox";

export default defineContentScript({
  matches: ["<all_urls>"],
  async main() {
    console.log(
      "Content script is running! Edit `app/content` and save to reload.",
    );
  },
});
```
```
Reload your extension, open a web page, then open its inspector:

To learn more about content scripts, e.g., how to load them only on specific pages, how to inject them into the `window` object, or how to fetch data inside them, please check [the official documentation](https://wxt.dev/guide/essentials/content-scripts.html).
## UI scripts
WXT has first-class support for mounting React components into the current webpage. This feature is called content scripts UI (CSUI).

An extension can have as many CSUIs as needed, with each one targeting a group of webpages or a specific webpage.
To get started with CSUI, create a `.tsx` file in the `src/app/content` directory and use `defineContentScript` to allow WXT to include your script in the build and mount your component into the current webpage:
```tsx title="src/app/content/index.tsx"
import ReactDOM from "react-dom/client";
import { createShadowRootUi } from "wxt/client";
import { defineContentScript } from "wxt/sandbox";

const ContentScriptUI = () => {
  return <div>Hello from the content script UI!</div>;
};

export default defineContentScript({
  matches: ["<all_urls>"],
  cssInjectionMode: "ui",
  async main(ctx) {
    const ui = await createShadowRootUi(ctx, {
      name: "turbostarter-extension",
      position: "overlay",
      anchor: "body",
      onMount: (container) => {
        const app = document.createElement("div");
        container.append(app);
        const root = ReactDOM.createRoot(app);
        root.render(<ContentScriptUI />);
        return root;
      },
      onRemove: (root) => {
        root?.unmount();
      },
    });
    ui.mount();
  },
});
```
The `.tsx` extension is essential to differentiate Content Scripts UI from regular content scripts. Make sure you're using the appropriate type of content script for your use case.
To learn more about content scripts UI, e.g. how to inject custom styles, fonts or the whole lifecycle of a component, please check [the official documentation](https://wxt.dev/guide/essentials/content-scripts.html#ui).
Under the hood, the component is wrapped in a container that implements the Shadow DOM technique, along with many helpful features. This isolation prevents the web page's styles from affecting your component's styling and vice versa.
[Read more about the lifecycle of CSUI](https://docs.plasmo.com/framework/content-scripts-ui/life-cycle)
file: ./src/content/docs/(core)/extension/structure/messaging.mdx
meta: {
"title": "Messaging",
"description": "Communicate between your extension's components."
}
The Messaging API makes communication between different parts of your extension easy. To keep it simple and scalable, we leverage the `@webext-core/messaging` library.
It provides a declarative, type-safe, functional, promise-based API for sending, relaying, and receiving messages between your extension components.
## Handling messages
Based on our convention, we implemented a small abstraction on top of `@webext-core/messaging` to make it easier to use. That's why all types and keys are stored inside the `lib/messaging` directory:
```ts title="lib/messaging/index.ts"
import { defineExtensionMessaging } from "@webext-core/messaging";
export const Message = {
HELLO: "hello",
} as const;
export type Message = (typeof Message)[keyof typeof Message];
interface Messages {
[Message.HELLO]: (message: string) => string;
}
export const { onMessage, sendMessage } = defineExtensionMessaging<Messages>();
```
Here you define what will be handled under each key. To make it more secure, only the `Message` enum and the `onMessage` and `sendMessage` functions are exported from the module.
All message handlers are located in `src/app/background/messaging` directory under respective subdirectories.
To create a message handler, create a TypeScript module in the `background/messaging` directory. Then, include your handlers for all keys related to the message:
```ts title="app/background/messaging/hello.ts"
import { onMessage, Message } from "~/lib/messaging";
onMessage(Message.HELLO, async (message) => {
  const result = await querySomeApi(message.data);
  return result;
});
```
To make your handlers available across your extension, you need to import them
in the `background/index.ts` file. That way they could be interpreted by the
build process facilitated by WXT.
## Sending messages
Extension pages, content scripts, or tab pages can send messages to the handlers using the `sendMessage` function. Since we orchestrate your handlers behind the scenes, the message names are typed and will enable autocompletion in your editor:
```tsx title="app/popup/index.tsx"
import { sendMessage, Message } from "~/lib/messaging";
// ...
const response = await sendMessage(Message.HELLO, "Hello, world!");
console.log(response);
// ...
```
As it's an asynchronous operation, it's advisable to use [@tanstack/react-query](https://tanstack.com/query/latest/docs/framework/react/overview) integration to handle the response on the client side.
We're already doing it that way when fetching auth session in the `User` component:
```tsx title="hello.tsx"
import { useQuery } from "@tanstack/react-query";

import { Message, sendMessage } from "~/lib/messaging";

export const Hello = () => {
  const { data, isLoading } = useQuery({
    queryKey: [Message.HELLO],
    queryFn: () => sendMessage(Message.HELLO, "Hello, world!"),
  });

  if (isLoading) {
    return <div>Loading...</div>;
  }

  /* do something with the data... */
  return <div>{data}</div>;
};
```
file: ./src/content/docs/(core)/extension/structure/overview.mdx
meta: {
"title": "Overview",
"description": "Learn about the structure of the extension app.",
"index": true
}
Every browser extension is different: it can include some parts and omit the ones that are not needed.
TurboStarter ships with all the things you need to start developing your own extension including:
* **Popup window** - a small window that appears when the user clicks the extension icon.
* **Options page** - a page that appears when user enters extension settings.
* **Side panel** - a panel that appears when the user opens the browser's side panel.
* **New tab page** - a page that appears when the user opens a new tab.
* **Devtools page** - a page that appears when the user opens the browser's devtools.
* **Tab pages** - custom pages shipped with the extension.
* **Content scripts** - injected scripts that run in the browser page.
* **Background scripts** - scripts that run in the background.
* **Message passing** - a way to communicate between different parts of the extension.
* **Storage** - a way to store data in the extension.
All the entrypoints are defined in the `apps/extension/src/app` directory (it's similar to file-based routing in Next.js and Expo).
This directory acts as a source for the WXT framework, which is used to build the extension. It has the following structure:
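As a rough sketch (the exact files may differ between versions), the `src/app` directory maps one folder per entrypoint listed above:

```
src/app/
├── popup/        # popup window
├── options/      # options page
├── sidepanel/    # side panel
├── newtab/       # new tab page
├── devtools/     # devtools page
├── tabs/         # custom tab pages
├── content/      # content scripts
└── background/   # background scripts & message handlers
```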
By structuring it this way, we can easily add new entrypoints in the future and extend the rest of the extension independently.
We'll go through each part and explain its purpose - check the following sections for more details:
file: ./src/content/docs/(core)/extension/structure/pages.mdx
meta: {
"title": "Pages",
"description": "Get started with your extension's pages."
}
Extension pages are built-in pages recognized by the browser. They include the extension's popup, options, sidepanel and newtab pages.
As WXT is based on Vite, it has very powerful [HMR support](https://vite.dev/guide/features#hot-module-replacement). This means that you don't need to refresh the extension manually when you make changes to the code.
## Popup
The popup page is a small dialog window that opens when a user clicks on the extension's icon in the browser toolbar. It is the most common type of extension page.

## Options
The options page is meant to be a dedicated place for the extension's settings and configuration.

## Devtools
The devtools page is a custom page (including panels) that opens when a user opens the extension's devtools panel.

## New tab
The new tab page is a custom page that opens when a user opens a new tab in the browser.

## Side panel
The side panel is a custom page that opens in the browser's side panel, alongside the main content of the current page.

## Tabs
Unlike traditional extension pages, tab (unlisted) pages are just regular web pages shipped with your extension bundle. Extensions generally redirect to or open these pages programmatically, but you can link to them as well.
They can be useful in the following cases:
* when you want to show a page when the user first installs your extension
* when you want to have dedicated pages for authentication
* when you need more advanced routing setup

Your tab page will be available under the `/tabs` path in the extension bundle. It will be accessible from the browser under the URL:
```
chrome-extension://<extension-id>/tabs/your-tab-page.html
```
file: ./src/content/docs/(core)/extension/structure/storage.mdx
meta: {
"title": "Storage",
"description": "Learn how to store data in your extension."
}
TurboStarter leverages the `wxt/storage` library to handle persistent storage for your extension. It's a utility library that abstracts the persistent storage API available to browser extensions.
It falls back to localStorage when the extension storage API is unavailable, allowing for state sync between extension pages, content scripts, background service workers and web pages.
To use the `wxt/storage` API, the "storage" permission **must** be added to the manifest:
```ts title="wxt.config.ts"
export default defineConfig({
manifest: {
permissions: ["storage"],
},
});
```
## Storing data
The base Storage API is designed to be easy to use. It is usable in every extension runtime such as background service workers, content scripts and extension pages.
TurboStarter ships with predefined storage used to handle [theming](/docs/extension/customization/styling) in your extension, but you can create your own storage as well.
All storage-related methods and types are located in `lib/storage` directory.
```ts title="lib/storage/index.ts"
export const StorageKey = {
THEME: "local:theme",
} as const;
export type StorageKey = (typeof StorageKey)[keyof typeof StorageKey];
```
Then, to make it available throughout your extension, we set it up and provide default values:
```ts title="lib/storage/index.ts"
import { storage as browserStorage } from "wxt/storage";
import { appConfig } from "~/config/app";
import type { ThemeConfig } from "@turbostarter/ui";
const storage = {
[StorageKey.THEME]: browserStorage.defineItem(StorageKey.THEME, {
fallback: appConfig.theme,
}),
} as const;
```
To learn more about customizing your storage, syncing state, or setting up automatic backups, please refer to the [official documentation](https://wxt.dev/storage.html).
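To build intuition for the fallback behavior described above, here is a simplified in-memory sketch of a `defineItem`-style API (a stand-in under assumed types, not the actual `wxt/storage` implementation):

```ts
// Simplified sketch: an item-based storage API with a fallback value.
type StorageItem<T> = { get(): Promise<T>; set(value: T): Promise<void> };

// In-memory stand-in for the underlying storage backend in this sketch
const backend = new Map<string, string>();

function defineItem<T>(key: string, opts: { fallback: T }): StorageItem<T> {
  return {
    async get() {
      const raw = backend.get(key);
      // Return the configured fallback when nothing has been stored yet
      return raw === undefined ? opts.fallback : (JSON.parse(raw) as T);
    },
    async set(value: T) {
      backend.set(key, JSON.stringify(value));
    },
  };
}
```

The real library swaps the in-memory map for the extension storage API (or `localStorage` when that's unavailable), but the fallback contract is the same: reads never return `undefined` once a fallback is defined.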
## Consuming storage
To consume storage in your extension, you can use the `useStorage` React hook that is automatically provided to every part of the extension. The hook API is designed to streamline the state-syncing workflow between the different pieces of an extension.
Here is an example of how to consume our theme storage in the `Layout` component:
```tsx title="components/layout/layout.tsx"
import { StorageKey, useStorage } from "~/lib/storage";
export const Layout = ({ children }: { children: React.ReactNode }) => {
  const { data: theme } = useStorage(StorageKey.THEME);

  /* use `theme` to style the wrapper, e.g. via a class name */
  return <div>{children}</div>;
};
```
Congrats! You've just learned how to persist and consume global data in your extension!
For more advanced use cases, please refer to the [official documentation](https://wxt.dev/storage.html).
### Usage with Firefox
To use the storage API on Firefox during development, you need to add an addon ID to your manifest; otherwise, you will get this error:
> Error: The storage API will not work with a temporary addon ID. Please add an explicit addon ID to your manifest. For more information see [https://mzl.la/3lPk1aE](https://mzl.la/3lPk1aE)
To add an addon ID to your manifest, add this to your `wxt.config.ts`:
```ts title="wxt.config.ts"
export default defineConfig({
manifest: {
browser_specific_settings: {
gecko: {
id: "your-id@example.com",
},
},
},
});
```
During development, you may use any ID. If you have published your extension, you need to use the ID assigned by [Mozilla Addons](https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons).
file: ./src/content/docs/(core)/extension/troubleshooting/installation.mdx
meta: {
"title": "Installation",
"description": "Find answers to common extension installation issues."
}
## Cannot clone the repository
Issues related to cloning the repository are usually related to a Git misconfiguration on your local machine. The commands displayed in this guide use SSH: these will work only if you have set up your SSH keys in GitHub.
If you run into issues, [please make sure you follow this guide to set up your SSH key in Github.](https://docs.github.com/en/authentication/connecting-to-github-with-ssh)
If this also fails, please use HTTPS instead. You will be able to see the commands in the repository's Github page under the "Clone" dropdown.
Please also make sure that the account that accepted the invite to TurboStarter, and the locally connected account are the same.
## Local database doesn't start
If you cannot run the local database container, it's likely you have not started [Docker](https://docs.docker.com/get-docker/) locally. Our local database requires Docker to be installed and running.
Please make sure you have installed Docker (or compatible software such as [Colima](https://github.com/abiosoft/colima) or [Orbstack](https://github.com/orbstack/orbstack)) and that it is running on your local machine.
Also, make sure that you have enough [memory and CPU allocated](https://docs.docker.com/engine/containers/resource_constraints/) to your Docker instance.
## Permissions issues
If some feature of your extension is not working, it's possible that you're missing a permission in the manifest config.
Make sure to check the [permissions](/docs/extension/configuration/manifest#overriding-manifest) section in the manifest config file.
## I don't see my translations
If you don't see your translations appearing in the application, there are a few common causes:
1. Check that your translation `.json` files are properly formatted and located in the correct directory
2. Verify that the language codes in your configuration match your translation files
3. Enable debug mode (`debug: true`) in your i18next configuration to see detailed logs
[Read more about configuration for translations](/docs/extension/internationalization#configuration)
## "Module not found" error
This issue is mostly related to either a dependency installed in the wrong package or issues with the file system.
The most common cause is incorrect dependency installation. Here's how to fix it:
1. Clean the workspace:
```bash
pnpm clean
```
2. Reinstall the dependencies:
```bash
pnpm i
```
If you're adding new dependencies, make sure to install them in the correct package:
```bash
# For main app dependencies
pnpm add --filter mobile my-package
# For a specific package
pnpm add --filter @turbostarter/ui my-package
```
If the issue persists, please check the file system for any issues.
### Windows OneDrive
OneDrive can cause file system issues with Node.js projects due to its file syncing behavior. If you're using Windows with OneDrive, you have two options to resolve this:
1. Move your project to a location outside of OneDrive-synced folders (recommended)
2. Disable OneDrive sync specifically for your development folder
This prevents file watching and symlink issues that can occur when OneDrive tries to sync Node.js project files.
file: ./src/content/docs/(core)/extension/troubleshooting/publishing.mdx
meta: {
"title": "Publishing",
"description": "Find answers to common publishing issues."
}
## My extension submission was rejected
If your extension submission was rejected, you probably got an email with the reason. You'll need to fix the issues and upload a new build of your extension to the store and send it for review again.
Make sure to follow the [guidelines](/docs/extension/marketing) when submitting your extension to ensure that everything is set up correctly.
## Version number mismatch
If you get version number conflicts when submitting:
1. Ensure your `manifest.json` version matches what's in the store
2. Increment the version number appropriately for each new submission
3. Make sure the version follows semantic versioning (e.g., `1.0.1`)
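A patch-level bump can be computed mechanically; this small helper (illustrative only, not part of TurboStarter) shows the idea:

```ts
// Illustrative helper: bump the patch component of a semantic version
// string like "1.0.1" before a new store submission.
function bumpPatch(version: string): string {
  const [major, minor, patch] = version.split(".").map(Number);
  return `${major}.${minor}.${patch + 1}`;
}
```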
## Missing permissions in manifest
If your extension is rejected due to permission issues:
1. Review the permissions declared in your `manifest.json`
2. Ensure all permissions are properly justified in your submission
3. Remove any unused permissions that aren't essential
4. Consider using optional permissions where possible
[Learn more about permissions](/docs/extension/configuration/manifest#permissions)
## Content Security Policy (CSP) violations
If your extension is rejected due to CSP issues:
1. Check your manifest's `content_security_policy` field
2. Ensure all external resources are properly whitelisted
3. Remove any unsafe inline scripts or eval usage
4. Use more secure alternatives like `browser.scripting.executeScript`
file: ./src/content/docs/(core)/mobile/analytics/configuration.mdx
meta: {
"title": "Configuration",
"description": "Learn how to configure mobile analytics in TurboStarter."
}
The `@turbostarter/analytics-mobile` package offers a streamlined and flexible approach to tracking events in your TurboStarter mobile app using various analytics providers. It abstracts the complexities of different analytics services and provides a consistent interface for event tracking.
In this section, we'll guide you through the configuration process for each supported provider.
Note that the configuration is validated against a schema, so you'll see error messages in the console if anything is misconfigured.
## Permissions
First and foremost, to start tracking any metrics from your app (and to do so legally), you need to ask your users for permission. It's [required](https://support.apple.com/en-us/102420), and you're not allowed to collect any data without it.
To make this process as simple as possible, TurboStarter comes with a `useTrackingPermissions` hook that you can use to access the user's consent status. It will handle asking for permission automatically as well as process updates made through the general phone settings.
```tsx
import { useTrackingPermissions } from "@turbostarter/analytics-mobile";
export const MyComponent = () => {
const granted = useTrackingPermissions();
if (granted) {
// Start tracking
} else {
// Disable tracking
}
};
```
Also, for Apple, you must declare the tracking justification via [App Tracking Transparency](https://developer.apple.com/documentation/apptrackingtransparency). It comes pre-configured in TurboStarter via the [Expo Config Plugin](https://docs.expo.dev/versions/latest/config/app/#plugins), where you can provide a custom message to the user:
```tsx title="app.config.ts"
export default ({ config }: ConfigContext): ExpoConfig => ({
...config,
plugins: [
[
"expo-tracking-transparency",
{
/* Describe why you need access to the user's data */
userTrackingPermission:
"This identifier will be used to deliver personalized ads to you.",
},
],
],
});
```
This way, we ensure that the user is aware of the data we collect and can make an informed decision. If you don't provide this information, your app is likely to be rejected by Apple and/or Google during the [review process](/docs/mobile/publishing/checklist#send-to-review).
## Providers
TurboStarter supports multiple analytics providers, each with its own unique configuration. Below, you'll find detailed information on how to set up and use each supported provider. Choose the one that best suits your needs and follow the instructions in the respective accordion section.
To use Google Analytics as your analytics provider, you need to [configure and link a Firebase project to your app](/docs/mobile/installation/firebase).
After that, you can proceed with the installation of the analytics package:
```bash
pnpm add --filter @turbostarter/analytics-mobile @react-native-firebase/analytics
```
Also, make sure to set Google Analytics as your analytics provider:
```dotenv
EXPO_PUBLIC_ANALYTICS_PROVIDER="google-analytics"
```
To customize the provider, you can find its definition in `packages/analytics/mobile/src/providers/google-analytics` directory.
For more information, please refer to the [React Native Firebase documentation](https://rnfirebase.io/analytics/usage).

To use PostHog as your analytics provider, you need to configure a PostHog instance. You can use the [Cloud](https://app.posthog.com/signup) instance by [creating an account](https://app.posthog.com/signup) or [self-host](https://posthog.com/docs/self-host) it.
Then, create a project and, based on your [project settings](https://app.posthog.com/project/settings), fill the following environment variables in your `.env.local` file in `apps/mobile` directory and your `eas.json` file:
```dotenv
EXPO_PUBLIC_POSTHOG_KEY="your-posthog-api-key"
EXPO_PUBLIC_POSTHOG_HOST="your-posthog-instance-host"
```
Also, make sure to set PostHog as your analytics provider:
```dotenv
EXPO_PUBLIC_ANALYTICS_PROVIDER="posthog"
```
To customize the provider, you can find its definition in `packages/analytics/mobile/src/providers/posthog` directory.
For more information, please refer to the [PostHog documentation](https://posthog.com/docs).

## Context
To enable tracking events, capturing screen views and other analytics features, you need to wrap your app with the `Provider` component that's implemented by every provider and available through the `@turbostarter/analytics-mobile` package:
```tsx title="providers.tsx"
// [!code word:AnalyticsProvider]
import { memo } from "react";
import { Provider as AnalyticsProvider } from "@turbostarter/analytics-mobile";
interface ProvidersProps {
readonly children: React.ReactNode;
}
export const Providers = memo(({ children }: ProvidersProps) => {
  return <AnalyticsProvider>{children}</AnalyticsProvider>;
});
Providers.displayName = "Providers";
```
By implementing this setup, you ensure that all analytics events are properly tracked from your mobile app code. This configuration allows you to safely utilize the [Analytics API](/docs/mobile/analytics/tracking) within your components, enabling comprehensive event tracking and data collection.
file: ./src/content/docs/(core)/mobile/analytics/overview.mdx
meta: {
"title": "Overview",
"description": "Get started with mobile analytics in TurboStarter.",
"index": true
}
When it comes to mobile app analytics, we can distinguish between two types:
* **Store listing analytics**: Used to track the performance of your mobile app's store listing (e.g., how many people have viewed your app in the store or how many have installed it).
* **In-app analytics**: Tracks user actions within your mobile app (e.g., how many users entered a specific screen, how many users clicked on a specific button, etc.).
The `@turbostarter/analytics-mobile` package provides a set of tools to easily implement both types of analytics in your mobile app.
## Store listing analytics
Interpreting your mobile app's store listing metrics can help you evaluate how changes to your app and store listing affect conversion rates. For example, you can identify keywords that users are searching for to optimize your app's store listing.
While each store implements a different set of metrics, there are some common ones you should be aware of:
* **Downloads**: The total number of times your app was downloaded, including both first-time downloads and re-downloads.
* **Sales**: The total number of pre-orders, first-time app downloads, in-app purchases, and their associated sales.
* **Usage**: A variety of user engagement metrics, such as installations, sessions, crashes, and active devices.
To learn more about these or other metrics (e.g., how to create custom reports or KPIs), please refer to the official documentation of the store you're publishing to:
## In-app analytics
TurboStarter comes with built-in analytics support for multiple providers as well as a unified API for tracking events. This API enables you to easily and consistently track user behavior and app usage across your mobile application.
To learn more about each provider and how to configure them, see their respective sections:
All configuration and setup is built-in with a unified API, allowing you to switch between providers without changing your code. You can even introduce your own provider without breaking any tracking-related logic.
In the following sections, we'll cover how to set up each provider and how to track events in your application.
file: ./src/content/docs/(core)/mobile/analytics/tracking.mdx
meta: {
"title": "Tracking events",
"description": "Learn how to track events in your TurboStarter mobile app."
}
The strategy for tracking events that every provider has to implement is extremely simple:
```ts
export type AllowedPropertyValues = string | number | boolean;
type TrackFunction = (
event: string,
data?: Record<string, AllowedPropertyValues>,
) => void;
export interface AnalyticsProviderStrategy {
Provider: ({ children }: { children: React.ReactNode }) => React.ReactNode;
track: TrackFunction;
}
```
You don't need to worry much about this implementation, as all the providers are already configured for you. However, it's useful to be aware of this structure if you plan to add your own custom provider.
As shown above, each provider must supply two key elements:
1. `Provider` - a component that [wraps your app](/docs/mobile/analytics/configuration#context).
2. `track` - a function responsible for sending event data to the provider.
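As an illustration, a custom provider's `track` function could simply buffer events before forwarding them to an SDK. This sketch (the buffering and names are assumptions, not TurboStarter code) shows the shape such an implementation takes, omitting the React `Provider` part:

```ts
// Stand-in sketch of a custom provider's `track`: a real provider would
// forward each event to its analytics SDK instead of buffering it.
type AllowedPropertyValues = string | number | boolean;

type TrackedEvent = {
  event: string;
  data?: Record<string, AllowedPropertyValues>;
};

const events: TrackedEvent[] = [];

const track = (
  event: string,
  data?: Record<string, AllowedPropertyValues>,
): void => {
  events.push({ event, data });
};
```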
To track an event, you simply need to invoke the `track` method, passing the event name and an optional data object:
```tsx
import { Pressable, Text } from "react-native";

import { track } from "@turbostarter/analytics-mobile";

export const MyComponent = () => {
  return (
    /* Pressable is used here for illustration; any touchable works */
    <Pressable onPress={() => track("button.click", { country: "US" })}>
      <Text>Track event</Text>
    </Pressable>
  );
};
```
Congratulations! You've now mastered event tracking in your TurboStarter mobile app. With this knowledge, you're well-equipped to analyze user behaviors and gain valuable insights into your application's usage patterns. Happy analyzing!
file: ./src/content/docs/(core)/mobile/api/client.mdx
meta: {
"title": "Using API client",
"description": "How to use API client to interact with the API."
}
In mobile app code, you can only access the API client from the **client-side.**
When you create a new component or screen and want to fetch some data, you can use the API client to do so.
## Creating a client
We're creating a client-side API client in `apps/mobile/src/lib/api/index.tsx` file. It's a simple wrapper around the [@tanstack/react-query](https://tanstack.com/query/latest/docs/framework/react/overview) that fetches or mutates data from the API.
It also requires wrapping your app in an `ApiProvider` component to provide the API client to the rest of the app:
```tsx title="_layout.tsx"
import { ApiProvider } from "~/lib/api";

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <ApiProvider>
      {/* ... */}
      {children}
      {/* ... */}
    </ApiProvider>
  );
}
```
Inside the `apps/mobile/src/lib/api/utils.ts` file, we call a function to get the base URL of your API. Make sure it's set correctly (especially in production) and that it corresponds with your web API endpoint:
```tsx title="utils.ts"
const getBaseUrl = () => {
/**
* Gets the IP address of your host-machine. If it cannot automatically find it,
* you'll have to manually set it. NOTE: Port 3000 should work for most but confirm
* you don't have anything else running on it, or you'd have to change it.
*
* **NOTE**: This is only for development. In production, you'll want to set the
* baseUrl to your production API URL.
*/
const debuggerHost = Constants.expoConfig?.hostUri;
const localhost = debuggerHost?.split(":")[0];
if (!localhost) {
console.warn("Failed to get localhost. Pointing to production server...");
return env.EXPO_PUBLIC_SITE_URL;
}
return `http://${localhost}:3000`;
};
```
As you can see, we rely on your machine's IP address for local development (in case you want to open the app from another device) or on the [environment variables](/docs/mobile/configuration/environment-variables) in production, so there shouldn't be any issues with it - but just in case, be aware of where to find it.
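For instance, given a typical `hostUri` value, the derivation looks like this (the address is an example, not a value TurboStarter ships):

```ts
// Illustrative only: deriving the dev base URL from an example hostUri
const hostUri = "192.168.1.10:8081"; // e.g. what Constants.expoConfig?.hostUri returns
const localhost = hostUri.split(":")[0]; // strip the Metro bundler port
const baseUrl = `http://${localhost}:3000`; // point at the local web API instead
```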
## Queries
Of course, everything comes already configured for you, so you just need to start using `api` in your components/screens.
For example, to fetch the list of posts you can use the `useQuery` hook:
```tsx title="app/(tabs)/tab-one.tsx"
import { useQuery } from "@tanstack/react-query";
import { Text } from "react-native";

import { api } from "~/lib/api";

export default function TabOneScreen() {
  const { data: posts, isLoading } = useQuery({
    queryKey: ["posts"],
    queryFn: async () => {
      const response = await api.posts.$get();

      if (!response.ok) {
        throw new Error("Failed to fetch posts!");
      }

      return response.json();
    },
  });

  if (isLoading) {
    return <Text>Loading...</Text>;
  }

  /* do something with the data... */
  return <Text>{JSON.stringify(posts)}</Text>;
}
```
It's using the `@tanstack/react-query` [useQuery API](https://tanstack.com/query/latest/docs/framework/react/reference/useQuery), so you shouldn't have any troubles with it.
## Mutations
If you want to perform a mutation in your mobile code, you can use the `useMutation` hook that comes straight from the integration with [Tanstack Query](https://tanstack.com/query):
```tsx title="form.tsx"
import { useMutation, useQueryClient } from "@tanstack/react-query";

import { api } from "~/lib/api";

export function CreatePost() {
  const queryClient = useQueryClient();

  const { mutate } = useMutation({
    mutationFn: async (post: PostInput) => {
      const response = await api.posts.$post(post);

      if (!response.ok) {
        throw new Error("Failed to create post!");
      }

      return response.json();
    },
    onSuccess: () => {
      toast.success("Post created successfully!");
      queryClient.invalidateQueries({ queryKey: ["posts"] });
    },
  });

  /* render your form here and call `mutate` on submit */
  return null;
}
```
Here, we're also invalidating the query after the mutation is successful. This is a very important step to make sure that the data is updated in the UI.
## Handling responses
As you can see in the examples above, the [Hono RPC](https://hono.dev/docs/guides/rpc) client returns a plain `Response` object, which you can use to get the data or handle errors. However, implementing this handling in every query or mutation can be tedious and will introduce unnecessary boilerplate in your codebase.
That's why we've developed the `handle` function that unwraps the response for you, handles errors, and returns the data in a consistent format. You can safely use it with any procedure from the API client:
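To illustrate, such a wrapper can be sketched in a few lines (a rough sketch under assumed types; the real implementation in `@turbostarter/api/utils` may differ):

```ts
// Rough sketch of a `handle`-style response unwrapper.
type ResponseLike<T> = { ok: boolean; json: () => Promise<T> };

const handle =
  <T>(procedure: () => Promise<ResponseLike<T>>) =>
  async (): Promise<T> => {
    const response = await procedure();

    // Throw a consistent error instead of forcing every caller to check `ok`
    if (!response.ok) {
      throw new Error("Request failed");
    }

    return response.json();
  };
```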
```tsx
// [!code word:handle]
import { handle } from "@turbostarter/api/utils";
import { api } from "~/lib/api";
export default function TabOneScreen() {
  const { data: posts, isLoading } = useQuery({
    queryKey: ["posts"],
    queryFn: handle(api.posts.$get),
  });

  if (isLoading) {
    return <Text>Loading...</Text>;
  }

  /* do something with the data... */
  return <Text>{JSON.stringify(posts)}</Text>;
}
```
```tsx
// [!code word:handle]
import { handle } from "@turbostarter/api/utils";
import { api } from "~/lib/api/client";
export default function CreatePost() {
  const queryClient = useQueryClient();

  const { mutate } = useMutation({
    mutationFn: handle(api.posts.$post),
    onSuccess: () => {
      toast.success("Post created successfully!");
      queryClient.invalidateQueries({ queryKey: ["posts"] });
    },
  });

  /* render your form here and call `mutate` on submit */
  return null;
}
```
With this approach, you can focus on the business logic instead of repeatedly writing code to handle API responses in your mobile app components, making your app's codebase more readable and maintainable.
The same error handling and response unwrapping benefits apply whether you're building web, mobile, or extension interfaces - allowing you to keep your data fetching logic consistent across all platforms.
file: ./src/content/docs/(core)/mobile/api/overview.mdx
meta: {
"title": "Overview",
"description": "Get started with the API.",
"index": true
}
To enable communication between your Expo app and the server in a production environment, the web application with Hono API must be deployed first.
TurboStarter is designed to be a scalable and production-ready full-stack starter kit. One of its core features is a dedicated and extensible API layer. To enable this in a type-safe manner, we chose [Hono](https://hono.dev) as the API server and client library.
Hono is a small, simple, and ultrafast web framework that gives you a way to
define your API endpoints with full type safety. It provides built-in
middleware for common needs like validation, caching, and CORS. It also
includes an [RPC client](https://hono.dev/docs/guides/rpc) for making
type-safe function calls from the frontend. Being edge-first, it's optimized
for serverless environments and offers excellent performance.
All API endpoints and their resolvers are defined in the `packages/api/` package. Here you will find a `modules` folder that contains the different feature modules of the API. Each module has its own folder and exports all its resolvers.
For each module, we create a separate Hono route in the `packages/api/index.ts` file and aggregate all sub-routers into one main router.
The API is then exposed as a route handler that will be provided as a Next.js API route:
```ts title="apps/web/src/app/api/[...route]/route.ts"
import { handle } from "hono/vercel";
import { appRouter } from "@turbostarter/api";
const handler = handle(appRouter);
export { handler as GET, handler as POST };
```
Learn more about how to use the API in your mobile app in the following sections:
file: ./src/content/docs/(core)/mobile/auth/configuration.mdx
meta: {
"title": "Configuration",
"description": "Configure authentication for your application."
}
TurboStarter supports four different authentication methods:
* **Password** - the traditional email/password method
* **Magic Link** - passwordless email link authentication
* **Anonymous** - guest mode for unauthenticated users
* **OAuth** - OAuth providers; Google and GitHub are set up by default
All authentication methods are enabled by default, but you can easily customize them to your needs. You can enable or disable any method, and configure them according to your requirements.
Remember that you can mix and match these methods or add new ones - for
example, you can have both password and magic link authentication enabled at
the same time, giving your users more flexibility in how they authenticate.
Authentication configuration can be customized through a simple configuration file. The following sections explain the available options and how to configure each authentication method based on your requirements.
## API
To enable a new authentication method or add a plugin, you'd need to update the API configuration. Please refer to the [web authentication configuration](/docs/web/auth/configuration) for more information, as it's not strictly related to the mobile app configuration.
For mobile apps, we need to define an [authentication trusted origin](https://www.better-auth.com/docs/reference/security#trusted-origins) using a mobile app scheme instead.
App schemes (like `turbostarter://`) are used for [deep linking](https://docs.expo.dev/guides/linking/) users to specific screens in your app after authentication.
To find your app scheme, take a look at the `apps/mobile/app.config.ts` file, then add it to your auth server configuration:
```tsx title="server.ts"
export const auth = betterAuth({
...
trustedOrigins: ["turbostarter://**"],
...
});
```
Adding your app scheme to the trusted origins list is crucial for security - it prevents CSRF attacks and blocks malicious open redirects by ensuring only requests from approved origins (your app) are allowed through.
[Read more about auth security in Better Auth's documentation.](https://www.better-auth.com/docs/reference/security)
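To see why the trailing wildcard matters, here's a simplified sketch of how a pattern like `turbostarter://**` could be matched against a request origin (illustrative only - not Better Auth's actual matching code):

```typescript
// A trailing "**" accepts any path under the given scheme;
// otherwise the origin must match the pattern exactly.
function matchesTrustedOrigin(origin: string, pattern: string): boolean {
  if (pattern.endsWith("**")) {
    return origin.startsWith(pattern.slice(0, -2));
  }
  return origin === pattern;
}
```

With this rule, a deep link like `turbostarter://auth/callback` is accepted, while a request from an unknown scheme is rejected.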
## UI
We have a separate configuration that determines what is displayed to your users in the **UI**. It's set at `apps/mobile/config/auth.ts`.
The recommendation is to **not update this directly** - instead, please define the environment variables and override the default behavior.
```ts title="apps/mobile/config/auth.ts"
import { SOCIAL_PROVIDER, authConfigSchema } from "@turbostarter/auth";
import { env } from "~/lib/env";
import type { AuthConfig } from "@turbostarter/auth";
export const authConfig = authConfigSchema.parse({
providers: {
password: env.EXPO_PUBLIC_AUTH_PASSWORD,
magicLink: env.EXPO_PUBLIC_AUTH_MAGIC_LINK,
anonymous: env.EXPO_PUBLIC_AUTH_ANONYMOUS,
oAuth: [SOCIAL_PROVIDER.GOOGLE, SOCIAL_PROVIDER.GITHUB],
},
}) satisfies AuthConfig;
```
The configuration is also validated with a Zod schema, so if something is off, you'll see the validation errors immediately.
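Conceptually, the validation behaves like this hand-rolled sketch (the real code uses the Zod `authConfigSchema`; this stand-in only checks the provider flags):

```typescript
interface Providers {
  password: boolean;
  magicLink: boolean;
  anonymous: boolean;
}

// Stand-in for authConfigSchema.parse: reject the config at startup
// if any provider flag is missing or isn't a boolean.
function parseAuthConfig(input: { providers: Partial<Providers> }): {
  providers: Providers;
} {
  for (const key of ["password", "magicLink", "anonymous"] as const) {
    if (typeof input.providers[key] !== "boolean") {
      throw new Error(`providers.${key} must be a boolean`);
    }
  }
  return input as { providers: Providers };
}
```

Failing fast at startup like this means a typo in an environment variable surfaces immediately rather than as a broken sign-in screen.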
For example, if you want to switch from password to magic link, you'd change the following environment variables:
```dotenv title=".env.local"
EXPO_PUBLIC_AUTH_PASSWORD=false
EXPO_PUBLIC_AUTH_MAGIC_LINK=true
```
To display third-party providers in the UI, include them in the `oAuth` array. Google and GitHub are enabled by default.
```tsx title="apps/mobile/config/auth.ts"
providers: {
...
oAuth: [SOCIAL_PROVIDER.GOOGLE, SOCIAL_PROVIDER.GITHUB],
...
},
```
## Third party providers
To enable third-party authentication providers, you'll need to:
1. Set up an OAuth application in the provider's developer console (like Google Cloud Console, GitHub Developer Settings, or any other provider you want to use)
2. Configure the corresponding environment variables in your TurboStarter **API (web) application**
Each OAuth provider requires its own set of credentials and environment variables. Please refer to the [Better Auth documentation](https://better-auth.com/docs/concepts/oauth) for detailed setup instructions for each supported provider.
Make sure to set both development and production environment variables
appropriately. Your OAuth provider may require different callback URLs for
each environment.
file: ./src/content/docs/(core)/mobile/auth/flow.mdx
meta: {
"title": "User flow",
"description": "Discover the authentication flow in TurboStarter."
}
TurboStarter ships with a fully functional authentication system. Most of the screens and components are preconfigured and easily customizable to your needs.
Here you will find a quick walkthrough of the authentication flow.
## Sign up
The sign-up screen is where users can create an account. They need to provide their email address and password.

Once successful, users are asked to confirm their email address. This is enabled by default, and for security reasons it's not possible to disable it.
Make sure to configure the [email provider](/docs/web/emails/configuration) together with the [auth hooks](/docs/web/emails/sending#authentication-emails) to be able to send emails from your app.

## Sign in
The sign-in screen is where users log in to their account. They can provide their email address and password, use a magic link (if enabled), or sign in with third-party providers.

## Sign out
The sign out button is located in the user account settings.

## Forgot password
The forgot password screen is where users can reset their password. They need to provide their email address and follow the instructions in the email.
It comes together with the reset password screen, where users land from the email link. There they can set a new password and confirm it.

file: ./src/content/docs/(core)/mobile/auth/overview.mdx
meta: {
"title": "Overview",
"description": "Get started with authentication.",
"index": true
}
TurboStarter uses [Better Auth](https://better-auth.com) to handle authentication. It's a secure, production-ready authentication solution that integrates seamlessly with many frameworks and provides enterprise-grade security out of the box.
One of the core principles of TurboStarter is to do things **as simple as possible**, and to make everything **as performant as possible**.
Better Auth provides an excellent developer experience with minimal configuration required, while maintaining enterprise-grade security standards. Its framework-agnostic approach and focus on performance makes it the perfect choice for TurboStarter.

You can read more about Better Auth in the [official documentation](https://better-auth.com/docs).
TurboStarter supports multiple authentication methods:
* **Password** - the traditional email/password method
* **Magic Link** - magic links with **deep linking**
* **Anonymous** - allowing users to proceed anonymously
* **OAuth** - OAuth social providers (Google and GitHub preconfigured)
* **Native Apple authentication** - Apple sign-in for iOS (coming soon)
* **Native Google authentication** - Google sign-in for Android (coming soon)
As well as common application flows, with ready-to-use views and components:
* **Sign in** - sign in with email/password or OAuth providers
* **Sign up** - sign up with email/password or OAuth providers
* **Sign out** - sign out
* **Password recovery** - forgot and reset password
* **Email verification** - verify email
You can construct your auth flow like LEGO bricks - plug in needed parts and customize them to your needs.
file: ./src/content/docs/(core)/mobile/configuration/app.mdx
meta: {
"title": "App configuration",
"description": "Learn how to setup the overall settings of your app."
}
When configuring your app, you'll need to define settings in different places depending on which provider will use them (e.g., Expo, EAS).
## App configuration
Let's start with the core settings for your app. These settings are **crucial** as they're used by Expo and EAS to build your app, determine its store presence, prepare updates, and more.
This configuration includes essential details like the official name, description, scheme, store IDs, splash screen configuration, and more.
You'll define these settings in `apps/mobile/app.config.ts`. Make sure to follow the [Expo config schema](https://docs.expo.dev/versions/latest/config/app/) when setting this up.
Here is an example of what the config file looks like:
```ts title="apps/mobile/app.config.ts"
import type { ConfigContext, ExpoConfig } from "expo/config";
export default ({ config }: ConfigContext): ExpoConfig => ({
...config,
name: "TurboStarter",
slug: "turbostarter",
scheme: "turbostarter",
version: "0.1.0",
orientation: "portrait",
icon: "./assets/images/icon.png",
userInterfaceStyle: "automatic",
assetBundlePatterns: ["**/*"],
sdkVersion: "51.0.0",
platforms: ["ios", "android"],
updates: {
fallbackToCacheTimeout: 0,
},
newArchEnabled: true,
ios: {
bundleIdentifier: "your.bundle.identifier",
supportsTablet: false,
},
android: {
package: "your.bundle.identifier",
adaptiveIcon: {
monochromeImage: "./public/images/icon/android/monochrome.png",
foregroundImage: "./public/images/icon/android/adaptive.png",
backgroundColor: "#0D121C",
},
},
extra: {
eas: {
projectId: "your-project-id",
},
},
experiments: {
tsconfigPaths: true,
typedRoutes: true,
},
plugins: ["expo-router", ["expo-splash-screen", SPLASH]],
});
```
Make sure to replace the values with your own and take your time to set everything correctly.
### Internal configuration
As with the [web app](/docs/web/configuration/app) and [extension](/docs/extension/configuration/app), we define an internal app config that stores overall variables for your application (ones that can't be read from the Expo config).
The recommendation is to **not update this directly** - instead, please define the environment variables and override the default behavior. The configuration is strongly typed, so you can use it safely across your codebase - it'll be validated at build time.
```ts title="apps/mobile/src/config/app.ts"
import { env } from "~/lib/env";
export const appConfig = {
locale: env.EXPO_PUBLIC_DEFAULT_LOCALE,
theme: {
mode: env.EXPO_PUBLIC_THEME_MODE,
color: env.EXPO_PUBLIC_THEME_COLOR,
},
} as const;
```
For example, to set the mobile app default theme color, you'd update the following variable:
```dotenv title=".env.local"
EXPO_PUBLIC_THEME_COLOR="yellow"
```
Do NOT use `process.env` to get the values of the variables. Variables
accessed this way are not validated at build time, and thus the wrong variable
can be used in production.
## EAS configuration
To properly build and publish your app, you need to define settings for the EAS build service.
This is done in `apps/mobile/eas.json`, and it must follow the [EAS config schema](https://docs.expo.dev/eas/json/).
Here is an example of what the config file looks like:
```json title="apps/mobile/eas.json"
{
"cli": {
"version": ">= 4.1.2"
},
"build": {
"base": {
"node": "20.15.0",
"pnpm": "9.6.0",
"ios": {
"resourceClass": "m-medium"
},
"env": {
"EXPO_PUBLIC_DEFAULT_LOCALE": "en",
"EXPO_PUBLIC_AUTH_PASSWORD": "true",
"EXPO_PUBLIC_AUTH_MAGIC_LINK": "false",
"EXPO_PUBLIC_THEME_MODE": "system",
"EXPO_PUBLIC_THEME_COLOR": "orange"
}
},
...
"preview": {
"extends": "base",
"distribution": "internal",
"android": {
"buildType": "apk"
},
"env": {
"APP_ENV": "test",
"EXPO_PUBLIC_SITE_URL": ""
}
},
"production": {
"extends": "base",
"env": {
"APP_ENV": "production",
"EXPO_PUBLIC_SITE_URL": ""
}
}
...
}
}
```
Make sure to fill in all the environment variables with the correct values for your project and environment; otherwise your app won't build and you won't be able to publish it.
file: ./src/content/docs/(core)/mobile/configuration/environment-variables.mdx
meta: {
"title": "Environment variables",
"description": "Learn how to configure environment variables."
}
Environment variables are defined in the `.env` file in the root of the repository and in the root of the `apps/mobile` package.
* **Shared environment variables**: Defined in the **root** `.env` file. These are shared between environments (e.g., development, staging, production) and apps (e.g., web, mobile).
* **Environment-specific variables**: Defined in `.env.development` and `.env.production` files. These are specific to the development and production environments.
* **App-specific variables**: Defined in the app-specific directory (e.g., `apps/mobile`). These are specific to the app and are not shared between apps.
* **Build environment variables**: Not stored in the `.env` file. Instead, they are stored in `eas.json` file used to build app on [Expo Application Services](https://expo.dev/eas).
* **Secret keys**: Never stored on the mobile side - instead, [they're defined on the web side.](/docs/web/configuration/environment-variables#secret-keys)
## Shared variables
Here you can add all the environment variables that are shared across all the apps.
To override these variables in a specific environment, please add them to the specific environment file (e.g. `.env.development`, `.env.production`).
```dotenv title=".env.local"
# Shared environment variables
# The database URL is used to connect to your database.
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres"
# The name of the product. This is used in various places across the apps.
PRODUCT_NAME="TurboStarter"
# The url of the web app. Used mostly to link between apps.
URL="http://localhost:3000"
...
```
## App-specific variables
Here you can add all the environment variables that are specific to the app (e.g. `apps/mobile`).
You can also override the shared variables defined in the root `.env` file.
```dotenv title="apps/mobile/.env.local"
# App-specific environment variables
# Env variables extracted from shared to be exposed to the client in Expo app
EXPO_PUBLIC_SITE_URL="${URL}"
EXPO_PUBLIC_DEFAULT_LOCALE="${DEFAULT_LOCALE}"
# Theme mode and color
EXPO_PUBLIC_THEME_MODE="system"
EXPO_PUBLIC_THEME_COLOR="orange"
# Use this variable to enable or disable password-based authentication. If you set this to true, users will be able to sign up and sign in using their email and password. If you set this to false, the form won't be shown.
EXPO_PUBLIC_AUTH_PASSWORD="true"
...
```
To make environment variables available in the Expo app code, you need to prefix them with `EXPO_PUBLIC_`. They will be injected into the code during the build process.
Only environment variables prefixed with `EXPO_PUBLIC_` will be injected.
[Read more about Expo environment variables.](https://docs.expo.dev/guides/environment-variables/)
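The prefix rule can be pictured as a filter over the build-time environment (a conceptual sketch - Expo inlines the values at bundle time rather than filtering at runtime):

```typescript
// Only EXPO_PUBLIC_-prefixed variables reach the client bundle.
const buildEnv: Record<string, string> = {
  EXPO_PUBLIC_SITE_URL: "http://localhost:3000",
  DATABASE_URL: "postgresql://localhost:5432/postgres", // never exposed
};

const clientEnv = Object.fromEntries(
  Object.entries(buildEnv).filter(([key]) => key.startsWith("EXPO_PUBLIC_")),
);
```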
## Build environment variables
To allow your app to build properly on [EAS](https://expo.dev/eas), you need to define your environment variables in the `eas.json` file under the corresponding profile (e.g. `preview` or `production`).
Here is an example of correctly filled `eas.json` for one of the profiles:
```json title="apps/mobile/eas.json"
{
"build": {
"base": {
"env": {
"EXPO_PUBLIC_DEFAULT_LOCALE": "en",
"EXPO_PUBLIC_AUTH_PASSWORD": "true",
"EXPO_PUBLIC_AUTH_MAGIC_LINK": "false",
"EXPO_PUBLIC_THEME_MODE": "system",
"EXPO_PUBLIC_THEME_COLOR": "orange"
}
},
"production": {
"extends": "base",
"autoIncrement": true,
"env": {
"APP_ENV": "production",
"EXPO_PUBLIC_SITE_URL": "https://www.turbostarter.dev"
}
}
}
}
```
Then, when you trigger a `production` build, the correct environment variables will be injected into your mobile app code, ensuring that everything works correctly.
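Profile inheritance via `"extends"` can be thought of as a shallow merge of the `env` objects, with the child profile winning (a conceptual sketch, not EAS's actual resolver):

```typescript
// "production" extends "base": its env entries override the base entries.
const base = {
  env: { EXPO_PUBLIC_THEME_COLOR: "orange", EXPO_PUBLIC_SITE_URL: "" },
};
const production = {
  env: { APP_ENV: "production", EXPO_PUBLIC_SITE_URL: "https://www.turbostarter.dev" },
};

const resolvedEnv = { ...base.env, ...production.env };
```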
## Secret keys
Secret keys and sensitive information must **never** be stored in the mobile app code.
Instead, you'll need to add the secret keys to the **web app, where the API is deployed.**
The mobile app should only communicate with the backend API, which is typically part of the web app. The web app is responsible for handling sensitive operations and storing secret keys securely.
[See web documentation for more details.](/docs/web/configuration/environment-variables#secret-keys)
This is not a TurboStarter-specific requirement, but a security best practice for any
application. Ultimately, it's your choice.
file: ./src/content/docs/(core)/mobile/configuration/paths.mdx
meta: {
"title": "Paths configuration",
"description": "Learn how to configure the paths of your app."
}
The paths configuration is set at `apps/mobile/config/paths.ts`. This configuration stores all the paths that you'll be using in your application. It is a convenient way to store them in a central place rather than scatter them in the codebase using magic strings.
It is **unlikely you'll need to change** this unless you're heavily editing the codebase.
```ts title="apps/mobile/config/paths.ts"
const pathsConfig = {
index: "/",
tabs: {
auth: {
login: `${AUTH_PREFIX}/login`,
register: `${AUTH_PREFIX}/register`,
forgotPassword: `${AUTH_PREFIX}/password/forgot`,
updatePassword: `${AUTH_PREFIX}/password/update`,
error: `${AUTH_PREFIX}/error`,
},
billing: `/billing`,
ai: `/ai`,
settings: `/settings`,
},
} as const;
```
By declaring the paths as constants, we can use them safely throughout the
codebase. There is no risk of misspelling or using magic strings.
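A minimal, self-contained sketch of the pattern (the `AUTH_PREFIX` value here is illustrative):

```typescript
const AUTH_PREFIX = "/auth"; // illustrative; the real prefix lives next to the config

const pathsConfig = {
  index: "/",
  tabs: {
    auth: {
      login: `${AUTH_PREFIX}/login`,
      register: `${AUTH_PREFIX}/register`,
    },
  },
} as const;

// Use the constant instead of a magic string:
function navigateTo(path: string): string {
  return path; // stand-in for router.push(path)
}

navigateTo(pathsConfig.tabs.auth.login); // "/auth/login"
```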
file: ./src/content/docs/(core)/mobile/customization/add-app.mdx
meta: {
"title": "Adding apps",
"description": "Learn how to add apps to your Turborepo workspace."
}
This is an **advanced topic** - you should only follow these instructions if you are sure you want to add a new app to your TurboStarter project within your monorepo and want to keep pulling updates from the TurboStarter repository.
In some ways, creating a new repository may be the easiest way to manage your application. However, if you want to keep it within the monorepo and pull updates from the TurboStarter repository, you can follow these instructions.
To pull updates into a separate application outside of `mobile` - we can use [git subtree](https://www.atlassian.com/git/tutorials/git-subtree).
Basically, we will create a subtree at `apps/mobile` and create a new remote branch for the subtree. When we create a new application, we will pull the subtree into the new application. This allows us to keep it in sync with the `apps/mobile` folder.
To add a new app to your TurboStarter project, you need to follow these steps:
## Create a subtree
First, we need to create a subtree for the `apps/mobile` folder. We will create a branch named `mobile-branch` and create a subtree for the `apps/mobile` folder.
```bash
git subtree split --prefix=apps/mobile --branch mobile-branch
```
## Create a new app
Now, we can create a new application in the `apps` folder.
Let's say we want to create a new app `ai-chat` at `apps/ai-chat` with the same structure as the `apps/mobile` folder (which acts as the template for all new apps).
```bash
git subtree add --prefix=apps/ai-chat origin mobile-branch --squash
```
You should now be able to see the `apps/ai-chat` folder with the contents of the `apps/mobile` folder.
## Update the app
When you want to update the new application, follow these steps:
### Pull the latest updates from the TurboStarter repository
The command below will update all the changes from the TurboStarter repository:
```bash
git pull upstream main
```
### Push the `mobile-branch` updates
After you have pulled the updates from the TurboStarter repository, you can split the branch again and push the updates to the mobile-branch:
```bash
git subtree split --prefix=apps/mobile --branch mobile-branch
```
Now, you can push the updates to the `mobile-branch`:
```bash
git push origin mobile-branch
```
### Pull the updates to the new application
Now, you can pull the updates to the new application:
```bash
git subtree pull --prefix=apps/ai-chat origin mobile-branch --squash
```
That's it! You now have a new application in the monorepo. 🎉
file: ./src/content/docs/(core)/mobile/customization/add-package.mdx
meta: {
"title": "Adding packages",
"description": "Learn how to add packages to your Turborepo workspace."
}
This is an **advanced topic** - you should only follow these instructions if you are sure you want to add a new package to your TurboStarter application instead of adding a folder to your application in `apps/web` or modifying existing packages under `packages`. You don't need to do this to add a new screen or component to your application.
To add a new package to your TurboStarter application, you need to follow these steps:
## Generate a new package
First, enter the command below to create a new package in your TurboStarter application:
```bash
turbo gen
```
Turborepo will ask you to enter the name of the package you want to create. Enter the name of the package you want to create and press enter.
If you don't want to add dependencies to your package, you can skip this step by pressing enter.
The command generates a new package under `packages`, named with the `@turbostarter/` prefix. If you named it `example`, the package will be named `@turbostarter/example`.
## Export a module from your package
By default, the package exports a single module via the `index.ts` file. You can add more exports by creating new files in the package directory and exporting them from the `index.ts` file, or by creating export files in the package directory and adding them to the `exports` field in the `package.json` file.
### From `index.ts` file
The easiest way to export a module from a package is to create a new file in the package directory and export it from the `index.ts` file.
```ts title="packages/example/src/module.ts"
export function example() {
return "example";
}
```
Then, export the module from the `index.ts` file.
```ts title="packages/example/src/index.ts"
export * from "./module";
```
### From `exports` field in `package.json`
**This can be very useful for tree-shaking.** Assuming you have a file named `module.ts` in the package directory, you can export it by adding it to the `exports` field in the `package.json` file.
```json title="packages/example/package.json"
{
"exports": {
".": "./src/index.ts",
"./module": "./src/module.ts"
}
}
```
**When to do this?**
1. when exporting two modules that don't share dependencies, to ensure better tree-shaking. For example, if your exports contain both client and server modules.
2. for better organization of your package
For example, create two exports `client` and `server` in the package directory and add them to the `exports` field in the `package.json` file.
```json title="packages/example/package.json"
{
"exports": {
".": "./src/index.ts",
"./client": "./src/client.ts",
"./server": "./src/server.ts"
}
}
```
1. The `client` module can be imported using `import { client } from '@turbostarter/example/client'`
2. The `server` module can be imported using `import { server } from '@turbostarter/example/server'`
## Use the package in your application
You can now use the package in your application by importing it using the package name:
```ts title="apps/mobile/src/app/index.tsx"
import { example } from "@turbostarter/example";
console.log(example());
```
Et voilà! You have successfully added a new package to your TurboStarter application. 🎉
file: ./src/content/docs/(core)/mobile/customization/components.mdx
meta: {
"title": "Components",
"description": "Manage and customize your app components."
}
For the components part, we're using [react-native-reusables](https://rnr-docs.vercel.app/getting-started/introduction/) for atomic, accessible and highly customizable components.
> It's like shadcn/ui, but for mobile apps.
react-native-reusables is a powerful tool that allows you to generate
pre-designed components with a single command. It's built with Nativewind
(like Tailwind CSS for mobile) and accessibility in mind, and it's highly
customizable.
TurboStarter defines two packages that are responsible for the UI part of your app:
* `@turbostarter/ui` - shared styles, [themes](/docs/mobile/customization/styling#themes) and assets (e.g. icons)
* `@turbostarter/ui-mobile` - pre-built UI mobile components, ready to use in your app
## Adding a new component
There are basically two ways to add a new component:
TurboStarter is fully compatible with the [react-native-reusables CLI](https://www.npmjs.com/package/@react-native-reusables/cli), so you can generate new components with a single command.
Run the following command from the **root** of your project:
```bash
pnpm --filter @turbostarter/ui-mobile ui:add
```
This will launch an interactive command-line interface to guide you through the process of adding a new component where you can pick which component you want to add.
```bash
Which components would you like to add? > Space to select. A to toggle all.
Enter to submit.
❯ accordion
❯ alert
❯ alert-dialog
❯ aspect-ratio
❯ avatar
❯ badge
❯ button
❯ calendar
❯ card
```
Newly created components will appear in the `packages/ui/mobile/src` directory.
You can always copy-paste a component from the [react-native-reusables](https://rnr-docs.vercel.app/getting-started/introduction/) website and modify it to your needs.
This is possible because the components are headless and (in most cases) don't need any additional dependencies.
Copy the code from the website, create a new file in the `packages/ui/mobile/src` directory, and paste the code into it.
Keep in mind that you should always try to keep shared components as atomic as possible. This will make it easier to reuse them and to build specific views by composition.
E.g. include components like `Button`, `Input`, `Card`, `Dialog` in shared package, but keep specific components like `LoginForm` in your app directory.
## Using components
Each component is a standalone entity which has a separate export from the package. It helps to keep things modular, avoid unnecessary dependencies and make tree-shaking possible.
To import a component from the UI package, use the following syntax:
```tsx title="apps/mobile/src/components/my-component.tsx"
// [!code word:card]
import {
Card,
CardContent,
CardHeader,
CardFooter,
CardTitle,
CardDescription,
} from "@turbostarter/ui-mobile/card";
```
Then you can use it to build a component specific to your app:
```tsx title="apps/mobile/src/components/my-component.tsx"
export function MyComponent() {
  return (
    <Card>
      <CardHeader>
        <CardTitle>My Component</CardTitle>
      </CardHeader>
      <CardContent>My Component Content</CardContent>
    </Card>
  );
}
```
Most of the components are the same as for the [web app](/docs/web/customization/components).
It means that you can basically migrate existing web components to the mobile app with just an import change!
file: ./src/content/docs/(core)/mobile/customization/styling.mdx
meta: {
"title": "Styling",
"description": "Get started with styling your app."
}
To build the user interface, TurboStarter comes with [Nativewind](https://www.nativewind.dev/) pre-configured.
Nativewind is basically Tailwind CSS for mobile apps: it gives you a way to style your app with Tailwind CSS utilities and classes.
## Tailwind configuration
In the `tooling/tailwind/config` directory you will find shared Tailwind CSS configuration files. To change some global styles you can edit the files in this folder.
Here is an example of a shared Tailwind configuration file:
```ts title="tooling/tailwind/config/base.ts"
import type { Config } from "tailwindcss";
export default {
darkMode: "class",
content: ["src/**/*.{ts,tsx}"],
theme: {
extend: {
colors: {
...
primary: {
DEFAULT: "hsl(var(--color-primary))",
foreground: "hsl(var(--color-primary-foreground))",
},
secondary: {
DEFAULT: "hsl(var(--color-secondary))",
foreground: "hsl(var(--color-secondary-foreground))",
},
success: {
DEFAULT: "hsl(var(--color-success))",
foreground: "hsl(var(--color-success-foreground))",
},
...
},
},
},
plugins: [animate, containerQueries, typography],
} satisfies Config;
```
For the colors, we rely strictly on [CSS Variables](https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_custom_properties) in [HSL](https://en.wikipedia.org/wiki/HSL_and_HSV) format, which allows for easy theme management without any JavaScript.
Also, each app has its own `tailwind.config.ts` file which extends the shared config and allows you to override the global styles.
Here is an example of an app's `tailwind.config.ts` file:
```ts title="apps/mobile/tailwind.config.ts"
import type { Config } from "tailwindcss";
import nativewind from "nativewind/preset";
import { hairlineWidth } from "nativewind/theme";
import baseConfig from "@turbostarter/tailwind-config/mobile";
export default {
// We need to append the path to the UI package to the content array so that
// those classes are included correctly.
content: [
...baseConfig.content,
"../../packages/ui/{shared,mobile}/src/**/*.{ts,tsx}",
],
presets: [baseConfig, nativewind],
theme: {
extend: {
fontFamily: {
sans: ["DMSans_400Regular"],
mono: ["DMMono_400Regular"],
},
borderWidth: {
hairline: hairlineWidth(),
},
},
},
} satisfies Config;
```
That way we can have a separation of concerns and a clear structure for the Tailwind CSS configuration.
## Themes
TurboStarter comes with **9+** predefined themes which you can use to quickly change the look and feel of your app.
They're defined in `packages/ui/shared/src/styles/themes` directory. Each theme is a set of variables that can be overridden:
```ts title="packages/ui/shared/src/styles/themes/colors/orange.ts"
export const orange = {
light: {
background: [0, 0, 1],
foreground: [240, 0.1, 0.039],
card: [0, 0, 1],
"card-foreground": [240, 0.1, 0.039],
...
}
} satisfies ThemeColors;
```
Each variable is stored as a [HSL](https://en.wikipedia.org/wiki/HSL_and_HSV) array, which is then converted to a CSS variable. That way we can ensure full type-safety and reuse themes across parts of our apps (e.g. use the same theme in emails).
Feel free to add your own themes or override the existing ones to match your brand's identity.
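For illustration, converting one of these triples into a CSS variable value could look like this (a sketch; the real conversion helper lives in the shared UI package):

```typescript
// [240, 0.1, 0.039] becomes "240 10% 3.9%", ready to assign to a
// CSS variable such as --color-foreground.
type HslTriple = [hue: number, saturation: number, lightness: number];

function toCssHsl([h, s, l]: HslTriple): string {
  const pct = (v: number) => `${Math.round(v * 1000) / 10}%`;
  return `${h} ${pct(s)} ${pct(l)}`;
}
```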
To apply a theme to your app, we're declaring a theme provider, which is a wrapper around the app that passes the correct variables to the styles that will be used in the components and screens. It's located at `providers/theme.tsx`:
```tsx title="apps/mobile/src/providers/theme.tsx"
import { vars } from "nativewind";
import { View } from "react-native";

export default function ThemeProvider({
  children,
}: {
  children: React.ReactNode;
}) {
  // Illustrative sketch: apply the active theme's variables to the subtree.
  // The real provider resolves the values from the shared themes package.
  return (
    <View style={vars({ "--color-primary": "24.6 95% 53.1%" })} className="flex-1">
      {children}
    </View>
  );
}
```
## Dark mode
TurboStarter comes with built-in dark mode support.
Each theme has corresponding dark mode variables, which are used to switch the theme to its dark counterpart.
```ts title="packages/ui/shared/src/styles/themes/colors/orange.ts"
export const orange = {
light: {},
dark: {
background: [0, 0, 1],
foreground: [240, 0.1, 0.039],
card: [0, 0, 1],
"card-foreground": [240, 0.1, 0.039],
...
}
} satisfies ThemeColors;
```
Nativewind takes care of adding the `dark` class to the root of the app and of reading the user's preference via the `useColorScheme` hook. That way you can focus fully on features and only change the colors if you need to customize the theme.
Also, you can define the default theme mode and color in [app configuration](/docs/mobile/configuration/app).
file: ./src/content/docs/(core)/mobile/installation/clone.mdx
meta: {
"title": "Cloning repository",
"description": "Get the code to your local machine and start developing your app.",
"mirror": "../../web/installation/clone.mdx"
}
file: ./src/content/docs/(core)/mobile/installation/commands.mdx
meta: {
"title": "Common commands",
"description": "Learn about common commands you need to know to work with the mobile project.",
"mirror": "../../web/installation/commands.mdx"
}
file: ./src/content/docs/(core)/mobile/installation/conventions.mdx
meta: {
"title": "Conventions",
"description": "Some standard conventions used across TurboStarter codebase.",
"mirror": "../../web/installation/conventions.mdx"
}
file: ./src/content/docs/(core)/mobile/installation/dependencies.mdx
meta: {
"title": "Managing dependencies",
"description": "Learn how to manage dependencies in your project.",
"mirror": "../../web/installation/dependencies.mdx"
}
file: ./src/content/docs/(core)/mobile/installation/development.mdx
meta: {
"title": "Development",
"description": "Get started with the code and develop your mobile SaaS."
}
## Prerequisites
To get started with TurboStarter, ensure you have the following installed and set up:
* [Node.js](https://nodejs.org/en) (20.x or higher)
* [Docker](https://www.docker.com) (only if you want to use a local database)
* [pnpm](https://pnpm.io)
* [Firebase](https://firebase.google.com) project (optional for some features - check [Firebase project](/docs/mobile/installation/firebase) section for more details)
## Project development
### Set up environment
We won't duplicate the official docs here - there is quite a bit of setup required to get started with iOS and Android development, and it also depends on the approach you want to take.
[Check this official setup guide to get started](https://docs.expo.dev/get-started/set-up-your-environment/). After you're done with the setup, go back to this guide and continue with the next step.
You can pick if you want to develop the app for iOS or Android by using the real device or the simulator.
We recommend using the simulators and [development builds](https://docs.expo.dev/develop/development-builds/create-a-build/) for development, as it is the more realistic and reliable approach. It also won't limit you in terms of native dependencies (required for e.g. [analytics](/docs/mobile/analytics/overview)).
Of course, you can start with the simplest approach (using [Expo Go](https://expo.dev/go)) and switch to a different one as you iterate.
### Install dependencies
Install the project dependencies by running the following command:
```bash
pnpm i
```
pnpm is a fast, disk-space-efficient package manager that uses hard links and symlinks to store each version of a module only once on disk, and it has great [monorepo support](https://pnpm.io/workspaces). Of course, you can switch to [Bun](https://bun.sh), [yarn](https://yarnpkg.com) or [npm](https://www.npmjs.com) with minimal effort.
### Set up environment variables
Create `.env.local` files from the `.env.example` files and fill in the required environment variables.
Check [Environment variables](/docs/web/configuration/environment-variables) for more details on setting up environment variables.
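If you're starting from scratch, your `.env.local` might look roughly like this (all values below are placeholders, not real credentials):

```dotenv
# Placeholder values – copy .env.example and fill in your own
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres"
EXPO_PUBLIC_SITE_URL="http://localhost:3000"
```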
### Start database
If you want to use a local database (**recommended for development purposes**), ensure Docker is running, then set up your database with:
```bash
pnpm db:setup
```
This command initiates the PostgreSQL container and runs migrations, ensuring your database is up to date and ready to use.
### Start development server
To start the application development server, run:
```bash
pnpm dev
```
Your development server should now be running at `http://localhost:8081`.

Scan the QR code with your mobile device to start the app or press the appropriate key on your keyboard to run it on simulator. In case of any issues check the [Troubleshooting](https://docs.expo.dev/troubleshooting/overview/) section.
### Publish to stores
When you're ready to publish the project to the stores, follow [guidelines](/docs/mobile/marketing) and [checklist](/docs/mobile/publishing/checklist) to ensure everything is set up correctly.
file: ./src/content/docs/(core)/mobile/installation/editor-setup.mdx
meta: {
"title": "Editor setup",
"description": "Learn how to set up your editor for the fastest development experience.",
"mirror": "../../web/installation/editor-setup.mdx"
}
file: ./src/content/docs/(core)/mobile/installation/firebase.mdx
meta: {
"title": "Firebase project",
"description": "Learn how to set up a Firebase project for your TurboStarter mobile app."
}
For some features of your mobile app, you will need to set up a Firebase project. This requirement comes from how these features are implemented under the hood, so it cannot be avoided.
You would need a Firebase project to use the following features:
* [Analytics](/docs/mobile/analytics/overview) with [Google Analytics](/docs/mobile/analytics/configuration#google-analytics) provider
Here, we'll go through the steps to set up a Firebase project and link it to your mobile app.
During development, the integration with Firebase works only in a [development build](https://docs.expo.dev/workflow/overview/#development-builds); **it won't work in the [Expo Go](https://expo.dev/go) app**.
## Create a Firebase project
First things first, you need to create a Firebase project. You can do this by going to the [Firebase console](https://console.firebase.google.com/) and clicking on "Add Project":

Name it as you want, and proceed to the dashboard.
## Install Firebase SDK
To install React Native Firebase's base app module, run the following command in your mobile app directory:
```bash
npx expo install @react-native-firebase/app
```
## Configure Firebase modules
The recommended approach to configure React Native Firebase is to use [Expo Config Plugins](https://docs.expo.dev/config-plugins/introduction/).
To enable Firebase on the native Android and iOS platforms, create and download the platform-specific Firebase configuration files from your Firebase project.
You can find them in the dashboard under the Firebase project settings:

For Android, it will be a `google-services.json` file, and for iOS it will be a `GoogleService-Info.plist` file.
Then provide paths to the downloaded files in the following `app.config.ts` fields: [`android.googleServicesFile`](https://docs.expo.io/versions/latest/config/app/#googleservicesfile-1) and [`ios.googleServicesFile`](https://docs.expo.io/versions/latest/config/app/#googleservicesfile). Here's what an example configuration looks like:
```ts title="app.config.ts"
import type { ConfigContext, ExpoConfig } from "expo/config";

export default ({ config }: ConfigContext): ExpoConfig => ({
  ...config,
  ios: {
    googleServicesFile: "./GoogleService-Info.plist",
  },
  android: {
    googleServicesFile: "./google-services.json",
  },
  plugins: [
    "@react-native-firebase/app",
    [
      "expo-build-properties",
      {
        ios: {
          useFrameworks: "static",
        },
      },
    ],
  ],
});
```
For iOS only, since `firebase-ios-sdk` requires `use_frameworks`, you need to configure `expo-build-properties` with `"useFrameworks": "static"`.
Listing a module in the Config Plugins (the `plugins` array in the config above) is only required for React Native Firebase modules that involve native installation steps - e.g. modifying the Xcode project, `Podfile`, `build.gradle`, `AndroidManifest.xml` etc. React Native Firebase modules without native steps will work out of the box.
## Generate native code
If you are compiling your app locally, you'll need to regenerate the native code for the platforms to pick up the changes:
```bash
npx expo prebuild --clean
```
Then, you can follow the same steps as in the [development environment setup](/docs/mobile/installation/development) guide to run the app locally or [build a production version](/docs/mobile/publishing/checklist#build-your-app) of your app.
Et voilà! You've set up and linked your Firebase project to your mobile app 🎉
You can learn more about the Firebase integration and its possibilities in the [official documentation](https://rnfirebase.io/).
file: ./src/content/docs/(core)/mobile/installation/structure.mdx
meta: {
"title": "Project structure",
"description": "Learn about the project structure and how to navigate it."
}
The main directories in the project are:
* `apps` - the location of the main apps
* `packages` - the location of the shared code and the API
### `apps` Directory
This is where the apps live. It includes the web app (Next.js), the mobile app (React Native with Expo), and the browser extension (WXT: Vite + React). Each app has its own directory.
### `packages` Directory
This is where the shared code and the API live. It includes the following:
* shared libraries (database, mailers, cms, billing, etc.)
* shared features (auth, mails, billing, ai etc.)
* UI components (buttons, forms, modals, etc.)
All apps can use and reuse the API exported from the packages directory. This makes it easy to have one, or many apps in the same codebase, sharing the same code.
## Repository structure
By default the monorepo contains the following apps and packages:
## Mobile application structure
The mobile application is located in the `apps/mobile` folder. It contains the following folders:
file: ./src/content/docs/(core)/mobile/installation/update.mdx
meta: {
"title": "Updating codebase",
"description": "Learn how to update your codebase to the latest version.",
"mirror": "../../web/installation/update.mdx"
}
file: ./src/content/docs/(core)/mobile/publishing/checklist.mdx
meta: {
"title": "Checklist",
"description": "Let's publish your TurboStarter app to stores!"
}
When you're ready to publish your TurboStarter app to stores, follow this checklist.
This process may take a few hours and some trial and error, so buckle up: you're almost there!
## Create database instance
**Why is it necessary?**
A production-ready database instance is essential for storing your application's data securely and reliably in the cloud. [PostgreSQL](https://www.postgresql.org/) is the recommended database for TurboStarter due to its robustness, features, and wide support.
**How to do it?**
You have several options for hosting your PostgreSQL database:
* [Supabase](https://supabase.com/) - Provides a fully managed Postgres database with additional features
* [Vercel Postgres](https://vercel.com/storage/postgres) - Serverless SQL database optimized for Vercel deployments
* [Neon](https://neon.tech/) - Serverless Postgres with automatic scaling
* [Turso](https://turso.tech/) - Edge database built on libSQL with global replication
* [DigitalOcean](https://www.digitalocean.com/products/managed-databases) - Managed database clusters with automated failover
Choose a provider based on your needs for:
* Pricing and budget
* Geographic region availability
* Scaling requirements
* Additional features (backups, monitoring, etc.)
## Migrate database
**Why is it necessary?**
Pushing database migrations ensures that your database schema in the remote database instance is configured to match TurboStarter's requirements. This step is crucial for the application to function correctly.
**How to do it?**
You have two ways to run a migration:
TurboStarter comes with a predefined GitHub Action to handle database migrations. You can find its definition in the `.github/workflows/publish-db.yml` file.
All you need to do is set your `DATABASE_URL` as a [secret for your GitHub repository](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions).
Then, run the workflow, which will publish the database schema to your remote database instance.
[Check how to run Github Actions workflow.](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow)
You can also run your migrations locally, although this is not recommended for production.
To do so, set the `DATABASE_URL` environment variable to your database URL (that comes from your database provider) in `.env.local` file and run the following command:
```bash
pnpm db:migrate
```
This command will run the migrations and apply them to your remote database.
[Learn more about database migrations.](/docs/web/database/migrations)
## (Optional) Set up Firebase project
**Why is it necessary?**
Setting up a Firebase project is optional, and depends on which features your app is using. For example, if you want to use [Analytics](/docs/mobile/analytics/overview) with [Google Analytics](/docs/mobile/analytics/configuration#google-analytics), setting up a Firebase project is required.
**How to do it?**
Please refer to the [Firebase project](/docs/mobile/installation/firebase) section on how to set up and configure your Firebase project.
## Set up web backend API
**Why is it necessary?**
Setting up the backend is necessary to have a place to store your data and to have other features work properly (e.g., auth, billing).
**How to do it?**
Please refer to the [web deployment checklist](/docs/web/deployment/checklist) on how to set up and deploy the web app backend to production.
## Environment variables
**Why is it necessary?**
Setting the correct environment variables is essential for the application to function correctly. These variables include API keys, database URLs, and other configuration details required for your app to connect to various services.
**How to do it?**
Use our `.env.example` files to get the correct environment variables for your project. Then add them to your `eas.json` file under the correct profile.
```json title="eas.json"
{
  "build": {
    "base": {
      "env": {
        "EXPO_PUBLIC_DEFAULT_LOCALE": "en",
        "EXPO_PUBLIC_AUTH_PASSWORD": "true",
        "EXPO_PUBLIC_AUTH_MAGIC_LINK": "false",
        "EXPO_PUBLIC_THEME_MODE": "system",
        "EXPO_PUBLIC_THEME_COLOR": "orange"
      }
    },
    "production": {
      "extends": "base",
      "autoIncrement": true,
      "env": {
        "APP_ENV": "production",
        "EXPO_PUBLIC_SITE_URL": "https://www.turbostarter.dev"
      }
    }
  }
}
```
## Build your app
Building your app requires an EAS account and project. If you don't have one, you can create it by following the steps [here](https://expo.dev/eas).
**Why is it necessary?**
Building your app is necessary to create a standalone application bundle that can be published to the stores.
**How to do it?**
You have two ways to build a bundle for your app:
TurboStarter comes with a predefined GitHub Action to handle building your app on EAS. You can find its definition in the `.github/workflows/publish-mobile.yml` file.
All you need to do is set your `EXPO_TOKEN` as a [secret for your GitHub repository](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions). You can obtain it from your EAS account; learn more in the [official documentation](https://docs.expo.dev/accounts/programmatic-access/#personal-access-tokens).
Then, you can run the workflow which will build the app on EAS platform.
[Check how to run Github Actions workflow.](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow)
You can also run your build locally, although this is not recommended for production.
To do so, you'll need to have the EAS CLI installed on your machine. You can install it by running the following command:
```bash
npm install -g eas-cli
```
Then, run the following command to build your app with the `production` profile:
```bash
eas build --profile production --platform all
```
This will build the app for both platforms (iOS and Android) and output the results in your app folder.
## Submit to stores
**Why is it necessary?**
Submitting your app to the stores is the only way to make it available to your users.
**How to do it?**
To submit your app to the stores, you first need your app bundles. If you ran the previous step locally, you should already have the `.ipa` (iOS) and `.aab` (Android) files in your app folder.
If you used Github Actions to build your app, you can find the results in the `Builds` tab of your [EAS project](https://expo.dev). Download the artifacts and save them on your local machine.
Then, upload the bundles to the stores; here's how to do it:
Please follow the [official documentation](https://docs.expo.dev/submit/ios/#manual-submissions) on uploading your app to the App Stores.
Please follow the [official documentation](https://docs.expo.dev/submit/android/#start-the-submission) on uploading your app to the Google Play Store.
We're working on an auto-submit feature for EAS, which will automate the process of submitting your app to the stores.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
## Send to review
**Why is it necessary?**
Sending your app for review is required to make it available to your users: you **must** follow the stores' guidelines and get your app approved before it can be published.
**How to do it?**
Send your app for review via [App Store Connect](https://appstoreconnect.apple.com/apps) and [Google Play Console](https://play.google.com/console).
Unfortunately, it's not possible to automate this step, so you'll need to do it manually.
1. Go to [App Store Connect](https://appstoreconnect.apple.com/apps)
2. Select your app
3. [Create new version](https://developer.apple.com/help/app-store-connect/update-your-app/create-a-new-version/)
4. Fill release notes and other relevant information
5. [Attach](https://developer.apple.com/help/app-store-connect/manage-builds/upload-builds) your uploaded `.ipa` file to a version
6. Click on *Send to Review*
7. [Confirm your submission](https://developer.apple.com/help/app-store-connect/manage-submissions-to-app-review/submit-for-review/#:~:text=Submit%20an%20app,click%20%E2%80%9CAdd%20for%20Review%E2%80%9D.)
8. Wait...
Follow the [official documentation](https://developer.apple.com/help/app-store-connect/update-your-app/create-a-new-version) for more information.
1. Go to [Google Play Console](https://play.google.com/console)
2. Select your app
3. [Create new release](https://support.google.com/googleplay/android-developer/answer/9859348)
4. Fill release notes and other relevant information
5. Attach your uploaded `.aab` file to a release
6. Confirm your submission
7. Wait...
Follow the [official documentation](https://support.google.com/googleplay/android-developer/answer/9859348) for more information.
Then, you'll have to wait for the review process to complete. This can take from a few hours to a few days, depending on the store and the type of app you have.
If your submission is rejected, you'll receive an email from the stores with the rejection reason. You'll need to fix the issues and upload a new version of your app.
Make sure to follow the [guidelines](/docs/mobile/marketing) or check [publishing troubleshooting](/docs/mobile/troubleshooting/publishing) for more info.
When you receive the approval from the stores (by email or by push notification), you'll be able to publish your app on the stores.

That's it! Your app is now live and accessible to your users, good job! 🎉
* Optimize your store listings with compelling descriptions, keywords, screenshots and preview videos
* Remove placeholder content and replace with your final production content
* Update all visual branding including favicon, scheme, splash screen and app icons
file: ./src/content/docs/(core)/mobile/publishing/updates.mdx
meta: {
"title": "Updates",
"description": "Learn how to update your published app."
}
When you publish your app to the stores, you can update it later to give your users new features and bug fixes.
TurboStarter supports two ready-to-use ways to update your app; we'll go through both of them.
## Submitting new version
The most traditional way to update your app is to submit a new version to the stores. This is the most reliable way to update your app, but it can take a while for the new version to be approved and available to the users.
To submit a new version, you need to change the version number in the `package.json` file and the `app.config.ts` file.
```json title="package.json"
{
  ...
  "version": "1.0.1",
  ...
}
```
Then, follow the exact same steps as [when you initially published your app](/docs/mobile/publishing/checklist) and, when you submit your app for review, provide release notes for the new version.
## Over-the-air (OTA) updates
Over-the-Air (OTA) updates allow you to push updates to your app without requiring users to download a new version from the app store. This is a powerful feature that enables rapid iteration and fixes.
We are working on introducing seamless OTA updates integration into TurboStarter to enable you to push updates to your app without the need to submit a new version to the stores. Stay tuned for updates.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
file: ./src/content/docs/(core)/mobile/troubleshooting/installation.mdx
meta: {
"title": "Installation",
"description": "Find answers to common mobile installation issues."
}
## Cannot clone the repository
Issues with cloning the repository are usually caused by a Git misconfiguration on your local machine. The commands displayed in this guide use SSH: they will work only if you have set up your SSH keys in GitHub.
If you run into issues, [please make sure you follow this guide to set up your SSH key in Github.](https://docs.github.com/en/authentication/connecting-to-github-with-ssh)
If this also fails, please use HTTPS instead. You can find the commands on the repository's GitHub page under the "Clone" dropdown.
Please also make sure that the account that accepted the TurboStarter invite and the locally connected account are the same.
## Local database doesn't start
If you cannot run the local database container, it's likely you have not started [Docker](https://docs.docker.com/get-docker/) locally. Our local database requires Docker to be installed and running.
Please make sure you have installed Docker (or compatible software such as [Colima](https://github.com/abiosoft/colima) or [Orbstack](https://github.com/orbstack/orbstack)) and that it is running on your local machine.
Also, make sure that you have enough [memory and CPU allocated](https://docs.docker.com/engine/containers/resource_constraints/) to your Docker instance.
## I don't see my translations
If you don't see your translations appearing in the application, there are a few common causes:
1. Check that your translation `.json` files are properly formatted and located in the correct directory
2. Verify that the language codes in your configuration match your translation files
3. Enable debug mode (`debug: true`) in your i18next configuration to see detailed logs
[Read more about configuration for translations](/docs/mobile/internationalization#configuration)
## Expo cannot detect XCode
If you get the following error:
```bash
Expo cannot detect Xcode
Xcode must be fully installed before you can continue
```
This is usually caused by the Xcode Command Line Tools not pointing at your Xcode installation. You can fix this by running the following command:
```bash
sudo xcode-select -s /Applications/Xcode.app/Contents/Developer
```
If you still face the issue, please make sure you have the latest version of Xcode installed.
## "Module not found" error
This issue is usually caused by either a dependency installed in the wrong package or file system issues.
The most common cause is incorrect dependency installation. Here's how to fix it:
1. Clean the workspace:
```bash
pnpm clean
```
2. Reinstall the dependencies:
```bash
pnpm i
```
If you're adding new dependencies, make sure to install them in the correct package:
```bash
# For main app dependencies
pnpm --filter mobile add my-package
# For a specific package
pnpm --filter @turbostarter/ui add my-package
```
If the issue persists, please check the file system for any issues.
### Windows OneDrive
OneDrive can cause file system issues with Node.js projects due to its file syncing behavior. If you're using Windows with OneDrive, you have two options to resolve this:
1. Move your project to a location outside of OneDrive-synced folders (recommended)
2. Disable OneDrive sync specifically for your development folder
This prevents file watching and symlink issues that can occur when OneDrive tries to sync Node.js project files.
file: ./src/content/docs/(core)/mobile/troubleshooting/publishing.mdx
meta: {
"title": "Publishing",
"description": "Find answers to common mobile publishing issues."
}
## My app submission was rejected
If your app submission was rejected, you probably got an email with the reason. You'll need to fix the issues and upload a new build of your app to the store and send it for review again.
Make sure to follow the [guidelines](/docs/mobile/marketing) when submitting your app to ensure that everything is set up correctly.
## App Store screenshots don't match requirements
If your app submission was rejected due to screenshot issues, make sure:
1. Screenshots match the required dimensions for each device
2. Screenshots accurately represent your app's functionality
3. You have provided screenshots for all required device sizes
4. Screenshots don't contain device frames unless they match Apple's requirements
[See Apple's screenshot specifications](https://developer.apple.com/app-store/screenshots/)
## Version number conflicts
If you get version number conflicts when submitting:
1. Ensure your `app.json` version matches what's in the store
2. Increment the version number appropriately:
```json
{
  "version": "1.0.1",
  "android": { "versionCode": 2 },
  "ios": { "buildNumber": "2" }
}
```
3. Make sure both stores have unique version numbers
file: ./src/content/docs/(core)/web/ai/configuration.mdx
meta: {
"title": "Configuration",
"description": "Configure AI integration in your TurboStarter project."
}
To ensure scalability and avoid security vulnerabilities, AI requests are proxied by our [Hono backend](/docs/web/api/overview). This means you need to set up AI integration on both the client and server side.
We want to avoid exposing API keys directly to the browser, as this could lead to abuse of your key and generate unnecessary costs.
In this section, we'll explore the configuration for both sides to give you a smooth start.
## Server-side
On the backend, you need to set up two things: environment variables to configure the provider and the procedure to pass responses to the client. Let's go through it!
### Environment variables
You need to set the environment variables that correspond to the AI provider you want to use.
For example, for the OpenAI provider, you would need to set the following environment variables:
```dotenv
OPENAI_API_KEY=
```
However, if you want to use the Anthropic provider, you would need to set these environment variables:
```dotenv
ANTHROPIC_API_KEY=
```
You can find the list of all available providers in the [official documentation](https://sdk.vercel.ai/providers/ai-sdk-providers), along with the required variables that need to be set to ensure the integration works correctly.
### API endpoint
As we're proxying the requests, we need to register an [API endpoint](/docs/web/api/new-endpoint) that will be used to pass the responses to the client.
The steps will be the same as we described in the [API](/docs/web/api/new-endpoint) section. An example implementation could look like this:
```ts title="ai.router.ts"
import { convertToCoreMessages, streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { Hono } from "hono";
import { z } from "zod";

// `validate` is the starter's request validation middleware.

export const aiRouter = new Hono().post(
  "/chat",
  validate(
    "json",
    z.object({
      messages: z.array(
        z.object({
          role: z.enum(["user", "system", "data", "assistant"]),
          content: z.string(),
        }),
      ),
    }),
  ),
  (c) =>
    streamText({
      model: openai("gpt-4o"),
      messages: convertToCoreMessages(c.req.valid("json").messages),
    }).toDataStreamResponse(),
);
```
As you can see, we're defining which provider and specific model we want to use here.
We're also using [Streams API](https://developer.mozilla.org/en-US/docs/Web/API/Streams_API/Concepts), which allows us to pass the result to the user as soon as the model starts generating it, without needing to wait for the full response to be completed. This gives the user a sense of immediacy and makes the conversation more interactive.
## Client-side
To consume the server response, we can leverage the ready-to-use hooks provided by the [Vercel AI SDK](https://sdk.vercel.ai/docs/ai-sdk-ui/chatbot), such as the `useChat` hook:
```tsx title="page.tsx"
"use client";

import { useChat } from "ai/react";

const AI = () => {
  const { messages } = useChat({
    api: "/api/ai/chat",
  });

  return (
    <div>
      {messages.map((message) => (
        <p key={message.id}>{message.content}</p>
      ))}
    </div>
  );
};

export default AI;
```
By leveraging this integration, we can easily manage the state of the AI request and update the UI as soon as the response is ready.
TurboStarter ships with a ready-to-use implementation of AI chat, allowing you to see this solution in action. Feel free to reuse or modify it according to your needs.
file: ./src/content/docs/(core)/web/ai/overview.mdx
meta: {
"title": "Overview",
"description": "Get started with AI integration in your TurboStarter project."
}
For AI integration, TurboStarter leverages the [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction), which provides a comprehensive set of tools and utilities to help you build AI applications more easily and efficiently.
It's a simple yet powerful library that exposes a unified API for all major AI providers.
This allows you to build your AI application without worrying about the intricacies of each underlying provider's API.
You can learn more about the `ai` package in the [official documentation](https://sdk.vercel.ai/docs/introduction).
## Features
The starter comes with the most common AI features built-in, such as:
* **Chat**: Build chat applications with ease.
* **Streaming responses**: Stream responses from your AI provider in real-time.
* **Image generation**: Generate images using AI technology.
* **Embeddings**: Generate embeddings for your data.
* **Vector stores**: Store and query your embeddings efficiently.
You can easily compose your application using these building blocks or extend them to suit your specific needs.
## Providers
**TurboStarter relies on the AI SDK to provide support for various AI providers.**
This means you can easily switch between different AI providers without changing your code, as long as they are supported by the `ai` package.
You can find the list of supported providers in the [official documentation](https://sdk.vercel.ai/providers/ai-sdk-providers).
You can also add your own custom provider; it just needs to implement the common interface and provide all the necessary methods.
Read more about this in the [official guide](https://sdk.vercel.ai/providers/community-providers/custom-providers).
The configuration for each provider is straightforward and simple. We'll explore this in more detail in the [Configuration](/docs/web/ai/configuration) section.
file: ./src/content/docs/(core)/web/analytics/configuration.mdx
meta: {
"title": "Configuration",
"description": "Learn how to configure web analytics in TurboStarter."
}
The `@turbostarter/analytics-web` package offers a streamlined and flexible approach to tracking events in your TurboStarter web app using various analytics providers. It abstracts the complexities of different analytics services and provides a consistent interface for event tracking.
In this section, we'll guide you through the configuration process for each supported provider.
Note that the configuration is validated against a schema, so you'll see error messages in the console if anything is misconfigured.
## Providers
TurboStarter supports multiple analytics providers, each with its own unique configuration. Below, you'll find detailed information on how to set up and use each supported provider. Choose the one that best suits your needs and follow the instructions in the respective accordion section.
To use Vercel Analytics as your provider, you need to [create a Vercel account](https://vercel.com/) and [set up a project](https://vercel.com/docs/projects/overview).
Next, enable analytics in your Vercel project settings:
1. Navigate to the [Vercel dashboard](https://vercel.com/dashboard).
2. Select your project.
3. Go to the *Analytics* section.
4. Click *Enable* in the dialog.
Enabling Web Analytics will add new routes (scoped at `/_vercel/insights/*`) after your next deployment.
Also, make sure to activate the Vercel provider as your analytics provider:
```dotenv
NEXT_PUBLIC_ANALYTICS_PROVIDER="vercel"
```
To customize the provider, you can find its definition in the `packages/analytics/web/src/providers/vercel` directory.
For more information, please refer to the [Vercel Analytics documentation](https://vercel.com/docs/analytics/overview).

To use Google Analytics as your analytics provider, you need to [create a Google Analytics account](https://analytics.google.com/) and [set up a property](https://support.google.com/analytics/answer/9304153).
Next, add a data stream in your Google Analytics account settings:
1. Navigate to [Google Analytics](https://analytics.google.com/).
2. In the *Admin* section, under *Data collection and modification*, click on *Data Streams*.
3. Click *Add stream*.
4. Select *Web* as the platform.
5. Enter the required details for the stream (at minimum, provide a name and website URL).
6. Click *Create stream*.
After creating the stream, you'll need two pieces of information:
1. Your [Measurement ID](https://support.google.com/analytics/answer/12270356) (it should look like `G-XXXXXXXXXX`):

2. Your [Measurement Protocol API secret](https://support.google.com/analytics/answer/9814495):

Set these values in your `.env.local` file in the `apps/web` directory and in your deployment environment:
```dotenv
NEXT_PUBLIC_ANALYTICS_GOOGLE_MEASUREMENT_ID="your-measurement-id"
GOOGLE_ANALYTICS_SECRET="your-measurement-protocol-api-secret"
```
Also, make sure to activate the Google Analytics provider as your analytics provider:
```dotenv
NEXT_PUBLIC_ANALYTICS_PROVIDER="google-analytics"
```
To customize the provider, you can find its definition in the `packages/analytics/web/src/providers/google-analytics` directory.
For more information, please refer to the [Google Analytics documentation](https://developers.google.com/analytics).

To use PostHog as your analytics provider, you need to configure a PostHog instance. You can use the [Cloud](https://app.posthog.com/signup) instance by creating an account, or [self-host](https://posthog.com/docs/self-host) it.
Then, create a project and, based on your [project settings](https://app.posthog.com/project/settings), fill the following environment variables in your `.env.local` file in `apps/web` directory and your deployment environment:
```dotenv
NEXT_PUBLIC_POSTHOG_KEY="your-posthog-api-key"
NEXT_PUBLIC_POSTHOG_HOST="your-posthog-instance-host"
```
Also, make sure to set PostHog as the active analytics provider:
```dotenv
NEXT_PUBLIC_ANALYTICS_PROVIDER="posthog"
```
To customize the provider, you can find its definition in the `packages/analytics/web/src/providers/posthog` directory.
For more information, please refer to the [PostHog documentation](https://posthog.com/docs).

To use Open Panel as your analytics provider, you need to [create an account](https://openpanel.dev/) and [set up a client for your project](https://docs.openpanel.dev/docs).
Then, set your client ID and secret in your `.env.local` file in the `apps/web` directory and in your deployment environment:
```dotenv
NEXT_PUBLIC_OPEN_PANEL_CLIENT_ID="your-client-id"
OPEN_PANEL_CLIENT_SECRET="your-client-secret"
```
Also, make sure to set Open Panel as the active analytics provider:
```dotenv
NEXT_PUBLIC_ANALYTICS_PROVIDER="open-panel"
```
To customize the provider, you can find its definition in the `packages/analytics/web/src/providers/open-panel` directory.
For more information, please refer to the [Open Panel documentation](https://docs.openpanel.dev/).

## Client-side context
To enable tracking events, capturing page views and other analytics features **on the client-side**, you need to wrap your app with the `Provider` component that's implemented by every provider and available through the `@turbostarter/analytics-web` package:
```tsx title="providers.tsx"
// [!code word:AnalyticsProvider]
import { memo } from "react";

import { Provider as AnalyticsProvider } from "@turbostarter/analytics-web";

interface ProvidersProps {
  readonly children: React.ReactNode;
}

export const Providers = memo(({ children }: ProvidersProps) => {
  return <AnalyticsProvider>{children}</AnalyticsProvider>;
});
Providers.displayName = "Providers";
```
By implementing this setup, you ensure that all analytics events are properly tracked from your client-side code. This configuration allows you to safely utilize the [Analytics API](/docs/web/analytics/tracking) within your client components, enabling comprehensive event tracking and data collection.
file: ./src/content/docs/(core)/web/analytics/overview.mdx
meta: {
"title": "Overview",
"description": "Get started with web analytics in TurboStarter.",
"index": true
}
TurboStarter comes with built-in analytics support for multiple providers as well as a unified API for tracking events. This API enables you to easily and consistently track user behavior and app usage across your SaaS application.
## Providers
The starter implements multiple providers for managing analytics. To learn more about each provider and how to configure them, see their respective sections:
All configuration and setup is built-in with a unified API, allowing you to switch between providers without changing your code. You can even introduce your own provider without breaking any tracking-related logic.
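To make this concrete, here's a rough sketch of how such a unified API can delegate to the active provider — the registry shape and stub implementations below are illustrative assumptions, not the actual package internals:

```typescript
// Illustrative sketch only — the real registry lives inside the
// @turbostarter/analytics-web package.
type TrackFunction = (
  event: string,
  data?: Record<string, string | number | boolean>,
) => void;

// Each provider contributes its own implementation.
const providers: Record<string, TrackFunction> = {
  posthog: () => {
    /* forward to the PostHog SDK */
  },
  "google-analytics": () => {
    /* forward to gtag */
  },
};

// The active provider is resolved once from configuration, so
// application code only ever calls the unified `track` function.
export const createTrack = (provider: string): TrackFunction => {
  const impl = providers[provider];

  if (!impl) {
    throw new Error(`Unknown analytics provider: ${provider}`);
  }

  return impl;
};
```

Switching `NEXT_PUBLIC_ANALYTICS_PROVIDER` then only changes which entry is resolved; every call site stays untouched.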
In the following sections, we'll cover how to set up each provider and how to track events in your application.
file: ./src/content/docs/(core)/web/analytics/tracking.mdx
meta: {
"title": "Tracking events",
"description": "Learn how to track events in your TurboStarter web app."
}
The implementation strategy for each analytics provider varies depending on whether it's designed for client-side or server-side use. We'll explore both approaches, as they are crucial for ensuring accurate and comprehensive analytics data in your web SaaS application.
## Client-side tracking
The client strategy for tracking events, which every provider must implement, is straightforward:
```ts
export type AllowedPropertyValues = string | number | boolean;

type TrackFunction = (
  event: string,
  data?: Record<string, AllowedPropertyValues>,
) => void;

export interface AnalyticsProviderClientStrategy {
  Provider: ({ children }: { children: React.ReactNode }) => React.ReactNode;
  track: TrackFunction;
}
```
You don't need to worry much about this implementation, as all the providers are already configured for you. However, it's useful to be aware of this structure if you plan to add your own custom provider.
As shown above, each provider must supply two key elements:
1. `Provider` - a component that [wraps your app](/docs/web/analytics/configuration#client-side-context).
2. `track` - a function responsible for sending event data to the provider.
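As an illustration of that contract, a hypothetical custom provider's `track` function could be as small as this — the in-memory buffer is purely for demonstration; a real provider would forward events to its SDK:

```typescript
type AllowedPropertyValues = string | number | boolean;
type EventData = Record<string, AllowedPropertyValues>;

// Demonstration-only buffer — a real provider would send events
// to its analytics backend instead of storing them locally.
export const capturedEvents: { event: string; data?: EventData }[] = [];

export const track = (event: string, data?: EventData): void => {
  capturedEvents.push({ event, data });
};
```

The matching `Provider` component would typically just initialize the SDK and render `children`.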
To track an event, you simply need to invoke the `track` method, passing the event name and an optional data object:
```tsx
import { track } from "@turbostarter/analytics-web";
export const MyComponent = () => {
  return (
    <button onClick={() => track("button.click")}>Click me!</button>
  );
};
```
## Server-side tracking
The server strategy for tracking events, which every provider must implement, is even simpler:
```ts
export interface AnalyticsProviderServerStrategy {
  track: TrackFunction;
}
```
You don't need to worry much about this implementation, as all the providers are already configured for you. However, it's useful to be aware of this structure if you plan to add your own custom provider.
This server-side strategy allows you to track events outside of the browser environment, which is particularly useful for scenarios involving server actions or React Server Components.
To track an event on the server side, simply call the `track` method, providing the event name and an optional data object:
```tsx
// [!code word:server]
import { track } from "@turbostarter/analytics-web/server";
track("button.click", {
  country: "US",
  region: "California",
});
```
Make sure to use the correct import for the `track` function. We're using the same name for both client and server tracking, but they are different functions. For server-side, just add `/server` to the import path (`@turbostarter/analytics-web/server`).
```tsx
import { track } from "@turbostarter/analytics-web";
```
```tsx
// [!code word:server]
import { track } from "@turbostarter/analytics-web/server";
```
Congratulations! You've now mastered event tracking in your TurboStarter web app. With this knowledge, you're well-equipped to analyze user behavior and gain valuable insights into your application's usage patterns. Happy analyzing!
file: ./src/content/docs/(core)/web/api/client.mdx
meta: {
"title": "Using API client",
"description": "How to use API client to interact with the API."
}
In Next.js, you can access the API client in two ways:
* **server-side**: in server components and API routes
* **client-side**: in client components
When you create a new page and want to fetch data, you have flexibility in where to make the API calls. Server Components are great for initial data loading since the fetching happens during server-side rendering, eliminating an extra client-server round trip. The data is then efficiently streamed to the client.
By default in Next.js, every component is a Server Component. You can opt into client-side rendering by adding the `use client` directive at the top of a component file. Client Components are useful when you need interactive features or want to fetch data based on user interactions. While they're initially server-rendered, they're also hydrated and rendered on the client, allowing you to make API calls directly from the browser.
Let's explore both approaches to understand their differences and use cases.
## Server-side
We're creating a server-side API client inside the `apps/web/src/lib/api/server.ts` file. The client automatically handles passing authentication headers from the user's session to secure API endpoints.
It's pre-configured with all the necessary setup, so you can start using it right away without any additional configuration.
Then, there is nothing simpler than calling the API from your server component:
```tsx title="page.tsx"
import { api } from "~/lib/api/server";
export default async function MyServerComponent() {
  const response = await api.posts.$get();
  const posts = await response.json();

  /* do something with the data... */

  return <pre>{JSON.stringify(posts)}</pre>;
}
```
## Client-side
We're creating a separate client-side API client in `apps/web/src/lib/api/client.tsx` file. It's a simple wrapper around the [@tanstack/react-query](https://tanstack.com/query/latest/docs/framework/react/overview) that fetches or mutates data from the API.
It also requires wrapping your app in an `ApiProvider` component to provide the query client to the rest of the app:
```tsx title="layout.tsx"
import { ApiProvider } from "~/lib/api/client";

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        <ApiProvider>{children}</ApiProvider>
      </body>
    </html>
  );
}
```
Of course, it's all already configured for you, so you just need to start using `api` in your client components:
```tsx title="page.tsx"
"use client";

import { useQuery } from "@tanstack/react-query";

import { api } from "~/lib/api/client";

export default function MyClientComponent() {
  const { data: posts, isLoading } = useQuery({
    queryKey: ["posts"],
    queryFn: async () => {
      const response = await api.posts.$get();

      if (!response.ok) {
        throw new Error("Failed to fetch posts!");
      }

      return response.json();
    },
  });

  if (isLoading) {
    return <div>Loading...</div>;
  }

  /* do something with the data... */

  return <pre>{JSON.stringify(posts)}</pre>;
}
```
Inside `apps/web/src/lib/api/utils.ts` we call a function to get the base URL of your API, so make sure it's set correctly (especially in production) and that your API endpoint corresponds with the name there.
```tsx title="utils.ts"
export const getBaseUrl = () => {
  if (typeof window !== "undefined") return window.location.origin;
  if (env.NEXT_PUBLIC_URL) return env.NEXT_PUBLIC_URL;
  if (env.VERCEL_URL) return `https://${env.VERCEL_URL}`;
  // eslint-disable-next-line no-restricted-properties, turbo/no-undeclared-env-vars
  return `http://localhost:${process.env.PORT ?? 3000}`;
};
```
As you can see, we mostly rely on [environment variables](/docs/web/configuration/environment-variables) to resolve it, so there shouldn't be any issues, but just in case, it's good to know where to find it.
## Handling responses
As you can see in the examples above, the [Hono RPC](https://hono.dev/docs/guides/rpc) client returns a plain `Response` object, which you can use to get the data or handle errors. However, implementing this handling in every query or mutation can be tedious and will introduce unnecessary boilerplate in your codebase.
That's why we've developed the `handle` function that unwraps the response for you, handles errors, and returns the data in a consistent format. You can safely use it with any procedure from the API client:
```tsx
// [!code word:handle]
import { handle } from "@turbostarter/api/utils";
import { api } from "~/lib/api/server";
export default async function MyServerComponent() {
  const posts = await handle(api.posts.$get)();

  /* do something with the data... */

  return <pre>{JSON.stringify(posts)}</pre>;
}
```
```tsx
// [!code word:handle]
"use client";

import { useQuery } from "@tanstack/react-query";

import { handle } from "@turbostarter/api/utils";
import { api } from "~/lib/api/client";

export default function MyClientComponent() {
  const { data: posts, isLoading } = useQuery({
    queryKey: ["posts"],
    queryFn: handle(api.posts.$get),
  });

  if (isLoading) {
    return <div>Loading...</div>;
  }

  /* do something with the data... */

  return <pre>{JSON.stringify(posts)}</pre>;
}
```
With this approach, you can focus on the business logic instead of repeatedly writing code to handle API responses, making your codebase more readable and maintainable.
The same error handling and response unwrapping benefits apply whether you're building web, mobile, or extension interfaces - allowing you to keep your data fetching logic consistent across all platforms.
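For intuition, a `handle`-style wrapper boils down to currying the procedure, checking `response.ok`, and unwrapping the JSON body. The sketch below is a simplified assumption, not the actual `@turbostarter/api/utils` implementation (which preserves precise response types inferred from the Hono client):

```typescript
// Simplified model of the response a Hono RPC procedure resolves to.
type JsonResponse = {
  ok: boolean;
  status: number;
  json: () => Promise<unknown>;
};

// Wraps a procedure so callers get unwrapped data or a thrown error.
export const handle =
  <TArgs extends unknown[]>(
    procedure: (...args: TArgs) => Promise<JsonResponse>,
  ) =>
  async (...args: TArgs): Promise<unknown> => {
    const response = await procedure(...args);

    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }

    return response.json();
  };
```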
file: ./src/content/docs/(core)/web/api/internationalization.mdx
meta: {
"title": "Internationalization",
"description": "Learn how to localize and translate your API."
}
Since TurboStarter provides fully featured [internationalization](/docs/web/internationalization/overview) out of the box, you can easily localize not only the frontend but also the API layer. This can be useful when you need to fetch localized data from the database or send emails in different languages.
Let's explore the possibilities of this feature.
## Request-based localization
To get the locale for the current request, you can leverage the `localize` middleware:
```ts title="email.router.ts"
const emailRouter = new Hono().get("/", localize, (c) => {
  const locale = c.var.locale;

  // do something with the locale
});
```
Inside it, we're setting the `locale` variable in the current request context, making it available to the procedure.
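The locale resolution such a middleware performs can be sketched as a plain function — the supported locales and the `Accept-Language` parsing below are assumptions for illustration, not the actual implementation:

```typescript
// Hypothetical supported locales — adjust to your i18n configuration.
const SUPPORTED_LOCALES = ["en", "es", "de"] as const;
const DEFAULT_LOCALE = "en";

// Picks the first supported language from an Accept-Language header,
// e.g. "es-MX,es;q=0.9,en;q=0.8" resolves to "es".
export const resolveLocale = (acceptLanguage: string | null): string => {
  if (!acceptLanguage) return DEFAULT_LOCALE;

  const candidates = acceptLanguage
    .split(",")
    .map((part) => part.split(";")[0].trim().toLowerCase());

  for (const candidate of candidates) {
    const base = candidate.split("-")[0];
    const match = SUPPORTED_LOCALES.find((locale) => locale === base);
    if (match) return match;
  }

  return DEFAULT_LOCALE;
};
```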
## Error handling
When handling errors in an internationalized API, you'll want to ensure error messages are properly translated for your users. TurboStarter provides built-in support for localizing error messages using error codes and a special `onError` hook.
That's why it's recommended to use error codes instead of direct messages in your throw statements:
```ts
throw new HttpException(HttpStatusCode.UNAUTHORIZED, {
  code: "auth:error.unauthorized",
  /* optional */
  message: "You are not authorized to access this resource.",
});
```
The error code will then be used to retrieve the localized message, and the returned response from your API will look like this:
```json
{
  "code": "auth:error.unauthorized",
  /* localized based on request's locale */
  "message": "You are not authorized to access this resource.",
  "path": "/api/auth/login",
  "status": 401,
  "timestamp": "2024-01-01T00:00:00.000Z"
}
```
Then, you can either use the returned code to get the localized message in your frontend, or simply use the returned message as is.
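On the frontend, a minimal lookup over the returned code might look like this — the dictionaries are placeholders; in a real app these entries would come from your translation resources:

```typescript
// Placeholder dictionaries — in practice these come from i18n resources.
const messages: Record<string, Record<string, string>> = {
  en: {
    "auth:error.unauthorized":
      "You are not authorized to access this resource.",
  },
  es: {
    "auth:error.unauthorized":
      "No tienes autorización para acceder a este recurso.",
  },
};

// Falls back to English, then to the raw code, so an unknown
// code never breaks the UI.
export const localizeError = (code: string, locale: string): string =>
  messages[locale]?.[code] ?? messages.en[code] ?? code;
```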
file: ./src/content/docs/(core)/web/api/mutations.mdx
meta: {
"title": "Mutations",
"description": "Learn how to mutate data on the server."
}
As we saw in [adding new endpoint](/docs/web/api/new-endpoint#maybe-mutation), mutations allow us to modify data on the server, like creating, updating, or deleting resources. They can be defined similarly to queries using our API client.
Just like queries, mutations can be executed either server-side or client-side depending on your needs. Let's explore both approaches.
## Server actions
Next.js provides [server actions](https://nextjs.org/docs/app/building-your-application/data-fetching/server-actions-and-mutations) as a powerful way to handle mutations directly on the server. They're particularly well-suited for form submissions and other data modifications.
Using our `api` client with server actions is straightforward - you simply call the API function on the server.
Here's an example of how you can define an action to create a new post:
```tsx
// [!code word:handle]
"use server";

import { revalidatePath } from "next/cache";

import { handle } from "@turbostarter/api/utils";
import { api } from "~/lib/api/server";

export async function createPost(post: PostInput) {
  try {
    await handle(api.posts.$post)(post);
  } catch (error) {
    onError(error);
  }

  revalidatePath("/posts");
}
```
```tsx
"use server";

import { revalidatePath } from "next/cache";

import { api } from "~/lib/api/server";

export async function createPost(post: PostInput) {
  const response = await api.posts.$post(post);

  if (!response.ok) {
    return { error: "Failed to create post" };
  }

  revalidatePath("/posts");
}
```
In the above example we're also using `revalidatePath` to revalidate the `/posts` path and fetch the updated list of posts.
## useMutation hook
On the other hand, if you want to perform a mutation in the client-side, you can use the `useMutation` hook that comes straight from the integration with [Tanstack Query](https://tanstack.com/query).
```tsx
// [!code word:handle]
import { useMutation, useQueryClient } from "@tanstack/react-query";

import { handle } from "@turbostarter/api/utils";
import { api } from "~/lib/api/react";

export function CreatePost() {
  const queryClient = useQueryClient();

  const { mutate } = useMutation({
    mutationFn: handle(api.posts.$post),
    onSuccess: () => {
      toast.success("Post created successfully!");
      queryClient.invalidateQueries({ queryKey: ["posts"] });
    },
  });

  return <button onClick={() => mutate(/* post data */)}>Create post</button>;
}
```
```tsx
import { useMutation, useQueryClient } from "@tanstack/react-query";

import { api } from "~/lib/api/react";

export function CreatePost() {
  const queryClient = useQueryClient();

  const { mutate } = useMutation({
    mutationFn: async (post: PostInput) => {
      const response = await api.posts.$post(post);

      if (!response.ok) {
        throw new Error("Failed to create post!");
      }
    },
    onSuccess: () => {
      toast.success("Post created successfully!");
      queryClient.invalidateQueries({ queryKey: ["posts"] });
    },
  });

  return <button onClick={() => mutate(/* post data */)}>Create post</button>;
}
```
file: ./src/content/docs/(core)/web/api/new-endpoint.mdx
meta: {
"title": "Adding new endpoint",
"description": "How to add new endpoint to the API."
}
To define a new API endpoint, you can either extend an existing entity (e.g. add new customer route) or create a new, separate module.
## Create new module
To create a new module, add a new folder inside the `modules` directory, for example `modules/posts`.
Then you would need to create a router declaration for this module. We're following a convention with the filename suffixes, so you would need to create a file named `posts.router.ts` in the `modules/posts` folder.
```typescript title="modules/posts/posts.router.ts"
import { Hono } from "hono";

import { validate } from "../../middleware";

export const postsRouter = new Hono().get(
  "/",
  validate("query", filtersSchema),
  (c) => getAllPosts(c.req.valid("query")),
);
```
As you can see we're implementing a `.get` method without any additional middlewares for the router. This is a simple way to define a new GET endpoint.
Also, we're using a [zod](https://zod.dev/) validator to ensure that input passed to the endpoint is correct.
### Maybe mutation?
The same way you can define a mutation for the new entity, just by changing the `get` to `post`:
```ts title="modules/posts/posts.router.ts"
// [!code word:.post]
export const postsRouter = new Hono().post(
  "/",
  enforceAuth,
  validate("json", postSchema),
  (c) => createPost(c.req.valid("json")),
);
```
Hono supports all [HTTP methods](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods), so you can define a new endpoint for any method you need (e.g. `put`, `delete`, etc.).
The `enforceAuth` middleware ensures that only authenticated users can access the endpoint, while the zod validator checks if the input data matches the expected schema. This combination provides both authentication and data validation in a single, clean setup.
[Read more about protected routes](/docs/web/api/protected-routes).
## Implement logic
Then you need to create a controller for this module. This is where the logic happens; e.g. for the `GET /` endpoint we need a `getAllPosts` function that fetches posts from the database.
```typescript title="modules/posts/posts.controller.ts"
import { db } from "@turbostarter/db/server";
import { posts } from "@turbostarter/db/schema";

export const getAllPosts = (filters: Filters) => {
  return db.select().from(posts).where(/* your filter logic here */);
};
```
## Register router
To make the module and its endpoints available in the API you need to register a router for this module in the `index.ts` file:
```ts title="index.ts"
import { postsRouter } from "./modules/posts/posts.router";

const appRouter = new Hono()
  .basePath("/api")
  .route("/posts", postsRouter)
  /* other routers from your app logic */
  .onError(onError);

type AppRouter = typeof appRouter;

export type { AppRouter };
export { appRouter };
```
The `basePath` method sets a prefix for all routes in this router. While optional, using it helps organize API endpoints. This modular approach makes the API structure clearer and easier to maintain.
That's it! You've just created a new API endpoint - it's now available at `/api/posts`.
By exporting the `AppRouter` type you get fully type-safe RPC calls in the
client. This matters because, without generating a huge amount of code, the
frontend stays fully type-safe: it helps you avoid passing incorrect data to a
procedure and lets you consume returned types without defining them by hand.
file: ./src/content/docs/(core)/web/api/overview.mdx
meta: {
"title": "Overview",
"description": "Get started with the API.",
"index": true
}
TurboStarter is designed to be a scalable and production-ready full-stack starter kit. One of its core features is a dedicated and extensible API layer. To enable this in a type-safe manner, we chose [Hono](https://hono.dev) as the API server and client library.
Hono is a small, simple, and ultrafast web framework that gives you a way to
define your API endpoints with full type safety. It provides built-in
middleware for common needs like validation, caching, and CORS. It also
includes an [RPC client](https://hono.dev/docs/guides/rpc) for making
type-safe function calls from the frontend. Being edge-first, it's optimized
for serverless environments and offers excellent performance.
All API endpoints and their resolvers are defined in the `packages/api/` package. Here you will find a `modules` folder that contains the different feature modules of the API. Each module has its own folder and exports all its resolvers.
For each module, we create a separate Hono route in the `packages/api/index.ts` file and aggregate all sub-routers into one main router.
The API is then exposed as a route handler that will be provided as a Next.js API route:
```ts title="apps/web/src/app/api/[...route]/route.ts"
import { handle } from "hono/vercel";

import { appRouter } from "@turbostarter/api";

const handler = handle(appRouter);

export { handler as GET, handler as POST };
```
The API is part of the serverless Next.js web app. This means you **must**
deploy it to use the API in other apps (e.g. the mobile app or browser
extension), even if you don't need the web app itself. It's very simple: you
just deploy the Next.js app, and the API comes along as part of it.
Learn more about API in the following sections:
file: ./src/content/docs/(core)/web/api/protected-routes.mdx
meta: {
"title": "Protected routes",
"description": "Learn how to protect your API routes."
}
Hono has built-in support for [middlewares](https://hono.dev/docs/guides/middleware), which are functions that can be used to modify the context or execute code before or after a route handler is executed.
That's how we can secure our API endpoints from unauthorized access. Below are some examples of how you can leverage middlewares to protect your API routes.
## Authenticated access
After validating the user's authentication status, we store their data in the context using [Hono's built-in context](https://hono.dev/docs/api/context). This allows us to access the user's information in subsequent middleware and procedures without having to re-validate the session.
Here's an example of middleware that validates whether the user is currently logged in and stores their data in the context:
```ts title="middleware.ts"
/**
 * Reusable middleware that enforces users are logged in before running the
 * procedure
 */
export const enforceAuth = createMiddleware<{
  Variables: {
    user: User;
  };
}>(async (c, next) => {
  const session = await auth.api.getSession({ headers: c.req.raw.headers });
  const user = session?.user ?? null;

  if (!user) {
    throw new HTTPException(HttpStatusCode.UNAUTHORIZED, {
      message: "You need to be logged in to access this feature!",
    });
  }

  c.set("user", user);

  await next();
});
```
Then we can use our defined middleware to protect endpoints by adding it before the route handler:
```ts title="billing.router.ts"
export const billingRouter = new Hono().get(
  "/customer",
  enforceAuth,
  async (c) => c.json(await getCustomerByUserId(c.var.user.id)),
);
```
## Feature-based access
When developing your API, you may want to restrict access to certain features based on the user's current subscription plan (e.g. only users with the "Pro" plan can access teams).
You can achieve this by creating a middleware that will check if the user has access to the feature and then pass the execution to the next middleware or procedure:
```ts title="middleware.ts"
/**
 * Reusable middleware that enforces users have access to a feature
 * before running the procedure
 */
export const enforceFeatureAvailable = (feature: Feature) =>
  createMiddleware<{
    Variables: {
      user: User;
    };
  }>(async (c, next) => {
    const { data: customer } = await getCustomerById(c.var.user.id);
    const hasFeature = isFeatureAvailable(customer, feature);

    if (!hasFeature) {
      throw new HTTPException(HttpStatusCode.PAYMENT_REQUIRED, {
        message: "Upgrade your plan to access this feature!",
      });
    }

    await next();
  });
```
Use it within your procedure the same way as we did with `enforceAuth` middleware:
```ts title="teams.router.ts"
export const teamsRouter = new Hono().get(
  "/",
  enforceAuth,
  enforceFeatureAvailable(FEATURES.PRO.TEAMS),
  (c) => c.json(...),
);
```
These are just examples of what you can achieve with Hono middlewares. You can use them to add any kind of logic to your API (e.g. [logging](https://hono.dev/docs/middleware/builtin/logger), [caching](https://hono.dev/docs/middleware/builtin/cache), etc.)
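Stripped of Hono's specifics, the control flow behind such middleware chains can be modeled in a few lines. This is a conceptual sketch with simplified stand-in types, not Hono's actual internals:

```typescript
// Simplified stand-ins for Hono's Context and Next types.
type Context = { vars: Record<string, unknown> };
type Next = () => Promise<void>;
type Middleware = (c: Context, next: Next) => Promise<void>;

// Runs each middleware in order; any of them can short-circuit by
// throwing (as enforceAuth does) instead of calling next().
export const compose =
  (middlewares: Middleware[], handler: (c: Context) => Promise<void>) =>
  async (c: Context): Promise<void> => {
    const dispatch = async (i: number): Promise<void> => {
      const middleware = middlewares[i];
      if (!middleware) return handler(c);
      await middleware(c, () => dispatch(i + 1));
    };

    await dispatch(0);
  };
```

Each middleware sees the shared context before the handler runs, which is exactly how `enforceAuth` makes `c.var.user` available to `enforceFeatureAvailable` and the route handler.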
file: ./src/content/docs/(core)/web/auth/configuration.mdx
meta: {
"title": "Configuration",
"description": "Configure authentication for your application."
}
TurboStarter supports multiple different authentication methods:
* **Password** - the traditional email/password method
* **Magic Link** - passwordless email link authentication
* **Passkey** - passkeys as an alternative to passwords
* **Anonymous** - guest mode for unauthenticated users
* **OAuth** - OAuth providers; Google and GitHub are set up by default
All authentication methods are enabled by default, but you can easily customize them to your needs. You can enable or disable any method, and configure them according to your requirements.
Remember that you can mix and match these methods or add new ones - for
example, you can have both password and magic link authentication enabled at
the same time, giving your users more flexibility in how they authenticate.
Authentication configuration can be customized through a simple configuration file. The following sections explain the available options and how to configure each authentication method based on your requirements.
## API
The **server-side** authentication configuration is set at `packages/auth/src/server.ts`. It configures the [Better Auth](https://better-auth.com) package to use the correct providers and settings:
```tsx title="server.ts"
export const auth = betterAuth({
  emailAndPassword: {
    enabled: true,
    requireEmailVerification: true,
    sendResetPassword: () => {},
  },
  emailVerification: {
    sendOnSignUp: true,
    autoSignInAfterVerification: true,
    sendVerificationEmail: () => {},
  },
  database: drizzleAdapter(db, {
    provider: "pg",
    usePlural: true,
    schema,
  }),
  plugins: [
    magicLink({
      sendMagicLink: () => {},
    }),
    passkey(),
    anonymous(),
    expo(),
    nextCookies(),
  ],
  socialProviders: {
    [SOCIAL_PROVIDER.GITHUB]: {
      clientId: env.GITHUB_CLIENT_ID,
      clientSecret: env.GITHUB_CLIENT_SECRET,
    },
    [SOCIAL_PROVIDER.GOOGLE]: {
      clientId: env.GOOGLE_CLIENT_ID,
      clientSecret: env.GOOGLE_CLIENT_SECRET,
    },
  },
  /* other configuration options */
});
```
The configuration is validated against Better Auth's schema at runtime, providing immediate feedback if any settings are incorrect or insecure. This validation ensures your authentication setup remains robust and properly configured.
All authentication routes and handlers are centralized within the [Hono API](/docs/web/api/overview), giving you a single source of truth and complete control over the authentication flow. This centralization makes it easier to maintain, debug, and customize the authentication process as needed.
[Read more about it in the official documentation](https://www.better-auth.com/docs/basic-usage).
## UI
We have separate configuration that determines what is displayed to your users in the **UI**. It's set at `apps/web/config/auth.ts`.
The recommendation is to **not update this directly** - instead, please define the environment variables and override the default behavior.
```ts title="apps/web/config/auth.ts"
import { SOCIAL_PROVIDER, authConfigSchema } from "@turbostarter/auth";

import { env } from "~/lib/env";

import type { AuthConfig } from "@turbostarter/auth";

export const authConfig = authConfigSchema.parse({
  providers: {
    password: env.NEXT_PUBLIC_AUTH_PASSWORD,
    magicLink: env.NEXT_PUBLIC_AUTH_MAGIC_LINK,
    passkey: env.NEXT_PUBLIC_AUTH_PASSKEY,
    anonymous: env.NEXT_PUBLIC_AUTH_ANONYMOUS,
    oAuth: [SOCIAL_PROVIDER.GOOGLE, SOCIAL_PROVIDER.GITHUB],
  },
}) satisfies AuthConfig;
```
The configuration is also validated using the Zod schema, so if something is off, you'll see the errors.
For example, if you want to switch from password to magic link, you'd change the following environment variables:
```dotenv title=".env.local"
NEXT_PUBLIC_AUTH_PASSWORD=false
NEXT_PUBLIC_AUTH_MAGIC_LINK=true
```
To display third-party providers in the UI, you need to set the `oAuth` array to include the providers you want to display. The default is Google and GitHub:
```tsx title="apps/web/config/auth.ts"
providers: {
...
oAuth: [SOCIAL_PROVIDER.GOOGLE, SOCIAL_PROVIDER.GITHUB],
...
},
```
## Third party providers
To enable third-party authentication providers, you'll need to:
1. Set up an OAuth application in the provider's developer console (like the Google Cloud Console, GitHub Developer Settings, or any other provider you want to use)
2. Configure the corresponding environment variables in your TurboStarter application
Each OAuth provider requires its own set of credentials and environment variables. Please refer to the [Better Auth documentation](https://better-auth.com/docs/concepts/oauth) for detailed setup instructions for each supported provider.
Make sure to set both development and production environment variables
appropriately. Your OAuth provider may require different callback URLs for
each environment.
file: ./src/content/docs/(core)/web/auth/flow.mdx
meta: {
"title": "User flow",
"description": "Discover the authentication flow in Turbostarter."
}
TurboStarter ships with a fully functional authentication system. Most of the views and components are preconfigured and easily customizable to your needs.
Here you will find a quick walkthrough of the authentication flow.
## Sign up
The sign-up page is where users can create an account. They need to provide their email address and password.

Once successful, users are asked to confirm their email address. This is enabled by default and, for security reasons, cannot be disabled.
Make sure to configure the [email provider](/docs/web/emails/configuration) together with the [auth hooks](/docs/web/emails/sending#authentication-emails) to be able to send emails from your app.

## Sign in
The sign-in page is where users can log in to their account. They need to provide their email address and password, use magic link (if enabled) or third-party providers.

## Sign out
The sign out button is located in the user menu.

## Forgot password
The forgot password page is where users can reset their password. They need to provide their email address and follow the instructions in the email.

The reset password page is where users land from the forgot-password email. There they can reset their password by providing a new password and confirming it.

file: ./src/content/docs/(core)/web/auth/oauth.mdx
meta: {
"title": "OAuth",
"description": "Get started with social authentication."
}
Better Auth supports almost **15** (!) different [OAuth providers](https://www.better-auth.com/docs/concepts/oauth). They can be easily configured and enabled in the kit without any additional configuration needed.
TurboStarter provides you with all the configuration required to handle OAuth provider responses in your app:
* redirects
* middleware
* confirmation API routes
You just need to configure one of the below providers on their side and set correct credentials as environment variables in your TurboStarter app.

Third-party providers need to be configured, managed and enabled fully on the provider's side. TurboStarter just needs the correct credentials to be set as environment variables in your app and passed to the [authentication API configuration](/docs/web/auth/configuration#api).
To enable OAuth providers in your TurboStarter app, you need to:
1. Set up an OAuth application in the provider's developer console (like Google Cloud Console, GitHub Developer Settings or any other provider you want to use)
2. Configure the provider's credentials as environment variables in your app. For example, for Google OAuth:
```dotenv title="packages/db/.env.local"
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
```
Then, pass it to the authentication configuration in `packages/auth/src/server.ts`:
```tsx title="server.ts"
export const auth = betterAuth({
...
socialProviders: {
[SOCIAL_PROVIDER.GOOGLE]: {
clientId: env.GOOGLE_CLIENT_ID,
clientSecret: env.GOOGLE_CLIENT_SECRET,
},
},
...
});
```
Better Auth provides a [generic OAuth plugin](https://www.better-auth.com/docs/plugins/generic-oauth) that allows you to add any OAuth provider to your app.
It supports both OAuth 2.0 and OpenID Connect (OIDC) flows, allowing you to easily add social login or custom OAuth authentication to your application.
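Wiring up a custom provider through that plugin looks roughly like this - a hedged sketch based on the plugin's documented options, where the `providerId`, `discoveryUrl`, and environment variable names are illustrative:

```ts
import { betterAuth } from "better-auth";
import { genericOAuth } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    genericOAuth({
      config: [
        {
          // Illustrative values - replace with your provider's details.
          providerId: "my-idp",
          clientId: process.env.MY_IDP_CLIENT_ID!,
          clientSecret: process.env.MY_IDP_CLIENT_SECRET!,
          discoveryUrl:
            "https://idp.example.com/.well-known/openid-configuration",
        },
      ],
    }),
  ],
});
```

With a `discoveryUrl`, the plugin can resolve the authorization, token, and userinfo endpoints automatically via OIDC discovery.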
file: ./src/content/docs/(core)/web/auth/overview.mdx
meta: {
"title": "Overview",
"description": "Get started with authentication.",
"index": true
}
TurboStarter uses [Better Auth](https://better-auth.com) to handle authentication. It's a secure, production-ready authentication solution that integrates seamlessly with many frameworks and provides enterprise-grade security out of the box.
One of the core principles of TurboStarter is to do things **as simple as possible**, and to make everything **as performant as possible**.
Better Auth provides an excellent developer experience with minimal configuration required, while maintaining enterprise-grade security standards. Its framework-agnostic approach and focus on performance makes it the perfect choice for TurboStarter.

You can read more about Better Auth in the [official documentation](https://better-auth.com/docs).
TurboStarter supports multiple authentication methods:
* **Password** - the traditional email/password method
* **Magic Link** - passwordless sign-in via a link sent by email
* **Passkey** - passwordless sign-in with passkeys (WebAuthn)
* **Anonymous** - allowing users to proceed without an account
* **OAuth** - social providers (Google and GitHub preconfigured)
As well as common applications flows, with ready-to-use views and components:
* **Sign in** - with email/password or OAuth providers
* **Sign up** - account creation with email/password or OAuth providers
* **Sign out** - ending the user's session
* **Password recovery** - forgot and reset password flows
* **Email verification** - confirming the user's email address
You can construct your auth flow like LEGO bricks - plug in needed parts and customize them to your needs.
file: ./src/content/docs/(core)/web/billing/configuration.mdx
meta: {
"title": "Configuration",
"description": "Configure billing for your application."
}
The billing configuration schema mirrors your billing provider's schema, so that:
* the data can be displayed in the UI (pricing table, billing section, etc.)
* the correct checkout session can be created
* features such as feature-based access work correctly
It is common to all billing providers and placed in `packages/billing/src/config/index.ts`. Billing providers differ in what you can and cannot do; where possible, the schema validates and enforces these rules - but it's up to you to make sure the data is correct.
The schema is based on a few entities:
* **Plans:** The main product you are selling (e.g., "Pro Plan", "Starter Plan", etc.)
* **Prices:** The pricing plan for the product (e.g., "Monthly", "Yearly", etc.)
* **Discounts:** The discount for the price (e.g., "10% off", "20% off", etc.)
```ts title="index.ts"
type BillingConfig = {
plans: PlanWithPrices[];
discounts: Discount[];
};
```
Getting the IDs of your plans is **extremely important** - as these are used to:
* create the correct checkout
* manage your customers billing data
Take your time when configuring this - go one step at a time, and test it thoroughly.
## Billing provider
To set the billing provider, you need to modify the `BILLING_PROVIDER` environment variable. It defaults to [Stripe](/docs/web/billing/stripe).
```dotenv
BILLING_PROVIDER="stripe"
```
It's important to set it correctly, as this is used to determine the correct API calls and environment variables used during the communication with the billing provider.
## Billing model
To set the billing model, you need to modify the `BILLING_MODEL` environment variable. It defaults to `recurring` as it's the most common model for SaaS apps.
```dotenv
BILLING_MODEL="recurring"
```
This field will be used to display corresponding data in the UI (e.g. in pricing tables) and to create the correct checkout session.
For now, TurboStarter supports two billing models:
* `recurring` - for subscription-based models
* `one-time` - for one-time payments
When changing it, make sure to also update corresponding data on the provider side to match it with the correct billing model.
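In practice, the billing model boils down to a mapping onto the provider's checkout mode. A minimal sketch, assuming Stripe Checkout's `mode` values (`subscription` / `payment`) - the `toCheckoutMode` helper is hypothetical, not the kit's actual code:

```typescript
// Hypothetical helper: maps TurboStarter's BILLING_MODEL values
// to Stripe Checkout's `mode` parameter.
type BillingModel = "recurring" | "one-time";
type CheckoutMode = "subscription" | "payment";

const toCheckoutMode = (model: BillingModel): CheckoutMode =>
  model === "recurring" ? "subscription" : "payment";
```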
## Plans
Plans are the main products you are selling. They are defined by the following fields:
```ts title="index.ts"
export const config = billingConfigSchema.parse({
...
plans: [
{
id: PricingPlanType.PREMIUM,
name: "Premium",
description: "Become a power user and gain benefits",
badge: "Bestseller",
prices: [],
},
],
...
}) satisfies BillingConfig;
```
Let's break down the fields:
* `id`: The unique identifier for the plan (e.g., `free`, `pro`, `enterprise`, etc.). **This is chosen by you, it doesn't need to be the same one as the one in the provider.** It's also used to determine the access level of the plan.
* `name`: The name of the plan
* `description`: The description of the plan
* `badge`: A badge to display on the product (e.g., "Bestseller", "Popular", etc.)
The majority of these fields are going to populate the pricing table in the UI.
### Prices
Prices are the pricing plans for the plan. They are defined by the following fields:
```ts title="index.ts"
export const config = billingConfigSchema.parse({
...
plans: [
{
id: PricingPlanType.PREMIUM,
name: "Premium",
description: "Become a power user and gain benefits",
badge: "Bestseller",
prices: [
{
/* This is the `priceId` from the provider (e.g. Stripe) or the `variantId` (e.g. Lemon Squeezy) */
id: "price_1PpZAAFQH4McJDTlig6Fxsyy",
amount: 1900,
currency: "usd",
interval: RecurringInterval.MONTH,
trialDays: 7,
type: BillingModel.RECURRING,
},
],
},
],
...
}) satisfies BillingConfig;
```
Let's break down the fields:
* `id`: The unique identifier for the price. **This must match the price ID in the billing provider**
* `amount`: The amount of the price (displayed values will be divided by 100)
* `currency`: The currency of the price (only currencies from the [current locale](/docs/web/internationalization/overview) will be displayed - defaults to `usd`)
Make sure to have the same currency set on your third-party billing provider (e.g. as a [store currency](https://docs.lemonsqueezy.com/help/payments/currencies) on Lemon Squeezy)
* `interval`: The interval of the price (e.g., `month`, `year`, etc.)
* `trialDays`: The number of trial days for the price
* `type`: The type of the price (e.g., `recurring`, `one-time`, etc.)
The amount is set for UI purposes. The billing provider will handle the actual billing - therefore, please make sure the amount is correctly set in the billing provider.
Make sure to set the correct price ID that corresponds to the price in the billing provider. This is very important - as this is used to identify the correct price when creating a checkout session.
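Since amounts are stored in the smallest currency unit, the UI divides by 100 before display. A minimal sketch of such formatting with `Intl.NumberFormat` - the `formatAmount` helper is hypothetical, not the kit's actual implementation:

```typescript
// Hypothetical formatter: `amount` is in the smallest currency unit
// (e.g. cents), so the displayed value is amount / 100.
const formatAmount = (amount: number, currency: string): string =>
  new Intl.NumberFormat("en-US", { style: "currency", currency }).format(
    amount / 100,
  );
```

With the config above, `formatAmount(1900, "usd")` renders the Premium monthly price as `$19.00`.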
### One-off payments
One-off payments are a type of price that is used to create a checkout session for a one-time payment. They are defined by the following fields:
```ts title="index.ts"
export const config = billingConfigSchema.parse({
...
plans: [
{
id: PricingPlanType.PREMIUM,
name: "Premium",
description: "Become a power user and gain benefits",
badge: "Bestseller",
prices: [
{
/* This is the `priceId` from the provider (e.g. Stripe) or the `variantId` (e.g. Lemon Squeezy) */
id: "price_1PpUagFQH4McJDTlHCzOmyT6",
amount: 29900,
currency: "usd",
type: BillingModel.ONE_TIME,
},
],
},
],
...
}) satisfies BillingConfig;
```
Let's break down the fields:
* `id`: The unique identifier for the price. **This must match the price ID in the billing provider**
* `amount`: The amount of the price (displayed values will be divided by 100)
* `currency`: The currency of the price (only currencies from the [current locale](/docs/web/internationalization/overview) will be displayed - defaults to `usd`)
* `type`: The type of the price (e.g. `recurring`, `one-time`, etc.). In this case it's `one-time` as it's a one-off payment.
Please remember that the cost is set for UI purposes. **The billing provider will handle the actual billing - therefore, please make sure the cost is correctly set in the billing provider.**
### Custom prices
Sometimes you want to display a price in the pricing table without actually having it in the billing provider. This is common for custom plans, free plans that don't require a billing provider subscription, or plans that are not yet available.
To do so, let's add the `custom` flag to the price:
```ts title="index.ts"
{
id: "enterprise-monthly",
label: "Contact us!",
href: "/contact",
interval: RecurringInterval.MONTH,
custom: true,
type: BillingModel.RECURRING,
}
```
Here's the full example:
```ts title="index.ts"
export const config = billingConfigSchema.parse({
...
plans: [
{
id: PricingPlanType.PREMIUM,
name: "Premium",
description: "Become a power user and gain benefits",
badge: "Bestseller",
prices: [
{
id: "premium-monthly",
label: "Contact us!",
href: "/contact",
interval: RecurringInterval.MONTH,
custom: true,
type: BillingModel.RECURRING,
},
],
},
],
...
}) satisfies BillingConfig;
```
As you can see, the plan is now a custom plan. The UI will display the plan in the pricing table, but it won't be available for purchase.
We do this by adding the following fields:
* `custom`: A flag to indicate that the plan is custom. This will prevent the plan from being available for purchase. It's set to `false` by default.
* `label`: This is used to display the label in the pricing table instead of the price.
* `href`: The link to the page where the user can contact you. This is used in the pricing table.
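The practical effect of the `custom` flag is that such prices are filtered out of the purchase flow. A minimal sketch of that check, with a hypothetical `Price` shape loosely following the fields above:

```typescript
// Hypothetical shape - only the fields relevant to the `custom` flag.
type Price = { id: string; custom?: boolean; label?: string; href?: string };

// Custom prices appear in the pricing table but never reach checkout.
const purchasable = (prices: Price[]): Price[] =>
  prices.filter((price) => price.custom !== true);
```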
All labels and descriptions can be translated using the [internationalization](/docs/web/internationalization/overview) feature. The UI will display the correct translation based on the user's locale.
```ts title="index.ts"
label: "common:contactUs",
```
To make strings translatable, make sure to provide the translation key in the config.
### Discounts
Sometimes, you want to offer a discount to your users. This is done by adding a discount to the price in `discounts` field.
```ts title="index.ts"
export const config = billingConfigSchema.parse({
...
discounts: [
{
code: "50OFF",
type: BillingDiscountType.PERCENT,
off: 50,
appliesTo: [
"price_1PpUagFQH4McJDTlHwsCzOmyT6",
],
},
],
...
}) satisfies BillingConfig;
```
Let's break down the fields:
* `code`: The code of the discount (e.g., "50OFF", "10% off", etc.) **This must match the code configured in the billing provider**
* `type`: The type of the discount (e.g., `percent`, `amount`, etc.)
* `off`: The amount of the discount (e.g., 50 for 50% off)
* `appliesTo`: The list of prices that the discount applies to. This is the price ID that you've configured above for the price.
This data allows the UI to display the correct banner (e.g. "10% off for the first 100 customers!") and to apply the discount to the correct price at checkout.
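The discount math itself is straightforward; a minimal sketch, assuming `percent` discounts take a percentage off and `amount` discounts subtract a fixed value in the smallest currency unit (the `applyDiscount` helper is hypothetical):

```typescript
// Hypothetical helper mirroring the discount fields above.
interface Discount {
  code: string;
  type: "percent" | "amount";
  off: number;
  appliesTo: string[];
}

const applyDiscount = (
  priceId: string,
  amount: number,
  discount: Discount,
): number => {
  // Discounts only apply to the prices listed in `appliesTo`.
  if (!discount.appliesTo.includes(priceId)) return amount;
  return discount.type === "percent"
    ? Math.round(amount * (1 - discount.off / 100))
    : Math.max(0, amount - discount.off);
};
```

For example, a 50% discount on a 29900 (i.e. $299.00) one-off price yields 14950 at checkout.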
## Adding more products, plans and discounts
Simply add more plans, prices and discounts to the arrays. The UI **should** be able to handle it in most traditional cases. If you have a more complex billing schema, you may need to adjust the UI accordingly.
file: ./src/content/docs/(core)/web/billing/lemon-squeezy.mdx
meta: {
"title": "Lemon Squeezy",
"description": "Manage your customers data and subscriptions using Lemon Squeezy."
}
Lemon Squeezy is another billing provider available for TurboStarter. Here we'll go through the configuration and how to set it up as a provider for your app.
To switch to Lemon Squeezy you need to set it as provider in the environment variables:
```dotenv title="apps/web/.env.local"
BILLING_PROVIDER="lemon-squeezy"
```
Then, let's configure the integration:
## Get API keys
After you have created your account and a store for [Lemon Squeezy](https://lemonsqueezy.com/), you will need to create a new API key. Go to the [API page](https://app.lemonsqueezy.com/settings/api) in the settings, click the plus button, give your API key a name, and click *Create*. Copy the generated key - you'll need it when setting up the integration.
For local development, make sure to use [Test Mode](https://docs.lemonsqueezy.com/help/getting-started/test-mode) to not mess with the real transactions.
## Set environment variables
You need to set the following environment variables:
```dotenv title="apps/web/.env.local"
LEMONSQUEEZY_API_KEY="" # Your Lemon Squeezy API key
LEMONSQUEEZY_SIGNING_SECRET="" # Your Lemon Squeezy webhook signing secret
LEMONSQUEEZY_STORE_ID="" # Your Lemon Squeezy store ID (can be found under Settings > Stores next to your store url, e.g #12345)
```
**Please do not add the secret keys to the .env file in production.** During development, you can place them in `.env.local` as it's not committed to the repository. In production, you can set them in the environment variables of your hosting provider.
## Create products
For your users to choose from the available subscription plans, you need to create those Products first on the [Products page](https://app.lemonsqueezy.com/products). You can create as many products as you want.
Create one product per plan you want to offer. You can add multiple variants within the product to offer different models or billing intervals.

To offer multiple intervals for each plan, you can use the [Variant](https://docs.lemonsqueezy.com/help/products/variants) feature of Lemon Squeezy. Just create one variant for each interval/model you want to offer.

You need to make sure that the price ID you set in the configuration matches the ID of the variant you created in Lemon Squeezy.
[See configuration](/docs/web/billing/configuration#prices) for more information.
## Create a webhook
To sync the current subscription status, checkout results, and other information to your database, you need to set up a webhook.
The webhook handling code comes ready to use with TurboStarter, you just have to create the webhook in the Lemon Squeezy dashboard and insert the URL for your project.
To configure a new webhook, go to the [Webhooks page](https://app.lemonsqueezy.com/settings/webhooks) in the Lemon Squeezy settings and click the *Plus* button.

Select the following events:
* For subscriptions:
* `subscription_created`
* `subscription_updated`
* `subscription_cancelled`
* For one-off payments:
* `order_created`
You will also have to enter a *Signing secret* which you can get by running the following command in your terminal:
```bash
openssl rand -base64 32
```
Copy the generated string and paste it into the *Signing secret* field.
You also need to add this secret to your environment variables:
```dotenv title="apps/web/.env.local"
LEMONSQUEEZY_SIGNING_SECRET="your-signing-secret"
```
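Lemon Squeezy signs each webhook request with an HMAC-SHA256 digest of the raw body, delivered in the `X-Signature` header. TurboStarter's handler verifies this for you; a minimal sketch of the check using Node's `crypto` (with a constant-time comparison):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a Lemon Squeezy webhook: the X-Signature header carries a
// hex-encoded HMAC-SHA256 of the raw request body, keyed by the signing secret.
const verifySignature = (
  rawBody: string,
  signature: string,
  secret: string,
): boolean => {
  const digest = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(digest, "utf8");
  const b = Buffer.from(signature, "utf8");
  // timingSafeEqual throws on length mismatch, so check it first.
  return a.length === b.length && timingSafeEqual(a, b);
};
```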
To get the callback URL for the webhook, you can either use a local development URL or the URL of your deployed app:
### Local development
If you want to test the webhook locally, you can use [ngrok](https://ngrok.com) to create a tunnel to your local machine. Ngrok will then give you a URL that you can use to test the webhook locally.
To do so, install ngrok and run it with the following command (while your TurboStarter web development server is running):
```bash
ngrok http 3000
```

This will give you a URL (see the *Forwarding* output) that you can use to create a webhook in Lemon Squeezy. Just take that URL and append `/api/billing/webhook` to it.
### Production deployment
When going to production, you will need to set the webhook URL and the events you want to listen to in Lemon Squeezy.
The webhook path is `/api/billing/webhook`. If your app is hosted at `https://myapp.com` then you need to enter `https://myapp.com/api/billing/webhook` as the URL.
All the relevant events are automatically handled by TurboStarter, so you don't need to do anything else. If you want to handle more events please check [Webhooks](/docs/web/billing/webhooks) for more information.
## Add discount
You can add a discount for your customers that will apply on a specific price.
You can create the discount on [Discounts page](https://app.lemonsqueezy.com/discounts).

There you can configure the discount details, such as the products it applies to, the amount off, duration, maximum redemptions, and more.
You also need to add the discount code and details to the TurboStarter billing configuration so it can be displayed in the UI, applied to checkout sessions, and used to calculate prices.
[See discounts configuration](/docs/web/billing/configuration#discounts) for more details.
That's it! 🎉 You have now set up Lemon Squeezy as a billing provider for your app.
Feel free to add more products, prices, discounts and manage your customers data and subscriptions using Lemon Squeezy.
Make sure that the data you set in the configuration matches the details of things you created in Lemon Squeezy.
[See configuration](/docs/web/billing/configuration) for more information.
file: ./src/content/docs/(core)/web/billing/overview.mdx
meta: {
"title": "Overview",
"description": "Get started with billing in TurboStarter.",
"index": true
}
The `@turbostarter/billing` package is used to manage subscriptions, one-off payments, and more.
Inside, we provide an abstraction layer that allows you to use different billing providers without breaking your code or changing the API calls.

## Providers
TurboStarter implements multiple providers for managing billing:
* [Stripe](/docs/web/billing/stripe)
* [Lemon Squeezy](/docs/web/billing/lemon-squeezy)
* [Paddle](/docs/web/billing/paddle) (coming soon)
All configuration and setup is built-in with a unified API, so you can switch between providers without changing your code.
## Subscriptions vs. One-off payments
TurboStarter supports both one-off payments and subscriptions. You have the choice to use one or both. What TurboStarter cannot assume with certainty is the billing mode you want to use. By default, we assume you want to use subscriptions, as this is the most common billing mode for SaaS applications.
This means that - by default - TurboStarter will be looking for a subscription plan when visiting the billing section or pricing page.
**It's easily customizable** - [take a look at configuration](/docs/web/billing/configuration).
### But I want to use both
Perfect - you can, but you need to customize the pages to display the correct data.
Depending on the service you use, you will need to set the environment variables accordingly. By default - the billing package uses [Stripe](/docs/web/billing/stripe). Alternatively, you can use [Lemon Squeezy](/docs/web/billing/lemon-squeezy). In the future, we will also add [Paddle](/docs/web/billing/paddle).
file: ./src/content/docs/(core)/web/billing/paddle.mdx
meta: {
"title": "Paddle",
"description": "Manage your customers data and subscriptions using Paddle."
}
We are working on adding Paddle integration to our platform. As soon as it's ready, we will update this page with the necessary information.
file: ./src/content/docs/(core)/web/billing/stripe.mdx
meta: {
"title": "Stripe",
"description": "Manage your customers data and subscriptions using Stripe."
}
Stripe is the default billing provider for TurboStarter. Here we'll go through the configuration and how to set it up as a provider for your app.
## Get API keys
After you have created your account for [Stripe](https://stripe.com), you will need to get the API key. You can do this by going to the [API page](https://dashboard.stripe.com/apikeys) in the dashboard. Here you will find the *Secret key* and the *Publishable key*. You will need the *Secret key* for the integration to work.
For local development, make sure to use [Test Mode](https://docs.stripe.com/test-mode) to not mess with the real transactions.
## Set environment variables
You need to set the following environment variables:
```dotenv title="apps/web/.env.local"
STRIPE_SECRET_KEY="" # Your Stripe secret key
STRIPE_WEBHOOK_SECRET="" # The secret key of the webhook you created (see below)
```
**Please do not add the secret keys to the .env file in production.** During development, you can place them in `.env.local` as it's not committed to the repository. In production, you can set them in the environment variables of your hosting provider.
## Create products
For your users to choose from the available subscription plans, you need to create those Products first on the [Products page](https://dashboard.stripe.com/products). You can create as many products as you want.
Create one product per plan you want to offer. You can add multiple prices within this product to offer multiple models or different billing intervals.

You need to make sure that the price ID you set in the configuration matches the ID of the price you created in Stripe.
[See configuration](/docs/web/billing/configuration) for more information.
## Create a webhook
To sync the current subscription status, checkout results, and other information to your database, you need to set up a webhook.
The webhook code comes ready to use with TurboStarter, you just have to create the webhook in the Stripe dashboard and insert the URL for your project.
To configure a new webhook, go to the [Webhooks page](https://dashboard.stripe.com/webhooks) in the Stripe settings and click the Add endpoint button.

Select the following events:
* For subscriptions:
* `customer.subscription.created`
* `customer.subscription.updated`
* `customer.subscription.deleted`
* For one-off payments:
* `checkout.session.completed`
To get the URL for the webhook, you can either use a local development URL or the URL of your deployed app:
### Local development
There are two ways to test the webhook during local development:
The Stripe CLI allows you to forward Stripe events straight to your own localhost. You can install it in a variety of ways, but we recommend the official method:
[Install the Stripe CLI](https://docs.stripe.com/stripe-cli)
Then - login to your Stripe account using the project you want to run:
```bash
stripe login
```
Copy the webhook secret displayed in the terminal and set it as the `STRIPE_WEBHOOK_SECRET` environment variable in your `apps/web/.env.local` file:
```dotenv title="apps/web/.env.local"
STRIPE_WEBHOOK_SECRET=*your-secret-key*
```
Now, you can listen to Stripe events running the following command:
```bash
stripe listen --forward-to localhost:3000/api/billing/webhook
```
This will forward all the Stripe events to your local endpoint.
**If you have not logged in** - the first time you set this up, you are required to sign in. This is a one-time process; once signed in, re-run the command and the CLI will listen to Stripe events.
If you're not receiving events, please make sure that:
* the webhook secret is correct
* the account you signed in is the same as the one you're using in your app
You can even trigger the event manually for testing purposes:
```bash
stripe trigger customer.subscription.created
```
If you want to test the webhook locally, you can use [ngrok](https://ngrok.com) to create a tunnel to your local machine. Ngrok will then give you a URL that you can use to test the webhook locally.
To do so, install ngrok and run it with the following command (while your TurboStarter web development server is running):
```bash
ngrok http 3000
```

This will give you a URL (see the *Forwarding* output) that you can use to create a webhook in Stripe. Just take that URL and append `/api/billing/webhook` to it.
### Production deployment
When going to production, you will need to set the webhook URL and the events you want to listen to in Stripe.
The webhook path is `/api/billing/webhook`. If your app is hosted at `https://myapp.com` then you need to enter `https://myapp.com/api/billing/webhook` as the URL.
All the relevant events are automatically handled by TurboStarter, so you don't need to do anything else. If you want to handle more events please check [Webhooks](/docs/web/billing/webhooks) for more information.
## Configure Stripe Customer Portal
Stripe requires you to set up the Customer Portal so that users can manage their billing information, invoices and plan settings from there.
You can do it [under the following link.](https://dashboard.stripe.com/settings/billing/portal)

1. Please make sure to enable the setting that lets users switch plans
2. Configure the behavior of the cancellation according to your needs
## Add discount
You can add a discount for your customers that will apply on a specific price.
### Create coupon
First, you'd need to create a coupon on the [Coupons page](https://dashboard.stripe.com/coupons).

There you can configure the discount details, such as the prices it applies to, the amount off, duration, maximum redemptions, and more.
### Add promotion code
To let customers use the code during checkout, you need to create a promotion code. You can define it on the same page as the coupon and give it a user-friendly name.

This code will be auto-applied to new checkout sessions.
### Configure discount
You also need to add the discount code and details to the TurboStarter billing configuration so it can be displayed in the UI, applied to checkout sessions, and used to calculate prices.
[See discounts configuration](/docs/web/billing/configuration#discounts) for more details.
That's it! 🎉 You have now set up Stripe as a billing provider for your app.
Feel free to add more products, prices, discounts and manage your customers data and subscriptions using Stripe.
Make sure that the data you set in the configuration matches the details of things you created in Stripe.
[See configuration](/docs/web/billing/configuration) for more information.
file: ./src/content/docs/(core)/web/billing/webhooks.mdx
meta: {
"title": "Webhooks",
"description": "Handle webhooks from your billing provider."
}
TurboStarter handles billing webhooks to update customer data based on events received from the billing provider.
Occasionally, you may need to set up additional webhooks or perform custom actions with webhooks.
In such cases, you can customize the billing webhook handler in the billing router at `packages/api/src/modules/billing/billing.router.ts`.
By default, the webhook handler is configured to be as straightforward as possible:
```ts title="billing.router.ts"
import { webhookHandler } from "@turbostarter/billing/server";
export const billingRouter = new Hono().post("/webhook", (c) =>
webhookHandler(c.req.raw),
);
```
However, you can extend it using the callbacks provided from `@turbostarter/billing` package:
```ts title="billing.router.ts"
import { webhookHandler } from "@turbostarter/billing/server";
export const billingRouter = new Hono().post("/webhook", (c) =>
webhookHandler(c.req.raw, {
onCheckoutSessionCompleted: (sessionId) => {},
onSubscriptionCreated: (subscriptionId) => {},
onSubscriptionUpdated: (subscriptionId) => {},
onSubscriptionDeleted: (subscriptionId) => {},
}),
);
```
You can provide one or more of the callbacks to handle the events you are interested in.
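Under the hood, a handler like this amounts to dispatching on the provider's event type. A hypothetical sketch of the pattern - event names follow Stripe's subscription events, callback names follow the snippet above:

```typescript
// Hypothetical dispatcher: routes a parsed webhook event to the
// matching optional callback.
interface WebhookCallbacks {
  onSubscriptionCreated?: (id: string) => void;
  onSubscriptionUpdated?: (id: string) => void;
  onSubscriptionDeleted?: (id: string) => void;
}

const dispatch = (
  event: { type: string; objectId: string },
  callbacks: WebhookCallbacks,
): void => {
  const handlers: Record<string, ((id: string) => void) | undefined> = {
    "customer.subscription.created": callbacks.onSubscriptionCreated,
    "customer.subscription.updated": callbacks.onSubscriptionUpdated,
    "customer.subscription.deleted": callbacks.onSubscriptionDeleted,
  };
  // Events without a registered callback are silently ignored.
  handlers[event.type]?.(event.objectId);
};
```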
file: ./src/content/docs/(core)/web/cms/blog.mdx
meta: {
"title": "Blog",
"description": "Learn how to manage your blog content."
}
TurboStarter comes with a pre-configured blog implementation that allows you to manage your blog content.
## Creating a new blog post
To create a new blog post, you need to create a new directory (its name will be used as the slug of the blog post) with `.mdx` files in the `packages/cms/src/collections/blog/content` directory. Each file in this directory should be named after the locale it belongs to (e.g. `en.mdx`, `es.mdx`, etc.).
The file starts with a [frontmatter](https://mdxjs.com/guides/frontmatter/) block - a YAML-like block that contains metadata about the post, surrounded by three dashes (`---`).
```mdx title="packages/cms/src/collections/blog/content/my-first-blog-post/en.mdx"
---
title: Quick Tips to Improve Your Skills Right Away
description: Whether you're learning a new technical skill or working on personal development, these quick tips can help you improve right away. Learn how to break down your goals, practice consistently, and track your progress using Markdown.
publishedAt: 2023-04-19
tags: [learning, skills, progress]
thumbnail: https://images.unsplash.com/photo-1483639130939-150975af84e5?q=80&w=2370&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D
status: published
---
```
Let's break down the frontmatter fields:
* `title`: The title of the blog post (it will be also used to generate a slug for the blog post)
* `description`: The description of the blog post
* `publishedAt`: The date when the blog post was published
* `tags`: The tags of the blog post
* `thumbnail`: The thumbnail of the blog post
* `status`: The status of the blog post (could be `published` or `draft`)
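Since the title doubles as the slug source, the transformation is roughly a standard slugify. A hypothetical sketch (the kit's actual implementation may differ):

```typescript
// Hypothetical slugify: lowercase, collapse non-alphanumerics into
// hyphens, trim leading/trailing hyphens.
const slugify = (title: string): string =>
  title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
```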
After the frontmatter block, you can add the content of the blog post:
```mdx title="packages/cms/src/collections/blog/content/my-first-blog-post/en.mdx"
# Quick Tips to Improve Your Skills Right Away
Awesome paragraph!
[Link](https://www.turbostarter.dev)
This is a callout component.
...
```
You can consume the content the same as it's described in [Content Collections](/docs/web/cms/content-collections).
## BONUS: Using custom components
As you're using MDX, you can use **any React component** in your blog posts. Just define it as a normal React component and pass it via the `components` prop:
```tsx title="apps/web/src/app/content/page.tsx"
import { MyComponent } from "~/components/my-component";
export default function Page() {
return (
);
}
```
Then you'll be able to use it in your document content, and it will be rendered on the page:
```mdx title="packages/cms/src/collections/blog/content/my-first-blog-post/en.mdx"
...
# Heading
Excellent paragraph!
1. First item
2. Second item
3. Third item
```
TurboStarter ships with a set of default components that you can use in your blog posts. Use them or define your own to make your blog posts more engaging.
file: ./src/content/docs/(core)/web/cms/content-collections.mdx
meta: {
"title": "Content Collections",
"description": "Get started with Content Collections."
}
By default, TurboStarter uses [Content Collections](https://www.content-collections.dev/) to store and retrieve content from the MDX files.
Content from there is used to populate data in the following places:
* **Blog**
* **Legal pages**
* **Documentation**
It is a great MDX-based (a more powerful version of Markdown) alternative to headless CMSs like Contentful or Prismic. It is free, open source, and the content lives right in your repository.
Of course, you can add more collections and views, as it's very flexible.
## Defining new collection
To define a new collection, you need to create a new file in the `packages/cms/src/collections` directory:
```ts title="packages/cms/src/collections/legal/index.ts"
import { defineCollection } from "@content-collections/core";
export const legal = defineCollection({
name: "legal",
directory: "src/collections/legal/content",
include: "**/*.mdx",
schema: (z) => ({
title: z.string(),
description: z.string(),
}),
transform: async (doc, context) => {
const mdx = await transformMDX(doc, context);
return {
...mdx,
slug: doc._meta.directory,
locale: doc._meta.fileName.split(".")[0],
};
},
});
```
Then it's passed to the config in `packages/cms/content-collections.ts` file which is used to generate types and parse content from MDX files.
```tsx title="packages/cms/content-collections.ts"
import { defineConfig } from "@content-collections/core";
import { legal } from "./src/collections/legal";
export default defineConfig({
collections: [legal],
});
```
When you run a development server, content collections are automatically rebuilt (into the `.content-collections` directory), and you can import the content and metadata of each file in your application.
By exporting the generated content you get a fully type-safe API to interact with the content, with type safety on the data coming from the MDX files.
## Using content collections
To get some content from `@turbostarter/cms` package, you need to use the exposed API that we described in the [Overview section](/docs/web/cms/overview#api):
```tsx title="apps/web/src/app/[locale](marketing)/legal/[slug]/page.tsx"
import { CollectionType, getContentItemBySlug } from "@turbostarter/cms";

export default async function Page({
  params,
}: {
  params: Promise<{ slug: string; locale: string }>;
}) {
  const item = getContentItemBySlug({
    collection: CollectionType.LEGAL,
    slug: (await params).slug,
    locale: (await params).locale,
  });

  /* Markup illustrative -- render the item however your page requires. */
  return <h1>{item?.title}</h1>;
}
```
Voila! You can now access the content from the MDX files.
file: ./src/content/docs/(core)/web/cms/overview.mdx
meta: {
"title": "Overview",
"description": "Manage your content in TurboStarter.",
"index": true
}
TurboStarter implements a CMS interface that abstracts the implementation from where you store your data. It provides a simple API to interact with your data, and it's easy to extend and customize.
By default, the starter kit ships with these implementations in place:
1. [Content Collections](https://www.content-collections.dev/) - a headless CMS that uses [MDX](https://mdxjs.com/) files to store your content.
The implementation is available under `@turbostarter/cms` package, here we'll go over how to use it.
## API
The CMS package provides a simple, unified API to interact with the content. It's the same for all the providers, so you can easily use it with any of the implementations without changing the code.
### Fetching content items
To fetch items from your collections, you can use the `getContentItems` function.
```ts
import { getContentItems } from "@turbostarter/cms";
const { items, count } = getContentItems({
collection: CollectionType.BLOG,
tags: [ContentTag.SKILLS],
sortBy: "publishedAt",
sortOrder: SortOrder.DESCENDING,
status: ContentStatus.PUBLISHED,
locale: "en",
});
```
It accepts an object with the following properties:
* `collection`: The collection to fetch the items from.
* `tags`: The tags to filter the items by.
* `sortBy`: The field to sort the items by.
* `sortOrder`: The order to sort the items in.
* `status`: The status of the items to fetch. It can be `published` or `draft`. By default, only `published` items are fetched.
* `locale`: The locale to fetch the items in. By default, all locales are fetched.
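The filtering and sorting described above boils down to something like the following sketch. This is a simplification for illustration, not the actual `@turbostarter/cms` implementation:

```typescript
// Simplified item shape and filter logic (illustrative, not the real package).
interface ContentItem {
  slug: string;
  tags: string[];
  status: "published" | "draft";
  locale: string;
  publishedAt: string;
}

function filterItems(
  items: ContentItem[],
  options: {
    tags?: string[];
    status?: "published" | "draft";
    locale?: string;
    sortBy?: keyof ContentItem;
    sortOrder?: "asc" | "desc";
  },
): { items: ContentItem[]; count: number } {
  // Default to published items only, all locales -- mirroring the docs above.
  const { tags, status = "published", locale, sortBy, sortOrder = "desc" } = options;

  let result = items.filter(
    (item) =>
      item.status === status &&
      (!locale || item.locale === locale) &&
      (!tags || tags.some((tag) => item.tags.includes(tag))),
  );

  if (sortBy) {
    result = [...result].sort(
      (a, b) =>
        String(a[sortBy]).localeCompare(String(b[sortBy])) *
        (sortOrder === "desc" ? -1 : 1),
    );
  }

  return { items: result, count: result.length };
}
```

Note how the `status` default means drafts never leak out unless explicitly requested, which matches the behavior documented above.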
### Fetching a single content item
To fetch a single content item, you can use the `getContentItemBySlug` function.
```ts
import { getContentItemBySlug } from "@turbostarter/cms";
const item = getContentItemBySlug({
collection: CollectionType.BLOG,
slug: "my-first-blog-post",
status: ContentStatus.PUBLISHED,
locale: "en",
});
```
It accepts an object with the following properties:
* `collection`: The collection to fetch the item from.
* `slug`: The slug of the item to fetch.
* `status`: The status of the item to fetch. It can be `published` or `draft`. By default, only `published` items are fetched.
* `locale`: The locale to fetch the item in. By default, all locales are fetched.
file: ./src/content/docs/(core)/web/configuration/app.mdx
meta: {
"title": "App configuration",
"description": "Learn how to setup the overall settings of your app."
}
The application configuration is set at `apps/web/src/config/app.ts`. This configuration stores some overall variables for your application.
This allows you to host multiple apps in the same monorepo, as every application defines its own configuration.
The recommendation is to **not update this directly** - instead, define the environment variables and override the default behavior. The configuration is strongly typed, so you can use it safely across your codebase - it'll be validated at build time.
```ts title="apps/web/src/config/app.ts"
import { env } from "~/lib/env";
export const appConfig = {
name: env.NEXT_PUBLIC_PRODUCT_NAME,
url: env.NEXT_PUBLIC_URL,
locale: env.NEXT_PUBLIC_DEFAULT_LOCALE,
theme: {
mode: env.NEXT_PUBLIC_THEME_MODE,
color: env.NEXT_PUBLIC_THEME_COLOR,
},
} as const;
```
For example, to set the product name and default locale, you'd update the following variables:
```dotenv title=".env.local"
NEXT_PUBLIC_PRODUCT_NAME="TurboStarter"
NEXT_PUBLIC_DEFAULT_LOCALE="en"
```
Do NOT use `process.env` to get the values of the variables. Variables
accessed this way are not validated at build time, and thus the wrong variable
can be used in production.
file: ./src/content/docs/(core)/web/configuration/environment-variables.mdx
meta: {
"title": "Environment variables",
"description": "Learn how to configure environment variables."
}
Environment variables are defined in the `.env` file in the root of the repository and in the root of the `apps/web` package.
* **Shared environment variables**: Defined in the **root** `.env` file. These are shared between environments (e.g., development, staging, production) and apps (e.g., web, mobile).
* **Environment-specific variables**: Defined in `.env.development` and `.env.production` files. These are specific to the development and production environments.
* **App-specific variables**: Defined in the app-specific directory (e.g., `apps/web`). These are specific to the app and are not shared between apps.
* **Secret keys**: Not stored in the `.env` file. Instead, they are stored in the environment variables of the CI/CD system.
* **Local secret keys**: If you need to use secret keys locally, you can store them in the `.env.local` file. This file is not committed to Git, making it safe for sensitive information.
## Shared variables
Here you can add all the environment variables that are shared across all the apps. This file should be located in the **root** of the project.
To override these variables in a specific environment, please add them to the specific environment file (e.g. `.env.development`, `.env.production`).
```dotenv title=".env.local"
# Shared environment variables
# The database URL is used to connect to your database.
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres"
# The name of the product. This is used in various places across the apps.
PRODUCT_NAME="TurboStarter"
# The url of the web app. Used mostly to link between apps.
URL="http://localhost:3000"
...
```
## App-specific variables
Here you can add all the environment variables that are specific to the app (e.g. `apps/web`).
You can also override the shared variables defined in the root `.env` file.
```dotenv title="apps/web/.env.local"
# App-specific environment variables
# Env variables extracted from shared to be exposed to the client in Next.js app
NEXT_PUBLIC_PRODUCT_NAME="${PRODUCT_NAME}"
NEXT_PUBLIC_URL="${URL}"
NEXT_PUBLIC_DEFAULT_LOCALE="${DEFAULT_LOCALE}"
# Theme mode and color
NEXT_PUBLIC_THEME_MODE="system"
NEXT_PUBLIC_THEME_COLOR="orange"
...
```
To make environment variables available in the Next.js **client-side** app code, you need to prefix them with `NEXT_PUBLIC_`. They will be injected into the code during the build process.
Only environment variables prefixed with `NEXT_PUBLIC_` will be injected, so don't use this prefix for environment variables that should be used only in the server-side code.
[Read more about Next.js environment variables.](https://nextjs.org/docs/pages/building-your-application/configuring/environment-variables)
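The build-time rule can be pictured as a simple prefix filter. This is a conceptual sketch of what Next.js does, not its actual implementation:

```typescript
// Only NEXT_PUBLIC_-prefixed variables make it into the client bundle;
// everything else stays server-side (conceptual illustration).
function clientExposedVars(
  env: Record<string, string>,
): Record<string, string> {
  return Object.fromEntries(
    Object.entries(env).filter(([key]) => key.startsWith("NEXT_PUBLIC_")),
  );
}
```

This is why a secret like `DATABASE_URL` is safe on the server but must never be given the public prefix.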
## Secret keys
Secret keys and sensitive information should never be stored in the `.env` file. Instead, **they are stored in the environment variables of the CI/CD system.**
It means that you will need to add the secret keys to the environment
variables of your CI/CD system (e.g., GitHub Actions, Vercel, Cloudflare, your
VPS, Netlify, etc.). This is not a TurboStarter-specific requirement, but a
best practice for security for any application. Ultimately, it's your choice.
Below are some examples of what counts as a secret key in practice.
```dotenv title=".env.local"
# Secret keys
# The database URL is used to connect to your database.
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres"
# Stripe server config - required only if you use Stripe as a billing provider
STRIPE_WEBHOOK_SECRET=""
STRIPE_SECRET_KEY=""
# Lemon Squeezy server config - required only if you use Lemon Squeezy as a billing provider
LEMON_SQUEEZY_API_KEY=""
LEMON_SQUEEZY_SIGNING_SECRET=""
LEMON_SQUEEZY_STORE_ID=""
...
```
If you need to use secret keys locally, you can store them in the `.env.local`
file. This file is not committed to Git, therefore it is safe to store
sensitive information in it.
file: ./src/content/docs/(core)/web/configuration/paths.mdx
meta: {
"title": "Paths configuration",
"description": "Learn how to configure the paths of your app."
}
The paths configuration is set at `apps/web/config/paths.ts`. This configuration stores all the paths that you'll be using in your application. It is a convenient way to store them in a central place rather than scatter them in the codebase using magic strings.
It is **unlikely you'll need to change** this unless you're heavily editing the codebase.
```ts title="apps/web/config/paths.ts"
const pathsConfig = {
index: "/",
marketing: {
pricing: "/pricing",
contact: "/contact",
blog: {
index: BLOG_PREFIX,
post: (slug: string) => `${BLOG_PREFIX}/${slug}`,
},
legal: {
terms: `${LEGAL_PREFIX}/terms-and-conditions`,
privacy: `${LEGAL_PREFIX}/privacy-policy`,
cookies: `${LEGAL_PREFIX}/cookie-policy`,
},
},
auth: {
login: `${AUTH_PREFIX}/login`,
register: `${AUTH_PREFIX}/register`,
forgotPassword: `${AUTH_PREFIX}/password/forgot`,
updatePassword: `${AUTH_PREFIX}/password/update`,
error: `${AUTH_PREFIX}/error`,
},
dashboard: {
index: DASHBOARD_PREFIX,
},
...,
} as const;
```
By declaring the paths as constants, we can use them safely throughout the
codebase. There is no risk of misspelling or using magic strings.
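The pattern can be sketched as follows. The prefix values here are hypothetical stand-ins; the real config lives in `apps/web/config/paths.ts`:

```typescript
// Hypothetical prefixes for illustration.
const BLOG_PREFIX = "/blog";
const AUTH_PREFIX = "/auth";

const paths = {
  index: "/",
  marketing: {
    blog: {
      index: BLOG_PREFIX,
      // A function for parameterized routes keeps slugs out of magic strings.
      post: (slug: string) => `${BLOG_PREFIX}/${slug}`,
    },
  },
  auth: {
    login: `${AUTH_PREFIX}/login`,
  },
} as const;

// Misspelling a key is a compile-time error, unlike a raw string path.
const postUrl = paths.marketing.blog.post("my-first-blog-post");
```

Changing a route prefix in one place then propagates everywhere the builder is used.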
file: ./src/content/docs/(core)/web/customization/add-app.mdx
meta: {
"title": "Adding apps",
"description": "Learn how to add apps to your Turborepo workspace."
}
This is an **advanced topic** - you should only follow these instructions if you are sure you want to add a new app to your TurboStarter project within your monorepo and want to keep pulling updates from the TurboStarter repository.
In some ways - creating a new repository may be the easiest way to manage your application. However, if you want to keep your application within the monorepo and pull updates from the TurboStarter repository, you can follow these instructions.
To pull updates into a separate application outside of `web` - we can use [git subtree](https://www.atlassian.com/git/tutorials/git-subtree).
Basically, we will create a subtree at `apps/web` and create a new remote branch for the subtree. When we create a new application, we will pull the subtree into the new application. This allows us to keep it in sync with the `apps/web` folder.
To add a new app to your TurboStarter project, you need to follow these steps:
## Create a subtree
First, we need to create a subtree for the `apps/web` folder. We will create a branch named `web-branch` and create a subtree for the `apps/web` folder.
```bash
git subtree split --prefix=apps/web --branch web-branch
```
## Create a new app
Now, we can create a new application in the `apps` folder.
Let's say we want to create a new app `ai-chat` at `apps/ai-chat` with the same structure as the `apps/web` folder (which acts as the template for all new apps).
```bash
git subtree add --prefix=apps/ai-chat origin web-branch --squash
```
You should now be able to see the `apps/ai-chat` folder with the contents of the `apps/web` folder.
## Update the app
When you want to update the new application, follow these steps:
### Pull the latest updates from the TurboStarter repository
The command below will update all the changes from the TurboStarter repository:
```bash
git pull upstream main
```
### Push the `web-branch` updates
After you have pulled the updates from the TurboStarter repository, you can split the branch again and push the updates to the web-branch:
```bash
git subtree split --prefix=apps/web --branch web-branch
```
Now, you can push the updates to the `web-branch`:
```bash
git push origin web-branch
```
### Pull the updates to the new application
Now, you can pull the updates to the new application:
```bash
git subtree pull --prefix=apps/ai-chat origin web-branch --squash
```
That's it! You now have a new application in the monorepo 🎉
file: ./src/content/docs/(core)/web/customization/add-package.mdx
meta: {
"title": "Adding packages",
"description": "Learn how to add packages to your Turborepo workspace."
}
This is an **advanced topic** - you should only follow these instructions if you are sure you want to add a new package to your TurboStarter application instead of adding a folder to your application in `apps/web` or modify existing packages under `packages`. You don't need to do this to add a new page or component to your application.
To add a new package to your TurboStarter application, you need to follow these steps:
## Generate a new package
First, enter the command below to create a new package in your TurboStarter application:
```bash
turbo gen
```
Turborepo will ask you for the name of the package you want to create. Type the name and press enter.
If you don't want to add dependencies to your package, you can skip this step by pressing enter.
The command will generate a new package under `packages` scoped as `@turbostarter/`. If you named it `example`, the package will be named `@turbostarter/example`.
Finally, to make fast refresh work when you make changes to the package, you need to add the package to the `next.config.js` file in the root of your TurboStarter application `apps/web`.
```ts title="next.config.js"
const INTERNAL_PACKAGES = [
// all internal packages,
"@turbostarter/example",
];
```
## Export a module from your package
By default, the package exports a single module via the `index.ts` file. You can add more exports either by creating new files in the package directory and re-exporting them from `index.ts`, or by adding separate entries to the `exports` field in `package.json`.
### From `index.ts` file
The easiest way to export a module from a package is to create a new file in the package directory and export it from the `index.ts` file.
```ts title="packages/example/src/module.ts"
export function example() {
return "example";
}
```
Then, export the module from the `index.ts` file.
```ts title="packages/example/src/index.ts"
export * from "./module";
```
### From `exports` field in `package.json`
**This can be very useful for tree-shaking.** Assuming you have a file named `module.ts` in the package directory, you can export it by adding it to the `exports` field in the `package.json` file.
```json title="packages/example/package.json"
{
"exports": {
".": "./src/index.ts",
"./module": "./src/module.ts"
}
}
```
**When to do this?**
1. when exporting two modules that don't share dependencies, to ensure better tree-shaking. For example, if your exports contain both client and server modules.
2. for better organization of your package
For example, create two exports `client` and `server` in the package directory and add them to the `exports` field in the `package.json` file.
```json title="packages/example/package.json"
{
"exports": {
".": "./src/index.ts",
"./client": "./src/client.ts",
"./server": "./src/server.ts"
}
}
```
1. The `client` module can be imported using `import { client } from '@turbostarter/example/client'`
2. The `server` module can be imported using `import { server } from '@turbostarter/example/server'`
## Use the package in your application
You can now use the package in your application by importing it using the package name:
```ts title="apps/web/src/app/page.tsx"
import { example } from "@turbostarter/example";
console.log(example());
```
Et voilà! You have successfully added a new package to your TurboStarter application. 🎉
file: ./src/content/docs/(core)/web/customization/components.mdx
meta: {
"title": "Components",
"description": "Manage and customize your app components."
}
For the components part, we're using [shadcn/ui](https://ui.shadcn.com) for atomic, accessible and highly customizable components.
shadcn/ui is a powerful tool that allows you to generate pre-designed
components with a single command. It's built with Tailwind CSS and Radix UI,
and it's highly customizable.
TurboStarter defines two packages that are responsible for the UI part of your app:
* `@turbostarter/ui` - shared styles, [themes](/docs/web/customization/styling#themes) and assets (e.g. icons)
* `@turbostarter/ui-web` - pre-built UI web components, ready to use in your app
## Adding a new component
There are basically two ways to add a new component:
TurboStarter is fully compatible with the [shadcn CLI](https://ui.shadcn.com/docs/cli), so you can generate new components with a single command.
Run the following command from the **root** of your project:
```bash
pnpm --filter @turbostarter/ui-web ui:add
```
This will launch an interactive command-line interface to guide you through the process of adding a new component where you can pick which component you want to add.
```bash
Which components would you like to add? > Space to select. A to toggle all.
Enter to submit.
◯ accordion
◯ alert
◯ alert-dialog
◯ aspect-ratio
◯ avatar
◯ badge
◯ button
◯ calendar
◯ card
◯ checkbox
```
Newly created components will appear in the `packages/ui/web/src` directory.
You can always copy-paste a component from the [shadcn/ui](https://ui.shadcn.com/docs/components) website and modify it to your needs.
This is possible because the components are headless and, in most cases, don't need any additional dependencies.
Copy code from the website, create a new file in the `packages/ui/web/src` directory and paste the code into the file.
Keep in mind that you should always try to keep shared components as atomic as possible. This will make it easier to reuse them and to build specific views by composition.
E.g. include components like `Button`, `Input`, `Card`, `Dialog` in shared package, but keep specific components like `LoginForm` in your app directory.
## Using components
Each component is a standalone entity which has a separate export from the package. It helps to keep things modular, avoid unnecessary dependencies and make tree-shaking possible.
To import a component from the UI package, use the following syntax:
```tsx title="components/my-component.tsx"
// [!code word:card]
import {
Card,
CardContent,
CardHeader,
CardFooter,
CardTitle,
CardDescription,
} from "@turbostarter/ui-web/card";
```
Then you can use it to build a component specific to your app:
```tsx title="components/my-component.tsx"
export function MyComponent() {
  /* Markup reconstructed for illustration, using the Card parts imported above. */
  return (
    <Card>
      <CardHeader>
        <CardTitle>My Component</CardTitle>
      </CardHeader>
      <CardContent>My Component Content</CardContent>
    </Card>
  );
}
```
We recommend using [v0](https://v0.dev) to generate layouts for your app. It's a powerful tool that allows you to generate layouts from natural language instructions.
Of course, **it won't replace a designer**, but it can be a good starting point for your layout.
file: ./src/content/docs/(core)/web/customization/styling.mdx
meta: {
"title": "Styling",
"description": "Get started with styling your app."
}
To build the web user interface, TurboStarter comes with [Tailwind CSS](https://tailwindcss.com/) and [Radix UI](https://www.radix-ui.com/) pre-configured.
The combination of Tailwind CSS and Radix UI gives ready-to-use, accessible UI components that can be fully customized to match your brand's design.
## Tailwind configuration
In the `tooling/tailwind/config` directory you will find shared Tailwind CSS configuration files. To change some global styles you can edit the files in this folder.
Here is an example of a shared Tailwind configuration file:
```ts title="tooling/tailwind/config/base.ts"
import type { Config } from "tailwindcss";
export default {
darkMode: "class",
content: ["src/**/*.{ts,tsx}"],
theme: {
extend: {
colors: {
...
primary: {
DEFAULT: "hsl(var(--color-primary))",
foreground: "hsl(var(--color-primary-foreground))",
},
secondary: {
DEFAULT: "hsl(var(--color-secondary))",
foreground: "hsl(var(--color-secondary-foreground))",
},
success: {
DEFAULT: "hsl(var(--color-success))",
foreground: "hsl(var(--color-success-foreground))",
},
...
},
},
},
plugins: [animate, containerQueries, typography],
} satisfies Config;
```
For the colors, we rely strictly on [CSS Variables](https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_custom_properties) in [HSL](https://en.wikipedia.org/wiki/HSL_and_HSV) format, which allows for easy theme management without the need for any JavaScript.
Also, each app has its own `tailwind.config.ts` file which extends the shared config and allows you to override the global styles.
Here is an example of an app's `tailwind.config.ts` file:
```ts title="apps/web/tailwind.config.ts"
import type { Config } from "tailwindcss";
import { fontFamily } from "tailwindcss/defaultTheme";
import baseConfig from "@turbostarter/tailwind-config/web";
export default {
// We need to append the path to the UI package to the content array so that
// those classes are included correctly.
content: [
...baseConfig.content,
"../../packages/ui/{shared,web}/src/**/*.{ts,tsx}",
],
presets: [baseConfig],
theme: {
extend: {
fontFamily: {
sans: ["var(--font-sans)", ...fontFamily.sans],
mono: ["var(--font-mono)", ...fontFamily.mono],
},
},
},
} satisfies Config;
```
That way we can have a separation of concerns and a clear structure for the Tailwind CSS configuration.
## Themes
TurboStarter comes with **9+** predefined themes which you can use to quickly change the look and feel of your app.
They're defined in `packages/ui/shared/src/styles/themes` directory. Each theme is a set of variables that can be overridden:
```ts title="packages/ui/shared/src/styles/themes/colors/orange.ts"
export const orange = {
light: {
background: [0, 0, 1],
foreground: [240, 0.1, 0.039],
card: [0, 0, 1],
"card-foreground": [240, 0.1, 0.039],
...
}
} satisfies ThemeColors;
```
Each variable is stored as a [HSL](https://en.wikipedia.org/wiki/HSL_and_HSV) array, which is then converted to a CSS variable at build time (by our custom build script). That way we can ensure full type-safety and reuse themes across parts of our apps (e.g. use the same theme in emails).
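The build-time conversion can be sketched like this. `toCssValue` and `themeToCss` are hypothetical names illustrating the idea, not the actual TurboStarter build script:

```typescript
// An HSL color stored as [hue, saturation, lightness], matching the theme files.
type Hsl = [number, number, number];

// Format a 0..1 fraction as a percentage, avoiding float noise like "10.000000000000002%".
const pct = (n: number) => `${Math.round(n * 1000) / 10}%`;

// Produce the "H S% L%" string that Tailwind's hsl(var(--...)) pattern expects.
function toCssValue([h, s, l]: Hsl): string {
  return `${h} ${pct(s)} ${pct(l)}`;
}

// Emit one CSS custom property per theme color.
function themeToCss(colors: Record<string, Hsl>): string {
  return Object.entries(colors)
    .map(([name, value]) => `  --color-${name}: ${toCssValue(value)};`)
    .join("\n");
}
```

Keeping colors as plain arrays like this is what makes them reusable outside the browser, e.g. in email templates.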
Feel free to add your own themes or override the existing ones to match your brand's identity.
To apply a theme to your app, you can use the `data-theme` attribute on the `html` element:
```tsx title="apps/web/src/app/layout.tsx"
export default function RootLayout({
children,
}: {
children: React.ReactNode;
}) {
  return (
    /* Theme name illustrative -- use any theme defined in your themes directory. */
    <html data-theme="orange">
      <body>{children}</body>
    </html>
  );
}
```
## Dark mode
TurboStarter comes with a built-in dark mode support.
Each theme has corresponding dark mode variables, which are used to switch the theme to its dark counterpart.
```ts title="packages/ui/shared/src/styles/themes/colors/orange.ts"
export const orange = {
light: {},
dark: {
background: [0, 0, 1],
foreground: [240, 0.1, 0.039],
card: [0, 0, 1],
"card-foreground": [240, 0.1, 0.039],
...
}
} satisfies ThemeColors;
```
As we define the `darkMode` as `class` in the shared Tailwind configuration, we need to add the `dark` class to the `html` element to apply the dark mode styles.
For this purpose we're using [next-themes](https://github.com/pacocoursey/next-themes) package under the hood to handle the user preferences management.
```tsx title="apps/web/src/providers/theme.tsx"
import { memo } from "react";
import { ThemeProvider as NextThemesProvider } from "next-themes";

export const ThemeProvider = memo(({ children }) => {
  return (
    /* Props illustrative -- next-themes toggles the `dark` class on `html`. */
    <NextThemesProvider attribute="class" defaultTheme="system" enableSystem>
      {children}
    </NextThemesProvider>
  );
});
```
We can define the default theme mode and color in [app configuration](/docs/web/configuration/app).
file: ./src/content/docs/(core)/web/database/client.mdx
meta: {
"title": "Database client",
"description": "Use database client to interact with the database."
}
The database client is an export of the Drizzle client. It is automatically typed by Drizzle based on the schema and is exposed as the `db` object from the database package (`@turbostarter/db`) in the monorepo.
This guide covers how to initialize the client and also basic operations, such as querying, creating, updating, and deleting records. To learn more about the Drizzle client, check out the [official documentation](https://orm.drizzle.team/kit-docs/overview).
## Initializing the client
Pass the validated `DATABASE_URL` to the client to initialize it.
```ts title="server.ts"
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";
import { env } from "../env";
const client = postgres(env.DATABASE_URL);
export const db = drizzle(client);
```
Now it's exported from the `@turbostarter/db` package and can be used across the codebase (server-side).
## Querying data
To query data, you can use the `db` object and its methods:
```ts title="query.ts"
import { eq } from "@turbostarter/db";
import { db } from "@turbostarter/db/server";
import { customers } from "@turbostarter/db/schema";
export const getCustomerByUserId = async (userId: string) => {
const [data] = await db
.select()
.from(customers)
.where(eq(customers.userId, userId));
return data ?? null;
};
```
## Mutating data
You can use the exported utilities to mutate data. Insert, update or delete records in a fast and fully type-safe way:
```ts title="mutation.ts"
import { eq } from "@turbostarter/db";
import { db } from "@turbostarter/db/server";
import { customers, type InsertCustomer } from "@turbostarter/db/schema";
export const upsertCustomer = (data: InsertCustomer) => {
return db.insert(customers).values(data).onConflictDoUpdate({
target: customers.userId,
set: data,
});
};
```
file: ./src/content/docs/(core)/web/database/migrations.mdx
meta: {
"title": "Migrations",
"description": "Migrate your changes to the database."
}
You have your schema in place, and you want to apply your changes to the database. TurboStarter provides you a convenient way to do so with pre-configured CLI commands.
## Generating migration
To generate a migration from the schema, you need to run the following command:
```bash
pnpm db:generate
```
This will create a new `.sql` file in the `migrations` directory.
Drizzle will also generate a `.json` representation of the migration in the `meta` directory, but it's for its internal purposes and you shouldn't need to touch it.
## Applying migrations
To apply the migrations to the database, you need to run the following command:
```bash
pnpm db:migrate
```
This will apply all the migrations that have not been applied yet. If any conflicts arise, you can resolve them by modifying the generated migration file.
## Pushing changes
To push changes directly to the database, you can use the following command:
```bash
pnpm db:push
```
This lets you push your schema changes directly to the database and omit managing SQL migration files.
Pushing changes directly to the database (without using migrations) could be risky. Please be careful when using it; we recommend it only for local development and local databases.
[Read more about it in the Drizzle docs](https://orm.drizzle.team/kit-docs/overview#prototyping-with-db-push).
file: ./src/content/docs/(core)/web/database/overview.mdx
meta: {
"title": "Overview",
"description": "Get started with the database.",
"index": true
}
We're using [Drizzle ORM](https://orm.drizzle.team) to interact with the database. It basically adds a little layer of abstraction between our code and the database.
> If you know SQL, you know Drizzle.
For the database we're leveraging [PostgreSQL](https://www.postgresql.org), but you could use any other database that Drizzle ORM supports (basically any SQL database e.g. MySQL, SQLite, etc.).
Drizzle ORM is a powerful tool that allows you to interact with the database in a type-safe manner. It ships with 0 (!) dependencies and is designed to be fast and easy to use.
## Setup
To start interacting with the database you first need to ensure that your database instance is up and running.
For local development we recommend using the [Docker](https://hub.docker.com/_/postgres) container.
You can start the container with the following command:
```bash
pnpm db:setup
```
This will start the database container and initialize the database with the latest schema.
**Where is DATABASE\_URL?**
`DATABASE_URL` is a connection string that is used to connect to the database. When the command finishes, the URL will be displayed in the console and added to your environment variables.
You can also use a cloud database instance (e.g. [Neon](https://neon.tech/), [Turso](https://turso.tech/), etc.), although it's not recommended for local development.
**Where is DATABASE\_URL?**
It's available in your provider's project dashboard. You'll need to copy the connection string from there and add it to your `.env.local` file. The format will look something like:
* Neon: `postgresql://user:password@ep-xyz-123.region.aws.neon.tech/dbname`
* Turso: `libsql://your-db-xyz.turso.io`
Make sure to keep this URL secure and never commit it to version control.
Then, you need to set `DATABASE_URL` environment variable in **root** `.env.local` file.
```dotenv title=".env.local"
# The database URL is used to connect to your database.
DATABASE_URL="postgresql://postgres:postgres@127.0.0.1:54322/postgres"
```
You're ready to go! 🥳
## Studio
TurboStarter also provides you with Studio, an interactive UI where you can explore your database and test queries.
To run the Studio, you can use the following command:
```bash
pnpm db:studio
```
This will start the Studio on [https://local.drizzle.studio](https://local.drizzle.studio).
## Next steps
* [Update schema](/docs/web/database/schema) - learn about schema and how to update it.
* [Generate & run migrations](/docs/web/database/migrations) - migrate your changes to the database.
* [Initialize client](/docs/web/database/client) - initialize the database client and start interacting with the database.
file: ./src/content/docs/(core)/web/database/schema.mdx
meta: {
"title": "Schema",
"description": "Learn about the database schema."
}
Creating a schema for your data is one of the primary tasks when building a new application.
You can find the schema of each table in `packages/db/src/db/schema` directory. The schema is basically organized by entity and each file is a separate table.
## Defining schema
The schema is defined using SQL-like utilities from [drizzle-orm](https://orm.drizzle.team/docs/sql-schema-declaration).
It supports all the SQL features, such as enums, indexes, foreign keys, extensions and more.
We're relying on the code-first approach, where we define the schema in code and then generate the SQL from it. That way we achieve full type-safety and the simplest flow for database updates and migrations.
## Example
Let's take a look at the `customers` table, where we store information about our customers.
```typescript title="customers.ts"
export const customers = pgTable("customers", {
  userId: text("user_id")
    .references(() => users.id, {
      onDelete: "cascade",
    })
    .primaryKey(),
  customerId: text("customer_id").notNull(),
  status: billingStatusEnum("status"),
  plan: pricingPlanTypeEnum("plan"),
  createdAt: timestamp("created_at", { mode: "string" }).notNull().defaultNow(),
  updatedAt: timestamp("updated_at", { mode: "string" })
    .notNull()
    .$onUpdate(() => new Date().toISOString()),
});

export type InsertCustomer = typeof customers.$inferInsert;
export type SelectCustomer = typeof customers.$inferSelect;
```
We're using a few native SQL utilities here, such as:
* `pgTable` - a PostgreSQL table definition.
* `text` - a text column, here also used as the primary key.
* `timestamp` - a timestamp column.
* `references` - a foreign key reference to another table.
* `billingStatusEnum` / `pricingPlanTypeEnum` - enum columns defined elsewhere in the schema.
What's more, Drizzle gives us the ability to export the TypeScript types for the table, which we can reuse e.g. for the API calls.
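For instance, a hypothetical helper can be annotated with the inferred insert type. The `InsertCustomer` below is a simplified, hand-written stand-in so the snippet is self-contained (the real type comes from `$inferInsert`):

```typescript
// Simplified stand-in for the inferred type (the real one comes from $inferInsert)
type InsertCustomer = {
  userId: string;
  customerId: string;
  status?: string | null;
  plan?: string | null;
};

// Hypothetical helper building a typed insert payload
const toInsertPayload = (userId: string, customerId: string): InsertCustomer => ({
  userId,
  customerId,
});

console.log(toInsertPayload("user_123", "cus_abc").customerId); // "cus_abc"
```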
Also, we can use the drizzle extension [drizzle-zod](https://orm.drizzle.team/docs/zod) to generate the Zod schemas for the table.
```typescript title="customers.ts"
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
export const insertCustomerSchema = createInsertSchema(customers);
export const selectCustomerSchema = createSelectSchema(customers);
```
Then we can use the generated schemas in our API handlers and frontend forms to validate the data.
file: ./src/content/docs/(core)/web/deployment/amplify.mdx
meta: {
"title": "AWS Amplify",
"description": "Learn how to deploy your TurboStarter app to AWS Amplify."
}
[AWS Amplify](https://aws.amazon.com/amplify/) is a fully managed service that makes it easy to build, deploy, and host modern web applications. It provides features like continuous deployment, serverless functions, authentication, and more - all integrated into a seamless developer experience.
This guide explains how to deploy your TurboStarter app on AWS Amplify. You'll learn how to set up your repository for automated deployments, configure build settings, manage environment variables, and ensure your application runs smoothly in production. **AWS Amplify handles the infrastructure management, allowing you to focus on developing your application.**
To deploy to AWS Amplify, you need to have an AWS account. You can create one [here](https://aws.amazon.com/amplify/).
## Create configuration file
To deploy your TurboStarter app to AWS Amplify, you need to create a config file. This file will contain the necessary information to connect your repository to AWS Amplify and deploy your application.
Let's create a new file called `amplify.yml` in the root of your project:
```yaml title="amplify.yml"
version: 1
applications:
  - frontend:
      buildPath: "/"
      phases:
        preBuild:
          commands:
            - npm install -g pnpm
            - pnpm install
        build:
          commands:
            - pnpm dlx turbo build --filter=web
      artifacts:
        baseDirectory: apps/web/.next
        files:
          - "**/*"
      cache:
        paths:
          - node_modules/**/*
          - apps/web/.next/cache/**/*
    appRoot: apps/web
```
This configuration file tells AWS Amplify how to build and deploy your application:
* The `version` field specifies the Amplify configuration version
* Under `applications`, we define the build settings for our web app:
* `buildPath` indicates where to run the build commands
* `preBuild` phase installs pnpm and project dependencies
* `build` phase runs the Turborepo build command for the web app
* `artifacts` specifies which files to deploy (the Next.js build output)
* `cache` configures which directories to cache between builds
* `appRoot` points to the web application directory
AWS Amplify will use this configuration to automatically build and deploy your app whenever you push changes to your repository. It's also useful for defining other resources that you can use and link to your project.
## Create a new Amplify project
We'll use the [AWS Amplify](https://aws.amazon.com/amplify/) web interface to deploy our app. First, let's create a new project.

Proceed with the option to *Deploy an app*.
## Connect repository
Choose the Git provider of your project and select the repository you want to deploy.

If your repository is private, you need to authorize Amplify to access it. It's recommended to follow a *least privileged access* approach, granting access only to the repository you want to deploy, not the entire account.
Select the branch you want to deploy and make sure to enable the *My app is a monorepo* option - configure it with the path to the app that you want to deploy (e.g. `apps/web`).

## Configure build settings
Finalize your deployment by configuring the build settings to match your project's specific needs. Refer to the points below to ensure a seamless deployment process.

Make sure that the build command and build output directory are set to the correct values (they should match the configuration file from step 1).
### Environment variables
In the *Advanced settings* section, you can define environment variables that will be available to your application at runtime.

Verify that all required environment variables are defined, so your app can be built and deployed successfully.
## Review and deploy!
In the next step, you'll be able to review the configuration you've created and deploy your app. It's the right time to make sure that everything is set up correctly.

After making sure that everything is set up correctly, you can click on the *Save and deploy* button to start the deployment process.
When your app is deployed, you'll be able to access it via the URL provided in the Amplify console:

That's it! Your app is now deployed to AWS Amplify, congratulations! 🎉
Feel free to scale your deployment to multiple regions, add custom domains, and use other Amplify features to make your app more robust and scalable.
Check out the [AWS Amplify documentation](https://docs.aws.amazon.com/amplify/latest/userguide/welcome.html) for more information on how to use Amplify to its full potential.
file: ./src/content/docs/(core)/web/deployment/api.mdx
meta: {
"title": "Standalone API",
"description": "Learn how to deploy your API as a dedicated service."
}
Sometimes you want to deploy your API as a standalone service. This is useful if you want to deploy your API to a different domain or to deploy it as a microservice. You can also follow this approach if you don't need a web app, but still need API service for [mobile app](/docs/mobile) or [browser extension](/docs/extension).
Deploying your API as a standalone service provides enhanced flexibility and scalability. This allows you to independently scale your API from your web app. It's particularly beneficial for executing "long-running" tasks on your backend, such as report generation, real-time data processing, or background tasks that are likely to timeout in a serverless environment.
This guide explains how to deploy your TurboStarter API as a standalone service. As Hono has multiple deployment options (e.g. [Deno](https://hono.dev/docs/getting-started/deno), [Bun](https://hono.dev/docs/getting-started/bun)), this guide will focus primarily on the [Node.js](https://hono.dev/docs/getting-started/nodejs) deployment.
## Create separate API app
We have a [dedicated guide](/docs/web/customization/add-app) on how to add another app to your project. However, in this case, only a few files need to be added, so we can do it quickly here.
First, let's create an `api` directory inside the `apps` directory - it will be the root of your API app.
Next, add the following files into the `apps/api` directory:
```json title="apps/api/package.json"
{
  "name": "api",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "build": "esbuild ./src/index.ts --bundle --platform=node --outfile=dist/index.js",
    "clean": "git clean -xdf dist .turbo node_modules",
    "dev": "dotenv -c -- tsx watch src/index.ts",
    "start": "node dist/index.js",
    "typecheck": "tsc --noEmit"
  },
  "dependencies": {
    "@hono/node-server": "1.13.7",
    "@turbostarter/api": "workspace:*"
  },
  "devDependencies": {
    "@turbostarter/tsconfig": "workspace:*",
    "@types/node": "20.16.10",
    "esbuild": "0.24.2",
    "tsx": "4.19.2",
    "typescript": "catalog:"
  }
}
```
```json title="apps/api/tsconfig.json"
{
  "extends": "@turbostarter/tsconfig/base.json",
  "include": ["src"],
  "exclude": ["node_modules"]
}
```
```ts title="apps/api/src/index.ts"
import { serve } from "@hono/node-server";

import { appRouter } from "@turbostarter/api";

serve(
  {
    fetch: appRouter.fetch,
    port: Number(process.env.PORT) || 3001,
  },
  ({ port }) => {
    console.log(`Server is running on ${port} 🚀`);
  },
);
```
This gives you the minimal configuration required to run your API as a standalone service. Of course, you can add more tooling (e.g. ESLint or Prettier) if needed; we keep it minimal for the sake of this guide.
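Under the hood, `@hono/node-server` bridges the `fetch`-style handler to Node's built-in `http` server. A dependency-free sketch of the same idea, with a plain function standing in for `appRouter.fetch`:

```typescript
import { createServer } from "node:http";

// Stand-in for a fetch-style router such as appRouter.fetch (illustrative only)
const handler = (path: string) =>
  path === "/api/health"
    ? { status: 200, body: "ok" }
    : { status: 404, body: "not found" };

const server = createServer((req, res) => {
  const { status, body } = handler(req.url ?? "/");
  res.writeHead(status, { "content-type": "text/plain" });
  res.end(body);
});

// In a real entrypoint you would now call:
// server.listen(Number(process.env.PORT) || 3001);
```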
## Connect web app to API
The API will be running on a different URL than your web app. For the minimal setup and to avoid handling [cross-origin resource sharing (CORS)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) issues, we will rewrite the API URL in the web app.
To do this, you will need to change your `next.config.js` file to include the API URL rewrite:
```js title="apps/web/next.config.js"
/** @type {import("next").NextConfig} */
const config = {
  rewrites: async () => [
    {
      source: "/api/:path*",
      destination: `${process.env.NEXT_PUBLIC_API_URL ?? "http://localhost:3001"}/api/:path*`,
    },
  ],
};
```
It's recommended to use an environment variable (e.g. `NEXT_PUBLIC_API_URL`) to set the API URL. This is a good practice to make it easier to change the API URL in different environments (e.g. development, staging, production).
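Conceptually, the rewrite forwards any `/api/*` path on the web origin to the standalone server. A hypothetical helper illustrating the mapping (not actual Next.js code):

```typescript
// Illustrative only: the mapping the Next.js rewrite performs for incoming paths
const API_URL = process.env.NEXT_PUBLIC_API_URL ?? "http://localhost:3001";

const rewritePath = (path: string) =>
  path.startsWith("/api/") ? `${API_URL}${path}` : path;

console.log(rewritePath("/api/billing/webhook"));
```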
Now you should be able to run your API as a standalone service. When you run the project with `pnpm dev`, you'll see a new app called `api`, with your API server running on [http://localhost:3001](http://localhost:3001).
## Deploy!
You can basically deploy your API as any other Node.js project. We will quickly go through the two most popular options: [PaaS](https://en.wikipedia.org/wiki/Platform_as_a_service) and [Docker](https://www.docker.com/).
### Platform as a Service (PaaS)
PaaS providers like [Vercel](https://vercel.com/), [Heroku](https://www.heroku.com/), or [Netlify](https://www.netlify.com/) allow you to deploy your Node.js app with a few clicks. You can follow our [dedicated guides](/docs/web/deployment/checklist#deploy-web-app-to-production) for the most popular providers. The process is similar for each and contains a few crucial steps:
1. Connecting your repository to the PaaS provider
2. Setting up build settings (e.g. build command, output directory)
3. Setting up environment variables
4. Deploying the project
To make sure your API is built and run correctly, you need to ensure that the appropriate commands are configured. In our case, the following commands need to be set:
```bash
# Build command
pnpm turbo build --filter=api
```
```bash
# Start command
pnpm --filter=api start
```
This is required to ensure that the PaaS provider of your choice will be able to build and run your application correctly.
### Docker
Deploying your API as a Docker container is a good option if you want to have more control over the deployment process. You can follow our [dedicated guide](/docs/web/deployment/docker) to learn how to deploy your API as a Docker container.
For the API application, the `Dockerfile` will be located in the `apps/api` directory and it could look like this:
```dockerfile title="apps/api/Dockerfile"
FROM node:20-alpine AS base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
FROM base AS pruner
WORKDIR /app
RUN apk add --no-cache libc6-compat
COPY . .
RUN pnpm dlx turbo prune api --docker
FROM base AS builder
WORKDIR /app
RUN apk add --no-cache libc6-compat
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN pnpm install --frozen-lockfile --ignore-scripts --prefer-offline && pnpm store prune
ENV SKIP_ENV_VALIDATION=1 \
NODE_ENV=production
COPY --from=pruner /app/out/full/ .
RUN pnpm dlx turbo build --filter=api
FROM base AS runner
WORKDIR /app
RUN addgroup -g 1001 -S nodejs && \
adduser -S api -u 1001 -G nodejs
COPY --from=builder --chown=api:nodejs /app/apps/api/dist/ ./
USER api
EXPOSE 3001
CMD ["node", "index.js"]
```
To test if everything works correctly, you can run a [container](https://docs.docker.com/get-started/03_run_your_app/) locally with the following commands:
```bash
docker build -f ./apps/api/Dockerfile . -t turbostarter-api
docker run -p 3001:3001 turbostarter-api
```
Make sure to also [pass](https://docs.docker.com/reference/cli/docker/container/run/#env) all the required environment variables to the container (e.g. via the `--env-file` flag), so your API can start without any issues.
Deploying your API as a Docker container is a great way to isolate your API from the host environment, making it easier to deploy and scale. It also simplifies the workflow if you're working with a team, as you can easily share the Docker image with your colleagues and they will run the API in the **exact same** environment.
That's it! You can now grow your API layer as a standalone service, separated from other apps in your project, and deploy it anywhere you want.
file: ./src/content/docs/(core)/web/deployment/checklist.mdx
meta: {
"title": "Checklist",
"description": "Let's deploy your TurboStarter app to production!"
}
When you're ready to deploy your project to production, follow this checklist.
This process may take a few hours and some trial and error, so buckle up: you're almost there!
## Create database instance
**Why is it necessary?**
A production-ready database instance is essential for storing your application's data securely and reliably in the cloud. [PostgreSQL](https://www.postgresql.org/) is the recommended database for TurboStarter due to its robustness, features, and wide support.
**How to do it?**
You have several options for hosting your PostgreSQL database:
* [Supabase](https://supabase.com/) - Provides a fully managed Postgres database with additional features
* [Vercel Postgres](https://vercel.com/storage/postgres) - Serverless SQL database optimized for Vercel deployments
* [Neon](https://neon.tech/) - Serverless Postgres with automatic scaling
* [Turso](https://turso.tech/) - Edge database built on libSQL with global replication
* [DigitalOcean](https://www.digitalocean.com/products/managed-databases) - Managed database clusters with automated failover
Choose a provider based on your needs for:
* Pricing and budget
* Geographic region availability
* Scaling requirements
* Additional features (backups, monitoring, etc.)
## Migrate database
**Why is it necessary?**
Pushing database migrations ensures that your database schema in the remote database instance is configured to match TurboStarter's requirements. This step is crucial for the application to function correctly.
**How to do it?**
You have two options for running a migration:
TurboStarter comes with a predefined GitHub Actions workflow to handle database migrations. You can find its definition in the `.github/workflows/publish-db.yml` file.
All you need to do is set your `DATABASE_URL` as a [secret for your GitHub repository](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions).
Then, you can run the workflow, which will publish the database schema to your remote database instance.
[Check how to run a GitHub Actions workflow.](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow)
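The overall shape of such a workflow, heavily trimmed (an illustrative sketch only; the actual definition lives in `.github/workflows/publish-db.yml`):

```yaml
# Illustrative sketch, not the actual workflow file
name: Publish DB
on: workflow_dispatch # run manually from the Actions tab
jobs:
  migrate:
    runs-on: ubuntu-latest
    env:
      DATABASE_URL: ${{ secrets.DATABASE_URL }}
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
      - run: pnpm install
      - run: pnpm db:migrate
```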
You can also run your migrations locally, although this is not recommended for production.
To do so, set the `DATABASE_URL` environment variable to your database URL (from your database provider) in the `.env.local` file and run the following command:
```bash
pnpm db:migrate
```
This command will run the migrations and apply them to your remote database.
[Learn more about database migrations.](/docs/web/database/migrations)
## Configure OAuth Providers
**Why is it necessary?**
Configuring OAuth providers like [Google](https://www.better-auth.com/docs/authentication/google) or [Github](https://www.better-auth.com/docs/authentication/github) ensures that users can log in using their existing accounts, enhancing user convenience and security. This step involves setting up the OAuth credentials in the provider's developer console, configuring the necessary environment variables, and setting up callback URLs to point to your production app.
**How to do it?**
1. Follow the provider-specific guides to set up OAuth credentials for the providers you want to use. For example:
* [Google OAuth setup guide](https://www.better-auth.com/docs/authentication/google)
* [Github OAuth setup guide](https://www.better-auth.com/docs/authentication/github)
2. Once you have the credentials, set the corresponding environment variables in your project. For the example providers above:
* For Google: `GOOGLE_CLIENT_ID` and `GOOGLE_CLIENT_SECRET`
* For Github: `GITHUB_CLIENT_ID` and `GITHUB_CLIENT_SECRET`
3. Ensure that the callback URLs for each provider are set to point to your production app. **This is crucial for the OAuth flow to work correctly.**
You can add or remove OAuth providers based on your needs. Just make sure to follow the provider's setup guide, set the required environment variables, and configure the callback URLs correctly.
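For the example providers above, the resulting entries look like this (placeholder values, not real credentials):

```dotenv
GOOGLE_CLIENT_ID="your-google-client-id"
GOOGLE_CLIENT_SECRET="your-google-client-secret"
GITHUB_CLIENT_ID="your-github-client-id"
GITHUB_CLIENT_SECRET="your-github-client-secret"
```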
## Setup billing provider
**Why is it necessary?**
Well - you want to get paid, right? Setting up billing ensures that you can charge your users for using your SaaS application, enabling you to monetize your service and cover operational costs.
**How to do it?**
* Create a [Stripe](/docs/web/billing/stripe) or [Lemon Squeezy](/docs/web/billing/lemon-squeezy) account.
* Update the environment variables with the correct values for your billing service.
* Point webhooks from Stripe or Lemon Squeezy to `/api/billing/webhook`.
* Refer to the [relevant documentation](/docs/web/billing/overview) for more details on setting up billing.
## Setup emails provider
**Why is it necessary?**
Setting up an email provider is crucial for your SaaS application to send notifications, confirmations, and other important messages to your users. This enhances user experience and engagement, and is a standard practice in modern web applications.
**How to do it?**
* Create an account with an email service provider of your choice. See [available providers](/docs/web/emails/configuration#providers) for more information.
* Update the environment variables with the correct values for your email service.
* Refer to the [relevant documentation](/docs/web/emails/overview) for more details on setting up email.
## Environment variables
**Why is it necessary?**
Setting the correct environment variables is essential for the application to function correctly. These variables include API keys, database URLs, and other configuration details required for your app to connect to various services.
**How to do it?**
Use our `.env.example` files to get the correct environment variables for your project, then add them to your **hosting provider's environment variables**. Once you have the deployment URL, set it in the environment variables and redeploy the app.
## Deploy web app to production
**Why is it necessary?**
Because your users are waiting! Deploying your Next.js app to a hosting provider makes it accessible to users worldwide, allowing them to interact with your application.
**How to do it?**
Deploy your Next.js app to your chosen hosting provider. **Copy the deployment URL and set it as an environment variable in your project's settings.** Feel free to check out our dedicated guides for the most popular hosting providers:
We also have a dedicated guide for [deploying your API as a standalone service](/docs/web/deployment/api).
That's it! Your app is now live and accessible to your users, good job! 🎉
Before you call it done, a few finishing touches:
* Update the legal pages with your company's information (privacy policy, terms of service, etc.).
* Remove the placeholder blog and documentation content / or replace it with your own.
* Customize authentication emails and other email templates.
* Update the favicon and logo with your own branding.
* Update the FAQ and other static content with your own information.
file: ./src/content/docs/(core)/web/deployment/docker.mdx
meta: {
"title": "Docker",
"description": "Learn how to containerize your TurboStarter app with Docker."
}
[Docker](https://docker.com) is a popular platform for containerizing applications, making it easy to package your app with all its dependencies for consistent performance across environments. It simplifies development, testing, and deployment.
This guide explains how to containerize your TurboStarter app using Docker. You'll learn to create a Dockerfile, build a container image, and run your app in a container for a reliable and portable setup.
## Configure Next.js for Docker
First of all, we need to configure Next.js to output the build files in the [standalone format](https://nextjs.org/docs/pages/api-reference/config/next-config-js/output) - it's required for the Docker image to work. To do this, we need to add the following to our `next.config.js` file:
```js title="apps/web/next.config.js"
/** @type {import("next").NextConfig} */
const config = {
output: "standalone",
...
};
```
## Create a Dockerfile
[Dockerfile](https://docs.docker.com/get-started/02_our_app/) is a text file that contains the instructions for building a [Docker image](https://docs.docker.com/get-started/02_our_app/). It defines the environment, dependencies, and commands needed to run your app. You can safely copy the following Dockerfile to your project:
```dockerfile title="apps/web/Dockerfile"
FROM node:20-alpine AS base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
FROM base AS pruner
WORKDIR /app
RUN apk add --no-cache libc6-compat
COPY . .
RUN pnpm dlx turbo prune web --docker
FROM base AS builder
WORKDIR /app
RUN apk add --no-cache libc6-compat
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN pnpm install --frozen-lockfile --ignore-scripts --prefer-offline && pnpm store prune
ENV SKIP_ENV_VALIDATION=1 \
NODE_ENV=production
COPY --from=pruner /app/out/full/ .
RUN pnpm dlx turbo build --filter=web
FROM base AS runner
WORKDIR /app
RUN addgroup -g 1001 -S nodejs && \
adduser -S web -u 1001 -G nodejs
COPY --from=builder --chown=web:nodejs /app/apps/web/.next/standalone ./
COPY --from=builder --chown=web:nodejs /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=builder --chown=web:nodejs /app/apps/web/public ./apps/web/public
USER web
EXPOSE 3000
CMD ["node", "apps/web/server.js"]
```
Feel free to check out our [self-hosting guide](/blog/self-host-your-nextjs-turborepo-app-with-docker-in-5-minutes) for more details on how each stage of the Dockerfile works.
And that's all we need! You can now build and run your Docker image to deploy your app anywhere you want in an [isolated environment](https://docs.docker.com/get-started/03_run_your_app/).
## Run a container
To test if everything works correctly, you can run a [container](https://docs.docker.com/get-started/03_run_your_app/) locally with the following commands:
```bash
docker build -f ./apps/web/Dockerfile . -t turbostarter
docker run -p 3000:3000 turbostarter
```
Make sure to also [pass](https://docs.docker.com/reference/cli/docker/container/run/#env) all the required environment variables to the container (e.g. via the `--env-file` flag), so your app can start without any issues.
If everything works correctly, you should be able to access your app at [http://localhost:3000](http://localhost:3000).
That's it! You can now build and deploy your app as a Docker container to any supported hosting (e.g. [Fly.io](/docs/web/deployment/fly)).
Using Docker containers is a great way to isolate your app from the host environment, making it easier to deploy and scale. It also simplifies the workflow if you're working with a team, as you can easily share the Docker image with your colleagues and they will run the app in the **exact same** environment.
file: ./src/content/docs/(core)/web/deployment/fly.mdx
meta: {
"title": "Fly.io",
"description": "Learn how to deploy your TurboStarter app to Fly.io."
}
[Fly.io](https://fly.io) makes deploying web applications to the cloud easy and efficient. It handles scaling, monitoring, and logging so you can focus on building your app.
This guide explains how to deploy your TurboStarter app on Fly.io. You'll learn how to leverage [Docker](/docs/web/deployment/docker) containers to deploy your app, set up builds, and manage environment variables for a smooth and reliable deployment.
To deploy to Fly.io, you need to have an account. You can create one [here](https://fly.io/app/signup).
You also need to have [Docker](/docs/web/deployment/docker) configured in your project.
## Setup Fly CLI
As we will be using the Fly CLI to launch and manage our app, you need to install and set it up on your machine.
[Check the official documentation on how to install Fly CLI](https://fly.io/docs/flyctl/install/).
After you've installed the Fly CLI, log in to your Fly account and connect it with your machine:
```bash
fly auth login
```
[Read more about authenticating CLI](https://fly.io/docs/flyctl/auth/#available-commands).
Now you're ready to launch your app!
## Launch project
Use a [Dockerfile](/docs/web/deployment/docker) to launch your app with [Fly CLI](https://fly.io/docs/reference/flyctl/). You can use the following command to do this from your local machine:
```bash
fly launch --dockerfile apps/web/Dockerfile
```
Make sure to set all the required configuration in the CLI steps (e.g. set port to `3000`, setup additional services, choose billing plan, etc.).

If you want to achieve better performance and lower latency in your API requests, you can customize the region of your Fly.io app. Make sure to set it to the region closest to your database and users.
After the launch is complete, Fly will output your project configuration into a `fly.toml` file. Your project's configuration is stored there; feel free to customize it to your needs:
```toml title="fly.toml"
app = 'web-aged-sky-5596'
primary_region = 'ams'
[build]
dockerfile = 'apps/web/Dockerfile'
[http_service]
internal_port = 3000
force_https = true
auto_stop_machines = 'stop'
auto_start_machines = true
min_machines_running = 0
processes = ['app']
[[vm]]
memory = '512mb'
cpu_kind = 'shared'
cpus = 1
```
See [Fly.io documentation](https://fly.io/docs/reference/configuration) for more information on how to use this file.
## Set up secrets
To make your app fully functional, you need to set up required environment variables. You can do this by running the following command:
```bash
fly secrets set DATABASE_URL=...
```
They will be automatically added to your app's runtime environment.
## Deploy!
Each time you make changes to `fly.toml` or secrets, you need to re-deploy your app to apply changes to the running app.
To do this, just run the following command in your project directory:
```bash
fly deploy
```
This will build your app and deploy it to Fly.io with the latest code version.

That's it! Your app is now deployed to Fly.io, congratulations! 🎉
Fly is a platform that allows you to deploy and manage applications in the cloud. It provides a simple and intuitive way to deploy your app, with features such as automatic scaling, load balancing, and rolling updates. With Fly, you can focus on building your app without worrying about the underlying infrastructure.
file: ./src/content/docs/(core)/web/deployment/netlify.mdx
meta: {
"title": "Netlify",
"description": "Learn how to deploy your TurboStarter app to Netlify."
}
[Netlify](https://netlify.com) is a powerful platform for deploying modern web applications. It offers continuous deployment, serverless functions, and a global CDN to ensure your application is fast and reliable.
In this guide, we will walk through the steps to deploy your TurboStarter app to Netlify. You will learn how to connect your repository, configure build settings, and manage environment variables to ensure a smooth deployment process.
To deploy to Netlify, you need to have an account. You can create one [here](https://netlify.com/signup).
## Create new site
Once you've created your account and logged in, the Netlify dashboard will display an option to add a new site. Click on the *Import from Git* button to begin connecting your Git repository.

If you already have a Netlify account, you can reach this step by clicking the *Sites* tab in the navigation menu.
## Connect your repository
Choose the Git provider of your project and select the repository you want to deploy.

To connect your repository, you need to authorize Netlify to access it. It's recommended to follow a *least privileged access* approach, granting access only to the repository you want to deploy, not the entire account.
## Configure build settings
Last step before deploying! Configure the build settings according to your project configuration. Use the screenshots provided below for reference to ensure a smooth deployment process.

Also, add all environment variables under the *Environment variables* section - they're required for the build process to work.
## Deploy!
Click on the *Deploy* button to start the deployment process.

That's it! Your app is now deployed to Netlify, congratulations! 🎉
If you want to achieve better performance and lower latency in your API requests, you can customize the region of your Netlify serverless functions. Make sure to set it to the region closest to your database and users.

Unfortunately, it's a paid feature, so you need to upgrade your Netlify account to change it.
file: ./src/content/docs/(core)/web/deployment/railway.mdx
meta: {
"title": "Railway",
"description": "Learn how to deploy your TurboStarter app to Railway."
}
[Railway](https://railway.app) is a platform that allows you to deploy your web applications to a cloud environment. It provides a simple and efficient way to manage your application's infrastructure, including scaling, monitoring, and logging.
This guide provides a step-by-step walkthrough for deploying your TurboStarter app on Railway and taking advantage of its features in a production environment. You'll discover how to link your repository, tailor build settings, and manage environment variables, ensuring a smooth, optimized deployment that leverages Railway's capabilities.
To deploy to Railway, you need to have an account. You can create one [here](https://railway.app/signup).
## Create new project
We'll use [Railway](https://railway.app) web app to deploy our project. First, let's create a new project.

Proceed with the option to *Deploy from Github repo*.
## Connect repository
Choose the Git provider of your project and select the repository you want to deploy.

If your repository is private, you need to authorize Railway to access it. It's recommended to follow a *least privileged access* approach, granting access only to the repository you want to deploy, not the entire account.
## Configure project settings
Finalize your deployment by configuring the build settings to match your project's specific needs. Refer to the points below to ensure a seamless deployment process.
### Commands
Configure the build and start commands to ensure that your project is built and started correctly.

Make sure to set them to the following values:
* **Build command** - `pnpm dlx turbo build --filter=web`
* **Start command** - `pnpm --filter=web start`
### Environment variables
Last, but not least, you need to set the environment variables for your project. Make sure to check if all the required variables are set.

If you want to achieve better performance, lower latency in your API requests or add some replicas of your application, you can customize the region of your Railway instance. Make sure to set it to the region closest to your database and users.

You can also use a [Railway config file](https://docs.railway.com/guides/config-as-code) to manage your project's settings in one place, as code.
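As an illustration, a minimal `railway.json` mirroring the commands above might look like this (a sketch following Railway's config-as-code schema; adjust to your setup):

```json
{
  "$schema": "https://railway.com/railway.schema.json",
  "build": {
    "buildCommand": "pnpm dlx turbo build --filter=web"
  },
  "deploy": {
    "startCommand": "pnpm --filter=web start"
  }
}
```

With this file committed to your repository, Railway picks the commands up automatically, so the dashboard settings and your code stay in sync.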
## Deploy!
Click on the *Deploy* button to start the deployment process.

That's it! Your app is now deployed to Railway, congratulations! 🎉
Feel free to scale your deployment to multiple regions or isolate it in a separate network. Check out the [Railway documentation](https://docs.railway.app) for more information about the available services.
file: ./src/content/docs/(core)/web/deployment/render.mdx
meta: {
"title": "Render",
"description": "Learn how to deploy your TurboStarter app to Render."
}
[Render](https://render.com) offers a unique combination of features that make it an ideal platform for deploying modern web applications. With Render, you can leverage continuous deployment, managed databases, and a global CDN to ensure your application is not only fast and reliable but also scalable and secure.
In this guide, we will walk through the steps to deploy your TurboStarter app to Render, highlighting the benefits of using Render's platform. You will learn how to connect your repository, configure build settings, and manage environment variables to ensure a seamless and efficient deployment process that takes advantage of Render's features.
To deploy to Render, you need to have an account. You can create one [here](https://render.com/signup).
## Create a new service
Navigate to the [Render dashboard](https://dashboard.render.com) and click on the *New* button.

Pick the *Web Service* option and proceed to the next step.
## Connect your repository
Choose the Git provider of your project and select the repository you want to deploy.

If your repository is private, you need to authorize Render to access it. It's recommended to follow a *least privileged access* approach, granting access only to the repository you want to deploy, not your entire account.
## Configure service settings
Finalize your deployment by configuring the build settings to match your project's specific needs. Refer to the screenshots below to ensure a seamless deployment process.

You can also group your service with other services (e.g. [databases](https://render.com/docs/postgresql-creating-connecting) or [cron jobs](https://render.com/docs/cronjobs)) in a [Project](https://render.com/docs/projects), which will help you manage them together.
[Read official documentation for more information](https://render.com/docs/projects).
If you want to achieve better performance and lower latency in your API requests, you can customize the region of your Render service. Make sure to set it to the region closest to your database and users.
### Commands
Configure the build and start commands to ensure that your project is built and started correctly.

Make sure to set them to the following values:
* **Build command** - `pnpm install --frozen-lockfile; pnpm dlx turbo build --filter=web`
* **Start command** - `pnpm --filter=web start`
### Instance type
Select a plan that fits your project's needs.

For testing purposes or MVPs, you can safely use the *Free* plan. However, for the production version, it's recommended to upgrade your plan, as it offers more resources and your project won't be paused after periods of inactivity.
### Environment variables
Last but not least, you need to set the environment variables for your project. Make sure to check that all the required variables are set.

You can also modify *Advanced settings* to set e.g. [health checks](https://render.com/docs/deploys#health-checks) or modify [auto deploy](https://render.com/docs/deploys#automatic-git-deploys) triggers.
## Deploy!
Click on the *Deploy Web Service* button to start the deployment process.

That's it! Your app is now deployed to Render, congratulations! 🎉
Render is a powerful platform with a lot of integrations and features. Feel free to check out the [official documentation](https://render.com/docs) for more information.
file: ./src/content/docs/(core)/web/deployment/vercel.mdx
meta: {
"title": "Vercel",
"description": "Learn how to deploy your TurboStarter app to Vercel."
}
In general, you can deploy the application to any hosting provider that supports Node.js, but we recommend using [Vercel](https://vercel.com) for the best experience.
Vercel is the easiest way to deploy Next.js apps. It's the company behind Next.js and has first-class support for Next.js.
To deploy to Vercel, you need to have an account. You can create one [here](https://vercel.com/signup).
TurboStarter has two separate ways to deploy to Vercel, each shipping with **one-click deployment**. Choose the one that best fits your needs.
Deploying with this method is the easiest and fastest way to get your app up and running on the cloud provider. Follow these steps:
## Connect your git repository
After signing up, you will be prompted to import a git repository. Select the git provider of your project and connect your git account with Vercel.

## Configure project settings
As we're working in a monorepo, some additional settings are required to make the build process work.
Make sure to set the following settings:
* **Build command**: `pnpm turbo build --filter=web` - to build only the web app
* **Root directory**: `apps/web` - to make sure Vercel uses the web folder as the root directory (make sure to check the *Include files outside the root directory in the Build Step* option, which ensures that all packages from your monorepo are included in the build process)

## Configure environment variables
Please make sure to set all the environment variables required for the project to work correctly. You can find the list of required environment variables in the `.env.example` file in the `apps/web` directory.
The environment variables can be set in the Vercel dashboard under *Project Settings* > *Environment Variables*. Make sure to set them for all environments (Production, Preview, and Development) as needed.
**Failure to set the environment variables will result in the project not working correctly.**
If the build fails, dig into the logs to find the issue. Our Zod configuration validates the environment and reports any missing variables, so check the logs to see which ones are missing.
The first deployment may fail if you don't have a custom domain connected yet, since you can't set it in the environment variables at that point. That's fine: let the first deployment fail, connect the domain, add it to the environment variables, and redeploy.
## Deploy!
Click on the *Deploy* button to start the deployment process.

That's it! Your app is now deployed to Vercel, congratulations! 🎉
Although connecting your repository is the easiest way to deploy to Vercel, we recommend using the preconfigured Github Actions for the most granular control over your deployments.
We'll leverage [Vercel CLI](https://vercel.com/docs/cli) to deploy the application on the CI/CD pipeline. [See official documentation on deploying to Github Actions](https://vercel.com/guides/how-can-i-use-github-actions-with-vercel).
## Get Vercel Access Token
To deploy the application, we need to get a Vercel access token.
Please, follow [this guide](https://vercel.com/guides/how-do-i-use-a-vercel-api-access-token) to create one.

## Install Vercel CLI
We need to install [Vercel CLI](https://vercel.com/docs/cli) locally to be able to get the required credentials for our Github Actions.
You can install it using the following command:
```bash
pnpm i -g vercel
```
Then, log in to Vercel using the following command:
```bash
vercel login
```
## Get credentials
Inside your project folder, run the following command to link it to a Vercel project:
```bash
vercel link
```
This will generate a `.vercel` folder, where you can find `project.json` file with `projectId` and `orgId`.
## Configure Github Actions
Inside GitHub, add `VERCEL_TOKEN`, `VERCEL_ORG_ID`, and `VERCEL_PROJECT_ID` as [secrets](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions) to your repository.

This will allow Github Actions to access your settings and deploy the application to Vercel.
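For reference, the deployment part of such a workflow typically boils down to pulling the project settings, building, and deploying with the Vercel CLI. The sketch below is illustrative only; step names and options are assumptions, and the actual configuration ships in `.github/workflows/publish-web.yml`:

```yaml
# Illustrative deployment steps using Vercel CLI and the repository secrets
env:
  VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
  VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}
steps:
  - run: pnpm dlx vercel pull --yes --environment=production --token=${{ secrets.VERCEL_TOKEN }}
  - run: pnpm dlx vercel build --prod --token=${{ secrets.VERCEL_TOKEN }}
  - run: pnpm dlx vercel deploy --prebuilt --prod --token=${{ secrets.VERCEL_TOKEN }}
```

The `--prebuilt` flag tells Vercel to upload the output produced by `vercel build` on CI instead of rebuilding on Vercel's side.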
## Configure project settings
As we're working in a monorepo, some additional settings are required to make the build process work.
Make sure to set the following settings:
* **Build command**: `pnpm turbo build --filter=web` - to build only the web app
* **Root directory**: `apps/web` - to make sure Vercel uses the web folder as the root directory (make sure to check the *Include files outside the root directory in the Build Step* option, which ensures that all packages from your monorepo are included in the build process)

## Configure environment variables
Please make sure to set all the environment variables required for the project to work correctly. You can find the list of required environment variables in the `.env.example` file in the `apps/web` directory.
The environment variables can be set in the Vercel dashboard under *Project Settings* > *Environment Variables*. Make sure to set them for all environments (Production, Preview, and Development) as needed.
**Failure to set the environment variables will result in the project not working correctly.**
If the build fails, dig into the logs to find the issue. Our Zod configuration validates the environment and reports any missing variables, so check the logs to see which ones are missing.
The first deployment may fail if you don't have a custom domain connected yet, since you can't set it in the environment variables at that point. That's fine: let the first deployment fail, connect the domain, add it to the environment variables, and redeploy.
## Deploy!
By default, TurboStarter comes with a Github Actions workflow that can be [triggered manually](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow).
The configuration is located in `.github/workflows/publish-web.yml`; you can easily customize it to your needs, for example to trigger a deployment from the `main` branch.
```diff title=".github/workflows/publish-web.yml"
on:
- workflow_dispatch:
+ push:
+ branches:
+ - main
```
Then, every time you push to the `main` branch, the workflow will be triggered and the application will be deployed to Vercel.

That's it! Your app is now deployed to Vercel, congratulations! 🎉
## Troubleshooting
In some cases, users have reported issues with the deployment to Vercel using the default parameters. If you encounter problems, try these troubleshooting steps:
1. **Check root directory settings**
* Set the root directory to `apps/web`
* Enable *Include source files outside of the Root Directory* option
2. **Verify build configuration**
* Ensure the framework preset is set to Next.js
* Set build command to `pnpm turbo build --filter=web`
* Set install command to `pnpm install`
3. **Review deployment logs**
* If deployment fails, carefully review the build logs
* Look for any error messages about missing dependencies or environment variables
* Verify that all required environment variables are properly configured
If issues persist after trying these steps, check the [deployment troubleshooting guide](/docs/web/troubleshooting/deployment) for additional help.
file: ./src/content/docs/(core)/web/emails/configuration.mdx
meta: {
"title": "Configuration",
"description": "Learn how to configure your emails in TurboStarter."
}
The `@turbostarter/email` package provides a simple and flexible way to send emails using various email providers. It abstracts the complexity of different email services and offers a consistent interface for sending emails with pre-defined templates.
To configure the email service, you need to set a few environment variables.
```dotenv
EMAIL_PROVIDER="resend"
EMAIL_FROM="hello@resend.dev"
EMAIL_THEME="orange"
```
Let's break down the variables:
* `EMAIL_PROVIDER` - The email provider to use. Defaults to [Resend](/docs/web/emails/configuration#resend).
* `EMAIL_FROM` - The email address that emails will be sent from. **Please make sure that the mail address and domain are verified in your mail provider.**
* `EMAIL_THEME` - The theme color to use for the emails. See [Themes](/docs/web/customization/styling#themes) for more information.
Configuration will be validated against the schema, so you will see the error messages in the console if something is not right.
## Providers
TurboStarter supports multiple email providers, each with its own configuration. Below, you'll find detailed information on how to set up and use each supported provider. Choose the one that best fits your needs and follow the instructions in the respective accordion section.
To use Resend as your email provider, you need to [create an account](https://resend.com/) and [obtain your API key](https://resend.com/docs/dashboard/api-keys/introduction).
Then, set it as an environment variable in your `.env.local` file in `apps/web` directory and your deployment environment:
```dotenv
RESEND_API_KEY="your-api-key"
```
Also, make sure to activate the Resend provider as your email provider:
```dotenv
EMAIL_PROVIDER="resend"
```
To customize the provider, you can find its definition in `packages/email/src/providers/resend` directory.
For more information, please refer to the [Resend documentation](https://resend.com/docs).
To use SendGrid as your email provider, you need to [create an account](https://sendgrid.com/) and [obtain your API key](https://sendgrid.com/docs/ui/account-and-settings/api-keys/).
Then, set it as an environment variable in your `.env.local` file in `apps/web` directory and your deployment environment:
```dotenv
SENDGRID_API_KEY="your-api-key"
```
Also, make sure to activate the SendGrid provider as your email provider:
```dotenv
EMAIL_PROVIDER="sendgrid"
```
To customize the provider, you can find its definition in `packages/email/src/providers/sendgrid` directory.
For more information, please refer to the [SendGrid documentation](https://sendgrid.com/docs).
To use Postmark as your email provider, you need to [create an account](https://postmarkapp.com/) and [obtain your server API token](https://postmarkapp.com/support/article/1008-what-are-the-account-and-server-api-tokens).
Then, set it as an environment variable in your `.env.local` file in `apps/web` directory and your deployment environment:
```dotenv
POSTMARK_API_KEY="your-secret-api-token"
```
Also, make sure to activate the Postmark provider as your email provider:
```dotenv
EMAIL_PROVIDER="postmark"
```
To customize the provider, you can find its definition in `packages/email/src/providers/postmark` directory.
For more information, please refer to the [Postmark documentation](https://postmarkapp.com/developer).
To use Plunk as your email provider, you need to [create an account](https://plunk.dev/) and [obtain your API key](https://docs.useplunk.com/api-reference/authentication).
Then, set it as an environment variable in your `.env.local` file in `apps/web` directory and your deployment environment:
```dotenv
PLUNK_API_KEY="your-api-key"
```
Also, make sure to activate the Plunk provider as your email provider:
```dotenv
EMAIL_PROVIDER="plunk"
```
To customize the provider, you can find its definition in `packages/email/src/providers/plunk` directory.
For more information, please refer to the [Plunk documentation](https://docs.useplunk.com).
If you're using the `nodemailer` as your email provider, you'll need to set the following SMTP configuration in your environment variables:
```dotenv
NODEMAILER_HOST="your-smtp-host"
NODEMAILER_PORT="your-smtp-port"
NODEMAILER_USER="your-smtp-user"
NODEMAILER_PASSWORD="your-smtp-password"
```
The variables are:
* `NODEMAILER_HOST`: The host of your SMTP server.
* `NODEMAILER_PORT`: The port of your SMTP server.
* `NODEMAILER_USER`: The username (usually the email address) of the SMTP account.
* `NODEMAILER_PASSWORD`: The password for the email account.
Also, make sure to activate the nodemailer provider as your email provider:
```dotenv
EMAIL_PROVIDER="nodemailer"
```
To customize the provider, you can find its definition in `packages/email/src/providers/nodemailer` directory.
For more information, please refer to the [nodemailer documentation](https://nodemailer.com/smtp/).
## Templates
In the `@turbostarter/email` package, we provide a set of pre-defined templates for you to use. You can find them in the `packages/email/src/templates` directory.
When you run your development server, you will be able to preview all available templates in the browser under [http://localhost:3005](http://localhost:3005).

Next to the templates, you can also find some shared components that you can use in your emails. The file structure looks like this:
Feel free to add your own templates and components or modify existing ones to match them with your brand and style.
### How to add a new template?
We'll go through the process of adding a new template, as it requires a few steps to make sure everything works correctly.
#### Define types
Let's assume that we want to add a **welcome email**, that new users will receive after signing up.
We'll start with defining new template type in `packages/email/src/types/templates.ts` file:
```ts title="templates.ts"
export const EmailTemplate = {
  ...AuthEmailTemplate,
  WELCOME: "welcome",
} as const;
```
Also, we would need to add types for variables that we'll pass to the template (if any), in our case it will be just a `name` of the user:
```ts title="templates.ts"
type WelcomeEmailVariables = {
  welcome: {
    name: string;
  };
};

export type EmailVariables = AuthEmailVariables | WelcomeEmailVariables;
```
By doing this, we ensure that the payload passed to the template has all required properties and we won't end up with an email that greets your user with "Hey, undefined!".
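The pattern can be boiled down to a simplified, standalone sketch (names shortened for illustration): the template id indexes into the variables map, so the compiler enforces the payload shape for each template.

```typescript
// Simplified version of the pattern used by the email package:
// the template id is a key of the variables map, so passing the
// wrong payload for a given template is a compile-time error.
type EmailVariables = {
  welcome: { name: string };
};

function render<T extends keyof EmailVariables>(
  id: T,
  variables: EmailVariables[T],
): string {
  return `template=${id}, name=${variables.name}`;
}

const output = render("welcome", { name: "John" });
// render("welcome", {}) would not compile: "name" is missing
```

The same idea scales to any number of templates: add a key to `EmailVariables` and the corresponding payload type is enforced everywhere that template is used.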
#### Create template
Next up, we need to create a file with the template itself. We'll create a `welcome.tsx` file in the `packages/email/src/templates` directory.
```tsx title="welcome.tsx"
import { Heading, Preview, Text } from "@react-email/components";
import { Button } from "../_components/button";
import { Layout } from "../_components/layout/layout";
import type { EmailTemplate, EmailVariables } from "../../types";
type Props = EmailVariables[typeof EmailTemplate.WELCOME];
export const Welcome = ({ name }: Props) => {
  return (
    <Layout>
      <Preview>Welcome to TurboStarter!</Preview>
      <Heading>Hi, {name}!</Heading>
      <Text>Start your journey with our app by clicking the button below.</Text>
      {/* button label is illustrative */}
      <Button>Get started</Button>
    </Layout>
  );
};
Welcome.subject = "Welcome to TurboStarter!";
Welcome.PreviewProps = {
  name: "John Doe",
};
export default Welcome;
```
As you can see, by defining appropriate types for the template, we can safely use the variables as props in the template.
To learn more about supported components, please refer to the [React Email documentation](https://react.email/docs/components).
#### Register template
We have to register the template in the main entrypoint of the templates in `packages/email/src/templates/index.ts` file:
```ts title="index.ts"
import { Welcome } from "./welcome";

export const templates = {
  ...
  [EmailTemplate.WELCOME]: Welcome,
} as const;
```
That way, it will be available in the `sendEmail` function, enabling us to send it from the server-side of your application.
```ts
import { sendEmail } from "@turbostarter/email/server";

sendEmail({
  to: "user@example.com",
  template: EmailTemplate.WELCOME,
  variables: {
    name: "John Doe",
  },
});
```
Learn more about sending emails in the [dedicated section](/docs/web/emails/sending).
Et voilà! You've just added a new email template to your application 🎉
### Translating templates
You can also translate your templates to support multiple languages. Each mail template is passed the `locale` property, which you can use to get the translation for the current locale. This allows you to maintain consistent translations across your application and emails.
The translation system **uses the same i18n** setup as your main application, so you can reuse your existing translation files and namespaces. The translations are loaded server-side when the email is generated, ensuring the correct language is used based on the user's preferences.
Here's how you can implement translations in your email templates:
```tsx
import { Heading, Preview, Text } from "@react-email/components";
import { getTranslation } from "@turbostarter/i18n/server";
import { Button } from "../_components/button";
import { Layout } from "../_components/layout/layout";
import type {
  EmailTemplate,
  EmailVariables,
  CommonEmailProps,
} from "../../types";
type Props = EmailVariables[typeof EmailTemplate.WELCOME] & CommonEmailProps;
export const Welcome = async ({ name, locale }: Props) => {
  const { t } = await getTranslation({ locale, ns: "auth" });

  return (
    <Layout>
      <Preview>{t("account.welcome.preview")}</Preview>
      <Heading>{t("account.welcome.heading", { name })}</Heading>
      <Text>{t("account.welcome.body")}</Text>
    </Layout>
  );
};
Welcome.subject = async ({ locale }: CommonEmailProps) => {
  const { t } = await getTranslation({ locale, ns: "auth" });

  return t("account.welcome.subject");
};
Welcome.PreviewProps = {
  name: "John Doe",
  locale: "en",
};
export default Welcome;
```
To send the email in the specified language, you can pass the optional `locale` argument to the `sendEmail` function:
```ts
sendEmail({
  to: "user@example.com",
  template: EmailTemplate.WELCOME,
  variables: {
    name: "John Doe",
  },
  locale: "en", // [!code highlight]
});
```
Learn more about translations in the [dedicated section](/docs/web/internationalization/translations).
file: ./src/content/docs/(core)/web/emails/overview.mdx
meta: {
"title": "Overview",
"description": "Get started with emails in TurboStarter.",
"index": true
}
For mailing functionality, TurboStarter integrates [React Email](https://react.email/docs/introduction) which enables you to build your emails from composable React components.
It's a simple, yet powerful library that allows you to **write your emails in React**.
It also allows you to use **Tailwind CSS for styling**, which is a huge advantage, as we can share almost everything from the main app with the emails package, keeping them consistent with the rest of the app.
You can read more about `react-email` package in the [official documentation](https://react.email/docs/introduction).
## Providers
TurboStarter implements multiple providers for managing and sending emails. To learn more about each provider and how to configure them, see the respective section:
All configuration and setup is built-in with a unified API, so you can switch between providers without changing your code and even introduce your own provider without breaking any sending-related logic.
## Development
When you [setup your development environment](/docs/web/installation/development) and run `pnpm dev` command a new app will start at [http://localhost:3005](http://localhost:3005).

There you'll be able to check your email templates and send test emails from your app. It includes hot-reloading, so when you make a change in the code, it will be reflected in the browser.
Learn more about configuration and setup of the emails in TurboStarter in the following sections.
file: ./src/content/docs/(core)/web/emails/sending.mdx
meta: {
"title": "Sending emails",
"description": "Learn how to send emails in TurboStarter."
}
The strategy for sending emails, which every provider has to implement, is **extremely simple**:
```ts
export interface EmailProviderStrategy {
  send: (args: {
    to: string;
    subject: string;
    text: string;
    html?: string;
  }) => Promise<unknown>;
}
```
You don't need to worry much about it, as all the providers are already configured for you. Just be aware of it if you want to add your custom provider.
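To make the interface concrete, here is a toy custom provider that "sends" by logging. The `consoleProvider` name and its return value are illustrative, not part of the kit; a real provider would call its service's API inside `send`:

```typescript
interface EmailProviderStrategy {
  send: (args: {
    to: string;
    subject: string;
    text: string;
    html?: string;
  }) => Promise<unknown>;
}

// Toy provider: logs instead of sending. Swap in a real API call here.
const consoleProvider: EmailProviderStrategy = {
  send: async ({ to, subject, text }) => {
    console.log(`[email] to=${to} subject="${subject}"`);
    return { delivered: true, to, subject, text };
  },
};
```

Because every provider satisfies the same interface, the rest of the email package can stay provider-agnostic.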
Then, we define a general `sendEmail` function that you can use as an API for sending emails in your app:
```ts
const sendEmail = async <T extends EmailTemplate>({
  to,
  template,
  variables,
  locale,
}: {
  to: string;
  template: T;
  variables: EmailVariables[T];
  locale?: string;
}) => {
  const strategy = strategies[provider];

  const { html, text, subject } = await getTemplate({
    id: template,
    variables,
    locale,
  });

  return strategy.send({ to, subject, html, text });
};
```
The arguments are:
* `to`: The recipient's email address.
* `template`: The email template to use.
* `variables`: The variables to pass to the template.
* `locale`: The locale to use for the email.
It returns a promise that resolves when the email is sent successfully. If there is an error, the promise will be rejected with an error message.
To send an email, just invoke the `sendEmail` with the correct arguments from the **server-side** of your application:
```ts
import { sendEmail } from "@turbostarter/email/server";

sendEmail({
  to: "user@example.com",
  template: EmailTemplate.WELCOME,
  variables: {
    name: "John Doe",
  },
  locale: "en",
});
```
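Since the call is promise-based, sending can fail at runtime (invalid recipient, provider outage), so it's worth handling rejections. A self-contained sketch, using a simplified stand-in for `sendEmail` so it runs on its own (the real function lives in `@turbostarter/email/server`):

```typescript
// Simplified stand-in for sendEmail, just for demonstrating the error path;
// the real implementation delegates to the configured provider strategy.
const sendEmail = async ({ to }: { to: string }): Promise<void> => {
  if (!to.includes("@")) throw new Error(`Invalid recipient: ${to}`);
};

// Catch rejections so a sending failure doesn't crash the request handler.
async function notifyUser(to: string): Promise<string> {
  try {
    await sendEmail({ to });
    return "sent";
  } catch (error) {
    return `failed: ${(error as Error).message}`;
  }
}
```

In a real route handler you'd typically log the failure and return a friendly status to the client rather than letting the rejection propagate.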
And that's it! You're ready to send emails in your application 🎉
## Authentication emails
TurboStarter comes with a set of pre-configured authentication emails for various purposes, including magic links and password reset functionality.
To handle the sending of these emails at the right time, we use [Better Auth Hooks](https://www.better-auth.com/docs/concepts/email), which trigger when specific authentication events occur.
The logic for determining which email to send is already implemented for you in the `packages/auth/src/server.ts` file, alongside your [authentication configuration](/docs/web/auth/configuration):
```ts title="server.ts"
export const auth = betterAuth({
  emailAndPassword: {
    enabled: true,
    sendResetPassword: async ({ user, url }) =>
      sendEmail({
        to: user.email,
        template: EmailTemplate.RESET_PASSWORD,
        variables: {
          url,
        },
      }),
  },
  emailVerification: {
    sendVerificationEmail: async ({ user, url }) =>
      sendEmail({
        to: user.email,
        template: EmailTemplate.CONFIRM_EMAIL,
        variables: {
          url,
        },
      }),
  },
  /* other options */
});
```
As you can see, the authentication emails are automatically sent when needed (e.g. when user requests password reset or needs to verify their email address).
You can customize authentication templates by modifying them in the `packages/email/src/templates` directory, or create your own templates for other use cases in your application.
file: ./src/content/docs/(core)/web/installation/clone.mdx
meta: {
"title": "Cloning repository",
"description": "Get the code to your local machine and start developing."
}
Ensure you have Git installed on your local machine before proceeding. You can download Git from [here](https://git-scm.com).
## Git clone
Clone the repository using the following command:
```bash
git clone git@github.com:turbostarter/core
```
```bash
git clone https://github.com/turbostarter/core
```
If you are not using SSH, ensure you switch to HTTPS for all Git commands, not just the clone command.
## Git remote
After cloning the repository, remove the original origin remote:
```bash
git remote rm origin
```
Add the upstream remote pointing to the original repository to pull updates:
```bash
git remote add upstream git@github.com:turbostarter/core
```
Once you have your own repository set up, add your repository as the origin:
```bash
git remote add origin <your-repository-url>
```
## Staying up to date
To pull updates from the upstream repository, run the following command daily (preferably with your morning coffee ☕):
```bash
git pull upstream main
```
This ensures your repository stays up to date with the latest changes.
Check [Updating codebase](/docs/web/installation/update) for more details on updating your codebase.
file: ./src/content/docs/(core)/web/installation/commands.mdx
meta: {
"title": "Common commands",
"description": "Learn about common commands you need to know to work with the project."
}
You don't need these commands to kickstart your project, but it's useful to know they exist for when you need them.
You can set up aliases for these commands in your shell configuration file. For example, you can set up an alias for `pnpm` to `p`:
```bash title="~/.bashrc"
alias p='pnpm'
```
Or, if you're using [Zsh](https://ohmyz.sh/), you can add the alias to `~/.zshrc`:
```bash title="~/.zshrc"
alias p='pnpm'
```
Then run `source ~/.bashrc` or `source ~/.zshrc` to apply the changes.
You can now use `p` instead of `pnpm` in your terminal. For example, `p i` instead of `pnpm install`.
## Installing dependencies
To install the dependencies, run:
```bash
pnpm install
```
## Starting development server
Start development server by running:
```bash
pnpm dev
```
## Building project
To build the project (including all apps and packages), run:
```bash
pnpm build
```
## Building specific app/package
To build a specific app/package, run:
```bash
pnpm turbo build --filter=<app-or-package-name>
```
## Cleaning project
To clean the project, run:
```bash
pnpm clean
```
Then, reinstall the dependencies:
```bash
pnpm install
```
## Formatting code
To format code using Prettier, run:
```bash
pnpm format:fix
```
## Linting code
To lint code using ESLint, run:
```bash
pnpm lint:fix
```
## Typechecking
To typecheck the code using TypeScript, run:
```bash
pnpm typecheck
```
## Adding UI components
To add a new web component, run:
```bash
pnpm --filter @turbostarter/ui-web ui:add
```
This command will add and export a new component to `@turbostarter/ui-web` package.
To add a new mobile component, run:
```bash
pnpm --filter @turbostarter/ui-mobile ui:add
```
This command will add and export a new component to `@turbostarter/ui-mobile` package.
## Database commands
We have a few commands to help you manage the database leveraging [Drizzle CLI](https://orm.drizzle.team/kit-docs/commands).
### Generating migrations
To generate a new migration, run:
```bash
pnpm db:generate
```
It will create a new migration `.sql` file in the `migrations` folder.
### Running migrations
To run the migrations against the db, run:
```bash
pnpm db:migrate
```
It will apply all the pending migrations.
### Pushing changes directly
Make sure you know what you're doing before pushing changes directly to the db.
To push changes directly to the db, run:
```bash
pnpm db:push
```
It lets you push your schema changes directly to the database and omit managing SQL migration files.
### Checking database
To check the database schema consistency, run:
```bash
pnpm db:check
```
### Docker commands
To run the database instance locally, you need to have [Docker](https://www.docker.com/) installed on your machine.
You can always use the cloud-hosted solution ([Neon](https://neon.tech/), [Turso](https://turso.tech/) or any other provider) for your projects.
We have a few commands to help you manage the database instance (for local development).
#### Starting container
To start the database container, run:
```bash
pnpm db:start
```
It will run the PostgreSQL container. You can check its config in `packages/db/docker-compose.yml`.
#### Stopping container
To stop the database container, run:
```bash
pnpm db:stop
```
#### Displaying status
To check the status and logs of the database container, run:
```bash
pnpm db:status
```
file: ./src/content/docs/(core)/web/installation/conventions.mdx
meta: {
"title": "Conventions",
"description": "Some standard conventions used across TurboStarter codebase."
}
You're not required to follow these conventions: they're simply a standard set of practices used in the core kit. If you like them, I encourage you to keep using them so that you and your teammates share a consistent, mutually understood code style.
## Turborepo Packages
In this project, we use [Turborepo packages](https://turbo.build/repo/docs/core-concepts/internal-packages) to define reusable code that can be shared across multiple applications.
* **Apps** are used to define the main application, including routing, layout, and global styles.
* **Packages** share reusable code and functionality across multiple applications. They're configurable from the main application.
**Recommendation:** Do not create a package for your app code unless you plan to reuse it across multiple applications or are experienced in writing library code.
If your application is not intended for reuse, keep all code in the app folder. This approach saves time and reduces complexity, both of which are beneficial for fast shipping.
**Experienced developers:** If you have the experience, feel free to create packages as needed.
## Imports and Paths
When importing modules from packages or apps, use the following conventions:
* **From a package:** Use `@turbostarter/package-name` (e.g., `@turbostarter/ui`, `@turbostarter/api`, etc.).
* **From an app:** Use `~/` (e.g., `~/lib/components`, `~/config`, etc.).
## Enforcing conventions
* [Prettier](https://prettier.io/) is used to enforce code formatting.
* [ESLint](https://eslint.org/) is used to enforce code quality and best practices.
* [TypeScript](https://www.typescriptlang.org/) is used to enforce type safety.
## Code health
TurboStarter provides a set of tools to ensure code health and quality in your project.
### GitHub Actions
By default, TurboStarter sets up GitHub Actions to run checks on every push to the repository. You can find the configuration in the `.github/workflows` directory.
The workflow has multiple stages:
* `format` - runs Prettier to format the code.
* `lint` - runs ESLint to check for linting errors.
* `typecheck` - runs TypeScript to check for type errors.
### Git hooks
Together with TurboStarter we have set up a `commit-msg` hook which will check if your commit message follows the [conventional commit](https://www.conventionalcommits.org/en/v1.0.0/) message format. This is important for generating changelogs and keeping a clean commit history.
Although we didn't ship any pre-commit hooks (we believe in shipping fast and moving code-checking responsibility to CI), you can easily add them using [Husky](https://typicode.github.io/husky/#/).
#### Setting up the Pre-Commit Hook
To do so, create a `pre-commit` file in the `./.husky` directory with the following content:
```bash
#!/bin/sh
pnpm typecheck
pnpm lint
```
Turborepo will execute the commands for all affected packages, skipping the ones that aren't.
#### Make the Pre-Commit Hook Executable
```bash
chmod +x ./.husky/pre-commit
```
To test the pre-commit hook, try to commit a file with linting errors or type errors. The commit should fail, and you should see the error messages in the console.
file: ./src/content/docs/(core)/web/installation/dependencies.mdx
meta: {
"title": "Managing dependencies",
"description": "Learn how to manage dependencies in your project."
}
As the package manager we chose [pnpm](https://pnpm.io/).
It is a fast, disk space efficient package manager that uses hard links and symlinks to store each version of a module only once on disk. It also has great [monorepo support](https://pnpm.io/workspaces). Of course, you can switch to [Bun](https://bunpkg.com), [yarn](https://yarnpkg.com) or [npm](https://www.npmjs.com) with minimal effort.
## Install dependency
To install a package you need to decide whether you want to install it to the root of the monorepo or to a specific workspace. Installing it to the root makes it available to all packages, while installing it to a specific workspace makes it available only to that workspace.
To install a package globally, run:
```bash
pnpm add -w <package>
```
To install a package to a specific workspace, run:
```bash
pnpm add --filter <workspace> <package>
```
For example:
```bash
pnpm add --filter @turbostarter/ui motion
```
It will install `motion` to the `@turbostarter/ui` workspace.
## Remove dependency
Removing a package is the same as installing but with the `remove` command.
To remove a package globally, run:
```bash
pnpm remove -w <package>
```
To remove a package from a specific workspace, run:
```bash
pnpm remove --filter <workspace> <package>
```
## Update a package
Updating is a bit easier since there is a nice way to update a package in all workspaces at once:
```bash
pnpm update -r
```
When you update a package, pnpm will respect the [semantic versioning](https://docs.npmjs.com/about-semantic-versioning) rules defined in the `package.json` file. If you want to update a package to the latest version, you can use the `--latest` flag.
## Renovate bot
By default, TurboStarter comes with [Renovate](https://www.npmjs.com/package/renovate) enabled. It is a tool that helps you manage your dependencies by automatically creating pull requests to update your dependencies to the latest versions. You can find its configuration in the `.github/renovate.json` file. Learn more about it in the [official docs](https://docs.renovatebot.com/configuration-options/).
When it creates a pull request, it is treated as a normal PR, so all tests and preview deployments will run. **It is recommended to always preview and test the changes in the staging environment before merging the PR to the main branch to avoid breaking the application.**
file: ./src/content/docs/(core)/web/installation/development.mdx
meta: {
"title": "Development",
"description": "Get started with the code and develop your SaaS."
}
## Prerequisites
To get started with TurboStarter, ensure you have the following installed and set up:
* [Node.js](https://nodejs.org/en) (20.x or higher)
* [Docker](https://www.docker.com) (only if you want to use a local database)
* [pnpm](https://pnpm.io)
## Project development
### Install dependencies
Install the project dependencies by running the following command:
```bash
pnpm i
```
pnpm is a fast, disk space efficient package manager that uses hard links and symlinks to store each version of a module only once on disk. It also has great [monorepo support](https://pnpm.io/workspaces). Of course, you can switch to [Bun](https://bunpkg.com), [yarn](https://yarnpkg.com) or [npm](https://www.npmjs.com) with minimal effort.
### Setup environment variables
Create `.env.local` files from the `.env.example` files and fill in the required environment variables.
Check [Environment variables](/docs/web/configuration/environment-variables) for more details on setting up environment variables.
### Start database
If you want to use a local database (**recommended for development purposes**), ensure Docker is running, then set up your database with:
```bash
pnpm db:setup
```
This command initiates the PostgreSQL container and runs migrations, ensuring your database is up to date and ready to use.
### Start development server
To start the application development server, run:
```bash
pnpm dev
```
Your app should now be up and running at [http://localhost:3000](http://localhost:3000) 🎉
### Deploy to Production
When you're ready to deploy the project to production, follow the [checklist](/docs/web/deployment/checklist) to ensure everything is set up correctly.
file: ./src/content/docs/(core)/web/installation/editor-setup.mdx
meta: {
"title": "Editor setup",
"description": "Learn how to set up your editor for the fastest development experience."
}
Of course you can use any IDE you like, but you'll have the best possible developer experience with this starter kit when using a **VSCode-based** editor with the suggested settings and extensions.
## Settings
We have included the most recommended settings in the `.vscode/settings.json` file to make your development experience as smooth as possible. It mostly includes configs for tools like Prettier, ESLint and Tailwind, which are used to enforce conventions across the codebase. You can adjust them to your needs.
## LLM rules
We exposed a special endpoint that will scan all the docs and return the content as a text file which you can use to train your LLM or put in a prompt. You can find it at [/api/llms.txt](/api/llms.txt).
The repository also includes custom rules for the most popular AI editors to ensure that AI completions work as expected and follow our conventions.
### Cursor
If you're using [Cursor](https://www.cursor.com/), we've integrated specific rules that help maintain code quality and ensure AI-assisted completions align with our project standards.
You can find them in the `.cursor` directory at the root of the project. It includes multiple `.mdc` files which can be used to instruct the AI to follow our conventions when generating code.
```md title=".cursor/01-project-overview.mdc"
Code Style and Structure:
- Write concise, technical TypeScript code with accurate examples
- Use functional and declarative programming patterns; avoid classes
- Prefer iteration and modularization over code duplication
Naming Conventions:
....
```
To learn more about Cursor rules check out the [official documentation](https://docs.cursor.com/context/rules).
### Windsurf
For [Windsurf](https://windsurf.dev/) users we have included a custom rules file that can be used to instruct the AI to follow our conventions when generating code.
You can find the rules in the `.windsurfrules` file at the root of the project.
```md title=".windsurfrules"
You are an expert specifically trained for the Turbostarter monorepo project.
Your goal is to generate and modify code adhering strictly to the project's structure, conventions, and tech stack (TypeScript, React, Next.js, Hono, Drizzle, Tailwind, etc.) outlined below.
## Project Overview & Tech Stack
- **Monorepo:** Managed by Turborepo (`turbo.json`).
...
```
To learn more about Windsurf rules check out the [official documentation](https://docs.windsurf.com/windsurf/memories#windsurfrules).
## Extensions
Once you've cloned the project and opened it in VSCode, you should automatically be prompted to install the suggested extensions defined in `.vscode/extensions.json`. If you'd rather install them manually, you can do so at any time.
These are the extensions we recommend:
### ESLint
Global extension for static code analysis. It will help you to find and fix problems in your JavaScript code.
### Prettier
Global extension for code formatting. It will help you to keep your code clean and consistent.
### Pretty TypeScript Errors
Improves TypeScript error messages in the editor.
### Tailwind CSS IntelliSense
Adds IntelliSense for Tailwind CSS classes to enable autocompletion and linting.
file: ./src/content/docs/(core)/web/installation/structure.mdx
meta: {
"title": "Project structure",
"description": "Learn about the project structure and how to navigate it."
}
The main directories in the project are:
* `apps` - the location of the main apps
* `packages` - the location of the shared code and the API
### `apps` Directory
This is where the apps live. It includes the web app (Next.js), the mobile app (React Native - Expo), and the browser extension (WXT - Vite + React). Each app has its own directory.
### `packages` Directory
This is where the shared code and the API packages live. It includes the following:
* shared libraries (database, mailers, cms, billing, etc.)
* shared features (auth, mails, billing, ai etc.)
* UI components (buttons, forms, modals, etc.)
All apps can use and reuse the API exported from the packages directory. This makes it easy to have one, or many apps in the same codebase, sharing the same code.
## Repository structure
By default the monorepo contains the following apps and packages:
## Web application structure
The web application is located in the `apps/web` folder. It contains the following folders:
file: ./src/content/docs/(core)/web/installation/update.mdx
meta: {
"title": "Updating codebase",
"description": "Learn how to update your codebase to the latest version."
}
If you've been following along with our previous guides, you should already have a Git repository set up for your project, with an `upstream` remote pointing to the original repository.
Updating your project involves fetching the latest changes from the `upstream` remote and merging them into your project. Let's dive into the steps!
## Stash changes
If you don't have any changes to stash, you can skip this step and proceed with the update process.
Alternatively, you can [commit](https://git-scm.com/docs/git-commit) your changes.
If you have any uncommitted changes, stash them before proceeding. It will allow you to avoid any conflicts that may arise during the update process.
```bash
git stash
```
This command will save your changes in a temporary location, allowing you to retrieve them later. Once you're done updating, you can apply the stash to your working directory.
```bash
git stash apply
```
## Pull changes
Pull the latest changes from the `upstream` remote.
```bash
git pull upstream main
```
When prompted the first time, please opt for merging instead of rebasing.
Don't forget to run `pnpm i` in case there are any updates in the dependencies.
## Resolve conflicts
If there are any conflicts during the merge, Git will notify you. You can resolve them by opening the conflicting files in your code editor and making the necessary changes.
If you find conflicts in the `pnpm-lock.yaml` file, accept either of the two changes (avoid manual edits), then run:
```bash
pnpm i
```
Your lock file will now reflect both your changes and the updates from the upstream repository.
## Run a health check
After resolving the conflicts, it's time to test your project to ensure everything is working as expected. Run your project locally and navigate through the various features to verify that everything is functioning correctly.
For a quick health check, you can run:
```bash
pnpm lint
pnpm typecheck
```
If everything looks good, you're all set! Your project is now up to date with the latest changes from the `upstream` repository.
## Commit and push
Once everything is working fine, don't forget to commit your changes using:
```bash
git commit -m ""
```
and push them to your remote repository with:
```bash
git push origin
```
file: ./src/content/docs/(core)/web/internationalization/configuration.mdx
meta: {
"title": "Configuration",
"description": "Learn how to configure internationalization in TurboStarter."
}
The default global configuration is defined in the `@turbostarter/i18n` package and shared across all applications. You can override it in each app to customize the internationalization setup for that specific app.
The configuration is defined in the `packages/i18n/src/config.ts` file:
```ts title="packages/i18n/src/config.ts"
export const config = {
locales: ["en", "es"],
defaultLocale: "en",
namespaces: ["common", "auth", "billing", "marketing", "validation"],
cookie: "locale",
} as const;
```
Let's break down the configuration options:
* `locales`: An array of all supported locales.
* `defaultLocale`: The default locale to use if no other locale is detected.
* `namespaces`: An array of all namespaces used in the application.
* `cookie`: The name of the cookie to store the detected locale (acts as a cache).
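To illustrate how `locales` and `defaultLocale` work together, here's a minimal, hypothetical resolver (not part of the kit) that picks the first supported language from a user's preferences and falls back to the default:

```ts
const config = {
  locales: ["en", "es"],
  defaultLocale: "en",
} as const;

type Locale = (typeof config.locales)[number];

// Pick the first preferred language we support; otherwise fall back.
export function resolveLocale(preferred: string[]): Locale {
  for (const lang of preferred) {
    // "es-MX" and "es" both resolve to the "es" locale.
    const base = lang.toLowerCase().split("-")[0] ?? "";
    if ((config.locales as readonly string[]).includes(base)) {
      return base as Locale;
    }
  }
  return config.defaultLocale;
}
```

With the configuration above, `resolveLocale(["es-MX", "en"])` yields `"es"`, while an unsupported list like `["fr"]` falls back to `"en"`.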
## Translation files
The core of the whole internationalization setup is the translation files. They are stored in the `packages/i18n/src/translations` directory and are used to store the translations for each locale and namespace.
Each directory represents a locale and contains a set of files, each corresponding to a specific namespace (e.g. `en/common.json`). Inside we define the keys and values for the translations.
```json title="packages/i18n/src/translations/en/common.json"
{
"hello": "Hello, world!"
}
```
That way we can ensure that we have a single source of truth for the translations and we can use them consistently in all the applications.
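Conceptually, a `namespace:key` reference like `common:hello` resolves against these files. A toy resolver (illustrative only; the real lookup is handled by i18next) might look like this:

```ts
// Illustrative in-memory stand-in for the translation files.
const translations: Record<string, Record<string, Record<string, string>>> = {
  en: { common: { hello: "Hello, world!" } },
  es: { common: { hello: "¡Hola, mundo!" } }, // hypothetical es/common.json
};

// Resolve a "namespace:key" reference for a locale, falling back to the key.
export function translate(locale: string, key: string): string {
  const [ns = "", k = ""] = key.split(":");
  return translations[locale]?.[ns]?.[k] ?? key;
}
```

`translate("en", "common:hello")` returns `"Hello, world!"`; a missing key falls back to the key itself, which makes untranslated strings easy to spot.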
## Locales
The `locales` array in the configuration defines the list of supported languages in your application. Each locale is represented by a string that uniquely identifies the language.
To add a new locale, you need to:
1. Add the new locale to the `locales` array in the configuration.
2. Create a new directory in the `packages/i18n/src/translations` directory.
3. Create a new file in the new directory for each namespace and add the translations for the new locale.
For example, if you want to add the `fr` locale, you need to:
1. Add `fr` to the `locales` array in the configuration.
2. Create a new directory in the `packages/i18n/src/translations` directory.
3. Create a new file for each namespace in the created directory and add the translations for the new locale.
### Fallback locale
The `defaultLocale` option in the configuration defines the fallback locale. If a translation is not found for a specific locale, the fallback locale will be used.
We can also override this setting in each [app configuration](/docs/web/configuration/app) by configuring the `locale` property.
## Namespaces
`namespaces` are used to group translations by feature or module. This helps in organizing the translations and makes it easier to maintain them.
### Why not one big namespace?
Using multiple namespaces instead of one large namespace helps with:
1. **Performance:** load translations on-demand instead of all at once, reducing the initial bundle size.
2. **Organization:** group translations by feature (e.g., `auth`, `common`, `dashboard`).
3. **Maintenance:** easier to update and manage smaller translation files.
4. **Development:** better TypeScript support and team collaboration.
For example, you might structure your namespaces like this:
```json title="packages/i18n/src/translations/en/common.json"
{
"hello": "Hello, world!"
}
```
```json title="packages/i18n/src/translations/en/auth.json"
{
"login": "Login",
"register": "Register"
}
```
```json title="packages/i18n/src/translations/en/billing.json"
{
"invoice": "Invoice",
"payment": "Payment",
"subscription": "Subscription"
}
```
Remember that while you can create as many namespaces as needed, it's important to maintain a balance - too many namespaces can lead to unnecessary complexity, while too few might defeat the purpose of separation.
## Routing
TurboStarter implements locale-based routing by placing pages under the `[locale]` folder. However, the default locale (usually `en`) is not prefixed in the URL for better SEO and user experience.
For example, with English as the default locale and Polish as an additional language:
* `/dashboard` β English version (default locale)
* `/pl/dashboard` β Polish version
The app also automatically detects the user's preferred language through cookies, HTML `lang` attribute, and browser's `Accept-Language` header.
This ensures a seamless experience where users get content in their preferred language while maintaining clean URLs for the default locale.
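The prefixing rule can be sketched as a small helper (hypothetical; the actual routing is handled inside the app):

```ts
const defaultLocale = "en";

// Prefix the path for every locale except the default one.
export function localizePath(path: string, locale: string): string {
  return locale === defaultLocale ? path : `/${locale}${path}`;
}
```

`localizePath("/dashboard", "en")` stays `/dashboard`, while `localizePath("/dashboard", "pl")` becomes `/pl/dashboard`, matching the examples above.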
You can override the locale by manually setting the cookie or by navigating to
a URL with a different locale prefix.
file: ./src/content/docs/(core)/web/internationalization/overview.mdx
meta: {
"title": "Overview",
"description": "Get started with internationalization in TurboStarter.",
"index": true
}
TurboStarter uses [i18next](https://www.i18next.com/) for internationalization, which is one of the most popular and mature (over 10 years of development!) i18n frameworks for JavaScript.
With i18next, you can easily translate your application into multiple
languages, handle complex pluralization rules, format dates and numbers
according to locale, and much more. The framework is highly extensible through
plugins and provides excellent TypeScript support out of the box.
You can read more about `i18next` package in the [official documentation](https://www.i18next.com/overview/getting-started).

## Getting started
TurboStarter comes with `i18next` pre-configured and abstracted behind the `@turbostarter/i18n` package. This abstraction layer ensures that any future changes to the underlying translation library won't impact your application code. The internationalization setup is ready to use out of the box and includes:
* Multiple language support out of the box
* Type-safe translations with generated types
* Automatic language detection
* Easy-to-use React hooks for translations
* Built-in number and date formatting
* Support for nested translation keys
* Pluralization handling
To start using internationalization in your app, you'll need to:
1. Configure your supported languages
2. Add translation files
3. Use translation hooks in your components
Check out the following sections to learn more about each step:
file: ./src/content/docs/(core)/web/internationalization/translations.mdx
meta: {
"title": "Translating app",
"description": "Learn how to translate your application to multiple languages."
}
TurboStarter provides a flexible and powerful translation system that works seamlessly across your entire application. Whether you're working with React Server Components (RSC), client-side components, or server-side rendering, you can easily integrate translations to create a fully internationalized experience.
The translation system supports:
* **Server components (RSC)** for efficient server-side translations
* **Client components** for dynamic language switching
* **Server-side rendering** for SEO-friendly translated content
## Server components (RSC)
To get the translations in a server component, you can use the `getTranslation` method:
```tsx
import { getTranslation } from "@turbostarter/i18n";
export default async function MyComponent() {
  const { t } = await getTranslation();

  return <div>{t("common:hello")}</div>;
}
```
You can also use the [Trans](https://react.i18next.com/latest/trans-component) component, which is useful e.g. for interpolating variables:
```tsx
import { Trans } from "@turbostarter/i18n";
import { withI18n } from "@turbostarter/i18n/server";
const Page = () => {
  return <Trans i18nKey="common:welcome" values={{ name: "John" }} />;
};
export default withI18n(Page);
```
Note that to make it available in a server component, you need to wrap the component with the `withI18n` HOC.
Given that server components are rendered in parallel, it's uncertain which one will render first. Therefore, it's crucial to initialize the translations before rendering the server component on each page/layout.
## Client components
For client components, you can use the `useTranslation` hook from the `@turbostarter/i18n` package:
```tsx
"use client";
import { useTranslation } from "@turbostarter/i18n";
export default function MyComponent() {
  const { t } = useTranslation();

  return <div>{t("common:hello")}</div>;
}
```
That's the simplest way to get the translations in a client component.
## Server-side
In all other places (e.g. metadata, API routes, sitemaps etc.) you can use the `getTranslation` method to get the translations server-side:
```ts
import { getTranslation } from "@turbostarter/i18n";
export const generateMetadata = async () => {
const { t } = await getTranslation();
return {
title: t("common:title"),
};
};
```
It automatically checks the user's preferred locale and uses the correct translation.
## Language switcher
TurboStarter ships with a language switcher component that allows you to switch between languages. You can import and use the `LocaleSwitcher` component and drop it anywhere in your application to allow users to change the language seamlessly.
```tsx
import { LocaleSwitcher } from "@turbostarter/ui-web";
export default function MyComponent() {
  return <LocaleSwitcher />;
}
```
The component automatically displays all languages configured in your i18n settings. When a user switches languages, it will:
1. Update the URL to include the new locale prefix (e.g. `/es/dashboard`)
2. Store the selected locale in a cookie for persistence
3. Refresh translations across the entire application
4. Preserve the current page/route during the language switch
This provides a seamless localization experience without requiring any additional configuration.
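Steps 1 and 4, swapping the locale prefix while preserving the current route, can be sketched as follows (a hypothetical helper, not the component's actual implementation):

```ts
const defaultLocale = "en";

// Replace the locale prefix in a pathname, keeping the rest of the route.
export function switchLocale(pathname: string, from: string, to: string): string {
  // Strip the current locale prefix, if present (the default locale has none).
  const bare =
    pathname === `/${from}` || pathname.startsWith(`/${from}/`)
      ? pathname.slice(from.length + 1) || "/"
      : pathname;
  // Re-prefix unless we're switching to the default locale.
  return to === defaultLocale ? bare : `/${to}${bare}`;
}
```

For example, switching from the default `en` to `es` on `/dashboard` yields `/es/dashboard`, and switching back strips the prefix again.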
## Best practices
Here are some recommended best practices for managing translations in your application:
* Use descriptive translation keys that follow a logical hierarchy
```ts
// ✅ Good
"auth.login.title";
// ❌ Bad
"loginTitleForAuth";
```
* Keep translations organized in separate namespaces/files based on features or sections
```
translations/
├── en/
│   ├── auth.json
│   └── common.json
└── pl/
    ├── auth.json
    └── billing.json
```
* Avoid hardcoding text strings - always use translation keys even for seemingly static content
* Always provide a fallback language (usually English) for when translations are missing
* Use pluralization and interpolation features when dealing with dynamic content
```ts
// Pluralization
t("items", { count: 2 }); // "2 items"
// Interpolation
t("welcome", { name: "John" }); // "Welcome, John!"
```
* Regularly review and clean up unused translation keys to keep files maintainable
* Use TypeScript for type-safe translation keys
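As a rough illustration of the interpolation example above, here's what the `{{name}}`-style substitution boils down to (a simplified sketch; i18next also handles plural rules, formatting, nesting, and much more):

```ts
// Minimal "{{name}}" interpolation, in the spirit of i18next's defaults.
export function interpolate(
  template: string,
  values: Record<string, string | number>,
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    name in values ? String(values[name]) : match,
  );
}
```

`interpolate("Welcome, {{name}}!", { name: "John" })` yields `"Welcome, John!"`; placeholders without a matching value are left untouched.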
file: ./src/content/docs/(core)/web/marketing/legal.mdx
meta: {
"title": "Legal pages",
"description": "Learn how to create and update legal pages"
}
Legal pages are defined in the `apps/web/src/app/[locale]/(marketing)/legal` directory.
TurboStarter comes with the following legal pages:
* **Terms and Conditions**: to define the terms and conditions of your application
* **Privacy Policy**: to define the privacy policy of your application
* **Cookie Policy**: to define the cookie policy of your application
For obvious reasons, **these pages are empty and you need to fill in the content.**
## Content from CMS
Content for legal pages is stored as [MDX](https://mdxjs.com/) files in a [content collection](/docs/web/cms/content-collections) in the `packages/cms/src/content/collections/legal` directory.
Then it's parsed and rendered as a Next.js page under corresponding slug:
```tsx title="apps/web/src/app/[locale]/(marketing)/legal/[slug]/page.tsx"
import {
CollectionType,
getContentItemBySlug,
getContentItems,
} from "@turbostarter/cms";
export default async function Page({ params }: PageParams) {
const item = getContentItemBySlug({
collection: CollectionType.LEGAL,
slug: (await params).slug,
locale: (await params).locale,
});
if (!item) {
notFound();
}
return ;
}
export function generateStaticParams() {
return getContentItems({ collection: CollectionType.LEGAL }).items.map(
({ slug, locale }) => ({
slug,
locale,
}),
);
}
```
As it's fully typesafe, it also allows us to generate metadata for each page based on the frontmatter defined in the MDX file:
```tsx title="apps/web/src/app/[locale]/(marketing)/legal/[slug]/page.tsx"
export async function generateMetadata({ params }: PageParams) {
const item = getContentItemBySlug({
collection: CollectionType.LEGAL,
slug: (await params).slug,
locale: (await params).locale,
});
if (!item) {
return notFound();
}
return getMetadata({
title: item.title,
description: item.description,
})({ params });
}
```
Read more about it in the [CMS section](/docs/web/cms/overview).
## ChatGPT prompts
Each `.mdx` file with legal content includes a set of useful prompts that you can use to generate the content.
Please, be aware that **ChatGPT is not a lawyer** and the content generated by it should be reviewed by one before publishing. Take your time and treat the generated content as a starting point not a final document.
```mdx title="privacy-policy.mdx"
---
title: Privacy Policy
description: Our privacy policy outlines how we collect, use, and protect your personal information.
---
{/* 💡 You can use one of the following ChatGPT prompts to generate this 💡 */}
...
```
Feel free to add your own content or even additional pages to the `legal` collection.
file: ./src/content/docs/(core)/web/marketing/pages.mdx
meta: {
"title": "Marketing pages",
"description": "Discover which marketing pages are available out of the box and how to add a new one."
}
TurboStarter comes with pre-defined marketing pages to help you get started with your SaaS application. These pages are built with Next.js and Tailwind CSS and are located in the `apps/web/src/app/[locale]/(marketing)` directory.
TurboStarter comes with the following marketing pages:
* [Blog](/docs/web/cms/blog): to display your blog posts
* **Pricing**: to display your pricing plans
* **Contact**: to enable users to contact you with a contact form
## Contact form
To make the contact form work, you need to add the following environment variable:
```dotenv
CONTACT_EMAIL=
```
Set this variable to the email address where you want to receive contact form submissions. The sender's email address will match what you configured in your [mailing configuration](/docs/web/emails/configuration).
## Adding a new marketing page
To add a new marketing page, create a new directory in `apps/web/src/app/[locale]/(marketing)` with the desired route name.
The page will automatically become available in your application at the corresponding URL path.
For example, to create a page accessible at `/about`, create a directory named `about` and add a `page.tsx` file inside it. The complete path would be `apps/web/src/app/[locale]/(marketing)/about/page.tsx`.
```tsx title="apps/web/src/app/[locale]/(marketing)/about/page.tsx"
export default function AboutPage() {
  return <h1>About</h1>;
}
```
This page inherits the layout at `apps/web/src/app/[locale]/(marketing)/layout.tsx`. You can customize the layout by editing this file - but remember that it will affect all marketing pages.
file: ./src/content/docs/(core)/web/marketing/seo.mdx
meta: {
"title": "SEO",
"description": "Learn how to optimize your app for search engines."
}
SEO is an important part of building a website. It helps search engines understand your website and rank it higher in search results. In this guide, you'll learn how to improve your SaaS application's search engine optimization (SEO).
TurboStarter is already optimized for SEO out of the box (including meta tags, sitemaps, robots files and many more). However, there are a few things you can do to improve your application's SEO.
**Content:** High-quality, relevant content is the cornerstone of effective SEO. Focus on **creating valuable, engaging content** that addresses your customers' needs and questions. Regularly update your content to keep it fresh and relevant.
**Keyword optimization:** Conduct thorough keyword research to identify terms your target audience is searching for. Incorporate these keywords naturally into your content, titles, meta descriptions, and headers. Avoid keyword stuffing; prioritize readability and user experience.
**On-Page SEO:**
* Use descriptive, keyword-rich titles and meta descriptions for each page.
* Implement a clear heading structure (H1, H2, H3) to organize your content.
* Optimize images with descriptive file names and alt text.
* Ensure your URLs are clean, descriptive, and include relevant keywords.
**Technical SEO:**
* Improve website loading speed by optimizing images, minifying CSS and JavaScript, and leveraging browser caching.
* Ensure your website is mobile-friendly and responsive across all devices.
* Implement schema markup to help search engines better understand your content.
* Use HTTPS to secure your website and boost search rankings.
**User experience:**
* Design an intuitive site structure and navigation to improve user engagement.
* Reduce bounce rates by creating compelling, easy-to-read content.
* Implement internal linking to guide users through your site and distribute page authority.
**Link building:**
* Create high-quality, shareable content to naturally attract backlinks.
* Engage in guest posting on reputable sites within your industry.
* Participate in industry forums and discussions, providing valuable insights and linking to your content when relevant.
* Leverage social media to increase content visibility and encourage sharing.
**Local SEO (if applicable):**
* Claim and optimize your Google My Business listing.
* Ensure consistent NAP (Name, Address, Phone) information across all online directories.
* Encourage customer reviews on Google and other relevant platforms.
**Monitor and analyze:**
* Use [Google Search Console](https://search.google.com/search-console/about) to monitor your site's performance in search results and identify issues.
* Regularly analyze your SEO efforts using tools like Google Analytics to understand user behavior and refine your strategy.
**Stay updated:**
* Keep abreast of SEO best practices and algorithm updates to continually refine your strategy.
* Regularly audit your website to identify and fix any SEO issues.
## Sitemap
Generally speaking, Google will find your pages without a sitemap, as it follows the links on your website. However, if you add more static pages, you can include them in the sitemap by adding them to the array returned from `apps/web/src/app/sitemap.ts`, which is used to generate the sitemap.
```tsx title="sitemap.ts"
export default function sitemap(): MetadataRoute.Sitemap {
  return [
    {
      ...getEntry(pathsConfig.index),
      lastModified: new Date(),
      changeFrequency: "monthly",
      priority: 1,
    },
    ...getContentItems({
      collection: CollectionType.BLOG,
      locale: appConfig.locale,
    }).items.map((post) => ({
      ...getEntry(pathsConfig.marketing.blog.post(post.slug)),
      lastModified: new Date(post.lastModifiedAt),
      changeFrequency: "monthly",
      priority: 0.7,
    })),
    /* other pages */
  ];
}
```
All the existing pages are already added to the sitemap. You don't need to add them manually.
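If you want to append a plain static page without going through the `getEntry` helper, an entry is just an object with a `url` and a few optional fields. A minimal, illustrative sketch (the `baseUrl` value and `staticEntry` helper are assumptions for this example, not part of TurboStarter - in the kit the URL comes from `appConfig`):

```typescript
// Illustrative helper for building sitemap entries for static pages.
const baseUrl = "https://your-website.com";

const staticEntry = (path: string, priority = 0.5) => ({
  url: `${baseUrl}${path}`,
  lastModified: new Date(),
  changeFrequency: "monthly" as const,
  priority,
});

// e.g. append staticEntry("/pricing", 0.8) to the returned array
```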
## Meta tags
TurboStarter provides a helper function called `getMetadata` to easily set meta tags for your pages. This helper ensures consistent metadata formatting across your site and includes essential SEO tags like title, description, and Open Graph tags. You can use it in any page's metadata export:
```tsx title="page.tsx"
export const generateMetadata = getMetadata({
  title: "My Page Title",
  description: "My Page Description",
});
```
This will generate meta tags similar to the following (the exact output depends on your app configuration):
```html
<title>My Page Title</title>
<meta name="description" content="My Page Description" />
<meta property="og:title" content="My Page Title" />
<meta property="og:description" content="My Page Description" />
```
`getMetadata` also supports translations. You can pass a translation key to the `title` and `description` parameters, and it will automatically use the correct translation for the current locale.
```tsx
export const generateMetadata = getMetadata({
  title: "billing:title",
  description: "billing:description",
});
```
In this example, the `title` and `description` will be fetched from the `billing` namespace for the current locale and placed in the meta tags.
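For reference, such keys would live in that namespace's translation file for each locale. A hypothetical `billing.json` (the structure here is illustrative - match it to your actual translation files):

```json
{
  "title": "Billing",
  "description": "Manage your subscription and invoices."
}
```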
## Backlinks
Backlinks are said to be the **most important factor** in modern SEO. The more backlinks you have from high-quality websites, the higher your website will rank in search results - and the more traffic you'll get.
How do you acquire backlinks? The most effective strategy is to create high-quality, valuable content that naturally attracts links from other websites. However, there are several other methods to build backlinks:
1. **Guest blogging:** Contribute articles to reputable websites within your industry. This not only provides backlinks but also exposes your brand to a new audience.
2. **Strategic outreach:** Identify websites that could benefit from linking to your content. Reach out with a personalized pitch, explaining the value your content adds to their audience.
3. **Digital PR:** Create newsworthy content or conduct original research that journalists and bloggers will want to reference and link to.
4. **Broken link building:** Find broken links on relevant websites and suggest your content as a replacement.
5. **Resource page link building:** Find resource pages in your niche and suggest your content for inclusion.
6. **Social media engagement:** While not directly impacting SEO, active social media presence can increase content visibility and indirectly lead to more backlinks.
7. **Create linkable assets:** Develop infographics, tools, or comprehensive guides that others in your industry will want to reference.
8. **Participate in industry forums and discussions:** Contribute meaningfully to conversations in your field, including your website when relevant.
Remember, the quality of backlinks is more important than quantity. Focus on acquiring links from authoritative, relevant websites in your niche. Avoid any black-hat techniques or link schemes that could result in penalties from search engines.
## Adding your website to Google Search Console
Once you've optimized your website for SEO, you can add it to Google Search Console. Google Search Console is a free tool that helps you monitor and maintain your website's presence in Google search results.
You can use it to check your website's indexing status, submit sitemaps, and get insights into how Google sees your website.
The first thing you need to do is verify your website in Google Search Console. You can do this by adding a meta tag to your website's HTML or by uploading an HTML file to your website.
Once you've verified your website, you can submit your sitemap to Google Search Console. This will help Google find and index your website's pages faster.
To submit your sitemap, go to the `Sitemaps` section in Google Search Console and add your sitemap's URL: `https://your-website.com/sitemap.xml` (replacing `your-website.com` with your actual domain).
## Content
When it comes to internal factors, **content is king**. Make sure your content is relevant, useful, and engaging. Make sure it's updated regularly and optimized for SEO.
Most importantly, you want to think about how your customers will search for the problem your SaaS is solving. For example, if you're building a project management tool, you might want to write about project management best practices, how to manage a remote team, or how to use your tool to improve productivity.
You can use the blog and documentation features in TurboStarter to create high-quality content that will help your website rank higher in search results - and help your customers find what they're looking for.
## Indexing and ranking take time
New websites can take a while to get indexed by search engines. It can take anywhere from a few days to a few weeks (in some cases, even months!) for your website to show up in search results. Be patient and keep updating your content and optimizing your website for search engines.
Also, you can edit the `robots.ts` file to control which pages are indexed by search engines:
```tsx title="robots.ts"
export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: "*",
      allow: "/",
      disallow: ["/api", "/dashboard", "/auth"],
    },
    sitemap: appConfig.url + "/sitemap.xml",
  };
}
```
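For the config above, Next.js serves a generated file at `/robots.txt` that looks roughly like this (shown for illustration; the sitemap URL depends on your `appConfig.url`):

```txt
User-Agent: *
Allow: /
Disallow: /api
Disallow: /dashboard
Disallow: /auth

Sitemap: https://your-website.com/sitemap.xml
```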
Remember, **SEO is an ongoing process.** Consistently apply these practices and adapt your strategy based on performance data and industry changes to improve your search engine visibility over time.
file: ./src/content/docs/(core)/web/storage/configuration.mdx
meta: {
"title": "Configuration",
"description": "Learn how to configure storage in TurboStarter."
}
Currently, TurboStarter supports all S3-compatible storage providers, including [AWS S3](https://aws.amazon.com/s3/), [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces), [Cloudflare R2](https://www.cloudflare.com/products/r2/), and [Supabase Storage](https://supabase.com/storage).
The setup process is straightforward - you just need to configure a few environment variables in both your local environment and hosting provider:
```dotenv
S3_REGION=
S3_BUCKET=
S3_ENDPOINT=
S3_ACCESS_KEY_ID=
S3_SECRET_ACCESS_KEY=
```
Let's break down each required variable:
* `S3_REGION`: The [AWS region](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) where your storage is located - defaults to `us-east-1`
* `S3_BUCKET`: The default name of your storage bucket - you can pass a different one for each request
* `S3_ENDPOINT`: The S3 [endpoint URL](https://docs.aws.amazon.com/general/latest/gr/s3.html) for your storage provider - defaults to `https://s3.amazonaws.com`
* `S3_ACCESS_KEY_ID`: Your storage provider's [access key ID](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html)
* `S3_SECRET_ACCESS_KEY`: Your storage provider's [secret access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html)
You can learn more about S3 service configuration in the [official AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html) or your specific storage provider's documentation.
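As a sketch of how these variables come together, the documented defaults can be applied when resolving the configuration. The function below is illustrative - TurboStarter's actual storage package may structure this differently:

```typescript
// Illustrative: resolve S3 settings from the environment,
// applying the defaults documented above.
interface S3Config {
  region: string;
  bucket: string;
  endpoint: string;
  accessKeyId: string;
  secretAccessKey: string;
}

const resolveS3Config = (
  env: Record<string, string | undefined>,
): S3Config => ({
  region: env.S3_REGION ?? "us-east-1",
  bucket: env.S3_BUCKET ?? "",
  endpoint: env.S3_ENDPOINT ?? "https://s3.amazonaws.com",
  accessKeyId: env.S3_ACCESS_KEY_ID ?? "",
  secretAccessKey: env.S3_SECRET_ACCESS_KEY ?? "",
});
```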
file: ./src/content/docs/(core)/web/storage/managing-files.mdx
meta: {
"title": "Managing files",
"description": "Learn how to manage files in TurboStarter."
}
Before you start managing files, make sure you have [configured storage](/docs/web/storage/configuration).
## Permissions
Most S3-compatible storage providers allow you to configure bucket permissions and access policies. It's crucial to properly set these up to secure your files and control who can access them.
Here are some key security recommendations:
* Keep your bucket private by default
* Use IAM roles and policies to manage access
* Enable server-side encryption for sensitive data
* Configure CORS settings appropriately for client-side uploads
* Regularly audit bucket permissions and access logs
Making your bucket public is strongly discouraged as it can expose sensitive data and lead to unauthorized access and unexpected costs from bandwidth usage.
For detailed guidance on configuring bucket policies and permissions, refer to your storage provider's documentation:
* [AWS S3 Security Documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-policy-language-overview.html)
* [DigitalOcean Spaces Security](https://docs.digitalocean.com/products/spaces/how-to/manage-access/)
* [Cloudflare R2 Security](https://developers.cloudflare.com/r2/api/s3/tokens/)
* [Supabase Storage Security](https://supabase.com/docs/guides/storage/security/access-control)
## Uploading files
As explained in the [overview](/docs/web/storage/overview), TurboStarter uses presigned URLs to upload files to your storage provider.
We prepared a special endpoint to generate presigned URLs for your uploads to use in your client-side code.
```ts title="storage.router.ts"
export const storageRouter = new Hono().get(
  "/upload",
  enforceAuth,
  validate("query", getObjectUrlSchema),
  async (c) => c.json(await getUploadUrl(c.req.valid("query"))),
);
```
The signed URL is only valid for a limited time and will work for anyone who has access to it during that period. Make sure to handle the URL securely and avoid exposing it to unauthorized users.
Then, you can use it to upload files to the generated presigned URL from your frontend code:
```tsx title="upload.tsx"
const upload = useMutation({
  mutationFn: async (data: { file?: File }) => {
    const extension = data.file?.type.split("/").pop();
    const path = `files/${crypto.randomUUID()}.${extension}`;

    const { url: uploadUrl } = await handle(api.storage.upload.$get)({
      query: { path },
    });

    const response = await fetch(uploadUrl, {
      method: "PUT",
      body: data.file,
      headers: {
        "Content-Type": data.file?.type ?? "",
      },
    });

    if (!response.ok) {
      throw new Error("Failed to upload file!");
    }
  },
  onError: (error) => {
    toast.error(error.message);
  },
  onSuccess: () => {
    toast.success("File uploaded!");
  },
});
```
The code above demonstrates how to implement file uploads in your application:
1. First, we have a server-side endpoint (`storageRouter`) that generates presigned URLs for uploads. This endpoint:
* [Requires authentication](/docs/web/api/protected-routes) via `enforceAuth`
* Validates the request parameters using `validate`
* Returns a presigned URL for uploading
2. Then, in the frontend code (`upload.tsx`), we use React Query's `useMutation` hook to handle the upload process:
* Requests a presigned URL from the server
* Uploads the file directly to the storage provider using the presigned URL
* Handles success and error cases with toast notifications
This approach ensures secure file uploads while avoiding server bandwidth costs and function timeout issues.
### Public uploads
Although **it's not recommended** to use public uploads in production, you can use the same endpoint to generate presigned URLs for public uploads:
```ts title="storage.router.ts"
export const storageRouter = new Hono().get(
  "/upload",
  validate("query", getObjectUrlSchema),
  async (c) => c.json(await getUploadUrl(c.req.valid("query"))),
);
```
Just remove the `enforceAuth` middleware from the endpoint and keep the rest of the logic the same.
## Displaying files
We provide dedicated endpoints for retrieving signed URLs specifically for displaying files. These URLs are time-limited to maintain security, so they cannot be used for permanent storage or long-term access:
```ts title="storage.router.ts"
export const storageRouter = new Hono().get(
  "/signed",
  enforceAuth,
  validate("query", getObjectUrlSchema),
  async (c) => c.json(await getSignedUrl(c.req.valid("query"))),
);
```
This endpoint is perfect for displaying files that should only be accessible to authorized users for a limited time.
### Public files
For displaying files publicly (without authorization and time limitations), you can use the `/public` endpoint:
```ts title="storage.router.ts"
export const storageRouter = new Hono().get(
  "/public",
  validate("query", getObjectUrlSchema),
  async (c) => c.json(await getPublicUrl(c.req.valid("query"))),
);
```
This endpoint generates a public URL for the file that you can use to display in your application. Please ensure that your bucket policy allows public access to the files and verify that you're not exposing any sensitive information.
## Deleting files
Deleting files works almost the same way as uploading files. You just need to generate a presigned URL for deletion and then use it to remove the file:
```ts title="storage.router.ts"
export const storageRouter = new Hono().get(
  "/delete",
  validate("query", getObjectUrlSchema),
  async (c) => c.json(await getDeleteUrl(c.req.valid("query"))),
);
```
Then, in the frontend code, we use React Query's `useMutation` hook to handle the deletion process:
```tsx title="delete.tsx"
const remove = useMutation({
  mutationFn: async () => {
    const path = file.split("/").pop();

    if (!path) return;

    const { url: deleteUrl } = await handle(api.storage.delete.$get)({
      query: { path: `files/${path}` },
    });

    await fetch(deleteUrl, {
      method: "DELETE",
    });
  },
  onError: (error) => {
    toast.error(error.message);
  },
  onSuccess: () => {
    toast.success("File removed!");
  },
});
```
Now that you understand how to manage files in TurboStarter, it's time to build something awesome! Try creating a file upload component, building a photo gallery, or implementing a document management system.
file: ./src/content/docs/(core)/web/storage/overview.mdx
meta: {
"title": "Overview",
"description": "Get started with storage in TurboStarter.",
"index": true
}
With TurboStarter, you can easily upload and manage files (images, videos, documents, and more) in your application.
Currently, all S3-compatible storage providers are supported, including [AWS S3](https://aws.amazon.com/s3/), [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces), [Cloudflare R2](https://www.cloudflare.com/products/r2/), [Supabase Storage](https://supabase.com/storage), and others.
## Uploading files
The most common approach to uploading files is to use client-side uploads. With client-side uploads, you avoid paying ingress/egress fees for transferring file binary data through your server.
Additionally, most hosting platforms like [Vercel](https://vercel.com/docs/functions/runtimes#size-limits) or [Netlify](https://answers.netlify.com/t/what-is-the-maximum-file-size-upload-limit-in-a-netlify-form-submission/108419) have limitations on file size and maximum serverless function execution time.
That's why TurboStarter utilizes the **presigned URLs** feature of storage providers to upload files. Instead of sending files to the serverless function, the client requests a time-limited presigned URL from the serverless function and then uploads the file directly to the storage provider.

1. Client **requests** a presigned URL from the serverless function.
2. Server parses the request, validates the payload, optionally saves the metadata, and **returns the presigned URL** to the client.
3. Client **uploads the file** to the presigned URL within the expiration time.
4. (Optional) Once the file is uploaded, the serverless function is notified about the upload event, and the file metadata is saved to the database.
This approach ensures that credentials remain secure, handles authorization and authentication properly, and avoids the limitations of serverless platforms.
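The steps above can be sketched in client-side code. This is a minimal, illustrative version (the helper name and file shape are assumptions, not TurboStarter APIs): the client builds a plain `PUT` request and sends it straight to the presigned URL.

```typescript
// Step 3 of the flow: upload the file directly to the presigned URL.
// The request is a plain PUT; the Content-Type should match what the
// URL was signed for, otherwise the provider may reject the upload.
function buildUploadRequest(file: { type: string; body: string | Blob }) {
  return {
    method: "PUT",
    body: file.body,
    headers: { "Content-Type": file.type },
  };
}

// Usage (assuming `presignedUrl` was returned by your API in step 2):
// const response = await fetch(presignedUrl, buildUploadRequest(file));
```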
The configuration and use of storage is straightforward and simple. We'll explore this in more detail in the following sections.
file: ./src/content/docs/(core)/web/troubleshooting/billing.mdx
meta: {
"title": "Billing",
"description": "Find answers to common billing issues."
}
## Checkout can't be created
This happens in the following cases:
1. The environment variables are not set correctly. Please make sure you have set the environment variables corresponding to your billing provider - in `.env.local` when working locally, or in your hosting provider's dashboard when in production.
2. The price IDs used are incorrect. Make sure to use the exact price IDs as they are in the payment provider's dashboard.
[Read more about billing configuration](/docs/web/billing/configuration)
## Database is not updated after subscribing to a plan
This may happen if the webhook is not set up correctly. Please make sure you have set up the webhook in the payment provider's dashboard and that the URL is correct.
If working locally, make sure that:
1. If using Stripe, that the Stripe CLI or configured proxy is up and running ([see the Stripe documentation for more information](/docs/web/billing/stripe#create-a-webhook))
2. If using Lemon Squeezy, that the webhook set in Lemon Squeezy is correct and that the server is running. Additionally, make sure the proxy is set up correctly if you are testing locally ([see the Lemon Squeezy documentation for more information](/docs/web/billing/lemon-squeezy#create-a-webhook)).
file: ./src/content/docs/(core)/web/troubleshooting/deployment.mdx
meta: {
"title": "Deployment",
"description": "Find answers to common web deployment issues."
}
## Deployment build fails
This is most likely an issue related to the environment variables not being set correctly in the deployment environment. Please analyze your deployment provider's logs to identify the issue.
The kit is very defensive about incorrect environment variables and will throw an error if any required variable is not set. This way, the build fails when environment variables are misconfigured, instead of deploying a broken application.
Check our guides for the most popular hosting providers for more information on how to deploy your TurboStarter project correctly:
## What should I set as a URL before my first deployment?
That's a very good question! For the first deployment, you can set any URL; then, after you (or your provider) assign a domain name, you can change it to the correct one. There's nothing wrong with redeploying your project multiple times.
## Sign in with OAuth provider doesn't work
This is most likely a misconfiguration in the provider's settings. To troubleshoot this issue, follow these steps:
1. **Verify provider settings**: Ensure that the OAuth provider's settings are correctly configured. Check that the client ID, client secret, and redirect URI are accurate and match the values in your application.
2. **Check environment variables**: Confirm that the environment variables for the OAuth provider are set correctly in your application production environment.
3. **Validate callback URLs**: Ensure that the callback URLs for each provider are set correctly and match the URLs in your application. This is crucial for the OAuth flow to work correctly.
Please read [Better Auth documentation](https://www.better-auth.com/docs/concepts/oauth) for more information on how to set up third-party providers.
file: ./src/content/docs/(core)/web/troubleshooting/emails.mdx
meta: {
"title": "Emails",
"description": "Find answers to common emails issues."
}
## I want to use a different email provider
Of course! You can use any email provider that you want. All you need to do is implement the `EmailProviderStrategy` and pass it to the `strategies` map.
[Read more about sending emails](/docs/web/emails/sending)
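As a rough sketch of what such a strategy could look like - the interface shape below is a hypothetical stand-in, so check the email package for the real types before implementing:

```typescript
// Hypothetical shapes - the actual EmailProviderStrategy interface
// lives in TurboStarter's email package and may differ.
interface SendEmailOptions {
  to: string;
  subject: string;
  html: string;
}

interface EmailProviderStrategy {
  send: (options: SendEmailOptions) => Promise<void>;
}

const myProvider: EmailProviderStrategy = {
  send: async ({ to, subject }) => {
    // Call your provider's HTTP API here; this placeholder just logs.
    console.log(`Sending "${subject}" to ${to}`);
  },
};

// The strategy would then be registered in the `strategies` map.
```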
## My emails are landing in the spam folder
Emails landing in spam folders is a common issue. Here are key steps to improve deliverability:
1. **Configure proper domain setup**:
* Use a dedicated subdomain for sending emails (e.g., mail.yourdomain.com)
* Ensure [reverse DNS (PTR) records](https://www.cloudflare.com/learning/dns/dns-records/dns-ptr-record/) are properly configured
* Warm up your sending domain gradually
2. **Implement authentication protocols**:
* Set up [SPF records](https://www.cloudflare.com/learning/dns/dns-records/dns-spf-record/) to specify authorized sending servers
* Enable [DKIM signing](https://www.cloudflare.com/learning/dns/dns-records/dns-dkim-record/) to verify email authenticity
* Configure [DMARC policies](https://www.cloudflare.com/learning/dns/dns-records/dns-dmarc-record/) to prevent spoofing
3. **Follow deliverability best practices**:
* Include clear unsubscribe mechanisms in all marketing communications
* Personalize content appropriately
* Avoid excessive promotional language and spam triggers
* Maintain consistent HTML formatting and styling
* Only include links to verified domains
* Keep a regular sending schedule
* Clean your email lists regularly
* Use double opt-in for new subscribers
4. **Monitor and optimize**:
* Track key metrics like delivery rates, opens, and bounces
* Monitor spam complaint rates
* Review email authentication reports
* Test emails across different clients and devices
* Adjust sending practices based on performance data
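For illustration, the three authentication protocols from step 2 are configured as plain DNS `TXT` records. The values below are placeholders - the real ones come from your email provider's dashboard:

```txt
; SPF - which servers may send mail for the domain
yourdomain.com.                      TXT  "v=spf1 include:spf.your-email-provider.com ~all"

; DKIM - public key used to verify message signatures
selector._domainkey.yourdomain.com.  TXT  "v=DKIM1; k=rsa; p=<public-key-from-provider>"

; DMARC - policy for mail failing SPF/DKIM checks
_dmarc.yourdomain.com.               TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@yourdomain.com"
```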
file: ./src/content/docs/(core)/web/troubleshooting/installation.mdx
meta: {
"title": "Installation",
"description": "Find answers to common web installation issues."
}
## Cannot clone the repository
Issues related to cloning the repository are usually caused by a Git misconfiguration on your local machine. The commands displayed in this guide use SSH: they will work only if you have set up your SSH keys in GitHub.
If you run into issues, [please make sure you follow this guide to set up your SSH key in Github.](https://docs.github.com/en/authentication/connecting-to-github-with-ssh)
If this also fails, please use HTTPS instead. You can find the commands on the repository's GitHub page under the "Clone" dropdown.
Please also make sure that the account that accepted the invite to TurboStarter and the locally connected account are the same.
## My environment variables from `.env.local` file are not being loaded
Make sure you are running the `pnpm dev` command from the root directory of your project (where the `pnpm-workspace.yaml` file is located).
Also, ensure that the `.env.local` files are present in the apps that need them. For example, the `.env.local` file should be present in the `apps/web` directory for the web app.
TurboStarter uses `dotenv-cli` to load environment variables from `.env` files. It is invoked automatically when you run `pnpm dev` from the root directory.
## Next.js server doesn't start
This may happen due to some issues in the packages. Try to clean the workspace using the following command:
```bash
pnpm clean
```
Then, reinstall the dependencies:
```bash
pnpm i
```
You can now retry running the dev server.
## Local database doesn't start
If you cannot run the local database container, it's likely you have not started [Docker](https://docs.docker.com/get-docker/) locally. Our local database requires Docker to be installed and running.
Please make sure you have installed Docker (or compatible software such as [Colima](https://github.com/abiosoft/colima) or [Orbstack](https://github.com/orbstack/orbstack)) and that it is running on your local machine.
Also, make sure that you have enough [memory and CPU allocated](https://docs.docker.com/engine/containers/resource_constraints/) to your Docker instance.
## I don't see my translations
If you don't see your translations appearing in the application, there are a few common causes:
1. Check that your translation `.json` files are properly formatted and located in the correct directory
2. Verify that the language codes in your configuration match your translation files
3. Enable debug mode (`debug: true`) in your i18next configuration to see detailed logs
[Read more about configuration for translations](/docs/web/internationalization/configuration)
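For point 3, `debug` is a standard i18next init option. A small, illustrative sketch of gating it to development (the option names beyond `debug` and `fallbackLng` depend on your actual configuration):

```typescript
// Illustrative: enable verbose i18next logging only in development.
const i18nextOptions = (nodeEnv: string | undefined) => ({
  fallbackLng: "en",
  debug: nodeEnv === "development",
});

// Usage: i18next.init(i18nextOptions(process.env.NODE_ENV))
```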
## "Module not found" error
This issue is usually caused by either a dependency installed in the wrong package or file system issues.
The most common cause is incorrect dependency installation. Here's how to fix it:
1. Clean the workspace:
```bash
pnpm clean
```
2. Reinstall the dependencies:
```bash
pnpm i
```
If you're adding new dependencies, make sure to install them in the correct package:
```bash
# For main app dependencies
pnpm install --filter web my-package
# For a specific package
pnpm install --filter @turbostarter/ui my-package
```
If the issue persists, please check the file system for any issues.
### Windows OneDrive
OneDrive can cause file system issues with Node.js projects due to its file syncing behavior. If you're using Windows with OneDrive, you have two options to resolve this:
1. Move your project to a location outside of OneDrive-synced folders (recommended)
2. Disable OneDrive sync specifically for your development folder
This prevents file watching and symlink issues that can occur when OneDrive tries to sync Node.js project files.
file: ./src/content/docs/(core)/mobile/auth/oauth/apple.mdx
meta: {
"title": "Apple",
"description": "Configure native Apple OAuth for your application."
}
We are working on adding native "Sign in with Apple" support to TurboStarter. Stay tuned for updates.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
You can find these resources helpful for implementing "Sign in with Apple" in your app:
file: ./src/content/docs/(core)/mobile/auth/oauth/google.mdx
meta: {
"title": "Google",
"description": "Configure native Google OAuth for your application."
}
We are working on adding native "Sign in with Google" support to TurboStarter. Stay tuned for updates.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
You can find these resources helpful for implementing "Sign in with Google" in your app:
file: ./src/content/docs/(core)/mobile/auth/oauth/index.mdx
meta: {
"title": "OAuth",
"description": "Get started with social authentication.",
"index": true
}
Better Auth supports almost **15** (!) different [OAuth providers](https://www.better-auth.com/docs/concepts/oauth). They can be easily configured and enabled in the kit without any additional configuration needed.
TurboStarter provides you with all the configuration required to handle OAuth providers responses from your app:
* redirects
* middleware
* confirmation API routes
You just need to configure one of the below providers on their side and set correct credentials as environment variables in your TurboStarter app.

Third-party providers need to be configured, managed, and enabled entirely on the provider's side. TurboStarter just needs the correct credentials to be set as environment variables in your app and passed to the [authentication API configuration](/docs/web/auth/configuration#api).
To enable OAuth providers in your TurboStarter app, you need to:
1. Set up an OAuth application in the provider's developer console (like Google Cloud Console, Github Developer Settings or any other provider you want to use)
2. Configure the provider's credentials as environment variables in your app. For example, for Google OAuth:
```dotenv title="packages/db/.env.local"
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
```
Then, pass it to the authentication configuration in `packages/auth/src/server.ts`:
```tsx title="server.ts"
export const auth = betterAuth({
  ...
  socialProviders: {
    [SOCIAL_PROVIDER.GOOGLE]: {
      clientId: env.GOOGLE_CLIENT_ID,
      clientSecret: env.GOOGLE_CLIENT_SECRET,
    },
  },
  ...
});
```
For mobile apps, we need to define a trusted origin using an app scheme instead of a classic URL. App schemes (like `turbostarter://`) are used for [deep linking](https://docs.expo.dev/guides/linking/) users to specific screens in your app after authentication.
To find your app scheme, take a look at `apps/mobile/app.config.ts` file and then add it to your auth server configuration:
```tsx title="server.ts"
export const auth = betterAuth({
  ...
  trustedOrigins: ["turbostarter://**"],
  ...
});
```
Adding your app scheme to the trusted origins list is crucial for security - it prevents CSRF attacks and blocks malicious open redirects by ensuring only requests from approved origins (your app) are allowed through.
[Read more about auth security in Better Auth's documentation.](https://www.better-auth.com/docs/reference/security)
Also, we included some native integrations (Apple for iOS and Google for Android) to make the sign-in process smoother and faster for the user.