---
url: /docs/extension/ai
title: AI
description: Leverage AI in your TurboStarter extension.
---
TurboStarter includes a set of AI rules, skills, subagents, and commands for popular AI editors and tools - so the AI follows this repo's conventions and produces more consistent changes.
See [AI-assisted development](/docs/extension/installation/ai-development) to set it up.
There are two approaches to AI in a browser extension:
* **Server + client**: Traditional implementation, same as for [web](/docs/web/ai/overview) and [mobile](/docs/mobile/ai), used to stream server-generated responses to the client.
* **Chrome built-in AI**: An [experimental implementation](https://developer.chrome.com/docs/ai/built-in) of [Gemini Nano](https://blog.google/technology/ai/google-gemini-ai/#performance) that's built into new versions of the Google Chrome browser.
We recommend the traditional server + client approach because it's more versatile and easier to implement. Chrome's built-in AI is a nice option, but it's still experimental and has limitations.
Of course, you can always implement a *hybrid* approach which combines both solutions to achieve the best results.
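A minimal sketch of what such a hybrid strategy could look like (the function names and shapes below are illustrative, not part of the starter kit): try the on-device model first and fall back to the server endpoint when it's unavailable or fails.

```typescript
// Illustrative hybrid strategy: prefer the on-device model when present,
// fall back to the server-backed endpoint otherwise (names are hypothetical).
type Generate = (prompt: string) => Promise<string>;

export function createHybridGenerate(
  builtIn: Generate | undefined,
  server: Generate,
): Generate {
  return async (prompt) => {
    if (builtIn) {
      try {
        // On-device first: no network round-trip, no API key exposure.
        return await builtIn(prompt);
      } catch {
        // Any on-device failure (model unavailable, quota, etc.) falls through.
      }
    }
    return server(prompt);
  };
}
```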
## Server + client
The traditional AI setup in the browser extension is the same as for the [web app](/docs/web/ai/configuration#client-side) and the [mobile app](/docs/mobile/ai). We use the same [API endpoint](/docs/web/ai/configuration#api-endpoint) and leverage streaming to display answers incrementally as they're generated.
```tsx title="main.tsx"
import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";

const Popup = () => {
  const { messages } = useChat({
    transport: new DefaultChatTransport({
      api: "/api/ai/chat",
    }),
  });

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.parts.map((part, i) => {
            switch (part.type) {
              case "text":
                return <span key={i}>{part.text}</span>;
            }
          })}
        </div>
      ))}
    </div>
  );
};

export default Popup;
```
This is the most reliable way to use AI in the browser extension. Feel free to reuse or modify it to suit your needs.
## Chrome built-in AI
Chrome's implementation of [built-in AI with Gemini Nano](https://developer.chrome.com/docs/ai/built-in) is experimental and will change as the team tests and addresses feedback. It's a preview feature: to use it, you need Chrome version 127 or later and you must enable these flags:
* [chrome://flags/#prompt-api-for-gemini-nano](chrome://flags/#prompt-api-for-gemini-nano): `Enabled`
* [chrome://flags/#optimization-guide-on-device-model](chrome://flags/#optimization-guide-on-device-model): `Enabled BypassPrefRequirement`
* [chrome://components/](chrome://components/): Click `Optimization Guide On Device Model` to download the model.
Once enabled, you can use `window.ai` to access the built-in AI and do things like this:
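The exact API surface is still in flux, so treat the sketch below as an assumption: it uses the early Prompt API shape (`createTextSession` returning a session with a `prompt` method), wrapped in a small helper so the experimental global is easy to stub.

```typescript
// Assumed (experimental) shape of the built-in AI global -- subject to change.
interface AITextSession {
  prompt(input: string): Promise<string>;
}
interface WindowAI {
  createTextSession(): Promise<AITextSession>;
}

// Hypothetical helper: ask Gemini Nano a question from any extension context.
export async function askBuiltInAI(
  ai: WindowAI,
  question: string,
): Promise<string> {
  const session = await ai.createTextSession();
  return session.prompt(question);
}

// In the extension itself you'd pass the experimental global, e.g.:
// const answer = await askBuiltInAI((window as any).ai, "Summarize this page");
```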

You can also use a [dedicated provider](https://sdk.vercel.ai/providers/community-providers/chrome-ai) from the Vercel AI SDK ecosystem to simplify usage. Keep in mind that this API is still in its early stages and may change in the future.
You can use this API in any part of your extension (popup, background service worker, etc.).
It's safe to use on the client side because it doesn't require exposing secrets to the user (like an API key in the traditional server + client approach).
To learn more, check the official [Chrome documentation](https://developer.chrome.com/docs/ai/built-in) and the articles below.
---
url: /docs/extension/analytics/configuration
title: Configuration
description: Learn how to configure extension analytics in TurboStarter.
---
The `@workspace/analytics-extension` package offers a streamlined and flexible approach to tracking events in your TurboStarter extension using various analytics providers. It abstracts the complexities of different analytics services and provides a consistent interface for event tracking.
In this section, we'll guide you through the configuration process for each supported provider.
Note that the configuration is validated against a schema, so you'll see error messages in the console if anything is misconfigured.
## Providers
Below, you'll find detailed information on how to set up and use each supported provider. Choose the one that best suits your needs and follow the instructions in the respective accordion section.
To use Google Analytics as your analytics provider, you need to [create a Google Analytics account](https://analytics.google.com/) and [set up a property](https://support.google.com/analytics/answer/9304153).
Next, add a data stream in your Google Analytics account settings:
1. Navigate to [Google Analytics](https://analytics.google.com/).
2. In the *Admin* section, under *Data collection and modification*, click on *Data Streams*.
3. Click *Add stream*.
4. Select *Web* as the platform.
5. Enter the required details for the stream (at minimum, provide a name and website URL).
6. Click *Create stream*.
After creating the stream, you'll need two pieces of information:
1. Your [Measurement ID](https://support.google.com/analytics/answer/12270356) (it should look like `G-XXXXXXXXXX`):

2. Your [Measurement Protocol API secret](https://support.google.com/analytics/answer/9814495):

Set these values in your `.env.local` file in the `apps/extension` directory and in your CI/CD provider secrets:
```dotenv
VITE_GOOGLE_ANALYTICS_MEASUREMENT_ID="your-measurement-id"
VITE_GOOGLE_ANALYTICS_SECRET="your-measurement-protocol-api-secret"
```
Also, make sure to activate the Google Analytics provider as your analytics provider by updating the exports in:
```ts title="index.ts"
// [!code word:google-analytics]
export * from "./google-analytics";
export * from "./google-analytics/env";
```
To customize the provider, you can find its definition in the `packages/analytics/extension/src/providers/google-analytics` directory.
For more information, please refer to the [Google Analytics documentation](https://developers.google.com/analytics).

PostHog is also one of the pre-configured providers for [monitoring](/docs/extension/monitoring/overview) in TurboStarter. You can learn more about it [here](/docs/extension/monitoring/posthog).
To use PostHog as your analytics provider, you need to configure a PostHog instance. You can obtain the [Cloud](https://app.posthog.com/signup) instance by [creating an account](https://app.posthog.com/signup) or [self-host](https://posthog.com/docs/self-host) it.
Then, create a project and, based on your [project settings](https://app.posthog.com/project/settings), fill in the following environment variables in your `.env.local` file in the `apps/extension` directory and in your CI/CD provider secrets:
```dotenv
VITE_POSTHOG_KEY="your-posthog-api-key"
VITE_POSTHOG_HOST="your-posthog-instance-host"
```
Also, make sure to activate the PostHog provider as your analytics provider by updating the exports in:
```ts title="index.ts"
// [!code word:posthog]
export * from "./posthog";
export * from "./posthog/env";
```
To customize the provider, you can find its definition in the `packages/analytics/extension/src/providers/posthog` directory.
For more information, please refer to the [PostHog documentation](https://posthog.com/docs/advanced/browser-extension).

---
url: /docs/extension/analytics/overview
title: Overview
description: Get started with extension analytics in TurboStarter.
---
When it comes to extension analytics, we can distinguish between two types:
* **Store listing analytics**: Used to track the performance of your extension's store listing (e.g., how many people have viewed your extension in the store or how many have installed it).
* **In-extension analytics**: Tracks user actions within your extension (e.g., how many users triggered your popup, how many users modified extension settings, etc.).
The `@workspace/analytics-extension` package provides a set of tools to easily implement both types of analytics in your extension.
## Store listing analytics
Interpreting your extension's store listing metrics can help you evaluate how changes to your extension and store listing affect conversion rates. For example, you can identify countries with a high number of visitors to prioritize supporting languages for those regions.
While each store implements a different set of metrics, there are some common ones you should be aware of:
* **Active installs**: The number of users who have installed your extension.
* **Active users**: The number of users who have used your extension.
* **Page views**: The number of times users have viewed your extension's detail page on the respective store.
To track more detailed metrics, you can opt in to Google Analytics in the Chrome Web Store's developer dashboard.
You can find this option under *Additional metrics* on the *Store listing* tab of your extension's control panel:

The Chrome Web Store manages the account for you and makes the data available
in the Google Analytics dashboard.
By enabling this feature, you can optimize your extension's store listing based on metrics such as bounce rate, time on page, and more. This can lead to more installs and ultimately more users for your extension.
To learn more about the limitations of this type of analytics and how to adjust event details, please refer to the following sections in the official documentation:
## In-extension analytics
TurboStarter comes with built-in support for tracking in-extension analytics. To learn more about each supported provider and how to configure them, see their respective sections:
All configuration and setup is built-in with a unified API, allowing you to switch between providers by simply changing the exports. You can even introduce your own provider without breaking any tracking-related logic.
In the following sections, we'll cover how to set up each provider and how to track events in your extension.
---
url: /docs/extension/analytics/tracking
title: Tracking events
description: Learn how to track events in your TurboStarter extension.
---
The strategy for tracking events that every provider has to implement is extremely simple:
```ts
export type AllowedPropertyValues = string | number | boolean;

type TrackFunction = (
  event: string,
  data?: Record<string, AllowedPropertyValues>,
) => void;

export interface AnalyticsProviderStrategy {
  track: TrackFunction;
}
```
You don't need to worry much about this implementation, as all the providers are already configured for you. However, it's useful to be aware of this structure if you plan to add your own custom provider.
As shown above, each provider must supply the `track` function. This function is responsible for sending event data to the provider.
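For illustration, a hypothetical in-memory provider satisfying this strategy (not one shipped with the kit) could look like this:

```typescript
type AllowedPropertyValues = string | number | boolean;

interface AnalyticsProviderStrategy {
  track: (event: string, data?: Record<string, AllowedPropertyValues>) => void;
}

// Recorded events, exported so they can be inspected (e.g. in tests).
export const events: { event: string; data?: Record<string, AllowedPropertyValues> }[] =
  [];

// Hypothetical in-memory provider satisfying the strategy above.
export const memoryProvider: AnalyticsProviderStrategy = {
  track: (event, data) => {
    events.push({ event, data });
  },
};

memoryProvider.track("popup_opened", { theme: "dark" });
```

Because the provider only has to supply `track`, swapping implementations (or adding your own) never touches the call sites in your extension code.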
To track an event in any part of your extension, simply call the `track` method, passing the event name and an optional data object:
```tsx title="main.tsx"
import { track } from "@workspace/analytics-extension";

const Popup = () => {
  return (
    // example event name and payload
    <button onClick={() => track("popup_opened", { source: "toolbar" })}>
      Open!
    </button>
  );
};

export default Popup;
```
## Identifying users
Linking events to specific users enables you to build a full picture of how they're using your product across different sessions, devices, and platforms.
For identification purposes, we're extending the strategy with the `identify` and `reset` methods. They are optional and only needed if you want to identify users in your app and associate their actions with a specific user ID.
```ts
type IdentifyFunction = (
  userId: string,
  traits?: Record<string, AllowedPropertyValues>,
) => void;

export interface AnalyticsProviderClientStrategy {
  identify: IdentifyFunction;
  reset: () => void;
}
```
To identify users, call the `identify` method, passing the user's ID and an optional traits object:
```tsx
import { identify } from "@workspace/analytics-extension";
identify("user-123", { name: "John Doe" });
```
This will associate all future events with the user's ID, allowing you to track user behavior and gain valuable insights into your application's usage patterns.
The `identify` method is configured out-of-the-box to react to changes in the user's authentication state.
When the user is authenticated, the `identify` method will be called with the user's ID and the user's traits. When the user is logged out, the `reset` method will be called to clear the existing user identification.
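Conceptually, that wiring boils down to something like the sketch below (a trimmed-down illustration, not the kit's actual implementation; the real session shape comes from the auth client):

```typescript
type AllowedPropertyValues = string | number | boolean;

// Simplified session shape for illustration only.
type Session = { user: { id: string; email: string } } | null;

// Bridge from auth state to analytics: identify on sign-in, reset on sign-out.
export function syncAnalyticsIdentity(
  session: Session,
  identify: (userId: string, traits?: Record<string, AllowedPropertyValues>) => void,
  reset: () => void,
): void {
  if (session?.user) {
    identify(session.user.id, { email: session.user.email });
  } else {
    reset();
  }
}
```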
Congratulations! You've now mastered event tracking in your TurboStarter extension. With this knowledge, you're well-equipped to analyze user behaviors and gain valuable insights into your extension's usage patterns. Happy analyzing! 📊
---
url: /docs/extension/api/client
title: Using API client
description: How to use API client to interact with the API.
---
In browser extension code, you can only access the API client from the **client-side.**
When you create a new component or piece of your extension and want to fetch some data, you can use the API client to do so.
## Creating a client
We're creating a client-side API client in `apps/extension/src/lib/api/index.tsx` file. It's a simple wrapper around the [@tanstack/react-query](https://tanstack.com/query/latest/docs/framework/react/overview) that fetches or mutates data from the API.
It also requires wrapping your views in a `QueryClientProvider` component to provide the API client to the rest of the components.
We recommend creating a separate layout file to wrap your pages. TurboStarter comes with a `layout.tsx` file in the `modules/common/layout` folder, which you can use as a template:
```tsx title="layout.tsx"
export const Layout = ({
  children,
  loadingFallback,
  errorFallback,
}: LayoutProps) => {
  return (
    // Provide the query client to everything rendered inside this layout
    // (simplified -- see the actual file for the full implementation).
    <QueryClientProvider client={queryClient}>
      <ErrorBoundary fallback={errorFallback}>
        <Suspense fallback={loadingFallback}>{children}</Suspense>
      </ErrorBoundary>
    </QueryClientProvider>
  );
};
```
Remember that every part of your extension will be mounted as a **separate** React component, so you need to wrap each of them in the `QueryClientProvider` component if you want to use the API client inside:
```tsx title="app/popup/main.tsx"
import { Layout } from "~/modules/common/layout/layout";

export default function Popup() {
  return <Layout>{/* your popup code here */}</Layout>;
}
```
Inside `apps/extension/src/lib/api/index.tsx`, we call a function to get the base URL of your API, so make sure it's set correctly (especially in production) and that it matches your web API endpoint.
```tsx title="index.tsx"
const getBaseUrl = () => {
return env.VITE_SITE_URL;
};
```
As you can see, we mostly rely on [environment variables](/docs/extension/configuration/environment-variables) to resolve it, so there shouldn't be any issues, but just in case, it's good to know where to find it 😉
## Queries
Of course, everything is already configured for you, so you can start using `api` in your components and screens right away.
For example, to fetch the list of posts you can use the `useQuery` hook:
```tsx title="posts.tsx"
import { useQuery } from "@tanstack/react-query";

import { api } from "~/lib/api";

export const Posts = () => {
  const { data: posts, isLoading } = useQuery({
    queryKey: ["posts"],
    queryFn: async () => {
      const response = await api.posts.$get();

      if (!response.ok) {
        throw new Error("Failed to fetch posts!");
      }

      return response.json();
    },
  });

  if (isLoading) {
    return <div>Loading...</div>;
  }

  /* do something with the data... */
  return <div>{JSON.stringify(posts)}</div>;
};
```
It uses the `@tanstack/react-query` [useQuery API](https://tanstack.com/query/latest/docs/framework/react/reference/useQuery), so you shouldn't have any trouble with it.
## Mutations
If you want to perform a mutation in your extension code, you can use the `useMutation` hook that comes straight from the integration with [Tanstack Query](https://tanstack.com/query):
```tsx title="modules/popup/form.tsx"
import { useMutation, useQueryClient } from "@tanstack/react-query";

import { api } from "~/lib/api";

export const CreatePost = () => {
  const queryClient = useQueryClient();

  const { mutate } = useMutation({
    mutationFn: async (post: PostInput) => {
      const response = await api.posts.$post(post);

      if (!response.ok) {
        throw new Error("Failed to create post!");
      }
    },
    onSuccess: () => {
      toast.success("Post created successfully!");
      queryClient.invalidateQueries({ queryKey: ["posts"] });
    },
  });

  // example payload; adjust to your `PostInput` shape
  return <button onClick={() => mutate({ title: "Hello!" })}>Create post!</button>;
};
```
Here, we're also invalidating the query after the mutation is successful. This is a very important step to make sure that the data is updated in the UI.
## Handling responses
As you can see in the examples above, the [Hono RPC](https://hono.dev/docs/guides/rpc) client returns a plain `Response` object, which you can use to get the data or handle errors. However, implementing this handling in every query or mutation can be tedious and will introduce unnecessary boilerplate in your codebase.
That's why we've developed the `handle` function that unwraps the response for you, handles errors, and returns the data in a consistent format. You can safely use it with any procedure from the API client:
```tsx
// [!code word:handle]
import { useQuery } from "@tanstack/react-query";

import { handle } from "@workspace/api/utils";
import { api } from "~/lib/api";

export const Posts = () => {
  const { data: posts, isLoading } = useQuery({
    queryKey: ["posts"],
    queryFn: handle(api.posts.$get),
  });

  if (isLoading) {
    return <div>Loading...</div>;
  }

  /* do something with the data... */
  return <div>{JSON.stringify(posts)}</div>;
};
```
```tsx
// [!code word:handle]
import { useMutation, useQueryClient } from "@tanstack/react-query";

import { handle } from "@workspace/api/utils";
import { api } from "~/lib/api/client";

export const CreatePost = () => {
  const queryClient = useQueryClient();

  const { mutate } = useMutation({
    mutationFn: handle(api.posts.$post),
    onSuccess: () => {
      toast.success("Post created successfully!");
      queryClient.invalidateQueries({ queryKey: ["posts"] });
    },
  });

  // example payload; adjust to your post input shape
  return <button onClick={() => mutate({ title: "Hello!" })}>Create post!</button>;
};
```
With this approach, you can focus on the business logic instead of repeatedly writing code to handle API responses in your browser extension components, making your extension's codebase more readable and maintainable.
The same error handling and response unwrapping benefits apply whether you're building web, mobile, or extension interfaces - allowing you to keep your data fetching logic consistent across all platforms.
---
url: /docs/extension/api/overview
title: Overview
description: Get started with the API.
---
To enable communication between your WXT extension and the server in a production environment, the API **must** be deployed first. By default, it's hosted together with the [web app](/docs/web/api/overview), but you can also [deploy it separately](/docs/web/deployment/api).
TurboStarter is designed to be a scalable and production-ready full-stack starter kit. One of its core features is a dedicated and extensible API layer. To enable this in a type-safe way, we chose [Hono](https://hono.dev) as the API server and client library.
Hono is a small, simple, and ultrafast web framework that gives you a way to
define your API endpoints with full type safety. It provides built-in
middleware for common needs like validation, caching, and CORS. It also
includes an [RPC client](https://hono.dev/docs/guides/rpc) for making
type-safe function calls from the frontend. Being edge-first, it's optimized
for serverless environments and offers excellent performance.
All API endpoints and their resolvers live in the `packages/api/` package. Inside, the `modules` folder contains the API's feature modules. Each module has its own directory and exports its resolvers.
For each module, we create a separate Hono router and aggregate all sub-routers into one main router in the `packages/api/index.ts` file.
By default, the API is integrated with the [web app](/docs/web/api/overview) and exposed as a [Next.js route handler](https://nextjs.org/docs/app/getting-started/route-handlers):
```ts title="apps/web/src/app/api/[...route]/route.ts"
import { handle } from "hono/vercel";
import { appRouter } from "@workspace/api";
const handler = handle(appRouter);
export {
handler as GET,
handler as POST,
handler as OPTIONS,
handler as PUT,
handler as PATCH,
handler as DELETE,
handler as HEAD,
};
```
Learn more about how to use the API in your browser extension code in the following sections:
---
url: /docs/extension/auth/overview
title: Overview
description: Learn how to authenticate users in your extension.
---
TurboStarter uses [Better Auth](https://better-auth.com) to handle authentication. It's a secure, production-ready authentication solution that integrates seamlessly with many frameworks and provides enterprise-grade security out of the box.
One of the core principles of TurboStarter is to do things **as simple as possible**, and to make everything **as performant as possible**.
Better Auth provides an excellent developer experience with minimal configuration while keeping enterprise-grade security. Its framework-agnostic approach and focus on performance make it the perfect choice for TurboStarter.
Recently, Better Auth [announced](https://www.better-auth.com/blog/authjs-joins-better-auth) that [Auth.js (28k+ stars on GitHub)](https://authjs.dev/) is joining the project, making it even more powerful and flexible.

You can read more about Better Auth in the [official documentation](https://better-auth.com/docs).
To keep things simple and secure, **the extension shares the same authentication session with your web app.**
This is a common approach used by popular services like [Notion](https://www.notion.so) and [Google Workspace](https://workspace.google.com/). The benefits include:
* Users only need to sign in once through the web app
* The extension automatically inherits the authenticated session
* Sign out actions are synchronized across platforms
* Reduced security surface area and complexity
Before setting up extension authentication, make sure to first [configure authentication for your web app](/docs/web/auth/overview) and then head back to the extension code.
The following sections cover everything you need to know about authentication in your extension:
---
url: /docs/extension/auth/session
title: Session
description: Learn how to manage the user session in your extension.
---
We don't implement a fully featured auth flow in the extension. Instead, **we share the same auth session with the web app.**
It's a common practice in the industry used e.g. by [Notion](https://www.notion.so) and [Google Workspace](https://workspace.google.com/).
That way, when the user is signed in to the web app, the extension can use the same session to authenticate them, so they don't have to sign in again. Signing out from the extension affects both platforms as well.
For browser extensions, we need to define an [authentication trusted origin](https://www.better-auth.com/docs/reference/security#trusted-origins) using an extension scheme.
Extension schemes (like `chrome-extension://...`) are used for redirecting users to specific screens after authentication and sharing the auth session with the web app.
To find your extension ID, open Chrome and go to `chrome://extensions/`, enable Developer Mode in the top right, and look for your extension's ID. Then add it to your auth server configuration:
```ts title="server.ts"
export const auth = betterAuth({
...
trustedOrigins: ["chrome-extension://your-extension-id"],
...
});
```
Adding your extension scheme to the trusted origins list is crucial for security - it prevents CSRF attacks and blocks malicious open redirects by ensuring only requests from approved origins (your extension) are allowed through.
[Read more about auth security in Better Auth's documentation.](https://www.better-auth.com/docs/reference/security)
## Cookies
When the user signs in to the [web app](/docs/web) through our [Better Auth API](/docs/web/auth/configuration#api), the web app sets a cookie with the session token under your app's domain, which is later used to validate the session on the server.
You can find the cookie in the *Cookies* tab of your browser's developer tools (remember to be logged in to the app when checking):

To enable your extension to read the cookie, and thereby share the session with the web app, set the `cookies` permission under the `manifest.permissions` field in `wxt.config.ts`:
```ts title="wxt.config.ts"
export default defineConfig({
manifest: {
permissions: ["cookies"],
},
});
```
And to read the cookie for your app's URL, you also need to set `host_permissions` to include it:
```ts title="wxt.config.ts"
export default defineConfig({
manifest: {
host_permissions: ["http://localhost/*", "https://your-app-url.com/*"],
},
});
```
You can then send the cookie with API requests and read its value using the `browser.cookies` API.
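For instance, here's a sketch of reading the session cookie with `browser.cookies`. The cookie name below is an assumption that depends on your Better Auth configuration, and the API is typed out explicitly so it can be stubbed:

```typescript
// Minimal slice of the WebExtension `browser.cookies` API used below.
interface CookiesAPI {
  get(details: { url: string; name: string }): Promise<{ value: string } | null>;
}

// Sketch: read the shared session cookie for the web app's domain.
// "better-auth.session_token" is an assumed default -- verify it in devtools.
export async function readSessionCookie(
  cookies: CookiesAPI,
  appUrl: string,
): Promise<string | null> {
  const cookie = await cookies.get({
    url: appUrl,
    name: "better-auth.session_token",
  });
  return cookie?.value ?? null;
}

// In extension code: readSessionCookie(browser.cookies, env.VITE_SITE_URL)
```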
Avoid using `<all_urls>` in `host_permissions`. It matches all URLs and may cause security issues, as well as a [rejection](https://developer.chrome.com/docs/webstore/review-process#review-time-factors) from the destination store.
## Reading session
You **don't** need to worry about reading, parsing, or validating the session cookie. TurboStarter comes with a pre-built solution that ensures your session is correctly shared with the web app.
It also ensures that appropriate cookies are passed to [API](/docs/web/api/overview) requests, so you can safely use [protected endpoints](/docs/web/api/protected-routes) (that require authentication) in your extension.
To get session details in your extension code (e.g., inside a popup window), you can leverage the `useSession` hook provided by the [auth client](https://www.better-auth.com/docs/basic-usage#client-side) (which is also widely used in the web and mobile apps):
```tsx title="user.tsx"
import { authClient } from "~/lib/auth";

const User = () => {
  const session = authClient.useSession();

  if (session.isPending) {
    return <div>Loading...</div>;
  }

  /* do something with the session data... */
  return <div>{session.data?.user?.email}</div>;
};
```
That's how you can access user details right in your extension.
## Signing out
Signing out from the extension also involves using the well-known `signOut` function that is derived from our [auth client](https://www.better-auth.com/docs/basic-usage#signout):
```tsx title="logout.tsx"
import { authClient } from "~/lib/auth";

export const Logout = () => {
  return <button onClick={() => authClient.signOut()}>Sign out</button>;
};
```
The session is automatically invalidated, so the next use of `useSession` or any other query that depends on the session will return `null`. The UI for both the extension and the web app will be updated to show the user as logged out.
As the web app uses the same session cookie, the user will be signed out from the web app as well. **This is intentional**: your extension will most likely serve as an add-on to the web app, and it doesn't make sense to keep the user signed in there when the extension isn't in use.

---
url: /docs/extension/billing
title: Billing
description: Get started with extension billing in TurboStarter.
---
As you might guess, it doesn't make sense to implement the entire billing process inside the browser extension, so we rely on the [web app](/docs/web/billing/overview) to handle it.
> You probably won't display pricing tables inside a popup window, right?
You can customize the flow and onboarding process when a user purchases a plan in your [web app](/docs/web/billing/overview).
Then you can easily fetch customer data to ensure the user has access to the correct extension features.
## Fetching customer data
When your user has purchased a plan from your landing page or web app, you can easily fetch their data using the [API](/docs/extension/api/client).
To do so, just invoke the `me` query on the `billing` router to get the summary of the user's billing data:
```tsx title="customer-screen.tsx"
import { useQuery } from "@tanstack/react-query";

import { handle } from "@workspace/api/utils";
import { getActivePlan } from "@workspace/billing";
import { api } from "~/lib/api";

export default function CustomerScreen() {
  const summary = useQuery({
    queryKey: ["me"],
    queryFn: handle(api.billing.me.$get),
  });

  if (summary.isLoading) {
    return <div>Loading...</div>;
  }

  const plan = getActivePlan(summary.data);

  return <div>{JSON.stringify(plan)}</div>;
}
```
You may also want to ensure that the user is logged in before fetching their billing data, to avoid unnecessary API calls:
```tsx title="header.tsx"
import { useQuery } from "@tanstack/react-query";

import { handle } from "@workspace/api/utils";
import { api } from "~/lib/api";
import { authClient } from "~/lib/auth";

export const User = () => {
  const session = authClient.useSession();

  const summary = useQuery({
    queryKey: ["me"],
    queryFn: handle(api.billing.me.$get),
    enabled: !!session.data?.user, // [!code highlight]
  });

  if (!session.data?.user || !summary.data) {
    return null;
  }

  return (
    <div>
      <span>{session.data.user.email}</span>
      <span>{summary.data.length}</span>
    </div>
  );
};
```
Read more about [auth in extension](/docs/extension/auth/overview).
---
url: /docs/extension/cli
title: CLI
description: Start your new app project with a single command.
---
To help you get started with TurboStarter **as quickly as possible**, we've developed a [CLI](https://www.npmjs.com/package/@turbostarter/cli) that enables you to create a new project (with all the configuration) in seconds.
The CLI is a set of commands that will help you create a new project, generate code, and manage your project efficiently.
Currently, the following actions are available:
* **Starting a new project** - Generate starter code for your project with all necessary configurations in place (billing, database, emails, etc.)
* **Updating existing project** - Pull the latest upstream changes into your TurboStarter repository
**The CLI is in beta**, and we're actively working on adding more commands and actions.
## Installation
You can run commands without installing globally:
```bash
npx @turbostarter/cli@latest
```
```bash
pnpm dlx @turbostarter/cli@latest
```
```bash
yarn dlx @turbostarter/cli@latest
```
```bash
bunx @turbostarter/cli@latest
```
Or install globally and run:
```bash
npm install -g @turbostarter/cli
turbostarter
```
```bash
pnpm add -g @turbostarter/cli
turbostarter
```
```bash
yarn global add @turbostarter/cli
turbostarter
```
```bash
bun add -g @turbostarter/cli
turbostarter
```
You can also display help or check the current version using the `--help` or `-v` flags.
### Starting a new project
Use the `new` command to initialize configuration and dependencies for a new project.
```bash
turbostarter new
```
You will be asked a few questions to configure your project:
```bash
✔ All prerequisites satisfied, let's start! 🚀
? What do you want to ship? ›
◉ Web app
◉ Mobile app
◯ Browser extension
? Enter your project name. ›
? Configure all providers now? ›
Yes, configure now (recommended)
No, just let me ship, now!
Creating a new TurboStarter project in ...
✔ Repository successfully pulled!
✔ Git successfully configured!
✔ Dependencies successfully installed!
✔ Services successfully started!
🎉 You can now get started. Open the project and just ship it! 🎉
Problems? https://turbostarter.dev/docs
```
It will create a new project, configure providers, install dependencies and start required services in development mode.
### Updating existing project
Use the `project update` command to pull the latest upstream changes into your TurboStarter repository.
```bash
turbostarter project update
```
Before updating, the CLI validates that:
* You are running the command from a TurboStarter project root
* Your git working tree is clean
* Your `upstream` remote points to `turbostarter/core`
Then it fetches upstream changes and merges `upstream/main` into your current branch. If conflicts occur, it prints the conflicting files with next steps.
---
url: /docs/extension/configuration/app
title: App configuration
description: Learn how to setup the overall settings of your extension.
---
The application configuration is set at `apps/extension/src/config/app.ts`. This configuration stores some overall variables for your application.
This allows you to host multiple apps in the same monorepo, as every application defines its own configuration.
The recommendation is to **not update this directly** - instead, define environment variables and override the default behavior. The configuration is strongly typed, so you can use it safely across your codebase - it's validated at build time.
```ts title="apps/extension/src/config/app.ts"
import env from "env.config";

export const appConfig = {
  name: env.VITE_PRODUCT_NAME,
  url: env.VITE_SITE_URL,
  locale: env.VITE_DEFAULT_LOCALE,
  theme: {
    mode: env.VITE_THEME_MODE,
    color: env.VITE_THEME_COLOR,
  },
} as const;
```
For example, to set the extension's default theme color, you'd update the following variable:
```dotenv title=".env.local"
VITE_THEME_COLOR="yellow"
```
Do NOT use `process.env` to read these values. Variables accessed this way are not validated at build time, so a wrong or missing value can slip into production.
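To illustrate why this matters, here's a minimal, self-contained sketch of a fail-fast accessor (the `requireEnv` helper is hypothetical - in the kit this validation is handled by the typed `env.config` import):

```typescript
// Hypothetical fail-fast accessor: surfaces a missing variable immediately
// instead of letting `undefined` leak into a production build.
function requireEnv(name: string, value: string | undefined): string {
  if (value === undefined) {
    throw new Error(`Missing environment variable: ${name}`);
  }
  return value;
}

// Simulated env bag for illustration; in the kit the values come from
// the validated env.config import, not process.env.
const raw: Record<string, string | undefined> = {
  VITE_PRODUCT_NAME: "TurboStarter",
};

const appConfig = {
  name: requireEnv("VITE_PRODUCT_NAME", raw.VITE_PRODUCT_NAME),
} as const;

console.log(appConfig.name); // "TurboStarter"
```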
## WXT config
To configure framework-specific settings, you can use the `wxt.config.ts` file. You can configure a lot of options there, such as [manifest](/docs/extension/configuration/manifest), [project structure](https://wxt.dev/guide/essentials/project-structure.html) or even [underlying Vite config](https://wxt.dev/guide/essentials/config/vite.html):
```ts title="wxt.config.ts"
import { defineConfig } from "wxt";

export default defineConfig({
  srcDir: "src",
  entrypointsDir: "app",
  outDir: "build",
  modules: [],
  manifest: {
    // Put manifest changes here
  },
  vite: () => ({
    // Override config here, same as `defineConfig({ ... })`
    // inside vite.config.ts files
  }),
});
```
Make sure to set it up correctly, as it's the main source of configuration for your development, build, and publishing process.
---
url: /docs/extension/configuration/environment-variables
title: Environment variables
description: Learn how to configure environment variables.
---
Environment variables are defined in the `.env` file in the root of the repository and in the root of the `apps/extension` package.
* **Shared environment variables**: Defined in the **root** `.env` file. These are shared between environments (e.g., development, staging, production) and apps (e.g., web, extension).
* **Environment-specific variables**: Defined in `.env.development` and `.env.production` files. These are specific to the development and production environments.
* **App-specific variables**: Defined in the app-specific directory (e.g., `apps/extension`). These are specific to the app and are not shared between apps.
* **Bundle-specific variables**: Specific to the [bundle target](https://wxt.dev/guide/essentials/config/environment-variables.html#built-in-environment-variables) (e.g. `.env.safari`, `.env.firefox`) or [bundle tag](https://wxt.dev/guide/essentials/config/environment-variables.html#built-in-environment-variables) (e.g. `.env.testing`).
* **Build environment variables**: Not stored in the `.env` file. Instead, they are stored in the environment variables of the CI/CD system.
* **Secret keys**: They're not stored on the extension side, instead [they're defined on the web side.](/docs/web/configuration/environment-variables#secret-keys)
## Shared variables
Here you can add all the environment variables that are shared across all the apps.
To override these variables in a specific environment, please add them to the specific environment file (e.g. `.env.development`, `.env.production`).
```dotenv title=".env.local"
# Shared environment variables
# The database URL is used to connect to your database.
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres"
# The name of the product. This is used in various places across the apps.
PRODUCT_NAME="TurboStarter"
# The url of the web app. Used mostly to link between apps.
URL="http://localhost:3000"
...
```
## App-specific variables
Here you can add all the environment variables that are specific to the app (e.g. `apps/extension`).
You can also override the shared variables defined in the root `.env` file.
```dotenv title="apps/extension/.env.local"
# App-specific environment variables
# Env variables extracted from shared to be exposed to the client in WXT (Vite) extension
VITE_SITE_URL="${URL}"
VITE_DEFAULT_LOCALE="${DEFAULT_LOCALE}"
# Theme mode and color
VITE_THEME_MODE="system"
VITE_THEME_COLOR="orange"
...
```
To make environment variables available in the browser extension code, you need to prefix them with `VITE_`. They will be injected into the code during the build process.
Only environment variables prefixed with `VITE_` will be injected.
[Read more about Vite environment variables.](https://vite.dev/guide/env-and-mode.html#env-files)
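The prefix rule can be pictured as a simple filter over the env bag (an illustrative sketch, not Vite's actual implementation):

```typescript
// Only variables prefixed with VITE_ are exposed to client code;
// everything else stays server-side.
function clientExposed(env: Record<string, string>): Record<string, string> {
  return Object.fromEntries(
    Object.entries(env).filter(([key]) => key.startsWith("VITE_")),
  );
}

console.log(
  clientExposed({
    VITE_SITE_URL: "http://localhost:3000",
    DATABASE_URL: "postgresql://postgres:postgres@localhost:5432/postgres",
  }),
);
// { VITE_SITE_URL: 'http://localhost:3000' }
```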
## Bundle-specific variables
WXT also provides environment variables specific to a certain [build target](https://wxt.dev/guide/essentials/config/environment-variables.html#built-in-environment-variables) or [build tag](https://wxt.dev/guide/essentials/config/environment-variables.html#built-in-environment-variables) when creating the final bundle. Given the following build command:
```json title="package.json"
"scripts": {
  "build": "wxt build -b firefox --mode testing"
}
```
The following env files will be considered, ordered by priority:
* `.env.firefox`
* `.env.testing`
* `.env`
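The resolution order above can be sketched as a small helper (illustrative only - WXT resolves these files for you):

```typescript
// Env files considered for a build, highest priority first:
// browser-specific, then mode-specific, then the base .env.
function envFilesFor(browser: string, mode: string): string[] {
  return [`.env.${browser}`, `.env.${mode}`, ".env"];
}

console.log(envFilesFor("firefox", "testing"));
// [ '.env.firefox', '.env.testing', '.env' ]
```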
You shouldn't worry much about this, as TurboStarter comes with already configured build processes for all the major browsers.
## Build environment variables
To allow your extension to build properly on CI, you need to define your environment variables in your CI/CD system (e.g. [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/environment-variables)).
TurboStarter comes with a predefined GitHub Actions workflow used to build and submit your extension to the stores. It's located in the `.github/workflows/publish-extension.yml` file.
To correctly set up the build environment variables, define them under the `env` section and add them as [secrets](http://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions) to your repository.
```yaml title="publish-extension.yml"
...
jobs:
  extension:
    name: 🚀 Publish extension
    runs-on: ubuntu-latest
    environment: Production
    env:
      VITE_SITE_URL: ${{ secrets.SITE_URL }}
...
```
We'll go through the whole process of building and publishing the extension in the [publishing guide](/docs/extension/publishing/checklist).
## Secret keys
Secret keys and sensitive information must **never** be stored in the extension code.
This means you need to add the secret keys to the **web app, where the API is deployed.**
The browser extension should only communicate with the backend API, which is typically part of the web app. The web app is responsible for handling sensitive operations and storing secret keys securely.
[See web documentation for more details.](/docs/web/configuration/environment-variables#secret-keys)
This is not a TurboStarter-specific requirement, but a security best practice for any application. Ultimately, it's your choice.
---
url: /docs/extension/configuration/manifest
title: Manifest
description: Learn how to configure the manifest of your extension.
---
As a requirement from web stores, every extension must have a `manifest.json` file in its root directory that lists important information about the structure and behavior of that extension.
It's a JSON file that contains metadata about the extension, such as its name, version, and permissions.
You can read more about it in the [official documentation](https://developer.chrome.com/docs/extensions/reference/manifest).
## Where is the `manifest.json` file?
WXT **abstracts away** the manifest file. The framework generates the manifest under the hood based on your source files and configurations you export from your code, similar to how Next.js abstracts page routing and SSG with the file system and page components.
That way, you don't have to manually create the `manifest.json` file and worry about correctly setting all the fields.
Most of the common properties are taken from the `package.json` and `wxt.config.ts` files:
| Manifest Field | Abstractions |
| ------------------------ | ------------------------------------------------------------- |
| icons                    | Auto-generated from the `icon.png` in the `/assets` directory |
| action, browser\_actions | Popup window                                                  |
| options\_ui              | Options page                                                  |
| content\_scripts         | Content scripts                                               |
| background               | Background service worker                                     |
| version                  | Set by the `version` field in `package.json`                  |
| name                     | Set by the `name` field in `wxt.config.ts`                    |
| description              | Set by the `description` field in `wxt.config.ts`             |
| author                   | Set by the `author` field in `wxt.config.ts`                  |
| homepage\_url            | Set by the `homepage` field in `wxt.config.ts`                |
The WXT build process centralizes common metadata and automatically resolves static file references (such as the popup, background, and content scripts).
This enables you to focus on the metadata that matters, such as name, description, OAuth, and so on.
## Overriding manifest
Sometimes, you want to override the default manifest fields (e.g. because you need to add a new permission that is required for your extension to work).
You'll need to modify your project's `wxt.config.ts` like so:
```ts title="apps/extension/wxt.config.ts"
export default defineConfig({
  manifest: {
    permissions: ["activeTab"],
  },
});
```
Then, your settings will be merged with the settings auto-generated by WXT.
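Conceptually, the merge works like spreading your overrides over the generated defaults (a simplified sketch - WXT's actual merge is more involved, e.g. for array fields):

```typescript
// Simplified shallow merge: user overrides win over generated fields.
type Manifest = Record<string, unknown>;

function mergeManifest(generated: Manifest, overrides: Manifest): Manifest {
  return { ...generated, ...overrides };
}

const merged = mergeManifest(
  { name: "My Extension", version: "1.0.0" },
  { permissions: ["activeTab"] },
);

console.log(JSON.stringify(merged));
// {"name":"My Extension","version":"1.0.0","permissions":["activeTab"]}
```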
### Environment variables
You can use environment variables inside the manifest overrides:
```ts title="apps/extension/wxt.config.ts"
export default defineConfig({
  manifest: {
    browser_specific_settings: {
      gecko: {
        id: import.meta.env.VITE_FIREFOX_EXT_ID,
      },
    },
  },
});
```
If an environment variable cannot be found, the field is removed from the manifest entirely.
### Locales
TurboStarter extension supports [extension localization](https://developer.chrome.com/docs/extensions/reference/api/i18n) out-of-the-box. You can customize e.g. your extension's name and description based on the language of the user's browser.
Locales are defined in the `/public/_locales` directory. The directory should contain a `messages.json` file for each language you want to support (e.g. `/public/_locales/en/messages.json` and `/public/_locales/es/messages.json`).
By default, the first locale alphabetically available is used as default. However, you can specify a `default_locale` in your manifest like so:
```ts title="apps/extension/wxt.config.ts"
export default defineConfig({
  manifest: {
    default_locale: "en",
  },
});
```
To reference a locale string inside your manifest overrides, wrap the key in `__MSG_` and `__` (e.g. `__MSG_extensionName__`):
```ts title="apps/extension/wxt.config.ts"
export default defineConfig({
  manifest: {
    name: "__MSG_extensionName__",
    description: "__MSG_extensionDescription__",
  },
});
```
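For reference, the locale file that backs those `__MSG_...__` keys might look like this (the message values below are placeholders):

```json title="public/_locales/en/messages.json"
{
  "extensionName": {
    "message": "TurboStarter"
  },
  "extensionDescription": {
    "message": "Ship your browser extension faster."
  }
}
```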
Apart from this, we also configure [in-extension internationalization](/docs/extension/internationalization) out of the box to easily translate your components and views.
---
url: /docs/extension/customization/add-app
title: Adding apps
description: Learn how to add apps to your Turborepo workspace.
---
This is an **advanced topic** - you should only follow these instructions if you are sure you want to add a new app to your TurboStarter project within your monorepo and want to keep pulling updates from the TurboStarter repository.
In some ways - creating a new repository may be the easiest way to manage your application. However, if you want to keep your application within the monorepo and pull updates from the TurboStarter repository, you can follow these instructions.
To pull updates into a separate application outside of `extension` - we can use [git subtree](https://www.atlassian.com/git/tutorials/git-subtree).
Basically, we will create a subtree at `apps/extension` and create a new remote branch for the subtree. When we create a new application, we will pull the subtree into the new application. This allows us to keep it in sync with the `apps/extension` folder.
To add a new app to your TurboStarter project, you need to follow these steps:
## Create a subtree
First, we need to create a subtree for the `apps/extension` folder. We will create a branch named `extension-branch` and create a subtree for the `apps/extension` folder.
```bash
git subtree split --prefix=apps/extension --branch extension-branch
```
## Create a new app
Now, we can create a new application in the `apps` folder.
Let's say we want to create a new app `ai-chat` at `apps/ai-chat` with the same structure as the `apps/extension` folder (which acts as the template for all new apps).
```bash
git subtree add --prefix=apps/ai-chat origin extension-branch --squash
```
You should now be able to see the `apps/ai-chat` folder with the contents of the `apps/extension` folder.
## Update the app
When you want to update the new application, follow these steps:
### Pull the latest updates from the TurboStarter repository
The command below will update all the changes from the TurboStarter repository:
```bash
git pull upstream main
```
### Push the `extension-branch` updates
After you have pulled the updates from the TurboStarter repository, you can split the branch again and push the updates to the extension-branch:
```bash
git subtree split --prefix=apps/extension --branch extension-branch
```
Now, you can push the updates to the `extension-branch`:
```bash
git push origin extension-branch
```
### Pull the updates to the new application
Now, you can pull the updates to the new application:
```bash
git subtree pull --prefix=apps/ai-chat origin extension-branch --squash
```
That's it! You now have a new application in the monorepo 🎉
---
url: /docs/extension/customization/add-package
title: Adding packages
description: Learn how to add packages to your Turborepo workspace.
---
This is an **advanced topic** - you should only follow these instructions if you are sure you want to add a new package to your TurboStarter application, instead of adding a folder to your application (e.g. `apps/web`) or modifying existing packages under `packages`. You don't need to do this to add a new page or component to your application.
To add a new package to your TurboStarter application, you need to follow these steps:
## Generate a new package
First, enter the command below to create a new package in your TurboStarter application:
```bash
turbo gen package
```
Turborepo will ask you for the name of the package you want to create. Type it in and press enter.
If you don't want to add dependencies to your package, you can skip this step by pressing enter.
The command generates a new package under `packages` named `@workspace/<package-name>`. If you named it `example`, the package will be named `@workspace/example`.
## Export a module from your package
By default, the package exports a single module using the `index.ts` file. You can add more exports by creating new files in the package directory and exporting them from the `index.ts` file or creating export files in the package directory and adding them to the `exports` field in the `package.json` file.
### From `index.ts` file
The easiest way to export a module from a package is to create a new file in the package directory and export it from the `index.ts` file.
```ts title="packages/example/src/module.ts"
export function example() {
  return "example";
}
```
Then, export the module from the `index.ts` file.
```ts title="packages/example/src/index.ts"
export * from "./module";
```
### From `exports` field in `package.json`
**This can be very useful for tree-shaking.** Assuming you have a file named `module.ts` in the package directory, you can export it by adding it to the `exports` field in the `package.json` file.
```json title="packages/example/package.json"
{
  "exports": {
    ".": "./src/index.ts",
    "./module": "./src/module.ts"
  }
}
```
**When to do this?**
1. When exporting two modules that don't share dependencies, to ensure better tree-shaking. For example, when your exports contain both client and server modules.
2. For better organization of your package.
For example, create two exports `client` and `server` in the package directory and add them to the `exports` field in the `package.json` file.
```json title="packages/example/package.json"
{
  "exports": {
    ".": "./src/index.ts",
    "./client": "./src/client.ts",
    "./server": "./src/server.ts"
  }
}
```
1. The `client` module can be imported using `import { client } from '@workspace/example/client'`
2. The `server` module can be imported using `import { server } from '@workspace/example/server'`
## Use the package in your extension
You can now use the package in your extension by importing it using the package name:
```ts title="app/popup/index.tsx"
import { example } from "@workspace/example";
console.log(example());
```
Et voilà! You have successfully added a new package to your TurboStarter extension. 🎉
---
url: /docs/extension/customization/components
title: Components
description: Manage and customize your extension components.
---
For the components part, we're using [shadcn/ui](https://ui.shadcn.com) for atomic, accessible and highly customizable components.
shadcn/ui is a powerful tool that allows you to generate pre-designed
components with a single command. It's built with Tailwind CSS and Base UI,
and it's highly customizable.
TurboStarter defines two packages that are responsible for the UI part of your app:
* `@workspace/ui` - shared styles, [themes](/docs/web/customization/styling#themes) and assets (e.g. icons)
* `@workspace/ui-web` - pre-built UI web components, ready to use in your app
## Adding a new component
There are basically two ways to add a new component:
TurboStarter is fully compatible with the [shadcn CLI](https://ui.shadcn.com/docs/cli), so you can generate new components with a single command.
Run the following command from the **root** of your project:
```bash
pnpm --filter @workspace/ui-web ui:add
```
This will launch an interactive command-line interface to guide you through the process of adding a new component where you can pick which component you want to add.
```bash
Which components would you like to add? > Space to select. A to toggle all.
Enter to submit.
◯ accordion
◯ alert
◯ alert-dialog
◯ aspect-ratio
◯ avatar
◯ badge
◯ button
◯ calendar
◯ card
◯ checkbox
```
Newly created components will appear in the `packages/ui/web/src` directory.
You can always copy-paste a component from the [shadcn/ui](https://ui.shadcn.com/docs/components) website and modify it to your needs.
This is possible because the components are headless and, in most cases, don't need any additional dependencies.
Copy the code from the website, create a new file in the `packages/ui/web/src` directory, and paste the code into it.
Keep in mind that you should always try to keep shared components as atomic as possible. This will make it easier to reuse them and to build specific views by composition.
E.g. include components like `Button`, `Input`, `Card`, and `Dialog` in the shared package, but keep specific components like `LoginForm` in your app directory.
## Using components
Each component is a standalone entity which has a separate export from the package. It helps to keep things modular, avoid unnecessary dependencies and make tree-shaking possible.
To import a component from the UI package, use the following syntax:
```tsx title="components/my-component.tsx"
// [!code word:card]
import {
  Card,
  CardContent,
  CardHeader,
  CardFooter,
  CardTitle,
  CardDescription,
} from "@workspace/ui-web/card";
```
Then you can use it to build a component specific to your app:
```tsx title="components/my-component.tsx"
export function MyComponent() {
  return (
    <Card>
      <CardHeader>
        <CardTitle>My Component</CardTitle>
      </CardHeader>
      <CardContent>My Component Content</CardContent>
    </Card>
  );
}
```
We recommend using [v0](https://v0.dev) to generate layouts for your app. It's a powerful tool that allows you to generate layouts from natural language instructions.
Of course, **it won't replace a designer**, but it can be a good starting point for your layout.
---
url: /docs/extension/customization/styling
title: Styling
description: Get started with styling your extension.
---
To build the extension interface, TurboStarter comes with [Tailwind CSS](https://tailwindcss.com/) and [Base UI](https://base-ui.com) pre-configured.
The combination of Tailwind CSS and Base UI gives you ready-to-use, accessible UI components that can be fully customized to match your brand's design.
## Tailwind configuration
In the `packages/ui/shared/src/styles` directory, you will find shared CSS files with Tailwind CSS configuration. To change global styles, you can edit the files in this folder.
Here is an example of a shared CSS file that includes the Tailwind CSS configuration:
```css title="packages/ui/shared/src/styles/globals.css"
@import "tailwindcss";
@import "./themes.css";
@custom-variant dark (&:is(.dark *));
:root {
  --radius: 0.65rem;
}

@theme inline {
  --color-background: var(--background);
  --color-foreground: var(--foreground);
  --color-card: var(--card);
  --color-card-foreground: var(--card-foreground);
  --color-popover: var(--popover);
  --color-popover-foreground: var(--popover-foreground);
  --color-primary: var(--primary);
  --color-primary-foreground: var(--primary-foreground);
  --color-secondary: var(--secondary);
  --color-secondary-foreground: var(--secondary-foreground);
  --color-muted: var(--muted);
  --color-muted-foreground: var(--muted-foreground);
  ...
}
```
For colors, we rely strictly on [CSS Variables](https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_custom_properties) in [OKLCH](https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/oklch) format to allow for easy theme management without the need for any JavaScript.
Also, each app has its own `globals.css` file, which extends the shared config and allows you to override the global styles.
Here is an example of an extension's `globals.css` file:
```css title="apps/extension/src/assets/styles/globals.css"
@import url("https://fonts.googleapis.com/css2?family=Geist+Mono:wght@100..900&family=Geist:wght@100..900&display=swap");
@import "@workspace/ui-web/globals.css";
@theme {
  --font-sans: "Geist", sans-serif;
  --font-mono: "Geist Mono", monospace;
}
```
This way, we maintain a separation of concerns and a clear structure for the Tailwind CSS configuration.
## Themes
TurboStarter comes with **9+** predefined themes, which you can use to quickly change the look and feel of your app.
They're defined in the `packages/ui/shared/src/styles/themes` directory. Each theme is a set of variables that can be overridden:
```ts title="packages/ui/shared/src/styles/themes/colors/orange.ts"
export const orange = {
  light: {
    background: [1, 0, 0],
    foreground: [0.141, 0.005, 285.823],
    card: [1, 0, 0],
    "card-foreground": [0.141, 0.005, 285.823],
    ...
  },
} satisfies ThemeColors;
```
Each variable is stored as an [OKLCH](https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/oklch) array, which is then converted to a CSS variable at build time (by our custom build script). That way we ensure full type safety and can reuse themes across different parts of our apps (e.g. use the same theme in emails).
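As an illustration, the conversion step can be sketched like this (a simplified stand-in for the kit's actual build script):

```typescript
// Convert OKLCH triplets into CSS custom property declarations.
type ThemeColors = Record<string, [number, number, number]>;

function toCssVariables(colors: ThemeColors): string {
  return Object.entries(colors)
    .map(([name, [l, c, h]]) => `--${name}: oklch(${l} ${c} ${h});`)
    .join("\n");
}

console.log(
  toCssVariables({
    background: [1, 0, 0],
    foreground: [0.141, 0.005, 285.823],
  }),
);
// --background: oklch(1 0 0);
// --foreground: oklch(0.141 0.005 285.823);
```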
Feel free to add your own themes or override the existing ones to match your brand's identity.
To apply a theme to your app, you can use the `data-theme` attribute on your layout wrapper for each part of the extension:
```tsx title="modules/common/layout/layout.tsx"
import { StorageKey, useStorage } from "~/lib/storage";
export const Layout = ({ children }: { children: React.ReactNode }) => {
  const { data } = useStorage(StorageKey.THEME);

  return (
    // Stored theme shape assumed: { mode, color }
    <div id="main" data-theme={data?.color}>
      {children}
    </div>
  );
};
```
In TurboStarter, we're using [Storage API](/docs/extension/structure/storage) to persist the user's theme selection and then apply it to the `div#main` element.
## Dark mode
The starter kit comes with a built-in dark mode support.
Each theme has corresponding dark mode variables, which are used to switch the theme to its dark counterpart.
```ts title="packages/ui/shared/src/styles/themes/colors/orange.ts"
export const orange = {
  light: {},
  dark: {
    background: [0.141, 0.005, 285.823],
    foreground: [0.985, 0, 0],
    card: [0.21, 0.006, 285.885],
    "card-foreground": [0.985, 0, 0],
    ...
  },
} satisfies ThemeColors;
```
Because the dark variant is defined to use a class (`@custom-variant dark (&:is(.dark *))`) in the shared Tailwind configuration, we need to add the `dark` class to the root element to apply dark mode styles.
As with the theme color, we use the [Storage API](/docs/extension/structure/storage) to persist the user's dark mode selection and apply the correct class name to the root `div` element:
```tsx title="modules/common/layout/layout.tsx"
import { StorageKey, useStorage } from "~/lib/storage";
export const Layout = ({ children }: { children: React.ReactNode }) => {
  const { data } = useStorage(StorageKey.THEME);

  return (
    // Stored theme shape assumed: { mode, color }
    <div id="main" className={data?.mode === "dark" ? "dark" : undefined}>
      {children}
    </div>
  );
};
```
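Since the default mode can be `system` (see `VITE_THEME_MODE` above), the stored preference has to be resolved against the OS setting before the class is applied. A minimal sketch, assuming a `light`/`dark`/`system` preference:

```typescript
// Resolve the effective mode: an explicit choice wins, "system"
// falls back to the OS-level prefers-color-scheme value.
type ThemeMode = "light" | "dark" | "system";

function resolveMode(stored: ThemeMode, prefersDark: boolean): "light" | "dark" {
  if (stored === "system") {
    return prefersDark ? "dark" : "light";
  }
  return stored;
}

console.log(resolveMode("system", true)); // "dark"
console.log(resolveMode("light", true)); // "light"
```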
You can also define the default theme mode and color in the [app configuration](/docs/extension/configuration/app).
---
url: /docs/extension/database
title: Database
description: Get started with the database.
---
To enable communication between your WXT extension and the server in a production environment, the web application with the Hono API must be deployed first.
As browser extensions use only client-side code, **there's no way to interact with the database directly**.
Also, you should avoid any workarounds to interact with the database directly, because it can lead to leaking your database credentials and other security issues.
## Recommended approach
You can safely use the [API](/docs/extension/api/overview) and invoke procedures which will run queries on the database.
To do this, you need to set up the database on the [web, server side](/docs/web/database/overview) and then use the [API client](/docs/extension/api/client) to interact with it.
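In practice, the extension only ever builds HTTP requests against the deployed web API; it never opens a database connection. A tiny sketch (the endpoint path is hypothetical - see the API client docs for the real setup):

```typescript
// Build a request URL against the deployed web API
// instead of connecting to the database directly.
function apiUrl(baseUrl: string, procedure: string): string {
  return new URL(`/api/${procedure}`, baseUrl).toString();
}

console.log(apiUrl("http://localhost:3000", "posts/list"));
// http://localhost:3000/api/posts/list
```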
Learn more about its configuration in the web part of the docs, especially in the following sections:
---
url: /docs/extension/extras
title: Extras
description: See what you get together with the code.
---
## Tips and Tricks
In many places, next to the code you will find some marketing tips, design suggestions, and potential risks. This is to help you build a better product and avoid common pitfalls.
```tsx title="Hero.tsx"
return (
  <h1>
    {/* 💡 Use something that user can visualize e.g.
      "Ship your startup while on the toilet" */}
    Best startup in the world
  </h1>
);
```
### Submission tips
When it comes to the mobile app and browser extension, you must submit your product for review by Apple, Google, etc. We have some tips to make sure your submission goes smoothly.
```json title="app.json"
{
  "ios": {
    "infoPlist": {
      /* 🍎 add descriptive justification of using this permission on iOS */
      "NSCameraUsageDescription": "This app uses the camera to scan barcodes on event tickets."
    }
  }
}
```
We also provide tips on how to make your store listings better:
```json title="package.json"
{
  "manifest": {
    /* 💡 Use localized messages to get more visibility in web stores */
    "name": "__MSG_extensionName__",
    "default_locale": "en"
  }
}
```
## 25+ SaaS Ideas
Not sure what to build? We have a list of **25+** SaaS ideas that you can use to get started 🔥
Grouped by category, these ideas are a great way to get inspired and start building your next project.
Including design, copies, marketing tips and potential risks, this list is a great resource for anyone looking to build a SaaS product.
## AI rules, skills, subagents and commands
TurboStarter ships with a set of custom AI rules, skills, subagents, and commands you can use in popular AI editors and tools. They help the AI understand the codebase conventions and generate changes faster and more reliably.
To learn how to set them up and use them effectively, see the [AI-assisted development docs](/docs/web/installation/ai-development).
## Discord community
We have a Discord community where you can ask questions and share your projects. It's a great place to get help and meet other developers. Check more details at [/discord](/discord).
---
url: /docs/extension/faq
title: FAQ
description: Find answers to common technical questions.
---
## Why isn't everything hidden and configured with one BIG config file?
TurboStarter intentionally exposes the underlying code rather than hiding it behind configuration files (like some starters do). This design choice follows our **you own your code** philosophy, giving you full control and flexibility over your codebase.
While a single config file might seem simpler initially, it often becomes restrictive when you need to customize functionality beyond what the config allows. With direct access to the code, you can modify any part of the system to match your specific requirements.
## I don't know some technology! Should I buy TurboStarter?
You should be prepared for a learning curve or consider learning it first. However, TurboStarter will still work for you if you're willing to learn.
Even without knowing some technologies, you can still use the rest of the features.
## I don't need mobile app or browser extension, what should I do?
You can simply ignore the mobile app and browser extension parts of the project. You can remove the `apps/mobile` and `apps/extension` directories from the project.
The modular nature of TurboStarter allows you to remove parts of the project that you don't need without affecting the rest of the stack.
## I want to use a different provider for X
Sure! TurboStarter is designed to be modular, so configuring a new provider (e.g. for emails, billing, or any other service) is straightforward. You just need to make sure your configuration is compatible with the common interface so it can be plugged into the codebase.
## Will you add more packages in the future?
Yes, we will keep updating TurboStarter with new packages and features. This kit is designed to be modular, allowing for new features and packages to be added without interfering with your existing code. You can always [update your project](/docs/web/installation/update) to the latest version.
## Can I use this kit for a non-SaaS project?
This kit is mainly designed for SaaS projects. If you're building something other than a SaaS, it might include features you don't need. You can still use it for non-SaaS projects, but you may need to remove or modify features that are specific to SaaS use cases.
## Can I use personal accounts only?
Yes! You can disable team accounts and have personal accounts only by setting a feature flag.
## Does it set up the production instance for me?
No, TurboStarter does not set up the production instance for you. This includes setting up databases, Stripe, or any other services you need. TurboStarter does not have access to your Stripe or Resend accounts, so setup on your end is required. TurboStarter provides the codebase and documentation to help you set up your SaaS project.
## Does the starter include Solito?
No. Solito will not be included in this repo. It is a great tool if you want to share code between your Next.js and Expo app. However, the main purpose of this repo is not the integration between Next.js and Expo — it's the code splitting of your SaaS platforms into a monorepo. You can utilize the monorepo with multiple apps, and it can be any app such as Vite, Electron, etc.
Integrating Solito into this repo isn't hard, and there are a few [official templates](https://github.com/nandorojo/solito/tree/master/example-monorepos) by the creators of Solito that you can use as a reference.
## Does this pattern leak backend code to my client applications?
No, it does not. The `api` package should only be a production dependency in the Next.js application where it's served. The Expo app, browser extension, and all other apps you may add in the future should only add the `api` package as a dev dependency. This lets you have full type safety in your client applications while keeping your backend code safe.
If you need to share runtime code between the client and server, you can create a separate `shared` package for this and import it on both sides.
## How do I get support if I encounter issues?
For support, you can:
1. Visit our [Discord](https://discord.gg/KjpK2uk3JP)
2. Contact us via support email ([hello@turbostarter.dev](mailto:hello@turbostarter.dev))
## Are there any example projects or demos?
Yes - feel free to check out our demo app at [demo.turbostarter.dev](https://demo.turbostarter.dev). Also, you can get inspired by projects built by our customers - take a look at [Showcase](/#showcase).
## How do I deploy my application?
Please check the [production checklist](/docs/web/deployment/checklist) for more information.
## How do I update my project when a new version of the boilerplate is released?
Please read the [documentation for updating your TurboStarter code](/docs/web/installation/update).
## Can I use the React package X with this kit?
Yes, you can use any React package with this kit. The kit is based on React, so you are generally only constrained by the underlying technologies and not by the kit itself. Since you own and can edit all the code, you can adapt the kit to your needs. However, if there are limitations with the underlying technology, you might need to work around them.
## Can I integrate TurboStarter into an existing project?
TurboStarter is a full-stack starter intended to be used as the foundation of your app. You can copy individual modules or patterns into an existing codebase, but retrofitting the entire starter into a mature project is typically not recommended and is not officially supported. If you choose to copy parts, prefer isolating boundaries (e.g., `packages/` modules) and aligning interfaces first.
## Where can I deploy my application?
TurboStarter targets modern Node.js/Next.js runtimes. You can deploy to providers that support these environments, such as [Vercel](/docs/web/deployment/vercel), [Railway](/docs/web/deployment/railway), [Render](/docs/web/deployment/render), [Fly](/docs/web/deployment/fly), or [Netlify](/docs/web/deployment/netlify) - following their Next.js guidance. Review our [production checklist](/docs/web/deployment/checklist) before going live.
## Can I easily swap providers (billing, email, etc.)?
Yes. The starter organizes integrations behind clear interfaces so you can replace providers (e.g., billing or email) with minimal surface changes. Keep your implementation behind a module boundary and adapt to the existing types to avoid ripple effects.
---
url: /docs/extension
title: Introduction
description: Get started with TurboStarter extension kit.
---
Welcome to the TurboStarter documentation. This is your starting point for learning about the starter kit, its structure, features, and how to use it for your app development.
## What is TurboStarter?
TurboStarter is a fullstack starter kit that helps you build scalable and production-ready web apps, mobile apps, and browser extensions in minutes.
Looking to bootstrap your project quickly? Check out our [TurboStarter CLI guide](/blog/the-only-turbo-cli-you-need-to-start-your-next-project-in-seconds) to get started in seconds.
## Demo apps
TurboStarter provides a suite of live demo applications you can try instantly - right in your browser, on your phone, or via browser extensions. Try them live by clicking the buttons below.
## Principles
TurboStarter is built with the following principles:
* **As simple as possible** - It should be easy to understand, easy to use, and strongly avoid overengineering things.
* **As few dependencies as possible** - It should have as few dependencies as possible to allow you to take full control over every part of the project.
* **As performant as possible** - It should be fast and light without any unnecessary overhead.
## Features
Before diving into the technical details, let's review the features TurboStarter provides.
### Multi-platform development
* [Web](/docs/web/stack): Build web apps with React, Next.js, and Tailwind CSS.
* [Mobile](/docs/mobile/stack): Build mobile apps with React Native and Expo.
* [Browser extension](/docs/extension/stack): Build browser extensions with React and WXT.
If you're specifically interested in AI-related features (such as chatbots, agents, image generation, etc.), check out our dedicated [TurboStarter AI documentation](/ai/docs), which includes specialized resources for building AI-powered applications.
Most features are available on all platforms. You can use the **same codebase** to build web, mobile, and browser extension apps.
### Authentication
### Organizations/teams
### Billing
### Database
### API
### Admin
### AI
Seamless integration of OpenAI, Anthropic, Groq, Mistral, and Gemini. For more advanced AI features, check out [TurboStarter AI](/ai/docs).
### Internationalization
### Emails
### Landing page
### Marketing
### Storage
### CMS
### Theming
### Analytics
### Monitoring
### Deployment
### Testing
## Use like LEGO blocks
The biggest advantage of TurboStarter is its modularity. You can use the entire stack or just the parts you need. It's like LEGO blocks - you can build anything you want with it.
If you don't need a specific feature, feel free to remove it without affecting the rest of the stack.
This approach allows for:
* **Easy feature integration** - plug new features into the kit with minimal changes.
* **Simplified maintenance** - keep the codebase clean and maintainable.
* **Core feature separation** - distinguish between core features and custom features.
* **Additional modules** - easily add modules like billing, CMS, monitoring, logger, mailer, and more.
## Scope of this documentation
While building a SaaS application involves many moving parts, this documentation focuses specifically on TurboStarter. For in-depth information on the underlying technologies, please refer to their respective official documentation.
This documentation will guide you through configuring, running, and deploying the kit, and will provide helpful links to the official documentation of technologies where necessary.
## Enjoy!
This documentation is designed to be easy to follow and understand. If you have any questions or need help, feel free to reach out to us at [hello@turbostarter.dev](mailto:hello@turbostarter.dev).
Explore new features, build amazing apps, and have fun! 🚀
---
url: /docs/extension/installation/ai-development
title: AI-assisted development
description: Configure AI coding assistants like Cursor, Claude Code, Codex, or Antigravity to build your SaaS faster.
---
TurboStarter includes pre-configured rules, skills, subagents, and commands for AI coding assistants. These help AI understand your codebase, follow project conventions, and produce consistent, high-quality code.
Everything works out-of-the-box with all major AI tools like [Cursor](https://cursor.com), [Claude Code](https://claude.ai/code), [Codex](https://openai.com/codex), [Antigravity](https://antigravity.dev), and many more. Just open the project in your AI tool and start coding with the help of LLMs.
## Structure
The codebase organizes AI-specific configuration in the following structure:
The `.agents/` directory contains shared skills, commands, and agents that ship with TurboStarter. The tool-specific folders (e.g., `.cursor/`, `.claude/`, `.github/`) are [symlinked](https://en.wikipedia.org/wiki/Symbolic_link) to the `.agents/` directory, allowing you to add your own skills, commands, and agents to all tools at once while also customizing them individually.
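As an illustration of how such links work (the paths here are a hypothetical sketch, not the exact repo layout):

```bash
# Create a shared skills directory and symlink it into a tool-specific folder
mkdir -p demo/.agents/skills demo/.cursor
ln -s ../.agents/skills demo/.cursor/skills

# The tool-specific path resolves back to the shared directory
readlink demo/.cursor/skills
```

Edits made under either path land in the same files, which is what keeps all tools in sync.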
## Rules
Rules provide persistent instructions that LLMs can read when they need to know more about specific parts of your project. They define code conventions, project structure, and workflow guidelines.
### AGENTS.md
The `AGENTS.md` file at the project root is the primary rules file. It uses a standardized format recognized by [most](https://agents.md) AI coding tools.
```md title="AGENTS.md"
## Agent rules
**DO:**
- Read existing files before editing; understand imports and structure first
- Keep diffs minimal and scoped to the request
...
**DON'T:**
- Commit, push, or modify git state unless explicitly asked
- Run destructive commands (`reset --hard`, force-push) without permission
...
## Code conventions
- TypeScript: functional, declarative; no classes
- File layout: exported component → subcomponents → helpers → types
```
Rules should be concise and actionable. Include only information the AI **cannot infer from code alone**, such as:
* Bash commands and common workflows
* Code style rules that differ from defaults
* Architectural decisions specific to your project
* Common gotchas or non-obvious behaviors
Keep rules short. Overly long files cause AI to ignore important instructions. If you notice the AI not following a rule, the file might be too verbose.
### CLAUDE.md
The `CLAUDE.md` file provides compatibility with Claude-specific tools. In TurboStarter, it simply references the main rules file:
```md title="CLAUDE.md"
@AGENTS.md
```
This ensures consistent behavior across all AI tools without duplicating content.
You can also nest AGENTS.md files in subdirectories to create more granular rules for specific parts of your project.
For example, you can create an `AGENTS.md` file in the `apps/web/` directory to add rules for the web application, or an `AGENTS.md` file in the `packages/api/` directory to add specific rules for the API.
The right approach depends on your project's complexity and where you need more targeted AI assistance.
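For instance, a package-level rules file could look like this (content is illustrative):

```md title="apps/web/AGENTS.md"
# Web app rules

- Use Next.js App Router conventions for all routes
- Fetch data in server components; keep client components small
- Reuse UI primitives from `@workspace/ui-web` instead of adding new ones
```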
Most providers allow you to add tool-specific rules. For example, Cursor rules go in the `.cursor/` directory, while Claude rules go in the `.claude/` directory.
If you primarily use one AI tool in your workflow, consider creating tool-specific rules rather than relying solely on the shared `AGENTS.md` file.
## Skills
Skills are modular capabilities that extend AI functionality with domain-specific knowledge. They package instructions, workflows, and reference materials that AI loads on-demand when relevant.
### How skills work
Skills are organized as directories containing a `SKILL.md` file and optionally a `references/` directory with additional documentation:
Each skill includes YAML frontmatter that describes when to use it:
```md title="SKILL.md"
---
name: better-auth-best-practices
description: Skill for integrating Better Auth - the comprehensive TypeScript authentication framework.
---
# Better Auth Integration Guide
**Always consult [better-auth.com/docs](https://better-auth.com/docs) for code examples and latest API.**
...
```
AI tools read the `description` field to determine when to apply the skill automatically. When triggered, the full skill content loads into context.
### Included skills
TurboStarter ships with several pre-configured skills covering common development scenarios:
| Skill | Description |
| ----------------------------- | ---------------------------------------------- |
| `turborepo` | Turborepo best practices and configuration |
| `better-auth-best-practices` | Auth integration patterns and API reference |
| `building-native-ui` | Mobile UI patterns with Expo and React Native |
| `native-data-fetching` | Network requests, caching, and offline support |
| `vercel-react-best-practices` | React and Next.js performance optimization |
| `vercel-composition-patterns` | Component architecture and API design |
| `web-design-guidelines` | UI review and accessibility compliance |
| `find-skills` | Discover and install additional skills |
### Installing skills
To install additional skills, we recommend using [Skills CLI](https://skills.sh), which allows you to easily install skills from the [open skills ecosystem](https://skills.sh). To install a skill, run:
```bash
npx skills add
```
Browse available skills at [skills.sh](https://skills.sh).
### Creating custom skills
If you have project-specific workflows, you can create your own skills:
Create a directory in `.agents/skills/`:
```bash
mkdir -p .agents/skills/my-custom-skill
```
Add a `SKILL.md` file with frontmatter:
```md title=".agents/skills/my-custom-skill/SKILL.md"
---
name: my-custom-skill
description: Handles X workflow. Use when working with Y or when user asks about Z.
---
# My Custom Skill
## Instructions
1. First, check the existing patterns in `packages/api/`
2. Follow the established naming conventions
3. ...
```
The skill will be automatically available in your AI tool. Test by asking about the topic described in the `description` field.
## Subagents
Subagents are specialized AI assistants that handle specific types of tasks in isolation. They operate in their own context window, preventing long research or review tasks from cluttering your main conversation.
### Included subagents
TurboStarter includes a code reviewer subagent:
```md title=".agents/agents/code-reviewer.md"
---
name: code-reviewer
description: Reviews code for quality, conventions, and potential issues.
model: inherit
readonly: true
---
You are a senior code reviewer for the TurboStarter project...
```
The subagent runs in read-only mode and checks for:
* TypeScript best practices (no `any`, explicit types)
* Component conventions (named exports, props interface)
* Architecture patterns (shared logic in packages)
* Security issues (no hardcoded secrets, proper auth)
### Using subagents
Invoke subagents explicitly in your prompts:
```txt
Use the code-reviewer to review the changes in src/modules/auth/
```
Or let the AI delegate automatically based on the task.
### Creating custom subagents
Add subagent definitions to `.agents/agents/`:
```md title=".agents/agents/security-auditor.md"
---
name: security-auditor
description: Security specialist. Use when implementing auth, payments, or handling sensitive data.
model: inherit
readonly: true
---
You are a security expert auditing code for vulnerabilities.
When invoked:
1. Identify security-sensitive code paths
2. Check for common vulnerabilities (injection, XSS, auth bypass)
3. Verify secrets are not hardcoded
4. Review input validation and sanitization
Report findings by severity: Critical, High, Medium, Low.
```
## Commands
Commands are reusable workflows triggered with a `/` prefix in chat. They standardize common tasks and encode institutional knowledge.
### Included commands
TurboStarter includes a feature setup command:
```md title=".agents/commands/setup-new-feature.md"
# Setup New Feature
Set up a new feature in the TurboStarter.dev website following project conventions.
## Before starting
1. **Clarify scope**: What part of the site needs this feature?
2. **Check existing code**: Look in `packages/*` for reusable logic
3. **Identify shared vs app-specific**: Shared logic goes in `packages/*`
## Project structure
...
```
### Using commands
Type `/` in chat to see available commands:
```txt
/setup-new-feature
```
Follow the guided workflow to scaffold features consistently.
### Creating custom commands
Add command definitions to `.agents/commands/`:
```md title=".agents/commands/fix-issue.md"
# Fix GitHub Issue
Fix a GitHub issue following project conventions.
## Steps
1. Use `gh issue view ` to get issue details
2. Search the codebase for relevant files
3. Implement the fix following existing patterns
4. Write tests to verify the fix
5. Run `pnpm typecheck` and `pnpm lint`
6. Create a descriptive commit message
7. Push and create a PR
```
## Model Context Protocol (MCP)
MCP enables AI tools to connect to external services like databases, APIs, and third-party tools. This allows AI to access real data and perform actions beyond code generation.
### Common MCP integrations
| Service | Use case |
| ---------------------------------------------------------------------------------------------- | -------------------------------------- |
| [GitHub](https://github.com/github/github-mcp-server) | Create issues, open PRs, read comments |
| [Database](https://github.com/crystaldba/postgres-mcp) | Query schemas, inspect data |
| [Figma](https://help.figma.com/hc/en-us/articles/32132100833559-Guide-to-the-Figma-MCP-server) | Import designs for implementation |
| [Linear](https://linear.app/docs/mcp)/[Jira](https://github.com/sooperset/mcp-atlassian) | Read tickets, update status |
| [Browser](https://browsermcp.io/) | Test UI, take screenshots |
For a full list of available MCP servers, see the [Cursor documentation](https://cursor.com/docs/context/mcp/directory) or the [MCP directory](https://www.pulsemcp.com/servers/).
### Setting up MCP
MCP configuration varies by tool. Generally, you create a configuration file that specifies server connections:
```json title="mcp.json"
{
"mcpServers": {
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_TOKEN": "${env:GITHUB_TOKEN}"
}
}
}
}
```
Consult your AI tool's documentation for specific setup instructions.
## Documentation
Like the rest of TurboStarter, the documentation is optimized for AI-assisted workflows. You can chat with it and get answers about specific features using the **most up-to-date** information.
### `llms.txt`
You can access the entire TurboStarter documentation in Markdown format at [/llms.txt](/llms.txt). This allows you to ask any LLM (assuming it has a large enough context window) questions about TurboStarter using the most up-to-date documentation.
#### Example usage
For example, to prompt an LLM with questions about TurboStarter:
1. Copy the documentation contents from [/llms.txt](/llms.txt)
2. Use the following prompt format:
```txt
Documentation:
{paste documentation here}
---
Based on the above documentation, answer the following:
{your question}
```
This works with any AI tool that accepts large context, regardless of whether it has native integration with your editor.
### Markdown format
Each documentation page is also available in raw Markdown format. You can copy the contents using the *Copy Markdown* button in the page header.
You can also access it directly by adding the `.mdx` extension to the specific documentation page. For example, to access this page, visit [/docs/extension/installation/ai-development.mdx](/docs/extension/installation/ai-development.mdx).
### Open in ...
To make chatting with TurboStarter documentation even more convenient, each page includes an *Open in...* button in the header that opens the documentation directly in your preferred chatbot.
For example, opening the documentation page in [ChatGPT](https://chatgpt.com) will create a new chat with the documentation automatically attached as a context:

## Best practices
Following best practices helps you get the most out of AI-assisted development. Review the tips below and share your experiences on our [Discord](https://discord.gg/KjpK2uk3JP) server.
### Plan before coding
The most impactful change you can make is planning before implementation. Planning forces clear thinking about what you're building and gives the AI concrete goals to work toward.
For complex tasks, use this workflow:
1. **Explore**: Have the AI read files and understand the existing architecture
2. **Plan**: Ask for a detailed implementation plan with file paths and code references
3. **Implement**: Execute the plan, verifying against each step
4. **Commit**: Review changes and commit with descriptive messages
Not every task needs a detailed plan. For quick changes or familiar patterns, jumping straight to implementation is fine.
### Provide verification criteria
AI performs dramatically better when it can verify its own work. Include tests, screenshots, or expected outputs:
```txt
// Instead of:
"implement email validation"
// Use:
"write a validateEmail function. test cases: user@example.com → true,
invalid → false, user@.com → false. run tests after implementing."
```
Without clear success criteria, the AI might produce something that looks right but doesn't actually work. Verification can be a test suite, a linter, or a command that checks output.
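For instance, the prompt above might yield a sketch like this (the regex is a deliberately simplified illustration, not a full RFC 5322 validator):

```ts
// Simplified email check: non-empty local part, an "@", then dot-separated
// domain labels that don't start with a dot. Illustrative only.
const validateEmail = (email: string): boolean =>
  /^[^\s@]+@[^\s@.]+(\.[^\s@.]+)+$/.test(email);

console.log(validateEmail("user@example.com")); // true
console.log(validateEmail("invalid")); // false
console.log(validateEmail("user@.com")); // false
```

Because the test cases were stated up front, both you and the AI can check the result mechanically instead of eyeballing it.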
### Write specific prompts
The more precise your instructions, the fewer corrections you'll need. Reference specific files, mention constraints, and point to example patterns:
| Strategy | Before | After |
| ---------------------- | ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Scope the task** | "add tests for auth" | "write a test for `auth.ts` covering the logout edge case, using patterns in `__tests__/` and avoiding mocks" |
| **Reference patterns** | "add a calendar widget" | "look at how existing widgets are implemented. `HotDogWidget.tsx` is a good example. follow the pattern to implement a calendar widget" |
| **Describe symptoms** | "fix the login bug" | "users report login fails after session timeout. check the auth flow in `src/auth/`, especially token refresh. write a failing test, then fix it" |
### Use absolute rules
When writing rules, be direct. Absolute rules beat suggestions. "Always verify ownership with `userId` before database writes" works. "Consider checking ownership" gets ignored.
Structure rules with clear "MUST do" and "MUST NOT do" sections:
```md
## MUST DO
- Verify ownership before ALL database writes
- Run `pnpm typecheck` after every implementation
- Use `@workspace/ui` components - never install shadcn directly
## MUST NOT DO
- Never use `any` type - fix the types instead
- Never store secrets in code - use environment variables
- Never create new UI components if one exists in @workspace/ui
```
### Use rules as a router
Tell AI where and how to find things. This prevents hallucinated file paths and inconsistent patterns:
```md
## Where to find things
- Database schemas: `packages/db/src/schema/`
- Server action patterns: `apps/web/app/api/`
- UI components: `packages/ui/src/`
- Existing features to reference: `apps/web/app/`
```
### Course-correct early
Stop AI mid-action if it goes off track. Most tools support an interrupt key (usually `Esc`). Redirect early rather than waiting for a complete but wrong implementation.
If you've corrected the AI more than twice on the same issue in one session, the context is cluttered with failed approaches. Start fresh with a more specific prompt that incorporates what you learned.
### Manage context aggressively
Long sessions accumulate irrelevant context that degrades AI performance. Clear context between unrelated tasks or start fresh sessions for new features.
**Start a new conversation when:**
* You're moving to a different task or feature
* The AI seems confused or keeps making the same mistakes
* You've finished one logical unit of work
**Continue the conversation when:**
* You're iterating on the same feature
* The AI needs context from earlier in the discussion
* You're debugging something it just built
### Use subagents for research
When exploring unfamiliar code, delegate to subagents. They run in separate context windows and report back summaries, keeping your main conversation clean for implementation.
This is especially useful for:
* Codebase exploration that might read many files
* Code review (fresh context prevents bias toward code just written)
* Security audits and performance analysis
### Review AI-generated code carefully
AI-generated code can look right while being subtly wrong. Read the diffs and review carefully. The faster the AI works, the more important your review process becomes.
For significant changes, consider:
* Running a dedicated review pass after implementation
* Asking the AI to generate architecture diagrams
* Using a separate AI session to review the changes (fresh context)
### Add business domain context
Generic rules produce generic code. Add your application's domain to help AI understand context:
```md
## Business Domain
This application is a project management tool for software teams.
### Key Entities
- **Projects**: User-created workspaces containing tasks
- **Tasks**: Work items with status, assignee, and due date
### Business Rules
- Projects belong to organizations (use organizationId for queries)
- Tasks require project membership to view (check via RBAC)
- Deleted projects cascade-delete all tasks
```
## Troubleshooting
Common issues when using AI coding assistants and how to resolve them:
**AI not following your rules?**
1. Check that `AGENTS.md` exists at the project root
2. Verify the file contains valid Markdown
3. Some tools require reopening the project to reload rules
4. Check if the file is too long - important rules may be getting lost in the noise
5. Try adding emphasis (e.g., "IMPORTANT" or "MUST") to critical instructions
Long sessions cause AI to "forget" rules and earlier instructions. This happens because:
* Context windows fill up with irrelevant information
* Important instructions get pushed out during summarization
* Failed approaches pollute the conversation
**Solutions:**
1. Start fresh sessions for complex or unrelated tasks
2. Re-state important rules when you notice drift
3. After two failed corrections, clear context and write a better initial prompt
**Skills not triggering?**
1. Verify the skill's `description` field clearly describes when to use it
2. Try invoking the skill explicitly by name (e.g., `/skill-name`)
3. Check that the `SKILL.md` file has valid YAML frontmatter
4. Skills may require explicit invocation for workflows with side effects
**Subagents not working?**
1. Ensure subagent files are in the correct directory (`.agents/agents/`)
2. Check the frontmatter for syntax errors
3. Some tools require specific configuration to enable subagents
4. Verify the `name` and `description` fields are properly defined
AI can produce plausible-looking implementations that don't handle edge cases or reference non-existent APIs.
**Prevention:**
1. Always provide verification criteria (tests, expected outputs)
2. Use typed languages and configure linters
3. Point AI to reference implementations rather than documenting APIs
4. Run verification commands after every implementation
**Recovery:**
1. Don't try to fix incorrect code through follow-up prompts repeatedly
2. Revert changes and start fresh with a more specific prompt
3. Use a dedicated review pass to catch issues before committing
When you have multiple `AGENTS.md` files (root and package-level), they can conflict. Generally, the more specific file (closer to the code being edited) takes priority.
**Solutions:**
1. Check which `AGENTS.md` is being read by asking the AI
2. Consolidate conflicting rules into one location
3. Use package-level rules only for domain-specific guidance
Unbounded exploration fills context with irrelevant information.
**Solutions:**
1. Scope investigations narrowly: "search for JWT validation in `src/auth/`" instead of "find auth code"
2. Use subagents for exploration so it doesn't consume your main context
3. Specify file types or directories to limit search scope
Large codebases or long sessions can consume significant resources.
**Solutions:**
1. Use compact/summarize features regularly to reduce context size
2. Close and restart between major tasks
3. Exclude large build directories (e.g., `node_modules`, `dist`) from indexing via `.gitignore` or your tool's ignore file
4. Disable unnecessary extensions that might impact performance
## Learn more
Dive deeper into AI-assisted development with these resources. They cover open standards, tool directories, and specifications that power modern AI coding workflows.
---
url: /docs/extension/installation/clone
title: Cloning repository
description: Get the code to your local machine and start developing your extension.
---
Ensure you have Git installed on your local machine before proceeding. You can download Git from [here](https://git-scm.com).
## Git clone
Clone the repository using the following command:
```bash
git clone git@github.com:turbostarter/core
```
By default, we're using [SSH](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) for all Git commands. If you don't have it configured, please refer to the [official documentation](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) to set it up.
Alternatively, you can use HTTPS to clone the repository:
```bash
git clone https://github.com/turbostarter/core
```
You can also use the [GitHub CLI](https://cli.github.com/) or [GitHub Desktop](https://desktop.github.com/) for Git operations.
## Git remote
After cloning the repository, remove the original origin remote:
```bash
git remote rm origin
```
Add the upstream remote pointing to the original repository to pull updates:
```bash
git remote add upstream git@github.com:turbostarter/core
```
Once you have your own repository set up, add your repository as the origin:
```bash
git remote add origin
```
## Staying up to date
To pull updates from the upstream repository, run the following command daily (preferably with your morning coffee ☕):
```bash
git pull upstream main
```
This ensures your repository stays up to date with the latest changes.
Check [Updating codebase](/docs/web/installation/update) for more details on updating your codebase.
---
url: /docs/extension/installation/commands
title: Common commands
description: Learn about common commands you need to know to work with the extension project.
---
You don't need these commands to kickstart your project, but it's useful to know they exist for when you need them.
You can set up aliases for these commands in your shell configuration file. For example, you can set up an alias for `pnpm` to `p`:
```bash title="~/.bashrc"
alias p='pnpm'
```
Or, if you're using [Zsh](https://ohmyz.sh/), you can add the alias to `~/.zshrc`:
```bash title="~/.zshrc"
alias p='pnpm'
```
Then run `source ~/.bashrc` or `source ~/.zshrc` to apply the changes.
You can now use `p` instead of `pnpm` in your terminal. For example, `p i` instead of `pnpm install`.
To inject environment variables into the command you run, prefix it with `with-env`:
```bash
pnpm with-env
```
For example, `pnpm with-env pnpm build` will run `pnpm build` with the environment variables injected.
Some commands, like `pnpm dev`, automatically inject the environment variables for you.
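Under the hood, `with-env` is typically a small package script that loads the environment file before running the given command. A minimal sketch using [dotenv-cli](https://github.com/entropitor/dotenv-cli) (the exact script in your codebase may differ):

```json title="package.json"
{
  "scripts": {
    "with-env": "dotenv -e ../../.env --"
  }
}
```

Everything after `--` then runs with the variables from `../../.env` injected.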
## Installing dependencies
To install the dependencies, run:
```bash
pnpm install
```
## Starting development server
Start development server by running:
```bash
pnpm dev
```
## Building project
To build the project (including all apps and packages), run:
```bash
pnpm build
```
## Building specific app/package
To build a specific app/package, run:
```bash
pnpm turbo build --filter=
```
## Cleaning project
To clean the project, run:
```bash
pnpm clean
```
Then, reinstall the dependencies:
```bash
pnpm install
```
## Formatting code
To check for formatting errors using [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html), run:
```bash
pnpm format
```
To fix formatting errors using [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html), run:
```bash
pnpm format:fix
```
## Linting code
To check for linting errors using [Oxlint](https://oxc.rs/docs/guide/usage/linter.html), run:
```bash
pnpm lint
```
To fix linting errors using [Oxlint](https://oxc.rs/docs/guide/usage/linter.html), run:
```bash
pnpm lint:fix
```
## Adding UI components
To add a new web component, run:
```bash
pnpm --filter @workspace/ui-web ui:add
```
This command will add and export a new component to `@workspace/ui-web` package.
To add a new mobile component, run:
```bash
pnpm --filter @workspace/ui-mobile ui:add
```
This command will add and export a new component to `@workspace/ui-mobile` package.
## Services commands
To run the services containers locally, you need to have [Docker](https://www.docker.com/) installed on your machine.
You can always use a cloud-hosted solution (e.g., [Neon](https://neon.tech/) or [Turso](https://turso.tech/) for the database) for your projects.
We have a few commands to help you manage the services containers (for local development).
### Starting containers
To start the services containers, run:
```bash
pnpm services:start
```
It will run all the services containers. You can check their configs in `docker-compose.yml`.
### Setting up services
To set up all the services, run:
```bash
pnpm services:setup
```
It will start all the services containers and run necessary setup steps.
### Stopping containers
To stop the services containers, run:
```bash
pnpm services:stop
```
### Displaying status
To check the status and logs of the services containers, run:
```bash
pnpm services:status
```
### Displaying logs
To display the logs of the services containers, run:
```bash
pnpm services:logs
```
### Database commands
We have a few commands to help you manage the database leveraging [Drizzle CLI](https://orm.drizzle.team/kit-docs/commands).
#### Generating migrations
To generate a new migration, run:
```bash
pnpm with-env turbo db:generate
```
It will create a new migration `.sql` file in the `packages/db/migrations` folder.
#### Running migrations
To run the migrations against the db, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:migrate
```
It will apply all the pending migrations.
#### Pushing changes directly
Make sure you know what you're doing before pushing changes directly to the db.
To push changes directly to the db, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:push
```
It lets you push your schema changes directly to the database and omit managing SQL migration files.
#### Checking database status
To check the status of the database, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:status
```
It will display the status of the applied migrations and the pending ones.
```bash
Applied migrations:
- 0000_cooing_vargas
- 0001_curious_wallflower
- 0002_good_vertigo
- 0003_peaceful_devos
- 0004_fat_mad_thinker
- 0005_yummy_bucky
- 0006_glorious_vargas
Pending migrations:
- 0007_nebulous_havok
```
#### Resetting database
To reset the database, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:reset
```
It will reset the database to the initial state.
#### Seeding database
To seed the database with some example data (for development purposes), run:
```bash
pnpm with-env turbo db:seed
```
It will populate your database with some example data.
#### Checking database
To check the database schema consistency, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:check
```
#### Browsing database
To browse the database schema and data in the browser with Drizzle Studio, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:studio
```
This will start the Studio on [https://local.drizzle.studio](https://local.drizzle.studio).
## Tests commands
### Running tests
To run the tests, run:
```bash
pnpm test
```
This will run all the tests in the project using Turbo tasks. As it leverages Turbo caching, it's [recommended](/docs/web/tests/unit#configuration) to run it in your CI/CD pipeline.
### Running test projects
To run tests for all Vitest [Test Projects](https://vitest.dev/guide/projects), run:
```bash
pnpm test:projects
```
This will run all the tests in the project using Vitest.
### Watching tests
To watch the tests, run:
```bash
pnpm test:projects:watch
```
This will watch the tests for all [Test Projects](https://vitest.dev/guide/projects) and run them automatically when you make changes.
### Generating code coverage
To generate a code coverage report, run:
```bash
pnpm turbo test:coverage
```
This will generate a code coverage report in the `coverage` directory under `tooling/vitest` package.
### Viewing code coverage
To preview the code coverage report in the browser, run:
```bash
pnpm turbo test:coverage:view
```
This will launch the report's `.html` file in your default browser.
---
url: /docs/extension/installation/conventions
title: Conventions
description: Some standard conventions used across TurboStarter codebase.
---
You're not required to follow these conventions; they're simply a standard set of practices used in the core kit. If you like them, we encourage you to keep them during your usage of the kit so you have a consistent code style that you and your teammates understand.
## Turborepo packages
In this project, we use [Turborepo packages](https://turbo.build/repo/docs/core-concepts/internal-packages) to define reusable code that can be shared across multiple applications.
* **Apps** are used to define the main application, including routing, layout, and global styles.
* **Packages** share reusable code and add functionality across multiple applications. They're configurable from the main application.
**Recommendation:** Do not create a package for your app code unless you plan to reuse it across multiple applications or are experienced in writing library code.
If your application is not intended for reuse, keep all code in the app folder. This approach saves time and reduces complexity, both of which are beneficial for fast shipping.
**Experienced developers:** If you have the experience, feel free to create packages as needed.
## Imports and paths
When importing modules from packages or apps, use the following conventions:
* **From a package:** Use `@workspace/package-name` (e.g., `@workspace/ui`, `@workspace/api`, etc.).
* **From an app:** Use `~/` (e.g., `~/components`, `~/config`, etc.).
## Enforcing conventions
We don't enforce complex rules or specific style guides that are not relevant to the project, giving you more freedom to customize things to your needs.
To enforce these conventions, we use the following tools:
* [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html) is a [Prettier-compatible](https://oxc.rs/docs/guide/usage/formatter/migrate-from-prettier.html) tool used to enforce code formatting.
* [Oxlint](https://oxc.rs/docs/guide/usage/linter.html) is an [ESLint-compatible](https://oxc.rs/docs/guide/usage/linter/migrate-from-eslint.html) tool used to enforce code quality and best practices.
* [TypeScript](https://www.typescriptlang.org/) is used to enforce type safety.
## Code health
TurboStarter provides a set of tools to ensure code health and quality in your project.
### GitHub Actions
By default, TurboStarter sets up GitHub Actions to run tests on every push to the repository. You can find the workflow configuration in the `.github/workflows` directory.
The workflow has multiple stages:
* `format` - runs Oxfmt to format the code.
* `lint` - runs Oxlint to check for linting errors.
* `test` - runs tests.
### Git hooks
Together with TurboStarter, we have set up a `pre-commit` hook that will check for linting and formatting errors in the files being committed.
It's configured using [Lefthook](https://lefthook.dev), which supports multiple hooks and can be configured to run commands on specific files or directories.
Feel free to customize the hook to your needs, e.g. to check consistency of commit messages (useful for generating changelogs) using [commitlint](https://commitlint.js.org/):
```yaml title="lefthook.yml"
commit-msg:
commands:
"lint commit message":
run: pnpm commitlint --edit {1}
```
---
url: /docs/extension/installation/dependencies
title: Managing dependencies
description: Learn how to manage dependencies in your project.
---
We chose [pnpm](https://pnpm.io/) as the package manager.
It is a fast, disk-space-efficient package manager that uses hard links and symlinks to store a single copy of each module version on disk. It also has great [monorepo support](https://pnpm.io/workspaces). Of course, you can switch to [Bun](https://bun.sh), [yarn](https://yarnpkg.com) or [npm](https://www.npmjs.com) with minimal effort.
## Install dependency
To install a package you need to decide whether you want to install it to the root of the monorepo or to a specific workspace. Installing it to the root makes it available to all packages, while installing it to a specific workspace makes it available only to that workspace.
To install a package to the workspace root, run:
```bash
pnpm add -w <package-name>
```
To install a package to a specific workspace, run:
```bash
pnpm add --filter <workspace> <package-name>
```
For example:
```bash
pnpm add --filter @workspace/ui motion
```
It will install `motion` to the `@workspace/ui` workspace.
## Remove dependency
Removing a package is the same as installing but with the `remove` command.
To remove a package from the workspace root, run:
```bash
pnpm remove -w <package-name>
```
To remove a package from a specific workspace, run:
```bash
pnpm remove --filter <workspace> <package-name>
```
## Update a package
Updating is a bit easier since there is a nice way to update a package in all workspaces at once:
```bash
pnpm update -r
```
When you update a package, pnpm will respect the [semantic versioning](https://docs.npmjs.com/about-semantic-versioning) rules defined in the `package.json` file. If you want to update a package to the latest version, you can use the `--latest` flag.
## Renovate bot
By default, TurboStarter comes with [Renovate](https://www.npmjs.com/package/renovate) enabled. It is a tool that helps you manage your dependencies by automatically creating pull requests to update your dependencies to the latest versions. You can find its configuration in the `.github/renovate.json` file. Learn more about it in the [official docs](https://docs.renovatebot.com/configuration-options/).
When it creates a pull request, it is treated as a normal PR, so all tests and preview deployments will run. **It is recommended to always preview and test the changes in the staging environment before merging the PR to the main branch to avoid breaking the application.**
---
url: /docs/extension/installation/development
title: Development
description: Get started with the code and develop your browser extension.
---
## Prerequisites
To get started with TurboStarter, ensure you have the following installed and set up:
* [Node.js](https://nodejs.org/en) (24.x or higher)
* [Docker](https://www.docker.com) (only if you want to use local services e.g. database)
* [pnpm](https://pnpm.io)
## Project development
### Install dependencies
Install the project dependencies by running the following command:
```bash
pnpm i
```
### Set up environment variables
Create `.env.local` files from the `.env.example` files and fill in the required environment variables.
You can use the following command to recursively copy the `.env.example` files to the `.env.local` files:
```bash
find . -name ".env.example" -exec sh -c 'cp "$1" "${1%.example}.local"' _ {} \;
```
```powershell
Get-ChildItem -Recurse -Filter ".env.example" | ForEach-Object {
Copy-Item $_.FullName ($_.FullName -replace '\.example$', '.local')
}
```
Check [Environment variables](/docs/extension/configuration/environment-variables) for more details on setting up environment variables.
### Set up services
If you want to use local services like the database (**recommended for development purposes**), ensure Docker is running, then set them up with:
```bash
pnpm services:setup
```
This command initiates the containers and runs necessary setup steps, ensuring your services are up to date and ready to use.
### Start development server
To start the application development server, run:
```bash
pnpm dev
```
Your development server should now be running 🎉
WXT will create a dev bundle for your extension and start a live-reloading development server, which will automatically update your extension bundle and reload your browser on source code changes.
It also makes the icon grayscale to distinguish between development and production extension bundles.
### Load the extension
Head over to `chrome://extensions` and enable **Developer Mode**.

Click on "Load Unpacked" and navigate to your extension's `apps/extension/build/chrome-mv3` directory.
To see your popup, click on the puzzle piece icon on the Chrome toolbar, and click on your extension.

Pin your extension to the Chrome toolbar for easy access by clicking the pin button.
Head over to `about:debugging` and click on "This Firefox".
Click on "Load Temporary Add-on" and navigate to your extension's `apps/extension/build/firefox-mv2` directory. Pick any file to load the extension.

The extension now installs, and remains installed until you restart Firefox.
To see your popup, click on your extension icon on the Firefox toolbar.

The loaded extension starts pinned on the Firefox toolbar. Don't remove it, so you can easily access it later.
You can also configure your development server to automatically start the browser when you start the server. To do so, create a `web-ext.config.ts` file in the root of your extension and configure it with your browser [binaries](https://wxt.dev/guide/essentials/config/browser-startup.html#set-browser-binaries) and [arguments](https://wxt.dev/guide/essentials/config/browser-startup.html#persist-data).
Learn more in the [official documentation](https://wxt.dev/guide/essentials/config/browser-startup.html).
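A minimal `web-ext.config.ts` might look like the sketch below (assuming a recent WXT version that exports `defineWebExtConfig`; the binary path and flags are placeholders to replace with your own):

```typescript
import { defineWebExtConfig } from "wxt";

export default defineWebExtConfig({
  // Path to the browser binary to launch (placeholder — point at your install)
  binaries: {
    chrome: "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
  },
  // Persist a dedicated profile between runs instead of a throwaway one
  chromiumArgs: ["--user-data-dir=./.wxt/chrome-data"],
});
```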
### Publish to stores
When you're ready to publish the project to the stores, follow the [guidelines](/docs/extension/marketing) and [checklist](/docs/extension/publishing/checklist) to ensure everything is set up correctly.
---
url: /docs/extension/installation/editor-setup
title: Editor setup
description: Learn how to set up your editor for the fastest development experience.
---
Of course, you can use any IDE you like, but you'll have the best possible developer experience with this starter kit when using a **VSCode-based** editor with the suggested settings and extensions.
## Settings
We've included most recommended settings in the `.vscode/settings.json` file to make your development experience as smooth as possible. It includes configuration for tools like [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html), [Oxlint](https://oxc.rs/docs/guide/usage/linter.html), and Tailwind CSS, which are used to enforce conventions across the codebase. You can adjust them to your needs.
```json title=".vscode/settings.json"
{
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.fixAll.oxc": "always"
},
"editor.formatOnSave": true
...
}
```
## Extensions
Once you've cloned the project and opened it in VSCode, you should be prompted to install the suggested extensions, which are defined in `.vscode/extensions.json`. If you'd rather install them manually, you can do so at any time.
These are the extensions we recommend:
### OXC
Global extension for static code analysis. It will help you find and fix problems in your JavaScript/TypeScript code using [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html) and [Oxlint](https://oxc.rs/docs/guide/usage/linter.html). It's compatible with [Prettier](https://prettier.io/) and [ESLint](https://eslint.org/).
### Pretty TypeScript Errors
Improves TypeScript error messages shown in the editor.
### Tailwind CSS IntelliSense
Adds IntelliSense for Tailwind CSS classes to enable autocompletion and linting.
---
url: /docs/extension/installation/structure
title: Project structure
description: Learn about the project structure and how to navigate it.
---
The main directories in the project are:
* `apps` - the location of the main apps
* `packages` - the location of the shared code and the API
### `apps` Directory
This is where the apps live. It includes web app (Next.js), mobile app (React Native - Expo), and the browser extension (WXT - Vite + React). Each app has its own directory.
### `packages` Directory
This is where the shared code and the API for packages live. It includes the following:
* shared libraries (database, mailers, cms, billing, etc.)
* shared features (auth, mails, billing, ai etc.)
* UI components (buttons, forms, modals, etc.)
All apps can use and reuse the API exported from the packages directory. This makes it easy to have one or many apps in the same codebase sharing the same code.
## Repository structure
By default the monorepo contains the following apps and packages:
## Browser extension application structure
The browser extension application is located in the `apps/extension` folder. It contains the following folders:
---
url: /docs/extension/installation/update
title: Updating codebase
description: Learn how to update your codebase to the latest version.
---
If you've been following along with our previous guides, you should already have a Git repository set up for your project, with an `upstream` remote pointing to the original repository.
Updating your project involves fetching the latest changes from the `upstream` remote and merging them into your project. Let's dive into the steps!
## Stash changes
If you don't have any changes to stash, you can skip this step and proceed with the update process.
Alternatively, you can [commit](https://git-scm.com/docs/git-commit) your changes.
If you have any uncommitted changes, stash them before proceeding. It will allow you to avoid any conflicts that may arise during the update process.
```bash
git stash
```
This command will save your changes in a temporary location, allowing you to retrieve them later. Once you're done updating, you can apply the stash to your working directory.
```bash
git stash apply
```
## Pull changes
Pull the latest changes from the `upstream` remote.
```bash
git pull upstream main
```
When prompted the first time, please opt for merging instead of rebasing.
Don't forget to run `pnpm i` in case there are any updates in the dependencies.
## Resolve conflicts
If there are any conflicts during the merge, Git will notify you. You can resolve them by opening the conflicting files in your code editor and making the necessary changes.
If you find conflicts in the `pnpm-lock.yaml` file, accept either of the two changes (avoid manual edits), then run:
```bash
pnpm i
```
Your lock file will now reflect both your changes and the updates from the upstream repository.
## Run a health check
After resolving the conflicts, it's time to test your project to ensure everything is working as expected. Run your project locally and navigate through the various features to verify that everything is functioning correctly.
For a quick health check, you can run:
```bash
pnpm lint
pnpm typecheck
```
If everything looks good, you're all set! Your project is now up to date with the latest changes from the `upstream` repository.
## Commit and push
Once everything is working fine, don't forget to commit your changes using:
```bash
git commit -m "<your commit message>"
```
and push them to your remote repository with:
```bash
git push origin <your-branch>
```
---
url: /docs/extension/internationalization
title: Internationalization
description: Learn how to internationalize your extension.
---
TurboStarter's extension uses [i18next](https://www.i18next.com/) and web cookies to store the language preference of the user. This allows the extension to be fully internationalized.
We use i18next because it's a robust and widely-adopted internationalization framework that works seamlessly with React.
The combination with web cookies allows us to persistently store the language preference across all extension contexts and share it with the web app while maintaining excellent performance and browser compatibility.

## Configuration
The global configuration is defined in the `@workspace/i18n` package and shared across all applications. You can read more about it in the [web configuration](/docs/web/internationalization/configuration) documentation.
By default, the locale is automatically detected based on the user's device settings. You can override it and set the default locale of your extension in the [app configuration](/docs/extension/configuration/app) file.
Also, the locale configuration is **shared between the web app and the extension** (same as [session](/docs/extension/auth/session)), which means that changing the locale in one place will automatically update it in the other. It's a common pattern for modern apps, simplifying the user experience and reducing the maintenance effort.
### Cookies
When a user first opens the [web app](/docs/web), the locale is detected and a cookie is set. This cookie is used to remember the user's language preference.
You can find its value in the *Cookies* tab of the developer tools of your browser:

To let your extension read the cookie and thereby share the locale settings with the web app, you need to set the `cookies` permission in `wxt.config.ts` under the `manifest.permissions` field:
```ts
export default defineConfig({
manifest: {
permissions: ["cookies"],
},
});
```
To read the cookie from your app's URL, you also need to set `host_permissions` to include that URL:
```ts
export default defineConfig({
manifest: {
host_permissions: ["http://localhost/*", "https://your-app-url.com/*"],
},
});
```
Then you would be able to share the cookie between your apps and also read its value using `browser.cookies` API.
Avoid using `<all_urls>` in `host_permissions`. It affects all URLs and may cause security issues, as well as a [rejection](https://developer.chrome.com/docs/webstore/review-process#review-time-factors) from the destination store.
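With those permissions in place, reading the shared locale cookie can be sketched like this (the app URL, cookie name, and fallback locale are assumptions — match them to your web app's i18n setup):

```typescript
// Minimal typing for the WebExtensions cookies API used below
declare const browser: {
  cookies: {
    get: (details: { url: string; name: string }) => Promise<{ value: string } | null>;
  };
};

const APP_URL = "https://your-app-url.com"; // must be covered by host_permissions
const DEFAULT_LOCALE = "en";

// Pure helper: fall back to the default locale when the cookie is missing or empty
export const resolveLocale = (value: string | null | undefined): string =>
  value ? value : DEFAULT_LOCALE;

// Read the locale cookie set by the web app (requires the "cookies" permission)
export const getSharedLocale = async (): Promise<string> => {
  const cookie = await browser.cookies.get({ url: APP_URL, name: "locale" });
  return resolveLocale(cookie?.value);
};
```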
## Translating extension
To translate individual components and screens, you can use the well-known `useTranslation` hook.
```tsx
import { useTranslation } from "@workspace/i18n";
export const Popup = () => {
const { t } = useTranslation();
return <p>{t("hello")}</p>;
};
```
That's the recommended way to translate stuff inside your extension.
### Store presence
As we saw in the [manifest](/docs/extension/configuration/manifest#locales) section, you can also localize your extension's store presence (like title, description, and other metadata). This allows you to customize how your extension appears in different web stores based on the user's language.
Each store has specific requirements for localization:
* [Chrome Web Store](https://developer.chrome.com/docs/webstore/cws-dashboard-listing/) requires a `_locales` directory with JSON files for each language
* [Firefox Add-ons](https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Internationalization) uses a similar structure but with some differences in the manifest
* [Edge Add-ons](https://learn.microsoft.com/en-us/microsoft-edge/extensions/publish/publish-extension#supporting-multiple-languages) uses the same structure as Chrome Web Store
Although most of the config is abstracted behind a common structure, please follow the store-specific guides above for detailed instructions on setting up localization for your extension's store listing.
## Language switcher
TurboStarter ships with a language customizer component that allows users to switch between languages in your extension. You can import and use the `LocaleCustomizer` component in your popup, options page, or any other extension view:
```tsx
import { LocaleCustomizer } from "@workspace/ui-web/i18n";
export const Popup = () => {
return <LocaleCustomizer />;
};
```
As the web app and extension share the same i18n configuration (cookie), changing the language in one will affect the other. **This is intentional** and ensures a consistent experience across both platforms, since your extension likely serves as a companion to the web app and should maintain the same language preferences.
## Best practices
Here are key best practices for managing translations in your browser extension:
* Use descriptive, hierarchical translation keys
```ts
// ✅ Good
"popup.settings.language";
"content.toolbar.save";
// ❌ Bad
"saveButton";
"text1";
```
* Organize translations by extension views and features
```
_locales/
├── en/
│ ├── messages.json
│ ├── popup.json
│ └── options.json
└── es/
├── messages.json
├── popup.json
└── options.json
```
* Handle fallback languages gracefully
* Keep manifest descriptions localized for store listings
* Consider context in translations:
```ts
// Context-aware messages
t("button.save", { context: "document" }); // "Save document"
t("button.save", { context: "settings" }); // "Apply changes"
```
* Use placeholders for dynamic content:
```ts
// With variables
t("status.saved", { time: "2 minutes ago" }); // "Last saved 2 minutes ago"
// With plurals
t("items", { count: 5 }); // "5 items"
```
* Keep translations in sync between extension views
* Cache translations for offline functionality
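To illustrate graceful fallback, here is a simplified lookup sketch (i18next handles this for you via its `fallbackLng` option; the resource shape below is an assumption for illustration):

```typescript
type Resources = Record<string, Record<string, string>>;

// Try the active locale first, then the fallback locale, then return the key itself
export const translate = (
  resources: Resources,
  locale: string,
  fallback: string,
  key: string,
): string => resources[locale]?.[key] ?? resources[fallback]?.[key] ?? key;
```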
---
url: /docs/extension/marketing
title: Marketing
description: Learn how to market your browser extension.
---
As you saw in the [Extras](/docs/extension/extras) section, TurboStarter comes with a lot of tips and tricks to make your product better and help you launch your extension faster with higher traffic.
The same applies to [submission tips](/docs/extension/extras#submission-tips) to help you get your extension approved by the browser stores faster.
We'll talk more about the whole process of deploying and publishing your extension in the [Publishing](/docs/extension/publishing/checklist) section, here we'll go through some guidelines that you need to follow to make your store's visibility higher.
## Before you submit
To help your extension approval go as smoothly as possible, review the common missteps listed below that can slow down the review process or trigger a rejection. This doesn't replace the official guidelines or guarantee approval, but making sure you can check every item on the list is a good start.
Make sure you:
* Test your extension thoroughly for crashes and bugs
* Ensure that all extension information and metadata is complete and accurate
* Update your contact information in case the review team needs to reach you
* Provide clear instructions on how to use your extension, including any special setup required
* If your extension requires an account, provide a demo account or a way to test all features without signing up
* Enable and test all backend services to ensure they're live and accessible during review
* Include detailed explanations of non-obvious features in the extension description
* Ensure your extension complies with the specific browser store's policies (e.g., [Chrome Web Store](https://developer.chrome.com/docs/webstore/program-policies/best-practices), [Firefox Add-ons](https://extensionworkshop.com/documentation/publish/add-on-policies/), [Edge Add-ons](https://learn.microsoft.com/en-us/legal/microsoft-edge/extensions/developer-policies) etc.)
* Remove any references to features not supported in browser extensions (e.g., in-app purchases)
Following these basic steps during development and before submission will help you get your extension approved faster and with fewer issues.
## Guidelines
Each store has slightly different guidelines, but some of them are general and can be applied to all stores:
* **Security**: Your extension must not contain malicious code or behavior that can harm users' devices or data.
* **Performance**: Your extension must be performant and stable, with a smooth user experience.
* **Privacy**: Your extension must respect user privacy and not collect unnecessary data without explicit consent.
* **Compliance**: Your extension must comply with all relevant laws and regulations.
You can read more about official guidelines for each store in the following links:
* [Chrome Web Store](https://developer.chrome.com/docs/webstore/program-policies/best-practices)
* [Firefox Add-ons](https://extensionworkshop.com/documentation/publish/add-on-policies/)
* [Edge Add-ons](https://learn.microsoft.com/en-us/microsoft-edge/extensions/developer-guide/best-practices)
## Common mistakes
There are a few common mistakes that you should avoid to make sure your extension can be accepted in the stores. The most common ones are:
* **Not enough description** - make sure to describe all the features of your extension and how it works in your store listing, so users aren't confused about what your extension does. Also include detailed information in the single purpose field regarding your extension's primary functionality.
* **Privacy issues** - respect user privacy and request as few permissions as possible; don't ask for permissions that are not necessary for your extension to work
* **Customer support** - provide a way to contact you in case the user has any issues with your extension
* **Stay up-to-date** - keep your extension and its documentation up-to-date to ensure a smooth user experience and to prevent issues during the review process.
---
url: /docs/extension/monitoring/overview
title: Overview
description: Get started with browser extension monitoring in TurboStarter.
---
TurboStarter includes powerful, provider-agnostic monitoring helpers for the browser extension so you can understand **what failed**, **where it failed** (popup, content script, background), and **who it impacted**. The API is intentionally designed for simplicity and extensibility, so you can swap providers without rewriting your extension code.
## Capturing exceptions
Extensions have multiple runtimes. To get good coverage, capture errors in the places users actually feel them:
* **Popup / options UI**: React pages where runtime errors break interactions.
* **Background (service worker)**: long-lived logic like alarms, message routing, and sync.
* **Content scripts**: page integrations where DOM differences and CSP can trigger failures.
* **Manual reporting**: wrap critical flows (auth, billing, webhooks-to-extension sync, imports) with `try/catch` and report with context.
```tsx
import { captureException } from "@workspace/monitoring-extension";
export function ExampleButton() {
const onPress = async () => {
try {
/* some risky operation */
} catch (error) {
captureException(error);
}
};
return <button onClick={onPress}>Run</button>;
}
```
```ts
import { captureException } from "@workspace/monitoring-extension";
browser.runtime.onMessage.addListener((message, _sender, sendResponse) => {
try {
/* handle message */
sendResponse({ ok: true });
} catch (error) {
captureException(error);
sendResponse({ ok: false });
}
});
```
```ts
import { captureException } from "@workspace/monitoring-extension";
try {
/* interact with the page DOM */
} catch (error) {
captureException(error);
}
```
An exception in a content script won't automatically show up in your background logs (and vice versa). Add capture points in each runtime you ship, especially if you do message passing between them.
## Identifying users
Monitoring becomes far more useful once reports can be tied to a stable identity. In extensions you often have two “identities”:
* **Anonymous, stable install id**: useful before sign-in (and to correlate issues with a device/install).
* **Signed-in user**: once the user authenticates, identify with their user id so issues map to a real account.
TurboStarter's monitoring layer supports identifying the current user when your auth session resolves. When signed out, pass `null` (or your provider's preferred anonymous identity strategy).
```tsx title="monitoring.tsx"
import { useEffect } from "react";
import { identify } from "@workspace/monitoring-extension";
import { authClient } from "~/lib/auth/client";
export const MonitoringProvider = ({
children,
}: {
children: React.ReactNode;
}) => {
const session = authClient.useSession();
useEffect(() => {
if (session.isPending) {
return;
}
identify(session.data?.user ?? null);
}, [session]);
return <>{children}</>;
};
```
Prefer **stable IDs** over PII. Only attach traits that help debugging (plan, role, extension version) and avoid secrets (tokens, passwords) or sensitive fields unless you've explicitly chosen to send them.
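As a minimal sketch of that advice (the `SessionUser` shape and trait names are assumptions), you might strip a session user down to debugging-relevant fields before passing it to `identify`:

```typescript
type SessionUser = {
  id: string;
  email?: string; // PII — deliberately not forwarded
  plan?: string;
  role?: string;
};

// Keep only a stable id plus traits that help debugging; never tokens or secrets
export const toMonitoringIdentity = (user: SessionUser | null) =>
  user ? { id: user.id, plan: user.plan, role: user.role } : null;
```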
## Providers
The starter supports multiple monitoring providers behind the same API, so you can start with one and switch later.
## Best practices
* **Segment by environment**: extension issues are often environment-specific. Make sure you can filter by runtime (popup/background/content script), extension version, and browser.
* **Capture what matters**: focus on crashes and failures that break core flows; skip “expected” states like validation errors or user cancellations.
* **Guard noisy paths**: background alarms, retries, and message loops can generate many identical errors. Guard your capture calls to keep signal high (and costs low).
* **Separate release channels**: don't mix dev/beta/stable releases. Tag builds so you can correlate spikes with a rollout and verify fixes quickly.
With capture points in each runtime, user identification wired up, and a provider configured, extension monitoring becomes a tight feedback loop: you can spot regressions early, understand which surface area is failing and validate fixes confidently as you ship new versions.
---
url: /docs/extension/monitoring/posthog
title: PostHog
description: Learn how to setup PostHog as your browser extension monitoring provider.
---
[PostHog](https://posthog.com/) is a product analytics platform that also supports monitoring capabilities like error tracking and session replay. In extensions, it's especially useful when you want to connect “what broke” with “what the user did” right before the issue occurred.
TurboStarter keeps monitoring behind a unified API, so you can route exception captures from your popup, background, and content scripts to PostHog without rewriting the call sites.
To use PostHog as your monitoring provider, you'll need a PostHog instance. You can use [PostHog Cloud](https://app.posthog.com/signup) or [self-host](https://posthog.com/docs/self-host).
PostHog is also supported as an analytics provider for the extension. If you want to track in-extension events, see the [analytics overview](/docs/extension/analytics/overview) and the [PostHog analytics configuration](/docs/extension/analytics/configuration#posthog).

## Configuration
Here you'll configure PostHog as the monitoring provider for your extension so exceptions from the popup, background/service worker, and content scripts show up with enough context to debug.
### Create a project
Create a PostHog [project](https://app.posthog.com/project/settings) for your extension. You can do this from the [PostHog dashboard](https://app.posthog.com) via the *New Project* action.
### Activate PostHog as your monitoring provider
TurboStarter picks the extension monitoring provider through exports in the monitoring package. To route captures to PostHog, export the PostHog implementation from the extension monitoring entrypoint:
```ts title="index.ts"
// [!code word:posthog]
export * from "./posthog";
export * from "./posthog/env";
```
### Set environment variables
Add your PostHog project key (and host, if you're not using the default cloud region) to your extension env. Set these locally and in whatever build environment produces your extension bundles:
```dotenv title="apps/extension/.env.local"
VITE_POSTHOG_KEY="your-posthog-project-api-key"
VITE_POSTHOG_HOST="https://us.i.posthog.com"
```
That's it — load the extension, trigger a test error from the popup/background/content script, and confirm events are arriving in your PostHog project.

If you want to go beyond basic capture (session replay, feature flags, richer context), follow PostHog's web/extension guidance.
## Uploading source maps
**Source maps** map the minified/bundled JavaScript shipped with your extension back to your original source code. Without them, stack traces in PostHog often point at compiled output, which makes debugging much slower.
PostHog’s source map flow for web builds relies on injecting metadata into the bundled assets. You must deploy/ship the injected assets, otherwise PostHog can’t match captured errors to the uploaded symbol sets.
For extensions built with Vite (which [WXT](https://wxt.dev/) uses under the hood), the high-level flow is:
* generate `.map` files during the production build
* inject PostHog metadata into the built assets
* upload the injected source maps to PostHog
### Install the PostHog CLI
Install the CLI globally:
```bash
npm install -g @posthog/cli
```
### Authenticate the CLI
Authenticate interactively:
```bash
posthog-cli login
```
In CI, you can authenticate with environment variables:
```dotenv
POSTHOG_CLI_HOST="https://us.posthog.com"
POSTHOG_CLI_ENV_ID="your-posthog-project-id"
POSTHOG_CLI_TOKEN="your-personal-api-key"
```
### Build with source maps enabled
Make sure your extension build outputs source maps by modifying your `wxt.config.ts` file.
```ts title="wxt.config.ts"
import { defineConfig } from "wxt";
export default defineConfig({
/* existing WXT configuration options */
vite: () => ({
build: {
sourcemap: "hidden", // [!code ++] Source map generation must be turned on ("hidden", true, etc.)
},
}),
});
```
After building, you should have `.js` and `.js.map` files in your output directory.
### Inject PostHog metadata into the built assets
Inject release/chunk metadata so PostHog can associate uploaded maps with the shipped bundles:
```bash
posthog-cli sourcemap inject --directory ./path/to/assets --project my-extension --version 1.2.3
```
### Upload source maps
Upload the injected source maps to PostHog:
```bash
posthog-cli sourcemap upload --directory ./path/to/assets
```
### Verify injection and uploads
After deployment, confirm your production bundles include the injected comment (for example `//# chunkId=...`) and verify symbol sets exist in your PostHog project settings.
With this in place, PostHog can symbolicate extension errors (popup/options UI, background/service worker, and content scripts) so stack traces point back to your original source files.
---
url: /docs/extension/monitoring/sentry
title: Sentry
description: Learn how to setup Sentry as your browser extension monitoring provider.
---
[Sentry](https://sentry.io/) is a popular error monitoring and performance tracking platform. It helps you catch and debug issues by collecting exceptions, stack traces, and helpful context from production.
For browser extensions, that context matters even more: errors can happen in multiple runtimes (popup/options UI, background/service worker, and content scripts). Sentry makes it easier to see what failed and where it happened so you can ship fixes with confidence.
To use Sentry, create an account and a project first. You can sign up [here](https://sentry.io/signup).

## Configuration
This section walks you through enabling Sentry for your extension and verifying that errors from the popup, background/service worker, and content scripts are captured reliably.
### Create a project
Create a Sentry [project](https://docs.sentry.io/product/projects/) for the extension (JavaScript / browser). You can do this from the Sentry [projects dashboard](https://sentry.io/settings/account/projects/) via the *Create Project* flow.
### Activate Sentry as your monitoring provider
TurboStarter picks the extension monitoring provider via exports in the monitoring package. To enable Sentry, export the Sentry implementation from the extension monitoring entrypoint:
```ts title="index.ts"
// [!code word:sentry]
export * from "./sentry";
export * from "./sentry/env";
```
If you need to customize behavior, the provider implementation lives under `packages/monitoring/extension/src/providers/sentry`.
### Set environment variables
From your Sentry project settings, add the DSN and environment to your extension env file (and to any [CI/build step](/docs/extension/publishing/checklist#build-your-app) that produces your extension bundles):
```dotenv title="apps/extension/.env.local"
VITE_SENTRY_DSN="your-sentry-dsn"
VITE_SENTRY_ENVIRONMENT="your-project-environment"
```
That's it — load the extension, trigger a test error from the popup/background/content script, and confirm it shows up in your [Sentry dashboard](https://sentry.io/settings/account/projects/).

For advanced options (sampling, releases, extra context), refer to [Sentry's JavaScript docs](https://docs.sentry.io/platforms/javascript/).
## Uploading source maps
**Source maps** map the bundled/minified JavaScript shipped with your extension back to your original source files. Without them, Sentry stack traces often point to compiled output, which makes debugging across popup/background/content-script runtimes much harder.
Generating source maps can expose your source code if `.map` files are publicly accessible. Prefer hidden source maps and/or delete them after upload.
Sentry can automatically provide readable stack traces by symbolicating errors with source maps; uploading them requires a [Sentry auth token](https://docs.sentry.io/account/auth-tokens/).
### Install the Sentry Vite plugin
Install the package `@sentry/vite-plugin` in `apps/extension/package.json` as a dev dependency.
```bash
pnpm i @sentry/vite-plugin -D --filter extension
```
### Add an auth token for uploads
Create an [auth token in Sentry](https://docs.sentry.io/account/auth-tokens/) and provide it as an environment variable during builds (locally and in your build environment):
```dotenv
SENTRY_AUTH_TOKEN="your-sentry-auth-token"
```
### Enable source maps and configure the plugin
Enable source map generation in your extension build and add `sentryVitePlugin` **after** your other Vite plugins:
```ts title="wxt.config.ts"
import { defineConfig } from "wxt";
import { sentryVitePlugin } from "@sentry/vite-plugin";
export default defineConfig({
/* existing WXT configuration options */
vite: () => ({
build: {
sourcemap: "hidden", // [!code ++] Source map generation must be turned on ("hidden", true, etc.)
},
plugins: [
sentryVitePlugin({
org: "your-sentry-org",
project: "your-sentry-project",
authToken: process.env.SENTRY_AUTH_TOKEN,
sourcemaps: {
// As you're enabling client source maps, you probably want to delete them after they're uploaded to Sentry.
// Set the appropriate glob pattern for your output folder - some glob examples below:
filesToDeleteAfterUpload: [
"./**/*.map",
".*/**/public/**/*.map",
"./dist/**/client/**/*.map",
],
},
}),
],
}),
});
```
### Verify uploads with a production build
The Sentry Vite plugin doesn't upload in dev/watch mode. Run a production build, then trigger a test error in the extension and confirm stack traces resolve to your original source.
Once this is in place, errors from your extension's compiled bundles (popup/options UI, background/service worker, content scripts) should show **readable stack traces** in Sentry, without shipping source maps to end users.
---
url: /docs/extension/organizations
title: Organizations/teams
description: Learn how to use organizations/teams/multi-tenancy in TurboStarter extension.
---
TurboStarter extensions support organizations/teams out of the box by sharing the same authentication session as your web app. The active organization is stored in the session and available to your extension without re-implementing organizations logic.
The extension and web app use a single auth session powered by Better Auth. The session includes tenant context (for example, `activeOrganizationId`). When users sign in, switch organizations, or sign out in the web app, the extension picks up these changes automatically.
Learn more: [Auth → Session](/docs/extension/auth/session).
## How it works
* **No separate auth flow** in the extension. We reuse the web session.
* **Active organization comes from the session** (e.g., `session.activeOrganizationId`).
* **Protected API calls** from the extension include the right cookies, so org‑scoped server logic works as expected.
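For illustration, an org-scoped call from the extension might look like the sketch below. The endpoint path and function name are hypothetical; the key detail is `credentials: "include"`, which attaches the shared session cookies so the server can resolve the active organization:

```typescript
// Hypothetical org-scoped API call from the extension. The endpoint
// path is illustrative; sending cookies with the request is what lets
// the server-side logic scope the response to the active organization.
const fetchOrgProjects = async (apiUrl: string) => {
  const response = await fetch(`${apiUrl}/api/projects`, {
    credentials: "include", // send the shared auth session cookies
  });
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return response.json();
};
```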

## Active organization
Use your existing auth client to read the active organization through the `useActiveOrganization` hook.
```tsx title="popup.tsx"
import { authClient } from "~/lib/auth";
export function Popup() {
const { data: organization } = authClient.useActiveOrganization();
return <>{organization?.name}</>;
}
```
If a user switches organizations in the web app, the extension reflects the change through the shared session on the next interaction. For long-lived views, re-read the session or invalidate related queries when appropriate.
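If you need to detect such a switch yourself (for example, to decide when to refetch org-scoped queries), a small comparison between session snapshots is enough. This is a hypothetical sketch — `activeOrganizationId` is the session field mentioned above, while `SessionLike` and `activeOrgChanged` are illustrative names:

```typescript
// Hypothetical helper: compares two session snapshots and reports
// whether the active organization changed, so long-lived views know
// when to refetch org-scoped data.
interface SessionLike {
  activeOrganizationId?: string | null;
}

const activeOrgChanged = (
  prev: SessionLike | null,
  next: SessionLike | null,
): boolean =>
  (prev?.activeOrganizationId ?? null) !== (next?.activeOrganizationId ?? null);
```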
## Do more with organizations
Most organization features live in the web app and are exposed via APIs your extension can call. These guides explain the underlying concepts and server behavior your extension builds upon:
Looking for the underlying auth setup? Start with [Auth →
Overview](/docs/extension/auth/overview) and [Auth →
Session](/docs/extension/auth/session).
---
url: /docs/extension/publishing/checklist
title: Checklist
description: Let's publish your TurboStarter extension to stores!
---
When you're ready to publish your TurboStarter extension to stores, follow this checklist.
This process may take a few hours and some trial and error, so buckle up - you're almost there!
## Create database instance
**Why is it necessary?**
A production-ready database instance is essential for storing your application's data securely and reliably in the cloud. [PostgreSQL](https://www.postgresql.org/) is the recommended database for TurboStarter due to its robustness, features, and wide support.
**How to do it?**
You have several options for hosting your PostgreSQL database:
* [Supabase](/docs/extension/recipes/supabase) - Provides a fully managed Postgres database with additional features
* [Vercel Postgres](https://vercel.com/storage/postgres) - Serverless SQL database optimized for Vercel deployments
* [Neon](https://neon.tech/) - Serverless Postgres with automatic scaling
* [Turso](https://turso.tech/) - Edge database built on libSQL with global replication
* [DigitalOcean](https://www.digitalocean.com/products/managed-databases) - Managed database clusters with automated failover
Choose a provider based on your needs for:
* Pricing and budget
* Geographic region availability
* Scaling requirements
* Additional features (backups, monitoring, etc.)
## Migrate database
**Why is it necessary?**
Pushing database migrations ensures that your database schema in the remote database instance is configured to match TurboStarter's requirements. This step is crucial for the application to function correctly.
**How to do it?**
You have two options for running the migration:
TurboStarter comes with a predefined GitHub Action to handle database migrations. You can find its definition in the `.github/workflows/publish-db.yml` file.
What you need to do is set your `DATABASE_URL` as a [secret for your GitHub repository](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions).
Then, you can run the workflow which will publish the database schema to your remote database instance.
[Check how to run GitHub Actions workflow.](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow)
You can also run your migrations locally, although this is not recommended for production.
To do so, set the `DATABASE_URL` environment variable to your database URL (that comes from your database provider) in the `.env.local` file and run the following command:
```bash
pnpm with-env pnpm --filter @workspace/db db:migrate
```
This command will run the migrations and apply them to your remote database.
[Learn more about database migrations.](/docs/web/database/migrations)
## Set up web backend API
**Why is it necessary?**
Setting up the backend is necessary to have a place to store your data and to have other features work properly (e.g. authentication, billing or storage).
**How to do it?**
Please refer to the [web deployment checklist](/docs/web/deployment/checklist) on how to set up and deploy the web app backend to production.
## Environment variables
**Why is it necessary?**
Setting the correct environment variables is essential for the extension to function correctly. These variables include API keys, database URLs, and other configuration details required for your extension to connect to various services.
**How to do it?**
Use our `.env.example` files to get the correct environment variables for your project. Then add them to your CI/CD provider (e.g. [GitHub Actions](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions)) as a [secret](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions).
## Build your app
**Why is it necessary?**
Building your extension is necessary to create a standalone extension bundle that can be published to the stores.
**How to do it?**
You have two options for building your extension bundle:
TurboStarter comes with a predefined GitHub Action to handle building your extension for submission. You can find its definition in the `.github/workflows/publish-extension.yml` file.
[Check how to run GitHub Actions workflow.](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow)
This will also save the `.zip` file as an [artifact](https://docs.github.com/en/actions/guides/storing-workflow-data-as-artifacts) of the workflow run, so you can download it from there and submit your extension to stores (if configured).
You can also run your build locally, although this is not recommended for production.
To do it, run the following command:
```bash
pnpm turbo build --filter=extension
```
This will build the extension and package it into a `.zip` file. You can find the output in the `build` folder.
## Submit to stores
**Why is it necessary?**
Publishing your extension to the stores is required to make it discoverable and accessible to your users. This is the official distribution channel where users can find, install, and trust your extension.
**How to do it?**
We've prepared dedicated guides for each store that TurboStarter supports out-of-the-box, please refer to the following pages:
That's it! Your extension is now live and accessible to your users, good job! 🎉
* Optimize your store listing description, keywords, and other relevant information for the stores.
* Remove the placeholder content in the extension or replace it with your own.
* Update the favicon, scheme, store images, and logo with your own branding.
---
url: /docs/extension/publishing/chrome
title: Chrome Web Store
description: Publish your extension to Google Chrome Web Store.
---
[Chrome Web Store](https://chromewebstore.google.com/) is the most popular store for browser extensions, as it makes them available in any Chromium-based browser, including Google Chrome, Edge, Brave, and many others.
To submit your extension to Chrome Web Store, you'll need to complete a few steps. Here, we'll go through them.
Make sure your extension follows the [guidelines](/docs/extension/marketing) and other requirements to increase your chances of getting approved.
## Developer account
Before you can publish items on the Chrome Web Store, you must register as a CWS developer and pay a one-time registration fee. You must provide a developer email when you create your developer account.
To register, just access the [developer console](https://chrome.google.com/webstore/devconsole). The first time you do this, the following registration screen will appear. First, agree to the developer agreement and policies, then pay the registration fee.

Once you pay the registration fee and agree to the terms, your account will be created, and you'll be able to proceed to fill out additional information about it.

There are a few fields that you'll need to fill in:
* **Publisher name**: Appears under the title of each of your extensions. If you are a verified publisher, you can display an official publisher URL instead.
* **Verified email**: Verifying your contact email address is required when you set up a new developer account. It's only displayed under your extensions' contact information. Any notifications will be sent to your Chrome Web Store developer account email.
* **Physical address**: Only items that offer functionality to purchase items, additional features, or subscriptions must include a physical address.
## Submission
After registering your developer account, setting it up, and preparing your extension, you're ready to publish it to the store.
You can submit your extension in two ways:
* **Manually**: By uploading your extension's bundle directly to the store.
* **Automatically**: By using GitHub Actions to submit your extension to the stores.
**The first submission must be done manually, while subsequent updates can be submitted automatically.** We'll go through both approaches.
### Manual submission
To manually submit your extension to stores, you will first need to get your extension bundle. If you ran the build step locally, you should already have the `.zip` file in your extension's `build` folder.
If you used GitHub Actions to build your extension, you can find the results in the workflow run. Download the artifacts and save them on your local machine.
Then, use the following steps to upload your item:
1. Go to the [Chrome Web Store Developer Dashboard](https://chrome.google.com/webstore/devconsole/).
2. Sign in to your developer account.
3. Click on the *Add new item* button.
4. Click *Choose file* > *your zip file* > *Upload*. If your item's manifest and other contents are valid, you will see a new item in the dashboard.

After you upload the bundle, you'll need to fill in the extension's details, such as the icons, privacy settings, permissions justification, and other information.
Please refer to the official guides on how to set up your extension's details.
### Automated submission
The first submission of your extension to Chrome Web Store must be done manually because you need to provide the store's credentials and extension ID to automation, which will be available only after the first bundle upload.
TurboStarter comes with a pre-configured GitHub Actions workflow to submit your extension to web stores automatically. It's located in the `.github/workflows/publish-extension.yml` file.
Fill in the environment variables with your store's credentials and extension details, and set them as [secrets in your GitHub repository](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions) under the correct names:
```yaml title="publish-extension.yml"
env:
CHROME_EXTENSION_ID: ${{ secrets.CHROME_EXTENSION_ID }}
CHROME_CLIENT_ID: ${{ secrets.CHROME_CLIENT_ID }}
CHROME_CLIENT_SECRET: ${{ secrets.CHROME_CLIENT_SECRET }}
CHROME_REFRESH_TOKEN: ${{ secrets.CHROME_REFRESH_TOKEN }}
```
Please refer to the [official guide](https://github.com/PlasmoHQ/bms/blob/main/tokens.md#chrome-web-store-api) to learn how to get these credentials correctly.
That's it! You can [run the workflow](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow) and it will submit your extension to the Chrome Web Store 🎉
This workflow will also try to submit your extension for review, but that isn't guaranteed to happen. All required information must be filled in on your extension's details page to make it possible.
Even then, when you introduce a **breaking change** (e.g. adding another permission), you'll need to update your extension's store metadata, and automatic submission won't be possible.
To opt out of this behavior (uploading to the store automatically, but not sending to review), add the `--chrome-skip-submit-review` flag to the `wxt submit` command in the `publish-extension.yml` file:
```yaml title="publish-extension.yml"
# [!code word:--chrome-skip-submit-review]
- name: 💨 Publish!
run: |
npx wxt submit \
--chrome-zip apps/extension/build/*-chrome.zip --chrome-skip-submit-review
```
Then, your extension bundle will be uploaded to the store, but you will need to send it to review manually.
Check out the [official documentation](https://wxt.dev/api/cli/wxt-submit) for more customization options.
## Review
After filling out the information about your item, you are ready to send it to review. Click on *Submit for review* button and confirm that you want to submit your item in the following dialog:

The confirmation dialog shown above also lets you control the timing of your item's publishing. If you uncheck the checkbox, your item will **not** be published immediately after its review is complete. Instead, you'll be able to manually publish it at a time of your choosing once the review is complete.
After you submit the item for review, it will undergo a review process. The time for this review depends on the nature of your item. See [Understanding the review process](https://developer.chrome.com/docs/webstore/review-process) for more details.
Important emails, such as takedown or rejection notifications, are enabled by default. To receive an email notification when your item is published or staged, enable notifications on the *Account page*.

The review status of your item appears in the [developer dashboard](https://chrome.google.com/webstore/devconsole) next to each item. The status can be one of the following:
* **Published**: Your item is available to all users.
* **Pending**: Your item is under review.
* **Rejected**: Your item was rejected by the store.
* **Taken Down**: Your item was taken down by the store.

You'll receive an email notification when the status of your item changes.
If your extension has been determined to violate one or more terms or policies, you will receive an email notification that contains the violation description and instructions on how to rectify it.
If you did not receive an email within a week, check the status of your item. If your item has been rejected, you can see the details on the *Status* tab of your item.

You'll need to fix the issues and upload a new version of your extension, make sure to follow the [guidelines](/docs/extension/marketing) or check [publishing troubleshooting](/docs/extension/troubleshooting/publishing) for more info.
If you have been informed about a violation and you do not rectify it, your item will be taken down. See [Violation enforcement](https://developer.chrome.com/docs/webstore/review-process#enforcement) for more details.
You can learn more about the review process in the official guides listed below.
---
url: /docs/extension/publishing/edge
title: Edge Add-ons
description: Publish your extension to Microsoft Edge Add-ons.
---
[Microsoft Edge Add-ons](https://microsoftedge.microsoft.com/addons/) distributes extensions to Microsoft Edge users. If you already have a Chromium-based extension, you can submit it to Edge with minimal changes.
This guide walks you through manual submission and optional automation, aligned with the official process.
Make sure your extension follows the general [guidelines](/docs/extension/marketing) and the Edge Add-ons developer policies to increase your chances of approval.
## Developer account
To enroll in the Microsoft Edge program, you need a Microsoft account. If you don't have one, you can create one [here](https://account.microsoft.com/account/signup?signin=1&ru=https://account.microsoft.com/account/login?loginMethod=email).

Next, before you can publish your extension to Edge Add-ons, you need to register your developer account in [Partner Center](https://partner.microsoft.com/dashboard/microsoftedge/public/login?ref=dd). Fill out the required fields and submit the form with the *Finish* button. Wait for the email confirming that your account has been verified - then you're ready to submit your extension!

## Submission
After your account is ready and the extension bundle is prepared, you can publish it. There are two paths:
* **Manually**: Upload your `.zip` package through Partner Center.
* **Automatically**: Use CI to upload new versions after the first manual submission.
**The first submission should be done manually.** Subsequent updates can be automated once you have your extension ID and required credentials.
### Manual submission
To manually submit your extension to stores, you will first need to get your extension bundle. If you ran the build step locally, you should already have the `.zip` file in your extension's `build` folder.
If you used GitHub Actions to build your extension, you can find the results in the workflow run. Download the artifacts and save them on your local machine.
Then, use the following steps to upload your item:
#### Sign in to your developer account
Go to the [Partner Center](https://partner.microsoft.com/dashboard/microsoftedge/public/login?ref=dd) and sign in to your developer account.
#### Create new extension
Click the *Create new extension* button to start a new submission.

#### Upload the extension package
The *Extension overview* page shows information for a specific extension:

To upload your extension package:
1. Click *Packages* in the left sidebar.
2. Drag and drop your `.zip` file or click *Browse your files* to select it.
3. Wait for validation to complete. If it fails, fix any issues and re-upload.
4. Review the extracted extension details and click *Continue*.
#### Set availability
Choose visibility:
* `Public`: discoverable in the store and via search.
* `Hidden`: not discoverable; accessible via direct listing URL only.
Select markets where the extension is available. You can later add or remove markets; existing users retain access to installed versions.

#### Enter properties
Provide category, privacy policy requirements, privacy policy URL (if applicable), website URL, and support contact.
These are shown to users on the listing and must meet policy requirements.

Follow the [official documentation](https://learn.microsoft.com/en-us/microsoft-edge/extensions/publish/publish-extension#step-4-enter-properties-describing-your-extension) for more details.
#### Add store listing details
Fill in the store listing details for your extension:
* **Display name**: The name shown in the store (from your manifest file).
* **Description**: A detailed description (250-5000 characters) explaining what your extension does and why users should install it.
* **Extension Store logo**: A 300x300 pixel logo representing your extension.
* **Screenshots**: Up to 10 screenshots (640x480 or 1280x800 pixels) showing your extension's functionality.
* **Small/Large promotional tiles**: Optional promotional images for store featuring.
* **YouTube video URL**: Optional promotional video.
* **Search terms**: Keywords to help users discover your extension (up to 21 words total).
You must provide the description and logo for each supported language. Other fields are optional but recommended for better discoverability.

Follow the [official documentation](https://learn.microsoft.com/en-us/microsoft-edge/extensions/publish/publish-extension#step-5-add-store-listing-details-for-your-extension) for detailed requirements and best practices.
#### Submit for review
Complete the submission by providing testing notes to help certification testers understand your extension.
Click the *Submit* button to open the submission page:

In the **Notes for certification** text box, provide additional information to help testers properly evaluate your extension. Include any relevant details such as:
* Test account usernames and passwords
* Steps to access hidden or locked features
* Expected differences based on region or user settings
* Information about changes if this is an update
* Any other context testers need to understand your submission
Once you've added your notes, click the *Publish* button to submit your extension for certification.
Your extension will proceed to the certification step, which can take up to seven business days.
After passing certification, your extension will be published to [Microsoft Edge Add-ons](https://microsoftedge.microsoft.com/addons/) and the status in Partner Center will change to "In the Store".
## Automated submission
The first submission of your extension to Microsoft Edge Add-ons must be done manually because you need to provide the store's credentials and extension ID to automation, which will be available only after the first bundle upload.
TurboStarter comes with a pre-configured GitHub Actions workflow to submit your extension to web stores automatically. It's located in the `.github/workflows/publish-extension.yml` file.
Fill in the environment variables with your store's credentials and extension details, and set them as [secrets in your GitHub repository](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions) under the correct names:
```yaml title="publish-extension.yml"
env:
EDGE_PRODUCT_ID: ${{ secrets.EDGE_PRODUCT_ID }}
EDGE_CLIENT_ID: ${{ secrets.EDGE_CLIENT_ID }}
EDGE_API_KEY: ${{ secrets.EDGE_API_KEY }}
```
Please refer to the [official guide](https://github.com/PlasmoHQ/bms/blob/main/tokens.md#edge-add-ons-api-v11) to learn how to get these credentials correctly.
Once configured, you can manually [trigger the workflow](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow) to upload the new version to Edge Add-ons 🎉
This workflow will also try to send your extension to review, but it's not guaranteed to happen. You need to have all required information filled in your extension's details page to make it possible.
Even then, when you introduce a **breaking change** (e.g. add another permission), you'll need to update your extension's store metadata, and automatic submission won't be possible.
To opt out of this behavior (and only upload to the store automatically, without sending to review), add the `--edge-skip-submit-review` flag to the `wxt submit` command in the `publish-extension.yml` file:
```yaml title="publish-extension.yml"
# [!code word:--edge-skip-submit-review]
- name: 💨 Publish!
run: |
npx wxt submit \
--edge-zip apps/extension/build/*-chrome.zip --edge-skip-submit-review
```
Then, your extension bundle will be uploaded to the store, but you will need to send it to review manually.
Check out the [official documentation](https://wxt.dev/api/cli/wxt-submit) for more customization options.
## Review
After you submit your extension, it enters Microsoft's certification and publishing pipeline.
1. Preprocessing
* Uploaded packages are queued and scanned. If errors are detected during preprocessing, you'll see a message and must resolve issues before re-uploading.
2. Certification
* Security tests: packages are checked for viruses and malware.
* Content compliance: human review of your listing and content for policy adherence.
3. Release and publishing
* If you selected publish immediately, publishing begins right away; otherwise schedule/hold options apply.
* While publishing, the submission status page shows rollout details. When complete, the status changes from "Publishing" to "In the Store".
4. Edge Add-ons curation and ranking
* Discovery is influenced by quality, relevancy (name, description, popularity, user experience), and popularity (ratings and averages). Security and policy compliance are verified per the developer policies.
Microsoft may also perform spot checks after publishing to ensure ongoing compliance.
The review status of your item appears in the [Partner Center](https://partner.microsoft.com/dashboard/microsoftedge/public/login?ref=dd) under the *Overview* page of your item.

You'll receive an email notification when the status of your item changes.
If your extension has been determined to violate one or more terms or policies, you will receive an email notification that contains the violation description and instructions on how to rectify it.

You can also check the reason behind the rejection on the *Certification report* page of your item.

You'll need to fix the issues and upload a new version of your extension. Make sure to follow the [guidelines](/docs/extension/marketing) or check [publishing troubleshooting](/docs/extension/troubleshooting/publishing) for more info.
You can learn more about the review process in the official guides listed below.
---
url: /docs/extension/publishing/firefox
title: Firefox Add-ons
description: Publish your extension to Mozilla Firefox Add-ons.
---
Mozilla Firefox doesn't share extensions with [Google Chrome](/docs/extension/publishing/chrome), so you'll need to publish your extension to it separately.
Here, we'll go through the process of publishing an extension to [Firefox Add-ons](https://addons.mozilla.org/).
Make sure your extension follows the [guidelines](/docs/extension/marketing) and other requirements to increase your chances of getting approved.
## Developer account
Before you can publish items on Firefox Add-ons, you must register a developer account. In comparison to the Chrome Web Store, Firefox Add-ons doesn't require a registration fee.
To register, go to [addons.mozilla.org](https://addons.mozilla.org/) and click on the *Register* button.

It's important to set at least a display name on your profile to increase transparency with users, add-on reviewers, and the greater community.
You can do it in the *Edit My Profile* section:

## Submission
After registering your developer account, setting it up, and preparing your extension, you're ready to publish it to the store.
You can submit your extension in two ways:
* **Manually**: By uploading your extension's bundle directly to the store.
* **Automatically**: By using GitHub Actions to submit your extension to the stores.
**The first submission must be done manually, while subsequent updates can be submitted automatically.** We'll go through both approaches.
### Manual submission
To manually submit your extension to stores, you will first need to get your extension bundle. If you ran the build step locally, you should already have the `.zip` file in your extension's `build` folder.
If you used GitHub Actions to build your extension, you can find the results in the workflow run. Download the artifacts and save them on your local machine.
Then, use the following steps to upload your item:
#### Sign in to your developer account
Go to the [Add-ons Developer Hub](https://addons.mozilla.org/developers/) and sign in to your developer account.
#### Choose distribution method
You should reach the following page:

Here, you have two ways of distributing your extension:
* **On this site**, if you want your add-on listed on AMO (addons.mozilla.org).
* **On your own**, if you plan to distribute the add-on yourself and don't want it listed on AMO.
We recommend going with the first option, as it will allow you to reach more users and get more feedback. If you decide to go with the second option, please refer to the [official documentation](https://extensionworkshop.com/documentation/publish/self-distribution/) for more details.
#### Submit your extension
On the next page, click on *Select file* and choose your extension's `.zip` bundle.

Once you upload the bundle, the validator checks the add-on for issues and the page updates:

If your add-on passes all the checks, you can proceed to the next step.
You may receive a message that you only have warnings. It's advisable to address these warnings, particularly those flagged as security or privacy issues, as they may result in your add-on failing review. However, **you can continue with the submission**.
If the validation fails, you'll need to address the issues and upload a new version of your add-on.
#### Submit source code (if needed)
You'll need to indicate whether you need to provide the source code of your extension:

If you select *Yes*, a section displays describing what you need to submit. Click *Browse* and locate and upload your source code package. See [Source code submission](https://extensionworkshop.com/documentation/publish/source-code-submission/) for more information.
#### Add metadata
On the next page, you'll need to provide the following additional information about your extension:

* **Name**: Your add-on's name.
* **Add-on URL**: The URL for your add-on on AMO. A URL is automatically assigned based on your add-on's name. To change this, click Edit. The URL must be unique. You will be warned if another add-on is using your chosen URL, and you must enter a different one.
* **Summary**: A useful and descriptive short summary of your add-on.
* **Description**: A longer description that provides users with details of the extension's features and functionality.
* **This add-on is experimental**: Indicate if your add-on is experimental or otherwise not ready for general use. The add-on will be listed but with reduced visibility. You can remove this flag when your add-on is ready for general use.
* **This add-on requires payment, non-free services or software, or additional hardware**: Indicate if your add-on requires users to make an additional purchase for it to work fully.
* **Select up to 2 Firefox categories for this add-on**: Select categories that describe your add-on.
* **Select up to 2 Firefox for Android categories for this add-on**: Select categories that describe your add-on.
* **Support email and Support website**: Provide an email address and website where users can get in touch when they have questions, issues, or compliments.
* **License**: Select the appropriate license for your add-on. Click Details to learn more about each license.
* **This add-on has a privacy policy**: If any data is being transmitted from the user's device, a privacy policy explaining what is being sent and how it's used is required. Check this box and provide the privacy policy.
* **Notes for Reviewers**: Provide information to assist the AMO reviewer, such as login details for a dummy account, source code information, or similar.
#### Finalize the process
Once you're ready, click on the *Submit Version* button.

You can still edit your add-on's details from the dedicated page after submission.
### Automated submission
The first submission of your extension to Firefox Add-ons must be done manually, because the automation needs the store credentials and extension ID, which become available only after the first bundle upload.
TurboStarter comes with a pre-configured GitHub Actions workflow to submit your extension to web stores automatically. It's located in the `.github/workflows/publish-extension.yml` file.
What you need to do is fill the environment variables with your store's credentials and extension details and set them as [secrets in your GitHub repository](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions) under the correct names:
```yaml title="publish-extension.yml"
env:
FIREFOX_EXTENSION_ID: ${{ secrets.FIREFOX_EXTENSION_ID }}
FIREFOX_JWT_ISSUER: ${{ secrets.FIREFOX_JWT_ISSUER }}
FIREFOX_JWT_SECRET: ${{ secrets.FIREFOX_JWT_SECRET }}
```
Please refer to the [official guide](https://github.com/PlasmoHQ/bms/blob/main/tokens.md#firefox-add-ons-api) to learn how to get these credentials correctly.
That's it! You can [run the workflow](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow) and it will submit your extension to the Firefox Add-ons 🎉
This workflow will also try to send your extension to review, but it's not guaranteed to happen. You need to have all required information filled in your extension's details page to make it possible.
Even then, when you introduce some **breaking change** (e.g., add another permission), you'll need to update your extension store metadata and automatic submission won't be possible.
## Review
Once you submit your extension bundle, it's automatically sent to review and will undergo a review process. The time for this review depends on the nature of your item.
The add-on review process includes the following phases:
1. **Automatic Review**: Upon upload, the add-on undergoes several automatic validation steps to ensure its general safety.
2. **Content Review**: Shortly after submission, a human reviewer inspects the add-on to ensure that the listing adheres to content review guidelines, including metadata such as the add-on name and description.
3. **Technical Code Review**: The add-on's source code is examined to ensure compliance with review policies.
4. **Basic Functionality Testing**: After the source code is verified as safe, the add-on undergoes basic functionality testing to confirm it operates as described.
Important emails, such as takedown or rejection notifications, are enabled by default. To also receive an email notification when your item is published or staged, enable notifications in the *Account Settings*.

The review status of your item appears in the [Developer Hub](https://addons.mozilla.org/developers/) next to each item.

You'll receive an email notification when the status of your item changes.
If your extension has been determined to violate one or more terms or policies, you will receive an email notification that contains the violation description and instructions on how to rectify it.
You can also check the reason behind the rejection on the *Status* page of your item.

You'll need to fix the issues and upload a new version of your extension. Make sure to follow the [guidelines](/docs/extension/marketing) or check [publishing troubleshooting](/docs/extension/troubleshooting/publishing) for more info.
You can learn more about the review process in the official guides listed below.
---
url: /docs/extension/publishing/updates
title: Updates
description: Learn how to update your published extension.
---
After publishing your extension to the stores, you can release updates to deliver new features and bug fixes to your users.
TurboStarter provides a ready-to-use process for updating your extensions. Let's quickly review how it works.
## Uploading a new version
The recommended way to update your extension is to submit a new version to the stores. This method is the most reliable, although it may take some time for the new version to be approved and become available to users.
To submit a new version, simply update the version number in your `package.json` file:
```json title="package.json"
{
...
"version": "1.0.0", // [!code --]
"version": "1.0.1", // [!code ++]
...
}
```
Next, follow the exact same steps as [when you initially published your extension](/docs/extension/publishing/checklist). When submitting your extension for review, be sure to provide release notes describing the new version.
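If you release often, the bump itself is easy to script. Here's a minimal sketch of a patch-bump helper (the `bumpPatch` name and the in-place `package.json` rewrite are illustrative, not part of TurboStarter):

```typescript
// Bump the patch component of a semver string, e.g. "1.0.0" -> "1.0.1".
// Illustrative helper - adapt it if you need minor/major bumps.
const bumpPatch = (version: string): string => {
  const [major, minor, patch] = version.split(".").map(Number);
  if ([major, minor, patch].some(Number.isNaN)) {
    throw new Error(`Invalid version: ${version}`);
  }
  return `${major}.${minor}.${patch + 1}`;
};

// Hypothetical usage - rewrite package.json in place:
// import { readFileSync, writeFileSync } from "node:fs";
// const pkg = JSON.parse(readFileSync("package.json", "utf8"));
// pkg.version = bumpPatch(pkg.version);
// writeFileSync("package.json", JSON.stringify(pkg, null, 2) + "\n");
```

Keeping the bump as a pure function makes it trivial to test, while the file rewrite stays a thin wrapper around it.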
---
url: /docs/extension/recipes/supabase
title: Supabase
description: Learn how to set up Supabase as the database (and optional storage) provider for your TurboStarter project.
---
[Supabase](https://supabase.com) is an open-source backend platform built on top of PostgreSQL that provides a managed database, storage, and other features out of the box.
You can adopt Supabase incrementally - start with just the pieces you need (for example, database only, or database + storage) and add more features over time. There's no requirement to integrate everything at once.
In this guide, we'll walk you through the process of setting up Supabase as a provider for your TurboStarter project. This could include using it as a [database](https://supabase.com/docs/guides/database), [storage](https://supabase.com/docs/guides/storage), [edge runtime for your API](https://supabase.com/docs/guides/functions) and more.
## Prerequisites
Before you start, make sure you have:
* **TurboStarter project** cloned locally with dependencies installed (you can use our [CLI](/docs/web/cli) to create a new project in seconds)
* **Supabase account** - you can create one at [supabase.com](https://supabase.com/sign-up)
* Basic familiarity with the core database docs:
* [Database overview](/docs/web/database/overview)
* [Migrations](/docs/web/database/migrations)
* [Database client](/docs/web/database/client)
## (Optional) Use Supabase locally with Docker
If you're on the Supabase free plan, you can only have a limited number of active hosted databases at once. A good workflow is:
* Use **local Supabase** for day-to-day development
* Keep **one hosted Supabase project** for staging/production (and for testing features that require a deployed project)
Supabase provides a local development stack that runs via **Docker**, managed by the **Supabase CLI**.
### Install prerequisites
* Install **Docker** (Docker Desktop is the easiest option)
* Install the **Supabase CLI** (pick one):
* macOS (Homebrew): `brew install supabase/tap/supabase`
* npm (no global install): `npx supabase --version`
### Initialize and start Supabase locally
From the monorepo root:
```bash
supabase init
supabase start
```
Once it’s running, get the local URLs and credentials:
```bash
supabase status
```
You should see a local **DB URL** (Postgres), plus URLs for **Studio** and the local API.
In most default setups, the local Postgres URL looks like:
`postgresql://postgres:postgres@127.0.0.1:54322/postgres`
Always prefer copying the exact value from `supabase status` to avoid port mismatches.
### Point TurboStarter to the local database
Update the **root** `.env.local` so TurboStarter’s `@workspace/db` uses the local Postgres:
```dotenv title=".env.local"
DATABASE_URL="postgresql://postgres:postgres@127.0.0.1:54322/postgres"
```
Then run migrations (same as with hosted Supabase):
```bash
pnpm with-env pnpm --filter @workspace/db db:migrate
```
That’s it — TurboStarter now talks to your **local Supabase Postgres**.
### Useful local commands
```bash
supabase stop # stop containers
supabase start # start again
supabase status # show URLs/ports/keys
supabase db reset # reset local DB (drops data)
```
## Create a new Supabase project
1. Go to the [Supabase dashboard](https://supabase.com).
2. Create a **new project** (choose a strong database password and a region close to your users).
3. Supabase will automatically provision a **PostgreSQL database** for you.

Optionally, you can customize the **Security options** by choosing the **Only Connection String** option - it opts out of auto-generating an API for the tables inside your database. It's not needed for the TurboStarter setup, but you can still leverage it for your custom use cases.

Once the project is ready, you can fetch the connection string.
## Get the database connection string
In the Supabase dashboard:
1. Open your project.
2. Click on the **Connect** button at the top.
3. Locate the **connection string** for your chosen ORM (it will be under the **ORMs** tab).

Copy this value - you'll use it as your `DATABASE_URL`.
In your Supabase connection string, you can see a placeholder like `[YOUR-PASSWORD]`. Make sure to replace this with the actual password you set when creating your Supabase project.
## Configure environment variables
TurboStarter reads database connection settings from the **root** `.env.local` file and uses them inside the `@workspace/db` package.
Create (or update) the `.env.local` file in the **monorepo root**:
```dotenv title=".env.local"
DATABASE_URL="postgres://postgres.[YOUR-PROJECT-REF]:[YOUR-PASSWORD]@aws-0-[aws-region].pooler.supabase.com:6543/postgres?pgbouncer=true&connection_limit=1"
```
Replace:
* `YOUR-PROJECT-REF` with your Supabase project ref
* `YOUR-PASSWORD` with the database password you set when creating the project
* `aws-region` with the region shown in the Supabase connection string
These variables are validated in the `@workspace/db` package and used to create the Drizzle client for your database.
For more background on how `DATABASE_URL` is used, see [Database overview](/docs/web/database/overview).
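To make the substitutions above concrete, here's a hypothetical helper that assembles the pooled connection string from its parts (the `buildSupabaseUrl` function is purely illustrative; in practice you copy the value from the Supabase dashboard):

```typescript
// Assemble the Supabase pooled connection string from its parts.
// Illustrative only - the real value comes from the dashboard's Connect dialog.
const buildSupabaseUrl = (opts: {
  projectRef: string;
  password: string;
  region: string;
}): string =>
  `postgres://postgres.${opts.projectRef}:${encodeURIComponent(opts.password)}` +
  `@aws-0-${opts.region}.pooler.supabase.com:6543/postgres` +
  `?pgbouncer=true&connection_limit=1`;
```

Note the `encodeURIComponent` call: special characters in the database password must be URL-encoded, which is a common source of connection errors when pasting the string by hand.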
## Setup your Supabase database
With `DATABASE_URL` now pointing to Supabase, you can apply the existing TurboStarter schema to your Supabase database.
From the monorepo root, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:migrate
```
This will:
* Use your Supabase `DATABASE_URL` from `.env.local`
* Run all pending SQL migrations from `packages/db/migrations`
* Create the full TurboStarter schema (users, billing, demo tables, etc.) in Supabase
If you're actively iterating on the schema, you can generate new migrations and apply them as described in [Migrations](/docs/web/database/migrations).
After running your migrations, you may want to seed your database with initial data (such as demo users or organizations). You can do this by running the following command:
```bash
pnpm with-env pnpm turbo db:seed
```
This will populate your Supabase database with some example data you can use to test your application.
## Use Supabase Storage as S3-compatible storage
TurboStarter's storage layer is designed to work seamlessly with **any S3-compatible provider**. In this section, we'll show how to use [Supabase Storage](/docs/web/storage/overview) as your application's file storage back-end.
Supabase Storage provides a simple, S3-compatible API and is a great choice if you're already using Supabase for your database.
### Create a storage bucket
1. In the Supabase dashboard, go to **Storage → Buckets**.
2. Click **Create bucket** (name it whatever you want, for example `avatars` or `uploads`).
3. Adjust settings based on your needs (e.g. limit the maximum file size, specify the allowed file types, etc.)

You can create multiple buckets (for documents, images, videos, etc.) if needed.
### Generate S3 access keys in Supabase dashboard
1. Go to **Storage → S3 → Access keys**.
2. Click **New access key**.
3. Give it a descriptive name and create the key.
4. Copy the **Access key ID** and **Secret access key** to use in your application.

### Configure S3 environment variables for Supabase Storage
In your web application's `.env.local`, add (or update) the S3 configuration used by TurboStarter's storage layer:
```dotenv title=".env.local"
S3_REGION="us-east-1"
S3_BUCKET="avatars"
S3_ENDPOINT="https://[YOUR-PROJECT-REF].supabase.co/storage/v1/s3"
S3_ACCESS_KEY_ID="your-access-key-id"
S3_SECRET_ACCESS_KEY="your-secret-access-key"
```
These variables integrate directly with the storage configuration described in:
* [Storage overview](/docs/web/storage/overview)
* [Storage configuration](/docs/web/storage/configuration)
Once set, existing TurboStarter file upload flows (e.g. user avatars, organization logos) will use Supabase Storage via presigned URLs.
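Upload flows also need stable object keys inside the bucket. As a purely illustrative example (the prefix/user-id naming scheme below is an assumption, not TurboStarter's actual convention):

```typescript
// Build a collision-resistant object key for an upload, e.g.
// "avatars/user_123/1700000000000-profile.png".
// The prefix/userId scheme is illustrative only.
const buildObjectKey = (opts: {
  prefix: string;
  userId: string;
  fileName: string;
  now?: number;
}): string => {
  // Strip anything that isn't safe in an S3 key segment.
  const safeName = opts.fileName.replace(/[^a-zA-Z0-9._-]/g, "_");
  return `${opts.prefix}/${opts.userId}/${opts.now ?? Date.now()}-${safeName}`;
};
```

Prefixing keys with a timestamp avoids accidental overwrites when a user uploads two files with the same name.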
## Run your API on Supabase Edge Functions
As we're using [Hono](https://hono.dev) as our API server, you can deploy it as a Supabase Edge Function so it runs close to your users.
At a high level:
1. Install the [Supabase CLI](https://supabase.com/docs/guides/cli) and initialize a Supabase project locally with `supabase init`.
2. Create a new [Edge Function](https://supabase.com/docs/guides/functions/quickstart) (for example `hono-backend`) with `supabase functions new hono-backend`.
3. Inside the generated function (for example `supabase/functions/hono-backend/index.ts`), set up a basic Hono app and export it via `Deno.serve(app.fetch)`:
```ts
import { Hono } from "jsr:@hono/hono";
// change this to your function name
const functionName = "hono-backend";
const app = new Hono().basePath(`/${functionName}`);
app.get("/hello", (c) => c.text("Hello from hono-server!"));
Deno.serve(app.fetch);
```
4. Run the function locally with `supabase start` and `supabase functions serve --no-verify-jwt`, then call it from your TurboStarter app using the local or deployed function URL.
5. When you're ready, deploy the function with `supabase functions deploy` (or `supabase functions deploy hono-backend`) and manage it using the Supabase dashboard, as described in the [Supabase Edge Functions docs](https://supabase.com/docs/guides/functions).
This is entirely optional, but it's a great fit for lightweight APIs, webhooks, and other serverless logic you want to run alongside your Supabase project.
## Explore additional Supabase features
Supabase is a full Postgres development platform, so beyond the database and storage pieces wired up above you can gradually add more features as your app grows ([see the Supabase homepage](https://supabase.com/) for an overview).
Some features that fit especially well with TurboStarter's design are:
* [Realtime](https://supabase.com/docs/guides/realtime) - built on [Postgres replication](https://www.postgresql.org/docs/current/runtime-config-replication.html), so you can stream changes from your existing TurboStarter tables (inserts, updates, deletes) into live UIs without changing how you manage schema or RLS. You still define tables and policies via `@workspace/db`, and opt into Realtime on top.
* [Vector](https://supabase.com/docs/guides/vector) - powered by the [pgvector](https://github.com/pgvector/pgvector) extension and stored in regular Postgres tables, making it easy to integrate semantic search or AI features while keeping everything in the same migrations and Drizzle models you already use in TurboStarter. We're using it extensively in our dedicated [AI Kit](/ai).
* [Cron](https://supabase.com/docs/guides/functions/cron) - enables you to schedule background jobs and periodic tasks with [pg\_cron](https://github.com/citusdata/pg_cron). You can define cron jobs for things like scheduled database cleanups, sending emails, report generation, or any recurring logic, all managed alongside your TurboStarter app with full Postgres integration.
Because these features are all layered on top of Postgres, you can introduce them incrementally and keep managing everything through your familiar workflow.
## Start the development server
With the database and other services configured to use Supabase, you can start TurboStarter as usual from the monorepo root:
```bash
pnpm dev
```
TurboStarter will now:
* Use **Supabase Postgres** as your database through `DATABASE_URL`
* Use **Supabase Storage** as your file storage through the S3-compatible endpoint
* Leverage **Supabase Edge Functions** (for example, with Hono) for your serverless backend
That's it! You can now start building your application with Supabase as your main provider. Explore the [Supabase documentation](https://supabase.com/docs) for more features and best practices.
---
url: /docs/extension/stack
title: Tech Stack
description: A detailed look at the technical details.
---
## Turborepo
[Turborepo](https://turbo.build/) is a monorepo tool that helps you manage your project's dependencies and scripts. We chose a monorepo setup to make it easier to manage the structure of different features and enable code sharing between different packages.
## WXT (Vite)
> It's like Next.js for browser extensions.
[WXT](https://www.wxt.dev/) is a very lightweight and powerful framework (based on [Vite](https://vite.dev/)) for building browser extensions using most popular frontend tools. It provides a modern development experience with features like hot module reloading, TypeScript support, and automatic manifest generation.
WXT simplifies the process of creating cross-browser extensions, allowing you to focus on your extension's functionality rather than boilerplate setup.
## React
[React](https://reactjs.org/) is a JavaScript library for building user interfaces. It's the core technology we use for creating the UI of our browser extension, allowing for efficient updates and rendering of components.
## Tailwind CSS
[Tailwind CSS](https://tailwindcss.com) is a utility-first CSS framework that helps you build custom designs without writing any CSS. We also use [Base UI](https://base-ui.com) for our headless components library and [shadcn/ui](https://ui.shadcn.com) which enables you to generate pre-designed components with a single command.
## Hono & React Query
[Hono](https://hono.dev) is a small, simple, and ultrafast web framework for the edge. It provides tools to help you build APIs and web applications faster. It includes an RPC client for making type-safe function calls from the frontend. We use Hono to build our serverless API endpoints.
To make data fetching and caching from our API easy and reliable, we pair Hono with [React Query](https://tanstack.com/query/latest). It helps manage asynchronous data, caching, and state synchronization between the client and backend, delivering a fast and seamless UX.
## Better Auth
[Better Auth](https://www.better-auth.com) is a modern authentication library for fullstack applications. It provides ready-to-use snippets for features like email/password login, magic links, OAuth providers, and more. We use Better Auth to handle all authentication flows in our application.
## Drizzle
[Drizzle](https://orm.drizzle.team/) is a super fast [ORM](https://orm.drizzle.team/docs/overview) (Object-Relational Mapping) tool for databases. It helps manage databases, generate TypeScript types from your schema, and run queries in a fully type-safe way.
We use [PostgreSQL](https://www.postgresql.org) as our default database, but thanks to Drizzle's flexibility, you can easily switch to MySQL, SQLite or any [other supported database](https://orm.drizzle.team/docs/connect-overview) by updating a few configuration lines.
---
url: /docs/extension/structure/background
title: Background service worker
description: Configure your extension's background service worker.
---
An extension's service worker is a powerful script that runs in the background, separate from other parts of the extension. It's loaded when it is needed, and unloaded when it goes dormant.
Once loaded, an extension service worker generally runs as long as it is actively receiving events, though it [can shut down](https://developer.chrome.com/docs/extensions/develop/concepts/service-workers/lifecycle#idle-shutdown). Like its web counterpart, an extension service worker cannot access the DOM, though you can use it if needed with [offscreen documents](https://developer.chrome.com/docs/extensions/reference/api/offscreen).
Extension service workers are more than network proxies (as web service workers are often described); they run in a separate [service worker context](https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers). For example, in this context you no longer need to worry about CORS and can fetch resources from any origin.
In addition to the [standard service worker events](https://developer.mozilla.org/docs/Web/API/ServiceWorkerGlobalScope#events), they also respond to extension events such as navigating to a new page, clicking a notification, or closing a tab. They're also registered and updated differently from web service workers.
**It's common to offload heavy computation to the background service worker**, so you should always try to perform resource-expensive operations there and send the results to other parts of the extension using the [Messaging API](/docs/extension/structure/messaging).
Code for the background service worker is located in the `src/app/background` directory - use `defineBackground` within its `index.ts` file to let WXT include your script in the build.
```ts title="src/app/background/index.ts"
import { defineBackground } from "wxt/sandbox";
const main = () => {
console.log(
"Background service worker is running! Edit `src/app/background` and save to reload.",
);
};
export default defineBackground(main);
```
To see the service worker in action, reload the extension, then open its "Service Worker inspector":

You should see what we've logged in the console:

To communicate with the service worker from other parts of the extension, you can use the [Messaging API](/docs/extension/structure/messaging).
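A common way to structure that communication is a discriminated message union handled by a pure function, with the runtime listener as a thin wrapper. Here's a sketch under assumed names (the `Message` shape, `heavyComputation`, and the raw `chrome.runtime` wiring are all illustrative; in the starter you'd use the linked Messaging API rather than raw listeners):

```typescript
// A discriminated union keeps message handling type-safe.
type Message =
  | { type: "COMPUTE"; input: number }
  | { type: "PING" };

// Hypothetical "heavy" work kept as a pure function, easy to test
// outside the browser (sum of squares up to `input`).
const heavyComputation = (input: number): number => {
  let result = 0;
  for (let i = 1; i <= input; i++) result += i * i;
  return result;
};

const handleMessage = (message: Message): unknown => {
  switch (message.type) {
    case "COMPUTE":
      return heavyComputation(message.input);
    case "PING":
      return "pong";
  }
};

// Wiring into the extension runtime, guarded so the pure logic
// above also runs outside the browser:
const chromeApi = (globalThis as { chrome?: any }).chrome;
if (chromeApi?.runtime?.onMessage) {
  chromeApi.runtime.onMessage.addListener(
    (message: Message, _sender: unknown, sendResponse: (r: unknown) => void) => {
      sendResponse(handleMessage(message));
    },
  );
}
```

Keeping the handler pure means the expensive logic can be unit-tested without spinning up a browser.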
## Persisting state
Service workers in `dev` mode always remain in the `active` state.
In production, the worker becomes idle after a few seconds of inactivity, and the browser kills its process entirely after 5 minutes. This means all state (variables, etc.) is lost unless you use a storage engine.
The simplest way to persist your background service worker's state is to use the [storage API](/docs/extension/structure/storage).
The more advanced way is to send the state to a remote database via our [backend API](/docs/extension/api/overview).
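Whichever engine you choose, the pattern is the same: keep long-lived state in a store and read-modify-write it, rather than relying on module-level variables. A minimal sketch against a generic key-value interface (the `KVStore` interface and in-memory implementation are illustrative; in the extension you'd back it with the storage API):

```typescript
// Minimal key-value interface - in the extension this would be backed by
// the storage API; the in-memory version below exists only for illustration.
interface KVStore {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
}

class MemoryStore implements KVStore {
  private data = new Map<string, string>();
  async get(key: string) {
    return this.data.get(key);
  }
  async set(key: string, value: string) {
    this.data.set(key, value);
  }
}

// Read-modify-write: the counter survives the service worker being killed,
// because it lives in the store rather than in a module-level variable.
const incrementVisits = async (store: KVStore): Promise<number> => {
  const current = Number((await store.get("visits")) ?? "0");
  const next = current + 1;
  await store.set("visits", String(next));
  return next;
};
```

Because the service worker can restart at any time, every handler should re-read its state from the store instead of caching it in memory.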
---
url: /docs/extension/structure/content-scripts
title: Content scripts
description: Learn more about content scripts.
---
Content scripts run in the context of web pages in an isolated world. This allows multiple content scripts from various extensions to coexist without conflicting with each other's execution and to stay isolated from the page's JavaScript.
A script that ends with `.ts` will not have a front-end runtime (e.g. React) bundled with it and won't be treated as a UI script, while a script that ends in `.tsx` will be.
There are many use cases for content scripts:
* Injecting a custom stylesheet into the page
* Scraping data from the current web page
* Selecting, finding, and styling elements from the current web page
* Injecting UI elements into current web page
Code for the content scripts is located in the `src/app/content` directory - define a `.ts` or `.tsx` file inside and use `defineContentScript` to let WXT include your script in the build.
```ts title="src/app/content/index.ts"
export default defineContentScript({
  matches: ["<all_urls>"],
  async main(ctx) {
    console.log(
      "Content script is running! Edit `app/content` and save to reload.",
    );
  },
});
```
Reload your extension, open a web page, then open its inspector:

To learn more about content scripts, e.g. how to configure only specific pages to load content scripts, how to inject them into `window` object or how to fetch data inside, please check [the official documentation](https://wxt.dev/guide/essentials/content-scripts.html).
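For example, limiting a script to a single site and delaying it until the document finishes loading only takes two extra options - a sketch with an illustrative match pattern:

```ts
export default defineContentScript({
  // Run only on GitHub pages.
  matches: ["*://github.com/*"],
  // Inject after the page has finished loading.
  runAt: "document_end",
  main() {
    console.log("Running on GitHub only");
  },
});
```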
## UI scripts
WXT has first-class support for mounting React components into the current webpage. This feature is called content scripts UI (CSUI).

An extension can have as many CSUIs as needed, with each one targeting a group of webpages or a specific webpage.
To get started with CSUI, create a `.tsx` file in the `src/app/content` directory and use `defineContentScript` to allow WXT to include your script in the build and mount your component into the current webpage:
```tsx title="src/app/content/index.tsx"
import ReactDOM from "react-dom/client";

const ContentScriptUI = () => {
  return <div>Hello from the content script UI!</div>;
};

export default defineContentScript({
  matches: ["<all_urls>"],
  cssInjectionMode: "ui",
  async main(ctx) {
    const ui = await createShadowRootUi(ctx, {
      name: "turbostarter-extension",
      position: "overlay",
      anchor: "body",
      onMount: (container) => {
        const app = document.createElement("div");
        container.append(app);

        const root = ReactDOM.createRoot(app);
        root.render(<ContentScriptUI />);
        return root;
      },
      onRemove: (root) => {
        root?.unmount();
      },
    });

    ui.mount();
  },
});
```
The `.tsx` extension is essential to differentiate between Content Scripts UI and regular Content Scripts. Make sure you're using the appropriate type of content script for your use case.
To learn more about content scripts UI, e.g. how to inject custom styles, fonts or the whole lifecycle of a component, please check [the official documentation](https://wxt.dev/guide/essentials/content-scripts.html#ui).
Under the hood, your component is wrapped in a container that implements the Shadow DOM technique, along with many helpful features. This isolation prevents the web page's styles from affecting your component's styling and vice versa.
[Read more about the lifecycle of CSUI](https://docs.plasmo.com/framework/content-scripts-ui/life-cycle)
---
url: /docs/extension/structure/messaging
title: Messaging
description: Communicate between your extension's components.
---
The Messaging API makes communication between different parts of your extension easy. To keep it simple and scalable, we're leveraging the `@webext-core/messaging` library.
It provides a declarative, type-safe, functional, promise-based API for sending, relaying, and receiving messages between your extension components.
## Handling messages
Based on our convention, we implemented a small abstraction on top of `@webext-core/messaging` to make it easier to use. That's why all types and keys are stored inside the `lib/messaging` directory:
```ts title="lib/messaging/index.ts"
import { defineExtensionMessaging } from "@webext-core/messaging";

export const Message = {
  HELLO: "hello",
} as const;

export type Message = (typeof Message)[keyof typeof Message];

interface Messages {
  [Message.HELLO]: (message: string) => string;
}

export const { onMessage, sendMessage } = defineExtensionMessaging<Messages>();
```
Here you define what will be handled under each key. To keep the module's surface safe and small, only the `Message` enum and the `onMessage` and `sendMessage` functions are exported.
All message handlers are located in the `src/app/background/messaging` directory under respective subdirectories.
To create a message handler, create a TypeScript module in the `background/messaging` directory. Then, include your handlers for all keys related to the message:
```ts title="app/background/messaging/hello.ts"
import { onMessage, Message } from "~/lib/messaging";
onMessage(Message.HELLO, async (req) => {
  const result = await querySomeApi(req.data);
  return result;
});
```
To make your handlers available across your extension, you need to import them in the `background/index.ts` file. That way they can be interpreted by the build process facilitated by WXT.
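Assuming the `hello` handler above lives in `app/background/messaging/hello.ts`, registering it is just a side-effect import - a minimal sketch:

```ts
// app/background/index.ts
import { defineBackground } from "wxt/sandbox";

// Importing the module for its side effects registers the handler.
import "./messaging/hello";

export default defineBackground(() => {
  console.log("Background ready - message handlers registered");
});
```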
## Sending messages
Extension pages, content scripts, or tab pages can send messages to the handlers using the `sendMessage` function. Since we orchestrate your handlers behind the scenes, the message names are typed and will enable autocompletion in your editor:
```tsx title="app/popup/index.tsx"
import { sendMessage, Message } from "~/lib/messaging";
...
const response = await sendMessage(Message.HELLO, "Hello, world!");
console.log(response);
...
```
As it's an asynchronous operation, it's advisable to use [@tanstack/react-query](https://tanstack.com/query/latest/docs/framework/react/overview) integration to handle the response on the client side.
We're already doing it that way when fetching the auth session in the `User` component:
```tsx title="hello.tsx"
export const Hello = () => {
  const { data, isLoading } = useQuery({
    queryKey: [Message.HELLO],
    queryFn: () => sendMessage(Message.HELLO, "Hello, world!"),
  });

  if (isLoading) {
    return <div>Loading...</div>;
  }

  /* do something with the data... */
  return <div>{data}</div>;
};
```
---
url: /docs/extension/structure/overview
title: Overview
description: Learn about the structure of the extension app.
---
Every browser extension is different - it can include any combination of these parts and omit the ones that are not needed.
TurboStarter ships with all the things you need to start developing your own extension including:
* **Popup window** - a small window that appears when the user clicks the extension icon.
* **Options page** - a page that appears when user enters extension settings.
* **Side panel** - a panel that appears when the user opens the browser's side panel.
* **New tab page** - a page that appears when the user opens a new tab.
* **Devtools page** - a page that appears when the user opens the browser's devtools.
* **Tab pages** - custom pages shipped with the extension.
* **Content scripts** - injected scripts that run in the browser page.
* **Background scripts** - scripts that run in the background.
* **Message passing** - a way to communicate between different parts of the extension.
* **Storage** - a way to store data in the extension.
All the entrypoints are defined in the `apps/extension/src/app` directory (it's similar to file-based routing in Next.js and Expo).
This directory acts as a source for the WXT framework, which is used to build the extension. It has the following structure:
By structuring it this way, we can easily add new entrypoints in the future and extend each part of the extension independently.
We'll go through each part and explain its purpose - check the following sections for more details:
---
url: /docs/extension/structure/pages
title: Pages
description: Get started with your extension's pages.
---
Extension pages are built-in pages recognized by the browser. They include the extension's popup, options, sidepanel and newtab pages.
As WXT is based on Vite, it has very powerful [HMR support](https://vite.dev/guide/features#hot-module-replacement). This means that you don't need to refresh the extension manually when you make changes to the code.
## Popup
The popup page is a small dialog window that opens when a user clicks on the extension's icon in the browser toolbar. It is the most common type of extension page.

## Options
The options page is meant to be a dedicated place for the extension's settings and configuration.

## Devtools
The devtools page is a custom page (including panels) that opens when a user opens the extension's devtools panel.

## New tab
The new tab page is a custom page that opens when a user opens a new tab in the browser.

## Side panel
The side panel is a custom page that opens in the browser's side panel, alongside the main page content.

## Tabs
Unlike traditional extension pages, tab (unlisted) pages are just regular web pages shipped with your extension bundle. Extensions generally redirect to or open these pages programmatically, but you can link to them as well.
They can be useful in the following cases:
* when you want to show a page when the user first installs your extension
* when you want to have dedicated pages for authentication
* when you need more advanced routing setup

Your tab page will be available under the `/tabs` path in the extension bundle. It will be accessible from the browser under the URL:
```
chrome-extension://<extension-id>/tabs/your-tab-page.html
```
---
url: /docs/extension/structure/storage
title: Storage
description: Learn how to store data in your extension.
---
TurboStarter leverages the `wxt/storage` library to handle persistent storage for your extension. It's a utility library that abstracts the persistent storage API available to browser extensions.
It falls back to localStorage when the extension storage API is unavailable, allowing for state sync between extension pages, content scripts, background service workers and web pages.
To use the `wxt/storage` API, the "storage" permission **must** be added to the manifest:
```ts title="wxt.config.ts"
export default defineConfig({
  manifest: {
    permissions: ["storage"],
  },
});
```
## Storing data
The base Storage API is designed to be easy to use. It is usable in every extension runtime such as background service workers, content scripts and extension pages.
TurboStarter ships with predefined storage used to handle [theming](/docs/extension/customization/styling) in your extension, but you can create your own storage as well.
All storage-related methods and types are located in the `lib/storage` directory.
```ts title="lib/storage/index.ts"
export const StorageKey = {
  THEME: "local:theme",
} as const;

export type StorageKey = (typeof StorageKey)[keyof typeof StorageKey];
```
Then, to make it available across your extension, we set it up and provide default values:
```ts title="lib/storage/index.ts"
import { storage as browserStorage } from "wxt/storage";

import { appConfig } from "~/config/app";
import type { ThemeConfig } from "@workspace/ui";

const storage = {
  [StorageKey.THEME]: browserStorage.defineItem<ThemeConfig>(StorageKey.THEME, {
    fallback: appConfig.theme,
  }),
} as const;
```
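Outside of React (e.g. in the background worker or a content script), a defined item can be read, written, and watched directly - a sketch assuming the `storage` map above is exported from the module:

```ts
const themeItem = storage[StorageKey.THEME];

// Read the current value (resolves to `appConfig.theme` until something is stored).
const theme = await themeItem.getValue();

// Persist a new value - every extension context sees the update.
await themeItem.setValue(theme);

// React to changes made from other contexts.
const unwatch = themeItem.watch((newTheme) => {
  console.log("Theme changed:", newTheme);
});
```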
To learn more about customizing your storage, syncing state, or setting up automatic backups, please refer to the [official documentation](https://wxt.dev/storage.html).
## Consuming storage
To consume storage in your extension, you can use the `useStorage` React hook that is automatically provided to every part of the extension. The hook API is designed to streamline the state-syncing workflow between the different pieces of an extension.
Here is an example of how to consume our theme storage in the `Layout` component:
```tsx title="modules/common/layout/layout.tsx"
import { StorageKey, useStorage } from "~/lib/storage";
export const Layout = ({ children }: { children: React.ReactNode }) => {
  const { data } = useStorage(StorageKey.THEME);

  return (
    // `data` holds the persisted theme config - apply it to your markup as needed
    <div>{children}</div>
  );
};
```
Congrats! You've just learned how to persist and consume global data in your extension 🎉
For more advanced use cases, please refer to the [official documentation](https://wxt.dev/storage.html).
### Usage with Firefox
To use the storage API on Firefox during development, you need to add an addon ID to your manifest; otherwise, you will get this error:
> Error: The storage API will not work with a temporary addon ID. Please add an explicit addon ID to your manifest. For more information see [https://mzl.la/3lPk1aE](https://mzl.la/3lPk1aE)
To add an addon ID to your manifest, add this to your `wxt.config.ts`:
```ts title="wxt.config.ts"
export default defineConfig({
  manifest: {
    browser_specific_settings: {
      gecko: {
        id: "your-id@example.com",
      },
    },
  },
});
```
During development, you may use any ID. If you have published your extension, you need to use the ID assigned by [Firefox Add-ons](https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons).
---
url: /docs/extension/tests/e2e
title: E2E tests
description: Simulate real user scenarios across the entire stack with automated end-to-end test tools and examples.
---
End-to-end (E2E) tests will be available soon, allowing you to automate testing of real user flows and interactions across your application.
Stay tuned for updates as we roll out robust E2E testing resources and examples.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
---
url: /docs/extension/tests/unit
title: Unit tests
description: Write and run fast unit tests for individual functions and components with instant feedback.
---
Unit tests are a type of automated test where individual units or components are tested. The "unit" in "unit test" refers to the smallest testable parts of an application. These tests are designed to verify that each unit of code performs as expected.
TurboStarter uses [Vitest](https://vitest.dev) as the unit testing framework. It's a blazing-fast test runner built on top of [Vite](https://vitejs.dev), designed for modern JavaScript and TypeScript projects.
If you've used [Jest](https://jestjs.io) before, you already know Vitest - it shares the same API. But Vitest is built for speed: native TypeScript support without transpilation, parallel test execution, and a smart watch mode that only re-runs tests affected by your changes.
It comes with everything you need out of the box - code coverage, snapshot testing, mocking, and a slick UI for debugging. Fast feedback, zero configuration.
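For example, a test file could look like this - a minimal sketch where `formatPrice` is a hypothetical helper, not part of the starter:

```ts
import { describe, expect, it } from "vitest";

// Hypothetical unit under test: a small, pure formatting helper.
const formatPrice = (cents: number) => `$${(cents / 100).toFixed(2)}`;

describe("formatPrice", () => {
  it("formats whole dollars", () => {
    expect(formatPrice(500)).toBe("$5.00");
  });

  it("keeps fractional cents", () => {
    expect(formatPrice(199)).toBe("$1.99");
  });
});
```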
## Why write unit tests?
Unit tests give you **fast, focused feedback** on small pieces of your code - individual functions, hooks, or components. Instead of debugging an entire page or flow, you can verify just the logic you care about in isolation.
They also act as **living documentation**: a good test tells you how a function is supposed to behave, which edge cases are important, and what assumptions the code makes. This makes it much easier to safely refactor or extend features later.
In TurboStarter, unit tests are designed to be **cheap and quick to run**, so you can keep Vitest running in watch mode while you code. Every change you make is immediately checked, helping you catch regressions before they ever reach integration or end‑to‑end tests.
## Configuration
TurboStarter configures Vitest to be **as simple as possible**, while still taking advantage of [Turborepo's caching](https://turborepo.com/docs/crafting-your-repository/caching) and Vitest's [Test Projects](https://vitest.dev/guide/projects).
```ts title="vitest.config.ts"
import { mergeConfig } from "vitest/config";
import baseConfig from "@workspace/vitest-config/base";
export default mergeConfig(baseConfig, {
  test: {
    /* your extended test configuration here */
  },
});
```
* **Per-package tests**: each package that has unit tests defines its own `test` script. This keeps the configuration close to the code and makes it easy to add tests to any workspace.
* **Turbo tasks for CI**: the root `test` task (`pnpm test`) uses `turbo run test` to execute all package-level test scripts with smart caching, which is ideal for CI pipelines where you want to avoid re-running unchanged tests.
* **Vitest Test Projects for local dev**: a root Vitest configuration uses [Test Projects](https://vitest.dev/guide/projects) to run all unit test suites from a single command, which is perfect for local development when you want fast feedback across the whole monorepo.
This **hybrid setup** combines Turborepo and Vitest Projects in a way that fits TurboStarter's principles: cached, package-aware runs in CI, and a single, unified Vitest entry point for local development.
You can read more about this setup in the official documentation guides listed below.
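The root-level entry point can stay tiny - an illustrative sketch; the actual glob patterns depend on where your packages live:

```ts
// vitest.config.ts (repo root)
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    // Each match becomes a separate Vitest Test Project.
    projects: ["packages/*", "apps/*"],
  },
});
```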
## Running tests
There are a few different ways to run unit tests, depending on what you're doing:
* **CI / full test run** - at the root of the repo:
```bash
pnpm test
```
This runs `turbo run test`, which executes all `test` scripts in packages that define them, with Turborepo handling caching so unchanged packages are skipped. This is what you should use in your CI/CD pipeline.
* **One-off local run with Vitest Projects**:
```bash
pnpm test:projects
```
This uses Vitest [Test Projects](https://vitest.dev/guide/projects) to run all configured unit test suites from a single command, which is great when you want to quickly validate the whole monorepo locally.
* **Watch mode during development**:
```bash
pnpm test:projects:watch
```
This starts Vitest in watch mode across all Test Projects. As you edit files, only the affected tests are re-run, giving you fast feedback while you work.
## Code coverage
Unit test coverage helps you understand **how much** of your code is being tested. While it can't guarantee bug-free code, it shines a light on untested paths that could hide issues or regressions.
To generate a code coverage report for all unit tests, run:
```bash
pnpm turbo test:coverage
```
This command runs the coverage task across all relevant packages (using Turborepo) and collects the results into a single coverage output.
To open the coverage report in your browser:
```bash
pnpm turbo test:coverage:view
```
This will build the HTML report and launch it using your default browser, so you can explore which files and branches are covered.
You can also store the generated coverage report as a [GitHub Actions artifact](https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts) during your CI/CD pipeline, just add the following steps to your workflow job:
```yaml title=".github/workflows/ci.yml"
# your workflow job configuration here
- name: 📊 Generate coverage
  run: pnpm turbo test:coverage
- name: 🗃️ Archive coverage report
  uses: actions/upload-artifact@v5
  with:
    name: coverage-${{ github.sha }}
    path: tooling/vitest/coverage/report
```
This will generate a test coverage report and upload it as an artifact, so you can access it from the GitHub Actions tab for later inspection.
A high coverage percentage means your tests execute most lines and branches - but the quality and relevance of your tests matter more than the raw number. Use coverage reports to spot gaps and guide improvements, not as the sole metric of test health.

## Best practices
Unit tests should work **for you**, not the other way around. Focus on writing tests that make it easier to change code with confidence, not on satisfying arbitrary rules or reaching a magic number in a dashboard.
Code coverage is a **useful metric**, but it **SHOULD NOT** be the goal. It's better to have a smaller set of high‑value tests that cover critical paths and edge cases than a huge suite of fragile tests that are hard to maintain.
When in doubt, ask: *“Does this test give **me** confidence that I can change this code without breaking users?”* If the answer is no, refactor or remove it.
Finally, keep unit tests focused on **small, isolated pieces of logic**. More advanced flows — like multi-step user journeys, cross-service interactions, or full-page behavior — are better covered by [end-to-end (E2E) tests](/docs/web/tests/e2e), where you can verify the system as a whole.
---
url: /docs/extension/troubleshooting/installation
title: Installation
description: Find answers to common extension installation issues.
---
## Cannot clone the repository
Issues related to cloning the repository are usually related to a Git misconfiguration on your local machine. The commands displayed in this guide use SSH: these will work only if you have set up your SSH keys in GitHub.
If you run into issues, [please make sure you follow this guide to set up your SSH key in GitHub.](https://docs.github.com/en/authentication/connecting-to-github-with-ssh)
If this also fails, please use HTTPS instead. You will be able to see the commands on the repository's GitHub page under the "Clone" dropdown.
Please also make sure that the account that accepted the invite to TurboStarter, and the locally connected account are the same.
## Local database doesn't start
If you cannot run the local database container, it's likely you have not started [Docker](https://docs.docker.com/get-docker/) locally. Our local database requires Docker to be installed and running.
Please make sure you have installed Docker (or compatible software such as [Colima](https://github.com/abiosoft/colima) or [Orbstack](https://github.com/orbstack/orbstack)) and that it is running on your local machine.
Also, make sure that you have enough [memory and CPU allocated](https://docs.docker.com/engine/containers/resource_constraints/) to your Docker instance.
## Permissions issues
If some feature of your extension is not working, it's possible that you're missing a permission in the manifest config.
Make sure to check the [permissions](/docs/extension/configuration/manifest#overriding-manifest) section in the manifest config file.
## I don't see my translations
If you don't see your translations appearing in the application, there are a few common causes:
1. Check that your translation `.json` files are properly formatted and located in the correct directory
2. Verify that the language codes in your configuration match your translation files
3. Enable debug mode (`debug: true`) in your i18next configuration to see detailed logs
[Read more about configuration for translations](/docs/extension/internationalization#configuration)
## "Module not found" error
This issue is mostly related to either a dependency installed in the wrong package or issues with the file system.
The most common cause is incorrect dependency installation. Here's how to fix it:
1. Clean the workspace:
```bash
pnpm clean
```
2. Reinstall the dependencies:
```bash
pnpm i
```
If you're adding new dependencies, make sure to install them in the correct package:
```bash
# For main app dependencies
pnpm install --filter extension my-package
# For a specific package
pnpm install --filter @workspace/ui my-package
```
If the issue persists, please check the file system for any issues.
### Windows OneDrive
OneDrive can cause file system issues with Node.js projects due to its file syncing behavior. If you're using Windows with OneDrive, you have two options to resolve this:
1. Move your project to a location outside of OneDrive-synced folders (recommended)
2. Disable OneDrive sync specifically for your development folder
This prevents file watching and symlink issues that can occur when OneDrive tries to sync Node.js project files.
---
url: /docs/extension/troubleshooting/publishing
title: Publishing
description: Find answers to common publishing issues.
---
## My extension submission was rejected
If your extension submission was rejected, you probably got an email with the reason. You'll need to fix the issues and upload a new build of your extension to the store and send it for review again.
Make sure to follow the [guidelines](/docs/extension/marketing) when submitting your extension to ensure that everything is set up correctly.
## Version number mismatch
If you get version number conflicts when submitting:
1. Ensure your `manifest.json` version matches what's in the store
2. Increment the version number appropriately for each new submission
3. Make sure the version follows semantic versioning (e.g., `1.0.1`)
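To illustrate the last two points, a patch bump only increments the final segment of a `major.minor.patch` version - a hypothetical helper, not part of the starter:

```ts
// Validate a `major.minor.patch` string and bump its patch segment.
const bumpPatch = (version: string): string => {
  const match = /^(\d+)\.(\d+)\.(\d+)$/.exec(version);

  if (!match) {
    throw new Error(`Not a valid semantic version: ${version}`);
  }

  const [, major, minor, patch] = match;
  return `${major}.${minor}.${Number(patch) + 1}`;
};

console.log(bumpPatch("1.0.0")); // "1.0.1"
```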
## Missing permissions in manifest
If your extension is rejected due to permission issues:
1. Review the permissions declared in your `manifest.json`
2. Ensure all permissions are properly justified in your submission
3. Remove any unused permissions that aren't essential
4. Consider using optional permissions where possible
[Learn more about permissions](/docs/extension/configuration/manifest#permissions)
## Content Security Policy (CSP) violations
If your extension is rejected due to CSP issues:
1. Check your manifest's `content_security_policy` field
2. Ensure all external resources are properly whitelisted
3. Remove any unsafe inline scripts or eval usage
4. Use more secure alternatives like `browser.scripting.executeScript`
## My extension crashes on production build
If the extension works during development but crashes after publishing or when loaded unpacked in production mode, check these common causes:
1. **Uncaught runtime errors** in the background service worker or content scripts. Open `chrome://extensions` (or `about:debugging` in Firefox) → enable Developer mode → Inspect the service worker/content script and check the console for stack traces.
2. **Missing permissions or host permissions** causing APIs to throw (e.g., network calls, tabs access). Ensure required `permissions` and `host_permissions` are declared in `manifest.json`.
3. **CSP blocking resources** (inline scripts/styles, remote fonts, or endpoints). Verify `content_security_policy` and update code to avoid unsafe patterns.
4. **Missing assets or incorrect paths** referenced in `manifest.json` (`icons`, `web_accessible_resources`, `action.default_popup`, etc.). Confirm files exist in the final build output and paths match.
5. **Build-time variables not resolved**. If you rely on environment variables, ensure they’re inlined at build time or have safe fallbacks at runtime. Example:
```js
const apiUrl = import.meta.env.VITE_SITE_URL ?? "https://api.example.com";
```
6. **Module format or bundler config issues** (MV3 service worker must be ESM if `type: 'module'`). Align bundler output with your manifest expectations and rebuild.
Try this:
1. Reproduce with a production bundle locally and load it as an unpacked extension; inspect background and content script logs for errors.
2. Validate `manifest.json` and ensure all referenced files are present in the build output.
3. Temporarily relax CSP locally to confirm whether CSP is the cause; then apply a compliant fix (don’t ship relaxed CSP).
4. Add fallbacks for any build-time variables and rebuild.
---
url: /docs/mobile/ai
title: AI
description: Learn how to use AI integration in your mobile app.
---
TurboStarter includes a set of AI rules, skills, subagents, and commands for popular AI editors and tools - so the AI follows this repo's conventions and produces more consistent changes.
See [AI-assisted development](/docs/mobile/installation/ai-development) to set it up.
AI integration on [web](/docs/web/ai/overview), [extension](/docs/extension/ai), and mobile uses the same battle-tested [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction), so the overall approach is similar across platforms.
In this section, we'll focus on consuming AI responses in the mobile app. For server-side implementation details, refer to the [web documentation](/docs/web/ai/overview).
## Features
The most common AI integration features are also supported in the mobile app:
* **Chat**: Build chat interfaces inside native mobile apps.
* **Streaming**: Receive AI responses as soon as the model starts generating them, without waiting for the full response.
* **Image generation**: Generate images based on a given prompt.
You can easily compose your application using these building blocks or extend them to suit your specific needs.
## Usage
AI integration in the mobile app works the same way as in the [web app](/docs/web/ai/configuration#client-side) and the [browser extension](/docs/extension/ai#server--client). We use the same [API endpoint](/docs/web/ai/configuration#api-endpoint), and since TurboStarter ships with built-in streaming support on mobile, we can display answers incrementally as they're generated.
```tsx title="ai.tsx"
import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";
import { Text, View } from "react-native";
const AI = () => {
  const { messages } = useChat({
    transport: new DefaultChatTransport({
      api: "/api/ai/chat",
    }),
  });

  return (
    <View>
      {messages.map((message) => (
        <View key={message.id}>
          {message.parts.map((part, i) => {
            switch (part.type) {
              case "text":
                return <Text key={i}>{part.text}</Text>;
            }
          })}
        </View>
      ))}
    </View>
  );
};

export default AI;
```
By leveraging this integration, we can easily manage the state of the AI request and update the UI as soon as the response is ready.
TurboStarter ships with a ready-to-use implementation of AI chat, allowing you to see this solution in action. Feel free to reuse or modify it according to your needs.
---
url: /docs/mobile/analytics/configuration
title: Configuration
description: Learn how to configure mobile analytics in TurboStarter.
---
The `@workspace/analytics-mobile` package offers a streamlined and flexible approach to tracking events in your TurboStarter mobile app using various analytics providers. It abstracts the complexities of different analytics services and provides a consistent interface for event tracking.
In this section, we'll guide you through the configuration process for each supported provider.
Note that the configuration is validated against a schema, so you'll see error messages in the console if anything is misconfigured.
## Permissions
First and foremost, to start tracking any metrics from your app (and to do so legally), you need to ask your users for permission. It's [required](https://support.apple.com/en-us/102420), and you're not allowed to collect any data without it.
To make this process as simple as possible, TurboStarter comes with a `useTrackingPermissions` hook that you can use to access the user's consent status. It will handle asking for permission automatically as well as process updates made through the general phone settings.
```tsx
import { useTrackingPermissions } from "@workspace/analytics-mobile";

export const MyComponent = () => {
  const granted = useTrackingPermissions();

  if (granted) {
    // Start tracking
  } else {
    // Disable tracking
  }
};
```
Also, for Apple, you must declare the tracking justification via [App Tracking Transparency](https://developer.apple.com/documentation/apptrackingtransparency). It comes pre-configured in TurboStarter via the [Expo Config Plugin](https://docs.expo.dev/versions/latest/config/app/#plugins), where you can provide a custom message to the user:
```ts title="app.config.ts"
import type { ConfigContext, ExpoConfig } from "expo/config";

export default ({ config }: ConfigContext): ExpoConfig => ({
  ...config,
  plugins: [
    [
      "expo-tracking-transparency",
      {
        /* 🍎 Describe why you need access to the user's data */
        userTrackingPermission:
          "This identifier will be used to deliver personalized ads to you.",
      },
    ],
  ],
});
```
This way, we ensure that the user is aware of the data we collect and can make an informed decision. If you don't provide this information, your app is likely to be rejected by Apple and/or Google during the [review process](/docs/mobile/publishing/checklist#send-to-review).
## Providers
TurboStarter supports multiple analytics providers, each with its own unique configuration. Below, you'll find detailed information on how to set up and use each supported provider. Choose the one that best suits your needs and follow the instructions in the respective accordion section.
To use Google Analytics as your analytics provider, you need to [configure and link a Firebase project to your app](/docs/mobile/installation/firebase).
After that, you can proceed with the installation of the analytics package:
```bash
pnpm add --filter @workspace/analytics-mobile @react-native-firebase/analytics
```
Also, make sure to activate the Google Analytics provider as your analytics provider by updating the exports in:
```ts title="index.ts"
// [!code word:google-analytics]
export * from "./google-analytics";
export * from "./google-analytics/env";
```
To customize the provider, you can find its definition in the `packages/analytics/mobile/src/providers/google-analytics` directory.
For more information, please refer to the [React Native Firebase documentation](https://rnfirebase.io/analytics/usage).

PostHog is also one of the pre-configured providers for [monitoring](/docs/mobile/monitoring/overview) in TurboStarter mobile apps. You can learn more about it [here](/docs/mobile/monitoring/posthog).
To use PostHog as your analytics provider, you need a PostHog instance. You can use the [Cloud](https://app.posthog.com/signup) instance by creating an account, or [self-host](https://posthog.com/docs/self-host) it.
Then, create a project and, based on your [project settings](https://app.posthog.com/project/settings), fill in the following environment variables in your `.env.local` file in the `apps/mobile` directory and in your `eas.json` file:
```dotenv
EXPO_PUBLIC_POSTHOG_KEY="your-posthog-api-key"
EXPO_PUBLIC_POSTHOG_HOST="your-posthog-instance-host"
```
Also, make sure to activate PostHog as your analytics provider by updating the exports in:
```ts title="index.ts"
// [!code word:posthog]
export * from "./posthog";
export * from "./posthog/env";
```
To customize the provider, you can find its definition in the `packages/analytics/mobile/src/providers/posthog` directory.
For more information, please refer to the [PostHog documentation](https://posthog.com/docs).

To use Mixpanel as your analytics provider, you need to [create an account](https://mixpanel.com/) and [obtain your project token](https://help.mixpanel.com/hc/en-us/articles/115004502806-Find-Project-Token).
Then, set it as an environment variable in your `.env.local` file in the `apps/mobile` directory and your `eas.json` file:
```dotenv
EXPO_PUBLIC_MIXPANEL_TOKEN="your-project-token"
```
Also, make sure to activate Mixpanel as your analytics provider by updating the exports in:
```ts title="index.ts"
// [!code word:mixpanel]
export * from "./mixpanel";
export * from "./mixpanel/env";
```
To customize the provider, you can find its definition in the `packages/analytics/mobile/src/providers/mixpanel` directory.
For more information, please refer to the [Mixpanel documentation](https://docs.mixpanel.com/).
## Context
To enable tracking events, capturing screen views and other analytics features, you need to wrap your app with the `Provider` component that's implemented by every provider and available through the `@workspace/analytics-mobile` package:
```tsx title="providers.tsx"
// [!code word:AnalyticsProvider]
import { memo } from "react";

import { Provider as AnalyticsProvider } from "@workspace/analytics-mobile";

interface ProvidersProps {
  readonly children: React.ReactNode;
}

export const Providers = memo(({ children }: ProvidersProps) => {
  return <AnalyticsProvider>{children}</AnalyticsProvider>;
});

Providers.displayName = "Providers";
```
By implementing this setup, you ensure that all analytics events are properly tracked from your mobile app code. This configuration allows you to safely utilize the [Analytics API](/docs/mobile/analytics/tracking) within your components, enabling comprehensive event tracking and data collection.
---
url: /docs/mobile/analytics/overview
title: Overview
description: Get started with mobile analytics in TurboStarter.
---
When it comes to mobile app analytics, we can distinguish between two types:
* **Store listing analytics**: Used to track the performance of your mobile app's store listing (e.g., how many people have viewed your app in the store or how many have installed it).
* **In-app analytics**: Tracks user actions within your mobile app (e.g., how many users entered a specific screen, how many users clicked on a specific button, etc.).
The `@workspace/analytics-mobile` package provides a set of tools to easily implement both types of analytics in your mobile app.
## Store listing analytics
Interpreting your mobile app's store listing metrics can help you evaluate how changes to your app and store listing affect conversion rates. For example, you can identify keywords that users are searching for to optimize your app's store listing.
While each store implements a different set of metrics, there are some common ones you should be aware of:
* **Downloads**: The total number of times your app was downloaded, including both first-time downloads and re-downloads.
* **Sales**: The total number of pre-orders, first-time app downloads, in-app purchases, and their associated sales.
* **Usage**: A variety of user engagement metrics, such as installations, sessions, crashes, and active devices.
To learn more about these or other metrics (e.g., how to create custom reports or KPIs), please refer to the official documentation of the store you're publishing to:
## In-app analytics
TurboStarter comes with built-in analytics support for multiple providers as well as a unified API for tracking events. This API enables you to easily and consistently track user behavior and app usage across your mobile application.
To learn more about each provider and how to configure them, see their respective sections:
All configuration and setup is built-in with a unified API, allowing you to switch between providers by simply changing the exports. You can even introduce your own provider without breaking any tracking-related logic.
In the following sections, we'll cover how to set up each provider and how to track events in your application.
---
url: /docs/mobile/analytics/tracking
title: Tracking events
description: Learn how to track events in your TurboStarter mobile app.
---
The strategy for tracking events that every provider has to implement is extremely simple:
```ts
export type AllowedPropertyValues = string | number | boolean;
type TrackFunction = (
  event: string,
  data?: Record<string, AllowedPropertyValues>,
) => void;
export interface AnalyticsProviderStrategy {
Provider: ({ children }: { children: React.ReactNode }) => React.ReactNode;
track: TrackFunction;
}
```
You don't need to worry much about this implementation, as all the providers are already configured for you. However, it's useful to be aware of this structure if you plan to add your own custom provider.
As shown above, each provider must supply two key elements:
1. `Provider` - a component that [wraps your app](/docs/mobile/analytics/configuration#context).
2. `track` - a function responsible for sending event data to the provider.
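If you plan to add a custom provider, a minimal sketch could look like the following. It's illustrative only: the `ReactNode` stand-in, the `events` buffer, and the console logger are assumptions, not part of the kit; a real provider would forward events to its SDK.

```typescript
// Stand-in for React.ReactNode so this sketch stays dependency-free.
type ReactNode = unknown;

type AllowedPropertyValues = string | number | boolean;

// The strategy shape every provider implements (see above).
interface AnalyticsProviderStrategy {
  Provider: ({ children }: { children: ReactNode }) => ReactNode;
  track: (event: string, data?: Record<string, AllowedPropertyValues>) => void;
}

// Recorded events, exported only so the sketch is easy to inspect.
export const events: {
  event: string;
  data?: Record<string, AllowedPropertyValues>;
}[] = [];

// A hypothetical console-logging provider.
export const consoleAnalytics: AnalyticsProviderStrategy = {
  // No SDK setup is needed, so the provider just renders its children.
  Provider: ({ children }) => children,
  track: (event, data) => {
    events.push({ event, data });
    console.log(`[analytics] ${event}`, data ?? {});
  },
};
```

Swapping such an object in (and exporting it from the package index, as shown for the built-in providers) is all the unified API needs.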
To track an event, you simply need to invoke the `track` method, passing the event name and an optional data object:
```tsx
import { Pressable, Text } from "react-native";

import { track } from "@workspace/analytics-mobile";

export const MyComponent = () => {
  return (
    <Pressable onPress={() => track("button.click", { country: "US" })}>
      <Text>Track event</Text>
    </Pressable>
  );
};
```
In most mobile apps, you'll only ever need the `track` method. You can call it anywhere in your app code (such as in response to user interactions, navigation events, or custom actions) with an event name and optional properties.
## Identifying users
Linking events to specific users enables you to build a full picture of how they're using your product across different sessions, devices, and platforms.
For identification purposes, we're extending the strategy with the `identify` and `reset` methods. They are optional and only needed if you want to identify users in your app and associate their actions with a specific user ID.
```ts
type IdentifyFunction = (
  userId: string,
  traits?: Record<string, AllowedPropertyValues>,
) => void;
export interface AnalyticsProviderClientStrategy {
identify: IdentifyFunction;
reset: () => void;
}
```
To identify users, call the `identify` method, passing the user's ID and an optional traits object:
```tsx
import { identify } from "@workspace/analytics-mobile";
identify("user-123", { name: "John Doe" });
```
This will associate all future events with the user's ID, allowing you to track user behavior and gain valuable insights into your application's usage patterns.
The `identify` method is configured out-of-the-box to react to changes in the user's authentication state.
When the user is authenticated, the `identify` method will be called with the user's ID and the user's traits. When the user is logged out, the `reset` method will be called to clear the existing user identification.
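That behavior can be sketched roughly as follows. The names here are hypothetical (the real wiring ships with the kit), and the analytics client is injected so the sketch stays self-contained:

```typescript
type Traits = Record<string, string | number | boolean>;

// Shape of the identification strategy described above.
interface AnalyticsClient {
  identify: (userId: string, traits?: Traits) => void;
  reset: () => void;
}

type Session = { user: { id: string; name: string } } | null;

// Identify the user on sign-in; clear the identification on sign-out.
export const syncAnalyticsIdentity = (
  analytics: AnalyticsClient,
  session: Session,
) => {
  if (session) {
    analytics.identify(session.user.id, { name: session.user.name });
  } else {
    analytics.reset();
  }
};
```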
Congratulations! You've now mastered event tracking in your TurboStarter mobile app. With this knowledge, you're well-equipped to analyze user behaviors and gain valuable insights into your application's usage patterns. Happy analyzing!
---
url: /docs/mobile/api/client
title: Using API client
description: How to use API client to interact with the API.
---
In mobile app code, you can only access the API client from the **client side**.
When you create a new component or screen and want to fetch some data, you can use the API client to do so.
## Creating a client
We create the client-side API client in the `apps/mobile/src/lib/api/index.tsx` file. It's a thin wrapper around [@tanstack/react-query](https://tanstack.com/query/latest/docs/framework/react/overview) that fetches or mutates data from the API.
It also requires wrapping your app in a `QueryClientProvider` component to provide the API client to the rest of the app:
```tsx title="_layout.tsx"
import { QueryClientProvider } from "@tanstack/react-query";

// the query client created in apps/mobile/src/lib/api (import path assumed)
import { queryClient } from "~/lib/api";

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <QueryClientProvider client={queryClient}>
      {children}
    </QueryClientProvider>
  );
}
```
Inside the `apps/mobile/src/lib/api/utils.ts` file, we call a function to get the base URL of your API. Make sure it's set correctly (especially in production) and that it matches your web API endpoint.
```tsx title="utils.ts"
import Constants from "expo-constants";

// import path assumed, adjust to your env module
import { env } from "~/lib/env";

const getBaseUrl = () => {
  /**
   * Gets the IP address of your host machine. If it cannot automatically find it,
   * you'll have to manually set it. NOTE: Port 3000 should work for most, but confirm
   * you don't have anything else running on it, or you'd have to change it.
   *
   * **NOTE**: This is only for development. In production, you'll want to set the
   * baseUrl to your production API URL.
   */
  const debuggerHost = Constants.expoConfig?.hostUri;
  const localhost = debuggerHost?.split(":")[0];

  if (!localhost) {
    console.warn("Failed to get localhost. Pointing to production server...");
    return env.EXPO_PUBLIC_SITE_URL;
  }

  return `http://${localhost}:3000`;
};
```
As you can see, we rely on your machine's IP address for local development (in case you want to open the app from another device) and on [environment variables](/docs/mobile/configuration/environment-variables) in production. There shouldn't be any issues with it, but just in case, it's good to know where to find it 😉
## Queries
Of course, everything comes already configured for you, so you just need to start using `api` in your components/screens.
For example, to fetch the list of posts you can use the `useQuery` hook:
```tsx title="app/(tabs)/tab-one.tsx"
import { useQuery } from "@tanstack/react-query";
import { Text } from "react-native";

import { api } from "~/lib/api";

export default function TabOneScreen() {
  const { data: posts, isLoading } = useQuery({
    queryKey: ["posts"],
    queryFn: async () => {
      const response = await api.posts.$get();

      if (!response.ok) {
        throw new Error("Failed to fetch posts!");
      }

      return response.json();
    },
  });

  if (isLoading) {
    return <Text>Loading...</Text>;
  }

  /* do something with the data... */
  return <Text>{JSON.stringify(posts)}</Text>;
}
```
It's using the `@tanstack/react-query` [useQuery API](https://tanstack.com/query/latest/docs/framework/react/reference/useQuery), so you shouldn't have any troubles with it.
## Mutations
If you want to perform a mutation in your mobile code, you can use the `useMutation` hook that comes straight from the integration with [Tanstack Query](https://tanstack.com/query):
```tsx title="form.tsx"
import { useMutation, useQueryClient } from "@tanstack/react-query";

import { api } from "~/lib/api";

export function CreatePost() {
  const queryClient = useQueryClient();

  const { mutate } = useMutation({
    mutationFn: async (post: PostInput) => {
      const response = await api.posts.$post(post);

      if (!response.ok) {
        throw new Error("Failed to create post!");
      }
    },
    onSuccess: () => {
      toast.success("Post created successfully!");
      queryClient.invalidateQueries({ queryKey: ["posts"] });
    },
  });

  // render your form here and call mutate(post) on submit
  return null;
}
```
Here, we're also invalidating the query after the mutation is successful. This is a very important step to make sure that the data is updated in the UI.
## Handling responses
As you can see in the examples above, the [Hono RPC](https://hono.dev/docs/guides/rpc) client returns a plain `Response` object, which you can use to get the data or handle errors. However, implementing this handling in every query or mutation can be tedious and will introduce unnecessary boilerplate in your codebase.
That's why we've developed the `handle` function that unwraps the response for you, handles errors, and returns the data in a consistent format. You can safely use it with any procedure from the API client:
```tsx
// [!code word:handle]
import { handle } from "@workspace/api/utils";
import { useQuery } from "@tanstack/react-query";
import { Text } from "react-native";

import { api } from "~/lib/api";

export default function TabOneScreen() {
  const { data: posts, isLoading } = useQuery({
    queryKey: ["posts"],
    queryFn: handle(api.posts.$get),
  });

  if (isLoading) {
    return <Text>Loading...</Text>;
  }

  /* do something with the data... */
  return <Text>{JSON.stringify(posts)}</Text>;
}
```
```tsx
// [!code word:handle]
import { handle } from "@workspace/api/utils";
import { useMutation, useQueryClient } from "@tanstack/react-query";

import { api } from "~/lib/api";

export default function CreatePost() {
  const queryClient = useQueryClient();

  const { mutate } = useMutation({
    mutationFn: handle(api.posts.$post),
    onSuccess: () => {
      toast.success("Post created successfully!");
      queryClient.invalidateQueries({ queryKey: ["posts"] });
    },
  });

  // render your form here and call mutate(post) on submit
  return null;
}
```
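For intuition, a stripped-down version of such a wrapper could look like this. It's a sketch, not the actual `@workspace/api/utils` implementation, so it's named `handleSketch` here:

```typescript
// Minimal shape of the responses returned by the RPC client.
interface ApiResponse<TData> {
  ok: boolean;
  status: number;
  json: () => Promise<TData>;
}

// Wraps an API procedure: throws on error responses, unwraps JSON on success.
export const handleSketch =
  <TArgs extends unknown[], TData>(
    procedure: (...args: TArgs) => Promise<ApiResponse<TData>>,
  ) =>
  async (...args: TArgs): Promise<TData> => {
    const response = await procedure(...args);

    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }

    return response.json();
  };
```

Because the wrapper returns a plain async function, it plugs directly into `queryFn` and `mutationFn` without extra glue code.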
With this approach, you can focus on the business logic instead of repeatedly writing code to handle API responses in your mobile components, making your app's codebase more readable and maintainable.
The same error handling and response unwrapping benefits apply whether you're building web, mobile, or extension interfaces - allowing you to keep your data fetching logic consistent across all platforms.
---
url: /docs/mobile/api/overview
title: Overview
description: Get started with the API.
---
To enable communication between your Expo app and the server in a production environment, the API **must** be deployed first. By default, it's hosted together with the [web app](/docs/web/api/overview), but you can also [deploy it separately](/docs/web/deployment/api).
TurboStarter is designed to be a scalable and production-ready full-stack starter kit. One of its core features is a dedicated and extensible API layer. To enable this in a type-safe way, we chose [Hono](https://hono.dev) as the API server and client library.
Hono is a small, simple, and ultrafast web framework that gives you a way to define your API endpoints with full type safety. It provides built-in middleware for common needs like validation, caching, and CORS.
It also includes an [RPC client](https://hono.dev/docs/guides/rpc) for making type-safe function calls from the frontend. Being edge-first, it's optimized for serverless environments and offers excellent performance.
All API endpoints and their resolvers live in the `packages/api/` package. Inside, the `modules` folder contains the API's feature modules. Each module has its own directory and exports its resolvers.
For each module, we create a separate Hono router and aggregate all sub-routers into one main router in the `packages/api/index.ts` file.
By default, the API is integrated with the [web app](/docs/web/api/overview) and exposed as a [Next.js route handler](https://nextjs.org/docs/app/getting-started/route-handlers):
```ts title="apps/web/src/app/api/[...route]/route.ts"
import { handle } from "hono/vercel";
import { appRouter } from "@workspace/api";
const handler = handle(appRouter);
export {
handler as GET,
handler as POST,
handler as OPTIONS,
handler as PUT,
handler as PATCH,
handler as DELETE,
handler as HEAD,
};
```
Learn more about how to use the API in your mobile app in the following sections:
---
url: /docs/mobile/auth/2fa
title: Two-Factor Authentication (2FA)
description: Add an extra layer of security with two-factor authentication in your mobile app.
---
TurboStarter uses [Better Auth's 2FA plugin](https://www.better-auth.com/docs/plugins/2fa) to provide multi-factor authentication (MFA) capabilities in your mobile app. Two-factor authentication adds an extra layer of security by requiring users to provide a second form of verification alongside their password.
## Available methods
TurboStarter supports multiple 2FA verification methods through Better Auth:
* **TOTP (Time-based One-Time Password)** - codes generated by authenticator apps
* **OTP (One-Time Password)** - codes sent via email or SMS
* **Backup codes** - single-use recovery codes for account recovery
You can use any TOTP-compatible authenticator app, such as:
* [Google Authenticator](https://support.google.com/accounts/answer/1066447)
* [Authy](https://authy.com/)
* [Microsoft Authenticator](https://www.microsoft.com/en-us/security/mobile-authenticator-app)
* [1Password](https://1password.com/features/authenticator/)
* [Bitwarden](https://bitwarden.com/help/authenticator-keys/)
## Enabling 2FA
### Enable in settings
Users enable two-factor authentication in their account security settings within the mobile app.

### Setup authenticator
A QR code is displayed in the mobile app for users to scan with their authenticator app. Users can also manually enter the setup key if needed.

### Verify setup
Users enter a verification code from their authenticator to confirm setup directly in the mobile app.
### Backup codes
Users receive single-use backup codes for account recovery, which can be saved or shared from the mobile app.

Recovery codes are essential for account recovery if users lose access to
their authenticator device. Make sure to educate users about safely storing
their backup codes, and consider providing options to save them to the device
or share them securely.
## Using 2FA
### Sign in normally
Users enter their email and password or use other authentication methods (biometric, social login) as usual in the mobile app.
### 2FA prompt
After successful password verification, users are prompted for their 2FA code in a native mobile interface.

### Enter verification code
Users input the 6-digit code from their authenticator app using the mobile keyboard.
### Access granted
Upon successful verification, users gain access to their account and are navigated to the main app screen.
### Trusted devices
Users can mark their mobile device as trusted during 2FA verification. Trusted devices won't require 2FA verification for 60 days, providing a balance between security and convenience. This is particularly useful for personal mobile devices.
## Mobile-specific considerations
### Biometric integration
On mobile devices, 2FA can be enhanced with biometric authentication (fingerprint, face recognition) for added security and convenience.
### App switching
The mobile app should handle switching between your app and authenticator apps seamlessly, maintaining the authentication state when users return.
### Offline support
Consider implementing offline backup code verification for scenarios where users may have limited connectivity.
### Push notifications
For OTP delivery via SMS or email, ensure your app handles incoming notifications gracefully during the authentication flow.
## Configuration
2FA is configured through Better Auth's plugin system. The plugin handles:
* Secure secret generation and storage
* QR code generation for authenticator setup
* TOTP code validation
* Backup code generation and management
* Trusted device management
* Mobile-specific session handling
For detailed implementation instructions, refer to the [Better Auth 2FA documentation](https://www.better-auth.com/docs/plugins/2fa).
---
url: /docs/mobile/auth/configuration
title: Configuration
description: Configure authentication for your application.
---
TurboStarter supports multiple authentication methods on mobile:
* **Password** - the traditional email/password method
* **Magic Link** - passwordless email link authentication
* **OTP** - one-time passwords sent to email or phone
* **Anonymous** - guest mode for unauthenticated users
* **OAuth** - OAuth providers; [Apple](https://www.better-auth.com/docs/authentication/apple), [Google](https://www.better-auth.com/docs/authentication/google), and [GitHub](https://www.better-auth.com/docs/authentication/github) are set up by default
All methods are enabled by default; you can enable, disable, or configure any of them to your needs.
You can mix and match these methods or add new ones - for example, password
and magic link at the same time - so users have flexibility in how they sign
in.
Authentication configuration can be customized through a simple configuration file. The following sections explain the available options and how to configure each authentication method based on your requirements.
## API
To enable a new authentication method or add a plugin, update the shared API configuration. See [web authentication configuration](/docs/web/auth/configuration) for details; the server setup is shared between web and mobile.
For mobile apps, we need to define an [authentication trusted origin](https://www.better-auth.com/docs/reference/security#trusted-origins) using a mobile app scheme instead of a classic URL.
App schemes (like `turbostarter://`) are used for [deep linking](https://docs.expo.dev/guides/linking/) users to specific screens in your app after authentication.
To find your app scheme, take a look at the `apps/mobile/app.config.ts` file and then add it to your auth server configuration:
```ts title="server.ts"
export const auth = betterAuth({
...
trustedOrigins: ["turbostarter://**"],
...
});
```
Adding your app scheme to trusted origins is required for security - it prevents CSRF and open redirects by allowing only requests from your app.
[Read more about auth security in Better Auth's documentation.](https://www.better-auth.com/docs/reference/security)
## UI
Separate configuration controls what is shown in the **UI**. It lives in `apps/mobile/config/auth.ts`.
```ts title="apps/mobile/config/auth.ts"
import { Platform } from "react-native";
import { authConfigSchema, type AuthConfig } from "@workspace/auth";
export const authConfig = authConfigSchema.parse({
providers: {
password: true,
emailOtp: false,
magicLink: false,
anonymous: true,
oAuth: [
Platform.select({
android: "google",
ios: "apple",
}),
"github",
],
},
}) satisfies AuthConfig;
```
The configuration is validated with a Zod schema, so invalid values surface as errors at startup.
**Avoid editing the config file directly.** Prefer environment variables to override the defaults.
For example, to switch from password to magic link, set:
```dotenv title=".env.local"
EXPO_PUBLIC_AUTH_PASSWORD=false
EXPO_PUBLIC_AUTH_MAGIC_LINK=true
```
To show third-party providers in the UI, add the provider to the `oAuth` array. By default, GitHub is shown on both platforms, with Google on Android and Apple on iOS:
```tsx title="apps/mobile/config/auth.ts"
providers: {
...
oAuth: [
Platform.select({
android: SocialProvider.GOOGLE,
ios: SocialProvider.APPLE,
}),
SocialProvider.GITHUB,
],
...
},
```
You can even display specific providers for specific platforms - for example, you can display Google authentication for Android and Apple authentication for iOS.
## Third-party providers
To enable third-party authentication providers, you'll need to:
1. Create an OAuth application in the provider’s developer console ([Apple](https://developer.apple.com/account/), [Google Cloud Console](https://console.cloud.google.com/), [GitHub](https://github.com/settings/developers), or another supported provider).
2. Set the matching environment variables in your TurboStarter API (shared with web).
Each provider needs its own credentials and environment variables. See the [Better Auth OAuth docs](https://better-auth.com/docs/concepts/oauth) for step-by-step setup per provider.
Make sure to set both development and production environment variables
appropriately. Your OAuth provider may require different callback URLs for
each environment.
---
url: /docs/mobile/auth/flow
title: User flow
description: Discover the authentication flow in TurboStarter.
---
TurboStarter ships with a fully functional authentication system. Most screens and components are preconfigured and easy to customize.
Here you will find a quick walkthrough of the authentication flow.
## Sign up
The sign-up screen is where users can create an account. They need to provide their email address and password.

Once successful, users are asked to confirm their email address. This is enabled by default and, for security reasons, cannot be disabled.
Make sure to configure the [email provider](/docs/web/emails/configuration) together with the [auth hooks](/docs/web/emails/sending#authentication-emails) to be able to send emails from your app.

## Sign in
The sign-in screen lets users log in with email and password, magic link (if enabled), OTP (if enabled), or third-party providers.

## Sign out
The sign-out button is in the user account settings.

## Forgot password
The forgot-password screen lets users request a reset. They enter their email and follow the instructions sent to them.
The reset-password screen is where users land from the password-reset email. They set a new password and confirm it.

## Two-factor authentication
Two-factor authentication adds a second step: users enter a code sent to their email or phone (or from an authenticator app) in addition to their password when signing in.

---
url: /docs/mobile/auth/oauth/apple
title: Apple
description: Configure "Sign in with Apple" for your mobile application.
---
**"Sign in with Apple"** provides a native, privacy-preserving SSO experience on iOS. Use the system Apple button and the Apple Authentication APIs to sign users in, then verify the identity token on your backend and create a session with your auth server.
Native Apple ID authentication is available on iOS only. You are advised to
present the official system button (or our custom component - also compliant!)
and follow [Apple's Human Interface
Guidelines](https://developer.apple.com/design/human-interface-guidelines/sign-in-with-apple)
for best practices.

## Why use native Apple ID authentication?
System sheet + official button, aligned with [Apple's Human Interface Guidelines](https://developer.apple.com/design/human-interface-guidelines/sign-in-with-apple) for trust and conversion.
Private relay email and limited data by design, ensuring your users' privacy is protected and compliant with App Store guidelines.
Fast, low-friction sign-in on iOS enabling your users to sign in without the need to remember or create additional passwords.
JWT verification on the server with [Better Auth](https://www.better-auth.com/docs/authentication/apple), keeping your users' credentials secure.
We exchange Apple credentials for an app session and persist it in the app.
## Requirements
* Enable the "Sign in with Apple" capability for your bundle identifier in the [Apple Developer Portal](https://developer.apple.com/account/resources/identifiers/list)
* Add the entitlement and build with [EAS](/docs/mobile/publishing/checklist) (or configure natively)
* Ensure your app's deep link scheme is added to the auth server's [trusted origins configuration](/docs/mobile/auth/configuration)
Check the [Better Auth documentation](https://www.better-auth.com/docs/authentication/apple) for more details on how to configure all the required keys and certificates.
## High-level flow
1. Check availability with `AppleAuthentication.isAvailableAsync()`.
2. Render the system `AppleAuthenticationButton` or custom TurboStarter component.
3. Call `AppleAuthentication.signInAsync()` requesting `FULL_NAME` and/or `EMAIL` as needed.
4. Send the returned `idToken` to the API powered by [Better Auth](https://www.better-auth.com/docs/authentication/apple) to verify it and establish a session.
5. Optionally track credential state with `AppleAuthentication.getCredentialStateAsync(user)`.
Always verify the JWT signature from `idToken` on your backend using Apple's
public keys before creating a session.
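The client side of steps 1-4 can be sketched as follows. The module interface mirrors `expo-apple-authentication`, but it's injected here (an assumption for illustration) so the flow stays self-contained and testable:

```typescript
// Minimal shape of the native module (mirrors expo-apple-authentication).
interface AppleAuthModule {
  isAvailableAsync: () => Promise<boolean>;
  signInAsync: (options: {
    requestedScopes: string[];
  }) => Promise<{ identityToken: string | null }>;
}

export const signInWithApple = async (appleAuth: AppleAuthModule) => {
  // Step 1: native Apple ID authentication is iOS-only.
  if (!(await appleAuth.isAvailableAsync())) {
    return null;
  }

  // Step 3: request only the scopes you need.
  const { identityToken } = await appleAuth.signInAsync({
    requestedScopes: ["FULL_NAME", "EMAIL"],
  });

  // Step 4: send the token to the Better Auth API, which verifies the JWT
  // against Apple's public keys before creating a session.
  return identityToken;
};
```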
For a more in-depth overview of Apple ID authentication, including implementation details, platform caveats, and advanced configuration, see the following resources:
---
url: /docs/mobile/auth/oauth/google
title: Google
description: Configure "Sign in with Google" for your mobile application.
---
**"Sign in with Google"** enables a fast account-chooser experience on mobile (especially on Android). Configure your platform credentials, prompt the native account picker, then exchange the returned token on your backend to create a session with your auth server.
On Android, Google Sign‑In uses [Google Identity
Services](https://developers.google.com/identity?hl=pl) and integrates with
the system account chooser. On iOS, the recommended Expo flow uses
[expo-auth-session](https://docs.expo.dev/versions/latest/sdk/auth-session/)
with Google for a native, web-based sign-in experience.

## Why use Google authentication?
Account picker and token storage integrated with the OS for speed and familiarity.
Android native chooser; iOS polished experience via Expo.
Tokens are verified server-side with [Better Auth](https://www.better-auth.com/docs/authentication/google) before a session is issued.
Reduce friction with one-tap sign-in and fewer passwords to remember.
Built on [Google Identity Services](https://developers.google.com/identity?hl=pl) and best-practice OAuth flows.
## Requirements
* Configure [Google Cloud OAuth Client IDs](https://react-native-google-signin.github.io/docs/setting-up/get-config-file) (Android package + SHA-1, iOS bundle ID) in the [Google Cloud Console](https://console.cloud.google.com/)
* Build with [EAS](/docs/mobile/publishing/checklist) to ensure native credentials are embedded correctly
* Add your app deep link scheme to the auth server's [trusted origins configuration](/docs/mobile/auth/configuration)
Check the [Better Auth documentation](https://www.better-auth.com/docs/authentication/google) and [`@react-native-google-signin/google-signin` documentation](https://react-native-google-signin.github.io) for steps to configure your server verification, client IDs and more.
## High-level flow
1. Configure Google OAuth Client IDs for Android and iOS in [Google Cloud Console](https://console.cloud.google.com/).
2. Initialize the Google auth request in your app and render a "Sign in with Google" button.
3. Prompt the account chooser; on success you receive an `idToken` and/or `accessToken`.
4. Send the tokens to the API powered by [Better Auth](https://www.better-auth.com/docs/authentication/google) to verify and establish a session.
5. Persist the session and proceed to the app.
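Steps 2-4 can be sketched as follows. The module interface loosely mirrors `@react-native-google-signin/google-signin`, but it's injected here (an assumption for illustration) so the flow stays self-contained:

```typescript
// Minimal shape of the native sign-in module (names assumed).
interface GoogleSignInModule {
  hasPlayServices: () => Promise<boolean>;
  signIn: () => Promise<{ idToken: string | null }>;
}

export const signInWithGoogle = async (googleSignIn: GoogleSignInModule) => {
  // On Android, Play Services must be available for the account chooser.
  if (!(await googleSignIn.hasPlayServices())) {
    throw new Error("Google Play Services unavailable");
  }

  // Step 3: prompt the native account chooser.
  const { idToken } = await googleSignIn.signIn();

  // Step 4: send the token to the Better Auth API for server-side
  // verification before a session is issued.
  return idToken;
};
```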
For a more in-depth overview of Google authentication, including implementation details, platform caveats, and advanced configuration, see the following resources:
---
url: /docs/mobile/auth/oauth
title: OAuth
description: Get started with social authentication.
---
Better Auth supports almost **30** (!) different [OAuth providers](https://www.better-auth.com/docs/concepts/oauth). They can be easily configured and enabled in the kit without any additional configuration needed.
TurboStarter provides you with all the configuration required to handle OAuth providers responses from your app:
* redirects
* middleware
* confirmation API routes
You just need to configure one of the below providers on their side and set correct credentials as environment variables in your TurboStarter app.

Third Party providers need to be configured, managed and enabled fully on the provider's side. TurboStarter just needs the correct credentials to be set as environment variables in your app and passed to the [authentication API configuration](/docs/web/auth/configuration#api).
To enable OAuth providers in your TurboStarter app, you need to:
1. Set up an OAuth application in the provider's developer console (like [Apple Developer Portal](https://developer.apple.com/account/), [Google Cloud Console](https://console.cloud.google.com/), [GitHub Developer Settings](https://github.com/settings/developers) or any other provider you want to use)
2. Configure the provider's credentials as environment variables in your app. For example, for Google OAuth:
```dotenv title="apps/web/.env.local"
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
```
Then, pass it to the authentication configuration in `packages/auth/src/server.ts`:
```ts title="server.ts"
export const auth = betterAuth({
  ...
  socialProviders: {
    [SocialProvider.GOOGLE]: {
      clientId: env.GOOGLE_CLIENT_ID,
      clientSecret: env.GOOGLE_CLIENT_SECRET,
    },
  },
  ...
});
```
For mobile apps, we need to define a trusted origin using an app scheme instead of a classic URL. App schemes (like `turbostarter://`) are used for [deep linking](https://docs.expo.dev/guides/linking/) users to specific screens in your app after authentication.
To find your app scheme, take a look at the `apps/mobile/app.config.ts` file, then add it to your auth server configuration:
```ts title="server.ts"
export const auth = betterAuth({
  ...
  trustedOrigins: ["turbostarter://**"],
  ...
});
```
Adding your app scheme to the trusted origins list is crucial for security - it prevents CSRF attacks and blocks malicious open redirects by ensuring only requests from approved origins (your app) are allowed through.
[Read more about auth security in Better Auth's documentation.](https://www.better-auth.com/docs/reference/security)
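Better Auth performs this origin matching internally; as a simplified, hypothetical sketch (not the real implementation), a wildcard pattern like `turbostarter://**` accepts any URL under your app scheme while rejecting everything else:

```ts
// Simplified, hypothetical illustration of trusted-origin matching.
// Better Auth's real implementation is more thorough - this only shows
// the idea behind a scheme wildcard like "turbostarter://**".
const isTrustedOrigin = (origin: string, pattern: string): boolean => {
  if (pattern.endsWith("://**")) {
    // Match any origin under the given app scheme.
    const scheme = pattern.slice(0, -2); // e.g. "turbostarter://"
    return origin.startsWith(scheme);
  }
  // Exact match for regular origins.
  return origin === pattern;
};
```

With this rule, a deep link like `turbostarter://auth/callback` is accepted, while `https://evil.com` is not.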
Also, we included some native integrations (["Sign in with Apple"](/docs/mobile/auth/oauth/apple) for iOS and ["Sign in with Google"](/docs/mobile/auth/oauth/google) for Android) to make the sign-in process smoother and faster for the user.
---
url: /docs/mobile/auth/overview
title: Overview
description: Get started with authentication.
---
TurboStarter uses [Better Auth](https://better-auth.com) to handle authentication. It's a secure, production-ready authentication solution that integrates seamlessly with many frameworks and provides enterprise-grade security out of the box.
One of the core principles of TurboStarter is to do things **as simple as possible**, and to make everything **as performant as possible**.
Better Auth provides an excellent developer experience with minimal configuration while keeping enterprise-grade security. Its framework-agnostic approach and focus on performance make it the perfect choice for TurboStarter.
Recently, Better Auth [announced](https://www.better-auth.com/blog/authjs-joins-better-auth) that [Auth.js (28k+ stars on GitHub)](https://authjs.dev/) has joined the project, making it even more powerful and flexible.

You can read more about Better Auth in the [official documentation](https://better-auth.com/docs).
TurboStarter supports multiple authentication methods:
* **Password** - the traditional email/password method
* **Magic Link** - magic links with [deep linking](https://docs.expo.dev/linking/overview)
* **OTP** - one-time passwords sent to email or phone
* **Anonymous** - allowing users to proceed anonymously
* **OAuth** - social providers ([Apple](https://www.better-auth.com/docs/authentication/apple), [Google](https://www.better-auth.com/docs/authentication/google), and [GitHub](https://www.better-auth.com/docs/authentication/github) preconfigured)
* **Native Apple authentication** - [Sign in with Apple](/docs/mobile/auth/oauth/apple) for iOS
* **Native Google authentication** - [Sign in with Google](/docs/mobile/auth/oauth/google) for Android
It also covers common application flows, with ready-to-use views and components:
* **Sign in** - sign in with email/password, magic link, one-time password, or OAuth providers
* **Sign up** - sign up with email/password or OAuth providers
* **Sign out** - end session by signing out
* **Password recovery** - forgot and reset password
* **Email verification** - verify email address
You can **build your auth flow like LEGO bricks** - plug in the parts you need and customize them.
---
url: /docs/mobile/billing/configuration
title: Configuration
description: Configure billing for your mobile application.
---
Mobile billing configuration consists of a few key components that must be set up correctly to work across platforms.
If you're new to in-app purchases, the most important thing to understand is that **native stores are the source of truth** for your products (what can be purchased, how much it costs, and where it's available). Your billing provider (and your app) can only show and sell what you've configured correctly in [App Store Connect](https://developer.apple.com/help/app-store-connect) (iOS) or the [Google Play Console](https://developer.android.com/studio/publish/preparing) (Android).
As a rule of thumb, set things up in this order:
* **Store**: complete agreements, create products in the native stores, and make sure they're available for testing
* **Offerings**: organize products into purchasable choices for your users (monthly/yearly, tiers, etc.) and specify which entitlements each option grants
* **Paywall**: present offerings in-app and trigger purchases/restores
All of these components are tightly connected, and each must be configured correctly for the billing flow to work smoothly. The sections below guide you through configuring each one.
## Store
Store configuration is the foundation: every in-app purchase ultimately goes through the native store ([App Store](https://apple.com/app-store) on iOS, [Google Play](https://play.google.com/) on Android). Complete this first—otherwise your paywall won't be able to display the correct products and prices.
Follow the official guides to make sure you've created and configured your products in the native stores correctly.
Although these links come from RevenueCat's documentation, the steps apply to any provider - the key part is knowing what's relevant for correct store setup.
## Offerings
Offerings are what your app presents to the user. They typically map store products into a set of purchasable options (for example: monthly vs. yearly), plus the “what do I unlock?” logic.
They're primarily configured remotely in your provider's dashboard. See the dedicated setup guides below to learn how to configure offerings for each provider.
The source of truth for the offerings is the native store products data, so make sure to first complete the store configuration above.
In mobile app terminology, you'll often see the term **entitlements** used alongside offerings. Entitlements define what content or features users gain access to after making a purchase, and they're tied to specific products.
For instance, a user who purchases a premium subscription could be granted access to exclusive features through entitlements.
### Cross-platform support
To show purchased plan details in your app (such as on a subscription overview screen) and make them accessible on both mobile and web apps, you'll need to include that plan in your shared billing configuration. Follow the [web configuration schema](/docs/web/billing/configuration) for consistency across platforms.
```ts title="index.ts"
export const config = billingConfigSchema.parse({
  ...
  plans: [
    {
      id: BillingPlan.PREMIUM,
      name: "Premium",
      description: "Become a power user and gain benefits",
      badge: "Bestseller",
      variants: [
        {
          /* WEB */
          ...
          /* MOBILE */
          /* 👇 This is the product identifier from the store (e.g. App Store Connect, Google Play) */
          id: "core_premium_recurring_month_flat",
          cost: 1900,
          currency: "usd",
          model: BillingModel.RECURRING,
          interval: RecurringInterval.MONTH,
          trialDays: 7,
          hidden: true, // [!code highlight]
        },
      ],
    },
  ],
  ...
}) satisfies BillingConfig;
```
Make sure to set the `hidden` flag to `true` to prevent the variant from being displayed in the pricing table on the web app. This is because the web app will display variants from the shared billing configuration, while the mobile app displays variants from offerings configured for the specific paywall in the provider's dashboard.
This way, you stay compliant with in-app purchase requirements, keep a native mobile experience, and can still show plan details in both the web app and the mobile app.
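The effect of the `hidden` flag can be pictured as a simple filter on the web side. This is an illustrative sketch with a made-up `Variant` shape and data, not the actual TurboStarter billing types:

```ts
// Sketch: the web pricing table only renders variants that aren't hidden.
// The Variant shape and sample data are illustrative, not the real config types.
interface Variant {
  id: string;
  cost: number;
  hidden?: boolean;
}

const visibleVariants = (variants: Variant[]): Variant[] =>
  variants.filter((variant) => !variant.hidden);

const variants: Variant[] = [
  { id: "premium_web_month", cost: 1900 },
  // Mobile-only variant, hidden from the web pricing table.
  { id: "core_premium_recurring_month_flat", cost: 1900, hidden: true },
];
```

The mobile variant stays in the shared config (so plan details resolve everywhere), but only the web-facing variant is displayed.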
## Paywall
The paywall is the UI users interact with to purchase an offering. It's configured remotely in the provider's dashboard, letting you change the paywall UI, behavior, and displayed offerings without needing to release a new version of your app.

To make paywall setup easier, we have a dedicated guide for each provider:
Test your paywall in a sandbox environment first to confirm everything works as expected and that products from the native stores display correctly.
---
url: /docs/mobile/billing/overview
title: Overview
description: Get started with mobile billing in TurboStarter.
---
Implementing mobile billing can be challenging, especially when you need to handle cross-platform compatibility and comply with the different requirements of the App Store and Google Play.
Apple has strict guidelines regarding external payment systems and **may reject your app** if you aggressively redirect users to web-based payment flows. Make sure to review the [App Store Review Guidelines](https://developer.apple.com/app-store/review/guidelines/#payments) carefully and consider implementing native in-app purchases for iOS users to ensure compliance.
TurboStarter's mobile billing is designed around native in-app purchases, so it's fully compliant and ready to use out of the box. However, please be mindful when modifying payment-related features in your mobile app.
TurboStarter makes this easier by providing **native in-app billing** through [RevenueCat](/docs/mobile/billing/revenuecat) and [Superwall](/docs/mobile/billing/superwall). These providers abstract the native store APIs, so you can sell subscriptions and manage entitlements without integrating each store SDK yourself or relying on web-based checkout flows.

## Providers
To support both iOS and Android, TurboStarter includes the following providers for mobile billing:
Each provider is configured and set up behind a unified API. You can switch providers by changing the exports, or introduce your own provider without breaking billing-related logic.
Depending on the provider you choose, you'll need to set the corresponding environment variables. By default, the billing package uses [RevenueCat](/docs/mobile/billing/revenuecat). Alternatively, you can use [Superwall](/docs/mobile/billing/superwall).
## Configuration
Most configuration is done **provider-side**, following the philosophy that you should be able to change plan configuration (and other settings) without having to release a new version of your app. This is especially useful for A/B testing to determine which offering performs better.
To learn more about configuring products, offerings, and cross-platform support, check the following sections:
## Displaying a paywall
The paywall is a crucial part of the billing flow - it displays offerings to the user and lets you trigger purchases/restores at different points in the user journey.
To present a paywall in your app, use the `usePaywall` hook from the `@workspace/billing-mobile` package. This hook returns the paywall result directly from the configured provider.
```tsx title="paywall.tsx"
import { Pressable, Text } from "react-native";

import { usePaywall } from "@workspace/billing-mobile";

export default function Paywall() {
  const { present, result } = usePaywall();

  return (
    <>
      {/* Pressable/Text stand in for your own UI components */}
      <Pressable
        onPress={() =>
          present({
            trigger: "onboarding",
          })
        }
      >
        <Text>Present paywall</Text>
      </Pressable>
      <Text>{result.status}</Text>
    </>
  );
}
```
Don't forget to pass the `trigger` parameter, as it's used to identify the template/campaign that needs to be triggered on the provider's side.
If you want to react to paywall lifecycle events, you can pass additional callbacks to `usePaywall`:
```tsx title="paywall.tsx"
import { usePaywall } from "@workspace/billing-mobile";
const { present, result } = usePaywall({
  onPresent: () => {},
  onDismiss: () => {},
  onPurchase: () => {},
  onRestore: () => {},
  onSkip: () => {},
  onError: (error) => {},
});
```
They're called automatically when the paywall enters a specific state - for example, when the user purchases a plan, `onPurchase` will be called.
## Fetching customer status
After a user purchases a plan in-app, you'll often want to fetch their current billing summary (subscription status, entitlements, credits) to:
* gate features in your UI
* show “Current plan” / “Manage subscription” states
* keep the app in sync across sessions and devices
You can do this via the billing `me` endpoint (`/api/billing/me`) using the mobile [API client](/docs/mobile/api/client).
To do so, call `/api/billing/me` to fetch the user's billing summary:
```tsx title="customer-screen.tsx"
import { useQuery } from "@tanstack/react-query";
import { Text } from "react-native";

import { handle } from "@workspace/api/utils";
import { getActivePlan } from "@workspace/billing";

import { api } from "~/lib/api";

export default function CustomerScreen() {
  const summary = useQuery({
    queryKey: ["me"],
    queryFn: handle(api.billing.me.$get),
  });

  if (!summary.data) {
    return null;
  }

  const plan = getActivePlan(summary.data);

  // Render the active plan however you like - its name is shown as an example.
  return <Text>{plan?.name}</Text>;
}
```
Alternatively, you can treat the **provider as the source of truth** - for example, if you only need to check whether a user has a specific entitlement and want to delegate the rest to the native store handling. To do this, use the `useCustomer` hook, which returns customer data from the configured provider (RevenueCat or Superwall).
```tsx title="customer-screen.tsx"
import { useCustomer } from "@workspace/billing-mobile";

export default function CustomerScreen() {
  const { entitlements } = useCustomer();

  const hasPremium = entitlements.some(
    (entitlement) => entitlement.id === "premium" && entitlement.active,
  );

  /* ... */
}
```
Which approach you choose depends on how much you want to handle in your backend vs. the native store/provider layer. By default, we recommend using the API to handle billing-related logic because it gives you the most flexibility and control. If you need something native-specific, use the built-in hooks (like `useCustomer` and `usePaywall`) to communicate directly with the configured provider.
---
url: /docs/mobile/billing/revenuecat
title: RevenueCat
description: Integrate your mobile application with RevenueCat.
---
[RevenueCat](https://www.revenuecat.com/) is a popular platform for managing in-app purchases and subscriptions. It's a great choice for mobile billing because it's fully compliant with App Store and Google Play guidelines.
It's the default billing provider for mobile apps in TurboStarter. This guide walks you through configuring RevenueCat and wiring it up to your app.
First complete the [store configuration](/docs/mobile/billing/configuration#store) and create your products in the native stores before configuring RevenueCat.
## Configure a new project
RevenueCat projects are top-level containers for your apps, products, entitlements, paywalls, and more. If you don't already have a project for your app, create one in the dashboard.
To create a project, click the *+ Create new project* button in the *Projects* dropdown at the top of the RevenueCat dashboard.
You can also set a name and configure global [restore behavior](https://www.revenuecat.com/docs/getting-started/restoring-purchases).

## Connect to a store
Depending on which platform you're building for, you'll need to connect your RevenueCat project to one or more stores.
Each [project](https://www.revenuecat.com/docs/projects/overview) comes with a [Test Store](https://www.revenuecat.com/docs/test-and-launch/sandbox/test-store) where you can create products, configure offerings, and test the complete purchase flow—without connecting to any app store or payment provider.
When you're ready to submit your app for review, connect it to the real stores and payment providers you want to support and set up [Server Notifications](https://www.revenuecat.com/docs/platform-resources/server-notifications). After you've connected your app, you can import your products and start configuring offerings.
Add an app configuration in the *Apps & providers* section of your app settings.

To learn more about how to obtain all the required API keys and secrets, refer to the [official documentation](https://www.revenuecat.com/docs/projects/connect-a-store).
If you've been using the Test Store during development, switch from your Test Store API key to your platform-specific API key before submitting for app review.
## Get API keys
After you've connected to a store, you'll need the API keys and secrets for the SDK. You can find them under *API Keys* in the dashboard.

To make server-side API requests work, create a *Secret API key* for your project. Pick `v1` as the API version so the server can fetch customer billing data on webhook requests.
For local development, you can use the Test Store API key (sandbox).
## Set environment variables
You need to set the following environment variables:
```dotenv title="apps/mobile/.env.local"
EXPO_PUBLIC_REVENUECAT_APPLE_API_KEY="" # Your RevenueCat Apple API key
EXPO_PUBLIC_REVENUECAT_GOOGLE_API_KEY="" # Your RevenueCat Google API key
```
Additionally, set the secret API key as an environment variable for your **web app**:
```dotenv title="apps/web/.env.local"
REVENUECAT_API_KEY="" # Your RevenueCat secret API key
```
This is required to fetch customer billing data on webhook requests, and it should only be available on the server.
**Don't commit secret keys.** During development, put them in `.env.local` (it's not committed). In production, set them as environment variables in your hosting provider.
## Create products
For each store you're supporting, you'll need to add the products you plan on offering to your customers.
To streamline setup, RevenueCat can import products you've already created in the app stores. This keeps your catalog consistent and saves manual work.

You can also create products manually in the dashboard, although it's not recommended—those products still need to exist in the app stores.
Read more about product setup in the [official documentation](https://www.revenuecat.com/docs/offerings/products/setup-index).
## Create an entitlement
RevenueCat entitlements represent a level of access, features, or content that a user is "entitled" to. Entitlements are scoped to a [project](https://www.revenuecat.com/docs/projects/overview) and are typically unlocked after a user purchases a [product](https://www.revenuecat.com/docs/offerings/products-overview).
To create a new entitlement, click *Product catalog* in the left menu, open the *Entitlements* tab, and click *+ New entitlement*. Enter a unique identifier you'll reference in your app, like `pro`.
Most apps only need one entitlement, but create as many as your product requires. For example, a navigation app might have a subscription for `pro` access and one-time purchases to unlock specific map regions - one `pro` entitlement plus additional entitlements for each region.

### Attach products to an entitlement
After you create entitlements, attach products to them. This tells RevenueCat which entitlement(s) to unlock after a customer purchases a product.
When viewing an entitlement, click *Attach* to attach a product. If you've already added your products, you'll be able to select one from the list.

When a customer buys a product attached to an entitlement, that entitlement becomes active for the duration of the product. Subscription products unlock entitlements for the subscription period. Non-consumable purchases can unlock content permanently.
If you have non-subscription products, whether you attach them to entitlements depends on your use case. If a product is non-consumable (e.g. lifetime access to `pro`), you usually want an entitlement. If it's consumable (e.g. buying more lives), you usually don't.
Attaching an entitlement to a product will grant that entitlement to any customers that have previously purchased that product. Likewise, detaching an entitlement from a product will remove it for any customers that have previously purchased that product.
When designing your Entitlement structure, keep in mind that a single product can unlock multiple entitlements, and multiple products may unlock the same entitlement.
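The product-to-entitlement relationship above can be sketched as a small lookup. The product ids and mapping below are hypothetical; RevenueCat resolves this server-side from your dashboard configuration:

```ts
// Simplified sketch of how purchased products unlock entitlements.
// Ids and the mapping are illustrative - RevenueCat computes this for you.
const productEntitlements: Record<string, string[]> = {
  // A single product can unlock multiple entitlements...
  region_europe_lifetime: ["pro", "region_europe"],
  // ...and multiple products may unlock the same entitlement.
  core_premium_recurring_month_flat: ["pro"],
};

const activeEntitlements = (purchasedProductIds: string[]): Set<string> => {
  const unlocked = new Set<string>();
  for (const productId of purchasedProductIds) {
    for (const entitlement of productEntitlements[productId] ?? []) {
      unlocked.add(entitlement);
    }
  }
  return unlocked;
};
```

Here, purchasing `region_europe_lifetime` unlocks both `pro` and `region_europe`, matching the navigation-app example earlier.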
## Create an offering
Offerings are the selection of products that are "offered" to a user on your paywall. Think of an offering as the product group your paywall will display.
Offerings are created and configured in the RevenueCat dashboard. When using RevenueCat Paywalls, you'll configure a single paywall that is paired to a single Offering.
To create an offering, go to the *Offerings* tab in your project settings and click *+ New*.
You'll be prompted to enter an Identifier and Description for your offering. Note that the offering identifier cannot be changed later. Once you've entered this information, click Save.

Each Offering you create should contain at least one Package that holds cross-platform products.
To create a package, open your new offering and click *+ Add package* in the *Packages* section. Choose an identifier that matches the package duration. If a duration isn't suitable (e.g. consumables), choose a custom identifier. Add a description.
Attach the relevant products (i.e., the products with the same duration you chose) for this Offering, then click Save.

Any product can be added to an Offering, even if it's not part of any Entitlement. This can come in handy if your app's paywall contains a combination of subscription products that unlock Entitlements, and consumable products that do not.
## Configure a paywall
RevenueCat Paywalls let you configure your paywall UI remotely - without code changes or app updates. They're great for iterating on designs and running experiments.
To get started, click *+ New Paywall* on the Paywalls page for your project:

Next, you'll need to select the Offering you want to add a Paywall to. Or, if you don't have any Offerings without Paywalls, you'll have the option to duplicate an existing one or create a new one.
Unless you have a very specific custom design in mind, start with a template. You can customize everything after you pick one - it's just a starting point.

To customize your paywall, you can edit components in the dedicated editor.

When you're ready to publish your paywall, click *Publish* in the top-right corner of the editor. Check [Overview](/docs/mobile/billing/overview#displaying-a-paywall) for details on how to display the paywall in your app.
You can preview your paywall on your phone before publishing by clicking the Preview button in the top-right corner of the editor.
## Create a webhook
To sync subscription status (and other purchase events) to your database, you need to set up a webhook.
TurboStarter includes the webhook handler out of the box - you just need to create the webhook in RevenueCat and paste in your callback URL.
To configure a new webhook, go to the *Integrations* tab and choose the *Webhooks* option.

Click on the *Add new configuration* button to create a new webhook configuration.

It's also recommended to set an `Authorization` header that will be sent with every request. Your server can verify it to ensure the request is coming from a trusted source.
You can get it by running the following command in your terminal:
```bash
openssl rand -base64 32
```
Copy the generated string and paste it into the Authorization header.
You also need to add this secret to your environment variables for your **web app**:
```dotenv title="apps/web/.env.local"
REVENUECAT_WEBHOOK_SECRET=
```
This secret is used by your server to verify incoming webhook requests.
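TurboStarter's built-in handler already performs this check; conceptually, verification boils down to a constant-time comparison of the received `Authorization` header against `REVENUECAT_WEBHOOK_SECRET`. A simplified sketch, not the actual handler code:

```ts
import { timingSafeEqual } from "node:crypto";

// Simplified sketch of Authorization-header verification for incoming
// webhooks - shown only to illustrate the role of the webhook secret.
const isAuthorized = (header: string | null, secret: string): boolean => {
  if (!header) return false;
  const received = Buffer.from(header);
  const expected = Buffer.from(secret);
  // timingSafeEqual throws on length mismatch, so check length first.
  return (
    received.length === expected.length && timingSafeEqual(received, expected)
  );
};
```

A constant-time comparison is preferred over `===` here so that response timing doesn't leak how many leading characters of the secret an attacker guessed correctly.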
To get the callback URL for the webhook, you can either use a local development URL or the URL of your deployed app:
### Local development
If you want to test the webhook locally, you can use [ngrok](https://ngrok.com) to create a tunnel to your local machine. Ngrok will then give you a URL that you can use to test the webhook locally.
To do so, install ngrok and run it with the following command (while your TurboStarter **web** development server is running):
```bash
ngrok http 3000
```

This will give you a URL (see the *Forwarding* output) that you can use to create a webhook in RevenueCat. Use that URL and append `/api/billing/webhook/revenuecat`.
### Production deployment
When going to production, you will need to set the webhook URL and choose which events you want to listen to in RevenueCat.
The webhook path is `/api/billing/webhook/revenuecat`. If your app is hosted at `https://myapp.com` then you need to enter `https://myapp.com/api/billing/webhook/revenuecat` as the URL.
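In code terms, the callback URL is just your deployed origin joined with the provider-specific path. A quick sketch (`https://myapp.com` stands in for your own domain):

```ts
// Compose the webhook callback URL from your deployed origin and the
// provider-specific path. The domain here is a placeholder.
const webhookUrl = (origin: string, provider: string): string =>
  new URL(`/api/billing/webhook/${provider}`, origin).toString();
```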
All the relevant events are automatically handled by TurboStarter, so you don't need to do anything else. If you want to handle more events, check [Webhooks](/docs/mobile/billing/webhooks) for more information.
To handle billing webhooks in production (and allow your Expo app to talk to your backend), you must first deploy the Hono API.
That's it! 🎉 You have now set up RevenueCat as a billing provider for your app.
Feel free to add more products, variants, and promotional offers, and manage your customer data and subscriptions using RevenueCat.
---
url: /docs/mobile/billing/superwall
title: Superwall
description: Implement paywalls, subscriptions, and revenue sharing with Superwall.
---
[Superwall](https://superwall.com/) is a paywall experimentation platform for mobile apps. You can build and deploy paywalls without coding, run targeted A/B tests, and track conversion and revenue analytics - all while using native in-app purchases. It's a great choice when you want to iterate on monetization quickly.
To switch to Superwall, update the exports in the `@workspace/billing-mobile` package:
```ts
// [!code word:superwall]
export * from "./superwall";
```
```ts
// [!code word:superwall]
export * from "./superwall/server";
```
```ts
// [!code word:superwall]
export * from "./superwall/server/env";
```
These exports tell TurboStarter to use the Superwall implementation (instead of the default provider) for mobile billing.
In the sections below, you'll configure Superwall and set it up as the billing provider for your app.
First complete the [store configuration](/docs/mobile/billing/configuration#store) and create your products in the native stores before configuring Superwall.
## Configure a new project
Start by creating a new project in the [Superwall dashboard](https://superwall.com/dashboard). Projects let you manage your apps, paywalls, entitlements, and integrations in one place.

## Connect to a store
Link your app to the appropriate stores (Apple App Store and/or Google Play) in Superwall. This lets Superwall access your in-app purchase products and keep paywalls and entitlements in sync across platforms.
You can find the required keys (and how to obtain them) under *Revenue Tracking* in your project's *Settings*.
Superwall uses this connection to attribute revenue and to help validate purchase data coming from the stores.

## Get API keys
Open your project settings and copy the API keys you need to integrate Superwall in your app. You'll use these keys to initialize the SDK and connect your app to your Superwall project.

Make sure to copy keys for both iOS and Android. To test the purchase flow, see Superwall's [blog post](https://superwall.com/blog/testing-subscriptions-and-in-app-purchases-for-ios-apps-before-launch/).
## Set environment variables
Add the Superwall API keys as environment variables for your app so they're available securely at build and runtime.
```dotenv title="apps/mobile/.env.local"
EXPO_PUBLIC_SUPERWALL_APPLE_API_KEY="" # Your Superwall Apple API key
EXPO_PUBLIC_SUPERWALL_GOOGLE_API_KEY="" # Your Superwall Google API key
```
Even though these are used on the client, keep them out of git. During development, put them in `.env.local` (it's not committed). In production, set them via your hosting provider (e.g. EAS).
## Create an entitlement
Define entitlements in Superwall to represent what a user gets after purchasing a product or subscription. Your app uses entitlements to gate premium features/content.

## Create products
In the Superwall dashboard, create the same products (subscriptions, consumables, etc.) as you have in the App Store / Play Store. Make sure the product identifiers match exactly.

When you connect your app to a store, Superwall can automatically import products. That's usually the fastest way to get started.
For each product, select the entitlement it should unlock after purchase.
After creating/importing products, make sure they're all in the **Active** state so they can be shown on your paywall.

## Create a paywall
Design and configure paywalls in the Superwall editor. Paywalls control how products are presented to users, including A/B tests, price localization, promotions, and other monetization experiments.

It's recommended to start with a template and customize it later, but you can also build one from scratch.

When you're ready to publish your paywall, click *Publish* in the top-right corner of the editor. Check [Overview](/docs/mobile/billing/overview#displaying-a-paywall) for details on how to display the paywall in your app.
You can preview your paywall on your phone before publishing by clicking the Preview button in the top-right corner of the editor.
## Create a webhook
To sync subscription status (and other purchase events) to your database, you need to set up a webhook.
TurboStarter includes the webhook handler out of the box - you just need to create the webhook in Superwall and paste in your callback URL.
To configure a webhook, go to the *Integrations* tab and choose *Webhooks*.

Click on the *Create Webhook* button to create a new webhook configuration.

After creating the webhook, copy the generated secret and add it to your environment variables for your **web app**:
```dotenv title="apps/web/.env.local"
SUPERWALL_WEBHOOK_SECRET=
```
This secret is used by your server to verify incoming webhook requests.
To get the callback URL for the webhook, you can either use a local development URL or the URL of your deployed app:
### Local development
If you want to test the webhook locally, you can use [ngrok](https://ngrok.com) to create a tunnel to your local machine. Ngrok will then give you a URL that you can use to test the webhook locally.
To do so, install ngrok and run it with the following command (while your TurboStarter **web** development server is running):
```bash
ngrok http 3000
```

This will give you a URL (see the *Forwarding* output) that you can use to create a webhook in Superwall. Use that URL and append `/api/billing/webhook/superwall`.
### Production deployment
When going to production, you will need to set the webhook URL and choose which events you want to listen to in Superwall.
The webhook path is `/api/billing/webhook/superwall`. If your app is hosted at `https://myapp.com` then you need to enter `https://myapp.com/api/billing/webhook/superwall` as the URL.
All the relevant events are automatically handled by TurboStarter, so you don't need to do anything else. If you want to handle more events, check [Webhooks](/docs/mobile/billing/webhooks) for more information.
To handle billing webhooks in production (and allow your Expo app to talk to your backend), you must first deploy the Hono API.
That's it! 🎉 You've successfully configured Superwall as your billing provider.
You can now add additional products, variants, or promotional offers, and manage your customers and subscriptions through Superwall.
---
url: /docs/mobile/billing/webhooks
title: Webhooks
description: Handle webhooks from your mobile app's billing provider.
---
To handle billing webhooks in production (and allow your Expo app to talk to your backend), you must first deploy the Hono API.
TurboStarter uses billing webhooks to keep customer data in sync based on events sent by your billing provider.
However, sometimes you may want to perform custom actions when specific events arrive.
In that case, customize the billing webhook handler in your API backend (the endpoint that receives billing webhooks).
By default, the webhook handler is configured to be **as straightforward as possible**:
```ts title="router.ts"
import { Hono } from "hono";

import { webhookHandler, provider } from "@workspace/billing-mobile/server";

export const billingRouter = new Hono().post(`/webhook/${provider}`, (c) =>
  webhookHandler(c.req.raw),
);
```
However, you can extend it using the callbacks provided by the `@workspace/billing-mobile` package:
```ts title="router.ts"
import { Hono } from "hono";

import { webhookHandler, provider } from "@workspace/billing-mobile/server";

export const billingRouter = new Hono().post(`/webhook/${provider}`, (c) =>
  webhookHandler(c.req.raw, {
    onOneTimePurchaseSucceeded: (orderId) => {},
    onSubscriptionCreated: (subscriptionId) => {},
    onSubscriptionUpdated: (subscriptionId) => {},
    onSubscriptionDeleted: (subscriptionId) => {},
    onEvent: (rawEvent) => {},
  }),
);
```
You can provide one or more of the callbacks to handle the events you are interested in.
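As a sketch of what a filled-in callback might look like — the handler bodies and the `grantAccess` helper below are hypothetical stand-ins for your own business logic:

```typescript
// Illustrative callbacks object with the same shape as the snippet above.
type WebhookCallbacks = {
  onSubscriptionCreated?: (subscriptionId: string) => void | Promise<void>;
  onEvent?: (rawEvent: unknown) => void;
};

// Stand-in for your own provisioning logic.
const grantAccess = (subscriptionId: string) =>
  `access granted for ${subscriptionId}`;

const callbacks: WebhookCallbacks = {
  onSubscriptionCreated: (subscriptionId) => {
    // e.g. provision access, send a welcome email, track analytics
    console.log(grantAccess(subscriptionId));
  },
  onEvent: (rawEvent) => {
    // catch-all for events without a dedicated callback
    console.log("billing event:", rawEvent);
  },
};
```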
Mobile billing webhooks are set up using the same method as [in the web app](/docs/web/billing/webhooks). Make sure to keep your configurations organized and confirm that events are handled properly for each provider on both mobile and web platforms.
---
url: /docs/mobile/cli
title: CLI
description: Start your new project with a single command.
---
To help you get started with TurboStarter **as quickly as possible**, we've developed a [CLI](https://www.npmjs.com/package/@turbostarter/cli) that enables you to create a new project (with all the configuration) in seconds.
The CLI is a set of commands that will help you create a new project, generate code, and manage your project efficiently.
Currently, the following actions are available:
* **Starting a new project** - Generate starter code for your project with all necessary configurations in place (billing, database, emails, etc.)
* **Updating existing project** - Pull the latest upstream changes into your TurboStarter repository
**The CLI is in beta**, and we're actively working on adding more commands and actions.
## Installation
You can run commands without installing globally:
```bash
npx @turbostarter/cli@latest
```
```bash
pnpm dlx @turbostarter/cli@latest
```
```bash
yarn dlx @turbostarter/cli@latest
```
```bash
bunx @turbostarter/cli@latest
```
Or install globally and run:
```bash
npm install -g @turbostarter/cli
turbostarter
```
```bash
pnpm add -g @turbostarter/cli
turbostarter
```
```bash
yarn global add @turbostarter/cli
turbostarter
```
```bash
bun add -g @turbostarter/cli
turbostarter
```
You can also display help or check the installed version using the `--help` or `-v` flags.
### Starting a new project
Use the `new` command to initialize configuration and dependencies for a new project.
```bash
turbostarter new
```
You will be asked a few questions to configure your project:
```bash
✔ All prerequisites satisfied, let's start! 🚀
? What do you want to ship? ›
◉ Web app
◉ Mobile app
◯ Browser extension
? Enter your project name. ›
? Configure all providers now? ›
Yes, configure now (recommended)
No, just let me ship, now!
Creating a new TurboStarter project in ...
✔ Repository successfully pulled!
✔ Git successfully configured!
✔ Dependencies successfully installed!
✔ Services successfully started!
🎉 You can now get started. Open the project and just ship it! 🎉
Problems? https://turbostarter.dev/docs
```
It will create a new project, configure providers, install dependencies and start required services in development mode.
### Updating existing project
Use the `project update` command to pull the latest upstream changes into your TurboStarter repository.
```bash
turbostarter project update
```
Before updating, the CLI validates that:
* You are running the command from a TurboStarter project root
* Your git working tree is clean
* Your `upstream` remote points to `turbostarter/core`
Then it fetches upstream changes and merges `upstream/main` into your current branch. If conflicts occur, it prints the conflicting files with next steps.
---
url: /docs/mobile/configuration/app
title: App configuration
description: Learn how to setup the overall settings of your app.
---
When configuring your app, you'll need to define settings in different places depending on which provider will use them (e.g., Expo, EAS).
## App configuration
Let's start with the core settings for your app. These settings are **crucial** as they're used by Expo and EAS to build your app, determine its store presence, prepare updates, and more.
This configuration includes essential details like the official name, description, scheme, store IDs, splash screen configuration, and more.
You'll define these settings in `apps/mobile/app.config.ts`. Make sure to follow the [Expo config schema](https://docs.expo.dev/versions/latest/config/app/) when setting this up.
Here is an example of what the config file looks like:
```ts title="apps/mobile/app.config.ts"
import { ConfigContext, ExpoConfig } from "expo/config";

export default ({ config }: ConfigContext): ExpoConfig => ({
  ...config,
  name: "TurboStarter",
  slug: "turbostarter",
  scheme: "turbostarter",
  version: "0.1.0",
  orientation: "portrait",
  icon: "./assets/images/icon.png",
  userInterfaceStyle: "automatic",
  assetBundlePatterns: ["**/*"],
  sdkVersion: "51.0.0",
  platforms: ["ios", "android"],
  updates: {
    fallbackToCacheTimeout: 0,
  },
  newArchEnabled: true,
  ios: {
    bundleIdentifier: "your.bundle.identifier",
    supportsTablet: false,
  },
  android: {
    package: "your.bundle.identifier",
    adaptiveIcon: {
      monochromeImage: "./public/images/icon/android/monochrome.png",
      foregroundImage: "./public/images/icon/android/adaptive.png",
      backgroundColor: "#0D121C",
    },
  },
  extra: {
    eas: {
      projectId: "your-project-id",
    },
  },
  experiments: {
    tsconfigPaths: true,
    typedRoutes: true,
  },
  plugins: ["expo-router", ["expo-splash-screen", SPLASH]],
});
```
Make sure to replace the values with your own and take your time to set everything correctly.
### Internal configuration
As with the [web app](/docs/web/configuration/app) and [extension](/docs/extension/configuration/app), we define an internal app config that stores some overall variables for your application (ones that can't be read from the Expo config).
We recommend **not updating this directly** - instead, define the environment variables and override the default behavior. The configuration is strongly typed, so you can use it safely across your codebase - it'll be validated at build time.
```ts title="apps/mobile/src/config/app.ts"
import env from "env.config";

export const appConfig = {
  locale: env.EXPO_PUBLIC_DEFAULT_LOCALE,
  url: env.EXPO_PUBLIC_SITE_URL,
  theme: {
    mode: env.EXPO_PUBLIC_THEME_MODE,
    color: env.EXPO_PUBLIC_THEME_COLOR,
  },
} as const;
```
For example, to set the mobile app default theme color, you'd update the following variable:
```dotenv title=".env.local"
EXPO_PUBLIC_THEME_COLOR="yellow"
```
Do NOT use `process.env` to get the values of the variables. Variables
accessed this way are not validated at build time, and thus the wrong variable
can be used in production.
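To illustrate the difference, a minimal sketch — the `env` object here is a hard-coded stand-in for the validated `env.config` import:

```typescript
// Stand-in for the validated env import — at build time these values are
// checked against a schema, unlike raw process.env access.
const env = {
  EXPO_PUBLIC_THEME_MODE: "system",
  EXPO_PUBLIC_THEME_COLOR: "yellow",
} as const;

const appConfig = {
  theme: {
    mode: env.EXPO_PUBLIC_THEME_MODE, // ✅ typed and validated
    color: env.EXPO_PUBLIC_THEME_COLOR,
  },
} as const;

// ❌ avoid: process.env.EXPO_PUBLIC_THEME_COLOR — a typo there
// (e.g. THEME_COLOUR) would silently yield undefined in production.
console.log(appConfig.theme.color);
// "yellow"
```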
## EAS configuration
To properly build and publish your app, you need to define settings for the EAS build service.
This is done in `apps/mobile/eas.json` and it must follow the [EAS config scheme](https://docs.expo.dev/eas/json/).
Here is an example of what the config file looks like:
```json title="apps/mobile/eas.json"
{
  "cli": {
    "version": ">= 4.1.2"
  },
  "build": {
    "base": {
      "node": "20.15.0",
      "pnpm": "9.6.0",
      "ios": {
        "resourceClass": "m-medium"
      },
      "env": {
        "EXPO_PUBLIC_DEFAULT_LOCALE": "en",
        "EXPO_PUBLIC_AUTH_PASSWORD": "true",
        "EXPO_PUBLIC_AUTH_MAGIC_LINK": "false",
        "EXPO_PUBLIC_THEME_MODE": "system",
        "EXPO_PUBLIC_THEME_COLOR": "orange"
      }
    },
    ...
    "preview": {
      "extends": "base",
      "distribution": "internal",
      "android": {
        "buildType": "apk"
      },
      "env": {
        "APP_ENV": "test"
      }
    },
    "production": {
      "extends": "base",
      "env": {
        "APP_ENV": "production"
      }
    }
    ...
  }
}
```
Make sure to also fill all the [environment variables](/docs/mobile/configuration/environment-variables) with the correct values for your project and correct environment, otherwise your app won't build and you won't be able to publish it.
---
url: /docs/mobile/configuration/environment-variables
title: Environment variables
description: Learn how to configure environment variables.
---
Environment variables are defined in the `.env` file in the root of the repository and in the root of the `apps/mobile` package.
* **Shared environment variables**: Defined in the **root** `.env` file. These are shared between environments (e.g., development, staging, production) and apps (e.g., web, mobile).
* **Environment-specific variables**: Defined in `.env.development` and `.env.production` files. These are specific to the development and production environments.
* **App-specific variables**: Defined in the app-specific directory (e.g., `apps/mobile`). These are specific to the app and are not shared between apps.
* **Build environment variables**: Not stored in the `.env` files. Instead, they are stored in the `eas.json` file used to build the app on [Expo Application Services](https://expo.dev/eas).
* **Secret keys**: Never stored on the mobile side; instead, [they're defined on the web side.](/docs/web/configuration/environment-variables#secret-keys)
## Shared variables
Here you can add all the environment variables that are shared across all the apps.
To override these variables in a specific environment, please add them to the specific environment file (e.g. `.env.development`, `.env.production`).
```dotenv title=".env.local"
# Shared environment variables
# The database URL is used to connect to your database.
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres"
# The name of the product. This is used in various places across the apps.
PRODUCT_NAME="TurboStarter"
# The url of the web app. Used mostly to link between apps.
URL="http://localhost:3000"
...
```
## App-specific variables
Here you can add all the environment variables that are specific to the app (e.g. `apps/mobile`).
You can also override the shared variables defined in the root `.env` file.
```dotenv title="apps/mobile/.env.local"
# App-specific environment variables
# Env variables extracted from shared to be exposed to the client in Expo app
EXPO_PUBLIC_SITE_URL="${URL}"
EXPO_PUBLIC_DEFAULT_LOCALE="${DEFAULT_LOCALE}"
# Theme mode and color
EXPO_PUBLIC_THEME_MODE="system"
EXPO_PUBLIC_THEME_COLOR="orange"
# Use this variable to enable or disable password-based authentication. If you set this to true, users will be able to sign up and sign in using their email and password. If you set this to false, the form won't be shown.
EXPO_PUBLIC_AUTH_PASSWORD="true"
...
```
To make environment variables available in the Expo app code, you need to prefix them with `EXPO_PUBLIC_`. They will be injected into the code during the build process.
Only environment variables prefixed with `EXPO_PUBLIC_` will be injected.
[Read more about Expo environment variables.](https://docs.expo.dev/guides/environment-variables/)
## Build environment variables
To allow your app to build properly on [EAS](https://expo.dev/eas), you need to define your environment variables either in your `eas.json` file under the corresponding profile (e.g. `preview` or `production`) or directly in the [EAS platform](https://docs.expo.dev/eas/environment-variables/).
Then, when you trigger a build, the correct environment variables will be injected into your mobile app code.
[Check EAS documentation for more details.](https://docs.expo.dev/eas/environment-variables/)
## Secret keys
Secret keys and sensitive information must **never** be stored in the mobile app code.
This means you will need to add the secret keys to the **web app, where the API is deployed.**
The mobile app should only communicate with the backend API, which is typically part of the web app. The web app is responsible for handling sensitive operations and storing secret keys securely.
[See web documentation for more details.](/docs/web/configuration/environment-variables#secret-keys)
This is not a TurboStarter-specific requirement, but a security best practice for any
application. Ultimately, it's your choice.
---
url: /docs/mobile/configuration/paths
title: Paths configuration
description: Learn how to configure the paths of your app.
---
The paths configuration is set at `apps/mobile/config/paths.ts`. This configuration stores all the paths that you'll be using in your application. It is a convenient way to store them in a central place rather than scatter them in the codebase using magic strings.
It is **unlikely you'll need to change** this unless you're heavily editing the codebase.
```ts title="apps/mobile/config/paths.ts"
const pathsConfig = {
  index: "/",
  setup: {
    welcome: "/welcome",
    auth: {
      login: `${AUTH_PREFIX}/login`,
      register: `${AUTH_PREFIX}/register`,
      forgotPassword: `${AUTH_PREFIX}/password/forgot`,
      updatePassword: `${AUTH_PREFIX}/password/update`,
      error: `${AUTH_PREFIX}/error`,
      join: `${AUTH_PREFIX}/join`,
    },
    steps: {
      start: `${STEPS_PREFIX}/start`,
      required: `${STEPS_PREFIX}/required`,
      skip: `${STEPS_PREFIX}/skip`,
      final: `${STEPS_PREFIX}/final`,
    },
  },
  dashboard: {
    user: {
      index: DASHBOARD_PREFIX,
      ai: `${DASHBOARD_PREFIX}/ai`,
      ...
    },
    ...
  },
} as const;
```
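For example, with a trimmed-down copy of the config, navigation code reads like this (`router.push` is how you'd consume it with expo-router):

```typescript
// Trimmed-down copy of the paths config for illustration.
const AUTH_PREFIX = "/auth";

const pathsConfig = {
  index: "/",
  setup: {
    auth: {
      login: `${AUTH_PREFIX}/login`,
      register: `${AUTH_PREFIX}/register`,
    },
  },
} as const;

// Instead of router.push("/auth/logn") — a typo the compiler can't catch —
// you write router.push(pathsConfig.setup.auth.login).
console.log(pathsConfig.setup.auth.login);
// "/auth/login"
```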
By declaring the paths as constants, we can use them safely throughout the
codebase. There is no risk of misspelling or using magic strings.
---
url: /docs/mobile/customization/add-app
title: Adding apps
description: Learn how to add apps to your Turborepo workspace.
---
This is an **advanced topic** - you should only follow these instructions if you are sure you want to add a new app to your TurboStarter project within your monorepo and want to keep pulling updates from the TurboStarter repository.
In some ways - creating a new repository may be the easiest way to manage your application. However, if you want to keep your application within the monorepo and pull updates from the TurboStarter repository, you can follow these instructions.
To pull updates into a separate application outside of `mobile` - we can use [git subtree](https://www.atlassian.com/git/tutorials/git-subtree).
Basically, we will create a subtree at `apps/mobile` and create a new remote branch for the subtree. When we create a new application, we will pull the subtree into the new application. This allows us to keep it in sync with the `apps/mobile` folder.
To add a new app to your TurboStarter project, you need to follow these steps:
## Create a subtree
First, we need to create a subtree for the `apps/mobile` folder. We will create a branch named `mobile-branch` and create a subtree for the `apps/mobile` folder.
```bash
git subtree split --prefix=apps/mobile --branch mobile-branch
```
## Create a new app
Now, we can create a new application in the `apps` folder.
Let's say we want to create a new app `ai-chat` at `apps/ai-chat` with the same structure as the `apps/mobile` folder (which acts as the template for all new apps).
```bash
git subtree add --prefix=apps/ai-chat origin mobile-branch --squash
```
You should now be able to see the `apps/ai-chat` folder with the contents of the `apps/mobile` folder.
## Update the app
When you want to update the new application, follow these steps:
### Pull the latest updates from the TurboStarter repository
The command below will update all the changes from the TurboStarter repository:
```bash
git pull upstream main
```
### Push the `mobile-branch` updates
After you have pulled the updates from the TurboStarter repository, you can split the branch again and push the updates to the mobile-branch:
```bash
git subtree split --prefix=apps/mobile --branch mobile-branch
```
Now, you can push the updates to the `mobile-branch`:
```bash
git push origin mobile-branch
```
### Pull the updates to the new application
Now, you can pull the updates to the new application:
```bash
git subtree pull --prefix=apps/ai-chat origin mobile-branch --squash
```
That's it! You now have a new application in the monorepo 🎉
---
url: /docs/mobile/customization/add-package
title: Adding packages
description: Learn how to add packages to your Turborepo workspace.
---
This is an **advanced topic** - you should only follow these instructions if you are sure you want to add a new package to your TurboStarter application instead of adding a folder to your application in `apps/mobile` or modifying existing packages under `packages`. You don't need to do this to add a new screen or component to your application.
To add a new package to your TurboStarter application, you need to follow these steps:
## Generate a new package
First, enter the command below to create a new package in your TurboStarter application:
```bash
turbo gen package
```
Turborepo will ask you for the name of the package you want to create. Enter it and press enter.
If you don't want to add dependencies to your package, you can skip this step by pressing enter.
The command generates a new package under `packages` named `@workspace/<package-name>`. If you named it `example`, the package will be named `@workspace/example`.
## Export a module from your package
By default, the package exports a single module via the `index.ts` file. You can add more exports by creating new files in the package directory and exporting them from `index.ts`, or by creating separate export files and adding them to the `exports` field in the `package.json` file.
### From `index.ts` file
The easiest way to export a module from a package is to create a new file in the package directory and export it from the `index.ts` file.
```ts title="packages/example/src/module.ts"
export function example() {
  return "example";
}
```
Then, export the module from the `index.ts` file.
```ts title="packages/example/src/index.ts"
export * from "./module";
```
### From `exports` field in `package.json`
**This can be very useful for tree-shaking.** Assuming you have a file named `module.ts` in the package directory, you can export it by adding it to the `exports` field in the `package.json` file.
```json title="packages/example/package.json"
{
  "exports": {
    ".": "./src/index.ts",
    "./module": "./src/module.ts"
  }
}
```
**When to do this?**
1. when exporting two modules that don't share dependencies, to ensure better tree-shaking. For example, if your exports contain both client and server modules.
2. for better organization of your package
For example, create two exports `client` and `server` in the package directory and add them to the `exports` field in the `package.json` file.
```json title="packages/example/package.json"
{
  "exports": {
    ".": "./src/index.ts",
    "./client": "./src/client.ts",
    "./server": "./src/server.ts"
  }
}
```
1. The `client` module can be imported using `import { client } from '@workspace/example/client'`
2. The `server` module can be imported using `import { server } from '@workspace/example/server'`
## Use the package in your application
You can now use the package in your application by importing it using the package name:
```ts title="apps/mobile/src/app/index.tsx"
import { example } from "@workspace/example";
console.log(example());
```
Et voilà! You have successfully added a new package to your TurboStarter application. 🎉
---
url: /docs/mobile/customization/components
title: Components
description: Manage and customize your app components.
---
For the components part, we're using [react-native-reusables](https://reactnativereusables.com/getting-started/introduction/) for atomic, accessible, and highly customizable components.
> It's like shadcn/ui, but for mobile apps.
react-native-reusables is a powerful tool that allows you to generate
pre-designed components with a single command. It's built with Uniwind (like
Tailwind CSS for mobile) and with accessibility in mind, and it's highly
customizable.
TurboStarter defines two packages that are responsible for the UI part of your app:
* `@workspace/ui` - shared styles, [themes](/docs/mobile/customization/styling#themes) and assets (e.g. icons)
* `@workspace/ui-mobile` - pre-built UI mobile components, ready to use in your app
## Adding a new component
There are basically two ways to add a new component:
TurboStarter is fully compatible with the [react-native-reusables CLI](https://www.npmjs.com/package/@react-native-reusables/cli), so you can generate new components with a single command.
Run the following command from the **root** of your project:
```bash
pnpm --filter @workspace/ui-mobile ui:add
```
This will launch an interactive command-line interface to guide you through the process of adding a new component where you can pick which component you want to add.
```bash
Which components would you like to add? > Space to select. A to toggle all.
Enter to submit.
◯ accordion
◯ alert
◯ alert-dialog
◯ aspect-ratio
◯ avatar
◯ badge
◯ button
◯ calendar
◯ card
◯ checkbox
```
Newly created components will appear in the `packages/ui/mobile/src` directory.
You can always copy-paste a component from the [react-native-reusables](https://reactnativereusables.com/getting-started/introduction/) website and modify it to your needs.
This is possible because the components are headless and (in most cases) don't need any additional dependencies.
Copy code from the website, create a new file in the `packages/ui/mobile/src` directory and paste the code into the file.
Keep in mind that you should always try to keep shared components as atomic as possible. This will make it easier to reuse them and to build specific views by composition.
E.g. include components like `Button`, `Input`, `Card`, `Dialog` in shared package, but keep specific components like `LoginForm` in your app directory.
## Using components
Each component is a standalone entity which has a separate export from the package. It helps to keep things modular, avoid unnecessary dependencies and make tree-shaking possible.
To import a component from the UI package, use the following syntax:
```tsx title="apps/mobile/src/modules/common/my-component.tsx"
// [!code word:card]
import {
  Card,
  CardContent,
  CardHeader,
  CardFooter,
  CardTitle,
  CardDescription,
} from "@workspace/ui-mobile/card";
```
Then you can use it to build a component specific to your app:
```tsx title="apps/mobile/src/modules/common/my-component.tsx"
export function MyComponent() {
  return (
    <Card>
      <CardHeader>
        <CardTitle>My Component</CardTitle>
      </CardHeader>
      <CardContent>My Component Content</CardContent>
    </Card>
  );
}
```
Most of the components are the same as for the [web app](/docs/web/customization/components).
It means that you can basically migrate existing web components to the mobile app with just an import change!
---
url: /docs/mobile/customization/styling
title: Styling
description: Get started with styling your app.
---
To build the mobile user interface, TurboStarter comes with [Uniwind](https://uniwind.dev/) pre-configured.
Uniwind brings Tailwind CSS utilities to React Native. It lets you style with familiar classes while keeping native performance and platform-appropriate primitives.
## Tailwind configuration
In the `packages/ui/shared/src/styles` directory, you will find shared CSS files with Tailwind configuration. To change global styles, edit the files in this folder.
Here is an example of a shared CSS file that includes the Tailwind CSS configuration:
```css title="packages/ui/shared/src/styles/globals.css"
@import "tailwindcss";
@import "./themes.css";
@custom-variant dark (&:is(.dark *));
:root {
  --radius: 0.65rem;
}

@theme inline {
  --color-background: var(--background);
  --color-foreground: var(--foreground);
  --color-card: var(--card);
  --color-card-foreground: var(--card-foreground);
  --color-popover: var(--popover);
  --color-popover-foreground: var(--popover-foreground);
  --color-primary: var(--primary);
  --color-primary-foreground: var(--primary-foreground);
  --color-secondary: var(--secondary);
  --color-secondary-foreground: var(--secondary-foreground);
  --color-muted: var(--muted);
  --color-muted-foreground: var(--muted-foreground);
  ...
}
```
For colors, we rely strictly on [CSS Variables](https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_custom_properties) in [OKLCH](https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/oklch) format to allow for easy theme management without the need for any JavaScript.
Also, each app has its own `globals.css` file, which extends the shared config and allows you to override global styles.
Here is an example of an app's `globals.css` file:
```css title="apps/mobile/src/assets/styles/globals.css"
@import "@workspace/ui-mobile/globals.css";
@theme inline {
  --font-sans: "Geist_400Regular";
  --font-sans-medium: "Geist_500Medium";
  --font-sans-semibold: "Geist_600SemiBold";
  --font-sans-bold: "Geist_700Bold";
  --font-mono: "GeistMono_400Regular";
}
```
This keeps a clear separation of concerns and a consistent structure for the Tailwind CSS configuration across apps.
## Themes
TurboStarter comes with **9+** predefined themes, which you can use to quickly change the look and feel of your app.
They're defined in the `packages/ui/shared/src/styles/themes` directory. Each theme is a set of variables that can be overridden:
```ts title="packages/ui/shared/src/styles/themes/colors/orange.ts"
export const orange = {
  light: {
    background: [1, 0, 0],
    foreground: [0.141, 0.005, 285.823],
    card: [1, 0, 0],
    "card-foreground": [0.141, 0.005, 285.823],
    ...
  },
} satisfies ThemeColors;
```
Each variable is stored as an [OKLCH](https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/oklch) array, which is then converted to a CSS variable at build time (by our custom build script). That way we can ensure full type-safety and reuse themes across different parts of our apps (e.g. use the same theme in emails).
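Conceptually, the build step turns each tuple into a CSS custom property. A simplified sketch — the real script lives in the UI package; this helper is purely illustrative:

```typescript
// Convert an OKLCH tuple into a CSS custom property declaration.
type Oklch = readonly [number, number, number];

const toCssVariable = (name: string, [l, c, h]: Oklch): string =>
  `--${name}: oklch(${l} ${c} ${h});`;

console.log(toCssVariable("background", [1, 0, 0]));
// "--background: oklch(1 0 0);"
```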
These variables are consumed across platforms. On mobile, the theme provider injects the shared variables into the app, so Uniwind utility classes like `bg-background` and `text-foreground` resolve correctly.
Feel free to add your own themes or override the existing ones to match your brand's identity.
To apply a custom theme to your app, use a `useTheme` hook to modify the config:
```tsx title="apps/mobile/src/lib/providers/theme.tsx"
import { Pressable, Text } from "react-native";

import { ThemeColor, ThemeMode } from "@workspace/ui";

import { useTheme } from "~/modules/common/hooks/use-theme";

export const ThemeSwitcher = () => {
  const { setConfig } = useTheme();

  return (
    <Pressable
      onPress={() =>
        setConfig({ mode: ThemeMode.DARK, color: ThemeColor.BLUE })
      }
    >
      <Text>Change the theme to dark blue</Text>
    </Pressable>
  );
};
```
Under the hood, the `useTheme` hook uses [Uniwind.setTheme](https://docs.uniwind.dev/theming/basics#switch-to-a-specific-theme) and [updateCSSVariables](https://docs.uniwind.dev/theming/update-css-variables) utilities to apply the correct theme to the app together with its variables.
## Dark mode
TurboStarter comes with built-in dark mode support.
Each theme has a corresponding set of dark mode variables, which are used to switch the theme to its dark mode counterpart.
```ts title="packages/ui/shared/src/styles/themes/colors/orange.ts"
export const orange = {
  light: {},
  dark: {
    background: [0.141, 0.005, 285.823],
    foreground: [0.985, 0, 0],
    card: [0.21, 0.006, 285.885],
    "card-foreground": [0.985, 0, 0],
    ...
  },
} satisfies ThemeColors;
```
Our custom implementation reads the system color scheme via `useColorScheme` and applies `dark:` variants automatically. With the provider injecting shared variables, dark mode works out of the box.
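In simplified form, resolving the active variable set boils down to indexing the theme by the current scheme — a sketch of what the provider does with the scheme reported by `useColorScheme`:

```typescript
// Pick the light or dark variable set for a theme.
type Scheme = "light" | "dark";
type ThemeVars = Record<string, readonly number[]>;

const orange: Record<Scheme, ThemeVars> = {
  light: { background: [1, 0, 0] },
  dark: { background: [0.141, 0.005, 285.823] },
};

const resolveColors = (theme: Record<Scheme, ThemeVars>, scheme: Scheme) =>
  theme[scheme];

console.log(resolveColors(orange, "dark").background);
// [ 0.141, 0.005, 285.823 ]
```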
You can also define the default theme mode and color in the [app configuration](/docs/mobile/configuration/app).
---
url: /docs/mobile/database
title: Database
description: Get started with the database.
---
To enable communication between your Expo app and the server in a production environment, the web application with Hono API must be deployed first.
Since the mobile app runs only client-side code, **there's no way to interact with the database directly**.
You should also avoid any workarounds that talk to the database directly, because they can leak your database credentials and cause other security issues.
## Recommended approach
You can safely use the [API](/docs/mobile/api/overview) and call the endpoints which will run queries on the database.
To do this you need to set up the database on the [web, server side](/docs/web/database/overview) and then use the [API client](/docs/mobile/api/client) to interact with it.
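As a rough illustration of the flow — the typed API client described in the docs is the recommended way; the `/api/posts` endpoint below is hypothetical:

```typescript
// Build the endpoint URL from the configured site URL, then fetch through
// the backend — the server runs the actual database query.
const endpoint = (baseUrl: string, path: string): string =>
  new URL(path, baseUrl).toString();

const getPosts = async (baseUrl: string): Promise<unknown> => {
  const res = await fetch(endpoint(baseUrl, "/api/posts"));
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
};

// e.g. await getPosts(appConfig.url) inside a data-fetching hook
console.log(endpoint("https://myapp.com", "/api/posts"));
// "https://myapp.com/api/posts"
```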
Learn more about its configuration in the web part of the docs, especially in the following sections:
---
url: /docs/mobile/extras
title: Extras
description: See what you get together with the code.
---
## Tips and Tricks
In many places, next to the code you will find some marketing tips, design suggestions, and potential risks. This is to help you build a better product and avoid common pitfalls.
```tsx title="Hero.tsx"
return (
  <h1>
    {/* 💡 Use something that the user can visualize, e.g.
        "Ship your startup while on the toilet" */}
    Best startup in the world
  </h1>
);
```
### Submission tips
When it comes to the mobile app and browser extension, you must submit your product for review by Apple, Google, etc. We have some tips to make sure your submission goes smoothly.
```json title="app.json"
{
  "ios": {
    "infoPlist": {
      /* 🍎 add a descriptive justification for using this permission on iOS */
      "NSCameraUsageDescription": "This app uses the camera to scan barcodes on event tickets."
    }
  }
}
```
We also provide info on how to make your store listings better:
```json title="package.json"
{
  "manifest": {
    /* 💡 Use localized messages to get more visibility in web stores */
    "name": "__MSG_extensionName__",
    "default_locale": "en"
  }
}
```
## 25+ SaaS Ideas
Not sure what to build? We have a list of **25+** SaaS ideas that you can use to get started 🔥
Grouped by category, these ideas are a great way to get inspired and start building your next project.
Including design, copies, marketing tips and potential risks, this list is a great resource for anyone looking to build a SaaS product.
## AI rules, skills, subagents and commands
TurboStarter ships with a set of custom AI rules, skills, subagents, and commands you can use in popular AI editors and tools. They help the AI understand the codebase conventions and generate changes faster and more reliably.
To learn how to set them up and use them effectively, see the [AI-assisted development docs](/docs/web/installation/ai-development).
## Discord community
We have a Discord community where you can ask questions and share your projects. It's a great place to get help and meet other developers. Check more details at [/discord](/discord).
---
url: /docs/mobile/faq
title: FAQ
description: Find answers to common technical questions.
---
## Why isn't everything hidden and configured with one BIG config file?
TurboStarter intentionally exposes the underlying code rather than hiding it behind configuration files (like some starters do). This design choice follows our **you own your code** philosophy, giving you full control and flexibility over your codebase.
While a single config file might seem simpler initially, it often becomes restrictive when you need to customize functionality beyond what the config allows. With direct access to the code, you can modify any part of the system to match your specific requirements.
## I don't know some technology! Should I buy TurboStarter?
You should be prepared for a learning curve or consider learning it first. However, TurboStarter will still work for you if you're willing to learn.
Even without knowing some technologies, you can still use the rest of the features.
## I don't need mobile app or browser extension, what should I do?
You can simply ignore the mobile app and browser extension parts of the project. You can remove the `apps/mobile` and `apps/extension` directories from the project.
The modular nature of TurboStarter allows you to remove parts of the project that you don't need without affecting the rest of the stack.
## I want to use a different provider for X
Sure! TurboStarter is designed to be modular, so configuring a new provider (e.g. for emails, billing, or any other service) is straightforward. You just need to make sure your configuration is compatible with the common interface so it can be plugged into the codebase.
## Will you add more packages in the future?
Yes, we will keep updating TurboStarter with new packages and features. This kit is designed to be modular, allowing for new features and packages to be added without interfering with your existing code. You can always [update your project](/docs/web/installation/update) to the latest version.
## Can I use this kit for a non-SaaS project?
This kit is mainly designed for SaaS projects. If you're building something other than a SaaS, it might include features you don't need. You can still use it for non-SaaS projects, but you may need to remove or modify features that are specific to SaaS use cases.
## Can I use personal accounts only?
Yes! You can disable team accounts and have personal accounts only by setting a feature flag.
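As a rough sketch, such a flag might live in your app's shared config. Note that the flag name and config shape below are hypothetical; check your app's config package for the real one:

```typescript
// Hypothetical sketch — the actual flag name and config location depend
// on your TurboStarter version; check your app's shared config package.
const appConfig = {
  features: {
    // Set to false to disable team accounts and use personal accounts only.
    teamAccounts: false,
  },
};

const isTeamAccountsEnabled = (): boolean => appConfig.features.teamAccounts;
```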
## Does it set up the production instance for me?
No, TurboStarter does not set up the production instance for you. This includes setting up databases, Stripe, or any other services you need. TurboStarter does not have access to your Stripe or Resend accounts, so setup on your end is required. TurboStarter provides the codebase and documentation to help you set up your SaaS project.
## Does the starter include Solito?
No. Solito will not be included in this repo. It is a great tool if you want to share code between your Next.js and Expo app. However, the main purpose of this repo is not the integration between Next.js and Expo — it's the code splitting of your SaaS platforms into a monorepo. You can utilize the monorepo with multiple apps, and it can be any app such as Vite, Electron, etc.
Integrating Solito into this repo isn't hard, and there are a few [official templates](https://github.com/nandorojo/solito/tree/master/example-monorepos) by the creators of Solito that you can use as a reference.
## Does this pattern leak backend code to my client applications?
No, it does not. The `api` package should only be a production dependency in the Next.js application where it's served. The Expo app, browser extension, and all other apps you may add in the future should only add the `api` package as a dev dependency. This lets you have full type safety in your client applications while keeping your backend code safe.
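For illustration, the split might look like this in a client app's manifest (the package names here are assumptions following the `@workspace/*` naming used elsewhere in the kit; adjust to your workspace):

```json title="apps/mobile/package.json"
{
  "name": "@workspace/mobile",
  "devDependencies": {
    "@workspace/api": "workspace:*"
  }
}
```

Because the dependency is dev-only, the client gets the types at build time while the server code is excluded from the production bundle.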
If you need to share runtime code between the client and server, you can create a separate `shared` package for this and import it on both sides.
## How do I get support if I encounter issues?
For support, you can:
1. Visit our [Discord](https://discord.gg/KjpK2uk3JP)
2. Contact us via support email ([hello@turbostarter.dev](mailto:hello@turbostarter.dev))
## Are there any example projects or demos?
Yes - feel free to check out our demo app at [demo.turbostarter.dev](https://demo.turbostarter.dev). Also, you can get inspired by projects built by our customers - take a look at [Showcase](/#showcase).
## How do I deploy my application?
Please check the [production checklist](/docs/web/deployment/checklist) for more information.
## How do I update my project when a new version of the boilerplate is released?
Please read the [documentation for updating your TurboStarter code](/docs/web/installation/update).
## Can I use the React package X with this kit?
Yes, you can use any React package with this kit. The kit is based on React, so you are generally only constrained by the underlying technologies and not by the kit itself. Since you own and can edit all the code, you can adapt the kit to your needs. However, if there are limitations with the underlying technology, you might need to work around them.
## Can I integrate TurboStarter into an existing project?
TurboStarter is a full-stack starter intended to be used as the foundation of your app. You can copy individual modules or patterns into an existing codebase, but retrofitting the entire starter into a mature project is typically not recommended and is not officially supported. If you choose to copy parts, prefer isolating boundaries (e.g., `packages/` modules) and aligning interfaces first.
## Where can I deploy my application?
TurboStarter targets modern Node.js/Next.js runtimes. You can deploy to providers that support these environments, such as [Vercel](/docs/web/deployment/vercel), [Railway](/docs/web/deployment/railway), [Render](/docs/web/deployment/render), [Fly](/docs/web/deployment/fly), or [Netlify](/docs/web/deployment/netlify) - following their Next.js guidance. Review our [production checklist](/docs/web/deployment/checklist) before going live.
## Can I easily swap providers (billing, email, etc.)?
Yes. The starter organizes integrations behind clear interfaces so you can replace providers (e.g., billing or email) with minimal surface changes. Keep your implementation behind a module boundary and adapt to the existing types to avoid ripple effects.
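A minimal sketch of what such a boundary looks like (the interface shape here is an assumption for illustration, not TurboStarter's actual mailer API):

```typescript
// Illustrative sketch — callers depend only on the interface, so swapping
// providers (e.g. Resend for Postmark) touches a single adapter.
interface Mailer {
  send(to: string, subject: string, body: string): Promise<void>;
}

// Stand-in adapter; a real one would wrap your provider's SDK
// behind the same interface.
const createConsoleMailer = (): Mailer => ({
  send: async (to, subject, body) => {
    console.log(`[mail] to=${to} subject="${subject}" (${body.length} chars)`);
  },
});

const mailer: Mailer = createConsoleMailer();
await mailer.send("user@example.com", "Welcome!", "Thanks for signing up.");
```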
---
url: /docs/mobile
title: Introduction
description: Get started with TurboStarter mobile kit.
---
Welcome to the TurboStarter documentation. This is your starting point for learning about the starter kit, its structure, features, and how to use it for your app development.
## What is TurboStarter?
TurboStarter is a fullstack starter kit that helps you build scalable and production-ready web apps, mobile apps, and browser extensions in minutes.
Looking to bootstrap your project quickly? Check out our [TurboStarter CLI guide](/blog/the-only-turbo-cli-you-need-to-start-your-next-project-in-seconds) to get started in seconds.
## Demo apps
TurboStarter provides a suite of live demo applications you can try instantly - right in your browser, on your phone, or via browser extensions. Try them live by clicking the buttons below.
## Principles
TurboStarter is built with the following principles:
* **As simple as possible** - It should be easy to understand, easy to use, and strongly avoid overengineering things.
* **As few dependencies as possible** - It should have as few dependencies as possible to allow you to take full control over every part of the project.
* **As performant as possible** - It should be fast and light without any unnecessary overhead.
## Features
Before diving into the technical details, let's overview the features TurboStarter provides.
### Multi-platform development
* [Web](/docs/web/stack): Build web apps with React, Next.js, and Tailwind CSS.
* [Mobile](/docs/mobile/stack): Build mobile apps with React Native and Expo.
* [Browser extension](/docs/extension/stack): Build browser extensions with React and WXT.
If you're specifically interested in AI-related features (such as chatbots, agents, image generation, etc.), check out our dedicated [TurboStarter AI documentation](/ai/docs), which includes specialized guidance for building AI-powered applications.
Most features are available on all platforms. You can use the **same codebase** to build web, mobile, and browser extension apps.
### Authentication
### Organizations/teams
### Billing
### Database
### API
### Admin
### AI
Seamless integration of OpenAI, Anthropic, Groq, Mistral, and Gemini. For more advanced AI features, check out [TurboStarter AI](/ai/docs).
### Internationalization
### Emails
### Landing page
### Marketing
### Storage
### CMS
### Theming
### Analytics
### Monitoring
### Deployment
### Testing
## Use like LEGO blocks
The biggest advantage of TurboStarter is its modularity. You can use the entire stack or just the parts you need. It's like LEGO blocks - you can build anything you want with it.
If you don't need a specific feature, feel free to remove it without affecting the rest of the stack.
This approach allows for:
* **Easy feature integration** - plug new features into the kit with minimal changes.
* **Simplified maintenance** - keep the codebase clean and maintainable.
* **Core feature separation** - distinguish between core features and custom features.
* **Additional modules** - easily add modules like billing, CMS, monitoring, logger, mailer, and more.
## Scope of this documentation
While building a SaaS application involves many moving parts, this documentation focuses specifically on TurboStarter. For in-depth information on the underlying technologies, please refer to their respective official documentation.
This documentation will guide you through configuring, running, and deploying the kit, and will provide helpful links to the official documentation of technologies where necessary.
## Enjoy!
This documentation is designed to be easy to follow and understand. If you have any questions or need help, feel free to reach out to us at [hello@turbostarter.dev](mailto:hello@turbostarter.dev).
Explore new features, build amazing apps, and have fun! 🚀
---
url: /docs/mobile/installation/ai-development
title: AI-assisted development
description: Configure AI coding assistants like Cursor, Claude Code, Codex, or Antigravity to build your SaaS faster.
---
TurboStarter includes pre-configured rules, skills, subagents, and commands for AI coding assistants. These help AI understand your codebase, follow project conventions, and produce consistent, high-quality code.
Everything works out-of-the-box with all major AI tools like [Cursor](https://cursor.com), [Claude Code](https://claude.ai/code), [Codex](https://openai.com/codex), [Antigravity](https://antigravity.dev), and many more. Just open the project in your AI tool and start coding with the help of LLMs.
## Structure
The codebase organizes AI-specific configuration in the following structure:
The `.agents/` directory contains shared skills, commands, and agents that ship with TurboStarter. The tool-specific folders (e.g., `.cursor/`, `.claude/`, `.github/`) are [symlinked](https://en.wikipedia.org/wiki/Symbolic_link) to the `.agents/` directory, allowing you to add your own skills, commands, and agents to all tools at once while also customizing them individually.
## Rules
Rules provide persistent instructions that LLMs can read when they need to know more about specific parts of your project. They define code conventions, project structure, and workflow guidelines.
### AGENTS.md
The `AGENTS.md` file at the project root is the primary rules file. It uses a standardized format recognized by [most](https://agents.md) AI coding tools.
```md title="AGENTS.md"
## Agent rules
**DO:**
- Read existing files before editing; understand imports and structure first
- Keep diffs minimal and scoped to the request
...
**DON'T:**
- Commit, push, or modify git state unless explicitly asked
- Run destructive commands (`reset --hard`, force-push) without permission
...
## Code conventions
- TypeScript: functional, declarative; no classes
- File layout: exported component → subcomponents → helpers → types
```
Rules should be concise and actionable. Include only information the AI **cannot infer from code alone**, such as:
* Bash commands and common workflows
* Code style rules that differ from defaults
* Architectural decisions specific to your project
* Common gotchas or non-obvious behaviors
Keep rules short. Overly long files cause AI to ignore important instructions. If you notice the AI not following a rule, the file might be too verbose.
### CLAUDE.md
The `CLAUDE.md` file provides compatibility with Claude-specific tools. In TurboStarter, it simply references the main rules file:
```md title="CLAUDE.md"
@AGENTS.md
```
This ensures consistent behavior across all AI tools without duplicating content.
You can also nest AGENTS.md files in subdirectories to create more granular rules for specific parts of your project.
For example, you can create an `AGENTS.md` file in the `apps/web/` directory to add rules for the web application, or an `AGENTS.md` file in the `packages/api/` directory to add specific rules for the API.
The right approach depends on your project's complexity and where you need more targeted AI assistance.
Most providers allow you to add tool-specific rules. For example, Cursor rules go in the `.cursor/` directory, while Claude rules go in the `.claude/` directory.
If you primarily use one AI tool in your workflow, consider creating tool-specific rules rather than relying solely on the shared `AGENTS.md` file.
## Skills
Skills are modular capabilities that extend AI functionality with domain-specific knowledge. They package instructions, workflows, and reference materials that AI loads on-demand when relevant.
### How skills work
Skills are organized as directories containing a `SKILL.md` file and optionally a `references/` directory with additional documentation:
Each skill includes YAML frontmatter that describes when to use it:
```md title="SKILL.md"
---
name: better-auth-best-practices
description: Skill for integrating Better Auth - the comprehensive TypeScript authentication framework.
---
# Better Auth Integration Guide
**Always consult [better-auth.com/docs](https://better-auth.com/docs) for code examples and latest API.**
...
```
AI tools read the `description` field to determine when to apply the skill automatically. When triggered, the full skill content loads into context.
### Included skills
TurboStarter ships with several pre-configured skills covering common development scenarios:
| Skill | Description |
| ----------------------------- | ---------------------------------------------- |
| `turborepo` | Turborepo best practices and configuration |
| `better-auth-best-practices` | Auth integration patterns and API reference |
| `building-native-ui` | Mobile UI patterns with Expo and React Native |
| `native-data-fetching` | Network requests, caching, and offline support |
| `vercel-react-best-practices` | React and Next.js performance optimization |
| `vercel-composition-patterns` | Component architecture and API design |
| `web-design-guidelines` | UI review and accessibility compliance |
| `find-skills` | Discover and install additional skills |
### Installing skills
To install additional skills, we recommend using [Skills CLI](https://skills.sh), which allows you to easily install skills from the [open skills ecosystem](https://skills.sh). To install a skill, run:
```bash
npx skills add
```
Browse available skills at [skills.sh](https://skills.sh).
### Creating custom skills
If you have project-specific workflows, you can create your own skills:
Create a directory in `.agents/skills/`:
```bash
mkdir -p .agents/skills/my-custom-skill
```
Add a `SKILL.md` file with frontmatter:
```md title=".agents/skills/my-custom-skill/SKILL.md"
---
name: my-custom-skill
description: Handles X workflow. Use when working with Y or when user asks about Z.
---
# My Custom Skill
## Instructions
1. First, check the existing patterns in `packages/api/`
2. Follow the established naming conventions
3. ...
```
The skill will be automatically available in your AI tool. Test by asking about the topic described in the `description` field.
## Subagents
Subagents are specialized AI assistants that handle specific types of tasks in isolation. They operate in their own context window, preventing long research or review tasks from cluttering your main conversation.
### Included subagents
TurboStarter includes a code reviewer subagent:
```md title=".agents/agents/code-reviewer.md"
---
name: code-reviewer
description: Reviews code for quality, conventions, and potential issues.
model: inherit
readonly: true
---
You are a senior code reviewer for the TurboStarter project...
```
The subagent runs in read-only mode and checks for:
* TypeScript best practices (no `any`, explicit types)
* Component conventions (named exports, props interface)
* Architecture patterns (shared logic in packages)
* Security issues (no hardcoded secrets, proper auth)
### Using subagents
Invoke subagents explicitly in your prompts:
```txt
Use the code-reviewer to review the changes in src/modules/auth/
```
Or let the AI delegate automatically based on the task.
### Creating custom subagents
Add subagent definitions to `.agents/agents/`:
```md title=".agents/agents/security-auditor.md"
---
name: security-auditor
description: Security specialist. Use when implementing auth, payments, or handling sensitive data.
model: inherit
readonly: true
---
You are a security expert auditing code for vulnerabilities.
When invoked:
1. Identify security-sensitive code paths
2. Check for common vulnerabilities (injection, XSS, auth bypass)
3. Verify secrets are not hardcoded
4. Review input validation and sanitization
Report findings by severity: Critical, High, Medium, Low.
```
## Commands
Commands are reusable workflows triggered with a `/` prefix in chat. They standardize common tasks and encode institutional knowledge.
### Included commands
TurboStarter includes a feature setup command:
```md title=".agents/commands/setup-new-feature.md"
# Setup New Feature
Set up a new feature in the TurboStarter.dev website following project conventions.
## Before starting
1. **Clarify scope**: What part of the site needs this feature?
2. **Check existing code**: Look in `packages/*` for reusable logic
3. **Identify shared vs app-specific**: Shared logic goes in `packages/*`
## Project structure
...
```
### Using commands
Type `/` in chat to see available commands:
```txt
/setup-new-feature
```
Follow the guided workflow to scaffold features consistently.
### Creating custom commands
Add command definitions to `.agents/commands/`:
```md title=".agents/commands/fix-issue.md"
# Fix GitHub Issue
Fix a GitHub issue following project conventions.
## Steps
1. Use `gh issue view ` to get issue details
2. Search the codebase for relevant files
3. Implement the fix following existing patterns
4. Write tests to verify the fix
5. Run `pnpm typecheck` and `pnpm lint`
6. Create a descriptive commit message
7. Push and create a PR
```
## Model Context Protocol (MCP)
MCP enables AI tools to connect to external services like databases, APIs, and third-party tools. This allows AI to access real data and perform actions beyond code generation.
### Common MCP integrations
| Service | Use case |
| ---------------------------------------------------------------------------------------------- | -------------------------------------- |
| [GitHub](https://github.com/github/github-mcp-server) | Create issues, open PRs, read comments |
| [Database](https://github.com/crystaldba/postgres-mcp) | Query schemas, inspect data |
| [Figma](https://help.figma.com/hc/en-us/articles/32132100833559-Guide-to-the-Figma-MCP-server) | Import designs for implementation |
| [Linear](https://linear.app/docs/mcp)/[Jira](https://github.com/sooperset/mcp-atlassian) | Read tickets, update status |
| [Browser](https://browsermcp.io/) | Test UI, take screenshots |
For a full list of available MCP servers, see the [Cursor documentation](https://cursor.com/docs/context/mcp/directory) or the [MCP directory](https://www.pulsemcp.com/servers/).
### Setting up MCP
MCP configuration varies by tool. Generally, you create a configuration file that specifies server connections:
```json title="mcp.json"
{
"mcpServers": {
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_TOKEN": "${env:GITHUB_TOKEN}"
}
}
}
}
```
Consult your AI tool's documentation for specific setup instructions.
## Documentation
Like the rest of TurboStarter, the documentation is optimized for AI-assisted workflows. You can chat with it and get answers about specific features using the **most up-to-date** information.
### `llms.txt`
You can access the entire TurboStarter documentation in Markdown format at [/llms.txt](/llms.txt). This allows you to ask any LLM (assuming it has a large enough context window) questions about TurboStarter using the most up-to-date documentation.
#### Example usage
For example, to prompt an LLM with questions about TurboStarter:
1. Copy the documentation contents from [/llms.txt](/llms.txt)
2. Use the following prompt format:
```txt
Documentation:
{paste documentation here}
---
Based on the above documentation, answer the following:
{your question}
```
This works with any AI tool that accepts large context, regardless of whether it has native integration with your editor.
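If you want to automate this, the prompt format above can be assembled programmatically. A small sketch (the `llms.txt` URL is the one from this page; adjust the host for your deployment):

```typescript
// Sketch: build the prompt format shown above from the docs and a question.
const buildPrompt = (docs: string, question: string): string =>
  [
    "Documentation:",
    docs,
    "---",
    "Based on the above documentation, answer the following:",
    question,
  ].join("\n");

// Usage (network call commented out so the sketch stays self-contained):
// const docs = await (await fetch("https://turbostarter.dev/llms.txt")).text();
// console.log(buildPrompt(docs, "How do I add a new billing provider?"));
```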
### Markdown format
Each documentation page is also available in raw Markdown format. You can copy the contents using the *Copy Markdown* button in the page header.
You can also access it directly by adding the `.mdx` extension to the specific documentation page. For example, to access this page, visit [/docs/web/installation/ai-development.mdx](/docs/web/installation/ai-development.mdx).
### Open in ...
To make chatting with TurboStarter documentation even more convenient, each page includes an *Open in...* button in the header that opens the documentation directly in your preferred chatbot.
For example, opening the documentation page in [ChatGPT](https://chatgpt.com) will create a new chat with the documentation automatically attached as a context:

## Best practices
Following best practices helps you get the most out of AI-assisted development. Review the tips below and share your experiences on our [Discord](https://discord.gg/KjpK2uk3JP) server.
### Plan before coding
The most impactful change you can make is planning before implementation. Planning forces clear thinking about what you're building and gives the AI concrete goals to work toward.
For complex tasks, use this workflow:
1. **Explore**: Have the AI read files and understand the existing architecture
2. **Plan**: Ask for a detailed implementation plan with file paths and code references
3. **Implement**: Execute the plan, verifying against each step
4. **Commit**: Review changes and commit with descriptive messages
Not every task needs a detailed plan. For quick changes or familiar patterns, jumping straight to implementation is fine.
### Provide verification criteria
AI performs dramatically better when it can verify its own work. Include tests, screenshots, or expected outputs:
```txt
// Instead of:
"implement email validation"
// Use:
"write a validateEmail function. test cases: user@example.com → true,
invalid → false, user@.com → false. run tests after implementing."
```
Without clear success criteria, the AI might produce something that looks right but doesn't actually work. Verification can be a test suite, a linter, or a command that checks output.
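For reference, the prompt above might yield something like this sketch (a pragmatic check, deliberately not RFC 5322-complete):

```typescript
// Sketch of what such a prompt might produce — a simple, verifiable function.
const validateEmail = (email: string): boolean =>
  /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);

// The test cases from the prompt above:
console.log(validateEmail("user@example.com")); // true
console.log(validateEmail("invalid")); // false
console.log(validateEmail("user@.com")); // false
```

The point is not the regex itself but that each test case gives the AI (and you) a concrete pass/fail signal.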
### Write specific prompts
The more precise your instructions, the fewer corrections you'll need. Reference specific files, mention constraints, and point to example patterns:
| Strategy | Before | After |
| ---------------------- | ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Scope the task** | "add tests for auth" | "write a test for `auth.ts` covering the logout edge case, using patterns in `__tests__/` and avoiding mocks" |
| **Reference patterns** | "add a calendar widget" | "look at how existing widgets are implemented. `HotDogWidget.tsx` is a good example. follow the pattern to implement a calendar widget" |
| **Describe symptoms** | "fix the login bug" | "users report login fails after session timeout. check the auth flow in `src/auth/`, especially token refresh. write a failing test, then fix it" |
### Use absolute rules
When writing rules, be direct. Absolute rules beat suggestions. "Always verify ownership with `userId` before database writes" works. "Consider checking ownership" gets ignored.
Structure rules with clear "MUST do" and "MUST NOT do" sections:
```md
## MUST DO
- Verify ownership before ALL database writes
- Run `pnpm typecheck` after every implementation
- Use `@workspace/ui` components - never install shadcn directly
## MUST NOT DO
- Never use `any` type - fix the types instead
- Never store secrets in code - use environment variables
- Never create new UI components if one exists in @workspace/ui
```
### Use rules as a router
Tell AI where and how to find things. This prevents hallucinated file paths and inconsistent patterns:
```md
## Where to find things
- Database schemas: `packages/db/src/schema/`
- Server action patterns: `apps/web/app/api/`
- UI components: `packages/ui/src/`
- Existing features to reference: `apps/web/app/`
```
### Course-correct early
Stop AI mid-action if it goes off track. Most tools support an interrupt key (usually `Esc`). Redirect early rather than waiting for a complete but wrong implementation.
If you've corrected the AI more than twice on the same issue in one session, the context is cluttered with failed approaches. Start fresh with a more specific prompt that incorporates what you learned.
### Manage context aggressively
Long sessions accumulate irrelevant context that degrades AI performance. Clear context between unrelated tasks or start fresh sessions for new features.
**Start a new conversation when:**
* You're moving to a different task or feature
* The AI seems confused or keeps making the same mistakes
* You've finished one logical unit of work
**Continue the conversation when:**
* You're iterating on the same feature
* The AI needs context from earlier in the discussion
* You're debugging something it just built
### Use subagents for research
When exploring unfamiliar code, delegate to subagents. They run in separate context windows and report back summaries, keeping your main conversation clean for implementation.
This is especially useful for:
* Codebase exploration that might read many files
* Code review (fresh context prevents bias toward code just written)
* Security audits and performance analysis
### Review AI-generated code carefully
AI-generated code can look right while being subtly wrong. Read the diffs and review carefully. The faster the AI works, the more important your review process becomes.
For significant changes, consider:
* Running a dedicated review pass after implementation
* Asking the AI to generate architecture diagrams
* Using a separate AI session to review the changes (fresh context)
### Add business domain context
Generic rules produce generic code. Add your application's domain to help AI understand context:
```md
## Business Domain
This application is a project management tool for software teams.
### Key Entities
- **Projects**: User-created workspaces containing tasks
- **Tasks**: Work items with status, assignee, and due date
### Business Rules
- Projects belong to organizations (use organizationId for queries)
- Tasks require project membership to view (check via RBAC)
- Deleted projects cascade-delete all tasks
```
## Troubleshooting
Common issues when using AI coding assistants and how to resolve them:
**The AI isn't following your rules:**
1. Check that `AGENTS.md` exists at the project root
2. Verify the file contains valid Markdown
3. Some tools require reopening the project to reload rules
4. Check if the file is too long; important rules may be getting lost in the noise
5. Try adding emphasis (e.g., "IMPORTANT" or "MUST") to critical instructions
Long sessions cause AI to "forget" rules and earlier instructions. This happens because:
* Context windows fill up with irrelevant information
* Important instructions get pushed out during summarization
* Failed approaches pollute the conversation
**Solutions:**
1. Start fresh sessions for complex or unrelated tasks
2. Re-state important rules when you notice drift
3. After two failed corrections, clear context and write a better initial prompt
**A skill isn't being applied:**
1. Verify the skill's `description` field clearly describes when to use it
2. Try invoking the skill explicitly by name (e.g., `/skill-name`)
3. Check that the `SKILL.md` file has valid YAML frontmatter
4. Skills may require explicit invocation for workflows with side effects
**A subagent isn't being invoked:**
1. Ensure subagent files are in the correct directory (`.agents/agents/`)
2. Check the frontmatter for syntax errors
3. Some tools require specific configuration to enable subagents
4. Verify the `name` and `description` fields are properly defined
AI can produce plausible-looking implementations that don't handle edge cases or reference non-existent APIs.
**Prevention:**
1. Always provide verification criteria (tests, expected outputs)
2. Use typed languages and configure linters
3. Point AI to reference implementations rather than documenting APIs
4. Run verification commands after every implementation
**Recovery:**
1. Don't try to fix incorrect code through follow-up prompts repeatedly
2. Revert changes and start fresh with a more specific prompt
3. Use a dedicated review pass to catch issues before committing
When you have multiple `AGENTS.md` files (root and package-level), they can conflict. Generally, the more specific file (closer to the code being edited) takes priority.
**Solutions:**
1. Check which `AGENTS.md` is being read by asking the AI
2. Consolidate conflicting rules into one location
3. Use package-level rules only for domain-specific guidance
Unbounded exploration fills context with irrelevant information.
**Solutions:**
1. Scope investigations narrowly: "search for JWT validation in `src/auth/`" instead of "find auth code"
2. Use subagents for exploration so it doesn't consume your main context
3. Specify file types or directories to limit search scope
Large codebases or long sessions can consume significant resources.
**Solutions:**
1. Use compact/summarize features regularly to reduce context size
2. Close and restart between major tasks
3. Make sure large build directories (e.g., `node_modules`, `dist`) are excluded via `.gitignore` or your tool's ignore file
4. Disable unnecessary extensions that might impact performance
## Learn more
Dive deeper into AI-assisted development with these resources. They cover open standards, tool directories, and specifications that power modern AI coding workflows.
---
url: /docs/mobile/installation/clone
title: Cloning repository
description: Get the code to your local machine and start developing your app.
---
Ensure you have Git installed on your local machine before proceeding. You can download it from [git-scm.com](https://git-scm.com).
## Git clone
Clone the repository using the following command:
```bash
git clone git@github.com:turbostarter/core
```
By default, we're using [SSH](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) for all Git commands. If you don't have it configured, please refer to the [official documentation](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) to set it up.
Alternatively, you can use HTTPS to clone the repository:
```bash
git clone https://github.com/turbostarter/core
```
You can also use the [GitHub CLI](https://cli.github.com/) or [GitHub Desktop](https://desktop.github.com/) for Git operations.
## Git remote
After cloning the repository, remove the original origin remote:
```bash
git remote rm origin
```
Add the upstream remote pointing to the original repository to pull updates:
```bash
git remote add upstream git@github.com:turbostarter/core
```
Once you have your own repository set up, add your repository as the origin:
```bash
git remote add origin
```
## Staying up to date
To pull updates from the upstream repository, run the following command daily (preferably with your morning coffee ☕):
```bash
git pull upstream main
```
This ensures your repository stays up to date with the latest changes.
Check [Updating codebase](/docs/web/installation/update) for more details on updating your codebase.
---
url: /docs/mobile/installation/commands
title: Common commands
description: Learn about common commands you need to know to work with the mobile project.
---
You don't need these commands to kickstart your project, but it's useful to know they exist for when you need them.
You can set up aliases for these commands in your shell configuration file. For example, you can set up an alias for `pnpm` to `p`:
```bash title="~/.bashrc"
alias p='pnpm'
```
Or, if you're using [Zsh](https://ohmyz.sh/), you can add the alias to `~/.zshrc`:
```bash title="~/.zshrc"
alias p='pnpm'
```
Then run `source ~/.bashrc` or `source ~/.zshrc` to apply the changes.
You can now use `p` instead of `pnpm` in your terminal. For example, `p i` instead of `pnpm install`.
To inject environment variables into the command you run, prefix it with `with-env`:
```bash
pnpm with-env
```
For example, `pnpm with-env pnpm build` will run `pnpm build` with the environment variables injected.
Some commands, like `pnpm dev`, automatically inject the environment variables for you.
## Installing dependencies
To install the dependencies, run:
```bash
pnpm install
```
## Starting development server
Start development server by running:
```bash
pnpm dev
```
## Building project
To build the project (including all apps and packages), run:
```bash
pnpm build
```
## Building specific app/package
To build a specific app/package, run:
```bash
pnpm turbo build --filter=
```
## Cleaning project
To clean the project, run:
```bash
pnpm clean
```
Then, reinstall the dependencies:
```bash
pnpm install
```
## Formatting code
To check for formatting errors using [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html), run:
```bash
pnpm format
```
To fix formatting errors using [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html), run:
```bash
pnpm format:fix
```
## Linting code
To check for linting errors using [Oxlint](https://oxc.rs/docs/guide/usage/linter.html), run:
```bash
pnpm lint
```
To fix linting errors using [Oxlint](https://oxc.rs/docs/guide/usage/linter.html), run:
```bash
pnpm lint:fix
```
## Adding UI components
To add a new web component, run:
```bash
pnpm --filter @workspace/ui-web ui:add
```
This command will add and export a new component to `@workspace/ui-web` package.
To add a new mobile component, run:
```bash
pnpm --filter @workspace/ui-mobile ui:add
```
This command will add and export a new component to `@workspace/ui-mobile` package.
## Services commands
To run the services containers locally, you need to have [Docker](https://www.docker.com/) installed on your machine.
You can always use the cloud-hosted solution (e.g. [Neon](https://neon.tech/), [Turso](https://turso.tech/) for database) for your projects.
We have a few commands to help you manage the services containers (for local development).
### Starting containers
To start the services containers, run:
```bash
pnpm services:start
```
It will run all the services containers. You can check their configs in `docker-compose.yml`.
### Setting up services
To set up all the services, run:
```bash
pnpm services:setup
```
It will start all the services containers and run necessary setup steps.
### Stopping containers
To stop the services containers, run:
```bash
pnpm services:stop
```
### Displaying status
To check the status and logs of the services containers, run:
```bash
pnpm services:status
```
### Displaying logs
To display the logs of the services containers, run:
```bash
pnpm services:logs
```
### Database commands
We have a few commands to help you manage the database leveraging [Drizzle CLI](https://orm.drizzle.team/kit-docs/commands).
#### Generating migrations
To generate a new migration, run:
```bash
pnpm with-env turbo db:generate
```
It will create a new migration `.sql` file in the `packages/db/migrations` folder.
#### Running migrations
To run the migrations against the db, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:migrate
```
It will apply all the pending migrations.
#### Pushing changes directly
Make sure you know what you're doing before pushing changes directly to the db.
To push changes directly to the db, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:push
```
It lets you push your schema changes directly to the database and omit managing SQL migration files.
#### Checking database status
To check the status of the database, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:status
```
It will display the status of the applied migrations and the pending ones.
```bash
Applied migrations:
- 0000_cooing_vargas
- 0001_curious_wallflower
- 0002_good_vertigo
- 0003_peaceful_devos
- 0004_fat_mad_thinker
- 0005_yummy_bucky
- 0006_glorious_vargas
Pending migrations:
- 0007_nebulous_havok
```
#### Resetting database
To reset the database, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:reset
```
It will reset the database to the initial state.
#### Seeding database
To seed the database with some example data (for development purposes), run:
```bash
pnpm with-env turbo db:seed
```
It will populate your database with some example data.
#### Checking database
To check the database schema consistency, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:check
```
#### Studying database
To explore the database schema in the browser, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:studio
```
This will start the Studio on [https://local.drizzle.studio](https://local.drizzle.studio).
## Tests commands
### Running tests
To run the tests, run:
```bash
pnpm test
```
This will run all the tests in the project using Turbo tasks. As it leverages Turbo caching, it's [recommended](/docs/web/tests/unit#configuration) to run it in your CI/CD pipeline.
### Running test projects
To run tests for all Vitest [Test Projects](https://vitest.dev/guide/projects), run:
```bash
pnpm test:projects
```
This will run all the tests in the project using Vitest.
### Watching tests
To watch the tests, run:
```bash
pnpm test:projects:watch
```
This will watch the tests for all [Test Projects](https://vitest.dev/guide/projects) and run them automatically when you make changes.
### Generating code coverage
To generate a code coverage report, run:
```bash
pnpm turbo test:coverage
```
This will generate a code coverage report in the `coverage` directory under the `tooling/vitest` package.
### Viewing code coverage
To preview the code coverage report in the browser, run:
```bash
pnpm turbo test:coverage:view
```
This will launch the report's `.html` file in your default browser.
---
url: /docs/mobile/installation/conventions
title: Conventions
description: Some standard conventions used across TurboStarter codebase.
---
You're not required to follow these conventions; they're simply a standard set of practices used in the core kit. If you like them, we encourage you to keep them during your usage of the kit so you have a consistent code style that you and your teammates understand.
## Turborepo packages
In this project, we use [Turborepo packages](https://turbo.build/repo/docs/core-concepts/internal-packages) to define reusable code that can be shared across multiple applications.
* **Apps** are used to define the main application, including routing, layout, and global styles.
* **Packages** share reusable code and add functionality across multiple applications. They're configurable from the main application.
**Recommendation:** Do not create a package for your app code unless you plan to reuse it across multiple applications or are experienced in writing library code.
If your application is not intended for reuse, keep all code in the app folder. This approach saves time and reduces complexity, both of which are beneficial for fast shipping.
**Experienced developers:** If you have the experience, feel free to create packages as needed.
## Imports and paths
When importing modules from packages or apps, use the following conventions:
* **From a package:** Use `@workspace/package-name` (e.g., `@workspace/ui`, `@workspace/api`, etc.).
* **From an app:** Use `~/` (e.g., `~/components`, `~/config`, etc.).
## Enforcing conventions
We don't enforce complex rules or specific style guides that are not relevant to the project, giving you more freedom to customize things to your needs.
To enforce these conventions, we use the following tools:
* [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html) is a [Prettier-compatible](https://oxc.rs/docs/guide/usage/formatter/migrate-from-prettier.html) tool used to enforce code formatting.
* [Oxlint](https://oxc.rs/docs/guide/usage/linter.html) is an [ESLint-compatible](https://oxc.rs/docs/guide/usage/linter/migrate-from-eslint.html) tool used to enforce code quality and best practices.
* [TypeScript](https://www.typescriptlang.org/) is used to enforce type safety.
## Code health
TurboStarter provides a set of tools to ensure code health and quality in your project.
### GitHub Actions
By default, TurboStarter sets up GitHub Actions to run tests on every push to the repository. You can find the workflow configuration in the `.github/workflows` directory.
The workflow has multiple stages:
* `format` - runs Oxfmt to format the code.
* `lint` - runs Oxlint to check for linting errors.
* `test` - runs tests.
### Git hooks
Together with TurboStarter, we have set up a `pre-commit` hook that will check for linting and formatting errors in the files being committed.
It's configured using [Lefthook](https://lefthook.dev), which supports multiple hooks and can be configured to run commands on specific files or directories.
Feel free to customize the hook to your needs, e.g. to check consistency of commit messages (useful for generating changelogs) using [commitlint](https://commitlint.js.org/):
```yaml title="lefthook.yml"
commit-msg:
commands:
"lint commit message":
run: pnpm commitlint --edit {1}
```
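At its core, a conventional-commit check like commitlint validates the message against a `type(scope): subject` pattern. Here's a minimal shell sketch of that idea (a simplified illustration, not commitlint's actual rule set; the message and type list are made up):

```shell
# Validate a commit message against a simplified conventional-commit pattern.
msg="feat(auth): add magic link sign-in"

if printf '%s' "$msg" | grep -Eq '^(feat|fix|docs|chore|refactor|test)(\([a-z-]+\))?: .+'; then
  echo "commit message ok"
else
  echo "commit message rejected"
fi
```

A message like `update stuff` would fail the same check, which is exactly the kind of inconsistency the hook is there to catch.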
---
url: /docs/mobile/installation/dependencies
title: Managing dependencies
description: Learn how to manage dependencies in your project.
---
We chose [pnpm](https://pnpm.io/) as the package manager.
It's a fast, disk-space-efficient package manager that uses hard links and symlinks to store each version of a module only once on disk. It also has great [monorepo support](https://pnpm.io/workspaces). Of course, you can switch to [Bun](https://bunpkg.com), [yarn](https://yarnpkg.com), or [npm](https://www.npmjs.com) with minimal effort.
## Install dependency
To install a package you need to decide whether you want to install it to the root of the monorepo or to a specific workspace. Installing it to the root makes it available to all packages, while installing it to a specific workspace makes it available only to that workspace.
To install a package globally, run:
```bash
pnpm add -w <package-name>
```
To install a package to a specific workspace, run:
```bash
pnpm add --filter <workspace> <package-name>
```
For example:
```bash
pnpm add --filter @workspace/ui motion
```
It will install `motion` to the `@workspace/ui` workspace.
## Remove dependency
Removing a package is the same as installing but with the `remove` command.
To remove a package globally, run:
```bash
pnpm remove -w <package-name>
```
To remove a package from a specific workspace, run:
```bash
pnpm remove --filter <workspace> <package-name>
```
## Update a package
Updating is a bit easier since there is a nice way to update a package in all workspaces at once:
```bash
pnpm update -r
```
When you update a package, pnpm will respect the [semantic versioning](https://docs.npmjs.com/about-semantic-versioning) rules defined in the `package.json` file. If you want to update a package to the latest version, you can use the `--latest` flag.
## Renovate bot
By default, TurboStarter comes with [Renovate](https://www.npmjs.com/package/renovate) enabled. It is a tool that helps you manage your dependencies by automatically creating pull requests to update your dependencies to the latest versions. You can find its configuration in the `.github/renovate.json` file. Learn more about it in the [official docs](https://docs.renovatebot.com/configuration-options/).
When it creates a pull request, it is treated as a normal PR, so all tests and preview deployments will run. **It is recommended to always preview and test the changes in the staging environment before merging the PR to the main branch to avoid breaking the application.**
---
url: /docs/mobile/installation/development
title: Development
description: Get started with the code and develop your mobile SaaS.
---
## Prerequisites
To get started with TurboStarter, ensure you have the following installed and set up:
* [Node.js](https://nodejs.org/en) (24.x or higher)
* [Docker](https://www.docker.com) (only if you want to use local services e.g. database)
* [pnpm](https://pnpm.io)
* [Firebase](https://firebase.google.com) project (optional for some features - check [Firebase project](/docs/mobile/installation/firebase) section for more details)
## Project development
### Set up environment
We won't copy the official docs, as there is quite a bit of setup you need to do to get started with iOS and Android development, and it also depends on which approach you want to take.
[Check this official setup guide to get started](https://docs.expo.dev/get-started/set-up-your-environment/). After you're done with the setup, go back to this guide and continue with the next step.
You can choose whether to develop the app for iOS or Android, using either a real device or a simulator.
We recommend using simulators with [development builds](https://docs.expo.dev/develop/development-builds/create-a-build/) for development, as it's a more realistic and reliable approach. It also won't limit you in terms of native dependencies (required for e.g. [analytics](/docs/mobile/analytics/overview)).
Of course, you can start with the simplest approach (using [Expo Go](https://expo.dev/go)) and switch to a different one as you iterate further.
### Install dependencies
Install the project dependencies by running the following command:
```bash
pnpm i
```
### Setup environment variables
Create `.env.local` files from the `.env.example` files and fill in the required environment variables.
You can use the following command to recursively copy the `.env.example` files to the `.env.local` files:
```bash
find . -name ".env.example" -exec sh -c 'cp "$1" "${1%.example}.local"' _ {} \;
```
On Windows, you can use PowerShell instead:
```powershell
Get-ChildItem -Recurse -Filter ".env.example" | ForEach-Object {
  Copy-Item $_.FullName ($_.FullName -replace '\.example$', '.local')
}
```
Check [Environment variables](/docs/web/configuration/environment-variables) for more details on setting up environment variables.
### Setup services
If you want to use local services like database etc. (**recommended for development purposes**), ensure Docker is running, then setup them with:
```bash
pnpm services:setup
```
This command initiates the containers and runs necessary setup steps, ensuring your services are up to date and ready to use.
### Start development server
To start the application development server, run:
```bash
pnpm dev
```
Your development server should now be running at `http://localhost:8081`.

Scan the QR code with your mobile device to start the app or press the appropriate key on your keyboard to run it on simulator. In case of any issues check the [Troubleshooting](https://docs.expo.dev/troubleshooting/overview/) section.
### Publish to stores
When you're ready to publish the project to the stores, follow [guidelines](/docs/mobile/marketing) and [checklist](/docs/mobile/publishing/checklist) to ensure everything is set up correctly.
---
url: /docs/mobile/installation/editor-setup
title: Editor setup
description: Learn how to set up your editor for the fastest development experience.
---
Of course, you can use any IDE you like, but you'll have the best possible developer experience with this starter kit when using a **VSCode-based** editor with the suggested settings and extensions.
## Settings
We've included most recommended settings in the `.vscode/settings.json` file to make your development experience as smooth as possible. It includes configuration for tools like [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html), [Oxlint](https://oxc.rs/docs/guide/usage/linter.html), and Tailwind CSS, which are used to enforce conventions across the codebase. You can adjust them to your needs.
```json title=".vscode/settings.json"
{
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.fixAll.oxc": "always"
},
"editor.formatOnSave": true
...
}
```
## Extensions
Once you've cloned the project and opened it in VSCode, you should be prompted to install the suggested extensions, which are defined in `.vscode/extensions.json`. If you'd rather install them manually, you can do so at any time.
These are the extensions we recommend:
### OXC
Global extension for static code analysis. It will help you find and fix problems in your JavaScript/TypeScript code using [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html) and [Oxlint](https://oxc.rs/docs/guide/usage/linter.html). It's compatible with [Prettier](https://prettier.io/) and [ESLint](https://eslint.org/).
### Pretty TypeScript Errors
Improves TypeScript error messages shown in the editor.
### Tailwind CSS IntelliSense
Adds IntelliSense for Tailwind CSS classes to enable autocompletion and linting.
---
url: /docs/mobile/installation/firebase
title: Firebase project
description: Learn how to set up a Firebase project for your TurboStarter mobile app.
---
For some features of your mobile app, you will need to set up a Firebase project. It's a requirement enforced by how these features are implemented under the hood and we cannot change it.
You would need a Firebase project to use the following features:
* [Analytics](/docs/mobile/analytics/overview) with [Google Analytics](/docs/mobile/analytics/configuration#google-analytics) provider
Here, we'll go through the steps to set up a Firebase project and link it to your mobile app.
In development environment, the integration with Firebase is possible only when using a [development build](https://docs.expo.dev/workflow/overview/#development-builds). It means that **it won't work in the [Expo Go](https://expo.dev/go) app**.
## Create a Firebase project
First things first, you need to create a Firebase project. You can do this by going to the [Firebase console](https://console.firebase.google.com/) and clicking on "Add Project":

Name it as you want, and proceed to the dashboard.
## Install Firebase SDK
To install React Native Firebase's base app module, run the following command in your mobile app directory:
```bash
npx expo install @react-native-firebase/app
```
## Configure Firebase modules
The recommended approach to configure React Native Firebase is to use [Expo Config Plugins](https://docs.expo.dev/config-plugins/introduction/).
To enable Firebase on the native Android and iOS platforms, create and download the platform configuration files for each platform from your Firebase project.
You can find them in the dashboard under the Firebase project settings:

For Android, it will be a `google-services.json` file, and for iOS it will be a `GoogleService-Info.plist` file.
Then provide paths to the downloaded files in the following `app.config.ts` fields: [`android.googleServicesFile`](https://docs.expo.io/versions/latest/config/app/#googleservicesfile-1) and [`ios.googleServicesFile`](https://docs.expo.io/versions/latest/config/app/#googleservicesfile). Here's what an example configuration looks like:
```ts title="app.config.ts"
export default ({ config }: ConfigContext): ExpoConfig => ({
...config,
ios: {
googleServicesFile: "./GoogleService-Info.plist",
},
android: {
googleServicesFile: "./google-services.json",
},
plugins: [
"@react-native-firebase/app",
[
"expo-build-properties",
{
ios: {
useFrameworks: "static",
},
},
],
],
});
```
For iOS only, since `firebase-ios-sdk` requires `use_frameworks`, you need to configure `expo-build-properties` by adding `"useFrameworks": "static"`.
Listing a module in the Config Plugins (the `plugins` array in the config above) is only required for React Native Firebase modules that involve native installation steps - e.g. modifying the Xcode project, `Podfile`, `build.gradle`, `AndroidManifest.xml` etc. React Native Firebase modules without native steps will work out of the box.
## Generate native code
If you are compiling your app locally, you'll need to regenerate the native code for the platforms to pick up the changes:
```bash
npx expo prebuild --clean
```
Then, you could follow the same steps as in the [development environment setup](/docs/mobile/installation/development) guide to run the app locally or [build a production version](/docs/mobile/publishing/checklist#build-your-app) of your app.
Et voilà! You've set up and linked your Firebase project to your mobile app 🎉
You can learn more about the Firebase integration and its possibilities in the [official documentation](https://rnfirebase.io/).
---
url: /docs/mobile/installation/structure
title: Project structure
description: Learn about the project structure and how to navigate it.
---
The main directories in the project are:
* `apps` - the location of the main apps
* `packages` - the location of the shared code and the API
### `apps` Directory
This is where the apps live. It includes web app (Next.js), mobile app (React Native - Expo), and the browser extension (WXT - Vite + React). Each app has its own directory.
### `packages` Directory
This is where the shared code and the API for packages live. It includes the following:
* shared libraries (database, mailers, cms, billing, etc.)
* shared features (auth, mails, billing, ai etc.)
* UI components (buttons, forms, modals, etc.)
All apps can use and reuse the API exported from the packages directory. This makes it easy to have one, or many apps in the same codebase, sharing the same code.
## Repository structure
By default the monorepo contains the following apps and packages:
## Mobile application structure
The mobile application is located in the `apps/mobile` folder. It contains the following folders:
---
url: /docs/mobile/installation/update
title: Updating codebase
description: Learn how to update your codebase to the latest version.
---
If you've been following along with our previous guides, you should already have a Git repository set up for your project, with an `upstream` remote pointing to the original repository.
Updating your project involves fetching the latest changes from the `upstream` remote and merging them into your project. Let's dive into the steps!
## Stash changes
If you have any uncommitted changes, stash them before proceeding. This helps you avoid conflicts that may arise during the update process.
If you don't have any changes to stash, you can skip this step and proceed with the update.
Alternatively, you can [commit](https://git-scm.com/docs/git-commit) your changes.
```bash
git stash
```
This command will save your changes in a temporary location, allowing you to retrieve them later. Once you're done updating, you can apply the stash to your working directory.
```bash
git stash apply
```
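The stash-then-apply cycle is easy to try out in a throwaway repository. This sketch (using plain git, nothing TurboStarter-specific) shows an uncommitted change surviving a round trip through the stash:

```shell
# Demo of git stash / apply in a disposable repo.
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com && git config user.name demo

echo v1 > file.txt && git add file.txt && git commit -qm init
echo v2 > file.txt            # an uncommitted change

git stash                     # working tree is clean again (file.txt is back to v1)
git stash apply               # the change is restored
grep -q v2 file.txt && echo "change restored"
```

After `git stash apply`, the stash entry stays in the list; `git stash pop` would apply it and drop it in one step.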
## Pull changes
Pull the latest changes from the `upstream` remote.
```bash
git pull upstream main
```
When prompted the first time, please opt for merging instead of rebasing.
Don't forget to run `pnpm i` in case there are any updates in the dependencies.
## Resolve conflicts
If there are any conflicts during the merge, Git will notify you. You can resolve them by opening the conflicting files in your code editor and making the necessary changes.
If you find conflicts in the `pnpm-lock.yaml` file, accept either of the two changes (avoid manual edits), then run:
```bash
pnpm i
```
Your lock file will now reflect both your changes and the updates from the upstream repository.
## Run a health check
After resolving the conflicts, it's time to test your project to ensure everything is working as expected. Run your project locally and navigate through the various features to verify that everything is functioning correctly.
For a quick health check, you can run:
```bash
pnpm lint
pnpm typecheck
```
If everything looks good, you're all set! Your project is now up to date with the latest changes from the `upstream` repository.
## Commit and push
Once everything is working fine, don't forget to commit your changes using:
```bash
git commit -m ""
```
and push them to your remote repository with:
```bash
git push origin <branch-name>
```
---
url: /docs/mobile/internationalization
title: Internationalization
description: Learn how to internationalize your mobile app.
---
TurboStarter mobile uses [i18next](https://www.i18next.com/) and [expo-localization](https://docs.expo.dev/versions/latest/sdk/localization/) for internationalization. This powerful combination allows you to leverage both i18next's mature translation framework and Expo's native device locale detection.
While i18next handles the translation management, expo-localization provides
seamless integration with the device's locale settings. This means your app
can automatically detect and adapt to the user's preferred language, while
still maintaining the flexibility to override it when needed.
The mobile app's internationalization is configured to work out of the box with:
* Automatic device language detection
* Right-to-left (RTL) layout support
* Locale-aware date and number formatting
* Fallback language handling
You can read more about the underlying technologies in their documentation:
* [i18next documentation](https://www.i18next.com/overview/getting-started)
* [expo-localization documentation](https://docs.expo.dev/versions/latest/sdk/localization/)

## Configuration
The global configuration is defined in the `@workspace/i18n` package and shared across all applications. You can read more about it in the [web configuration](/docs/web/internationalization/configuration) documentation.
By default, the locale is automatically detected based on the user's device settings. You can override it and set the default locale of your mobile app in the [app configuration](/docs/mobile/configuration/app) file.
## Translating app
To translate individual components and screens, you can use the `useTranslation` hook.
```tsx
import { Text } from "react-native";

import { useTranslation } from "@workspace/i18n";

export default function MyComponent() {
  const { t } = useTranslation();

  return <Text>{t("hello")}</Text>;
}
```
This is the recommended way to translate your app.
### Store presence
If you plan on shipping your app to different countries or regions or want it to support various languages, you can provide localized strings for things like the display name and system dialogs.
To do so, check the [official Expo documentation](https://docs.expo.dev/guides/localization/) as it requires modifying your app configuration (`app.config.ts`).
You can find the resources below helpful in this process:
## Language switcher
TurboStarter ships with a language customizer component that allows you to switch between languages. You can import and use the `LocaleCustomizer` component and drop it anywhere in your application to allow users to change the language seamlessly.
```tsx
import { LocaleCustomizer } from "@workspace/ui-mobile/i18n";

export default function MyComponent() {
  return <LocaleCustomizer />;
}
```
The component automatically displays all languages configured in your i18n settings. When a user switches languages, it will be reflected in the app and saved into persistent storage to keep the language across app restarts.
## Best practices
Here are key best practices for managing translations in your mobile app:
* Use clear, hierarchical translation keys for easy maintenance
```ts
// ✅ Good
"screen.home.welcome";
"component.button.submit";
// ❌ Bad
"welcomeText";
```
* Organize translations by app screens and features
```
translations/
├── en/
│ ├── layout.json
│ └── common.json
└── es/
├── layout.json
└── common.json
```
* Consider device language settings and regional formats
* Cache translations locally for offline access
* Handle dynamic content for mobile contexts:
```ts
// Device-specific messages
t("errors.noConnection"); // "Check your internet connection"
// Dynamic values
t("storage.space", { gb: 2.5 }); // "2.5 GB available"
```
* Keep translations concise - mobile screens have limited space
* Test translations with different screen sizes and orientations
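A practical companion to the practices above is checking that every locale ships the same set of keys. Here's a minimal sketch (the directory layout and file contents mirror the `translations/<locale>/common.json` structure shown above and are made up for the demo):

```shell
# Sanity-check that two locales define the same top-level translation keys.
cd "$(mktemp -d)"
mkdir -p translations/en translations/es
printf '{"hello":"Hello","bye":"Bye"}'   > translations/en/common.json
printf '{"hello":"Hola","bye":"Adios"}'  > translations/es/common.json

# Dump each locale's sorted key list, then diff them.
for locale in en es; do
  python3 -c 'import json, sys; print("\n".join(sorted(json.load(open(sys.argv[1])))))' \
    "translations/$locale/common.json" > "keys.$locale"
done
diff keys.en keys.es && echo "locales in sync"
```

If a key is missing from one locale, `diff` exits non-zero and shows exactly which keys drifted, which makes this easy to wire into CI.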
---
url: /docs/mobile/marketing
title: Marketing
description: Learn how to market your mobile application.
---
As you saw in the [Extras](/docs/mobile/extras) section, TurboStarter comes with a lot of tips and tricks to make your product better and help you launch your app faster with higher traffic.
The same applies to [submission tips](/docs/mobile/extras#submission-tips) to help you get your app approved by Apple and Google faster.
We'll talk more about the whole process of deploying and publishing your app in the [Publishing](/docs/mobile/publishing/checklist) section, here we'll go through some guidelines that you need to follow to make your store's visibility higher.
## Before you submit
To help your app approval go as smoothly as possible, review the common missteps listed below that can slow down the review process or trigger a rejection. This doesn't replace the official guidelines or guarantee approval, but making sure you can check every item on the list is a good start.
Make sure you:
* Test your app for crashes and bugs
* Ensure that all app information and metadata is complete and accurate
* Update your contact information in case App Review needs to reach you
* Provide App Review with full access to your app. If your app includes account-based features, provide either an active demo account or fully-featured demo mode, plus any other hardware or resources that might be needed to review your app (e.g. login credentials or a sample QR code)
* Enable backend services so that they're live and accessible during review
* Include detailed explanations of non-obvious features and in-app purchases in the App Review notes, including supporting documentation where appropriate
Following these basic steps during development and before submission will help you get your app approved faster.
## App Store (iOS)
Apple reviews are much stricter than Google reviews, so you need to make sure your app is ready for the App Store.
### Guidelines
Apple has a set of [guidelines](https://developer.apple.com/app-store/review/guidelines/) that you need to follow to make sure your app can be accepted in the App Store.
These include:
* **Safety**: Your app must not contain content or behavior that is harmful, abusive, or threatening.
* **Performance**: Your app must be performant and stable, with a smooth user experience.
* **Business**: Your app must not engage in unethical or deceptive practices.
* **Design**: Your app must have a clean and intuitive design.
* **Legal**: Your app must comply with all relevant laws and regulations.
You can read more about each guideline in the [official App Review Guidelines](https://developer.apple.com/app-store/review/guidelines/).
### Search optimization
App store optimization is the process of increasing an app or game's visibility in an app store, with the objective of increasing organic app downloads. Apps are more visible when they rank high on a wide variety of search terms, maintain a high position in the top charts, or get featured on the store.
There are a few actions that you can take to improve your app's visibility in the App Store:
* **Choose accurate keywords**: Use relevant keywords in your app's store listing.
* **Create a compelling app name, subtitle, and description**: Your app's title should be catchy and descriptive, the same applies to the subtitle and description.
* **Assign the right categories**: Make sure your app is categorized in the right category, this will help you reach the right audience.
* **Foster positive ratings**: Ratings and reviews appear on your product page and influence how your app ranks in search results. They can encourage people to engage with your app, so focus on providing a great app experience that motivates users to leave positive reviews.
* **Publish in-app events**: You can publish in-app events to promote your app and encourage users to engage with your app. (e.g. game competitions)
* **Promote in-app purchases**: Your promoted in-app purchases appear in search results on the App Store. Tapping an in-app purchase leads to your product page, which displays your app's description, screenshots, app previews, and in-app events — and lets people initiate an in-app purchase.
Read more about App Store Optimization in the [official documentation](https://developer.apple.com/app-store/search/).
## Google Play (Android)
Google reviews are less stringent than Apple reviews and usually take less time to review, but you still need to make sure your app is ready for the Play Store.
### Guidelines
Google has its own guidelines that apps must adhere to. Some important aspects to consider include:
* **Spam, functionality, and user experience**: Your app must not be spammy, must work as expected and must provide a good user experience.
* **Restricted content**: Before submitting an app to Google Play, ensure it complies with Google's content policies and with local laws.
* **Privacy**: Apps that are deceptive, malicious, or intended to abuse or misuse any network, device, or personal data are strictly prohibited.
* **Monetization**: Your app must not engage in unethical or deceptive practices.
For more detailed information and an interactive checklist, check the [Google requirements page](https://developers.google.com/workspace/marketplace/about-app-review).
### Search optimization
Ensuring that your app and store listing is thorough and optimized is an important factor in getting discovered by users on Google Play.
Follow these steps to optimize your app's visibility on Google Play:
* **Build a comprehensive store listing**: This includes providing accurate **title**, **description** and **promo text**.
* **Use high-quality graphics and images**: App icons, images, and screenshots help make your app stand out in search results, categories, and featured app lists.
* **Diversify your audience**: Google provides automated machine translations of store listings that you don't explicitly define for your app. However, using a professional translation service for your *Description* can lead to better search results and discoverability for worldwide users.
* **Create a great user experience**: Google Play search factors in the overall experience of your app based on user behavior and feedback. Apps are ranked based on a combination of ratings, reviews, downloads, and other factors.
## Common mistakes
There are a few common mistakes that you should avoid to make sure your app can be accepted in the stores. Apple reports that, on average, over **40%** of unresolved issues relate to [guideline 2.1: App Completeness](https://developer.apple.com/app-store/review/guidelines/#2.1), so make sure to avoid these:
* **Crashes and bugs**
* **Broken links**
* **Placeholder content**
* **Incomplete information**
* **Privacy policy issues**
* **Inaccurate screenshots**
* **Repeated submission of similar apps**
Don't worry if your first submission is rejected: fix all the issues mentioned above, improve the app, and try again.
---
url: /docs/mobile/monitoring/overview
title: Overview
description: Get started with mobile monitoring in TurboStarter.
---
TurboStarter ships with powerful, provider-agnostic monitoring helpers for the mobile app so you can answer the questions that matter in production: **what broke**, **on which screen**, and **which users were impacted**. It's designed for simplicity and extensibility, and works with multiple providers behind a single API.
## Capturing exceptions
On mobile, you'll usually want to report errors from a few key places:
* **UI/runtime crashes**: unexpected JS errors that would otherwise blank the screen or break navigation.
* **Async work**: background tasks, effects, and data fetching where failures are easy to miss.
* **Manual reporting**: wrap critical flows (auth, purchases, sync, deep-links) with `try/catch` so you can attach context when things go wrong.
```tsx
import { Pressable, Text } from "react-native";

import { captureException } from "@workspace/monitoring-mobile";

export default function ExampleComponent() {
  const handleClick = () => {
    try {
      /* some risky operation */
    } catch (error) {
      captureException(error);
    }
  };

  return (
    <Pressable onPress={handleClick}>
      <Text>Trigger Exception</Text>
    </Pressable>
  );
}
```
`try/catch` (and most JS error handlers) can only see JavaScript exceptions. Native crashes (for example, a hard crash in a native module) typically require provider-specific native setup to capture crash reports. Use the provider pages below for platform details.
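`try/catch` wrappers are easy to forget around async work, so it can help to make reporting systematic with a small helper that runs an async operation, reports any failure, and returns a fallback. A minimal sketch in plain TypeScript; the `withCapture` helper and the inlined `captureException` stand-in are illustrative, not part of the starter kit:

```ts
// Stand-in for illustration; in the starter kit `captureException`
// comes from `@workspace/monitoring-mobile`.
const reported: unknown[] = [];
const captureException = (error: unknown) => {
  reported.push(error);
};

// Run an async operation, report any failure, and fall back gracefully
const withCapture = async <T>(
  operation: () => Promise<T>,
  fallback: T,
): Promise<T> => {
  try {
    return await operation();
  } catch (error) {
    captureException(error);
    return fallback;
  }
};

// A failing operation is reported instead of silently swallowed,
// and the caller receives the fallback value
const result = await withCapture(async () => {
  throw new Error("network down");
}, "offline");
```

Here `result` resolves to the fallback and the error lands in the report queue; in a real screen you'd call the package's `captureException` directly instead of the stand-in.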
## Identifying users
Error reports become much more actionable once they're tied to a signed-in user. TurboStarter supports identifying the current user after the auth session resolves, so your monitoring provider can associate errors with a stable user profile (without you plumbing this through every capture call).
If you want richer filtering, pass non-sensitive traits (plan, role, locale) depending on what your provider supports.
```tsx title="monitoring.tsx"
import { useEffect } from "react";

import { identify } from "@workspace/monitoring-mobile";

import { authClient } from "~/lib/auth";

export const MonitoringProvider = ({
  children,
}: {
  children: React.ReactNode;
}) => {
  const session = authClient.useSession();

  useEffect(() => {
    if (session.isPending) {
      return;
    }

    identify(session.data?.user ?? null);
  }, [session]);

  return children;
};
```
Identify users with **stable IDs** and only the traits you need for debugging. Avoid sending PII or secrets (tokens, raw emails, payment details) unless you've explicitly decided it's acceptable for your monitoring provider and compliance requirements.
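One simple way to enforce this is an allow-list that strips everything except approved traits before they reach your provider. A hypothetical sketch; the `pickTraits` helper and the trait names are illustrative, not part of the monitoring package:

```ts
// Allow-list of non-sensitive traits worth attaching to reports.
// Adjust the names to match your own user model.
const ALLOWED_TRAITS = ["plan", "role", "locale"] as const;

type Traits = Record<string, string | undefined>;

// Keep only approved traits so PII never reaches the monitoring provider
const pickTraits = (user: Traits): Traits =>
  Object.fromEntries(
    ALLOWED_TRAITS.filter((key) => user[key] !== undefined).map((key) => [
      key,
      user[key],
    ]),
  );

const traits = pickTraits({
  id: "user_123",
  email: "jane@example.com", // dropped: PII, not in the allow-list
  plan: "pro",
  locale: "en-US",
});
// traits now contains only `plan` and `locale`
```

You could call a helper like this right before `identify`, so the scrubbing happens in one place instead of at every call site.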
## Providers
TurboStarter can report through different monitoring providers while keeping your app code consistent. Choose a provider (or swap later) by updating the exports/config in the monitoring package.
## Recommended practices
* Prioritize crashes, failed network calls that break a flow, and unexpected states. Skip noisy "expected" errors (validation, user cancellations).
* Include the screen/route, the action the user took, and relevant IDs (request ID, order ID). Mobile issues are often device- or version-specific, so make sure app version/build info is included by your provider.
* If an effect or retry path can fire repeatedly, debounce or dedupe your capture calls so you don't spam reports (or exceed quotas).
* Keep environments isolated so test devices don't pollute production signal, and tag builds/releases so you can correlate spikes with deployments.
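The dedupe advice can be sketched as a small wrapper that remembers recently reported errors and skips repeats within a time window. `createDedupedCapture` is a hypothetical helper, not part of the monitoring package:

```ts
// Stand-in for the monitoring package's capture function
let reportCount = 0;
const captureException = (_error: unknown) => {
  reportCount += 1;
};

// Dedupe repeated reports of the same error within a time window
const createDedupedCapture = (windowMs: number) => {
  const lastSeen = new Map<string, number>();
  return (error: Error, now = Date.now()) => {
    const key = `${error.name}:${error.message}`;
    const previous = lastSeen.get(key);
    if (previous !== undefined && now - previous < windowMs) {
      return; // already reported recently, skip
    }
    lastSeen.set(key, now);
    captureException(error);
  };
};

const capture = createDedupedCapture(60_000);
const error = new Error("fetch failed");

capture(error, 0); // reported
capture(error, 1_000); // deduped: same error within the window
capture(error, 61_000); // window elapsed, reported again
```

A `Map` keyed by error name and message is a deliberately crude fingerprint; providers usually offer their own rate limiting, so treat this as a client-side safety net rather than the primary control.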
With solid capture + identification in place, mobile monitoring becomes a feedback loop: you can spot regressions quickly, understand who they affect, and validate fixes by release.
---
url: /docs/mobile/monitoring/posthog
title: PostHog
description: Learn how to setup PostHog as your mobile monitoring provider.
---
[PostHog](https://posthog.com/) is a product analytics platform that can also help with monitoring via error tracking and session replay. On mobile, it's especially useful when you want to connect **what went wrong** with **what the user did** right before it happened.
TurboStarter keeps monitoring provider selection behind a unified API, so you can route captures to PostHog without changing your app code.
You'll need a PostHog account ([cloud](https://app.posthog.com/signup) or [self-hosted](https://posthog.com/docs/self-host)) to use it as your monitoring provider.
PostHog is one of the preconfigured analytics providers for mobile apps. If you want product analytics (events, screens, funnels), see [analytics overview](/docs/mobile/analytics/overview) and the [PostHog configuration](/docs/mobile/analytics/configuration#posthog).

## Configuration
PostHog makes it easy to monitor your mobile app for errors and issues, giving you full visibility into when things go wrong. With TurboStarter, you can enable PostHog-based monitoring in just a few steps, sending errors and related user actions to your PostHog dashboard for debugging and product improvement.
### Create a project
Create a new PostHog [project](https://app.posthog.com/project/settings) for your mobile app. You can do this from the [PostHog dashboard](https://app.posthog.com) using the *New Project* action.
### Activate PostHog as your monitoring provider
TurboStarter chooses the mobile monitoring provider through exports in `packages/monitoring/mobile`. To route monitoring events to PostHog, export the PostHog implementation from the package entrypoint:
```ts title="index.ts"
// [!code word:posthog]
export * from "./posthog";
export * from "./posthog/env";
```
### Set environment variables
Add your PostHog project key (and host, if you're not using the default cloud region) to your mobile app env. Set these locally and in your build environment (for example, in your [EAS build profile](/docs/mobile/publishing/checklist#environment-variables)):
```dotenv title="apps/mobile/.env.local"
EXPO_PUBLIC_POSTHOG_KEY="your-posthog-project-api-key"
EXPO_PUBLIC_POSTHOG_HOST="https://us.i.posthog.com"
```
That's it - launch the app, trigger an error, and confirm events are arriving in your PostHog project.

If you want to go beyond basic capture (session replay, feature flags, richer device/session context), follow [PostHog's React Native setup guidance](https://posthog.com/docs/error-tracking/installation/react-native).
## Uploading source maps
**Source maps** map the bundled/minified JavaScript running on devices back to your original source code. Without them, mobile stack traces are often hard to read and difficult to action.
With source maps uploaded to PostHog, error reports can be symbolicated so stack traces point to the real files and line numbers from your project.
PostHog's React Native source maps flow has two main parts:
* **Inject debug IDs** into the bundle during bundling (Metro)
* **Upload source maps** during your iOS/Android build (or via CLI in CI)
### Install and authenticate the PostHog CLI
Install the CLI globally:
```bash
npm install -g @posthog/cli
```
Then authenticate:
```bash
posthog-cli login
```
If you're running in CI, you can authenticate with environment variables instead:
```dotenv
POSTHOG_CLI_HOST="https://us.posthog.com"
POSTHOG_CLI_ENV_ID="your-posthog-project-id"
POSTHOG_CLI_TOKEN="your-personal-api-key"
```
### Inject debug IDs with Metro
Automatic injection relies on Expo's debug ID support. Update `metro.config.js` to use PostHog's Expo config:
```js title="metro.config.js"
const { getPostHogExpoConfig } = require("posthog-react-native/metro");
const config = getPostHogExpoConfig(__dirname);
module.exports = config;
```
### Upload source maps during builds
If you can use the Expo plugin (recommended for managed EAS builds), add the plugin to your Expo config:
```ts title="app.config.ts"
import type { ConfigContext, ExpoConfig } from "expo/config";

export default ({ config }: ConfigContext): ExpoConfig => ({
  ...config,
  plugins: ["posthog-react-native/expo"],
});
```
If you can't use the Expo plugin, PostHog also supports wiring uploads directly into:
* **Android**: your Gradle build (`android/app/build.gradle`)
* **iOS**: your Xcode “Bundle React Native code and images” build phase
Follow the [official PostHog instructions](https://posthog.com/docs/error-tracking/upload-source-maps/react-native) for the exact snippets for each platform.
### Verify uploads in PostHog
After a release build, confirm your symbol sets are present in your [PostHog project's error tracking settings](https://app.posthog.com/settings/project-error-tracking#error-tracking-symbol-sets), then trigger a test error to ensure stack traces are resolving as expected.
With debug IDs injected and source maps uploaded, PostHog can symbolicate React Native errors so stack traces point back to your original source files. If traces still look minified, double-check that you're testing a release build and that the latest symbol sets are present in your project settings.
---
url: /docs/mobile/monitoring/sentry
title: Sentry
description: Learn how to setup Sentry as your mobile monitoring provider.
---
[Sentry](https://sentry.io/) is a popular error monitoring platform that captures crashes and exceptions from production devices and helps you debug them with stack traces, breadcrumbs, and user context.
TurboStarter's mobile monitoring layer is provider-agnostic, but Sentry is a great default when you want reliable crash reporting plus readable stack traces in release builds.
To use Sentry, create an [account in Sentry](https://sentry.io/signup) first.

## Configuration
TurboStarter integrates effortlessly with Sentry, so you can capture application errors and analyze performance from development through production. Setting up Sentry as your provider lets you quickly find and fix issues, contributing to a more robust and dependable app.
Follow the steps below to integrate Sentry with your TurboStarter project.
### Create a project
Begin by creating a [project](https://docs.sentry.io/product/projects/) in Sentry. You can set this up from your [dashboard](https://sentry.io/settings/account/projects/) by clicking the *Create Project* button.
### Activate Sentry as your monitoring provider
The monitoring provider is determined by the exports in the `packages/monitoring/mobile` package. To activate Sentry as your monitoring provider, update the exports in:
```ts title="index.ts"
// [!code word:sentry]
export * from "./sentry";
export * from "./sentry/env";
```
If you want to customize the provider, you can find its definition in `packages/monitoring/mobile/src/providers/sentry` directory.
### Set environment variables
Based on your [project settings](https://sentry.io/project/settings), fill the following environment variables in your `.env.local` file in `apps/mobile` directory and your deployment environment (e.g. [EAS build profile](/docs/mobile/publishing/checklist#environment-variables)):
```dotenv title="apps/mobile/.env.local"
EXPO_PUBLIC_SENTRY_DSN="your-sentry-dsn"
EXPO_PUBLIC_PROJECT_ENVIRONMENT="your-project-environment"
```
### Wrap your app
Install the Sentry React Native SDK in the `mobile` workspace.
```bash
pnpm i @sentry/react-native --filter mobile
```
Then wrap the root component of your application with `Sentry.wrap`:
```tsx title="app/_layout.tsx"
import * as Sentry from "@sentry/react-native";
export default Sentry.wrap(RootLayout);
```
TurboStarter initializes the SDK for you based on env + provider exports; you only need to wrap the root component.
You're all set! Start your app and view any errors or exceptions directly in your [Sentry dashboard](https://sentry.io/settings/account/projects/).

You can tailor the setup further if needed. For more details, refer to the [official Sentry documentation](https://docs.sentry.io/platforms/react-native/features/).
## Uploading source maps
Readable stack traces in Sentry require uploading source maps for release builds. For Expo projects, Sentry recommends enabling **two pieces**:
* the **Sentry Expo config plugin** (uploads during native builds)
* the **Sentry Metro plugin** (adds debug IDs so bundles and source maps match)
### Add the Sentry Expo plugin
Add `@sentry/react-native/expo` plugin to your Expo config (`app.config.ts`):
```ts title="app.config.ts"
import type { ConfigContext, ExpoConfig } from "expo/config";

export default ({ config }: ConfigContext): ExpoConfig => ({
  ...config,
  plugins: [
    [
      "@sentry/react-native/expo",
      {
        url: "https://sentry.io/",
        project: "your-sentry-project",
        organization: "your-sentry-organization",
      },
    ],
  ],
});
```
Then provide an auth token through environment variables (locally in `.env.local` file in `apps/mobile` directory) and your build environment:
```dotenv title="apps/mobile/.env.local"
SENTRY_AUTH_TOKEN="your-sentry-auth-token"
```
### Add the Sentry Metro plugin
To ensure unique debug IDs are assigned to the generated bundles and source maps, add the Sentry Metro plugin to your configuration.
Update `metro.config.js` to use `getSentryExpoConfig`:
```js title="metro.config.js"
const { getSentryExpoConfig } = require("@sentry/react-native/metro");
const config = getSentryExpoConfig(__dirname);
module.exports = config;
```
With the Expo plugin + Metro plugin in place, source maps are uploaded automatically during release native builds and EAS builds (debug builds typically rely on Metro's symbolication).
Take a moment to test your setup by triggering an error in your app, then confirm that source maps are resolving stack traces accurately in your [Sentry dashboard](https://sentry.io/settings/account/projects/). For advanced setup details, troubleshooting, or further customization with React Native and Expo, refer to the [official Sentry documentation](https://docs.sentry.io/platforms/react-native/guides/expo/sourcemaps/).
---
url: /docs/mobile/organizations/active-organization
title: Active organization
description: Set and switch the current organization context within your application.
---
The active organization on mobile mirrors the behavior used on the [web app](/docs/web/organizations/active-organization) and in the [extension](/docs/extension/organizations). It is tracked in the authenticated session as `activeOrganizationId` and used to scope all organization-bound data and actions.
Below you can find how to read and work with the active organization in your mobile app context.
## Reading the active organization
Use your auth client's helper to read the active organization from the session. This keeps the client in sync with the server and avoids duplicating tenancy logic.
```tsx title="organizations.tsx"
import { Text } from "react-native";

import { authClient } from "~/lib/auth";

export function OrganizationsScreen() {
  const organization = authClient.useActiveOrganization();
  const member = authClient.useActiveMember();

  return (
    <>
      <Text>{organization?.name}</Text>
      <Text>{member?.role}</Text>
    </>
  );
}
```
This mirrors the [extension](/docs/extension/organizations) approach and the [web hook](/docs/web/organizations/active-organization), ensuring the active organization and member role stay consistent with the server session.
## Performing actions
When invoking API routes from the mobile app, prefer passing the `organizationId` explicitly with the payload. This guarantees the correct tenant is targeted even if multiple devices or views are active simultaneously.
```tsx title="create-post.tsx"
import { useMutation } from "@tanstack/react-query";

import { api } from "~/lib/api";
import { authClient } from "~/lib/auth";

export function CreatePost() {
  const activeOrganization = authClient.useActiveOrganization();

  const { mutate } = useMutation({
    // `PostInput` is your post payload type, imported from your API schema
    mutationFn: async (post: PostInput) =>
      api.posts.$post({
        ...post,
        organizationId: activeOrganization?.id,
      }),
  });

  // Render your form here and call `mutate(post)` on submit
  return null;
}
```
This mirrors the recommendation from the [web guide](/docs/web/organizations/active-organization#api-route) and avoids edge cases tied to stale session values.
## Switching organizations
TurboStarter ships an account switcher out of the box for mobile. You can drop it into your app and customize labels and styling as needed.
```tsx title="settings.tsx"
import { AccountSwitcher } from "~/modules/organization/account-switcher";

export function SettingsScreen() {
  return <AccountSwitcher />;
}
```
When a user selects a new organization, the switcher calls your backend to update the session's `activeOrganizationId`, then re-reads the session and invalidates related queries.
For deeper background on how the active organization is resolved, see the [web guide](/docs/web/organizations/active-organization).
---
url: /docs/mobile/organizations/invitations
title: Invitations
description: Send, track, and accept organization invites.
---
Invite teammates by email to join an organization directly from your mobile app. Acceptance is straightforward: we verify the invite, create or reuse the membership with the intended role, and set the user's active organization.
The implementation uses the same APIs and rules as the [web app](/docs/web/organizations/invitations) and is powered by the [Better Auth organization plugin](https://www.better-auth.com/docs/plugins/organization).

## Capabilities
* Send invitations by email.
* View and filter invitations by status or role, and search by email.
* Resend or revoke an invitation.
* Accept an invitation via a [deep link](https://docs.expo.dev/linking/into-your-app/).
Permissions are enforced by roles. Typically, only organization admins can send or manage invites. See [RBAC (Roles & Permissions)](/docs/mobile/organizations/rbac).
## Inviting members
Sending an invitation typically requires the invitee's email and the intended role. You can add multiple recipients in the invitation form to invite several members at once.

After sending, the invitee receives an email with a link to accept. It's a [deep link](https://docs.expo.dev/guides/linking) that opens your app and automatically validates the invite.
## Handling invitations
When a recipient opens an invite link on their device, the app automatically handles the entire flow - reading, parsing, and validating the invite - for you.

When the user accepts, we create or reuse their membership and set the active organization in their session. If they reject the invite, we redirect them to their account home.
## Learn more
For underlying details shared across platforms (the schema, token lifecycle, and admin tooling shared by the mobile and web apps), see the [web invitations documentation](/docs/web/organizations/invitations).
---
url: /docs/mobile/organizations/overview
title: Overview
description: Learn how to use organizations/teams/multi-tenancy in TurboStarter mobile app.
---
Organizations let you build teams and multi-tenant SaaS out of the box in the mobile app.
Users can create organizations, invite teammates, assign roles, and seamlessly switch between workspaces — all from iOS/Android — with the same secure data isolation used on the [web app](/docs/web/organizations/overview).
[Multi-tenancy](https://www.ibm.com/think/topics/multi-tenant) is a software architecture pattern where a single instance of an application serves multiple tenants, each with its own data and configuration.
The feature is powered by the same [Better Auth organization plugin](https://www.better-auth.com/docs/plugins/organization) and shares TurboStarter's API, routing, and data layer with the [web app](/docs/web/organizations/overview) and [extension](/docs/extension/organizations). That means your mobile app benefits from the same tenancy rules, RBAC checks, and invitations flow without duplicating backend logic.
## Architecture
On mobile, TurboStarter uses the same pragmatic multi-tenant architecture as the [web app](/docs/web/organizations/overview):
* **Tenant context** lives in the authenticated session as the active organization ID. The mobile client reads this context from the API and includes it when making requests.
* **Data scoping** is performed server-side via `organizationId` on tenant-owned tables and guard clauses in queries. Mobile screens consume scoped endpoints so users only see data for their selected organization.
* **Authorization** combines tenant scoping with role checks. We separate “can access this tenant?” from “can perform this action within the tenant?”.
* **Extensibility**: add new tenant-bound entities by including `organizationId` in your schema and using the provided helpers to read or switch the active organization in the app.
This keeps data isolated per organization while remaining simple to reason about across platforms.
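The split between "can access this tenant?" and "can perform this action?" can be sketched as a server-side guard clause that runs before any scoped query. All names here (`assertTenantAccess`, the `Session` and `Membership` shapes) are illustrative, not the actual TurboStarter implementation:

```ts
// Minimal shapes for illustration only; the real session and membership
// records come from the auth layer.
interface Session {
  userId: string;
  activeOrganizationId: string | null;
}

interface Membership {
  userId: string;
  organizationId: string;
}

// Guard clause: confirm the request targets the caller's active organization
// and that the caller is actually a member of it.
const assertTenantAccess = (
  session: Session,
  organizationId: string,
  memberships: Membership[],
) => {
  if (session.activeOrganizationId !== organizationId) {
    throw new Error("Organization is not active in this session");
  }
  const isMember = memberships.some(
    (m) => m.userId === session.userId && m.organizationId === organizationId,
  );
  if (!isMember) {
    throw new Error("Not a member of this organization");
  }
};

const session: Session = { userId: "u1", activeOrganizationId: "org1" };
const memberships: Membership[] = [{ userId: "u1", organizationId: "org1" }];

assertTenantAccess(session, "org1", memberships); // passes

let denied = false;
try {
  assertTenantAccess(session, "org2", memberships); // wrong tenant
} catch {
  denied = true;
}
```

Role checks (the "can perform this action?" half) would then run after this guard, as shown in the [RBAC section](/docs/mobile/organizations/rbac).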
For deeper details on the shared data model used by the mobile app, see [Data model](/docs/web/organizations/data-model).
## Concepts
The same core concepts apply in the mobile app:
| Concept | Description |
| ----------------------- | -------------------------------------------------------------------------------------------------- |
| **Organization** | A workspace that owns resources and settings, acting as an isolated tenant. |
| **Member** | A user assigned to an organization. |
| **Role** | Access level within an organization (see [RBAC](/docs/mobile/organizations/rbac)). |
| **Invitation** | Email request to join an organization (see [Invitations](/docs/mobile/organizations/invitations)). |
| **Active organization** | The currently selected organization in a user's session, used to scope data and permissions. |
These concepts provide the building blocks for flexible team management and secure, multi-tenant SaaS applications on mobile.
## Development data
In development, TurboStarter automatically [seeds](/docs/mobile/installation/commands#seeding-database) example data when you set up services. The mobile app connects to the same development API, so you can test the full organizations flow end-to-end:
* One organization is created by default.
* All default roles are created and assigned within that organization.
* Sample invitations are generated so you can test the invite flow.
You can safely experiment with these sample organizations, roles, and invitations to understand multi-tenancy features — [reset](/docs/mobile/installation/commands#resetting-database) or [reseed](/docs/mobile/installation/commands#seeding-database) anytime to return to the default state.
The default credentials for demo users can be customized using the `SEED_EMAIL` and `SEED_PASSWORD` environment variables.
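For example, you might set them like this; the values (and the exact env file you put them in) are illustrative:

```dotenv
SEED_EMAIL="demo@example.com"
SEED_PASSWORD="demo-password"
```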
The default development data and setup are intended for local development and testing only. **Never** use these seeds or configurations in a production environment - they are insecure and may expose sensitive functionality.
## Customization
You have flexibility to adapt organizations to fit your mobile experience. For example, you might rename labels (such as Organization to *Team* or *Workspace*), and update the app copy accordingly.
You can adjust the available [roles and permissions](/docs/mobile/organizations/rbac) to suit your access model.
The [invitation flow](/docs/mobile/organizations/invitations) can be customized, including how verification, onboarding, or metadata capture work.
You can find out how to configure each of these features inside the mobile application in the dedicated sections linked above.
---
url: /docs/mobile/organizations/rbac
title: RBAC (Roles & Permissions)
description: Manage roles, permissions, and access scopes.
---
Role-based access control (RBAC) lets you define who can do what in an organization.
If you're new to the RBAC concept, a simple mental model is:
* Users belong to organizations.
* Users get roles.
* Roles map to permissions on resources.
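The mental model above can be sketched as a static map from role to allowed actions per resource. This is illustrative only; in TurboStarter the real roles and permissions live in the Better Auth organization plugin configuration:

```ts
// Illustrative role -> permission map; not the actual plugin config
const rolePermissions = {
  member: { project: ["read"] },
  admin: { project: ["read", "create", "update"] },
  owner: { project: ["read", "create", "update", "delete"] },
} as const;

type Role = keyof typeof rolePermissions;

// Check whether a role may perform an action on a resource
const can = (role: Role, resource: "project", action: string): boolean => {
  const actions: readonly string[] = rolePermissions[role][resource];
  return actions.includes(action);
};

// Members can read but not create; admins can do both
can("member", "project", "create"); // false
can("admin", "project", "create"); // true
```

A real system adds dynamic roles and per-organization overrides, which is why the client helpers shown later defer to the plugin instead of a hardcoded map.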
In TurboStarter, we primarily rely on the [Better Auth plugin](https://www.better-auth.com/docs/plugins/organization) for the heavy lifting—roles, permissions, teams, and member management—while handling critical logic with our own code.
This provides a flexible access control system, letting you control user access based on their role in the organization. You can also define custom permissions per role.
TurboStarter ships with the default RBAC system configured out of the box. This setup may be enough if you're not planning a very complex access control system, but you can also easily customize it to your needs.
On mobile, use conditional UI (disable or hide actions) together with client helpers to match each member's role.
## Roles
Roles are named bundles of permissions. Keep them few and well-defined. By default, we have the following roles:
```ts
const MemberRole = {
  MEMBER: "member",
  ADMIN: "admin",
  OWNER: "owner",
} as const;
```
A user can have multiple roles in an organization. For example, a user can be a member and an admin (if it makes sense for your application).
The organization's `admin` role is different from the user's global `admin` role.
The organization `admin` governs permissions only inside the organization, whereas the global `admin` controls access to the [super admin dashboard](/docs/web/admin/overview).
To create additional roles with custom permissions, see the [official documentation](https://www.better-auth.com/docs/plugins/organization#create-access-control) for more details.
## Permissions
Permissions represent what actions a role can perform on which resources.
To check if the current user has permission to perform an action on mobile, use the client helper and handle the boolean result in your component logic.
```tsx title="create-project.tsx"
import { useMutation, useQuery } from "@tanstack/react-query";
import { Button } from "react-native";

import { authClient } from "~/lib/auth";

export function CreateProject() {
  const { data: canCreate } = useQuery({
    queryKey: ["permission", "project", "create"],
    queryFn: () =>
      authClient.organization.hasPermission({
        permissions: { project: ["create"] },
      }),
  });

  const { mutate, isPending } = useMutation({
    mutationFn: async () => {
      // perform the create action
    },
  });

  return (
    <Button
      title="Create project"
      disabled={!canCreate || isPending}
      onPress={() => mutate()}
    />
  );
}
```
When you already have the active member's role, prefer the client-side `checkRolePermission` to avoid extra API calls.
```tsx title="update-project.tsx"
import { Button } from "react-native";

import { authClient } from "~/lib/auth";

export function UpdateProject() {
  const activeMember = authClient.useActiveMember();

  const canUpdate = authClient.organization.checkRolePermission({
    permission: {
      project: ["update"],
    },
    role: activeMember?.role,
  });

  return <Button title="Update project" disabled={!canUpdate} />;
}
```
We leverage the existing hook to retrieve the active member role within the [active organization](/docs/mobile/organizations/active-organization) context. That way, you can easily check whether a member has permission to perform an action without a server round trip.
This does not include any dynamic roles or permissions, because everything runs synchronously on the client side. Use the `hasPermission` APIs when you need checks that cover dynamic roles and permissions.
If you need to add more granular permissions to existing roles, or create new ones, use the [`createAccessControl`](https://www.better-auth.com/docs/plugins/organization#custom-permissions) API.
For further customization—such as dynamic access control, lifecycle hooks, or team management—see the guidance in the [official documentation](https://www.better-auth.com/docs/plugins/organization) and the [web guide](/docs/web/organizations/rbac).
---
url: /docs/mobile/publishing/android
title: Google Play (Android)
description: Learn how to publish your mobile app to the Google Play Store.
---
[Google Play](https://play.google.com/) is the primary platform for distributing Android apps to billions of users worldwide. It's a powerful marketplace that allows you to reach a large audience and monetize your app.
To submit your app to the Play Store, you'll need to follow a series of steps. We'll walk through those steps here.
Before you submit, review the publishing [guidelines](/docs/mobile/marketing) and confirm that your app meets Google's policies to avoid common rejections.
## Developer account
A Google Play Developer account is required to submit your app to the Google Play Store. You can sign up on the [Google Play Console](https://play.google.com/console/) and pay the one-time registration fee.

To publish apps to Google Play, you must verify your identity. See the [official guide](https://support.google.com/googleplay/android-developer/answer/14177239) for more information. Next, you'll need to create a new app in the [Google Play Console](https://play.google.com/apps/publish/) by clicking the *Create app* button.
## Submission
After registering your developer account, setting it up, and preparing your app, you're ready to publish it to the Play Store.
There are multiple ways to submit your app:
* **Manual submission:** Upload your app bundle directly to the Play Store via the Play Console.
* **Local submission:** Use [EAS CLI](https://github.com/expo/eas-cli) to submit your app.
* **CI/CD submission:** Use ready-to-use GitHub Actions workflow to automatically submit your app.
**The first submission must be done manually, while subsequent updates can be submitted automatically.** We'll go through each approach in detail below.
### Manual submission
This approach is not recommended, as it is more error-prone and time-consuming due to manual steps. However, it's still the **only way to submit your app for the first time**. You can also use this route if you need to upload a build without EAS Submit (for example, during service maintenance) or if you prefer a fully manual flow.
**Create the app entry in Google Play Console**
1. Visit [Google Play Console](https://play.google.com/console/) and sign in. Accept any pending agreements if prompted.
2. Click *Create app*, then enter your app name, default language, app type, and pricing (free/paid). Confirm policy declarations.
3. Finish initial setup tasks (App access, Ads, Content rating, Target audience, Data safety, Privacy policy URL).
**Upload the `.aab` file to a track (internal/closed/open/production)**
1. The fastest route for a first upload is often *Internal testing*. Go to *Internal testing* → *Releases* (or choose *Closed/Open/Production*), then click *Create new release*.
2. Upload the `.aab` file, add release notes, and review any warnings.
3. Save and continue through the checks until you're ready to submit for review or roll out to [testers](https://play.google.com/console/about/internal-testing/).
**Verify and submit for review**
1. Complete Store listing assets and metadata if not already done.
2. Resolve any policy warnings. When ready, start the rollout to request a [review](/docs/mobile/publishing/android#review).
After your first manual upload is accepted, you can use [Local submission](/docs/mobile/publishing/android#local-submission) or [CI/CD submission](/docs/mobile/publishing/android#cicd-submission-recommended) for subsequent releases.
For more information, please refer to the guides listed below.
### Local submission
Due to Google Play API limitations, you must upload your app to Google Play **manually at least once** (to any track: internal, closed, open, or production) before automated submissions will work. See the detailed walkthrough in the ["First Android submission" guide](https://github.com/expo/fyi/blob/main/first-android-submission.md).
First, you need to **upload and configure a Google Service Account Key with EAS**. This is the required first step to submit your Android app to the Google Play Store. Follow the [guide on uploading a Google Service Account Key for Play Store submissions with EAS](https://github.com/expo/fyi/blob/main/creating-google-service-account.md) for detailed instructions.
Next, you have to get your app bundle — if you followed the [checklist](/docs/mobile/publishing/checklist), you should have the `.aab` file in your app folder from the [build step](/docs/mobile/publishing/checklist#build-your-app). If you used GitHub Actions to build your app, you can find the results in the `Builds` tab of your [EAS project](https://expo.dev). Download the artifacts and save them on your local machine.
Then, navigate to your app folder and run the following command to submit your app to the Play Store:
```bash
eas submit --platform android
```
The command will guide you through the submission process. You can also configure the steps of the submission process by adding a submission profile in `eas.json`.
If you upload your Google Service Account key to EAS credentials, you do not need to reference a local file path anywhere.
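If you want to pin the submission behavior, a profile might look like the following sketch (the `track` and `releaseStatus` values are illustrative; both are standard EAS Submit options for Android):

```json title="eas.json"
{
  "submit": {
    "production": {
      "android": {
        "track": "internal",
        "releaseStatus": "draft"
      }
    }
  }
}
```

Submitting to the internal track as a draft release is a safe default while you're validating the pipeline; switch the values once you're confident.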
To speed up the submission process, you can use the `--auto-submit` flag to automatically submit a build after it is built:
```bash
eas build --platform android --auto-submit
```
This will automatically submit the build with all the required credentials to the Play Store right after it is built.
### CI/CD submission (recommended)
Due to Google Play API limitations, you must upload your app to Google Play **manually at least once** (to any track: internal, closed, open, or production) before automated submissions will work. See the detailed walkthrough in the ["First Android submission" guide](https://github.com/expo/fyi/blob/main/first-android-submission.md).
TurboStarter comes with a pre-configured GitHub Actions workflow to automatically submit your mobile app to the Play Store. You'll find the workflow in the `.github/workflows/publish-mobile.yml` file.
To use this workflow, [upload your Google Play Service Account key to EAS](https://github.com/expo/fyi/blob/main/creating-google-service-account.md) and check your Android credentials setup by running:
```bash
eas credentials --platform android
```
This way, you avoid storing the JSON key in your repository or CI/CD provider.
This workflow also requires a [personal access token](https://docs.expo.dev/accounts/programmatic-access/#personal-access-tokens) for your Expo account. Add it as `EXPO_TOKEN` in your [GitHub repository secrets](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions), which will allow the `eas submit` command to run.
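For orientation, the heart of such a workflow boils down to something like this sketch - an illustrative outline, not the exact contents of `publish-mobile.yml` (the action version, job name, and `working-directory` path are assumptions):

```yaml
name: Publish mobile
on:
  workflow_dispatch:

jobs:
  submit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Installs the EAS CLI and authenticates with EXPO_TOKEN
      - uses: expo/expo-github-action@v8
        with:
          eas-version: latest
          token: ${{ secrets.EXPO_TOKEN }}
      # Illustrative path - adjust to your repo layout
      - run: eas submit --platform android --profile production --non-interactive
        working-directory: apps/mobile
```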
That's it! After completing these steps, [trigger the workflow](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow) to submit your new build to the Play Store automatically 🎉
## Review
After filling out the information about your app, you're ready to submit it for review. Click the *Send for review* button and confirm that you want to proceed with the submission:

To control **when** your app is released after review, you can configure [Managed publishing](https://support.google.com/googleplay/android-developer/answer/9859654) in the Google Play Console.
After submitting your app for review, it will enter Google's review process. The review time may vary depending on your app, and you'll receive a notification when the status updates. For more details, check out the [Google Play Review Process](https://developers.google.com/workspace/marketplace/about-app-review) documentation.
If your submission is rejected, you'll receive an email from Google with the rejection reason. You'll need to fix the issues and upload a new version of your app.

Make sure to follow the [guidelines](/docs/mobile/marketing) or check [publishing troubleshooting](/docs/mobile/troubleshooting/publishing) for more info.
When your app is approved by Google, you'll be able to publish it on the Play Store.

You can learn more about the review process in the official guides listed below.
---
url: /docs/mobile/publishing/checklist
title: Checklist
description: Let's publish your TurboStarter app to stores!
---
When you're ready to publish your TurboStarter app to stores, follow this checklist.
This process may take a few hours and some trial and error, so buckle up - you're almost there!
## Create database instance
**Why is it necessary?**
A production-ready database instance is essential for storing your application's data securely and reliably in the cloud. [PostgreSQL](https://www.postgresql.org/) is the recommended database for TurboStarter due to its robustness, features, and wide support.
**How to do it?**
You have several options for hosting your PostgreSQL database:
* [Supabase](/docs/mobile/recipes/supabase) - Provides a fully managed Postgres database with additional features
* [Vercel Postgres](https://vercel.com/storage/postgres) - Serverless SQL database optimized for Vercel deployments
* [Neon](https://neon.tech/) - Serverless Postgres with automatic scaling
* [Turso](https://turso.tech/) - Edge database built on libSQL with global replication
* [DigitalOcean](https://www.digitalocean.com/products/managed-databases) - Managed database clusters with automated failover
Choose a provider based on your needs for:
* Pricing and budget
* Geographic region availability
* Scaling requirements
* Additional features (backups, monitoring, etc.)
## Migrate database
**Why is it necessary?**
Pushing database migrations ensures that your database schema in the remote database instance is configured to match TurboStarter's requirements. This step is crucial for the application to function correctly.
**How to do it?**
There are two ways to run a migration:
TurboStarter comes with a predefined GitHub Actions workflow to handle database migrations. You can find its definition in the `.github/workflows/publish-db.yml` file.
Set your `DATABASE_URL` as a [secret for your GitHub repository](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions).
Then, you can run the workflow which will publish the database schema to your remote database instance.
[Check how to run a GitHub Actions workflow.](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow)
You can also run your migrations locally, although this is not recommended for production.
To do so, set the `DATABASE_URL` environment variable to your database URL (provided by your database provider) in the `.env.local` file and run the following command:
```bash
pnpm with-env pnpm --filter @workspace/db db:migrate
```
This command will run the migrations and apply them to your remote database.
[Learn more about database migrations.](/docs/web/database/migrations)
## (Optional) Set up Firebase project
**Why is it necessary?**
Setting up a Firebase project is optional and depends on which features your app uses. For example, if you want to use [Analytics](/docs/mobile/analytics/overview) with [Google Analytics](/docs/mobile/analytics/configuration#google-analytics), setting up a Firebase project is required.
**How to do it?**
Please refer to the [Firebase project](/docs/mobile/installation/firebase) section on how to set up and configure your Firebase project.
## Set up web backend API
**Why is it necessary?**
Setting up the backend is necessary to have a place to store your data and for other features (e.g. authentication, billing, or storage) to work properly.
**How to do it?**
Please refer to the [web deployment checklist](/docs/web/deployment/checklist) on how to set up and deploy the web app backend to production.
## Environment variables
**Why is it necessary?**
Setting the correct environment variables is essential for the application to function correctly. These variables include API keys, database URLs, and other configuration details required for your app to connect to various services.
**How to do it?**
Use our `.env.example` files to get the correct environment variables for your project. Then add them to your project on the [EAS platform](https://docs.expo.dev/eas/environment-variables/) for the correct profile and environment:

Alternatively, you can add them to your `eas.json` file under the correct profile:
```json title="eas.json"
{
  "profiles": {
    "base": {
      "env": {
        "EXPO_PUBLIC_DEFAULT_LOCALE": "en",
        "EXPO_PUBLIC_AUTH_PASSWORD": "true",
        "EXPO_PUBLIC_AUTH_MAGIC_LINK": "false",
        "EXPO_PUBLIC_THEME_MODE": "system",
        "EXPO_PUBLIC_THEME_COLOR": "orange"
      }
    },
    "production": {
      "extends": "base",
      "autoIncrement": true,
      "env": {
        "APP_ENV": "production",
        "EXPO_PUBLIC_SITE_URL": "https://www.turbostarter.dev"
      }
    }
  }
}
```
## Build your app
Building your app requires an EAS account and project. If you don't have one, you can create it by following the steps [here](https://expo.dev/eas).
**Why is it necessary?**
Building your app is necessary to create a standalone application bundle that can be published to the stores.
**How to do it?**
There are two ways to build a bundle for your app:
TurboStarter comes with a predefined GitHub Actions workflow to handle building your app on EAS. You can find its definition in the `.github/workflows/publish-mobile.yml` file.
Set your `EXPO_TOKEN` as a [secret for your GitHub repository](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions). You can obtain it from your EAS account, in the [Access Tokens](https://expo.dev/settings/access-tokens) section.
Then, you can run the workflow which will build the app on [EAS platform](https://expo.dev/eas).
[Check how to run a GitHub Actions workflow.](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow)
You can also run your build locally, although this is not recommended for production.
To do so, you'll need to have [EAS CLI](https://github.com/expo/eas-cli) installed on your machine. You can install it by running the following command:
```bash
npm install -g eas-cli
```
Then, run the following command to build your app with the `production` profile:
```bash
eas build --profile production --platform all
```
This will build the app for both platforms (iOS and Android) and output the results in your app folder.
## Submit to stores
**Why is it necessary?**
Releasing your app to the stores is essential for making it accessible and discoverable by your users. This allows users to find, install, and trust your application through official channels.
**How to do it?**
We've prepared dedicated guides for each store that TurboStarter supports out-of-the-box, please refer to the following pages:
That's it! Your app is now live and accessible to your users, good job! 🎉
* Optimize your store listings with compelling descriptions, keywords, screenshots and preview videos
* Remove placeholder content and replace with your final production content
* Update all visual branding including favicon, scheme, splash screen and app icons
---
url: /docs/mobile/publishing/ios
title: App Store (iOS)
description: Learn how to publish your mobile app to the Apple App Store.
---
[Apple App Store](https://www.apple.com/app-store/) is the primary platform for distributing iOS apps, making them available on iPhones, iPads, and other Apple devices to millions of users worldwide.
To submit your app to the App Store, you'll need to follow a series of steps. We'll walk through those steps here.
Before you submit, review the publishing [guidelines](/docs/mobile/marketing) and confirm that your app meets Apple's policies to avoid common rejections.
## Developer account
An Apple Developer account is required to submit your app to the Apple App Store. You can sign up for an Apple Developer account on the [Apple Developer Portal](https://developer.apple.com/account/).

To submit apps to the App Store, you must also be a member of the Apple Developer Program. You can join the program by paying the annual fee.
## Submission
There are two primary ways to submit your iOS app to the App Store:
* **Manual:** Uploading the build yourself through Apple's tools, such as [Transporter](https://apps.apple.com/app/transporter/id1450874784) or [Xcode](https://developer.apple.com/xcode/).
* **Automatic (recommended):** Using [EAS Submit](/docs/mobile/publishing/ios#local-submission) or [CI/CD](/docs/mobile/publishing/ios#cicd-submission-recommended), which simplifies the process, ensures consistency, and reduces manual error.
Below, you'll find guidance for both submission methods—choose the one that fits your workflow and project needs.
### Manual submission
This approach is not recommended, as it is more error-prone and time-consuming due to manual steps. Use this route if you need to upload a build without EAS Submit (for example, during service maintenance) or prefer a fully manual flow from macOS.
**Create the app entry in App Store Connect**
1. Visit [App Store Connect](https://appstoreconnect.apple.com/) and sign in. Accept any pending agreements if prompted.
2. From Apps, click the + button and select *New App*.
3. Enter the app name, primary language, bundle identifier, and a unique SKU (for example, your bundle ID, such as `com.company.myapp`).
4. Press Create to finish setting up the app record.
**Upload the IPA with Transporter**
1. Install [Apple's Transporter](https://apps.apple.com/app/transporter/id1450874784) from the Mac App Store.
2. Open Transporter and sign in with your Apple ID.
3. Drag the `.ipa` into Transporter (or click *Add App* to choose the file).
4. Press *Deliver* to upload. Transfer time varies by file size and network.
**Verify processing and select the build**
1. Once uploaded, Apple processes the binary (often 10-20 minutes).
2. Back in [App Store Connect](https://appstoreconnect.apple.com/), open My Apps and select your app.
3. Under the *App Store* tab, select the new build in the *Build* section. If it's missing, wait and refresh.
4. Proceed with the usual App Store steps (screenshots, metadata, compliance, then submit for review).
For more information about the required metadata, refer to the official guides.
### Local submission
If you followed the [checklist](/docs/mobile/publishing/checklist), you should have the `.ipa` file in your app folder from the [build step](/docs/mobile/publishing/checklist#build-your-app). If you used GitHub Actions to build your app, you can find the results in the `Builds` tab of your [EAS project](https://expo.dev). Download the artifacts and save them on your local machine.
Then, navigate to your app folder and run the following command to submit your app to the App Store:
```bash
eas submit --platform ios
```
The command will guide you through the submission process. You can configure the submission process by adding a submission profile in `eas.json`:
```json title="eas.json"
{
  "submit": {
    "production": {
      "ios": {
        "ascAppId": "your-app-store-connect-app-id"
      }
    }
  }
}
```
1. Sign in to [App Store Connect](https://appstoreconnect.apple.com/) and choose your team.
2. Open the [Apps](https://appstoreconnect.apple.com/apps) area.
3. Select your app from the list.
4. Switch to the *App Store* tab.
5. Go to *General* → *App Information*.
6. In *General Information*, the value labeled *Apple ID* is your `ascAppId`.

To speed up the submission process, you can use the `--auto-submit` flag to automatically submit a build after it is built:
```bash
eas build --platform ios --auto-submit
```
This will automatically submit the build with all the required credentials to the App Store right after it is built.
### CI/CD submission (recommended)
TurboStarter comes with a pre-configured GitHub Actions workflow to submit your mobile app to the App Store automatically. It's located in the `.github/workflows/publish-mobile.yml` file.
To use this workflow, you need to fulfill the following prerequisites:
1. **Configure your App Store Connect API Key**
Run the following command to configure your App Store Connect API Key:
```bash
eas credentials --platform ios
```
The command will prompt you to configure credentials:
1. Choose the `production` build profile.
2. Authenticate with your Apple Developer account and proceed through the prompts.
3. Pick **App Store Connect → Manage your API Key**.
4. Enable **Use an API Key for EAS Submit** for the project.
2. **Provide a submission profile in `eas.json`**
Next, add a submission profile in `eas.json` with the following:
```json title="eas.json"
{
  "submit": {
    "production": {
      "ios": {
        "ascAppId": "your-app-store-connect-app-id"
      }
    }
  }
}
```
1. Log into [App Store Connect](https://appstoreconnect.apple.com/) under the correct team.
2. Go to [Apps](https://appstoreconnect.apple.com/apps) and open your app.
3. Ensure the *App Store* tab is selected.
4. Navigate to *General* → *App Information*.
5. Copy the value shown as *Apple ID* — that is the `ascAppId`.

This workflow also requires a [personal access token](https://docs.expo.dev/accounts/programmatic-access/#personal-access-tokens) for your Expo account. Add it as `EXPO_TOKEN` in your [GitHub repository secrets](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions), which will allow the `eas submit` command to run.
That's it! After completing these steps, [trigger the workflow](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow) to submit your new build to the App Store automatically 🎉
## Review
After completing your app information, you're ready to submit it for review. Click the *Add for review* button and confirm that you want to proceed with the submission:

On the *Distribution* tab, you can configure the release process after the review is complete — whether you want to release the app automatically after review, later, or manually.

Once you've submitted your app for review, it will go through Apple's review process. The duration can vary based on the specifics of your app and you'll be notified when the status changes. For more information, refer to the [App Review](https://developer.apple.com/distribute/app-review/) docs.
If your submission is rejected, you'll receive an email from Apple with the rejection reason. You'll need to fix the issues and upload a new version of your app.

Make sure to follow the [guidelines](/docs/mobile/marketing) or check [publishing troubleshooting](/docs/mobile/troubleshooting/publishing) for more information.
If you need to clarify anything with Apple, you can reply to the app review request in App Store Connect:

This helps you understand the rejection and what you need to change to make your app eligible for distribution.
When your app is approved by Apple (by email or push notification), you'll be able to publish it on the App Store.

You can learn more about the review process in the official guides listed below.
---
url: /docs/mobile/publishing/updates
title: Updates
description: Learn how to update your published app.
---
After you publish your app to the stores, you can release updates to provide your users with new features and bug fixes.
TurboStarter offers two ready-to-use methods for updating your apps; we'll walk through both of them below.
## Over-the-air (OTA) updates
[Over-the-air (OTA) updates](https://en.wikipedia.org/wiki/Over-the-air_update) allow you to push updates to your app without requiring users to download a new version from the app store. This powerful feature enables rapid iteration and quick fixes.

TurboStarter integrates with [EAS Update](https://docs.expo.dev/eas-updates/overview/) to provide a seamless experience for managing your app updates. We also ship a native notification that you can use to tell your users about newly available updates.
To push your update straight to your users, you just need to run a single command:
```bash
eas update --environment [environment] --channel [channel-name] --message "[message]"
```
The app will automatically download the update in the background and apply it the next time your users launch the app. You can also configure the update channel and the message displayed to your users.
Feel free to check the [official documentation](https://docs.expo.dev/eas-update/getting-started/) for more information.
OTA updates are **only supported for non-native changes**. If you need to update your app with a new native feature (or add a package that uses native dependencies), you'll need to submit a new version to the stores - see below for more details.
## Submitting a new version
The most traditional way to update your app is to submit a new version to the stores. This is the most reliable approach, but it can take some time for the new version to be approved and made available to users.
To submit a new version, update the version number in both your `package.json` file and your `app.config.ts` file.
```json
{
...
"version": "1.0.0", // [!code --]
"version": "1.0.1", // [!code ++]
...
}
```
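If you cut releases often, the version bump can be scripted. Here's a minimal sketch of the version arithmetic only - the `bumpPatch` helper is hypothetical, and wiring it up to rewrite `package.json` and `app.config.ts` is left to you:

```typescript
// Increment the patch segment of a semver-style version string.
// Hypothetical helper - extend it if you use prerelease suffixes.
const bumpPatch = (version: string): string => {
  const [major, minor, patch] = version.split(".").map(Number);
  return `${major}.${minor}.${patch + 1}`;
};

console.log(bumpPatch("1.0.0")); // "1.0.1"
```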
Next, follow the exact same steps as [when you initially published your app](/docs/mobile/publishing/checklist). When you submit your app for review, be sure to include release notes for the new version.
---
url: /docs/mobile/push-notifications
title: Push notifications
description: Engage your users with personalized notifications.
---
We are working on push notifications to help you engage your users. Stay tuned for updates.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
---
url: /docs/mobile/recipes/supabase
title: Supabase
description: Learn how to set up Supabase as the database (and optional storage) provider for your TurboStarter project.
---
[Supabase](https://supabase.com) is an open-source backend platform built on top of PostgreSQL that provides a managed database, storage, and other features out of the box.
You can adopt Supabase incrementally - start with just the pieces you need (for example, database only, or database + storage) and add more features over time. There's no requirement to integrate everything at once.
In this guide, we'll walk you through the process of setting up Supabase as a provider for your TurboStarter project. This could include using it as a [database](https://supabase.com/docs/guides/database), [storage](https://supabase.com/docs/guides/storage), [edge runtime for your API](https://supabase.com/docs/guides/functions) and more.
## Prerequisites
Before you start, make sure you have:
* **TurboStarter project** cloned locally with dependencies installed (you can use our [CLI](/docs/web/cli) to create a new project in seconds)
* **Supabase account** - you can create one at [supabase.com](https://supabase.com/sign-up)
* Basic familiarity with the core database docs:
* [Database overview](/docs/web/database/overview)
* [Migrations](/docs/web/database/migrations)
* [Database client](/docs/web/database/client)
## (Optional) Use Supabase locally with Docker
If you're on the Supabase free plan, you can only have a limited number of active hosted databases at once. A good workflow is:
* Use **local Supabase** for day-to-day development
* Keep **one hosted Supabase project** for staging/production (and for testing features that require a deployed project)
Supabase provides a local development stack that runs via **Docker**, managed by the **Supabase CLI**.
### Install prerequisites
* Install **Docker** (Docker Desktop is the easiest option)
* Install the **Supabase CLI** (pick one):
* macOS (Homebrew): `brew install supabase/tap/supabase`
* npm (no global install): `npx supabase --version`
### Initialize and start Supabase locally
From the monorepo root:
```bash
supabase init
supabase start
```
Once it's running, get the local URLs and credentials:
```bash
supabase status
```
You should see a local **DB URL** (Postgres), plus URLs for **Studio** and the local API.
In most default setups, the local Postgres URL looks like:
`postgresql://postgres:postgres@127.0.0.1:54322/postgres`
Always prefer copying the exact value from `supabase status` to avoid port mismatches.
### Point TurboStarter to the local database
Update the **root** `.env.local` so TurboStarter's `@workspace/db` uses the local Postgres:
```dotenv title=".env.local"
DATABASE_URL="postgresql://postgres:postgres@127.0.0.1:54322/postgres"
```
Then run migrations (same as with hosted Supabase):
```bash
pnpm with-env pnpm --filter @workspace/db db:migrate
```
That's it — TurboStarter now talks to your **local Supabase Postgres**.
### Useful local commands
```bash
supabase stop # stop containers
supabase start # start again
supabase status # show URLs/ports/keys
supabase db reset # reset local DB (drops data)
```
## Create a new Supabase project
1. Go to the [Supabase dashboard](https://supabase.com).
2. Create a **new project** (choose a strong database password and a region close to your users).
3. Supabase will automatically provision a **PostgreSQL database** for you.

Optionally, you can customize the **Security options** by choosing the **Only Connection String** option - this opts out of auto-generating an API for the tables inside your database. The generated API isn't needed for the TurboStarter setup, but you can still leverage it for your custom use cases.

Once the project is ready, you can fetch the connection string.
## Get the database connection string
In the Supabase dashboard:
1. Open your project.
2. Click on the **Connect** button at the top.
3. Locate the **connection string** for your chosen ORM (it will be under the **ORMs** tab).

Copy this value - you'll use it as your `DATABASE_URL`.
In your Supabase connection string, you can see a placeholder like `[YOUR-PASSWORD]`. Make sure to replace this with the actual password you set when creating your Supabase project.
## Configure environment variables
TurboStarter reads database connection settings from the **root** `.env.local` file and uses them inside the `@workspace/db` package.
Create (or update) the `.env.local` file in the **monorepo root**:
```dotenv title=".env.local"
DATABASE_URL="postgres://postgres.[YOUR-PROJECT-REF]:[YOUR-PASSWORD]@aws-0-[aws-region].pooler.supabase.com:6543/postgres?pgbouncer=true&connection_limit=1"
```
Replace:
* `YOUR-PROJECT-REF` with your Supabase project ref
* `YOUR-PASSWORD` with the database password you set when creating the project
* `aws-region` with the region shown in the Supabase connection string
These variables are validated in the `@workspace/db` package and used to create the Drizzle client for your database.
For more background on how `DATABASE_URL` is used, see [Database overview](/docs/web/database/overview).
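A quick way to catch copy-paste mistakes is to parse the string before running migrations. A small sketch using Node's built-in `URL` - the project ref, password, and region below are placeholders, not real values:

```typescript
// Parse a Supabase pooler connection string to verify its parts.
// All values here are placeholders - use your real DATABASE_URL.
const url = new URL(
  "postgres://postgres.abcd1234:secret@aws-0-eu-central-1.pooler.supabase.com:6543/postgres?pgbouncer=true",
);

console.log(url.hostname); // "aws-0-eu-central-1.pooler.supabase.com"
console.log(url.port); // "6543"
console.log(url.searchParams.get("pgbouncer")); // "true"
```

If the hostname or port doesn't match what the Supabase dashboard shows, fix the string before pointing migrations at it.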
## Set up your Supabase database
With `DATABASE_URL` now pointing to Supabase, you can apply the existing TurboStarter schema to your Supabase database.
From the monorepo root, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:migrate
```
This will:
* Use your Supabase `DATABASE_URL` from `.env.local`
* Run all pending SQL migrations from `packages/db/migrations`
* Create the full TurboStarter schema (users, billing, demo tables, etc.) in Supabase
If you're actively iterating on the schema, you can generate new migrations and apply them as described in [Migrations](/docs/web/database/migrations).
After running your migrations, you may want to seed your database with initial data (such as demo users or organizations). You can do this by running the following command:
```bash
pnpm with-env pnpm turbo db:seed
```
This will populate your Supabase database with some example data you can use to test your application.
## Use Supabase Storage as S3-compatible storage
TurboStarter's storage layer is designed to work seamlessly with **any S3-compatible provider**. In this section, we'll show how to use [Supabase Storage](/docs/web/storage/overview) as your application's file storage backend.
Supabase Storage provides a simple, S3-compatible API and is a great choice if you're already using Supabase for your database.
### Create a storage bucket
1. In the Supabase dashboard, go to **Storage → Buckets**.
2. Click **Create bucket** (name it whatever you want, for example `avatars` or `uploads`).
3. Adjust settings based on your needs (e.g. limit the maximum file size, specify the allowed file types, etc.)

You can create multiple buckets (for documents, images, videos, etc.) if needed.
### Generate S3 access keys in Supabase dashboard
1. Go to **Storage → S3 → Access keys**.
2. Click **New access key**.
3. Give it a descriptive name and create the key.
4. Copy the **Access key ID** and **Secret access key** to use in your application.

### Configure S3 environment variables for Supabase Storage
In your web application's `.env.local`, add (or update) the S3 configuration used by TurboStarter's storage layer:
```dotenv title=".env.local"
S3_REGION="us-east-1"
S3_BUCKET="avatars"
S3_ENDPOINT="https://[YOUR-PROJECT-REF].supabase.co/storage/v1/s3"
S3_ACCESS_KEY_ID="your-access-key-id"
S3_SECRET_ACCESS_KEY="your-secret-access-key"
```
These variables integrate directly with the storage configuration described in:
* [Storage overview](/docs/web/storage/overview)
* [Storage configuration](/docs/web/storage/configuration)
Once set, existing TurboStarter file upload flows (e.g. user avatars, organization logos) will use Supabase Storage via presigned URLs.
## Run your API on Supabase Edge Functions
As we're using [Hono](https://hono.dev) as our API server, you can deploy it as a Supabase Edge Function so it runs close to your users.
At a high level:
1. Install the [Supabase CLI](https://supabase.com/docs/guides/cli) and initialize a Supabase project locally with `supabase init`.
2. Create a new [Edge Function](https://supabase.com/docs/guides/functions/quickstart) (for example `hono-backend`) with `supabase functions new hono-backend`.
3. Inside the generated function (for example `supabase/functions/hono-backend/index.ts`), set up a basic Hono app and export it via `Deno.serve(app.fetch)`:
```ts
import { Hono } from "jsr:@hono/hono";
// change this to your function name
const functionName = "hono-backend";
const app = new Hono().basePath(`/${functionName}`);
app.get("/hello", (c) => c.text("Hello from hono-server!"));
Deno.serve(app.fetch);
```
4. Run the function locally with `supabase start` and `supabase functions serve --no-verify-jwt`, then call it from your TurboStarter app using the local or deployed function URL.
5. When you're ready, deploy the function with `supabase functions deploy` (or `supabase functions deploy hono-backend`) and manage it using the Supabase dashboard, as described in the [Supabase Edge Functions docs](https://supabase.com/docs/guides/functions).
This is entirely optional, but it's a great fit for lightweight APIs, webhooks, and other serverless logic you want to run alongside your Supabase project.
## Explore additional Supabase features
Supabase is a full Postgres development platform, so beyond the database and storage pieces wired up above you can gradually add more features as your app grows ([see the Supabase homepage](https://supabase.com/) for an overview).
Some features that fit especially well with TurboStarter's design are:
* [Realtime](https://supabase.com/docs/guides/realtime) - built on [Postgres replication](https://www.postgresql.org/docs/current/runtime-config-replication.html), so you can stream changes from your existing TurboStarter tables (inserts, updates, deletes) into live UIs without changing how you manage schema or RLS. You still define tables and policies via `@workspace/db`, and opt into Realtime on top.
* [Vector](https://supabase.com/docs/guides/vector) - powered by the [pgvector](https://github.com/pgvector/pgvector) extension and stored in regular Postgres tables, making it easy to integrate semantic search or AI features while keeping everything in the same migrations and Drizzle models you already use in TurboStarter. We're using it extensively in our dedicated [AI Kit](/ai).
* [Cron](https://supabase.com/docs/guides/functions/cron) - enables you to schedule background jobs and periodic tasks with [pg\_cron](https://github.com/citusdata/pg_cron). You can define cron jobs for things like scheduled database cleanups, sending emails, report generation, or any recurring logic, all managed alongside your TurboStarter app with full Postgres integration.
Because these features are all layered on top of Postgres, you can introduce them incrementally and keep managing everything through your familiar workflow.
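To make the Vector feature concrete: pgvector ranks rows by a distance metric such as cosine distance, which is `1 - cosineSimilarity`. Here's a plain TypeScript sketch of the underlying math - illustrative only, since in production the database computes this for you:

```typescript
// Cosine similarity between two embedding vectors (illustrative only -
// pgvector computes this in the database via its distance operators).
const cosineSimilarity = (a: number[], b: number[]): number => {
  const dot = a.reduce((sum, x, i) => sum + x * (b[i] ?? 0), 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
};

console.log(cosineSimilarity([1, 0], [1, 0])); // → 1 (identical direction)
```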
## Start the development server
With the database and other services configured to use Supabase, you can start TurboStarter as usual from the monorepo root:
```bash
pnpm dev
```
TurboStarter will now:
* Use **Supabase Postgres** as your database through `DATABASE_URL`
* Use **Supabase Storage** as your file storage through the S3-compatible endpoint
* Leverage **Supabase Edge Functions** (for example, with Hono) for your serverless backend
That's it! You can now start building your application with Supabase as your main provider. Explore the [Supabase documentation](https://supabase.com/docs) for more features and best practices.
---
url: /docs/mobile/stack
title: Tech Stack
description: A detailed look at the technical details.
---
## Turborepo
[Turborepo](https://turbo.build/) is a monorepo tool that helps you manage your project's dependencies and scripts. We chose a monorepo setup to make it easier to manage the structure of different features and enable code sharing between different packages.
## React Native + Expo
[React Native](https://reactnative.dev/) is an open-source mobile application development framework created by Facebook. It is used to develop applications for Android and iOS by enabling developers to use [React](https://react.dev) along with native platform capabilities.
[Expo](https://expo.dev/) is a framework and a platform built around React Native. It provides a set of tools and services that help you develop, build, deploy, and quickly iterate on iOS, Android, and web apps from the same JavaScript/TypeScript codebase. It's like Next.js for mobile development.
## Tailwind CSS
[Uniwind](https://uniwind.dev/) uses Tailwind CSS as a styling language to create a universal style system for React Native. It allows you to use Tailwind CSS classes in your React Native components, providing a familiar styling experience for web developers. We also use [React Native Reusables](https://github.com/mrzachnugent/react-native-reusables) as our headless component library, with a CLI to generate pre-designed components with a single command.
## Hono & React Query
[Hono](https://hono.dev) is a small, simple, and ultrafast web framework for the edge. It provides tools to help you build APIs and web applications faster. It includes an RPC client for making type-safe function calls from the frontend. We use Hono to build our serverless API endpoints.
To make data fetching and caching from our API easy and reliable, we pair Hono with [React Query](https://tanstack.com/query/latest). It helps manage asynchronous data, caching, and state synchronization between the client and backend, delivering a fast and seamless UX.
## Better Auth
[Better Auth](https://www.better-auth.com) is a modern authentication library for fullstack applications. It provides ready-to-use snippets for features like email/password login, magic links, OAuth providers, and more. We use Better Auth to handle all authentication flows in our application.
## Drizzle
[Drizzle](https://orm.drizzle.team/) is a super fast [ORM](https://orm.drizzle.team/docs/overview) (Object-Relational Mapping) tool for databases. It helps manage databases, generate TypeScript types from your schema, and run queries in a fully type-safe way.
We use [PostgreSQL](https://www.postgresql.org) as our default database, but thanks to Drizzle's flexibility, you can easily switch to MySQL, SQLite or any [other supported database](https://orm.drizzle.team/docs/connect-overview) by updating a few configuration lines.
## EAS (Expo Application Services)
[EAS](https://expo.dev/eas) is a set of cloud services provided by Expo for React Native app development. It includes tools for building, submitting, and updating your app, as well as over-the-air updates and analytics.
---
url: /docs/mobile/tests/e2e
title: E2E tests
description: Simulate real user scenarios across the entire stack with automated end-to-end test tools and examples.
---
End-to-end (E2E) tests will be available soon, allowing you to automate testing of real user flows and interactions across your application.
Stay tuned for updates as we roll out robust E2E testing resources and examples.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
---
url: /docs/mobile/tests/unit
title: Unit tests
description: Write and run fast unit tests for individual functions and components with instant feedback.
---
Unit tests are a type of automated test where individual units or components are tested. The "unit" in "unit test" refers to the smallest testable parts of an application. These tests are designed to verify that each unit of code performs as expected.
TurboStarter uses [Vitest](https://vitest.dev) as the unit testing framework. It's a blazing-fast test runner built on top of [Vite](https://vitejs.dev), designed for modern JavaScript and TypeScript projects.
If you've used [Jest](https://jestjs.io) before, you already know Vitest - it shares the same API. But Vitest is built for speed: native TypeScript support without transpilation, parallel test execution, and a smart watch mode that only re-runs tests affected by your changes.
It comes with everything you need out of the box - code coverage, snapshot testing, mocking, and a slick UI for debugging. Fast feedback, zero configuration.
## Why write unit tests?
Unit tests give you **fast, focused feedback** on small pieces of your code - individual functions, hooks, or components. Instead of debugging an entire page or flow, you can verify just the logic you care about in isolation.
They also act as **living documentation**: a good test tells you how a function is supposed to behave, which edge cases are important, and what assumptions the code makes. This makes it much easier to safely refactor or extend features later.
In TurboStarter, unit tests are designed to be **cheap and quick to run**, so you can keep Vitest running in watch mode while you code. Every change you make is immediately checked, helping you catch regressions before they ever reach integration or end‑to‑end tests.
## Configuration
TurboStarter configures Vitest to be **as simple as possible**, while still taking advantage of [Turborepo's caching](https://turborepo.com/docs/crafting-your-repository/caching) and Vitest's [Test Projects](https://vitest.dev/guide/projects).
```ts title="vitest.config.ts"
import { mergeConfig } from "vitest/config";
import baseConfig from "@workspace/vitest-config/base";
export default mergeConfig(baseConfig, {
test: {
/* your extended test configuration here */
},
});
```
* **Per-package tests**: each package that has unit tests defines its own `test` script. This keeps the configuration close to the code and makes it easy to add tests to any workspace.
* **Turbo tasks for CI**: the root `test` task (`pnpm test`) uses `turbo run test` to execute all package-level test scripts with smart caching, which is ideal for CI pipelines where you want to avoid re-running unchanged tests.
* **Vitest Test Projects for local dev**: a root Vitest configuration uses [Test Projects](https://vitest.dev/guide/projects) to run all unit test suites from a single command, which is perfect for local development when you want fast feedback across the whole monorepo.
This **hybrid setup** combines Turborepo and Vitest Projects in a way that fits TurboStarter's principles: cached, package-aware runs in CI, and a single, unified Vitest entry point for local development.
You can read more about this setup in the official documentation guides listed below.
## Running tests
There are a few different ways to run unit tests, depending on what you're doing:
* **CI / full test run** - at the root of the repo:
```bash
pnpm test
```
This runs `turbo run test`, which executes all `test` scripts in packages that define them, with Turborepo handling caching so unchanged packages are skipped. This is what you should use in your CI/CD pipeline.
* **One-off local run with Vitest Projects**:
```bash
pnpm test:projects
```
This uses Vitest [Test Projects](https://vitest.dev/guide/projects) to run all configured unit test suites from a single command, which is great when you want to quickly validate the whole monorepo locally.
* **Watch mode during development**:
```bash
pnpm test:projects:watch
```
This starts Vitest in watch mode across all Test Projects. As you edit files, only the affected tests are re-run, giving you fast feedback while you work.
## Code coverage
Unit test coverage helps you understand **how much** of your code is being tested. While it can't guarantee bug-free code, it shines a light on untested paths that could hide issues or regressions.
To generate a code coverage report for all unit tests, run:
```bash
pnpm turbo test:coverage
```
This command runs the coverage task across all relevant packages (using Turborepo) and collects the results into a single coverage output.
To open the coverage report in your browser:
```bash
pnpm turbo test:coverage:view
```
This will build the HTML report and launch it using your default browser, so you can explore which files and branches are covered.
You can also store the generated coverage report as a [GitHub Actions artifact](https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts) during your CI/CD pipeline; just add the following steps to your workflow job:
```yaml title=".github/workflows/ci.yml"
# your workflow job configuration here
- name: 📊 Generate coverage
run: pnpm turbo test:coverage
- name: 🗃️ Archive coverage report
uses: actions/upload-artifact@v5
with:
name: coverage-${{ github.sha }}
path: tooling/vitest/coverage/report
```
This will generate a test coverage report and upload it as an artifact, so you can access it from the GitHub Actions tab for later inspection.
A high coverage percentage means your tests execute most lines and branches - but the quality and relevance of your tests matter more than the raw number. Use coverage reports to spot gaps and guide improvements, not as the sole metric of test health.

## Best practices
Unit tests should work **for you**, not the other way around. Focus on writing tests that make it easier to change code with confidence, not on satisfying arbitrary rules or reaching a magic number in a dashboard.
Code coverage is a **useful metric**, but it **SHOULD NOT** be the goal. It's better to have a smaller set of high‑value tests that cover critical paths and edge cases than a huge suite of fragile tests that are hard to maintain.
When in doubt, ask: *“Does this test give **me** confidence that I can change this code without breaking users?”* If the answer is no, refactor or remove it.
Finally, keep unit tests focused on **small, isolated pieces of logic**. More advanced flows - like multi-step user journeys, cross-service interactions, or full-page behavior - are better covered by [end-to-end (E2E) tests](/docs/mobile/tests/e2e), where you can verify the system as a whole.
---
url: /docs/mobile/troubleshooting/billing
title: Billing
description: Find answers to common billing issues.
---
## Products/offerings not visible on the paywall
If your paywall loads but shows **no products** (empty packages/offerings), it's almost always a **store configuration** issue (App Store Connect / Google Play) or an **app-to-store mismatch**, not a UI bug in the paywall.
### Quick checks
First, verify:

* The **product identifiers** in your provider match the store **exactly** (case-sensitive).
* You're testing on a **real device** (not a simulator/emulator).
* Your app's **Bundle ID / package name** matches what the store knows.
* You're using the correct provider **platform key** for the build/environment you're running.
### iOS
On iOS, confirm your IAPs are in **Ready to Submit** or **Approved** (and allow **24h+** after approval for store propagation). If you see an error like the one below, it usually means App Store Connect has pending **Agreements/Tax/Banking** requirements (e.g. Paid Applications Agreement not signed, banking not cleared, tax forms incomplete):
```
[StoreKit] Error enumerating unfinished transactions: Error Domain=ASDErrorDomain Code=509 "No active account"
```
Also double-check you're not accidentally using a **StoreKit Configuration file** when you expect live store products, and if you recently changed product metadata and things got flaky, try creating a **new product identifier** and testing again.
### Android
On Android, make sure the product is **Active** in Play Console and that you're testing with an app build distributed via a **testing track** (internal/closed) with your account added as a **tester**. If products are region/compatibility-limited, confirm they're available for your tester's country/device settings.
### Still empty?
Sign in to the App Store / Play Store on the device with the intended test account, confirm you're running the expected build type (local debug vs TestFlight can differ), and add logs around the provider's product fetch plus any underlying store error - those messages typically point directly to what's misconfigured.
---
url: /docs/mobile/troubleshooting/installation
title: Installation
description: Find answers to common mobile installation issues.
---
## Cannot clone the repository
Issues related to cloning the repository are usually caused by a Git misconfiguration on your local machine. The commands in this guide use SSH: they will work only if you have set up your SSH keys in GitHub.
If you run into issues, [please follow this guide to set up your SSH key in GitHub.](https://docs.github.com/en/authentication/connecting-to-github-with-ssh)
If this also fails, use HTTPS instead. You will find the commands on the repository's GitHub page under the "Clone" dropdown.
Please also make sure that the account that accepted the invite to TurboStarter and the locally connected account are the same.
## Local database doesn't start
If you cannot run the local database container, it's likely you have not started [Docker](https://docs.docker.com/get-docker/) locally. Our local database requires Docker to be installed and running.
Please make sure you have installed Docker (or compatible software such as [Colima](https://github.com/abiosoft/colima) or [OrbStack](https://github.com/orbstack/orbstack)) and that it is running on your local machine.
Also, make sure that you have enough [memory and CPU allocated](https://docs.docker.com/engine/containers/resource_constraints/) to your Docker instance.
## I don't see my translations
If you don't see your translations appearing in the application, there are a few common causes:
1. Check that your translation `.json` files are properly formatted and located in the correct directory
2. Verify that the language codes in your configuration match your translation files
3. Enable debug mode (`debug: true`) in your i18next configuration to see detailed logs
[Read more about configuration for translations](/docs/mobile/internationalization#configuration)
## Expo cannot detect Xcode
If you get the following error:
```bash
Expo cannot detect Xcode
Xcode must be fully installed before you can continue
```
This is usually related to the Xcode CLI not being installed. You can fix this by running the following command:
```bash
sudo xcode-select -s /Applications/Xcode.app/Contents/Developer
```
If you still face the issue, please make sure you have the latest version of Xcode installed.
## "Module not found" error
This issue is mostly related either to a dependency installed in the wrong package or to issues with the file system.
The most common cause is incorrect dependency installation. Here's how to fix it:
1. Clean the workspace:
```bash
pnpm clean
```
2. Reinstall the dependencies:
```bash
pnpm i
```
If you're adding new dependencies, make sure to install them in the correct package:
```bash
# For main app dependencies
pnpm add my-package --filter mobile
# For a specific package
pnpm add my-package --filter @workspace/ui
```
If the issue persists, please check the file system for any issues.
### Windows OneDrive
OneDrive can cause file system issues with Node.js projects due to its file syncing behavior. If you're using Windows with OneDrive, you have two options to resolve this:
1. Move your project to a location outside of OneDrive-synced folders (recommended)
2. Disable OneDrive sync specifically for your development folder
This prevents file watching and symlink issues that can occur when OneDrive tries to sync Node.js project files.
---
url: /docs/mobile/troubleshooting/publishing
title: Publishing
description: Find answers to common mobile publishing issues.
---
## My app submission was rejected
If your app submission was rejected, you probably got an email with the reason. You'll need to fix the issues and upload a new build of your app to the store and send it for review again.
Make sure to follow the [guidelines](/docs/mobile/marketing) when submitting your app to ensure that everything is set up correctly.
## App Store screenshots don't match requirements
If your app submission was rejected due to screenshot issues, make sure:
1. Screenshots match the required dimensions for each device
2. Screenshots accurately represent your app's functionality
3. You have provided screenshots for all required device sizes
4. Screenshots don't contain device frames unless they match Apple's requirements
[See Apple's screenshot specifications](https://developer.apple.com/help/app-store-connect/reference/screenshot-specifications/)
## Version number conflicts
If you get version number conflicts when submitting:
1. Ensure your `app.json` version matches what's in the store
2. Increment the version number appropriately:
```json title="app.json"
{
  "expo": {
    "version": "1.0.1",
    "android": { "versionCode": 2 },
    "ios": { "buildNumber": "2" }
  }
}
```
3. Make sure both stores have unique version numbers
## Missing or incorrect environment variables
If your build succeeds but the binary is misconfigured (e.g., API URL shows as `undefined`, Sentry auth fails, or `app.config.*` settings don’t apply), verify your EAS environment variables:
1. Define variables on EAS and assign them to the correct environment (`development`, `preview`, `production`).
2. For values used in app code, prefix with `EXPO_PUBLIC_` and read via `process.env.EXPO_PUBLIC_...`.
3. For config-time values (bundle identifiers, file paths), read `process.env.VARNAME` from your `app.config.*`.
4. Explicitly set `environment` in `eas.json` build profiles, or pass `--environment` to `eas update` so updates use the same variables as builds.
5. For local development, pull variables into a `.env` file:
```bash
eas env:pull --environment development
```
6. Use secret file variables (e.g., `GOOGLE_SERVICES_JSON`) and reference them in `app.config.*`.
7. Keep `.env` out of git; cloud builds don’t rely on your local `.env`.
See: [Environment variables in EAS](https://docs.expo.dev/eas/environment-variables/).
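As a sketch of point 2, reading a public variable in app code might look like this. The variable name `EXPO_PUBLIC_API_URL` and the fallback URL are illustrative, not part of TurboStarter's actual configuration:

```typescript
// Reads a public Expo env variable with a local fallback (names illustrative).
// Note: Expo inlines EXPO_PUBLIC_* variables at build time, so in real app
// code you must reference process.env.EXPO_PUBLIC_API_URL literally - this
// parameterized form is only for demonstration.
const getApiUrl = (env: Record<string, string | undefined>): string =>
  env.EXPO_PUBLIC_API_URL ?? "http://localhost:3000";

console.log(getApiUrl({ EXPO_PUBLIC_API_URL: "https://api.example.com" }));
```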
## My app crashes on production build
If the app works in development but crashes in a production build, check these common causes:
1. **Missing or incorrect environment variables at build time**. EAS cloud jobs don’t use your local `.env` by default. Ensure variables exist on EAS, are assigned to the correct environment, and use `EXPO_PUBLIC_` for values read in app code. See: [Environment variables in EAS](https://docs.expo.dev/eas/environment-variables/).
2. **Missing native config files**. Provide `google-services.json` / `GoogleService-Info.plist` via secret file variables (e.g., `GOOGLE_SERVICES_JSON`) and reference them in `app.config.*`.
3. **Production-only code paths**. Guard dev-only code with `__DEV__`, avoid importing dev tools in production, and ensure feature flags don’t access undefined values.
4. **Misconfigured native modules or plugins**. Verify required plugins/babel config are present and rebuild after cache clears.
Try this:
1. Run the app with a production JS bundle locally to surface minification issues:
```bash
npx expo start --no-dev --minify
```
2. Inspect device logs when the crash occurs (Android: `adb logcat`, iOS: Console.app or Xcode Devices).
3. Rebuild with a clean cache if needed:
```bash
eas build --clear-cache
```
---
url: /docs/web/admin/overview
title: Overview
description: Get started with the admin dashboard in TurboStarter.
---
TurboStarter ships with a fully functional admin dashboard - it's a comprehensive tool for managing your application and users from one central place.
The panel is designed to be intuitive and easy to use, while being customizable and scalable at the same time. You can access it at [/admin](http://localhost:3000/admin).

## Roles and permissions
With the initial configuration, your app has two roles available to users: `user` and `admin`. By default, all users are created with the `user` role.
To access the admin dashboard, a user must have the `admin` role.
```ts
const UserRole = {
USER: "user",
ADMIN: "admin",
} as const;
```
You can, of course, define more roles and assign granular permissions, but we recommend keeping the number of roles to a minimum.
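A sketch of how such a role check might be used in code - the guard function below is illustrative, not TurboStarter's actual API:

```typescript
const UserRole = {
  USER: "user",
  ADMIN: "admin",
} as const;

type UserRole = (typeof UserRole)[keyof typeof UserRole];

// Hypothetical guard - only admins may enter the dashboard.
const canAccessAdmin = (role: UserRole): boolean => role === UserRole.ADMIN;

console.log(canAccessAdmin("admin")); // → true
```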
## Making a user an admin
To promote a user to the admin role, use your database provider's UI or leverage our built-in [Studio](/docs/web/database/overview#studio). After you find the user you want to promote, change their role from `user` to `admin`.
**Ensure the user you are promoting truly requires admin privileges, as they will gain access to all resources and permissions.**
To determine whether a user is eligible for the `admin` role, review the following recommendations before promoting the user:
* The user's email is verified
* Two-factor authentication (2FA) is enabled
* The user is **not** banned or reported
By default, when you [run services](/docs/web/installation/commands#setting-up-services) for the first time, your database is [seeded](/docs/web/installation/commands#seeding-database) with example data. This includes an admin user with test credentials that you can use to test admin functionality locally.
```json
{
"email": "me+admin@turbostarter.dev",
"password": "Pa$$w0rd"
}
```
You can modify these by setting the `SEED_EMAIL` and `SEED_PASSWORD` environment variables in the `.env.local` file and running the seed process again.
**This flow is for local testing purposes only. Do not use it in production.**
## Dashboard
The admin dashboard is your **central place** to manage your application. It includes management tools for each resource you have defined.
Users with the `admin` role will see an additional dropdown item in the navigation menu, allowing them to access the admin dashboard.

Explore each section of the page below to familiarize yourself with the available tools and options.
---
url: /docs/web/admin/ui
title: Super Admin UI
description: Get familiar with the Super Admin dashboard and start managing your application.
---
When you open [/admin](http://localhost:3000/admin), you will see the homepage of the admin dashboard. It includes some quick actions and a summary of the resources you have in your application. Feel free to customize it to your needs.
To simplify navigation, we also shipped a sidebar that you can use to navigate between different sections and access all admin capabilities.

Check below for more details about each section.
## Users
Central place to manage your users. You can see the list of users and search and filter them, e.g. by role, 2FA status, banned state, and creation date.
Use it to quickly find users that you need to manage or to see how your SaaS is performing.

When you click on a user, you will see their details. You can edit the user's name and role, view their 2FA status, and see their created/updated timestamps.
You can also view and manage the resources related to that specific user, such as their connected accounts/providers, subscriptions, memberships, etc.

Beyond simply viewing user information, the admin dashboard enables you to perform a variety of essential user management actions, including:
* **Impersonate the user**: Temporarily log in as the selected user to troubleshoot their experience, verify permissions, or offer assistance directly from their perspective.
* **Ban or unban the user**: Restrict access to your application by banning users who violate terms of service, or lift restrictions when appropriate by unbanning them.
* **Delete the user**: Permanently remove a user and any associated data from your system when necessary, such as for GDPR compliance or at user request.
These administrative actions help you maintain a secure, compliant, and user-friendly environment for your SaaS platform.
## Organizations
See how your multi-tenancy is performing, presented elegantly as a data table. You can search and filter organizations by name, slug, member count, and more.

In the single organization view, you get an overview of the selected organization, e.g. its members or the invitations associated with it.

Here are some example actions you can perform when managing an organization:
* **Edit organization details**: Change the organization name, slug, or other profile information.
* **Invite or remove members**: Add new users to an organization or revoke access from existing members.
* **Change member roles**: Promote a member to an admin or downgrade their access.
* **View and manage invitations**: See pending invites and revoke them if needed.
* **Delete organization**: Remove an organization and all its related data (action usually restricted to super admins).
* **Impersonate organization admin**: Temporarily assume the perspective of an organization's admin for troubleshooting.
* **Audit activity**: View a history of actions taken within the organization for security and compliance.
These actions help you maintain control over multi-tenant environments and ensure that your SaaS remains secure and organized.
## Customers
Manage your customers and their subscriptions. Use search, filters, and sorting to quickly find the right record and understand billing state at a glance.

A few example actions you can perform when managing a customer:
* **Open a customer** to view subscription details and billing history.
* **Change subscription plan** or move a customer to a different tier.
* **Start or extend a trial**, or **cancel a subscription** when needed.
* **Update billing details** like billing email and tax information.
* **Delete customer** to remove them and their billing profile (restricted action).
## Add your own resources
It’s your admin panel at the end of the day - extend it with any domain‑specific resources your product needs. The UI ships with reusable table, filter, form, and layout primitives so you can compose new sections quickly.
To make CRUD panels fast to build, we also provide dedicated hooks, UI components, and API helpers that handle the boring plumbing - data fetching, pagination, sorting, filters, and mutations - so you can focus on your domain logic instead of boilerplate.
### Start from an example
Duplicate an existing resource (like `Users` or `Organizations`) as a baseline and adjust the schema/columns to your needs.
### Build the list view
Compose a data table with columns, sorting, full‑text search, and filters using the shipped primitives.
Leverage the dedicated hooks, UI components, and API helpers to handle fetching, pagination, sorting, filters, and mutations with minimal boilerplate.
### Add a details view
Create a single‑resource page and, if helpful, add tabs for related entities (e.g., memberships, invoices) using the same building blocks.
### Wire up navigation
Register your route in the admin sidebar so the new resource appears alongside the built‑ins.
### Secure with permissions
Protect access using your RBAC rules and feature flags to control who can view or manage the resource.
Et voilà! You now have a new resource in your admin panel 🥳
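As a sketch of the list-view step, a new resource's table might start from a typed column definition like this - all names here are hypothetical, and TurboStarter's actual table primitives may differ:

```typescript
// Hypothetical column config for a custom admin resource.
interface Column<T> {
  key: keyof T;
  header: string;
  sortable?: boolean;
}

interface Invoice {
  id: string;
  amount: number;
  status: "paid" | "open";
}

const invoiceColumns: Column<Invoice>[] = [
  { key: "id", header: "ID" },
  { key: "amount", header: "Amount", sortable: true },
  { key: "status", header: "Status" },
];

console.log(invoiceColumns.map((c) => c.header).join(", ")); // → ID, Amount, Status
```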
---
url: /docs/web/ai/configuration
title: Configuration
description: Configure AI integration in your TurboStarter project.
---
To ensure scalability and avoid security vulnerabilities, AI requests are proxied by our [Hono backend](/docs/web/api/overview). This means you need to set up AI integration on both the client and server side.
We want to avoid exposing API keys directly to the browser, as this could lead to abuse of your key and generate unnecessary costs.
In this section, we'll explore the configuration for both sides to give you a smooth start.
## Server-side
On the backend, you need to set up two things: environment variables to configure the provider and the procedure to pass responses to the client. Let's go through it!
### Environment variables
You need to set the environment variables that correspond to the AI provider you want to use.
For example, for the OpenAI provider, you would need to set the following environment variables:
```dotenv
OPENAI_API_KEY=
```
However, if you want to use the Anthropic provider, you would need to set these environment variables:
```dotenv
ANTHROPIC_API_KEY=
```
You can find the list of all available providers in the [official documentation](https://sdk.vercel.ai/providers/ai-sdk-providers), along with the required variables that need to be set to ensure the integration works correctly.
### API endpoint
As we're proxying the requests, we need to register an [API endpoint](/docs/web/api/new-endpoint) that will be used to pass the responses to the client.
The steps will be the same as we described in the [API](/docs/web/api/new-endpoint) section. An example implementation could look like this:
```ts title="ai/router.ts"
import { Hono } from "hono";
import { convertToModelMessages, streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export const aiRouter = new Hono().post("/chat", async (c) =>
  streamText({
    model: openai.responses("gpt-5"),
    messages: convertToModelMessages((await c.req.json()).messages),
  }).toUIMessageStreamResponse(),
);
```
As you can see, we're defining which provider and specific model we want to use here.
We're also using [Streams API](https://developer.mozilla.org/en-US/docs/Web/API/Streams_API/Concepts), which allows us to pass the result to the user as soon as the model starts generating it, without needing to wait for the full response to be completed. This gives the user a sense of immediacy and makes the conversation more interactive.
## Client-side
To consume the server response, we can leverage the ready-to-use hooks provided by the [Vercel AI SDK](https://sdk.vercel.ai/docs/ai-sdk-ui/chatbot), such as the `useChat` hook:
```tsx title="page.tsx"
"use client";

import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";

const AI = () => {
  const { messages } = useChat({
    transport: new DefaultChatTransport({
      api: "/api/ai/chat",
    }),
  });

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.parts.map((part, i) => {
            switch (part.type) {
              case "text":
                return <span key={i}>{part.text}</span>;
            }
          })}
        </div>
      ))}
    </div>
  );
};

export default AI;
```
By leveraging this integration, we can easily manage the state of the AI request and update the UI as soon as the response is ready.
TurboStarter ships with a ready-to-use implementation of AI chat, allowing you to see this solution in action. Feel free to reuse or modify it according to your needs.
---
url: /docs/web/ai/overview
title: Overview
description: Get started with AI integration in your TurboStarter project.
---
TurboStarter includes a set of AI rules, skills, subagents, and commands for popular AI editors and tools - so the AI follows this repo's conventions and produces more consistent changes.
See [AI-assisted development](/docs/web/installation/ai-development) to set it up.
For AI integration, TurboStarter uses the [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction), which provides a unified toolkit for building AI features across providers.
It's a simple yet powerful library that exposes a unified API for all major AI providers.
This lets you build AI features without worrying about the quirks of each underlying provider API.
You can learn more about the `ai` package in the [official documentation](https://sdk.vercel.ai/docs/introduction).
## Features
The starter includes common AI features out of the box, such as:
* **Chat**: Build chat applications with ease.
* **Streaming responses**: Stream responses from your AI provider in real time.
* **Image generation**: Generate images using AI technology.
* **Embeddings**: Generate embeddings for your data.
* **Vector stores**: Store and query your embeddings efficiently.
You can easily compose your application using these building blocks or extend them to suit your specific needs.
## Providers
TurboStarter relies on the AI SDK to support multiple AI providers. This means you can switch providers without changing your code, as long as they are supported by the `ai` package.
You can find the list of supported providers in the [official documentation](https://sdk.vercel.ai/providers/ai-sdk-providers).
You can also add your own custom provider. It just needs to implement the common interface and provide the required methods.
Read more about this in the [official guide](https://sdk.vercel.ai/providers/community-providers/custom-providers).
Provider configuration is straightforward. We'll explore it in more detail in the [Configuration](/docs/web/ai/configuration) section.
---
url: /docs/web/analytics/configuration
title: Configuration
description: Learn how to configure web analytics in TurboStarter.
---
The `@workspace/analytics-web` package offers a streamlined and flexible approach to tracking events in your TurboStarter web app using various analytics providers. It abstracts the complexities of different analytics services and provides a consistent interface for event tracking.
In this section, we'll guide you through the configuration process for each supported provider.
Note that the configuration is validated against a schema, so you'll see error messages in the console if anything is misconfigured.
## Providers
TurboStarter supports multiple analytics providers, each with its own unique configuration. Below, you'll find detailed information on how to set up and use each supported provider. Choose the one that best suits your needs and follow the instructions in the respective accordion section.
To use Vercel Analytics as your provider, you need to [create a Vercel account](https://vercel.com/) and [set up a project](https://vercel.com/docs/projects/overview).
Next, enable analytics in your Vercel project settings:
1. Navigate to the [Vercel dashboard](https://vercel.com/dashboard).
2. Select your project.
3. Go to the *Analytics* section.
4. Click *Enable* in the dialog.
Enabling Web Analytics will add new routes (scoped at `/_vercel/insights/*`) after your next deployment.
Also, make sure to activate the Vercel provider as your analytics provider by updating the exports in:
```ts
// [!code word:vercel]
export * from "./vercel";
```
```ts
// [!code word:vercel]
export * from "./vercel/server";
```
```ts
// [!code word:vercel]
export * from "./vercel/env";
```
To customize the provider, you can find its definition in the `packages/analytics/web/src/providers/vercel` directory.
For more information, please refer to the [Vercel Analytics documentation](https://vercel.com/docs/analytics/overview).

To use Google Analytics as your analytics provider, you need to [create a Google Analytics account](https://analytics.google.com/) and [set up a property](https://support.google.com/analytics/answer/9304153).
Next, add a data stream in your Google Analytics account settings:
1. Navigate to [Google Analytics](https://analytics.google.com/).
2. In the *Admin* section, under *Data collection and modification*, click on *Data Streams*.
3. Click *Add stream*.
4. Select *Web* as the platform.
5. Enter the required details for the stream (at minimum, provide a name and website URL).
6. Click *Create stream*.
After creating the stream, you'll need two pieces of information:
1. Your [Measurement ID](https://support.google.com/analytics/answer/12270356) (it should look like `G-XXXXXXXXXX`):

2. Your [Measurement Protocol API secret](https://support.google.com/analytics/answer/9814495):

Set these values in your `.env.local` file in the `apps/web` directory and in your deployment environment:
```dotenv
NEXT_PUBLIC_ANALYTICS_GOOGLE_MEASUREMENT_ID="your-measurement-id"
GOOGLE_ANALYTICS_SECRET="your-measurement-protocol-api-secret"
```
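If you want to fail fast on a misconfigured environment, a quick (hypothetical) sanity check of the Measurement ID format could look like this:

```typescript
// Hypothetical helper: GA4 Measurement IDs start with "G-" followed by
// uppercase alphanumerics (e.g. G-XXXXXXXXXX). Useful to catch a pasted
// Universal Analytics ID (UA-...) by mistake.
const isMeasurementId = (id: string): boolean => /^G-[A-Z0-9]+$/.test(id);
```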
Also, make sure to activate the Google Analytics provider as your analytics provider by updating the exports in:
```ts
// [!code word:google-analytics]
export * from "./google-analytics";
```
```ts
// [!code word:google-analytics]
export * from "./google-analytics/server";
```
```ts
// [!code word:google-analytics]
export * from "./google-analytics/env";
```
To customize the provider, you can find its definition in the `packages/analytics/web/src/providers/google-analytics` directory.
For more information, please refer to the [Google Analytics documentation](https://developers.google.com/analytics).

PostHog is also one of the pre-configured providers for [monitoring](/docs/web/monitoring/overview) in TurboStarter. You can learn more about it [here](/docs/web/monitoring/posthog).
To use PostHog as your analytics provider, you need to configure a PostHog instance. You can obtain the [Cloud](https://app.posthog.com/signup) instance by [creating an account](https://app.posthog.com/signup) or [self-host](https://posthog.com/docs/self-host) it.
Then, create a project and, based on your [project settings](https://app.posthog.com/project/settings), set the following environment variables in your `.env.local` file in the `apps/web` directory and your deployment environment:
```dotenv
NEXT_PUBLIC_POSTHOG_KEY="your-posthog-api-key"
NEXT_PUBLIC_POSTHOG_HOST="your-posthog-instance-host"
```
Also, make sure to activate the PostHog provider as your analytics provider by updating the exports in:
```ts
// [!code word:posthog]
export * from "./posthog";
```
```ts
// [!code word:posthog]
export * from "./posthog/server";
```
```ts
// [!code word:posthog]
export * from "./posthog/env";
```
To customize the provider, you can find its definition in the `packages/analytics/web/src/providers/posthog` directory.
For more information, please refer to the [PostHog documentation](https://posthog.com/docs).

To use Mixpanel as your analytics provider, you need to [create an account](https://mixpanel.com/) and [obtain your project token](https://help.mixpanel.com/hc/en-us/articles/115004502806-Find-Project-Token).
Then, set it as an environment variable in your `.env.local` file in the `apps/web` directory and your deployment environment:
```dotenv
NEXT_PUBLIC_MIXPANEL_TOKEN="your-project-token"
```
Also, make sure to activate the Mixpanel provider as your analytics provider by updating the exports in:
```ts
// [!code word:mixpanel]
export * from "./mixpanel";
```
```ts
// [!code word:mixpanel]
export * from "./mixpanel/server";
```
```ts
// [!code word:mixpanel]
export * from "./mixpanel/env";
```
To customize the provider, you can find its definition in the `packages/analytics/web/src/providers/mixpanel` directory.
For more information, please refer to the [Mixpanel documentation](https://docs.mixpanel.com/).

To use Plausible as your analytics provider, you need to [create an account](https://plausible.io/) and [set up a website](https://plausible.io/docs/add-website).
Then, set your domain and host in your `.env.local` file in the `apps/web` directory and your deployment environment:
```dotenv
NEXT_PUBLIC_PLAUSIBLE_DOMAIN="your-website-domain.com"
NEXT_PUBLIC_PLAUSIBLE_HOST="https://plausible.io"
```
Also, make sure to activate the Plausible provider as your analytics provider by updating the exports in:
```ts
// [!code word:plausible]
export * from "./plausible";
```
```ts
// [!code word:plausible]
export * from "./plausible/server";
```
```ts
// [!code word:plausible]
export * from "./plausible/env";
```
To customize the provider, you can find its definition in the `packages/analytics/web/src/providers/plausible` directory.
For more information, please refer to the [Plausible documentation](https://plausible.io/docs).

To use Umami as your analytics provider, you need to [set up Umami](https://umami.is/docs/getting-started) either by using their [cloud service](https://cloud.umami.is/) or [self-hosting](https://umami.is/docs/install).
Then, set your website ID and host in your `.env.local` file in the `apps/web` directory and your deployment environment:
```dotenv
NEXT_PUBLIC_UMAMI_WEBSITE_ID="your-website-id"
NEXT_PUBLIC_UMAMI_HOST="https://your-umami-instance.com"
UMAMI_API_HOST="https://your-umami-instance.com"
UMAMI_API_KEY="your-api-key"
```
Also, make sure to activate the Umami provider as your analytics provider by updating the exports in:
```ts
// [!code word:umami]
export * from "./umami";
```
```ts
// [!code word:umami]
export * from "./umami/server";
```
```ts
// [!code word:umami]
export * from "./umami/env";
```
To customize the provider, you can find its definition in the `packages/analytics/web/src/providers/umami` directory.
For more information, please refer to the [Umami documentation](https://umami.is/docs).

To use Open Panel as your analytics provider, you need to [create an account](https://openpanel.dev/) and [set up a client for your project](https://docs.openpanel.dev/docs).
Then, set your client ID and secret in your `.env.local` file in the `apps/web` directory and your deployment environment:
```dotenv
NEXT_PUBLIC_OPEN_PANEL_CLIENT_ID="your-client-id"
OPEN_PANEL_CLIENT_SECRET="your-client-secret"
```
Also, make sure to activate the Open Panel provider as your analytics provider by updating the exports in:
```ts
// [!code word:open-panel]
export * from "./open-panel";
```
```ts
// [!code word:open-panel]
export * from "./open-panel/server";
```
```ts
// [!code word:open-panel]
export * from "./open-panel/env";
```
To customize the provider, you can find its definition in the `packages/analytics/web/src/providers/open-panel` directory.
For more information, please refer to the [Open Panel documentation](https://docs.openpanel.dev/).

To use Vemetric as your analytics provider, you need to [create an account](https://vemetric.com/) and [obtain your project token](https://vemetric.com/docs/).
Then, set it as an environment variable in your `.env.local` file in the `apps/web` directory and your deployment environment:
```dotenv
NEXT_PUBLIC_VEMETRIC_PROJECT_TOKEN="your-project-token"
```
Also, make sure to activate the Vemetric provider as your analytics provider by updating the exports in:
```ts
// [!code word:vemetric]
export * from "./vemetric";
```
```ts
// [!code word:vemetric]
export * from "./vemetric/server";
```
```ts
// [!code word:vemetric]
export * from "./vemetric/env";
```
To customize the provider, you can find its definition in the `packages/analytics/web/src/providers/vemetric` directory.
For more information, please refer to the [Vemetric documentation](https://vemetric.com/docs/).

## Client-side context
To enable tracking events, capturing page views and other analytics features **on the client-side**, you need to wrap your app with the `Provider` component that's implemented by every provider and available through the `@workspace/analytics-web` package:
```tsx title="providers.tsx"
// [!code word:AnalyticsProvider]
import { memo } from "react";
import { Provider as AnalyticsProvider } from "@workspace/analytics-web";
interface ProvidersProps {
readonly children: React.ReactNode;
}
export const Providers = memo(({ children }: ProvidersProps) => {
  return (
    <AnalyticsProvider>{children}</AnalyticsProvider>
  );
});
Providers.displayName = "Providers";
```
By implementing this setup, you ensure that all analytics events are properly tracked from your client-side code. This configuration allows you to safely utilize the [Analytics API](/docs/web/analytics/tracking) within your client components, enabling comprehensive event tracking and data collection.
---
url: /docs/web/analytics/overview
title: Overview
description: Get started with web analytics in TurboStarter.
---
TurboStarter comes with built-in analytics support for multiple providers as well as a unified API for tracking events. This API enables you to easily and consistently track user behavior and app usage across your SaaS application.
## Providers
The starter implements multiple providers for managing analytics. To learn more about each provider and how to configure them, see their respective sections:
All configuration and setup is built-in with a unified API, allowing you to switch between providers by simply changing the exports. You can even introduce your own provider without breaking any tracking-related logic.
In the following sections, we'll cover how to set up each provider and how to track events in your application.
---
url: /docs/web/analytics/tracking
title: Tracking events
description: Learn how to track events in your TurboStarter web app.
---
The implementation strategy for each analytics provider varies depending on whether it's designed for client-side or server-side use. We'll explore both approaches, as they are crucial for ensuring accurate and comprehensive analytics data in your web SaaS application.
## Client-side tracking
The client strategy for tracking events, which every provider must implement, is straightforward:
```ts
export type AllowedPropertyValues = string | number | boolean;
type TrackFunction = (
  event: string,
  data?: Record<string, AllowedPropertyValues>,
) => void;
export interface AnalyticsProviderClientStrategy {
Provider: ({ children }: { children: React.ReactNode }) => React.ReactNode;
track: TrackFunction;
}
```
You don't need to worry much about this implementation, as all the providers are already configured for you. However, it's useful to be aware of this structure if you plan to add your own custom provider.
As shown above, each provider must supply two key elements:
1. `Provider` - a component that [wraps your app](/docs/web/analytics/configuration#client-side-context).
2. `track` - a function responsible for sending event data to the provider.
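For illustration, here's a minimal in-memory strategy - a hypothetical sketch (the React `Provider` component is omitted for brevity) that satisfies the `track` part of the contract, handy as a starting point for a custom provider or for asserting events in tests:

```typescript
// Hypothetical in-memory strategy: `track` just records events in an array
// instead of sending them to a real analytics backend.
type AllowedPropertyValues = string | number | boolean;
type EventData = Record<string, AllowedPropertyValues>;

const captured: { event: string; data?: EventData }[] = [];

const inMemoryStrategy = {
  track: (event: string, data?: EventData): void => {
    captured.push({ event, data });
  },
};

// Events can now be asserted against `captured`.
inMemoryStrategy.track("button.click", { label: "Buy now" });
```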
To track an event, you simply need to invoke the `track` method, passing the event name and an optional data object:
```tsx
import { track } from "@workspace/analytics-web";

export const MyComponent = () => {
  return (
    <button onClick={() => track("button.click", { label: "Buy now" })}>
      Buy now
    </button>
  );
};
```
## Identifying users
Linking events to specific users enables you to build a full picture of how they're using your product across different sessions, devices, and platforms.
For identification purposes, the client strategy can also expose `identify` and `reset` methods. They are optional and only needed if you want to identify users in your app and associate their actions with a specific user ID.
Not all analytics providers support user identification (for example, [Vercel Analytics](/docs/web/analytics/configuration#vercel) and [Plausible](/docs/web/analytics/configuration#plausible)), so make sure your chosen provider exposes these methods before using them.
```ts
type IdentifyFunction = (
  userId: string,
  traits?: Record<string, AllowedPropertyValues>,
) => void;

export interface AnalyticsProviderClientStrategy {
  identify?: IdentifyFunction;
  reset?: () => void;
}
```
To identify users on the client, call the `identify` function, passing the user's ID and an optional traits object:
```tsx
import { identify } from "@workspace/analytics-web";
identify("user-123", { name: "John Doe" });
```
This will associate all future events with the user's ID, allowing you to track user behavior and gain valuable insights into your application's usage patterns.
The `identify` method is configured out-of-the-box to react to changes in the user's authentication state.
When the user is authenticated, the `identify` method will be called with the user's ID and traits. When the user is logged out, the `reset` method will be called to clear the existing user identification.
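Conceptually, that wiring boils down to something like this sketch (the stubbed `identify`/`reset` below stand in for the real provider methods):

```typescript
// Sketch of reacting to auth state: identify on sign-in, reset on sign-out.
type User = { id: string; name: string } | null;

const calls: string[] = [];

// Stubs standing in for the real client-strategy methods.
const identify = (userId: string, _traits?: Record<string, string>) => {
  calls.push(`identify:${userId}`);
};
const reset = () => {
  calls.push("reset");
};

function onAuthStateChange(user: User) {
  if (user) identify(user.id, { name: user.name });
  else reset();
}

onAuthStateChange({ id: "user-123", name: "John Doe" }); // signed in
onAuthStateChange(null); // signed out
```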
## Server-side tracking
The server strategy for tracking events, which every provider must implement, is even simpler:
```ts
export interface AnalyticsProviderServerStrategy {
track: TrackFunction;
}
```
You don't need to worry much about this implementation, as all the providers are already configured for you. However, it's useful to be aware of this structure if you plan to add your own custom provider.
This server-side strategy allows you to track events outside of the browser environment, which is particularly useful for scenarios involving server actions or React Server Components.
To track an event on the server side, simply call the `track` method, providing the event name and an optional data object:
```tsx
// [!code word:server]
import { track } from "@workspace/analytics-web/server";
track("button.click", {
country: "US",
region: "California",
});
```
Make sure to use the correct import for the `track` function. We're using the same name for both client and server tracking, but they are different functions. For server-side, just add `/server` to the import path (`@workspace/analytics-web/server`).
```tsx
import { track } from "@workspace/analytics-web";
```
```tsx
// [!code word:server]
import { track } from "@workspace/analytics-web/server";
```
On the server, there are no dedicated identification helpers like `identify` or `reset`. Most providers that support user-level tracking expect you to pass an identifier or traits directly within the `track` call (for example, as a `userId` or similar property), so make sure to check your specific provider's documentation for the recommended way to include user information.
Congratulations! You've now mastered event tracking in your TurboStarter web app. With this knowledge, you're well-equipped to analyze user behaviors and gain valuable insights into your application's usage patterns. Happy analyzing! 📊
---
url: /docs/web/api/client
title: Using API client
description: How to use API client to interact with the API.
---
In Next.js, you can access the API client in two ways:
* **server-side**: in server components and API routes
* **client-side**: in client components
When you create a new page and want to fetch data, you have flexibility in where to make the API calls. Server Components are great for initial data loading since the fetching happens during server-side rendering, eliminating an extra client-server round trip. The data is then efficiently streamed to the client.
By default in Next.js, every component is a Server Component. You can opt into client-side rendering by adding the `use client` directive at the top of a component file. Client Components are useful when you need interactive features or want to fetch data based on user interactions. While they're initially server-rendered, they're also hydrated and rendered on the client, allowing you to make API calls directly from the browser.
Let's explore both approaches to understand their differences and use cases.
## Server-side
We're creating a server-side API client inside the `apps/web/src/lib/api/server.ts` file. The client automatically handles passing authentication headers from the user's session to secure API endpoints.
It's pre-configured with all the necessary setup, so you can start using it right away without any additional configuration.
Then, there is nothing simpler than calling the API from your server component:
```tsx title="page.tsx"
import { api } from "~/lib/api/server";

export default async function MyServerComponent() {
  const response = await api.posts.$get();
  const posts = await response.json();

  /* do something with the data... */

  return <pre>{JSON.stringify(posts)}</pre>;
}
```
## Client-side
We're creating a separate client-side API client in the `apps/web/src/lib/api/client.tsx` file. It's a simple wrapper around [@tanstack/react-query](https://tanstack.com/query/latest/docs/framework/react/overview) that fetches or mutates data from the API.
It also requires wrapping your app in a `QueryClientProvider` component to provide the query client to the rest of the app:
```tsx title="layout.tsx"
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";

const queryClient = new QueryClient();

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        <QueryClientProvider client={queryClient}>{children}</QueryClientProvider>
      </body>
    </html>
  );
}
```
Of course, it's all already configured for you, so you just need to start using `api` in your client components:
```tsx title="page.tsx"
"use client";

import { useQuery } from "@tanstack/react-query";

import { api } from "~/lib/api/client";

export default function MyClientComponent() {
  const { data: posts, isLoading } = useQuery({
    queryKey: ["posts"],
    queryFn: async () => {
      const response = await api.posts.$get();

      if (!response.ok) {
        throw new Error("Failed to fetch posts!");
      }

      return response.json();
    },
  });

  if (isLoading) {
    return <div>Loading...</div>;
  }

  /* do something with the data... */

  return <pre>{JSON.stringify(posts)}</pre>;
}
```
Inside `apps/web/src/lib/api/utils.ts`, we call a function to get the base URL of your API. Make sure it resolves correctly (especially in production) and that your API endpoint matches it:
```tsx title="utils.ts"
export const getBaseUrl = () => {
if (typeof window !== "undefined") return window.location.origin;
if (env.NEXT_PUBLIC_URL) return env.NEXT_PUBLIC_URL;
if (env.VERCEL_URL) return `https://${env.VERCEL_URL}`;
return `http://localhost:${process.env.PORT ?? 3000}`;
};
```
As you can see, it relies mostly on [environment variables](/docs/web/configuration/environment-variables) to resolve the URL, so there shouldn't be any issues with it, but just in case, it's good to know where to find it 😉
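To make the fallback chain explicit, here's a dependency-free sketch of the same resolution order, with the environment passed in as plain values (names are illustrative):

```typescript
// Illustrative re-statement of getBaseUrl's fallback chain, written so the
// inputs are explicit and easy to test.
interface BaseUrlSources {
  windowOrigin?: string; // window.location.origin (browser only)
  publicUrl?: string; // NEXT_PUBLIC_URL
  vercelUrl?: string; // VERCEL_URL (no protocol)
  port?: string | number; // PORT
}

function resolveBaseUrl(src: BaseUrlSources): string {
  if (src.windowOrigin) return src.windowOrigin;
  if (src.publicUrl) return src.publicUrl;
  if (src.vercelUrl) return `https://${src.vercelUrl}`;
  return `http://localhost:${src.port ?? 3000}`;
}
```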
## Handling responses
As you can see in the examples above, the [Hono RPC](https://hono.dev/docs/guides/rpc) client returns a plain `Response` object, which you can use to get the data or handle errors. However, implementing this handling in every query or mutation can be tedious and will introduce unnecessary boilerplate in your codebase.
That's why we've developed the `handle` function that unwraps the response for you, handles errors, and returns the data in a consistent format. You can safely use it with any procedure from the API client:
```tsx
// [!code word:handle]
import { handle } from "@workspace/api/utils";
import { api } from "~/lib/api/server";

export default async function MyServerComponent() {
  const posts = await handle(api.posts.$get)();

  /* do something with the data... */

  return <pre>{JSON.stringify(posts)}</pre>;
}
```
```tsx
// [!code word:handle]
"use client";

import { useQuery } from "@tanstack/react-query";

import { handle } from "@workspace/api/utils";
import { api } from "~/lib/api/client";

export default function MyClientComponent() {
  const { data: posts, isLoading } = useQuery({
    queryKey: ["posts"],
    queryFn: handle(api.posts.$get),
  });

  if (isLoading) {
    return <div>Loading...</div>;
  }

  /* do something with the data... */

  return <pre>{JSON.stringify(posts)}</pre>;
}
```
With this approach, you can focus on the business logic instead of repeatedly writing code to handle API responses, making your codebase more readable and maintainable.
The same error handling and response unwrapping benefits apply whether you're building web, mobile, or extension interfaces - allowing you to keep your data fetching logic consistent across all platforms.
---
url: /docs/web/api/internationalization
title: Internationalization
description: Learn how to localize and translate your API.
---
Since TurboStarter provides fully featured [internationalization](/docs/web/internationalization/overview) out of the box, you can easily localize not only the frontend but also the API layer. This can be useful when you need to fetch localized data from the database or send emails in different languages.
Let's explore the possibilities of this feature.
## Request-based localization
To get the locale for the current request, you can leverage the `localize` middleware:
```ts title="email/router.ts"
const emailRouter = new Hono().get("/", localize, (c) => {
const locale = c.var.locale;
// do something with the locale
});
```
Inside it, we're setting the `locale` variable in the current request context, making it available to the procedure.
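Under the hood, such a middleware derives the locale from the incoming request. A framework-free sketch of the idea (using the `Accept-Language` header, with a fallback) might look like:

```typescript
// Sketch: pick the first language tag from an Accept-Language header,
// e.g. "pl-PL,pl;q=0.9,en;q=0.8" -> "pl-PL". Falls back when absent.
function detectLocale(acceptLanguage: string | undefined, fallback = "en"): string {
  if (!acceptLanguage) return fallback;
  const first = acceptLanguage.split(",")[0]?.trim().split(";")[0];
  return first && first.length > 0 ? first : fallback;
}
```

The real middleware stores the resolved value in the request context (`c.var.locale`), so every downstream handler can read it.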
## Error handling
When handling errors in an internationalized API, you'll want to ensure error messages are properly translated for your users. TurboStarter provides built-in support for localizing error messages using error codes and a special `onError` hook.
That's why it's recommended to use error codes instead of direct messages in your throw statements:
```ts
throw new HttpException(HttpStatusCode.UNAUTHORIZED, {
code: "auth:error.unauthorized",
/* 👇 optional */
message: "You are not authorized to access this resource.",
});
```
The error code will then be used to retrieve the localized message, and the returned response from your API will look like this:
```json
{
"code": "auth:error.unauthorized",
/* 👇 localized based on request's locale */
"message": "You are not authorized to access this resource.",
"path": "/api/auth/login",
"status": 401,
"timestamp": "2024-01-01T00:00:00.000Z"
}
```
Then, you can either use the returned code to get the localized message in your frontend, or simply use the returned message as is.
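On the frontend, a hypothetical helper could prefer a local translation keyed by the error code and fall back to the server-provided message:

```typescript
// Hypothetical frontend helper: translate known error codes locally,
// otherwise use the (already localized) message from the API response.
const translations: Record<string, string> = {
  "auth:error.unauthorized": "Please sign in to continue.",
};

function localizeError(body: { code: string; message: string }): string {
  return translations[body.code] ?? body.message;
}
```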
---
url: /docs/web/api/mutations
title: Mutations
description: Learn how to mutate data on the server.
---
As we saw in [adding new endpoint](/docs/web/api/new-endpoint#maybe-mutation), mutations allow us to modify data on the server, like creating, updating, or deleting resources. They can be defined similarly to queries using our API client.
Just like queries, mutations can be executed either server-side or client-side depending on your needs. Let's explore both approaches.
## Server actions
Next.js provides [server actions](https://nextjs.org/docs/app/building-your-application/data-fetching/server-actions-and-mutations) as a powerful way to handle mutations directly on the server. They're particularly well-suited for form submissions and other data modifications.
Using our `api` client with server actions is straightforward - you simply call the API function on the server.
Here's an example of how you can define an action to create a new post:
```tsx
// [!code word:handle]
"use server";
import { revalidatePath } from "next/cache";
import { handle } from "@workspace/api/utils";
import { api } from "~/lib/api/server";
export async function createPost(post: PostInput) {
try {
await handle(api.posts.$post)(post);
} catch (error) {
onError(error);
}
revalidatePath("/posts");
}
```
```tsx
"use server";
import { revalidatePath } from "next/cache";
import { api } from "~/lib/api/server";
export async function createPost(post: PostInput) {
const response = await api.posts.$post(post);
if (!response.ok) {
return { error: "Failed to create post" };
}
revalidatePath("/posts");
}
```
In the above example we're also using `revalidatePath` to revalidate the path `/posts` to fetch the updated list of posts.
## useMutation hook
On the other hand, if you want to perform a mutation on the client-side, you can use the `useMutation` hook that comes straight from the integration with [React Query](https://tanstack.com/query).
```tsx
// [!code word:handle]
import { useMutation, useQueryClient } from "@tanstack/react-query";

import { handle } from "@workspace/api/utils";
import { api } from "~/lib/api/react";

export function CreatePost() {
  const queryClient = useQueryClient();

  const { mutate } = useMutation({
    mutationFn: handle(api.posts.$post),
    onSuccess: () => {
      toast.success("Post created successfully!");
      queryClient.invalidateQueries({ queryKey: ["posts"] });
    },
  });

  return <button onClick={() => mutate(/* your post data */)}>Create post</button>;
}
```
```tsx
import { useMutation, useQueryClient } from "@tanstack/react-query";

import { api } from "~/lib/api/react";

export function CreatePost() {
  const queryClient = useQueryClient();

  const { mutate } = useMutation({
    mutationFn: async (post: PostInput) => {
      const response = await api.posts.$post(post);

      if (!response.ok) {
        throw new Error("Failed to create post!");
      }
    },
    onSuccess: () => {
      toast.success("Post created successfully!");
      queryClient.invalidateQueries({ queryKey: ["posts"] });
    },
  });

  return <button onClick={() => mutate(/* your post data */)}>Create post</button>;
}
```
---
url: /docs/web/api/new-endpoint
title: Adding new endpoint
description: How to add new endpoint to the API.
---
To define a new API endpoint, you can either extend an existing entity (e.g. add a new customer route) or create a new, separate module.
## Create new module
To create a new module, add a new folder in the `modules` folder, for example `modules/posts`.
Then, create a router declaration for this module. We follow a convention where the filename describes its purpose, so create a file named `router.ts` in the `modules/posts` folder.
```typescript title="modules/posts/router.ts"
import { Hono } from "hono";

import { validate } from "../../middleware";
import { getAllPosts } from "./queries";
import { filtersSchema } from "./schema"; // your zod schema for the query filters

export const postsRouter = new Hono().get(
  "/",
  validate("query", filtersSchema),
  (c) => getAllPosts(c.req.valid("query")),
);
```
As you can see, we're implementing a `.get` method without any additional middleware for the router. This is a simple way to define a new GET endpoint.
We're also using a [zod](https://zod.dev/) validator to ensure that the input passed to the endpoint is correct.
### Maybe mutation?
The same way you can define a mutation for the new entity, just by changing the `get` to `post`:
```ts title="modules/posts/router.ts"
// [!code word:.post]
export const postsRouter = new Hono().post(
"/",
enforceAuth,
validate("json", postSchema),
(c) => createPost(c.req.valid("json")),
);
```
Hono supports all [HTTP methods](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods), so you can define a new endpoint for any method you need (e.g. `put`, `delete`, etc.).
The `enforceAuth` middleware ensures that only authenticated users can access the endpoint, while the zod validator checks if the input data matches the expected schema. This combination provides both authentication and data validation in a single, clean setup.
[Read more about protected routes](/docs/web/api/protected-routes).
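Conceptually, middleware like `enforceAuth` either short-circuits with an error response or passes control to the next handler. A framework-free sketch of that composition (names and response strings are illustrative):

```typescript
// Sketch of middleware composition: enforceAuth guards a handler.
interface Ctx {
  user?: string; // set by the auth layer when the request is authenticated
}
type Handler = (c: Ctx) => string;
type Middleware = (c: Ctx, next: Handler) => string;

// Short-circuit unauthenticated requests, otherwise continue the chain.
const enforceAuth: Middleware = (c, next) =>
  c.user ? next(c) : "401 Unauthorized";

const compose =
  (mw: Middleware, handler: Handler): Handler =>
  (c) => mw(c, handler);

const createPostHandler = compose(enforceAuth, () => "201 Created");
```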
## Implement logic
Then, create a controller for this module. This is where the logic lives: for example, for the `GET /` endpoint we need a `getAllPosts` function that fetches posts from the database.
```typescript title="modules/posts/queries.ts"
import { db } from "@workspace/db/server";
import { posts } from "@workspace/db/schema";
export const getAllPosts = (filters: Filters) => {
  return db.select().from(posts).where(/* your filter logic here */);
};
```
## Register router
To make the module and its endpoints available in the API you need to register a router for this module in the `index.ts` file:
```ts title="index.ts"
import { postsRouter } from "./modules/posts/router";
const appRouter = new Hono()
.basePath("/api")
.route("/posts", postsRouter)
/* other routers from your app logic */
.onError(onError);
type AppRouter = typeof appRouter;
export type { AppRouter };
export { appRouter };
```
The `basePath` method sets a prefix for all routes in this router. While optional, using it helps organize API endpoints. This modular approach makes the API structure clearer and easier to maintain.
That's it! You've just created a new API endpoint - it's now available at `/api/posts` 🎉
By exporting the `AppRouter` type you get fully type-safe RPC calls on the
client. Without generating any extra code, the frontend stays fully
type-safe: you can't pass incorrect data to a procedure, and you can consume
returned types without defining them by hand.
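On the client, Hono's RPC helper infers both input and output types from `AppRouter` - a minimal sketch:

```ts
import { hc } from "hono/client";
import type { AppRouter } from "@workspace/api";

const client = hc<AppRouter>("http://localhost:3000");

// Fully typed: the path, query params, and response shape
// all come from the router definition.
const res = await client.api.posts.$get();
const posts = await res.json();
```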
---
url: /docs/web/api/overview
title: Overview
description: Get started with the API.
---
TurboStarter is designed to be a scalable and production-ready full-stack starter kit. One of its core features is a dedicated and extensible API layer. To enable this in a type-safe way, we chose [Hono](https://hono.dev) as the API server and client library.
Hono is a small, simple, and ultrafast web framework that gives you a way to
define your API endpoints with full type safety. It provides built-in
middleware for common needs like validation, caching, and CORS.
It also includes a [RPC client](https://hono.dev/docs/guides/rpc) for making
type-safe function calls from the frontend. Being edge-first, it's optimized
for serverless environments and offers excellent performance.
All API endpoints and their resolvers live in the `packages/api` package. Inside, the `modules` folder contains the API's feature modules. Each module has its own directory and exports its resolvers.
For each module, we create a separate Hono router and then aggregate all sub-routers into one main router in the `index.ts` file.
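Assuming a `posts` module like the one above, the package layout might look like this:

```
packages/api/
└── src/
    ├── index.ts           # aggregates all sub-routers
    └── modules/
        └── posts/
            ├── router.ts  # Hono router for the module
            └── queries.ts # resolvers / database queries
```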
By default, the API is integrated with the web app and exposed as a [Next.js route handler](https://nextjs.org/docs/app/getting-started/route-handlers):
```ts title="apps/web/src/app/api/[...route]/route.ts"
import { handle } from "hono/vercel";
import { appRouter } from "@workspace/api";
const handler = handle(appRouter);
export {
handler as GET,
handler as POST,
handler as OPTIONS,
handler as PUT,
handler as PATCH,
handler as DELETE,
handler as HEAD,
};
```
As the API is a separate service, it **must** be deployed to use it in other apps (e.g. [mobile app](/docs/mobile/api/overview) or [browser extension](/docs/extension/api/overview)), even if you don't need the web app itself.
By default, it's hosted together with the web app, so you don't need to worry about it separately. However, you can also [deploy it as a standalone service](/docs/web/deployment/api).
## Observability
To give you some visibility into how the API is performing in production, and to track its usage and performance metrics, we integrated a basic status monitor, which is available at the [/api/status](http://localhost:3000/api/status) route.

You can use it to check if the API is running and to get basic metrics like:
* uptime
* response time
* recent errors
* CPU and memory usage
* event loop lag
* server info
* route analytics
* p50, p95, and p99 latency
Feel free to extend it with your own metrics and monitoring tools.
To learn more about the API, check the following sections:
---
url: /docs/web/api/protected-routes
title: Protected routes
description: Learn how to protect your API routes.
---
Hono has built-in support for [middlewares](https://hono.dev/docs/guides/middleware), which are functions that can be used to modify the context or execute code before or after a route handler is executed.
That's how we can secure our API endpoints from unauthorized access. Below are some examples of how you can leverage middlewares to protect your API routes.
## Authenticated access
After validating the user's authentication status, we store their data in the context using [Hono's built-in context](https://hono.dev/docs/api/context). This allows us to access the user's information in subsequent middleware and procedures without having to re-validate the session.
Here's an example of middleware that validates whether the user is currently logged in and stores their data in the context:
```ts title="middleware.ts"
export const enforceAuth = createMiddleware<{
Variables: {
user: User;
};
}>(async (c, next) => {
const session = await auth.api.getSession({ headers: c.req.raw.headers });
const user = session?.user ?? null;
if (!user) {
throw new HTTPException(HttpStatusCode.UNAUTHORIZED, {
message: "You need to be logged in to access this feature!",
});
}
c.set("user", user);
await next();
});
```
Then we can use our defined middleware to protect endpoints by adding it before the route handler:
```ts title="billing/router.ts"
export const billingRouter = new Hono().get(
"/customer",
enforceAuth,
async (c) => c.json(await getCustomerByUserId(c.var.user.id)),
);
```
## Role-based access
In most cases, you will want to restrict access to certain endpoints based on the user's role.
You can achieve this by creating a middleware that will check if the user has the required role and then pass the execution to the next middleware or procedure.
E.g. for admin endpoints we want to ensure that the user has the `admin` role:
```ts title="middleware.ts"
export const enforceAdmin = createMiddleware<{
Variables: {
user: User;
};
}>(async (c, next) => {
const user = c.var.user;
if (!hasAdminPermission(user)) {
throw new HTTPException(HttpStatusCode.FORBIDDEN, {
message: "You need to be an admin to access this feature!",
});
}
await next();
});
```
Then we can use our defined middleware to protect endpoints by adding it before the route handler:
```ts title="admin/router.ts"
export const adminRouter = new Hono().get(
"/users",
enforceAuth,
enforceAdmin,
(c) => c.json(...),
);
```
## Feature-based access
When developing your API you may want to restrict access to certain features based on the user's current subscription plan (e.g. only users with the "Pro" plan can access teams).
You can achieve this by creating a middleware that will check if the user has access to the feature and then pass the execution to the next middleware or procedure:
```ts title="middleware.ts"
export const enforceFeatureAvailable = (feature: Feature) =>
createMiddleware<{
Variables: {
user: User;
};
}>(async (c, next) => {
const { data: customer } = await getCustomerById(c.var.user.id);
const hasFeature = isFeatureAvailable(customer, feature);
if (!hasFeature) {
throw new HTTPException(HttpStatusCode.PAYMENT_REQUIRED, {
message: "Upgrade your plan to access this feature!",
});
}
await next();
});
```
Use it within your procedure the same way as we did with `enforceAuth` middleware:
```ts title="teams/router.ts"
export const teamsRouter = new Hono().get(
"/",
enforceAuth,
enforceFeatureAvailable(FEATURES.PRO.TEAMS),
(c) => c.json(...),
);
```
These are just examples of what you can achieve with Hono middlewares. You can use them to add any kind of logic to your API (e.g. [logging](https://hono.dev/docs/middleware/builtin/logger), [caching](https://hono.dev/docs/middleware/builtin/cache), etc.)
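For instance, Hono's built-in logger can be attached globally with a single `use` call:

```ts
import { Hono } from "hono";
import { logger } from "hono/logger";

const app = new Hono()
  // Logs method, path, status, and elapsed time for every request
  .use(logger());
```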
---
url: /docs/web/auth/2fa
title: Two-Factor Authentication (2FA)
description: Add an extra layer of security with two-factor authentication.
---
TurboStarter uses [Better Auth's 2FA plugin](https://www.better-auth.com/docs/plugins/2fa) to provide multi-factor authentication (MFA) capabilities. Two-factor authentication adds an extra layer of security by requiring users to provide a second form of verification alongside their password.
## Available methods
TurboStarter supports multiple 2FA verification methods through Better Auth:
* **TOTP (Time-based One-Time Password)** - codes generated by authenticator apps
* **OTP (One-Time Password)** - codes sent via email or SMS
* **Backup codes** - single-use recovery codes for account recovery
You can use any TOTP-compatible authenticator app, such as:
* [Google Authenticator](https://support.google.com/accounts/answer/1066447)
* [Authy](https://authy.com/)
* [Microsoft Authenticator](https://www.microsoft.com/en-us/security/mobile-authenticator-app)
* [1Password](https://1password.com/features/authenticator/)
* [Bitwarden](https://bitwarden.com/help/authenticator-keys/)
## Enabling 2FA
### Enable in settings
Users enable two-factor authentication in their account security settings.

### Setup authenticator
A QR code is displayed for users to scan with their authenticator app.

### Verify setup
Users enter a verification code from their authenticator to confirm setup.
### Backup codes
Users receive single-use backup codes for account recovery.

Recovery codes are essential for account recovery if users lose access to
their authenticator device. Make sure to educate users about safely storing
their backup codes.
## Using 2FA
### Sign in normally
Users enter their email and password or other methods as usual.
### 2FA prompt
After successful password verification, users are prompted for their 2FA code.

### Enter verification code
Users input the 6-digit code from their authenticator app.
### Access granted
Upon successful verification, users gain access to their account.
### Trusted devices
Users can mark devices as trusted during 2FA verification. Trusted devices won't require 2FA verification for 60 days, providing a balance between security and convenience.
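On the client, trusting a device is a flag on the verification call - a sketch assuming Better Auth's 2FA client API (the import path is hypothetical):

```ts
import { authClient } from "@workspace/auth/client"; // hypothetical path

// Verify the TOTP code and remember this device for 60 days
await authClient.twoFactor.verifyTotp({
  code: "123456",
  trustDevice: true,
});
```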
## Configuration
2FA is configured through Better Auth's plugin system. The plugin handles:
* Secure secret generation and storage
* QR code generation for authenticator setup
* TOTP code validation
* Backup code generation and management
* Trusted device management
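Enabling the plugin comes down to adding it to the server configuration - a minimal sketch (the `issuer` value is an assumption):

```ts
import { betterAuth } from "better-auth";
import { twoFactor } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    twoFactor({
      issuer: "TurboStarter", // label shown in authenticator apps
    }),
  ],
  /* other configuration options */
});
```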
For detailed implementation instructions, refer to the [Better Auth 2FA documentation](https://www.better-auth.com/docs/plugins/2fa).
---
url: /docs/web/auth/configuration
title: Configuration
description: Configure authentication for your application.
---
TurboStarter supports multiple different authentication methods:
* **Password** - the traditional email/password method
* **Magic Link** - passwordless email link authentication
* **OTP** - one-time passwords sent to email or phone number
* **Passkey** - passkeys as an alternative to passwords
* **Anonymous** - guest mode for unauthenticated users
* **OAuth** - OAuth providers; [Apple](https://www.better-auth.com/docs/authentication/apple), [Google](https://www.better-auth.com/docs/authentication/google), and [GitHub](https://www.better-auth.com/docs/authentication/github) are set up by default
* [Google One Tap](https://developers.google.com/identity/gsi/web/guides) - native, one-click prompt for Google authentication
All authentication methods are enabled by default, but you can easily customize them to your needs. You can enable or disable any method, or configure it separately according to your requirements.
Remember that you can mix and match these methods or add new ones - for
example, you can have both password and magic link/OTP authentication enabled
at the same time, giving your users more flexibility in how they authenticate.
Authentication configuration can be customized through a simple configuration file. The following sections explain the available options and how to configure each authentication method based on your requirements.
## API
The **server-side** authentication configuration lives in `packages/auth/src/server.ts`. It configures the [Better Auth](https://better-auth.com) package with the desired providers and settings:
```ts title="server.ts"
export const auth = betterAuth({
emailAndPassword: {
enabled: true,
requireEmailVerification: true,
sendResetPassword: () => {},
},
emailVerification: {
sendOnSignUp: true,
autoSignInAfterVerification: true,
sendVerificationEmail: () => {},
},
database: drizzleAdapter(db, {
provider: "pg",
schema,
}),
plugins: [
magicLink({
sendMagicLink: () => {},
}),
passkey(),
anonymous(),
expo(),
nextCookies(),
],
socialProviders: {
[SocialProvider.APPLE]: {
clientId: env.APPLE_CLIENT_ID,
clientSecret: env.APPLE_CLIENT_SECRET,
appBundleIdentifier: env.APPLE_APP_BUNDLE_IDENTIFIER,
},
[SocialProvider.GOOGLE]: {
clientId: env.GOOGLE_CLIENT_ID,
clientSecret: env.GOOGLE_CLIENT_SECRET,
},
[SocialProvider.GITHUB]: {
clientId: env.GITHUB_CLIENT_ID,
clientSecret: env.GITHUB_CLIENT_SECRET,
},
},
/* other configuration options */
});
```
The configuration is validated against Better Auth's schema at runtime, providing immediate feedback if any settings are incorrect or insecure. This validation ensures your authentication setup remains robust and properly configured.
All authentication routes and handlers are centralized within the [Hono API](/docs/web/api/overview), giving you a single source of truth and complete control over the authentication flow. This centralization makes it easier to maintain, debug, and customize the authentication process as needed.
[Read more about it in the official documentation](https://www.better-auth.com/docs/basic-usage).
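Centralizing the routes typically means mounting Better Auth's handler on the Hono router - a sketch of that wiring (the import path is an assumption):

```ts
import { Hono } from "hono";
import { auth } from "@workspace/auth/server"; // hypothetical path

// Better Auth handles all GET/POST requests under this router
const authRouter = new Hono().on(["GET", "POST"], "/*", (c) =>
  auth.handler(c.req.raw),
);
```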
## UI
We have separate configuration that determines what is displayed to your users in the **UI**. It's set at `apps/web/config/auth.ts`.
```ts title="apps/web/config/auth.ts"
import { authConfigSchema, type AuthConfig } from "@workspace/auth";
export const authConfig = authConfigSchema.parse({
providers: {
password: true,
magicLink: true,
emailOtp: false,
passkey: false,
anonymous: true,
oAuth: ["apple", "google"],
},
}) satisfies AuthConfig;
```
The configuration is also validated using the Zod schema, so if something is off, you'll see the errors.
**Avoid editing the config file directly.** Prefer environment variables to override the defaults.
For example, if you want to switch from password to magic link, you'd change the following environment variables:
```dotenv title=".env.local"
NEXT_PUBLIC_AUTH_PASSWORD=false
NEXT_PUBLIC_AUTH_MAGIC_LINK=true
```
To display third-party providers in the UI, set the `oAuth` array to include the providers you want to show. The default is Apple, Google, and GitHub:
```tsx title="apps/web/config/auth.ts"
providers: {
...
oAuth: ["apple", "google", "github"],
...
},
```
## Third-party providers
To enable third-party authentication providers, you'll need to:
1. Create an OAuth application in the provider’s developer console ([Apple](https://developer.apple.com/account/), [Google Cloud Console](https://console.cloud.google.com/), [GitHub](https://github.com/settings/developers), or another supported provider).
2. Set the matching environment variables in your TurboStarter app.
Each provider needs its own credentials and env vars. See the [Better Auth OAuth docs](https://better-auth.com/docs/concepts/oauth) for step-by-step setup per provider.
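For example, the Google provider shown in the server config above reads these variables:

```dotenv title="apps/web/.env.local"
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
```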
Make sure to set both development and production environment variables
appropriately. Your OAuth provider may require different callback URLs for
each environment.
---
url: /docs/web/auth/flow
title: User flow
description: Discover the authentication flow in TurboStarter.
---
TurboStarter ships with a fully functional authentication system. Most views and components are preconfigured and easy to customize.
Here you will find a quick walkthrough of the authentication flow.
## Sign up
The sign-up page is where users can create an account. They need to provide their email address and password.

Once successful, users are asked to confirm their email address. This is enabled by default - and due to security reasons, it's not possible to disable it.
Make sure to configure the [email provider](/docs/web/emails/configuration) together with the [auth hooks](/docs/web/emails/sending#authentication-emails) to be able to send emails from your app.

## Sign in
The sign-in page lets users log in with email and password, magic link (if enabled), one-time password (if enabled), or third-party providers.

## Sign out
The sign out button is located in the user menu.

## Forgot password
The forgot-password page lets users request a reset. They enter their email and follow the instructions sent to them.

The reset-password page is where users land from the password-reset email. They set a new password and confirm it.

## Two-factor authentication
Two-factor authentication is a security feature that requires users to provide a code sent to their email or phone number in addition to their password when logging in.

---
url: /docs/web/auth/oauth
title: OAuth
description: Get started with social authentication.
---
Better Auth supports over **30** (!) different [OAuth providers](https://www.better-auth.com/docs/concepts/oauth). They can be easily configured and enabled in the kit without any additional configuration needed.
TurboStarter provides all the configuration required to handle OAuth provider responses in your app:
* redirects
* middleware
* confirmation API routes
You just need to configure one of the below providers on their side and set correct credentials as environment variables in your TurboStarter app.

Third-party providers need to be configured, managed, and enabled entirely on the provider's side. TurboStarter only needs the correct credentials set as environment variables in your app and passed to the [authentication API configuration](/docs/web/auth/configuration#api).
To enable OAuth providers in your TurboStarter app, you need to:
1. Set up an OAuth application in the provider's developer console (like the [Apple Developer Portal](https://developer.apple.com/account/), [Google Cloud Console](https://console.cloud.google.com/), [GitHub Developer Settings](https://github.com/settings/developers), or any other provider you want to use)
2. Configure the provider's credentials as environment variables in your app. For example, for Google OAuth:
```dotenv title="apps/web/.env.local"
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
```
Then, pass it to the authentication configuration in `packages/auth/src/server.ts`:
```ts title="server.ts"
export const auth = betterAuth({
...
socialProviders: {
[SocialProvider.GOOGLE]: {
clientId: env.GOOGLE_CLIENT_ID,
clientSecret: env.GOOGLE_CLIENT_SECRET,
},
},
...
});
```
Better Auth provides a [generic OAuth plugin](https://www.better-auth.com/docs/plugins/generic-oauth) that allows you to add any OAuth provider to your app.
It supports both OAuth 2.0 and OpenID Connect (OIDC) flows, allowing you to easily add social login or custom OAuth authentication to your application.
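A sketch of registering a custom provider with that plugin (the provider details and env var names are placeholders):

```ts
import { betterAuth } from "better-auth";
import { genericOAuth } from "better-auth/plugins";

export const auth = betterAuth({
  plugins: [
    genericOAuth({
      config: [
        {
          providerId: "acme", // placeholder provider
          clientId: env.ACME_CLIENT_ID,
          clientSecret: env.ACME_CLIENT_SECRET,
          // OIDC discovery endpoint for the provider
          discoveryUrl:
            "https://auth.acme.com/.well-known/openid-configuration",
        },
      ],
    }),
  ],
});
```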
---
url: /docs/web/auth/overview
title: Overview
description: Get started with authentication.
---
TurboStarter uses [Better Auth](https://better-auth.com) to handle authentication. It's a secure, production-ready authentication solution that integrates seamlessly with many frameworks and provides enterprise-grade security out of the box.
One of the core principles of TurboStarter is to do things **as simple as possible**, and to make everything **as performant as possible**.
Better Auth provides an excellent developer experience with minimal configuration while keeping enterprise-grade security. Its framework-agnostic approach and focus on performance make it the perfect choice for TurboStarter.
Recently, Better Auth [announced](https://www.better-auth.com/blog/authjs-joins-better-auth) that [Auth.js (28k+ stars on GitHub)](https://authjs.dev/) has joined the project, making it even more powerful and flexible.

You can read more about Better Auth in the [official documentation](https://better-auth.com/docs).
TurboStarter supports multiple authentication methods:
* **Password** - the traditional email/password method
* **Magic Link** - passwordless email link authentication
* **OTP** - one-time passwords with automatic expiration
* **Passkey** - passkeys ([WebAuthn](https://developer.mozilla.org/en-US/docs/Web/API/Web_Authentication_API))
* **Anonymous** - allowing users to proceed anonymously
* **OAuth** - social providers ([Apple](https://www.better-auth.com/docs/authentication/apple), [Google](https://www.better-auth.com/docs/authentication/google), and [GitHub](https://www.better-auth.com/docs/authentication/github) preconfigured)
* [Google One Tap](https://developers.google.com/identity/gsi/web/guides) - native, one-click prompt for Google authentication
As well as common applications flows, with ready-to-use views and components:
* **Sign in** - sign in with email/password, magic link, one-time password, or OAuth providers
* **Sign up** - sign up with email/password or OAuth providers
* **Sign out** - end session by signing out
* **2FA** - two-factor authentication with TOTP, OTP, or recovery codes
* **Password recovery** - forgot and reset password
* **Email verification** - verify email address
You can **build your auth flow like LEGO bricks** - plug in only the parts you need and customize them as you wish.
---
url: /docs/web/background-tasks/overview
title: Overview
description: Learn about background tasks & cron jobs and how they can power your application.
---
Background tasks and cron jobs are long-running processes that execute outside of your main application flow, allowing you to handle time-intensive operations and scheduled workflows without blocking user interactions or hitting serverless function timeouts.
Background tasks are ideal for operations that take longer than typical serverless function timeouts (10-60 seconds), such as processing large files, sending batch emails, or making multiple API calls.
Cron jobs are perfect for recurring operations like daily reports, cleanup tasks, or periodic data synchronization.
## What are background tasks?
**Background tasks** are asynchronous processes that run separately from your main application thread. Instead of forcing users to wait for lengthy operations to complete, you can offload these tasks to run in the background while your application remains responsive.
**Cron jobs** are scheduled background tasks that run automatically at specific times or intervals. They're perfect for maintenance operations, reports, and recurring workflows that need to happen without user intervention.
Think of background tasks as your application's *worker threads* - they handle the heavy lifting while your main application stays fast and responsive for users.
## Why use background tasks?
Most serverless platforms have strict execution limits:
* **[Vercel (Hobby)](https://vercel.com/docs/functions/serverless-functions/runtimes#max-duration)**: 300 seconds
* **[Vercel (Pro)](https://vercel.com/docs/functions/serverless-functions/runtimes#max-duration)**: 800 seconds
* **[Vercel (Enterprise)](https://vercel.com/docs/functions/serverless-functions/runtimes#max-duration)**: 800 seconds
* **[AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/configuration-timeout.html)**: 900 seconds
* **[Netlify Functions](https://docs.netlify.com/functions/overview/#default-deployment-options)**: 30 seconds
Background tasks let you bypass these limitations entirely.
Users don't have to wait for long-running processes. They can continue using
your application while tasks complete in the background.
Cron jobs enable hands-off automation of recurring tasks like daily backups,
weekly reports, or monthly user engagement analysis - all running reliably
without manual intervention.
Background tasks can be automatically retried if they fail, ensuring your
critical processes eventually complete successfully.
Your main application servers stay available to handle user requests instead of being tied up with heavy processing tasks.
## Common use cases
Here are some typical scenarios where background tasks shine:
* **Video transcoding**: Converting uploaded videos to different formats or resolutions
* **Image optimization**: Batch processing user-uploaded images
* **Document parsing**: Extracting text from PDFs or generating thumbnails
* **Database migrations**: Moving or transforming large datasets
* **Report generation**: Creating complex analytics reports
* **Data synchronization**: Syncing data between different systems
* **Email campaigns**: Sending personalized emails to large user lists
* **Notification processing**: Delivering push notifications across multiple platforms
* **SMS campaigns**: Bulk SMS sending with rate limiting
* **Content generation**: Using AI models to generate text, images, or videos
* **Data analysis**: Running machine learning models on large datasets
* **Natural language processing**: Analyzing text content for insights
* **API synchronization**: Syncing data with external services
* **Webhook processing**: Handling incoming webhooks that trigger complex workflows
* **Social media automation**: Posting content across multiple platforms
* **Daily reports**: Generating and emailing daily analytics or performance reports
* **Database maintenance**: Cleaning up old records, optimizing indexes, or running backups
* **User engagement**: Sending weekly newsletters or monthly account summaries
* **System monitoring**: Health checks, performance monitoring, and alert notifications
* **Content management**: Auto-publishing scheduled content or archiving old posts
## When not to use background tasks?
Background tasks and cron jobs aren't always the right solution. Consider alternatives for:
* **Real-time operations**: Tasks that users need immediate results from
* **Simple, fast operations**: Tasks that complete in under 5-10 seconds
* **Database queries**: Standard CRUD operations that should remain synchronous
* **User authentication**: Login/logout processes should be immediate
Start with synchronous processing for simple tasks and manual processes for infrequent operations. Only move to background tasks when you hit timeout limitations or user experience issues, and only use cron jobs when you need reliable automation.
## Getting started
Ready to add background tasks to your TurboStarter application? Check out our [Trigger.dev integration guide](/docs/web/background-tasks/trigger) or [Upstash QStash integration guide](/docs/web/background-tasks/qstash) to learn how to implement background tasks using one of the most developer-friendly background job frameworks available.
---
url: /docs/web/background-tasks/qstash
title: Upstash QStash
description: Integrate Upstash QStash with your TurboStarter application for serverless-first background task processing.
---
[Upstash QStash](https://upstash.com/docs/qstash) is a serverless message queue and task scheduler designed specifically for serverless and edge environments. It uses HTTP endpoints instead of persistent connections, making it perfect for modern web applications.
QStash is built for the serverless world - no infrastructure to manage, automatic scaling, and pay-per-use pricing. It delivers messages to your HTTP endpoints with built-in retries, delays, and scheduling capabilities.
## Setup
Visit [Upstash Console](https://console.upstash.com) and create a free account. Create a new QStash project and note down your credentials.
Add your QStash credentials to your root environment variables:
```dotenv title=".env.local"
QSTASH_URL=https://qstash.upstash.io
QSTASH_TOKEN=your_qstash_token_here
QSTASH_CURRENT_SIGNING_KEY=your_current_signing_key_here
QSTASH_NEXT_SIGNING_KEY=your_next_signing_key_here
```
You can find these values in your Upstash Console under the QStash project settings.
For production, make sure to add these environment variables to your deployment platform.
## Install dependencies
Add the QStash SDK to your API package:
```bash
pnpm add --filter api @upstash/qstash
```
## Create the QStash client
Create a utility file to initialize the QStash client in your API package:
```ts title="packages/api/src/lib/qstash.ts"
import { Client } from "@upstash/qstash";
import { env } from "~/env";
export const qstashClient = new Client({
baseUrl: env.QSTASH_URL,
token: env.QSTASH_TOKEN,
});
```
## Create task handlers
QStash delivers messages to HTTP endpoints, so you'll create API routes to handle your background tasks.
Let's create task handlers for common operations:
```ts title="packages/api/src/modules/tasks/router.ts"
import { Hono } from "hono";
import { qstashVerifyMiddleware } from "../../middleware/qstash-verify";
import { dailyCleanupHandler } from "./handlers/daily-cleanup";
import { processUserDataHandler } from "./handlers/process-user-data";
export const tasksRouter = new Hono()
.basePath("/tasks")
// Apply QStash signature verification to all task routes
.use(qstashVerifyMiddleware)
.post("/process-user-data", processUserDataHandler)
.post("/daily-cleanup", dailyCleanupHandler);
```
```ts title="packages/api/src/modules/tasks/handlers/process-user-data.ts"
import type { Context } from "hono";
import * as z from "zod";
const ProcessUserDataSchema = z.object({
userId: z.string(),
operation: z.enum(["export", "analyze", "cleanup"]),
});
export async function processUserDataHandler(c: Context) {
try {
const payload = ProcessUserDataSchema.parse(await c.req.json());
const { userId, operation } = payload;
console.log("Starting user data processing", { userId, operation });
switch (operation) {
case "export":
// Simulate data export
await new Promise((resolve) => setTimeout(resolve, 2000));
console.log("User data exported successfully");
return c.json({
success: true,
result: "Data exported to CSV",
});
case "analyze":
// Simulate data analysis
await new Promise((resolve) => setTimeout(resolve, 5000));
console.log("User data analysis completed");
return c.json({
success: true,
result: { totalActions: 156, avgSessionTime: "4m 32s" },
});
case "cleanup":
// Simulate data cleanup
await new Promise((resolve) => setTimeout(resolve, 3000));
console.log("User data cleanup completed");
return c.json({
success: true,
result: "Removed 23 obsolete records",
});
default:
throw new Error(`Unknown operation: ${operation}`);
}
} catch (error) {
console.error("Task failed:", error);
return c.json({ error: "Task failed" }, 500);
}
}
```
```ts title="packages/api/src/modules/tasks/handlers/daily-cleanup.ts"
import type { Context } from "hono";
export async function dailyCleanupHandler(c: Context) {
try {
console.log("Starting daily cleanup");
// Cleanup old logs
await new Promise((resolve) => setTimeout(resolve, 5000));
console.log("Logs cleaned up");
// Cleanup temporary files
await new Promise((resolve) => setTimeout(resolve, 3000));
console.log("Temp files cleaned up");
// Generate daily reports
await new Promise((resolve) => setTimeout(resolve, 8000));
console.log("Reports generated");
return c.json({
success: true,
cleanupTime: new Date().toISOString(),
itemsProcessed: 1247,
});
} catch (error) {
console.error("Daily cleanup failed:", error);
return c.json({ error: "Daily cleanup failed" }, 500);
}
}
```
```ts title="packages/api/src/middleware/qstash-verify.ts"
import { Receiver } from "@upstash/qstash";
import { createMiddleware } from "hono/factory";
export const qstashVerifyMiddleware = createMiddleware(async (c, next) => {
const currentSigningKey = process.env.QSTASH_CURRENT_SIGNING_KEY;
const nextSigningKey = process.env.QSTASH_NEXT_SIGNING_KEY;
if (!currentSigningKey || !nextSigningKey) {
return c.json({ error: "QStash signing keys not configured" }, 500);
}
const signature = c.req.header("upstash-signature");
if (!signature) {
return c.json({ error: "Missing QStash signature" }, 401);
}
try {
const body = await c.req.text();
const receiver = new Receiver({
currentSigningKey,
nextSigningKey,
});
const isValid = await receiver.verify({
body,
signature,
});
if (!isValid) {
return c.json({ error: "Invalid QStash signature" }, 401);
}
// No need to re-create the request: Hono caches the request body,
// so downstream handlers can still call c.req.json() after it has
// been read here.
await next();
} catch (error) {
console.error("QStash signature verification failed:", error);
return c.json({ error: "Invalid signature" }, 401);
}
});
```
## Register task routes
Add the tasks router to your main API:
```ts title="packages/api/src/index.ts"
import { tasksRouter } from "./modules/tasks/router";
const appRouter = new Hono()
.basePath("/api")
.route("/tasks", tasksRouter)
// ... other existing routers
.onError(onError);
export { appRouter };
```
## Triggering tasks
You can trigger tasks from your TurboStarter application by publishing messages to QStash, which will then deliver them to your task endpoints.
Create a service to handle task triggering:
```ts title="packages/api/src/modules/tasks/service.ts"
import { qstashClient } from "../../lib/qstash";
function getTaskUrl(taskName: string): string {
const baseUrl = process.env.NEXT_PUBLIC_URL || "http://localhost:3000";
return `${baseUrl}/api/tasks/${taskName}`;
}
export class TaskService {
static async processUserData(
userId: string,
operation: "export" | "analyze" | "cleanup",
) {
return await qstashClient.publishJSON({
url: getTaskUrl("process-user-data"),
body: { userId, operation },
});
}
static async scheduleUserDataProcessing(
userId: string,
operation: "export" | "analyze" | "cleanup",
delaySeconds: number,
) {
return await qstashClient.publishJSON({
url: getTaskUrl("process-user-data"),
body: { userId, operation },
delay: `${delaySeconds}s`,
});
}
static async scheduleDailyCleanup() {
return await qstashClient.schedules.create({
destination: getTaskUrl("daily-cleanup"),
cron: "0 2 * * *", // Daily at 2 AM
});
}
}
```
## Create API endpoints for triggering
Create endpoints to trigger tasks from your application:
```ts title="packages/api/src/modules/tasks/trigger/router.ts"
import { Hono } from "hono";
import * as z from "zod";
import { enforceAuth, validate } from "../../middleware";
import { TaskService } from "../service";
const triggerUserDataSchema = z.object({
userId: z.string(),
operation: z.enum(["export", "analyze", "cleanup"]),
delaySeconds: z.number().optional(),
});
export const taskTriggerRouter = new Hono()
.post(
"/trigger/process-user-data",
enforceAuth,
validate("json", triggerUserDataSchema),
async (c) => {
const { userId, operation, delaySeconds } = c.req.valid("json");
const result = delaySeconds
? await TaskService.scheduleUserDataProcessing(
userId,
operation,
delaySeconds,
)
: await TaskService.processUserData(userId, operation);
return c.json({
success: true,
messageId: result.messageId,
message: delaySeconds
? `Task scheduled to run in ${delaySeconds} seconds`
: "Task queued for immediate processing",
});
},
)
.post("/trigger/daily-cleanup", enforceAuth, async (c) => {
const result = await TaskService.scheduleDailyCleanup();
return c.json({
success: true,
scheduleId: result.scheduleId,
message: "Daily cleanup scheduled",
});
});
```
Add it to your main router:
```ts title="packages/api/src/index.ts"
import { taskTriggerRouter } from "./modules/tasks/trigger/router";
const appRouter = new Hono()
.basePath("/api")
.route("/tasks", tasksRouter)
.route("/", taskTriggerRouter) // Trigger routes at root level
// ... other existing routers
.onError(onError);
export { appRouter };
```
## Using tasks in your application
### From the client
```tsx title="apps/web/src/modules/tasks/process-data-button.tsx"
"use client";
import { handle } from "@workspace/api/utils";
import { useMutation } from "@tanstack/react-query";
import { api } from "~/lib/api/client";
export function ProcessDataButton({ userId }: { userId: string }) {
const { mutate: processData, isPending } = useMutation({
mutationFn: handle(api.trigger["process-user-data"].$post),
onSuccess: (data) => {
console.log("Task queued:", data.messageId);
},
});
return (
);
}
```
### From a server action
```ts title="apps/web/src/app/actions/user-actions.ts"
"use server";
import { handle } from "@workspace/api/utils";
import { api } from "~/lib/api/server";
export async function processUserData(
userId: string,
operation: "export" | "analyze" | "cleanup",
) {
try {
const result = await handle(api.trigger["process-user-data"].$post)({
json: { userId, operation },
});
return {
success: true,
messageId: result.messageId,
};
} catch (error) {
console.error("Failed to queue background task:", error);
throw new Error("Failed to queue background task");
}
}
```
## Advanced features
### Cron jobs & scheduling
QStash makes it easy to schedule recurring tasks:
```ts
// Schedule a task to run every day at 2 AM
await qstashClient.schedules.create({
destination: `${baseUrl}/api/tasks/daily-cleanup`,
cron: "0 2 * * *",
});
// Schedule a task to run every Monday at 9 AM
await qstashClient.schedules.create({
destination: `${baseUrl}/api/tasks/weekly-report`,
cron: "0 9 * * 1",
});
// One-time delayed task
await qstashClient.publishJSON({
url: `${baseUrl}/api/tasks/reminder`,
body: { userId: "123", type: "follow-up" },
delay: "3d", // 3 days from now
});
```
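The `delay` values above use QStash's duration shorthand: a number followed by a unit (`s`, `m`, `h`, `d`). As a mental model only (QStash parses these server-side, and `delayToSeconds` is a hypothetical helper, not part of the SDK), the strings map to seconds like this:

```typescript
// Illustrative only: how QStash-style delay strings ("30s", "5m", "3d") map to seconds.
const UNIT_SECONDS: Record<string, number> = { s: 1, m: 60, h: 3600, d: 86400 };

function delayToSeconds(delay: string): number {
  const match = /^(\d+)([smhd])$/.exec(delay);
  if (!match) throw new Error(`Invalid delay: ${delay}`);
  return Number(match[1]) * UNIT_SECONDS[match[2]];
}

delayToSeconds("3d"); // 259200 seconds, i.e. 3 days
```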
### Topics (Fanout pattern)
Create topics to send messages to multiple endpoints:
```ts
// Create a topic
await qstashClient.topics.upsert({
name: "user-events",
endpoints: [
{ url: `${baseUrl}/api/tasks/update-analytics` },
{ url: `${baseUrl}/api/tasks/send-notification` },
{ url: `${baseUrl}/api/tasks/update-crm` },
],
});
// Publish to topic - all endpoints will receive the message
await qstashClient.publishJSON({
topic: "user-events",
body: {
userId: "123",
event: "user-registered",
timestamp: new Date().toISOString(),
},
});
```
### Queues (Sequential processing)
Create queues for ordered task processing:
```ts
// Create a queue
const queue = qstashClient.queue({ queueName: "user-onboarding" });
// Add tasks to queue (they'll run in order)
await queue.enqueueJSON({
url: `${baseUrl}/api/tasks/send-welcome-email`,
body: { userId: "123" },
});
await queue.enqueueJSON({
url: `${baseUrl}/api/tasks/setup-user-profile`,
body: { userId: "123" },
});
await queue.enqueueJSON({
url: `${baseUrl}/api/tasks/trigger-onboarding-sequence`,
body: { userId: "123" },
});
```
## Monitoring and debugging
### QStash Dashboard
Visit the [Upstash Console](https://console.upstash.com) to monitor your tasks:
* **Message tracking**: See all messages, their status, and delivery attempts
* **Logs**: View detailed logs for each message delivery
* **Analytics**: Monitor throughput, success rates, and error patterns
* **Schedules**: Manage and monitor your cron jobs
* **Dead letter queue**: Handle messages that failed after all retries
### Local development
During development, you can:
1. **Use ngrok** for local testing:
```bash
# Install ngrok
npm install -g ngrok
# Expose your local server
ngrok http 3000
# Use the ngrok URL in your QStash configuration
```
2. **Check message delivery** in the Upstash Console
3. **Use console.log** in your task handlers for debugging
## Best practices
Use the QStash signature verification middleware to ensure messages are authentic:
```ts
// ✅ Good - Always verify QStash signatures
.use(qstashVerifyMiddleware)
// ❌ Not secure - Accepting unverified requests
.post("/tasks/sensitive-operation", handler)
```
Return appropriate HTTP status codes so QStash knows whether to retry:
```ts
// ✅ Good - Clear error handling
try {
await processTask(payload);
return c.json({ success: true });
} catch (error) {
console.error("Task failed:", error);
// 5xx = QStash will retry, 4xx = won't retry
return c.json({ error: "Task failed" }, 500);
}
```
Make your tasks safe to run multiple times in case of retries:
```ts
// ✅ Good - Check if work already done
const existingResult = await db.findProcessedResult(payload.id);
if (existingResult) {
return c.json({ success: true, result: existingResult });
}
// Proceed with processing...
```
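The check above can be factored into a small reusable guard. Here's a minimal in-memory sketch (a real implementation would persist results in a database table keyed by the message ID; `runOnce` is an illustrative name, not part of TurboStarter):

```typescript
// Minimal in-memory idempotency guard. A real app would back this with a
// database so it survives restarts and works across multiple instances.
const processed = new Map<string, unknown>();

async function runOnce<T>(id: string, work: () => Promise<T>): Promise<T> {
  // If this message was already processed, return the cached result
  if (processed.has(id)) return processed.get(id) as T;
  const result = await work();
  processed.set(id, result);
  return result;
}
```

A retried delivery with the same message ID then becomes a no-op that returns the cached result. Note this sketch guards against sequential retries, not concurrent duplicate deliveries; for those you'd need a database-level unique constraint or lock.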
Configure timeouts based on your expected processing time:
```ts
// For quick tasks
await qstashClient.publishJSON({
url: taskUrl,
body: payload,
timeout: "30s",
});
// For longer tasks
await qstashClient.publishJSON({
url: taskUrl,
body: payload,
timeout: "300s", // 5 minutes
});
```
Include relevant context in your logs:
```ts
console.log("Task started", {
taskType: "process-user-data",
userId: payload.userId,
operation: payload.operation,
timestamp: new Date().toISOString(),
});
```
## Next steps
With QStash integrated into your TurboStarter application, you can now:
* **Process background tasks** without worrying about serverless timeouts
* **Schedule recurring operations** with reliable cron job functionality
* **Handle high-volume messaging** with automatic retries and scaling
* **Build complex workflows** using topics, queues, and delays
Ready to explore more advanced features? Check out the official documentation for webhooks, batch operations, and advanced routing patterns.
---
url: /docs/web/background-tasks/trigger
title: trigger.dev
description: Integrate trigger.dev with your TurboStarter application for reliable background task processing.
---
[trigger.dev](https://trigger.dev) is an open-source background jobs framework that lets you write reliable workflows in plain async code.
trigger.dev provides automatic retries, real-time monitoring, and seamless scaling - all while letting you write background tasks in familiar JavaScript/TypeScript code directly in your TurboStarter project.
## Setup
Visit [trigger.dev](https://trigger.dev) and create a free account. Create a new project and note down your API key.
Add your trigger.dev API key to your root environment variables:
```dotenv title=".env.local"
TRIGGER_SECRET_KEY=your_secret_key_here
```
For production, make sure to add the production API key to your deployment environment.
## Create a new package in your repository
You can use the [Turbo generator](/docs/web/customization/add-package) to quickly scaffold the package structure:
```bash
turbo gen package
```
When prompted, name your package `tasks`. This will create the basic structure for you.
Alternatively, create a new folder `tasks` in the `/packages` directory and add the following files:
```json title="packages/tasks/package.json"
{
"name": "@workspace/tasks",
"private": true,
"version": "0.1.0",
"type": "module",
"exports": {
".": "./src/index.ts"
},
"scripts": {
"clean": "git clean -xdf .cache .turbo dist node_modules",
"dev": "pnpm dlx trigger.dev@latest dev",
"deploy": "pnpm dlx trigger.dev@latest deploy"
},
"dependencies": {
"@trigger.dev/sdk": "4.3.3"
},
"devDependencies": {
"@trigger.dev/build": "4.3.3",
"@workspace/tsconfig": "workspace:*",
"typescript": "catalog:"
}
}
```
```json title="packages/tasks/tsconfig.json"
{
"extends": "@workspace/tsconfig/base.json",
"include": ["**/*.ts"],
"exclude": ["dist", "build", "node_modules"]
}
```
```ts title="packages/tasks/trigger.config.ts"
import { defineConfig } from "@trigger.dev/sdk";
export default defineConfig({
project: "your_project_id", // Replace with your actual project ID
runtime: "node",
logLevel: "log",
maxDuration: 300,
dirs: ["./src/trigger"],
});
```
## Create your first task
Now create your first task in the `packages/tasks/src/trigger` directory:
```ts title="packages/tasks/src/trigger/process-user-data.ts"
import { task, logger, wait } from "@trigger.dev/sdk";
import * as z from "zod";
const ProcessUserDataSchema = z.object({
userId: z.string(),
operation: z.enum(["export", "analyze", "cleanup"]),
});
export const processUserDataTask = task({
id: "process-user-data",
run: async (payload: z.infer<typeof ProcessUserDataSchema>) => {
const { userId, operation } = payload;
logger.info("Starting user data processing", { userId, operation });
switch (operation) {
case "export":
await wait.for({ seconds: 2 });
logger.info("User data exported successfully");
return { success: true, result: "Data exported to CSV" };
case "analyze":
await wait.for({ seconds: 5 });
logger.info("User data analysis completed");
return {
success: true,
result: { totalActions: 156, avgSessionTime: "4m 32s" },
};
case "cleanup":
await wait.for({ seconds: 3 });
logger.info("User data cleanup completed");
return { success: true, result: "Removed 23 obsolete records" };
default:
throw new Error(`Unknown operation: ${operation}`);
}
},
});
```
```ts title="packages/tasks/src/trigger/daily-cleanup.ts"
import { logger, schedules, wait } from "@trigger.dev/sdk";
// Declarative schedule: trigger.dev registers the cron when the task is deployed
export const dailyCleanupTask = schedules.task({
id: "daily-cleanup",
cron: "0 2 * * *", // Daily at 2 AM
run: async () => {
logger.info("Starting daily cleanup");
// Cleanup old logs
await wait.for({ seconds: 5 });
logger.info("Logs cleaned up");
// Cleanup temporary files
await wait.for({ seconds: 3 });
logger.info("Temp files cleaned up");
// Generate daily reports
await wait.for({ seconds: 8 });
logger.info("Reports generated");
return {
success: true,
cleanupTime: new Date().toISOString(),
itemsProcessed: 1247,
};
},
});
```
```ts title="packages/tasks/src/index.ts"
export * from "./trigger/process-user-data";
export * from "./trigger/daily-cleanup";
```
## Test your task
You can test your tasks locally by running:
```bash
# Start the development server
pnpm --filter @workspace/tasks dev
```
This will deploy your tasks to trigger.dev in the development environment, allowing you to trigger them from the dashboard or programmatically.
## Deploy your tasks
To deploy your tasks to production on trigger.dev, run:
```bash
pnpm --filter @workspace/tasks deploy
```
You can also add this command as an automated deployment step in your CI/CD pipeline by creating a new GitHub action.
Add the `TRIGGER_ACCESS_TOKEN` secret to your repository secrets, which you can create in the trigger.dev dashboard.
```yml title=".github/workflows/deploy-tasks.yml"
name: Deploy to trigger.dev (prod)
on:
push:
branches:
- main
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "24"
- uses: pnpm/action-setup@v4
- name: Install dependencies
run: pnpm install
- name: Deploy trigger tasks
env:
TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
run: |
pnpm --filter @workspace/tasks deploy
```
## Triggering tasks
You can trigger tasks from your TurboStarter application using the API layer.
While you can trigger tasks directly from your frontend or server components using the trigger.dev SDK, it's recommended to use the API layer approach shown below.
This provides better security, validation, and separation of concerns.
First, add the `@workspace/tasks` package as a dependency to your API package:
```json title="packages/api/package.json"
{
"dependencies": {
"@workspace/tasks": "workspace:*"
}
}
```
### From an API endpoint
Create a new API module to handle task triggering:
```ts title="packages/api/src/modules/tasks/tasks.router.ts"
import { tasks } from "@trigger.dev/sdk";
import { Hono } from "hono";
import * as z from "zod";
import type { processUserDataTask } from "@workspace/tasks";
import { enforceAuth, validate } from "../../middleware";
const processUserDataSchema = z.object({
userId: z.string(),
operation: z.enum(["export", "analyze", "cleanup"]),
});
export const tasksRouter = new Hono().post(
"/process-user-data",
enforceAuth,
validate("json", processUserDataSchema),
async (c) => {
const { userId, operation } = c.req.valid("json");
const handle = await tasks.trigger<typeof processUserDataTask>(
"process-user-data",
{ userId, operation },
);
return c.json({
success: true,
taskId: handle.id,
message: "Background task started successfully",
});
},
);
```
Then register it in your main API router:
```ts title="packages/api/src/index.ts"
import { tasksRouter } from "./modules/tasks/tasks.router";
const appRouter = new Hono()
.basePath("/api")
.route("/tasks", tasksRouter)
// ... other existing routers
.onError(onError);
export { appRouter };
```
### From the client
You can call the task endpoint from your web app using TurboStarter's API client:
```tsx title="apps/web/src/modules/tasks/process-data-button.tsx"
"use client";
import { handle } from "@workspace/api/utils";
import { useMutation } from "@tanstack/react-query";
import { api } from "~/lib/api/client";
export function ProcessDataButton({ userId }: { userId: string }) {
const { mutate: processData, isPending } = useMutation({
mutationFn: handle(api.tasks["process-user-data"].$post),
onSuccess: (data) => {
console.log("Task started:", data.taskId);
},
});
return (
);
}
```
### From a server action
```ts title="apps/web/src/app/actions/user-actions.ts"
"use server";
import { handle } from "@workspace/api/utils";
import { api } from "~/lib/api/server";
export async function processUserData(
userId: string,
operation: "export" | "analyze" | "cleanup",
) {
try {
const result = await handle(api.tasks["process-user-data"].$post)({
json: { userId, operation },
});
return {
success: true,
taskId: result.taskId,
};
} catch (error) {
console.error("Failed to trigger background task:", error);
throw new Error("Failed to start background task");
}
}
```
## Monitoring and debugging
### Dashboard access
Visit the [trigger.dev dashboard](https://trigger.dev) to monitor your tasks:
* View task execution logs and performance metrics
* Track success and failure rates across all your tasks
* Monitor task duration and resource usage
* Replay failed tasks with a single click
* Set up alerts for task failures or performance issues
### Local development
During development, run your tasks locally while connected to trigger.dev:
```bash
# Start everything in the workspace
pnpm dev
# or start the tasks package only
pnpm --filter @workspace/tasks dev
```
This allows you to:
* Test tasks locally with real data
* Debug with breakpoints and console logs
* See immediate feedback as you develop
## Best practices
```ts
// ✅ Good - Clear and descriptive
id: "user-data-export-csv";
id: "weekly-newsletter-campaign";
id: "cleanup-temp-files";
// ❌ Not so good - Generic and unclear
id: "task1";
id: "job";
id: "process";
```
```ts
run: async (payload) => {
try {
const result = await processData(payload);
logger.info("Task completed successfully", { result });
return result;
} catch (error) {
logger.error("Task failed", {
error: error instanceof Error ? error.message : String(error),
});
throw error; // Re-throw to trigger retry logic
}
},
```
```ts
logger.info("Processing started", {
userId: payload.userId,
operation: payload.operation,
timestamp: new Date().toISOString(),
});
```
Instead of one massive task, create focused, single-purpose tasks that can be composed together for complex workflows.
Set retry policies based on your task's requirements:
```ts
// For critical operations
retry: {
maxAttempts: 5,
minTimeoutInMs: 2000,
maxTimeoutInMs: 30000,
factor: 2,
}
// For less critical operations
retry: {
maxAttempts: 2,
minTimeoutInMs: 1000,
maxTimeoutInMs: 5000,
factor: 1.5,
}
```
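To get a feel for these numbers: assuming simple exponential backoff with a cap (trigger.dev's actual scheduler may add jitter), the wait before each retry grows like this:

```typescript
// Sketch of how exponential backoff grows under a retry policy.
// Assumes delay = minTimeoutInMs * factor^attempt, capped at maxTimeoutInMs;
// trigger.dev's real backoff may differ (e.g. jitter), so treat this as a model.
interface RetryPolicy {
  maxAttempts: number;
  minTimeoutInMs: number;
  maxTimeoutInMs: number;
  factor: number;
}

const retryDelays = ({ maxAttempts, minTimeoutInMs, maxTimeoutInMs, factor }: RetryPolicy) =>
  // One delay between each pair of attempts, so maxAttempts - 1 entries
  Array.from({ length: maxAttempts - 1 }, (_, attempt) =>
    Math.min(minTimeoutInMs * factor ** attempt, maxTimeoutInMs),
  );

retryDelays({ maxAttempts: 5, minTimeoutInMs: 2000, maxTimeoutInMs: 30000, factor: 2 });
// delays of 2000, 4000, 8000 and 16000 ms between the 5 attempts
```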
## Next steps
With trigger.dev integrated into your TurboStarter application, you can now:
* **Handle long-running operations** that would timeout in serverless functions
* **Schedule recurring tasks** like reports, cleanups, and maintenance
* **Process background jobs** reliably with automatic retries
* **Scale your application** without worrying about task execution infrastructure
Ready to explore more advanced features? Check out the official documentation for additional capabilities like webhooks, batching, and custom integrations.
---
url: /docs/web/billing/configuration
title: Configuration
description: Configure billing for your application.
---
The billing configuration schema replicates your billing provider's schema, so that:
* we can display the data in the UI (pricing table, billing section, etc.)
* we can create the correct checkout session
* some features can work correctly (e.g. feature-based access)
It is common to all billing providers and is defined in `packages/billing/shared/src/config/index.ts`. Some billing providers differ in what you can (and cannot) do. In these cases, the schema will try to validate and enforce the rules - but it's still up to you to make sure the data is correct.
The schema is based on a few entities:
* **Plans:** The main products you are selling (e.g. "Free", "Premium", etc.)
* **Variants:** The purchasable pricing options for a plan (one-time or recurring)
* **Features:** The list of features included in a plan (used for the UI and access control)
* **Discounts:** Optional discounts that apply to specific variants
```ts title="index.ts"
type BillingConfig = {
plans: BillingConfigPlan[];
discounts?: BillingConfigDiscount[];
};
```
Getting the IDs of your plans and variants is **extremely important**, as these are used to:
* create the correct checkout
* manage your customers' billing data
Take this one step at a time as you configure it, and test each step thoroughly.
## Billing provider
To set the billing provider, modify the exports in the `packages/billing/src/providers` directory. It defaults to [Stripe](/docs/web/billing/stripe).
```ts
// [!code word:stripe]
export * from "./stripe";
```
```ts
// [!code word:stripe]
export * from "./stripe/env";
```
It's important to set it correctly, as this is used to determine the correct API calls and environment variables used during the communication with the billing provider.
## Plans
Plans are the main products you are selling. They are defined by the following fields:
```ts title="index.ts"
export const config = billingConfigSchema.parse({
...
plans: [
{
id: PricingPlanType.PREMIUM,
name: "Premium",
description: "Become a power user and gain benefits",
badge: "Bestseller",
features: [
"Unlimited projects",
"Priority support",
"Advanced integrations",
"Team collaboration",
"Analytics dashboard"
],
variants: [],
},
],
...
}) satisfies BillingConfig;
```
Let's break down the fields:
* `id`: The unique identifier for the plan (e.g. `free`, `pro`, `enterprise`, etc.). **This is chosen by you, it doesn't need to be the same one as the one in the provider.** It's also used to determine the access level of the plan.
* `name`: The name of the plan
* `description`: The description of the plan
* `badge`: A badge to display on the product (e.g. "Bestseller", "Popular", etc.). Can be `null`.
* `features`: The list of included features for the plan
* `variants`: The list of purchasable variants for this plan (see below)
Most of these fields populate the pricing table UI.
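Since the plan `id` determines the access level, one simple way to gate features is to compare positions in an ordered list of plan IDs. This is an illustrative sketch under assumed names (`BillingPlan`, `PLAN_ORDER`, `hasAccess` are hypothetical, not TurboStarter's actual helpers):

```typescript
// Illustrative sketch: gate features by comparing plan positions in an ordered list.
enum BillingPlan {
  FREE = "free",
  PREMIUM = "premium",
  ENTERPRISE = "enterprise",
}

// Plans ordered from lowest to highest access level
const PLAN_ORDER = [BillingPlan.FREE, BillingPlan.PREMIUM, BillingPlan.ENTERPRISE];

const hasAccess = (userPlan: BillingPlan, requiredPlan: BillingPlan) =>
  PLAN_ORDER.indexOf(userPlan) >= PLAN_ORDER.indexOf(requiredPlan);

hasAccess(BillingPlan.PREMIUM, BillingPlan.FREE); // true: premium includes free-tier features
```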
### Variants
Variants are the purchasable options for a plan. They can be one-time or recurring.
```ts title="index.ts"
export const config = billingConfigSchema.parse({
...
plans: [
{
id: BillingPlan.PREMIUM,
name: "Premium",
description: "Become a power user and gain benefits",
badge: "Bestseller",
variants: [
{
/* 👇 This is the `priceId` from the provider (e.g. Stripe), `variantId` (e.g. Lemon Squeezy) or `productId` (e.g. Polar) */
id: "price_1PpZAAFQH4McJDTlig6Fxsyy",
cost: 1900,
currency: "usd",
model: BillingModel.RECURRING,
interval: RecurringInterval.MONTH,
trialDays: 7,
hidden: false,
},
],
},
],
...
}) satisfies BillingConfig;
```
Let's break down the fields:
* `id`: The unique identifier for the variant. **This must match the corresponding identifier in the billing provider.**
* `cost`: The price amount in the smallest currency unit (e.g. cents). Displayed values are typically divided by 100.
* `currency`: The currency code for the price (defaults to `usd`)
Make sure you have the same currency set on your billing provider (e.g. as a [store currency](https://docs.lemonsqueezy.com/help/payments/currencies) on Lemon Squeezy).
* `model`: The billing model for this variant (`one-time` or `recurring`)
* `interval`: The interval for recurring variants (e.g. `month`, `year`)
* `trialDays`: Trial length in days for recurring variants (optional)
* `hidden`: Whether this variant should be hidden from the pricing table (defaults to `false`). Useful for grandfathering existing variants without complicated migrations.
The cost is used **only** for UI purposes. The billing provider will handle the actual billing - therefore, please make sure the cost is correctly set in the billing provider.
Make sure the `id` matches the correct identifier in the billing provider. This is critical, as it’s used to identify the correct variant when creating a checkout session.
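Because `cost` is stored in the smallest currency unit, the UI divides by 100 before formatting. A minimal sketch of that conversion (`formatCost` is a hypothetical helper, not part of the starter):

```typescript
// Hypothetical helper: format a variant's cost (stored in cents) for display.
const formatCost = (cost: number, currency: string) =>
  new Intl.NumberFormat("en-US", { style: "currency", currency }).format(cost / 100);

formatCost(1900, "usd"); // "$19.00"
```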
### Custom variants
Sometimes you want to display a variant in the pricing table without actually having it in the billing provider. This is common for custom plans, free plans that don't require a billing provider subscription, or plans that are not yet available.
To do so, let's add the `custom` flag to the variant:
```ts title="index.ts"
{
id: "enterprise-monthly",
label: "Contact us!",
href: "/contact",
model: BillingModel.RECURRING,
interval: RecurringInterval.MONTH,
custom: true, //[!code highlight]
}
```
Here's the full example:
```ts title="index.ts"
export const config = billingConfigSchema.parse({
...
plans: [
{
id: BillingPlan.PREMIUM,
name: "Premium",
description: "Become a power user and gain benefits",
badge: "Bestseller",
features: [
"Unlimited projects",
"Priority support",
"Advanced integrations",
"Team collaboration",
"Analytics dashboard"
],
variants: [
{
id: "premium-monthly",
label: "Contact us!",
href: "/contact",
model: BillingModel.RECURRING,
interval: RecurringInterval.MONTH,
custom: true, // [!code highlight]
},
],
},
],
...
}) satisfies BillingConfig;
```
As you can see, the plan now has a custom variant. The UI will display it in the pricing table, but it won't be available for purchase.
We do this by using the following fields:
* `custom`: A flag to indicate that the variant is custom. This prevents the variant from being available for purchase. It's set to `false` by default.
* `label`: Displayed in the pricing table instead of a numeric amount.
* `href`: The link to the page where the user goes when they click on the variant. This is used in the pricing table.
All labels and descriptions can be translated using the [internationalization](/docs/web/internationalization/overview) feature. The UI will display the correct translation based on the user's locale.
```ts title="index.ts"
label: "common:contactUs",
```
To make strings translatable, make sure to provide the translation key in the config.
### Discounts
Sometimes, you want to offer a discount to your users. This is done by adding a discount to the `discounts` array and pointing it at specific variant IDs via `appliesTo`.
```ts title="index.ts"
export const config = billingConfigSchema.parse({
...
discounts: [
{
code: "50OFF",
type: BillingDiscountType.PERCENT,
off: 50,
appliesTo: [
"price_1PpUagFQH4McJDTlHwsCzOmyT6",
],
},
],
...
}) satisfies BillingConfig;
```
Let's break down the fields:
* `code`: The discount/promo code (e.g. "50OFF"). **This must match the code configured in the billing provider.**
* `type`: The type of the discount (e.g. `percent`, `amount`, etc.)
* `off`: The discount value (e.g. 50 for 50% off when `type` is `percent`)
* `appliesTo`: The list of variant IDs this discount applies to
This data allows you to display the correct banner in the UI (e.g. “10% off for the first 100 customers!”) and apply the discount to the correct variant at checkout.
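The discounted amount shown in the UI follows directly from `type` and `off`. A minimal sketch of the calculation (the names are illustrative, not the starter's internals; amounts stay in the smallest currency unit):

```typescript
// Illustrative sketch: compute a variant's discounted cost in the smallest currency unit.
type Discount = { type: "percent" | "amount"; off: number };

const applyDiscount = (cost: number, discount: Discount) =>
  discount.type === "percent"
    ? Math.round(cost * (1 - discount.off / 100)) // e.g. 50% off
    : Math.max(0, cost - discount.off); // fixed amount off, never below zero

applyDiscount(1900, { type: "percent", off: 50 }); // 950, i.e. $9.50 down from $19.00
```

Remember this is display math only: the billing provider applies the real discount at checkout, which is why the `code` must match the provider's configuration exactly.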
## Adding more products, plans and discounts
Simply add more plans, variants, and discounts to the config. The UI **should** handle most traditional cases; if you have a more complex billing setup, you may need to adjust the UI accordingly.
---
url: /docs/web/billing/credits
title: Credits-based billing
description: Implement credits-based billing in your TurboStarter application.
---
We are working on adding credits-based billing to our platform. As soon as it's ready, we will update this page with the necessary information.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
---
url: /docs/web/billing/creem
title: Creem
description: Manage your customers data and subscriptions using Creem.
---
We are working on adding [Creem](https://www.creem.io/) integration to our platform. As soon as it's ready, we will update this page with the necessary information.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
---
url: /docs/web/billing/lemon-squeezy
title: Lemon Squeezy
description: Manage your customers data and subscriptions using Lemon Squeezy.
---
[Lemon Squeezy](https://lemonsqueezy.com/) is another billing provider available within TurboStarter. Here we'll go through the configuration and how to set it up as a provider for your app.
To switch to Lemon Squeezy, you need to update the exports in:
```ts
// [!code word:lemon-squeezy]
export * from "./lemon-squeezy";
```
```ts
// [!code word:lemon-squeezy]
export * from "./lemon-squeezy/env";
```
Then, let's configure the integration:
## Get API keys
Once you've signed up and created a store on [Lemon Squeezy](https://lemonsqueezy.com/), generate a new API key by navigating to the [API page](https://app.lemonsqueezy.com/settings/api) in your account settings. Click the plus button, enter a name for the API key, and select *Create*. Make sure to copy and save the API key, as you'll need it for configuring the integration.
For development, be sure to enable [Test Mode](https://docs.lemonsqueezy.com/help/getting-started/test-mode) so you don't affect live transactions.
## Set environment variables
You need to set the following environment variables:
```dotenv title="apps/web/.env.local"
LEMONSQUEEZY_API_KEY="" # Your Lemon Squeezy API key
LEMONSQUEEZY_SIGNING_SECRET="" # Your Lemon Squeezy webhook signing secret
LEMONSQUEEZY_STORE_ID="" # Your Lemon Squeezy store ID (can be found under Settings > Stores next to your store url, e.g #12345)
```
**Please do not add the secret keys to the .env file in production.** During development, you can place them in `.env.local` as it's not committed to the repository. In production, you can set them in the environment variables of your hosting provider.
## Create products
For your users to choose from the available subscription plans, you need to create those Products first on the [Products page](https://app.lemonsqueezy.com/products). You can create as many products as you want.
Create one product per plan you want to offer. You can add multiple variants within the product to offer multiple models or different billing intervals.

To offer multiple intervals for each plan, you can use the [Variant](https://docs.lemonsqueezy.com/help/products/variants) feature of Lemon Squeezy. Just create one variant for each interval/model you want to offer.

You need to make sure that the variant ID you set in the configuration matches the ID of the variant you created in Lemon Squeezy.
[See configuration](/docs/web/billing/configuration#variants) for more information.
## Create a webhook
To sync subscription status, checkout results, and other information to your database, you need to set up a webhook.
The webhook handling code comes ready to use with TurboStarter, you just have to create the webhook in the Lemon Squeezy dashboard and insert the URL for your project.
To configure a new webhook, go to the [Webhooks page](https://app.lemonsqueezy.com/settings/webhooks) in the Lemon Squeezy settings and click the *Plus* button.

Select the following events:
* For subscriptions:
* `subscription_created`
* `subscription_updated`
* `subscription_cancelled`
* For one-off payments:
* `order_created`
You will also have to enter a *Signing secret* which you can get by running the following command in your terminal:
```bash
openssl rand -base64 32
```
Copy the generated string and paste it into the *Signing secret* field.
You also need to add this secret to your environment variables:
```dotenv title="apps/web/.env.local"
LEMONSQUEEZY_SIGNING_SECRET=
```
To get the callback URL for the webhook, you can either use a local development URL or the URL of your deployed app:
### Local development
If you want to test the webhook locally, you can use [ngrok](https://ngrok.com) to create a tunnel to your local machine. Ngrok will then give you a URL that you can use to test the webhook locally.
To do so, install ngrok and run it with the following command (while your TurboStarter web development server is running):
```bash
ngrok http 3000
```

This will give you a URL (see the *Forwarding* output) that you can use to create a webhook in Lemon Squeezy. Just take that URL and append `/api/billing/webhook/lemon-squeezy`.
### Production deployment
When going to production, you will need to set the webhook URL and the events you want to listen to in Lemon Squeezy.
The webhook path is `/api/billing/webhook/lemon-squeezy`. If your app is hosted at `https://myapp.com` then you need to enter `https://myapp.com/api/billing/webhook/lemon-squeezy` as the URL.
All the relevant events are automatically handled by TurboStarter, so you don't need to do anything else. If you want to handle more events please check [Webhooks](/docs/web/billing/webhooks) for more information.
## Add discount
You can add a discount for your customers that applies to a specific price.
You can create the discount on [Discounts page](https://app.lemonsqueezy.com/discounts).

There you can set the discount details, such as the products it applies to, the amount off, duration, maximum redemptions, and more.
You also need to add the discount code and its details to the TurboStarter billing configuration so the discount can be displayed in the UI, used when creating checkout sessions, and included in price calculations.
[See discounts configuration](/docs/web/billing/configuration#discounts) for more details.
That's it! 🎉 You have now set up Lemon Squeezy as a billing provider for your app.
Feel free to add more products, prices, and discounts, and manage your customers' data and subscriptions using Lemon Squeezy.
Make sure that the data you set in the configuration matches the details of things you created in Lemon Squeezy.
[See configuration](/docs/web/billing/configuration) for more information.
---
url: /docs/web/billing/metered-usage
title: Metered usage
description: Charge your customers based on their usage of your services.
---
We are working on adding metered usage billing to our platform. As soon as it's ready, we will update this page with the necessary information.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
---
url: /docs/web/billing/one-time
title: One-time payments
description: Manage one-time payments with TurboStarter.
---
While not a typical SaaS billing model, TurboStarter supports one-time (one-off) payments.
One-time payments are useful when you want to sell products that aren't subscription-based, such as:
* **Lifetime access**: products sold once, granting access forever.
* **Multiple purchases**: one-off items/add-ons that can be bought multiple times.
Some of this will require custom code (e.g. fulfillment), but TurboStarter provides a solid foundation for handling checkout and syncing successful purchases into your app.
## Configuration
One-time payments are represented as **variants** with `model: BillingModel.ONE_TIME` in your billing configuration.
```ts title="index.ts"
export const config = billingConfigSchema.parse({
...
plans: [
{
id: BillingPlan.PREMIUM,
name: "Premium",
description: "Become a power user and gain benefits",
badge: "Bestseller",
features: [
"Unlimited projects",
"Priority support",
"Advanced integrations",
"Team collaboration",
"Analytics dashboard"
],
variants: [
{
/* 👇 This is the `priceId` from the provider (e.g. Stripe), `variantId` (e.g. Lemon Squeezy) or `productId` (e.g. Polar) */
id: "price_1PpUagFQH4McJDTlHCzOmyT6",
cost: 29900,
currency: "usd",
model: BillingModel.ONE_TIME, // [!code highlight]
},
],
},
],
...
}) satisfies BillingConfig;
```
Let's break down the fields:
* `id`: The unique identifier for the variant. **This must match the identifier in the billing provider.**
* `cost`: The price amount in the smallest currency unit (e.g. cents). Displayed values are typically divided by 100.
* `currency`: The currency code (defaults to `usd`).
* `model`: The billing model for the variant. For one-time payments, it must be `BillingModel.ONE_TIME`.
Please remember that `cost` is used for display purposes only. **The billing provider handles the actual billing**, so make sure the amount is correct in the provider.
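For example, converting a smallest-unit `cost` into a display price (the helper name is ours, not part of the kit):

```typescript
// Format a smallest-unit amount (e.g. cents) as a localized display price.
function formatCost(cost: number, currency: string): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency,
  }).format(cost / 100);
}

formatCost(29900, "usd"); // the variant above renders as $299.00
```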
## Provider notes
* **Stripe**: one-time purchases typically complete on `checkout.session.completed`. Your `variant.id` should match the Stripe **Price ID** (e.g. `price_...`). See [Stripe setup](/docs/web/billing/stripe).
* **Lemon Squeezy**: one-time purchases typically emit `order_created`. Your `variant.id` should match the Lemon Squeezy **Variant ID**. See [Lemon Squeezy setup](/docs/web/billing/lemon-squeezy).
* **Polar**: one-time purchases typically emit `order.created`/`order.updated`. Your `variant.id` should match the Polar **Product ID** (Polar models each “variant” as a separate product). See [Polar setup](/docs/web/billing/polar).
When a product is purchased, TurboStarter will create an order in the provider-agnostic `order` table - you can use this data to fulfill the order and grant access to the product.
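Fulfillment itself is up to you. As a sketch (the mapping and names below are illustrative, not part of the kit), you might map purchased variant ids to the entitlements they unlock when processing an order:

```typescript
// Illustrative mapping from provider variant/price ids to app entitlements.
const ENTITLEMENTS: Record<string, string> = {
  price_1PpUagFQH4McJDTlHCzOmyT6: "lifetime-access",
};

interface Order {
  variantId: string;
  customerId: string;
}

// Resolve what a completed order should unlock; unknown variants grant nothing.
function entitlementFor(order: Order): string | null {
  return ENTITLEMENTS[order.variantId] ?? null;
}
```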
---
url: /docs/web/billing/overview
title: Overview
description: Get started with web billing in TurboStarter.
---
The `@workspace/billing` and `@workspace/billing-web` packages are used to manage all the billing-related logic and features for your web SaaS application.
Inside, we provide an abstraction layer that lets you use different billing providers without breaking your code or changing the internal API calls.

## Providers
TurboStarter implements multiple providers for managing billing:
All configuration and setup is built-in with a unified API, so you can switch between providers by simply changing the exports and even introduce your own provider without breaking any billing-related logic.
Depending on the service you use, you will need to set the environment variables accordingly. By default, the billing package uses [Stripe](/docs/web/billing/stripe). Alternatively, you can use [Lemon Squeezy](/docs/web/billing/lemon-squeezy) or [Polar](/docs/web/billing/polar). In the future, we will also add [Creem](/docs/web/billing/creem).
## Configuration
The core billing configuration is maintained in the `@workspace/billing` package within the `config` directory. This directory houses the primary configuration schema as well as schemas for the available API endpoints.
To better understand all billing features and customization options provided by TurboStarter, explore the following dedicated guides:
## Fetching customer status
After your user completes checkout, you'll often want to fetch their current billing summary (subscription status, entitlements, credits) to:
* gate features in your UI
* show “Current plan” / “Manage subscription” states
* keep the app in sync after upgrades/downgrades
You can do this via the billing `me` endpoint (`/api/billing/me`) using the web [API client](/docs/web/api/client).
### Server-side
```tsx title="page.tsx"
import { handle } from "@workspace/api/utils";
import { getActivePlan } from "@workspace/billing";
import { api } from "~/lib/api/server";
export default async function BillingStatus() {
const summary = await handle(api.billing.me.$get)();
const plan = getActivePlan(summary);
return <div>Current plan: {plan}</div>;
}
```
### Client-side
```tsx title="billing-status.tsx"
"use client";
import { useQuery } from "@tanstack/react-query";
import { handle } from "@workspace/api/utils";
import { getActivePlan } from "@workspace/billing";
import { api } from "~/lib/api/client";
export function BillingStatus() {
const summary = useQuery({
queryKey: ["me"],
queryFn: handle(api.billing.me.$get),
});
if (!summary.data) {
return null;
}
const plan = getActivePlan(summary.data);
return <div>Current plan: {plan}</div>;
}
```
In summary, TurboStarter offers a flexible and unified billing framework, allowing you to mix and match billing models and providers as needed. Each section above provides focused documentation to help you configure the approach that best suits your SaaS application's needs.
---
url: /docs/web/billing/per-seat
title: Per-seat billing
description: Charge your customers based on the number of seats they have.
---
We are working on adding per-seat billing to our platform. As soon as it's ready, we will update this page with the necessary information.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
---
url: /docs/web/billing/polar
title: Polar
description: Manage your customers' data and subscriptions using Polar.
---
[Polar](https://polar.sh/) is another billing provider available within TurboStarter. Here we'll go through the configuration and how to set it up as a provider for your app.
To switch to Polar, you need to update the exports in:
```ts
// [!code word:polar]
export * from "./polar";
```
```ts
// [!code word:polar]
export * from "./polar/env";
```
Then, let's configure the integration:
## Get the access token
After you have created your account for [Polar](https://polar.sh/) and set up your store, you will need to get an API key.
Under *Settings*, scroll to *Developers* and click *New token*. Enter a name for the token, set the expiration duration, and select the scopes you want the token to have.
To keep it simple, you can select all scopes.

For local development, make sure to use [Sandbox Mode](https://docs.polar.sh/integrate/sandbox) to not mess with the real transactions.
## Set environment variables
You need to set the following environment variables:
```dotenv title="apps/web/.env.local"
POLAR_ACCESS_TOKEN="" # Your Polar access token
POLAR_WEBHOOK_SECRET="" # Your Polar webhook secret
POLAR_ORGANIZATION_SLUG="" # Your Polar organization slug (can be found under Settings > Organization)
```
**Please do not add the secret keys to the .env file in production.** During development, you can place them in `.env.local` as it's not committed to the repository. In production, you can set them in the environment variables of your hosting provider.
## Create products
For your users to choose from the available subscription plans, you need to create those Products first on the [Products page](https://docs.polar.sh/features/products). You can create as many products as you want.

Polar takes a different approach to product variants. Instead of having one product with multiple pricing options, Polar treats each pricing option as a separate product. This simplifies the user experience and API while giving you full flexibility.
At checkout, customers can choose between different products (like monthly or yearly plans), each with its own pricing and benefits.

You need to make sure that the product ID you set in the configuration matches the ID of the product you created in Polar.
[See configuration](/docs/web/billing/configuration#variants) for more information.
## Create a webhook
To sync the current subscription status, checkout completion, and other information to your database, you need to set up a webhook.
The webhook handling code ships ready to use with TurboStarter; you just have to create the webhook in the Polar dashboard and point it at the URL for your project.
To configure a new webhook, go to the [Webhooks page](https://docs.polar.sh/integrate/webhooks/endpoints) in the Polar settings and click the *Add endpoint* button.

Select the following events:
* For subscriptions:
* `subscription.created`
* `subscription.updated`
* For one-off payments:
* `order.created`
* `order.updated`
You will also have to enter a *Secret*, which you can generate by running the following command in your terminal:
```bash
openssl rand -base64 32
```
Copy the generated string and paste it into the *Secret* field.
You also need to add this secret to your environment variables:
```dotenv title="apps/web/.env.local"
POLAR_WEBHOOK_SECRET=
```
To get the URL for the webhook, you can either use a local development URL or the URL of your deployed app:
### Local development
If you want to test the webhook locally, you can use [ngrok](https://ngrok.com) to create a tunnel to your local machine. Ngrok will then give you a URL that you can use to test the webhook locally.
To do so, install ngrok and run it with the following command (while your TurboStarter web development server is running):
```bash
ngrok http 3000
```

This will give you a URL (see the *Forwarding* output) that you can use to create a webhook in Polar. Just take that URL and append `/api/billing/webhook/polar` to it.
### Production deployment
When going to production, you will need to set the webhook URL and the events you want to listen to in Polar.
The webhook path is `/api/billing/webhook/polar`. If your app is hosted at `https://myapp.com` then you need to enter `https://myapp.com/api/billing/webhook/polar` as the URL.
All the relevant events are automatically handled by TurboStarter, so you don't need to do anything else. If you want to handle more events please check [Webhooks](/docs/web/billing/webhooks) for more information.
## Add discount
You can add a discount for your customers that applies to a specific product.
You can create the discount on the *Discounts* tab of the *Products* page in the Polar dashboard.

There you can set the discount details, such as the products it applies to, the amount off, duration, maximum redemptions, and more.
You also need to add the discount code and its details to the TurboStarter billing configuration so the discount can be displayed in the UI, used when creating checkout sessions, and included in price calculations.
[See discounts configuration](/docs/web/billing/configuration#discounts) for more details.
That's it! 🎉 You have now set up Polar as a billing provider for your app.
Feel free to add more products, prices, and discounts, and manage your customers' data and subscriptions using Polar.
Make sure that the data you set in the configuration matches the details of things you created in Polar.
[See configuration](/docs/web/billing/configuration) for more information.
---
url: /docs/web/billing/stripe
title: Stripe
description: Manage your customers' data and subscriptions using Stripe.
---
[Stripe](https://stripe.com) is the default billing provider for TurboStarter. Here we'll go through the configuration and how to set it up as a provider for your app.
## Get API keys
After you have created your account for [Stripe](https://stripe.com), you will need to get the API key. You can do this by going to the [API page](https://dashboard.stripe.com/apikeys) in the dashboard. Here you will find the *Secret key* and the *Publishable key*. You will need the *Secret key* for the integration to work.
For local development, make sure to create and use a dedicated [Sandbox](https://docs.stripe.com/sandboxes) to not mess with the real transactions.
## Set environment variables
You need to set the following environment variables:
```dotenv title="apps/web/.env.local"
STRIPE_SECRET_KEY="" # Your Stripe secret key
STRIPE_WEBHOOK_SECRET="" # The secret key of the webhook you created (see below)
```
**Please do not add the secret keys to the .env file in production.** During development, you can place them in `.env.local` as it's not committed to the repository. In production, you can set them in the environment variables of your hosting provider.
## Create products
For your users to choose from the available subscription plans, you need to create those Products first on the [Products page](https://dashboard.stripe.com/products). You can create as many products as you want.
Create one product per plan you want to offer. You can add multiple prices within this product to offer multiple models or different billing intervals.

You need to make sure that the variant ID you set in the configuration matches the ID of the price you created in Stripe.
[See configuration](/docs/web/billing/configuration#variants) for more information.
## Create a webhook
To sync the current subscription status, checkout completion, and other information to your database, you need to set up a webhook.
The webhook handling code ships ready to use with TurboStarter; you just have to create the webhook in the Stripe dashboard and point it at the URL for your project.
To configure a new webhook, go to the [Webhooks page](https://dashboard.stripe.com/webhooks) in the Stripe settings and click the Add endpoint button.

Select the following events:
* For subscriptions:
* `customer.subscription.created`
* `customer.subscription.updated`
* `customer.subscription.deleted`
* For one-off payments:
* `checkout.session.completed`
To get the URL for the webhook, you can either use a local development URL or the URL of your deployed app:
### Local development
There are two ways to test the webhook during local development:
The first is the Stripe CLI, which lets you forward Stripe events straight to your own localhost. You can install the CLI in a variety of ways, but we recommend the official method:
[Install the Stripe CLI](https://docs.stripe.com/stripe-cli)
Then - login to your Stripe account using the project you want to run:
```bash
stripe login
```
Copy the webhook secret displayed in the terminal and set it as the `STRIPE_WEBHOOK_SECRET` environment variable in your `apps/web/.env.local` file:
```dotenv title="apps/web/.env.local"
STRIPE_WEBHOOK_SECRET=
```
Now, you can listen to Stripe events running the following command:
```bash
stripe listen --forward-to localhost:3000/api/billing/webhook/stripe
```
This will forward all the Stripe events to your local endpoint.
**If you have not logged in** - the first time you set this up, you are required to sign in. This is a one-time process.
**Please sign in and then re-run the command.** Once signed in, you can listen to Stripe events.
If you're not receiving events, please make sure that:
* the webhook secret is correct
* the account you signed in to is the same as the one you're using in your app
You can even trigger the event manually for testing purposes:
```bash
stripe trigger customer.subscription.created
```
If you want to test the webhook locally, you can use [ngrok](https://ngrok.com) to create a tunnel to your local machine. Ngrok will then give you a URL that you can use to test the webhook locally.
To do so, install ngrok and run it with the following command (while your TurboStarter web development server is running):
```bash
ngrok http 3000
```

This will give you a URL (see the *Forwarding* output) that you can use to create a webhook in Stripe. Just take that URL and append `/api/billing/webhook/stripe` to it.
### Production deployment
When going to production, you will need to set the webhook URL and the events you want to listen to in Stripe.
The webhook path is `/api/billing/webhook/stripe`. If your app is hosted at `https://myapp.com` then you need to enter `https://myapp.com/api/billing/webhook/stripe` as the URL.
All the relevant events are automatically handled by TurboStarter, so you don't need to do anything else. If you want to handle more events please check [Webhooks](/docs/web/billing/webhooks) for more information.
## Configure Stripe Customer Portal
Stripe requires you to set up the Customer Portal so that users can manage their billing information, invoices and plan settings from there.
You can configure it [in the Stripe dashboard](https://dashboard.stripe.com/settings/billing/portal).

Remember to:
1. Ensure that users have the ability to change or upgrade their subscription plans by enabling the relevant option in the Customer Portal settings.
2. Adjust the cancellation settings to suit your application's requirements, such as whether users can cancel immediately or at the end of the billing period.
## Add discount
You can add a discount for your customers that applies to a specific price.
### Create coupon
First, you'd need to create a coupon on the [Coupons page](https://dashboard.stripe.com/coupons).

There you can set the discount details, such as the prices it applies to, the amount off, duration, maximum redemptions, and more.
### Add promotion code
To let customers apply a code during checkout, you need a promotion code. You can define it on the same page as the coupon and give it a user-friendly name.

This code can then be applied to new checkout sessions by passing it as a parameter when the session is created.
### Configure discount
You also need to add the discount code and its details to the TurboStarter billing configuration so the discount can be displayed in the UI, used when creating checkout sessions, and included in price calculations.
[See discounts configuration](/docs/web/billing/configuration#discounts) for more details.
That's it! 🎉 You have now set up Stripe as a billing provider for your app.
Feel free to add more products, prices, and discounts, and manage your customers' data and subscriptions using Stripe.
Make sure that the data you set in the configuration matches the details of things you created in Stripe.
[See configuration](/docs/web/billing/configuration) for more information.
---
url: /docs/web/billing/subscriptions
title: Subscriptions
description: Learn how to manage subscriptions in your application.
---
TurboStarter supports subscription billing (recurring payments) on the web across providers like [Stripe](/docs/web/billing/stripe), [Lemon Squeezy](/docs/web/billing/lemon-squeezy), and [Polar](/docs/web/billing/polar).
Subscriptions are configured in your **billing config** using:
* **plans**: what you sell (Free, Premium, Enterprise, etc.)
* **variants**: how you sell it (monthly, yearly, trials, etc.)
## Configuration
Subscriptions are represented as **variants** with `model: BillingModel.RECURRING`.
```ts title="index.ts"
export const config = billingConfigSchema.parse({
plans: [
{
id: BillingPlan.PREMIUM,
name: "Premium",
description: "Become a power user and gain benefits",
badge: "Bestseller",
features: [
"Unlimited projects",
"Priority support",
"Advanced integrations",
"Team collaboration",
"Analytics dashboard",
],
variants: [
// Monthly
{
id: "price_monthly_or_variant_id",
cost: 1900,
currency: "usd",
model: BillingModel.RECURRING, // [!code highlight]
interval: RecurringInterval.MONTH,
trialDays: 7,
},
// Yearly
{
id: "price_yearly_or_variant_id",
cost: 8900,
currency: "usd",
model: BillingModel.RECURRING, // [!code highlight]
interval: RecurringInterval.YEAR,
trialDays: 7,
},
],
},
],
}) satisfies BillingConfig;
```
Breaking down the fields:
* `id`: **Provider identifier** for this recurring price/variant/product.
* `cost`: Amount in the smallest currency unit (e.g. cents). Used for UI; provider charges the real amount.
* `currency`: Currency code (defaults to `usd`).
* `model`: Must be `BillingModel.RECURRING`.
* `interval`: Required for recurring variants (`RecurringInterval.MONTH`, `RecurringInterval.YEAR`, etc.).
* `trialDays`: Optional trial length in days.
The `variant.id` value must match what your billing provider expects (Stripe price ID, Lemon Squeezy variant ID, Polar product ID, etc.). A mismatch is the **#1 reason** why a checkout can't be created.
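With the config above, you can also derive UI copy like a yearly-savings badge from the `cost` values (plain arithmetic, nothing kit-specific):

```typescript
// How much a yearly variant saves versus paying monthly,
// in smallest currency units and as a rounded percentage.
function yearlySavings(monthlyCost: number, yearlyCost: number) {
  const monthlyTotal = monthlyCost * 12;
  const amount = monthlyTotal - yearlyCost;
  return { amount, percent: Math.round((amount / monthlyTotal) * 100) };
}

yearlySavings(1900, 8900); // { amount: 13900, percent: 61 } → "save $139 (61%)"
```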
## Provider notes
* **Stripe**: `variant.id` should match a Stripe **Price ID** (`price_...`). Webhook events used for subscriptions include `customer.subscription.*`. See [Stripe setup](/docs/web/billing/stripe).
* **Lemon Squeezy**: `variant.id` should match a Lemon Squeezy **Variant ID**. See [Lemon Squeezy setup](/docs/web/billing/lemon-squeezy).
* **Polar**: `variant.id` should match a Polar **Product ID** (Polar treats each “variant” as a separate product). Subscription events include `subscription.created` / `subscription.updated`. See [Polar setup](/docs/web/billing/polar).
---
url: /docs/web/billing/webhooks
title: Webhooks
description: Handle webhooks from your web app's billing provider.
---
TurboStarter handles billing webhooks to update customer data based on events received from the billing provider.
Occasionally, you may need to set up additional webhooks or perform custom actions with webhooks.
In such cases, you can customize the billing webhook handler in the billing router at `packages/api/src/modules/billing/router.ts`.
By default, the webhook handler is configured to be **as straightforward as possible**:
```ts title="router.ts"
import { webhookHandler, provider } from "@workspace/billing-web/server";
export const billingRouter = new Hono().post(`/webhook/${provider}`, (c) =>
webhookHandler(c.req.raw),
);
```
However, you can extend it using the callbacks provided from `@workspace/billing-web` package:
```ts title="router.ts"
import { webhookHandler, provider } from "@workspace/billing-web/server";
export const billingRouter = new Hono().post(`/webhook/${provider}`, (c) =>
webhookHandler(c.req.raw, {
onCheckoutSessionCompleted: (sessionId) => {},
onSubscriptionCreated: (subscriptionId) => {},
onSubscriptionUpdated: (subscriptionId) => {},
onSubscriptionDeleted: (subscriptionId) => {},
onEvent: (rawEvent) => {},
}),
);
```
You can provide one or more of the callbacks to handle the events you are interested in.
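Providers may redeliver the same event, so any side effects you add in these callbacks should be idempotent. A minimal sketch (in-memory for illustration; use a database table in practice) of deduplicating on an event id:

```typescript
// Track processed event ids so redelivered webhooks don't repeat side effects.
const processed = new Set<string>();

// Run `effect` only the first time `eventId` is seen; returns whether it ran.
function handleOnce(eventId: string, effect: () => void): boolean {
  if (processed.has(eventId)) return false; // already handled, skip
  processed.add(eventId);
  effect();
  return true;
}
```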
Web billing webhooks are set up using the same method as [in the mobile app](/docs/mobile/billing/webhooks). Make sure to keep your configurations organized and confirm that events are handled properly for each provider on both platforms.
---
url: /docs/web/cli
title: CLI
description: Start your new project with a single command.
---
To help you get started with TurboStarter **as quickly as possible**, we've developed a [CLI](https://www.npmjs.com/package/@turbostarter/cli) that enables you to create a new project (with all the configuration) in seconds.
The CLI is a set of commands that will help you create a new project, generate code, and manage your project efficiently.
Currently, the following actions are available:
* **Starting a new project** - Generate starter code for your project with all necessary configurations in place (billing, database, emails, etc.)
* **Updating existing project** - Pull the latest upstream changes into your TurboStarter repository
**The CLI is in beta**, and we're actively working on adding more commands and actions.
## Installation
You can run commands without installing globally:
```bash
npx @turbostarter/cli@latest
```
```bash
pnpm dlx @turbostarter/cli@latest
```
```bash
yarn dlx @turbostarter/cli@latest
```
```bash
bunx @turbostarter/cli@latest
```
Or install globally and run:
```bash
npm install -g @turbostarter/cli
turbostarter
```
```bash
pnpm add -g @turbostarter/cli
turbostarter
```
```bash
yarn global add @turbostarter/cli
turbostarter
```
```bash
bun add -g @turbostarter/cli
turbostarter
```
You can also display help or check the installed version using the `--help` or `-v` flags.
### Starting a new project
Use the `new` command to initialize configuration and dependencies for a new project.
```bash
turbostarter new
```
You will be asked a few questions to configure your project:
```bash
✔ All prerequisites satisfied, let's start! 🚀
? What do you want to ship? ›
◉ Web app
◉ Mobile app
◯ Browser extension
? Enter your project name. ›
? Configure all providers now? ›
Yes, configure now (recommended)
No, just let me ship, now!
Creating a new TurboStarter project in ...
✔ Repository successfully pulled!
✔ Git successfully configured!
✔ Dependencies successfully installed!
✔ Services successfully started!
🎉 You can now get started. Open the project and just ship it! 🎉
Problems? https://turbostarter.dev/docs
```
It will create a new project, configure providers, install dependencies and start required services in development mode.
### Updating existing project
Use the `project update` command to pull the latest upstream changes into your TurboStarter repository.
```bash
turbostarter project update
```
Before updating, the CLI validates that:
* You are running the command from a TurboStarter project root
* Your git working tree is clean
* Your `upstream` remote points to `turbostarter/core`
Then it fetches upstream changes and merges `upstream/main` into your current branch. If conflicts occur, it prints the conflicting files with next steps.
---
url: /docs/web/cms/blog
title: Blog
description: Learn how to manage your blog content.
---
TurboStarter comes with a pre-configured blog implementation that allows you to manage your blog content.
## Creating a new blog post
To create a new blog post, create a new directory (its name will be used as the slug of the blog post) with `.mdx` files in the `packages/cms/src/collections/blog/content` directory. Each file in this directory should be named after the locale it belongs to (e.g. `en.mdx`, `es.mdx`, etc.).
The file starts with a [frontmatter](https://mdxjs.com/guides/frontmatter/) block, a YAML-like block that contains metadata about the post. The frontmatter block should be surrounded by three dashes (`---`).
```mdx title="packages/cms/src/collections/blog/content/my-first-blog-post/en.mdx"
---
title: Quick Tips to Improve Your Skills Right Away
description: Whether you're learning a new technical skill or working on personal development, these quick tips can help you improve right away. Learn how to break down your goals, practice consistently, and track your progress using Markdown.
publishedAt: 2023-04-19
tags: [learning, skills, progress]
thumbnail: https://images.unsplash.com/photo-1483639130939-150975af84e5?q=80&w=2370&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D
status: published
---
```
Let's break down the frontmatter fields:
* `title`: The title of the blog post (it will also be used to generate a slug for the blog post)
* `description`: The description of the blog post
* `publishedAt`: The date when the blog post was published
* `tags`: The tags of the blog post
* `thumbnail`: The thumbnail of the blog post
* `status`: The status of the blog post (could be `published` or `draft`)
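To make the mechanics concrete, here's a minimal sketch of how such a frontmatter block is split from the body (real pipelines use a proper YAML parser; this handles only simple `key: value` lines):

```typescript
interface Parsed {
  data: Record<string, string>;
  body: string;
}

// Split a `---`-delimited frontmatter block from the MDX body.
function parseFrontmatter(source: string): Parsed {
  const match = /^---\n([\s\S]*?)\n---\n?/.exec(source);
  if (!match) return { data: {}, body: source };
  const data: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) data[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { data, body: source.slice(match[0].length) };
}
```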
After the frontmatter block, you can add the content of the blog post:
```mdx title="packages/cms/src/collections/blog/content/my-first-blog-post/en.mdx"
# Quick Tips to Improve Your Skills Right Away
Awesome paragraph!
[Link](https://www.turbostarter.dev)
This is a callout component.
...
```
You can consume the content just as described in [Content Collections](/docs/web/cms/content-collections).
## BONUS: Using custom components
As you're using MDX, you can use **any React component** in your blog posts. Just define it as a normal React component and pass it through the `components` prop:
```tsx title="apps/web/src/app/content/page.tsx"
import { MyComponent } from "~/modules/common/my-component";
export default function Page() {
return (
// Render your MDX content here, passing the component via the `components`
// prop, e.g. <Content components={{ MyComponent }} /> (the renderer name
// depends on your setup).
null
);
}
```
Then you'll be able to use it in your document content, and it will be rendered on the page:
```mdx title="packages/cms/src/collections/blog/content/my-first-blog-post/en.mdx"
...
# Heading
Excellent paragraph!
1. First item
2. Second item
3. Third item
```
TurboStarter ships with a set of default components that you can use in your blog posts out of the box. Use them or define your own to make your posts more engaging.
---
url: /docs/web/cms/content-collections
title: Content Collections
description: Get started with Content Collections.
---
By default, TurboStarter uses [Content Collections](https://www.content-collections.dev/) to store and retrieve content from the MDX files.
Content from there is used to populate data in the following places:
* **Blog**
* **Legal pages**
* **Documentation**
Based on MDX (a more powerful flavor of Markdown), it's a great alternative to headless CMSes like Contentful or Prismic. It's free, open source, and the content lives right in your repository.
Of course, you can add more collections and views, as it's very flexible.
## Defining new collection
To define a new collection, you need to create a new file in the `packages/cms/src/collections` directory:
```ts title="packages/cms/src/collections/legal/index.ts"
import { defineCollection } from "@content-collections/core";
// `transformMDX` (used below) compiles the MDX body; import it from your MDX
// helper - e.g. `@content-collections/mdx` - depending on your setup.
export const legal = defineCollection({
name: "legal",
directory: "src/collections/legal/content",
include: "**/*.mdx",
schema: (z) => ({
title: z.string(),
description: z.string(),
}),
transform: async (doc, context) => {
const mdx = await transformMDX(doc, context);
return {
...mdx,
slug: doc._meta.directory,
locale: doc._meta.fileName.split(".")[0],
};
},
});
```
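The `transform` step above derives `slug` and `locale` from the file's metadata. Assuming the `directory/locale.mdx` layout used by the blog collection, the derivation is simply:

```typescript
// Mirrors what the transform reads from doc._meta: the directory becomes the
// slug, and the file name (e.g. "en.mdx") yields the locale.
function deriveMeta(directory: string, fileName: string) {
  return { slug: directory, locale: fileName.split(".")[0] };
}

deriveMeta("my-first-blog-post", "en.mdx"); // { slug: "my-first-blog-post", locale: "en" }
```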
Then it's passed to the config in `packages/cms/content-collections.ts` file which is used to generate types and parse content from MDX files.
```tsx title="packages/cms/content-collections.ts"
import { defineConfig } from "@content-collections/core";
import { legal } from "./src/collections/legal";
export default defineConfig({
collections: [legal],
});
```
When you run a development server, content collections will be automatically rebuilt (in `.content-collections` directory) and you will be able to import the content and metadata of each file in your application.
By exporting the generated content, you get a fully type-safe API to interact with the content, so you have type safety on the data you receive from the MDX files.
## Using content collections
To get some content from `@workspace/cms` package, you need to use the exposed API that we described in the [Overview section](/docs/web/cms/overview#api):
```tsx title="apps/web/src/app/[locale](marketing)/legal/[slug]/page.tsx"
import { CollectionType, getContentItemBySlug } from "@workspace/cms";

export default async function Page({
  params,
}: {
  params: Promise<{ slug: string; locale: string }>;
}) {
  const item = getContentItemBySlug({
    collection: CollectionType.LEGAL,
    slug: (await params).slug,
    locale: (await params).locale,
  });

  return <h1>{item?.title}</h1>;
}
```
Voila! You can now access the content from the MDX files.
---
url: /docs/web/cms/overview
title: Overview
description: Manage your content in TurboStarter.
---
TurboStarter implements a CMS interface that abstracts the implementation from where you store your data. It provides a simple API to interact with your data, and it's easy to extend and customize.
By default, the starter kit ships with these implementations in place:
1. [Content Collections](https://www.content-collections.dev/) - a headless CMS that uses [MDX](https://mdxjs.com/) files to store your content.
The implementation is available under `@workspace/cms` package, here we'll go over how to use it.
## API
The CMS package provides a simple, unified API to interact with the content. It's the same for all the providers, so you can easily use it with any of the implementations without changing the code.
### Fetching content items
To fetch items from your collections, you can use the `getContentItems` function.
```ts
import {
  CollectionType,
  ContentStatus,
  ContentTag,
  getContentItems,
  SortOrder,
} from "@workspace/cms";
const { items, count } = getContentItems({
collection: CollectionType.BLOG,
tags: [ContentTag.SKILLS],
sortBy: "publishedAt",
sortOrder: SortOrder.DESCENDING,
status: ContentStatus.PUBLISHED,
locale: "en",
});
```
It accepts an object with the following properties:
* `collection`: The collection to fetch the items from.
* `tags`: The tags to filter the items by.
* `sortBy`: The field to sort the items by.
* `sortOrder`: The order to sort the items in.
* `status`: The status of the items to fetch. It can be `published` or `draft`. By default, only `published` items are fetched.
* `locale`: The locale to fetch the items in. By default, all locales are fetched.
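Conceptually, the filtering and sorting described above can be sketched in plain TypeScript. This is a simplified illustration, not the actual `getContentItems` implementation, which works on the generated collections:

```typescript
// Simplified, self-contained sketch of the filtering described above.
interface Item {
  slug: string;
  tags: string[];
  status: "published" | "draft";
  locale: string;
  publishedAt: string;
}

function queryItems(
  items: Item[],
  opts: { tags?: string[]; status?: "published" | "draft"; locale?: string },
) {
  const status = opts.status ?? "published"; // only published by default
  const filtered = items
    .filter((item) => item.status === status)
    .filter((item) => !opts.locale || item.locale === opts.locale)
    .filter((item) => !opts.tags || opts.tags.some((t) => item.tags.includes(t)))
    .sort((a, b) => b.publishedAt.localeCompare(a.publishedAt)); // newest first
  return { items: filtered, count: filtered.length };
}

const posts: Item[] = [
  { slug: "a", tags: ["skills"], status: "published", locale: "en", publishedAt: "2024-01-01" },
  { slug: "b", tags: ["news"], status: "draft", locale: "en", publishedAt: "2024-02-01" },
];

console.log(queryItems(posts, { tags: ["skills"], locale: "en" }).count); // 1
```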
### Fetching a single content item
To fetch a single content item, you can use the `getContentItemBySlug` function.
```ts
import {
  CollectionType,
  ContentStatus,
  getContentItemBySlug,
} from "@workspace/cms";
const item = getContentItemBySlug({
collection: CollectionType.BLOG,
slug: "my-first-blog-post",
status: ContentStatus.PUBLISHED,
locale: "en",
});
```
It accepts an object with the following properties:
* `collection`: The collection to fetch the item from.
* `slug`: The slug of the item to fetch.
* `status`: The status of the item to fetch. It can be `published` or `draft`. By default, only `published` items are fetched.
* `locale`: The locale to fetch the item in. By default, all locales are fetched.
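The lookup behavior described above can likewise be sketched in plain TypeScript. Again, this is only an illustrative stand-in for the real `getContentItemBySlug`:

```typescript
// Simplified sketch of the single-item lookup described above: match by
// slug, defaulting to published items only, optionally scoped to a locale.
interface Entry {
  slug: string;
  locale: string;
  status: "published" | "draft";
  title: string;
}

function findBySlug(
  entries: Entry[],
  opts: { slug: string; locale?: string; status?: "published" | "draft" },
) {
  const status = opts.status ?? "published";
  return (
    entries.find(
      (entry) =>
        entry.slug === opts.slug &&
        entry.status === status &&
        (!opts.locale || entry.locale === opts.locale),
    ) ?? null
  );
}

const entries: Entry[] = [
  { slug: "my-first-blog-post", locale: "en", status: "published", title: "Hello" },
];

console.log(findBySlug(entries, { slug: "my-first-blog-post" })?.title); // "Hello"
```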
---
url: /docs/web/configuration/app
title: App configuration
description: Learn how to setup the overall settings of your app.
---
The application configuration is set at `apps/web/src/config/app.ts`. This configuration stores some overall variables for your application.
This allows you to host multiple apps in the same monorepo, as every application defines its own configuration.
The recommendation is to **not update this directly** - instead, please define the environment variables and override the default behavior. The configuration is strongly typed so you can use it safely across your codebase - it'll be validated at build time.
```ts title="apps/web/src/config/app.ts"
import env from "env.config";
export const appConfig = {
name: env.NEXT_PUBLIC_PRODUCT_NAME,
url: env.NEXT_PUBLIC_URL,
locale: env.NEXT_PUBLIC_DEFAULT_LOCALE,
theme: {
mode: env.NEXT_PUBLIC_THEME_MODE,
color: env.NEXT_PUBLIC_THEME_COLOR,
},
} as const;
```
For example, to set the product name and default locale, you'd update the following variables:
```dotenv title=".env.local"
NEXT_PUBLIC_PRODUCT_NAME="TurboStarter"
NEXT_PUBLIC_DEFAULT_LOCALE="en"
```
Do NOT use `process.env` to get the values of the variables. Variables
accessed this way are not validated at build time, and thus the wrong variable
can be used in production.
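To illustrate why validated configuration beats raw `process.env` access, here's a minimal sketch. The `requireEnv` helper is hypothetical; the starter kit uses its own `env.config` module for this:

```typescript
// Hypothetical sketch: fail fast on missing variables instead of
// silently reading `undefined` from process.env at runtime.
function requireEnv(
  name: string,
  source: Record<string, string | undefined>,
): string {
  const value = source[name];
  if (!value) {
    throw new Error(`Missing environment variable: ${name}`);
  }
  return value;
}

const source = { NEXT_PUBLIC_PRODUCT_NAME: "TurboStarter" };

const productName = requireEnv("NEXT_PUBLIC_PRODUCT_NAME", source);
console.log(productName); // "TurboStarter"
```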
---
url: /docs/web/configuration/environment-variables
title: Environment variables
description: Learn how to configure environment variables.
---
Environment variables are defined in the `.env` file in the root of the repository and in the root of the `apps/web` package.
* **Shared environment variables**: Defined in the **root** `.env` file. These are shared between environments (e.g., development, staging, production) and apps (e.g., web, mobile).
* **Environment-specific variables**: Defined in `.env.development` and `.env.production` files. These are specific to the development and production environments.
* **App-specific variables**: Defined in the app-specific directory (e.g., `apps/web`). These are specific to the app and are not shared between apps.
* **Secret keys**: Not stored in the `.env` file. Instead, they are stored in the environment variables of the CI/CD system.
* **Local secret keys**: If you need to use secret keys locally, you can store them in the `.env.local` file. This file is not committed to Git, making it safe for sensitive information.
## Shared variables
Here you can add all the environment variables that are shared across all the apps. This file should be located in the **root** of the project.
To override these variables in a specific environment, please add them to the specific environment file (e.g. `.env.development`, `.env.production`).
```dotenv title=".env.local"
# Shared environment variables
# The database URL is used to connect to your database.
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres"
# The name of the product. This is used in various places across the apps.
PRODUCT_NAME="TurboStarter"
# The url of the web app. Used mostly to link between apps.
URL="http://localhost:3000"
...
```
If you're using Supabase for your database, the [Supabase recipe](/docs/web/recipes/supabase#configure-environment-variables) shows the exact `DATABASE_URL` format and how to set it in your `.env.local`.
## App-specific variables
Here you can add all the environment variables that are specific to the app (e.g. `apps/web`).
You can also override the shared variables defined in the root `.env` file.
```dotenv title="apps/web/.env.local"
# App-specific environment variables
# Env variables extracted from shared to be exposed to the client in Next.js app
NEXT_PUBLIC_PRODUCT_NAME="${PRODUCT_NAME}"
NEXT_PUBLIC_URL="${URL}"
NEXT_PUBLIC_DEFAULT_LOCALE="${DEFAULT_LOCALE}"
# Theme mode and color
NEXT_PUBLIC_THEME_MODE="system"
NEXT_PUBLIC_THEME_COLOR="orange"
...
```
For example, server-only app-specific variables in `apps/web/.env.local` often include third-party integration keys that should never be exposed with `NEXT_PUBLIC_`. In the AI starter, that can include provider keys such as:
```dotenv title="apps/web/.env.local"
OPENAI_API_KEY=""
ANTHROPIC_API_KEY=""
BRAVE_SEARCH_API_KEY=""
EXA_API_KEY=""
FIRECRAWL_API_KEY=""
TAVILY_API_KEY=""
```
To make environment variables available in the Next.js **client-side** app code, you need to prefix them with `NEXT_PUBLIC_`. They will be injected into the code during the build process.
Only environment variables prefixed with `NEXT_PUBLIC_` will be injected, so don't use this prefix for environment variables that should be used only in the server-side code.
[Read more about Next.js environment variables.](https://nextjs.org/docs/pages/building-your-application/configuring/environment-variables)
## Secret keys
Secret keys and sensitive information must never be stored in the `.env` file. Instead, **they are stored in the environment variables of the CI/CD system.**
It means that you will need to add the secret keys to the environment
variables of your CI/CD system (e.g., GitHub Actions, Vercel, Cloudflare, your
VPS, Netlify, etc.). This is not a TurboStarter-specific requirement, but a
best practice for security for any application. Ultimately, it's your choice.
Below are some examples of what counts as a secret key in practice.
```dotenv title=".env.local"
# Secret keys
# The database URL is used to connect to your database.
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres"
# Stripe server config - required only if you use Stripe as a billing provider
STRIPE_WEBHOOK_SECRET=""
STRIPE_SECRET_KEY=""
# Lemon Squeezy server config - required only if you use Lemon Squeezy as a billing provider
LEMON_SQUEEZY_API_KEY=""
LEMON_SQUEEZY_SIGNING_SECRET=""
LEMON_SQUEEZY_STORE_ID=""
...
```
If you need to use secret keys locally, you can store them in the `.env.local`
file. This file is not committed to Git, therefore it is safe to store
sensitive information in it.
---
url: /docs/web/configuration/paths
title: Paths configuration
description: Learn how to configure the paths of your app.
---
The paths configuration is set at `apps/web/config/paths.ts`. This configuration stores all the paths that you'll be using in your application. It is a convenient way to store them in a central place rather than scatter them in the codebase using magic strings.
It is **unlikely you'll need to change** this unless you're heavily editing the codebase.
```ts title="apps/web/config/paths.ts"
const pathsConfig = {
index: "/",
marketing: {
pricing: "/pricing",
contact: "/contact",
blog: {
index: BLOG_PREFIX,
post: (slug: string) => `${BLOG_PREFIX}/${slug}`,
},
legal: (slug: string) => `${LEGAL_PREFIX}/${slug}`,
},
auth: {
login: `${AUTH_PREFIX}/login`,
register: `${AUTH_PREFIX}/register`,
join: `${AUTH_PREFIX}/join`,
forgotPassword: `${AUTH_PREFIX}/password/forgot`,
updatePassword: `${AUTH_PREFIX}/password/update`,
error: `${AUTH_PREFIX}/error`,
},
dashboard: {
user: {
index: DASHBOARD_PREFIX,
ai: `${DASHBOARD_PREFIX}/ai`,
settings: {
index: `${DASHBOARD_PREFIX}/settings`,
security: `${DASHBOARD_PREFIX}/settings/security`,
billing: `${DASHBOARD_PREFIX}/settings/billing`,
},
},
...
},
...,
} as const;
```
By declaring the paths as constants, we can use them safely throughout the
codebase. There is no risk of misspelling or using magic strings.
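A self-contained sketch of the pattern (simplified from the config above) shows how typed constants and small path builders replace magic strings:

```typescript
// Simplified extract of the pattern used in paths.ts: prefix constants plus
// small builder functions, frozen with `as const` for exact literal types.
const BLOG_PREFIX = "/blog";
const AUTH_PREFIX = "/auth";

const paths = {
  marketing: {
    blog: {
      index: BLOG_PREFIX,
      post: (slug: string) => `${BLOG_PREFIX}/${slug}`,
    },
  },
  auth: {
    login: `${AUTH_PREFIX}/login`,
  },
} as const;

console.log(paths.marketing.blog.post("hello-world")); // "/blog/hello-world"
console.log(paths.auth.login); // "/auth/login"
```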
---
url: /docs/web/customization/add-app
title: Adding apps
description: Learn how to add apps to your Turborepo workspace.
---
This is an **advanced topic** - you should only follow these instructions if you are sure you want to add a new app to your TurboStarter project within your monorepo and want to keep pulling updates from the TurboStarter repository.
In some ways - creating a new repository may be the easiest way to manage your application. However, if you want to keep your application within the monorepo and pull updates from the TurboStarter repository, you can follow these instructions.
To pull updates into a separate application outside of `web` - we can use [git subtree](https://www.atlassian.com/git/tutorials/git-subtree).
Basically, we will create a subtree at `apps/web` and create a new remote branch for the subtree. When we create a new application, we will pull the subtree into the new application. This allows us to keep it in sync with the `apps/web` folder.
To add a new app to your TurboStarter project, you need to follow these steps:
## Create a subtree
First, we need to create a subtree for the `apps/web` folder. We will create a branch named `web-branch` and create a subtree for the `apps/web` folder.
```bash
git subtree split --prefix=apps/web --branch web-branch
```
## Create a new app
Now, we can create a new application in the `apps` folder.
Let's say we want to create a new app `ai-chat` at `apps/ai-chat` with the same structure as the `apps/web` folder (which acts as the template for all new apps).
```bash
git subtree add --prefix=apps/ai-chat origin web-branch --squash
```
You should now be able to see the `apps/ai-chat` folder with the contents of the `apps/web` folder.
## Update the app
When you want to update the new application, follow these steps:
### Pull the latest updates from the TurboStarter repository
The command below will pull all the latest changes from the TurboStarter repository:
```bash
git pull upstream main
```
### Push the `web-branch` updates
After you have pulled the updates from the TurboStarter repository, you can split the branch again and push the updates to the web-branch:
```bash
git subtree split --prefix=apps/web --branch web-branch
```
Now, you can push the updates to the `web-branch`:
```bash
git push origin web-branch
```
### Pull the updates to the new application
Now, you can pull the updates to the new application:
```bash
git subtree pull --prefix=apps/ai-chat origin web-branch --squash
```
That's it! You now have a new application in the monorepo 🎉
---
url: /docs/web/customization/add-package
title: Adding packages
description: Learn how to add packages to your Turborepo workspace.
---
This is an **advanced topic** - you should only follow these instructions if you are sure you want to add a new package to your TurboStarter application instead of adding a folder to your application in `apps/web` or modify existing packages under `packages`. You don't need to do this to add a new page or component to your application.
To add a new package to your TurboStarter application, you need to follow these steps:
## Generate a new package
First, enter the command below to create a new package in your TurboStarter application:
```bash
turbo gen package
```
Turborepo will ask you to enter the name of the package you want to create. Enter the name of the package you want to create and press enter.
If you don't want to add dependencies to your package, you can skip this step by pressing enter.
The command will generate a new package under `packages`. If you named it `example`, the package will be named `@workspace/example`.
Finally, to make fast refresh work when you make changes to the package, you need to add the package to the `next.config.ts` file in the root of your TurboStarter application `apps/web`.
```ts title="next.config.ts"
const INTERNAL_PACKAGES = [
// all internal packages,
"@workspace/example",
];
```
## Export a module from your package
By default, the package exports a single module using the `index.ts` file. You can add more exports by creating new files in the package directory and re-exporting them from `index.ts`, or by adding separate entry files to the `exports` field in the `package.json` file.
### From `index.ts` file
The easiest way to export a module from a package is to create a new file in the package directory and export it from the `index.ts` file.
```ts title="packages/example/src/module.ts"
export function example() {
return "example";
}
```
Then, export the module from the `index.ts` file.
```ts title="packages/example/src/index.ts"
export * from "./module";
```
### From `exports` field in `package.json`
**This can be very useful for tree-shaking.** Assuming you have a file named `module.ts` in the package directory, you can export it by adding it to the `exports` field in the `package.json` file.
```json title="packages/example/package.json"
{
"exports": {
".": "./src/index.ts",
"./module": "./src/module.ts"
}
}
```
**When to do this?**
1. when exporting two modules that don't share dependencies, to ensure better tree-shaking. For example, if your package exports both client and server modules.
2. for better organization of your package
For example, create two exports `client` and `server` in the package directory and add them to the `exports` field in the `package.json` file.
```json title="packages/example/package.json"
{
"exports": {
".": "./src/index.ts",
"./client": "./src/client.ts",
"./server": "./src/server.ts"
}
}
```
1. The `client` module can be imported using `import { client } from '@workspace/example/client'`
2. The `server` module can be imported using `import { server } from '@workspace/example/server'`
## Use the package in your application
You can now use the package in your application by importing it using the package name:
```ts title="apps/web/src/app/page.tsx"
import { example } from "@workspace/example";
console.log(example());
```
Et voilà! You have successfully added a new package to your TurboStarter application. 🎉
---
url: /docs/web/customization/components
title: Components
description: Manage and customize your app components.
---
For the components part, we're using [shadcn/ui](https://ui.shadcn.com) for atomic, accessible and highly customizable components.
shadcn/ui is a powerful tool that allows you to generate pre-designed
components with a single command. It's built with Tailwind CSS and Base UI,
and it's highly customizable.
TurboStarter defines two packages that are responsible for the UI part of your app:
* `@workspace/ui` - shared styles, [themes](/docs/web/customization/styling#themes) and assets (e.g. icons)
* `@workspace/ui-web` - pre-built UI web components, ready to use in your app
## Adding a new component
There are basically two ways to add a new component:
TurboStarter is fully compatible with [shadcn CLI](https://ui.shadcn.com/docs/cli), so you can generate new components with single command.
Run the following command from the **root** of your project:
```bash
pnpm --filter @workspace/ui-web ui:add
```
This will launch an interactive command-line interface to guide you through the process of adding a new component where you can pick which component you want to add.
```bash
Which components would you like to add? > Space to select. A to toggle all.
Enter to submit.
◯ accordion
◯ alert
◯ alert-dialog
◯ aspect-ratio
◯ avatar
◯ badge
◯ button
◯ calendar
◯ card
◯ checkbox
```
Newly created components will appear in the `packages/ui/web/src` directory.
You can always copy-paste a component from the [shadcn/ui](https://ui.shadcn.com/docs/components) website and modify it to your needs.
This is possible because the components are headless and, in most cases, don't need any additional dependencies.
Copy code from the website, create a new file in the `packages/ui/web/src` directory and paste the code into the file.
Keep in mind that you should always try to keep shared components as atomic as possible. This will make it easier to reuse them and to build specific views by composition.
E.g. include components like `Button`, `Input`, `Card`, `Dialog` in shared package, but keep specific components like `LoginForm` in your app directory.
## Using components
Each component is a standalone entity which has a separate export from the package. It helps to keep things modular, avoid unnecessary dependencies and make tree-shaking possible.
To import a component from the UI package, use the following syntax:
```tsx title="components/my-component.tsx"
// [!code word:card]
import {
Card,
CardContent,
CardHeader,
CardFooter,
CardTitle,
CardDescription,
} from "@workspace/ui-web/card";
```
Then you can use it to build a component specific to your app:
```tsx title="components/my-component.tsx"
export function MyComponent() {
  return (
    <Card>
      <CardHeader>
        <CardTitle>My Component</CardTitle>
      </CardHeader>
      <CardContent>My Component Content</CardContent>
    </Card>
  );
}
```
We recommend using [v0](https://v0.dev) to generate layouts for your app. It's a powerful tool that allows you to generate layouts from natural language instructions.
Of course, **it won't replace a designer**, but it can be a good starting point for your layout.
---
url: /docs/web/customization/styling
title: Styling
description: Get started with styling your app.
---
To build the web user interface, TurboStarter comes with [Tailwind CSS](https://tailwindcss.com/) and [Base UI](https://base-ui.com) pre-configured.
The combination of Tailwind CSS and Base UI gives ready-to-use, accessible UI components that can be fully customized to match your brand's design.
## Tailwind configuration
In the `packages/ui/shared/src/styles` directory, you will find shared CSS files with Tailwind CSS configuration. To change global styles, you can edit the files in this folder.
Here is an example of a shared CSS file that includes the Tailwind CSS configuration:
```css title="packages/ui/shared/src/styles/globals.css"
@import "tailwindcss";
@import "./themes.css";
@custom-variant dark (&:is(.dark *));
:root {
--radius: 0.65rem;
}
@theme inline {
--color-background: var(--background);
--color-foreground: var(--foreground);
--color-card: var(--card);
--color-card-foreground: var(--card-foreground);
--color-popover: var(--popover);
--color-popover-foreground: var(--popover-foreground);
--color-primary: var(--primary);
--color-primary-foreground: var(--primary-foreground);
--color-secondary: var(--secondary);
--color-secondary-foreground: var(--secondary-foreground);
--color-muted: var(--muted);
--color-muted-foreground: var(--muted-foreground);
...
}
```
For colors, we rely strictly on [CSS Variables](https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_custom_properties) in [OKLCH](https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/oklch) format to allow for easy theme management without the need for any JavaScript.
Also, each app has its own `globals.css` file, which extends the shared config and allows you to override the global styles.
Here is an example of an app's `globals.css` file:
```css title="apps/web/src/assets/styles/globals.css"
@import "@workspace/ui-web/globals.css";
@theme inline {
/* Overridden theme variables for the app */
--background: oklch(0.98 0.01 80);
--foreground: oklch(0.22 0.03 120);
--card: oklch(0.97 0.02 50);
--card-foreground: oklch(0.18 0.01 280);
...
}
```
This way, we maintain a separation of concerns and a clear structure for the Tailwind CSS configuration.
## Themes
TurboStarter comes with **9+** predefined themes, which you can use to quickly change the look and feel of your app.
They're defined in the `packages/ui/shared/src/styles/themes` directory. Each theme is a set of variables that can be overridden:
```ts title="packages/ui/shared/src/styles/themes/colors/orange.ts"
export const orange = {
light: {
background: [1, 0, 0],
foreground: [0.141, 0.005, 285.823],
card: [1, 0, 0],
"card-foreground": [0.141, 0.005, 285.823],
...
}
} satisfies ThemeColors;
```
Each variable is stored as a [OKLCH](https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/oklch) array, which is then converted to a CSS variable at build time (by our custom build script). That way we can ensure full type-safety and reuse themes across different parts of our apps (e.g. use the same theme in emails).
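As an illustration of that conversion step, here is a minimal sketch. The `toCssVariable` helper is hypothetical, standing in for what the custom build script does per theme entry:

```typescript
// Hypothetical sketch of the theme build step: turn an OKLCH tuple
// into a CSS custom property declaration.
type OklchTuple = [number, number, number];

function toCssVariable(name: string, [l, c, h]: OklchTuple): string {
  return `--${name}: oklch(${l} ${c} ${h});`;
}

console.log(toCssVariable("background", [1, 0, 0]));
// → "--background: oklch(1 0 0);"
```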
Feel free to add your own themes or override the existing ones to match your brand's identity.
To apply a theme to your app, you can use the `data-theme` attribute on the `html` element:
```tsx title="apps/web/src/app/layout.tsx"
export default function RootLayout({
children,
}: {
children: React.ReactNode;
}) {
  return (
    <html lang="en" data-theme="orange">
      <body>{children}</body>
    </html>
  );
}
```
## Dark mode
TurboStarter comes with built-in dark mode support.
Each theme has a corresponding set of dark mode variables, which are used to switch the theme to its dark mode counterpart.
```ts title="packages/ui/shared/src/styles/themes/colors/orange.ts"
export const orange = {
light: {},
dark: {
background: [0.141, 0.005, 285.823],
foreground: [0.985, 0, 0],
card: [0.21, 0.006, 285.885],
"card-foreground": [0.985, 0, 0],
...
}
} satisfies ThemeColors;
```
Because the dark variant is defined to use a class (`@custom-variant dark (&:is(.dark *))`) in the shared Tailwind configuration, we need to add the `dark` class to the `html` element to apply dark mode styles.
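The mechanism itself is simple and can be sketched without any DOM APIs (illustrative only; in the app this class toggling is handled for you):

```typescript
// Illustrative sketch: dark mode styles apply whenever the root element's
// class list contains "dark", so switching modes is just toggling that class.
function applyMode(classes: Set<string>, mode: "light" | "dark"): Set<string> {
  classes.delete("dark");
  if (mode === "dark") classes.add("dark");
  return classes;
}

const root = new Set<string>(["antialiased"]);
console.log([...applyMode(root, "dark")]); // ["antialiased", "dark"]
```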
For this purpose, we're using the [next-themes](https://github.com/pacocoursey/next-themes) package under the hood to handle user preference management.
```tsx title="apps/web/src/providers/theme.tsx"
import { memo } from "react";
import { ThemeProvider as NextThemesProvider } from "next-themes";

export const ThemeProvider = memo(({ children }) => {
  return (
    <NextThemesProvider attribute="class">{children}</NextThemesProvider>
  );
});
```
You can also define the default theme mode and color in the [app configuration](/docs/web/configuration/app).
---
url: /docs/web/database/client
title: Database client
description: Use database client to interact with the database.
---
The database client is an export of the Drizzle client. It is automatically typed by Drizzle based on the schema and is exposed as the `db` object from the database package (`@workspace/db`) in the monorepo.
This guide covers how to initialize the client and also basic operations, such as querying, creating, updating, and deleting records. To learn more about the Drizzle client, check out the [official documentation](https://orm.drizzle.team/kit-docs/overview).
## Initializing the client
Pass the validated `DATABASE_URL` to the client to initialize it.
```ts title="server.ts"
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";
import { env } from "../env";
const client = postgres(env.DATABASE_URL);
export const db = drizzle(client);
```
Now it's exported from the `@workspace/db` package and can be used across the codebase (server-side).
## Querying data
To query data, you can use the `db` object and its methods:
```ts title="query.ts"
import { eq } from "@workspace/db";
import { db } from "@workspace/db/server";
import { customer } from "@workspace/db/schema";
export const getCustomerByUserId = async (userId: string) => {
const [data] = await db
.select()
.from(customer)
.where(eq(customer.userId, userId));
return data ?? null;
};
```
## Mutating data
You can use the exported utilities to mutate data. Insert, update, or delete records in a fast and fully type-safe way:
```ts title="mutation.ts"
import { eq } from "@workspace/db";
import { db } from "@workspace/db/server";
import { customer, type InsertCustomer } from "@workspace/db/schema";
export const upsertCustomer = (data: InsertCustomer) => {
return db.insert(customer).values(data).onConflictDoUpdate({
target: customer.userId,
set: data,
});
};
```
---
url: /docs/web/database/migrations
title: Migrations
description: Migrate your changes to the database.
---
You have your schema in place, and you want to apply your changes to the database. TurboStarter provides you a convenient way to do so with pre-configured CLI commands.
## Generating migration
To generate a migration from the schema, you need to run the following command:
```bash
pnpm with-env turbo db:generate
```
This will create a new `.sql` file in the `migrations` directory.
Drizzle will also generate a `.json` representation of the migration in the `meta` directory, but it's for its internal purposes and you shouldn't need to touch it.
## Applying migrations
To apply the migrations to the database, you need to run the following command:
```bash
pnpm with-env pnpm --filter @workspace/db db:migrate
```
This will apply all the migrations that have not been applied yet. If any conflicts arise, you can resolve them by modifying the generated migration file.
## Pushing changes
To push changes directly to the database, you can use the following command:
```bash
pnpm with-env pnpm --filter @workspace/db db:push
```
This lets you push your schema changes directly to the database and omit managing SQL migration files.
Pushing changes directly to the database (without using migrations) could be risky. Please be careful when using it; we recommend it only for local development and local databases.
[Read more about it in the Drizzle docs](https://orm.drizzle.team/kit-docs/overview#prototyping-with-db-push).
---
url: /docs/web/database/overview
title: Overview
description: Get started with the database.
---
We're using [Drizzle ORM](https://orm.drizzle.team) to interact with the database. It adds a thin layer of abstraction between our code and the database.
> If you know SQL, you know Drizzle.
For the database we're leveraging [PostgreSQL](https://www.postgresql.org), but you could use any other database that Drizzle ORM supports (basically any SQL database e.g. [MySQL](https://orm.drizzle.team/docs/get-started-mysql), [SQLite](https://orm.drizzle.team/docs/get-started-sqlite), etc.).
Drizzle ORM is a powerful tool that allows you to interact with the database in a type-safe manner. It ships with **0** (!) dependencies and is designed to be fast and easy to use.
## Setup
To start interacting with the database you first need to ensure that your database service instance is up and running.
For local development we recommend using the [Docker](https://hub.docker.com/_/postgres) container.
You can start the container with the following command:
```bash
pnpm services:setup
```
This will start all the services (including the database container) and initialize the database with the latest schema.
**Where is DATABASE\_URL?**
`DATABASE_URL` is a connection string that is used to connect to the database. When the command finishes, it will be displayed in the console and set in your environment variables.
You can also use a cloud instance of database (e.g. [Supabase](/docs/web/recipes/supabase), [Neon](https://neon.tech/), [Turso](https://turso.tech/), etc.), although it's not recommended for local development.
If you choose Supabase as your provider, follow the [Supabase recipe](/docs/web/recipes/supabase#configure-environment-variables) for details on configuring `DATABASE_URL` and running migrations.
**Where is DATABASE\_URL?**
It's available in your provider's project dashboard. You'll need to copy the connection string from there and add it to your `.env.local` file. The format will look something like:
* Neon: `postgresql://user:password@ep-xyz-123.region.aws.neon.tech/dbname`
* Turso: `libsql://your-db-xyz.turso.io`
Make sure to keep this URL secure and never commit it to version control.
Then, you need to set `DATABASE_URL` environment variable in **root** `.env.local` file.
```dotenv title=".env.local"
# The database URL is used to connect to your database.
DATABASE_URL="postgresql://postgres:postgres@127.0.0.1:54322/postgres"
```
You're ready to go! 🥳
## Studio
TurboStarter also provides you with Studio, an interactive UI where you can explore your database and test queries.
To run the Studio, you can use the following command:
```bash
pnpm with-env pnpm --filter @workspace/db db:studio
```
This will start the Studio on [https://local.drizzle.studio](https://local.drizzle.studio).

## Next steps
* [Update schema](/docs/web/database/schema) - learn about schema and how to update it.
* [Generate & run migrations](/docs/web/database/migrations) - migrate your changes to the database.
* [Initialize client](/docs/web/database/client) - initialize the database client and start interacting with the database.
---
url: /docs/web/database/schema
title: Schema
description: Learn about the database schema.
---
Creating a schema for your data is one of the primary tasks when building a new application.
You can find the schema of each table in `packages/db/src/schema` directory. The schema is organized by domain and each file groups related tables together (e.g. `billing.ts` contains the `customer`, `order` and `subscription` tables).
## Defining schema
The schema is defined using SQL-like utilities from [drizzle-orm](https://orm.drizzle.team/docs/sql-schema-declaration).
It supports all the SQL features, such as enums, indexes, foreign keys, extensions and more.
We're relying on the [code-first approach](https://orm.drizzle.team/docs/migrations), where we define the schema in code and then generate the SQL from it. That way we achieve full type safety and the simplest flow for database updates and migrations.
## Example
Let's take a look at the `subscription` table, where we store information about our customers' subscriptions.
```typescript title="billing.ts"
import { pgTable, text, timestamp, unique } from "drizzle-orm/pg-core";

// `customer`, `generateId` and `subscriptionStatusEnum` are defined
// elsewhere in the schema package.
export const subscription = pgTable(
"subscription",
{
id: text().primaryKey().$defaultFn(generateId),
customerId: text()
.references(() => customer.id, {
onDelete: "cascade",
})
.notNull(),
externalId: text().notNull(),
variantId: text().notNull(),
status: subscriptionStatusEnum().notNull(),
store: text().notNull(),
periodStartsAt: timestamp().notNull(),
periodEndsAt: timestamp().notNull(),
trialStartsAt: timestamp(),
trialEndsAt: timestamp(),
createdAt: timestamp().notNull().defaultNow(),
updatedAt: timestamp()
.notNull()
.$onUpdate(() => new Date()),
},
(t) => [unique().on(t.externalId, t.store)],
);
```
We're using a few native SQL utilities here, such as:
* `pgTable` - defines a Postgres table.
* `text` and `timestamp` - column types.
* `primaryKey` - marks the column as the table's primary key.
* `$defaultFn` - computes a default value in code (here, generating the ID).
* `defaultNow` - defaults a timestamp column to the current time.
* `$onUpdate` - recomputes a value whenever the row is updated.
* `notNull` - a not-null constraint.
* `unique` - a unique constraint (here on the `externalId` and `store` pair).
* `references` - a foreign key reference to another table.
What's more, Drizzle lets us export TypeScript types for the table, which we can reuse e.g. in API calls.
Also, we can use the [drizzle-zod](https://orm.drizzle.team/docs/zod) extension to generate Zod schemas for the table.
```typescript title="billing.ts"
import {
  createInsertSchema,
  createSelectSchema,
  createUpdateSchema,
} from "drizzle-zod";
import type { z } from "zod";

export const insertSubscriptionSchema = createInsertSchema(subscription);
export const selectSubscriptionSchema = createSelectSchema(subscription);
export const updateSubscriptionSchema = createUpdateSchema(subscription);

export type SelectSubscription = z.infer<typeof selectSubscriptionSchema>;
export type InsertSubscription = z.infer<typeof insertSubscriptionSchema>;
export type UpdateSubscription = z.infer<typeof updateSubscriptionSchema>;
```
Then we can use the generated schemas in API handlers and forms to validate incoming data.
---
url: /docs/web/deployment/amplify
title: AWS Amplify
description: Learn how to deploy your TurboStarter app to AWS Amplify.
---
[AWS Amplify](https://aws.amazon.com/amplify/) is a fully managed service that makes it easy to build, deploy, and host modern web applications. It provides features like continuous deployment, serverless functions, authentication, and more - all integrated into a seamless developer experience.
This guide explains how to deploy your TurboStarter app on AWS Amplify. You'll learn how to set up your repository for automated deployments, configure build settings, manage environment variables, and ensure your application runs smoothly in production. **AWS Amplify handles the infrastructure management, allowing you to focus on developing your application.**
To deploy to AWS Amplify, you need to have an AWS account. You can create one [here](https://aws.amazon.com/amplify/).
## Create configuration file
To deploy your TurboStarter app to AWS Amplify, you need to create a config file. This file will contain the necessary information to connect your repository to AWS Amplify and deploy your application.
Let's create a new file called `amplify.yml` in the root of your project:
```yaml title="amplify.yml"
version: 1
applications:
  - frontend:
      buildPath: "/"
      phases:
        preBuild:
          commands:
            - npm install -g pnpm
            - pnpm install
        build:
          commands:
            - pnpm dlx turbo build --filter=web
      artifacts:
        baseDirectory: apps/web/.next
        files:
          - "**/*"
      cache:
        paths:
          - node_modules/**/*
          - apps/web/.next/cache/**/*
    appRoot: apps/web
```
This configuration file tells AWS Amplify how to build and deploy your application:
* The `version` field specifies the Amplify configuration version
* Under `applications`, we define the build settings for our web app:
* `buildPath` indicates where to run the build commands
* `preBuild` phase installs pnpm and project dependencies
* `build` phase runs the Turborepo build command for the web app
* `artifacts` specifies which files to deploy (the Next.js build output)
* `cache` configures which directories to cache between builds
* `appRoot` points to the web application directory
AWS Amplify will use this configuration to automatically build and deploy your app whenever you push changes to your repository. It's also useful for defining other resources that you can link to your project.
## Create a new Amplify project
We'll use the [AWS Amplify](https://aws.amazon.com/amplify/) web interface to deploy our app. First, let's create a new project.

Proceed with the option to *Deploy an app*.
## Connect repository
Choose the Git provider of your project and select the repository you want to deploy.

If your repository is private, you need to authorize Amplify to access it. It's recommended to follow a *least privileged access* approach, granting access only to the repository you want to deploy, not your entire account.
Select the branch you want to deploy and make sure to enable the *My app is a monorepo* option - configure it with the path to the app that you want to deploy (e.g. `apps/web`).

## Configure build settings
Finalize your deployment by configuring the build settings to match your project's specific needs. Refer to the points below to ensure a seamless deployment process.

Make sure that the build command and build output directory are set to the correct values (they should match the configuration file from Step 1).
### Environment variables
In the *Advanced settings* section, you can define environment variables that will be available to your application at runtime.

Verify that all required environment variables are defined, so your app can be built and deployed successfully.
## Review and deploy!
In the next step, you'll be able to review the configuration you've created and deploy your app. It's the right time to make sure that everything is set up correctly.

After making sure that everything is set up correctly, you can click on the *Save and deploy* button to start the deployment process.
When your app is deployed, you'll be able to access it via the URL provided in the Amplify console:

That's it! Your app is now deployed to AWS Amplify, congratulations! 🎉
Feel free to scale your deployment to multiple regions, add custom domains, and use other Amplify features to make your app more robust and scalable.
Check out the [AWS Amplify documentation](https://docs.aws.amazon.com/amplify/latest/userguide/welcome.html) for more information on how to use Amplify to its full potential.
---
url: /docs/web/deployment/api
title: Standalone API
description: Learn how to deploy your API as a dedicated service.
---
Sometimes you want to deploy your API as a standalone service. This is useful if you want to deploy your API to a different domain or run it as a microservice. You can also follow this approach if you don't need a web app, but still need an API service for a [mobile app](/docs/mobile) or [browser extension](/docs/extension).
Deploying your API as a standalone service provides enhanced flexibility and scalability. This allows you to independently scale your API from your web app. It's particularly beneficial for executing "long-running" tasks on your backend, such as report generation, real-time data processing, or background tasks that are likely to timeout in a serverless environment.
This guide explains how to deploy your TurboStarter API as a standalone service. As Hono has multiple deployment options (e.g. [Deno](https://hono.dev/docs/getting-started/deno), [Bun](https://hono.dev/docs/getting-started/bun)), this guide will focus primarily on the [Node.js](https://hono.dev/docs/getting-started/nodejs) deployment.
## Create separate API app
We have a [dedicated guide](/docs/web/customization/add-app) on how to add another app to your project. However, in this case, only a few files need to be added, so we can do it quickly here.
First, let's create an `api` directory inside the `apps` directory - it will be the root of your API app.
Next, add the following files into the `apps/api` directory:
```json title="apps/api/package.json"
{
  "name": "api",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "build": "esbuild ./src/index.ts --bundle --platform=node --outfile=dist/index.js",
    "clean": "git clean -xdf dist .turbo node_modules",
    "dev": "dotenv -c -- tsx watch src/index.ts",
    "start": "node dist/index.js",
    "typecheck": "tsc --noEmit"
  },
  "dependencies": {
    "@hono/node-server": "1.13.7",
    "@workspace/api": "workspace:*"
  },
  "devDependencies": {
    "@workspace/tsconfig": "workspace:*",
    "@types/node": "24.0.0",
    "esbuild": "0.24.2",
    "tsx": "4.19.2",
    "typescript": "catalog:"
  }
}
```
```json title="apps/api/tsconfig.json"
{
  "extends": "@workspace/tsconfig/base.json",
  "include": ["src"],
  "exclude": ["node_modules"]
}
```
```ts title="apps/api/src/index.ts"
import { serve } from "@hono/node-server";

import { appRouter } from "@workspace/api";

serve(
  {
    fetch: appRouter.fetch,
    port: Number(process.env.PORT) || 3001,
  },
  ({ port }) => {
    console.log(`Server is running on ${port} 🚀`);
  },
);
```
This gives you the minimal configuration required to run your API as a standalone service. Of course, you can add more configuration if needed; we just want to keep it minimal for the sake of this guide.
## Connect web app to API
The API will be running on a different URL than your web app. For the minimal setup and to avoid handling [cross-origin resource sharing (CORS)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) issues, we will rewrite the API URL in the web app.
To do this, you will need to change your `next.config.ts` file to include the API URL rewrite:
```ts title="apps/web/next.config.ts"
import type { NextConfig } from "next";

const config: NextConfig = {
  rewrites: async () => [
    {
      source: "/api/:path*",
      destination: `${process.env.NEXT_PUBLIC_API_URL ?? "http://localhost:3001"}/api/:path*`,
    },
  ],
};

export default config;
```
It's recommended to use an environment variable (e.g. `NEXT_PUBLIC_API_URL`) to set the API URL, which makes it easy to change it between environments (e.g. development, staging, production).
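For instance, a local override could live in your env file (the value is illustrative):

```bash title="apps/web/.env.local"
# Deployed API origin used by the Next.js rewrite (example value)
NEXT_PUBLIC_API_URL=https://api.example.com
```

In production you would set the same variable in your hosting provider's dashboard instead.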
Now you should be able to run your API as a standalone service. When you run the project with `pnpm dev`, you will see the new app called `api` with your API server running on [http://localhost:3001](http://localhost:3001).
## Deploy!
You can deploy your API basically like any other Node.js project. We will quickly go through the two most popular options: [PaaS](https://en.wikipedia.org/wiki/Platform_as_a_service) and [Docker](https://www.docker.com/).
### Platform as a Service (PaaS)
PaaS providers like [Vercel](https://vercel.com/), [Heroku](https://www.heroku.com/), or [Netlify](https://www.netlify.com/) allow you to deploy your Node.js app with a few clicks. You can follow our [dedicated guides](/docs/web/deployment/checklist#deploy-web-app-to-production) for the most popular providers. The process is similar for each provider and contains a few crucial steps:
1. Connecting your repository to the PaaS provider
2. Setting up build settings (e.g. build command, output directory)
3. Setting up environment variables
4. Deploying the project
To make sure your API is built and run correctly, you will need to ensure that appropriate commands are correctly set up. In our case, the following commands will need to be configured:
```bash title="Build command"
pnpm turbo build --filter=api
```
```bash title="Start command"
pnpm --filter=api start
```
This is required to ensure that the PaaS provider of your choice will be able to build and run your application correctly.
### Docker
Deploying your API as a Docker container is a good option if you want to have more control over the deployment process. You can follow our [dedicated guide](/docs/web/deployment/docker) to learn how to deploy your API as a Docker container.
For the API application, the `Dockerfile` will be located in the `apps/api` directory and it could look like this:
```dockerfile title="apps/api/Dockerfile"
FROM node:24-alpine AS base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
FROM base AS pruner
WORKDIR /app
RUN apk add --no-cache libc6-compat
COPY . .
RUN pnpm dlx turbo prune api --docker
FROM base AS builder
WORKDIR /app
RUN apk add --no-cache libc6-compat
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN pnpm install --frozen-lockfile --ignore-scripts --prefer-offline && pnpm store prune
ENV SKIP_ENV_VALIDATION=1 \
NODE_ENV=production
COPY --from=pruner /app/out/full/ .
RUN pnpm dlx turbo build --filter=api
FROM base AS runner
WORKDIR /app
RUN addgroup -g 1001 -S nodejs && \
adduser -S api -u 1001 -G nodejs
COPY --from=builder --chown=api:nodejs /app/apps/api/dist/ ./
USER api
EXPOSE 3001
CMD ["node", "index.js"]
```
To test if everything works correctly, you can run a [container](https://docs.docker.com/get-started/03_run_your_app/) locally with the following commands:
```bash
docker build -f ./apps/api/Dockerfile . -t turbostarter-api
docker run -p 3001:3001 turbostarter-api
```
Make sure to also [pass](https://docs.docker.com/reference/cli/docker/container/run/#env) all the required environment variables to the container, so your API can start without any issues.
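One convenient way to pass them is Docker's `--env-file` flag (the file name here is illustrative):

```bash
# Pass all KEY=VALUE pairs from a dotenv-style file into the container
docker run -p 3001:3001 --env-file .env.production turbostarter-api
```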
Deploying your API as a Docker container is a great way to isolate your API from the host environment, making it easier to deploy and scale. It also simplifies the workflow if you're working with a team, as you can easily share the Docker image with your colleagues and they will run the API in the **exact same** environment.
That's it! You can now grow your API layer as a standalone service, separated from other apps in your project, and deploy it anywhere you want.
---
url: /docs/web/deployment/checklist
title: Checklist
description: Let's deploy your TurboStarter app to production!
---
When you're ready to deploy your project to production, follow this checklist.
This process may take a few hours and some trial and error, so buckle up - you're almost there!
## Create database instance
**Why is it necessary?**
A production-ready database instance is essential for storing your application's data securely and reliably in the cloud. [PostgreSQL](https://www.postgresql.org/) is the recommended database for TurboStarter due to its robustness, features, and wide support.
**How to do it?**
You have several options for hosting your PostgreSQL database:
* [Supabase](/docs/web/recipes/supabase) - Provides a fully managed Postgres database with additional features
* [Vercel Postgres](https://vercel.com/storage/postgres) - Serverless SQL database optimized for Vercel deployments
* [Neon](https://neon.tech/) - Serverless Postgres with automatic scaling
* [Turso](https://turso.tech/) - Edge database built on libSQL with global replication
* [DigitalOcean](https://www.digitalocean.com/products/managed-databases) - Managed database clusters with automated failover
Choose a provider based on your needs for:
* Pricing and budget
* Geographic region availability
* Scaling requirements
* Additional features (backups, monitoring, etc.)
## Migrate database
**Why is it necessary?**
Pushing database migrations ensures that your database schema in the remote database instance is configured to match TurboStarter's requirements. This step is crucial for the application to function correctly.
**How to do it?**
You basically have two ways to run a migration:
TurboStarter comes with a predefined GitHub Actions workflow to handle database migrations. You can find its definition in the `.github/workflows/publish-db.yml` file.
What you need to do is set your `DATABASE_URL` as a [secret for your GitHub repository](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions).
Then, you can run the workflow, which will publish the database schema to your remote database instance.
[Check how to run a GitHub Actions workflow.](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow)
You can also run your migrations locally, although this is not recommended for production.
To do so, set the `DATABASE_URL` environment variable to your database URL (that comes from your database provider) in `.env.local` file and run the following command:
```bash
pnpm with-env pnpm --filter @workspace/db db:migrate
```
This command will run the migrations and apply them to your remote database.
[Learn more about database migrations.](/docs/web/database/migrations)
## Configure OAuth Providers
**Why is it necessary?**
Configuring OAuth providers like [Google](https://www.better-auth.com/docs/authentication/google) or [Github](https://www.better-auth.com/docs/authentication/github) ensures that users can log in using their existing accounts, enhancing user convenience and security. This step involves setting up the OAuth credentials in the provider's developer console, configuring the necessary environment variables, and setting up callback URLs to point to your production app.
**How to do it?**
1. Follow the provider-specific guides to set up OAuth credentials for the providers you want to use. For example:
* [Apple OAuth setup guide](https://www.better-auth.com/docs/authentication/apple)
* [Google OAuth setup guide](https://www.better-auth.com/docs/authentication/google)
* [Github OAuth setup guide](https://www.better-auth.com/docs/authentication/github)
2. Once you have the credentials, set the corresponding environment variables in your project. For the example providers above:
* For Apple: `APPLE_CLIENT_ID` and `APPLE_CLIENT_SECRET`
* For Google: `GOOGLE_CLIENT_ID` and `GOOGLE_CLIENT_SECRET`
* For Github: `GITHUB_CLIENT_ID` and `GITHUB_CLIENT_SECRET`
3. Ensure that the callback URLs for each provider are set to point to your production app. **This is crucial for the OAuth flow to work correctly.**
You can add or remove OAuth providers based on your needs. Just make sure to follow the provider's setup guide, set the required environment variables, and configure the callback URLs correctly.
## Setup billing provider
**Why is it necessary?**
Well - you want to get paid, right? Setting up billing ensures that you can charge your users for using your SaaS application, enabling you to monetize your service and cover operational costs.
**How to do it?**
* Create a [Stripe](/docs/web/billing/stripe), [Lemon Squeezy](/docs/web/billing/lemon-squeezy) or [Polar](/docs/web/billing/polar) account.
* Update the environment variables with the correct values for your billing service.
* Point webhooks from Stripe, Lemon Squeezy or Polar to `/api/billing/webhook`.
* Refer to the [relevant documentation](/docs/web/billing/overview) for more details on setting up billing.
## Setup emails provider
**Why is it necessary?**
Setting up an email provider is crucial for your SaaS application to send notifications, confirmations, and other important messages to your users. This enhances user experience and engagement, and is a standard practice in modern web applications.
**How to do it?**
* Create an account with an email service provider of your choice. See [available providers](/docs/web/emails/configuration#providers) for more information.
* Update the environment variables with the correct values for your email service.
* Refer to the [relevant documentation](/docs/web/emails/overview) for more details on setting up email.
## Setup storage provider
**Why is it necessary?**
Don't forget to configure your storage provider, if you want to operate on files in your app. By default, this is optional — the app can run without a storage provider — but some features could be unavailable (e.g., avatar uploads and other file-related actions).
**How to do it?**
* Review the [Storage overview](/docs/web/storage/overview).
* Follow [Storage configuration](/docs/web/storage/configuration) to choose and set up a provider.
* Add any required environment variables in your **hosting provider**.
## Environment variables
**Why is it necessary?**
Setting the correct environment variables is essential for the application to function correctly. These variables include API keys, database URLs, and other configuration details required for your app to connect to various services.
**How to do it?**
Use our `.env.example` files to get the correct environment variables for your project. Then add them to your **hosting provider's environment variables**. Redeploy the app once you have the URL to set in the environment variables.
## Deploy web app to production
**Why is it necessary?**
Because your users are waiting! Deploying your Next.js app to a hosting provider makes it accessible to users worldwide, allowing them to interact with your application.
**How to do it?**
Deploy your Next.js app to chosen hosting provider. **Copy the deployment URL and set it as an environment variable in your project's settings.** Feel free to check out our dedicated guides for the most popular hosting providers:
We also have a dedicated guide for [deploying your API as a standalone service](/docs/web/deployment/api).
That's it! Your app is now live and accessible to your users, good job! 🎉
Before you announce the launch, it's worth polishing the content:
* Update the legal pages with your company's information (privacy policy, terms of service, etc.).
* Remove the placeholder blog and documentation content / or replace it with your own.
* Customize authentication emails and other email templates.
* Update the favicon and logo with your own branding.
* Update the FAQ and other static content with your own information.
---
url: /docs/web/deployment/docker
title: Docker
description: Learn how to containerize your TurboStarter app with Docker.
---
[Docker](https://docker.com) is a popular platform for containerizing applications, making it easy to package your app with all its dependencies for consistent performance across environments. It simplifies development, testing, and deployment.
This guide explains how to containerize your TurboStarter app using Docker. You'll learn to create a Dockerfile, build a container image, and run your app in a container for a reliable and portable setup.
## Configure Next.js for Docker
First of all, we need to configure Next.js to output the build files in the [standalone format](https://nextjs.org/docs/pages/api-reference/config/next-config-js/output) - it's required for the Docker image to work. To do this, we need to add the following to our `next.config.ts` file:
```ts title="apps/web/next.config.ts"
import type { NextConfig } from "next";

const config: NextConfig = {
  output: "standalone",
  // ...
};

export default config;
```
## Create a Dockerfile
[Dockerfile](https://docs.docker.com/get-started/02_our_app/) is a text file that contains the instructions for building a [Docker image](https://docs.docker.com/get-started/02_our_app/). It defines the environment, dependencies, and commands needed to run your app. You can safely copy the following Dockerfile to your project:
```dockerfile title="apps/web/Dockerfile"
FROM node:24-alpine AS base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
FROM base AS pruner
WORKDIR /app
RUN apk add --no-cache libc6-compat
COPY . .
RUN pnpm dlx turbo prune web --docker
FROM base AS builder
WORKDIR /app
RUN apk add --no-cache libc6-compat
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN pnpm install --frozen-lockfile --ignore-scripts --prefer-offline && pnpm store prune
ENV SKIP_ENV_VALIDATION=1 \
NODE_ENV=production
COPY --from=pruner /app/out/full/ .
RUN pnpm dlx turbo build --filter=web
FROM base AS runner
WORKDIR /app
RUN addgroup -g 1001 -S nodejs && \
adduser -S web -u 1001 -G nodejs
COPY --from=builder --chown=web:nodejs /app/apps/web/.next/standalone ./
COPY --from=builder --chown=web:nodejs /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=builder --chown=web:nodejs /app/apps/web/public ./apps/web/public
USER web
EXPOSE 3000
CMD ["node", "apps/web/server.js"]
```
Feel free to check out our [self-hosting guide](/blog/self-host-your-nextjs-turborepo-app-with-docker-in-5-minutes) for more details on how each stage of the Dockerfile works.
And that's all we need! You can now build and run your Docker image to deploy your app anywhere you want in an [isolated environment](https://docs.docker.com/get-started/workshop/04_sharing_app/).
## Run a container
To test if everything works correctly, you can run a [container](https://www.docker.com/resources/what-container/) locally with the following commands:
```bash
docker build -f ./apps/web/Dockerfile . -t turbostarter
docker run -p 3000:3000 turbostarter
```
Make sure to also [pass](https://docs.docker.com/reference/cli/docker/container/run/#env) all the required environment variables to the container, so your app can start without any issues.
If everything works correctly, you should be able to access your app at [http://localhost:3000](http://localhost:3000).
That's it! You can now build and deploy your app as a Docker container to any supported hosting (e.g. [Fly.io](/docs/web/deployment/fly)).
Using Docker containers is a great way to isolate your app from the host environment, making it easier to deploy and scale. It also simplifies the workflow if you're working with a team, as you can easily share the Docker image with your colleagues and they will run the app in the **exact same** environment.
---
url: /docs/web/deployment/fly
title: Fly.io
description: Learn how to deploy your TurboStarter app to Fly.io.
---
[Fly.io](https://fly.io) makes deploying web applications to the cloud easy and efficient. It handles scaling, monitoring, and logging so you can focus on building your app.
This guide explains how to deploy your TurboStarter app on Fly.io. You'll learn how to leverage [Docker](/docs/web/deployment/docker) containers to deploy your app, set up builds, and manage environment variables for a smooth and reliable deployment.
To deploy to Fly.io, you need to have an account. You can create one [here](https://fly.io/app/sign-up).
You also need to have [Docker](/docs/web/deployment/docker) configured in your project.
## Setup Fly CLI
As we will be using the Fly CLI to launch and manage our app, you need to install and set it up on your machine.
[Check the official documentation on how to install Fly CLI](https://fly.io/docs/flyctl/install/).
After you've installed the Fly CLI, log in to your Fly account to connect it with your machine:
```bash
fly auth login
```
[Read more about authenticating CLI](https://fly.io/docs/flyctl/auth/#available-commands).
Now you're ready to launch your app!
## Launch project
Use a [Dockerfile](/docs/web/deployment/docker) to launch your app with [Fly CLI](https://fly.io/docs/flyctl/). You can use the following command to do this from your local machine:
```bash
fly launch --dockerfile apps/web/Dockerfile
```
Make sure to set all the required configuration in the CLI steps (e.g. set port to `3000`, setup additional services, choose billing plan, etc.).

If you want to achieve better performance and lower latency in your API requests, you can customize the region of your Fly.io app. Make sure to set it to the region closest to your database and users.
After the launch is complete, Fly will write your project configuration into a `fly.toml` file. The configuration of your project is stored there; feel free to customize it to your needs:
```toml title="fly.toml"
app = 'web-aged-sky-5596'
primary_region = 'ams'
[build]
dockerfile = 'apps/web/Dockerfile'
[http_service]
internal_port = 3000
force_https = true
auto_stop_machines = 'stop'
auto_start_machines = true
min_machines_running = 0
processes = ['app']
[[vm]]
memory = '512mb'
cpu_kind = 'shared'
cpus = 1
```
See [Fly.io documentation](https://fly.io/docs/reference/configuration) for more information on how to use this file.
## Set up secrets
To make your app fully functional, you need to set up required environment variables. You can do this by running the following command:
```bash
fly secrets set DATABASE_URL=...
```
They will be automatically added to your app's runtime environment.
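If you have many variables, the Fly CLI can also import them in bulk from a dotenv-style file via stdin (the file name here is illustrative):

```bash
# Import KEY=VALUE pairs from a file as app secrets
fly secrets import < .env.production
```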
## Deploy!
Each time you make changes to `fly.toml` or secrets, you need to re-deploy your app to apply changes to the running app.
To do this, just run the following command in your project directory:
```bash
fly deploy
```
This will build your app and deploy it to Fly.io with the latest code version.

That's it! Your app is now deployed to Fly.io, congratulations! 🎉
Fly is a platform that allows you to deploy and manage applications in the cloud. It provides a simple and intuitive way to deploy your app, with features such as automatic scaling, load balancing, and rolling updates. With Fly, you can focus on building your app without worrying about the underlying infrastructure.
---
url: /docs/web/deployment/netlify
title: Netlify
description: Learn how to deploy your TurboStarter app to Netlify.
---
[Netlify](https://netlify.com) is a powerful platform for deploying modern web applications. It offers continuous deployment, serverless functions, and a global CDN to ensure your application is fast and reliable.
In this guide, we will walk through the steps to deploy your TurboStarter app to Netlify. You will learn how to connect your repository, configure build settings, and manage environment variables to ensure a smooth deployment process.
To deploy to Netlify, you need to have an account. You can create one [here](https://netlify.com/signup).
## Create new site
Once you've created your account and logged in, the Netlify dashboard will display an option to add a new site. Click on the *Import from Git* button to begin connecting your Git repository.

If you already have a Netlify account, you can get to this step by clicking on the *Sites* tab in the navigation menu.
## Connect your repository
Choose the Git provider of your project and select the repository you want to deploy.

To connect your repository, you need to authorize Netlify to access it. It's recommended to follow a *least privileged access* approach, granting access only to the repository you want to deploy, not your entire account.
## Configure build settings
Last step before deploying! Configure the build settings according to your project configuration. Use the screenshots provided below for reference to ensure a smooth deployment process.

Also, add all environment variables under the *Environment variables* section - it's required for the build process to work.
## Deploy!
Click on the *Deploy* button to start the deployment process.

That's it! Your app is now deployed to Netlify, congratulations! 🎉
If you want to achieve better performance and lower latency in your API requests, you can customize the region of your Netlify serverless functions. Make sure to set it to the region closest to your database and users.

Unfortunately, it's a paid feature, so you need to upgrade your Netlify account to be able to change it.
---
url: /docs/web/deployment/railway
title: Railway
description: Learn how to deploy your TurboStarter app to Railway.
---
[Railway](https://railway.app) is a platform that allows you to deploy your web applications to a cloud environment. It provides a simple and efficient way to manage your application's infrastructure, including scaling, monitoring, and logging.
This guide provides a step-by-step walkthrough for deploying your TurboStarter app on Railway and taking advantage of its features in a production environment. You'll discover how to link your repository, tailor build settings, and oversee environment variables, ensuring a smooth and optimized deployment process that leverages Railway's capabilities.
To deploy to Railway, you need to have an account. You can create one [here](https://railway.app/signup).
## Create new project
We'll use the [Railway](https://railway.app) web interface to deploy our project. First, let's create a new project.

Proceed with the option to *Deploy from Github repo*.
## Connect repository
Choose the Git provider of your project and select the repository you want to deploy.

If your repository is private, you need to authorize Railway to access it. It's recommended to follow a *least privileged access* approach, granting access only to the repository you want to deploy, not your entire account.
## Configure project settings
Finalize your deployment by configuring the build settings to match your project's specific needs. Refer to the points below to ensure a seamless deployment process.
### Commands
Configure the build and start commands to ensure that your project is built and started correctly.

Make sure to set them to the following values:
* **Build command** - `pnpm dlx turbo build --filter=web`
* **Start command** - `pnpm --filter=web start`
### Environment variables
Last but not least, you need to set the environment variables for your project. Make sure all the required variables are set.

If you want to achieve better performance, lower latency in your API requests or add some replicas of your application, you can customize the region of your Railway instance. Make sure to set it to the region closest to your database and users.

You can also use a [Railway config file](https://docs.railway.com/guides/config-as-code) to manage your project's settings in one place, as code.
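As a rough sketch, a minimal `railway.json` mirroring the commands above might look like the following. Treat the key names as assumptions and verify them against Railway's config-as-code schema before relying on this:

```json
{
  "build": {
    "buildCommand": "pnpm dlx turbo build --filter=web"
  },
  "deploy": {
    "startCommand": "pnpm --filter=web start"
  }
}
```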
## Deploy!
Click on the *Deploy* button to start the deployment process.

That's it! Your app is now deployed to Railway, congratulations! 🎉
Feel free to scale your deployment to multiple regions or isolate it in a separate network. Check out the [Railway documentation](https://docs.railway.app) for more information about which services are available.
---
url: /docs/web/deployment/render
title: Render
description: Learn how to deploy your TurboStarter app to Render.
---
[Render](https://render.com) offers a unique combination of features that make it an ideal platform for deploying modern web applications. With Render, you can leverage continuous deployment, managed databases, and a global CDN to ensure your application is not only fast and reliable but also scalable and secure.
In this guide, we will walk through the steps to deploy your TurboStarter app to Render, highlighting the benefits of using Render's platform. You will learn how to connect your repository, configure build settings, and manage environment variables to ensure a seamless and efficient deployment process that takes advantage of Render's features.
To deploy to Render, you need to have an account. You can create one [here](https://dashboard.render.com/register).
## Create a new service
Navigate to the [Render dashboard](https://dashboard.render.com) and click on the *New* button.

Pick the *Web Service* option and proceed to the next step.
## Connect your repository
Choose the Git provider of your project and select the repository you want to deploy.

If your repository is private, you need to authorize Render to access it. It's recommended to follow a *least privileged access* approach, granting access only to the repository you want to deploy, not the entire account.
## Configure service settings
Finalize your deployment by configuring the build settings to match your project's specific needs. Refer to the screenshots below to ensure a seamless deployment process.

You can also group your service with other services (e.g. [databases](https://render.com/docs/postgresql-creating-connecting) or [cron jobs](https://render.com/docs/cronjobs)) in a [Project](https://render.com/docs/projects), which will help you manage them together.
[Read official documentation for more information](https://render.com/docs/projects).
If you want to achieve better performance and lower latency in your API requests, you can customize the region of your Render service. Make sure to set it to the region closest to your database and users.
### Commands
Configure the build and start commands to ensure that your project is built and started correctly.

Make sure to set them to the following values:
* **Build command** - `pnpm install --frozen-lockfile; pnpm dlx turbo build --filter=web`
* **Start command** - `pnpm --filter=web start`
### Instance type
Select a plan that fits your project's needs.

For testing purposes or MVPs, you can safely use the *Free* plan. However, for production, it's recommended to upgrade your plan: paid plans offer more resources, and your project won't be paused after periods of inactivity.
### Environment variables
Last but not least, you need to set the environment variables for your project. Make sure all the required variables are set.

You can also modify *Advanced settings* to set e.g. [health checks](https://render.com/docs/deploys#health-checks) or modify [auto deploy](https://render.com/docs/deploys#automatic-git-deploys) triggers.
## Deploy!
Click on the *Deploy Web Service* button to start the deployment process.

That's it! Your app is now deployed to Render, congratulations! 🎉
Render is a powerful platform with a lot of integrations and features. Feel free to check out the [official documentation](https://render.com/docs) for more information.
---
url: /docs/web/deployment/vercel
title: Vercel
description: Learn how to deploy your TurboStarter app to Vercel.
---
In general you can deploy the application to any hosting provider that supports Node.js, but we recommend using [Vercel](https://vercel.com) for the best experience.
Vercel is the easiest way to deploy Next.js apps. It's the company behind Next.js and has first-class support for Next.js.
To deploy to Vercel, you need to have an account. You can create one [here](https://vercel.com/signup).
TurboStarter offers two separate ways to deploy to Vercel, each shipping with **one-click deployment**. Choose the one that best fits your needs.
Deploying with this method is the easiest and fastest way to get your app up and running on the cloud provider. Follow these steps:
## Connect your git repository
After signing up, you will be prompted to import a Git repository. Select the Git provider of your project and connect your Git account with Vercel.

## Configure project settings
As we're working in monorepo, some additional settings are required to make the build process work.
Make sure to set the following settings:
* **Build command**: `pnpm turbo build --filter=web` - to build only the web app
* **Root directory**: `apps/web` - to make sure Vercel uses the web folder as the root directory (make sure to check the *Include files outside the root directory in the Build Step* option, so that all packages from your monorepo are included in the build process)

## Configure environment variables
Please make sure to set all the environment variables required for the project to work correctly. You can find the list of required environment variables in the `.env.example` file in the `apps/web` directory.
The environment variables can be set in the Vercel dashboard under *Project Settings* > *Environment Variables*. Make sure to set them for all environments (Production, Preview, and Development) as needed.
**Failure to set the environment variables will result in the project not working correctly.**
If the build fails, dig into the logs to see what the issue is. Our Zod configuration will validate and report any missing environment variables; check the logs to find out which ones are missing.
The first deployment may fail if you don't have a custom domain connected yet, since you can't set it in the environment variables at that point. That's fine: let the first deployment fail, then pick the domain, add it, and redeploy.
## Deploy!
Click on the *Deploy* button to start the deployment process.

That's it! Your app is now deployed to Vercel, congratulations! 🎉
Although connecting your repository is the easiest way to deploy to Vercel, we recommend using the preconfigured GitHub Actions workflow for the most granular control over your deployments.
We'll leverage the [Vercel CLI](https://vercel.com/docs/cli) to deploy the application from the CI/CD pipeline. [See the official documentation on using GitHub Actions with Vercel](https://vercel.com/guides/how-can-i-use-github-actions-with-vercel).
## Get Vercel Access Token
To deploy the application, we need to obtain a Vercel access token.
Please, follow [this guide](https://vercel.com/guides/how-do-i-use-a-vercel-api-access-token) to create one.

## Install Vercel CLI
We need to install the [Vercel CLI](https://vercel.com/docs/cli) locally to obtain the credentials required for our GitHub Actions workflow.
You can install it using the following command:
```bash
pnpm i -g vercel
```
Then, log in to Vercel using the following command:
```bash
vercel login
```
## Get credentials
Inside your project folder, run the following command to link it to a Vercel project:
```bash
vercel link
```
This will generate a `.vercel` folder containing a `project.json` file with your `projectId` and `orgId`.
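The generated file contains roughly the following (the ID values below are placeholders):

```json
{
  "projectId": "prj_xxxxxxxxxxxx",
  "orgId": "team_xxxxxxxxxxxx"
}
```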
## Configure Github Actions
Inside GitHub, add `VERCEL_TOKEN`, `VERCEL_ORG_ID`, and `VERCEL_PROJECT_ID` as [secrets](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions) to your repository.

This will allow GitHub Actions to access your settings and deploy the application to Vercel.
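As a rough sketch (not the exact contents of the shipped workflow), a deploy step using these secrets with the Vercel CLI could look like this. The CLI picks up `VERCEL_ORG_ID` and `VERCEL_PROJECT_ID` from the environment:

```yaml
# Hypothetical excerpt - adapt to your own workflow file
- name: Deploy to Vercel
  run: vercel deploy --prod --token=${{ secrets.VERCEL_TOKEN }}
  env:
    VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
    VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}
```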
## Configure project settings
As we're working in monorepo, some additional settings are required to make the build process work.
Make sure to set the following settings:
* **Build command**: `pnpm turbo build --filter=web` - to build only the web app
* **Root directory**: `apps/web` - to make sure Vercel uses the web folder as the root directory (make sure to check the *Include files outside the root directory in the Build Step* option, so that all packages from your monorepo are included in the build process)

## Configure environment variables
Please make sure to set all the environment variables required for the project to work correctly. You can find the list of required environment variables in the `.env.example` file in the `apps/web` directory.
The environment variables can be set in the Vercel dashboard under *Project Settings* > *Environment Variables*. Make sure to set them for all environments (Production, Preview, and Development) as needed.
**Failure to set the environment variables will result in the project not working correctly.**
If the build fails, dig into the logs to see what the issue is. Our Zod configuration will validate and report any missing environment variables; check the logs to find out which ones are missing.
The first deployment may fail if you don't have a custom domain connected yet, since you can't set it in the environment variables at that point. That's fine: let the first deployment fail, then pick the domain, add it, and redeploy.
## Deploy!
By default, TurboStarter comes with a GitHub Actions workflow that can be [triggered manually](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow).
The configuration is located in `.github/workflows/publish-web.yml`; you can easily customize it to your needs, for example to trigger a deployment on every push to the `main` branch.
```diff title=".github/workflows/publish-web.yml"
on:
- workflow_dispatch:
+ push:
+ branches:
+ - main
```
Then, every time you push to the `main` branch, the workflow will be triggered and the application will be deployed to Vercel.

That's it! Your app is now deployed to Vercel, congratulations! 🎉
## Troubleshooting
In some cases, users have reported issues with the deployment to Vercel using the default parameters. If you encounter problems, try these troubleshooting steps:
1. **Check root directory settings**
* Set the root directory to `apps/web`
* Enable *Include source files outside of the Root Directory* option
2. **Verify build configuration**
* Ensure the framework preset is set to Next.js
* Set build command to `pnpm turbo build --filter=web`
* Set install command to `pnpm install`
3. **Review deployment logs**
* If deployment fails, carefully review the build logs
* Look for any error messages about missing dependencies or environment variables
* Verify that all required environment variables are properly configured
If issues persist after trying these steps, check the [deployment troubleshooting guide](/docs/web/troubleshooting/deployment) for additional help.
---
url: /docs/web/emails/configuration
title: Configuration
description: Learn how to configure your emails in TurboStarter.
---
The `@workspace/email` package provides a simple and flexible way to send emails using various email providers. It abstracts the complexity of different email services and offers a consistent interface for sending emails with pre-defined templates.
To configure the email service, you need to set a few environment variables.
```dotenv
EMAIL_FROM="hello@resend.dev"
EMAIL_THEME="orange"
```
Let's break them down:
* `EMAIL_FROM` - The email address that emails will be sent from. **Please make sure that the mail address and domain are verified in your mail provider.**
* `EMAIL_THEME` - The theme color to use for the emails. See [Themes](/docs/web/customization/styling#themes) for more information.
The email provider is configured by modifying the exports in `packages/email` package. By default, [Nodemailer](/docs/web/emails/configuration#nodemailer) is used.
Configuration will be validated against the schema, so you will see the error messages in the console if something is not right.
## Providers
TurboStarter supports multiple email providers, each with its own configuration. Below, you'll find detailed information on how to set up and use each supported provider. Choose the one that best fits your needs and follow the instructions in the respective accordion section.
To use Resend as your email provider, you need to [create an account](https://resend.com/) and [obtain your API key](https://resend.com/docs/dashboard/api-keys/introduction).
Then, set it as an environment variable in your `.env.local` file in `apps/web` directory and your deployment environment:
```dotenv
RESEND_API_KEY="your-api-key"
```
Also, make sure to activate Resend as your email provider by updating the exports in:
```ts
// [!code word:resend]
export * from "./resend";
```
```ts
// [!code word:resend]
export * from "./resend/env";
```
To customize the provider, you can find its definition in `packages/email/src/providers/resend` directory.
For more information, please refer to the [Resend documentation](https://resend.com/docs).
To use SendGrid as your email provider, you need to [create an account](https://sendgrid.com/) and [obtain your API key](https://sendgrid.com/docs/ui/account-and-settings/api-keys/).
Then, set it as an environment variable in your `.env.local` file in `apps/web` directory and your deployment environment:
```dotenv
SENDGRID_API_KEY="your-api-key"
```
Also, make sure to activate SendGrid as your email provider by updating the exports in:
```ts
// [!code word:sendgrid]
export * from "./sendgrid";
```
```ts
// [!code word:sendgrid]
export * from "./sendgrid/env";
```
To customize the provider, you can find its definition in `packages/email/src/providers/sendgrid` directory.
For more information, please refer to the [SendGrid documentation](https://sendgrid.com/docs).
To use Postmark as your email provider, you need to [create an account](https://postmarkapp.com/) and [obtain your server API token](https://postmarkapp.com/support/article/1008-what-are-the-account-and-server-api-tokens).
Then, set it as an environment variable in your `.env.local` file in `apps/web` directory and your deployment environment:
```dotenv
POSTMARK_API_KEY="your-secret-api-token"
```
Also, make sure to activate Postmark as your email provider by updating the exports in:
```ts
export * from "./postmark";
```
```ts
// [!code word:postmark]
export * from "./postmark/env";
```
To customize the provider, you can find its definition in `packages/email/src/providers/postmark` directory.
For more information, please refer to the [Postmark documentation](https://postmarkapp.com/developer).
To use Plunk as your email provider, you need to [create an account](https://plunk.dev/) and [obtain your API key](https://docs.useplunk.com/api-reference/authentication).
Then, set it as an environment variable in your `.env.local` file in `apps/web` directory and your deployment environment:
```dotenv
PLUNK_API_KEY="your-api-key"
```
Also, make sure to activate Plunk as your email provider by updating the exports in:
```ts
// [!code word:plunk]
export * from "./plunk";
```
```ts
// [!code word:plunk]
export * from "./plunk/env";
```
To customize the provider, you can find its definition in `packages/email/src/providers/plunk` directory.
For more information, please refer to the [Plunk documentation](https://docs.useplunk.com).
If you're using `nodemailer` as your email provider, you'll need to set the following SMTP configuration in your environment variables:
```dotenv
NODEMAILER_HOST="your-smtp-host"
NODEMAILER_PORT="your-smtp-port"
NODEMAILER_USER="your-smtp-user"
NODEMAILER_PASSWORD="your-smtp-password"
```
The variables are:
* `NODEMAILER_HOST`: The host of your SMTP server.
* `NODEMAILER_PORT`: The port of your SMTP server.
* `NODEMAILER_USER`: The username (email address) for your SMTP server account.
* `NODEMAILER_PASSWORD`: The password for the email account.
Also, make sure to activate nodemailer as your email provider by updating the exports in:
```ts
// [!code word:nodemailer]
export * from "./nodemailer";
```
```ts
// [!code word:nodemailer]
export * from "./nodemailer/env";
```
To customize the provider, you can find its definition in `packages/email/src/providers/nodemailer` directory.
For more information, please refer to the [nodemailer documentation](https://nodemailer.com/smtp/).
## Templates
In the `@workspace/email` package, we provide a set of pre-defined templates for you to use. You can find them in the `packages/email/src/templates` directory.
When you run your development server, you will be able to preview all available templates in the browser under [http://localhost:3005](http://localhost:3005).

Next to the templates, you can also find some shared components that you can use in your emails.
Feel free to add your own templates and components or modify existing ones to match them with your brand and style.
### How to add a new template?
We'll go through the process of adding a new template, as it requires a few steps to make sure everything works correctly.
#### Define types
Let's assume that we want to add a **welcome email**, that new users will receive after signing up.
We'll start by defining a new template type in the `packages/email/src/types/templates.ts` file:
```ts title="templates.ts"
export const EmailTemplate = {
  ...AuthEmailTemplate,
  WELCOME: "welcome",
} as const;
```
Also, we need to add types for the variables that we'll pass to the template (if any); in our case it will be just the `name` of the user:
```ts title="templates.ts"
type WelcomeEmailVariables = {
  welcome: {
    name: string;
  };
};

export type EmailVariables = AuthEmailVariables | WelcomeEmailVariables;
```
By doing this, we ensure that the payload passed to the template has all required properties, so we won't end up with an email that greets your user with "Hey, undefined!".
#### Create template
Next up, we need to create a file with the template itself. We'll create a `welcome.tsx` file in the `packages/email/src/templates` directory.
```tsx title="welcome.tsx"
import { Heading, Preview, Text } from "@react-email/components";

import { Button } from "../_components/button";
import { Layout } from "../_components/layout/layout";

import type { EmailTemplate, EmailVariables } from "../../types";

type Props = EmailVariables[typeof EmailTemplate.WELCOME];

export const Welcome = ({ name }: Props) => {
  return (
    <Layout>
      <Preview>Welcome to TurboStarter!</Preview>
      <Heading>Hi, {name}!</Heading>
      <Text>Start your journey with our app by clicking the button below.</Text>
    </Layout>
  );
};

Welcome.subject = "Welcome to TurboStarter!";

Welcome.PreviewProps = {
  name: "John Doe",
};

export default Welcome;
```
As you can see, by defining appropriate types for the template, we can safely use the variables as props in the template.
To learn more about supported components, please refer to the [React Email documentation](https://react.email/docs/components).
#### Register template
We have to register the template in the main entrypoint of the templates, the `packages/email/src/templates/index.ts` file:
```ts title="index.ts"
import { Welcome } from "./welcome";

export const templates = {
  ...
  [EmailTemplate.WELCOME]: Welcome,
} as const;
```
That way, it will be available in the `sendEmail` function, enabling us to send it from the server-side of your application.
```ts
import { sendEmail } from "@workspace/email/server";

sendEmail({
  to: "user@example.com",
  template: EmailTemplate.WELCOME,
  variables: {
    name: "John Doe",
  },
});
```
Learn more about sending emails in the [dedicated section](/docs/web/emails/sending).
Et voilà! You've just added a new email template to your application 🎉
### Translating templates
You can also translate your templates to support multiple languages. Each mail template is passed the `locale` property, which you can use to get the translation for the current locale. This allows you to maintain consistent translations across your application and emails.
The translation system [uses the same i18n setup](/docs/web/internationalization/overview) as your main application, so you can reuse your existing translation files and namespaces. The translations are loaded server-side when the email is generated, ensuring the correct language is used based on the user's preferences.
Here's how you can implement translations in your email templates:
```tsx
import { Heading, Preview, Text } from "@react-email/components";

import { getTranslation } from "@workspace/i18n/server";

import { Button } from "../_components/button";
import { Layout } from "../_components/layout/layout";

import type {
  EmailTemplate,
  EmailVariables,
  CommonEmailProps,
} from "../../types";

type Props = EmailVariables[typeof EmailTemplate.WELCOME] & CommonEmailProps;

export const Welcome = async ({ name, locale }: Props) => {
  const { t } = await getTranslation({ locale, ns: "auth" });

  return (
    <Layout>
      <Preview>{t("account.welcome.preview")}</Preview>
      <Heading>{t("account.welcome.heading", { name })}</Heading>
      <Text>{t("account.welcome.body")}</Text>
    </Layout>
  );
};

Welcome.subject = async ({ locale }: CommonEmailProps) => {
  const { t } = await getTranslation({ locale, ns: "auth" });
  return t("account.welcome.subject");
};

Welcome.PreviewProps = {
  name: "John Doe",
  locale: "en",
};

export default Welcome;
```
To send the email in the specified language, you can pass the optional `locale` argument to the `sendEmail` function:
```ts
sendEmail({
  to: "user@example.com",
  template: EmailTemplate.WELCOME,
  variables: {
    name: "John Doe",
  },
  locale: "en", // [!code highlight]
});
```
Learn more about translations in the [dedicated section](/docs/web/internationalization/translations).
---
url: /docs/web/emails/overview
title: Overview
description: Get started with emails in TurboStarter.
---
For mailing functionality, TurboStarter integrates [React Email](https://react.email/docs/introduction) which enables you to build your emails from composable React components.
It's a simple, yet powerful library that allows you to **write your emails in React**.
It also allows you to use **Tailwind CSS for styling**, which is a huge advantage, as we can share almost everything from the main app with the emails package, keeping them consistent with the rest of the app.
You can read more about `react-email` package in the [official documentation](https://react.email/docs/introduction).
## Providers
TurboStarter implements multiple providers for managing and sending emails. To learn more about each provider and how to configure them, see the respective section:
All configuration and setup is built-in with a unified API, so you can switch between providers by simply changing the exports, and you can even introduce your own provider without breaking any sending-related logic.
## Development
When you [set up your development environment](/docs/web/installation/development) and run the `pnpm dev` command, a new app will start at [http://localhost:3005](http://localhost:3005).

There you'll be able to preview your email templates and send test emails from your app. It includes hot reloading, so when you make a change in the code, it will be reflected in the browser.
Learn more about configuration and setup of the emails in TurboStarter in the following sections.
---
url: /docs/web/emails/sending
title: Sending emails
description: Learn how to send emails in TurboStarter.
---
The strategy for sending emails, which every provider has to implement, is **extremely simple**:
```ts
export interface EmailProviderStrategy {
  send: (args: {
    to: string;
    subject: string;
    text: string;
    html?: string;
  }) => Promise<unknown>;
}
```
You don't need to worry much about it, as all the providers are already configured for you. Just be aware of it if you want to add a custom provider.
Then, we define a general `sendEmail` function that you can use as an API for sending emails in your app:
```ts
const sendEmail = async <T extends EmailTemplate>({
  to,
  template,
  variables,
  locale,
}: {
  to: string;
  template: T;
  variables: EmailVariables[T];
  locale?: string;
}) => {
  const { html, text, subject } = await getTemplate({
    id: template,
    variables,
    locale,
  });

  return send({ to, subject, html, text });
};
```
The arguments are:
* `to`: The recipient's email address.
* `template`: The email template to use.
* `variables`: The variables to pass to the template.
* `locale`: The locale to use for the email.
It returns a promise that resolves when the email is sent successfully. If there is an error, the promise will be rejected with an error message.
To send an email, just invoke the `sendEmail` with the correct arguments from the **server-side** of your application:
```ts
import { sendEmail } from "@workspace/email/server";

sendEmail({
  to: "user@example.com",
  template: EmailTemplate.WELCOME,
  variables: {
    name: "John Doe",
  },
  locale: "en",
});
```
And that's it! You're ready to send emails in your application 🚀
## Authentication emails
TurboStarter comes with a set of pre-configured authentication emails for various purposes, including magic links and password reset functionality.
To handle the sending of these emails at the right time, we use [Better Auth Hooks](https://www.better-auth.com/docs/concepts/email), which trigger when specific authentication events occur.
The logic for determining which email to send is already implemented for you in the `packages/auth/src/server.ts` file, alongside your [authentication configuration](/docs/web/auth/configuration):
```ts title="server.ts"
export const auth = betterAuth({
  emailAndPassword: {
    enabled: true,
    sendResetPassword: async ({ user, url }) =>
      sendEmail({
        to: user.email,
        template: EmailTemplate.RESET_PASSWORD,
        variables: {
          url,
        },
      }),
  },
  emailVerification: {
    sendVerificationEmail: async ({ user, url }) =>
      sendEmail({
        to: user.email,
        template: EmailTemplate.CONFIRM_EMAIL,
        variables: {
          url,
        },
      }),
  },
  /* other options */
});
```
As you can see, the authentication emails are automatically sent when needed (e.g. when user requests password reset or needs to verify their email address).
You can customize authentication templates by modifying them in the `packages/email/src/templates` directory, or create your own templates for other use cases in your application.
---
url: /docs/web/extras
title: Extras
description: See what you get together with the code.
---
## Tips and Tricks
In many places, next to the code you will find some marketing tips, design suggestions, and potential risks. This is to help you build a better product and avoid common pitfalls.
```tsx title="Hero.tsx"
return (
  <>
    {/* 💡 Use something that user can visualize e.g.
        "Ship your startup while on the toilet" */}
    Best startup in the world
  </>
);
```
### Submission tips
When it comes to the mobile app and browser extension, you must submit your product for review by Apple, Google, etc. We have some tips to make sure your submission goes smoothly.
```json title="app.json"
{
  "ios": {
    "infoPlist": {
      /* 🍎 add descriptive justification of using this permission on iOS */
      "NSCameraUsageDescription": "This app uses the camera to scan barcodes on event tickets."
    }
  }
}
```
We also provide info on how to make your store listings better:
```json title="package.json"
{
  "manifest": {
    /* 💡 Use localized messages to get more visibility in web stores */
    "name": "__MSG_extensionName__",
    "default_locale": "en"
  }
}
```
## 25+ SaaS Ideas
Not sure what to build? We have a list of **25+** SaaS ideas that you can use to get started 🔥
Grouped by category, these ideas are a great way to get inspired and start building your next project.
Including design, copies, marketing tips and potential risks, this list is a great resource for anyone looking to build a SaaS product.

## AI rules, skills, subagents and commands
TurboStarter ships with a set of custom AI rules, skills, subagents, and commands you can use in popular AI editors and tools. They help the AI understand the codebase conventions and generate changes faster and more reliably.
To learn how to set them up and use them effectively, see the [AI-assisted development docs](/docs/web/installation/ai-development).
## Discord community
We have a Discord community where you can ask questions and share your projects. It's a great place to get help and meet other developers. Check more details at [/discord](/discord).

---
url: /docs/web/faq
title: FAQ
description: Find answers to common technical questions.
---
## Why isn't everything hidden and configured with one BIG config file?
TurboStarter intentionally exposes the underlying code rather than hiding it behind configuration files (like some starters do). This design choice follows our **you own your code** philosophy, giving you full control and flexibility over your codebase.
While a single config file might seem simpler initially, it often becomes restrictive when you need to customize functionality beyond what the config allows. With direct access to the code, you can modify any part of the system to match your specific requirements.
## I don't know some technology! Should I buy TurboStarter?
You should be prepared for a learning curve or consider learning it first. However, TurboStarter will still work for you if you're willing to learn.
Even without knowing some technologies, you can still use the rest of the features.
## I don't need mobile app or browser extension, what should I do?
You can simply ignore the mobile app and browser extension parts of the project. You can remove the `apps/mobile` and `apps/extension` directories from the project.
The modular nature of TurboStarter allows you to remove parts of the project that you don't need without affecting the rest of the stack.
## I want to use a different provider for X
Sure! TurboStarter is designed to be modular, so configuring a new provider (e.g. for emails, billing, or any other service) is straightforward. You just need to make sure your configuration is compatible with the common interface so it can be plugged into the codebase.
## Will you add more packages in the future?
Yes, we will keep updating TurboStarter with new packages and features. This kit is designed to be modular, allowing for new features and packages to be added without interfering with your existing code. You can always [update your project](/docs/web/installation/update) to the latest version.
## Can I use this kit for a non-SaaS project?
This kit is mainly designed for SaaS projects. If you're building something other than a SaaS, it might include features you don't need. You can still use it for non-SaaS projects, but you may need to remove or modify features that are specific to SaaS use cases.
## Can I use personal accounts only?
Yes! You can disable team accounts and have personal accounts only by setting a feature flag.
## Does it set up the production instance for me?
No, TurboStarter does not set up the production instance for you. This includes setting up databases, Stripe, or any other services you need. TurboStarter does not have access to your Stripe or Resend accounts, so setup on your end is required. TurboStarter provides the codebase and documentation to help you set up your SaaS project.
## Does the starter include Solito?
No. Solito will not be included in this repo. It is a great tool if you want to share code between your Next.js and Expo app. However, the main purpose of this repo is not the integration between Next.js and Expo — it's the code splitting of your SaaS platforms into a monorepo. You can utilize the monorepo with multiple apps, and it can be any app such as Vite, Electron, etc.
Integrating Solito into this repo isn't hard, and there are a few [official templates](https://github.com/nandorojo/solito/tree/master/example-monorepos) by the creators of Solito that you can use as a reference.
## Does this pattern leak backend code to my client applications?
No, it does not. The `api` package should only be a production dependency in the Next.js application where it's served. The Expo app, browser extension, and all other apps you may add in the future should only add the `api` package as a dev dependency. This lets you have full type safety in your client applications while keeping your backend code safe.
If you need to share runtime code between the client and server, you can create a separate `shared` package for this and import it on both sides.
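For illustration, here's how that split might look in a client app's `package.json` (the package names are examples, check your workspace for the actual ones):

```json
{
  "name": "mobile",
  "devDependencies": {
    "@workspace/api": "workspace:*"
  }
}
```

The Next.js app lists the same package under `dependencies` instead, since it actually serves the API at runtime.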
## How do I get support if I encounter issues?
For support, you can:
1. Visit our [Discord](https://discord.gg/KjpK2uk3JP)
2. Contact us via support email ([hello@turbostarter.dev](mailto:hello@turbostarter.dev))
## Are there any example projects or demos?
Yes - feel free to check out our demo app at [demo.turbostarter.dev](https://demo.turbostarter.dev). Also, you can get inspired by projects built by our customers - take a look at [Showcase](/#showcase).
## How do I deploy my application?
Please check the [production checklist](/docs/web/deployment/checklist) for more information.
## How do I update my project when a new version of the boilerplate is released?
Please read the [documentation for updating your TurboStarter code](/docs/web/installation/update).
## Can I use the React package X with this kit?
Yes, you can use any React package with this kit. The kit is based on React, so you are generally only constrained by the underlying technologies and not by the kit itself. Since you own and can edit all the code, you can adapt the kit to your needs. However, if there are limitations with the underlying technology, you might need to work around them.
## Can I integrate TurboStarter into an existing project?
TurboStarter is a full-stack starter intended to be used as the foundation of your app. You can copy individual modules or patterns into an existing codebase, but retrofitting the entire starter into a mature project is typically not recommended and is not officially supported. If you choose to copy parts, prefer isolating boundaries (e.g., `packages/` modules) and aligning interfaces first.
## Where can I deploy my application?
TurboStarter targets modern Node.js/Next.js runtimes. You can deploy to providers that support these environments, such as [Vercel](/docs/web/deployment/vercel), [Railway](/docs/web/deployment/railway), [Render](/docs/web/deployment/render), [Fly](/docs/web/deployment/fly), or [Netlify](/docs/web/deployment/netlify) - following their Next.js guidance. Review our [production checklist](/docs/web/deployment/checklist) before going live.
## Can I easily swap providers (billing, email, etc.)?
Yes. The starter organizes integrations behind clear interfaces so you can replace providers (e.g., billing or email) with minimal surface changes. Keep your implementation behind a module boundary and adapt to the existing types to avoid ripple effects.
---
url: /docs/web
title: Introduction
description: Get started with TurboStarter web kit.
---
Welcome to the TurboStarter documentation. This is your starting point for learning about the starter kit, its structure, features, and how to use it for your app development.
## What is TurboStarter?
TurboStarter is a fullstack starter kit that helps you build scalable and production-ready web apps, mobile apps, and browser extensions in minutes.
Looking to bootstrap your project quickly? Check out our [TurboStarter CLI guide](/blog/the-only-turbo-cli-you-need-to-start-your-next-project-in-seconds) to get started in seconds.
## Demo apps
TurboStarter provides a suite of live demo applications you can try instantly - right in your browser, on your phone, or via browser extensions. Try them live by clicking the buttons below.
## Principles
TurboStarter is built with the following principles:
* **As simple as possible** - It should be easy to understand, easy to use, and strongly avoid overengineering things.
* **As few dependencies as possible** - It should have as few dependencies as possible to allow you to take full control over every part of the project.
* **As performant as possible** - It should be fast and light without any unnecessary overhead.
## Features
Before diving into the technical details, let's review the features TurboStarter provides.
### Multi-platform development
* [Web](/docs/web/stack): Build web apps with React, Next.js, and Tailwind CSS.
* [Mobile](/docs/mobile/stack): Build mobile apps with React Native and Expo.
* [Browser extension](/docs/extension/stack): Build browser extensions with React and WXT.
If you're specifically interested in AI-related features (such as chatbots, agents, image generation, etc.), check out our dedicated [TurboStarter AI documentation](/ai/docs), which includes specialized guidance for building AI-powered applications.
Most features are available on all platforms. You can use the **same codebase** to build web, mobile, and browser extension apps.
### Authentication
### Organizations/teams
### Billing
### Database
### API
### Admin
### AI
Seamless integration of OpenAI, Anthropic, Groq, Mistral, and Gemini. For more advanced AI features, check out [TurboStarter AI](/ai/docs).
### Internationalization
### Emails
### Landing page
### Marketing
### Storage
### CMS
### Theming
### Analytics
### Monitoring
### Deployment
### Testing
## Use like LEGO blocks
The biggest advantage of TurboStarter is its modularity. You can use the entire stack or just the parts you need. It's like LEGO blocks - you can build anything you want with it.
If you don't need a specific feature, feel free to remove it without affecting the rest of the stack.
This approach allows for:
* **Easy feature integration** - plug new features into the kit with minimal changes.
* **Simplified maintenance** - keep the codebase clean and maintainable.
* **Core feature separation** - distinguish between core features and custom features.
* **Additional modules** - easily add modules like billing, CMS, monitoring, logger, mailer, and more.
## Scope of this documentation
While building a SaaS application involves many moving parts, this documentation focuses specifically on TurboStarter. For in-depth information on the underlying technologies, please refer to their respective official documentation.
This documentation will guide you through configuring, running, and deploying the kit, and will provide helpful links to the official documentation of technologies where necessary.
## Enjoy!
This documentation is designed to be easy to follow and understand. If you have any questions or need help, feel free to reach out to us at [hello@turbostarter.dev](mailto:hello@turbostarter.dev).
Explore new features, build amazing apps, and have fun! 🚀
---
url: /docs/web/installation/ai-development
title: AI-assisted development
description: Configure AI coding assistants like Cursor, Claude Code, Codex, or Antigravity to build your SaaS faster.
---
TurboStarter includes pre-configured rules, skills, subagents, and commands for AI coding assistants. These help AI understand your codebase, follow project conventions, and produce consistent, high-quality code.
Everything works out-of-the-box with all major AI tools like [Cursor](https://cursor.com), [Claude Code](https://claude.ai/code), [Codex](https://openai.com/codex), [Antigravity](https://antigravity.dev), and many more. Just open the project in your AI tool and start coding with the help of LLMs.
## Structure
The codebase organizes AI-specific configuration in the following structure:
The `.agents/` directory contains shared skills, commands, and agents that ship with TurboStarter. The tool-specific folders (e.g., `.cursor/`, `.claude/`, `.github/`) are [symlinked](https://en.wikipedia.org/wiki/Symbolic_link) to the `.agents/` directory, allowing you to add your own skills, commands, and agents to all tools at once while also customizing them individually.
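To make that layout concrete, here's how such a symlink can be created and inspected (paths are illustrative, run this in a scratch directory):

```bash
# Shared directory that ships with the kit (illustrative contents)
mkdir -p .agents/skills/my-skill

# Tool-specific folder pointing at the shared one; the tool then sees
# every shared skill, while files added directly under .cursor/ stay
# tool-specific customizations
mkdir -p .cursor
ln -sfn ../.agents/skills .cursor/skills

readlink .cursor/skills # prints ../.agents/skills
```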
## Rules
Rules provide persistent instructions that LLMs can read when they need to know more about specific parts of your project. They define code conventions, project structure, and workflow guidelines.
### AGENTS.md
The `AGENTS.md` file at the project root is the primary rules file. It uses a standardized format recognized by [most](https://agents.md) AI coding tools.
```md title="AGENTS.md"
## Agent rules
**DO:**
- Read existing files before editing; understand imports and structure first
- Keep diffs minimal and scoped to the request
...
**DON'T:**
- Commit, push, or modify git state unless explicitly asked
- Run destructive commands (`reset --hard`, force-push) without permission
...
## Code conventions
- TypeScript: functional, declarative; no classes
- File layout: exported component → subcomponents → helpers → types
```
Rules should be concise and actionable. Include only information the AI **cannot infer from code alone**, such as:
* Bash commands and common workflows
* Code style rules that differ from defaults
* Architectural decisions specific to your project
* Common gotchas or non-obvious behaviors
Keep rules short. Overly long files cause AI to ignore important instructions. If you notice the AI not following a rule, the file might be too verbose.
### CLAUDE.md
The `CLAUDE.md` file provides compatibility with Claude-specific tools. In TurboStarter, it simply references the main rules file:
```md title="CLAUDE.md"
@AGENTS.md
```
This ensures consistent behavior across all AI tools without duplicating content.
You can also nest AGENTS.md files in subdirectories to create more granular rules for specific parts of your project.
For example, you can create an `AGENTS.md` file in the `apps/web/` directory to add rules for the web application, or an `AGENTS.md` file in the `packages/api/` directory to add specific rules for the API.
The right approach depends on your project's complexity and where you need more targeted AI assistance.
Most providers allow you to add tool-specific rules. For example, Cursor rules go in the `.cursor/` directory, while Claude rules go in the `.claude/` directory.
If you primarily use one AI tool in your workflow, consider creating tool-specific rules rather than relying solely on the shared `AGENTS.md` file.
## Skills
Skills are modular capabilities that extend AI functionality with domain-specific knowledge. They package instructions, workflows, and reference materials that AI loads on-demand when relevant.
### How skills work
Skills are organized as directories containing a `SKILL.md` file and optionally a `references/` directory with additional documentation:
Each skill includes YAML frontmatter that describes when to use it:
```md title="SKILL.md"
---
name: better-auth-best-practices
description: Skill for integrating Better Auth - the comprehensive TypeScript authentication framework.
---
# Better Auth Integration Guide
**Always consult [better-auth.com/docs](https://better-auth.com/docs) for code examples and latest API.**
...
```
AI tools read the `description` field to determine when to apply the skill automatically. When triggered, the full skill content loads into context.
### Included skills
TurboStarter ships with several pre-configured skills covering common development scenarios:
| Skill | Description |
| ----------------------------- | ---------------------------------------------- |
| `turborepo` | Turborepo best practices and configuration |
| `better-auth-best-practices` | Auth integration patterns and API reference |
| `building-native-ui` | Mobile UI patterns with Expo and React Native |
| `native-data-fetching` | Network requests, caching, and offline support |
| `vercel-react-best-practices` | React and Next.js performance optimization |
| `vercel-composition-patterns` | Component architecture and API design |
| `web-design-guidelines` | UI review and accessibility compliance |
| `find-skills` | Discover and install additional skills |
### Installing skills
To install additional skills, we recommend using [Skills CLI](https://skills.sh), which allows you to easily install skills from the [open skills ecosystem](https://skills.sh). To install a skill, run:
```bash
npx skills add
```
Browse available skills at [skills.sh](https://skills.sh).
### Creating custom skills
If you have project-specific workflows, you can create your own skills:
Create a directory in `.agents/skills/`:
```bash
mkdir -p .agents/skills/my-custom-skill
```
Add a `SKILL.md` file with frontmatter:
```md title=".agents/skills/my-custom-skill/SKILL.md"
---
name: my-custom-skill
description: Handles X workflow. Use when working with Y or when user asks about Z.
---
# My Custom Skill
## Instructions
1. First, check the existing patterns in `packages/api/`
2. Follow the established naming conventions
3. ...
```
The skill will be automatically available in your AI tool. Test by asking about the topic described in the `description` field.
## Subagents
Subagents are specialized AI assistants that handle specific types of tasks in isolation. They operate in their own context window, preventing long research or review tasks from cluttering your main conversation.
### Included subagents
TurboStarter includes a code reviewer subagent:
```md title=".agents/agents/code-reviewer.md"
---
name: code-reviewer
description: Reviews code for quality, conventions, and potential issues.
model: inherit
readonly: true
---
You are a senior code reviewer for the TurboStarter project...
```
The subagent runs in read-only mode and checks for:
* TypeScript best practices (no `any`, explicit types)
* Component conventions (named exports, props interface)
* Architecture patterns (shared logic in packages)
* Security issues (no hardcoded secrets, proper auth)
### Using subagents
Invoke subagents explicitly in your prompts:
```txt
Use the code-reviewer to review the changes in src/modules/auth/
```
Or let the AI delegate automatically based on the task.
### Creating custom subagents
Add subagent definitions to `.agents/agents/`:
```md title=".agents/agents/security-auditor.md"
---
name: security-auditor
description: Security specialist. Use when implementing auth, payments, or handling sensitive data.
model: inherit
readonly: true
---
You are a security expert auditing code for vulnerabilities.
When invoked:
1. Identify security-sensitive code paths
2. Check for common vulnerabilities (injection, XSS, auth bypass)
3. Verify secrets are not hardcoded
4. Review input validation and sanitization
Report findings by severity: Critical, High, Medium, Low.
```
## Commands
Commands are reusable workflows triggered with a `/` prefix in chat. They standardize common tasks and encode institutional knowledge.
### Included commands
TurboStarter includes a feature setup command:
```md title=".agents/commands/setup-new-feature.md"
# Setup New Feature
Set up a new feature in the TurboStarter.dev website following project conventions.
## Before starting
1. **Clarify scope**: What part of the site needs this feature?
2. **Check existing code**: Look in `packages/*` for reusable logic
3. **Identify shared vs app-specific**: Shared logic goes in `packages/*`
## Project structure
...
```
### Using commands
Type `/` in chat to see available commands:
```txt
/setup-new-feature
```
Follow the guided workflow to scaffold features consistently.
### Creating custom commands
Add command definitions to `.agents/commands/`:
```md title=".agents/commands/fix-issue.md"
# Fix GitHub Issue
Fix a GitHub issue following project conventions.
## Steps
1. Use `gh issue view ` to get issue details
2. Search the codebase for relevant files
3. Implement the fix following existing patterns
4. Write tests to verify the fix
5. Run `pnpm typecheck` and `pnpm lint`
6. Create a descriptive commit message
7. Push and create a PR
```
## Model Context Protocol (MCP)
MCP enables AI tools to connect to external services like databases, APIs, and third-party tools. This allows AI to access real data and perform actions beyond code generation.
### Common MCP integrations
| Service | Use case |
| ---------------------------------------------------------------------------------------------- | -------------------------------------- |
| [GitHub](https://github.com/github/github-mcp-server) | Create issues, open PRs, read comments |
| [Database](https://github.com/crystaldba/postgres-mcp) | Query schemas, inspect data |
| [Figma](https://help.figma.com/hc/en-us/articles/32132100833559-Guide-to-the-Figma-MCP-server) | Import designs for implementation |
| [Linear](https://linear.app/docs/mcp)/[Jira](https://github.com/sooperset/mcp-atlassian) | Read tickets, update status |
| [Browser](https://browsermcp.io/) | Test UI, take screenshots |
For a full list of available MCP servers, see the [Cursor documentation](https://cursor.com/docs/context/mcp/directory) or the [MCP directory](https://www.pulsemcp.com/servers/).
### Setting up MCP
MCP configuration varies by tool. Generally, you create a configuration file that specifies server connections:
```json title="mcp.json"
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_TOKEN": "${env:GITHUB_TOKEN}"
      }
    }
  }
}
```
Consult your AI tool's documentation for specific setup instructions.
## Documentation
Like the rest of TurboStarter, the documentation is optimized for AI-assisted workflows. You can chat with it and get answers about specific features using the **most up-to-date** information.
### `llms.txt`
You can access the entire TurboStarter documentation in Markdown format at [/llms.txt](/llms.txt). This allows you to ask any LLM (assuming it has a large enough context window) questions about TurboStarter using the most up-to-date documentation.
#### Example usage
For example, to prompt an LLM with questions about TurboStarter:
1. Copy the documentation contents from [/llms.txt](/llms.txt)
2. Use the following prompt format:
```txt
Documentation:
{paste documentation here}
---
Based on the above documentation, answer the following:
{your question}
```
This works with any AI tool that accepts large context, regardless of whether it has native integration with your editor.
### Markdown format
Each documentation page is also available in raw Markdown format. You can copy the contents using the *Copy Markdown* button in the page header.
You can also access it directly by adding the `.mdx` extension to the specific documentation page. For example, to access this page, visit [/docs/web/installation/ai-development.mdx](/docs/web/installation/ai-development.mdx).
### Open in ...
To make chatting with TurboStarter documentation even more convenient, each page includes an *Open in...* button in the header that opens the documentation directly in your preferred chatbot.
For example, opening the documentation page in [ChatGPT](https://chatgpt.com) will create a new chat with the documentation automatically attached as a context:

## Best practices
Following best practices helps you get the most out of AI-assisted development. Review the tips below and share your experiences on our [Discord](https://discord.gg/KjpK2uk3JP) server.
### Plan before coding
The most impactful change you can make is planning before implementation. Planning forces clear thinking about what you're building and gives the AI concrete goals to work toward.
For complex tasks, use this workflow:
1. **Explore**: Have the AI read files and understand the existing architecture
2. **Plan**: Ask for a detailed implementation plan with file paths and code references
3. **Implement**: Execute the plan, verifying against each step
4. **Commit**: Review changes and commit with descriptive messages
Not every task needs a detailed plan. For quick changes or familiar patterns, jumping straight to implementation is fine.
### Provide verification criteria
AI performs dramatically better when it can verify its own work. Include tests, screenshots, or expected outputs:
```txt
// Instead of:
"implement email validation"
// Use:
"write a validateEmail function. test cases: user@example.com → true,
invalid → false, user@.com → false. run tests after implementing."
```
Without clear success criteria, the AI might produce something that looks right but doesn't actually work. Verification can be a test suite, a linter, or a command that checks output.
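For instance, the prompt above might yield something like the following (a hypothetical implementation, the regex is one of many reasonable choices):

```ts
// Hypothetical implementation of the prompt above. Requires one "@",
// a non-empty local part, and dot-separated non-empty domain labels
// (so "user@.com" is rejected).
const validateEmail = (email: string): boolean =>
  /^[^\s@]+@([^\s@.]+\.)+[^\s@.]+$/.test(email);

// The test cases from the prompt, runnable immediately:
console.assert(validateEmail("user@example.com") === true);
console.assert(validateEmail("invalid") === false);
console.assert(validateEmail("user@.com") === false);
```

Because the success criteria were stated up front, both you and the AI can check the result mechanically instead of eyeballing it.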
### Write specific prompts
The more precise your instructions, the fewer corrections you'll need. Reference specific files, mention constraints, and point to example patterns:
| Strategy | Before | After |
| ---------------------- | ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Scope the task** | "add tests for auth" | "write a test for `auth.ts` covering the logout edge case, using patterns in `__tests__/` and avoiding mocks" |
| **Reference patterns** | "add a calendar widget" | "look at how existing widgets are implemented. `HotDogWidget.tsx` is a good example. follow the pattern to implement a calendar widget" |
| **Describe symptoms** | "fix the login bug" | "users report login fails after session timeout. check the auth flow in `src/auth/`, especially token refresh. write a failing test, then fix it" |
### Use absolute rules
When writing rules, be direct. Absolute rules beat suggestions. "Always verify ownership with `userId` before database writes" works. "Consider checking ownership" gets ignored.
Structure rules with clear "MUST do" and "MUST NOT do" sections:
```md
## MUST DO
- Verify ownership before ALL database writes
- Run `pnpm typecheck` after every implementation
- Use `@workspace/ui` components - never install shadcn directly
## MUST NOT DO
- Never use `any` type - fix the types instead
- Never store secrets in code - use environment variables
- Never create new UI components if one exists in @workspace/ui
```
### Use rules as a router
Tell AI where and how to find things. This prevents hallucinated file paths and inconsistent patterns:
```md
## Where to find things
- Database schemas: `packages/db/src/schema/`
- Server action patterns: `apps/web/app/api/`
- UI components: `packages/ui/src/`
- Existing features to reference: `apps/web/app/`
```
### Course-correct early
Stop AI mid-action if it goes off track. Most tools support an interrupt key (usually `Esc`). Redirect early rather than waiting for a complete but wrong implementation.
If you've corrected the AI more than twice on the same issue in one session, the context is cluttered with failed approaches. Start fresh with a more specific prompt that incorporates what you learned.
### Manage context aggressively
Long sessions accumulate irrelevant context that degrades AI performance. Clear context between unrelated tasks or start fresh sessions for new features.
**Start a new conversation when:**
* You're moving to a different task or feature
* The AI seems confused or keeps making the same mistakes
* You've finished one logical unit of work
**Continue the conversation when:**
* You're iterating on the same feature
* The AI needs context from earlier in the discussion
* You're debugging something it just built
### Use subagents for research
When exploring unfamiliar code, delegate to subagents. They run in separate context windows and report back summaries, keeping your main conversation clean for implementation.
This is especially useful for:
* Codebase exploration that might read many files
* Code review (fresh context prevents bias toward code just written)
* Security audits and performance analysis
### Review AI-generated code carefully
AI-generated code can look right while being subtly wrong. Read the diffs and review carefully. The faster the AI works, the more important your review process becomes.
For significant changes, consider:
* Running a dedicated review pass after implementation
* Asking the AI to generate architecture diagrams
* Using a separate AI session to review the changes (fresh context)
### Add business domain context
Generic rules produce generic code. Add your application's domain to help AI understand context:
```md
## Business Domain
This application is a project management tool for software teams.
### Key Entities
- **Projects**: User-created workspaces containing tasks
- **Tasks**: Work items with status, assignee, and due date
### Business Rules
- Projects belong to organizations (use organizationId for queries)
- Tasks require project membership to view (check via RBAC)
- Deleted projects cascade-delete all tasks
```
## Troubleshooting
Common issues when using AI coding assistants and how to resolve them:
**The AI isn't following your rules:**

1. Check that `AGENTS.md` exists at the project root
2. Verify the file contains valid Markdown
3. Some tools require reopening the project to reload rules
4. Check if the file is too long - important rules may be getting lost in the noise
5. Try adding emphasis (e.g., "IMPORTANT" or "MUST") to critical instructions
Long sessions cause AI to "forget" rules and earlier instructions. This happens because:
* Context windows fill up with irrelevant information
* Important instructions get pushed out during summarization
* Failed approaches pollute the conversation
**Solutions:**
1. Start fresh sessions for complex or unrelated tasks
2. Re-state important rules when you notice drift
3. After two failed corrections, clear context and write a better initial prompt
**A skill isn't triggering automatically:**

1. Verify the skill's `description` field clearly describes when to use it
2. Try invoking the skill explicitly by name (e.g., `/skill-name`)
3. Check that the `SKILL.md` file has valid YAML frontmatter
4. Skills may require explicit invocation for workflows with side effects
**A subagent isn't being used:**

1. Ensure subagent files are in the correct directory (`.agents/agents/`)
2. Check the frontmatter for syntax errors
3. Some tools require specific configuration to enable subagents
4. Verify the `name` and `description` fields are properly defined
AI can produce plausible-looking implementations that don't handle edge cases or reference non-existent APIs.
**Prevention:**
1. Always provide verification criteria (tests, expected outputs)
2. Use typed languages and configure linters
3. Point AI to reference implementations rather than documenting APIs
4. Run verification commands after every implementation
**Recovery:**
1. Don't try to fix incorrect code through follow-up prompts repeatedly
2. Revert changes and start fresh with a more specific prompt
3. Use a dedicated review pass to catch issues before committing
When you have multiple `AGENTS.md` files (root and package-level), they can conflict. Generally, the more specific file (closer to the code being edited) takes priority.
**Solutions:**
1. Check which `AGENTS.md` is being read by asking the AI
2. Consolidate conflicting rules into one location
3. Use package-level rules only for domain-specific guidance
Unbounded exploration fills context with irrelevant information.
**Solutions:**
1. Scope investigations narrowly: "search for JWT validation in `src/auth/`" instead of "find auth code"
2. Use subagents for exploration so it doesn't consume your main context
3. Specify file types or directories to limit search scope
Large codebases or long sessions can consume significant resources.
**Solutions:**
1. Use compact/summarize features regularly to reduce context size
2. Close and restart between major tasks
3. Make sure large directories (e.g., `node_modules`, `dist`) are excluded via `.gitignore` or your tool's ignore file so they aren't indexed
4. Disable unnecessary extensions that might impact performance
## Learn more
Dive deeper into AI-assisted development with these resources. They cover open standards, tool directories, and specifications that power modern AI coding workflows.
---
url: /docs/web/installation/clone
title: Cloning repository
description: Get the code to your local machine and start developing.
---
Ensure you have Git installed on your local machine before proceeding. You can download Git from [here](https://git-scm.com).
## Git clone
Clone the repository using the following command:
```bash
git clone git@github.com:turbostarter/core
```
By default, we're using [SSH](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) for all Git commands. If you don't have it configured, please refer to the [official documentation](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) to set it up.
Alternatively, you can use HTTPS to clone the repository:
```bash
git clone https://github.com/turbostarter/core
```
Another alternative could be to use the [Github CLI](https://cli.github.com/) or [Github Desktop](https://desktop.github.com/) for Git operations.
## Git remote
After cloning the repository, remove the original origin remote:
```bash
git remote rm origin
```
Add the upstream remote pointing to the original repository to pull updates:
```bash
git remote add upstream git@github.com:turbostarter/core
```
Once you have your own repository set up, add your repository as the origin:
```bash
git remote add origin
```
## Staying up to date
To pull updates from the upstream repository, run the following command daily (preferably with your morning coffee ☕):
```bash
git pull upstream main
```
This ensures your repository stays up to date with the latest changes.
Check [Updating codebase](/docs/web/installation/update) for more details on updating your codebase.
---
url: /docs/web/installation/commands
title: Common commands
description: Learn about common commands you need to know to work with the project.
---
You certainly don't need these commands to kickstart your project, but it's useful to know they exist for when you need them.
You can set up aliases for these commands in your shell configuration file. For example, you can set up an alias for `pnpm` to `p`:
```bash title="~/.bashrc"
alias p='pnpm'
```
Or, if you're using [Zsh](https://ohmyz.sh/), you can add the alias to `~/.zshrc`:
```bash title="~/.zshrc"
alias p='pnpm'
```
Then run `source ~/.bashrc` or `source ~/.zshrc` to apply the changes.
You can now use `p` instead of `pnpm` in your terminal. For example, `p i` instead of `pnpm install`.
To inject environment variables into the command you run, prefix it with `with-env`:
```bash
pnpm with-env
```
For example, `pnpm with-env pnpm build` will run `pnpm build` with the environment variables injected.
Some commands, like `pnpm dev`, automatically inject the environment variables for you.
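Under the hood, `with-env` is typically just a package script that loads the root env file before running the given command. A minimal sketch assuming [dotenv-cli](https://github.com/entropitor/dotenv-cli) (the tool and env file path are assumptions, check the kit's actual `package.json` for the real definition):

```json
{
  "scripts": {
    "with-env": "dotenv -e ../../.env --"
  }
}
```

Everything after the `--` runs with the variables from the env file injected.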
## Installing dependencies
To install the dependencies, run:
```bash
pnpm install
```
## Starting development server
Start development server by running:
```bash
pnpm dev
```
## Building project
To build the project (including all apps and packages), run:
```bash
pnpm build
```
## Building specific app/package
To build a specific app/package, run:
```bash
pnpm turbo build --filter=
```
## Cleaning project
To clean the project, run:
```bash
pnpm clean
```
Then, reinstall the dependencies:
```bash
pnpm install
```
## Formatting code
To check for formatting errors using [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html), run:
```bash
pnpm format
```
To fix formatting errors using [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html), run:
```bash
pnpm format:fix
```
## Linting code
To check for linting errors using [Oxlint](https://oxc.rs/docs/guide/usage/linter.html), run:
```bash
pnpm lint
```
To fix linting errors using [Oxlint](https://oxc.rs/docs/guide/usage/linter.html), run:
```bash
pnpm lint:fix
```
## Adding UI components
To add a new web component, run:
```bash
pnpm --filter @workspace/ui-web ui:add
```
This command will add and export a new component to the `@workspace/ui-web` package.
To add a new mobile component, run:
```bash
pnpm --filter @workspace/ui-mobile ui:add
```
This command will add and export a new component to the `@workspace/ui-mobile` package.
## Services commands
To run the services containers locally, you need to have [Docker](https://www.docker.com/) installed on your machine.
You can always use the cloud-hosted solution (e.g. [Neon](https://neon.tech/), [Turso](https://turso.tech/) for database) for your projects.
We have a few commands to help you manage the services containers (for local development).
### Starting containers
To start the services containers, run:
```bash
pnpm services:start
```
It will run all the services containers. You can check their configs in `docker-compose.yml`.
### Setting up services
To set up all the services, run:
```bash
pnpm services:setup
```
It will start all the services containers and run necessary setup steps.
### Stopping containers
To stop the services containers, run:
```bash
pnpm services:stop
```
### Displaying status
To check the status and logs of the services containers, run:
```bash
pnpm services:status
```
### Displaying logs
To display the logs of the services containers, run:
```bash
pnpm services:logs
```
### Database commands
We have a few commands to help you manage the database leveraging [Drizzle CLI](https://orm.drizzle.team/kit-docs/commands).
#### Generating migrations
To generate a new migration, run:
```bash
pnpm with-env turbo db:generate
```
It will create a new migration `.sql` file in the `packages/db/migrations` folder.
#### Running migrations
To run the migrations against the db, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:migrate
```
It will apply all the pending migrations.
#### Pushing changes directly
Make sure you know what you're doing before pushing changes directly to the db.
To push changes directly to the db, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:push
```
It lets you push your schema changes directly to the database and omit managing SQL migration files.
#### Checking database status
To check the status of the database, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:status
```
It will display the status of the applied migrations and the pending ones.
```bash
Applied migrations:
- 0000_cooing_vargas
- 0001_curious_wallflower
- 0002_good_vertigo
- 0003_peaceful_devos
- 0004_fat_mad_thinker
- 0005_yummy_bucky
- 0006_glorious_vargas
Pending migrations:
- 0007_nebulous_havok
```
#### Resetting database
To reset the database, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:reset
```
It will reset the database to the initial state.
#### Seeding database
To seed the database with some example data (for development purposes), run:
```bash
pnpm with-env turbo db:seed
```
It will populate your database with some example data.
#### Checking database
To check the database schema consistency, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:check
```
#### Browsing database
To browse the database schema in the browser using Drizzle Studio, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:studio
```
This will start the Studio on [https://local.drizzle.studio](https://local.drizzle.studio).
## Tests commands
### Running tests
To run the tests, run:
```bash
pnpm test
```
This will run all the tests in the project using Turbo tasks. As it leverages Turbo caching, it's [recommended](/docs/web/tests/unit#configuration) to run it in your CI/CD pipeline.
### Running test projects
To run tests for all Vitest [Test Projects](https://vitest.dev/guide/projects), run:
```bash
pnpm test:projects
```
This will run all the tests in the project using Vitest.
### Watching tests
To watch the tests, run:
```bash
pnpm test:projects:watch
```
This will watch the tests for all [Test Projects](https://vitest.dev/guide/projects) and run them automatically when you make changes.
### Generating code coverage
To generate code coverage report, run:
```bash
pnpm turbo test:coverage
```
This will generate a code coverage report in the `coverage` directory under `tooling/vitest` package.
### Viewing code coverage
To preview the code coverage report in the browser, run:
```bash
pnpm turbo test:coverage:view
```
This will launch the report's `.html` file in your default browser.
---
url: /docs/web/installation/conventions
title: Conventions
description: Some standard conventions used across the TurboStarter codebase.
---
You're not required to follow these conventions; they're simply a standard set of practices used in the core kit. If you like them, we encourage you to keep them during your usage of the kit so you have a consistent code style that you and your teammates understand.
## Turborepo packages
In this project, we use [Turborepo packages](https://turbo.build/repo/docs/core-concepts/internal-packages) to define reusable code that can be shared across multiple applications.
* **Apps** are used to define the main application, including routing, layout, and global styles.
* **Packages** share reusable code and add functionality across multiple applications. They're configurable from the main application.
**Recommendation:** Do not create a package for your app code unless you plan to reuse it across multiple applications or are experienced in writing library code.
If your application is not intended for reuse, keep all code in the app folder. This approach saves time and reduces complexity, both of which are beneficial for fast shipping.
**Experienced developers:** If you have the experience, feel free to create packages as needed.
## Imports and paths
When importing modules from packages or apps, use the following conventions:
* **From a package:** Use `@workspace/package-name` (e.g., `@workspace/ui`, `@workspace/api`, etc.).
* **From an app:** Use `~/` (e.g., `~/components`, `~/config`, etc.).
## Enforcing conventions
We don't enforce complex rules or specific style guides that are not relevant to the project, giving you more freedom to customize things to your needs.
To enforce these conventions, we use the following tools:
* [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html) is a [Prettier-compatible](https://oxc.rs/docs/guide/usage/formatter/migrate-from-prettier.html) tool used to enforce code formatting.
* [Oxlint](https://oxc.rs/docs/guide/usage/linter.html) is an [ESLint-compatible](https://oxc.rs/docs/guide/usage/linter/migrate-from-eslint.html) tool used to enforce code quality and best practices.
* [TypeScript](https://www.typescriptlang.org/) is used to enforce type safety.
## Code health
TurboStarter provides a set of tools to ensure code health and quality in your project.
### GitHub Actions
By default, TurboStarter sets up GitHub Actions to run tests on every push to the repository. You can find the workflow configuration in the `.github/workflows` directory.
The workflow has multiple stages:
* `format` - runs Oxfmt to format the code.
* `lint` - runs Oxlint to check for linting errors.
* `test` - runs tests.
### Git hooks
Together with TurboStarter, we have set up a `pre-commit` hook that will check for linting and formatting errors in the files being committed.
It's configured using [Lefthook](https://lefthook.dev), which supports multiple hooks and can be configured to run commands on specific files or directories.
Feel free to customize the hook to your needs, e.g. to check consistency of commit messages (useful for generating changelogs) using [commitlint](https://commitlint.js.org/):
```yaml title="lefthook.yml"
commit-msg:
  commands:
    "lint commit message":
      run: pnpm commitlint --edit {1}
```
---
url: /docs/web/installation/dependencies
title: Managing dependencies
description: Learn how to manage dependencies in your project.
---
We chose [pnpm](https://pnpm.io/) as the package manager.
It is a fast, disk-space-efficient package manager that uses hard links and symlinks to store each version of a module only once on disk. It also has great [monorepo support](https://pnpm.io/workspaces). Of course, you can switch to [Bun](https://bun.sh), [Yarn](https://yarnpkg.com), or [npm](https://www.npmjs.com) with minimal effort.
## Install dependency
To install a package you need to decide whether you want to install it to the root of the monorepo or to a specific workspace. Installing it to the root makes it available to all packages, while installing it to a specific workspace makes it available only to that workspace.
To install a package globally, run:
```bash
pnpm add -w
```
To install a package to a specific workspace, run:
```bash
pnpm add --filter
```
For example:
```bash
pnpm add --filter @workspace/ui motion
```
It will install `motion` to the `@workspace/ui` workspace.
## Remove dependency
Removing a package works the same as installing it, but with the `remove` command.
To remove a package globally, run:
```bash
pnpm remove -w
```
To remove a package from a specific workspace, run:
```bash
pnpm remove --filter
```
## Update a package
Updating is a bit easier since there is a nice way to update a package in all workspaces at once:
```bash
pnpm update -r
```
When you update a package, pnpm respects the [semantic versioning](https://docs.npmjs.com/about-semantic-versioning) ranges defined in the `package.json` file. To update a package to the latest version regardless of the range, use the `--latest` flag, e.g. `pnpm update -r --latest`.
## Renovate bot
By default, TurboStarter comes with [Renovate](https://www.npmjs.com/package/renovate) enabled. It is a tool that helps you manage your dependencies by automatically creating pull requests to update your dependencies to the latest versions. You can find its configuration in the `.github/renovate.json` file. Learn more about it in the [official docs](https://docs.renovatebot.com/configuration-options/).
When it creates a pull request, it is treated as a normal PR, so all tests and preview deployments will run. **It is recommended to always preview and test the changes in the staging environment before merging the PR to the main branch to avoid breaking the application.**
---
url: /docs/web/installation/development
title: Development
description: Get started with the code and develop your SaaS.
---
## Prerequisites
To get started with TurboStarter, ensure you have the following installed and set up:
* [Node.js](https://nodejs.org/en) (24.x or higher)
* [Docker](https://www.docker.com) (only if you want to use local services e.g. database)
* [pnpm](https://pnpm.io)
## Project development
### Install dependencies
Install the project dependencies by running the following command:
```bash
pnpm i
```
### Setup environment variables
Create `.env.local` files from the `.env.example` files and fill in the required environment variables.
You can use the following command to recursively copy the `.env.example` files to the `.env.local` files:
```bash
find . -name ".env.example" -exec sh -c 'cp "$1" "${1%.example}.local"' _ {} \;
```
Or, if you're on Windows, use PowerShell:
```powershell
Get-ChildItem -Recurse -Filter ".env.example" | ForEach-Object {
  Copy-Item $_.FullName ($_.FullName -replace '\.example$', '.local')
}
```
Check [Environment variables](/docs/web/configuration/environment-variables) for more details on setting up environment variables.
### Setup services
If you want to use local services like the [database](/docs/web/database/overview) (**recommended for development purposes**), ensure Docker is running, then set them up with:
```bash
pnpm services:setup
```
This command initiates the containers and runs necessary setup steps, ensuring your services are up to date and ready to use.
### Start development server
To start the application development server, run:
```bash
pnpm dev
```
Your app should now be up and running at [http://localhost:3000](http://localhost:3000) 🎉
### Deploy to Production
When you're ready to deploy the project to production, follow the [checklist](/docs/web/deployment/checklist) to ensure everything is set up correctly.
---
url: /docs/web/installation/editor-setup
title: Editor setup
description: Learn how to set up your editor for the fastest development experience.
---
Of course, you can use any IDE you like, but you'll have the best possible developer experience with this starter kit when using a **VSCode-based** editor with the suggested settings and extensions.
## Settings
We've included most recommended settings in the `.vscode/settings.json` file to make your development experience as smooth as possible. It includes configuration for tools like [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html), [Oxlint](https://oxc.rs/docs/guide/usage/linter.html), and Tailwind CSS, which are used to enforce conventions across the codebase. You can adjust them to your needs.
```json title=".vscode/settings.json"
{
  "editor.formatOnSave": true,
  "editor.codeActionsOnSave": {
    "source.fixAll.oxc": "always"
  }
  ...
}
```
## Extensions
Once you've cloned the project and opened it in VSCode, you should be prompted to install the suggested extensions, which are defined in `.vscode/extensions.json`. If you'd rather install them manually, you can do so at any time.
These are the extensions we recommend:
### OXC
Global extension for static code analysis. It will help you find and fix problems in your JavaScript/TypeScript code using [Oxfmt](https://oxc.rs/docs/guide/usage/formatter.html) and [Oxlint](https://oxc.rs/docs/guide/usage/linter.html). It's compatible with [Prettier](https://prettier.io/) and [ESLint](https://eslint.org/).
### Pretty TypeScript Errors
Improves TypeScript error messages shown in the editor.
### Tailwind CSS IntelliSense
Adds IntelliSense for Tailwind CSS classes to enable autocompletion and linting.
---
url: /docs/web/installation/structure
title: Project structure
description: Learn about the project structure and how to navigate it.
---
The main directories in the project are:
* `apps` - the location of the main apps
* `packages` - the location of the shared code and the API
### `apps` Directory
This is where the apps live. It includes the web app (Next.js), the mobile app (React Native - Expo), and the browser extension (WXT - Vite + React). Each app has its own directory.
### `packages` Directory
This is where the shared code and the API for packages live. It includes the following:
* shared libraries (database, mailers, cms, billing, etc.)
* shared features (auth, mails, billing, ai etc.)
* UI components (buttons, forms, modals, etc.)
All apps can use and reuse the API exported from the packages directory. This makes it easy to have one, or many apps in the same codebase, sharing the same code.
## Repository structure
By default the monorepo contains the following apps and packages:
## Web application structure
The web application is located in the `apps/web` folder. It contains the following folders:
---
url: /docs/web/installation/update
title: Updating codebase
description: Learn how to update your codebase to the latest version.
---
If you've been following along with our previous guides, you should already have a Git repository set up for your project, with an `upstream` remote pointing to the original repository.
Updating your project involves fetching the latest changes from the `upstream` remote and merging them into your project. Let's dive into the steps!
## Stash changes
If you have any uncommitted changes, stash them before proceeding. This will help you avoid conflicts that may arise during the update process.
If you don't have any changes to stash, you can skip this step and proceed with the update.
Alternatively, you can [commit](https://git-scm.com/docs/git-commit) your changes.
```bash
git stash
```
This command will save your changes in a temporary location, allowing you to retrieve them later. Once you're done updating, you can apply the stash to your working directory.
```bash
git stash apply
```
## Pull changes
Pull the latest changes from the `upstream` remote.
```bash
git pull upstream main
```
When prompted the first time, please opt for merging instead of rebasing.
Don't forget to run `pnpm i` in case there are any updates in the dependencies.
## Resolve conflicts
If there are any conflicts during the merge, Git will notify you. You can resolve them by opening the conflicting files in your code editor and making the necessary changes.
If you find conflicts in the `pnpm-lock.yaml` file, accept either of the two changes (avoid manual edits), then run:
```bash
pnpm i
```
Your lock file will now reflect both your changes and the updates from the upstream repository.
## Run a health check
After resolving the conflicts, it's time to test your project to ensure everything is working as expected. Run your project locally and navigate through the various features to verify that everything is functioning correctly.
For a quick health check, you can run:
```bash
pnpm lint
pnpm typecheck
```
If everything looks good, you're all set! Your project is now up to date with the latest changes from the `upstream` repository.
## Commit and push
Once everything is working fine, don't forget to commit your changes using:
```bash
git commit -m ""
```
and push them to your remote repository with:
```bash
git push origin
```
---
url: /docs/web/internationalization/configuration
title: Configuration
description: Learn how to configure internationalization in TurboStarter.
---
The default global configuration is defined in the `@workspace/i18n` package and shared across all applications. You can override it in each app to customize the internationalization setup for that specific app.
The configuration is defined in the `packages/i18n/src/config.ts` file:
```ts title="packages/i18n/src/config.ts"
export const config = {
  locales: ["en", "es"],
  defaultLocale: "en",
  namespaces: [
    "common",
    "admin",
    "organization",
    "dashboard",
    "auth",
    "billing",
    "marketing",
    "validation",
  ],
  cookie: "locale",
} as const;
```
Let's break down the configuration options:
* `locales`: An array of all supported locales.
* `defaultLocale`: The default locale to use if no other locale is detected.
* `namespaces`: An array of all namespaces used in the application.
* `cookie`: The name of the cookie to store the detected locale (acts as a cache).
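Because the configuration is declared with `as const`, a type-safe `Locale` union can be derived directly from it. A small sketch, using an inlined copy of the `locales` array shown above (in the kit you'd import the config from `@workspace/i18n`):

```ts
// Inlined copy of the locales from the config above; `as const`
// preserves the literal types instead of widening to string[].
const config = {
  locales: ["en", "es"],
  defaultLocale: "en",
} as const;

// "en" | "es" -- derived from the array, so adding a locale to the
// config automatically widens the type everywhere it's used.
type Locale = (typeof config)["locales"][number];

// Runtime guard that narrows a plain string to the Locale union.
const isLocale = (value: string): value is Locale =>
  (config.locales as readonly string[]).includes(value);
```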
## Translation files
The core of the whole internationalization setup is the translation files. They are stored in the `packages/i18n/src/translations` directory and are used to store the translations for each locale and namespace.
Each directory represents a locale and contains a set of files, each corresponding to a specific namespace (e.g. `en/common.json`). Inside we define the keys and values for the translations.
```json title="packages/i18n/src/translations/en/common.json"
{
  "hello": "Hello, world!"
}
```
That way we can ensure that we have a single source of truth for the translations and we can use them consistently in all the applications.
## Locales
The `locales` array in the configuration defines the list of supported languages in your application. Each locale is represented by a string that uniquely identifies the language.
To add a new locale, you need to:
1. Add the new locale to the `locales` array in the configuration.
2. Create a new directory in the `packages/i18n/src/translations` directory.
3. Create a new file in the new directory for each namespace and add the translations for the new locale.
For example, if you want to add the `fr` locale, you need to:
1. Add `fr` to the `locales` array in the configuration.
2. Create a new `fr` directory in the `packages/i18n/src/translations` directory.
3. Create a new file for each namespace in the created directory and add the translations for the new locale.
### Fallback locale
The `defaultLocale` option in the configuration defines the fallback locale. If a translation is not found for a specific locale, the fallback locale will be used.
We can also override this setting in each [app configuration](/docs/web/configuration/app) by configuring the `locale` property.
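Conceptually, fallback resolution is a chained lookup: try the requested locale first, then the default one, and finally return the key itself so the UI never renders an empty string. A toy sketch of that chain (not the actual i18next implementation):

```ts
// Map of locale -> namespace-flattened translations (toy structure).
type Resources = Record<string, Record<string, string> | undefined>;

// Toy fallback resolution: requested locale -> default locale -> raw key.
const resolve = (
  key: string,
  locale: string,
  resources: Resources,
  defaultLocale = "en",
): string =>
  resources[locale]?.[key] ?? resources[defaultLocale]?.[key] ?? key;
```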
## Namespaces
`namespaces` are used to group translations by feature or module. This helps in organizing the translations and makes it easier to maintain them.
### Why not one big namespace?
Using multiple namespaces instead of one large namespace helps with:
1. **Performance:** load translations on-demand instead of all at once, reducing the initial bundle size.
2. **Organization:** group translations by feature (e.g., `auth`, `common`, `dashboard`).
3. **Maintenance:** easier to update and manage smaller translation files.
4. **Development:** better TypeScript support and team collaboration.
For example, you might structure your namespaces like this:
```json title="packages/i18n/src/translations/en/common.json"
{
  "hello": "Hello, world!"
}
```
```json title="packages/i18n/src/translations/en/auth.json"
{
  "login": "Login",
  "register": "Register"
}
```
```json title="packages/i18n/src/translations/en/billing.json"
{
  "invoice": "Invoice",
  "payment": "Payment",
  "subscription": "Subscription"
}
```
Remember that while you can create as many namespaces as needed, it's important to maintain a balance - too many namespaces can lead to unnecessary complexity, while too few might defeat the purpose of separation.
## Routing
TurboStarter implements locale-based routing by placing pages under the `[locale]` folder. However, the default locale (usually `en`) is not prefixed in the URL for better SEO and user experience.
For example, with English as the default locale and Polish as an additional language:
* `/dashboard` → English version (default locale)
* `/pl/dashboard` → Polish version
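The prefixing rule boils down to one line: every locale except the default gets a `/<locale>` prefix. A sketch, assuming `en` is the default locale:

```ts
// Build a locale-aware path: the default locale stays unprefixed,
// every other locale gets a /<locale> prefix (illustrative sketch).
const localizePath = (
  path: string,
  locale: string,
  defaultLocale = "en",
): string => (locale === defaultLocale ? path : `/${locale}${path}`);
```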
The app also automatically detects the user's preferred language through cookies, HTML `lang` attribute, and browser's `Accept-Language` header.
This ensures a seamless experience where users get content in their preferred language while maintaining clean URLs for the default locale.
You can override the locale by manually setting the cookie or by navigating to a URL with a different locale prefix.
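The detection order described above (cookie first, then the browser's `Accept-Language` header, then the default) can be sketched as follows; this is illustrative, not the actual middleware code:

```ts
// Pick a locale: cookie -> Accept-Language -> default (illustrative sketch).
const detectLocale = (
  cookieLocale: string | undefined,
  acceptLanguage: string | undefined,
  supported: readonly string[],
  defaultLocale: string,
): string => {
  if (cookieLocale && supported.includes(cookieLocale)) return cookieLocale;
  // "pl-PL,pl;q=0.9,en;q=0.8" -> ["pl", "pl", "en"]
  const preferred =
    acceptLanguage
      ?.split(",")
      .map((part) => part.split(";")[0].trim().split("-")[0]) ?? [];
  return preferred.find((l) => supported.includes(l)) ?? defaultLocale;
};
```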
---
url: /docs/web/internationalization/overview
title: Overview
description: Get started with internationalization in TurboStarter.
---
TurboStarter uses [i18next](https://www.i18next.com/) for internationalization, which is one of the most popular and mature (over 10 years of development!) i18n frameworks for JavaScript.
With i18next, you can easily translate your application into multiple
languages, handle complex pluralization rules, format dates and numbers
according to locale, and much more. The framework is highly extensible through
plugins and provides excellent TypeScript support out of the box.
You can read more about `i18next` package in the [official documentation](https://www.i18next.com/overview/getting-started).

## Getting started
TurboStarter comes with `i18next` pre-configured and abstracted behind the `@workspace/i18n` package. This abstraction layer ensures that any future changes to the underlying translation library won't impact your application code. The internationalization setup is ready to use out of the box and includes:
* Multiple language support out of the box
* Type-safe translations with generated types
* Automatic language detection
* Easy-to-use React hooks for translations
* Built-in number and date formatting
* Support for nested translation keys
* Pluralization handling
To start using internationalization in your app, you'll need to:
1. Configure your supported languages
2. Add translation files
3. Use translation hooks in your components
Check out the following sections to learn more about each step:
---
url: /docs/web/internationalization/translations
title: Translating app
description: Learn how to translate your application to multiple languages.
---
TurboStarter provides a flexible and powerful translation system that works seamlessly across your entire application. Whether you're working with React Server Components (RSC), client-side components, or server-side rendering, you can easily integrate translations to create a fully internationalized experience.
The translation system supports:
* **Server components (RSC)** for efficient server-side translations
* **Client components** for dynamic language switching
* **Server-side rendering** for SEO-friendly translated content
## Server components (RSC)
To get the translations in a server component, you can use the `getTranslation` method:
```tsx
import { getTranslation } from "@workspace/i18n";

export default async function MyComponent() {
  const { t } = await getTranslation();

  return <div>{t("common:hello")}</div>;
}
```
There is also a possibility to use the [Trans](https://react.i18next.com/latest/trans-component) component, which could be useful e.g. for interpolating variables:
```tsx
import { Trans } from "@workspace/i18n";
import { withI18n } from "@workspace/i18n/with-i18n";

const Page = () => {
  // the translation key and values below are illustrative
  return <Trans i18nKey="common:welcome" values={{ name: "John" }} />;
};

export default withI18n(Page);
```
Note that to make the `Trans` component available in a server component, you need to wrap the page with the `withI18n` HOC.
Given that server components are rendered in parallel, it's uncertain which one will render first. Therefore, it's crucial to initialize the translations before rendering the server component on each page/layout.
## Client components
For client components, you can use the `useTranslation` hook from the `@workspace/i18n` package:
```tsx
"use client";

import { useTranslation } from "@workspace/i18n";

export default function MyComponent() {
  const { t } = useTranslation();

  return <div>{t("common:hello")}</div>;
}
```
That's the simplest way to get the translations in a client component.
## Server-side
In all other places (e.g. metadata, API routes, sitemaps etc.) you can use the `getTranslation` method to get the translations server-side:
```ts
import { getTranslation } from "@workspace/i18n";

export const generateMetadata = async () => {
  const { t } = await getTranslation();

  return {
    title: t("common:title"),
  };
};
```
It automatically checks the user's preferred locale and uses the correct translation.
## Language switcher
TurboStarter ships with a language customizer component that allows you to switch between languages. You can import and use the `LocaleCustomizer` component and drop it anywhere in your application to allow users to change the language seamlessly.
```tsx
import { LocaleCustomizer } from "@workspace/ui-web/i18n";

export default function MyComponent() {
  return <LocaleCustomizer />;
}
```
The component automatically displays all languages configured in your i18n settings. When a user switches languages, it will:
1. Update the URL to include the new locale prefix (e.g. `/es/dashboard`)
2. Store the selected locale in a cookie for persistence
3. Refresh translations across the entire application
4. Preserve the current page/route during the language switch
This provides a seamless localization experience without requiring any additional configuration.
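Steps 1 and 4 together amount to rewriting the current pathname: strip the old locale prefix and apply the new one, unless it's the default. A sketch, assuming `en` is the default locale (illustrative, not the component's actual code):

```ts
// Rewrite a pathname when the user switches locales: strip the current
// prefix, then apply the new one unless it's the default locale.
const switchLocale = (
  pathname: string,
  current: string,
  next: string,
  defaultLocale = "en",
): string => {
  const stripped =
    current === defaultLocale
      ? pathname
      : pathname.replace(new RegExp(`^/${current}(?=/|$)`), "") || "/";
  return next === defaultLocale
    ? stripped
    : `/${next}${stripped === "/" ? "" : stripped}`;
};
```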
## Best practices
Here are some recommended best practices for managing translations in your application:
* Use descriptive translation keys that follow a logical hierarchy
```ts
// ✅ Good
"auth.login.title";
// ❌ Bad
"loginTitleForAuth";
```
* Keep translations organized in separate namespaces/files based on features or sections
```
translations/
├── en/
│   ├── auth.json
│   └── common.json
└── pl/
    ├── auth.json
    └── billing.json
```
* Avoid hardcoding text strings - always use translation keys even for seemingly static content
* Always provide a fallback language (usually English) for when translations are missing
* Use pluralization and interpolation features when dealing with dynamic content
```ts
// Pluralization
t("items", { count: 2 }); // "2 items"
// Interpolation
t("welcome", { name: "John" }); // "Welcome, John!"
```
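A toy version of the interpolation shown above illustrates what `t` does under the hood; i18next uses `{{name}}`-style placeholders, and the real interpolator also handles formatting and escaping:

```ts
// Toy interpolation: replace {{name}}-style placeholders with values.
// Unknown placeholders are left intact (illustrative sketch only).
const interpolate = (
  template: string,
  values: Record<string, string | number>,
): string =>
  template.replace(/\{\{(\w+)\}\}/g, (match, key: string) =>
    key in values ? String(values[key]) : match,
  );
```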
* Regularly review and clean up unused translation keys to keep files maintainable
* Use TypeScript for type-safe translation keys
---
url: /docs/web/marketing/legal
title: Legal pages
description: Learn how to create and update legal pages
---
Legal pages are defined in the `apps/web/src/app/[locale]/(marketing)/legal` directory.
TurboStarter comes with the following legal pages:
* **Terms and Conditions**: to define the terms and conditions of your application
* **Privacy Policy**: to define the privacy policy of your application
* **Cookie Policy**: to define the cookie policy of your application
For obvious reasons, **these pages are empty and you need to fill in the content.**
## Content from CMS
Content for legal pages is stored as [MDX](https://mdxjs.com/) files in a [content collection](/docs/web/cms/content-collections) in the `packages/cms/src/content/collections/legal` directory.
Then it's parsed and rendered as a Next.js page under corresponding slug:
```tsx title="apps/web/src/app/[locale]/(marketing)/legal/[slug]/page.tsx"
import {
  CollectionType,
  getContentItemBySlug,
  getContentItems,
} from "@workspace/cms";
import { notFound } from "next/navigation";

export default async function Page({ params }: PageParams) {
  const item = getContentItemBySlug({
    collection: CollectionType.LEGAL,
    slug: (await params).slug,
    locale: (await params).locale,
  });

  if (!item) {
    return notFound();
  }

  // `LegalContent` is illustrative; the kit's MDX renderer goes here
  return <LegalContent item={item} />;
}

export function generateStaticParams() {
  return getContentItems({ collection: CollectionType.LEGAL }).items.map(
    ({ slug, locale }) => ({
      slug,
      locale,
    }),
  );
}
```
As it's fully typesafe it also allows us to generate metadata for each page based on the frontmatter that you define in the MDX file:
```tsx title="apps/web/src/app/[locale]/(marketing)/legal/[slug]/page.tsx"
export async function generateMetadata({ params }: PageParams) {
  const item = getContentItemBySlug({
    collection: CollectionType.LEGAL,
    slug: (await params).slug,
    locale: (await params).locale,
  });

  if (!item) {
    return notFound();
  }

  return getMetadata({
    title: item.title,
    description: item.description,
  })({ params });
}
```
Read more about it in the [CMS section](/docs/web/cms/overview).
## ChatGPT prompts
Each `.mdx` file with legal content includes a set of useful prompts that you can use to generate the content.
Please, be aware that **ChatGPT is not a lawyer** and the content generated by it should be reviewed by one before publishing. Take your time and treat the generated content as a starting point not a final document.
```mdx title="privacy-policy.mdx"
---
title: Privacy Policy
description: Our privacy policy outlines how we collect, use, and protect your personal information.
---
{/* 💡 You can use one of the following ChatGPT prompts to generate this 💡 */}
...
```
Feel free to add your own content or even additional pages to the `legal` collection.
---
url: /docs/web/marketing/pages
title: Marketing pages
description: Discover which marketing pages are available out of the box and how to add a new one.
---
TurboStarter comes with pre-defined marketing pages to help you get started with your SaaS application. These pages are built with Next.js and Tailwind CSS and are located in the `apps/web/src/app/[locale]/(marketing)` directory.
TurboStarter comes with the following marketing pages:
* **Home**: conversions-optimized [landing page](https://demo.turbostarter.dev) with [hero section](https://demo.turbostarter.dev#hero), [features](https://demo.turbostarter.dev#features), [pricing](https://demo.turbostarter.dev#pricing), [testimonials](https://demo.turbostarter.dev#testimonials), [FAQ](https://demo.turbostarter.dev#faq) and more
* [Blog](/docs/web/cms/blog): to display your blog posts
* **Pricing**: to display your pricing plans
* **Contact**: to enable users to contact you with a contact form
## Contact form
To make the contact form work, you need to add the following environment variable:
```dotenv
CONTACT_EMAIL=
```
Set this variable to the email address where you want to receive contact form submissions. The sender's email address will match what you configured in your [mailing configuration](/docs/web/emails/configuration).
## Adding a new marketing page
To add a new marketing page, create a new directory in `apps/web/src/app/[locale]/(marketing)` with the desired route name.
The page will automatically become available in your application at the corresponding URL path.
For example, to create a page accessible at `/about`, create a directory named `about` and add a `page.tsx` file inside it. The complete path would be `apps/web/src/app/[locale]/(marketing)/about/page.tsx`.
```tsx title="apps/web/src/app/[locale]/(marketing)/about/page.tsx"
export default function AboutPage() {
return <div>About</div>;
}
```
This page inherits the layout at `apps/web/src/app/[locale]/(marketing)/layout.tsx`. You can customize the layout by editing this file - but remember that it will affect all marketing pages.
---
url: /docs/web/marketing/seo
title: SEO
description: Learn how to optimize your app for search engines.
---
SEO is an important part of building a website. It helps search engines understand your website and rank it higher in search results. In this guide, you'll learn how to improve your SaaS application's search engine optimization (SEO).
TurboStarter is already optimized for SEO out of the box (including meta tags, sitemaps, robots files, and more). However, there are a few things you can do to improve your application's SEO.
**Content:** High-quality, relevant content is the cornerstone of effective SEO. Focus on **creating valuable, engaging content** that addresses your customers' needs and questions. Regularly update your content to keep it fresh and relevant.
**Keyword optimization:** Conduct thorough keyword research to identify terms your target audience is searching for. Incorporate these keywords naturally into your content, titles, meta descriptions, and headers. Avoid keyword stuffing; prioritize readability and user experience.
**On-Page SEO:**
* Use descriptive, keyword-rich titles and meta descriptions for each page.
* Implement a clear heading structure (H1, H2, H3) to organize your content.
* Optimize images with descriptive file names and alt text.
* Ensure your URLs are clean, descriptive, and include relevant keywords.
**Technical SEO:**
* Improve website loading speed by optimizing images, minifying CSS and JavaScript, and leveraging browser caching.
* Ensure your website is mobile-friendly and responsive across all devices.
* Implement schema markup to help search engines better understand your content.
* Use HTTPS to secure your website and boost search rankings.
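To illustrate the schema markup point above, here's a minimal sketch of a helper that builds a JSON-LD string for an `Organization` entity. The names and URL are placeholder values, not part of TurboStarter itself:

```typescript
// Hypothetical helper: build a JSON-LD string for Organization schema markup.
// The name and url values are placeholders; substitute your real site data.
function organizationJsonLd(name: string, url: string): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "Organization",
    name,
    url,
  });
}
```

The resulting string can be embedded in a `<script type="application/ld+json">` tag in your page's head so crawlers can pick it up.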
**User experience:**
* Design an intuitive site structure and navigation to improve user engagement.
* Reduce bounce rates by creating compelling, easy-to-read content.
* Implement internal linking to guide users through your site and distribute page authority.
**Link building:**
* Create high-quality, shareable content to naturally attract backlinks.
* Engage in guest posting on reputable sites within your industry.
* Participate in industry forums and discussions, providing valuable insights and linking to your content when relevant.
* Leverage social media to increase content visibility and encourage sharing.
**Local SEO (if applicable):**
* Claim and optimize your Google My Business listing.
* Ensure consistent NAP (Name, Address, Phone) information across all online directories.
* Encourage customer reviews on Google and other relevant platforms.
**Monitor and analyze:**
* Use [Google Search Console](https://search.google.com/search-console/about) to monitor your site's performance in search results and identify issues.
* Regularly analyze your SEO efforts using tools like Google Analytics to understand user behavior and refine your strategy.
**Stay updated:**
* Keep abreast of SEO best practices and algorithm updates to continually refine your strategy.
* Regularly audit your website to identify and fix any SEO issues.
## Sitemap
Generally speaking, Google will find your pages without a sitemap, since it follows the links on your website. However, a sitemap helps search engines discover and index your pages faster. If you add more static pages to your website, include them in the array returned from `apps/web/src/app/sitemap.ts`, which is used to generate the sitemap.
```tsx title="sitemap.ts"
export default function sitemap(): MetadataRoute.Sitemap {
return [
{
...getEntry(pathsConfig.index),
lastModified: new Date(),
changeFrequency: "monthly",
priority: 1,
},
...getContentItems({
collection: CollectionType.BLOG,
locale: appConfig.locale,
}).items.map((post) => ({
...getEntry(pathsConfig.marketing.blog.post(post.slug)),
lastModified: new Date(post.lastModifiedAt),
changeFrequency: "monthly",
priority: 0.7,
})),
/* other pages */
];
}
```
All the existing pages are already added to the sitemap. You don't need to add them manually.
## Meta tags
TurboStarter provides a helper function called `getMetadata` to easily set meta tags for your pages. This helper ensures consistent metadata formatting across your site and includes essential SEO tags like title, description, and Open Graph tags. You can use it in any page's metadata export:
```tsx title="page.tsx"
export const generateMetadata = getMetadata({
title: "My Page Title",
description: "My Page Description",
});
```
This will generate meta tags along the following lines (including the Open Graph equivalents):
```html
<title>My Page Title</title>
<meta name="description" content="My Page Description" />
<meta property="og:title" content="My Page Title" />
<meta property="og:description" content="My Page Description" />
```
The `getMetadata` helper is really useful for generating consistent meta tags across your site, making SEO optimization simpler and more reliable.
`getMetadata` also supports translations. You can pass a translation key to the `title` and `description` parameters, and it will automatically use the correct translation for the current locale.
```tsx
export const generateMetadata = getMetadata({
title: "billing:title",
description: "billing:description",
});
```
In this example, the `title` and `description` will be fetched from the `billing` namespace for the current locale and placed in the meta tags.
## Backlinks
Backlinks are said to be the **most important factor** in modern SEO. The more backlinks you have from high-quality websites, the higher your website will rank in search results - and the more traffic you'll get.
How do you acquire backlinks? The most effective strategy is to create high-quality, valuable content that naturally attracts links from other websites. However, there are several other methods to build backlinks:
1. **Guest blogging:** Contribute articles to reputable websites within your industry. This not only provides backlinks but also exposes your brand to a new audience.
2. **Strategic outreach:** Identify websites that could benefit from linking to your content. Reach out with a personalized pitch, explaining the value your content adds to their audience.
3. **Digital PR:** Create newsworthy content or conduct original research that journalists and bloggers will want to reference and link to.
4. **Broken link building:** Find broken links on relevant websites and suggest your content as a replacement.
5. **Resource page link building:** Find resource pages in your niche and suggest your content for inclusion.
6. **Social media engagement:** While not directly impacting SEO, active social media presence can increase content visibility and indirectly lead to more backlinks.
7. **Create linkable assets:** Develop infographics, tools, or comprehensive guides that others in your industry will want to reference.
8. **Participate in industry forums and discussions:** Contribute meaningfully to conversations in your field, including your website when relevant.
Remember, the quality of backlinks is more important than quantity. Focus on acquiring links from authoritative, relevant websites in your niche. Avoid any black-hat techniques or link schemes that could result in penalties from search engines.
## Adding your website to Google Search Console
Once you've optimized your website for SEO, you can add it to Google Search Console. Google Search Console is a free tool that helps you monitor and maintain your website's presence in Google search results.
You can use it to check your website's indexing status, submit sitemaps, and get insights into how Google sees your website.
The first thing you need to do is verify your website in Google Search Console. You can do this by adding a meta tag to your website's HTML or by uploading an HTML file to your website.
Once you've verified your website, you can submit your sitemap to Google Search Console. This will help Google find and index your website's pages faster.
Submit your sitemap to Google Search Console by going to the `Sitemaps` section and adding your sitemap's URL: `https://your-website.com/sitemap.xml` (replacing `your-website.com` with your actual domain).
## Content
When it comes to internal factors, **content is king**. Make sure your content is relevant, useful, and engaging. Make sure it's updated regularly and optimized for SEO.
Most importantly, you want to think about how your customers will search for the problem your SaaS is solving. For example, if you're building a project management tool, you might want to write about project management best practices, how to manage a remote team, or how to use your tool to improve productivity.
You can use the blog and documentation features in TurboStarter to create high-quality content that will help your website rank higher in search results - and help your customers find what they're looking for.
## Indexing and ranking take time
New websites can take a while to get indexed by search engines. It can take anywhere from a few days to a few weeks (in some cases, even months!) for your website to show up in search results. Be patient and keep updating your content and optimizing your website for search engines.
You can also edit the `robots.ts` file to control which pages are indexed by search engines:
```tsx title="robots.ts"
export default function robots(): MetadataRoute.Robots {
return {
rules: {
userAgent: "*",
allow: "/",
disallow: ["/api", "/dashboard", "/auth"],
},
sitemap: appConfig.url + "/sitemap.xml",
};
}
```
Remember, **SEO is an ongoing process.** Consistently apply these practices and adapt your strategy based on performance data and industry changes to improve your search engine visibility over time.
---
url: /docs/web/monitoring/overview
title: Overview
description: Get started with web monitoring in TurboStarter.
---
TurboStarter includes lightweight monitoring hooks so you can quickly answer: **what's failing**, **where it's failing**, and **who it's affecting**. Out of the box, the web app can report exceptions from both the client and the server, and it's designed to be easy to extend with your preferred provider.
## Capturing exceptions
Monitoring starts with capturing exceptions reliably in the places that matter most:
* **Client-side errors**: the Next.js App Router error boundary reports unexpected runtime errors so you get visibility without leaving users stuck on a broken screen.
* **Server-side errors**: API failures (for example, Hono errors in production) can be reported with a stable, anonymous distinct id so you can spot recurring issues and correlate them with sessions.
* **Manual reporting**: you can also report exceptions from your own `try/catch` blocks to add extra context around critical flows (payments, onboarding, imports, etc.).
```tsx
"use client";
import { captureException } from "@workspace/monitoring-web";
export default function ExampleComponent() {
const handleClick = () => {
try {
/* some risky operation */
} catch (error) {
captureException(error);
}
};
return <button onClick={handleClick}>Click me</button>;
}
```
```ts
import { captureException } from "@workspace/monitoring-web/server";
try {
/* do something */
} catch (error) {
captureException(error);
}
```
Make sure to use the correct import for the `captureException` function. We're using the same name for both client and server monitoring, but they are different functions. For server-side, just add `/server` to the import path (`@workspace/monitoring-web/server`).
```tsx
import { captureException } from "@workspace/monitoring-web";
```
```tsx
// [!code word:server]
import { captureException } from "@workspace/monitoring-web/server";
```
## Identifying users
Exception reports become dramatically more actionable once they're tied to a real user. TurboStarter automatically identifies signed-in users (based on the current auth session), which allows your monitoring provider to associate exceptions and sessions with a user profile.
If you want richer debugging, identify users with traits (like email, plan, or role) so you can filter and segment issues by the people impacted.
```tsx title="monitoring.tsx"
"use client";
import { useEffect } from "react";
import { identify } from "@workspace/monitoring-web";
import { authClient } from "~/lib/auth/client";
export const MonitoringProvider = ({
children,
}: {
children: React.ReactNode;
}) => {
const session = authClient.useSession();
useEffect(() => {
if (session.isPending) {
return;
}
identify(session.data?.user ?? null);
}, [session]);
return <>{children}</>;
};
```
On the server, there is no dedicated identification helper. Most providers that support user-level tracking expect you to pass an identifier or traits directly within the `captureException` call (for example, as a `userId` or similar property), so make sure to check your specific provider's documentation for the recommended way to include user information.
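As an illustration only (this is not the actual `@workspace/monitoring-web` API), a server-side capture call that carries user context typically has a shape like the following. The `distinctId` and `traits` property names are placeholders; use whatever your provider documents:

```typescript
// Illustrative sketch: a capture helper that forwards user context alongside
// the error. Property names are placeholders; check your provider's docs.
interface CaptureContext {
  distinctId?: string;
  traits?: Record<string, string>;
}

function captureWithUser(
  error: unknown,
  context: CaptureContext,
  send: (payload: { message: string; context: CaptureContext }) => void,
): void {
  const message = error instanceof Error ? error.message : String(error);
  send({ message, context });
}
```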
## Providers
The starter implements multiple providers for managing monitoring. To learn more about each provider and how to configure them, see their respective sections.
Configuration and setup are handled for you via a unified API, making it easy to switch monitoring providers by just updating the exports. You can also add custom providers without disrupting any monitoring-related logic.
## Best practices
Below are some guidelines to keep monitoring useful, low-noise, and privacy-safe.
* Report unexpected exceptions and failed business-critical operations; avoid logging "expected" states (validation errors, user cancellations, missing optional data).
* Include what the user was doing (route/action), relevant IDs (request id, order id), and a clear message so you can reproduce and triage quickly.
* Identify with stable IDs; only attach traits that are necessary for debugging. Don't send secrets or sensitive fields (tokens, passwords, raw payment details).
* If a loop or retry can fire many times, guard your capture calls so you don't spam your provider (and your budget).
* Keep dev/staging/prod isolated (separate projects or environment tags) so production alerts stay meaningful.
* Set alerts for spikes in error rate, degraded performance, and failures in critical flows (auth, checkout, billing webhooks), not for every single exception.
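The "guard your capture calls" guideline above can be sketched as a small dedupe wrapper. This is a hypothetical helper, not part of TurboStarter; `report` stands in for your provider's capture function:

```typescript
// Hypothetical guard: report each distinct error message at most once per
// time window, so loops and retries don't spam your provider.
const lastReported = new Map<string, number>();
const WINDOW_MS = 60_000;

function captureOnce(error: unknown, report: (e: unknown) => void): boolean {
  const key = error instanceof Error ? error.message : String(error);
  const now = Date.now();
  const last = lastReported.get(key);
  if (last !== undefined && now - last < WINDOW_MS) {
    return false; // seen recently, skip
  }
  lastReported.set(key, now);
  report(error);
  return true;
}
```

In practice you might key on the stack trace or an error code instead of the message, depending on how noisy your errors are.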
Application monitoring helps you track errors, exceptions, and performance issues for better app reliability. With multiple provider support, you can quickly spot and resolve problems.
Focus on actionable errors, useful context, and user privacy to get the most value from your monitoring.
---
url: /docs/web/monitoring/posthog
title: PostHog
description: Learn how to setup PostHog as your web monitoring provider.
---
[PostHog](https://posthog.com/) is a comprehensive product analytics platform that includes error tracking, session replay, feature flags, and more. It helps developers identify, diagnose, and fix issues in their applications by capturing and reporting errors and exceptions in real time.
With features like automatic error reporting, stack trace visualization, and user/session context, PostHog provides deep insight into how your application is behaving in production so you can quickly resolve problems and improve reliability.
To use PostHog as your monitoring provider, you need to have an account. You can create one [here](https://app.posthog.com/signup) or [self-host](https://posthog.com/docs/self-host) it.
PostHog is also one of the pre-configured providers for [analytics](/docs/web/analytics/overview) in TurboStarter. You can learn more about it [here](/docs/web/analytics/configuration#posthog).

## Configuration
PostHog integrates seamlessly with TurboStarter, enabling you to monitor application errors and performance from development to production. By configuring PostHog as your monitoring provider, you'll be able to detect, track, and resolve issues proactively, leading to a more stable and reliable app.
Follow the simple setup instructions below to get started with PostHog in your TurboStarter project.
### Create a project
First, you need to create a [project](https://app.posthog.com/project/settings) in PostHog. You can do it directly from your [dashboard](https://app.posthog.com) by clicking on the *New Project* button.
### Activate PostHog as your monitoring provider
The monitoring provider is determined by the exports of the `packages/monitoring/web` package. To activate PostHog as your monitoring provider, update the exports in:
```ts
// [!code word:posthog]
export * from "./posthog";
```
```ts
// [!code word:posthog]
export * from "./posthog/server";
```
```ts
// [!code word:posthog]
export * from "./posthog/env";
```
If you want to customize the provider, you can find its definition in `packages/monitoring/web/src/providers/posthog` directory.
### Set environment variables
Based on your [project settings](https://app.posthog.com/project/settings), set the following environment variables in the `.env.local` file in the `apps/web` directory and in your deployment environment:
```dotenv title="apps/web/.env.local"
NEXT_PUBLIC_POSTHOG_KEY="your-posthog-api-key"
NEXT_PUBLIC_POSTHOG_HOST="https://us.i.posthog.com"
```
That's it! You can now start your app and see the errors and exceptions in your [PostHog dashboard](https://app.posthog.com/project/error_tracking).

Feel free to customize the configuration to your needs. For more information, please refer to the [PostHog documentation](https://posthog.com/docs/error-tracking/installation/nextjs).
## Uploading source maps
**Source maps** are files that map your minified or transpiled code (such as the JavaScript code generated by frameworks like Next.js) back to your original source code (for example, TypeScript or unbundled JavaScript). When your app is running in production, the code is often bundled and minified to improve performance, which makes stack traces and error messages hard to read and debug.
With source maps enabled and uploaded to your monitoring provider (like PostHog), error reports include references to the original lines of your source code, not just the processed/minified output.
PostHog can automatically provide readable stack traces for errors using source maps. The `@posthog/nextjs-config` package handles source map generation and upload automatically during the build process.
To start using source maps, install the package `@posthog/nextjs-config` in `apps/web/package.json` as a dependency.
```bash
pnpm i @posthog/nextjs-config --filter web
```
Next, extend your app's Next.js options by adding `withPostHogConfig` into the `next.config.ts` file:
```ts title="apps/web/next.config.ts"
import { withPostHogConfig } from "@posthog/nextjs-config";
const config = {
/* existing Next.js configuration options */
};
export default withPostHogConfig(config, {
personalApiKey: process.env.POSTHOG_API_KEY,
envId: process.env.POSTHOG_ENV_ID,
host: process.env.NEXT_PUBLIC_POSTHOG_HOST,
sourcemaps: {
enabled: true, // Enable sourcemaps generation and upload
project: "my-application", // Optional: Project name, defaults to repository name
version: "1.0.0", // Optional: Release version, defaults to current git commit
deleteAfterUpload: true, // Delete sourcemaps after upload, defaults to true
},
});
```
Make sure you have set the following environment variables locally and in your deployment environment:
* `POSTHOG_API_KEY` - Your [Personal API Key](https://app.posthog.com/settings/user-api-keys#variables) with write access on error tracking
* `POSTHOG_ENV_ID` - Project ID from [project settings](https://app.posthog.com/settings/environment#variables)
* `NEXT_PUBLIC_POSTHOG_HOST` - Your PostHog instance URL
Before proceeding, confirm that source maps are being generated by checking for `.js.map` files in your `dist` directory. These are the symbol sets that will be used to unminify stack traces in PostHog.
Next, confirm that source maps are successfully uploaded to PostHog by checking the [symbol sets](https://app.posthog.com/project/settings/symbol-sets) section in your project settings.
Finally, confirm that the served files are injected with the correct source map comment in production. You can do this by inspecting your deployed app in browser dev tools and looking for a comment like this at the end of your JavaScript bundles:
```js
//# chunkId=0197e6db-9a73-7b91-9e80-4e1b7158db5c
```
Once everything is set up, PostHog will provide you with detailed, easy-to-read error reports that link directly back to your original source code - even after your code has been bundled or minified. This makes diagnosing and fixing production issues much simpler.
---
url: /docs/web/monitoring/sentry
title: Sentry
description: Learn how to setup Sentry as your web monitoring provider.
---
[Sentry](https://sentry.io/) is a popular error monitoring and performance tracking platform. It helps developers identify, diagnose, and fix issues in their applications by capturing and reporting errors and exceptions in real time.
With features like automatic error reporting, stack trace visualization, and user/session context, Sentry provides deep insight into how your application is behaving in production so you can quickly resolve problems and improve reliability.
To use Sentry as your monitoring provider, you need to have an account. You can create one [here](https://sentry.io/signup).

## Configuration
Sentry integrates seamlessly with TurboStarter, enabling you to monitor application errors and performance from development to production. By configuring Sentry as your monitoring provider, you’ll be able to detect, track, and resolve issues proactively, leading to a more stable and reliable app.
Follow the simple setup instructions below to get started with Sentry in your TurboStarter project.
### Create a project
First, you need to create a [project](https://docs.sentry.io/product/projects/) in Sentry. You can do it directly from your [dashboard](https://sentry.io/settings/account/projects/) by clicking on the *Create Project* button.
### Activate Sentry as your monitoring provider
The monitoring provider is determined by the exports of the `packages/monitoring/web` package. To activate Sentry as your monitoring provider, update the exports in:
```ts
// [!code word:sentry]
export * from "./sentry";
```
```ts
// [!code word:sentry]
export * from "./sentry/server";
```
```ts
// [!code word:sentry]
export * from "./sentry/env";
```
If you want to customize the provider, you can find its definition in `packages/monitoring/web/src/providers/sentry` directory.
### Set environment variables
Based on your [project settings](https://sentry.io/project/settings), set the following environment variables in the `.env.local` file in the `apps/web` directory and in your deployment environment:
```dotenv title="apps/web/.env.local"
NEXT_PUBLIC_SENTRY_DSN="your-sentry-dsn"
NEXT_PUBLIC_PROJECT_ENVIRONMENT="your-project-environment"
```
### Apply instrumentation to your app
Install the package `@sentry/nextjs` in `apps/web/package.json` as a dependency.
```bash
pnpm i @sentry/nextjs --filter web
```
Next, extend your app's Next.js options by adding `withSentryConfig` into the `next.config.ts` file:
```ts title="apps/web/next.config.ts"
import { withSentryConfig } from "@sentry/nextjs";
const config = {
/* existing Next.js configuration options */
};
export default withSentryConfig(config, {
org: "your-sentry-org",
project: "your-sentry-project",
});
```
That's it! You can now start your app and see the errors and exceptions in your [Sentry dashboard](https://sentry.io/settings/account/projects/).

Feel free to customize the configuration to your needs. For more information, please refer to the [Sentry documentation](https://docs.sentry.io/platforms/javascript/guides/nextjs/).
## Uploading source maps
**Source maps** are files that map your minified or transpiled code (such as the JavaScript code generated by frameworks like Next.js) back to your original source code (for example, TypeScript or unbundled JavaScript). When your app is running in production, the code is often bundled and minified to improve performance, which makes stack traces and error messages hard to read and debug.
With source maps enabled and uploaded to your monitoring provider (like Sentry), error reports include references to the original lines of your source code, not just the processed/minified output.
Sentry can automatically provide readable stack traces for errors using source maps, requiring a [Sentry auth token](https://docs.sentry.io/account/auth-tokens/).
Update your `next.config.ts` file with the following options:
```ts title="apps/web/next.config.ts"
import { withSentryConfig } from "@sentry/nextjs";
const config = {
/* existing Next.js configuration options */
};
export default withSentryConfig(config, {
org: "your-sentry-org",
project: "your-sentry-project",
// An auth token is required for uploading source maps.
authToken: process.env.SENTRY_AUTH_TOKEN, // [!code ++]
// Upload a larger set of source maps for prettier stack traces (increases build time)
widenClientFileUpload: true, // [!code ++]
});
```
Then, set the `SENTRY_AUTH_TOKEN` environment variable in the `.env.local` file in the `apps/web` directory and in your deployment environment:
```dotenv title="apps/web/.env.local"
SENTRY_AUTH_TOKEN="your-sentry-auth-token"
```
With these steps, your Sentry integration will give you clear, actionable error reports tied directly to your source code - even after bundling and minification. This makes it much easier to debug and resolve production issues.
Take a moment to test your setup and ensure source maps are correctly resolving stack traces in your [Sentry dashboard](https://sentry.io/settings/account/projects/). For deeper customization or additional troubleshooting, always consult the [official Sentry documentation](https://docs.sentry.io/platforms/javascript/guides/nextjs/sourcemaps/).
---
url: /docs/web/organizations/active-organization
title: Active organization
description: Set and switch the current organization context within your application.
---
The active organization is tracked based on the **URL slug** and the **session state**. We made it **as simple as possible** to use by introducing custom hooks and an abstraction that keeps the two in sync.
Below you can find more details about how to access the active organization across different contexts in your application.
You can customize the behavior to your needs - for example, to restrict users to at most one organization at a time.
## Server component
You have two separate ways to access the active organization of the currently logged-in user on the server:
* from the URL slug (organization-scoped routes)
* from the session (when no slug is present or you don't want to use it)
We recommend always using the URL slug when you're doing something inside an organization-scoped route. This keeps the URL as the source of truth and works seamlessly with SSR and caching.
```tsx title="page.tsx"
import { getOrganization } from "~/lib/auth/server";
export default async function Page({
params,
}: {
params: Promise<{
organization: string;
}>;
}) {
const organization = (await params).organization;
const activeOrganization = await getOrganization({ slug: organization });
return <>{activeOrganization?.name}</>;
}
```
Alternatively, you can use the session to access the active organization. This reads `session.activeOrganizationId` and resolves the organization by its stable ID.
```tsx title="page.tsx"
import { getOrganization, getSession } from "~/lib/auth/server";
export default async function Page() {
const { session } = await getSession();
const activeOrganization = await getOrganization({
id: session.activeOrganizationId,
});
return <>{activeOrganization?.name}</>;
}
```
Be aware that sometimes you might encounter synchronization issues between the URL slug and the session, for example when a user opens multiple tabs to different organizations. More on this in the [Edge cases](#edge-cases) section.
## Client component
On the client side, we designed a dedicated hook to access the active organization - `useActiveOrganization`. It's a simple wrapper around the API that returns the active organization based on the URL slug or the session. It also helps keep the state in sync with the server session.
```tsx title="client.tsx"
"use client";
import { useActiveOrganization } from "~/lib/hooks/use-active-organization";
export default function ClientComponent() {
const { activeOrganization, activeMember } = useActiveOrganization();
return (
<>
{activeOrganization?.name}
{activeMember?.role}
</>
);
}
```
Using the hook is recommended over direct API calls, as it will keep the state in sync with the server session.
It also returns the active member of the active organization, so you can access the user's role and other member-specific data.
## API route
To access the active organization data in an API route, you can read it from the session that is appended to the context when you use [authentication middleware](/docs/web/api/protected-routes).
```ts title="action/router.ts"
export const actionRouter = new Hono().post("/", enforceAuth, async (c) => {
const organizationId = c.var.user.activeOrganizationId;
const organization = await getOrganization({ id: organizationId });
return c.json(organization);
});
```
Although reading from the session is the simplest approach, we recommend passing the `organizationId` explicitly with the payload when you need to perform an action.
```ts title="action/router.ts"
export const actionRouter = new Hono().post(
"/",
enforceAuth,
validate(
"json",
z.object({
organizationId: z.string(),
/* rest of the payload */
}),
),
async (c) => {
const { organizationId, ...payload } = c.req.valid("json");
const organization = await getOrganization({ id: organizationId });
return c.json(await performAction(organization, payload));
},
);
```
This ensures that the action is performed on the correct organization, even if the user has multiple organizations open in different tabs. See [Edge cases](#edge-cases) for more details.
## Edge cases
* **Expected and harmless:** Short periods where the URL slug and server session differ can happen (for example, with multiple tabs or quick switching). The active tab always treats the slug as the source of truth and the session catches up.
* **Multiple tabs:** Each tab maintains its own org context from its slug. As you switch focus, the shared session updates; brief divergence is normal and safe.
* **Rapid switching/slow network:** During fast navigation or poor connectivity, you may momentarily see the previous org while the session updates. Show a small loading state; cancel in-flight requests tied to the old org.
* **Missing/invalid slug:** If the slug is missing or invalid, we fall back to the session’s `activeOrganizationId` or redirect to a safe default.
* **Access or permission changes:** If a user loses access to the org they’re viewing, the data is cleared from the session and the user is redirected to a valid organization or personal dashboard.
Whenever the active organization changes, the server session is updated and the client is redirected to the new organization scope.
All caches keyed by organization are invalidated to avoid leaking data between organizations.
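The fallback order described in the edge cases above can be sketched as a small resolver. This is an illustrative sketch, not the actual TurboStarter implementation: the URL slug wins, then the session's `activeOrganizationId`, then a safe default:

```typescript
// Illustrative sketch (not the real implementation): resolve the active
// organization identifier with slug-first precedence.
function resolveActiveOrg(
  slug: string | null,
  sessionOrgId: string | null,
  fallback: string,
): string {
  return slug ?? sessionOrgId ?? fallback;
}
```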
---
url: /docs/web/organizations/data-model
title: Data model
description: Entities and relationships for organizations and multi-tenancy.
---
Our multi-tenant model is organized around the concept of an **organization**. An organization represents a single tenant and is the primary boundary for data isolation, access control, and routing.
Users can belong to multiple organizations through a membership. Invitations let organization admins onboard new members by email with a specific role.
## Entities
### Organization
The tenant. Stores human-friendly `name`, unique `slug` (used in URLs and lookups), optional `logo`, and optional `metadata` for extensibility (feature flags, billing context, UI preferences, etc.). `createdAt` provides auditability. The `slug` is globally unique to keep URLs stable and predictable.
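As an illustration, the entity above might be modeled in Drizzle like this. This is a sketch based on the field descriptions; the exact schema in your codebase may differ:

```ts
import { pgTable, text, timestamp } from "drizzle-orm/pg-core";

export const organization = pgTable("organization", {
  id: text().primaryKey(),
  name: text().notNull(),
  // Globally unique so URLs stay stable and predictable.
  slug: text().notNull().unique(),
  logo: text(),
  // Free-form extensibility (feature flags, billing context, ...).
  metadata: text(),
  createdAt: timestamp().defaultNow().notNull(),
});
```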
### User
The identity of a person. Users are global and can join many organizations. Account-level fields (e.g., `name`, `email`, verification, avatar, security flags) live here.
A user's application-wide properties (like a global `role` or moderation flags) are distinct from their per-organization role.
### Member (Membership)
The join between a `user` and an `organization`. This is where multi-tenancy permissions are enforced. Each membership stores the `role` the user holds in that specific organization (for example, `member`, `admin`).
Memberships include timestamps for auditing and can be cascaded when a user or organization is removed.
### Invitation
Represents an invite to join an organization by `email` with an intended `role`. It includes `status` (e.g., pending, accepted, revoked), `expiresAt`, and `inviterId` for traceability.
On acceptance, an invitation creates a corresponding membership if one does not already exist.
## Relationships and constraints
Users and organizations are related many-to-many through memberships. A user
can join multiple organizations; an organization has multiple members.
We keep `organization.slug` unique across the system to ensure
consistent routing and discoverability. Within a single organization, each
`userId` should only appear once in memberships; enforce this
at the application layer or with a composite unique index
`(organizationId, userId)`.
* Deleting an organization removes its dependent memberships and invitations.
* Deleting a user removes their memberships and invitations.
These cascades preserve referential integrity and prevent orphaned records.
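Both constraints can be expressed directly in a Drizzle schema. A sketch, assuming `organization` and `user` live in sibling schema files (paths and column defaults are illustrative):

```ts
import { pgTable, text, timestamp, unique } from "drizzle-orm/pg-core";

// Hypothetical import paths for the tables described above.
import { organization } from "./organization";
import { user } from "./user";

export const member = pgTable(
  "member",
  {
    id: text().primaryKey(),
    organizationId: text()
      .notNull()
      // Deleting an organization cascades to its memberships.
      .references(() => organization.id, { onDelete: "cascade" }),
    userId: text()
      .notNull()
      // Deleting a user cascades to their memberships.
      .references(() => user.id, { onDelete: "cascade" }),
    role: text().default("member").notNull(),
    createdAt: timestamp().defaultNow().notNull(),
  },
  // One membership per user per organization.
  (table) => [unique().on(table.organizationId, table.userId)],
);
```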
## Tenancy and isolation
### Tenant separator
`organizationId` is the tenant key. All tenant-scoped data should either live under the organization or reference it directly. Every read/write path in the application should be constrained by the current `organizationId`.
### Query guardrails
Derive the active `organizationId` from authenticated context (session or URL slug → lookup → id). Apply `organizationId` filters at the repository/service layer to avoid cross‑tenant reads. Add composite indexes that include `organizationId` on frequently queried relations.
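The guardrail idea can be sketched independently of the ORM. The names here are illustrative, not TurboStarter's actual helpers:

```ts
interface TenantRow {
  organizationId: string;
}

// Fails loudly when no active organization is present,
// so a query can never accidentally run unscoped.
export function requireOrganizationId(session: {
  activeOrganizationId?: string;
}): string {
  if (!session.activeOrganizationId) {
    throw new Error("No active organization in session");
  }
  return session.activeOrganizationId;
}

// Every tenant-scoped read is constrained by the organization id.
export function scopeToOrganization<T extends TenantRow>(
  rows: T[],
  organizationId: string,
): T[] {
  return rows.filter((row) => row.organizationId === organizationId);
}
```

In practice the filter becomes a `where` clause in the repository layer, but the shape is the same: derive the id once, then constrain every read and write with it.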
### Isolation level
All organizations share the same database and schema, separated by `organizationId`. This keeps operations simple and cost‑effective. If stricter isolation is needed, evolve toward schema‑per‑tenant or database‑per‑tenant with care, as operational overhead increases.
The term "organizations" is used throughout the starter kit to identify a group of users. However, depending on your application's needs, you might want to represent these groups with a different name, such as "Teams" or "Workspaces."
If that's the case, we suggest retaining "organization" as the internal term within your codebase (to avoid the complexity of renaming it everywhere), while customizing the UI labels to your preferred terminology. To do this, simply update all user-facing instances of "Organization" in your interface to reflect the term that best fits your application.
## Lifecycle flows
* **Create organization**: Create an organization (with `name`, `slug`, optional `logo`/`metadata`) and immediately create a membership for the creator with an elevated role (commonly `owner`).
* **Invite member**:
1. Admin creates an invitation specifying `email` and intended `role`.
2. The invite is sent by email with an expiring token.
3. On acceptance, if the user exists they are added as a member; otherwise they register and then join.
4. Handle idempotency so repeated accepts don’t duplicate memberships.
* **Leave or remove**: Members can leave an organization and admins can remove members. The policy that "at least one owner must remain" is enforced at the application layer.
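The idempotency requirement from step 4 can be sketched as a pure function (hypothetical names; the real implementation runs against the database inside a transaction):

```ts
interface Membership {
  organizationId: string;
  userId: string;
  role: string;
}

// Accepting the same invite twice must not create a second membership.
export function acceptInvite(
  memberships: Membership[],
  invite: Membership,
): Membership[] {
  const exists = memberships.some(
    (m) =>
      m.organizationId === invite.organizationId && m.userId === invite.userId,
  );
  return exists ? memberships : [...memberships, invite];
}
```

The composite unique index on `(organizationId, userId)` gives you the same guarantee at the database layer as a safety net.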
---
url: /docs/web/organizations/invitations
title: Invitations
description: Send, track, and accept organization invites.
---
You can invite teammates **by email** to join an organization straight from your organization settings.
Acceptance is frictionless: we verify the invite, create (or reuse) the membership with the intended role, and activate the organization in the user's session.
The implementation is based on the [Better Auth plugin](https://www.better-auth.com/docs/plugins/organization) and designed to drive engagement, minimize back-and-forth and keep admins in control.

## Model
As we can see inside our [data model](/docs/web/organizations/data-model), an invitation targets an `email`, carries the intended `role`, records the `inviterId`, and is scoped to an `organizationId`.
```ts
export const invitation = pgTable("invitation", {
id: text().primaryKey(),
organizationId: text()
.notNull()
.references(() => organization.id, { onDelete: "cascade" }),
email: text().notNull(),
role: text(),
status: text().default("pending").notNull(),
inviterId: text()
.notNull()
.references(() => user.id, { onDelete: "cascade" }),
createdAt: timestamp().defaultNow().notNull(),
expiresAt: timestamp().notNull(),
});
```
The invitations expire at `expiresAt` to keep links short‑lived.
## Status
An invitation can be in one of three states:
* **Pending**: created/sent, awaiting acceptance.
* **Accepted**: verified; membership created or reused.
* **Rejected**: manually invalidated or removed via cascades.
Expiration is controlled by `expiresAt` (not a separate status). After this timestamp, the link is invalid and should be resent.
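Combining status and expiration, a validity check looks like this sketch (illustrative names, not the library's actual API):

```ts
interface Invitation {
  status: "pending" | "accepted" | "rejected";
  expiresAt: Date;
}

// An invite link is actionable only while pending and before expiresAt.
export function isInviteActionable(
  invite: Invitation,
  now: Date = new Date(),
): boolean {
  return (
    invite.status === "pending" && invite.expiresAt.getTime() > now.getTime()
  );
}
```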
## Flow
1. Admin creates an invite with `email` and `role`. The `organizationId` is inferred from the context.
2. System generates a signed, single-use token bound to the invite and `expiresAt` and sends a CTA link.
3. Recipient opens the link; we verify the token and email.
4. On success, we proceed to acceptance.
## Onboarding
### Existing user
After verification, we create (or reuse) a membership with the invited role and set the active organization in the session.

### New user
We attach the invite context to signup; after registration, we create the membership and activate the organization - no detours required.

You can fully customize the invitation flow to fit your organization's needs. For example, you can add extra onboarding steps, capture additional user information, or implement advanced verification logic as part of the invite process.
The system is designed to be extensible—tailor it to match your team's requirements and user experience preferences.
## Automatic invalidation
An invitation is automatically revoked in the following scenarios:
* **The user accepts the invitation:** Once accepted, the token becomes invalid.
* **The user changes their email address:** To prevent misuse, any changes to the associated email automatically invalidate the token.
* **The user deletes their account:** Invitations linked to a deleted account are revoked to maintain data integrity.
This ensures that invitations remain secure and aligned with the current state of user accounts.
## Invitation management
Admins of the organization and [super admins](/docs/web/admin/overview) can manage invitations via a dedicated section in the dashboard, where they can:
* View the status of all invitations (`pending`, `accepted`, `rejected`).
* Resend invitations to recipients who did not respond.

* Revoke invitations if they were sent to the wrong email or are no longer needed.
* Adjust the role of an invitation if it hasn't been accepted yet.
---
url: /docs/web/organizations/overview
title: Overview
description: Learn how to use organizations/teams/multi-tenancy in TurboStarter.
---
Organizations let you build teams and multi-tenant SaaS out of the box, which is a widely used pattern, especially in [B2B](https://en.wikipedia.org/wiki/Business-to-business) apps. Users can create organizations, invite teammates, assign roles, and seamlessly switch between workspaces.
[Multi-tenancy](https://www.ibm.com/think/topics/multi-tenant) is a software architecture pattern where a single instance of an application serves multiple tenants, each with its own data and configuration.
The feature is mostly powered by the [Better Auth organization plugin](https://www.better-auth.com/docs/plugins/organization) and integrates with TurboStarter's API, routing, data layer, and UI components. This allows you to share most of the code between the web app, [mobile app](/docs/mobile/organizations/overview), and [extension](/docs/extension/organizations).
## Architecture
TurboStarter uses a pragmatic multi-tenant architecture:
* **Tenant context** lives in the session as the active organization ID (derived from the user's selection or defaults). Server handlers read this context to enforce scoping.
* **Data scoping** is performed via `organizationId` on tenant-owned tables and guard clauses in queries. Background tasks and API routes receive the same context.
* **Authorization** combines tenant scoping with role checks. We separate “can access this tenant?” from “can perform this action within the tenant?”.
* **Extensibility**: add new tenant-bound entities by including `organizationId` and using the provided helpers to read the active organization.
This keeps data isolated per organization while remaining simple to reason about and customize.
You can restrict who can create organizations, control which actions can be performed within them, and hook into lifecycle events using our API.
Check dedicated [Data model](/docs/web/organizations/data-model), [RBAC](/docs/web/organizations/rbac) and [Invitations](/docs/web/organizations/invitations) sections or direct [Better Auth docs](https://www.better-auth.com/docs/plugins/organization) for more details.
## Concepts
To effectively use multi-tenancy in your app, we introduced a few core concepts that define how the whole system works:
| Concept | Description |
| ----------------------- | ----------------------------------------------------------------------------------------------- |
| **Organization** | A workspace that owns resources and settings, acting as an isolated tenant. |
| **Member** | A user assigned to an organization. |
| **Role** | Access level within an organization (see [RBAC](/docs/web/organizations/rbac)). |
| **Invitation** | Email request to join an organization (see [Invitations](/docs/web/organizations/invitations)). |
| **Active organization** | The currently selected organization in a user's session, used to scope data and permissions. |
These concepts provide the building blocks for flexible team management and secure, multi-tenant SaaS applications.
## Development data
In development, TurboStarter automatically [seeds](/docs/web/installation/commands#seeding-database) some example data when you [set up services](/docs/web/installation/commands#setting-up-services):
* One organization is created by default.
* All default roles are created and assigned within that organization.
* Sample invitations are generated so you can test the invite flow.
You can safely experiment with these sample organizations, roles, and invitations to understand multi-tenancy features - [reset](/docs/web/installation/commands#resetting-database) or [reseed](/docs/web/installation/commands#seeding-database) anytime to return to the default state.
The default credentials for demo users can be customized using the `SEED_EMAIL` and `SEED_PASSWORD` environment variables.
The default development data and setup are intended for local development and
testing only. **Never** use these seeds or configurations in a production
environment - they are insecure and may expose sensitive functionality.
## Customization
You have flexibility to adapt organizations to fit your product. For example, you might rename labels (such as Organization to *Team* or *Workspace*), and update the UI copy accordingly.
You can adjust the available [roles and permissions](/docs/web/organizations/rbac) to suit your access model.
The [invitation flow](/docs/web/organizations/invitations) can be customized, including how verification, onboarding, or metadata capture work.
You may also want to introduce tenant-specific policies, like usage limits, feature flags, or billing rules.
Feel free to check how to configure all of these features in the dedicated sections below.
---
url: /docs/web/organizations/rbac
title: RBAC (Roles & Permissions)
description: Manage roles, permissions, and access scopes.
---
Role-based access control (RBAC) lets you define who can do what in an organization.
If you're new to the RBAC concept, a simple mental model is:
* Users belong to organizations.
* Users get roles.
* Roles map to permissions on resources.
In TurboStarter, we primarily rely on the [Better Auth plugin](https://www.better-auth.com/docs/plugins/organization) for the heavy lifting - roles, permissions, teams, and member management - while handling critical logic with our own code.
This provides a flexible access control system, letting you control user access based on their role in the organization. You can also define custom permissions per role.
TurboStarter ships with the default RBAC system configured out of the box. This setup may be enough if you're not planning a very complex access control system, but you can also easily customize it to your needs.
It also includes [protecting routes](/docs/web/api/protected-routes) so that only users with specific roles can access them, by adding custom middlewares and disabling certain actions in the UI.
## Roles
Roles are named bundles of permissions. Keep them few and well-defined. By default, we have the following roles:
```ts
const MemberRole = {
MEMBER: "member",
ADMIN: "admin",
OWNER: "owner",
} as const;
```
A user can have multiple roles in an organization. For example, a user can be a member and an admin (if it makes sense for your application).
The organization's `admin` role is **different** from the user's global `admin` role.
The organization `admin` governs permissions only inside the organization, whereas the global `admin` controls access to the [super admin dashboard](/docs/web/admin/overview).
To create additional roles with custom permissions, see the [official documentation](https://www.better-auth.com/docs/plugins/organization#create-access-control) for more details.
## Permissions
Permissions represent what actions a role can perform on which resources. To check if the current user has permission to perform an action, you can use the `hasPermission` function.
```ts
const canCreateProject = await authClient.organization.hasPermission({
permissions: {
project: ["create"],
},
});
```
Or, if you're performing the check on the server, you can use the `hasPermission` function from the `auth.api` object.
```ts
await auth.api.hasPermission({
headers: await headers(),
body: {
permissions: {
project: ["create"], // This must match the structure in your access control
},
},
});
```
Once your roles and permissions are defined, you can avoid server checks (e.g., to reduce API calls) by using the client-side `checkRolePermission` function.
```ts
const { activeMember } = useActiveOrganization();
const canUpdateProject = authClient.organization.checkRolePermission({
permission: {
project: ["update"],
},
role: activeMember.role,
});
```
We leverage the existing custom hook to retrieve the active member role within the [active organization](/docs/web/organizations/active-organization) context. That way, you can easily check whether a member has permission to perform an action without a server round trip.
This does not include any dynamic roles or permissions because everything runs synchronously on the client side. Use the `hasPermission` APIs to include checks for dynamic roles and permissions.
If you need to add more granular permissions to existing roles, or create new ones, use the [`createAccessControl`](https://www.better-auth.com/docs/plugins/organization#custom-permissions) API.
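A minimal sketch of that API, based on the Better Auth documentation. The `project` resource is an example, not part of the default starter schema:

```ts
import { createAccessControl } from "better-auth/plugins/access";

// The "statement" declares every resource and the actions it supports.
const statement = {
  project: ["create", "update", "delete"],
} as const;

const ac = createAccessControl(statement);

// Roles are named bundles of permissions drawn from the statement.
export const member = ac.newRole({
  project: ["create"],
});

export const admin = ac.newRole({
  project: ["create", "update", "delete"],
});
```

Once defined, the roles are passed to the organization plugin's configuration so `hasPermission` and `checkRolePermission` can evaluate them.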
For further customization - such as dynamic access control, lifecycle hooks, or team management - see the guidance in the [official documentation](https://www.better-auth.com/docs/plugins/organization).
---
url: /docs/web/recipes/supabase
title: Supabase
description: Learn how to set up Supabase for your TurboStarter project.
---
[Supabase](https://supabase.com) is an open-source backend platform built on top of PostgreSQL that provides a managed database, storage, and other features out of the box.
You can adopt Supabase incrementally - start with just the pieces you need (for example, database only, or database + storage) and add more features over time. There's no requirement to integrate everything at once.
In this guide, we'll walk you through the process of setting up Supabase as a provider for your TurboStarter project. This could include using it as a [database](https://supabase.com/docs/guides/database), [storage](https://supabase.com/docs/guides/storage), [edge runtime for your API](https://supabase.com/docs/guides/functions) and more.
## Prerequisites
Before you start, make sure you have:
* **TurboStarter project** cloned locally with dependencies installed (you can use our [CLI](/docs/web/cli) to create a new project in seconds)
* **Supabase account** - you can create one at [supabase.com](https://supabase.com/sign-up)
* Basic familiarity with the core database docs:
* [Database overview](/docs/web/database/overview)
* [Migrations](/docs/web/database/migrations)
* [Database client](/docs/web/database/client)
## (Optional) Use Supabase locally with Docker
If you're on the Supabase free plan, you can only have a limited number of active hosted databases at once. A good workflow is:
* Use **local Supabase** for day-to-day development
* Keep **one hosted Supabase project** for staging/production (and for testing features that require a deployed project)
Supabase provides a local development stack that runs via **Docker**, managed by the **Supabase CLI**.
### Install prerequisites
* Install **Docker** (Docker Desktop is the easiest option)
* Install the **Supabase CLI** (pick one):
* macOS (Homebrew): `brew install supabase/tap/supabase`
* npm (no global install): `npx supabase --version`
### Initialize and start Supabase locally
From the monorepo root:
```bash
supabase init
supabase start
```
Once it’s running, get the local URLs and credentials:
```bash
supabase status
```
You should see a local **DB URL** (Postgres), plus URLs for **Studio** and the local API.
In most default setups, the local Postgres URL looks like:
`postgresql://postgres:postgres@127.0.0.1:54322/postgres`
Always prefer copying the exact value from `supabase status` to avoid port mismatches.
### Point TurboStarter to the local database
Update the **root** `.env.local` so TurboStarter’s `@workspace/db` uses the local Postgres:
```dotenv title=".env.local"
DATABASE_URL="postgresql://postgres:postgres@127.0.0.1:54322/postgres"
```
Then run migrations (same as with hosted Supabase):
```bash
pnpm with-env pnpm --filter @workspace/db db:migrate
```
That’s it — TurboStarter now talks to your **local Supabase Postgres**.
### Useful local commands
```bash
supabase stop # stop containers
supabase start # start again
supabase status # show URLs/ports/keys
supabase db reset # reset local DB (drops data)
```
## Create a new Supabase project
1. Go to the [Supabase dashboard](https://supabase.com).
2. Create a **new project** (choose a strong database password and a region close to your users).
3. Supabase will automatically provision a **PostgreSQL database** for you.

Optionally, you can customize the **Security options** by choosing the **Only Connection String** option - this opts out of auto-generating a data API for the tables in your database. It's not needed for the TurboStarter setup, but you can still leverage it for your custom use cases.

Once the project is ready, you can fetch the connection string.
## Get the database connection string
In the Supabase dashboard:
1. Open your project.
2. Click on the **Connect** button at the top.
3. Locate the **connection string** for your chosen ORM (it will be under the **ORMs** tab).

Copy this value - you'll use it as your `DATABASE_URL`.
In your Supabase connection string, you can see a placeholder like `[YOUR-PASSWORD]`. Make sure to replace this with the actual password you set when creating your Supabase project.
## Configure environment variables
TurboStarter reads database connection settings from the **root** `.env.local` file and uses them inside the `@workspace/db` package.
Create (or update) the `.env.local` file in the **monorepo root**:
```dotenv title=".env.local"
DATABASE_URL="postgres://postgres.[YOUR-PROJECT-REF]:[YOUR-PASSWORD]@aws-0-[aws-region].pooler.supabase.com:6543/postgres?pgbouncer=true&connection_limit=1"
```
Replace:
* `YOUR-PROJECT-REF` with your Supabase project ref
* `YOUR-PASSWORD` with the database password you set when creating the project
* `aws-region` with the region shown in the Supabase connection string
These variables are validated in the `@workspace/db` package and used to create the Drizzle client for your database.
For more background on how `DATABASE_URL` is used, see [Database overview](/docs/web/database/overview).
## Setup your Supabase database
With `DATABASE_URL` now pointing to Supabase, you can apply the existing TurboStarter schema to your Supabase database.
From the monorepo root, run:
```bash
pnpm with-env pnpm --filter @workspace/db db:migrate
```
This will:
* Use your Supabase `DATABASE_URL` from `.env.local`
* Run all pending SQL migrations from `packages/db/migrations`
* Create the full TurboStarter schema (users, billing, demo tables, etc.) in Supabase
If you're actively iterating on the schema, you can generate new migrations and apply them as described in [Migrations](/docs/web/database/migrations).
After running your migrations, you may want to seed your database with initial data (such as demo users or organizations). You can do this by running the following command:
```bash
pnpm with-env pnpm turbo db:seed
```
This will populate your Supabase database with some example data you can use to test your application.
## Use Supabase Storage as S3-compatible storage
TurboStarter's storage layer is designed to work seamlessly with **any S3-compatible provider**. In this section, we'll show how to use [Supabase Storage](/docs/web/storage/overview) as your application's file storage backend.
Supabase Storage provides a simple, S3-compatible API and is a great choice if you're already using Supabase for your database.
### Create a storage bucket
1. In the Supabase dashboard, go to **Storage → Buckets**.
2. Click **Create bucket** (name it whatever you want, for example `avatars` or `uploads`).
3. Adjust settings based on your needs (e.g. limit the maximum file size, specify the allowed file types, etc.)

You can create multiple buckets (for documents, images, videos, etc.) if needed.
### Generate S3 access keys in Supabase dashboard
1. Go to **Storage → S3 → Access keys**.
2. Click **New access key**.
3. Give it a descriptive name and create the key.
4. Copy the **Access key ID** and **Secret access key** to use in your application.

### Configure S3 environment variables for Supabase Storage
In your web application's `.env.local`, add (or update) the S3 configuration used by TurboStarter's storage layer:
```dotenv title=".env.local"
S3_REGION="us-east-1"
S3_BUCKET="avatars"
S3_ENDPOINT="https://[YOUR-PROJECT-REF].supabase.co/storage/v1/s3"
S3_ACCESS_KEY_ID="your-access-key-id"
S3_SECRET_ACCESS_KEY="your-secret-access-key"
```
These variables integrate directly with the storage configuration described in:
* [Storage overview](/docs/web/storage/overview)
* [Storage configuration](/docs/web/storage/configuration)
Once set, existing TurboStarter file upload flows (e.g. user avatars, organization logos) will use Supabase Storage via presigned URLs.
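Under the hood, generating a presigned upload URL against Supabase's S3 endpoint can be sketched with the AWS SDK. This is illustrative, not TurboStarter's actual implementation; the key name is an example:

```ts
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({
  region: process.env.S3_REGION,
  // e.g. https://[YOUR-PROJECT-REF].supabase.co/storage/v1/s3
  endpoint: process.env.S3_ENDPOINT,
  // Supabase Storage expects path-style bucket addressing.
  forcePathStyle: true,
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY_ID ?? "",
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY ?? "",
  },
});

// Returns a short-lived URL the browser can PUT the file to directly.
export const getUploadUrl = (key: string) =>
  getSignedUrl(
    s3,
    new PutObjectCommand({ Bucket: process.env.S3_BUCKET, Key: key }),
    { expiresIn: 60 },
  );
```

Because the URL is signed server-side, the client never sees your access keys, and the upload bypasses your API server entirely.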
## Run your API on Supabase Edge Functions
Since we use [Hono](https://hono.dev) as our API server, you can deploy it as a Supabase Edge Function so it runs close to your users.
At a high level:
1. Install the [Supabase CLI](https://supabase.com/docs/guides/cli) and initialize a Supabase project locally with `supabase init`.
2. Create a new [Edge Function](https://supabase.com/docs/guides/functions/quickstart) (for example `hono-backend`) with `supabase functions new hono-backend`.
3. Inside the generated function (for example `supabase/functions/hono-backend/index.ts`), set up a basic Hono app and export it via `Deno.serve(app.fetch)`:
```ts
import { Hono } from "jsr:@hono/hono";
// change this to your function name
const functionName = "hono-backend";
const app = new Hono().basePath(`/${functionName}`);
app.get("/hello", (c) => c.text("Hello from hono-server!"));
Deno.serve(app.fetch);
```
4. Run the function locally with `supabase start` and `supabase functions serve --no-verify-jwt`, then call it from your TurboStarter app using the local or deployed function URL.
5. When you're ready, deploy the function with `supabase functions deploy` (or `supabase functions deploy hono-backend`) and manage it using the Supabase dashboard, as described in the [Supabase Edge Functions docs](https://supabase.com/docs/guides/functions).
This is entirely optional, but it's a great fit for lightweight APIs, webhooks, and other serverless logic you want to run alongside your Supabase project.
## Explore additional Supabase features
Supabase is a full Postgres development platform, so beyond the database and storage pieces wired up above you can gradually add more features as your app grows ([see the Supabase homepage](https://supabase.com/) for an overview).
Some features that fit especially well with TurboStarter's design are:
* [Realtime](https://supabase.com/docs/guides/realtime) - built on [Postgres replication](https://www.postgresql.org/docs/current/runtime-config-replication.html), so you can stream changes from your existing TurboStarter tables (inserts, updates, deletes) into live UIs without changing how you manage schema or RLS. You still define tables and policies via `@workspace/db`, and opt into Realtime on top.
* [Vector](https://supabase.com/docs/guides/vector) - powered by the [pgvector](https://github.com/pgvector/pgvector) extension and stored in regular Postgres tables, making it easy to integrate semantic search or AI features while keeping everything in the same migrations and Drizzle models you already use in TurboStarter. We're using it extensively in our dedicated [AI Kit](/ai).
* [Cron](https://supabase.com/docs/guides/functions/cron) - enables you to schedule background jobs and periodic tasks with [pg\_cron](https://github.com/citusdata/pg_cron). You can define cron jobs for things like scheduled database cleanups, sending emails, report generation, or any recurring logic, all managed alongside your TurboStarter app with full Postgres integration.
Because these features are all layered on top of Postgres, you can introduce them incrementally and keep managing everything through your familiar workflow.
## Start the development server
With the database and other services configured to use Supabase, you can start TurboStarter as usual from the monorepo root:
```bash
pnpm dev
```
TurboStarter will now:
* Use **Supabase Postgres** as your database through `DATABASE_URL`
* Use **Supabase Storage** as your file storage through the S3-compatible endpoint
* Leverage **Supabase Edge Functions** (for example, with Hono) for your serverless backend
That's it! You can now start building your application with Supabase as your main provider. Explore the [Supabase documentation](https://supabase.com/docs) for more features and best practices.
---
url: /docs/web/stack
title: Tech Stack
description: A detailed look at the technical details.
---
## Turborepo
[Turborepo](https://turbo.build/) is a monorepo tool that helps you manage your project's dependencies and scripts. We chose a monorepo setup to make it easier to manage the structure of different features and enable code sharing between different packages.
## Next.js
[Next.js](https://nextjs.org) is one of the most popular [React](https://react.dev) frameworks that enables server-side rendering, static site generation, and more. We chose Next.js for its flexibility and ease of use. We're also using it to host our serverless API.
## Hono & React Query
[Hono](https://hono.dev) is a small, simple, and ultrafast web framework for the edge. It provides tools to help you build APIs and web applications faster. It includes an RPC client for making type-safe function calls from the frontend. We use Hono to build our serverless API endpoints.
To make data fetching and caching from our API easy and reliable, we pair Hono with [React Query](https://tanstack.com/query/latest). It helps manage asynchronous data, caching, and state synchronization between the client and backend, delivering a fast and seamless UX.
## Better Auth
[Better Auth](https://www.better-auth.com) is a modern authentication library for fullstack applications. It provides ready-to-use snippets for features like email/password login, magic links, OAuth providers, and more. We use Better Auth to handle all authentication flows in our application.
## Tailwind CSS
[Tailwind CSS](https://tailwindcss.com) is a utility-first CSS framework that helps you build custom designs without writing any CSS. We also use [Base UI](https://base-ui.com) for our headless components library and [shadcn/ui](https://ui.shadcn.com), which enables you to generate pre-designed components with a single command.
## Drizzle
[Drizzle](https://orm.drizzle.team/) is a super fast [ORM](https://orm.drizzle.team/docs/overview) (Object-Relational Mapping) tool for databases. It helps manage databases, generate TypeScript types from your schema, and run queries in a fully type-safe way.
We use [PostgreSQL](https://www.postgresql.org) as our default database, but thanks to Drizzle's flexibility, you can easily switch to MySQL, SQLite or any [other supported database](https://orm.drizzle.team/docs/connect-overview) by updating a few configuration lines.
---
url: /docs/web/storage/configuration
title: Configuration
description: Learn how to configure storage in TurboStarter.
---
Currently, TurboStarter supports all S3-compatible storage providers, including [AWS S3](https://aws.amazon.com/s3/), [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces), [Cloudflare R2](https://www.cloudflare.com/products/r2/), and [Supabase Storage](https://supabase.com/storage).
For a concrete example using Supabase Storage as an S3-compatible provider, see the [Supabase recipe](/docs/web/recipes/supabase#use-supabase-storage-as-s3-compatible-storage).
The setup process is straightforward - you just need to configure a few environment variables in both your local environment and hosting provider:
```dotenv
S3_REGION=
S3_BUCKET=
S3_ENDPOINT=
S3_ACCESS_KEY_ID=
S3_SECRET_ACCESS_KEY=
```
Let's break down each required variable:
* `S3_REGION`: The [AWS region](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) where your storage is located - defaults to `us-east-1`
* `S3_BUCKET`: The default name of your storage bucket - you can pass a different one for each request
* `S3_ENDPOINT`: The S3 [endpoint URL](https://docs.aws.amazon.com/general/latest/gr/s3.html) for your storage provider - defaults to `https://s3.amazonaws.com`
* `S3_ACCESS_KEY_ID`: Your storage provider's [access key ID](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html)
* `S3_SECRET_ACCESS_KEY`: Your storage provider's [secret access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html)
You can learn more about S3 service configuration in the [official AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html) or your specific storage provider's documentation.
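As a quick sanity check, these variables can be resolved into a typed config with the documented defaults applied. The `resolveS3Config` helper below is a hypothetical sketch, not part of the kit:

```typescript
// Hypothetical helper: resolve the variables above into a typed S3 config,
// applying the documented defaults for region and endpoint.
interface S3Config {
  region: string;
  bucket: string;
  endpoint: string;
  accessKeyId: string;
  secretAccessKey: string;
}

const resolveS3Config = (env: Record<string, string | undefined>): S3Config => {
  // Fail fast on the variables that have no sensible default.
  const required = ["S3_BUCKET", "S3_ACCESS_KEY_ID", "S3_SECRET_ACCESS_KEY"];
  for (const key of required) {
    if (!env[key]) throw new Error(`Missing required variable: ${key}`);
  }
  return {
    region: env.S3_REGION ?? "us-east-1",
    bucket: env.S3_BUCKET!,
    endpoint: env.S3_ENDPOINT ?? "https://s3.amazonaws.com",
    accessKeyId: env.S3_ACCESS_KEY_ID!,
    secretAccessKey: env.S3_SECRET_ACCESS_KEY!,
  };
};
```

Failing the build early on missing credentials is usually preferable to discovering the problem on the first upload in production.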
---
url: /docs/web/storage/managing-files
title: Managing files
description: Learn how to manage files in TurboStarter.
---
Before you start managing files, make sure you have [configured storage](/docs/web/storage/configuration).
## Permissions
Most S3-compatible storage providers allow you to configure bucket permissions and access policies. It's crucial to properly set these up to secure your files and control who can access them.
Here are some key security recommendations:
* Keep your bucket private by default
* Use IAM roles and policies to manage access
* Enable server-side encryption for sensitive data
* Configure CORS settings appropriately for client-side uploads
* Regularly audit bucket permissions and access logs
Making your bucket public is strongly discouraged as it can expose sensitive data and lead to unauthorized access and unexpected costs from bandwidth usage.
For detailed guidance on configuring bucket policies and permissions, refer to your storage provider's documentation:
* [AWS S3 Security Documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-policy-language-overview.html)
* [DigitalOcean Spaces Security](https://docs.digitalocean.com/products/spaces/how-to/manage-access/)
* [Cloudflare R2 Security](https://developers.cloudflare.com/r2/api/s3/tokens/)
* [Supabase Storage Security](https://supabase.com/docs/guides/storage/security/access-control)
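For client-side uploads via presigned URLs, the bucket's CORS policy must allow `PUT` requests from your app's origin. A minimal S3-style CORS configuration might look like the sketch below; adjust the origin to your own domain, and note that the exact format varies slightly between providers:

```json
[
  {
    "AllowedOrigins": ["https://yourdomain.com"],
    "AllowedMethods": ["PUT", "GET"],
    "AllowedHeaders": ["Content-Type"],
    "MaxAgeSeconds": 3000
  }
]
```

Keeping `AllowedOrigins` limited to your actual domains (rather than `*`) is part of keeping the bucket locked down.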
## Uploading files
As explained in the [overview](/docs/web/storage/overview), TurboStarter uses presigned URLs to upload files to your storage provider.
We've prepared a dedicated endpoint that generates presigned URLs for uploads, which you can call from your client-side code.
```ts title="storage/router.ts"
export const storageRouter = new Hono().get(
"/upload",
enforceAuth,
validate("query", getObjectUrlSchema),
async (c) => c.json(await getUploadUrl(c.req.valid("query"))),
);
```
The signed URL is only valid for a limited time and will work for anyone who has access to it during that period. Make sure to handle the URL securely and avoid exposing it to unauthorized users.
Then, you can use it to upload files to the generated presigned URL from your frontend code:
```tsx title="upload.tsx"
const upload = useMutation({
mutationFn: async (data: { file?: File }) => {
const extension = data.file?.type.split("/").pop();
const path = `files/${crypto.randomUUID()}.${extension}`;
const { url: uploadUrl } = await handle(api.storage.upload.$get)({
query: { path },
});
const response = await fetch(uploadUrl, {
method: "PUT",
body: data.file,
headers: {
"Content-Type": data.file?.type ?? "",
},
});
if (!response.ok) {
throw new Error("Failed to upload file!");
}
},
onError: (error) => {
toast.error(error.message);
},
onSuccess: () => {
toast.success("File uploaded!");
},
});
```
The code above demonstrates how to implement file uploads in your application:
1. First, we have a server-side endpoint (`storageRouter`) that generates presigned URLs for uploads. This endpoint:
* [Requires authentication](/docs/web/api/protected-routes) via `enforceAuth`
* Validates the request parameters using `validate`
* Returns a presigned URL for uploading
2. Then, in the frontend code (`upload.tsx`), we use React Query's `useMutation` hook to handle the upload process:
* Requests a presigned URL from the server
* Uploads the file directly to the storage provider using the presigned URL
* Handles success and error cases with toast notifications
This approach ensures secure file uploads while avoiding server bandwidth costs and function timeout issues.
### Public uploads
Although **it's not recommended** to use public uploads in production, you can use the same endpoint to generate presigned URLs for public uploads:
```ts title="storage/router.ts"
export const storageRouter = new Hono().get(
"/upload",
validate("query", getObjectUrlSchema),
async (c) => c.json(await getUploadUrl(c.req.valid("query"))),
);
```
Just remove the `enforceAuth` middleware from the endpoint and keep the rest of the logic the same.
## Displaying files
We provide dedicated endpoints for retrieving signed URLs specifically for displaying files. These URLs are time-limited to maintain security, so they cannot be used for permanent storage or long-term access:
```ts title="storage/router.ts"
export const storageRouter = new Hono().get(
"/signed",
enforceAuth,
validate("query", getObjectUrlSchema),
async (c) => c.json(await getSignedUrl(c.req.valid("query"))),
);
```
This endpoint is perfect for displaying files that should only be accessible to authorized users for a limited time.
### Public files
For displaying files publicly (without authorization and time limitations), you can use the `/public` endpoint:
```ts title="storage/router.ts"
export const storageRouter = new Hono().get(
"/public",
validate("query", getObjectUrlSchema),
async (c) => c.json(await getPublicUrl(c.req.valid("query"))),
);
```
This endpoint generates a public URL for the file that you can use to display in your application. Please ensure that your bucket policy allows public access to the files and verify that you're not exposing any sensitive information.
## Deleting files
Deleting files works almost the same way as uploading files. You just need to generate a presigned URL for deletion and then use it to remove the file:
```ts title="storage/router.ts"
export const storageRouter = new Hono().get(
"/delete",
validate("query", getObjectUrlSchema),
async (c) => c.json(await getDeleteUrl(c.req.valid("query"))),
);
```
Then, in the frontend code, we use React Query's `useMutation` hook to handle the deletion process:
```tsx title="delete.tsx"
const remove = useMutation({
mutationFn: async () => {
const path = file.split("/").pop();
if (!path) return;
const { url: deleteUrl } = await handle(api.storage.delete.$get)({
query: { path: `files/${path}` },
});
await fetch(deleteUrl, {
method: "DELETE",
});
},
onError: (error) => {
toast.error(error.message);
},
onSuccess: () => {
toast.success("File removed!");
},
});
```
Now that you understand how to manage files in TurboStarter, it's time to build something awesome! Try creating a file upload component, building a photo gallery, or implementing a document management system.
---
url: /docs/web/storage/overview
title: Overview
description: Get started with storage in TurboStarter.
---
With TurboStarter, you can easily upload and manage files (images, videos, documents, and more) in your application.
Currently, all S3-compatible storage providers are supported, including [AWS S3](https://aws.amazon.com/s3/), [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces), [Cloudflare R2](https://www.cloudflare.com/products/r2/), [Supabase Storage](https://supabase.com/storage), and others.
If you're using Supabase, you can follow the [Supabase recipe](/docs/web/recipes/supabase#use-supabase-storage-as-s3-compatible-storage) for a concrete example of configuring Supabase Storage as your S3-compatible backend.
## Uploading files
The most common approach to uploading files is to use client-side uploads. With client-side uploads, you avoid paying ingress/egress fees for transferring file binary data through your server.
Additionally, most hosting platforms like [Vercel](https://vercel.com/docs/functions/runtimes#size-limits) or [Netlify](https://answers.netlify.com/t/what-is-the-maximum-file-size-upload-limit-in-a-netlify-form-submission/108419) have limitations on file size and maximum serverless function execution time.
That's why TurboStarter utilizes the **presigned URLs** feature of storage providers to upload files. Instead of sending files to the serverless function, the client requests a time-limited presigned URL from the serverless function and then uploads the file directly to the storage provider.
1. Client **requests** a presigned URL from the serverless function.
2. Server parses the request, validates the payload, optionally saves the metadata, and **returns the presigned URL** to the client.
3. Client **uploads the file** to the presigned URL within the expiration time.
4. (Optional) Once the file is uploaded, the serverless function is notified about the upload event, and the file metadata is saved to the database.
This approach ensures that credentials remain secure, handles authorization and authentication properly, and avoids the limitations of serverless platforms.
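The four steps above can be sketched as a single client-side flow. This is an illustrative sketch rather than the kit's actual code: the URL-fetching and upload functions are injected as parameters so the flow stays self-contained.

```typescript
// Illustrative sketch of the presigned-upload flow described above.
// `getUploadUrl` and `put` are injected dependencies: in a real app they
// would call your API route and `fetch` the storage provider, respectively.
type GetUploadUrl = (path: string) => Promise<{ url: string }>;
type Put = (url: string, file: Blob) => Promise<{ ok: boolean }>;

const uploadViaPresignedUrl = async (
  file: Blob,
  path: string,
  getUploadUrl: GetUploadUrl,
  put: Put,
): Promise<string> => {
  // Steps 1-2: request a time-limited presigned URL from the server.
  const { url } = await getUploadUrl(path);
  // Step 3: upload the file directly to the storage provider.
  const response = await put(url, file);
  if (!response.ok) throw new Error("Failed to upload file!");
  return url;
};
```

The file bytes never pass through your serverless function: only the small URL request does, which is what sidesteps the platform size and timeout limits.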
Configuring and using storage is straightforward. We'll explore this in more detail in the following sections.
---
url: /docs/web/tests/e2e
title: E2E tests
description: Simulate real user scenarios across the entire stack with automated end-to-end test tools and examples.
---
End-to-end (E2E) tests will be available soon, allowing you to automate testing of real user flows and interactions across your application.
Stay tuned for updates as we roll out robust E2E testing resources and examples.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
---
url: /docs/web/tests/unit
title: Unit tests
description: Write and run fast unit tests for individual functions and components with instant feedback.
---
Unit tests are a type of automated test where individual units or components are tested. The "unit" in "unit test" refers to the smallest testable parts of an application. These tests are designed to verify that each unit of code performs as expected.
TurboStarter uses [Vitest](https://vitest.dev) as the unit testing framework. It's a blazing-fast test runner built on top of [Vite](https://vitejs.dev), designed for modern JavaScript and TypeScript projects.
If you've used [Jest](https://jestjs.io) before, you already know Vitest - it shares the same API. But Vitest is built for speed: native TypeScript support without transpilation, parallel test execution, and a smart watch mode that only re-runs tests affected by your changes.
It comes with everything you need out of the box - code coverage, snapshot testing, mocking, and a slick UI for debugging. Fast feedback, zero configuration.
## Why write unit tests?
Unit tests give you **fast, focused feedback** on small pieces of your code - individual functions, hooks, or components. Instead of debugging an entire page or flow, you can verify just the logic you care about in isolation.
They also act as **living documentation**: a good test tells you how a function is supposed to behave, which edge cases are important, and what assumptions the code makes. This makes it much easier to safely refactor or extend features later.
In TurboStarter, unit tests are designed to be **cheap and quick to run**, so you can keep Vitest running in watch mode while you code. Every change you make is immediately checked, helping you catch regressions before they ever reach integration or end‑to‑end tests.
## Configuration
TurboStarter configures Vitest to be **as simple as possible**, while still taking advantage of [Turborepo's caching](https://turborepo.com/docs/crafting-your-repository/caching) and Vitest's [Test Projects](https://vitest.dev/guide/projects).
```ts title="vitest.config.ts"
import { mergeConfig } from "vitest/config";
import baseConfig from "@workspace/vitest-config/base";
export default mergeConfig(baseConfig, {
test: {
/* your extended test configuration here */
},
});
```
* **Per-package tests**: each package that has unit tests defines its own `test` script. This keeps the configuration close to the code and makes it easy to add tests to any workspace.
* **Turbo tasks for CI**: the root `test` task (`pnpm test`) uses `turbo run test` to execute all package-level test scripts with smart caching, which is ideal for CI pipelines where you want to avoid re-running unchanged tests.
* **Vitest Test Projects for local dev**: a root Vitest configuration uses [Test Projects](https://vitest.dev/guide/projects) to run all unit test suites from a single command, which is perfect for local development when you want fast feedback across the whole monorepo.
This **hybrid setup** combines Turborepo and Vitest Projects in a way that fits TurboStarter's principles: cached, package-aware runs in CI, and a single, unified Vitest entry point for local development.
You can read more about this setup in the official documentation guides listed below.
## Running tests
There are a few different ways to run unit tests, depending on what you're doing:
* **CI / full test run** - at the root of the repo:
```bash
pnpm test
```
This runs `turbo run test`, which executes all `test` scripts in packages that define them, with Turborepo handling caching so unchanged packages are skipped. This is what you should use in your CI/CD pipeline.
* **One-off local run with Vitest Projects**:
```bash
pnpm test:projects
```
This uses Vitest [Test Projects](https://vitest.dev/guide/projects) to run all configured unit test suites from a single command, which is great when you want to quickly validate the whole monorepo locally.
* **Watch mode during development**:
```bash
pnpm test:projects:watch
```
This starts Vitest in watch mode across all Test Projects. As you edit files, only the affected tests are re-run, giving you fast feedback while you work.
## Code coverage
Unit test coverage helps you understand **how much** of your code is being tested. While it can't guarantee bug-free code, it shines a light on untested paths that could hide issues or regressions.
To generate a code coverage report for all unit tests, run:
```bash
pnpm turbo test:coverage
```
This command runs the coverage task across all relevant packages (using Turborepo) and collects the results into a single coverage output.
To open the coverage report in your browser:
```bash
pnpm turbo test:coverage:view
```
This will build the HTML report and launch it using your default browser, so you can explore which files and branches are covered.
You can also store the generated coverage report as a [GitHub Actions artifact](https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts) during your CI/CD pipeline, just add the following steps to your workflow job:
```yaml title=".github/workflows/ci.yml"
# your workflow job configuration here
- name: 📊 Generate coverage
run: pnpm turbo test:coverage
- name: 🗃️ Archive coverage report
uses: actions/upload-artifact@v5
with:
name: coverage-${{ github.sha }}
path: tooling/vitest/coverage/report
```
This will generate a test coverage report and upload it as an artifact, so you can access it from the GitHub Actions tab for later inspection.
A high coverage percentage means your tests execute most lines and branches - but the quality and relevance of your tests matter more than the raw number. Use coverage reports to spot gaps and guide improvements, not as the sole metric of test health.

## Best practices
Unit tests should work **for you**, not the other way around. Focus on writing tests that make it easier to change code with confidence, not on satisfying arbitrary rules or reaching a magic number in a dashboard.
Code coverage is a **useful metric**, but it **SHOULD NOT** be the goal. It's better to have a smaller set of high‑value tests that cover critical paths and edge cases than a huge suite of fragile tests that are hard to maintain.
When in doubt, ask: *“Does this test give **me** confidence that I can change this code without breaking users?”* If the answer is no, refactor or remove it.
Finally, keep unit tests focused on **small, isolated pieces of logic**. More advanced flows — like multi-step user journeys, cross-service interactions, or full-page behavior — are better covered by [end-to-end (E2E) tests](/docs/web/tests/e2e), where you can verify the system as a whole.
---
url: /docs/web/troubleshooting/billing
title: Billing
description: Find answers to common billing issues.
---
## Checkout can't be created
This happens in the following cases:
1. The environment variables are not set correctly. Make sure you have set the environment variables for your billing provider in `.env.local` when running locally, or in your hosting provider's dashboard in production.
2. The price IDs used are incorrect. Make sure to use the exact price IDs as they are in the payment provider's dashboard.
[Read more about billing configuration](/docs/web/billing/configuration)
## Database is not updated after subscribing to a plan
This may happen if the webhook is not set up correctly. Please make sure you have set up the webhook in the payment provider's dashboard and that the URL is correct.
If working locally, make sure that:
1. If using Stripe, that the Stripe CLI or configured proxy is up and running ([see the Stripe documentation for more information](/docs/web/billing/stripe#create-a-webhook)).
2. If using Lemon Squeezy, that the webhook set in Lemon Squeezy is correct, the server is running, and the proxy is set up properly if you are testing locally ([see the Lemon Squeezy documentation for more information](/docs/web/billing/lemon-squeezy#create-a-webhook)).
3. If using Polar, ensure that you have configured the webhook URL in the Polar dashboard exactly as documented, and that your local development server is accessible (use a tool like [ngrok](https://ngrok.com) if required) ([see the Polar documentation for more information](/docs/web/billing/polar#create-a-webhook)).
## Webhook signature verification failed
If you see "Invalid signature" or "Webhook signature verification failed":
1. **Check the webhook secret** matches exactly (no extra spaces or newlines)
2. **Verify you're using the correct secret** for the environment (test vs live)
3. **Ensure the raw request body** is being passed to verification (some middleware can modify it)
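To see why the raw body matters: HMAC-style signatures are computed over the exact bytes the provider sent, so any middleware that re-parses and re-serializes the JSON produces a different digest. The sketch below shows a generic HMAC-SHA256 check; real providers layer on timestamps and versioned signing schemes, so treat this as an illustration rather than the exact Stripe, Lemon Squeezy, or Polar algorithm.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Generic HMAC-SHA256 signature check over the raw request body.
// Providers add timestamps, versioned schemes, etc. - check their docs.
const verifySignature = (
  rawBody: string,
  secret: string,
  signature: string,
): boolean => {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so guard first.
  return a.length === b.length && timingSafeEqual(a, b);
};
```

Even re-ordering keys or changing whitespace in the JSON body yields a different digest, which is why verification must run on the raw body before any body-parsing middleware touches it.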
## Prices not showing or wrong currency
If prices display incorrectly or don't show:
1. **Verify price IDs** in your config match the ones in your payment provider dashboard
2. **Check the price is active** (not archived) in the provider dashboard
3. **Ensure currency matches** your configuration
4. **Clear browser cache** - pricing data may be cached
```bash
# Restart dev server after config changes
pnpm dev
```
---
url: /docs/web/troubleshooting/deployment
title: Deployment
description: Find answers to common web deployment issues.
---
## Deployment build fails
This is most likely an issue with environment variables not being set correctly in the deployment environment. Please analyze your deployment provider's logs to identify the issue.
The kit is very defensive about incorrect environment variables and will throw an error if any required variable is missing. This way, the build fails when environment variables are misconfigured - instead of deploying a broken application.
Check our guides for the most popular hosting providers to learn how to deploy your TurboStarter project correctly.
## What should I set as a URL before my first deployment?
That's a very good question! For the first deployment you can set any URL, and then, after you (or your provider) assign a domain name, you can change it to the correct one. There's nothing wrong with redeploying your project multiple times.
## Sign in with OAuth provider doesn't work
This is most likely a misconfiguration in the provider's settings. To troubleshoot this issue, follow these steps:
1. **Verify provider settings**: Ensure that the OAuth provider's settings are correctly configured. Check that the client ID, client secret, and redirect URI are accurate and match the values in your application.
2. **Check environment variables**: Confirm that the environment variables for the OAuth provider are set correctly in your application production environment.
3. **Validate callback URLs**: Ensure that the callback URLs for each provider are set correctly and match the URLs in your application. This is crucial for the OAuth flow to work correctly.
Please read [Better Auth documentation](https://www.better-auth.com/docs/concepts/oauth) for more information on how to set up third-party providers.
## Build runs out of memory
If the build fails with `JavaScript heap out of memory`:
**Increase Node.js memory limit:**
```bash
# In your build command
NODE_OPTIONS="--max-old-space-size=4096" pnpm build
```
**For Vercel**, add to `vercel.json`:
```json
{
"build": {
"env": {
"NODE_OPTIONS": "--max-old-space-size=4096"
}
}
}
```
**For other providers**, set the `NODE_OPTIONS` environment variable in their dashboard.
## Database connection issues in production
If you see "Connection refused" or "ECONNREFUSED" errors:
1. **Check DATABASE\_URL** is set correctly in your hosting provider
2. **Verify IP allowlist** - many database providers require you to allowlist your deployment's IP addresses
3. **Check SSL requirements** - production databases often require SSL:
```
DATABASE_URL="postgresql://...?sslmode=require"
```
4. **Verify connection pooling** - serverless environments may need connection pooling (e.g., PgBouncer, Neon's pooler)
## CORS errors in production
If API requests fail with CORS errors:
1. **Verify `NEXT_PUBLIC_URL`** matches your actual domain exactly (including `https://`)
2. **Check for trailing slashes** - `https://example.com` and `https://example.com/` are different origins
3. **Verify API route** is deployed - check `/api/status` is accessible
The API automatically configures CORS based on `NEXT_PUBLIC_URL`.
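A quick way to check what the browser will actually compare: the `origin` of a URL never includes a trailing slash or path, so normalizing your configured URL shows the exact value that must match the browser's `Origin` header.

```typescript
// The browser's Origin header is scheme + host + port, with no trailing
// slash or path - URL.origin computes the same normalization.
const toOrigin = (value: string): string => new URL(value).origin;

console.log(toOrigin("https://example.com/")); // "https://example.com"
console.log(toOrigin("https://example.com/app")); // "https://example.com"
```

If `NEXT_PUBLIC_URL` normalizes to something other than the origin shown in your browser's failed request, that mismatch is the CORS error.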
## Preview deployments not working
If preview/staging deployments have issues:
1. **OAuth callbacks** won't work unless you add preview URLs to your OAuth provider's allowed redirect URIs
2. **Webhooks** need to point to the correct environment (use separate webhook endpoints for staging)
3. **Database** - ensure preview uses a separate database or the same one as staging (not production)
---
url: /docs/web/troubleshooting/emails
title: Emails
description: Find answers to common emails issues.
---
## I want to use a different email provider
Of course! You can use any email provider that you want. All you need to do is to implement the `EmailProviderStrategy` and export it in your `index.ts` file.
[Read more about sending emails](/docs/web/emails/sending)
## My emails are landing in the spam folder
Emails landing in spam folders is a common issue. Here are key steps to improve deliverability:
1. **Configure proper domain setup**:
* Use a dedicated subdomain for sending emails (e.g., mail.yourdomain.com)
* Ensure [reverse DNS (PTR) records](https://www.cloudflare.com/learning/dns/dns-records/dns-ptr-record/) are properly configured
* Warm up your sending domain gradually
2. **Implement authentication protocols**:
* Set up [SPF records](https://www.cloudflare.com/learning/dns/dns-records/dns-spf-record/) to specify authorized sending servers
* Enable [DKIM signing](https://www.cloudflare.com/learning/dns/dns-records/dns-dkim-record/) to verify email authenticity
* Configure [DMARC policies](https://www.cloudflare.com/learning/dns/dns-records/dns-dmarc-record/) to prevent spoofing
3. **Follow deliverability best practices**:
* Include clear unsubscribe mechanisms in all marketing communications
* Personalize content appropriately
* Avoid excessive promotional language and spam triggers
* Maintain consistent HTML formatting and styling
* Only include links to verified domains
* Keep a regular sending schedule
* Clean your email lists regularly
* Use double opt-in for new subscribers
4. **Monitor and optimize**:
* Track key metrics like delivery rates, opens, and bounces
* Monitor spam complaint rates
* Review email authentication reports
* Test emails across different clients and devices
* Adjust sending practices based on performance data
## Emails not sending in development
If emails don't send locally:
1. **Check your provider credentials** are set in `.env.local`
2. **Verify the email provider is configured** in `packages/email`
3. **Check the console/logs** for error messages
**For local testing without a provider**, you can use services like:
* [Mailpit](https://github.com/axllent/mailpit) (included in Docker setup)
* [Ethereal](https://ethereal.email/) (free test accounts)
## Email templates not rendering correctly
If email templates look broken or don't render:
1. **Preview locally first:**
```bash
pnpm --filter @workspace/email dev
```
2. **Check for unsupported CSS** - email clients have limited CSS support. Avoid:
* Flexbox/Grid (use tables)
* External stylesheets
* Modern CSS properties
3. **Test across clients** using tools like [Litmus](https://litmus.com/) or [Email on Acid](https://www.emailonacid.com/)
---
url: /docs/web/troubleshooting/installation
title: Installation
description: Find answers to common web installation issues.
---
## Cannot clone the repository
Issues cloning the repository are usually caused by a Git misconfiguration on your local machine. The commands displayed in this guide use SSH: these will work only if you have set up your SSH keys in GitHub.
If you run into issues, [please make sure you follow this guide to set up your SSH key in GitHub.](https://docs.github.com/en/authentication/connecting-to-github-with-ssh)
If this also fails, please use HTTPS instead. You will be able to see the commands on the repository's GitHub page under the "Clone" dropdown.
Please also make sure that the account that accepted the TurboStarter invite and the locally connected account are the same.
## My environment variables from `.env.local` file are not being loaded
Make sure you are running the `pnpm dev` command from the root directory of your project (where the `pnpm-workspace.yaml` file is located)
Also, ensure that the `.env.local` files are present in the apps that need them. For example, the `.env.local` file should be present in the `apps/web` directory for the web app.
TurboStarter uses `dotenv-cli` to load environment variables from `.env` files. The `dotenv-cli` is automatically used when running the `pnpm dev` command from the root directory.
## Next.js server doesn't start
This may happen due to some issues in the packages. Try to clean the workspace using the following command:
```bash
pnpm clean
```
Then, reinstall the dependencies:
```bash
pnpm i
```
You can now retry running the dev server.
## Local database doesn't start
If you cannot run the local database container, it's likely you have not started [Docker](https://docs.docker.com/get-docker/) locally. Our local database requires Docker to be installed and running.
Please make sure you have installed Docker (or compatible software such as [Colima](https://github.com/abiosoft/colima) or [Orbstack](https://github.com/orbstack/orbstack)) and that it is running on your local machine.
Also, make sure that you have enough [memory and CPU allocated](https://docs.docker.com/engine/containers/resource_constraints/) to your Docker instance.
## I don't see my translations
If you don't see your translations appearing in the application, there are a few common causes:
1. Check that your translation `.json` files are properly formatted and located in the correct directory
2. Verify that the language codes in your configuration match your translation files
3. Enable debug mode (`debug: true`) in your i18next configuration to see detailed logs
[Read more about configuration for translations](/docs/web/internationalization/configuration)
## "Module not found" error
This issue is usually caused by either a dependency installed in the wrong package or issues with the file system.
The most common cause is incorrect dependency installation. Here's how to fix it:
1. Clean the workspace:
```bash
pnpm clean
```
2. Reinstall the dependencies:
```bash
pnpm i
```
If you're adding new dependencies, make sure to install them in the correct package:
```bash
# For main app dependencies
pnpm add --filter web my-package
# For a specific package
pnpm add --filter @workspace/ui my-package
```
If the issue persists, please check the file system for any issues.
### Windows OneDrive
OneDrive can cause file system issues with Node.js projects due to its file syncing behavior. If you're using Windows with OneDrive, you have two options to resolve this:
1. Move your project to a location outside of OneDrive-synced folders (recommended)
2. Disable OneDrive sync specifically for your development folder
This prevents file watching and symlink issues that can occur when OneDrive tries to sync Node.js project files.
## Turbo cache issues
If builds behave unexpectedly or changes aren't reflected, clear the Turbo cache:
```bash
pnpm turbo clean
```
Then rebuild:
```bash
pnpm build
```
## Windows line ending issues
If you see errors related to line endings or scripts fail with `\r` characters:
**Configure Git to use LF:**
```bash
git config --global core.autocrlf input
```
**Fix existing files:**
```bash
git rm --cached -r .
git reset --hard
```
Or use a tool like `dos2unix` to convert files.
---
url: /ai/docs/agents
title: Agents
description: Build powerful, autonomous AI agents capable of performing complex tasks within your web and mobile applications.
---
This feature is currently under development and will be
available in a future release.
[See roadmap](https://github.com/orgs/turbostarter/projects/1)
The Agents page is currently a placeholder in both the web and mobile apps. Today, it serves as a preview screen pointing users to the public roadmap while the feature is still under development.
The long-term goal is to showcase how to build intelligent, autonomous agents that can interact with users, tools, and data sources.
## Features
Design agents once and deploy them seamlessly across multiple platforms
including React, React Native, Expo, and Next.js through a unified
architecture.
Implement sophisticated context retention that allows agents to maintain
state and recall critical information across conversations and devices with
perfect continuity.
Enable agents to take meaningful actions by integrating with external tools,
accessing APIs, and executing functions dynamically within secure,
controlled environments.
Leverage the [Model Context
Protocol](https://modelcontextprotocol.io/introduction) to standardize
context delivery between agents and Large Language Models (LLMs). This
enables frictionless connections to diverse data sources and tools,
dramatically enhancing agent capabilities.
Orchestrate complex workflows combining Retrieval-Augmented Generation
(RAG), tool utilization, and MCP server interactions to solve sophisticated
tasks that previously required human intervention.
Stay tuned for the release of this exciting functionality!
---
url: /ai/docs/chat
title: Chat
description: Build a powerful AI assistant with multiple LLMs, generative UI, web browsing, and image analysis.
---
The [Chat](https://ai.turbostarter.dev/chat) demo application showcases an advanced AI assistant capable of engaging in complex conversations, browsing the web, working with file attachments, and sharing selected conversations through public links. It integrates multiple large language models (LLMs), supports reasoning-enabled models, and streams responses in real time.
You also get light chat management features like pinning and renaming so conversations stay easy to find.
## Features
The chat app offers a variety of capabilities for an enhanced conversational experience:
Switch between models from providers like
[OpenAI](/ai/docs/providers/openai),
[Anthropic](/ai/docs/providers/anthropic), [Google
AI](/ai/docs/providers/google), [xAI](/ai/docs/providers/xai), and
[DeepSeek](/ai/docs/providers/deepseek) from one consistent chat interface.
Experience an AI that truly understands complex questions and delivers
thoughtful, nuanced responses based on comprehensive reasoning.
Access up-to-the-minute information from the web through the integrated
search capability powered by the shared [web search provider
layer](/ai/docs/web-search).
Share a conversation up to a chosen point, then copy or open the public link
from the built-in share sheet.
Enjoy natural, fluid conversations with responses that stream in real-time,
eliminating waiting periods.
Pin and rename chats for quick organization without adding unnecessary
complexity.
## Setup
To implement your advanced AI assistant, you'll need several services configured. If you haven't set these up yet, start with:
### AI models
Different models offer varying capabilities for tool calling, reasoning, and file processing. Consider these differences when selecting the optimal model for your specific use case.
The Chat app uses the AI SDK to support multiple language and vision-capable models. You can switch models based on your needs. Explore the most relevant providers here:
For detailed configuration of specific providers and other supported models, refer to the [AI SDK documentation](https://sdk.vercel.ai/providers/ai-sdk-providers).
### Web browsing
The chat app includes a dedicated web-search tool with provider-specific strategy adapters. The current codebase includes integrations for [Tavily](https://tavily.com/), [Brave Search](https://brave.com/search/api/), [Exa](https://exa.ai/), and [Firecrawl](https://www.firecrawl.dev/).
This provider layer keeps the tool contract stable while letting you switch or extend the underlying search backend. It also centralizes result normalization so the chat flow does not depend on each provider's raw response format.
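The provider layer described above can be pictured as a small strategy interface with one adapter per backend. The types and the `tavilyProvider` adapter below are an illustrative sketch, not the actual contract shipped in `@workspace/ai-chat`:

```ts
// Hypothetical sketch of a web-search provider strategy layer.
// The interfaces and field names are illustrative assumptions.

interface SearchResult {
  title: string;
  url: string;
  snippet: string;
}

interface SearchProvider {
  name: string;
  search(query: string): Promise<SearchResult[]>;
}

// A Tavily-style raw result shape (assumed for illustration).
interface TavilyRawResult {
  title: string;
  url: string;
  content: string;
}

const tavilyProvider: SearchProvider = {
  name: "tavily",
  async search(query: string): Promise<SearchResult[]> {
    // The real integration calls the Tavily API here; canned data
    // keeps the sketch self-contained.
    const raw: TavilyRawResult[] = [
      { title: "Example", url: "https://example.com", content: "Snippet text" },
    ];
    // Normalize the provider-specific shape into the shared contract,
    // so the chat flow never depends on raw response formats.
    return raw.map((r) => ({ title: r.title, url: r.url, snippet: r.content }));
  },
};
```

Swapping Brave, Exa, or Firecrawl in then means writing one more adapter that returns the same `SearchResult` shape.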
### Tavily quick start
[Tavily](https://tavily.com/) remains a strong default option because it is optimized for LLM and agent workflows and returns structured, AI-friendly search results with minimal setup.
Tavily offers a generous free tier with [1,000 API credits per
month](https://docs.tavily.com/documentation/api-credits) without requiring
credit card information. A basic search consumes 1 credit, while an advanced
search uses 2 credits. Paid plans are available for higher volume usage.
To enable web browsing, follow these steps:
#### Get Tavily API Key
Sign up or log in at the [Tavily Platform](https://app.tavily.com/sign-in) to obtain your API key from the dashboard.
#### Add API Key to Environment
Add your API key to your project's `.env` file (e.g., in `apps/web`):
```bash title=".env"
TAVILY_API_KEY=tvly-your-api-key
```
With the API key properly configured, the chat app can use Tavily for searches when contextually appropriate.
## Data persistence
User interactions and chat history are persisted to ensure a continuous experience across sessions.
Conversation data is organized within a dedicated PostgreSQL schema named `chat`
to maintain clear separation from other application data.
* `chat`: stores records for each conversation session, including metadata like `userId`, `name`, and timestamps.
* `message`: stores individual messages linked to a parent chat.
* `part`: stores structured message parts, including text parts and file parts.
* `usage`: stores model/provider usage metadata for assistant responses.
Files shared within conversations are uploaded to [cloud storage](/ai/docs/storage) (S3-compatible), with attachment metadata stored in message parts and signed URLs generated when the files need to be read back.
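As a rough sketch of how these tables relate, the row shapes below mirror the list above; the exact field lists are assumptions, not the actual Drizzle schema:

```ts
// Illustrative row shapes for the `chat` schema. Field names beyond
// those mentioned in the docs are assumptions.

interface ChatRow {
  id: string;
  userId: string;
  name: string;
}

interface MessageRow {
  id: string;
  chatId: string; // links a message to its parent chat
  role: "user" | "assistant";
}

interface PartRow {
  id: string;
  messageId: string; // links a part to its parent message
  type: "text" | "file";
  content: string;
}

// Reassemble a message with its parts, mirroring how the UI renders
// `message.parts` in the chat stream.
function withParts(message: MessageRow, parts: PartRow[]) {
  return {
    ...message,
    parts: parts.filter((p) => p.messageId === message.id),
  };
}
```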
## Devtools
TurboStarter AI includes a built-in devtools tool designed to help you inspect, debug, and understand all aspects of the AI chat experience. When you run the development server, it becomes available at [http://localhost:3001](http://localhost:3001).
The devtools provide a detailed view into chat request/response flows, message payloads, model invocations, and step-by-step assistant function calls as they occur.

You can monitor live chat events, observe intermediate reasoning traces, and troubleshoot issues - making it much easier to build, test, and optimize AI-powered conversations with full transparency.
## Structure
The Chat functionality is distributed across shared packages and platform-specific modules for web and mobile, ensuring strong code reuse and a consistent product experience.
### Core
The shared chat logic lives in `@workspace/ai-chat`, implemented in `packages/ai/chat/src`. It includes:
* Zod schemas for chat payloads and options
* Model definitions and provider strategy wiring
* Chat persistence helpers for messages, parts, attachments, and usage
* Provider-backed web search tooling under `tools/web-search`
* Streamed AI responses built on the AI SDK
### API
Built with Hono, the `packages/api` package wires the chat app through `packages/api/src/modules/ai/chat.ts`.
That module validates incoming payloads, applies shared middleware like authentication and credit deduction, and then forwards the request into `@workspace/ai-chat`, where the chat stream, persistence, attachment handling, and model/tool execution actually happen.
### Web
The Next.js web application in `apps/web` implements the user-facing chat experience:
* `src/app/[locale]/(apps)/chat/**`: route entry points for the chat app
* `src/modules/chat/**`: the actual feature modules for composer, history, conversation UI, web search rendering, and attachment handling
### Mobile
The Expo/React Native mobile application in `apps/mobile` delivers a native chat experience:
* `src/app/(apps)/chat/**`: route entry points for the mobile chat app
* `src/modules/chat/**`: mobile-native chat modules for composer, history, and conversation UI
* **API interaction**: uses the same shared Hono client as the web app for consistent backend communication
This modular structure promotes separation of concerns and facilitates independent development and scaling of different parts of the application.
---
url: /ai/docs/image
title: Image playground
description: Learn how to generate images using AI models within the TurboStarter AI demo application.
---
The [Image Generation](https://ai.turbostarter.dev/image) demo application allows users to create visuals from text prompts using multiple image models. It provides a clean interface for prompt entry, model selection, aspect ratio control, and browsing generation history.
## Features
Explore the capabilities of the AI-powered image generation tool:
Create images simply by describing what you want to see in text.
Choose from different AI image generation models offered by various
providers.
Select the desired aspect ratio for your generated images (e.g. square,
landscape, portrait).
Create multiple design variations from a single prompt simultaneously,
accelerating your creative workflow.
Access and reference your complete generation history, including all prompts
and resulting images for continued iteration.
## Setup
To implement image generation in your application, you'll need to configure the necessary backend services.
You'll also need API keys for the models you want to enable. Follow the provider documentation linked below for setup details.
## AI models
The Image Generation app uses the AI SDK to support several image-capable models. In the current codebase, these come from OpenAI, Google, and Replicate:
For detailed implementation guidance, refer to the [AI SDK documentation](https://sdk.vercel.ai/docs/ai-sdk-core/image-generation) covering the `generateImage` function and supported providers.
## Data persistence
Details about image generation requests and the resulting images are stored to maintain user history.
Data is organized within a dedicated PostgreSQL schema named `image`:
* `generation`: captures detailed information about each generation request, including the `prompt`, selected `model`, `aspectRatio`, requested image `count`, `userId`, and precise timestamps.
* `image`: stores metadata for each generated image, linked to its parent `generation` record via `generationId` and maintaining the `url` reference to the stored image file.
Generated image files are uploaded to [cloud storage](/ai/docs/storage) (S3-compatible). The public asset URL is then stored in the `image` table for later retrieval.
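To illustrate how a single request fans out into per-image records, the sketch below maps UI aspect ratios to pixel dimensions and expands the requested `count`. Both the helper and the dimension values are hypothetical, not the app's actual code:

```ts
// Hypothetical mapping from UI aspect ratios to pixel dimensions.
// The exact sizes are assumptions for illustration.
const ASPECT_RATIOS = {
  square: { width: 1024, height: 1024 },
  landscape: { width: 1344, height: 768 },
  portrait: { width: 768, height: 1344 },
} as const;

type AspectRatio = keyof typeof ASPECT_RATIOS;

// Expand one generation request into the per-image records that would
// back the `image` table rows described above.
function planGeneration(prompt: string, aspectRatio: AspectRatio, count: number) {
  const size = ASPECT_RATIOS[aspectRatio];
  return Array.from({ length: count }, (_, index) => ({
    prompt,
    index,
    ...size,
  }));
}
```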
## Structure
The Image Generation feature is organized across the monorepo for clear separation between shared AI logic, API routes, and platform-specific UI.
### Core
The shared image generation logic lives in `@workspace/ai-image`, implemented in `packages/ai/image/src`:
* Validation schemas for prompts and generation options
* Model definitions and provider strategy wiring
* DB helpers for generations and images
* Upload flow for persisting generated assets to storage
### API
The `packages/api` package wires image generation through `packages/api/src/modules/ai/image.ts`.
That module is responsible for validating generation input, applying shared middleware like authentication, rate limiting, and credits, and then delegating to `@workspace/ai-image`, which creates generation records, calls the model provider, uploads assets to storage, and returns the results back through the API layer.
### Web
The Next.js application (`apps/web`) delivers an intuitive user interface:
* `src/app/[locale]/(apps)/image/**`: route entry points for the image app, history page, and generation detail pages
* `src/modules/image/**`: feature modules for the composer, history, generation detail views, and image gallery UI
### Mobile
The Expo/React Native application (`apps/mobile`) provides a native mobile experience:
* `src/app/(apps)/image/**`: route entry points for the mobile image app
* `src/modules/image/**`: mobile-native modules for generation, history, and viewing results
* **API integration**: uses the same shared Hono client as the web app for consistent backend communication
This architecture ensures perfect consistency across platforms while enabling tailored UI implementations optimized for each environment.
---
url: /ai/docs/rag
title: Knowledge RAG
description: Engage in conversations with your documents using AI to extract insights and answer questions.
---
The [Knowledge RAG](https://ai.turbostarter.dev/rag) demo application enables intelligent interaction with document content through a conversational AI interface. Upload a document from your device or provide a remote URL, then ask questions, request summaries, and extract information grounded in the document itself.
## Features
Transform how you interact with document content through these powerful capabilities:
Upload documents directly from your device or import them from a remote URL.
Chat with an AI that answers using content retrieved from the uploaded
document.
Quickly find specific information, key points, or summaries within the
document through natural language queries.
Visualize exactly which document sections informed the AI's responses with
precise source highlighting.
Conduct sophisticated conversations spanning multiple uploaded documents,
enabling cross-document analysis and comparison.
## Setup
To implement the [Knowledge RAG](/ai/docs/rag) application in your project, configure these essential backend services:
Set up PostgreSQL with the `pgvector` extension to efficiently store
conversation history, document metadata, and vector embeddings for semantic
search.
Configure S3-compatible cloud storage for secure management of uploaded
documents.
You'll also need API keys for the language and embedding models used in the RAG flow.
## AI models
This application leverages two complementary AI model types working together:
1. **Large Language Models (LLMs):** Provide sophisticated natural language understanding to interpret your questions and generate contextually appropriate responses based on document content.
2. **Embedding Models:** Convert document text segments into numerical vector representations that enable efficient semantic similarity search and [Retrieval-Augmented Generation (RAG)](https://en.wikipedia.org/wiki/Retrieval-augmented_generation).
In the current codebase, the default RAG strategy uses OpenAI for both the chat model and the embedding model:
If you want to expand provider support, the right place to do that is `packages/ai/rag/src/strategies.ts`.
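The retrieval half of this pairing comes down to comparing vectors. In production, `pgvector`'s HNSW index performs this inside Postgres; the brute-force sketch below only illustrates the math:

```ts
// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

interface EmbeddedChunk {
  text: string;
  embedding: number[];
}

// Return the k chunks most similar to the query embedding - the
// naive version of what an HNSW-indexed query does at scale.
function topK(query: number[], chunks: EmbeddedChunk[], k: number): EmbeddedChunk[] {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding),
    )
    .slice(0, k);
}
```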
## Data persistence
The application stores data related to chats, documents, and embeddings to provide a persistent experience.
Application data is organized within a dedicated PostgreSQL schema named `rag`:
* `chat`: stores metadata for each RAG conversation.
* `message`: stores all user and assistant messages within a chat.
* `document`: stores uploaded document metadata including `name` and storage `path`.
* `embedding`: stores extracted chunks and vector embeddings using [`pgvector`](https://github.com/pgvector/pgvector)'s `vector` type, with an HNSW index for similarity search.
The files uploaded by users are securely stored in your configured [cloud storage](/ai/docs/storage) bucket. The `path` field in the `document` table maintains the precise reference to each file's location.
## Devtools
TurboStarter AI provides an integrated devtools panel specifically designed to help you analyze, debug, and optimize every aspect of the RAG (Retrieval-Augmented Generation) workflow.
When you run the development server, the devtools panel becomes available at [http://localhost:3001](http://localhost:3001).
With devtools, you can trace how user queries are processed, examine the retrieval of relevant documents, and inspect each step in the response generation pipeline—including model calls, prompt construction, and semantic matching.

This tool allows you to observe retrieval and generation events as they happen, diagnose retrieval quality or edge cases, and fine-tune your RAG configuration for the best results. It is an essential resource for developing robust, transparent document-based AI features.
## Structure
The [Knowledge RAG](/ai/docs/rag) feature is organized across the monorepo for shared AI logic, API routes, and platform-specific UI.
### Core
The shared RAG logic lives in `@workspace/ai-rag`, implemented in `packages/ai/rag/src`:
* Validation schemas for messages and remote URLs
* Document loading, chunking, and embedding generation helpers
* Similarity search utilities for retrieving relevant content
* Streamed RAG chat logic with tool-assisted retrieval
### API
The `packages/api` package wires the RAG app through `packages/api/src/modules/ai/rag.ts`.
This module validates uploads and chat messages, applies shared middleware like authentication, rate limiting, and credits, and then delegates to `@workspace/ai-rag`, where document creation, embedding generation, retrieval, and streamed responses are handled.
### Web
The [Next.js](https://nextjs.org/) application (`apps/web`) delivers an intuitive user interface:
* `src/app/[locale]/(apps)/rag/**`: route entry points for the RAG app and chat detail pages
* `src/modules/rag/**`: feature modules for upload, chat history, conversation UI, and the built-in document previewer
### Mobile
The [Expo](https://expo.dev/)/[React Native](https://reactnative.dev/) application (`apps/mobile`) provides a native mobile experience:
* `src/app/(apps)/rag/**`: route entry points for the mobile RAG app
* `src/modules/rag/**`: mobile-native modules for upload, history, and conversation UI
* **API integration**: uses the same shared Hono client as the web app for consistent backend communication
This architecture ensures that core AI processing and data handling logic is shared across platforms, while enabling optimized UI implementations tailored to each environment.
---
url: /ai/docs/tts
title: Text to Speech
description: Convert text into natural-sounding speech using advanced AI voice synthesis models.
---
The [Text to Speech (TTS)](https://ai.turbostarter.dev/tts) demo application transforms written text into high-quality spoken audio. It uses ElevenLabs models to stream generated speech and gives users fine-grained control over voice settings.
## Features
Discover the powerful capabilities of this AI-powered voice synthesis solution:
Browse a large library of voices from [ElevenLabs](https://elevenlabs.io/)
to find a style that fits your product.
Experience near-instantaneous audio generation with streaming delivery,
providing immediate feedback as your content comes to life.
Enjoy a full-featured playback interface with precise controls for playback
speed and convenient options to download generated audio files.
Fine-tune your audio output with settings like speed, stability, similarity,
and speaker boost, depending on the selected voice and model.
Benefit from a thoughtfully designed interface that makes transforming text
to speech effortless and efficient, even for first-time users.
## AI models
This application primarily uses specialized text-to-speech models from [ElevenLabs](https://elevenlabs.io/).
} />
For comprehensive information about available voices and advanced customization techniques, consult the [ElevenLabs SDK documentation](https://elevenlabs.io/docs/overview).
## Data flow
Unlike the chat, image, and RAG demos, the TTS demo does **not** persist generations in the database by default. The API streams back audio directly from ElevenLabs, and the UI handles playback and download on the client side.
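The pass-through pattern can be sketched as an async stream of audio chunks that a route handler forwards as they arrive. The `synthesize` generator below is a stand-in for the ElevenLabs stream, not the real client:

```ts
// Stand-in for the provider's audio stream: yields fake "audio"
// chunks instead of real bytes from ElevenLabs.
async function* synthesize(text: string): AsyncGenerator<Uint8Array> {
  for (const word of text.split(" ")) {
    yield new TextEncoder().encode(word);
  }
}

// Consume the streamed chunks. A real handler would pipe them into
// the HTTP response rather than buffering; nothing touches the DB.
async function collect(stream: AsyncGenerator<Uint8Array>): Promise<number> {
  let bytes = 0;
  for await (const chunk of stream) {
    bytes += chunk.length;
  }
  return bytes;
}
```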
## Structure
The Text-to-Speech feature is organized across the monorepo for maximum flexibility and maintainability:
### Core
The shared TTS logic lives in `@workspace/ai-tts`, implemented in `packages/ai/tts/src`:
* Validation schemas and constants for TTS options
* The ElevenLabs client wrapper
* Voice mapping utilities and streamed text-to-speech generation
### API
The `packages/api` package wires the TTS app through `packages/api/src/modules/ai/tts.ts`.
That module validates the text-to-speech payload, applies shared middleware like authentication, rate limiting, and credits, and then delegates to `@workspace/ai-tts`, which fetches voices and streams generated audio from ElevenLabs back to the client.
### Web
The [Next.js](https://nextjs.org/) application (`apps/web`) provides the user interface:
* `src/app/[locale]/(apps)/tts/**`: route entry points for the TTS app
* `src/modules/tts/**`: feature modules for the composer, voice selector, settings controls, playback, and visualizer UI
### Mobile
The [Expo](https://expo.dev/)/[React Native](https://reactnative.dev/) application (`apps/mobile`) provides the native mobile experience:
* `src/app/(apps)/tts/**`: route entry points for the mobile TTS app
* `src/modules/tts/**`: mobile-native modules for composing and playing speech
* **API interaction**: uses the same shared Hono client as the web app for consistent communication with the backend
This architecture ensures perfect consistency between platforms while allowing for optimized UI implementations tailored to each environment.
---
url: /ai/docs/voice
title: Voice
description: Build a LiveKit-powered voice assistant for web and mobile, run it locally, and deploy the agent with the repository's existing build pipeline.
---
The [Voice](https://ai.turbostarter.dev/voice) app is the most real-time part of TurboStarter AI. Instead of a request-response UI, it gives users a shared audio session with an agent that can listen, reason, speak back, and stream conversation state across web and mobile.
## Capabilities
[LiveKit](https://livekit.com/) gives this app more than a microphone button. It provides the realtime transport, room lifecycle, participant state, and media controls that make the experience feel like a proper call interface instead of a chat form with audio attached.
LiveKit is the realtime platform that underpins the voice app: it transports audio and video streams between the client and the server, tracks session state, and controls the media tracks. It powers millions of real-time voice and video sessions every day, including ChatGPT's [Voice mode](https://chatgpt.com/features/voice/).
### Web
On web, the app leans into the full browser surface area. The desktop experience is especially good for demos, internal copilots, sales assistants, and any workflow where screen sharing matters.
Users can start a session, interrupt naturally, and follow the conversation
through a live transcript while the agent is speaking or listening.
The web client exposes microphone, camera, and screen-share controls, which
makes it useful for support, onboarding, and collaborative assistant flows.
The visualizer is customizable, and the layout adapts when transcript,
camera, or screen-share tiles are active.
Web is the fastest place to test prompts, media permissions, interruptions,
and room behavior while you are iterating on the agent.

### Mobile
On mobile, the same LiveKit room and agent stack is presented through a native-first session layout. The experience is optimized for touch controls, safe areas, and audio-session handling on real devices.
The mobile app manages the underlying audio session so the voice experience
behaves like a real call instead of a fragile media demo.
Users can keep the conversation voice-first while still opening transcript
and chat surfaces when they want more control or visibility.
The mobile UI also supports microphone, camera, and screen-share controls,
with a second media tile shown when visual tracks are active.
Both apps use the same voice backend and request lifecycle, while keeping
the interaction model natural on phones and tablets.

### Shared infrastructure
Under the UI differences, the architecture stays consistent across platforms. Both apps request a room token from the shared API layer, join a LiveKit room, and then hand the live session off to a LiveKit agent worker.
The web and mobile welcome screens trigger the same shared voice route
module in `packages/api/src/modules/ai/voice.ts`, so auth, credits, and
token creation stay centralized.
The LiveKit token logic, environment handling, agent entrypoint, and
deployment assets all live in `packages/ai/voice/src`.
Session state, media tracks, transcript messages, and agent events are
streamed over LiveKit instead of the text-streaming path used by the chat
apps.
## Architecture
The voice app still uses the same monorepo boundaries as the rest of the [AI product](/ai/docs/architecture). The difference is that the backend work happens around room creation and agent sessions rather than around a single streamed HTTP response.
1. The user starts a [session](https://docs.livekit.io/agents/logic/sessions/) from the platform-specific voice UI in `src/modules/voice/**`.
2. The shared API route module in `packages/api/src/modules/ai/voice.ts` runs through the normal request lifecycle, including auth context and credit checks.
3. The token-creation logic in `packages/ai/voice/src/api.ts` creates a [short-lived LiveKit participant token](https://docs.livekit.io/frontends/reference/tokens-grants/) and [room](https://docs.livekit.io/reference/other/roomservice-api/) configuration.
4. The web and mobile `SessionProvider` implementations use LiveKit's token source helpers to join the room.
5. The LiveKit agent worker defined in `packages/ai/voice/src/agent/main.ts` joins that room and drives the conversation.
This keeps the app aligned with the rest of the starter while still giving voice its own realtime transport and deployment model.
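To make the token step concrete: a participant token is essentially an identity plus a set of room-scoped grants with a short TTL. The real token is a signed JWT typically built with LiveKit's server SDK; the shape below is a simplified illustration, not the actual implementation in `packages/ai/voice/src/api.ts`:

```ts
// Simplified illustration of the grants a short-lived LiveKit
// participant token carries. Field names are assumptions.
interface VoiceTokenGrants {
  identity: string; // the joining participant
  room: string; // the room the token is scoped to
  canPublish: boolean; // allow mic/camera tracks
  canSubscribe: boolean; // allow hearing the agent
  expiresAt: number; // epoch seconds; keep tokens short-lived
}

function createVoiceGrants(
  userId: string,
  room: string,
  ttlSeconds = 600,
): VoiceTokenGrants {
  return {
    identity: userId,
    room,
    canPublish: true,
    canSubscribe: true,
    expiresAt: Math.floor(Date.now() / 1000) + ttlSeconds,
  };
}
```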
## Voice architecture
The most useful mental model for this app is the classic voice pipeline: speech comes in, gets transcribed, passed to an LLM, and then rendered back to audio. LiveKit Agents supports that pattern directly, and it is the right baseline to understand before looking at realtime speech-to-speech models.
This is the recommended architecture to understand first because it gives
you the most flexibility. You can mix providers for transcription,
reasoning, and voice quality instead of accepting one provider's full stack.
```ts
import { AgentSession } from "@livekit/agents";
const session = new AgentSession({
stt: "deepgram/nova-3:multi",
llm: "openai/gpt-4.1-mini",
tts: "cartesia/sonic-3:voice-id",
});
```
In the repository, this full pipeline is already scaffolded in
`packages/ai/voice/src/agent/main.ts`, along with turn detection, VAD, and
noise-cancellation hooks.
Realtime models are a strong alternative when you want a more tightly
coupled speech-to-speech experience with fewer moving pieces in the app
layer.
```ts
import { AgentSession } from "@livekit/agents";
import * as openai from "@livekit/agents-plugin-openai";
const session = new AgentSession({
llm: new openai.realtime.RealtimeModel({ voice: "cedar" }),
});
```
This pattern is also supported by LiveKit and is useful when you want
latency and expressiveness from a single realtime model. It is not the only
option, though, and you can switch between the two approaches as your
product needs evolve.
LiveKit's own docs present both approaches side by side, which is a helpful way to reason about tradeoffs: pipeline mode gives you finer provider control, while realtime mode gives you a more tightly integrated speech experience. See the [Voice AI quickstart](https://docs.livekit.io/agents/start/voice-ai/), [turn handling guide](https://docs.livekit.io/agents/logic/turns/), and [realtime models overview](https://docs.livekit.io/agents/models/realtime/).
### STT and TTS provider options
One of the strengths of LiveKit Agents is that you are not locked into a single speech stack. You can mix and match providers based on latency, language coverage, cost, and voice quality.
| Capability | Common choices | Notes |
| ------------------------- | ----------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| STT | Deepgram, AssemblyAI, OpenAI, Google Cloud, Speechmatics | Deepgram is a common starting point for low-latency multilingual transcription, but LiveKit supports a wider plugin ecosystem. |
| TTS | Cartesia, ElevenLabs, Deepgram, OpenAI, Google Cloud, Rime | Cartesia and ElevenLabs are common picks when voice quality is the main product differentiator. |
| Realtime speech-to-speech | OpenAI Realtime, Gemini Live, xAI Grok Voice, Amazon Nova Sonic, Phonic | Realtime models reduce app-layer composition but change how you think about control and provider mix. |
If you are exploring the space, start with [STT models](https://docs.livekit.io/agents/models/stt/), [TTS models](https://docs.livekit.io/agents/models/tts/), and [realtime models](https://docs.livekit.io/agents/models/realtime/). For product-specific guidance inside this docs set, see [Speech](/ai/docs/speech), [Transcription](/ai/docs/transcription), [OpenAI](/ai/docs/providers/openai), and [ElevenLabs](/ai/docs/providers/eleven-labs).
## Create a project and set environment variables
The repository already knows how to build and run the voice agent, but you still need a [LiveKit project](https://livekit.com/) to point it at. You can use [LiveKit Cloud](https://cloud.livekit.io/) for the smoothest path, or [run the LiveKit server locally](https://docs.livekit.io/transport/self-hosting/local/) when you want full control during development.
LiveKit Cloud is the easiest way to get from code to a working agent. It
gives you hosted transport, agent deployment, observability, and the cloud
dashboard in one place.
Create a project in the [LiveKit Cloud dashboard](https://cloud.livekit.io/).
Install the LiveKit CLI and link it to your account:
```bash
brew install livekit-cli
lk cloud auth
```
Add the LiveKit credentials to `apps/web/.env.local`, because the
`@workspace/ai-voice` package scripts load that file by default:
```bash title="apps/web/.env.local"
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your-livekit-api-key
LIVEKIT_API_SECRET=your-livekit-api-secret
# Optional: voice-model providers
OPENAI_API_KEY=your-openai-api-key
DEEPGRAM_API_KEY=your-deepgram-api-key
CARTESIA_API_KEY=your-cartesia-api-key
ELEVENLABS_API_KEY=your-elevenlabs-api-key
```
You do not need every provider key on day one. Add only the providers your
chosen voice pipeline uses. The most important local prerequisite is simply
having the `lk` CLI installed and available on your `PATH`, because the
repository deploy script shells out to it directly.
Local LiveKit is useful when you want to test the room transport yourself
or develop without relying on a cloud project. It is separate from the
repository's own `docker-compose.yml`, which only starts Postgres.
Install the LiveKit server locally:
```bash
brew update && brew install livekit
```
Start the server in dev mode:
```bash
livekit-server --dev
```
Point your env file at the local instance. The local dev server uses the
default `devkey` and `secret` credentials:
```bash title="apps/web/.env.local"
LIVEKIT_URL=ws://127.0.0.1:7880
LIVEKIT_API_KEY=devkey
LIVEKIT_API_SECRET=secret
```
If you want to connect from another device (e.g. mobile phone) on your network, use
`livekit-server --dev --bind 0.0.0.0` and replace `127.0.0.1` with your
machine's LAN IP in the client-facing URL.
The starter already includes Docker-based build assets for the agent itself,
but it does not currently spin up a LiveKit server through Docker Compose for
you. Treat LiveKit transport as a separate dependency from Postgres and the
rest of the local services.
## Run the agent locally
Once your environment variables are in place, local development is straightforward. The easiest path is to let Turbo orchestrate the voice package task graph for you, because `packages/ai/voice/turbo.json` already declares `dev -> download-files -> build`.
From the `ai` repository root, you can run the voice worker through Turbo and
let it handle the build and pre-download steps automatically:
```bash
pnpm with-env turbo dev --filter=@workspace/ai-voice
```
This is the recommended command for day-to-day development. The
step-by-step commands below are still useful when you want to understand what
happens under the hood or run each piece manually.
If you are already working in the full monorepo, `pnpm dev` from the repository root is also a valid path. The root script runs `pnpm with-env turbo dev`, so the voice package can be started as part of the wider development graph alongside the web and mobile apps.
Install dependencies for the monorepo:
```bash
pnpm install
```
Build the package first if you want to run the pieces manually:
```bash
pnpm --filter @workspace/ai-voice build
```
This runs TypeScript compilation for `@workspace/ai-voice` and produces the
`dist` output used by the worker's production-style scripts.
Optionally pre-download local files such as VAD-related assets:
```bash
pnpm --filter @workspace/ai-voice download-files
```
This boots the built agent in a special download mode so it can fetch any
local runtime assets it needs ahead of time. In practice, this is where
model helpers such as Silero VAD assets can be warmed up before you enter a
live session.
Run the agent in development mode:
```bash
pnpm --filter @workspace/ai-voice dev
```
This loads `apps/web/.env.local`, starts the LiveKit agent entrypoint in
development mode, prewarms the VAD, and waits for room jobs from your
local or cloud LiveKit project.
In a separate terminal, run the app itself:
```bash
pnpm dev
```
This starts the rest of the product surface so you can actually join the
room from the web app, mobile app, or any other connected client.
The package also includes `pnpm --filter @workspace/ai-voice connect` and `pnpm --filter @workspace/ai-voice start` for alternative LiveKit agent startup modes. If you prefer the Turbo task graph for those flows too, `start` is also wired to depend on `download-files` in `packages/ai/voice/turbo.json`. For the bigger picture on these modes, see LiveKit's [voice quickstart](https://docs.livekit.io/agents/start/voice-ai/).
## Dashboard and playground
LiveKit Cloud gives you two especially useful surfaces while building. The dashboard is your operational control plane, and the playground is your fastest browser-based testing surface when you do not want to open the full app.
### Dashboard
The [LiveKit Cloud dashboard](https://cloud.livekit.io/) is where you create projects, manage API keys, inspect agent deployments, and review operational signals after deployment.
Create projects, generate credentials, and manage the values that end up in
your local env files or cloud secrets.
Review deployment status, session counts, errors, limits, and other agent
health signals from one place.
Use the dashboard's runtime and build logs when a deployment starts failing,
cold starts become visible, or a model provider is misconfigured.

### Playground
The Agents Playground is useful when you want to verify the agent itself before you involve the full product UI. It is especially handy while tuning prompts, testing interruptions, or validating a new STT/TTS provider combination.
You can use the playground against a locally running agent in `dev` mode or a deployed agent in LiveKit Cloud. LiveKit covers this flow in the [Voice AI quickstart](https://docs.livekit.io/agents/start/voice-ai/).

## Deployment
The repository already contains the agent build and staging flow, so deployment is less about writing Docker logic and more about understanding what the existing scripts are doing for you.
### `deploy` command
The main entrypoint is the package-level deploy script in `packages/ai/voice/package.json`. It stages a deployment workspace and then hands that staged directory off to the LiveKit CLI.
```bash
pnpm --filter @workspace/ai-voice run deploy
```
### What the staging script does
The file `packages/ai/voice/src/deployment/stage-agent-deploy.ts` prepares a clean `.lk-stage-ai-voice` workspace at the repo root. That staging step keeps the agent deployment isolated from the rest of the monorepo while still letting the Docker build reuse workspace packages.
It copies:
* the root `package.json`
* `pnpm-lock.yaml` and `pnpm-workspace.yaml`
* the `packages` and `tooling` directories
* the agent-specific deployment `Dockerfile`
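Conceptually, the staging step is a handful of copies into a clean directory. The sketch below illustrates the idea only; the function name and path list are simplified stand-ins, and the real logic lives in `packages/ai/voice/src/deployment/stage-agent-deploy.ts`:

```typescript
// Illustrative sketch, not the actual staging script.
import { cpSync, mkdirSync, rmSync } from "node:fs";
import { join } from "node:path";

// Files and directories the Docker build needs from the monorepo root.
const STAGED_PATHS = [
  "package.json",
  "pnpm-lock.yaml",
  "pnpm-workspace.yaml",
  "packages",
  "tooling",
];

function stageDeployWorkspace(repoRoot: string, stageDir: string) {
  // Start from a clean slate so stale files never leak into the image.
  rmSync(stageDir, { recursive: true, force: true });
  mkdirSync(stageDir, { recursive: true });
  for (const entry of STAGED_PATHS) {
    cpSync(join(repoRoot, entry), join(stageDir, entry), { recursive: true });
  }
}
```

The point of the isolation is that the LiveKit CLI and Docker build only ever see the staged directory, never the rest of the monorepo.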
### How the container build works
The Dockerfile in `packages/ai/voice/src/deployment/Dockerfile` is already set up for the agent. It installs dependencies for `@workspace/ai-voice`, builds the package, pre-downloads model files, and then starts the built worker with the production `start` script.
That means the important work has already been encoded into the repository:
* dependency installation is workspace-aware
* the build only targets `@workspace/ai-voice`
* `download-files` runs during image build
* the final container boots directly into the LiveKit agent worker
### How LiveKit Cloud sees the deployment
After staging, the script runs `lk agent deploy` against `.lk-stage-ai-voice`. LiveKit Cloud then builds the image, stores deployment metadata, and exposes the resulting agent through the dashboard and playground surfaces.
If you are using a cloud deployment, keep provider credentials such as `OPENAI_API_KEY`, `DEEPGRAM_API_KEY`, `CARTESIA_API_KEY`, or `ELEVENLABS_API_KEY` in LiveKit Cloud secrets instead of baking them into the image. LiveKit documents this flow in its [deployment overview](https://docs.livekit.io/deploy/agents/).
### LiveKit project metadata
The staged workspace also contains a `livekit.toml` file once the CLI has linked that deployment to a specific LiveKit project and agent. Think of it as deployment metadata owned by the LiveKit toolchain rather than application code you should hand-edit frequently.
## Structure
The voice feature spans the shared agent package, the API layer, and the two frontends. Keeping these boundaries clear makes it much easier to change models or deployment strategy without rebuilding the UI from scratch.
### Core
The shared voice backend logic lives in `packages/ai/voice/src`. This is where token generation, environment handling, agent instructions, the agent entrypoint, and deployment assets live.
### API
The voice route wiring lives in `packages/api/src/modules/ai/voice.ts`. It sits inside the same Hono request pipeline as the other AI apps, so voice still benefits from shared auth, validation, and credit logic before the request reaches the LiveKit-specific code.
### Web
The web route entrypoints live in `apps/web/src/app/[locale]/(apps)/voice/**`, while the actual feature implementation lives in `apps/web/src/modules/voice/**`. That module tree contains the welcome screen, controls, transcript, session provider, settings, and visualizer components.
### Mobile
The mobile route entrypoints live in `apps/mobile/src/app/(apps)/voice/**`, while the feature logic lives in `apps/mobile/src/modules/voice/**`. That is where the mobile session provider, controls, transcript, video tile, chat composer, and animations are defined.
## Related documentation
Voice sits at the intersection of speech, transcription, provider choice, and realtime product design. These pages are the best next stop if you want to go deeper into one part of the stack.
## References
These are the best official LiveKit references to keep open while working on the voice app:
* [Voice AI quickstart](https://docs.livekit.io/agents/start/voice-ai/)
* [Turn handling](https://docs.livekit.io/agents/logic/turns/)
* [STT models](https://docs.livekit.io/agents/models/stt/)
* [TTS models](https://docs.livekit.io/agents/models/tts/)
* [Realtime models](https://docs.livekit.io/agents/models/realtime/)
* [Running LiveKit locally](https://docs.livekit.io/transport/self-hosting/local/)
* [Agent deployment overview](https://docs.livekit.io/deploy/agents/)
---
url: /ai/docs/embeddings
title: Embeddings
description: Understand embeddings, vector search, and semantic retrieval for modern AI apps, with practical RAG patterns, code examples, and TurboStarter AI references.
---
Embeddings let machines represent text as vectors, which makes meaning searchable. Instead of matching exact keywords, you can compare semantic similarity: "pricing page" and "billing plan" may be close together even if they share few words.
That is why embeddings are a core building block for search, recommendations, clustering, deduplication, and especially [RAG](/ai/docs/rag).
In many modern AI systems, embeddings are the bridge between raw content and
useful retrieval.
Semantic search, knowledge retrieval, document chat, duplicate detection,
recommendations, and content grouping.
The [Knowledge RAG app](/ai/docs/rag) uses embeddings to index uploaded PDFs
and retrieve relevant chunks before generating an answer.
Use embeddings when you need "similar meaning", not just "matching words".
## Mental model
Imagine every sentence in your system gets turned into a point in a very high-dimensional space. Sentences about similar ideas land near each other. Queries can be embedded too, and then compared against stored vectors.
That gives you a simple retrieval loop:
1. Split source content into chunks.
2. Turn each chunk into an embedding vector.
3. Store the vector alongside the original content.
4. Embed the user's query.
5. Retrieve the nearest chunks and pass them into a language model.
This pattern is the backbone of many retrieval-augmented systems.
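The loop above can be sketched end-to-end with in-memory vectors. The embedding call is replaced by a trivial stub here so the shape of the loop stays visible; a real system would call an embedding model and a vector store instead:

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0,
    normA = 0,
    normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

interface IndexedChunk {
  content: string;
  embedding: number[];
}

// Index: store each chunk next to its vector.
function indexChunks(
  chunks: string[],
  embed: (text: string) => number[],
): IndexedChunk[] {
  return chunks.map((content) => ({ content, embedding: embed(content) }));
}

// Retrieve: embed the query, rank stored chunks by similarity, take the top k.
function retrieve(
  query: string,
  index: IndexedChunk[],
  embed: (text: string) => number[],
  topK = 3,
): string[] {
  const queryEmbedding = embed(query);
  return [...index]
    .sort(
      (a, b) =>
        cosineSimilarity(b.embedding, queryEmbedding) -
        cosineSimilarity(a.embedding, queryEmbedding),
    )
    .slice(0, topK)
    .map((chunk) => chunk.content);
}
```

The retrieved chunks would then be passed to a language model together with the user's question.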
## What embeddings are not
It is just as helpful to understand the boundaries of embeddings as it is to understand their strengths. That keeps teams from expecting retrieval systems to behave like answer engines on their own.
Embeddings do not answer questions by themselves. They are for
representation and retrieval, not final responses.
Embeddings improve retrieval, but weak chunking, noisy source data, or poor
ranking can still produce bad context.
RAG is the most popular use case, but embeddings are also useful for search,
recommendations, classification pipelines, and analytics.
## Core concepts that matter
A few concepts account for most of the quality difference between a weak embeddings system and a strong one. These are the ideas worth learning first.
Long documents are typically split into smaller sections before embedding.
Chunk size and overlap shape retrieval quality more than many teams expect.
Once text is converted into vectors, you compare vectors with metrics like
cosine similarity or cosine distance to find the closest matches.
You need somewhere to store embeddings and query them efficiently. That can
be a vector database, or Postgres with `pgvector`, as used in TurboStarter
AI.
Retrieving more chunks increases your chance of finding the right one, but
also adds noise. Choosing the right threshold and top-k matters.
Retrieved chunks should be passed into a text-generation model with clear
instructions to answer from the supplied context.
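The chunking idea above can be illustrated with a minimal character-based splitter. Real pipelines usually split on sentences or tokens, often via a library such as LangChain's text splitters, but the overlap mechanic is the same:

```typescript
// Split text into fixed-size chunks with overlap, so content near a
// boundary still appears whole inside at least one chunk.
function chunkText(text: string, chunkSize: number, overlap: number): string[] {
  if (overlap >= chunkSize) {
    throw new Error("overlap must be smaller than chunkSize");
  }
  const chunks: string[] = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}
```

Tuning `chunkSize` and `overlap` against real queries is usually worth more than any other single retrieval tweak.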
## Common stack
A common production-friendly embeddings stack looks like this:
* [LangChain](https://js.langchain.com/) for PDF loading and text splitting
* the AI SDK for `embed` and `embedMany`
* Postgres with [`pgvector`](https://github.com/pgvector/pgvector) or a vector database for similarity search
That is the key production pattern: embed source chunks once, then embed each user query at request time.
If you want to see how this capability is used in the starter, [Knowledge RAG](/ai/docs/rag) is the best companion page.
## AI SDK example
The AI SDK gives you simple building blocks for both query-time embedding and batch indexing. Those two modes cover most real-world embeddings workflows.
```ts
import { openai } from "@ai-sdk/openai";
import { embed } from "ai";
const { embedding } = await embed({
  model: openai.embedding("text-embedding-3-small"),
  value: "How do I add AI chat to my SaaS app?",
});
console.log(embedding.length);
```
Use this when embedding a single query at request time.
```ts
import { openai } from "@ai-sdk/openai";
import { embedMany } from "ai";
const { embeddings } = await embedMany({
  model: openai.embedding("text-embedding-3-small"),
  values: [
    "TurboStarter supports AI chat.",
    "TurboStarter includes background jobs.",
    "TurboStarter ships with billing integrations.",
  ],
});
console.log(embeddings.length);
```
Use this when indexing documents, help center content, or product knowledge in bulk.
## Similarity search in plain language
Once you have vectors, you rank documents by "how close" they are to the query vector. In many systems, that means using cosine similarity or cosine distance and then selecting the top few chunks above some quality threshold.
This is one reason `pgvector` has become such a practical choice: many teams can add semantic retrieval to an existing Postgres-backed app without introducing a separate data system on day one.
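With `pgvector`, that ranking step maps directly onto SQL. A schematic fragment (the table and column names are illustrative; `<=>` is pgvector's cosine-distance operator, and the vector dimension must match your embedding model):

```sql
-- One-time setup: enable the extension and store vectors next to content.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE chunks (
  id bigserial PRIMARY KEY,
  content text NOT NULL,
  embedding vector(1536) -- 1536 matches text-embedding-3-small
);

-- Query time: rank by cosine distance to the query embedding, take the top 5.
SELECT content, 1 - (embedding <=> $1) AS similarity
FROM chunks
ORDER BY embedding <=> $1
LIMIT 5;
```

The `$1` parameter is the query embedding produced at request time, passed in from your application code.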
## Where teams usually go wrong
Embeddings are conceptually simple, but retrieval quality often breaks down in the implementation details. These are some of the most common failure points.
Large chunks blur topics together and make retrieval less precise. Smaller
overlapping chunks are often easier to retrieve well.
Navigation chrome, repeated headers, or noisy boilerplate can pollute
retrieval quality.
Returning low-similarity chunks can hurt answer quality more than returning
fewer chunks.
Retrieved context still needs a generation step that explains, compares, or
answers in a user-friendly way.
## When to use embeddings
This quick comparison helps separate problems that benefit from semantic retrieval from problems that are better solved with plain generation or deterministic logic.
| Problem | Use embeddings? | Why |
| ------------------------------------ | --------------- | ------------------------------------------------------------------ |
| Find docs related to a user question | Yes | Semantic similarity is usually better than keyword matching alone. |
| Answer questions from uploaded PDFs | Yes | Embeddings help retrieve relevant chunks before generation. |
| Write a product announcement | Probably not | That is primarily a text generation problem. |
| Compute an exact invoice total | No | This is deterministic logic, not semantic retrieval. |
## Useful references
These references are a good next step if you want to understand both the practical implementation side and the research ideas behind modern embeddings systems.
* [AI SDK embeddings docs](https://ai-sdk.dev/docs/ai-sdk-core/embeddings)
* [LangChain text splitters](https://js.langchain.com/docs/concepts/text_splitters/)
* [pgvector](https://github.com/pgvector/pgvector)
* [TurboStarter AI RAG docs](/ai/docs/rag)
* [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401)
* [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
---
url: /ai/docs/generating-text
title: Generating text
description: Learn how modern AI text generation works, when to use it, how to stream and structure outputs, and where it appears in TurboStarter AI.
---
Text generation is the foundation of most AI products. It powers chatbots, writing copilots, search assistants, structured extraction, summarization, classification, and agent-style workflows.
What changes from product to product is not whether you are "using text generation", but how you shape the input, what context you supply, how you constrain the output, and what happens after the model responds.
Chat replies, summaries, rewrites, labels, outlines, SQL, JSON, and
multi-step tool decisions all start as text generation tasks.
See it in the [Chat app](/ai/docs/chat), [Knowledge RAG app](/ai/docs/rag),
and provider guides like [OpenAI](/ai/docs/providers/openai) or
[Anthropic](/ai/docs/providers/anthropic).
Use text generation when the result should be language-first: explain,
answer, transform, compare, classify, or draft.
## Overview
At a practical level, text generation means asking a model to continue or complete a task in natural language. The model can work from:
* a single prompt
* a chat history
* retrieved context from your database or documents
* tool results from external systems
* structured instructions that constrain the output format
That makes text generation much broader than "write me a paragraph". A production system might generate:
* a customer support answer grounded in your docs
* a product description rewritten in your brand voice
* a JSON object for downstream automation
* a step-by-step plan before invoking tools
* a streaming response that feels interactive in the UI
Most AI apps are just text generation plus constraints: context, formatting,
tools, memory, and UI.
## Common patterns
Most text generation features fall into a small number of recurring patterns. Picking the right one early helps you avoid overengineering or forcing every use case into a chat-shaped UI.
The simplest pattern. Best for copywriting, rewriting, tagging, and one-off
generation jobs.
The standard chat pattern. Best when users expect conversational
back-and-forth and low perceived latency.
Used in RAG systems. The model answers from external documents instead of
relying only on its training data.
Best when another system needs to consume the result reliably, for example
JSON, enums, or extracted fields.
Best for assistants that need search, databases, calculators, or third-party
APIs before they respond.
Useful for reports, content pipelines, and background tasks where a
synchronous response is not the best UX.
## How to design good text generation features
Strong text features usually come from good product framing, not just better prompts. These design choices tend to matter most once you move beyond toy demos.
Define what the user is trying to accomplish. "Answer a question from
uploaded PDFs" leads to a very different architecture than "draft a
marketing email" or "extract fields from invoices".
Streaming improves perceived speed and feels much better for chat, drafting,
and long answers. For tiny background transformations, a single final
response is often enough.
Inject only the context the model needs: user input, system instructions,
retrieved documents, tool results, or account metadata. Too little context
hurts accuracy. Too much hurts relevance and cost.
If you need reliable downstream behavior, ask for structured output or
validate the result after generation. Free-form prose is great for UX, but
brittle for automation.
Plan for rate limits, partial streaming, empty answers, hallucinations, and
provider outages. Strong AI products handle these gracefully instead of
pretending the model never fails.
## AI SDK examples
These examples show the two most common starting points. One is best for one-shot tasks, while the other is better when you want the response to feel alive in the UI.
```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
const { text } = await generateText({
  model: openai("gpt-5"),
  prompt: "Summarize this feature request in 3 concise bullet points.",
});
console.log(text);
```
This pattern is ideal for short tasks like summarization, rewriting, extraction, and internal automations.
```ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
const { textStream } = streamText({
  model: openai("gpt-5"),
  prompt: "Draft a launch announcement for a new AI image editor.",
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}
```
Streaming is the better fit for chat UIs, copilots, and any experience where responsiveness matters.
## Model selection in practice
Most real products do not treat text generation as a single-model feature. They choose different models depending on the task: a faster one for chat, a cheaper one for background jobs, or a more capable one for harder reasoning-heavy requests.
That is a good production pattern to learn from:
* keep provider wiring in one place
* keep product logic separate from provider choice
* add middleware around models for logging, billing, safety, or localization
If you want to see how that idea shows up in this docs set, start with [Chat](/ai/docs/chat), then compare the provider pages like [OpenAI](/ai/docs/providers/openai), [Anthropic](/ai/docs/providers/anthropic), and [Google AI](/ai/docs/providers/google).
## Beginner mistakes to avoid
Many early text-generation features fail for predictable reasons. These are some of the most common traps when teams move from experimentation to real product work.
Better prompts help, but product quality usually improves more from better
context, better constraints, and better retrieval than from prompt tweaks
alone.
The best model for fast chat is not always the best one for extraction,
planning, or background jobs.
If the task matters, compare prompts, models, and outputs against real
examples instead of relying on intuition.
## Related documentation
This capability shows up in several parts of the AI docs because it is the base layer for many other features. These pages are the best next stop if you want to see it in more applied contexts.
## When to use it
Text generation is powerful, but it should not be stretched to solve every AI problem on its own. The most reliable products know when to pair it with retrieval, tools, or another modality.
Drafting, rewriting, summarizing, extracting, classifying, and answering
from provided context are usually text-generation-first problems.
If answers need to come from your documents, tickets, database records, or
knowledge base, pair generation with embeddings and retrieval.
If the model must search the web, create records, call APIs, or execute
workflows, add tool calling instead of hoping the model can infer the
answer.
If the output should be an image, audio file, or transcription, move to a
modality-specific capability like [Image
generation](/ai/docs/image-generation) or [Speech](/ai/docs/speech).
## Practical quality checklist
Before shipping a text feature, it helps to pressure-test the basics. A small checklist like this often catches the issues that matter most in production.
* Write instructions that are explicit about tone, scope, and success criteria.
* Give the model the minimum context needed to answer well.
* Prefer streaming for user-facing experiences that may take more than a moment.
* Validate or post-process outputs if another system depends on them.
* Log prompt inputs, model choice, latency, and failures so you can improve the feature over time.
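The validation point above is worth making concrete. A minimal shape for guarding a model's JSON output before another system consumes it (the field names here are illustrative, not from the starter):

```typescript
interface TicketLabel {
  category: string;
  priority: "low" | "medium" | "high";
}

// Parse and validate free-form model output before any downstream system
// sees it. Returns null instead of throwing, so callers can retry or
// fall back to a default.
function parseTicketLabel(raw: string): TicketLabel | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null; // model returned prose or malformed JSON
  }
  if (typeof data !== "object" || data === null) return null;
  const { category, priority } = data as Record<string, unknown>;
  if (typeof category !== "string") return null;
  if (priority !== "low" && priority !== "medium" && priority !== "high") {
    return null;
  }
  return { category, priority };
}
```

Libraries like zod make this pattern more ergonomic, but the principle is the same: never hand raw model output to automation without a validation step in between.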
## Learn more
If you want to go deeper, these references cover both practical implementation and the broader ideas that shaped modern text-generation workflows.
* [AI SDK text generation docs](https://ai-sdk.dev/docs/reference/ai-sdk-core/generate-text)
* [AI SDK streaming guide](https://ai-sdk.dev/docs/foundations/streaming)
* [Prompt engineering guide by OpenAI](https://platform.openai.com/docs/guides/prompt-engineering)
* [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
* [Chain-of-Thought Prompting Elicits Reasoning in Large Language Models](https://arxiv.org/abs/2201.11903)
* [TurboStarter AI Chat docs](/ai/docs/chat)
* [TurboStarter AI RAG docs](/ai/docs/rag)
---
url: /ai/docs/image-generation
title: Image generation
description: Explore modern AI image generation, from prompt-to-image workflows to model selection, output control, and production-ready patterns in TurboStarter AI.
---
Image generation turns natural-language prompts into visual outputs. It is one of the most visible AI capabilities today, but good product design matters just as much as model quality: prompt structure, aspect ratio, moderation, iteration loops, and asset storage all shape the final experience.
Concept art, product mockups, marketing visuals, avatar creation,
thumbnails, moodboards, and creative exploration.
See the full flow in the [Image playground](/ai/docs/image), including
prompting, aspect ratios, history, and stored generations.
Use image generation when the user wants a new visual artifact, not just a
text description of one.
## Overview
Modern image models can synthesize original visuals from prompts such as:
> Editorial-style portrait of a founder in a minimalist office
> Landing page illustration for a logistics startup, isometric, blue-orange palette
> Packaging mockup for a premium matcha brand, studio lighting
The output is shaped by more than the prompt alone. Common controls include:
* model choice
* aspect ratio
* image count
* style direction
* quality and latency tradeoffs
* post-processing or storage workflow
The first result is often a direction, not the final asset. Strong products
make it easy to adjust prompts, regenerate, compare versions, and save the
best outcome.
## Common product patterns
Most image-generation products reuse a few familiar interaction models. Understanding these patterns makes it easier to decide whether you are building a creative playground, a workflow tool, or something more structured.
The classic interface: write a prompt, choose a model, generate one or more
images, then iterate.
Great for internal tools. Users fill in fields like subject, brand, mood,
and aspect ratio instead of writing a raw prompt.
Generate supporting visuals as part of a CMS, campaign builder, ecommerce
flow, or design review process.
Produce several candidates at once so users can choose the best direction
before refining.
Save generated files to object storage, keep metadata in your database, and
expose a history UI for future reuse.
Especially important for brand-sensitive or customer-facing content where
style, safety, and consistency matter.
## Design considerations
Image generation looks simple on the surface, but a lot of the product quality comes from a few design choices made early. These are the places where teams usually win or lose usability.
Users often need help describing composition, mood, style, subject, and
framing. Good defaults, examples, and prompt templates usually improve
results more than adding more settings.
Some models are better for speed, others for photorealism, branding,
illustration, or experimentation. Let the product goal drive the default
model.
If generated assets matter after the first render, you will likely want
object storage, metadata persistence, and a history browser.
Image generation can produce copyrighted, unsafe, or off-brand results.
Decide what to block, what to review, and what to log.
Designers and marketers rarely accept the first output. Build for fast
retries, prompt edits, and version comparison from the start.
## AI SDK example
This is the basic prompt-to-image shape used in many modern apps. In practice, you would usually wrap this in your own server flow for auth, moderation, and storage.
```ts
import { replicate } from "@ai-sdk/replicate";
import { generateImage } from "ai";
import { writeFile } from "node:fs/promises";
const { image } = await generateImage({
  model: replicate.image("black-forest-labs/flux-schnell"),
  prompt:
    "A cinematic product photo of a matte-black mechanical keyboard on a walnut desk",
  aspectRatio: "16:9",
});

await writeFile("keyboard.webp", image.uint8Array);
```
This is the core pattern behind most image features: choose a provider, pass a prompt plus image-specific options, then display or store the result.
## Choosing image models in practice
Most image products benefit from keeping the UI separate from the underlying provider choice. That makes it much easier to swap defaults, compare providers, or expose different quality and speed tiers without redesigning the experience.
As a general rule:
* pick one default model for the common path
* expose only the settings users can understand
* add more providers only when they create a clear product advantage
If you want implementation-oriented follow-up, the best companion pages are [Image playground](/ai/docs/image), [OpenAI](/ai/docs/providers/openai), [Google AI](/ai/docs/providers/google), and [Replicate](/ai/docs/providers/replicate).
## Prompt engineering
Prompt quality has an outsized effect on image results, especially for new users. A small amount of structure often produces much more usable outputs than an open-ended prompt box.
```txt
make a landing page illustration for a startup
```
This leaves too much unspecified, so the model has to guess style, composition, tone, and format.
```txt
Create an isometric landing page illustration for a B2B logistics startup.
Use a clean SaaS visual style, blue and orange accents, soft shadows,
warehouse and route motifs, and leave negative space for headline text.
Aspect ratio 16:9.
```
This gives the model clearer constraints around subject, style, composition, color, and layout intent.
## Related documentation
If you want to see how image generation turns into a real product experience, these pages are the best follow-up. They connect the capability itself to concrete provider and app-level guidance.
## A simple architecture for production use
Most production image pipelines follow a fairly predictable sequence. The details vary, but the shape below is a good baseline for designing a reliable system.
Collect the prompt and image options from the client. Keep the UI focused on
a few controls users actually understand.
Route the request through your server so provider keys stay private and you
can add validation, auth, billing, and moderation.
Generate the image with your selected provider and model.
Store the asset and metadata if the result should be reusable later.
Return the image plus enough metadata for history, auditing, and future
iteration.
## Practical quality checklist
This short checklist helps keep an image feature useful and manageable once real users start generating assets at scale.
* Offer prompt examples so users are not starting from a blank box.
* Keep the number of settings small unless your audience is highly technical.
* Store prompt, model, aspect ratio, and timestamps alongside the generated asset.
* Add moderation and error states early, not after launch.
* Make regeneration and side-by-side comparison fast and obvious.
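The metadata point above can be captured in a small record shape plus a history store. This in-memory version is a stand-in for a database table, with illustrative field names:

```typescript
interface GenerationRecord {
  id: number;
  prompt: string;
  model: string;
  aspectRatio: string;
  assetUrl: string;
  createdAt: Date;
}

// In-memory stand-in for a database table of past generations.
class GenerationHistory {
  private records: GenerationRecord[] = [];
  private nextId = 1;

  save(record: Omit<GenerationRecord, "id" | "createdAt">): GenerationRecord {
    const saved = { ...record, id: this.nextId++, createdAt: new Date() };
    this.records.push(saved);
    return saved;
  }

  // Newest first, so a history UI shows recent work at the top.
  list(): GenerationRecord[] {
    return [...this.records].reverse();
  }
}
```

Persisting this record alongside the stored asset is what makes regeneration, auditing, and side-by-side comparison possible later.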
## Research and background
If you want more context on how current image systems work and how they are evaluated, these references are a strong place to start.
* [AI SDK image generation docs](https://ai-sdk.dev/docs/ai-sdk-core/image-generation)
* [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)
* [DALL·E 3 system card](https://cdn.openai.com/papers/DALL_E_3_System_Card.pdf)
## Learn more
These companion pages are the most useful next step if you want to move from general understanding to provider setup and app-level implementation.
* [TurboStarter AI Image playground docs](/ai/docs/image)
* [OpenAI provider docs](/ai/docs/providers/openai)
* [Google AI provider docs](/ai/docs/providers/google)
* [Replicate provider docs](/ai/docs/providers/replicate)
---
url: /ai/docs/mcp
title: Model Context Protocol (MCP)
description: Learn what MCP is, how it standardizes tool and context access for AI systems, and when to use it instead of building custom integrations.
---
Model Context Protocol, or MCP, is an emerging standard for connecting models to external tools, data sources, and execution environments. It gives assistants a common way to discover capabilities instead of relying on one-off custom integrations for every app.
If tool calling answers the question "can the model use a function?", MCP answers a broader one: "how do we expose tools and context to models in a consistent, portable way?"
## Overview
Without a standard, every AI integration tends to invent its own tool format, auth flow, transport, and discovery mechanism. That makes ecosystems fragmented and difficult to reuse.
MCP creates a shared protocol for:
* listing available tools and resources
* describing what those tools do
* validating inputs and outputs
* connecting over supported transports
* letting clients and assistants interact with those capabilities in a uniform way
## Why this matters
MCP is important because it shifts AI integration from bespoke glue code toward reusable interfaces. That makes it easier to plug assistants into IDEs, local tools, databases, internal systems, or SaaS platforms without redesigning the entire integration each time.
MCP gives models and clients a common contract for tools, resources, and
structured interactions.
The same MCP server can potentially be used by multiple clients instead of
being tightly coupled to one product.
MCP fits naturally alongside [Tool calling](/ai/docs/tool-calling),
[Generating text](/ai/docs/generating-text), and assistant-style
[Chat](/ai/docs/chat) experiences.
## MCP vs regular tool calling
These concepts are related, but they are not identical. Tool calling is the model behavior. MCP is one way to provide tools and context in a standardized format.
| Concept | What it focuses on | Typical question |
| ------------ | ----------------------------------------------- | ----------------------------------------------------------- |
| Tool calling | Letting the model invoke external capabilities | "Can the model call this function?" |
| MCP | Standardizing how tools and context are exposed | "How should these capabilities be described and connected?" |
## A simple mental model
You can think of MCP as a protocol layer between AI clients and the systems they want to use. Instead of every client speaking a different dialect, MCP gives them a shared language.
That usually means three actors:
An MCP server exposes tools, resources, or prompts.
An MCP client connects to that server and discovers what is available.
A model-enabled app uses those capabilities through the client during
generation.
## AI SDK example
The AI SDK has support for working with MCP clients and feeding discovered tools into generation. This example shows the general shape without tying it to any specific internal product logic.
```ts
import { createMCPClient } from "@ai-sdk/mcp";
import { Experimental_StdioMCPTransport } from "@ai-sdk/mcp/mcp-stdio";
import { generateText, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
const transport = new Experimental_StdioMCPTransport({
  command: "node",
  args: ["./server.js"],
});

const client = await createMCPClient({ transport });
const tools = await client.tools();

const result = await generateText({
  model: openai("gpt-4o"),
  tools,
  prompt: "Find products under $100 and summarize the best options.",
  stopWhen: stepCountIs(5),
});

await client.close();
```
The important idea is that the model does not need hardcoded knowledge of each capability. It can discover and use tools through a consistent protocol.
## When MCP is a good fit
MCP is not mandatory for every project. It shines when you want interoperability, reuse, and a cleaner separation between AI clients and backend capabilities.
* **Good fit**: you want multiple AI clients to share the same tool surface, or you want to expose capabilities in a more standardized way.
* **Probably unnecessary**: you only need one or two internal tools in a single app and a direct tool-calling setup is simpler.
* **Typical homes**: IDE assistants, internal copilots, local tooling, multi-client ecosystems, and platforms that want plug-in style extensibility.
## Design considerations
Even with a protocol, good interface design still matters. MCP does not remove the need for careful tool and resource design.
* Whether exposed through MCP or not, tools still need clear descriptions, good schemas, and predictable outputs.
* Standardized access does not mean unrestricted access. Different tools may require different auth, scoping, or approval flows.
* MCP works best when clients can rely on consistent tool names, schemas, and behavior over time.
* Let MCP handle the interface layer, while your actual domain logic stays behind well-defined services.
## Where MCP fits in a modern AI stack
MCP is easiest to understand when you place it in the bigger picture. It is not a replacement for models, retrieval, or prompting. It is a way to connect them to external capability surfaces.
* MCP can supply the tools that the model chooses to call during generation.
* An MCP server can expose resources or search interfaces that help the model get better context.
* IDE copilots, chat assistants, and agent-like systems can all benefit from a standardized integration layer.
## Common misconceptions
MCP is powerful, but it helps to be clear about what it does and does not solve. That keeps teams from overcomplicating their architecture too early.
| Misconception | Better framing |
| ------------------------------------- | ------------------------------------------------------------------------------------------------ |
| "MCP replaces tool calling." | MCP is one standardized way to provide tools and context to a model. |
| "MCP automatically makes tools safe." | Safety still depends on auth, validation, permissions, and execution policy. |
| "Every AI app needs MCP." | Many apps can start with direct tools and adopt MCP later if interoperability becomes important. |
## Related documentation
If you are learning this capability for the first time, the most useful follow-up is to pair it with tool calling. MCP becomes much easier to reason about when you already understand how models use tools in practice.
## Learn more
These references are the best next stop if you want to understand both the protocol and how it plugs into modern AI tooling.
* [Model Context Protocol](https://modelcontextprotocol.io/)
* [AI SDK MCP tools cookbook](https://ai-sdk.dev/cookbook/node/mcp-tools)
* [AI SDK tool calling docs](https://ai-sdk.dev/docs/ai-sdk-core/tools-and-tool-calling)
* [TurboStarter AI Tool calling docs](/ai/docs/tool-calling)
---
url: /ai/docs/providers/anthropic
title: Anthropic
description: Learn when Anthropic is a strong choice, what Claude models are best at, and how to use Anthropic for reasoning-heavy assistants and high-quality writing.
---
Anthropic is a strong choice when your product leans heavily on assistant-style interaction, nuanced writing, and deeper reasoning-heavy workflows. Claude models are especially popular in products that need thoughtful long-form output, careful tool use, and reliable conversation quality.
If OpenAI often wins on breadth, Anthropic often wins with teams that care most about the feel and quality of the assistant itself.

## Why choose Anthropic
Anthropic tends to be most attractive for text-first products where answer quality, reasoning style, and assistant behavior matter more than broad multimodal coverage.
* Claude is a natural fit for products centered on chat, explanation, synthesis, and careful long-form responses.
* Anthropic is often used in assistants that need to reason through steps before choosing a tool or producing a final answer.
* See [Generating text](/ai/docs/generating-text), [Reasoning](/ai/docs/reasoning), [Tool calling](/ai/docs/tool-calling), and [Chat](/ai/docs/chat).
## Setup
Anthropic setup is simple in most AI SDK projects. You mainly need an API key and a clear choice about where Claude should fit in your provider mix.
Create an API key in the [Anthropic Console](https://console.anthropic.com/).
Add it to your environment:
```bash title=".env"
ANTHROPIC_API_KEY=your-api-key
```
Use the Anthropic provider in the AI SDK and choose the Claude model that fits your task and latency budget.
## Best fit
Claude is usually most compelling in products that feel more like an assistant than a pure model backend. It is often chosen for quality-sensitive text work rather than breadth across every modality.
* Strong fit for complex questions, analytical conversations, and assistant-style workflows that benefit from careful thinking.
* Useful for explanations, rewriting, summarization, planning, and structured reasoning over complex inputs.
* Good fit when the model needs to reason before choosing or sequencing external tools.
* Relevant when you want text workflows that also incorporate image understanding or mixed-input reasoning.
## AI SDK example
This example shows the basic Anthropic integration pattern through the AI SDK. In practice, teams often compare Claude against other providers for tasks like chat quality, summarization, and planning.
```ts
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const { text } = await generateText({
  model: anthropic("claude-sonnet-4-5"),
  prompt: "Summarize the tradeoffs of adding RAG to a support assistant.",
});
```
This is a good mental model for Anthropic: it is often chosen when the product needs a strong general text-and-assistant engine more than a huge list of modalities.
## Related documentation
Anthropic is most relevant in the parts of the docs where assistant quality and structured thinking matter. These pages are the best follow-up if that is your main interest.
## When to compare alternatives
Anthropic is a strong provider, but not every product needs what it is best at. Sometimes a broader or more specialized provider will create a better overall fit.
| If you care most about... | You may also want to compare |
| --------------------------------------------- | ----------------------------------------- |
| Broad multimodal coverage in one ecosystem | [OpenAI](/ai/docs/providers/openai) |
| Google-native multimodal and Gemini workflows | [Google AI](/ai/docs/providers/google) |
| Open-source image model access | [Replicate](/ai/docs/providers/replicate) |
## Learn more
These resources are the best next step if you want to go from high-level provider selection to implementation.
* [Anthropic](https://www.anthropic.com)
* [Anthropic documentation](https://docs.anthropic.com)
* [AI SDK Anthropic provider docs](https://sdk.vercel.ai/providers/ai-sdk-providers/anthropic)
---
url: /ai/docs/providers/deepseek
title: DeepSeek
description: Learn when DeepSeek is a strong option for text and reasoning workloads, how it fits into provider comparisons, and where it makes sense in modern AI products.
---
DeepSeek is most often evaluated for text-heavy and reasoning-oriented workloads, especially when teams want another serious option beyond the more commonly used default providers. It is especially relevant in products centered on chat, analysis, and tool-enabled assistants.
For many teams, DeepSeek is not the only provider in the stack. It is a provider worth comparing when quality, reasoning behavior, and cost sensitivity all matter at once.

## Why choose DeepSeek
DeepSeek is often attractive when you want a strong text-and-reasoning option in a multi-provider product. It is less about modality breadth and more about fit for language-heavy tasks.
* DeepSeek is commonly evaluated for analytical, explanation-heavy, and reasoning-sensitive product flows.
* It is most relevant in chat, summarization, planning, coding support, and assistant-style workflows.
* See [Generating text](/ai/docs/generating-text), [Reasoning](/ai/docs/reasoning), [Tool calling](/ai/docs/tool-calling), and [Chat](/ai/docs/chat).
## Setup
DeepSeek setup is similar to most AI SDK-backed providers. The main implementation questions are usually model selection and where it belongs in your provider mix.
Create an API key on the [DeepSeek platform](https://platform.deepseek.com/).
Add it to your environment:
```bash title=".env"
DEEPSEEK_API_KEY=your-api-key
```
Use the DeepSeek provider in the AI SDK and compare it against the other text-generation providers in your product.
## Best fit
DeepSeek is usually a text-and-reasoning decision rather than a broad multimodal-platform decision. That makes it easier to position inside the rest of the docs.
* Relevant when you want another strong text-generation provider in a conversational product.
* Worth evaluating for analysis, planning, and other tasks where model behavior under more difficult prompts matters.
* Useful in systems where text generation and tool use work together to complete multi-step tasks.
* Often compared when teams want to balance quality and operational cost across more than one provider.
## AI SDK example
This example shows the basic DeepSeek integration shape. In practice, teams often compare it directly against OpenAI, Anthropic, or xAI for the same product flow.
```ts
import { generateText } from "ai";
import { deepseek } from "@ai-sdk/deepseek";

const { text } = await generateText({
  model: deepseek("deepseek-chat"),
  prompt:
    "Explain how a support assistant could use RAG and tool calling together.",
});
```
This is the right way to think about DeepSeek in most products: a text- and reasoning-oriented provider you evaluate where those traits matter most.
## Related documentation
DeepSeek maps most naturally to the text-heavy and assistant-oriented parts of the docs. These pages are the best follow-up if you want to place it in a real product context.
## When to compare alternatives
DeepSeek is strong in its lane, but if you need a wider modality surface or a more unified ecosystem, another provider may be a better starting point.
| If you care most about... | You may also want to compare |
| -------------------------------------------- | ----------------------------------------- |
| Broad multimodal and audio coverage | [OpenAI](/ai/docs/providers/openai) |
| Assistant-style writing and Claude workflows | [Anthropic](/ai/docs/providers/anthropic) |
| Gemini and richer multimodal file workflows | [Google AI](/ai/docs/providers/google) |
## Learn more
These references are the best next step if you want to go deeper into DeepSeek-specific setup and implementation.
* [DeepSeek](https://www.deepseek.com/)
* [DeepSeek Platform](https://platform.deepseek.com/)
* [AI SDK DeepSeek provider docs](https://sdk.vercel.ai/providers/ai-sdk-providers/deepseek)
---
url: /ai/docs/providers/eleven-labs
title: ElevenLabs
description: Learn when ElevenLabs is the right choice for speech-first products, including text-to-speech, transcription, voice cloning, and richer audio workflows.
---
ElevenLabs is best understood as a speech-first platform rather than a general-purpose text-model provider. It is especially relevant when your product needs realistic voice synthesis, transcription, voice cloning, or broader audio experiences.
That makes ElevenLabs a strong complement to the text- and multimodal-focused providers in the rest of this section. It is often added when audio quality is a product requirement rather than a nice-to-have.

## Why choose ElevenLabs
Teams usually pick ElevenLabs when speech quality, voice control, or audio-specific product UX matters more than using one provider for every modality.
* ElevenLabs is a natural fit when the product centers on TTS, STT, voice cloning, or richer audio experiences.
* It is especially attractive when voice realism and perceived quality are central to the product, not just an extra feature.
* See [Speech](/ai/docs/speech), [Transcription](/ai/docs/transcription), [Text to Speech](/ai/docs/tts), and [Voice](/ai/docs/voice).
## Setup
ElevenLabs is typically integrated through its own SDKs and APIs rather than through the AI SDK core. In most projects, setup is mainly about getting a key and deciding which audio capabilities belong in your product.
Generate an API key in the [ElevenLabs dashboard](https://elevenlabs.io/).
Add it to your environment:
```bash title=".env"
ELEVENLABS_API_KEY=your-api-key
```
Use the ElevenLabs SDK or API for the speech workflow you are building, such as TTS, STT, cloning, or conversational audio.
## Best fit
ElevenLabs is the most specialized provider in this section. It is most compelling when your product has an explicit audio surface rather than treating speech as a minor extra.
* A strong fit for narration, accessibility playback, spoken summaries, and any product where voice output quality matters.
* Useful for transcription, captions, voice input, and audio pipelines that feed into summarization or agents.
* Relevant when your product needs branded voices, character voices, or more customized audio identity.
* Worth evaluating when live or near-live conversational audio is a meaningful part of the user experience.
## SDK example
This example shows the basic pattern of creating a client and using it as the entry point for audio workflows. The specific method you call will depend on whether you are generating speech, transcribing, or working with another audio feature.
```ts
import { ElevenLabsClient } from "elevenlabs";

const client = new ElevenLabsClient({
  apiKey: process.env.ELEVENLABS_API_KEY,
});
```
```
The important design takeaway is that ElevenLabs is usually introduced when audio is important enough to deserve a dedicated provider strategy.
## Related documentation
ElevenLabs maps directly to the speech- and voice-oriented parts of the AI docs. These pages are the best follow-up if you want to see how the provider turns into product features.
## When to compare alternatives
ElevenLabs is excellent for audio, but that specialization also means it is usually one part of a broader stack rather than the only provider in the product.
| If you care most about... | You may also want to compare |
| ----------------------------------------------------------------- | ------------------------------------------------------------ |
| One provider covering text, embeddings, speech, and transcription | [OpenAI](/ai/docs/providers/openai) |
| Live conversational voice sessions | [Voice](/ai/docs/voice) and the broader real-time stack docs |
| Open-source image or niche model experimentation | [Replicate](/ai/docs/providers/replicate) |
## Learn more
These are the best next references if you want to move from provider overview into concrete audio implementation.
* [ElevenLabs](https://elevenlabs.io/)
* [ElevenLabs docs](https://elevenlabs.io/docs)
* [ElevenLabs quickstart](https://elevenlabs.io/docs/quickstart)
* [ElevenLabs API reference](https://elevenlabs.io/docs/api-reference/introduction)
---
url: /ai/docs/providers/google
title: Google AI
description: Learn when Google AI is a strong fit, how Gemini fits into modern AI products, and where Google works especially well for multimodal and retrieval-heavy workflows.
---
Google AI is most compelling when your product benefits from Gemini models, multimodal inputs, embeddings, and broader Google ecosystem familiarity. It is a strong option for teams building assistants that need to work across text, files, images, and retrieval-style workflows.
Google is often worth considering when you want more than pure chat. It becomes especially interesting in products that combine reasoning, files, search grounding, and multimodal interaction.

## Why choose Google AI
Google AI stands out when multimodal understanding and Gemini-specific workflows matter. It is often evaluated as a serious alternative to OpenAI for teams building richer input and grounding experiences.
* A strong fit for teams that specifically want Gemini models as the core of their AI product.
* Google is especially relevant when the product needs to work across text, files, images, and broader input types.
* See [Generating text](/ai/docs/generating-text), [Image generation](/ai/docs/image-generation), [Embeddings](/ai/docs/embeddings), and [Chat](/ai/docs/chat).
## Setup
Most projects start with a Google AI Studio key, though larger teams may eventually prefer Google Cloud-style credential flows depending on their architecture.
Create an API key in [Google AI Studio](https://aistudio.google.com/app/apikey).
Add it to your environment:
```bash title=".env"
GOOGLE_GENERATIVE_AI_API_KEY=your-api-key
```
Use the Google provider in the AI SDK and choose Gemini models that match your product's latency, reasoning, and modality needs.
## Best fit
Google tends to be most attractive in products that combine text generation with richer context or multimodal inputs. That makes it a practical option for assistants that go beyond plain conversation.
* Useful when the product needs to understand text, images, files, or mixed input sources in one workflow.
* Relevant for semantic search, retrieval, and knowledge-aware experiences.
* Valuable when you want answers that connect to search or other grounded information sources.
* Worth comparing when your product needs both text and image-oriented flows in a shared provider ecosystem.
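Under the hood, the semantic-search use case reduces to comparing embedding vectors. Below is a minimal sketch with hypothetical three-dimensional vectors; real Gemini embeddings have hundreds of dimensions and would come from the embeddings API rather than hardcoded values.

```typescript
// Cosine similarity: 1 means identical direction, near 0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Hypothetical embeddings; in practice these come from an embedding model.
const query = [0.9, 0.1, 0.0];
const docs: Record<string, number[]> = {
  "refund policy": [0.85, 0.2, 0.05],
  "api changelog": [0.05, 0.1, 0.95],
};

// Rank documents by similarity to the query vector.
const ranked = Object.entries(docs)
  .map(([title, vec]) => ({ title, score: cosineSimilarity(query, vec) }))
  .sort((x, y) => y.score - x.score);
```

A vector database performs the same comparison at scale, but the ranking logic is no more than this.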
## AI SDK example
This example shows the core Google AI SDK pattern through Gemini. The same provider can then extend into embeddings, multimodal input, or grounding-heavy workflows.
```ts
import { generateText } from "ai";
import { google } from "@ai-sdk/google";

const { text } = await generateText({
  model: google("gemini-2.5-flash"),
  prompt: "Explain how embeddings help a support-search product.",
});
```
```
This is a good default mental model for Google AI: a strong provider to evaluate when the product is more multimodal or context-rich than plain text generation.
## Related documentation
Google touches several parts of the AI docs because its strengths map cleanly to multiple capabilities. These are the best follow-up pages if you want to see those patterns in context.
## When to compare alternatives
Google is strong, but the best starting provider still depends on the product. In some cases, a provider with broader modality coverage or a more specialized ecosystem may be the better fit.
| If you care most about... | You may also want to compare |
| ------------------------------------------------------------------ | ----------------------------------------- |
| One provider for text, speech, transcription, and image generation | [OpenAI](/ai/docs/providers/openai) |
| Assistant-style writing and reasoning quality | [Anthropic](/ai/docs/providers/anthropic) |
| Open-source model experimentation | [Replicate](/ai/docs/providers/replicate) |
## Learn more
These references are the best next step if you want to go deeper into Google's provider surface and Gemini-specific implementation details.
* [Google AI](https://ai.google/)
* [Google AI Studio](https://aistudio.google.com/)
* [Google AI docs](https://ai.google.dev/docs)
* [AI SDK Google provider docs](https://sdk.vercel.ai/providers/ai-sdk-providers/google-generative-ai)
---
url: /ai/docs/providers
title: AI providers
description: Compare the main AI providers in the stack, understand what each is best at, and choose the right model platform for your product.
---
Providers are the model backends and capability platforms that actually power your AI features. Choosing the right one affects latency, cost, multimodal support, reasoning quality, tool support, and how much flexibility you have as your product evolves.
There is rarely one perfect provider for everything. Most strong AI products choose a default provider for the common path, then add others only when they create a clear product advantage.
## Overview
This section is meant to help you choose and understand providers, not just configure environment variables. Start with the providers that match your capability needs, then go deeper into setup and app-specific docs when you are ready to implement.
* OpenAI is a strong starting point if you want text, vision, speech, transcription, embeddings, and image generation in one ecosystem.
* Anthropic is a natural fit for high-quality writing, deep analysis, and assistant-style workflows with tool use.
* Google AI is especially relevant when you want Gemini, embeddings, file input, and broader multimodal experiences.
* Replicate is useful when you want a wide range of image and niche community models without hosting them yourself.
## Available providers
These pages cover the providers that make the most sense for the current AI section. Each one explains where the provider fits best rather than treating setup as the only thing that matters.
## Provider selection
Provider choice is usually easier when you anchor it in the product problem instead of the model hype cycle. This quick comparison is a good starting point.
| If you need... | A good starting page |
| -------------------------------------------- | -------------------------------------------- |
| One provider with broad modality coverage | [OpenAI](/ai/docs/providers/openai) |
| Strong writing and assistant-style reasoning | [Anthropic](/ai/docs/providers/anthropic) |
| Gemini and multimodal Google workflows | [Google AI](/ai/docs/providers/google) |
| Open-source image models and experimentation | [Replicate](/ai/docs/providers/replicate) |
| Speech-first product features | [ElevenLabs](/ai/docs/providers/eleven-labs) |
| Open-weight model flexibility | [Meta](/ai/docs/providers/meta) |
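One common way to keep this decision explicit in code is a small routing table from capability to a default provider and model. The entries below are illustrative placeholders, not recommendations baked into the starter:

```typescript
// Hypothetical default routing: map each capability to a provider + model ID.
// Adjust the entries to your own provider mix; nothing here is prescriptive.
type Capability = "chat" | "embeddings" | "image" | "speech";

const defaultRoute: Record<Capability, { provider: string; model: string }> = {
  chat: { provider: "openai", model: "gpt-5" },
  embeddings: { provider: "google", model: "gemini-embedding-001" },
  image: { provider: "replicate", model: "black-forest-labs/flux-schnell" },
  speech: { provider: "elevenlabs", model: "eleven_multilingual_v2" },
};

function routeFor(capability: Capability) {
  return defaultRoute[capability];
}
```

Centralizing the mapping means swapping a provider later is a one-line change instead of a hunt through feature code.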
## Related capabilities
Provider pages are most useful when read alongside the capability pages. That is where you can see how provider choice maps to actual product features.
---
url: /ai/docs/providers/meta
title: Meta
description: Learn when Meta's model ecosystem makes sense, how to think about open-weight hosting, and where Llama fits in modern AI product stacks.
---
Meta is different from the other providers in this section because its AI story is centered on open-weight models rather than a single hosted platform. In practice, that usually means accessing Llama through a third-party host such as DeepInfra, Fireworks, Bedrock, or another compatible provider.
That makes Meta especially interesting for teams that care about ecosystem choice, provider portability, or open-model strategy rather than a single managed API surface.

## Why choose Meta
Teams usually choose Meta's model ecosystem when they want more flexibility around hosting, pricing, model access, or open-model experimentation. It is less about one official vendor experience and more about keeping options open.
* Meta's open models are attractive when you want the option to choose from multiple hosts instead of depending on one provider platform.
* It is commonly evaluated for chat, generation, code assistance, and tool-using assistant scenarios.
* See [Generating text](/ai/docs/generating-text), [Tool calling](/ai/docs/tool-calling), and [Chat](/ai/docs/chat).
## Setup
Because Llama is usually hosted by third parties, setup starts by choosing a host rather than going directly to Meta. Your environment variables and model IDs then depend on that host.
Choose a hosting provider such as DeepInfra, Fireworks, or Amazon Bedrock.
Add the relevant credentials to your environment. For example:
```bash title=".env"
DEEPINFRA_API_KEY=your-api-key
# or
FIREWORKS_API_KEY=your-api-key
```
Use that host's AI SDK provider to access the Llama model that fits your product.
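Because the host determines both the credential and the exact model identifier, some teams centralize that mapping in one place. A sketch with hypothetical entries (model IDs vary by host and may change over time):

```typescript
// Hypothetical host-selection sketch: with open-weight models, the host
// determines both the credential and the exact model identifier.
type LlamaHost = "deepinfra" | "fireworks";

const hostConfig: Record<LlamaHost, { envVar: string; modelId: string }> = {
  deepinfra: {
    envVar: "DEEPINFRA_API_KEY",
    modelId: "meta-llama/Meta-Llama-3.1-8B-Instruct",
  },
  fireworks: {
    envVar: "FIREWORKS_API_KEY",
    modelId: "accounts/fireworks/models/llama-v3p1-8b-instruct",
  },
};

function configFor(host: LlamaHost) {
  return hostConfig[host];
}
```

Keeping this lookup in one module is what makes host portability a configuration change rather than a refactor.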
## Best fit
Meta is most interesting when you care about provider optionality, open-model ecosystems, or experimenting with different hosting paths while staying within familiar AI SDK patterns.
* A natural fit for text-first assistants, writing flows, and internal productivity tools.
* Often evaluated for coding assistants, explanation, and developer tooling depending on the specific hosted model.
* Relevant when you want open-weight model options for tool-calling and assistant-style systems.
* Useful when architecture or procurement constraints make host portability more important than using a single closed model platform.
## AI SDK example
This example shows the general idea using a hosted Meta model through a provider integration. The exact provider and model ID will vary based on the host you choose.
```ts
import { generateText } from "ai";
import { deepinfra } from "@ai-sdk/deepinfra";

const { text } = await generateText({
  model: deepinfra("meta-llama/Meta-Llama-3.1-8B-Instruct"),
  prompt: "Explain the benefits of using tool calling in a support assistant.",
});
```
```
The important thing to remember is that with Meta's open models, host choice is part of provider choice.
## Related documentation
Meta is mostly relevant in the text- and assistant-oriented parts of the docs. These pages are the best next stop if you want to understand where its models could fit into an end-user product.
## When to compare alternatives
Meta's ecosystem is flexible, but that does not automatically make it the best starting point. If you want a more unified, managed experience, another provider may get you moving faster.
| If you care most about... | You may also want to compare |
| ------------------------------------- | -------------------------------------------- |
| Broad managed capability coverage | [OpenAI](/ai/docs/providers/openai) |
| Assistant-style writing and reasoning | [Anthropic](/ai/docs/providers/anthropic) |
| Speech and audio workflows | [ElevenLabs](/ai/docs/providers/eleven-labs) |
## Learn more
These references are useful if you want to evaluate Meta's models through the hosts and provider surfaces that actually make them available in practice.
* [Meta AI](https://ai.meta.com/)
* [Llama](https://ai.meta.com/llama/)
* [AI SDK providers directory](https://sdk.vercel.ai/providers)
---
url: /ai/docs/providers/openai
title: OpenAI
description: Learn when to choose OpenAI, what capabilities it covers well, and how to set it up for text, image, speech, transcription, and embeddings.
---
OpenAI is one of the broadest general-purpose providers in the current AI ecosystem. It is often the simplest starting point when you want one provider that can cover chat, reasoning, vision, speech, transcription, embeddings, and image generation.
That breadth makes OpenAI especially useful for teams that want to move quickly without stitching together several providers on day one.

## Why choose OpenAI
OpenAI is usually the default pick when teams want strong coverage across multiple AI capabilities, mature tooling, and a straightforward path from prototype to production.
* OpenAI is relevant across text generation, image generation, transcription, speech, embeddings, tool calling, and multimodal apps.
* It is a practical choice when you want fewer moving parts and a single provider to support many product experiments.
* See [Generating text](/ai/docs/generating-text), [Image generation](/ai/docs/image-generation), [Embeddings](/ai/docs/embeddings), [Speech](/ai/docs/speech), and [Transcription](/ai/docs/transcription).
## Setup
Getting started with OpenAI is straightforward. In most projects, setup is mainly about generating a key, storing it in your environment, and choosing the right model for each task.
Create an API key in the [OpenAI API dashboard](https://platform.openai.com/api-keys).
Add it to your environment:
```bash title=".env"
OPENAI_API_KEY=your-api-key
```
Use the OpenAI provider through the AI SDK and choose the right model for the capability you are building.
## Best fit
OpenAI is most compelling when you want one provider that can support multiple product surfaces without switching ecosystems every time the feature changes.
* Strong fit for assistants, copilots, drafting, summarization, structured output, and many tool-using workflows.
* A practical choice for RAG, semantic search, clustering, and relevance-based workflows.
* Useful when you want text-to-speech or speech-to-text inside the same broader AI stack.
* Relevant when your product needs prompt-to-image flows in the same provider ecosystem as text and audio.
## AI SDK example
This example shows the simplest OpenAI text-generation shape through the AI SDK. The same provider can then extend into images, embeddings, or audio depending on the feature.
```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const { text } = await generateText({
  model: openai("gpt-5"),
  prompt: "Write a short product description for an AI meeting assistant.",
});
```
```
The main lesson is not the exact model name. It is that OpenAI is often chosen when a team wants one provider to support several product directions.
## Related documentation
OpenAI appears across several capability pages because it spans more than one mode of interaction. These are the best follow-up pages if you want to explore actual product patterns.
## When to compare alternatives
OpenAI is broad, but that does not mean it is always the best fit. In some products, a more specialized or cheaper provider may be the better starting point.
| If you care most about... | You may also want to compare |
| ------------------------------------------- | -------------------------------------------- |
| Claude-style writing and assistant behavior | [Anthropic](/ai/docs/providers/anthropic) |
| Gemini and Google multimodal workflows | [Google AI](/ai/docs/providers/google) |
| Open-source image ecosystem access | [Replicate](/ai/docs/providers/replicate) |
| Dedicated voice and audio workflows | [ElevenLabs](/ai/docs/providers/eleven-labs) |
## Learn more
These are the most useful next references if you want to move from provider overview to implementation details.
* [OpenAI Platform](https://openai.com/)
* [OpenAI API docs](https://platform.openai.com/docs)
* [AI SDK OpenAI provider docs](https://sdk.vercel.ai/providers/ai-sdk-providers/openai)
---
url: /ai/docs/providers/replicate
title: Replicate
description: Learn when Replicate is the right choice, how it fits open-source model workflows, and why it is especially useful for image-heavy AI products.
---
Replicate is different from the frontier-model platforms in this section because it is primarily a model-hosting ecosystem. It is especially useful when you want access to a wide range of open-source and specialized models without managing the infrastructure yourself.
That makes Replicate one of the most practical choices for teams building image products or experimenting with niche models that are not available through the larger general-purpose providers.

## Why choose Replicate
Replicate is usually chosen for model diversity rather than for being the single provider for an entire AI stack. It shines when experimentation, image workflows, or specialized model access matter.
* Replicate gives teams cloud access to a large catalog of community and specialized models without self-hosting them.
* It is especially useful in image-generation workflows where model variety matters more than staying inside one closed provider ecosystem.
* See [Image generation](/ai/docs/image-generation), [Image playground](/ai/docs/image), and [Speech](/ai/docs/speech) if you are exploring broader model experimentation.
## Setup
Replicate setup is simple and usually starts with a single API token. The bigger product decision is which models to expose and how much provider-specific configuration you want to surface in the UI.
Generate a token in your [Replicate account settings](https://replicate.com/).
Add it to your environment:
```bash title=".env"
REPLICATE_API_TOKEN=your-api-key
```
Use the Replicate provider in the AI SDK and select the model that matches the job you are solving.
## Best fit
Replicate is best thought of as a gateway to model variety. It becomes attractive when a product needs more experimentation or niche capability than a single default provider offers.
* The clearest fit: Replicate is especially useful when your product depends on image models with different styles, tradeoffs, or specialties.
* Useful when you want to test a narrower model for a specific task instead of relying only on one general-purpose provider.
* A good addition when your stack already has a main text provider but you want broader model choice for other modes.
* Helpful when the team wants to compare several hosted models before deciding which one deserves a deeper integration.
## AI SDK example
This example shows the basic Replicate image-generation pattern through the AI SDK. It captures the main reason most teams add Replicate in the first place.
```ts
import { experimental_generateImage as generateImage } from "ai";
import { replicate } from "@ai-sdk/replicate";
const { image } = await generateImage({
model: replicate.image("black-forest-labs/flux-schnell"),
prompt: "A clean SaaS dashboard hero illustration in blue and orange",
aspectRatio: "16:9",
});
```
The main lesson here is that Replicate is often the right answer when model variety matters as much as model quality.
## Related documentation
Replicate connects most directly to the image-oriented parts of the AI docs. These pages are the best follow-up if that is the product surface you care about most.
## When to compare alternatives
Replicate is powerful, but not every product needs a large model catalog. If a unified provider experience matters more than model breadth, another choice may be simpler.
| If you care most about... | You may also want to compare |
| --------------------------------------------------- | -------------------------------------------- |
| One provider for text, image, audio, and embeddings | [OpenAI](/ai/docs/providers/openai) |
| Gemini and broader multimodal workflows | [Google AI](/ai/docs/providers/google) |
| Speech-first product surfaces | [ElevenLabs](/ai/docs/providers/eleven-labs) |
## Learn more
These references are the best next step if you want provider-specific setup details or want to browse the model ecosystem directly.
* [Replicate](https://replicate.com)
* [Replicate docs](https://replicate.com/docs)
* [AI SDK Replicate provider docs](https://sdk.vercel.ai/providers/ai-sdk-providers/replicate)
---
url: /ai/docs/providers/xai
title: xAI Grok
description: Learn when xAI is a useful provider choice, how Grok fits into chat and multimodal workflows, and where to compare it against other model platforms.
---
xAI is most relevant when you want to evaluate Grok models as part of a modern AI product stack. It is typically considered for chat, reasoning-flavored interaction, tool-enabled assistants, and selected multimodal workflows.
For most teams, xAI is not the first provider they integrate, but it can be a worthwhile comparison point when model behavior and provider diversity matter.

## Why choose xAI
xAI tends to matter most when a team wants to compare Grok against other frontier-style model providers rather than committing immediately to a single default ecosystem.
* xAI is often evaluated alongside OpenAI, Anthropic, and Google for conversational and assistant-style product behavior.
* Depending on the product surface, xAI may also be relevant for image-related or richer multimodal workflows.
* See [Generating text](/ai/docs/generating-text), [Tool calling](/ai/docs/tool-calling), [Reasoning](/ai/docs/reasoning), and [Chat](/ai/docs/chat).
## Setup
xAI setup is similar to most AI SDK-backed providers: generate a key, store it securely, and choose the Grok model that fits your task.
Create an API key from the [xAI platform](https://x.ai).
Add it to your environment:
```bash title=".env"
XAI_API_KEY=your-api-key
```
Use the xAI provider in the AI SDK and compare Grok models against the other providers in your stack.
## Best fit
xAI is usually best understood as part of a provider comparison set. It becomes useful when your product needs another strong option for chat, tool use, or model diversity rather than a specialized niche capability.
* Relevant for chat and assistant flows where you want to evaluate Grok's interaction style against other providers.
* Worth comparing for assistant scenarios where external tools or system integrations are part of the experience.
* Depending on the task, xAI can be part of the set you compare for deeper multi-step responses.
* In some products, xAI may also be relevant where text and image generation live in the same provider evaluation set.
## AI SDK example
This example shows the basic xAI integration pattern in the AI SDK. In practice, teams usually use it as one option inside a broader provider comparison strategy.
```ts
import { generateText } from "ai";
import { xai } from "@ai-sdk/xai";
const { text } = await generateText({
model: xai("grok-3-mini-fast"),
prompt: "Summarize the risks of adding too many tools to an AI assistant.",
});
```
This is the right mental model for xAI in product work: one provider in a broader frontier-model toolbox, not necessarily the only backend in the system.
## Related documentation
The most useful way to explore xAI in these docs is through the capability pages where provider tradeoffs are most visible. These pages are the best next follow-up.
## When to compare alternatives
xAI can be valuable, but most teams will still want to compare it against more established defaults before making it the primary provider in a product.
| If you care most about... | You may also want to compare |
| -------------------------------------- | ----------------------------------------- |
| Broad managed capability coverage | [OpenAI](/ai/docs/providers/openai) |
| Assistant-style writing quality | [Anthropic](/ai/docs/providers/anthropic) |
| Gemini and Google multimodal ecosystem | [Google AI](/ai/docs/providers/google) |
## Learn more
These links are the best next stop if you want provider-specific implementation details.
* [xAI](https://x.ai)
* [AI SDK xAI provider docs](https://sdk.vercel.ai/providers/ai-sdk-providers/xai)
---
url: /ai/docs/reasoning
title: Reasoning
description: Learn when reasoning-capable AI models help, how to use them responsibly, and how TurboStarter AI exposes reasoning in chat experiences.
---
Reasoning models are designed for tasks that benefit from deeper multi-step thinking: planning, comparison, synthesis, troubleshooting, tool selection, and decisions that are too brittle for a fast one-shot answer.
That does not mean you should turn reasoning on for everything. In practice, reasoning is a tradeoff between quality, latency, and cost.
* Good candidates: complex questions, ambiguous requests, code analysis, planning, and tasks that require multiple intermediate steps.
* The [Chat app](/ai/docs/chat) supports reasoning-capable models and can surface reasoning-related usage in the UI.
* Use reasoning when the task is hard enough that extra deliberation is likely to improve the answer.
## Overview
For most teams, reasoning is not about exposing private chain-of-thought. It is about choosing models and settings that spend more effort on:
* decomposing a problem
* checking assumptions
* comparing alternatives
* working through constraints
* deciding which tool or strategy to use next
That often leads to better outcomes for difficult tasks, but with higher latency and sometimes higher cost.
Reasoning is extra thinking budget for hard tasks, not a feature to enable
blindly across your whole app.
## Use cases
Reasoning is most valuable when the task actually benefits from extra deliberation. This quick comparison helps separate genuinely reasoning-heavy work from tasks that are better handled by faster models.
| Task | Use reasoning? | Why |
| -------------------------------------------------- | -------------- | ---------------------------------------------------------------- |
| Debug a production incident from logs and symptoms | Yes | The model needs to compare hypotheses and work through evidence. |
| Summarize a short meeting note | Usually no | A fast model is often enough. |
| Plan a migration with constraints and tradeoffs | Yes | This benefits from deeper structured thinking. |
| Rewrite a paragraph in a friendlier tone | No | This is mainly a generation task, not a reasoning-heavy one. |
## How to think about reasoning in UX
Reasoning is not just a model setting. It also changes the experience of using the product, especially around latency, confidence, and how much internal process you expose to the user.
* What users often want is confidence, not raw internal deliberation. Summaries, cited evidence, and clear conclusions are usually better UX than dumping intermediate reasoning.
* If a response takes longer, the UI should communicate why: a thinking indicator, staged streaming, or a clear "analyzing" state helps set expectations.
* Applying reasoning only to hard requests is often a better product choice than enabling it globally.
* A model that thinks longer can still hallucinate. Retrieval, tools, validation, and citations still matter.
## Product patterns
In many AI products, reasoning is primarily a chat or assistant concern. A typical implementation:
* supports reasoning-capable chat models
* passes provider-specific reasoning options when the user enables reasoning
* streams reasoning-aware responses into the chat UI
* tracks reasoning token usage separately
That is a strong pattern for production systems: if reasoning has a cost profile, you should measure it explicitly.
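Measuring that cost profile can start with a small accumulator. The sketch below is illustrative: the `UsageSample` shape and `summarizeUsage` helper are assumptions for this example, not TurboStarter APIs, though providers that support reasoning often report reasoning tokens alongside regular usage.

```typescript
// Hypothetical per-request usage sample; the reasoning count is optional
// because not every provider or model reports it.
interface UsageSample {
  inputTokens: number;
  outputTokens: number;
  reasoningTokens?: number;
}

// Accumulate reasoning usage separately so its share of spend stays visible.
function summarizeUsage(samples: UsageSample[]) {
  const totals = { input: 0, output: 0, reasoning: 0 };
  for (const s of samples) {
    totals.input += s.inputTokens;
    totals.output += s.outputTokens;
    totals.reasoning += s.reasoningTokens ?? 0;
  }
  // Fraction of generated tokens spent on deliberation rather than output.
  const reasoningShare =
    totals.output > 0
      ? totals.reasoning / (totals.output + totals.reasoning)
      : 0;
  return { ...totals, reasoningShare };
}
```

Feeding every response's usage into something like this makes it easy to see whether reasoning is a rounding error or a dominant cost.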
If you want to see where this capability shows up in this docs set, start with [Chat](/ai/docs/chat), then compare it with [Generating text](/ai/docs/generating-text) and [Tool calling](/ai/docs/tool-calling).
## AI SDK usage pattern
Provider support varies, but the core idea is to pass reasoning-related options through provider configuration when the task warrants it.
```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
const result = await generateText({
model: openai("gpt-5"),
prompt:
"Compare two migration strategies for moving a SaaS app to a monorepo.",
providerOptions: {
openai: {
reasoningEffort: "medium",
},
},
});
```
The important idea is not the exact option name. It is the product decision behind it: harder tasks may justify a slower, more deliberate model run.
## Decision framework
If you are unsure whether reasoning belongs in a feature, a lightweight decision process usually helps. This keeps reasoning intentional instead of becoming the default for every request.
1. Ask whether the task is genuinely multi-step or ambiguity-heavy.
2. Decide whether better reasoning quality is worth extra latency and cost.
3. Add retrieval or tools if the task needs outside information or actions.
4. Surface the answer in a user-friendly way, with evidence or a concise reasoning summary when helpful.
5. Track usage, latency, and success rates so you know whether reasoning is paying off.
## When not to use reasoning
Some tasks feel complex, but the real answer is not "add more reasoning". In many cases, speed, deterministic logic, or better context will matter more.
* Simple rewrites, summaries, formatting tasks, and classification are often better served by fast models without extra reasoning overhead.
* Taxes, permissions, billing rules, and policy enforcement should be encoded in software, not delegated to model reasoning.
* If the model lacks the right documents, tool results, or system state, more reasoning alone will not fix it.
## Related capabilities
Reasoning rarely stands alone. It is usually layered on top of other capabilities that provide context, actions, or the final user-facing output.
## Useful references
These resources are helpful if you want to understand both the practical product tradeoffs and the research conversation around reasoning in language models.
* [Anthropic's article on extended thinking](https://www.anthropic.com/engineering/claude-think-tool)
* [OpenAI prompting guide](https://platform.openai.com/docs/guides/prompt-engineering)
* [Chain-of-Thought Prompting Elicits Reasoning in Large Language Models](https://arxiv.org/abs/2201.11903)
* [Large Language Models are Zero-Shot Reasoners](https://arxiv.org/abs/2205.11916)
* [TurboStarter AI Chat docs](/ai/docs/chat)
---
url: /ai/docs/speech
title: Speech
description: Learn how AI speech synthesis works, when to use text-to-speech, and how to design natural voice experiences for apps, assistants, and accessibility features.
---
Speech synthesis turns text into audio that sounds spoken rather than written. It is the capability behind narration, voice assistants, accessibility playback, character voices, and many real-time conversational experiences.
In modern products, speech is rarely just "read this text out loud". The best experiences consider voice selection, latency, emotional tone, playback controls, and whether the audio is a one-off file or part of a live conversation.
## Overview
At its core, speech synthesis means taking text as input and producing audio as output. That audio can be generated as a single file, streamed in chunks, or produced in real time as part of an interactive voice system.
Most speech products involve some combination of:
* converting text into spoken audio
* selecting a voice or speaker profile
* controlling style, pace, or delivery
* streaming or downloading the result
* playing the result inside an app or assistant
## Where speech is useful
Speech is especially useful when reading is not the best interface. It adds reach, accessibility, and a stronger sense of presence than text alone.
* Accessibility playback, narration, reading assistants, virtual characters, voice UIs, and spoken summaries are all strong speech use cases.
* See [Text to Speech](/ai/docs/tts), [Voice](/ai/docs/voice), and the [ElevenLabs](/ai/docs/providers/eleven-labs) provider guide for more applied follow-up.
* If the user only needs silent, skimmable output, plain text is often faster, cheaper, and easier to control.
## Speech vs voice vs transcription
These terms are related, but they refer to different capabilities. Keeping them separate makes it easier to design the right system.
| Capability | Input | Output | Best for |
| ---------------- | -------------------- | ------------------------ | --------------------------------------------------- |
| Speech synthesis | Text | Audio | Narration, playback, spoken responses |
| Transcription | Audio | Text | Captions, notes, search, voice input |
| Real-time voice | Audio and text turns | Interactive conversation | Voice assistants, live agents, low-latency sessions |
## AI SDK example
The AI SDK includes a speech generation API for provider-backed text-to-speech flows. The example below shows the basic shape: choose a speech model, send text, and receive audio.
```ts
import { experimental_generateSpeech as generateSpeech } from "ai";
import { openai } from "@ai-sdk/openai";
const { audio } = await generateSpeech({
model: openai.speech("tts-1"),
text: "Hello from the AI SDK!",
voice: "alloy",
});
console.log(audio);
```
This is the foundation for many speech features. In a real app, you would usually stream or save the audio rather than just logging it.
## Common product patterns
Speech features tend to fall into a few common UX patterns. Choosing the right one depends on whether the user wants playback, interaction, or audio as a generated asset.
* The user enters text, chooses a voice, and plays or downloads the result. This is the most common TTS pattern.
* The app adds optional speech playback to written content such as articles, summaries, or onboarding instructions.
* A text or chat system generates the answer, then speech synthesis reads it aloud for a more immersive interaction.
* Speech synthesis is used as one piece of a live assistant flow alongside transcription, turn-taking, and session control.
## Design considerations
Speech can feel magical in demos, but product quality usually comes down to a few practical decisions. These are the areas worth thinking through up front.
* For narration, a short wait is fine. For assistants or live responses, latency has a much bigger effect on whether the interaction feels natural.
* The voice communicates brand, tone, and trust. A great voice for accessibility may not be the right one for a playful character or a support assistant.
* Speech quality alone is not enough. Users often need pause, replay, speed control, and download options.
* Text written for reading is not always pleasant to hear. Long sentences, timestamps, code, or URLs may need pre-processing before synthesis.
* If speech is streamed as it is generated, the product can feel much more alive, but error handling and buffering become more important.
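The pre-processing point is easy to act on. The sketch below is a minimal illustration; the function name and rules are assumptions for this example, not part of TurboStarter:

```typescript
// Strip constructs that sound bad when read aloud verbatim before
// sending text to a speech model.
function sanitizeForSpeech(text: string): string {
  return text
    // Drop fenced code blocks entirely; code is better skimmed than heard.
    .replace(/`{3}[\s\S]*?`{3}/g, " (code omitted) ")
    // Replace raw URLs with a short spoken placeholder.
    .replace(/https?:\/\/\S+/g, "a link")
    // Collapse the whitespace left behind by the removals.
    .replace(/\s+/g, " ")
    .trim();
}
```

Real products often go further: expanding abbreviations, splitting very long sentences, or rewriting timestamps into spoken form.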
## Beginner mistakes to avoid
Many weak speech features fail for predictable reasons. The model may be good, but the experience still feels awkward if the surrounding design is poor.
* Lists, links, code snippets, and long machine-written sentences often sound unnatural if sent directly to speech synthesis.
* Even high-quality audio becomes frustrating if users cannot pause, replay, or adjust speed.
* The same voice can feel warm, robotic, premium, or wrong depending on the product and audience.
* Some tasks are simply easier to scan in text than to hear in audio. Speech should add value, not just novelty.
## Related documentation
In this docs set, speech is best understood through the app and provider pages that turn the capability into concrete product flows. These are the best places to continue once you understand the fundamentals.
## A practical checklist
Before shipping a speech feature, it helps to test whether the output sounds good, behaves well, and actually improves the product rather than just adding novelty.
* Pick voices that match the product tone and audience.
* Clean up text before sending it to speech synthesis.
* Add playback controls early, not as an afterthought.
* Measure latency if the experience is interactive.
* Decide whether audio should be streamed, downloaded, or stored.
## Learn more
These references are useful if you want to go deeper into both implementation and product design for speech features.
* [AI SDK speech generation reference](https://ai-sdk.dev/docs/reference/ai-sdk-core/generate-speech)
* [TurboStarter AI Text to Speech docs](/ai/docs/tts)
* [TurboStarter AI Voice docs](/ai/docs/voice)
* [ElevenLabs Documentation](https://elevenlabs.io/docs)
---
url: /ai/docs/tool-calling
title: Tool calling
description: Learn how AI tool calling works, when to let models use external tools, and how to design reliable tool-based workflows in modern AI apps.
---
Tool calling lets a model do more than generate text. Instead of guessing, it can decide to call a weather API, search the web, look up data, run a calculation, or trigger a workflow, then use the result to continue the response.
This is one of the key shifts that turns a chatbot into an assistant. The model is no longer limited to what it remembers. It can interact with systems around it.
## Overview
At a high level, tool calling means giving the model a set of well-defined capabilities and letting it choose when to use them. Each tool has a name, a description, an input schema, and usually an execution function that runs in your app or backend.
That creates a loop like this:
1. The user asks for something that may require outside information or action.
2. The model decides whether a tool is needed.
3. The selected tool runs with validated input.
4. The tool result is returned to the model.
5. The model uses that result to continue or complete the answer.
## When tool calling is useful
Tool calling is most useful when the model needs access to fresh data, private data, or real-world actions. That is why it shows up so often in assistants, agents, dashboards, support tools, and internal automation.
* Web search, database lookup, CRM queries, order status checks, calculations, scheduling, and content retrieval are all strong tool-calling use cases.
* See related implementations in [Chat](/ai/docs/chat), [Knowledge RAG](/ai/docs/rag), and [MCP](/ai/docs/mcp).
* If the answer can be produced from the prompt and context alone, plain text generation is usually simpler and faster.
## Tool calling vs retrieval vs reasoning
These capabilities are often discussed together, but they solve different problems. Knowing the difference helps you design simpler systems.
| Capability | What it does | Best for |
| ---------------------- | ------------------------------------------------ | ----------------------------------------------------------------- |
| Text generation | Produces language output | Drafting, rewriting, summarizing, answering from provided context |
| Retrieval / embeddings | Finds relevant context | Search, RAG, semantic lookup |
| Tool calling | Lets the model use external functions or systems | Actions, real-time data, workflow orchestration |
| Reasoning | Gives the model more thinking budget | Multi-step planning, comparisons, hard decisions |
## AI SDK example
The AI SDK makes tool calling approachable by letting you define tools with a schema and an execution function. This keeps the interface model-friendly while still giving you runtime control.
```ts
import { generateText, stepCountIs, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
const result = await generateText({
model: openai("gpt-5"),
prompt: "What is the weather in Berlin today?",
tools: {
weather: tool({
description: "Get the weather in a location",
inputSchema: z.object({
location: z.string().describe("The city or place to check"),
}),
execute: async ({ location }) => {
return {
location,
temperature: "18C",
conditions: "Cloudy",
};
},
}),
},
stopWhen: stepCountIs(5),
});
```
This pattern is useful because the model gets a clear interface, and your app keeps control over validation, permissions, and execution.
## How to design good tools
The best tools are boringly clear. Models do better when tools are specific, narrow, and easy to distinguish from one another.
* A tool like `lookupCustomerOrder` is easier for the model to use correctly than a vague tool like `handleSupportTask`.
* Tool descriptions should explain when the tool should be used and what it returns. Ambiguity leads to incorrect tool selection.
* Input validation matters. A strong schema keeps tool calls predictable and prevents malformed inputs from leaking into the rest of your system.
* Tool responses should contain what the model needs to continue, not huge noisy payloads with every possible field.
* Some tools are read-only, while others create side effects. Treat write actions carefully and consider confirmations, auth, and audit logging.
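One way to treat write actions carefully is to wrap their execution with a policy check and an audit trail. This is a library-agnostic sketch; the names, the role model, and the synchronous signature are illustrative assumptions, not the TurboStarter implementation (real tool `execute` functions are usually async):

```typescript
interface GuardOptions {
  allowedRoles: string[];
  audit: (entry: string) => void;
}

// Wrap a side-effecting tool so every call is authorized and logged
// before the underlying execute function runs.
function guardWriteTool<I, O>(
  name: string,
  execute: (input: I) => O,
  options: GuardOptions,
) {
  return (input: I, ctx: { role: string }): O => {
    if (!options.allowedRoles.includes(ctx.role)) {
      options.audit(`denied ${name} for role ${ctx.role}`);
      throw new Error(`role ${ctx.role} may not call ${name}`);
    }
    options.audit(`allowed ${name} for role ${ctx.role}`);
    return execute(input);
  };
}
```

The useful property is that the model never sees the guard: it just calls the tool, and your app decides whether the side effect actually happens.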
## Common product patterns
Tool calling rarely exists by itself. In most products, it appears as part of a broader workflow or assistant experience.
* The model decides when to use search, fetches results, and then summarizes them for the user.
* The model looks up customer, billing, or product data across internal systems before answering.
* The model chains multiple tools together, such as search, retrieval, and summarization, to complete a multi-step task.
* The model does not just answer. It creates tickets, updates records, or triggers downstream automations after validation.
## Failure modes to plan for
Tool calling is powerful, but it introduces new operational risks. A good assistant is not just "smart"; it is predictable under failure.
* The model may choose a tool when none is needed, or choose the wrong one if descriptions overlap too much.
* Weak schemas or vague prompts can produce malformed parameters that break execution.
* Returning too much irrelevant data can make the final answer worse instead of better.
* Any tool that writes data, charges money, or changes system state should be protected with auth, policy checks, and confirmation flows where appropriate.
## Related documentation
In this docs set, tool calling connects naturally to several other capabilities. Those pages are the best place to see how it fits into end-user experiences.
## A practical checklist
Before shipping tool calling, make sure the system is understandable to both the model and your team. The simpler the contract, the more reliable the behavior.
* Keep tools small and well-scoped.
* Validate all tool inputs with schemas.
* Prefer read-only tools first, then add safe write actions later.
* Log tool selections and failures so you can evaluate behavior.
* Avoid exposing tools that overlap too much in purpose.
## Learn more
If you want to go deeper, these resources are the best next step. They cover both the practical API surface and the emerging design patterns around agent-like systems.
* [AI SDK tool calling docs](https://ai-sdk.dev/docs/ai-sdk-core/tools-and-tool-calling)
* [AI SDK chatbot tool usage guide](https://ai-sdk.dev/docs/ai-sdk-ui/chatbot-tool-usage)
* [OpenAI function calling guide](https://platform.openai.com/docs/guides/function-calling)
* [TurboStarter AI Chat docs](/ai/docs/chat)
* [TurboStarter AI RAG docs](/ai/docs/rag)
---
url: /ai/docs/transcription
title: Transcription
description: Learn how speech-to-text works, when to use AI transcription, and how to design transcription flows for notes, search, captions, and voice interfaces.
---
Transcription converts spoken audio into text. It is one of the most practical AI capabilities because it turns voice, meetings, recordings, and media into something searchable, editable, and usable by the rest of your product.
For many teams, transcription is the bridge between audio and everything else: summaries, captions, analytics, support workflows, notes, and voice-based input all start with getting the spoken words into text reliably.
## Overview
At a basic level, transcription means sending audio into a speech-to-text model and receiving text back. Some systems return only raw text, while others also provide timestamps, speaker information, segmentation, or confidence-style metadata.
That makes transcription useful well beyond simple dictation. A good transcription pipeline can support:
* meeting notes
* captions and subtitles
* search across audio and video
* voice input for assistants
* downstream summarization or extraction
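When a provider returns timestamped segments, captions follow almost directly. The `Segment` shape below is an assumption for this example (providers differ in field names), but the SRT output format itself is standard:

```typescript
// Hypothetical timestamped segment; many speech-to-text providers return
// something similar (start/end in seconds plus the recognized text).
interface Segment {
  start: number;
  end: number;
  text: string;
}

// Format seconds as an SRT timestamp: HH:MM:SS,mmm
function srtTime(seconds: number): string {
  const ms = Math.round(seconds * 1000);
  const h = Math.floor(ms / 3600000);
  const m = Math.floor((ms % 3600000) / 60000);
  const s = Math.floor((ms % 60000) / 1000);
  const rem = ms % 1000;
  const pad = (n: number, w = 2) => String(n).padStart(w, "0");
  return `${pad(h)}:${pad(m)}:${pad(s)},${pad(rem, 3)}`;
}

// Turn timestamped segments into an SRT subtitle document.
function toSrt(segments: Segment[]): string {
  return segments
    .map(
      (seg, i) =>
        `${i + 1}\n${srtTime(seg.start)} --> ${srtTime(seg.end)}\n${seg.text}`,
    )
    .join("\n\n");
}
```

The same segment data also powers clickable transcripts and synchronized playback, which is why timestamp support is worth checking before picking a provider.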
## When transcription is useful
Transcription is valuable any time spoken information needs to become searchable, shareable, or actionable in a text-first workflow. It is often the first step in a larger AI pipeline.
* Voice notes, meeting recordings, support calls, interview analysis, captioning, and voice-assistant input are strong transcription use cases.
* The closest companion pages are [Voice](/ai/docs/voice), [Speech](/ai/docs/speech), and [Generating text](/ai/docs/generating-text).
* A transcript captures what was said, but not always what mattered. Many products pair transcription with summarization, extraction, or search.
## Transcription vs speech vs voice
These capabilities are often grouped together, but they solve different parts of the audio experience. Keeping them separate helps you choose the right building blocks.
| Capability | Input | Output | Best for |
| ---------------- | --------------------------- | ------------------------ | ------------------------------------------------- |
| Transcription | Audio | Text | Notes, captions, search, voice input |
| Speech synthesis | Text | Audio | Narration, spoken answers, accessibility playback |
| Real-time voice | Live audio and system turns | Interactive conversation | Voice assistants, live support, agent sessions |
## AI SDK example
The AI SDK can also be used for transcription through provider-backed models. The shape is simple: send audio bytes in and receive text out.
```ts
import { experimental_transcribe as transcribe } from "ai";
import { openai } from "@ai-sdk/openai";
import { readFile } from "fs/promises";
const { text } = await transcribe({
model: openai.transcription("whisper-1"),
audio: await readFile("audio.mp3"),
});
console.log(text);
```
This is the core pattern behind many transcription features. In a real product, you would often store the transcript, index it, or pass it into another AI step right after transcription.
## Common product patterns
Transcription usually becomes more useful when it is part of a larger workflow. The text itself is valuable, but the downstream actions often create the real product value.
* A short recording becomes editable text that can be saved, searched, or turned into a task or reminder.
* Audio is transcribed first, then summarized, tagged, or turned into action items.
* Transcription powers subtitles or accessibility features for audio and video content.
* Real-time voice systems use transcription as the text layer that feeds the model before a spoken response is generated.
## Design considerations
Transcription is often judged by accuracy, but product usefulness depends on more than just word-level correctness. These are the design decisions that usually matter most.
* Background noise, overlapping speakers, poor microphones, and compression can hurt quality long before model choice becomes the limiting factor.
* If users need captions, clip references, or synchronized playback, timing information may matter as much as the transcript itself.
* Technical jargon, names, accents, and multilingual audio all affect transcription quality. Domain-aware post-processing is often worth it.
* Live transcription is a latency problem. Post-recording transcription is more about accuracy, formatting, and downstream processing.
* Fillers, repetitions, and broken punctuation may be acceptable in raw transcripts, but not in user-facing notes or captions.
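A first pass at cleaning a raw transcript can be very small. The filler list and rules below are illustrative assumptions, not a TurboStarter utility:

```typescript
// Strip common fillers and immediate word repetitions ("we we", "the the")
// from a raw transcript before showing it to users.
function cleanTranscript(raw: string): string {
  const fillers = new Set(["um", "uh", "erm"]);
  const words = raw.split(/\s+/).filter((w) => !fillers.has(w.toLowerCase()));
  const deduped: string[] = [];
  for (const w of words) {
    if (
      deduped.length > 0 &&
      deduped[deduped.length - 1].toLowerCase() === w.toLowerCase()
    ) {
      continue; // drop a stuttered repeat of the previous word
    }
    deduped.push(w);
  }
  return deduped.join(" ");
}
```

For user-facing notes you would usually go further, often with a second model pass for punctuation and paragraphing, but rule-based cleanup like this is cheap and predictable.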
## What comes after transcription
In most products, the transcript is not the end result. It becomes the input to another capability that makes the output more useful to humans.
Turn long conversations into concise notes or recap emails.
Pull out action items, decisions, names, dates, or structured fields.
Make audio and video content searchable using text and embeddings.
Feed the transcribed text into a model that decides how to respond in real
time.
## Beginner mistakes to avoid
Many transcription features feel disappointing not because speech-to-text is weak, but because the surrounding workflow is incomplete. These are some of the most common issues.
* Most real users want cleaned-up notes, captions, or searchable records, not just an unformatted block of transcript text.
* No model can fully rescue very poor recordings. It helps to set expectations and improve capture quality where possible.
* If the provider supports language hints or domain-specific options, using them can noticeably improve accuracy and speed.
* Audio can be sensitive. Decide what gets stored, how long transcripts persist, and who can access them.
## Related documentation
While there is not a dedicated transcription demo page yet, the capability connects directly to the voice and audio parts of the stack. These pages are the best follow-up if you want to see how transcription fits into broader experiences.
## A practical checklist
Before shipping a transcription feature, it helps to decide whether the output is meant for raw capture, user reading, downstream AI processing, or all three.
* Test on noisy, accented, and domain-specific audio, not just clean samples.
* Decide whether you need raw transcript, cleaned text, timestamps, or speaker separation.
* Plan what happens after transcription instead of stopping at raw text.
* Think about privacy, retention, and who can access recordings or transcripts.
* Treat live and offline transcription as different UX problems.
## Learn more
These references are a strong next step if you want to explore transcription in more depth, both as a technical capability and as part of richer audio workflows.
* [AI SDK transcription guide](https://ai-sdk.dev/docs/ai-sdk-core/transcription)
* [AI SDK transcribe reference](https://ai-sdk.dev/docs/reference/ai-sdk-core/transcribe)
* [TurboStarter AI Voice docs](/ai/docs/voice)
* [ElevenLabs Documentation](https://elevenlabs.io/docs)
---
url: /ai/docs/web-search
title: Web search
description: Understand how TurboStarter AI integrates web search, where the provider layer lives, and how Tavily, Brave Search, Exa, and Firecrawl fit into the chat stack.
---
Web search is the capability that lets the chat app pull in current or externally verified information instead of relying only on model memory. In TurboStarter AI, that capability lives behind a shared provider contract so the tool layer can stay stable while the search backend changes.
This is especially useful for:
* current events and recent announcements
* source-backed answers that need citations or fresh data
* niche lookups where static model knowledge may be incomplete
* assistant flows that benefit from explicit retrieval before generating a response
## Where it lives
The shared implementation is in `packages/ai/chat/src/tools/web-search`.
That module currently includes:
* a tool entrypoint used by the chat flow
* input schemas for multi-query web search requests
* shared normalization utilities for results, images, domains, and dates
* provider strategies for [Tavily](https://tavily.com/), [Brave Search](https://brave.com/search/api/), [Exa](https://exa.ai/), and [Firecrawl](https://www.firecrawl.dev/)
* provider-specific SDK wrappers under `packages/ai/chat/src/tools/sdk/*`
The overall shape follows the same pattern as the rest of the AI package: keep the chat app code focused on behavior, and isolate third-party integrations behind typed boundaries.
## Provider architecture
The web-search tool does not talk to a single provider API directly. Instead, it uses a provider strategy layer. Each provider implementation maps its own SDK response into the same result shape:
* `results`: normalized search hits
* `images`: normalized image candidates
That makes it easier to:
* swap the default provider
* compare providers during development
* keep the UI independent from provider-specific response formats
* add provider-specific options without rewriting the whole tool surface
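The contract can be pictured as a small interface plus a per-provider normalizer. The shapes below are illustrative only — the real types live in `packages/ai/chat/src/tools/web-search` and differ in detail:

```typescript
// Illustrative shared result shape -- not the actual package types.
interface SearchResult {
  title: string;
  url: string;
  snippet: string;
}

interface SearchImage {
  url: string;
}

interface WebSearchResponse {
  results: SearchResult[];
  images: SearchImage[];
}

// Every provider strategy implements the same contract,
// so the tool layer never sees SDK-specific response formats.
interface WebSearchProvider {
  name: string;
  search(query: string): Promise<WebSearchResponse>;
}

// Hypothetical normalizer for a Tavily-like raw payload:
// map the provider's fields into the shared shape.
function normalizeTavilyLike(raw: {
  results: { title: string; url: string; content: string }[];
  images?: string[];
}): WebSearchResponse {
  return {
    results: raw.results.map((r) => ({
      title: r.title,
      url: r.url,
      snippet: r.content,
    })),
    images: (raw.images ?? []).map((url) => ({ url })),
  };
}
```

Swapping the default provider then means swapping which `WebSearchProvider` implementation the tool instantiates, with no changes to the tool or UI layers.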
## Implemented providers
TurboStarter AI currently includes web-search provider strategies for [Tavily](https://tavily.com/), [Brave Search](https://brave.com/search/api/), [Exa](https://exa.ai/), and [Firecrawl](https://www.firecrawl.dev/).
## Environment variables
If you want to enable these providers, add the corresponding server-side keys in `apps/web/.env.local` or your deployment environment:
```dotenv title="apps/web/.env.local"
BRAVE_SEARCH_API_KEY=""
EXA_API_KEY=""
FIRECRAWL_API_KEY=""
TAVILY_API_KEY=""
```
Only configure the providers you actually plan to use. Keeping multiple providers available can be useful during development, but it is not required.
## How it fits into chat
The [Chat](/ai/docs/chat) app uses web search as a tool rather than as a separate product surface. That means the model decides when browsing is necessary based on the request and the tool instructions.
The shared chat package handles:
* deciding when the tool is relevant
* executing the normalized web-search flow
* streaming the tool status back into the conversation UI
* passing structured search results back into the model loop
For the user, this shows up as a normal tool-driven assistant interaction rather than a separate search screen.
---
url: /ai/docs/api
title: API
description: Overview of the API service in TurboStarter AI, including its architecture, technology stack, and core functionalities.
---
The API service acts as the central hub for all backend logic within TurboStarter AI. It handles interactions with AI models, data processing, and communication between the frontend and backend systems.
## Technology
We use [Hono](https://hono.dev), a fast, TypeScript-first web framework. This ensures efficient handling of API requests, especially for AI interactions like streaming responses.
**Importantly, this single API layer serves both web and mobile applications, guaranteeing consistent business logic and data handling across all platforms.**
In the AI kit, the API is mounted under `/api` (base path), and routes are grouped by module, for example:
* `/api/ai/*` for AI features (chat, RAG, image, voice, TTS)
* `/api/auth/*` for authentication helpers
* `/api/storage/*` for upload and signed URL helpers
## AI integration
While the API package (`@workspace/api`) exposes the endpoints, most AI functionality is implemented in dedicated AI packages and imported into the API routes. In practice:
* `@workspace/ai` contains shared AI primitives like credit costs and server helpers (credit balance, deductions, etc.)
* Each demo app has its own AI package (e.g. `@workspace/ai-chat`, `@workspace/ai-rag`, `@workspace/ai-image`, `@workspace/ai-tts`, `@workspace/ai-voice`) that contains the module-specific API functions, schemas, and strategy/provider wiring
The AI packages are responsible for:
* Communicating with various AI providers and models ([OpenAI](/ai/docs/providers/openai), [Anthropic](/ai/docs/providers/anthropic), [Google AI](/ai/docs/providers/google), etc.)
* Processing and formatting data specifically for AI interactions
* Parsing responses from AI models and producing consistent outputs
* Reading/writing AI module data (chat history, RAG documents/embeddings, image generations, etc.) via `@workspace/db` and `@workspace/storage` where needed
The API layer itself focuses on registering Hono routes, applying middleware (auth, validation, credits, etc.), and exposing these AI features to web and mobile clients.
This separation ensures AI-specific logic remains modular and reusable, while the API package stays focused on request handling and routing.
API keys for AI services are managed securely on the backend within these packages, ensuring they never appear client-side.
## Middlewares
Hono middlewares streamline request handling by tackling common tasks before the main logic runs. In TurboStarter AI, they handle:
* **Authentication:** verifying user sessions before allowing access to protected routes (the AI kit starts with anonymous sessions by default)
* **Validation:** validating query params and JSON bodies with [Zod](https://zod.dev/); validation errors can be localized using the i18n layer
* **Rate limiting:** restricting request frequency (for example, for costly operations like image generation or RAG ingestion)
* **Credits management:** checking a user's credit balance and deducting costs before running an AI operation
* **Localization:** detecting the user's locale (cookie / `Accept-Language`) so API errors and validation messages can be translated
* **Security:** CORS and CSRF protections where appropriate
These middlewares keep core route logic clean and focused, while consistently enforcing security, usage limits, and data integrity across the API.
## Core API documentation
For general information about the API setup, architecture, authentication integration, and how to add new endpoints, please refer to the [Core API documentation](/docs/web/api/overview).
Specific configurations related to AI providers or templates can be found in their respective documentation sections.
---
url: /ai/docs/auth
title: Authentication
description: Learn about the authentication flow in TurboStarter AI.
---
TurboStarter AI implements a streamlined authentication approach powered by [Better Auth](https://www.better-auth.com/). Since the primary focus is showcasing AI capabilities, we've kept the initial authentication simple, allowing you to quickly integrate and experiment with AI features.
## Anonymous sessions
When someone first visits the AI application, an **anonymous session** is automatically created. This establishes a unique user identity without requiring login credentials.
These anonymous sessions serve two critical purposes:
1. **Persistence:** links data like chat history or generated content to specific users in your database
2. **Usage control:** enables tracking for rate limiting and the credits system, ensuring fair AI resource usage even for anonymous visitors
Under the hood, this is implemented with Better Auth's anonymous plugin on the server and an anonymous client plugin on the frontend. The web app signs the user in anonymously on first load if there is no existing session.
## Extending authentication
While the default anonymous setup provides a frictionless initial experience, TurboStarter is built for growth. The authentication logic uses Better Auth in the shared `packages/auth` package, ensuring consistency between web and mobile applications.
When your project needs more sophisticated authentication features like:
* Email/Password login
* Magic links
* Social logins (OAuth)
* Multi-factor authentication
You can easily integrate these by leveraging the comprehensive authentication system in the [TurboStarter Core kit](/docs/web). The underlying structure is already in place, making this transition straightforward.
For detailed implementation guides, check out the core documentation:
By starting with anonymous sessions, the AI kit lets you focus on building compelling AI features first, while providing a clear path to implement advanced user management and security as your application evolves.
---
url: /ai/docs/billing
title: Billing
description: Discover how to manage billing and payment methods for AI features.
---
TurboStarter AI includes a straightforward middleware setup to manage user credits for AI features. This lets you control access based on available credits without complex payment integrations.
## Credit-based access
A focused middleware verifies if users have enough credits before allowing them to access specific AI-powered routes or actions.
```ts title="ai.router.ts"
export const aiRouter = new Hono().post(
"/chat",
rateLimiter,
validate("json", chatMessageSchema),
deductCredits({
amount: 10, // [!code highlight]
}),
streamChat,
);
```
This example shows how the `deductCredits` middleware subtracts a specific amount (10 credits) for each request to the `/chat` endpoint.
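Conceptually, the middleware reduces to a simple guard: read the balance, reject if it is too low, otherwise deduct and run the operation. A framework-agnostic sketch of that pattern (`CreditStore`, `withCredits`, and `InsufficientCreditsError` are illustrative names, not the actual package API):

```typescript
// Illustrative storage abstraction -- the real implementation
// reads and writes credits via @workspace/db.
interface CreditStore {
  getBalance(userId: string): Promise<number>;
  deduct(userId: string, amount: number): Promise<void>;
}

class InsufficientCreditsError extends Error {}

// Guard a paid AI operation: check balance, deduct, then run.
async function withCredits<T>(
  store: CreditStore,
  userId: string,
  amount: number,
  operation: () => Promise<T>,
): Promise<T> {
  const balance = await store.getBalance(userId);
  if (balance < amount) {
    throw new InsufficientCreditsError(
      `Operation costs ${amount} credits, but only ${balance} are available.`,
    );
  }
  await store.deduct(userId, amount);
  return operation();
}
```

In the Hono setup, the same check runs as middleware before the route handler, so handlers like `streamChat` never execute for users without enough credits.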
## Coming soon
We're actively expanding the billing capabilities for AI services, including:
* **Usage-based billing:** implementing a system where users pay based on their actual consumption of AI resources (tokens used, API calls made, etc.)
* **Payment provider integration:** connecting with popular services like [Stripe](/docs/web/billing/stripe), [Polar](/docs/web/billing/polar), [Lemon Squeezy](/docs/web/billing/lemon-squeezy), and more for hassle-free payment processing
## Extending billing
For more advanced billing scenarios or immediate needs, you can tap into the core TurboStarter billing features. The main documentation provides detailed guidance on setting up and managing billing with third-party providers.
Stay tuned for updates as we enhance the AI-specific billing functionalities!
---
url: /ai/docs/database
title: Database
description: Overview of the database service in TurboStarter AI.
---
The database service, managed within the `packages/db` directory (as `@workspace/db`), stores data essential for both core application functions and AI features. It ensures that information like user profiles, conversation history, and AI-generated content is reliably preserved and efficiently accessed.
## Technology
We've chosen [PostgreSQL](https://www.postgresql.org) as our primary relational database for its exceptional reliability, extensibility (including powerful tools like `pgvector` for similarity searches), and proven track record in production environments.
Database interactions are handled through [Drizzle ORM](https://orm.drizzle.team/), a cutting-edge TypeScript ORM that offers outstanding type safety (generating types directly from your schema), high performance, and a developer-friendly API.
For detailed guidance on setup, configuration, schema management (including migrations), and general usage patterns of Drizzle and PostgreSQL in the TurboStarter ecosystem, check out our core documentation:
## What is stored in the database?
Beyond standard application data (like users and accounts), the database plays a crucial role in storing AI-specific information:
* **[Chat](/ai/docs/chat) history**: stores conversations between users and AI models, including rich message parts (for example attachments, tool output parts) and token usage for billing/analytics
* **Vector embeddings**: stores numerical representations (vectors) of text data (like document chunks) that power Retrieval-Augmented Generation (RAG) techniques, allowing features like [Knowledge RAG](/ai/docs/rag) to quickly find relevant context from large document collections
* **Document references**: tracks metadata and storage identifiers (paths in [Blob Storage](/ai/docs/storage)) for user-uploaded files used in RAG
* **Image generations**: stores prompts, settings, and generated image URLs for the [Image playground](/ai/docs/image)
* **Credits**: keeps track of each user's remaining credits (used by the middleware to gate AI operations)
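The retrieval step behind those vector embeddings boils down to comparing a query vector against stored chunk vectors. In production this happens inside PostgreSQL via `pgvector`, but the idea can be sketched in-memory (illustrative code, not the kit's implementation):

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k chunk texts most similar to the query embedding --
// the same ranking pgvector performs inside the database.
function topK(
  query: number[],
  chunks: { text: string; embedding: number[] }[],
  k: number,
): string[] {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding),
    )
    .slice(0, k)
    .map((c) => c.text);
}
```

The top-ranked chunks are what features like [Knowledge RAG](/ai/docs/rag) pass to the model as context.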
## Schema
The core database schema, defined in `packages/db/src/schema`, contains essential tables for the overall application (users, accounts, sessions, etc.).
To maintain clarity as AI features grow, AI module tables are grouped into dedicated [PostgreSQL schemas](https://www.postgresql.org/docs/current/ddl-schemas.html) using Drizzle's `pgSchema`, for example:
* `chat.*` for the [Chat](/ai/docs/chat) demo (chats, messages, message parts, usage)
* `rag.*` for the [Knowledge RAG](/ai/docs/rag) demo (chats, messages, documents, embeddings)
* `image.*` for the [Image playground](/ai/docs/image) demo (generations, images)
This logical separation helps manage complexity and isolates feature-specific data structures. You'll typically find AI-specific schema definitions either alongside the relevant demo app code or within the main `packages/db/src/schema` directory, clearly labeled and organized.
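Grouping tables with Drizzle's `pgSchema` looks roughly like this — the column definitions below are illustrative, not the actual schema in `packages/db/src/schema`:

```typescript
import { pgSchema, text, timestamp, uuid } from "drizzle-orm/pg-core";

// All chat-demo tables live in the dedicated "chat" PostgreSQL schema,
// so they stay isolated from core application tables.
export const chatSchema = pgSchema("chat");

// Illustrative table -- real columns differ.
export const chats = chatSchema.table("chats", {
  id: uuid("id").primaryKey().defaultRandom(),
  title: text("title").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});
```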
---
url: /ai/docs/internationalization
title: Internationalization
description: Learn how we manage internationalization in TurboStarter AI.
---
TurboStarter AI builds on the core internationalization (i18n) setup from the main TurboStarter framework. The shared `@workspace/i18n` package in `packages/i18n` handles translation management across platforms.
This gives you the benefit of a proven system using [i18next](https://www.i18next.com/) for managing translations on both web and mobile apps. Plus, the AI models and LLMs integrated within TurboStarter AI generally support multiple languages, enabling interactions beyond what's covered by UI translations alone.
By default, the AI kit ships with English (`en`) enabled. You can add more locales by extending the i18n config and providing matching translation files.
For detailed information on configuring languages, adding translations, or using the `useTranslation` hook, check out the core documentation:
## AI-specific translations
While most translations are shared across the platform, TurboStarter AI introduces a dedicated `ai` namespace within translation files. This namespace contains strings specifically for AI features, demo applications, and UI elements unique to the AI starter kit.
The i18n package ships with multiple namespaces (for example `common`, `ai`, and `validation`) and loads translations from `packages/i18n/src/translations`.
```json title="packages/i18n/src/translations/en/ai.json"
{
"chat": {
"title": "AI Chatbot",
"description": "Engage in intelligent conversations."
},
"image": {
"title": "Image Generation",
"description": "Create stunning visuals with AI."
}
// ... other AI-specific translations
}
```
When adding translations for new AI features or modifying existing ones, place them within the `ai` namespace in the appropriate language files (e.g., `en/ai.json`, `es/ai.json`). This keeps AI-related text organized and separate from core application translations.
---
url: /ai/docs/security
title: Security
description: Learn about the security measures implemented in TurboStarter AI.
---
Remember to regularly review your security implementations and update them as needed.
The starter kit incorporates several security measures to protect your application and users when interacting with AI services.
## Authenticated endpoints
All AI operation endpoints require user authentication. This is enforced through middleware that verifies the user's session before granting access to any AI features.
The system creates anonymous sessions by default, but you can implement stronger authentication using the core framework's capabilities or the dedicated [authentication setup](/docs/web/auth/overview).
## Credit-based access
To prevent AI resource abuse, TurboStarter AI includes a credit-based system. Users receive a limited number of credits that are consumed when using AI features.
This approach avoids misuse while enabling potential monetization. Learn about the implementation details in the [Core billing documentation](/docs/web/billing/overview).
## Rate limiting
API endpoints are guarded by rate limiting to prevent abuse and ensure fair usage. This protects your application from potential denial-of-service attacks and excessive request volumes.
We use [`hono-rate-limiter`](https://github.com/rhinobase/hono-rate-limiter), which supports various storage options including [Redis](https://redis.io/), [Cloudflare KV](https://developers.cloudflare.com/workers/runtime-apis/kv/), and [Memcached](https://memcached.org/) for distributed rate limiting.
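A typical limiter follows the library's documented configuration shape; the window, limit, and key-generation strategy below are illustrative values, not the kit's actual settings:

```typescript
import { rateLimiter } from "hono-rate-limiter";

// Illustrative limits -- tune per endpoint cost
// (e.g. stricter for image generation or RAG ingestion).
export const limiter = rateLimiter({
  windowMs: 15 * 60 * 1000, // 15-minute window
  limit: 100, // max requests per window per key
  standardHeaders: "draft-6", // emit RateLimit-* response headers
  // Keying by forwarded IP is an assumption here; keying by
  // user/session id is usually better once auth is in place.
  keyGenerator: (c) => c.req.header("x-forwarded-for") ?? "anonymous",
});
```

The limiter is then attached to a route with `app.use(limiter)` or passed inline, as shown in the billing example's `rateLimiter` middleware.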
## Secure API key handling
Sensitive API keys for AI providers ([OpenAI](/ai/docs/providers/openai), [Anthropic](/ai/docs/providers/anthropic), [Google AI](/ai/docs/providers/google), etc.) are managed exclusively on the backend.
They are **NEVER** exposed to client-side code, dramatically reducing the risk of key leakage or unauthorized usage.
## AI service abuse protection
While TurboStarter AI provides application-level safeguards like credit limits and rate limiting, it's essential to implement additional protection directly within your AI providers.
Always configure spending limits, usage quotas, and monitoring alerts in your
AI provider dashboards (e.g., [OpenAI](/ai/docs/providers/openai),
[Anthropic](/ai/docs/providers/anthropic), [Google
AI](/ai/docs/providers/google)). These serve as critical safety nets against
unexpected costs or potential abuse that might bypass your application-level
controls.
By combining application-level security with provider-level controls, you'll build truly robust and secure AI applications.
---
url: /ai/docs/storage
title: Storage
description: Explore cloud storage services for AI applications.
---
Blob storage in TurboStarter AI offers a scalable solution for handling the diverse file types essential to modern AI applications. It works seamlessly with S3-compatible services including [AWS S3](https://aws.amazon.com/s3/), [Cloudflare R2](https://www.cloudflare.com/products/r2/), and [MinIO](https://min.io/).
## Use cases
Blob storage powers several key AI functions:
* **Managing user uploads:** safely storing files like documents or images that users upload for AI processing, as seen in the [Knowledge RAG](/ai/docs/rag) and image analysis features
* **Preserving AI-generated content:** storing outputs from AI models, such as images from the [Image playground](/ai/docs/image) or audio files from the [Voice](/ai/docs/voice) and [Text to Speech](/ai/docs/tts) demos
* **Powering RAG systems:** housing documents and files that serve as knowledge sources for Retrieval-Augmented Generation, used in demos like [Knowledge RAG](/ai/docs/rag) and intelligent [Agents](/ai/docs/agents)
## Security
Properly configuring bucket permissions for your storage provider is critical. Always restrict access based on the principle of least privilege:
* Buckets containing user uploads or sensitive RAG documents should typically **not** be publicly accessible
* Set precise permissions that allow your application server (API) to read/write as needed while blocking unauthorized access
Refer to your provider's documentation ([AWS S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html), [Cloudflare R2](https://developers.cloudflare.com/security-center/security-insights/roles-and-permissions/), [MinIO](https://min.io/docs/minio/linux/administration/identity-access-management/policy-based-access-control.html)) for specific guidance on securing your storage buckets.
## Storage documentation
For detailed setup instructions, configuration options for different storage providers, and implementation best practices, check out the core storage documentation:
In summary, blob storage is essential for building sophisticated AI applications - enabling you to handle user uploads, store AI-generated files, and manage RAG document collections.
---
url: /ai/docs/ui
title: UI
description: Learn more about UI components and design system in AI starter kit.
---
TurboStarter AI builds on the core TurboStarter UI foundation to create engaging interfaces for all AI features.
The UI architecture uses shared components and styles with platform-specific implementations:
* **`@workspace/ui`**: includes shared assets, themes, and fundamental styles
* **`@workspace/ui-web`**: contains web components built with [Tailwind CSS](https://tailwindcss.com), [Base UI](https://base-ui.com), and [shadcn/ui](https://ui.shadcn.com)
* **`@workspace/ui-mobile`**: delivers mobile components using [Uniwind](https://uniwind.dev/) and [react-native-reusables](https://reactnativereusables.com/)
This approach maximizes code reuse while optimizing for each platform's unique capabilities.
## UI in AI applications
The AI starter kit leverages this foundation to create intuitive interfaces for various features and demo apps:
* Components for displaying conversations, user input, and streaming responses (used in [Chatbot](/ai/docs/chat), [Voice](/ai/docs/voice) and [Knowledge RAG](/ai/docs/rag) demos).
* Displaying AI-generated images as masonry grids with options for interaction (used in [Image playground](/ai/docs/image) demo).
* Structured forms for configuring AI tasks (e.g., selecting models, adjusting parameters, modifying prompts).
* Visual feedback during AI processing, such as loading spinners or progress indicators (e.g. [Voice](/ai/docs/voice) and [Text to Speech](/ai/docs/tts) voice avatar animation).
* UI elements for users to rate or provide feedback on AI outputs. This can include thumbs up/down buttons or text input fields for comments.
* Components for displaying error messages or alerts when AI tasks fail or encounter issues.
* Ensuring that all UI components are usable for individuals with disabilities, including keyboard navigation and screen reader support.
* Components for displaying data or model outputs visually, such as charts, graphs, or progress bars.
## Generative UI
A standout aspect of AI applications is their ability to dynamically create or modify UI elements based on AI responses. TurboStarter AI enables this through:
* **AI SDK components**: libraries like the [AI SDK](https://ai-sdk.dev/docs/introduction) provide specialized components and hooks designed to render UI based on AI actions or structured data. This creates interactive elements - buttons, forms, or visualizations - that appear dynamically within conversations or workflows.
* **Structured output**: AI models can return data in specific formats (such as JSON) that your frontend parses to render appropriate components, display information, or trigger actions. For example, an AI might return product details that automatically render as interactive cards.
* **Conditional rendering**: the platform uses standard React patterns for showing, hiding, or transforming UI components based on AI interaction states. This creates smooth transitions between loading states, results displays, and follow-up options tailored to AI suggestions.
This approach delivers truly responsive user experiences where interfaces adapt intelligently to ongoing AI processes. The [Chat demo app](/ai/docs/chat) showcases these generative UI capabilities in action.
## Customization and further details
Customizing appearance (themes, styling) or adding new UI components follows the same process as core TurboStarter applications. For complete guides on styling, theme management, and component development, see our core documentation:
By leveraging the core UI system, TurboStarter AI ensures consistent user experiences across platforms while letting you focus on creating unique AI functionalities.
---
url: /ai/docs/architecture
title: Architecture
description: A quick overview of the different parts of the TurboStarter AI.
---
TurboStarter AI integrates several best-in-class open source libraries to power its diverse functionalities, including authentication, data persistence, text generation, and more. Here's a concise overview of the architecture that makes everything work together.
## Application framework
The project leverages a [monorepo structure](https://turbo.build/repo) powered by [Turborepo](https://turbo.build/) to enable efficient code sharing and consistent tooling across the entire application ecosystem. This approach creates a single source of truth for shared code and dramatically simplifies dependency management.
### Web
Built with [Next.js](https://nextjs.org) and [React](https://react.dev), the web application leverages server-side rendering and static site generation for optimal performance and SEO. The UI is styled with [Tailwind CSS](https://tailwindcss.com) and [shadcn/ui](https://ui.shadcn.com) components for rapid development and consistent design. API routes are handled by [Hono](https://hono.dev) for edge computing, chosen for its minimal overhead and excellent TypeScript support.
### Mobile
The mobile application uses [React Native](https://reactnative.dev) with [Expo](https://expo.dev) for cross-platform development. This combination was selected for its ability to share up to 90% of code between platforms while maintaining native performance. For the UI layer, we use [Uniwind](https://uniwind.dev/), a Tailwind CSS-like framework for React Native, and [react-native-reusables](https://reactnativereusables.com/) for headless components.
The integration with the monorepo allows seamless sharing of business logic and types with the web application.
## API
The API is implemented as a dedicated package using [Hono](https://hono.dev), a lightweight framework optimized for edge computing. This architectural decision creates a clear separation between frontend and backend logic, enhancing maintainability and testability.
Hono's exceptional TypeScript support ensures type safety across all endpoints, while its minimal footprint and edge-first design deliver outstanding performance.
## Model providers
TurboStarter AI integrates with multiple AI model providers through the [AI SDK](https://sdk.vercel.ai/). In the current codebase, you’ll find built-in strategy/provider wiring for providers like [OpenAI](/ai/docs/providers/openai), [Anthropic](/ai/docs/providers/anthropic), [Google AI](/ai/docs/providers/google), [xAI](/ai/docs/providers/xai), and [DeepSeek](/ai/docs/providers/deepseek), plus additional providers for specific capabilities (for example [Replicate](/ai/docs/providers/replicate) for image models and [Eleven Labs](/ai/docs/providers/eleven-labs) for TTS).
For retrieval from the public web, the chat stack also includes a separate [web search](/ai/docs/web-search) provider layer with integrations for [Tavily](https://tavily.com/), [Brave Search](https://brave.com/search/api/), [Exa](https://exa.ai/), and [Firecrawl](https://www.firecrawl.dev/). That layer lives outside the model-provider abstraction because it powers tool execution rather than text generation itself.
For real-time voice experiences, the kit also integrates [LiveKit](https://livekit.io/) (via LiveKit Agents) to handle low-latency audio sessions and voice agent workflows.
The platform strategically utilizes specialized models for distinct AI tasks:
* **Text generation** models for conversational AI and content creation
* **Structured output** models for precise data extraction and formatting
* **Image generation** models for visual content creation
* **Transcription / speech / voice** models for real-time voice experiences
* **Embedding** models for semantic search and information retrieval
Switching models requires just a **one-line code change**, allowing you to rapidly adapt to emerging models or change providers based on your specific requirements. This flexibility ensures your application can leverage the latest AI advancements without extensive refactoring.
## Authentication
The applications use [Better Auth](https://www.better-auth.com/) for authentication, providing a secure and flexible authentication system. By default, the AI implementation creates an anonymous user session at startup, which is then used for all subsequent queries and interactions with the AI models. This approach maintains user context across sessions while minimizing friction.
For more sophisticated authentication requirements, you can easily extend the flow by leveraging the [Core implementation](/docs/web/auth/overview), which supports email/password authentication, magic links, OAuth providers, and more. This modular design lets you implement precisely the level of security your application demands.
## Persistence
Persistence in TurboStarter AI refers to the system's ability to store and retrieve data from a database. The application uses [PostgreSQL](https://www.postgresql.org/) as its primary database to store critical information such as:
* Chat history and conversation context
* User accounts and preference settings
* Vector embeddings for retrieval-augmented generation
To interact with the database from route handlers and server actions, TurboStarter AI leverages [Drizzle ORM](https://orm.drizzle.team/), a high-performance TypeScript ORM that provides type-safe database operations. This ensures robust data integrity and simplified query construction throughout the application.
A key advantage of Drizzle is its compatibility with multiple database providers including [Neon](https://neon.tech/), [Supabase](https://supabase.com/), and [PlanetScale](https://planetscale.com/). This flexibility allows seamless switching between providers based on your specific requirements without modifying queries or schema definitions — making your application highly adaptable to evolving infrastructure needs.
## Blob storage
File storage is managed through S3-compatible services, providing scalable, reliable storage for diverse file types. The system efficiently handles user-uploaded images, AI-generated content, and document files. This approach ensures optimal file management and straightforward integration with various storage providers including [AWS S3](https://aws.amazon.com/s3/), [Cloudflare R2](https://www.cloudflare.com/products/r2/), or [MinIO](https://min.io/).
## Security
Security is implemented comprehensively to protect both the application and its users. Key AI endpoints incorporate **rate limiting** to prevent abuse and ensure fair resource allocation.
The system uses a **credits-based access** control system, where each user has a limited number of credits for AI operations, preventing resource exhaustion and enabling monetization options.
All external API interactions, including those with AI model providers, occur exclusively server-side. This ensures that sensitive API keys are **never exposed** to client-side code, significantly reducing vulnerability to unauthorized access or credential theft.
Additionally, the system implements industry-standard security practices including thorough input validation, proper authentication enforcement, and regular dependency security audits.
---
url: /ai/docs/components/analyzing-image
title: AnalyzingImage
description: A compact loading component for image-understanding states that feels more helpful than a generic spinner in multimodal AI interfaces.
---
`<AnalyzingImage />` is a small status component for moments when the assistant is looking at an image and the user is waiting for the result. It makes that state feel intentional and product-specific instead of falling back to a neutral spinner.

## Why it is useful
This component does one very specific job, and that is exactly what makes it valuable. In image-understanding flows, users usually need reassurance that the model is actively inspecting the image rather than simply “loading.”
* The visual language suggests image analysis, not just generic waiting.
* It fits naturally into assistant messages, loading rows, and image-analysis placeholders.
* The animated scan line and shimmering label add motion without taking over the interface.
## Usage
The simplest usage is just to render the component inline wherever an image-analysis state appears. It already includes its own icon animation and localized status label.
```tsx
import { AnalyzingImage } from "@workspace/ui-web/ai-elements/analyzing-image";

export function ImageAnalysisState() {
  return <AnalyzingImage />;
}
```
If you are already using the conversation primitives, this component also appears through the image loading variant in the conversation loading UI.
```tsx
import { ConversationLoading } from "@workspace/ui-web/ai-elements/conversation";

export function AssistantPendingState() {
  // The image-style variant is assumed here; check the prop name in your version.
  return <ConversationLoading variant="image" />;
}
```
## Props
The component is intentionally lightweight. It forwards standard `div` props, so you can pass `className` and any native wrapper attributes you need when placing it inside message rows, cards, or custom loading states.
## How it works
The component combines three small pieces to create a clear multimodal loading state. The result is subtle enough for production UIs while still being recognizable at a glance.
* An animated image frame creates the feeling of an active scan.
* A moving vertical bar reinforces the “analyzing” motion.
* A shimmering text label displays the localized `analyzingImage` copy from the common translation namespace.
It does not manage image uploads, request state, or model calls on its own. It is purely a presentational component that helps your loading state communicate intent.
## Related components
`<AnalyzingImage />` works best as part of a broader conversational UI rather than as a standalone hero element. These nearby components are the most relevant companions.
---
url: /ai/docs/components/attachments
title: Attachments
description: A composable attachment UI for web and mobile, with grid, inline, and list variants for images, documents, audio, and other AI message assets.
---
`<Attachments />` is the media and file surface used across the AI starter. It gives prompts and messages a consistent way to show uploaded files, generated assets, and source-like items without rebuilding attachment UI for every app.
## Web
The web implementation is the richer of the two. It supports the three shared layout variants and adds hover-card helpers that work especially well in prompt composers and chat messages.

## Mobile
The mobile implementation keeps the same variants, but adapts them to native scrolling, touch targets, and image handling. It is designed to feel natural inside composer rows and conversation screens.

## Variants
The attachment family exposes the same three layout variants on both platforms. Each one is useful in a different part of the product.
| Variant | Best fit | Notes |
| -------- | -------------------------------------- | ------------------------------------------------------------------ |
| `grid` | Message bubbles, gallery-like surfaces | Great for image-first attachments and compact previews |
| `inline` | Prompt composers and compact rows | Keeps attachments small and easy to remove or inspect |
| `list` | Document-heavy or mixed media views | Better when filename and media type matter more than the thumbnail |
That shared variant model is what makes the component easy to reuse. The same attachment data can be rendered differently depending on where it appears in the product.
## Blocks
The API is intentionally simple. Most implementations only need a container, an item, and a preview, then optionally add metadata or removal behavior.
| Component | Role |
| ------------------- | -------------------------------------------------- |
| `Attachments` | Container that applies the selected variant layout |
| `Attachment` | Individual attachment item with contextual styling |
| `AttachmentPreview` | Thumbnail or icon preview based on media type |
| `AttachmentInfo` | Label and optional media type display |
| `AttachmentRemove` | Remove button for editable attachment lists |
| `AttachmentEmpty` | Empty state surface |
On web, the family also includes `AttachmentHoverCard`, `AttachmentHoverCardTrigger`, and `AttachmentHoverCardContent` for richer preview-on-hover behavior.
## Usage
The most common pattern is an `Attachments` container wrapping one or more `Attachment` items. From there, you decide whether the surface should be visual-first, compact, or metadata-heavy.
The web version is a good fit when you want richer preview behavior and more flexible composition around each item.
```tsx
import {
  Attachment,
  AttachmentInfo,
  AttachmentPreview,
  AttachmentRemove,
  Attachments,
} from "@workspace/ui-web/ai-elements/attachments";
import type { AttachmentData } from "@workspace/ui-web/ai-elements/attachments";

const files = [
  {
    id: "img-1",
    type: "file" as const,
    url: "https://images.unsplash.com/photo-1500530855697-b586d89ba3ee?auto=format&fit=crop&w=800&q=80",
    mediaType: "image/jpeg",
    filename: "mountain-lake.jpg",
  },
  {
    id: "doc-1",
    type: "file" as const,
    url: "https://example.com/product-brief.pdf",
    mediaType: "application/pdf",
    filename: "product-brief.pdf",
  },
] satisfies AttachmentData[];

export function AttachmentRow() {
  return (
    <Attachments variant="inline">
      {files.map((file) => (
        <Attachment key={file.id} data={file}>
          <AttachmentPreview />
          <AttachmentInfo />
          {/* The handler is a placeholder; wire it to your removal logic. */}
          <AttachmentRemove onRemove={() => undefined} />
        </Attachment>
      ))}
    </Attachments>
  );
}
```
The mobile version follows the same structure, but the container is scroll-based and the interaction model is tuned for touch and native image rendering.
```tsx
import {
  Attachment,
  AttachmentInfo,
  AttachmentPreview,
  AttachmentRemove,
  Attachments,
} from "@workspace/ui-mobile/ai-elements/attachments";
import type { AttachmentData } from "@workspace/ui-mobile/ai-elements/attachments";

const files = [
  {
    id: "img-1",
    type: "file" as const,
    url: "https://images.unsplash.com/photo-1500530855697-b586d89ba3ee?auto=format&fit=crop&w=800&q=80",
    mediaType: "image/jpeg",
    filename: "mountain-lake.jpg",
  },
  {
    id: "doc-1",
    type: "file" as const,
    url: "https://example.com/product-brief.pdf",
    mediaType: "application/pdf",
    filename: "product-brief.pdf",
  },
] satisfies AttachmentData[];

export function AttachmentRow() {
  return (
    <Attachments variant="inline">
      {files.map((file) => (
        <Attachment key={file.id} data={file}>
          <AttachmentPreview />
          <AttachmentInfo />
          {/* The handler is a placeholder; wire it to your removal logic. */}
          <AttachmentRemove onRemove={() => undefined} />
        </Attachment>
      ))}
    </Attachments>
  );
}
```
## Media handling
The preview component decides what to render based on the attachment data, so you do not need different components for every file type.
| Media type | Preview behavior |
| ------------- | ------------------------------------------ |
| Images | Thumbnail preview |
| Video | Video-style media preview |
| Audio | Audio icon |
| Documents | File icon plus filename where space allows |
| Unknown files | Generic attachment icon |
That keeps the calling code simple. You pass `data`, and the preview layer decides whether the result should look visual or icon-based.
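The dispatch the preview layer performs can be sketched as a simple media-type switch (a simplified model of the behavior, not the component's actual source):

```ts
type PreviewKind =
  | "thumbnail"
  | "video"
  | "audio-icon"
  | "file-icon"
  | "generic-icon";

// Map an attachment's media type to the kind of preview to render.
function previewKind(mediaType?: string): PreviewKind {
  if (!mediaType) return "generic-icon";
  if (mediaType.startsWith("image/")) return "thumbnail";
  if (mediaType.startsWith("video/")) return "video";
  if (mediaType.startsWith("audio/")) return "audio-icon";
  return "file-icon";
}
```

The calling code never branches on media type itself; it just passes the attachment data through.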
## Platform differences
The shared design language is consistent, but the two implementations still respect the platform they live on.
| Area | Web | Mobile |
| ------------------- | ------------------------------------------ | ---------------------------------------- |
| Container | `div`-based layout | `ScrollView`-based layout |
| Rich preview | Hover-card helpers available | Native touch flow instead of hover |
| Image rendering | Standard `img` / `video` elements | `expo-image` |
| Inline editing feel | Great for composer chips and hover details | Great for touch removal and compact rows |
The result is a component family that feels shared, but not forced to behave identically across platforms.
## In the starter
Attachments are used in two main ways throughout the AI starter: as editable items inside prompt composers, and as rendered assets inside messages or conversation history.
That split explains the API shape. `AttachmentRemove` and compact inline layouts matter more in composers, while `grid` and `list` variants matter more in messages and history views.
## Related components
Attachments rarely appear on their own. These pages are the most relevant companions in the docs set.
---
url: /ai/docs/components/context
title: Context
description: A compact context-usage surface for showing token consumption, model limits, and estimated cost on both web and mobile AI interfaces.
---
`<Context />` is a small compound component for answering a question users increasingly care about in AI products: how much context has been used, and what did that response cost?
It turns raw token usage into a compact, inspectable UI that feels at home inside chat interfaces.
## Web
On web, the context UI opens as a lightweight hover card and falls back to a popover on touch devices. That makes it easy to keep the interface compact while still exposing detailed token and cost information when the user asks for it.

## Mobile
On mobile, the same information is presented through a bottom sheet. That keeps the summary trigger small while giving the detail view enough room for token breakdowns, pricing, and model metadata on a narrow screen.

## What it solves
This component is less about decoration and more about trust. It gives users a simple way to inspect usage details without forcing the main message UI to carry that information all the time.
* The trigger gives a quick usage signal, and the expanded view shows what is happening in more detail.
* By combining model metadata with usage numbers, it helps users understand why one generation may be more expensive than another.
* It works especially well in assistants, playgrounds, and message rows where context usage matters but should not dominate the layout.
## Compound API
Unlike a one-piece badge, `<Context />` is designed as a small composition system. You wrap the data once, then arrange the trigger and content pieces however your interface needs them.
The main parts are:
* `<Context>` for the shared model, provider, and usage data
* `<ContextTrigger>` for the compact entry point
* `<ContextContent>` for the expanded panel or sheet
* `<ContextContentHeader>`, `<ContextContentBody>`, and `<ContextContentFooter>` for structure
* `<ContextInputUsage>`, `<ContextOutputUsage>`, `<ContextReasoningUsage>`, and `<ContextCacheUsage>` for the token breakdown
## Basic composition
The most common pattern is a small trigger that opens a richer detail panel. The implementation differs slightly by platform, but the mental model stays the same.
```tsx
import {
  Context,
  ContextContent,
  ContextContentBody,
  ContextContentFooter,
  ContextContentHeader,
  ContextInputUsage,
  ContextOutputUsage,
  ContextTrigger,
} from "@workspace/ui-web/ai-elements/context";

export function MessageContext() {
  return (
    // The model, provider, and usage values are illustrative.
    <Context
      model="gpt-4.1-mini"
      provider="openai"
      usage={{ input: 1200, output: 480 }}
    >
      <ContextTrigger />
      <ContextContent>
        <ContextContentHeader />
        <ContextContentBody>
          <ContextInputUsage />
          <ContextOutputUsage />
        </ContextContentBody>
        <ContextContentFooter />
      </ContextContent>
    </Context>
  );
}
```
```tsx
import {
  Context,
  ContextContent,
  ContextContentBody,
  ContextContentFooter,
  ContextContentHeader,
  ContextInputUsage,
  ContextOutputUsage,
  ContextTrigger,
} from "@workspace/ui-mobile/ai-elements/context";

export function MessageContext() {
  return (
    // The model, provider, and usage values are illustrative.
    <Context
      model="gpt-4.1-mini"
      provider="openai"
      usage={{ input: 1200, output: 480 }}
    >
      <ContextTrigger />
      <ContextContent>
        <ContextContentHeader />
        <ContextContentBody>
          <ContextInputUsage />
          <ContextOutputUsage />
        </ContextContentBody>
        <ContextContentFooter />
      </ContextContent>
    </Context>
  );
}
```
## Shared inputs
At the top level, both versions take the same core data. That is what makes the component easy to reuse across different message and assistant surfaces.
| Prop | Type | Notes |
| ---------- | -------------------------------------------------------------------------- | -------------------------------------------------------------------------------------- |
| `model` | `string` | The model identifier used to look up limits and pricing metadata. |
| `provider` | `string \| undefined` | The provider identifier. If omitted, model lookup falls back to model-only resolution. |
| `usage` | `{ input?: number; output?: number; reasoning?: number; cached?: number }` | The token usage payload shown in the trigger and detail sections. |
The rest of the customization mostly comes from composition. You can replace or rearrange the trigger, header, body, footer, and usage rows instead of passing a long list of appearance props.
## Platform-specific behavior
The platform distinction matters here because interaction design changes the feel of the component quite a bit, even when the data is identical.
* Web uses a hover-card style interaction and switches to a popover on touch devices.
* Mobile uses a bottom sheet, which gives the content more breathing room and feels natural in native layouts.
* Both versions fetch model metadata through [tokenlens](https://www.tokenlens.dev/), calculate context usage, and render estimated costs from the same usage payload.
That shared logic helps the component stay consistent even though the surrounding shell is platform-native.
## What the user sees
Most users will encounter this component in two stages: a small trigger first, then a detail surface only when they want more context. That balance keeps the main conversation readable while still exposing meaningful operational detail.
The default experience typically includes:
* a percentage-style trigger based on used context
* a circular usage icon
* the current model name and provider logo
* input, output, reasoning, and cache token rows when available
* an estimated total cost footer
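The percentage shown in the trigger reduces to used tokens over the model's context window. A hedged sketch of that calculation (the real component resolves the limit via tokenlens; the shape below is illustrative):

```ts
interface Usage {
  input?: number;
  output?: number;
  reasoning?: number;
  cached?: number;
}

// Percentage of the model's context window consumed by this exchange,
// clamped to 100 for overfull histories.
function contextUsedPercent(usage: Usage, contextLimit: number): number {
  const used =
    (usage.input ?? 0) +
    (usage.output ?? 0) +
    (usage.reasoning ?? 0) +
    (usage.cached ?? 0);
  return Math.min(100, (used / contextLimit) * 100);
}
```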
## Related components
`<Context />` is most useful when paired with other message-level primitives. These are the closest companion pages in the component set.
---
url: /ai/docs/components/conversation
title: Conversation
description: A conversation container for web and mobile AI interfaces, with scroll-to-bottom behavior, loading and error states, content layout, and conversation export helpers.
---
`<Conversation />` is the outer shell for the chat and transcript experience in the AI starter. It is responsible for the part around the messages: scrolling, bottom-follow behavior, loading and error surfaces, and small utilities like export and “jump to latest”.
## Web
The web implementation is built around a stick-to-bottom pattern, which makes it well suited for streaming AI interfaces where new content keeps arriving. It handles the “follow the latest message unless the user scrolls away” behavior for you.

## Mobile
The mobile implementation uses a keyboard-friendly scroll container and a small internal context to manage scroll state. It is designed for transcript and chat surfaces that need to stay usable while the keyboard opens and closes.

## Blocks
The conversation family is intentionally small. It gives you the shared shell around the message list without trying to own the messages themselves.
| Component | Role |
| -------------------------- | ----------------------------------------------------------- |
| `Conversation` | Root container and scroll-state owner |
| `ConversationContent` | Scrollable content region for messages and states |
| `ConversationScrollButton` | Jump-to-latest button when the user is away from the bottom |
| `ConversationLoading` | Loading surface while the assistant is still working |
| `ConversationError` | Error surface with retry affordance |
| `ConversationDownload` | Export helper for the current conversation |
On web, the family also includes `ConversationContentSpacer`, which is useful when the layout needs extra breathing room below the latest message.
## Usage
The common pattern is a root conversation container, a `ConversationContent` region that holds your message list, and then optional utilities like a loading row, error state, or scroll button.
The web version is driven by the underlying `StickToBottom` component, so its props are best understood as scroll-behavior props plus layout props. The most common pieces are the root container, content area, loader, error, and floating scroll button.
```tsx
import {
  Conversation,
  ConversationContent,
  ConversationError,
  ConversationLoading,
  ConversationScrollButton,
} from "@workspace/ui-web/ai-elements/conversation";

export function ChatSurface() {
  return (
    <Conversation>
      <ConversationContent>
        <div>First message</div>
        <div>Second message</div>
        <ConversationLoading />
        {/* The retry handler is a placeholder for your retry logic. */}
        <ConversationError onRetry={() => undefined} />
      </ConversationContent>
      <ConversationScrollButton />
    </Conversation>
  );
}
```
The mobile version follows the same structure, but the content area is backed by `KeyboardFriendlyScrollView`. That makes it a better fit for full-screen conversation surfaces that need to stay stable while the user types.
```tsx
import {
  Conversation,
  ConversationContent,
  ConversationError,
  ConversationLoading,
  ConversationScrollButton,
} from "@workspace/ui-mobile/ai-elements/conversation";

export function ChatSurface() {
  return (
    <Conversation>
      <ConversationContent>
        <ConversationLoading />
        {/* The retry handler is a placeholder for your retry logic. */}
        <ConversationError onRetry={() => undefined} />
      </ConversationContent>
      <ConversationScrollButton />
    </Conversation>
  );
}
```
## Scroll behavior
The most important thing this family does is manage how the conversation behaves as new content arrives. That is the difference between a usable chat surface and one that constantly fights the user.
| Behavior | Web | Mobile |
| ----------------- | ----------------------------------------------------- | ----------------------------------------------------- |
| Bottom following | Built on `StickToBottom` | Managed with internal scroll state |
| Jump to latest | `ConversationScrollButton` appears when not at bottom | Same idea, with native animated visibility |
| Keyboard handling | Normal desktop scroll behavior | `KeyboardFriendlyScrollView` keeps input flows usable |
This is why `Conversation` matters even though it looks visually simple. It is carrying a lot of interaction behavior that would otherwise get rewritten in every chat screen.
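The core of the bottom-follow behavior is a simple predicate: keep following the latest message only while the user is already near the bottom. A hedged sketch of the idea (not the actual `StickToBottom` internals):

```ts
// Returns true when the viewport is within `threshold` px of the bottom,
// i.e. when new content should keep the view pinned to the latest message.
function shouldFollowBottom(
  scrollTop: number,
  viewportHeight: number,
  contentHeight: number,
  threshold = 48,
): boolean {
  const distanceFromBottom = contentHeight - (scrollTop + viewportHeight);
  return distanceFromBottom <= threshold;
}
```

The same predicate, inverted, is what decides when the jump-to-latest button should appear.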
## Loading and error states
Loading and error surfaces are part of the conversation family because they belong to the conversation flow, not to any single message. They are best treated as rows inside `ConversationContent`, not as separate overlays.
| Component | Purpose |
| --------------------- | -------------------------------------------------------- |
| `ConversationLoading` | Shows that the assistant is still thinking or generating |
| `ConversationError` | Shows an error message and exposes a retry action |
On web, `ConversationLoading` also supports an image-style loading variant, which is useful in multimodal or vision flows where a plain spinner feels too generic.
## Exporting a conversation
Both platforms include `ConversationDownload`, but the behavior is platform-specific. The helper takes an array of conversation messages and turns them into markdown before exporting or sharing.
| Platform | Behavior |
| -------- | ---------------------------- |
| Web | Downloads a `.md` file |
| Mobile | Opens the native share sheet |
That makes it a nice utility to keep near the conversation shell rather than rebuilding export logic around the app every time.
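A minimal sketch of that messages-to-markdown step (the message shape and role labels are assumptions, not the starter's exact types):

```ts
interface ConversationMessage {
  role: "user" | "assistant";
  text: string;
}

// Serialize a conversation into a markdown transcript suitable for
// downloading as a .md file (web) or sharing (mobile).
function conversationToMarkdown(messages: ConversationMessage[]): string {
  return messages
    .map((m) => `**${m.role === "user" ? "User" : "Assistant"}**\n\n${m.text}`)
    .join("\n\n---\n\n");
}
```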
## Platform differences
The structure is shared, but the implementation still respects the platform.
| Area | Web | Mobile |
| -------------------- | ------------------------------------- | ---------------------------------- |
| Root behavior | `StickToBottom` | Context-managed `View` container |
| Content region | `StickToBottom.Content` | `KeyboardFriendlyScrollView` |
| Scroll button reveal | CSS transition-based visibility | Reanimated timing-based visibility |
| Extra spacing helper | `ConversationContentSpacer` available | No spacer helper in the same form |
That split keeps the API familiar while still letting each platform solve the scrolling problem in the way that makes the most sense.
## In the starter
The conversation shell is where `Message`, `Attachments`, `Reasoning`, `Tool`, and loading states all come together. It is the structural surface that turns those individual pieces into a working conversation.
If `PromptInput` starts the interaction and `Message` renders individual entries, `Conversation` is what gives the whole exchange its behavior as a live interface.
## Related components
The conversation shell is closely tied to the rest of the AI chat surface. These are the most useful companion pages in the docs set.
---
url: /ai/docs/components/message
title: Message
description: A composable AI message UI for web and mobile, with content, actions, markdown response rendering, and branch navigation for alternate generations.
---
`<Message />` is the foundation of the conversation UI in the AI starter. It is not only a message bubble, but a small family of components for laying out user and assistant messages, rendering rich responses, attaching actions, and handling response branches.
## Web
The web version is the more feature-rich implementation. It supports user-versus-assistant layout treatment, hover-revealed actions, rich markdown rendering through [Streamdown](https://streamdown.ai/), and a compact branch selector for alternate responses.

## Mobile
The mobile version keeps the same overall structure, but adapts it to a native layout with touch-friendly spacing and a lighter response renderer. It still supports actions and branch navigation, but without the hover-based affordances from the web version.

## Blocks
The message system is intentionally composable. Most conversation UIs only need a few of these pieces, but the family gives you room to build from a simple bubble up to a more advanced assistant surface.
| Component | Role |
| ----------------- | -------------------------------------------- |
| `Message` | Root wrapper that sets role-aware layout |
| `MessageContent` | Main visual body of the message |
| `MessageActions` | Action row below or beside the message |
| `MessageAction` | Reusable action button primitive |
| `MessageResponse` | Rich response renderer for assistant output |
| `MessageBranch*` | Components for alternate response navigation |
The most important idea is that the root `Message` sets the role context, and the rest of the family adapts to that context rather than requiring you to pass the same role props over and over again.
## Usage
The common pattern is a role-aware root message, a content section, and then optional actions. Assistant messages usually include `MessageResponse`, while user messages often render plain text or attachments inside `MessageContent`.
The web version works especially well when assistant output includes formatted markdown, code, math, or diagrams. The most relevant props are the `from` role on `Message`, regular layout props on `MessageContent`, button props on `MessageAction`, and `Streamdown` props on `MessageResponse`.
```tsx
import {
  Message,
  MessageAction,
  MessageActions,
  MessageContent,
  MessageResponse,
} from "@workspace/ui-web/ai-elements/message";
import { Icons } from "@workspace/ui-web/icons";

export function AssistantMessage() {
  return (
    <Message from="assistant">
      <MessageContent>
        <MessageResponse>
          {"## Plan\n\nHere is a concise answer with **formatting** and `code`."}
        </MessageResponse>
      </MessageContent>
      <MessageActions>
        {/* The icon name is illustrative. */}
        <MessageAction>
          <Icons.Copy />
        </MessageAction>
      </MessageActions>
    </Message>
  );
}
```
The mobile version keeps the same composition pattern, but the response renderer is based on the native markdown component and actions are always touch-first. The key props are still `from` on `Message`, view props on the layout pieces, and button props on `MessageAction`.
```tsx
import {
  Message,
  MessageAction,
  MessageActions,
  MessageContent,
  MessageResponse,
} from "@workspace/ui-mobile/ai-elements/message";
import { Icons } from "@workspace/ui-mobile/icons";

export function AssistantMessage() {
  return (
    <Message from="assistant">
      <MessageContent>
        <MessageResponse>
          {"## Plan\n\nHere is a concise answer with **formatting** and `code`."}
        </MessageResponse>
      </MessageContent>
      <MessageActions>
        {/* The icon name is illustrative. */}
        <MessageAction>
          <Icons.Copy />
        </MessageAction>
      </MessageActions>
    </Message>
  );
}
```
## Assistant and user roles
The message family changes its layout depending on the role. That makes user and assistant messages feel related, but not identical.
| Role | Treatment |
| --------- | -------------------------------------- |
| User | Right-aligned, bubble-like surface |
| Assistant | Left-aligned, more open content layout |
That difference is subtle, but important. It lets rich assistant output breathe while still making user messages feel compact and clearly authored.
## Response rendering
`MessageResponse` is one of the most useful pieces in the assistant side of the message family. It gives rich text output a dedicated renderer instead of pushing raw markdown handling into the surrounding conversation code.
| Platform | Renderer | Notes |
| -------- | --------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- |
| Web | [Streamdown](https://streamdown.ai/) | Supports richer formatting, including code, math, mermaid, and CJK plugins |
| Mobile | [react-native-enriched-markdown](https://github.com/software-mansion-labs/react-native-enriched-markdown) | Better suited to compact native rendering and scrolling |
This is one of the reasons the message family is more than layout. It also standardizes how assistant output is actually presented.
## Message branches
Both platforms support a branch-navigation model for alternate assistant responses. That is useful when the product allows regeneration or multiple candidate answers.
| Component | Purpose |
| --------------------------------------------- | ------------------------------ |
| `MessageBranch` | Holds branch state |
| `MessageBranchContent` | Renders the active branch |
| `MessageBranchSelector` | Wraps the navigation controls |
| `MessageBranchPrevious` / `MessageBranchNext` | Move between branches |
| `MessageBranchPage` | Shows the current branch index |
The branch API is intentionally separate from the base `Message` so you only pay for that complexity when the product actually needs it.
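The branch state itself is just an index into a list of alternate responses, with previous/next wrapping around at the ends. A hedged sketch of that model (not the component's actual internals):

```ts
// Cycle through alternate response branches, wrapping at the ends --
// the behavior MessageBranchPrevious / MessageBranchNext expose in the UI.
function nextBranch(current: number, total: number): number {
  return (current + 1) % total;
}

function previousBranch(current: number, total: number): number {
  return (current - 1 + total) % total;
}
```

`MessageBranchPage` then renders something like `${current + 1} / ${total}` from the same state.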
## Platform differences
The structure is shared, but the behavior still respects the platform.
| Area | Web | Mobile |
| -------------------- | ---------------------------------------------- | ---------------------------------------------- |
| Action visibility | Hover and focus reveal | Touch-first, always available in layout |
| Response rendering | `Streamdown` | Native markdown component |
| Root layout | `div`-based with role-specific utility classes | `Animated.View` with native layout transitions |
| Branch selector feel | Compact desktop control group | Native button row |
That balance keeps the component family consistent without making either platform feel awkwardly ported.
## In the starter
The message family is where many of the other AI UI components come together. Attachments, tools, reasoning traces, context displays, and feedback actions often live inside or around a message.
That is why this page matters. If `PromptInput` starts the interaction, `Message` is where the result actually becomes visible to the user.
## Related components
The message surface is usually composed with several other AI primitives. These are the most relevant companion pages in the docs set.
---
url: /ai/docs/components/model-selector
title: ModelSelector
description: A cross-platform model picker for AI interfaces, with built-in provider logos, model labels, and compact trigger patterns for web and mobile.
---
`<ModelSelector />` is the model picker used across the AI starter. It ships in two UI variants:
* `select`: a compact dropdown
* `modal`: a richer picker for larger model catalogs (search, providers, capabilities)
The modal variant is also a great fit when your model list is fetched dynamically. In the starter it’s wired to work with remote model catalogs, and you can plug it into providers like OpenRouter, models.dev, or an AI gateway.
## Web
On web you can use either the small `select` dropdown or the larger `modal` picker. The modal adapts to screen size: it renders as a popover on desktop and a drawer on smaller screens.

## Mobile
On mobile, the `select` trigger includes the provider or model logo by default. The `modal` variant is built on a bottom sheet and works well for browsing a longer list.

## Why it matters
Model choice is often one of the most important controls in an AI product, but it can also become visually messy very quickly. This component helps you present that choice in a way that feels intentional instead of improvised.
* The component already understands provider logos, model names, and the kind of compact trigger most chat products need.
* The logo and name helpers are useful on their own in places like usage panels, model badges, and message metadata.
* Whether the selector appears in a composer, a settings area, or a context panel, it keeps the visual language of model choice consistent.
## Building blocks
`<ModelSelector />` is a small family of parts rather than a single monolithic control. Most screens only need the trigger, the content, and the shared logo/name helpers.
The exports are grouped by variant:
`select` variant:
* `ModelSelectorSelect`
* `ModelSelectorSelectTrigger`
* `ModelSelectorSelectContent`
* `ModelSelectorSelectItem`
`modal` variant:
* `ModelSelectorModal`
* `ModelSelectorModalTrigger`
* `ModelSelectorModalContent`
* `ModelSelectorModalList`
Shared helpers:
* `ModelSelectorLogo`
* `ModelSelectorName`
* `ModelSelectorDescription`
* `ModelSelectorProviders`
* `ModelSelectorCapabilities`
* `ModelSelectorSearchInput`
## Variants
Both variants solve “pick a model”, but the interaction feel is different.
| Variant | Best fit | Notes |
| -------- | ------------------------------- | ------------------------------------------------------ |
| `select` | chat toolbars, compact settings | fast, minimal UI |
| `modal` | long model lists, discovery | search + provider browsing; ideal for dynamic catalogs |
## Select variant
The `select` variant is best when the model list is short and the picker should stay out of the way.
```tsx
import {
  ModelSelectorSelect,
  ModelSelectorSelectContent,
  ModelSelectorSelectItem,
  ModelSelectorSelectTrigger,
} from "@workspace/ui-web/ai-elements/model-selector";

export function ChatModelSelector() {
  return (
    // The model identifiers are illustrative.
    <ModelSelectorSelect defaultValue="gpt-4.1-mini">
      <ModelSelectorSelectTrigger />
      <ModelSelectorSelectContent>
        <ModelSelectorSelectItem value="gpt-4.1-mini">
          GPT-4.1 Mini
        </ModelSelectorSelectItem>
        <ModelSelectorSelectItem value="claude-4-sonnet">
          Claude 4 Sonnet
        </ModelSelectorSelectItem>
        <ModelSelectorSelectItem value="gemini-2.5-flash">
          Gemini 2.5 Flash
        </ModelSelectorSelectItem>
      </ModelSelectorSelectContent>
    </ModelSelectorSelect>
  );
}
```
```tsx
import {
  ModelSelectorSelect,
  ModelSelectorSelectContent,
  ModelSelectorSelectItem,
  ModelSelectorSelectTrigger,
} from "@workspace/ui-mobile/ai-elements/model-selector";

export function ChatModelSelector() {
  return (
    // The model identifiers are illustrative.
    <ModelSelectorSelect defaultValue="gpt-4.1-mini">
      <ModelSelectorSelectTrigger />
      <ModelSelectorSelectContent>
        <ModelSelectorSelectItem value="gpt-4.1-mini">
          GPT-4.1 Mini
        </ModelSelectorSelectItem>
        <ModelSelectorSelectItem value="claude-4-sonnet">
          Claude 4 Sonnet
        </ModelSelectorSelectItem>
        <ModelSelectorSelectItem value="gemini-2.5-flash">
          Gemini 2.5 Flash
        </ModelSelectorSelectItem>
      </ModelSelectorSelectContent>
    </ModelSelectorSelect>
  );
}
```
## Modal variant
The `modal` variant is built for browsing. It’s the one you want when you have many models, when you want provider filtering, or when the list is fetched dynamically.
You can source models from anywhere: [OpenRouter](https://openrouter.ai/), [models.dev](https://models.dev/), an [AI gateway](https://vercel.com/ai-gateway), or your own API. The picker only needs a normalized list to render.
```tsx
import { useEffect, useMemo, useState } from "react";

import {
  ModelSelectorCapabilities,
  ModelSelectorDescription,
  ModelSelectorLogo,
  ModelSelectorModal,
  ModelSelectorModalContent,
  ModelSelectorModalList,
  ModelSelectorModalTrigger,
  ModelSelectorName,
  ModelSelectorProviders,
  ModelSelectorSearchInput,
} from "@workspace/ui-web/ai-elements/model-selector";

type ModelItem = {
  id: string;
  name: string;
  description?: string;
  provider: string;
  attachments: boolean;
  tools: boolean;
  reasoning: boolean;
};

export function ChatModelSelectorModal() {
  const [value, setValue] = useState("gpt-4.1-mini");
  const [provider, setProvider] = useState("openai");
  const [query, setQuery] = useState("");
  const [models, setModels] = useState<ModelItem[]>([]);

  useEffect(() => {
    // Fetch models dynamically (OpenRouter, models.dev, AI gateway, or your API).
    setModels([
      {
        id: "gpt-4.1-mini",
        name: "GPT-4.1 Mini",
        provider: "openai",
        description: "Fast, general-purpose model.",
        attachments: true,
        tools: true,
        reasoning: false,
      },
      {
        id: "claude-4-sonnet",
        name: "Claude 4 Sonnet",
        provider: "anthropic",
        description: "Strong writing and reasoning balance.",
        attachments: true,
        tools: true,
        reasoning: true,
      },
    ]);
  }, []);

  const providers = useMemo(
    () => Array.from(new Set(models.map((m) => m.provider))),
    [models],
  );

  const filtered = useMemo(() => {
    return models
      .filter((m) => (provider ? m.provider === provider : true))
      .filter((m) => m.name.toLowerCase().includes(query.toLowerCase()));
  }, [models, provider, query]);

  return (
    // Prop names on the picker parts below are illustrative; adapt them
    // to the exact API of your version.
    <ModelSelectorModal value={value} onValueChange={setValue}>
      <ModelSelectorModalTrigger>
        {models.find((m) => m.id === value)?.name ?? "Select a model"}
      </ModelSelectorModalTrigger>
      <ModelSelectorModalContent>
        <ModelSelectorSearchInput value={query} onValueChange={setQuery} />
        <ModelSelectorProviders
          providers={providers}
          value={provider}
          onValueChange={setProvider}
        />
        <ModelSelectorModalList>
          {filtered.map((model) => (
            <div key={model.id}>
              <ModelSelectorLogo provider={model.provider} model={model.id} />
              <ModelSelectorName>{model.name}</ModelSelectorName>
              <ModelSelectorDescription>
                {model.description}
              </ModelSelectorDescription>
              <ModelSelectorCapabilities
                attachments={model.attachments}
                tools={model.tools}
                reasoning={model.reasoning}
              />
            </div>
          ))}
        </ModelSelectorModalList>
      </ModelSelectorModalContent>
    </ModelSelectorModal>
  );
}
```
```tsx
import {
  ModelSelectorLogo,
  ModelSelectorName,
} from "@workspace/ui-mobile/ai-elements/model-selector";

export function ModelMeta() {
  return (
    <>
      {/* Prop names are illustrative; pass your provider/model identifiers. */}
      <ModelSelectorLogo provider="anthropic" model="claude-4-sonnet" />
      <ModelSelectorName>Claude 4 Sonnet</ModelSelectorName>
    </>
  );
}
```
The helpers first try to match model-specific icons for names like `claude`, `gemini`, `grok`, or `nano-banana`. If no model-specific icon matches, they fall back to the provider icon, and then finally to an external logo from [models.dev](https://models.dev/).
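That resolution order can be sketched as a simple chain. The icon lists, return shape, and the models.dev URL pattern below are all hypothetical; the real helpers resolve actual icon components:

```ts
// Hypothetical icon registries -- illustrative, not the real lists.
const MODEL_ICONS = ["claude", "gemini", "grok", "nano-banana"];
const PROVIDER_ICONS = ["openai", "anthropic", "google", "xai"];

// Resolve which logo source to use: model-specific icon, then provider
// icon, then an external models.dev fallback (URL pattern is made up).
function resolveLogo(model: string, provider?: string): string {
  const modelIcon = MODEL_ICONS.find((name) => model.includes(name));
  if (modelIcon) return `icon:${modelIcon}`;
  if (provider && PROVIDER_ICONS.includes(provider)) return `icon:${provider}`;
  return `https://models.dev/logos/${provider ?? model}.svg`;
}
```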
## Platform differences
The two versions stay close in spirit, but there are a few differences worth knowing when you design around them.
| Area | Web | Mobile |
| ---------------- | ------------------------------------------ | --------------------------------------- |
| `select` trigger | Compact text-first trigger | Trigger includes logo by default |
| Logo fallback | `img` fallback from `models.dev` | `expo-image` fallback from `models.dev` |
| Name helper | `span`-based text helper | native `Text`-based helper |
| `modal` surface | popover (desktop) / drawer (small screens) | bottom sheet |
| Visual feel | tighter desktop toolbar fit | easier scanning in touch layouts |
## What to customize
Most customization happens through composition and styling rather than through a long prop list. In practice, the main knobs are:
* the variant you choose (`select` vs `modal`)
* the selected value and state wiring in your app
* `className` on triggers and list rows
* the `provider` and `model` values used to resolve the right logo
* wiring the modal list to a dynamic model catalog (OpenRouter, models.dev, AI gateway, or your API)
That makes the component easy to adapt without turning it into a configuration-heavy abstraction.
## Related components
`<ModelSelector />` tends to live near other model-aware pieces of the UI. These are the most natural companion pages in this docs set.
---
url: /ai/docs/components/prompt-input
title: Prompt input
description: A composable AI prompt input for web and mobile, with textarea, submit controls, tools, action menus, attachments, and provider-driven state management.
---
`<PromptInput />` is the main text-entry surface across the AI starter. It is not just a single input field, but a small component family for building [chat](/ai/docs/chat), [image](/ai/docs/image), [RAG](/ai/docs/rag), and [TTS](/ai/docs/tts) composers with the same design language.
## Web
The web version is the broader implementation. It supports provider-driven state, drag and drop, attachment actions, referenced sources, menus, selects, hover cards, and richer composition around the textarea.

## Mobile
The mobile version keeps the same overall structure, but adapts it to native interaction patterns. Instead of drag and drop and hover-based UI, it leans on bottom sheets, touch-friendly buttons, and platform pickers for camera, photos, and files.

## What makes it useful
Prompt input is where a lot of AI product complexity shows up. This component family gives that complexity a clean place to live without turning the composer into one giant custom component.
You can start with a textarea and submit button, then add tools, model
selectors, menus, attachments, and helper UI as needed.
The same family is used for [chat](/ai/docs/chat), [image
generation](/ai/docs/image), [knowledge RAG](/ai/docs/rag), and
[text-to-speech](/ai/docs/tts) flows in the starter.
Attachments, generation state, stop actions, and external input control are
all first-class parts of the API.
## Blocks
The component family is intentionally broad, but most implementations only need a handful of pieces. The root container handles submission flow, while the surrounding helpers shape the final composer UI.
| Component | Role |
| ------------------------------------------------------------- | --------------------------------------------------------------------------------- |
| `PromptInput` | Root container that owns submission flow and local state when no provider is used |
| `PromptInputProvider` | Optional shared state provider for input text and attachments |
| `PromptInputTextarea` | Main text entry area |
| `PromptInputSubmit` | Send or stop button tied to generation state |
| `PromptInputHeader` / `PromptInputBody` / `PromptInputFooter` | Layout regions for building the composer |
| `PromptInputTools` / `PromptInputButton` | Tool rows and compact actions |
| `PromptInputActionMenu*` | Attachment and secondary action menu primitives |
| `PromptInputSelect*` | Model or option selectors placed inside the composer |
On web, the family also includes extras like `PromptInputDropzone`, `PromptInputActionAddAttachments`, `PromptInputHoverCard*`, `PromptInputCommand*`, and referenced source helpers.
## Usage
The most common pattern is a root prompt input with a textarea and footer. From there, you can add tools and actions depending on the product surface.
The web version is best when the prompt input needs to behave like a full composer surface. The root component supports `status`, `dropzone`, `attachments`, `onSubmit`, and regular form props, while the child pieces shape the UI around it.
```tsx
import {
PromptInput,
PromptInputFooter,
PromptInputSubmit,
PromptInputTextarea,
PromptInputTools,
} from "@workspace/ui-web/ai-elements/prompt-input";
import type { PromptInputMessage } from "@workspace/ui-web/ai-elements/prompt-input";
export function ChatComposer() {
  // Composition sketch: child layout reconstructed from the imported pieces.
  return (
    <PromptInput
      onSubmit={(message: PromptInputMessage) => {
        console.log(message.text, message.files);
      }}
      className="w-full"
    >
      <PromptInputTextarea />
      <PromptInputFooter>
        <PromptInputTools />
        <PromptInputSubmit />
      </PromptInputFooter>
    </PromptInput>
  );
}
```
The mobile version follows the same composition idea, but the root is a `View`-based container and the action flow is tuned for touch and native pickers. The key props are `status`, `attachments`, `onSubmit`, and standard view props.
```tsx
import {
PromptInput,
PromptInputFooter,
PromptInputSubmit,
PromptInputTextarea,
PromptInputTools,
} from "@workspace/ui-mobile/ai-elements/prompt-input";
import type { PromptInputMessage } from "@workspace/ui-mobile/ai-elements/prompt-input";
export function ChatComposer() {
  // Composition sketch: child layout reconstructed from the imported pieces.
  return (
    <PromptInput
      onSubmit={(message: PromptInputMessage) => {
        console.log(message.text, message.files);
      }}
    >
      <PromptInputTextarea />
      <PromptInputFooter>
        <PromptInputTools />
        <PromptInputSubmit />
      </PromptInputFooter>
    </PromptInput>
  );
}
```
## Shared state
If the composer needs to be controlled from outside the prompt input itself, both platforms expose a provider and controller hook. That is useful when examples, attachment previews, or external UI need to read or update the same state.
| Piece | Purpose |
| ----------------------------- | ---------------------------------------------------------- |
| `PromptInputProvider` | Lifts input and attachment state outside the root composer |
| `usePromptInputController()` | Gives access to `textInput` and `attachments` |
| `usePromptInputAttachments()` | Reads and manages current attachments |
On web, the provider also keeps track of the dropzone state so actions like “add attachments” can open the file dialog from elsewhere in the composer tree.
## Attachments and actions
Attachments are a core part of the prompt input family, but the interaction model differs between web and mobile.
| Area | Web | Mobile |
| ------------------ | --------------------------------------------------------------------- | ------------------------------------------------------------------------------ |
| File input | Drag and drop plus file dialog | Native camera, photo library, and document pickers |
| Menu model | Dropdown-based action menu | Bottom-sheet action menu |
| Attachment helpers | `PromptInputDropzone`, `PromptInputActionAddAttachments` | `PromptInputActionCamera`, `PromptInputActionPhotos`, `PromptInputActionFiles` |
| Validation | `PromptInputAttachmentsOptions` for file count, size, and MIME checks | Same validation model, adapted to native assets |
That split is important: the API stays conceptually similar, but each platform uses the interaction pattern users already expect.
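To make the shared validation model concrete, here is a minimal sketch of the count, size, and MIME checks in the spirit of `PromptInputAttachmentsOptions`. The type and field names here are illustrative, not the real API.

```typescript
// Hypothetical attachment validation sketch: count, size, and MIME checks.
type AttachmentLimits = {
  maxCount: number;
  maxSizeBytes: number;
  accept: string[]; // e.g. ["image/*", "application/pdf"]
};

function validateAttachments(
  files: { name: string; size: number; type: string }[],
  limits: AttachmentLimits,
): string[] {
  const errors: string[] = [];
  if (files.length > limits.maxCount) errors.push("too many files");
  for (const f of files) {
    if (f.size > limits.maxSizeBytes) errors.push(`${f.name}: too large`);
    // Treat "image/*" as a prefix match on "image/".
    const accepted = limits.accept.some((t) =>
      f.type.startsWith(t.replace("/*", "/")),
    );
    if (!accepted) errors.push(`${f.name}: unsupported type`);
  }
  return errors;
}
```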
## References
You do not need the entire component family every time. These are the parts most apps will end up using first.
| Need | Component |
| ------------------------- | ---------------------------------------------------- |
| Main text field | `PromptInputTextarea` |
| Submit or stop button | `PromptInputSubmit` |
| Footer layout | `PromptInputFooter` |
| Inline tools row | `PromptInputTools` |
| Action menu trigger | `PromptInputActionMenuTrigger` |
| Model or option selector | `PromptInputSelect*` |
| Provider-controlled state | `PromptInputProvider` + `usePromptInputController()` |
On web, the command and hover-card primitives are also worth reaching for when the composer needs richer inline UX, such as search, slash commands, or contextual help.
## In the starter
The prompt input is one of the most reused UI systems in the AI starter. It shows up in the [Chat](/ai/docs/chat), [Image](/ai/docs/image), [RAG](/ai/docs/rag), and [TTS](/ai/docs/tts) apps, with each surface composing a slightly different set of tools around the same foundation.
That reuse is the main reason the component family matters. Instead of rebuilding the composer for every app, the starter uses the same primitives and swaps in app-specific controls, selectors, and attachment behavior.
## Related components
The prompt input usually sits next to other AI UI primitives rather than standing alone. These pages are the closest companions in the docs set.
---
url: /ai/docs/components/reasoning
title: Reasoning
description: A compact collapsible UI for showing model reasoning progress and completed reasoning traces across web and mobile AI interfaces.
---
`<Reasoning />` gives hidden model thinking a readable place to live. It helps you surface reasoning progress, completion state, and the final reasoning text without forcing that detail into the main assistant message.
## Web
The web version works especially well in chat and playground interfaces where users may want to peek into the model's thought process, but only when they choose to. It uses a compact trigger row plus an expandable content area with richer formatting support.

## Mobile
The mobile version keeps the same interaction pattern, but simplifies the rendering for a native layout. It is a strong fit when you want to preserve the idea of “peek into reasoning” without overloading the small screen.

## What it adds
Reasoning UI is most valuable when users want transparency without clutter. This component gives you that middle ground by separating the “thinking” status from the actual answer.
Users can expand the reasoning only when they care, instead of reading it
inline with the assistant response.
The trigger changes its message and icon depending on whether reasoning is
still streaming or already finished.
It fits well beside message, tool, and context components in a modern chat
interface.
## Building blocks
The API is intentionally small. You usually only need three pieces:
* `<Reasoning />` for the shared collapsible container and state logic
* `<ReasoningTrigger />` for the status row
* `<ReasoningContent />` for the reasoning body
The component also manages a few useful behaviors for you, like auto-opening when reasoning starts streaming and auto-closing shortly after it finishes, unless you explicitly control the open state yourself.
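The uncontrolled open/close behavior can be sketched as a tiny state transition, assuming the component watches streaming transitions. (The real component delays the close slightly rather than closing instantly.)

```typescript
// Sketch of the uncontrolled open/close behavior described above.
function nextOpenState(
  open: boolean,
  wasStreaming: boolean,
  isStreaming: boolean,
  controlled: boolean,
): boolean {
  if (controlled) return open; // caller owns the state via the `open` prop
  if (!wasStreaming && isStreaming) return true; // auto-open when reasoning starts
  if (wasStreaming && !isStreaming) return false; // auto-close once it finishes
  return open; // otherwise keep the current state
}
```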
## Basic composition
The normal pattern is a trigger followed by the expandable reasoning content. Both platforms use the same idea, so you can carry the same design language across web and mobile.
```tsx
import {
Reasoning,
ReasoningContent,
ReasoningTrigger,
} from "@workspace/ui-web/ai-elements/reasoning";
export function AssistantReasoning() {
  return (
    <Reasoning isStreaming={false}>
      <ReasoningTrigger />
      <ReasoningContent>
        {`I compared the user's request against the available options, ruled out
the ones that violated the constraints, and selected the safest match.`}
      </ReasoningContent>
    </Reasoning>
  );
}
```
```tsx
import {
Reasoning,
ReasoningContent,
ReasoningTrigger,
} from "@workspace/ui-mobile/ai-elements/reasoning";
export function AssistantReasoning() {
  return (
    <Reasoning isStreaming={false}>
      <ReasoningTrigger />
      <ReasoningContent>
        {`I compared the user's request against the available options, ruled out
the ones that violated the constraints, and selected the safest match.`}
      </ReasoningContent>
    </Reasoning>
  );
}
```
## State behavior
The component changes its trigger behavior based on whether reasoning is actively streaming or already complete. That gives the UI a sense of motion without requiring extra wiring in the caller.
| Situation | Trigger behavior |
| -------------------------------- | -------------------------------------------------------------------- |
| Streaming | Shows a spinner and shimmer-style “in progress” message |
| Finished, no duration yet | Shows a completed message |
| Finished, duration available | Shows a completed message with elapsed time |
| Explicitly controlled open state | Respects the caller's open state instead of relying on auto behavior |
This is one of the reasons the component feels nice in practice: it handles the common “thinking -> done” rhythm for you.
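A `getThinkingMessage`-style default could look like the sketch below. The branching follows the table above, but the exact strings are assumptions; the real component's wording differs.

```typescript
// Illustrative default trigger message; strings are assumed, branching
// follows the state table above.
function thinkingMessage(isStreaming: boolean, duration?: number): string {
  if (isStreaming) return "Thinking..."; // streaming: shimmer-style message
  if (duration === undefined) return "Finished thinking"; // done, no duration
  return `Thought for ${duration} seconds`; // done, with elapsed time
}
```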
## Platform differences
The core interaction is shared, but the content rendering differs between web and mobile.
| Area | Web | Mobile |
| ----------------- | ------------------------------------------------------------------------------------------ | --------------------------------------------------------- |
| Content rendering | [Streamdown](https://streamdown.ai/) with support for code, math, mermaid, and CJK plugins | plain native text rendering |
| Trigger layout | desktop-friendly inline row | touch-friendly inline row |
| Text treatment | shimmer for active state, richer formatted content when expanded | shimmer for active state, simpler text body when expanded |
The web version is a better fit if you want richly formatted reasoning content. The mobile version is better when you want the same product concept in a lighter-weight native surface.
## Useful control points
Most teams will not need to customize much, but there are a few props worth knowing about:
| Prop | Type | Notes |
| -------------------- | -------------------------------------- | ------------------------------------------------------- |
| `isStreaming` | `boolean` | Drives the active versus completed state. |
| `open` | `boolean` | Lets you fully control the open state. |
| `defaultOpen` | `boolean` | Sets the initial state for uncontrolled usage. |
| `onOpenChange` | `(open: boolean) => void` | Lets you react to user toggles. |
| `duration` | `number` | Overrides or supplies the displayed reasoning duration. |
| `getThinkingMessage` | `(isStreaming, duration) => ReactNode` | Customizes the trigger message in `ReasoningTrigger`. |
## Related components
`<Reasoning />` is usually part of a broader assistant response surface. These pages are the closest companions in the component set.
---
url: /ai/docs/components/shimmer
title: Shimmer
description: A lightweight animated text treatment for in-progress AI states on both web and mobile, with platform-specific implementations tuned to each UI stack.
---
`<ShimmerText />` is a small component with a big job: it makes waiting states feel alive without adding heavy UI. In TurboStarter AI, it is used when something is actively happening, like reasoning, tool execution, or image analysis, and you want a softer signal than a spinner alone.

## Why it is useful
Shimmer text works best when the UI should feel active but calm. It gives users feedback that something is still in progress without making the interface feel noisy or overloaded.
It communicates progress without taking over the layout or competing with
the actual content.
It is especially effective for labels like “thinking”, “analyzing image”, or
a tool name that is still running.
Both implementations aim for the same product feel, even though the
underlying rendering strategy differs between web and mobile.
## Usage
The simplest usage is to wrap a short status string. This works well for inline loading states, trigger labels, and compact assistant UI elements.
```tsx
import { ShimmerText } from "@workspace/ui-web/ai-elements/shimmer";
export function ThinkingLabel() {
return <ShimmerText>Thinking...</ShimmerText>;
}
```
```tsx
import { ShimmerText } from "@workspace/ui-mobile/ai-elements/shimmer";
export function ThinkingLabel() {
return <ShimmerText>Thinking...</ShimmerText>;
}
```
## Platform differences
The shared API is intentionally small, but the two implementations expose slightly different customization points because they are solving the effect in different environments.
### Web props
The web version expects a string child and supports a few simple tuning options. It is ideal when you want a polished shimmer effect with minimal setup.
| Prop | Type | Notes |
| ----------- | ------------- | ---------------------------------------------------------------- |
| `children` | `string` | The text content to render. |
| `as` | `ElementType` | Changes the rendered element, such as `p`, `span`, or `div`. |
| `className` | `string` | Adds typography or spacing classes. |
| `duration` | `number` | Controls how long one shimmer cycle takes. |
| `spread` | `number` | Controls the width of the highlight relative to the text length. |
### Mobile props
The mobile version is also lightweight, but it includes a few extra controls because the animation is built from a masked gradient rather than CSS background clipping.
| Prop | Type | Notes |
| ---------------- | ---------------- | ----------------------------------------------------------- |
| `children` | `string` | The text content to render. |
| `className` | `string` | Applies text styling and layout classes. |
| `duration` | `number` | Controls animation speed. |
| `direction` | `"ltr" \| "rtl"` | Changes the shimmer direction. |
| `angle` | `number` | Rotates the gradient used for the highlight. |
| `highlightWidth` | `number` | Adjusts how wide the bright section of the shimmer appears. |
## How it works
Both versions aim for the same product outcome, but they get there differently because web and mobile do not offer the same rendering primitives.
* On web, the text becomes transparent and is filled by an animated background gradient.
* On mobile, the text is used as a mask and an animated gradient moves behind it.
* In both cases, the component stays focused on presentation only. It does not manage loading state itself; it simply makes a loading label feel better.
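The web approach boils down to transparent text filled by an animated gradient via background clipping. A rough sketch of the style involved, with illustrative values:

```typescript
// Rough sketch of the web technique: the text color is transparent and an
// animated background gradient shows through via background-clip: text.
// Gradient stops, timing, and keyframe name are illustrative.
const shimmerStyle: Record<string, string> = {
  color: "transparent",
  backgroundImage: "linear-gradient(90deg, #999 40%, #fff 50%, #999 60%)",
  backgroundSize: "200% 100%",
  backgroundClip: "text",
  WebkitBackgroundClip: "text",
  // A keyframe animation slides background-position to move the highlight.
  animation: "shimmer 2s linear infinite",
};
```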
## Where it appears
`<ShimmerText />` is a foundational helper in the AI UI kit rather than a one-off effect. It shows up in a few different places where the interface benefits from an in-progress label with a little motion.
---
url: /ai/docs/components/tool
title: Tool
description: A compact compound component for presenting tool calls, execution status, inputs, and outputs in AI conversations across web and mobile.
---
`<Tool />` turns a tool call into something users can actually read. Instead of exposing raw tool events or JSON blobs in the message flow, it gives you a structured surface for showing what ran, what state it is in, and what came back.
## Web
The web version is built around a collapsible row that feels native inside a desktop conversation. It is especially good for agentic chat UIs where tool activity should be visible but not overwhelming.

## Mobile
The mobile version keeps the same mental model, but adapts the spacing and content rendering to a native layout. It still behaves like a compact activity row first, with details available when expanded.

## What it communicates
This component is useful because tool calls are rarely just “done” or “not done.” They move through approval, execution, success, denial, or error states, and the UI needs to make that progression feel understandable.
Users can tell whether a tool is pending, running, completed, denied, or
failed without reading raw event payloads.
Inputs and outputs can stay tucked away until the user wants to inspect
them.
It fits naturally into assistant interfaces where tool calls are part of the
conversation, not a separate debug panel.
## Building blocks
`<Tool />` is a compact compound component. The outer wrapper manages the collapsible state, and the inner pieces let you decide how much of the tool call to show.
The main exports are:
* `<Tool />`
* `<ToolHeader />`
* `<ToolContent />`
* `<ToolInput />`
* `<ToolOutput />`
## Basic composition
The standard pattern is a collapsed header row for the tool call plus expandable details for the input and output. The API shape stays close across platforms, which makes it easy to keep the same mental model in both apps.
```tsx
import {
Tool,
ToolContent,
ToolHeader,
ToolInput,
ToolOutput,
} from "@workspace/ui-web/ai-elements/tool";
export function WeatherTool() {
  // Sketch: the "type"/"state" values follow the lifecycle table below;
  // the input/output payloads here are illustrative.
  return (
    <Tool>
      <ToolHeader type="tool-getWeather" state="output-available" />
      <ToolContent>
        <ToolInput input={{ city: "London" }} />
        <ToolOutput output="Sunny, 21°C" />
      </ToolContent>
    </Tool>
  );
}
```
```tsx
import {
Tool,
ToolContent,
ToolHeader,
ToolInput,
ToolOutput,
} from "@workspace/ui-mobile/ai-elements/tool";
export function WeatherTool() {
  // Sketch: the "type"/"state" values follow the lifecycle table below;
  // the input/output payloads here are illustrative.
  return (
    <Tool>
      <ToolHeader type="tool-getWeather" state="output-available" />
      <ToolContent>
        <ToolInput input={{ city: "London" }} />
        <ToolOutput output="Sunny, 21°C" />
      </ToolContent>
    </Tool>
  );
}
```
## Supported states
The component family is designed around tool lifecycle states rather than around a single “loading” flag. That is why it reads much better in agent-driven UIs than a plain spinner row.
| State | Meaning |
| -------------------- | --------------------------------------------------------------------------- |
| `approval-requested` | The tool is waiting for explicit approval before it can run. |
| `approval-responded` | An approval decision was made and the tool can proceed or stop accordingly. |
| `input-available` | The tool input is ready and execution is underway. |
| `input-streaming` | Input or tool activity is still streaming in. |
| `output-available` | The tool completed successfully and produced a result. |
| `output-denied` | The tool run was denied or blocked. |
| `output-error` | The tool failed and returned an error state. |
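One practical consequence of this lifecycle model is that the header has to know which states are terminal. A sketch of that split, with the grouping assumed from the table above:

```typescript
// Sketch: final states get a static label and badge; everything else is
// treated as in-progress. Grouping assumed from the lifecycle table above.
type ToolState =
  | "approval-requested"
  | "approval-responded"
  | "input-streaming"
  | "input-available"
  | "output-available"
  | "output-denied"
  | "output-error";

function isFinalState(state: ToolState): boolean {
  return (
    state === "output-available" ||
    state === "output-denied" ||
    state === "output-error"
  );
}
```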
## How the pieces behave
A lot of the value in this component comes from the defaults baked into each part. You get a fairly rich tool row without having to author every little detail yourself.
* `<ToolHeader />` derives a readable tool name from the `type` when you do not pass a custom `title`.
* Non-final states use `<ShimmerText />` to make the label feel active.
* Final states switch to a static label and a status badge.
* `<ToolInput />` renders structured input as JSON.
* `<ToolOutput />` can render JSON, strings, React elements, or an error panel.
That balance is what makes the component useful in both product UI and internal agent tooling.
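For example, deriving a readable title from the part `type` might look like this hypothetical helper; the component's exact formatting may differ.

```typescript
// Hypothetical sketch: turn a part type like "tool-getWeather" into "Get Weather".
function toolTitle(type: string): string {
  return type
    .replace(/^tool-/, "") // drop the "tool-" prefix
    .replace(/([a-z])([A-Z])/g, "$1 $2") // split camelCase into words
    .replace(/^./, (c) => c.toUpperCase()); // capitalize the first letter
}
```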
## Platform notes
The web and mobile versions stay aligned conceptually, but the rendering details are slightly different.
| Area | Web | Mobile |
| ---------------------- | ----------------------------- | -------------------------------- |
| Base shell | DOM collapsible row | native collapsible row |
| Input/output rendering | code-block style surface | native scrollable JSON block |
| Status text | web text + shimmer primitives | native text + shimmer primitives |
| Layout feel | tighter desktop density | more touch-friendly spacing |
## Related components
`<Tool />` works best alongside the other conversation-level primitives that explain what the assistant is doing. These are the nearest companion pages in the current docs set.
---
url: /ai/docs/components/voice-control-bar
title: Voice control bar
description: A voice-session control surface for web and mobile, with microphone, camera, screen share, chat, and disconnect actions designed for real-time AI interfaces.
---
`<ControlBar />` is the main interaction surface for the voice session once a user is connected. It brings the core voice actions into one place, so the session feels like a proper call experience rather than a scattered set of controls.
## Web
The web control bar is the richer of the two implementations. It supports the main media toggles, disconnect flow, and an expandable inline chat composer for sending text into the live session.

## Mobile
The mobile control bar keeps the same control model, but presents it in a tighter native layout with larger touch targets and no inline text composer inside the bar itself.

## What it does well
This component is useful because a voice product needs more than a mute button. Once the user is in a live session, the control surface has to coordinate media, chat, and exit actions in a way that stays readable under pressure.
Microphone, camera, screen sharing, chat, and disconnect actions live in one
predictable place.
The control bar gives the session a call-like interaction pattern instead of
a generic toolbar.
You can show only a few controls or enable the full bar depending on the
product surface.
## Core controls
Both implementations revolve around the same control categories, even though the internal composition differs by platform.
| Control | Purpose |
| ------------ | ------------------------------------------- |
| Microphone | Mute or unmute the user's audio track |
| Camera | Enable or disable the local camera track |
| Screen share | Start or stop screen sharing when supported |
| Chat | Toggle an in-session chat surface |
| Leave | Disconnect from the active session |
On web, the chat control can expand into a compact inline input inside the control bar. On mobile, chat is still represented as a toggle, but the actual message entry happens in the surrounding session UI rather than inside the bar.
## Basic usage
You usually render the control bar as part of a connected voice session, passing in which controls should be visible and wiring it to the session state around it.
```tsx
import { ControlBar } from "@workspace/ui-web/voice/control-bar";
export function VoiceSessionControls() {
  // Sketch: props follow the table below; wire onDisconnect to your session state.
  return (
    <ControlBar
      controls={{
        leave: true,
        microphone: true,
        camera: true,
        screenShare: true,
        chat: true,
      }}
      onDisconnect={() => {
        // Tear down the session here.
      }}
    />
  );
}
```
```tsx
import { ControlBar } from "@workspace/ui-mobile/voice/control-bar";
export function VoiceSessionControls() {
  // Sketch: mobile typically enables a smaller control set (flags illustrative).
  return (
    <ControlBar
      controls={{ leave: true, microphone: true, chat: true }}
      onDisconnect={() => {
        // Tear down the session here.
      }}
    />
  );
}
```
## Platform differences
The interaction model is shared, but the two implementations are not identical. That is intentional, because a voice call bar should respect the platform it lives on.
| Area | Web | Mobile |
| ------------- | ------------------------------------------------- | ---------------------------------------------------------- |
| Chat handling | Optional inline text input inside the control bar | toggle only, with chat handled elsewhere in the session UI |
| Device logic | More browser-specific media and device handling | simpler native voice-session control surface |
| Variants | `default`, `outline`, and `livekit` | `default` and `outline` |
| Layout feel | wider desktop toolbar | compact touch-friendly row |
## Useful props
The control bar is mostly configured through visibility flags and a few session callbacks.
| Prop | Type | Notes |
| -------------------- | --------- | ------------------------------------------------------------------------------------------ |
| `controls` | object | Chooses which controls are visible: `leave`, `microphone`, `camera`, `screenShare`, `chat` |
| `variant` | string | Changes the visual treatment of the bar |
| `isChatOpen` | `boolean` | Controls whether the chat state is open |
| `onIsChatOpenChange` | function | Called when the chat toggle changes |
| `onDisconnect` | function | Called when the user disconnects |
| `onDeviceError` | function | Useful for reacting to media-device issues |
The web version also accepts more session-oriented props like `isConnected` and media-control helpers because it owns more of the interactive logic directly.
## How it fits into the voice UI
This component works best when it is treated as the bottom control rail of a larger voice session. It is not the whole experience on its own; it is the part that keeps the user in control while the transcript, visualizer, and media tiles do the rest.
That means it pairs especially well with:
* a voice visualizer above it
* a transcript or chat panel nearby
* session state from LiveKit or a similar real-time layer
## Related components
The voice control bar is part of a small family of voice UI primitives. These are the most relevant companion pages in this docs set.
---
url: /ai/docs/components/voice-visualizer
title: Voice visualizer
description: A complete guide to the voice visualizers in TurboStarter AI, covering the six web visualizer styles, the mobile bar visualizer, and how each one is configured.
---
TurboStarter AI ships with a small family of voice visualizers rather than one fixed component. On web, the voice experience can render six different styles from `packages/ui/web`, while mobile uses a focused bar visualizer from `packages/ui/mobile`.
## Web
The web side is the more flexible implementation. It includes six distinct visualizers, and the app-level voice screen selects between them based on the current visualizer settings.

The web package includes six visualizer styles. They all react to voice-session activity, but each one gives the interface a different character.
| Visualizer | Component | Best fit | Notes |
| ---------- | ----------------------- | --------------------------- | ------------------------------------------------------------------------- |
| Orb | `Orb` | Hero-style voice sessions | A shader-driven focal point with blended colors and volume-driven motion. |
| Bar | `AudioVisualizerBar` | Clear, familiar voice UI | The most direct option when you want a classic speech-bar treatment. |
| Grid | `AudioVisualizerGrid` | Structured layouts | Animates a matrix of cells and works well in more system-like interfaces. |
| Radial | `AudioVisualizerRadial` | Circular layouts | Wraps bars around a center point for a more ambient feel. |
| Wave | `AudioVisualizerWave` | Minimal wide layouts | Uses a shader-based waveform that feels clean and elegant. |
| Aura | `AudioVisualizerAura` | Premium, immersive surfaces | Renders a soft glowing field that feels more atmospheric than literal. |
In the app, the selected visualizer shape is read from the voice settings store and mapped to the matching primitive from `@workspace/ui-web/voice/*`.
## Mobile
The mobile implementation is intentionally simpler. Instead of exposing a full visualizer family, it uses a single bar visualizer that stays clear and readable on a smaller screen.

The mobile package currently exposes one voice visualizer primitive, and the app-level mobile voice screen follows that same direction.
| Visualizer | Component | Best fit | Notes |
| ---------- | -------------------- | --------------------------------- | ------------------------------------------------------------------------------------------ |
| Bar | `AudioVisualizerBar` | Native full-screen voice sessions | A five-bar layout with animated idle and speaking states, optimized for compact mobile UI. |
That keeps the mobile experience consistent and easy to place next to transcript, controls, and the rest of the session UI.
## What it brings to the session
A good voice interface needs more than controls and transcript text. The visualizer is what makes the session feel active before the next response is read or heard.
Listening, thinking, and speaking each feel different, so the session never
looks idle or frozen.
It creates a strong visual focal point without introducing more buttons,
labels, or status chips.
The idea stays consistent between web and mobile, even though each platform
renders it differently.
## Usage
If you are building a custom voice surface, it is often better to use the primitives directly instead of relying on the app-level wrapper. Each example below stays minimal, but it uses the available props so you can see how the visualizer is meant to be configured.
The orb is the most configurable visualizer in the set. It works best when the visualizer is the centerpiece of the screen rather than a supporting detail.
```tsx
import { Orb } from "@workspace/ui-web/voice/orb";
import { useRef } from "react";
export function OrbVisualizer() {
const colorsRef = useRef<[string, string]>(["#93c5fd", "#1d4ed8"]);
const inputVolumeRef = useRef(0.2);
const outputVolumeRef = useRef(0.45);
  // Sketch: props follow the table below. In real usage you would pick one
  // volume source (refs or callbacks) rather than passing both.
  return (
    <Orb
      colorsRef={colorsRef}
      inputVolumeRef={inputVolumeRef}
      outputVolumeRef={outputVolumeRef}
      getInputVolume={() => 0.2}
      getOutputVolume={() => 0.45}
    />
  );
}
```
| Prop | Purpose |
| ------------------------------------ | --------------------------------------------------------- |
| `colors` | Sets the base gradient pair. |
| `colorsRef` | Updates colors dynamically without remounting. |
| `resizeDebounce` | Controls how quickly the canvas reacts to resize changes. |
| `seed` | Keeps the visual pattern deterministic. |
| `agentState` | Drives the current animation state. |
| `volumeMode` | Chooses automatic or manual volume control. |
| `manualInput` / `manualOutput` | Pass explicit input and output levels. |
| `inputVolumeRef` / `outputVolumeRef` | Provide refs for external live volume data. |
| `getInputVolume` / `getOutputVolume` | Pull volume from callbacks instead of refs. |
The bar visualizer is the most straightforward option. It is usually the easiest one to drop into a product UI when you want something readable and familiar.
```tsx
import { AudioVisualizerBar } from "@workspace/ui-web/voice/audio-visualizer-bar";
import { useVoiceAssistant } from "@livekit/components-react";
export function BarVisualizer() {
const { audioTrack } = useVoiceAssistant();
  // "speaking" is an illustrative state value; see the props table below.
  return <AudioVisualizerBar state="speaking" audioTrack={audioTrack} />;
}
```
| Prop | Purpose |
| ------------ | -------------------------------------- |
| `size` | Adjusts height and spacing. |
| `state` | Changes the current animation pattern. |
| `color` | Sets the bar color. |
| `barCount` | Changes the number of bars. |
| `audioTrack` | Connects speaking mode to live audio. |
The grid visualizer is better when you want a more structured or technical feel. It is also the easiest option to restyle because you can replace the default cell markup.
```tsx
import { AudioVisualizerGrid } from "@workspace/ui-web/voice/audio-visualizer-grid";
import { useVoiceAssistant } from "@livekit/components-react";
export function GridVisualizer() {
  const { audioTrack } = useVoiceAssistant();

  return (
    <AudioVisualizerGrid
      state="speaking" // example values
      rowCount={5}
      columnCount={5}
      radius={2}
      audioTrack={audioTrack}
    />
  );
}
```
| Prop | Purpose |
| -------------------------- | --------------------------------------- |
| `size` | Changes the gap scale. |
| `state` | Controls the current animation state. |
| `color` | Sets the active cell color. |
| `audioTrack` | Connects the grid to live audio data. |
| `radius` | Controls how far the highlight spreads. |
| `interval` | Adjusts non-speaking animation timing. |
| `rowCount` / `columnCount` | Define the grid dimensions. |
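To make the `radius` prop concrete, here is a sketch of one plausible falloff rule (an assumption for illustration, not the kit's actual math): a cell's intensity fades linearly with its distance from the highlight center and reaches zero at `radius` cells away.

```ts
// Hypothetical falloff (assumed, not the kit's implementation): intensity
// is 1 at the highlight center and decreases linearly to 0 at `radius`.
function cellIntensity(distance: number, radius: number): number {
  if (radius <= 0) return distance === 0 ? 1 : 0;
  return Math.max(0, 1 - distance / radius);
}
```

A larger `radius` therefore lights up more neighboring cells around the active one, which is why it reads as "how far the highlight spreads".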
The radial visualizer is useful when the voice UI is built around a center point. It feels more ambient than bars while still staying readable.
```tsx
import { AudioVisualizerRadial } from "@workspace/ui-web/voice/audio-visualizer-radial";
import { useVoiceAssistant } from "@livekit/components-react";
export function RadialVisualizer() {
  const { audioTrack } = useVoiceAssistant();

  return (
    <AudioVisualizerRadial
      state="speaking" // example values
      barCount={24}
      audioTrack={audioTrack}
    />
  );
}
```
| Prop | Purpose |
| ------------ | ------------------------------------------ |
| `size` | Changes the overall scale. |
| `state` | Drives the current animation behavior. |
| `color` | Sets the bar color. |
| `radius` | Changes the distance from the center. |
| `barCount` | Defines how many radial bars are rendered. |
| `audioTrack` | Connects the visualizer to live audio. |
The wave visualizer is a strong default when you want something polished but understated. It works especially well in wider layouts and hero-like voice stages.
```tsx
import { AudioVisualizerWave } from "@workspace/ui-web/voice/audio-visualizer-wave";
import { useVoiceAssistant } from "@livekit/components-react";
export function WaveVisualizer() {
  const { audioTrack } = useVoiceAssistant();

  return (
    <AudioVisualizerWave
      state="speaking" // example values
      lineWidth={2}
      audioTrack={audioTrack}
    />
  );
}
```
| Prop | Purpose |
| ------------ | ------------------------------------- |
| `size` | Changes the default height scale. |
| `state` | Drives the motion profile. |
| `color` | Sets the wave color. |
| `colorShift` | Adds hue variation toward the edges. |
| `lineWidth` | Changes the visible stroke thickness. |
| `blur` | Softens the wave edge. |
| `audioTrack` | Connects the wave to live audio. |
The aura visualizer is the softest option in the set. It is a good fit when you want the session to feel atmospheric rather than explicitly meter-driven.
```tsx
import { AudioVisualizerAura } from "@workspace/ui-web/voice/audio-visualizer-aura";
import { useVoiceAssistant } from "@livekit/components-react";
export function AuraVisualizer() {
  const { audioTrack } = useVoiceAssistant();

  return (
    <AudioVisualizerAura
      state="speaking" // example values
      themeMode="dark"
      audioTrack={audioTrack}
    />
  );
}
```
| Prop | Purpose |
| ------------ | ----------------------------------------------- |
| `size` | Changes the visual scale. |
| `state` | Drives the animation state. |
| `color` | Sets the base aura color. |
| `colorShift` | Adds variation across the effect. |
| `themeMode` | Tunes the effect for light or dark backgrounds. |
| `audioTrack` | Connects speaking mode to live audio. |
The mobile bar visualizer keeps the API compact, but the `options` object still gives you room to tune the motion and visual balance.
```tsx
import { AudioVisualizerBar } from "@workspace/ui-mobile/voice/audio-visualizer-bar";
import { useVoiceAssistant } from "@livekit/components-react";
export function MobileVisualizer() {
  const { audioTrack } = useVoiceAssistant();

  return (
    <AudioVisualizerBar
      state="speaking" // example values
      barCount={5}
      trackRef={audioTrack}
      options={{ /* bar size, spacing, color, opacity */ }}
    />
  );
}
```
| Prop | Purpose |
| ---------- | -------------------------------------------------------------------------- |
| `state` | Drives the current animation state. |
| `barCount` | Sets how many bars are rendered. |
| `trackRef` | Connects the visualizer to live audio. |
| `options` | Controls bar size, spacing, color, opacity, and idle or speaking behavior. |
## Under the hood
Although the public API is intentionally small, the visualizer system is doing real session work for you. It ties motion to actual voice state instead of treating animation as decoration.
* On web, the wrapper chooses a visualizer style from the active voice settings and adapts it to the current theme, session state, and live input and output volume.
* On mobile, the implementation stays closer to a single native pattern and focuses on keeping the visualization readable and stable in a compact layout.
* Both versions react to the assistant lifecycle, so connecting, listening, thinking, and speaking can each look distinct.
## Related components
The visualizer usually lives at the center of a broader voice session. These pages are the most useful companions when you are building out the rest of that surface.
---
url: /ai/docs
title: Get started
description: An overview of the TurboStarter AI starter kit.
---
TurboStarter AI is a **starter kit with 10+ ready-to-use templates** across web and mobile that helps you quickly build powerful AI applications without starting from scratch.
Whether you're launching a small side project or a full-scale product, it gives you the structure you need to start building immediately.
## Features
TurboStarter AI comes packed with features designed to accelerate your development process:
* **Core framework**
* **AI**
* **Data storage**
* **Authentication**
* **User interface**
## Templates
TurboStarter AI includes several production-ready template applications that showcase diverse AI capabilities. Use these examples to understand implementation patterns and jumpstart your own projects.
## Scope of this documentation
This documentation focuses specifically on the AI features, architecture, and demo applications included in the **TurboStarter AI** kit. While we provide comprehensive coverage of AI integrations, for information about core framework elements (authentication, billing, etc.), please refer to the [Core documentation](/docs/web).
Our goal is to guide you through setting up, customizing, and deploying the AI starter kit efficiently. Where relevant, we include links to official documentation for the integrated AI providers and libraries.
## Setup
Getting started with TurboStarter AI requires configuring the core applications first. For detailed setup instructions, refer to the [core documentation](/docs/web).
After establishing the core applications, you can configure specific AI providers and demo applications using the dedicated sections in this documentation (see sidebar). For a quick start, you might also want to check our [TurboStarter CLI guide](/blog/the-only-turbo-cli-you-need-to-start-your-next-project-in-seconds) to bootstrap your project in seconds.
When working with the AI starter kit, remember to use the `ai` repository instead of `core` for Git commands. For example, use `git clone turbostarter/ai` rather than `git clone turbostarter/core`.
## Deployment
Deploying TurboStarter AI follows the same process as deploying the core web application. Ensure you configure all necessary environment variables, including those for your selected AI providers (like [OpenAI](/ai/docs/providers/openai), [Anthropic](/ai/docs/providers/anthropic), etc.), in your deployment environment.
For comprehensive deployment instructions across various platforms, consult the core deployment guides. For mobile app store deployment, refer to the mobile publishing guides.
Each AI demo app may have specific deployment considerations, so check their dedicated documentation sections for additional guidance.
## AI-assisted development
TurboStarter comes with built-in rules, skills, subagents, and commands designed specifically to make AI-enhanced development easier. These project-specific AI helpers guide large language models (LLMs) to understand your codebase, enforce best practices, and maintain consistency throughout your project.
Major AI coding assistants - such as [Cursor](https://cursor.com), [Claude](https://claude.ai), [Codex](https://openai.com/codex), [Antigravity](https://antigravity.dev), and others - work seamlessly with this setup. Simply open the TurboStarter AI project in your preferred AI tool to get intelligent code assistance right away.
Additionally, you'll find a [/llms.txt](/llms.txt) file containing up-to-date, LLM-optimized documentation, which allows you to query the latest details about TurboStarter directly from your AI assistant.
If you'd like a step-by-step walkthrough, check out our [AI-assisted development guide](/docs/web/installation/ai-development).
## Let's build amazing AI SaaS!
We're excited to help you create innovative AI-powered applications quickly and efficiently. If you have questions, encounter issues, or want to showcase your creations, connect with our community:
* [Follow updates on X](https://x.com/turbostarter_)
* [Join our Discord](https://discord.gg/KjpK2uk3JP)
* [Report issues on GitHub](https://github.com/turbostarter)
* [Contact us via email](mailto:hello@turbostarter.dev)
Happy building!
---
url: /ai/docs/stack
title: Tech stack
description: Learn which tools and libraries power TurboStarter AI.
---
## Turborepo
[Turborepo](https://turbo.build/) is a high-performance monorepo tool that optimizes dependency management and script execution across your project. We chose this monorepo setup to simplify feature management and enable seamless code sharing between packages.
## Next.js
[Next.js](https://nextjs.org) is a powerful [React](https://react.dev) framework that delivers server-side rendering, static site generation, and more. We selected Next.js for its exceptional flexibility and developer experience. It also serves as the foundation for our serverless API.
## React Native + Expo
[React Native](https://reactnative.dev/) is a leading open-source framework created by Facebook that enables building native mobile applications using [React](https://react.dev). It provides access to native platform capabilities while maintaining the development efficiency of React.
[Expo](https://expo.dev/) extends React Native with a comprehensive toolkit that streamlines development, building, and deployment of iOS, Android, and web apps from a single codebase.
## AI
As a foundation, we use [AI SDK](https://ai-sdk.dev/) which provides a robust toolkit for building AI-powered applications. It offers essential utilities and components for integrating advanced AI features, including streaming responses, interactive chat interfaces, and more.
For building complex AI systems, including prompt management, memory systems, and agent architectures, the starter leverages [LangChain](https://js.langchain.com/), a sophisticated framework designed for language model-powered applications.
For collaborative AI and communication features, we use [LiveKit](https://livekit.io/) which enables real-time audio, video, and data streaming capabilities, specifically designed for autonomous voice agents.
## Hono
[Hono](https://hono.dev) is an ultrafast, lightweight web framework optimized for edge computing. It includes a type-safe RPC client for secure function calls from the frontend. We leverage Hono to create efficient serverless API endpoints.
## Tailwind CSS
[Tailwind CSS](https://tailwindcss.com) is a utility-first CSS framework that accelerates UI development without writing custom CSS. We complement it with [Base UI](https://base-ui.com), a collection of accessible headless components, and [shadcn/ui](https://ui.shadcn.com), which lets you generate beautifully designed components with a single command.
## Drizzle
[Drizzle](https://orm.drizzle.team/) is a type-safe, high-performance [ORM](https://orm.drizzle.team/docs/overview) (Object-Relational Mapping) for modern database management. It generates TypeScript types from your schema and enables fully type-safe queries.
We use [PostgreSQL](https://www.postgresql.org) as our default database, but Drizzle's flexibility allows you to easily switch to MySQL, SQLite, or any [other supported database](https://orm.drizzle.team/docs/connect-overview) by updating a few configuration lines.