Tool calling

Learn how AI tool calling works, when to let models use external tools, and how to design reliable tool-based workflows in modern AI apps.

Tool calling lets a model do more than generate text. Instead of guessing, it can decide to call a weather API, search the web, look up data, run a calculation, or trigger a workflow, then use the result to continue the response.

This is one of the key shifts that turns a chatbot into an assistant. The model is no longer limited to what it remembers. It can interact with systems around it.

Overview

At a high level, tool calling means giving the model a set of well-defined capabilities and letting it choose when to use them. Each tool has a name, a description, an input schema, and usually an execution function that runs in your app or backend.

That creates a loop like this:

1. The user asks for something that may require outside information or action.
2. The model decides whether a tool is needed.
3. The selected tool runs with validated input.
4. The tool result is returned to the model.
5. The model uses that result to continue or complete the answer.
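
This loop can be sketched in a few lines. The registry, `decide()`, and `answer()` names below are illustrative stand-ins, not part of any SDK; the stubbed `decide()` simulates the model choosing a tool:

```typescript
// Minimal sketch of the tool-calling loop; names are illustrative.
type ToolFn = (input: { location: string }) => Promise<unknown>;

const tools: Record<string, ToolFn> = {
  weather: async ({ location }) => ({ location, temperature: "18C" }),
};

// Stand-in for the model deciding whether a tool is needed.
function decide(prompt: string) {
  return prompt.toLowerCase().includes("weather")
    ? { tool: "weather", input: { location: "Berlin" } }
    : null;
}

async function answer(prompt: string): Promise<string> {
  const call = decide(prompt);
  if (call === null) return `Answered from context: ${prompt}`;
  // The selected tool runs, and its result flows back into the reply.
  const result = await tools[call.tool](call.input);
  return `Answered using ${call.tool}: ${JSON.stringify(result)}`;
}
```

In a real system, the "decide" step is the model's own tool choice and the loop may repeat across several steps before the final answer.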

When tool calling is useful

Tool calling is most useful when the model needs access to fresh data, private data, or real-world actions. That is why it shows up so often in assistants, agents, dashboards, support tools, and internal automation.

Good fit

Web search, database lookup, CRM queries, order status checks, calculations, scheduling, and content retrieval are all strong tool-calling use cases.

Where it appears in these docs

See related implementations in Chat, Knowledge RAG, and MCP.

Not always needed

If the answer can be produced from the prompt and context alone, plain text generation is usually simpler and faster.

Tool calling vs retrieval vs reasoning

These capabilities are often discussed together, but they solve different problems. Knowing the difference helps you design simpler systems.

| Capability | What it does | Best for |
| --- | --- | --- |
| Text generation | Produces language output | Drafting, rewriting, summarizing, answering from provided context |
| Retrieval / embeddings | Finds relevant context | Search, RAG, semantic lookup |
| Tool calling | Lets the model use external functions or systems | Actions, real-time data, workflow orchestration |
| Reasoning | Gives the model more thinking budget | Multi-step planning, comparisons, hard decisions |

AI SDK example

The AI SDK makes tool calling approachable by letting you define tools with a schema and an execution function. This keeps the interface model-friendly while still giving you runtime control.

import { generateText, stepCountIs, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const result = await generateText({
  model: openai("gpt-5"),
  prompt: "What is the weather in Berlin today?",
  tools: {
    weather: tool({
      description: "Get the weather in a location",
      inputSchema: z.object({
        location: z.string().describe("The city or place to check"),
      }),
      execute: async ({ location }) => {
        return {
          location,
          temperature: "18C",
          conditions: "Cloudy",
        };
      },
    }),
  },
  stopWhen: stepCountIs(5),
});

This pattern is useful because the model gets a clear interface, and your app keeps control over validation, permissions, and execution.

How to design good tools

The best tools are boringly clear. Models do better when tools are specific, narrow, and easy to distinguish from one another.
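
As an illustration, compare a vague tool with a narrow, specific one. Both definitions below are hypothetical; the shape loosely mirrors the AI SDK example above, with schemas written as plain descriptions for brevity:

```typescript
// Vague: the model can't tell when to use it or what to pass.
const lookup = {
  description: "Look stuff up",
  inputSchema: { query: "a string" },
};

// Specific: one job, unambiguous input, easy to pick correctly.
const orderStatus = {
  description:
    "Get the current fulfillment status of a single order by its order ID. " +
    "Use only when the user asks about a specific order.",
  inputSchema: { orderId: "the order ID, e.g. ORD-1042" },
};
```

The second description tells the model both what the tool does and when not to use it, which matters most when several tools could plausibly match a request.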

Common product patterns

Tool calling rarely exists by itself. In most products, it appears as part of a broader workflow or assistant experience.

Search assistant

The model decides when to use search, fetches results, and then summarizes them for the user.

Back-office copilot

The model looks up customer, billing, or product data across internal systems before answering.

Agent-style workflow

The model chains multiple tools together, such as search, retrieval, and summarization, to complete a multi-step task.

Action-taking assistant

The model does not just answer. It creates tickets, updates records, or triggers downstream automations after validation.

Failure modes to plan for

Tool calling is powerful, but it introduces new operational risks. A good assistant is not just "smart"; it is predictable under failure.

Wrong tool selection

The model may choose a tool when none is needed, or choose the wrong one if descriptions overlap too much.

Bad input shape

Weak schemas or vague prompts can produce malformed parameters that break execution.
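
A schema catches malformed parameters before execution. A dependency-free sketch of the idea (in practice a schema library such as zod does this more robustly, as in the AI SDK example above):

```typescript
// Illustrative: validate tool input before running the tool.
function validateWeatherInput(input: unknown): { location: string } {
  if (typeof input !== "object" || input === null) {
    throw new Error("Tool input must be an object");
  }
  const { location } = input as Record<string, unknown>;
  if (typeof location !== "string" || location.trim() === "") {
    throw new Error("`location` must be a non-empty string");
  }
  return { location };
}
```

Rejecting bad input early turns a confusing downstream failure into a clear error the model (or your logs) can act on.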

Noisy tool responses

Returning too much irrelevant data can make the final answer worse instead of better.

Unsafe side effects

Any tool that writes data, charges money, or changes system state should be protected with auth, policy checks, and confirmation flows where appropriate.
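
One common pattern is to gate write tools behind an explicit confirmation step instead of executing them immediately. A hedged sketch; the `writeTools` set and `pendingActions` queue are illustrative, not an SDK feature:

```typescript
// Illustrative: write tools are queued for user confirmation;
// read-only tools run immediately.
type ToolCall = { name: string; input: unknown };

const writeTools = new Set(["createTicket", "refundOrder"]);
const pendingActions: ToolCall[] = [];

function dispatch(call: ToolCall): "executed" | "pending" {
  if (writeTools.has(call.name)) {
    pendingActions.push(call); // surface to the user before running
    return "pending";
  }
  return "executed"; // safe to run directly
}
```

The same dispatch point is also a natural place to enforce auth and policy checks, since every tool call passes through it.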

In this docs set, tool calling connects naturally to several other capabilities. Those pages are the best place to see how it fits into end-user experiences.

A practical checklist

Before shipping tool calling, make sure the system is understandable to both the model and your team. The simpler the contract, the more reliable the behavior.

  • Keep tools small and well-scoped.
  • Validate all tool inputs with schemas.
  • Prefer read-only tools first, then add safe write actions later.
  • Log tool selections and failures so you can evaluate behavior.
  • Avoid exposing tools that overlap too much in purpose.

Learn more

If you want to go deeper, these resources are the best next step. They cover both the practical API surface and the emerging design patterns around agent-like systems.
