# AI SDK by Vercel
The AI SDK is a TypeScript toolkit for building AI-powered applications and agents with React, Next.js, Vue, Svelte, Node.js, and more. It standardizes the integration of AI models across supported providers, so developers can focus on building great AI applications rather than on provider-specific implementation details.
The SDK consists of two main libraries: **AI SDK Core** provides a unified API for generating text, structured objects, tool calls, and building agents with LLMs; **AI SDK UI** offers framework-agnostic hooks for quickly building chat and generative user interfaces. The SDK supports multiple model providers including OpenAI, Anthropic, Google, Amazon Bedrock, and many others through a consistent interface.
---
## AI SDK Core
### generateText
Generates text for a given prompt and model. Ideal for non-interactive use cases like drafting emails, summarizing documents, or agents using tools.
```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
const { text, usage, finishReason } = await generateText({
  model: openai('gpt-4o'),
  system: 'You are a professional writer. You write simple, clear, and concise content.',
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
console.log(text);
console.log('Token usage:', usage);
console.log('Finish reason:', finishReason);
```
### streamText
Streams text from a given prompt and model in real time. Essential for interactive use cases such as chatbots, where users expect immediate responses.
```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
const result = streamText({
  model: openai('gpt-4o'),
  prompt: 'Invent a new holiday and describe its traditions.',
  onChunk({ chunk }) {
    if (chunk.type === 'text') {
      process.stdout.write(chunk.text);
    }
  },
  onFinish({ text, usage, finishReason }) {
    console.log('\nCompleted:', { usage, finishReason });
  },
});

// Consume the stream as an async iterable
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

// Or await the final text
const finalText = await result.text;
```
### generateText with Structured Output
Generates structured data conforming to a Zod schema. The AI SDK validates the output to ensure type safety and correctness.
```typescript
import { generateText, Output } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
const { output } = await generateText({
  model: openai('gpt-4o'),
  output: Output.object({
    schema: z.object({
      recipe: z.object({
        name: z.string(),
        ingredients: z.array(
          z.object({ name: z.string(), amount: z.string() })
        ),
        steps: z.array(z.string()),
      }),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});
console.log('Recipe:', output.recipe.name);
console.log('Ingredients:', output.recipe.ingredients);
console.log('Steps:', output.recipe.steps);
```
### streamText with Structured Output
Streams structured data generation, allowing partial objects to be displayed as they are received.
```typescript
import { streamText, Output } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
const { partialOutputStream } = streamText({
  model: openai('gpt-4o'),
  output: Output.object({
    schema: z.object({
      recipe: z.object({
        name: z.string(),
        ingredients: z.array(
          z.object({ name: z.string(), amount: z.string() })
        ),
        steps: z.array(z.string()),
      }),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});

for await (const partialObject of partialOutputStream) {
  console.clear();
  console.log('Partial:', partialObject);
}
```
### Tool Calling
Enables models to call external functions (tools) to perform specific tasks. Each tool has a description, an input schema, and an execute function.
```typescript
import { generateText, tool, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
const { text, steps, toolCalls, toolResults } = await generateText({
  model: openai('gpt-4o'),
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
        unit: 'fahrenheit',
      }),
    }),
    convertTemperature: tool({
      description: 'Convert temperature from Fahrenheit to Celsius',
      inputSchema: z.object({
        temperature: z.number().describe('Temperature in Fahrenheit'),
      }),
      execute: async ({ temperature }) => ({
        celsius: Math.round((temperature - 32) * (5 / 9)),
      }),
    }),
  },
  stopWhen: stepCountIs(5),
  prompt: 'What is the weather in San Francisco in celsius?',
});
console.log('Final answer:', text);
console.log('Tool calls made:', toolCalls);
console.log('Tool results:', toolResults);
console.log('Steps taken:', steps.length);
```
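Because each tool's `execute` function is plain TypeScript, its logic can be unit-tested in isolation, with no model in the loop. A minimal sketch of the conversion logic used by the `convertTemperature` tool above (the `fahrenheitToCelsius` helper is illustrative, not part of the SDK):

```typescript
// Pure conversion logic, identical to the convertTemperature tool's execute.
function fahrenheitToCelsius(temperature: number): number {
  return Math.round((temperature - 32) * (5 / 9));
}

// Sanity checks at the reference points of both scales.
console.log(fahrenheitToCelsius(32));  // freezing point: 0
console.log(fahrenheitToCelsius(212)); // boiling point: 100
```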
### ToolLoopAgent
A class that handles LLM tool loops automatically, managing context and stopping conditions for agent workflows.
```typescript
import { ToolLoopAgent, stepCountIs, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
const weatherAgent = new ToolLoopAgent({
  model: openai('gpt-4o'),
  tools: {
    weather: tool({
      description: 'Get the weather in a location (in Fahrenheit)',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
    convertFahrenheitToCelsius: tool({
      description: 'Convert temperature from Fahrenheit to Celsius',
      inputSchema: z.object({
        temperature: z.number().describe('Temperature in Fahrenheit'),
      }),
      execute: async ({ temperature }) => {
        const celsius = Math.round((temperature - 32) * (5 / 9));
        return { celsius };
      },
    }),
  },
  stopWhen: stepCountIs(20),
});

const result = await weatherAgent.generate({
  prompt: 'What is the weather in San Francisco in celsius?',
});
console.log(result.text);
console.log('Steps:', result.steps);
```
### embed
Generates an embedding for a single value. Useful for semantic search, similarity comparisons, and clustering.
```typescript
import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';
const { embedding, usage } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'sunny day at the beach',
});
console.log('Embedding dimensions:', embedding.length);
console.log('Token usage:', usage);
```
### embedMany
Generates embeddings for multiple values in batch. Ideal for preparing data stores for retrieval-augmented generation (RAG).
```typescript
import { embedMany, cosineSimilarity } from 'ai';
import { openai } from '@ai-sdk/openai';
const { embeddings, usage } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: [
    'sunny day at the beach',
    'rainy afternoon in the city',
    'snowy night in the mountains',
  ],
});
// Calculate similarity between first two embeddings
const similarity = cosineSimilarity(embeddings[0], embeddings[1]);
console.log('Similarity:', similarity);
console.log('Total tokens:', usage.tokens);
```
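Under the hood, cosine similarity is the dot product of the two vectors divided by the product of their magnitudes: 1 means the vectors point in the same direction, 0 means they are orthogonal, and -1 means they are opposite. A minimal sketch of that math (a hand-rolled helper for illustration; use the SDK's `cosineSimilarity` in practice):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|).
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, ai, i) => sum + ai * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

console.log(cosine([1, 0], [0, 1])); // orthogonal vectors: 0
console.log(cosine([3, 4], [3, 4])); // identical vectors: 1
```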
---
## AI SDK UI
### useChat Hook
Creates a conversational UI for chatbot applications with real-time message streaming and state management.
```typescript
// app/page.tsx (Client Component)
'use client';
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import { useState } from 'react';
export default function ChatPage() {
  const { messages, sendMessage, status, stop, error, regenerate, setMessages } = useChat({
    transport: new DefaultChatTransport({
      api: '/api/chat',
    }),
    onFinish: ({ message, messages }) => {
      console.log('Chat completed:', message);
    },
    onError: (error) => {
      console.error('Chat error:', error);
    },
  });
  const [input, setInput] = useState('');
  return (