Vercel AI SDK

Using OpenRouter with Vercel AI SDK

You can use the Vercel AI SDK to integrate OpenRouter with your Next.js app. To get started, install @openrouter/ai-sdk-provider:

npm install @openrouter/ai-sdk-provider

Then you can use the streamText() API to stream text from OpenRouter.

TypeScript
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { streamText } from 'ai';
import { z } from 'zod';

export const getLasagnaRecipe = async (modelName: string) => {
  const openrouter = createOpenRouter({
    apiKey: '<OPENROUTER_API_KEY>',
  });

  const response = streamText({
    model: openrouter(modelName),
    prompt: 'Write a vegetarian lasagna recipe for 4 people.',
  });

  await response.consumeStream();
  return response.text;
};

export const getWeather = async (modelName: string) => {
  const openrouter = createOpenRouter({
    apiKey: '<OPENROUTER_API_KEY>',
  });

  const response = streamText({
    model: openrouter(modelName),
    prompt: 'What is the weather in San Francisco, CA in Fahrenheit?',
    tools: {
      getCurrentWeather: {
        description: 'Get the current weather in a given location',
        parameters: z.object({
          location: z
            .string()
            .describe('The city and state, e.g. San Francisco, CA'),
          unit: z.enum(['celsius', 'fahrenheit']).optional(),
        }),
        execute: async ({ location, unit = 'celsius' }) => {
          // Mock weather data keyed by "City, ST"
          const weatherData: Record<string, { celsius: string; fahrenheit: string }> = {
            'Boston, MA': {
              celsius: '15°C',
              fahrenheit: '59°F',
            },
            'San Francisco, CA': {
              celsius: '18°C',
              fahrenheit: '64°F',
            },
          };

          const weather = weatherData[location];
          if (!weather) {
            return `Weather data for ${location} is not available.`;
          }

          return `The current weather in ${location} is ${weather[unit]}.`;
        },
      },
    },
  });

  await response.consumeStream();
  return response.text;
};
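The helpers above buffer the entire completion before returning. streamText also exposes tokens incrementally through its textStream async iterable; a small consumer like the following sketch works with any AsyncIterable&lt;string&gt; (the onChunk callback is illustrative, not part of the SDK):

```typescript
// Collect chunks from any AsyncIterable<string>, e.g. streamText's
// response.textStream, invoking a callback as each chunk arrives.
async function collectStream(
  stream: AsyncIterable<string>,
  onChunk: (chunk: string) => void = () => {},
): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    onChunk(chunk); // e.g. render the partial text in the UI as it streams
    full += chunk;
  }
  return full;
}
```

In getLasagnaRecipe, for example, you could replace the consumeStream() call with `await collectStream(response.textStream, (c) => process.stdout.write(c))` to print the recipe as it streams.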

Video Generation

OpenRouter supports video generation through the AI SDK’s experimental_generateVideo API. The provider handles the asynchronous submit-poll-download workflow automatically.

TypeScript
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { experimental_generateVideo as generateVideo } from 'ai';

const openrouter = createOpenRouter({
  apiKey: '<OPENROUTER_API_KEY>',
});

const { video } = await generateVideo({
  model: openrouter.videoModel('google/veo-3.1'),
  prompt: 'A golden retriever playing fetch on a sunny beach with waves crashing in the background',
  aspectRatio: '16:9',
  duration: 4,
});

console.log(video.mediaType); // 'video/mp4'
console.log(video.uint8Array.byteLength); // video size in bytes
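Since the result is a plain byte array, persisting it takes one call to Node's fs module. The helpers below are a convenience sketch, not part of the SDK; the base file name and the extension table are assumptions:

```typescript
import { writeFileSync } from 'node:fs';

// Map a media type to a file extension (fall back to .bin for unknown types).
function extensionFor(mediaType: string): string {
  const known: Record<string, string> = {
    'video/mp4': 'mp4',
    'video/webm': 'webm',
  };
  return known[mediaType] ?? 'bin';
}

// Write generated bytes to disk and return the path,
// e.g. saveVideo(video.uint8Array, video.mediaType).
function saveVideo(bytes: Uint8Array, mediaType: string, baseName = 'output'): string {
  const path = `${baseName}.${extensionFor(mediaType)}`;
  writeFileSync(path, bytes);
  return path;
}
```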

Passthrough Options

Each video model supports model-specific parameters, passed through in extraBody under provider.options and keyed by provider slug. See the video generation docs for the full list of allowed_passthrough_parameters per model.

TypeScript
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { experimental_generateVideo as generateVideo } from 'ai';

const openrouter = createOpenRouter({
  apiKey: '<OPENROUTER_API_KEY>',
});

const { video } = await generateVideo({
  model: openrouter.videoModel('google/veo-3.1', {
    generateAudio: true,
    extraBody: {
      provider: {
        options: {
          'google-vertex': {
            parameters: {
              personGeneration: 'allow_all',
              enhancePrompt: true,
            },
          },
        },
      },
    },
  }),
  prompt: 'A timelapse of a flower blooming in a sunlit garden',
  aspectRatio: '16:9',
});

Video Model Settings

Setting         Type     Default  Description
generateAudio   boolean  false    Whether to generate audio alongside the video
pollIntervalMs  number   2000     Polling interval in milliseconds when waiting for generation
maxPollTimeMs   number   600000   Maximum time in milliseconds to wait before timing out
extraBody       object   —        Default body parameters merged into every request for this model
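pollIntervalMs and maxPollTimeMs together bound the provider's internal wait loop: roughly maxPollTimeMs / pollIntervalMs status checks before timing out. The provider's actual implementation may differ, but the loop can be sketched generically as below, where check stands in for the status request:

```typescript
// Generic poll loop: call `check` every pollIntervalMs until it returns a
// non-null value, or fail once maxPollTimeMs has elapsed. Defaults mirror
// the settings table above (2 s interval, 10 min cap).
async function pollUntilReady<T>(
  check: () => Promise<T | null>,
  pollIntervalMs = 2000,
  maxPollTimeMs = 600000,
): Promise<T> {
  const deadline = Date.now() + maxPollTimeMs;
  for (;;) {
    const result = await check();
    if (result !== null) return result;
    if (Date.now() + pollIntervalMs > deadline) {
      throw new Error(`Timed out after ${maxPollTimeMs} ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
  }
}
```

Raising pollIntervalMs reduces request volume at the cost of slower completion detection; long-form models may also need a higher maxPollTimeMs.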

Image-to-Video

You can pass a reference image to guide the video generation:

TypeScript
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { experimental_generateVideo as generateVideo } from 'ai';

const openrouter = createOpenRouter({
  apiKey: '<OPENROUTER_API_KEY>',
});

const { video } = await generateVideo({
  model: openrouter.videoModel('alibaba/wan-2.7'),
  prompt: 'A character walking through a forest',
  image: new URL('https://example.com/first-frame.png'),
  resolution: '1920x1080',
});

Provider Metadata

The response includes OpenRouter-specific metadata accessible via providerMetadata:

TypeScript
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { experimental_generateVideo as generateVideo } from 'ai';

const openrouter = createOpenRouter({
  apiKey: '<OPENROUTER_API_KEY>',
});

const result = await generateVideo({
  model: openrouter.videoModel('google/veo-3.1'),
  prompt: 'A slow pan across a calm mountain lake at sunrise',
  aspectRatio: '16:9',
});

console.log(result.providerMetadata?.openrouter);
// { generationId: 'gen-...', cost: 0.25 }

For the full list of supported models, resolutions, aspect ratios, and provider-specific options, see the Video Generation documentation.