Vercel AI Data Stream Protocol
Pydantic AI natively supports the Vercel AI Data Stream Protocol to receive agent run input from, and stream events to, a frontend using AI SDK UI hooks like useChat. You can optionally use AI Elements for pre-built UI components.
Note
By default, the adapter targets AI SDK v5 for backwards compatibility. To use features introduced in AI SDK v6, set sdk_version=6 on the adapter.
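On the wire, each Vercel AI event travels as an HTTP Server-Sent Event whose `data` field holds a JSON-encoded chunk. As a rough sketch of that encoding (the `encode_sse` helper and the chunk payloads are hand-written illustrations, not the adapter's actual implementation):

```python
import json


def encode_sse(chunk: dict) -> str:
    """Encode one chunk as a Server-Sent Events 'data:' event."""
    return f'data: {json.dumps(chunk)}\n\n'


# Hand-written chunks in the shape of the protocol's text events;
# in practice the adapter produces these from Pydantic AI run events.
chunks = [
    {'type': 'text-start', 'id': 'msg-1'},
    {'type': 'text-delta', 'id': 'msg-1', 'delta': 'Hello!'},
    {'type': 'text-end', 'id': 'msg-1'},
]
stream = ''.join(encode_sse(c) for c in chunks)
```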
Usage
The VercelAIAdapter class is responsible for transforming agent run input received from the frontend into arguments for Agent.run_stream_events(), running the agent, and then transforming Pydantic AI events into Vercel AI events. The event stream transformation is handled by the VercelAIEventStream class, but you typically won't use this directly.
If you're using a Starlette-based web framework like FastAPI, you can use the VercelAIAdapter.dispatch_request() class method from an endpoint function to directly handle a request and return a streaming response of Vercel AI events. This is demonstrated in the next section.
If you're using a web framework not based on Starlette (e.g. Django or Flask), or need fine-grained control over the input or output, you can create a VercelAIAdapter instance and use its methods directly. This is demonstrated in the "Advanced Usage" section below.
Usage with Starlette/FastAPI
Besides the request, VercelAIAdapter.dispatch_request() takes the agent, the same optional arguments as Agent.run_stream_events(), and an optional on_complete callback function that receives the completed AgentRunResult and can optionally yield additional Vercel AI events.
```python
from fastapi import FastAPI
from starlette.requests import Request
from starlette.responses import Response

from pydantic_ai import Agent
from pydantic_ai.ui.vercel_ai import VercelAIAdapter

agent = Agent('gateway/openai:gpt-5.2')
app = FastAPI()


@app.post('/chat')
async def chat(request: Request) -> Response:
    return await VercelAIAdapter.dispatch_request(request, agent=agent)
```
Advanced Usage
If you're using a web framework not based on Starlette (e.g. Django or Flask) or need fine-grained control over the input or output, you can create a VercelAIAdapter instance and directly use its methods, which can be chained to accomplish the same thing as the VercelAIAdapter.dispatch_request() class method shown above:
- The `VercelAIAdapter.build_run_input()` class method takes the request body as bytes and returns a Vercel AI `RequestData` run input object, which you can then pass to the `VercelAIAdapter()` constructor along with the agent.
    - You can also use the `VercelAIAdapter.from_request()` class method to build an adapter directly from a Starlette/FastAPI request.
- The `VercelAIAdapter.run_stream()` method runs the agent and returns a stream of Vercel AI events. It supports the same optional arguments as `Agent.run_stream_events()` and an optional `on_complete` callback function that receives the completed `AgentRunResult` and can optionally yield additional Vercel AI events.
    - You can also use `VercelAIAdapter.run_stream_native()` to run the agent and return a stream of Pydantic AI events instead, which can then be transformed into Vercel AI events using `VercelAIAdapter.transform_stream()`.
- The `VercelAIAdapter.encode_stream()` method encodes the stream of Vercel AI events as SSE (HTTP Server-Sent Events) strings, which you can then return as a streaming response.
    - You can also use `VercelAIAdapter.streaming_response()` to generate a Starlette/FastAPI streaming response directly from the Vercel AI event stream returned by `run_stream()`.
Note
This example uses FastAPI, but it can be adapted to any web framework.
```python
from http import HTTPStatus

from fastapi import FastAPI
from fastapi.requests import Request
from fastapi.responses import Response, StreamingResponse
from pydantic import ValidationError

from pydantic_ai import Agent
from pydantic_ai.ui import SSE_CONTENT_TYPE
from pydantic_ai.ui.vercel_ai import VercelAIAdapter

agent = Agent('gateway/openai:gpt-5.2')
app = FastAPI()


@app.post('/chat')
async def chat(request: Request) -> Response:
    accept = request.headers.get('accept', SSE_CONTENT_TYPE)
    try:
        run_input = VercelAIAdapter.build_run_input(await request.body())
    except ValidationError as e:
        return Response(
            # e.json() already returns a JSON string, so it can be used as the
            # response body directly; wrapping it in json.dumps() would
            # double-encode it.
            content=e.json(),
            media_type='application/json',
            status_code=HTTPStatus.UNPROCESSABLE_ENTITY,
        )

    adapter = VercelAIAdapter(agent=agent, run_input=run_input, accept=accept)
    event_stream = adapter.run_stream()
    sse_event_stream = adapter.encode_stream(event_stream)
    return StreamingResponse(sse_event_stream, media_type=accept)
```
Data Chunks
Pydantic AI tools can send Vercel AI data stream chunks by returning a `ToolReturn` object with a data-carrying chunk (or a list of chunks) as metadata. The supported chunk types are `DataChunk`, `SourceUrlChunk`, `SourceDocumentChunk`, and `FileChunk`. This is useful for sending structured data to the frontend alongside the tool result, such as source URLs or custom data payloads.
```python
from pydantic_ai import Agent, ToolReturn
from pydantic_ai.ui.vercel_ai.response_types import DataChunk, SourceUrlChunk

agent = Agent('gateway/openai:gpt-5.2')


@agent.tool_plain
async def search_docs(query: str) -> ToolReturn:
    return ToolReturn(
        return_value=f'Found 2 results for "{query}"',
        metadata=[
            SourceUrlChunk(
                source_id='doc-1',
                url='https://example.com/docs/intro',
                title='Introduction',
            ),
            DataChunk(
                type='data-search-results',
                data={'query': query, 'count': 2},
            ),
        ],
    )
```
Note
Protocol-control chunks such as `StartChunk`, `FinishChunk`, `StartStepChunk`, or `FinishStepChunk` are automatically filtered out; only the four data-carrying chunk types listed above are forwarded to the stream and preserved in `dump_messages`.
Tool Approval
Note
Tool approval requires AI SDK UI v6 or later on the frontend.
Pydantic AI supports human-in-the-loop tool approval workflows with AI SDK UI, allowing users to approve or deny tool executions before they run. See the deferred tool calls documentation for details on setting up tools that require approval.
To enable tool approval streaming, pass `sdk_version=6` to `dispatch_request`:

```python
@app.post('/chat')
async def chat(request: Request) -> Response:
    return await VercelAIAdapter.dispatch_request(request, agent=agent, sdk_version=6)
```
When `sdk_version=6`, the adapter will:

- Emit `tool-approval-request` chunks when tools with `requires_approval=True` are called
- Automatically extract approval responses from follow-up requests
- Emit `tool-output-denied` chunks for rejected tools
On the frontend, AI SDK UI's `useChat` hook handles the approval flow. You can use the Confirmation component from AI Elements for a pre-built approval UI, or build your own using the hook's `addToolApprovalResponse` function.