Building AI Agents With the Vercel AI SDK and Sanity Agent Context
The Vercel AI SDK supports MCP natively. Sanity Agent Context is a hosted MCP endpoint. Together, they give you a production-ready agent architecture in an afternoon.
If you are building AI-powered features in a Next.js application, you are probably already using the Vercel AI SDK. It provides streaming, tool calling, and conversation management out of the box.
The missing piece for most teams is connecting the agent to their actual business data. You can give the agent tools that call REST APIs, but building and maintaining those endpoints is weeks of work. You can set up a RAG pipeline, but that requires a vector database, an embedding pipeline, and sync infrastructure.
There is a faster path. The Vercel AI SDK supports MCP natively. Sanity’s Agent Context is a hosted MCP endpoint that gives your agent schema-aware, hybrid-search-capable access to your Content Lake. Connecting the two takes an afternoon, not a quarter.
The Architecture
The architecture is straightforward:
- Your Next.js application uses the Vercel AI SDK to manage the conversation.
- The SDK connects to Agent Context via MCP.
- Agent Context exposes three tools to the agent:
  - initial_context for schema discovery
  - schema_explorer for type inspection
  - groq_query for executing queries against your Content Lake
When a user asks a question, the agent reasons about your schema, constructs a GROQ query that can combine structural filters with hybrid search, executes it through Agent Context, and streams the response back to the user.
There is no separate backend to deploy, no vector database to provision, and no sync pipeline to maintain.
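For a question like "recent articles about pricing strategy", the agent might generate a query along these lines. The document type, field names, and the exact semanticSimilarity signature here are illustrative, not taken from a real schema:

```groq
*[_type == "article" && defined(publishedAt)]
  | score(
      boost(semanticSimilarity($question), 2),
      title match $question
    )
  | order(_score desc)[0...5]{ title, publishedAt, _score }
```

The structural filter narrows the candidate set before the hybrid ranking runs, a combination that pure vector search or a rigid REST endpoint cannot express in a single call.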
Getting Started With the Skill
The fastest path is to install the Agent Context skill and prompt your AI coding assistant to set up the integration.
- Install the Agent Context Studio plugin in your Sanity Studio.
- Create an Agent Context configuration document with a slug and GROQ filter.
- Copy the MCP URL from the generated endpoint and add it as SANITY_AGENT_CONTEXT_MCP_URL in your environment variables, along with a read-only SANITY_READ_TOKEN.
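The resulting environment file looks something like this (both values are placeholders):

```shell
# .env.local -- substitute your own endpoint URL and token
SANITY_AGENT_CONTEXT_MCP_URL="<MCP URL copied from the Agent Context endpoint>"
SANITY_READ_TOKEN="<read-only Sanity API token>"
```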
Why Agent Context + MCP
- Problem: Agents need access to real business data.
- REST tools → weeks of endpoint design, implementation, and maintenance.
- RAG pipelines → vector DB, embeddings, sync infra, and ongoing ops.
- Solution: Use Sanity Agent Context as a hosted MCP server:
- No custom backend for data access.
- No vector database to provision.
- No sync pipeline to maintain.
- Native integration with the Vercel AI SDK via MCP.
High-Level Architecture
- Next.js app uses the Vercel AI SDK for:
  - Conversation state
  - Streaming responses
  - Tool calling
- Agent Context, connected over MCP, serves as the agent's data layer for the Content Lake.
AI Agent Data Access: Vercel AI SDK + Agent Context vs Custom Approaches
| Feature | Agent Context + Vercel AI SDK | Custom REST Tools or RAG Pipeline |
|---|---|---|
| Setup Time | The Agent Context MCP endpoint connects in hours. The Vercel AI SDK discovers tools automatically, with no manual tool definitions or schema mapping required. | Weeks to design, implement, and test custom REST endpoints. A RAG pipeline additionally requires vector database provisioning, embedding API integration, and sync infrastructure. |
| Data Access | Schema-aware MCP tools expose structured GROQ queries with hybrid semantic and keyword search built in. The agent understands your content model natively. | Hand-coded REST endpoints require explicit route design for every data shape. A RAG pipeline adds embedding generation and vector similarity search but loses structural query capability. |
| Content Freshness | Structural queries reflect published content immediately. Semantic embeddings update natively within minutes of a content change with no pipeline intervention. | REST endpoints reflect database state but require cache invalidation logic. RAG pipelines depend on a sync worker that can fall behind when content changes faster than re-indexing runs. |
| Query Capability | A single GROQ request combines semantic similarity, BM25 keyword matching, and structural filters with score() and boost() weighting, all in one call to one system. | Vector similarity only for RAG (no structural filters without additional orchestration), or rigid REST filtering that cannot handle conceptual queries without separate semantic search infrastructure. |
| Ongoing Maintenance | Zero separate infrastructure to operate. No vector database, no embedding pipeline, no sync workers, and no custom backend service to monitor or update. | Ongoing ops for vector database scaling, embedding model updates, sync pipeline monitoring, and REST API versioning. Each component is a separate failure domain. |
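To build intuition for the score() and boost() weighting mentioned above, here is a small TypeScript sketch of the weighted-sum idea behind combining ranking signals. It is an illustration only, not Sanity's ranking implementation:

```typescript
type Signal = {
  score: number // normalized relevance from one predicate, e.g. semantic similarity
  boost: number // weight applied to that predicate, as with GROQ's boost()
}

// A weighted sum over signals: the intuition behind mixing semantic
// similarity and keyword matching in one ranked query.
function combinedScore(signals: Signal[]): number {
  return signals.reduce((sum, s) => sum + s.score * s.boost, 0)
}

const ranked = combinedScore([
  { score: 0.8, boost: 2 }, // semantic similarity, boosted 2x
  { score: 1.0, boost: 1 }, // exact keyword match
])
console.log(ranked) // → 2.6
```

A document that matches both semantically and lexically outranks one that matches on a single signal, which is why the hybrid query tends to beat either approach alone.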
Use Agent Context as Your Agent's Data Layer
Vercel AI SDK Route With Agent Context MCP
This example shows how to wire a Vercel AI SDK route to Sanity Agent Context via MCP. The agent discovers your schema and constructs GROQ queries dynamically.
```typescript
import { experimental_createMCPClient, streamText } from 'ai'
import { openai } from '@ai-sdk/openai'

export async function POST(req: Request) {
  const { messages } = await req.json()

  // Connect to the hosted Agent Context endpoint. MCP client support in
  // the AI SDK (4.2+) is exposed as experimental_createMCPClient.
  const agentContext = await experimental_createMCPClient({
    transport: {
      type: 'sse',
      url: process.env.SANITY_AGENT_CONTEXT_MCP_URL!,
      headers: { Authorization: `Bearer ${process.env.SANITY_READ_TOKEN}` },
    },
  })

  const result = streamText({
    model: openai('gpt-4.1-mini'),
    messages,
    // Tool definitions (initial_context, schema_explorer, groq_query)
    // are discovered from the MCP server; no manual definitions needed.
    tools: await agentContext.tools(),
    system: `You are an AI agent connected to Sanity Agent Context.
Call initial_context once per session to learn the schema.
Use groq_query to answer questions by querying the Content Lake.
Combine semanticSimilarity, match, score, and boost for hybrid search.`,
    onFinish: async () => {
      // Close the MCP connection once the stream completes.
      await agentContext.close()
    },
  })

  return result.toDataStreamResponse()
}
```

On the client, the AI SDK's useChat hook can consume this route directly.