MCP (Model Context Protocol) tools let the LLM call external functions, Yao processes, or APIs. **No Hook is required** — the LLM decides when to call each tool based on the user's request.
## Step 1: Create the MCP Server Definition
Create `mcps/tools.mcp.yao` in your assistant directory:
```json
{
  "label": "My Tools",
  "description": "Tools available to this agent",
  "transport": "process",
  "tools": {
    "get_weather": "scripts.weather.Get",
    "search_docs": "scripts.search.Query"
  }
}
```
The `transport: "process"` type maps tool names directly to Yao processes: when the LLM calls `get_weather`, Yao runs `scripts.weather.Get` and returns the result.
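The mapping assumes a backing script exists. A minimal sketch of what `scripts/weather.js` might contain (the file name, `Get` function, and return shape here are hypothetical; Yao resolves the process name `scripts.weather.Get` to the `Get` function defined in `scripts/weather.js`):

```javascript
// scripts/weather.js — hypothetical script backing the get_weather tool.
// Yao resolves the process name scripts.weather.Get to the Get function
// defined in this file.
function Get(city) {
  // A real implementation would call a weather API here; this stub
  // returns a fixed, structured payload so the LLM gets usable data back.
  return {
    city: city,
    temperature_c: 22,
    condition: "sunny"
  };
}
```

Whatever the function returns is serialized and handed back to the LLM as the tool result, so returning a small, well-named object tends to work better than a raw string.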
## Step 2: Register in `package.yao`
```json
{
  "name": "My Assistant",
  "connector": "$ENV.DEFAULT_CONNECTOR",
  "mcp": {
    "servers": [
      {
        "server_id": "agents.my-assistant.tools",
        "tools": ["get_weather", "search_docs"]
      }
    ]
  }
}
```
The `server_id` is `agents.<assistant-id>.<filename-without-extension>`. List only the tools you want the LLM to see. If `tools` is omitted or empty, **all tools** defined in that server are exposed.
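For example, assuming an assistant with id `my-assistant`, the file from Step 1 resolves like this:

```
<assistant directory>/
├── package.yao
└── mcps/
    └── tools.mcp.yao   ← server_id: agents.my-assistant.tools
```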
## Transport Types
**`process`** — Call Yao processes directly. Best for internal data access.
```json
{
  "transport": "process",
  "tools": {
    "save_entry": "agents.yao.keeper.store.MCPSave",
    "search_entries": "agents.yao.keeper.store.MCPSearch"
  }
}
```
**`stdio`** — Spawn a local MCP server process.
```json
{
  "transport": "stdio",
  "command": "python",
  "arguments": ["mcp_server.py"],
  "env": { "API_KEY": "$ENV.API_KEY" }
}
```
**`http`** — Connect to a remote MCP server over HTTP.
```json
{
  "transport": "http",
  "url": "https://mcp.example.com/api",
  "authorization_token": "$ENV.MCP_TOKEN"
}
```
**`sse`** — Server-Sent Events stream.
```json
{
  "transport": "sse",
  "url": "https://mcp.example.com/events",
  "authorization_token": "$ENV.MCP_TOKEN"
}
```
## `servers` Syntax Sugar
The `servers` array supports four equivalent formats — pick the one that fits your use case.
**Format 1 — String** (expose all tools in the server):
```json
{ "servers": ["agents.my-assistant.tools"] }
```
**Format 2 — Standard object** (explicit tool list, recommended for clarity):
```json
{
  "servers": [
    {
      "server_id": "agents.my-assistant.tools",
      "tools": ["get_weather", "search_docs"]
    }
  ]
}
```
**Format 3 — `{ server_id: [tools] }`** (shorthand for tools-only):
```json
{
  "servers": [
    { "agents.my-assistant.tools": ["get_weather", "search_docs"] }
  ]
}
```
**Format 4 — `{ server_id: { resources, tools } }`** (when you also need resources):
```json
{
  "servers": [
    {
      "agents.my-assistant.tools": {
        "tools": ["get_weather"],
        "resources": ["data://reports"]
      }
    }
  ]
}
```
All four formats are parsed by the same `UnmarshalJSON` — the framework accepts any of them. Format 2 is what you'll see in real assistants (e.g. `keeper`, `expense`) and is the most readable.
## Tool Parameter Schemas
For `process` transport, you can add input/output schemas to help the LLM call tools correctly. Create mapping files under `mcps/mapping/`:
```
mcps/
├── tools.mcp.yao
└── mapping/
    └── tools/
        └── schemes/
            ├── search_docs.in.yao    ← input schema
            └── search_docs.out.yao   ← output schema
```
Input schema (`search_docs.in.yao`):
```json
{
  "type": "object",
  "properties": {
    "query": { "type": "string", "description": "Search query" },
    "limit": { "type": "integer", "description": "Max results", "default": 10 }
  },
  "required": ["query"]
}
```
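The output schema follows the same JSON Schema conventions. A sketch of what `search_docs.out.yao` might look like (the `results`, `title`, `snippet`, and `total` field names are illustrative, not part of the framework):

```json
{
  "type": "object",
  "properties": {
    "results": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "title": { "type": "string", "description": "Document title" },
          "snippet": { "type": "string", "description": "Matching excerpt" }
        }
      }
    },
    "total": { "type": "integer", "description": "Total matches found" }
  }
}
```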
## Testing
```bash
# The LLM will call tools automatically based on the conversation
yao agent test -n my-assistant -i "What's the weather in Tokyo?"
```
## What's Next
Your agent now has tools, and the LLM calls them automatically. The next page introduces Hooks, for when you need control over what happens before or after the LLM runs.
→ **[Pure Hook](./pure-hook)**