The `Create` Hook runs after the user sends a message and before the LLM receives it. Use it to preprocess input, configure the LLM request, or route to a different agent entirely.
## Minimal Implementation
```typescript
// assistants/my-assistant/src/index.ts
import { agent } from "@yao/runtime";

export function Create(
  ctx: agent.Context,
  messages: agent.Message[]
): agent.Create {
  return { messages }; // pass through unchanged
}
```
Return `null` or `{ messages }` for default behavior.
## Return Values
The full set of options you can return:
```typescript
return {
  messages,                     // messages sent to LLM (can be modified)
  connector: "gpt-4o-mini",     // override the connector for this request
  prompt_preset: "task",        // select a prompt preset from prompts/
  disable_global_prompts: true, // skip global system prompts for this request
  temperature: 0.7,             // override temperature
  max_tokens: 2000,             // override max tokens (most models)
  max_completion_tokens: 2000,  // override max completion tokens (o-series / newer models)
  mcp_servers: [                // add/override MCP servers for this request
    { server_id: "agents.my-assistant.tools", tools: ["search"] },
  ],
  uses: {                       // override wrapper tools (vision / search / etc.)
    vision: "yao.vision-agent",
    search: "disabled",
  },
  force_uses: true,             // force Uses tools even if model has native capabilities
  search: false,                // disable auto-search for this request (bool | SearchIntent)
  locale: "zh-cn",              // override locale for this request
  metadata: { key: "value" },   // merged into ctx.metadata for later hooks
};
```
## Common Patterns
### Inject Context into Messages
Add information to the user's message before the LLM sees it:
```typescript
export function Create(
  ctx: agent.Context,
  messages: agent.Message[]
): agent.Create {
  const username = ctx.authorized?.user_id || "anonymous";
  const time = new Date().toISOString();

  // Inject context as a system message
  const enriched = [
    {
      role: "system",
      content: `Current user: ${username}. Time: ${time}.`,
    },
    ...messages,
  ];
  return { messages: enriched };
}
```
### Switch Connector or Prompt by Intent
```typescript
export function Create(
  ctx: agent.Context,
  messages: agent.Message[]
): agent.Create {
  const input = messages[messages.length - 1]?.content || "";

  // Route complex analysis to a more capable model
  if (input.length > 500 || input.includes("analyze")) {
    return {
      messages,
      connector: "gpt-4o",
      prompt_preset: "analysis",
    };
  }
  return { messages }; // default: fast model, default prompt
}
```
### Store State for the Next Hook
Use `ctx.memory.context` to pass data from Create to Next within the same request:
```typescript
export function Create(
  ctx: agent.Context,
  messages: agent.Message[]
): agent.Create {
  ctx.memory.context.Set("start_time", Date.now());
  ctx.memory.context.Set("chat_id", ctx.chat_id);
  return { messages };
}
```
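To see both halves of the pattern in one runnable piece, here is a self-contained sketch with a minimal in-memory stand-in for the store — the `Set`/`Get` shape is an assumption for illustration; in the runtime the real store is `ctx.memory.context`:

```typescript
// Minimal stand-in for ctx.memory.context (the Set/Get API shape is assumed).
type ContextStore = {
  Set(key: string, value: unknown): void;
  Get<T = unknown>(key: string): T | undefined;
};

function createStore(): ContextStore {
  const data = new Map<string, unknown>();
  return {
    Set(key, value) {
      data.set(key, value);
    },
    Get<T>(key: string) {
      return data.get(key) as T | undefined;
    },
  };
}

// Create stores request-scoped values...
const store = createStore();
store.Set("start_time", Date.now());
store.Set("chat_id", "chat_123");

// ...and Next reads them back, e.g. to log elapsed time.
const started = store.Get<number>("start_time") ?? 0;
const elapsedMs = Date.now() - started;
console.log(store.Get<string>("chat_id"), elapsedMs >= 0);
```

The store is scoped to the current request: values written in Create are visible to Next, but not to later requests.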
## Delegation
Skip the LLM entirely and hand the request to another agent. The delegated agent runs with the same conversation history.
```typescript
export function Create(
  ctx: agent.Context,
  messages: agent.Message[]
): agent.Create {
  const input = messages[messages.length - 1]?.content || "";

  // Route to specialist agent
  if (input.toLowerCase().includes("code")) {
    return {
      delegate: {
        agent_id: "yao.code-agent", // required
        messages,                   // required
        options: {},                // optional — override connector, locale, etc.
      },
    };
  }
  return { messages };
}
```
**`delegate` shares the conversation history.** The delegated agent sees all previous messages. This is "passing the baton" — use it for routing.
For independent sub-tasks (calling an agent as a function), use `ctx.agent.Call()` — covered in [Multi-Agent](./multi-agent).
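The distinction can be sketched with plain functions standing in for agents. This is an illustrative model only: `Agent`, `delegateTo`, and `callAsFunction` are hypothetical names, not runtime APIs.

```typescript
// A toy agent: replies based on how much conversation it was given.
type Agent = (history: string[]) => string;

const codeAgent: Agent = (history) =>
  `code-agent reply (saw ${history.length} messages)`;

// Delegation: the baton passes — the specialist sees the full history
// and its reply goes straight to the user.
function delegateTo(agent: Agent, history: string[]): string {
  return agent(history);
}

// Function-style call: the caller sends a focused sub-task, keeps
// control, and composes the result into its own reply.
function callAsFunction(agent: Agent, subTask: string): string {
  const result = agent([subTask]);
  return `main agent reply using: ${result}`;
}

const history = ["hi", "hello", "review my code"];
console.log(delegateTo(codeAgent, history));
// → "code-agent reply (saw 3 messages)"
console.log(callAsFunction(codeAgent, "review my code"));
// → "main agent reply using: code-agent reply (saw 1 messages)"
```

In short, the choice is about ownership of the reply: with `delegate` the specialist's answer becomes the final response, while a function-style call returns control to the current agent.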
## Real Example: Mark's Create Hook
From `yaobots/assistants/yao/mark/src/index.ts` — detects edit vs. create mode and switches prompt preset:
```typescript
export function Create(
  ctx: agent.Context,
  messages: agent.Message[]
): agent.Create {
  // Store context for Next hook
  ctx.memory.context.Set("chat_id", ctx.chat_id);
  ctx.memory.context.Set("start_time", Date.now());

  const lastMsg = messages[messages.length - 1];
  if (lastMsg?.content) {
    ctx.memory.context.Set("user_input", lastMsg.content);
  }

  // Detect edit vs. create mode from URL
  const existing = detectAndLoadExisting(ctx);
  if (existing) {
    ctx.memory.context.Set("mode", "edit");
    ctx.memory.context.Set("canvas_id", existing.canvas_id);
    return { messages, prompt_preset: "edit" };
  }

  ctx.memory.context.Set("mode", "create");
  return { messages };
}
```
## What's Next
You've controlled the LLM input. Now process its output.
→ **[Next Hook](./hook-next)**