On this page you'll create a minimal Yao Agent — no hooks, no tools, just a prompt and a connector. By the end you'll have a working agent you can talk to.
## Prerequisites
- Yao installed and a project initialized (`yao init`)
- At least one LLM connector configured (e.g. OpenAI, DeepSeek)
## Step 1: Create the Agent Directory
Every agent lives under `assistants/` in your project. Create a directory for your agent:
```bash
mkdir -p assistants/my-assistant
```
An agent needs exactly two files to work: `package.yao` (configuration) and `prompts.yml` (system prompt).
## Step 2: Write `package.yao`
Create `assistants/my-assistant/package.yao`:
```json
{
  "name": "My Assistant",
  "description": "A helpful assistant that answers questions clearly and concisely.",
  "connector": "$ENV.DEFAULT_CONNECTOR",
  "options": {
    "temperature": 0.7,
    "max_tokens": 2048
  },
  "public": true
}
```
**Key fields:**
| Field | Required | Description |
|-------|----------|-------------|
| `name` | Yes | Display name shown in the UI |
| `connector` | Yes | Which LLM to use. `$ENV.DEFAULT_CONNECTOR` reads from your `.env` file |
| `description` | No | Shown in the agent list |
| `options` | No | LLM parameters (temperature, max_tokens, etc.) |
| `public` | No | `true` makes it visible to all users |
> **Tip:** To use a specific model instead of the default, set `"connector": "openai.gpt-4o"` or any connector ID defined in your `connectors/` directory.
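If you stick with `$ENV.DEFAULT_CONNECTOR`, your `.env` file needs that variable set. A minimal sketch — the value `openai` here is an assumed connector ID; use whatever ID is actually defined in your `connectors/` directory:

```bash
# .env — the variable name matches the $ENV reference in package.yao.
# "openai" is an assumed connector ID; replace it with one from connectors/.
DEFAULT_CONNECTOR=openai
```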
## Step 3: Write `prompts.yml`
Create `assistants/my-assistant/prompts.yml`:
```yaml
- role: system
content: |
You are a helpful assistant. Answer questions clearly and concisely.
When you don't know something, say so honestly rather than guessing.
```
The `prompts.yml` file defines the system prompt sent to the LLM at the start of every conversation. Only the `system` role is supported here.
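To make the flow concrete: the `prompts.yml` entries are prepended to the conversation before your message reaches the connector. The sketch below is purely illustrative — a generic OpenAI-style message list, not Yao's internal code:

```python
# Illustrative only: the message list an OpenAI-style connector would
# receive. Field names follow the common chat-completions convention.
system_prompt = (
    "You are a helpful assistant. Answer questions clearly and concisely.\n"
    "When you don't know something, say so honestly rather than guessing."
)

def build_messages(user_input: str) -> list[dict]:
    """Prepend the prompts.yml system prompt to the user's message."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("What is the capital of France?")
print(messages[0]["role"])  # the system prompt always comes first
```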
## Step 4: Test Your Agent
The quickest way to test is directly from the command line — no server needed:
```bash
yao agent test -n my-assistant -i "What is the capital of France?"
```
You should see a streaming response from the LLM.
**If you already have the server running in dev mode**, the agent is live-reloaded automatically — just open the UI at `http://localhost:5099` and find **My Assistant** in the agent list.
If the server isn't running yet, start it with:
```bash
yao start
```
## What You Have
```
assistants/my-assistant/
├── package.yao ← agent configuration
└── prompts.yml ← system prompt
```
That's it. Two files, one working agent.
## What's Next
This agent is the simplest possible form — it just passes your messages to an LLM with a system prompt. On the next page, you'll learn how agent execution works under the hood, and understand the three modes that make a Yao Agent different from a plain LLM wrapper.
→ **[How It Works](./how-it-works)**