Quick Start: State Transfer with MCP
In this quick start, we'll demonstrate how fast-agent can transfer state between two agents using MCP Prompts.
First, we'll start `agent_one` as an MCP Server, and send it some messages with the MCP Inspector tool. Next, we'll run `agent_two` and transfer the conversation from `agent_one` using an MCP Prompt. Finally, we'll take a look at fast-agent's `prompt-server` and how it can assist in building agent applications.
You'll need API keys to connect to a supported model, or you can use Ollama's OpenAI compatibility mode to run local models.
The quick start also uses the MCP Inspector; see its documentation for installation instructions.
Step 1: Set up fast-agent
Change to the state-transfer directory (`cd state-transfer`), rename `fastagent.secrets.yaml.example` to `fastagent.secrets.yaml`, and enter the API keys for the providers you wish to use.

The supplied `fastagent.config.yaml` file contains a default of `gpt-4o`; edit this if you wish.

Finally, run `uv run agent_one.py` and send a test message to make sure that everything is working. Enter `stop` to return to the command line.
Step 2: Run agent one as an MCP Server
To start "agent_one"
as an MCP Server, run the following command:
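The original command listing isn't reproduced here; assuming fast-agent's standard server flags (`--server`, `--transport`, and `--port`), it looks something like this, using the SSE transport and port 8001 mentioned below:

```bash
# start agent_one as an MCP Server over SSE on port 8001 (flags assumed)
uv run agent_one.py --server --transport sse --port 8001
```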
The agent is now available as an MCP Server.
Note
This example starts the server on port 8001. To use a different port, update the URLs in `fastagent.config.yaml` and the MCP Inspector.
Step 3: Connect and chat with agent one
From another command line, run the Model Context Protocol inspector to connect to the agent:
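The command itself isn't shown in the text; the MCP Inspector is typically launched with npx:

```bash
# launch the MCP Inspector in your browser
npx @modelcontextprotocol/inspector
```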
Choose the SSE transport type and the URL `http://localhost:8001/sse`. After clicking the connect button, you can interact with the agent from the tools tab. Use the `agent_one_send` tool to send the agent a chat message and see its response.

The conversation history can be viewed from the prompts tab. Use the `agent_one_history` prompt to view it.
Disconnect the Inspector, then press `ctrl+c` in the command window to stop the process.
Step 4: Transfer the conversation to agent two
We can now transfer and continue the conversation with `agent_two`.

Run `agent_two` with the following command:
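This is the same `uv run` invocation used again in Step 5:

```bash
# start agent_two as an interactive chat client
uv run agent_two.py
```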
Once started, type `/prompts` to see the available prompts. Select `1` to apply the Prompt from `agent_one` to `agent_two`, transferring the conversation context.

You can now continue the chat with `agent_two` (potentially using different Models, MCP Tools or Workflow components).
Configuration Overview
fast-agent uses the following configuration file to connect to the `agent_one` MCP Server:
```yaml
# MCP Servers
mcp:
  servers:
    agent_one:
      transport: sse
      url: http://localhost:8001
```
`agent_two` then references the server in its definition (`agent_two.py`):
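The original listing isn't reproduced here; below is a minimal sketch of `agent_two.py`, assuming fast-agent's `FastAgent` class and `@fast.agent` decorator (the instruction text and the `interactive()` call are illustrative):

```python
import asyncio

from mcp_agent.core.fastagent import FastAgent

# Create the application
fast = FastAgent("agent_two")


# "servers" attaches the agent_one MCP Server defined in fastagent.config.yaml
@fast.agent(
    name="agent_two",
    instruction="You are a helpful AI Agent.",
    servers=["agent_one"],
)
async def main():
    async with fast.run() as agent:
        await agent.interactive()  # start the interactive chat session


if __name__ == "__main__":
    asyncio.run(main())
```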
Step 5: Save/Reload the conversation
fast-agent gives you the ability to save and reload conversations.
Enter `***SAVE_HISTORY history.json` in the `agent_two` chat to save the conversation history in MCP `GetPromptResult` format.
You can also save it in a text format for easier editing.
By using the supplied MCP `prompt-server`, we can reload the saved prompt and apply it to our agent. Add the following to your `fastagent.config.yaml` file:
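The snippet isn't reproduced here; a sketch of the addition, assuming the server is named `prompts` and that `prompt-server` takes the saved history file as its argument (adjust names and paths to match your setup):

```yaml
# MCP Servers
mcp:
  servers:
    prompts:
      command: prompt-server
      args: ["history.json"]
```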
And then update `agent_two.py` to use the new server:
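The key change is adding the new server name to the agent's `servers` list; assuming the decorator sketched above, it would look like:

```python
# agent_two now has access to both agent_one and the prompt-server
@fast.agent(
    name="agent_two",
    instruction="You are a helpful AI Agent.",
    servers=["agent_one", "prompts"],
)
```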
Run `uv run agent_two.py`, and you can then use the `/prompts` command to load the earlier conversation history and continue where you left off.
Note that Prompts can contain any of the MCP Content types, so Images, Audio and other Embedded Resources can be included.
You can also use the Playback LLM to replay an earlier chat (useful for testing!)