use pydantic
All checks were successful
Build and Push Agent Docker Image / build (push) Successful in 2m10s
Build and Push Web Docker Image / build (push) Successful in 3m33s

2025-12-13 01:39:31 +01:00
parent bad9a4e547
commit 79fc89a7f4
8 changed files with 1459 additions and 609 deletions

web/README.md

@@ -1,17 +1,16 @@
 # Cavepedia Web
-Next.js frontend with integrated LangGraph agent for Cavepedia.
+Next.js frontend with integrated PydanticAI agent for Cavepedia.
 ## Project Structure
 ```
 web/
 ├── src/          # Next.js application
-├── agent/        # LangGraph agent (Python)
-│   ├── main.py   # Agent graph definition
-│   ├── langgraph.json
-│   ├── pyproject.toml
-│   └── Dockerfile # Production container
+├── agent/        # PydanticAI agent (Python)
+│   ├── main.py   # Agent definition
+│   ├── server.py # FastAPI server with AG-UI
+│   └── pyproject.toml
 └── ...
 ```
@@ -20,7 +19,7 @@ web/
 - Node.js 24+
 - Python 3.13
 - npm
-- Google AI API Key (for the LangGraph agent)
+- Google AI API Key (for the PydanticAI agent)
 ## Development
@@ -46,77 +45,37 @@ cp agent/.env.example agent/.env
 npm run dev
 ```
-This starts both the Next.js UI and LangGraph agent servers concurrently.
+This starts both the Next.js UI and PydanticAI agent servers concurrently.
 ## Agent Deployment
-The agent is containerized for production deployment.
-### Building the Docker image
-```bash
-cd agent
-docker build -t cavepediav2-agent .
-```
+The agent can be containerized for production deployment.
+### Environment Variables
+| Variable | Required | Description |
+|----------|----------|-------------|
+| `GOOGLE_API_KEY` | Yes | Google AI API key for Gemini |
 ### Running in production
-The agent requires PostgreSQL and Valkey for persistence and pub/sub:
 ```bash
-docker run \
-  -p 8123:8000 \
-  -e REDIS_URI="redis://valkey:6379" \
-  -e DATABASE_URI="postgres://user:pass@postgres:5432/langgraph" \
-  -e GOOGLE_API_KEY="your-key" \
-  -e LANGSMITH_API_KEY="your-key" \
-  cavepediav2-agent
+cd agent
+uv run uvicorn server:app --host 0.0.0.0 --port 8000
 ```
-Or use Docker Compose with the required services:
-```yaml
-services:
-  valkey:
-    image: valkey/valkey:9
-  postgres:
-    image: postgres:16
-    environment:
-      POSTGRES_DB: langgraph
-      POSTGRES_USER: langgraph
-      POSTGRES_PASSWORD: langgraph
-  agent:
-    image: git.seaturtle.pw/cavepedia/cavepediav2-agent:latest
-    ports:
-      - "8123:8000"
-    environment:
-      REDIS_URI: redis://valkey:6379
-      DATABASE_URI: postgres://langgraph:langgraph@postgres:5432/langgraph
-      GOOGLE_API_KEY: ${GOOGLE_API_KEY}
-    depends_on:
-      - valkey
-      - postgres
-```
-### CI/CD
-The agent image is automatically built and pushed to `git.seaturtle.pw/cavepedia/cavepediav2-agent:latest` on push to `main` via Gitea Actions.
 ## Web Deployment
 ### Environment Variables
 | Variable | Required | Default | Description |
 |----------|----------|---------|-------------|
-| `LANGGRAPH_DEPLOYMENT_URL` | Yes | `http://localhost:8000` | URL to the LangGraph agent |
+| `LANGGRAPH_DEPLOYMENT_URL` | Yes | `http://localhost:8000` | URL to the agent |
 | `AUTH0_SECRET` | Yes | - | Session encryption key (`openssl rand -hex 32`) |
 | `AUTH0_DOMAIN` | Yes | - | Auth0 tenant domain |
 | `AUTH0_CLIENT_ID` | Yes | - | Auth0 application client ID |
 | `AUTH0_CLIENT_SECRET` | Yes | - | Auth0 application client secret |
 | `APP_BASE_URL` | Yes | - | Public URL of the app |
-| `LANGSMITH_API_KEY` | No | - | LangSmith API key for tracing |
 ### Docker Compose (Full Stack)
@@ -139,29 +98,14 @@ services:
   agent:
     image: git.seaturtle.pw/cavepedia/cavepediav2-agent:latest
     environment:
-      REDIS_URI: redis://valkey:6379
-      DATABASE_URI: postgres://langgraph:langgraph@postgres:5432/langgraph
       GOOGLE_API_KEY: ${GOOGLE_API_KEY}
-    depends_on:
-      - valkey
-      - postgres
-  valkey:
-    image: valkey/valkey:9
-  postgres:
-    image: postgres:16
-    environment:
-      POSTGRES_DB: langgraph
-      POSTGRES_USER: langgraph
-      POSTGRES_PASSWORD: langgraph
 ```
 ## Available Scripts
 - `dev` - Start both UI and agent servers
 - `dev:ui` - Start only Next.js
-- `dev:agent` - Start only LangGraph agent
+- `dev:agent` - Start only PydanticAI agent
 - `build` - Build Next.js for production
 - `start` - Start production server
 - `lint` - Run ESLint
@@ -169,7 +113,7 @@ services:
 ## References
-- [LangGraph Documentation](https://langchain-ai.github.io/langgraph/)
+- [PydanticAI Documentation](https://ai.pydantic.dev/)
 - [CopilotKit Documentation](https://docs.copilotkit.ai)
 - [Next.js Documentation](https://nextjs.org/docs)
 - [Auth0 Next.js SDK Examples](https://github.com/auth0/nextjs-auth0/blob/main/EXAMPLES.md)

web/agent/langgraph.json

@@ -1,8 +0,0 @@
-{
-  "python_version": "3.13",
-  "image_distro": "wolfi",
-  "dependencies": ["."],
-  "graphs": {
-    "vpi_1000": "./main.py:graph"
-  }
-}

web/agent/main.py

@@ -1,186 +1,26 @@
 """
-This is the main entry point for the agent.
-It defines the workflow graph, state, tools, nodes and edges.
+PydanticAI agent with MCP tools from Cavepedia server.
 """
-from typing import Any, List, Callable, Awaitable
-import json
-from langchain.tools import tool
-from langchain_core.messages import BaseMessage, SystemMessage
-from langchain_core.runnables import RunnableConfig
-from langchain_google_genai import ChatGoogleGenerativeAI
-from langgraph.graph import END, MessagesState, StateGraph, START
-from langgraph.prebuilt import ToolNode, tools_condition
-from langgraph.types import Command
-from langchain_mcp_adapters.client import MultiServerMCPClient
-from langchain_mcp_adapters.interceptors import MCPToolCallRequest, MCPToolCallResult
-class AgentState(MessagesState):
-    """
-    Here we define the state of the agent
-    In this instance, we're inheriting from MessagesState, which will bring in
-    the messages field for conversation history.
-    """
-    tools: List[Any]
-# @tool
-# def your_tool_here(your_arg: str):
-#     """Your tool description here."""
-#     print(f"Your tool logic here")
-#     return "Your tool response here."
-backend_tools = [
-    # your_tool_here
-]
-class RolesHeaderInterceptor:
-    """Interceptor that injects user roles header into MCP tool calls."""
-    def __init__(self, user_roles: list = None):
-        self.user_roles = user_roles or []
-    async def __call__(
-        self,
-        request: MCPToolCallRequest,
-        handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]]
-    ) -> MCPToolCallResult:
-        headers = dict(request.headers or {})
-        if self.user_roles:
-            headers["X-User-Roles"] = json.dumps(self.user_roles)
-        modified_request = request.override(headers=headers)
-        return await handler(modified_request)
-def get_mcp_client(user_roles: list = None):
-    """Create MCP client with user roles header."""
-    return MultiServerMCPClient(
-        {
-            "cavepedia": {
-                "transport": "streamable_http",
-                "url": "https://mcp.caving.dev/mcp",
-                "timeout": 10.0,
-            }
-        },
-        tool_interceptors=[RolesHeaderInterceptor(user_roles)]
-    )
-# Cache for MCP tools per access token
-_mcp_tools_cache = {}
-async def get_mcp_tools(user_roles: list = None):
-    """Lazy load MCP tools with user roles."""
-    roles_key = ",".join(sorted(user_roles)) if user_roles else "default"
-    if roles_key not in _mcp_tools_cache:
-        try:
-            mcp_client = get_mcp_client(user_roles)
-            tools = await mcp_client.get_tools()
-            _mcp_tools_cache[roles_key] = tools
-            print(f"Loaded {len(tools)} tools from MCP server with roles: {user_roles}")
-        except Exception as e:
-            print(f"Warning: Failed to load MCP tools: {e}")
-            _mcp_tools_cache[roles_key] = []
-    return _mcp_tools_cache[roles_key]
-async def chat_node(state: AgentState, config: RunnableConfig) -> dict:
-    """
-    Standard chat node based on the ReAct design pattern. It handles:
-    - The model to use (and binds in CopilotKit actions and the tools defined above)
-    - The system prompt
-    - Getting a response from the model
-    - Handling tool calls
-    For more about the ReAct design pattern, see:
-    https://www.perplexity.ai/search/react-agents-NcXLQhreS0WDzpVaS4m9Cg
-    """
-    # 0. Extract user roles from config.configurable.context
-    configurable = config.get("configurable", {})
-    context = configurable.get("context", {})
-    user_roles = context.get("auth0_user_roles", [])
-    # 1. Define the model
-    model = ChatGoogleGenerativeAI(model="gemini-3-pro-preview", max_output_tokens=65536)
-    # 1.5 Load MCP tools from the cavepedia server with roles
-    mcp_tools = await get_mcp_tools(user_roles)
-    # 2. Bind the tools to the model
-    model_with_tools = model.bind_tools(
-        [
-            *state.get("tools", []),  # bind tools defined by ag-ui
-            *backend_tools,
-            *mcp_tools,  # Add MCP tools from cavepedia server
-        ],
-    )
-    # 3. Define the system message by which the chat model will be run
-    system_message = SystemMessage(
-        content=f"""You are a helpful assistant with access to cave-related information through the Cavepedia MCP server. You can help users find information about caves, caving techniques, and related topics.
-IMPORTANT RULES:
-1. Always cite your sources at the end of each response. List the specific sources/documents you used.
-2. If you cannot find information on a topic, say so clearly. Do NOT make up information or hallucinate facts.
-3. If the MCP tools return no results, acknowledge that you couldn't find the information rather than guessing.
-User roles: {', '.join(user_roles) if user_roles else 'none'}"""
-    )
-    # 4. Run the model to generate a response
-    response = await model_with_tools.ainvoke(
-        [
-            system_message,
-            *state["messages"],
-        ],
-        config,
-    )
-    # 5. Return the response in the messages
-    return {"messages": [response]}
-async def tool_node_wrapper(state: AgentState, config: RunnableConfig) -> dict:
-    """
-    Custom tool node that handles both backend tools and MCP tools.
-    """
-    # Extract user roles from config.configurable.context
-    configurable = config.get("configurable", {})
-    context = configurable.get("context", {})
-    user_roles = context.get("auth0_user_roles", [])
-    # Load MCP tools with roles
-    mcp_tools = await get_mcp_tools(user_roles)
-    all_tools = [*backend_tools, *mcp_tools]
-    # Use the standard ToolNode with all tools
-    node = ToolNode(tools=all_tools)
-    result = await node.ainvoke(state, config)
-    return result
-# Define the workflow graph
-workflow = StateGraph(AgentState)
-workflow.add_node("chat_node", chat_node)
-workflow.add_node("tools", tool_node_wrapper)  # Must be named "tools" for tools_condition
-# Set entry point
-workflow.add_edge(START, "chat_node")
-# Use tools_condition for proper routing
-workflow.add_conditional_edges(
-    "chat_node",
-    tools_condition,
-)
-# After tools execute, go back to chat
-workflow.add_edge("tools", "chat_node")
-graph = workflow.compile()
+from pydantic_ai import Agent
+from pydantic_ai.models.google import GoogleModel
+from pydantic_ai.mcp import MCPServerStreamableHTTP
+# Create MCP server connection to Cavepedia
+mcp_server = MCPServerStreamableHTTP(
+    url="https://mcp.caving.dev/mcp",
+    timeout=30.0,
+)
+# Create the agent with Google Gemini model
+agent = Agent(
+    model=GoogleModel("gemini-2.5-pro"),
+    toolsets=[mcp_server],
+    instructions="""You are a helpful assistant with access to cave-related information through the Cavepedia MCP server. You can help users find information about caves, caving techniques, and related topics.
+IMPORTANT RULES:
+1. Always cite your sources at the end of each response. List the specific sources/documents you used.
+2. If you cannot find information on a topic, say so clearly. Do NOT make up information or hallucinate facts.
+3. If the MCP tools return no results, acknowledge that you couldn't find the information rather than guessing.""",
+)
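
For a quick sanity check of the new agent outside the AG-UI server, a minimal sketch like the following should work. It is not part of this commit, and it assumes a recent pydantic-ai release where entering the agent's async context opens the MCP toolset connection and run results expose `.output`; it also assumes `GOOGLE_API_KEY` is set and `https://mcp.caving.dev/mcp` is reachable.

```python
# Hypothetical smoke test for the new agent in main.py; not part of this commit.
import asyncio

from main import agent


async def smoke_test() -> None:
    # Entering the agent's async context starts the MCP toolset, so the
    # Streamable HTTP session to the Cavepedia server stays open for the run.
    async with agent:
        result = await agent.run("What topics does Cavepedia cover?")
        print(result.output)


if __name__ == "__main__":
    asyncio.run(smoke_test())
```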

web/agent/pyproject.toml

@@ -4,17 +4,8 @@ version = "1.0.0"
 description = "VPI-1000"
 requires-python = ">=3.13,<3.14"
 dependencies = [
-    "langchain==1.1.0",
-    "langgraph==1.0.4",
-    "langsmith>=0.4.49",
-    "anthropic>=0.40.0",
+    "pydantic-ai>=0.1.0",
     "fastapi>=0.115.5,<1.0.0",
     "uvicorn>=0.29.0,<1.0.0",
     "python-dotenv>=1.0.0,<2.0.0",
-    "langchain-google-genai>=2.1.0",
-    "langchain-mcp-adapters>=0.1.0",
-    "docstring-parser>=0.17.0",
-    "jsonschema>=4.25.1",
-    "copilotkit>=0.1.0",
-    "ag-ui-langgraph>=0.0.4",
 ]

web/agent/server.py

@@ -1,35 +1,18 @@
 """
-Self-hosted LangGraph agent server using AG-UI protocol.
+Self-hosted PydanticAI agent server using AG-UI protocol.
 """
 import os
-from fastapi import FastAPI
 import uvicorn
 from dotenv import load_dotenv
-from copilotkit import LangGraphAGUIAgent
-from ag_ui_langgraph import add_langgraph_fastapi_endpoint
-from main import graph
+from pydantic_ai.ui.ag_ui.app import AGUIApp
+from main import agent
 load_dotenv()
-app = FastAPI(title="Cavepedia Agent")
-add_langgraph_fastapi_endpoint(
-    app=app,
-    agent=LangGraphAGUIAgent(
-        name="vpi_1000",
-        description="AI assistant with access to cave-related information through the Cavepedia MCP server",
-        graph=graph,
-    ),
-    path="/",
-)
-@app.get("/health")
-def health():
-    """Health check."""
-    return {"status": "ok"}
+# Convert PydanticAI agent to ASGI app with AG-UI protocol
+app = AGUIApp(agent)
 if __name__ == "__main__":
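
One side effect of this switch is that the explicit `/health` endpoint from the FastAPI version is gone. If a probe endpoint is still wanted, a route can be attached after construction; a hedged sketch, assuming `AGUIApp` subclasses Starlette (as in current pydantic-ai releases) and that Starlette's `add_route` is available:

```python
# Hypothetical re-addition of the removed health check; not part of this commit.
from starlette.requests import Request
from starlette.responses import JSONResponse

from pydantic_ai.ui.ag_ui.app import AGUIApp
from main import agent

app = AGUIApp(agent)


async def health(request: Request) -> JSONResponse:
    """Health check for load balancers and container probes."""
    return JSONResponse({"status": "ok"})


# AGUIApp subclasses Starlette, so standard route registration applies.
app.add_route("/health", health, methods=["GET"])
```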

web/agent/uv.lock (generated, 1730 lines changed)
File diff suppressed because it is too large.

web/package.json

@@ -1,5 +1,5 @@
 {
-  "name": "langgraph-python-starter",
+  "name": "cavepedia-web",
   "version": "0.1.0",
   "private": true,
   "scripts": {
@@ -15,7 +15,6 @@
   },
   "dependencies": {
     "@ag-ui/client": "^0.0.42",
-    "@ag-ui/langgraph": "0.0.18",
     "@auth0/nextjs-auth0": "^4.13.2",
     "@copilotkit/react-core": "1.50.0",
     "@copilotkit/react-ui": "1.50.0",
@@ -31,7 +30,6 @@
   },
   "devDependencies": {
     "@eslint/eslintrc": "^3",
-    "@langchain/langgraph-cli": "^1.0.4",
     "@tailwindcss/postcss": "^4",
     "@types/node": "^20",
     "@types/react": "^19",

web/src/app/api/copilotkit/route.ts

@@ -4,17 +4,15 @@ import {
copilotRuntimeNextJSAppRouterEndpoint, copilotRuntimeNextJSAppRouterEndpoint,
} from "@copilotkit/runtime"; } from "@copilotkit/runtime";
import { LangGraphAgent } from "@ag-ui/langgraph"; import { HttpAgent } from "@ag-ui/client";
import { NextRequest } from "next/server"; import { NextRequest } from "next/server";
const serviceAdapter = new ExperimentalEmptyAdapter(); const serviceAdapter = new ExperimentalEmptyAdapter();
const runtime = new CopilotRuntime({ const runtime = new CopilotRuntime({
agents: { agents: {
vpi_1000: new LangGraphAgent({ vpi_1000: new HttpAgent({
deploymentUrl: process.env.LANGGRAPH_DEPLOYMENT_URL || "http://localhost:8000", url: process.env.LANGGRAPH_DEPLOYMENT_URL || "http://localhost:8000",
graphId: "vpi_1000",
langsmithApiKey: process.env.LANGSMITH_API_KEY || "",
}), }),
}, },
}); });