PydanticAI is a Python agent framework designed to make it less painful to build production-grade applications with Generative AI. Built by the team behind Pydantic, it provides a clean, type-safe way to build AI agents with structured outputs.
Set up tracing using OpenTelemetry and the PydanticAI instrumentation:
```python
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from openinference.instrumentation.pydantic_ai import OpenInferenceSpanProcessor

# Set up the tracer provider
tracer_provider = TracerProvider()
trace.set_tracer_provider(tracer_provider)

# Point the OTLP exporter at your Phoenix collector
endpoint = f"{os.environ['PHOENIX_COLLECTOR_ENDPOINT']}/v1/traces"
# If you are using a local instance without auth, ignore these headers
headers = {"Authorization": f"Bearer {os.environ['PHOENIX_API_KEY']}"}
exporter = OTLPSpanExporter(endpoint=endpoint, headers=headers)

# Add the OpenInference span processor and the exporter
tracer_provider.add_span_processor(OpenInferenceSpanProcessor())
tracer_provider.add_span_processor(SimpleSpanProcessor(exporter))
```
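If you want the setup above to also work against a local Phoenix instance without auth, the endpoint and header assembly can be made tolerant of a missing API key. A minimal stdlib-only sketch (the `build_otlp_config` helper is hypothetical, not part of Phoenix; `http://localhost:6006` is assumed as the default local collector address):

```python
import os

def build_otlp_config(env: dict) -> tuple[str, dict]:
    """Assemble the OTLP exporter endpoint and headers from env-style settings.

    Falls back to a local collector address and omits the Authorization
    header when no API key is set (local instances without auth).
    """
    # Assumed default for a local Phoenix instance
    base = env.get("PHOENIX_COLLECTOR_ENDPOINT", "http://localhost:6006")
    endpoint = f"{base.rstrip('/')}/v1/traces"
    api_key = env.get("PHOENIX_API_KEY")
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    return endpoint, headers

endpoint, headers = build_otlp_config(os.environ)
```

The returned `endpoint` and `headers` can then be passed straight to `OTLPSpanExporter` as in the snippet above.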
Here’s a simple example using PydanticAI with automatic tracing:
```python
import os

import nest_asyncio
from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel

nest_asyncio.apply()

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

# Define your Pydantic model
class LocationModel(BaseModel):
    city: str
    country: str

# Create and configure the agent
model = OpenAIModel("gpt-4", provider="openai")
agent = Agent(model, output_type=LocationModel, instrument=True)

# Run the agent
result = agent.run_sync("The windy city in the US of A.")
print(result)
```
For a more advanced workflow, here's an agent that combines a system prompt, structured output, and a tool:

```python
import httpx
from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext
from pydantic_ai.models.openai import OpenAIModel

class WeatherInfo(BaseModel):
    location: str
    temperature: float = Field(description="Temperature in Celsius")
    condition: str
    humidity: int = Field(description="Humidity percentage")

# Create an agent with a system prompt and tools
weather_agent = Agent(
    model=OpenAIModel("gpt-4"),
    output_type=WeatherInfo,
    system_prompt="You are a helpful weather assistant. Always provide accurate weather information.",
    instrument=True,
)

@weather_agent.tool
async def get_weather_data(ctx: RunContext[None], location: str) -> str:
    """Get current weather data for a location."""
    # Mock weather API call - replace with a real weather service
    async with httpx.AsyncClient() as client:
        mock_data = {
            "temperature": 22.5,
            "condition": "partly cloudy",
            "humidity": 65,
        }
        return f"Weather in {location}: {mock_data}"

# Run the agent with tool usage
result = weather_agent.run_sync("What's the weather like in Paris?")
print(result)
```
Now that tracing is set up, all PydanticAI agent operations will be streamed to your running Phoenix instance for observability and evaluation. You'll be able to see:
Agent interactions: Complete conversations between your application and the AI model
Structured outputs: Pydantic model validation and parsing results
Tool usage: When agents call external tools and their responses
Performance metrics: Response times, token usage, and success rates
Error handling: Validation errors, API failures, and retry attempts
Multi-agent workflows: Complex interactions between multiple agents
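To give a feel for the kind of aggregation Phoenix performs over these traces, here is a stdlib-only sketch of computing performance metrics from span-like records. The records and field names below are hypothetical illustrations, not the actual Phoenix span schema:

```python
from statistics import mean

# Hypothetical span records, loosely shaped like exported LLM spans
spans = [
    {"name": "agent run", "latency_ms": 812, "tokens": 154, "status": "OK"},
    {"name": "agent run", "latency_ms": 655, "tokens": 98, "status": "OK"},
    {"name": "agent run", "latency_ms": 1203, "tokens": 301, "status": "ERROR"},
]

def summarize(spans: list[dict]) -> dict:
    """Compute simple metrics: mean latency, total token usage, success rate."""
    ok = [s for s in spans if s["status"] == "OK"]
    return {
        "mean_latency_ms": mean(s["latency_ms"] for s in spans),
        "total_tokens": sum(s["tokens"] for s in spans),
        "success_rate": len(ok) / len(spans),
    }

print(summarize(spans))
```

In Phoenix itself these aggregates are computed for you in the UI; the sketch only illustrates what "response times, token usage, and success rates" mean at the span level.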
The traces will provide detailed insights into your AI agent behaviors, making it easier to debug issues, optimize performance, and ensure reliability in production.