Get Started: Tracing
Now that you have Phoenix up and running, the next step is to start sending traces from your Python application. Traces let you see what’s happening inside your system, including function calls, LLM requests, tool calls, and other operations.
Launch Phoenix
Before sending traces, make sure Phoenix is running. For step-by-step instructions, check out the Get Started guide.
Log in, create a space, navigate to the settings page in your space, and create your API keys.
In your code, set your environment variables.
import os
os.environ["PHOENIX_API_KEY"] = "ADD YOUR PHOENIX API KEY"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "ADD YOUR PHOENIX Collector endpoint"You can find your collector endpoint here:

Your Collector Endpoint is: https://app.phoenix.arize.com/s/ + your space name.
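For example, if your space were named my-space (a placeholder), you would set:
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com/s/my-space"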
If you installed Phoenix locally, you have a variety of deployment options, including Terminal, Docker, Kubernetes, Railway, and AWS CloudFormation. (Learn more: Self-Hosting)
To host on your local machine, run phoenix serve in your terminal.
Navigate to localhost in your browser (for example, localhost:6006).
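For example, a minimal local setup (assuming you install Phoenix via pip) looks like:
pip install arize-phoenix
phoenix serve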
Install the Phoenix OTEL Package
To collect traces from your application, you must configure an OpenTelemetry TracerProvider to send traces to Phoenix.
pip install arize-phoenix-otel
# npm, pnpm, yarn, etc
npm install @arizeai/phoenix-otel
Set Up Tracing
There are two ways to trace your application: manually, or automatically with an auto-instrumentor. OpenInference provides the auto-instrumentation option through ready-to-use integrations with popular frameworks, so you can capture traces without adding manual logging code.
Phoenix can capture all calls made to supported libraries automatically. Just install the associated library.
pip install openinference-instrumentation-openai
# npm, pnpm, yarn, etc
npm install openai @arizeai/openinference-instrumentation-openai
Update your instrumentation.ts file, registering the instrumentation. Steps will vary depending on whether your project is configured for CommonJS or ESM style module resolution.
// instrumentation.ts
// ... rest of imports
import OpenAI from "openai"
import { registerInstrumentations } from "@arizeai/phoenix-otel";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";
// ... previous code
const instrumentation = new OpenAIInstrumentation();
instrumentation.manuallyInstrument(OpenAI);
registerInstrumentations({
  instrumentations: [instrumentation],
});
// instrumentation.ts
// ... rest of imports
import { registerInstrumentations } from "@arizeai/phoenix-otel";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";
// ... previous code
registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
});
Phoenix supports a variety of frameworks, model providers, and other integrations. For example:
Check out our Integrations page for all Integrations
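Other integrations follow the same install pattern; for instance, to auto-instrument LangChain (just one example among many), install its instrumentor:
pip install openinference-instrumentation-langchain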
Trace your own functions using OpenInference/OpenTelemetry.
Functions can be traced using decorators:
@tracer.chain
def my_func(input: str) -> str:
return "output"Input and output attributes are set automatically based on my_func's parameters and return.
To manually trace your whole application, check out our guide on manual tracing using OpenInference/OpenTelemetry.
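As a minimal sketch of what a manually created span looks like (using the tracer you create in the Register a Tracer step below; the span and attribute names here are illustrative):
with tracer.start_as_current_span("generate-answer") as span:
    span.set_attribute("input.value", "Why is the sky blue?")
    answer = "Rayleigh scattering."  # your real LLM call or logic goes here
    span.set_attribute("output.value", answer)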
Register a Tracer
In your Python code, register Phoenix as the trace provider. This connects your application to Phoenix, creating a project in the UI once you send your first trace, and optionally enables auto-instrumentation (automatic tracing for supported libraries like OpenAI).
from phoenix.otel import register
tracer_provider = register(
    project_name="my-llm-app",
    auto_instrument=True,
)
tracer = tracer_provider.get_tracer(__name__)
In a new file called instrumentation.ts (or .js if applicable):
// instrumentation.ts
import { register } from "@arizeai/phoenix-otel";
const provider = register({
  projectName: "my-llm-app", // Sets the project name in Phoenix UI
});
The register function automatically:
- Reads PHOENIX_COLLECTOR_ENDPOINT and PHOENIX_API_KEY from environment variables
- Configures the collector endpoint (defaults to http://localhost:6006)
- Sets up batch span processing for production use
- Registers the provider globally
Now, import this file at the top of your main program entrypoint, or invoke it with the Node CLI's --require flag:
Import Method:
In main.ts or similar:
import "./instrumentation.ts"In your CLI, script, Dockerfile, etc:
node main.ts
--require Method:
In your CLI, script, Dockerfile, etc:
node --require ./instrumentation.ts main.ts
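On the Python side, register() from phoenix.otel reads the same environment variables and also accepts explicit overrides. As a minimal sketch (the endpoint URL below is a placeholder for your own collector endpoint):
from phoenix.otel import register

# Sketch: pass the endpoint explicitly instead of relying on
# the PHOENIX_COLLECTOR_ENDPOINT environment variable.
tracer_provider = register(
    project_name="my-llm-app",
    endpoint="http://localhost:6006/v1/traces",
)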
Start Your Application
Now that you have set up tracing and your project in Phoenix, it's time to actually invoke your LLM, agent, or application.
First, add your OpenAI API key, then invoke the model.
import os
from getpass import getpass
if not (openai_api_key := os.getenv("OPENAI_API_KEY")):
    openai_api_key = getpass("🔑 Enter your OpenAI API key: ")
    os.environ["OPENAI_API_KEY"] = openai_api_key
import openai
client = openai.OpenAI()
response = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.choices[0].message.content)
In your app code, invoke OpenAI:
// main.ts
import OpenAI from "openai";
// set OPENAI_API_KEY in environment, or pass it in arguments
const openai = new OpenAI();
openai.chat.completions
  .create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Write a haiku." }],
  })
  .then((response) => {
    console.log(response.choices[0].message.content);
  })
  // for demonstration purposes, keep the node process alive long
  // enough for BatchSpanProcessor to flush the trace to Phoenix
  // with its default flush time of 5 seconds
  .then(() => new Promise((resolve) => setTimeout(resolve, 6000)));
Phoenix supports a variety of frameworks, model providers, and other integrations. After installing any of these auto-instrumentors, the next step is to invoke your application and see your traces populate in the Phoenix UI.
Check out our Integrations page for all Integrations
After setting up all your functions to be traced with OpenInference/OpenTelemetry, simply run your application and you should see your traces populate in the Phoenix UI.
View your Traces in Phoenix
You should now see traces in Phoenix!
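As an optional check, you can also pull the received spans programmatically with the Phoenix client (a minimal sketch; the exact client methods available may vary by Phoenix version):
import phoenix as px

# Fetch spans received by Phoenix as a pandas DataFrame.
spans_df = px.Client().get_spans_dataframe()
print(spans_df.head())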

Learn More: