Semantic conventions are standardized attribute names and values that ensure consistent tracing across different LLM providers, models, and frameworks. Different instrumentation standards use different semantic conventions to describe LLM operations. Phoenix uses OpenInference semantic conventions as its standard format, so traces produced by other libraries must be translated to OpenInference using span processors before they display consistently in Phoenix.
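To make the idea concrete, here is a minimal, illustrative sketch of what such a translation does. The attribute names follow the OTel GenAI and OpenInference conventions, but the mapping table and the translate helper are hypothetical, not the libraries' actual implementation:

# Illustrative only: a hypothetical mapping from OTel GenAI attribute names
# to their OpenInference equivalents. The real conversion is performed by
# the OpenInference span processors shown below.
GENAI_TO_OPENINFERENCE = {
    "gen_ai.request.model": "llm.model_name",
    "gen_ai.usage.input_tokens": "llm.token_count.prompt",
    "gen_ai.usage.output_tokens": "llm.token_count.completion",
}

def translate(attributes: dict) -> dict:
    """Rename known GenAI span attributes to their OpenInference names."""
    return {GENAI_TO_OPENINFERENCE.get(key, key): value for key, value in attributes.items()}

translate({"gen_ai.request.model": "gpt-4o-mini", "gen_ai.usage.input_tokens": 12})
# -> {"llm.model_name": "gpt-4o-mini", "llm.token_count.prompt": 12}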
Start Phoenix in the background as a collector. By default, it listens on http://localhost:6006, and you can visit the app in your browser at the same address.
phoenix serve
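If you prefer launching Phoenix from Python instead of the CLI (for example, inside a notebook), the phoenix package exposes a launch_app helper; a minimal sketch:

import phoenix as px

# Start a local Phoenix server in the current process; returns a session object.
session = px.launch_app()
print(session.url)  # typically http://localhost:6006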
Configure the tracer provider and add the span processors. The OpenInferenceSpanProcessor converts OpenLIT traces to OpenInference format, and the BatchSpanProcessor exports them to Phoenix via the OTLP gRPC endpoint:
import os

import grpc
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from phoenix.otel import register
from openinference.instrumentation.openlit import OpenInferenceSpanProcessor

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

# Set up the tracer provider
tracer_provider = register(
    project_name="default"  # Phoenix project name
)

# Add the OpenInference span processor first to convert OpenLIT traces
tracer_provider.add_span_processor(OpenInferenceSpanProcessor())

# Add the batch span processor to export traces to Phoenix (OTLP gRPC endpoint)
tracer_provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="http://localhost:4317",  # Phoenix OTLP gRPC endpoint (for Phoenix Cloud, use the endpoint from your settings)
            headers={},
            compression=grpc.Compression.Gzip,
        )
    )
)
Initialize OpenLIT with the tracer and set up Semantic Kernel:
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
import openlit

# Initialize the OpenLIT tracer
tracer = tracer_provider.get_tracer(__name__)
openlit.init(tracer=tracer)

# Set up Semantic Kernel with OpenLIT
kernel = Kernel()
kernel.add_service(
    OpenAIChatCompletion(
        service_id="default",
        ai_model_id="gpt-4o-mini",
    ),
)
Invoke your model and view the converted traces in Phoenix:
# Define and invoke your model (top-level await assumes an async context,
# e.g. a notebook; in a script, wrap this in asyncio.run)
result = await kernel.invoke_prompt(
    prompt="What is the national food of Yemen?",
    arguments={},
)

# Now view your converted OpenLIT traces in Phoenix!
The traces will be visible in the Phoenix UI at http://localhost:6006.
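Note that BatchSpanProcessor exports spans asynchronously, so a short-lived script can exit before the final batch is sent. If traces are missing, force a flush before the process exits (force_flush is part of the standard OTel SDK TracerProvider API):

# Block until all buffered spans have been exported to Phoenix.
tracer_provider.force_flush()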
The same pattern applies to OpenLLMetry traces. Start Phoenix in the background as a collector. By default, it listens on http://localhost:6006, and you can visit the app in your browser at the same address. (Phoenix does not send data over the internet; it operates only locally on your machine.)
phoenix serve
Configure the tracer provider and add the span processors. The OpenInferenceSpanProcessor converts OpenLLMetry traces to OpenInference format, and the BatchSpanProcessor exports them to Phoenix via the OTLP gRPC endpoint:
import os

import grpc
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from phoenix.otel import register
from openinference.instrumentation.openllmetry import OpenInferenceSpanProcessor

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

# Set up the tracer provider
tracer_provider = register(
    project_name="default"  # Phoenix project name
)

# Add the OpenInference span processor first to convert OpenLLMetry traces
tracer_provider.add_span_processor(OpenInferenceSpanProcessor())

# Add the batch span processor to export traces to Phoenix (OTLP gRPC endpoint)
tracer_provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="http://localhost:4317",  # Phoenix OTLP gRPC endpoint (for Phoenix Cloud, use the endpoint from your settings)
            headers={},
            compression=grpc.Compression.Gzip,
        )
    )
)
Initialize the OpenAI instrumentor with the tracer provider to generate OpenLLMetry traces:
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
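If you later need to stop emitting OpenLLMetry spans (for example, in tests), instrumentors built on the OTel BaseInstrumentor can be disabled again:

# Remove the OpenAI instrumentation installed above.
OpenAIInstrumentor().uninstrument()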
Invoke your model and view the converted traces in Phoenix:
import openai

# Define and invoke your OpenAI model
client = openai.OpenAI()

messages = [
    {"role": "user", "content": "What is the national food of Yemen?"}
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=messages,
)

# Now view your converted OpenLLMetry traces in Phoenix!
The traces will be visible in the Phoenix UI at http://localhost:6006.
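Beyond the UI, you can pull the collected spans back out of a local Phoenix instance for quick verification. A sketch assuming a local Phoenix at the default endpoint, using the phoenix client's spans DataFrame helper:

import phoenix as px

# Fetch the spans collected by the local Phoenix instance as a pandas DataFrame.
df = px.Client().get_spans_dataframe()
print(df.head())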
Convert OpenTelemetry GenAI span attributes to OpenInference format using the @arizeai/openinference-genai package for TypeScript/JavaScript applications. This example creates a custom TraceExporter that converts OpenTelemetry GenAI spans to OpenInference spans before they are exported to Phoenix.
Start Phoenix in the background as a collector. By default, it listens on http://localhost:6006, and you can visit the app in your browser at the same address.
phoenix serve
Create a custom TraceExporter file (e.g., openinferenceOTLPTraceExporter.ts) that converts the OpenTelemetry GenAI attributes to OpenInference attributes:
// openinferenceOTLPTraceExporter.ts
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import type { ReadableSpan } from "@opentelemetry/sdk-trace-base";
import type { ExportResult } from "@opentelemetry/core";
import { convertGenAISpanAttributesToOpenInferenceSpanAttributes } from "@arizeai/openinference-genai";
import type { Mutable } from "@arizeai/openinference-genai/types";

export class OpenInferenceOTLPTraceExporter extends OTLPTraceExporter {
  export(
    spans: ReadableSpan[],
    resultCallback: (result: ExportResult) => void,
  ) {
    const processedSpans = spans.map((span) => {
      const processedAttributes =
        convertGenAISpanAttributesToOpenInferenceSpanAttributes(
          span.attributes,
        );
      // Optionally, you can replace the entire attributes object with the
      // processed attributes if you want _only_ the OpenInference attributes.
      (span as Mutable<ReadableSpan>).attributes = {
        ...span.attributes,
        ...processedAttributes,
      };
      return span;
    });
    super.export(processedSpans, resultCallback);
  }
}
Use the custom exporter in a SpanProcessor and configure the tracer provider. Set the COLLECTOR_ENDPOINT environment variable to your Phoenix endpoint (e.g., http://localhost:6006 for local Phoenix):
// instrumentation.ts
import { resourceFromAttributes } from "@opentelemetry/resources";
import {
  NodeTracerProvider,
  BatchSpanProcessor,
} from "@opentelemetry/sdk-trace-node";
import { ATTR_SERVICE_NAME } from "@opentelemetry/semantic-conventions";
import { SEMRESATTRS_PROJECT_NAME } from "@arizeai/openinference-semantic-conventions";
import { OpenInferenceOTLPTraceExporter } from "./openinferenceOTLPTraceExporter";

const COLLECTOR_ENDPOINT = process.env.COLLECTOR_ENDPOINT;
const SERVICE_NAME = "openinference-genai-app";

export const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    [ATTR_SERVICE_NAME]: SERVICE_NAME,
    [SEMRESATTRS_PROJECT_NAME]: SERVICE_NAME,
  }),
  spanProcessors: [
    new BatchSpanProcessor(
      new OpenInferenceOTLPTraceExporter({
        url: `${COLLECTOR_ENDPOINT}/v1/traces`,
      }),
    ),
  ],
});

provider.register();
Once your application is running and generating traces, the converted OpenTelemetry GenAI traces will be visible in the Phoenix UI. The custom exporter automatically converts GenAI span attributes to OpenInference format before exporting to Phoenix.