Prerequisites
Java 11 or higher
(Optional) Phoenix API key if using auth
Add Dependencies
Add the dependencies to your build.gradle:
dependencies {
    // OpenInference instrumentation
    implementation project(path: ':instrumentation:openinference-instrumentation-langchain4j')

    // LangChain4j
    implementation "dev.langchain4j:langchain4j:${langchain4jVersion}"
    implementation "dev.langchain4j:langchain4j-open-ai:${langchain4jVersion}"

    // OpenTelemetry
    implementation "io.opentelemetry:opentelemetry-sdk"
    implementation "io.opentelemetry:opentelemetry-exporter-otlp"
    implementation "io.opentelemetry:opentelemetry-exporter-logging"
}
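The OpenTelemetry artifacts above are declared without versions, which assumes their versions are supplied elsewhere in the build, typically via the OpenTelemetry BOM. A minimal sketch of that platform import (the version shown is illustrative; use whichever BOM version your project targets):
dependencies {
    // Illustrative: aligns the unversioned io.opentelemetry artifacts declared above
    implementation platform("io.opentelemetry:opentelemetry-bom:1.38.0")
}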
Setup Phoenix
Pull the latest Phoenix image from Docker Hub: docker pull arizephoenix/phoenix:latest
Run your containerized instance: docker run -p 6006:6006 -p 4317:4317 arizephoenix/phoenix:latest
This command:
Exposes port 6006 for the Phoenix web UI
Exposes port 4317 for the OTLP gRPC endpoint (where traces are sent)
For more information on using Phoenix with Docker, see the Docker documentation.
Alternatively, sign up for Phoenix Cloud: click Create Space, then follow the prompts to create and launch your space.
Set your Phoenix endpoint and API key. From your new Phoenix Space:
Create your API key from the Settings page
Copy your Hostname from the Settings page
Then export your endpoint and API key:
export PHOENIX_API_KEY="your-phoenix-api-key"
export PHOENIX_COLLECTOR_ENDPOINT="your-phoenix-endpoint"
If you are using Phoenix Cloud, adjust the endpoint in the code below as needed.
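One way to make that adjustment is to read the endpoint from the PHOENIX_COLLECTOR_ENDPOINT variable rather than hard-coding it. A minimal sketch, using a hypothetical helper whose result can be passed to setEndpoint(...) in the configuration below (the fallback URL mirrors the local Docker setup above):
// Hypothetical helper: resolve the OTLP endpoint from the environment,
// falling back to the local Docker collector endpoint.
static String resolveCollectorEndpoint() {
    String endpoint = System.getenv("PHOENIX_COLLECTOR_ENDPOINT");
    return (endpoint != null && !endpoint.isEmpty()) ? endpoint : "http://localhost:4317";
}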
Configuration for Phoenix Tracing
private static void initializeOpenTelemetry() {
    // Create resource with service name
    Resource resource = Resource.getDefault()
        .merge(Resource.create(Attributes.of(
            AttributeKey.stringKey("service.name"), "langchain4j",
            AttributeKey.stringKey(SEMRESATTRS_PROJECT_NAME), "langchain4j-project",
            AttributeKey.stringKey("service.version"), "0.1.0")));

    String apiKey = System.getenv("PHOENIX_API_KEY");

    OtlpGrpcSpanExporterBuilder otlpExporterBuilder = OtlpGrpcSpanExporter.builder()
        .setEndpoint("http://localhost:4317") // adjust as needed
        .setTimeout(Duration.ofSeconds(2));

    OtlpGrpcSpanExporter otlpExporter;
    if (apiKey != null && !apiKey.isEmpty()) {
        otlpExporter = otlpExporterBuilder
            .setHeaders(() -> Map.of("Authorization", String.format("Bearer %s", apiKey)))
            .build();
    } else {
        logger.log(Level.WARNING, "Please set PHOENIX_API_KEY environment variable if auth is enabled.");
        otlpExporter = otlpExporterBuilder.build();
    }

    // Create tracer provider with both OTLP (for Phoenix) and console exporters
    tracerProvider = SdkTracerProvider.builder()
        .addSpanProcessor(BatchSpanProcessor.builder(otlpExporter)
            .setScheduleDelay(Duration.ofSeconds(1))
            .build())
        .addSpanProcessor(SimpleSpanProcessor.create(LoggingSpanExporter.create()))
        .setResource(resource)
        .build();

    // Build the OpenTelemetry SDK and register it globally
    OpenTelemetrySdk.builder()
        .setTracerProvider(tracerProvider)
        .setPropagators(ContextPropagators.create(W3CTraceContextPropagator.getInstance()))
        .buildAndRegisterGlobal();

    System.out.println("OpenTelemetry initialized. Traces will be sent to Phoenix at http://localhost:6006");
}
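Because the BatchSpanProcessor exports spans asynchronously, it can help to shut the SDK down before the JVM exits so any buffered spans are flushed to Phoenix. A minimal sketch, assuming the tracerProvider field built above is accessible:
// Flush and shut down the tracer provider on JVM exit so buffered spans reach Phoenix.
// Assumes `tracerProvider` is the SdkTracerProvider built in initializeOpenTelemetry().
Runtime.getRuntime().addShutdownHook(new Thread(() ->
    tracerProvider.shutdown().join(10, java.util.concurrent.TimeUnit.SECONDS)));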
Run LangChain4j
Once your application is instrumented, spans are created each time it runs and sent to the Phoenix server for collection.
import io.openinference.instrumentation.langchain4j.LangChain4jInstrumentor;
import dev.langchain4j.model.openai.OpenAiChatModel;

initializeOpenTelemetry();

// Auto-instrument LangChain4j
LangChain4jInstrumentor.instrument();

// Use LangChain4j as normal - traces will be automatically created
OpenAiChatModel model = OpenAiChatModel.builder()
    .apiKey("your-openai-api-key")
    .modelName("gpt-4")
    .build();

String response = model.generate("What is the capital of France?");
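Instrumented calls can also be grouped under a parent span using the standard OpenTelemetry API, assuming the instrumentation picks up the current context as the parent (typical OpenTelemetry behavior). A minimal sketch; the tracer name and span name are illustrative:
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

// Instrumentation scope name is illustrative; any name works
Tracer tracer = GlobalOpenTelemetry.getTracer("example-app");

Span parent = tracer.spanBuilder("answer-geography-question").startSpan();
try (Scope scope = parent.makeCurrent()) {
    // The span created by the LangChain4j instrumentation becomes a child of `parent`
    String response = model.generate("What is the capital of France?");
} finally {
    parent.end();
}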
Observe
Once configured, your traces will be automatically sent to Phoenix where you can:
Monitor Performance: Track latency, throughput, and error rates
Analyze Usage: View token usage, model performance, and cost metrics
Debug Issues: Trace request flows and identify bottlenecks
Evaluate Quality: Run evaluations on your LLM outputs
Resources