VertexAI Tracing
Instrument LLM calls made using the VertexAI SDK via the VertexAIInstrumentor.
The VertexAI SDK can be instrumented using the openinference-instrumentation-vertexai package.
Launch Phoenix
Install
pip install openinference-instrumentation-vertexai vertexai
Setup
See Google's guide on setting up your environment for the Google Cloud AI Platform. You can also store your Project ID in the CLOUD_ML_PROJECT_ID environment variable.
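For example, in a bash shell (the project ID below is a placeholder, not a value from this guide):

```shell
# Store your Google Cloud project ID (placeholder value) so the VertexAI SDK can pick it up
export CLOUD_ML_PROJECT_ID="my-gcp-project"

# Authenticate with Application Default Credentials for the Google Cloud AI Platform
gcloud auth application-default login
```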
Use the register function to connect your application to Phoenix:
from phoenix.otel import register
# configure the Phoenix tracer
tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
    auto_instrument=True,  # Auto-instrument your app based on installed OI dependencies
)
Run VertexAI
import vertexai
from vertexai.generative_models import GenerativeModel
vertexai.init(location="us-central1")
model = GenerativeModel("gemini-1.5-flash")
print(model.generate_content("Why is sky blue?").text)
Observe
Now that you have tracing set up, all invocations of Vertex models will be streamed to your running Phoenix instance for observability and evaluation.
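If you prefer explicit instrumentation over auto_instrument=True, the VertexAIInstrumentor mentioned above can be attached to the tracer provider directly. A minimal sketch, assuming the standard OpenInference instrumentor API:

```python
from phoenix.otel import register
from openinference.instrumentation.vertexai import VertexAIInstrumentor

# Connect to Phoenix without relying on auto-instrumentation
tracer_provider = register(project_name="my-llm-app")

# Explicitly instrument the VertexAI SDK so its calls emit spans
VertexAIInstrumentor().instrument(tracer_provider=tracer_provider)
```

Explicit instrumentation is useful when you want to control exactly which libraries are traced rather than instrumenting everything installed.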