LlamaIndex is a data framework for your LLM application. With it you can build applications that leverage RAG (retrieval-augmented generation) to super-charge an LLM with your own data. RAG is a powerful application pattern because it lets you harness the capabilities of LLMs such as OpenAI's GPT while grounding them in your own data and use case.

For LlamaIndex, tracing instrumentation is added via an OpenTelemetry instrumentor aptly named LlamaIndexInstrumentor. This instrumentor creates spans and sends them to the Phoenix collector.
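As a minimal setup sketch (assuming the arize-phoenix-otel and openinference-instrumentation-llama-index packages are installed, and a Phoenix instance is reachable at the default collector endpoint):

from phoenix.otel import register
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

# Configure an OpenTelemetry tracer provider pointed at the Phoenix collector.
tracer_provider = register()

# Instrument LlamaIndex; from here on, spans are created automatically
# and exported to Phoenix.
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)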
You can now use LlamaIndex as normal, and tracing will be automatically captured and sent to your Phoenix instance.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
import os

os.environ["OPENAI_API_KEY"] = "YOUR OPENAI API KEY"

# Load documents from a local directory and build a vector index over them.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Query the index; each step is traced and sent to Phoenix.
query_engine = index.as_query_engine()
response = query_engine.query("Some question about the data should go here")
print(response)
Legacy One-Click (<0.10.43)

Using Phoenix as a callback requires an install of `llama-index-callbacks-arize-phoenix>0.1.3`. llama-index 0.10 introduced modular sub-packages, so to use llama-index's one-click integration you must install the small integration package first:
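A likely install command, inferred from the version constraint stated above:

pip install "llama-index-callbacks-arize-phoenix>0.1.3"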
# Phoenix can display in real time the traces automatically
# collected from your LlamaIndex application.
import phoenix as px

# Look for a URL in the output to open the App in a browser.
px.launch_app()

# The App is initially empty, but as you proceed with the steps below,
# traces will appear automatically as your LlamaIndex application runs.
from llama_index.core import set_global_handler

set_global_handler("arize_phoenix")

# Run all of your LlamaIndex applications as usual and traces
# will be collected and displayed in Phoenix.
Legacy (<0.10.0)

If you are using an older version of LlamaIndex (pre-0.10), you can still use Phoenix. You will need arize-phoenix>3.0.0 and must downgrade to openinference-instrumentation-llama-index<1.0.0:
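A sketch of the corresponding install, assuming pip and the version pins stated above:

pip install "arize-phoenix>3.0.0" "openinference-instrumentation-llama-index<1.0.0"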
# Phoenix can display in real time the traces automatically
# collected from your LlamaIndex application.
import phoenix as px

# Look for a URL in the output to open the App in a browser.
px.launch_app()

# The App is initially empty, but as you proceed with the steps below,
# traces will appear automatically as your LlamaIndex application runs.
import llama_index

llama_index.set_global_handler("arize_phoenix")

# Run all of your LlamaIndex applications as usual and traces
# will be collected and displayed in Phoenix.