Click Create Space, then follow the prompts to create and launch your space.
Install packages:
```shell
pip install arize-phoenix-otel
```
Set your Phoenix endpoint and API key from your new Phoenix Space:
Create your API key from the Settings page
Copy your Hostname from the Settings page
In your code, set your endpoint and API key:
```python
import os

os.environ["PHOENIX_API_KEY"] = "ADD YOUR PHOENIX API KEY"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "ADD YOUR PHOENIX HOSTNAME"

# If you created your Phoenix Cloud instance before June 24th, 2025,
# you also need to set the API key as a header:
# os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={os.getenv('PHOENIX_API_KEY')}"
```
For more info on using Phoenix with Docker, see Docker.
Install packages:
```shell
pip install arize-phoenix
```
Launch Phoenix:
```python
import phoenix as px

px.launch_app()
```
By default, notebook instances do not have persistent storage, so your traces will disappear after the notebook is closed. See self-hosting or use one of the other deployment options to retain traces.
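If you choose to self-host, one minimal sketch is to run Phoenix as a standalone server with a persistent working directory, then point your notebook at it. The directory path below is an example, not a requirement:

```shell
# Sketch: run Phoenix as a standalone server so traces persist
# across notebook restarts. The directory below is an example path.
export PHOENIX_WORKING_DIR="$HOME/.phoenix"

# Start the Phoenix server (UI and collector listen on port 6006 by default)
python -m phoenix.server.main serve
```

Your notebook can then connect to this instance by setting `PHOENIX_COLLECTOR_ENDPOINT` to `http://localhost:6006`.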
Connect to your Phoenix instance using the register function.
```python
from phoenix.otel import register

# Configure the Phoenix tracer
tracer_provider = register(
    project_name="agentchat-agent",  # Default is 'default'
    auto_instrument=True,  # Auto-instrument your app based on installed OI dependencies
)
```
We’re going to run an AgentChat example using a multi-agent team. To get started, install the required packages to use your LLMs with AgentChat. In this example, we’ll use OpenAI as the LLM provider.
```shell
pip install autogen-agentchat "autogen-ext[openai]"
```
```python
import asyncio
import os

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient

os.environ["OPENAI_API_KEY"] = "your-api-key"


async def main():
    model_client = OpenAIChatCompletionClient(
        model="gpt-4",
    )

    # Create two agents: a primary and a critic
    primary_agent = AssistantAgent(
        "primary",
        model_client=model_client,
        system_message="You are a helpful AI assistant.",
    )
    critic_agent = AssistantAgent(
        "critic",
        model_client=model_client,
        system_message="""
        Provide constructive feedback.
        Respond with 'APPROVE' when your feedback is addressed.
        """,
    )

    # Termination condition: stop when the critic says "APPROVE"
    text_termination = TextMentionTermination("APPROVE")

    # Create a team with both agents
    team = RoundRobinGroupChat(
        [primary_agent, critic_agent],
        termination_condition=text_termination,
    )

    # Run the team on a task
    result = await team.run(task="Write a short poem about the fall season.")
    await model_client.close()
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```