

Use Playground when you want a fast UI loop: load a prompt, run it on data, score with evaluators, compare runs, and keep the best setup.

Run an experiment

1. Load a prompt and dataset. Open Playground, load your prompt, then select a dataset (or replay a production span).
2. Attach evaluators. Add evaluators so each output is scored automatically.
3. Run and inspect. Click Run, then open View Experiment to review outputs, latency, tokens, and evaluator scores.
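To make the "attach evaluators" step concrete, here is a minimal sketch of what an evaluator does: a function that takes one output and returns a score. The function names and row fields below are illustrative, not Arize's actual evaluator API.

```python
# Illustrative evaluators: each one scores a single model output.
# (Names and signatures are hypothetical, not the Arize SDK.)

def exact_match(output: str, expected: str) -> float:
    """Return 1.0 if the output matches the expected answer exactly."""
    return 1.0 if output.strip() == expected.strip() else 0.0

def contains_keywords(output: str, keywords: list[str]) -> float:
    """Fraction of required keywords that appear in the output."""
    if not keywords:
        return 1.0
    hits = sum(1 for k in keywords if k.lower() in output.lower())
    return hits / len(keywords)

# Score each row of a small dataset, as the Playground does per output.
rows = [
    {"output": "Paris is the capital of France.",
     "expected": "Paris is the capital of France.",
     "keywords": ["Paris", "France"]},
    {"output": "It is Lyon.",
     "expected": "Paris is the capital of France.",
     "keywords": ["Paris", "France"]},
]
scores = [
    {"exact_match": exact_match(r["output"], r["expected"]),
     "keywords": contains_keywords(r["output"], r["keywords"])}
    for r in rows
]
```

Each output gets one score per attached evaluator, which is what the View Experiment table shows alongside latency and tokens.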

Compare experiments

Once you have multiple runs on the same dataset, open Compare Experiments to inspect:
  • Output differences
  • Evaluator deltas
  • Summary metrics by run
  • Regressions vs baseline
If you need full programmatic workflows, use Experiment in code.
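As a sketch of what the comparison computes, the snippet below derives per-row evaluator deltas and regressions against a baseline run. The row IDs and scores are made up for illustration; in practice they come from your evaluator runs on a shared dataset.

```python
# Sketch: comparing two experiment runs on the same dataset.
# Row IDs and scores are illustrative placeholders.

baseline = {"row-1": 1.0, "row-2": 0.0, "row-3": 1.0}   # evaluator score per row
candidate = {"row-1": 1.0, "row-2": 1.0, "row-3": 0.0}

# Evaluator deltas: positive means the candidate improved on that row.
deltas = {row: candidate[row] - baseline[row] for row in baseline}

# Regressions vs baseline: rows where the candidate scored lower.
regressions = [row for row, d in deltas.items() if d < 0]

# Summary metrics by run.
summary = {
    "baseline_mean": sum(baseline.values()) / len(baseline),
    "candidate_mean": sum(candidate.values()) / len(candidate),
    "regressions": regressions,
}
```

A run can improve its mean score while still regressing on individual rows, which is why the compare view surfaces per-row deltas and not just the summary metrics.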

Playground views

A Playground View is a named snapshot of your Playground session. It stores:
  • Prompt messages and tools
  • Model and parameter configuration
  • Dataset or span context
  • Generated results and scores
Use views to save progress, share a setup, or branch experiments without rebuilding from scratch.
From the Playgrounds list you can rename, duplicate, or delete saved views.