Documentation Index

Fetch the complete documentation index at: https://arize-ax.mintlify.dev/docs/llms.txt

Use this file to discover all available pages before exploring further.

The experiments functions are currently in beta; the API may change without notice. A one-time warning is emitted on first use.

List Experiments

import { listExperiments } from "@arizeai/ax-client";

// By dataset ID
const experiments = await listExperiments({
  dataset: "your_dataset_id",
  limit: 10,
});

// By dataset name (requires space)
const experimentsByName = await listExperiments({
  dataset: "my-dataset",
  space: "my-space",
});

Create an Experiment

import { createExperiment } from "@arizeai/ax-client";

// Using dataset ID
const experiment = await createExperiment({
  experimentName: "your_experiment",
  dataset: "your_dataset_id",
  experimentRuns: [{ exampleId: "your_example_id", output: "output" }],
});

// Using dataset name (requires space)
const experimentByName = await createExperiment({
  experimentName: "your_experiment",
  dataset: "my-dataset",
  space: "my-space",
  experimentRuns: [{ exampleId: "your_example_id", output: "output" }],
});
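In practice, the `experimentRuns` payload is usually built from model outputs rather than written by hand. A minimal sketch, assuming each run pairs an example ID with a string output as in the example above (the `buildRuns` helper and the `Map` input shape are illustrative, not part of the client library):

```typescript
// Hypothetical helper: turn a map of exampleId -> model output into the
// experimentRuns array expected by createExperiment.
interface RunInput {
  exampleId: string;
  output: string;
}

function buildRuns(outputs: Map<string, string>): RunInput[] {
  // Map iteration preserves insertion order, so runs stay in the
  // order the examples were processed.
  return Array.from(outputs, ([exampleId, output]) => ({ exampleId, output }));
}
```

The resulting array can be passed directly as `experimentRuns` in the `createExperiment` call.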

Get an Experiment

import { getExperiment } from "@arizeai/ax-client";

const experiment = await getExperiment({ experiment: "your_experiment_id" });

Delete an Experiment

import { deleteExperiment } from "@arizeai/ax-client";

await deleteExperiment({ experiment: "your_experiment_id" });

List Experiment Runs

import { listExperimentRuns } from "@arizeai/ax-client";

const experimentRuns = await listExperimentRuns({
  experiment: "your_experiment_id",
  limit: 10,
});

Annotate Experiment Runs

Write human annotations to a batch of runs in an experiment. Annotations are upserted by annotation config name for each run; submitting the same name for the same run overwrites the previous value. Up to 500 runs may be annotated per request.

import { annotateExperimentRuns } from "@arizeai/ax-client";

const result = await annotateExperimentRuns({
  experiment: "my-experiment",  // experiment name or ID
  dataset: "my-dataset",        // optional, used to resolve experiment by name
  space: "my-space",            // optional, used to resolve dataset by name
  annotations: [
    {
      recordId: "your_run_id",
      values: [
        { name: "accuracy", label: "correct", score: 1.0 },
        { name: "notes", text: "Well-structured output" },
      ],
    },
  ],
});
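Since at most 500 runs may be annotated per request, a larger batch needs to be split before calling the API. A sketch of that splitting step (the `chunkAnnotations` helper is an assumption for illustration, not part of the client library):

```typescript
// Hypothetical helper: split an annotation list into chunks of at most
// 500 entries, the documented per-request limit.
function chunkAnnotations<T>(items: T[], size = 500): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Usage sketch, reusing annotateExperimentRuns from the example above:
// for (const batch of chunkAnnotations(allAnnotations)) {
//   await annotateExperimentRuns({ experiment: "my-experiment", annotations: batch });
// }
```

Because annotations are upserted by config name per run, re-sending a batch after a partial failure is safe: successfully written values are simply overwritten with the same data.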