The annotation_configs client methods are currently in BETA. The API may change without notice. A one-time warning is emitted on first use.
Annotation configs define custom labels for capturing human feedback. They allow teams and subject matter experts to label data and curate high-quality datasets.
Key Capabilities
- Create and manage annotation configs within spaces
- List annotation configs with pagination support
- Retrieve annotation configs by ID
- Delete annotation configs when no longer needed
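The capabilities above compose into a simple lifecycle. A minimal sketch, using only the methods documented below — `client` is assumed to be an already initialized Arize client, the space value is a placeholder, and `config_type` is passed in (e.g. `AnnotationConfigType.FREEFORM`):

```python
def lifecycle_demo(client, config_type, space="your-space-name-or-id"):
    # Create a config, fetch it back by name, then clean it up.
    created = client.annotation_configs.create(
        name="demo-notes",
        space=space,
        config_type=config_type,
    )
    fetched = client.annotation_configs.get(
        annotation_config="demo-notes",
        space=space,  # required because a name, not an ID, is used
    )
    client.annotation_configs.delete(
        annotation_config="demo-notes",
        space=space,
    )
    return created, fetched
```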
List Annotation Configs
List all annotation configs you have access to, with optional filtering by space or name.
resp = client.annotation_configs.list(
    space="your-space-name-or-id",  # optional
    name="accuracy",  # optional substring filter
    limit=50,
)
print(resp)
For details on pagination, field introspection, and data conversion (to dict/JSON/DataFrame), see Response Objects.
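As one local post-processing example: because the `name` parameter filters by substring, an exact-name match can be done client-side once the response is converted to plain dicts. The `to_dict()` call and the record shape here are assumptions — see Response Objects for the actual conversion API:

```python
def filter_by_exact_name(records, name):
    # `records` is assumed to be a list of dicts that each carry a
    # "name" key (e.g. a converted list response); this shape is an
    # assumption, not part of the documented API.
    return [r for r in records if r.get("name") == name]
```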
Create an Annotation Config
Create a new annotation config within a space. Annotation config names must be unique within the target space.
Three types are supported: categorical, continuous, and freeform.
Categorical
A categorical annotation config with discrete labeled values and optional scores.
from arize.annotation_configs.types import (
    AnnotationConfigType,
    CategoricalAnnotationValue,
)

resp = client.annotation_configs.create(
    name="accuracy",
    space="your-space-name-or-id",
    config_type=AnnotationConfigType.CATEGORICAL,
    values=[
        CategoricalAnnotationValue(label="accurate", score=1),
        CategoricalAnnotationValue(label="inaccurate", score=0),
    ],
    optimization_direction="maximize",
)
print(resp)
Continuous
A continuous annotation config with a numeric score range.
from arize.annotation_configs.types import AnnotationConfigType

resp = client.annotation_configs.create(
    name="relevance",
    space="your-space-name-or-id",
    config_type=AnnotationConfigType.CONTINUOUS,
    minimum_score=0.0,
    maximum_score=1.0,
    optimization_direction="maximize",
)
print(resp)
Freeform
A freeform annotation config for free-text feedback with no structured scoring.
from arize.annotation_configs.types import AnnotationConfigType

resp = client.annotation_configs.create(
    name="reviewer-notes",
    space="your-space-name-or-id",
    config_type=AnnotationConfigType.FREEFORM,
)
print(resp)
Get an Annotation Config
Retrieve a specific annotation config by name or ID. When using a name, provide space to disambiguate.
resp = client.annotation_configs.get(
    annotation_config="annotation-config-name-or-id",
    space="your-space-name-or-id",  # required when using a name
)
print(resp)
Delete an Annotation Config
Delete an annotation config by name or ID. This operation is irreversible and returns no response.
client.annotation_configs.delete(
    annotation_config="annotation-config-name-or-id",
    space="your-space-name-or-id",  # required when using a name
)
print("Annotation config deleted successfully")
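Because deletion is irreversible, a defensive wrapper that verifies existence before deleting can be useful. A sketch only: it assumes the list response converts to a list of dicts with a "name" key via `to_dict()` (see Response Objects for the real conversion API — this shape is an assumption):

```python
def delete_if_exists(client, name, space):
    # List with a substring filter, then check for an exact name match,
    # since the documented `name` parameter filters by substring.
    resp = client.annotation_configs.list(space=space, name=name)
    names = {r.get("name") for r in resp.to_dict()}  # assumed shape
    if name in names:
        client.annotation_configs.delete(annotation_config=name, space=space)
        return True
    return False
```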