This process is not for production. Use it only for testing or development.
This guide walks you through installing Arize AX on a single host (one VM) using k3s. You will install k3s, Helm, and MinIO for object storage, then deploy Arize AX from the distribution.

Prerequisites

Before starting, have the following ready:
  • Arize distribution access — JWT token for downloading the distribution from Arize
  • Passwords and secrets — You will choose a MinIO password, Postgres password, and encryption key (all base64-encoded in values.yaml)
  • Organization name — Name of your organization or company (for values.yaml)
  • App URL — The URL you will use to reach the Arize UI (e.g. https://arize-app.yourdomain.com). This can be a hostname you map to a private IP (see Step 8) if the VM has no public address.
  • Network access to the VM — If the VM only has a private IP (for example 10.x.x.x, 172.16.x.x, or 192.168.x.x), your browser, SDKs, and any clients must run on a host that can reach that address (same VPC or subnet, site-to-site VPN, or client VPN into the cloud network). You do not need a public IP for this guide.

Step 1: Create the virtual machine

Create a single VM with these specifications:
  • Size — 16 vCPU, 128 GB RAM, e.g. n2d-highmem-16 (GCP), r7a.4xlarge (AWS), Standard_E16s_v5 (Azure)
  • OS — Debian base image
  • Boot disk — 500 GB
  • Network — Allow HTTP and HTTPS traffic to the VM. If the machine uses a private IP only, restrict ingress to trusted CIDRs (for example your VPC, office, or VPN range) instead of the open internet. If the VM has a public IP, you can allow HTTP/HTTPS from the internet or from specific IPs, depending on your policy.
  • Access — SSH (port 22). Add firewall rules and an SSH key as required by your cloud provider. For a private-IP-only host, allow SSH from your bastion, VPN, or admin network.
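As one illustration, on GCP a VM matching the specifications above could be created with gcloud. The instance name, zone, and network tags here are hypothetical placeholders, and the firewall rules behind the tags must already exist in your project:

```shell
# Hypothetical example: adjust name, zone, and tags to your environment.
gcloud compute instances create arize-ax-host \
  --zone=us-central1-a \
  --machine-type=n2d-highmem-16 \
  --image-family=debian-12 \
  --image-project=debian-cloud \
  --boot-disk-size=500GB \
  --tags=http-server,https-server
```

The equivalent AWS or Azure CLI commands follow the same pattern with the instance types listed above.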

Step 2: Install k3s

SSH into the machine, then run:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--write-kubeconfig-mode=644" sh - && \
  mkdir -p ~/.kube && \
  sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config && \
  sudo chown $USER ~/.kube/config && \
  chmod 600 ~/.kube/config
Verify: Run kubectl get nodes. You should see your node with status Ready and role control-plane, for example:
NAME                  STATUS   ROLES           AGE    VERSION
<your machine name>   Ready    control-plane   176m   v1.34.5+k3s1

Step 3: Install Helm

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Verify: Run helm version. You should see version output that includes a BuildInfo line for your Helm installation.

Step 4: Install MinIO (object storage)

MinIO provides the S3-compatible object storage used by Arize AX. The commands below create a minio directory, a values file, and install the MinIO Helm chart.
  • Persistence size: Set persistence.size to the max storage per bucket (e.g. up to 75Gi). With a 500 GB boot disk, 75Gi per bucket leaves space for other PVCs.
  • Credentials: Set rootPassword to a password you choose; you will use the same user/password in values.yaml later.
mkdir minio && cd minio

cat > minio-values.yaml << 'EOF'
rootUser: minio
rootPassword: <your chosen password>
persistence:
  size: <up to 75Gi>
  storageClass: local-path
replicas: 1
mode: standalone
buckets:
  - name: gazette-bucket
    policy: none
    purge: false
    versioning: true
    objectlocking: false
  - name: druid-bucket
    policy: none
    purge: false
    versioning: true
    objectlocking: false
EOF

helm repo add minio https://charts.min.io/
helm install minio minio/minio -f minio-values.yaml
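Before moving on, it is worth confirming the release came up. A quick check (assuming the release name minio used above; the app=minio label is the one the MinIO chart applies to its pod):

```shell
# Confirm the MinIO release deployed and its storage is provisioned.
helm status minio              # should report STATUS: deployed
kubectl get pods -l app=minio  # pod should reach Running
kubectl get pvc                # the MinIO PVC should be Bound to local-path
```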

Step 5: Retrieve the Arize distribution

Create a release folder, download the distribution, and extract it. Replace <your JWT token> with your actual JWT.
# Use the latest published release, or set VERSION manually (e.g. VERSION=1.2.3)
VERSION=$(curl -s https://arize.com/docs/ax/selfhosting/on-premise-releases | grep -Eo 'Release [0-9]+\.[0-9]+\.[0-9]+' | head -1 | awk '{print $2}')
cd ../ && mkdir arize-release-$VERSION && cd arize-release-$VERSION

URL=https://ch.hub.arize.com/dist
JWT="<your JWT token>"
curl -H "Authorization: Bearer $JWT" "$URL/distributions/arize-distribution-$VERSION.tar" --output "arize-distribution-$VERSION.tar"

tar xvf *.tar

Step 6: Create values.yaml

From the arize-release-* directory, create values.yaml by editing the placeholders below and pasting the result into your terminal. Every value marked base64 encoded must be the base64-encoded string, not the raw value.
  • hubJwt: Your Arize JWT (base64)
  • postgresPassword: A password you choose (base64)
  • cephS3AccessKeyId: MinIO user from Step 4, e.g. minio (base64)
  • cephS3SecretAccessKey: MinIO password from Step 4 (base64)
  • cipherKey: An encryption key you generate (base64)
  • appBaseUrl: The URL where you will access the Arize UI
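One way to produce the base64-encoded values is shown below. This is a sketch: the password is an example, and echo -n matters because encoding a trailing newline produces a different (wrong) value:

```shell
# Encode each secret without a trailing newline.
echo -n '<your JWT token>' | base64 -w 0                # hubJwt
echo -n '<user selected postgres password>' | base64    # postgresPassword
echo -n 'minio' | base64                                # cephS3AccessKeyId (the Step 4 rootUser)
openssl rand -base64 32                                 # a random cipherKey, already base64
```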
cat > values.yaml << 'EOF'
cloud: "ceph"
clusterName: "default"
hubJwt: "<JWT>" # base64 encoded
gazetteBucket: "gazette-bucket"
druidBucket: "druid-bucket"
postgresPassword: "<user selected postgres password>" # base64 encoded
organizationName: "<name of the organization or company>"
clusterSizing: "nonha"
cephS3Endpoint: "http://minio.default.svc.cluster.local:9000"
cephS3AccessKeyId: "<Minio user set in the previous step>" # base64 encoded
cephS3SecretAccessKey: "<Minio password set in the previous step>" # base64 encoded
cipherKey: "<encryption key>" # base64 encoded
storageClassCephSsd: "local-path"
storageClassCephStandard: "local-path"
ingressMode: "notls"

# The URL used to reach the Arize UI once ingress endpoints are created
# Use the same hostname you will put in /etc/hosts (Step 8), e.g. https://arize-app.example.local
# Private-IP only: the hostname must resolve (via hosts file or DNS) to the VM's private IP for clients on that network
appBaseUrl: "https://<arize-app.domain>"

# Only required if using a private docker registry
pushRegistry: "<docker-registry>"
pullRegistry: "<docker-registry>"

baseOverlay: |
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: internalendpoints-app
    namespace: arize
    annotations:
      traefik.ingress.kubernetes.io/service.serversscheme: h2c
EOF

Step 7: Install Arize AX

From the same arize-release-* directory, run:
./arize.sh install
This installs the Arize Helm chart and deploys Arize AX on the single host. When the script completes successfully, Arize AX is up and running in your k3s cluster. Run kubectl get pods -n arize to confirm that all pods reach a running state.
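Rather than polling manually, you can block until every pod reports Ready. The timeout below is an arbitrary ceiling; first-time image pulls can be slow:

```shell
# Wait for all pods in the arize namespace to become Ready, then list them.
kubectl wait --for=condition=Ready pod --all -n arize --timeout=20m
kubectl get pods -n arize
```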

Step 8: Configure ingress

Ingress exposes the Arize AX UI over HTTPS. You can use any certificate; this step uses a self-signed certificate for a low-effort setup. You can choose any domain (e.g. arize-app.example.local) — no real DNS record is required because you will use /etc/hosts to point the hostname to your VM.

Private IP only: If the VM has no public IP, use the VM’s private address in /etc/hosts on every client that should open the UI or send data (your laptop on VPN, a jump host, or a build agent in the same VPC). The certificate and ingress hostnames stay the same; only the IP you map must be reachable from that client.

8a. Generate a self-signed certificate

From the home directory, generate the cert. Replace <your domain> with the domain you want to use (e.g. example.local).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -out tls.crt \
    -keyout tls.key \
    -subj "/CN=arize-app.<your domain>/O=Arize" -addext "subjectAltName = DNS:arize-app.<your domain>"
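Before wiring the certificate into the ingress manifests, you can optionally confirm its subject and SAN match the hostname you plan to use:

```shell
# Print the CN and subjectAltName of the generated certificate.
openssl x509 -in tls.crt -noout -subject -ext subjectAltName
```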

8b. Get base64 values for the ingress manifests

You will paste these values into the ingress YAML in the next step. Run:
base64 -w 0 tls.crt   # use this for tls.crt in the manifest
base64 -w 0 tls.key   # use this for tls.key in the manifest
Copy each command’s output and keep it handy.

8c. Create and apply the ingress manifests

Create an ingress directory and an ingress.yaml file. In the manifest below, replace:
  • <your domain> — The same domain you used in the certificate (e.g. example.local)
  • <your value for tls.crt> — The full output of base64 -w 0 tls.crt
  • <your value for tls.key> — The full output of base64 -w 0 tls.key
Then apply the file.
mkdir -p ingress && cd ingress

cat > ingress.yaml << 'EOF'
---
apiVersion: v1
kind: Secret
metadata:
  name: arize-app-services-tls
  namespace: arize
type: kubernetes.io/tls
data:
  tls.crt: "<your value for tls.crt>"
  tls.key: "<your value for tls.key>"
---
apiVersion: v1
kind: Secret
metadata:
  name: default-ingress-cert
  namespace: kube-system
type: kubernetes.io/tls
data:
  tls.crt: "<your value for tls.crt>"
  tls.key: "<your value for tls.key>"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: arize-app-services
  namespace: arize
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - arize-app.<your domain>
      secretName: arize-app-services-tls
  rules:
    - host: arize-app.<your domain>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internalendpoints-app
                port:
                  number: 443
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: arize-app
  namespace: arize
spec:
  entryPoints:
    - websecure
  tls:
    secretName: arize-app-services-tls
  routes:
    - match: Host(`arize-app.<your domain>`)
      kind: Rule
      services:
        - name: internalendpoints-app
          port: 443
---
apiVersion: traefik.io/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: kube-system
spec:
  defaultCertificate:
    secretName: default-ingress-cert
EOF
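If you prefer not to paste the long base64 strings by hand, a sed substitution over the placeholders is one option (a sketch; it assumes tls.crt and tls.key from Step 8a are in your home directory and ingress.yaml was just created as above):

```shell
# Capture the encoded cert and key, then fill both placeholders in place.
TLS_CRT=$(base64 -w 0 ~/tls.crt)
TLS_KEY=$(base64 -w 0 ~/tls.key)
sed -i "s|<your value for tls.crt>|$TLS_CRT|g; s|<your value for tls.key>|$TLS_KEY|g" ingress.yaml
```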

kubectl apply -f ingress.yaml

8d. Point the hostname to your VM and open the UI

On each machine where you want to use the Arize UI (your laptop or another host), add a line to /etc/hosts so arize-app.<your domain> resolves to the VM’s IP. This replaces a DNS record for testing.
  1. Edit hosts (e.g. sudo vi /etc/hosts or sudo nano /etc/hosts).
  2. Add a line: <VM IP address> arize-app.<your domain>
    Use the same domain as in the certificate and ingress.
Which IP to use
  • Public IP: If your cloud VM has a public address, use that IP in /etc/hosts from any client that can reach it (subject to your security group or firewall rules).
  • Private IP only: If the VM has only a private IP, use that private address. The client must be on a network that can route to it (for example same VPC, peered network, or connected VPN). If you reach the VM only via SSH through a bastion, you still need a path for HTTPS (port 443) from the browser/SDK machine to the Arize node—either run the browser on a host inside the VPC, use VPN, or forward ports with ssh -L and point localhost in /etc/hosts to match your tunnel setup.
Example:
# Existing entries (leave as-is)
127.0.0.1       localhost
::1             localhost

# Arize AX single-host VM (public or private IP — use the address your client can reach)
<ip address of your VM>   arize-app.<your domain>
Verify: In your browser, go to https://arize-app.<your domain>. You should see the Arize login page. Accept the self-signed certificate warning if prompted, then sign in with your initial admin credentials. You can now use Arize AX.
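The same check can be scripted from a terminal without touching /etc/hosts: curl's --resolve flag maps the hostname to an IP for a single request, and -k accepts the self-signed certificate:

```shell
# One-off HTTPS reachability check against the ingress.
curl -vk --resolve "arize-app.<your domain>:443:<ip address of your VM>" \
  "https://arize-app.<your domain>/"
```

A response from the Arize app (rather than a connection timeout) confirms that port 443 is reachable and the ingress is routing.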

Step 9: Validate deployment

The distribution includes example scripts under examples/sdk. Use them to confirm the cluster can receive traces.
  • Certificate: The deployment uses a self-signed cert. Have the certificate (e.g. tls.crt) available on the machine where you run the script, or configure the script to skip TLS verification if it supports that.
  • Network: Run the script from a host that can reach the VM’s reachable address for HTTPS (public IP or private IP). If the VM uses only a private IP, run the SDK from a host on the same VPC/VPN or with routing to that host, and use the same hostname in /etc/hosts (or DNS) as in appBaseUrl.
From the arize-release-* directory (or wherever you extracted the distribution), run the HTTP trace sample:
cd examples/sdk
# Use the script’s options to point at your Arize endpoint and cert if required
python trace_sample_http.py
If the script runs without errors and traces appear in the Arize AX UI, the deployment is working correctly.
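If the sample script uses a standard Python HTTPS client (an assumption; check the script's own options first), exporting the self-signed certificate via the usual CA environment variables is one way to avoid disabling TLS verification. The cert path below assumes tls.crt from Step 8a is still in your home directory:

```shell
# Point common Python TLS trust variables at the self-signed cert.
export REQUESTS_CA_BUNDLE=~/tls.crt   # honored by the requests library
export SSL_CERT_FILE=~/tls.crt        # honored by Python's ssl module and httpx
python trace_sample_http.py
```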