Use Case

Time series anomaly detection with foundation models

Use prediction intervals from zero-shot foundation models to detect anomalies in any time series. No labeled anomaly data, no per-series threshold tuning, no model training step.

Prediction interval scoring · Zero labeled data needed · Any time series domain

Anomaly scoring

POST /v1/forecast
{
  "model": "amazon/chronos-bolt-base",
  "inputs": [{
    "item_id": "server-cpu-utilization",
    "target": [42, 45, 43, 41, 44, 43, 42, 78, 95, 88, 45, 43],
    "start": "2026-04-10T00:00:00Z"
  }],
  "parameters": {
    "prediction_length": 24,
    "freq": "h",
    "quantiles": [0.025, 0.5, 0.975]
  }
}

Observations that fall outside a wide prediction interval (2.5th to 97.5th percentile) are flagged as statistically unexpected.
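The check itself is a simple comparison against the quantile forecasts. A minimal sketch, assuming the lower and upper lists hold the 0.025 and 0.975 quantile forecasts extracted from the API response (the variable names are illustrative, not the response schema):

```python
# Flag observations that breach the prediction interval.
def flag_anomalies(observed, lower, upper):
    """Return indices where an observation falls outside [lower, upper]."""
    return [
        i for i, (y, lo, hi) in enumerate(zip(observed, lower, upper))
        if y < lo or y > hi
    ]

# Toy example: the spike at index 2 exceeds the upper bound.
observed = [42.0, 44.0, 95.0, 43.0]
lower = [38.0] * 4
upper = [50.0] * 4
print(flag_anomalies(observed, lower, upper))  # [2]
```

Each flagged index is a candidate anomaly, not a verdict; downstream logic decides whether a breach is alert-worthy.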

No labels

Detect anomalies without historical anomaly annotations

Universal

Works across infrastructure, IoT, finance, and operational data

Calibrated

Quantile forecasts provide statistically grounded anomaly thresholds

How foundation models detect anomalies

The core idea: a foundation model predicts what a series should look like. When actual observations fall outside the prediction intervals, you have a candidate anomaly.

Prediction interval approach

Request wide quantiles (e.g., 2.5th and 97.5th percentile). Any observation that falls outside these bounds is statistically unusual given the model's learned understanding of normal time series behavior.

No labeled anomaly data required

Classical anomaly detectors need labeled examples of anomalies or long burn-in periods. Foundation models learn normal time series patterns during pre-training and can score anomalies zero-shot on any new series.

Multi-domain coverage

The same approach works for server metrics, IoT sensors, financial transactions, and operational KPIs. One API call, one scoring method, across every domain.


Setting up anomaly detection

Most teams start with one metric that has known anomaly examples, validate detection quality, then roll out across their monitoring surface.

  1. Choose a metric with known anomalies

    Pick a series where you already know some anomalies occurred. Send the history to the API with wide quantiles and check whether the prediction intervals would have flagged those events.

  2. Tune your quantile thresholds

    Adjust the quantile width to control sensitivity. Narrower intervals (10th-90th) catch more anomalies but produce more false positives. Wider intervals (2.5th-97.5th) are more conservative.
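The trade-off is easy to see by checking the same observations against two interval widths. The bound values below are invented for illustration; in practice they would come from the model's quantile forecasts at each level:

```python
# Count interval breaches at two sensitivities.
def count_breaches(observed, lower, upper):
    return sum(1 for y, lo, hi in zip(observed, lower, upper)
               if y < lo or y > hi)

observed = [42, 45, 43, 61, 44, 78]

# Narrow interval (10th-90th percentile): more sensitive, more false positives.
narrow = count_breaches(observed, lower=[40] * 6, upper=[55] * 6)

# Wide interval (2.5th-97.5th percentile): conservative, flags only big spikes.
wide = count_breaches(observed, lower=[35] * 6, upper=[70] * 6)

print(narrow, wide)  # narrow flags 61 and 78; wide flags only 78
```

A common starting point is the wide interval, tightening it only if known anomalies slip through during validation.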

  3. Integrate into your alerting pipeline

    Run forecasts on a rolling basis and compare incoming observations against prediction intervals. Flag breaches as anomaly candidates and route them to your alerting or incident management system.
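The rolling loop can be sketched as below. `forecast_interval` is a stub standing in for the real API call; in production it would POST the recent history to /v1/forecast and read the 0.025/0.975 quantiles from the response:

```python
from collections import deque

def forecast_interval(history):
    """Stub: returns (lower, upper) bounds for the next observation.
    Placeholder logic only; a real implementation would call the
    forecast API and use the model's quantile forecasts."""
    mean = sum(history) / len(history)
    return mean - 10, mean + 10

def run_detector(stream, window=12):
    """Compare each incoming observation to a forecast made from the
    preceding window; return the timestamps of interval breaches."""
    history = deque(maxlen=window)
    alerts = []
    for t, y in enumerate(stream):
        if len(history) == window:
            lo, hi = forecast_interval(history)
            if y < lo or y > hi:
                alerts.append(t)  # route to alerting / incident management
        history.append(y)
    return alerts

stream = [42, 45, 43, 41, 44, 43, 42, 44, 43, 42, 45, 43, 95, 44]
print(run_detector(stream))  # [12] -- the spike to 95 breaches the interval
```

In a real deployment, the forecast would typically be refreshed on a schedule (hourly for `freq: "h"` data) rather than per observation, amortizing one API call across many checks.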

Detection approaches

Foundation model prediction intervals support several anomaly detection strategies depending on your tolerance for false positives and the nature of your data.

Point anomalies

Single observations that fall outside prediction intervals. Common for spike detection in server metrics, sudden drops in transaction volume, or sensor reading outliers.

Contextual anomalies

Values that are normal in one context but anomalous in another. A weekend traffic level on a Tuesday, for example. Foundation models encode temporal context, so their intervals naturally adjust for time-of-day and day-of-week patterns.
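The weekend-traffic-on-a-Tuesday example can be made concrete: the same value is judged against different bounds depending on temporal context. The bounds here are invented; a foundation model's per-timestamp quantile forecasts would supply them automatically:

```python
# Context-dependent bounds (illustrative values, not model output).
WEEKDAY_BOUNDS = (800, 1200)  # expected requests/min on a Tuesday
WEEKEND_BOUNDS = (200, 500)   # expected requests/min on a Saturday

def is_contextual_anomaly(value, bounds):
    lo, hi = bounds
    return not (lo <= value <= hi)

# 350 requests/min is normal on a weekend but anomalous on a Tuesday.
print(is_contextual_anomaly(350, WEEKEND_BOUNDS))  # False
print(is_contextual_anomaly(350, WEEKDAY_BOUNDS))  # True
```

With a foundation model you do not maintain these bounds by hand: because the forecast is conditioned on the series' recent history and timestamps, the interval itself shifts with the context.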

Drift detection

Sustained deviation from predicted patterns over multiple time steps. If observations consistently sit near or beyond the prediction interval boundary, the series may be drifting from its historical behavior.
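One simple drift heuristic, sketched under the assumption that you track consecutive breaches of the upper bound (the run length of 3 is illustrative):

```python
def detect_drift(observed, upper, run_length=3):
    """Return True if `run_length` consecutive observations exceed `upper`."""
    run = 0
    for y, hi in zip(observed, upper):
        run = run + 1 if y > hi else 0
        if run >= run_length:
            return True
    return False

upper = [50.0] * 6
print(detect_drift([48, 52, 53, 55, 54, 56], upper))  # True: sustained breach
print(detect_drift([48, 72, 45, 44, 46, 43], upper))  # False: one-off spike
```

This separates a series that is genuinely shifting regime from a single spike that a point-anomaly check would catch on its own.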


Detect anomalies in your time series

Send a series to the API with wide quantiles and see which observations fall outside the prediction intervals. No labels or training required.