Playground

Use the browser-based workbench to paste data, compare models, inspect charts, and export outputs before you wire production code.

What it does

A browser workbench for trying the contract before you write code

The Playground is the fastest way to test a model, validate a payload shape, or compare tasks against the same series. It is especially useful when you want to confirm data formatting before wiring an SDK or HTTP client.

Open Playground

Access modes

  • Guest mode is good for fast evaluation, but it has tighter limits and no durable account history.
  • Authenticated mode is the better fit once you are testing real workloads and want usage continuity across surfaces.

Available tasks

Each task maps to a specific API endpoint and is filtered by the selected model's capabilities.

  • Forecast (`/v1/forecast`): Generate point and quantile predictions for one series. Configure horizon (1-512), frequency, context length (4-4096), and quantiles.
  • Anomaly Detection (`/v1/detect-anomalies`): Detect anomalous observations using z-score thresholds. Tune sensitivity (1.5-6.0) and window size (4-128). Requires 8+ data points.
  • Classification (`/v1/classify`): Assign time series to classes with confidence scores. Configure number of classes (2-20) and optional custom labels.
  • Imputation (`/v1/impute`): Fill missing values (blank cells or NaN) in a series using model-based interpolation. No extra parameters needed.
  • Batch Forecast (`/v1/forecast/batch`): Run up to 10 forecast jobs in one request (64 max via API). Each job can target a different model, horizon, and frequency.
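
A minimal sketch of how a forecast request could be assembled before wiring a real client. The endpoint, the `inputs[]` shape, `prediction_length`, and the model id come from this page; the exact field names inside each input record (and `model`/`quantiles` as top-level keys) are assumptions, so check the API reference before relying on them.

```python
import json

# Hypothetical /v1/forecast payload using the canonical inputs[] shape.
# Field names not shown elsewhere on this page are assumptions.
payload = {
    "model": "amazon/chronos-bolt-base",
    "inputs": [
        {
            "timestamps": ["2024-01-01", "2024-01-02", "2024-01-03", "2024-01-04"],
            "values": [10.0, 12.5, 11.8, 13.1],
        }
    ],
    "prediction_length": 16,      # Playground default; the API default is 12
    "quantiles": [0.1, 0.5, 0.9],
}

body = json.dumps(payload)
# An HTTP client would POST `body` to /v1/forecast with your auth headers, e.g.:
# requests.post(f"{BASE_URL}/v1/forecast", data=body, headers=auth_headers)
print(len(payload["inputs"][0]["values"]))  # 4 points meets the series minimum
```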

Bring data in

Three input paths, one normalized series shape

  • Paste timestamp/value rows directly into the series editor for the quickest possible experiment.
  • Upload CSV, TSV, TXT, or JSON and let the parser normalize timestamps, values, and optional covariates.
  • Load a public URL and use `/v1/series/ingest` behind the scenes to normalize the source before inference.
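
Conceptually, all three paths converge on the same normalized series shape. The sketch below shows a simplified version of that normalization for pasted rows, under the assumption that blank cells and NaN are preserved as missing values (so imputation can fill them); the real parser also handles TSV/JSON, covariates, and item_id.

```python
import csv
import io

def parse_rows(text):
    """Normalize pasted timestamp/value rows into one series.

    A minimal sketch of the normalization step, not the actual parser.
    Blank or NaN values are kept as None so imputation can fill them.
    """
    series = []
    for row in csv.reader(io.StringIO(text.strip())):
        if not row or row[0].lower() == "timestamp":  # skip a header row
            continue
        raw = row[1].strip()
        value = None if raw in ("", "NaN", "nan") else float(raw)
        series.append({"timestamp": row[0].strip(), "value": value})
    return series

rows = "timestamp,value\n2024-01-01,10\n2024-01-02,\n2024-01-03,11.5"
print(parse_rows(rows))
```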

Supported file formats

  • CSV (`text/csv`): Columns: timestamp, value, optional item_id, optional covariate columns.
  • JSON array (`application/json`): Records like `{ timestamp, value, item_id? }` or the canonical `inputs[]` payload.
  • NDJSON (`application/x-ndjson`): One observation per line for large uploads and stream transforms.

Parameter limits

UI limits are opinionated for the browser surface. The API is still the source of truth.

  • Horizon (required): 1 – 512 steps. Number of future data points to forecast. Default is 16 in Playground, 12 via API.
  • Context length (optional): 4 – 4,096 points. Caps how many historical points the model sees. When omitted the full series is used.
  • Series minimum (required): 4+ points per series; anomaly detection requires at least 8.
  • Batch series (optional): Playground limits batch jobs to 10 cards. The raw API accepts up to 64 per request.
  • Sensitivity (optional): 1.5 – 6.0. Z-score threshold for anomaly detection. Higher values flag fewer anomalies. Default 2.5.
  • Window size (optional): 4 – 128. Rolling window for anomaly z-score calculation. Default 12.
  • Quantiles (optional): Comma-separated values in 0 – 1, ascending. Default: 0.1, 0.5, 0.9.
  • Classes (optional): 2 – 20 classification buckets. Custom labels can be provided as comma-separated names.
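
Because the UI and API limits differ, it can be worth validating parameters client-side before sending a request. This is a sketch of such a pre-flight check using the ranges listed above; the API remains the source of truth and may accept or reject values differently.

```python
def validate_forecast_params(horizon, quantiles, context_length=None):
    """Pre-flight check against the documented Playground limits.

    A sketch only -- the API is the source of truth.
    Returns a list of human-readable problems (empty means OK).
    """
    errors = []
    if not 1 <= horizon <= 512:
        errors.append("horizon must be between 1 and 512 steps")
    if context_length is not None and not 4 <= context_length <= 4096:
        errors.append("context_length must be between 4 and 4096 points")
    if any(not 0 <= q <= 1 for q in quantiles):
        errors.append("quantiles must lie within 0 - 1")
    if sorted(quantiles) != list(quantiles):
        errors.append("quantiles must be ascending")
    return errors

print(validate_forecast_params(16, [0.1, 0.5, 0.9]))  # []
print(validate_forecast_params(600, [0.9, 0.1]))      # two errors
```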

Task guides

Use the workbench differently depending on the task

Forecast

Best default for evaluation. Adjust `prediction_length`, frequency, and quantiles while comparing models on the same series.

Anomaly detection

Use sensitivity and window size to control how aggressively spikes or drops are flagged.
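
To build intuition for the two knobs, here is a rolling z-score sketch using the same tunables (window 4 – 128, sensitivity 1.5 – 6.0, defaults 12 and 2.5). It illustrates the mechanism only; the hosted model's exact detection method may differ.

```python
import statistics

def flag_anomalies(values, window=12, sensitivity=2.5):
    """Flag points whose z-score against the trailing window exceeds
    the sensitivity threshold. Higher sensitivity flags fewer points.

    An illustrative sketch, not the service's actual algorithm.
    """
    flagged = []
    for i in range(window, len(values)):
        hist = values[i - window:i]
        mean = statistics.fmean(hist)
        stdev = statistics.pstdev(hist)
        if stdev > 0 and abs(values[i] - mean) / stdev > sensitivity:
            flagged.append(i)
    return flagged

series = [10, 11, 10, 12, 11, 10, 11, 12, 10, 11, 10, 11, 50, 11, 10]
print(flag_anomalies(series))  # the spike at index 12 is flagged
```

Raising `sensitivity` toward 6.0 would require a larger deviation before the spike is reported, while a shorter `window` makes the baseline more reactive to recent values.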

Classification and imputation

Use the same workbench to test multi-task models before moving into code.

Batch forecast

Validate throughput and partial-failure behavior with multiple series in one run.
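
When you have more series than one batch allows, a client typically chunks jobs to fit the limit. A sketch, assuming the per-job field names are illustrative rather than the exact schema:

```python
def chunk_batch(jobs, limit=10):
    """Split forecast jobs into batches that respect the cap.

    The Playground caps a batch at 10 jobs; pass limit=64 when
    calling /v1/forecast/batch directly via the raw API.
    """
    return [jobs[i:i + limit] for i in range(0, len(jobs), limit)]

# Hypothetical job records -- field names are placeholders.
jobs = [{"series_id": f"s{i}", "prediction_length": 16} for i in range(25)]
print([len(b) for b in chunk_batch(jobs)])  # [10, 10, 5]
```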

Playground state is shareable by URL. Use links like `?model=amazon/chronos-bolt-base&task=forecast` when you want a teammate to start from the same model and task configuration.
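
Such links can also be generated programmatically. A small sketch, assuming `/playground` as a placeholder path; only the `model` and `task` query parameters are documented here:

```python
from urllib.parse import urlencode

# Build a shareable Playground link pinning model and task.
# "/playground" is a placeholder base path, not a documented URL.
params = {"model": "amazon/chronos-bolt-base", "task": "forecast"}
url = "/playground?" + urlencode(params, safe="/")
print(url)
```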

Export results

Move from visual validation into code, spreadsheets, or review artifacts

  • JSON is best when you want the exact API response for debugging or code handoff.
  • CSV is best when analysts want tabular output for spreadsheets or BI tools.
  • Chart PNG is best when you need a quick artifact for a ticket, doc, or review thread.

Export formats

  • JSON: Full API response including predictions, usage, latency, and metadata.
  • CSV: Tabular export with step, timestamp, point forecast, and quantile columns.
  • Chart PNG: Rendered time series chart with historical data and forecast overlay as an image.
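
If you need the CSV shape without re-running the Playground, the same flattening can be done from a JSON export. A sketch only: the response keys used below (`predictions`, `mean`, `quantiles`) are assumptions, so inspect a real JSON export for the exact field names.

```python
import csv
import io

# Hypothetical forecast response -- key names are assumptions.
response = {
    "predictions": [
        {"timestamp": "2024-02-01", "mean": 13.4,
         "quantiles": {"0.1": 12.1, "0.5": 13.3, "0.9": 14.8}},
        {"timestamp": "2024-02-02", "mean": 13.9,
         "quantiles": {"0.1": 12.4, "0.5": 13.8, "0.9": 15.5}},
    ]
}

# Flatten into the step/timestamp/point-forecast/quantile columns
# described for the CSV export.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["step", "timestamp", "forecast", "q0.1", "q0.5", "q0.9"])
for step, p in enumerate(response["predictions"], start=1):
    q = p["quantiles"]
    writer.writerow([step, p["timestamp"], p["mean"], q["0.1"], q["0.5"], q["0.9"]])
print(buf.getvalue())
```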