Playground
Use the browser-based workbench to paste data, compare models, inspect charts, and export outputs before you wire production code.
What it does
A browser workbench for trying the API contract before you write code
The Playground is the fastest way to test a model, validate a payload shape, or compare tasks against the same series. It is especially useful when you want to confirm data formatting before wiring an SDK or HTTP client.
Access modes
- Guest mode is good for fast evaluation, but it has tighter limits and no durable account history.
- Authenticated mode is the better fit once you are testing real workloads and want usage continuity across surfaces.
Available tasks
Each task maps to a specific API endpoint and is filtered by the selected model's capabilities.
| Task | Endpoint | Description |
|---|---|---|
| Forecast | `/v1/forecast` | Generate point and quantile predictions for one series. Configure horizon (1-512), frequency, context length (4-4096), and quantiles. |
| Anomaly Detection | `/v1/detect-anomalies` | Detect anomalous observations using z-score thresholds. Tune sensitivity (1.5-6.0) and window size (4-128). Requires 8+ data points. |
| Classification | `/v1/classify` | Assign time series to classes with confidence scores. Configure number of classes (2-20) and optional custom labels. |
| Imputation | `/v1/impute` | Fill missing values (blank cells or NaN) in a series using model-based interpolation. No extra parameters needed. |
| Batch Forecast | `/v1/forecast/batch` | Run up to 10 forecast jobs in one request (64 max via API). Each job can target a different model, horizon, and frequency. |
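To make the forecast task concrete, here is a sketch of a request body using the parameters this page documents (horizon, quantiles, and the canonical `inputs[]` shape). The base URL, auth header, and exact field names other than `prediction_length` are assumptions, not a confirmed schema:

```python
# Sketch of a /v1/forecast request body using this page's documented
# parameters. Field names other than `prediction_length` are assumptions.
payload = {
    "model": "amazon/chronos-bolt-base",
    "inputs": [
        {"timestamp": "2024-01-01T00:00:00Z", "value": 10.0},
        {"timestamp": "2024-01-02T00:00:00Z", "value": 12.5},
        {"timestamp": "2024-01-03T00:00:00Z", "value": 11.8},
        {"timestamp": "2024-01-04T00:00:00Z", "value": 13.1},
    ],
    "prediction_length": 16,       # Playground default; the API defaults to 12
    "quantiles": [0.1, 0.5, 0.9],  # default levels; must be ascending
}

# To send it you would need the real base URL and an API key, e.g.:
# import requests
# r = requests.post("https://api.example.com/v1/forecast",
#                   json=payload, headers={"Authorization": "Bearer <key>"})
```

Note the series meets the 4-point minimum and the horizon sits well inside the 1-512 range, so the same payload works in both the Playground and the raw API.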
Bring data in
Three input paths, one normalized series shape
- Paste timestamp/value rows directly into the series editor for the quickest possible experiment.
- Upload CSV, TSV, TXT, or JSON and let the parser normalize timestamps, values, and optional covariates.
- Load a public URL; the Playground calls `/v1/series/ingest` behind the scenes to normalize the source before inference.
Supported file formats
| Format | MIME type | Description |
|---|---|---|
| CSV | text/csv | Columns: timestamp, value, optional item_id, optional covariate columns. |
| JSON array | application/json | Records like `{ timestamp, value, item_id? }` or the canonical `inputs[]` payload. |
| NDJSON | application/x-ndjson | One observation per line for large uploads and stream transforms. |
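All three input paths normalize to the same record shape. A minimal sketch of that normalization for CSV text, assuming the `{timestamp, value, item_id?}` record shape described above, with blank cells and NaN mapped to missing values so imputation can fill them:

```python
import csv
import io
import math

def normalize_csv(text: str) -> list[dict]:
    """Parse timestamp/value CSV into {timestamp, value, item_id?} records.
    Blank cells and NaN become None, i.e. missing values."""
    rows = []
    for rec in csv.DictReader(io.StringIO(text)):
        try:
            value = float(rec.get("value") or "")
            if math.isnan(value):
                value = None
        except ValueError:
            value = None  # blank or non-numeric cell -> missing
        row = {"timestamp": rec["timestamp"], "value": value}
        if rec.get("item_id"):
            row["item_id"] = rec["item_id"]
        rows.append(row)
    return rows

sample = "timestamp,value\n2024-01-01,10\n2024-01-02,\n2024-01-03,NaN\n2024-01-04,13.1\n"
records = normalize_csv(sample)
```

The same record list can then be pasted into the series editor mentally or sent as the `inputs[]` array of an API payload.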
Parameter limits
UI limits are opinionated for the browser surface. The API is still the source of truth.
| Parameter | Range | Required | Description |
|---|---|---|---|
| Horizon | 1 – 512 steps | Yes | Number of future data points to forecast. Default is 16 in Playground, 12 via API. |
| Context length | 4 – 4,096 points | No | Optional cap on how many historical points the model sees. When omitted the full series is used. |
| Series minimum | 4+ points (8+ for anomaly) | Yes | Minimum data points required per series. Anomaly detection requires at least 8. |
| Batch series | 10 UI / 64 API | No | Playground limits batch jobs to 10 cards. The raw API accepts up to 64 per request. |
| Sensitivity | 1.5 – 6.0 | No | Z-score threshold for anomaly detection. Higher values flag fewer anomalies. Default 2.5. |
| Window size | 4 – 128 | No | Rolling window for anomaly z-score calculation. Default 12. |
| Quantiles | 0 – 1 (comma-separated) | No | Forecast quantile levels. Default: 0.1, 0.5, 0.9. Values must be ascending. |
| Classes | 2 – 20 | No | Number of classification buckets. Custom labels can be provided as comma-separated names. |
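Because the API is the source of truth, it can help to pre-flight requests client-side against these documented limits before sending them. A sketch of such a check (the defaults mirror the table; the function itself is illustrative, not part of any SDK):

```python
def validate_request(task, series_len, horizon=16, context_length=None,
                     sensitivity=2.5, window=12, quantiles=(0.1, 0.5, 0.9)):
    """Pre-flight check mirroring the documented Playground/API limits.
    Raises ValueError on the first violated limit."""
    if not 1 <= horizon <= 512:
        raise ValueError("horizon must be 1-512")
    if context_length is not None and not 4 <= context_length <= 4096:
        raise ValueError("context_length must be 4-4096")
    minimum = 8 if task == "anomaly" else 4  # anomaly detection needs 8+ points
    if series_len < minimum:
        raise ValueError(f"{task} needs at least {minimum} points")
    if task == "anomaly":
        if not 1.5 <= sensitivity <= 6.0:
            raise ValueError("sensitivity must be 1.5-6.0")
        if not 4 <= window <= 128:
            raise ValueError("window must be 4-128")
    if list(quantiles) != sorted(quantiles) or any(not 0 <= q <= 1 for q in quantiles):
        raise ValueError("quantiles must be ascending and within 0-1")
    return True
```

Failing fast in the client keeps feedback loops short; the server will still enforce the same limits on its side.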
Task guides
Use the workbench differently depending on the task
Forecast
Best default for evaluation. Adjust `prediction_length`, frequency, and quantiles while comparing models on the same series.
Anomaly detection
Use sensitivity and window size to control how aggressively spikes or drops are flagged.
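The docs describe sensitivity as a z-score threshold over a rolling window. A plain-Python sketch of that idea, useful for building intuition before tuning the sliders (this is not the service's exact algorithm):

```python
import statistics

def flag_anomalies(values, window=12, sensitivity=2.5):
    """Flag points whose z-score against the trailing window exceeds the
    threshold. Higher sensitivity flags fewer points, matching the
    parameter's documented behavior."""
    flags = []
    for i, v in enumerate(values):
        ctx = values[max(0, i - window):i]  # trailing window, excluding v
        if len(ctx) < 2:
            flags.append(False)  # not enough history for a z-score
            continue
        mean = statistics.fmean(ctx)
        stdev = statistics.stdev(ctx)
        z = abs(v - mean) / stdev if stdev else 0.0
        flags.append(z > sensitivity)
    return flags

series = [10, 10.5, 10, 10.5, 10, 10.5, 10, 50, 10.5, 10]
flags = flag_anomalies(series)  # only the spike at index 7 is flagged
```

Raising `sensitivity` toward 6.0 would require a larger spike before flagging; shrinking `window` makes the baseline more local and more reactive.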
Classification and imputation
Use the same workbench to test multi-task models before moving into code.
Batch forecast
Validate throughput and partial-failure behavior with multiple series in one run.
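Since the Playground caps a batch at 10 cards while the raw API accepts 64 jobs per request, a client moving from the workbench to production typically chunks its job list. A minimal sketch (the job dict shape here is a placeholder):

```python
def chunk_jobs(jobs, limit=64):
    """Split a list of batch-forecast jobs into request-sized chunks.
    Use limit=10 to mirror the Playground, 64 for the raw API."""
    return [jobs[i:i + limit] for i in range(0, len(jobs), limit)]

jobs = [{"series_id": i} for i in range(150)]  # placeholder job payloads
batches = chunk_jobs(jobs, limit=64)           # 150 jobs -> 64 + 64 + 22
```

Each chunk becomes one `/v1/forecast/batch` request, which is also where you observe the partial-failure behavior mentioned above: inspect per-job results rather than assuming the whole request succeeded.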
Playground state is shareable by URL. Use links like `?model=amazon/chronos-bolt-base&task=forecast` when you want a teammate to start from the same model and task configuration.
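Building such a share link programmatically is just query-string encoding. A sketch, with a placeholder base URL (only `model` and `task` are confirmed by this page; any extra parameters are assumptions):

```python
from urllib.parse import urlencode

def share_link(base, model, task, **extra):
    """Build a Playground share URL from a model and task.
    `base` is a placeholder; safe='/' keeps slashes in model IDs readable."""
    return f"{base}?{urlencode({'model': model, 'task': task, **extra}, safe='/')}"

url = share_link("https://example.com/playground",
                 "amazon/chronos-bolt-base", "forecast")
```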
Export results
Move from visual validation into code, spreadsheets, or review artifacts
- JSON is best when you want the exact API response for debugging or code handoff.
- CSV is best when analysts want tabular output for spreadsheets or BI tools.
- Chart PNG is best when you need a quick artifact for a ticket, doc, or review thread.
Export formats
| Format | Delivery | Description |
|---|---|---|
| JSON | Download | Full API response including predictions, usage, latency, and metadata. |
| CSV | Download | Tabular export with step, timestamp, point forecast, and quantile columns. |
| Chart PNG | Download | Rendered time series chart with historical data and forecast overlay as an image. |
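If you start from the JSON export and later want the CSV shape, the flattening is mechanical. A sketch producing the step/timestamp/point-forecast/quantile columns described above; the response field names (`timestamps`, `point_forecast`, `quantiles`) are assumptions based on this page, not a confirmed schema:

```python
import csv
import io

def response_to_csv(resp: dict) -> str:
    """Flatten a forecast response into step, timestamp, point forecast,
    and one column per quantile level. Field names are illustrative."""
    levels = sorted(resp.get("quantiles", {}))
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["step", "timestamp", "point_forecast",
                     *(f"q{q}" for q in levels)])
    rows = zip(resp["timestamps"], resp["point_forecast"])
    for step, (ts, point) in enumerate(rows, 1):
        writer.writerow([step, ts, point,
                         *(resp["quantiles"][q][step - 1] for q in levels)])
    return buf.getvalue()

demo = {
    "timestamps": ["2024-01-05", "2024-01-06"],
    "point_forecast": [12.0, 12.4],
    "quantiles": {"0.1": [10.9, 11.1], "0.5": [12.0, 12.4], "0.9": [13.2, 13.6]},
}
csv_text = response_to_csv(demo)
```

This mirrors what the Playground's CSV download does for you, which is why the JSON export is the better handoff when you need the raw response as well.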