Forecasting API

A forecasting API for production time series systems

Run Chronos, TimesFM, Moirai, and other time series foundation models through one hosted endpoint. Keep one request shape while you compare latency, context length, pricing, and benchmark fit.

Single request shape · Hosted model catalog · Forecasts and quantiles
Canonical request: POST /v1/forecast
curl -X POST https://api.tsfm.ai/v1/forecast \
  -H "Authorization: Bearer $TSFM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "amazon/chronos-bolt-base",
    "inputs": [{
      "item_id": "daily-sales",
      "target": [428, 435, 441, 438, 446, 452],
      "start": "2026-03-01T00:00:00Z"
    }],
    "parameters": { "prediction_length": 14, "freq": "D", "quantiles": [0.1, 0.5, 0.9] }
  }'

Use the same contract across curl, SDKs, CI smoke tests, and agent tooling.
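As a sketch of what that contract looks like outside curl, the request body from the example above can be built once and reused across clients. The helper below is illustrative, not an official SDK; the field names and parameter keys are taken directly from the curl example.

```python
import json

# Endpoint from the curl example above.
API_URL = "https://api.tsfm.ai/v1/forecast"

def build_forecast_request(model, target, start, prediction_length,
                           freq="D", quantiles=(0.1, 0.5, 0.9),
                           item_id="series-0"):
    """Build the JSON body for POST /v1/forecast (illustrative helper)."""
    return {
        "model": model,
        "inputs": [{
            "item_id": item_id,
            "target": list(target),
            "start": start,
        }],
        "parameters": {
            "prediction_length": prediction_length,
            "freq": freq,
            "quantiles": list(quantiles),
        },
    }

body = build_forecast_request(
    model="amazon/chronos-bolt-base",
    target=[428, 435, 441, 438, 446, 452],
    start="2026-03-01T00:00:00Z",
    prediction_length=14,
    item_id="daily-sales",
)
payload = json.dumps(body)  # ready to POST with any HTTP client
```

The same `body` can back a CI smoke test or an agent tool call; only the transport differs.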

Model choice

Compare families without rewriting clients

Operational fit

Evaluate latency, pricing, and coverage in one place

Production path

Move from playground to API key to live traffic on the same surface

Why teams use a dedicated forecasting API

The hard part is rarely a single model call. It is keeping request formats stable while you evaluate and swap models over time.

One schema instead of per-model adapters

Keep one payload shape while you test Chronos, TimesFM, and Moirai. That cuts integration churn when new models become worth trying.
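A minimal sketch of that idea: hold the payload fixed and vary only the `model` field. Only `amazon/chronos-bolt-base` appears on this page; the other model IDs below are placeholders standing in for whatever TimesFM and Moirai variants the catalog lists.

```python
# One fixed payload; the only per-model difference is the "model" field.
base = {
    "inputs": [{
        "item_id": "daily-sales",
        "target": [428, 435, 441, 438, 446, 452],
        "start": "2026-03-01T00:00:00Z",
    }],
    "parameters": {"prediction_length": 14, "freq": "D",
                   "quantiles": [0.1, 0.5, 0.9]},
}

# Placeholder IDs after the first; check the model catalog for real ones.
candidates = [
    "amazon/chronos-bolt-base",
    "google/timesfm-placeholder",
    "salesforce/moirai-placeholder",
]

# One request per candidate, identical everywhere except the model name.
requests_to_send = [{**base, "model": m} for m in candidates]
```

Swapping a model is then a one-string change, with no per-model adapter code.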

Hosted inference instead of model operations

You focus on series quality, horizon selection, and business integration. TSFM.ai handles serving, authentication, and model routing.

Benchmark-aware evaluation

Use benchmark pages, playground runs, and model detail pages together instead of guessing from one paper result or one anecdotal benchmark chart.

View benchmarks

How teams usually adopt it

Most teams start with a narrow workflow, then expand once they have one reliable forecast path in production.

  1. Start with one business forecast

    Pick one demand, traffic, energy, or infrastructure metric that already matters. Send a representative series through the API before generalizing the stack.

  2. Compare a model shortlist

    Use the same request against a few candidate models. Evaluate latency, context length, quantiles, and forecast quality on your own data.

  3. Promote one request path into production

    Keep the request contract stable, instrument success and failure paths, and leave model swaps as a configuration choice rather than an app rewrite.
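Step 3 above can be sketched as reading the model name from configuration, so a swap is an environment change rather than an application rewrite. The `FORECAST_MODEL` variable name is an assumption for illustration; the payload shape matches the canonical request.

```python
import os

# Model choice lives in config, not code. The env var name is a
# hypothetical convention; the default is the model from the example.
MODEL = os.environ.get("FORECAST_MODEL", "amazon/chronos-bolt-base")

def forecast_payload(series, start, horizon, freq="D"):
    """Production request body with the model taken from configuration."""
    return {
        "model": MODEL,
        "inputs": [{"item_id": "prod-series", "target": list(series),
                    "start": start}],
        "parameters": {"prediction_length": horizon, "freq": freq},
    }

payload = forecast_payload([428, 435, 441], "2026-03-01T00:00:00Z", 7)
```

Instrumentation (success/failure counters around the POST) can then key on `MODEL`, so a swap shows up cleanly in your metrics.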

What to evaluate before you commit to a forecasting API

A strong integration should answer buyer and builder questions at the same time.

Evaluate the API on your own data

Start with one forecast that already matters to your business. Keep the request shape fixed and compare models before you commit to a production default.