
YingLong 6M

qcw2333/YingLong_6m

6M params | 4K context | $0.00025 per forecast | CC-BY-4.0

YingLong 6M is the smallest checkpoint in the YingLong family, built for efficient zero-shot probabilistic forecasting. The official card and released code describe it as a bidirectional-attention (non-causal) model with a multi-quantile output head rather than an autoregressive decoder, and at 6M parameters it is the most latency-friendly YingLong option in the catalog.
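Multi-quantile output heads like the one described above are conventionally trained with the pinball (quantile) loss, summed or averaged over many quantile levels. The sketch below illustrates that objective; the specific quantile levels and the averaging scheme are assumptions for illustration, not details taken from the YingLong card.

```python
def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss for one quantile level q in (0, 1).

    Penalizes under-prediction by q and over-prediction by (1 - q),
    which is what makes the minimizer the q-th quantile of the target.
    """
    diff = y_true - y_pred
    return max(q * diff, (q - 1) * diff)


def multi_quantile_loss(y_true, preds_by_level):
    """Average pinball loss over a dict {quantile_level: prediction}.

    A multi-quantile head emits one prediction per level; averaging the
    per-level losses is one common (assumed) aggregation choice.
    """
    losses = [pinball_loss(y_true, p, q) for q, p in preds_by_level.items()]
    return sum(losses) / len(losses)
```

For example, with a true value of 10.0 and predictions {0.1: 8.0, 0.5: 10.0, 0.9: 12.0}, the per-level losses are 0.2, 0.0, and 0.2, giving an average of 0.4/3.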

Model Classification

Family

YingLong

Type

time series foundation model

Pretrained time-series model exposed on TSFM.ai for zero-shot or few-shot forecasting workloads.

Training Data

Official model card states that the released YingLong checkpoints were pre-trained on 78B time points.

Recommended For

  • Dense probabilistic forecasting with fine-grained quantile coverage
  • Workloads that need richer distribution coverage than standard low-count quantile sets
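One practical use of a dense quantile set is reading off central prediction intervals at arbitrary coverage levels. The sketch below assumes the forecast arrives as a `{level: value}` mapping with the needed tail levels present; the dictionary shape and level granularity are illustrative assumptions, not the model's actual output format.

```python
def prediction_interval(quantile_preds, coverage=0.8):
    """Central prediction interval from a dense quantile forecast.

    quantile_preds: dict mapping quantile level -> predicted value,
    e.g. {0.01: ..., 0.02: ..., ..., 0.99: ...} (assumed format).
    Assumes the two tail levels implied by `coverage` are present.
    """
    lo_level = round((1 - coverage) / 2, 6)
    hi_level = round(1 - lo_level, 6)
    return quantile_preds[lo_level], quantile_preds[hi_level]
```

With an 80% coverage target this reads the 0.1 and 0.9 quantiles, which is only possible when the head predicts a sufficiently fine grid of levels.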

Strengths

  • Quantile-focused output head provides unusually dense probabilistic coverage
  • Clear parameter-size ladder from 6M to 300M for cost-accuracy tradeoffs

Limitations

  • Newer family with less public benchmark coverage than the most established TSFMs
  • Dense quantile output increases per-forecast compute cost compared to point-forecast-only models

Capabilities

forecasting · quantile-forecasting · zero-shot · high-throughput

Tags

yinglong · probabilistic · lightweight

Specifications

Parameters
6M
Architecture
non-causal transformer forecaster with multi-quantile output head
Context length
4,096
Max context
8,192
Minimum history
n/a
Recommended history
n/a
Input step
n/a
Required target series
1
Temperature
Ignored
Top P
Ignored
Max output
1,024
Avg latency
n/a
Uptime
n/a
Plan limits
1,000 rpm free · 1,000,000 rpm with billing
Accelerator
T4
Regions
Virginia, US
License
CC-BY-4.0
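The 4,096-point context length and 8,192-point maximum above imply that longer histories must be truncated before submission. A minimal sketch, assuming the usual policy of keeping the most recent points (the spec table gives the limits but not the truncation rule):

```python
def clip_history(series, context_length=4096, max_context=8192):
    """Trim an input series to the model's context window.

    context_length and max_context come from the spec table; keeping
    the most recent points is an assumed (though conventional) policy.
    """
    limit = min(max(context_length, 1), max_context)
    if len(series) <= limit:
        return list(series)
    return list(series[-limit:])
```

A 10,000-point history would thus be clipped to its final 4,096 points at the default context length.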

Pricing

Per forecast
$0.00025
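At the listed per-forecast rate, spend scales linearly with request volume, so budgeting is simple multiplication. A small helper (the function name is illustrative; the $0.00025 rate is from the card):

```python
def forecast_cost(num_forecasts, price_per_forecast=0.00025):
    """Estimated spend in USD at the listed per-forecast rate."""
    return num_forecasts * price_per_forecast
```

For example, one million forecasts cost about $250 at this rate.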

YingLong 6M — TSFM.ai