
MOMENT-Small

Status: online
AutonLab/MOMENT-1-small

~40M params | 512 context | $0.50 input | $1.50 output | MIT

MOMENT-Small is the lightweight checkpoint in AutonLab's general-purpose time-series foundation-model family. Like the larger variants, it is a multi-task representation model designed to transfer across forecasting, classification, anomaly detection, imputation, and embedding extraction. The small checkpoint is the most latency-friendly way to access the MOMENT architecture.

Model Classification

Family

MOMENT-1

Type

time series foundation model

Pretrained time-series model exposed on TSFM.ai for zero-shot or few-shot forecasting workloads.
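Zero-shot use means sending a recent window of observations as the model's context. A minimal sketch of preparing that input for a fixed 512-step context, assuming left-padding with an observation mask (the helper name `prepare_context` and the padding convention are illustrative, not part of any MOMENT API):

```python
# Fixed context window listed in the spec below.
CONTEXT_LEN = 512

def prepare_context(series, context_len=CONTEXT_LEN):
    """Truncate to the last `context_len` points, or left-pad with zeros.

    Returns (values, mask) where mask[i] == 1 for observed points
    and 0 for padding, so the model can ignore padded positions.
    """
    recent = series[-context_len:]
    pad = context_len - len(recent)
    values = [0.0] * pad + list(recent)
    mask = [0] * pad + [1] * len(recent)
    return values, mask

# A short series gets left-padded out to the full context length.
values, mask = prepare_context([1.0, 2.0, 3.0])
```

Series longer than 512 points are simply truncated to the most recent window; the mask convention matters only for shorter series.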

Training Data

Timeseries-PILE, built from public forecasting, classification, and anomaly-detection corpora including Informer datasets, Monash, UCR/UEA, and TSB-UAD.

Recommended For

  • Shared backbones across forecasting, anomaly detection, classification, and imputation
  • Teams that want one general-purpose time-series representation model

Strengths

  • Broadest multi-task scope in the hosted catalog
  • Useful when the same deployment needs to cover several downstream tasks

Limitations

  • Not optimized purely around one forecasting leaderboard objective
  • May carry more multi-task machinery than necessary if you only need straightforward zero-shot forecasting

Capabilities

forecasting · classification · anomaly-detection · imputation · retrieval

Tags

moment · multi-task · representation-learning · lightweight

Specifications

Parameters
~40M
Architecture
patch-based encoder-only transformer trained with masked time-series modeling
Context length
512
Max output
1,024
Avg latency
n/a
Uptime
n/a
Plan limits
1,000 rpm free · 1,000,000 rpm with billing
Accelerator
NVIDIA GPU
Regions
Virginia, US
License
MIT
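The architecture line above describes a patch-based encoder trained with masked time-series modeling. A toy sketch of that pretraining setup, assuming a patch length of 8 for illustration (the hosted checkpoint's exact patch length is not stated on this page): the context is split into non-overlapping patches, a random subset is masked, and the model learns to reconstruct the masked patches.

```python
import random

CONTEXT_LEN = 512
PATCH_LEN = 8  # assumed patch length, for illustration only

def patchify(series, patch_len=PATCH_LEN):
    """Split a series into non-overlapping patches."""
    assert len(series) % patch_len == 0
    return [series[i:i + patch_len] for i in range(0, len(series), patch_len)]

def mask_patches(patches, mask_ratio=0.3, seed=0):
    """Zero out a random subset of patches.

    Returns the masked patch list and the masked indices; during
    pretraining the model is scored on reconstructing those patches.
    """
    rng = random.Random(seed)
    n_masked = int(len(patches) * mask_ratio)
    masked_idx = set(rng.sample(range(len(patches)), n_masked))
    masked = [([0.0] * len(p) if i in masked_idx else p)
              for i, p in enumerate(patches)]
    return masked, masked_idx

series = [float(i) for i in range(CONTEXT_LEN)]
patches = patchify(series)            # 512 / 8 = 64 patches
masked, idx = mask_patches(patches)   # a random 30% of patches zeroed
```

Patching is also what keeps the encoder cheap: the transformer attends over 64 patch tokens rather than 512 raw time steps.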

Pricing

Input / 1M tokens
$0.50
Output / 1M tokens
$1.50
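A back-of-envelope cost estimate from the listed rates ($0.50 per 1M input tokens, $1.50 per 1M output tokens). The per-request token counts below are hypothetical examples, not measured values:

```python
# Listed per-token rates, in dollars.
INPUT_RATE = 0.50 / 1_000_000
OUTPUT_RATE = 1.50 / 1_000_000

def request_cost(input_tokens, output_tokens):
    """Dollar cost of one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical request: a full 512-token context and a 96-token forecast.
cost = request_cost(512, 96)
```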
