MOMENT-Large
Online · AutonLab/MOMENT-1-large · 385M params | 512 context | $0.50 / 1M input tokens | $1.50 / 1M output tokens
MOMENT-Large is the large checkpoint in AutonLab's general-purpose time-series foundation-model family. Official sources frame MOMENT as a multi-task representation model that transfers across forecasting, classification, anomaly detection, imputation, reconstruction, and embedding extraction rather than optimizing purely for one forecasting benchmark. It is the most flexible hosted model when you expect to reuse a shared backbone across several downstream time-series tasks.
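Concretely, the "shared backbone" pattern means one frozen encoder produces an embedding per series, and small task-specific heads consume that embedding. A minimal NumPy sketch of the idea, where `fake_backbone` and both heads are illustrative stand-ins and not MOMENT's actual API:

```python
import numpy as np

def fake_backbone(series: np.ndarray, dim: int = 8) -> np.ndarray:
    """Stand-in for a frozen foundation-model encoder: maps a 1-D
    series to a fixed-size embedding. Illustrative only."""
    rng = np.random.default_rng(0)
    proj = rng.normal(size=(series.shape[-1], dim))
    return series @ proj  # shape (dim,)

def classify(emb: np.ndarray, w: np.ndarray) -> int:
    """Tiny linear classification head on the shared embedding."""
    return int(emb @ w > 0)

def anomaly_score(emb: np.ndarray, centroid: np.ndarray) -> float:
    """Distance-to-centroid anomaly head on the same embedding."""
    return float(np.linalg.norm(emb - centroid))

series = np.sin(np.linspace(0.0, 6.28, 512))  # one 512-step context window
emb = fake_backbone(series)                   # shared representation
label = classify(emb, np.ones(emb.shape[0]))
score = anomaly_score(emb, np.zeros(emb.shape[0]))
```

The point is architectural: both heads read the same embedding, so the expensive encoder runs once per series regardless of how many downstream tasks consume it.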
Model Classification
Family
MOMENT-1
Type
time series foundation model
Pretrained time-series model exposed on TSFM.ai for zero-shot or few-shot forecasting workloads.
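Because the checkpoint accepts a fixed 512-timestep context, shorter series typically need left-padding plus an observation mask, and longer ones keep only the most recent 512 points. A hedged sketch of that preprocessing (the left-padding convention here is an assumption for illustration, not documented hosted behavior):

```python
import numpy as np

CONTEXT_LEN = 512  # MOMENT-Large's fixed context length

def to_context_window(series: np.ndarray, context_len: int = CONTEXT_LEN):
    """Left-pad or truncate a 1-D series to `context_len`. Returns
    (values, mask) where mask is 1 for observed points, 0 for padding."""
    n = series.shape[0]
    if n >= context_len:
        # Keep only the most recent `context_len` observations.
        values = series[-context_len:].astype(float)
        mask = np.ones(context_len, dtype=np.int8)
    else:
        values = np.zeros(context_len)
        mask = np.zeros(context_len, dtype=np.int8)
        values[-n:] = series
        mask[-n:] = 1
    return values, mask

vals, mask = to_context_window(np.arange(100.0))
# vals has shape (512,); the last 100 slots hold the series.
```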
Resources
Training Data
The Time Series Pile, built from public forecasting, classification, and anomaly-detection corpora including the Informer long-horizon datasets, the Monash forecasting archive, the UCR/UEA classification archives, and TSB-UAD.
Recommended For
- Shared backbones across forecasting, anomaly detection, classification, and imputation
- Teams that want one general-purpose time-series representation model
Strengths
- Broadest multi-task scope in the hosted catalog
- Useful when the same deployment needs to cover several downstream tasks
Limitations
- Not optimized purely around one forecasting leaderboard objective
- May be heavier than needed if you only need straightforward zero-shot forecasting
Capabilities
Specifications
- Parameters
- 385M
- Architecture
- patch-based encoder-only transformer trained with masked time-series modeling
- Context length
- 512
- Max output
- 1,024
- Avg latency
- n/a
- Uptime
- n/a
- Rate limit
- n/a
- Accelerator
- NVIDIA GPU
- Regions
- Virginia, US
- License
- n/a
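The masked time-series pretraining objective listed above can be illustrated in a few lines: split the context window into fixed-length patches, hide a random subset, and train the encoder to reconstruct the hidden patches. A toy NumPy sketch, where the patch length of 8 and the mask ratio are assumptions chosen for illustration rather than the hosted model's documented settings:

```python
import numpy as np

def patchify(series: np.ndarray, patch_len: int = 8) -> np.ndarray:
    """Split a 1-D series into non-overlapping fixed-length patches."""
    n_patches = series.shape[0] // patch_len
    return series[: n_patches * patch_len].reshape(n_patches, patch_len)

def mask_patches(patches: np.ndarray, mask_ratio: float = 0.3, seed: int = 0):
    """Zero out a random subset of patches; return the masked copy
    and the indices of the hidden patches."""
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    idx = rng.choice(n, size=max(1, int(n * mask_ratio)), replace=False)
    masked = patches.copy()
    masked[idx] = 0.0
    return masked, idx

series = np.sin(np.linspace(0.0, 20.0, 512))
patches = patchify(series)          # (64, 8) with the assumed patch length
masked, hidden = mask_patches(patches)
# A real encoder is trained so that reconstruction(masked) ~= patches[hidden].
```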
Pricing
- Input / 1M tokens
- $0.50
- Output / 1M tokens
- $1.50
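At these rates, per-request cost is simple arithmetic. A small helper (the example token counts are made up for illustration):

```python
# Hosted pricing for MOMENT-Large, per 1M tokens.
INPUT_PER_M = 0.50
OUTPUT_PER_M = 1.50

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-1M-token rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 512-token context producing a 128-token forecast:
cost = request_cost(512, 128)  # (512*0.50 + 128*1.50) / 1e6 = 0.000448
```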