
Giving Claude Time-Series Superpowers with MCP

Large language models can't forecast time series — but they don't have to. With MCP tool use, Claude can call specialized foundation models and return calibrated probabilistic forecasts in a single conversation turn.

TSFM.ai Team
February 12, 2026 · 3 min read

Ask Claude to forecast next week's server traffic and it will give you a confident, plausible, and almost certainly wrong answer. This isn't a criticism — it's a fundamental limitation. LLMs process tokenized text, not continuous numerical sequences. Recent research confirms this: Merrill et al. found that in zero-shot evaluations, language models performed no better with time series data than without it (arXiv:2404.11757). A separate study showed that even small amounts of noise break LLM-based zero-shot forecasters entirely (arXiv:2506.00457). The tokenization step destroys the very properties — scale, continuity, autocorrelation — that forecasting depends on.

But LLMs are exceptional at something else: understanding what a user wants, orchestrating tools, and presenting results in context. The question isn't how to make Claude forecast. It's how to give Claude access to models that can.

MCP: The Missing Bridge

The Model Context Protocol (MCP) is an open standard for connecting AI assistants to external tools and data sources. An MCP server exposes typed tools — functions with defined inputs and outputs — that any compatible client can discover and call. When Claude connects to an MCP server, it sees the available tools, understands their parameters from the schema, and can invoke them as part of a natural conversation.
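As a concrete sketch of what "typed tools" means, an MCP tool is essentially a name, a description, and a JSON Schema for its inputs. The field names below are illustrative, not TSFM.ai's actual schema:

```python
# Hypothetical MCP tool definition: a name, a human-readable description,
# and a JSON Schema that tells the client which inputs are valid.
forecast_tool = {
    "name": "forecast",
    "description": "Run a probabilistic forecast on a univariate time series.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "values": {"type": "array", "items": {"type": "number"}},
            "prediction_length": {"type": "integer", "minimum": 1},
            "quantiles": {"type": "array", "items": {"type": "number"}},
        },
        "required": ["values", "prediction_length"],
    },
}
```

Because the schema travels with the tool, Claude can fill in parameters correctly without any forecasting-specific prompt engineering.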

This is the right abstraction for forecasting. Instead of asking an LLM to produce numbers directly, you let it call a time series foundation model through a structured API. The TSFM handles the numerical prediction. Claude handles everything else: understanding the user's intent, formatting the input, interpreting the output, and explaining the results.

How TSFM.ai's MCP Server Works

We built an MCP server that exposes four tools: listModels, getModelById, forecast, and forecastBatch. When Claude connects, it can browse available foundation models, select one appropriate for the task, and run forecasts — all within a single conversation.

The forecast tool accepts a time series as a list of numerical values, along with optional parameters like prediction length, frequency, and quantile levels. It returns calibrated probabilistic forecasts with prediction intervals — not a single point estimate, but a full distribution of possible outcomes. Claude can then explain what the forecast means, flag uncertainty, and suggest next steps.
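To make the shape of that exchange concrete, here is a sketch of a request and a quantile-keyed response; the literal TSFM.ai payloads may differ, and the model id is hypothetical:

```python
# Illustrative request for the forecast tool.
request = {
    "model": "tsfm-base",            # hypothetical id discovered via listModels
    "values": [112.0, 118.5, 121.0, 119.2, 125.8, 131.4],
    "frequency": "M",                # monthly observations
    "prediction_length": 3,
    "quantiles": [0.1, 0.5, 0.9],
}

# Illustrative response: one series per requested quantile, so downstream
# code can draw a median line plus a prediction band.
response = {
    "forecasts": {
        "0.1": [128.0, 129.5, 130.1],
        "0.5": [134.2, 136.8, 139.0],
        "0.9": [141.0, 145.2, 149.7],
    }
}

# An 80% prediction interval at each step is the band between the
# 0.1 and 0.9 quantiles.
lower = response["forecasts"]["0.1"]
upper = response["forecasts"]["0.9"]
intervals = list(zip(lower, upper))
```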

Authentication follows the OAuth 2.1 authorization-code flow with PKCE, with protected-resource metadata discovery per RFC 9728, so MCP clients can authenticate users securely without ever handling raw credentials. Once authenticated, each tool call deducts from the user's credit balance and logs usage for billing.
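The PKCE piece of that flow (defined in RFC 7636) is simple enough to sketch: the client keeps a random verifier secret and sends only its SHA-256 challenge, so an intercepted authorization code is useless without the verifier.

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate an RFC 7636 code_verifier and its S256 code_challenge."""
    # 32 random bytes, base64url-encoded without padding -> 43-char verifier.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # The challenge is the base64url-encoded SHA-256 digest of the verifier.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
```

The client sends `challenge` with the authorization request and reveals `verifier` only when exchanging the code for a token.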

Why This Matters

The pattern we're seeing — LLM as orchestrator, specialist model as engine — is converging across the industry. Google recently connected TimesFM to BigQuery via MCP, letting agents run AI.FORECAST inside warehouse queries. AWS published a similar architecture for SageMaker endpoints behind MCP. Salesforce's MoiraiAgent uses a 3B-parameter LLM to orchestrate multiple TSFMs, selecting and configuring them based on context.

The common insight is that zero-shot forecasting works best when the forecasting model focuses purely on numerical patterns and a separate system handles context, intent, and presentation. MCP makes this separation clean and composable.

What This Looks Like in Practice

A user pastes a CSV of monthly revenue data into Claude. Claude parses the numbers, calls listModels to check what's available, selects an appropriate model, calls forecast with a 12-month horizon, and returns a chart showing the median forecast with 80% prediction intervals. The user asks "what if we assume 10% growth in Q3?" — Claude adjusts the input, reruns the forecast, and compares the two scenarios side by side. No notebooks, no infrastructure, no ML expertise required.
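The "what if" step above is just an input transformation before a second forecast call. A minimal sketch, assuming a monthly series that starts in January and a uniform 10% bump across the Q3 months:

```python
def apply_q3_growth(monthly: list[float], growth: float = 0.10) -> list[float]:
    """Scale July-September observations by (1 + growth), leave the rest unchanged."""
    q3 = {6, 7, 8}  # zero-based month indices for Jul, Aug, Sep
    return [v * (1 + growth) if i % 12 in q3 else v for i, v in enumerate(monthly)]

revenue = [100.0] * 12            # stand-in for the user's pasted data
scenario = apply_q3_growth(revenue)
```

Claude would run the forecast tool once on `revenue` and once on `scenario`, then present the two result sets side by side.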

This is the interaction model we think production forecasting is moving toward: specialist models behind open protocols, orchestrated by general-purpose AI that meets users where they already work. The forecasting happens in the background. The conversation happens in plain English.

To connect Claude to TSFM.ai's forecast models, add our MCP server in your Claude configuration — details are in our API documentation.
