Forecasting workflows
The Simulacrum client wraps the /{modelName}/v1/forecast endpoint with input validation and typed responses. By default the SDK uses the tempo model; override via the model parameter.
Minimal forecast
from simulacrum import Simulacrum

client = Simulacrum(api_key="sim-key_id-secret")

values = [412.0, 415.5, 418.3, 420.0, 421.8]

# The result comes back as a numpy array of length `horizon`.
forecast = client.forecast(series=values, horizon=6, model="tempo")
print(forecast.tolist())
series must be one-dimensional; pass a list, tuple, or numpy array (see the sketch below).
horizon is the number of future steps the API should predict.
model defaults to "tempo". Override it to target a specialised backend or your fine-tuned model name.
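Any one-dimensional sequence works the same way; for example, a numpy array can be passed directly. A small sketch, assuming numpy is installed alongside the SDK:

import numpy as np
from simulacrum import Simulacrum

client = Simulacrum(api_key="sim-key_id-secret")

# A 1-D numpy array is accepted just like a list or tuple.
values = np.array([412.0, 415.5, 418.3, 420.0, 421.8])
forecast = client.forecast(series=values, horizon=6)  # model defaults to "tempo"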
Model selection
Use the model parameter to explore Simulacrum’s models (including fine-tuned ones). When a model name is not recognised, the API returns an InvalidRequestError.
client.forecast(series=values, horizon=6, model="smlcrm-model")
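If you probe model names programmatically, catch the error and fall back. A minimal sketch, assuming InvalidRequestError is importable from the top-level simulacrum package (check your SDK version for the exact import path):

from simulacrum import InvalidRequestError, Simulacrum  # import path is an assumption

client = Simulacrum(api_key="sim-key_id-secret")
values = [412.0, 415.5, 418.3, 420.0, 421.8]

try:
    forecast = client.forecast(series=values, horizon=6, model="smlcrm-model")
except InvalidRequestError:
    # Unknown model name: fall back to the default tempo model.
    forecast = client.forecast(series=values, horizon=6, model="tempo")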
Batching workloads
For high-volume jobs, batch your requests and parallelise the work with asyncio or concurrent.futures.
import concurrent.futures

from simulacrum import Simulacrum

client = Simulacrum(api_key="sim-key_id-secret")

segments = {
    "east": [51.2, 59.1, 62.0, 65.4],
    "west": [44.0, 45.5, 42.3, 43.7],
    "digital": [240.7, 243.5, 249.0, 256.2],
}

with concurrent.futures.ThreadPoolExecutor(max_workers=6) as pool:
    # Map each submitted future back to its segment name.
    futures = {
        pool.submit(client.forecast, series=series, horizon=8): name
        for name, series in segments.items()
    }
    # as_completed yields futures as they finish, so fast segments print
    # without waiting on slower ones.
    for future in concurrent.futures.as_completed(futures):
        print(futures[future], future.result())
Keep horizons small and scale out across workers for throughput. The API enforces request-level rate limits per account, so back off and retry when QuotaExceededError is raised, as in the sketch below.
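A simple exponential-backoff wrapper is usually enough to absorb transient rate limiting. This is a minimal sketch, assuming QuotaExceededError is importable from the top-level simulacrum package (verify the import path against your SDK version):

import time

from simulacrum import QuotaExceededError, Simulacrum  # import path is an assumption

client = Simulacrum(api_key="sim-key_id-secret")

def forecast_with_backoff(series, horizon, retries=5):
    # Sleep 1s, 2s, 4s, ... between attempts when the quota is exhausted.
    for attempt in range(retries):
        try:
            return client.forecast(series=series, horizon=horizon)
        except QuotaExceededError:
            time.sleep(2 ** attempt)
    raise RuntimeError("Rate-limit retries exhausted")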
Post-processing
The SDK returns a numpy array, so you can slot forecasts into pandas, Arrow, or downstream models without conversion boilerplate.
import pandas as pd

horizon = 6
forecast = client.forecast(series=values, horizon=horizon, model="tempo")

# Attach a daily DatetimeIndex so each predicted step maps to a calendar date.
index = pd.date_range("2024-01-01", periods=horizon, freq="D")
series = pd.Series(forecast, index=index, name="prediction")
Store the same metadata you used to generate the forecast (model name, horizon, feature flags) alongside each result for reproducibility.
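One lightweight approach is to persist the forecast together with its generation parameters as JSON. The record layout below is illustrative, not an SDK feature:

import json
from datetime import datetime, timezone

record = {
    "model": "tempo",
    "horizon": horizon,
    "generated_at": datetime.now(timezone.utc).isoformat(),
    # numpy arrays are not JSON-serialisable directly, hence tolist().
    "prediction": forecast.tolist(),
}

with open("forecast_run.json", "w") as handle:
    json.dump(record, handle)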
For backtesting, hold out the tail of your historical data, forecast from the remaining prefix, and compare the result against the holdout. The numpy output makes error metrics such as MAPE or SMAPE straightforward to compute.
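As an illustration, this sketch holds out the last three observations, forecasts from the prefix, and scores the result with SMAPE; history stands in for your own data:

import numpy as np

history = np.array([412.0, 415.5, 418.3, 420.0, 421.8, 423.1, 425.9, 428.4])
holdout_len = 3

# Forecast from the prefix and compare against the held-out tail.
prefix, holdout = history[:-holdout_len], history[-holdout_len:]
predicted = client.forecast(series=prefix, horizon=holdout_len, model="tempo")

# Symmetric MAPE: bounded even when individual actuals approach zero.
smape = np.mean(2 * np.abs(predicted - holdout) / (np.abs(predicted) + np.abs(holdout)))
print(f"SMAPE: {smape:.2%}")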