Down the Rabbit-Hole
Subarashiki Hibi is a visual novel developed by KeroQ. The story revolves around a prophecy, generated by the "Web Bot Project" (a system that analyzes web data to forecast future events), that the world will end on July 20th, 2012.
Implement a time series forecasting pipeline from scratch.
You are given a univariate time series (a sequence of numeric values at regular intervals). Your task is to implement two forecasting models — Simple Moving Average (SMA) and Exponential Smoothing (ES) — evaluate them on a held-out test set, and output predictions and metrics.
"We cannot infer the events of the future from those of the present. Superstition is the belief in the causal nexus."

Input Format
n train_size window alpha
v_1
v_2
...
v_n
- n is the total number of time steps.
- train_size is the number of time steps in the training set (the first train_size values). The remaining n - train_size values form the test set.
- window is the lookback window size used by the SMA model.
- alpha is the smoothing factor for the ES model (0 < alpha <= 1).
- Each subsequent line is a single floating-point value.
It is guaranteed that train_size >= window and train_size < n.
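As a point of reference, here is a minimal sketch of one way to read this format in Python. The function name and variable names are purely illustrative; reading the whole stream and splitting on whitespace keeps the parsing independent of exact line breaks.

```python
import sys

def read_input(stream=sys.stdin):
    """Parse the header line (n, train_size, window, alpha) and the n values that follow."""
    tokens = stream.read().split()
    n, train_size, window = int(tokens[0]), int(tokens[1]), int(tokens[2])
    alpha = float(tokens[3])
    values = [float(x) for x in tokens[4:4 + n]]
    return n, train_size, window, alpha, values
```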
Output Format
SMA
p_1 p_2 ... p_m
RMSE: <value>
ES
p_1 p_2 ... p_m
RMSE: <value>
- m = n - train_size (the number of test-set time steps).
- For each model, output its name, then its m predictions (space-separated on one line), then its RMSE on the test set.
- RMSE values must be printed with exactly 4 decimal places.
- Predictions must be printed with exactly 4 decimal places.
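For illustration, one way to produce this layout in Python, assuming preds holds a model's m test predictions and rmse its error (both names are hypothetical):

```python
def print_block(name, preds, rmse):
    # Model name, then space-separated predictions, then RMSE, all numbers to 4 decimals.
    print(name)
    print(" ".join(f"{p:.4f}" for p in preds))
    print(f"RMSE: {rmse:.4f}")
```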
Models to Implement
Simple Moving Average (SMA)
To predict time step t, average the window most recent values prior to t.
That is:
SMA(t) = (v[t-window] + v[t-window+1] + ... + v[t-1]) / window
- For the first test prediction (t = train_size), the window pulls from the tail of the training set.
- Each subsequent prediction uses the actual value at the prior time step (not the predicted value). This is a single-step-ahead forecast using ground-truth history (see the sketch after this list).
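A minimal sketch of this one-step-ahead SMA forecast, under the assumption that the solution is written in Python (the function name is illustrative):

```python
def sma_forecast(values, train_size, window):
    """Predict each test step t as the mean of the `window` actual values before t."""
    preds = []
    for t in range(train_size, len(values)):
        # Ground-truth history only: values[t - window : t] are actual observations.
        preds.append(sum(values[t - window:t]) / window)
    return preds
```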
Exponential Smoothing (ES)
The ES forecast is computed as a running weighted average. The recurrence is:
S[0] = v[0]
S[t] = alpha * v[t-1] + (1 - alpha) * S[t-1], for t >= 1
The prediction for time step t is S[t].
- S must be computed over the entire sequence up to each point. For test predictions, the recurrence continues using actual values (not predictions).
- The prediction for t = train_size is S[train_size], computed using v[train_size - 1] and S[train_size - 1] (see the sketch below).
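A corresponding sketch of the ES recurrence in Python (again, the function name is illustrative); it runs the recurrence over the whole sequence and keeps only the predictions that fall in the test range:

```python
def es_forecast(values, train_size, alpha):
    """Run the ES recurrence from t = 0 and return predictions for the test steps."""
    s = values[0]                                      # S[0] = v[0]
    preds = []
    for t in range(1, len(values)):
        s = alpha * values[t - 1] + (1 - alpha) * s    # S[t] uses the actual v[t-1]
        if t >= train_size:
            preds.append(s)                            # prediction for time step t is S[t]
    return preds
```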
Evaluation
RMSE is defined as:
RMSE = sqrt( (1/m) * sum_{i=0}^{m-1} (pred[i] - actual[i])^2 )
where actual[i] = v[train_size + i].
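Equivalently, as a small Python helper using only math.sqrt (which the behavioral requirements below permit):

```python
from math import sqrt

def rmse(preds, actuals):
    """Root mean squared error over the m test-set points."""
    m = len(preds)
    return sqrt(sum((p - a) ** 2 for p, a in zip(preds, actuals)) / m)
```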
Behavioral Requirements
- You must not use any forecasting, statistics, or ML library. Basic math (sqrt, etc.) is permitted.
- Predictions must use actual (ground-truth) values for history at every step, not previously generated predictions. This is called "one-step-ahead" forecasting.
- The train/test split is strictly positional: the first train_size values are train, the rest are test. No shuffling or reordering.
Notes
- The ES model does not have a separate "training" phase; it simply runs its recurrence from t = 0 onward. The train/test split only determines which predictions are evaluated.
- Both models are deterministic given the same input. There is no fitting or parameter search.
Example Input
10 8 3 0.5
5.0
5.0
5.0
5.0
5.0
5.0
5.0
5.0
5.0
5.0
Example Output
SMA
5.0000 5.0000
RMSE: 0.0000
ES
5.0000 5.0000
RMSE: 0.0000