R Tutorial
Video tutorials are based on the old forecast package.
Text tutorials have been updated to use fable and associated packages.
Video Tutorials
Hindcasting & Visual Evaluation
Quantitative Evaluation of Point Estimates
Written Tutorial
Steps in forecasting
- Problem definition
- Gather information
- Exploratory analysis
- Choosing and fitting models
- Make forecasts
- Evaluate forecasts
Setup
- Load the packages
library(tsibble)
library(fable)
library(feasts)
library(dplyr)
- Load the data
portal_data = read.csv("content/data/portal_timeseries.csv") |>
  mutate(month = yearmonth(date)) |>
  as_tsibble(index = month)
portal_data
Evaluating forecast model fits
- We can fit multiple models to a single dataset and then compare them
portal_models = model(
  portal_data,
  arima = ARIMA(NDVI),
  tslm = TSLM(NDVI),
  arima_exog = ARIMA(NDVI ~ rain)
)
portal_models
glance(portal_models)
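- To inspect any one of these fits in more detail, one option (a sketch, not part of the original tutorial) is to select a single model column from the model table and pass it to report()
# Print the full summary of the automatically selected ARIMA model
portal_models |>
  select(arima) |>
  report()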
- But these comparisons typically rely on IID residuals
- We know we don’t have this for the TSLM, so its likelihood and IC values are invalid (a quick residual check is sketched below)
- These comparisons also typically only tell us about 1-step-ahead forecasts, and we may care about longer horizons
- So it’s best to evaluate the forecasts themselves
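- A minimal sketch of such a residual check, using the Ljung-Box feature from feasts on the innovation residuals (the lag of 12 is an arbitrary choice); small p-values indicate leftover autocorrelation
# Ljung-Box test on the innovation residuals of each model;
# small p-values suggest the residuals are not independent
augment(portal_models) |>
  features(.innov, ljung_box, lag = 12)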
- Ideally we would make a forecast for the future, collect new data, and then see how well the forecast performed
- This is the gold standard, but it requires waiting for time to pass
- Therefore, most work developing forecasting models focuses on “hindcasting” or “backcasting”
Hindcasting
- Split existing time-series into 2 pieces
- Fit to data from the first part of an observed time-series
- Test on data from the end of the observed time-series
- For those of you familiar with cross-validation, this is a special form of cross-validation that accounts for autocorrelation and the fact that time is directional
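- As an aside (not part of the original tutorial), here is a minimal sketch of this rolling-origin style of cross-validation using tsibble's stretch_tsibble; the .init and .step values are arbitrary choices for illustration
# Each fold extends the training window forward in time,
# so forecasts are always made from past data only
cv_data = stretch_tsibble(portal_data, .init = 120, .step = 12)
cv_data |>
  model(ARIMA(NDVI ~ pdq(0,0,2) + PDQ(0,0,0))) |>
  forecast(h = 12) |>
  accuracy(portal_data)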
Test and training data
- To split the time-series into training data and testing data we use the filter function from dplyr
- 1st argument is the tsibble
- 2nd argument is the condition
- Data runs through November 2014
- Hold out the last 3 years for evaluation
train <- filter(portal_data, month < yearmonth("2011 Dec"))
test <- filter(portal_data, month >= yearmonth("2011 Dec"))
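- A quick sanity check on the split (a sketch, not in the original): with monthly data running through November 2014, the test set should contain the last 36 months
nrow(test)       # should be 36
tail(train, 3)   # training data should end at 2011 Nov
head(test, 3)    # test data should start at 2011 Dec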
Build model on training data
- We then fit our model to the training data
- Let’s start with our non-seasonal MA2 model
ma2_model = model(train, ARIMA(NDVI ~ pdq(0,0,2) + PDQ(0,0,0)))
Make forecast for the test data
- We then use that model to make forecasts for the test data
- We reserved 3 years of test data so we want to forecast 36 months into the future
ma2_forecast = forecast(ma2_model, h = 36)
Visual Evaluation
- Start by evaluating the performance of the forecast visually
- Remember that we can plot the data the model is fit to and the forecast using autoplot
autoplot(ma2_forecast, train)
- If we want to add the observations from the test data we can do this by adding autolayer
autoplot(ma2_forecast, train) +
  autolayer(test, NDVI)
- This adds a new layer with the test data to the ggplot object created by autoplot
- How does it look?
- We can also compare this to the model’s fit to the training data by adding the fitted values
autoplot(ma2_forecast, train) +
  autolayer(test, NDVI) +
  autolayer(augment(ma2_model), .fitted, color = "blue")
Quantitative Evaluation
Point Estimates
- Quantitative evaluation of point estimates is based on forecast errors
- The forecast error is the difference between the observed value and the point forecast at each time step
- This gives us one error for each forecast horizon
- Need a way to combine them
- Easiest is to take the average across all horizons
- Called the Mean Error
- But big positive errors and big negative errors cancel out, so bad predictions could have low ME
- To focus on the magnitude of the error we most commonly use a metric called the Root Mean Squared Error
- Square the error, resulting in all errors having a positive value
- Then take the mean
- This is the Mean Squared Error
- But the units are now in terms of the response squared, which can be hard to interpret
- So we take the square Root
- This gives us a measure of the average error in the units of the response variable
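- As a worked sketch of these definitions (not part of the original tutorial), the metrics can be computed by hand by joining the point forecasts (.mean) to the test observations by month
# One error per forecast horizon: observed NDVI minus the point forecast
errors = ma2_forecast |>
  as_tibble() |>
  select(month, .mean) |>
  left_join(as_tibble(test), by = "month") |>
  mutate(error = NDVI - .mean)
# Combine the errors across horizons
summarize(errors,
  ME   = mean(error),         # positive and negative errors cancel out
  MSE  = mean(error^2),       # units are NDVI squared
  RMSE = sqrt(mean(error^2))  # back on the scale of NDVI
)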
- The accuracy function shows a number of common measures of forecast accuracy
- 1st argument: the forecast object from the model fit to the training data
- 2nd argument: the test data time-series
accuracy(ma2_forecast, test)
- RMSE is a common method for evaluating forecast performance
- Now it’s your turn.
- Write code to quantify the accuracy of the full ARIMA model
- I’m going to add the full ARIMA into my existing code
models = model(train,
  ma2 = ARIMA(NDVI ~ pdq(0,0,2) + PDQ(0,0,0)),
  arima = ARIMA(NDVI))
forecasts = forecast(models, test)
autoplot(forecasts, train, level = 50, alpha = 0.75) +
  autolayer(test, NDVI)
accuracy(forecasts, test)
- autoplot graphs both sets of forecasts for comparison
- accuracy shows all of the metrics for both forecasts for comparison (see the RMSE-sorting sketch at the end)
- So the full ARIMA is better on point estimates
- Also seems better on uncertainty, which we’ll learn about next time
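- To make that comparison at a glance, one option (a sketch, not in the original) is to sort the accuracy table by RMSE
# Rank the models by RMSE; lower is better
accuracy(forecasts, test) |>
  arrange(RMSE) |>
  select(.model, ME, RMSE, MAE)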