# R Tutorial

*Video tutorials are based on the old forecast package. Text tutorials have been updated to use `fable` and associated packages.*

## Video Tutorials

### Hindcasting & Visual Evaluation

### Quantitative Evaluation of Point Estimates

## Written Tutorial

### Steps in forecasting

- Problem definition
- Gather information
- Exploratory analysis
- Choosing and fitting models
- Make forecasts
- **Evaluate forecasts**

### Setup

- Load the packages

```
library(tsibble)
library(fable)
library(feasts)
library(dplyr)
```

- Load the data

```
portal_data = read.csv("content/data/portal_timeseries.csv") |>
  mutate(month = yearmonth(date)) |>   # convert the date column to a year-month index
  as_tsibble(index = month)            # store as a tsibble indexed by month
portal_data
```
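
A quick exploratory plot is often worth making before fitting anything. This is a minimal sketch, assuming the `portal_data` tsibble created above; `autoplot` on a tsibble comes from the fable/feasts family:

```
autoplot(portal_data, NDVI)   # plot the monthly NDVI series
```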

### Evaluating forecast model fits

- We can fit multiple models to a single dataset and then compare them

```
portal_models = model(
  portal_data,
  ma2 = ARIMA(NDVI ~ pdq(0,0,2) + PDQ(0,0,0)),
  arima = ARIMA(NDVI),
  arima_exog = ARIMA(NDVI ~ rain)
)
portal_models
glance(portal_models)
```
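
If you want to rank the candidate models by an information criterion, one option (a sketch, assuming the `portal_models` mable above) is to sort the `glance` output:

```
glance(portal_models) |>
  arrange(AICc)   # smaller AICc suggests a better balance of fit and complexity
```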

- But these comparisons typically rely on the assumption of IID residuals (a residual check sketch follows this list)
- We know we don’t have this for TSLM, so its likelihood and IC values are invalid
- These comparisons also typically tell us about 1-step-ahead forecasts, and we may care about longer horizons
- So it’s best to evaluate the forecasts themselves
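
One way to check the IID-residual assumption is to look at the innovation residuals, as in this sketch (it assumes the `portal_models` mable above; the lag value of 24 is illustrative):

```
# Residual diagnostic plots for one model from the mable
portal_models |>
  select(arima) |>
  gg_tsresiduals()

# Ljung-Box test on the innovation residuals for every model in the mable
augment(portal_models) |>
  features(.innov, ljung_box, lag = 24)
```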

- Ideally we make a forecast for the future, collect new data, and then see how well the forecast performed
- This is the gold standard, but it requires waiting for time to pass
- Therefore, most work developing forecasting models focuses on “hindcasting” or “backcasting”

### Hindcasting

- Split existing time-series into 2 pieces
- Fit to data from the first part of an observed time-series
- Test on data from the end of the observed time-series
- For those of you familiar with cross-validation, this is a special form of cross-validation that accounts for autocorrelation and the fact that time is directional (a rolling-origin sketch follows this list)
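
For those who want to try it, here is a rolling-origin cross-validation sketch (not part of the original exercise; the `.init` and `.step` values are illustrative) using `stretch_tsibble` from tsibble:

```
# Each slice starts at the beginning of the series and grows by 12 months
cv_data <- stretch_tsibble(portal_data, .init = 120, .step = 12)

# Fit the MA2 model to every slice and forecast 12 months past each one
cv_forecasts <- model(cv_data, ma2 = ARIMA(NDVI ~ pdq(0,0,2) + PDQ(0,0,0))) |>
  forecast(h = 12)

# Score those forecasts against the observed values
accuracy(cv_forecasts, portal_data)
```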

#### Test and training data

- To split the time-series into training data and testing data we use the `filter` function from dplyr
- 1st argument is the tsibble
- 2nd argument is the condition
- The data runs through the end of 2014, so use observations before December 2011 for training (leaving about 3 years for testing)

```
train <- filter(portal_data, month < yearmonth("2011 Dec"))
test <- filter(portal_data, month >= yearmonth("2011 Dec"))
```
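
A quick sanity check on the split (a sketch): the training data should end just before the test data begins.

```
tail(train, 3)   # last few months of the training data
head(test, 3)    # first few months of the test data
```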

#### Build model on training data

- We then fit our model to the training data
- Let’s start with our non-seasonal MA2 model

```
ma2_model = model(train, ARIMA(NDVI ~ pdq(0,0,2) + PDQ(0,0,0)))
```
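
To inspect the fitted coefficients before forecasting, `report` prints a summary of a single-model mable (a quick sketch using the `ma2_model` object above):

```
report(ma2_model)   # estimated MA coefficients and information criteria
```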

#### Make forecast for the test data

- We then use that model to make forecasts for the test data
- We reserved 3 years of test data so we want to forecast 36 months into the future

```
ma2_forecast = forecast(ma2_model, h = 36)
```
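
The result is a fable: one row per forecast month holding the forecast distribution and its mean. As a quick peek (a sketch assuming the `ma2_forecast` object above), `hilo` from fabletools converts the distributions to prediction intervals:

```
ma2_forecast                      # forecast distributions and point forecasts (.mean)
hilo(ma2_forecast, level = 95)    # 95% prediction intervals
```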

### Visual Evaluation

- Start by evaluating the performance of the forecast visually
- Remember that we can plot the data the model is fit to and the forecast using `autoplot`

```
autoplot(ma2_forecast, train)
```

- If we want to add the observations from the test data we can do this by adding `autolayer`

```
autoplot(ma2_forecast, train) + autolayer(test, NDVI)
```

- This adds a new layer with the test data to the `ggplot` object created by `autoplot`
- How does it look?

### Quantitative Evaluation

#### Point Estimates

- Quantitative evaluation of point estimates is based on forecast errors
- The `accuracy` function shows a number of common measures of forecast accuracy
- 1st argument: forecast object from model on training data
- 2nd argument: test data time-series

```
accuracy(ma2_forecast, test)
```

- RMSE is a common method for evaluating forecast performance
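
If it helps to see what RMSE is doing, here is a manual computation sketch (it assumes the `ma2_forecast` and `test` objects above; `month` is the shared index column):

```
# Pair each forecast mean with the observed NDVI, then take the root mean squared error
fc_vs_obs <- ma2_forecast |>
  as_tibble() |>
  select(month, .mean) |>
  left_join(as_tibble(test) |> select(month, NDVI), by = "month")
sqrt(mean((fc_vs_obs$NDVI - fc_vs_obs$.mean)^2))
```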

Now it’s your turn.

- Write code to quantify the accuracy of the full ARIMA model
- I’m going to add the full ARIMA into my existing code

```
models = model(train,
  ma2 = ARIMA(NDVI ~ pdq(0,0,2) + PDQ(0,0,0)),
  arima = ARIMA(NDVI))
forecasts = forecast(models, new_data = test)
autoplot(forecasts, train) + autolayer(test, NDVI)
accuracy(forecasts, test)
```

- `autoplot` graphs both sets of forecasts for comparison
- `accuracy` shows all metrics for both forecasts for comparison
- So the full ARIMA is better on point estimates