Clients often tell us that their forecasting methods are 95 percent accurate. It sounds impressive, but just how accurate is that 95 percent, and what does it mean for their pricing strategy? Getting the answer right, and the pricing strategy built on it, can be worth hundreds of millions of dollars in additional revenue, which is why in industries such as hospitality forecast accuracy should be a fundamental business practice.
Measuring forecast accuracy precisely is crucial to your organization. Take the client who believes their forecasts are 95 percent accurate. Most likely they are not, because the client hasn't asked the right questions: At what level of granularity are they measuring? What metric are they using? Is it the right metric?
Making matters worse, some hotels look at forecasting from the inside out rather than the outside in. Their Revenue Management systems forecast arrivals by length of stay to feed the core optimization, while business users look at demand at the business-segment stay-night level. When the arrivals-by-length-of-stay forecasts are aggregated up, they are less accurate than the business-segment forecasts revenue managers produce outside the system. As a result, users lose confidence in the Revenue Management system's recommendations and adoption is low.
When the Revenue Management system instead maximizes forecast accuracy at the business-segment stay-night level, adoption of its price and inventory recommendations is higher. Users have more confidence in the forecast, and there is little loss of accuracy at the arrivals-by-length-of-stay level, so the resulting price and inventory recommendations remain sound.
Commonly, forecast accuracy is measured as bias, which is simply the sum of all errors. Positive errors cancel out negative ones, producing a small bias and the claim of "95% accurate." A standard scientific metric is Mean Absolute Percent Error (MAPE): subtract the forecast from the actual, divide the difference by the actual, take the absolute value, then average across all observations. Unfortunately, MAPE weights errors on small actuals more heavily than errors on large actuals, and when forecasting small numbers the problem is exacerbated, so scaled errors are preferred. The Mean Absolute Scaled Error (MASE) is the Mean Absolute Error (MAE) of the forecast relative to that of a benchmark forecast, typically a naïve random walk. MASE avoids the small-numbers problem and is easy to understand: a MASE less than 1 is good, and the smaller the better.
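To make the difference concrete, here is a minimal Python sketch, using entirely hypothetical stay-night numbers, of how bias and MAPE can tell two very different stories about the same forecast.

```python
import numpy as np

def bias(actual, forecast):
    """Sum of errors: positive and negative errors cancel each other out."""
    return np.sum(forecast - actual)

def mape(actual, forecast):
    """Mean Absolute Percent Error: average of |actual - forecast| / actual."""
    return np.mean(np.abs((actual - forecast) / actual)) * 100

# Hypothetical stay-night demand for one business segment over a week
actual   = np.array([120, 95, 130, 110, 80, 60, 105], dtype=float)
forecast = np.array([110, 105, 140, 100, 90, 50, 115], dtype=float)

print(bias(actual, forecast))   # errors nearly cancel, so bias looks tiny (10 rooms out of 700)
print(mape(actual, forecast))   # yet the night-by-night error is roughly 10-11 percent
```

In this illustration every night is off by 10 rooms, but because the misses alternate in direction the bias suggests near-perfect accuracy while MAPE exposes the real error.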
Make the case for using accurate forecast metrics, and be clear about which metric you are using to measure accuracy. Follow those steps and you will build a forecast accuracy strategy that can predict both demand and revenue. This formula supports the forecast accuracy methodology:

MASE = MAE_f / MAE_n

where MAE stands for Mean Absolute Error, f denotes the forecast, and n denotes the naïve or benchmark forecast.
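Continuing the same hypothetical numbers, here is a sketch of how that ratio might be computed, with a naïve "same as yesterday" random walk serving as the benchmark forecast.

```python
import numpy as np

def mase(actual, forecast):
    """MASE = MAE of the forecast divided by MAE of a naive random-walk benchmark.

    The naive benchmark predicts that each night repeats the previous night's
    actual value, so the first observation has no benchmark and is skipped.
    """
    mae_f = np.mean(np.abs(actual[1:] - forecast[1:]))
    naive = actual[:-1]                  # "same as yesterday" forecast
    mae_n = np.mean(np.abs(actual[1:] - naive))
    return mae_f / mae_n

# Same hypothetical stay-night numbers as in the earlier sketch
actual   = np.array([120, 95, 130, 110, 80, 60, 105], dtype=float)
forecast = np.array([110, 105, 140, 100, 90, 50, 115], dtype=float)

print(round(mase(actual, forecast), 2))  # ~0.34, well under 1, so the forecast beats the naive benchmark
```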