

How to Monitor Forecast Accuracy: Forward or Backward?

Generally, a statistical forecast generation process works in the following way. The history and forecast horizons are fixed; for example, a 72-month history period may be used to generate a forecast for the next 60 months. Every month the oldest history data point is dropped and the newest data point is added to the history zone. This is called history rollover. Continuing the example, if you are in February 2011, the January 2005 data point is dropped and the January 2011 data point is added, keeping the history period constant at 72 months. The forecast engine runs after the history rollover and generates forecasts based on the model preset for each SKU. Statistical tuning to find the best-suited forecast model for a particular SKU is generally performed only once, during the initial implementation. After that, some sort of monitoring process and alerts are set up to catch cases where the time series pattern has changed. The topic of this blog is the logic behind this monitoring process and, consequently, the alerts.
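As a minimal illustration of the rollover itself, here is a sketch in Python assuming monthly buckets and placeholder demand values; the deque-based window and the roll_over helper are purely illustrative and not part of any particular forecasting suite.

```python
from collections import deque

# A fixed-length 72-month history window; deque(maxlen=...) drops the oldest
# point automatically when a new one is appended.
HISTORY_MONTHS = 72

# History before the Feb 2011 run: Jan 2005 ... Dec 2010, oldest first
# (placeholder demand values).
history = deque(range(100, 100 + HISTORY_MONTHS), maxlen=HISTORY_MONTHS)

def roll_over(history, newest_actual):
    """History rollover: append the newest month; maxlen drops the oldest."""
    history.append(newest_actual)
    return history

roll_over(history, 205)   # Jan 2005 drops off, Jan 2011 rolls in
print(len(history))       # still 72 - the window length stays constant
```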

Most standard statistical forecasting suites offer the usual forecast error measures such as MAPE, MAD, RMSE, MPE, and MSE. The general tendency among users is to pick one of these measures and set up an alert when the error increases beyond a specified threshold. For example, suppose MAPE is the chosen measure and last month's MAPE was 18%. If, after this month's history rollover and forecast engine run, the MAPE becomes 25%, the system generates an alert because the drop in forecast accuracy is more than 5 percentage points. The user sees this alert and looks for the reason behind the drop. It could be the introduction of an outlier in the data, or it could be a genuine trend or seasonality change that warrants refitting a new forecast model with new parameters.
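A backward-looking alert of this kind might look like the sketch below; the mape and accuracy_alert helpers and the 5-percentage-point threshold are hypothetical, mirroring the example above.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error over the points that have both values."""
    terms = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts) if a != 0]
    return 100.0 * sum(terms) / len(terms)

def accuracy_alert(prev_mape, new_mape, threshold_pts=5.0):
    """Flag the SKU if MAPE worsened by more than the threshold, in points."""
    return (new_mape - prev_mape) > threshold_pts

# Numbers from the example above: 18% last month, 25% after this rollover.
print(mape([100, 120, 80], [110, 115, 90]))   # about 8.9% on a toy series
print(accuracy_alert(18.0, 25.0))             # True - more than 5 points worse
```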

This looks like a reasonable process, but I have seen it fail in many cases: it will not necessarily identify every case where a problem exists. The reason is as follows.

Ask a simple question: how is the MAPE in the example above calculated? In most statistical forecasting suites it works as follows. Suppose the forecast model used is the Holt-Winters seasonal-trend model. The minimum number of data points required for this model to generate its first forecast is two seasons of data for seasonality plus three additional data points for trend. With 12 periods per season, that is 24 plus 3, or 27 data points. The forecast engine uses the first 27 points of history to generate a forecast for point number 28; in our example that means it uses history from February 2005 to April 2007 to forecast May 2007. Similarly, it uses the first 28 data points to forecast point number 29, and so on. Out of the 72 history data points it can therefore produce fitted forecasts for 45 of them (points 28 to 72). For these 45 points both history and forecast are available; the engine compares them and calculates the MAPE. Suppose this MAPE is 18%.

Now imagine what happens next month. The February 2005 data point is dropped and February 2011 is included in history; everything else stays the same. Again the first 27 data points are used to forecast point number 28, and again 45 data points have both history and forecast. But notice the most important thing: 44 of these 45 data points are the same as last month. Only one new data point, February 2011, has rolled in to make the new MAPE different from the previous one. It is highly unlikely that the introduction of a single data point will change the MAPE drastically, so we may or may not get an alert. Yet even one bad data point for February 2011 will distort next year's forecast to a great extent, because every data point matters a great deal in a time series with seasonality. The sketch below illustrates how weak this backward signal is; my contention is therefore to set up the monitoring alerts as follows.
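The following sketch shows why the backward MAPE barely moves after a rollover. It uses a seasonal-naive forecast (the value from 12 months earlier) as a simple stand-in for the fitted Holt-Winters model, and a synthetic 72-month history; both are illustrative assumptions, not the method of any specific tool.

```python
def backward_mape(history, min_points=27, season=12):
    """In-sample MAPE over the points that have both an actual and a fitted
    forecast. A seasonal-naive forecast (value from 12 months earlier)
    stands in here for the fitted model."""
    pairs = [(history[t - season], history[t])
             for t in range(min_points, len(history))]
    return 100.0 * sum(abs(a - f) / abs(a) for f, a in pairs) / len(pairs)

# Synthetic 72-month seasonal history ending Jan 2011.
old_history = [100 + 10 * ((m % 12) - 6) for m in range(72)]

# Rollover: Feb 2005 drops off, a badly distorted Feb 2011 actual rolls in
# (the seasonal pattern suggests about 40 for this month).
new_history = old_history[1:] + [120]

print(round(backward_mape(old_history), 1))   # 0.0 over 45 fitted points
print(round(backward_mape(new_history), 1))   # 1.5 - 44 of 45 points unchanged
```

Even though the new February 2011 actual is roughly three times what the seasonal pattern suggests, the backward MAPE moves by only about 1.5 percentage points, well below the 5-point alert threshold.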

Instead of setting alerts on an accuracy measure calculated from past history, it is better to use the forecasted numbers themselves. Continuing with the same example: when the forecast engine ran on 1st February 2011 with history up to January 2011, it generated a forecast for the next 60 months, from February 2011 to January 2016. On 1st March 2011 we have history up to February 2011, and the engine again generates a 60-month forecast, this time from March 2011 to February 2016. Note that 59 of these 60 months are the same as in the previous run. My contention is that, rather than comparing MAPE values based on the past, you should compare the previous forecast with the new forecast generated after the history rollover. For example, you generated a forecast for March 2011 to February 2012 on 1st February 2011 and again on 1st March 2011; if something went wrong with the history rollover, these two sets of numbers will differ drastically. An alert based on comparing these two forecasts is far more sensitive and trustworthy than one based on backward-looking accuracy measures such as MAD, MAPE, or RMSE. Forecast-comparison alerts reliably detect such deviations, whereas accuracy-based alerts miss many time series pattern changes.
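A forward-looking alert along these lines could be sketched as follows; the period keys, the 10% deviation threshold, and the forecast_deviation_alert helper are assumptions made for illustration.

```python
def forecast_deviation_alert(prev_forecast, new_forecast, threshold_pct=10.0):
    """Compare the overlapping horizon of two consecutive forecast runs and
    return the periods whose forecast shifted by more than the threshold."""
    overlap = set(prev_forecast) & set(new_forecast)
    deviations = {
        period: round(100.0 * abs(new_forecast[period] - prev_forecast[period])
                      / abs(prev_forecast[period]), 1)
        for period in overlap if prev_forecast[period] != 0
    }
    return {p: d for p, d in deviations.items() if d > threshold_pct}

# Hypothetical runs: forecast generated on 1st Feb 2011 and regenerated on
# 1st Mar 2011 after history rollover, overlapping on Mar-May 2011.
feb_run = {"2011-03": 100.0, "2011-04": 110.0, "2011-05": 95.0}
mar_run = {"2011-03": 102.0, "2011-04": 150.0, "2011-05": 96.0, "2011-06": 90.0}

print(forecast_deviation_alert(feb_run, mar_run))   # {'2011-04': 36.4}
```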

A thought that has benefitted my clients.
