
Applying the Laws of Forecasting to Predictive Analytics

I had a chance to meet with my good friend Tom Choi from Arizona State University a couple of weeks ago. We met at the United Club in Chicago, as he was on his way back from a visit with Honda, to discuss the upcoming study on procurement analytics that I will be working on with him for the Center for Advanced Purchasing Studies later this summer. We had a chance to catch up on a great number of things, but the one that sticks in my mind is our discussion of predictive analytics and forecasting.

Tom recalled a couple of simple rules around prediction, drawn from the time-honored forecasting principles we have both taught for years.

“In the forecasting classes you take, all you are really learning is a sophisticated way to create mathematical expressions that capture the past and extrapolate it into the future. And we always return to the two cardinal rules that deal with the accuracy of forecasting.”

“The first rule is that aggregated forecasts are always better than product-specific forecasts. (This has been made popular recently by the trend of “crowdsourcing.”) The basic rule is that multiple points of view, formed by individual experts and respondents, act as a synthesizing mechanism to help us see what is going on at a macro level. They offer opinions and paint the future for us. If we can combine multiple insights, we can begin to get a reasonable view of the future. For example, predicting whether the NFC or the AFC will win the Super Bowl is easier than trying to predict which team will win it – and forecasting the trend for product families rather than individual products is likewise much easier.”
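To make that first rule concrete, here is a minimal sketch in Python (with made-up demand numbers, not anything from our conversation) of why aggregation helps: independent item-level fluctuations partially cancel when you sum them, so the relative error of a family-level forecast is smaller than the error for any individual product.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical product family: 10 items, each with mean weekly demand 100
# and independent noise (standard deviation 30). These numbers are made up
# purely to illustrate the statistical effect.
n_items, n_weeks = 10, 500
demand = rng.normal(loc=100, scale=30, size=(n_weeks, n_items))

# Naive forecast: use the long-run mean as the forecast for every week.
item_forecast = demand.mean(axis=0)              # per-item forecasts
family_forecast = demand.sum(axis=1).mean()      # family-level forecast

# Compare relative errors at the item level vs. the family level.
item_errors = np.abs(demand - item_forecast) / item_forecast
family_errors = np.abs(demand.sum(axis=1) - family_forecast) / family_forecast

print(f"Average item-level relative error: {item_errors.mean():.1%}")
print(f"Family-level relative error:       {family_errors.mean():.1%}")
# The family-level error comes out to roughly 1/sqrt(10) of the item-level
# error, because independent fluctuations partially cancel in the aggregate.
```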

“The second rule is that the longer you wait to develop a forecast, the more accurate it will be. That is because forecasts for shorter periods into the future are more accurate than those that go further out: forecasts for tomorrow are better than forecasts for two months from now. We learn about accurate response methods, and as you increase standardization and reduce lead time you can get “first market data,” which is the best predictor. Waiting to get the very latest (and earliest) market response to your product offering works best for product releases. So if you wait until the last minute – and if you have a very flexible supply chain that lets you do so – your forecast accuracy increases.”
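The logic behind the second rule can be shown with another small sketch, again with illustrative numbers of my own: if demand drifts like a random walk and today's best forecast is simply the last observed value, the forecast error grows with the horizon, so the decision you can defer is the decision you can forecast best.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demand that drifts as a random walk: each period it moves by
# a random step with standard deviation 5. Numbers are illustrative only.
n_paths, horizon = 2000, 60
steps = rng.normal(loc=0, scale=5, size=(n_paths, horizon))
future_demand = 100 + steps.cumsum(axis=1)   # demand 1..60 periods ahead

# Naive forecast made today: the last observed value (100) at every horizon.
errors = np.abs(future_demand - 100)

for h in (1, 7, 30, 60):
    print(f"Mean absolute error {h:>2} periods out: {errors[:, h - 1].mean():6.2f}")
# The error grows steadily with the horizon, so the later you can commit,
# the more accurate the forecast you commit to.
```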

“If you apply those ideas to the cognitive and analytics arena that we find ourselves in, then the implications are obvious. If we can combine last-minute decision-making with computing power, you have a very powerful predictive analytics capability. You can afford to wait until the last minute to make a decision, and you can also aggregate the data (using Big Data). If you are trying to decide whether to buy precious metal from one location versus another, you can bring together the data from both locations, combine it with each location's record of production stability and past market performance, and make a solid decision. You can reflect information from that very day in your decision-making. Regardless of how smart the algorithms are, you are still doing forecasting. Nobody has a crystal ball – we are just getting a little more sophisticated in how we apply these two rules. We used to have to make forecasts that were very linear, trying to extrapolate the future from the past. But now, with real-time data, we are learning how to do this using smaller increments of time, and we have machines that are also learning faster and faster. We are using a high “alpha” value in our exponential smoothing models, weighting the most recent data the highest. This is driving greater velocity of decision-making, which is ultimately the capability necessary for survival in today’s global economy!”
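For readers who haven't seen it, the exponential smoothing Tom refers to is easy to sketch. The demand series and alpha values below are illustrative, not anything we discussed, but they show how a high alpha makes the forecast chase the most recent data while a low alpha smooths it away:

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: each new forecast blends the latest
    observation (weight alpha) with the previous forecast (weight 1 - alpha)."""
    forecast = series[0]
    for observation in series[1:]:
        forecast = alpha * observation + (1 - alpha) * forecast
    return forecast

# Illustrative demand series with a late jump; a high alpha weights recent
# data heavily, which is what real-time, last-minute decision-making amounts to.
demand = [102, 98, 105, 110, 140, 145, 150]
print(exponential_smoothing(demand, alpha=0.8))   # ~148.6, tracks the recent jump
print(exponential_smoothing(demand, alpha=0.2))   # ~124.2, smooths it away
```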