Tag: forecast error

  • The ultimate rolling forecast workshop

    Having fun at the workshop

    Forecasting is a critical topic for many companies these days. No big surprise: the volatility and speed of today’s world require organizations to stay agile. About four years ago, my team and I started working with several customers and thought-leaders (David Axson, Steve Morlidge) to collect best practices for forecasting in these turbulent times. The results of countless hours of talking, brainstorming, analyzing and reading are captured in the IBM Cognos ‘Best Practices in Rolling Forecasts’ workshop. This workshop ended up being far more successful than any of us would ever have imagined. I have personally delivered over 100 of these events in the past three years.

    THE WORKSHOP FORMAT

    David Axson is showing the way!

    Forecasting is a complex topic and we were able to collect a full library’s worth of experiences. But simplicity rules, so we selected the most interesting aspects to fill the agenda for a half-day workshop. That creates more focus: the attendees leave with just enough ideas to drive change in their organizations without feeling overwhelmed. The overall focus is on the business process, not the software. While we share a lot of best practices, the workshops are very interactive, and we usually have extended and very fruitful discussions amongst the participants. Many attendees stay after the official event ends to continue their idea exchange. This is one of my favorite parts. There are always many things to learn.

    BEST PRACTICES AND MORE

    Static vs rolling forecasts?

    So, what do we cover? A lot! The focus is clearly on proven practices that were identified by our customers. But it is also important to look beyond those things, so we injected some thought-provoking ideas from our thought-leaders. And each workshop we run typically provides new ideas, stories and experiences that we leverage to enhance the materials. There is too much detail to cover in this post, but here are some of the things we discuss:

    • Is a rolling forecast right for your organization?
    • What’s the right time horizon? Ninety days? Four quarters? Six quarters? Three years?
    • How often should you update the forecast?
    • How do you use a rolling forecast as an early alert of threats and opportunities?
    • What is the role of scenarios?
    • What role can driver-based modeling and tools play in the forecast process?
    • How do you sell the need for a rolling forecast?
    • What does the business case look like?
    • How can you measure the efficiency and effectiveness of your process?

    IT’S YOUR TURN NOW!

    If you are considering making changes to your forecasting processes, or if you work in an IT department supporting Finance, you should join one of these workshops. It is a great opportunity to meet other finance & IT professionals and to pick up solid ideas. Believe it or not, we have had several customers attend multiple events: they simply enjoyed the interaction with the other professionals and felt that they got a lot of value out of each workshop. Check out my events page to find out about upcoming dates or simply drop me a note. Hope to see you soon!

  • A discussion about forecast errors

    Forecasting continues to be a hot topic. My recent interviews with Steve Morlidge continue to be very popular, and ‘Franz the Frog’ sparked some interesting discussions behind the scenes. Given the strong interest in these topics, I reached out to a friend who has spent a lot of time and effort driving solid forecasting processes.

    Please meet Ulrich Pilsl. He provides a different perspective. Ulrich currently works as an Interim Manager in Munich. He spent over 14 years at Softlab / BMW Group (later Cirquent / NTT Data Group), where, as a member of the executive board, he held various senior executive positions, including CFO of a consulting subsidiary and Head of Controlling & Business Administration.

    Christoph Papenfuss: Forecasting is a key focus area for many finance professionals. But many organizations are struggling to obtain an objective view of the future. What are some of the key problems?

    Ulrich Pilsl: The biggest problem I see is complexity. Many companies have bloated processes that are too detailed. It simply takes too much time, and people have a hard time differentiating between what is important and what is not. There is no clear focus. Also, management tends to have a hard time managing the process. My advice is to simplify and to get rid of excessive detail. More detail does not create more accurate forecasts. On the contrary: the more detail, the less accurate forecasts tend to be, for the reasons mentioned above.

    Christoph Papenfuss: What is the main problem with inaccurate forecasts?

    Ulrich Pilsl: Inaccurate forecasts lead to a serious confidence problem. Shareholders don’t like surprises. It gets worse when surprises are caused by poor forecasting efforts.

    Christoph Papenfuss: Are positive and negative errors equally problematic? Let’s take a look at a typical sales or business forecast. Some people tend to create very conservative forecasts and often end up outperforming. Isn’t this better than creating a very ambitious forecast and then coming in lower?

    Ulrich Pilsl: This is an interesting but common situation. First of all, positive and negative errors are equally problematic. Both types of errors can create serious management challenges, apart from the already discussed confidence problems. In regards to this specific situation, one might be tempted to say that it is a good thing for a sales person to continuously beat his or her forecast. However, this can create some serious challenges. Let’s take a look at a consulting company. Low sales forecasts indicate low resource requirements. Hiring efforts might be slowed down, and the business might quickly end up in a situation where it does not have enough talent available. Business is lost. Customers might lose confidence in us as a trustworthy business partner. I therefore strongly believe that both negative and positive errors require serious attention.

    Christoph Papenfuss: What should the Controller do to help minimize forecast errors?

    Ulrich Pilsl: The Controlling department should show some ‘tough love’. They have to challenge the departments to deliver realistic forecasts. We found that it is critical to provide suggestions and to jointly develop scenarios with the business managers. Finance basically acts as a tough but fair coach in the process. This continues in the monthly and weekly management meetings: we openly discuss the forecast results and challenge the numbers. It is obviously the job of the Business Controller to moderate this process. Last but not least, we found that it sometimes makes sense to create top-down adjustments that reflect upside and downside risk.

    Christoph Papenfuss: Based on your experience, does it make sense to measure forecast accuracy? If yes, how often and at what level did you measure accuracy?

    Ulrich Pilsl: It depends on the organization. This reminds me of a quote by my former manager, who said: “Most companies are over-controlled but under-managed.” A team that understands the value of a forecast will usually deliver solid forecasts, and measuring forecast accuracy won’t necessarily improve it. I do believe, though, that it makes sense to measure it if the organization has challenges with the forecast process, especially when the management team does not see the value in the forecast. It might make sense to add an accuracy target to the annual objectives. We had a variable goal called “internal quality”, set once per year, which allowed us to substantially change the mindset of some managers.

    Christoph Papenfuss: How do you utilize forecast accuracy measures? Should you communicate the numbers to the organization or is this something that should stay within the walls of the finance department?

    Ulrich Pilsl: In my opinion, it does make sense to communicate forecast accuracy to the management team. But it makes no sense to communicate it to the whole organization. The aim is to improve forecast quality, not to blame managers.

    Christoph Papenfuss: What can Finance do to help create a culture where people are happy to create meaningful and objective forecasts?

    Ulrich Pilsl: Finance simply has to take on the role of a coach and consultant for the business. It is our role to educate and to support the business.

  • Three things every controller should know about forecast accuracy

    Forecast Accuracy

    Forecast accuracy is one of those strange things: most people agree that it should be measured, yet hardly anybody does it. And the crazy thing is that it is not all that hard. If you utilize a planning tool like IBM Cognos TM1, Cognos Planning or any other package, the calculations are merely a by-product – a highly useful by-product.

    Accuracy defined

    Forecast accuracy is defined as the percentage difference between a forecast and the corresponding actuals (in hindsight). Let’s say I forecast 100 sales units for next month but end up selling 105: measured against the forecast, we are looking at 95% accuracy, or a 5% forecast error. Pretty simple, right?
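
    To make the arithmetic concrete, here is a minimal sketch in Python (not tied to IBM Cognos TM1 or any other tool). Dividing by the forecast matches the example above; MAPE-style measures divide by actuals instead:

    ```python
    def forecast_error(forecast: float, actual: float) -> float:
        """Signed forecast error as a fraction of the forecast.

        Positive means we under-forecast (actuals came in higher).
        Note: MAPE-style measures divide by actuals instead.
        """
        return (actual - forecast) / forecast

    def forecast_accuracy(forecast: float, actual: float) -> float:
        """Accuracy = 1 - absolute error, as in the example above."""
        return 1 - abs(forecast_error(forecast, actual))

    # The example from the text: forecast 100 units, sell 105
    print(forecast_error(100, 105))     # 0.05 -> 5% error
    print(forecast_accuracy(100, 105))  # 0.95 -> 95% accuracy
    ```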

    And why?

    Why should we measure forecast accuracy? Very simple: we invest a lot of time in the forecast process and we utilize the final forecast to make sound business decisions, so the forecast should be fairly accurate. But keep in mind that forecasts will never be 100% accurate, for the obvious reason that we cannot predict the future. Forecast accuracy provides us with a simple measure to help us assess the quality of our forecasts. I personally believe that things need to get measured. Here are three key benefits of measuring forecast accuracy:

    1. Detect Problems with Models: Forecast accuracy can act like a sniffing dog: we can detect issues with our models. One of my clients found that their driver calculations were off, consistently producing values that were 10% too high. A time-series analysis of their forecast error clearly revealed this after just a few months of collecting data (see the sketch after this list).
    2. Surface Cultural Problems: Accuracy can also help us detect cultural problems like sandbagging. People are often afraid to submit an objective forecast to avoid potential monetary disadvantages (think about a sales manager holding back information to avoid higher sales targets). I recently met a company where a few sales guys used to bump up their sales forecast to ‘reserve’ inventory of their hot products in case they were able to sign some new deals. Well, that worked ok until the crisis hit. The company ended up with a ton of inventory sitting on the shelves. Forecast accuracy can easily help us detect these types of problems. And once we know the problem is there and we can quantify it, we can do something about it!
    3. Focus, focus, focus: Measuring and communicating forecast accuracy drives attention and focus. By publishing accuracy numbers we are effectively telling the business that they really need to pay attention to their forecast process. I have seen many cases where people submit a forecast ‘just because’. But once they notice that somebody is tracking the accuracy, they suddenly start paying more attention to the numbers they put into the template. Nobody wants to see their name on a list of people who submit poor forecasts, right?
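
    To make the ‘sniffing dog’ idea from point 1 concrete, here is a minimal, hypothetical sketch: track the signed error month by month and watch its running mean. Random noise averages out toward zero; a mean that stays pinned well below (or above) zero suggests a model or behavioral bias. All numbers are invented for illustration:

    ```python
    # Hypothetical monthly data: forecasts vs. actuals for one product line
    forecasts = [100, 110, 105, 120, 115, 125]
    actuals   = [ 92, 100,  96, 109, 104, 113]

    # Signed error per month, relative to the forecast
    errors = [(a - f) / f for f, a in zip(forecasts, actuals)]

    # Running mean of the signed error: random noise averages toward zero,
    # a systematic bias (e.g. flawed driver calculations) does not.
    running_mean = [sum(errors[:i + 1]) / (i + 1) for i in range(len(errors))]

    for month, (e, m) in enumerate(zip(errors, running_mean), start=1):
        print(f"month {month}: error {e:+.1%}, running mean {m:+.1%}")
    # A running mean pinned near -9% after a few months points to a
    # consistent over-forecast, similar to the model bias in point 1.
    ```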

    BUT…

    Overall, forecast accuracy is a highly useful measure. But it has to be used in the right way. We cannot expect every forecast to be 100% accurate. It just can’t be. There is too much volatility in the markets, and none of us are qualified crystal-ball handlers. There is a lot more to consider, though. Over the next few days, I will share some additional tips & tricks that you might find useful. So, start measuring forecast accuracy today!

  • Poor forecast accuracy

    Over the last few weeks, three separate clients have expressed their frustration with inaccurate forecasts delivered by certain members of the sales force. Nothing new here. It happens all the time. However, what struck me about these three independent cases was the nature of the issues: the sales force consistently forecast higher than actuals. This is not typical. Most sales people forecast lower to build up some buffer in case of bad news.

    BOOKING INVENTORY

    What happened here? Very simple: sales tried to utilize the forecast to ‘reserve’ inventory of their extremely well-selling products. Their rationale was that a higher sales forecast would inevitably lead to a higher availability of finished products ready for sale. In the past, several sales people had encountered product shortages, which affected their compensation negatively.

    The sales forecast as an inventory 'reservation' mechanism

    THE THREAT

    This kind of poor forecast accuracy could lead to a precarious situation. In case of an unexpected economic downturn, the company could end up sitting on a ton of finished goods inventory. And not only that: average inventory could trend upward, reducing liquidity as a result. As we all know, inventory is central to effective working capital management.

    THE REMEDY

    The controllers of these companies were very frustrated with the situation. Despite senior management discussing the resulting issues with the sales force, accuracy barely improved. But one controller had developed an interesting idea that he is about to implement: start compensating the sales teams based on Working Capital measures.

    WORKING CAPITAL & FORECAST ACCURACY

    The basic idea revolves around penalizing sales for consistently producing these unacceptable variances. And the implementation does not necessarily have to be that hard. We can measure forecast accuracy. A series of negative variances (Actuals < Forecast) leads to a reduction in the sales bonus. The critical thing here will be to avoid punishing people for random variances. I could see using a rolling average to only penalize consistent ‘offenders’, as in the sketch below.
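
    Here is a minimal sketch of how such a rolling-average check might look; the window length, the threshold and the function name are illustrative assumptions, not a recommendation:

    ```python
    def flag_consistent_offender(variances, window=6, threshold=-0.05):
        """Flag someone whose rolling average variance stays persistently negative.

        variances: signed monthly variances, (actual - forecast) / forecast,
                   so Actuals < Forecast shows up as a negative number.
        Returns True only if the average over the latest window is below the
        threshold, so one-off random misses do not trigger the flag.
        """
        if len(variances) < window:
            return False  # not enough history to judge consistency
        recent = variances[-window:]
        return sum(recent) / window < threshold

    # One bad month does not flag; a persistent pattern does.
    print(flag_consistent_offender([0.02, -0.15, 0.01, 0.03, -0.01, 0.02]))     # False
    print(flag_consistent_offender([-0.08, -0.07, -0.09, -0.06, -0.08, -0.07])) # True
    ```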

    This is an interesting idea. The actual compensation impact would obviously have to be worked out carefully, but the basic approach is quite promising!

  • The case for forecast accuracy

    People always say that you get what you measure. And it is true. When I want to lose weight and I am serious about it, I do have to step on the scale frequently. The same thing is true for business. What gets measured gets done.

    FORECAST ACCURACY

    Forecasting has become a critical business process. Pretty much every company that I talk to is either improving or looking to improve this process. One of the measures that can be used to manage the forecast process is forecast accuracy. Forecast accuracy measures the percentage difference between Actuals and Forecast. Let’s say we forecast 100 units sold for next month and it turns out that we actually sold 95: measured against the forecast, the variance is -5%. We can measure accuracy at different levels of an organization, say at the Profit Center level or at the Business Unit (BU) level, as sketched below.
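
    As a rough sketch of what measuring at different levels could look like, you can aggregate forecasts and actuals per level before computing the variance. The level names and numbers are invented; note that positive and negative errors can cancel out when you aggregate upward:

    ```python
    from collections import defaultdict

    # Hypothetical records: (business_unit, profit_center, forecast, actual)
    records = [
        ("BU-1", "PC-A", 100, 95),
        ("BU-1", "PC-B", 200, 210),
        ("BU-2", "PC-C", 150, 140),
    ]

    def variance_by(records, level):
        """Signed variance (actual - forecast) / forecast at a given level.

        level=0 groups by business unit, level=1 by profit center.
        """
        totals = defaultdict(lambda: [0.0, 0.0])
        for row in records:
            totals[row[level]][0] += row[2]  # accumulate forecast
            totals[row[level]][1] += row[3]  # accumulate actual
        return {k: (a - f) / f for k, (f, a) in totals.items()}

    print(variance_by(records, level=1))  # per profit center, e.g. PC-A: -5%
    print(variance_by(records, level=0))  # per BU; BU-1 nets out to about +1.7%
    ```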

    EMOTIONS RUN HIGH

    A few weeks ago, I had an interesting discussion with a group of consultants. They argued that forecast accuracy is not worth measuring. Their main arguments were:

    • Forecast accuracy cannot be influenced. The markets follow a random path, so accurate forecasts cannot be expected.
    • Forecast accuracy is a dangerous thing to measure and manage. People can start influencing the accuracy by managing their numbers according to expectations (for example, sales managers can hold back deals for the sake of influencing accuracy).
    • Forecast accuracy is hard to interpret. Let’s say we beat our own forecast by performing really well. Forecast accuracy is off. Is that good or bad?

    MY VIEW

    Here is my personal view on this topic.

    • No single measure is perfect when looked at in isolation. Take profits: what does the profit number for a certain quarter tell us? Nothing! We need to look at a mix of measures, and forecast accuracy is one measure that we can and should look at.
    • Forecast accuracy provides us with the ability to identify potential bias. One of my clients, for example, found that their models were flawed. Forecast accuracy revealed this by highlighting a consistent pattern in the errors.
    • Market movements are difficult to anticipate. But it is the job of the forecaster to identify potential actions to make sure that targets are achieved. I should have a general clue about what is happening in my business. Once in a while, we encounter surprises. Does that mean we should not measure forecast accuracy? I beg to differ. At the very least, a detailed analysis of the accuracy measurements can help us learn a lot about our organization and our environment.
    • Forecast accuracy is easy to measure. It can be automated. The cost is almost zero. Why not measure it and potentially learn something?

    I could go on and on. The bottom line is that forecast accuracy is easy to measure and that it allows us to get a good sense for our ability to forecast and manage our business. But we need to be careful about how we utilize the metric. A singular focus on managing just accuracy won’t do anybody any good. But that’s true for anything: if I want to lose weight, I should also look at muscle mass and water content – not just weight as measured in lbs or kg. Bashing a single metric outright is not the way to go. I am all for looking at forecast accuracy – often.