Category: Plan & Forecast

  • 4 additional things to know about Forecast Accuracy

    How is your forecast accuracy measurement project going? I hope the last post convinced you to start measuring this. But there are still some open questions. Let’s take a look at some critical items that you should consider.

    TIME SPAN

    One of the things people often get confused about is the type of forecast accuracy they should measure. We often create forecasts for many months out. Technically speaking, I could therefore calculate 1-, 2-, 3-, 4-, 5-month (and so on) forecast accuracy (e.g. I take a forecast value from 6 months ago and compare it to the actuals from today, or I take my forecast from last month and compare it with the actuals that just came in). That’s a lot of data! Based on my own experience and discussions with many controllers, I have come to believe that most businesses should focus on a short-term measure (say 1-3 months). The reason is simple: the further out we look, the higher the probability of random errors (who can forecast the eruption of an Icelandic volcano?). Short-term accuracy is usually more important (think: adjusting production volumes, etc.), and we should have far more control over it than over longer-term accuracy. So, pick a shorter-term accuracy and start measuring it.
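To make the time-span idea concrete, here is a minimal sketch (in Python) of how the same actual month can be scored against forecasts made at different horizons. The data layout and all numbers are invented for illustration:

```python
# Illustrative only: forecasts keyed by (month the forecast was made, target month).
forecasts = {
    ("2011-01", "2011-02"): 100,  # 1-month-ahead forecast for February
    ("2010-11", "2011-02"): 120,  # 3-month-ahead forecast
    ("2010-08", "2011-02"): 150,  # 6-month-ahead forecast
}
actuals = {"2011-02": 105}

def accuracy(forecast, actual):
    # Accuracy = 1 - |error|, with the error taken relative to the forecast
    # (the convention used in this post; some shops divide by actuals instead).
    return 1 - abs(actual - forecast) / forecast

for (made, target), fc in sorted(forecasts.items()):
    print(f"forecast made {made} for {target}: {accuracy(fc, actuals[target]):.1%}")
```

As you would expect, the short-term forecast scores best here; the longer horizons carry the random errors described above.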

    FREQUENCY

    How often should we measure forecast accuracy? Every time we forecast! Why wouldn’t we? Measuring once in a while won’t help us much. The most interesting aspect of this measure is the ability to detect issues such as cultural and model problems. Just make sure to set up the models correctly, and the calculations will be automatic and easy to handle. You will soon have plenty of data that will provide you with excellent insights.

    LEVEL

    Where should we measure forecast accuracy? We simply calculate this for each and every line item, correct? Hmm…better not! We already have so much data. I would suggest two key dimensions to consider (in addition to time): the organizational hierarchy and the measure. The first one is simple: somebody is responsible for the forecast. Let’s measure there. We could probably look at higher-level managers (say: measure accuracy at the sales district level as opposed to each sales rep). In terms of the specific measures, experience shows that we should not go too granular. Focus on the top 2-3 key metrics of your forecast. For a sales forecast, those could be Revenue, Units and Travel Expenses. The higher up we go in the hierarchy, the more we would focus on things such as Margin, Profit etc. The general advice is to balance the thirst for knowledge with practical management aspects. Generating too much data is easy. But it is the balance that turns the data into a useful management instrument. So, you should measure this at a level where people can take accountability and where the finance department doesn’t have to do too much manual follow-up.

    CAUTION

    But before I finish here, just a quick word of caution. Inaccurate forecasts can have different causes. Don’t just look at the plain numbers and start blaming people. There are always things that are out of our control (think about that unexpected event). Also, there are timing differences that occur for various reasons (think about a deal that is pushed to next month). We need to go after those differences that are due to sloppy forecasts.

    What about analyzing and communicating forecast accuracy? More about that in the next post. Do you have any other experiences that are worth sharing?

  • Three things every controller should know about forecast accuracy

    Forecast Accuracy

    Forecast accuracy is one of those strange things: most people agree that it should be measured, yet hardly anybody does it. And the crazy thing is that it is not all that hard. If you utilize a planning tool like IBM Cognos TM1, Cognos Planning or any other package, the calculations are merely a by-product – a highly useful by-product.

    Accuracy defined

    Forecast accuracy is defined as the percentage difference between a forecast and the corresponding actuals (in hindsight). Let’s say I forecast 100 sales units for next month but end up selling 105: we are looking at 95% accuracy, or a 5% forecast error. Pretty simple, right?
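In code, the definition really is a by-product. A minimal sketch (the function names are mine, and the error is taken relative to the forecast, matching the 100-vs-105 example):

```python
def forecast_error(forecast, actual):
    # Signed error as a fraction of the forecast; positive means we under-forecast.
    return (actual - forecast) / forecast

def forecast_accuracy(forecast, actual):
    return 1 - abs(forecast_error(forecast, actual))

print(forecast_error(100, 105))     # 0.05 -> a 5% forecast error
print(forecast_accuracy(100, 105))  # 0.95 -> 95% accuracy
```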

    And why?

    Why should we measure forecast accuracy? Very simple. We invest a lot of time into the forecast process, we utilize the final forecast to make sound business decisions and the forecast should therefore be fairly accurate. But keep in mind that forecasts will never be 100% accurate for the obvious reason that we cannot predict the future. Forecast accuracy provides us with a simple measure to help us assess the quality of our forecasts. I personally believe that things need to get measured. Here are three key benefits of measuring forecast accuracy:

    1. Detect Problems with Models: Forecast accuracy can act like a sniffing dog: we can detect issues with our models. One of my clients found that their driver calculations were off resulting in a 10% higher value. A time-series analysis of their forecast error clearly revealed this after just a few months of collecting data.
    2. Surface Cultural Problems: Accuracy can also help us detect cultural problems like sandbagging. People are often afraid to submit an objective forecast to avoid potential monetary disadvantages (think about a sales manager holding back information to avoid higher sales targets). I recently met a company where a few sales guys used to bump up their sales forecast to ‘reserve’ inventory of their hot products in case they were able to sign some new deals. Well, that worked OK until the crisis hit. The company ended up with a ton of inventory sitting on the shelves. Forecast accuracy can easily help us detect these types of problems. And once we know the problem is there and can quantify it, we can do something about it!
    3. Focus, focus, focus: Measuring and communicating forecast accuracy drives attention and focus. By publishing accuracy numbers we are effectively telling the business that they really need to pay attention to their forecast process. I have seen many cases where people submit a forecast ‘just because’. But once you notice that somebody is tracking the accuracy, you suddenly start paying more attention to the numbers that you put into the template. Nobody wants to see their name on a list of people that are submitting poor forecasts, right?
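Benefit 1 is easy to illustrate: a flawed model leaves a consistent sign in the error series, while a healthy one produces errors that hover around zero. A toy sketch with invented numbers:

```python
# Toy illustration: a persistent sign in the monthly forecast errors hints at a
# model problem (e.g. a driver calculation that is off), while errors centered
# on zero look like ordinary noise. All figures are made up.

def mean_error(errors):
    return sum(errors) / len(errors)

biased = [0.09, 0.11, 0.10, 0.08, 0.12]    # consistently ~10% high
noisy  = [0.04, -0.06, 0.02, -0.03, 0.01]  # bounces around zero

print(f"biased model: mean error {mean_error(biased):+.1%}")
print(f"noisy model:  mean error {mean_error(noisy):+.1%}")
```

A few months of collected data is often enough for this kind of pattern to stand out.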

    BUT……

    Overall, forecast accuracy is a highly useful measure. But it has to be used in the right way. We cannot expect that every forecast will be 100% accurate. It just can’t be. There is too much volatility in the markets and none of us are qualified crystal-ball handlers. There is a lot more to consider, though. Over the next few days, I will share some additional tips & tricks that you might want to consider. So, start measuring forecast accuracy today!

  • Poor forecast accuracy

    Over the last few weeks, three separate clients have expressed their frustration with inaccurate forecasts delivered by certain members of the sales force. Nothing new here. It happens all the time. However, what struck me about these three independent cases was the nature of the issues: the sales force consistently forecast higher than actuals. This is not typical. Most sales people try to forecast lower to build up some buffer in case of bad news.

    BOOKING INVENTORY

    What happened here? Very simple: sales tried to utilize the forecast to ‘reserve’ inventory of their extremely well-selling products. Their rationale was that a higher sales forecast would inevitably lead to a higher availability of finished products ready for sale. In the past, several sales people had encountered product shortages, which affected their compensation negatively.

    The sales forecast as an inventory 'reservation' mechanism

    THE THREAT

    This kind of poor forecast accuracy could lead to a precarious situation. In case of an unexpected economic downturn, the company could end up sitting on a ton of finished goods inventory. And not only that: average inventory could trend upward reducing liquidity as a result. As we all know, inventory is central to effective working capital management.

    THE REMEDY

    The controllers of these companies were very frustrated with the situation. Despite senior management discussing the resulting issues with the sales force, accuracy barely improved. But one controller had developed an interesting idea that he is about to implement: start compensating the sales teams based on Working Capital measures.

    WORKING CAPITAL & FORECAST ACCURACY

    The basic idea revolves around penalizing sales for consistently producing these unacceptable variances. And the implementation does not necessarily have to be that hard. We can measure forecast accuracy. A series of negative variances (Actuals < Forecast) leads to a reduction in the sales bonus. The critical thing here will be to avoid punishing people for random variances. I could see using a rolling average to only flag consistent ‘offenders’.
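One way to sketch that rolling-average idea is below. The window size and threshold are arbitrary choices for illustration, and the variance convention (negative when actuals fall short of the forecast) follows the post:

```python
# Sketch: flag reps whose forecasts consistently exceed actuals by looking at a
# rolling mean of the signed variance. Window and threshold are illustrative.

def signed_variance(forecast, actual):
    # Negative when Actuals < Forecast -- the over-forecast case in this post.
    return (actual - forecast) / forecast

def consistent_offender(history, window=3, threshold=-0.05):
    """history: list of (forecast, actual) pairs per month, oldest first.
    True only if every rolling-window mean variance is below the threshold,
    i.e. the over-forecasting is consistent rather than random."""
    variances = [signed_variance(f, a) for f, a in history]
    if len(variances) < window:
        return False
    means = [sum(variances[i:i + window]) / window
             for i in range(len(variances) - window + 1)]
    return all(m < threshold for m in means)

# A rep who pads the forecast ~10% every month vs. one with random noise:
sandbagger = [(100, 90), (110, 98), (120, 107), (105, 95)]
honest = [(100, 104), (110, 106), (120, 121), (105, 99)]
print(consistent_offender(sandbagger))  # True
print(consistent_offender(honest))      # False
```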

    This is an interesting idea. The actual compensation impact would obviously have to be worked out carefully. But the basic idea is quite interesting!

  • The case for forecast accuracy

    People always say that you get what you measure. And it is true. When I want to lose weight and I am serious about it, I do have to step on the scale frequently. The same thing is true for business. What gets measured gets done.

    FORECAST ACCURACY

    Forecasting has become a critical business process. Pretty much every company that I talk to is either improving or looking to improve this process. One of the measures that can be used to manage the forecast process is forecast accuracy. Forecast accuracy measures the percentage difference between Actuals and Forecast. Let’s say we forecast 100 units sold for next month and it turns out that we actually sold 95: the forecast error is -5% (i.e. 95% accuracy). We can measure accuracy at different levels of an organization, say at the Profit Center level or at the BU level.
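Note that accuracy per rep is not the same as accuracy for the roll-up, because individual errors can offset each other. A small sketch with invented names and numbers:

```python
# Illustrative only: accuracy measured per sales rep vs. rolled up to a district.
reps = {
    "rep_a": {"forecast": 100, "actual": 95},
    "rep_b": {"forecast": 80,  "actual": 90},
}

def accuracy(forecast, actual):
    return 1 - abs(actual - forecast) / forecast

for name, r in reps.items():
    print(name, f"{accuracy(r['forecast'], r['actual']):.1%}")

# At the district level the two errors partly cancel, so the aggregate
# accuracy (97.2%) looks better than either individual number.
district_fc = sum(r["forecast"] for r in reps.values())  # 180
district_ac = sum(r["actual"] for r in reps.values())    # 185
print("district", f"{accuracy(district_fc, district_ac):.1%}")
```

This is exactly why the choice of measurement level matters: it changes what the number can tell you.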

    EMOTIONS RUN HIGH

    A few weeks ago, I had an interesting discussion with a group of consultants. They argued that forecast accuracy is not worth measuring. Their main arguments were:

    • Forecast accuracy cannot be influenced. The markets follow a random path, and accurate forecasts can therefore not be expected.
    • Forecast accuracy is a dangerous thing to measure and manage. People can start influencing the accuracy by managing their numbers according to expectations (for example, sales managers can hold back deals for the sake of influencing accuracy).
    • The quality of forecast accuracy is hard to define. Let’s say we beat our own forecast by performing really well. Forecast accuracy is off. Is that good or bad?

    MY VIEW

    Here is my personal view on this topic.

    • No single measure is perfect when looked at in isolation. Take profits, for example. What does the profit number for a certain quarter tell us? Nothing! We need to look at a mix of measures. Forecast accuracy is one measure that we can and should look at.
    • Forecast accuracy provides us with the ability to identify potential bias. One of my clients, for example, found that their models were flawed. Forecast accuracy revealed this by highlighting a certain consistency.
    • Market movements are difficult to anticipate. But it is the job of the forecaster to identify potential actions to make sure that targets are achieved. I should have a general clue about what is happening in my business. Once in a while, we encounter some surprises. Does that mean we should not measure forecast accuracy? I beg to differ. At the very least, a detailed analysis of the accuracy measurements can help us learn a lot about our organization and our environment.
    • Forecast accuracy is easy to measure. It can be automated. Costs are almost zero. Why not measure it and potentially learn something?

    I could go on and on. The bottom line is that forecast accuracy is easy to measure and that it allows us to get a good sense of our ability to forecast and manage our business. But we need to be careful about how we utilize the metric. A singular focus on managing just accuracy won’t do anybody any good. But that’s true for anything. If I want to lose weight, I should also look at muscle mass and water content – not just weight as measured in lbs or kg. But bashing a single metric is not a good way to go. I am all for looking at forecast accuracy – often.

  • The case for continuous forecasting

    Continuous Forecasting

    Time for a confession. I really hated forecasting back in my old job. Here I was working with clients on improving their planning, budgeting & forecasting processes. Yet, I absolutely hated doing my own forecast. It just didn’t feel right. What was wrong? Well, I never really understood the template that our controller sent out. And it always took forever. Luckily, I had to do this only 2-4 times per year. But that was also part of the issue. Every time I received the forecasting template (a complex spreadsheet!) I had to collect and enter a ton of data. Also, I had to re-orient myself and figure out how the template worked this time. And then there was the reconciliation between my project plans and the prior forecast. To sum it up: The ramp-up time was simply too long. The result: I hated the forecast because the process took too long and it was too infrequent.

    Fire-Drill

    Indeed, the typical process for updating, distributing, collecting and aggregating forecasting templates can take up to a few weeks in most companies. It is critical to understand that the templates are typically unavailable to the user community during extended periods of time. Analysts are busy and need to take care of other tasks between forecasting cycles. As a result, forecasts are conducted infrequently, and the business owners feel like they are conducting a ‘fire drill’ when the templates are actually sent out.

    The traditional spreadsheet-driven process

    Forecasting Software

    But there is a much better model that many of my clients have implemented. Modern planning & forecasting software allows us to keep our forecasting templates online nearly 24/7. We no longer have to collect our hundreds of spreadsheets, fix formulas, manually load actuals, manually develop new calculations and then re-distribute the templates in long, manual cycles. Thanks to OLAP technology (sorry for the techie term), we can make model changes in one place only, and they can automatically be pushed out to the different templates (e.g. cost centers, profit centers etc.). Automated interfaces between the ERP (for actuals) and the forecast models can be set up. We can automatically aggregate data in real time, and we can control the process flow. Overall maintenance is a lot easier, the templates are available pretty much all the time, and the users can work with their data around the clock and throughout the year.

    Using this technology, Finance departments can allow the business users to work in their templates around the clock. A sales manager can update her data right after a critical customer meeting (e.g. change the sales quantity for a product). In other words, people can make quick incremental changes to their forecast data instead of performing time-consuming, infrequent larger data input exercises.

    Continuous Forecasting

    But the Finance department now has to communicate carefully with the business. They need to clearly communicate submission deadlines and the like.

    The continuous data collection process

    But what is the advantage to the business users and the finance department? How would this technology have changed my personal experience in my prior job?

    Clients typically experience three main advantages:

    • The templates are available 99% of the time and users can work in them on a daily basis. As a result, users become a lot more familiar with the templates and their comfort levels rise.
    • The actual forecast process is a lot faster for the business users. They can make incremental changes, which typically don’t take that much time. Contrast that to my case, where I had to build a bottom-up forecast almost every quarter and the ramp-up time was considerable.
    • Forecasts tend to be more complete. In the case of an urgent ad-hoc forecast (imagine something critical happened), the business is able to compile a near-complete forecast in a very short time. This is where the incremental updates add serious value. Contrast that to the traditional spreadsheet process: people might be out on vacation or traveling, and the potential time lag to get somewhat decent data can be quite long.

    Let me clarify one last thing: A continuous process does NOT mean I can simply aggregate my data every night and obtain an updated forecast. No, I need to communicate to the business WHEN I need the data. But due to the 99% availability I can collect my data very quickly.

    Let’s go continuous! Would love to hear your thoughts and experiences. Good or bad.