How I Explained MAPE to My Boss: A Demand Planner's Guide to Forecast Accuracy

By Nick Mishkin, Data Scientist at Intelichain

This article offers a glimpse into the daily challenges faced by demand planners as they navigate the best ways to communicate forecasting success to their managers. While the story and numbers presented here are fictional, they are grounded in real-world scenarios.

It’s 9:15 AM, and my manager strolls in with a straightforward request: “How did our forecast projections do last month?”

At first glance, this might seem like an easy task. We developed the sales forecasting models, so in theory we just need to compare the actual sales data with our projections. Simple, right? However, when you’re managing a portfolio of a hundred different food products, things get a bit more complicated. Take our chocolate assortment, for example—it includes 20 varieties priced between $1 and $10 per unit. Then there are our coffee machines, which are priced significantly higher, ranging from $150 to $575 per unit. This wide range in pricing makes it hard to compare accuracy across product categories.

In the supply chain industry, the Mean Absolute Percentage Error (MAPE) is the standard metric for assessing forecast accuracy. Given its widespread use, it appears to be a logical starting point for our analysis.

Step 1: Tackling MAPE

The beauty of the Mean Absolute Percentage Error (MAPE) lies in its simplicity. Errors are expressed as percentages, which makes them easy to interpret and relay to managers. For each product, I subtract the forecasted units from the actual units sold, take the absolute value, and then divide by the actual units sold. To finish, I average the errors across all products. In our case, that means adding up each product’s percentage error and dividing by 100, since we have a hundred different items in our lineup.

Side note: Some companies take a different route when calculating MAPE. They divide the absolute difference by the forecasted value instead of the actual value. Why? To evaluate performance errors based on what they expected to happen, rather than what actually happened.
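To make this concrete, here is a minimal sketch in Python. The numbers are purely illustrative stand-ins, not our actual portfolio:

```python
import numpy as np

# Illustrative units per product (made up for the sketch)
actual = np.array([100, 400, 250])     # actual units sold
forecast = np.array([500, 380, 300])   # forecasted units

# Per-product absolute percentage error: |actual - forecast| / actual
ape = np.abs(actual - forecast) / actual

# MAPE is the average of those errors across all products
mape = ape.mean()
print(f"MAPE: {mape:.0%}")

# Variant some companies use: divide by the forecast instead,
# measuring the error against what was expected to happen
ape_vs_forecast = np.abs(actual - forecast) / forecast
```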

After crunching the numbers, I end up with an overall MAPE of 115%, which translates to a forecast accuracy of -15% (1 – MAPE). Hold up! I can’t tell my boss we have a negative accuracy rate. Something’s off.

Digging deeper, I notice that a few products have percentage errors above 300%. Take our cherry chocolate, for instance. We forecasted 500 units sold, but only moved 100. That’s an error of 400%. But here’s the good news—we can use nMAPE to clean this up.

Step 2: Leveraging nMAPE

Demand planners frequently overlook a crucial detail—MAPE is not always the best metric for calculating forecast accuracy. Instead, let’s use the Normalized MAPE (nMAPE). Since our forecast accuracies vary wildly between different products, it’s not fair to let a few outliers drag down the whole ship.

In our case, a handful of poor predictions are skewing the overall accuracy, painting a picture that doesn’t reflect true performance. That’s where nMAPE steps in. Instead of letting outliers dictate the narrative, nMAPE smooths the relative errors by dividing the absolute difference between the actual and forecast values by the greater of the two.

Dividing by the maximum, rather than by the actual value as we did with MAPE, keeps large errors in check. It also solves the issue of items that sold zero units, since we no longer end up dividing by zero (as long as the forecast isn’t zero too). When calculating the nMAPE for cherry chocolates, we get 80%, far from the original 400% MAPE. In many cases, nMAPE will match or come close to MAPE, but for items like our cherry chocolates, which have a large relative error, it makes a huge difference.
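A minimal sketch of that change, using the same illustrative numbers as before:

```python
import numpy as np

actual = np.array([100, 400, 250])     # same illustrative numbers as above
forecast = np.array([500, 380, 300])

# nMAPE: divide by the larger of the actual and forecast values.
# This caps each product's error at 100% and avoids dividing by zero
# whenever at least one of the two values is non-zero.
nape = np.abs(actual - forecast) / np.maximum(actual, forecast)
nmape = nape.mean()

print(f"Cherry chocolate error: {nape[0]:.0%}")  # 80% instead of 400%
print(f"Overall nMAPE: {nmape:.0%}")
```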

After running nMAPE across the board, I land at an overall error of 37%. But I still have a problem. The 80% nMAPE for cherry chocolates still dominates the overall error, especially when I compare it to some of our coffee machines. Take the Max-Presso, our top seller, which has an nMAPE of just 10%—a solid 90% accuracy. So is it fair that cherry chocolates, which contribute less than 1% of our total revenue, skew the overall error, while the Max-Presso, which drives 81% of our revenue, was forecasted with 90% accuracy?

No, we need to use the weighted mean.

Side note: It’s worth mentioning another metric often used to address the limitations of MAPE—sMAPE, or Symmetric Mean Absolute Percentage Error. Unlike MAPE, which can be heavily skewed by large errors, sMAPE accounts for both the forecasted and actual values in its calculation, providing a more balanced measure of accuracy. This is particularly useful in scenarios where both overestimations and underestimations need to be equally penalized. However, while sMAPE can offer a more symmetrical perspective, it still doesn’t fully address the issues posed by outliers, which is where nMAPE becomes crucial.
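For reference, the usual sMAPE formulation divides the absolute error by the average of the actual and forecast values. A minimal sketch:

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric MAPE: error measured against the average of actual and forecast."""
    denom = (np.abs(actual) + np.abs(forecast)) / 2
    return np.mean(np.abs(actual - forecast) / denom)

# Cherry chocolate again: forecast 500, actual 100 -> roughly 133%,
# milder than MAPE's 400% but still dominated by the outlier
print(f"{smape(np.array([100]), np.array([500])):.0%}")
```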

Step 3: Bringing it All Together with wMAPE

The Weighted Mean Absolute Percentage Error (wMAPE) takes nMAPE a step further by weighting each error by that product’s share of revenue. In our case, that means multiplying the cherry chocolate’s nMAPE by 1% and the Max-Presso’s nMAPE by 81%. After doing this for every product and summing up the weighted errors, we arrive at a wMAPE of 15%, which translates to an 85% overall forecast accuracy.

The standard wMAPE formula:

wMAPE = ( Σᵢ |Ai − Fi| ) / ( Σᵢ Ai )

Where:
Ai = Actual value for product i
Fi = Forecast value for product i
n = Number of data points (the sums run over i = 1 … n)
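A minimal sketch of the revenue-weighted reading described above. The per-product nMAPE values and revenue shares are illustrative assumptions, chosen so the totals line up with the figures in the story; the third product stands in for the rest of the portfolio:

```python
import numpy as np

# Illustrative per-product nMAPE values and revenue shares (assumptions for
# the sketch): cherry chocolate, Max-Presso, and the rest of the portfolio
nmape_by_product = np.array([0.80, 0.10, 0.35])
revenue_share = np.array([0.01, 0.81, 0.18])   # shares must sum to 1

# Revenue-weighted error: each product's error scaled by its revenue share
wmape = np.sum(nmape_by_product * revenue_share)
accuracy = 1 - wmape

print(f"wMAPE: {wmape:.0%}")        # ~15%
print(f"Accuracy: {accuracy:.0%}")  # ~85%
```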

Now, I’m ready to report back to my manager with confidence: our models did a solid job. We predicted sales with 85% accuracy.

Concluding Remarks

The story above, though fictional, is a snapshot of what a day in the life of a demand planner can look like. By walking through this scenario, we aim to highlight why selecting the right metric is crucial. For companies moving millions of units a year, even a small boost in accuracy can translate into significant savings and increased sales.

As a rule of thumb, demand planners should aim for an overall nMAPE of 10% or better. Hitting that mark means their forecasts are, on average, about 90% accurate. At Intelichain, our algorithms consistently predict sales with an accuracy of 91.3%. This level of performance is not accidental—it’s the result of advanced forecasting techniques and fine-tuning based on key performance indicators and automated metric optimizations.

If you have any questions about sales forecasting or demand planning, don’t hesitate to reach out. We’re here to help.

Author bio: Nick Mishkin is a data scientist specializing in time series, deep learning, and large language models. He has written for prestigious publications such as NoCamels, MoneyGeek, and SeekingAlpha.

Mishkin earned his bachelor’s degree in economics from the University of Pennsylvania and his master’s degree in behavioral economics, magna cum laude, from Reichman University. Additionally, he completed the hands-on Data Science & Machine Learning training program at the Israel Tech Challenge (ITC).