Chapter X – Why Predict

Chapter X – the important bit I realized after I put the book together.

Why Predict?

I’ll admit that the way I see and communicate things is a little different. In our family we have a phrase for long, complicated explanations: we call them a “Russian Candy Machine.” To explain “Russian Candy Machine” would take a very long, complicated explanation of my parenting style, my family’s shared sense of humor, and how you get from the evolution of the spinal cord to a Russian Candy Machine. In other words, it would take a “Russian Candy Machine” to explain a “Russian Candy Machine.” Words and phrases can often be explained with a simple metaphor. Complex ideas and ways to look at the world, however, often require more than a simple metaphor; sometimes they need a Russian Candy Machine. The simple question “why predict?” needs a complex answer. But every complex answer starts with a simple insight. Here is the insight:


When we talk about setting a goal, we imply that we have already made a prediction. Or put another way, you can’t set a goal without first making a prediction.

For example, to set a goal of a 5% increase in sales for the next year, you need at least an implied forecast of “if you change nothing, next year sales will change by X%.” It is the comparison to that forecast that makes the goal reasonable or not. For example, say you are a salesperson on my team, and this year’s sales increased by 5% over the prior year. I, as the team leader, set a goal of a 20% sales increase for next year. You would probably react by indicating that I am crazy and should keep a more level head when forecasting increased sales. That is because you have made a prediction in your head. My guess is that you thought, “We increased by 5% last year; at best we’ll increase by 5% again.” If, however, sales increased by 20% last year, then the sales goal of 20% may be more reasonable.

A simple “what happened last year will happen again” is a naïve prediction. Really, it is called a naïve forecast. So, if sales last year were one hundred fifty thousand, then a naïve forecast is for next year’s sales to also be one hundred fifty thousand. Another simple model is to take whatever the percentage change was for the prior period and apply it to the next period – this is the example above. A third simple linear prediction is to take the value of the increase and apply it again. So, if sales went from one hundred thousand up to one hundred fifty thousand, then sales next year will increase by fifty thousand again.

Lest there be any doubt, these three simple methods are VERY effective. In fact, they are considered the gold standard by which more sophisticated predictions are judged. Even here, with these three basic forecast methods, we have a range of potential outcomes. Given sales going from $100,000 in the prior year to $150,000 last year, we have naïve ($150,000 again), repeat value increase ($200,000), and repeat percentage ($225,000).
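For the programmatically inclined, here is a minimal sketch of those three methods using the running example’s numbers – nothing fancy, just the arithmetic spelled out:

```python
# The three simple forecasts, using the running example's numbers.
prior_year = 100_000  # sales two years back
last_year = 150_000   # sales last year

# 1. Naive forecast: next year simply repeats last year.
naive = last_year

# 2. Repeat the value increase: add the same dollar change again.
value_increase = last_year + (last_year - prior_year)

# 3. Repeat the percentage change: apply last year's growth rate again.
percentage = last_year * (last_year / prior_year)

print(naive, value_increase, percentage)  # 150000 200000 225000.0
```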

A goal is set above or below the basic forecast of “what happens if we keep going as we are going.” If my forecast methodology is percentage and yours is value increase, we already have a basic gap in understanding and expectations. Whatever reasoning I used to set the goal above the forecasted value is meaningless if we don’t even agree on the basic forecast.

Every time you, as a leader, hand out a goal, it comes packaged atop a forecast. It doesn’t matter whether the forecast came from a simple method like the above or from a complex sales forecasting model: a goal is a change from the prediction.

And reasonable people can, and should, disagree on a forecast. Years ago, columnist Gregg Easterbrook started using the tag line “All predictions are wrong or your money back.” That comment should be tagged to the bottom of every sales forecast. The variation I use is “all predictions are wrong; it is just a matter of how wrong.” Back to our sales example: there are three values we can probably rule out as the exact sales for the following year – $150,000, $200,000, and $225,000. The sales forecast never hits exactly. In fact, the only forecaster who was right every quarter was Bernie Madoff, and he was cheating to get it right every time. Or, for those people who like to golf, the safest place to stand on a green is right next to the flagstick – it is just about the least likely point to be hit, even though most golfers are aiming right at it.

We know and accept that the forecast is wrong, but we need that baseline for discussions and goal setting. Data science is an attempt to decrease forecast error, but it can never eliminate the error.

When you and your team think of “SMART” goals (specific, measurable, achievable, relevant, time-bound), the achievable characteristic is based on a mutually accepted forecast. If we don’t agree on the forecast, we can’t agree on whether the goal is achievable. The next time you feel a disconnect over a goal, step back and talk about the forecast without the goal.

Most forecasts also have a social range in which they are acceptable. For example, for a large, established company, having a single quarter with a 20% profit margin might be cause for celebration. For a high-tech startup, a 20% profit margin (or any profit) is a hope and a dream. For a stable software company, a 20% margin would be a significant drop and reason to panic. The type of company shapes what counts as an acceptable forecast for quarterly results.

Think of a coach of a team. Given that there are only three outcomes – win, loss, draw – how many times does a coach come out and predict a loss? The coach can’t, because it isn’t socially acceptable, even though it may be an accurate forecast. Sure, when sitting at a wedding, you and your friends can have a funny conversational bet as to whether the marriage will work out, but don’t say that to the bride or groom. The social range of the forecast matters.

That brings us to the last type of forecast. Politely put, a disingenuous forecast. Bluntly put, a lie. This kind of forecast is the hardest to combat because, as we have said, all forecasts are wrong, and one more wrong forecast is hard to point out as bad. Except companies and careers can be destroyed by bad forecasts. If we are going to have sales over $200,000, then a $20,000 marketing spend isn’t a problem. But if we are only going to sell $150,000, then the marketing spend is a problem. Many a small business has been undone not just by missing a sales forecast, but by spending to a disingenuous forecast that was set by hope and not by data. You can’t have goals without a forecast, and you can’t plan your expenses without expected revenue.

Probably some of the clearest examples of disingenuous forecasts come from politicians. For example, a politician will say that if we pass a law on wages, it will change income for a certain number of their constituents. Or they’ll say that the economy will grow at a certain percent. Or that their tax program will have a certain benefit. And while we know reasonable people can disagree on a forecast, politicians have a special forecasting trick: they pay consultants to make their forecasts. When you have already set the number you want, it is easy to back into that forecast. Trust me, as a person who makes predictions professionally, the easiest prediction to make is the one my customer wants to hear.

We all see the results of the bad forecasts – somehow the legislation didn’t result in the change predicted – and we have all heard the excuses from politicians. The excuses usually sound like a version of the bad guy in Scooby-Doo: “my legislation would have worked, if it weren’t for those meddling {other political party}.”

The reason I point this out is not that I don’t like lying politicians. I don’t. It is that while we can see the political version of the results of bad forecasts, we often ignore that businesses have a version of this too. My favorite version of bad forecasting is the “excuse management” process, by which organizations set up detailed processes to excuse away missed forecasts. Dashboards upon dashboards have been set up with the original intention of highlighting failures in order to prevent them in the future. However, these dashboards are now used to excuse away poor performance. And Scooby-Doo comes back into play: “I would have hit my on-time measure, if it weren’t for those meddling {other department}.”

Looking back at the details of the past isn’t supposed to be about finding excuses, even though today it most often is. Looking back is supposed to inform your next prediction. You want to say that your on-time miss last quarter was driven in part by bad weather in the Rockies? OK, so next winter we need to adjust our prediction down or take steps to plan around weather conditions. The social reluctance to adjust the forecast down to lower than the department goal is what creates the excuse management process. If you are not honest with yourself about the prediction, how are you going to be honest about the results?

Data science provides some relief from this circle of dishonest predictions leading to dishonest excuses for missing the dishonest prediction. You can “blame” the data science results for the honest, but less-than-goal, prediction. Because data science only makes predictions from history, it can only give an “if nothing changes” type of prediction. Data science also gives you the details on which specific activities and customers are more likely to miss. Armed with a collection of predicted bad results, you and your team now get to be creative in figuring out how you will beat the prediction.
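As a minimal sketch of that per-customer detail – assuming a classifier already fitted on history; the model, column names, and helper function here are hypothetical illustrations, not a prescribed API:

```python
import pandas as pd

def likely_misses(model, customers: pd.DataFrame, feature_cols, top_n=10):
    """Score each customer's probability of missing (via a scikit-learn
    style predict_proba) and return the top_n most at-risk customers."""
    scored = customers.copy()
    scored["p_miss"] = model.predict_proba(customers[feature_cols])[:, 1]
    return scored.sort_values("p_miss", ascending=False).head(top_n)

# Usage: hand this list to the team as the starting point for getting
# creative about beating the prediction, not as a list of excuses.
```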

This changes job roles: team roles shift from looking back for patterns (and making excuses) to taking a set of predictions and getting creative about how you are going to “beat the house.” This is a fundamental change to job roles going forward. Jobs will require more creativity, because while data science can give you a detailed prediction along with the characteristics that make that prediction more accurate, it can’t tell you what to do about it. Beating the house is fundamentally a creative activity.

There is a large cautionary item to point out here. As mentioned, data science only makes predictions based on what has happened. If there are problems with what has happened in the past that may be socially unacceptable, data science will predict those things to keep happening. If there is a racial, gender, or age bias in past activities and that data is included among the potential characteristics, any good algorithm will find that pattern. This can leave people facing questions they may not like or may not be positioned to deal with. On the other hand, if the bias is a problem, shouldn’t it be fixed? I’m not saying there is a right or easy answer; I’m just saying that sometimes key predictive characteristics aren’t ones that are pleasant to deal with.

So how, practically, do you make the shift from excuse-management dashboards to the new “beat the house” world? The answer is, in a way, already staring you right in the face. It is the new face of digesting data science: prediction dashboards.

There is already a version of prediction dashboards. My guess is that there is at least one chart in your organization that has a “goal” line for future months with the current period actuals mapped against it. Something like this:

This chart contains a prediction! It is labeled as “goal”. So, let’s make the first change to this chart. Don’t primarily measure yourself against your goal; measure yourself against the predicted value! I’m not saying not to have the goal. I’m saying that if you want your marriage to work, you need to first acknowledge that the current divorce rate is 0.32% per person annually in the US. (If that number looks low, I picked the version of the statistic that is the smallest for surprise value – for more on this, see my upcoming book “How to Lie With Accurate Statistics.”)

So, changing to comparing against the prediction, we get:

As you can see, June, despite being below the goal line, is green because it was above the prediction! Also, we have the rest of the year predicted. The primary goal is to beat the prediction – the goal line is there for reference. The goal line is the agreement among management that layers the social expectations on top of the prediction. For positive metrics, the goal is above the prediction. For negative metrics, the goal would be below the prediction. Some goals can also be a high/low range around a prediction. Also, in this case, having a standard annual goal for reference is just fine. Sometimes the goal may be a percentage above the prediction, or a fixed number above the prediction, or a repeat of the prior month’s amount above the prediction. And, yes, that is a direct callback to the naïve forecasting methods.
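The coloring rule itself is simple enough to sketch in a few lines. This is a hypothetical illustration of the logic just described (the June numbers are made up), not code from any particular dashboard tool:

```python
def status_color(actual, predicted, positive_metric=True):
    """Green when the actual beats the prediction; the goal line is
    shown for reference but does not drive the color."""
    beat = actual >= predicted if positive_metric else actual <= predicted
    return "green" if beat else "red"

# June from the example (made-up numbers): under the goal, above the prediction.
print(status_color(actual=118_000, predicted=112_000))  # green
```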

And, yes, one may also notice that the difference between the forecast and the goal is a kind of prediction in and of itself. That is because part of the challenge in setting a goal is analyzing how hard it is to beat the house. Are there lots of additional sales opportunities out there, so that with just a simple bit of work sales can beat the prediction by 5%? Or is the market full of competitors, so that you will have to work day and night just to move sales up by 1% over the prediction? There is a complex discussion to be had about how data science can identify market flexibility, but for now, we’ll keep focusing on providing information to help beat the prediction.

Anyone who has worked with dashboards has worked with click-through. That is where, when you click on the chart, you are taken through to another chart that is a view of the data at a lower level of aggregation. Usually we set these levels of aggregation at human pattern-recognition levels or at levels of organization within the company. For example, we might see sales by sales region or sales team. In our new prediction world, we don’t care nearly as much about the sales team – we only care when we are evaluating individual performance. If we focus on sales teams in the past, we are pushing towards excuse management: “My team would have hit my sales number, if it weren’t for the meddling customer service department.”

Instead of looking at levels that help people adjust their human-created predictions, we need to look at the key characteristics that drove the machine learning prediction. Often, we think of these key characteristics as having a linear relationship. Not so. For example, divorces in the first five years tend to happen more frequently among the young and the old, but marriages begun in the thirties have a lower rate of immediate breakdown. With a good combination of algorithms and data manipulation, a good data scientist should include these kinds of non-linear relationships in their model.
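As a toy sketch of that point (synthetic data, not real divorce statistics): a U-shaped relationship between age and risk is invisible to a straight line, but a tree-based model picks it up without being told where the bends are.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

age = np.arange(18, 60).reshape(-1, 1)
# Made-up U-shaped risk: lowest in the mid-thirties, higher at both ends.
risk = 0.002 * (age.ravel() - 35) ** 2 + 0.05

tree = DecisionTreeRegressor(max_depth=3).fit(age, risk)
print(tree.predict([[22], [35], [55]]))  # high, low, high
```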

What you want to look at is a breakdown of the population by the key characteristics that drove the prediction. What you want to do is compare the predicted population to the current population. The goal is not to second-guess the prediction, but to use the characteristics to fuel the creative process of beating the house.

So, this kind of chart is called a heat map or tree map. It is particularly good for showing two different characteristics. Though not labeled on the chart, the size of each box in this example is the size of the current business. The color is the predicted percent change in business. Besides some description of the box sizes and the colors, I’d probably add a note to this chart that the overall growth is 3%, but for this example the chart is designed to be simple. As you can see, the sizes let you focus on the biggest items first while taking into consideration which items are predicted to change the most.
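Here is a small sketch of the data behind such a tree map (the segments and revenue figures are made up for illustration): box size comes from current revenue, color from the predicted percent change, and the totals work out to roughly 3% overall growth.

```python
import pandas as pd

segments = pd.DataFrame({
    "segment": ["Enterprise", "Mid-market", "Small business"],
    "current_revenue": [600_000, 250_000, 150_000],    # drives box size
    "predicted_revenue": [640_000, 262_000, 128_000],  # from the model
})
# Color value: predicted percent change per segment.
segments["pct_change"] = segments["predicted_revenue"] / segments["current_revenue"] - 1

print(segments)  # feed size and color into your tree-map tool of choice
```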

Just looking at this chart, the first reaction is probably “why is small business predicted to make such a drop?!” But the right phrase should be “how can we prevent a drop in small business?” The reason for this is a subtle change – don’t try to re-guess the prediction, try to beat it. So, we need to get creative. Is the small business segment itself simply shrinking? Is there a problem with how we are talking to small businesses? What programs and activities have helped with the large customers, and can they be translated to the smaller companies? Basically: what creative steps can we take to make this future happen better for us?

This sample breakout should be just one of the top five or so most predictive key characteristics. The key characteristics probably won’t change greatly from model build to model build, but one or two may move on or off the list as the model is refreshed.
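Pulling that top-five list is straightforward with most tree-based models. A minimal sketch, assuming a fitted scikit-learn style model that exposes feature_importances_ (the helper function name is my own):

```python
import numpy as np

def top_characteristics(model, feature_names, k=5):
    """Return the k features that contribute most to the model,
    ranked by the model's feature_importances_ scores."""
    order = np.argsort(model.feature_importances_)[::-1][:k]
    return [(feature_names[i], float(model.feature_importances_[i]))
            for i in order]
```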

One other note: these customer breakouts should themselves be chosen by data science. The difference between small and large companies shouldn’t be set by what is organizationally convenient, but by how the companies’ activities cluster together.
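For example, here is a minimal clustering sketch (the activity features and counts are made up): let the customers’ own behavior define the segments, instead of a revenue cutoff picked for organizational convenience.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Made-up activity per customer: orders per year and average order size.
activity = np.column_stack([
    rng.poisson(12, size=200),      # orders per year
    rng.lognormal(7, 1, size=200),  # average order size in dollars
])

# Cluster on standardized activity, not on an arbitrary size cutoff.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(activity)
)
print(np.bincount(labels))  # customers per data-driven segment
```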

Which brings us back to the question of “why predict?” First, predictions have always been there, often naïvely overlooked. Second, once you add in better predictions with data science, you begin to shift your focus from looking backwards to looking forwards. Third, by looking at how the prediction differs from the present, you are given clues as to what you want to change.

Back to how jobs change. We are moving away from jobs that look at past data to find patterns of deviation from the desired results in order to make process changes. We are moving towards data science making a detailed prediction with key predictive characteristics, and the job becoming the design of a creative process to beat the predicted value. There is a big irony at the heart of the data science revolution: applying data science increases the need for employees to be more creative. (And not more creative with their excuses!)

An added benefit: having employees focused on improving the future instead of excusing the past creates a more optimistic environment. Who doesn’t want to work in a more optimistic environment? Everyone is looking forward to “beat the house” instead of looking backward and being beaten up.

Why predict? Because you have been. Because you need to.