Estimating, predicting, forecasting: three different words with three things in common. Each stakes out a position on the future with some margin of error. Each is used for planning, aiming to help companies prepare for what comes next. And, as we all know, the accuracy and success of each vary widely.
We use and rely on these techniques because they help us be more efficient and more profitable, a top priority of any modern business. If we know how long it will take to develop something, we know when to shift resources to something new. If we know how much additional revenue we will make next year, we can decide what budgets to allocate to jump ahead of the competition. If we know what product functionality a customer wants, we can build just that and avoid developing unwanted capabilities.
Nowhere is this truer than in project management, which relies heavily on estimating: what tasks need to be accomplished, how long each task will take, how many people the project needs to finish on time, what will be built, and what the best approach to building it is. Project success often rests on our ability to estimate.
We all accept that perfect estimating accuracy is unrealistic. What we don’t realize is just how wrong our expectations of estimating accuracy truly are. So we search and search for better ways of estimating, hoping to find the one method that will make us significantly better estimators. That will never happen, so perhaps we should focus on adjusting our expectations instead.
Unfortunately, we have a habit of departing from rational thinking and acting in ways that prevent us from accepting a reality in which we could feel we are doing a good job at estimating. Here are four ways we depart from rational thinking:
- There is far more randomness in projects than we want to believe. Nassim Nicholas Taleb wrote two fantastic books that establish this truth: “Fooled by Randomness” and “The Black Swan”. Because of randomness, at best our estimating accuracy will follow the bell curve, with a good portion of estimates a standard deviation from perfect and some falling at the outer edges of the curve. At worst, our estimating error can be a black swan event: an outcome tens of times greater than our estimate. Either way, our projects zig and zag all over the place because of the flow of randomness. Funny thing: Taleb says the best way to protect yourself from randomness is to build slack into your system. In reality, there is little chance of that happening, because slack is the very inefficiency we are trying to eliminate.
- We don’t invest in getting better at estimating. It is amazing how little project data we keep and analyze. All it would take is time and a few processes, and we would have a far better idea of the standard deviation of estimating error on tasks, phases, and project completions. We would certainly become better estimators of scope, time, and cost on common projects. If we implemented predictive models such as the Monte Carlo method, we would know what the outer edges of our estimating error could be on complex projects. For some reason these approaches are not valued enough in organizations; that, however, never stops us from passing judgment on people’s performance based on incomplete or bad data.
- Our expectations are inversely proportional to project scale. We hold projects with longer durations and greater complexity to more aggressive expectations than shorter, less complex ones. For instance, say we ask Jack to estimate how long it will take a baseball dropped from the top of the Empire State Building to hit the ground. He estimates 20 seconds; it actually takes 9.8 seconds. Most would think the estimate was fairly close: he was only off by ten seconds or so. Now let’s ask Jill to estimate how long it would take her to sail with a baseball from a pier on Hawaii’s main island to Los Angeles, fly to New York City, go to the top of the Empire State Building, and drop the baseball. She estimates 17 days; it actually takes her 10.4 days. Most would say she estimated poorly, since she is off by almost 7 days. Yet relative to their estimates, Jill was off by only 38% while Jack was off by 51%. These misjudgments of relative error make no sense, but it is how we think. This becomes especially painful when you consider that the projects with the most impact on our businesses are also more complex and longer in duration, making them harder to estimate while demanding smaller margins of error to succeed.
- We put too much emphasis on human will and competence. At its root, estimating error contains human will and competence, but it also contains randomness. It is true that human will can help a person heal faster or overcome major life catastrophes. It is also true that highly skilled individuals can outperform their less skilled peers two to one. Yet randomness plays a much bigger role in estimating error than we think. We have all heard of teams coming together to overcome extreme obstacles, but we rarely ask whether randomness aligned in favor of their success and thus played the bigger role. I wonder if we put so much emphasis on human will and competence because we are afraid that if we don’t, people will slack off and underperform.
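The Monte Carlo approach mentioned above takes only a few lines to try. The sketch below is purely illustrative: the ten-task project, the ten-day single-point estimates, and the right-skewed lognormal duration distribution are all assumptions for the sake of the example, not data from any real project.

```python
import math
import random
import statistics

random.seed(42)

TASKS = 10
ESTIMATE_PER_TASK = 10.0                  # days; the single-point estimate
POINT_ESTIMATE = TASKS * ESTIMATE_PER_TASK

def simulate_project():
    # Each task's actual duration is lognormal with a median of ~10 days;
    # sigma controls how fat the right tail is (overruns dwarf underruns).
    return sum(
        random.lognormvariate(math.log(ESTIMATE_PER_TASK), 0.5)
        for _ in range(TASKS)
    )

runs = sorted(simulate_project() for _ in range(10_000))

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)
p95 = runs[int(0.95 * len(runs))]

print(f"point estimate:   {POINT_ESTIMATE:.0f} days")
print(f"simulated mean:   {mean:.1f} days (std dev {stdev:.1f})")
print(f"95th percentile:  {p95:.1f} days")
```

Even with this modest skew, the simulated mean overshoots the naive 100-day point estimate, and the 95th percentile overshoots it further. That gap between the point estimate and the tail is exactly the slack Taleb suggests building into the system.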
It may seem a pessimistic or fatalistic view, but there is a good chance we will never embrace reality and become comfortable with the estimating accuracy a random world permits. Doing so would mean de-emphasizing human will and effort, relying on data-driven criteria for estimating error expressed as percentages, and giving longer, more complex projects an estimating handicap. It’s a daunting challenge, and that’s why we’ll never get it right. Anyone up for the challenge?