Why We Underestimate Large IT Projects and What To Do About It

Congratulations! You are the Project Manager for the multimillion-dollar software project that will give your business a huge competitive advantage. You just completed the planning phase. You worked with the software, DBA, networking, QA, and other teams to develop the schedule and cost estimates based on the project scope and the work breakdown structure. The budget has been approved, the team assigned, and the project is off and running. But how confident are you in the schedule and cost estimates you presented?

A basic premise of project management is that we define scope at the beginning of a project. Then we estimate the effort, cost, and schedule to design, build, and deploy the solution that satisfies that scope. If we do a good job of managing scope, cost, and schedule we should come in on time and on budget.

But is this realistic for large IT projects? Surveys by the Standish Group and others have shown that less than half of IT projects deliver what was expected for the original budget and schedule. There is some evidence that the larger the project, the worse the problem. In their 2010 “Chaos Report” the Standish Group reported that only 7% of projects over $10 million were successful.

Large IT projects are very labor intensive. Look at the portion of most project budgets that are staff costs. Accurately estimating effort is the key to accurately estimating the overall project cost and schedule.

The first problem we face is defining the work needed to build the solution based on the early scope definition. Particularly with traditional software engineering methodologies, estimates for development and integration must be developed at an early stage when the solution that will be deployed is only understood at a high level. A scope definition developed early in the project is at best a very high level abstraction of what will actually be built – not much of a basis for accurately estimating effort.

Even if we did understand exactly what needs to be built, we have only crude metrics for measuring the software we will develop and the productivity of the developers. Lines of code or function points are the main choices. Both metrics have significant shortcomings (search on “programmer productivity”). Bill Gates has said: “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” Function points take specialists to estimate. Both are difficult to estimate accurately early in a project.

That’s for software development. What about other IT roles such as design, integration, DBA, network engineering, data migration, application configuration, and QA? Again we lack good productivity metrics.

In addition, it is generally agreed that there is a 10 to 1 difference in productivity between the best and average software developers (admittedly hard to prove given the previous point). In other words, the best programmers are ten times more productive than the average. The same variation is likely present in other technical skill areas. So there is a huge variation in the effort needed to complete a task based on the productivity of the specific staff assigned to the task.

A few organizations have historical productivity data to develop and verify project estimates. If you are a PM in one of those organizations, consider yourself lucky. Most organizations do not have such data.

The most common estimating approach is to use “experts.” We ask experienced members of the software development team to estimate development tasks, DBAs to estimate DBA tasks, and so on. Based on their experience with similar efforts they develop estimates for the new project.

In addition to the lack of detail about what needs to be built and the lack of good productivity measures, there are two factors that cause people to underestimate the time and effort to complete tasks – even if they have extensive experience. These two factors are optimism bias and the planning fallacy.

Optimism bias is the demonstrated tendency for people to believe that things will go well, in spite of experience to the contrary. As discussed in Tali Sharot’s book The Optimism Bias: A Tour of the Irrationally Positive Brain, people overestimate the probability of positive things happening and underestimate the probability of negative things happening. This has been shown to be the case even when people have significant experience. It appears to be hard-wired into the human brain. So your development experts are likely to assume that all the developers will be top performers and underestimate the extent of defects, rework, and unforeseen delays.

The planning fallacy, also scientifically verified, causes people to underestimate the effort required to complete tasks. Optimism bias is a contributing factor in the planning fallacy but there are other contributing factors as well – see Daniel Kahneman’s book Thinking, Fast and Slow. Again, this happens even when the individual has experience with similar tasks.

The bottom line is that you should expect that your experts will underestimate. For a large project where you have a number of groups involved, such as software development, QA, and business intelligence, the effect of this underestimating is cumulative.
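A quick simulation makes the compounding visible. The per-group estimates and the optimistically skewed overrun distribution below are entirely made up for illustration; the point is only that when several groups each underestimate a little, the project-level miss is the sum of all of them.

```python
import random

random.seed(42)

# Hypothetical planned effort per group, in person-days (illustrative numbers).
estimates = {"development": 1000, "qa": 400, "business_intelligence": 300}

def simulate_actual(estimate):
    # Assumption: actuals run from on-target to 80% over, with overruns
    # more likely than underruns (the optimistic-estimate skew).
    return estimate * random.triangular(1.0, 1.8, 1.2)

trials = 10_000
total_estimate = sum(estimates.values())
overruns = []
for _ in range(trials):
    actual = sum(simulate_actual(e) for e in estimates.values())
    overruns.append(actual / total_estimate - 1)

avg_overrun = sum(overruns) / trials
print(f"Planned total: {total_estimate} person-days")
print(f"Average overrun across {trials} trials: {avg_overrun:.0%}")
```

Even though no single group's bias looks dramatic, the expected project-level overrun lands well above any individual estimate's apparent padding.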

On top of these biases there is the tendency to underestimate due to the desire to please and pressure to keep cost low. People may subconsciously lower their estimates in an effort to avoid the negative reaction to a big number, especially if it’s higher than what was previously discussed.

Then there are the “unknown unknowns.” This phrase was first used by former Secretary of Defense Donald Rumsfeld in a different context but it’s a useful way to express an issue with project estimates, especially with large projects. It’s another way of saying “you don’t know what you don’t know.” How much time and effort do you include for dealing with events which you cannot predict? Risk management helps but we are talking about things that are by definition not known until they appear. The more the project includes technology or other aspects that are new to the organization and the more complex it is, the more unknown unknowns will add to the ultimate cost and schedule.

Business stakeholders of IT projects expect that we know our business and have a reliable basis for estimating cost and schedule. The reality is that even after more than fifty years of IT projects we have poor estimating tools and inherently uncertain and optimistic estimates. As Project Managers we must bridge this gap. The question is – how?

There are some things you can do to mitigate this issue:

  • Probe your experts’ estimates,
  • Understand the drivers of cost and schedule,
  • Don’t confuse precision with accuracy,
  • Relentlessly manage stakeholder expectations,
  • Document and report on assumptions,
  • Build in checkpoints to refine estimates, and
  • Test the business case.

Probe your experts’ estimates. Ask questions such as:

  • Who will do the work and how productive will they be?
  • How do we know their productivity?
  • What comparable efforts were used as analogies and why are they good analogs?
  • What was the outcome of those efforts?
  • What is the driver of effort needed for the task?
  • What would cause the work to require more effort?

The goal is not to question the expertise of the estimators but to understand where optimism bias and the planning fallacy may have caused overly optimistic estimates and make appropriate adjustments.

Understand the drivers of cost and schedule. What are the areas that have the most impact on cost and schedule? Is it the details of a new piece of software, complex integration with a partner’s system, a large custom development effort, etc.? Identify what drives cost and schedule so you can give those areas extra attention.

Don’t confuse precision with accuracy. There is inherent uncertainty in any estimate. The software you use for project management will calculate estimated cost and effort as very precise numbers, e.g., cost to the dollar. If you present these to your stakeholders, their expectation will be that your confidence in the accuracy of your estimate matches that degree of precision. Round estimates to an appropriate level of precision, such as the nearest $1,000 or $10,000. And use ranges rather than single numbers.

Be relentless about managing stakeholder expectations. Use every opportunity to reinforce the degree of uncertainty. Educate your stakeholders that this is not a result of a lack of knowledge or experience on the part of the IT team; it is the nature of IT, where almost every project is unique.

Document and report on assumptions. Make a list of the assumptions that underlie the estimates. Make sure your stakeholders understand these assumptions. Periodically update your stakeholders – which assumptions have proved correct, which have not, and the implications.

Build checkpoints into the project plan to refine estimates. Use prototypes to verify and refine overall estimates. Iterative development delivers a large project in smaller, more manageable, and easier-to-estimate “chunks.”

Test the business case. Would the project still make business sense if it cost 10%, 25%, 50% or 100% more than the original estimate? If the business case only holds if there are no cost increases it may be too weak.
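This stress test is simple arithmetic. The cost and benefit figures below are entirely hypothetical; substitute the numbers from your own business case.

```python
# Hypothetical business case: does the project still pay back under
# the overrun scenarios from the text? All figures are illustrative.
estimated_cost = 2_000_000
expected_benefit = 3_000_000  # assumed value of the delivered system

results = {}
for overrun in (0.10, 0.25, 0.50, 1.00):
    cost = estimated_cost * (1 + overrun)
    net = expected_benefit - cost
    results[overrun] = net
    verdict = "still viable" if net > 0 else "under water"
    print(f"+{overrun:.0%} overrun: cost ${cost:,.0f}, net ${net:,.0f} ({verdict})")
```

In this made-up case the project breaks even at a 50% overrun, which, given how common large overruns are, suggests the business case deserves a harder look before approval.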

As project managers we need to understand the limits of our ability to develop reliable estimates in the early stages of a large project and that despite our best efforts there is a tendency to estimate low. But at the same time, our business stakeholders do not share that understanding. It is our job as PM to close that gap. This is not making excuses for overruns, it is recognizing the reality of large technology projects.

What has been your experience with estimating large projects? Have your estimates been accurate? What techniques did you use?