Why We Underestimate Large IT Projects and What To Do About It

Congratulations! You are the Project Manager for the multimillion-dollar software project that will give your business a huge competitive advantage. You just completed the planning phase. You worked with the software, DBA, networking, QA, and other teams to develop the schedule and cost estimates based on the project scope and the work breakdown structure. The budget has been approved, the team assigned, and the project is off and running. But how confident are you in the schedule and cost estimates you presented?

A basic premise of project management is that we define scope at the beginning of a project. Then we estimate the effort, cost, and schedule to design, build, and deploy the solution that satisfies that scope. If we do a good job of managing scope, cost, and schedule we should come in on time and on budget.

But is this realistic for large IT projects? Surveys by the Standish Group and others have shown that less than half of IT projects deliver what was expected for the original budget and schedule. There is some evidence that the larger the project, the worse the problem. In their 2010 “Chaos Report” the Standish Group reported that only 7% of projects over $10 million were successful.

Large IT projects are very labor-intensive; staff costs make up the bulk of most project budgets. Accurately estimating effort is therefore the key to accurately estimating the overall project cost and schedule.

The first problem we face is defining the work needed to build the solution based on the early scope definition. Particularly with traditional software engineering methodologies, estimates for development and integration must be produced at an early stage, when the solution that will be deployed is only understood at a high level. A scope definition developed early in the project is at best a very high-level abstraction of what will actually be built – not much of a basis for accurately estimating effort.

Even if we did understand exactly what needs to be built, we have only crude metrics for measuring the software we will develop and the productivity of the developers. Lines of code or function points are the main choices. Both metrics have significant shortcomings (search on “programmer productivity”). Bill Gates has said: “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” Function points take specialists to estimate. Both are difficult to estimate accurately early in a project.

That’s for software development. What about other IT roles such as design, integration, DBA, network engineering, data migration, application configuration, and QA? Again we lack good productivity metrics.

In addition, it is generally agreed that there is a 10 to 1 difference in productivity between the best and average software developers (admittedly hard to prove given the previous point). In other words, the best programmers are ten times more productive than the average. The same variation is likely present in other technical skill areas. So there is a huge variation in the effort needed to complete a task based on the productivity of the specific staff assigned to the task.

A few organizations have historical productivity data to develop and verify project estimates. If you are a PM in one of those organizations, consider yourself lucky. Most organizations do not have such data.

The most common estimating approach is to use “experts.” We ask experienced members of the software development team to estimate development tasks, DBAs to estimate DBA tasks, etc. Based on their experience with similar efforts, they develop estimates for the new project.

In addition to the lack of detail about what needs to be built and the lack of good productivity measures, there are two factors that cause people to underestimate the time and effort to complete tasks – even if they have extensive experience. These two factors are optimism bias and the planning fallacy.

Optimism bias is the demonstrated tendency for people to believe that things will go well, in spite of experience to the contrary. As discussed in Tali Sharot’s book The Optimism Bias: A Tour of the Irrationally Optimistic Brain, people overestimate the probability of positive things happening and underestimate the probability of negative things happening. This has been shown to be the case even when people have significant experience. It appears to be hardwired into the human brain. So your development experts are likely to assume that all the developers will be top performers and underestimate the extent of defects, rework, and unforeseen delays.

The planning fallacy, also scientifically verified, causes people to underestimate the effort required to complete tasks. Optimism bias is a contributing factor in the planning fallacy, but there are other contributing factors as well – see Daniel Kahneman’s book Thinking, Fast and Slow. Again, this happens even when the individual has experience with similar tasks.

The bottom line is that you should expect that your experts will underestimate. For a large project where you have a number of groups involved, such as software development, QA, and business intelligence, the effect of this underestimating is cumulative.
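
To make the cumulative effect concrete, here is a minimal sketch in Python – the team names, effort figures, and bias percentages are hypothetical, purely for illustration. If each group underestimates its own work by a modest amount, the shortfalls add up at the project level:

    # Hypothetical example: how modest per-team underestimates add up at the project level.
    estimates = {          # each team's estimated effort, in person-months
        "software development": 60,
        "QA": 25,
        "business intelligence": 15,
    }
    optimism = {           # assumed underestimation per team (0.20 means the estimate is 20% below actual)
        "software development": 0.20,
        "QA": 0.15,
        "business intelligence": 0.25,
    }

    estimated_total = sum(estimates.values())
    actual_total = sum(effort / (1 - optimism[team]) for team, effort in estimates.items())

    print(f"Estimated: {estimated_total:.0f} person-months")
    print(f"Likely actual: {actual_total:.0f} person-months "
          f"({(actual_total / estimated_total - 1) * 100:.0f}% over)")

In this made-up example, per-team optimism of 15 to 25 percent means the project needs roughly a quarter more effort than was budgeted.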

On top of these biases there is the tendency to underestimate due to the desire to please and the pressure to keep costs low. People may subconsciously lower their estimates in an effort to avoid the negative reaction to a big number, especially if it’s higher than what was previously discussed.

Then there are the “unknown unknowns.” This phrase was popularized by former Secretary of Defense Donald Rumsfeld in a different context, but it’s a useful way to express an issue with project estimates, especially on large projects. It’s another way of saying “you don’t know what you don’t know.” How much time and effort do you include for dealing with events you cannot predict? Risk management helps, but we are talking about things that are by definition not known until they appear. The more the project includes technology or other aspects that are new to the organization, and the more complex it is, the more unknown unknowns will add to the ultimate cost and schedule.

Business stakeholders of IT projects expect that we know our business and have a reliable basis for estimating cost and schedule. The reality is that even after more than fifty years of IT projects we have poor estimating tools and inherently uncertain and optimistic estimates. As Project Managers we must bridge this gap. The question is – how?

There are some things you can do to mitigate this issue:

  • Probe your experts’ estimates,
  • Understand the drivers of cost and schedule,
  • Don’t confuse precision with accuracy,
  • Relentlessly manage stakeholder expectations,
  • Document and report on assumptions,
  • Build in checkpoints to refine estimates, and
  • Test the business case.

Probe your experts’ estimates. Ask questions such as:

  • Who will do the work and how productive will they be?
  • How do we know their productivity?
  • What comparable efforts were used as analogies and why are they good analogs?
  • What was the outcome of those efforts?
  • What is the driver of effort needed for the task?
  • What would cause the work to require more effort?

The goal is not to question the expertise of the estimators but to understand where optimism bias and the planning fallacy may have caused overly optimistic estimates, and to make appropriate adjustments.

Understand the drivers of cost and schedule. What are the areas that have the most impact on cost and schedule? Is it a new piece of software, a complex integration with a partner’s system, a large custom development effort? Identify what drives cost and schedule so you can give those areas extra attention.

Don’t confuse precision with accuracy. There is inherent uncertainty in any estimate. The software you use for project management will calculate estimated cost and effort as very precise numbers, e.g., cost to the dollar. If you present these numbers to your stakeholders, they will expect your estimate to be accurate to that degree of precision. Round estimates up to an appropriate level of precision, such as $1,000s or $10,000s, and use ranges rather than single numbers.
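
As a minimal sketch of that idea – the cost figure, the ±25% uncertainty band, and the rounding unit are assumptions for illustration, not rules – you can strip false precision from a tool-generated number by rounding it and presenting it as a range:

    # Hypothetical example: turn a falsely precise cost figure into a rounded range.
    def estimate_range(point_estimate, uncertainty=0.25, round_to=10_000):
        """Round a point estimate and express it as a low-to-high range."""
        low = round(point_estimate * (1 - uncertainty) / round_to) * round_to
        high = round(point_estimate * (1 + uncertainty) / round_to) * round_to
        return low, high

    # A project tool reports cost to the dollar; stakeholders read that as accuracy.
    tool_output = 2_347_912
    low, high = estimate_range(tool_output)
    print(f"Estimated cost: ${low:,} to ${high:,}")   # prints: Estimated cost: $1,760,000 to $2,930,000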

Be relentless about managing stakeholder expectations. Use every opportunity to reinforce the degree of uncertainty. Educate your stakeholders that this is not the result of a lack of knowledge or experience on the part of the IT team; it is the nature of IT, where almost every project is unique.

Document and report on assumptions. Make a list of the assumptions that underlie the estimates. Make sure your stakeholders understand these assumptions. Periodically update your stakeholders – which assumptions have proved correct, which have not, and the implications.

Build checkpoints into the project plan to refine estimates. Use prototypes to verify and refine overall estimates. Iterative development delivers a large project in smaller, more manageable, and easier-to-estimate “chunks.”

Test the business case. Would the project still make business sense if it cost 10%, 25%, 50%, or 100% more than the original estimate? If the business case holds only when there are no cost increases, it may be too weak.
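
A quick way to run that test is a simple sensitivity calculation – the cost and benefit figures below are hypothetical:

    # Hypothetical example: does the business case survive cost overruns?
    estimated_cost = 4_000_000      # original project cost estimate
    expected_benefit = 6_000_000    # projected business benefit

    for overrun in (0.10, 0.25, 0.50, 1.00):
        cost = estimated_cost * (1 + overrun)
        net = expected_benefit - cost
        verdict = "still positive" if net > 0 else "business case fails"
        print(f"+{overrun:.0%} overrun: cost ${cost:,.0f}, net ${net:,.0f} -> {verdict}")

If the net benefit evaporates at the 25% or 50% level, the project is relying on an estimate that, as discussed above, is probably optimistic.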

As project managers, we need to understand the limits of our ability to develop reliable estimates in the early stages of a large project, and that despite our best efforts there is a tendency to estimate low. Our business stakeholders, however, do not share that understanding. It is our job as PMs to close that gap. This is not making excuses for overruns; it is recognizing the reality of large technology projects.

What has been your experience with estimating large projects? Have your estimates been accurate? What techniques did you use?


Why Project Managers Should Monitor the Technology Hype Cycle

As a project manager, do you know where your project’s technology is on the technology hype cycle? Should you?

The Gartner Group introduced the technology hype cycle in 1995. They use it to track the “common pattern of over-enthusiasm, disillusionment, and eventual realism” that accompanies the adoption of new technology.

If you haven’t seen the hype cycle, it is a graph illustrating the normal life cycle of technology.

The vertical axis of the graph is “visibility” and the horizontal axis is time. The graph shows five stages in the life of a technology:

  • “Technology trigger” – the initial introduction of the technology,
  • “Peak of inflated expectations” – when the early, often unwarranted, enthusiasm for the technology reaches its maximum,
  • “Trough of disillusionment” – when the poor results of early projects cause interest to drop off significantly,
  • “Slope of enlightenment” – when the actual value of the technology becomes understood and products mature, and
  • “Plateau of productivity” – when the technology becomes mainstream and is widely adopted.

There have been some criticisms of the hype cycle but I have found it to be a useful tool – a good “reality check” and an excellent communications medium.

The Gartner Group releases a new version of the hype cycle each year showing where various technologies lie on the cycle. There are hype cycles for different areas of technology but the general version is the “hype cycle for emerging technologies.” Some of the technologies we are hearing so much about these days such as big data, hybrid clouds, and Bring Your Own Device (BYOD) are near the peak of inflated expectations on the current hype cycle.

One important point is that the makeup of “visibility” changes over time. Early in the cycle, around the peak of inflated expectations, visibility comes from a few highly publicized project successes. Other factors contributing to early visibility include:

  • vendor marketing activity,
  • discussion on blog posts,
  • articles in the IT trade press,
  • conferences,
  • articles in airline magazines and the business press, and
  • startup and venture capital investment activity.

Later, during the plateau of productivity, there are different contributors to visibility such as:

  • successful projects,
  • wide adoption, and
  • certified specialists in the technology.

Early in the life of a technology it is easy for people to become overly optimistic that a new technology is the “silver bullet” for their problem. These people may include stakeholders on your project.

How can you, as a project manager, use the hype cycle?

It is very helpful to know where the technologies used by your project are in the cycle. In particular, it is critical to know if you are using technology near the peak of inflated expectations. This is when you as the PM are most at risk. Using technology at this stage means there is much less predictability of effort, cost, and results. There is also a much higher risk that unforeseen issues will arise.

And there are those “inflated expectations” that your stakeholders may have based on vendor presentations, press articles, etc.

So if your current project is to implement a big data solution on a hybrid cloud platform and deliver the results to any user’s personal mobile device, you are operating at the peak of inflated expectations. One implication is that you will need to pay increased attention to schedule, cost, and risk management as compared to a project using more mature technology.

The other implication is that you need to manage the expectations of your project’s stakeholders. One way to do that is to use the Gartner graphic. The next time you give a presentation to your steering committee or other stakeholders, include the Gartner hype cycle. My experience is that most people understand and relate to the point of the hype cycle, especially when given a few examples. And, you have the backing of the Gartner Group, a well-respected research firm.

The technology hype cycle can help you communicate to a wide audience the realistic expectations they should have when adopting a technology early in its evolution.

If the current hype is right, there should be a lot of PMs out there managing big data, cloud, BYOD, and other projects with technology around the peak of inflated expectations. Will using the hype cycle help you manage expectations?


How Much Technical Knowledge does a Project Manager Need?

CIO recently published an article titled “7 Must-Have Project Management Skills for IT Pros.” The author asks “How can you tell a good project manager from a bad one?” She goes on to say that these are the “seven skills to be effective and successful.”