The perils of estimation

Business people want estimates. They want to know how much it’s going to cost them to get a solution, and they want to know how likely it is to come in on time and on budget. And of course quality is not negotiable.

Agile teams I encounter are at best nervous about estimates and at worst simply evasive. “You don’t need estimates if you’re doing Agile,” they say. “It will be ready when it’s done. We’re constantly adding value so we don’t need to commit to a date.”

We’re missing the point of release planning

My favourite exchange goes something like:

- “We’ve done an inception and broken down the entire project into stories and measured it, and it’s come in at 400 stories, estimated at 865 story points.”
- “865 what?”
- “Story points.”
- “So how big is a story point?”
- “We don’t know yet, we’ll let you know in a few weeks.”

At a governance and funding level the business couldn’t care less about story points. They don’t actually care about stories except that we shove them in their faces with our release plans. They care about solving a problem. They came to us and asked us a) how much will it cost to solve the problem, and b) how confident are we about that number?

So how do we approach that? We go through some sort of inception process that looks something like this:

  1. Identify some personas
  2. Identify some process flows
  3. Start breaking the flows down into stories
  4. Lots and lots of stories
  5. Lots and lots and lots and lots of stories
  6. Spike some technical ideas that came out of the stories
  7. Estimate the stories
  8. Roll up all the estimates and call that our project estimate

The part where we estimate the stories is a real chore (c’mon - we’re estimating 400 stories here), so we cut corners. We do a first pass using t-shirt sizes (small, medium, large) and then take a representative sample (sounds suitably scientific) and do a “detailed” estimate of those. This involves a bunch of people estimating lots of important-sounding metrics: minimum, likely and maximum size, clarity, volatility (eh?) and whatever else, and then multiplying it all up to provide a WOOOOAAAAAHHHH! hang on a minute! What were we trying to do again?

All they wanted to know was: how long will it take, and how confident are you about that number?

Redefining success - in a bad way

By introducing a comprehensive story list - with or without the notion of story points - we have unwittingly reframed the project. The business started out by defining success as solving the problem, but now we have redefined success as delivering this list of stories. However we frame it, that’s what the business will believe. The project will start and the business stakeholders will start counting down the list of stories until it reaches zero.

So now we have the worst of both the Agile and plan-driven worlds: the business expects delivery of a fine-grained list of requirements (whether we call it a Product Backlog or a Master Story List), and we have only made a half-hearted attempt at it compared to the big up-front analysis we used to do. From here on we are on the back foot, constantly negotiating with the business to manage scope, when it’s our own fault they even care about the story-level detail. They see the story backlog, mentally turn it 90 degrees and think of it as a Gantt chart. Happy days!

Back to project basics

There are two observations to make here. Firstly the business wants accuracy and we’re giving them precision. If you tell me it will take 4.632 months and it takes 8 months, that’s worse than useless. If you tell me it takes “about six months” and it takes seven months, I should still be onto a winner. (If the return on investment is small enough that the extra month stops the proposition being viable, I’d have been better off investing in something else in the first place. Spending $60,000 to realise a return of $70,000 is risky to say the least.) I’m simplifying here of course because the real RoI varies over time, and its value may be particularly time-sensitive.
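To make the arithmetic concrete, here is a back-of-envelope sketch - the monthly burn rate is an invented figure, purely for illustration - of how quickly an overrun eats into a margin that thin:

```python
# Back-of-envelope sketch. The monthly burn rate is invented for
# illustration; the $70,000 benefit and roughly-six-month timescale
# come from the example above.

def roi(cost: float, benefit: float) -> float:
    """Simple return on investment as a fraction of what was spent."""
    return (benefit - cost) / cost

monthly_burn = 10_000   # assumed cost of running the team for a month
benefit = 70_000        # value of solving the problem

for months in (6, 7, 8):
    cost = months * monthly_burn
    print(f"{months} months: cost ${cost:,}, RoI {roi(cost, benefit):+.1%}")

# 6 months: cost $60,000, RoI +16.7%
# 7 months: cost $70,000, RoI +0.0%
# 8 months: cost $80,000, RoI -12.5%
```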

Secondly we know that requirements on an agile project vary over time, and for good reason. Hopefully we are learning as we go, which means we will discover new requirements and decide others are no longer worth pursuing. If we assume about a third of the requirements will be delivered as described (this is generous in light of the Standish Chaos reports), another third will be delivered but with changes, and the last third won’t be delivered at all - replaced instead by other features - then we have just wasted all that time and effort in the inception coming up with detailed, high-precision data for a pile of stories we will never deliver.

To compound this, it turns out that estimation is fractal. The more finely you break down the requirements, the more “edges” you will discover. This means that the more detailed your estimates, the more the total will tend towards infinity, simply due to the rounding errors and fear factors we multiply into fine-grained estimates.
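To illustrate the effect, here is a toy simulation - every number in it is invented - of the same piece of work broken into more and more stories, each estimated with a little noise, rounded up to a whole day and padded with a per-story contingency:

```python
# Toy simulation with invented numbers: the same amount of work, broken
# into more and more pieces, each estimated with some noise, rounded up
# to a whole day, and padded with a per-piece "fear factor".
import math
import random

random.seed(42)
TRUE_EFFORT = 200.0  # assumed "true" size of the work, in pair-days

def total_estimate(pieces: int) -> float:
    total = 0.0
    for _ in range(pieces):
        guess = (TRUE_EFFORT / pieces) * random.uniform(0.9, 1.3)  # noisy judgement
        total += math.ceil(guess) * 1.2  # round up, then add 20% contingency
    return total

for pieces in (10, 50, 400):
    print(f"{pieces:>3} stories -> estimate of {total_estimate(pieces):.0f} pair-days")

# Typical output: the finer the breakdown, the bigger the total,
# even though the underlying work never changed.
```

Neither the rounding nor the padding ever pushes a number down, so the errors accumulate instead of cancelling out.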

Use the inception for deliberate discovery

So what should we be doing during an inception instead of doing the fine-grained story breakdown? Taking it back to first principles we simply want a rough idea of size and an understanding of certainty. There is uncertainty in everything, so the purpose of the inception is to understand the potential landscape we are delivering in.

When we start the inception we know nothing. We want to come out of the inception knowing as much as we could reasonably expect to learn in the time we have allocated. This discovery runs along several axes:

- technical areas such as the technology stack, potential architectures, integration points and external services;
- domain questions such as how well we understand the problem and whether it is more about research than solving a clearly-articulated problem;
- people and process challenges such as the path to production, identification of stakeholders, how co-located or distributed the team is, and how much of this kind of delivery they have done before.

For some of these areas, breaking broad requirements into finer-grained detail is a great way to discover more. But not for all of them, and certainly not at the expense of other discovery activities.

There are good arguments from both the Kanban and Real Options folks about deferring the decomposition of feature sets into features and stories until the “last responsible moment”. This means the information is freshest and you aren’t holding an inventory of atrophying information. You might want a couple of weeks of story-level detail - to promote a consistent flow and avoid starving your process - plus a few features identified as the next candidates to break down into stories, but beyond that you shouldn’t be worrying about that level of granularity. The experienced members of the team should be estimating feature sets of the order of person-weeks (or better yet, pair-weeks), not going down to the level of individual pair-days. The less experienced team members should be using the exercise as a learning opportunity.

So please, let’s move beyond this cargo cult approach to inception where we slavishly trot out hundreds of stories with their associated estimates, and remember that we are engaging in a process of deliberate discovery.

Its purpose is firstly to convey to our stakeholders and ourselves an order-of-magnitude sense of size - to quote the Pragmatic Programmers, is it larger than a breadbox and smaller than a house? - and secondly to present the risk landscape in which to understand that estimate.


Colophon

This article has been translated into Russian by Denis Oleynik.