Introducing Deliberate Discovery
Last year I wrote about how we are doing planning all wrong, or rather, how we seem to focus on the wrong things when we do planning. We obsess about stories and story points and estimation, because that’s what we’ve been taught to do.
It reminds me of the story of a man who comes across a drunk standing under a street lamp at night, staring at the ground. The drunk says he's looking for his lost keys. "Well," says the man, "they're obviously not here under the lamp or we would see them." "No," replies the drunk, "I dropped them over there. But it's dark over there, so I decided to search over here instead."
Our street lamp is the Planning Game, which involves writing Stories and Estimating, using Planning Poker or other Estimation Techniques (everything in caps appears in the Agile Literature, and so has been deemed Official).
I suggested that we were failing to use the planning time effectively, and that we should devote it to finding out as much useful stuff as we can while everyone is in the same room. I called this Deliberate Discovery. Marc McNeill commented: “Deliberate discovery. As opposed to accidental discovery? Or any other sort of discovery? Why add the extra word ‘deliberate’?”
Accidental discovery
If you do something a bunch of times, you will learn more and more about it. Probably. Potentially. Maybe. OK: if you do something a bunch of times and unexpected things happen, you will learn. If you just do the same thing over and over again, you will probably get better at performing that sequence, but you won't learn anything new. The Pragmatic Programmers describe this as the difference between ten years' experience and one year's experience repeated ten times.
Learning comes from experiencing the unexpected. As the saying goes: “Good judgement comes from experience. Experience comes from bad judgement.” When you encounter an unfamiliar outcome you have to alter your model of the world to accommodate it (or dismiss the encounter as a fluke and only seek out reinforcing data; this is known as confirmation bias and is a great way to not learn). In Zen teaching this moment of enlightenment, of evolving your model of the world, is known as satori, and Zen students strive to induce moments of satori through the use of koans. Similarly, the Dreyfus model of skill acquisition describes an Advanced Beginner as someone who is starting to put rote-learned rules into context, understanding where they do and don't apply. This again can only happen when the learner experiences situations outside what they already know (which is why artificially constraining people to Best Practices can stifle learning).
This implies the student can accelerate their learning by actively seeking out encounters where they are likely to learn. This is the difference between accidental and deliberate discovery.
“Learning is the constraint”
Liz Keogh told me about a thought experiment she came across recently. Think of a recent significant project or piece of work your team completed (ideally over a period of months). How long did it take, end to end, inception to delivery? Now imagine you were to do the same project over again, with the same team, the same organisational constraints, the same everything, except your team would already know everything they learned during the project. How long would it take you the second time, all the way through? Stop now and try it.
It turned out that answers in the order of one half to one quarter of the original time were not uncommon. (Think about what that means: a six-month project you could repeat in six weeks implies most of the original elapsed time went on finding things out rather than on building.) This led to the conclusion that “Learning is the constraint”.
Edit: Liz tells me the thought experiment and quote originated with Ashley Johnson of Gemba Systems, and she heard about it via César Idrovo.
If we assume the only difference is that the second time round you have learned about the problem, this suggests that the biggest impediment to your throughput was what you didn't know. That's not to say you spent all the extra time busily learning stuff. Heck, that's fun, and what's more it's even useful! More typically you probably spent it thrashing around trying to find a way forwards, or mired in meetings where you were trying to figure out the other guy's agenda so you could get past another roadblock, or going down a path that, had you only known it, was always destined to be a dead end. Or trying to work out for the umpteenth time how stupid Java NIO sockets work (see the sketch below). So it's not really learning that's the constraint; it's ignorance. More accurately, it's ignorance about specific aspects of the problem at hand. In other words:
Ignorance is the single greatest impediment to throughput.
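About that Java NIO jab: to give a flavour of the kind of incidental complexity I mean, here is roughly what the simplest possible non-blocking echo server looks like. Treat it as an illustrative from-memory sketch rather than production code (the port and buffer size are arbitrary, and a real server would handle partial writes); each of the gotchas flagged in the comments is exactly the kind of thing you only learn by tripping over it.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// A minimal non-blocking echo server: even "hello world" needs a
// selector loop, interest ops, and manual buffer flipping.
public class NioEcho {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9999));
        server.configureBlocking(false);                 // forget this and register() throws
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                           // blocks until a channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();                           // forget this and keys get processed twice
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int read = client.read(buf);
                    if (read == -1) { client.close(); continue; }
                    buf.flip();                          // forget this and you echo garbage
                    client.write(buf);
                }
            }
        }
    }
}
```

Every one of those "forget this" comments marks an hour or two of ignorance tax, paid each time round.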
Ignorance is multivariate
Ignorance applies along multiple axes. You can be ignorant of:

- the particular technologies you are using;
- the breadth of technology options available to you;
- the domain;
- the ways in which you could address the problem or opportunity;
- a better way of articulating the problem (a better model) that would make the solution obvious;
- the people in the team: their aspirations and fears, their motivation, their relationships with one another and out into the wider organisation;
- organisational constraints;
- third-party integration risks;
- which people you should be building relationships with;
- the delivery methodology;
- the culture of the organisation.

I'm just scratching the surface here; I'm sure you can imagine many other factors that could affect our ability to deliver and about which we will be more or less ignorant.
More insidiously, you are usually ignorant of how ignorant you are, and rather than making you more wary, this second-order ignorance actually makes you more likely to rush in like the proverbial fool. This is beautifully summed up in an exchange from the IRC quotes site bash.org:
<Pahalial> "ignorance more frequently begets confidence than does knowledge"---Charles Darwin
<kionix> wtf? begets isn't a word. quit trying to make up words [...expletive...]
Discovery is non-linear and disjoint
Now, think back to that project you just completed. Did your ignorance decrease consistently and linearly along all these axes? It's fairly unlikely. What probably happened was that at various points (some more memorable than others) you had sudden insights or realisations that either came to you or were thrust upon you by circumstance. Chances are that a lot of this unplanned learning happened fairly late in the day, accompanied by much disbelief, anger, and probably all the other stages of grieving too, as you came to terms with the death of your strongly-held model of How Things Ought To Be.
So for any single factor, you start the project with a particular level of ignorance, and it decreases in “bumps”—in a disjoint fashion—with each learning episode, until at the end of the project it is at another, lower level. Of course many of these factors are interrelated, so in a single episode you will typically learn about several different things—your ignorance of various factors will decrease by different amounts simultaneously. In reality much of this learning is at the whim of the Project Gods, and largely out of your control. How could you have possibly guessed that the third party API would be that different from the spec? Who knew Dave’s wife would choose that weekend to have the baby?
Well, actually, you could have. OK, you couldn't have known that those particular things would happen, but you would be insane (or rather “normal”) not to expect that something would happen, and this is the crux of Deliberate Discovery.
What if you assume something bad will happen?
We have a built-in mechanism for mindless optimism. It’s called attribution bias, and we use it to protect our fragile egos from the big bad reality out there. You can read more in Cordelia Fine’s fascinating book A Mind of Its Own, but in short, it means we assume when bad things happen to other people, they probably deserve it. They screwed up, or they didn’t plan ahead, or, well, any number of reasons. But when bad things happen to us, well that’s different. We couldn’t possibly have seen that coming! That could have happened to anyone. Poor us.
We are susceptible to attribution bias when we estimate (as Linda Rising has pointed out), and when we assess risk, and we are therefore constantly amazed—and let’s face it, a little disappointed—when bad things happen on our projects. Here’s another thought experiment: What if instead of hoping nothing bad will happen this time, you assumed the following as fact:
- Several (pick a number) Unpredictable Bad Things will happen during your project.
- You cannot know in advance what those Bad Things will be. That’s what Unpredictable means.
- The Bad Things will materially impact delivery. That’s what Bad means.
How would this affect your approach to the project?
Finally, deliberate discovery
So, I think we've been looking in the wrong place. Methodologies, practices and patterns for delivery are all well and good, but they don't take into account the single biggest limiting factor to successful delivery. Let's assume that during the life of the project our ignorance will reduce across a number of axes that are relevant to our project. Let's also assume that our ignorance of certain factors is what is currently limiting us the most. Let us further assume that we probably don't know which ones those magic enabling factors are: we are second-order ignorant of which factors are currently the most constraining. Once we realise what these are, we can apply methodology to consistently move forwards, but until we do, we're shooting in the dark.
Surely it makes sense to invest effort first in discovering which aspects of delivery we are most critically ignorant of (i.e. where we are both ignorant and hampered in our throughput by that ignorance), and then in reducing that ignorance: deliberately discovering enough to relieve the constraint and allow us to proceed. And all the way through the project, day by day, we should be trying to identify where and how ignorance is hampering us. Ideally we want to create as steep a descent as possible along each axis of the curve of ignorance, so that we deliberately reduce the degree to which we are constrained by that ignorance, rather than being a victim of circumstance.
This is a skill, and as such is subject to the Dreyfus model. Which means that initially we'll be rubbish at it. Then we'll start to figure out how we are rubbish at it, and work on that. Then we'll figure out some Best Practices (I'm sorry, it's inevitable: that's what Competent people do to “protect” Advanced Beginners), and, hopefully shortly after, figure out ways to subvert those Best Practices to continue getting work done. The hardest part is going to be the damage to our egos as we realise just how systemically poor we are at delivering projects, and how we've been staring straight past the problem, because our methodology is always under the street lamp, and we feel safe under the street lamp.
What next?
This brings us back to my original premise from last year: during an inception, when we are most ignorant about most aspects of the project, the best use we can possibly make of the time available is to attempt to identify and reduce our ignorance across all the axes we can think of. (Arguably one of the first exercises should be to take a first stab at identifying those axes and figuring out just how ignorant we are of each. How's that for an exercise in humility!) Sure, if we stick to the traditional Agile planning model, we will do some discovery as we break down epics into features into stories into scenarios, but how much more could we accomplish if we put that to one side and instead focused on the real deal?
Eric Evans, who introduced domain-driven design, describes up-front analysis as “locking in our ignorance.” He's got a point, and it doesn't just apply to old-school requirements analysis. The death-by-stories planning exercises I've seen on many Agile projects are testament to the same problem.
I hope this has given you an idea of where my head has been at. There is much more to say about deliberate discovery. Think about applying the principle to learning a new language, or picking up a new technology, or a new domain. What could you do to identify and reduce your ignorance most rapidly? Why are rapid-feedback teams so much more successful than long-cycle teams? How can we measure ignorance? I'll be writing more on this topic over the coming weeks and months, and I'll be talking about Deliberate Discovery at QCon San Francisco in November.
Colophon
Thanks to Liz Keogh, Lindsay Terhorst North, Joe Walnes, Chris Matts, Steve Hayes, Kevlin Henney and numerous others for helping to grow the ideas in this article. And special thanks to Glenn Vanderburg and Mike Nygard for the conversation at JAOO Australia about a unit of ignorance. (That will have to wait for a future article.)
This article has been translated into Russian by Denis Oleynik.