Introducing Deliberate Discovery

Last year I wrote about how we are doing planning all wrong, or rather, how we seem to focus on the wrong things when we plan. We obsess about stories and story points and estimation, because that’s what we’ve been taught to do. It reminds me of the story about a man who comes across a drunk standing under a street lamp at night, staring at the ground. The drunk says he’s looking for his lost keys, and the man says: “Well, they’re obviously not here under the lamp or we would see them.” “No,” replies the drunk, “I dropped them over there, but it’s dark over there, so I decided to search over here instead.”

Our street lamp is the Planning Game, which involves writing Stories and Estimating, using Planning Poker or other Estimation Techniques (everything in caps appears in the Agile Literature, and so has been deemed Official).

I suggested we are failing to use the planning time effectively, and that we should be devoting the time to finding out as much useful stuff as we can while everyone was in the same room, and I called this Deliberate Discovery. Marc McNeill commented: “Deliberate discovery. As opposed to accidental discovery? Or any other sort of discovery? Why add the extra word ‘deliberate’?”

Accidental discovery

If you do something a bunch of times, you will learn more and more about it. Probably. Potentially. Maybe. Ok, if you do something a bunch of times and unexpected things happen you will learn. If you just do the same thing over and over again, you will probably get better at performing that sequence, but you won’t learn anything new. The Pragmatic Programmers describe this as the difference between ten years’ experience vs. one year’s experience ten times.

Learning comes from experiencing the unexpected. As the saying goes: “Good judgement comes from experience. Experience comes from bad judgement.” When you receive an unfamiliar outcome you have to alter your model of the world to accommodate it (or dismiss the encounter as a fluke and only seek out reinforcing data – this is known as confirmation bias and is a great way to not learn). In Zen teaching this moment of enlightenment – of evolving your model of the world – is known as satori, and Zen students strive to induce these moments of satori by the use of koans. Similarly, the Dreyfus model of skills acquisition describes an Advanced Beginner as someone who is starting to put rote-learned rules into context – understanding where they do and don’t apply. This again can only happen when the learner experiences situations outside what they already know (which is why artificially constraining Best Practices can stifle learning).

This implies the student can accelerate their learning by actively seeking out encounters where they are likely to learn. This is the difference between accidental and deliberate discovery.

“Learning is the constraint”

Liz Keogh told me about a thought experiment she came across recently. Think of a recent significant project or piece of work your team completed (ideally over a period of months). How long did it take, end to end, inception to delivery? Now imagine you were to do the same project over again, with the same team, the same organisational constraints, the same everything, except your team would already know everything they learned during the project. How long would it take you the second time, all the way through? Stop now and try it.

It turned out answers in the order of 1/2 to 1/4 the time to repeat the project were not uncommon. This led to the conclusion that “Learning is the constraint”.

Edit: Liz tells me the thought experiment and quote originated with Ashley Johnson of Gemba Systems, and she heard about it via César Idrovo.

If we assume the only difference is that the second time round you have learned about the problem, this would suggest that the biggest impediment to your throughput was what you didn’t know. That’s not to say you spent all the extra time busily learning stuff. Heck, that’s fun, and what’s more it’s even useful! More typically you probably spent it thrashing around trying to find a way forwards, or mired in meetings where you were trying to figure out the other guy’s agenda so you could get past another roadblock, or going down a path that was always destined to be a dead end had you only known it. Or trying to work out for the umpteenth time how stupid Java NIO sockets work. So it’s not really learning that’s the constraint – it’s ignorance. More accurately, it’s ignorance about specific aspects of the problem at hand. In other words:

Ignorance is the single greatest impediment to throughput.

Ignorance is multivariate

Ignorance applies along multiple axes. You can be ignorant of the particular technologies you are using; ignorant of the breadth of technology options available to you; ignorant of the domain; ignorant of the ways in which you could address the problem or opportunity; ignorant of a better way of articulating the problem – a better model – that would make the solution obvious; ignorant of the people in the team – their aspirations or fears, their motivation, their relationships with one another and out into the wider organisation; ignorant of organisational constraints; ignorant of third party integration risks; ignorant of who you should be building relationships with; ignorant of the delivery methodology; ignorant of the culture of the organisation. I’m just scratching the surface here – I’m sure you can imagine many other factors that could affect our ability to deliver and about which we will be more or less ignorant.

More insidiously, you are usually ignorant of how ignorant you are, and rather than making you more wary, this second-order ignorance actually makes you more likely to rush in like the proverbial fool. This is beautifully summed up in this exchange from an IRC quotes site:

<Pahalial> “ignorance more frequently begets confidence than does knowledge” – Charles Darwin
<kionix> wtf? begets isn’t a word. quit trying to make up words […expletive…]

Discovery is non-linear and disjoint

Now, think back to that project you just completed. Did your ignorance decrease consistently and linearly along all these axes? It’s fairly unlikely. What probably happened was that at various points – some more memorable than others – you had sudden insights or realisations that either came to you or were thrust upon you by circumstance. Chances are that a lot of this unplanned learning happened fairly late in the day, accompanied by much disbelief, anger, and probably all the other stages of grieving too, as you came to terms with the death of your strongly-held model of How Things Ought To Be.

So for any single factor, you start the project with a particular level of ignorance, and it decreases in “bumps” – in a disjoint fashion – with each learning episode, until at the end of the project it is at another, lower level. Of course many of these factors are interrelated, so in a single episode you will typically learn about several different things – your ignorance of various factors will decrease by different amounts simultaneously. In reality much of this learning is at the whim of the Project Gods, and largely out of your control. How could you have possibly guessed that the third party API would be that different from the spec? Who knew Dave’s wife would choose that weekend to have the baby?

Well actually you could have. Ok, you couldn’t have known that those things would happen, but you would be insane – or rather “normal” – not to think something would happen, and this is the crux of Deliberate Discovery.

What if you assume something bad will happen?

We have a built-in mechanism for mindless optimism. It’s called attribution bias, and we use it to protect our fragile egos from the big bad reality out there. You can read more in Cordelia Fine’s fascinating book A Mind of Its Own, but in short, it means we assume when bad things happen to other people, they probably deserve it. They screwed up, or they didn’t plan ahead, or, well, any number of reasons. But when bad things happen to us, well that’s different. We couldn’t possibly have seen that coming! That could have happened to anyone. Poor us.

We are susceptible to attribution bias when we estimate (as Linda Rising has pointed out), and when we assess risk, and we are therefore constantly amazed – and let’s face it, a little disappointed – when bad things happen on our projects. Here’s another thought experiment: What if instead of hoping nothing bad will happen this time, you assumed the following as fact:

  • Several (pick a number) Unpredictable Bad Things will happen during your project.
  • You cannot know in advance what those Bad Things will be. That’s what Unpredictable means.
  • The Bad Things will materially impact delivery. That’s what Bad means.

How would this affect your approach to the project?

Finally, deliberate discovery

So, I think we’ve been looking in the wrong place. Methodologies, practices and patterns for delivery are all well and good, but they don’t take into account the single biggest limiting factor to successful delivery. Let’s assume that during the life of the project our ignorance will reduce across a number of axes that are relevant to our project. Let’s also assume that our ignorance of certain factors is what is currently limiting us the most. Let us further assume that we probably don’t know which ones those magic enabling factors are: we are second-order ignorant of which factors are currently the most constraining. Once we realise what these are, we can apply methodology to consistently move forwards, but until we do, we’re shooting in the dark.

Surely it makes sense to invest effort in firstly discovering which aspects of delivery we are most critically ignorant of (i.e. both where we are ignorant and where that ignorance is hampering throughput), and further to invest in reducing that ignorance – deliberately discovering enough to relieve the constraint and allow us to proceed. And all the way through the project, day by day, we should be trying to identify where and how ignorance is hampering us. Ideally we want to create as steep a descent as possible for each axis on the curve of ignorance, so we deliberately reduce the degree to which we are constrained by that ignorance, rather than being a victim of circumstance.

This is a skill, and as such is subject to the Dreyfus model. Which means that initially we’ll be rubbish at it. Then we’ll start to figure out how we are rubbish at it, and work on that. Then we’ll figure out some Best Practices (I’m sorry – it’s inevitable. That’s what Competent people do to “protect” Advanced Beginners), and, hopefully shortly after, figure out ways to subvert those Best Practices to continue getting work done. The hardest part is going to be the damage to our egos as we realise just how systemically poor we are at delivering projects, and how we’ve been staring straight past the problem, because our methodology is always under the street lamp, and we feel safe under the street lamp.

What next?

This then comes back to my original premise last year, which is that during an inception, when we are most ignorant about most aspects of the project, the best use we can possibly make of the time available is to attempt to identify and reduce our ignorance across all the axes we can think of. (Arguably one of the first exercises should be to take a first stab at identifying these axes, and trying to figure out just how ignorant we are. How’s that for an exercise in humility!) Sure, if we stick to the traditional Agile planning model, we will do some discovery as we break down epics into features into stories into scenarios, but how much more could we accomplish if we put that to one side and instead focused on the real deal?

Domain-driven design inventor Eric Evans describes up-front analysis as “locking in our ignorance.” He’s got a point, and it doesn’t just apply to old-school requirements analysis. The death-by-stories planning exercises I’ve seen on many Agile projects bear testament to the same problem.

I hope this has given you an idea of where my head has been at. There is much more to say about deliberate discovery. Think about applying the principle to learning a new language, or picking up a new technology, or a new domain. What could you do to identify and reduce your ignorance most rapidly? Why are rapid-feedback teams so much more successful than long cycle teams? How can we measure ignorance? I’ll be writing more on this topic over the coming weeks and months, and I’ll be talking about Deliberate Discovery at QCon, San Francisco in November.

Thanks to Liz Keogh, Lindsay North, Joe Walnes, Chris Matts, Steve Hayes, Kevlin Henney and numerous others for helping to grow the ideas in this article. And special thanks to Glenn Vanderburg and Mike Nygard for the conversation at JAOO Australia about a unit of ignorance. (That will have to wait for a future article.)

[This article has been translated into Russian by Denis Oleynik.]



  2. Interesting post, but I have to confess I still don’t understand exactly what you mean by ‘deliberate’ discovery. My manager sometimes says, when we are talking about a theme or a story, “What should we have thought of that we didn’t think of?” But if you don’t know, how do you know you don’t know?

    It would help me if you could give a realistic example of a case where you do deliberate discovery, versus a case where you don’t, when working on a story or a theme. My team finds that by breaking things down into small chunks and starting with the one that seems riskiest or the least known, we do a pretty good job of producing the right software. Most of the time, it isn’t us, the dev team, who failed to discover something, it’s the stakeholders who can’t decide what they want.

    It seems to me that something like story mapping would help our company learn the right things about each theme or story, but I can’t get anyone to try it (we don’t have a bad problem to solve, but personally I think we could improve!)

    1. Practically, I would treat each axis as a spike that could last anything from a few minutes, hours or days, but less than a week. The shorter the better from a feedback loop perspective.

      1. Hi Aslam.

        Spiking in this context is definitely useful. The challenge is in recognising the diminishing return: going far enough with a spike to learn, but resisting the urge to go further than you need. It’s also difficult to throw stuff away you’ve spent time working on: even though I “know” I was only in it for the learning there’s still an emotional attachment.

    2. Hi Lisa.

      Deliberate discovery is being aware that ignorance is our biggest constraint, and actively seeking to identify and address the particular axes of ignorance that are limiting your throughput at any given moment.

      The behavioural difference comes from assuming second order ignorance as your default state, and assuming you are currently constrained by a lack of knowledge of something (i.e. that a similar team in a similar situation would be going faster if it knew something you don’t).

      Examples are definitely the way forward. In the next few posts I’ll be unpacking a number of situations where actively addressing a particular lack of knowledge has had a dramatic impact on throughput. I’m also on the lookout for other people’s examples.

      On a related note, I’ve been thinking that skilled manual testers are probably the people who have this instinct the best, because they are experts in looking everywhere but under the street lamp! Conversely automated testing is only ever going to take place directly under the light (with the exception of tools like QuickCheck).
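
      To make the QuickCheck aside concrete: property-based tools generate random inputs and check invariants, so they probe well beyond any hand-picked examples sitting under the street lamp. Here is a minimal sketch of the idea in plain Python – the `naive_mean` function and the bounds property are illustrative assumptions of mine, not anything from QuickCheck itself:

```python
import random

def naive_mean(xs):
    """A deliberately simple function under test (illustrative only)."""
    return sum(xs) / len(xs)

def check_mean_bounds(trials=1000, seed=42):
    """Property: the mean of a non-empty list lies between its min and max.

    Instead of asserting against a handful of hand-picked examples (the
    area under the street lamp), we generate random inputs and check an
    invariant, in the spirit of property-based tools like QuickCheck.
    """
    rng = random.Random(seed)  # seeded, so the exploration is reproducible
    for _ in range(trials):
        # Random length and random values: each trial probes new territory.
        xs = [rng.uniform(-1e6, 1e6) for _ in range(rng.randint(1, 50))]
        m = naive_mean(xs)
        assert min(xs) <= m <= max(xs)
    return trials

print(check_mean_bounds())  # → 1000
```

      Each run with a different seed explores a different patch of darkness; a real property-based tool adds shrinking, reducing any failing random input to a minimal counterexample.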

      1. “Tools like QuickCheck” = Tools for

  3. I think this is exactly what the Lean Startup thing that’s going on is all about. Spending more time with customers (they say at least 15%), creating and vetting hypotheses, failing fast, etc.

    Check out “Four Steps to the Epiphany” by Steven Blank for more information. I think you’ll find it dovetails with your observations quite nicely.

    1. Hi David.

      It certainly sounds like a good start! The challenge is then to keep that attitude and nimbleness throughout development, rather than “doing all the innovating at the start.” I’ll put that book on my Some Day list, but I’ll be honest and say my Some Day list is getting rather long :)

  4. This reminds me of my experience doing research. At the start of the work we didn’t have a clue – by definition – but we knew what the result would look like.

    In sw projects I work on now we deliberately first tackle those areas we know least about, we actively seek them out, and explore them. We write tests that will tell us when we’ve solved the problem.

    We may need to assure clients in the early days that we are making progress since we are doing a lot of research, but as the unknowns are knocked off, progress accelerates considerably.

  5. Hey Dan,

    I recently did a talk in Cape Town titled “Agile Architecture: The Age of Ignorance”, but you articulate it differently from me, in terms of ignorance as a constraint. I view it as a state of imperfection, and that aiming for perfection is not very productive.

    When we relieve any constraint, a new constraint will appear elsewhere. Eventually, the constraint is at a point that we can tolerate or deal with reasonably well. If ignorance was the constraint, and we are no longer ignorant, where does the new constraint surface? On another axis of ignorance? Maybe. I guess it just depends on the nature of ignorance of the first axis.

    Also, the concept of axis suggests that they are potentially independent, or orthogonal. It might not be so, in reality because of dependencies, influence or affinity.

    Eric Evans also talks about working with multiple models and using other stories and scenarios to challenge these models. I guess each model, in a way, represents a different degree of understanding and ignorance.

    1. Hi Aslam.

      My thesis with deliberate discovery is that some kind of ignorance is always one of your primary constraints. I believe that if the team develops an instinct for identifying and addressing any area where learning would help (but learning for the sake of improving throughput, not for its own sake, otherwise you just descend into endless introspection and never get any work done!) then your throughput will improve dramatically. At least that’s what (I think) I’ve observed among the most high-performing teams and people I know.

  6. Hi, Dan,

    Thoughtful post. What you call “Deliberate Discovery” sounds like what I know as chartered exploratory testing — a way to embrace ignorance by choosing a path into the unknown and following it. Testers have always served to “shine the light into dark spaces”, but exploration is a particularly valuable approach to do that.

    “Accidentally On Purpose” is the title of a talk I once did about skills and tactics of exploratory testing, to help testers not feel so helpless or flummoxed when reporting their exploratory work to their managers.

    In that talk, I mention finding things deliberately (accidentally is fine, too, but I de-emphasize that) by using Session-Based Test Management to frame and guide exploration.

    Recently, my brother James and I have come up with something for situations in testing where the environment may be especially chaotic. It’s called Thread-Based Test Management — a more specific strategy to organize and report progress toward activity on themes of what you’re pursuing (“threads”).

    I have long believed not enough of us are writing about knowledge work despite chaos, improvisational investigation, or deliberate discovery, so I appreciate your post.

    Jon Bach

    1. Hi Jon.

      Yes, exploratory testing feels very much like deliberate discovery as it applies to defect detection. I certainly find it a more fruitful source of bugs than rerunning the same bunch of automated acceptance tests that didn’t fail for the last 300 times.

      You can think of deliberate discovery as that idea extended to the whole software development process, continually “debugging” your process and your knowledge in order to deliver more relevant stuff quicker.

      I met James all too briefly at Øredev a couple of years ago. You guys have some really inspiring ideas.

  7. Nice post. I’ve always tried to teach that the output of planning is not a plan. For the record (‘cos these things worry me) the “Learning is a bottleneck” stuff, though talked about for a while, is written up (with thought experiment and all) in Amr Elssamadisy’s book Agile Adoption Patterns (the most interesting thing about the book, I think).

    1. Hi David.

      I don’t doubt the “learning is a bottleneck” meme has been around for a long time. (I’ve updated the post to add a clarification.) The part I’m interested in is how to consciously impact the shape of the ignorance curve: how to actively engineer episodes that create the right kinds of learning and insight early and continually throughout delivery, such that we are actively aware of and managing the constraint of ignorance. That’s what I’m trying to do with deliberate discovery. I think :)

  8. Great post Dan, can’t wait to see how this reveals itself at QCon.

    From the comments, I can see the similarities with exploratory testing, although this seems much more far-reaching. Exploratory testing is still bounded by some sort of canvas (the application), whilst in inception this is pretty much unbounded.

    Somehow, this reminds me of de Bono’s thinking hats, a systematic way of revealing information. I could imagine a similar wardrobe for deliberate discovery, perhaps Deliberate Discovery dungarees?

  9. Hi Dan! Great to see you looking into this. I’ve been talking about learning and re-learning for a few years. It’s mostly been an issue in the Lean Development community. Great to see it starting to show up in mainstream spaces as well!

  10. Dan,

    Thanks again for another timely post — it ties in exactly with some stuff we’re working through just now.

  11. […] Introducing Deliberate Discovery Interesting, stimulating but not sure I agree. Feels a little like "know what you don't know". (tags: agile learning analysis) […]

  12. I want to thank you for these very relevant posts.
    The issues of estimation in software projects come up extremely often, yet I find they are generally little understood.
    Although ‘Agile’ tries to be very ‘different’ in many ways from ‘traditional’ ways, one of the things it tries to stay strong on is this whole issue of being able to provide ‘estimates’.
    This is a tricky issue, because there is a fundamental difficulty – estimation is still in the hands of the developer. The ‘Scrum Master’, no matter how good, will be unable to estimate properly unless they have an actual ‘hands-on dev’ role, which is rarely the case. Without understanding the technology very well (and technology is ‘fractal’, meaning that even understanding the ‘big picture’ can’t be good enough), it is essentially impossible to provide good estimates. And yet how does one justify the role of a Scrum Master who cannot even offer what ‘higher management’ often craves – estimates, which provide (at least a sense of) control over the process?

  13. Hey, I just wanted to let you know, I really like the writing on your site. But I am using Firefox on a machine running version 9.10 of Ubuntu and the layout isn’t quite right. Not an important issue – I can still basically read the posts and search for info – but I just wanted to inform you about that. The navigation bar is kind of hard to use with the config I’m running. Keep up the great work!

    1. Thanks for letting me know Virgilio. I’ve been trying out different themes, and I’ve just been testing this one (INove) with Firefox and Google Chrome on Ubuntu 10.10 and Windows 7. Let me know if it still doesn’t look right on Ubuntu 9.10.

  14. Hi Dan,
    Nice. Reminds me of a few articles you don’t know, if I read you right. They are by Philip Armour and titled The Five Orders of Ignorance (Communications of the ACM, October 2000) and The Laws of Software Process (Communications of the ACM, January 2001). Yes, old, but on the same nice track as you are: the software development process as a learning or knowledge acquisition process. Another publication you might enjoy, given your perspective – again: if I read you right – is Kelly’s Behaviour is an Experiment (Kelly, G. A. (1970) Behaviour is an experiment. In: Perspectives in personal construct theory. Ed. Bannister, D. London: Academic Press, 255-269.) The central element there is that when it’s new, you just don’t know. If you knew, it would not be new, right?

    In your analysis you seem to focus on knowledge, as if unknowns belong to that domain. You name a few, but don’t differentiate much. If we delve deeper into this, I wouldn’t be surprised if the larger part of the influence was related to interpersonal issues. Cooperation in teams is less “natural” than most people think. I think many unknowns live in the interpersonal relations within the team, and between the team and its environment.

    Keep on the good work!

    Wil Leeuwis

  15. […] the end of Dan North’s post on Deliberate Discovery he makes the following suggestion: There is much more to say about deliberate discovery. Think […]

  16. […] trawling the comments of Dan North’s ‘Deliberate Discovery‘ post I came across an interesting article written by Phillip G. Armour titled ‘The […]

  17. Bill Campbell · ·

    Hi Dan,
    I really enjoyed this blog as well as the skills-matter podcast. I’ve actually watched it a few times and picked up a couple of extra ideas from the second watch. Now – if we could just come up with some magic calculator to give us estimates! We are still using Stories (when we have time), or just a relative sizing to previous work we’ve done, for business estimates. In order for the business to determine whether it will be worth pursuing any particular project, they need something. Often our senior management just gives us a man-hours/delivery date and tells us to go do f-in’ Agile or whatever and get it done. Not that their estimates are any worse than ours! Ha! What a crazy business this software development stuff is.
    regards and Keep On!

  18. Bill Campbell · ·

    Hi Dan,
    I was wondering: if you believe that going through the whole User Story/Planning Poker estimating routine is not the most productive use of time, what do you do instead to provide the business with estimates? I know that educating the business that estimates are not firm commitments cast in concrete is a totally separate issue, but nonetheless prevalent.

    1. Bill Campbell · ·

      I see that your previous article answers this question. (how we are doing planning all wrong).

  19. […] So what is the difference? Apart from using different language, Dan emphasises the self-learning aspect of BDD from a broader philosophical angle, which he calls “deliberate discovery”, “as opposed to accidental discovery”. He targets our ignorance of our technology, people and process – something we always overlook despite our best efforts. At the same time, he asked the community to narrow the gap between BDD and TDD (or ATDD done well), pleading: “I want to avoid ‘BDD is better than TDD because…’, or worse ‘BDD differs from TDD (as originally conceived) because…’. TDD is amazing, and as originally conceived it addresses the very problems I have been trying to solve with BDD… It is not the only way to produce good design, and neither is BDD. BDD is about understanding what the customer needs and letting that evolving understanding drive software development… always trying to reach a deeper understanding. But I would bet that if you asked Kent Beck what TDD is, his answer would be the same.” […]

  20. […] the way, I wasn’t trying to consciously parrot Dan North and his Deliberate Discovery thing, but it pretty much came out that […]

  21. […] wrote a blog post introducing Deliberate Discovery some time ago. This philosophy is at the heart of BDD. There are […]

  22. […] Introducing Deliberate Discovery « We are doing planning all wrong (tags: agile planning productivity toread development) March 10th, 2011 | Category: Deliciousness […]

  23. Hi, Dan!

    Good job. See also for a similar exposition of the same ideas – Deliberate discovery helps steer not only software projects, but also startups and other things.

    cheers – Alistair

    1. Hi Alistair, thanks for popping by!

      Actually I’ve been referencing you all over the place, not least with your Risk Reduction patterns. In particular, Walking Skeleton is just gold, and I’ve been adapting it in my Patterns of Effective Delivery work as Dancing Skeleton (which I’ll tell you about over a beer some time – think Walking Skeleton with a REPL so you can make it do stuff.)

      Cheers, Dan

      1. Alistair Cockburn · ·

        “Patterns of Effective Delivery”! What a great name – I’m so jealous! I can come up with advice, but naming is waay out of my league, and remarkably important. You’ve got a great handle here: Deliberate Discovery & Patterns of Effective Delivery. I can’t wait to see the collection, it’ll be great. Alistair.

    2. Thanks Alistair for including your article. It is helpful to know that I have been doing the right thing all along. I was directed to the “Introducing Deliberate Discovery” article by someone, but find it a bit vague. Many stories go into each section, and I am having a hard time correlating them with the article’s main points. Nevertheless, it is an interesting read.

  24. Right now it looks like Expression Engine is the preferred blogging platform available (from what I’ve read). Is that what you’re using on your blog?

    1. I’m using WordPress, hosted at I used to host my own instance but they have better infrastructure and spam management :)

  25. […] Second, Deliberate Discovery (discussed in Dan North’s article above and in more detail here) […]


  27. Just updated to show Trim The Tail in table form for explicit risk management. Looking for volunteers to try it. This continues to fit with Dan’s Deliberate Discovery notion; just nudging the ball forward :). Alistair

    1. Hi Alistair. That is really lovely! I might have to steal those diagrams…

      1. Hi, Dan! Simply quote them with attribution in the usual way. I publish these things to be used. (I’ve been showing these charts since early 2009, e.g. at the Agile 2009 keynote.) I’m a bit surprised you haven’t run into them. However that may be, let’s both build on them and run forward. Miss beer arguments/discussions with you. Alistair.

  28. […] Dan North (the inventor of BDD) proposed “Deliberate Discovery” last year – see his blog and his QCon London talk. It stresses that, for requirements and design alike, we should avoid too much up-front design, and instead continually, deliberately and with discipline discover, abstract and evolve during day-to-day feature development; only then will we build a good system. Iteration runs through every aspect of product development, and is deliberate […]

  29. […] scientific rigor. Dan North and Elizabeth Keogh have talked a lot recently about what they call deliberate discovery. In their reasoning, all indications point towards our own ignorance being the single biggest […]

  30. […] 70% of construction taking place in already built-up areas with of course a much higher level of unpredictability. Also, at least 50% of construction work is not new buildings but repairs, remodelling and the […]

  31. […] course, we don’t know what we don’t know, so feedback is still important. We’ve got a good chance of discovering what someone else […]

  32. […] of the article Introducing Deliberate Discovery by Dan […]

  33. […] which is called “confirmation bias”. Dan North describes it like this in Introducing Deliberate Discovery: Learning comes from experiencing the unexpected. Exactly […]

  34. […] Deliberate Discovery Posted on October 13, 2011 by am Dan North’s article “Introducing Deliberate Discovery” ruined my life: from the moment I read it, I haven’t been able to stop re-thinking […]

  35. […] about Deliberate Discovery Alberto used this […]

  36. […] Introducing Deliberate Discovery by Dan North I came to the conclusion that placing the estimation effort in the inception phase […]

  37. […] waterfall even if we say we are doing Agile/Scrum/Kanban. So true, so deep. Dan has a solution with Deliberate Discovery, not an easy solution by any means and a tough sell to customers and colleagues. A brilliant […]

  38. […] and spoke about. This is about embracing uncertainty – which seems to sit nice with the whole BDD – Deliberate Discovery body of knowledge. Here's a few things I jotted […]

  39. […] a potential technique to deal with uncertainty, he suggested Deliberate Discovery: if we know that some unexpected bad things will happen, we can optimise the delivery process for […]

  40. […] By using concrete scenarios and using in plain text we ensure that everybody can collaborate or at least understand and critique them. That is the main point and gain from BDD – communication and a way to quicker find out what we don’t know about the feature in question yet. […]

  41. […] story planning game is the exact moment you know the less about your stories. Dan North, with his Deliberate Discovery, is very […]

  42. […] Dan North wrote an excellent post on Deliberate Discovery, and I’ve been using it to manage risk on my projects for a while now. It’s one of the most important tools in my toolbox, along with Real Options to which it’s strongly related, so I want to cover how I use it here. […]

  43. […] about uncovering the parts you don’t understand; the parts that are hard, and the gaps. Dan’s post introducing “Deliberate Discovery” takes this idea even further, but it started here: replacing the word “test” with the […]

  44. […] North wrote about this very phenomenon in his post called “Introducing Deliberate Discovery” a couple of years ago. In it, he recommended systematically going after unknowns in order to […]

  45. […] story planning game is the exact moment you know the less about your stories. Dan North, with his Deliberate Discovery, is very […]

  46. […] place alongside some of the Cynefin conversations that have been happening in the UK, as well as Dan North’s call for Deliberate (as opposed to accidental) Discovery. One excellent quote: “They’re only […]

  47. […] If you’re into Theory of Constraints, you know that in a production line environment, you tailor your system to the constraining machine, putting it first where possible. But software development is more like product development; it’s creative knowledge work, rather than doing the same thing over and over again. Ignorance is the constraint. […]

  48. […] made aware of an article by Dan North which describes this situation in much more detail: Introducing Deliberate Discovery. Dan North describes here how ignorance prevents the actual […]

  49. […] agree so much that I decided to adapt my future Daily Scrums to this. In Introducing Deliberate Discovery Dan North quoted this […]

  50. […] Deliberate discovery: maximising the knowledge available for decision-making. Deliberate Discovery was proposed by Dan North. He points out that at project inception the team lacks knowledge of the business domain, the technology being used, the legacy code, the tools and so on – it is in its most ignorant state about the project, including ignorance of its own ignorance. Ignorance is the biggest constraint on a project’s progress and quality, and passively making decisions from existing knowledge is not enough. During development the team should use planned activities to deliberately explore and discover, so as to eliminate, as quickly and as completely as possible, the ignorance blocking the project’s progress. This process of deliberate discovery accelerates knowledge creation in software development and supports project decisions with knowledge and skills. In agile software development, risk-driven planning, spike stories, early feedback, exploratory testing and evolutionary design are all deliberate discovery processes, and they strongly support a project’s success in execution, technology and business. […]
