Watch the magician. Watch how he drops a coin into his hand, closes his hand, shows you the closed hand, opens it with a flourish, and the coin is gone! He smiles. You look to his other hand. He turns it over and opens it with the same flourish. Not there either! Then he takes your hand, closes it into a fist, opens it and there is the coin! Now watch again. This time watch the other hand. As he turns the upper hand over and makes a fist you see the coin slip into the lower hand in a much smaller movement. He opens the empty hand and moves the lower hand to the top. Ignore the obvious movement. Notice instead how he guides the coin between his middle fingers. Ignore how he opens the other hand and instead see him discreetly drop the coin into the first hand again. Finally feel how he slides the coin into your palm as he closes your fingers. Magic? Perhaps not. But a classic illusion.
Magic works by misdirection. The magician exploits your tendency to look at the obvious while the real action is taking place elsewhere. When the obvious is made compelling enough it’s hard to even imagine looking elsewhere. Economists have a name for the impact of only looking in one place: they call it Opportunity Cost. Whatever you are doing right now comes at the cost of everything else you might be doing instead, and there are always other things you could be doing. But you aren’t considering the other things because you’re busy concentrating on the obvious.
The hidden cost of software development
How does this apply to software development practices? A new practice usually starts out as well-intentioned advice: “We tried this thing [stand-ups, pair programming, TDD, burn down charts, iterations, build automation] and it worked out well for us. Maybe you should give it a try too!”
All too soon there’s an agenda attached: “We tried this thing and we think we can make money showing people how to do it. You should use it because it works! Here’s a white paper to prove it.”
By the way, if you read through that list of practices and thought “but those are all good things to do,” I’m talking to you. Where doesn’t the technique work? What else could you be doing instead? What are the trade-offs in choosing this over one of the alternatives?
Over the last couple of years I’ve been working with teams that were more productive than anything I had seen before. Their methods were a mix of “traditional” agile techniques and crazy, counter-intuitive practices that gave me culture shock. I had to unlearn a slew of received wisdom before I was able to start thinking the way they did.
Try this two-part exercise: Think of a practice or technique you use when you develop software, maybe your favourite practice. Got it? Ok, part 1: why do you do it? What benefits does it give you? You can probably think of a few. Write them down. Now, part 2: where wouldn’t you use it, and what are some alternatives? Write down some pros and cons of each one. I’ll wait.
Chances are you found that second part much harder. If you mostly came up with negative or ironic reasons for the alternatives, you are just listing more reasons to use the original practice. You can’t see past it to the alternatives. You’ve been tricked!
TDD under the spotlight
As an example I’m going to pick on TDD since it’s among the most dogmatically advocated practices I’ve encountered, but you can apply this approach to anything. We’re going to look at TDD — really look at it — and see if we can discover where it shines and where it isn’t so useful.
If you are a TDDer, take a moment to think about where you wouldn’t use it, and what you might do instead. Recently I’ve seen a couple of people asking this as a rhetorical or even ironic challenge, as though you’d be mad to ever consider not using TDD.
TDD advocates say things like “TDD lets you make steady, confident change.” It often does. “TDD allows for emergent design.” Maybe. “Automated tests act as a regression suite and stop you reintroducing bugs.” They often do. “Tests act as living documentation.” They can. “Test-driven software is cleaner and easier to change than non-test driven software.” Actually I’ll take issue with this one. I’ve seen shocking pure-TDD codebases, and clean and habitable codebases with no automated tests at all.
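To make the claims concrete, here is roughly what the TDD rhythm looks like in miniature: write a failing test, write just enough code to pass it, then refactor. This is a hypothetical sketch; the function `parse_amount` and its behaviour are invented for illustration.

```python
# A minimal TDD-style example (invented): the tests below would be
# written first and fail, then parse_amount written to make them pass.

def parse_amount(text):
    """Parse a monetary string like '1,234.50' into a float."""
    return float(text.replace(",", ""))

def test_parses_plain_number():
    assert parse_amount("42") == 42.0

def test_ignores_thousands_separators():
    assert parse_amount("1,234.50") == 1234.5

if __name__ == "__main__":
    test_parses_plain_number()
    test_ignores_thousands_separators()
    print("all tests pass")
```

Once they pass, the tests stay behind as the regression suite and documentation the sound bites refer to, which is exactly where the trade-offs below come in.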
Let’s take a look at the opportunity cost — the trade-offs — in each of these assertions. Each one could be an article in itself. I just want to give you a feeling for finding the trade-offs inherent in sound bites like these.
The opportunity cost of steady, confident change
What’s not to like about steady, confident change? Well, sometimes you can describe the problem but can’t see an obvious answer. Financial trading applications are often like this: you might try several approaches and see how they perform, like a series of experiments. You want each experiment to be inexpensive so you can try several. TDDing each option will work, sure, but it will be more expensive than just sketching something out to see if it feels right.
What is the opportunity cost of all that extra time spent TDDing the sketches? You can sketch half a dozen ideas in the time it would take to TDD any one of them. And it doesn’t end there. The result of one attempt can change your understanding of the problem as much as it offers a potential solution, sending you in a new, unexpected direction. Some TDD advocates will say this is a series of spikes, and you don’t need to TDD those. However, I’m talking about putting software fully into production and keeping the successful experiments as production software, so this doesn’t really hold.
TDD locks in your assumption about the desired end goal. It assumes you know where you are heading, or at least where you are starting from. If you don’t know what the solution even looks like this could be an undesirable strategy. Perhaps you should defer investing in a solution until you know more about the problem.
The opportunity cost of emergent design
Sometimes the right design isn’t staring you in the face. It might take a shift in perspective, a reappraisal of the whole premise, to see simplicity through apparent complexity. TDD is about incremental change and improvement. It is great for finding local maxima but the best solution might require a radical rethink. The opportunity cost in this case is that we get trapped in this local maximum and miss a bigger win. In Lean terms this is the difference between kaizen, continuous improvement, and kaikaku, abrupt transformation. Neither can succeed without the other, but we have tended to focus only on kaizen as a form of optimisation.
TDD is the epitome of kaizen programming. To shine, it needs an environment where you can step back, go for a mental walk around the block, come back and maybe change everything. You need an overarching vision, a “big picture” design or architecture. TDD won’t give you that. However, a robust implementation of complex behaviour will benefit from the incremental approach TDD provides. Notice I’m not suggesting you shouldn’t use TDD, but qualifying where and how it is effective.
Working in this experimental style, we hadn’t over-invested in the software by surrounding it with comprehensive automated tests; otherwise each discarded attempt would have been an expensive exercise. But what of the software that survived? We found it was possible to produce TDD levels of quality after the fact. We know what well-factored software looks like, and by this stage we knew we were prepared to invest in this software: it had proved its value. So we started introducing TDD-style tests for the more complex or critical parts of the code, which led us to refactor it to make it more testable, which created natural seams and subsystems in the code, leading to bigger refactorings, and so on. I’ve been describing this technique as Spike and Stabilize: get something, anything, into production to solicit rapid feedback, and invest in whatever survives.
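The “stabilize” step often begins with tests that pin down what the surviving code actually does today, so you can refactor behind that safety net. A minimal sketch of the idea, with an invented stand-in for surviving spike code:

```python
# Sketch of a "pinning" test added after the fact. legacy_price_band
# stands in for code that survived a spike; the test records its
# current observed behaviour rather than a specification written
# up front, so later refactoring can't silently change it.

def legacy_price_band(price):
    # Survived from a spike: bands prices for a hypothetical trading screen.
    if price < 10:
        return "penny"
    if price < 100:
        return "retail"
    return "institutional"

def test_pins_current_banding_behaviour():
    assert legacy_price_band(5) == "penny"
    assert legacy_price_band(10) == "retail"
    assert legacy_price_band(250) == "institutional"
```

With the behaviour pinned, you can extract seams and subsystems with the same confidence a test-first suite would have given you, but only for the code that earned the investment.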
The opportunity cost of automated tests
Automated tests provide assurance that code is doing what it should. This suggests two questions: Are there other ways to gain that assurance? And is the assurance even valuable?
Automated tests are good at exercising specific pieces of code. Would you spot bugs in that code without the tests? You can’t help noticing a missing or unconnected submit button. If the app fails halfway through retrieving data what should it do? Once you’ve coded it to do that is it ever likely to stop doing it? Will an automated test give you any more assurance? What could you do instead?
The value of an automated test is a function of several things: the criticality of the code; the impact of a Bad Thing happening; the cost of correcting that Bad Thing, which may be reputational as well as operational; the likelihood of you not spotting it through regular usage in development; and your familiarity with the domain. Code at the edges of a system is often harder to test than internal code: UI widgets, external integration points and third-party services can all be expensive to automate. Does the cost justify the value? Again, I’m not saying not to do this, but to question the trade-offs and decide where it makes sense. I’ve seen teams burn weeks of development effort creating beautiful automated test suites that provided almost no additional assurance or feedback over simply using the application. The reputational cost there is high: they lose credibility with their stakeholders, who would rather see features being delivered. Again, you need to strike a balance, understanding where automation is valuable and where it is merely habit.
The opportunity cost of tests as living documentation
There are many kinds of documentation. Valuable documentation educates you. It tells you things you wouldn’t otherwise know because they aren’t obvious. You want to know what makes this system different from all the other similar systems you’ve seen. (You may also question why you are documenting these quirks, rather than eliminating them from the system and reducing the level of surprise for a newcomer, but that’s another story.) Some automated tests read more like historical documents, giving an insight into times past. Perhaps at one point there was a silly bug where refreshing the current status fired off an email to the back office. They spotted it straight away, of course, so the developers wrote a test called “should not send email to the back office when refreshing current status”. What? Of course it shouldn’t. But over time a team accumulates many of these “of course not” tests, increasing the noise among the signal, so it becomes harder to find the really useful living documentation. Living is not a synonym for useful. Curating living documentation — automated tests — is as important an activity as curating any other kind of documentation. All too often it gets overlooked.
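The contrast is easy to see in code. Both tests below pass, but only one tells a newcomer something they couldn’t have guessed. Everything here is invented for illustration, including the settlement rule.

```python
# Stub standing in for the system under test (invented for illustration).
def refresh_current_status():
    """Refreshes the status display; returns any emails queued as a side effect."""
    return []  # refreshing is read-only, so no emails

# Historical noise: pins a long-fixed bug. "Of course" it passes,
# and it teaches a newcomer nothing about the domain.
def test_should_not_send_email_when_refreshing_current_status():
    assert refresh_current_status() == []

# Living documentation: encodes a rule a newcomer would not guess.
def settlement_days(trade_hour):
    """Trades after the 16:00 cutoff settle a day later (invented rule)."""
    return 2 if trade_hour >= 16 else 1

def test_trades_after_cutoff_settle_one_day_later():
    assert settlement_days(10) == 1
    assert settlement_days(17) == 2
```

Curation means periodically deleting or consolidating tests like the first so that tests like the second stay easy to find.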
While I’ve focused on TDD in this article, there is value in identifying opportunity cost and trade-offs across all your practices and activities. I see TDD as a valuable and important development technique, but there are contexts in which it shines and others in which it is a hindrance. So take nothing at face value, and instead look for the trade-offs in every decision you make, because those trade-offs are there whether or not you see them. And if you can learn to spot them you can do magic.
History: This article was first published in _The Developer_ magazine issue No. 2/2012 and is available as a PDF download. [This article has been translated into Russian by Denis Oleynik, and into Spanish by Victoria Jaume Truyols.]