McKinsey recently published an article claiming they can measure developer productivity. This has provoked something of a backlash from some prominent software people, but I have not seen anyone engage with the content of the article itself, so I thought it would be useful to do just that.
I am writing this as though the authors have approached me for a technical review of their article. You can think of it as an open letter.
What started as lighthearted iconoclasm, poking at the bear of SOLID, has developed into something more concrete and tangible. If I do not think the SOLID principles are useful these days, then what would I replace them with? Can any set of principles hold for all software? What do we even mean by principles?
Or how programmers and testers can work together for a happy and fulfilling life.
Why don’t we just automate all the testing? Is test coverage a useful metric? What does it mean to “shift testing left”? When and where should we be testing? How much is enough testing?
“If you had to offer some principles for modern software development, which would you choose?”
At a recent Extreme Tuesday Club (XTC) virtual meet-up, we were discussing whether the SOLID principles are outdated. A while ago I gave a tongue-in-cheek talk on the topic, so ahead of the meet-up one of the organisers asked what principles I would replace SOLID with since I disagreed with them. I have been thinking about this for some time and I proposed five of my own, which form the acronym CUPID.
My friend Gojko Adzic has been running a series of BDD quizzes illustrating different ways to approach some interesting BDD situations. I noticed on Twitter that Seb Rose, another BDDer (Cucumberer?), had gently taken issue with one of Gojko’s solutions, so I thought I’d take a look at them both. Before reading on I recommend reading Gojko’s solution and Seb’s response for context.
I recently found myself in yet another circular Twitter discussion of estimation, in which the One True Way to scope work under uncertainty ranged from abandoning estimation entirely to applying formal Cost Accounting methods, with nothing less sufficing. I’ve talked about this at length and I will happily excise any comments that get into #noestimates.
Organisations often introduce Best Practices as part of a change program or quality initiative. These can take a number of forms, from “cook books” and cheat sheets to full-blown consultant-led methodologies, complete with the requisite auditing and accreditation.
This article introduces the Dreyfus learning model to challenge the strategy of naively applying Best Practices, and shows how they can not only fail to help, but even have a severe negative impact on your top performers.
Most of my work these days is helping organisations figure out how to be more effective, in terms of how quickly they can identify and respond to the needs of their external and internal customers, and how well their response meets those needs. This tends to be easy enough in the small; the challenges appear as we try to scale these techniques to hundreds, thousands or tens of thousands of people.
Modern Scrum is a certification-laden minefield of detailed practices and roles. To legitimately describe oneself as a Scrum Master or Product Owner involves an expensive two-day certification class taught by someone who in turn took an eye-wateringly expensive Scrum Trainer class, from one of the competing factions of “Professional” or “Certified” (but ironically not both) schools of Scrum training. But it was not always so.
Someone asked me recently for advice about consulting into multiple teams, in particular about how to make the most effective use of a number of short sessions. He will be spending two hours with each team on each visit. This will be an ongoing relationship, with a series of visits made up of these two-hour coaching sessions.
Some software teams get stuck because their business users don’t realise they need to make time to take delivery of features they’ve requested. Over time their User Acceptance Testing backlog increases to the point where the team’s throughput virtually stalls.
Experienced delivery folks can have surprisingly good instincts for macro-level estimation, as long as we are careful to manage blind spots and cognitive biases. This can be an important tool in early project investment discussions, and can remove roadblocks where people are uncomfortable or unwilling to provide estimates.
Watch the magician. Watch how he drops a coin into his hand, closes his hand, shows you the closed hand, opens it with a flourish, and the coin is gone! He smiles. You look to his other hand. He turns it over and opens it with the same flourish. Not there either! Then he takes your hand, closes it into a fist, opens it and there is the coin!
The brittleness of tests or specs is a recurring topic in BDD (or acceptance test-driven development, specification-by-example, or whatever you choose to call the thing where you write acceptance criteria, automate them and then make the application match). This is a tricky area, and there are probably as many styles of defining and grouping acceptance criteria as there are teams automating them.
The aspect I want to focus on in this article is domain language, because there’s a failure mode I encounter surprisingly often, which seems to have a common root cause.
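To make the domain-language point concrete, here is a minimal, self-contained sketch of an acceptance criterion expressed in domain terms rather than incidental UI detail. It is my own illustration, not an example from either article: Python is an assumption on my part, and the Shop, Order and place_order names are hypothetical stand-ins, not a real API.

```python
from dataclasses import dataclass


@dataclass
class Order:
    sku: str
    quantity: int
    confirmed: bool = False


class Shop:
    """Tiny in-memory stand-in for the real application."""

    def __init__(self, stock: dict):
        self.stock = stock

    def place_order(self, sku: str, quantity: int) -> Order:
        # Only confirm the order if we can reserve the stock.
        if self.stock.get(sku, 0) < quantity:
            return Order(sku, quantity, confirmed=False)
        self.stock[sku] -= quantity
        return Order(sku, quantity, confirmed=True)


def test_placing_an_order_reserves_stock_and_confirms():
    # The criterion talks about orders and stock (the domain),
    # not buttons, banners or CSS selectors (the implementation).
    shop = Shop(stock={"WIDGET": 5})
    order = shop.place_order("WIDGET", quantity=2)
    assert order.confirmed
    assert shop.stock["WIDGET"] == 3


if __name__ == "__main__":
    test_placing_an_order_reserves_stock_and_confirms()
    print("ok")
```

Because the test is phrased in the language of the domain, it survives changes to screens and workflows that would break a criterion coupled to the user interface.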
Software Craftsmanship risks putting the software at the centre rather than the benefit the software is supposed to deliver, mostly because we are romantics with big egos. Programming is about automating work like crunching data, processing and presenting information, or controlling and automating machines.