What was your first job in the industry?
My first job was playing Star Wars for a living. True story! I had a student internship in 1988 with a games company called Domark. They published the 8-bit versions of movie and board game tie-ins like Star Wars and Trivial Pursuit on home computers like the ZX Spectrum or Commodore 64. If someone called and said they had found a glitch on level 8 of The Empire Strikes Back, I needed to know what they were talking about.
What inspired you to create BDD?
I was with ThoughtWorks in the early 2000s coaching development teams in agile methods and they would often struggle with TDD. I discovered that by changing the language around the principles and mechanics of TDD, specifically removing the word “test” and talking about “examples” and “behaviour”, that they seemed more comfortable with the idea. So it started out as a coaching tool for teaching programmers about iterative design.
Later on I was working with Chris Matts who was a business analyst at the time. We realised you could express user requirements in a similar way, using examples, so BDD grew to encompass analysis and design.
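The reframing North describes — dropping the word "test" in favour of behaviour names and concrete examples — can be sketched in plain Python. The `Account` class and its rules below are hypothetical, invented purely to illustrate the naming shift, not anything from JBehave or Cucumber:

```python
# A hypothetical Account class, used only to illustrate behaviour-first naming.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount


# Instead of "testDeposit", each check is named after the behaviour it
# describes, and is driven by a single concrete example.
def account_should_increase_balance_when_money_is_deposited():
    account = Account(balance=100)
    account.deposit(50)
    assert account.balance == 150


def account_should_reject_a_non_positive_deposit():
    account = Account(balance=100)
    try:
        account.deposit(0)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass  # the behaviour we wanted: bad deposits are rejected


if __name__ == "__main__":
    account_should_increase_balance_when_money_is_deposited()
    account_should_reject_a_non_positive_deposit()
```

The point is the vocabulary, not the tooling: reading the function names aloud gives you a specification of behaviour, which is the bridge to expressing requirements as examples.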
Who/what do you look to for inspiration?
I find inspiration in day-to-day interactions with people. I see unnecessary complexity in so much of what we do, so I spend my time trying to simplify it. The people closest to the work often have the best insights into how to do this.
A lot of inspiration is about seeing the familiar through a different lens, and seeing the opportunities that are “hiding in plain sight”. Very little inspiration is truly original; it is more about applying existing ideas in novel situations. I love finding things that make me think “Someone really thought about how people are going to use this.” I see it occasionally in physical devices but just as often in well-designed software or APIs. Conversely, it grates when I use something that isn’t intuitive.
What do you think are the biggest challenges that organisations face in software development?
It all starts with the structure of the organisation. Systems Theory says the structure of a system determines its behaviour. So an organisation with separate silos of Business and Technology, and further silos within those, like PMO, Development, Testing, Operations, Support, etc. is doomed to a life of hand-offs and blaming.
The biggest hurdles are around challenging existing organisational beliefs about how to structure work and the organisation around it. Large companies are still uncomfortable with cross-functional teams. Where you do see cross-functional teams it is usually only within the development “silo”. You rarely see product and user representatives, operations, security, compliance, or support functions as first-class citizens in a “cross-functional” team. It is usually just programmers and testers sitting together, which is a great first step but there is still a long way to go.
Why do you think teams have trouble delivering software faster?
Again most of the challenges I see are structural. Hand-offs are time-consuming and often politically-mandated, involving sign-off and transitioning of blame. So people tend to work in larger batches. Instead of delivering a single feature all the way through, the process starts with an expensive exercise to sign off a large chunk of money, then there is a cascade of documents from the Proposal and Business Requirements Document through to detailed functional specification, and so on. It seems agile methods are tolerated for smaller pieces of work, but anything substantial immediately falls back into the traditional gated processes.
Ironically this is a self-perpetuating system. Working in larger chunks causes the very problems that people then try to solve with even larger chunks of work! We have a tendency to change systems for the worse when we try to make them better.
What are the biggest testing challenges that teams face today?
Agile programming methods have led to faster delivery of smaller pieces of work. Traditional testing methods and strategies aren’t designed for this kind of delivery so traditional testers are struggling to make sense of this. It isn’t just how we test, it’s how we reason about testing and where testing fits into the process.
We used to save all the testing until the end, or at least until after a chunk of development work. We would have acceptance testing, performance testing, integration testing, regression testing and various other activities carefully planned and scheduled, taking weeks or months after the core development was done. Now we have more frequent delivery of smaller features, and what’s worse programmers are going in and changing existing code to add new features. Traditional testing methods are simply not suited to this.
How are teams solving these?
A common approach I see, and one advocated by many agile methods, is a “testing pyramid”, with lots of unit tests at the bottom, a smaller number of acceptance tests as the middle layer, and a small amount of manual testing to top off the pyramid. This is often presented as an alternative to the traditional model which would have this the other way around, with most of the effort spent on manual testing and only a small amount automated.
I think we have elevated test automation into a religion. We invest disproportionately not only in automation efforts, but in the kinds of testing we are doing. We are missing out on enormous potential benefits of testing because we are looking at it through too narrow a perspective.
How can teams who want to improve quality make the case for the initial and ongoing investment that is required for testing?
A while back I remember seeing testing described as “waste” in Lean terms, in that it was not a value-adding activity. The counter I saw to that was someone asking: Would you pay more for a car whose brakes had been tested than one whose brakes categorically hadn’t been? (And if you wouldn’t, would you pay money to have the brakes tested?) The answer to both is obviously yes, so testing the brakes adds value to the car.
For software, more than investing in testing it is important to invest in testability. Some architectures are inherently more testable. Some development practices lend themselves to testability. You can systemically make the process of software development more testable, and it pays dividends all the way. Don Reinertsen’s classic “The Principles of Product Development Flow” makes a solid economic case for testing, or more accurately for frequent, fine-grained feedback.
Is there any data to back up that testing improves quality, that we can use for this?
The place to look for this is safety-critical industries. Look at the amount of testing and quality assurance NASA employs, or people building flight systems, MRI scanners, pharmaceutical software or other safety-critical technology. They are all too aware of the need to invest in testing at every step in their development process, and have the data to justify the enormous amounts they spend on it.
Is there still room for manual testing, or should all testing be automated now?
Emphatically yes! I would argue that “manual testing” isn’t a single activity. There are many kinds of testing, and some lend themselves to manual work, others to automation, some to both. In each case there is a trade-off. Sometimes automation is the most effective approach, sometimes manual, sometimes a mix. The balance may also vary with time, so you should review things regularly.
You’re running two courses with AWA in June. Why did you create these courses?
Software Faster came about through my experiences after I left ThoughtWorks. I spent a couple of years working with some extraordinary teams that seemed to be breaking all the agile rules. They were shipping world-class software in a fraction of the time of other teams, consistently, and having fun doing it! I wondered if it were possible to articulate how they were working and I ended up describing it in a series of patterns. Software Faster is my forum for teaching some of those patterns.
I built Testing Faster because a couple of clients were asking for training in agile testing. They wanted something suitable both for a traditional test organisation and for cross-functional delivery teams. Most of the agile testing training I see pushes the same line of “automate as much as you can and supplement the rest with exploratory manual testing”. In most of the organisations I’ve worked in this is simply bad economics. The cost of automation is often disproportionate, not just in the effort invested but in living with the poor quality of the results. This isn’t a criticism of the people involved. It is an inevitable outcome of the way they are approaching the problem and the tools they are using.
What is the difference between the two courses?
They address two different audiences. Testing Faster is for people who want to do agile development well. Software Faster is for people who have reached the limits of agile methods and want to know what happens next.
What are your predictions for software development in the next 5 years?
I know enough about our industry to know that I don’t know anything! Who could have predicted the rise of Docker containers, and the unikernels following in their wake? Even the ubiquity of cloud computing wasn’t certain five years ago.
I know we still haven’t figured out how to use different device formats. We tried to make tablets work like a computer until we figured out how people want to use a tablet. The same thing is happening now with watches and other wearables. Someone will figure out how to interact with a much smaller screen and that will be another shift. Virtual reality (VR) is just another screen format. It will either take over the world or remain a niche gaming technology.
I would love to think businesses will finally see the obvious common sense of Theory of Constraints, Systems Thinking, Complexity Theory and Lean Operations. They are all really saying the same thing, which is that everything is connected, and understanding the relationships is more interesting than studying any of the pieces. Sadly I don’t think this will happen in the next five years because there are too many vested interests and egos involved, and people like to stick with what they know.
We are in 2016 in the middle of the next tech bubble. I don’t know if it will burst in the next five years, but it isn’t sustainable. Every tech startup’s business model is to gain subscribers and get acquired by Google or Facebook. FinTech and blockchain are interesting but again I don’t know what’s going to happen there. I would love to see more federated and transparent control of currency but I don’t see that happening in my lifetime, never mind the next five years!
What are your recommendations for organisations just getting into software development?
Get help! Hire someone who has made the mistakes you are about to make, and listen to them. But make sure you establish their credentials. There are a lot of people selling snake-oil out there.
Which conferences can we see you at in London/UK this year?
That’s a moving target! I try to keep my website up-to-date at https://dannorth.net.
Which books are you reading at the moment?
I try to always be reading something about my work, something about faith (I’m a Christian) and something for fun. At the moment I’m listening to Beyond the Goal by Eli Goldratt, reading Paradoxology by Krish Kandiah as my faith book, and rediscovering The Eight by Katherine Neville, which is a fantastic adventure novel I read when I was much younger!