Deliberate Testing interview
Josiah Renaudin interviewed me in May 2015 ahead of the STAREAST conference in Orlando. The original interview is online at AgileConnection.
Summary: In this interview, technology and organizational consultant Dan North discusses deliberate testing in an agile world. He talks about how testing was perceived before agile became such a big part of the industry, and whether or not we’ve lulled ourselves into a false sense of testing security.
Josiah Renaudin: Today I’m joined by Dan North, who’s a keynote speaker at our upcoming STAREAST conference held in Orlando. First, could you tell us a bit about your experience in the industry?
Dan North: I’ve been working in IT as a developer, coach, consultant, and various other roles for about twenty-five years, in a varied mix of organizations and industries. In terms of agile experience, I first came across Extreme Programming (XP) around 2000 and joined agile pioneers ThoughtWorks in 2002 as their first UK technical hire. I spent eight years there, helping to grow the London office to around 250 people, and along the way I developed behavior-driven development, which I describe as a second-generation agile method, inspired by the work of Kent Beck, Ward Cunningham, Martin Fowler, and other Agile Manifesto signatories.
Since leaving ThoughtWorks at the end of 2009, I’ve been exploring other software delivery methods, as an employee of an electronic trading firm and then as an independent, which has led to my current “Software, Faster” body of work.
JR: Before agile became a mainstream methodology, how was testing treated or perceived within a standard organization?
DN: Well, “agile” is really a blanket term for a whole family of methodologies. The industry seems to have adopted agile as a synonym for Scrum, but that’s a historical accident. In any case, traditional plan-driven software delivery methods tend to view testing as a separate stage near the end of development, and informally testers were often seen as second-rate programmers. Testing was viewed as something you did to learn your technical chops so that one day you would graduate to programming.
JR: Was it difficult to maintain project cohesion within an integrated development team with testing often being outsourced?
DN: I think it’s difficult to maintain cohesion with anything being outsourced, unless the outsourcing partner is genuinely a partner and is treated as a first-class player. Usually we outsource things we think are commodity activities, as a cost-saving strategy. Outsourcing something as critical as testing has never made sense to me.
JR: Why do you think we’ve lulled ourselves into a false sense of security with agile testing?
DN: Most of the teams I work with who would describe themselves as agile tend to have two types of testing: automated feature and unit testing, and manual exploratory testing. When you look at the rich and varied landscape of software testing, it’s almost embarrassing how many types of testing we aren’t even aware of, never mind whether we are choosing to do them.
JR: Do you think we automate too much, or too little in our current testing climate?
DN: Yes! I believe we automate both too much and too little, or rather, we tend to automate indiscriminately, which leads to both of these. This is a result of having an arbitrary goal of “automation,” driven either by a test coverage metric or just the received wisdom that “Automation Is Good.” Automation is just a technique, and like any other technique, it can be used well or poorly, and can provide benefit or hindrance.
JR: Can you talk about some of the classes of tests that aren’t being considered today?
DN: To give you a frame of reference, a technical leader I know was putting together a talk about testing and built a list of all the types of testing he could find, asking numerous testers and researching various testing resources. His final list was well over 100 distinct types of testing. Most teams I know can only think of ten or twenty types of testing, even those with dedicated testers. It’s not surprising that we have so many blind spots.
JR: Are we purposely ignoring these tests, or are testing teams just not knowledgeable enough about these classes of tests to take notice?
DN: I think it’s down to perspective. Terms like “test-driven development” or “automated acceptance testing” imply that driving behavior using automated examples is a substitute for proper testing. That was one of the reasons I started using the term “behavior-driven development,” taking the testing vocabulary right out of it. An unexpected side-effect of that was how much the tester role became central to BDD. I believe the idea of testing teams itself is flawed. Testing is a set of capabilities that should be intrinsic to any software delivery team, rather than something handed off to a dedicated testing team.
JR: More than anything else, what message would you like to leave with your audience at STAREAST?
DN: Mostly to reaffirm that testing is a first-class discipline in itself and is a necessary and vital part of successful software delivery. And that the role of a tester in an agile team is about raising the team’s awareness and capabilities in the rich domain of testing.