The brittleness of tests or specs is a recurring topic in BDD (or acceptance test-driven development, specification-by-example, or whatever you choose to call the thing where you write acceptance criteria, automate them and then make the application match). This is a tricky area, and there are probably as many styles of defining and grouping acceptance criteria as there are teams automating them.
The aspect I want to focus on in this article is domain language, because there’s a failure mode I encounter surprisingly often, which seems to have a common root cause.
Here’s an example:
Story: User logs in

Scenario: User with valid credentials

Given an unauthenticated user
When the user tries to navigate to the welcome page
Then they are redirected to the login page
When the user enters a valid name in the Name field
And the user enters the corresponding password in the Password field
And the user presses the Login button
Then they are directed to the welcome page
It seems innocuous enough. It’s a scenario about logging in, a common feature on a web app, but let’s take a look at the vocabulary in use here. We can see language from a number of different domains:
- Words like unauthenticated, valid, credentials are from the domain of security, more specifically user authentication.
- Words like name and password are from the subset of that domain concerning password-based authentication.
- Then we have words like enters, field and button. These are from the domain of UI widgets.
- Finally we have login page and welcome page. These are from the domain of web assets.
Imagine we’ve delivered this feature and the scenario works as described. Now what happens if we go from password-based authentication to OpenID, or a Central Authentication Service (CAS) model? This would obviously break the scenario. The requirement for a password becomes redundant so we would naturally expect to have to rethink the scenario. Alternatively what if we change the UI so the name is now selected via a drop-down list or a radio button? The scenario breaks again, because there is no longer a Name field to enter a value. And again, what if we change the site navigation strategy and decide that a successful login should take you to the Dashboard page, bypassing the Welcome page? Once again we would need to reword the scenario.
If you are thinking this is starting to look a bit brittle, then you are right. We’ve made the mistake of combining too many domains into a single scenario – an indicator that we are overspecifying something – which makes the scenario vulnerable to change along a number of axes. We’ve created accidental complexity because we lost sight of the intended audience of the scenario.
Unpacking the domain
Let’s think about this for a moment. The intent of the scenario is that a user with valid credentials can gain access. That entire sentence lies in the domain of user authentication, so we could ask someone from that domain whether the premise of the scenario is correct. (Are valid credentials enough? Should we also require a physical token? Can credentials expire?) In general, any domain will map to a single subject matter expert, who we can call the stakeholder or representative for that domain. In fact it’s a circular definition: a domain is simply the “subject” in the phrase “subject matter expert,” so for any given domain we should be able to identify a single stakeholder who represents that domain.
As an aside, this raises some interesting questions. What if you are writing scenarios in a domain that no-one seems to care about? (You can tell by watching their eyes glaze when you talk about it.) A lot of what we traditionally call non-functional requirements can fall into this category. For instance, most non-technical people aren’t interested in networking terms like latency, throughput or packet loss, but they might perk up when you start talking about sluggish response times or requests going missing. You can use the glaze test as a heuristic to know if you are talking to the wrong person – or using the wrong language. Similarly, if you can’t answer the question “who cares about this requirement?” with an actual name, you either have a redundant requirement or a missing stakeholder.
But back to our login story. Every domain we include in a scenario potentially pulls in an additional stakeholder, whose requirements or priorities might change at any time. So to avoid a scenario becoming brittle we want to involve the minimum number of stakeholders we can get away with. The simplest useful scenario should involve exactly two domains – and therefore only two stakeholders – namely the domain of the scenario title (the problem domain, or the what) and the domain of the steps (the solution domain, or the how). We can’t have fewer than two domains. If we exclude the problem domain we aren’t explaining the value, or the intent, of the scenario. If we exclude the solution domain the scenario can’t describe the behaviour we want from the application. But any additional domains are likely to provide unnecessary constraints or noise, and make the test brittle. (When I talk about the solution domain, I mean the “outermost” solution domain – the solution’s interface if you like – as opposed to the domains of any of its implementation details.)
The what of our example scenario is logging in with valid credentials. The how is using a name/password pair to validate the user. The details of the how with its button clicks and whatnot aren’t adding any value to describing the capability we want. So maybe we could convey the same intent by rewording the scenario like this:
Scenario: User with valid credentials

Given an unauthenticated user
When the user tries to view a restricted asset
Then they are redirected to a login page
When the user submits valid credentials
Then they are redirected back to the restricted content
Now we have exactly two domains, namely the “what” of user authentication (words like unauthenticated, user, credentials) and the “how” of web-based security (words like restricted asset, submits, redirected, content). If either of these domains changes then we would expect the scenario to change too. If we decide valid credentials are no longer enough we’ll probably need to add some more steps. Or if we decide to turn the app into a thick client then there will no longer be a login “page” so we might need to change that step to be a “modal dialog” or “screen” or something else thick-clienty.
Notice how we don’t explicitly say what it means to “submit valid credentials.” We’ve pushed this down into the implementation of the step. If we were to change the authentication model, say from name/password to OpenID, then the step’s implementation would break, which is what we want (the implementation of the how has changed), so we would change the implementation of the step “the user submits valid credentials” to provide an OpenID URL rather than a name/password pair, say.
But the wording and sequence of the scenario itself wouldn’t change because the intent is still the same and the behaviour is still the same – it’s the implementation of the behaviour – corresponding to the step’s implementation – that has changed, and that shouldn’t affect the meaning of the scenario.
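The separation can be sketched in code. Here is a minimal illustration – the names and data structures are hypothetical, not a real BDD framework – in which the scenario wording is fixed data, and the authentication mechanism lives only inside the step implementation:

```python
# The scenario text (the "what") is fixed; it never mentions passwords.
SCENARIO = [
    "Given an unauthenticated user",
    "When the user submits valid credentials",
    "Then they are redirected back to the restricted content",
]

def submit_credentials_password(app):
    """Original step implementation: a name/password pair."""
    app["auth"] = ("alice", "s3cret")

def submit_credentials_openid(app):
    """New step implementation: an OpenID URL instead."""
    app["auth"] = "https://openid.example.com/alice"

def run_scenario(step_impl):
    """'Run' the scenario, delegating the 'submits valid credentials'
    step to whichever implementation is current."""
    app = {}
    step_impl(app)
    return app

# Swapping the step implementation changes the how...
before = run_scenario(submit_credentials_password)
after = run_scenario(submit_credentials_openid)
# ...but SCENARIO, the wording the stakeholders read, is untouched.
```

Switching authentication models means editing one function body; the scenario list, which is what both audiences read, stays word-for-word the same.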
Chunking – or the myth of “declarative”
Neuro-linguistic programming (NLP) describes a technique called chunking, which is useful for solving problems or creating options. For any statement, you can chunk up by asking “Why..?” or “What for..?” questions and chunk down by asking “How..?” questions. The further you chunk up, the broader your perspective becomes; the further you chunk down, the more detailed it becomes. The power of chunking comes when you start to chunk sideways, by asking “How else..?” questions.
Once you realise you can ask “What for?” or “How?” at any layer of abstraction, concepts like “declarative” or “imperative” suddenly become relative. Any layer is the what of the layer below, and the how of the layer above. SQL is often described as a declarative language: you describe what you want to select but you don’t tell the database how to find it. However, the statement
select employee_id, salary from employees where salary > 100000

could equally be considered imperative, a command, if the “declared” requirement is “find all employees paying higher rate tax” (tacit knowledge: higher rate tax kicks in at 100k), which itself is an imperative implementation detail if the original request is “establish our income tax liabilities by rate type.”
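The query above can be run end-to-end with Python’s built-in sqlite3 module – the employees table and the salary figures here are invented purely for illustration:

```python
import sqlite3

# In-memory database with a made-up employees table.
conn = sqlite3.connect(":memory:")
conn.execute("create table employees (employee_id integer, salary integer)")
conn.executemany(
    "insert into employees values (?, ?)",
    [(1, 95000), (2, 120000), (3, 101000)],
)

# The "what" one level up: find all employees paying higher rate tax.
# The "how" at this level: tacit knowledge says the higher rate kicks
# in at 100k, so the declarative-looking SQL encodes that threshold.
higher_rate = conn.execute(
    "select employee_id, salary from employees where salary > 100000"
).fetchall()
# Two of the three invented employees are over the threshold.
```

The 100k literal is exactly the tacit knowledge the text describes: nothing in the query says “higher rate tax,” so the requirement one chunk up has silently become an implementation detail.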
Be deliberate in your use of domain language
So tying this all up, when writing your scenarios keep in mind that you are writing them for two audiences: the person the feature is for and the person implementing it. Check the wording to see if you can spot anything that belongs to neither the problem domain nor the solution domain. If you find you are using language from outside those two domains, you might be over-specifying the implementation or specifying unnecessarily broad requirements that mix concerns.
If you really care about how the behaviour is implemented, you should probably be specifying that elsewhere in a more fine-grained story – in other words chunking down to provide more detail – that won’t be interesting to the audience of this one. If not, you might want to push the detail down into the implementation of the steps.