This may seem odd coming from someone who spent months coding a BDD tool, but seriously… put it down.
BDD has taken off in the last few years, and lots of people have approached me for help. Many of the questions I get are along these lines:
- These don’t seem worded quite right. What do you think?
- How can we stop these scenarios being so brittle?
- How do we use our BDD tool to do <this kind of testing>?
- We’re using a BDD tool. How do we get our analysts to help us?
- How can we avoid all this duplication in our scenarios?
- What do we do if the business aren’t interested in BDD?
Don’t start with BDD, and definitely don’t start with the tools.
Start by having conversations, engaging both testers and business stakeholders or their proxies (analysts are good here). If you can’t have those conversations, stop. You can’t do BDD. You might be able to write some scenarios or some acceptance tests, but they’re likely to be brittle and it’s unlikely that anyone will be interested. Here’s why.
BDD is focused on discovery.
Dan wrote a blog post introducing Deliberate Discovery some time ago. This philosophy is at the heart of BDD. There are things about your domain that you don’t know, or you’ve misunderstood, or that nobody’s thought of yet. By talking through examples in groups, you increase the chances of uncovering those gaps and misunderstandings. BDD provides a language – “Given <a context>, when <an event happens>, then <an outcome should occur>” – which helps to prompt questions about whether the scenarios are accurate. It’s meant to help you have conversations, not replace those conversations entirely.
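To make that mapping concrete, here is a minimal sketch in plain Python, with entirely hypothetical names (Basket, checkout) and no BDD tool at all. The scenario's context, event and outcome line up with a test's arrange, act and assert:

```python
# Minimal sketch: a Given/When/Then scenario expressed directly as a
# plain Python test. All names (Basket, checkout) are hypothetical.

class Basket:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

def checkout(basket):
    # Stand-in for a real checkout: returns a simple receipt.
    return {"paid": True, "items": list(basket.items)}

def test_buying_a_smurf():
    # Given a shopper with an empty basket
    basket = Basket()
    # When the shopper buys a Smurf
    basket.add("Smurf")
    receipt = checkout(basket)
    # Then the purchase succeeds and the receipt lists the Smurf
    assert receipt["paid"]
    assert "Smurf" in receipt["items"]
```

If a discovery can't be phrased this way, it probably can't be checked this way either, which is the point the next section makes.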
Conversations help us discover things.
You don’t have to stick to the BDD format to have those conversations. The language is a guide, and serves well for automation. Chances are that if you can’t phrase your discoveries in that way, you won’t be able to test the understanding later (though you might be able to explore some more). This is because tests start from a particular context, perform some events and verify some outcomes.
There are other things than scenarios which you can discover through conversations, though. You can listen to the language which your business stakeholders use. You can ask questions about the words they use and gain an understanding of the domain, which may not be used right now but will help you design your code in a way which allows changes to be made more easily, later.
The conversations often help to engender trust between stakeholders and the development team. When someone indicates that he wants to understand me, and plays back that understanding to check its accuracy, I am reassured that he genuinely cares. I tend to be more patient in my explanations, more likely to come join him in his place, wherever it is, and more likely to check back later to see how he’s doing.
Business stakeholders are just the same, only busier, and harder to get hold of. But that’s why they call it business.
BDD isn’t the only way to do testing.
In fact, BDD isn’t even really about testing. It’s just a way of capturing those conversations which happens to provide some tests, and lifts some of the burden from the testers. If you want to run additional performance tests, exploratory tests, or even record some tests, it doesn’t have to come under the BDD banner. It’s perfectly OK to do BDD and test things as well.
BDD isn’t about the tools.
You don’t need BDD tools to do BDD. A few of the projects I’ve been on now have run up a small DSL which they found perfectly easy to maintain. The only reason to use English-language BDD tools is because it helps to engage people who can’t read code (and I’ve seen senior business stakeholders perfectly happy with code-based DSLs before).
Natural-language BDD tools introduce another level of abstraction. They make scenarios harder to maintain. They can introduce ambiguity. If they don’t have auto-completion, it’s hard to see which steps are already in play. It’s even harder to see which steps can be deleted, and don’t need to be maintained. Don’t use natural-language BDD tools unless they give you other significant benefits. You can capture scenarios from conversations just as easily on an index card.
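For comparison, here is the kind of small code-based DSL such a project might run up. It's a minimal sketch with hypothetical names, not any real tool's API, but it keeps steps as ordinary functions your IDE can find, auto-complete, and flag as unused:

```python
# A minimal code-based scenario DSL. Hypothetical; not a real tool.
class Scenario:
    def __init__(self, name):
        self.name = name
        self.steps = []

    def given(self, description, fn):
        self.steps.append(("Given", description, fn))
        return self

    def when(self, description, fn):
        self.steps.append(("When", description, fn))
        return self

    def then(self, description, fn):
        self.steps.append(("Then", description, fn))
        return self

    def run(self):
        context = {}
        for keyword, description, fn in self.steps:
            print(f"{keyword} {description}")
            fn(context)
        return context

# Steps are plain functions: findable, deletable, refactorable.
def an_empty_basket(ctx):
    ctx["basket"] = []

def buy_a_smurf(ctx):
    ctx["basket"].append("Smurf")

def basket_contains_smurf(ctx):
    assert "Smurf" in ctx["basket"]

result = (Scenario("Buying a Smurf")
          .given("a shopper with an empty basket", an_empty_basket)
          .when("the shopper buys a Smurf", buy_a_smurf)
          .then("the basket contains the Smurf", basket_contains_smurf)
          .run())
```

Because the descriptions are still plain English, running the scenario prints something a stakeholder can read, without the extra text-to-code mapping layer.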
Write fewer scenarios.
Most of your code will be covered by just a few scenarios along the lines of, “Normally everything works like this, and then in this context it behaves unexpectedly, like this.” Use those tricky scenarios to cover the normal behaviour wherever possible, and focus on making sure that the code is well-designed.
Another way to make scenarios maintainable is to make sure that the steps are phrased at a very high level.
Higher than that.
If your scenario starts with “When the user enters ‘Smurf’ into ‘Search’ text box…” then that’s far too low-level. However, even “When the user adds ‘Smurf’ to his basket, then goes to the checkout, then pays for the goods” is still too low-level.
Think about the capabilities of your business. What does it allow your users to do? What value do the stakeholders get from it? How does it actually make money?
You’re looking for something like, “When the user buys a Smurf.” Now you’ve got the money, what differentiates your business from everyone else’s? Do you advertise other Smurf-related products? Find everyone who’s bought a Smurf and show what else they bought? Offer discounts for Smurf holidays?
By thinking about the capabilities of the system and the goals of the business stakeholders, you’ll find ways to phrase scenarios which keep them maintainable, no matter how the GUI changes. The conversations you have will help you to identify those capabilities and goals.
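As an illustration of the difference, with entirely hypothetical names and a fake UI standing in for real GUI automation, the click-path detail can live inside a single capability-level step. When the screen changes, only the step's body changes, never the scenario:

```python
# Contrast of step granularity; all names here are hypothetical.

class FakeUI:
    """Stand-in for GUI automation; just records actions."""
    def __init__(self):
        self.actions = []

    def enter_text(self, field, text):
        self.actions.append(f"type '{text}' into {field}")

    def click(self, target):
        self.actions.append(f"click {target}")

class User:
    """Capability-level step: 'buys', not a sequence of clicks."""
    def __init__(self, ui):
        self.ui = ui

    def buys(self, product):
        # The GUI detail lives here, in one place, and can change
        # without touching any scenario that uses it.
        self.ui.enter_text("Search", product)
        self.ui.click("Add to basket")
        self.ui.click("Checkout")
        self.ui.click("Pay")

# The scenario itself reads at the capability level:
ui = FakeUI()
User(ui).buys("Smurf")
```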
The rest of scenario maintainability is mostly about removing duplication. Separate steps into particular concerns, like “admin”, “browsing” or “purchasing”. Put screens or pages into their own objects (thank you Simon Stewart and WebDriver). If you find your scenarios are getting unmaintainable, pay them the same kind of love you’d pay your code. They’ll probably be around longer.
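The page-object idea can be sketched like this. The names are hypothetical and the pages are stubs; in a real suite each page object would wrap WebDriver elements, but the shape is the same: one object per screen, so locators and widgets never leak into the steps:

```python
# Minimal page-object sketch with stubbed pages; hypothetical names.
# Each screen gets its own object; navigation returns the next page.

class ConfirmationPage:
    def __init__(self, session):
        self.session = session

    def order_was_placed(self):
        return self.session.get("paid", False)

class CheckoutPage:
    def __init__(self, session):
        self.session = session

    def pay(self):
        # In a real suite this would drive WebDriver element lookups.
        self.session["paid"] = True
        return ConfirmationPage(self.session)

class BasketPage:
    def __init__(self, session):
        self.session = session

    def add(self, product):
        self.session.setdefault("items", []).append(product)
        return self

    def go_to_checkout(self):
        return CheckoutPage(self.session)

# A "purchasing" step composed from page objects:
session = {}
confirmation = BasketPage(session).add("Smurf").go_to_checkout().pay()
```

If the checkout screen is redesigned, only CheckoutPage changes; every scenario that purchases something stays as it is.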
Assume you got it wrong.
Part of the Deliberate Discovery ethos is that there is always something we don’t know we don’t know – second-order ignorance. No matter how many scenarios we have, our understanding can still be wrong. The only way to discover this is to get feedback. Showcase what you’ve produced. Ask people to use the system and see how it feels. The earlier you can do this, the quicker you’ll discover that there’s no silver bullet. You will never get everything right – even if you analyse everything up-front, Waterfall style – and BDD is no exception.
Have enough conversations to know how to get started. Find out where the riskiest bits are, and where the system behaves unusually. Chat. Discuss. Discover. Work out how, and when, you’re going to get feedback on the work you’re about to do.
Then, and only then, reach for the tools.