I’m sure this can’t be original – most of it has come about from talking to my colleagues, or is nicked from the way JBehave works and my attempts to use it – but I wanted to write it down so that I’ll remember next time. It’s a bit long and techy. Forgive me.
JBehave introduced me to the idea:
- Given a context
- When these events happen
- Expect this outcome to occur.
At the moment, our team writes the context by performing the steps which get us into that context – we run a scenario, which might be the first part of an existing story test. It might be the first part of tests for a lot of stories. This means that we’re duplicating bits of tests. It often means that the same code appears twice in different tests. It certainly takes time to run.
Here are some stories:
- As a sheep farmer and organic wool producer, I want to shear sheep and sell the raw wool so that I can make money.
- As a sheep farmer and organic jumper maker, I want to shear sheep, spin the wool, make jumpers and sell them so that I can make money.
- As a sheep farmer, I want to shear sheep and record the weight of wool sheared so that I can breed the best wool producers.
- As a sheep farmer, I want to shear sheep and record the date on which they were sheared so that I know when to shear them again.
All of them require the shearing of a sheep. I imagine a nice little screen in which you can add the details of your sheep to the database, then pick which of your sheep were sheared, what weight of wool was produced, etc. Filling in these screens for every story doesn’t give you any value in terms of code confidence the second time it’s done. Why not just do it once, then pretend that it’s been done for every other story?
So here’s my idea for cleaner, quicker acceptance testing.
We write the first part of an acceptance test. The bit of the test which gets us to this point of the story is a class of its own, and implements a contextual interface. So, for instance, we might test that when we shear a sheep, weigh the wool and tell the app that we’ve put it in the cupboard, we get records that there are 3kg of wool in the cupboard. We could also, if we wanted to, check that one of our sheep was shorn on the 13th August 2005. We would name this test
SheepShearingTest, and it might implement interfaces such as BlackieIsShorn, each of which would extend the role of Context. (In JBehave there’s a Context interface.) If we want to make these contexts reusable, we can always give them more generic names – SheepIsShorn – and either set parameters which configure them or respond to the configuration in our tests.
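A minimal sketch of how this might look in Java. The names and the tiny domain model here are my own illustration, not JBehave’s actual API – the point is just that a context is a small class of its own, and a reusable one takes its configuration through its constructor:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical domain model, invented for illustration only.
class FarmModel {
    final Map<String, Double> woolSheared = new HashMap<>();

    void recordShearing(String sheep, double kg) {
        woolSheared.put(sheep, kg);
    }
}

// The role of a context: it gets the model into the story's starting state.
interface Context {
    void setUp(FarmModel model);
}

// A reusable, parameterised context: some named sheep has been shorn.
class SheepIsShorn implements Context {
    private final String sheep;
    private final double woolKg;

    SheepIsShorn(String sheep, double woolKg) {
        this.sheep = sheep;
        this.woolKg = woolKg;
    }

    public void setUp(FarmModel model) {
        model.recordShearing(sheep, woolKg);
    }
}
```

A specific context like BlackieIsShorn could then be little more than `new SheepIsShorn("Blackie", 3.0)` given a name of its own.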
At this point in the acceptance test, we can check the state of the domain model – eg: check that we have 3kg of wool in our class representation of the domain. This isn’t part of the story test itself. It might even have a class of its own – RawWoolInCupboardVerifier. This verifier takes a domain model in its constructor, possibly through interfaces only, and has methods to allow you to check the bits you’re interested in. It might even be able to check what shades or species of wool are in the cupboard, what date they were put in there, etc. It’s reusable, so you can put it into lots of tests. It should also be pretty quick to run. Each test should only use the verifications it’s actually interested in.
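A sketch of such a verifier, assuming the cupboard is represented as a simple map of wool batches to weights (again, my invented names, not real API):

```java
import java.util.Map;

// Hypothetical verifier: takes the domain model (here just the cupboard
// contents) in its constructor and exposes focused, reusable checks.
class RawWoolInCupboardVerifier {
    private final Map<String, Double> cupboard; // batch name -> weight in kg

    RawWoolInCupboardVerifier(Map<String, Double> cupboard) {
        this.cupboard = cupboard;
    }

    // Total weight of all raw wool currently in the cupboard.
    double totalWeightKg() {
        return cupboard.values().stream()
                .mapToDouble(Double::doubleValue)
                .sum();
    }

    // Check whether a particular batch has been put away.
    boolean contains(String batch) {
        return cupboard.containsKey(batch);
    }
}
```

Each test would call only the checks it cares about, so unused verifications cost nothing.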
The second part of our acceptance test deals with the events which happen – eg: spin the wool. This, we just ‘do’. Whatever we use to do it implements interfaces – for instance, WoolSpinner. The implementing class is part of the production code. If we don’t have code to do this, we can write it as we complete the story, unit testing as appropriate.
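The event might look something like this – the interface belongs to the test’s vocabulary, while the implementation lives in production code (the yield figure here is an assumption made up for the sketch):

```java
// The event, expressed as an interface the production code implements.
interface WoolSpinner {
    // Spin raw wool into balls; returns the number of balls produced.
    int spin(double rawWoolKg);
}

// A stand-in production implementation, sketched for illustration:
// assumes roughly 0.3kg of raw wool per ball of spun wool.
class SimpleWoolSpinner implements WoolSpinner {
    public int spin(double rawWoolKg) {
        return (int) Math.round(rawWoolKg / 0.3);
    }
}
```

Until the real spinning code exists, a stub implementing WoolSpinner lets the rest of the test be written and run.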
Running the events on the context gives us an outcome – 10 balls of wool in the spun wool cupboard. We write another checker for this. It might also be in its own class, which implements an interface of its own – SpunWoolInCupboardVerifier. You get the idea. (In JBehave, we also have an Outcome interface.)
Now, when we construct our test, we can dependency-inject it with the contexts, the bits of code which actually do the events, and the outcome verifiers.
And they’re all reusable. They’re all clean. Best of all, they encourage us to think about the domain model; to write our application code cleanly, around that model; to understand the points at which events converge, and package the application classes appropriately.
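As a rough sketch of that wiring – here Runnable stands in for whatever context, event and verifier interfaces you actually define, and the class name is invented:

```java
// Hypothetical: a story test assembled purely from injected parts,
// so contexts, events and verifiers can each be swapped independently.
class StoryTest {
    private final Runnable context;   // gets us into the starting state
    private final Runnable event;     // the thing the story actually does
    private final Runnable verifier;  // checks the outcome

    StoryTest(Runnable context, Runnable event, Runnable verifier) {
        this.context = context;
        this.event = event;
        this.verifier = verifier;
    }

    void run() {
        context.run();   // Given a context
        event.run();     // When these events happen
        verifier.run();  // Expect this outcome to occur
    }
}
```

Because the test only knows the interfaces, the same context class can feed many stories, and the same verifier can check many outcomes.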
Things I really like about this approach:
- At any point, you can replace the context of an acceptance test with anything else that matches the context / outcome Verifier – a stub of a domain model, a domain backed by a test database, or a slightly different method of getting into the same state.
- The outcome of one event can be used as the context for another.
- You can even do the same thing with the events.
- The stubs or mocks for a context can be written before the real code to get to that point has been developed – so any part of a system can be produced to an interface without waiting for other parts. The Verifiers help check that the real system does actually produce the expected result. Top-down or bottom-up – doesn’t matter.
- If we have a bug which requires a bit of deviation from the simple story, we can use the same context and outcome classes as the simple story tests do, and just change the events. And as long as we can create the same context without running the application, it doesn’t matter whether we use the real application, or just build up an appropriate domain model as if we really had used the application.
- The Verifiers themselves can be implementations of interfaces – for example, one might just check the domain model; another might check that the database has changed. Which checker you use would depend on whether you’re running real code, or just stubbing the model out, or mocking a system… etc.
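To make that last point concrete (names invented for illustration): the verifier sits behind an interface, with one implementation checking the in-memory domain model; a database-backed implementation would implement the same interface and slot into the same tests.

```java
// The verification role, independent of where the data actually lives.
interface WoolStockVerifier {
    boolean hasAtLeastKg(double kg);
}

// Implementation that checks the in-memory domain model.
// A DatabaseWoolStockVerifier could implement the same interface
// and query real tables instead.
class InMemoryWoolStockVerifier implements WoolStockVerifier {
    private final double stockKg;

    InMemoryWoolStockVerifier(double stockKg) {
        this.stockKg = stockKg;
    }

    public boolean hasAtLeastKg(double kg) {
        return stockKg >= kg;
    }
}
```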
JBehave is not yet at version 1.0 – it needs a bit of work. The unit behaviour classes are good, but the story runner isn’t done yet. It promises to be good, and to support the framework above. I can’t wait till it’s finished, so I’ll be putting some more effort into it next week.