Jim and his colleagues (hope you don’t mind me using you as an example) write documentation as the reference point for the code. Then one group of people writes the tests and another group implements the functionality, both working from the same set of documents. They then run the functionality against the tests to see if they’ve done it correctly (or if perhaps the documentation is wrong).
(If the documentation is wrong, then the documentation, tests and code all need rewriting. Do you have to wait for the documentation to be produced before the two teams of testers and coders can get on with the next bit?)
With Test-Driven Design (TDD) you write the tests first, thinking all the time about the behaviour you want to achieve, then you write the functionality to pass the tests. So it doesn’t matter who writes the tests and who writes the functionality (and pairing helps you to avoid any bad design decisions that might otherwise get made).
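Here’s a minimal sketch of that rhythm in JUnit, borrowing the sheep from later in this post (the classes and method bodies are my invention, not anyone’s real project). First the failing test:

import junit.framework.TestCase;

public class FieldTest extends TestCase {
    public void testShouldAllowSheepToGrazeInTheField() {
        Field field = new Field();
        Sheep sheep = new Sheep();

        field.admit(sheep);

        assertTrue(sheep.isGrazing());
    }
}

Then just enough functionality to make it pass (each class in its own file):

public class Sheep {
    private boolean grazing;

    void startGrazing() {
        grazing = true;
    }

    public boolean isGrazing() {
        return grazing;
    }
}

public class Field {
    public void admit(Sheep sheep) {
        sheep.startGrazing();  // sheep start grazing as soon as they're let in
    }
}

The test ran red before Field and Sheep existed; now it runs green, and it stays in the suite for ever as a statement of what the code is supposed to do.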
The tests form the reference point. Because the full test suite runs every time anyone checks in a change (about once an hour), you know very quickly if something is broken. The person who breaks the build can then look at the ‘broken’ code and tests and decide whether they’ve actually caused a bug, or whether the code should behave differently from the way the tests define it (usually this happens when a customer decides to change the requirements, or when a bug is found because no one thought that a user would ever do something that obviously silly, even though they always do). The story cards provided by the customer also form a reference point, but if you don’t have a story card in your hand which directly contradicts the code’s behaviour, you know that the stories from previous iterations are still valid and still working.
When a requirement changes, or a bug is found, we write tests which fail, to prove that the software no longer does what we want it to. Then we change the code so that the tests pass again. There’s an excellent form of pair TDD called “Ping-pong programming”, in which coder A writes a failing test which coder B must make pass, then coder B writes a failing test for coder A to implement, and so on. The two coders sit together to do this. It’s a lot of fun, and it results in code that does as little as it possibly can to get the job done (i.e. minimalist, clean, and very legible if me* and people like me** have beaten them over the head often enough).
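One round of that, sketched (coder A is adding to the FieldTest class above, and the method name is borrowed from below):

// Coder A writes a failing test:
public void testShouldPreventSheepFromWanderingOutsideField() {
    Field field = new Field();
    Sheep sheep = new Sheep();
    field.admit(sheep);

    sheep.wander();

    assertTrue(field.contains(sheep));
}

// Coder B writes just enough of Sheep.wander() and Field.contains() to
// make it pass, then writes the next failing test for coder A, and so on.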
Documentation makes it harder to change code, and is one of the reasons for the exponential cost-of-change curve seen on top-down projects. If your code is defined by its tests, then you can do whatever you like to it – refactor, rename, remove – and know that you haven’t broken any existing functionality. Five years down the line, the tests will tell you what the code ought to do, as well as giving you confidence that it still does it.
Oh, yeah – I think I’ve written a single one-line comment this week, and no documentation. You can tell what my code does because my tests say things like “testShouldAllowSheepToGrazeInTheField” and “testShouldPreventSheepFromWanderingOutsideField”. There are parsers around which will go through tests like this and produce documentation for you, e.g.:
– should allow sheep to graze in the field
– should prevent sheep from wandering outside field
Class ElectricFence*** extends Fence:
– should dissuade sheep from breaking this fence
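All such a parser does is strip the “test” prefix off each method name and space out the camel case (TestDox is one real example). A toy version of the transformation, mine rather than theirs:

public class TestDoxSketch {
    static String toSentence(String testName) {
        return testName
                .replaceFirst("^test", "")               // drop the JUnit prefix
                .replaceAll("([a-z])([A-Z])", "$1 $2")   // space out the camel case
                .toLowerCase();
    }

    public static void main(String[] args) {
        // Prints: should allow sheep to graze in the field
        System.out.println(toSentence("testShouldAllowSheepToGrazeInTheField"));
    }
}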
* Should be “I and people like me”. Agile and English have this in common: it takes real effort to get it right, even the strictest proponents aren’t perfect, and when you do get it right, it can feel very strange.
** By people like me I mean people who have been beaten over the head regarding code legibility by people like me, repeatedly, until they got it. Like me.
*** My ElectricFenceTest class extends the FenceTest class, so ElectricFence still maintains the behaviour of a Fence. Nothing to teach about TDD there; I just love geeky things like that.
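For the terminally curious, that trick looks something like this (trivialised classes, mine rather than the real project’s; each class in its own file):

public class Fence {
    public boolean keepsSheepIn() { return true; }  // trivially true for the sketch
}

public class ElectricFence extends Fence {
    public boolean dissuadesSheep() { return true; }  // likewise
}

import junit.framework.TestCase;

// Every Fence has to pass these tests.
public class FenceTest extends TestCase {
    protected Fence createFence() {
        return new Fence();
    }

    public void testShouldKeepSheepInTheField() {
        assertTrue(createFence().keepsSheepIn());
    }
}

// ElectricFenceTest inherits testShouldKeepSheepInTheField, so an
// ElectricFence still has to behave like a Fence as well as passing
// its own tests.
public class ElectricFenceTest extends FenceTest {
    protected Fence createFence() {
        return new ElectricFence();
    }

    public void testShouldDissuadeSheepFromBreakingThisFence() {
        assertTrue(new ElectricFence().dissuadesSheep());
    }
}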