Striking the balance between analysis and feedback
For decades, we thought that it was possible to get requirements right, or at least mostly right, before developers started coding. Analysts absolved us of the responsibility to collaborate by producing all our requirements up-front, exhaustively checking and re-checking to make sure that nothing the business needed had been missed. Now, with the rise of Agile methods and the Lean Start-up movement, we have a greater awareness of the need for feedback.
Yet teams are still struggling with a very human quality: the innate dislike that we all have for uncertainty. Methods like Acceptance Test Driven Development (ATDD) and Behaviour-Driven Development (BDD) help us to come up with clear acceptance criteria, replacing the need for large amounts of documentation, while still creating the illusion that we can get our requirements correct before we start to code. Yet we still find our requirements changing, either from internal stakeholder feedback from our prototypes, or from the feedback we get from users and external stakeholders when our system goes into production.
As a result, we find ourselves trying to analyse everything up-front, then getting feedback on everything afterwards, until our whole process is bogged down with… well, with process. It’s a miracle that we’re ever able to produce anything in the middle!
Wouldn’t it be fantastic if we could know, in advance, where analysis up-front would be useful, where it’s completely unnecessary, and where feedback is required?
Some of the Lean enthusiasts have been looking at complexity thinking and the models around it, and applying it to software development. The Cynefin framework, in particular, has given us a new way of understanding our problems and the requirements that come from them – and like the best thinking tools, it all seems so obvious once you know about it! I’m going to explain the framework, and then show how it applies to our software development process and how we can make our practices more effective.
Cynefin and Complexity Thinking
The Cynefin framework was developed by Dave Snowden and some of his colleagues to describe different types of problems. It introduces four domains – obvious, complicated, complex and chaotic – and a fifth domain in the centre, disorder, for when it’s unclear which type of problem we’re dealing with (and we all have a preferred domain whose practices we fall back into). There’s also a little fold beneath the “obvious” to “chaotic” boundary, to show how easily obvious solutions can cause complacency and tip into chaos.
The domains are applicable to many different types of problem. Two of them – complex and complicated – are particularly applicable to software, so let’s look at how we can tell which domain we’re in.
Obvious
These are the kind of problems that anyone can solve, even children. Dave Snowden’s company, Cognitive Edge, uses the example of a bicycle chain falling off. It’s easy to work out how to get it back on again.
In an obvious environment, you sense, categorize and respond. You say, “Oh, it’s one of those problems.” No analysis is required. There are best practices for solving the problem, and doing it a different way will be less effective.
If you’ve ever programmed a turtle – forward two units, turn right 90°, forward two units, and so on until it goes in a square – then you’ve solved an obvious software problem.
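A sketch of that obvious problem, modelling the turtle with plain coordinates rather than any particular turtle library (the function name and the coordinate scheme are my own invention):

```python
def walk_square(side):
    """Return the points a turtle visits while drawing a square:
    forward `side` units, turn right 90 degrees, four times over."""
    x, y = 0, 0
    heading = (0, 1)             # start facing "north"
    points = [(x, y)]
    for _ in range(4):
        x, y = x + heading[0] * side, y + heading[1] * side
        points.append((x, y))
        # turning right 90 degrees: (dx, dy) -> (dy, -dx)
        heading = (heading[1], -heading[0])
    return points

print(walk_square(2))  # the turtle ends up back where it started
```

No analysis needed: sense the problem, categorise it as "draw a square", and respond with the well-known best practice.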
Complicated
Complicated problems have predictable solutions, but require expertise to understand. A watch is complicated. Cars are complicated. They have well-understood purposes and outcomes, but a person without the relevant expertise will not be able to solve the problem.
In a complicated environment, you sense, analyse and respond. You say, “Let me have a look at this problem and I’ll tell you how to solve it, because I’m an expert at this.” Problems in this domain usually have more than one way of being solved, so there are multiple good practices.
A lot of our software is complicated. For instance, a CRUD form is well understood by expert development teams. These problems are the least interesting for me (my preferred domain is complex). In fact, this kind of work is so uninteresting for most developers that they’ll create tools to do the job for them (like Rails), turning repetitive complicated problems into smaller complex ones.
Complex
My favourite way to understand complexity is that acting in the space causes the space to change, and cause and effect can only be understood in retrospect.
In a complex environment, you probe, sense and respond. You do something that can fail, safely, and it tells you things about the environment which you respond to, changing the environment. This is the land of high-feedback, risk and innovation.
In this domain, because the outcomes we look for keep changing, we can’t merely apply our expert practices and expect success. Instead, we have to change the practices we use based on what we learn. In this domain, we have emergent practices.
Chaotic
Chaos is your house catching fire. Chaos is accident and emergency. In chaos, you act, sense and respond, with act being the most important thing to do. You get out of the house. You stem the bleeding. You do something to get the situation under better control.
Ron Jeffries asked, “Is it wise, in chaos, to take actions that cannot fail safely?”
The truth is that if our house is burning down and we don’t get clear, the situation will resolve itself – but it might resolve itself in our burning to death along with the house! There’s nothing safe-to-fail about that!
In chaos, novel practices emerge, which can help us make things safer in future. A lot of the practices have come about because of previous accidents – think of all the ways in which aircraft have changed, like oval windows instead of square ones, and inflatable slides that can be used as life rafts.
In software, chaos is that bug you released to production that brought your site down on the day of release, and you need to drop everything and fix it now.
Some code I wrote once brought down the Guardian travel site because the data we were using in staging was subtly different to the data in production. We all stopped what we were working on to fix the problem. We found the bug quickly, and I’m sure I felt more panic than anyone else, but I definitely don’t want to program in this space again!
If you’ve done it before, requirements are known
David Anderson, author of “Kanban”, used to work in the mobile phone industry. He talks about the difference between differentiators and commodities, or “table stakes”: the things you need just to play the game. As an example, think of what it would be like if you entered the mobile phone market. Your phone would need to be able to make and receive calls and texts, store contacts, etc.
Now, if I asked you to define what “make a call” looks like, it wouldn’t be difficult. You can easily give me a scenario in which you call someone. The requirements are very well understood, and if they aren’t, there’s probably somewhere you can go to get a better understanding of the domain.
If it’s this easy to derive scenarios from a brief outline of the requirements, then a brief outline is enough! We don’t need to go into detail about how we’ll know our solution works. We might want to have a couple of conversations, just to check that there’s no exceptional aspect to the problem. I usually find that asking, “Does it do anything differently to X?” is enough.
Sometimes we talk about bicycle thinking versus frog thinking. If you have enough expertise you can take a bicycle apart and put it back together again, and it will still work. With this kind of software we can divide it into small pieces and put those pieces together. User stories with clear acceptance criteria are one way to do this. This kind of work is usually easy enough to estimate, too.
If someone else has done it before, requirements are knowable
As well as differentiators and commodities, David Anderson talks about spoilers – things we do that someone else has already done. For instance, Kyocera produced the first camera-phone. When they did it, it was a differentiator, then Nokia spoiled it. Nowadays it’s so common that it’s pretty much a commodity.
As business domains and problems within them become better understood, we move from the complex space to the complicated one. There is a predictable solution; it just takes a bit of effort to find it.
In this space, having conversations about the problem can help to provide an understanding of the domain as well as uncover differentiating problems. Chris Matts, the analyst who helped in the early days of BDD, has a warning. “You don’t go to your expert trader and ask, ‘What’s an option?’” he says. “Find a different way to get that information, and save the conversations with the experts for the really interesting stuff.”
Once the domain is better understood, automating scenarios – using a tool like Cucumber to tie them to steps in code which perform the scenarios for you – can help to provide documentation for anyone else coming along afterwards. There’s a balance to be struck here between automating before code is written and automating afterwards. If you find yourself changing the automated scenarios too often, wait until you understand the problem better. If you find you can’t be bothered to add scenarios afterwards because you understand the problem so well, do more of it up-front. As your understanding of the domain grows, you will hopefully find yourself writing scenarios before you write the code a lot of the time.
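To make the idea concrete, here is a toy sketch of what such a tool does under the hood: match each line of a scenario against registered step definitions and run the matching code. This is not Cucumber’s actual API – the step decorator, the patterns and the account example are all invented for illustration:

```python
import re

STEPS = []  # registered (pattern, function) pairs

def step(pattern):
    """Register a step definition against a regular expression."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"an account with a balance of (\d+)")
def given_account(world, balance):
    world["balance"] = int(balance)

@step(r"the customer withdraws (\d+)")
def when_withdraw(world, amount):
    world["balance"] -= int(amount)

@step(r"the balance should be (\d+)")
def then_balance(world, expected):
    assert world["balance"] == int(expected), world["balance"]

def run(scenario):
    """Run each scenario line through the first matching step."""
    world = {}
    for line in scenario.strip().splitlines():
        text = re.sub(r"^\s*(Given|When|Then|And)\s+", "", line)
        for pattern, fn in STEPS:
            match = pattern.search(text)
            if match:
                fn(world, *match.groups())
                break

run("""
    Given an account with a balance of 100
    When the customer withdraws 40
    Then the balance should be 60
""")
print("scenario passed")
```

The scenario text stays readable by the business, while the step definitions become the living documentation of how it is carried out.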
If it’s never been done before by anyone, requirements will change
Both BDD and TDD rely on a clear understanding of the outcome and how to reach it – they work when requirements are known, or knowable (you might have to look for a better expert).
Seeking clear outcomes works well in a complicated or obvious space – one in which cause and effect are strongly correlated and predictable – but we know that if it were possible to analyse everything up-front, Waterfall would work.
When you have conversations around the scenarios, sometimes the business won’t be certain about the outcomes they want. They might know about a higher-level outcome – gaining more customers, or more advertising revenue, for instance – but they might not know whether the feature they’re suggesting will actually work.
A lot of Agile enthusiasts recommend having clear acceptance criteria before a story is accepted into a backlog. However, Extreme Programming has a practice called a spike, otherwise known as “try something out and learn from it”. Prototyping on paper or in code is similar.
If you find yourself in a space with high uncertainty, rather than trying to eliminate the uncertainty with analysis, it’s better to try something out – safely, so that it’s all right to fail – and respond to what you find. The more collaborative the business and IT become, the more business stakeholders are prepared to try out risky things, and the more the company innovates!
Make the conversations count
When we use concrete examples of how the system might work in our conversations, it makes it easier to spot the things we’ve missed. There are two questions I ask when I’m talking through scenarios, both based on the BDD template for scenarios:
Given a context
When an event happens
Then an outcome should occur.
I call the first context questioning:
Is there any other context in which this event would produce a different outcome?
And the second outcome questioning:
Is there any other outcome that’s also important?
For instance, if we were designing a cash machine, the two questions would remind us that some customers have overdrafts, and we need to debit the account as well as giving the customer money.
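A minimal sketch of where those two answers end up in code, with invented names: the overdraft is the extra context, and debiting the account is the second outcome alongside handing over the cash:

```python
def withdraw(balance, amount, overdraft_limit=0):
    """Return (cash_dispensed, new_balance).

    Context questioning found the overdraft: some customers may draw
    below zero, up to their limit. Outcome questioning found the second
    outcome: we must debit the account as well as dispensing the money.
    """
    if amount > balance + overdraft_limit:
        return 0, balance            # refuse: would exceed the overdraft
    return amount, balance - amount  # both outcomes: cash AND a debit

print(withdraw(100, 40))                      # -> (40, 60)
print(withdraw(20, 50, overdraft_limit=100))  # overdraft context: allowed
print(withdraw(20, 50))                       # no overdraft: refused
```

Each answer to the two questions becomes another scenario – another context line or another outcome line – until the stakeholders run out of certain answers.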
The most interesting responses, however, are when the business stakeholders reply with, “Oh, I’m not sure! I hadn’t thought of that.” At that point, we can stop trying to do analysis – we’ve just found uncertainty; we’re in the complex domain, and we can focus on getting feedback rather than forcing the business to clarify the requirements.
If people look as if they’re bored of the conversation, it’s probable that we’re in obvious territory, or at the obvious end of complicated, and there may be better ways to get the requirements than having yet more boring conversations.
Analysis, whether through conversation or otherwise, works best when things are a bit complicated, and a bit complex. When you know how to spot the extremes on either side, you’ll know how to focus your conversations better and make them count.
An earlier version of this article was first published in Developer Magazine in December 2012.