Over the last few years, teaching people the Cynefin framework early on in engagements has really helped me have useful conversations with my clients about when different processes are appropriate.
There’s one phrase I use a lot, which is self-evident and gets around a lot of the arguments about how to do estimation and get predictability in projects:
“It’s new. If you’ve never done it before, you have no idea how it will turn out.”
This is pretty much common sense. When I teach Cynefin, I also help management and process leads look for the areas of projects or programmes which are newest. These are the areas which are most risky, where the largest number of discoveries will be made, and often where the highest value lies.
A Simple Way to Estimate Complexity
There’s one kind of work which is urgent and unplanned, and we don’t tend to worry about measuring or predicting, because it absolutely has to be done: urgent production bugs, or quick exploits of unexpected situations. This matches Cynefin’s chaotic domain; a place which is short-lived, and in which we must act to resolve the situation lest it resolve itself in a way which is unfavourable to us.
Aside from this domain, all other planned work can be looked at in terms of how much ignorance we have about it.
Something I often get teams to do is to estimate, on a scale of 1 to 5, their levels of ignorance, where 5 is “complete ignorance” and 1 is “everything is known”.
If a team want a more precise scale, I’ve found this roughly corresponds to the following:
1. Just about everyone in the world has done this.
2. Lots of people have done this, including someone on our team.
3. Someone in our company has done this, or we have access to expertise.
4. Someone in the world did this, but not in our organization (and probably at a competitor).
5. Nobody in the world has ever done this before.
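If it helps to make the scale concrete, here’s a minimal sketch in Python. The table is just the list above; the function names and the spike check (which anticipates the point below) are mine, purely for illustration:

```python
# A minimal sketch of the 1-to-5 ignorance scale as a lookup table.
# The names here are illustrative, not part of any formal method.
IGNORANCE_SCALE = {
    1: "Just about everyone in the world has done this.",
    2: "Lots of people have done this, including someone on our team.",
    3: "Someone in our company has done this, or we have access to expertise.",
    4: "Someone in the world did this, but not in our organization.",
    5: "Nobody in the world has ever done this before.",
}

def describe(estimate: int) -> str:
    """Return the rough meaning of a 1-to-5 ignorance estimate."""
    return IGNORANCE_SCALE[estimate]

def is_spike_candidate(estimate: int) -> bool:
    """Anything estimated at 5 is really an experiment, not a plannable task."""
    return estimate >= 5
```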
You can see that if a piece of work is estimated at “5”, it’s likely to be a spike, or an experiment of some kind, regardless of how predictable we might like it to be! This matches Cynefin’s complex domain, and sits at the far edge, close to chaos, since we don’t yet know if it’s even possible to do.
As we move down the numbers, we move through complicated work – understood by fewer people, whom we might consider experts – through to simple work that anyone can understand.
We can also measure this complexity across multiple axes: people, technology, and process. If we’ve never worked with someone before, or we’ve never made a stakeholder happy; if there’s a UI or architectural component that’s unusual; if there’s something we’d like to try doing that nobody has done; these are all areas in which the outcome might be unexpected, and in which – as with Cynefin’s complex domain – cause and effect will only be correlated in retrospect.
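As a rough sketch of what tracking those axes might look like, assuming a team records one 1-to-5 estimate per axis (the structure and field names here are my invention, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class ComplexityEstimate:
    """One 1-to-5 ignorance estimate per axis (illustrative names)."""
    people: int      # e.g. a stakeholder we've never made happy before
    technology: int  # e.g. an unusual UI or architectural component
    process: int     # e.g. a way of working nobody has tried before

    def riskiest_axis(self) -> str:
        """The axis we know least about is where the surprises will come from."""
        scores = {"people": self.people, "technology": self.technology,
                  "process": self.process}
        return max(scores, key=scores.get)

# A feature on a familiar stack, but for a stakeholder we've never worked with:
estimate = ComplexityEstimate(people=4, technology=2, process=2)
print(estimate.riskiest_axis())  # -> "people"
```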
Helping teams to be able to estimate the complexity of their work has had a number of interesting outcomes.
Devs are happier to provide time estimates alongside the complexity estimates. There’s nothing like being able to say, “It’ll take about 20 days, and if you hold us to that you’re an idiot,” with numbers!
Management can then use the estimates to make scoping decisions about releases (in the situations where an MVP might not yet be doable due to large transaction costs elsewhere in the business, like monolithic builds or slow test environment creation). We can also make sensible trade-offs, like whether to use an existing library, or build our own differentiating version now rather than later.
When the scope of a project is decided, be it an MVP or otherwise, it’s very easy to see where the risk in the project lies – and to tackle those aspects first! Even at a very high level, if a team are delivering a new capability for the business, we can still talk about how little we know about that capability, and in which aspects our ignorance is greatest.
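A hypothetical sketch of what “riskiest aspects first” might look like if those estimates sit alongside a backlog (the items and scores below are invented):

```python
# Hypothetical backlog items tagged with 1-to-5 ignorance estimates.
backlog = [
    ("Standard login form", 1),
    ("Integrate the new payments provider", 4),
    ("Report nobody has ever generated before", 5),
    ("Tweak to an existing report", 2),
]

# Tackle the highest-ignorance work first, so discoveries arrive early.
risk_first = sorted(backlog, key=lambda item: item[1], reverse=True)
for name, estimate in risk_first:
    print(f"{estimate}: {name}")
```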
When it comes to retrospectives, rather than treating actions as definitive process changes, teams can easily see whether an action will predictably lead to an improvement, or whether it should be treated as an experiment and communicated as such (and that last point can sometimes be important – the worst commitments are often the ones we don’t realise we’re making!).
And best of all, rather than pushing back on business uncertainty (“I’m sorry, we can’t accept this into our backlog without clear acceptance criteria”), the teams embrace the risk and potential for discovery instead (“What can we do to get some quick feedback on this?”). They can spike, learn from the spike, then take their learning into more stable production code later (Dan North calls this “Spike and Stabilize”). Risk gets addressed earlier in a project, rather than later. Fantastic!
And all you need to do, to enjoy this magic, is estimate which bits of your work you, and the rest of the world, know least about.
Making Better Estimates
One of the things I’ve noticed about development teams is that they often like to make everything complex, particularly the devs.
Testers are very happy to do the same thing over and over again, with minor tweaks. Their patience amazes and inspires me, even if they are utterly evil.
Devs, on the other hand, will automate anything they have to do repeatedly. This turns a complicated problem into a different, complex one.
The chances are that if we’re actually in a well-understood, complicated domain, rather than a complex one, someone will have solved the problem already and – because we hate having to do the same thing twice – they’ll have written up the solution, either in a blog post, or a StackOverflow or other StackExchange answer, or as an open-source library.
So before you go off reinventing the wheel, you can perform a few searches on the internet to see if anyone has some advice for you first. This can help you work out whether your work really is complex or not.
The Evil Hat
One of the things we need to do in the complex domain is ensure that any experiment is safe-to-fail.
A pretty easy way to do that is to put on the Evil Hat, and think about how you could plausibly cause the experiment to fail. You know – for fun. Think about how you could do it in the most destructive way possible. Then try to think of ways that the good people might stop a nasty, evil person like you from doing that.
Cognitive Edge have a great method called Ritual Dissent, which is very similar to the Fly-on-the-Wall pattern that Linda Rising taught me some time ago. This is similar to putting on the Evil Hat, or at least, inviting others to do so.
If you have any difficulty coming up with ways in which to cause an experiment to fail, try asking a tester. They’re really evil, and very, very good at breaking things.
Lastly, take a look at Real Options, a significant part of which is about turning decisions into experiments instead. (Another part of it is about getting information before decisions are made, so it plays nicely with both complicated and complex spaces, and even helps us move our problems between them.)
Since we don’t always know what we don’t know, and, in a genuinely complex space, things which worked last time might not work this time, it’s a pretty useful tool for when we’re not sure exactly how little we know, too.
The complexity estimates turn out to be all kinds of useful. I’ll be writing a couple more blog posts soon; one about capability-based release planning (which I’ve touched on here), and one about pair-programming, including how it relates to complexity.
Watch this space!