Cynefin for Everyone!

Five years ago, around Christmas 2012, I wrote an article about Cynefin, the sensemaking framework. I focused it on software development, because that was the main industry I worked in, and particularly on using it to work out which of our requirements were complex, so that we could embrace uncertainty and risk, and avoid the disorder that so often results from our human desire for certainty and from the legacy of our more traditional ways of working.

Of course, that human desire for certainty is prevalent in all industries, not just software development. Cynefin’s origins are deeply rooted in sociology and anthropology, and while it definitely helps in developing software systems, it’s at its most enlightening when used as a lens to make sense of human ones.

In this article, I want to share some different stories from outside of the realm of software development, and talk about how cultural change happens; how our human tendency to spot patterns, cling to certainty, and want predictability, can get in the way of changes that might be helpful to us… and how seeing the world through a Cynefin lens can help with that.

A Quick Introduction to Cynefin

The Cynefin framework was developed by Dave Snowden. It’s a sensemaking framework; it helps us make sense of different types of problems, depending on how predictable or unpredictable they are.

It introduces four domains – obvious, complicated, complex and chaotic – and a fifth domain in the centre, disorder, for when it’s unclear which type of problem we’re dealing with (and we all have a preferred domain whose practices we fall back into). There’s also a little fold beneath the “obvious” to “chaotic” boundary, to show how easily obvious solutions can cause complacency and tip into chaos.

The Cynefin Framework by Dave Snowden. CC BY-SA 3.0

The domains are not categories. Things may be closer to one border than another and may move across domains; problems which were complex and hard to solve yesterday may be better understood in context today, and the dynamics are just as important as the domains themselves. It’s easiest to start with the domains, though, and to start with the easiest domain of all.

Obvious

Obvious problems are ones that a child could solve or, if they do require expertise, ones whose solution is still obvious.

When the beer runs out in the pub, my landlady knows how to change the barrel. I know that that’s what you have to do; I don’t have the expertise to actually do it, but the solution is still obvious to me. It’s a “duh!” problem; a “one of those”. A lot of problems we encounter fall into this domain, but we don’t think about them very much because they’re boring and easily solved – tying our shoelaces, putting a pen to paper, or wheeling a grocery cart. In the obvious domain we can categorize the problem. “Oh… it’s one of those. Again.”

In the obvious domain, there’s normally one good way to solve the problem – a “best practice”. Note that when people talk about “industry best practices” they’re not usually talking about obvious problems! The “best practices” they talk about usually work for a number of contexts, but not for every context.

This domain used to be called “Simple”, but Dave Snowden renamed it a few years ago. I like the new name better, since some people find complicated problems simple to solve too… even when they’re not obvious.

Complicated

As things become more and more complicated, the solution requires more and more expertise. A watchmaker knows how to fix your watch. A car mechanic knows how to fix your car. The outcome is still predictable, but now it takes an expert to know how to get there.

A lot of mechanical things end up in the complicated domain. You can take them apart and put them back together again; they’re made up of the sum of their parts. There’s usually more than one way to solve the problem: good practices, any of which will work, provided you know how to apply them. If you have the right expertise, you can analyse the problem, closing the gap between where you are, and where you want to be.

Both the Obvious and Complicated domains are called ordered, and they’re predictable. Cause and effect are correlated clearly; either obviously, or with expertise. Ordered problems have repeatable solutions; the same process applied to the same problem will always work.

Complex

Complex problems are ones in which the solution, and the practices which lead to it, emerge. While it’s possible to think of examples of what a solution might look like, attempting to create that solution usually creates unexpected side-effects; other problems or unintended consequences that might need to be solved. Cause and effect are only correlated in retrospect; you can see how you got there, but you couldn’t possibly have predicted it. This is the domain of “wicked” problems that tend to resist being easily solved with expertise.

In the ordered domains – obvious and complicated – we can plan a solution, carry it out, and expect it to work, at least mostly (humans are complex, and make mistakes).

In the complex domain, we have to probe the problem: trying something out in a way that’s safe-to-fail. Ideally we’ll have several probes running in parallel.

It’s not quite the same as an experiment, because experiments are intended to be repeatable, with a testable hypothesis. In complex problems, though, the side-effects mean that you might not get the impact you were hoping for, even with the effect you intended… or you might find that the effect wasn’t really what you wanted after all.

In the 1920s and 1930s, researchers carried out some experiments at the Hawthorne Works, an electrical factory outside Chicago. They wanted to find out whether workers were more productive with higher, or lower, levels of light. They found something puzzling: the workers in both groups seemed to be more productive! In retrospect, it turned out to be because the workers were being observed, and were working harder as a result – a phenomenon now known as the “Hawthorne effect”. Even observation can cause changes in the complex domain!

Human systems, particularly, are Complex Adaptive Systems or CASs; systems in which the agents of the system (people) can change the system itself. When we want to create changes in human systems safely, we have to be careful not to make those changes too big or sweeping, because we don’t really know what will happen in any given context, even if we have some good ideas.

The Toyota production system, for instance, is famous for being superbly efficient. Their just-in-time manufacturing processes include something called the “andon cord”. If anything goes wrong on the line, the worker who sees it is meant to pull the cord, allowing a supervisor to come in and help. But when this system was taken to GM’s factory in Fremont, California, it didn’t work at all… because the employees were too scared to pull the cord.

As the management guru, Peter Drucker, said, “Culture eats strategy for breakfast.”

Chaos

Chaos is a transient domain; it resolves itself quickly, and not necessarily in your favour. It’s dominated by urgency and the need to act, and act fast. It’s the place of urgent production problems, of your house burning down, of people bleeding to death.

It’s also the place of urgent opportunity, and with care it can be used to generate innovation and help with decision making (more on that later)… but it’s normally regarded as a really bad place to be.

Chaotic scenarios include the September 11 attacks; the financial crisis of 2007/8; Nokia’s troubles of 2011, in which their CEO Stephen Elop described the company as standing on a “burning platform”; and the Equifax data breach.

Chaos is any situation in which value or opportunity is quickly being destroyed. It isn’t safe for things to fail. (If it is safe for things to fail, the problem is complex, not chaotic.)

In chaos, decisive action is the way forwards. There’s no time to get advice or input from anyone else. This leaves people open to finding their own solutions, and as a result, it can generate novel practices… so as we’ll see, we can actually use it, provided we manage the exit from chaos appropriately.

Disorder

Disorder is the domain in which we don’t know which of the other domains dominates, so we behave according to our preferred domain.

For some people, experimenting is creative and fun. I don’t always read the instructions on things… and that’s got me into trouble more than once, when it wasn’t safe-to-fail!

More typical in organizations, though, is the tendency to treat complex situations as if they could be analysed, making plans and commitments based on that flawed analysis. When disorder persists, chaos is the inevitable result.

I teach people this scale to help them avoid disorder, asking, “Who’s done this before?”

5. Nobody’s done this before.

4. Someone’s done this, but not in this context.

3. Someone in our organization has done this (or we have access to expertise some other way).

2. Someone in our team’s done this.

1. We all know how to do this.

I was once working with a government department, in which a manager was creating a new Target Operating Model to help with changes to policy and law. I taught Cynefin to his team, then taught him the scale. He squinted at it. “The 5s and 4s… that’s where the risk is!” he exclaimed.

“Yes,” I said. He was right. The 5s and 4s are both complex. A 5 is something that nobody’s ever done; it might not work at all. A 4 is something we’ve seen other people do… but we still don’t know what they discovered, or whether it will be viable for us to replicate it. The complex domain is the domain of unknown unknowns. We don’t know what the risk is… but we can be pretty sure that when we do something new, there will be more to discover than our experience suggests.

He looked at it again. “But that’s where the value is. That’s why we’re doing what we’re doing.”

“Yes!” I exclaimed. He was right again. Whenever we make a change, it’s because we want a capability we didn’t have before, or we want to be able to do something in a new context. Maybe we want our business to be more responsive to the market, or we want a new software feature, or we’re opening a new office, or changing a contract. All those things are new, and will inevitably come with discoveries.

“So we should do those early,” he said. He had worked it out. We need time to react to the discoveries we make, and if we can focus on the valuable, risky, new things, and get feedback on them quickly, we’ll go faster.

“Yes,” I said.

He looked at the wall behind him. “But the entire industry does it the other way around…”

“Yes,” I said again. “They do.” Because we love predictability, we tend to try and analyse and plan things, and when we can’t, we leave them till later, hoping that the right answer will emerge. By then, of course, we’ve made a lot of investments and commitments… and if the discoveries we make show us that those investments and commitments are wrong, it’s hard to undo them and turn them around.

And the manager had put up on the wall all the aspects of the new TOM that could be analysed and which were well-understood, leaving gaps saying “TBD (To Be Decided)” for all the other bits. His team had focused on the 1s, 2s and 3s, and invited external stakeholders in to give them feedback. And everyone had nodded and been happy with what they had seen… but they’d learned very little as a result. What the manager realized was that putting the 4s and 5s up would have got very different feedback, and he might have learned quite a lot as a result.

Using Cynefin to Manage Different Types of Work

Another team I was coaching was setting up a new office, with new software being built by a different team elsewhere. They wrote down their plan on post-its; all the things they had to do. They put the numbers from the complexity scale on them: 5s to 1s. “What’s the newest thing you’re doing?” I asked.

They quickly narrowed in on one post-it. “This,” the team lead said. She held up a small post-it that said “Training”.

“What do you need to do to get feedback on that?” I asked.

“Well… it’s actually pretty easy,” she replied. “We’re not going to train the users this time. The software should be fairly intuitive. We’re going to sit in the same room, and if there’s anything the users don’t understand, we’ll help them understand it and tell the devs to make it easier. We shouldn’t need a manual.”

“Ah, okay. Is anyone likely to be worried about that? How can we get their feedback?”

“We have to produce a one-pager for each of these,” another team member said. “So we could show that to them. If they have concerns, we’ll be able to address them early!”

“Do we have to do the one-pagers early for all of these, then?” another asked.

I picked up another ticket that said, “Telephones.” It was marked as a “2”; something they’d done before. “What would the one-pager for this look like?” I asked.

She shrugged. “Order 24 telephones.”

“What will your stakeholders say if they see that?”

“They’ll say, fine, whatever. Oh, I see! We don’t learn anything by getting feedback on that one.”

“Right. It’s just a checklist. 2s and 1s are usually very stable and quite boring. Nobody’s worried about the telephones. The training, though…”

“Yes,” the team lead chuckled. “We should get started on that one-pager. The telephones will take a while, though. Let’s put a date on it so we don’t leave it too late.” Things which are stable can still take time to create or produce, but usually even that is predictable… with a bit of a buffer or some other options in case things go wrong. All human systems have at least a bit of complexity, even ordering telephones.

By working out which bits of work are new (the 5s and 4s), we can find out where things are most likely to go wrong, and where feedback is most required. We can do them early, so that we make our discoveries before we’ve made too many other commitments.

For 3s, where we need expertise, we can schedule access to the experts or arrange for training or other ways of learning. It’s worth getting feedback too if you can, since human beings make mistakes, and it’s new for whoever’s learning to do it.

2s and 1s, you can worry less about. And that’s how I suggest people, and teams, manage their work.
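To make this concrete, here’s a minimal sketch in Python of how a team might triage a backlog using the scale. The enum, the suggested_approach function and the example backlog are illustrative names of my own, not part of Cynefin or of any tool:

```python
from enum import IntEnum

# The complexity scale above, modelled as an enum.
# (These names are illustrative, not from the scale itself.)
class Complexity(IntEnum):
    KNOWN_TO_ALL = 1     # We all know how to do this.
    KNOWN_IN_TEAM = 2    # Someone in our team's done this.
    KNOWN_IN_ORG = 3     # Someone in our organization has done this.
    DONE_ELSEWHERE = 4   # Someone's done this, but not in this context.
    NEVER_DONE = 5       # Nobody's done this before.

def suggested_approach(level: Complexity) -> str:
    """Map a scale level to the way of working suggested above."""
    if level >= Complexity.DONE_ELSEWHERE:   # 4s and 5s are complex
        return "Probe early, safe-to-fail; expect discoveries."
    if level == Complexity.KNOWN_IN_ORG:     # 3s are complicated
        return "Schedule experts or training; still get feedback."
    return "Checklist item; put a date on it and worry less."

# Triage a backlog, newest and riskiest first.
backlog = {"Training": Complexity.NEVER_DONE,
           "Telephones": Complexity.KNOWN_IN_TEAM}
for item, level in sorted(backlog.items(), key=lambda kv: -kv[1]):
    print(f"{item} ({level.value}): {suggested_approach(level)}")
```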

Organizational and cultural change, though, is a different matter entirely… because it changes context, which means that everything about it is new. And the things we try might not work at all.

Guiding Organisational and Cultural Change

Dave Snowden and his organization, Cognitive Edge, use Cynefin to help change people’s behaviour, and the culture that results from interactions… at scale.

Not just the scale of an organization, but the scale of populations. They’ve done extensive work with various different governments; with healthcare and the military; with charities and FTSE 100 companies and many things in between.

I’m not an expert in sociology or anthropology, so I’m limiting this article to organizational change, because it’s something I can tell stories about. For more stories, see Cognitive Edge’s case studies.

Whenever people change the way they interact with others, or the system of work changes in such a way that they have to change themselves, it introduces uncertainty… and that can be uncomfortable.

If there’s a real sense of urgency and change is needed right now because the company’s in Chaos, a big reorganization will work; but that’s not really suitable for a company that’s already working fairly well. In those situations, evolution is better than revolution, and most of what we do will be complex.

While complicated systems have predictability, complex systems have disposition. That is, some things are more disposed to land than others. In one organization, I can run simulations with Lego and ask people to draw pictures, and people there love playing and having fun so it works. In another organization, perhaps they have a greater awareness of risk or responsibility, and take themselves a bit more seriously, and that doesn’t work so well. It’s impossible to predict what things will work, and what things won’t…

…but the people who’ve been there a while are more likely to know the disposition of the organization than me.

Leadership at Every Level

Everyone, at every level of an organization, can see ways in which it might be improved. They can probably think of things they could try… if only it was safe to do so.

Most of my work involves helping people make things safer. The kind of things I do are:

  • set up processes with regular check-in points so that feedback can be given easily
  • teach people how to deliver personal feedback in a kind but honest way, from a place of care (radical candour)
  • encourage psychological safety so that team members feel able to take interpersonal risks, asking questions or making suggestions that might otherwise seem silly or naive
  • help people to draw out the uncertain, complex aspects of their work so they know how to get feedback more easily
  • help people find ways to get feedback on that work in ways that are safer, with fewer people or shorter time or less money
  • help IT departments create processes that provide technical feedback (“DevOps”).

I do sometimes suggest probes, just to show that there’s room for them… but it’s very much more powerful when people within an organization can come up with probes of their own. Once change begins happening and people start seeing the benefits, trying new things out becomes an organizational habit.

Once that starts, I’m mostly redundant, and the organization can take itself forwards.

This is why I teach Cynefin to everyone I can; so that everyone can understand that cultural change will emerge, rather than being imposed; and so that they start spotting opportunities to try things out, or call out places where things aren’t safe to fail.

Trying something out which might make things safer-to-fail is of course a probe of its own!

What’s in a probe?

There are five things a probe has to have:

  • Indicators of success
  • Indicators of failure
  • A way of amplifying it if it succeeds
  • A way of dampening it if it fails
  • Coherence.

Coherence is described as “a sufficiency of evidence to progress,” or “a realistic reason for thinking that the probe will have a positive impact”. I like to ask, “Can you give me a scenario where this works?” If we can’t imagine it working, it isn’t coherent!
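The criteria can even be treated as a literal checklist. Here’s a minimal sketch, assuming a hypothetical Probe structure of my own invention (Cognitive Edge doesn’t define one); the example probe is made up for illustration:

```python
from dataclasses import dataclass

# A safe-to-fail probe, modelled on the five criteria above.
# (The structure and field names are illustrative, not from Cognitive Edge.)
@dataclass
class Probe:
    description: str
    success_indicators: list[str]   # How we'll know it's working.
    failure_indicators: list[str]   # How we'll know it isn't.
    amplify: str                    # How we'll amplify it if it succeeds.
    dampen: str                     # How we'll dampen or stop it if it fails.
    coherence: str                  # A realistic scenario in which it works.

    def is_safe_to_fail(self) -> bool:
        """Ready to run only if all five criteria are filled in."""
        return all([self.success_indicators, self.failure_indicators,
                    self.amplify, self.dampen, self.coherence])

probe = Probe(
    description="Pair a developer with an ops engineer for one sprint",
    success_indicators=["fewer deployment handoffs", "positive stories"],
    failure_indicators=["pairing sessions abandoned", "delivery slows down"],
    amplify="invite other pairs to try it; tell the story at the next demo",
    dampen="stop pairing, but keep any documentation produced",
    coherence="shared context has sped up deployments on other teams",
)
assert probe.is_safe_to_fail()
```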

Note that when we actually run the probe, we might find that unintended consequences step in, and we don’t get quite what we expected!

For instance, when a city puts more police on the beat, the reported crime rate tends to go up rather than down… not because there’s more crime, but because it’s easier to report it. Similarly, we might see more problems to start with, rather than fewer. If teams start finishing off old work before starting new work, the time between an initiative starting and being delivered might also go up rather than down, as we finish off all the things which have been in progress for ages (if we measure them by when they finished, anyway!). So we often get the opposite of the measurements we’re expecting. All metrics in complexity have to be treated with more than a little curiosity!
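A toy calculation shows that last effect. Suppose we measure lead time by finish date, and a team starts finishing off its long-running work; the items and dates below are invented for illustration:

```python
from datetime import date

# (item, started, finished) - invented data for illustration.
items = [
    ("Old initiative A", date(2017, 1, 10), date(2017, 11, 20)),
    ("Old initiative B", date(2017, 3, 5),  date(2017, 11, 25)),
    ("New initiative",   date(2017, 11, 1), date(2017, 11, 30)),
]

# Grouping by the month in which work *finished*, the month where the
# old work finally completes reports a terrible average lead time...
# at exactly the moment the process improved.
november = [(finished - started).days
            for _, started, finished in items if finished.month == 11]
print(f"Average lead time, November: {sum(november) / len(november):.0f} days")
# -> Average lead time, November: 203 days
```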

But being able to imagine a scenario can tell you what metrics or other indicators to look out for.

Stories, particularly, are a very powerful indicator. One of the best indicators of cultural change is that the stories people tell within and about the organization are changing, too.

Amplifying the Positives

Even before an organizational change initiative starts, people on the ground are trying to make things better for themselves and others. Sometimes, those efforts are already working! They’ve found something that’s disposed to land.

The most important thing that we can do is to amplify things that are going well, especially if they’re new. There are lots of different ways to amplify probes in an organization. This list is not exhaustive:

  • If you were involved, tell the story!
  • Present the new thing to other people.
  • Help someone else get it working in their team.
  • Do it with more people!
  • Make it an explicit part of your team charter, your ways of working, or your process policies.
  • Put a poster up to remind people to keep doing it.
  • Put a poster up inviting others to ask you about it.

You get the idea.

In a complicated system, the system’s made up of the sum of its parts.

In a complex system, it’s made up of the interaction between agents: the product of relationships. So the most important probes to amplify are ones in which a relationship has improved, and work involving two people or groups is better as a result. Presenting and telling the stories together can be powerful.

A Shallow Dive into Chaos – generating ideas for probes

Daniel Kahneman, author of “Thinking, Fast and Slow”, talks about anchoring, in which something one person says can change the opinion of another. It happens very easily, even when we’re aware of the bias. The phrase “Don’t think of an elephant!” is a really easy way to demonstrate this.

If I asked everyone in the room to write the name of an animal on a piece of paper, we’d probably get quite a lot of different responses. Some people really like insects or have a pet snake or a fish.

But if I ask each person in turn to think of an animal and the first person says, “Elephant!”, then it’s likely that the other animals will be things that people associate with elephants: lions and tigers and other larger animals that they find at the zoo; or if they’re particularly visual, maybe other animals that are grey and wrinkled like walruses and rhinos. It will be less likely that they’ll come up with swallows and spiders.

If we’re facing a “wicked problem” that’s proving hard to untangle, we might need to generate some radical ideas. To get the widest variety of ideas for probes to try out, we separate people, and stop them from biasing each others’ opinions.

A lot of facilitators do this anyway, using “Silent Work” techniques like asking people to write their ideas on post-its before aggregating them. There’s a process called “set-based concurrent engineering”, in which several people or groups independently try out their ideas for solving a problem before bringing those ideas back together. So this isn’t a new thing at all. We only stay in chaos for long enough to generate the ideas.

If we’ve got a large population of people, we might separate them into smaller groups. We try to make those groups homogeneous, with the same kind of people in each group. This isn’t how we’d want to work normally – diversity is important! – but when you’re generating ideas, it provides the widest diversity between the groups. This also helps to prevent the HiPPO effect, where the highest-paid person’s opinion wins out.

Once the ideas are generated, Cognitive Edge use a technique called Ritual Dissent, in which the probe is presented to another group, who critique it using the five criteria for a safe-to-fail probe that I mentioned earlier. Their critique helps to refine the probe, as well as helping the people who want it to work to present it more clearly to others.

Sometimes this ritual can be useful. Often though I’ve found that the people who will want a probe to be dampened or stopped when it fails are already in the room, and if they’re willing to go ahead with it and provide feedback, it’s good enough. Getting things moving can be more important than getting them moving exactly right, if you’ve already got safety nets in place.

Conclusion

Cynefin defines five types of problem:

  • Obvious, which you can categorize (boring and easy).
  • Complicated, which you can analyse and requires expertise.
  • Complex, in which outcomes emerge, and which requires trying things out in a way that’s safe-to-fail (probes).
  • Chaotic, which requires swift action, and
  • Disorder, where we’re treating the problem with the wrong approach, causing problems of its own.

Most of our work is driven by the complex, so trying out the newest and most uncertain bits early, with minimal investment of time and effort, can be a good idea.

Cultural and organizational change is always complex, unless there’s already enough chaos for a big-bang change (evolution vs. revolution).

By trying out your own safe-to-fail probes, and by amplifying others, you can be part of that change too.