For more on the Chaotic domain and subdomains, read Dave Snowden’s blog post, “…to give birth to a dancing star.” The relationship between separating people that I talk about here, and the Chaotic domain, can be seen in Cynthia Kurtz’s work with Cynefin’s pyramids, as seen here.
On one of my remote training courses over WebEx, I asked the participants to do an exercise. “Think of one way that people come to consensus,” I requested, “and put it in the chat box.”
Here’s what I got back…
1st person: Voting
2nd person: Polling
3rd person: Yeah, I’ll go with polling too
And then I had to explain to them why I was giggling so much.
This is, of course, a perfect demonstration of one of the ways in which people come to consensus: by following whoever came first. We might follow the dominant voice in the room, or the person who’s our boss, or the one who brought the cookies for the meeting, or the one that’s most popular, or the one who gets bored or busy least quickly.
We might even follow the person with the most expertise.
MACE: We’ll have a vote.
SEARLE: No. No, we won’t. We’re not a democracy. We’re a collection of astronauts and scientists. So we’re going to make the most informed decision available to us.
MACE: Made by you, by any chance?
SEARLE: Made by the person qualified to understand the complexities of payload delivery: our physicist.
CAPA (Physicist): …shit.
If you’re doing something as unsafe to fail as nuclear payload delivery, getting the expert to make a decision might be wise. (Sunshine is a great film, by the way, if you’re into SF with a touch of horror.)
If you’re doing something that’s complex, however, which means that it requires experiment, the expertise available is limited. Experiments, also called Safe-To-Fail Probes, are the way forward when we want our practices and our outcomes to emerge. This is also a fantastic trick for getting out of stultifying complicatedness or simplicity, and generating some innovation.
But… if you stick everyone in a room and ask them to come up with an experiment, you’ll get an experiment.
It just might not be the best one.
More ideas mean better ideas
In a population where we know nothing about the worth of different ideas, the chance of any given idea being above average is 50%. If we call those “good ideas”, then we’ve got a 50% chance of something being good.
Maybe… just maybe… the idea that the dominant person, or the first person, or the expert, or the person with the most time comes up with will be better than average. Maybe.
But if you have three ideas, completely independently generated, what’s the chance of at least one of them being above average?
Going back to my A-level Maths… it’s 1 – (chance of all ideas being below average) which is 1 – (1/2 x 1/2 x 1/2) which is 87.5%.
That’s a vast improvement. Now imagine that everyone has a chance to generate those ideas.
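The arithmetic generalises nicely: with n independently generated ideas, the chance that at least one is above average is 1 − (½)ⁿ. A quick sketch (my own illustration, not from the post):

```python
def chance_of_good_idea(n: int) -> float:
    """Chance that at least one of n independently generated ideas
    is above average, assuming each idea independently has a 50%
    chance of being above average."""
    return 1 - 0.5 ** n

for n in (1, 3, 5, 10):
    print(f"{n} ideas: {chance_of_good_idea(n):.1%}")
# 1 idea gives 50.0%, 3 give 87.5%, 10 give 99.9%
```

The returns diminish, but even a handful of genuinely independent ideas makes a good outcome far more likely than letting the first voice in the room decide.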
If you want better ideas, stop people gaining consensus too early.
For this to work, the experiments that people come up with have to be independent. That means we have to separate people.
Now, obviously, if you have a hundred and twenty people and want to get experiments from them, you might not have time to go through a hundred and twenty separate ideas. We still want diversity in our ideas, though (and this is why it’s important to have diversity for innovation; because it gives you different ideas).
So we split people into homogeneous groups.
This is the complete opposite of Scrum’s cross-functional teams. We want diversity between the groups, not amongst them. This is a bit like Ronald S. Burt’s “Structural Holes” (Jabe Bloom’s LKUK13 talk on this was excellent); innovation comes from the disconnect; from deliberately keeping people silo’d. We put all the devs together; the managers together; the senior staff together; the group visiting from Hungary together; the dominant speakers together… whatever will give you the most diversity in your ideas.
Once people have come up with their experiments, you can bring them back together to work out which ones are going to go ahead. Running several concurrently is good too!
If you’ve ever used post-its in a retrospective, or other forms of silent work to help ensure that everyone’s thoughts are captured, you’re already familiar with this. Silent work is an example of the shallow dive into chaos!
Check that your experiments are safe-to-fail
Dave Snowden and Cognitive Edge reckon you need five things for a good experiment:
- A way to tell it’s succeeding
- A way to tell it’s failing
- A way to amplify it
- A way to dampen it
- Coherence (a reason to think it might produce good results).
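If you track probes in a backlog or a spreadsheet, those five criteria translate naturally into a checklist. A minimal sketch in Python (the field names are my own shorthand, not Cognitive Edge’s terminology):

```python
from dataclasses import dataclass

@dataclass
class Probe:
    """A safe-to-fail probe, checked against the five criteria."""
    description: str
    success_signal: str   # a way to tell it's succeeding
    failure_signal: str   # a way to tell it's failing
    amplify: str          # a way to amplify it
    dampen: str           # a way to dampen it
    coherence: str        # a reason to think it might produce good results

    def is_well_formed(self) -> bool:
        # A probe only counts as safe-to-fail if all five criteria
        # have been thought through, not just the happy path.
        return all([self.success_signal, self.failure_signal,
                    self.amplify, self.dampen, self.coherence])
```

The point of the checklist isn’t the code; it’s that a probe missing a dampening strategy or a failure signal isn’t safe-to-fail yet, however promising it looks.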
If you can think of a reason why your experiment might fail, look to see if that failure is safe; either because it’s cheap in time or effort, or because the scale of the failure is small. The last post I wrote on using scenarios for experiment design can help you identify these aspects, too.
An even better thing to do is to let someone else examine your ideas for experiments. Cognitive Edge’s Ritual Dissent pattern (requires free sign-up) is fantastic for that; it’s very similar to the Fly-On-The-Wall pattern from Linda Rising and Mary Lynn Manns’ “Fearless Change”.
In both patterns, the idea is presented to the critical group without any questions being asked, then critiqued without eye contact; usually done by the presenter turning around or putting on a mask. Because as soon as we make eye contact… as soon as we have to engage with other people… as soon as we start having a conversation, whether with spoken language or body language… then we automatically start seeking consensus.
And consensus isn’t always what you want.
On the topic of generating many experiments – have you come across David Bland’s experiment mapping? I first came across it at Agile 2013 in this session: http://www.slideshare.net/7thpixel/experimenting-in-the-enterprise-agile2013-v11/33
I’ve not had a chance to sit down with David for some time, so how I run them has probably diverged quite a bit from how he does it in the last couple of years – but what I do is this:
Beforehand (in various ways depending on context):
* Get agreement on the problem we’re investigating
* Generate assumptions around that focus problem
* Prioritise assumptions and pick N most valuable to explore
* Split into small groups (4-6) and divide the assumptions among the groups, so multiple groups are exploring the same assumptions
* Silent generation of experiments by individuals within group.
* Group placement of experiments on the group’s experiment map (rather than David’s time dimension, I use “risk”, which seems very close to your “safety” in intent)
* Group review (this often leads to things like splitting/merging/thinning of assumptions as folk spot different interpretations, hard/easy parts of assumption to validate, etc.)
* Often another round of silent generation / placement
* Bring groups back together, explore each assumption in turn. That’ll usually separate a bunch of things that everybody treats vaguely similarly, from a minority that folk have split/merged differently or approach in very different ways. Sometimes that’ll lead to another round of whatever was used to prioritise assumptions to discover if we’ve found something interesting that needs further poking.
* Pull out N experiments to explore further from the least-risky/best-learning highest priority assumptions (which often leads to a discussion about how many experiments we can explore, set-based design, etc. which is a whole separate thang).
I’ve tried this a couple of different times in larger groups (50ish). Once I did the homogenous groups trick. Once I didn’t. Didn’t notice a whit of difference in the kind of things that came back. Artisanal anecdata of course — but I found it interesting.