Using Scenarios for Experiment Design

In the complex domain, cause and effect are only correlated in retrospect, so we cannot predict outcomes; we can only see and understand them after the fact. The complex domain is the domain of discovery and innovation. Expect the unexpected! Every time we do something new, whether it’s never been done before at all or simply never been done within the given context, there will be complex aspects to it.

The only discovery-free project would be the same project, done with the same people, the same technology and the same requirements. That never happens!

Because of this, analysis doesn’t work for every aspect of a project. People who try to do analysis in the complex domain commonly experience analysis paralysis, thrashing, two-hour meetings led by “experts” who’ve never done it before either, and arguments about who’s to blame for the resulting discoveries.

Instead, the right thing to do is to probe; to design and perform experiments from which we can learn, and which will help to uncover information and develop expertise.

There are a few things we can do to design experiments well, with thanks and credit for the strategies going to Cognitive Edge. (They have more material on this, available in their library if you sign up for free.) The suggestions around scenarios are mine.

Amplification strategy; recovery strategy

For our experiment to work, we have to know how we’re going to amplify it. That may mean adding it to corporate processes, communicating it to a wider audience, automating it, releasing it to production, etc. In the complex space, doing the same thing over and over results in different outcomes because of the changing contexts, but once we understand cause and effect, we can start to test that correlation in different or larger contexts, developing expertise and moving the work into the complicated domain.

We also need to have a strategy for recovery in case of failure. This doesn’t mean that we avoid failure!

I’ve seen a lot of people try to analyze their way out of every failure mode. One of my clients said, “Oh, but what if people don’t have the skills to do this experiment?” Does it matter? If the investment in the experiment is small enough (which is also part of being safe to fail), then all we need to know is that failure is safe; that we can recover from people not having skills. We don’t have to put everything in place to ensure success… and perhaps good things will happen from people being forced to gain skills, or using their existing skills to create a novel approach! This is the nature of experiments: we don’t know the outcome, only that the experiment has coherence, meaning a reason for believing that its impact, whatever it is, might be positive. More on that later.

If you can think of a scenario in which the experiment succeeds, can you think of how to make it succeed a second time, and then a third?

If you can think of a scenario in which it fails, can you think of how to make that failure safe? (This is preferable to worrying about how to avoid the failure.) I find the evil hat very handy for thinking up failure scenarios.
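If it helps to make this concrete, here’s a minimal Python sketch of how an experiment’s design might be captured as an explicit checklist, with the two strategies as required fields. Every name in it is my own invention for illustration; it isn’t a schema from Cognitive Edge’s material.

    from dataclasses import dataclass, field

    @dataclass
    class ExperimentDesign:
        """A safe-to-fail experiment captured as an explicit checklist.
        All field names here are illustrative, not an established schema."""
        name: str
        # How we'll amplify success: wider audience, automation,
        # release to production, corporate process, etc.
        amplification_strategy: str
        # How we'll recover if it fails; failure should be safe, not avoided.
        recovery_strategy: str
        # Scenarios in which we can imagine it succeeding (a second and
        # third time, too) or failing safely.
        success_scenarios: list[str] = field(default_factory=list)
        failure_scenarios: list[str] = field(default_factory=list)

        def is_safe_to_fail(self) -> bool:
            # Not ready to run until both strategies exist and we've
            # imagined at least one failure we could recover from.
            return bool(self.amplification_strategy
                        and self.recovery_strategy
                        and self.failure_scenarios)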

Indications of failure; indications of success

In order to put our amplification or recovery strategies into place, we need to be able to tell whether an experiment is succeeding or failing. Metrics are fantastic for this. Don’t use them as tests, though, and definitely don’t use them as targets! They’re indicators; they may not behave as expected. We can understand the indicators in retrospect, but cause and effect won’t always be correlated until then.

As an example, one group I met decided to experiment to see if they could reduce their bug count by hiring a new team member and rotating an existing team member each week into a bug-fixing role. Their bug count started to go down! So they took another team member and hired another new person… but the bug count started to go up again.

It turned out that the users had spotted that bugs were being fixed, so they’d started reporting them. The bugs were always there! And the count of known bugs going up was actually a good thing.

Rather than thinking of tests, think of scenarios in which you can see the experiment succeeding or failing. The things which allow you to see it (specific, measurable, relevant signs) will make for good indicators. These indicators will have to be monitored.
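To make the difference between indicators and tests concrete, here’s a small hypothetical sketch: a monitor that flags a reading for human review when it moves outside the range we expected, rather than passing or failing anything. The names, ranges and numbers are all invented for illustration.

    # A hypothetical indicator monitor: it never passes or fails the
    # experiment; it only flags readings for human sense-making.
    def review_indicator(name: str, readings: list[float],
                         expected_low: float, expected_high: float) -> str:
        """Ask for review when the latest reading falls outside the range
        we expected; a surprising indicator may still be good news."""
        latest = readings[-1]
        if expected_low <= latest <= expected_high:
            return f"{name}: {latest} is within expectations; keep monitoring."
        return (f"{name}: {latest} is outside expectations "
                f"({expected_low}-{expected_high}); review with the team "
                f"before amplifying or recovering.")

    # E.g. the known-bug count from the story above: the rise looked like
    # failure, but turned out to mean users trusted the team to fix bugs.
    print(review_indicator("known bugs", [40, 35, 30, 42], 0, 35))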

Rationale for Experiment

The experiment should be coherent.

This means that there should be a reason for believing the impact will be good, or as Dave Snowden puts it, “a sufficiency of evidence to allow us to progress”.

If you can come up with some realistic scenarios in which the experiment has a positive impact, you have coherence. The more likely the scenario is, and the more similar it is to scenarios you’ve seen in the past, the more coherent the experiment becomes, until the coherence is predictable and you have merely complicated problems, solvable with expertise, rather than complex ones.

To check that your scenarios are realistic, imagine yourself in the future, in that scenario. Where are you when you realise that the experiment has worked (or, if checking for safe failure, failed)? Who’s around you? What can you see? What can you hear? What colour are the walls, or if you’re outside, what else is around? Do you have a kinesthetic sense; something internal that tells you that you’ve succeeded, like a feeling of pride or joy? This well-formed outcome will help you to verify that your scenario is realistic enough to be worth pursuing.

If you can’t come up with any scenarios in which you can imagine a positive impact, then your experiment is not coherent, and you might want to think of some other ideas.

Look out soon for a blog post on how to do that with a shallow dive into chaos!


4 Responses to Using Scenarios for Experiment Design

  1. Hi Liz, I would say we see this in evidence in enterprise transformation. I’ve seen so many projects get stuck in the analysis phase without the magical ‘answer’. I realised many years ago that one of the best ways to get the change started is to ask some of the more experienced people in the team, up front, what their gut feeling is about what we should change to reach the goal. Then start with a small sample team and test it to see what happens. If it doesn’t work, the test group is small and the impact is minimal. If it works, we change the parameters and try again, and iterate our way to success.

  2. In his post entitled “Unintended Consequences” http://cognitive-edge.com/blog/entry/6434/unintended-consequences/ Dave substitutes the word ‘interventions’ for ‘experiments’. I hope this will encourage people to pay more consideration to situational awareness before immediately jumping into problem-solving methods.

  3. Pingback: The Shallow Dive into Chaos | Liz Keogh, lunivore

  4. Pingback: On Epiphany and Apophany | Liz Keogh, lunivore
