Negative Scenarios in BDD

One problem I hear repeatedly from people is that they can’t find a good place to start talking about scenarios.

An easy trick is to find the person who fought to get the budget for the project (the primary stakeholder) and ask them why they wanted the project. Often they’ll tell you some story about a particular group of people that they want to support, or some new context that’s coming along that the software needs to work in. All you need to do to get your first scenario in that situation is ask, “Can you give me an example?”

When the project or capability is about non-functionals such as security or performance, though, this can be a bit trickier.

I can remember when we were talking to the Guardian editors about performance on the R2 project. “When (other national newspaper) went live with their new front page,” one editor explained, “their site went down for 3 days under the weight of people coming to look at the page. We don’t want that to happen.”

Or, as another organization said, “We went live, and it crashed. It took three months to get the site up and running again. The code was so awful we couldn’t fix it.”

These kinds of negative stories are often drivers, particularly when there are non-functionals involved. You can always handle the requirements through monitoring instead of testing, but the conversation can’t follow the usual, “Can you give me an example?” pattern, because all the examples are things that people don’t want.

Instead, keep that negativity, and ask questions like, “What performance would we need to have to avoid that happening to us? Do we have a good security strategy for avoiding the hacking attempt that ended up with (major corporation)’s passwords getting stolen? How do we make sure we don’t crash when we go live?” Keep the focus on the negatives, because that’s what we want to avoid.

When you come to write the scenarios down, whether it’s in terms of monitoring or a test, it’s often worth keeping that negative around. You can create positive scenarios to look at the monitoring boundaries, but the negative reminds people why they’re doing this.

Given we’ve gone live with the front page
When Tony Blair resigns on the same day
Then the site shouldn’t go down under the weight of people reading that news.
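If you do turn the boundary into monitoring or a test, the positive check might be sketched like this. Everything here is an illustrative assumption rather than a real requirement: the nearest-rank percentile function, the 500ms threshold, and the sample response times are all mine.

```python
# A minimal sketch of a monitoring-style check for the positive boundary:
# "the site shouldn't go down under load" becomes "the 95th percentile
# response time stays under a threshold". Threshold and data are invented.

def percentile(samples, p):
    """Return the p-th percentile (nearest-rank method) of a list of numbers."""
    ordered = sorted(samples)
    index = max(0, int(round(p / 100.0 * len(ordered))) - 1)
    return ordered[index]

# Hypothetical response times gathered from monitoring, in milliseconds.
response_times_ms = [120, 95, 340, 180, 210, 150, 99, 400, 130, 175]

p95 = percentile(response_times_ms, 95)
assert p95 < 500, f"p95 response time {p95}ms breaches the 500ms boundary"
```

The negative scenario stays at the top of the file as the reason; the positive check is just the boundary we monitor.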

Remember that if you’re the first people to solve the problem then you’ll need to try something out, but if it’s just an industry standard practice, make sure you’ve got someone on the team who’s implemented it before.

Part of the power of BDD’s scenarios is that they provide examples as to why the behaviour is valuable. You’ll need to convert this to positive behaviour to implement it, but if avoiding the negatives is valuable, include those too, even if it’s just text on a wiki or blurb at the top of a feature file… and don’t be afraid to start there. Negative scenarios are hugely powerful, especially since they often have the most interesting stories attached to them.


140 is the new 17

I took a break from Twitter.

A while back, I ran a series of talks on Respect for People. I talked about systems which encourage respect being those which are constrained or focused, transparent, and forgiving. I also outlined one system which had none of those, and would therefore have a tendency towards disrespect: Twitter.

Twitter is unforgiving, because it’s public. It isn’t transparent; 140 characters isn’t enough to even get a hint of motivation. It isn’t constrained, except by its 140 characters. And, in a week in which a lot of stuff happened offline, followed by yet another Twitter conversation that left me wondering why I bothered, I had enough.

I decided to take a break to work out what it was I actually valued about Twitter, and why I kept coming back to it, even though I knew it was the system at fault, not the people.

I worked it out. I know, now, what it is I’m looking for. And the best way to explain it is to look to a similarly constrained system: the Haikai no Renga.

I want more Haikai no Renga.

Back in the old days, the Japanese used to get together and have dinner parties.

At these dinner parties, they had a game they played. Do you remember those childhood games where one person would write a line of a story, then the next person would write a line then fold it over, then the next person, so they could only see the line that came before? And then, when the whole story unfolds, there are usually enough coincidences and ridiculousness to make everyone laugh?

This game was a bit like that, but it always started with a poem; and the poem was usually 17 syllables long, arranged as lines of 5, 7, 5 syllables.

Then the next person would add to the poem. Their addition would be 7, 7. The two verses, together, form a tanka. Then the next person would add a 5, 7, 5; and the next a 7, 7, and so on. The poem would grow and change from its original context, and everyone would laugh and have a good time.

This game became so popular that the people who were good at starting the game got called on for their first verse a lot, so they started collecting those verses. Finally, one guy called Shiki realised that those first verses were themselves an art form, and he coined the term haiku, a portmanteau pun on haikai no renga, meaning a humorous linked poem, and hokku, meaning the first verse.

In Japanese, haiku work really well with 17 syllables; but in English I think they actually work better with about 13 to 15. That’s because there are aspects of haiku which are far more important than the syllable count.

There’s a word which sets some context, usually seasonal, called the kigo. More important than that, though, is the kireji, or cutting word. This word juxtaposes two different concepts within those three short lines, and shows how they’re related. We sometimes show it in English with a bit of punctuation.

The important thing about the concepts is that they should be a little discordant. There should be context missing. The human brain is fantastic at creating relationships and seeing patterns, and it fills in that missing context for itself, creating something called the haiku moment.

My favourite example of this came from someone called Michael in one of my early workshops:

hot air
rushing past my face –
mind the gap!

(Michael, if you’re out there, please let me know your surname as I use this all the time and would like to credit you properly.)

If you live in London like I do, having a poem like that, with the subject matter outlined but not filled in, can make you feel suddenly as if you’re right there on the platform edge with the train coming in. You make the poem. It’s the same phenomenon that causes the scenes in books to be so much better than the movies; because your involvement in creating them really makes them come to life.

But because there’s not enough context, and because one person’s interpretation can differ from another, the haikai no renga changes. It moves from one subject to another. It can be surprising. It can be funny. It can be joyful. Each verse builds on what’s come before, and takes it into a new place.

Haikai no Renga is a bit like the “Yes, and…” game.

In the “Yes, and…” improv game, each person builds on what the person before has said.

There was a man who lived in a house with a green door.
Yes, and the door knocker was haunted.
Yes, and it was the ghost of a cat who used to live there.
Yes, and…

In the “Yes, and…” game, nobody calls out the person before. Nobody says, “Wait a moment! How can a door knocker be haunted? That makes no sense!”

Instead, they run with the ridiculousness. They make it into something playful, and carry it on, and gradually the sense emerges.

They don’t say, “No. You’re wrong. Stop.” They say, “Hah, that’s funny. Let’s run with that and see where it takes us.”

If people were to play this game on Twitter, they might do it by making requests for more details. “Yes, and what was it haunted by?” “Yes, and did the man own the cat?”

Or they might provide links to enlightening information. “Yes, and have you read the stats on door knocker hauntings?” “Yes, and do you remember that film where the door knockers spoke in riddles?”

Or they might just add another line, and see where it leads.

It starts with a Haiku.

Like the “Yes, and…” game, and like Improv, the haiku which kicks off the haikai no renga has to be an offer; something that the other players or poets can actually play on.

If a haiku were insulting to the other poets, or had some offensive content, then I imagine the poets wouldn’t play, and probably quite rightly you’d get an argument instead of a poem. Equally, though, if the haiku is too perfect, and too complete, then there’s no room for interpretation. There’s no room for other poets to look good, then make their own offers.

And this is where I think I can make a change to the way I’m using Twitter. If I state something as if it’s a fact, or I don’t leave the right kind of opening for conversation, then I won’t get the renga I’m looking for. It doesn’t even matter whether it’s true or not, or whether there’s context missing that other people don’t know about, or whether it’s a quote from someone else; if it’s not an offer, I won’t get the renga. If I’m writing something to make myself, or one of my friends, look knowledgeable and wise, I won’t get the renga. If I’m being defensive, I won’t get the renga. If I’m engaging someone who obviously doesn’t want the renga, well, then I won’t get the renga.

And I want the renga.

So that’s what I’m going to be trying to do. I’ll try to make tweets, in future, which are offers to you, my fellow Tweeters, to join in the game and play with the conversation and see if it takes us somewhere surprising and joyful. I’ll try to make tweets which invite the “Yes, and…” rather than the “No, but…”. And if I do happen to get the “No, but…”, as I’m sure will happen, then I’ll work out what to do at that point. At least, now, I know better what it is that I want.

If you want to play, come join me.

It’s just a tweet… but it could be poetry.


The Estimates in #NoEstimates

A couple of weeks ago, I tweeted a paraphrase of something that David J. Anderson said at the London Lean Kanban Day: “Probabilistic forecasting will outperform estimation every time”. I added the conference hashtag, and, perhaps most controversially, the #NoEstimates one.

The conversation blew up, as conversations on Twitter are wont to do, with a number of people, perhaps better schooled in mathematics than I am, claiming that the tweet was ridiculous and meaningless. “Forecasting is a type of estimation!” they said. “You’re saying that estimation is better than estimation!”

That might be true in mathematics. Is it true in ordinary, everyday English? Apparently, so various arguments go, the way we’re using that #NoEstimates hashtag is confusing to newcomers and making people think we don’t do any estimation at all!

So I wanted to look at what we actually mean by “estimate”, when we’re using it in this context, and compare it to the “probabilistic forecasting” of David’s talk.

Defining “Estimate” in English

While it might be true that a probabilistic forecast is a type of estimate in maths and statistics, the commonly used English definitions are very different. Here’s what Wikipedia says about estimation:

Estimation (or estimating) is the process of finding an estimate, or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable.

And here’s what it says about probabilistic forecasting:

Probabilistic forecasting summarises what is known, or opinions about, future events. In contrast to single-valued forecasts … probabilistic forecasts assign a probability to each of a number of different outcomes, and the complete set of probabilities represents a probability forecast.

So an estimate is usually a single value, and a probabilistic forecast is a range.

Another way of phrasing that tweet might have been, “Providing a range of outcomes along with the likelihood of those outcomes will lead to better decision-making than providing a single value, every time.”

And that might have been enough to justify David’s assertion on its own… but it gets worse.

Defining “Estimate” in Agile Software Development

In the context of Software Development, estimation has all kinds of horrible connotations. It turns out that Wikipedia has a page on Software Development Estimation too! And here’s what it says:

Software development effort estimation is the process of predicting the most realistic amount of effort (expressed in terms of person-hours or money) required to develop or maintain software based on incomplete, uncertain and noisy input.

Again, we’re looking at a single value; but do notice the “incomplete, uncertain and noisy input” there. Here’s what the page says later on:

Published surveys on estimation practice suggest that expert estimation is the dominant strategy when estimating software development effort.

The Lean / Kanban movement has emerged (and possibly diverged) from the Agile movement, in which this strategy really is dominant, mostly thanks to Scrum and Extreme Programming. Both of these suggest the use of story points and velocity to create the estimates. The idea of this is that you can then use previous data to provide a forecast; but again, that forecast is largely based on a single value. It isn’t probabilistic.

Then, too, the “expertise” of the various people performing the estimates can often be questionable. Scrum suggests that the whole team should estimate, while XP suggests that developers sign up to do the tasks, then estimate their own. XP, at least, provides some guidance for keeping the cost of change low, meaning that expertise remains relevant and velocity can be approximated from the velocity of previous sprints. I’d love to say that most Scrum teams are doing XP’s engineering practices for this reason, but a lot of them have some way to go.

I have a rough and ready scale that I use for estimating uncertainty, which helps me work out whether an estimate is even likely to be made based on expertise. I use it to help me make decisions about whether to plan at all, or whether to give something a go and create a prototype or spike. Sometimes a whole project can be based on one small idea or piece of work that’s completely new and unproven, the effort of which can’t even be estimated using expertise (because there isn’t any), let alone historical metrics.

Even when we have expertise, the tendency is for experts to remember the mode, rather than the mean or median value. Since we often make discoveries that slow us down but rarely make discoveries which speed us up, we are almost inevitably over-optimistic. Our expertise is not merely inaccurate; it’s biased and therefore misleading. Decisions made on the basis of expert estimates have a horrible tendency to be wrong. Fortunately everyone knows this, so they include buffers. Unfortunately, work tends to expand to fill the time available… but at least that makes the estimates more accurate, right?

One of the people involved in the Twitter conversation suggested we should be using the word “guess” rather than “estimate”. That might be mathematically more precise; and if we called them that, people might be looking for different ways to inform the decisions we need to make.

But they don’t. They’re called “estimates” in Scrum, in XP, and by just about everyone in Agile software development.

But it gets worse.

Defining “Estimate” in the context of #NoEstimates

Woody Zuill found this very early tweet from Aslak Hellesøy using the #NoEstimates hashtag, possibly the first:

@obie at #speakerconf: “Velocity is important for budgeting”. Disagree. Measuring cycle time is a richer metric. #kanban #noestimates

So the movement started with this concept of “estimate” as the familiar term from Scrum and XP. Twitter being what it is, it’s impossible to explain all the context of a concept in 140 characters, so a certain level of familiarity with the ideas around that tag is assumed. I would hope that newcomers to a movement would approach it with curiosity, and hopefully this post will make that easier.

Woody confessed to being one of the early proponents of the hashtag in the context of software development. In his post on the #NoEstimates hashtag, he defines it as:

#NoEstimates is a hashtag for the topic of exploring alternatives to estimates [of time, effort, cost] for making decisions in software development.  That is, ways to make decisions with “No Estimates”.

And later:

It’s important to ask ourselves questions such as: Do we really need estimates? Are they really that important?  Are there options? Are there other ways to do things? Are there BETTER ways to do thing? (sic)

Woody and Neil Killick, another proponent of the hashtag, both question the need for estimates in many of the decisions made in a lot of projects.

I can remember getting the Guardian’s galleries ready in time for the Oscars. Why on earth were we estimating how long things would take? In retrospect, that time would have been much better spent getting as many of the features complete as we could. Nobody was going to move the Oscars for us, and the safety buffer we’d decided on to make sure that everything was fully tested wasn’t changing in a hurry, either. And yet, there we were, mindlessly putting points on cards. We got enough features out in time, of course, as well as some fun extras… but I wonder if the Guardian, now far more advanced in their ability to deliver than they were in my day, still spend as much time in those meetings as we used to.

I can remember asking one project manager at a different client, “These are estimates, right? Not promises,” and getting the response, “Don’t let the business hear you say that!” The reaction to failing to deliver something to the agreed estimates was to simply get the developers to work overtime, and the reaction to that was, of course, to pad the estimates. There are a lot of posts around on the perils of estimation and estimation anti-patterns.

Even when the estimates were made in terms of time, rather than story points, I can remember decisions being unchanged in the face of the “guesses”. There was too much inertia. If that’s going to be the case, I’d rather spend my time getting work done instead of worrying about the oxymoron of “accurate estimates”.

That’s my rant finished. Woody and Neil have many more examples of decisions that are often best made with alternatives to time estimation, including much kinder, less Machiavellian ones such as trade-off and prioritization.

In that post above, Neil talks about “using empiricism over guesswork”. He regularly refers to “estimates (guesses)”, calling out the fact that we do use that terminology loosely. That’s English for you; we don’t have an authoritative body which keeps control of definitions, so meanings change over time. For instance, the word “nice” used to mean “precise”, and before that it meant “silly”. It’s almost as if we’ve come full circle.

Defining “Definition”

Wikipedia has a page on definition itself, which points out that definitions in mathematics are different to the way I’ve used that term here:

In mathematics, a definition is used to give a precise meaning to a new term, instead of describing a pre-existing term.

I imagine this refers to “define y to be x + 2,” or similar, but just in case it’s not clear already: the #NoEstimates movement is not using the mathematical definition of “estimate”. (In fact, I’m pretty sure it’s not using the mathematical definition of “no”, either.)

We’re just trying to describe some terms, and the way they’re used, and point people at alternatives and better ways of doing things.

Defining Probabilistic Forecasting

I could describe the term, but sometimes, descriptions are better served with examples, and Troy Magennis has done a far better job of this than I ever have. If you haven’t seen his work, this is a really good starting point. In a nutshell, it says, “Use data,” and, “You don’t need very much data.”
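To make the idea concrete, here’s a minimal sketch of a probabilistic forecast in that “use data, and you don’t need very much data” spirit. The historical cycle times, the story count, and the function itself are all invented for illustration, not anyone’s real method: resample the data many times, then read off percentiles rather than a single number.

```python
import random

# Hypothetical historical cycle times, in days per story (invented data) --
# even a small sample like this is enough to start forecasting with.
cycle_times = [2, 3, 1, 5, 2, 8, 3]

def forecast(stories_remaining, samples, trials=10000, seed=42):
    """Monte Carlo forecast: resample historical cycle times to simulate
    many possible totals for the remaining work; return the sorted totals."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.choice(samples) for _ in range(stories_remaining))
        for _ in range(trials)
    )
    return totals

totals = forecast(stories_remaining=20, samples=cycle_times)

# A probabilistic forecast is a range with likelihoods, not one number:
p50 = totals[len(totals) // 2]          # we finish by here half the time
p85 = totals[int(len(totals) * 0.85)]   # ...and 85% of the time by here
print(f"50% chance within {p50} days, 85% chance within {p85} days")
```

The output is a range of outcomes with probabilities attached, which is exactly what a single-valued estimate can’t give you.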

I imagine that when David’s talk is released, that’d be a pretty good thing to watch, too.


A Dreyfus model for Agile adoption

A couple of people have asked for this recently, so just posting it here to get it under the CC licence. It was written a while ago, and there are better maturity models out there, but I still find this useful for showing teams a roadmap they can understand.

If you want to know more about how to create and use your own Dreyfus models, this post might help.

What does an Agile Team look like?

Novice

We have a board
We put our stories on the board
Every two weeks, we get together and celebrate what we’ve done
Sometimes we talk to the stakeholders about it
We think we might miss our deadline and have told our PM
Agile is really hard to do well

Beginner

We are trying to deliver working software
We hold retrospectives to talk about what made us unhappy
When something doesn’t work, we ask our coach what to do about it
Our coach gives us good ideas
We have delegated someone to deal with our offshore communications
We have a great BA who talks to the stakeholders a lot
We know we’re going to miss our deadline; our PM is on it
Agile requires a great deal of discipline

Practitioner

We know that our software will work in production
Every two weeks, we show our working software to the stakeholders
We talk to the stakeholders about the next set of stories they want us to do
We have established a realistic deadline and are happy that we’ll make it
We have some good ideas of our own
We deal with blockers promptly
We write unit tests
We write acceptance tests
We hold retrospectives to work out what stopped us delivering software
We always know what ‘done’ looks like before we start work
We love our offshore team members; we know who they are and what they look like and talk to them every day
Our stakeholders are really interested in the work we’re doing
We always have tests before we start work, even if they’re manual
We’ve captured knowledge about how to deploy our code to production
Agile is a lot of fun

Knowledgeable

We are going to come well within our deadline
Sometimes we invite our CEO to our show-and-tell, so he can see what Agile looks like done well
People applaud at the end of the show-and-tell; everyone is very happy
That screen shows the offshore team working; we can talk to them any time; they can see us too
We hold retrospectives to celebrate what we learnt
We challenge our coach and change our practices to help us deliver better
We run the tests before we start work – even the manual tests, to see what’s broken and know what will be different when we’re done
Agile is applicable to more than just software delivery

Expert

We go to conferences and talk about our fantastic Agile experiences
We are helping other teams go Agile
The business outside of IT are really interested in what we’re doing
We regularly revisit our practices, and look at other teams to see what they’re doing
The company is innovative and fun
The company are happy to try things out and get quick feedback
We never have to work late or weekends
We deploy to production every two weeks*
Agile is really easy when you do it well!

* Wow, this model is old.


What is BDD?

At #CukeUp today, there’s going to be a panel on defining BDD, again.

BDD is hard to define, for good reason.

First, because to do so would be to say “This is BDD” and “This is not BDD”. When you’ve got a mini-methodology that’s derived from a dozen other methods and philosophies, how do you draw the boundary? When you’re looking at which practices work well with BDD (Three Amigos, Continuous Integration and Deployment, Self-Organising Teams, for instance), how do you stop the practices which fit most contexts from being adopted as part of the whole?

I’m doing BDD with teams which aren’t software, by talking through examples of how their work might affect the world. Does that mean it’s not really BDD any more, because it can’t be automated? I’m talking to people over chat windows and sending them scenarios so they can be checked, because we’re never in the same room. Is that BDD? I’m writing scenarios for my own code, on my own, talking to a rubber duck. Is that it? I’m still using scenarios in conversation to explore and define requirements, with all the patterns that I’ve learnt from the BDD world. I’m still trying to write software that matters. It still feels like BDD to me. I can’t even say that BDD’s about “writing software that matters” any more, because I’ve been using scenarios to explore life for a while now.

I expect in a few years the body of knowledge we call “BDD” will also include adoption patterns, non-software contexts, and a whole bunch of other things that we’re looking at but which haven’t really been explored in depth. BDD is also the community; it’s the people who are learning about this and reporting their learning and asking questions, and the common questions and puzzles are also part of BDD, and they keep changing as our understanding grows and the knowledge becomes easier to access and other methods like Scrum and Kanban are adopted and enable BDD to thrive in different places.

Rather than thinking of BDD as some set of well-defined practices, I think of it as an anchor term. If you look up anything around BDD, you’re likely to find conversation, collaboration, scenarios and examples at its core, together with suggestions for how to automate them. If you look further, you’ll find Three Amigos and Outside-In and the Given / When / Then syntax and Cucumber and Selenium and JBehave and Capybara and SpecFlow and a host of other tools. Further still we have Cynefin and Domain Driven Design and NLP, which come with their own bodies of knowledge and are anchor terms for those, and part of their teaching overlaps part of what I teach, as part of BDD, and that’s OK.

That’s why, when I’m asked to define BDD, I say something like, “Using examples in conversation to illustrate behaviour.” It’s where all this started, for me. That’s the anchor. It’s where everything else comes from, but it doesn’t define the boundaries. There are no boundaries. The knowledge, and the understanding, and especially the community that we call “BDD” will keep on growing.

One day it will be big enough that there will be new names for bits of it, and maybe those new names will be considered part of BDD, and maybe they won’t. And when that happens, that should be OK, too.

NB: I reckon the only reason that other methods are defined more precisely is so they could be taught consistently at scale, especially where certification is involved. Give me excellence, diversity and evolution over consistency any day. I’m pretty sure I can sell them more easily… and so can everyone else.


The Shallow Dive into Chaos

For more on the Chaotic domain and subdomains, read Dave Snowden’s blog post, “…to give birth to a dancing star.” The relationship between separating people that I talk about here, and the Chaotic domain, can be seen in Cynthia Kurtz’s work with Cynefin’s pyramids, as seen here.

On one of my remote training courses over WebEx, I asked the participants to do an exercise. “Think of one way that people come to consensus,” I requested, “and put it in the chat box.”

Here’s what I got back…

1st person: Voting

2nd person: Polling

3rd person: Yeah, I’ll go with polling too

And then I had to explain to them why I was giggling so much.

This is, of course, a perfect demonstration of one of the ways in which people come to consensus: by following whoever came first. We might follow the dominant voice in the room, or the person who’s our boss, or the one who brought the cookies for the meeting, or the one that’s most popular, or the one who gets bored or busy least quickly.

We might even follow the person with the most expertise.

MACE: We’ll have a vote.
SEARLE: No. No, we won’t. We’re not a democracy. We’re a collection of astronauts and scientists. So we’re going to make the most informed decision available to us.
MACE: Made by you, by any chance?
SEARLE: Made by the person qualified to understand the complexities of payload delivery: our physicist.
CAPA (Physicist): …shit.

— “Sunshine”

If you’re doing something as unsafe to fail as nuclear payload delivery, getting the expert to make a decision might be wise. (Sunshine is a great film, by the way, if you’re into SF with a touch of horror.)

If you’re doing something that’s complex, however, which means that it requires experiment, the expertise available is limited. Experiments, also called Safe-To-Fail Probes, are the way forward when we want our practices and our outcomes to emerge. This is also a fantastic trick for getting out of stultifying complicatedness or simplicity, and generating some innovation.

But… if you stick everyone in a room and ask them to come up with an experiment, you’ll get an experiment.

It just might not be the best one.

More ideas mean better ideas

In a population where we know nothing about the worth of different ideas, the chance of any given idea being above average is 50%. If we call those “good ideas”, then we’ve got a 50% chance of something being good.

Maybe… just maybe… the idea that the dominant person, or the first person, or the expert, or the person with the most time comes up with will be better than average. Maybe.

But if you have three ideas, completely independently generated, what’s the chance of at least one of them being above average?

Going back to my A-level Maths… it’s 1 – (chance of all ideas being below average) which is 1 – (1/2 x 1/2 x 1/2) which is 87.5%.

That’s a vast improvement. Now imagine that everyone has a chance to generate those ideas.
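The arithmetic above generalises to any number of independently generated ideas. This is just a sketch of the calculation: the 50% “above average” assumption is the one in the post, and the function name is my own.

```python
# If each independently generated idea has probability p_good of being
# above average, the chance that at least one of n ideas is above average
# is 1 minus the chance that all n are below: 1 - (1 - p_good) ** n.

def chance_of_a_good_idea(n, p_good=0.5):
    """Probability that at least one of n independent ideas is 'good'."""
    return 1 - (1 - p_good) ** n

assert chance_of_a_good_idea(1) == 0.5     # one voice: a coin flip
assert chance_of_a_good_idea(3) == 0.875   # three independent ideas: 87.5%
```

Notice how quickly the curve climbs: by ten independent ideas you’re above 99.9%.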

If you want better ideas, stop people gaining consensus too early.

For this to work, the experiments that people come up with have to be independent. That means we have to separate people.

Now, obviously, if you have a hundred and twenty people and want to get experiments from them, you might not have time to go through a hundred and twenty separate ideas. We still want diversity in our ideas, though (and this is why it’s important to have diversity for innovation; because it gives you different ideas).

So we split people into homogeneous groups.

This is the complete opposite of Scrum’s cross-functional teams. We want diversity between the groups, not amongst them. This is a bit like Ronald S. Burt’s “Structural Holes” (Jabe Bloom’s LKUK13 talk on this was excellent); innovation comes from the disconnect; from deliberately keeping people siloed. We put all the devs together; the managers together; the senior staff together; the group visiting from Hungary together; the dominant speakers together… whatever will give you the most diversity in your ideas.

Once people have come up with their experiments, you can bring them back together to work out which ones are going to go ahead. Running several concurrently is good too!

If you’ve ever used post-its in a retrospective, or other forms of silent work to help ensure that everyone’s thoughts are captured, you’re already familiar with this. Silent work is an example of the shallow dive into chaos!

Check that your experiments are safe-to-fail

Dave Snowden and Cognitive Edge reckon you need five things for a good experiment:

  • A way to tell it’s succeeding
  • A way to tell it’s failing
  • A way to amplify it
  • A way to dampen it
  • Coherence (a reason to think it might produce good results).
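As a sketch, those five properties could be captured as a simple checklist to review before running a probe. The field names and the structure here are my own invention for illustration, not Cognitive Edge terminology.

```python
from dataclasses import dataclass, fields

@dataclass
class Experiment:
    """The five properties of a safe-to-fail probe, as a checklist.
    Field names are illustrative, not official Cognitive Edge terms."""
    success_signal: str   # a way to tell it's succeeding
    failure_signal: str   # a way to tell it's failing
    amplification: str    # a way to amplify it
    dampening: str        # a way to dampen it
    coherence: str        # a reason to think it might produce good results

def missing_properties(experiment):
    """Return the names of any checklist items left blank."""
    return [f.name for f in fields(experiment)
            if not getattr(experiment, f.name).strip()]

probe = Experiment(
    success_signal="sign-ups per week go up",
    failure_signal="support tickets spike",
    amplification="roll out to more teams",
    dampening="turn the feature flag off",
    coherence="",  # oops -- no reason yet to think it will work
)
assert missing_properties(probe) == ["coherence"]
```

A blank entry is a prompt for the conversation, not a gate: if you can’t name a dampening mechanism, the experiment probably isn’t safe to fail yet.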

If you can think of a reason why your experiment might fail, look to see if that failure is safe; either because it’s cheap in time or effort, or because the scale of the failure is small. The last post I wrote on using scenarios for experiment design can help you identify these aspects, too.

An even better thing to do is to let someone else examine your ideas for experiment. Cognitive Edge’s Ritual Dissent pattern (requires free sign-up) is fantastic for that; it’s very similar to the Fly-On-The-Wall pattern from Linda Rising and Mary Lynn Manns’ “Fearless Change”.

In both patterns, the idea is presented to the critical group without any questions being asked, then critiqued without eye contact; usually done by the presenter turning around or putting on a mask. Because as soon as we make eye contact… as soon as we have to engage with other people… as soon as we start having a conversation, whether with spoken language or body language… then we automatically start seeking consensus.

And consensus isn’t always what you want.


A Stakeholder goes to St. Ives

As I was trying to resolve my problem, I met a portfolio team with seven programmes of work.

Each programme had seven projects;

Each project had seven features;

Each feature had seven stories;

Each story had seven scenarios.

How many things did I need to resolve?
