A Stakeholder goes to St. Ives

As I was trying to resolve my problem, I met a portfolio team with seven programmes of work.

Each programme had seven projects;

Each project had seven features;

Each feature had seven stories;

Each story had seven scenarios.

How many things did I need to resolve?

Posted in business value | 4 Comments

Using Scenarios for Experiment Design

In the complex domain, cause and effect are only correlated in retrospect; we cannot predict outcomes, only see and understand them after the fact. The complex domain is the domain of discovery and innovation. Expect the unexpected! Every time we do something new – something which hasn’t been done before, or hasn’t been done within the given context – there are going to be complex aspects to it.

The only discovery-free project would be the same project, done with the same people, the same technology and the same requirements. That never happens!

Because of this, analysis doesn’t work for every aspect of a project. People who try to do analysis in the complex domain commonly experience analysis paralysis, thrashing, two-hour meetings led by “experts” who’ve never done it before either, and arguments about who’s to blame for the resulting discoveries.

Instead, the right thing to do is to probe; to design and perform experiments from which we can learn, and which will help to uncover information and develop expertise.

There are a few things we can do to design experiments well, with thanks and credit for the strategies going to Cognitive Edge. (They have more material on this, available in their library if you sign up for free.) The suggestions around scenarios are mine.

Amplification strategy; recovery strategy

For our experiment to work, we have to know how we’re going to amplify it. That may mean adding it to corporate processes, communicating it to a wider audience, automating it, releasing it to production, etc. In the complex space, doing the same thing over and over results in different outcomes because of the changing contexts; but once we understand cause and effect, we can start to test out that correlation in different or larger contexts and develop expertise, moving the problem into the complicated domain.

We also need to have a strategy for recovery in case of failure. This doesn’t mean that we avoid failure!

I’ve seen a lot of people try to analyse their way out of every failure mode. One of my clients said, “Oh, but what if people don’t have the skills to do this experiment?” Does it matter? If the investment in the experiment is small enough (which is also part of being safe to fail), then all we need to know is that failure is safe; that we can recover from people not having skills. We don’t have to put everything in place to ensure success… and perhaps good things will happen from people being forced to gain skills, or using their existing skills to create a novel approach! This is the nature of experiments: we don’t know the outcome, only that the experiment has coherence – a reason for believing its impact, whatever it is, might be positive. More on that later.

If you can think of a scenario in which the experiment succeeds, can you think of how to make it succeed a second time, and then a third?

If you can think of a scenario in which it fails, can you think of how to make that failure safe (preferable to worrying about how to avoid the failure)? I find the evil hat very handy for thinking up failure scenarios.

Indications of failure; indications of success

In order to put into place our amplification or recovery strategies, we need to be able to tell whether an experiment is succeeding or failing. Metrics are fantastic for this. Don’t use them as tests, though, and definitely don’t use them as targets! They’re indicators; they may not behave as expected. We can understand the indicators in retrospect, but cause and effect won’t always be correlated until then.

As an example, one group I met decided to experiment to see if they could reduce their bug count by hiring a new team member and rotating an existing team member each week into a bug-fixing role. Their bug count started to go down! So they took another team member and hired another new person… but the bug count started to go up again.

It turned out that the users had spotted that bugs were being fixed, so they’d started reporting them. The bugs were always there! And the count of known bugs going up was actually a good thing.

Rather than thinking of tests, think of scenarios in which you can see the experiment succeeding or failing. Those things which allow you to see it – specific, measurable, relevant signs – will make for good indicators. These indicators will have to be monitored.
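As a toy sketch of the “indicator, not target” idea (the function and the numbers below are my own illustration, not from any real project), you might watch the direction a metric is moving and flag changes for human attention, rather than passing or failing against a fixed value:

```python
# Toy indicator: report which way a metric is moving, so a human can ask
# "why?" -- rather than treating any particular value as a pass or a fail.
def trend(counts: list[int]) -> str:
    if len(counts) < 2:
        return "not enough data"
    change = counts[-1] - counts[-2]
    if change > 0:
        return "rising"
    if change < 0:
        return "falling"
    return "flat"

# Hypothetical weekly known-bug counts, echoing the story above: the count
# falls while fixing outpaces reporting, then rises as users report more.
weekly_bug_counts = [40, 34, 29, 31, 38]
print(trend(weekly_bug_counts))  # "rising" -- a prompt to investigate, not a verdict
```

In the bug-count story above, “rising” would have been exactly the signal that prompted the team to look closer – and discover the good news about users reporting more bugs.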

Rationale for Experiment

The experiment should be coherent.

This means that there should be a reason for believing the impact will be good, or as Dave Snowden puts it, “a sufficiency of evidence to allow us to progress”.

If you can come up with some realistic scenarios in which the experiment has a positive impact, you have coherence. The more likely the scenario is – and the more similar it is to scenarios you’ve seen in the past – the more coherent the experiment becomes, until the coherence is predictable and you have merely complicated problems, solvable with expertise, rather than complex ones.

To check that your scenarios are realistic, imagine yourself in the future, in that scenario. Where are you when you realise that the experiment has worked (or, if checking for safe failure, failed)? Who’s around you? What can you see? What can you hear? What colour are the walls, or if you’re outside, what else is around? Do you have a kinaesthetic sense; something internal that tells you that you’ve succeeded, like a feeling of pride or joy? This well-formed outcome will help you to verify that your scenario is realistic enough to be worth pursuing.

If you can’t come up with any scenarios in which you can imagine a positive impact, then your experiment is not coherent, and you might want to think of some other ideas.

Look out for a blog on how to do that with a shallow dive into chaos soon!

Posted in cynefin, evil hat, Uncategorized | 2 Comments

A Little Tense

Following on from my last blog post about deriving Gherkin from conversations, I wanted to share some tips on tenses. This is beginner stuff, but it turns out there are a lot of beginners out there! It also isn’t gospel, so if you’re doing something different, it’s probably OK.

Contexts have happened in the past

When I phrase a context, I often put it in the past tense:

Given Fred bought a microwave

Sometimes the past has set up something which is ongoing in the present, but it’s not an action as much as a continuation. So we’ll either use the present continuous tense (“is X-ing”) or we’ll be describing an ongoing state:

Given Bertha is reading Moby Dick

Given Fluffy is 1 1/2 months old

It doesn’t matter how the context was set up, either, so often we find that contexts use the passive voice for the events which made them occur (often “was X-ed” or “has been X-ed”, for whatever the past participle of “X” is):

Given Pat’s holiday has been approved

Given the last bar of chocolate was sold

Events happen in the present

The event is the thing which causes the outcome:

When I go to the checkout

When Bob adds the gig to his calendar

I sometimes see people phrase events in the passive voice:

When the last book is sold

but for events, I much prefer to change it so that it’s active:

When we sell the last book

When a customer buys the last book

This helps to differentiate it from the contexts, and makes us think a bit harder about who or what triggers the outcome.

Outcomes should happen

I tend to use the word “should” with outcomes these days. As well as allowing for questioning and uncertainty, it differentiates the outcome from contexts and events, which might otherwise have the same syntax and be hard to automate in some frameworks as a result (JBehave, for instance, didn’t actually care whether you used Given, When or Then at the beginning of a step; the keyword just told it there was a step to run).

Then the book should be listed as out of stock

Then we should be told that Fluffy is too young

I often use the passive voice here as well, since in most cases it’s the system producing the outcome, unless it’s pixies.

And that’s it!

Posted in bdd | Leave a comment

Deriving Gherkin from Real Conversations

The “Given, When, Then” format was originally developed by Dan North and Chris Matts, way back in 2004. It was intended as a way of describing class behaviour in terms that didn’t involve testing – a way of having useful conversations about code. It turned into a way of having conversations about entire systems, with examples of how those systems behave. It was always meant to be readable by people who didn’t code (Chris Matts being a business analyst at the time), but it was never quite how people actually spoke.

In this post I want to look at ways of turning real conversations into Gherkin, while maintaining as much of the language as possible. If you’ve got any hints and tips of your own, please add them in the comments!

Get examples by asking for them

The best way to get examples is to ask someone, “Can you give me an example?”

Frequently, the people who are thinking of requirements are trying to think of all of them at once, so you’ll probably get something back that’s acceptance criteria rather than a concrete example.

For instance, if I ask:

“Can you give me an example of something that Word does?”

I might get back:

“Yeah, when I select text and make it italic it should be in italics.”

So I make it more specific.

“Can you give me an example of something you might want to make italic?”

“Sure. Let’s say I’ve lost my dog, and I’m making a poster; I want to make ‘Answers to Spot’ appear in italics.”

Usually when people come up with specific examples they’ll come up with something that’s a bit surprising or funny; something which gives you insight into why it might be valuable as well as what you can do with it, without going into how. Listen to what people actually say!

While you’re practising capturing real scenarios in real conversations, start by writing down exactly what people say. You might come up with something like the above. Or perhaps they’ll say something like:

If I bought the microwave on discount then I bring it back for a refund, I should only get $90 instead of $100.

The more closely your scenarios match the real language people use, the more readable and interesting they’re likely to be.

Draw out the Givens, Whens and Thens; don’t force them.

People don’t use “Given, When, Then” in conversation. I frequently find that:

  • people use “if” where Gherkin uses “given”
  • or they use “Let’s say this happened…”
  • or say “well, if I’ve already done X…”
  • people use “then” instead of “when”, when they’re talking about an event they’re performing
  • people often skip “then” altogether when they’re talking about an outcome, and occasionally skip “when”.

So you might get scenarios of the format:

  • Let’s say <context>. I do <event>, then <outcome>.
  • If <context>, then <event>, <outcome>.
  • If we start on <context page>, then <event>, <outcome>.
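For the very simplest of these patterns, the keyword mapping is almost mechanical. As a toy illustration (the function and the example sentence below are my own sketch, not part of Gherkin, Cucumber or any real tool), a sentence of the “If <context>, then <event>, <outcome>” shape can be rewritten directly:

```python
import re

# Naive rewrite of one conversational pattern into Given/When/Then.
# Real conversations wander far more than this -- a human ear is still
# needed -- but the keyword mapping itself really is this direct.
def to_gherkin(sentence: str) -> str:
    match = re.match(r"If (.+?), then (.+?), (.+?)\.?$", sentence)
    if not match:
        raise ValueError("sentence doesn't fit the 'If ..., then ..., ...' shape")
    context, event, outcome = match.groups()
    return "\n".join((f"Given {context}", f"When {event}", f"Then {outcome}"))

print(to_gherkin("If I'm a gold member, then I check out, I get free shipping."))
# Given I'm a gold member
# When I check out
# Then I get free shipping
```

The point isn’t to automate the translation – it’s that so little is lost in it, which is why it’s worth keeping the speaker’s own words.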

It’s all pixies.

I do a lot of work with teams that have already done significant analysis before they even get a sniff at code. For those teams, the solution is already designed; but it’s still useful to help them talk about their problem without reference to the solution.

I tell them, “It’s not a <context page>. It’s a pixie. You’re talking to a pixie, and your pixie is saying, ‘Okay, so what have you already said to me?'” And I make a very silly, high-pitched voice, which makes people laugh but also draws them away from the code and into genuine conversation.

“I’ve said that <context…>”

That’s better than “Given we start on <context page>”, because it doesn’t reference the fact that we’re using a web application, so you can always make a nice little app to do it instead. The UI is flexible, which is good, because people keep wanting to change it, especially when things are uncertain.

Or you could let the pixies do it for you.

Success vs. failure

Often people phrase steps exactly the same way, whether they’re successful or not. So you’ll get:

I submit <my application> and <look for this successful outcome>

I submit <my application> but miss out <mandatory field>

I often see this:

Given I have submitted my application successfully
When I enter my details successfully

I find it more readable to make the default event successful, and use try for unsuccessful events, so:

Given I’ve submitted my application

means you succeeded, while

Given I tried to submit my application
But it was audited and rejected

naturally flows; you recognise in the first step that there’s some other information that’s a bit different to normal. You can use this for events as well as contexts.
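If you wanted step definitions to honour this convention, one framework-neutral sketch (the function name and regex below are mine, not from Cucumber or JBehave) is to strip the “tried to” prefix and record whether success is expected:

```python
import re

# Read the "tried to" convention: a plain step implies success;
# "tried to" signals that failure is allowed for.
def parse_attempt(step: str) -> tuple[str, bool]:
    match = re.match(r"(?:I've |I )?tried to (.+)", step)
    if match:
        return match.group(1), False   # an attempt: don't assume it worked
    return re.sub(r"^(?:I've |I )", "", step), True

print(parse_attempt("I tried to submit my application"))  # ('submit my application', False)
print(parse_attempt("I've submitted my application"))     # ('submitted my application', True)
```

The same trick works for When steps as well as Givens.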

It’s…

Use the word it, or he, she, etc. Nobody says:

Given John filled in his application
And the application meets the auditor’s regulatory requirements
When John submits the application for approval
Then John should receive an email that his application has been approved…

Closer to the conversation would be:

Given John filled in his application
And it meets the auditor’s regulatory requirements
When he submits it for approval

You get the idea.

It’s OK to ignore bits in code.

If you want to make sure you can handle it, he, she or even an occasional John or the application without writing fifteen step definitions, you can turn them into arguments to the steps and then just ignore them. Call them actor_ignored or something similar so you know you won’t be using them.
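As a framework-neutral sketch of that idea (the pattern and group name below are my own illustration), this is just a capture group that accepts all the variants and is never read:

```python
import re

# One step pattern whose "actor_ignored" group soaks up "it", "he", "she",
# "John" or "the application" -- we match it, name it, and never use it,
# so one step definition covers every phrasing.
STEP = re.compile(
    r"(?P<actor_ignored>it|he|she|John|the application) "
    r"meets the auditor's regulatory requirements"
)

for text in ("it meets the auditor's regulatory requirements",
             "the application meets the auditor's regulatory requirements"):
    print(bool(STEP.match(text)))  # True for both variants
```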

It’s also OK to have steps which lend some understanding of why things are important, even though you’ve got no way of actually coding them. For instance:

Given I lost my dog

can just be an empty step. We don’t want to actually lose someone’s dog. Especially not to the pixies.

Posted in bdd | Leave a comment

A dev walks into a bar…

…and says to the barman, “I’m in the bar. I’m thirsty. I have £10.23 in my wallet.”

“Great,” says the barman. “What can I get you?”

The dev looks around. “When you take that glass and put it in front of that pump there,” he says, pointing at a pump, “you should be able to fill it full of beer.”

“Guess so,” the barman says. He picks up the glass and starts pouring the pint.

The dev points to a spot in front of him on the bar. “Given the glass is full of beer, when you put it there on the bar, you should ask me for £3.80.”

“Uhuh,” the barman says. He finishes pouring the pint and puts it in front of the dev.

“You should ask me for £3.80,” the dev says again. “If you don’t, I’m going to throw… um…” He looks around again.

“You know,” the barman suggests, “if you want to learn to use Cucumber you could just start by having an ordinary conversation first.”

Posted in bdd | 2 Comments

Using BDD as a Sensemaking Technique

A while back, I wrote about Cynefin, a framework for making sense of the world, and for approaching different situations and problems depending on how much certainty or uncertainty they have.

As a quick summary, Cynefin has five domains:

Simple / Obvious: Problems are easy to solve and have one best solution. You can categorise and say, “Oh, it’s one of those problems.”

Complicated: Often made up of parts which add together predictably. Requires expertise to solve. You can analyse this problem if you have the expertise.

Complex: Properties emerge as a result of the interactions of the parts. Cause and effect are only correlated in retrospect. Problems must be approached by experiment, or probe. Both outcomes and practices emerge.

Chaos: Resolves itself quickly, and not always in your favour. Often disastrous and to be avoided, but can also be entered deliberately (a shallow dive into chaos) to help generate innovation. Problems need someone to act quickly. Throwing constraints around the problem can help move it into complexity. Practices which come from this space are novel.

Disorder: We don’t know which domain dominates, so we behave according to our preferred domain (think PMs demanding predictability even when we’re doing something very new that we’ve never done before, or devs reinventing the wheel rather than getting an off-the-shelf library). Often leads to chaos when the domain resolves (I know least about this domain, but it’s way more important than I originally thought it was!)

By looking to see what kind of problems we have, we can choose an appropriate approach and avoid disorder.

However, we can also use BDD in conversation as a sensemaking technique!

BDD uses examples in conversation to illustrate behaviour. We sometimes call those examples scenarios, but really they mean the same thing. My favourite technique for eliciting examples is just to ask for them: “Can you give me an example?”

If the scenarios are imminent and dangerous, and we want to avoid them, we’re probably in chaos – and honestly, you won’t be having a “huddle” or a “scenario-writing session”; you’ll be having a hands-on emergency meeting. You’ll know if you’re in chaos. Don’t worry about talking through the scenarios any more. Get people who know how to stem the blood-flow into the room and throw some constraints around the problem like a tourniquet (shut down that problematic server, or put up a maintenance page, or send an apology to your customers).

If the scenarios are causing a lot of discussion, and people are looking worried or confused, it’s probably because you’re in complexity. BDD is essentially an analysis tool, and analysis doesn’t work in complexity. You’ll see analysis paralysis, in which people try to thrash out various outcomes, and every answer generates yet more unanswered questions. As a check to see if you’re really in this space, ask, “Are we having trouble analysing this because it’s so new?” If so, see if you can think of a way to spike or prototype the ideas you’re generating, as cheaply as possible, so you can start getting feedback on them rather than deciding everything up-front. These are the 4s and 5s on my complexity estimation scale. We want to do these as early as possible, because they carry the most risk and the most value, so it’s very important not to push back on the business for clear acceptance criteria here! That will just end up pushing the riskiest requirements towards the end of the project, when we have less time to react to discoveries.

If BDD is working well for understanding the problem and gaining expertise in the business domain, you’re probably in complicated territory. Fantastic! BDD will work well for you here. People will be interested, and asking questions about scenarios generates a few more scenarios and helps create a common understanding. This is a 3 on the complexity estimation scale. Don’t forget to get feedback quickly anyway, because we’re all human and we all make mistakes. I tend to get devs to write the scenarios down, either during or after the conversations, since they can then get feedback on their understanding (or lack of it) early, before they even write any code.

If people are getting bored with discussion around scenarios, look to see if the problem is well-understood, or very similar to something that the team is familiar with. This is either a 2, which means it’s on the border of complicated and simple, and well-understood by people who work in that business domain, or a 1, which means it’s simple and obvious and easy to solve. You can always use Dan North’s “Ginger Cake” pattern here. Find a chocolate cake recipe that’s similar (“log in like Twitter”) and replace the chocolate with ginger (“but make them upload a photo instead of creating a username”). I find it enough just to name the scenarios here, without going into the actual steps. As a check, you can ask, “Is there anything different about <scenario> compared to <the other time we did it>?” That will help flush out anything which isn’t obvious.

The most important part of using BDD this way is to pay attention to people’s spoken and body language – bored, interested, worried or panicked, depending on the domain you’re in. I find BDD particularly good for locating the borders of complex/complicated and complicated/simple, and isolating which bits of problems are the most complex. And that’s how I use BDD across all the Cynefin domains, including the ones in which it doesn’t work!

(I’ll be teaching this and other BDD / Cynefin techniques as part of the BDD Kickstart open course with Cucumber founder Aslak Hellesøy in Berlin, 22 to 24 October – tickets available here!)

Posted in bdd, complexity, cynefin | 2 Comments

Goals vs. Capabilities

Every project worth doing has a vision, and someone who’s championed that vision and fought for the budget. That person is the primary stakeholder. (This person is the real product owner; anyone else is just a proxy with a title.)

In order to achieve the vision, a bunch of other stakeholders have goals which have to be met. For instance:

  • The moderator of the site wants to stop bots spamming the site.
  • The architect wants the system to be maintainable and extensible.
  • The users want to know how to use the system.

These goals are tied in with the stakeholder’s roles. They don’t change from project to project, though they might change from job to job, or when the stakeholder changes roles internally.

Within a project, we deliver the stakeholders’ goals by providing people and the system with different capabilities. The capabilities show us how we will achieve the goals within the scope of a particular project, but they aren’t concrete; it still doesn’t matter how we achieve the capabilities. The word “capability” means being able to do something really well.

  • The system will be able to tell whether users are humans or bots.
  • Developers will be able to replace any part of the system easily.
  • Users will be able to read documentation for each new screen.

Often we end up jumping straight from the goals to the features which implement the capabilities. I think it’s worth backtracking to remember what capability we’re delivering, because it helps us work out if there’s any other way to implement it.

  • We’re creating this captcha box because we need to tell if humans are users or bots. Or we could just make them sign in via their Twitter account…
  • We’re adding this adapter because we want to be able to replace the persistence layer if we need to. Or we could just use microservices and replace the whole thing…
  • We’re writing documentation in the week before we go live. Or we could just generate it from the blurb in the automated scenarios…

By chunking up from the features to the capabilities, we can give ourselves more options. Options have value!

But chunking down from goals to capabilities is also useful. The goals for a stakeholder’s role don’t tend to change; neither do the capabilities within a project, which makes capabilities nice-sized chunks for planning purposes.

Features, which implement the capabilities, change all the time, especially if the capabilities are new. And stories are just a slice through a feature to get quick feedback on whether we’re delivering the capability or not (or understanding it well, or not!).

Be careful with the word epic, which I’ve found tends to refer indiscriminately to goals, capabilities, features or just slices through features which are a bit too big to get feedback on (big stories). The Odyssey is an epic. What you have is something different.

Posted in capability red, complexity, stakeholders | 4 Comments