An open letter to Bob Marshall

Dear Bob,

We’ve had our differences, mostly around a particular set of tweets you sent
out about the Pask award in 2010. I was respectful enough to address those
with you face-to-face. You refused to retract them, particularly the most
hurtful comment, and you told me at the time that I was “naive” for believing
that I won the Pask award fairly.

I told you that it would make me less likely to want to work with you, and you
said you were sorry to hear that. That was as close as I ever got to an
apology.

I didn’t want it to get in the way of learning, so I left it aside and reached
past it. I don’t believe that any one aspect of a person affects or defines
all aspects. I attended your RightShifting conference. I offered you some tips
from my experience in Dreyfus to help with your Marshall Model. I was as
forgiving as I feel I could be, and our community interactions seemed good.

I thought that things had become more positive. You reached out and offered me
a hug last time we met, which I reciprocated. I thought that perhaps you had
started to see that maybe I really did deserve that award; that my work was
meaningful. I had dreams that you might one day apologize.

Then, on Friday, this Twitter sequence happened.

@flowchainsensei: I’d like to work on advancing the theory and practice of software and product development. Can’t see that ever happening in the UK.

@somesheep: @flowchainsensei IME @tastapod @lunivore @PapaChrisMatts for example prove you wrong. You can make advancements. Not the ones you want?

‏@flowchainsensei: @somesheep Wise up, dude.

@PapaChrisMatts: How is this tweet consistent with #NVC &
#antimatterprinciple? Cc @somesheep @tastapod @lunivore

@flowchainsensei: @PapaChrisMatts Hobgoblins, little minds, etc. @somesheep @tastapod @lunivore

Bob, did you seriously just resort to name-calling? Because this Twitter sequence makes it look as if you did, and as if you’re doing it to Dan and me in particular. I also found that you appear to have blocked me on Twitter, with – as far as I can tell – nothing on my side to deserve it.

Oh, well.

I looked up the word “Hobgoblin” and found some very positive connotations. I
once played Puck, the most famous hobgoblin, as a child, and remembered that
while he’s mischievous, he’s not all bad.

Those that Hobgoblin call you, and sweet Puck,
You do their work, and they shall have good luck;
Are you not he?

So I am a Hobgoblin. I own that, and will as always seek to expand my “little
mind”. I take your insult and make it mine, and wish you good luck in return.
I will turn it round, and use it to create a light-hearted manifesto designed
to steer the mischief we all end up causing in a positive direction, and to
forgive others who cause it themselves.

Gentles, do not reprehend,
If you pardon, we will mend.
And, as I am an honest Puck,
If we have unearned luck
Now to ‘scape the serpent’s tongue,
We will make amends ere long;
Else the Puck a liar call;
So, good night unto you all.
Give me your hands, if we be friends,
And Robin shall restore amends.

If you want to meet up, have a drink, and talk about whatever the hell is
bugging you, I’m more than happy to do so. Just let me know how to get in
touch and make plans, since I can’t DM you any more.

Cheers,
Liz.

PS: I also apologise for the small jibe I made myself. It was petty and
unnecessary. Sorry.

PPS: I’ve also been pointed to the quote, “A foolish consistency is the hobgoblin of little minds,” which clarifies a lot but not everything. I did ask for clarification over Twitter but of course you probably didn’t see it! Either way, I am happy to join Chris in requesting that if you’re going to preach non-violent communication, please do more of it.

Posted in Uncategorized | 6 Comments

The Dream Team Nightmare: a Review

I remember Steve Jackson and Ian Livingstone very well. They were the duo who published the “Choose your own adventure” books in my childhood; games which took you from one scene to another, battling monsters, solving puzzles and collecting items that would help you later on. The objective was always to escape the dungeon with the treasure, or the rescued captive, or just you, intact… but always in dread of those two words happening too early: “The End”.

Now Portia Tung has recreated the genre, but this time the adventure is real. You are an Agile Coach, trying to help a failing team – already nominally Agile – to deliver their project in time. The dungeon is an office, the monsters are awkward people and overly optimistic managers, and you, the hero, start with only the wooden club of the Agile Manifesto; no armour, no sharp knives and no magic.

This adventure doesn’t provide hard-and-fast rules for “How to be an Agile Coach”, but it does remind me very much of my early days in that role. Mistakes are easy to make and costly; involving other people is hard work yet invaluable; and leaving yourself with no options will usually result in your demise – at least, the demise of your contract. Even though the book is far more linear than any of the early games – frequently taking you from one paragraph to the next – that mechanism did draw me in, keeping me reading from one scene to another. My experience did help me to avoid the obvious traps, but I went down those routes anyway, just to see what would happen, and found the results pleasingly realistic. Those two words – “The End” – also evoke just as much dismay when they happen without ultimate triumph.

I did miss some of the mechanics of the adventure books I remembered. There are no artifacts to carry around, and even though the artifacts of Agile Coaching tend to be knowledge-related, it would have been a nice twist. “Do you have a vision statement for the project? If so, turn to chapter 52…” Without these artifacts, the book tended to be focused very much on the process, rather than any results or metrics; a style of coaching that I try to avoid, as I often find it leads to process for process’s sake. Sometimes in the book this happened, and I found myself following patterns of work that seemed more habitual than useful, with no explanation as to why a particular practice might be a good idea. Often it also seemed that there really was only one way to succeed, whereas having different resources to draw on would have let me, and other coaches, choose our own routes to success. There’s usually more than one way to coach a team. This would have made the adventure less linear, too.

Still, while I was reading it with an eye to succeeding (rather than chasing down the failures to see what happened) I did find it a useful reminder of all the things that we know we ought to do as Agile Coaches, but frequently don’t when it comes to real life. The need for space and reflection is emphasized a lot, as is the expectation that you’re going to be putting a bit of work in above and beyond the obvious. Reaching the realization that the team were going to deliver, and that previously hostile monsters… I mean managers… were invested in realistic prospects of success, felt suitably triumphant, as did those two words, “The End”, in the right place.

Unlike the adventure books, there are different levels of success, too. It’s possible to fail while learning, to fail in ways that damage your career, or to succeed utterly. I enjoyed the reminder that failure isn’t always the end of the world.

I recommend this book for anyone wondering what it’s like to be an Agile Coach, or anyone who’s new to that role, or working within a larger organisation that could use that kind of help. I think it’s less useful for experienced coaches, but I would certainly advise anyone I was training as a coach to read it. And even though I found it less useful than I would have done some years back… it was still good fun!

Posted in coaching | 1 Comment

Disorder, or, How I Got a Black Eye

In the Cynefin framework, the Disorder domain is the one in the middle. It’s the state of not knowing what kind of causality exists. Over on the right we have the predictable domains of Simple and Complicated, and on the left the unpredictable Complex and Chaotic domains. When we aren’t sure which domain dominates, we tend to behave according to our preferred domain.

I’ve become used to seeing the Complex domain treated as if it’s Complicated by people who desire predictability. We do it all the time with Scrum, trying to estimate things we’ve never done before, or with BDD when we try to define outcomes in scenarios where the right answer isn’t certain. Getting a grip on Cynefin helped me to spot that very easily, but I laughed; of course I would never fall into Disorder myself!

The Story of the Green Screen

I can easily tell the difference between something that’s Complex and something that’s Complicated, and am very unlikely to treat the first as the second (Spoiler: all humans have bias). Of course, it helps that my preferred domain is Complex (like most developers).

Then one day, my photography screen arrived.

This screen turns up in a neat little circular bag, about a metre wide. When you take it out, you find three layers of circular wood, with some fabric smooshed together in the middle. As you unfold it, the wood starts to act as a kind of spring, and suddenly – Bang! – it pops out into a screen that’s 1.5m by 2m.

Huge.

Far too big for my little flat! So I needed to put the screen back away in its bag again.

I’ve found that lots of us who have a preference for the Complex also don’t always read the instructions before trying something out. I wrestled with the screen a bit. I couldn’t see how to get it back into the bag again. The wood is incredibly springy, so it takes a bit of strength to bend it back into shape. After a few minutes, I realised that reading the instructions would probably be helpful.

The instructions made no sense.

So I went looking for a video that I thought might explain it to me. I found this one, in which three photography/video professionals attempt to do the same thing. It had me giggling for a few minutes. At least I wasn’t the only person who struggles with this! At the end of the video they finally get the screen away. I watched, but I couldn’t see exactly how they’d managed it.

Still, they managed it after several tries. Trying something – experimenting – is the right thing to do in the Complex domain! So I thought I should experiment a bit more. After all, it worked for them…

I wrestled with the screen some more. I twisted. I turned. I pushed, and let go of one side for just one second…

Thwack! The screen popped out, whacking me in the eye. I’m very lucky it only drove my glasses back into my face, skidding them up onto my eyebrow, rather than breaking them.

After a few tears and a bit of a tantrum (it’s OK to be unprofessional when you’re alone!), I patiently looked for another video that would help. After all, someone had done this before, and others must have had the same problem, the first time they did this. After some searching, I came across this site, where the kind Dr. Daniel C. Doolan shows us how to do it with a smaller reflector, from several different angles, before progressing to the screen.

Finally, having learnt from the expert, I followed his steps. It still took a bit of strength, but – Plop! – the screen collapsed back into its metre-wide circle again, allowing me to pop it back into its case.

Of course, I couldn’t record any videos that week, on account of my swollen and multi-coloured face.

We Are All Biased to Our Domain

The main problem with my “experiment” was that it really wasn’t. Experiments are safe-to-fail, and wrestling with wood without wearing any safety goggles was a bit negligent on my part. I’m sure if I had asked a Tester they would have spotted the possibility of accidents. Testers are very good at coming up with scenarios we haven’t thought of, and not just in software development! Applying Cognitive Edge’s Ritual Dissent might also have helped me spot the problem.

But really, I didn’t need a safe-to-fail experiment. I should have seen that the problem was predictable. People had done it before, and the fact that the screen came with a nice little bag to store it in should have told me that the solution was repeatable, and merely required expertise – and not very much, at that. Once I understood the trick, it reminded me a little bit of a Rubik’s Magic game, which I used to play with as a child. So this was definitely a predictable problem, and learning from the experts was the right thing to do.

Of course the real problem was my impatience, and my bias towards my own domain.

Some Hints and Tips for Avoiding Disorder

A short while back, I wrote a blog post on how to estimate complexity. I’ve found it does help to bring people out of Disorder. Particularly, I’ve found it useful to consider whether someone has solved a problem before, in the same or similar context, and whether their expertise is accessible. We developers do like to “reinvent the wheel”, but a lot of times it’s not really necessary when the problem has been solved before (Complicated). Project Managers and Scrum Masters often demand predictability where none exists, and recognizing that when nobody has done it before the outcome will be emergent can help us communicate the need for experiment (Complex).

Occasionally making sense of our domain is itself a complex process, because we’re human. So if you’re not sure which domain you’re in, here are my hints and tips for making sense of the domain safely.

  • If you have several solutions and you don’t have enough information to be certain which one is right, pick the one that seems right and is safest and easiest to change. In a Complex domain this will be safest to fail, and in a Complicated problem with several solutions (Good Practices as opposed to Best Practice) chances are that the easiest to change is also the simplest to understand. If it turns out to be wrong (Emergent Practices) then you will have better information for making the right choice. This is Real Options 101.
  • If you don’t know whether a solution exists or not, make sure that your experiment really is safe to fail. Ritual Dissent, Black Hat or Evil Hat thinking, and bringing problem-focused Testers into conversations are all useful ways of checking this.
  • Try looking it up. Google, StackExchange, and our many ways of accessing the Lazyweb give us several fairly safe-to-fail experiments, right there.
  • For those of you who love Chaos and prefer command-and-control, treating everything as an emergency, consider delegating to someone else when it isn’t. It’ll be less stress for everyone involved. And for the rest of us, remember that we tend to treat things this way when we’re stressed, because everything feels urgent. Let other people support you occasionally, especially if you’re feeling low on personal resourcefulness.
  • Try bringing people who have different biases into your conversations. As far as I can tell, Testers and Myers-Briggs J-Types generally prefer Complicated domains; developers prefer Complex; children prefer things Simple; emergency services personnel specialize in dealing with the Chaotic. Their perspective can help. Yes, even (sometimes especially) the kids.
  • Be aware of the bias of your own domain, and be forgiving of yourself when you get it wrong. You yourself are a Complex creature, and everyone fails occasionally, whether it’s safe to do so or not.

Like this? Want to know more? I will be running a workshop on BDD with Cynefin in Brisbane, 11th December, as part of Yow! Australia. Registration is still open!

Posted in complexity, cynefin, evil hat, real options, Uncategorized | Leave a comment

BDD Before The Tools

Another client approached me today about BDD and using tools like Cucumber to automate scenarios.
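For anyone who hasn’t seen that path up close, this is roughly what an automated scenario looks like. The sketch below uses behave (a Cucumber-style tool for Python) purely as an illustration; the scenario, the Basket class and the step wording are all invented.

    # features/basket.feature (plain text, written from a conversation):
    #   Scenario: Adding an item to an empty basket
    #     Given an empty basket
    #     When I add a copy of "The Dream Team Nightmare"
    #     Then the basket contains 1 item
    #
    # features/steps/basket_steps.py - the automation glue:
    from behave import given, when, then

    class Basket:
        def __init__(self):
            self.items = []

        def add(self, title):
            self.items.append(title)

    @given("an empty basket")
    def step_empty_basket(context):
        context.basket = Basket()

    @when('I add a copy of "{title}"')
    def step_add_item(context, title):
        context.basket.add(title)

    @then("the basket contains {count:d} item")
    def step_check_count(context, count):
        assert len(context.basket.items) == count

The glue code is the easy bit; the value lives in the conversation that produced the scenario.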

There are a few things I’d love to see teams develop as a focus before heading down the tools path. You may already be doing these things, and if you are, fantastic! You may find the tools helpful (and you may decide that you have enough benefits without them). Otherwise, these are things I would like to see in place first.

An eye on the big picture.

Who are the stakeholders? Who championed the project and who else has needs that have to be met to make that vision live? Answering this also helps teams come up with much smaller MVPs. It should help to define which scenarios are in and out of scope, and also tends to lead to better language and better appreciation of who to talk to to get hold of the scenarios in the first place. See Impact Mapping or Feature Injection, both of which are great for this. Also Tom Gilb’s work on “Evo”. And this blog post on capability planning.

The ability to question and explore.

I generally use two conversational patterns when talking through scenarios. Testers are extremely good at coming up with other scenarios we might have missed, stakeholders whose outcomes need to be considered, etc. Communication around the scenarios should not just be one-way.

The ability to spot and embrace uncertainty.

This is often the first thing that goes out of the window when people head down the tools path. If something has never been done before, it’s highly unlikely that the business will know exactly what they want. They might not be able to define all the scenarios. If the team hear that there’s uncertainty or conflict in the requirements, then rather than trying to nail it down, they can collaborate to try something out instead – spiking, prototyping, A/B testing, etc. The scenarios will emerge. This lets teams focus on risk.

(I’ll be talking on this extensively at Modern Management Methods (Lean Kanban UK) next week – please get in touch if you’d like my speaker discount code!)

Great relationships between people.

If people want to use BDD to understand how to help the business better, to help out the testers, or to help the developers code the right thing, then I want to see evidence that that desire to help genuinely exists. There are lots of small ways to help each other that don’t involve BDD. A focus on “stop starting, start finishing”, limiting WIP and helping each other finish off work and get feedback on it, is a good sign. A heavy focus on estimates, commitment and “sprint failures” is not.

If you have these things, go ahead and use the tools if you want to.

It normally takes a couple of weeks to a couple of months for teams to get the hang of the conversational side of BDD. During this period a lot of successful teams use the half-way house of just writing the scenarios down, perhaps on a wiki, where they’re easy to access and change. Some teams focus particularly on the scenarios which are unexpected or from which they learnt a lot, and that’s a good thing. I like to see devs writing the scenarios down if anyone does, as they can then play them back to the business afterwards, and it forces the conversations to happen.

Things I see when teams don’t have these and go down the tool route include:

  • The team ends up producing the wrong thing (which is now hard to change, because automation also cements it in place).
  • The team often stop having conversations altogether, as the business or BAs start writing scenarios down ahead of time and passing them to the devs, losing the ability to spot any misunderstandings.
  • Innovation is stifled and the scenarios start to become used as “contracts” for what the devs produce, creating an “us and them” culture, undoing all the good work of Agile, and pushing risk towards the end of the project.
  • Any schisms which were there before are hugely magnified, and the team are now starting to head down a Waterfall path, only with lots and lots of scenarios instead of lots and lots of paper.
  • Sprint planning meetings take 4 hours.

The good news is that if you do go down the tool route and find this happening, it can be reversed fairly quickly by focusing on the conversations again.

In software development, tools should support your interactions, not define them.

If you’d like help adopting these ideas, I run a 1-day BDD course exclusively focused on the conversations, and a “BDD: Train the Trainers” course to help you run the same day in your organizations. The course is highly interactive and can be run remotely for up to 6 people, over 5 mornings or afternoons, depending on timezone. Reasonable remote support afterwards is included. Please contact me on liz at lunivore.com for details!

Posted in bdd, complexity, stakeholders | 6 Comments

A Separation of Concerns

Out at Agile Eastern Europe in Kiev last month, I was privileged to see Bjarte Bogsnes and his talk on “Beyond Budgeting”.

One of the elements that struck me most profoundly was when he separated the concerns inherent in budgeting. He noted that it was used for three things:

  • creating targets
  • forecasting
  • allocation of resources.

The combination of all three, he noted, led to a lot of gaming and anti-patterns. By separating these concerns and addressing them individually, Statoil was able to adopt dynamic, transparent leadership and processes that have allowed them to be innovative at very high levels.

I (and others) see the same problems happening with estimation, which is often used for:

  • predicting, and hence gaining the trust of stakeholders
  • analysis and understanding of requirements
  • prioritization

There are of course different ways to predict (I like probabilistic predictions from past data, as long as it’s in a predictable domain) and to gain the trust of stakeholders (deliver something!). Talking through examples of how a system might work is a great way to understand requirements, and particularly to uncover risk in those requirements. Prioritization is much easier when the risk is uncovered (and the easier it is to estimate, the less likely it is to contain all the discoveries that derail projects!) So estimation is not the only way, and may not be the best way, to do these three things.
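As a concrete illustration of probabilistic prediction from past data, here’s a minimal sketch that resamples historical cycle times to forecast a batch of similar stories. The numbers are invented, and it only makes sense in a reasonably predictable domain:

    import random

    # Invented history; in practice, use your own team's cycle times (days).
    past_cycle_times = [2, 3, 3, 4, 5, 5, 6, 8, 13]
    stories_in_batch = 10
    trials = 10_000

    totals = sorted(
        sum(random.choice(past_cycle_times) for _ in range(stories_in_batch))
        for _ in range(trials)
    )

    # Report a range, not a single date.
    p50 = totals[int(trials * 0.50)]
    p85 = totals[int(trials * 0.85)]
    print(f"50% of simulations finished within {p50} days; 85% within {p85} days")

The output is a range with probabilities attached, which is a far more honest conversation to have with stakeholders than a single number.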

Increasingly, I’m coming to see a similar conflation with BDD, which is used for:

  • exploring what the system should do
  • determining whether the system does what it should do (which is different to whether it’s actually useful)
  • reducing the burden on the testers

But increasingly I’m seeing people use BDD as the only way to explore what the system should do, to determine whether it’s correct (let alone useful), and to help the testers out.

It’s OK to do traditional testing. It’s OK to knock up a small widget which goes and gets that information from those 3 different places so that the testers don’t have to go searching for it every time (and widgets like this might end up helping with automation too). And it’s OK to talk through what the system should do in whatever language suits you. BDD is just pretty good at those three things, but it isn’t the only way.

These are also three different things that seem to get mixed up together and confused:

  • analysis
  • testing
  • monitoring

And I’m seeing people struggle because BDD isn’t particularly good for things which require monitoring, and any capability which applies to the whole system generally requires monitoring rather than, or at least as well as, testing (code quality and maintainability, security, performance, etc.)

If you’re finding that a process isn’t working for you – be it BDD or anything else – then maybe have a look at what values you’re getting from it; what values others are getting from it; and what other options exist to fill in those gaps.

Posted in bdd, business value | 13 Comments

Capability-based Planning and Lightweight Analysis

As a follow-up to my post on estimating complexity, I wanted to share one of the interesting things I’ve been doing with the ideas and the numbers. This is quite long, but it’s turned out to be faintly revolutionary, and as far as I can tell it works, for the few clients I’ve done this with. It’s also based on ideas that are tried-and-tested by more than just me. Please do share any experiences that you get as a result of this, and especially places where it fails!

The Problem

One of the things that always bugged me about Scrum and similar processes is the idea that we start with a backlog, seemingly full-fledged from thin air. Sometimes the backlog has been drawn from an initial vision and is focused and prioritized appropriately, but more often I find that it’s been created using a “bucket” of requirements, and the bucket has come about because of old, Waterfall habits. When a business department know that they will only get one chance to sign up for what they need, they tend to create options for themselves (because options have value) by signing up for what they might need, and this habit has persisted. They don’t realize that because we now keep our codebase self-documenting and easy to change (which we do, right?) they still have those options, even when the first release is small.

Over the years, the development teams have fought against this tendency in both business and in themselves (YAGNI!), but I’ve still seen plenty of teams with over a year’s worth of fine-grained stories in their backlog, often with acceptance criteria sketched out. Sometimes those stories can take a week or two to put together, and I’ve sat in more than one multi-day release planning meeting.

I don’t like long meetings. I tend to fall asleep after an hour or so, even if my eyes are still open, and looking around the room I get the feeling that I’m not alone. So I’ve picked up my toolbox and gone looking for another way to do it.

It turns out that what I do is a bit similar to Gojko Adzic’s Impact Mapping. That’s not a surprise; both of us were inspired by Chris Matts’ Feature Injection ideas. Adding the complexity estimates (or, working out which things we know least about) is what makes the big difference, for me.

Let’s have a look at the process. As always, it’s a process that focuses on people and interaction, so it should be really lightweight. If you’re finding yourself bogged down by this, you’re probably focusing too much on the destination, and not enough on the journey. It’s OK to get just enough to start with, then start, and worry about the rest as you go.

Differentiators, Spoilers and Commodities

In “Domain Driven Design”, Eric Evans talks about the “core domain”; the thing that creates a company’s niche in a market. I came across the concept again in Waltzing with Bears, which starts Chapter 1 with the words, “If a project has no risks, don’t do it.” Dan North and David Anderson both used the words “differentiator”, and that was the one which rang best with me.

A differentiator is the thing which makes our company different to other, similar companies. It’s what makes this feature different to similar features. It’s the difference between doing it manually and having a computer help you. It’s being able to see a visualization instead of numbers; making it free instead of paid-for; making it expensive and exclusive instead of cheap and freely available. Identifying the differentiator for a project is the heart of this process.

The differentiator is at the heart of every thriving company, every useful product and project, and every successful person. Whether it’s Apple’s marketing strategy, or Wikipedia’s army of volunteers; the maintainability of that replacement system compared to the legacy, or writing the Game of Life using that language that you could never get your head around, doing things differently is important.

I like Niel Nickolaisen’s Purpose Based Alignment Model too, for helping us work out what to do when something can help us in the marketplace, but isn’t actually differentiating for us.

That’s relevant because when we see someone else’s differentiator, we might want it for ourselves, and vice-versa. We create spoilers from those differentiators. Kyocera made the world’s first cameraphone, and then Nokia popularised the idea. Nowadays that differentiator has been so spoilt, it’s commoditised; almost all phones have cameras in them, and often more than one.

The trouble is that when we ship a differentiator, we still have to ship all the commoditised, boring gubbins that comes with it. A phone still has to be able to make calls, receive calls and look up numbers; a cameraphone that doesn’t do that is… well, just a camera. So usually for every differentiating capability, there will be a number of commoditised capabilities that we need to ship too.

Drawing out Capabilities

Usually, every project has a primary or core stakeholder; a person or group of people who have fought for the idea, got hold of the budget or provided it themselves, and who have the primary interest in seeing their idea turned into something real.

(The primary stakeholder is the Product Owner. It doesn’t matter what title we give to other people; they can only be proxy POs at the most.)

In addition to the primary stakeholder, we also have secondary or incidental stakeholders whose goals need to be met to go live. (I have had people object to being called incidental, and others to being called secondary, so watch for people’s reaction to the language used here and find alternatives if you need them!)

We only need to meet the goals so that the secondary stakeholders will tolerate us going live (and Tom Gilb’s work on “Evo”, and putting numbers on these goals, helps a lot).

In order to deliver the goals – both the primary goal, and the secondary ones that need to be met – we deliver system capabilities. In a lot of cases these will be the same as the goals, but they may also be fairly low-level. These often map to what Agile teams call epics, though I’ve seen that word refer to everything from capabilities to features to stories that are just a bit too big for a sprint.

The capability is, quite simply, the capability for the business to do something, or assure something. For instance, we might have:

  • The capability to comment on an article
  • The capability to be on sale in the Apple Store
  • The capability to make a copper trade
  • The capability to prevent traders going over a counterparty limit
  • The capability to stop bots spamming the site
  • etc.

Some of these capabilities will be discrete in nature, and others will be cross-cutting, affecting multiple features related to other capabilities. We’re used to handling the discrete ones, and I’ll talk more about some ideas around cross-cutting capabilities in another post.

Sometimes people already have features in mind. If that happens, I encourage teams to help them “chunk up”, either by asking “Why?” or by asking, “What will that get for you (that you don’t already have)?” This helps people focus on the real goal, instead of worrying about the implementation.

If our capabilities could be met with enough people and bits of paper, or magical pixies that would run round and do the job, it’s probably phrased at the right level. If we’re including pieces of UI, or even looking at which devices it’s going to run on, we’ve dropped down to a feature instead of the capability.

Talking about features and breaking capabilities down into them can be useful for exploring those capabilities, and getting a good understanding of them, so it’s not a terrible practice. I encourage us to learn the difference, though. Knowing what capability is being achieved can help teams to think of other ways of achieving it, and those other ways may also end up being differentiating!

Uncertainty in Differentiators

We can easily see that when we create a new differentiator in a market, we have no guarantees that it will work. We don’t know quite what the outcome will be, or if we will find a working outcome at all.

Even when we’re spoiling someone else’s differentiator, we know that the outcome can be achieved, but we don’t usually know how to do it, or how hard it is. Perhaps the poor manager who took over that differentiating feature was fired for taking too long, and the company then shipped it anyway. Who knows?

These match the levels 5 and 4 on my complexity estimation scale. Once we get down to about a 3, it’s less of an issue; someone has done this before, and they work in our organisation, so we can go get the information. (Anything which has been done in open-source is usually a 3 at most, too, since we can get hold of the “how”.)

So, we now have all the tools to run a capability-based release planning session!

The Process

  • Identify the primary stakeholder and the vision that’s going to be released.
  • Identify any other stakeholders whose needs absolutely have to be met. If there are a few stakeholders that you miss out because you’re so used to meeting their needs, don’t worry too much about it. The important thing is to find the stakeholders who are most likely to stop us from going live.
  • On big cards, write the capabilities that you will need to achieve in order to meet all the goals. These will usually be no more than a few words. You can chunk up with “Why”, and down with “Can you give me an example of that?”, or break things into features, in order to explore the capabilities. We may discover more as we do this! You won’t need to explore commoditised capabilities much (“the capability to register users”) but if you find anything differentiating, it will be worth having a conversation. I find 20 minutes to 1 hour per differentiating “thing” is enough (though you might start with more until you have practice), scaling down to 5 or 10 minutes for well-understood capabilities.
  • If you’re not sure whether something is differentiating, you can ask, “Is there anything different about how we do this compared to normal? Or compared to how we did it in X project? Or compared to how someone else does it?”
  • Estimate the amount of ignorance around each capability on a scale of 5 to 1, where 5 is “we know nothing” and 1 is “we know everything”. You can use the scale I mentioned in the previous blog post if you prefer; it’s much the same. You can also break off small parts of capabilities, if there’s only one part that’s differentiating – either as a smaller capability, or as a feature within it (but look for non-differentiating ways to achieve the same thing as they’ll be less risky, and if the feature is differentiating, look for the capability that it’s delivering).
  • We should, by now, have at least one 5 or 4 in the capability backlog!
  • If we have no 5’s or 4’s, we can ask what’s different about this project compared to other, similar projects. If we can’t find any, we’re probably better off partnering with someone who’s already done it, or getting it off the shelf, than doing it ourselves!
  • If we have more than one 5 or 4, it’s possible that we’ve actually got a couple of different visions here. Is there a way to deliver one without the other? Would it be valuable?
  • Once we’ve narrowed down what we need to do for the release, we have a Minimum Viable Product – either something valuable, or something that we’ll learn from quickly!
  • We can even put estimates on the capabilities if you want. I recommend breaking them down only as far as features if you really want to do this, and using historical data about how long previous, similar features have taken, to estimate. I’ve also done this with story points made in 10s and 100s, just to show that there’s still lots of uncertainty around this. Don’t worry if you can’t get accurate estimates for the 4’s and 5’s, because…

Unlike the big backlog of requirements, there’s no point in delivering the most valuable thing first. We’ve already prioritised for value! We need everything in the release. If we don’t need everything, then we can make a smaller release!

So rather than prioritising for value in this backlog, we can prioritise for risk, and the risk will be the things we’re most ignorant about.

It will be the 4s and 5s.
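If it helps to see that as a concrete ordering, here’s a tiny sketch; the ignorance scores are invented, and the capabilities are borrowed from the examples above:

    # 5 = we know nothing, 1 = we know everything. Most ignorant first.
    backlog = [
        ("Register users", 1),
        ("Comment on an article", 2),
        ("Make a copper trade", 3),
        ("Prevent traders going over a counterparty limit", 4),
        ("Stop bots spamming the site", 5),
    ]

    for capability, ignorance in sorted(backlog, key=lambda c: c[1], reverse=True):
        print(f"{ignorance}  {capability}")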

As we take each capability, we can start breaking down the features and analyzing them, starting with the most risky of the features. There are some tricks here, too. You may not have to break down everything to start work.

Deliberate Discovery

A lot of Agile techniques are about predictability, and address the risk of integration, etc. by making vertical slices through the codebase, from end to end. That’s great when you know where the risk is! But for 4s and 5s, we often don’t know what we don’t know. There are tons of discoveries waiting to be made! So anything we can do to short-cut the process and make the discoveries earlier will help.

For instance, one of our early stories might involve a new piece of UI. The last time this happened, we hard-coded some data behind the UI and got it in front of users. It had been analyzed to death by UX experts, but I wanted to do this just because it was new. I didn’t worry about getting information from the database, because that wasn’t particularly risky. In this instance, the traders decided they didn’t like the UI! But it was very early in the project, so we had lots of time to come up with another one.

Another story involved the legacy system. We hard-coded a trade and made sure we had a way of getting that trade into the system, and that it looked much like the legacy system’s trades. We didn’t worry about using a real trade, or validation, or even session vs. request scope, because those were things that we understood (or had access to people who did), and which we could fix, later.

I wrote a post on how to split up stories back here. The important difference is that this time, we’re not focusing on whole slices. We might want to do that once we have validation, but until then, let’s just get some feedback.

Some concerns I’ve encountered

But I’ve already got a backlog!

Identify your main, championing stakeholder, and ask them to explain the vision to you.

Get hold of Gojko’s “Impact Mapping” book. Look at which capabilities your stories are delivering, and who the concerned stakeholders for each capability are.

If you’ve already got a backlog, chances are that there are no “technical stories” on there. You’re probably missing a few stakeholders, possibly ones who will stop you going live. Once you have the capability map, tout it around. Get it up somewhere visible. The other stakeholders will emerge.

If in doubt, try to get something live, early. There’s nothing like being wrong to find people who will tell you what “right” looks like!

We can’t make releases that small!

Get a copy of “The Phoenix Project”, and read it. The short answer is: you have just identified an organisational constraint. Find out if there’s a way of decoupling the deployment process, particularly, or if there are better ways of passing certifications. Remember that your target is not the process itself, but the end goal of that process.

I used to think that hardware manufacturers had a bit of an excuse. After all, you can’t update hardware that quickly, right? But then I saw what Joe Justice was doing with Wikispeed and cars, and how he was passing on those techniques to other manufacturers in different disciplines. I don’t have a huge amount of experience with hardware, but forgive me for being skeptical these days of excuses.

The PMO Office want us to do Scrum / Waterfall / something else!

The PMO office want to know that you’re addressing risk sensibly. The interactions I’ve had with people involved in governance over this process have been extremely positive (in one case leading the PMO member to sit with the team to see what they were doing!) If you’re worried about any particular person or group, treat them as an additional stakeholder. Bring them in, ask them what they need (and why), and include their concerns in the capability backlog.

This project is being done for political reasons, and this process is getting me into trouble!

Congratulations! You’re already a long way further than a lot of people. This means you’re awesome at your job, and will find another one easily. I will be happy to help, especially if you’re in the UK. Get in touch.

Can you help us do this?

Yes, if you’re in the EU or prepared to sponsor a visa. Please get in touch – liz at lunivore.com. I’m pretty good at fitting my work to budgets, and can even mentor remotely.

Enjoy, and let me know how this works for you!

Posted in bdd, capability red, complexity, cynefin | 10 Comments

Estimating Complexity

Over the last few years, teaching people the Cynefin framework early on in engagements has really helped me have useful conversations with my clients about when different processes are appropriate.

There’s one phrase I use a lot, which is self-evident and gets around a lot of the arguments about how to do estimation and get predictability in projects:

“It’s new. If you’ve never done it before, you have no idea how it will turn out.”

This is pretty much common sense. When I teach Cynefin, I also help management and process leads look for the areas of projects or programmes which are newest. These are the areas which are most risky, where the largest number of discoveries will be made, and often where the highest value lies.

A Simple Way to Estimate Complexity

There’s one kind of work which is urgent and unplanned, and we don’t tend to worry about measuring or predicting, because it absolutely has to be done: urgent production bugs, or quick exploits of unexpected situations. This matches Cynefin’s chaotic domain; a place which is short-lived, and in which we must act to resolve the situation lest it resolve itself in a way which is unfavourable to us.

Aside from this domain, all other planned work can be looked at in terms of how much ignorance we have about it.

Something I often get teams to do is to estimate, on a scale of 1 to 5, their levels of ignorance, where 5 is “complete ignorance” and 1 is “everything is known”.

If a team want a more precise scale, I’ve found this roughly corresponds to the following:

5. Nobody in the world has ever done this before.
4. Someone in the world did this, but not in our organization (and probably at a competitor).
3. Someone in our company has done this, or we have access to expertise.
2. Someone in our team knows how to do this.
1. We all know how to do this.

You can see that if a piece of work is estimated at “5”, it’s likely to be a spike, or an experiment of some kind, regardless of how predictable we might like it to be! This matches Cynefin’s complex domain, and sits at the far edge, close to chaos, since we don’t yet know if it’s even possible to do. 4s are also a high-discovery, complex space; we know someone else has done them, but we don’t know how.

As we move down the numbers, so we move through complicated work – understood by fewer people, whom we might consider to be experts – through to simple work that anyone can understand.

We can also measure this complexity across multiple axes: people, technology, and process. If we’ve never worked with someone before, or we’ve never made a stakeholder happy; if there’s a UI or architectural component that’s unusual; if there’s something we’d like to try doing that nobody has done; these are all areas in which the outcome might be unexpected, and in which – as with Cynefin’s complex domain – cause and effect will only be correlated in retrospect.
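If it helps to make that concrete, here’s a small sketch of the scale applied per axis. Taking the highest score as the one that matters is my own simplification, on the assumption that the least-known aspect dominates the risk; the scale itself doesn’t prescribe how to combine the axes:

    # The ignorance scale, applied per axis. The scores below are invented.
    IGNORANCE = {
        5: "Nobody in the world has ever done this before.",
        4: "Someone in the world did this, but not in our organization.",
        3: "Someone in our company has done this, or we have access to expertise.",
        2: "Someone in our team knows how to do this.",
        1: "We all know how to do this.",
    }

    estimates = {"people": 2, "technology": 4, "process": 1}

    axis, score = max(estimates.items(), key=lambda kv: kv[1])
    print(f"Riskiest axis: {axis} ({score}) - {IGNORANCE[score]}")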

Embracing Uncertainty

Helping teams to be able to estimate the complexity of their work has had a number of interesting outcomes.

Devs are happier to provide estimates in time alongside the complexity estimates. There’s nothing like being able to say, “It’ll take about 20 days, and if you hold us to that you’re an idiot,” with numbers!

Management can then use the estimates to make scoping decisions about releases (in the situations where an MVP might not yet be doable due to large transaction costs elsewhere in the business, like monolithic builds or slow test environment creation). We can also make sensible trade-offs, like whether to use an existing library, or build our own differentiating version now rather than later.

When the scope of a project is decided, be it an MVP or otherwise, it’s very easy to see where the risk is, in the project – and to do those aspects first! Even at a very high level, if a team are delivering a new capability for the business, we can still talk about how little we know about that capability, and in what aspects our ignorance is greatest.

When it comes to retrospectives, rather than treating actions as definitive process changes, teams can easily see whether it’s something that will predictably lead to an improvement, or whether it should be treated as an experiment and communicated as such (and that last can sometimes be important – the worst commitments are often the ones we don’t realise we’re making!)

And best of all, rather than pushing back on business uncertainty (“I’m sorry, we can’t accept this into our backlog without clear acceptance criteria”), the teams embrace the risk and potential for discovery instead (“What can we do to get some quick feedback on this?”) They can spike, learn from the spike, then take their learning into more stable production code later (Dan North calls this “Spike and Stabilize”). Risk gets addressed earlier in a project, rather than later. Fantastic!

And all you need to do, to enjoy this magic, is estimate which bits of your work you, and the rest of the world, know least about.

Making Better Estimates

One of the things I’ve noticed about development teams is that they often like to make everything complex, particularly the devs.

Testers are very happy to do the same thing over and over again, with minor tweaks. Their patience amazes and inspires me, even if they are utterly evil.

Devs, on the other hand, will automate anything they have to do repeatedly. This turns a complicated problem into a different, complex one.

The chances are that if we’re actually in a well-understood, complicated domain, rather than a complex one, someone will have solved the problem already and – because we hate having to do the same thing twice – they’ll have written up the solution, either in a blog post, or a StackOverflow or other StackExchange answer, or as an open-source library.

So before you go off reinventing the wheel, you can perform a few searches on the internet to see if anyone has some advice for you first. This can help you work out whether your work really is complex or not.

The Evil Hat

One of the things we need to do in the complex domain is ensure that any experiment is safe-to-fail.

A pretty easy way to do that is to put on the Evil Hat, and think about how you could plausibly cause the experiment to fail. You know – for fun. Think about how you could do it in the most destructive way possible. Then try to think of ways that the good people might stop you, the nasty one, from doing that.

Cognitive Edge have a great method called Ritual Dissent, that’s very similar to the pattern Fly-on-the-Wall that Linda Rising taught me some time ago. This is similar to putting on the Evil Hat, or at least, inviting others to do so.

If you have any difficulty coming up with ways in which to cause an experiment to fail, try asking a tester. They’re really evil, and very, very good at breaking things.

Lastly, take a look at Real Options, a significant part of which is about making decisions into experiments instead. (Another part of it is about getting information before decisions are made, so it plays nicely with both complicated and complex spaces, and even helps us move our problems between them).

Since we don’t always know what we don’t know, and, in a genuinely complex space, things which worked last time might not work this time, it’s a pretty useful tool for when we’re not sure exactly how little we know, too.

Coming Up

The complexity estimates turn out to be all kinds of useful. I’ll be writing a couple more blog posts soon; one about capability-based release planning (which I’ve touched on here), and one about pair-programming, including how it relates to complexity.

Watch this space!

Edit 2019-04-12: Also made it more obvious that 4s are complex too, even if not quite as much.
Edit 2021-06-22: Turned the scale upside-down (5s first) to match how I’ve found it best to teach it.

Posted in capability red, complexity, cynefin, evil hat, real options | 27 Comments

Behavior-Driven Development – Shallow and Deep

I had a bit of a chat with Eric about Shallow Kanban. The idea with Shallow Kanban is that you’re doing the basics – visualizing the workflow, improving collaboratively, limiting WIP, making policies explicit, etc. – but perhaps only some of them, or just doing it within your team and not looking more widely at your company, business and IT risk, and how you can contribute to the company’s overall goals. You might get considerable improvement, but there’s plenty left to do.

“It’s like people who say they’re doing BDD, but they just use Cucumber to automate scenarios without actually talking to anyone,” Eric said.

“That’s not Shallow BDD,” I replied. “That’s not BDD at all. In fact, if they do that, they’ll probably run into trouble. Shallow BDD is when you just have conversations around scenarios, and you’re not capturing the scenarios or trying to automate them. You’ll get considerable benefit from that, and there’s plenty more to look at, but it’s a good start.

“If you’re not having conversations though, you’re not doing BDD at all. You might be doing Acceptance Test Driven Development… but we always reckoned that if that was done well it would look like BDD, and if you’re not having conversations, you’re not even doing ATDD well.”

In this post, I’m going to call out two common anti-patterns I’ve seen with BDD adoption, and suggest some ways of deepening it that might help avoid them.

Anti-pattern 1: The BAs* get bored.

* By BA, I mean “someone who can analyse that aspect of the business” – might be a PO, domain expert, or user.

  • The team starts by having conversations.
  • The BAs write the conversations down. The devs automate the scenarios.
  • The BAs get bored with having the conversations, so they start writing the scenarios in the tool ahead of time.
  • The devs go back to misunderstanding the scenarios (less than before BDD, but more than when they were having conversations). They’re also now translating the steps that the BAs have written into ones that they already have, so they can automate them more easily.
  • The BAs are also spending more time coming up with scenarios, so they don’t like it when the devs show them that something can’t be done easily.
  • The devs take on work regardless of how easy it is. Conversations about alternatives never happen. It’s harder for the business to change their mind, because of all the scenarios.
  • Innovation is stifled. Everyone is sure there must be a better way but can’t see how.

Anti-pattern 2: The Devs get bored.

  • The team starts by having conversations.
  • The devs write the scenarios down and automate them.
  • The devs get bored with having the conversations and start writing scenarios themselves.
  • This seems to be working! Planning meeting times have now gone down. There are fewer conversations.
  • The BAs no longer care about the scenarios. They’re happy because the planning meeting times have gone down too.
  • The devs deliver the wrong stuff. And it’s hard to change, because of all those scenarios they were writing.
  • The BAs lose trust in the devs, and they don’t want to read any of the scenarios because the feature files are boring and lengthy.
  • Delivery is stifled. Everyone is sure there must be a better way but can’t see how…
  • …unless they get the BAs to write the scenarios again instead.

Deepening BDD: Don’t Be Bored, Stop Pretending

  • Find the places where the conversations are least boring. Focus on those. They’ll probably be the places where things are new and interesting, and where lots of discoveries will be made.
  • If the BAs are uncertain about outcomes of scenarios, that’s a good call to try something and see what happens, instead of forcing them to be certain (see video “When Outcomes Don’t Come Out“). Good teams will always be looking for different ways to achieve an outcome, or possible scenarios that might go wrong, especially if it’s something new. That’s really hard to do if you’ve already decided what the scenarios are up-front and you have no chance to talk about them.
  • If the conversation is getting boring, the chances are it’s something that the devs have done before. In this situation, it’s OK to let the devs go code something. The chances of getting it wrong are small. The chance of spotting wrongness quickly is big. A question I ask a lot in this situation is, “Is there anything different about X this time, compared to how it’s normally done?” That brings the conversation back to anything interesting. If there’s nothing different, there’s probably a library that will do it. This boring stuff is *very unlikely to change*. It probably doesn’t need regression testing, except for maybe some unit tests to help with the internal design. It needs isolating as a service or a well-defined module. It might need monitoring. It doesn’t necessarily need automated scenarios. Logging in, user registration or email gateways are all good examples of this.
  • If you write the scenarios down, don’t use an automation tool for the first pass. Use a wiki. I recommend that the devs do the writing, then immediately send it back to the BAs for review, to see if there’s anything they misunderstood or missed out.
  • Remember that BDD isn’t about testing; it’s about examples of behaviour. If you’re going to automate, think about whether you have enough examples. For instance, use a couple of examples of validation, rather than all 32 possible failures for that screen. Unit test the rest (there’s a sketch of this after the list).
  • Remember the pyramid. A small number of full-stack scenarios, some integration tests, lots of unit tests. The longer your tests take to run, the fewer of them there should be.
  • Refactor your feature files, then use that to refactor your code and architecture. If a more boring scenario’s behaviour is covered in a more interesting scenario, delete the boring one. If there are too many scenarios, consider which behaviour might be boring and see if you can refactor that code into a service or well-defined module. It’s boring because it’s not likely to change any more. Make it so that you don’t need to regression test it, and delete the scenarios or move them to that new service.
  • Towards the end of a project, all the pieces left ought to be low-risk stuff that just needs finishing off to get the product shipped. If you’re in that space, you won’t need to talk through scenarios so much. That’s fine; you don’t have to do BDD – but if you do do it, have the conversations! And next time round, see if you can ship earlier by focusing on what’s different and interesting.
  • If your conversations are effective, you’ll find that they steer the project, saving you work, calling out risk, and helping you to build and architect your software appropriately. If they’re not doing this, look to see if the conversations you’re having are truly effective. If it’s not the conversations, look to see if there’s some place in the process where people are pretending that conversations have been had – writing scenarios for the devs, or vice-versa. Eliminating that, either by having conversations, or making them unnecessary, might help.
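Here’s the sketch I promised for the validation point above – a couple of behaviour-level examples, with the remaining edge cases pushed down into fast unit tests. The validation rule and the names are invented:

    import pytest

    # Invented rule: a username must be 3 to 20 characters and alphanumeric.
    def validate_username(name):
        return 3 <= len(name) <= 20 and name.isalnum()

    # One or two examples are enough to anchor the conversation and the
    # feature file...
    def test_accepts_a_typical_username():
        assert validate_username("lunivore")

    def test_rejects_an_obviously_bad_username():
        assert not validate_username("x")

    # ...while the remaining edge cases live here, not in automated scenarios.
    @pytest.mark.parametrize("name,expected", [
        ("abc", True),
        ("ab", False),
        ("a" * 20, True),
        ("a" * 21, False),
        ("has space", False),
        ("", False),
    ])
    def test_username_edge_cases(name, expected):
        assert validate_username(name) == expected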

If you’re not having conversations, you’re not doing BDD.

The shallowest form of BDD consists of just having conversations around scenarios – not capturing them, not automating them, but just having the conversations.

If you’re not having conversations, you’re not doing any kind of BDD.

Posted in bdd | 5 Comments

There’s no such thing as Declarative and Imperative

I finally managed to catch up with Dan North last week at NDC Oslo (since we don’t seem to be able to do that in our home city).

We were talking about Declarative vs. Imperative, when Dan said something that surprised me. “There’s no such thing.”

He went on, “Every Declarative is a chunking-up of something else, and every Imperative is a chunking-down, and you can chunk up and down to different levels of abstraction.” Chunking up and chunking down are terms from NLP, and mean either becoming more abstract, or more specific respectively. Dan continued, “Even Java is a more declarative form of bytecode, and bytecode is a more declarative form of machine language.”

Of course, he’s correct. I wrote a blog post a while back on the Myth of What and How that explores this idea, too, so I shouldn’t have been that surprised.

The terms are still useful for getting us out of our feature-driven, code-oriented brain and into the mind of the business, focusing on the goals we want to achieve and the capabilities we’ll need to deliver.

However, from a BDD perspective, the most important thing is that if your scenarios are too hard to maintain, and becoming too detailed, you might want to try chunking them up a notch, and capture some conversations with people who naturally speak in terms that don’t involve the UI. And if that doesn’t help, try chunking it up some more, and talk to the people who have the goals and the visions.
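As a rough sketch of what chunking a step up looks like in code – the Browser and App classes here are invented stand-ins, not a real automation library:

    # Invented stand-ins, purely for illustration.
    class Browser:
        def visit(self, path): print(f"visit {path}")
        def fill(self, field, value): print(f"fill {field} = {value}")
        def click(self, label): print(f"click {label}")

    class App:
        """The business-language layer; the UI mechanics live one level down."""
        def __init__(self, browser):
            self.browser = browser

        def sign_up(self, email):
            self.browser.visit("/signup")
            self.browser.fill("email", email)
            self.browser.click("Create account")

    # Chunked down (imperative): the step spells out every interaction,
    # so every UI change ripples into the scenarios.
    def register_imperative(browser):
        browser.visit("/signup")
        browser.fill("email", "jan@example.com")
        browser.click("Create account")

    # Chunked up (declarative): the same behaviour, expressed as intent.
    def register_declarative(app):
        app.sign_up(email="jan@example.com")

Neither level is “right”; the question is which level lets you have the conversation with the people who care about it.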

Having the conversation is more important than capturing the conversation is more important than automating the conversation.

Posted in bdd, nlp | 6 Comments

Deliberate Discovery, Real Options and Cynefin

Last week, I held a hangout with a few people from the ALE community, who very much enjoyed it and got quite a lot out of it – it’s far less formal than most presentations. It takes me a minute or so to get going, but once I do, it turns out to be the first time I’ve ever approached the idea of embracing uncertainty without starting with BDD or using it as a backbone.

There are quite a lot of experiments in this – it’s the first time I ever led a hangout – so please excuse the lack of preparation!

If you want to see something more formal, I also held a webinar called “Fail Fast, Fail Safe”,  on similar and related issues, complete with slides. I get to rant a lot in this one about people who “guarantee success” with Agile and Lean, and show some examples of how we all tend to do it without realising, anyway. Unfortunately we had a few technical difficulties outside of my control, so I drop out for a minute or so a few times. It got good votes despite the problems.

Big thanks to Ivana, and to Emily at Arrows Group, for arranging these sessions and making sure they were recorded for posterity!

Posted in Uncategorized | Leave a comment