Yes, and…

Imagine two actors standing on stage. “I like your penguin,” the first says. The other turns round, looking at the empty space where one might imagine a penguin could be following.

There are two things that can happen.

Perhaps the second actor frowns at the empty space. “I don’t have a penguin,” they say. “Oh,” says the first, and that’s it. The scene is dead.

Perhaps the second actor goes along with it. “Why, thank you!” they exclaim. “I only got it yesterday. Rockhoppers, I tell you, you got to get one of these…” And the scene continues, and now it’s funny, because it has a penguin in it, and penguins are funny – especially imaginary ones. The humour emerges, with jokes that are unexpected both to the audience and to the participants!

This is the principle of “Yes, and…” that forms the basic rule-of-thumb of Improv.

It’s also one of the most important principles for Agile transformation… or any kind of change involving people. This post is about why, and how to spot the principle at work and support it.

Complexity has Disposition, not Predictability

In a complex space, where cause and effect are only correlated in retrospect, we’re bound to make unexpected discoveries. Anything we do in that space needs to be safe-to-fail. The things we try out that might improve matters, and which are safe-to-fail, are called probes.

As leaders, our temptation is to come in and start poking the space, persuading people to pick up new ideas and try them out. Maybe that’s safe-to-fail; maybe it isn’t.

The people best positioned to know aren’t the coaches and consultants brought in for the transformation. They’re not the leaders and the managers. They’re the people on the ground, who’ve been working in that context for a while. They know what’s safe and what’s not. They know what’s supported and what isn’t. If you listen to the stories that people tell then you’ll hear about what works and what doesn’t… and those stories play into the space, too, and the people on the ground have been listening to them for a while.

They might not know what’s guaranteed to succeed. I once asked a group whether a football game counted as complicated (predictable) or complex (emergent). One of the attendees asked, “Is England playing?”

I think I might have made a face at that! However, it gave me a useful opening to explain the difference between predictability and disposition. Just because the England team seem unlikely to win the football match doesn’t mean that they can’t. They don’t lose every match, after all! So perhaps the world of football is disposed against England winning the World Cup.

However, we’re disposed for good beer and a beautiful summer with at least a bit of sunshine, so not everything is bad.

In our organizations, different probes will be differently disposed to succeed or fail, and to require amplification or dampening accordingly. The thing about human organizations, though, is that they are Complex Adaptive Systems, or CAS, in which the agents of the system (in this case, humans) can change the system itself.

And they already are.

People are generally interested in doing things better. They’re always trying things out, and some of those things will be succeeding.

The most important thing we can do as coaches and consultants is to encourage those successful probes; to spot those things which are succeeding and amplify them, and to make safe spaces in which more probes can be carried out, so that we can see more success and amplify it again.

Amplifying means taking what’s already in place (yes!) and building on it (and…).

A Really Common Pattern: “That won’t work because…”

Humans have a real tendency to see patterns which aren’t there, and to spot failure more than success. We don’t like failure. It makes us feel bad. We tend to avoid it, and when we see failure scenarios we invoke one big pattern in our head: That won’t work because…

There’s one word that’s shorthand for that tendency to see and avoid failure.

“I want to learn TDD, but it will take too long.”

“We could co-locate the team, but we’d need to find desk space.”

“We’ve got some testers in India, but they’re not as good as we need them to be.”

The word “but” negates everything that comes before it. It’s the opposite of building on what’s already there. What might happen if, instead, we used the word “and”?

“I want to learn TDD, and I’ll need to set aside some time for that.”

“We could co-locate the team, and we could look for desk space to do that.”

“We’ve got some testers in India, and we need to help them improve.”

Now, instead of shutting down ideas, be they our own or other people’s, we’re looking at possibilities for change. Just changing the language that we use in our day-to-day work can help people to spot things they could try out.

We already use “Yes, and…” in TDD

The TDD cycle is really simple:

  • Write a failing test
  • Make it pass
  • Refactor
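To make that cycle concrete, here’s a minimal sketch in Python – the calculator and all its names are hypothetical, purely for illustration:

```python
import unittest

# A tiny class test-driven into existence: the test below was written
# first and failed (red); this implementation then made it pass (green).
class PriceCalculator:
    def total(self, prices):
        return sum(prices)

class PriceCalculatorTest(unittest.TestCase):
    def test_totals_a_list_of_prices(self):
        self.assertEqual(PriceCalculator().total([100, 250]), 350)
        # With the test green, we can refactor in safety.

if __name__ == "__main__":
    unittest.main()
```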

However, when we’re dealing with legacy code, which has behaviour that we’re interested in and want to keep, the first thing we do is to amplify and anchor what’s already in place! We write a passing test over the existing code, then refactor with that safety in place, then we can add the new failing test and the change in behaviour.

  • Write a passing test (Yes!)
  • Refactor
  • Write a failing test (And…)
  • Make it pass.
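Here’s a sketch of that ordering in Python, with a hypothetical legacy function (none of these names come from a real codebase):

```python
import unittest

# Legacy code whose current behaviour we care about and want to keep.
def format_name(first, last):
    return (first + " " + last).strip()

class FormatNameTest(unittest.TestCase):
    def test_anchor_existing_behaviour(self):
        # Yes! A passing test over what the code already does, written
        # before we change anything, so we can refactor in safety.
        self.assertEqual(format_name("Ada", "Lovelace"), "Ada Lovelace")

    def test_drive_the_new_behaviour(self):
        # And… the new failing test. This one is deliberately red at this
        # point in the cycle – it raises TypeError until we change
        # format_name to treat a missing first name as an empty string.
        self.assertEqual(format_name(None, "Lovelace"), "Lovelace")

if __name__ == "__main__":
    unittest.main()
```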

We can do this with our co-workers, our teams and our organizations as well. Of course, we don’t refactor people! Refactoring code, though, usually means separating concerns and focusing on responsibilities. In human terms, that means anchoring the behaviour or probe that’s working, and helping people or organizations focus on their strengths and differentiators. This helps to outline the disposition of the space too, so that whatever we try next has a good chance of working.

Of course, in a complex space, a “passing test” might not always be possible, depending on what we discover. At least, though, by anchoring what we value, we’ve made sure that it won’t be any worse. If we keep the investment in our probes small, it’s safe-to-fail.

  • Anchor valuable behaviour (Yes!)
  • Recognize and celebrate strengths
  • Design the next probe, or just encourage people with more context to do that (And…)
  • Give it a try.

Tim Ferriss’s book, “The 4-Hour Workweek”, is all about this: focusing on our strengths and letting other people cover our weaknesses, so that we can change our lives to allow ourselves to learn new things.

The Feedback Sandwich is a “Yes, and…” pattern

You’ve probably come across the feedback sandwich before:

  • Say something good
  • Say something bad
  • Say something good

It’s got some other, less polite names, and a poor reputation, so I’d like to show you a different way to think about it. Here’s what we’re really doing:

  • Anchoring valuable behaviour
  • Suggesting improvements
  • Outlining a good place we’d like to get to.

Human beings are complex, and we make complex systems. It’s entirely possible that the good place we might like to get to won’t be easy to reach, or that we’ll make discoveries that mean we need to change direction. So that good place we want to get to is actually just an example of coherence: a realistic reason for thinking a probe might have a positive impact. We can think of a good thing that might result from trying out that improvement.

The thing is, people are adaptive. If you can outline a place you’re trying to get to, and it’s good, and other people want to get there, and you’ve already made them feel safe by anchoring the valuable behaviour, then they’ll make their own improvements, and probably try things you didn’t even think of. You don’t need the bit in the middle. People know themselves better than you can, and they know their own disposition. They know what feels safe and possible. By giving them a suggestion of a good place to get to, we’re freeing them up to find their own way there… and they might end up going to a different place that’s better than before, and that’s good too.

In fact, just making sure that people’s most valuable behaviour is appreciated can be a really great way to create change. “Thank you” is a very simple way of doing that, so remember to use those two words a lot! “Thank you” is the “Yes!” of personal feedback.

It’s All About Safety

This is the book which prompted this whole post.

In “Improving Agile Teams”, Paul Goddard dedicates a number of chapters to the concepts of safety, failure and fear of failure. I highly recommend this book, especially the simple exercises which are designed to help a team adopt principles that will help to make things safe.

It’s possible to look at an entire Agile transformation through this lens, too.

Typically, an organization starts adopting Agile at the team level, just in software development, often by taking on Scrum or some form of Kanban. It’s pretty easy to do, because we’ve got a lot of tools which enable us to change our software safely. BDD and TDD provide living documentation and help to anchor the behaviour that we care about. Our build pipelines and continuous integration make sure that whatever we’ve got working stays working. Of course, we can’t quite ship yet! That’s because shipping to production is still a bit of a commitment, and it isn’t safe.

But* now it’s safe for Development in an organization to get it wrong.

Gradually we adopt practices alongside IT Operations which enable us either to fail in a place where it’s safe – making our test systems more like production – or which enable us to roll back in case of failure. Of course, once you’ve got a production-like test system that can be destroyed and rebuilt as required, it’s not a big step to actually make production systems that can be destroyed and rebuilt easily. Jez Humble and Dave Farley’s book, “Continuous Delivery”, is all about this (they call them “Phoenix Servers”, which gives me an excuse to shout-out to “The Phoenix Project” by Gene Kim et al, too; an excellent novel on the same theme).

So now we have DevOps, and it’s safe for the whole of IT to get it wrong.

As we start shipping smaller pieces of functionality more and more smoothly, the Business starts to pay attention. “Well… could I try this out?” someone says, and suddenly we’ve made it OK for them to do probes too, finding out what the market and their customers do and don’t want.

Now we’ve made it safe for the business to get it wrong.

At this stage we can really start churning out probes as a whole organization. There’s a really weird thing that happens in complexity, though. Sometimes people use things in a way that they weren’t intended to be used.

I often tell the story of Ludicorp and their online game, “Game Neverending”. They built tools for the players to share screenshots. They weren’t just used for screenshots, though, and those tools became the foundation of Flickr. Google started as a search engine. Amazon sold books. These companies don’t do what they started out doing. This is “exaptation”: people taking what we created (yes!) and using it for something else (and…).

It’s what makes it OK for us to have not discovered everything yet. It’s OK for humanity to be wrong.

This is how I always think of the Agile Fluency Model™: with these levels of making it OK for people to be wrong. An Agile transformation is all about making it OK for an organization to get things wrong. If we think of it this way, it becomes obvious that safety is going to be a truly important part of that.

If you can’t think of a probe to try that might improve things and which would be safe-to-fail, maybe you can think of a probe to try that increases levels of safety. Something which puts in place the “Yes!” so that the “And…” can follow later. Something which helps people reduce their investments and commitments, providing themselves with options, or helping them deliberately discover information while it’s still cheap to change.

Make it Safe for Yourself, Too

*You can usefully use the word “But” to negate negatives. That’s about all it’s good for. It took me a fair few months to adopt the “Yes, and…” habit, and even I still use “but” occasionally, because I’m human. I slip up. I make mistakes.

So will you.

Celebrate your successes and your strengths, and make it safe for yourself to fail too. Forgiveness is pretty much the greatest gift you can give yourself… or anyone else.

It lets you focus on the “Yes!” of your life so that you, too, can have “and…” in it.


Breaking Boxes

I love words. I really, really love words. I like poetry, and reading, and writing, and conversations, and songs with words in, and puns and wordplay and anagrams. I like learning words in different languages, and finding out where words came from, and watching them change over time.

I love the effect that words have on our minds and our models of our world. I love that words have connotations, and that changing the language we use can actually change our models and help us behave in different ways.

Language is a strange thing. It turns out that if you don’t learn language before the age of 5, you never really learn language; the constructs for it are set up in our brains at a very early age.

George Lakoff and Mark Johnson propose in their book, “Metaphors We Live By”, that all human language is based on metaphorical constructs. I don’t pretend to understand the book fully, and I believe there’s some contention about whether its premise truly holds, but I still found it a fascinating book, because it’s about words.

There was one bit which really caught my attention. “Events and actions are conceptualized metaphorically as objects, activities as substances, states as containers… activities are viewed as containers for the actions and other activities that make them up.” They give some examples:

I put a lot of energy into washing the windows.

Outside of washing the windows, what else did you do?

This fascinated me. I started seeing substances, and containers, everywhere!

I couldn’t do much testing before the end of the sprint.

As if “testing” was a substance, like cheese… we wanted 200g of testing, but we could only get 100g. And a sprint is a timebox – we even call it a box! I think in software, and with Agile methods, we do this even more.

The ticket was open for three weeks, but I’ve closed it now.

How many stories are in that feature?

It’s outside the scope of this release.

Partly I think this is because we like to decompose problems into smaller problems, because that helps us solve them more easily, and partly because we like to bound our work so that we know when we’re “done”, because it’s satisfying to be able to take responsibility for something concrete (spot the substance metaphor) and know you did a good job. There are probably other reasons too.

There’s only one problem with dividing things into boxes like this: complexity.

In complex situations, problems can’t be decomposed into small pieces. We can try, for sure, and goodness knows enough projects have been planned that way… but when we actually go to do the work, we always make discoveries, and the end result is always different to what we predicted, whether in functionality or cost and time or critical reception or value and impact… we simply can’t predict everything. The outcomes emerge as the work is done.

I was thinking about this problem of decomposition and the fact that software, being inherently complex, is slightly messy… of Kanban, and our desire to find flow… of Cynthia Kurtz’s Cynefin pyramids… and of my friend and fellow coach, Katherine Kirk, who is helping me to see the world in terms of relationships.

It seemed to me that if a complex domain wasn’t made up of the sum of its parts, it might be dominated by the relationship between those parts instead. In Cynthia Kurtz’s pyramids, the complex domain is pictured as if the people on the ground get the work done (self-organizing teams, for instance) but have a decoupled hierarchical leader.

I talked to Dave Snowden about this, and he pointed me at one of his newer blog posts on containing constraints and coupling constraints, which makes more sense as the hierarchical leader (if there is one!) isn’t the only constraint on a team’s behaviour. So really, the relationships between people are actually constraints, and possibly attractors… now we’re getting to the limit of my Cynefin knowledge, which is always a fun place to be!

Regardless, thinking about work in terms of boxes tends to make us behave as if it’s boxes, which leads us to treat something complex as if it’s complicated. That’s disorder, and if it persists it usually leads to an uncontrolled dive into chaos, which is rarely a good thing.

So I thought… what if we broke the boxes? What would happen if we changed the metaphor we used to talk about work? What if we focused on people and relationships, instead of on the work itself? What would that look like?

Let’s take that “testing” phrase as an example:

I couldn’t do much testing before the end of the sprint.

In the post I made for the Lean Systems Society, “Value Streams are Made of People”, I talked about how to map a value stream from the users to the dev team, and from the dev team back to the users. I visualize the development team as living in a container. So we can do the same thing with testing. Who’s inside the “testing” box?

Let’s say it’s a tester.

Who’s outside? Who gets value or benefits from the testing? If the tester finds nothing, there was no value to it (which we might not know until afterwards)… so it’s the developer who gets value from the feedback.

So now we have:

I couldn’t give the devs feedback on their work before the end of the sprint.

And of course, that sprint is also a box. Who’s on the inside? Well, it’s the dev team. And who’s on the outside? Why can’t the dev team just ship it to the users? They want to get feedback from the stakeholders first.

So now we have:

I couldn’t give the devs feedback on their work before the stakeholders saw it.

I went through some of the problems on PM Stackexchange. Box language, everywhere. I started making translations.

Should multiple Scrum teams working on the same project have the same start/end dates for their Sprints?

Becomes:

Does it help teams to co-ordinate if they get feedback from their stakeholders, then plan what to do next, at the same time as each other?

Interesting. Rephrasing it forced me to think about the benefits of having the same start/end dates. Huh. Of course, I’m having to make some assumptions in both these translations as to what the real problem was, and with whom; there are other possibilities. Wouldn’t it have been great if we could have got the original people experiencing these problems to rephrase them?

If we used this language more frequently, would we end up focusing a little less on the work in our conceptual “box”, and more on what the next people in the stream needed from us so that they could deliver value too?

I ran a workshop on this with a pretty advanced group of Kanban coaches. I suggested it probably played into their explicit process policies. “Wow,” one of them said. “We always talk about our policies in terms of people, but as soon as we write them down… we go back to box language.”

Of course we do. It’s a convenient way to refer to our work (my translations were inevitably longer). We’re often held accountable and responsible for our box. If we get stressed at all we tend to worry more about our individual work than about other people (acting as individuals being the thing we do in chaos) and there’s often a bit of chaos, so that can make us revert to box language even more.

But I do wonder how much less chaos there would be if we commonly used language metaphors of people and relationships over substance and containers.

If, for instance, we made sure the tester had what they needed from us devs, instead of focusing on just our box of work until it’s “done”… would we work together better as a team?

If we realised that the cost might be in the people, but the value’s in the relationships… would we send less work offshore, or at least make sure that we have better relationships with our offshore team members?

If we focused on our relationship with users and stakeholders… would we make sure they have good ways of giving feedback as part of our work? Would we make it easier for them to say “thank you” as a result?

And when there’s a problem, would a focus on improving relationships help us to find new things to try to improve how our work gets “done”, too?


How do you terminate a project in your org?

We all know that when we do something new, for the first time, we make discoveries; and all software projects (and in fact change efforts of any variety) target something new.

(You can find out what that is by asking, “What will we be able to do when this is done that we can’t do right now? What will our customers, our staff or our systems be able to do?” This is the differentiating capability. There may be more than one, especially if the organization is used to delivering large buckets of work.)

Often, though, the discoveries that are made will slow things down, or make them impossible. Too many contexts to consider. Third parties that don’t want to co-operate, let alone collaborate. A scarcity of skills. Whatever we discover, sometimes it becomes apparent that the effort is never going to pay back for itself.

Of course, if you’ve invested a lot of money or time into the effort, it’s tempting to keep throwing more after it: the sunk-cost fallacy. So here are three questions that orgs which are resilient and resistant to that temptation are able to answer:

  1. How can you tell if it’s failing?
  2. What’s the process for terminating or redirecting failing efforts?
  3. What happens to people who do that?

If you can’t answer those questions proudly for your org, or your project, you’re probably over-investing… which, on a regular basis, means throwing good money after bad, and wasting the time and effort of good people.

Wouldn’t you like to spend that on something else instead?

On Learning and Information

This has been an interesting year for me. At the end of March I came out of one of the largest Agile transformations ever attempted (still going, surprisingly well), and learned way more than I ever thought possible about how adoption works at scale (or doesn’t… making it safe-to-fail turns out to be important).

The learning keeps going. I’ve just done Sharon L. Bowman’s amazing “Training from the Back of the Room” course, and following the Enterprise Services Planning Executive Summit, I’ve signed up for the five-day course for that, too.

That last one’s exciting for me. I’ve been doing Agile for long enough now that I’m finding it hard to spot new learning opportunities within the Agile space. Sure, there’s still plenty for me to learn about psychology, we’re still getting that BDD message out and learning more all the time, and there are occasional gems like Paul Goddard’s “Improving Agile Teams” that go to places I hadn’t thought of.

It’s been a fair few years since I experienced something of a paradigm shift in thinking, though. The ESP Summit gave that to me and more.

Starting from Where You Are Now

Getting 50+ managers of MD level and up in a room together, with relatively few coaches, changes the dynamic of the conversations. It becomes far less about how our particular toolboxes can help, and more about what problems are still outstanding that we haven’t solved yet.

Of course, they’re all human problems. The thing is that it isn’t necessarily the current culture that’s the problem; it’s often self-supporting structures and systems that have been in place for a long time. Removing one can often lead to a lack of support for another, which cascades. Someone once referred to an Agile transformation at a client as “the worst implementation of Agile I’ve ever seen”, and they were right; except it wasn’t an implementation, but an adoption. Of course it’s hard to do Agile when you can’t get a server, you’ve got regulatory requirements to consider, you’ve got five main stakeholders for every project, nobody understands the new roles they’ve been asked to play and you’re still running a yearly budgeting cycle – just some of the common problems that I’ve come across in a number of large clients.

Unless you’ve got a sense of urgency so powerful that you’re willing to risk throwing the baby out with the bathwater, incremental change is the way to go, but where do you start, and what do you change first?

The thing I like most about Kanban, and about ESP, is that “start from where you are now” mentality. Sure, it would be fantastic if we could start creating cross-functional teams immediately. But even if we do that, in a large organization it still takes weeks or months to put together any group that can execute on the proposed ideas and get them live, and it’s hard to see the benefits without doing that.

There’s been a bit of a shift in the Agile space away from the notion that cross-functional teams are necessarily where we start, which means we’re shifting away from some of the core concepts of Agile itself.

Dan North and Chris Matts, my long-time friends and mentors, have been busy creating a thing called Business Mapping, in which they help organizations match their investments and budgets to the capacity they actually have to deliver, while slowly growing “staff liquidity” that allows for more flexible delivery.

Enterprise Services Planning achieves much the same result, with a focus on disciplined, data-driven change that I found challenging but exciting: firstly because I realise I haven’t done enough data collection in the past, and secondly because it directs leaders to trust maths, rather than instincts. This is still Kanban, but on steroids: not just people working together in a team, but teams working together; not just leadership at every level, but people using the information at their disposal to drive change and experiment.

The Advent of Adhocracy

Professor Julian Birkinshaw’s keynote was the biggest paradigm shift I’ve experienced since Dave Snowden introduced me to Cynefin, and those of you who know how much I love that little framework understand that I’m not using the phrase lightly.

Julian talks about three different ages:

The Industrial Age: Capital and labour are scarce resources. Creates a bureaucracy in which position is privileged, co-ordination achieved by rules, decisions made through hierarchy, and people motivated by extrinsic rewards.

The Information Age: Capital and labour are no longer scarce, but knowledge and information are. Creates a meritocracy in which knowledge is privileged, co-ordination achieved by mutual adjustment, decisions made through logical argument and people motivated by personal mastery.

The Post-Information Age: Knowledge and information are no longer scarce, but action and conviction are. Creates an adhocracy in which action is privileged, co-ordination is achieved around opportunity, decisions are made through experimentation and people are motivated by achievement.

As Julian talked about this, I found myself thinking about the difference between the start-ups I’ve worked with and the large, global organizations.

I wondered – could making the right kind of information more freely available, and helping people within those organizations achieve personal mastery, give an organization the ability to move into that “adhocracy”? There are still plenty of places which worry about cost per head, when the value is actually in the relationships between people – the value stream – and not the people as individuals. If we had better measurements of that value, would it help us improve those relationships? Would we, as coaches and consultants, develop more of an adhocracy ourselves, and be able to seize opportunities for change as and when they become available?

I keep hearing people within those large organizations make comments about “start-up mindset” and ability to react to the market, but without Dan and Chris’s “staff liquidity”, knowledge still becomes the constraint, and without quick information about what’s working and what isn’t, small adjustments based on long-term plans, rather than routine experimentation around opportunity, become the norm.

So I’m going off to get myself more tools, so that I can help organizations to get that information, make sense of it, and create that flexibility; not just in their products and services, but in their changes and adoptions and transformations too.

And I’ll be thinking about this new pattern all the time. It feels like it fits into a bunch of other stuff, but I don’t know how yet.

Julian Birkinshaw says he has a book out next year. I can’t wait.


Correlated in Retrospect

A few years back, I went to visit a company that had managed to achieve a high level of agility without high levels of coaching or training, shipping several times a day. I was curious as to how they had done it. It turned out to be a product of a highly experimental culture, and we spent a whole day swapping my BDD knowledge for their stories of how they managed to reach the place they were in.

While I was there, I saw a very interesting graph that looked a bit like this:

[Graph: bug count over time – rising, then dipping, then rising again]

“That’s interesting,” I said. “Is that your bug count over time? What happened?”

“Well,” one of them said, “we realised our bug count was growing, so we hired a new developer. We thought we’d rotate our existing team through a bug-fixing role, and we hypothesized that it would bring the bug count down. And it worked, for a while – that’s the first dip. It worked so well, we thought we’d hire another developer, so that we could rotate another team member, and we thought that would get rid of the bugs… but they started going up again.”

“Ah,” I said wisely. “The developer was no good?” (Human beings like to spot patterns and think they understand root causes – and I’m human too.)

“Nope.” They were all smiling, waiting for me to guess.

“Two new people was just too many? They got complacent because someone was fixing the bugs? The existing team was fed up of the bug-fixing role?” I ran through all the causes I could think of.

“Nope.”

“All right. Who was writing the bugs?” I asked.

“Nobody.”

I was confused.

“The bugs were already there,” one of them explained. “The users had spotted that we were fixing them, and started reporting them. The bug count going up… that was a good thing.”

And I looked at the graph, and suddenly understood. I didn’t know Cynefin back then, and I didn’t understand complexity, but I did understand perverse incentives, and here was a positive version. In retrospect, the cause was obvious. It’s the same reason crime goes up when policemen patrol the streets: it’s easier to report it.

Conversely, a good way to have a low bug count is to make it hard to report. I spent a good few years working in Waterfall environments, and I can remember the arguments I had about whether something in my work was genuinely a bug or not… making it much harder for anyone testing my code, which meant I looked like a good developer (I really wasn’t).

Whenever we do anything in a complex system, we get unexpected side-effects. Another example of this is the Hawthorne effect, which goes something like this:

“Do you work better in this factory if we turn the lights up?”

“Yes.”

“Do you work better if we turn the lights down?” (Just checking our null hypothesis…)

“Yes.”

“What? Um, that’s confusing… do you work better with the lights up, or down?”

“We don’t care; just stop watching us.”

We’ve all come across examples of perverse incentives, which are another kind of unintended consequence. This is what happens when you turn measurements into targets.

When you’re creating a probe, it’s important to have a way of knowing whether it’s succeeding or failing, it’s true… but the signs of success or failure may only be clear in retrospect. A lot of people who create experiments get hung up on one hypothesis, and as a result they obsess over one perceived cause, or one measurement. In the process they might miss signs that the experiment is succeeding or failing, or even misinterpret one as the other.

Rather than having a hypothesis, in complexity, we want coherence – a realistic reason for thinking that the probe might have a good impact, with the understanding that we might not necessarily get the particular outcome we’re thinking of. This is why I get people creating probes to run through multiple scenarios of success or failure, so they think about what things they might want to be watching, or how they can put appropriate signals in place, to which they can apply some common sense in retrospect.

As we’ve seen, watching is itself an intervention… so you probably want to make sure it’s safe-to-fail.


BDD: A Three-Headed Monster

Back in Greek mythology, there was a dog called Cerberus. It guarded the gate to the underworld, and it had three heads.

There was a great guy called Heracles (Hercules in Latin) who was a demi-god, which means he would have been pretty awesome if the gods hadn’t intervened to make him go mad and kill his wife and family. Greek gods in general aren’t very nice, and you wouldn’t want them to visit at Christmas.

Heracles ended up atoning for this with twelve tasks, the last of which was to capture Cerberus himself. (It was meant to be ten tasks, but his managers decided that he had collaborated with someone from another team on one of them, and got paid by an external stakeholder for a second, so they didn’t count.)

Fortunately, it turns out that BDD’s a bit easier to tame than Cerberus, and it works best if you involve other people from the outset.

The first thing we do with BDD is have a bit of a conversation. If I know nothing about a project, I’ll ask someone to tell me about it, and whenever I think it’s useful, I ask the magic question: “Can you give me an example?”

If they aren’t specific enough – for instance, they say that there are these twelve labours they have to do – I’ll get them to be more specific. “Can you give me an example of a labour?” Labour to me means Jeremy Corbyn, not wrestling the Nemean Lion, so having these conversations helps me to find out about any misunderstandings early.

Eventually, we end up with something we can think about in concrete terms. There’s a template that we use in BDD – Given a context, when an event happens, then an outcome should occur – but even without the template, having a concrete example which starts from some context, has an event happen and ends with a desired outcome (or the best possible outcome that we can reasonably expect to happen, at least) is useful. It lets us talk about those scenarios, and ask questions about other scenarios that might exist.
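As a sketch of where such a concrete example can eventually end up, here’s one written out in Python – the account, its names and its rules are all hypothetical, and at this stage the real artefact is the conversation, not the code:

```python
import unittest

# A hypothetical domain object, just to give the scenario something to run against.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            return "declined"
        self.balance -= amount
        return "dispensed"

class WithdrawalScenario(unittest.TestCase):
    def test_withdrawal_within_balance(self):
        # Given an account with a balance of 100
        account = Account(balance=100)
        # When the holder withdraws 40
        outcome = account.withdraw(40)
        # Then the cash should be dispensed
        self.assertEqual(outcome, "dispensed")
        # And the balance should be 60
        self.assertEqual(account.balance, 60)

if __name__ == "__main__":
    unittest.main()
```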

Exploration by Example (what it could do)

I like to ask two questions, the patterns for which I call Context Questioning and Outcome Questioning. “Is there any other context which, for the same event, gives us a different outcome?” And, “Is there any other outcome that’s important?”

The first is easy. We can usually think of ways that our outcome might be thwarted, and if we can’t, then our testers can, because testers have that “break all the things!” mindset that causes them to think of scenarios that might go a different way to the way we expect.

The second is especially useful, though, because it lets us talk about what other stakeholders might need to be involved, and what they need. This is particularly important if there’s a transaction happening – both stakeholders get what they want from it, or all three if you’re buying a product or service via a third party like Uber. Without it, you might end up getting one stakeholder’s desired outcome but missing the other.

“Why did they leave that wooden horse out there anyway?”

“We should ask a tester. Do we have any left that haven’t been eaten by sea-serpents?”
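To make the two questioning patterns concrete, here’s the hypothetical withdrawal sketch from earlier, this time with a different context for the same event – and a second outcome that matters to a different stakeholder:

```python
import unittest

# The same hypothetical domain object as in the earlier sketch.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            return "declined"
        self.balance -= amount
        return "dispensed"

class OverdrawnScenario(unittest.TestCase):
    def test_withdrawal_beyond_balance(self):
        # Context Questioning: a different context…
        account = Account(balance=100)
        # …for the same event…
        outcome = account.withdraw(150)
        # …gives us a different outcome.
        self.assertEqual(outcome, "declined")
        # Outcome Questioning: is there any other outcome that's important?
        # Here another stakeholder – the bank – needs the balance untouched.
        self.assertEqual(account.balance, 100)

if __name__ == "__main__":
    unittest.main()
```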

With these two questioning patterns, we can start to explore the scope of our requirements. We now know what the system could do. What should it do? By deciding which scenarios are out of scope, we narrow down the requirements. If we don’t know enough to decide what our scenarios ought to look like, we can either do something to try it out in a way that’s safe-to-fail and get some understanding (useful if nobody in the organization has ever done it before) or we can find an expert to talk to (if they’re available).

If you find you never talk about scenarios which you decide are out-of-scope or irrelevant, either quickly or later on, you may not be exploring enough.

Specification by Example (what it should do)

Now we’ve narrowed it down to what it should do, and we’ve got some scenarios to illustrate those aspects of behaviour, we can start turning the ideas into reality. The scenarios give us a focus, and let us continue to ask questions. If we ever come across an area we’re not familiar with, we can fall back into exploration, until we have understanding and can specify once more what our system should do.

“So, if we frown really fiercely and heavily at Charon, he should let us past to get to Cerberus. What does a really fierce frown look like? Can you give me an example?”

“Well, imagine someone just used the words ‘target velocity’ in your hearing…”

“Oh, like this?”

And, once we’ve made our ideas into something real, we can use our scenarios to see whether it does what we thought it should do – a test.

Test by Example (what it does)

We can run through the scenario again, either manually or using automation, to see if it works.

And, ideally, we write the automated version of the test first, so that it gives us a very clear focus and some fast feedback; though if we’re in high-uncertainty and keep having to fall back to exploration, we might want to lay off of that for a bit.

“So, what did happen?”

“Um, I chopped off a head, and it grew back. Twice…”

“Oh, sugar. Hold on… I think Jason ran into the same behaviour last week. I can’t remember whether it was intended or not…”

And that’s it.

Some people really focus on the tools and testing. Others focus on specification. Really, though, it’s exploration that we start with; those thought-experiments that are cheap to change, and not nearly as dangerous as the real thing.

In our software, we’re not even dealing with three-headed dogs or mythical monsters; just people who want things. They aren’t even intent on making it hard for us. Even better, those people often find value in small things, and don’t need us to finish all twelve tasks to have a completed story. It’s pretty easy to explore what they want using examples, then use the results of that conversation as specifications, then as tests.

Even so, if you do manage to tame BDD, with all three of its heads, you’re still pretty awesome.

Just remember that a puppy is not just for Christmas.


Capabilities and Learning Outcomes

When I started training, I taught topics. Lots of topics!

Nowadays, thanks to some help from Marian Willeke and her incredible understanding of how adults learn, I get to teach capabilities instead. It’s much more fun. This is how I do it.

First off, because I’m into BDD and hypnosis, I sit and imagine some scenarios in which people actually use the learning I’ve given them. Maybe they’re the Product Owner of a Scrum Team, or using BDD for the first time, or they have a good understanding of Agile, and now they’re learning how to coach. I watch them in my head and look at what they do, or I think about what I’ve done, in similar situations.

As with all scenarios, the event that’s happening requires capabilities: the ability to do something, and do it well.

So, for instance, I imagine a team sitting together in a huddle, talking through BDD’s scenarios. Well, you’ll need to be able to use the different strengths of the different roles. And you’ll need to be able to construct well-formed scenarios, and to differentiate between acceptance criteria and a specific example.

If I get stuck thinking about what capabilities I need to teach, I go look at Bloom’s Taxonomy, and the Revised Cognitive Domain – I really like Don Clark’s site. Marian gives some advice; when you’re teaching adults, aim higher than merely remembering; give them something they can actually do with it. The keywords help me to think about the level of expertise that the learners will need to get to (though I don’t always stick to them).

So for instance, I end up with capabilities like these:

  • Explain BDD and its practices
  • Apply shortcuts to well-understood requirements to reduce analysis and planning time
  • Identify core and incidental stakeholders for a project

If I’m training, I use these in conjunction with a bit of teaching, then games or exercises that help attendees really experience the things they’re able to do for the first time, and give me a chance to help them if I see they need it. The learning outcomes make a great advert for the course, too! And I use them as a backlog while I’m running the course, so I always know what’s done and what’s next.

More recently, I’ve been using this technique to put together documents which serve the same purpose for people I can’t train directly. I put the learning outcomes at the start: “When you’ve read this, you will be able to…” It’s fun to relate the titles of each section back to the outcomes at the beginning! And, of course, each capability is an embedded command to someone to actually try out the new skill.

Best of all, each capability comes with its own test. As the person writing the course or document, I can think to myself, “If my student goes on this course or reads this document, will they be able to do this thing?”

And, if they do actually take the course, I can ask them directly: “Do you feel confident now about doing this thing?” It gives me a chance to go over material if I have time, or to offer follow-up support afterwards (which I generally offer with all my courses, anyway).

You can read more about Bloom’s Taxonomy, and see the backlog for one of my BDD courses, on Marian’s site.

Now you should be able to create courses using capabilities, instead of topics. Hopefully you really want to, as well… but the Affective Domain, and what you can do with it, is a topic for another post.
