Commitment – a novel about managing project risk

If you’ve heard me speak at any conferences or read my blog over the last few years, you’ll know that I’m really, really into Real Options.

I’m half-tempted to get “Options have value. Options expire,” added to my tattoo. The principles which Chris Matts and Olav Maassen champion have become such guiding forces in my life that many of the decisions I make are made purely to increase the number of options available to me. Once the whole innate phobia of uncertainty is out of the way, it’s a fantastic way to live life in freedom and with fun.

Real Options are also a phenomenal model for managing risk on software projects – and they’re not the only brilliant idea that Chris and Olav have!

So imagine my absolute joy when I saw this in the works – a graphic novel, fun and easy to read and understand, dealing with all kinds of ideas around risk management. The skiing story at the end of the first chapter (downloadable from the link) is the same one that Chris shared with me to help me understand his ideas.

Chris and Olav describe it as “a business graphic novel on managing risk. The book is a culmination of their collaboration on the real options model over the last six years. It provides examples of how to manage a project using the real options model and outlines a simple technique for making better (informed) decisions. It also covers more advanced topics such as information arrival process, game theory, feature injection, paradox of choice and how to deal with uncertainty. While geared towards project managers, the book benefits anyone making decisions in their job or daily life. Chris and Olav focus on making risk management easily understandable as everybody does this every day without being aware of it.”

I am very confident that this book is going to be ground-breaking. I am so confident that I’ve already laid out £500 of hard-earned cash for a chance to star in the book (sorry, that option has expired). I don’t regard this as charity. This is me doing something small to pay Chris and Olav back for their guidance and help over the last few years, and it’s money they’ve helped me earn anyway. Estimated publishing date is Winter 2012… assuming they manage to raise the funds to kick it off.

If you would like to help these two fantastic people get this novel out of the door and benefit from the same help they’ve given me, please go to the Sponsume site before April 14th and pick an option. You never know – you could be the proud owner of a limited edition of a book that turns out to be as ground-breaking as “The Goal”*.

*What do you mean, you haven’t read it? Go. Buy it. Now! It’s already out, and will give you something to read while you’re waiting for “Commitment”.
Posted in business value, life, stories | 2 Comments

Cynefin for Devs

Every now and then, someone comes up with a new way of looking at the world that becomes the next fashionable thing to do. Every time I’ve seen this, there’s usually a space of time in which a lot of people say, “Meh, it’s irrelevant”, or “Meh, consultants”, or “Meh, they’re only in it for the money.” After a while, things settle down and everyone is used to that new model or concept, and it’s no longer seen as edgy or strange. I’ve seen it with Agile, and Lean Software Development, and BDD, and now it’s the turn of Cynefin and Complexity Thinking.

I’d like to shortcut some of that with Cynefin, because I think it’s kind of cool, I’ve found it useful, and it’s not actually that hard to get your head around once you make the small mindshift. I’m going to share a bit about what I know of it, then talk about how it might actually be a useful concept for a dev to have in their head.

Some of us went to the CALMalpha meet-up, created a list afterwards, and have been using it to discuss which kinds of software can be categorised into which domains. I think our need to categorise everything is part of the problem, but I haven’t worked out how to solve that yet. In this post, I’ll be categorising things according to the Cynefin definitions of the terms, so resist the urge to say, “No! What I’m doing is simple!” or “But it’s complicated!” – Dave Snowden, who created the Cynefin model, uses those terms in a slightly different way to our common usage.

So here’s Dave’s Cynefin model.

The Cynefin model - Simple (sense, categorize, respond), Complicated (sense, analyze, respond), Complex (probe, sense, respond), Chaotic (act, sense, respond) with Disorder in the middle

Dave Snowden, released under CC BY 3.0 – thank you!

The Cynefin model consists of four domains – simple, complicated, complex and chaotic – with disorder in the middle. The edges of the domains aren’t strict – they’re domains, not quadrants. There’s also a little fold beneath the “simple” to “chaotic” boundary, to show how easily simplicity can tip into chaos. If you’re going to use this for a lot of items, it might be worth using the domains as attractors, rather than categories. Put the items down on a table, then work out where the lines go afterwards. If you can get your head round that, you’re already on your way to understanding complexity thinking.

Here’s how I tell what kind of programming I’m doing.

Simple

Everyone can work out how to do this. Like, everyone. The example the Cognitive Edge guys used was a bicycle chain falling off. It’s easy to work out how to get it back on again. If you’re programming, I imagine stuff like the turtle at the science museum, or Lego Mindstorms with its graphical drag-and-drop interface, might fall into this space. Children and non-programmers can do it.

In a simple environment, you sense, categorize and respond. You say, “Oh, it’s one of those problems.” No analysis is required.

Complicated

Complicated stuff is predictable, but requires expertise to understand. A watch is complicated. If you’re programming, complicated stuff will be well-understood, done before, not going to change as you develop it. Writing yet another CRUD form probably falls into this space.

In a complicated environment, you sense, analyze and respond. You say, “Let me have a look at this problem and I’ll tell you how to solve it, because I’m an expert at this.”

(I have a theory that most devs get really bored by doing the same predictable but complicated thing over and over again. We tend to turn it into open-source or automate it, reducing it to the far smaller but more complex problem of how to do the automation or use the open-source. Devs are drawn to complexity like moths to a bonfire. And we make it when we don’t have it…)

I reckon that if everything in software development was merely simple or complicated, Waterfall would work very well. You’d be able to set out to achieve some goal, work out how to do it, achieve the goal and say, “Job done.” Software doesn’t consist of only simple and complicated stuff, though. So let’s have a look at the other two domains.

Complex

My favourite way to understand complexity is that acting in the space causes the space to change, and cause and effect can only be understood in retrospect.

When you start writing tests, or having discussions, and the requirements begin changing underneath you because of what you discover as a result, that’s complex. You can look back at what you end up with and understand that it’s much better, but you can’t come up with it to start with, nor can you define what “better” will look like and try to reach it. It emerges as you work.

In a complex environment, you probe, sense and respond. You do something that can fail safely, and it tells you things about the environment, which you respond to, changing the environment as you do. This is the land of high feedback, risk and innovation: generally stuff you’ve never done before, anything the business are unsure about, new technologies, etc. This is the domain in which Agile techniques really flourish. If you have a look at Cynefin’s pyramids, this is the one with very loose leadership and the people at the bottom all connected together, collaborating to solve a problem.

This is the most interesting domain for me. It’s the reason why we do things like BDD – using examples to discover more about what we’re doing – but it’s also the reason why, if we focus on trying to pin every small requirement down, we fail.

Chaotic

Chaos is your house catching fire. Chaos is accident and emergency. Chaos is that bug you released to production that brought your site down on the day of release, and you need to drop everything and fix it now.

In chaos, you act. You get out of the house. You stem the bleeding. You do something to get the situation under better control. When Egor Homakov hacked GitHub this week, GitHub responded by suspending his account immediately. They acted on the threat. After that, they analyzed his actions, considered what he had done and reinstated his account. Act, sense, respond.

So why should I care as a dev?

I’ve found this model really useful for understanding why certain ways of approaching software work best in certain situations, and fail at other times.

The biggest failures I’ve seen have come from treating complex problems as if they’re complicated. For instance, a common Agile practice is to divide a problem into small chunks that we call “stories”, then start working on them. That’s actually a good way to work when most of the problem is complicated, but if you’re doing something new then you may want to go down the “probe, sense, respond” route instead. Hack something out, and get feedback on it. There’s no point guessing how you’re going to reach the goal, because as soon as you get feedback, there’s a good chance the goal itself will change.

Also, sensemaking is itself a complex thing. By trying to get feedback, you might find out how easy or hard it is just to get feedback. That could change the way in which you engage with the business. It might affect how much time you put into preparing for a planning meeting, whether you make a feature work fully or just fire off a screenshot, etc. As you seek to get that feedback the business will change the way in which they respond, too, so it might become easier or harder, and you need to be watching for those signs so you can help to change the process to match. As a dev, you’ll be the one feeling the frustration or ease from the process. It’s no good just relying on your coach or Scrum Master to help you, because he or she needs your insight too.

Knowing this model helps me know when to do TDD or automate BDD – defining a well-understood outcome, and working creatively with the software to reach it – and when to just use examples as ways of discovering more about what we’re trying to do. If I can clearly articulate the outcome and everyone agrees on it, then probably it’s good for TDD and BDD automation. Otherwise, having conversations is more important than automation, whether it’s with a pair-programmer at a class level or a business spokesperson or tester at a system level. Being able to tell the difference can help ensure that the conversations are the most interesting and effective conversations we can have. If I spot the conversations becoming boring, with people yawning or drifting off, then I know we’re trying to apply a complex method to a complicated or simple space and I can say, “Okay, I think we understand this well enough,” and work on something different instead.

(Also, knowing that the act of sensemaking is complex itself makes me look out for situations in which we’re misapplying methods.)

When chaos erupts, I know that letting my PM order us about for a bit is OK. Dropping everything we’re doing is also OK – forget the whole “developers need to be left alone to do their work” idea that the Scrum timebox normally provides. We fix the bug, hack the workaround, take the feature down, do whatever we need to do to get things under control. Chaos almost matches the “expedite” class of service in Kanban, except that I think you probably don’t even bother putting a card on the board until the need to expedite is over. Once the emergency has passed, we can look at whether it was just a one-off or whether there’s something we need to do to stop it happening again. For instance, being able to roll back a deployment cleanly and quickly gives us the ability to probe a release instead of crossing our fingers, moving something that might become chaotic into a complex space instead.

These are the kind of decisions that we make intuitively anyway, as developers, but human intuition is often flawed. We’re often uncomfortable with uncertainty, so we usually try to define outcomes regardless of whether they can be clearly defined or not. We’re definitely uncomfortable with chaos, and often make the mistake of treating a one-off chaotic incident as systemic, stamping controls over everything when what we really need is a way of probing, or trying things out safely. Having this model in my head has really helped me to become more comfortable with those situations, and to find approaches that fit them much better. Of course, this is massively useful when I’m coaching and acting as a change agent or helping a PM work out how to lead a team – but even as a dev, having this model makes a big difference to me and my day-to-day work.

I hope it will help you too.

Posted in complexity, cynefin | 40 Comments

The Myth of “What” and “How”

I often hear things like, “Tell the team what to build, but don’t tell them how to build it.”

Or, “A feature is what you’re building. A story is how you’re going to build it.”

Or, “When you’re doing TDD, don’t worry about the internals of the class. The API is what it does. The internals are how it does it.”

Here’s the thing. When we write code, that’s how we’re creating a story or a feature. It’s also how we’re implementing the architecture. It’s how we’re managing security and providing an audit trail and doing a bunch of other stuff.

And that’s how we’re selling more goods. And how we’re keeping things maintainable for the future. And how we’re preventing data theft. And how we’re correcting our mistakes.

And that’s how we’re staying in business. And how we’ll be able to react in the future. And how we’re able to sleep at night. And how we learn.

  • The code is how we deliver a story.
  • The story is how we deliver a feature.
  • The feature is how we give the users the capability to do something.
  • The users’ capabilities are how they deliver a business goal.
  • The business goals are how stakeholders implement a vision.
  • The vision is an idea of how to make money, save money or protect revenue.
  • And we could keep going if we wanted to…

Every goal, no matter how big or small, is the how for someone’s what. (The how and the what may come from the same person, but the interesting stuff happens when they’re different people. Or behave like they are.)

If only life worked by dividing a bigger what into smaller pieces and managing them appropriately – then it would all be fine! Unfortunately, code has to work with other code to deliver its value. Features have to work with other features. The system has to work with other systems, users have to work with other users, and the goals of one stakeholder have to align with the goals of another.

This is why we have root cause analysis and the “5 Whys” – because when we can see the higher-level goals and deeper-rooted problems, we can understand better how our own actions fit into them, and what they deliver – for better or worse. This is also true in domains other than software.

It doesn’t really matter much whether we call them features, stories or tasks, as long as we appreciate that they’re how we’re delivering to those higher-level whats, and we have a pretty good understanding of the whats and how to test that we’ve achieved them – even (especially!) if the what is a learning or exploratory goal.

Of course, we’ll get the how wrong occasionally and fail to deliver the what. But that’s what feedback, in its many forms, is for.

Posted in bdd, business value, stories, testing | 1 Comment

BDD Tutorial Slides are up

I’ve been meaning to do this for a while. I have released my BDD Tutorial slides on SlideShare.

There are notes underneath each slide which are a cut-down version of the kind of things I talk about. I’ve even left the exercises in, with a description of what I do in each.

I am releasing these under my usual favourite licence – Creative Commons Attribution-ShareAlike. That means that as long as you attribute me, and are similarly generous with any derived work you make, you can use the slides commercially and base your own work on them.

Yes, that means that if you want to, and you feel qualified to do so, you can use the slides, run the course and get paid for it. If you don’t feel qualified, feel free to experiment. Just please give a nod my way when you do!

Posted in Uncategorized | 1 Comment

It’s about the examples you can’t find, not the ones you can

A long time ago, I was toying with the idea of starting my test methods with “will”, instead of “should”.

Dan explained to me, “If you start with the word ‘will’, you’re already making the assumption that you understand what you’re doing. Using ‘should’ allows you to question whether you understand or not.”

For me, that was the moment that I suddenly got it. BDD and TDD aren’t actually about making sure that something works well. They’re about uncovering the parts you don’t understand; the parts that are hard, and the gaps. Dan’s post introducing “Deliberate Discovery” takes this idea even further, but it started here: replacing the word “test” with the word “should”.
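To make that concrete, here’s a minimal JUnit-flavoured sketch. The domain and all the names are my own, invented purely for illustration:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class BookingTest {

        // A deliberately tiny stand-in domain, just to give the test something to bite on.
        static class Booking {
            private final int deposit;
            private int refunded = 0;
            Booking(int deposit) { this.deposit = deposit; }
            void cancel() { refunded = deposit; }
            int amountRefunded() { return refunded; }
        }

        // "Should" leaves the behaviour open to question: *should* a cancelled
        // booking refund the deposit? Perhaps not within 24 hours of the start
        // date - asking that question is the point. A name starting with "will"
        // would just assert that we already know.
        @Test
        public void shouldRefundDepositWhenBookingIsCancelled() {
            Booking booking = new Booking(50);
            booking.cancel();
            assertEquals(50, booking.amountRefunded());
        }
    }

The name reads as a question as much as an assertion, which is exactly what leaves room for the gaps to show.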

This is why I find it more useful to look for the gaps in the examples: scenarios we haven’t thought of, or don’t know what to do with. Chris Matts calls this “Breaking the model”. Often whatever we do here is wrong, so we need to write down our suggestions and then get feedback on them.

If you do anything which starts suggesting to people that these examples are correct and complete – like, for instance, automating them or referring to them as “specifications” – then you may find yourself running into trouble. Automation is a commitment, specification can be seen as one, and we know by now that we never commit early unless we know why.

You might also want to keep a small record of the looser requirements or behaviour that spawned the examples – for instance, as blurb in a scenario or as a method name – just so that you can give yourself a chance of spotting those gaps later. You won’t always get it right first time, and that should be OK.

Posted in bdd, breaking models, real options | Leave a comment

CALMalpha – the second request

CALMalpha was meant to be a mash-up between the Lean, Agile and Cynefin / Complexity Theory practitioners.

The outcome of the unconference wasn’t really stated. When you understand that a complex domain is one in which the cause of an outcome can’t be perceived except in retrospect, this might make more sense. The only thing we were trying to do was see if there was a way of using complexity theory to help inform our practices, and if there were some practices from Agile and Lean that complexity theorists might find interesting – a mash-up!

There’s one problem with this.

Currently, the best-known leadership of Complexity Theory revolves around the company Cognitive Edge. These guys have some amazing methods for making sense of domains, spotting complex problems and providing data which calls out “weak signals” that might otherwise be lost. I paid good money and took time off work for the course last year, and it was worth every penny. For the uninitiated (and the tl;dr crowd): imagine five new types of retrospective, a method for reducing planning meetings to five minutes, and six different ways of making the output from them heard, and you’ll get a vague idea of the impact and scope. Oh, and they’ve got software for running the retro across countries.

Except…

I can’t currently use the methods they taught me, not as a professional coach. The methods are open-sourced, but released under a non-commercial, non-derivative Creative Commons licence.

Cognitive Edge, your Wiki says (emphasis mine):

The Cognitive Edge wiki exists to provide a collaborative space for accredited members of the Cognitive Edge Network. All accredited practitioners should feel welcome to contribute to the ideas and concepts in these pages.

The licence prevents me from using your methods as a professional coach:

You may not use this work for commercial purposes.

The licence also prevents non-accredited people, which is most of our communities and a lot of CALMalpha attendees, from creating their own ideas:

You may not alter, transform, or build upon this work.

While I might be able to build on your work, I’m unwilling to do so as long as my efforts fall under this licence. I also can’t pass on anything to the people I work with for their contribution.

Can you see how this doesn’t mesh with the idea of a “mash-up”, and goes completely against your ideas around multiplying perspectives?

So here’s my request.

Cognitive Edge, please, please open your licence up for commercial and derivative work.

The stuff you do is amazing. If you were working solely for the money, you wouldn’t have come up with these ideas. I can only assume that you, like us, are trying to make the world a better place. We will continue to attribute the methods to you and talk about how amazing they are. Those of us who’ve seen it will continue to point people towards your SenseMaker software (which is ground-breaking, world-changing, worth paying for the 1-day demo, and deserves the more rigorous patent applied to it – I look forward to the day when it’s a bit cheaper!)

As it stands, we can’t do anything useful with your methods. Worse, because you’re working in a space full of narratives and I’m working in a space full of very similar examples, I have to be very careful that my work – released on my blog under CC, non-commercial, non-derivative – is actually based on other sources (mostly Dan North and Chris Matts) and not on yours.

Please. Be generous. Reach out to your contributors, ask them, and release what you can.

Sometimes it’s worth doing something you can’t go back from.

Posted in conference, learning | 12 Comments

CALMalpha – the first request

I came away from CALMalpha with a profound sense of depression.

Our industry is in an awful state. A really awful state. It took me a day and a half to recognize one of the problems. There’s a prevailing sentiment I keep hearing: “If only we provide the right bucket for them to deliver in, teams will deliver.”

You know what? This isn’t true.

Of the many developers I’ve worked with over the years, I’ve been lucky enough to work with the best. These are the developers who can code well, work in teams, learn quickly and respect and learn from differing opinions.

I shouldn’t have to be lucky to work in teams like this. Engineers don’t get away with it. Architects, pilots, restaurant staff and dustbin men don’t get away with it*. We have such a low barrier to entry in this industry, it hurts. I shouldn’t have gotten away with it for as long as I did!

While I was at the conference, a gentleman linked to one of my blog posts about Scrum and Kanban, calling it “pontificating rubbish”. I wrote it mostly because I was already fed up with the fighting between the two communities: there’s more similarity than difference. This gentleman said, “Can we just ship some software already?” Iain, I need to thank you. Your comments must have stuck with me, because by the second day I found myself thinking, “He’s right.”

One of the things we strive for in Agile and Lean is a high-trust environment; one in which the business give us the space and time that we need to deliver the things that they really want. As an industry, though, we don’t have that trust, because we haven’t deserved it. Even Agile and Lean haven’t solved the problem. Even Agile and Lean don’t ship on time, and even Scrum projects occasionally wait until the last moment to tell people it’s going to be late (and yes, I know this isn’t how you ought to run a Scrum project). So here’s the first problem:

Many of the developers I’ve interviewed over the last decade can’t communicate. They can’t code, even when they’ve been told that it’s part of the interview. I had one dev provide a code sample that they were completely unable to work with – I’m guessing a friend wrote it. I’ve had samples submitted which were ripped off the internet. I’ve worked “with” devs who were sidelined onto useless projects that were deleted afterwards, because they were no good at coding and too nervous to help. No wonder, as an industry, we can’t ship. No wonder those of us who spend our time at conferences and self-improvement workshops are befuddled as to why. The faculty at CALMalpha talked about how crews work, and the training they undergo. We don’t do that.

One of the things that David Snowden taught us is to aim for safe-to-fail probes – a thing we try in order to make a change, which might fail, and which is safe when it does. He also taught us not to attack the problem directly. I’m not sure that this is even a complex problem, but the fact that it hasn’t been solved yet suggests it might be. So rather than saying, “Just ship it already!”, here’s what I suggest (and please, if you have an understanding of complexity theory and can see some ways in which this might not be safe to fail, let us know quickly – I’m new at this).

Get devs to code in the interviews.

Do it, and spread your success stories. I’ve not yet found a company that’s done this and then stopped. Code with them – pair on some silly, simple problem – and you’ll also find out what they’re like to work with, and whether they’re able to learn and talk to you.
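To make “silly, simple” concrete: FizzBuzz is a popular choice for this kind of pairing exercise. The version below is just my sketch, not a prescription – the conversation you have while writing it together tells you far more than the finished code does.

    public class FizzBuzz {

        // The classic warm-up: multiples of 3 say "Fizz", multiples of 5 say
        // "Buzz", multiples of both say "FizzBuzz", everything else says itself.
        static String say(int n) {
            if (n % 15 == 0) return "FizzBuzz";
            if (n % 3 == 0) return "Fizz";
            if (n % 5 == 0) return "Buzz";
            return Integer.toString(n);
        }

        public static void main(String[] args) {
            for (int i = 1; i <= 20; i++) {
                System.out.println(say(i));
            }
        }
    }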

The idea is not necessarily to filter the developers. If this happens industry-wide, we’ll quickly run out of devs who can code, and something else will happen. Perhaps a lot of them will be able to learn quickly, and you may find after some time that you’re frustrated looking for the ones who can code and you’ll settle for the ones who can learn. Or maybe the universities will up their game. Perhaps it will become accepted practice that developers spend time learning, and companies will give them more space.

If this fails, and we find that we can’t get people to code in the interview, we’ll be no worse off than we were and we might find out something more about the problem.

*Dear Dustbin Men

Without you life would very quickly grind to a messy and disease-ridden halt. You are my heroes. You do the job that I do least want to be doing, for a lot less pay than I’d do it for. This is why I’ve picked on you – not because I think you’re low on the list of people who need to work as a team, but because your job is so stressful in my estimation that I believe you have the right to work in any way you want. That I always see you working together, usually smiling, carting away all our rubbish and doing extra work to make up for Christmas etc., is amazing. Thank you.

(I have never yet seen a Dustbin Woman).

Devs, our job is ranked as really not all that stressful. We do not have the excuse.

Posted in conference | 14 Comments

The Real Cost of Change

We have a strange desire for control.

I was in a planning meeting with my project manager and several of the devs. “What happened?” the project manager said. “Why did this one story take so long?”

“There was some functionality we needed and didn’t know about,” I replied. “We managed to get it in before the deadline, though.” The business had been quite happy with that, and they were notoriously hard to please.

“If they’re going to change their mind like this,” the PM replied, “we’re going to have to introduce some kind of change control.”

“Please don’t,” I begged. “If you do that, the business will spend more time investing in getting things right to start with.”

“Exactly.”

“But they’ll still get it wrong. No amount of planning would have spotted that missing piece before we showed it to them! When they get it wrong now, though, they’ll encounter the change control and they’ll want to spend even more time getting it right first time. And they’ll still get it wrong, but now we’ll have made it more expensive for them to be wrong. We’ll have a formal process which means it takes even longer for us to find out what’s missing, by which time us devs will have to work to remember the code and the change will take longer. It will slow us down. So they’ll see that, and spend even more time trying to get it right, and before you know it they’ll be planning whole projects up front.”

We have a word for that. It’s called Waterfall.

Waterfall's reinforcing feedback loop

That desire to control change creates a reinforcing feedback loop, in which the cost of change makes us want to invest up-front, which makes the change expensive later on, which makes us want to invest up-front, and so on.

In this case, it would have been pointless, except as a way of shifting blame and risk back to the analysts (and this was an internal project). The cost of change was quite low; we had clean code with a good suite of tests. It was only the cost of discovery, and the implementation that followed, that was really expensive.

Don’t confuse the cost of discovery with the cost of change.

Discovering something later on only costs more than planning for it if you’ve made a commitment to something else.

In fact, if you don’t plan for it, it can cost less. The newly-discovered knowledge will be fresh in everyone’s minds. Because the ideas haven’t been known for long, nobody’s mentally committed to them either, which makes them easier to question and clarify.

This is even more important when there’s a chance that the analysis might be wrong (and there’s always that chance; we learnt this from Waterfall). If you plan for something that you later have to revert, you’ve introduced a cost of change right there. Perhaps the cost is just changing some documents. Perhaps the developers have designed for the plan, and built on top of the thing which needs changing.

But we need to do some planning, right? Otherwise the chances of us being wrong and building on top of the wrong thing are even bigger, and there’s even more chance that we’ll have made commitments the wrong way. So how should we plan?

I’ve got a few more guidelines I’d like to share. Here’s the first:

Keep the cost of change low.

This is more important even than planning, because there’s always a chance that we’ll get something wrong.

This is what we’re aiming for: the ideal, low cost of change on an Agile project.

The wonderfully low cost of change on a very nice Agile project.

The first thing to notice about this is that the cost of change is not zero. That’s going to become important in the next section, and it’s what drives the team’s desire for change control and starts kicking off that Waterfall loop.

The second thing to notice is that this bears little resemblance to actual, real Agile projects.

On a real Agile project, it’s likely that we have fluctuating levels of stress, concentration, experience and desire for feedback. All of these – or lack of them – will lead us to occasionally write code that is, shall we say, less than ideal.

It takes discipline to write code that’s easy to change. On a real Agile project, we tend not to do it all the time. Oh, we might say we do, but we don’t. Not always. And if we do, our team-mates don’t. The real skill isn’t in writing clean code – it’s in cleaning up the horrible mess we made the week before. And it takes even more discipline to do it afterwards, especially if you’re under pressure to just hack the next working thing in too.

The real cost of change on an Agile project, showing how we clean up after ourselves.

If we don’t clean up and keep that cost of change low, we’re making a commitment to the wrong thing. The longer that commitment stays in place, the higher the cost of change will become. We’ll find ourselves on that Waterfall cost of change curve, and the longer we’re on it, the more expensive it is.

I’ve found the skill to clean up afterwards is even more important in high-learning projects, where a lot of the technology or domain is new, at least to the team. There’s no point in writing tests up front for a class when you don’t even know what’s possible, or what other frameworks and libraries will do for you, or what the business really want. In those environments, rather than TDD or BDD, teams tend to Spike and Stabilize. The spikes aren’t really prototypes – they’re small pieces of features, often with hard-coded data behind them, designed to get some feedback about technology or requirements (there’s a sketch of one below). Dan North, who gave me the term, might write more about this later if we ask nicely, but for this post, we can simply bear in mind that the skill to stabilize later – ensuring that the cost of change is lowered – is often more important than the skill to keep the cost of change low up-front.

Because we get it wrong too.
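To make Spike and Stabilize a little more concrete, here’s a minimal sketch of a spike. The feature and all the names are invented for illustration:

    import java.util.Arrays;
    import java.util.List;

    // A spike: a thin slice of a "recommended products" panel, with the data
    // hard-coded so we can get feedback on the idea before committing to a
    // recommendation engine, a schema, or even real requirements.
    public class RecommendationsSpike {

        // No database, no service calls - just enough to put something in front
        // of the business and ask, "Is this what you meant?"
        static List<String> recommendedFor(String customer) {
            return Arrays.asList("Blue widget", "Red widget", "Widget polish");
        }

        public static void main(String[] args) {
            System.out.println("Recommendations for Kim:");
            for (String product : recommendedFor("Kim")) {
                System.out.println("  - " + product);
            }
        }
    }

If the feedback says the idea is worth keeping, that’s when the stabilizing investment – real data, tests, clean-up – begins.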

Assume you got it wrong.

Human beings are hard-wired to try and get things right, and to pretend that they did even when they didn’t. I love this list of cognitive biases on Wikipedia. These are just some of the ways in which we get it wrong and don’t even notice.

If we assume that we got it wrong, then we start to look for feedback, and quickly. This is difficult for most human beings. We much prefer validation to feedback; being told that we did it right, rather than finding out what we did wrong. Our brains give us little kicks of nice chemicals when we learn that we did something right, and it feels much better than the other kind.

If we can remember, though, that we probably got it wrong, our focus will change. Instead of trying to invest in good requirements and nice code, we’ll try to find out what we got wrong in the things we’ve already done. Of course, we need to invest in stabilizing what we’ve done, or the cost of change goes up, and that will make it more expensive later if we find out we were wrong, which is the assumption we were trying to make in the first place… ah, the paradox!

There’s a fine balance to be struck between getting quick feedback – often itself an expensive proposition, given how busy most domain experts are – and getting it right up front. So where does the balance lie?

Don’t sweat the petty stuff.

If it’s easy to change, don’t worry about it. Analysts, learn what’s easy to change. Typos, colours, fonts, labels, sizes, placement on the screen, tab order, an extra field… these are all usually easy to change, and do not normally need to be specified up-front. Even if you have a particular style you want to see on a page or form, this can usually be abstracted out and changed later – just let the devs know that you want that consistency at some point.
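As a tiny sketch of what “abstracted out” might look like in code – a Swing-flavoured example of my own invention – if every screen pulls its colours and fonts from one place, a change of style later is one small commit rather than a hunt through the codebase:

    import java.awt.Color;
    import java.awt.Font;

    // Petty stuff, deliberately centralised so it stays cheap to change.
    public final class HouseStyle {
        private HouseStyle() {}

        public static final Color HIGHLIGHT   = new Color(0xFF9900);
        public static final Font  LABEL_FONT  = new Font("SansSerif", Font.PLAIN, 12);
        public static final int   FIELD_WIDTH = 200;
    }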

It’s more important for us to know the rough shape of what you want to see, and the flow and process of that information. We don’t want to know every field on an order item. We just want to know that it needs to be sent to the warehouse and stored locally because you’re going to check the money in the till and count the stock. The fine detail of that is pretty easy to change, so we can get feedback on it later. Getting the fine detail right would definitely be an investment, and we might have got the big picture wrong.

Deliberately discover things you’ve never done before.

Dan North wrote an excellent post on Deliberate Discovery, and I’ve been using it to manage risk on my projects for a while now. It’s one of the most important tools in my toolbox, along with Real Options to which it’s strongly related, so I want to cover how I use it here.

I really like using Cynefin to help me work out what to target for discovery. We treat a lot of software as if it’s complex, and we talk about self-organising teams and high learning environments, but in reality there are huge chunks in most applications which follow well-established rules, have been done a thousand times before and probably have libraries or third-party apps to do the job for you. They’re not complex. They’re complicated. They require the application of a bit of expertise, and are likely to be done right and never changed again. User registration and logging in are great examples of this. You don’t need a big, thick document to describe them. The fine details might change, of course, but we already know not to sweat the petty stuff.

It is OK to plan some aspects of a system as if it’s Waterfall – for instance, deciding up-front whether you want to use your own login or let Google authenticate. Even better than requirements documents, and quicker, is to say, “It’s user registration. Make it work like Twitter’s, but we also need the user’s loyalty card number. We should offer to send them a card if they don’t have one.” Dan North calls this pattern “Ginger Cake” – it’s like a chocolate cake, but with ginger. He even cuts and pastes code. And it’s OK! Honestly, it is! This code is also absolutely prime for TDDing – if you actually have to write it yourself, that is, since it’s been done before and someone’s probably written something to do it for you already. You can also give this code to junior devs, for whom it’s new, and guide them in TDDing, making it perfect pair-programming territory. Everything you have ever been told about Agile software development applies particularly in this place.
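As an illustration of the kind of test that might lead the way in this well-understood space – the loyalty card number comes from the example above; the class and its behaviour are my own invention:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class RegistrationTest {

        // A deliberately tiny stand-in for the registration logic.
        static class Registration {
            final String loyaltyCardNumber;
            final boolean cardOffered;
            Registration(String loyaltyCardNumber) {
                this.loyaltyCardNumber = loyaltyCardNumber;
                this.cardOffered = (loyaltyCardNumber == null);
            }
        }

        // "Like Twitter's, but with a loyalty card number" - the outcome is
        // well-understood, so the tests can be written first with confidence.
        @Test
        public void shouldOfferACardWhenTheUserDoesNotHaveOne() {
            Registration registration = new Registration(null);
            assertTrue(registration.cardOffered);
        }

        @Test
        public void shouldKeepTheCardNumberWhenTheUserHasOne() {
            Registration registration = new Registration("1234-5678");
            assertFalse(registration.cardOffered);
            assertEquals("1234-5678", registration.loyaltyCardNumber);
        }
    }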

Fortunately, most applications have a minimum set of requirements that they share with other, similar applications. David Anderson calls these commodities – table stakes that you have to have just to play the game – so *most* code in an application will end up going this way.

The places in which we’re most likely to get it wrong, and need fast feedback, are places where we’re doing something new. They might be technological, particular to a domain, or just things that the team themselves have never looked at. My favourite book for understanding risk is “Waltzing with Bears”, which starts the first chapter with, “If a project has no risks, don’t do it.” It’s these new, risky aspects of the project that differentiate it from others and make it valuable in the first place!

For new and risky aspects of a project, the best thing to do is assume you got it wrong, and work out how quickly you can get feedback on how wrong you are.

Any new or unknown aspect of a project will need to be changed.

I was chatting to one of our analysts. “I can see this feature is in analysis at the moment,” I said. “Does that mean it’s the next thing we want the developers to do?”

“Oh, no,” the analyst said. “It’s only there because the analysis is quite complex. It’s all new stuff, so we’re having to be careful with it and it’s taking a bit of time. Once we get the analysis done, the development should be very easy, so we’ll do it later.”

“Oh, the development will be easy, I’m sure… but wouldn’t you like to find out what you did wrong now, rather than later, while it’s still fresh in your mind?”

The analyst smiled. The company was very much more used to Waterfall, and the idea that it was OK to get it wrong was something very new.

It’s OK to get it wrong, as long as you get feedback quickly, while the cost of change is still low. By working out which parts of a project are unknown or new, and targeting those first, we make small investments while the cost of change is still low.

Keep your options open – do the risky stuff first and keep tech debt low.

Anyone who’s run into me at conferences will know how much I love Real Options, and it’s really at the heart of the cost of change.

The only reason change costs more is because of the commitment that we already made. Chris Matts describes technical debt as an “unhedged call option”. Edit: while Chris came up with the metaphor, it was Steve Freeman who described it. He says, “You give someone the right to buy Chocolate Santas from you at 30 cents each. That’s fine, as long as the price of chocolate stays low. As soon as it goes up, you still have to pay to make the Santas, and now you’re in trouble and your company is going bust, because you didn’t give yourself the option to get the chocolate somewhere else.”

Similarly, technical debt is absolutely fine until we’re called on to act, and act fast. At that point, we’re in trouble. This is the biggest reason for keeping the cost of change low – because it gives us the option for change, later. It’s a frequently-cited reason for replacing legacy projects – and, bizarrely, often forgotten when the pressure mounts and the business want their replacement app.

This isn’t helped by common practices of estimation and the associated promises, which often lead to that pressure building up in the first place. Rather than making these promises up-front, why not try the risky bits first? I often hard-code data so that I can get feedback on a new UI early, or I spike something out using a new library or framework, or connect to that third-party application just to see what the API is really like to use, or have a chat with the team writing that other system we’re going to need in June, so I can find out how communicative and receptive to feedback they are. Doing this means that we give ourselves the most time to recover from any unexpected discoveries, and we can worry about the more predictable aspects of a system later.

Once we’ve got spikes out of the way, adding tests to act as documentation and examples for any legacy code we’ve created, cleaning it up so it’s self-commenting, ensuring that architectural and modular components are properly decoupled, etc., all help us to stabilize the code. At the same time, the effort involved in creating stable code is itself an investment. If there’s a good chance that the code might be wrong, it could be worth getting feedback on that – knocking up integration tests, showing it to the business, testing it, getting it live – before it’s made stable.

That way, the commitment made is small, and the cost of change is low.

Just remember to clean up and keep it that way!

Update: Dan is currently writing about Spike and Stabilize and Ginger Cake as part of his “Patterns of Effective Delivery”. If you’re interested in finding out more, you might like to watch his Roots talk on the topics.

Posted in breaking models, cynefin, learning, spike and stabilize, testing, uncertainty | 14 Comments

You’re doing it wrong

The first time I did it wrong, it was because I didn’t know any better.

The second time I did it wrong, it was because I forgot about the first time.

The third time I did it wrong, it was because I was really tired.

The fourth time I did it wrong, it was because someone else told me to.

The fifth time I did it wrong, it was because nobody lets anyone do things right around here.

Posted in breaking models | 1 Comment

Scrum and Kanban: both the same, only different

When I started coaching Agile methodologies, I didn’t know how much I didn’t know.

I had come from Thoughtworks, a company whose tools and processes are mostly driven by Extreme Programming, aka XP. In that respect, most of what I learnt and coached was very similar to Scrum, albeit with different words. We called them iterations instead of sprints, and had stand-ups instead of daily scrums. We had planning meetings, but we didn’t make commitments – just estimates. We had collective code ownership and a focus on delivery instead of a cross-functional team, which meant that we ended up with flexible and blurred roles anyway. We had the same problems – getting people co-located, helping business stakeholders become more comfortable with the risk and uncertainty of software delivery, and changing the culture and the infrastructure of the organisations in which we worked. We also had a number of technical practices like unit testing and continuous delivery that aren’t really prescribed by either Scrum or Kanban, but which both put forward as a Really Good Idea.

In the last few years, I’ve been privileged to be part of the Kanban community. I don’t consider myself part of the Scrum community as such, but I’m part of the larger Agile movement and strongly aligned through my background in XP.

In this post, I’d like to cover some of the differences I’ve seen between Scrum and Kanban, and add in some insights from the Cognitive Edge training I’ve done using Cynefin and complexity thinking. This isn’t going to be a full description or comparison, but it should hopefully provide some food for thought, and allow people to see some of the different tools available from both approaches.

Disclaimer: I haven’t explained every term I’ve used, or every practice I’m referencing. I’m assuming familiarity; if you don’t have it, you can run a search on anything I haven’t linked. I’m also using the terms “Scrum” and “Kanban” as aliases for the community and / or leadership, and I haven’t made much distinction between Scrum.org and the Scrum Alliance. This is deliberate, and I welcome feedback.

Scrum and Kanban have more similarities than differences.

Both methodologies put people and their interactions at the heart. Both have a clear focus on value, fast delivery and the continuing growth of the team and its ability to achieve those valuable deliveries. Both contain mechanisms for feedback and improvement, allowing processes to change according to context. Scrum used to have a very prescriptive format – not dissimilar to XP – but this has been flexed in recent editions of the Scrum Guide, as alternatives to some of the practices have emerged. It’s not even beyond the realms of possibility that some of the alternatives have emerged from the Kanban movement!

Also recently, Scrum has started to evolve from being a set of processes to being a very loose framework in which processes can themselves evolve. Used this way, it joins Kanban as a meta-process, from which the real process emerges and continues to emerge.

IMO, both are infinitely better than that broken model called Waterfall, and both are better than having no process at all.

Scrum isn’t about the Scrum Meetings, and Kanban isn’t about the Kanban Signal.

Both methodologies are named after just one small aspect of their whole. In Scrum, the scrum meetings allow the team to share learning and information about how they’re doing and to make decisions about what to work on next. In Kanban, the kanban signal – showing that someone is free to help move a piece of work closer to delivery – provides a similar focus point. Scrum teams frequently use a card wall that’s similar to that of a Kanban team. Kanban teams frequently have daily meetings like Scrum. To an outside observer, the differences could seem so small as to be irrelevant.

Kanban measures lead and cycle time, Scrum measures velocity.

Both Kanban and Scrum have their origins in “Lean” thinking. We like to think of “Lean” as the set of principles behind the Toyota Production System – the process by which Toyota builds and churns out its cars – but both methodologies have an implicit recognition that software development holds more similarities with product development than with a production line. In Toyota, kanban cards are used to stop large amounts of spare parts and other inventory building up, while providing just enough of a buffer to let work flow through the system. Both Scrum and Kanban value this flow. Kanban suggests limits on the amount of work in progress, allowing constraints to be addressed. Scrum encourages collaboration, which results in less work in progress. Scrum uses the proxy of velocity and estimation, which can help to prevent metrics around productivity becoming targets. Kanban uses lead and cycle time, tying its measurements to valuable targets that are hard to game.
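For anyone who hasn’t met the terms before, here’s a rough sketch of the arithmetic. The dates are invented and the definitions simplified – communities argue over the precise start and end points – but broadly, lead time runs from the moment a request is made, and cycle time from the moment work starts:

    import java.time.LocalDate;
    import java.time.temporal.ChronoUnit;

    public class CardTimes {

        public static void main(String[] args) {
            // One card's history on the board.
            LocalDate requested = LocalDate.of(2012, 3, 1);  // added to the backlog
            LocalDate started   = LocalDate.of(2012, 3, 12); // pulled into "in progress"
            LocalDate delivered = LocalDate.of(2012, 3, 16); // reached "done"

            // Lead time: what the customer experiences, from request to delivery.
            long leadDays = ChronoUnit.DAYS.between(requested, delivered);  // 15

            // Cycle time: how long the card was actually being worked on.
            long cycleDays = ChronoUnit.DAYS.between(started, delivered);   // 4

            System.out.println("Lead time:  " + leadDays + " days");
            System.out.println("Cycle time: " + cycleDays + " days");
        }
    }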

Scrum starts with the right context; Kanban improves the existing context.

I once asked a Scrum practitioner, “What do you do if you don’t have a cross-functional team and you’re not co-located?” He told me that that’s the hardest bit of Scrum – starting from the right context. I realised that these difficulties, once overcome, also provide significant value. Just starting from the right context might be a good idea!

On the other hand, Kanban is neutral regarding context. Of course we think that being co-located is a good idea. Of course we’d love to have multi-skilled, flexible, collaborative professionals. If only this were as common in the industry as we’d like. Kanban’s focus on metrics and its measurements of lead and cycle time might make the impact of not having these apparent, so it’s a good place to start. (I used to think it was only suitable for highly skilled, disciplined, advanced teams, but experience and experimentation have taught me otherwise.)

Kanban visualises what’s happening; Scrum visualises an ideal.

This is one of the biggest differences for me. The extent to which Kanban visualises reality is extreme enough that the board might not even have linear flow. Whatever the process policies are – whether helpful or otherwise – Kanban focuses on making them explicit, so that they can be addressed and improved. If the team happen to work with five different phases, this is reflected. If the team write technical stories, they go on the board.

In contrast, Scrum teams tend to set up a visualisation of an ideal process, helping teams to adopt that process. Done prescriptively, Scrum provides a “big bang” starting point. Because it consists of step-by-step practices, it’s easier for beginners to adopt. That could be Scrum’s blessing – but it’s also its curse.

Certification and early adoption.

Scrum has certification. Kanban doesn’t.

Yet.

In the early days of Scrum, there were few enough trainers and experienced Agilists that no matter who taught Scrum, it was done pragmatically. Early adopters tended to experiment, and those experiments led to a better understanding; to a focus on people and interaction, rather than process. Unfortunately, the “Scrum Master” certificates are easy to get, compared to any other discipline that asserts a quality of mastery or independence. The training provides simple practices that are relatively easy for teams to adopt. Scrum is taught by a wider group of people, and the quality control over that teaching has become harder to maintain.

As a result, Scrum has been widely adopted and is considered pretty mainstream – the default Agile methodology – but the certificates associated with it provide a level of confidence that the training generally doesn’t support. I’ve met a couple of excellent Scrum trainers, but I’ve also seen “masters”, armed with their certificates, instituting mini-waterfall and siloed teams as they replace those parts of Scrum that aren’t prescriptive, or that they don’t understand, or that they can’t achieve within a context that they can’t change.

Any examination or certification body suffers from a paradox: while the people who rely on those qualifications need them to be rigorous, the people teaching and taking those qualifications would much rather they were easy. The Scrum Alliance and Scrum.org between them have both helped to set up and reinforce the desirability of their certifications, and I can only imagine the difficulty that their leaders face in balancing the financial incentives involved against the good of the IT community and industry.

In contrast, Kanban is still in an early adopter phase. We’re still working out, as a community, what’s possible. Most Kanban practitioners and coaches are bright, experienced, willing to experiment, desiring of feedback and able to share and learn from each other in a relatively small community. Scrum no longer has that luxury. We don’t know what will happen if and when the Kanban community treads the same path, but we do have the advantage of being able to learn from what’s happened with Scrum.

You can also bet that the leaders are being watched carefully by the rest of the community to see how they meet this challenge. So far I think they’re doing an excellent job. So far there are no certificates available. So far.

Some thoughts spurred by Cynefin.

The Cynefin model of complexity thinking teaches us that in a complex environment – one in which cause and effect can only be understood in retrospect, and which includes most systems with people in, as far as I can tell – we should increase the opportunities and incentives for interaction, so that the practices best suited to the context can emerge.

Waterfall treated software development as complicated, using Cynefin’s domain definition; as though each project was a thing that could be taken apart into many pieces, analyzed, then put back together as a whole. Unfortunately the ability of human beings to make mistakes, combined with our inability to either effectively communicate our intent or see into the future, has meant that this was always doomed to fail. (Most successful Waterfall teams, who complain to me that I label it unfairly, have not practiced it in a pure form, and have included elements of iteration, feedback and common sense. When these are lacking, any methodology is doomed to failure, but Waterfall above all others doesn’t mandate or even leave much room for them.)

When we look at applying Scrum prescriptively to a Waterfall team, we’re looking at pushing forward increasing levels of interaction in a context in which interaction is easy – where teams are co-located, have all relevant skills, are willing to collaborate and can share learning. Scrum’s acts of estimation and breaking stories into tasks force team members to talk to each other. The act of commitment which Scrum recommends in planning meetings causes a team to discuss their concerns frankly. The cycle of feedback and retrospection allows the team to discuss whether they’re delivering value and how to do it more effectively.

But Waterfall is no longer the context from which we always come. Many Agile adoptions aren’t taken up by teams doing Waterfall; they’re taken up by teams who have abandoned their process altogether, and who are talking and sharing their learning in order to work out what to do next, while having no metrics against which they can improve or track their progress and risk.

The context in which Scrum starts is not often possible. We’re not always co-located – many start-ups and companies now allow developers to work from home, and the industry still seems to suffer from a delusion that offshore work is cheap to obtain. We may not have multi-skilled people – it takes some time to learn skills. In both the environments which are already more collaborative than Scrum, and the ones which are a long way from being able to do it successfully, Kanban can be useful.

I fear that this may include the majority of environments and the IT industry, and that’s why I choose to hold myself closer to the Kanban community than the Scrum one. IMO, Kanban works in a larger set of contexts than Scrum does, even though for a subset Scrum might achieve results faster.

They’re still more similar than they are different… and I still like XP too.

Posted in coaching, kanban, scrum | 29 Comments