Goals vs. Capabilities

Every project worth doing has a vision, and someone who’s championed that vision and fought for the budget. That person is the primary stakeholder. (This person is the real product owner; anyone else is just a proxy with a title.)

In order to achieve the vision, a bunch of other stakeholders have goals which have to be met. For instance:

  • The site moderator wants to stop bots from spamming it.
  • The architect wants the system to be maintainable and extensible.
  • The users want to know how to use the system.

These goals are tied to the stakeholders’ roles. They don’t change from project to project, though they might change from job to job, or when a stakeholder changes roles internally.

Within a project, we deliver the stakeholders’ goals by providing people and the system with different capabilities. The capabilities show us how we will achieve the goals within the scope of a particular project, but they aren’t concrete; it still doesn’t matter how we achieve the capabilities. The word “capability” means more than just being able to do something; it implies doing it well.

  • The system will be able to tell whether users are humans or bots.
  • Developers will be able to replace any part of the system easily.
  • Users will be able to read documentation for each new screen.

Often we end up jumping straight from the goals to the features which implement the capabilities. I think it’s worth backtracking to remember what capability we’re delivering, because it helps us work out if there’s any other way to implement it.

  • We’re creating this captcha box because we need to tell if humans are users or bots. Or we could just make them sign in via their Twitter account…
  • We’re adding this adapter because we want to be able to replace the persistence layer if we need to. Or we could just use microservices and replace the whole thing…
  • We’re writing documentation in the week before we go live. Or we could just generate it from the blurb in the automated scenarios…

By chunking up from the features to the capabilities, we can give ourselves more options. Options have value!

But chunking down from goals to capabilities is also useful. The goals for a stakeholder’s role don’t tend to change, and neither do the capabilities within a project, which makes capabilities nice-sized chunks for planning purposes.

Features, which implement the capabilities, change all the time, especially if the capabilities are new. And stories are just a slice through a feature to get quick feedback on whether we’re delivering the capability or not (or understanding it well, or not!).

Be careful with the word epic, which I’ve found tends to refer indiscriminately to goals, capabilities, features or just slices through features which are a bit too big to get feedback on (big stories). The Odyssey is an epic. What you have is something different.


Discrete vs. Continuous Capabilities

Warning: really long post. More of an article. This is a reasonably new and barely tested train of thought, so all feedback is most welcome! Also I do not know as much as I’d like about Ops, so please feel free to educate me.

A capability is more than just being able to do something.

The word which describes being able to do something is ability. I do sometimes use this while describing what a capability is, but there are connotations there that are missing.

When we say that someone is capable, we don’t just mean that they can do something. We mean that they can do it well, competently; that we can trust them to do it; that we can rely on them for that particular ability. The etymology comes from the Latin meaning “able to grasp or hold”. I like to think of a system with a capability as not just being able to do something, but seizing the ability, grasping it, triumphantly and with purpose.

Usually when I talk about capabilities, I’m talking about the ability of a system to enable a user or stakeholder to do something. The capability to book a trade. The capability to generate a report. These are subtly different from stakeholders’ goals. The trader doesn’t want to book a trade; he wants to earn his bonus on the trades he books. The auditor is responsible for governance and risk management; his goal is not to read the report, but to ensure that a company is behaving responsibly. The capabilities are routes to that.

We’re most familiar with discrete capabilities.

In order to deliver capabilities, we determine what features we’re going to try out, and at that point we start moving more firmly into the solution space. Capabilities are a great way of exploring problems and opportunities, without getting too far into the detail of how we’ll solve or exploit them.

Features, though, are discrete in nature. Want to book a trade? Here’s the Trade Booking feature. Auditing? Here’s the Risk Report Generation feature. For each of these features, a user starts from a given context (counterparties set up, trades for the year available), performs an event (using the feature) and looks for an outcome (the trade contract, the report).

For anyone into BDD, that will be instantly familiar:

Given a context
When an event happens
Then an outcome should occur.
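A discrete scenario like that maps naturally onto an automated check. Here’s a minimal Python sketch — `Trade` booking is the running example above, but `book_trade` and its contract record are hypothetical stand-ins for a real system:

```python
# A discrete capability expressed as Given/When/Then in plain Python.
# `book_trade` is a hypothetical stand-in for a real trade booking feature.

def book_trade(counterparty, amount):
    """Book a trade and return a contract record (toy implementation)."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    return {"counterparty": counterparty, "amount": amount, "status": "booked"}

def test_booking_a_trade_produces_a_contract():
    # Given a counterparty is set up
    counterparty = "ACME Corp"
    # When the trader books a trade
    contract = book_trade(counterparty, 1_000_000)
    # Then a contract should be produced
    assert contract["status"] == "booked"
    assert contract["counterparty"] == counterparty

test_booking_a_trade_produces_a_contract()
```

The context, event, and outcome each get a line; the whole scenario runs in one shot and either passes or fails.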

But what about those capabilities which can’t easily be expressed in that easy scenario form?

  • The capability to support peak time loads.
  • The capability to prevent SQL injection attacks.
  • The capability to be easily changed.

Those are, of course, elements of performance, security and quality, also known as non-functionals.

A system which requires these continuous capabilities must always have them, no matter how much it’s changed. There is no particular event which triggers their outcome; no context in which their behaviour is exercised, save that the system is operational (or in the case of code quality, merely existent).

Because of that, it’s harder to test that they’ve been delivered. Any test needs to be carried out regularly, as any changes to the system stand a chance of changing the outcome, and often the result of the test isn’t a pass or fail, but a comparison which notifies at some particular threshold. If the system has become less performant, we can always make judgement calls about whether to release it or not. Become a bit hard to change? Oh, well. Slightly less performant than we were hoping for? Let’s fix it later. Open to SQL injection attacks? Um… okay, that’s slightly different. Let’s come back to that one later, too.

There’s a name for a test that we perform regularly.

It’s called monitoring.

The capability to monitor something is discrete.

While it’s hard to describe the actual capabilities, most monitoring scenarios can be expressed easily. Let’s have a conversation with my pixie, and see what happens.

Thistle: So you want this system to be performant, right?

Me: Yep.

Thistle: And we’re going to monitor the performance.

Me: Yep.

Thistle: What do you want the monitor to do?

Me: I want it to tell me if the system stops being able to support the peak load.

Thistle: Okay. Give me an example.

Me: Well, let’s say our peak load is 10 page impressions / second, and we want to be able to serve that in less than 1 second per page. I don’t know what the actual figures are but we’re finding out, so this is just an example…

Thistle: That’s fine; I wanted an example.

Me: Okay. So if the load starts ramping up, and it takes longer than 1 second to serve a page, I want to be notified.

Thistle: So if I write this as a scenario… scuse me, I’m just going to change your three “ifs” to “givens” and a “when”, and add your silent “then”. So it looks like this:

Given a peak load of 10 pages per second
And a threshold of 1 second
When it takes longer than 1 second to serve a page
Then Liz should be notified.

Me:  Yep.

Thistle: Do you want this just in production, or do you need to know about the performance while we’re writing it too?

Me: Well, I need to know about it in production, because then we’ll turn off some of the ads, but that will make us less money so ideally this will never happen. You guys should make sure it supports peak load before it goes live.

Thistle: Okay, so we can use overnight performance tests as a way of monitoring our changes in development. For yours, though, if we’re already sure we’re going to deliver something that meets your peak load requirements, why do we need to worry?

Me: Well, I might end up with more than the peak loads we had before. If the site is really successful, for instance.

Thistle: Ah, okay. So you want to be notified if the page load time rises, regardless of load?

Me: Yep.

Thistle: So your monitoring tool looks like:

Given the system is running
When it takes longer than 1 second to serve a page
Then Liz should be notified.

Me: That would work.

Thistle: Fantastic.

Me: Are you going to make this magically work, then?

Thistle: Will you pay me in kittens?

Me: What? No!

Thistle: Probably best to get the dev team to do it.
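Thistle’s final scenario reduces to a tiny threshold check. This is an illustrative sketch only — the notification callback and the page-timing measurement are stand-ins for whatever your real monitoring stack provides:

```python
# A sketch of the monitoring scenario: given the system is running, when a
# page takes longer than the threshold to serve, then notify the stakeholder.
# The measurement source and notify mechanism are hypothetical stand-ins.

THRESHOLD_SECONDS = 1.0

def check_page_time(serve_time_seconds, notify):
    """Return True if the page was served within the threshold;
    otherwise call notify with a description of the breach."""
    if serve_time_seconds > THRESHOLD_SECONDS:
        notify(f"Page took {serve_time_seconds:.2f}s "
               f"(threshold {THRESHOLD_SECONDS}s)")
        return False
    return True

alerts = []
check_page_time(0.4, alerts.append)   # within threshold; no alert
check_page_time(1.7, alerts.append)   # breaches the threshold; one alert
print(alerts)
```

A real monitor would run this on a schedule against live measurements; the shape of the check — a comparison that notifies at a threshold, rather than a pass/fail — is the point.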

Some things are hard to monitor.

There are some aspects of software which are really hard to monitor. Normally these are the kinds of things which result in chaos if the capability is negated, though that’s a symptom rather than a cause of the difficulty. Security is an obvious one, but hardware reliability and data integrity also spring to mind. And remember that time Google started storing what the Google Car heard over open wifi networks? Even “being legal” is a capability!

When a capability is hard to monitor, here are my basic guidelines:

  • Learn from other people’s mistakes
  • Add constraints
  • Add safety nets
  • Monitor proxy measurements

Learning from other people’s mistakes means using industry standards. Persistence libraries are already secured against SQL injection attacks. Password libraries will salt, hash, and encrypt authentication for you.
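As a concrete example of leaning on the library rather than rolling your own: a parameterised query never interpolates user input into the SQL text, so the classic injection string is stored harmlessly as data. A sketch using Python’s built-in sqlite3:

```python
import sqlite3

# Parameterised queries let the driver handle escaping, so hostile input
# is stored as plain data rather than executed as SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

hostile = "Robert'); DROP TABLE users;--"

# Never build SQL by string concatenation; bind parameters instead.
conn.execute("INSERT INTO users (name) VALUES (?)", (hostile,))

rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)  # the hostile string is just a row, and the table still exists
```

The same principle applies whatever persistence library you use: let it do the quoting.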

Constraints might include ensuring that nobody has permission for anything unless it’s explicitly given, refusing service to a particular IP address, or in the worst case scenario, shutting down a server.

Safety nets might include backups, rollback ability, or redundancy like RAID arrays. Reddit’s cute little error aliens ameliorate the effects of performance problems by making us smile anyway (and then hit F5 repeatedly). Being in development is safe; best to fail then, if we can find any problems, or bring experts in who can help us to do so.

Proxy measurements include things like port monitoring (you don’t know what someone’s doing, but you know they shouldn’t be trying to do it there), cyclomatic complexity (it doesn’t tell you how easy it is to change the code, but it’s strongly correlated), bug counts, or the heartbeat from microservices (thanks, Dan and Fred George).
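Port monitoring, for instance, reduces to a very small check. A sketch using Python’s standard library — the port being watched is an assumption, and a real monitor would run this on a schedule and alert rather than print:

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Return True if something is accepting TCP connections on (host, port)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Alert if anything is listening on a port we expect to be closed.
# Port 2323 here is illustrative; pick the ports that matter to you.
if port_is_open("127.0.0.1", 2323):
    print("ALERT: unexpected listener on port 2323")
```

It doesn’t tell you what someone is doing on that port, only that something is there which shouldn’t be — which is exactly what a proxy measurement is for.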

Unfortunately, there will always be instances where we don’t know what we don’t know. If a capability is hard to monitor, you can expect that at some point it will fail, especially if there are nefarious individuals out there determined to make it happen.

It’s worth spending a bit of time to stop them, I think.

Complexity estimation still applies.

The numbers to which I’m referring are explored in more detail over here.

If the capability is one that’s commonly delivered, most of the time you won’t need to worry about it too much. Ops are used to handling permissions. Devs are used to protecting against SQL attacks. Websites these days are generally secure. Use the libraries, the packages, the standard installations, and it should Just Work. (These are 1s and 2s, where everyone or someone in the team knows how to do it). Of course, the usual rules about complacency tipping from simplicity into chaos still apply too.

For capabilities which are particular to industries, domain expertise may still be required. Get the experts in. Have the conversations. (These are 3s, where someone in your company knows how to do it – or you can get that person into your company).

For any capability which is new, try it out. If it’s discrete, look for anything else that might break. If it’s continuous, make sure you try out your monitoring. And look for anything else that might break! (These are the 4s and 5s, where nobody in the world has done it, or someone has, but it was probably your competitor, and you have no idea how they did it.) It’s rare that a continuous capability is a differentiator, but it does sometimes happen.

Above all, and especially for 3s to 5s, make sure that you know who the stakeholder is. For the 4s and 5s it should be easy; it will be the person who championed the project, and your biggest challenge will be explaining the continuous capability in terms of its business benefits. (The number of times I’ve seen legacy replacement systems, intended to be easy to change, lose sight of that vision as sadlines approach…) For 3s, though, the stakeholders are often left out, as if somehow continuous capabilities will be magically delivered. Often they’ve never been happy. And making them happy will be new, so it might end up being a project in and of itself.

That’s mostly because continuous capabilities are difficult to test, so a lot of managers cross their fingers and hope that somehow the team will take these nebulous concerns into account. But now you know.

If you can’t test it, monitor it.

As best you can.


Using BDD with Legacy Systems

One question I keep being asked is, “Can we use BDD with our legacy systems?”

To help answer this, let me give my simplest definition of BDD:

BDD is the art of using examples in conversation to illustrate behaviour.

So, if you want to talk through the behaviour of your system, and use examples to illustrate that behaviour, yes, you can do this.

There are a couple of benefits of this which might be harder to achieve. The biggest is that using examples in conversation helps you explore behaviour, rather than just specifying it. Examples are easy to discuss, but it’s also easy to decide that they don’t matter, or that you can worry about that scenario later, or that you want different behaviour. It’s harder to do that when you’re talking about tests or specifications, and it’s harder still when the behaviour you’re discussing already exists.

If you do find yourself talking through the different examples, you’re probably clarifying behaviour. That’s OK; just recognize that you’re doing it, so some of the usual advice we all give around BDD, and particularly around scope management with BDD, won’t apply. Asking questions, though, is still a great idea. You might find all those places where the behaviour is wrong, or annoying, or isn’t needed at all. Using examples is a great way to illustrate bugs, and helps testers out a lot.

The second aspect is the automation. (If you do this, please consider the other post I wrote about things I like to see in place first; these still apply.) Automation is usually harder with legacy systems, because often they weren’t designed with automation in mind. Websites have elements with no identifiers or meaningful classes, Windows applications have complicated pieces with no automation peers, and some Adobe widgets assume that if you’re using automation it must be because of reading disabilities, so they helpfully pop up boxes on your screen to ‘help’ you (thank you for that, Adobe).

But the real reason why BDD becomes hard with legacy systems is because often, the system was designed without talking through the behaviour, and the behaviour itself makes no sense.

I recently tried to retrofit SpecFlow around my little toy pet shop. The pet shop itself was designed just as a way of showcasing different automation elements, so it wasn’t particularly realistic. Because of that, I find it impossible now to have conversations about its behaviour, because its behaviour simply isn’t useful. It isn’t how I would design it if I were actually designing a UI for pet shop software. I can’t even talk to my rubber duck about it. I won’t be able to sensibly fit SpecFlow to this until I can actually change the behaviour to something sensible.

If you’re in one of those unfortunate environments with a bit of a blame culture, BDD will help introduce transparency into the quality of your process – or lack of it. Just so you’re warned. (In my instance it was a sensible trade-off at the time, since I originally wanted automation software, not a pet shop, and it’s my software so it’s my problem. You may not be so lucky.)

Automation on legacy systems can give you a nice safety net for any other changes, so it might be worth trying this for a few key scenarios. Teams and particularly testers I’ve worked with have been saved a lot of time in the past by just having one scenario that makes sure the app can start - automation is particularly useful if your build system is closer to your production system than your development one; frequently the case for legacy systems.
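That “the app can start” scenario can be tiny. A sketch in Python — the launch command below is a placeholder; substitute however your legacy application is actually started:

```python
import subprocess
import sys

# The cheapest possible safety-net scenario: prove the app starts at all.
# This command is a placeholder standing in for your real app's launcher.
APP_COMMAND = [sys.executable, "-c", "print('app started')"]

def app_can_start(command, timeout=30):
    """Given a build of the app, when we launch it, then it exits cleanly."""
    result = subprocess.run(command, capture_output=True, text=True,
                            timeout=timeout)
    return result.returncode == 0

assert app_can_start(APP_COMMAND), "smoke test failed"
print("smoke test passed")
```

For a long-running application you’d check that it comes up and responds rather than that it exits, but even this crude version catches the “build is broken” failures that waste the most time.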

If you do happen to find an aspect of behaviour that you like and want to capture, then by all means, do BDD. Talk through the examples, stick them in a wiki, automate them if you can, remembering that having conversations is more important than capturing conversations is more important than automating conversations. You might even find out why things behave a certain way, and come to like the existing behaviour better.

Otherwise, you might want to wait until you’re changing the behaviour to something you like.


The Hobgoblin Manifesto

For Chris, the most Puckish person I know and who I am proud to call my friend.

A Hobgoblin is mischievous from birth
We never mean to harm, but to disrupt,
To muddle up the stasis of the Earth
That others may solve problems they gave up.

A Hobgoblin defies all expectations
Of ugliness and meanness, seeming fair
Creating mirth and joyful celebrations
Whatever else may linger here and there.

A Hobgoblin lives moments day to day
And unintended consequences show.
We own our actions, learn, then seek a way
To make amends, if others will allow.

A Hobgoblin loves fools, and gives them ears
If jackasses they are, and such ears fit
Remembering that when enchantment clears
We’re still fools too – at least a little bit.

A Hobgoblin forgives. One silly act
Does not a person’s whole or part define.
We offer peace, if others will retract;
To err is human; to forgive divine.

A few people have asked if they can sign this! In the spirit of the Community of Thinkers, please post it to your own blog and attrib this if you want these words to be yours too. Derivatives and responses also welcomed.


An open letter to Bob Marshall

Dear Bob,

We’ve had our differences, mostly around a particular set of tweets you sent
out about the Pask award in 2010. I was respectful enough to address those
with you face-to-face. You refused to retract them, particularly the most
hurtful comment, and you told me at the time that I was “naive” for believing
that I won the Pask award fairly.

I told you that it would make me less likely to want to work with you, and you
said you were sorry to hear that. That was as close as I ever got to an apology.

I didn’t want it to get in the way of learning, so I left it aside and reached
past it. I don’t believe that any one aspect of a person affects or defines
all aspects. I attended your RightShifting conference. I offered you some tips
from my experience in Dreyfus to help with your Marshall Model. I was as
forgiving as I feel I could be, and our community interactions seemed good.

I thought that things had become more positive. You reached out and offered me
a hug last time we met, which I reciprocated. I thought that perhaps you had
started to see that maybe I really did deserve that award; that my work was
meaningful. I had dreams that you might one day apologize.

Then, on Friday, this Twitter sequence happened.

@flowchainsensei: I’d like to work on advancing the theory and practice of software and product development. Can’t see that ever happening in the UK.

@somesheep: @flowchainsensei IME @tastapod @lunivore @PapaChrisMatts for example prove you wrong. You can make advancements. Not the ones you want?

‏@flowchainsensei: @somesheep Wise up, dude.

@PapaChrisMatts: How is this tweet consistent with #NVC &
#antimatterprinciple? Cc @somesheep @tastapod @lunivore

@flowchainsensei: @PapaChrisMatts Hobgoblins, little minds, etc. @somesheep @tastapod @lunivore

Bob, did you seriously just resort to name-calling? Because this Twitter sequence makes it look as if you did, and as if you’re doing it to Dan and me in particular. I also found that you appear to have blocked me on Twitter, with – as far as I can tell – zero precursor on my side to deserve it.

Oh, well.

I looked up the word “Hobgoblin” and found some very positive connotations. I
once played Puck, the most famous hobgoblin, as a child, and remembered that
while he’s mischievous, he’s not all bad.

Those that Hobgoblin call you, and sweet Puck,
You do their work, and they shall have good luck;
Are you not he?

So I am a Hobgoblin. I own that, and will as always seek to expand my “little
mind”. I take your insult and make it mine, and wish you good luck in return.
I will turn it round, and use it to create a light-hearted manifesto designed
to steer the mischief we all end up causing in a positive direction, and to
forgive others who cause it themselves.

Gentles, do not reprehend,
If you pardon, we will mend.
And, as I am an honest Puck,
If we have unearned luck
Now to ‘scape the serpent’s tongue,
We will make amends ere long;
Else the Puck a liar call;
So, good night unto you all.
Give me your hands, if we be friends,
And Robin shall restore amends.

If you want to meet up, have a drink, and talk about whatever the hell is
bugging you, I’m more than happy to do so. Just let me know how to get in
touch and make plans, since I can’t DM you any more.


PS: I also apologise for the small jibe I made myself. It was petty and
unnecessary. Sorry.

PPS: I’ve also been pointed to the quote, “A foolish consistency is the hobgoblin of little minds,” which clarifies a lot but not everything. I did ask for clarification over Twitter but of course you probably didn’t see it! Either way, I am happy to join Chris in requesting that if you’re going to preach non-violent communication, please do more of it.


The Dream Team Nightmare: a Review

I remember Steve Jackson and Ian Livingstone very well. They were the duo who published the “Choose your own adventure” books in my childhood; games which took you from one scene to another, battling monsters, solving puzzles and collecting items that would help you later on. The objective was always to escape the dungeon with the treasure, or the rescued captive, or just you, intact… but always in dread of those two words happening too early: “The End”.

Now Portia Tung has recreated the genre, but this time the adventure is real. You are an Agile Coach, trying to help a failing team – already nominally Agile – to deliver their project in time. The dungeon is an office, the monsters are awkward people and overly optimistic managers, and you, the hero, start with only the wooden club of the Agile Manifesto; no armour, no sharp knives and no magic.

This adventure doesn’t provide hard-and-fast rules for “How to be an Agile Coach”, but it does remind me very much of my early days in that role. Mistakes are simple to make and costly; involving other people is hard work yet invaluable; and leaving yourself with no options will usually result in your demise – at least, the demise of your contract. Even though the book is far more linear than any of the early games – frequently taking you from one paragraph to the next – that mechanism did draw me in, keeping me reading from one scene to another. My experience did help me to avoid the obvious traps, but I went down those routes anyway, just to see what would happen, and found the results pleasingly realistic. Those two words – “The End” – also invoke just as much dismay, when they happen without ultimate triumph.

I did miss some of the mechanics of the adventure books I remembered. There are no artifacts to carry around, and even though the artifacts of Agile Coaching tend to be knowledge-related, it would have been a nice twist. “Do you have a vision statement for the project? If so, turn to chapter 52…” Without these artifacts, the book tended to be focused very much on the process, rather than any results or metrics; a style of coaching that I try to avoid, as I often find it leads to process for process’s sake. Sometimes in the book this happened, and I found myself following patterns of work that seemed more habitual than useful, with no explanation as to why a particular practice might be a good idea. Often it also seemed that there really was only one way to succeed, whereas having different resources to draw on would have let me, and other coaches, choose our own routes to success. There’s usually more than one way to coach a team. This would have made the adventure less linear, too.

Still, while I was reading it with an eye to succeeding (rather than chasing down the failures to see what happened) I did find it a useful reminder of all the things that we know we ought to do as Agile Coaches, but frequently don’t when it comes to real life. The need for space and reflection is emphasized a lot, as is the expectation that you’re going to be putting a bit of work in above and beyond the obvious. Reaching the realization that the team were going to deliver, and that previously hostile monsters… I mean managers… were invested in realistic prospects of success, felt suitably triumphant, as did those two words, “The End”, in the right place.

Unlike the adventure books, there are also different levels of success, too. It’s possible to fail while learning, to fail in ways that damage your career, or to succeed utterly. I enjoyed the reminder that failure isn’t always the end of the world.

I recommend this book for anyone wondering what it’s like to be an Agile Coach, or anyone who’s new to that role, or working within a larger organisation that could use that kind of help. I think it’s less useful for experienced coaches, but I would certainly advise anyone I was training as a coach to read it. And even though I found it less useful than I would have done some years back… it was still good fun!


Disorder, or, How I Got a Black Eye

In the Cynefin framework, the Disorder domain is the one in the middle. It’s the state of not knowing what kind of causality exists. Over on the right we have the predictable domains of Simple and Complicated, and on the left the unpredictable Complex and Chaotic domains. When we aren’t sure which domain dominates, we tend to behave according to our preferred domain.

I’ve become used to seeing the Complex domain treated as if it’s Complicated by people who desire predictability. We do it all the time with Scrum, trying to estimate things we’ve never done before, or with BDD when we try to define outcomes in scenarios where the right answer isn’t certain. Getting a grip on Cynefin helped me to spot that very easily, but I laughed; of course I would never fall into Disorder myself!

The Story of the Green Screen

I can easily tell the difference between something that’s Complex and something that’s Complicated, and am very unlikely to treat the first as the second. Of course, it helps that my preferred domain is Complex (like most developers).

Then one day, my photography screen arrived.

This screen turns up in a neat little circular bag, about a metre wide. When you take it out, you find three layers of circular wood, with some fabric smooshed together in the middle. As you unfold it, the wood starts to act as a kind of spring, and suddenly – Bang! – it pops out into a 1.5m by 2m screen.


Far too big for my little flat! So I needed to put the screen back away in its bag again.

I’ve found that lots of us who have a preference for the Complex also don’t always read the instructions before trying something out. I wrestled with the screen a bit. I couldn’t see how to get it back into the bag again. The wood is incredibly springy, so it takes a bit of strength to bend it back into shape. After a few minutes, I realised that reading the instructions would probably be helpful.

The instructions made no sense.

So I went looking for a video that I thought might explain it to me. I found this one, in which three photography/video professionals attempt to do the same thing. It had me giggling for a few minutes. At least I wasn’t the only person who struggles with this! At the end of the video they finally get the screen away. I watched, but I couldn’t see exactly how they’d managed it.

Still, they managed it after several tries. Trying something – experimenting – is the right thing to do in the Complex domain! So I thought I should experiment a bit more. After all, it worked for them…

I wrestled with the screen some more. I twisted. I turned. I pushed, and let go of one side for just one second…

Thwack! The screen popped out, whacking me in the eye. I’m very lucky it only drove my glasses back into my face, skidding them up onto my eyebrow, rather than breaking them.

After a few tears and a bit of a tantrum (it’s OK to be unprofessional when you’re alone!), I patiently looked for another video that would help. After all, someone had done this before, and others must have had the same problem, the first time they did this. After some searching, I came across this site, where the kind Dr. Daniel C. Doolan shows us how to do it with a smaller reflector, from several different angles, before progressing to the screen.

Finally, having learnt from the expert, I followed his steps. It still took a bit of strength, but – Plop! – the screen collapsed back into its metre-wide circle again, allowing me to pop it back into its case.

Of course, I couldn’t record any videos that week, on account of my swollen and multi-coloured face.

We Are All Biased to Our Domain

The main problem with my “experiment” was that it really wasn’t. Experiments are safe-to-fail, and wrestling with wood without wearing any safety goggles was a bit negligent on my part. I’m sure if I had asked a Tester they would have spotted the possibility of accidents. Testers are very good at coming up with scenarios we haven’t thought of, and not just in software development! Applying Cognitive Edge’s Ritual Dissent might also have helped me spot the problem.

But really, we didn’t need a safe-to-fail experiment. I should have seen that the problem was predictable. People had done it before, and the fact that the screen came with a nice little bag to store it in should have told me that the solution was repeatable, and merely required expertise – and not very much, at that. Once I understood the trick, it reminded me a little bit of a Rubik’s Magic game, which I used to play with as a child. So this was definitely a predictable problem, and learning from the experts was the right thing to do.

Of course the real problem was my impatience, and my bias towards my own domain.

Some Hints and Tips for Avoiding Disorder

A short while back, I wrote a blog post on how to estimate complexity. I’ve found it does help to bring people out of Disorder. Particularly, I’ve found it useful to consider whether someone has solved a problem before, in the same or similar context, and whether their expertise is accessible. We developers do like to “reinvent the wheel”, but a lot of times it’s not really necessary when the problem has been solved before (Complicated). Project Managers and Scrum Masters often demand predictability where none exists, and recognizing that when nobody has done it before the outcome will be emergent can help us communicate the need for experiment (Complex).

Occasionally making sense of our domain is itself a complex process, because we’re human. So if you’re not sure which domain you’re in, here are my hints and tips for making sense of the domain safely.

  • If you have several solutions and you don’t have enough information to be certain which one is right, pick the one that seems right and is safest and easiest to change. In a Complex domain this will be safest to fail, and in a Complicated problem with several solutions (Good Practices as opposed to Best Practice) chances are that the easiest to change is also the simplest to understand. If it turns out to be wrong (Emergent Practices) then you will have better information for making the right choice. This is Real Options 101.
  • If you don’t know whether a solution exists or not, make sure that your experiment really is safe to fail. Ritual Dissent, Black Hat or Evil Hat thinking, and bringing problem-focused Testers into conversations are all useful ways of checking this.
  • Try looking it up. Google, StackExchange, and our many ways of accessing the Lazyweb give us several fairly safe-to-fail experiments, right there.
  • For those of you who love Chaos and prefer command-and-control, treating everything as an emergency, consider delegating to someone else when it isn’t. It’ll be less stress for everyone involved. And for the rest of us, remember that we tend to treat things this way when we’re stressed, because everything feels urgent. Let other people support you occasionally, especially if you’re feeling low on personal resourcefulness.
  • Try bringing people who have different biases into your conversations. As far as I can tell, Testers and Myers-Briggs J-Types generally prefer Complicated domains; developers prefer Complex; children prefer things Simple; emergency services personnel specialize in dealing with the Chaotic. Their perspective can help. Yes, even (sometimes especially) the kids.
  • Be aware of the bias of your own domain, and be forgiving of yourself when you get it wrong. You yourself are a Complex creature, and everyone fails occasionally, whether it’s safe to do so or not.

Like this? Want to know more? I will be running a workshop on BDD with Cynefin in Brisbane, 11th December, as part of Yow! Australia. Registration is still open!
