BDD Training – a bit differently

Did you know that there are types of requirements which are so uncertain that talking through them will just end in arguing about them? Or that some requirements are so well understood they’re simply boring? That some requirements are more likely to change than others? Would you like to be able to tell the difference, ask relevant questions and know when you’ve got enough understanding to start coding?

Would you like to be able to use your stakeholders’ time more effectively, and have them be truly engaged in conversations? How about using those conversations as a risk management tool? What if the conversations gave you more clarity on the big picture – the vision of the project, the stakeholders involved and their goals, and the capabilities being delivered?

What if you could use these ideas to help keep automated scenarios and tests minimal and maintainable, phrased in a language the business use?

Would you like to use these techniques beyond software – in everyday life, as a coaching tool, or to help find smaller personal goals that will give you more options for reaching the larger life-changing ones?

Behaviour-Driven Development uses examples of how a system should behave to explore the intended value and behaviour of that system, with a heavy focus on discovering uncertainty, uncovering risk and avoiding misunderstandings and premature commitments.

If you’re a developer wanting to learn Cucumber, I highly recommend Matt Wynne’s BDD Kickstart. If you come to my sessions, you might not see any code at all. Instead, I focus on helping people have effective conversations and a mindset that makes a difference across the entire team. As part of my BDD tutorials you’ll get an introduction to Cynefin and complexity thinking, Deliberate Discovery, Real Options, Feature Injection, and – of course – Behaviour-Driven Development. The tutorials are highly interactive, full of experiential exercises, and consistently highly rated by attendees, with unique workshops and conversations that you won’t get anywhere else.

If you like to have powerful conversations and help to create software that makes a difference, watch this space and my Twitter feed for upcoming tutorials near you, or get in touch if you’d like them run internally (whole teams welcome!). 2 and 3-day Agile, Lean and BDD courses are also available.

My next 1-day tutorial will be at Agile Island 2012, Reykjavik, on the 28th of November. There’s still time to sign up!

Posted in bdd | 4 Comments

Learning English

Since much of my focus is on getting people to talk and communicate effectively, I thought I would share a couple of sites that might be of use to people wanting to learn the language we tend to communicate in – English.

English StackExchange is “a free, community driven Q&A for linguists, etymologists, and serious English language enthusiasts” – experienced writers and speakers who are wondering about more advanced topics. Favourite questions include, “How do you quote a passage that has used ‘[sic]’ mistakenly?”, and “Did English ever have a formal version of ‘you’?” (It turns out to be you, with thou being informal – who knew?)

However, the community has little patience for people who ask questions about correct use of grammar and vocabulary, and heaven help you if you don’t actually use those in your questions! A new site has now been proposed, and backed by some of the experts over at the advanced site: English Language Learners StackExchange. It needs a few more people to enter its beta, so if you feel like you could use some help with your English as she is spoke, or you think you know enough to help out others, why not head on over and make the commitment to ask or answer 10 questions?

While I’m here: it’s means it is, its means belonging to it. Please be kind to your apostrophes.

Posted in Uncategorized | Leave a comment

The Joy of Arrogance

I’m awesome, and arrogant.

I’m awesome, and arrogant, and I know it, and this makes me joyful. I wanted to share that joy with you, and explain why I think arrogance is so important, and humility overrated.

For a start, humility suffers from a Catch-22 condition: if you have it, you can’t know you have it, because if you know you have it you don’t. Only arrogant people believe themselves to be humble (but not all arrogant people, since some of us know we are).

Secondly, we’re all arrogant. Arrogant people are unable to learn, thinking that they already know the answer. This is a natural part of the human condition – it’s called confirmation bias – and we all suffer from it. The world simply has too much data in it to be able to take it all in, so we abstract from what we observe, come to conclusions as a result, form beliefs based on those conclusions, and filter our observations based on our beliefs. This is perfectly normal behaviour.

There are some people who believe that because they are aware of confirmation bias, they don’t suffer from it any more. Dan North calls this bias bias. People who are seeking humility are trying to escape from confirmation bias. It would be a fantastic goal, if it wasn’t essentially impossible. The quest for humility is itself a form of bias bias.

Tobias Mayer wrote in his recent blog post on humility, “Humility allows for quiet, internal reflection; it is a tool for rightsizing oneself, and thus opens up greater possibilities for thoughtful, considerate, and open interaction with others.” And yet, this quality isn’t something that just magically appears out of thin air. The only way to discover that we have opinions which we hold too strongly is to share those opinions, and sharing an opinion that we hold strongly but don’t recognise as being biased by our beliefs… well, we will share it as if it’s a fact, and come across as arrogant. Fact.

If we reflect internally, what changes? Where do the new opinions come from?

They come from other people who are sharing their opinions. If humility listens and arrogance talks, then we need arrogance in order for humility to be of any use whatsoever.

There’s a rather lovely phrase: “Strong opinions, weakly held.” That blog post that I’ve just linked to talks about “Wisdom as the courage to act on your knowledge AND the humility to doubt what you know.” Have you ever tried to doubt what you know? Not just suspect, but actually know? Again, we’re asking for the impossible.

Here’s what I’m going to do. Instead of doubting what I know, I’m going to focus on finding out what you know. I’m going to do that by sharing my strongest beliefs – as I have in this blog, as have all the bloggers on humility. I will ask questions only about those things about which I am uncertain. I will do this because there’s a good chance that I’m right, and because my essential human nature makes it impossible for me to do anything else. If I do ask questions, I will gain certainty, and then I will share my wonderful new knowledge with you, because it will be true and I will be right. If I’m wrong, you will no doubt set me straight, because you believe you’re right too. I probably won’t believe you at first, because I’ll be busy filtering whatever you say to fit my model, so you might have to persist and perhaps remind me that I am human and therefore arrogant. Your duty to me, as a fellow human being, is to be arrogant enough, and forgiving enough of my arrogance, to do that.

Because I’m awesome, and arrogant, and so are you.

Posted in breaking models | 17 Comments

The Deliberate Discovery Workshop

This is a pretty simple workshop which I ran at Agile 2011 and have run as part of my BDD tutorial since. It usually gets people thinking differently, so I thought I’d share it with you too. Some teams have taken this away to use as another retrospective format, and I’m always fascinated by the stories that come out! It can often lead nicely into an explanation of Cynefin, and also introduces Real Options (it turns out that Deliberate Discovery and Real Options are two sides of the same coin).

I tend to give the instructions out one column at a time, to avoid people overthinking and trying to aim for some outcome at the end. I also walk round the room and ask if anyone’s having difficulty coming up with ideas, particularly with respect to the commitment. Identifying the moment of commitment – or recognising that someone else is making the commitment for you – is the heart of the workshop. Big thanks to Dominica for trying this out and helping with reflection on what works.

Instructions

In groups of 3 to 4 people, get a big piece of paper and divide it into 4 columns.

1: Tell a story

Each person tells a story about a discovery; either a problem they encountered, or something they found out that they wish they’d known earlier. Give the story a title. Dan North likes “the one where (this happened)”. Put the title on a post-it or just write it into the first column.

If working with a large group, get 1 person from each small group to share their story.

2: Identify the commitment

What decision had already been made that meant it was a problem? Did you make a promise or have one made on your behalf? What was the point at which the decision became hard to reverse? When did an investment get made that was hard to recover? Put this in column 2.

If working with a large group, it might be worth picking out one of the examples to work through so they get the idea.

Examples might include decisions around scope and deadlines, external communications, spending money, or writing the wrong code.

3: Deliberate Discovery

Were there any ways you could have discovered information earlier that would have led to a different decision? Who could you have talked to? Where was the information located, and could you have gone there? This goes in column 3.

Examples might include talking to customers, sitting with users, trying to release early.

4: Real Options

Were there any ways of paying to keep options open, perhaps making the commitment later, when more information was present? Was the commitment simply made too early?

Examples might include good rollback procedures, using technology that’s easy to change, agreeing a range of dates instead of a deadline.

Wrap Up

If working in a large group, get each small group to share their insights and anything interesting they discovered.

How likely is it that something similar will happen again?

If very likely, would adding the discovery process you identified help you avoid this problem? (This matches the “complicated” Cynefin domain, so cause and effect can usually be correlated and making the discovery early will often prevent the problem occurring.)

If it’s unlikely to happen again, would adding the options process also provide options in any other cases? (This matches the “complex” Cynefin domain, and processes which create options allow decisions to become safe-to-fail experiments or probes instead.)

Whatever the insights, this can then be used as a way of changing the process – and everyone loves to tell stories about problems they encountered!

Posted in complexity, cynefin, deliberate discovery, real options | 1 Comment

Examples in the large

A couple of posts ago, I wrote about Feature Injection. Here’s a quick summary of how I do it:

  • Vision: A primary or core stakeholder has a vision which will save money, make money or protect revenue.
  • Goals: As well as his own goal, the goals of other stakeholders will have to be met too, in order for the vision to go live.
  • Capabilities: To deliver the goal, we provide the users and the system with the capabilities to do something.
  • Features: Most capabilities could be achieved with people, paper and pen, but we like to help by providing software. A feature is a widget, or a part of an application, or a page on a website, which supports the capability.
  • Stories: Features are quite large, so we break them into vertical slices on which we can get feedback.
  • Scenarios: A scenario is an example of a user using the system. We can use scenarios to explore the requirements, discover things we haven’t thought about and break up our features and our scope.
  • Code: Finally, we get to see the vision become reality!
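To make the chain concrete, here’s a worked example; the product and every detail in it are invented purely for illustration:

```text
Vision:     Reduce support costs by letting customers help themselves
Goals:      The support manager wants fewer repeat phone calls;
            Legal wants published answers reviewed before going live
Capability: Customers can search existing answers to common questions
Feature:    A search box and results page on the support site
Story:      Show the five best-matching answers for a search term
Scenario:   Given three answers mention "refund",
            when a customer searches for "refund policy",
            then the answer titled "How refunds work" appears first
```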

I also wrote about two patterns I use in conversation with scenarios: context questioning, and outcome questioning.

Given a context, when an event happens, then an outcome should occur.

  • Is there any other context which produces a different outcome for the same event?
  • Is this the only outcome that’s important, or does something else need to happen too?
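As a sketch of how those two questions play out, here’s a hypothetical ATM example (all details invented); each question turns one scenario into several:

```gherkin
# Starting scenario
Given the account has £100
When the user withdraws £20
Then £20 should be dispensed

# Context questioning: is there another context with a different outcome?
Given the account has £10
When the user withdraws £20
Then the ATM should refuse and display "Insufficient funds"

# Outcome questioning: is dispensing the cash the only outcome that matters?
Given the account has £100
When the user withdraws £20
Then £20 should be dispensed
And the account balance should become £80
And the transaction should appear on the statement
```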

Normally when we think of scenarios, we think of a user using the system. However, we can use these two patterns at scale, as well.

Given a stakeholder, when we deliver this software, then we should meet his goals. Is there any essential stakeholder whose goals we haven’t met? Is there any other goal we need to meet at the same time?

As an example, think of writing software for air traffic control. Obviously the pilots are stakeholders. What about the regulatory authorities? What are their goals? Will the airlines be able to stop us going live? Are there legal concerns?

Given a feature, when we deliver this software, then the user should have the capability to do something.

As an example, consider a trading system. Does the feature for booking a trade provide the whole capability? What about auditing? Approvals? This is also a great way to discover other missing stakeholders!
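That trading example might be sketched as a scenario in the large (the details are invented):

```gherkin
Given the trade-booking feature
When we deliver this software
Then traders should have the capability to book a trade
And compliance should have the capability to audit every booking
And a supervisor should have the capability to approve large trades
```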

Given a vision, when we deliver this software, then we should protect our revenue.

Will this really stop our customers from going somewhere else? Is there some context in the market which we’re missing? If we deliver this software, are there any other possible outcomes or repercussions? Perhaps Facebook or RBS could have used this pattern…

By questioning what should happen, and asking, “Should it?”, we can use the same familiar patterns at scale to discover even more potential scope, explore our ignorance and reach a common understanding or find places where there isn’t one.

Wouldn’t it be nice if we could write tests for those examples at scale, too?

Posted in bdd | 4 Comments

Beyond Test Driven Development

Clarification: this isn’t a post about BDD vs. TDD, it’s a post about Spike and Stabilize. Tagging those now.

For love of TDD

Dan North and I have been talking about some different ways of writing software that matters. We’ve been talking about not doing TDD, and not automating BDD scenarios.

A lot of people have reacted to these suggestions with something approaching visceral horror, as if we’ve committed outright heresy, which I guess we have. TDD is such a game-changing practice that those who’ve discovered it recently may not even be able to imagine a world without it, since that would mean going backwards. So I want to explain a little bit about my take on TDD, just to make it really clear, and then introduce some different ideas.

TDD is amazing.

TDD is a truly important practice that every developer who writes code for a living ought to know about and attempt to master. TDD is how I learnt to write maintainable code; design classes with appropriate and single responsibilities; understand what it was that I was really trying to achieve.

The people who came up with TDD deserve every respect, which I haven’t always given them. I decided to start with that this time just to make it clear. TDD is a flippin’ awesome practice. Even when we say “BDD isn’t just TDD”, BDD definitely evolved from TDD. If TDD hadn’t existed, BDD and ATDD wouldn’t have either. So a big, massive thank you to all those people who helped create and espouse the values of TDD.

If you haven’t experienced really great TDD, stop reading. This post is not for you. Go read Bob Martin’s many articles and posts, or Kent Beck’s book, or find your local code retreat instead. You will get more out of that time than reading this article.

Beyond TDD

Dan North and I have been talking about moving beyond TDD for a while. Here’s a confession: I don’t always write my tests first. I often don’t automate scenarios up-front either. And I consider it right for me not to do so.

Dan recently blogged on the opportunity costs of our various practices, and used TDD as an example of a practice that carries such a cost. It can take longer to produce software with TDD than without. I know this because I’ve tried it both ways, as has Dan. We do something that Dan calls “Spike and Stabilize” – trying one or several things quickly to get feedback, then stabilizing the end result after we’ve managed to eliminate some of the uncertainty around what we’re doing. I can’t tell the difference between the end result when I TDD something and when I spike and stabilize, except that the second one lets me get feedback from my stakeholders faster, and is easier to change in response to that feedback.

For me, that means starting with a UI and knocking out something approximate, frequently with hard-coded data. If the UI is the place of uncertainty, I’ll show it to the stakeholder immediately, or try it a couple of ways. Sometimes it might be a performance concern, in which case the UI will be something that exercises the performance. Sometimes I might need some simple behaviour behind the UI, which I also just “hack out”.

I keep all the code I’m writing clear, readable and easy to change. I can do this because I can write examples of how my class is going to work (a.k.a. unit tests) in my head. That’s come from hours of doing TDD, at work, at home, at code retreats, after work, during lunch hours. If you want to be really, really great at coding, I think you do need to have the 10,000 hours of practice that Malcolm Gladwell talks about. I’m pretty good rather than really good (and particularly, I have shallow rather than deep knowledge) but I get TDD enough to move beyond it. I’m emphasising this not to show off my knowledge, but to stop anyone who’s still learning TDD from thinking, “Oh, this is great! I can just do what Liz does.” Get good at TDD first.

Dan works with amazing people who are way better than I am. He’s damned good too. Developers like Dan and his team don’t need TDD in order to create good designs, separate responsibilities or simple code that does just enough to do the job. They’re really good at TDD, and have moved beyond it.

For most people, TDD is a mechanism for discovery and learning. For some of us, if we can write an example in our heads, our biggest areas of learning probably lie elsewhere. Since ignorance is the constraint in our system, and we’re not ignorant about much that TDD can teach us, skipping TDD allows us to go faster. This isn’t true of everything. Occasionally the feedback is on some complex piece of business logic. Every time I’ve tried to do that without TDD it’s stung me, so I’m getting better at working out when to do it, and when it’s OK to skip it. If in doubt, I do TDD. I recommend this as a general principle. Otherwise, I try to eliminate the ignorance I have. Here are the rules I’ve learnt which help me to spike things out and stabilize them.

  • If in doubt, do TDD.
  • If it’s complex enough or has enough learning in it that pairing is useful, do TDD.
  • When spiking, hard-code the context rather than adding behaviour wherever possible. Hard-coded data doesn’t require TDDing.
  • Don’t pick the right libraries or technology. Pick the technology that’s easy to change. And don’t write tests around other people’s libraries – isolate them through interfaces or adapters instead.
  • Once the uncertainty you’re dealing with is no longer the area of greatest ignorance, stabilize by adding tests. You’ll be able to tell at this stage if you should have TDD’d first based on how much rework you have to do to make the tests readable.
  • Adding tests afterwards is especially important if you’re working in a team with mixed skills or siloed roles, or with other teams in the same code-base, as your team-mates and colleagues will need them as documentation.
  • Adding tests afterwards is especially important if you’re the kind of person who forgets what it was you were working on after a couple of months (i.e.: just about everyone).
  • Adding tests afterwards requires even more discipline than adding them beforehand. If you find you’re under pressure to skip that step, do TDD instead.
  • No amount of automated testing is a substitute for trying out the code you’ve just written manually.
  • If your user is another system, hack something out to allow you to pretend to be that other system, then stabilize that. You’ll need it later.

By doing TDD – writing our tests first, or before we’ve got feedback on what it is we’d like to achieve – we are creating a premature commitment. The principles of Real Options say, “Never commit early unless you know why”. If you’re working with uncertainty, and if you can avoid adding investment before getting feedback – for any practice – then you will actually go quicker and be able to respond to discovery faster. You will be more agile.

However, if you don’t yet know how to write software that’s easy to change and maintain, every piece of code you write is a more concrete commitment than it will be if you learn to do TDD, since TDD helps you to adopt these practices, as well as providing nice documentation and living examples of how to use your code, how it behaves and why it’s valuable. If you haven’t yet experienced amazing TDD and you try doing things this way, you’ll probably find that you end up with a lot of rework when you come to stabilize the spike (and if you don’t stabilize the spike, you’ll eat up cost in maintenance later).

It’s interesting to me that this is actually how a lot of people learn to do TDD – by adding tests afterwards, until they gain enough confidence to add tests first. It really does remind me of this quote from Bruce Lee:

Before I studied the art, a punch to me was just like a punch, a kick just like a kick. After I learned the art, a punch was no longer a punch, a kick no longer a kick. Now that I’ve understood the art, a punch is just like a punch, a kick just like a kick. The height of cultivation is really nothing special. It is merely simplicity; the ability to express the utmost with the minimum.

Posted in spike and stabilize, uncertainty | 41 Comments

BDD in the large

One of the biggest problems I encounter with BDD is that it was started by developers, for developers, so that we could get a better grip on what it was we were developing.

As we began to understand the complexities of what we were developing, conversations with business stakeholders started to become more fluid. We started to understand what it was like to handle multiple stakeholders; to have uncertainty around requirements where stakeholders weren’t really sure about the outcomes they wanted; to pay to delay decisions about implementation until later.

There are three practices / principles / bodies of knowledge and experience which I use and teach as part of BDD, and I don’t think BDD works well without them. Good BDDers have taught these practices anyway, sometimes without giving them a name, so I wanted to call them out explicitly so you know that they’re there and can use their names to go find out more.

This is the stuff that happens in conversation, before developers get anywhere near a keyboard.

Feature Injection

A large number of Agile projects have backlogs of user stories. Some of these backlogs can be quite large. One team had over a year’s worth of stories, covering four walls of a room. Scrum and other Agile practitioners talk about “grooming the backlog”; reacting to their growing understanding of the real problem by removing some stories and adding more in. Even Agile teams can be sidetracked by “quick” two-week-long analysis phases, or two-hour planning sessions, or additional functionality that still fits within their deadline when they could have shipped three weeks ago.

What if you didn’t have to do this?

Lean proponents talk about minimum viable products or increments, in which we determine the smallest thing that could make a difference to the problem, then do the minimum necessary to ship. Feature Injection, introduced to me by Chris Matts, is a fantastic technique for discovering that minimum. Here’s how it works.

  • Every project worth pursuing has a vision – either something new, or something that you’ve seen someone else do and you want to do it too. The vision is designed to increase revenue, decrease costs or protect revenue – for instance, keeping customers from going to a competitor (note that there’s no ROI for a vision like that!)
  • There may be other stakeholders whose goals need to be met in order to allow the vision to go live. Frequently these stakeholders have traditionally gatekeeping roles – UAT, Architecture, Legal, Security. By recognizing these stakeholders early, we get a chance to turn them into educators instead.
  • To meet the stakeholder’s goals, we create capabilities. For instance, employees might need the capability to book a holiday. Notice that they could quite easily do that with paper! To make our differentiating vision compelling, we may need a certain amount of commoditised functionality – things that people will expect in that kind of application. When David Anderson talks about differentiators he uses the mobile phone market. An example of a differentiator is the first ever camera in a phone. We’ll also need to be able to make calls, receive calls, look up numbers, etc.
  • To deliver the capabilities, we design features. We may decide that some features are too complex, and create manual workarounds instead. The features represent the way in which users will use the capabilities we’re giving them – through a browser, a Windows client, a mobile phone, etc. Now we’re starting to move into the areas of UI design, preferably with a UI designer, and developers and testers can really get involved if they weren’t already! Even if the development team wasn’t involved in the conversations before this point, I recommend they should be aware of the outcomes and able to map their features back into the capabilities, stakeholder goals and initial vision. Even better, the team might only have the capabilities on their backlog – breaking them down into features as and when they reach the next capability (see Deliberate Discovery for prioritisation).
  • To really explore the features, we talk through scenarios; examples of the outcomes a system should produce when a user uses it in different contexts. We can use the scenarios to decide what we want to get feedback on, and thus divide the features into stories. The main purpose of dividing things into stories is to be able to get feedback quickly.

Scenarios are the BDD artifacts people are most familiar with, but look how much happens before we even get this far! And that’s without budget negotiations, portfolio management, company strategy, recruitment and all the other gubbins needed to get a development team up and running…

  • After that, we can move into the traditional development space – TDD (or unit-level BDD if you prefer), code, integration and, we hope, production.

Deliberate Discovery

In Agile projects, we normally prioritise stories by their value. But wait a moment! If we’re doing Feature Injection, our vision is for the smallest valuable thing we could ship (otherwise, that smaller thing would be our minimum viable product). So we can’t really say that one feature is more valuable than another, because we need them all (and if we don’t, that’s our minimum viable product!).

What we can say, though, is that we know more about some of the features than others. Commoditised features, particularly, will have been done before, and we can probably estimate them and make predictions about them, as long as we have enough expertise in the domain. Anything that’s new, or that’s new to the team, we’ll know less about. We can bet that there are some things in there that we’ll discover as we code, some of which will come back to bite us. These are the kind of things which can really derail a project.

In the spirit of failing fast, we’d really like to have our projects derailed early, while we can still change our minds and perhaps think of something else to do instead. Deliberate discovery is the act of assuming ignorance, then going to look for it and optimizing to address it first. There will always be things we don’t know we don’t know – but since those things will usually show up in the functionality we’ve not delivered before, why don’t we do the new stuff first?

If you’re into Theory of Constraints, you know that in a production line environment, you tailor your system to the constraining machine, putting it first where possible. But software development is more like product development; it’s creative knowledge work, rather than doing the same thing over and over again. Ignorance is the constraint.

This is how we prioritise – by the features and capabilities about which we’re most ignorant. It helps us to address the risk early. Since the risky stuff is usually what keeps our stakeholders awake at night, this is a great way of establishing trust, too.
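As a toy illustration of prioritising by ignorance rather than by value (the features and scores below are entirely invented):

```python
# Each feature gets a rough "ignorance" score: how little we know about it.
# Commoditised work we've done before scores low; novel work scores high.
features = [
    ("User login",              1),  # done many times before
    ("Report export to CSV",    2),  # familiar territory
    ("Real-time fraud scoring", 9),  # brand new to the team
    ("Third-party payment API", 7),  # new integration, unknown quirks
]

# Do the stuff we know least about first, so discoveries derail us early,
# while we can still change our minds.
backlog = sorted(features, key=lambda f: f[1], reverse=True)

for name, ignorance in backlog:
    print(f"{ignorance}: {name}")
```

The scoring is deliberately crude; the point is only the sort order, not the numbers.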

Real Options

As well as the things we don’t know, there will also be things we don’t know that we don’t know. No amount of analysis will ever allow us to discover everything! So we have to keep ourselves ready to change.

Unfortunately, we human beings don’t work well with uncertainty, and have a tendency to try and eliminate it. One of the ways we do that is to break down problems into their components. That works very well for predictable stuff that we’ve done before, but not for new things which we don’t understand so well. Doing this for new functionality only leads to the uncertainty being hidden away!

So what can we do to keep our options open, so that our ignorant ignorance doesn’t bite us?

Chris Matts took the concept of trading options – the right, but not the obligation, to buy copper at $1000 a tonne in October, for instance – and worked out which aspects of options still apply to Real Life. Here are his principles:

  • Options have value.
  • Options expire.
  • Never commit early unless you know why.

With Deliberate Discovery, we’re getting what information we can before we make a larger commitment, so Deliberate Discovery plays into this very nicely. There’s another thing we can do, though. We can pay to keep our options open. This is what we do when we take time out to design our code well or refactor it so that it’s understandable. We’re creating options for the future.

When we talk through our scenarios and we see our P.O. frowning or saying, “Um…” then we know there’s uncertainty there. It might be worth spiking out a feature cheaply, to see if it does the job, before pinning that feature down with automation (or even, dare I say, unit tests). Discovering that uncertainty and ignorance is at the heart of BDD – not finding the examples, but finding the examples we can’t find.

Another practice Chris taught me off the back of this is, “Don’t pick the right technology. Pick the technology that’s easy to change.” Whenever we have a choice and not enough information to help us make the decision, picking a decision that’s cheap to change later usually turns out to be a good way forward, as we can change when we have more information (or keep it if it turns out to be good enough!)

BDD is just TDD?

Sure, if you’re a developer, and you only get to see the stuff that happens in development. Even then, it may be worth talking through not just the scenarios, but the larger-scale picture; discovering uncertainty and ignorance, paying to keep options open, and working towards a clear and compelling vision to deliver software that’s not only well-tested and functional, but which really, truly matters.

In this post, I’ve focused on the differences between BDD and TDD, most of which have been spawned by the language of BDD which allows for more appreciation of uncertainty than the language of TDD. For the similarities, and more about the history of how BDD was originally derived from TDD, see this article.

Posted in bdd, business value | 23 Comments

Showcasing the language of BDD

Since there are a few debates going on (again!) about whether BDD is just TDD done well, I thought it might be interesting to showcase some examples where BDD language made a difference to people’s understanding.


From StackOverflow, TDD – Should Private/Protected methods be under unit test?:

Original Poster wrote:

In TDD development, the first thing you typically do is to create your interface and then begin writing your unit tests against that interface. As you progress through the TDD process you would end-up creating a class that implements the interface and then at some point your unit test would pass.

I wrote: Please let me rephrase this in BDD language:

When describing why a class is valuable and how it behaves, the first thing you typically do is to create an example of how to use the class, often via its interface*. As you add desired behavior you end up creating a class which provides that value, and then at some point your example works.

*May be an actual Interface or simply the accessible API of the class, eg: Ruby doesn’t have interfaces.

This is why you don’t test private methods – because a test is an example of how to use the class, and you can’t actually use them.
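To make that concrete, here’s a minimal Ruby sketch (the class and its domain are hypothetical, not from the original answer): the example exercises the class only through its public API, and the private helper is reached only indirectly, because that’s the only way anyone can ever use it.

```ruby
# Hypothetical example: the "test" is just an example of using
# PriceCalculator through its public interface. The private rounding
# helper is never exercised directly -- callers can't reach it.
class PriceCalculator
  def initialize(tax_rate)
    @tax_rate = tax_rate
  end

  # Public behaviour: the only thing an example can describe.
  def total_for(net_amount)
    round(net_amount * (1 + @tax_rate))
  end

  private

  # An implementation detail, exercised only via total_for.
  def round(amount)
    (amount * 100).round / 100.0
  end
end

# The example reads as "how you'd use the class",
# not "how it works inside".
calculator = PriceCalculator.new(0.2)
raise "unexpected total" unless calculator.total_for(10.0) == 12.0
```

If the rounding logic ever becomes interesting enough to describe with its own examples, that’s usually a hint it wants to be a class of its own, with its own public interface.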

Response: Brilliant post. Clarifies a lot.


From StackOverflow, should i only be testing public interfaces in BDD (in general, and specifically in ruby) (sic):

I wrote:

Instead of writing a test, you’re going to write an example of how you can use your class (and you can’t use it except through public methods). You’re going to show why your class is valuable to other classes. You’re defining the scope of your class’s responsibilities, while showing (through mocks) what responsibilities are delegated elsewhere.

At the same time, you can question whether the responsibilities are appropriate, and tune the methods on your class to be as intuitively usable as possible. You’re looking for code which is easy to understand and use, rather than code which is easy to write.

If you can think in terms of examples and providing value through behaviour, you’ll create code that’s easy to use, with examples and descriptions that other people can follow. You’ll make your code safe and easy to change. If you think about testing, you’ll pin it down so that nobody can break it. You’ll make it hard to change.
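As a sketch of what “showing delegated responsibilities” can look like in Ruby (the class names here are hypothetical, and I’ve used a hand-rolled stub rather than a mocking library to keep it self-contained):

```ruby
# Hypothetical sketch: the example shows what OrderReport is
# responsible for (summarising) and what it delegates elsewhere
# (fetching orders).
class OrderReport
  def initialize(order_source)
    @order_source = order_source  # this responsibility lives elsewhere
  end

  # This class's own responsibility: turning orders into a summary.
  def summary
    orders = @order_source.orders_for_today
    "#{orders.size} orders, total #{orders.sum}"
  end
end

# The stub stands in for the collaborator, so the example describes
# OrderReport's behaviour in isolation.
FakeOrderSource = Struct.new(:orders) do
  def orders_for_today
    orders
  end
end

report = OrderReport.new(FakeOrderSource.new([10, 20, 5]))
puts report.summary  # => "3 orders, total 35"
```

The example doubles as documentation of the class’s scope: anyone reading it can see which behaviour belongs here and which is someone else’s problem.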

Response: good answer! very concise!


From comments on my blog post, “Translating TDD to BDD”:

Ben asked: Given this terminology, can you reword these sentences to eliminate the TDD terminology, but in such a way that someone who doesn’t know what BDD is can understand it?

Writing the tests first ensures that the code we write to make them pass is actually testable. Otherwise, we may wind up with classes and methods that, although fairly well designed, are not easily tested

I replied:

Writing examples of how to use the code, before we write the code that makes those examples work, ensures that the code we write is easy to use and understand. Otherwise, we may wind up with code that, although adhering to SOLID principles, has been written to be easy to write rather than easy to use or maintain.

(Of course, if you’re doing true outside-in then you already have one example of how to use a piece of code, from its consuming class. I reworded this from ‘well-designed’ to ‘adhering to SOLID principles’, since they’re necessary for good design but IMHO not sufficient.)

Ben responded: Liz, that’s beautiful. I especially like how you phrased your points about use and maintenance. Takes the focus off “testing” and puts it on something that’s more directly tied to value.


Taking it up a level, again from StackOverflow, Are BDD tests acceptance tests?

BDD “tests” exist at multiple different levels of granularity, all the way up to the initial project vision. Most people know about the scenarios. A few people remember that BDD started off with the word “should” as a replacement for JUnit’s “test” – as a replacement for TDD. The reason I put “tests” in quotes is because BDD isn’t really about testing; it’s focused on finding the places where there’s a lack or mismatch of understanding.

Because of that focus, the conversations are much more important than the BDD tools.

Acceptance testing doesn’t actually mandate the conversations, and usually works from the assumption that the tests you’re writing are the right tests. In BDD we assume that we don’t know what we’re doing (and probably don’t know that we don’t know). This is why we use things like “Given, When, Then” – so that we can have conversations around the scenarios and / or unit-level examples.

We don’t call them “acceptance tests” because you can’t ask a business person, “Please help me with my acceptance test.” Try “I’d like to talk to you about the scenario where…” instead. Or, “Can you give me an example?” Either of these is good. Calling them “acceptance tests” starts making people think that you’re actually doing testing, which would imply that you know what you’re doing and just want to make sure you’ve done it. At that point, conversations tend to focus on how quickly you can get the wrong thing out, rather than on the fact that it’s the wrong thing you’re getting out.
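A scenario phrased in Given/When/Then gives everyone something concrete to talk around (the domain details in this one are made up for illustration):

```gherkin
Feature: Refunds
  Scenario: Customer returns a faulty item within warranty
    Given Fred bought a microwave for £100
    And the microwave developed a fault within its 1-year warranty
    When Fred returns the microwave
    Then he should be refunded £100
```

It’s the conversation this provokes (“What if it’s out of warranty? What if he lost the receipt?”) that uncovers the misunderstandings, long before anyone automates anything.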

Response: 20 upvotes and a “nice answer” badge!


These are just a few instances of using BDD’s language instead of TDD’s. I’ve found it even more powerful in spoken language, and more powerful still when talking to business-focused stakeholders about examples of how we might use a system to deliver the value they want. I taught BDD to my dad once, showing him how to write examples in Java in order to ensure that his code would be usable and maintainable. After we’d written a couple of classes, I said, “Of course, this also gives you a nice set of regression tests.”

“Oh, yes!” he said, “So it does. I hadn’t thought of that.”

Posted in bdd | 6 Comments

LSSC 2012

A lot of times when I go to conferences, I’ve already got a slew of half-formed ideas and thoughts from the amazing communities with whom I hang out. The conferences have been fantastic for helping me refine and validate those ideas, but it’s been a while since I’ve come across anything genuinely new.

LSSC 2012 was very, very different – challenging a ton of sacred cows, trashing some long-held ideas and giving me a bunch of new thoughts and understanding. The format was perfect: seemingly flawless organisation; amazing speakers, including some that we wouldn’t have heard if we weren’t in Boston; a Lean Camp unconference where I actually felt panic at not being able to attend all the sessions; long breaks giving us plenty of time to talk, network, reflect and synthesize ideas; innovative sponsors whose presence added value; lightning and ignite talks with hours of brilliance compacted into small spaces of time (thanks for letting me rant instead!); great venue, food and service; and of course the fantastic mix of attendees.

Here are some of the joyous nuggets that I learnt and the people from whom I learnt them, paraphrased for your quick enjoyment.

Hillel Glazer: Demand pulls most of us to start projects, but it only really works when it pulls us to finish them and produce something. For that, we need to work on capacity. David Joyce echoed Hillel’s sentiments in his talk when he told us to stop disbanding teams right when they’re at their most productive. That’s part of the problem.

Steven Spear: If you can’t make predictions, you lose your ability to be surprised, which means you can’t learn. This spawned a Twitter conversation with Karl Scotland and others about complexity and whether expectations apply in the complex space, which helped me with my ignite talk. I really love Karl’s take – that we should at least predict impact if not outcome. Steven was talking about nuclear reactors, which I hope are a little more predictable than the software we produce.

David Anderson: It turns out that the community is doing more than just software these days, so the Lean Software Systems Consortium has rebranded as the Lean Systems Society. It’s modelled on the Royal Society, and I’m honoured to be named as a fellow. Particularly interesting was David’s comment that they’d observed the behaviours and values of the community, and drawn up their Credo or value statement accordingly – not the other way round! Talk about making process policies explicit…

Michael Kennedy: From TPS – “Product development is not about developing cars, it is about developing knowledge about cars.” Michael showed how sharing knowledge across projects can help us to build quality in. He showed how Toyota explore new ideas, resolving complexity while developing expertise before using them in production, which falls right into place alongside some of the Cynefin conversations that have been happening in the UK, as well as Dan North’s call for Deliberate (as opposed to accidental) Discovery. One excellent quote: “They’re only standards because they’re the best we can do right now. Engineers are expected to break the standards. Managers decide which standards they want to be broken.”

Gregory Howell: It turns out that the construction metaphor we’ve been using in IT doesn’t even work out in construction! One study found that 85% of construction managers had underestimated the complexity of the project. Pushing people to go faster isn’t as useful as working on the interfaces between people. The stories he told mostly centred on slowing things down and increasing quality and fluidity of hand-offs in order to go faster.

Robert Charette: All exchanges of goods and services are exchanges of risks and opportunity. Robert used the example of a plumber who’s taken on risk by investing in his own skills. He has an interesting model for risk and particularly market share, and challenged us to think about the kind of things which might end up bringing a company as successful as Apple down.

Jim Benson: If you have a repetitive task, record the metrics for how it went each time on your kanban board. Jim was talking in terms of Personal Kanban, but I can see how that could be useful for eg: retrospectives. Already added the “exercise” column to mine! Jim also taught one table at the Brickell Key awards that if you play buzzword drinking games with the Lean & Kanban crowd, you’re going to get what you measure… dangerous stuff. Congratulations to both Brickell Key winners, Jim and Arne Roock.

Mike Burrows: Limit WIP in portfolio too. Mike also suggests limiting the amount of money carried in the portfolio, allowing you to have a few big projects or several small ones. Richard Hensley apparently thinks a similar way. Karl Scotland shared another of Mike’s suggestions: limit dots in progress. Put dots on the cards every day they’re in progress and you’ll be able to see the flow very clearly.

Karl Scotland and Larry Maccherone: Our data is really dirty. If you start using 2-sigma thresholds to make SLAs, you’ll probably get stung. It doesn’t follow a normal distribution, in part because developers simply don’t update the statuses on time – they’re too busy doing the work! Outliers are also interesting. Rally has some neat tools to show you where your percentiles really are, but they’re in competition with Lean Kit’s lounge with its hot dogs and ice-cream. Lean Kit definitely win the prize for brand awareness this trip.

Yochai Benkler: This was the most mind-blowing talk for me. It turns out that our desire to collaborate is partly genetic, and only prevalent in 60-70% of the population. The industrialised world is slowly moving towards models in which we *do* collaborate – it does actually form part of our evolution, and making collaboration the norm can help – but our innate desire to collaborate can easily be disrupted by extrinsic rewards and punishment for failure to collaborate, amongst other things. Motivating people to collaborate is way more complex than we thought.

This is only part of the learning I take away from LSSC 2012. Already signed up for LSSC 2013 – hope to see you there!

Posted in conference, cynefin, deliberate discovery, kanban, lean | 8 Comments

Upcoming engagements

There are a few exciting speaking and training engagements coming up, many of them UK based!

First off, two tutorials:

I’m running my first Deliberate Discovery, Cynefin and Real Options tutorial at Skills Matter on 29th May. These three ways of thinking and modelling software development and the world in general have really helped me, and I’d like to pass the techniques on. Highly workshop-driven and not at all technical.

At the end of September I’ll be running my 1-day BDD tutorial as part of Agile Cambridge. This is the only BDD tutorial I’ll be running this year, so if you’re interested, get in there now! Dan North (my mentor in all things BDD and Agile) and David Snowden (Complexity Thinking and Cynefin guru) are keynote speakers, so it promises to be an excellent conference!

I’ll also be speaking and running workshops through the year:

The Next Generation Testing Conference organised by Unicom is on the 23rd. I’ll be on the panel talking about Agile, and particularly ranting about our obsession with granularity and our need for certainty even where it doesn’t really exist.

I’m going to be talking about Real Options at Dev Tank in London on the 29th, after the Progressive .NET Tutorials. It shouldn’t be a long talk, so there’ll be lots of opportunity to catch up with your fellow devs over a beer. More details to follow – watch this space!

On June 7th, I’ll be giving an overview of BDD, how to do it well and why it works at Agile East Anglia. If you’ve been out of the loop on BDD, this is a great chance to get into it, and I’ll be answering any questions you have too. I think there may be just one ticket left…

In August I’ll be at Agile 2012 in Dallas, running “Turning and turning in the Widening Gyre”, a workshop on complexity and deliberate discovery, and “BDD: Look, no Frameworks!” on how to do BDD using custom DSLs instead of BDD tools, while keeping steps maintainable and readable.

On 21st – 22nd September I’m honoured to be one of the keynote speakers at Lean Agile Scotland. Topic still to be decided, but it’s going to be related to people and the inside of our heads; one of my favourite minefields.

I’m also speaking at GOTO this year (October 1st to 3rd, Aarhus, Denmark). In my talk, “To be honest…” I’ll be looking at why honesty is so important and yet so hard to do.

In the meantime, I have some days free for in-house training or coaching. Please get in touch now, before the rest of the year gets this busy too!

Posted in coaching, conference | Leave a comment