I haven’t written in a while for various reasons, and will probably post more about the climate later (being an even bigger and more important problem), but today I wanted to write about a Really Common Problem that I keep encountering in teams doing Scrum.
It’s a manifestation of the usual problem (humans like predictability) combined with certain aspects of Scrum. Experienced Scrum folk will say that it’s not really Scrum, and I think they’re right, but it is really common. If this matches your team, don’t feel bad; you’re far from alone, and I’m going to follow with some hints and tips that might help you. If it doesn’t, congratulations; you get to sit smug because you avoided this very common trap.
Here’s how the problem manifests:
The product owner or team lead or manager wants to know what the team are going to deliver in the next sprint, either to ensure that sufficient analysis has been done on the work for the team to pick it up, or to do longer-range planning.
To do that, the stories / tickets need to be fleshed out ahead of time, with clear enough scope and acceptance criteria that they can be estimated. (Even two weeks is enough to cause the problem to manifest, but I’ve seen backlogs fleshed out like this for months ahead or even year-long projects.)
The experts who know those areas best (often tech leads, sometimes the whole team) are asked to help elaborate those tickets, so they context-switch away from the work that they were originally doing in the current sprint to go look at the work for the next one. (Some teams even add estimates at this stage.)
By the next sprint, of course, some unplanned work will have disrupted things, or discoveries will have slowed us down (which they tend to do more than they speed us up), so there are often outstanding stories / tickets left over. This can sometimes be a bit depressing, especially if the team works on the principle that doing the tickets = the sprint goal = the commitment.
So the devs get hyper-focused on estimates, probably with encouragement from managers who want that ever-elusive predictability. In the worst manifestations, they push back on stories that are “unclear”, because they can’t estimate them easily. Those are the stories or tickets with the most discoveries lurking in them. If you’re really unlucky, they get pushed all the way to the end of the project or release, so that the discoveries are all forced when you have no time to deal with them.
Then the stories / tickets that didn’t get done are moved to the next sprint, and that probably includes some which came in from the previous sprint too. Now not only are experts being context-switched away from their existing work, but the thing that they were context-switched away to look at is rusting in their heads and might never actually get done after all.
(For bonus negative points, use JIRA’s scrum boards so that this problem is hard to see since work that isn’t assigned to the sprint doesn’t show up.)
To try and ensure that as much of the work gets done as possible, team leads or managers will make sure that the people with the right skills are assigned to the right work. This usually manifests as people being signed up to the work in JIRA before the sprint’s even started.
So now you don’t have a Scrum team focused on the goal.
You have a bunch of individual developers assigned to individual tickets, based on their existing skills; and when they have done one ticket, they move it to “Ready to Test” (or just “In Test”) and grab the next one that’s assigned to them. They don’t have a conversation with the tester, because there’s a ticket waiting for them with their name on it and they’re already behind in the sprint.
So now if someone needs their help with a different ticket, or the tester comes back with a bug, they have to context-switch yet again. Context-switching is painful for devs. It can feel hard to get any work done.
Does this seem familiar?
If so, here are some hints and tips you can try to see if they make things better.
Is it a deadline, or a sadline? External deadlines like Christmas or a Trade Show aren’t movable, and generally business stakeholders are pragmatic about getting things working to a good level of quality in time. When nobody and no project and no opportunity is going to die, but someone will be sad, generally reputation is on the line and that pragmatism seems to go out of the window in favour of being able to mark all the stories as “done”. I encourage everyone to spread around copies of “Commitment” liberally, and to have honest conversations about what promises have actually been made, where, and what might be movable.
Plan and estimate stuff at higher levels. I like capabilities as a way of breaking down big pieces of work; someone will be able to do something they couldn’t do before, or in a new way, or in a new context. (The newness is why there’s unpredictability.) To get that working, they (or some other people) usually also need to be able to do some other things. This capability level feels like about the right size of granularity to be working out what the scope looks like and how long it might take. Capabilities work well with story mapping later on, too. You can flesh out the occasional story within that ahead of time if it’s useful, especially if it’s a proof of concept or a spike that will help surface new discoveries, but try to…
Keep the analysis as close to the work being done as possible. If it takes a couple of days to get a story fleshed out, do it a couple of days before a dev is likely to be free. One team I worked with had a “trigger” – the UI dev who generally led the way on new features – who knew when they were likely to finish, and would let the team know that it was time to kick off analysis on another feature. Limiting Work In Progress can also help with this, as long as you recognize that analysis is WIP.
I reckon that before the sprint starts (or the work is picked up if you’re doing Kanban) you want enough knowledge written down to have the conversations, but not so much that you’re replacing them. Keep an eye on cards which will take a long time to analyze; otherwise leave it as late as you can. Getting hold of test data or user research are good reasons to kick off analysis early. Be pragmatic about it.
Devs are great at writing stories. Once the problem is understood, devs usually have a pretty good idea of where the greatest unknowns are, and can slice off spikes or PoCs to handle technical difficulties, then weave disparate pieces of the system together to work with reduced or hard-coded contexts and minimal outcomes, with further contexts, rules and outcomes added in later. (Even if you don’t feel like releasing an MVP, I suggest coding it first regardless.)
Devs also tend to spot when there’s work to do that doesn’t easily fit into existing stories, including “technical stories” (stories that do actually have an impact on users, but devs know what the solution is so that’s what goes in the title). This is work that you want to have visible, even if it hasn’t been estimated. The harder it is for a dev to get work added to a tracker, the more likely it is that this work will end up in a hidden backlog somewhere else… possibly in multiple developers’ heads.
Letting devs carve stories out themselves also provides autonomy; clarify the purpose and keep the analysis out of the tickets, putting it either in docs in the codebase or on your intranet. JIRA and other trackers are terrible places to store long-term knowledge. So is email.
Keep stuff in the backlog unassigned and in priority order. The goal is to pick up the highest-priority ticket; but if the dev who’s free doesn’t have the skills to pick up at least one of the top 3, they should go and pair with the person who does, on whatever that person’s working on, until they become free and can pair on the new ticket. Skills will gradually spread in the team until devs can reliably pick up high priority items and the whole team is focused, together, on solving the larger problem.
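As a toy sketch of that pull policy (all names and data invented for illustration, and obviously a real team applies this as a habit rather than a script): take the highest-priority unassigned ticket you have the skills for, looking only near the top of the backlog; otherwise, pair with whoever holds the skills you’re missing.

```python
def next_action(backlog, my_skills, window=3):
    """Decide what a newly-free dev should do.

    backlog: priority-ordered list of dicts with 'id' and 'skills' (a set).
    Returns ('pick_up', ticket_id) or ('pair', ticket_id).
    """
    for ticket in backlog[:window]:
        if ticket["skills"] <= my_skills:   # I have every skill it needs
            return ("pick_up", ticket["id"])
    # Nothing near the top matches my skills: pair on the top ticket
    # with someone who has them, so the skills spread.
    return ("pair", backlog[0]["id"])

backlog = [
    {"id": "T-1", "skills": {"ui"}},
    {"id": "T-2", "skills": {"api", "db"}},
]
print(next_action(backlog, my_skills={"api", "db"}))  # ('pick_up', 'T-2')
print(next_action(backlog, my_skills={"docs"}))       # ('pair', 'T-1')
```

The point of the `window` is the pragmatism in the tip above: you don’t scan the whole backlog for something comfortable; you either take something near the top or go and grow the missing skill by pairing.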
Brownies think of others before themselves and do a good deed every day. Before picking up a ticket, team members can look to see if anyone else needs help. This spreads skills too; but even better, it releases devs from the habit or need to quickly pick up the next ticket, which means refactoring and documentation stand a better chance of actually being done as part of the story instead of as a big catch-up afterwards.
Commit to demo. The only real commitment that gets made in most sprints (meaning it’s external to the team and can’t be changed easily) is the calendar invite to get stakeholders to see what’s been developed. It also helps developers to focus on what other stuff might need to be done to get their work in a demoable state, which might end up with them reaching out for help or helping others, which is great for sharing skills.
If you can’t release, get feedback anyway. The Scrum guide says that an increment must be usable, and ideally you have a nice, finished piece of functionality that’s been released or is releasable, but not every team is there yet, and some have constraints that make it very hard to release (high transaction costs result in large batches; if you have to drive to a customer base with an encrypted USB stick you’re not going to be doing that several times a day). So if you can’t release, find a way at least to get feedback, even if it’s showcasing from a test environment. Don’t let the “definition of done” stop you from getting feedback. You can also look at ways to release in limited contexts earlier; to friends and family or early adopters or even to your product owner – bonus if you can leave the very boring and predictable “logging in” story until last!
Also if you’re using JIRA I suggest using Kanban boards instead of Scrum boards. You can still keep a 2-week (or however long) cadence, but it makes it much easier to see all the work in progress, and to stop treating Sprints like buckets, or, as Daniel Terhorst-North once put it, “You turned a Gantt chart on its side and called it a backlog.”
The manifestation of the common problem that you laid out essentially starts with estimation. Recently, I happened to write my thoughts on the process of estimation (https://www.linkedin.com/pulse/estimation-flaws-avoiding-pitfalls-direct-duration-vinayak-kumbhakern), inspired by something written by Mike Cohn where he says estimation is all about “estimate the size, measure the velocity, and derive the duration”. My take on that is that estimation cannot be trusted if the smaller work used to measure velocity is not complete from an end-user value perspective or doneness point of view, which is what I have seen happening in some of the so-called Agile projects I am involved in. I would love your feedback on that observation.
The “doneness” helps to surface discoveries: the unknown unknowns that may be encountered in the process of release. So I agree. However, as I said, not every team is there yet. There are some reasons for estimating even with that level of uncertainty – comparing two priorities, for instance, or judging whether an effort is even reasonable to attempt. Uncertain estimates are often enough for comparison and coherence (a sufficiency of evidence to progress, or a realistic reason for thinking it’s a good idea).
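As a minimal illustration of Mike Cohn’s “estimate the size, measure the velocity, and derive the duration” arithmetic (all numbers invented), deriving a duration range rather than a single figure at least keeps the uncertainty visible when you use the result for comparison:

```python
def derive_duration(total_points, velocity_low, velocity_high):
    """Derive a (best_case, worst_case) duration in sprints from an
    estimated size and an observed velocity range."""
    best = total_points / velocity_high    # fastest plausible pace
    worst = total_points / velocity_low    # slowest plausible pace
    return best, worst

best, worst = derive_duration(total_points=120, velocity_low=15, velocity_high=24)
print(f"Somewhere between {best:.0f} and {worst:.0f} sprints")
# Somewhere between 5 and 8 sprints
```

And as the comment above points out, if the stories behind that velocity weren’t actually done, both ends of the range are optimistic: the undone work comes back later as extra size that never appeared in `total_points`.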
The problem isn’t so much the estimation, then, as the other use estimates are put to: the attempt to make things predictable, as if all work can be treated with the same level of confidence. “Doneness” is ideal, of course, but even that isn’t always enough. You can make capabilities like logging in or basic CRUD web form submission releasable and estimate similar familiar functionality, but completely miss when it comes to, say, integrating with the API to that feed that you’ve never used before. Since we’re always trying to solve something new in software development, unfamiliar, high-discovery problems always exist. Being able to slice off a proof of concept or a spike, even if it isn’t releasable, can help to surface other risks; and organizations which don’t yet have the agility to properly make releasable slices can still benefit from surfacing that kind of discovery too.
I wrote more about that in “Estimating Complexity” – https://lizkeogh.com/2013/07/21/estimating-complexity/ – where the estimates are not of time, but of how familiar, or not, the work is. I suggest that estimating beyond comparison or coherence for anything which is a 5 or 4 on that scale is largely a waste of time; the only way to know how long the most unfamiliar work will take is to actually do it.