A couple of weeks ago, I tweeted a paraphrase of something that David J. Anderson said at the London Lean Kanban Day: “Probabilistic forecasting will outperform estimation every time”. I added the conference hashtag, and, perhaps most controversially, the #NoEstimates one.
The conversation blew up, as conversations on Twitter are wont to do, with a number of people, perhaps better schooled in mathematics than I am, claiming that the tweet was ridiculous and meaningless. “Forecasting is a type of estimation!” they said. “You’re saying that estimation is better than estimation!”
That might be true in mathematics. Is it true in ordinary, everyday English? Apparently, so various arguments go, the way we’re using that #NoEstimates hashtag is confusing to newcomers and making people think we don’t do any estimation at all!
So I wanted to look at what we actually mean by “estimate”, when we’re using it in this context, and compare it to the “probabilistic forecasting” of David’s talk.
Defining “Estimate” in English
While it might be true that a probabilistic forecast is a type of estimate in maths and statistics, the commonly used English definitions are very different. Here’s what Wikipedia says about estimation:
Estimation (or estimating) is the process of finding an estimate, or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable.
And here’s what it says about probabilistic forecasting:
Probabilistic forecasting summarises what is known, or opinions about, future events. In contrast to single-valued forecasts … probabilistic forecasts assign a probability to each of a number of different outcomes, and the complete set of probabilities represents a probability forecast.
So an estimate is usually a single value, and a probabilistic forecast is a range.
Another way of phrasing that tweet might have been, “Providing a range of outcomes along with the likelihood of those outcomes will lead to better decision-making than providing a single value, every time.”
And that might have been enough to justify David’s assertion on its own… but it gets worse.
Defining “Estimate” in Agile Software Development
In the context of Software Development, estimation has all kinds of horrible connotations. It turns out that Wikipedia has a page on Software Development Estimation too! And here’s what it says:
Software development effort estimation is the process of predicting the most realistic amount of effort (expressed in terms of person-hours or money) required to develop or maintain software based on incomplete, uncertain and noisy input.
Again, we’re looking at a single value; but do notice the “incomplete, uncertain and noisy input” there. Here’s what the page says later on:
Published surveys on estimation practice suggest that expert estimation is the dominant strategy when estimating software development effort.
The Lean / Kanban movement has emerged (and possibly diverged) from the Agile movement, in which this strategy really is dominant, mostly thanks to Scrum and Extreme Programming. Both of these suggest the use of story points and velocity to create the estimates. The idea of this is that you can then use previous data to provide a forecast; but again, that forecast is largely based on a single value. It isn’t probabilistic.
Then, too, the “expertise” of the various people performing the estimates can often be questionable. Scrum suggests that the whole team should estimate, while XP suggests that developers sign up to do the tasks, then estimate their own. XP, at least, provides some guidance for keeping the cost of change low, meaning that expertise remains relevant and velocity can be approximated from the velocity of previous sprints. I’d love to say that most Scrum teams are doing XP’s engineering practices for this reason, but a lot of them have some way to go.
I have a rough and ready scale that I use for estimating uncertainty, that helps me work out whether an estimate is even likely to be made based on expertise. I use it to help me make decisions about whether to plan at all, or whether to give something a go and create a prototype or spike. Sometimes a whole project can be based on one small idea or piece of work that’s completely new and unproven, the effort of which can’t even be estimated using expertise (because there isn’t any), let alone historical metrics.
Even when we have expertise, the tendency is for experts to remember the mode, rather than the mean or median value. Since we often make discoveries that slow us down but rarely make discoveries which speed us up, we are almost inevitably over-optimistic. Our expertise is not merely inaccurate; it’s biased and therefore misleading. Decisions made on the basis of expert estimates have a horrible tendency to be wrong. Fortunately everyone knows this, so they include buffers. Unfortunately, work tends to expand to fill the time available… but at least that makes the estimates more accurate, right?
One of the people involved in the Twitter conversation suggested we should be using the word “guess” rather than “estimate”. That might indeed be mathematically more precise; and if we called them that, people might look for different ways to inform the decisions we need to make.
But they don’t. They’re called “estimates” in Scrum, in XP, and by just about everyone in Agile software development.
But it gets worse.
Defining “Estimate” in the context of #NoEstimates
Woody Zuill found this very early tweet from Aslak Hellesøy using the #NoEstimates hashtag, possibly the first:
@obie at #speakerconf: “Velocity is important for budgeting”. Disagree. Measuring cycle time is a richer metric. #kanban #noestimates
So the movement started with this concept of “estimate” as the familiar term from Scrum and XP. Twitter being what it is, it’s impossible to explain all the context of a concept in 140 characters, so a certain level of familiarity with the ideas around that tag is assumed. I would hope that newcomers to a movement would approach it with curiosity, and hopefully this post will make that easier.
Woody confessed to being one of the early proponents of the hashtag in the context of software development. In his post on the #NoEstimates hashtag, he defines it as:
#NoEstimates is a hashtag for the topic of exploring alternatives to estimates [of time, effort, cost] for making decisions in software development. That is, ways to make decisions with “No Estimates”.
And later:
It’s important to ask ourselves questions such as: Do we really need estimates? Are they really that important? Are there options? Are there other ways to do things? Are there BETTER ways to do thing? (sic)
Woody and Neil Killick, another proponent, both question the need for estimates in many of the decisions made in a lot of projects.
I can remember getting the Guardian’s galleries ready in time for the Oscars. Why on earth were we estimating how long things would take? In retrospect, that time would have been much better spent getting as many of the features complete as we could. Nobody was going to move the Oscars for us, and the safety buffer we’d decided on to make sure that everything was fully tested wasn’t changing in a hurry, either. And yet, there we were, mindlessly putting points on cards. We got enough features out in time, of course, as well as some fun extras… but I wonder if the Guardian, now far more advanced in their ability to deliver than they were in my day, still spend as much time in those meetings as we used to.
I can remember asking one project manager at a different client, “These are estimates, right? Not promises,” and getting the response, “Don’t let the business hear you say that!” The reaction to failing to deliver something to the agreed estimates was to simply get the developers to work overtime, and the reaction to that was, of course, to pad the estimates. There are a lot of posts around on the perils of estimation and estimation anti-patterns.
Even when the estimates were made in terms of time, rather than story points, I can remember decisions being unchanged in the face of the “guesses”. There was too much inertia. If that’s going to be the case, I’d rather spend my time getting work done instead of worrying about the oxymoron of “accurate estimates”.
That’s my rant finished. Woody and Neil have many more examples of decisions that are often best made with alternatives to time estimation, including much kinder, less Machiavellian ones such as trade-off and prioritization.
In that post above, Neil talks about “using empiricism over guesswork”. He regularly refers to “estimates (guesses)”, calling out the fact that we do use that terminology loosely. That’s English for you; we don’t have an authoritative body which keeps control of definitions, so meanings change over time. For instance, the word “nice” used to mean “precise”, and before that it meant “silly”. It’s almost as if we’ve come full circle.
Defining “Definition”
Wikipedia has a page on definition itself, which points out that definitions in mathematics are different to the way I’ve used that term here:
In mathematics, a definition is used to give a precise meaning to a new term, instead of describing a pre-existing term.
I imagine this refers to “define y to be x + 2,” or similar, but just in case it’s not clear already: the #NoEstimates movement is not using the mathematical definition of “estimate”. (In fact, I’m pretty sure it’s not using the mathematical definition of “no”, either.)
We’re just trying to describe some terms, and the way they’re used, and point people at alternatives and better ways of doing things.
Defining Probabilistic Forecasting
I could describe the term, but sometimes, descriptions are better served with examples, and Troy Magennis has done a far better job of this than I ever have. If you haven’t seen his work, this is a really good starting point. In a nutshell, it says, “Use data,” and, “You don’t need very much data.”
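As a hedged sketch of the style of model Troy describes (the throughput history and item counts below are invented for illustration; this is not his actual tooling), a Monte Carlo forecast really can be a few lines: resample your historical weekly throughput to simulate many possible futures, then read off percentiles of the results.

```python
import random

def forecast_weeks(weekly_throughputs, items_remaining, trials=10000, seed=42):
    """Monte Carlo forecast: bootstrap historical weekly throughput to
    estimate how many weeks the remaining items are likely to take."""
    if not any(weekly_throughputs):
        raise ValueError("need at least one non-zero throughput sample")
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        weeks, done = 0, 0
        while done < items_remaining:
            done += rng.choice(weekly_throughputs)  # resample a past week
            weeks += 1
        results.append(weeks)
    results.sort()
    # Report the 50th and 85th percentile outcomes: a range, not a number
    return results[trials // 2], results[int(trials * 0.85)]

# Hypothetical data: items completed in each of the last 8 weeks
history = [3, 5, 2, 4, 6, 3, 4, 5]
p50, p85 = forecast_weeks(history, items_remaining=40)
```

With a fixed seed the simulation is repeatable; in practice you would re-run it as each new week of real data arrives, which is also why so few data points can still be useful.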
I imagine that when David’s talk is released, that’d be a pretty good thing to watch, too.
I’m sorry Liz, but to define ‘estimates’ in such a way that it only allows known bad practice, and then to extrapolate that to beat with a stick everything that ‘estimates’ covers (which our #noestimates friends do) is strawmanning in extremis.
Only an idiot would produce single point estimates.
Only an idiot would communicate single point estimates that didn’t contain a sane amount of contingency (i.e. that understood the range and probability components)
What this is an argument for is *better* estimating and *more mature* discussion of the degree of confidence and risk of an estimate.
After all, on what basis *do* people estimate, other than their accumulated experience, most strongly influenced by the current context? Or, to give it another name, data.
And yes, if you have a long running process that has a lot of stability, the experience of individuals probably isn’t needed; you can simply run the numbers and count (with perhaps some size categorisation). But does that sound like a majority of software development environments? Sounds like your complexity level 1 and maybe 2 to me. So what do we do in levels 3 & 4?
Do we entirely reject the needs of external stakeholders when they want to plan some aligned deliveries? Do we snottily respond that they’ll get what we are pleased to deliver to them whenever the hell we are pleased to do it? Doesn’t really work for me, unless you are working out how to decrypt Enigma while simultaneously creating the field of computer science as scaffolding for that outcome.
Martin, the overwhelming majority of my clients still practice estimates (of the kind I’m moaning about here). If I get to spend enough time with them, I do help to wean them off and over to cycle time metrics instead. Hopefully this post is persuasive in that regard.
I’d hesitate to call them “idiots”, though.
On my scale, with level 3, you’ll need to make sure the expertise you have is available to you. Commonly this is done in Scrum with a PO who’s part of the team, but that often ends up leaving out important stakeholders who understand other 3’s.
Level 4 is a spike, because you’ve never done it before, at least in the context of the organization you’re in. There’s always a 4 or 5 that’s heading up every effort.
Re planning for aligned deliveries: no, and I don’t think we should reject estimation entirely. I do however heed the call of the #NoEstimates group that there may be aspects of that planning for which estimation isn’t the right answer; for instance, finding out whether it’s even possible to do the thing in the first place. One of Neil’s suggestions is that we should be aiming to make those projects smaller, too. I think we can both agree with that.
So, after a bit of reflection, I agree that I overstepped with “only an idiot would…”
A better expression might be along the lines of:
Maturity in setting expectations for stakeholders would always include an expression of risk, and advocate an appropriate amount of schedule/cost contingency that would reduce the risk to an acceptable level – there is usually a viable tradeoff curve between raw early dates and confidence which is useful to discuss with stakeholders.
In getting to that, the original estimating method (of which, as Jon Terry points out, there are several) should involve a range and confidence level. To build a delivery programme that requires integrating several elements before full business value (i.e. more than the sum of the parts) can be realised will likely form some kind of assessment such as “what’s the earliest date where we can confidently expect(1) all elements to be ready to a minimally effective standard?”
Whether you achieve this through bottom up task estimating, parametric assessment, estimating by analogy, Monte Carlo method, probabilistic statistical forecasting, blink estimating, Wide Band Delphi or a combination of some or fewer of the above (and I’ve done *all* of these) is a contextually sensitive question, not least sensitive to the data and people you have available to you.
Generally you find that the time consuming bit that requires nearly all the effort and certainly all the expertise and experience in all of these is understanding what is being asked for. In no sense is this ever wasteful.
Once you have a strong understanding of that, coming up with the actual number/numbers/range/confidence assessment is usually pretty quick.
However, you must be mindful and respectful of the needs of your stakeholders, which of course also change over time as the process progresses. Simply because we do not (and often can not) know the answer to that assessment with high accuracy and precision and confidence is too often used (perhaps only rhetorically) as a way of refusing to give *any* answer. And I can understand that in environments where fear is prevalent (context-sensitive, see?)
If we seek to use probabilistic forecasting, the riskiest time is at the start, when each new data point can swing the outcome hugely, and when progress is nearly always unstable anyway. I hear Troy’s argument for a smaller number of data points than one might expect for statistical robustness and high confidence from Stats 101 classes, but very much doubt whether that’s anything but tough to achieve in the early days.
That’s the point at which you call on the experience of your team members, for which you’re paying a pretty high premium (otherwise we’d just hire the first N people off the street, right?). That experience – while of course subject to cognitive bias – is nothing but collected, abstracted data (which is why I like Dan North’s Blink Estimating) that can be expressed as judgement under uncertainty. Ideally with self-knowledge of just how much uncertainty is involved.
If some in our community wish to dismiss that experience as being not-helpful under any circumstances, I’d be very disappointed, particularly as the same individuals tend to be strong on ‘trust the team’ in other ways.
I think what we need is a better discussion that doesn’t presuppose the answers, but can talk about when different approaches may be indicated by different contextual variables.
(1) In Statistical Process Control terms, this means assuming the impact of Common Cause Variation applies throughout. If there is Special Cause Variation, the expected behaviour is that it is called out quickly, and acts as a prompt for replanning.
Martin, I appreciate the reflection. Thanks.
I think your comments here on estimation is now getting closer to what David was talking about. I think your comments on the changing needs of stakeholders are also getting closer to what the #NoEstimates crowd are talking about too! So yes, it’s contextual. Completely agree.
However, I’m wondering if rather than asking experts for a blink estimate at the beginning, it might not be better to gather their experience as to how long similar projects have actually taken, and use that to make a probabilistic forecast instead… in fact, given that the maturity of the company is hugely at play, you might even just be able to use the other projects from that company, regardless of what they were. I wonder if there’s any kind of correlation there? I mean, if your company tends to go for projects that are somewhere between 6 months and 18 months long, because they’re the only kind worth going through the analysis process and they know that anything longer fails, then you could say with some certainty that your project will be between 6 and 18 months and probably about a year, right? No matter what it is!
Re the #NoEstimates movement at its most absolute: I can kind of see it. There’s a lot of push for a start-up model, even in large organizations, and if you have that, then you have one of those contexts in which estimates are indeed largely pointless, at least within projects (most of the rants I’ve seen have actually been about the devs creating estimates when the project is already underway). But I’m not completely convinced that start-up models work everywhere, despite my 5-4-3-2-1-there’s-always-a-5-or-4 scale, and certainly most large organizations aren’t there yet, so I expect to see estimates of various kinds around for a while at least.
Wow.
You decry “strawmanning in extremis” then go on to create your own straw man argument: “Do we entirely reject the needs of external stakeholders when they want to plan some aligned deliveries? Do we snottily respond that they’ll get what we are pleased to deliver to them whenever the hell we are pleased to do it?”
I don’t know any of the prominent #noestimates proponents who take this position. Can you support this claim? I call straw man.
And BTW, you don’t need a long running process that has a lot of stability. See the referenced Magennis content.
Yes, Troy, that position IS indeed the thrust of many NoEstimates advocates, who go on and on about how estimates are useless, wasteful, deceptive, etc.
One of the major NE advocates writes, “People often tell me that “even though we know the estimates are inaccurate, they are better than nothing”. I prefer nothing.” Or, “Estimates are not a business reality for software development. They are a wasteful and deceptive practice.” Or, “A truly Agile development approach eliminates the need for estimates. Focus on discovery and delivering value, not prediction.” Or (one last one for now; I have lots more): “I can’t recall any situation where estimates were of value to the actual job at hand.”
At most, they grudgingly assent to giving the users the estimates the users ask for, but while very much metaphorically holding their noses while doing so: “If you have chosen to work with the sort of customer that requires an estimate, then give them what they want: an estimate”. And then go on to basically say, you need to find better customers then: “you get to choose who you do business with.”
Finally, the NoEstimates sneering can take even more extreme form: for example, one NoEstimates advocate writes, “The team should commit on that part which is known: mostly tomorrow, maybe day after. Longer commitment is not possible.”
These are actual quotes, not a straw man at all. Does anyone really think that someone with views like that will successfully hide those views from stakeholders when stakeholders come a-calling?
Peter, the original quote was:
Please try not to quote out of context. As you can see, context makes a huge difference. I’d also be wary of grabbing quotes from Twitter, as a lot of the context is necessarily missed there, too; I’ve got a whole other post about why we need to be forgiving if we want to converse in that space (and have said as much in a couple of talks already).
Hi Liz!
Nice post! Love it!
I’d just like to comment one thing.
You look for the English definition of “estimate” and turn to Wikipedia on “estimation”. It kind of feels that you’re looking for a definition that proves your point.
There’s an online dictionary that I love. It’s widely used and I think it’s the oldest on the web: Merriam Webster. Here’s how it defines “estimate”:
“: to give or form a general idea about the value, size, or cost”
And I love that definition. Because it’s that simple. No need to complicate stuff. And: “to give or form” includes forecasting (mostly the “form” part).
And “general idea” is also perfect. Exactly that. It’s not about single points or anything. Just a general idea. Software development seems to have obscured the meaning or made it unnecessarily complicated. The purpose is not about “most realistic” or being “right” or “wrong”; the purpose of an estimate is to inform decisions by having a general idea about the impacts (value, size or cost) of the choices we’re choosing among. And here forecasts are included as well.
I also love that the definition has “value” as the first. Because estimates are equally about value.
I’ve written some about the more positive form of NoEstimates, the Yestimates hashtag: http://kodkreator.blogspot.se/2015/03/the-yestimates-hashtag.html 🙂
Kind Regards
Thanks Henrik!
I did look at Merriam Webster before writing this post. Here’s the full definition it gives:
a : to judge tentatively or approximately the value, worth, or significance of
b : to determine roughly the size, extent, or nature of
c : to produce a statement of the approximate cost of
That “tentatively” screamed out to me. Here’s what it says about forecasts:
a : to calculate or predict (some future event or condition) usually as a result of study and analysis of available pertinent data; especially : to predict (weather conditions) on the basis of correlated meteorological observations
b : to indicate as likely to occur
Again, this difference is massive. I linked to the Wikipedia page because I wanted to be consistent, but I find the same connotations in pretty much every source I’ve looked at. The etymology of the words is also fascinating, if you’re into that.
I love your Yestimates! It is actually possible to love estimates, and still seek alternatives and question the need on occasion. I’m definitely not in the absolutist “no estimates, ever” camp, though I do wish we could get rid of story points. My experience is that they come with a cost that’s rarely, if ever, justified.
Just my tuppence. But as long as we’re questioning and exploring, I’m all good.
Thanks Liz! I agree, there’s a big difference. Personally I love the “tentative” part, because it emphasizes what it is – i.e informs what we should perhaps do to move forward.
And even if it’s different (I still think the definitions say that forecasts are a subset of estimates), why do we need “no” of one thing? I’d like to have both, if possible. “Yes”, right? 🙂
I’ll settle for people actually having a clear idea of which decisions they want to make with them first! 😉
Indeed! 🙂
We could continue to have an extensive discussion about the everyday meaning of the words, but surely we care about their meaning within our technical domain (software development)? (Our community, like most specialisms, uses a wide array of jargon that departs from the everyday meanings of words).
In the domain of software, and project management, the term has historically been “estimation” – for example, go to Amazon and search for “software estimation” versus “software forecasting”. And these books (such as Steve McConnell’s) cover a wide range of techniques, from “guessing” through to data-intensive statistical techniques.
Or try Google trends: https://www.google.co.uk/trends/explore#q=%22software%20estimation%22%2C%20%22software%20forecasting%22&cmpt=q&tz=
If we redefine words, for whatever reason, then we cut ourselves adrift from our history and literature, and other parts of our community, and will find ourselves unable to communicate effectively even within our own domain, never mind with others.
The meaning of a word isn’t just about formal definition; it’s also about usage, connotation and context. I haven’t redefined the word here. I’ve just outlined the meaning in the context in which it’s being used, and associations with that context. The best way I’ve found to communicate effectively is to ask questions and explore. That might include asking, “How are you defining X?”
I spend a lot of time doing this in BDD, because English isn’t a formal specification language, but rich, changing and highly contextual.
I like the “how are you defining X?” approach.
Part of my worry is that in the #NoEstimates argument, it has been defined very narrowly and quite carefully to only include problematic elements, which are then used to attack a broader definition.
I think it was defined as a short-hand for the problematic elements that the originators were encountering, which might well have been true for all estimates they encountered, and has grown a bit out of control. Welcome to Twitter. That happens a lot.
Our language is what it is. Languages aren’t static, and attempts to keep them static are futile. Language will always change based on the current users of that language.
So, we have to wait until the dictionary changes the meaning of word 🙂
I am not attempting to keep language static for all time – that would indeed be futile.
I am observing/lamenting that various *current users* of the language are (demonstrably, hence the very existence of this thread, and many previous ones on Twitter etc) finding it very difficult to understand one another, because they are using terms in different ways – even though they work in the same industry.
If we have to carefully define every term afresh in every discussion, because we have no stability in our common definitions, we are not going to have much time left for real communication – (especially on constrained media like Twitter!)
Narrowing a term is particularly troublesome. If “estimation” now means “guessing”, we lose the whole range of techniques the term used to cover.
Actually, when a word means a lot of things, the people writing the definitions (highly skilled linguistic people) try to find broader definitions, to cover all meanings. They don’t do the opposite and try to exclude some of the meanings of the word.
Estimates are actually a good example of this.
Actually, dictionaries are lagging indicators of meaning and usage. They always will be. We can go with what’s in the dictionary, or we can use “estimates” as it’s commonly used by our contemporaries in the field of software dev. Which one do you think is most useful to gain a shared understanding? Hmm… “shared understanding” that sure sounds familiar….
Agreed, dictionaries are a snapshot of a language. A dictionary does not define terms. Technical standards etc do that.
Agreed, and a blog which publicizes any relevant standards might have a more positive effect than picking on otherwise valid language, I think. (Any volunteers? 😉 )
Liz, your reply to me oddly ignores my main point (i.e., that NoEstimates advocates do indeed reject the needs of external stakeholders, by disdaining estimates so strongly) and focuses on trying to catch me in a misquote from just one of the many quotes I included (the majority of them from blog posts, NOT from Twitter, by the way).
On the quote that you claim was out of context: the original text of that quote was EXACTLY as I quoted it. Check the Internet Wayback Machine. The author actually changed the text in that portion of his blog post when I mentioned it previously as a telling example of NoEstimates’ attitude; he attempted to attach a proviso to it to make it milder, perhaps. But he failed: if a writer attaches a condition to a categorical statement (“I prefer nothing, if there are better ways”) and then instantly turns around and states that the condition is true (“There are better ways”), the original categorical statement still holds: “I prefer nothing.” This is basic logic. The meaning is crystal clear.
If anything, his clumsy attempt to edit his statement proves my point: there’s no hiding the basic NoEstimates attitude here, which amounts to a basic, deep-rooted disdain for providing the estimates that stakeholders need and expect. Hence my question to Troy still stands: “Does anyone really think that someone with views like that will successfully hide those views from stakeholders when stakeholders come a-calling?”
Peter, one of those quotes is saying, “Keep me away from that horrible thing!” and the other is saying, “Come over here, we’ve got better stuff.” The conditional does give it a different meaning. This is English; it isn’t logical. If Woody fixed his blog, consider that he’s probably learnt more about how to express what he really wants better, and stop trying to catch him out. That’s not conducive to a good conversation.
Part of the reason I’m taking a break from Twitter is because it’s unforgiving and lacking in context. Fortunately I don’t have to put up with that behaviour in the comments section of my own blog. If you can use language which invites discourse, I’ll happily have a conversation, but pulling out the most extreme quotes you can find as samples of what a community is like is hardly inviting. Words like “sneering” and “grudgingly” are also emotive. It doesn’t make me feel like you actually want to have a discussion.
The other reason I didn’t address your main point was because you sent it to Troy, and I felt he might be better placed to respond. Please try to be respectful, forgiving and generous to each other if that happens.
Liz, with all due respect: no, the conditional does NOT give the sentence a different meaning in any way. Much of this response from you, and in your original piece, seems to question the very possibility of using language to define things, or (now) the very possibility of expressing logic in English. Well, if we are outside the realm of pure mathematics, logic IS always expressed via language, and we all need to accept that and be willing to scrutinize the logical rigor with which arguments are being made. Words matter. “Grudgingly” and “sneering” are quite descriptive of what I and others see in NoEstimates attitudes and arguments on these issues, again and again over a long period of time.
The blog author quite clearly states that he prefers “nothing” versus estimates; that’s not trying to “catch him out”, it’s just a clear conclusion any reasonable non-partisan person would draw from what he wrote. I realize that you may be personal friends of the author in this case, but I would hope that we could all talk about ideas without such biases being an influence.
And by the way, the quotes I supplied aren’t “extreme” or cherry-picked outliers: one sees examples of this kind of thinking every day posted to NoEstimates blogs and to the Twitter hashtag. I have many dozens of such examples, and cited just 7 in my comment.
Finally, to believe that well-supported critique of an idea is somehow “not conducive to a good conversation” is, well, itself not conducive to a good conversation.
Peter, you yourself suggested that he might have added those elements to make it “milder”. It changes the language, and with it, the way we interpret it. It does so because it allows the worth of the two sentences to be considered independently. If what you have a problem with is the assertion that “There are better ways”, then include it, and say so.
Your assumption that the writers have negative motivation is just that: your assumption. I don’t know the author well, but I do know him well enough that I doubt he writes with a sneer on his face. Perhaps he gives out the estimates with a friendly warning as to their perceived uselessness, or provides contexts in which he foresees them being wildly off. You have absolutely no idea. This is what, in hypnotherapy, we call mindreading. You can’t read minds. It’s a little daft to try, and, if I may say so, a little rude as well.
It’s also an ad-hominem attack on the behaviours of individuals around the content, rather than the content itself. You aren’t even critiquing an idea; just the positions and the people behind it.
The most negative language I’ve seen so far on this subject has included “snottily”, “sneering”, “grudging”, “idiots”, and it’s all come from the opponents of the #NoEstimates movement, in these comments. If you want to encourage more positive conversation, please set an example.
Hi Liz, great thoughts and discussion! We need this on-going exploration of the estimation concept to help move to a place that is more useful and robust than where we are today. Re your point “The idea of this is that you can then use previous data to provide a forecast; but again, that forecast is largely based on a single value. It isn’t probabilistic.” it could be argued that there is a rough element of probabilistic forecasting in story points if you use a non-linear scale such as the popular Fibonacci series. Plus, if you calculate a confidence interval from your velocity data that would also provide a measure of probability. Would appreciate your take on this.
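For instance, a confidence interval on mean velocity might look something like this (a toy sketch with invented sprint numbers, assuming roughly normal samples and a large-sample z value, not anyone’s actual tooling):

```python
import statistics

def velocity_confidence_interval(velocities, z=1.96):
    """Rough 95% confidence interval for mean velocity.

    Assumes the per-sprint velocities are approximately normal and
    uses a z value rather than a t value, so it's only a rough guide
    for small samples.
    """
    mean = statistics.mean(velocities)
    # Standard error of the mean: sample stdev / sqrt(n)
    sem = statistics.stdev(velocities) / len(velocities) ** 0.5
    return mean - z * sem, mean + z * sem

# Invented velocities (story points) from the last eight sprints
velocities = [21, 18, 25, 19, 23, 17, 22, 20]
low, high = velocity_confidence_interval(velocities)
print(f"Mean velocity likely between {low:.1f} and {high:.1f} points")
```

That gives a range rather than a single number, which is at least a step towards the probabilistic end of the spectrum.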
Eddie, thanks for the feedback, and for the encouraging and respectful approach.
Re probabilistic forecasting; watch the video of Troy’s talk that I linked close to the bottom if you want a really good understanding. The techniques he outlines allow you to say things like, “When we model what the future looks like using the data we already have, 80% of the time our model shows us we’ll be finished by September; 50% by July.” So you have a range of numbers with their associated probabilities, rather than one single number with a degree of confidence. He also uses the models to find out what could be tweaked to make things faster, cheaply. Awesome stuff, highly recommended.
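To give a flavour of the kind of simulation Troy describes, here’s my own toy sketch of a Monte Carlo forecast (made-up throughput numbers, nothing like his real tooling): it resamples past weekly throughput to simulate thousands of possible futures, then reads off percentiles.

```python
import random

def forecast_weeks(remaining_items, weekly_throughput, trials=10000, seed=42):
    """Monte Carlo forecast: weeks to finish, as percentiles.

    Each trial repeatedly resamples a past week's throughput until the
    remaining items are done, giving one simulated completion time.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        done, weeks = 0, 0
        while done < remaining_items:
            done += rng.choice(weekly_throughput)  # resample a past week
            weeks += 1
        results.append(weeks)
    results.sort()
    # The 80th percentile: 80% of simulated futures finish by then
    return {p: results[int(trials * p / 100) - 1] for p in (50, 80, 95)}

# Invented data: items completed in each of the last ten weeks
throughput = [3, 5, 2, 4, 6, 3, 0, 4, 5, 2]
print(forecast_weeks(60, throughput))
```

The output is a set of dates (or here, week counts) each paired with a probability, which is exactly the “80% by September, 50% by July” shape of answer.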
I also find that the gaming which goes on with story points (a lot of it unconsciously) means that they’re not the best data. If the estimates matter enough that you need a confidence level, then IMO take measurements.
Having said that, a lot of the time, just moving away from the big budgeting and high analysis of Waterfall is a challenge, and if story points help to do that then fantastic. Every day, in every way, we get better and better.
This is exactly why we have dictionaries: we need a shared definition of what we’re discussing. Yes, I agree the dictionary is lagging. But it’s not accepted as *the* definition (to have a discussion around) until the definition actually is changed. (And my prediction: the definition won’t change during my lifetime).
And until it does change – we have to stick with the dictionary definition.
As I said, it’s the reason dictionaries were “invented” in the first place. People had useless, endless discussions about things when they weren’t even sharing the same meaning of the words they were discussing (we recognize that, right? :). Hence, the need for “a dictionary”.
And “estimate” is not a technical standard. It’s a word. It has a definition. Use that. Move on.
There are a lot of comments here about the use of words, but there are also a lot of detours from the original subject, which is the value and purpose of estimates. Whether they work for the people you’re working with, whether those people understand them, and whether they contribute to the process should be something you can observe and experiment with over time, no? If there are multiple points of view, then can’t there be multiple ways to skin that cat?
Neil, the original subject of #NoEstimates might have been the value and purpose of estimates, but the subject of this blog was deliberately about the use of words, since apparently it was confusing people.
Hopefully now the confusion is better resolved, and we can agree on which definitions we’re using and have those positive and useful conversations you’re calling for.
Thanks for clarifying, Liz. I would add, given the extra info, that usefulness = able to be used for a practical purpose or in several ways, and put the rest down to semantics.
…but then I just test stuff
See the original conversation linked above to see how much time and effort was being wasted on semantics. I’m with you all the way. Hopefully the practical purpose is to move the conversation on to something more useful!
Interesting – I recently had the same experience on Twitter. Being a bit provocative with the hashtag #NoEstimates makes a discussion blow up. The topic is so emotional that there seems to be only black and white any more. You are asked to decide for or against estimates – irrespective of the context and the needs. But that’s probably the problem with everything that starts with “No”.
Harald, freeagile.org
Yep. Twitter is by necessity a place of shortcuts and anchor terms. I once had someone say of a tweet I made that it was “good, but not sufficient”. What is sufficient and fits into 140 characters? I do wish people would learn to take tweets with a pinch of salt and ask for context if in doubt… but that would go against eons of evolution and the resulting confirmation bias, so it’s never going to happen.