March 23rd, 2021, by Inflectra
Estimation has always been an inflection point in the software engineering world. Many developers admit that one of the hardest parts of their job is not about naming things or invalidating caches (as the old saying goes) but giving estimates. Certainly, for many experienced software engineers, estimates have always been the most frequent area of friction between software developers and managers. Both sides seem to have valid arguments: managers want to know how long things will take, as they need to manage budgets and customer expectations. Developers, on the other hand, know that most software tasks cannot be estimated accurately enough to satisfy the manager's needs. This friction has been so intense that it's given rise to the #NoEstimates movement and many flame wars on social media. But is the choice truly between one of these two extreme positions, or can a happy medium be found that satisfies both sides?
This series of articles attempts to explore and answer this question.
Estimating is what you do when you don't know
-- Sherman Kent, a.k.a. "the father of intelligence analysis"
Estimation is all about risk management - that is, predicting the impact of the "known unknowns" and allowing for the "unknown unknowns," as Donald Rumsfeld once put it. The Scrum framework mitigates the risks of these unknowns (among other things) by prescribing an iterative and incremental development life cycle. A Sprint lasts no more than a month. At the Sprint Planning event, the Scrum Team decides how many Product Backlog items it can deliver within the Sprint. The items selected for the Sprint should be detailed enough for the team to estimate their size or complexity. The short horizon and focused delivery established by the Sprint mean that estimates are given within the narrow end of the Cone of Uncertainty. The Cone of Uncertainty is the graphical depiction of an important estimation axiom: estimates become more accurate the closer they are made to the point of delivery.
It is self-evident that the estimate we give for an item at the start of the Sprint will not be as accurate as the estimate for the same item halfway through the Sprint. It is also obvious that estimates for longer-term events, such as release planning or product roadmaps, will have an even greater margin of error.
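To make the axiom concrete, here is a minimal sketch of what the Cone of Uncertainty implies for an estimate's error band. The phase names and variability multipliers below are the classic figures popularized by Boehm and McConnell, not numbers from this article; they are used purely for illustration.

```python
import math

# Classic Cone of Uncertainty multipliers (Boehm/McConnell; illustrative
# assumption, not prescribed by this article): the earlier in the project
# an estimate is made, the wider its plausible range.
CONE = {
    "initial concept":     (0.25, 4.0),
    "approved definition": (0.50, 2.0),
    "requirements done":   (0.67, 1.5),
    "design done":         (0.80, 1.25),
}

def estimate_range(nominal: float, phase: str) -> tuple[float, float]:
    """Return the (low, high) band around a nominal estimate at a given phase."""
    low, high = CONE[phase]
    return (nominal * low, nominal * high)

# A nominal 100-unit estimate made at project inception could plausibly
# land anywhere between 25 and 400 units; near the end of design the
# band has narrowed dramatically.
print(estimate_range(100, "initial concept"))  # (25.0, 400.0)
print(estimate_range(100, "design done"))      # (80.0, 125.0)
```

The same nominal number carries a 16x spread early on and only a ~1.5x spread late, which is why Sprint-scoped estimates sit at the cone's narrow end.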
Generally speaking, there are two main estimation approaches in the agile world: relative (relativistic) sizing and absolute, time-based estimation.
Relativistic methods come naturally to most people and are usually the easiest to apply. In this series of articles, we'll look at the efficacy of such methods and discuss improvements.
Relativistic methods are all based on the same principle: the team provides an estimate (or size) of a work item relative to the size of other work items. Most relativistic approaches employ a scale (usually a Fibonacci series) and one or more baseline values to denote the 'smaller' and 'larger' items. The estimate values assigned to an item are usually called 'story points,' as most work items are expressed as user stories. Estimation normally goes along these lines: each team member privately picks a value from the scale for the item under discussion; all values are revealed simultaneously; the outliers explain their reasoning; and the team discusses and re-votes until it converges on a single value. Planning Poker is the best-known variant of this process.
The sum of story points a Scrum Team can deliver during a Sprint is known as the team's *velocity*; it serves as a metric of the team's work cadence and as a basis for longer-term forecasting.
Planning poker and similar methods are a great way to generate discussion and analysis of work items or tasks. However, experience shows that they are a poor way to estimate things. They can be described as 'finger-in-the-air' methods because they are like trying to guess the wind's speed and direction by putting a wet finger up in the air. A very experienced outdoorsy person may give a reasonable estimate, but most people will get it horribly wrong. Let me explain why planning poker and the like are not good estimation methods.
Dev A is a very skilled senior developer. Dev B is a junior developer with little experience in the application and business domain of the project. Dev A estimates a work item as a 3 (on a 1-10 scale). Dev B estimates the same item as a 7. Dev A will make coherent and persuasive arguments why that item should be a 3. Dev B lacks the experience to counter these arguments, so they will reluctantly agree that the item is a 3 (or will maybe settle at a 4). During development, Dev B has to deliver the item. They find it very difficult and time-consuming. The team wonders why an item estimated as a 3 takes so much time and effort. Stress and mistrust ensue.
In addition, some frameworks, such as Scrum, time-box the planning meeting. This means that teams are pressured to reach consensus quickly, making developers even more susceptible to peer pressure. Such human factors also account for the fourth and final point.
4. They are inconsistent. Here is a simple experiment you can perform yourself. Ask a developer for an estimate on a task unrelated to their current project. Wait a few weeks, ensuring that the developer does not spend any time thinking about the task. Then ask again for an estimate for the same task. There is a good chance that you're going to get a different estimate. So, if the developer, the task, and the developer's understanding of the task all remain the same, how can we get different estimates at different times? The answer is simply that our estimation is affected by our emotional and mental state. We've all been there: there are days when we think we can take on the whole world and days when we feel much less confident. Something that seems big or difficult today seems smaller and simpler tomorrow. We tend not to consciously think about our emotional and mental state when making decisions, but it affects us all the same. A good estimation method should account for such human factors that affect productivity and estimation ability.
It is for these reasons that these methods might not be fit for purpose. But if this is true, why are they so popular? The simple answer is cultural acquisition: we estimate this way because this is how we were taught, and because the teams around us do the same.
We need to improve our estimation methods so that they adhere to the rules of good estimation. A valid estimation method should yield estimates that are:
I call this the ROC principle. In the follow-up article, we'll examine techniques we can apply to ROC-ify our estimation. Until then, stay tuned.
Fred Heath is a software jack of all trades, living in Wales, UK. He is the author of *Managing Software Requirements the Agile Way*, available for purchase on Amazon.