Of course, the only valid reason for estimation is prediction. We want to predict the cost and duration of delivering some set of working software capabilities. If we didn’t care about time and cost, we wouldn’t waste our time trying to predict. We realize that software task estimates alone are insufficient for this prediction. But at heart, the task estimates must be predictable: they must correctly represent the work the team needs to do, within some finite, predictable variance. Task estimates are the core of the plan.
Software estimates are unreliable because there are no reliable units of measure for software output. We act like the work is repeatable because we are building components that share a name, follow the same pattern, or use the same paradigm. In reality, every software component we build solves a slightly different problem. The environment we build in and test against acts as a constraint. The company sponsoring the build or hosting the environments can impose constraints in practice that limit our output. Even though it appears we are doing repetitive or repeatable tasks, they vary internally and environmentally in ways that alter our delivery cost.
Project planning requires making and adjusting assumptions on top of the task estimates. How much time does my team spend on administrative activities? How many developers can I assign to the same feature without leaving people idle waiting on each other? How many developers can we effectively manage before hitting a point of diminishing returns? How much time will developers spend fixing quality issues? How much time will developers spend on normal re-work when decisions invalidate work performed under earlier decisions or assumptions? Developer task estimates cannot account for these assumptions.
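One way to picture how these assumptions sit on top of raw task estimates is a simple overhead calculation. This is a minimal sketch; all the percentages and the 40-day total below are hypothetical illustrations, not recommended values.

```python
# Layering planning assumptions on top of raw developer task estimates.
# Every number here is a hypothetical example.

raw_task_days = 40.0      # sum of the team's task estimates

# Assumed fractions of working time lost to non-task activity:
admin_overhead = 0.15     # administrative activities
defect_fixing = 0.10      # fixing quality issues
rework = 0.10             # re-work when earlier decisions are invalidated

# Only the remaining fraction of each day goes to the estimated tasks,
# so the calendar plan must stretch the raw estimate accordingly.
effective_fraction = 1.0 - (admin_overhead + defect_fixing + rework)
planned_days = raw_task_days / effective_fraction

print(round(planned_days, 1))  # prints 61.5
```

Even with perfectly accurate task estimates, under these assumptions a 40-day pile of tasks becomes a roughly 61-day plan, and none of that gap is visible in the task estimates themselves.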
Software developers are not always aware of the risks surrounding their tasks. Standing up enterprise architecture infrastructure, coming to grips with design decisions, team formation, collaboration costs, poorly defined or continually evolving business needs, staffing, and so on: those are not really task estimation risks. Those are project estimation risks, which in my mind is a completely different problem. By the way, we usually suck at that too.
I was recently working with a team to determine what tasks were necessary for a single screen with a grid, a dropdown, and a couple of buttons, with navigation to and from the screen and, oh, some business rules governing input; my team was short about 4 ideal (effort) days on a 10-day initial estimate. Interestingly enough, we finished nearly as scheduled, but we missed a couple of tasks because we made design decisions along the way that necessitated additional work, or re-work. My experience is that teams start task estimation with an average variance of about 50% and ultimately get no better than an average variance of about 20% as they grow together. The best teams get better at estimating as they go along, because they remind themselves of their former errors and of assumptions that turned out to be flawed. They learn. They become more predictable over time.
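The "average variance" a team tracks here is easy to compute from its own history. A small sketch, with invented iteration numbers chosen only to land near the 50% starting point and the roughly 20% floor described above:

```python
# Average estimation variance: mean absolute gap between estimate and
# actual, expressed as a fraction of the estimate. Data is hypothetical.

def average_variance(estimates, actuals):
    """Mean of |actual - estimate| / estimate across tasks."""
    return sum(abs(a - e) / e for e, a in zip(estimates, actuals)) / len(estimates)

# A newly formed team (illustrative numbers, in effort days):
early = average_variance([10, 8, 5, 12], [16, 11, 8, 17])
# The same team after several iterations of learning (also illustrative):
later = average_variance([10, 8, 5, 12], [12, 9, 6, 14])

print(round(early, 2), round(later, 2))  # prints 0.5 0.17
```

Tracking this one number per iteration is enough to see whether the team is actually learning from its past errors or just hoping it will.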
However, simple task estimates are insufficient to plan the delivery of an application. The person managing the risk (project manager, scrum master, team lead) is responsible for understanding all the risks not directly related to the tasks themselves. If they are unfamiliar with the business domain, customer behavior patterns, organizational structure and strategy, enterprise technical constraints, the people and personalities within the team, or the management style and issues in the technology organization, they will be less effective at capturing, quantifying, and articulating these risks, at planning to mitigate them (adding work or slack to the plan), and at negotiating the acceptance of risk by stakeholders beyond the project team. Leadership is key to risk management.
The 2-3x variance between estimate and actual that people often cite is usually not the result of developers estimating tasks poorly. It is the result of project leadership making bad risk decisions: forcing the development team to accept risk because leadership was unable to capture, quantify, or articulate it to stakeholders and get better decisions made.