Product Owner Excellence

What makes a product owner excellent? Is it subject matter or domain knowledge? Is it discipline around following the rules of the delivery management practice? Is it the ability to elicit value propositions from stakeholders, or to sell value propositions to them?

In my last post, Product Owner Training, I listed out a set of core activities that product owners perform. I think being able to perform these activities makes a product owner competent. For someone to excel, though, the difference is not necessarily in what they can do, but in how they do it, or even in how they think about doing it.

I think for a product owner to excel, she must have the ability to produce a long-term vision for the product, aligned with organizational vision and strategy. She must also be able to abstract meta-value propositions, like the ability to extend the product along certain axes (new client types, new reports, new process variants) without drama, or even without development effort. She must be able to see not only the visible skin of the product, but also aspects of the framework around which it is built, so that she can speak not only to which product capabilities are valuable, but also to which framework capabilities are valuable.

Product Owner Training

How do you develop the mindset and skills needed to be a successful software product owner?

In a technology organization (software vendor, tech startup), product owners tend to come out of a technology background. They are ex-developers, ex-architects, and sometimes ex-sales engineers.

In a non-technology enterprise (a normal company), the product owner is more likely to come out of a business background. For client-facing software products, that background is likely to be marketing or business product management. For internal-facing software products, that background is likely to be that of a business practitioner or a business analyst.

Regardless of background, product owners need to be able to do the following activities:

  • organize stakeholders into specific communities based on desire for specific value delivery.
  • sell the existing value propositions of the product into the stakeholder communities.
  • elicit new value propositions for the product from the stakeholder communities.
  • distill problem statements / value deficiencies from specific feedback from stakeholders.
  • maintain a prioritized/sequenced backlog of deliverables, contemplating the sponsoring stakeholder community, value proposition and business strategy.
  • sell/explain business value propositions to the technical team.
  • evaluate whether proposed technology solutions add the value expected by stakeholders.
  • develop high level test cases for acceptance for each deliverable.

Maybe there are some additional things, but this is the core list. You can see that, depending on background, a product owner might gravitate to the activities from this list that are more comfortable. Training product owners is about helping them grow the ability to perform outside their comfort zone.

A Definition of Done

In his Herding Cats blog, Glen Alleman asks a very pertinent question: what is the definition of done? Well?

Done (Enterprise software delivery project) – when software capabilities have been delivered that support the business value proposition per the customer’s business capability requirements.

In our agility, we recognize that requirements are clarified by “emerging information”. That doesn’t mean that they “change”. When a requirement “changes”, it is effectively a new requirement. We often experience a case where the business value proposition is inadequately defined at the outset of the project. In this case, it is necessary for this requirement to be clarified by emerging information.

We also recognize that there may be different paths to deliver different software capabilities that support a particular business value proposition. Choosing a different path, or delivering different software capabilities that support the same value proposition, does not mean the requirements have changed; more likely we are responding to emerging information.

I like to break requirements into business capability requirements and software capability requirements. Regardless of your project methodology, you must contend with these facts:

Fact: Enterprise software projects are created to deliver some business value. The requirements should define both the value (business capabilities) and a path to deliver it (software capabilities). Requirements form the basis of the definition of done.

Assertion: If the business value does not change, there is not a new requirement.
Assertion: If, during the delivery of the business value, we learn things about the domain, the technology, or the world external to the domain that alter the path to deliver the value, there is not a new requirement but emergent information.
Assertion: How we manage the (documentation of our understanding of) requirements is a method.

Fact: Some software projects are required to change course mid-stream. Sometimes the business value we initially intended to deliver is overturned by market pressure, financial impact, or a change in management or strategy.

Assertion: When the business value for a project changes, there is a new requirement.
Assertion: When the work done toward that value is abandoned, there is a loss.
Assertion: How we manage the (documentation and accounting for) change to requirements is a method.


Recognizing that there is a difference between allowing emerging information to clarify, influence, or stabilize the delivery of value, and accounting for changes in the definition of the value stream for a project, is key to understanding agile and how agile methods manage both cases.

In either case, the definition of done is when we have delivered working software capabilities that support the business value proposition(s) that have been “commissioned” by the customer.

Enough (as a damping mechanism)

In a recent post, Esther Derby describes a tendency of organizations to oscillate between centralized and decentralized controls. I thought it was a brilliant insight, in that she exposed that at the extreme reaches of the pendulum in either direction there is evidence of lower performance, but for different causes. I have experienced this myself, but never really thought up the food chain to where these decisions get made, only about the local effect, and my own efforts to locally mitigate said effects.

In the article, it is suggested that centralizing controls tends to squelch innovation and creativity, slowing down decision making and reducing the ability to maximize opportunistic growth, while decentralizing controls can allow decreased visibility, subunit goals that trump larger organizational goals, and decisions that do not align with the mission of the organization. Both of these extremes tend to lead to lower performance, and so in reaction to the downside of one extreme, we tend to lurch towards the other.

Esther went on to describe the need for feedback loops that would allow managers/management to detect and react to the pendulum, and adjust to reduce the periods of lower performance induced by the swing.

What Esther is describing is a damping mechanism, like a shock absorber in your car. The damping factor is based on a target range of motion, and the further out of range the motion, the greater the damping force. As I thought about this, what is needed is a conversation about enough centralization or decentralization (a definition of sufficiency will be required), and perhaps some discipline about sticking close to the definition of enough. Enough is the target range of motion: we want to contain any oscillation between enough decentralization to prevent the loss of performance from the one set of causes, and enough centralization to prevent the loss of performance at the other extreme.

So the questions that management must be required to ask are:

1) How much decentralization of control is sufficient to prevent the loss of innovation and creativity in our workforce?

2) How much centralization of control is sufficient to prevent the abandoning of corporate strategic vision for more localized goals?

I also think that this is a matter of establishing policies, aligned with rational management incentives backed by meaningful measures. This is where management usually falls down, in my experience. Here are some management anti-patterns that tend to interfere with this process:

1) No measurement process is established, therefore the policy must be written in absolutes.
2) Policy is written and implemented in a “one size fits all” paradigm, which fails to contemplate how the policy might be implemented differently in different contexts to achieve a similar net effect.
3) Incentives do not go far enough down or up the management hierarchy, so the desired behavioral changes are not properly incented at the point where they are necessary.
4) Incentives do not align with the policy, creating a paper tiger – a policy that has no teeth – one that is paid lip service, but limited effort is applied because there is no perceived benefit to behaving differently. Draw a correlation between policy and compensation and you will see behavioral change.

Once we have overcome the management anti-patterns, then and only then do the feedback loops become important – because the necessary constraints and enticements are in place to ensure appropriate action when the feedback is delivered.

Esther promised a future post about feedback loops, and I am keen to read it, because she is typically very insightful.

Agile Estimation Sanity

Some of the things I read about estimation in the agile community appear to me to be insane. Story points, and the reasons given for why they are the preferable unit of estimate, are one of those things. Before you call me a heretic, and burn me at the stake for my blasphemy and my castigating your dogma – hear me out. I simply don’t find any value in the practice of estimation using story points.

I want to start by stating some facts about software development and estimation.

1) There are no measurable units of output for software development. We have tried source lines of code, and function points, but they really are not measurable units of output; they are simply facts about the underlying software product. This is like saying that the number of nails I used (source lines of code) or the number of rooms (function points) is the measurable unit of output for building a house. Both are too variable (rooms vary in size and nails vary by job and practitioner).

2) We estimate using only units of input (the denominator), since we have no measurable units of output, so our estimates are neither repeatable nor generalizable. The input of software practitioners is time. We can choose to represent that time in any way we want, but it is still time.

3) We measure productivity in estimated units of input per actual unit of duration. Since we do not have measurable units of output, we can only measure against estimates.

4) Actual units of input may vary from estimated units of input, but since we have no measurable unit of output, actual units of input are irrelevant to a productivity measure.

5) Cost and duration are forecast based on assumptions around productivity metrics. There are some number W of estimated units of input, our team’s productivity metric is some number V of estimated units of work (input) per unit of duration, and our bill rate for the team is some number B per unit of duration, so Duration = W/V and Cost = Duration * B – it is really that simple. The magic is in the assumptions around V and B, and in getting the team to estimate W so that our assumptions around V become true over time.

Given these truths, it doesn’t much matter what you name your units of input and units of duration, as long as you estimate W in terms of units of input and V in terms of units of input and units of duration.
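
To make the arithmetic concrete, here is a minimal sketch of that forecast in Python; the numbers and unit names are hypothetical, chosen only to illustrate the W, V, and B relationship above.

```python
# Minimal sketch of the forecast arithmetic above; all numbers are hypothetical.
# W: estimated units of input, V: productivity (estimated units of input per
# unit of duration), B: bill rate per unit of duration.

def forecast(w: float, v: float, b: float) -> tuple[float, float]:
    """Return (duration, cost): Duration = W / V, Cost = Duration * B."""
    duration = w / v
    cost = duration * b
    return duration, cost

# Example: 240 estimated units of input, an assumed velocity of 20 units per week,
# and a team bill rate of 15,000 per week.
duration_weeks, total_cost = forecast(w=240, v=20, b=15_000)
print(duration_weeks, total_cost)  # 12.0 weeks, 180000 total cost
```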

So, for all of you advocates of story points (which appear to vary in size from team to team, from person to person, and from project to project, because they are uncorrelated with the actual unit of input, which is time), a question:

How do you project an initial value of V from which to baseline your plan and make your commitment for the initial time box?

Here is my supposition about Story Points:

Story points propose to be an intentionally imprecise metric of software effort. They are uncorrelated to actual units of input. Actual units of input are time. But it doesn’t matter, because we don’t project or predict or forecast or commit based on actuals; we use estimated units of input divided by actual units of duration to do this.

From all the conversations that I have had, the primary purpose of this uncorrelated unit is to prevent or limit the weaponization of metrics. What I mean by this is the use of metrics in a punitive or manipulative way, leading to unsustainable practices: overtime, unexposed technical debt, whip-cycle management, etc. Once metrics are weaponized, they can be used by either the customer or management, but when the weapons are used, there will be destruction.

What else is there?

The alternative to using uncorrelated units of input (story points) is to expose the assumptions behind your projections, predictions, and commitments. Walk your management through how you build your forecast, your schedule, your resource calendar, etc. Explain to your customer what they need to know to make decisions.

While it sounds stupid to tell your customer that your developers only deliver 3-3.5 days of effort per week, it is all that most developers everywhere deliver. There are very sound reasons why it is true; they are the cost of doing business. Collaboration, administration, personal management and learning, team leadership, and process improvement all take time away from delivery work, yet all are necessary for a healthy team. Your management knows this and so does your customer; they simply aren’t used to seeing it as an assumption underlying a software delivery projection.
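
As an illustration of exposing that assumption instead of burying it, here is a hypothetical sketch of how the 3-3.5 delivery days per week might appear explicitly in a schedule projection; the effort and team-size figures are invented.

```python
# Hypothetical sketch: surface the "delivery days per week" assumption in the
# forecast rather than hiding it behind a naive 5-day week.

DELIVERY_DAYS_PER_WEEK = 3.25  # assumption: collaboration, admin, learning, and
                               # process improvement consume the rest of the week

def weeks_to_deliver(effort_days: float, developers: int,
                     delivery_days_per_week: float = DELIVERY_DAYS_PER_WEEK) -> float:
    """Calendar weeks to deliver a given amount of effort (in developer-days)."""
    weekly_capacity = developers * delivery_days_per_week
    return effort_days / weekly_capacity

# 130 developer-days of estimated effort, a team of 4:
print(weeks_to_deliver(130, 4))                            # ~10 weeks with the honest assumption
print(weeks_to_deliver(130, 4, delivery_days_per_week=5))  # 6.5 weeks with the naive one
```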

Treat management and customers like “grown-ups” and the trust you earn will be empowering. Don’t give them projections and forecasts without exposing or explaining the underlying assumptions, only to proclaim “You can’t handle the truth” when they ask questions or misinterpret your intentions.

The value you get out of using correlated units of input is simple. You get the ability to compare productivity across teams using units that have the same underlying basis. You get the ability to compare actual units of input to estimated units of input, understand what the estimate missed, and continuously improve estimating practices. You get a greater level of transparency to management and customer.

I am open to suggestion – but truly don’t see a reason to switch to uncorrelated units of input – so convince me.

Agile Versus Whatever

In a number of posts over the last few months, Glen Alleman (Herding Cats) has been saying that Agile comparisons to “waterfall” or “predictive” are bogus, because the practices that they compare themselves to are simply BAD practices or anti-patterns in the domain of project management.

While I don’t disagree with Glen in the slightest, I want to start an argument about it. Just because they are bad practices doesn’t mean they aren’t prevalent. There are many organizations with a software development life cycle (SDLC) that is waterfall-ish. And there are many organizations that have responded to regulations (Sarbanes-Oxley) by increasing documentation requirements without thought to the impact on software productivity or quality, effectively polluting their software development life cycle with audit/regulatory-oriented policies.

There are thousands of PMP-certified morons who do not know (in practice) how to measure a project using any tool other than a Gantt chart and a ruler. There are many software development managers who use project plans and process gates to indemnify themselves when things go wrong, who think that by mandating that requirements be “signed off” and requiring “change control” they are immune to needing to deliver anything more than the minimum specified in the requirements.

There are thousands of software practitioners who have refused to give reasonable estimates because those estimates have been repeatedly weaponized by managers. (If I give you a stick and you hit me with it, next time I won’t give you the stick, or at least I will wrap lots of padding around it.) Many organizations have used “process” as an excuse to move much of the coding work “offshore”, leaving ex-developers onshore in unaccustomed roles as liaisons, managers, project managers, analysts, and testers. They have imposed “process” in order to get work to “cheaper” resources, but have not invested in process maturity.

The thing about agile is that it appears to ALL to be a game changer. It makes it easy for us to drop all our anti-patterns at once. While I recognize that Glen is right – the dumb things that agilists say are similar to the dumb things that born-again Christians say (“I don’t know how I would make it through the day without Jeeesus.”). Agilists are often like ex-smokers – they can’t stop telling you how great they feel since they quit. Yeah – a lot of the claims are based on comparisons to bad practices and known anti-patterns.

So Glen, to riff an old joke – why is using enterprise project management anti-patterns like hitting yourself in the head with a ball-peen hammer? Because it feels so good when I stop.

When your experience as a developer or project manager is fraught with project management anti-patterns, and you are a couple of pay-grades below the decision makers who are instituting said anti-patterns, what are your options?

a) Tell senior management that they don’t know their project management keister from a hole in the wall?

b) Find a new job at a better firm (oh wait – that assumes that there really is a better firm…)

c) Find some industry literature that shows a better way – a way that, without faulting the folks who instituted the anti-patterns, can be adopted in small doses. A way that tries to put all members of a software development team on equal footing, creating collaborative relationships rather than enmity.

So in the Dilbert world where most software projects are found, option C sounds like a huge winner. You really can’t fault the agilists whose primary exposure to project management practices has been in these environments for making those comparisons – that is their reality.

Agile Risk Mitigation

Before we talk about whether agile practices provide any benefits toward risk mitigation or risk reduction, we really need to talk about the nature of risk in a software development project or process.

In any discussion of risks, there are any number of attributes by which we can classify or characterize risks, but in order to discuss mitigation, comparing two methodologies side by side, it makes sense to me to use the source or cause of the risk as the primary taxonomy. So I will describe these risk sources, and how agile might offer some advantage in dealing with risk.

Risk Source – Customer: Risks emanating from the customer inevitably involve scope, schedule, and money. Sometimes it is that scope cannot be succinctly defined or understood (requirements); sometimes it is that the business need changes more rapidly than expected due to external factors. Sometimes it is simply that the customer, for whatever reason, cannot make decisions.

Risk Source – Project team: Risks emanating from the project team inevitably involve productivity, skill, or leadership.

Risk Source – Organization: Risks emanating from the company or department producing the software involve policy, resource management, and coordination.

Risk Source – Technology/Environmental: Risks emanating from the technical infrastructure or the software development technology platform.

I have listed these in the order of frequency or likelihood, based purely on my own experience. You can argue with me about customer risk being the most frequent or likely, and about whether all scope, schedule, and financial risk emanates from the customer, but it’s their scope, it’s their schedule, and it’s their money, and all decisions around those three elements are made by the customer.

So how do agile practices help mitigate risk emanating from the customer?

  • By producing working software as early as possible, effectively shortening the feedback and delivery cycles, agile practices actually retire risk of the unknown. When you produce working software capabilities that are ready to deploy, any risk of the unknown that was attached to those capabilities is effectively retired. This includes the risk of incomplete or incorrect requirements, or inaccurate translation between business and technical abstractions.
  • By assuming a model that inherently drives down the cost of deferred decisions by eliminating pre-work, the risks of changes to schedule or budget emanating from the customer’s funding model or the customer’s market strategy are mitigated.
  • By developing a model with a higher frequency of customer interaction, and reducing the scope (size and impact) of customer decisions (one milestone at a time), risks emanating from the customer’s own organizational decision-making process, or internal commitment model, are mitigated.
  • By establishing a value scoping paradigm with finer granularity, agile practices can reduce the scope of customer decisions, and the minimum time span of value delivery. More granular deliverables (stories?) make it easier for the customer to prioritize and sequence, ensuring that key value propositions are delivered as early as possible. Customers have an increased ability to discriminate essential scope from non-essential (apple polishing or lily gilding) scope, and so can defer the non-essential until all of the essential scope is accepted.

Likewise, how do agile practices help mitigate risk emanating from the project team?

  • By increasing the frequency of measurement (of productivity), agile practices give management more timely feedback on the productivity of each skill, enabling adjustments as early as possible, rather than later.
  • By establishing a regular cycle during the project of milestone retrospectives, the project team can adapt and adjust without the permission or engagement of outside leadership.

And for risks emanating from the delivery organization?

  • By establishing cross-functional teams, and empowering the team to self-adjust their process, agile practices reduce policy-oriented risks, since separate teams forming separate (and unaligned) policies do not inherently collaborate or coordinate.
  • By increasing the frequency of measurement, feedback to resource management is actionable sooner, mitigating risk emanating from resource management.
  • Agile practices do not appear to provide distinct advantage in coordinating across delivery teams, or with service providers or vendors. But it also does not appear to disadvantage the team in any material way.

Finally, for risks emanating from the technical infrastructure or development platform?

  • Agile practices do not appear to provide distinct advantage in dealing with hardware or software environments or development tools. But it also does not appear to disadvantage the team in any material way.

I am not suggesting that the traditional software development life cycle does not have risk management capabilities, only that they are different, and, especially, that they do not retire risk as early. The basis of the traditional life cycle is that the risk of the unknown is mitigated by doing ALL of the analysis and ALL of the design up front, in an effort to optimize the development, which is thought to be the risky portion. I think that modern software tools, paradigms, and techniques have made the development less risky, which means that the risk is more likely that the requirements we captured change over time, or simply were inadequate, and that we cannot retire that risk until we put software capabilities in front of users. That is where agile practices take us – in exactly that way.

Agile Value Delivery

Agile practices claim to deliver greater value to the customer. This claim is based on agile tendencies to deliver software capabilities more frequently, which in and of itself means that the customer starts realizing value from our efforts sooner. OK – this one is obvious. Traditional phase-gated life cycles effectively require you to do all the work through each phase – all the requirements, all the design, all the development, all the testing – before any value is actually delivered to the customer. Agile practices actively proclaim (this is not a passive benefit, but an assertion of the philosophy) that it is better to finish the smallest increment of usable software capability and deliver it to the customer as early as possible.

While agile does not claim to have invented this concept, it does seem to have taken it to the extreme. Here is the “thing” – in a corporate culture where results are measured in quarters and not years, the faster we can implement even incremental improvements in work process, the faster we can show results (cost savings, risk mitigation, staff headroom, etc.). The balancing factor is how frequently/rapidly the organization can assimilate change. Yet even in an organization that is not particularly good at change management, the early delivery of value emanating from agile practices puts organizational management in the driver’s seat. You can have a “pile” of completed, validated software changes ready to go, and you control the pace of change by scheduling releases. Whether the company wants two releases per year or twelve doesn’t matter to the team – because they just keep piling up the value and the customer controls when it is released into the enterprise. The customer controls the sequence of value that is being piled up, and the customer controls the timing of when the pile is rolled out.

Here is how it works pragmatically (a rough sketch in code follows the steps):

1) The customer constructs a list of valuable changes (scope).
2) The customer ensures that the list of changes is organized by sequence of delivery (converts scope to a backlog).
3) The team and the customer analyze (write requirements and estimate) these valuable changes in sequence and decide how many of them can be done in a measurement milestone (sprint).
4) The team delivers the changes (designed, coded, tested) for customer review.
5) The customer reviews the delivered changes, and adds any observed deficiencies to the backlog.
6) Repeat steps 2 – 5
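
Purely as an illustration of those mechanics (not a prescription of any particular tool), here is a rough Python sketch of the backlog loop; the change names, estimates, and capacity are hypothetical.

```python
# Rough, hypothetical sketch of the backlog mechanics in steps 1-6 above.
from dataclasses import dataclass

@dataclass
class Change:
    name: str
    estimate: float  # estimated units of input (e.g. days)
    done: bool = False

# Steps 1-2: the customer lists valuable changes and keeps them in delivery sequence.
backlog: list[Change] = [
    Change("self-service password reset", 5),
    Change("export report to CSV", 3),
    Change("audit trail on approvals", 8),
]

def plan_sprint(backlog: list[Change], capacity: float) -> list[Change]:
    """Step 3: take undone items in customer sequence until capacity is used up."""
    planned, used = [], 0.0
    for change in backlog:
        if not change.done and used + change.estimate <= capacity:
            planned.append(change)
            used += change.estimate
    return planned

# Steps 4-5 are done by people, not code: the team delivers the planned changes,
# the customer reviews them, and any observed deficiencies go back onto the backlog.
print([c.name for c in plan_sprint(backlog, capacity=10)])
# ['self-service password reset', 'export report to CSV']
```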

One of the things that agile does, simply by sequencing by deliverable value rather than by skill or life cycle phase, is make it easy to decouple releases from life cycle phases. In a phase-gate life cycle, I can’t release until all phases (analyze, design, code, test) are complete. Using an agile life cycle decouples release from phase, because it assumes sequencing by deliverable instead of sequencing by activity. Other than the “extra” activities required to release (regression testing, data conversion, code migration, and deployment), the software itself is continually in a “releasable” state.

This change in sequence assumption actually gives the “customer” more control over what order value will be delivered in, and allows them to alter the order of value delivery without loss of productivity. This holds as long as the order is altered between measurement milestones rather than during one, because very little work is done on deliverables before their milestone begins. In a phase-gate life cycle, all of the analysis is done for all deliverables before the design starts, all the design is done for all deliverables before the code starts, all the coding is done before the testing starts, etc. – so the later you alter the scope (or change the sequence by swapping one scope item for another), the more completed work you are “throwing away”, which becomes lost productivity or waste. This is why everyone in a phase-gate life cycle works so hard to avoid change in scope – we use change control because we recognize that change after analysis is complete means lost productivity.

Now of course WE ALL KNOW that it doesn’t really work like that in phase-gate life cycles – phases are allowed to overlap, and plans are optimized for dependencies, and we can carry risk of unknowns for later analysis or design on our projects – but because of the assumption of sequence by activity inherent in the life cycle model, these things add complexity to the planning process, and to the measurement process, and to the execution of the project, because they are exceptions rather than the rule. This complexity itself has a cost in lost productivity and extra coordination or collaboration. The point being that because overlap and deferred analysis are inherent to the agile life cycle, they do not increase the complexity of the project, plan, or execution, and the collaboration and coordination points are baked into the practices designed to implement the agile life cycle.

Running Agile

The key difference between agile and more traditional phase-gate life cycles is the assumption around delivery sequence, and all of the implications emanating from that assumption, especially the greater control over delivery of value afforded to the customer. It is this greater control over the delivery of value that the term agile describes – agile life cycles respond to changes in strategy, direction, market conditions, and available budget much more gracefully, and with much less drama, than traditional phase-gate life cycles. This is a direct win for the customer, and as such, can be sold as an advantage for the provider.

Agile Predictability

Predictability is probably the least hyped benefit of agile practices. It is not sexy or fun, nor does the team gain from it in an obvious way. But the team does benefit from it, from a management perspective.

The benefit of a predictable software delivery process is realized in three ways:

  • Rational Planning Process – when I can measure the capacity of the team, I can create a rational plan. This allows consulting firms to bid projects, and enterprise software organizations to budget, more accurately.
  • Builds customer trust – when I can predict the delivery pace, I can communicate this to the customer, and build trust and reputation. This allows for repeat engagements in consulting, or greater latitude in decision making within the enterprise. When the team demonstrates predictable delivery, the customer can relax and focus on defining the output rather than the mechanisms of accountability.
  • Sustainable Pace – when I can project delivery based on the capacity of the team, I can ensure a more sustainable pace. I can negotiate in trust with the customer, understanding their need. I can increase capacity when needed, rather than forcing the team to stretch (work overtime) to meet goals.

These are serious benefits, but we haven’t yet said how agile practices deliver them – only why and how predictability benefits an organization. So how does agile deliver on this benefit?

  1. Frequent measurement at regular intervals – by defining arbitrary milestones at regular intervals (iterations, sprints, time boxes, etc.) and measuring the output of the team through each interval, agile practices generate the metrics that support predictability. While it is important to measure, it is also important to have a lightweight measurement practice – so that the team doesn’t trade productivity for predictability. The velocity metric (calculated as completed units of estimated effort per unit of duration) is typical. Combined with a remaining estimated effort metric and a planned cost per unit of duration (or average cost per unit of estimated effort), this allows an accurate projection of both duration and cost (a minimal sketch follows this list). If my measurement frequency is a week, I can know each week how far off plan I am in terms of both schedule and budget. I can choose how I react to this knowledge.
  2. Cross-functional milestones – what differs within agile software life cycles from more traditional gated software life cycles is that each milestone in agile is self-contained, and has elements of all phases of software delivery – analysis, design, development, testing. This means that each skill is consumed and measured in each milestone. In a phase gate life cycle, the last skill (testing) is not consumed or measured until the last milestone – and so is not predictable – so I am using measurements from other projects at best. If I form virtual teams for projects (consulting), I can only predict at best from prior individual performance, which does not account for team dynamics.
  3. Repeatable, lightweight measurement framework – agile practices propose a lightweight estimation practice and a repeatable practice of measuring delivery against estimate. Agile estimates are done with less precision, but much less effort, than gated methodologies – again trading a little precision for productivity. Traditional project managers (Glen Alleman) get aggravated by this, because it does not have the precision of the more industrial-strength estimation and measurement frameworks; however, it also does not have the overhead. Agile applies the same principle to planning that it applies to software delivery (YAGNI) – that is, I do not implement more sophisticated measurement or estimation schemes unless the nature of the work, or the project, merits that level of precision. If the customer provides requirements that are less defined, I will by definition provide estimates that are less precise, even using the industrial-strength mechanism. Since software development is a largely non-repeatable process, with no standard units of measurement for output, agilists argue that using a precise estimation and measurement framework that costs more is a waste, because it does not yield greater predictability.
  4. Lower planning skill requirement – agile planning practices are typically simpler, easier, and less effort than other practices. This means that your project manager doesn’t need to know how to do a PERT chart, Critical Path Method (CPM), or Earned Value measurement. In agile plans, you don’t focus on dependencies between tasks; you simply sequence the deliverable capabilities in the order of customer value, and go. All of the more sophisticated plan maintenance activities are optimizations that may not be necessary, or beneficial, given the level of unknown in the work being contemplated. Here is the kicker – because it is easier to implement, it gets done more consistently. It gets done with more discipline – so a lower-fidelity process executed with greater discipline actually delivers higher predictability over time than a higher-fidelity process executed with less discipline.
  5. Emphasis on learning and improvement – agile practices call for retrospectives after each milestone. The purpose of the retrospective is to identify opportunities to improve practices in use. As the team works together, they find ways of working better, and have a stated process for proposing, deciding on, and implementing improvements. Since the team owns the process and the improvements, they are incented to implement them (it was their idea), and thus each team adapts agile practices to suit their specific situation and goals. Through the process of self-improvement, the team becomes more predictable, because this milestone’s surprises become opportunities.
  6. Progressive elaboration – in phase-gated software development life cycles, which assume that all inputs (requirements, design, etc.) are known early, the temptation is to plan in more detail than you actually know. This introduces fiction and reduces predictability. Agile’s preference for progressive elaboration in requirements, designs, and plans recognizes the FACT that you cannot know everything before you start. This leads to plans that grow and shrink as knowledge is acquired and accounted for, but always reflect the current state of our understanding of the work. This provides the ability to decide and adjust based on a plan that is as accurate as it can be with the knowledge that we have. Anything more is speculation, or fiction, that leads to poor decisions and improper adjustments.

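For item 1 above, here is a minimal sketch of how those weekly numbers might be combined into a duration and cost projection; the velocity, remaining effort, and cost figures are invented for the example.

```python
# Hypothetical weekly projection from the metrics named in item 1:
# velocity (completed estimated effort per week), remaining estimated effort,
# and cost per week.

def project(remaining_effort: float, velocity: float, cost_per_week: float,
            weeks_elapsed: int, spent_to_date: float) -> dict:
    """Project remaining duration and total cost from this week's measurements."""
    weeks_remaining = remaining_effort / velocity
    return {
        "weeks_remaining": weeks_remaining,
        "projected_total_weeks": weeks_elapsed + weeks_remaining,
        "projected_total_cost": spent_to_date + weeks_remaining * cost_per_week,
    }

# After 4 weeks: 90 units of estimated effort remain, measured velocity is 15 per week,
# the team costs 12,000 per week, and 50,000 has been spent so far.
print(project(90, 15, 12_000, weeks_elapsed=4, spent_to_date=50_000))
# {'weeks_remaining': 6.0, 'projected_total_weeks': 10.0, 'projected_total_cost': 122000.0}
```
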
The benefit of predictability is one that is less intuitive than the rest. Who really benefits from it? In the end, all benefit. Management has more control, the customer is satisfied, and the team can reach a sustainable pace – that sounds like win-win-win.

Agile Productivity

One of the hyped benefits of agile software development practices is increased productivity. It is also the benefit that I am most skeptical of. Software productivity is notoriously difficult to measure, because there are no relatively standard units of measurement of output. Martin Fowler said this in 2003, and as far as I can tell it is still relatively correct. This is true regardless of whether you are using agile practices or not. In fact, the measurement of software output itself is an expensive activity, so in any normal case study it would be hard to prove. Since books have been written on this topic, I do not feel the need to delve deeper into it, except to say that all of the problems described in Fred Brooks’ famous Mythical Man Month still exist today, and there has not been a universally adopted solution.

Nevertheless, I am not totally willing to call bullsh*t on the productivity benefit. There are three completely anecdotal reasons for this:

1) Agile practices tend to focus less on documentation and more on producing working software. This leads to spending less time on speculative documentation that ends up being revised many times. Rather than guessing what users want, agile teams build it as quickly as possible, and get feedback, and revise it, until it IS WHAT THE CUSTOMER WANTS! So from a productivity perspective, it is less work to deliver what the customer wants, not what they thought they wanted or what we thought they wanted. Is this real measurable productivity? NO – but it sure as hell feels good to both the developer and the customer.

2) The shorter measurement cycles that agile favors help resources focus on the needful, rather than the fanciful. This means that developers are focused on working on the next feature. Management theories put forth by Elliott Jaques talk about time span – the individual’s ability to plan into the future. Developers typically do not have a long time span. I have often experienced developers who became overwhelmed trying to figure out what order to work in, and ended up being unproductive or paralyzed by unscoped tasks. The rhythm of consistent pacing tends to allow the team to relax around planning, because there are regular intervals of planning activity, broken by longer periods of uninterrupted development. Is this real measurable productivity? NO – but the developers will tell you that they are able to relax and focus on solving technical problems rather than worrying about deadlines and planning.

3) The agile principle YAGNI (Ya Ain’t Gonna Need It) reduces the investment in complex architecture, or over-engineered solutions, or premature optimization. The principle is less about improving the productivity of individuals, and more about improving the output of the team. It is a principle that allows the team to hold itself accountable to deliver the smallest, thinnest, cheapest version of each capability, rather than the most ornate, most performant, most scalable, most beautiful version. Agile principles direct us to deliver an acceptable solution with the least effort, rather than an acceptable solution that anticipates all future events. Is this real measurable productivity? NO – but as I learned early in my career – the best way to get more done is to figure out all the things that don’t need to be done and not do them.

When I evaluate these reasons, I recognize that the perceived increase in productivity may be much greater than any actual increase, but at the same time, perceived increase in productivity can improve morale and reputation in ways that by themselves lead to real increases in productivity. Agile life cycle projects often feel less like a “death march” than gated life cycle projects, and that alone can lead to increased productivity and motivation.