Why “Velocity” Is Not As Bad As They Say

Glen Alleman of the Herding Cats blog is one of the most experienced project managers I have ever read.  His background is amazing.  He is bright, and has practical knowledge of projects that I will never have.  However, he is missing the point on agile and velocity.
I recently read his post “Simple Never Is” and have this response to offer.
I have been working on software projects for some 20 years, and while I am not a practitioner of any formal agile methodology, either for project management or for software development, I have been exposed to incredibly ad hoc practices in managing software projects.  I agree with Glen when he says that the agile spokesmen railing against traditional project management have never been exposed to good traditional software management practices.  Most project managers I have worked with take a couple of classes, manage a couple of projects, maybe get a PMP cert, and then do the same thing over and over, with varying results, because they personally suck, not because their methodology is fundamentally flawed.  I have been this project manager, and I sucked as well.  I think most of this is because the organizations I have worked for did not sponsor, fund, or require good project management practices.

Agile, and specifically scrum, is a reaction to the suckage.  It is the project team reacting to the fact that project managers don’t use practices that help.  What scrum and other agilish methods and practices around project management do is help the team remove two specific smells from project management:

  • plan fiction – the two most prevalent fictions I have experienced in project plans are a) plans that don’t reflect how the work is really done, and b) capturing partial task completion as a percentage based on the developer’s opinion.  Neither of these conditions necessitates deliberate deception; both can be the simple result of laziness or communication problems.
  • inconsistent plan maintenance – when, at the outset of a project, the team establishes no values or rules around how estimates will be performed, adjusted, and maintained in the plan, how the plan will be elaborated, and what acceptable task effort is.  When there is no enforced measurement frequency, and no commitment to reflecting the current work breakdown and task sequence in the plan even though it changes from week to week, the plan itself becomes useless.

The thing that Glen is missing is that velocity is not a project completion measure, but a team productivity measure.  Agile project management is more about the team than the project.  Thus it makes sense that the most visible measure is team productivity, not project progress.  From a project progress perspective, velocity is exactly what Glen says it is – predicting the future by evaluating the past.  He calls this driving by the rear view mirror.  Velocity, however, is good for the team.  It tells them whether their current pace is likely to match the published/committed schedule, and when measured frequently enough (like every week), it tells them early enough that they can adjust something to stay on schedule.

Velocity is one factor in the measurement of current projected duration.  Current projected duration is a simple measure – remaining work / average velocity = projected duration (see the sketch after this list).  This measure rests on a bunch of assumptions:

  1. productivity is the primary predictor of duration – this is a big leap, and I think it is what Glen reacts to, because there are not many large projects where this is true of the project overall.  However, for many software projects during the development milestone, the team is focused on this, and is held accountable for this and this alone.
  2. there is enough parallelism in the plan that sequencing of task execution is not rigid – that is, the team can reassign and resequence tasks to mitigate the impact of delays on any one task or deliverable.
  3. the plan is updated at least as frequently as it is measured – as the work goes on, newly discovered work is added to the plan, and work that is determined to be smaller, or unnecessary, is removed.  That is, remaining work is adjusted not only by the work completed, but by periodic adjustments to task estimates, and by the addition of tasks when new work or re-work is required.
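
To make the arithmetic concrete, here is a minimal sketch of the calculation in Python.  The units (hours of estimated work, weekly periods) and every number are hypothetical illustrations, not data from a real project.

```python
# Current projected duration: remaining work / average velocity.
# All figures below are hypothetical illustrations.

def projected_duration(remaining_work: float, average_velocity: float) -> float:
    """Periods remaining, assuming productivity alone predicts duration."""
    return remaining_work / average_velocity

remaining = 240.0   # hours of estimated work left in the plan
velocity = 60.0     # average hours of estimated work completed per week

print(projected_duration(remaining, velocity))  # -> 4.0 weeks
```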

I think that Glen’s characterization of velocity as a “rear view mirror” and “yesterday’s weather” metric stems largely from his belief that past performance is not always a good indicator of future results.  While this is true, velocity and current projected duration are useful metrics that can be applied in many ways to measure productivity and schedule.  Glen says that velocity is a level of effort measure, and I disagree.  Here’s why:

Remaining work is always an estimate.  Properly calculated, velocity is the sum of the effort estimates for the tasks completed in a period, and requires that those task estimates not be adjusted during the measurement period in which the tasks are completed.  Velocity is meaningless if not used to project duration; therefore velocity must be in the same units as remaining work.
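
As a sketch of that calculation, assuming tasks estimated in hours and completion tracked by week number (the Task shape and the sample data are hypothetical):

```python
# Velocity for a period: the sum of the ORIGINAL effort estimates of the
# tasks completed in that period. Estimates stay frozen while measuring.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    estimate: float    # original estimate, same units as remaining work
    completed_in: int  # week number in which the task finished (0 = not done)

def velocity(tasks: list[Task], week: int) -> float:
    return sum(t.estimate for t in tasks if t.completed_in == week)

tasks = [
    Task("login form", 8.0, completed_in=1),
    Task("password reset", 5.0, completed_in=1),
    Task("audit log", 13.0, completed_in=2),
]

print(velocity(tasks, week=1))  # -> 13.0, the same units as remaining work
```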

If calculated this way, it is not, as Glen suggests, an LOE measure, but much closer to EV (earned value), because it approximates the budgeted effort.  The thing about velocity that many project managers struggle with is that it is not precise, and it doesn’t contemplate many aspects of the project timeline – only team productivity.  I believe this is because agile disciplines are mostly focused on the behaviors of the team and on increasing/maximizing team productivity, and as a result, the delivery of value by the team.

Here are some things that I have done with velocity (in combination with other measures):

  1. Working backward from a published end date and the remaining work, and assuming resource allocation is constant, we can calculate the required periodic velocity – the amount of work that must be completed per period (week?) on average to achieve completion by the end date.  I use this to ensure that we are planning (committing) to complete enough work each period (week) to stay on schedule.  (See the sketch after this list.)
  2. We can capture a personal velocity metric for each team member’s average productivity versus their own estimates.  I have measured velocity vs. commitment to help team members plan more effectively, and to help the team re-allocate work to stay on target.  This is a great help to managers who want to coach developers into higher performance, on both development and estimation.
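
As a sketch of item 1, working backward from a committed date under the constant-allocation assumption (the dates and effort figures are made up):

```python
# Required periodic velocity: the average amount of work that must be
# completed per period to hit a published end date. Figures are made up.

from datetime import date

def required_velocity(remaining_work: float, today: date,
                      end_date: date, period_days: int = 7) -> float:
    periods_left = (end_date - today).days / period_days
    return remaining_work / periods_left

# 240 hours of estimated work remaining, six weeks to the committed date.
print(required_velocity(240.0, date(2009, 3, 2), date(2009, 4, 13)))  # -> 40.0
```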

In my experience, velocity alone is insufficient for many projects, as it doesn’t really contemplate the complexity of external dependencies, interactions between teams, etc.  When these factors are in play, additional measures can be devised or adopted to surface the conditions that put the project at risk.  When my team is largely in control of the project and there are fewer connections and dependencies, I have experienced the benefit of this simple measure (velocity), because it is easy to understand and difficult to fake out.  These two qualities allow trust to develop between project manager and team, and a focus on solutions (adjusting the plan).

Metaphors in Requirements

This week I was asked to review job descriptions for analyst roles within our IT function. The roles were “Analyst, Business Systems”, and “Senior Analyst, Business Systems”.

The person who asked me was looking for my opinion because I am strongly opinionated, blunt, and have experience hiring and leading business analysts and requirements engineers. She wanted to understand the difference between an analyst and a senior analyst.

Other than the expression of leadership within a project or team context that I would expect of any senior contributor, my answer had to do with metaphors.

Metaphors are the solid business abstractions that software is designed around. Metaphors are the unambiguously defined concepts that ground the business process. Metaphors are the litmus test to see if requirements are cohesive and complete. If your metaphors suck, so do your requirements.

Back in the ’90s when I was learning object-oriented analysis and design (OOAD), we were taught that each application or major feature had a “central object” that was the focus of its existence. Microsoft Word has a “document”. Every e-mail client has a post or a message. This central object is the metaphor around which the application is designed; we just didn’t call them metaphors back then.

When designing an application all of your actions are performed on metaphors, all of your business rules contemplate metaphors, and your data model expresses your knowledge of your metaphors. Your metaphors are the essence of your understanding and modeling of the REAL BUSINESS stuff that your users and customers have to deal with in the REAL WORLD as part of their job. The more your metaphors align with reality, and the more you can eliminate ambiguity among and between metaphors in your requirements, the easier it will be for software designers, architects, and developers to model and fashion a system around them.
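
As an illustration (a hypothetical sketch, not anyone’s production model), here is how the actions, business rules, and data model all hang off a central metaphor like the e-mail client’s message:

```python
# A hypothetical sketch of designing around a central metaphor: the e-mail
# client's "message". The data model expresses the metaphor, actions are
# performed on it, and business rules contemplate it.

from dataclasses import dataclass

@dataclass
class Message:                  # the central metaphor
    sender: str
    recipients: list[str]
    subject: str = ""
    body: str = ""
    sent: bool = False

def send(message: Message) -> None:
    """An action on the metaphor, guarded by a business rule."""
    if not message.recipients:  # rule: a message needs at least one recipient
        raise ValueError("a message must have at least one recipient")
    message.sent = True         # actual delivery is out of scope for the sketch

msg = Message("me@example.com", ["you@example.com"], "hello", "hi there")
send(msg)
print(msg.sent)  # -> True
```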

That’s my story and I am sticking to it. The senior analyst gets this, and carefully and thoughtfully identifies, defines, and clarifies the essential metaphors within business requirements, and knows that the requirements aren’t complete until all of the defined metaphors hold together with the business rules and actions required.

Metaphors work most effectively when you can get your user community to communicate (to you, and to each other) in terms of metaphors that you helped define. When you have defined useful metaphors that clarify the business process, your users will adopt your terminology because it helps them understand what is important. Then in that context your software (being true to those metaphors) will be intuitive.

An analyst can capture and document the business process. The senior analyst (one with mastery) can change the conversation, casting metaphors that help the business community define its value proposition and supporting processes more effectively.

Requirements Success Factors

Last January my role was redefined, and since then I have been managing two teams covering diverse aspects of two software programs. The first team is responsible for requirements, functional design, and quality assurance; the second team is responsible for support.

In this role, I have been focused on analysis and have been interviewing more business-analyst-type resources than ever before. I don’t call them business analysts, although my company has a job description for “senior analyst, business process”. The reason is that their job is not to analyze business process; their job is to elicit, understand, organize, and document requirements for software products. I prefer the term “requirements engineer”, and I like to talk about requirements engineering, because requirements are not simply a description of the current or desired business practice, or a wish list. Requirements are a high-cohesion document describing the capabilities required to deliver specific business value through software automation.

In a recent interview of a candidate for a contract “requirements engineer” role, I asked a question that I usually ask candidates of any skillset – “What are the top three critical success factors for practitioners of [your discipline]?”

My friend Johanna Rothman would say that this is not a very good behavior description question, because it does not give the interviewee an opportunity to tell how he has done this. I believe it is a very good question, because it asks two things at once: does this candidate see him or herself as a practitioner of a discipline, or as someone doing a job? It also forces them to describe how they practice this skill set. If the answer rolls off their tongue, then they have spent some time thinking about how to do a better job. If they struggle with it, it is likely that they don’t think about it much; they just do it.

Then there is the answer itself – this tells me what they think is important. I usually ask this question towards the end of the interview, after I have already asked the behavior description questions. I look for answers that are cohesive with the earlier responses, to see whether they are spitting out what they think I want to hear, or practicing what they preach.

This candidate did pretty well – after he answered, he asked me what I thought the three critical success factors were.

My answer:

Semantic Clarity or Disambiguation – terms and concepts, especially metaphors, must be precisely defined.

Cohesion – the document must add up with mathematical precision.

Organization, especially abstraction or generalization – the basis of software is abstraction, and this must begin with the requirements: classes of problems and value propositions must be clearly identified and categorized.

His answer:

He had cohesion and disambiguation or something close to it, but substituted scope management for organization.

I don’t think he is wrong, but in my organization, scope is fixed after requirements, not before. This is because premature scope management inhibits value delivery, IMO.

I think I’ll hire him…

— Correction —

My candidate did not have cohesion; he said communication, and he talked about making sure that each person walks away from a conversation with the same understanding of the topics discussed.  I agree that this is a success factor for gathering requirements.  Certainly anyone who goes into a business to understand what is required, in order to add specific value to that business, must have ample communication skills and, most importantly, establish appropriate feedback mechanisms to ensure that understandings are shared.  For me, this is a component of semantic clarity and disambiguation – call it a sub-factor.