Feats Instead of Processes

In my last post on Software Capabilities and Feats I said that feats are better [to model in a software capability] than processes, because processes are merely organized, consistent, managed ways to accomplish the feat.

A process is one way to accomplish a feat. The feat is the result you want.

Process is constrained by capabilities. So when I am modeling new capabilities, I should not be constrained by existing capabilities. When I introduce the new capability into the wild, the process will need to “re-form” around the new capability.

A process has steps. If we build software capabilities to support a process, we treat the steps of the process like “feats”. I can model a capability to make that step faster or more effective. That assumes that the step is necessary. In order to decide whether a process step is “necessary” I need to understand how it contributes to the valuable result. I need to understand why I do the step.

Software Capabilities and Feats

In some recent conversations, I find that it is hard to explain the notion of a capability. People want to talk in terms of software features or project requirements.

Software capabilities define value in the following ways:

  • enabling the user to accomplish a “feat” in less time than they otherwise would.
  • enabling the user to accomplish a “feat” at a greater scale than they otherwise would.
  • enabling the user (or a group of users) to accomplish a “feat” more effectively than they otherwise would.
  • enabling the user to accomplish a “feat” that he could otherwise not accomplish at all.
  • enabling the user to accomplish a “feat” better than he otherwise could.
  • enabling the user to focus on “feats” that require decisions, rather than repetitive steps.

Every other benefit of software can be composed from these.

To define value, a software capability must contemplate (model) the “feat” that the user wants to accomplish.

The Slide

Did ya ever work on a project where the schedule seemed too aggressive? Where the team had to constantly fight to stay on schedule, and to keep moving forward? Where things maybe got behind and we piled more resources on to catch up? Where things felt bad, but we kept on fighting until…


…it was too late. Like in baseball, when the only way to get on base is to slide in? Like in football, when you just need that one more yard, and then you turn it over?

The thing is, software development isn’t a sport. We don’t have an opposing team. We don’t have a team owner or coach. Software development is a business activity. It is an activity of manufacturing, of logistics, of research and development and of analysis.

The enemy is risk. In order to defeat risk, we have to understand where it comes from, and we have to understand how to retire it. Risk is tricky because it costs little (or nothing) to carry, much to quantify, even more to retire, and potentially even more when it is realized.

100% SLA Requirements

Sometimes our customers fail to understand the complexity of modern systems.

Our customer expects a system that NEVER fails to complete some critical process. The problem is that when that critical process reaches a certain level of complexity, or has a single point of failure, this is not a feasible requirement.

We all recognize that people are imperfect, and that all man-made products are imperfect. Computer hardware and software are imperfect, whether we built them or bought them. The 100% SLA is not a feasible requirement because it assumes a perfection that cannot be achieved.
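To make the infeasibility concrete, here is a minimal sketch with hypothetical numbers: when a process is a chain of steps that must all succeed, its composite availability is the product of the individual availabilities, and so is lower than any single step’s.

```python
# Sketch: composite availability of a serial process.
# The step availabilities below are hypothetical illustration values.
def serial_availability(availabilities):
    """Availability of a process whose steps must ALL succeed."""
    product = 1.0
    for a in availabilities:
        product *= a
    return product

# Ten steps, each 99.9% available on its own, still fall short of 100%:
steps = [0.999] * 10
print(round(serial_availability(steps), 4))  # ~0.99, i.e. about 99%
```

Even generous per-step reliability never multiplies out to 100%, which is the arithmetic behind the argument above.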

Sometimes failures come because things outside a process change without our knowledge. The faster we detect these events, the faster we can resolve them.

If the customer needs to ensure that his business process completes 100% of
the time, then he should define requirements to “detect and inform” process
failures, to feed a backstop process that corrects them.
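A “detect and inform” requirement might be sketched like this (the run records and status values are hypothetical, not from any particular system): failed runs are detected and fed to a backstop queue for correction, rather than pretending the primary process never fails.

```python
# Sketch of "detect and inform": find process runs that did not complete,
# so a backstop process can correct them. Field names are hypothetical.
def detect_failures(runs):
    """Return the runs that did not complete."""
    return [r for r in runs if r["status"] != "complete"]

runs = [
    {"id": 1, "status": "complete"},
    {"id": 2, "status": "failed"},
    {"id": 3, "status": "complete"},
]

# Feed the backstop process only what needs correcting:
backstop_queue = detect_failures(runs)
print([r["id"] for r in backstop_queue])  # [2]
```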

If these failures are frequent, he should perform root cause analysis to
create new requirements to “harden” the process against future failures of
the same root cause.

This model provides a path to correct and another to improve. These paths lead to maturity and quality; a 100% SLA is a path to disappointment.

Chicken and egg

I continually argue with myself about whether tools or process should come first.

Adding process without tools makes it hard to enforce process rules, implement the process consistently, or measure compliance.

Adding tools without a clear process definition is also futile: people tend to use tools differently, and tools can impose limitations on the process that make later improvements impossible without replacing the tools.

The balm here is to grow your own tools and process in concert. Build just enough tool to bootstrap the process, then evolve both.

This requires the stomach to build tools, and not give up.

Why “Velocity” Is Not As Bad As They Say

Glen Alleman of the Herding Cats blog is one of the most experienced project managers I have ever read.  His background is amazing.  He is bright, and has practical knowledge of projects that I will never have.  However, he is missing the point on agile and velocity.
I recently read his post “Simple Never Is” and have this response to offer.
I have been working on software projects for some 20 years, and while I am not a practitioner of any formal agile methodology, for either project management or software development, I have been exposed to incredibly ad hoc practices in managing software projects.  I agree with Glen when he says that agile spokesmen railing against traditional project management have never been exposed to good traditional software management practices.

Most project managers that I have worked with take a couple of classes, manage a couple of projects, maybe get a PMP cert, and then do the same thing over and over, with varying results, because they personally suck, not because their methodology is fundamentally flawed.  I have been this project manager, and I sucked as well.  I think that most of this is because the organizations I have worked for did not sponsor, fund, or require good project management practices.

Agile, and specifically scrum, is a reaction to the suckage.  It is the project team reacting to the fact that project managers don’t use practices that help.  What scrum and other agilish methods and practices around project management do is help the team remove two specific smells from project management:

  • plan fiction – the two most prevalent fictions I have experienced in project plans are a) plans that don’t reflect how the work is really done, and b) capturing partial task completion as a percentage based on the developer’s opinion.  Neither condition necessitates deliberate deception; both can be the simple result of laziness or communication issues.
  • inconsistent plan maintenance – when, at the outset of a project, there is no establishment of team values and rules around how estimates will be performed, adjusted, and maintained in the plan; how the plan will be elaborated; and what acceptable task effort is.  When there is no enforced measurement frequency, or commitment to reflecting the current work breakdown and task sequence in the plan even as it changes from week to week, the plan itself becomes useless.

The thing that Glen is missing is that velocity is not a project completion measure, but a team productivity measure.  Agile project management is more about team than project.  Thus it makes sense that the most visible measure is team productivity, not project progress.  From a project progress perspective, velocity is exactly what Glen says – predicting the future by evaluating the past.  He calls this driving by the rear view mirror.  Velocity, however, is good for the team.  It tells them whether their current pace is likely to match the published/committed schedule, and when measured frequently enough (like every week), it tells them early enough that they can decide to adjust something to stay on schedule.

Velocity is a factor in the measurement of current projected duration.  Current projected duration is a simple measure – remaining work / average velocity = projected duration.  This measure has a bunch of assumptions:

  1. productivity is the primary predictor of duration – this is a big leap, but I think this is what Glen reacts to, because there are not many large projects where this is true of the project overall.  However, for many software projects during the development milestone, the team is focused on this, and is held accountable for this and this alone.
  2. there is enough parallelism in the plan that sequencing of task execution is not rigid – that is the team can reassign tasks and resequence tasks to mitigate the impact of delays on one task or deliverable.
  3. the plan is updated at least as frequently as it is measured – as the work goes on, newly discovered work is added into the plan, and work that is determined to be smaller, or not necessary, is removed from the plan.  That is, remaining work is adjusted not only by the work completed, but by periodic adjustments to task estimates, and by the addition of tasks when new work or re-work is required.
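The measure itself is simple enough to sketch directly (the numbers below are hypothetical): projected duration in periods is remaining work divided by average velocity, in the same units.

```python
# Minimal sketch of the measure above:
# projected duration = remaining work / average velocity
def projected_duration(remaining_work, velocities):
    """Periods remaining, given recent per-period velocity observations."""
    average_velocity = sum(velocities) / len(velocities)
    return remaining_work / average_velocity

# 120 points of remaining work; the last four weeks completed
# 28, 32, 30, and 30 points respectively:
print(projected_duration(120, [28, 32, 30, 30]))  # 4.0 weeks
```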

I think that Glen’s characterization of velocity as a “rear view mirror” and “yesterday’s weather” metric is largely because of his belief that past performance is not always a good indicator of future results.  While this is true, velocity and current projected duration are useful metrics that can be applied in many ways to measure productivity and schedule.  Glen says that velocity is a level of effort measure, and I disagree.  Here’s why:

Remaining work is always an estimate.  Properly calculated, velocity is the sum of the effort estimates for the tasks completed in a period, and requires that the task estimates not be adjusted when the tasks are completed.  Velocity is meaningless if not used to project duration; therefore velocity must be in the same units as remaining work.
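That calculation can be sketched as follows (task records and field names are hypothetical): velocity for a period sums the original estimates of the tasks completed in that period, never re-adjusted estimates.

```python
# Velocity as described above: the sum of the ORIGINAL effort estimates
# of the tasks completed in a period. Estimates are not adjusted at
# completion time, so velocity stays in the same units as remaining work.
def period_velocity(tasks, period):
    """Sum the estimates of tasks completed in the given period."""
    return sum(t["estimate"] for t in tasks if t["completed_in"] == period)

tasks = [
    {"name": "api", "estimate": 5, "completed_in": 1},
    {"name": "ui",  "estimate": 8, "completed_in": 1},
    {"name": "db",  "estimate": 3, "completed_in": 2},
]
print(period_velocity(tasks, 1))  # 13
```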

If calculated this way it is not, as Glen suggests, a level of effort (LOE) measure, but much closer to earned value (EV), because it approximates the budgeted effort.  The thing about velocity that many project managers struggle with is that it is not precise, and it doesn’t contemplate many aspects of the project timeline, only team productivity.  I believe this is because agile disciplines are mostly focused on the behaviors of the team, on increasing/maximizing team productivity, and as a result, on the delivery of value by the team.

Here are some things that I have done with velocity (in combination with other measures):

  1. Working backward from a published end date and remaining work, assuming resource allocation is constant, we can calculate required periodic velocity – the amount of work that must be completed per period (week?) on average to achieve completion by the end date.  I use this to ensure that we are planning (committing) to complete enough work each period (week) to stay on schedule.
  2. We can capture a personal velocity metric for each team member’s average productivity versus their own estimates.  I have measured velocity vs. commitment to help team members plan more effectively, and to help the team re-allocate work to stay on target.  This is a great help to managers who want to coach developers into higher performance, on both development and estimation.
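The first technique above, working backward from a published end date, can be sketched in a few lines (the numbers are hypothetical):

```python
# Sketch of "required periodic velocity": the work that must be
# completed per period, on average, to finish by the published end date.
def required_weekly_velocity(remaining_work, weeks_remaining):
    """Average points per week needed to hit the end date."""
    return remaining_work / weeks_remaining

# 90 points remaining and 6 weeks to the published end date:
print(required_weekly_velocity(90, 6))  # 15.0 points per week
```

Comparing this number against the team’s measured average velocity shows whether each week’s committed work is enough to stay on schedule.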

In my experience, velocity alone is insufficient for many projects, as it doesn’t really contemplate the complexity of external dependencies, interactions between teams, etc.  When these factors are in play, additional measures can be devised or adopted to surface conditions where the project is at risk.  When my team is largely in control of the project and there are fewer connections and dependencies, I have experienced the benefit of this simple measure (velocity) because it is easy to understand, and difficult to fake.  These two factors allow trust to develop between project manager and team, and a focus on solutions (adjusting the plan).