Agile Benefits

Of course there has been loads of hype about agile in the software development community. Lots of folks have now been certified as Scrum Masters. I have been involved in agile transformation since 2004/5, when I had a proprietary methodology forced on me from above. Said methodology was brought in by a “Management Science” consultant, who incidentally was also selling a software product that would help implement said methodology. This same consultant was also married to a significant client of our firm, so you can do the math.

Like any other practice, people who have a positive experience with agile are looking for opportunities to repeat that positive experience. After I rid myself and our firm of the management science consultant, his methodology, and his product, I took a long hard look at what he was trying to accomplish. I read some of the broader agile literature, looking specifically at the benefits that were being hyped.

Like any other business change program – if you are going to adopt agile practices, you must start by describing the benefits you expect to get. Before you challenge me on calling this business change, let me just smack you down. Software development is a business activity. Whether it is consulting for cash, building a product to sell as a software vendor, or building bespoke systems in-house – software development is a business activity. To adopt a new methodology, life cycle, set of practices, technology stack, etc. is therefore a business change program and should rightly be treated like a business change. That means:

  • you don’t start until you are prepared to invest sufficient resources to make the change stick.
  • you have to be ready to sell executive management on the investment.
  • you have to be able to articulate the benefits you expect to get from the investment.

So what are the benefits that are so overhyped?

1) Productivity – agile practices claim to be “lighter weight” processes, meaning you spend less effort on maintaining the process and more on delivering valuable software.

2) Predictability – agile practices claim to deliver greater predictability, meaning that we will know earlier that our plans are reasonable, and have more time to adjust if they are not.

3) Improved Value Delivery – agile practices claim to deliver greater value to the customer. This claim is based on the agile tendency to deliver software capabilities more frequently, which in and of itself means that the customer starts realizing value from our efforts sooner.

4) Improved Risk Mitigation – agile practices claim to reduce certain types of project risk faster and more effectively than other practices, meaning that after the same amount of time, an agile project has less remaining risk than it otherwise would.

In a following series of posts, I would like to describe each of these benefits in more detail, and how I have seen agile practices deliver them. I think that agile practices certainly can deliver these benefits, but I do not believe there is any guarantee, nor is agile the only way to get these specific benefits.

Vision, Strategy, Policy

Sometimes it seems like we get “hung up” on moving forward with some initiative because we cannot separate vision from strategy, and strategy from policy. I want to posit some definitions of these terms that might help us keep making progress. One of the reasons we conflate these is that they all have goals. In my initiatives, I see many goals; some are complementary, others conflicting. Vision, Strategy, and Policy each have goals, but the goals are different, and understanding the origin or alignment of goals with vision, strategy, or policy can help us stay on the path.

Vision – a description of the end state of some change initiative. The vision should be a picture of what the “world” looks like when the change initiative is successful. The “world” is determined by the scope of the change. From the vision, one should be able to discern the differences from the current state and define clear success criteria for the initiative. The goals deriving from the vision are clearly detailed success criteria, articulated so as to be measurable.

Strategy – a description of the plan to change from the current state to the end state described in the vision. The strategy should be articulated as a series of changes from current state to end state, encompassing the interim states necessary to achieving all success criteria. From the strategy, one should be able to develop an action plan consisting of a series of distinct changes in a defined sequence, providing a path to the end state described in the vision. The goals defined by the strategy are really the success criteria for each distinct change in the action plan.

Policy – a description of rules that we will hold ourselves accountable for following. Policies can be designed to maintain the status quo, or they can be designed to support the end state, or interim states defined in the strategy. Policies define constraints necessary to implement the strategy, and the goals of policies define the desired behavior of individuals and organizations that are impacted by the change initiative.

— I am sure that people will pick these definitions apart, but at the end of the day, if we don’t articulate vision, strategy and policy in ways that make the objectives of a change initiative clear, confusion will swallow the change.

Premature Plan Optimization

In PlanningSequencingElaboration, I shared my realization that sequencing is less a simplification of scheduling than scheduling is an optimization of the plan around a time constraint.

Here are some typical variables and constraints that a project plan must contemplate:

1) Cost – can we complete the necessary work and deliver the expected value at a cost that someone is willing to pay?
2) Time – can we complete the necessary work and deliver the expected value in a timeframe that supports some goal?
3) Resources – are the resources/skills/knowledge we need to do the necessary work available?
4) Risk – what unknowns exist in the definition of value or the elaboration of work?

As project managers, we are trained how to elaborate the work (build a WBS), how to find the critical path, how to optimize the sequence for dependencies, how to assess and mitigate risk, how to estimate cost, but all of those activities assume that we are optimizing for a single point in time: the completion of the project as a whole. It is this assumption that causes us to optimize in a certain way, and it may exclude the most important variable to optimize for (customer desire).

What if your customer asked you to deliver valuable releases of software to them every month? How would any of these activities or processes help? The answer is that they would be of very little use in that model. The most important thing would be the customer’s desired sequence of value delivery: which UnitsOfValue will be delivered in each monthly release. To balance this, we would also want to understand an optimized risk retirement sequence; that is, which of the UnitsOfValue carry unknown risks that might interfere with the delivery of other UnitsOfValue. We might want to negotiate the sequence of value delivery to balance the retirement of risk against the delivery of value. This delivery sequence of the UnitsOfValue is then the initial basis of the plan. We do not have an estimated cost or duration, we have not converted UnitsOfValue to UnitsOfWork, and we have not contemplated resources.

What we have done is construct a simple delivery roadmap, a planned sequence of delivery, and agreed with our customer to deliver value in a rational order, on a regular schedule. From there, we can convert units of value to units of work, estimate, allocate resources, form a team (if we don’t already have one), and build a schedule.
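
To make this concrete, here is a minimal sketch of how such a roadmap might be constructed, assuming we can put rough relative numbers on value and risk. All of the names here (UnitOfValue, build_roadmap, the scoring rule) are hypothetical, invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class UnitOfValue:
    name: str
    value: int  # relative value to the customer (higher = more valuable)
    risk: int   # relative unknown risk (higher = more unknowns to retire)

def build_roadmap(units, per_release):
    """Sequence units so high-value work lands early while high-risk units
    are pulled forward, retiring unknowns before they can block later work."""
    # Hypothetical scoring: favor value, but give risk a forward pull.
    ordered = sorted(units, key=lambda u: u.value + u.risk, reverse=True)
    return [ordered[i:i + per_release] for i in range(0, len(ordered), per_release)]

backlog = [
    UnitOfValue("invoice entry", value=8, risk=2),
    UnitOfValue("tax calculation", value=5, risk=9),  # big unknowns: surface early
    UnitOfValue("report export", value=6, risk=1),
    UnitOfValue("audit trail", value=3, risk=3),
]

for month, release in enumerate(build_roadmap(backlog, per_release=2), start=1):
    print(f"Release {month}: {[u.name for u in release]}")
```

Note what is absent: cost, duration, and resources. The roadmap is purely a negotiated sequence of value delivery and risk retirement.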

The point of all this is: if you assume that delivery happens at a single point in time, you cannot optimize for value; value is assumed to be fixed. Except that we all know that value (aka scope) is not fixed. So if you assume that scope is fixed and you cannot optimize for value, then when you run out of time or money, you may have nothing to show for it, or you may not have completed the most valuable units.

While this is completely logical and intuitive, what we react to is that optimizing the sequence for value often introduces an opportunity to spend more, or take more time to get to done. The logic goes like this: If we deliver the most valuable unit first, some subsequent unit may require us to build differently than the first, and so we introduce some “re-work” with a less valuable unit. Re-work has a cost. It appears to be a waste. Building something that we know or suspect is temporary, or will be replaced appears to be foolish. When thinking about the plan with a view towards optimizing cost and duration, this is an unacceptable compromise, but may be completely rational when optimizing value delivery. Especially if future funding is uncertain (budget cuts are imminent), and we want to get the most value for our spend.

We have become so accustomed to optimizing cost and time that we don’t think about value in the same way. If we deliver value early, our customer can realize the value sooner, and so might be willing to negotiate on price or schedule.

Decision as Attribute

Business process documentation often reflects decisions that are made during the process and that affect subsequent steps. Sometimes the decision can affect the necessity of a step or the outcome of a step. When decisions made during the process affect subsequent steps, the result of each decision becomes a data element or attribute that feeds the decision framework for the affected steps.

By making these decisions during the process, the user is adding information to the process that must be modeled and captured.

Sometimes the decisions are non-discretionary, meaning that the decision is always made the same way. In the manual version of the business process, the human evaluates the input data attributes and makes a decision that reflects the state of the data at the time of the decision. While it appears that the human is expressing discretion over the process, in fact they are acting as an automaton, merely processing the information and reflecting the state.
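
As a sketch, a non-discretionary decision can be modeled as a pure function of the input attributes, with its result captured as an attribute that subsequent steps consume. The domain here (order totals, credit limits, a requires_approval flag) is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class OrderProcess:
    order_total: float
    credit_limit: float
    # Decision results become attributes that later steps read.
    attributes: dict = field(default_factory=dict)

def decide_approval(process: OrderProcess) -> None:
    """Non-discretionary: the same inputs always yield the same decision."""
    process.attributes["requires_approval"] = process.order_total > process.credit_limit

def route_order(process: OrderProcess) -> str:
    # A subsequent step keys its behavior off the captured decision.
    if process.attributes["requires_approval"]:
        return "manual-review queue"
    return "auto-fulfillment"

p = OrderProcess(order_total=12_000.0, credit_limit=10_000.0)
decide_approval(p)
print(route_order(p))  # -> manual-review queue
```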

Other times decisions are discretionary, meaning that the decision involves some “art” on the part of the human executing the process. This “art” or skill may require the ability to evaluate information that is not readily available within an automation context, like customer preferences or long-text instructions. Other times, the art may be evaluation of the physical state of equipment or work product. Sometimes the “art” is knowing other facts about the world (like market conditions, the competitive environment, etc.) and how they impact our execution.

For discretionary decisions, often the “art” or skill is executed differently by different practitioners. In these cases, software requirements often are written to expose or present the information needed by the majority of those expressing the art or skill. Capturing their decisions, and the data that was presented, is often required for future analysis and refinement of the decision-making process.

Again, discretionary or non-discretionary, capturing or rendering the decision as an attribute of the process or metaphor is important in developing the decision framework for subsequent steps and processes. Capturing the information used to make the decision, at the point the decision is made, is important for future process improvement and operator skill evaluation.
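
For the discretionary case, what matters is recording both the decision and the information that was in front of the operator when it was made. A minimal sketch, with an invented pricing-override domain:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    step: str
    decision: str         # the choice the operator made
    presented_data: dict  # snapshot of the information the operator saw
    decided_at: datetime

# Capture at the moment of decision, for later analysis of how the
# "art" is actually practiced across operators.
record = DecisionRecord(
    step="pricing-override",
    decision="approve",
    presented_data={"list_price": 100.0, "requested_price": 82.0,
                    "customer_tier": "gold", "note": "long-term renewal"},
    decided_at=datetime.now(timezone.utc),
)
print(record.decision, record.presented_data["customer_tier"])
```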

Undocumented, Unrecognized Process

Following a Twitter conversation recently, Matthias Weimann posted a quote from Clay Shirky about process:

“Process is an embedded reaction to prior stupidity” – Clay Shirky

and it started. Scott Ambler – one of my favorite tech authors – replied:

There is always a process being followed.
Your process may or may not be documented, or even recognized by the people following it.

and it started me thinking.

A person who is performing an activity is either following a process or he is inventing one. You can’t “follow” a process that has never been invented. Invention may be simply combining elements together from other processes, but it is not following.

A group performing an activity is either following a process or they are collaboratively inventing one. Collaborative invention is messy. It typically involves arguments and disagreements. These collaborators will continue to invent like this until one or more of them get tired of the mess – then they recognize and document elements of the process to reduce the clutter of invention.

A manager looks at the mess of invention; he foresees chaos, unpredictability, disaster. He also foresees his own responsibility for that same chaos and disaster and proclaims, “This cannot continue”. Shirky implies an incompetent manager who waits for the disaster to occur, but competent managers try to impose controls before the disaster occurs.

A leader will join the fray and guide the chaos, gradually reducing the mess, so that we don’t continually reinvent the process.

Impact of Design

Design is the right time to assess the impact of a software change. I want to talk about three aspects of impact that we should consider:

Business Impact: Any material change to the business process that is not specified in the requirements should be considered impact. This could be beneficial, detrimental, or value neutral – it is still change. If each user needs a second monitor on their desk, this is impact. If two steps are removed from the process, this is impact. If users now need access to, and training for, an additional system in order to complete a process, this is impact.

Business impact is important because the changes need to be planned, and paid for. The user community needs to be able to accept or reject this impact. It should not come as a surprise.

Software Impact: Any material change to the software that is not explicitly specified in the requirements is to be considered impact. If the data model changes in ways that require an adjustment to software capabilities so that they continue to work correctly, this is impact. If the design calls for a change to a shared service, such that all other consumers of that service must adjust or verify that they are still operational, this is impact. If the design requires a change to an interface language (e.g., an XSD), then every application that participates in that interface language must change.

Software impact is important because the collateral impact may create additional coding effort that was not originally considered, even beyond the boundaries of the application that is changing. Additionally, it can add scope to regression testing, to prove that changes to central/common code components do no harm to unchanged features.

Environmental Change: Environmental change is rarely if ever specified in requirements; however, design often proposes changes to infrastructure. Storage space or other system resources, new servers, even external connectivity are all typical environmental changes. Occasionally, non-functional requirements can predict environmental change, such as requirements to improve performance. Likewise, requirements to get data from external sources can expose the need to build connectivity.

The importance of this impact is pretty clear. Provisioning these resources is work that needs to appear in the project plan. It also adds complexity to testing and deployment.
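
If it helps to make the three categories concrete, they can be captured in a simple impact register during design review. This is a hypothetical sketch; the type names are invented, though the entries echo the examples above:

```python
from dataclasses import dataclass
from enum import Enum

class ImpactKind(Enum):
    BUSINESS = "business"
    SOFTWARE = "software"
    ENVIRONMENTAL = "environmental"

@dataclass
class Impact:
    kind: ImpactKind
    description: str
    specified_in_requirements: bool  # if False, it must be surfaced for acceptance

design_review = [
    Impact(ImpactKind.BUSINESS, "each user needs a second monitor", False),
    Impact(ImpactKind.SOFTWARE, "shared XSD changes; all consumers must revalidate", False),
    Impact(ImpactKind.ENVIRONMENTAL, "new connectivity to an external data source", False),
]

# Unspecified impacts are the ones the user community must accept or reject.
for i in design_review:
    if not i.specified_in_requirements:
        print(f"[{i.kind.value}] {i.description}")
```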

Fallacy of the PMO

In UnitsOfWork, I talked about how teams who stay formed through a series of projects become better at converting units of value to units of work consistently. While re-reading this post, I realize that a great fallacy of the PMO is the assumption that the project manager is responsible for this process, when in fact it is the analysts and technical resources who make up a software development team. The key point of a PMO is to provide a consistent planning process so that the organization can form, disband, and reform teams to complete projects.

For the past few years, I have run something like a PMO. I managed a pool of analysts and project managers and testers who were assigned to virtual teams. I tried repeatedly to establish repeatable practices around planning and estimation. I established (with my project managers) standard practices around these. The two biggest frustrations of the project managers were that the business stakeholders did not have a consistent practice for reflecting UnitsOfValue in requirements, and that the analysts and architects did not have a consistent practice for converting units of value into UnitsOfWork.

I am not saying that the projects or the project managers weren’t successful – most of them were. Good PMs know how to wrestle a plan to the ground and make it tap out. What I am saying is that the practice of forming and disbanding virtual teams for each project is orthogonal to the development of a consistent, repeatable planning practice, because the project manager is not directly responsible for these two key inputs into the planning process.

The theory of having individuals grow together to refine a set of practices toward consistency when every project has different individuals is the opposite of what a PMO does. What the PMO does is to construct a set of theoreticals, and one or more flavors of project practices to cover different types of projects, and push these practices, top down, into the field. It (typically) does not get feedback from every project to improve its practices, because its focus is on driving consistency at a high level into all projects to improve organizational capabilities around governance, rather than driving consistency, predictability, and repeatability into every team, to improve the execution of every project.

Design Decisions

Software design is all about decisions. What language or platform is best suited to solve the problem? What pattern(s) will we adopt? What components need to be built? What layers are required and what will each layer be responsible for?

In a “good” software design, decisions are made efficiently. That means that we make each decision as few times as possible. Stated differently, we try not to make the same decision over and over for every situation, but rather to generalize our decisions, forming guiding principles for the remainder of our design.

In order to generalize decisions, and to decide efficiently, we have to have some method for “prioritizing” or “sequencing” our decisions. We need a way to reflect the relative impact of a decision (the cost of changing our mind). We also need a way to reflect dependencies between decisions. I like to use two measures to accomplish this:

  • Scope – this is a measure of how much of the application is affected by the decision. The scope might have the following values: Component, Feature, Layer, Subsystem, System, Beyond. The purpose of this is to understand the “potential impact zone” for changing the decision.
  • Time Span – this is a measure of how much work or time it will take to unwind a decision once made. It correlates to cost: the higher the cost of the decision, the more certainty with which I want to make it. Of course these are estimates, but if it takes me 3 months to build something, and I have to build it over because of a bad decision, the time span could be 3 months… it could be more, it could be less. The zone of impact tells me that even things that aren’t directly affected could be affected.
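
Here is a minimal sketch of how these two measures might be used to sequence decisions, widest and costliest first. The scope ranking and the example decisions are invented for illustration:

```python
from dataclasses import dataclass

# Ordered from narrowest to widest potential impact zone.
SCOPE_ORDER = ["component", "feature", "layer", "subsystem", "system", "beyond"]

@dataclass
class Decision:
    question: str
    scope: str        # one of SCOPE_ORDER
    unwind_days: int  # estimated time span to reverse the decision

    def weight(self):
        # Wider scope and longer unwind time push a decision earlier.
        return (SCOPE_ORDER.index(self.scope), self.unwind_days)

decisions = [
    Decision("Which ORM for the persistence layer?", "layer", 20),
    Decision("Message format between services?", "beyond", 90),
    Decision("Date-picker widget for the form?", "component", 1),
]

for d in sorted(decisions, key=Decision.weight, reverse=True):
    print(f"{d.scope:>10} / ~{d.unwind_days}d to unwind: {d.question}")
```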

When I look at my design, one of the things that I do is try to reduce the scope and time span of my decisions through layering and encapsulation. Basically, designing in ways that minimize the scope of each design decision. Why? Because it reduces the risk of bad decisions! All those design principles that we learned (separation of responsibility, etc.) are really isolating design risk in as small an impact zone as possible. For the most part, it reduces the cost of screwing up. Is it the fastest way to produce software? No – in most cases, it increases overall software complexity, in order to simplify and remove risk from the implementation of individual components.

Many design decisions are influenced by the familiarity of the resources tasked with the design. Most people would not design something that they had no familiarity with, unless compelled by requirements. Many design decisions are influenced by the designer’s understanding of non-functional constraints and requirements. Many design decisions are influenced by the availability of tools and resources to construct the software.

If you do not explicitly set guiding principles for a design exercise, these types of influences are hard to mitigate, and can lead to unexpected consequences. So before you “make” decisions, define the decisions, put them in an appropriate sequence, look for themes or principles that emerge, and then promote the principles to the front of the list. Start making decisions.

Behavioral Taxonomy

When developing requirements for a business process in which there are valid process variants, one usually describes the process variants as behaviors. When modeling these variants, it is useful to consider each aspect of the process that has variants, and to isolate unique behaviors based on some decision framework. Both the distinct list of behaviors (the behavioral taxonomy) and the attributes driving the decision framework (the driver mapping) are important to the model.

The list of behaviors is important, because the words that identify each behavior become important words in the language used to describe your business process. When you talk about these behaviors, you all (technicians and subject matter experts) can use the same unambiguous terminology to describe the process.

Sometimes in the business process documentation, the language is about the steps and the business rules that govern each step. At a higher level of abstraction, we benefit from aggregating these business rules into patterns, or behaviors that have some cohesion around a small number of attributes or facts. When we observe these patterns, we can simplify the language around the business process, by naming the distinct patterns as specific behaviors, and identifying them with a business driver as observed in the attributes.

When we isolate and name the patterns as specific behaviors, we also can understand the data elements or attributes and values that drive the decision framework. This mapping of data elements and values to select a behavior is also part of the requirements, as it is important to ensure that all valid values for each attribute are considered in the behavior selection.
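
As a sketch, the behavioral taxonomy can be an explicit enumeration, and the driver mapping a function from attribute values to exactly one behavior. The fulfillment domain and attribute names are invented for illustration:

```python
from enum import Enum

class FulfillmentBehavior(Enum):  # the behavioral taxonomy
    SHIP_FROM_STOCK = "ship from stock"
    DROP_SHIP = "drop ship"
    BACKORDER = "backorder"

def select_behavior(in_stock: bool, vendor_ships_direct: bool) -> FulfillmentBehavior:
    """Driver mapping: every combination of attribute values selects a behavior."""
    if in_stock:
        return FulfillmentBehavior.SHIP_FROM_STOCK
    if vendor_ships_direct:
        return FulfillmentBehavior.DROP_SHIP
    return FulfillmentBehavior.BACKORDER

print(select_behavior(in_stock=False, vendor_ships_direct=True).value)  # drop ship
```

Because the mapping is a total function over the attributes, every valid combination of values selects a behavior, which is exactly the completeness check the requirements need.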

More complex processes may have several different aspects that are governed by distinct behavioral taxonomies, and isolating these taxonomies from each other is important. Sometimes when we try to render a single behavioral taxonomy that governs a process, and find that we cannot easily recognize the behaviors, we actually have a more complex case, where there are nested behavior variants, or non-correlated (independent) behaviors governing several aspects of the process. In these cases, we should isolate the individual lists or taxonomies of behaviors, then review them against each other to determine whether relationships exist between taxonomies. Those relationships can be classified as governing hierarchies (where the available selections in one taxonomy are limited by the selection in another, “governing” taxonomy), or incidental constraining relations (where, as it happens, the selection of a behavior in one taxonomy either requires or invalidates one or more behaviors in other taxonomies, but those constraints are not imposed exclusively in any one direction between the two taxonomies).

Clearly identifying the distinct behaviors of the business process, and the data elements or attributes that can be used to select each behavior, supports a good modeling practice.