System Replacement Assumptions

I am in the middle of my umpteenth system replacement project.  Certain assumptions are endemic to the user community in every system replacement project.  They are born of hope and frustration, and they are almost universal.

1) The new system will do everything the old one does, only better.
2) The new system will support all of my existing processes and procedures without significant change.
3) The new system will be faster than the old system.
4) The new system will have better data quality than the old system.
5) The new system will address ALL of the shortcomings of the old system.

If you have ever done one of these projects, you know.  These are assumptions that you must actively work against.  They require a constant stream of communication to dispel. I offer you my rationale for why they are never, ever true.

Connecting the Boxes

2013 was the year that I finally decided to replace my ancient two-channel sound system with a 5.1-channel home-theater-capable system. My old setup was very simple: a pair of ADS speakers that I had bought when I was in college (1981) and a Nakamichi receiver that I had bought shortly after I was married (1989 or so). I had other components, but most had been replaced with a cheap Blu-ray player that I “won” at a silent auction a few years back that played most of my CDs. I still have a B&O turntable that is serviceable, but I rarely play my vinyl these days.

Planning Bug Fixes

Jason Yip posted recently about scheduling bug fixes. I liked what he had to say. He is a very thoughtful person. I wanted to extend his thoughts with my own…

1) Fixing bugs is unpredictable – you never know how many you will have or how long they will take to fix.

Defects in delivered code fall into categories:


  • non-working new capabilities
  • usability issues
  • collateral damage to surrounding code

If your goal is to have a quality product, there is no way to prioritize one of these categories over another. They all need to be addressed. So this taxonomy, and similar taxonomies, usually does not give you reasonable leverage for prioritizing defects.

2) Usability issues can be prioritized based on time, risk and frequency

Usability issues may be deferred:

  • if the usability issue does not add unacceptable business risk.
  • if the usability issue does not add an unreasonable amount of user time to a business process.
  • if the usability issue does not affect a frequently used feature or path that would generate an unacceptable support call volume.
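The three deferral tests above can be sketched as a small rubric. This is a minimal, hypothetical illustration: the field names, risk levels, and thresholds are my own assumptions, not values from any real defect tracker.

```python
# Hypothetical deferral rubric for usability issues.
# Thresholds and field names are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class UsabilityIssue:
    business_risk: str    # assumed scale: "low", "medium", "high"
    added_minutes: float  # extra user time added to the business process
    uses_per_day: int     # how often the affected feature/path is exercised

def may_defer(issue: UsabilityIssue,
              max_added_minutes: float = 2.0,   # assumed tolerance
              max_uses_per_day: int = 50) -> bool:  # assumed call-volume proxy
    """An issue may be deferred only if it fails none of the three tests."""
    if issue.business_risk == "high":
        return False  # unacceptable business risk
    if issue.added_minutes > max_added_minutes:
        return False  # unreasonable added user time
    if issue.uses_per_day > max_uses_per_day:
        return False  # frequently used path; likely support-call volume
    return True
```

A low-risk issue on a rarely used path (`may_defer(UsabilityIssue("low", 0.5, 10))`) passes all three tests and may be deferred; failing any one test keeps it on the fix list.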

3) Defects can be caused by poor code design, or less thoughtful implementation

Sometimes, the problem is that the new capability is not adequately supported by the current design. The developer ended up playing whack-a-mole with the issues because he couldn’t quite wrap enough duct tape or baling wire around it to get it to stay working. Sometimes the prior release’s duct tape needed to be re-wrapped.

Other times, the problem is that the developer simply didn’t think through the implications of his design pattern, or didn’t understand the usage scenarios deeply enough, so his design was simply inadequate.

Still other times, the implementation was just plain sloppy, so it passes the happy path scenario, but fails most alternates or edge cases.

These defects all mean that the fix may take as long as the original implementation of the capability. They can result in a story getting backed out of a sprint if they are found late.

4) Sometimes the source of the defect report is an important determinant of priority.

Defect reports from key users or evangelists tend to get prioritized above those of testers or non-savvy users. Defects reported by users who are known to have the ear of management also get prioritized, as a noise-limiting mechanism.

5) Not every reported bug represents a defect. Sometimes testers and users report how they “thought” things would work or should work.

Many times a validation is necessary to ensure that all reported bugs are really defects. Sometimes reports are simply enhancement requests masquerading as defects. Other times the tester simply mistook a working feature for a defect because of how they had envisioned it working, or because they assumed (untrue) things about the solution. Often these types of defect reports are an indication of counter-intuitive design, other times they are just wishful thinking. We need to be very careful about working on these defects as they can add uncommitted scope to a sprint, and can add risk to the committed scope via collateral damage.

6) When you have enough time and resources, prioritization is not necessary.

The closer to the completion of a feature the testing (and bug reporting) is executed, the less prioritization is necessary. When testing is done the “day after” the feature is complete, we often have the luxury of working on defects in sequential order. When testing is done days or weeks after the feature is complete, we can end up with more defects than we can fix and a larger completed codebase to re-engineer if the defect is related to a design issue. The opportunities for thoughtful re-design are often long past, and we end up wrapping duct tape and baling wire around the feature to fix the defect. This is one way we accrue technical debt. Perhaps the sequence of our testing and user feedback activities relative to our development activities is as important to manage as the sequence of our defect remediation, or more so.

Design Assumptions

Pragmatism – design assumptions are about pragmatism. Pragmatism is the easiest path to good enough. If I already have tools available that will allow me to do the job "good enough", why would I go buy and learn new tools? If I already have a team that supports 5 applications written in Java, why would I build a new application in C# or Ruby? If I have an application written in C# using HTML/AJAX/jQuery/CSS for the user experience, why would I build my next screen using Silverlight? Why would I go out of my way to add cost, time, and risk to a software project? Simple answer: I wouldn't.

Design assumptions are about documenting the "status quo". They are typically written before I have thoroughly understood requirements; in fact, they are not about the requirements. They are about everything but the requirements. They are statements that require a compelling reason to change. That reason should take the form: I can't meet the requirements unless I undo this assumption.

Design assumptions can also be used to document constraints that come from the organization or the project. For example:

  • The normal time frame for establishing a new test environment is 30 business days.
  • The desired time frame for delivery of these capabilities to production is 3 months.
  • The project budget is $120,000.
  • Only approved open source software can be incorporated into the application build.

These assumptions and constraints have the following properties:

  • Known apart from the software requirements.
  • Express inertia, obstacles, or guardrails that the project must contend with.
  • Cannot arbitrarily be removed, changed, or overturned.
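One way to keep such assumptions visible from release to release is to record them as structured data alongside the design documentation. A minimal sketch follows; the field names and the example entries (drawn from the constraints mentioned above) are my own illustrative assumptions, not a prescribed format.

```python
# Minimal sketch of a design-assumption record.
# Field names and entries are illustrative assumptions only.
from dataclasses import dataclass

@dataclass(frozen=True)
class DesignAssumption:
    statement: str      # the assumption or constraint itself
    source: str         # where it comes from: "status quo", "organization", "project"
    changeable_by: str  # who has the power to overturn it, if anyone

ASSUMPTIONS = [
    DesignAssumption("New test environments take 30 business days",
                     "organization", "infrastructure management"),
    DesignAssumption("Only approved open source may be used in builds",
                     "organization", "architecture review board"),
    DesignAssumption("Project budget is $120,000",
                     "project", "project sponsor"),
]

def by_source(source: str) -> list[DesignAssumption]:
    """List the assumptions a given party might have the power to change."""
    return [a for a in ASSUMPTIONS if a.source == source]
```

Grouping by source makes it easy to show a customer or sponsor exactly which constraints are theirs to change, which is one of the communication goals described below.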

Documenting design assumptions is important, because they communicate to the team (especially when bringing new team members on board) a starting point for design. We are doing it this way, unless there is a compelling reason to change. We are constrained by these factors that can only be changed for good reason. They also communicate to your customer (who may be reviewing your design) some things that they have the power to change if it makes a difference.

When we make design decisions to comply with time or budget constraints, it may mean a reduction in quality or sustainability. When organizational constraints make it cost more, or take longer to produce the same solution, we can decide whether the constraints are reasonable.

Finally, in a project-oriented culture, assumptions are a convenient way to "roll" design decisions forward release by release. If your organization maintains documentation by project, rather than by software product, it can be easy for core design decisions to get lost after a few releases, especially if the development team has substantial turnover. After a short period of time, nobody remembers how or why we decided to organize things, and new features can be built contrary to the original design principles that were followed, only because the people who built them didn't know there were principles (or couldn't see them clearly in the code).


Sometimes it’s really beneficial to get a team all on the same page
before you start something. Starting a design initiative for a
software project is a good time to do this. One good way to get
consensus, or at least to determine how far apart developers are
ideologically, is to document the assumptions that will guide your
design.

It is not really important how you document them; the purpose is
to get the team to articulate, discuss, and ultimately agree on
the values, methods, approaches, and ideas that will guide their
design (and ultimately their build process).

Laying out your assumptions before you try to make decisions can be
a great way to shortcut some of the inevitable disagreements caused by
unexposed differences in the religious beliefs held by software
developers. Software theology and design jihad go hand in hand.
Assimilating new members into a team requires understanding their
religion, with a view to preventing heresy from creeping into your
designs, and ultimately your code base. Having to declare a jihad
against all things that violate layer isolation, or the normalization
of the conceptual domain model, during the design review is a bad idea.
Better to get the differences out on the table early.

It also makes your documentation easier to read, because the
assumptions need much less justification than the decisions you are
documenting, and frankly they get assimilated by new team members
faster. Most of the WTFs in the design review meeting are really
poorly documented assumptions that one or another team member trips
over.

Most likely, though, the exercise will trip over semantics, and
when you get down to the core values of the team, and what is really
important, good people agree more than they disagree. They will
feel good about the things they agree on, and they will have a sense
of starting well, rather than a subtle dread of the design review,
expecting the “Here we go again…” approach.