Jason Yip posted recently about scheduling bug fixes. I liked what he had to say. He is a very thoughtful person. I wanted to extend his thoughts with my own…
1) Fixing bugs is unpredictable – you never know how many you will have or how long they will take to fix.
Defects in delivered code fall into several categories:
- non-working new capabilities
- usability issues
- collateral damage to surrounding code
If your goal is to have a quality product, there is no way to prioritize these against each other. They all need to be addressed. So this taxonomy, and others like it, usually does not give you reasonable leverage for prioritizing defects.
2) Usability issues can be prioritized based on time, risk, and frequency
Usability issues may be deferred:
- if the usability issue does not add unacceptable business risk.
- if the usability issue does not add an unreasonable amount of user time to a business process.
- if the usability issue does not affect a frequently used feature or path that would generate an unacceptable support call volume.
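The three deferral tests above amount to a simple predicate. Here is a minimal sketch of that rule; the field names and thresholds are hypothetical, invented for illustration, and would come from your own defect-tracking data:

```python
# Sketch of the three deferral tests for usability issues.
# Field names and thresholds are hypothetical -- adapt to your tracker.

def may_defer(issue):
    """Return True only if the usability issue passes all three tests."""
    acceptable_risk = issue["business_risk"] <= 2            # e.g. on a 1-5 scale
    acceptable_delay = issue["added_minutes_per_use"] <= 1   # extra user time per use
    low_call_volume = not (issue["on_frequent_path"]
                           and issue["expected_support_calls"] > 10)
    return acceptable_risk and acceptable_delay and low_call_volume

# A cosmetic issue on a rarely used screen can wait...
print(may_defer({"business_risk": 1, "added_minutes_per_use": 0,
                 "on_frequent_path": False, "expected_support_calls": 0}))   # True
# ...but one that slows a frequent path and drives support calls cannot.
print(may_defer({"business_risk": 1, "added_minutes_per_use": 3,
                 "on_frequent_path": True, "expected_support_calls": 50}))   # False
```

The point of writing it down this way is that a defer decision is only valid when *all three* tests pass; failing any one of them should pull the issue back into the fix queue.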
3) Defects can be caused by poor design or a less-than-thoughtful implementation
Sometimes, the problem is that the new capability is not adequately supported by the current design. The developer ended up playing whack-a-mole with the issues because he couldn’t quite wrap enough duct tape or baling wire around it to get it to stay working. Sometimes the prior release’s duct tape needed to be re-wrapped.
Other times, the problem is that the developer simply didn’t think through the implications of his design pattern, or didn’t understand the usage scenarios deeply enough, so his design was simply inadequate.
Still other times, the implementation was just plain sloppy: it passes the happy-path scenario but fails most alternate or edge cases.
These defects all mean that the fix may take as long as the original implementation of the capability. They can result in a story being backed out of a sprint if they are found late.
4) Sometimes the source of the defect report is an important determinant of priority
Defect reports from key users or evangelists tend to get prioritized above those from testers or non-savvy users. Defects reported by users known to have the ear of management also get prioritized, as a noise-limiting mechanism.
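That source-based triage can be made explicit as an ordering over reporter types. The weights below are hypothetical, made up to mirror the ranking just described, not a recommended scheme:

```python
# Hypothetical reporter weights mirroring the triage described above:
# key users and evangelists first, management-adjacent reporters next,
# then testers and other users. Adjust to your own organization.
SOURCE_PRIORITY = {
    "key_user": 0,
    "evangelist": 0,
    "ear_of_management": 1,
    "tester": 2,
    "user": 3,
}

def triage(reports):
    """Order defect reports by the priority of their source."""
    return sorted(reports, key=lambda r: SOURCE_PRIORITY.get(r["source"], 99))

backlog = [
    {"id": 101, "source": "tester"},
    {"id": 102, "source": "key_user"},
    {"id": 103, "source": "user"},
    {"id": 104, "source": "ear_of_management"},
]
print([r["id"] for r in triage(backlog)])  # [102, 104, 101, 103]
```

Because Python’s sort is stable, reports from the same kind of source stay in the order they arrived, which keeps the scheme predictable.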
5) Don’t assume all reported bugs represent defects. Sometimes testers and users report how they “thought” things would work or should work.
Many times a validation step is necessary to ensure that reported bugs are really defects. Sometimes reports are simply enhancement requests masquerading as defects. Other times the tester mistook a working feature for a defect because of how they had envisioned it working, or because they assumed (untrue) things about the solution. Often these kinds of defect reports are an indication of counter-intuitive design; other times they are just wishful thinking. We need to be very careful about working on these defects, as they can add uncommitted scope to a sprint and can add risk to the committed scope via collateral damage.
6) When you have enough time and resources, prioritization is not necessary.
The closer to the completion of a feature the testing (and bug reporting) is executed, the less prioritization is necessary. When testing is done the “day after” the feature is complete, we often have the luxury of working on defects in sequential order. When testing is done days or weeks after the feature is complete, we can end up with more defects than we can fix and a larger completed codebase to re-engineer if the defect is related to a design issue. The opportunities for thoughtful re-design are often long past, and we end up wrapping duct tape and baling wire around the feature to fix the defect. This is one way we accrue technical debt. Perhaps the sequence of our testing and user feedback activities relative to our development activities is as important to manage as the sequence of our defect remediation, or more so.