In this post Glen Alleman rebuts the commonly repeated platitude that:
“There is no way to prove that the requirements we have are complete and correct until we have working code.”
Glen is a very smart guy, and from the perspective of the domains in which he most frequently works and thinks, his view that this claim is wrong is essentially on target.
In those domains, requirements are imposed by the overall design of the program (not the software) with tremendous specificity:
The emergency shutdown system shall stop the turbine driving the compressor in 1.3 seconds or less to prevent overspeed of the RB-211
But in the domain of service business operations (investment management), a requirement for the program might look much less specific:
The investment platform shall allow portfolio managers to rebalance open architecture portfolios without requiring any non-trade transaction instructions implemented outside the investment platform. This operation shall take no more than 2 minutes from initiation to completion for portfolios with up to 500 positions.
These two requirements sound similarly specific, but they are not. What is missing in the service business case is how many steps in the process require human interaction. Once human interaction enters the picture, human variance and experience matter.
For any requirement that must account for the information flow and mental process of a professional decision maker (or several of them), it is the confidence and articulateness of the decision makers themselves that make the requirements work. In my experience, such people tend to mention only the simplest, most annoying, or most arcane cases when confronted with a requirements elicitor.
It is then up to the requirements elicitor, through the elaboration process, to cover the entire scope of the requirements, including all edge cases and variants in flow or process necessary to achieve the requirement as stated.
So here is my extension of the thing that Glen is railing against.
When building software to support a business process that either:
- requires the software in order to be performed effectively (the business is not currently doing the process because it needs the software first),
- is not deterministically known by those providing domain expertise (you have the wrong people informing the requirements), or
- is informed by domain experts who are not themselves practitioners (end users are not involved in requirements elicitation),
then there is no way to prove that the requirements we have are complete and correct until we have working code in the hands of practitioners.
To Glen’s point, the first case is a chicken-and-egg problem and may be a genuine challenge. The other two are organizational problems that management with the appropriate will and desire can overcome.
From the developer’s perspective, these are problems for the requirements elicitor, not the developer.
Having hired a large number of requirements analysts, I have found very few, either in interviews or in practice, with any developed practice for evaluating the “sufficiency” of requirements. I wrote some posts about that a couple of years ago, if that is of interest.
For you agilists: requirements sufficiency is not a matter of the template (use case, user story, BRD, FRD, PRD) but of the talent of the elicitor in organizing the requirements in a way that allows such evaluation. When implementing by feature (more agilely?), the sufficiency analysis is simply performed on a smaller scope of requirements (the feature) rather than a larger one (the release, the product, and so on).