In my previous post about progressive elaboration, I promised some examples from different domains of software development. This is the first of them. As I said before, progressive elaboration is about acknowledging that we can’t know everything we need to get to “done” before we start.
One of the difficulties of requirements elicitation is maintaining a consistent level of detail, both in documentation and in our conversations with practitioners, subject matter experts, and individual contributors. Another typical difficulty is knowing whether the requirements are sufficient to begin a subsequent activity like design or coding.
While some methodologies have made recommendations, most merely call out the need. This leaves it up to the requirements analyst to build their own practice. In fact, most analysts do not subscribe to any specific methodology in their practice. I have often used these two difficulties as interview questions for analysts whom I expect to elicit and organize requirements for software applications.
The intelligent ones admit that these are problems to be solved. The few brilliant ones can describe their process for working through specific levels of detail. Most default to a domain-specific answer drawn from their most familiar business domain. To date, none have articulated anything that amounts to a reasonable framework for maintaining consistency or measuring sufficiency. I don’t think that is because they don’t do these things; I have worked with some very gifted and successful analysts. Most likely, they are simply applying years of experience in an informal way that is not repeatable or transferable.
I have a solution that I am comfortable with in my current work:
User Stories
I like the user story pattern from eXtreme Programming (XP), where there are three levels of detail: story, conversation, and acceptance criteria.
If you are familiar with user stories, you will recognize the story portion as follows:
As a <role>, I want <capability>, so that <value>.
or alternatively in the value-first structure:
In order to <value>, as a <role>, I want <capability>.
The difference in the value-first structure is that it supports dramatically easier prioritization: aggregate the list of values apart from the stories, prioritize those values, and then rank the stories in terms of their support of each value.
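To make that concrete, here is a minimal sketch of value-first prioritization. The stories, the field names, and the priority numbers are all hypothetical; the point is only that stories inherit their rank from the value they support.

```python
# Minimal sketch of value-first prioritization; all data and field names are hypothetical.

stories = [
    {"title": "Export monthly statement", "value": "reduce support calls"},
    {"title": "Show overdue balance", "value": "reduce support calls"},
    {"title": "Bulk-import contacts", "value": "speed up onboarding"},
]

# Step 1: aggregate the distinct values apart from the stories and prioritize them.
value_priority = {
    "reduce support calls": 1,  # highest-priority value
    "speed up onboarding": 2,
}

# Step 2: rank the stories by the priority of the value each one supports.
ranked = sorted(stories, key=lambda s: value_priority[s["value"]])

for s in ranked:
    print(f'{value_priority[s["value"]]}: {s["title"]} (in order to {s["value"]})')
```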
Using Taxonomies to Progress Elaboration
This story form is the highest, least detailed level, and it is tremendously variable in scope, so stories can be very large. One of the problems agile teams often face is that a story is larger than they can complete in a single iteration, sprint, or time box.
I think that the complexity or size of the story is discovered during the conversation aspect of requirements elicitation. In this aspect, the requirements analyst engages in detailed conversations about the importance of the story, capturing various visions for implementation and process integration from practitioners. One often uncovers decisions that must be made, data that needs to be presented, constraints that need to be enforced, and business rules or algorithms needed to support the story.
At this point, if the story is complex, it can be broken into smaller stories. The rule for stories is that they have to be somewhat stand-alone: able to be implemented independently of each other, or at least in a particular sequence.
A Taxonomy for Complexity
In my current project, we have co-opted the story form to have three levels of complexity within the story framework (sketched in code after the list):
Epic – a big story that requires breaking down before delivery.
Story – a right-sized idea that may not be completely elaborated.
Detail – an idea about a story that does not stand on its own, but is an essential expression of the done state of that story.
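One way to picture this taxonomy is as a simple containment hierarchy. The sketch below is only an illustration under my own assumptions; the class and field names are invented for the example, not part of any prescribed tooling.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of the three complexity levels as a containment hierarchy.
# Class and field names are invented for this example.

@dataclass
class Detail:
    """An idea that does not stand on its own, but expresses part of a story's done state."""
    description: str
    acceptance_criteria: List[str] = field(default_factory=list)

@dataclass
class Story:
    """A right-sized idea that may not be completely elaborated yet."""
    title: str
    details: List[Detail] = field(default_factory=list)

@dataclass
class Epic:
    """A big story that must be broken down into stories before delivery."""
    title: str
    stories: List[Story] = field(default_factory=list)
```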
A Taxonomy for Implementation Details
A story may not be fully “elaborated” until it has essential details describing the “hows” for the “what”. I use a specific taxonomy, or set of categories, for filling in these details:
1) Invocation – how does the user interact with this element (action, screen, report, etc.)?
2) Governance – how is the invocation or result of this element affected by environment or data (business rules)?
3) Appearance – how does this element appear, or how does it affect the appearance of other elements?
4) Operation – how does this element work (calculations, algorithms, sequence of steps, etc.)?
5) Information – how does data flow into, out of, or get altered by this element?
I use these categories to identify and sequence the knowledge gaps I still have about a story.
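As a rough sketch of how that can work in practice, the five categories can be treated as a checklist that reports which categories still have no captured details for a story. The category names come from the list above; everything else below is hypothetical.

```python
# Sketch: treat the five detail categories as a checklist to surface knowledge gaps.
# The category names come from the taxonomy above; the rest is hypothetical.

DETAIL_CATEGORIES = ["invocation", "governance", "appearance", "operation", "information"]

def knowledge_gaps(story_details: dict) -> list:
    """Return the categories for which the story has no captured detail yet."""
    return [c for c in DETAIL_CATEGORIES if not story_details.get(c)]

# Example: a story whose conversation has only covered invocation and appearance so far.
export_statement = {
    "invocation": ["User picks 'Export' from the account menu"],
    "appearance": ["The export dialog lists the available months"],
}

print(knowledge_gaps(export_statement))
# -> ['governance', 'operation', 'information']: what to ask about next
```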
A Taxonomy for Capabilities
I also have a set of categories that describe the different types of elements (see the sketch after this list):
1) displays – elements that present information to the user
2) actions – elements that cause change when invoked by the user
3) navigations – elements that are a form of action that cause little or no change, other than to display different elements, or make other actions available to the user.
4) inputs – elements that allow the user to express information to the application
5) outputs – elements that allow the user to extract information from the system in a non-interactive form, like reports or interfaces
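To show how this taxonomy can be applied, here is a small sketch that tags a story's elements with a capability type so coverage can be reviewed per type. The enum mirrors the list above; the example elements are hypothetical.

```python
from enum import Enum

# Sketch: tag each element of a story with one capability type so coverage can
# be reviewed per type. The types mirror the list above; the elements are hypothetical.

class ElementType(Enum):
    DISPLAY = "presents information to the user"
    ACTION = "causes change when invoked by the user"
    NAVIGATION = "changes little except what is displayed or available"
    INPUT = "lets the user express information to the application"
    OUTPUT = "lets the user extract information (reports, interfaces)"

elements = [
    ("statement table", ElementType.DISPLAY),
    ("export button", ElementType.ACTION),
    ("month filter", ElementType.INPUT),
    ("CSV download", ElementType.OUTPUT),
]

covered = {element_type for _, element_type in elements}
missing = set(ElementType) - covered
print("No elements of type:", [t.name for t in missing])  # -> ['NAVIGATION']
```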
After the conversation has been captured, reviewed, corrected, and organized, the requirements analyst can construct a set of assertions or truth statements which form the final level of detail: acceptance criteria.
These define the “done” state of the story. The story is “done” when the software capability conforms to each of the acceptance criteria. This is where detailed data validation and domain constraints are housed. Developers and testers should be able to implement these as tests without much modification or re-thinking, based on the selected implementation path.
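As an illustration of what that hand-off might look like, here is a hypothetical acceptance criterion written as a truth statement and then as a test. The criterion, the function name export_statement, and the data are all invented for the example; they are not from any particular project.

```python
# Criterion (truth statement): "The exported statement contains one row per
# transaction in the selected month, and no rows from any other month."
# Everything here is a hypothetical example.

def export_statement(transactions, month):
    # Placeholder standing in for whatever the chosen implementation path provides.
    return [t for t in transactions if t["month"] == month]

def test_export_contains_only_selected_month():
    transactions = [
        {"id": 1, "month": "2024-03"},
        {"id": 2, "month": "2024-03"},
        {"id": 3, "month": "2024-04"},
    ]
    rows = export_statement(transactions, month="2024-03")

    assert len(rows) == 2
    assert all(r["month"] == "2024-03" for r in rows)

test_export_contains_only_selected_month()  # passes silently when the criterion holds
```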
Continuous Elaboration
A story can be said to be fully elaborated when it has all the necessary details defined, and each of the details has acceptance criteria defined. This is two-dimensional: we have the details, and we have a testable description of the done state for each detail.
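That two-dimensional definition can be checked mechanically. The sketch below assumes a story is held as a plain dictionary with hypothetical field names; it only restates the rule that a fully elaborated story has details and that every detail has at least one acceptance criterion.

```python
# Sketch: a story is "fully elaborated" along two dimensions -- it has its details,
# and every detail has a testable done state. Structure and field names are hypothetical.

def fully_elaborated(story: dict) -> bool:
    details = story.get("details", [])
    return bool(details) and all(d.get("acceptance_criteria") for d in details)

story = {
    "title": "Export monthly statement",
    "details": [
        {"description": "The export dialog lists the available months",
         "acceptance_criteria": ["Only months with transactions are listed"]},
        {"description": "The statement is downloadable as CSV",
         "acceptance_criteria": []},  # still missing its done state
    ],
}

print(fully_elaborated(story))  # -> False: one detail has no acceptance criteria
```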
Mutable Stories
But do I do all of that before I start designing and coding? Hell no. You know as much as you know. During delivery, questions, issues, ideas, and information will emerge that cause us to evaluate, re-evaluate, and adjust the stories. A fully elaborated story is not “done” or “immutable”; we fully expect that during delivery, stories will be adjusted in response to this emergence.
Agility
This is what makes us agile – we start the development process before requirements are fully elaborated, and we don’t expect requirements to stop being elaborated until the software is accepted.
Progressive and Continuous Elaboration Break Down When
1) Artifacts of Analysis allow stakeholders to cling to concrete ideas.
Users and practitioners tend to think in concrete terms: screens, reports, command buttons, input fields, and other visual artifacts. They really want to know “how”, while requirements are focused on the “what”. So often, between the first draft of the conversation and the final draft, we must introduce visual designs, mockups, or wireframes. Once we do this, we may have incomplete states as expressed in our stories: we draw the mockups as if all the stories were implemented, but plan to deliver the stories sequentially. The mockups are meant to be throwaway artifacts for completing and validating requirements elicitation, but they tend to grow legs and become more important than they should, forming a visual expectation that gets in the way later.
2) Elaboration and Presentation of Requirements does not contemplate the needs of specific audiences.
Different audiences care about different aspects of the requirements, so we need to construct different views for users, SMEs, practitioners, developers, project managers, etc. There are useful requirements taxonomies that can organize requirements for specific audiences, but it is a complex problem to solve when your audiences are split and fragmented. I find that sometimes I have to group the conversation by the constituency that cares about it, which can cause duplication; this must be stitched back together for the acceptance criteria to be cohesive.
Conclusion
There is no silver bullet; requirements analysis is hard work. This is just one example of how to use user stories to support requirements elaboration, and hopefully it shows one way to build a framework for progressive and continuous elaboration of user stories.
I shared several taxonomies that I use for elaborating stories. I AM NOT suggesting that these taxonomies work for every situation, or that they are complete. I am not even saying they are “good”. They have been useful to me, and I am continually adjusting and adapting.
I AM saying that adopting and adapting taxonomies, or groups of categories and labels for requirements or parts of requirements, is helpful as a framework for assessing the “completeness” or “sufficiency” of requirements, and for ensuring a “consistent” level of detail, especially when there are multiple people participating in the elaboration process.