Normalized Perspectives

As software professionals, our beliefs about what qualifies as “best practices” often depend on our experience and expertise. This is one of the reasons it is so difficult to run the self-organizing software teams frequently described in agile literature. The fact is, teams cannot be self-organizing until the members share mental maps of both the problem and solution domains. Until these maps are truly shared by the team, an external organizing force is required to ensure consistency and solidarity across different aspects of the solution.

I want to share some experience from a recent project where I served as this “external organizing force”: how I observed these varying perspectives come into play, and the impact they had. My hope is that some of you will be able to learn from my experience and be better prepared when you face similar challenges.

The Story

Of course, there are always more choices to make when standing up a new system “from scratch” than when extending an already working system or building on a well-understood framework. Likewise, it is always easier to lead from a position of “been there, done that” technical expertise. In our case, we had a new team composed entirely of contract developers from varying firms and backgrounds, and because of some of the technical requirements and constraints of the project, we were having difficulty hiring lead developers who were truly “been there, done that” within the technical paradigms we were constrained to build against.

We started under-resourced, and because we did not have one person with expertise across all physical and logical layers of our application, we divided leadership by layer – a person for the user interface, a person for the model/business logic, and a person for the service/persistence layer. Initially this seemed to be working, as each person selected had a reasonable amount of expertise in their layer. What we learned later, when trying to integrate, was that each had a different perspective, and those differences caused problems.

The engineer leading services and persistence was an expert in a specific pattern of development, and followed the traditional approach used by most applications built on the selected platform. But our application had some unique requirements, and the traditional approach did not always serve us well.

The developer (he refuses to believe that software is an engineering practice) who was leading the model and business logic wanted to pedantically follow behavior-driven and test-driven development, and believed that following these two practices would ensure that the code in the model was error free. The problem he faced was that none of the existing developers really understood behavior-driven or test-driven development, so development in his layer took 2-3 times as long as it should have, and the results were still not shining examples that sold the methodology. Mind you, as we staffed the team we hired developers who were more aligned with these practices, but it hurt us for a long time.

The engineer leading the user interface layer was a believer in big frameworks and in following the patterns expounded by the frameworks’ authors. So he selected some frameworks and worked up a practice, but the frameworks themselves created work (feed the beast) just to make the application behave “normally,” so while the frameworks gave us some advantage, they also cost us. Moreover, finding developers who were familiar with (or happy with) these frameworks was difficult, so every new developer had to climb the framework learning curve.

What made matters worst was that each of these developers tended to disrespect the opinions of the others, because they did not come from a common perspective. So when it came time to integrate, there was plenty of finger pointing, name calling, and blame throwing, and everyone thought their crap didn’t stink and the smells were emanating from someone else’s code pot.

The team started with 6 developers, including these leads, and because we had planned for about 10, we continued to add resources. Along the way we also lost some: two months in we were at 7, and two months after that we were at 10. By then we had lost roughly 13 man-months, so to “catch up” we agreed to extend the team to 14 people, which took 2 more months. We had also planned for an additional reporting team of 3 developers to start, so 5 months into the project we were at 18 people on the team.

As each person was on-boarded, they gravitated towards one layer/perspective or another, and we had to continuously socialize both the problem domain and the solution domain so that new resources would not veer too far off course, taking the team with them. Because much of the same variance in perspective remained, we found there was still much to do to get people settled in and ready to move forward.

When the reporting team started, it felt like a breath of fresh air. They all came from a single firm, they had agreed on practices, and since their integration was primarily with the model layer, there was little initial friction. It felt like things were going to be OK and that they were not going to add to the mess. This optimism lasted until our first attempt at integrating their work into the application: poor communication between the reporting team and the other developers resulted in numerous integration challenges that were completely unexpected.

While this case study is interesting, I fear it represents a fairly common experience in application development. This is not the first time I have faced similar challenges, and while I was much better prepared this time, I also seem to have learned more about the challenges this time through.


The challenge really reflects the problems of standing up a new platform while building a new team. In fact, the challenges I want to expose are really about building shared mental models. The process of building an effective application team is a problem of normalizing perspectives. I daresay that we are comfortable with the idea of normalizing data, but we rarely think about normalizing the way individuals think about problems or solutions. In building teams for application development, it is very important that the team maintain some minimal set of agreed-upon principles, so that we don’t have to make the same decisions over and over. It is also important that the software be implemented in a consistent fashion, so that it is reasonably easy to move from one part to another without feeling like you are moving between countries.

The new platform contributes to the challenge simply because of the number of decisions that have to be made and socialized within the team. An existing platform would already have some core assumptions and practices established that we could use as gauges to hire against: we need people with experience in, and a generally positive opinion about, some set of technologies and practices – or who would be excited to gain that experience and learn those practices. Without those established guides, it is hard to know whether you are hiring people who will ultimately normalize successfully.

Here are some things that I have learned:

1) Two (or more) heads are not better than one – a single technical lead or architect is essential for making platform strategy decisions. Distributing this responsibility across two or more people is a recipe for failure. While that person may delegate or collaborate, someone needs to be responsible, and at minimum to act as tie breaker, referee, and coach for the rest.

2) Technical leaders must be pragmatic – tendencies toward academic or dogmatic approaches will create problems with developers who are not already inclined towards those approaches. Pedantically stating that one particular method, pattern, or approach is the “right” one, without explaining how it solves the problem at hand elegantly and efficiently, is likely to get a leader disrespected. If we are asking developers to learn new methods, frameworks, approaches, paradigms, or practices, we have to answer two questions: 1) why is it worth the effort to learn (personally)? 2) how will it help me deliver this project?

3) A stable team structure is better – changing the makeup of the team over time always causes problems. Decisions have to be revisited as every new expert shows up asking the same old questions. New developers have to learn new methods, practices, etc., adding to their on-boarding time. Large teams form smaller cliques of like-minded individuals, which dissolve over time but re-emerge as new adherents show up.

4) Evaluate individuals’ starting perspectives – while having many perspectives can be a benefit (offering alternative approaches), it can also be a drawback (defaulting to different familiar patterns). As the team forms, or re-forms, the essential elements of each contributor’s perspective must be understood and normalized – to avoid conflict with team members and with the code design strategy.

Example 1: Our reporting resources came with a Business Intelligence perspective. The assumptions informing their perspective were:

1) the state of the data is beyond my control.
2) the content and semantics of the data are established before I “get there.”

Their perspective allows them to infer semantics rather than specifying them. This practice is anathema to a behavior-driven application, where we expect that semantics are established specifically to support outputs and behaviors.
Example 2: Our model expert is heavily invested in TDD, while the rest of the team is not. Half the team did not know how to implement a proper unit or integration test. His belief that modeling can be done without more formal design – through writing tests that prove requirements and check acceptance criteria – is at odds with the way most of us think. So model design tends to be more informal, while this expert can’t fathom why others don’t understand what the model is doing, how it works, or why it needs to be that way.
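For those who, like half our team, have never seen the practice in action, here is a minimal sketch of the test-first style our model lead advocated. The Order/discount domain and all names are hypothetical, chosen only to illustrate turning an acceptance criterion into an executable test that the model code is then written to satisfy:

```python
# A hypothetical acceptance criterion, expressed as a test: an order's
# total is the sum of its line items with a percentage discount applied.
# In strict TDD this test is written first and fails until the minimal
# model code below exists.

class Order:
    """Minimal model written to satisfy the test below."""

    def __init__(self, discount_pct=0.0):
        self.items = []                     # list of (unit_price, quantity)
        self.discount_pct = discount_pct

    def add_item(self, unit_price, quantity=1):
        self.items.append((unit_price, quantity))

    def total(self):
        subtotal = sum(p * q for p, q in self.items)
        return subtotal * (1 - self.discount_pct / 100)


def test_discount_applies_to_subtotal():
    # Acceptance criterion: 10% off a 200.00 subtotal yields 180.00.
    order = Order(discount_pct=10)
    order.add_item(unit_price=50.0, quantity=4)
    assert abs(order.total() - 180.0) < 1e-9


test_discount_applies_to_subtotal()  # passes silently once the model is correct
```

The point of the style is that the test encodes the requirement before any design discussion happens – which is exactly what felt alien to teammates who expected a more formal model design up front.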
Example 3: An expert in user interface development has worked at many places where large, heavy frameworks are in use, and is convinced that selecting and integrating the right frameworks, 3rd-party user control libraries, etc. will make user interface development much, much easier. He did not take the team’s familiarity with various products as an important selection criterion, so the adoption of his recommendation saddled the team with a significant learning curve. What wasn’t clear when he made the recommendation was that he shared that learning curve himself, and so was not adequately prepared to help the team climb it.

5) Understand how deeply these perspectives are baked into each individual – how hard is it going to be for them to adopt an appropriate perspective for this project/team? Some individuals hold their principles very loosely and can easily adapt to the current situation. Others’ perspectives simply cannot be assailed. If their beliefs are too divergent for safe collaboration, they are not a good candidate for the team.

Here is what I would do differently:

1) Not start building software until I had the right person in the tech lead/architect role.

2) Establish a preliminary set of team perspectives based on design principles, development practices, technology paradigms, and team accountability before hiring the rest of the team.

3) Hire the team based on their experience, desire to learn, expertise, and opinions relative to the preliminary team perspectives.

4) Hire 20% more than I thought I needed, so that those that did not work well in the team could be released without harm to the delivery schedule, or team structure, including hiring a backup architect.

5) Having established these perspectives, make sure every resource on-boarded into the team is thoroughly exposed to them, and as they move from preliminary to permanent, document them and hold everyone accountable for adherence.
