Coherence Penalty for Humans


Let's try an analogy. If the "job" is a project rather than a computational task, then we can look at the number of people on the project as the "processors" doing the work.

In that case, the serial fraction would be whatever portion of the work can only be done one step after another. That may be fodder for a future post, but it's not what I'm interested in today.

There seems to be a direct analog for the incoherence penalty. Whatever time the team members spend re-establishing a common view of the universe is the incoherence penalty.
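As a reminder of where that term comes from: the Universal Scalability Law models relative throughput for N processors as N / (1 + α(N−1) + βN(N−1)), where α is the serial (contention) fraction and the β term is the coherence penalty, paid once for every pair that has to re-synchronize. Here is a minimal sketch of that formula (the function and parameter names are mine, purely for illustration):

```typescript
// Universal Scalability Law: relative throughput with n workers.
// alpha = contention (the serial fraction); beta = coherence penalty,
// which grows with the number of pairs that must stay in sync.
function uslThroughput(n: number, alpha: number, beta: number): number {
  return n / (1 + alpha * (n - 1) + beta * n * (n - 1));
}
```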

For a half-dozen people in a single room, that penalty might be really small. Just a whiteboard session once a week or so.

For a large team spread across multiple time zones, it could be large and formal: documents and walkthroughs, presentations to the team, and so on.

In some architectures, coherence matters less. Imagine a team with members on three continents, where each member works on a single service that consumes data in one well-specified format and produces data in another well-specified format. They don't need coherence about changes inside each other's services, but they do need it for any change to the formats.

Sometimes tools and languages can change the incoherence penalty. One of the arguments for static typing is that it helps communicate across the team. In essence, types in code are the mechanism for broadcasting changes in the model of the world. In a dynamically typed language, we'd either need secondary artifacts (unit tests or chat messages) or we'd need to create boundaries where subteams only rarely needed to re-cohere with other subteams.
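As a toy example of that broadcasting effect (the types and names here are invented for illustration): when one subteam changes a shared type, the compiler points at every piece of code that still holds the old model of the world.

```typescript
// Shared model owned by one subteam. Suppose they split `name` into two fields.
interface Customer {
  firstName: string; // previously a single field: name: string
  lastName: string;
  email: string;
}

// Code owned by another subteam. The moment the shared type changes, this
// function stops compiling until it is updated; the change is broadcast by
// the type checker rather than by a meeting or a chat message.
function greeting(c: Customer): string {
  // return `Hello, ${c.name}`;  // no longer compiles: 'name' is gone
  return `Hello, ${c.firstName} ${c.lastName}`;
}
```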

All of these are techniques for reducing the incoherence penalty. Let's recall that overscaling causes reduced throughput. So if you have a high coherence penalty and too many people, the team as a whole moves slower. I've certainly experienced teams where it felt like we could cut half the people and move twice as fast. The USL and the incoherence penalty now help me understand why that was true: it's not just about getting rid of deadwood. It's about reducing the overhead of sharing mental models.
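To put rough numbers on that feeling, here is the same sketch evaluated with made-up parameters: a project where 10% of the work is strictly serial and every pair of people pays a 5% coherence tax. The values are purely illustrative, not measurements.

```typescript
// Relative throughput under the USL with illustrative parameters.
const usl = (n: number, alpha = 0.1, beta = 0.05): number =>
  n / (1 + alpha * (n - 1) + beta * n * (n - 1));

for (const n of [2, 4, 8, 10, 20]) {
  console.log(`${n} people -> relative throughput ${usl(n).toFixed(2)}`);
}
// 2 people  -> 1.67
// 4 people  -> 2.11  (near the peak for these parameters)
// 8 people  -> 1.78
// 10 people -> 1.56
// 20 people -> 0.91
// Past the peak, every extra person makes the whole team slower; cutting
// this hypothetical team from 20 people to 10 would speed it up by about 1.7x.
```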

In The Fear Cycle I alluded to codebases where people knew large-scale changes were needed, but were afraid of inadvertent harm. This would imply a team that was overscaled and never achieved coherence. Once coherence is lost, it seems to be really hard to re-establish. That means ignoring the incoherence penalty is not an option.