Leveraging Cognitive Diversity to Tackle System Complexity

By: Nick Travaglini

Most engineering leaders today understand that diversity matters. They've built teams that reflect a range of backgrounds, functions, and experience levels. They run postmortems, retrospectives, and architecture reviews that bring multiple voices to the table.
They believe, not unreasonably, that this variety of perspectives leads to better decisions. But there's a problem hiding inside that assumption that can undermine everything: who people are is a surprisingly poor predictor of how they think.
Identity diversity isn't cognitive diversity
When we compose teams or staff an incident review, we almost always use identity as a proxy for perspective. We include someone from platform, someone from the application layer, someone from the team that owns the affected service. We assume that different roles and tenures will produce different mental models of the problem, and sometimes that assumption holds. But research on how people actually build mental models of complex systems suggests it fails more often than we'd expect.
Two engineers with the same title can think about a production failure in fundamentally different ways, with different theories of what drove the failure, and different beliefs and intuitions about which signals matter. Meanwhile, two people with very different roles can turn out to be working from nearly identical mental maps, shaped by the same incidents, on-call rotations, and organizational knowledge about how the system behaves.
This distinction has a name: cognitive diversity. And it turns out to matter far more than identity diversity when your goal is to understand and navigate a complex system.
Why cognitive diversity matters more for software systems
Not every problem requires cognitive diversity. A simple, narrowly scoped task with a known process doesn't demand it. In those cases, a capable engineer with the right context is enough.
Alas, production systems are sociotechnical systems, and they aren't so simple. They're characterized by interconnected components, emergent behaviors, and failures that cascade across boundaries in ways nobody fully anticipates. No single engineer, no matter how senior or experienced, holds a complete picture. The system is a tangled, layered network, too large and too complex for any one mental model to capture adequately.
This is where cognitive diversity stops being a nice-to-have. Research studying how groups build models of complex social and environmental systems has found that groups drawing on genuinely different mental models produce more accurate representations of how those systems actually work. They surface interdependencies that individuals working in isolation miss, and they capture feedback loops that homogeneous groups overlook. Their collective models do a better job of predicting how a system responds to intervention.
AI is making this harder, not easier. As organizations integrate models into their systems as dependencies and as components in pipelines, the surface area of unpredictable behavior grows. Model outputs are opaque in ways that traditional service dependencies aren't. Failure modes are probabilistic rather than deterministic. The mental models engineers have built up over years of working with distributed systems don't map cleanly onto systems that include components nobody fully understands, including the people who built them. The gap between what any one person can hold in their head and what the system is actually doing keeps widening.

The implication for engineering organizations is direct: the quality of your collective picture depends not just on who is in the room, but on whether the people in the room are thinking about the problem differently.
What this looks like in practice
Shifting from identity diversity to cognitive diversity doesn't require scrapping the processes you already have. It requires adding a layer of intentionality to how you compose groups, run reviews, and synthesize what you hear. For incident reviews, start with the Howie Guide and supplement it with the following:
Compose incident reviews differently. Resist the instinct to staff a postmortem according to team membership or proximity to the failure. Ask instead where each person's understanding of the system comes from. An engineer who has watched the system behave badly under load, a support engineer who has fielded the customer impact, and an on-call responder who has developed strong intuitions about which telemetry signals to ignore will each bring a different mental model. Resist the urge to let the org chart dictate that you only invite the first. You're looking for people whose knowledge comes from different vantage points, not just different teams.
Separate before you synthesize. Rather than opening an incident review or architectural discussion with group conversation, ask participants to independently write down their own understanding of what happened first. What do they think was driving the system's behavior? Which signals do they think mattered? What do they predict a proposed fix will do? Document these models before group discussion begins. It makes cognitive differences visible and it prevents the most confident or senior voice in the room from becoming everyone's theory of the system.
Treat divergence as signal, not noise. In most postmortems, synthesis means converging on a shared account of what happened. That's useful. And in complex systems, the points of persistent disagreement are often where the most important information lives. If one engineer believes the failure was driven by a cascading timeout and another believes it was a capacity planning miss, that's not a problem to be resolved by whoever’s voice is loudest. It's a sign that your collective model of the system has real gaps. Build an explicit step into your review process: whose account is least represented in the picture we're building, and what would we need to believe for them to be right?
Use cognitive diversity to stress-test changes before you ship. Once you have a proposed fix or architectural change, take it to people who think about the system differently and ask them independently what they predict will happen. Convergent predictions are encouraging; divergent ones are a sign that your change rests on a particular theory of the system that not everyone shares. That's worth surfacing before you deploy, not after.
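The convergence check above can be made mechanical. Here is a minimal sketch of that step, assuming each engineer's independent prediction is captured as a short free-text outcome; the engineer names and predicted outcomes are hypothetical, and real write-ups would of course be richer than a single label.

```python
from collections import Counter


def assess_predictions(predictions: dict[str, str]) -> dict:
    """Group engineers' independent predictions about a proposed change
    and flag whether they converge on a single expected outcome."""
    counts = Counter(predictions.values())
    majority_outcome, _ = counts.most_common(1)[0]
    return {
        # True only if everyone predicted the same outcome.
        "convergent": len(counts) == 1,
        # How many people predicted each outcome.
        "outcomes": dict(counts),
        # The minority views are the signal worth chasing before deploy.
        "minority_views": [
            engineer
            for engineer, outcome in predictions.items()
            if outcome != majority_outcome
        ],
    }


# Hypothetical example: three engineers independently predict what a
# proposed retry-budget change will do to the system.
result = assess_predictions({
    "ana": "p99 latency drops",
    "ben": "p99 latency drops",
    "chris": "retry storms shift to the downstream cache",
})
```

Here the divergent prediction from `chris` is exactly what the process is designed to surface: it means the change rests on a theory of the system that not everyone shares, and that disagreement should be examined before the deploy rather than after.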
An honest caveat
None of this is a formula. Centering cognitive diversity in a team requires human judgment calls. The goal is productive disagreement, not maximum difference—people who share enough common ground to collaborate but bring sufficiently distinct mental models to expand the group's collective picture of a system that is, by definition, too complex for any one of them to fully understand.
What the research is clear about is the cost of not trying. When engineering organizations forgo building in cognitive diversity, they risk assembling a collective picture of their systems that feels comprehensive but is actually missing critical pieces. In stable, low-complexity environments, that gap is manageable. In the kind of fast-moving, highly interconnected systems that software organizations actually run, it tends to show up in the form of incidents that surprise everyone and fixes that make things worse.
The question worth asking is whether your current processes are designed to find the engineer with a different model of what's happening before the system makes you find them.
P.S. I used Claude in the writing of this blog post.