You can't see what you can't see
In the previous lesson, you learned to distinguish fact from story — to separate what actually happened from the narrative you layered on top. That skill lets you observe more clearly from where you stand. But there's a harder problem: no matter how clearly you observe, you are still observing from a single position. And every position has structural blind spots that no amount of clarity can fix.
This isn't a character flaw. It's geometry. A security engineer reading code sees attack surfaces. A frontend engineer reading the same code sees data-shape problems. A support engineer sees the error messages end users will actually encounter. None of them are wrong, and none of them are seeing the complete picture. The information that reaches you is shaped by where you stand — your training, your role, your incentives, your history. What you can't see isn't hidden. It's just not visible from your angle.
The implication is uncomfortable: your most confident observations are also the ones most likely to contain blind spots. Confidence comes from the coherence of your current perspective, and coherence is precisely what makes gaps invisible. You don't experience a blind spot as a gap. You experience it as completeness.
The Johari Window: mapping what you don't know about yourself
In 1955, psychologists Joseph Luft and Harrington Ingham created a model that makes this problem concrete. The Johari Window divides self-knowledge into four quadrants based on two axes: what is known to you versus not known to you, and what is known to others versus not known to others.
The four quadrants are:
- Open (Arena): Known to you and known to others. This is shared, public knowledge — the things you and everyone around you can see about your work, your reasoning, your behavior.
- Hidden: Known to you but not to others. These are your private thoughts, your withheld reasoning, the context you carry but haven't shared.
- Unknown: Not known to you or to others. These are the deep unknowns — assumptions neither you nor anyone around you has surfaced yet.
- Blind Spot: Known to others but not to you. This is the quadrant that matters most for this lesson.
The blind spot quadrant contains everything that other people can see about your work, your reasoning, and your behavior that you genuinely cannot see yourself. Not because you're hiding from it. Because your position makes it structurally invisible.
The only way to shrink the blind spot quadrant is to seek feedback from people who occupy different positions than you do. Luft and Ingham's core insight was that self-awareness is not a solo project — it requires input from others because certain information about you is only accessible from the outside. Every perspective you actively seek from someone positioned differently than you moves information from "blind spot" into "open," and that transfer is irreversible. Once you see what you couldn't see before, you can't unsee it.
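The four quadrants reduce to a lookup over two booleans. The sketch below is a toy model for illustration — the function name and structure are mine, not part of Luft and Ingham's original formulation:

```python
# A toy model of the Johari Window: each piece of information about you
# falls into a quadrant based on two booleans. Illustrative only.

def johari_quadrant(known_to_self: bool, known_to_others: bool) -> str:
    """Classify a piece of information about you into a Johari quadrant."""
    if known_to_self and known_to_others:
        return "Open"        # shared, public knowledge
    if known_to_self:
        return "Hidden"      # private context you haven't shared
    if known_to_others:
        return "Blind Spot"  # visible only from outside your position
    return "Unknown"         # surfaced by no one yet

# Seeking feedback flips known_to_self from False to True for an item
# others can already see — moving it from "Blind Spot" to "Open".
```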
Cognitive diversity outperforms individual ability
The value of multiple perspectives isn't just about patching personal blind spots. It's mathematically demonstrable at the group level.
Scott Page, a complexity scientist at the University of Michigan, formalized this in what he calls the Diversity Prediction Theorem. The theorem states: the collective error of a group equals the average individual error minus the prediction diversity. In plain terms, a group's accuracy improves in two ways — by making individuals better, or by making individuals different from each other.
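The theorem is an exact algebraic identity, so it can be checked numerically. In the sketch below, collective error is the squared error of the group's average prediction, individual error is the average squared error of each member, and diversity is the average squared deviation of each prediction from the group average; the predictions and the true value are made-up illustrative numbers:

```python
# Numerical check of Page's Diversity Prediction Theorem:
#   collective error = average individual error - prediction diversity

def diversity_prediction(predictions, truth):
    n = len(predictions)
    crowd = sum(predictions) / n  # the group's collective prediction
    collective_error = (crowd - truth) ** 2
    avg_individual_error = sum((p - truth) ** 2 for p in predictions) / n
    diversity = sum((p - crowd) ** 2 for p in predictions) / n
    return collective_error, avg_individual_error, diversity

preds = [48.0, 52.0, 60.0, 40.0]  # four forecasters' estimates (invented)
c, e, d = diversity_prediction(preds, truth=49.0)
# c = 1.0, e = 53.0, d = 52.0 — and 53.0 - 52.0 = 1.0, as the theorem requires
assert abs(c - (e - d)) < 1e-9
```

Note what the numbers show: the individuals are each far from the truth (average squared error 53.0), but because they miss in different directions, the crowd's error is only 1.0. Making the forecasters more diverse reduces group error just as surely as making them individually more accurate.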
The striking implication: a group of cognitively diverse thinkers with moderate individual ability will systematically outperform a group of high-ability thinkers who all approach problems the same way. Page identified four dimensions of cognitive diversity — perspectives (how you represent a problem), interpretations (how you categorize what you see), heuristics (what strategies you reach for), and predictive models (how you anticipate outcomes). Groups that are diverse along these dimensions don't just add viewpoints — they create what Page calls "superadditivity," where combinations of different approaches generate solutions that no individual approach could produce alone.
This isn't a feel-good argument for inclusion. It's a structural argument about information. When everyone in a room shares the same training, the same incentives, and the same mental models, they share the same blind spots. Adding another person with the same perspective adds capacity but not coverage. Adding someone with a genuinely different perspective adds coverage — they see the parts of the problem that the existing group, no matter how talented, structurally cannot.
Dialectical thinking: holding contradictions as cognitive growth
There is a deeper version of perspective-taking than simply collecting viewpoints and picking the best one. Developmental psychologists Michael Basseches and Klaus Riegel described a form of cognition they called dialectical thinking — the ability to hold multiple contradictory perspectives simultaneously without collapsing them into a premature resolution.
Basseches identified three stages of adult cognitive development. Universalistic formal thinking seeks the single correct framework — one right answer, one best perspective. Relativistic thinking acknowledges that multiple frameworks exist but treats them as equally valid and ultimately incommensurable — everyone has their own truth. Dialectical thinking transcends both: it recognizes that different perspectives genuinely contradict each other, holds the contradiction, and works to create a new ordering that integrates what each perspective reveals.
Riegel put it bluntly: the dialectical thought that characterizes cognitive maturity consists in living with contradictions rather than rushing to resolve them. Development, in his framework, happens through the interaction between equilibrium and disequilibrium — between the comfort of a settled perspective and the productive discomfort of encountering one that doesn't fit.
This matters for epistemic practice because the most valuable perspectives are often the ones that contradict your current understanding. If someone's viewpoint is easy to integrate — "yes, I already thought of that" — it probably isn't revealing a genuine blind spot. The perspectives that make you uncomfortable, that create cognitive tension, that feel wrong from where you stand — those are the ones most likely to contain information you actually need.
The practical discipline is to notice when a perspective creates discomfort and treat that discomfort as a signal of potential value rather than a reason to dismiss it.
Engineering as institutionalized perspective-taking
Software engineering has, perhaps accidentally, built some of the most effective perspective-taking structures in any profession. Consider code review. At the surface level, code review exists to catch bugs. But what actually happens in a well-functioning review process is systematic perspective-taking: someone who did not write the code reads it from a different position — different assumptions about what the code should do, different knowledge of edge cases, different mental models of system behavior.
Cross-functional architecture reviews extend this further. When a backend engineer, a frontend engineer, a security specialist, a product manager, and a support lead all examine the same design, they aren't performing redundant analysis. Each person's position reveals structure the others cannot access. The backend engineer sees data consistency issues. The frontend engineer sees unnecessary coupling to implementation details. The security engineer sees unvalidated trust boundaries. The product manager sees misalignment with user intent. The support lead sees the failure modes that will generate tickets.
Research on cross-functional teams confirms this pattern at scale. Cognitive resource diversity theory holds that the variety of knowledge in a cross-functional team positively influences performance specifically because of the different perspectives each member brings. When teams have a supportive internal environment — shared purpose, psychological safety, and genuine voice — members engage in open dialogue and share perspectives that would otherwise remain siloed. The structural insight is that these practices work not because diverse teams are smarter, but because they have fewer shared blind spots.
The lesson for your own epistemic practice: don't treat perspective-seeking as a nice-to-have that you do when you have time. Treat it as a structural requirement for seeing clearly. The most dangerous decisions are the ones that feel complete from your single vantage point.
Using AI as a perspective-generation engine
Large language models introduce a new capability for perspective-taking that has no historical precedent: on-demand access to approximate viewpoints you don't personally hold. AI cannot truly occupy another person's position — it doesn't have stakes, experience, or embodied knowledge. But it can generate plausible articulations of how different roles, disciplines, and frameworks would parse a given problem.
The technique is called perspective prompting — instructing an AI to respond as if it were a specific professional, stakeholder, or intellectual tradition. This isn't about getting the "right" answer from a simulated expert. It's about generating candidate perspectives that you can then evaluate, test against reality, and use to surface your own blind spots.
Practical examples:
- "I've designed this API. What would a security engineer flag about it?"
- "Here's my project plan. What would a skeptical CFO ask about it?"
- "I'm making this hiring decision based on these factors. What would an organizational psychologist say I'm weighting incorrectly?"
- "What would someone who strongly disagrees with this approach argue, and what would be their strongest point?"
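Prompts like these can be generated systematically rather than improvised one at a time. The sketch below is one way to do that; the roles, the template wording, and the `complete` function (a stand-in for whatever LLM client you actually use) are all illustrative assumptions:

```python
# Sketch of perspective prompting: wrap one artifact in several
# role-framed prompts, each a candidate blind-spot check.

ROLES = [
    "security engineer",
    "skeptical CFO",
    "support lead who will field the resulting tickets",
    "someone who strongly disagrees with this approach",
]

def perspective_prompts(artifact: str, roles=ROLES) -> list[str]:
    """Build one prompt per role for the same artifact."""
    template = (
        "Respond as a {role}. Here is my current position:\n\n"
        "{artifact}\n\n"
        "What do you see from your position that I am likely missing? "
        "Make your strongest point, not your most agreeable one."
    )
    return [template.format(role=role, artifact=artifact) for role in roles]

# for prompt in perspective_prompts("I've designed this API: ..."):
#     print(complete(prompt))  # hypothetical LLM call
```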
The value isn't in the AI's answers being correct — it's in the AI's answers being different from yours. Each generated perspective is a candidate blind-spot check. Some will be noise. Some will identify something you genuinely didn't consider. The discipline is the same as with human perspectives: notice which generated viewpoints create discomfort, and investigate those first.
A useful compound practice: after generating three to four AI perspectives on a decision, bring the most challenging one to a real human colleague and ask whether it holds up. This combines the breadth of AI-generated viewpoints with the depth and groundedness of human judgment.
The protocol: structured perspective-seeking
Here is a repeatable method for using multiple perspectives to reveal blind spots:
1. State your current position clearly. Write down what you believe, what you've decided, or what you've designed — in two to three sentences. This is your vantage point, made explicit.
2. Identify three to five maximally different positions. These should differ from yours along at least one of Page's dimensions: perspective (how they represent the problem), interpretation (how they categorize it), heuristics (what approaches they'd try), or predictive models (what they expect to happen). Different roles, different experience levels, and different stakes all count.
3. Collect perspectives without defending. Ask each person (or AI) what they see from their position. Write their observations down verbatim. Do not explain, justify, or rebut while collecting. Your goal is to receive, not to persuade.
4. Compare side by side. Place all perspectives — including your own — next to each other. Mark anything that appears in someone else's observation but not in yours. These are your candidate blind spots.
5. Investigate discomfort. The perspectives that feel wrong, annoying, or irrelevant are the most important ones to investigate. Discomfort signals that a perspective contradicts your current model — which is exactly the condition under which it might contain information you need.
6. Update or hold the tension. Some perspectives will reveal genuine blind spots that change your position. Others will reveal contradictions that you cannot yet resolve. Both outcomes are valuable. Updating is obvious growth. Holding unresolved tension — the dialectical move — is less comfortable but often more important.
This is not a one-time exercise. The most effective practitioners build perspective-seeking into recurring structures: weekly architecture reviews, pre-mortem sessions before launches, decision journals that include the perspectives they considered and the ones they didn't.
The next lesson — Emotional charge indicates significance — takes this further. Once you learn to seek perspectives that differ from yours, you'll notice that some of them provoke a stronger emotional reaction than others. That reaction isn't noise. It's signal — pointing to exactly the places where your current model is most invested and therefore most vulnerable to blind spots.
Sources
- Galinsky, A. D., Ku, G., & Wang, C. S. (2005). Perspective-taking and self-other overlap: Fostering social bonds and facilitating social coordination. Group Processes & Intergroup Relations, 8(2), 109-124.
- Page, S. E. (2007). The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. Princeton University Press.
- Luft, J., & Ingham, H. (1955). The Johari Window: A graphic model of interpersonal awareness. Proceedings of the Western Training Laboratory in Group Development. UCLA.
- Basseches, M. (1984). Dialectical Thinking and Adult Development. Ablex Publishing.
- Riegel, K. F. (1973). Dialectic operations: The final period of cognitive development. Human Development, 16(5), 346-370.
- Page, S. E. (2018). The Model Thinker: What You Need to Know to Make Data Work for You. Basic Books.
- Ahrens, S. (2017). How to Take Smart Notes. CreateSpace Independent Publishing Platform.