Your best ideas are the ones most worth attacking
You have a schema you're proud of. Maybe it's a theory about why your team underperforms, a mental model for evaluating job candidates, or a framework for how your industry is shifting. You've gathered evidence. The evidence supports your view. You feel confident.
That confidence is the danger.
L-0290 established that validation and confirmation are different operations. Looking for evidence that supports your schema is not testing it — it's cheerleading. This lesson takes the next step: if you want to know whether a schema is actually reliable, you must try to destroy it. Deliberately. Systematically. Before you stake real decisions on it.
This practice has a name. In military and intelligence circles, it's called red teaming — assigning a dedicated adversary to find every weakness in a plan before the enemy does. The principle translates directly to personal epistemology: the schemas you rely on most are the ones that most urgently need someone trying to break them. And since you can't always find an external adversary, you have to learn to be your own.
Red teaming: from the Vatican to the battlefield
The practice of institutionalized dissent is older than most people realize. In the sixteenth century, the Catholic Church formalized the role of Advocatus Diaboli — the Devil's Advocate — whose formal responsibility was to argue against candidates for sainthood. The role existed because the Church recognized a structural problem: when everyone involved wants a candidate to succeed, no one looks for disqualifying evidence. The Devil's Advocate was not a cynic or an obstructionist. He was an epistemic safeguard — a person whose job was to find what enthusiasm had missed (Zenko, 2015).
The military formalized the same instinct during the Cold War. The RAND Corporation ran simulations for the United States military in the 1960s in which a "red team" — named for the color representing the Soviet Union — was tasked with thinking like the adversary. Their job was not to predict what the enemy would do. It was to be the enemy — to exploit every weakness in American plans that the planners themselves couldn't see because they were too close to their own logic. By the early 2000s, red teaming had become standard practice across the U.S. military, intelligence community, and eventually the private sector (Zenko, 2015).
The pattern is the same in every case: an individual or group that has built a plan, strategy, or model cannot be trusted to evaluate it objectively. The act of building creates attachment. Attachment creates blind spots. And blind spots persist until someone whose explicit job is to find them goes looking.
Why you can't see the flaws in your own schemas
Irving Janis documented the mechanism in his 1972 study of foreign policy disasters. He analyzed the Bay of Pigs invasion, the failure to anticipate Pearl Harbor, and the escalation of the Vietnam War, and identified a syndrome he called groupthink: when the desire for consensus suppresses the critical evaluation of alternatives. Janis found eight symptoms, including an illusion of invulnerability, collective rationalization, self-censorship of dissenting views, and the emergence of "mindguards" who actively shield the group from contradictory information (Janis, 1972).
But here's what most people miss about groupthink: you can be a group of one. Every symptom Janis identified in groups has an analog in individual cognition. Your illusion of invulnerability is your confidence in a schema that has never been tested. Your collective rationalization is the story you tell yourself about why the evidence supports you. Your self-censorship is the counterargument you generate and immediately dismiss. Your mindguard is the part of your mind that says "I already considered that" without actually having done so.
When you build a schema — "remote teams are less creative than co-located ones," "my industry rewards loyalty," "I do my best work under pressure" — you are the plan's author, advocate, and evaluator simultaneously. Janis showed that this combination reliably produces bad outcomes. The same cognitive system that designed the schema is the one evaluating whether it's sound. That is not validation. It is an immune response against disconfirmation.
The pre-mortem: making failure vivid before it happens
Gary Klein developed what may be the most practical red teaming technique for individual use: the pre-mortem. The method is deceptively simple. Before implementing a decision or relying on a schema, you imagine that it has already failed — completely, unambiguously, spectacularly. Then you generate specific reasons why.
The key insight behind the pre-mortem draws on research by Mitchell, Russo, and Pennington (1989), who found that prospective hindsight — imagining that an event has already occurred — increases the ability to correctly identify reasons for outcomes by 30% compared to simply asking people to predict what might happen. The temporal shift from "what could go wrong?" to "it went wrong — why?" bypasses your brain's motivated reasoning. When the failure is hypothetical and future-tense, your mind generates weak objections it can dismiss. When the failure is treated as a fait accompli, your mind shifts to explanation mode — and explanations are richer, more specific, and harder to ignore (Klein, 2007).
Applied to schemas, the pre-mortem works like this:
- State the schema you're relying on: "My belief is that X causes Y."
- Now assume the schema has led you catastrophically astray: "I acted on this belief and the result was a disaster."
- Generate at least five specific explanations for why the schema failed.
Each explanation is a potential flaw in your schema — a boundary condition you hadn't considered, a hidden assumption you hadn't named, or a confounding variable you hadn't controlled for. The pre-mortem doesn't guarantee you'll find the fatal flaw. But it dramatically increases the probability that you'll find something — because you've shifted your cognitive stance from defending to diagnosing.
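The three steps above can be sketched as a minimal structure that enforces the protocol's one hard rule — at least five specific failure explanations before the exercise counts as complete. The names (`PreMortem`, `add_reason`, `is_complete`) are illustrative, not any standard API:

```python
from dataclasses import dataclass, field

@dataclass
class PreMortem:
    """One pre-mortem pass for a single schema (hypothetical structure)."""
    schema: str                                          # the belief, stated plainly
    failure_reasons: list[str] = field(default_factory=list)

    def add_reason(self, reason: str) -> None:
        # Each reason is phrased as an explanation of a failure that has
        # already happened ("it went wrong -- why?"), not a prediction.
        self.failure_reasons.append(reason)

    def is_complete(self) -> bool:
        # The protocol sets a floor of five specific explanations.
        return len(self.failure_reasons) >= 5

pm = PreMortem(schema="Remote teams are less creative than co-located ones")
for reason in [
    "Creativity was measured only by in-meeting brainstorming output",
    "The remote teams studied lacked asynchronous collaboration tooling",
    "Selection effect: struggling teams were the ones moved remote",
    "Co-located teams received more of the manager's attention",
    "The observation window was too short to capture deep-work gains",
]:
    pm.add_reason(reason)

print(pm.is_complete())  # True once five reasons are recorded
```

The point of the floor is mechanical honesty: a pre-mortem with three obvious reasons passes the feeling of diligence without reaching the non-obvious failure modes.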
Authentic dissent versus performative devil's advocacy
There is a trap here, and it's worth naming explicitly.
Charlan Nemeth, a psychologist at UC Berkeley who spent two decades studying the effects of dissent on group decision-making, found a critical distinction: authentic dissent stimulates divergent thinking, but role-played devil's advocacy does not (Nemeth, 2018). In her experiments, groups that included a member who genuinely disagreed produced more creative solutions and considered more alternatives than groups where someone was assigned to play devil's advocate.
The mechanism is psychological, not logical. When someone truly believes the counterargument, they press harder, generate more detailed objections, and don't back down when challenged. When someone is role-playing dissent, they generate weaker arguments, concede more easily, and the group knows — consciously or not — that the dissent is performative. The result is that devil's advocacy, poorly executed, can actually increase confidence in the original position: "We considered the counterarguments and they weren't very strong."
This matters for red teaming your own schemas because you are both the advocate and the adversary. If you generate counterarguments you can easily dismiss, you are performing devil's advocacy, not red teaming. The test is effort and discomfort. If your red team exercise feels uncomfortable — if the counterargument you generated genuinely threatens the schema you care about — you are probably doing it right. If it feels like a pro forma check, you are confirming, not validating.
Nemeth's work suggests a practical standard: your red team argument should be one that a smart, informed person could sincerely hold. If you can't construct such an argument, either your schema is extraordinarily well-supported (rare) or you don't understand the opposition well enough to test your own position (common).
Adversarial collaboration: the gold standard
Daniel Kahneman, frustrated with what he called "angry science" — the pattern of rivals publishing attacks and counter-attacks without ever resolving disagreements — developed a method he called adversarial collaboration. Two researchers with opposing theories would jointly design an experiment that both agreed constituted a fair test. An arbiter would ensure neither side built hidden advantages into the methodology. The results would resolve the dispute, or at least narrow it, because both sides had committed in advance to accepting the outcome (Kahneman, 2003).
You can adapt this for personal schema validation. The principle is not "argue with yourself" — which Nemeth showed has limited value. The principle is design a test that your schema could fail. The test must be one you would accept as disconfirming evidence before you run it, not after. This is the difference between a red team exercise and a rigged game.
For example: you believe "cold outreach on LinkedIn doesn't work for my industry." An adversarial collaboration with yourself would mean designing a specific experiment — 50 personalized outreach messages over 30 days, using best practices from someone who claims it does work — and committing in advance to a threshold: if more than 5% convert to meaningful conversations, you'll update your schema. The commitment to the threshold before the evidence arrives is what makes this a real test rather than an exercise in confirming what you already believe.
Red teaming transforms with AI
Large language models are, in a precise sense, red teaming partners that never get tired, never get defensive, and never have a stake in your schema being right.
When you prompt an AI system with "Here is my belief about X — give me the strongest possible counterargument," you are outsourcing the role of the Devil's Advocate. And unlike a human playing devil's advocate, the AI has no social relationship with you, no awareness of what you want to hear, and no incentive to soften its objections. Recent research on AI red teaming has shown that these systems can systematically probe for weaknesses that human testers miss, precisely because they approach the task without the motivated reasoning that makes self-criticism unreliable (Ganguli et al., 2022).
But the value goes further. You can use AI to conduct structured pre-mortems: "Assume this business strategy has failed in 12 months. Generate ten specific, plausible reasons why." You can use it to steelman opposing positions: "Write the best possible argument that my schema is wrong, from the perspective of someone who knows my domain deeply." You can use it to surface hidden assumptions: "What am I taking for granted in this belief that might not be true?"
The critical caveat is the same one Nemeth identified for human devil's advocacy: the quality of the output depends on your willingness to take it seriously. If you generate AI counterarguments and dismiss them reflexively — "that's a generic objection, it doesn't apply to my situation" — you are using AI to perform the theater of red teaming without the substance. The AI produces the adversarial input. The epistemic honesty of engaging with it is still on you.
The compound effect is significant. An individual practicing red teaming in 2020 had to generate all counterarguments from their own knowledge base — which is precisely the knowledge base that produced the schema in the first place. An individual practicing red teaming in 2026 can draw on a cognitive partner that has been trained on orders of magnitude more counterarguments, failure modes, and alternative perspectives than any single human could hold. The schema that survives an AI-augmented red team is more robust than one that survived only self-critique.
The red team protocol for personal schemas
Here is a practical protocol you can apply to any schema before relying on it for a significant decision:
1. State the schema as a falsifiable claim. Not "I think remote work is better" but "Remote work increases my deep-focus output by at least 20% compared to office work." Precision makes the schema testable and forces you to name what "better" actually means.
2. Conduct a pre-mortem. Assume the schema has led you to a bad outcome. Generate at least five specific, plausible reasons why. Do not stop at three — the first three are usually the obvious ones. Reasons four and five are where the non-obvious failure modes live.
3. Steelman the opposition. Write the strongest version of the counterargument — not a straw man you can knock down, but the argument that would persuade a reasonable person who currently agrees with you to change their mind. If you cannot write this, consult someone who genuinely holds the opposing view, or use an AI system to generate it.
4. Identify the killing blow. Ask: what single piece of evidence, if it existed, would force me to abandon this schema entirely? If you cannot name such evidence, your schema may not be falsifiable — which is an entirely different problem (and the subject of the next lesson in this sequence).
5. Design a survivable test. Following Kahneman's adversarial collaboration principle, design a real-world test that your schema could fail. Commit to the disconfirmation threshold before running the test. If the schema survives, your confidence is now warranted — earned through assault, not assumed through comfort.
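The five steps can be captured as a single record per schema, with a completeness check that names whatever is still missing before the schema is trusted. The field and method names here are illustrative, not any established format:

```python
from dataclasses import dataclass, field

@dataclass
class RedTeamRecord:
    """One pass of the five-step protocol for a single schema (sketch)."""
    falsifiable_claim: str                  # step 1: precise, testable statement
    premortem_reasons: list[str] = field(default_factory=list)  # step 2: >= 5
    steelman: str = ""                      # step 3: strongest opposing argument
    killing_blow: str = ""                  # step 4: evidence forcing abandonment
    test_design: str = ""                   # step 5: real-world test
    disconfirmation_threshold: str = ""     # step 5: committed before running

    def remaining_gaps(self) -> list[str]:
        """Return the steps still unfinished; empty list means ready to act."""
        gaps = []
        if len(self.premortem_reasons) < 5:
            gaps.append("pre-mortem needs at least five reasons")
        if not self.steelman:
            gaps.append("steelman not written")
        if not self.killing_blow:
            gaps.append("no killing blow named -- schema may be unfalsifiable")
        if not self.test_design or not self.disconfirmation_threshold:
            gaps.append("no test designed with a pre-committed threshold")
        return gaps

record = RedTeamRecord(
    falsifiable_claim="Remote work increases my deep-focus output by >= 20%"
)
print(record.remaining_gaps())  # all four gaps, since only step 1 is done
```

The check deliberately refuses to pass on partial effort: a schema with a steelman but no killing blow, or a test with no committed threshold, is still in the "confirming" column, not the "validated" one.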
From red teaming to resource allocation
A schema that has survived genuine red teaming is not guaranteed to be correct. But it occupies a fundamentally different epistemic position than one that has merely been confirmed. You know its boundary conditions because you probed for them. You know its hidden assumptions because you surfaced them. You know the strongest counterargument because you constructed it. And you know the schema's limitations because you tried to find them.
This earned confidence matters because, as L-0292 will establish, schema validation has a cost. You cannot red team every belief you hold — there are too many and your time is finite. The red team protocol is for the schemas you're about to act on, the ones where being wrong carries real consequences. Casual beliefs don't need this rigor. The belief you're about to stake a career decision on, a team reorganization on, or a life change on — that belief has earned the right to be attacked before it's trusted.
The question is not whether your schemas have flaws. They do. The question is whether you find those flaws yourself, on your own schedule, in a context where you can revise — or whether reality finds them for you, on its schedule, in a context where the cost of revision is much higher.
Red team your own schemas. The ones that break needed breaking. The ones that survive are the ones worth building on.