Your schemas are untested hypotheses until you act on them
You have beliefs about how the world works. You believe certain management styles produce better teams, certain communication approaches resolve conflict, certain morning routines make you productive. These beliefs feel solid because you have held them for years, because smart people agree with you, because they make logical sense.
But none of that makes them true.
A schema that has never been tested against reality is an untested hypothesis wearing the costume of knowledge. It might be right. It might be catastrophically wrong. Without action — without putting the schema in contact with the world and observing what happens — you have no way to distinguish between the two.
This is the core insight of reality testing: the most reliable way to validate a schema is not to think about it harder, argue about it longer, or find more people who agree with it. It is to act on it and observe the results.
Pragmatism: ideas earn their truth through consequences
John Dewey, the philosopher who shaped American pragmatism more than any other thinker, rejected the idea that knowledge is something a mind passively receives by observing the world. Instead, he argued that inquiry begins when action hits an obstacle — when something you expected to work does not work — and proceeds through active manipulation of the environment to test hypotheses. An idea "agrees with reality," in Dewey's framework, if and only if it is successfully employed in human action in pursuit of human goals. Ideas are not representations of truth. They are instruments — tools whose value is determined by their consequences when applied.
This means that a schema sitting inside your head, no matter how internally consistent, has not yet earned the status of knowledge. Dewey would say it is a candidate for knowledge — a plan of action awaiting its test. You believe that direct feedback improves your team's performance more than written reviews? That belief is a tool. Use it. Observe what happens. If performance improves, the tool works in this context. If it does not, the tool needs modification or replacement.
William James put it even more bluntly: the pragmatic method asks of every belief, "What concrete difference will its being true make in anyone's actual life? What experiences will be different from those which would obtain if the belief were false?" If you cannot answer that question — if your schema produces no testable predictions — then it is not a schema you can validate. It is an abstraction floating free of the world.
The experiential learning cycle: action as the engine of understanding
David Kolb formalized this insight into a learning model that has shaped education and professional development for four decades. His experiential learning cycle (1984) identifies four stages through which genuine understanding develops: concrete experience (doing something), reflective observation (reviewing what happened), abstract conceptualization (drawing conclusions and updating your model), and active experimentation (applying the revised model to a new situation).
The critical feature of Kolb's cycle is that it begins and ends with action. You do not learn by reading about management — you learn by managing, reflecting on the outcomes, revising your mental model, and then managing differently. The cycle is not optional. Skip the concrete experience, and you have theory without grounding. Skip the active experimentation, and your reflection never reconnects with reality.
For schema validation, this means that reflection alone — sitting with your beliefs and analyzing them — is necessary but insufficient. Reflection tells you what your schema predicts. Action tells you whether the prediction is correct. The learning only completes when you close the loop: act, observe, reflect, revise, act again.
The gap between what you say and what you do
Chris Argyris and Donald Schön identified a problem that makes action-based testing particularly urgent. In their research on organizational behavior, they discovered that people consistently maintain two different theories about how they operate: an espoused theory (what they say they believe and would do) and a theory-in-use (what they actually do when the situation arises). These two theories are often incongruent, and the person holding them is typically unaware of the gap.
You might espouse a schema that says, "I believe in giving people honest, direct feedback." But your theory-in-use — observable in your actual behavior — might be: "I soften feedback to the point of meaninglessness whenever I sense the other person might be upset." You will never discover this discrepancy through introspection alone. Introspection accesses your espoused theory. Only action reveals your theory-in-use.
This is why Argyris argued for double-loop learning: not just correcting errors within your existing framework (single-loop), but questioning and revising the underlying assumptions that generated the errors in the first place. When you act on a schema and the results surprise you, single-loop learning asks, "What did I do wrong?" Double-loop learning asks, "Is the schema itself wrong? Are the goals and values behind the schema what I actually want?"
Reality testing through action is the mechanism that triggers double-loop learning. Without it, you remain trapped in single-loop corrections — adjusting tactics while never examining the strategy.
Build-measure-learn: validated learning at speed
Eric Ries, in The Lean Startup (2011), translated these ideas into a framework that operates at the speed of modern product development. His build-measure-learn feedback loop captures the essential structure of reality testing through action:
- Build a minimum viable product (MVP) — the smallest thing that tests your core assumption.
- Measure the results using metrics that relate meaningfully to the assumption being tested.
- Learn whether to persevere (the schema holds) or pivot (the schema needs fundamental revision).
The key concept Ries introduced is validated learning — knowledge that has been demonstrated empirically, through real-world experimentation, rather than asserted through logic or authority. Validated learning is the only kind of learning that reduces uncertainty, because it is the only kind that has survived contact with reality.
This applies directly to personal schemas. You believe that writing in the morning is when you do your best work? Build a test: write in the morning for two weeks, write in the evening for two weeks, track word count and self-assessed quality. Measure. Learn. Your schema either earns its status as validated knowledge or it gets revised. Either outcome is progress.
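The experiment above can be sketched as a simple log-and-compare script. Everything here is illustrative: the session data is invented, and the comparison rule (morning must beat evening on word count and at least tie on quality) is one reasonable way to pre-register the decision, not the only one.

```python
from statistics import mean

# Hypothetical session log: (condition, word_count, self_rated_quality 1-5).
# These values are made up for illustration, not real results.
sessions = [
    ("morning", 1100, 4), ("morning", 950, 3), ("morning", 1200, 4),
    ("evening", 800, 3), ("evening", 1020, 4), ("evening", 700, 2),
]

def summarize(condition):
    """Average word count and self-rated quality for one condition."""
    rows = [s for s in sessions if s[0] == condition]
    return mean(r[1] for r in rows), mean(r[2] for r in rows)

morning_words, morning_quality = summarize("morning")
evening_words, evening_quality = summarize("evening")

# The schema "I write best in the morning" survives only if the
# morning numbers actually beat the evening numbers.
schema_holds = (morning_words > evening_words
                and morning_quality >= evening_quality)
```

The point is not the arithmetic; it is that the pass/fail rule is written down before the data comes in, so the result is measured rather than remembered.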
The discipline Ries demands is the same one Dewey demanded a century earlier: stop treating your beliefs as conclusions and start treating them as hypotheses. Then run the experiments.
Your body already knows this: embodied cognition and action-based knowledge
Research in embodied cognition provides neurological evidence for why action-based testing is so powerful. The embodied cognition framework, synthesized extensively in the Stanford Encyclopedia of Philosophy, holds that cognition is not an abstract computation happening in isolation inside the skull. Instead, thought is grounded in the body's sensory and motor systems — in its capacity to perceive and act.
Neuroimaging studies have shown that even reading a word like "kick" activates the motor areas of the brain associated with kicking. Concepts are not stored as abstract symbols; they are stored as patterns of sensory and motor activity. This means that acting on a schema — physically doing the thing your belief predicts will work — engages cognitive resources that purely abstract reasoning cannot access.
When you test a schema through action, you are not merely collecting data. You are engaging your full cognitive apparatus: perception, proprioception, motor planning, emotional response, social cognition. The result is a richer, more integrated form of knowledge than any amount of reflection could produce. This is why a leader who has managed through a crisis understands crisis management differently from one who has only read about it. The action created knowledge that abstract analysis could not.
Reality testing in cognitive behavioral therapy
The term "reality testing" has a precise clinical meaning in cognitive behavioral therapy (CBT). In therapeutic settings, reality testing involves evaluating the accuracy of a person's thoughts and beliefs by comparing them against concrete, objective evidence. A therapist might ask: "You believe that your colleagues think you are incompetent. What is the actual evidence for that belief? What evidence exists against it?"
CBT reality testing uses techniques like Socratic questioning, thought records, and behavioral experiments — structured actions designed to test whether a belief holds up when confronted with real-world data. The therapeutic insight is that distorted thinking patterns (catastrophizing, mind-reading, black-and-white thinking) persist precisely because they are never tested against reality. The person who believes "everyone will judge me if I speak up in meetings" has typically never actually tested that belief by speaking up and observing the response.
For personal epistemology, the parallel is direct. Your schemas about yourself, your relationships, your career, your capabilities — many of them are untested. They feel true because they have lived inside your head for years. But feelings of truth and evidence of truth are entirely different things. Reality testing through action is how you convert the first into the second.
How to design a schema test
Not all actions are tests. Going through the motions of acting on a belief, with no structure for observing results, is just living. A genuine reality test has four components:
A falsifiable prediction. State your schema as an if-then proposition: "If I delegate this decision entirely to the team, they will produce a result that is at least as good as what I would have decided alone, within the same timeframe." If you cannot state the prediction clearly enough that it could be proven wrong, you are not ready to test it.
A defined action. Specify exactly what you will do, when, and for how long. Vague intentions ("I'll try being more direct") are not tests. Specific commitments ("In tomorrow's one-on-one with Alex, I will state my concern within the first five minutes without softening language") are tests.
An observation protocol. Decide in advance what you will measure or record. This does not require sophisticated metrics. It can be as simple as writing down what happened immediately afterward, while the experience is fresh. But the commitment to observe must be explicit. Without it, confirmation bias will rewrite your memory of the results to match your existing schema.
A revision trigger. Define in advance what would cause you to revise the schema. If the team's decision is significantly worse than yours would have been, what does that mean for your delegation schema? Perhaps it means the schema needs a boundary condition ("delegation works when the team has sufficient context") rather than wholesale abandonment. Perhaps it means the schema is wrong. Decide before you have the results, not after, so that your ego does not negotiate with reality.
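One way to make the four components concrete is to encode each test as a structured record that cannot be run until every pre-registered field is filled in. This is a minimal sketch; the class name, field names, and the delegation example text are all assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SchemaTest:
    """One reality test: all four components, stated before acting."""
    prediction: str            # falsifiable if-then statement
    action: str                # exactly what, when, and for how long
    observation_protocol: str  # what gets measured or recorded
    revision_trigger: str      # decided in advance, before results exist
    result: Optional[str] = None  # filled in only after the action

    def is_ready(self) -> bool:
        # A test is runnable only when every pre-registered field is set.
        return all([self.prediction, self.action,
                    self.observation_protocol, self.revision_trigger])

delegation_test = SchemaTest(
    prediction=("If I delegate this decision entirely to the team, the "
                "result will be at least as good as mine, in the same "
                "timeframe."),
    action="Hand Monday's decision fully to the team; no input from me.",
    observation_protocol=("Write down the outcome, and my counterfactual "
                          "choice, by Friday."),
    revision_trigger=("If the team's decision is clearly worse, add a "
                      "boundary condition about team context before "
                      "abandoning the schema."),
)
```

Forcing the revision trigger into the record before `result` exists is the structural version of the advice above: the ego cannot negotiate with a field that was filled in last week.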
Action, AI, and your Third Brain
When your schemas are externalized as testable predictions and your test results are captured in structured logs, AI becomes a powerful validation partner. An AI system can review your schema test log and identify patterns you might miss: schemas that you repeatedly confirm without genuine falsification risk, tests that are structured to succeed, predictions that have been revised so many times they have drifted from their original meaning.
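The kind of pattern scan described above can be sketched in a few lines. The log format and the threshold (three or more tests with zero falsifications) are assumptions chosen for illustration, not a prescribed standard.

```python
from collections import defaultdict

# Hypothetical test log: each entry is (schema_name, outcome),
# where outcome is "confirmed" or "falsified". Invented data.
log = [
    ("delegation works", "confirmed"),
    ("delegation works", "confirmed"),
    ("delegation works", "confirmed"),
    ("morning writing", "confirmed"),
    ("morning writing", "falsified"),
]

def never_falsified(entries, min_tests=3):
    """Flag schemas tested min_tests+ times with zero falsifications,
    a sign the tests may be structured to succeed."""
    outcomes = defaultdict(list)
    for schema, outcome in entries:
        outcomes[schema].append(outcome)
    return [s for s, o in outcomes.items()
            if len(o) >= min_tests and "falsified" not in o]
```

A schema that only ever confirms is not necessarily wrong, but it is the first place to look for a test designed to pass.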
More importantly, AI can help you design better tests. Describe a schema to an AI partner and ask: "What would falsify this? What is the strongest evidence against it? What is the smallest action I could take that would distinguish between this schema being true and it being false?" This is not outsourcing your thinking. It is using a cognitive tool to sharpen the rigor of your reality testing — the same way Dewey used the classroom, Argyris used organizational case studies, and Ries used the MVP.
But the action itself cannot be outsourced. No amount of analysis, simulation, or conversation substitutes for putting your schema into the world and letting reality respond. The AI can help you prepare. The results come from you doing the thing.
The schema that survives contact with reality
Kurt Lewin, the psychologist who pioneered action research, worked from a principle that unifies every framework in this lesson: "No research without action, no action without research." Understanding the world and acting in the world are not separate activities. They are the same activity, viewed from different angles.
Your schemas — about people, systems, yourself, the world — are your best current models of how things work. They deserve respect. They also deserve testing. And the test that matters most is not whether the schema is logically consistent, whether experts agree with it, or whether it makes you feel confident. The test that matters is whether it survives contact with reality when you act on it.
A schema that has been tested through action and refined through observation is worth more than a thousand schemas that have only been thought about. Not because action is magic, but because reality is the only referee that cannot be fooled, flattered, or argued with.
Act on your schemas. Observe what happens. Update accordingly. This is not just a validation strategy. It is what honest epistemology looks like in practice.