You have a theory of knowledge. You just haven't looked at it.
Every time you evaluate a claim — "this study proves X," "my gut tells me Y," "the expert said Z" — you're running a program. That program decides what counts as evidence, how much certainty is required before you act, whether authority or experience gets more weight, and whether knowledge is something you receive or something you build. That program is your epistemology: your theory of knowledge itself.
Most people have never examined this program. They assume they're just "being rational" or "going with what makes sense." But rationality according to whom? Makes sense by what criteria? The answers to those questions aren't neutral. They're the output of a specific meta-schema — one that was shaped by your education, your culture, your professional training, and decades of unexamined assumptions about what knowing even means.
This is the most consequential meta-schema you carry. Your schema about risk (L-0333) determines what you attempt. Your schema about learning determines how you grow. But your schema about knowledge itself determines which inputs you accept, which arguments you find persuasive, and which entire categories of evidence you dismiss without realizing you've dismissed them. It is the filter that all other filters pass through.
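The "program" framing above can be made concrete with a deliberately toy sketch. Everything here is invented for illustration — the evidence categories, the weights, the threshold, and both example profiles are placeholders, not measurements of any real epistemology. The point is only that the same claim clears one program's bar and not another's.

```python
from dataclasses import dataclass

# Toy model of the claim-evaluation "program" described above.
# All names, weights, and thresholds are illustrative placeholders.

@dataclass
class Epistemology:
    evidence_weights: dict        # what counts as evidence, and how much
    certainty_threshold: float    # how sure you must be before you accept a claim

    def evaluate(self, claim_support: dict) -> bool:
        """Return True if the claim's support clears this program's bar."""
        score = 0.0
        for kind, strength in claim_support.items():
            # Evidence you don't recognize gets weight 0 -- it is
            # dismissed without ever registering as dismissed.
            score += self.evidence_weights.get(kind, 0.0) * strength
        return score >= self.certainty_threshold

# Two people, same claim, different programs:
empiricist = Epistemology({"data": 1.0, "anecdote": 0.1, "expert": 0.5}, 0.7)
deferential = Epistemology({"data": 0.4, "anecdote": 0.2, "expert": 1.0}, 0.5)

claim = {"expert": 0.6, "anecdote": 0.8}  # an expert endorsement plus a story

print(empiricist.evaluate(claim))   # False: 0.5*0.6 + 0.1*0.8 = 0.38 < 0.7
print(deferential.evaluate(claim))  # True:  1.0*0.6 + 0.2*0.8 = 0.76 >= 0.5
```

Note the `get(kind, 0.0)` line: categories of evidence your program has no weight for are silently zeroed out, which is the mechanism behind "entire categories of evidence you dismiss without realizing you've dismissed them."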
Your epistemology develops — whether you direct that development or not
William Perry spent fifteen years studying how college students' beliefs about knowledge changed during their undergraduate years. His scheme, published in 1970 as Forms of Intellectual and Ethical Development in the College Years, identified a progression that most people move through — but many get stuck in:
Dualism. Knowledge is absolute. There are right answers and wrong answers, and authorities (professors, textbooks, experts) possess them. Your job is to find the right authority and absorb what they know. When two authorities disagree, one of them is wrong.
Multiplicity. Some questions don't have clear answers yet. Where authorities disagree, you're entitled to your own opinion. "Everyone has a right to their view" becomes the operating principle. Knowledge feels democratic — all perspectives are equally valid because certainty isn't available.
Relativism. Not all perspectives are equally valid. Evidence, context, and reasoning matter. Knowledge is constructed within frameworks, and different frameworks can produce different but defensible conclusions. You start evaluating claims by the quality of their reasoning rather than the status of their source.
Commitment within relativism. You can hold committed positions while acknowledging they're situated, provisional, and open to revision. You act on your best current understanding without pretending it's absolute truth.
Most adults aren't operating from the final position. Perry's research showed that many people stabilize somewhere in multiplicity — they've moved past blind deference to authority but haven't developed the tools to evaluate competing claims. They default to "that's just my opinion" or "everyone sees it differently" without recognizing that some ways of seeing are better supported than others.
King and Kitchener extended this work through their Reflective Judgment Model, mapping seven stages of increasingly sophisticated reasoning about knowledge. Their model distinguishes between pre-reflective thinking (knowledge is certain and comes from authorities), quasi-reflective thinking (knowledge is uncertain, so all views are somewhat equal), and genuinely reflective thinking (knowledge is constructed through inquiry and evaluated by the quality of evidence and argument). Their research across twenty years demonstrated that this progression is real, measurable, and consequential for how people reason about complex problems.
The point is not that one position is "correct." The point is that you occupy a position, that it shapes every judgment you make, and that you can move.
The four dimensions you're already running
Barbara Hofer and Paul Pintrich, in their 2002 synthesis of personal epistemology research, identified four dimensions along which people's beliefs about knowledge vary. You hold a position on each one, and together they form the operating parameters of your epistemic meta-schema:
Simplicity of knowledge. Do you believe knowledge consists of discrete, isolated facts — or interconnected, complex webs of meaning? Someone high on simplicity expects clear, bite-sized answers. Someone who sees knowledge as complex expects that understanding requires holding multiple interacting ideas simultaneously. This dimension directly shapes how you respond to nuance. If your schema says knowledge is simple, ambiguity feels like failure. If it says knowledge is complex, ambiguity is expected and navigable.
Certainty of knowledge. Do you believe that knowledge is fixed and absolute, or evolving and provisional? This determines your relationship to being wrong. If knowledge is certain, changing your mind means you were wrong before. If knowledge is evolving, changing your mind means you updated — which is what knowledge is supposed to do.
Source of knowledge. Do you believe knowledge comes primarily from external authorities — experts, institutions, textbooks — or from personal reasoning and direct experience? Neither extreme works. Pure deference to authority leaves you unable to evaluate conflicting experts. Pure self-reliance leaves you reinventing every wheel and missing the accumulated insight of entire fields. Your position on this dimension determines how you weigh a published study against your lived experience, and when you defer versus when you push back.
Justification for knowing. What counts as adequate evidence? Some people require formal proof. Others accept pattern recognition from experience. Others trust institutional consensus. Your justification schema determines the threshold for belief — how much and what kind of evidence you need before you update your model of the world.
Marlene Schommer's earlier research (1990) measured these dimensions empirically and found that they're largely independent — you can believe knowledge is complex but also believe it's certain, or believe it comes from authority but also that it's always evolving. Your personal epistemology isn't a single position on a single scale. It's a profile across multiple dimensions, and that profile has direct consequences for how you learn, decide, and reason.
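Schommer's independence finding can be pictured as a profile with four separately adjustable settings rather than one dial. The sketch below is a hypothetical illustration: the dimension names follow the text above, but the numeric scores are invented, not values from Schommer's actual questionnaire.

```python
from dataclasses import dataclass

# Illustrative profile over Hofer and Pintrich's four dimensions,
# each scored 0.0-1.0. Scores are invented for the example.

@dataclass(frozen=True)
class EpistemicProfile:
    simplicity: float   # 1.0 = knowledge is discrete, isolated facts
    certainty: float    # 1.0 = knowledge is fixed and absolute
    authority: float    # 1.0 = knowledge comes from external authorities
    narrowness: float   # 1.0 = only one kind of evidence justifies belief

# The dimensions vary independently. You can see knowledge as a complex
# web (low simplicity) yet still treat it as settled (high certainty):
complex_but_certain = EpistemicProfile(
    simplicity=0.2, certainty=0.9, authority=0.5, narrowness=0.5)

# ...or defer heavily to experts (high authority) while expecting
# their conclusions to keep evolving (low certainty):
deferential_updater = EpistemicProfile(
    simplicity=0.5, certainty=0.1, authority=0.9, narrowness=0.3)
```

No single number summarizes either profile; collapsing them to one "sophistication" score would erase exactly the combinations Schommer found in her data.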
The competing traditions your epistemology borrows from
Whether you've studied philosophy or not, your theory of knowledge draws from epistemological traditions that have been debated for centuries. Recognizing which traditions your intuitions align with helps you see what your current epistemology includes and what it systematically excludes.
Empiricism holds that knowledge comes primarily from sensory experience and observation. If you tend to trust data, experiments, and direct evidence over abstract reasoning, you're running an empiricist program. The strength: it grounds claims in observable reality. The blind spot: it struggles with knowledge that can't be directly observed — values, meaning, the internal experience of other people.
Rationalism holds that knowledge comes primarily from reason and logical deduction. If you trust well-structured arguments, mathematical proofs, and logical consistency over anecdotes and personal experience, you're running a rationalist program. The strength: it produces internally coherent systems. The blind spot: logically valid arguments can lead to empirically false conclusions if the premises are wrong.
Constructivism — rooted in Piaget's developmental epistemology — holds that knowledge isn't received but built. Individuals construct understanding by integrating new experience with existing mental structures through assimilation and accommodation. If you believe that two people can encounter the same evidence and legitimately construct different knowledge from it because they bring different frameworks, you're running a constructivist program. The strength: it accounts for how people actually learn and why knowledge is contextual. The blind spot: it can slide into relativism where the quality of different constructions becomes impossible to evaluate.
Pragmatism holds that knowledge is justified by its practical consequences. A belief is "true" to the extent that it works — that it enables effective action. If you tend to evaluate ideas by asking "does this produce useful results?" rather than "is this objectively correct?", you're running a pragmatist program. The strength: it keeps knowledge grounded in action. The blind spot: something can work in the short term while being fundamentally wrong, and "useful" is itself a judgment that depends on your values.
You don't need to pick one. Most functioning epistemologies are hybrids — empiricist about medical decisions, constructivist about management philosophy, pragmatist about tool selection, rationalist about ethical reasoning. The question isn't which tradition is right. The question is whether you know which ones you're using, when, and why.
Knowledge doesn't just live in individual heads
Alvin Goldman's work on social epistemology — particularly his 1999 Knowledge in a Social World — established that knowledge is not purely an individual achievement. It's produced, validated, distributed, and sometimes distorted through social processes: peer review, institutional authority, media ecosystems, professional communities, cultural narratives.
This matters for your personal epistemology because many of the schemas you treat as "your own conclusions" were actually absorbed from social systems. Your beliefs about what constitutes good evidence, what questions are worth asking, and what kinds of knowledge are prestigious versus suspect were shaped by the epistemic culture you inhabit — your discipline, your industry, your social circle.
Engineers tend to run heavily empiricist and rationalist epistemologies because engineering culture rewards those modes. Therapists tend to run more constructivist epistemologies because therapeutic culture requires understanding knowledge as situated and personal. Neither group chose their epistemology from first principles. They absorbed it from their epistemic community.
Recognizing the social dimension of your epistemology doesn't undermine it. It contextualizes it. You can still hold your positions — but you hold them knowing they were shaped by forces outside your deliberate reasoning, which means they're worth examining rather than assuming.
The new epistemic question: how do machines "know"?
The emergence of large language models has introduced an epistemological challenge that previous generations didn't face. When an LLM generates a confident, well-structured answer to a complex question, what kind of knowledge is being demonstrated?
Recent research — including Baris Bozkurt's 2025 "Epistemology in the Age of Large Language Models" and Saurabh Jain and colleagues' 2024 "Defining Knowledge: Bridging Epistemology and Large Language Models" — has explored whether LLMs satisfy traditional definitions of knowledge. The emerging picture is unsettling: LLMs produce outputs that look like knowledge (they're often accurate, fluent, and well-organized) but don't arise from any process recognizable as justified true belief. They compress statistical patterns from training data into generative representations. They don't observe the world. They don't reason from principles. They don't experience understanding.
This matters for your personal epistemology in two ways.
First, it exposes your justification schema. When you accept an LLM's output, what are you treating as evidence? The fluency of the response? The apparent authority? The fact that it "sounds right"? Each of those is a justification criterion, and none of them are particularly reliable for LLM outputs. Research on epistemic diversity in LLM outputs shows that different prompts produce different claims about the same topic — meaning the "knowledge" you receive is partly a function of how you asked for it.
Second, it forces you to be more explicit about what you mean by "knowing." If a machine can produce accurate answers without understanding, then accuracy alone isn't what makes something knowledge. Your epistemology needs to account for the difference between having the right answer and understanding why it's right — a distinction that didn't matter much when all your knowledge sources were human.
Epistemic virtues: the character of your knowing
Your epistemology isn't just a set of beliefs about knowledge. It's also a set of dispositions — habits of mind that shape how you encounter new information. Virtue epistemologists call these epistemic virtues, and they function as meta-schemas about how to engage with the process of knowing itself.
Intellectual humility — the recognition that your current understanding might be wrong. Not self-deprecation. Not doubt for its own sake. The operational willingness to update when evidence warrants it. Without this virtue, your epistemology becomes a closed system that only accepts confirming evidence.
Intellectual curiosity — the drive to seek understanding beyond what's immediately necessary. Curiosity is what keeps an epistemology from becoming static. It generates the questions that expose the limits of your current schemas.
Intellectual rigor — the discipline to apply consistent standards of evidence rather than accepting claims that feel good and scrutinizing claims that don't. Rigor is what prevents your epistemology from degrading into motivated reasoning.
Intellectual courage — the willingness to follow evidence to conclusions that are socially costly or personally uncomfortable. Without this, your epistemology is constrained not by what's true but by what's safe to believe.
These virtues don't replace your epistemological framework. They're the operating conditions under which any framework produces good results. An empiricist without humility becomes a dogmatist who dismisses non-quantitative evidence. A constructivist without rigor becomes a relativist who can't distinguish insight from delusion. The framework sets the rules. The virtues determine whether you actually follow them.
Why this is the highest-leverage examination
You can examine your schema about risk and adjust it. You can examine your schema about learning and improve it. But when you examine your schema about knowledge itself, you're adjusting the instrument you use to examine everything else.
Consider what happens at each of Hofer and Pintrich's four dimensions:
If you discover your simplicity schema is set too high (you expect simple, clean answers), you become capable of sitting with complexity instead of forcing premature resolution. Every decision you make gets better.
If you discover your certainty schema is too rigid (you treat current knowledge as fixed), you become capable of updating. Every belief you hold becomes more accurate over time instead of more entrenched.
If you discover your source schema is too authority-dependent, you become capable of independent evaluation. You stop outsourcing your thinking to whoever has the most credentials.
If you discover your justification schema is too narrow (you only accept one kind of evidence), you become capable of integrating multiple ways of knowing — data, experience, intuition, formal reasoning — instead of being limited to a single channel.
Each adjustment cascades through every other schema you hold. That's what makes this a meta-schema in the deepest sense: it doesn't just govern one domain. It governs how you process input in every domain.
Your epistemology is showing
You reveal your theory of knowledge every time you argue. When you say "show me the data," you're revealing an empiricist justification schema. When you say "that's just a theory," you're revealing a certainty schema that equates knowledge with proven fact. When you say "trust the experts," you're revealing an authority-based source schema. When you say "my experience tells me otherwise," you're revealing a constructivist schema that privileges lived knowledge over institutional claims.
None of these are wrong in every context. All of them are wrong in some contexts. The difference between someone running an unexamined epistemology and someone who has made their theory of knowledge explicit is the ability to ask: is this the right epistemic tool for this specific situation? Should I be an empiricist right now, or a pragmatist? Should I defer to authority here, or does my direct experience carry more weight?
That kind of flexible, situation-aware epistemic reasoning is what King and Kitchener's reflective judgment model points toward — and what the next lesson on evaluating schema sources (L-0335) will help you operationalize. Because once you know you're running a theory of knowledge, the immediate next question is: where did that theory come from, and should you trust its sources?