You did not choose most of what you believe
Take any belief you hold with confidence — about what makes a good leader, how relationships work, what diet is healthy, how to manage money, what your strengths are. Now ask: where did that belief come from? Not "what is the belief?" but "who or what put it there, and why did I accept it?"
Most people cannot answer this question for most of their beliefs. The schema is present. It feels like their own. It may have been there so long it seems self-evident. But every schema has a source. You read it somewhere. Someone told you. You inferred it from a single experience and never revisited it. You absorbed it from a culture that never asked you to examine it.
In L-0334, you examined your epistemology — your schema about knowledge itself. Now we go one level more practical: given that you have schemas about knowledge, how do you evaluate the specific channels through which your schemas arrive? Not all sources are equal. A schema you derived from repeated personal experimentation is different from one you absorbed from a confident stranger on the internet. A belief grounded in peer-reviewed replication studies operates on a different evidentiary foundation than one grounded in a single anecdote from someone you admire. The primitive is simple but its implications are vast: evaluate where your models come from.
The evolutionary roots of source evaluation
Humans are fundamentally dependent on information from others. You cannot independently verify the vast majority of what you believe. You did not personally confirm that the earth orbits the sun, that bacteria cause infection, that compound interest works over decades. You accepted these claims on testimony — because the sources seemed trustworthy.
Dan Sperber and colleagues formalized this in their 2010 paper on epistemic vigilance, published in Mind & Language. Their argument: humans have evolved a suite of cognitive mechanisms specifically designed to evaluate communicated information. We do not simply accept everything we are told. We run a background process — epistemic vigilance — that monitors incoming claims for reliability. This process evaluates two dimensions simultaneously: the competence of the source (do they know what they are talking about?) and the benevolence of the source (are they trying to inform me or manipulate me?).
This is not cynicism. Sperber's point is that trust and vigilance are not opposites. They are complementary systems. You default to a stance of reasonable trust — communication would be impossible otherwise — but that trust is continuously calibrated by vigilance mechanisms that flag when something does not add up.
The problem is that these mechanisms evolved for small-group, face-to-face communication where you knew the speaker personally, could observe their track record directly, and had community feedback loops to catch liars. They did not evolve for a world where a stranger's podcast reaches millions, where authority signals can be manufactured, and where the most confidently delivered claims are often the least rigorously sourced.
Your epistemic vigilance hardware is Stone Age. Your information environment grows exponentially. The gap between the two is where bad schemas get in.
What source evaluation actually looks like
The information literacy field has spent decades trying to formalize source evaluation. The most widely taught framework is the CRAAP test, developed by Sarah Blakeslee at California State University, Chico in 2004. It evaluates sources on five dimensions:
Currency — how recent is the information? A schema about software architecture from 2015 may be dangerously outdated. A schema about human grief from the Stoics may be perfectly current.
Relevance — does this source actually address the domain where you are applying the schema? A brilliant management framework built for 500-person companies may be irrelevant to your three-person team.
Authority — what are the source's credentials, experience, or track record in this specific domain? Authority in one domain does not transfer automatically. A Nobel physicist has no special authority on nutrition.
Accuracy — is the claim supported by evidence? Can it be verified independently? Are the methods transparent?
Purpose — is the source trying to inform, educate, persuade, sell, or entertain? A person selling a course has different incentives than a researcher publishing findings.
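The five dimensions above amount to a checklist, and a checklist can be sketched as code. This is a hypothetical illustration — the field names and the pass/fail framing are mine, not part of the CRAAP test itself:

```python
from dataclasses import dataclass, fields

@dataclass
class CraapCheck:
    """One yes/no judgment per CRAAP dimension for a given source."""
    currency: bool   # Is the information recent enough for this domain?
    relevance: bool  # Does the source address the context I'm applying it to?
    authority: bool  # Does the source have a track record in THIS domain?
    accuracy: bool   # Is the claim supported by verifiable evidence?
    purpose: bool    # Is the source trying to inform rather than sell or persuade?

    def passed(self) -> list[str]:
        return [f.name for f in fields(self) if getattr(self, f.name)]

    def failed(self) -> list[str]:
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a charismatic course-seller's productivity framework
check = CraapCheck(currency=True, relevance=True,
                   authority=False, accuracy=False, purpose=False)
print(check.failed())  # ['authority', 'accuracy', 'purpose']
```

Notice that even a source that passes currency and relevance can fail the three dimensions that matter most for trust — which is exactly the profile of a compelling but unsourced framework.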
The CRAAP framework is useful as a checklist, but it has a weakness: it asks you to evaluate a source by reading the source itself. You are looking for authority signals on the page you are already reading. This is vertical reading — staying within the source to assess the source.
Sam Wineburg and the Stanford History Education Group demonstrated a far more effective approach through their research on professional fact-checkers. They compared how historians, students, and fact-checkers evaluate online information. The fact-checkers outperformed everyone, and they did it by doing something counterintuitive: they spent less time on the source itself. Instead of reading deeply within a webpage, they immediately opened new tabs and searched for information about the source from independent parties. Wineburg called this lateral reading.
The lateral readers asked: what do other people say about this source? What is their track record? Who funds them? What do experts in this specific domain think of their claims? They evaluated the source by leaving the source.
This distinction — vertical versus lateral — maps directly to schema evaluation. When you encounter a new schema (a framework, a model, a claim about how something works), you can evaluate it vertically by examining the schema itself: does it seem logical? Is it internally consistent? Does it match my experience? Or you can evaluate it laterally: who is making this claim? What is their evidence base? What do independent sources say? Have other people applied this schema and gotten the results the source promises?
Lateral evaluation is harder. It is also dramatically more reliable.
Five source categories and their reliability profiles
Every schema you hold arrived through one of a handful of channels. Each channel has a characteristic reliability profile — a set of strengths and failure modes.
Direct experience. You tried something and observed the result. This is your highest-fidelity source for narrow claims. You know firsthand that your body reacts poorly to dairy, that your team performs better with written briefs than verbal ones, that you do your best thinking before noon. The strength of direct experience is specificity and low distortion. The weakness is small sample size and survivorship bias. You experienced one outcome, under one set of conditions, filtered through your particular cognitive biases. The schema may be perfectly accurate for you and completely wrong as a general principle.
Expert testimony. Someone with deep domain knowledge makes a claim based on their expertise. Alvin Goldman's work in social epistemology, particularly his paper "Experts: Which Ones Should You Trust?", lays out the core challenge: when you are not an expert in a domain, how do you evaluate competing expert claims? Goldman identifies several indicators — credentials, track record, peer recognition, ability to explain their reasoning — but also the fundamental asymmetry: you are using non-expert judgment to evaluate expert claims. The strength of expert testimony is depth. The weakness is that you may not have the knowledge to detect when an expert is wrong, biased, or operating outside their actual domain of competence.
Cultural transmission. Beliefs absorbed from your family, community, profession, or broader culture. These schemas arrive without explicit argument. You did not evaluate them and decide to adopt them. You grew up inside them. "Hard work always pays off." "Family comes first." "You should own a home." "Leaders are decisive." Cultural schemas are powerful precisely because they are invisible — they feel like reality rather than like a specific claim from a specific source. Their strength is that they encode accumulated collective experience. Their weakness is that they are almost never examined, they persist long after the conditions that generated them have changed, and they resist revision because rejecting them feels like rejecting your identity.
Narrative and media. Schemas acquired from books, podcasts, articles, social media, and conversations. This is the source category that has exploded in volume. A century ago, the average person encountered a handful of schema sources: family, local community, perhaps a newspaper and a church. Today you are exposed to thousands of schema-laden claims per day. Every blog post, every thread, every interview implicitly offers you a model of how something works. The strength of narrative sources is breadth — you can access the distilled thinking of people you will never meet. The weakness is that narrative quality (how compelling the story is) is uncorrelated with schema quality (how accurate the model is). The most viral frameworks are often the most oversimplified.
Algorithmic and AI-generated content. An increasingly significant source category. When you ask an AI assistant to help you think through a problem, the response carries implicit schemas. When you read AI-generated summaries, recommendations, or analyses, you are absorbing models that were distilled from training data of highly variable quality. The AI field's version of "garbage in, garbage out" is directly relevant: 85% of AI projects fail, with data quality issues responsible for roughly 70% of those failures, according to industry analyses. The strength of AI sources is speed and breadth of synthesis. The weakness is that provenance is opaque — you cannot trace a specific claim back to a specific evidence base, the training data may contain biases or errors that propagate invisibly, and the confident tone of AI output provides no signal about the actual reliability of the underlying claim.
The provenance problem
In data science, provenance refers to the documented trail of where data came from, how it was transformed, and what decisions shaped it along the way. Data without provenance is untrusted data. No serious data engineer would build a production pipeline on data of unknown origin.
Your schemas deserve the same standard.
Most people operate on schemas of unknown provenance. They believe things about leadership, productivity, relationships, money, and identity without being able to trace those beliefs back to a specific source, let alone evaluate that source's reliability. The schema is just there. It feels true. It has always been there.
This is the cognitive equivalent of a production system running on data from an unknown source with no documentation, no quality checks, and no audit trail. It may work. It may also be feeding you garbage outputs that you mistake for insight because the system has always run this way.
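The data-engineering side of the analogy can be made concrete. A minimal lineage record — hypothetical field names, a sketch rather than any real pipeline tool — tracks exactly the things most people cannot name for their own beliefs:

```python
from dataclasses import dataclass

@dataclass
class ProvenanceRecord:
    """Lineage metadata a data engineer would demand before trusting a dataset."""
    origin: str                 # where the raw data came from
    transformations: list[str]  # every step that altered it on the way in
    quality_checks: list[str]   # validations it has passed
    audit_trail: bool           # can each value be traced back to its source?

    def is_trusted(self) -> bool:
        # Untraced data with no checks stays untrusted, no matter how long
        # the pipeline has "always run this way".
        return self.audit_trail and bool(self.quality_checks)

# A schema of unknown origin, in data terms:
legacy = ProvenanceRecord(origin="unknown", transformations=[],
                          quality_checks=[], audit_trail=False)
print(legacy.is_trusted())  # False
```

Run the same record against your beliefs about leadership or money and most of them come back looking like `legacy`: origin unknown, no checks, no audit trail.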
The provenance question is not "Is this schema true?" It is the prior question: "Given where this schema came from, how much confidence should I place in it before independent verification?" A schema from a controlled experiment with replication deserves provisional trust. A schema from a motivational speaker who has never published their methodology deserves provisional skepticism. Not rejection — skepticism. The calibration of confidence to evidence quality.
When authority fails and when it holds
There is a tension in source evaluation between two valid principles. The first: experts know more than non-experts, and it is rational to defer to expertise. The second: expertise can be wrong, biased, or applied outside its valid domain, and blind deference is a failure mode.
Both are true. The skill is knowing when each applies.
Authority holds when the expert has deep domain-specific experience, when their claims are within their domain of competence, when they can articulate the evidence and reasoning behind their position, when their claims are consistent with peer consensus (or when they can explain why they diverge), and when they have no obvious incentive to distort.
Authority fails when credentials in one domain are used to claim authority in another (a physicist opining on economics), when the expert has financial or reputational incentives tied to a specific conclusion, when the claim is unfalsifiable or the expert refuses to specify what evidence would change their mind, when the expert's confidence exceeds the actual evidence base, and when "trust me, I am an expert" replaces transparent reasoning.
The meta-schema you need is not "trust experts" or "distrust experts." It is: "calibrate trust to the specific conditions of this claim, this domain, and this source's position within it." That calibration is a skill. It improves with practice. And it requires you to do the work of evaluation rather than outsourcing your judgment to authority or to reflexive skepticism.
Protocol: the schema source audit
Here is a practice for systematically evaluating the sources behind your operating schemas.
Step 1: Select a domain. Pick one area of your life where your decisions have significant consequences — career strategy, health practices, relationship patterns, financial management, creative process. This is the domain you will audit.
Step 2: List your operating schemas. Write down five to ten beliefs that actively guide your behavior in this domain. Be specific. Not "I believe in healthy eating" but "I believe that intermittent fasting improves my cognitive performance." Not "I believe in good leadership" but "I believe that giving people autonomy produces better outcomes than giving them detailed instructions."
Step 3: Trace the provenance. For each schema, write down where it came from. Be honest. Categories: direct personal experience, specific expert or researcher, a book or article (name it), cultural absorption, someone I admire, a course or workshop, unknown. If the answer is "unknown," that is the most important finding. You are running your life on code of unknown origin.
Step 4: Evaluate the source. For each identified source, assess: (a) Did this source have direct experience or evidence in the specific domain of the claim? (b) Did this source have incentives that might bias the claim? (c) Have I verified this claim through any independent channel — a second source, personal experimentation, peer-reviewed evidence? (d) Am I applying this schema in the same context where the source developed it, or have I generalized it to a different domain?
Step 5: Rate your confidence. Based on the source evaluation, assign each schema a confidence rating: high (strong source, independently verified), medium (reasonable source, not yet verified), or low (weak source, unknown origin, or no independent verification). Low-confidence schemas are not necessarily wrong. They are your priority list for deeper investigation.
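The five steps can be sketched as a small audit structure. This is illustrative only — the example beliefs and the exact rating rule are mine, not a fixed part of the protocol:

```python
from dataclasses import dataclass

@dataclass
class SchemaEntry:
    belief: str                   # Step 2: a specific operating belief
    source: str                   # Step 3: provenance category
    source_has_evidence: bool     # Step 4a: direct experience/evidence in domain?
    source_is_unbiased: bool      # Step 4b: free of distorting incentives?
    independently_verified: bool  # Step 4c: confirmed via a second channel?
    same_context: bool            # Step 4d: applied where it was developed?

    def confidence(self) -> str:
        """Step 5: map the Step 4 answers to a confidence rating."""
        if self.source == "unknown":
            return "low"  # unknown origin is automatically the priority list
        checks = [self.source_has_evidence, self.source_is_unbiased,
                  self.same_context]
        if self.independently_verified and all(checks):
            return "high"
        if sum(checks) >= 2:
            return "medium"
        return "low"

audit = [
    SchemaEntry("Intermittent fasting improves my cognitive performance",
                "direct experience", True, True, False, True),
    SchemaEntry("Leaders must always project certainty",
                "unknown", False, False, False, False),
]
for entry in audit:
    print(f"{entry.confidence():<6} {entry.belief}")
```

The low-confidence entries are not marked wrong; they are the queue for deeper investigation, exactly as Step 5 describes.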
This audit will not take long. Thirty minutes, once. But it will change how you relate to your own beliefs. You will start noticing when you are operating on a schema you have never evaluated. You will start asking "where did I get this?" before asking "is this true?" And that prior question — the provenance question — will catch more errors than any amount of logical analysis applied after the fact.
Your AI tools have this problem too
If you use AI assistants as thinking partners — and within the Completions framework, you should — source evaluation applies to their outputs with particular force.
An AI assistant draws on training data spanning billions of documents. When it offers you a framework, a recommendation, or an analysis, you cannot trace that output back to a specific evidence base. The AI does not know where its schemas came from any more than you know where your cultural schemas came from. It absorbed patterns from data and now reproduces them with confident fluency.
This does not make AI outputs useless. It makes them unsourced. Treat them accordingly: as hypotheses to be verified, not as conclusions to be adopted. When an AI suggests a schema — "teams with psychological safety outperform those without" — the appropriate response is not to accept or reject the claim. It is to ask: what is the evidence base for this? Where can I find the original research? What are the boundary conditions? What does the best counterargument look like?
The AI is a schema generation tool. You are the schema evaluation tool. Do not outsource the second function to the first.
From source evaluation to abstraction layers
You now have a practice for evaluating where your schemas come from. But there is a deeper structural question that source evaluation opens up.
When you trace a schema back to its source and evaluate that source's reliability, you are operating at a specific level of abstraction. You are asking about a particular claim from a particular origin. But schemas do not exist in isolation. They stack. A belief about "how to manage a team" sits on top of a belief about "how motivation works," which sits on top of a belief about "what people fundamentally want." Each layer is sourced differently. Each layer has its own reliability profile.
In L-0336, you will learn about schema abstraction layers — how schemas operate at different levels of generality, how higher-level schemas constrain and shape lower-level ones, and why evaluating a schema sometimes means evaluating not just its own source but the sources of the schemas it depends on. Source evaluation tells you whether a particular schema is well-grounded. Abstraction layers tell you whether your entire schema architecture is well-grounded — whether the foundations are as solid as the structures built on top of them.
The question shifts from "Where did this belief come from?" to "Where did the beliefs underneath this belief come from?" That is where the real leverage is.