You don't understand an abstraction until you can point at something concrete.
Say the word "leverage" in a business meeting and everyone nods. Ask five people in that meeting to give you a specific, concrete example of leverage from their own work this quarter, and you'll get silence, vague gestures, or five incompatible definitions dressed up as examples.
This is one of the most common failure modes in human reasoning: the illusion of understanding created by fluency with abstract language. You can use a word correctly in a sentence, slot it into the right position in an argument, even define it on demand — and still not understand it in any way that lets you apply it, recognize it in novel situations, or explain it to someone who has never encountered it. The gap between verbal fluency and genuine comprehension is precisely the gap that the exemplifies relationship closes.
Over the past nine lessons in Phase 13, you've been building a vocabulary of relationship types. You've seen that relationships are as important as entities (L-0241), that making them explicit replaces assumptions (L-0242), and that they come in distinct types — causal, temporal, hierarchical, associative (L-0243). You've learned to distinguish directed from undirected relationships (L-0244), to recognize that relationship strength varies (L-0245), and to map prerequisite (L-0246), enabling (L-0247), contradictory (L-0248), and supporting (L-0249) relationships.
Now we arrive at the relationship type that makes all other types usable: the exemplifies relationship. This is the link between an abstract principle and a concrete instance of that principle. Without it, your knowledge graph is an elegant structure floating in mid-air. With it, every node is anchored to reality.
What the exemplifies relationship actually does
An exemplifies relationship connects two different levels of abstraction. On one end, you have a general principle, concept, or pattern. On the other end, you have a specific instance — something you can see, touch, recall, or point to. The relationship says: "This concrete thing is an instance of that abstract pattern."
This sounds trivially obvious, which is why most people skip it. That skip is the source of enormous cognitive debt.
Consider how concept mapping works. Joseph Novak developed concept maps at Cornell in 1972 as a research tool for tracking changes in children's science knowledge. His theoretical foundation came from David Ausubel's assimilation theory, which held that the single most important factor in learning is what the learner already knows. Ausubel distinguished between meaningful learning — where new concepts are assimilated into existing cognitive structure — and rote learning, where new information is stored without connecting to anything.
Novak's insight was that the connections matter as much as the concepts. A concept map is a node-link diagram where the nodes are concepts and the links are labeled relationships between them. The labels are critical: they specify what kind of relationship exists. "Gravity causes objects to fall." "A ball rolling downhill exemplifies kinetic energy." "Friction opposes motion." Without the labels, you have a picture of things near other things. With them, you have a map of meaning.
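A concept map of this kind can be sketched as plain labeled triples. This is a minimal illustration, not any particular concept-mapping tool's format; the concept names and labels are taken from the examples above:

```python
# A concept map as (source, label, target) triples. The label is what
# turns "things near other things" into a map of meaning.
concept_map = [
    ("gravity", "causes", "objects falling"),
    ("a ball rolling downhill", "exemplifies", "kinetic energy"),
    ("friction", "opposes", "motion"),
]

def links_from(concept, triples):
    """Return every labeled relationship leaving a given concept."""
    return [(label, target) for source, label, target in triples
            if source == concept]

print(links_from("gravity", concept_map))  # [('causes', 'objects falling')]
```

Storing the label alongside the link is the whole point: querying the map tells you not just what "gravity" connects to, but how.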
When Nesbit and Adesope conducted a meta-analysis of 55 studies involving 5,818 participants across domains from Grade 4 through postsecondary education, they found that concept mapping was consistently associated with increased knowledge retention and transfer. But the effect wasn't uniform. The studies where concept mapping produced the largest gains were the ones where students built their own maps and explicitly labeled the relationships between concepts — including, critically, the exemplifies relationships that connected abstract principles to concrete instances.
The reason is structural. An abstract concept sitting alone in your memory is stored as a verbal proposition — a string of words. An abstract concept connected to a concrete example is stored in two systems simultaneously: the verbal system (the definition) and the experiential system (the memory of the instance). Allan Paivio's dual coding theory, developed in the 1970s and validated across decades of subsequent research, demonstrates that information encoded in both verbal and nonverbal channels is more reliably recalled, more easily retrieved, and more flexibly applied than information encoded in only one.
The exemplifies relationship is the mechanism that activates dual coding. It is the bridge between the word and the world.
Why abstractions without examples are dangerous
There is a specific failure pattern that occurs when abstractions accumulate without concrete grounding, and it is more insidious than simple forgetting.
Ungrounded abstractions create the feeling of understanding without its substance. You can chain them together into elaborate arguments. You can use them in conversation and receive social validation. You can even teach them to others — passing along the same weightless verbal propositions you received, creating a chain of confident incomprehension.
Dedre Gentner's structure-mapping theory of analogical reasoning, first published in 1983, reveals why this happens. Gentner showed that when people reason by analogy, they map the relational structure of a familiar domain (the "base") onto a less familiar domain (the "target"). Crucially, what transfers is relations between objects, not attributes of objects. When you explain electricity by analogy to water flowing through pipes, you aren't saying that wires are wet or that electrons are blue. You are saying that the relationship between voltage and current in a circuit is structurally the same as the relationship between water pressure and flow rate in a plumbing system.
But here is the catch: analogical mapping only works if the base domain is well understood. And "well understood" means grounded in concrete experience. If your understanding of water pressure is itself an ungrounded abstraction — if you've never felt the difference between a garden hose and a fire hose, never watched water find its level, never noticed that the shower pressure drops when someone flushes a toilet — then the analogy does nothing. You are mapping one set of abstract symbols onto another set of abstract symbols. The structure maps correctly, but neither end is anchored to anything real.
This is why exemplifies relationships are not optional. They are the foundation on which all other relationship types rest. A causal relationship between two abstractions is a hypothesis. A causal relationship between two abstractions that each have concrete exemplars is knowledge you can test, apply, and revise. The exemplifies links are what make the difference.
The science of moving between concrete and abstract
The question of how to sequence concrete and abstract instruction has been studied extensively, and the answer is not as simple as "start with examples."
Jerome Bruner's theory of cognitive development, formalized across his work in the 1960s, proposes three modes of representation through which learners encounter any concept. The enactive mode is knowledge through direct physical action — you understand balance by standing on one foot. The iconic mode is knowledge through images and spatial representations — you understand balance by looking at a diagram of a seesaw. The symbolic mode is knowledge through abstract language and notation — you understand balance by reading the equation for torque.
Bruner's key claim was not just that these modes exist, but that they form a developmental progression. New concepts are best introduced through enactive experience, then represented iconically, and finally encoded symbolically. Each mode builds on the previous one. The symbolic representation is not a replacement for the enactive experience — it is a compression of it. And critically, when the symbolic representation becomes disconnected from its enactive and iconic foundations, it loses its meaning. It becomes an empty token that can be manipulated syntactically but not understood semantically.
This insight was formalized into an instructional technique called concreteness fading, systematically reviewed by Emily Fyfe, Nicole McNeil, and Robert Goldstone. Their research synthesized decades of evidence showing that the optimal instructional sequence is: concrete first, then gradually abstract. Start with physical manipulatives or vivid specific examples. Then introduce idealized visual representations that preserve the structure but remove irrelevant surface details. Then move to abstract symbols and formal notation.
The theoretical benefits they identified map precisely onto the exemplifies relationship:
- Concrete materials help learners interpret abstract symbols by providing a referent — the exemplifies link gives the abstraction something to point at.
- Physical and perceptual experiences provide embodied grounding that stabilizes abstract thinking during later manipulation.
- Memorable concrete images serve as retrieval anchors when abstract symbols lose their meaning under cognitive load.
- The gradual fading from concrete to abstract guides learners to strip away irrelevant surface features and attend to the structural relationships that matter.
The research showed that concreteness fading outperformed both concrete-only instruction (which tends to trap learners in context-specific thinking) and abstract-only instruction (which tends to produce ungrounded symbol manipulation). The exemplifies relationship, in other words, is not just a one-time link you create and forget. It is a bridge you cross in both directions — from concrete to abstract when you are building understanding, and from abstract back to concrete when you are applying it.
Multiple examples and the problem of transfer
One concrete example is better than none. But one example is also dangerous, because it invites a specific error: overgeneralizing from the surface features of that single instance.
If the only example of "leverage" in your knowledge base is "using a crowbar to pry open a crate," you will tend to think of leverage as something physical, involving a rigid bar and a fulcrum. You'll miss financial leverage, social leverage, temporal leverage, and informational leverage — all of which share the abstract structure (small input, amplified output through a mechanism) but share none of the surface features.
This is where Gentner's research on analogical reasoning becomes directly actionable. In a series of experiments, Gentner, Loewenstein, and Thompson found that when learners compared two or more concrete examples of the same abstract principle, they were significantly more likely to extract the deep structural pattern and transfer it to novel situations. Comparison forces attention away from surface features and toward the relational structure that the examples share. A single example invites memorization. Multiple examples invite abstraction.
The practical implication is that your concept maps should never have just one exemplifies link per abstract node. Each principle, each pattern, each concept in your knowledge graph should be grounded by at least three concrete examples drawn from different domains. "Feedback loop" should be connected to the thermostat in your house, the way your mood affects your posture which affects your mood, and the way customer reviews influence sales which generate more reviews. Three domains. Three surface profiles. One shared structure. That triangulation is what makes the abstract concept robust — retrievable in multiple contexts, resistant to being confused with superficially similar but structurally different patterns.
A 2024 replication study published in the journal Teaching of Psychology confirmed this finding in applied educational settings: concrete examples significantly enhanced recognition of previously unseen instances of abstract concepts. The effect was not about memorizing the examples themselves. It was about the examples training the learner to recognize the abstract pattern in new concrete situations. The exemplifies relationship, when multiplied across domains, becomes a pattern-recognition engine.
Your Third Brain: the grounding problem in artificial intelligence
The challenge of connecting abstractions to concrete reality is not unique to human cognition. It is one of the foundational problems in artificial intelligence, and the way AI systems fail at it illuminates why the exemplifies relationship matters so deeply for your own cognitive infrastructure.
Large language models like the one you may be using right now operate almost entirely in the symbolic mode Bruner described. They process tokens — abstract symbols representing words and subwords — and learn statistical relationships between those tokens from massive text corpora. They can produce fluent, grammatically correct, contextually appropriate strings of abstractions. They can define "leverage" better than most humans. They can chain abstract concepts together into arguments that read as authoritative.
But they hallucinate. They generate statements that are syntactically perfect and factually false — not because they are lying, but because they lack grounding. A model that has only ever processed symbols has no enactive or iconic layer to check its symbolic operations against. It can state that "the capital of Canada is Toronto" with the same confidence it states that "the capital of Canada is Ottawa," because both are well-formed symbol strings and the model has no concrete referent to adjudicate between them.
Researchers working on this problem distinguish between "true forgetting" — where knowledge is actually lost — and "spurious forgetting" — where the model possesses the knowledge but has lost the ability to access it in the right context. This distinction maps directly to the human experience of ungrounded abstractions. You haven't forgotten what "leverage" means. You just can't access that meaning when you encounter leverage in a context that doesn't look like a crowbar.
The most effective mitigation strategies for AI hallucination all involve some form of grounding: retrieval-augmented generation (RAG) connects the model's outputs to specific source documents. Knowledge graphs provide structured factual anchors. Chain-of-thought prompting forces the model to work through concrete intermediate steps rather than jumping between abstractions. Neuro-symbolic systems route high-abstraction reasoning through formal verification engines that check symbolic claims against concrete data.
Every one of these strategies is a computational analogue of the exemplifies relationship. They all work by connecting abstract propositions to concrete instances — giving the system something real to check its symbols against. When you build exemplifies links in your own knowledge graph, you are implementing the same architecture: grounding your abstractions in concrete reality so they can be verified, applied, and corrected.
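The shared architecture of these strategies can be shown with a toy grounding check, in the spirit of retrieval-augmented generation. The "knowledge store" and matching rule here are illustrative assumptions, not a real RAG pipeline:

```python
# A claim is only accepted when a concrete stored record supports it;
# with no referent available, the system declines to adjudicate rather
# than asserting a well-formed but ungrounded symbol string.
knowledge_store = {
    "capital of Canada": "Ottawa",
    "capital of Australia": "Canberra",
}

def grounded(subject, claimed_value):
    """Check an abstract claim against a concrete record."""
    stored = knowledge_store.get(subject)
    if stored is None:
        return "ungrounded"   # no concrete referent to check against
    return "supported" if stored == claimed_value else "contradicted"

print(grounded("capital of Canada", "Toronto"))  # contradicted
print(grounded("capital of Canada", "Ottawa"))   # supported
print(grounded("capital of France", "Paris"))    # ungrounded
```

The key design choice is the third outcome: without a concrete anchor, the honest answer is "I can't verify this," which is exactly what a purely symbolic system cannot say.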
The difference is that you have something AI systems currently lack: actual embodied experience. You have stood in the rain. You have burned your hand on a stove. You have felt the shift in a room when someone says the wrong thing. These enactive experiences are not decorations on your knowledge. They are the foundation. The exemplifies relationship is how you keep that foundation connected to the abstract structures you build on top of it.
Concept mapping as grounding practice
Concept mapping is not just a study technique. It is the operational practice of building and maintaining exemplifies relationships.
When Novak's students drew concept maps, they weren't just organizing information spatially. They were making explicit the relationships that were previously implicit — and the most valuable of those relationships were the exemplifies links. A student who drew an arrow from "photosynthesis" to "the plant on my windowsill turning toward the sun" had done something that no amount of reading the definition could accomplish: she had created a bidirectional retrieval path between the abstract and the concrete.
The bidirectionality matters. The exemplifies relationship doesn't just help you move from abstract to concrete (understanding what a principle means by recalling an example). It also helps you move from concrete to abstract (recognizing what principle a new situation is an instance of). This second direction — from particular to general — is what we call pattern recognition, and it is the basis of expertise.
Expert physicians don't diagnose by running through abstract decision trees. They diagnose by recognizing patterns — this patient's presentation reminds them of concrete cases they have seen before, and those cases are linked to abstract diagnostic categories. Expert programmers don't debug by reasoning from first principles of computer science. They debug by recognizing that this bug looks like that bug they fixed three months ago, and both are instances of an abstract pattern (a race condition, a null pointer, an off-by-one error). The exemplifies links are what make expertise fast.
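The bidirectional retrieval path described above can be sketched as a pair of indexes built over the same exemplifies links; the pairs and names are illustrative:

```python
from collections import defaultdict

# Each pair records (concrete example, abstract principle).
exemplifies = [
    ("thermostat cycling the furnace", "feedback loop"),
    ("reviews driving sales driving reviews", "feedback loop"),
    ("race condition in last week's deploy", "concurrency bug"),
]

examples_of = defaultdict(list)   # abstract -> concrete instances
principle_behind = {}             # concrete -> abstract pattern
for example, principle in exemplifies:
    examples_of[principle].append(example)
    principle_behind[example] = principle

# Abstract to concrete: what does "feedback loop" look like in practice?
print(examples_of["feedback loop"])
# Concrete to abstract: what pattern is this situation an instance of?
print(principle_behind["race condition in last week's deploy"])
```

One set of links, two directions of lookup: the forward index supports understanding, and the reverse index is the pattern recognition that makes expertise fast.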
This is why the practice of concept mapping at scale — maintaining a personal knowledge graph where abstract principles are explicitly linked to concrete instances — is not academic overhead. It is the construction of cognitive infrastructure that makes recognition, retrieval, and application possible. Each exemplifies link you add is an investment in your future ability to think.
Protocol: The grounding audit
Here is the operational protocol for ensuring your abstractions stay grounded. Run this audit on any concept, principle, or framework you consider important enough to keep in your active knowledge.
- Name the abstraction. Write down the concept you want to audit. State it as a proposition: "Feedback loops amplify small changes into large effects." Not a single word. A relationship.
- List your current examples. Without looking anything up, write down every concrete instance of this concept you can recall from memory. Don't edit. Don't evaluate. Just list.
- Count and assess. If you have zero examples, you don't understand this concept — you have memorized it. If you have one example, your understanding is fragile and context-bound. If you have two, you have a comparison but not yet a pattern. Three or more from different domains means you have a robust, transferable understanding.
- Fill the gaps. For any concept with fewer than three examples, actively generate new ones. Use Bruner's three modes: find an enactive example (something you've physically done or experienced), an iconic example (something you can visualize or diagram), and a symbolic example (a formal case from a textbook, research paper, or documented system).
- Test the links. For each example, write one sentence that makes the exemplifies relationship explicit: "[This specific thing] is an instance of [this abstract principle] because [this structural feature is shared]." If you cannot write that sentence, the link is decorative, not structural.
- Explain without definitions. Try to explain the abstract concept to someone using only your examples. No jargon. No definitions. Just the concrete instances and the pattern they share. If the person understands the concept, your grounding is solid. If they don't, your examples are either too narrow, too domain-specific, or not actually instances of what you think they are.
- Schedule re-grounding. Abstractions drift. The examples that grounded a concept six months ago may no longer be vivid or relevant. When you revisit a concept and find that your concrete examples feel stale, generate fresh ones from your recent experience. Grounding is not a one-time event. It is ongoing maintenance.
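The counting steps of this audit are mechanical enough to sketch in code. This is a hypothetical illustration over a note collection represented as (example, domain) pairs; the concepts and thresholds mirror the protocol above:

```python
# Classify each concept by how well its exemplifies links ground it:
# zero examples = memorized, one = fragile, two = a comparison,
# three or more from different domains = robust.
def audit(concepts):
    report = {}
    for concept, examples in concepts.items():
        domains = {domain for _, domain in examples}
        if not examples:
            report[concept] = "memorized, not understood"
        elif len(examples) == 1:
            report[concept] = "fragile: single context"
        elif len(examples) == 2:
            report[concept] = "comparison, not yet a pattern"
        elif len(domains) >= 3:
            report[concept] = "robust: grounded across domains"
        else:
            report[concept] = "needs examples from more domains"
    return report

notes = {
    "feedback loop": [("thermostat", "engineering"),
                      ("mood and posture", "psychology"),
                      ("reviews and sales", "business")],
    "leverage": [("crowbar on a crate", "physics")],
    "entropy": [],
}
for concept, verdict in audit(notes).items():
    print(f"{concept}: {verdict}")
```

The judgment steps, of course, resist automation: only you can test whether a link sentence is structural rather than decorative, or explain a concept through examples alone.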
The goal is not to eliminate abstraction. Abstraction is powerful — it lets you compress experience into transferable principles. The goal is to ensure that every abstraction in your cognitive infrastructure has at least three concrete anchors holding it to reality. An abstraction without anchors is a kite without a string. It looks impressive. It goes wherever the wind takes it.
Now that you understand how exemplifies relationships ground individual abstractions, the next lesson (L-0251) takes a broader view: what happens when you chain multiple relationships together in sequence. Causal chains — where A causes B causes C causes D — are sequences of relationships that reveal the full mechanism behind an outcome. The exemplifies links you've built here are what will let you trace those chains without losing contact with reality at any step.
Sources
- Novak, J. D., & Gowin, D. B. (1984). Learning How to Learn. Cambridge University Press.
- Novak, J. D., & Cañas, A. J. (2008). "The Theory Underlying Concept Maps and How to Construct Them." Institute for Human and Machine Cognition.
- Nesbit, J. C., & Adesope, O. O. (2006). "Learning With Concept and Knowledge Maps: A Meta-Analysis." Review of Educational Research, 76(3), 413-448.
- Bruner, J. S. (1966). Toward a Theory of Instruction. Harvard University Press.
- Gentner, D. (1983). "Structure-Mapping: A Theoretical Framework for Analogy." Cognitive Science, 7(2), 155-170.
- Gentner, D., Loewenstein, J., & Thompson, L. (2003). "Learning and Transfer: A General Role for Analogical Encoding." Journal of Educational Psychology, 95(2), 393-408.
- Fyfe, E. R., McNeil, N. M., & Goldstone, R. L. (2014). "Concreteness Fading in Mathematics and Science Instruction: A Systematic Review." Educational Psychology Review, 26(1), 9-25.
- Paivio, A. (1986). Mental Representations: A Dual Coding Approach. Oxford University Press.
- Micallef, A., & Newton, P. M. (2024). "The Use of Concrete Examples Enhances the Learning of Abstract Concepts: A Replication Study." Teaching of Psychology, 49(3).
- Ausubel, D. P. (1968). Educational Psychology: A Cognitive View. Holt, Rinehart and Winston.