You understood each other perfectly. Except you didn't.
You and a colleague agree that the product needs to be "simpler." You both nod. You both feel aligned. You leave the meeting and start executing in opposite directions — because her "simpler" meant fewer features and yours meant fewer clicks. The word was identical. The meaning was not. And neither of you noticed the divergence until it had cost two weeks of work.
This is not a story about careless communication. This is a story about how language actually works. Every word you use carries a cloud of possible meanings, and which meaning activates in someone's head depends on their context — their experience, their discipline, their emotional state, the conversation they had an hour ago. You do not transmit meaning when you speak. You transmit sounds. The other person constructs meaning from those sounds using their own mental models. And their models are not yours.
Linguists call this phenomenon polysemy: a single word carrying multiple related but distinct meanings. "Run" has over 600 documented senses in English. "Set" has nearly 500. But the problem extends far beyond dictionary curiosities. The words that cause the most damage are the ones everyone thinks they understand — "strategy," "quality," "ownership," "alignment," "done." These are the words people never stop to define because they assume shared understanding. And that assumption is where communication breaks.
Words don't carry meaning. Context creates it.
Ludwig Wittgenstein, in his Philosophical Investigations (1953), demolished the idea that words have fixed meanings pointing to fixed things in the world. He replaced it with a radical alternative: the meaning of a word is its use in the language. He called the different contexts of use "language games" — not because language is trivial, but because meaning, like a game, emerges from shared rules and practices that are always local, always contextual.
Consider the word "Water!" Wittgenstein would point out that it functions completely differently depending on the situation. Shouted in a burning building, it is a command. Spoken at a dinner table, it is a request. Written on a chemical label, it is a classification. Gasped in the desert, it is an exclamation. The sound is identical. The meaning is entirely dependent on the language game being played. And there is no master list of language games. They are as varied and irregular as human life itself.
This insight has a direct practical consequence: you cannot determine what someone means by looking at the words alone. You must understand which game they are playing — what context they are operating in, what practices they are embedded in, what they are trying to accomplish. When you skip this step, you default to your context and project your meaning onto their words. This happens automatically, beneath conscious awareness, and it is the single most common source of communication failure in professional and personal life.
You systematically overestimate how well you've been understood
The intuition that "I said it clearly, so they understood it" is not just wrong — it is measurably wrong. Boaz Keysar and Anne Henly demonstrated this in a 2002 study published in Psychological Science. Participants were asked to communicate one of two possible meanings of ambiguous sentences to a listener. Speakers estimated that they had successfully conveyed their intended meaning 72% of the time. The actual success rate was 61%. Speakers consistently overestimated their effectiveness.
Critically, the overestimation was specific to speakers. Observers who were told the speaker's intention did not systematically overestimate the speaker's success. The bias lives in the act of speaking itself — when you know what you mean, you project that knowledge onto your listener. Psychologists call this the illusion of transparency: the feeling that your internal states are more visible to others than they actually are.
This illusion compounds with polysemy. Not only are your words carrying multiple possible meanings, but you are systematically overconfident that the other person selected the same meaning you intended. You think communication happened. It didn't — or at least, not the communication you think happened. The quote often attributed to George Bernard Shaw captures it: "The single biggest problem in communication is the illusion that it has taken place."
Every word carries a private connotation
The problem goes deeper than which dictionary definition someone selects. Even when two people agree on the denotation — the literal referent — of a word, they may diverge wildly on its connotation: the emotional and evaluative associations the word triggers.
Charles Osgood demonstrated this empirically in The Measurement of Meaning (1957), introducing the semantic differential technique. He asked large groups of people to rate concepts on bipolar scales — good/bad, strong/weak, active/passive — and discovered that the same word systematically triggers different evaluative, potency, and activity associations for different people. A 2022 study in Humanities and Social Sciences Communications replicated Osgood's methods and found that even young adults in the same culture showed significant gender differences in the emotional connotative meanings of common words.
Consider the word "ambitious." For one person, it activates connotations of drive, excellence, and admiration. For another, it activates connotations of ruthlessness, selfishness, and social threat. Both people can define "ambitious" identically in literal terms while having opposite emotional responses to being described as ambitious. And because connotative meaning operates largely below conscious awareness, neither person realizes they are reacting to a different word.
This is why feedback conversations go wrong. "Your work is aggressive" means "bold and impressive" in one manager's connotative world and "hostile and alienating" in the recipient's. The sentence is spoken in one emotional register and heard in another, and both parties think the communication was clear.
The same term means different things in different rooms
Polysemy becomes most dangerous across disciplinary boundaries. The same word often has a technical meaning in one field that bears little resemblance to its technical meaning in another — and both meanings differ from the everyday usage.
"Architecture" in building design means the structural and aesthetic composition of physical space. In software engineering, it means the high-level organization of system components and their interactions. In enterprise strategy, it means the alignment of business capabilities with technology systems. All three uses share a loose metaphorical connection (structure, design, the arrangement of parts) but the operational content is completely different. When a software architect and a building architect say "the architecture needs work," they are not merely using different jargon — they are thinking in different conceptual frameworks.
Network science provides a particularly clear example of this collision. Mathematicians use "graph/vertex/edge." Physicists use "network/node/edge." Computer scientists use "network/node/link." Social scientists use "network/actor/tie." These are not synonym choices. Each vocabulary carries the assumptions and methods of its home discipline. When an interdisciplinary team meets and someone says "node," the term activates different default behaviors for each person in the room. Healthcare communication research has documented that this kind of terminology confusion leads to misunderstandings between stakeholders, reluctance to collaborate, and unsuccessful implementation of shared projects.
The lesson: the more specialized your vocabulary, the more you need to explicitly check that your listener's definition matches yours. Expertise makes this harder, not easier, because expertise makes your definitions feel self-evident. The more fluent you are in a technical language, the more invisible its assumptions become.
How AI reveals the mechanics of meaning
Modern language models offer a striking computational confirmation of what Wittgenstein argued philosophically: meaning is context, not reference.
Early word embedding systems like Word2Vec (2013) assigned each word a single fixed vector — a point in mathematical space. This meant "bank" in "river bank" and "bank" in "bank robbery" had identical representations. The model could not distinguish senses because it treated words as having stable, context-independent meanings. It had the same problem humans have when they assume shared vocabulary equals shared understanding.
BERT (2018) and its successors solved this by generating contextualized embeddings: a word's vector changes depending on the surrounding text. The word "bank" in "fishing by the bank" produces a different mathematical representation than "bank" in "robbing the bank." Research from Stanford's AI Lab showed that these context-dependent representations produce significantly better results on virtually every language task, confirming that meaning is not a property of the word but a property of the word-in-context.
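The static-versus-contextual contrast can be shown with a toy sketch. This is not a real embedding model: the hash-derived vectors and the crude neighbor-averaging below are illustrative assumptions, standing in for Word2Vec-style and BERT-style behavior respectively.

```python
# Toy contrast between a static, context-independent word vector and a
# context-dependent one. Purely illustrative -- not a real language model.
import hashlib

def static_vector(word, dim=4):
    """Word2Vec-style: the same word always maps to the same vector."""
    h = hashlib.sha256(word.encode()).digest()
    return tuple(b / 255 for b in h[:dim])

def contextual_vector(word, sentence, dim=4):
    """Crude stand-in for BERT-style behavior: blend the word's base
    vector with the average vector of its surrounding words."""
    context = [t for t in sentence.lower().split() if t != word]
    ctx_vectors = [static_vector(t, dim) for t in context]
    ctx_avg = [sum(vals) / len(context) for vals in zip(*ctx_vectors)]
    return tuple((b + c) / 2 for b, c in zip(static_vector(word, dim), ctx_avg))

s1 = "fishing by the river bank"
s2 = "robbing the downtown bank"

# Static: identical representation in both sentences.
assert static_vector("bank") == static_vector("bank")

# Contextual: the surrounding words pull the two representations apart.
v1 = contextual_vector("bank", s1)
v2 = contextual_vector("bank", s2)
assert v1 != v2  # different contexts, different vectors
```

The point of the sketch is only the shape of the fix: once the representation is a function of the word *and* its context, "bank" stops being one ambiguous point and becomes as many points as there are contexts.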
Even more revealing: researchers found that the variety of contexts a word appears in, rather than its inherent number of dictionary senses, is what drives variation in contextualized representations. The more contexts a word inhabits, the more its meaning stretches. This maps precisely onto the human experience of polysemy — the words that cause the most miscommunication are the high-frequency words that appear in the most varied contexts, not the obscure words with unusual definitions.
This has a direct implication for your epistemic infrastructure. When you externalize your thinking — writing notes, documenting decisions, building a knowledge graph — you face the same problem language models face with static embeddings. If you use the word "strategy" in twelve different notes without contextualizing it, you have created twelve ambiguous references. Your future self, reading those notes months later, will construct meaning from a context that may differ from the one you wrote in. The solution is the same one that improved AI language understanding: make context explicit. Don't write "we need better alignment." Write "alignment between marketing's quarterly targets and engineering's sprint capacity, measured by the percentage of features shipped that were in the original quarterly plan."
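One way to make that context explicit in your own notes is to store the definition alongside the term rather than reusing the bare word. A minimal sketch, where the `OperationalDefinition` structure and all its field names are assumptions for illustration:

```python
# Sketch: bind an ambiguous term to an operational definition instead of
# writing the bare word twelve different ways across twelve notes.
# The structure and field names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class OperationalDefinition:
    term: str             # the ambiguous word
    context: str          # which language game it belongs to
    measure: str          # how you would verify it
    example: str          # a concrete positive instance
    counter_example: str  # a concrete instance that does NOT count

alignment = OperationalDefinition(
    term="alignment",
    context="marketing quarterly targets vs. engineering sprint capacity",
    measure="% of shipped features that were in the original quarterly plan",
    example="9 of 10 shipped features were planned: aligned",
    counter_example="4 of 11 shipped features were planned: not aligned",
)
```

The structure itself is incidental; what matters is that your future self inherits the context you wrote in, not just the word.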
The protocol: operational definitions at decision points
You cannot eliminate polysemy. You cannot rewire language to produce one-to-one mappings between words and meanings. What you can do is intervene at the moments where polysemy does the most damage — the moments where words become commitments.
Step 1: Identify high-stakes vocabulary. In any decision, strategy document, or team agreement, circle the words that are doing the most work. These are usually abstract nouns: "quality," "ownership," "alignment," "scale," "simple," "done," "ready," "good." They feel clear. They are not.
Step 2: Request operational definitions. For each high-stakes word, ask: "What does this look like concretely? How would we measure it? What is an example and a counter-example?" This forces the move from abstract agreement to concrete specification. It is the verbal equivalent of what AI researchers did when they moved from static to contextualized embeddings — binding the word to a specific context.
Step 3: Compare definitions before proceeding. Do not assume alignment — verify it. Ask each person involved to state their operational definition independently, then compare. The gaps you discover are not evidence of poor communication. They are the normal state of language. Finding them before execution is the entire point.
Step 4: Document the agreed definition. Write it down. Put it in the project brief, the team charter, the decision record. Operational definitions have a half-life — people drift back to their private meanings over time. The document serves as a recalibration point.
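Steps 2 and 3 above can even be sketched mechanically: collect each person's operational definition independently, then surface any divergence before execution begins. The names and data layout below are assumptions for illustration:

```python
# Sketch of the protocol's comparison step: each person states their
# operational definition of a high-stakes term independently, and we
# check for divergence before anyone starts executing.
definitions = {
    "done": {
        "alice": "merged to main and deployed to production",
        "bob": "code complete and unit tests passing",
    },
    "simple": {
        "alice": "fewer clicks per task",
        "bob": "fewer clicks per task",
    },
}

def find_divergence(term, defs):
    """Return the conflicting definitions if people diverge, else None."""
    stated = set(defs[term].values())
    return defs[term] if len(stated) > 1 else None

for term in definitions:
    gap = find_divergence(term, definitions)
    if gap:
        # Discovering this gap now costs minutes, not weeks.
        print(f"'{term}' diverges: {gap}")
```

A real version would live in a project brief, not a script; the code only makes the comparison step concrete.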
This protocol costs minutes. The alternative — discovering definitional divergence after weeks of execution — costs days, relationships, and trust. Every "but I thought we agreed" conversation in your career is evidence that this step was skipped.
The connection to your cognitive infrastructure
L-0167 showed that emotional context colors all perception. This lesson extends that insight to language itself: the words you hear are colored not just by your emotions but by your entire history of using those words — your discipline, your culture, your past conversations, your defaults. You do not hear words. You hear words through a filter of accumulated context. So does everyone else. And their filter is different from yours.
This matters for L-0169, which examines context collapse in digital communication. When you communicate through text — Slack messages, emails, tweets — you strip away the contextual cues (tone, gesture, shared physical environment) that help listeners select the right meaning. Polysemy becomes even more dangerous in low-context channels because there is less information available to disambiguate. Understanding that words inherently carry multiple meanings prepares you to recognize why digital communication fails in specific, predictable ways.
The primitive holds: shared vocabulary does not guarantee shared meaning. Every time you assume it does, you are betting your outcomes on the hope that someone else's context-dependent meaning construction happened to match yours. That is not communication. It is coincidence. And your epistemic infrastructure — your ability to think clearly, decide well, and collaborate effectively — depends on replacing coincidence with verification.