You already traverse graphs when you think
When you lie in bed at night and your mind drifts from tomorrow's meeting to a conversation you had last week to a song you heard in college to the town where you grew up to the smell of a particular kitchen — you are traversing a graph. Each thought is a node. Each association is an edge. The drift is not random. It follows connections that exist in the structure of your memory, even if you never consciously mapped them.
This is not a metaphor. Cognitive scientists have modeled human memory as a network since the 1960s, and the mathematics of how activation moves through that network — which nodes light up, in what order, through which paths — maps directly onto graph traversal in computer science. The insight of this lesson is that what your mind does spontaneously, you can learn to do deliberately. Graph traversal is not just something that happens to you. It is a technique you can practice, refine, and deploy to generate insights you would never reach by thinking harder about the same node.
The algorithms your mind already runs
Computer science formalized two fundamental approaches to graph traversal in the mid-twentieth century, and both have direct cognitive analogs.
Depth-first search (DFS) starts at a node and follows one path as far as it will go before backtracking. It picks a neighbor, then picks a neighbor of that neighbor, then a neighbor of that neighbor's neighbor — plunging deeper and deeper into the graph until it hits a dead end or a node it has already visited. Only then does it backtrack and try a different branch. The result is a narrow, deep exploration of one particular chain of connections.
When you obsess over a problem — when you follow a thread of reasoning down six or seven layers of "but why?" and "what causes that?" and "where did that come from?" — you are running depth-first search on your knowledge graph. The power of this mode is penetration. You reach nodes that broad scanning never touches. The danger is tunnel vision: you explore one path exhaustively while ignoring everything adjacent to it.
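The depth-first plunge is easy to see in code. Here is a minimal sketch over a toy associative graph; the concepts and links are invented for illustration, not drawn from any real dataset:

```python
# Toy associative graph; nodes and edges are invented for illustration.
graph = {
    "table": ["chair", "wood"],
    "chair": ["sitting"],
    "wood": ["forest"],
    "sitting": [],
    "forest": ["ecosystem"],
    "ecosystem": [],
}

def dfs(graph, start):
    """Follow one chain of associations as deep as it goes before backtracking."""
    visited, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()  # most recently discovered node: go deeper first
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Reverse so the first-listed association is explored first.
        stack.extend(reversed(graph[node]))
    return order

# From "table": one chain is exhausted (chair -> sitting) before the
# search backtracks and tries the other branch (wood -> forest -> ...).
print(dfs(graph, "table"))
```

The stack is what makes this depth-first: the newest discovery is always expanded next, so one chain runs to its dead end before any sibling branch is touched.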
Breadth-first search (BFS) takes the opposite approach. Starting from a node, it visits every immediate neighbor first, then every neighbor of those neighbors, then every neighbor of those neighbors' neighbors — expanding outward in concentric rings. The result is a wide, shallow exploration of the local neighborhood.
When you brainstorm — when you list every association you can think of for a given concept without following any of them deeply — you are running breadth-first search. The power of this mode is coverage. You see the full shape of what connects to your starting point. The danger is superficiality: you know what's adjacent but never discover what's six hops away.
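Swapping the stack for a queue turns the same traversal into breadth-first search. The sketch below (same invented toy graph) labels every reachable concept with its ring — its hop distance from the start:

```python
from collections import deque

# Toy associative graph; invented for illustration.
graph = {
    "table": ["chair", "wood"],
    "chair": ["sitting"],
    "wood": ["forest"],
    "sitting": [],
    "forest": ["ecosystem"],
    "ecosystem": [],
}

def bfs_rings(graph, start):
    """Label every reachable node with its hop distance: ring 0, ring 1, ring 2..."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()  # oldest discovery first: finish a ring before moving out
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1  # one ring further out
                queue.append(nbr)
    return dist
```

Because the queue processes an entire ring before touching the next one, `dist` gives the concentric-ring picture directly: everything at distance 1, then 2, then 3.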
Neither algorithm is superior. They answer different questions. Depth-first answers: what is at the far end of this particular chain of connections? Breadth-first answers: what is the full neighborhood of this concept? The skilled thinker uses both, and knows when to switch.
Spreading activation: how the brain actually does it
The cognitive science behind this is not speculative. In 1975, Allan Collins and Elizabeth Loftus published their spreading activation theory of semantic processing in Psychological Review, and it remains one of the most influential models of how memory retrieval works. The model proposes that concepts in memory are represented as nodes in a network, connected by associative links of varying strength. When you think of a concept — when a node activates — that activation spreads along the links to neighboring nodes, which in turn spread activation to their neighbors, decaying with distance and link weakness.
This is graph traversal at the neural level. Activation does not jump directly from "dog" to "veterinarian." It propagates: dog activates animal, pet, bark, loyalty, fur. Pet activates cat, owner, care. Care activates doctor, health, veterinarian. The traversal follows the graph's structure, and the path it takes determines what you retrieve. Change the starting node, and you retrieve different things. Change the link strengths — through learning, repetition, or deliberate restructuring — and you change what future traversals can reach.
The critical finding is that activation is not equally distributed. Stronger links (frequently co-activated associations) carry more activation. This means your default traversals follow well-worn paths — the same associations, the same chains, the same conclusions. Collins and Loftus demonstrated that this is how priming works: activating one concept pre-activates related concepts, making them faster to retrieve. The practical implication is that your habitual thought patterns are literally carved into the link strengths of your knowledge graph. Traversal follows the grooves.
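The decay-with-distance dynamic can be sketched numerically. This is a toy illustration of the idea, not the Collins–Loftus model itself; the concepts and the edge weights (standing in for link strength) are invented:

```python
# Weighted toy network: edge weights stand in for associative link strength.
links = {
    "dog": {"pet": 0.9, "bark": 0.8},
    "pet": {"care": 0.7, "cat": 0.6},
    "care": {"veterinarian": 0.8},
    "bark": {}, "cat": {}, "veterinarian": {},
}

def spread(links, source, threshold=0.05):
    """Propagate activation outward; weak links and long paths carry less of it."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        nxt = []
        for node in frontier:
            for nbr, weight in links[node].items():
                passed = activation[node] * weight  # decays with every hop
                if passed >= threshold and passed > activation.get(nbr, 0.0):
                    activation[nbr] = passed
                    nxt.append(nbr)
        frontier = nxt
    return activation

# "veterinarian" is never linked to "dog" directly, yet it still activates,
# via dog -> pet -> care -> veterinarian (1.0 * 0.9 * 0.7 * 0.8 = 0.504).
```

Note how the strong direct links end up with the most activation and the three-hop node with the least — the grooves carry more than the remote trails.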
Vannevar Bush saw this in 1945
Eighty years before knowledge graphs became a standard tool, Vannevar Bush described exactly this dynamic — and proposed a solution.
In his 1945 essay "As We May Think," published in The Atlantic, Bush observed that the human mind "operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain." Bush was describing spreading activation decades before Collins and Loftus formalized it. He saw that thinking is traversal — that cognition is the act of following trails through a connected structure.
But Bush also saw the limitation. The trails in your head are volatile. You cannot reliably retrace them. You cannot share them. You cannot inspect why a particular chain of associations led to a particular conclusion. So he proposed the memex: a device in which an individual could store all their books, records, and communications, and — critically — build and follow what he called "associative trails." A trail was a named, saved path through a collection of linked documents. You could create a trail, annotate it, return to it later, and share it with others.
Bush was designing a personal knowledge graph with traversal as its primary interaction. The memex was not a filing system. It was a traversal machine. The user's job was not to categorize documents but to link them — to build the trails that would make future traversal productive. "Wholly new forms of encyclopedias will appear," Bush wrote, "ready made with a mesh of associative trails running through them."
The web realized part of this vision. Hyperlinks are edges. Browsing is traversal. But the web's graph is public and generic. Your knowledge graph is personal and specific. The trails that matter most for your thinking are the ones running through your own network of concepts, beliefs, and experiences. Bush's core insight was that the act of building and following those trails — the act of traversal itself — is where new thinking happens.
Mednick's associative theory: creativity as traversal distance
Sarnoff Mednick formalized the connection between traversal and creativity in 1962. His associative theory of creativity proposed that creative thinking is fundamentally the process of forming connections between concepts that are far apart in your associative network.
Mednick distinguished between two types of associative hierarchies. People with steep associative hierarchies, when given a concept like "table," immediately produce the strongest, most common associations — "chair," "wood," "furniture" — and then quickly run out of responses. Their traversal stays local. People with flat associative hierarchies produce a wider, more evenly distributed range of associations — "leg," "food," "negotiation," "periodic" — because their activation spreads more broadly and reaches more distant nodes before decaying.
Mednick's Remote Associates Test (RAT) was designed to measure this. Given three seemingly unrelated words — "fish," "mine," "rush" — can you find a fourth that connects them? (The answer is "gold.") Solving this requires traversal: you must explore the associative neighborhoods of all three words and find the node where their neighborhoods overlap. The further apart the starting nodes, the more traversal is required, and the more creative the solution.
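Mechanically, a RAT item is a neighborhood-intersection problem. The sketch below uses tiny invented association lists (real associative neighborhoods are vastly larger and weighted):

```python
# Tiny invented association lists; real neighborhoods are far larger.
associations = {
    "fish": {"water", "gold", "scale", "net"},
    "mine": {"coal", "gold", "shaft", "yours"},
    "rush": {"hurry", "gold", "hour", "adrenaline"},
}

def remote_associate(associations, cues):
    """Return the nodes where all the cues' associative neighborhoods overlap."""
    neighborhoods = [associations[cue] for cue in cues]
    return set.intersection(*neighborhoods)
```

The intersection is where three separate spreads of activation converge — which is exactly what the flat-hierarchy thinker's broader spread makes easier to reach.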
Recent research using network science methods has refined Mednick's original model. Yoed Kenett and colleagues have shown that highly creative individuals do not simply have flatter hierarchies — they have richer, better-connected, and more flexibly navigable associative networks. The creative advantage is not just about link strength. It is about graph structure: more paths, more bridge nodes (L-0350), more routes between distant regions. Creativity, in this framework, is the capacity for long-range traversal through a well-connected graph.
The practical consequence is direct: if you want to think more creatively, you need to both build a richer graph (more nodes, more edges, more cross-domain bridges) and practice traversing it in ways that exploit that richness.
Random walks: the power of not knowing where you are going
Computer science has a third traversal strategy that neither depth-first nor breadth-first captures: the random walk. In a random walk, you start at a node and move to a randomly selected neighbor, then to a randomly selected neighbor of that neighbor, and so on. There is no goal, no strategy, no preference for deep versus wide. The path is determined by the structure of the graph and the roll of the dice.
Random walks sound useless. They are not.
In graph theory, random walks are one of the most powerful tools for understanding global graph structure. A random walk on a well-connected graph will, given enough time, visit every node — and the frequency with which it visits each node reveals that node's structural importance. Nodes that random walks visit often are central. Clusters that random walks get trapped in reveal community structure. Transitions between clusters reveal the bridges that connect different regions of the graph.
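The visit-frequency claim is simple to demonstrate. In this sketch, an invented undirected graph has one hub with twice the degree of every other node; over a long seeded walk, visit counts settle toward each node's share of total degree:

```python
import random

# Undirected toy graph with one hub; adjacency lists invented for illustration.
graph = {
    "hub": ["a", "b", "c", "d"],
    "a": ["hub", "b"], "b": ["hub", "a"],
    "c": ["hub", "d"], "d": ["hub", "c"],
}

def random_walk_counts(graph, start, steps, seed=0):
    """Take a long aimless walk and count visits; central nodes get visited most."""
    rng = random.Random(seed)
    counts = {node: 0 for node in graph}
    node = start
    for _ in range(steps):
        counts[node] += 1
        node = rng.choice(graph[node])  # no goal, no strategy: just pick a neighbor
    return counts

# The hub holds 4 of the graph's 12 edge-endpoints, so an undirected random
# walk visits it roughly a third of the time, far more than any leaf.
```

No global analysis was performed, yet the walk's visit counts recover the hub's structural importance — the walk reads the graph's shape by wandering through it.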
Applied to your knowledge graph, a random walk is the deliberate practice of following associations without a destination. You start with a concept and follow whatever connection feels alive — not the strongest one, not the most logical one, just whatever presents itself. Then you do it again from the new node. And again. The path meanders. It crosses domain boundaries. It connects things you would never have connected deliberately.
This is what serendipity looks like from the inside. The accidental discovery that transforms a research program, the shower thought that solves a problem you were not thinking about, the conversation that wanders into territory neither person expected — these are random walks on knowledge graphs. They work because the graph's structure contains relationships you have not consciously mapped, and the random walk stumbles across them.
Kenett et al. modeled this directly, simulating random walks on the semantic networks of high- and low-creative individuals. They found that random walks on creative people's networks covered more territory, crossed more category boundaries, and reached more remote associations — not because the walk strategy was different, but because the graph was different. The network structure of a creative person's knowledge makes random traversal more productive.
The AI parallel: message passing and graph neural networks
The same traversal dynamics that drive human cognition are now the foundation of some of the most powerful techniques in artificial intelligence.
Graph neural networks (GNNs) learn by passing messages along edges. Each node starts with its own features — its local information. Then, in each layer of processing, every node aggregates information from its neighbors, updates its own representation, and passes the updated information back out along its edges. After several rounds of message passing, each node's representation encodes not just its own features but the features of its entire local neighborhood — and, through propagation, information from nodes many hops away.
This is spreading activation formalized as a learning algorithm. The GNN does not analyze nodes in isolation. It traverses the graph, aggregating context from the structure of connections. A node's meaning emerges from its position in the network — from the traversal paths that reach it.
The parallel to human cognition is precise. When you think about a concept, you are not retrieving an isolated definition. You are activating a node and receiving information from its neighborhood — the related concepts, the supporting examples, the contradicting evidence, the metaphorical connections. The richer the neighborhood and the more traversal paths that converge on the node, the richer your understanding.
GNNs have proven remarkably effective at knowledge graph reasoning — predicting missing links, answering multi-hop questions, discovering hidden relationships. They succeed precisely because they traverse: they follow connections through the graph structure, combining information from multiple paths to reach conclusions that no single node contains. Your mind, when it works well, does the same thing. The insight is never in one node. It is in the path.
Traversal as a deliberate practice
Here is where this lesson moves from theory to technique. If traversal generates insight, then you can practice traversal the way you practice any cognitive skill: deliberately, with variation, and with increasing sophistication.
Depth-first practice. Pick a concept and go deep. Follow one connection to its next connection to its next connection. Do not branch. Do not return to the starting point. Write each hop down. Push through the moment where the associations become unfamiliar — that is where the value is. The goal is not to reach a predetermined destination. The goal is to discover what is at the far end of a chain you have never followed to its terminus.
Breadth-first practice. Pick a concept and go wide. List every connection you can think of — not just the obvious ones, but the tangential, the metaphorical, the half-remembered. Set a minimum count (ten, fifteen, twenty) and force yourself past the easy associations. The first five will be obvious. Numbers six through ten will require effort. Numbers eleven through twenty will surprise you. Those surprises are the edges in your graph that you have but rarely activate.
Random walk practice. Start anywhere. Follow whatever connection feels least predictable. When you arrive at a new node, do not plan — just follow the next association that appears, preferring the unexpected over the familiar. Continue for at least ten hops. The path will feel disjointed and purposeless. That is the point. Review the path afterward and look for an unexpected connection between the starting node and wherever you ended up. More often than you expect, you will find one.
Cross-domain traversal. Start in one domain of your knowledge (say, cooking) and set a target domain (say, organizational management). Traverse from the start to the target, writing each hop. How many steps does it take? What bridge nodes do you cross through? If it takes more than five or six hops, you may have a structural gap in your graph — a missing bridge that, once built, would connect two regions of your knowledge that are currently isolated from each other.
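The cross-domain exercise can be mechanized as an ordinary shortest-path search: breadth-first search with parent pointers, then a walk back along those pointers. The bridge chain below is invented purely for illustration; a real personal knowledge graph would supply the adjacency lists:

```python
from collections import deque

# Invented bridge chain between cooking and management, for illustration only.
graph = {
    "cooking": ["mise en place", "recipe"],
    "mise en place": ["preparation"],
    "recipe": ["procedure"],
    "preparation": ["planning"],
    "procedure": ["process"],
    "planning": ["management"],
    "process": ["management"],
    "management": [],
}

def shortest_path(graph, start, goal):
    """BFS with parent pointers: the fewest-hop chain between two concepts."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:  # walk the parent pointers back to the start
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nbr in graph.get(node, []):
            if nbr not in parents:
                parents[nbr] = node
                queue.append(nbr)
    return None  # the two concepts live in disconnected regions of the graph
```

A `None` result is itself diagnostic: it means the two domains are structurally isolated, precisely the missing-bridge situation the exercise is designed to expose.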
Why traversal generates what static retrieval cannot
There is a specific reason why following connections produces insights that "thinking about" a topic does not. Static retrieval — looking up a fact, recalling a definition, reviewing what you know about X — activates a node and its immediate neighborhood. It is breadth-first search with a depth of one. The activation spreads to direct associates and then stops.
Traversal carries activation further. Each hop activates a new neighborhood, and the combination of activation from the starting node with activation from the current node creates a context that neither node produces alone. By the time you are six hops into a depth-first traversal, you are simultaneously holding the context of your starting point and the context of a node that has almost nothing in common with it. The collision of those two contexts — the starting frame and the distant frame — is where novel combinations emerge.
This is why shower thoughts work. The relaxed, unfocused state reduces the dominance of strong associations and allows activation to spread further along weak links. It is why walking helps thinking — the mild sensory stimulation provides random seed nodes that perturb your traversal off its habitual paths. It is why conversation generates ideas that solitary thought does not — another person's associations inject nodes into your traversal that your own graph structure would never have reached.
Every technique that reliably generates creative insight — brainstorming, mind mapping, analogy, metaphor, cross-disciplinary reading — is a traversal technique in disguise. Each one works by moving activation to parts of your graph that normal thinking does not reach.
The bridge to L-0352: what the shortest path reveals
You now have traversal as a thinking tool — a set of deliberate strategies for moving through your knowledge graph to generate insights that static retrieval misses. But traversal raises a structural question: when you find a surprising connection between two distant concepts, what does the path between them tell you?
The next lesson, L-0352, examines shortest paths — the minimum number of hops between two nodes in your graph. The shortest path between two seemingly unrelated ideas is not just a curiosity. It reveals the hidden structural connections in your understanding. It shows you which bridge nodes are load-bearing, which domains are closer than you thought, and which connections, if they were severed, would isolate entire regions of your knowledge from each other. If traversal is the technique, shortest path is the diagnostic. It tells you not just that two ideas connect, but how they connect — and what that connection reveals about the architecture of your understanding.