Your categories are confessions
Every classification system makes a claim about what matters. Not explicitly — classification systems rarely announce their values. They embed them in structure, in the dimensions they select, in what they make visible and what they render invisible. When you sort your notes into folders, label tasks on a board, or tag entries in a journal, you are not performing a neutral act of organization. You are publishing a map of your priorities.
This lesson is about learning to read that map — in your own systems, in the institutions around you, and in the AI tools that increasingly classify on your behalf.
Classification as ethical infrastructure
Geoffrey Bowker and Susan Leigh Star, in their landmark study Sorting Things Out: Classification and Its Consequences (1999), made a case that changed how information scientists think about categories. Their central argument: "Each standard and each category valorizes some point of view and silences another. This is not inherently a bad thing — indeed it is inescapable. But it is an ethical choice, and as such it is dangerous — not bad, but dangerous." The danger, they argued, lies precisely in the fact that the choice is usually invisible.
To classify is human. You do it constantly — sorting emails, triaging tasks, deciding which events go on your calendar and which don't. But Bowker and Star showed that these acts are never mere bookkeeping. Every classification system carries what they called a "moral and ethical agenda." The categories you create determine what gets counted, what gets resourced, what gets remembered, and what gets erased.
Their most devastating case study was South Africa's Population Registration Act of 1950. The apartheid regime required every person to be classified into a racial group — White, Coloured, Indian, or Native. These categories determined where you could live, whom you could marry, what jobs you could hold, and whether the state would educate your children. The classification wasn't describing racial reality. It was constructing it. Bowker and Star used the term "torque" to describe what happens when a classification system's categories don't match a person's lived experience — the system bends the life to fit the label, not the other way around.
That's an extreme example. But the mechanism operates everywhere, at every scale. When your company classifies customer support tickets into "bug," "feature request," and "user error," it has already decided that some complaints are the system's fault and some are the user's fault. That boundary is a value judgment wearing the costume of a category.
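The boundary can be made concrete. A minimal sketch in Python (the category set, the keyword heuristics, and the FAULT_OWNER mapping are all invented for illustration): the moment a ticket is assigned a category, ownership of the problem has already been decided, before anyone investigates.

```python
from enum import Enum

class TicketCategory(Enum):
    BUG = "bug"                      # the system is at fault
    FEATURE_REQUEST = "feature"      # the system is incomplete
    USER_ERROR = "user_error"        # the user is at fault

# The value judgment lives in this mapping: who owns the problem
# is fixed the moment the category is assigned.
FAULT_OWNER = {
    TicketCategory.BUG: "engineering",
    TicketCategory.FEATURE_REQUEST: "product",
    TicketCategory.USER_ERROR: "the user",
}

def classify(ticket_text: str) -> TicketCategory:
    """Hypothetical heuristic triage; real triage is messier,
    but the schema above constrains every possible outcome."""
    text = ticket_text.lower()
    if "crash" in text:
        return TicketCategory.BUG
    if "could you add" in text or "wish" in text:
        return TicketCategory.FEATURE_REQUEST
    return TicketCategory.USER_ERROR  # the default reveals a value, too

ticket = "The app crashed when I exported my notes."
print(FAULT_OWNER[classify(ticket)])  # engineering
```

Note the fallback branch: anything the heuristics cannot place defaults to blaming the user. Defaults are where a taxonomy's values are most visible.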
Dirt is matter out of place
Anthropologist Mary Douglas arrived at a parallel insight from a completely different direction. In Purity and Danger (1966) — ranked by the Times Literary Supplement as one of the hundred most influential nonfiction books published since 1945 — she argued that concepts of pollution and purity have almost nothing to do with hygiene. They are about classification boundaries.
Douglas's most famous formulation: dirt is "matter out of place." Dirt is not a substance with inherent properties. It is a relational concept — something becomes dirty when it violates the categories a culture has established. Shoes are not dirty in themselves, but shoes on the dining table are. Food is not dirty, but food on your clothing is. The "dirt" is the anomaly — the thing that doesn't fit the classification system and therefore threatens the order that system protects.
This insight has direct consequences for how you think about your own knowledge systems. When a note feels like it "doesn't belong anywhere" in your folder structure, the discomfort you feel is Douglas's pollution concept operating at the personal level. The note isn't broken. Your classification system is revealing its boundaries — and those boundaries are revealing what you've decided counts as a legitimate category of thought.
Douglas showed that cultures respond to classification anomalies in predictable ways: they ignore the anomaly, they redefine it to fit an existing category, they declare it dangerous, or they use it in ritual as a source of power. You do the same things with information that doesn't fit your personal taxonomy. You ignore the note that doesn't match any tag. You force-fit a task into the wrong project. You declare certain kinds of knowledge "not my area." Each response is a values decision masquerading as an organizational one.
The DSM: when classification is literally diagnosis
Perhaps no classification system demonstrates the values-laden nature of categories more starkly than the Diagnostic and Statistical Manual of Mental Disorders (DSM).
In the DSM's first edition (1952), homosexuality was classified under "sociopathic personality disturbance." The classification wasn't describing a medical reality — it was encoding a cultural value into a diagnostic category. As the American Psychiatric Association itself later acknowledged, the judgment that a given behavior is "abnormal" and requires clinical attention depends on cultural norms, and those norms had been internalized into the classification structure.
In 1973, after years of activism and accumulating research — particularly from Evelyn Hooker, whose 1957 study found no measurable psychological difference between homosexual and heterosexual men — the APA Board of Trustees voted to remove homosexuality from the DSM. The underlying human reality hadn't changed. The classification changed because the values embedded in it were challenged.
This is not ancient history. It is a living demonstration that categories are not neutral containers. They are active claims about what is normal, what is pathological, what deserves treatment, and what deserves acceptance. Every time you create a category in your own system, you are making a smaller-scale version of the same kind of claim.
You are what you measure
Goodhart's Law, named after British economist Charles Goodhart, is commonly stated as: "When a measure becomes a target, it ceases to be a good measure." Campbell's Law, from social psychologist Donald Campbell, amplifies this: the more any quantitative social indicator is used for decision-making, the more it will corrupt the processes it was meant to monitor.
Both laws describe what happens when classification becomes institutionalized. The categories you choose to measure are the categories that get optimized — and everything outside those categories gets systematically underweighted.
Consider how a company classifies employee performance. If the categories are "lines of code written," "tickets closed," and "meetings attended," those metrics will be optimized — at the expense of mentoring junior engineers, writing documentation, or thinking carefully about architecture. The classification system doesn't just describe performance. It defines performance. And in defining it, it reveals what the organization actually values, regardless of what its mission statement says.
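A toy model of the mechanism (the metric names and weights are hypothetical): a score function that only knows about measured dimensions assigns zero weight to everything else, however valuable.

```python
# Hypothetical performance rubric: only measured dimensions enter the score.
MEASURED_WEIGHTS = {"loc_written": 0.4, "tickets_closed": 0.4, "meetings": 0.2}

def performance_score(activity: dict[str, float]) -> float:
    """Goodhart in one line: anything not in MEASURED_WEIGHTS scores zero."""
    return sum(MEASURED_WEIGHTS.get(k, 0.0) * v for k, v in activity.items())

engineer = {
    "loc_written": 90, "tickets_closed": 40, "meetings": 30,
    "mentoring_hours": 120,   # invisible to the metric
    "docs_written": 15,       # invisible to the metric
}
print(performance_score(engineer))  # 58.0 — mentoring and docs count for nothing
```

An engineer who doubled their mentoring hours would see no change in this score, while one who padded line counts would climb. That is the corruption Campbell's Law predicts.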
Sandra Harding's work on feminist standpoint epistemology offers a structural explanation for why this matters. Harding argued that all knowledge systems — including classification systems — are "socially situated." They reflect the perspective of whoever built them. Her concept of "strong objectivity" requires that you don't just ask "is this category accurate?" but "whose perspective does this category serve?" The categories that feel most natural are often the ones that most closely match the worldview of the people who designed the system. For everyone else, the categories produce what Bowker and Star called torque — the friction between how the system classifies you and how you experience yourself.
This is why auditing your classification systems is not an exercise in tidiness. It is an exercise in self-knowledge. The dimensions you measure are the dimensions you value. The dimensions you don't measure are the dimensions you've decided — consciously or not — don't count.
AI classification: values at scale
The connection between classification and values becomes urgent when AI systems classify on your behalf — because AI classification operates at a scale and speed that makes the embedded values effectively invisible.
In January 2023, TIME magazine reported that OpenAI had outsourced the content labeling work for ChatGPT's safety systems to workers in Kenya, who were paid between $1.32 and $2 per hour to classify text as toxic or non-toxic, violent or non-violent, sexual or non-sexual. The workers read descriptions of child sexual abuse, bestiality, murder, and torture — and their labeling decisions became the training data that taught ChatGPT what counts as harmful content and what doesn't.
There are at least three layers of values embedded in this process. First, the decision about which categories exist — "toxic" versus "non-toxic" is already a values choice. Where is the boundary? Who decides? Second, the labeling itself — every time a worker marks a piece of text as toxic or safe, they bring their own cultural context, personal experience, and psychological state to the judgment. Third, the labor conditions — the decision to outsource this work to the Global South at $2 per hour embeds a value about whose cognitive labor is expendable.
Timnit Gebru and colleagues, in their influential 2021 paper on the risks of large language models, argued that biases in AI systems are not accidental. They are the result of choices — about what data to collect, how to label it, and which perspectives to center. The classification decisions made during training become the invisible infrastructure through which AI systems interpret the world for millions of users.
This matters for your personal knowledge systems because AI tools increasingly do classification work for you — auto-tagging emails, suggesting categories for notes, clustering search results. Every time an AI classifies on your behalf, it is applying someone else's values to your information. If you don't understand this, you'll mistake the AI's classification for neutral description. It never is.
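One way to see this concretely is a toy keyword-based auto-tagger (both tag vocabularies below are invented for illustration): the same note acquires a different identity depending on which label set the tool was built with.

```python
# Toy auto-tagger: the tag vocabulary IS the embedded value system.
# Both vocabularies are hypothetical; the point is that the same note
# gets a different identity depending on whose labels are applied.

def auto_tag(note: str, vocabulary: dict[str, list[str]]) -> list[str]:
    """Return every tag whose keywords appear in the note."""
    text = note.lower()
    return [tag for tag, keywords in vocabulary.items()
            if any(kw in text for kw in keywords)]

productivity_tool = {
    "actionable": ["todo", "deadline", "ship"],
    "reference": ["spec", "manual"],
}
reflection_tool = {
    "uncertainty": ["not sure", "maybe", "open question"],
    "belief-change": ["i was wrong", "changed my mind"],
}

note = "Open question: maybe the deadline matters less than I think."
print(auto_tag(note, productivity_tool))  # ['actionable'] — sees a task
print(auto_tag(note, reflection_tool))    # ['uncertainty'] — sees a doubt
```

Neither tagging is wrong, and neither is neutral. Each tool surfaces the dimension its designers decided mattered and is blind to the other.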
Your folder structure is a values document
Bring this back to the personal level. Look at how you organize your notes, your files, your bookmarks, your task board.
Tiago Forte's PARA method — Projects, Areas, Resources, Archives — is explicitly designed around actionability. If you use PARA, you've decided that the most important dimension of your knowledge is "how close is this to something I'm currently doing?" That's a value. It privileges action over contemplation, currency over depth, and output over exploration.
If instead you organize by topic — philosophy, engineering, psychology, cooking — you've decided that the conceptual domain of knowledge is the most important dimension. That privileges understanding over action, and depth within domains over connections across them.
If you organize by links rather than folders — a Zettelkasten-style system of atomic notes connected by association — you've decided that relationships between ideas matter more than categories. That privileges emergence over structure, and serendipity over retrieval.
None of these is wrong. But each one is a confession. Your organizational system says: "These are the dimensions of reality I care about enough to build infrastructure around." Everything that doesn't fit those dimensions gets filed under "miscellaneous" — which is the classification system's word for "this doesn't match my values, so I have no way to see it."
Here is a concrete test. Open your notes app or knowledge management tool right now. Look at your top-level categories. Now ask:
- What's prominent? The categories with the most entries are the values you've operationalized.
- What's absent? The dimensions of your life that have no category are the values you've neglected — or the values you hold theoretically but haven't built infrastructure to support.
- What's miscellaneous? Your "inbox," "unsorted," or "misc" folder is a graveyard of things your classification system couldn't accommodate. That's where your blind spots live.
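For folder-based systems, the first and third questions can even be answered mechanically. A sketch, assuming Markdown notes under a root directory whose first-level folders are your top-level categories (the MISC_NAMES set and the example path are assumptions to adapt):

```python
from collections import Counter
from pathlib import Path

# Folder names treated as the "graveyard" category; adjust to your own.
MISC_NAMES = {"inbox", "unsorted", "misc", "miscellaneous"}

def audit(notes_root: str) -> Counter:
    """Count notes per top-level folder and report the misc fraction."""
    root = Path(notes_root).expanduser()
    counts: Counter = Counter()
    for path in root.rglob("*.md"):
        rel = path.relative_to(root)
        # First path component = top-level category; loose files are uncategorized.
        counts[rel.parts[0] if len(rel.parts) > 1 else "(uncategorized)"] += 1

    total = sum(counts.values()) or 1
    print("What's prominent (operationalized values):")
    for category, n in counts.most_common():
        print(f"  {category:<20} {n:>4}  ({100 * n / total:.0f}%)")

    misc = sum(n for cat, n in counts.items() if cat.lower() in MISC_NAMES)
    print(f"What's miscellaneous: {100 * misc / total:.0f}% of notes.")
    # "What's absent" can't be computed from files alone: compare the
    # printed category list against what you claim to value.
    return counts

# Usage (hypothetical path): audit("~/notes")
```

The second question resists automation by design: a script can only count the categories that exist, which is exactly the blind spot the question is probing.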
If you say you value health but have no health-related categories in your knowledge system, your classification reveals that health is an aspiration, not an operational value. If you say you value creativity but every category in your system is organized around productivity metrics, the system is telling you something your self-narrative isn't.
Classification and your Third Brain
When AI operates as a Third Brain — augmenting your thinking by working with your externalized knowledge — the values embedded in your classification system compound. An AI that searches your notes will find what your categories make findable. It will surface connections along the dimensions you've built. It will be blind to relationships that your classification system never encoded.
This means that a values audit of your classification system isn't optional — it's a prerequisite for effective AI augmentation. If your system has no category for "things I'm uncertain about," your AI will never help you reason about uncertainty. If your system has no category for "ideas that contradict my current beliefs," your AI will reinforce your existing positions by default.
The remedy is not to create categories for everything — that produces noise, not signal. The remedy is to deliberately ensure that your classification system reflects your actual values, not just your habitual ones. Add one category for a value you hold but haven't been operationalizing. Remove a category that exists by inertia rather than intention. Rename a category to be more honest about what it actually contains.
Every time you adjust a category, you are adjusting the lens through which both you and your AI tools see your knowledge. Classification is not a filing task. It is an epistemic act — a declaration of what dimensions of reality you are willing to perceive.
The recursive insight
Here is the lesson that runs beneath all the examples: classification systems don't just organize reality. They construct the version of reality their creators are able to see. The apartheid state didn't just classify people by race — it created a world in which race was the organizing principle of all experience. The DSM didn't just classify homosexuality as a disorder — it created a medical infrastructure in which sexual orientation was treated as pathology. Your folder structure doesn't just organize your notes — it creates a cognitive environment in which certain kinds of thinking are supported and certain kinds are invisible.
This works recursively. You build a classification system based on your current values. The system then reinforces those values by making them structural — easy to see, easy to use, easy to optimize. Values that aren't encoded in categories become harder to act on, harder to remember, harder to share with AI. Over time, the system's categories become your categories. The map replaces the territory.
The only defense is periodic audit. Not rearranging — auditing. Asking: does this system still reflect what I actually care about? Or has it calcified around values I held three years ago? The categories that feel most natural are the ones most in need of examination, because naturalness is what happens when a value has been so thoroughly embedded in infrastructure that it becomes invisible.
Classification reveals what you value. And what you value shapes everything you build next.