The $327 million thought that had no type
On September 23, 1999, NASA's Mars Climate Orbiter approached the red planet after a 286-day journey. Ground controllers expected it to settle into orbit at 150 kilometers above the surface. Instead, the spacecraft plunged to 57 kilometers — too low to survive — and disintegrated in the Martian atmosphere.
The cause was not a faulty sensor, a software crash, or a design flaw. Lockheed Martin's ground software calculated thruster impulses in pound-force seconds. NASA's navigation software expected those same values in newton-seconds. One team spoke imperial. The other spoke metric. Neither system checked whether the units matched. The $327.6 million mission was destroyed by a type error that no human or machine ever flagged.
NASA's own investigation concluded that the root cause was not the mismatch itself but "the failure of NASA's systems engineering, and the checks and balances in our processes, to detect the error." In other words: the system lacked a type constraint. Any number was accepted. And so the wrong number sailed through unchallenged for nine months.
This is what happens in the absence of types. Not chaos — something worse. The quiet propagation of errors that look perfectly valid until they destroy what matters.
What a type system actually does
A type system is a set of rules that assigns a property called a "type" to every object in your system, then restricts which operations are valid for each type. The concept originates in formal logic and mathematics, but it operates identically in software, physics, organizational design, and personal knowledge management.
Consider the simplest possible example. You have two values: the number 5 and the string "five". In an untyped system, you can add them: 5 + "five" — and the system tries to produce an answer, usually something nonsensical like "5five" or NaN. In a typed system, the operation is rejected: at compile time if types are checked statically, or at the moment of evaluation if they are checked at runtime. Not because the system knows the answer would be wrong, but because the types are incompatible. Addition is defined on numbers. "five" is a string. The constraint catches the error before it can propagate.
This is the core principle: types don't check whether your answer is correct. They check whether your question even makes sense.
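The distinction fits in a few lines of Python, which checks types at runtime; a static checker such as mypy reports the same mismatch before the program ever executes. A minimal sketch:

```python
# Python raises a TypeError rather than coercing: the operation is
# rejected because the operands' types are incompatible, not because
# the interpreter knows what the "right" answer would be.
try:
    result = 5 + "five"
except TypeError as exc:
    print(f"rejected: {exc}")  # unsupported operand types for +

# With annotations, a static checker such as mypy flags the same
# mistake before the program runs:
def add(a: int, b: int) -> int:
    return a + b

# add(5, "five")  # mypy: argument 2 has incompatible type "str"
print(add(2, 3))
```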
In programming, this distinction splits the entire discipline. Languages like Rust, Haskell, and OCaml use type systems descended from Hindley-Milner type theory — a formal system proven sound (a program it accepts can never hit a type error at runtime) and equipped with complete type inference (the most general type of any typable program can be found without programmer annotations). In these languages, the compiler rejects programs that contain type mismatches before a single line of code runs. There is, by design, no concept of a runtime type error. The constraint is absolute.
TypeScript takes a different approach. It layers a type system on top of JavaScript — a language with no compile-time type enforcement at all. TypeScript's type system is intentionally unsound: it allows escape hatches like any that bypass checking entirely, because its designers wanted to accommodate the vast existing JavaScript ecosystem. The tradeoff is explicit. You get some type safety, but the guarantee has gaps.
The empirical data on this tradeoff is striking. A 2017 study by Gao et al. published at ICSE examined fixed bugs drawn from public JavaScript projects on GitHub and found that TypeScript and Flow could each have prevented approximately 15% of those bugs — the ones detectable by static type analysis. A more recent 2025 study found that type-related errors dropped from roughly 33% in JavaScript codebases to 12.4% in TypeScript codebases. These are not theoretical improvements. They are measured reductions in real defects caused by adding constraints that restrict what operations the code is allowed to perform.
Constraints as cognitive infrastructure
The principle extends far beyond software. Wherever humans operate in complex environments, typing — assigning constrained categories that restrict valid operations — reduces errors.
In physics, dimensional analysis is a type system. Every physical quantity carries a dimension: mass, length, time, temperature, electric current. The rules of dimensional analysis say that you can only add quantities of the same dimension (meters plus meters, not meters plus seconds), and that multiplication and division produce predictable dimensional results (velocity is length divided by time, not length divided by temperature). When a physics student checks whether their equation is "dimensionally consistent," they are performing a type check. If the dimensions don't match, the equation is wrong — regardless of whether the numbers look plausible.
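The dimensional check can itself be written as a type constraint. A toy Python sketch, tracking only length and time exponents (a real units library covers all seven SI base dimensions; the names here are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    dim: tuple  # (length exponent, time exponent)

    def __add__(self, other: "Quantity") -> "Quantity":
        # Addition is defined only between quantities of the same dimension.
        if self.dim != other.dim:
            raise TypeError(f"cannot add dimensions {self.dim} and {other.dim}")
        return Quantity(self.value + other.value, self.dim)

    def __truediv__(self, other: "Quantity") -> "Quantity":
        # Division subtracts exponents: length / time -> velocity.
        new_dim = tuple(a - b for a, b in zip(self.dim, other.dim))
        return Quantity(self.value / other.value, new_dim)

meters = Quantity(100.0, (1, 0))   # a length
seconds = Quantity(9.58, (0, 1))   # a time
velocity = meters / seconds        # dimension (1, -1): length per time
# meters + seconds                 # TypeError: dimensions differ
```

The numbers 100.0 and 9.58 are perfectly plausible on their own; only the dimension tags expose which combinations of them make sense.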
The Mars Climate Orbiter violated this constraint. The Gimli Glider — a 1983 incident where an Air Canada Boeing 767 ran out of fuel mid-flight and was forced to glide to an emergency landing — violated it too. Ground crews calculated fuel load using 1.77 pounds per liter (an imperial conversion factor) when the new 767's systems expected 0.8 kilograms per liter (a metric factor). The plane took off with roughly half the fuel it needed. The type error — pounds where kilograms were expected — was invisible because no constraint forced the units to match.
In medicine, checklists are type systems. Atul Gawande's research, published in The Checklist Manifesto and validated by a landmark 2009 study in the New England Journal of Medicine, demonstrated what happens when you add constraints to surgical procedures. The WHO Surgical Safety Checklist — a 19-item protocol constraining what must be verified before, during, and after surgery — reduced major surgical complications from 11.0% to 7.0% and inpatient deaths by more than 40% (from 1.5% to 0.8%) across eight hospitals worldwide. The study examined 7,688 patients. The constraint was simple: certain operations (incision, anesthesia, closure) were not valid until certain preconditions (patient identity confirmed, allergies reviewed, blood loss anticipated) were satisfied. A type system for surgery.
Gawande's insight was precisely the insight of type theory: complexity had exceeded human cognitive capacity, and the solution was not more training or more effort. It was structural constraints that made entire categories of error impossible. A surgeon cannot accidentally operate on the wrong limb when the checklist forces the team to mark the correct site before the first incision. The constraint is not a suggestion. It is a gate.
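The gate structure is simple enough to sketch in code: an operation is not valid until its declared preconditions have all been confirmed. A Python illustration (the checklist items are invented examples, not the WHO protocol):

```python
class ChecklistGate:
    """An operation gated behind a set of required preconditions."""

    def __init__(self, required: set):
        self.required = set(required)
        self.confirmed = set()

    def confirm(self, item: str) -> None:
        if item not in self.required:
            raise ValueError(f"unknown checklist item: {item}")
        self.confirmed.add(item)

    def proceed(self, operation: str) -> str:
        missing = self.required - self.confirmed
        if missing:
            # The gate refuses outright; it is not a warning.
            raise PermissionError(f"cannot {operation}: unconfirmed {sorted(missing)}")
        return f"{operation} authorized"

gate = ChecklistGate({"identity confirmed", "site marked", "allergies reviewed"})
gate.confirm("identity confirmed")
gate.confirm("site marked")
# gate.proceed("incision")  # PermissionError: allergies not reviewed
gate.confirm("allergies reviewed")
print(gate.proceed("incision"))
```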
The mathematics beneath
Category theory — a branch of mathematics that studies structures and relationships between structures — provides the formal foundation for why types prevent errors.
In category theory, a category consists of objects and morphisms (arrows between objects). The rules of composition dictate which arrows can be chained together: a morphism from A to B can be composed with a morphism from B to C to produce a morphism from A to C. But you cannot compose a morphism from A to B with a morphism from D to C — the types do not align. The intermediate object must match.
This is exactly how a type system works. Functions (operations) have input types and output types. You can chain function outputs to function inputs only when the types align. The type system enforces composability by making misaligned compositions impossible to express.
Bartosz Milewski, whose Category Theory for Programmers brought these ideas to a wide audience, puts it directly: "Composition is at the very root of category theory — it's part of the definition of the category itself, and composition is the essence of programming." The types exist to make composition safe. Without them, you can chain anything to anything, and the system will cheerfully produce garbage.
This is not an abstract concern. Every time you build a pipeline — a sequence of transformations where the output of one step feeds the input of the next — you are working with composition. If step one produces a customer record and step two expects a financial transaction, the pipeline will run, produce wrong results, and you may not notice for weeks. A type constraint catches this at design time. Types make incorrect compositions unrepresentable.
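A minimal Python sketch of the composition rule, with invented record types. A static checker rejects the misaligned pipeline at design time; without the annotations, it would run and quietly produce garbage:

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    name: str

@dataclass
class Transaction:
    amount: float

def load_customer(raw: str) -> CustomerRecord:   # A -> B
    return CustomerRecord(name=raw.strip())

def greeting(c: CustomerRecord) -> str:          # B -> C
    return f"Hello, {c.name}"

def settle(t: Transaction) -> str:               # D -> C
    return f"settled {t.amount}"

# B matches B: the composition is valid.
pipeline = greeting(load_customer("  Ada "))

# CustomerRecord is not Transaction: a checker like mypy rejects this
# composition before it runs.
# settle(load_customer("  Ada "))
```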
Typing in your own thinking
You already use informal type systems constantly. You just don't name them.
When you refuse to compare your team's velocity this quarter to last quarter's because "the team size changed," you are performing a type check. The two numbers have different contexts — different types — and the comparison operation is not valid between them.
When you push back on a colleague who says "our NPS score is higher than our competitor's revenue growth," you are flagging a type error. NPS and revenue growth are different types. Comparing them is dimensionally incoherent.
When you create a dropdown menu instead of a free-text field for a project status, you are implementing a type constraint. You are saying: this field accepts values from the set {not started, in progress, blocked, done}. Anything else is a type error.
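In code, the dropdown constraint is an enum: the field accepts exactly the listed values and nothing else. A Python sketch:

```python
from enum import Enum

class Status(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    BLOCKED = "blocked"
    DONE = "done"

def set_status(raw: str) -> Status:
    # Status("shipped") raises ValueError: the constraint is enforced
    # at the point of entry, not checked after the fact.
    return Status(raw)

print(set_status("blocked"))  # Status.BLOCKED
```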
The problem is that most people apply these constraints inconsistently. They type-check some fields in their thinking but leave others dangerously unconstrained. Consider these common failure modes:
- Untyped goals. "I want to be more productive" accepts any interpretation: more hours, more output, more efficiency, more impact. Without a type constraint (e.g., "productivity = tasks completed per focused hour"), you cannot measure progress, compare approaches, or detect when you are optimizing the wrong variable.
- Untyped feedback. "That presentation was good" has no type. Good how? Good structure? Good delivery? Good data? Good conclusions? Without typing the feedback, the recipient fills in whatever meaning their ego prefers or their insecurity fears.
- Untyped decisions. "We decided to go with Option B" is untyped unless you also record what type of decision this was (reversible vs. irreversible, data-driven vs. judgment-based, delegated vs. consensus). The type determines which review process applies, how much time to invest, and when to revisit.
When types become prisons
There is a real cost to over-typing. A type system that rejects valid inputs is worse than no type system — it trains people to circumvent the constraints rather than work within them.
Strongly typed languages like Rust require explicit type signatures at every function boundary and reject any program containing an unresolved type ambiguity. This produces extraordinary reliability (Rust is famous for its memory safety guarantees), but it also imposes a steep learning curve and slower initial development. Loosely typed languages like JavaScript allow rapid prototyping precisely because they impose minimal constraints — but they pay for that speed with bugs that surface later, in production, in front of users.
The same tradeoff appears in organizational systems. A task management system with 47 required fields and 12 mandatory categories will produce beautifully structured data that nobody enters — because the typing overhead exceeds the value of the constraint. A task management system with one field ("what needs to happen?") will be easy to use and impossible to analyze.
The discipline is calibration: type tightly where errors are catastrophic, type loosely where exploration is more valuable than consistency, and review your type system regularly as your understanding of the domain evolves.
Critical systems — medical procedures, financial transactions, mission-critical software, irreversible decisions — deserve strong typing. The cost of an error vastly exceeds the cost of the constraint. Exploratory systems — brainstorming sessions, early-stage research, creative work, first drafts — deserve weak or no typing. The cost of the constraint (premature closure, rejected novelty) vastly exceeds the cost of an error.
AI and the new frontier of typed outputs
The principle of typed constraints is now reshaping how we interact with artificial intelligence. Large language models generate text with no inherent type structure — they produce strings that might contain valid JSON, or SQL, or poetry, or hallucinated citations, or a mix of all four. The output is untyped.
Structured outputs change this. OpenAI's Structured Outputs feature, introduced in 2024, constrains model responses to match a supplied JSON Schema. The mechanism is called constrained decoding: the JSON Schema is converted into a context-free grammar, and at each step of token generation, the model is restricted to only tokens that produce valid output according to the schema. The type constraint is not applied after generation (validation). It is applied during generation (enforcement). Invalid outputs cannot be produced.
Libraries like Pydantic and Instructor extend this pattern, allowing developers to define type-safe schemas in Python and have AI responses automatically validated against them. The results are measurable: hallucinated fields are eliminated (the model cannot invent a field the schema does not define), format errors disappear (the output is guaranteed to parse), and downstream systems can consume AI outputs without defensive parsing code.
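The validation half of this pattern can be sketched with the standard library alone. This is an illustration of what libraries like Pydantic automate, not their actual API, and the schema fields are invented:

```python
# A schema maps field names to required types. Unknown fields, missing
# fields, and wrong types are all rejected before the data reaches
# downstream code.
SCHEMA = {"probability": float, "label": str}

def validate(payload: dict) -> dict:
    extra = set(payload) - set(SCHEMA)
    if extra:
        raise TypeError(f"schema does not define fields: {sorted(extra)}")
    missing = set(SCHEMA) - set(payload)
    if missing:
        raise TypeError(f"missing required fields: {sorted(missing)}")
    for field, expected in SCHEMA.items():
        if not isinstance(payload[field], expected):
            raise TypeError(f"{field}: expected {expected.__name__}")
    return payload

validate({"probability": 0.93, "label": "spam"})       # passes
# validate({"probability": 0.93, "confidence": 0.9})   # TypeError: unknown field
```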
This is the Mars Climate Orbiter lesson applied to AI. When an LLM outputs a number, is it a probability, a count, a dollar amount, or a confidence score? Without a type constraint, you are trusting that the model happened to produce the right kind of number. With a typed schema, the output is constrained to the correct type before it ever reaches your system.
If you are building a Third Brain — an externalized cognitive infrastructure augmented by AI — typed interfaces between you and your AI tools are not optional. They are the mechanism that prevents your augmented thinking system from propagating plausible-looking errors at machine speed.
The protocol: typing your epistemic infrastructure
Here is how to apply typing constraints to your own thinking systems:
1. Audit your untyped fields. Look at every category, label, status, and tag in your system. For each one, ask: what values are currently accepted? If the answer is "anything," that field is untyped. It will accumulate inconsistency until it becomes useless.
2. Define the valid set. For each untyped field, determine the set of valid values based on your actual usage. If your task statuses are "todo, in progress, and done" — write that down as the type definition. If your priority levels are "critical, high, medium, low" — that is an enum type with four values.
3. Implement the constraint. In digital tools, use dropdown menus, validation rules, or template fields. In personal systems, write the valid values at the top of your page or in your template. The constraint only works if it is enforced at the point of entry, not checked after the fact.
4. Track rejections. When a constraint blocks an input, record it. If the rejection is correct (someone tried to enter an invalid value), the type is working. If the rejection is incorrect (someone tried to enter a legitimate value the type does not accommodate), the type needs revision.
5. Evolve the types. Review your type definitions monthly. Types that reject legitimate inputs too often need to be broadened. Types that accept too much variety need to be narrowed. A type system is not a one-time design — it is infrastructure that evolves with your understanding.
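Steps 3 through 5 combine into a short sketch: a field that enforces its valid set at entry, records every rejection as evidence for review, and can be broadened when the type proves too narrow. All names are illustrative:

```python
class TypedField:
    """A field whose valid values are an explicit, revisable set."""

    def __init__(self, name: str, valid: set):
        self.name = name
        self.valid = set(valid)
        self.rejections = []  # evidence for the periodic type review

    def enter(self, value: str) -> str:
        # Step 3: enforce at the point of entry.
        if value not in self.valid:
            # Step 4: track the rejection before refusing it.
            self.rejections.append(value)
            raise ValueError(f"{self.name}: {value!r} not in {sorted(self.valid)}")
        return value

    def broaden(self, value: str) -> None:
        # Step 5: a legitimate value the type kept rejecting gets added.
        self.valid.add(value)

status = TypedField("status", {"todo", "in progress", "done"})
status.enter("done")       # valid
# status.enter("blocked")  # rejected, and recorded for the next review
```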
The lesson from every domain — software, physics, medicine, aerospace, AI — is the same. Unconstrained systems accept everything, including errors. Types are the mechanism that makes certain categories of error structurally impossible, not by catching mistakes after they happen, but by making them unrepresentable in the first place.
Your thinking system is either typed or it is accumulating errors you cannot see. The constraints you add today determine which mistakes become impossible tomorrow.