You are almost certainly measuring the wrong things
Every January, millions of people step on a scale. The number staring back at them is a lagging indicator — a measurement of what already happened across the previous weeks and months of eating, sleeping, moving, and stressing. By the time the scale delivers its verdict, the outcome is already locked in. The only thing left is the emotional reaction.
Meanwhile, the information that would have actually predicted that number — sleep quality, daily movement, meal composition, stress levels, hydration — went untracked. Those are leading indicators: metrics that sit upstream in the causal chain, that move before the outcome moves, that give you time to intervene before the result crystallizes.
This distinction — between metrics that predict the future and metrics that describe the past — is one of the most consequential signal-detection skills you can develop. It applies to your health, your career, your finances, your relationships, and your cognition. And most people get it exactly backward. They obsessively track lagging indicators (revenue, weight, grades, follower counts) while ignoring the leading indicators (pipeline activity, sleep consistency, study hours, engagement depth) that actually determine where those numbers are heading. They stare at the scoreboard instead of watching the game.
In the context of Phase 7's focus on distinguishing signal from noise, here is the core claim: lagging indicators are noise for prediction purposes. They are valuable for accountability, for understanding what happened, and for validating your models. But they cannot tell you what is coming. Only leading indicators can do that. And the skill of identifying which metrics genuinely lead — which ones carry predictive signal rather than retrospective description — is a form of epistemic infrastructure that separates people who react to outcomes from people who shape them.
The causal chain: where you measure determines what you see
Every outcome you care about sits at the end of a causal chain. Revenue is the result of sales conversations, which are the result of qualified leads, which are the result of marketing activity and product quality. Your energy level at 3 PM is the result of sleep quality, morning nutrition, hydration, cognitive load across the day, and physical movement. A relationship's health is the result of thousands of micro-interactions — bids for attention, repair attempts after conflict, shared positive experiences.
A lagging indicator measures the end of the chain. A leading indicator measures somewhere upstream. The further upstream you measure, the more time you have to intervene. But the signal also becomes noisier, because more variables can interfere between your measurement point and the eventual outcome.
This creates a fundamental tradeoff. Lagging indicators are precise but useless for course correction. Leading indicators are actionable but require validation. The skill is not choosing one over the other. It is building a measurement system that includes both, and knowing which to use for which purpose.
Andy Grove understood this deeply. In High Output Management (1983), his canonical text on operational leadership at Intel, Grove introduced the concept of paired indicators — the practice of always measuring a quantity metric alongside a quality metric, and always measuring a leading metric alongside a lagging one. His reasoning was that any single metric, measured in isolation, will mislead you. Inventory levels (a leading indicator of production readiness) must be paired with shortage incidents (a lagging indicator of whether that inventory was actually sufficient). The leading indicator tells you where you are heading. The lagging indicator tells you whether your leading indicator was right (Grove, 1983).
Grove also developed what he called the stagger chart — a visual tool that plots successive forecasts against actual outcomes over time. By watching how forecasts shift from month to month, a manager could identify the systematic biases in their leading indicators and calibrate accordingly. The stagger chart is not a prediction tool. It is a tool for improving predictions — a meta-metric that measures the accuracy of your leading indicators themselves.
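Grove's stagger chart can be sketched in a few lines of Python. All the numbers below are hypothetical: each entry is a successive monthly forecast for the same fixed target period, compared afterward against the actual outcome to expose systematic bias.

```python
# Stagger chart sketch: track how successive monthly forecasts for the
# same target period drift, then compare against the eventual actual.
# All numbers here are hypothetical.

# Forecast for the same quarter's revenue, revised each month.
forecasts = {"July": 118, "Aug": 112, "Sep": 107, "Oct": 103, "Nov": 101}
actual = 98

# Error of each successive forecast relative to the eventual actual.
for month, f in forecasts.items():
    error_pct = 100 * (f - actual) / actual
    print(f"{month}: forecast {f}, error {error_pct:+.1f}%")

# A consistent error sign across months (here, always positive) is the
# meta-signal: this forecasting process systematically overestimates,
# so future forecasts from it should be discounted accordingly.
errors = [f - actual for f in forecasts.values()]
consistently_biased = all(e > 0 for e in errors) or all(e < 0 for e in errors)
print("systematic bias detected:", consistently_biased)
```

The point of the chart is not any single forecast but the pattern across revisions: a forecast series that always starts high and drifts down tells you exactly how much to distrust the next optimistic first estimate.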
The Conference Board model: leading indicators at national scale
The most famous system of leading indicators operates at the scale of the entire United States economy. The Conference Board's Leading Economic Index (LEI), maintained since the 1960s, combines ten components into a composite that anticipates turning points in the business cycle approximately seven months before they arrive (The Conference Board, 2025).
The ten components are: average weekly hours in manufacturing, average weekly initial claims for unemployment insurance, manufacturers' new orders for consumer goods and materials, the ISM Index of New Orders, manufacturers' new orders for nondefense capital goods excluding aircraft, building permits for new private housing units, the S&P 500 Index of Stock Prices, the Leading Credit Index, the interest rate spread between 10-year Treasury bonds and the federal funds rate, and average consumer expectations for business conditions.
Notice what these components have in common. None of them measure GDP directly. None of them measure economic output as it currently exists. Each one measures something that happens before economic output changes. Manufacturers adjust working hours before they hire or fire workers. Building permits are filed before construction begins. Stock prices reflect investor expectations about future earnings, not current earnings. New orders precede production. Credit conditions precede spending.
The LEI works not because any single component is a reliable predictor — each one is noisy in isolation — but because the composite smooths out the noise of individual components while preserving the shared directional signal. When seven or eight of the ten components move in the same direction, the signal is strong enough to predict a turning point months before conventional (lagging) measures like GDP, unemployment, or corporate earnings confirm it.
This is the architecture of a leading indicator system: multiple upstream measurements, each imperfect, combined into a composite that is more reliable than any individual component. The same architecture works at personal scale.
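A minimal sketch of that composite architecture, with invented component names and values (this is not the Conference Board's actual weighting method): standardize each noisy component, average them into one composite, and read directional strength from how many components move together.

```python
# Composite leading-indicator sketch: several noisy upstream components,
# each standardized, averaged into one signal. Component names and
# values are hypothetical; the real LEI uses a more involved scheme.
from statistics import mean, stdev

# Month-over-month percent changes for each component (hypothetical).
components = {
    "weekly_hours":      [0.2, 0.1, -0.3, -0.5],
    "new_orders":        [0.5, 0.3, -0.2, -0.6],
    "building_permits":  [1.0, -0.4, -0.8, -1.2],
    "stock_prices":      [2.0, 1.5, -1.0, -2.5],
    "credit_conditions": [0.1, 0.0, -0.1, -0.4],
}

def standardize(series):
    """Rescale a series to mean 0, stdev 1, so components are comparable."""
    mu, sigma = mean(series), stdev(series)
    return [(x - mu) / sigma for x in series]

# Composite = cross-component average of standardized changes per month.
z = {name: standardize(s) for name, s in components.items()}
n_months = len(next(iter(components.values())))
composite = [mean(z[name][m] for name in z) for m in range(n_months)]

# Diffusion: how many components are declining in the latest month.
declining = sum(1 for s in components.values() if s[-1] < 0)
print("composite by month:", [round(c, 2) for c in composite])
if declining >= 0.7 * len(components):
    print(f"{declining}/{len(components)} components declining -> strong signal")
else:
    print("mixed signal")
```

Standardizing before averaging is what lets a 2.5% stock move and a 0.5% hours move contribute comparably; the diffusion count is what separates a broad turning point from one loud component.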
Goodhart's Law: when leading indicators become noise
There is a trap embedded in the concept of leading indicators, and it is one of the most important ideas in the science of measurement.
In 1975, British economist Charles Goodhart published a paper on monetary policy in the United Kingdom in which he observed that "any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes" (Goodhart, 1975). The observation was originally about central banking: when the Bank of England used the money supply as its primary indicator of inflation and then began targeting the money supply directly, the historical relationship between money supply and inflation broke down. People and institutions changed their behavior in response to the targeting, which meant the indicator no longer measured what it used to measure.
Anthropologist Marilyn Strathern later paraphrased Goodhart's observation into its most cited form: "When a measure becomes a target, it ceases to be a good measure" (Strathern, 1997).
The mechanism is straightforward. A leading indicator works because it correlates with an outcome through a genuine causal pathway. But when you make the indicator a target — when you reward people for hitting the indicator rather than achieving the underlying outcome — they optimize for the indicator through whatever means are available, including means that sever the causal link to the outcome. The indicator keeps looking good. The outcome deteriorates. And you cannot tell, because you are watching the indicator, not the outcome.
Donald Campbell identified the same dynamic from the social science side, writing in 1976: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor" (Campbell, 1976).
The Wells Fargo cross-selling scandal is the canonical corporate example. Wells Fargo tracked "cross-sell ratio" — the number of products per customer — as a leading indicator of customer relationship depth and future revenue. The theory was sound: customers with more products are more engaged and more profitable. But when management turned this leading indicator into a performance target, with compensation and employment tied to hitting cross-sell numbers, employees began opening accounts without customer authorization. Investigators eventually estimated that roughly 3.5 million unauthorized accounts had been created. The indicator looked spectacular. The underlying outcome — genuine customer relationships — had been destroyed. The metric had been gamed so thoroughly that it measured the opposite of what it was designed to measure (Tayan, 2019).
This is what Goodhart's Law looks like in practice. A metric that was once a genuine leading indicator of customer health became pure noise — worse than noise, because it actively misled decision-makers into believing the business was thriving while it was corroding from within.
The defense against Goodhart's Law is Grove's paired indicators principle: never measure a leading indicator without also measuring the outcome it supposedly predicts. If your leading indicator keeps improving but the lagging outcome does not, the leading indicator is being gamed, confounded, or was never truly predictive. The pair keeps you honest.
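That pairing discipline can be sketched as a simple divergence check. The weekly data, the lag, and the trend test below are all hypothetical choices: compare the trend of the leading metric against the trend of the lagging outcome, shifted by the expected lag, and flag the pair when they diverge.

```python
# Paired-indicator check sketch (hypothetical data): if the leading
# metric trends up but the lagging outcome, shifted by the expected
# lag, does not follow, flag the pair for investigation.

def slope(series):
    """Average per-step change: a crude trend estimate."""
    return (series[-1] - series[0]) / (len(series) - 1)

leading = [10, 12, 15, 18, 22, 25]   # e.g. weekly prospecting calls
lagging = [50, 50, 49, 51, 50, 49]   # e.g. weekly revenue
lag_weeks = 2                         # assumed lead time

# Compare the leading trend against the lagging trend lag_weeks later.
lead_trend = slope(leading[:-lag_weeks])
lag_trend = slope(lagging[lag_weeks:])

if lead_trend > 0 and lag_trend <= 0:
    print("divergence: leading improves but outcome is flat -> "
          "gamed, confounded, or non-predictive")
else:
    print("pair consistent")
```

In this toy example the prospecting metric more than doubles while revenue goes nowhere, which is exactly the pattern that should trigger an audit of the leading indicator rather than a celebration of it.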
Process metrics are leading indicators for your performance
The distinction between leading and lagging indicators maps precisely onto the distinction between process and outcome in performance science.
A systematic review and meta-analysis of goal-setting research in sport found a striking hierarchy of effectiveness. Process goals, which target the specific actions and behaviors that produce performance, had an effect size of d = 1.36 on actual performance. Performance goals, which target personal benchmarks, had an effect size of d = 0.44. And outcome goals, which target winning, rankings, or external results, had an effect size of d = 0.09. The effect size for process goals was roughly fifteen times that of outcome goals (International Review of Sport and Exercise Psychology, 2022).
The reason maps directly to the leading/lagging framework. Process metrics are leading indicators. They measure what you do today — the behaviors, habits, and actions that sit upstream of the outcome. Outcome metrics are lagging indicators. They measure the result after it has been determined. When athletes focus on outcomes (winning the race, beating a competitor), their attention is consumed by something they cannot directly control, which increases anxiety and degrades the very performance they are trying to optimize. When they focus on process (stroke mechanics, breathing rhythm, split times), their attention is on what they can control right now — the leading indicators that actually determine the outcome.
This is not limited to sports. In healthcare, research published in the International Journal for Quality in Health Care showed that process measures are more suitable for performance management than outcome measures, precisely because outcomes are influenced by too many confounding factors. A hospital's mortality rate (lagging) is affected by patient demographics, disease severity, regional health patterns, and dozens of other variables beyond the hospital's control. But hand-hygiene compliance rates and time-to-antibiotic administration (leading) directly measure the quality of care delivery and are far more actionable for improvement (Mant, 2001).
The principle generalizes to personal performance. If you want to improve your writing, tracking "articles published" (lagging) tells you less than tracking "hours spent writing" and "drafts completed" (leading). If you want to improve your health, tracking your weight (lagging) tells you less than tracking sleep duration, daily steps, and vegetable servings (leading). If you want to deepen a relationship, tracking "how happy am I in this relationship" (lagging) tells you less than tracking "bids for connection made" and "repair attempts after disagreement" (leading).
The question is always the same: are you measuring the end of the causal chain, or somewhere upstream where you still have leverage?
Personal leading indicators: the metrics that predict your life
The research on personal health metrics provides a concrete illustration of how leading indicators work at the individual level.
Sleep quality is one of the most powerful personal leading indicators researchers have identified. It predicts next-day cognitive performance, mood, physical energy, decision quality, and interpersonal patience. A study published in PNAS found that sleep duration and timing are significantly associated with next-day physical activity levels — meaning your sleep tonight is a leading indicator for your movement tomorrow, which is itself a leading indicator for your long-term health outcomes (PNAS, 2025).
Heart rate variability (HRV) has emerged as another potent personal leading indicator. HRV measures the variation in time between successive heartbeats and reflects the balance between sympathetic (stress) and parasympathetic (recovery) nervous system activity. Research has shown that HRV can predict burnout risk with approximately 79% accuracy when combined with psychosocial factors, and that pre-stress-event HRV measures significantly predict subsequent performance under pressure in executive functioning tasks (Frontiers in Physiology, 2022; medRxiv, 2025). Lower HRV signals accumulating stress before you feel burned out. Higher HRV signals resilience before you face the test.
Exercise frequency operates as a leading indicator for nearly everything — sleep quality, cognitive function, emotional regulation, metabolic health, and even social engagement. A systematic review and network meta-analysis found that combined exercise at a frequency of four times per week, for sessions of thirty minutes or less, had the strongest effect on improving sleep quality (Frontiers in Psychology, 2024). Exercise is not just good for you. It is a measurable, trackable, upstream predictor of dozens of downstream outcomes.
The pattern across this research is consistent: the metrics that predict your quality of life are behavioral and physiological metrics that you can measure today — sleep, movement, HRV, nutrition — not the outcomes that those behaviors eventually produce — weight, productivity, mood, longevity. The outcomes confirm the trajectory. The leading indicators define it.
Your Third Brain: AI as leading indicator discovery engine
Here is where artificial intelligence transforms leading indicator identification from intuition into systematic analysis.
The fundamental challenge with leading indicators is discovering which upstream metrics actually predict which downstream outcomes in your specific context. The Conference Board spent decades refining their ten components through statistical analysis of economic data. You do not have decades to identify which personal metrics predict your performance, health, and wellbeing. But you do have access to tools that can accelerate that discovery.
Feed an LLM your personal tracking data — sleep logs, exercise records, mood ratings, work output metrics, energy levels, relationship quality assessments — and ask it to identify correlations between upstream behaviors and downstream outcomes. Which behaviors from Monday through Wednesday predict your Friday energy? Which patterns in your sleep data precede your most productive weeks? Which communication patterns precede relationship friction?
AI excels at this because it can detect patterns across more variables and longer time spans than human working memory can hold. You might intuitively sense that poor sleep leads to bad days, but an LLM analyzing three months of your tracking data might tell you that, specifically, sleep onset after 11:30 PM combined with fewer than two exercise sessions in the preceding three days predicts a 40% reduction in your self-rated work quality two days later. That level of specificity turns a vague intuition into a precise, trackable leading indicator.
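A sketch of that kind of lagged-pattern scan, with invented daily logs (the metric names and values are illustrative, not real data): for each candidate upstream metric, compute its correlation with a downstream outcome at several day-lags and keep the strongest relationship.

```python
# Lagged-correlation scan sketch: for each candidate leading metric,
# correlate it with a downstream outcome at several day-lags and
# report the strongest relationship. Data below is invented.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Daily logs (hypothetical): candidate leading metrics and an outcome.
sleep_hours  = [7.5, 6.0, 8.0, 5.5, 7.0, 8.5, 6.5, 7.5, 5.0, 8.0]
steps_k      = [9, 4, 11, 3, 8, 12, 5, 9, 2, 10]
work_quality = [6, 7, 5, 8, 4, 7, 9, 5, 7, 3]  # self-rated, 1-10

candidates = {"sleep_hours": sleep_hours, "steps_k": steps_k}
for name, series in candidates.items():
    # Pair each day's metric with the outcome 1, 2, or 3 days later.
    best = max(
        ((lag, pearson(series[:-lag], work_quality[lag:])) for lag in (1, 2, 3)),
        key=lambda t: abs(t[1]),
    )
    print(f"{name}: strongest at lag {best[0]} days, r = {best[1]:+.2f}")
```

With ten days of data these correlations are statistically meaningless; the sketch only shows the mechanics. On months of real logs, the same scan surfaces candidate lead times worth testing, which is where the next step, deliberate validation, takes over.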
The critical discipline: AI identifies candidate leading indicators. You validate them through deliberate tracking. A correlation in historical data is a hypothesis, not a fact. Run the experiment. Track the proposed leading indicator for two weeks. See whether it actually predicts the outcome. If it does, you have a genuine signal. If it does not, you have eliminated a false candidate — which is also valuable, because it stops you from tracking noise.
Modern personal dashboards are moving in this direction. Tools are increasingly shifting from passive tracking to predictive analytics — identifying patterns in your data that forecast future states rather than merely recording past ones. The quantified self movement's next evolution is not more data. It is better leading indicators extracted from existing data.
Protocol: build your leading indicator system
Step 1 — Identify your lagging outcomes (10 minutes). Write down three to five outcomes that matter to you right now. These should be results, not activities. Examples: monthly revenue, body composition, relationship satisfaction, creative output volume, career progression. These are your lagging indicators.
Step 2 — Map the causal chains (15 minutes). For each lagging outcome, trace backward through the causal chain. What behaviors produce the outcome? What conditions enable those behaviors? What inputs drive those conditions? Write the chain. Revenue comes from sales. Sales come from conversations. Conversations come from outreach. Outreach comes from having a pipeline. Pipeline comes from daily prospecting activity. The further upstream you go, the more leading the indicator becomes.
Step 3 — Select your leading indicators (10 minutes). For each causal chain, identify one to two metrics that are upstream enough to give you warning time but downstream enough to maintain a genuine predictive relationship with the outcome. These become your daily tracking targets.
Step 4 — Build paired measurements. For each leading indicator, pair it with the lagging outcome it is supposed to predict. Track both. This is your Goodhart's Law defense. If the leading indicator moves but the lagging outcome does not follow within a reasonable timeframe, your leading indicator is broken — either gamed, confounded, or non-predictive.
Step 5 — Review and recalibrate weekly. Every week, check: did the leading indicators from last week predict this week's outcomes? If yes, continue tracking. If not, investigate why. Adjust your indicators based on evidence. A leading indicator system that never gets recalibrated is just a to-do list with extra steps.
Step 6 — Use AI to discover what you are missing. Once you have two to four weeks of paired data, feed it to an LLM and ask: "Based on this data, which of my tracked metrics have the strongest predictive relationship with these outcomes? Are there patterns I am not tracking that the data suggests matter?" Let the AI propose candidates. Then validate them through deliberate tracking.
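Steps 4 and 5 can be sketched as a simple weekly scoring loop (the data, the one-week lag, and the 60% threshold are all hypothetical choices): each week, check whether the leading indicator's direction predicted the lagging outcome's direction the following week, and retire indicators whose hit rate stays near chance.

```python
# Weekly recalibration sketch: score how often the leading indicator's
# weekly direction predicted the lagging outcome's direction one week
# later. Data and the 60% keep-threshold are hypothetical.

leading = [3, 5, 4, 6, 7, 5, 8]   # e.g. workouts per week
lagging = [6, 6, 7, 6, 8, 8, 7]   # e.g. weekly energy rating (lags 1 week)

def direction(a, b):
    """+1 if the series went up, -1 if down, 0 if flat."""
    return (b > a) - (b < a)

hits = trials = 0
for w in range(len(leading) - 2):
    pred = direction(leading[w], leading[w + 1])         # indicator move
    outcome = direction(lagging[w + 1], lagging[w + 2])  # outcome, 1 week later
    if pred != 0 and outcome != 0:                       # skip flat weeks
        trials += 1
        hits += (pred == outcome)

hit_rate = hits / trials if trials else 0.0
print(f"hit rate: {hits}/{trials} = {hit_rate:.0%}")
print("keep tracking" if hit_rate >= 0.6 else "recalibrate or retire")
```

A directional hit rate is a deliberately crude score: it ignores magnitude, but it is easy to compute by hand each week, and it answers the only question the weekly review needs answered: is this indicator still earning its place on the dashboard?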
The signal is always upstream
The lesson that connects this to the broader Phase 7 curriculum is this: lagging indicators feel informative because they are concrete, definitive, and emotionally resonant. Your revenue number, your test score, your weight, your performance review — these feel like the truth about where you stand. And they are the truth. About where you stood. Past tense.
The future is written in leading indicators — in the upstream behaviors, conditions, and patterns that have not yet produced their outcomes. Learning to identify, track, and act on leading indicators is a fundamental signal-detection skill because it redirects your attention from noise (the retrospective measurement that cannot be changed) to signal (the predictive measurement that still can).
In L-0129, you learned that your emotional reaction to information is often noise rather than signal. Leading indicators extend that principle to measurement itself. The metric that triggers the strongest emotional reaction — last quarter's revenue, your latest weigh-in, the performance review score — is almost always the lagging indicator. The metric that feels boring, incremental, and undramatic — daily prospecting calls, nightly sleep duration, weekly exercise sessions — is almost always the leading one.
Signal is quiet. Signal is upstream. Signal is where the future is still being written.
In the next lesson, First-party data beats second-hand reports, you will learn why the data you collect directly — including the leading indicators you now know how to track — carries more signal than any filtered, aggregated, or interpreted report from someone else.
Sources
- Grove, A. S. (1983). High Output Management. Random House.
- Goodhart, C. A. E. (1975). Problems of monetary management: The U.K. experience. Papers in Monetary Economics, Reserve Bank of Australia.
- Strathern, M. (1997). 'Improving ratings': Audit in the British university system. European Review, 5(3), 305-321.
- Campbell, D. T. (1976). Assessing the impact of planned social change. Occasional Paper Series, Public Affairs Center, Dartmouth College. Reprinted in Evaluation and Program Planning, 2(1), 67-90 (1979).
- The Conference Board. (2025). US Leading Indicators. https://www.conference-board.org/topics/us-leading-indicators/
- Tayan, B. (2019). The Wells Fargo cross-selling scandal. Harvard Law School Forum on Corporate Governance. https://corpgov.law.harvard.edu/2019/02/06/the-wells-fargo-cross-selling-scandal-2/
- Mant, J. (2001). Process versus outcome indicators in the assessment of quality of health care. International Journal for Quality in Health Care, 13(6), 475-480.
- Doerr, J. (2018). Measure What Matters: How Google, Bono, and the Gates Foundation Rock the World with OKRs. Portfolio/Penguin.
- Luo, S., et al. (2025). Sleep duration and timing are associated with next-day physical activity. Proceedings of the National Academy of Sciences. https://www.pnas.org/doi/10.1073/pnas.2420846122
- Sammito, S., et al. (2023). Heart rate variability for evaluating psychological stress changes in healthy adults: A scoping review. PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC10614455/