Title: Noise: A Flaw in Human Judgment
Authors: Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein
Year: 2021
Pages: 454
Noise did not change my life, but it changed the way I think about my own decisions and judgments.
We discussed “Thinking, Fast and Slow” last week, a book that helps us reflect on our thinking processes.
This week, Daniel Kahneman is once again helping us reflect on how our brains make decisions and judgments.
Overall, I gave this book a rating of 8.5/10.
For me, a book rated 10 is one I consider rereading every year. Among the books I rate 10, for example, are How to Win Friends and Influence People and Factfulness.
3 Reasons to Read Noise
Spot the Hidden Problem
Most of us know about bias, but noise is the silent troublemaker. It’s when people make wildly different decisions in the same situations. This book shows you where those hidden inconsistencies live—in hiring, medicine, justice, and even your daily choices.
Make Fairer Decisions
We like to believe that judgment is fair and consistent. The truth? It often isn’t. Noise reveals how even experienced professionals are swayed by irrelevant factors—like mood, time of day, or the order of information. If you care about fairness, this book gives you tools to fight back.
Fix Your Systems
Whether you lead a team or run a business, your decisions shape outcomes. This book helps you structure better decision-making systems by focusing on clarity, consistency, and what the authors call “decision hygiene.” It’s not flashy—but it works.
Book Overview
Imagine this: two people with identical backgrounds apply for the same insurance policy. One gets a much higher premium than the other. Or picture two suspects committing nearly identical crimes but receiving wildly different sentences.
No, it’s not a twist of fate or a hidden detail. It’s something far less dramatic but far more common—something Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein call noise.
Most of us are familiar with bias—the idea that judgment can be distorted in predictable ways. We’ve heard about cognitive shortcuts, systemic prejudice, and how beliefs influence decisions.
But what Noise argues is that there’s another culprit lurking in the background: the randomness of human judgment. And unlike bias, noise is sneaky. It hides in plain sight, unmeasured, and usually unnoticed.
The authors explain that noise isn’t about people being wrong in the same way—it’s about people being all over the place. Different judges give different sentences for the same crimes. Doctors offer different diagnoses for the same symptoms.
Managers rate similar employees completely differently. These aren’t just outliers; they’re symptoms of a widespread problem. And once you start looking for it, you see it everywhere.
One of the most eye-opening parts of the book is a study conducted inside an insurance company. Multiple underwriters were asked to assess the same set of claims. The company expected maybe a 10% difference between judgments.
The actual number? Fifty-five percent. That’s not just a little variation—it’s a full-on judgment lottery. And it’s not just insurance. From hiring decisions to medical diagnoses, financial forecasts to performance reviews, our systems are filled with inconsistency that can’t be explained by skill or logic.
So what causes all this noise? The book breaks it down into three main sources. Sometimes it’s level noise—some people are just stricter or more generous than others.
Other times it’s pattern noise—people weigh different parts of the same case differently, even when using the same scale.
And then there’s occasion noise, which might be the most random of all: the influence of time of day, mood, weather, hunger, or even what case you evaluated right before this one.
These aren’t failures of intelligence—they’re part of being human. But when lives, careers, or millions of dollars are on the line, quirks matter. Noise makes systems unreliable. It makes outcomes unpredictable and often unfair.
But here’s the twist: noise is rarely treated as a problem. While organizations spend a lot of time trying to fix bias—through diversity training, awareness programs, or checking for systemic slants—noise remains mostly invisible.
That’s what makes this book so important. It names and frames something we all experience but rarely understand.
Still, the book doesn’t just diagnose the issue. It offers something better: a practical path forward.
The authors introduce a concept they call decision hygiene—a set of habits, tools, and structural tweaks that reduce noise without needing to identify every specific cause.
It’s a bit like washing your hands. You don’t have to know exactly which germ you’re avoiding. You just follow a routine that works across many cases.
The same goes for judgment: structure your process, break complex evaluations into smaller parts, rate attributes separately, use clear definitions, delay intuition until after the data is in. Simple things—but they work.
One of the most compelling examples is in hiring. Many organizations still rely on free-flowing interviews where managers “get a feel” for a candidate.
But research shows that unstructured interviews are full of noise and surprisingly weak predictors of job performance. Structured interviews—with clear questions, defined scoring, and evaluations across key traits—are not only more consistent, they’re also more accurate.
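To make the contrast concrete, here is a minimal sketch (my own illustration, not an example from the book) of how a structured interview score can be assembled: each interviewer rates a few defined traits separately, and the scores are combined mechanically afterward.

```python
import statistics

TRAITS = ["problem_solving", "communication", "domain_knowledge"]

# Hypothetical panel: each interviewer scores every trait on a 1-5
# scale, independently and before any group discussion.
interviews = [
    {"problem_solving": 4, "communication": 3, "domain_knowledge": 5},
    {"problem_solving": 5, "communication": 4, "domain_knowledge": 4},
    {"problem_solving": 4, "communication": 4, "domain_knowledge": 4},
]

# Mechanical aggregation: average each trait across interviewers,
# then average the traits into one overall score.
trait_scores = {t: statistics.mean(i[t] for i in interviews) for t in TRAITS}
overall = statistics.mean(trait_scores.values())

print(trait_scores)
print(f"overall: {overall:.2f}")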
Another powerful story comes from forensic science. When fingerprint examiners were shown crime scene prints after being told a suspect had confessed, their interpretations shifted. They were more likely to find a match.
This wasn’t fraud—it was human psychology. Their judgment had been “primed” by context. To fix this, scientists now use sequential unmasking, a technique that controls the order in which information is revealed. Evidence first, context later. That small change cuts down on contamination—and noise.
The book also makes a fascinating point about aggregation. We often assume that one expert is better than a group.
But in many cases, averaging multiple independent judgments leads to more accurate decisions.
This is true in forecasting, hiring, even diagnosing complex problems. It turns out that noise is often personal—and combining perspectives helps cancel it out.
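The mechanics behind this are basic statistics: averaging n independent, equally noisy judgments shrinks the noise (the standard deviation of the result) by a factor of √n, though it leaves any shared bias untouched. A quick simulation with made-up numbers, just to illustrate the effect the authors describe:

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0

def one_judgment():
    # Each judge is unbiased on average but noisy (sd = 20).
    return random.gauss(TRUE_VALUE, 20)

def panel_average(n_judges):
    # Average n independent judgments of the same case.
    return statistics.mean(one_judgment() for _ in range(n_judges))

singles = [one_judgment() for _ in range(10_000)]
panels = [panel_average(9) for _ in range(10_000)]

print(statistics.stdev(singles))  # ~20: the noise of a lone judge
print(statistics.stdev(panels))   # ~20 / sqrt(9), roughly 6.7, after averaging nine
```

The catch, which the authors return to later, is independence: the judgments have to be formed separately, before anyone compares notes.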
That said, the book doesn’t ignore the human side of all this. In fact, one of its most thoughtful chapters explores the conflict between consistency and dignity.
People don’t want to be judged by formulas. They want to be seen as individuals. There’s something dehumanizing about being evaluated by a system that doesn’t take your story into account.
So the question becomes: how do we design systems that are consistent and fair—but also compassionate?
The answer isn’t either/or. It’s balance. Use structure where possible, and judgment where necessary.
Make room for exceptions—but make sure they’re deliberate, not random. This balance is what the authors argue for: a world where noise is reduced, not at the cost of humanity, but in service of better judgment.
In its final chapters, the book gently reminds us that not all noise is worth eliminating. Sometimes, small inconsistencies aren’t worth the cost of overengineering the process.
What matters is knowing when noise is truly damaging—and building systems that are strong where it counts.
You walk away from Noise with a new kind of lens. It changes how you see decision-making—yours and others’.
It makes you a little more skeptical of gut calls, a little more interested in structure, and a lot more aware of the invisible variability that shapes our lives.
If Thinking, Fast and Slow made you question how your brain works, Noise will make you question how your decisions—and systems—are built. It’s not a beach read, and it’s not light, but it’s essential for anyone who wants to make better, fairer, more thoughtful choices in a noisy world.
What is judgment, really?
Judgment, as they define it, isn’t just thinking. It’s a form of measurement—a way of assigning a value to something. When a doctor says a tumor is probably benign or a manager decides which candidate to hire, that’s a judgment. And just like any measurement tool, judgments can be faulty. Some are biased. Some are noisy. Many are both.
The big distinction: Bias vs. Noise
Bias is when people are systematically off in one direction—like always being too optimistic about future sales. Noise is more like a scattershot—people giving very different answers to the same problem. Even if bias is reduced, noise still causes major errors. And surprisingly, in many professional settings, noise can be the bigger problem.
They also clarify that while we can’t always know the “true value” of a judgment (especially in things like sentencing or insurance pricing), we can still detect and measure noise through methods like noise audits, where multiple people assess the same cases independently. The variation itself is the clue.
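As a sketch of what a noise audit's arithmetic might look like (the cases and numbers below are invented for illustration), the raw material is several professionals' independent answers to the same cases, and the signal is how much those answers spread:

```python
import statistics

# Invented audit data: each case was assessed independently
# by four professionals using the same information.
audit = {
    "case_a": [55, 80, 62, 95],
    "case_b": [40, 38, 44, 41],
    "case_c": [70, 105, 60, 88],
}

for case, judgments in audit.items():
    spread = statistics.stdev(judgments)            # the variation is the clue
    relative = spread / statistics.mean(judgments)  # spread as a share of the mean
    print(f"{case}: sd = {spread:.1f} ({relative:.0%} of the mean)")
```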
Why is noise a problem?
In creative domains or areas of taste, variation can be good. But when it comes to professional judgments—medicine, hiring, law, or finance—inconsistency can mean unfairness, wasted resources, or real harm. Imagine two similar patients getting different treatments or two job candidates being judged differently just because it’s Monday or the interviewer skipped lunch.
The authors argue this isn’t just theoretical. They show real-world examples across industries where this kind of inconsistency leads to bad decisions—and it often goes unnoticed because people assume that errors “cancel out.” But they don’t. One overpriced insurance policy and one underpriced one don’t balance each other—they just hurt in different ways.
Where noise hides
Part of the challenge is that noise doesn’t have the same “explanatory charisma” as bias. When something goes wrong, we like to find a reason—usually a bias. But noise is harder to see and explain. It requires statistical thinking, which doesn’t come naturally. Professionals rarely imagine how differently others might judge the same situation. And organizations often ignore disagreement because it’s uncomfortable or inconvenient.
How do we fix it?
The book ends with a hopeful but practical tone. Yes, noise is everywhere, but we’re not helpless.
Some people are better judges than others, thanks to intelligence, domain skill, and actively open-minded thinking. But even the best judges disagree—so we need better systems, not just better individuals.
This is where decision hygiene comes in. It’s like washing your hands—you may not know what germs you’re avoiding, but the process protects you anyway. Good judgment systems do the same: they reduce variability without needing to know every specific cause of error.
The authors outline six key principles of decision hygiene that showed up throughout the book:
- The goal of judgment is accuracy, not self-expression.
- Take the outside view—compare your case to similar cases instead of seeing it as unique.
- Structure judgments into independent parts rather than forming one big impression.
- Delay intuition—don’t jump to conclusions before considering all the evidence.
- Collect independent judgments from multiple people before aggregating them.
- Use algorithms or rules when possible—they’re the only way to truly eliminate noise.
These steps aren’t glamorous. They’re not dramatic or even intuitive. But they work.
Chapter by Chapter
Chapter 1 – Crime and Noisy Punishment
What if justice depended less on law and more on luck?
This chapter hits hard right away with a tough question: if two people commit the same crime, should they receive the same sentence? Most of us would instinctively say yes. But in real life, things don’t play out so neatly.
The authors start with a strong example from the criminal justice system—specifically the American courts in the 1970s—to introduce the idea of noise. It’s a simple but unsettling idea: when different judges give different sentences for the same crime, and that difference isn’t due to legal rules or facts, it’s noise—random variability in professional judgment.
The Judge Who Spoke Up
We meet Judge Marvin Frankel, a respected federal judge who became increasingly disturbed by what he saw in the courtroom. He noticed something others ignored or accepted: a lack of consistency. Why was one man getting 15 years for robbery while another got 30 days for the same thing?
Frankel didn’t just raise the issue—he took action. In 1973, he published a book arguing that this kind of inconsistency was a failure of justice. He saw sentencing as a domain where decisions should be based on clear standards, not personal whims. He went as far as proposing something radical: the use of sentencing commissions and even computers to create more consistent outcomes. His argument wasn’t about robots replacing humans—it was about minimizing unfair randomness in decisions that deeply affect lives.
Enter the Guidelines
Frankel’s influence helped lead to the U.S. Sentencing Reform Act of 1984. This law created the U.S. Sentencing Commission and introduced mandatory sentencing guidelines. Judges were now required to follow specific rules based on the nature of the crime and the offender’s criminal history.
These guidelines drastically reduced variation. Research showed that sentencing became more consistent—there was still some variation, but the wild swings were gone. In a way, this was the system’s first major attempt to “de-noise” its judgments.
The Pushback and the Rebound
But the system pushed back. Many judges and legal experts felt these guidelines were too rigid. They argued that justice sometimes requires flexibility—that no two cases are truly identical. In 2005, the Supreme Court responded to this tension by ruling that the guidelines could no longer be mandatory. Judges could still use them, but they weren’t bound by them.
Not surprisingly, variation in sentencing increased again after the decision. The noise came back.
What’s the Big Deal with Noise?
This isn’t just a story about crime or courts. The authors use this example to show us something bigger: noise is everywhere people make judgments. It’s not about people being biased or corrupt. It’s about honest professionals—like judges—making inconsistent decisions because of subtle, often invisible influences.
That’s what makes noise so hard to fight. It’s not emotional or ideological. It’s statistical. It’s silent. But it leads to unfairness all the same.
This chapter lays the foundation for the book’s central idea: while we often talk about bias (which pushes decisions in a particular direction), we rarely talk about noise (which makes decisions spread out randomly). But both are equally dangerous.
Final thought
The core message here is simple but powerful: when we allow decisions to vary because of factors that shouldn’t matter—like who’s judging you or what mood they’re in—we’re not just being inefficient. We’re being unjust. That’s the real cost of noise.
Chapter 2 – A Noisy System
Noise isn’t just in courtrooms—it’s everywhere people make decisions
This chapter expands the lens. While Chapter 1 focused on the courtroom, Chapter 2 makes it clear: noise isn’t a legal problem, it’s a human problem. Wherever people make judgments, noise creeps in. Even in places where we assume consistency—like medicine, business, or insurance—noise quietly does its damage.
A Surprising Experiment in Insurance
The chapter kicks off with a fascinating real-world study. A large insurance company invited researchers to run a “noise audit.” The idea was simple: give experienced underwriters the exact same customer profile and ask them to estimate the appropriate premium. Ideally, if everyone followed the same rules, their answers should be close.
They weren’t. Not even close.
The variation in quotes was enormous—often differing by over 50%. That’s not bias; that’s noise. The underwriters were all trained professionals using the same systems and tools, but their judgments were scattered. This randomness in decision-making is the exact kind of noise the book aims to explore—and reduce.
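The book reports this gap as a median over pairs of underwriters: take two quotes for the same case and divide their difference by their average. A sketch of that calculation, with invented quotes:

```python
import itertools
import statistics

# Invented premium quotes from five underwriters for one identical case.
quotes = [9_500, 16_000, 12_200, 21_000, 14_400]

# For every pair of underwriters, the gap as a fraction of the pair's average.
gaps = [abs(a - b) / ((a + b) / 2)
        for a, b in itertools.combinations(quotes, 2)]

print(f"median relative difference: {statistics.median(gaps):.0%}")
```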
Bias vs. Noise
One of the key distinctions made in this chapter is between bias and noise. Bias is when judgments are systematically off in one direction. Noise is when judgments are all over the place. You can think of bias as the target being missed in a predictable way, while noise is when the darts are scattered all over the board.
Most organizations focus hard on eliminating bias. That’s good—but they often forget about noise. And as the authors argue, noise can be just as damaging, even if it’s harder to see. A business might proudly eliminate racial or gender bias in hiring, for example, but still have wildly different outcomes depending on which manager is doing the interview. That’s noise.
Where Noise Hides
The insurance example isn’t unique. The authors show that noisy systems exist in:
- Medicine, where doctors give different diagnoses for the same symptoms.
- Asylum applications, where the odds of approval can depend heavily on which official reviews the case.
- Hiring, where one manager’s “high potential” is another’s “not a fit.”
- Business, where strategic decisions about investments or performance reviews vary widely depending on who’s making them.
This doesn’t mean people are bad at their jobs. It means that judgment is inherently noisy—and that most organizations aren’t aware of it.
Why Don’t We Notice the Noise?
This part of the chapter is particularly interesting. The authors argue that noise often goes undetected because it doesn’t feel like a problem. If a company sees a range of decisions, it might think that’s just “human nuance” or “managerial style.” But often, it’s just inconsistency masquerading as expertise.
Another reason we don’t notice noise is that we tend to focus on average outcomes. If a team averages out to a good result, we assume everything is working well. But averages can hide a lot of internal variation—and that variation can be costly.
So What’s the Cost?
The cost of noise is inefficiency, unfairness, and missed opportunities. If different people make different decisions based on the same information, it means that good ideas might be ignored, bad hires might be made, and customers might be treated unfairly—all because of chance.
The Call to Action
The chapter closes with a call to measure and manage noise—not just bias. The authors suggest that organizations can’t fix what they don’t see. By running noise audits—like the insurance company did—they can finally bring this hidden problem to light.
Once you start looking for noise, you’ll see it everywhere.
Chapter 3 – Singular Decisions
What happens when a decision is made just once—and there’s no way to compare it?
So far, we’ve looked at decisions that are repeated—like sentencing or underwriting—where you can spot patterns and inconsistencies. But what about the kinds of decisions that only happen once? The one-off judgments? That’s what this chapter explores.
The authors call these singular decisions, and they’re surprisingly common in leadership, hiring, investing, medicine, and even personal life. Think of a CEO deciding to acquire another company, a doctor diagnosing a rare condition, or a hiring manager choosing a candidate from a shortlist. These aren’t decisions made every day with plenty of chances to compare and refine. They’re high-stakes, one-time calls—and they’re incredibly vulnerable to noise.
Why Singular Decisions Are So Noisy
Here’s the core idea: when there’s no clear benchmark, no easy feedback, and no repeatable process, judgment becomes even more uncertain. We rely more on intuition, gut feeling, or past experience. But those things aren’t always reliable.
The authors point out a key challenge with singular decisions: we can’t detect noise afterward. There’s no alternate reality where we see how a different judge, manager, or doctor would’ve decided. So even if a decision feels sound at the time, it could still be wildly inconsistent compared to what someone else would’ve done.
And this leads to a deeper problem: overconfidence.
The Illusion of Validity
People are generally overconfident in their judgments, especially in singular decisions. Why? Because we tend to believe in the story we’ve built in our heads. If a manager feels that a candidate is “a perfect fit,” they’ll stick with that belief—even if the reasoning is shaky. The same goes for investment decisions, product launches, or medical calls. We trust our reasoning, and we rarely get clear feedback to prove us wrong.
The chapter introduces a concept called “the illusion of validity”—our tendency to believe that our judgments are right just because they feel coherent. We mistake consistency within our own minds for accuracy in the real world. This illusion becomes even stronger when the decision is complex, and we’ve invested time in it.
The Missing Counterfactual
Another important point the authors make is about counterfactuals. In singular decisions, we often don’t know what would have happened if we’d chosen differently. If you hire one candidate and they underperform, you’ll never know if the runner-up would’ve been better—or worse. This makes it nearly impossible to detect noise in hindsight. So we move on, assuming we made the best choice.
Can We Reduce Noise in Singular Decisions?
Yes—but it takes effort. One approach the authors suggest is to break down the decision into multiple independent judgments, ideally made by different people. Instead of asking one manager to make a hiring decision based on a gut feeling, have several people evaluate different attributes separately and systematically. Then combine their input. This reduces the influence of any one person’s noise and creates a more balanced view.
Another strategy is structured decision-making—using clear criteria, checklists, or scoring systems to guide the process. It’s not about removing human judgment, but about helping it be more deliberate and less random.
Why This Matters
Singular decisions often shape lives, companies, and even history. And yet, they’re made in the fog of uncertainty, personal bias, and noise. The authors argue that we should treat these decisions with more humility—recognizing that just because something feels right doesn’t mean it is.
The takeaway?
The most important decisions are often the noisiest—and the ones where we’re least likely to see it. To make better one-off calls, we need structure, diverse perspectives, and the discipline to question our certainty.
Chapter 4 – Matters of Judgment
Your mind as a measuring tool
The authors explain that when you make a judgment—like deciding if a job candidate will succeed or what sentence to give a criminal—you’re actually performing a kind of mental measurement. You’re assigning a value to something that doesn’t have a clear, objective answer. And because our minds aren’t perfect measuring instruments, our judgments can vary a lot. That variation is noise.
Unlike questions with clear right or wrong answers, many professional judgments are subjective. Still, the authors argue, just because something isn’t verifiable doesn’t mean anything goes. There are still better and worse ways to judge, depending on how consistent and well-structured the reasoning is.
The Michael Gambardi exercise
One of the most fascinating parts of this chapter is a little experiment involving a fictional CEO candidate named Michael Gambardi. Readers are given a short profile about him and asked to decide: Will he succeed in the new job?
There’s no right answer—he’s not a real person—but almost everyone will have an opinion. The point of this exercise isn’t to guess correctly, but to show how quickly we form impressions and turn them into judgments. We read a few facts, connect them into a story, and then rate someone’s chances of success like we’re filling out a performance review. It all feels natural—but it’s not always reliable.
This shows how easily we mistake a “feeling of coherence” for a solid decision. If the story makes sense in our head, we feel confident—even if we’d make a different call on another day or if someone else sees the same profile completely differently. That’s the heart of the noise problem.
Verifiable vs. nonverifiable judgments
Judgments fall into two broad categories. Some can be verified—like whether a medical diagnosis was correct after test results come back. Others, like evaluating leadership potential, may never be fully confirmed. The authors call these nonverifiable judgments, and they’re especially prone to noise because there’s no feedback loop to help improve or adjust them.
When there’s no way to confirm if we were right, we tend to rely even more on our internal signals—like confidence or coherence—to feel good about our decisions. But those signals can be misleading.
Why the process matters
Even when we can’t verify a judgment’s accuracy, we can still ask: was the process sound? The authors stress that how we arrive at a judgment can be just as important as the judgment itself. A good process—structured, consistent, and well-thought-out—can reduce noise, even if the outcome can’t be checked.
This is especially important in fields like hiring, admissions, or law, where decisions carry serious consequences. If different people reach wildly different conclusions based on the same information, the system feels unfair—even if nobody meant to be unfair.
Evaluative judgments and fairness
The chapter closes by looking at evaluative judgments—decisions where we have to balance competing values, like how harsh a sentence should be or what grade a student deserves. These decisions are personal, but they still need consistency. If one judge routinely gives lenient sentences and another gives harsh ones for similar crimes, that’s not just style—it’s noise. And it undermines trust in the system.
The authors argue that in areas where fairness is essential, we can’t just accept noise as a fact of life. We need to build systems that encourage consistency—not rigid conformity, but reasonable alignment.
Chapter 5 – Measuring Error
Understanding error as a combination of bias and noise
The authors open this chapter by breaking down what makes a judgment wrong. It’s not just about being biased—error is actually made up of two parts: bias and noise. Bias happens when judgments consistently miss in one direction. Noise happens when judgments scatter in many directions. Both lead to mistakes, but we tend to talk a lot more about bias and ignore noise.
To illustrate this, the authors use a simple yet powerful metaphor: imagine a target. If arrows are consistently off to one side, that’s bias. If arrows are all over the place, that’s noise. Most organizations work hard to reduce bias—but few even measure the noise in their systems.
The “mean squared error” model
This part of the chapter gets a bit more technical, but the authors keep it digestible. They introduce the simple equation statisticians use: mean squared error (MSE) = bias² + noise².
In plain terms, this means the total error in any judgment is a mix of both bias and noise.
Let’s say a group of doctors is diagnosing patients. If they all over-diagnose, that’s bias. If they wildly disagree on the same case, that’s noise. If both things happen, the error grows. And because these two sources of error add up, even moderate noise can cause serious problems.
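Plugging invented numbers into that equation shows that it holds exactly. Here, bias is the gap between the group's average judgment and the truth, and noise is the standard deviation of the judgments around their own average:

```python
import statistics

true_value = 50.0
judgments = [58, 62, 55, 65, 60]  # invented: consistently high AND scattered

mean = statistics.mean(judgments)             # 60.0
bias = mean - true_value                      # systematic miss: +10
noise = statistics.pstdev(judgments)          # scatter around the group's mean
mse = statistics.mean((j - true_value) ** 2 for j in judgments)

print(f"MSE              = {mse:.1f}")                  # 111.6
print(f"bias^2 + noise^2 = {bias**2 + noise**2:.1f}")   # 111.6, identical by algebra
```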
Why noise deserves more attention
One of the most interesting ideas in this chapter is that noise might be even more damaging than we realize—not just because it causes inconsistency, but because we don’t see it. Bias is visible. Noise is silent. So while organizations set up training and systems to combat bias, they often leave noise untouched.
The authors also show that in many real-world situations, noise contributes more to error than bias does. That’s a surprising claim—but they back it up with examples from insurance, medicine, and the justice system. In these fields, the inconsistency of expert judgment can be more harmful than a shared tendency to over- or under-estimate.
What happens when you measure noise
The chapter includes a fascinating case where a company ran a “noise audit” to measure how much variation existed in its employee evaluations. Managers were asked to rate the same set of fictional performance profiles. The results were all over the place.
This is where the authors introduce the idea of “system noise”—the total amount of variability in a system’s judgments. They argue that you can’t improve your decisions if you don’t know how noisy they are. And the only way to know is to measure.
Reducing error starts with understanding both parts
A big takeaway from this chapter is that if we want to make better decisions—more accurate, fair, and consistent—we need to address both bias and noise. Focusing only on one means we’re still left with a lot of avoidable error. The equation is simple, but the implications are huge.
The authors argue that every organization using human judgment should measure noise, just like they measure bias. Only then can they truly improve decision quality.
Chapter 6 – The Analysis of Noise
What makes up noise? It’s more than just random scatter
In this chapter, the authors take us deeper into the mechanics of noise. Now that we’ve seen that noise is real and measurable, the question becomes: what causes it? It turns out that noise isn’t just a single thing. It’s made up of different components that affect our judgments in different ways.
The authors break noise into two key parts: level noise and pattern noise. Level noise is when some people consistently give higher or lower judgments than others. Think of one judge who’s always harsher than their peers, or a doctor who tends to over-diagnose. Pattern noise is more subtle—it’s when people react differently to specific cases, even if their average judgments are similar. One manager might always rate enthusiastic employees higher, while another might be drawn to calm, analytical types.
The wine tasting example
One of the most relatable illustrations in this chapter is how wine tasters score wines. Some tasters consistently rate wines higher or lower—that’s level noise. But even more interesting is how differently they rate the same wine depending on what characteristics they focus on or how they interpret flavor profiles. That variation across wines is pattern noise.
This helps us see that even when people agree on a general scale, they may still disagree wildly on specific cases. And that unpredictability—the way our minds match information to judgment—is where pattern noise thrives.
Occasion noise is also part of the mix
Alongside level and pattern noise, there’s a third piece: occasion noise. This is about inconsistency within the same person across time. A judge might give a lighter sentence in the morning than in the afternoon. A doctor might interpret an X-ray differently depending on their mood, fatigue, or what case came right before. The same person, with the same knowledge, can give a different answer depending on when they’re asked.
This part of the chapter is eye-opening because it shows just how fragile human judgment can be. Even without meaning to, even without realizing it, we’re affected by noise every day—and we don’t usually notice.
Why this matters
The big point here is that noise isn’t just an occasional glitch in the system. It’s built into the way people judge. And if we want to improve decisions, we have to understand where the noise is coming from. Is someone consistently too lenient? That’s level noise. Do they react differently to similar cases based on unspoken preferences? That’s pattern noise. Are they just inconsistent from one day to the next? That’s occasion noise.
The authors argue that each of these elements can and should be measured. When we know which kind of noise is at play, we can start to reduce it more effectively.
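To make that concrete, here is a toy decomposition with invented ratings. With only one judgment per judge per case, occasion noise can't be isolated, so it folds into the pattern term; the sketch separates level noise (differences between the judges' personal averages) from everything else:

```python
import statistics

# Invented scores: three judges rate the same four cases on a 0-10 scale.
ratings = {
    "judge_1": [7, 5, 8, 6],
    "judge_2": [4, 6, 3, 5],  # lower on average -> contributes level noise
    "judge_3": [6, 2, 9, 5],  # similar average, different case-by-case pattern
}
cases = range(4)

grand_mean = statistics.mean(r for row in ratings.values() for r in row)
judge_means = {j: statistics.mean(row) for j, row in ratings.items()}
case_means = [statistics.mean(ratings[j][c] for j in ratings) for c in cases]

# Level noise: variance of the judges' personal averages.
level_noise = statistics.pvariance(judge_means.values())

# Pattern noise: what remains after removing judge and case effects.
residuals = [ratings[j][c] - judge_means[j] - case_means[c] + grand_mean
             for j in ratings for c in cases]
pattern_noise = statistics.pvariance(residuals)

print(f"level noise variance:   {level_noise:.2f}")
print(f"pattern noise variance: {pattern_noise:.2f}")
```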
A new way of thinking about judgment systems
This chapter invites us to stop thinking of people as lone decision-makers and start thinking of judgments as outcomes of a system—a system that includes individual tendencies, context, timing, and randomness. Once we see judgment as a system, we can start making it better, just like we improve machines or processes.
Chapter 7 – Occasion Noise
Why the same person doesn’t always judge the same way
So far, we’ve looked at how different people can reach different conclusions based on the same information. But in this chapter, the authors zoom in on something even more surprising: the same person, faced with the same situation on different days, can make different judgments too. This is called occasion noise.
Occasion noise happens when temporary, often invisible factors influence our decisions. Mood, fatigue, weather, time of day, even what we just had for lunch—these things may seem irrelevant, but they can actually sway our thinking. The result is inconsistency, not between people, but within the same person.
The fingerprint examiner study
One of the most striking examples in this chapter involves fingerprint examiners. These are trained professionals who analyze fingerprints for criminal investigations. You’d expect them to be incredibly consistent, right? But in a study where examiners unknowingly reviewed the same prints twice—months apart—a significant number of them gave different conclusions. Same prints, same examiner, different judgment. That’s occasion noise in action.
It’s a powerful reminder that human judgment isn’t just shaped by what we know, but also by when we’re asked to decide.
The judge’s lunch break effect
Another famous study highlighted in this chapter shows how judges ruling on parole cases were far more likely to grant parole early in the day or right after lunch—when they were presumably more refreshed. As the session dragged on and fatigue set in, approval rates dropped sharply. That means something as simple as a snack break could change someone’s future.
This isn’t about bad people making bad choices. It’s about how vulnerable our decisions are to subtle influences we don’t even notice. And because these influences are random, they add noise to our judgments.
The illusion of consistency
One of the most eye-opening points the authors make is that most people believe they’re more consistent than they really are. We tend to think we judge fairly and logically, no matter the day or mood. But when our judgments are measured over time, that belief often doesn’t hold up. We underestimate the role of noise in our own thinking.
The problem is, this kind of inconsistency rarely gets noticed. In many workplaces, no one checks whether the same manager gives wildly different performance reviews from month to month. In hospitals, few systems are set up to catch how a tired doctor might read a scan differently at the end of a long shift. The noise is there, but it stays hidden.
What can we do about it?
The authors suggest that while we can’t eliminate all occasion noise, we can design systems to reduce its impact. For example, using structured processes or decision aids can help keep people on track, even when they’re tired or distracted. And awareness helps—just knowing that mood or timing might affect our judgment can make us more careful.
Chapter 8 – How Groups Amplify Noise
We form groups to make better decisions—but they often make things worse
This chapter tackles a common assumption: that group decisions are better than individual ones. More people, more perspectives, more balance—what could go wrong? According to the authors, quite a bit. When groups come together to make judgments, they don’t just average out mistakes. They often amplify noise.
The problem lies in how people interact in groups. Rather than independent thinking, we get social dynamics—pressure to conform, dominant voices steering discussion, and the desire to reach agreement quickly. These behaviors don’t reduce noise. They create new patterns of it.
The illusion of agreement
One of the key points the authors make is that when a group reaches a decision, it often feels more legitimate—like the result must be more balanced or objective. But that’s not always true. Group decisions can be just as scattered as individual ones, and sometimes even more so.
In one example, they describe how teams evaluating candidates for hiring or promotions often end up reaching very different conclusions based on group dynamics rather than solid reasoning. Strong personalities may sway the group. Early comments can anchor the discussion. In the end, the decision reflects the process more than the evidence.
Groupthink vs. group noise
It’s easy to confuse noise in groups with groupthink, but the authors draw a clear line between the two. Groupthink is when people agree too quickly and ignore dissent. Group noise, on the other hand, is when group members have different opinions, and the final judgment reflects that variability.
Imagine a committee rating grant proposals. Some members are generous, others are strict. The result might be a compromise, but it’s also a blend of noise. The differences don’t cancel out—they just get baked into the outcome.
Averaging judgments can help—but only if done right
Interestingly, the authors say there is one way that groups can reduce noise: independent averaging. When people make judgments separately and anonymously, and then those judgments are averaged, the noise tends to decrease. This is sometimes called the “wisdom of crowds,” and it works best when opinions are formed in isolation before being shared.
But once people start influencing each other—through discussion or persuasion—the benefits fade. Noise creeps back in.
Designing better group decisions
To reduce group noise, the authors suggest a few principles. The most important? Structure. Instead of open, unstructured conversations, groups should have clear procedures—everyone gives independent input before discussion starts, criteria are agreed upon in advance, and strong personalities don’t dominate the floor. Basically, the more discipline in the process, the less noise in the result.
They also emphasize the value of awareness. Just knowing that group decisions are prone to noise can lead to better habits—like pausing before reacting, inviting diverse views, and checking for consistency.
Chapter 9 – Judgments and Models
The gap between human judgment and statistical models
The authors open this chapter with a bold claim: when it comes to making accurate predictions, statistical models almost always outperform human judgment. Even in areas where we might expect personal expertise or intuition to matter—like medical diagnoses, academic admissions, or employee evaluations—models do a better job. That sounds counterintuitive at first, especially when we’re so used to trusting human experts. But the research is pretty clear: we’re not as good at forecasting as we think we are.
Why models win
So, what gives models the upper hand? The key is consistency. Models apply the same rules every time. They don’t get distracted, tired, emotional, or influenced by irrelevant details. Humans, on the other hand, are full of variability. The same person might make different decisions at different times, and different people might make wildly different judgments about the same case. That’s where noise creeps in.
What’s interesting is that even very simple models can beat experts. You don’t need anything fancy. A basic formula that sticks to a few key predictors usually does better than a human using experience and intuition.
The illusion of validity
Another idea explored in this chapter is how easily we fall into the trap of overconfidence. We often think we “just know” when something is right, or when we’ve spotted a great candidate or a likely outcome. This feeling is comforting but deceptive. The authors call it the “illusion of validity”—we trust our gut even when it’s leading us astray.
One memorable example is from Paul Meehl, a psychologist who showed that simple models outperformed expert predictions in fields from psychiatry to criminal justice. Still, professionals often resist using models. Why? Because it feels like giving up control. There’s a psychological resistance to relying on something that doesn’t “feel” intelligent.
Blending models with judgment
That said, the authors don’t suggest we get rid of human judgment entirely. Instead, they propose a hybrid approach: build solid models based on evidence, and then, when necessary, allow for a structured and limited role for expert input. The goal isn’t to remove humans from the loop but to reduce unnecessary noise and bias.
The bigger message
A big takeaway here is that we need to rethink how much we rely on intuition and “experience” in areas where data-driven models can offer more accuracy. We’re not naturally good at prediction, and pretending otherwise just adds more noise.
This chapter lays the groundwork for why developing good models and sticking to them can help organizations make better, more consistent decisions. It pushes us to be humble about our limits and open to tools that help us improve accuracy—even if they don’t “feel” right.
Chapter 10 – Noiseless Rules
Simple rules can outperform even complex human decisions
In this chapter, the authors continue the case for using models—but now with an even stronger focus on rules. They show that simple, mechanical rules often do a better job than human experts, not just because they eliminate bias, but because they eliminate noise. These rules are consistent. They don’t get tired, hungry, emotional, or distracted. And that consistency pays off.
What’s most surprising is how simple these rules can be. The authors talk about predictive models that use just a few clear variables to guide decisions. A rule like “if a loan applicant has no prior defaults and a credit score above 700, approve the loan” might sound too simplistic—but it turns out to be more accurate than many human loan officers making case-by-case calls.
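Written out as code, the quoted rule (with the same hypothetical thresholds) shows why it is noiseless: there is nothing in it that can vary with who runs it, or when:

```python
def approve_loan(prior_defaults: int, credit_score: int) -> bool:
    """A noiseless rule: identical inputs always yield identical answers."""
    return prior_defaults == 0 and credit_score > 700

print(approve_loan(prior_defaults=0, credit_score=720))  # True
print(approve_loan(prior_defaults=1, credit_score=780))  # False
```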
Why rules work better
The power of rules lies in their reliability. Even if they’re not perfect, they don’t waver. People, by contrast, might weigh the same information differently from day to day. That variability—what the authors have been calling noise—is exactly what rules avoid.
There’s a compelling example here involving bail decisions. When a simple algorithm was used to recommend whether a defendant should be released or held before trial, it outperformed judges. The model was more consistent and fair, and led to fewer repeat offenses and missed court dates. In other words, the rule didn’t just match the judges—it beat them.
People resist rules
Despite this success, people often reject these rules. The authors explore a few reasons why. One is pride—we want to believe our judgment adds something special. Another is discomfort with losing control or trusting a system that feels “cold.” And sometimes we fear what might happen if the model makes a mistake, even though we tolerate human mistakes all the time.
There’s also a tension between accuracy and individualization. Humans value the idea of being seen and judged as unique. Rules, by design, don’t personalize. That makes them feel impersonal—even if they’re more fair in the long run.
Improving rules with judgment—carefully
Still, the authors acknowledge that some room for human input might be necessary. But they argue that this input should come after the rule has done most of the work. And it should be limited, structured, and accountable. Let the model do the heavy lifting, then allow human judgment to adjust within a controlled margin. This way, we get the benefit of consistency without entirely removing the human element.
Chapter 11 – Objective Ignorance
We often know less than we think—especially when making predictions
This chapter digs into a humbling but important idea: there’s a lot we simply can’t know when making judgments about the future. The authors call this “objective ignorance.” It’s not just about being uninformed or inexperienced—it’s about the limits of what’s knowable. Even with all the data in the world, the future is still uncertain, and our predictions are more fragile than we’d like to admit.
One of the main points the authors make is that noise isn’t just a problem of poor decision-making—it’s often a reflection of uncertainty. In complex, unpredictable situations, judgment gets noisy because there’s no clear answer. We try to guess anyway, but we’re working with incomplete information, unpredictable outcomes, and shifting conditions.
The broken-leg problem
There’s a great example in this chapter that illustrates this point: the “broken-leg problem.” Imagine you’re using a predictive model to guess whether someone will go to the movies. The model uses patterns—like how often they usually go, what time of day it is, and so on. But today, you find out they broke their leg. The model doesn’t know this, and suddenly, your human judgment might seem more accurate.
But here’s the twist: these exceptions are rare. The broken leg is the classic case of information that’s truly relevant but missing from the model. Still, trying to account for too many “what-ifs” can backfire. Humans tend to overestimate the importance of these exceptions, which actually adds more noise to the decision.
Mistaking confidence for accuracy
Another fascinating idea in this chapter is how we confuse confidence with correctness. When people feel strongly about a prediction, they tend to believe it’s accurate. But the authors explain that confidence is often driven more by how coherent the story feels than by actual evidence. We underestimate how uncertain things really are—and that leads to poor predictions.
This is especially true in fields like business strategy, stock forecasting, or political analysis. Experts feel sure of their calls, but studies show their track records are usually no better than chance. The problem isn’t just bias. It’s objective ignorance. The future resists being pinned down.
Planning fallacy and the illusion of control
The authors also tie this idea to something called the planning fallacy—our tendency to underestimate how long things will take or how much they’ll cost. This isn’t just optimism; it’s a failure to account for uncertainty. We think our situation is unique and manageable, even when evidence from similar cases says otherwise.
This illusion of control can lead organizations and individuals to make overconfident bets. And when we don’t admit how much we don’t know, we’re more likely to make noisy, inconsistent decisions.
Chapter 12 – The Valley of the Normal
Most people—and most cases—are actually pretty average
This chapter wraps up the section by challenging another common assumption: that the world is full of unique, exceptional cases that need unique, exceptional judgments. The authors argue the opposite. Most of the time, we’re in what they call the valley of the normal—a space where people, events, and outcomes follow predictable, ordinary patterns.
The issue is that we often don’t act like that’s true. Decision-makers tend to treat each case as special or different, believing that it needs personal insight or a tailored response. But this mindset actually adds noise. When we constantly look for what’s unusual, we overlook the consistency that’s already there.
Why normal matters
The authors explain that being “normal” doesn’t mean boring—it means that a case fits within a well-understood pattern. Think about hiring decisions, loan approvals, or student evaluations. In most situations, there are clear indicators of success that apply broadly. Trying to search for hidden “gems” or exceptions can actually lead to more error.
There’s a helpful reminder in this chapter: extreme cases are rare. That’s why they’re called “extreme.” Yet people often make decisions as if they’re common. For example, a hiring manager might fall in love with a candidate who has an unconventional background, assuming they’re a breakout star. But in reality, the safer bet is usually someone who matches the established success pattern.
The base rate is your friend
A big idea here is base rate neglect. That’s when we ignore the general odds of something happening because we’re too focused on specific details. Let’s say you’re trying to predict whether a startup founder will succeed. Instead of starting with the base rate (how many startups actually succeed), you focus on the founder’s charisma or story. That’s a mistake.
By not anchoring our judgments in what’s statistically normal, we invite both bias and noise. The authors stress that understanding what’s typical—and using it as your starting point—is one of the best ways to improve decision quality.
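A small invented example of why ignoring the base rate misleads: suppose 10% of startups succeed, and a charismatic founder is somewhat more common among successes than failures. Bayes' rule says charisma barely moves the needle:

```python
# Invented numbers for illustration only.
base_rate = 0.10                  # 10% of startups succeed
p_charisma_given_success = 0.80   # successful founders often seem charismatic...
p_charisma_given_failure = 0.60   # ...but so do many founders who fail

# Bayes' rule: P(success | charismatic founder)
p_charisma = (p_charisma_given_success * base_rate
              + p_charisma_given_failure * (1 - base_rate))
posterior = p_charisma_given_success * base_rate / p_charisma

print(f"{posterior:.0%}")  # ~13%: barely above the 10% base rate
```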
Resisting the lure of the exceptional
The challenge is that people love stories. We’re drawn to the exceptional case, the dramatic turnaround, the one-in-a-million shot. But betting on these stories leads to worse outcomes over time. If we accept that most cases are in the valley of the normal, we can focus on building systems that work well on average—not just occasionally.
That doesn’t mean we ignore unique details. But it means we shouldn’t let rare exceptions drive everyday decisions. The more we lean on general patterns and historical data, the less room there is for noise to sneak in.
Chapter 13 – Heuristics, Biases, and Noise
The psychology behind how we judge
This chapter dives into the mental shortcuts we all use—called heuristics—and how they can go wrong. These shortcuts often help us make quick decisions, but they can also cause two major problems: bias (where everyone makes the same mistake) and noise (where people make different mistakes).
The authors draw on decades of research to explain how these intuitive patterns of thinking, often handled by our fast, automatic “System 1,” can lead us astray. While the heuristics-and-biases program has traditionally focused on shared human errors, this book goes further—showing how individual variation creates noise too.
Diagnosing errors without knowing the truth
A big insight here is that even when we don’t know the “correct” answer to a judgment, we can still detect bias. For instance, if irrelevant things (like font style) influence people’s evaluations, we know something’s off. And if people ignore meaningful differences (like a job lasting 2 years vs. 3), they’re also likely biased.
This idea is visualized with the “shooting target” analogy: even if we can’t see the bullseye, we can look at the back of the target and tell that a team is noisy when its shots, all aimed at the same spot, land scattered across the board. Scatter is visible even when accuracy isn’t.
Substitution: answering the wrong question
One of the most interesting parts of the chapter is how we subconsciously substitute a difficult question with an easier one. Instead of asking, “How likely is it that Bill is an accountant who plays jazz?” people ask, “How much does Bill seem like that type of person?” And since “accountant who plays jazz” feels more representative than just “jazz player,” we make a logical error—even though it’s less probable.
This substitution also explains why we’re more afraid of airplane crashes right after they’re on the news (the availability heuristic) or why we judge someone’s life satisfaction based on our current mood.
When we start with a conclusion
Another common mistake is conclusion bias—when we decide on an answer we like and then build our reasoning around it. The George Lucas story illustrates this beautifully. He rejected the idea of killing a main character in Return of the Jedi not because of logic, but because he simply “didn’t like it.”
This emotional reaction often comes first, with rational justifications arriving later. And it happens everywhere—from politics to product reviews. This is where things like confirmation bias and the affect heuristic sneak in, shaping how we interpret evidence.
Anchoring and the power of first impressions
There’s also anchoring—where we latch onto a number or idea and adjust from there. Even irrelevant numbers, like your Social Security digits, can sway how much you’re willing to pay for a bottle of wine. First impressions (even the order of adjectives in a description) also heavily affect how we see people or products, making us overly consistent in our judgments—something called excessive coherence.
Bias leads to noise too
You might think bias and noise are totally separate. But they’re not. Substitution, conclusion bias, and excessive coherence don’t just cause everyone to make the same mistake—they also create variability when people have different biases or when context shifts.
For example, judges granting asylum have wildly different approval rates, sometimes just because of their individual leanings. That’s noise, rooted in bias. And our mood can shift how we answer the same question on different days—occasion noise, also driven by the same mechanisms.
Why this matters
This chapter is a powerful reminder that our thinking isn’t just flawed in predictable ways—it’s also wildly inconsistent. Biases make us confident in our wrong answers, and noise ensures those answers vary from person to person.
And if we want to improve judgment, we need to tackle both—not just by trying to “think harder,” but by understanding the mental processes that shape our decisions.
Chapter 14 – The Matching Operation
Judgment happens when we match a case to a mental scale
In this chapter, the authors walk us through what actually happens in our minds when we make a judgment. They introduce something called the matching operation, which is the mental process of comparing a case (like a job applicant, a student essay, or a risk assessment) to a set of internal reference points—what they call a “scale.”
Every time we judge, we’re matching. We take in information and ask ourselves, “What does this remind me of?” or “How does this compare to other things I’ve seen?” Based on that match, we give a number, a rating, or a decision. It feels intuitive, but this process is more fragile than we realize.
Matching feels easy—but it hides complexity
The power of the matching operation is that it feels natural. You see a candidate’s resume and get a sense of how strong it is. You taste a wine and rate it from 1 to 10. These judgments seem quick and confident. But beneath the surface, the process is built on reference points that vary from person to person—and even moment to moment.
One example is how Olympic judges rate performances. They’re supposed to use consistent criteria, but their ratings are influenced by what they just saw. A great routine might make the next one seem worse by contrast—even if it’s objectively strong. That’s the matching operation at work, and it’s a source of noise.
Reference classes and sliding scales
To make good judgments, we need to match cases to the right reference class. That means comparing apples to apples. But this isn’t always easy. What’s the right reference class for a CEO candidate with a mix of nonprofit and tech experience? Or for a movie that’s both a comedy and a thriller?
Because these mental scales are vague and often personal, people match cases differently. And that’s where pattern noise shows up. Two people might see the same performance and assign different ratings—not because of disagreement on quality, but because they’re using different mental yardsticks.
Judgments are influenced by irrelevant context
Another problem is that our reference points are easily skewed by irrelevant information. If you’ve just read five amazing resumes, a good one might look mediocre. If your morning started with a tough disciplinary case, a moderate one might seem mild by comparison. These context effects mean we’re not just matching against an internal standard—we’re matching against what’s fresh in our mind.
Why this adds to noise
The matching operation explains a lot about where noise comes from. It’s not that people don’t know what they’re doing—it’s that the scales they use aren’t fixed. They shift depending on memory, experience, context, and even order of presentation.
And since every person builds their own mental scale over time, even highly trained experts will differ. That variability is built into the very structure of how we judge.
Chapter 15 – Scales
Judgment depends on the scale—and scales are surprisingly messy
This chapter builds directly on the last one. If judgment is about matching a case to a scale, then the quality of that scale matters a lot. The problem? Most of the time, those scales are unclear, inconsistent, or used in different ways by different people. That leads straight to noise.
The authors begin by pointing out that whenever we rate something—whether it’s a candidate, a restaurant, or a level of risk—we’re placing it somewhere on a scale. That scale might be numerical (like 1 to 5), descriptive (like “poor” to “excellent”), or even abstract (like “high potential” or “mild concern”). But regardless of format, the challenge is the same: how do we know what each point on the scale really means?
The illusion of shared meaning
One of the big insights here is that people think they agree on what a scale means, when in fact they don’t. Two reviewers might both rate someone a “4 out of 5,” but one might think that means “excellent” while the other thinks it means “good but not outstanding.” So even when scores look the same, the reasoning behind them can be totally different.
And when the same person uses the same scale on different days, their own interpretation can shift. That’s occasion noise again—but here, it’s noise created by fuzzy tools.
What anchors a scale?
The authors introduce an important concept: anchors. These are reference points that help people decide where on the scale a case belongs. For example, if you’re evaluating a job candidate, your memory of a past excellent candidate might act as an anchor for a 5. But if your memory shifts—or if you’re thinking about someone weaker—your anchor shifts too.
This makes the scale itself unstable. It’s like trying to measure something with a ruler that stretches and shrinks depending on your mood or memory.
The challenge of vague labels
Another issue is that many judgment tools use vague language. Words like “moderate,” “strong,” or “likely” feel precise—but they’re interpreted very differently by different people. One person’s “moderate risk” might be another’s “significant concern.” These differences add more noise, especially in fields like medicine, finance, or public health, where those words can shape real decisions.
When numbers don’t help
You might think switching to numbers would fix the problem. After all, numbers are objective, right? Not exactly. People still interpret the same number differently. Some evaluators use the whole 1–10 range; others only score between 6 and 9. The result is that scores might look quantitative, but they’re still deeply personal—and noisy.
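To make that concrete, here's a minimal sketch (my own illustration, not from the book) of one common remedy: convert each rater's raw scores into z-scores so that differences in overall level and range use drop out. The numbers are invented, and `rater_b` is deliberately just `rater_a` compressed into the 6–9 band.

```python
import statistics

# Invented scores from two raters for the same five candidates.
# Rater A uses the full 1-10 range; Rater B squeezes everything into 6-9.
rater_a = [2, 4, 6, 8, 10]
rater_b = [6.0, 6.75, 7.5, 8.25, 9.0]

def standardize(scores):
    """Rescale to z-scores, removing the rater's personal level and spread."""
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)
    return [round((s - mean) / spread, 2) for s in scores]

print(standardize(rater_a))  # [-1.26, -0.63, 0.0, 0.63, 1.26]
print(standardize(rater_b))  # identical: the "disagreement" was only in scale use
```

Once rescaled, the two raters agree perfectly. What looked like different judgments was really just different use of the same scale.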
Improving scales doesn’t eliminate judgment—but it helps
The authors argue that better-designed scales can reduce noise. That means being specific about what each point means, using examples, and training people to apply the scale consistently. But they also make a key point: no scale can completely remove judgment. The goal isn’t perfection. It’s less noise.
Chapter 16 – Patterns
Noise hides in the patterns we create—without even realizing it
In this chapter, the authors zoom in on one of the most subtle sources of noise: the patterns people form when making repeated judgments. Even when two judges use the same scale, their individual preferences and habits create distinct personal patterns—meaning they don’t just disagree occasionally, they systematically judge differently across many cases.
This type of noise is called pattern noise, and it’s a bit trickier than the kinds we’ve seen before. It’s not just about people being stricter or more lenient overall (that’s level noise). It’s about how people rank or evaluate specific kinds of cases differently. Two people might agree on one case, but completely diverge on the next.
The fingerprint pattern experiment
A great example of this comes from a study involving fingerprint examiners. When different experts looked at the same set of prints, they not only disagreed with each other—they each had a unique pattern in the way they made errors. Some were more cautious; others more confident. Some leaned toward inclusion; others leaned toward exclusion. These personal tendencies created unique “judgment fingerprints.”
This is pattern noise in action. It’s not random. It’s a stable difference in how people interpret the same information.
We don’t notice our own judgment patterns
One of the most interesting points here is that people are largely unaware of their own patterns. They may think they’re being objective or consistent, but when their decisions are compared across time or against others, clear tendencies emerge. These might be rooted in personality, training, past experiences, or unconscious preferences—but they show up again and again.
And because we rarely compare judgments side by side, these differences often go undetected.
Pattern noise matters even when overall ratings are similar
A surprising insight in this chapter is that even if people agree on averages, they can still have high pattern noise. For instance, two teachers might give the same average grade across a group of students—but completely disagree on who the top performers are. So while their ratings might look similar in the end, their judgments vary wildly case by case.
This matters a lot when decisions are made based on rankings or ratings. If different evaluators are inconsistent in how they sort or score, it introduces noise into hiring, grading, performance reviews, and beyond.
Reducing pattern noise isn’t easy—but it’s possible
The authors suggest that to reduce pattern noise, organizations need to promote comparability. That means training people to use scales in similar ways, using concrete benchmarks or examples, and encouraging independent judgments before group discussion. It’s also helpful to use structured tools that minimize the room for personal interpretation.
Chapter 17 – The Sources of Noise
To reduce noise, we need to understand where it comes from
This chapter ties together everything we’ve learned so far about noise. It’s not just one thing—it’s the result of multiple sources working together (and sometimes against each other). The authors lay out a framework that helps explain exactly why judgments vary so much. Once we see the sources, we can start thinking about how to manage or reduce them.
They break noise down into three main types: level noise, stable pattern noise, and occasion noise. We’ve seen these before, but this chapter clarifies how each one plays a different role—and how they add up to create system noise.
Level noise is about overall strictness or leniency
Some people are just tougher than others. Whether it’s a judge handing out sentences, a teacher grading essays, or a manager giving performance reviews, individuals tend to have their own average level of harshness or generosity. That’s level noise.
It’s easy to spot when two people consistently give higher or lower scores than others. But in real-world systems, this can lead to unfairness. Two employees doing the same job might get very different evaluations depending on who their boss is—not because of the work itself, but because of differences in rating style.
Stable pattern noise is about differences in judgment across cases
This is when people use the same scale, but rank or interpret cases differently. One evaluator might value creativity over structure, while another does the opposite. Their personal preferences lead them to form different patterns—even when they agree on general standards.
The key here is that these differences are consistent. It’s not random. Each person has a unique “judgment fingerprint,” and that adds a layer of noise every time we rely on human evaluations.
Occasion noise is about inconsistency within the same person
The same person can give different judgments depending on the day, their mood, or even what they had for breakfast. That’s occasion noise. It’s the most invisible kind because it happens inside the same brain—no comparison is needed.
We all know what it’s like to feel more generous in the morning or more irritable after a long meeting. But we rarely connect those feelings to the quality of our decisions. That’s what makes occasion noise so sneaky.
System noise is the total of all three
The authors explain that these three sources—level, pattern, and occasion—combine to create system noise. That’s the overall inconsistency in a judgment process. And the more complex or human-driven the system, the more likely it is to be noisy.
For example, in hiring, you might have one manager who’s generally tough (level noise), who also values certain traits more than others (pattern noise), and who happens to be in a better mood on Tuesday than Thursday (occasion noise). Put that all together, and the same candidate could be hired or rejected just because of who reviews them and when.
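The book makes this decomposition precise: squared system noise equals squared level noise plus squared pattern noise (with occasion noise hiding inside the pattern term unless the same judge rates the same case more than once). Here's a small sketch, using invented ratings, that checks the identity on a judges-by-cases table.

```python
import numpy as np

# Invented ratings: rows = 4 judges, columns = 5 cases rated by everyone.
ratings = np.array([
    [6, 7, 5, 8, 6],
    [4, 5, 3, 6, 4],
    [5, 7, 3, 8, 5],
    [5, 6, 5, 6, 6],
], dtype=float)

case_means = ratings.mean(axis=0)    # the panel's consensus on each case
judge_means = ratings.mean(axis=1)   # each judge's overall level

# Level noise: variance of the judges' overall levels.
level_var = judge_means.var()

# (Stable) pattern noise: what remains after removing case effects and judge levels.
residual = ratings - case_means - judge_means[:, None] + ratings.mean()
pattern_var = residual.var()

# System noise: total variance of judgments around the case means.
system_var = ((ratings - case_means) ** 2).mean()

print(f"level^2:   {level_var:.3f}")    # 0.510
print(f"pattern^2: {pattern_var:.3f}")  # 0.265
print(f"system^2:  {system_var:.3f}")   # 0.775 = 0.510 + 0.265
```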
Why this matters for organizations
One of the most important insights here is that organizations often don’t measure or even notice noise. They think of mistakes as rare or individual problems. But when you look across a system, the noise adds up—and it can affect fairness, efficiency, and outcomes in major ways.
The authors stress that understanding these sources is the first step toward reducing noise. If you don’t know where it’s coming from, you can’t fix it.
Chapter 18 – Better Judges for Better Judgments
What makes a good judge?
That’s the main question this chapter explores. The authors take us into the world of judgment and decision-making, looking closely at what separates great judges from the rest—not in the courtroom sense, but in any field where people are required to evaluate, decide, and predict.
The ingredients of good judgment
One of the key ideas here is that some people consistently make better judgments than others. But interestingly, it’s not about IQ or raw intelligence. Good judgment comes from qualities like actively open-minded thinking, humility, and a deep awareness of one’s own biases. The authors describe good judges as those who can step back, question their own assumptions, and stay flexible in the face of uncertainty.
Another trait that stands out is a commitment to evidence. Better judges are data-driven and not overly attached to intuition. They think in terms of probabilities and are more comfortable with ambiguity.
The superforecasters example
The chapter refers to the well-known Good Judgment Project and superforecasters—individuals who were remarkably good at predicting geopolitical events. What made them stand out wasn’t some kind of genius-level intellect, but a specific mindset: they updated their beliefs regularly, considered alternatives, and worked with others to challenge their own thinking.
This part is fascinating because it reminds us that judgment isn’t about having all the answers—it’s about asking better questions, being cautious about overconfidence, and treating decisions as ongoing processes rather than fixed conclusions.
Why better judgment matters
The big takeaway is that good judges reduce both bias and noise. While bias skews judgments in a particular direction, noise causes them to be too scattered. So improving the quality of judges isn’t just about getting closer to the right answer—it’s also about making judgments more consistent and less noisy across different people and situations.
In a world where high-stakes decisions are made every day—in hiring, medicine, forecasting, and more—this chapter argues that choosing and developing better judges might be one of the most powerful things we can do to improve outcomes.
Chapter 19 – Debiasing and Decision Hygiene
Why fighting bias isn’t enough—and how to reduce judgment errors systematically
This chapter takes a step back and asks a practical question: what can we do to improve judgment? We’ve already seen that both bias and noise cause errors. But while there’s been a lot of attention on debiasing—trying to fix flawed thinking—the authors argue that it hasn’t worked very well. Debiasing is hard, and most efforts have little long-term impact. So they introduce a broader, more effective approach: decision hygiene.
Why debiasing alone doesn’t work
The authors are clear: debiasing sounds good in theory. If we know that overconfidence or confirmation bias skews our thinking, why not train people to think more objectively? But in practice, it’s hard to do. Biases are deeply ingrained, often unconscious, and they don’t go away just because we learn about them.
Even well-trained professionals still fall into the same traps. Knowing about bias isn’t enough to overcome it—just like knowing about germs doesn’t stop the flu unless you also wash your hands. That’s where the hygiene metaphor comes in.
What is decision hygiene?
This is one of the most important ideas in the book. Decision hygiene is about building systems and habits that prevent error from creeping in—whether it’s bias, noise, or both. Think of it as cleaning up the decision-making process before anything goes wrong, rather than trying to fix errors after the fact.
One of the key points is that good hygiene isn’t reactive. It doesn’t wait for signs of bias. It’s proactive and consistent. You put it in place like a routine, even if you don’t see a specific problem yet—just like you wash your hands regardless of whether your hands “look” dirty.
Principles of decision hygiene
The authors lay out several hygiene strategies. One is structuring judgment—breaking complex decisions into parts and evaluating each part independently. For example, when hiring someone, instead of just going with a gut feeling, you assess skills, experience, and cultural fit separately before combining them. This reduces the chance of one impression dominating the whole decision.
Another technique is delaying intuition. Don’t jump to a conclusion early in the process. Instead, gather enough information before forming an opinion. Once you have a hunch, your brain starts to defend it—so keeping an open mind a little longer can help avoid tunnel vision.
They also emphasize noise auditing—measuring how much judgments vary across people and time. If a team of experts is making decisions, run a test to see how aligned their answers are. Just being aware of the noise can push people toward more careful thinking.
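As a toy version of what a noise audit produces, here's a sketch, with invented numbers and a simplified statistic, of the pairwise measure described in the book's insurance example: the gap between two judges' answers expressed as a fraction of their average, summarized across every pair of judges.

```python
from itertools import combinations

# Invented premiums quoted by five underwriters for the exact same case.
quotes = [9500, 16700, 12000, 8200, 13500]

def noise_index(values):
    """Average over all pairs of judges: |difference| / pair average."""
    pairs = list(combinations(values, 2))
    return sum(abs(a - b) / ((a + b) / 2) for a, b in pairs) / len(pairs)

print(f"noise index: {noise_index(quotes):.0%}")
# Far above the ~10% executives typically expect, which is the point of auditing.
```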
Decision hygiene ≠ decision perfection
One important clarification the authors make is that decision hygiene won’t make every decision perfect. That’s not the point. The goal is to make judgments better on average, by reducing variability and bias. Just like good handwashing doesn’t prevent all illness but dramatically lowers risk, good decision hygiene makes mistakes less likely and less damaging.
Chapter 20 – Sequencing Information in Forensic Science
How the order of information can quietly distort expert judgment
This chapter zeroes in on a specific and high-stakes field: forensic science. We often think of forensics as an exact, objective discipline—fingerprints, ballistics, DNA. But the authors show that even in this world of hard evidence, the way information is presented can introduce noise and bias. In particular, the sequence in which experts receive information matters far more than we’d expect.
Why sequence matters more than we think
The authors explain that forensic judgments often involve interpretation, not just mechanical matching. A fingerprint examiner doesn’t just plug data into a machine—they look at a print and make a judgment about whether it matches another. And just like in other forms of judgment, context influences what they see.
If an examiner knows ahead of time that a suspect has confessed—or that DNA has already matched—they may unconsciously lean toward confirming that the fingerprints match too. Not because they’re dishonest or biased in a conscious way, but because the information primes their judgment.
This subtle influence is called confirmation bias, but here, it’s driven by the order in which the information is received.
The contextual bias problem
The chapter draws from real-world research showing that when forensic examiners are given background information (like police notes or the suspect’s history) before looking at the actual evidence, their judgment is skewed. They become more likely to see what they expect to see. This is especially dangerous because forensic results can determine guilt or innocence.
A key insight here is that the contamination isn’t obvious. Experts don’t feel like they’re being influenced. But the studies show that judgments are less consistent and more error-prone when context is provided too early.
The simple fix: sequence discipline
What’s the solution? The authors suggest a surprisingly straightforward practice: control the sequence of information. It’s called Linear Sequential Unmasking (LSU). The idea is to start with the evidence first—like the fingerprint, bite mark, or ballistic sample—without any background information. Let the expert make an initial judgment based only on what they see.
Only after that initial decision do you reveal additional context—other lab results, police reports, or suspect history. This protects the judgment from being shaped by expectation or emotion. It’s a form of decision hygiene: a small change in process that reduces both bias and noise.
Why this chapter matters beyond forensics
While the focus is on forensic science, the lesson applies more broadly. In any judgment process—especially where there’s interpretation involved—the order in which information is presented can shape outcomes. Whether it’s doctors, auditors, or hiring managers, people are vulnerable to influence from context. And just like in forensics, better sequencing can help preserve objectivity.
Chapter 21 – Selection and Aggregation in Forecasting
Combining different perspectives improves predictions—if we do it right
This chapter explores one of the most practical ways to reduce noise and improve accuracy in forecasting: aggregation. The idea is simple but powerful. Instead of relying on a single person’s judgment, you gather multiple independent forecasts and combine them. Over and over again, this method proves to be more accurate—and less noisy.
Why individual forecasts fall short
The authors start by reminding us just how hard forecasting is. Whether we’re predicting election results, product sales, or geopolitical events, the future is full of uncertainty. And human forecasters—no matter how smart—are vulnerable to bias, overconfidence, and noise. Their predictions can vary wildly, even when based on the same information.
But here’s the good news: random errors don’t simply pile up. When you average multiple independent judgments, the noise in them tends to cancel out, and the combined forecast is usually more accurate than the vast majority of the individual forecasts behind it. (Shared biases are another matter; averaging does nothing to remove an error everyone makes in the same direction.)
The wisdom of crowds—when done carefully
This is where the classic “wisdom of crowds” idea comes in. When people make judgments independently and without influence, combining those judgments leads to surprisingly accurate results. The authors point out that this only works if certain conditions are met—most importantly, independence. If forecasters copy each other, talk too much before submitting their views, or all rely on the same data points, the benefit disappears.
That’s why aggregation needs to be structured. Otherwise, you risk groupthink, echo chambers, or simply reinforcing shared biases.
How to select and weigh forecasts
The next part of the chapter explores selection: who should we listen to when aggregating forecasts? Should everyone’s opinion count equally? Not always. The authors suggest that identifying and rewarding forecasters with a strong track record—like those in the Good Judgment Project—can improve results. But even then, the method of combining their inputs matters.
Some approaches give more weight to recent accuracy. Others reward calibration—meaning how well a forecaster understands their own limits (being right isn’t just about being confident; it’s about being appropriately confident). In practice, simple averages work surprisingly well, but smarter weighting can give an extra edge.
Averaging within and across people
One really interesting insight is that aggregation doesn’t only work across different people. It can even work within one person. If you ask someone to make multiple forecasts on the same issue at different times—or in different ways—and then average those answers, the result is often more accurate than their first instinct alone. This is sometimes called the “wisdom of the inner crowd.”
So whether it’s across a team or inside one brain, the same rule applies: more structured input, carefully combined, leads to better outcomes.
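Here's a quick simulation of both halves of that claim, entirely my own construction under an assumed error model: every forecaster shares a bias of +5 and adds independent noise with a standard deviation of 20.

```python
import random

random.seed(42)
TRUTH = 100.0

def forecast():
    # Assumed error model: shared bias of +5, independent noise with sd 20.
    return TRUTH + random.gauss(5, 20)

trials = 10_000
single_err = sum(abs(forecast() - TRUTH) for _ in range(trials)) / trials
crowd_err = sum(
    abs(sum(forecast() for _ in range(10)) / 10 - TRUTH) for _ in range(trials)
) / trials

print(f"single forecaster, mean absolute error: {single_err:.1f}")
print(f"average of 10 forecasts, mean error:    {crowd_err:.1f}")
# Noise shrinks by roughly sqrt(10); the shared +5 bias survives averaging intact.
```

The averaged error drops sharply but never below the shared bias, which is the aggregation version of a point made throughout the book: averaging fixes noise, not bias.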
Chapter 22 – Guidelines in Medicine
Why consistent care often matters more than heroic expertise
This chapter explores how medical decisions—often thought to be the domain of highly trained, deeply experienced experts—are actually full of noise. Patients with similar conditions can receive very different diagnoses, treatments, or advice depending on which doctor they see. That variation, the authors argue, is not just inconvenient—it can be dangerous. And one of the most effective tools to reduce it? Guidelines.
The uncomfortable truth: noise in healthcare is common
We tend to think of medicine as a science. But in many cases, it’s still very much a judgment-based practice. Different doctors can interpret the same symptoms differently, recommend different treatments, or order different tests. These discrepancies don’t always reflect different patient needs—they often reflect the doctor’s preferences, habits, or even mood.
And the stakes are high. When treatment decisions vary for reasons unrelated to the patient, that’s not just noise—it’s potential harm. It can lead to over-treatment, under-treatment, or misdiagnosis.
Why doctors resist guidelines
You’d think that introducing clear medical guidelines—step-by-step rules for diagnosing and treating common conditions—would be an easy win. But the authors explain that doctors often resist them. There’s a deep-rooted belief in medicine that each patient is unique, and that clinical judgment, honed over years of experience, should take priority.
Some clinicians worry that guidelines “dumb down” care or threaten their autonomy. Others are skeptical about the quality of the evidence behind them. And some just feel more comfortable relying on personal experience.
But the data tells a different story
Despite this resistance, research shows that guidelines almost always improve consistency and outcomes. They reduce unnecessary variation, lead to faster diagnoses, and even save lives. For example, when hospitals adopted protocols for heart attack treatment or post-surgical care, results improved significantly.
The authors point out that guidelines don’t have to eliminate clinical judgment—they just provide a more reliable starting point. Think of them as a safety net: a way to ensure that every patient gets a minimum standard of care, regardless of which doctor is on duty.
Guidelines as decision hygiene
Just like structured interviews or forecasting rules, medical guidelines are a form of decision hygiene. They protect against both bias and noise by anchoring judgment in evidence and best practices. And they’re especially valuable in high-stress, high-uncertainty environments—like emergency rooms or intensive care units—where fast, accurate decisions matter most.
The bigger picture
While this chapter focuses on healthcare, the lesson applies more broadly. In any field where people make high-stakes judgments under pressure, guidelines can reduce error and protect against inconsistency. They may not feel as glamorous as individual expertise—but they’re often more effective.
Chapter 23 – Defining the Scale in Performance Ratings
Why vague performance scales lead to noisy evaluations—and how to fix them
This chapter takes us into one of the noisiest corners of organizational life: performance ratings. Whether it’s annual reviews, promotion decisions, or talent assessments, most companies rely on some kind of rating system. But the authors argue that these systems are often flawed at the root—not because people don’t try to be fair, but because the scales themselves are poorly defined.
The illusion of precision
When managers rate employees on things like “leadership,” “collaboration,” or “strategic thinking,” they often assign numbers—say, 1 to 5 or 1 to 10. These numbers feel precise. But in reality, they’re not. What one manager sees as a “4” in leadership, another might call a “3” or a “5.” Without clear definitions, these numbers become highly subjective, and the result is a lot of noise.
The authors point out something many of us have experienced: the same performance, judged by two different people, can get two completely different scores. And even one person might rate the same performance differently on different days. That’s not just frustrating—it undermines trust in the entire evaluation process.
Why language matters
A big issue is the language used in performance scales. Terms like “meets expectations” or “exceeds expectations” sound clear, but they’re open to interpretation. Meets whose expectations? Exceeds what standard? Without anchoring these phrases in observable behaviors or specific examples, people just insert their own meanings.
One manager might expect a team lead to deliver results quietly, while another might value bold, visible leadership. So when they score someone on “leadership,” they’re not just applying a shared standard—they’re rating based on their own mental model.
How to define a better scale
The solution isn’t to ditch performance ratings, but to clarify the scale. The authors suggest creating structured definitions for each rating level, with clear behavioral anchors. Instead of asking, “How strong is this person’s communication?” you might define what a “4” in communication looks like: maybe it’s “clearly presents complex ideas in meetings and adapts message to different audiences.”
This kind of structure reduces room for interpretation—and with it, reduces noise.
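A behaviorally anchored scale can be as simple as a lookup table. The wording below is my own illustration (only the level-4 phrasing echoes the example above); the point is that every rater consults the same definitions instead of a private mental model.

```python
# Hypothetical behavioral anchors for a "communication" rating (illustrative wording).
COMMUNICATION_ANCHORS = {
    1: "Messages are often unclear; stakeholders are regularly surprised.",
    2: "Handles routine updates adequately but struggles with complex topics.",
    3: "Explains most topics clearly; sometimes misjudges the audience.",
    4: "Clearly presents complex ideas in meetings and adapts the message "
       "to different audiences.",
    5: "Sets the standard others follow; communication prevents problems "
       "before they occur.",
}

def describe(rating: int) -> str:
    """Return the shared definition of a score, the same for every rater."""
    return f"{rating}: {COMMUNICATION_ANCHORS[rating]}"

print(describe(4))
```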
The power of examples and calibration
Another helpful approach is using examples. When raters are trained on sample cases that demonstrate what a “3” or a “5” looks like, their ratings become more aligned. Even better are calibration sessions, where multiple managers discuss their ratings and resolve major differences. This doesn’t mean everyone has to agree; it just ensures people are using the scale in similar ways.
Why this matters for fairness and growth
Performance ratings affect promotions, raises, and career paths. When they’re noisy, they become unfair—rewarding some and penalizing others based more on who their manager is than how they perform. And when feedback is inconsistent, it’s harder for employees to grow.
Chapter 24 – Structure in Hiring
Why hiring decisions are noisy—and how structure can lead to better, fairer outcomes
Hiring is one of the most important decisions organizations make. It shapes culture, performance, and long-term success. But in this chapter, the authors argue that hiring is also one of the noisiest areas of judgment. Despite all the interviews, resumes, and gut feelings, we’re often just guessing—and those guesses vary a lot from person to person.
The myth of the great interviewer
Many hiring managers believe they have a “knack” for spotting talent. They trust their instincts in interviews, assuming they can read between the lines or sense potential. But research shows the opposite. Unstructured interviews—the kind that feel like friendly conversations—are not very predictive of future job performance. Worse, they’re full of noise.
Different interviewers draw different conclusions from the same answers. Even the same interviewer might give different evaluations on different days. What we think of as insight is often just inconsistency.
Structure is the antidote to noise
The authors argue that structured hiring processes dramatically reduce both bias and noise. A structured process doesn’t rely on intuition or conversation flow. Instead, it defines what qualities are being evaluated, how they’re being scored, and what questions are used to assess them.
A good structured interview uses the same set of questions for every candidate. Those questions are chosen because they predict job success. Interviewers then score responses using a clear rubric, not just their gut feeling. This makes evaluations more consistent—and more accurate.
Breaking down the judgment
One of the smartest practices described in this chapter is decomposing the judgment. Instead of asking, “Is this person a good fit?”, the process breaks it down into smaller questions: How do they handle ambiguity? Can they manage conflict? Do they communicate clearly?
Each of these components is evaluated separately. Then, the scores are combined—ideally, using a simple algorithm or weighted average—to form an overall assessment. This approach reduces the influence of first impressions and halo effects, where one strong (or weak) trait skews everything else.
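A sketch of that final, mechanical step might look like the snippet below. The dimensions and weights are invented for illustration; the book argues for combining component scores mechanically but doesn't prescribe this particular formula.

```python
# Hypothetical hiring dimensions and weights (illustrative, not from the book).
WEIGHTS = {"ambiguity": 0.3, "conflict": 0.3, "communication": 0.4}

def overall_score(scores: dict) -> float:
    """Weighted average of independently scored dimensions, each on a 1-5 scale."""
    return sum(WEIGHTS[dim] * value for dim, value in scores.items())

candidate = {"ambiguity": 4, "conflict": 3, "communication": 5}
print(f"overall: {overall_score(candidate):.2f}")  # 0.3*4 + 0.3*3 + 0.4*5 = 4.10
```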
Predictive power and fairness
Structured hiring doesn’t just improve accuracy—it also improves fairness. When everyone is assessed using the same criteria and format, there’s less room for unconscious bias. And because the process is transparent, it’s easier to explain and defend hiring decisions.
The authors highlight how even simple structured processes can outperform more complex but unstructured ones. You don’t need to turn hiring into a scientific experiment. Just using consistent questions, clear scoring, and multiple independent evaluations already makes a big difference.
Why intuition isn’t enough
This chapter doesn’t argue that intuition has no role. But it should come after the structured steps—not before. Letting structure lead the process protects against early impressions dominating the outcome. Then, if there’s room for judgment, it’s based on a more solid foundation.
Chapter 25 – The Mediating Assessments Protocol (MAP)
A simple method to improve complex decisions—by structuring how we judge
This chapter introduces a practical tool that ties together many of the book’s lessons: the Mediating Assessments Protocol (MAP). It’s a structured way to make better decisions in areas where judgment is needed but noise often creeps in, such as hiring, promotions, admissions, or investments.
The core idea is simple: before making an overall decision, evaluate separate, independent assessments of each relevant attribute. This forces decision-makers to slow down, consider the evidence step by step, and avoid being swayed by first impressions or intuition too early.
Why holistic judgments go wrong
The authors argue that many of our worst decisions happen because we make holistic judgments too soon. In a hiring interview, for example, you meet someone, like them, and then every answer they give feels a little better because you already decided they’re a “fit.” That’s excessive coherence in action—and it’s noisy.
MAP breaks this cycle. It asks decision-makers to identify the key attributes that matter for the role (say, analytical ability, collaboration, and leadership), evaluate each one separately, and only then combine them into a final judgment.
How MAP works
MAP has three key steps:
- Define the mediating assessments: Decide what dimensions matter. These are the structured traits or competencies you’ll assess.
- Score each one independently: Gather evidence and evaluate each attribute on its own. Avoid thinking about the overall decision at this stage.
- Make a final recommendation: Once all the assessments are complete, then use them to guide your overall judgment.
This process helps avoid common traps like over-weighting one trait (the “halo effect”) or letting irrelevant impressions steer the decision.
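One way to picture the protocol is as a process that simply refuses to produce a recommendation before every mediating assessment is filled in. The sketch below is my own toy rendering of that sequencing discipline; the attribute names are the hypothetical ones above, and the plain average at the end stands in for whatever combination rule a real team would choose.

```python
from dataclasses import dataclass, field

# Step 1: define the mediating assessments up front (hypothetical names).
ATTRIBUTES = ("analytical ability", "collaboration", "leadership")

@dataclass
class MapEvaluation:
    scores: dict = field(default_factory=dict)

    def assess(self, attribute: str, score: int) -> None:
        # Step 2: score one attribute on its own evidence, ignoring the final call.
        if attribute not in ATTRIBUTES:
            raise ValueError("Only the pre-defined mediating assessments count")
        self.scores[attribute] = score

    def recommend(self) -> float:
        # Step 3: the holistic judgment is allowed only once all parts are in.
        missing = [a for a in ATTRIBUTES if a not in self.scores]
        if missing:
            raise RuntimeError(f"Assess {missing} before deciding")
        return sum(self.scores.values()) / len(self.scores)

evaluation = MapEvaluation()
evaluation.assess("analytical ability", 4)
evaluation.assess("collaboration", 5)
evaluation.assess("leadership", 3)
print(evaluation.recommend())  # 4.0
```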
Why MAP reduces noise
The strength of MAP is in its structure and sequencing. By forcing a pause between evaluation and decision, it builds decision hygiene right into the process. It also encourages independence—if multiple people are doing the assessments, they each do their own scoring before discussing. This helps reduce group noise and dominant voices.
Even better, MAP can be adapted to many types of decisions. The authors mention its use in hiring panels, investment committees, school admissions, and other group evaluations where bias and inconsistency are common.
Structure doesn’t mean rigidity
A key point the authors make is that MAP doesn’t eliminate judgment—it just improves it. You’re still making a subjective call in the end, but you’re doing it based on a clearer, more organized picture. The protocol isn’t a straitjacket; it’s a filter that keeps noise from clouding your view.
And because it doesn’t require sophisticated tools or data, MAP is usable right away. Any team can adopt it to improve fairness and reduce avoidable errors in their judgment processes.
Chapter 26 – The Costs of Noise Reduction
Not all noise is worth removing—and that’s a hard truth
This chapter tackles a subtle but important idea: reducing noise sounds great in theory, but in the real world, it comes at a cost. Time, money, effort, complexity—every strategy to reduce variability in judgments requires something in return. And the authors argue that we have to be selective about when and where we fight noise. Otherwise, we risk spending too much to solve the wrong problems—or making things worse.
Noise reduction is not always free
Up to this point in the book, the message has been clear: noise is a hidden source of error that causes unfair, inconsistent decisions. But now the authors introduce a reality check: you can’t eliminate all noise without trade-offs. Whether it’s creating strict guidelines, applying algorithms, conducting calibration sessions, or building structured evaluations—all of these things require effort. And the effort isn’t always justified by the result.
The core insight is this: reducing noise only makes sense when the benefits outweigh the costs. In a high-stakes medical diagnosis, the cost of a noisy judgment is huge—so it’s worth investing in checklists, protocols, and second opinions. But in a low-impact decision—like reviewing a batch of online résumés—it might not be worth applying a full-scale noise audit.
How to think about “optimal noise”
The authors introduce a powerful concept here: optimal noise is not zero. That might sound surprising, especially after several chapters showing how damaging noise can be. But sometimes, a little noise is more efficient than complete control. Think of a restaurant manager who lets different chefs adjust seasoning to their taste, or a teacher who grades participation with some flexibility. The inconsistency isn’t ideal—but standardizing it might take too much effort for too little gain.
So rather than chasing perfection, the authors argue that organizations should look for the sweet spot—where enough noise is removed to improve fairness and accuracy, but not so much that the system becomes slow, expensive, or overly rigid.
Beware the unintended consequences
This is where the chapter gets really thoughtful. The authors point out that in trying to remove noise, we sometimes create new problems. For instance, rigid rules can be consistent—but also blind to context. In one example, a credit scoring algorithm might fairly deny loans to applicants with bad credit—but it won’t recognize a strong case for exception, like someone who just repaid a major debt. Human discretion can sometimes provide flexibility and fairness that rules cannot.
Similarly, applying a one-size-fits-all guideline to hiring or promotions can reduce variation—but also erase judgment, personality, or creativity from the process.
In other words, structure can bring order—but too much structure can remove wisdom.
Cost of consistency vs. benefit of fairness
Ultimately, the authors are advocating for a balanced approach. If reducing noise leads to better decisions and improves fairness, it’s usually worth it. But if the cost of consistency is greater than the benefit—or if the effort required diverts attention from more important issues—then a little noise might be acceptable.
This is a mindset shift. Instead of saying “less noise is always better,” the question becomes: what’s the right amount of noise for this situation?
Chapter 27 – Dignity
What happens when noise reduction comes into conflict with human dignity?
This chapter shifts from technical and organizational questions to something more philosophical: the role of human dignity in decision-making. As the authors explain, sometimes the push for noise reduction—especially through automation and strict rules—can come at the cost of something deeply important: the feeling that people are being treated as individuals rather than numbers.
It’s a tension between efficiency and humanity, and the authors don’t shy away from the complexity.
The promise and problem of rules and algorithms
Earlier in the book, we learned that algorithms and rules often outperform human judgment. They’re more consistent, less biased, and, most importantly, less noisy. But as the authors explore here, not everyone welcomes this kind of judgment. When a decision is made about you—whether you’re hired, admitted, approved, or rejected—it doesn’t always feel right to be judged by a formula.
People want to feel seen. They want to believe their unique circumstances matter. And when a rule or algorithm says no—without explanation or nuance—it can feel dehumanizing. That’s the dignity problem.
Being treated as a case vs. a person
This part of the chapter hits home. The authors explain that most people are okay with being judged, even by tough standards, as long as they feel their case is being genuinely considered. What’s offensive isn’t the rejection—it’s the sense that there was no room for who they are. When a system treats everyone exactly the same, it can lose the flexibility and compassion that come from human interaction.
A striking example: imagine being denied life-saving treatment because a protocol or score says you don’t qualify. Even if the decision is technically optimal, it can feel cruel if no one took the time to talk with you or explain why. In moments like these, dignity matters more than precision.
The risk of technocracy
The authors also warn about technocracy—a system where decision-making is driven purely by technical rules and models, with little room for individual judgment. While technocratic systems can be efficient, they can also become cold, opaque, and unaccountable.
People may feel powerless in such systems, especially when they can’t appeal or understand how a decision was made. That kind of opacity, even if it reduces noise, can undermine trust—and dignity.
So how do we balance dignity and noise reduction?
This is where the chapter offers something powerful: the idea that good systems don’t eliminate human judgment—they guide and support it. A structured process can reduce noise without fully removing the human element. For example, a hiring process might use structured scoring and algorithms, but still include a personal interview where the candidate feels heard.
In short, structure and rules can help us be fair—but it’s still important that people feel they’re being treated as more than data points.
Chapter 28 – Rules or Standards?
Should we follow strict rules—or leave room for judgment?
This closing chapter takes on one of the most fundamental questions in the debate around noise and judgment: When is it better to rely on fixed rules, and when should we trust flexible standards? The answer isn’t simple, but the authors walk us through the trade-offs with clarity, nuance, and a sense of balance.
Rules reduce noise—but can feel rigid
Rules are specific. They say, “If X happens, do Y.” They bring clarity and predictability. From a noise-reduction point of view, rules are ideal—they leave little room for variation. Everyone applies the same criteria and reaches the same conclusion.
But rules can also feel too rigid, especially when applied to complex or personal cases. What if the rule doesn’t account for a key detail? What if it leads to an unfair result in a one-off situation? That’s where rules can fall short—by failing to leave room for human judgment, compassion, or context.
Standards allow for judgment—but invite noise
Standards are looser. They say, “Use your judgment.” For example, a rule might say “deny any claim over $10,000 that lacks documentation,” while a standard might say “only approve claims that are sufficiently supported.” That flexibility is valuable in many cases, but it opens the door to noise.
People interpret standards differently. Two judges or evaluators might read the same situation and apply the standard in wildly different ways. One might be strict, another lenient. That’s where inconsistency and unpredictability creep in.
So we’re back to the central tension of the book: consistency vs. flexibility, noise reduction vs. human discretion.
The best answer isn’t either/or—it’s knowing when to use each
The authors don’t take a hard stance that one approach is always better. Instead, they encourage a case-by-case mindset. When fairness depends on consistency—like grading tests or sentencing crimes—rules often make sense. But when the world is messy and cases differ in important ways—like caregiving, crisis response, or artistic work—standards can preserve human dignity and context.
They also emphasize hybrid approaches, where rules and standards work together. For instance, a structured process might use rules to guide early steps and standards for final judgments. Or an algorithm might rank candidates, but a human makes the final call based on additional information.
Designing for both fairness and wisdom
This final chapter brings us full circle. The goal of better judgment isn’t just about choosing between rules or standards—it’s about designing systems that reduce error without dehumanizing the process. That means understanding when to automate, when to standardize, and when to trust thoughtful discretion.
The authors remind us that rules can feel cold, and standards can feel chaotic. But with the right structure, training, and awareness, we can combine both approaches in ways that support fairness, reduce noise, and still treat people like people.
4 Key Ideas From Noise
System Noise
It’s not just about bias—judgment errors often come from variability. Different people give different answers to the same problem. This inconsistency, called noise, creates unfairness and waste that we rarely notice.
Decision Hygiene
You can’t always eliminate bias, but you can reduce noise. Structured processes, clear scoring systems, and independent evaluations are like washing your hands—they quietly protect decisions from invisible messes.
Matching and Scaling
Judgment is a matching game. We compare new cases to mental scales or past examples. But those scales vary between people—and even from day to day. Making the scale clearer helps reduce inconsistency.
Structure Over Intuition
We trust our gut, but it’s unreliable. This book shows that structured decision-making (breaking things into parts, delaying intuition, and using consistent criteria) leads to better outcomes than simply going with your gut.
6 Main Lessons From Noise
Design Better Judgments
Don’t rely on your first impression. Break down decisions into smaller parts, assess them separately, and combine them later. This reduces bias and keeps you grounded.
Avoid the Mood Trap
Your mood can shape your choices more than you think. Don’t make important decisions when tired, hungry, or upset. Pause and re-evaluate with a clear head.
Use Rules Where It Matters
In high-stakes decisions, structure is your friend. Create rules and checklists to ensure consistency—especially when the cost of error is high.
Respect the Power of Aggregation
Don’t rely on a single opinion, yours or anyone else’s. Average independent judgments whenever possible. Whether it’s forecasting or hiring, a carefully combined set of judgments usually beats any single judge.
Don’t Confuse Fairness with Flexibility
Treating people “case by case” might feel fair, but it often creates inequality. Consistent systems may seem cold, but they protect everyone from judgment errors.
Balance Structure and Dignity
Too much automation can make people feel invisible. When designing systems, leave room for human touch and explainability. People want to feel heard—even when they’re told no.
My Book Highlights & Quotes
Most of us, most of the time, live with the unquestioned belief that the world looks as it does because that’s the way it is. There’s one small step from this belief to another: other people view the world much the way I do. These beliefs, which have been called naïve realism, are essential to the sense of a reality we share with other people. We rarely question these beliefs. We hold a single interpretation of the world around us at any one time, and we normally invest little effort in generating plausible alternatives to it. One interpretation is enough, and we experience it as true. We do not go through life imagining alternative ways of seeing what we see
Judgment is not a synonym for thinking. It’s a way of assigning a value or a measurement, and the instrument for doing so is the human mind
People expect that [judgements] will be based on the values of the system, not on those of the person making them
System noise is inconsistency and inconsistency damages the credibility of the system
Wherever there is judgment, there is noise—and more of it than you think
In a negotiation situation, for instance, good mood helps. People in a good mood are more cooperative and elicit reciprocation. They tend to end up with better results than do unhappy negotiators
Life is often more complex than the stories we like to tell about it
Judgment is not a synonym for thinking, and making accurate judgments is not a synonym for having good judgment
It is hard to agree with reality if you cannot agree with yourself
On the other hand, a good mood makes us more likely to accept our first impressions as true without challenging them
Conclusion
The book provides readers with a deeper understanding of the detrimental impact of noise on decision-making and offers practical guidance on how to reduce noise and improve the quality, fairness, and accuracy of judgments across various domains.
But the good news is that it can be mitigated. Kahneman, Sibony, and Sunstein provide a number of practical techniques that can help us make better decisions.
These techniques include collecting more information, using more objective methods, and creating more transparent processes.
I am incredibly grateful that you have taken the time to read this post.