Episode 24 — Defeat cognitive bias before it misleads you

In Episode 24 — Defeat cognitive bias before it misleads you, we zoom in on the mental shortcuts that can quietly push an analysis off course. Even strong technical skills can be undermined when your brain tries to save time by filling in gaps and smoothing over uncertainty. That is not a character flaw; it is normal cognition doing what it evolved to do, and it shows up even more when you are tired, rushed, or juggling multiple threads at once. The goal today is to make those traps easier to spot in yourself, not just in other people. Once you can name the shortcut you are taking, you can slow it down, test it, and keep your findings grounded in what the evidence actually supports.

Cognitive bias is best understood as a natural mental trap that distorts judgment, even when your intentions are good and your experience is deep. Your mind is constantly filtering information because it cannot process everything with equal attention. Filtering is useful, but the same filtering can become a problem when it turns into a fixed lens that only lets certain patterns through. In technical investigations, bias often shows up as overconfidence in an early narrative, impatience with ambiguity, or a tendency to treat familiar signals as decisive. The risk is not just that you get one detail wrong, but that the entire storyline becomes shaped by the wrong assumptions. When that happens, you might interpret every new observation as support for the narrative, even if it is actually contradictory. A disciplined analyst learns to treat certainty as something earned, not something felt.

Confirmation bias is one of the most common and most damaging patterns in cyber analysis because it rewards you for being consistent instead of being correct. Once you form a theory, you will naturally pay more attention to evidence that supports it and skim past evidence that complicates it. You might favor logs that align with your hypothesis and dismiss other sources as noisy, even when those sources are the ones that could prove you wrong. This becomes especially risky when a team is under pressure to deliver an answer quickly, because the first plausible explanation feels like relief. The trap is that your analysis becomes a search for supportive details rather than a test of competing explanations. A practical way to weaken confirmation bias is to treat every theory as provisional and to actively ask what evidence would need to be present if the theory were true. If that expected evidence is missing, your confidence should drop, not rise.

Another common trap is anchoring, where the first meaningful piece of information you see becomes the reference point for everything that follows. Anchoring can happen when an alert description frames the event as a known threat, when a previous shift left a note that suggests a cause, or when a single artifact looks dramatic and grabs attention. The danger is that you start adjusting around the anchor instead of evaluating the situation fresh. Even if you later encounter evidence that should shift your view significantly, you may only move a small distance away from the original anchor. This is how investigators end up with conclusions that feel reasonable but fail to match the full body of facts. A good habit is to restate the problem in your own words before you commit to a narrative, and then check whether that restatement is based on evidence or on the first framing you were handed. Anchors are sticky, so you have to notice them early.

Availability bias is different, but it can be just as misleading, especially in environments where you have seen many similar incidents. It causes you to overvalue information that is easy to remember, such as a recent breach, a high-profile technique, or a memorable incident that ended badly. When something is vivid, your brain treats it as more likely to be happening again, even if the base rates do not support that. In cyber work, availability bias can cause you to label a pattern as malicious because it resembles a recent case, while ignoring the possibility that it is a benign operational change. It can also cause you to assume a certain threat actor is involved because you read about them recently, not because the evidence points there. The fix is not to distrust memory, but to recognize that memory is not a probability engine. You want to supplement recall with deliberate checks that ask whether the current data truly matches the remembered case in more than superficial ways.

At this point, it helps to imagine yourself reviewing a report you wrote yesterday, but with a mindset that is looking for hidden personal assumptions rather than obvious technical mistakes. You might notice that you treated a certain log source as authoritative without stating why, or that you assumed a user action was careless because that is what you have seen before. You might see that you described an event as suspicious based on a pattern you personally dislike, rather than on what the organization considers risky. These are not dramatic errors, but they shape conclusions, and conclusions shape action. When you review for assumptions, you look for statements that are not backed by evidence, especially statements that sound certain. You also look for places where uncertainty was present but never acknowledged. This kind of review is uncomfortable at first because it feels like questioning your competence, but it is actually how you protect it.

A useful metaphor is to think of biases as tinted glasses that change how you see the world, even when you forget you are wearing them. The tint might make certain signals look more intense, or it might dull signals that do not match your expectations. Once you accept that the tint is always there, the skill becomes learning how to compensate for it. You do not solve bias by trying to be purely objective through force of will, because willpower fades under stress. You solve it by building habits that expose the tint and create friction before you commit to a conclusion. In practice, that means you slow down when the story feels too clean, you look for disconfirming evidence, and you invite critique before the work leaves your team. The point is not to erase your instincts, because instincts are valuable. The point is to keep instincts from turning into unquestioned truth.

Mirror imaging is a bias that deserves special attention in threat work because it is subtle and it feels rational. It happens when you assume an attacker will think like you, value what you value, and choose the same optimal path you would choose. Defenders often assume an adversary will behave efficiently, avoid unnecessary risk, and prefer techniques that make sense to a well-run engineering team. In reality, attackers may be constrained by their tooling, their access, their skill level, or their incentives, and they may take messy paths that still work. Mirror imaging can also make you underestimate creativity, persistence, or opportunism, especially when you assume an attacker will not bother with something you consider crude. The antidote is to treat adversary behavior as something to be inferred from evidence, not something to be projected from your own preferences. When you catch yourself describing what you would do, take that as a signal to return to what they actually did.

Groupthink is the social cousin of cognitive bias, and it happens when the desire for harmony starts to override the need for critical thinking. In security teams, groupthink can appear when a dominant voice sets the narrative early, when junior analysts hesitate to disagree, or when the team is exhausted and wants closure. It can also happen when a team has a proud identity and unconsciously resists information that would imply the team missed something. The outcome is not always a wrong conclusion, but it is often an under-tested one. The team may converge quickly on a theory without fully checking alternatives, and dissenting observations might be treated as distractions rather than signals. Healthy teams do not avoid agreement; they avoid premature agreement. If you notice that disagreement feels risky or unwelcome, that is a warning sign that the team process is shaping the conclusion more than the evidence is.

Awareness of these traps is the first step, but awareness alone is not enough because bias tends to operate in the background. The reason these patterns persist is that they often feel like intelligence. A quick conclusion feels like expertise, a confident narrative feels like leadership, and agreement feels like efficiency. The problem is that feelings of clarity do not guarantee correctness, especially in complex technical environments with incomplete data. Once you see bias as a predictable feature of human cognition, you can stop treating it as a personal failure and start treating it as an operational risk. Operational risks are handled with controls, not with guilt. Those controls can take the form of defined processes, peer review, structured checks, and a culture that values being correct more than being fast. When you frame it this way, bias becomes something you manage systematically, the same way you manage other reliability problems.

One simple and powerful control is to use a devil’s advocate, not as a performative role, but as a genuine method to challenge the prevailing opinion. The devil’s advocate should not attack people, only ideas, and their job is to search for what the main narrative is overlooking. They can ask what alternative explanations could fit the evidence, what assumptions have not been stated, and what data would change the conclusion if it appeared. This role is especially useful when the team is aligned early, because alignment can mask gaps. It is also useful when a conclusion has big consequences, because the cost of being wrong is higher than the cost of being slightly slower. When done well, the devil’s advocate does not slow the team down; it speeds up learning by forcing the analysis to become clearer. The best version of this role is one that rotates, so challenging assumptions becomes normal rather than personal.

Another control is to slow down decision making at the moments when your brain most wants to speed up. This does not mean dragging out routine work or avoiding action during an active incident. It means adding a short pause before you lock onto a final interpretation, especially when the story feels obvious or emotionally satisfying. Slowing down can be as simple as restating your hypothesis, identifying at least one alternative, and naming the most important uncertainty that remains. It can also mean checking whether you are anchored to an early clue, or whether you are being pulled by what is most memorable rather than what is most supported. This pause creates space for critical thinking, and it gives your team a chance to surface concerns that might otherwise remain unspoken. Over time, this habit becomes a stabilizer, because you learn that a small amount of deliberation at the right time prevents major rework later.

Bias management also becomes real when you practice identifying one personal bias that might affect your current investigation. Personal bias here does not mean a moral failing; it means a consistent tilt in how you interpret technical ambiguity. Some analysts tend to assume malice quickly because they have been burned by false negatives. Others tend to assume benign explanations because they have spent years dealing with noisy tools and false positives. Some people give too much weight to certain data sources because those sources have saved them in the past. Others distrust certain sources because they have seen them fail at the worst time. The practice is to name your tilt and then compensate for it deliberately. If you know you anchor quickly, you can force yourself to generate alternatives before you commit. If you know you seek confirming evidence, you can force yourself to look for contradictions first. Naming the tilt makes it manageable.

The real point of all of this is not to make you doubt everything, but to help you trust your conclusions for the right reasons. Defeating bias is about strengthening the link between evidence and judgment so the work holds up when questioned. When a stakeholder challenges your finding, you want to be able to explain not only what you concluded, but how you avoided common traps along the way. That explanation is also how you build credibility over time, because it shows your judgment is disciplined rather than reactive. It also protects the organization, because decisions based on intelligence can have real consequences, from blocking legitimate business traffic to missing a true intrusion. The best analysts are not the ones who never make mistakes. They are the ones who build processes that catch mistakes early, before those mistakes become institutionalized in reports, playbooks, and long term assumptions.

Conclusion: you know the traps, so ask a peer to review your work. A peer review is one of the most effective ways to surface bias because it introduces a second mind that is not anchored to your initial impressions. The peer does not need to be more senior to be useful; they just need enough context to evaluate whether your logic follows from the evidence. When you ask for review, you are not asking someone to rubber-stamp your conclusion; you are asking them to test it, challenge it, and point out where your assumptions are doing too much work. That review can reveal confirmation bias, anchoring, availability bias, mirror imaging, or groupthink effects that were invisible from inside your own head. Over time, this practice also improves team culture, because it normalizes respectful disagreement and makes rigor the default. When you make peer review a habit, you are not just avoiding errors; you are training better judgment, and that is the long game in intelligence work.
