Episode 46 — Blend multiple models to strengthen conclusions

In Episode 46, Blend multiple models to strengthen conclusions, we focus on a skill that separates a good analyst from a consistently persuasive one: knowing how to combine structured models without turning the analysis into a mess. Most security teams have at least one preferred framework, and some teams treat that framework as the only legitimate lens for understanding an intrusion. The reality is that threat activity is multi-dimensional, and a single model rarely captures both the sequence of attacker actions and the relationships that explain why those actions worked. This episode is about building a more complete view by blending models in a deliberate way. The aim is not to collect frameworks like trophies, but to use them as complementary tools that sharpen your conclusions.

A practical way to start is to use the kill chain to track progress and the diamond model to show relationships. The kill chain is well suited for describing an attacker’s path as a sequence, because it naturally emphasizes stages and movement from one step to the next. The diamond model is well suited for describing who is involved and what they used, because it forces the analyst to connect adversary, capability, infrastructure, and victim. When you use these two together, you can tell a story that includes both timeline and structure. You can explain not only what phase the attacker reached, but also which pieces of infrastructure and capability enabled that progress. The result is an analysis that feels complete rather than one-dimensional.
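One way to keep the two views from drifting apart is to record every event once, with both its kill chain phase and its diamond model vertices. The sketch below is illustrative only: the class and field names are invented for this example, not part of any standard tooling, and the phase list follows the common seven-stage kill chain.

```python
from dataclasses import dataclass

# Common seven-stage kill chain, used here as the sequence axis.
KILL_CHAIN_PHASES = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command_and_control", "actions_on_objectives",
]

@dataclass
class DiamondEvent:
    """One intrusion event described with the diamond model's four vertices,
    tagged with the kill chain phase it belongs to."""
    adversary: str        # who, or the best-supported attribution label
    capability: str       # the tool or technique that was used
    infrastructure: str   # the system or service that enabled the action
    victim: str           # the asset or identity that was affected
    phase: str            # kill chain phase for the sequence view

    def __post_init__(self):
        if self.phase not in KILL_CHAIN_PHASES:
            raise ValueError(f"unknown kill chain phase: {self.phase}")

def timeline(events):
    """Order events by kill chain phase, so the narrative sequence and the
    relationship view are built from the same underlying records."""
    return sorted(events, key=lambda e: KILL_CHAIN_PHASES.index(e.phase))
```

Because both views come from one record set, a gap in one view (an event with no infrastructure, or a phase with no events) is immediately visible in the other.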

Once you have that base, you can layer your findings into the MITRE ATT&CK framework to identify specific technical adversary techniques. This layer is where you move from general descriptions to precise technical mapping. Instead of saying the attacker moved laterally, you can identify the technique pattern that matches what you observed. Instead of saying the attacker persisted, you can describe the specific persistence behavior you confirmed. This precision helps with repeatability, because two different analysts can align their findings when they use the same technique language. It also helps operational teams, because technique mapping can be connected to detections, hunting queries, and control coverage in a way that narrative alone cannot.
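The move from vague verbs to technique language can be as simple as a lookup table that translates confirmed observations into ATT&CK technique IDs. The sketch below is a minimal example: the technique IDs are real ATT&CK identifiers, but the mapping table and observation strings are invented for illustration, and a real workflow would track evidence and confidence alongside each entry.

```python
# Illustrative mapping from confirmed observations to ATT&CK technique IDs.
OBSERVATION_TO_TECHNIQUE = {
    "scheduled task created for persistence": ("T1053.005", "Scheduled Task"),
    "lateral movement over SMB admin shares": ("T1021.002", "SMB/Windows Admin Shares"),
    "credentials dumped from LSASS memory": ("T1003.001", "LSASS Memory"),
}

def map_observations(observations):
    """Translate confirmed observations into technique IDs, keeping
    unmapped items visible instead of silently dropping them."""
    mapped, unmapped = [], []
    for obs in observations:
        if obs in OBSERVATION_TO_TECHNIQUE:
            mapped.append((obs, *OBSERVATION_TO_TECHNIQUE[obs]))
        else:
            unmapped.append(obs)
    return mapped, unmapped
```

Keeping the unmapped list explicit matters: those are the observations where your evidence is still too vague to name a technique, which is exactly the honesty the cross-checking discipline depends on.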

The deeper lesson is to avoid relying on a single model when a combination provides much deeper insight. If you use only one framework, you will inherit its blind spots. Some models emphasize sequence but not context, and others emphasize entities and relationships but not progression. Some focus heavily on tactics and techniques, but leave out attacker intent and victim-specific factors. Combining models is a way to reduce those blind spots, but only when the combination is intentional. You are looking for a better explanation, not a bigger diagram. The analysis should feel like it gained clarity, not like it gained paperwork.

This is where it helps to understand how the weakness in one model can be covered by the strengths of another. The kill chain can struggle to show the richness of infrastructure relationships, especially when attackers use layered hosting, redirectors, and multiple communication paths. The diamond model can represent those relationships well, but on its own it may not communicate how the intrusion progressed over time. MITRE ATT&CK can provide technical granularity, but it can become a checklist if you do not connect techniques to evidence and objectives. When you blend these, you are balancing narrative sequence, relational context, and technical specificity. That balance makes your conclusions harder to dismiss because they are supported from multiple angles.

Now imagine a comprehensive report that uses three different models to explain a complex breach, because this is where blending becomes more than an academic exercise. The report might use a kill chain style narrative to describe how the breach unfolded from initial access to impact. It might use a diamond model view to show how the adversary, their tools, their infrastructure, and the victim environment interacted. It might then map confirmed behaviors to MITRE ATT&CK to provide a technique-level index that engineering teams can use for detection improvements. When you do this well, each model is serving a different reader and a different purpose, but all of them point to the same underlying truth. The report feels rigorous because it can be understood from more than one professional perspective.

A simple analogy is to think of blending models as using multiple camera angles to see a whole scene. One camera angle gives you a useful view, but it may hide a detail that matters, like the fact that the attacker approached from a different direction than you assumed. A second angle may reveal relationships that the first angle obscured. A third angle might show the timing and pacing, making it clear whether the attacker was moving quickly or slowly and deliberately. Your goal is not to overwhelm the viewer with footage, but to choose angles that reveal what matters. In threat analysis, those angles are your models, and the discipline is in selecting the ones that improve understanding rather than inflate complexity.

A useful example of blending is to identify where the diamond model’s infrastructure point overlaps with the kill chain’s delivery phase. Delivery is often where infrastructure is most visible, because something has to carry the payload, the link, or the authentication lure to the victim. In the diamond model, infrastructure captures the systems and services that enable that transfer, such as domains, hosting, relays, and communication endpoints. When you overlap these, you can show that delivery is not just a phase, but a set of concrete infrastructure choices. That overlap can also help you decide where to disrupt activity, because delivery-related infrastructure often provides containment or blocking opportunities. By connecting these views, you turn phase-based thinking into evidence-based action.
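Turning that overlap into action can be mechanical once events carry both a phase and an infrastructure field. The sketch below is a hypothetical example, assuming events are plain dicts with "phase" and "infrastructure" keys; the domain names are invented.

```python
def delivery_infrastructure(events):
    """Collect the infrastructure observed during the delivery phase,
    deduplicated in first-seen order, as candidate blocking or
    containment targets."""
    seen = []
    for event in events:
        infra = event["infrastructure"]
        if event["phase"] == "delivery" and infra not in seen:
            seen.append(infra)
    return seen

# Invented example events for illustration.
events = [
    {"phase": "delivery", "infrastructure": "lure-domain.example"},
    {"phase": "delivery", "infrastructure": "redirector.example"},
    {"phase": "command_and_control", "infrastructure": "c2.example"},
    {"phase": "delivery", "infrastructure": "lure-domain.example"},
]
```

Here `delivery_infrastructure(events)` returns the two delivery-phase domains and skips the command-and-control endpoint, which belongs to a different disruption decision.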

Using multiple models increases the rigor and credibility of your final analytic conclusions, but only if you treat each model as a method of testing your reasoning. When two models independently support the same conclusion, your confidence should increase. When they disagree, that is not a failure, it is a signal that you may have misinterpreted evidence or made an assumption that needs revisiting. For example, your kill chain narrative may imply a certain step occurred, but your technique mapping may show you have no evidence of it, only a suspicion. This kind of cross-check is valuable because it forces honesty about what is known versus what is inferred. Over time, it builds a habit of analytic rigor that improves both accuracy and trust.
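That cross-check between the narrative and the evidence can itself be written down. The sketch below is illustrative: the step names and the technique IDs in the example evidence are assumptions chosen for the example (T1566.001 and T1204.002 are real ATT&CK identifiers, but their presence here is invented), and a real case would carry richer evidence records.

```python
def cross_check(narrative_steps, evidence_by_step):
    """Return the steps the story claims but the evidence does not support.
    A non-empty result is not a failure; it marks assumptions to revisit."""
    return [s for s in narrative_steps if not evidence_by_step.get(s)]

# Invented example: the narrative asserts four steps, but only two
# have confirmed technique-level evidence behind them.
narrative = ["delivery", "exploitation", "lateral_movement", "exfiltration"]
evidence = {
    "delivery": ["T1566.001"],       # spearphishing attachment confirmed
    "exploitation": ["T1204.002"],   # malicious file execution confirmed
    "lateral_movement": [],          # suspected only, no artifacts yet
}
```

Running `cross_check(narrative, evidence)` flags lateral movement and exfiltration, which is precisely the known-versus-inferred distinction the paragraph above describes.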

This approach also helps you communicate with different teams using the frameworks they know best. A detection engineering team may respond quickly to technique mapping because it aligns with how they build telemetry and alerts. An incident response team may prefer a phase-based view because it aligns with containment and eradication decisions. Leadership may need a relationship-driven explanation that ties attacker choices to business impact and risk. When you blend models, you can produce a single analysis that can be read through multiple lenses without rewriting everything for each audience. That saves time, reduces translation errors, and keeps everyone aligned on the same incident picture.

There is a real risk, though, that blended analysis becomes overly complex or confusing, and avoiding that outcome is part of the skill. The simplest rule is that the models should not compete for attention, they should support a single storyline. Choose one primary narrative flow and let the other models act as supporting views that clarify, validate, or add actionable detail. Use consistent naming for entities so the reader does not wonder whether two labels refer to the same thing. Be disciplined about what you include, because not every possible mapping is helpful. When the reader finishes, they should feel like the situation became clearer, not like it became more technical just for show.

To stay disciplined, verify that each model adds a unique and valuable perspective to your overall threat assessment. If a model does not change your understanding, improve your communication, or strengthen your confidence, it may not belong in the final product. Sometimes one model is enough, and that is fine, but the premise here is that complex cases benefit from multiple angles. You should be able to state what each model contributed in plain language, such as sequence, relationships, or technical specificity. This also helps you avoid redundancy, where you repeat the same information three times using different vocabulary. The goal is complementarity, not duplication.

A good practice drill is mapping a single incident through the kill chain and then onto a diamond model, because it trains both sequence thinking and relationship thinking in one pass. Start with what you know happened first and carry it forward step by step, without filling in gaps with guesses. Then take the same evidence and place it onto the diamond, forcing yourself to identify adversary, capability, infrastructure, and victim in a way that is supported by what you observed. As you do this, notice where the mappings feel easy and where they feel strained. Those strained points are often the places where your investigation needs more evidence or where your assumptions are doing too much work. This practice builds the habit of validating your own narrative before you present it to others.
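The drill itself can be run over a small evidence list: feed the same records into both views and let the empty slots show you where the mapping is strained. Everything in this sketch is invented for illustration, including the vertex names and the convention that an empty string or "unknown" marks an unsupported vertex.

```python
# Invented example evidence: two events from the same incident.
evidence = [
    {"phase": "delivery", "adversary": "unknown", "capability": "phishing email",
     "infrastructure": "lure-domain.example", "victim": "finance workstation"},
    {"phase": "installation", "adversary": "unknown", "capability": "loader implant",
     "infrastructure": "", "victim": "finance workstation"},  # infra not yet observed
]

# Kill chain view: which phases have any evidence at all?
phases_covered = {e["phase"] for e in evidence}

# Diamond view: which vertices are still unsupported by observation?
strained = [
    (e["phase"], vertex)
    for e in evidence
    for vertex in ("adversary", "capability", "infrastructure", "victim")
    if not e[vertex] or e[vertex] == "unknown"
]
```

The `strained` list is the output that matters: each entry is a place where the narrative is leaning on an assumption rather than an observation, which is where the investigation should go next.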

In Episode 46, Blend multiple models to strengthen conclusions, the key takeaway is that blended models provide depth when they are used intentionally and kept clear. The kill chain can help you track progress, the diamond model can show the relationships that make the intrusion coherent, and MITRE ATT&CK can anchor your findings in precise technique language. When you combine them thoughtfully, the weaknesses of one model are offset by the strengths of another, and your conclusions become more credible. The analysis also becomes easier to communicate across teams because each audience can engage with the lens they recognize. Apply two different frameworks to your next case and notice how the second perspective either confirms your story or reveals the exact place where you need to tighten your evidence.
