Episode 44 — Model intrusions with the diamond for clarity

In Episode 44, Model intrusions with the diamond for clarity, we take a very practical step toward making intrusion details easier to understand and easier to share. A lot of security work is full of fragments, like an IP address here, a hash there, and a vague story about what happened somewhere in between. This episode is about turning those fragments into a coherent picture without forcing people to memorize your entire investigation. The Diamond Model gives you a structure that is simple enough to draw quickly and strong enough to capture real technical relationships. As we go, keep one idea in mind: clarity is not about making an intrusion smaller, it is about making the relationships visible so your decisions are better.

The Diamond Model is built around connecting four core features, and those features are the whole point of why it works. You are not collecting random facts and hoping the narrative forms on its own. Instead, you are deliberately describing an intrusion as a relationship between an Adversary, a Capability, an Infrastructure, and a Victim. The model forces you to ask what each point represents and whether you can actually support it. When you do this well, you stop treating indicators as isolated trivia and start treating them as evidence of interaction. That shift matters because investigations are not just about what you found, they are about what it means in context.
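If it helps to see those four features as one concrete structure, here is a minimal note-taking sketch in Python. The class name, fields, and every value are my own illustrative assumptions, not a standard schema from the Diamond Model paper.

```python
from dataclasses import dataclass, field

@dataclass
class Diamond:
    """One intrusion event as the four core features of the Diamond Model."""
    adversary: str        # who you believe acted; "unknown" is a valid value
    capability: str       # malware family, toolset, or technique chain
    infrastructure: set   # domains, IPs, certificates the adversary used
    victim: str           # the asset plus the context that makes it a target
    evidence: dict = field(default_factory=dict)  # what supports each point

# Hypothetical event, just to show the shape.
event = Diamond(
    adversary="unknown external operator",
    capability="remote access trojan",
    infrastructure={"198.51.100.7", "update-check.example.net"},
    victim="finance employee laptop",
)
```

The point of writing it down this way is the same as drawing the diamond: every field must be filled in or honestly marked unknown, so isolated indicators cannot masquerade as a complete model.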

To make the model feel concrete, start by placing a threat actor and their malware onto the diamond and then let the relationships do the work. Your adversary point is the actor you believe is responsible, whether it is a named group, an affiliate, an internal threat, or an unknown actor with a distinctive pattern. Your capability point can be the malware family, a toolset, a technique chain, or even a single exploit path if that is what the evidence supports. When you connect those two points, you are making a claim about who is using what, and that claim should be grounded in evidence rather than instinct. The moment you put those two points on paper, you can start seeing where your certainty is strong and where it is still guesswork.

A common failure mode is to treat the victim as an afterthought, but the victim side of the diamond is not decoration. The victim point provides context that can change your interpretation of every other point. The victim is not only a hostname or a user account, it is also the role, the business function, the geography, the access level, and the reasons that specific target makes sense to the adversary. When you ignore that side, your model becomes a tool-centric summary rather than an intrusion model. When you include it, you can talk about why the actor went after that laptop instead of a server, why the actor targeted finance instead of engineering, and what the actor was likely trying to achieve.

The lines between the points are where you describe how the intrusion actually happens, and they deserve more attention than they often get. The diamond’s value is not only the four labels at the corners, it is the explanation of how the corners connect. The adversary does not magically appear on the victim, the adversary uses infrastructure to deliver capability, and then uses capability to impact the victim. When you draw those lines, you can describe the direction of movement and the nature of the interaction. This is where you stop saying the malware was present and start saying the malware was delivered by this path, executed under these conditions, and communicated through this infrastructure. If your lines are vague, your model will be vague, and the people reading it will fill in gaps with assumptions that may be wrong.

Now picture a diagram that perfectly illustrates how an attacker used a server to hit a laptop, because that mental image is exactly what you are building toward. You might have a compromised virtual private server as infrastructure, a remote access trojan as capability, a particular operator or group as adversary, and a specific employee laptop as victim. The lines between them can show the path from the external server to the laptop, the delivery mechanism that made the connection possible, and the control channel that maintained access. When you can show that relationship in a single view, you make the investigation easier to validate and easier to explain. You also make it easier to compare this incident to other incidents, because you can see what repeats and what changes.

If you want an analogy that stays useful without oversimplifying, think of the diamond as a simple map that shows who did what and how. A map does not tell you everything about a city, but it tells you how the major pieces connect, and that is what people need when they are trying to navigate. The adversary is the driver, the capability is the vehicle, the infrastructure is the road system and staging points, and the victim is the destination. That analogy is not perfect, but it highlights the core value of the model: it is about relationships and movement. If you cannot explain how something got from one point to another, then your model is not finished, even if your list of indicators is long.

Once you are comfortable building one diamond, you can start getting real analytical power by comparing multiple diamonds. One of the most useful techniques is identifying shared infrastructure between two different diamonds to link separate malicious cyber incidents. Shared infrastructure can be obvious, like the same domain, the same certificate fingerprint, or the same hosting provider with unique configuration artifacts. It can also be subtle, like recurring redirector patterns, reused command and control endpoints, or consistent naming conventions that show an operator habit. When you see infrastructure overlap, you gain a defensible reason to believe events are related, even if the malware changes. That is important because adversaries swap tools more easily than they swap operational patterns.
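One way to make that comparison mechanical is to treat each diamond's infrastructure point as a set of indicators and intersect them. The indicator values below are invented for illustration; the technique is just a set intersection.

```python
def shared_infrastructure(infra_a: set, infra_b: set) -> set:
    """Indicators present in both diamonds; a non-empty overlap is a
    defensible reason to investigate whether the incidents are linked."""
    return infra_a & infra_b

# Hypothetical infrastructure points from two separate incidents.
incident_one = {"evil-redirector.example", "198.51.100.7", "cert:ab12cd34"}
incident_two = {"198.51.100.7", "cert:ab12cd34", "staging-host.example"}

overlap = shared_infrastructure(incident_one, incident_two)
# overlap -> {"198.51.100.7", "cert:ab12cd34"}
```

Notice that the malware never appears in this check: the linking claim rests entirely on infrastructure overlap, which is exactly why it survives a tool swap.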

The model is especially valuable when you are deep in technical investigation and you need to visualize pivot points. Pivot points are the places where one piece of evidence leads you to another, like an IP address leading to a domain, a domain leading to a certificate, and a certificate leading to a cluster of related hosts. With a diamond, you can show which point you pivoted from and which point you pivoted to, and you can keep your reasoning visible. That visibility helps you avoid circular logic, where you assume something is connected because it feels connected. It also helps you communicate the difference between a confirmed link and a hypothesis that still needs validation. In practice, that means your team spends less time arguing about opinions and more time aligning on evidence.
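To keep those pivots auditable in your notes, you can record each one as a from/to/method triple and replay the chain later. All indicator values here are invented; the idea is simply that the path from first observable to final cluster stays written down.

```python
# Hypothetical pivot log: what you pivoted from, what you pivoted to, and how.
pivots = [
    ("ip:198.51.100.7", "domain:update-check.example", "passive DNS"),
    ("domain:update-check.example", "cert:ab12cd34", "TLS certificate lookup"),
    ("cert:ab12cd34", "ip:203.0.113.9", "certificate reuse on a new host"),
]

def pivot_chain(pivots: list, start: str) -> list:
    """Walk the pivot log in order from a starting indicator, so the
    reasoning from one piece of evidence to the next stays visible."""
    chain, current = [start], start
    for src, dst, _method in pivots:
        if src == current:
            chain.append(dst)
            current = dst
    return chain

# pivot_chain(pivots, "ip:198.51.100.7")
# -> ["ip:198.51.100.7", "domain:update-check.example",
#     "cert:ab12cd34", "ip:203.0.113.9"]
```

Because each hop names its method, a reviewer can challenge any single step without unwinding the whole investigation, which is how you keep circular logic from creeping in.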

Another place the diamond shines is in communication, especially when you need to explain complex technical relationships to stakeholders. Stakeholders do not need every packet detail, but they do need to understand the structure of the threat and what it implies for risk. A diamond lets you show, at a glance, which parts of the attack were external, which parts depended on internal weaknesses, and which parts represent the attacker’s unique choices. You can explain that the actor is consistent, but the capability shifts, or that the capability is common commodity malware, but the infrastructure is tailored and suggests a more deliberate operation. When you use the diamond this way, you are not dumbing anything down, you are translating complexity into a shape that supports decisions.

As your investigations mature, you can expand the diamond with technical meta-features to capture details that matter without turning the model into a wall of text. Meta-features can include timestamps for key events, the delivery method, the execution chain, the initial access technique, and the command and control pattern. You can also include environmental context, like whether the victim endpoint was off-network, whether the infrastructure used encrypted channels, or whether the capability used living-off-the-land techniques in addition to malware. These additions should remain disciplined, because the goal is still clarity. The purpose of meta-features is to support comparison and validation, not to turn the diamond into a substitute for your full case notes. If you add meta-features thoughtfully, you get the best of both worlds: a compact model with enough detail to be actionable.
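If you record meta-features alongside a diamond, that discipline is easier to keep when the fields are explicit and the comparable subset is named. The field names below are my own assumptions, not a standard schema.

```python
# Illustrative meta-features for one diamond event; field names are assumptions.
meta_features = {
    "first_seen": "2024-05-02T09:14:00Z",
    "delivery": "spearphishing attachment",
    "initial_access": "user enabled macros",
    "c2_pattern": "HTTPS beacon on a 60-second interval",
    "victim_off_network": False,
}

# Discipline: keep only the fields that support comparison across events.
COMPARABLE = {"delivery", "initial_access", "c2_pattern"}
comparable_view = {k: v for k, v in meta_features.items() if k in COMPARABLE}
```

Anything that does not survive the comparable-subset filter probably belongs in your full case notes rather than on the diamond itself.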

One of the healthiest habits you can build with the Diamond Model is verifying that all four points are supported by observed technical evidence. That does not mean you must know the adversary’s name to have an adversary point, because unknown is a valid value when you are honest about it. What it does mean is that you should be able to say what evidence supports each point and what evidence supports each relationship line. For capability, you may have hashes, behavioral signatures, process trees, or decompiled functions that anchor your claim. For infrastructure, you may have network logs, DNS records, certificates, hosting artifacts, or routing observations. For victim, you should have asset identity, user identity, role context, and exposure details that explain why that victim is relevant and what the impact is.
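That verification habit can be made into a trivial check: hold the evidence for each point in a map and ask which points are still empty. The evidence strings below are invented placeholders for this sketch.

```python
def unsupported_points(evidence: dict) -> list:
    """Return the diamond points that have no observed technical evidence."""
    points = ("adversary", "capability", "infrastructure", "victim")
    return [p for p in points if not evidence.get(p)]

# Hypothetical evidence map; an empty list is an honest "we do not know yet".
evidence = {
    "adversary": [],
    "capability": ["sha256 of the captured sample", "EDR process tree"],
    "infrastructure": ["DNS records", "TLS certificate", "network logs"],
    "victim": ["asset inventory entry", "user role and access level"],
}

gaps = unsupported_points(evidence)
# gaps -> ["adversary"]
```

An empty adversary point here is not a failure of the model, it is the model telling you exactly where your next investigative effort belongs.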

To make this skill real, practice drawing a diamond for a recent phishing attack, because phishing is common, messy, and perfect for learning the discipline. Your adversary might be unknown, but you can still characterize it as an external actor using a specific campaign pattern. Your capability could be the attachment type, the embedded macro, the payload, or even a credential theft workflow if no malware is involved. Your infrastructure might include the sending domain, the redirector, the landing page host, and the command and control endpoint if a payload was delivered. Your victim should include both the recipient and the organizational context, like why that user was targeted and what access they have that makes them valuable. When you draw the lines, you can narrate how the email reached the victim, how the victim was enticed to interact, and how the attacker’s infrastructure supported delivery and follow-on control.
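As a worked note-taking example, here is how that phishing diamond might look as a plain record, with the relationship lines carried as labeled edges. Every name and value is invented to show the shape, not taken from a real case.

```python
# Hypothetical phishing diamond; all values are illustrative.
phishing_diamond = {
    "adversary": "unknown external actor, recurring invoice-lure campaign",
    "capability": "macro-enabled attachment running a credential stealer",
    "infrastructure": {
        "sender_domain": "invoices-secure.example",
        "landing_page": "login-portal.example",
    },
    "victim": "accounts-payable clerk with access to payment workflows",
    # Edges carry the narrative: how one point reached another.
    "edges": [
        ("infrastructure", "victim", "email delivered from sender domain"),
        ("capability", "victim", "macro ran after the user enabled content"),
        ("adversary", "infrastructure", "lookalike domains registered"),
    ],
}
```

If you cannot write an edge for a relationship, that is the model telling you the narration of the intrusion still has a gap.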

As you do this, you will notice that the diamond naturally exposes gaps that matter. Maybe you have strong infrastructure evidence but weak capability evidence because the payload was blocked and you never captured it. Maybe you have capability evidence but the infrastructure is obscured by trusted cloud services or fast-flux hosting. Maybe you know the victim details but you are not sure whether the adversary is a specific group or just a broad criminal ecosystem. Those gaps are not failures, they are the map showing where to invest your next investigative effort. Over time, you will also notice that diamonds become comparable objects, not just drawings, and that makes patterns emerge faster. When incidents are modeled consistently, you get a shared language for threat hunting, response prioritization, and intelligence sharing.

In Episode 44, Model intrusions with the diamond for clarity, the big takeaway is that the Diamond Model clarifies relationships, and relationships are what turn evidence into understanding. When you can show how an adversary, capability, infrastructure, and victim connect, you can communicate the intrusion with precision without drowning people in raw logs. You can link incidents through shared infrastructure, capture pivot points in a way others can audit, and expand the model with meta-features that preserve the details that matter. You can also build discipline by verifying each point with observed evidence and being honest about what is unknown. Use that clarity to build a diamond for your current top threat, because the act of modeling is often the fastest way to see what you truly know and what you still need to prove.
