Episode 26 — Synthesize multi-source findings into one clear story
The goal of this episode is to take the scattered signals you collect every day and turn them into something that reads like understanding instead of a pile of facts. Most investigations start with fragments, such as one suspicious login, one odd process, or one unusual network connection, and those fragments rarely arrive in the right order. The pressure is to report quickly, but speed without synthesis often produces an update that is technically accurate and strategically useless. What you want is a narrative that helps a reader grasp what happened, why it matters, and what the evidence supports right now. Synthesis is the skill that takes you from observed events to a coherent story that a technical team can validate and a decision maker can act on.
Synthesis is the act of combining separate pieces of information into one whole, but it is not just aggregation. Aggregation is collecting and grouping, while synthesis is connecting and explaining. If you drop ten artifacts into a report without explaining their relationship, you have created a catalog, not intelligence. When you synthesize, you are making an argument that the pieces belong together and that their order and relationship tell a meaningful story. This requires judgment because real data is messy, incomplete, and occasionally misleading. It also requires restraint because not every interesting observation deserves a place in the final narrative. A good synthesizer learns to hold two ideas at once: confidence in what the evidence supports, and humility about what it does not. That balance is what keeps your story both clear and honest.
A strong starting point is learning to look for the common threads that link network logs with host forensic artifacts. Network telemetry can show connections, timing, destinations, and patterns that suggest command and control or data movement. Host artifacts can show process execution, persistence mechanisms, user activity, and changes to the system that explain why the network behavior occurred. When these two perspectives align, they reinforce each other, and your confidence rises naturally. For example, an outbound connection to a rare destination becomes more meaningful if you can tie it to a specific process that launched at the same time. Likewise, a suspicious process becomes more meaningful if you can show it reached out to infrastructure that matches the broader pattern you are tracking. Synthesis is where you weave those threads together so the relationship is obvious to the reader, not just implied.
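To make that cross-source linkage concrete, here is a minimal Python sketch of pairing a network connection with the host process that plausibly launched it, using a shared hostname and a short, directional time window. The hosts, processes, addresses, and field names (`start`, `first_seen`, and so on) are illustrative assumptions, not a real telemetry schema:

```python
from datetime import datetime, timedelta

# Hypothetical, simplified records; every value here is invented
# for illustration, not drawn from a real schema.
host_events = [
    {"host": "WS-042", "process": "updater.exe",
     "start": datetime(2024, 5, 6, 14, 3, 12)},
]
network_events = [
    {"host": "WS-042", "dest": "198.51.100.7",
     "first_seen": datetime(2024, 5, 6, 14, 3, 15)},
]

def correlate(host_events, network_events, window=timedelta(seconds=30)):
    """Pair each network connection with host processes that started on
    the same host shortly before the connection was first seen."""
    pairs = []
    for net in network_events:
        for proc in host_events:
            if proc["host"] != net["host"]:
                continue
            gap = net["first_seen"] - proc["start"]
            # Process must start before the connection, within the window.
            if timedelta(0) <= gap <= window:
                pairs.append((proc["process"], net["dest"], gap))
    return pairs

print(correlate(host_events, network_events))
```

The same-host check plus a bounded, one-directional time window is what keeps coincidental overlap from being treated as a causal link.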
One of the most common mistakes is reporting isolated facts without explaining how they relate to each other. This is easy to do because investigations naturally produce discrete findings, and each one feels like progress. The problem is that a reader cannot tell whether the facts are connected, coincidental, or simply adjacent in time. If you do not explain relationships, your audience fills the gap with their own assumptions, and that is where miscommunication begins. Some readers will assume the worst and escalate unnecessarily, while others will assume the facts are unrelated and ignore a real threat. The fix is to add connective tissue, which means you state why two observations matter together. You do not need dramatic language; you need clear logic that shows how the evidence forms a chain.
As you build that chain, keep your attention on the big picture so you can explain what the threat actor is doing. This does not mean you invent motives or claim certainty about intent. It means you describe the apparent objective based on the sequence of actions you can support, such as initial access, privilege use, discovery, lateral movement, persistence, and data access. The big picture is a pattern of behavior, not a single artifact. When you focus on that pattern, you make it easier for your audience to understand what matters next, such as containment decisions, priority hosts, and likely follow-on activity. A story that stays at the level of raw indicators often fails to answer the question your stakeholders actually care about. They want to know what is happening and what it means for them, not just what you found.
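One lightweight way to keep that big picture visible is to map your supported observations onto an ordered list of behavioral stages and read the result as the skeleton of the narrative. A small sketch; the stage labels and the observed set are illustrative assumptions, not a formal framework mapping:

```python
# An ordered list of behavioral stages; labels are illustrative,
# not tied to any specific framework.
STAGES = ["initial access", "privilege use", "discovery",
          "lateral movement", "persistence", "data access"]

# Stages the evidence actually supports in this invented case.
observed = {"initial access", "discovery", "persistence"}

# The narrative skeleton: supported stages in behavioral order.
narrative = [stage for stage in STAGES if stage in observed]
print(" -> ".join(narrative))  # initial access -> discovery -> persistence
```

The gaps are as informative as the matches: an unsupported stage between two supported ones is a prompt for targeted collection, not a license to assume.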
A helpful mental image is to imagine you are piecing together a puzzle with parts from many different boxes. Some pieces are from the puzzle you are solving, and some pieces only look like they belong because the colors match. Other pieces belong, but their edges are worn, so they do not snap into place easily. This is how multi-source analysis feels in real life, especially when logs are incomplete or when systems have different clocks and schemas. Your job is to test fit, not to force fit. When a piece does not align, you do not throw it away automatically, but you also do not jam it into the picture because you want closure. Instead, you set it aside, note what would make it fit, and look for additional pieces that can confirm or refute the connection. That patient testing is what separates synthesis from storytelling.
Another image that clarifies the skill is to think of synthesis as transforming raw ingredients into a final and complete meal for your audience. Raw ingredients can be excellent, but they are not the meal, and handing them over without preparation forces your audience to do the cooking. In intelligence work, your audience often cannot do that cooking quickly or correctly because they do not have the context, the time, or the investigative trail in their head. Synthesis is the preparation step, where you combine, sequence, and explain so the output is usable. You also choose the right serving style for the consumer, which means you keep it technically sound but not overloaded with every scrap you touched. A good meal is not every ingredient in the pantry; it is the ingredients that belong together. The same is true for a good analytic narrative.
As you merge sources, you will run into contradictions, and handling them well is one of the clearest signs of mature analysis. Contradictions show up when one log source suggests a time window that another source does not match, or when a host artifact implies execution that network telemetry does not reflect. The worst move is to ignore contradictions because they complicate the story. The second worst move is to treat contradictions as proof that all sources are unreliable. The better move is to treat contradictions as a diagnostic signal that tells you where your understanding is weak. Sometimes the contradiction resolves through simple explanations, like clock drift, missing telemetry, or differences in log granularity. Sometimes it reveals a deeper issue, like a separate mechanism of access or a mistaken attribution. Your narrative should acknowledge the tension and explain how you resolved it, or explain what remains uncertain.
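A simple way to triage timing contradictions is to ask whether the gap between two sources fits inside a plausible clock-drift budget before treating it as a real conflict. A minimal sketch; the timestamps and the 90-second budget are assumptions for illustration, and a real budget should come from what you know about the environment's time synchronization:

```python
from datetime import datetime, timedelta

def within_tolerance(t_a, t_b, drift=timedelta(seconds=90)):
    """Treat two timestamps as compatible if the gap could be explained
    by clock drift between the two log sources."""
    return abs(t_a - t_b) <= drift

# Invented timestamps: the host EDR clock and the firewall clock disagree.
edr_time = datetime(2024, 5, 6, 14, 3, 12)
fw_time = datetime(2024, 5, 6, 14, 4, 20)

# A 68-second gap looks like a contradiction until drift is considered.
print(within_tolerance(edr_time, fw_time))                               # True
print(within_tolerance(edr_time, fw_time, drift=timedelta(seconds=30)))  # False
```

When a gap exceeds any defensible drift budget, that is the diagnostic signal the paragraph above describes: the contradiction is real and belongs in the narrative, resolved or flagged as open.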
Good synthesis always answers the question of what all this data actually means now. That is the hard part, because meaning is not a field in a log. Meaning comes from relationships, timing, context, and consequence. When you say what it means, you are not guessing; you are translating evidence into an assessment that can guide action. The assessment might be that the activity is consistent with credential misuse and internal discovery, or that it is consistent with malware execution and outbound staging. It might also be that the activity is likely benign but requires one specific validation step to be sure. The key is that you do not leave the reader with a cloud of signals and a shrug. You give them a grounded interpretation that is clearly tied to the evidence you present.
A timeline is one of the most reliable tools for synthesis because it forces order onto chaos. When you place events from different sources onto a shared sequence, you can see cause and effect relationships that are hard to notice in separate views. A process start on a host becomes more meaningful when you can place it right before the first unusual outbound connection. A privileged login becomes more meaningful when it precedes configuration changes or access to sensitive repositories. Timelines also reveal gaps, which is just as valuable as what they reveal directly. If you expect to see a certain event between two other events and it is missing, that gap can signal a logging blind spot or an alternate path you did not consider. A clear timeline is also easier to communicate because it matches how humans naturally understand stories. Your audience can follow a sequence more easily than a collection of unordered facts.
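The mechanics of a shared timeline can be as simple as tagging each event with its source and sorting the merged list by timestamp. A small sketch using ISO 8601 timestamp strings, which sort chronologically as plain text; the events themselves are invented for illustration:

```python
# Invented events from three sources, as (timestamp, source, summary).
# ISO 8601 strings sort chronologically as plain text, which keeps
# the merge trivial; real data would need timezone normalization first.
network = [("2024-05-06T14:03:15", "network", "outbound TLS to rare host")]
host = [("2024-05-06T14:03:12", "host", "updater.exe spawned by winword.exe")]
auth = [("2024-05-06T14:01:40", "auth", "privileged logon for svc-backup")]

def build_timeline(*sources):
    """Merge per-source event lists into one chronologically ordered view."""
    return sorted((event for source in sources for event in source),
                  key=lambda event: event[0])

for ts, source, summary in build_timeline(network, host, auth):
    print(ts, f"[{source}]", summary)
```

Tagging each row with its source matters as much as the sort: it lets the reader see which perspective contributed each step, and it makes a missing-source gap in the sequence visible at a glance.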
As you write, make sure the narrative flows logically from the evidence to your final analytic conclusion. This is where many reports stumble, because the writer knows the conclusion and forgets that the reader does not. You want your story to feel inevitable, not because it is dramatic, but because each step is supported. That means you introduce observations, explain their relevance, connect them to other observations, and then state what the combined pattern indicates. When you do this well, your reader can trace your reasoning and see where confidence comes from. They can also challenge it productively if they disagree, because the logic is visible. Hidden reasoning creates fragile conclusions, because a reader cannot tell whether you made a careful inference or a leap. Visible reasoning creates strong conclusions, even when uncertainty remains.
Another essential check is making sure your story accounts for the most important data points you found. This is not the same as including everything. It means you identify which findings are load-bearing, the pieces that truly support your assessment, and you ensure they are addressed explicitly. If your conclusion depends on a specific artifact, it needs to appear clearly in the narrative, not buried as an aside. If there is a finding that seems to conflict with your conclusion, it should be handled directly so you are not accused of selective reporting. This is where synthesis becomes honest, because you are not only arranging supportive facts. You are integrating the full picture as best as the evidence allows. A good habit is to reread your conclusion and ask whether each major claim is supported by at least one clear piece of evidence in the body. If the support is missing, the claim should be softened or removed.
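One way to operationalize that reread is to keep an explicit map from each major claim to the evidence items that support it, then flag any claim with no support before the report ships. A minimal sketch; the claim names and evidence IDs are hypothetical:

```python
# Map each load-bearing claim in the assessment to the evidence IDs
# (hypothetical) that actually appear in the body of the report.
claims = {
    "credential misuse": ["auth-17"],
    "internal discovery": ["edr-04", "edr-09"],
    "data staging": [],  # interesting, but nothing in the body supports it yet
}

# Claims with no supporting evidence should be softened or removed.
unsupported = [claim for claim, evidence in claims.items() if not evidence]
print(unsupported)  # ['data staging']
```

The point is not the tooling; a table in your notes does the same job. What matters is that every claim in the conclusion is traceable to something the reader can find in the body.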
Practice can be simple, and one effective exercise is to write a three-sentence summary that combines findings from two different tools. You might take a network observation and a host observation and compress them into a short, coherent statement that explains the relationship and the meaning. For example, you can describe that network logs show a rare outbound connection pattern from a specific endpoint, and host artifacts show a process chain that initiated that connection at the same time. Then you can state what the combined pattern suggests, such as likely remote access tooling activity or likely staging behavior, while keeping uncertainty appropriately framed. This exercise is useful because it forces you to choose the key facts and articulate the connection between them. It also helps you practice writing for clarity without losing technical precision. When you can do this in three sentences, you can usually scale it into a longer narrative without drifting into noise.
Synthesis is also a discipline of audience empathy, even when you are writing for technical readers. Your audience is not inside your investigation, and they do not know which dead ends you explored or which assumptions you rejected. If you omit connective explanations, you are asking them to do mental reconstruction, which is slow and error prone. If you include too much detail, you bury the signal under process. The middle path is to be selective about what you include and generous about why it matters. This is also where consistent language helps, because switching terms or describing the same concept three different ways makes the narrative harder to follow. Clarity is not only about sentence structure; it is about conceptual stability. When your terms stay consistent and your logic stays visible, the reader can focus on the meaning rather than decoding your intent.
Over time, you will find that synthesis is not a single step at the end, but something you do throughout an investigation. You start forming provisional connections early, and you update them as new evidence arrives. The risk is that early connections become fixed stories, so you need to hold them lightly until the timeline and cross-source checks support them. This is where your earlier habits, like source discipline and bias awareness, become practical enablers of synthesis rather than separate topics. If you evaluate sources carefully, you avoid building your story on shaky claims. If you watch for cognitive shortcuts, you avoid forcing coherence where it does not exist. Synthesis is the place where all those skills converge, because it is where you turn technical work into a product that others can use. The more you practice, the more natural it becomes to ask, what does this connect to, and what does it mean.
Conclusion: Synthesis builds clarity, so write a summary for your most recent case. When you take the time to merge multi-source findings into one coherent narrative, you reduce confusion, speed up decisions, and increase confidence across your team. The value is not only that you can explain what happened, but that you can explain why you believe it happened, based on evidence that fits together in a clear sequence. By looking for common threads across network and host perspectives, resolving contradictions instead of hiding them, and using a timeline to enforce order, you turn raw signals into understanding. When you check that your narrative accounts for the most important data points and that your conclusion follows logically from what you presented, you create an intelligence product that can stand up to review. Take your most recent case, choose two key tools, and write that compact summary that ties their findings together into one story, because that small practice is how clarity becomes a habit.