Episode 10 — Read network telemetry for signals that count
In Episode 10, Read network telemetry for signals that count, we turn our attention to one of the richest and most underused sources of intelligence in any environment: network traffic. Network telemetry tells a story about how systems actually communicate, not how we assume they do. When you learn to read that story fluently, you stop relying solely on alerts and start recognizing patterns of behavior that indicate real adversary activity. This episode is about developing the habit of scanning network data with intent, knowing what matters, and ignoring what does not. Network signals are rarely loud at first, but they are often consistent, and consistency is what makes them powerful. The goal here is not deep packet inspection mastery, but practical signal recognition that works in daily operations.
At its most basic level, network telemetry includes logs from firewalls, routers, proxies, and flow collection systems that record how systems connect to one another. These records capture who talked to whom, when the communication happened, how long it lasted, and how much data moved. Even without payload visibility, this information is incredibly revealing because it describes behavior. Flow data shows relationships, and relationships are what attackers must create to move, persist, and extract value. When you review network telemetry regularly, you begin to recognize what normal communication looks like for your environment. That familiarity is what allows abnormal behavior to stand out quickly. Network logs are not just exhaust data; they are a behavioral map of your organization.
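To make the shape of this data concrete, here is a minimal sketch of a single flow record in Python. The field names are illustrative rather than tied to any particular collector or schema; real NetFlow, IPFIX, or firewall logs will use their own.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FlowRecord:
    """One flow: who talked to whom, when, for how long, and how much moved.
    Field names are illustrative; real collectors use their own schemas."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    start_time: datetime
    duration_seconds: float
    bytes_sent: int
    bytes_received: int

# Even this minimal record, with no payload, describes behavior:
# a relationship (src -> dst), its timing, and its volume.
example = FlowRecord(
    src_ip="10.0.0.12",
    dst_ip="203.0.113.50",
    src_port=52144,
    dst_port=443,
    start_time=datetime(2024, 5, 1, 9, 15, 0),
    duration_seconds=4.2,
    bytes_sent=1_850,
    bytes_received=12_400,
)
```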
One of the first places to focus is outbound traffic, because adversaries almost always need to communicate outward at some point. Unusual outbound traffic patterns can indicate data exfiltration, command-and-control activity, or unauthorized remote access. This might appear as a system that normally communicates internally suddenly sending data externally, or a workstation reaching out to destinations it has never contacted before. Volume matters, but so do timing and destination diversity. Exfiltration does not always involve massive transfers, and command traffic is often deliberately small. The key is to look for behavior that does not align with the system's role. When outbound traffic breaks expectations, it deserves attention.
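A sketch of that idea, assuming flow records as simple dictionaries with src_ip and dst_ip keys and a history of destinations each host has reached before, might flag first-time external contacts like this.

```python
import ipaddress
from collections import defaultdict

def new_external_destinations(flows, history):
    """Flag outbound connections to external destinations a host has never contacted.

    `flows` is an iterable of dicts with 'src_ip' and 'dst_ip' keys, and
    `history` maps each source IP to the set of external destinations it has
    contacted before; both shapes are assumptions for this sketch."""
    findings = defaultdict(set)
    for flow in flows:
        src, dst = flow["src_ip"], flow["dst_ip"]
        if ipaddress.ip_address(dst).is_private:
            continue  # only outbound, internet-facing traffic is of interest here
        if dst not in history.get(src, set()):
            findings[src].add(dst)  # a destination this host has never reached before
    return dict(findings)
```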
Small, regular beaconing traffic is especially easy to dismiss and especially dangerous to ignore. Persistent backdoors often communicate in low volumes to avoid detection, checking in periodically for instructions. These beacons may look harmless because they involve minimal data and do not trigger bandwidth alerts. However, their regularity is the signal. A connection that occurs every few minutes or every hour with no clear business purpose is often more suspicious than a single large spike. Attackers rely on defenders overlooking these subtle patterns in favor of louder events. Training yourself to notice regular, low-volume connections is a key skill in network-based detection.
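One rough way to test for that regularity is to look at the gaps between connections from one host to one destination. The thresholds in this sketch are arbitrary starting points, not tuned values, and the timestamp format is an assumption.

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, min_connections=6, max_jitter_ratio=0.2):
    """Heuristic check for regular, low-volume check-ins.

    `timestamps` are connection start times (Unix seconds) for one
    source/destination pair. If the gaps between connections are nearly
    uniform, the traffic is worth a closer look. Thresholds are arbitrary
    starting points, not tuned values."""
    if len(timestamps) < min_connections:
        return False
    ordered = sorted(timestamps)
    intervals = [b - a for a, b in zip(ordered, ordered[1:])]
    avg = mean(intervals)
    if avg == 0:
        return False
    jitter_ratio = pstdev(intervals) / avg  # low ratio means a very regular cadence
    return jitter_ratio <= max_jitter_ratio

# A check-in roughly every 600 seconds with small jitter scores as regular.
checkins = [0, 598, 1201, 1799, 2402, 3000, 3601]
print(looks_like_beaconing(checkins))  # True
```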
All of this depends on having a baseline of normal network activity, because anomalies only exist relative to normal behavior. Establishing a baseline does not require perfection, but it does require observation over time. You need to know which systems talk externally, which services generate consistent traffic, and which destinations are expected. Baselines can be informal at first, built through repeated review rather than complex modeling. Over time, your intuition becomes more accurate, and anomalies become easier to spot. Without a baseline, everything looks suspicious or nothing does, and both outcomes are equally unhelpful. Baselines turn raw telemetry into context.
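In code, an informal baseline can be as simple as an accumulated set of source-to-destination pairs. The shapes below are assumptions; the point is the habit of folding each review into what you already know.

```python
from collections import defaultdict

def update_baseline(baseline, todays_flows):
    """Fold one day's flows into an informal baseline.

    `baseline` maps a source IP to the set of destinations seen before;
    `todays_flows` is an iterable of (src_ip, dst_ip) pairs. A usable
    baseline is just accumulated observation, not a complex model."""
    for src, dst in todays_flows:
        baseline[src].add(dst)
    return baseline

def anomalies_against_baseline(baseline, todays_flows):
    """Return the (src, dst) pairs that fall outside the current baseline."""
    return [(src, dst) for src, dst in todays_flows
            if dst not in baseline.get(src, set())]

baseline = defaultdict(set)
update_baseline(baseline, [("10.0.0.5", "198.51.100.7"), ("10.0.0.5", "203.0.113.9")])
print(anomalies_against_baseline(baseline, [("10.0.0.5", "192.0.2.44")]))
```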
To make this concrete, consider analyzing a spike in Domain Name System (DNS) traffic to a previously unknown and suspicious-looking domain. The spike itself is a signal, but the context determines its meaning. You would want to know which hosts are making the requests, how frequently they occur, and whether the domain resolves to infrastructure associated with known services or newly registered assets. You would also look at timing, because bursts of DNS queries at regular intervals can indicate automated behavior rather than user-driven activity. Even without seeing the payload, DNS patterns often reveal early stages of malicious activity. Treating DNS as an intelligence source rather than just plumbing unlocks significant defensive value.
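A sketch of that triage, assuming DNS query logs as dictionaries with client_ip, domain, and timestamp fields, might summarize who is driving queries to domains the environment has never seen.

```python
from collections import Counter, defaultdict

def summarize_dns_spike(queries, known_domains):
    """Summarize which hosts drive queries to domains not seen before.

    `queries` is an iterable of dicts with 'client_ip', 'domain', and
    'timestamp' (Unix seconds); `known_domains` is a set of domains already
    observed in the environment. Both are assumed shapes for this sketch."""
    per_domain_clients = defaultdict(Counter)
    per_domain_times = defaultdict(list)
    for q in queries:
        if q["domain"] in known_domains:
            continue
        per_domain_clients[q["domain"]][q["client_ip"]] += 1
        per_domain_times[q["domain"]].append(q["timestamp"])

    report = {}
    for domain, clients in per_domain_clients.items():
        times = sorted(per_domain_times[domain])
        gaps = [b - a for a, b in zip(times, times[1:])]
        report[domain] = {
            "query_count": sum(clients.values()),
            "distinct_clients": len(clients),
            "top_client": clients.most_common(1)[0][0],
            # Near-identical gaps point to automation rather than user browsing.
            "intervals_seconds": gaps,
        }
    return report
```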
A useful way to think about network logs is as a digital footprint left behind by a person walking. Each step alone is not very informative, but a trail of steps reveals direction, pace, and intent. Network connections are those steps. Over time, they show where systems go, how often they go there, and whether their paths make sense. Just as a footprint trail through an unusual area draws attention, network paths that diverge from normal routes deserve scrutiny. This perspective helps you think in terms of movement and behavior rather than isolated events. It also reinforces why patterns matter more than single data points.
Certain fields in a flow record consistently provide the most value when identifying potential malicious activity. Source and destination addresses tell you who is communicating, while source and destination ports provide clues about the service or protocol being used. Timestamps and durations reveal timing patterns, which are critical for spotting beaconing or automation. Byte and packet counts help distinguish between lightweight signaling and heavy transfers. When you know which fields matter most, you can scan records more efficiently and avoid getting lost in noise. You do not need every field to find meaningful signals; you need the right ones.
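One way to keep scans efficient is to project each raw record down to those high-value fields before review. The field names here are assumptions that you would map to whatever your collector actually emits.

```python
KEY_FIELDS = (
    "src_ip", "dst_ip",        # who is communicating
    "src_port", "dst_port",    # clues about the service or protocol
    "start_time", "duration",  # timing patterns: beaconing, automation
    "bytes", "packets",        # lightweight signaling vs. heavy transfer
)

def triage_view(flow_record):
    """Project a raw flow record down to the fields that carry most of the signal.

    Field names are assumptions for this sketch; everything else is left out
    of the first pass so the scan stays fast."""
    return {field: flow_record.get(field) for field in KEY_FIELDS}

raw = {
    "src_ip": "10.0.0.12", "dst_ip": "203.0.113.50",
    "src_port": 52144, "dst_port": 443,
    "start_time": 1714554900, "duration": 4.2,
    "bytes": 14250, "packets": 32,
    "tcp_flags": "PA", "tos": 0, "exporter": "edge-fw-01",
}
print(triage_view(raw))
```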
Encrypted traffic adds complexity, but it does not eliminate visibility. When traffic is encrypted, you lose payload content, but you retain metadata, and metadata is often enough. You can still see destinations, timing, volume, and frequency, all of which are useful for behavioral analysis. Encryption protects confidentiality, but it does not hide the existence of communication. Attackers must still connect, and those connections still leave traces. Understanding this helps prevent the false assumption that encrypted traffic is opaque and therefore not worth analyzing. In practice, many detections rely entirely on metadata patterns rather than content inspection.
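A small sketch of metadata-only profiling, assuming encrypted flows still expose destination, timestamp, and byte counts, shows how much behavior survives without any payload access.

```python
from collections import defaultdict
from statistics import mean

def metadata_profile(encrypted_flows):
    """Build per-destination behavioral features without any payload access.

    `encrypted_flows` is an iterable of dicts with 'dst_ip', 'timestamp'
    (Unix seconds), and 'bytes' keys; an assumed shape. Destination, timing,
    volume, and frequency survive encryption, and that is what this profiles."""
    by_dst = defaultdict(list)
    for flow in encrypted_flows:
        by_dst[flow["dst_ip"]].append(flow)

    profile = {}
    for dst, flows in by_dst.items():
        times = sorted(f["timestamp"] for f in flows)
        gaps = [b - a for a, b in zip(times, times[1:])]
        profile[dst] = {
            "connections": len(flows),
            "total_bytes": sum(f["bytes"] for f in flows),
            "mean_gap_seconds": mean(gaps) if gaps else None,
        }
    return profile
```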
Network telemetry becomes even more powerful when correlated with host-based logs. Network logs might tell you that a system connected to a suspicious destination, while host logs can tell you which process initiated that connection. Together, they provide a more complete picture of what happened and why. Correlation reduces ambiguity because it ties behavior to execution context. This combination also helps validate findings, as signals observed in both domains are less likely to be false positives. Intelligence work benefits from this layered view because it strengthens confidence in conclusions. No single data source tells the whole story, but together they form a coherent narrative.
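The correlation itself is usually a simple join on host, destination, and time. The field names below are assumptions standing in for whatever your host telemetry records when a process opens a network connection.

```python
from datetime import timedelta

def correlate_flow_with_processes(flow, process_events, window=timedelta(seconds=30)):
    """Find which process most plausibly initiated a suspicious connection.

    `flow` is assumed to have 'src_ip', 'dst_ip', 'dst_port', and 'start_time'
    (a datetime); each entry in `process_events` is assumed to have 'host_ip',
    'process_name', 'remote_ip', 'remote_port', and 'timestamp'. Real host
    telemetry differs, but the join logic is the same: same host, same
    destination, close in time."""
    matches = []
    for event in process_events:
        if event["host_ip"] != flow["src_ip"]:
            continue
        if event["remote_ip"] != flow["dst_ip"] or event["remote_port"] != flow["dst_port"]:
            continue
        if abs(event["timestamp"] - flow["start_time"]) <= window:
            matches.append(event["process_name"])
    return matches
```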
Frequency analysis is another effective technique for finding signals that matter. By looking for rare connections, you can quickly surface behavior that deviates from the norm. Most network traffic is repetitive and predictable, which means truly unusual connections often stand out statistically. This does not require advanced math, just an awareness of what is common versus what is rare. Rare does not automatically mean malicious, but it does mean worth understanding. Over time, frequency analysis helps you focus attention where it is most likely to pay off. It is a practical way to reduce noise without ignoring potential threats.
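A minimal frequency pass, assuming flow records with a destination field, might look like the sketch below. It does nothing more than count and sort, which is often enough to decide where to look first.

```python
from collections import Counter

def rare_destinations(flows, max_occurrences=3):
    """Surface destinations that almost never appear in the flow data.

    `flows` is an iterable of dicts with a 'dst_ip' key (an assumed shape).
    Rare is not the same as malicious, but rare connections are where a
    limited amount of review time tends to pay off."""
    counts = Counter(flow["dst_ip"] for flow in flows)
    return sorted(
        (dst for dst, n in counts.items() if n <= max_occurrences),
        key=lambda dst: counts[dst],
    )
```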
The quality of your analysis depends heavily on the quality of your logging. Ensuring that your systems capture both source and destination ports for all connections is critical, because ports add important context. They help differentiate between web traffic, remote access, file transfer, and other activities that may have very different risk profiles. Missing port data forces analysts to guess, which weakens conclusions. Good telemetry design is a force multiplier for intelligence because it reduces uncertainty at every step. Investing in comprehensive logging pays dividends long after the initial setup.
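A quick way to check that quality, assuming records as simple dictionaries, is to measure how many actually carry both port fields.

```python
def port_coverage(flows):
    """Measure how many flow records actually carry both port fields.

    `flows` is an iterable of dicts that may or may not include 'src_port'
    and 'dst_port' (an assumed shape). A low ratio here means analysts are
    guessing at services, which is a logging problem to fix, not an analysis
    problem to work around."""
    total = complete = 0
    for flow in flows:
        total += 1
        if flow.get("src_port") is not None and flow.get("dst_port") is not None:
            complete += 1
    return complete / total if total else 0.0

flows = [
    {"src_ip": "10.0.0.5", "dst_ip": "198.51.100.7", "src_port": 51000, "dst_port": 443},
    {"src_ip": "10.0.0.6", "dst_ip": "203.0.113.9"},  # ports missing: weaker conclusions
]
print(f"{port_coverage(flows):.0%} of records carry both ports")
```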
Prioritization is also important, because not all traffic deserves equal attention. Monitoring for connections to known malicious infrastructure and unauthorized remote access tools should be high on the list, because these often indicate active compromise. This does not mean ignoring unknowns, but it does mean recognizing where risk is highest. Known bad destinations provide clear signals that justify immediate action. Combining this with behavioral analysis of unknown destinations gives you balanced coverage. Prioritization ensures your limited time is spent where it can reduce risk fastest.
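A sketch of that prioritization, assuming a threat intelligence set of known-bad addresses and an illustrative shortlist of remote access ports, might split traffic into an urgent queue and a review queue.

```python
def prioritize(flows, known_bad_ips, remote_access_ports=frozenset({3389, 5900, 4444})):
    """Split flows into a high-priority queue and everything else.

    `known_bad_ips` stands in for whatever threat intelligence feed you use;
    the port set is an illustrative shortlist of common remote access ports,
    not a complete or authoritative list. `flows` is an iterable of dicts
    with 'src_ip', 'dst_ip', and 'dst_port' keys (assumed shape)."""
    urgent, review_later = [], []
    for flow in flows:
        hit_known_bad = flow["dst_ip"] in known_bad_ips
        hit_remote_access = flow["dst_port"] in remote_access_ports
        if hit_known_bad or hit_remote_access:
            urgent.append(flow)        # likely active compromise: act now
        else:
            review_later.append(flow)  # still covered by behavioral review
    return urgent, review_later
```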
Network telemetry is one of the clearest windows into adversary behavior when you know how to read it. It rewards patience, pattern recognition, and consistency rather than heroic one-time analysis. The more often you review network data, the more intuitive it becomes, and the faster you can spot signals that count. Your next step is simple but meaningful. Check your firewall logs for any anomalies today, not with the goal of finding something dramatic, but with the goal of reinforcing your baseline. Over time, this habit turns network data from background noise into a trusted intelligence source that quietly protects your environment.