Why you should record your network traffic

Learn how to overcome the challenges of PCAP in high-speed, complex environments and discover the efficiency of our solution, n.Rewind, for seamless network analysis.

Florian Thebault
October 6, 2022

**And more importantly, why do it with our solution, n.Rewind**

PCAP or it didn’t happen!

The cyber combatant reflex card.

Known as PCAP (for Packet CAPture, the best-known network data logging format), capturing network traffic is common practice for both network administrators and cybersecurity specialists.

A PCAP file is a binary record of network traffic (with roughly 20-30% storage overhead compared to the original stream). It provides full visibility of past events and allows complete snapshots of the network to be taken for analysis.
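To make the format concrete, here is a minimal sketch of the classic libpcap file layout in Python, using only the standard library: a 24-byte global header followed by a 16-byte header per record. The field layout is the standard libpcap one; the timestamp and payload values are made up for illustration.

```python
import struct

# Build a minimal PCAP in memory: 24-byte global header + one record.
# Assumes the classic little-endian libpcap format (magic 0xa1b2c3d4).
GLOBAL_HDR = struct.Struct("<IHHiIII")  # magic, ver_maj, ver_min, tz, sigfigs, snaplen, linktype
RECORD_HDR = struct.Struct("<IIII")     # ts_sec, ts_usec, incl_len, orig_len

payload = b"\x00" * 60                  # a dummy 60-byte frame
data = (GLOBAL_HDR.pack(0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
        + RECORD_HDR.pack(1665055000, 0, len(payload), len(payload))
        + payload)

# Parse it back: this is all a reader needs to walk a capture file.
magic, vmaj, vmin, _tz, _sf, snaplen, linktype = GLOBAL_HDR.unpack_from(data, 0)
ts_sec, ts_usec, incl_len, orig_len = RECORD_HDR.unpack_from(data, GLOBAL_HDR.size)

print(f"pcap v{vmaj}.{vmin}, snaplen={snaplen}, linktype={linktype}")
print(f"record: {incl_len} bytes captured of {orig_len} on the wire")
```

Those fixed-size headers (24 bytes once, then 16 bytes per packet) are where part of the on-disk overhead comes from.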

In other words, it’s your network’s replay: rewatch your favorite episodes, such as Tuesday’s DDoS, the _Ransomware Special_, or the DHCP Malfunction Show.

Without a timely PCAP trace, it’s not possible to perform an efficient forensic analysis. Here arises <u>challenge n°1: how to make the capture in time?</u>

PCAP, what for?

Anything and everything.

A PCAP serves as a basis for analyzing and monitoring the health of the network (e.g., an ICMP flood or lag at the TCP session level), but also for detecting anomalies and malicious activity. A PCAP lets you trace the origin of the problem you want to identify on the network.

Supervising the source or destination of traffic, identifying data from "applications" and devices, detecting the origin and causes of malfunctions (which can also be indicators of compromise, or IoCs), and so on.

Overall, there are four main categories of packet capture usage:

  • Identify congestion points: thanks to the capture, it is possible to observe the path of all packets, which can help to identify congestion points,
  • Troubleshoot: the granularity of the PCAP (i.e. having all the packets) is the ideal (though very time-consuming) solution to identify precisely a problem and its root cause,
  • Explore: an essential component of forensics, the PCAP is often essential for identifying or confirming the presence of malware, detecting malicious behavior and security flaws,
  • Keep evidence: the PCAP allows you to share a consistent, complete, disseminable record of a network event. To be useful, a PCAP must both record the entire event and remain usable (exploration time vs. size of the recording): the larger and more complete it is, the more time is needed to isolate the elements of interest. This brings us to <u>challenge n°2: how to get a targeted capture?</u>
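The "targeted capture" idea above can be sketched in a few lines: walk a PCAP record by record and keep only the records inside a time window of interest. This is an illustrative toy (the `slice_pcap` helper and the timestamps are invented for the example), not a production extractor.

```python
import struct

GLOBAL_HDR = struct.Struct("<IHHiIII")  # classic little-endian libpcap header
RECORD_HDR = struct.Struct("<IIII")     # ts_sec, ts_usec, incl_len, orig_len

def slice_pcap(data: bytes, t_start: int, t_end: int) -> bytes:
    """Keep only records whose timestamp (in seconds) falls in [t_start, t_end]."""
    out = bytearray(data[:GLOBAL_HDR.size])        # reuse the global header as-is
    off = GLOBAL_HDR.size
    while off + RECORD_HDR.size <= len(data):
        ts_sec, _ts_usec, incl_len, _orig_len = RECORD_HDR.unpack_from(data, off)
        end = off + RECORD_HDR.size + incl_len
        if t_start <= ts_sec <= t_end:
            out += data[off:end]                   # record header + captured bytes
        off = end
    return bytes(out)

# Toy capture: three empty-payload records at t=100, 200, 300 seconds.
hdr = GLOBAL_HDR.pack(0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
records = b"".join(RECORD_HDR.pack(t, 0, 0, 0) for t in (100, 200, 300))
small = slice_pcap(hdr + records, 150, 250)        # keeps only the t=200 record
print(len(small))  # 24-byte header + one 16-byte record header = 40
```

The real challenge, of course, is doing this kind of targeting at capture time rather than after the fact, on a file that may be terabytes large.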

Is PCAP easy to do?


Sure! Unless you're in the cloud, in hybrid or high-speed environments, under a demanding SLA, or under attack. Yep, apart from your normal day in a SOC, it’s super easy to do.

A Tier 2 SOC analyst assists in the remediation of cyber-attacks by collecting host and network data for further analysis and by triggering appropriate security measures in accordance with the incident response plan. The main issues they will face are:

  • No information, incomplete information, or too much information about what happened before and after the cyber incident
  • Even if forensic data is available, it can be very large, expensive to store and extremely time-consuming to explore and analyze manually
  • In the midst of an incident, it can be complicated to make a useful capture and to switch between tools to analyze the packets.

As you can see, this leads to <u>challenge n°3: how to make a capture in the heat of the moment?</u>

Does size matter?

Oh yes! And usage too!

If you want to capture at the network core, the size of the PCAPs will often exceed the capabilities of the tools used to capture them. It will also be difficult to collect complete conversations (or sessions) if your capture tool is not powerful enough. This is why we recommend that you consider the stream-to-disk rate, a metric that is particularly important in environments with data rates above 10Gbit/s. At high bitrates, it is not uncommon for capture tools to claim performance that they can only deliver for a handful of seconds, a limitation some are not afraid to market as a feature called "microburst" capture.
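A back-of-the-envelope calculation shows why the stream-to-disk rate matters more than a peak figure. The numbers below are illustrative assumptions (a 10Gbit/s link, a tool that sustains 500 MB/s to disk, a 2 GB in-memory buffer), not measurements of any specific product:

```python
# Sustained vs. burst capture, under assumed numbers.
line_rate_MBps = 10_000 / 8  # 10 Gbit/s of traffic ≈ 1250 MB/s
disk_rate_MBps = 500         # assumed sustained stream-to-disk rate
buffer_MB = 2048             # assumed 2 GB in-memory buffer

# The buffer fills at the difference between arrival and drain rates:
fill_MBps = line_rate_MBps - disk_rate_MBps
seconds_until_loss = buffer_MB / fill_MBps
print(f"line rate {line_rate_MBps:.0f} MB/s, drops start after "
      f"{seconds_until_loss:.1f} s of full load")  # ≈ 2.7 s
```

Under these assumptions the tool looks lossless for under three seconds of full load, then silently drops packets: exactly the "microburst" behavior described above.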

Conversely, if the capture is done at the edge of the network, visibility will be limited. And we may miss the detection of a DDoS or malware.

The confluence of these constraints has given rise to a remark we often hear from our clients: "Sometimes I spend two to three days to find what I’m looking for in the extract." This really underlines the “needle in a haystack” principle. And no, you can’t burn the haystack and go through the ashes with a magnet, as our cheeky intern would say.

And this recurring problem is not going to go away as enterprise network speeds increase. Wireshark is already unable to record 1Gbit/s network streams without packet loss. Yet some ISPs already offer customers 2.5Gbit/s, and others 8Gbit/s (even if the offer is marketed as 10Gbit/s). Data centers are moving from 40Gbit/s to 100Gbit/s, and 400Gbit/s is expected to be widespread by the end of this decade. Performing packet capture in these environments is no longer possible with current tools and hardware.

Therefore, <u>challenge n°4: how do you find the needle in the PCAP haystack?</u>

Introducing n.Rewind

Intelligent packet capture

As mentioned above, security specialists face 4 challenges limiting their ability to benefit from a useful trace:

  • Challenge 1: how to make the capture in time.
  • Challenge 2: how to make a targeted capture.
  • Challenge 3: how to capture in the heat of the moment.
  • Challenge 4: how to find the needle in the PCAP haystack.

These challenges are worsened by the constant evolution of networks (core to edge, edge to cloud, edge to edge), by hybridization, and by the acceleration of throughputs. RIP tcpdump.

n.Rewind is the customizable capture tool that we have designed to meet all of the above challenges. It allows for full or filtered, automated capture and includes a buffer to make sure you don't miss the start of the show.

To do this, we started from scratch and realized that only our network probe technology could provide a new answer to this recurring set of problems. The application of our probe technology on the capture allowed us to:

  • Benefit from a buffer (whose size depends on the allocated RAM) that allows you to go back in time and record the "network moments" that precede its triggering. This is useful for cybersecurity analysts looking to understand the attack path or the weak signals specific to an activity.
  • Perform automatic captures triggered by filters or APIs.
  • Filter upstream, i.e., on the core network at 100Gbit/s, with a set of filters that is much more precise and useful than the 5-tuple and that handles complex protocol stacking such as tunneling, multiple VLANs, MPLS, etc. This is a very efficient way to get a lightweight PCAP when you know exactly what you're looking for in big pipes. In our previous experience, spending several days capturing multiple 1G networks with Wireshark and then several more finding what we were looking for was really no fun.
  • Filter downstream, which has the same use but is of great interest when you don't yet know exactly what you are looking for in your streams. The advantage of our solution is that you can run it on a Raspberry Pi for 1G captures or a sluggish Celeron for 10G, with a footprint 5 to 10 times smaller than the competition for 100G.
  • Handle all data rates, even the highest, with a simple software solution that can be deployed anywhere on your IT infrastructure. In our former career, after spending three days exploring a data center, taking snapshots at ten or so capture points before finding the one we were really interested in, we started dreaming of a solution that could be deployed remotely anywhere on our infrastructure with just a few clicks and command lines, without necessarily having to re-route flows or install TAPs.
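The pre-trigger buffer idea from the first point above can be sketched in a few lines of Python: keep the last N packets in a RAM ring buffer, and start the real recording, including those buffered moments, only when a trigger fires. The class name, sizes, and packet labels here are illustrative assumptions, not n.Rewind internals.

```python
from collections import deque

class PreTriggerCapture:
    """Toy ring-buffer capture: history before a trigger, everything after it."""

    def __init__(self, history: int):
        self.ring = deque(maxlen=history)  # oldest packets fall off the back
        self.recording = []
        self.triggered = False

    def on_packet(self, pkt):
        if self.triggered:
            self.recording.append(pkt)     # after the trigger, record everything
        else:
            self.ring.append(pkt)          # before it, just remember the recent past

    def trigger(self):
        # The recording starts with the moments *before* the trigger fired.
        self.recording = list(self.ring)
        self.triggered = True

cap = PreTriggerCapture(history=3)
for i in range(10):
    cap.on_packet(f"pkt{i}")
cap.trigger()                 # fire after pkt9: the buffer holds pkt7..pkt9
cap.on_packet("pkt10")
print(cap.recording)          # ['pkt7', 'pkt8', 'pkt9', 'pkt10']
```

In a real capture engine the "packets" are raw frames and the buffer is sized in gigabytes of RAM rather than a packet count, but the principle is the same: the trigger can fire late and you still get the lead-up to the event.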

n.Rewind was designed to address the pain points of SOC analysts and CERT/CSIRT experts. Daily analysis of its records allows in-depth, on-the-fly monitoring and exploration of network events from complete and targeted traces. This is essential for a transparent, secure network and for reducing cyber risk in private, public, and hybrid cloud environments.

Built to handle the most complex flows found in data centers and to track the highest throughputs, the records produced by n.Rewind improve incident response times as well as the day-to-day security operations of any type of network.

Do you want to be sure you’ll never miss the show? Reach out and try our network replay tool.

