Last Updated: 2014-10-13 14:03:34 UTC
by Johannes Ullrich (Version: 1)
[This is a guest diary published on behalf of Chris Sanders]
Hunting for evil in network traffic depends on the analyst's ability to locate patterns and variances in oceans of data. This can be an overwhelming task and relies on fundamental knowledge of what is considered normal on your network as well as your experience-based intuition. These dark waters are navigated by finding glimmers of light and following them where they lead you by carefully investigating all of the data sources and intelligence in your reach. While hunting the adversary in this manner can yield treasure, following some of these distant lights can also land you on the rocks.
One scenario that often puts analysts in murky waters occurs when they chase patterns of network traffic occurring over clearly visible intervals. This periodic activity is often associated with “beaconing”, where analysts take the regular timing of the communication as a sign that it may be the result of malicious code installed on a friendly system.
As an example, consider the flow records shown here:
If you look at the timestamps for each of these records, you will see that each communication sequence occurs almost exactly one minute after the previous one. Along with this, the other characteristics of the communication appear to be similar. A consistent amount of data is being transferred from internal host 172.16.16.137 to external host 184.108.40.206 each time.
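This kind of eyeball analysis can be sketched in a few lines of code: compute the gaps between consecutive flow start times and see how tightly they cluster around a single value. The timestamps below are illustrative stand-ins for the flow records above, not the actual data from this case; any series spaced roughly one minute apart behaves the same way.

```python
# Sketch: spotting near-constant intervals in flow start times.
from datetime import datetime
from statistics import mean, stdev

# Illustrative flow start times, roughly one minute apart.
timestamps = [
    datetime(2014, 10, 13, 14, 0, 2),
    datetime(2014, 10, 13, 14, 1, 1),
    datetime(2014, 10, 13, 14, 2, 3),
    datetime(2014, 10, 13, 14, 3, 2),
    datetime(2014, 10, 13, 14, 4, 1),
]

# Seconds between consecutive flows.
intervals = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]

# A low coefficient of variation (jitter relative to the mean interval)
# suggests periodic traffic -- which may be a beacon, or just a stock
# ticker refreshing itself.
cv = stdev(intervals) / mean(intervals)
print(f"mean interval: {mean(intervals):.1f}s, cv: {cv:.3f}")
```

The key point, as the rest of this diary argues, is that a low jitter score tells you the traffic is periodic; it tells you nothing about whether it is malicious.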
So, what’s going on here? Less experienced analysts might jump to the conclusion that the friendly device is compromised and that it is “beaconing” back out to some sort of attacker-controlled command and control infrastructure. In reality, it doesn’t take a lot of research to determine that the mysterious external entity is a Google-hosted IP address. In this case, this traffic actually represents the periodic updating of a Google Finance stock ticker.
As analysts, we are taught to identify patterns and home in on those as potential signs of compromise. While this isn’t an entirely faulty concept, it should be applied with discretion. With dynamic content so prevalent on the modern Internet, it is incredibly common to encounter scenarios where devices communicate periodically. This includes platforms such as web-based e-mail clients, social networking websites, chat clients, and more.
Ultimately, all network traffic is good unless you can prove it’s bad. If you do need to dig in further in scenarios like this, make the best use of your time by looking for information you can use to quickly eliminate the possibility that the traffic is malicious. This might include basic research on the potentially hostile host, as we did here; pivoting to full PCAP data to view the content of the traffic when possible; or simply examining the friendly host to determine which process is responsible for the connection(s). The ability to be selective about what you choose to investigate and to quickly eliminate likely false positives is the sign of a mature analyst. The next time you are hunting through data looking for evil, be wary when your eyes are drawn towards “beaconing” traffic.
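One cheap way to put that advice into practice is to run candidate destinations through a list of networks you have already vetted as benign before spending time on PCAP or host forensics. The sketch below does this with the standard `ipaddress` module; the allowlist entries are hypothetical examples standing in for whatever vetted ranges or reputation data your shop maintains, not a real feed.

```python
# Sketch of a first-pass triage filter: drop candidate "beacon"
# destinations that fall inside networks already vetted as benign.
# The allowlist below is hypothetical, not a real reputation feed.
import ipaddress

KNOWN_GOOD = [
    ipaddress.ip_network("184.108.40.0/24"),  # e.g. a vetted hosting range
    ipaddress.ip_network("172.16.16.0/24"),   # internal lab network
]

def needs_investigation(dst_ip: str) -> bool:
    """Return True if the destination is not in any vetted network."""
    addr = ipaddress.ip_address(dst_ip)
    return not any(addr in net for net in KNOWN_GOOD)

# 203.0.113.0/24 is a documentation range, used here as an
# "unknown" destination for illustration.
candidates = ["184.108.40.206", "203.0.113.77"]
to_review = [ip for ip in candidates if needs_investigation(ip)]
print(to_review)
```

A filter like this doesn’t prove anything is good; it just keeps you from re-investigating the same stock ticker every week, leaving your attention for the destinations that genuinely warrant a closer look.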