Sunday, August 14, 2011
When discussing network latency, most people think only about the two endpoints of a connection; we aren't often called upon to evaluate latencies between intermediate devices along the network path. Well, I was recently called upon to do just that, and--between documentation of the network path and knowledge of the protocols in use--I was able to take data captured at one endpoint and make reliable estimates of some intermediate latencies. Consider the following packet capture, which illustrates the initiation of an SSL connection through an HTTP proxy:
Now, the first three packets--the TCP handshake--give us a straightforward measurement of the network latency between the client and the HTTP proxy; in this example, it's roughly 12ms. We then see our client issue an HTTP CONNECT request to establish a tunneled TLS connection to the true destination; here's where we put our higher-level protocol knowledge to good use. We know that, per the HTTP specification's treatment of the CONNECT method, the proxy is not allowed to return a 2xx code until its connection to the remote endpoint is complete. So the elapsed time between the proxy's ACK of our CONNECT request and its "200 Connection established" response is a reasonable measurement of the latency between the proxy and the device handling the "other end" of the TLS connection. In this case, the latency we "see" is 132ms; however, we know that 12ms of that is the client-to-proxy latency, so we can estimate the latency between the proxy and the "other end" of the TLS connection at 120ms. Taking this a step further, the Client Hello/Server Hello exchange indicates a latency of 128ms (140ms raw minus the 12ms client-to-proxy latency).
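The arithmetic above is easy to script once you've pulled timestamps out of the capture. Here's a minimal sketch; the timestamp values are hypothetical, chosen only to mirror the 12ms/120ms/128ms figures discussed above, not taken from a real trace:

```python
# Hypothetical packet timestamps (in seconds) pulled from a capture
# of an SSL connection being tunneled through an HTTP proxy.
packets = {
    "SYN": 0.000,
    "SYN-ACK": 0.012,            # proxy answers the TCP handshake
    "CONNECT": 0.013,            # client asks proxy to tunnel
    "CONNECT-ACK": 0.014,        # proxy ACKs the CONNECT request
    "200 Connection established": 0.146,
    "Client Hello": 0.147,
    "Server Hello": 0.287,
}

# Client <-> proxy latency: the SYN / SYN-ACK round trip.
client_proxy = packets["SYN-ACK"] - packets["SYN"]

# Proxy <-> destination estimate: CONNECT-ACK to the 200 response,
# minus the client-to-proxy leg we already measured.
proxy_dest = (packets["200 Connection established"]
              - packets["CONNECT-ACK"]) - client_proxy

# Cross-check using the TLS Client Hello / Server Hello round trip.
tls_rtt = (packets["Server Hello"] - packets["Client Hello"]) - client_proxy

print(f"client <-> proxy:             {client_proxy * 1000:.0f} ms")
print(f"proxy <-> dest (via CONNECT): {proxy_dest * 1000:.0f} ms")
print(f"proxy <-> dest (via TLS):     {tls_rtt * 1000:.0f} ms")
```

Swap in real timestamps from your own capture and the same three subtractions fall out directly.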
To be fair, these raw numbers include a certain amount of overhead, in that the proxy burns up a few milliseconds managing the transactions on both ends; still, taking this measurement over a collection of connections to the same destination can give us a reliable AND reasonable estimate of "effective latency" between not only points A (client) and B (HTTP proxy), but also between points B (proxy) and C (SSL termination at the destination).
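One simple way to smooth out that per-transaction proxy overhead is to take a robust aggregate over several connections. A quick sketch, using hypothetical per-connection estimates (each derived from the CONNECT-ACK-to-200 interval as described above):

```python
import statistics

# Hypothetical proxy-to-destination estimates (ms), one per connection
# to the same destination; the spread reflects variable proxy overhead.
samples_ms = [120, 118, 125, 131, 119, 122, 117]

# The median is robust to the occasional connection where the proxy
# burned extra milliseconds on transaction handling.
effective_latency = statistics.median(samples_ms)
print(f"effective proxy <-> destination latency: {effective_latency:.0f} ms")
```

The median (or a low percentile) gets you closer to the true wire latency than a mean, since proxy overhead only ever inflates the numbers.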
You can't do this in every case, of course, but the lesson is simple: the more you know and understand about the higher-level protocols (like HTTP and SSL/TLS), the more valuable your network analysis skills become. Go have fun with it! If you don't have Wireshark, GO GET IT NOW...