Earlier tonight (using “tonight” loosely) I attended a meetup that hosted an excellent presentation about scaling AngularJS applications, by a guy who obviously knows what he’s talking about. But this post is, more or less, not about that.

It’s about a comment that I made in response to a question from a co-attendee of the meetup. The question went something like this: “Why is there a performance gain in delivering multiple code files over the wire to the end user’s browser, versus just one, when you’re going from one server to one client? Shouldn’t a single transfer just max out their connection anyway?” My response: “[incomprehensible mumbling] Basically, Internet QoS sucks.”

That’s an oversimplification, but not as far from the truth as you’d expect.

And my statement holds true even when you remove other variables from the equation. One such variable: browsers cache at the file level, and changing a file invalidates only that file’s cached copy, so if you’ve got compartmentalized code there’s a good chance your users won’t have to re-download the entire app after one minor patch.

The issue lies in TCP, the old, reliable “bucket brigade” protocol that most web traffic rides on (well, a lot of it…VoIP, some video streaming, some BitTorrent and a few other things go over the “fire hose” that is UDP). But ignore the watery analogies for the moment, because the culprit behind slower single-stream transfers is TCP’s congestion control: the additive-increase/multiplicative-decrease (AIMD) algorithms that decide what happens when congestion or packet loss hits a TCP connection (e.g. the HTTP connection that delivered this web page). The net result of those algorithms is that, once you hit the capacity of the link between you and the server (which may well be less than the capacity of either your connection or the connection on the other end…ahem, TWC and YouTube), your data rate graph follows a sawtooth pattern: the sender ramps up steadily until a packet drops, then cuts its rate in half and starts climbing again.
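If it helps to visualize, here’s a toy sketch of that additive-increase/multiplicative-decrease loop (TypeScript, with invented numbers; this is a cartoon of congestion control, not a model of any real TCP stack):

```typescript
// Toy AIMD (additive increase, multiplicative decrease) loop.
// All numbers are invented for illustration.
const linkCapacity = 100; // packets per round trip the link can carry

function simulateFlow(rounds: number): number[] {
  const rates: number[] = [];
  let cwnd = 1; // congestion window, in packets per round trip
  for (let i = 0; i < rounds; i++) {
    cwnd = cwnd > linkCapacity
      ? cwnd / 2   // loss detected: back off multiplicatively
      : cwnd + 1;  // no loss: keep probing for bandwidth, additively
    rates.push(Math.min(cwnd, linkCapacity));
  }
  return rates;
}

console.log(simulateFlow(200).join(" ")); // climb, halve, climb again: a sawtooth
```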

If you make multiple connections to the same server, however, each of those connections has its own, slightly offset, sawtooth pattern. The result is that you use more of your pipe more of the time, rather than spending long stretches backed off after hitting the link’s capacity. This is particularly important on very high-speed connections (say, my 50 Mbps Time Warner Cable line) and on connections with variable speed or reliability (like a 3G or 4G cellular data network). So, it’s almost universally important, which is why sites like Speedtest.net (which also provides the speed test software for most ISPs) use multiple connections to test folks’ internet speeds. It’s also why modern browsers (Firefox 3+, Chrome…heck, even IE8) open several connections per host, typically around six. It really does make things load more quickly…up to a point…diminishing returns and all that.
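Extending the same toy model to several flows sharing one link shows why the offset sawtooths matter (again, invented numbers, and the “loss hits the biggest flow” rule is a gross simplification):

```typescript
// Toy model: N AIMD flows share one link. When the link overflows, only the
// unlucky flow (here, simplistically, the biggest one) halves its window,
// so the aggregate never falls as far as a single flow's sawtooth does.
const linkCapacity = 100; // packets per round trip, invented for illustration

function aggregateUtilization(flows: number, rounds: number): number {
  const cwnds: number[] = new Array(flows).fill(1);
  let delivered = 0;
  for (let r = 0; r < rounds; r++) {
    const total = cwnds.reduce((a, b) => a + b, 0);
    delivered += Math.min(total, linkCapacity);
    if (total > linkCapacity) {
      const victim = cwnds.indexOf(Math.max(...cwnds));
      cwnds[victim] /= 2; // one flow backs off...
    } else {
      for (let f = 0; f < flows; f++) cwnds[f] += 1; // ...the rest keep probing
    }
  }
  return delivered / (rounds * linkCapacity); // fraction of the pipe actually used
}

console.log(aggregateUtilization(1, 1000).toFixed(2)); // one sawtooth: noticeably below 1
console.log(aggregateUtilization(4, 1000).toFixed(2)); // four offset sawtooths: closer to 1
```

The single flow spends much of its time recovering from its own back-offs; with four flows, only one backs off at a time, so the aggregate stays near the link’s capacity.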

Before I conclude, a few notes:

  1. Setting up TCP connections, and HTTP requests over them, involves real overhead (a handshake round trip, plus request headers). So, while three or four distinct chunks of JavaScript heading over the wire may be better than one, twenty is almost certainly not better than four (unless you’re delivering a huge app over a really fast connection); the first sketch after this list puts toy numbers on that trade-off. Multiple connections also increase server load, though you have to serve the files to the user at some point anyway…six of one, a half-dozen of the other.
  2. If you have a bunch of resources that your web users need to load no matter what (e.g. images), serving them from a domain other than the one hosting your main website content gives them their own simultaneous connection pool. So if a browser allows six concurrent connections per host by default, putting scripts at scripts.example.com, regular content at www.example.com and images at images.example.com could net you eighteen simultaneous connections, assuming the web browser doesn’t have a hard cap on overall simultaneous connections (they do); the second sketch after this list shows what that sharding looks like. You end up with diminishing returns here too, so for static content other speedups can also be pursued, such as using a CDN.
  3. Using multiple connections to download a single file isn’t new by any stretch; download accelerators were doing it back in the dialup days. It worked well then, too.
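To put rough numbers on point 1, here’s a crude cost model (every constant is invented; real handshake costs, pipe capacities, and connection caps vary wildly):

```typescript
// Crude cost model for point 1. Every number here is invented for
// illustration; real setup costs and throughput vary wildly.
const handshakeMs = 100;  // assumed TCP + HTTP setup cost per connection
const totalKb = 400;      // assumed combined JavaScript payload
const linkKbps = 600;     // assumed total pipe capacity
const maxParallel = 6;    // typical per-host connection cap in today's browsers

function estimateLoadMs(chunks: number): number {
  const parallel = Math.min(chunks, maxParallel);
  // From the sawtooth discussion: more flows waste less of the pipe,
  // but the gain flattens out fast.
  const utilization = 1 - 1 / (4 * parallel);
  const waves = Math.ceil(chunks / parallel); // chunks beyond the cap queue up
  return waves * handshakeMs + (totalKb / (linkKbps * utilization)) * 1000;
}

for (const n of [1, 4, 20]) {
  console.log(`${n} chunk(s): ~${Math.round(estimateLoadMs(n))} ms`);
}
// In this toy model, 4 chunks beat 1, but 20 lose to 4: the extra
// handshakes outweigh the parallelism once the connection cap is hit.
```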
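And for point 2, domain sharding is mostly a URL-naming exercise. A hypothetical helper (the example.com hostnames are placeholders) might look like this:

```typescript
// Hypothetical helper for point 2: route each asset type through its own
// hostname so the browser grants it a separate per-host connection pool.
// The example.com hostnames are placeholders.
const shards = {
  script: "scripts.example.com",
  image: "images.example.com",
  page: "www.example.com",
} as const;

function assetUrl(kind: keyof typeof shards, path: string): string {
  return `https://${shards[kind]}/${path}`;
}

console.log(assetUrl("image", "logos/header.png"));
// -> https://images.example.com/logos/header.png
```

Keep in mind that each extra hostname costs a DNS lookup and its own handshakes, which is part of why the returns diminish.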

I still stand behind my tl;dr answer from the meetup: internet QoS sucks. So, if your files are large enough, optimize your site for multiple simultaneous browser connections, even if they’re all to the same host. Then run your metrics; they’ll most likely bear out the change.

P.S. If you think this post is useful information, tweet/retweet/share it. If not, comment on why. Thanks either way.