This doesn't do what you think it does, and it can actually degrade performance.
HTTP pipelining is the act of issuing multiple HTTP requests over a single persistent connection without waiting for each response, and then receiving the responses back, in the order the requests were sent. In other words, rather than requesting a page element, getting it back, requesting the next one, and so on (within your HTTP/1.1-compliant persistent connection, of course), you fire off all your requests up to some upper limit, and then wait for them to come back one after another.
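To make that concrete, here's a rough sketch (in Python, with a made-up host and paths) of what a pipelined exchange looks like on the wire - both requests are written back-to-back before any response is read:

```python
# Sketch of HTTP pipelining at the wire level. The host and paths
# here are hypothetical; nothing is actually sent over a network.

def pipelined_request_bytes(host, paths):
    """Build the raw bytes a client would write when pipelining GETs."""
    requests = []
    for path in paths:
        requests.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: keep-alive\r\n"
            f"\r\n"
        )
    # All requests are concatenated and written at once -- the client
    # does not wait for the first response before sending the second.
    return "".join(requests).encode("ascii")

wire = pipelined_request_bytes("www.example.com", ["/index.html", "/logo.png"])
print(wire.decode("ascii"))
```

The server is then obligated to send the responses back in exactly this order, which is where the trouble starts.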
So there are several problems here - first, if one of the elements requested in the pipeline is slow coming down, such as data coming back from a database query on the server, or a URL pointing to a resource that is offline or has a bad DNS entry, then the entire pipeline stalls. You will hit that one bad or slow element, and no more data will come down until the bad request times out.
Even a large graphic in the pipeline that takes a couple of seconds to download will cause all the other HTTP elements that could otherwise be downloading to queue up behind it.
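A toy model makes the head-of-line blocking obvious (the download times in milliseconds are made up, and there's no real networking here) - on a pipeline, each element finishes only after everything queued ahead of it has finished:

```python
# Toy model of head-of-line blocking on a pipeline. Durations are
# hypothetical per-element download times in milliseconds.

def pipelined_finish_times(durations):
    """Responses arrive strictly in request order, so each element
    waits for everything ahead of it in the pipeline."""
    finish, elapsed = [], 0
    for d in durations:
        elapsed += d
        finish.append(elapsed)
    return finish

def independent_finish_times(durations):
    """With one independent connection per element (ignoring bandwidth
    sharing for simplicity), each finishes on its own schedule."""
    return list(durations)

durations = [100, 3000, 100, 100]   # second element is a slow graphic

print(pipelined_finish_times(durations))    # [100, 3100, 3200, 3300]
print(independent_finish_times(durations))  # [100, 3000, 100, 100]
```

The two small elements behind the slow graphic finish after ~3.2 and ~3.3 seconds on the pipeline, even though on their own they'd be done in a tenth of a second.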
Second, pipelining support can be spotty at best - you'll find websites that weren't designed with pipelining in mind, hosted on servers, content distribution networks, and load balancers that don't correctly implement the specification either.
Finally, all modern browsers support more than one simultaneous persistent HTTP connection to the server. This allows them to issue multiple (usually between 4 and 6) HTTP requests simultaneously and independently of one another, so that once they've requested the initial HTML document, they can immediately request, and receive in parallel, the multiple elements that make up the page.
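A quick sketch of that browser-style behavior, using a pool of 4 workers and a hypothetical fetch function whose sleep stands in for network transfer time:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for fetching one page element; the sleep
# simulates the transfer time on an independent connection.
def fetch(path, seconds):
    time.sleep(seconds)
    return path

elements = [("/a.css", 0.2), ("/b.js", 0.2), ("/c.png", 0.2), ("/d.png", 0.2)]

start = time.monotonic()
# max_workers=4 mirrors the usual browser limit of 4-6 connections.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda e: fetch(*e), elements))
wall = time.monotonic() - start

# Four 0.2 s transfers complete in roughly 0.2 s of wall time, not
# 0.8 s, because none of them queues up behind the others.
print(results, round(wall, 1))
```

No single slow element can stall the others, which is exactly the failure mode pipelining reintroduces.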
IE, Safari, Chrome, and Firefox all come with pipelining disabled by default, for good reasons - better compatibility and reduced risk of unpredictable performance decreases caused by stalled pipelines.
__________________
-Scott