
Well, no, the main content and pagelets are received with the same connection.

The pagelets are simply streamed as they are generated to the browser, and placed in the document with JavaScript.
I developed a simple page framework recently. The core idea is to separate a page into several features, each of which is handled in parallel. The output of each feature is an HTML segment, which the framework then assembles according to a layout configuration. The first version is not perfect. If you're interested, have a look here: https://github.com/chennanfei/Moonlight
Changhao Jiang, Research Scientist at Facebook, describes a technique called BigPipe that contributed to making the Facebook site "twice as fast." BigPipe is one of several innovations, a "secret weapon," used to achieve the reported performance gains: "[BigPipe] reduces user perceived latency by half in most browsers." The exception was Firefox 3.6, where latency was reduced by approximately 50 ms - about a 22% reduction.
The motivation for BigPipe, and associated innovations:
Modern websites have become dramatically more dynamic and interactive than 10 years ago, and the traditional page serving model has not kept up with the speed requirements of today's Internet.
Taking inspiration from hardware (pipelining in microprocessors), the Facebook team used PHP and JavaScript (no changes to existing Web servers or browsers are required) to achieve a "fundamental redesign of the existing web serving process." The redesign involves breaking down the page serving process into eight distinct steps, some of which can be performed in parallel, and decomposing the page itself into sections called "pagelets." The server responds to the initial page request by returning an unclosed HTML document that includes the HTML head tag and the first part of the body tag. The head tag includes BigPipe's JavaScript library, which interprets the pagelet responses received later. In the body tag there is a template that specifies the logical structure of the page and the placeholders for the pagelets. The server then streams JSON-encoded objects (the pagelets) that include "all the CSS, JavaScript resources needed for the pagelet, and its HTML content, as well as some meta data."
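The serving model described above can be sketched in a few lines of Node-style JavaScript. Everything here is an illustrative assumption - the function names (pageHead, pageletChunk, onPageletArrive) and the payload shape are invented for this sketch and are not Facebook's actual API.

```javascript
// Unclosed document: head (with the BigPipe client library) plus a body
// template containing one placeholder div per pagelet. Note there is no
// closing </body></html> yet - the response stays open for streaming.
function pageHead(pageletIds) {
  var placeholders = pageletIds
    .map(function (id) { return '<div id="' + id + '"></div>'; })
    .join('');
  return '<html><head><script src="bigpipe.js"></scr' + 'ipt></head><body>' +
    placeholders;
}

// One streamed chunk per pagelet: a JSON-encoded object carrying the
// pagelet's HTML content and the CSS/JS resources it needs, wrapped in a
// script tag so the client library can place it as soon as it arrives.
function pageletChunk(pagelet) {
  return '<script>onPageletArrive(' + JSON.stringify(pagelet) + ');</scr' + 'ipt>';
}
```

A server using this sketch would flush pageHead() immediately, then one pageletChunk() per pagelet as each finishes generating, and finally the closing </body></html> tags.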
The increasing load-time latency of sophisticated Web pages is not a new issue, nor is the use of some form of pipelining to effect performance gains. Aaron Hopkins discusses Optimizing Page Load Time on Die.net, covering numerous factors, beyond the traditional page request life cycle, that can affect the latency of page loading. One interesting point in Aaron's post:
IE, Firefox, and Safari ship with HTTP pipelining disabled by default; Opera is the only browser I know of that enables it. No pipelining means each request has to be answered and its connection freed up before the next request can be sent. This incurs average extra latency of the round-trip (ping) time to the user divided by the number of connections allowed. Or if your server has HTTP keepalives disabled, doing another TCP three-way handshake adds another round trip, doubling this latency.
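Hopkins's arithmetic can be turned into a back-of-the-envelope model. The function below is a sketch of his reasoning, not code from his post, and the example numbers are illustrative assumptions:

```javascript
// Rough model of the latency described in the quote above. Without
// pipelining, each request on a connection must be answered before the next
// is sent, so N requests over C parallel connections cost roughly
// ceil(N / C) round trips; with keepalives disabled, each request also pays
// an extra round trip for the TCP three-way handshake, doubling the cost.
function fetchTimeMs(requests, connections, rttMs, keepalive) {
  var rttsPerRequest = keepalive ? 1 : 2; // extra handshake doubles latency
  var rounds = Math.ceil(requests / connections);
  return rounds * rttsPerRequest * rttMs;
}
```

For example, fetching 12 resources over 6 connections with a 100 ms round trip comes out to roughly 200 ms with keepalives and 400 ms without.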
Jiang did not indicate that BigPipe takes advantage of a browser's innate pipelining functionality, and implied that it does not when saying that no changes to existing servers or browsers were required. It would be interesting to know whether or not the BigPipe innovations will continue to be useful as browsers change - i.e., with the widespread implementation of HTML5.
Kensaku Komatsu created a demo (reported by The Zinger) that
... compares data flow in HTML5 Web Sockets on the one side versus XML HTTP Request on the other. When I ran it, the results were astounding: 565 milliseconds against 31444 milliseconds. Wow! The Web Sockets experience is 55 times faster, in part, because there is so much less unnecessary header traffic going over the wire.
The demo uses a form of pipelining, something that in HTTP is generally considered to be "dangerous," but it is not HTTP pipelining. The network traffic is made up of WebSocket frames, not HTTP requests and responses. It is explicitly controlled by the application author and is not subject to the problems of HTTP/1.1 pipelining. Because WebSockets can send and receive at any time, are directly controlled by the programmer, and are not subject to proxy interference, this pipelining ability is safe and should not be disabled.
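The "unnecessary header traffic" point can be illustrated with a toy calculation. The per-message byte counts below are assumptions chosen for illustration, not measurements from Komatsu's demo:

```javascript
// Every XMLHttpRequest repeats full HTTP request and response headers,
// while an established WebSocket connection adds only a few bytes of
// framing per message. Both constants are assumed typical values.
var XHR_HEADER_BYTES = 800; // assumed combined request/response headers
var WS_FRAME_BYTES = 6;     // assumed small-frame header plus client mask

function headerOverheadBytes(messages, perMessageBytes) {
  return messages * perMessageBytes;
}
```

For 1,000 small messages that works out to 800,000 bytes of repeated header traffic over XHR versus about 6,000 bytes of framing over a WebSocket - the kind of difference that helps explain the demo's results.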
Komatsu's demo ties together Facebook's innovations, the questions around HTTP pipelining, and the future capabilities of HTML5, especially WebSockets, and how they will eventually interact to increase Web site performance and minimize user-experienced latency.
