Google Chrome: See No Evil, Do No Evil – An Internet Performance Perspective

The intertubes of the Web are abuzz with talk of the new, open-source Google Chrome browser [two articles here and here]. I will not presume to wade into the debate of whether it is necessary, or what strategic business goals Google has set that rely on having its own browser. I will limit my comments to the area of Web performance.

Open-Source Browser: Ours or Theirs?

When I read that Google Chrome was an open-source browser, my first thought was: is it their own, or a re-branded Firefox? No one knows at this point, but the answer will have a direct effect on how the browser performs, and on how extensible it will be.

HTTP Standards

Unlike the rendering standards, the HTTP standard sets out how a browser uses the underlying TCP stack. MSIE6/7 have very broken implementations, and MSIE8 builds on them by raising the number of connections per host to 6, up from the 2 set out in RFC 2616.
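The performance stakes of that connection limit are easy to sketch. The numbers below are purely illustrative (a made-up object count and a made-up per-fetch time, not measurements of any real browser), but they show why vendors are tempted to exceed the RFC 2616 limit:

```python
import math

def fetch_time_ms(objects, conns, per_fetch_ms=200):
    # Back-of-the-envelope model: `objects` equally sized resources
    # from one host, spread over `conns` parallel connections, each
    # fetch taking `per_fetch_ms` milliseconds. Each connection then
    # serves ceil(objects / conns) fetches in sequence.
    return math.ceil(objects / conns) * per_fetch_ms

# 30 objects from one host: RFC 2616's 2 connections vs MSIE8's 6.
print(fetch_time_ms(30, 2))  # 3000 ms
print(fetch_time_ms(30, 6))  # 1000 ms
```

Tripling the connection count triples the parallelism for the page author, but it also triples the per-client load on the server, which is exactly why the RFC set the limit so low.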

Firefox can be configured to bend these limits as well, but by default it plays by the standard, adding the option of HTTP pipelining to its mix of persistent HTTP connections.
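For reference, these knobs live in Firefox's about:config and can be set in a user.js fragment like the one below (preference names as they exist in Firefox 2/3; the values shown are just an example of a tweaked, non-default configuration):

```
// Enable HTTP pipelining (off by default).
user_pref("network.http.pipelining", true);
user_pref("network.http.pipelining.maxrequests", 8);
// Raise the per-host persistent connection limit above RFC 2616's 2.
user_pref("network.http.max-persistent-connections-per-server", 6);
```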

It will be VERY interesting to see how Google Chrome comes configured out of the box, and how much control users have over the HTTP behaviour of this new browser.

(X)HTML/CSS/JS Standards

This area is a mess. No browser implements these standards in a way that is completely consistent with the written text, and page designers have to use a variety of page-testing products (such as BrowserCam) prior to release to ensure that their design is somewhat presentable in all browsers on all platforms.

The handling of Javascript will be crucial in this new browser, as so much of the new Web is built on applications that are almost completely Javascript-driven.

I am sure that there will be sites that are completely mangled by the new browser, but, knowing Google, what ships will effectively be a 2.0 release, the 1.0 release having been used within Google for a while now to test it under real-world conditions.


Caching

As a few sites in the world do use cache-control headers properly, it will be interesting to see how a browser created by one of the major ad-serving and search providers on the Web tracks page objects. Will it follow explicit/implicit caching rules? Or will it impose a heavy penalty on bandwidth by downloading objects more frequently than other production browsers do?
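The explicit rule in question is straightforward. A minimal sketch of the RFC 2616 freshness check a well-behaved browser cache performs before re-downloading an object might look like this (illustrative only; a real cache also handles Expires, Vary, heuristic freshness, and revalidation):

```python
def is_fresh(cache_control, age_seconds):
    """Return True if a cached response is still fresh per max-age."""
    for directive in cache_control.split(","):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            return age_seconds < int(directive.split("=", 1)[1])
        if directive in ("no-cache", "no-store"):
            return False
    return False  # no freshness info: go back to the server

print(is_fresh("public, max-age=3600", 120))  # True  -> serve from cache
print(is_fresh("no-cache", 0))                # False -> revalidate
```

A browser that ignores a valid max-age and re-fetches anyway is the bandwidth penalty described above, multiplied across every object on every page.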

Proxies, and the Debacle of the Google Web Accelerator

Back in 2005, Google launched a badly designed and highly flawed product called the Google Web Accelerator. This product proxied Web traffic through the Google network and allowed the company to develop a pattern of user browsing habits and search selections that would let it better target its ad products.

I have a great fear that this will be an integrated part of the Google browser project. If it is, it should be a configurable option, not an out-of-the-box default.

I am sure that there will be more than a few performance conversations around the Google Chrome browser in the weeks ahead. I look forward to hearing what the community has to say about this new addition to the browser wars.
