Patrick Meenan

Welcome!

This is my personal blog and mostly contains my random thoughts about web performance, development, browsers or whatever else I might be thinking about at the time.

Latest Posts

Shipping jQuery and React Frameworks with Chrome

Should we ship jQuery, React and other popular frameworks with browsers so sites don’t have to re-download the same frameworks over and over?

Some background

For years, web performance advocates have casually suggested that browsers should “just ship jQuery” or other popular frameworks to avoid the need for every site to force users to re-download identical library code (there’s a recent WHATWG discussion on it here).

However, this concept has historically faced several fundamental hurdles.

First, there is a massive variety of framework versions in active use across the web, making it almost impossible to select a single “canonical” version.

Second, sites must be able to react quickly to security vulnerabilities, and being “locked” to a browser-shipped version could seriously hinder necessary updates.

Finally, these frameworks are served from a wide array of domains and are frequently bundled with site-specific code, which completely breaks simple URL-based caching.

Proposal: A Web-Wide Compression Dictionary

Compression dictionary transport offers an interesting possible solution to this problem. Instead of shipping the library code itself, we could ship a versioned compression dictionary (e.g., a “2026 web” dictionary) that includes common frameworks like React and jQuery. This is basically the modern alternative to the old “Built-In Web Libraries” approach.

Unlike the whole bundling approach—which struggled with the sheer variety of library versions and the risk of making certain versions “sticky” and slowing down security updates—a compression dictionary provides a wonderfully transparent mechanism. It allows servers to compress their unique resource bundles against the shared dictionary, gaining cross-site sharing benefits without requiring developers to change their HTML or worry about being locked into a specific binary version. The dictionary natively supports versioning and avoids the privacy risks or other concerns associated with traditional shared library caching schemes.

Since libraries tend to change pretty incrementally over time, a single version of jQuery, React, or other commonly used code can actually compress other versions of the same library really well, eliminating the need to match a site’s specific version.
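As a rough illustration of why that works, here’s a minimal sketch using Python’s stdlib zlib, whose preset-dictionary (zdict) support is the same basic mechanism as the external dictionaries in Brotli and Zstandard; the two “library versions” are made up for the demo:

```python
import zlib

# Two made-up "versions" of the same library: mostly identical code with a
# small incremental change between releases.
v1 = b"".join(b"function fn%d(x){return x*%d+%d}\n" % (i, i, i * i) for i in range(300))
v2 = v1.replace(b"fn7(", b"fn7b(") + b"function added(x){return x+1}\n"

# Compress v2 on its own...
plain = zlib.compress(v2, 9)

# ...and again with v1 supplied as the preset dictionary (the same idea as a
# Brotli/Zstandard external dictionary, just with a much smaller window).
comp = zlib.compressobj(9, zdict=v1)
with_dict = comp.compress(v2) + comp.flush()

# The dictionary-compressed copy should be far smaller, and still round-trips.
decomp = zlib.decompressobj(zdict=v1)
assert decomp.decompress(with_dict) == v2
print(len(plain), len(with_dict))
```

Because v2 is almost entirely made of byte runs that already exist in v1, the compressor can emit long back-references into the dictionary instead of re-encoding the content.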

Even better, the proposal leverages the existing Compression Dictionary Transport mechanism and the Available-Dictionary request header for seamless backward compatibility and easy deployment. The browser would just advertise the web-wide dictionary as being available when a better, content-specific dictionary is not.
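To make the mechanism concrete, here’s a hedged sketch of what the exchange might look like with the existing Compression Dictionary Transport headers (the hash value is illustrative; `dcb` and `dcz` are the dictionary-compressed Brotli and Zstandard content encodings):

```http
GET /bundle.js HTTP/1.1
Host: example.com
Accept-Encoding: gzip, br, zstd, dcb, dcz
Available-Dictionary: :pZGm1Av0IEBKARczz7exkNYsZb8LzaMrV7J32a2fFG4=:

HTTP/1.1 200 OK
Content-Encoding: dcb
```

A server that doesn’t recognize the advertised dictionary just ignores the header and responds with normal Brotli or gzip, which is what makes the deployment backward compatible.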

Methodology: Building the Dictionary

So how do we build it? The 50MB dictionary was constructed by analyzing massive amounts of public web data pulled as part of the HTTP Archive crawl. For this run, the crawl was updated to parse all of the JavaScript it encountered, extract each top-level comment and function block, and store them in the crawls_staging.script_chunks table along with the hash of the payload and the URL it was served from.

(The code for generating and testing the dictionary is up on GitHub in the web-dictionary project).

We counted the unique occurrences of those hashes across different URLs and pulled the script chunks that were seen on at least 10,000 different URLs. That yielded around 10,300 highly pervasive script or comment blocks. A deduplication pass was then used to ensure that similar functions—such as those across different versions of the same library—were compressed against each other. This was purely to minimize the dictionary size while maximizing utility. The resulting dictionary is around 50MB and works with both Brotli and ZStandard.
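A minimal in-memory sketch of that selection step (the real pipeline ran as queries over the crawls_staging.script_chunks table in BigQuery; the toy threshold here stands in for the 10,000-URL cutoff):

```python
from collections import defaultdict

# Toy stand-in for the real 10,000-distinct-URL cutoff.
THRESHOLD = 3

# (chunk_hash, url) pairs, as if emitted by the JavaScript parser.
observations = [
    ("hash_jquery_core", "https://a.example/app.js"),
    ("hash_jquery_core", "https://b.example/main.js"),
    ("hash_jquery_core", "https://c.example/site.js"),
    ("hash_jquery_core", "https://c.example/site.js"),  # same URL, counted once
    ("hash_one_off_fn", "https://a.example/app.js"),
]

# Count *distinct* URLs per chunk hash.
urls_per_chunk = defaultdict(set)
for chunk_hash, url in observations:
    urls_per_chunk[chunk_hash].add(url)

# Keep only chunks pervasive enough to earn a spot in the shared dictionary.
pervasive = {h for h, urls in urls_per_chunk.items() if len(urls) >= THRESHOLD}
print(pervasive)  # {'hash_jquery_core'}
```

The deduplication pass described above would then run over the surviving chunks before concatenating them into the final dictionary.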

The tested dictionary contains a lot of the typical boilerplate copyright blocks as well as the frameworks you’d normally expect to see out there (jQuery, jQueryUI, React, Preact, Angular, etc.), plus a lot of underlying code that is widely reused.

Methodology: Testing the Dictionary

To actually test the effectiveness of this beast, I pulled the list of script and HTML requests that were loaded by the top 100,000 pages from the March HTTP Archive crawl. That resulted in ~3 million unique URLs.

I then fetched the URLs independently to keep BigQuery costs in check (even though the HTTP Archive has the original bodies) and re-compressed them twice: once with Brotli level 11, and once with Brotli level 11 plus the 50MB dictionary. The original encoded size as-served from the origin was also logged, and then the relative sizes were compared for analysis.

Experimental Results

I stopped the processing at ~400k URLs (70% scripts, 25% HTML) because the data converged really quickly and wasn’t changing as more URLs were processed.

Here’s how the savings looked:

Bar chart of framework dictionary compression savings
Framework Dictionary Compression Savings

Script Metrics

  • Brotli 11 Savings over Original: saved 16% (11 KB).
  • Brotli 11 + Dict Savings over Original: saved 29% (15 KB).
  • Brotli 11 + Dict Savings over Brotli 11: saved 15% (4 KB).

HTML Metrics

  • Brotli 11 Savings over Original: saved 35% (8 KB).
  • Brotli 11 + Dict Savings over Original: saved 55% (9.5 KB).
  • Brotli 11 + Dict Savings over Brotli 11: saved 27% (1.7 KB).

Conclusion

While the inclusion of a 50MB framework dictionary does offer some really great compression benefits—particularly for HTML and certain classes of scripts—the overall conclusion is that it’s just not worth the effort yet.

Managing a 50MB static dictionary on every client device, and growing adoption of server-side compression against that same dictionary, would be a long, drawn-out process.

Given that just using Brotli 11 compression alone already provides significant savings over what most websites are currently serving (and tuning their compression is exactly what we’d have to do for the dictionary support anyway), the most effective path forward is to encourage broader adoption of Brotli 11 well before we start introducing the overhead of a browser-shipped web-wide dictionary.

Does Gemini Create Fast Websites?

I’ve used AI to generate a few websites over the last month or so and not too long before that I was helping my son create a website on Wix so I thought it might be interesting to see how they stack up against each other in terms of performance. I was REALLY motivated when I saw the new Stitch app had a “create react app” button (ok, petrified probably describes my reaction better than motivated).

The contenders

The first site I helped with was a Wix website for my son’s horse transport business. I directed him there since he could build most of it himself and I knew the Wix team works hard to make their sites fast (within the constraints of running a giant self-publishing framework). We spent a fair bit of time on getting the layout to “mostly work” but the grid system is pretty limiting so we ended up with a design that was “good enough” rather than “great”. He did most of the work though so I’m happy with that part of things.

The next site was one I promised to help put together for a contractor that we use a lot. I’ve been meaning to play with 11ty and other frameworks for a while, but the level of effort has always been more than I wanted to take on and I have zero design skills (less than zero, if that’s possible). I figured it was a good time to see how far AI has come since I last tried it with Gemini 3.0 (which didn’t go great but was decent for spec work), and both Gemini 3.1 and Antigravity had just come out so it felt like the perfect time to give it a shot.

This was a breath of fresh air and probably the best experience I’ve ever had with a dev tool for building websites. I basically just told it the pages that I wanted to include, some details about the business, the features I wanted the website to have, and that I wanted it to be statically hosted (and preferably served from Cloudflare Pages). It built a beautiful, functional site in just a few minutes. Some of the things didn’t work quite right but I just told it what needed to be fixed and it took care of it. It had used some stock photos served from third-party sites as placeholders and I had it use Nano Banana Pro to generate better ones (and to self-host them) and had it generate photo galleries using actual photos he provided me.

I had to ask it to automate resizing and optimizing of the images but, to this day, I haven’t had to touch a line of the source code for the site. I was even lazy enough to ask the AI to change some text on the page instead of doing it myself. It created a nicely responsive site that performed well and took just a few hours of effort from end to end. It picked Astro as the framework to use and it may need some more tweaks to fix things as they come up (like I haven’t checked how accessible it is yet) but it’s WAY better than I could have done on my own.

The third site I built was to migrate this blog off of blogger (again, with Antigravity and Gemini 3.1 pro thinking). I won’t get into all of the details because I already wrote a post about it but this time I was more specific about wanting it to generate a fast SSG website that defaults to plain HTML and only uses JS as a progressive enhancement. I asked it to build it so that I could author articles in markdown and for it to migrate all of the existing content.

Again, a few hours of work produced something I’m really happy with and is way better than I could have done by myself. I’ve had to go back a few times to ask it to add features (like OG meta tags for post embedding, an RSS feed, etc) but that was all done through prompts. I write all of the markdown files for the articles by hand but I haven’t touched a line of the framework code that turns it into the blog (Astro again).

The final one I wanted to look at was a quick test using Stitch to generate a React app. I spent maybe five minutes on it because it’s just for testing purposes, and I asked it to scrape the content from the contractor website and redesign it. I’m not the target audience for the tool so I didn’t spend any time on the design or the functionality of the app, other than to make sure it worked, because I was mostly interested in seeing how the code it generated performed.

How did they do?

I was really impressed with both of the Astro sites that Antigravity generated. They handily beat the performance of the Wix and Stitch-generated sites and served plain HTML with a side of progressive enhancement.

Side-by-side filmstrip of the four websites loading
Side-by-side filmstrip of the four websites loading

To be fair, they aren’t all serving the same content and the Wix page has a massive background image but they’re relatively comparable and the waterfalls clearly show that the performance differences aren’t because of the content.

Wix

The Wix site is surprisingly fast given how much it loads, but even for the above-the-fold content it is pulling resources from 4 different domains and relying on a fair bit of JS:

Wix waterfall
Wix waterfall

That is the trimmed-down version just to get to the LCP image. The full waterfall is 174 requests and 2MB of content, most of which is JavaScript (11MB uncompressed JS).

Antigravity - Contractor website

This, I was thrilled with:

Antigravity contractor waterfall
Antigravity contractor waterfall

It looks like it loads ~600 bytes of JS to help with email address decoding (from the mailto: link, I assume) and pulls fonts from Google Fonts, but otherwise it’s just the HTML, CSS and images for the content itself. ~200KB all-in and 9 requests, most of that from the fonts and images. At some point I should have it inline that JavaScript and self-host the fonts, but I’m thrilled with the result.

Antigravity take two - My blog

I picked an article page with a visible image to make it fair(er) but it knocked it out of the park:

Antigravity blog waterfall
Antigravity blog waterfall

No JavaScript, no custom fonts, just the HTML, CSS and images for the content itself. The one thing that did jump out at me was the late loading of the images, including the hero image. It looks like Astro defaults to loading="lazy" on all images, so I just went back and asked Gemini to fix that.

My first attempt was a little too heavy-handed and I removed lazy loading from ALL images, which made things a bit worse since it would load all of the images in parallel (though at least starting sooner):

Antigravity blog waterfall - take two
Antigravity blog waterfall - take two

The third time was the charm:

Antigravity blog waterfall - take three
Antigravity blog waterfall - take three

Now it loads the first image in any blog post eagerly, in parallel with the CSS, and then lets native lazy-loading take care of any subsequent article images.
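The end result is roughly the following (a plain HTML sketch of the generated output; the actual markup comes from Astro components and the paths are made up):

```html
<!-- First image in the post: fetch eagerly, in parallel with the CSS -->
<img src="/images/hero.jpg" alt="Post hero" loading="eager">

<!-- Any subsequent article images: let native lazy-loading defer them -->
<img src="/images/screenshot-1.jpg" alt="Screenshot" loading="lazy">
```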

The visual experience isn’t meaningfully different since it still has to go through the layout/style/render pipeline before displaying it to the screen but the browser now has what it needs to display the above-the-fold content ~40% sooner (and I feel better about myself).

Stitch React app

Yeah, it’s about what I expected. See if you can tell when the app hydrates:

Stitch waterfall
Stitch waterfall

It doesn’t actually display anything on the screen until well after the waterfall finishes (while the client-side rendering is being applied). It doesn’t help that every one of the React requests is redirecting from a generic major version to a specific patch version. The actual weight of the JS isn’t as bad as I worried it might be (11 requests, ~625KB compressed) but it’s ALL render blocking.

Here’s hoping people take the MCP or project-brief export options and feed them into something else to build the actual websites, rather than going straight to production with the built-in React app. It’s great for getting a feel for how the app would work, but not much else.

Conclusion

I’m really excited for where this journey is taking us. I think the ease-of-use for generating websites directly instead of using a CMS is going to be a game-changer and it will be interesting to see how the market evolves.

That said, the tools still need a lot of help and are “tools”. They remind me of a conversation I had with a product manager years ago about their site performance. He had assumed that the dev team would “just make it fast” and that it wasn’t something he should have asked them to focus on. It feels like AI is at a similar place and our role is basically the same. It CAN make things fast, responsive, accessible, etc., but it might not unless you specifically tell it to.

The Coming Software Revolution

The actual writing of code hasn’t been the bulk of development for a while, particularly in larger companies with larger codebases and existing users. The more existing users there are, the more process there tends to be around protections and procedures. These exist to ensure you aren’t breaking existing behaviors, risking security (or press), or causing regressions in some random metric added to the launch process over the product’s lifetime.

This is all in place for good reason, but it also significantly reduces the velocity of development for the product involved. For extremely complex products (like, say, a web browser) this isn’t necessarily a risk to the product itself, because the cost for a new competitor to build a full browser from scratch has been astronomical. This is why there has been so much consolidation in browser engines and why most new “browsers” are just reskinned versions of Chromium.

That has all changed in the last year or so as LLMs have gotten much better at writing code.

Web browsers are prime candidates for disruption

I have a lot of history in the web browser space so it’s one that I’m most familiar with (though I also have plenty of exposure to other huge projects with large legacy user bases).

A clean-sheet web browser is a perfect candidate for being built with AI:

  • Virtually every part of the stack is heavily specified and documented (WHATWG, W3C, IETF, TC39, etc.)
  • There are at least 3 modern, independent open-source rendering engines (Blink, Gecko, WebKit) that can be used as a reference and for validation.
  • There are multiple open-source implementations of JavaScript (V8, SpiderMonkey, JavaScriptCore) that can be used as a reference and for validation.
  • There are interop tests for testing common web features.
  • There are tests within each browser implementation for their implementations of web features.
  • There are billions of pages on the web that can be used to test against both existing implementations and any new ones.

If you are getting into the market or need a web rendering engine for your product (embedded or otherwise), are you better off building a new one from scratch that is purpose-built for your needs or trying to shoehorn an existing engine into your product?

I’d hazard a guess that for most use cases the answer is now (or will soon be) “build a new one from scratch”.

What would be involved in building a new web browser?

Browsers have some pretty clear components that basically stand alone and have well-defined interfaces into the other parts of the system:

Components of a web browser, including the UI, rendering engine, javascript engine, networking stack, etc.
Components of a web browser, including the UI, rendering engine, javascript engine, networking stack, etc.

For the most part, you can break each of those pieces into a stand-alone project once the interfaces are defined and iterate on building them independently.

For each component, you can iterate on an implementation with an LLM autonomously:

  • Define the interfaces to other parts of the system.
  • Build isolated tests against existing engines to use as a baseline point of comparison.
  • Use the huge corpus of existing tests as an automated feedback loop.
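That loop is simple enough to sketch; `generate` and `run_test_suite` are hypothetical stand-ins, not a real agent API:

```python
# Toy sketch of the autonomous feedback loop described above: generate an
# implementation, run the test corpus against it, and feed failures back
# until the suite passes (or we run out of rounds).
def iterate(generate, run_test_suite, spec, max_rounds=50):
    code = generate(spec, feedback=None)
    for _ in range(max_rounds):
        failures = run_test_suite(code)
        if not failures:
            return code
        code = generate(spec, feedback=failures)
    return code

# Stub "model" and test suite to exercise the loop: the first draft fails
# one test, and the model fixes it when handed the failure list.
def fake_generate(spec, feedback=None):
    return "v2" if feedback else "v1"

def fake_suite(code):
    return [] if code == "v2" else ["parses <html> incorrectly"]

print(iterate(fake_generate, fake_suite, "HTML parsing spec"))  # v2
```

In practice the test suite would be something like a web-platform-tests subset plus differential comparisons against existing engines, which is what makes this loop cheap to run unattended.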

Built for your needs

With an engine that is built to your specific needs, you can tune and optimize it in ways that are simply not possible when building on top of an existing engine.

Want better memory-safety guarantees? Have the agents build it in Rust from the ground up and minimize the use of unsafe operations.

You can target specific architecture assumptions without the legacy baggage of supporting decades of hardware. For example:

  • You could require a hardware GPU with a minimum feature set and eliminate the software rendering path entirely.
  • You could target modern CPU architectures with SIMD support.
  • You could target operating systems with specific isolation and memory safety features.

You can also strip out all of the features that don’t make sense for your use case:

  • Want to run headless? No need for a UI.
  • Don’t need dev tools? Don’t include the layers of hooks that they require.
  • Targeting a specific embedding case? You can strip out everything that isn’t required for that use case.

Competitive advantages

Beyond the platform-targeting advantages, you can also optimize it for your specific needs, making the tradeoffs that make sense for your use case between performance, memory size and binary size (and optimize for specific metrics).

Once you have a functional implementation, you could leverage something like AlphaEvolve to optimize it well beyond what any existing browser implementations are capable of.

My guess is that you could build a browser that is twice as fast as current browsers, uses half the memory, and has a binary size a fraction of what browsers currently ship, and do it with a relatively small team (and a generous token budget).

Not just browsers

This largely holds for any large software project that has an existing large user base. The barrier to entry for building a new implementation that is purpose-built has dropped significantly, and the risks of “process” slowing down development are going to become a huge problem for existing players in a lot of markets. If your company relies on its massive, legacy codebase as a moat, it might be time to start digging a new one.

View All Posts in Archive