Patrick Meenan

Welcome!

This is my personal blog. It mostly contains my random thoughts about web performance, development, browsers, or whatever else I might be thinking about at the time.

Latest Posts

Does Gemini Create Fast Websites?

I’ve used AI to generate a few websites over the last month or so, and not too long before that I was helping my son create a website on Wix, so I thought it might be interesting to see how they stack up against each other in terms of performance. I was REALLY motivated when I saw the new Stitch app had a “create react app” button (OK, petrified probably describes my reaction better than motivated).

The contenders

The first site I helped with was a Wix website for my son’s horse transport business. I directed him there since he could build most of it himself and I knew the Wix team works hard to make their sites fast (within the constraints of running a giant self-publishing framework). We spent a fair bit of time getting the layout to “mostly work,” but the grid system is pretty limiting, so we ended up with a design that was “good enough” rather than “great.” He did most of the work, though, so I’m happy with that part of things.

The next site was one I promised to help put together for a contractor we use a lot. I’ve been meaning to play with 11ty and other frameworks for a while, but the level of effort has always been more than I wanted to take on, and I have zero design skills (less than zero, if that’s possible). I figured it was a good time to see how far AI has come since I last used it with Gemini 3.0 (which didn’t go great but was decent for spec work), and both Gemini 3.1 and Antigravity had just come out, so it felt like the perfect time to give it a shot.

This was a breath of fresh air and probably the best experience I’ve ever had with a dev tool for building websites. I basically just told it the pages I wanted to include, some details about the business, the features I wanted the website to have, and that I wanted it to be statically hosted (preferably served from Cloudflare Pages). It built a beautiful, functional site in just a few minutes. Some things didn’t work quite right, but I just told it what needed to be fixed and it took care of it. It had used some stock photos served from third-party sites as placeholders, so I had it use Nano Banana Pro to generate better ones (and self-host them), and had it generate photo galleries using actual photos he provided me.

I had to ask it to automate resizing and optimizing of the images but, to this day, I haven’t had to touch a line of the source code for the site. I was even lazy enough to ask the AI to change some text on the page instead of doing it myself. It created a nicely responsive site that performed well and took just a few hours of effort from end to end. It picked Astro as the framework to use and it may need some more tweaks to fix things as they come up (like I haven’t checked how accessible it is yet) but it’s WAY better than I could have done on my own.
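The resizing step mostly boils down to “fit within a box while keeping the aspect ratio” math. I haven’t read the exact code the AI generated for the pipeline, but a minimal sketch of that calculation looks something like this:

```javascript
// Sketch of the resize math for "fit within a maximum bounding box
// while maintaining aspect ratio" (the actual generated pipeline may
// do this differently, e.g. via Astro's built-in image tooling).
function fitWithin(width, height, max = 1280) {
  // Scale down only; never upscale images that already fit.
  const scale = Math.min(1, max / Math.max(width, height));
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}
```

For example, a 2560×1440 photo would come out at 1280×720, while an 800×600 screenshot would be left untouched.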

The third site I built was to migrate this blog off of Blogger (again, with Antigravity and Gemini 3.1 Pro Thinking). I won’t get into all of the details because I already wrote a post about it, but this time I was more specific about wanting a fast SSG website that defaults to plain HTML and only uses JS as a progressive enhancement. I asked it to build the site so that I could author articles in markdown, and to migrate all of the existing content.

Again, a few hours of work produced something I’m really happy with and is way better than I could have done by myself. I’ve had to go back a few times to ask it to add features (like OG meta tags for post embedding, an RSS feed, etc) but that was all done through prompts. I write all of the markdown files for the articles by hand but I haven’t touched a line of the framework code that turns it into the blog (Astro again).

The final one I wanted to look at was a quick test using Stitch to generate a React app. I spent maybe five minutes on it because it’s just for testing purposes: I asked it to scrape the content from the contractor website and redesign it. I’m not the target audience for the tool, so I didn’t spend any time on the design itself or the functionality of the app, other than making sure it worked; I was mostly interested in seeing how the code it generated performed.

How did they do?

I was really impressed with both of the Astro sites that Antigravity generated. They handily beat the performance of the Wix and Stitch-generated sites and served plain HTML with a side of progressive enhancement.

Side-by-side filmstrip of the four websites loading

To be fair, they aren’t all serving the same content and the Wix page has a massive background image but they’re relatively comparable and the waterfalls clearly show that the performance differences aren’t because of the content.

Wix

The Wix site is surprisingly fast given how much it loads but even for the above-the-fold content it is pulling resources from 4 different domains and relying on a fair bit of JS:

Wix waterfall

That is the trimmed-down version, just to get to the LCP image. The full waterfall is 174 requests and 2MB of content over the wire, most of which is Javascript (11MB of JS, uncompressed).

Antigravity - Contractor website

This, I was thrilled with:

Antigravity contractor waterfall

It looks like it loads ~600 bytes of JS to help with email address decoding (for the mailto: link, I assume) and pulls fonts from Google Fonts, but otherwise it’s just the HTML, CSS, and images for the content itself: ~200KB all-in and 9 requests, most of it from the fonts and images. At some point I should have it inline that Javascript and self-host the fonts, but I’m thrilled with the result.
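I haven’t disassembled the actual ~600-byte script, but email-obfuscation snippets like this usually follow the same pattern: store the address in a scrambled form in the markup so scrapers reading the raw HTML don’t see it, then unscramble it with a few lines of JS at load time. A hypothetical sketch (function and attribute names are mine, not the generated code’s):

```javascript
// Hypothetical sketch of a tiny email-obfuscation decoder. Here the
// address is simply stored reversed in a data attribute; the real
// generated script may use a different encoding.
function decodeEmail(encoded) {
  // Reverse the string to recover the real address.
  return encoded.split('').reverse().join('');
}

// At page load, rewrite placeholder links into real mailto: links.
function hydrateMailtoLinks(doc) {
  for (const link of doc.querySelectorAll('a[data-email]')) {
    const address = decodeEmail(link.dataset.email);
    link.href = 'mailto:' + address;
    link.textContent = address;
  }
}
```

The nice part of this approach is that it degrades gracefully: with JS disabled the page still renders, you just don’t get a clickable address.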

Antigravity take two - My blog

I picked an article page with a visible image to make it fair(er) but it knocked it out of the park:

Antigravity blog waterfall

No Javascript, no custom fonts, just the HTML, CSS, and images for the content itself. The one thing that did jump out at me was the late loading of the images, including the hero image. It looks like Astro defaults to loading="lazy" on all images, so I went back and asked Gemini to fix that.

My first attempt was a little too heavy-handed: I removed lazy loading from ALL images, which made things a bit worse since all of the images then loaded in parallel (though at least they started sooner):

Antigravity blog waterfall - take two

The third time was the charm:

Antigravity blog waterfall - take three

Now it loads the first image in any blog post eagerly, in parallel with the CSS, and then lets native lazy-loading take care of any subsequent article images.
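In markup terms, the end state described above looks roughly like this (the paths and dimensions are illustrative, and the exact attributes Astro emits may differ):

```html
<!-- First article image: load eagerly and hint high priority so the
     request can start in parallel with the CSS. -->
<img src="/posts/2026/02/hero.jpg" alt="Hero image"
     loading="eager" fetchpriority="high" width="1280" height="720">

<!-- Subsequent article images: let native lazy-loading defer them
     until they approach the viewport. -->
<img src="/posts/2026/02/waterfall.png" alt="Waterfall"
     loading="lazy" decoding="async" width="1280" height="720">
```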

The visual experience isn’t meaningfully different, since the browser still has to go through the layout/style/render pipeline before anything is painted to the screen, but it now has what it needs to display the above-the-fold content ~40% sooner (and I feel better about myself).

Stitch react app

Yeah, it’s about what I expected. See if you can tell when the app hydrates:

Stitch waterfall

It doesn’t actually display anything on the screen until well after the waterfall finishes (while the client-side rendering is applied). It doesn’t help that every one of the React requests redirects from a generic major version to a specific patch version. The actual weight of the JS isn’t as bad as I worried it might be (11 requests, ~625KB compressed), but it’s ALL render-blocking.
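I didn’t dig into which CDN the generated app pulls React from, but the redirect pattern is common with version-range URLs: the CDN has to 302 a request for a major version to the currently pinned build. Pinning exact versions avoids that extra round trip per request. A hypothetical example using an esm.sh-style import map (versions and paths are illustrative):

```html
<!-- Requesting "react@19" costs a redirect to whatever patch version
     the CDN currently pins; requesting the exact version serves the
     file directly and is immutable-cacheable. -->
<script type="importmap">
{
  "imports": {
    "react": "https://esm.sh/react@19.0.0",
    "react-dom/client": "https://esm.sh/react-dom@19.0.0/client"
  }
}
</script>
```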

Here’s hoping people take the export options (the MCP or project brief), feed them into something else to build the actual websites, and don’t try going straight to production with the built-in React app. It’s great for getting a feel for how the app would work, but not much else.

Conclusion

I’m really excited for where this journey is taking us. I think the ease-of-use for generating websites directly instead of using a CMS is going to be a game-changer and it will be interesting to see how the market evolves.

That said, the tools still need a lot of help, and they are still “tools.” They remind me of a conversation I had years ago with a product manager about his site’s performance. He had assumed the dev team would “just make it fast” and that it wasn’t something he should have to ask them to focus on. AI feels like it’s at a similar place, and our role is basically the same: it CAN make things fast, responsive, accessible, etc., but it probably won’t unless you specifically tell it to.

The Coming Software Revolution

The actual writing of code hasn’t been the bulk of development for a while, particularly in larger companies with larger codebases and existing users. The more existing users there are, the more process tends to build up around protections and procedures. These exist to ensure you aren’t breaking existing behaviors, risking security (or bad press), or causing regressions in some random metric that was added to the launch process over the product’s lifetime.

This is all in place for good reason, but it also significantly reduces the velocity of development for the product involved. For extremely complex products (like, say, a web browser) this hasn’t necessarily been a risk to the product itself, because the cost for a new competitor to build a full browser from scratch has been astronomical. This is why there has been so much consolidation in browser engines and why most new “browsers” are just reskinned versions of Chromium.

That has all changed in the last year or so as LLMs have gotten much better at writing code.

Web browsers are prime candidates for disruption

I have a lot of history in the web browser space so it’s one that I’m most familiar with (though I also have plenty of exposure to other huge projects with large legacy user bases).

A clean-sheet web browser is a perfect candidate for being built with AI:

  • Virtually every part of the stack is heavily specified and documented (WHATWG, W3C, IETF, TC39, etc.)
  • There are at least 3 modern, independent open-source rendering engines (Blink, Gecko, WebKit) that can be used as a reference and for validation.
  • There are multiple open-source implementations of Javascript (V8, SpiderMonkey, JavaScriptCore) that can be used as a reference and for validation.
  • There are shared interop test suites (like web-platform-tests) for testing common web features.
  • There are tests within each browser implementation for their implementations of web features.
  • There are billions of pages on the web that can be used to test against both existing implementations and any new ones.

If you are getting into the market or need a web rendering engine for your product (embedded or otherwise), are you better off building a new one from scratch that is purpose-built for your needs or trying to shoehorn an existing engine into your product?

I’d hazard a guess that for most use cases the answer is now (or will soon be) “build a new one from scratch”.

What would be involved in building a new web browser?

Browsers have some pretty clear components that basically stand alone and have well-defined interfaces into the other parts of the system:

Components of a web browser, including the UI, rendering engine, javascript engine, networking stack, etc.
Components of a web browser, including the UI, rendering engine, javascript engine, networking stack, etc.

For the most part, you can break each of those pieces into a stand-alone project once the interfaces are defined and iterate on building them independently.

For each component, you can iterate on an implementation with an LLM autonomously:

  • Define the interfaces to other parts of the system.
  • Build isolated tests against existing engines to use as a baseline point of comparison.
  • Use the huge corpus of existing tests as an automated feedback loop.

Built for your needs

With an engine that is built to your specific needs, you can tune and optimize it in ways that are simply not possible when building on top of an existing engine.

Want better memory-safety guarantees? Have the agents build it in Rust from the ground up and minimize the use of unsafe operations.

You can target specific architecture assumptions without the legacy baggage of supporting decades of hardware. For example:

  • You could require a hardware GPU with a minimum feature set and eliminate the software rendering path entirely.
  • You could target modern CPU architectures with SIMD support.
  • You could target operating systems with specific isolation and memory safety features.

You can also strip out all of the features that don’t make sense for your use case:

  • Want to run headless? No need for a UI.
  • Don’t need dev tools? Don’t include the layers of hooks that they require.
  • Targeting a specific embedding case? You can strip out everything that isn’t required for that use case.

Competitive advantages

Beyond the platform-targeting advantages, you can also make the tradeoffs between performance, memory use, and binary size that make sense for your use case (and optimize for your specific metrics).

Once you have a functional implementation, you could leverage something like AlphaEvolve to optimize it well beyond what any existing browser implementations are capable of.

My guess is that you could build a browser that is twice as fast as current browsers, uses half the memory, and ships in a binary a fraction of the size of what browsers currently ship, and that you could do it with a relatively small team (and a generous token budget).

Not just browsers

This largely holds for any large software project that has an existing large user base. The barrier to entry for building a new implementation that is purpose-built has dropped significantly, and the risks of “process” slowing down development are going to become a huge problem for existing players in a lot of markets. If your company relies on its massive, legacy codebase as a moat, it might be time to start digging a new one.

It's Alive!

Wow, it has been a minute! Or, more specifically, 12 YEARS!

I’ve been wanting to move off of Blogger for the longest time and had all of these ideas for what I wanted to move to in order to make it easier to post content. Mostly I just wanted to be able to throw together a post in markdown and have some tooling turn it into a “good” static site (responsive, fast, not ugly, etc.). I toyed around with building something with 11ty a few years ago but never had the time to invest in really learning the platform.

If you have ever had the misfortune of watching one of my presentations, or of using a website that I actually “designed,” you know that I have zero business calling it design or being allowed anywhere near a UI.

Screenshot of the WebPageTest UI with a plain white HTML form and fields.
WebPageTest UI circa 2010 before Mark Zeman graciously offered to actually design it.

What I AM good at, though, is knowing what should go into a good technical deployment and user experience, and with the AI overlords coming for all of our jobs, I figured I’d take a stab at seeing how well it could do what I wanted. I have a Google One AI Pro family account because it was basically “free”: I was already paying for ~5TB of Google One cloud storage.

Getting started

I decided to use a two-phase approach (since I’m particularly lazy). I gave Gemini 3.1 Pro Thinking a prompt to generate a prompt for Antigravity to do all of the actual work:

Create a prompt for Google Antigravity to migrate blog.patrickmeenan.com to a new blog website that it creates.

Technical requirements for the new site:
- It should be a SSG website using Astro
- It should be responsive and automatically adjust to different devices and viewports
- It should support both dark mode and light mode and automatically set it based on the browser
- All text should be easily legible in both dark mode and light mode
- It should generate each blog post from a markdown file (and linked images)
- Each blog post should be stored in a separate folder with one markdown file and any media content required for the blog post
- The blog post articles should be organized by year, month and day and support multiple posts on a given day
- It should support syntax-coloring for all code blocks for most languages, including but not limited to Bash, Javascript, HTML, c, c++, java and json
- It should support mermaid diagrams in the markdown
- Images should be optimized to load a version of the image no larger than 1280x1280 while maintaining aspect ratio and rotation (including EXIF rotation) and jpeg quality level 85
- Images should be able to be clicked on to view a large version of the image and right-click to save-image-as should work
- It should be well optimized for SEO
- It should be well optimized for performance
- Where they exist, it should use existing best practices

Content requirements:
- The blog post should have the feel of a well-designed modern technical blog
- Up to 3 of the most recent posts should be on the home page with the most recent post being displayed first
- Clicking on the title of a given post should bring you to a dedicated page for that specific post
- There should be a way to navigate to previous/next posts both from an individual post as well as from the home page (and the navigation should include the title of the post being navigated to if it is navigating to a single post)
- There should be an index of all of the blog posts with a way to navigate them by year and by month

Existing content:
- All of the posts from the existing blog at https://blog.patrickmeenan.com/ should be migrated to the new platform
- Each of the existing posts should be recreated as a new markdown file in the appropriate location suited for the new platform with any images used by the posts included in the same folder
- The general structure of the existing posts (title, headings, paragraphs, inline images, links) should be maintained but they should use the default styling of the new platform and the markdown should be as clean as possible.
- The existing article URLs should still work on the new platform. Either by redirect or by using the same URL structure on the new platform. If redirects are used they should be HTTP 301 redirects and in a format suitable for nginx being used as a web server.

When it is complete, Antigravity should check the results to make sure the requirements were met.
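The “format suitable for nginx” bit at the end of the prompt refers to plain 301 redirect rules. For reference, they would look something like this (the paths here are hypothetical examples, not the actual mapping it generated):

```nginx
# Map an old Blogger-style URL to the new post location.
# One entry per migrated post; paths are illustrative.
location = /2013/07/some-old-post.html {
    return 301 /posts/2013/07/some-old-post/;
}
```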

I ended up being even more specific than I expected going into it, but I didn’t really do all that much planning. I just started typing the requirements, thinking about what I may have missed, and then let it go from there. It produced a much more structured version of the prompt with multiple phases, and I liked what it produced, so I went ahead and copy/pasted it into Antigravity and let it go to town (also with Gemini 3.1 Pro Thinking).

Cleaning it up

I was honestly surprised by how well it did. It moved all of the existing blog content over as clean markdown files and largely created what you are looking at now. Nothing was actually “broken,” but I did have to come in after it and ask for a bunch of corrections. I didn’t touch the actual code, though; I just pointed out the specific changes I wanted or the issues I was seeing and had it correct itself.

Some of the highlights:

  • It had just put links to the blog posts on the home page where I actually wanted the posts themselves to be readable there. My fault for not being specific.
  • I forgot to ask it to create an “About” page so I had it add one.
  • When it created the About page, the logos for the various services were all giant 580×580 SVG images and, even after I pointed it out, it thought it had sized them correctly. I had to suggest “maybe they should be in containers to constrain the size,” at which point it realized what it had to do.
  • The image pipeline ONLY worked with JPEGs and broke when I used a PNG screenshot for this post, so I had it add PNG support.
  • The dialog that displayed when you clicked on images was not formatted well: it threw the image into a corner of the screen, had scroll bars, and just generally looked bad. It took a few prompts to fix all of the issues, but the result turned out pretty well.
  • The dark and light themes for code blocks were not working right. It was blinding to look at a white-background code block in dark mode, and when it fixed that, the text colors were still not switching correctly. It took a few tries, but it finally got the color themes for the code blocks working.
  • I had it add support for captions below the inline images and then fix the spacing so there wasn’t a mile of whitespace between them.
  • I had it add a copy button to the code blocks when you hover over them.
  • I even used it to change some of the text on the landing page because I was too lazy to hunt it down in the source files.
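Getting code-block colors to follow the browser theme (one of the fixes in the list above) generally comes down to the prefers-color-scheme media query. A rough CSS sketch, with illustrative variable names and colors (the site’s actual implementation may differ, e.g. Astro’s default Shiki highlighter has its own dual-theme mechanism):

```css
/* Light theme is the default; dark overrides kick in automatically
   when the browser/OS is in dark mode. */
:root {
  --code-bg: #f6f8fa;
  --code-fg: #24292f;
}

@media (prefers-color-scheme: dark) {
  :root {
    --code-bg: #0d1117;
    --code-fg: #c9d1d9;
  }
}

pre code {
  background: var(--code-bg);
  color: var(--code-fg);
}
```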

The results

At the end of the day, it took maybe 3 hours to complete, which is orders of magnitude faster than it would have been had I done it by hand (and looks and works WAY better than I would have ever been able to produce). That includes writing this post which I used as a test of the publishing pipeline to make sure the whole thing worked as expected.

I’m excited by what the tooling can do these days, and I think it’s at a point where it can work really well with an experienced engineer directing it. It feels a lot like working with an entry-level engineer, just a lot faster. That is what worries me most about the future of the industry. I’m not sure these tools will ever get to the point of making the architectural decisions about what to build and how the pieces should fit together (or maybe they will, and we’ll all become product managers), but I worry about how we continue to build that skillset if we start relying on AI tooling for the kinds of things we’d normally work on with, and mentor, junior engineers.

Anyway, thanks for suffering through that with me. Hopefully now that the migration is done and putting up a new post is just a simple markdown file I will start posting more regularly than every 12 years.
