Patrick Meenan

Welcome!

This is my personal blog and mostly contains my random thoughts about web performance, development, browsers or whatever else I might be thinking about at the time.

Latest Posts

The Coming Software Revolution

The actual writing of code hasn’t been the bulk of development for a while, particularly in larger companies with larger codebases and existing users. The more existing users there are, the more process there tends to be around protections and procedures. These exist to ensure you aren’t breaking existing behaviors, risking security (or press), or causing regressions in some random metric added to the launch process over the product’s lifetime.

This is all in place for good reason, but it also significantly reduces the velocity of development for the product involved. For extremely complex products (like, say, a web browser) this isn’t necessarily a risk to the product itself because the cost for a new competitor to build a full browser from scratch has been astronomical. This is why there has been so much consolidation in browser engines and why most new “browsers” are just reskinned versions of Chromium.

That has all changed in the last year or so as LLMs have gotten much better at writing code.

Web browsers are prime candidates for disruption

I have a lot of history in the web browser space so it’s one that I’m most familiar with (though I also have plenty of exposure to other huge projects with large legacy user bases).

A clean-sheet web browser is a perfect candidate for being built with AI:

  • Virtually every part of the stack is heavily specified and documented (WHATWG, W3C, IETF, TC39, etc.)
  • There are at least 3 modern, independent open-source rendering engines (Blink, Gecko, WebKit) that can be used as a reference and for validation.
  • There are multiple open-source implementations of JavaScript (V8, SpiderMonkey, JavaScriptCore) that can be used as a reference and for validation.
  • There are interop tests for testing common web features.
  • There are tests within each browser implementation for their implementations of web features.
  • There are billions of pages on the web that can be used to test against both existing implementations and any new ones.

If you are getting into the market or need a web rendering engine for your product (embedded or otherwise), are you better off building a new one from scratch that is purpose-built for your needs or trying to shoehorn an existing engine into your product?

I’d hazard a guess that for most use cases the answer is now (or will soon be) “build a new one from scratch”.

What would be involved in building a new web browser?

Browsers have some pretty clear components that basically stand alone and have well-defined interfaces into the other parts of the system:

Components of a web browser, including the UI, rendering engine, JavaScript engine, networking stack, etc.

For the most part, you can break each of those pieces into a stand-alone project once the interfaces are defined and iterate on building them independently.

For each component, you can iterate on an implementation with an LLM autonomously:

  • Define the interfaces to other parts of the system.
  • Build isolated tests against existing engines to use as a baseline point of comparison.
  • Use the huge corpus of existing tests as an automated feedback loop.
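The loop above is essentially differential testing: run the same case through a trusted engine and the new one and diff the results. A minimal sketch of what that harness could look like (the engine functions here are stand-ins I made up; a real harness would shell out to a headless build of each browser and compare screenshots or DOM dumps):

```python
# Minimal sketch of a differential test loop for a new browser component.
# The two "engines" are stand-in functions for illustration only; a real
# harness would invoke an existing engine (e.g. headless Chromium) and the
# candidate engine, then diff their rendered output.

def reference_engine(html: str) -> str:
    """Stand-in for an existing, trusted engine."""
    return html.strip().lower()

def candidate_engine(html: str) -> str:
    """Stand-in for the new, LLM-built engine under test."""
    return html.strip().lower()

def run_suite(cases):
    """Run every case through both engines and collect mismatches."""
    failures = []
    for case in cases:
        expected = reference_engine(case)
        actual = candidate_engine(case)
        if expected != actual:
            failures.append((case, expected, actual))
    return failures

if __name__ == "__main__":
    suite = ["<p>Hello</p>", "<div>  Layout  </div>"]
    print(f"{len(run_suite(suite))} failures out of {len(suite)} tests")
```

The failure list is exactly the kind of automated feedback an agent can iterate against without a human in the loop.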

Built for your needs

With an engine that is built to your specific needs, you can tune and optimize it in ways that are simply not possible when building on top of an existing engine.

Want better memory-safety guarantees? Have the agents build it in Rust from the ground up and minimize the use of unsafe operations.

You can target specific architecture assumptions without the legacy baggage of supporting decades of hardware. For example:

  • You could require a hardware GPU with a minimum feature set and eliminate the software rendering path entirely.
  • You could target modern CPU architectures with SIMD support.
  • You could target operating systems with specific isolation and memory safety features.

You can also strip out all of the features that don’t make sense for your use case:

  • Want to run headless? No need for a UI.
  • Don’t need dev tools? Don’t include the layers of hooks that they require.
  • Targeting a specific embedding case? You can strip out everything that isn’t required for that use case.

Competitive advantages

Beyond the platform-targeting advantages, you can make the tradeoffs between performance, memory use and binary size that make sense for your use case (and optimize for specific metrics).

Once you have a functional implementation, you could leverage something like AlphaEvolve to optimize it well beyond what any existing browser implementations are capable of.

My guess is you could easily build a browser that is twice as fast as current browsers, uses half the memory, and has a binary size a fraction of what browsers currently ship, and do it with a relatively small team (and a generous token budget).

Not just browsers

This largely holds for any large software project that has an existing large user base. The barrier to entry for building a new implementation that is purpose-built has dropped significantly, and the risks of “process” slowing down development are going to become a huge problem for existing players in a lot of markets. If your company relies on its massive, legacy codebase as a moat, it might be time to start digging a new one.

It's Alive!

Wow, it has been a minute! Or, more specifically, 12 YEARS!

I’ve been wanting to move off of Blogger for the longest time and had all these ideas for what I wanted to move to in order to make it easier to post content. Mostly I just wanted to be able to throw together a post in markdown and have some tooling turn that into a “good” static site (responsive, fast, not ugly, etc.). I toyed around with building something with 11ty a few years ago but never had the time to invest to really learn the platform.

If you have ever had the misfortune of watching one of my presentations or using a website that I actually “designed”, you know that I have zero business calling it design or being allowed anywhere near a UI.

Screenshot of the WebPageTest UI with a plain white HTML form and fields.
WebPageTest UI circa 2010 before Mark Zeman graciously offered to actually design it.

What I AM good at, though, is knowing what should be considered for a good technical deployment and user experience, and with the AI overlords coming for all of our jobs I figured I’d take a stab at seeing how well they could do what I wanted for me. I have a Google One AI Pro family account because it was basically “free” since I was already paying for ~5TB of Google One cloud storage.

Getting started

I decided to use a two-phase approach (since I’m particularly lazy). I gave Gemini 3.1 Pro Thinking a prompt to generate a prompt for Antigravity to do all of the actual work:

Create a prompt for Google Antigravity to migrate blog.patrickmeenan.com to a new blog website that it creates.

Technical requirements for the new site:
- It should be a SSG website using Astro
- It should be responsive and automatically adjust to different devices and viewports
- It should support both dark mode and light mode and automatically set it based on the browser
- All text should be easily legible in both dark mode and light mode
- It should generate each blog post from a markdown file (and linked images)
- Each blog post should be stored in a separate folder with one markdown file and any media content required for the blog post
- The blog post articles should be organized by year, month and day and support multiple posts on a given day
- It should support syntax-coloring for all code blocks for most languages, including but not limited to Bash, Javascript, HTML, c, c++, java and json
- It should support mermaid diagrams in the markdown
- Images should be optimized to load a version of the image no larger than 1280x1280 while maintaining aspect ratio and rotation (including EXIF rotation) and jpeg quality level 85
- Images should be able to be clicked on to view a large version of the image and right-click to save-image-as should work
- It should be well optimized for SEO
- It should be well optimized for performance
- Where they exist, it should use existing best practices

Content requirements:
- The blog post should have the feel of a well-designed modern technical blog
- Up to 3 of the most recent posts should be on the home page with the most recent post being displayed first
- Clicking on the title of a given post should bring you to a dedicated page for that specific post
- There should be a way to navigate to previous/next posts both from an individual post as well as from the home page (and the navigation should include the title of the post being navigated to if it is navigating to a single post)
- There should be an index of all of the blog posts with a way to navigate them by year and by month

Existing content:
- All of the posts from the existing blog at https://blog.patrickmeenan.com/ should be migrated to the new platform
- Each of the existing posts should be recreated as a new markdown file in the appropriate location suited for the new platform with any images used by the posts included in the same folder
- The general structure of the existing posts (title, headings, paragraphs, inline images, links) should be maintained but they should use the default styling of the new platform and the markdown should be as clean as possible.
- The existing article URLs should still work on the new platform. Either by redirect or by using the same URL structure on the new platform. If redirects are used they should be HTTP 301 redirects and in a format suitable for nginx being used as a web server.

When it is complete, Antigravity should check the results to make sure the requirements were met.
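The image-sizing requirement in that prompt boils down to simple aspect-ratio math: scale so neither dimension exceeds 1280, and never upscale. A sketch of that calculation (the function name and structure are mine, not from the generated site):

```python
def fit_within(width: int, height: int, max_side: int = 1280):
    """Compute output dimensions that fit inside a max_side square,
    preserving aspect ratio and never upscaling.

    A real pipeline would first apply the EXIF orientation (e.g. with
    Pillow's ImageOps.exif_transpose) before resizing, then re-encode
    at the target JPEG quality.
    """
    scale = min(max_side / width, max_side / height, 1.0)
    return round(width * scale), round(height * scale)

# Examples:
# fit_within(2560, 1440) -> (1280, 720)   landscape, scaled down
# fit_within(800, 600)   -> (800, 600)    already fits, untouched
# fit_within(1000, 4000) -> (320, 1280)   tall image limited by height
```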

I ended up being even more specific than I expected going into it, but I didn’t really do all that much planning. I just started typing the requirements, thinking about what I may have missed, and then let it go from there. It produced a much more structured version of the prompt with multiple phases, but I liked what it produced so I went ahead and copy/pasted it into Antigravity and let it go to town (also with Gemini 3.1 Pro Thinking).

Cleaning it up

I was honestly surprised by how well it did. It moved all of the existing blog content over as clean markdown files and largely created what you are looking at now. Nothing was actually “broken”, but I did have to come in after it and ask it to make a bunch of corrections. I didn’t touch the actual code, though; I just pointed out the specific changes I wanted to make or the issues that I was seeing and had it correct itself.

Some of the highlights:

  • It had just put links to the blog posts on the home page where I actually wanted the posts themselves to be readable there. My fault for not being specific.
  • I forgot to ask it to create an “About” page so I had it add one.
  • When it created the About page, the logos for the various services were all giant 580×580 SVG images and, even after I pointed it out, it thought it had sized them correctly. I had to suggest “maybe they should be in containers to constrain the size”, at which point it realized what it had to do.
  • The image pipeline ONLY worked with JPEGs and broke when I used a PNG screenshot for this post, so I had it add PNG support.
  • The dialog that displayed when you clicked on images was not formatted well. It threw the image into a corner of the screen, had scroll bars and just generally looked bad. It took a few prompts to fix all of the issues with it but the result turned out pretty well.
  • The dark and light themes for code blocks were not working right. It was blinding to look at a white-background code block in dark mode and when it fixed that, the actual text was also not correctly switching. It took a few tries but it finally got the color themes for the code blocks working correctly.
  • I had it add support for captions below the inline images and then fix the spacing so there wasn’t a mile of whitespace between them.
  • I had it add a copy button to the code blocks when you hover over them.
  • I even used it to change some of the text on the landing page because I was too lazy to hunt it down in the source files.

The results

At the end of the day, it took maybe 3 hours to complete, which is orders of magnitude faster than it would have been had I done it by hand (and looks and works WAY better than I would have ever been able to produce). That includes writing this post which I used as a test of the publishing pipeline to make sure the whole thing worked as expected.

I’m excited by what the tooling can do these days and I think it’s at a point where it can work really well with an experienced engineer helping to direct it. It feels a lot like working with an entry-level engineer, just a lot faster. Which is what worries me most about the future of the industry. I’m not sure if the tools will ever get to the point where they are making the architectural decisions about what to build and how the pieces should go together (or they could, and we’ll all become product managers), but I worry about how we continue to build that skillset if we start relying on the AI tooling for the kinds of things we’d normally work on with, and mentor, junior engineers.

Anyway, thanks for suffering through that with me. Hopefully now that the migration is done and putting up a new post is just a simple markdown file I will start posting more regularly than every 12 years.

Updated WebPagetest "Data Center" Tour

It has been 3 years since the last tour and a lot of people have been asking if it is still hosted in my basement so it’s time for an update.

First, yes, it is still hosted out of my basement.  I did move it out of the utility room and into a storage room, so if the water heater leaks it will no longer take out everything.

Yes, Halloween has gotten a bit out of control. This is what it looked like last year (in our garage though the video doesn’t quite do it justice).

The WebPagetest “rack” is a gorilla shelf that holds everything except for the Android phones.

Starting at the bottom we have the 4 VM servers that power most of the Dulles desktop testing.  Each server runs VMware ESXi (now known as VMware Hypervisor) with ~8 Windows 7 VMs.  I put the PCs together myself:

- Single socket Supermicro Motherboards with built-in IPMI (remote management)
- Xeon E3 processor (basically a Core i7)
- 32 GB RAM
- Single SSD Drive for VM Storage
- USB Thumb drive (on motherboard) for ESXi hypervisor

The SSDs for VM storage let me run all of the VMs off of a single drive with no I/O contention because of the insane IOPS you can get from them (I tend to use Samsung 840 Pros but am really looking forward to the 850s).

As far as scaling the servers goes, I load up more VMs than I expect to use, submit a whole lot of tests with all of the options enabled and watch the hypervisor’s utilization.  I shut down VMs until the CPU utilization stays below 80% (one VM per CPU thread seems to be the sweet spot).

Moving up the rack we have the unRAID NAS where the tests are archived for long-term storage (as of this post the array can hold 49TB of data, with 18TB used for test results).  I have a bunch of other things on the array so not all of the remaining ~31TB is free, but I expect to be able to continue storing results for the foreseeable future.

I haven’t lost any data (though drives have come and gone), but the main reason I like unRAID is that if I lose multiple drives it is not completely catastrophic and the data on the remaining drives can still be recovered.  It’s also great for power because you can have it automatically spin down the drives that aren’t being actively accessed.

Next to the unRAID array is the stack of ThinkPad T430s that power the “Dulles Thinkpad” test location.  They are great if you want to test on relatively high-end physical hardware with GPU rendering.  I really like them as test machines because they also have built-in remote management (AMT/vPro in Intel speak) so I can reboot or remotely fix them if anything goes wrong.  I have all of the batteries pulled out so recharge cycles don’t kill them, but if you want built-in battery backup/UPS they work great for that too.

Buried in the corner next to the stack of Thinkpads is the web server that runs www.webpagetest.org.

The hardware mostly matches the VM servers (same motherboard, CPU and memory) but the drive configuration is different.  There are 2 SSDs in a RAID 1 array that run the main OS, web server and UI, and 2 magnetic disks in a RAID 1 array that are used for short-term test archiving (1-7 days) before results are moved off to the NAS.  The switch sitting on top of the web server connects the ThinkPads to the main switch (I ran out of ports on the main switch).

The top shelf holds the main networking gear and some of the mobile testing infrastructure.

The iPhones are kept in the basement with the rest of the gear and connect over WiFi to an Apple AirPort Express.  The Apple access points tend to be the most reliable and I haven’t had to touch them in years.  The access point is connected to a network bridge so that all of the phone traffic goes through the bridge for traffic shaping.  The bridge runs FreeBSD 9.2, which works really well for dummynet, and has a fixed profile set up (for now) so that everything going through it sees a 3G connection (though traffic to the web server is configured to bypass the shaping so test results are fast to upload).  The bridge itself is a Supermicro 1U Atom server which is super-low-power, has remote management and is more than fast enough for routing packets.
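A fixed dummynet profile like that usually comes down to a handful of ipfw rules on the bridge. This is only a rough sketch of the shape of such a config; the bandwidth/latency numbers and the addresses are my illustrative guesses, not the actual setup:

```shell
# Illustrative FreeBSD ipfw + dummynet rules for a fixed "3G-ish" profile.
# All numbers and IPs are examples, not the real WebPagetest config.
ipfw pipe 1 config bw 1600Kbit/s delay 150ms   # downstream pipe
ipfw pipe 2 config bw 768Kbit/s delay 150ms    # upstream pipe

# Let traffic to/from the web server bypass shaping so result uploads are fast
ipfw add 100 allow ip from any to 192.0.2.10   # web server (example IP)
ipfw add 110 allow ip from 192.0.2.10 to any

# Shape everything else crossing the bridge
ipfw add 200 pipe 1 ip from any to any in
ipfw add 210 pipe 2 ip from any to any out
```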

There are 2 iPhones running tests for the mobile HTTP Archive and 2 running tests for the Dulles iPhone testing for WebPagetest.  The empty bracket is for the third phone that is usually running tests for Dulles as well but I’m using it for dev work to update the agents to move from mobitest to the new nodejs agent code.

The networking infrastructure is right next to the mobile agents.

The main switch has 2 VLANs on it.  One connects directly to the public Internet (the right 4 ports) and the other (all of the other ports) to an internal network.  Below the switch is the router that bridges the two networks and NATs all of the test agent traffic (and runs as a DHCP and DNS server).  The WebPagetest web server and the router are both connected to the public Internet directly, which ended up being handy when the router had software issues while I was in Alaska (I could tunnel through the web server to the management interface on the router to bring it back up).  The router is actually the bottom unit and a spare server is on top of it; both are the same 1U Atom servers as the traffic-shaping bridge, though the router runs Linux.

My Internet connection is awesome (at least by US pre-Google Fiber standards).  I am lucky enough to live in an area that has Verizon FiOS (fiber).  I upgraded to a business account (not much more than a residential one) to get the static IPs, and I get much better support, 75Mbps down/35Mbps up and super-low latency.  The FiOS connection itself hasn’t been down at all in at least the last 3 years.

The Android devices are on the main level of the house right now on a shelf in the study, mostly so I don’t have to go downstairs in case the devices need a bit of manual intervention (and while we shake out any reliability issues in the new agent code).

The phones are connected through an Anker USB hub to an Intel NUC running Windows 7, where the nodejs agent code runs to manage the testing.  The current-generation NUCs don’t support remote management so I’m really looking forward to the next release (January or so) that is supposed to add it back.  For now I’m just using VNC on the system, which gives me enough control to reboot the system or any of the phones if necessary.

The phones are all connected over WiFi to the access point in the basement (which is directly below them).  The actual testing is done over the traffic-shaped WiFi connection but all of the phone management and test processing is done on the tethered NUC system.  I tried Linux on it but at the time the USB 3 drivers were just too buggy, so it is running Windows (for now).  The old Android agent is not connected to the NUC and is running mobitest, but the other 10 phones are all connected to the same host.  I tried connecting an 11th but Windows complained that too many USB device IDs were being used, so it looks like the limit (at least for my config) is 10 phones per host.  I have another NUC ready to go for when I add more phones.

One of the Nexus 7s is locked in portrait mode and the other is allowed to rotate (which in the stand means landscape).  All of the rest of the phones are locked in portrait.  I use these stands to hold the phones and have been really happy with them (and have a few spares off to the left of the picture).

At this point the android agents are very stable.  They can run for weeks at a time without supervision and when I do need to do something it’s usually a matter of remotely rebooting one of the phones (and then it comes right back up).  After we add a little more logic to the nodejs agent to do the rebooting itself they should become completely hands-free.

Unlike the desktop testing, the phone screens are on and visible while tests are running so every now and then I worry that the kids may walk in while someone is testing a NSFW site but they don’t really go in there (something to be aware of when you set up mobile testing though).

One question I get asked a lot is why I don’t host it all in a data center somewhere (or run a bunch of it in the cloud).  Maybe I’m old-school, but I like having the hardware close by in case I need to do something that requires physical access, and the costs are WAY cheaper than if I were to host it somewhere else.  The increased power bill is very slight (tens of dollars a month), I’d have an Internet connection anyway so the incremental cost for the business line is also tens of dollars per month, and the server and storage costs were one-time costs that were less than even a couple of months of hosting.  Yes, I need to replace drives from time to time but at $150 per 4TB drive, that’s still a LOT cheaper than storing 20TB of data in the cloud (not to mention the benefit of having it all on the same network).
