
How I successfully (and repeatably) score 100/100 on PageSpeed/Lighthouse/Core Web Vitals

Google has repeatedly said that page speed is a key ranking factor. It's no longer a nice-to-have, but a necessity if you treat SEO as a viable acquisition channel. Page speed also has a positive effect on every other channel: it directly impacts conversion rate, and it's favorable to things like your PPC quality score (which in turn leads to a lower cost per click).

As a developer I have built many projects, and page speed is something I take seriously on all of them.

There are thousands of articles out there that explain how to get high scores for your page speed, but most of them simply state the obvious (cache things, don't load lots of JavaScript, lazy load images, etc) or expect you to do the unfeasible. After all, simply saying that you have too much external JavaScript when you need to track clicks from Facebook, Google Ads, LinkedIn, etc isn't really a fix, is it?

In this post I'm going to go through some of the tactics I employ to get those scores.

Cloudflare Zaraz

When you build a site you'll often finish the project with a great page speed score. You'll put it live, and start implementing your acquisition channels. Usually this means you need to install something like Google Tag Manager and start to add various tracking tools for your channels like Facebook Ads/Pixel, Google Ads tracking, LinkedIn Ads tracking, etc.

Each of these tools slows your site down. You can quickly go from a score in the high 90s to a score in the 30s. But you need these tools, you can't just remove them.

In comes Cloudflare Zaraz. Zaraz is similar to GTM, in that it allows you to manage third party scripts and fire them on various triggers and variables. It even has a built-in consent manager, which is one step further than GTM.

The big difference between Zaraz and GTM is that Zaraz offloads scripts to Cloudflare's edge network. To explain this you first need to understand how GTM works. In simple terms, in GTM you define scripts that should be loaded on certain triggers. A lot of the time the trigger is simply 'page load', which means it loads on every page. That means every time someone requests a page of your website the browser will load GTM, and then GTM will feed external scripts to the browser which all get run in a thread on your visitor's computer. This quickly slows things down.

Check out the differences between Cloudflare Zaraz and Google Tag Manager.

Zaraz works in a similar way. You define your trigger ('page load'), and define the scripts to be run. The difference is that these scripts don't get fed to the browser, they get fed to a Cloudflare Worker, which is a JavaScript VM that runs in the cloud. This offloads most of the work to another machine other than your visitor's, speeding things up dramatically.

Now, there are some trade-offs here. Zaraz doesn't work for everything, and it isn't as polished a product as GTM, but it certainly does the job for most users. The flip side is that many people find GTM overwhelming, which certainly isn't the case with Zaraz.

WebP

You'll probably have seen the notes in Lighthouse and PageSpeed Insights suggesting you serve images in next-gen formats. Most web developers still use PNGs and JPGs for images, although there is definitely a positive push towards using SVGs for vector-based imagery, which is great.

WebP is a format for traditional raster-based imagery (photos, and basically anything that isn't a vector). It's built for the web and highly optimised for small file sizes.

To give you an idea, take the following badge from one of Accreditly's certifications:

PHP Certification

As a PNG run through a lossless compressor - which removes all meta info and compresses the image as much as possible without losing quality - it comes out at about 120kb.

The same image as WebP, without any compression added, comes out at 25kb.

That's a huge difference.

Great. But how do we implement it? Most tools still export as JPG or PNG, and most users upload in those formats.

There are a couple of viable options.

Implement an image proxy CDN

Products exist to tackle this exact problem. One of which is Cloudflare Images.

Placing your website behind Cloudflare proxies all of your traffic through Cloudflare's network, which allows them to offer services that would otherwise be difficult to implement. One such product is Cloudflare Images.

Cloudflare Images caches all of your images on a CDN for faster serving to your visitors. But on top of that, Cloudflare also converts your images into multiple formats and then serves the best format to users based on what the visitor's browser can support. This is a turnkey solution that works, but it does have a small fee attached.
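As a rough illustration, once image transformations are enabled on a Cloudflare zone you can request an on-the-fly converted variant through a special URL path. The snippet below follows Cloudflare's /cdn-cgi/image/ URL convention; treat the option values as illustrative:

```html
<!-- The original asset -->
<img src="/images/hero.png" alt="Hero">

<!-- Same asset via Cloudflare, with the format negotiated per browser -->
<img src="/cdn-cgi/image/format=auto,quality=80/images/hero.png" alt="Hero">
```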

Other options include Imgix and Cloudinary. Both offer great solutions, and while they're not quite as simple to integrate as Cloudflare, they also don't require you to proxy all of your traffic through their systems.

Roll your own

It's actually quite trivial to roll your own implementation of this. The exact implementation depends on your tech stack; below is a super basic example in vanilla PHP using the GD extension.

$sourcePath = '/path/to/your/logo.png'; // can be a jpg/jpeg too
$pathParts  = pathinfo($sourcePath);
$targetPath = $pathParts['dirname'] . '/' . $pathParts['filename'] . '.webp';
$extension  = strtolower($pathParts['extension']);

if ($extension === 'png') {
    $img = imagecreatefrompng($sourcePath);
    imagepalettetotruecolor($img);
    imagealphablending($img, true);
    imagesavealpha($img, true); // preserve transparency
} elseif ($extension === 'jpg' || $extension === 'jpeg') {
    $img = imagecreatefromjpeg($sourcePath);
    imagepalettetotruecolor($img);
} else {
    // You can support bmp or other formats here too
    $img = false;
}

// GD functions return false on failure rather than throwing,
// so check the result instead of using try/catch
if ($img !== false) {
    imagewebp($img, $targetPath, 100); // quality 0-100; lower values shrink files further
    imagedestroy($img);
}
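If you'd rather not do the conversion in application code, the reference libwebp distribution ships a cwebp command-line tool that does the same job (assuming cwebp is installed via your package manager; file names here are illustrative):

```shell
# Convert a PNG to WebP at quality 80 (the -q flag ranges 0-100)
cwebp -q 80 logo.png -o logo.webp

# Lossless mode, often a good fit for logos and UI graphics
cwebp -lossless logo.png -o logo-lossless.webp
```

This is easy to run as a build step or a post-upload job.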

And then displaying the image is simple, as browsers support graceful fallbacks:

<picture>
    <source srcset="/path/to/your/image.webp" type="image/webp">
    <img src="/path/to/your/image.png"
        alt="A very optimized image"
        loading="lazy"
        decoding="async"
        width="225"
        height="255">
</picture>

Optimize your fonts

No one uses Arial or Helvetica for their website any more; it's all custom fonts. We have great tools at our disposal now, like Google Fonts, and even privacy-focused alternatives like Bunny Fonts.

Most of these tools create a request chain. Let's assume you have a CSS file and you want to load in a Google font using @import:

/* app.css */

@import url('https://fonts.googleapis.com/css2?family=Inter:wght@400;600;800&display=swap');

Then you include that file in your HTML:

<link href="/css/app.css" rel="stylesheet">

Let's look at what happens when someone loads the page:

  1. When the page loads the browser requests app.css.
  2. app.css requests https://fonts.googleapis.com/css2?family=Inter:wght@400;600;800&display=swap.
  3. https://fonts.googleapis.com/css2?family=Inter:wght@400;600;800&display=swap requests (potentially) dozens of font files.

You've got a chain of requests just to load a font, and most of that chain is render blocking. It's killing your page speed and Core Web Vitals. And Google Fonts is actually better than most systems: TypeKit (Adobe's font loader) chains even more files.

You can reduce one level of the chain by simply loading the font CSS directly rather than via an @import:

<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;600;800&display=swap" rel="stylesheet">

That's better, but it's still chaining multiple requests.
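If you do keep the hosted stylesheet, you can at least pay the connection setup cost early. Google Fonts' own embed snippet does this with preconnect hints:

```html
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;600;800&display=swap" rel="stylesheet">
```

The browser opens the DNS/TLS connections to both font hosts while the CSS is still downloading, which trims the effective depth of the chain without changing anything else.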

The best way to implement this is to take the CSS from the URL: https://fonts.googleapis.com/css2?family=Inter:wght@400;600;800&display=swap and do the following:

  1. Strip out the languages you don't need. Most sites (well, most that I work on; your mileage may vary) are in English, so I only need the Latin character set. That removes four additional character sets I don't need (note that some fonts declare this as a character range, so it may not actually save a request).
  2. Host the font files yourself. I'll come onto why you should do this shortly. The browser always needs to download the font files, but it's better for them to come from your server than Google's, so download them and serve locally.
  3. Load the CSS inline. Don't use a <link> HTML tag; copy and paste the CSS into a <style> tag at the top of the document.
  4. Change the font-display to fallback. The Google Fonts URL sets it to swap (via the display=swap parameter), which causes an amount of CLS on the page. fallback also creates CLS, but less of it. The alternative, optional, gives no CLS but can also lead to nothing being rendered at all, especially with certain doc formats like AMP.

You should be left with something similar to:

<!-- Fonts -->
<style>
    /* latin */
    @font-face {
        font-family: 'Inter';
        font-style: normal;
        font-weight: 400;
        font-stretch: 100%;
        font-display: fallback;
        src: url(/fonts/inter-latin-400-normal.woff2) format('woff2'), url(/fonts/inter-latin-400-normal.woff) format('woff');
        unicode-range: U+0000-00FF,U+0131,U+0152-0153,U+02BB-02BC,U+02C6,U+02DA,U+02DC,U+0300-0301,U+0303-0304,U+0308-0309,U+0323,U+0329,U+2000-206F,U+2074,U+20AC,U+2122,U+2191,U+2193,U+2212,U+2215,U+FEFF,U+FFFD;
    }

    /* latin */
    @font-face {
        font-family: 'Inter';
        font-style: normal;
        font-weight: 500;
        font-stretch: 100%;
        font-display: fallback;
        src: url(/fonts/inter-latin-500-normal.woff2) format('woff2'), url(/fonts/inter-latin-500-normal.woff) format('woff');
        unicode-range: U+0000-00FF,U+0131,U+0152-0153,U+02BB-02BC,U+02C6,U+02DA,U+02DC,U+0300-0301,U+0303-0304,U+0308-0309,U+0323,U+0329,U+2000-206F,U+2074,U+20AC,U+2122,U+2191,U+2193,U+2212,U+2215,U+FEFF,U+FFFD;
    }
</style>
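With the fonts self-hosted, you can also tell the browser to fetch the most critical file immediately rather than waiting for the inline CSS to be parsed. The path below matches the @font-face example above; note that font preloads require the crossorigin attribute even when served from your own origin:

```html
<link rel="preload" href="/fonts/inter-latin-400-normal.woff2" as="font" type="font/woff2" crossorigin>
```

Only preload the weight(s) used above the fold; preloading every variant competes with more important resources.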

I mentioned that you should download the font files rather than have them served from Google's CDN. That sounds like bad advice, right? Surely Google's CDN is fast and surely people's browsers have seen that font file from the CDN before so it's already cached from other sites?

Well, that used to be the case, but browsers have recently implemented cache partitioning. This basically means that a file loaded by site A, whether from a third party or not, isn't available to site B from the local cache. Essentially, any font loaded from Google's font CDN is fetched fresh for every site it's used on: it'll be cached for repeat visits, but a visitor's first visit to each site pulls a fresh copy.

Because of that you're better off hosting it locally, as you'll benefit from HTTP/2's ability to multiplex multiple files down the same 'pipe'. If you're interested in how this works we've done an article on the differences between HTTP/1.1, HTTP/2 and the upcoming HTTP/3.

JIT compilation of CSS

I now use Tailwind CSS pretty much exclusively. In the past I used Bootstrap as my UI framework of choice. Both are great, but the approach Tailwind takes means you'll naturally be left with a tiny CSS file compared to vanilla Bootstrap. Why? Well, read on.

Bootstrap

When you use Bootstrap you effectively load the full framework and then override it to look how you want, adding extra styles and classes to make it work for you. It's great, but it leads to a huge CSS file, of which most sites use a tiny percentage. In many cases developers implement their own versions of components that already ship with the framework for one reason or another, which compounds the problem. Technically you don't have to (and shouldn't) load the full framework; you can load only the modules you actually use. But that still leads to plenty of waste: sometimes you only need one small part of a module, and some modules depend on others, so you can end up loading several files to use a single class.

There's a remedy for this with Bootstrap: PurgeCSS. PurgeCSS reads your HTML, works out which classes you're using, and then deletes all of the CSS from your generated file that it doesn't think is in use. As you can imagine this can be problematic at times and requires plenty of testing. It's not ideal, but it can easily reduce your generated CSS by enormous amounts.
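PurgeCSS is typically wired in as a PostCSS plugin in the build pipeline. A minimal config might look something like this (package name per the PurgeCSS docs; the glob paths are illustrative and depend on your project layout):

```js
// postcss.config.js
const purgecss = require('@fullhuman/postcss-purgecss');

module.exports = {
  plugins: [
    purgecss({
      // Files scanned for class names that are actually in use;
      // anything not found here gets stripped from the output CSS
      content: ['./src/**/*.html', './src/**/*.js'],
    }),
  ],
};
```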

Tailwind

So how does Tailwind work?

Tailwind works similarly to PurgeCSS (in fact, older versions of Tailwind used it under the hood). You define where Tailwind can find your templates in the config file, and all of those files are 'watched'. Any classes found in them are compiled from Tailwind's master CSS into your output file, which is then minified.

This results in a tiny CSS file when compared to other frameworks. You're only generating what you need, rather than the alternative which is to generate everything and then delete what you don't need. It also makes it much faster to work with.

More importantly, because you're using it from the start there is far less chance of regressions, or of accidentally deleting classes and CSS that are still in use.

There are times when classes are injected dynamically though, either by server-side code or by JavaScript. To accommodate this, Tailwind allows you to whitelist classes. Tailwind calls this a 'safelist', and you can learn how to use the Tailwind safelist here.
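To give a flavour, in Tailwind v3 the safelist lives in tailwind.config.js and accepts literal class names or regex patterns (the class names below are just examples):

```js
// tailwind.config.js
module.exports = {
  content: ['./src/**/*.{html,js}'],
  safelist: [
    'bg-red-500',                        // a class only ever injected from JS
    { pattern: /^text-(sm|base|lg)$/ },  // or a whole pattern of classes
  ],
};
```

Safelisted classes are always generated, even if the content scanner never sees them in your templates.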

Wrapping up

This obviously isn't an exhaustive list. I've tried to cover some of the more advanced topics that I find make a real, tangible difference to page speed in the real world. If you follow one of the many generic page speed guides out there and then apply the tips from this article, you should see some great results. The real trick, in my experience, is to consider page speed from the start rather than looking at it retrospectively. It's much easier to fix small issues as they arise than to let them become big problems over time.
