Written on the 6th of May 2024, last modified on the 13th of May 2024.
Why improve your website's performance?
You're browsing the internet, searching for information, shopping for products, or maybe just looking to catch up on the latest news. You click on a link, eagerly anticipating what lies beyond. But instead of being greeted with instant gratification, you're met with a loading spinner that seems to spin endlessly. Or perhaps even worse: you try to click a link on the page, but you miss the button because an image finally loaded in and pushed it aside. Not just any image, but an ad, which now takes you to a completely different website than you meant to visit. Infuriating, isn't it?
As users, we've all experienced the impatience and frustration that accompany slow-loading websites. And in a world where a user's time is more valuable than ever, the performance of a website can make or break the user experience. How long would you stay on a website that takes what feels like forever to load or do anything on? If it's not a necessity, your answer is going to be rather short, right? So don't expect anything different from your users.
Over the past few years, I have had the opportunity to improve the performance of websites for hundreds of businesses by developing a new and fast Content Management System, the foundation of these websites. I have been able to do this by looking at the problems from all sorts of angles and finding plenty of improvements. These range from changing how resources are loaded by the browser in the front-end to delving into the database queries made by the back-end. Now I want to share the benefits of putting in all this effort as well as how you can do the same.
First of all, let's discuss why improved performance can lead to a higher retention rate. If a webpage takes too long to load, visitors will leave before you have been able to state your case; put differently, users stay longer on faster websites. And the longer they stay, the more likely they are to take actions on the website, which means an increased conversion rate. That in turn means visitors take the actions you would like them to, turning them into leads for your business and potentially paying customers. But not only that: a better-performing website will also increase user satisfaction, meaning happier clients and customers of your services and products.
Additionally, better-performing websites improve accessibility for visitors with certain constraints. Those on a slower mobile network, for example in a remote area where internet access is limited, can now make use of your website as well. Related to this, search engines reward websites that perform better with a higher page rank. All this means you can reach a larger audience.
How is bad performance felt?
Besides my earlier examples of bad user experience through the slow loading of pages, it is good to know what value is added throughout the loading process. Knowing this, we can look at our experiences with our systems and figure out where we need to focus our attention first. The value added during loading can be broken down into several questions the user will ask themselves.
- Is it working? - At this moment, the user is wondering whether their initiation of the interaction has been registered and is looking for feedback. For instance, the spinner atop the browser indicates the navigation has been successful and it is waiting for data to be returned, but hopefully, the website itself has already started loading in.
- Is it useful? - The first meaningful content has appeared for the user, for example, the text of the article they intend to read. Now they want to know whether the contents are relevant to them; if they aren't, they might click away right there.
- Is it usable? - While the data is loading, the page might not yet be interactable. This could be because there is still some behavior to load in before that button can be pressed, so it doesn't respond to interaction yet.
- Is it enjoyable? - Now everything has loaded, and the user has started to engage with the page. Over the course of their visit, they will want the experience to be delightful as well, not just on the first load. For instance, when the user scrolls or presses a button, does it feel slow to respond to their interactions?
I hope you understand that each of these questions stems from different things happening, not just a single thing called loading. This, in turn, means performance can be improved in many different ways because you are targeting different problems.
Think of two websites that finish loading in the same amount of time, but one only shows its content all at once when everything has been downloaded and processed, while the other shows each item as soon as it can be made available. The latter can offer a better user experience, since the user is able to answer the questions I proposed above during loading instead of all the way at the end. Even though the total loading time is technically the same, the way content is provided can have a big impact on the feel of a page, because its value can be judged at an earlier point in time.
But before we get to actionable steps, we need to overcome another problem first. How do you know whether you are doing better if you don't know how well you did before and how well you are doing now? By measuring it, of course, and to do this, I think it is good to start with Google's Core Web Vitals.
What are Core Web Vitals?
The Core Web Vitals include several metrics for which we can ask "by how much?" or "how long did that take?" They give you an idea of where you can improve and, while you are making improvements, of how well you are doing. The simplest way for anybody to start getting insight into this is to use Google's PageSpeed Insights. Go give it a shot and put a website in there; might I suggest this one.
At the top you might see a section about the real-world experiences of users; whether it appears depends on the amount of traffic a website receives. This section covers the following metrics: Time to First Byte, First Contentful Paint, Largest Contentful Paint, Cumulative Layout Shift, First Input Delay, and Interaction to Next Paint. In the next section you will see some numbers in circles, hopefully as close to 100 as possible, but for now we don't care too much about these. More Core Web Vitals can be found a bit further down, labeled: Speed Index, First Contentful Paint, Largest Contentful Paint, Cumulative Layout Shift, and Total Blocking Time. Together these metrics make up the first part of the report, called, surprisingly enough, Performance.
Let's briefly discuss what each metric means for the end users' experience. Later on we will delve further into each metric to understand what exactly is measured, why it is important to measure and what improvements can be made to a website in order to score better on each metric.
- Time to First Byte - How long the user's device has to wait before it receives the first bit of information back from the website's server when requesting data from it.
- Speed Index - How long it takes for content to start appearing on the screen when visiting a webpage.
- First Contentful Paint - How long the user has to wait to see the first piece of content, like text or images, when visiting a webpage.
- Largest Contentful Paint - How long the user has to wait to see the biggest piece of content to appear on the screen, like the header image or article text, when visiting a webpage.
- Cumulative Layout Shift - How much elements on the webpage unexpectedly move around while the page is being loaded in. For example, during the loading of a page the text has appeared, but then suddenly the image above it loads in. If no room had been reserved for that image to appear in, the text moves out of the way, causing a layout shift.
- Total Blocking Time - How long the user has to wait before the website responds to their interactions while the webpage is loading in. For example, when you hit the button to open up the website's menu, some code has to be loaded in first before the webpage can react to your interaction with the appropriate response.
- First Input Delay - How long the webpage takes to respond to the user's first input after it has loaded the webpage's initial content. Going back to the previous example, this is not the time it takes for the webpage to start listening to the button being pressed, but how long it takes for the website to process that you have pressed the button and would like to see the menu open up.
- Interaction to Next Paint - How long until the user sees a change on the webpage after having interacted with it. Revisiting the menu button again, this would now be the time it takes for the menu to show up after having pressed the button for it. Another example would be scrolling: does the screen immediately show the new content, or is there a delay between scrolling and showing the scrolled-to part of the webpage? This metric measures that delay, as well as that of other interactions.
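To make these descriptions a bit more concrete, here is a small sketch, in plain JavaScript, of how a measured value could be rated against the thresholds Google publishes for three of these metrics (2.5 s and 4 s for LCP, 200 ms and 500 ms for INP, 0.1 and 0.25 for CLS). The `rateVital` helper and its names are my own, not part of any official tooling.

```javascript
// Published thresholds for three Core Web Vitals
// (LCP and INP in milliseconds, CLS is a unitless score).
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 },
  INP: { good: 200,  poor: 500  },
  CLS: { good: 0.1,  poor: 0.25 },
};

// Classify a measured value as "good", "needs improvement" or "poor".
function rateVital(name, value) {
  const t = THRESHOLDS[name];
  if (!t) throw new Error(`Unknown metric: ${name}`);
  if (value <= t.good) return "good";
  if (value <= t.poor) return "needs improvement";
  return "poor";
}

console.log(rateVital("LCP", 1800)); // a 1.8 s LCP rates as "good"
console.log(rateVital("CLS", 0.3));  // a 0.3 layout shift score rates as "poor"
```

This is also roughly how the tools discussed later present their results: not as a raw number alone, but bucketed into good, needs improvement, and poor.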
Are these metrics enough?
No, but they can give a good starting point. Think about how and where different applications provide their value. Take, for example, an image-sharing website. It's nice and all that I can interact with the website, but as long as no images have loaded, no value is being provided. Therefore, having a low time to interactivity is not as important as having the images above the fold load in. Focus on improving this key metric first, which could be a combination of several metrics. Let's say the image-sharing website allows users to save these images in a personal collection. Now, interactivity is a key part of the application as well, so the time until interactivity is a good metric to measure and perhaps improve.
The takeaway should be that after you have done some baseline measuring, you pick a metric that matches the user experience and work towards improving that. This could be multiple metrics, as long as you don't take on too many at once. Yes, doing better in everything would be wonderful, but you only have so much time to spend on it, so make sure this budget is spent well.
One way new metrics can be built is by using new browser APIs, which are in varying stages of being made available.
- User Timing API - Provides access to high precision timestamps for performance measurements of applications.
- Long Tasks API - Allows the detection of long running tasks that use the user interface thread for prolonged periods of time, as well as block other tasks from being able to run and respond to user input.
- Element Timing API - Enables developers to monitor when specified images or text nodes are shown on screen to the user.
- Navigation Timing API - Provides access to timing information as it relates to document navigation.
- Resource Timing API - Provides access to timing information as it relates to resources in a document.
- Server Timing - Specifies a method for a server to communicate performance metrics to the browser.
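As an example of the first of these, the User Timing API can be called directly from your own code, in the browser as well as in Node. The sketch below times a placeholder piece of work; the mark and measure names are arbitrary ones I chose for illustration.

```javascript
// Mark the start of the work we want to time.
performance.mark("render-start");

// ...the actual work goes here; a placeholder loop stands in for it.
let total = 0;
for (let i = 0; i < 1e6; i++) total += i;

// Mark the end, then create a named measure spanning the two marks.
performance.mark("render-end");
performance.measure("render", "render-start", "render-end");

// The measure is now a performance entry with a high-precision duration.
const [measure] = performance.getEntriesByName("render");
console.log(`${measure.name} took ${measure.duration.toFixed(1)} ms`);
```

Because these measures show up as regular performance entries, they sit alongside the browser's built-in timings in the Performance tab and can be collected by the same observers.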
More Core Web Vitals are likely to be added in the future, and perhaps some of the existing metrics will be removed. For now, these are the ones that can easily be measured. How easily? Well, as shown earlier, it can be as easy as visiting a link, but there is slightly more to it.
How do I measure these vitals?
First, think about what you want to measure. The Core Web Vitals, right? Well, there is more to it. You can be more specific than that, and it is probably best to start more specific and, as you improve, widen the scope. Your users are dealing with a lot of different environments out in the real world. Some might be visiting on their phone connected to the internet via cellular data, whereas another uses a desktop computer with a wired internet connection. This also means there can be many variables you should - and to some extent can - control for.
We can, therefore, measure these metrics in two circumstances: in a controlled environment and, of course, the opposite — an uncontrolled environment. With a controlled environment, you can, before releasing an application or feature, ensure it performs as expected and prevent regression. In an uncontrolled environment, you would, after release, track what the actual experience is like for the end users. The results of real user monitoring can, of course, drastically differ between users because of the varying circumstances mentioned before. But nonetheless, the field results can give valuable insight into what your users are encountering, which lab testing won't be able to show. But for now, we will be looking at making lab measurements since these are easier to produce and reproduce without having a large existing user base.
Start by asking yourself what environment you will be measuring. This should be close to what your actual users are using; for example, the device. If most users visit via their phones, then start with that. In addition, take into account whether the server is running on your local machine or in a testing environment on a remote server. The former will show better what impact your changes are having, but can be misleading, since it doesn't include the same latency on each request that users will experience in the real world. This latency can, however, often be simulated by deliberately slowing down the network connection via throttling.
When you want to compare measurements, always measure more than once, and try to change only a single thing at a time to see how much of an impact it has. This means: don't switch to a completely different device between measurements, don't change from Wi-Fi to cellular data, and don't make multiple edits to the codebase at once.
As a little aside, take caching into account. I am not referring to caching on the testing device, although clearing that between measurements should not be forgotten either. Your server might need to create data on first use, for example when generating a cropped image dynamically. The first time the image is requested, it will not exist yet, so the request will take longer. The next request will be substantially faster if the image is stored on the server. This can often be seen when running a measurement multiple times before making any edits: is the value consistent, or is there a lot of variability? If the first run is the slowest, this can indicate caching. If all runs show a lot of variability, do ask yourself how reliable any conclusions you draw from these tests can be.
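To illustrate, here is a hypothetical sketch of how repeated samples of, say, Time to First Byte could be summarized. The `summarize` helper, its 1.5x cold-start heuristic, and the sample values are all made up for illustration.

```javascript
// Summarize repeated measurements of the same metric: the median is more
// robust to outliers than the mean, and a slow first run often points at
// cold caches on the server.
function summarize(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  const median = sorted.length % 2
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
  // Spread between the fastest and slowest run: a rough variability check.
  const spread = sorted[sorted.length - 1] - sorted[0];
  // Flag the first sample when it is well above the overall median,
  // a common sign the server was filling a cache on the first request.
  const coldStart = samples[0] > median * 1.5;
  return { median, spread, coldStart };
}

// Five hypothetical TTFB samples in milliseconds; note the slow first run.
console.log(summarize([900, 210, 190, 220, 205]));
// { median: 210, spread: 710, coldStart: true }
```

A summary like this answers both questions from the aside at once: the spread tells you how consistent the runs are, and the cold-start flag hints at server-side caching.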
After all that, we can finally talk about tools you can use. The first and simplest one we have already come across is PageSpeed Insights, a website provided by Google. You provide it with the URL of the website, and it runs a series of tests and measurements for you. Unfortunately, it measures a lot at once, which slows down how quickly you can iterate on changes; it caches its own results; and the webpage has to be publicly available already. So ultimately, it is only really useful for showing off to clients and for validating claims that you or others make about a website.
PageSpeed Insights makes use of Lighthouse, the second tool you can use. This runs locally on your computer and is accessible from the developer tools in several Chromium-based browsers, the most common one being, of course, Google Chrome. Because it utilizes your device's hardware and network connection, its results can vary based on your location and the hardware you use. This makes the tests you record not comparable to those of others, but it is a lot quicker than using an external service. It lets you iterate more quickly without making the webpage publicly available, which makes it the tool of choice when developing the website locally.
While on the subject of Google Chrome's developer tools, there are additional tools that can help you learn more about what goes on inside your browser. When it comes to performance, these are the Network, Performance, and Memory tabs. They give you insight into what the connection between the server and the browser is doing, as well as into the processor and memory being used.
Some of the metrics like First Input Delay and Interaction to Next Paint aren't tested by Lighthouse itself since these require interactions with the page. There does exist a Chrome extension called Web Vitals which can log these as well as some other metrics to the console or a panel floating on top of the website.
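Beyond the extension, the browser's own `PerformanceObserver` can report some of these entries from your own code. Below is a minimal sketch, assuming a browser environment for the observer wiring (`largest-contentful-paint` entries do not exist outside browsers, hence the guard); `toReport` is a hypothetical helper name.

```javascript
// Turn a performance entry into a small report object; the startTime field
// mirrors what a largest-contentful-paint entry exposes in the browser.
function toReport(entry) {
  return { metric: "LCP", ms: Math.round(entry.startTime) };
}

// Browser-only wiring: log LCP candidate entries as they come in. The
// guard keeps this snippet inert outside a browser environment.
if (typeof window !== "undefined" && "PerformanceObserver" in window) {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(toReport(entry));
    }
  });
  // buffered: true also delivers entries recorded before observing started.
  observer.observe({ type: "largest-contentful-paint", buffered: true });
}
```

Instead of logging to the console, the report objects could just as well be sent to your own analytics endpoint, which is essentially what real user monitoring tools do.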
Want to know what your actual users are experiencing? This can be seen using PageSpeed Insights, but a better place to see it is the Google Search Console. Once you have validated domain ownership and enough users have visited the website for a long enough period, you can see an overview of the actual users' experience there as well.
Finally, I want to draw your attention to Unlighthouse, which is similar to Lighthouse, but instead of measuring a single page, it can do this for every page on the website. This makes it an ideal tool for finding specific pages which might experience unique problems and can help validate that the improvements are being made across the website and not just a single page.
How do you improve the performance of a website?
Now that we know why performance matters and how we can measure it, it comes down to the actual work of improving it. This is, well, a very lengthy answer that will come in many parts since a single article won't be able to contain all the important information.
The main reason for this is that every site is different; you might experience issues that someone else might not. Sometimes these are simple fixes, whereas other times a fundamental problem of the website's architecture limits its performance. Think about the different ways websites can be built: they use different architectures, each with different strengths and weaknesses based on the needs of the users and of the team building them. Picture two websites that show content in the same amount of time, but one feels a lot slower. The source of the problem could be that the page takes a lot longer to become interactive. The page might appear in the same time as the other, but not being able to do anything with it can feel frustrating. The reason might be that richer user interactions are required, so a lot more code needs to be transferred and loaded for that to happen. So remember, looks can be deceiving, and the cause of these differences can lie in fundamental decisions and needs, to which not every solution can or even should apply.
All in all, this means a simple one-size-fits-all solution won't be possible.