<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Performance on Smashing Magazine — For Web Designers And Developers</title><link>https://www.smashingmagazine.com/category/performance/index.xml</link><description>Recent content in Performance on Smashing Magazine — For Web Designers And Developers</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Mon, 09 Feb 2026 03:03:08 +0000</lastBuildDate><item><author>Mansoor Ahmed Khan</author><title>From Chaos To Clarity: Simplifying Server Management With AI And Automation</title><link>https://www.smashingmagazine.com/2025/11/simplifying-server-management-ai-automation/</link><pubDate>Tue, 18 Nov 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/11/simplifying-server-management-ai-automation/</guid><description>Server chaos doesn’t have to be the norm. AI-ready infrastructure and automation can bring clarity, performance, and focus back to your web work.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/11/simplifying-server-management-ai-automation/" />
              <title>From Chaos To Clarity: Simplifying Server Management With AI And Automation</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>From Chaos To Clarity: Simplifying Server Management With AI And Automation</h1>
                  
                    
                    <address>Mansoor Ahmed Khan</address>
                  
                  <time datetime="2025-11-18T10:00:00&#43;00:00" class="op-published">2025-11-18T10:00:00+00:00</time>
              <time datetime="2026-02-09T03:03:08&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                <p>This article is sponsored by <b>Cloudways</b></p>
                

<p>If you build or manage websites for a living, you know the feeling. Your day is a constant juggle; one moment you’re fine-tuning a design, the next you’re troubleshooting a slow server or a mysterious error. Daily management of a complex web of plugins, integrations, and performance tools often feels like you’re just reacting to problems—putting out fires instead of building something new.</p>

<p>This reactive cycle is exhausting, and it pulls your focus away from meaningful work and into the technical weeds. A recent industry event, <a href="https://www.cloudways.com/en/bfcm-prepathon.php">Cloudways Prepathon 2025</a>, put a sharp focus on this very challenge. The discussions made it clear: the future of web work demands a better way. It requires an infrastructure that’s ready for AI; one that can actively help you turn this daily chaos into clarity.</p>

<p><em>The stakes for performance are higher than ever.</em></p>

<p>Suhaib Zaheer, SVP of Managed Hosting at DigitalOcean, and Ali Ahmed Khan, Sr. Director of Product Management, shared a telling statistic during their panel: <strong><a href="https://www.thinkwithgoogle.com/consumer-insights/consumer-trends/mobile-site-load-time-statistics/">53% of mobile visitors</a> will leave a site if it takes more than three seconds to load.</strong></p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg"
			
			sizes="100vw"
			alt="Google data showing mobile page speed"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Data from Google underscores the critical importance of mobile page speed for retaining visitors. (Image Source: <a href='https://www.thinkwithgoogle.com/consumer-insights/consumer-trends/mobile-site-load-time-statistics/'>Think with Google</a>) (<a href='https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Think about that for a second: in barely three seconds, more than half of your potential traffic is gone. This isn’t just about a slow website, but about lost trust, abandoned carts, and missed opportunities. Performance is no longer just a feature; it’s the foundation of user experience. And in today’s landscape, automation is the key to maintaining it consistently.</p>

<p>So how do we stop reacting and start preventing?</p>

<h2 id="the-old-way-a-constant-state-of-alert">The Old Way: A Constant State Of Alert</h2>

<p>For too long, server management has worked like this: something breaks, you receive an alert (or worse, a client complaint), and you start digging. You log into your server, check logs, try to correlate different metrics, and eventually (hopefully) find the root cause. Then you manually apply a fix.</p>

<p>This process is fragile. It relies on your constant attention and eats up hours that could be spent on development, strategy, or client work. For freelancers and small teams, that time is your most valuable asset. Every minute spent manually diagnosing a disk space issue or a web stack failure is a minute not spent growing your business.</p>

<p>The problem isn&rsquo;t a lack of tools. It&rsquo;s that most tools just show you the data; they don&rsquo;t help you understand it or act on it. They add to the noise instead of providing clarity.</p>

<h2 id="a-new-approach-from-diagnosis-to-automatic-resolution">A New Approach: From Diagnosis To Automatic Resolution</h2>

<p>This is where a shift towards intelligent automation changes the game. Tools like <a href="https://www.cloudways.com/en/cloudways-ai-copilot.php">Cloudways Copilot</a>, which became generally available earlier this year, are built specifically to simplify this workflow. The goal is straightforward: combine AI-driven diagnostics with automated fixes to predict and resolve performance issues before they affect your users.</p>

<p>Here’s a practical look at how it works.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png"
			
			sizes="100vw"
			alt="Cloudways Copilot workflow"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Cloudways Copilot workflow: Continuous monitoring leads to instant alerts, AI-powered diagnosis, and actionable recommendations. (Image source: <a href='https://www.cloudways.com/en/cloudways-ai-copilot.php'>Cloudways</a>) (<a href='https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Imagine your site starts running slowly. In the past, you&rsquo;d begin the tedious investigation.</p>

<h3 id="1-the-ai-insights">1. The AI Insights</h3>

<p>Instead of a generic &ldquo;high CPU&rdquo; alert, you get a detailed insight. It tells you what happened (e.g., &ldquo;MySQL process is consuming excessive resources&rdquo;), why it happened (e.g., &ldquo;caused by a poorly optimized query from a recent plugin update&rdquo;), and provides a step-by-step guide to fix it manually. This alone cuts diagnosis time from 30&ndash;40 minutes down to about five. You understand the problem, not just the alert.</p>

<h3 id="2-the-smartfix">2. The SmartFix</h3>

<p>This is where it moves from helpful to transformative. For common issues, you don’t just get a manual guide. You get a one-click <em>SmartFix</em> button. After reviewing the actions Copilot will take, you can let it automatically resolve the issue. It applies the necessary steps safely and without you needing to touch a command line. This is the clarity we’re talking about. The system doesn’t just tell you about the problem; it solves it for you.</p>

<p>For developers managing multiple sites, this is a fundamental change. It means you can handle routine server issues at scale. A disk cleanup that would have required logging into ten different servers can now be handled with a few clicks. It frees your brain from repetitive troubleshooting and lets you focus on the work that actually requires your expertise.</p>

<h2 id="building-an-ai-ready-foundation">Building An AI-Ready Foundation</h2>

<p>The principles discussed at Prepathon go beyond any single tool. The theme was about building a resilient foundation. Meeky Hwang, CEO at Ndevr, introduced the <em>&ldquo;3E Framework,&rdquo;</em> which perfectly applies here. A strong platform must balance:</p>

<ul>
<li><strong>Audience Experience</strong><br />
What your visitors see and feel—blazing speed and seamless operation.</li>
<li><strong>Creator Experience</strong><br />
The workflow for you and your team—managing content and marketing without technical friction.</li>
<li><strong>Developer Experience</strong><br />
The backend foundation—server management that is secure, stable, and efficient.</li>
</ul>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png"
			
			sizes="100vw"
			alt="3E Framework"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A balanced platform is a resilient one. The 3E Framework shows how a strong foundation depends on three connected experiences. (Image source: <a href='https://www.cloudways.com/en/video/event-replays/prepathon-2025/from-fragile-to-ai-ready-websites-prepathon-2025'>Meeky Hwang / Ndevr</a>) (<a href='https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>AI-driven server management directly strengthens all three. A faster, more stable server improves the <em>Audience Experience</em>. Fewer emergencies and simpler workflows improve the <em>Creator</em> and <em>Developer Experience</em>. When these are aligned, you can scale with confidence.</p>

<h2 id="this-isn-t-about-replacing-you">This Isn’t About Replacing You</h2>

<p>It’s important to be clear: this isn’t about replacing developers, but about augmenting their capabilities. As Vito Peleg, Co-founder &amp; CEO at Atarim, noted during <a href="https://www.cloudways.com/en/video/event-replays/prepathon-2025/whats-truly-working-in-ai-marketing-and-tech-prepathon-2025">Prepathon</a>:</p>

<blockquote>“We’re all becoming prompt engineers in the modern world. Our job is no longer to do the task, but to orchestrate the fleet of AI agents that can do it at a scale we never could alone.”<br /><br />&mdash; Vito Peleg, Co-founder &amp; CEO at Atarim</blockquote>

<p>Think of <a href="https://www.cloudways.com/en/cloudways-ai-copilot.php">Cloudways Copilot</a> as an expert sysadmin on your team. It handles the routine, often tedious, work. It alerts you to what’s important and provides clear, actionable context. This gives you back the mental space and time to focus on architecture, innovation, and client strategy.</p>

<blockquote>“The challenge isn’t managing servers anymore &mdash; it’s managing focus,”<br /><br /><a href="https://www.linkedin.com/in/zaheersuhaib/">Suhaib Zaheer</a> noted.<br /><br />“AI-driven infrastructure should help developers spend less time reacting to issues and more time creating better digital experiences.”</blockquote>

<h2 id="a-practical-path-forward">A Practical Path Forward</h2>

<p>For freelancers, WordPress experts, and small agency developers, this shift offers a tangible way to:</p>

<ul>
<li>Drastically reduce the hours spent manually troubleshooting infrastructure issues.</li>
<li>Implement predictive monitoring that catches slowdowns and bottlenecks early.</li>
<li>Manage your entire stack through clear, plain-English AI insights instead of raw data.</li>
<li>Balance speed, security, and uptime without needing an enterprise-scale budget or team.</li>
</ul>

<p>The goal is to make powerful infrastructure simple, while also giving you back control and your time so you can focus on what you do best: creating exceptional web experiences.</p>

<p><em>You can <a href="https://unified.cloudways.com/signup?coupon=BFCM5050">use promo code BFCM5050</a> to get 50% off for 3 months plus 50 Free Migrations using Cloudways. This offer is valid from November 18th to December 4th, 2025.</em></p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Matt Zeunert</author><title>Effectively Monitoring Web Performance</title><link>https://www.smashingmagazine.com/2025/11/effectively-monitoring-web-performance/</link><pubDate>Tue, 11 Nov 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/11/effectively-monitoring-web-performance/</guid><description>There are lots of tips for &lt;a href="https://www.debugbear.com/blog/improve-website-performance?utm_campaign=sm-10">improving your website performance&lt;/a>. But even if you follow all of the advice, are you able to maintain an optimized site? And are you targeting the right pages? Matt Zeunert outlines an effective strategy for web performance optimization and explains the roles that different types of data play in it.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/11/effectively-monitoring-web-performance/" />
              <title>Effectively Monitoring Web Performance</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Effectively Monitoring Web Performance</h1>
                  
                    
                    <address>Matt Zeunert</address>
                  
                  <time datetime="2025-11-11T10:00:00&#43;00:00" class="op-published">2025-11-11T10:00:00+00:00</time>
                  <time datetime="2026-02-09T03:03:08&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                <p>This article is sponsored by <b>DebugBear</b></p>
                

<p><a href="https://www.smashingmagazine.com/2023/08/running-page-speed-test-monitoring-versus-measuring/">There’s no single way to measure website performance.</a> That said, the <a href="https://www.smashingmagazine.com/2024/04/monitor-optimize-google-core-web-vitals/">Core Web Vitals</a> metrics that Google <a href="https://www.debugbear.com/docs/page-speed-seo?utm_campaign=sm-10">uses as a ranking factor</a> are a great starting point, as they cover different aspects of visitor experience:</p>

<ul>
<li><strong>Largest Contentful Paint (LCP):</strong> Measures the initial page load time.</li>
<li><strong>Cumulative Layout Shift (CLS)</strong>: Measures if content is stable after rendering.</li>
<li><strong>Interaction to Next Paint (INP)</strong>: Measures how quickly the page responds to user input.</li>
</ul>
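
<p>Each of these metrics has published &ldquo;good&rdquo; and &ldquo;poor&rdquo; thresholds (for example, a &ldquo;good&rdquo; LCP is 2.5 seconds or less at the 75th percentile). A minimal sketch of how a raw value maps to a rating:</p>

```javascript
// Core Web Vitals thresholds as published by Google: milliseconds for
// LCP and INP, a unitless score for CLS. Values at or below the first
// number are "good"; values above the second are "poor".
const THRESHOLDS = {
  LCP: [2500, 4000],
  CLS: [0.1, 0.25],
  INP: [200, 500],
};

function rateVital(name, value) {
  const [good, poor] = THRESHOLDS[name];
  if (value <= good) return 'good';
  if (value <= poor) return 'needs-improvement';
  return 'poor';
}

console.log(rateVital('LCP', 1800)); // → good
console.log(rateVital('INP', 600)); // → poor
```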

<p>There are also <a href="https://www.debugbear.com/docs/web-performance-metrics?utm_campaign=sm-10">many other web performance metrics</a> that you can use to track technical aspects, like page weight or server response time. While these often don’t matter directly to the end user, they provide you with insight into what’s slowing down your pages.</p>

<p>You can also use the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Performance_API/User_timing">User Timing API</a> to track page load milestones that are important on your website specifically.</p>
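
<p>For example, you can mark the moments just before and after a key component renders and measure the time between them (the milestone names here, like <code>hero-start</code>, are hypothetical):</p>

```javascript
// User Timing API sketch: record custom milestones with performance.mark()
// and compute the span between them with performance.measure(). The
// milestone names are made-up examples for illustration.
performance.mark('hero-start');
// ... application work happens here, e.g. fetching and rendering the hero ...
performance.mark('hero-rendered');

performance.measure('hero-render-time', 'hero-start', 'hero-rendered');
const [measure] = performance.getEntriesByName('hero-render-time');
console.log(`${measure.name}: ${measure.duration.toFixed(1)}ms`);
```

<p>Because these entries go through the standard Performance Timeline, RUM tooling can pick them up alongside the built-in metrics.</p>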

<h2 id="synthetic-and-real-user-data">Synthetic And Real User Data</h2>

<p>There are <a href="https://www.debugbear.com/blog/synthetic-vs-rum?utm_campaing=sm-10">two different types</a> of web performance data:</p>

<ul>
<li><strong>Synthetic tests</strong> are run in a controlled test environment.</li>
<li><strong>Real user data</strong> is collected from actual website visitors.</li>
</ul>

<p>Synthetic monitoring can provide super-detailed reports to help you identify page speed issues. You can configure exactly how you want to collect the data, picking a specific network speed, device size, or test location.</p>

<p>Get a hands-on feel for synthetic monitoring by using the free <a href="https://www.debugbear.com/test/website-speed?utm_campaign=sm-10">DebugBear website speed test</a> to check on your website.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/effectively-monitoring-web-performance/1-debugbear-page-speed-report.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="672"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/1-debugbear-page-speed-report.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/effectively-monitoring-web-performance/1-debugbear-page-speed-report.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/effectively-monitoring-web-performance/1-debugbear-page-speed-report.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/effectively-monitoring-web-performance/1-debugbear-page-speed-report.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/effectively-monitoring-web-performance/1-debugbear-page-speed-report.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/1-debugbear-page-speed-report.png"
			
			sizes="100vw"
			alt="DebugBear website speed report"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/effectively-monitoring-web-performance/1-debugbear-page-speed-report.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>That said, your synthetic test settings might not match what’s typical for your real visitors, and you can’t script all of the possible ways that people might interact with your website.</p>

<p>That’s why you also need real user monitoring (RUM). Instead of looking at one experience, you see different load times and how specific visitor segments are impacted. You can review specific page views to identify what caused poor performance for a particular visitor.</p>

<p>At the same time, real user data isn’t quite as detailed as synthetic test reports, due to web API limitations and performance concerns.</p>

<p>DebugBear offers both <a href="https://www.debugbear.com/synthetic-website-monitoring?utm_campaign=sm-10">synthetic monitoring</a> and <a href="https://www.debugbear.com/real-user-monitoring?utm_campaign=sm-10">real user monitoring</a>:</p>

<ul>
<li>To set up synthetic tests, you just need to enter a website URL, and</li>
<li>To collect real user metrics, you need to install an analytics snippet on your website.</li>
</ul>

<h2 id="three-steps-to-a-fast-website">Three Steps To A Fast Website</h2>

<p>Collecting data helps you throughout the lifecycle of your web performance optimizations. You can follow this three-step process:</p>

<ol>
<li><strong>Identify</strong>: Collect data across your website and identify slow visitor experiences.</li>
<li><strong>Diagnose</strong>: Dive deep into technical analysis to find optimizations.</li>
<li><strong>Monitor</strong>: Check that optimizations are working and get alerted to performance regressions.</li>
</ol>

<p>Let’s take a look at each step in detail.</p>

<h2 id="step-1-identify-slow-visitor-experiences">Step 1: Identify Slow Visitor Experiences</h2>

<p>What’s prompting you to look into website performance issues in the first place? You likely already have some specific issues in mind, whether that’s from customer reports or because of poor scores in the <a href="https://www.debugbear.com/blog/search-console-core-web-vitals?utm_campaign=sm-10">Core Web Vitals section of Google Search Console</a>.</p>

<p>Real user data is the best place to check for slow pages. It tells you whether the technical issues on your site actually result in poor user experience. It’s easy to collect across your whole website (while synthetic tests need to be set up for each URL), and you can often get a view count along with the performance metrics. A moderately slow page that gets two visitors a month isn’t as important as a moderately fast page that gets thousands of visits a day.</p>
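
<p>One way to think about this trade-off is to weight how far a page exceeds the &ldquo;good&rdquo; threshold by how many visitors see it. The scoring formula below is an illustrative heuristic, not a DebugBear metric:</p>

```javascript
// Made-up prioritization heuristic: multiply the amount by which a
// page's 75th-percentile LCP exceeds the 2.5s "good" threshold by its
// traffic, so busy-but-slow pages rise to the top of the backlog.
function priorityScore(page) {
  const GOOD_LCP_MS = 2500; // Core Web Vitals "good" LCP threshold
  const excess = Math.max(0, page.p75LcpMs - GOOD_LCP_MS);
  return excess * page.monthlyViews;
}

const pages = [
  { url: '/docs/setup', p75LcpMs: 5200, monthlyViews: 2 },
  { url: '/pricing', p75LcpMs: 3100, monthlyViews: 40000 },
];
pages.sort((a, b) => priorityScore(b) - priorityScore(a));
console.log(pages[0].url); // → /pricing
```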

<p>The Web Vitals dashboard in DebugBear’s RUM product checks your site’s performance health and surfaces the most-visited pages and URLs where many visitors have a poor experience.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/effectively-monitoring-web-performance/2-web-vitals-dashboard-debugbear-rum-product.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="644"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/2-web-vitals-dashboard-debugbear-rum-product.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/effectively-monitoring-web-performance/2-web-vitals-dashboard-debugbear-rum-product.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/effectively-monitoring-web-performance/2-web-vitals-dashboard-debugbear-rum-product.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/effectively-monitoring-web-performance/2-web-vitals-dashboard-debugbear-rum-product.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/effectively-monitoring-web-performance/2-web-vitals-dashboard-debugbear-rum-product.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/2-web-vitals-dashboard-debugbear-rum-product.png"
			
			sizes="100vw"
			alt="Web Vitals dashboard in DebugBear’s RUM product"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/effectively-monitoring-web-performance/2-web-vitals-dashboard-debugbear-rum-product.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>You can also run a <a href="https://www.debugbear.com/docs/website-scan?utm_campaign=sm-10">website scan</a> to get a list of URLs from your sitemap and then check each of these pages against real user data from Google’s <a href="https://developer.chrome.com/docs/crux">Chrome User Experience Report (CrUX)</a>. However, this will only work for pages that meet a minimum traffic threshold to be included in the CrUX dataset.</p>
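
<p>If you want to pull CrUX field data yourself, Google also exposes it through the public CrUX API. A hedged sketch of building such a request (the endpoint and body shape follow Google’s published API; <code>CRUX_API_KEY</code> is a placeholder for your own key):</p>

```javascript
// Sketch: building a query against Google's public CrUX API for field
// data on a single URL. Assumes the documented records:queryRecord
// endpoint; the API key here is a placeholder.
const CRUX_ENDPOINT =
  'https://chromeuserexperiencereport.googleapis.com/v1/records:queryRecord';

function buildCruxRequest(url, apiKey, formFactor = 'PHONE') {
  return {
    endpoint: `${CRUX_ENDPOINT}?key=${apiKey}`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url, formFactor }),
    },
  };
}

// Usage (needs a real API key and network access):
// const { endpoint, options } = buildCruxRequest('https://example.com/', CRUX_API_KEY);
// const data = await (await fetch(endpoint, options)).json();
// console.log(data.record.metrics.largest_contentful_paint.percentiles.p75);
```

<p>Remember that the API returns no record at all for URLs below CrUX’s traffic threshold, so handle the not-found case.</p>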

<p>The scan result highlights pages with poor web vitals scores where you might want to investigate further.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/effectively-monitoring-web-performance/3-website-scan-result-debugbear.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="632"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/3-website-scan-result-debugbear.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/effectively-monitoring-web-performance/3-website-scan-result-debugbear.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/effectively-monitoring-web-performance/3-website-scan-result-debugbear.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/effectively-monitoring-web-performance/3-website-scan-result-debugbear.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/effectively-monitoring-web-performance/3-website-scan-result-debugbear.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/3-website-scan-result-debugbear.png"
			
			sizes="100vw"
			alt="Website scan result for ahrefs.com"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/effectively-monitoring-web-performance/3-website-scan-result-debugbear.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>If no real user data is available, you can use <a href="https://www.debugbear.com/software/unlighthouse-website-scan">Unlighthouse</a>, a scanning tool based on Google’s Lighthouse. It runs synthetic tests for each page and lets you filter through the results to identify pages that need to be optimized.</p>

<h2 id="step-2-diagnose-web-performance-issues">Step 2: Diagnose Web Performance Issues</h2>

<p>Once you’ve identified slow pages on your website, you need to look at what’s actually happening on each page to cause those delays.</p>

<h3 id="debugging-page-load-time">Debugging Page Load Time</h3>

<p>If there are issues with page load time metrics &mdash; like the <a href="https://www.debugbear.com/docs/metrics/largest-contentful-paint?utm_campaign=sm-10">Largest Contentful Paint (LCP)</a> &mdash; synthetic test results can provide a detailed analysis. You can also run <a href="https://www.debugbear.com/docs/experiments?utm_campaign=sm-10">page speed experiments</a> to try out and measure the impact of certain optimizations.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/effectively-monitoring-web-performance/4-page-speed-recommendations.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="652"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/4-page-speed-recommendations.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/effectively-monitoring-web-performance/4-page-speed-recommendations.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/effectively-monitoring-web-performance/4-page-speed-recommendations.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/effectively-monitoring-web-performance/4-page-speed-recommendations.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/effectively-monitoring-web-performance/4-page-speed-recommendations.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/4-page-speed-recommendations.png"
			
			sizes="100vw"
			alt="Page speed recommendations in synthetic data"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/effectively-monitoring-web-performance/4-page-speed-recommendations.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Real user data can still be important when debugging page speed, as load time depends on many user- and device-specific factors. For example, depending on the size of the user’s device, the page element that’s responsible for the LCP can vary. RUM data can provide a breakdown of possible influencing factors, like CSS selectors and image URLs, across all visitors, helping you zero in on what exactly needs to be fixed.</p>
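<p>As an illustrative sketch of how this works under the hood, the browser’s <code>PerformanceObserver</code> API can report which element produced the LCP for a given visitor. The <code>describeLcpEntry()</code> helper below is our own naming for this sketch, not part of any RUM product:</p>

```javascript
// Describe the element responsible for LCP, roughly as a RUM script
// would record it (tag, id, and classes as a CSS-selector-like string).
function describeLcpEntry(entry) {
  const el = entry.element;
  const selector = el
    ? el.tagName.toLowerCase() +
      (el.id ? "#" + el.id : "") +
      (el.className ? "." + String(el.className).trim().split(/\s+/).join(".") : "")
    : "(element removed)";
  return { selector, url: entry.url || null, startTime: Math.round(entry.startTime) };
}

// Browser-only: buffered entries include LCP candidates emitted before
// this script ran. The guard keeps the snippet harmless elsewhere.
if (typeof PerformanceObserver !== "undefined" &&
    PerformanceObserver.supportedEntryTypes.includes("largest-contentful-paint")) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) console.log(describeLcpEntry(entry));
  }).observe({ type: "largest-contentful-paint", buffered: true });
}
```

<p>Pasting this into the DevTools console on a loaded page prints the current LCP candidate, which you can then compare across device sizes.</p>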

<h3 id="debugging-slow-interactions">Debugging Slow Interactions</h3>

<p>RUM data is also generally needed to properly diagnose issues related to the <a href="https://debugbear.com/docs/rum/fix-inp-issues?utm_campaign=sm-10">Interaction to Next Paint (INP)</a> metric. Specifically, real user data can provide insight into what causes slow interactions, which helps you answer questions like:</p>

<ul>
<li>What page elements are responsible?</li>
<li>Is time spent processing already-active background tasks or handling the interaction itself?</li>
<li>What scripts contribute the most to overall CPU processing time?</li>
</ul>

<p>You can view this data at a high level to identify trends, as well as review specific page views to see what impacted an individual visitor’s experience.</p>
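<p>RUM tools derive this kind of breakdown from the browser’s Event Timing API. As a rough sketch (the <code>breakDownInteraction()</code> helper name and the 200&nbsp;ms threshold are our own choices), a slow interaction splits into input delay, processing time, and presentation delay:</p>

```javascript
// Split one Event Timing entry into the three phases that make up INP.
// The timestamp fields follow the Event Timing API.
function breakDownInteraction(entry) {
  return {
    event: entry.name,
    inputDelay: entry.processingStart - entry.startTime,          // waiting on busy main thread
    processingTime: entry.processingEnd - entry.processingStart,  // running event handlers
    presentationDelay: entry.startTime + entry.duration - entry.processingEnd, // until next paint
  };
}

// Browser-only: log interactions slower than 200 ms as they happen.
if (typeof PerformanceObserver !== "undefined" &&
    PerformanceObserver.supportedEntryTypes.includes("event")) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (entry.duration > 200) console.log(breakDownInteraction(entry));
    }
  }).observe({ type: "event", durationThreshold: 16 });
}
```

<p>A large input delay points at background tasks already hogging the main thread, while a large processing time points at the interaction’s own handlers.</p>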














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/effectively-monitoring-web-performance/5-inp-interaction-element.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="642"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/5-inp-interaction-element.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/effectively-monitoring-web-performance/5-inp-interaction-element.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/effectively-monitoring-web-performance/5-inp-interaction-element.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/effectively-monitoring-web-performance/5-inp-interaction-element.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/effectively-monitoring-web-performance/5-inp-interaction-element.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/5-inp-interaction-element.png"
			
			sizes="100vw"
			alt="Interaction to Next Paint metric, which reviews specific page views"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/effectively-monitoring-web-performance/5-inp-interaction-element.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="step-3-monitor-performance-respond-to-regressions">Step 3: Monitor Performance &amp; Respond To Regressions</h2>

<p>Continuous monitoring of your website performance lets you track whether the performance is improving after making a change, and alerts you when scores decline.</p>

<p>How you respond to performance regressions depends on whether you’re looking at lab-based synthetic tests or real user analytics.</p>

<h3 id="synthetic-data">Synthetic Data</h3>

<p>Test settings for synthetic tests are standardized between runs. While infrastructure changes, like browser upgrades, occasionally affect results, performance is largely determined by the resources the website loads and the code it runs.</p>

<p>When a metric changes, DebugBear lets you view a before-and-after comparison between the two test results. For example, the next screenshot displays a regression in the First Contentful Paint (FCP) metric. The comparison reveals that new images were added to the page, <a href="https://www.debugbear.com/blog/bandwidth-competition-page-speed?utm_campaign=sm-10">competing for bandwidth with other page resources</a>.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/effectively-monitoring-web-performance/6-synthetic-tests.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="720"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/6-synthetic-tests.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/effectively-monitoring-web-performance/6-synthetic-tests.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/effectively-monitoring-web-performance/6-synthetic-tests.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/effectively-monitoring-web-performance/6-synthetic-tests.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/effectively-monitoring-web-performance/6-synthetic-tests.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/6-synthetic-tests.png"
			
			sizes="100vw"
			alt="Before-and-after comparison between the two synthetic test results"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/effectively-monitoring-web-performance/6-synthetic-tests.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>From the report, it’s clear that a CSS file that previously took 255 milliseconds to load now takes 915 milliseconds. Since stylesheets are required to render page content, the page now displays more slowly, and the comparison tells you exactly what needs optimization.</p>
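<p>If you don’t have a monitoring tool at hand, a comparison like this can be approximated from two waterfall or HAR exports. A minimal sketch, assuming request durations in milliseconds keyed by URL (the data shape and helper name are our own assumptions):</p>

```javascript
// Compare request durations from two test runs and surface the URLs
// that slowed down the most.
function biggestRegressions(before, after, limit = 3) {
  return Object.keys(after)
    .filter((url) => url in before)
    .map((url) => ({ url, before: before[url], after: after[url], delta: after[url] - before[url] }))
    .filter((r) => r.delta > 0)              // keep only regressions
    .sort((a, b) => b.delta - a.delta)       // worst first
    .slice(0, limit);
}

// Example mirroring the report above: the stylesheet regressed by 660 ms.
console.log(biggestRegressions(
  { "/app.css": 255, "/app.js": 100 },
  { "/app.css": 915, "/app.js": 90 }
));
```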

<h3 id="real-user-data">Real User Data</h3>

<p>When you see a change in real user metrics, there can be two causes:</p>

<ol>
<li>A shift in visitor characteristics or behavior, or</li>
<li>A technical change on your website.</li>
</ol>

<p>Launching an ad campaign, for example, often increases redirects, reduces cache hits, and shifts visitor demographics. When you see a regression in RUM data, the first step is to find out whether the change happened on your website or in your visitors’ browsers. Check for changes in view counts broken down by ad campaign, referrer domain, or network speed to get a clearer picture.</p>
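<p>One way to make that check possible is to record traffic-mix context alongside each RUM measurement. A minimal sketch, where the <code>/rum</code> endpoint and the helper name are placeholder assumptions:</p>

```javascript
// Extract campaign and referrer context so RUM metrics can later be
// segmented by traffic source.
function trafficContext(pageUrl, referrer) {
  const params = new URL(pageUrl).searchParams;
  return {
    utmCampaign: params.get("utm_campaign"),
    utmSource: params.get("utm_source"),
    referrerDomain: referrer ? new URL(referrer).hostname : null,
  };
}

// Browser usage (sketch; lcpValue would be collected elsewhere):
// navigator.sendBeacon("/rum", JSON.stringify({
//   ...trafficContext(location.href, document.referrer),
//   lcp: lcpValue,
// }));
```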














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/effectively-monitoring-web-performance/7-lcp-utm-campaign.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="370"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/7-lcp-utm-campaign.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/effectively-monitoring-web-performance/7-lcp-utm-campaign.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/effectively-monitoring-web-performance/7-lcp-utm-campaign.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/effectively-monitoring-web-performance/7-lcp-utm-campaign.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/effectively-monitoring-web-performance/7-lcp-utm-campaign.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/7-lcp-utm-campaign.png"
			
			sizes="100vw"
			alt="LCP by UTM campaign"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/effectively-monitoring-web-performance/7-lcp-utm-campaign.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>If those visits perform differently from your typical visitors, that suggests the regression is not due to a change on your website. However, you may still need to make changes on your website to better serve these visitor cohorts and deliver a good experience for them.</p>

<p>To identify the cause of a technical change, take a look at component breakdown metrics, such as <a href="https://www.smashingmagazine.com/2025/03/how-to-fix-largest-contentful-issues-with-subpart-analysis/">LCP subparts</a>. This helps you narrow down the cause of a regression, whether it is due to changes in server response time, new render-blocking resources, or the LCP image.</p>

<p>You can also check for shifts in page view properties, like different LCP element selectors or specific scripts that cause poor performance.</p>
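<p>The subparts themselves follow simple arithmetic: given the time to first byte, the LCP resource’s request start and response end, and the LCP render time (all in milliseconds relative to navigation start), each subpart is the gap between two of those points. A sketch, with an assumed helper name:</p>

```javascript
// Compute the four standard LCP subparts from raw timings.
// The four values always sum to the LCP time itself.
function lcpSubparts(ttfb, imgRequestStart, imgResponseEnd, lcpRenderTime) {
  return {
    ttfb,                                                  // server response
    resourceLoadDelay: imgRequestStart - ttfb,             // discovery gap
    resourceLoadTime: imgResponseEnd - imgRequestStart,    // image download
    elementRenderDelay: lcpRenderTime - imgResponseEnd,    // until paint
  };
}
```

<p>If, say, <code>resourceLoadDelay</code> dominates, the LCP image is being discovered late; if <code>ttfb</code> dominates, the server is the bottleneck.</p>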














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/effectively-monitoring-web-performance/8-lcp-subparts.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/8-lcp-subparts.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/effectively-monitoring-web-performance/8-lcp-subparts.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/effectively-monitoring-web-performance/8-lcp-subparts.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/effectively-monitoring-web-performance/8-lcp-subparts.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/effectively-monitoring-web-performance/8-lcp-subparts.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/8-lcp-subparts.png"
			
			sizes="100vw"
			alt="LCP subparts"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/effectively-monitoring-web-performance/8-lcp-subparts.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="conclusion">Conclusion</h2>

<p>One-off page speed tests are a great starting point for optimizing performance. However, a monitoring tool like DebugBear can form the basis for a more comprehensive web <strong>performance strategy</strong> that helps you stay fast for the long term.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/effectively-monitoring-web-performance/9-debugbear-web-performance.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="477"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/9-debugbear-web-performance.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/effectively-monitoring-web-performance/9-debugbear-web-performance.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/effectively-monitoring-web-performance/9-debugbear-web-performance.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/effectively-monitoring-web-performance/9-debugbear-web-performance.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/effectively-monitoring-web-performance/9-debugbear-web-performance.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/effectively-monitoring-web-performance/9-debugbear-web-performance.png"
			
			sizes="100vw"
			alt="Summary of performance metrics on DebugBear"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/effectively-monitoring-web-performance/9-debugbear-web-performance.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Get <a href="https://www.debugbear.com/?utm_campaign=sm-10">a free DebugBear trial</a> on our website!</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(gg, yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>James Chudley</author><title>How To Minimize The Environmental Impact Of Your Website</title><link>https://www.smashingmagazine.com/2025/09/how-minimize-environmental-impact-website/</link><pubDate>Thu, 18 Sep 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/09/how-minimize-environmental-impact-website/</guid><description>As responsible digital professionals, we are becoming increasingly aware of the environmental impact of our work and need to find effective and pragmatic ways to reduce it. James Chudley shares a new decarbonising approach that will help you to minimise the environmental impact of your website, benefiting people, profit, purpose, performance, and the planet.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/09/how-minimize-environmental-impact-website/" />
              <title>How To Minimize The Environmental Impact Of Your Website</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>How To Minimize The Environmental Impact Of Your Website</h1>
                  
                    
                    <address>James Chudley</address>
                  
                  <time datetime="2025-09-18T10:00:00&#43;00:00" class="op-published">2025-09-18T10:00:00+00:00</time>
                  <time datetime="2025-09-18T10:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>Climate change is the single biggest health threat to humanity, accelerated by human activities such as the burning of fossil fuels, which generate greenhouse gases that trap the sun’s heat.</p>

<p>The average temperature of the earth’s surface is now <a href="https://www.un.org/en/climatechange/what-is-climate-change">1.2°C warmer than it was in the late 1800s, and projected to more than double by the end of the century.</a></p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-minimize-environmental-impact-website/1-climate-stripes.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="453"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/1-climate-stripes.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-minimize-environmental-impact-website/1-climate-stripes.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-minimize-environmental-impact-website/1-climate-stripes.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-minimize-environmental-impact-website/1-climate-stripes.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-minimize-environmental-impact-website/1-climate-stripes.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/1-climate-stripes.png"
			
			sizes="100vw"
			alt="Image shows a series of vertical stripes of varying colours showing how average temperatures have risen since 1850, containing the text ‘each stripe represents the average temperature for a single year, relative to the average temperature from 1850 to 2024’"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Image source: <a href='https://showyourstripes.info/'>‘Climate stripes’ by Professor Ed Hawkins (University of Reading)</a>. (<a href='https://files.smashing.media/articles/how-minimize-environmental-impact-website/1-climate-stripes.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The consequences of climate change include intense droughts, water shortages, severe fires, melting polar ice, catastrophic storms, and declining biodiversity.</p>

<h2 id="the-internet-is-a-significant-part-of-the-problem">The Internet Is A Significant Part Of The Problem</h2>

<p>Shockingly, the <a href="https://www.wholegraindigital.com/blog/new-sustainable-web-design-model-changes-the-context-of-internet-emissions/">internet is responsible for higher global greenhouse emissions than the aviation industry</a>, and is <a href="https://climateproductleaders.org/">projected to be responsible for 14% of all global greenhouse gas emissions by 2040</a>.</p>

<p>If the internet were a country, it would be <a href="https://www.sustainablewebmanifesto.com/">the 4th largest polluter in the world</a> and represents the <a href="https://www.mozillafoundation.org/en/blog/internet-climate-change-fixes/">largest coal-powered machine on the planet</a>.</p>

<p>But how can something digital like the internet produce harmful emissions?</p>

<p>Internet emissions come from powering the infrastructure that drives the internet, such as the vast data centres and data transmission networks that consume huge amounts of electricity.</p>

<p>Internet emissions also come from the global manufacturing, distribution, and usage of the estimated 30.5 billion devices (phones, laptops, etc.) that we use to access the internet.</p>

<p>Unsurprisingly, internet-related emissions are increasing, <a href="https://www.nature.com/articles/s41467-024-47621-w">given that 60% of the world’s population spend, on average, 40% of their waking hours online</a>.</p>


<h2 id="we-must-urgently-reduce-the-environmental-impact-of-the-internet">We Must Urgently Reduce The Environmental Impact Of The Internet</h2>

<p>As responsible digital professionals, we must act quickly to minimise the environmental impact of our work.</p>

<p>It is encouraging to see the UK government encourage action by adding “<a href="https://www.gov.uk/guidance/government-design-principles#minimise-environmental-impact">Minimise environmental impact</a>” to their best practice design principles, but there is <strong>still too much talking and not enough corrective action</strong> taking place within our industry.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-minimize-environmental-impact-website/politicians-discussing-climate-change.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="535"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/politicians-discussing-climate-change.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-minimize-environmental-impact-website/politicians-discussing-climate-change.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-minimize-environmental-impact-website/politicians-discussing-climate-change.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-minimize-environmental-impact-website/politicians-discussing-climate-change.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-minimize-environmental-impact-website/politicians-discussing-climate-change.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/politicians-discussing-climate-change.jpg"
			
			sizes="100vw"
			alt="Photo shows an artwork containing small figures representing politicians who are in a puddle, depicting how, as they continue to talk, the water level rises around them, representing rising sea levels due to climate change."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Credit: ‘Politicians discussing climate change’ by Isaac Cordal. (<a href='https://files.smashing.media/articles/how-minimize-environmental-impact-website/politicians-discussing-climate-change.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The reality of many tightly constrained, fast-paced, and commercially driven web projects is that minimising environmental impact is rarely on the agenda.</p>

<p>So how can we make the environment more of a priority and talk about it in ways that stakeholders will listen to?</p>

<p>A eureka moment on a recent web optimisation project gave me an idea.</p>

<h2 id="my-eureka-moment">My Eureka Moment</h2>

<p>I led a project to optimise the mobile performance of <a href="http://www.talktofrank.com">www.talktofrank.com</a>, a government drug advice website that aims to keep everyone safe from harm.</p>

<p>Mobile performance is critically important for the success of this service to ensure that users with older mobile devices and those using slower network connections can still access the information they need.</p>

<p>Our work to minimise page weights focused on purely technical changes that our developer made following recommendations from tools such as <a href="https://developer.chrome.com/docs/lighthouse/overview">Google Lighthouse</a> that reduced the size of the webpages of a key user journey by up to 80%. This resulted in pages downloading up to 30% faster and the carbon footprint of the journey being reduced by 80%.</p>
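<p>For a back-of-the-envelope sense of how page weight maps to emissions, a linear bytes-to-CO2 model can be sketched. The coefficients below are illustrative assumptions in the spirit of the Sustainable Web Design model, not authoritative figures; swap in currently published values for real reporting:</p>

```javascript
// Rough CO2-per-page-view estimate from transfer size.
// Both coefficients are assumptions for illustration only.
const KWH_PER_GB = 0.81;       // assumed energy intensity of data transfer
const GRAMS_CO2_PER_KWH = 442; // assumed average grid carbon intensity

function gramsCo2PerView(pageBytes) {
  const gb = pageBytes / 1e9;
  return gb * KWH_PER_GB * GRAMS_CO2_PER_KWH;
}

// Because the model is linear in bytes transferred, an 80% page-weight
// reduction cuts the estimated footprint by 80% as well:
console.log(gramsCo2PerView(2_000_000)); // 2 MB page
console.log(gramsCo2PerView(400_000));   // same page after an 80% cut
```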

<p>We hadn’t set out to reduce the carbon footprint, but seeing these results led to my eureka moment.</p>

<blockquote>I realised that by minimising page weights, you improve performance (which is a win for users and service owners) and also consume less energy (due to needing to transfer and store less data), creating additional benefits for the planet &mdash; so everyone wins.</blockquote>

<p>This felt like a breakthrough because business, user, and environmental requirements are often at odds with one another. By focussing on minimising websites to be as simple, lightweight, and easy to use as possible, you get benefits that extend beyond the <a href="https://online.hbs.edu/blog/post/what-is-the-triple-bottom-line">triple bottom line</a> of people, planet, and profit to include performance and purpose.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-minimize-environmental-impact-website/2-minimising-benefits.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/2-minimising-benefits.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-minimize-environmental-impact-website/2-minimising-benefits.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-minimize-environmental-impact-website/2-minimising-benefits.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-minimize-environmental-impact-website/2-minimising-benefits.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-minimize-environmental-impact-website/2-minimising-benefits.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/2-minimising-benefits.png"
			
			sizes="100vw"
			alt="Image shows the word ‘Minimise’ surrounded by the words ‘people’, ‘planet’, ‘performance’, ‘profit’ and ‘purpose’."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The multiple benefits of minimising make it a great digital sustainability strategy. (<a href='https://files.smashing.media/articles/how-minimize-environmental-impact-website/2-minimising-benefits.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>So why is ‘minimising’ such a great digital sustainability strategy?</p>

<ul>
<li><strong>Profit</strong><br />
Website providers win because their website becomes more efficient and more likely to meet its intended outcomes, and a lighter site should also lead to lower hosting bills.</li>
<li><strong>People</strong><br />
People win because they get to use a website that downloads faster, is quick and easy to use because it&rsquo;s been intentionally designed to be as simple as possible, enabling them to complete their tasks with the minimum amount of effort and mental energy.</li>
<li><strong>Performance</strong><br />
Lightweight webpages download faster so perform better for users, particularly those on older devices and on slower network connections.</li>
<li><strong>Planet</strong><br />
The planet wins because the amount of energy (and associated emissions) that is required to deliver the website is reduced.</li>
<li><strong>Purpose</strong><br />
We know that we do our best work when we feel a sense of purpose. It is hugely gratifying as a digital professional to know that our work is doing good in the world and contributing to making things better for people and the environment.</li>
</ul>

<p>In order to prioritise the environment, we need to be able to speak confidently in a language that will resonate with the business and ensure that any investment in time and resources yields the widest range of benefits possible.</p>

<p>So even if you feel that the environment is a very low priority on your projects, focusing on minimising page weights to improve performance (which is generally high on the agenda) presents the perfect trojan horse for an environmental agenda (should you need one).</p>

<p>Doing the right thing isn’t always easy, <strong>but we’ve done it before</strong> when managing to prioritise issues such as usability, accessibility, and inclusion on digital projects.</p>

<p>Many of the things that make websites easier to use, more accessible, and more effective also help to minimise their environmental impact. The work involved will feel familiar and achievable, so don’t worry about this being yet another new thing to learn!</p>

<p>So this all makes sense in theory, but what’s the master plan to use when putting it into practice?</p>

<h2 id="the-masterplan">The Masterplan</h2>

<p>The masterplan for creating websites that have minimal environmental impact is to <strong>focus on offering the maximum value from the minimum input of energy</strong>.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-minimize-environmental-impact-website/3-sustainability-masterplan.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="449"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/3-sustainability-masterplan.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-minimize-environmental-impact-website/3-sustainability-masterplan.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-minimize-environmental-impact-website/3-sustainability-masterplan.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-minimize-environmental-impact-website/3-sustainability-masterplan.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-minimize-environmental-impact-website/3-sustainability-masterplan.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/3-sustainability-masterplan.png"
			
			sizes="100vw"
			alt="Image shows a diagram with a ‘digital service’ at the centre with an input stating ‘minimise the energy required to operate it’ and another input stating ‘minimise the human energy to use it’. The output states ‘maximise the value created by it’."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The digital sustainability masterplan is to offer maximum value from the minimum input of energy. (<a href='https://files.smashing.media/articles/how-minimize-environmental-impact-website/3-sustainability-masterplan.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>It’s an adaptation of <a href="https://en.wikipedia.org/wiki/Buckminster_Fuller">Buckminster Fuller’s</a> <a href="https://en.wikipedia.org/wiki/Dymaxion">‘Dymaxion’</a> principle, one of his many progressive and groundbreaking sustainability strategies for living and surviving on a planet with finite resources.</p>

<p>Inputs of energy include both the electrical energy that is required to operate websites and also the mental energy that is required to use them.</p>

<p>You can achieve this by <strong>minimising websites to their core content, features, and functionality</strong>, ensuring that everything can be justified as meeting a business or user need. Anything that doesn’t add value in proportion to the energy required to provide it should be removed.</p>

<p>So that’s the masterplan, but how do you put it into practice?</p>

<div class="partners__lead-place"></div>

<h2 id="decarbonise-your-highest-value-user-journeys">Decarbonise Your Highest Value User Journeys</h2>

<p>I’ve developed a new approach called ‘Decarbonising User Journeys’ that will help you to minimise the environmental impact of your website and maximise its performance.</p>

<p><strong>Note</strong>: The approach deliberately focuses on optimising key user journeys and not entire websites to keep things manageable and to make it easier to get started.</p>

<p>The secret here is to start small, demonstrate improvements, and then scale.</p>

<p>The approach consists of five simple steps:</p>

<ol>
<li><strong>Identify</strong> your highest value user journey,</li>
<li><strong>Benchmark</strong> your user journey,</li>
<li>Set <strong>targets</strong>,</li>
<li><strong>Decarbonise</strong> your user journey,</li>
<li><strong>Track</strong> and <strong>share</strong> your progress.</li>
</ol>

<p>Here’s how it works.</p>

<h3 id="step-1-identify-your-highest-value-user-journey">Step 1: Identify Your Highest Value User Journey</h3>

<p>Your highest value user journey might be the one that your users value the most, the one that brings you the highest revenue, or the one that is fundamental to the success of your organisation.</p>

<p>You could also focus on a user journey that you know is performing particularly badly and has the potential to deliver significant business and user benefits if improved.</p>

<p>You may have lots of important user journeys, and it’s fine to decarbonise multiple journeys in parallel if you have the resources, but <strong>I’d recommend starting with one</strong> first to keep things simple.</p>

<p>To bring this to life, let’s consider a hypothetical example of a premiership football club trying to decarbonise its online ticket-buying journey, which receives high levels of traffic and is responsible for a significant proportion of the club’s weekly income.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-minimize-environmental-impact-website/ticket-user-journey.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="232"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/ticket-user-journey.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-minimize-environmental-impact-website/ticket-user-journey.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-minimize-environmental-impact-website/ticket-user-journey.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-minimize-environmental-impact-website/ticket-user-journey.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-minimize-environmental-impact-website/ticket-user-journey.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/ticket-user-journey.jpg"
			
			sizes="100vw"
			alt="Image shows a series of four blue boxes linked by arrows containing the words ‘home’, ‘’fixtures’, ‘news’ and ‘tickets’ and represents a hypothetical user journey through a football club website."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A hypothetical high-value ticket purchasing user journey for a football club website. (<a href='https://files.smashing.media/articles/how-minimize-environmental-impact-website/ticket-user-journey.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="step-2-benchmark-your-user-journey">Step 2: Benchmark Your User Journey</h3>

<p>Once you’ve selected your user journey, you need to benchmark it in terms of how well it meets user needs, the value it offers your organisation, and its carbon footprint.</p>

<blockquote>It is vital that you understand the job it needs to do and how well it is doing it before you start to decarbonise it. There is no point in removing elements of the journey in an effort to reduce its carbon footprint, for example, if you compromise its ability to meet a key user or business need.</blockquote>

<p>You can benchmark how well your user journey is meeting user needs by conducting user research alongside analysing existing customer feedback. Interviews with business stakeholders will help you to understand the value that your journey is providing the organisation and how well business needs are being met.</p>

<p>You can benchmark the carbon footprint and performance of your user journey using online tools such as <a href="https://cardamon.io/">Cardamon</a>, <a href="https://ecograder.com/">Ecograder</a>, <a href="https://www.websitecarbon.com/">Website Carbon Calculator</a>, <a href="https://developer.chrome.com/docs/lighthouse/overview">Google Lighthouse</a>, and <a href="https://bioscore.com/">Bioscore</a>. Make sure you have your analytics data to hand to help get the most accurate estimate of your footprint.</p>

<p>To use these tools, simply add the URL of each page of your journey, and they will give you a range of information, such as page weight, energy rating, and carbon emissions. <a href="https://developer.chrome.com/docs/lighthouse/overview">Google Lighthouse</a> works slightly differently: it runs in the browser (it’s built into Chrome DevTools) and generates a really useful and detailed performance report rather than a carbon rating.</p>
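<p>If you’re curious how these calculators arrive at their numbers, most follow a variant of the Sustainable Web Design model: multiply data transferred by an assumed energy intensity, then by an assumed grid carbon intensity. The sketch below uses approximate published figures (roughly 0.81&nbsp;kWh per GB transferred and about 442&nbsp;gCO2 per kWh); treat both constants as assumptions that the tools above refine with real data.</p>

```javascript
// Rough per-page-view CO2 estimate, loosely based on the
// Sustainable Web Design model. Both constants are approximations;
// the benchmarking tools mentioned above use more refined versions.
const KWH_PER_GB = 0.81;        // assumed energy per GB transferred
const GRID_G_CO2_PER_KWH = 442; // assumed average grid intensity

function estimateCo2PerView(pageBytes) {
  const gigabytes = pageBytes / 1e9;
  const kwh = gigabytes * KWH_PER_GB;
  return kwh * GRID_G_CO2_PER_KWH; // grams of CO2 per page view
}

// A 2 MB page works out to roughly 0.7 g CO2 per view
// under these assumptions.
console.log(estimateCo2PerView(2_000_000).toFixed(2));
```

<p>The useful takeaway is the shape of the model: page weight is the main lever you control, which is why so many of the decarbonising activities later in this article come down to reducing bytes.</p>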

<p>A great way to bring your benchmarking scores to life is to <strong>visualise</strong> them in a similar way to how you would present a customer journey map or service blueprint.</p>

<p>This example focuses on just communicating the carbon footprint of the user journey, but you can also add more swimlanes to communicate how well the journey is performing from a user and business perspective, too, adding user pain points, quotes, and business metrics where appropriate.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-minimize-environmental-impact-website/5-carbon-footprint-user-journey.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/5-carbon-footprint-user-journey.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-minimize-environmental-impact-website/5-carbon-footprint-user-journey.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-minimize-environmental-impact-website/5-carbon-footprint-user-journey.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-minimize-environmental-impact-website/5-carbon-footprint-user-journey.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-minimize-environmental-impact-website/5-carbon-footprint-user-journey.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/5-carbon-footprint-user-journey.png"
			
			sizes="100vw"
			alt="Image shows a visualisation of the carbon footprint of a hypothetical user journey consisting of 4 steps showing how the energy efficiency ratings of the different pages vary across the journey."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Visualising the carbon footprint of your user journey makes it easy to see where the problems are. (<a href='https://files.smashing.media/articles/how-minimize-environmental-impact-website/5-carbon-footprint-user-journey.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>I’ve found that adding the <strong>energy efficiency ratings</strong> is really effective because it’s an approach that people recognise from their household appliances. This adds useful context compared with quoting raw weights of CO2 (grams or kilograms), which are generally meaningless to most people.</p>
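<p>Reproducing that appliance-style scale yourself is simple: bucket the per-view CO2 figure into bands. The thresholds below follow the published Digital Carbon Ratings used by several carbon calculators, but treat them as illustrative and check the exact boundaries of whichever tool you benchmark with.</p>

```javascript
// Map grams of CO2 per page view to an appliance-style energy rating.
// Band boundaries follow the published Digital Carbon Ratings
// (illustrative; individual tools may use slightly different cut-offs).
function energyRating(gramsCo2PerView) {
  const bands = [
    [0.095, "A+"],
    [0.186, "A"],
    [0.341, "B"],
    [0.493, "C"],
    [0.656, "D"],
    [0.846, "E"],
  ];
  for (const [limit, rating] of bands) {
    if (gramsCo2PerView < limit) return rating;
  }
  return "F";
}

console.log(energyRating(0.08)); // "A+"
console.log(energyRating(0.5));  // "D"
```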

<p>Within my benchmarking reports, I also add a set of benchmarking data for every page within the user journey. This gives your stakeholders a more detailed breakdown and a simple summary alongside a snapshot of the benchmarked page.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-minimize-environmental-impact-website/6-page-level-breakdowns.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/6-page-level-breakdowns.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-minimize-environmental-impact-website/6-page-level-breakdowns.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-minimize-environmental-impact-website/6-page-level-breakdowns.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-minimize-environmental-impact-website/6-page-level-breakdowns.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-minimize-environmental-impact-website/6-page-level-breakdowns.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/6-page-level-breakdowns.png"
			
			sizes="100vw"
			alt="Image shows a screenshot of the Manchester City football club homepage with a breakdown of information for that page, including energy efficiency rating, page size, usage stats, power consumption, CO2 emissions, CO2 per view, and hosting power source information."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Page-level breakdowns are useful to see how each page within the user journey is performing. (<a href='https://files.smashing.media/articles/how-minimize-environmental-impact-website/6-page-level-breakdowns.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Your benchmarking activities will give you a really clear picture of where remedial work is required from an environmental, user, and business point of view.</p>

<p>In our football user journey example, it’s clear that the ‘News’ and ‘Tickets’ pages need some attention to reduce their carbon footprint, so they would be a sensible priority for decarbonising.</p>

<h3 id="step-3-set-targets">Step 3: Set Targets</h3>

<p>Use your benchmarking results to help you set targets to aim for, such as a <strong>carbon budget</strong>, <strong>energy efficiency</strong>, <strong>maximum page weight</strong>, and <strong>minimum Google Lighthouse performance targets</strong> for each individual page, in addition to your existing UX metrics and business KPIs.</p>

<p>There is no right or wrong way to set targets. Choose what you think feels achievable and viable for your business, and you’ll only learn how reasonable and achievable they are when you begin to decarbonise your user journeys.</p>
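<p>Once agreed, targets are easy to encode so they can be checked automatically, for example after each deploy. Here’s a minimal sketch; the page names and target values are hypothetical examples, not recommendations.</p>

```javascript
// Check benchmarked pages against agreed targets.
// All page names and numbers here are hypothetical examples.
const targets = { maxPageWeightKb: 1000, maxGramsCo2PerView: 0.5 };

function checkAgainstTargets(pages, t) {
  // Returns the pages that miss either target, with the reasons why.
  return pages
    .map((p) => {
      const failures = [];
      if (p.weightKb > t.maxPageWeightKb) failures.push("page weight");
      if (p.gramsCo2 > t.maxGramsCo2PerView) failures.push("CO2 per view");
      return { name: p.name, failures };
    })
    .filter((p) => p.failures.length > 0);
}

const journey = [
  { name: "Home", weightKb: 800, gramsCo2: 0.3 },
  { name: "Tickets", weightKb: 2400, gramsCo2: 0.9 },
];
console.log(checkAgainstTargets(journey, targets));
// Only "Tickets" misses its targets, on both counts.
```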














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-minimize-environmental-impact-website/7-carbon-footprint-targets.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/7-carbon-footprint-targets.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-minimize-environmental-impact-website/7-carbon-footprint-targets.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-minimize-environmental-impact-website/7-carbon-footprint-targets.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-minimize-environmental-impact-website/7-carbon-footprint-targets.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-minimize-environmental-impact-website/7-carbon-footprint-targets.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/7-carbon-footprint-targets.png"
			
			sizes="100vw"
			alt="Image shows a representation of the carbon footprint of a user journey with different energy efficiency targets for each step of the journey."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Visualising your targets makes it easy for your stakeholders to understand what you’re aiming for. (<a href='https://files.smashing.media/articles/how-minimize-environmental-impact-website/7-carbon-footprint-targets.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Setting targets is important because it gives you something to aim for and keeps you focused and accountable. The quantitative nature of this work is great because it gives you the ability to quickly demonstrate the positive impact of your work, making it easier to justify the time and resources you are dedicating to it.</p>

<h3 id="step-4-decarbonise-your-user-journey">Step 4: Decarbonise Your User Journey</h3>

<p>Your objective now is to decarbonise your user journey by minimising page weights, improving your Lighthouse performance rating, and minimising pages so that they meet both user and business needs in the most efficient, simple, and effective way possible.</p>

<p>How you approach this is up to you, depending on the resources and skills that you have: you can focus on specific pages, or address a specific problem area, such as heavyweight images or videos, across the entire user journey.</p>

<p>Here’s a list of activities that will all help to reduce the carbon footprint of your user journey:</p>

<ul>
<li>Work through the recommendations in the ‘diagnostics’ section of your <a href="https://developer.chrome.com/docs/lighthouse/overview">Google Lighthouse</a> report to help optimise page performance.</li>
<li>Switch to a <strong>green hosting provider</strong> if you are not already using one. Use the <a href="https://www.thegreenwebfoundation.org/tools/directory/">Green Web Directory</a> to help you choose one.</li>
<li>Work through the <a href="https://w3c.github.io/sustainableweb-wsg/">W3C Web Sustainability Guidelines</a>, implementing the most relevant guidelines to your specific user journey.</li>
<li><strong>Remove</strong> anything that is not adding any user or business value.</li>
<li><strong>Reduce</strong> the amount of information on your webpages to make them easier to read and less overwhelming for people.</li>
<li><strong>Replace</strong> content with a lighter-weight alternative (such as swapping a video for text) if the lighter-weight alternative provides the same value.</li>
<li><strong>Optimise</strong> assets such as photos, videos, and code to reduce file sizes.</li>
<li>Remove any <strong>barriers</strong> to accessing your website and any <strong>distractions</strong> that are getting in the way.</li>
<li><strong>Re-use</strong> familiar components and design patterns to make your websites quicker and easier to use.</li>
<li>Write <strong>simply</strong> and <strong>clearly</strong> in plain English to help people get the most value from your website and to help them avoid making mistakes that waste time and energy to resolve.</li>
<li><strong>Fix</strong> any usability issues you identified during your benchmarking to ensure that your website is as easy to use and useful as possible.</li>
<li>Ensure your user journey is as <a href="https://aaardvarkaccessibility.com/wcag-plain-english/">accessible</a> as possible so the widest possible audience can benefit from using it, offsetting the environmental cost of providing the website.</li>
</ul>
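<p>Several of the activities above (remove, reduce, replace, optimise) start with knowing where the weight actually is. Given the resource list from your benchmarking tool, a quick grouping by type shows which category to tackle first. The input shape below is a hypothetical simplification of a waterfall export.</p>

```javascript
// Group a page's resources by type and sort heaviest-first,
// so you can see whether images, scripts, fonts, etc. dominate.
// The input shape is a simplified, hypothetical waterfall export.
function weightByType(resources) {
  const totals = {};
  for (const { type, bytes } of resources) {
    totals[type] = (totals[type] || 0) + bytes;
  }
  return Object.entries(totals)
    .map(([type, bytes]) => ({ type, bytes }))
    .sort((a, b) => b.bytes - a.bytes);
}

const resources = [
  { type: "image", bytes: 900_000 },
  { type: "script", bytes: 400_000 },
  { type: "image", bytes: 600_000 },
  { type: "font", bytes: 80_000 },
];
console.log(weightByType(resources));
// Images (1.5 MB) dominate, so image optimisation is the best first step.
```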

<h3 id="step-5-track-and-share-your-progress">Step 5: Track And Share Your Progress</h3>

<p>As you decarbonise your user journeys, use the benchmarking tools from step 2 to track your progress against the targets you set in step 3 and share your progress as part of your wider sustainability reporting initiatives.</p>

<p>All being well at this point, you will have the numbers to demonstrate how the performance of your user journey has improved and also how you have managed to reduce its carbon footprint.</p>

<p>Share these results with the business as soon as you have them to help you secure the resources to continue the work and initiate similar work on other high-value user journeys.</p>

<p>You should also start to <strong>communicate your progress with your users</strong>.</p>

<p>It’s important that they are made aware of the carbon footprint of their digital activity and empowered to make informed choices about the environmental impact of the websites that they use.</p>

<p>Ideally, every website should communicate the emissions generated from viewing their pages to help people make these informed choices and also to encourage website providers to minimise their emissions if they are being displayed publicly.</p>

<p>Often, people have no choice but to use a particular website to complete a task, so it is the website provider’s responsibility to ensure that the environmental impact of using it is as small as possible.</p>

<p>You can also help to raise awareness of the environmental impact of websites and what you are doing to minimise your own impact by publishing a <strong>digital sustainability statement</strong>, such as Unilever’s, as shown below.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-minimize-environmental-impact-website/8-unilever-digital-sustainability-statement.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="537"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/8-unilever-digital-sustainability-statement.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-minimize-environmental-impact-website/8-unilever-digital-sustainability-statement.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-minimize-environmental-impact-website/8-unilever-digital-sustainability-statement.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-minimize-environmental-impact-website/8-unilever-digital-sustainability-statement.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-minimize-environmental-impact-website/8-unilever-digital-sustainability-statement.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-minimize-environmental-impact-website/8-unilever-digital-sustainability-statement.png"
			
			sizes="100vw"
			alt="Image shows a screenshot of the digital sustainability statement from Unilever’s website."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Unilever’s digital sustainability statement is a great example of what every website should offer. (<a href='https://files.smashing.media/articles/how-minimize-environmental-impact-website/8-unilever-digital-sustainability-statement.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>A good digital sustainability statement should acknowledge the environmental impact of your website, what you have done to reduce it, and what you plan to do next to minimise it further.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aAs%20an%20industry,%20we%20should%20normalise%20publishing%20digital%20sustainability%20statements%20in%20the%20same%20way%20that%20accessibility%20statements%20have%20become%20a%20standard%20addition%20to%20website%20footers.%0a&url=https://smashingmagazine.com%2f2025%2f09%2fhow-minimize-environmental-impact-website%2f">
      
As an industry, we should normalise publishing digital sustainability statements in the same way that accessibility statements have become a standard addition to website footers.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<div class="partners__lead-place"></div>

<h2 id="useful-decarbonising-principles">Useful Decarbonising Principles</h2>

<p>Keep these principles in mind to help you decarbonise your user journeys:</p>

<ul>
<li><strong>More doing and less talking.</strong><br />
Start decarbonising your user journeys as soon as possible to accelerate your learning and positive change.</li>
<li><strong>Start small.</strong><br />
Starting small by decarbonising an individual journey makes it easier to get started and generates results to demonstrate value faster.</li>
<li><strong>Aim to do more with less.</strong><br />
Minimise what you offer to ensure you are providing the maximum amount of value for the energy you are consuming.</li>
<li><strong>Make your website as useful and as easy to use as possible.</strong><br />
Useful websites can justify the energy they consume to provide them, ensuring they are net positive in terms of doing more good than harm.</li>
<li><strong>Focus on progress over perfection.</strong><br />
Websites are never finished or perfect, but they can always be improved; every small improvement you make will make a difference.</li>
</ul>

<h2 id="start-decarbonising-your-user-journeys-today">Start Decarbonising Your User Journeys Today</h2>

<p>Decarbonising user journeys shouldn’t be done as a one-off, reserved for the next time that you decide to redesign or replatform your website; it should happen on a <strong>continual basis</strong> as part of your broader <a href="https://docs.google.com/document/d/1adM94O0u-YMoFgkFg0FwoPASYevw_Xa6wCYKJL8Ni34/edit?usp=sharing">digital sustainability strategy</a>.</p>

<p>We know that websites are never finished and that the best websites continually improve as both user and business needs change. I’d like to encourage people to adopt the same mindset when it comes to minimising the environmental impact of their websites.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aDecarbonising%20will%20happen%20most%20effectively%20when%20digital%20professionals%20challenge%20themselves%20on%20a%20daily%20basis%20to%20%e2%80%98minimise%e2%80%99%20the%20things%20they%20are%20working%20on.%0a&url=https://smashingmagazine.com%2f2025%2f09%2fhow-minimize-environmental-impact-website%2f">
      
Decarbonising will happen most effectively when digital professionals challenge themselves on a daily basis to ‘minimise’ the things they are working on.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>This avoids building up ‘carbon debt’: compounding technical and design debt within our websites, which is always harder to remove retrospectively than to avoid in the first place.</p>

<p>By taking a pragmatic approach, such as optimising high-value user journeys and aligning with business metrics such as performance, we stand the best possible chance of making digital sustainability a priority.</p>

<p>You’ll have noticed that, other than using website carbon calculator tools, this approach doesn’t require any skills that don’t already exist within typical digital teams today. This is great because it means <strong>you’ve already got the skills that you need</strong> to do this important work.</p>

<p>I would encourage everyone to raise the issue of the environmental impact of the internet in their next team meeting and to try this decarbonising approach to create better outcomes for people, profit, performance, purpose, and the planet.</p>

<p>Good luck!</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Matt Zeunert</author><title>How To Fix Largest Contentful Paint Issues With Subpart Analysis</title><link>https://www.smashingmagazine.com/2025/03/how-to-fix-largest-contentful-issues-with-subpart-analysis/</link><pubDate>Thu, 06 Mar 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/03/how-to-fix-largest-contentful-issues-with-subpart-analysis/</guid><description>Struggling with slow Largest Contentful Paint (LCP)? Newly introduced by Google, LCP subparts help you pinpoint where page load delays come from. Now, in the Chrome UX Report, this data provides real visitor insights to speed up your site and boost rankings. Matt Zeunert unpacks what LCP subparts are, what they mean for your website speed, and how you can measure them.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/03/how-to-fix-largest-contentful-issues-with-subpart-analysis/" />
              <title>How To Fix Largest Contentful Paint Issues With Subpart Analysis</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>How To Fix Largest Contentful Paint Issues With Subpart Analysis</h1>
                  
                    
                    <address>Matt Zeunert</address>
                  
                  <time datetime="2025-03-06T10:00:00&#43;00:00" class="op-published">2025-03-06T10:00:00+00:00</time>
                  <time datetime="2025-03-06T10:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                <p>This article is sponsored by <b>DebugBear</b></p>
                

<p>The <a href="https://www.debugbear.com/docs/metrics/largest-contentful-paint?utm_campaign=sm-9">Largest Contentful Paint</a> (LCP) in Core Web Vitals measures how quickly a website loads from a visitor’s perspective. It looks at how long after opening a page the largest content element becomes visible. If your website is loading slowly, that’s bad for user experience and can also cause your site to <a href="https://developers.google.com/search/docs/appearance/page-experience#ranking">rank lower in Google</a>.</p>

<p>When trying to fix LCP issues, it’s not always clear what to focus on. Is the server too slow? Are images too big? Is the content not being displayed? Google has been working to address that recently by introducing <a href="https://www.debugbear.com/blog/lcp-subparts?utm_campaign=sm-9">LCP subparts</a>, which tell you where page load delays are coming from. They’ve also added this data to the <a href="https://www.debugbear.com/blog/chrome-user-experience-report?utm_campaign=sm-9">Chrome UX Report</a>, allowing you to see what causes delays for real visitors on your website!</p>

<p>Let’s take a look at what the LCP subparts are, what they mean for your website speed, and how you can measure them.</p>

<h2 id="the-four-lcp-subparts">The Four LCP Subparts</h2>

<p>LCP subparts split the Largest Contentful Paint metric into four different components:</p>

<ol>
<li><strong>Time to First Byte (TTFB)</strong>: How quickly the server responds to the document request.</li>
<li><strong>Resource Load Delay</strong>: Time spent before the LCP image starts to download.</li>
<li><strong>Resource Load Time</strong>: Time spent downloading the LCP image.</li>
<li><strong>Element Render Delay</strong>: Time before the LCP element is displayed.</li>
</ol>

<p>The resource timings only apply if the largest page element is an image or background image. For text elements, the Load Delay and Load Time components are always zero.</p>
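<p>In the browser, each boundary can be read from the performance APIs: TTFB from the navigation timing entry’s <code>responseStart</code>, the image’s load window from its resource timing entry, and the paint time from the <code>largest-contentful-paint</code> entry’s <code>startTime</code>. The arithmetic that turns those timestamps into the four subparts is just subtraction; here is a sketch, with the timestamp values made up for illustration:</p>

```javascript
// Derive the four LCP subparts from raw timestamps (all in ms,
// relative to navigation start). In a browser you would collect:
//   ttfb      -> navigation entry's responseStart
//   loadStart -> the LCP image's resource timing requestStart
//   loadEnd   -> the LCP image's resource timing responseEnd
//   lcpTime   -> the largest-contentful-paint entry's startTime
// For a text LCP element, loadStart and loadEnd equal ttfb,
// so the two resource subparts come out as zero.
function lcpSubparts({ ttfb, loadStart, loadEnd, lcpTime }) {
  return {
    timeToFirstByte: ttfb,
    resourceLoadDelay: Math.max(0, loadStart - ttfb),
    resourceLoadTime: Math.max(0, loadEnd - loadStart),
    elementRenderDelay: Math.max(0, lcpTime - loadEnd),
  };
}

// Hypothetical timings: slow server, image discovered late.
console.log(
  lcpSubparts({ ttfb: 800, loadStart: 1400, loadEnd: 2100, lcpTime: 2200 })
);
// { timeToFirstByte: 800, resourceLoadDelay: 600,
//   resourceLoadTime: 700, elementRenderDelay: 100 }
```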

<h2 id="how-to-measure-lcp-subparts">How To Measure LCP Subparts</h2>

<p>One way to measure how much each component contributes to the LCP score on your website is to use DebugBear’s <a href="https://www.debugbear.com/test/website-speed?utm_campaign=sm-9">website speed test</a>. Expand the Largest Contentful Paint metric to see subparts and other details related to your LCP score.</p>

<p>Here, we can see that TTFB and image Load Duration together account for 78% of the overall LCP score. That tells us that these two components are the most impactful places to start optimizing.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/1-lcp-subparts.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="441"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/1-lcp-subparts.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/1-lcp-subparts.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/1-lcp-subparts.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/1-lcp-subparts.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/1-lcp-subparts.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/1-lcp-subparts.png"
			
			sizes="100vw"
			alt="LCP Subparts"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/1-lcp-subparts.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>What’s happening during each of these stages? A network <a href="https://www.debugbear.com/docs/waterfall?utm_campaign=sm-9">request waterfall</a> can help us understand what resources are loading through each stage.</p>

<p>The LCP Image Discovery view filters the waterfall visualization to just the resources that are relevant to displaying the Largest Contentful Paint image. In this case, each of the first three stages contains one request, and the final stage finishes quickly with no new resources loaded. But that depends on your specific website and won’t always be the case.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/2-lcp-image-discovery.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="331"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/2-lcp-image-discovery.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/2-lcp-image-discovery.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/2-lcp-image-discovery.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/2-lcp-image-discovery.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/2-lcp-image-discovery.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/2-lcp-image-discovery.png"
			
			sizes="100vw"
			alt="LCP image discovery"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/2-lcp-image-discovery.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="time-to-first-byte">Time To First Byte</h2>

<p>The first step to display the largest page element is fetching the document HTML. We recently published an <a href="https://www.smashingmagazine.com/2025/02/time-to-first-byte-beyond-server-response-time/">article about how to improve the TTFB metric</a>.</p>

<p>In this example, we can see that creating the server connection doesn’t take all that long. Most of the time is spent waiting for the server to generate the page HTML. So, to improve the TTFB, we need to speed up that process or cache the HTML so we can skip the HTML generation entirely.</p>

<h2 id="resource-load-delay">Resource Load Delay</h2>

<p>The “resource” we want to load is the LCP image. Ideally, we just have an <code>&lt;img&gt;</code> tag near the top of the HTML, and the browser finds it right away and starts loading it.</p>

<p>But sometimes, we get a <a href="https://www.debugbear.com/blog/lcp-resource-load-delay?utm_campaign=sm-9">Load Delay</a>, as is the case here. Instead of loading the image directly, the page uses <code>lazysizes.js</code>, <strong>an image lazy loading library</strong> that only loads the LCP image once it has detected that it will appear in the viewport.</p>

<p>Part of the Load Delay is caused by having to download that JavaScript library. But the browser also needs to complete the page layout and start rendering content before the library will know that the image is in the viewport. After finishing the request, there’s a CPU task (in orange) that leads up to the <a href="https://www.debugbear.com/docs/metrics/first-contentful-paint?utm_campaign=sm-9">First Contentful Paint</a> milestone, when the page starts rendering. Only then does the library trigger the LCP image request.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/3-load-delay.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="255"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/3-load-delay.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/3-load-delay.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/3-load-delay.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/3-load-delay.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/3-load-delay.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/3-load-delay.png"
			
			sizes="100vw"
			alt="Load Delay"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/3-load-delay.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>How do we optimize this? First of all, instead of using a lazy loading library, you can use the native <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img#loading"><code>loading=&quot;lazy&quot;</code> image attribute</a>. That way, loading images <strong>no longer depends on first loading JavaScript code</strong>.</p>

<p>But more specifically, <a href="https://www.debugbear.com/docs/lcp-lazily-loaded?utm_campaign=sm-9">the LCP image should not be lazily loaded</a>. That way, the browser can start loading it as soon as the HTML code is ready. According to Google, you should aim to <a href="https://web.dev/articles/optimize-lcp#1_eliminate_resource_load_delay">eliminate resource load delay entirely</a>.</p>
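<p>Putting both points together, a typical markup pattern eager-loads the likely LCP image at high priority while keeping native lazy loading for below-the-fold images (the file names here are placeholders):</p>

```html
<!-- Hero image is the likely LCP element: no lazy loading,
     and fetchpriority="high" moves it ahead of other resources -->
<img src="hero.jpg" width="800" height="450"
     fetchpriority="high" alt="Hero image">

<!-- Below-the-fold images can use native lazy loading,
     with no JavaScript library required -->
<img src="gallery-1.jpg" width="400" height="300"
     loading="lazy" decoding="async" alt="Gallery photo">
```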

<h2 id="resources-load-duration">Resource Load Duration</h2>

<p>The <a href="https://www.debugbear.com/blog/lcp-resource-load-duration?utm_campaign=sm-9">Load Duration subpart</a> is probably the most straightforward: you need to download the LCP image before you can display it!</p>

<p>In this example, the image is loaded from the same domain as the HTML. That’s good because the browser doesn’t have to connect to a new server.</p>

<p>Other techniques you can use to reduce load duration:</p>

<ul>
<li>Use a <a href="https://www.debugbear.com/blog/image-formats?utm_campaign=sm-9">modern image format</a> that provides better compression.</li>
<li>Load images at a size that <a href="https://developer.chrome.com/docs/lighthouse/performance/uses-responsive-images">matches the size they are displayed at</a>.</li>
<li>Deprioritize other resources that might <a href="https://www.debugbear.com/blog/bandwidth-competition-page-speed?utm_campaign=sm-9">compete with the LCP image</a>.</li>
</ul>
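<p>The first two techniques can be combined in a single <code>&lt;picture&gt;</code> element. This is a sketch with hypothetical file names, assuming AVIF and WebP versions have been generated alongside the JPEG:</p>

```html
<!-- Serve a modern format where supported, with a JPEG fallback,
     and let the browser pick the variant that matches the layout size -->
<picture>
  <source type="image/avif"
          srcset="hero-800.avif 800w, hero-1600.avif 1600w" sizes="100vw">
  <source type="image/webp"
          srcset="hero-800.webp 800w, hero-1600.webp 1600w" sizes="100vw">
  <img src="hero-800.jpg"
       srcset="hero-800.jpg 800w, hero-1600.jpg 1600w"
       sizes="100vw" width="800" height="450"
       fetchpriority="high" alt="Hero image">
</picture>
```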

<h2 id="element-render-delay">Element Render Delay</h2>

<p>The fourth and final LCP component, <a href="https://www.debugbear.com/blog/lcp-render-delay?utm_campaign=sm-9">Render Delay</a>, is often the most confusing. The resource has loaded, but for some reason, the browser isn’t ready to show it to the user yet!</p>

<p>Luckily, in the example we’ve been looking at so far, the LCP image appears quickly after it’s been loaded. One common reason for render delay is that <strong>the LCP element is not an image</strong>. In that case, the render delay is caused by <strong>render-blocking scripts</strong> and <strong>stylesheets</strong>. The text can only appear after these have loaded and the browser has completed the rendering process.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/4-render-delay.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="395"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/4-render-delay.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/4-render-delay.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/4-render-delay.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/4-render-delay.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/4-render-delay.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/4-render-delay.png"
			
			sizes="100vw"
			alt="Render Delay"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/4-render-delay.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Another reason you might see render delay is when the website <a href="https://www.debugbear.com/blog/preload-largest-contentful-paint-image?utm_campaign=sm-9">preloads the LCP image</a>. Preloading is a good idea, as it practically eliminates any load delay and ensures the image is loaded early.</p>

<p>However, if the image finishes downloading before the page is ready to render, you’ll see an increase in render delay on the page. And that’s fine! You’ve improved your website speed overall, but after optimizing your image, you’ve uncovered a new bottleneck to focus on.</p>
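<p>One way to set up such a preload (the <code>href</code> values are placeholders) is a hint in the document <code>&lt;head&gt;</code>, using <code>imagesrcset</code> so responsive variants are still respected:</p>

```html
<!-- Preload the LCP image so the browser requests it as soon as it
     parses the document head, before stylesheets or scripts run -->
<link rel="preload" as="image" href="hero-800.jpg"
      imagesrcset="hero-800.jpg 800w, hero-1600.jpg 1600w"
      imagesizes="100vw" fetchpriority="high">
```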














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/5-render-delay-preloaded-lcp-image.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="382"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/5-render-delay-preloaded-lcp-image.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/5-render-delay-preloaded-lcp-image.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/5-render-delay-preloaded-lcp-image.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/5-render-delay-preloaded-lcp-image.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/5-render-delay-preloaded-lcp-image.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/5-render-delay-preloaded-lcp-image.png"
			
			sizes="100vw"
			alt="Render Delay with preloaded LCP image"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/5-render-delay-preloaded-lcp-image.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="lcp-subparts-in-real-user-crux-data">LCP Subparts In Real User CrUX Data</h2>

<p>Looking at the Largest Contentful Paint subparts in lab-based tests can provide a lot of insight into where you can optimize. But all too often, the LCP in <a href="https://www.debugbear.com/blog/lcp-lab-field-differences?utm_campaign=sm-9">the lab doesn’t match what’s happening for real users</a>!</p>

<p>That’s why, in February 2025, Google started <a href="https://developer.chrome.com/blog/crux-2025-02">including subpart data in the CrUX data report</a>. It’s not (yet?) included in PageSpeed Insights, but you can see those metrics in DebugBear’s “Web Vitals” tab.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/6-crux-lcp-subparts.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="523"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/6-crux-lcp-subparts.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/6-crux-lcp-subparts.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/6-crux-lcp-subparts.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/6-crux-lcp-subparts.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/6-crux-lcp-subparts.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/6-crux-lcp-subparts.png"
			
			sizes="100vw"
			alt="Subpart data in the CrUX data report"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/6-crux-lcp-subparts.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>One super useful bit of info here is <strong>the LCP resource type</strong>: it tells you how many of your visitors had a text element versus an image as their LCP element.</p>

<p>Even for the same page, different visitors will see slightly different content. For example, different elements are visible based on the device size, or some visitors will see a cookie banner while others see the actual page content.</p>

<p>To make the data easier to interpret, Google only reports subpart data for images.</p>

<blockquote>If the LCP element is usually text on the page, then the subparts info won’t be very helpful, as it won’t apply to most of your visitors.</blockquote>

<p>But breaking down a text LCP is relatively easy: everything that’s not part of the TTFB score counts as render delay.</p>

<h2 id="track-subparts-on-your-website-with-real-user-monitoring">Track Subparts On Your Website With Real User Monitoring</h2>

<p><a href="https://www.smashingmagazine.com/2023/08/running-page-speed-test-monitoring-versus-measuring/">Lab data doesn’t always match what real users experience.</a> CrUX data is superficial, <strong>only reported for high-traffic pages</strong>, and takes at least <strong>4 weeks</strong> to fully update after a change has been rolled out.</p>

<p>That’s why a <a href="https://www.debugbear.com/real-user-monitoring?utm_campaign=sm-9">real-user monitoring tool like DebugBear</a> comes in handy when fixing your LCP scores. You can <strong>track scores across all pages</strong> on your website over time and get dedicated dashboards for each LCP subpart.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/7-dashboards-each-lcp-subpart.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="601"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/7-dashboards-each-lcp-subpart.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/7-dashboards-each-lcp-subpart.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/7-dashboards-each-lcp-subpart.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/7-dashboards-each-lcp-subpart.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/7-dashboards-each-lcp-subpart.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/7-dashboards-each-lcp-subpart.png"
			
			sizes="100vw"
			alt="Dashboards for each LCP subpart"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/7-dashboards-each-lcp-subpart.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>You can also <strong>review specific visitor experiences</strong>, see what the LCP image was for them, inspect a request waterfall, and check LCP subpart timings. <a href="https://www.debugbear.com/signup?utm_campaign=sm-9">Sign up for a free trial</a>.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/8-lcp-scores-visitor-experiences.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="582"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/8-lcp-scores-visitor-experiences.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/8-lcp-scores-visitor-experiences.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/8-lcp-scores-visitor-experiences.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/8-lcp-scores-visitor-experiences.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/8-lcp-scores-visitor-experiences.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/8-lcp-scores-visitor-experiences.png"
			
			sizes="100vw"
			alt="DebugBear tool where you can review visitor experiences and check LCP subpart timings"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/how-fix-largest-contentful-issues-subpart-analysis/8-lcp-scores-visitor-experiences.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="conclusion">Conclusion</h2>

<p>Having more granular metric data available for the Largest Contentful Paint gives web developers a big leg up when making their website faster.</p>

<p>Including subparts in CrUX provides new insight into how real visitors experience your website and can tell you whether the optimizations you’re considering will really be impactful.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(gg, yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Matt Zeunert</author><title>Time To First Byte: Beyond Server Response Time</title><link>https://www.smashingmagazine.com/2025/02/time-to-first-byte-beyond-server-response-time/</link><pubDate>Wed, 12 Feb 2025 17:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/02/time-to-first-byte-beyond-server-response-time/</guid><description>Optimizing web performance means looking beyond surface-level metrics. Time to First Byte (TTFB) is crucial, but improving it requires more than tweaking server response time. Matt Zeunert breaks down what TTFB is, what causes its poor score, and why reducing server response time alone isn’t enough for optimization and often won’t be the most impactful change you can make to your website.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/02/time-to-first-byte-beyond-server-response-time/" />
              <title>Time To First Byte: Beyond Server Response Time</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Time To First Byte: Beyond Server Response Time</h1>
                  
                    
                    <address>Matt Zeunert</address>
                  
                  <time datetime="2025-02-12T17:00:00&#43;00:00" class="op-published">2025-02-12T17:00:00+00:00</time>
                  <time datetime="2025-02-12T17:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                <p>This article is sponsored by <b>DebugBear</b></p>
                

<p>Loading your website HTML quickly has a big impact on visitor experience. After all, no page content can be displayed until after the first chunk of the HTML has been loaded. That’s why the <a href="https://www.debugbear.com/docs/metrics/time-to-first-byte?utm_campaign=sm-8">Time to First Byte</a> (TTFB) metric is important: it measures how soon after navigation the browser starts receiving the HTML response.</p>

<p>Generating the HTML document quickly plays a big part in minimizing TTFB delays. But actually, there’s a lot more to optimizing this metric. In this article, we’ll take a look at what else can cause poor TTFB and what you can do to fix it.</p>

<h2 id="what-components-make-up-the-time-to-first-byte-metric">What Components Make Up The Time To First Byte Metric?</h2>

<p>TTFB stands for Time <em>to</em> First Byte. But where does it measure <em>from</em>?</p>

<p>Different tools handle this differently. Some only count the time spent sending the HTTP request and getting a response, ignoring everything else that needs to happen first before the resource can be loaded. However, when looking at Google’s <a href="https://www.debugbear.com/docs/metrics/core-web-vitals?utm_campaign=sm-8">Core Web Vitals</a>, TTFB starts from the time <a href="https://developer.chrome.com/docs/crux/methodology/metrics#ttfb-metric">when the user starts navigating to a new page</a>. That means TTFB includes:</p>

<ul>
<li>Cross-origin redirects,</li>
<li>Time spent connecting to the server,</li>
<li>Same-origin redirects, and</li>
<li>The actual request for the HTML document.</li>
</ul>

<p>We can see an example of this in this <a href="https://developer.mozilla.org/en-US/blog/optimize-web-performance/#working_with_network_request_waterfalls">request waterfall visualization</a>.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/1-request-waterfall-visualization.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="290"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/1-request-waterfall-visualization.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/1-request-waterfall-visualization.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/1-request-waterfall-visualization.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/1-request-waterfall-visualization.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/1-request-waterfall-visualization.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/1-request-waterfall-visualization.png"
			
			sizes="100vw"
			alt="Request waterfall visualization"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/1-request-waterfall-visualization.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The <a href="https://www.debugbear.com/blog/reduce-initial-server-response-time?utm_campaign=sm-8">server response time</a> here is only 183 milliseconds, or about 12% of the overall TTFB metric. Half of the time is instead spent on a cross-origin redirect &mdash; a separate HTTP request that returns a redirect response before we can even make the request that returns the website’s HTML code. And when we make that request, most of the time is spent on establishing the server connection.</p>

<p>Connecting to a server on the web typically takes three round trips on the network:</p>

<ol>
<li><strong>DNS:</strong> Looking up the server IP address.</li>
<li><strong>TCP:</strong> Establishing a reliable connection to the server.</li>
<li><strong>TLS:</strong> Creating a secure encrypted connection.</li>
</ol>
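<p>You can see how long each of these phases took for your own page via the Navigation Timing API. A small helper (a sketch, not a library API) splits a navigation entry into its connection phases:</p>

```javascript
// Break a Navigation Timing entry into connection phases.
// Works with any object exposing the standard timing fields; in the
// browser you would pass performance.getEntriesByType('navigation')[0].
function connectionPhases(nav) {
  return {
    dns: nav.domainLookupEnd - nav.domainLookupStart,
    // connectEnd - connectStart covers TCP *and* TLS;
    // secureConnectionStart lets us split out the TLS handshake
    // when the page is served over HTTPS (it is 0 for plain HTTP).
    tcp: (nav.secureConnectionStart || nav.connectEnd) - nav.connectStart,
    tls: nav.secureConnectionStart
      ? nav.connectEnd - nav.secureConnectionStart
      : 0,
    // Server think time plus one network round trip
    request: nav.responseStart - nav.requestStart,
  };
}

// Illustrative timings (milliseconds since navigation start)
const phases = connectionPhases({
  domainLookupStart: 10, domainLookupEnd: 40,
  connectStart: 40, secureConnectionStart: 70, connectEnd: 110,
  requestStart: 110, responseStart: 290,
});
// phases: { dns: 30, tcp: 30, tls: 40, request: 180 }
```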

<h2 id="what-network-latency-means-for-time-to-first-byte">What Network Latency Means For Time To First Byte</h2>

<p>Let’s add up all the network round trips in the example above:</p>

<ul>
<li>2 server connections: 6 round trips.</li>
<li>2 HTTP requests: 2 round trips.</li>
</ul>

<p>That means that before we even get the first response byte for our page <strong>we actually have to send data back and forth between the browser and a server eight times!</strong></p>

<p>That’s where network latency comes in, or <a href="https://www.debugbear.com/blog/network-round-trip-time-latency?utm_campaign=sm-8">network round trip time</a> (RTT) if we look at the time it takes to send data to a server and receive a response in the browser. On a high-latency connection with a 150 millisecond RTT, making those eight round trips will take 1.2 seconds. So, even if the server always responds instantly, we can’t get a TTFB lower than that number.</p>
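<p>That floor is worth making explicit. As a back-of-the-envelope sketch (the function name and parameters are made up for illustration), the network-imposed minimum TTFB is simply round trips times RTT:</p>

```javascript
// Lower bound on TTFB from network round trips alone,
// ignoring server processing time entirely.
function minTtfb(rttMs, { redirects = 0, newConnections = 1 } = {}) {
  // Each fresh HTTPS connection costs ~3 round trips (DNS + TCP + TLS),
  // and each HTTP request/response adds one more.
  const roundTrips = newConnections * 3 + (1 + redirects);
  return roundTrips * rttMs;
}

// The example above: a cross-origin redirect means 2 connections
// and 2 requests, i.e. 8 round trips at 150ms RTT
minTtfb(150, { redirects: 1, newConnections: 2 }); // 1200 (ms)

// Without the redirect: 1 connection + 1 request = 4 round trips
minTtfb(150); // 600 (ms)
```

<p>Cutting the redirect alone halves the best-case TTFB in this scenario, which is why redirects are often the cheapest fix available.</p>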

<p>Network latency depends a lot on the geographic distances between the visitor’s device and the server the browser is connecting to. You can see the impact of that in practice by running a <a href="https://www.debugbear.com/test/ttfb?utm_campaign=sm-8">global TTFB test</a> on a website. Here, I’ve tested a website that’s hosted in Brazil. We get good TTFB scores when testing from Brazil and the US East Coast. However, visitors from Europe, Asia, or Australia wait a while for the website to load.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/2-global-ttfb-test.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="393"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/2-global-ttfb-test.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/2-global-ttfb-test.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/2-global-ttfb-test.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/2-global-ttfb-test.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/2-global-ttfb-test.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/2-global-ttfb-test.png"
			
			sizes="100vw"
			alt="Visualisation with a map of a global TTFB test"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/2-global-ttfb-test.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="what-content-delivery-networks-mean-for-time-to-first-byte">What Content Delivery Networks Mean For Time To First Byte</h2>

<p>One way to speed up your website is by using a <a href="https://www.cloudflare.com/en-gb/learning/cdn/what-is-a-cdn/">Content Delivery Network</a> (CDN). These services provide a network of globally distributed server locations. Instead of each round trip going all the way to where your web application is hosted, browsers instead connect to a nearby CDN server (called an edge node). That greatly reduces the time spent on establishing the server connection, improving your overall TTFB metric.</p>

<p>By default, the actual HTML request still has to be sent to your web app. However, if your content isn’t dynamic, you can also <a href="https://www.debugbear.com/blog/cdn-cache?utm_campaign=sm-8">cache responses at the CDN edge node</a>. That way, the request can be served entirely through the CDN instead of data traveling all across the world.</p>
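<p>How you opt into edge caching differs between CDNs, but most of them respect standard response headers. As a rough sketch (the helper and the specific values are illustrative, not a recommendation), a server could mark a response as cacheable at the edge like this:</p>

<pre><code class="language-javascript">// Build response headers that allow a CDN edge node to cache the
// response for a while, keeping the browser cache comparatively short.
// s-maxage applies only to shared caches such as CDN edge nodes;
// max-age applies to the visitor's browser cache.
function edgeCacheHeaders(browserSeconds, edgeSeconds) {
  return {
    "Cache-Control": "public, max-age=" + browserSeconds + ", s-maxage=" + edgeSeconds,
    // Cache compressed and uncompressed variants separately.
    "Vary": "Accept-Encoding"
  };
}
</code></pre>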

<p>If we run a TTFB test on a website that uses a CDN, we can see that each server response comes from a regional data center close to where the request was made. In many cases, we get a TTFB of under 200 milliseconds, thanks to the response already being cached at the edge node.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/3-global-ttfb-test.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="787"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/3-global-ttfb-test.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/3-global-ttfb-test.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/3-global-ttfb-test.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/3-global-ttfb-test.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/3-global-ttfb-test.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/3-global-ttfb-test.png"
			
			sizes="100vw"
			alt="An expanded version of TTFB test with a list of test locations with its server responses"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/3-global-ttfb-test.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="how-to-improve-time-to-first-byte">How To Improve Time To First Byte</h2>

<p>What you need to do to improve your website’s TTFB score depends on what its biggest contributing component is.</p>

<ul>
<li>A lot of time is spent establishing the connection: <strong>Use a global CDN.</strong></li>
<li>The server response is slow: <strong>Optimize your application code</strong> or <strong>cache the response</strong>.</li>
<li>Redirects delay TTFB: <a href="https://www.debugbear.com/blog/avoid-multiple-page-redirects?utm_campaign=sm-8"><strong>Avoid chaining redirects</strong></a> and <strong>optimize the server</strong> returning the redirect response.</li>
</ul>
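<p>The navigation entry also tells you directly whether redirects are eating into TTFB. A small diagnostic sketch (the <code>redirectCost</code> helper name is my own):</p>

<pre><code class="language-javascript">// Report how much of a navigation was spent following redirects.
// Expects a PerformanceNavigationTiming entry.
function redirectCost(nav) {
  return {
    count: nav.redirectCount,
    durationMs: nav.redirectEnd - nav.redirectStart,
    // More than one hop usually means a chain worth collapsing,
    // e.g. http to https to www instead of a single redirect.
    chained: nav.redirectCount > 1
  };
}
</code></pre>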














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/4-ttfb-details.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="314"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/4-ttfb-details.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/4-ttfb-details.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/4-ttfb-details.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/4-ttfb-details.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/4-ttfb-details.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/4-ttfb-details.png"
			
			sizes="100vw"
			alt="TTFB details, including Redirect, DNS Lookup, TCP Connection, SSL Handshake, Response"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/4-ttfb-details.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Keep in mind that TTFB depends on how visitors are accessing your website. For example, if they are logged into your application, the page content probably can’t be served from the cache. You may also see a spike in TTFB when running an ad campaign, as visitors are redirected through a click-tracking server.</p>

<h2 id="monitor-real-user-time-to-first-byte">Monitor Real User Time To First Byte</h2>

<p>If you want to get a breakdown of what TTFB looks like for different visitors on your website, you need <a href="https://www.debugbear.com/real-user-monitoring?utm_campaign=sm-8">real user monitoring</a>. That way, you can break down how visitor location, login status, or the referrer domain impact real user experience.</p>

<p><a href="https://www.debugbear.com/?utm_campaign=sm-8">DebugBear</a> can help you collect real user metrics for Time to First Byte, Google Core Web Vitals, and other page speed metrics. You can track individual TTFB components like TCP duration or redirect time and break down website performance by country, ad campaign, and more.</p>
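<p>Under the hood, RUM scripts typically read the navigation entry on each page view and beacon a compact payload to a collection endpoint. A stripped-down sketch of the idea (the payload shape and the <code>/rum</code> endpoint are made up for illustration):</p>

<pre><code class="language-javascript">// Turn a PerformanceNavigationTiming entry into a small RUM payload.
function buildRumPayload(nav, page) {
  return {
    page: page,
    ttfb: Math.round(nav.responseStart - nav.startTime),
    redirect: Math.round(nav.redirectEnd - nav.redirectStart),
    tcp: Math.round(nav.connectEnd - nav.connectStart)
  };
}

// In the browser, this would be wired up roughly like so:
//   const nav = performance.getEntriesByType("navigation")[0];
//   navigator.sendBeacon("/rum", JSON.stringify(buildRumPayload(nav, location.pathname)));
</code></pre>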














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/5-time-to-first-byte-map.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="381"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/5-time-to-first-byte-map.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/5-time-to-first-byte-map.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/5-time-to-first-byte-map.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/5-time-to-first-byte-map.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/5-time-to-first-byte-map.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/5-time-to-first-byte-map.png"
			
			sizes="100vw"
			alt="Time to First Byte map"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/time-to-first-byte-beyond-server-response-time/5-time-to-first-byte-map.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="conclusion">Conclusion</h2>

<p>By looking at everything that’s involved in serving the first byte of a website to a visitor, we’ve seen that just reducing server response time isn’t enough and often won’t even be the most impactful change you can make on your website.</p>

<p>Just because your website is fast in one location doesn’t mean it’s fast for everyone, as website speed varies based on where the visitor is accessing your site from.</p>

<p><strong>Content Delivery Networks</strong> are an incredibly powerful way to improve TTFB. Even if you don’t use any of their advanced features, just using their global server network saves a lot of time when establishing a server connection.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(gg, yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Geoff Graham</author><title>Tight Mode: Why Browsers Produce Different Performance Results</title><link>https://www.smashingmagazine.com/2025/01/tight-mode-why-browsers-produce-different-performance-results/</link><pubDate>Thu, 09 Jan 2025 13:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/01/tight-mode-why-browsers-produce-different-performance-results/</guid><description>We know that browsers do all sorts of different things under the hood. One of those things is the way they not only &lt;em>fetch&lt;/em> resources like images and scripts from the server but how they &lt;a href="https://www.debugbear.com/blog/request-priorities?utm_campaign=sm-7">prioritize those resources&lt;/a>. Chrome and Safari have implemented a “Tight Mode” that constrains which resources are loaded and in what order, but they each take drastically different approaches to it. With so little information about Tight Mode available, this article attempts a high-level explanation of what it is, what triggers it, and how it is treated differently in major browsers.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/01/tight-mode-why-browsers-produce-different-performance-results/" />
              <title>Tight Mode: Why Browsers Produce Different Performance Results</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Tight Mode: Why Browsers Produce Different Performance Results</h1>
                  
                    
                    <address>Geoff Graham</address>
                  
                  <time datetime="2025-01-09T13:00:00&#43;00:00" class="op-published">2025-01-09T13:00:00+00:00</time>
                  <time datetime="2025-01-09T13:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                <p>This article is sponsored by <b>DebugBear</b></p>
                

<p>I was chatting with <a href="https://www.debugbear.com?utm_campaign=sm-7">DebugBear</a>’s Matt Zeunert and, in the process, he casually mentioned this thing called <strong>Tight Mode</strong> when describing how browsers fetch and prioritize resources. I wanted to nod along like I knew what he was talking about but ultimately had to ask: <em>What the heck is “Tight” mode?</em></p>

<p>What I got back were two artifacts, one of them being the following video of Akamai web performance expert Robin Marx speaking at We Love Speed in France a few weeks ago:</p>


<figure class="video-embed-container">
  <div
  
  class="video-embed-container--wrapper">
		<lite-youtube
			videoid="p0lFyPuH8Zs"
      params="start=2"
			
		></lite-youtube>
	</div>
	
</figure>

<p>The other artifact is a Google document originally published by <a href="https://blog.patrickmeenan.com">Patrick Meenan</a> in 2015 but updated somewhat recently in November 2023. Patrick’s blog has been inactive since 2014, so I’ll simply <a href="https://docs.google.com/document/d/1bCDuq9H1ih9iNjgzyAL0gpwNFiEP4TZS-YLRp_RuMlc/edit?tab=t.0#">drop a link to the Google document for you to review</a>.</p>

<p>That’s all I have and what I can find on the web about this thing called Tight Mode that appears to have so much influence on the way the web works. Robin acknowledged the lack of information about it in his presentation, and the amount of first-person research in his talk is noteworthy and worth calling out because it attempts to describe and illustrate how different browsers fetch different resources with different prioritization. Given the dearth of material on the topic, I decided to share what I was able to take away from Robin’s research and Patrick’s updated article.</p>

<h2 id="it-s-the-first-of-two-phases">It’s The First of Two Phases</h2>

<p>The fact that Patrick’s original publication date falls in 2015 makes it no surprise that we’re talking about something roughly 10 years old at this point. The 2023 update to the publication is already fairly old in “web years,” yet Tight Mode is still nowhere to be found when I try looking it up.</p>

<p>So, how do we define Tight Mode? This is how Patrick explains it:</p>

<blockquote>“Chrome loads resources in 2 phases. “Tight mode” is the initial phase and constraints [sic] loading lower-priority resources until the body is attached to the document (essentially, after all blocking scripts in the head have been executed).”<br /><br />&mdash; Patrick Meenan</blockquote>

<p>OK, so we have this two-part process that Chrome uses to fetch resources from the network and the first part is focused on anything that isn’t a “lower-priority resource.” We have ways of telling browsers which resources <em>we</em> think are low priority in the form of the <a href="https://web.dev/articles/fetch-priority">Fetch Priority API</a> and lazy-loading techniques that asynchronously load resources when they enter the viewport on scroll &mdash; all of which Robin covers in his presentation. But Tight Mode has its own way of determining what resources to load first.</p>
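<p>As a quick refresher, those hints look something like this in markup (a minimal sketch; the file names are placeholders):</p>

<pre><code class="language-html">&lt;!-- Request the hero image early, even though images default to lower priority --&gt;
&lt;img src="hero.jpg" fetchpriority="high" alt="Hero"&gt;

&lt;!-- Defer below-the-fold images until they approach the viewport --&gt;
&lt;img src="footer-banner.jpg" loading="lazy" fetchpriority="low" alt="Banner"&gt;

&lt;!-- Deprioritize a non-critical script --&gt;
&lt;script src="analytics.js" fetchpriority="low" defer&gt;&lt;/script&gt;
</code></pre>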














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/1-chrome-tight-mode.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="448"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/1-chrome-tight-mode.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/1-chrome-tight-mode.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/1-chrome-tight-mode.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/1-chrome-tight-mode.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/1-chrome-tight-mode.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/1-chrome-tight-mode.png"
			
			sizes="100vw"
			alt="Chrome Tight Mode screenshot"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 1: Chrome loads resources in two phases, the first of which is called “Tight Mode.” (<a href='https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/1-chrome-tight-mode.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Tight Mode discriminates resources, taking anything and everything marked as High and Medium priority. Everything else is constrained and left on the outside, looking in until the body is firmly attached to the document, signaling that blocking scripts have been executed. It’s at that point that resources marked with Low priority are allowed in the door during the second phase of loading.</p>

<p>There’s a big caveat to that, but we’ll get there. The important thing to note is that…</p>

<h2 id="chrome-and-safari-enforce-tight-mode">Chrome And Safari Enforce Tight Mode</h2>

<p>Yes, both Chrome and Safari have some working form of Tight Mode running in the background. That last image illustrates Chrome’s Tight Mode. Let’s look at Safari’s next and compare the two.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/2-tight-mode-chrome-vs-safari.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/2-tight-mode-chrome-vs-safari.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/2-tight-mode-chrome-vs-safari.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/2-tight-mode-chrome-vs-safari.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/2-tight-mode-chrome-vs-safari.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/2-tight-mode-chrome-vs-safari.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/2-tight-mode-chrome-vs-safari.png"
			
			sizes="100vw"
			alt="A screenshot comparing Tight Mode in Chrome with Tight Mode in Safari."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 2: Comparing Tight Mode in Chrome with Tight Mode in Safari. Notice that Chrome allows five images marked with High priority to slip out of Tight Mode. (<a href='https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/2-tight-mode-chrome-vs-safari.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Look at that! Safari discriminates High-priority resources in its initial fetch, just like Chrome, but we get wildly different loading behavior between the two browsers. Notice how Safari appears to exclude the first five PNG images marked with Medium priority where Chrome allows them. In other words, Safari makes all Medium- and Low-priority resources wait in line until all High-priority items are done loading, even though we’re working with the exact same HTML. You might say that Safari’s behavior makes the most sense, as you can see in that last image that Chrome seemingly excludes some High-priority resources from Tight Mode. There’s clearly some tomfoolery happening there that we’ll get to.</p>

<p>Where’s Firefox in all this? It doesn’t take any extra tightening measures when evaluating the priority of the resources on a page. We might consider this the “classic” waterfall approach to fetching and loading resources.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/3-tight-mode-chtome-safari-firefox.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="447"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/3-tight-mode-chtome-safari-firefox.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/3-tight-mode-chtome-safari-firefox.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/3-tight-mode-chtome-safari-firefox.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/3-tight-mode-chtome-safari-firefox.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/3-tight-mode-chtome-safari-firefox.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/3-tight-mode-chtome-safari-firefox.png"
			
			sizes="100vw"
			alt="Comparison of Chrome, Safari, and Firefox Tight Mode"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 3: Chrome and Safari have implemented Tight Mode while Firefox maintains a simple waterfall. (<a href='https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/3-tight-mode-chtome-safari-firefox.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="chrome-and-safari-trigger-tight-mode-differently">Chrome And Safari Trigger Tight Mode Differently</h2>

<p>Robin makes this clear as day in his talk. Chrome and Safari are both Tight Mode proponents, yet trigger it under differing circumstances that we can outline like this:</p>

<table class="tablesaw break-out">
    <thead>
        <tr>
      <th></th>
            <th>Chrome</th>
            <th>Safari</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Tight Mode triggered</td>
            <td>While blocking JS in the <code>&lt;head&gt;</code> is busy.</td>
      <td>While blocking JS or CSS anywhere is busy.</td>
        </tr>
    </tbody>
</table>

<p>Notice that Chrome only looks at the document <code>&lt;head&gt;</code> when prioritizing resources, and <strong>only when it involves JavaScript</strong>. Safari, meanwhile, also looks at JavaScript, but CSS as well, and anywhere those things might be located in the document &mdash; regardless of whether it’s in the <code>&lt;head&gt;</code> or <code>&lt;body&gt;</code>. That helps explain why Chrome excludes images marked as High priority in Figure 2 from its Tight Mode implementation &mdash; it only cares about JavaScript in this context.</p>

<p>So, even if Chrome encounters a script file with <code>fetchpriority=&quot;high&quot;</code> in the document body, the file is not considered a “High” priority and it will be loaded after the rest of the items. Safari, meanwhile, honors <code>fetchpriority</code> anywhere in the document. This helps explain why Chrome leaves two scripts on the table, so to speak, in Figure 2, while Safari appears to load them during Tight Mode.</p>

<p>That’s not to say Safari isn’t doing anything weird in its process. Given the following markup:</p>

<pre><code class="language-html">&lt;head&gt;
  &lt;!-- two high-priority scripts --&gt;
  &lt;script src="script-1.js"&gt;&lt;/script&gt;
  &lt;script src="script-1.js"&gt;&lt;/script&gt;

  &lt;!-- two low-priority scripts --&gt;
  &lt;script src="script-3.js" defer&gt;&lt;/script&gt;
  &lt;script src="script-4.js" defer&gt;&lt;/script&gt;
&lt;/head&gt;
&lt;body&gt;
  &lt;!-- five low-priority images --&gt;
  &lt;img src="image-1.jpg"&gt;
  &lt;img src="image-2.jpg"&gt;
  &lt;img src="image-3.jpg"&gt;
  &lt;img src="image-4.jpg"&gt;
  &lt;img src="image-5.jpg"&gt;
&lt;/body&gt;
</code></pre>

<p>…you might expect that Safari would delay the two Low-priority scripts in the <code>&lt;head&gt;</code> until the five images in the <code>&lt;body&gt;</code> are downloaded. But that’s not the case. Instead, Safari loads those two scripts during its version of Tight Mode.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/4-safari-deferred-scripts-head.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="452"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/4-safari-deferred-scripts-head.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/4-safari-deferred-scripts-head.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/4-safari-deferred-scripts-head.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/4-safari-deferred-scripts-head.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/4-safari-deferred-scripts-head.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/4-safari-deferred-scripts-head.png"
			
			sizes="100vw"
			alt="Safari deferred scripts"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 4: Safari treats deferred scripts in the <code>&lt;head&gt;</code> with High priority. (<a href='https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/4-safari-deferred-scripts-head.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="chrome-and-safari-exceptions">Chrome And Safari Exceptions</h2>

<p>I mentioned earlier that Low-priority resources are loaded during the second phase of loading, after Tight Mode has been completed. But I also mentioned that there’s a big caveat to that behavior. Let’s touch on that now.</p>

<p>According to Patrick’s article, we know that Tight Mode is “the initial phase and constraints loading lower-priority resources until the body is attached to the document (essentially, after all blocking scripts in the head have been executed).” But there’s a second part to that definition that I left out:</p>

<blockquote>“In tight mode, low-priority resources are only loaded if there are less than two in-flight requests at the time that they are discovered.”</blockquote>

<p>A-ha! So, there <em>is</em> a way for Low-priority resources to load in Tight Mode: when fewer than two “in-flight” requests are happening at the moment they’re discovered.</p>

<p>Wait, what does “in-flight” even mean?</p>

<p>An “in-flight” request is simply one that has been made but hasn’t finished yet. So, Tight Mode lets a Low-priority resource through only when fewer than two High- or Medium-priority items are being requested at that moment. Robin demonstrates this by comparing Chrome to Safari under the same conditions, where there are only two High-priority scripts and ten regular images in the mix:</p>

<pre><code class="language-html">&lt;head&gt;
  &lt;!-- two high-priority scripts --&gt;
  &lt;script src="script-1.js"&gt;&lt;/script&gt;
  &lt;script src="script-1.js"&gt;&lt;/script&gt;
&lt;/head&gt;
&lt;body&gt;
  &lt;!-- ten low-priority images --&gt;
  &lt;img src="image-1.jpg"&gt;
  &lt;img src="image-2.jpg"&gt;
  &lt;img src="image-3.jpg"&gt;
  &lt;img src="image-4.jpg"&gt;
  &lt;img src="image-5.jpg"&gt;
  &lt;!-- rest of images --&gt;
  &lt;img src="image-10.jpg"&gt;
&lt;/body&gt;
</code></pre>

<p>Let’s look at what Safari does first because it’s the most straightforward approach:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/5-safari-tight-mode.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="231"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/5-safari-tight-mode.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/5-safari-tight-mode.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/5-safari-tight-mode.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/5-safari-tight-mode.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/5-safari-tight-mode.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/5-safari-tight-mode.jpg"
			
			sizes="100vw"
			alt="Safari Tight Mode"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/5-safari-tight-mode.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Nothing tricky about that, right? The two High-priority scripts are downloaded first and the 10 images flow in right after. Now let’s look at Chrome:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/6-chrome-tight-mode.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/6-chrome-tight-mode.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/6-chrome-tight-mode.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/6-chrome-tight-mode.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/6-chrome-tight-mode.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/6-chrome-tight-mode.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/6-chrome-tight-mode.jpg"
			
			sizes="100vw"
			alt="Chrome Tight Mode"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/tight-mode-why-browsers-produce-different-performance-results/6-chrome-tight-mode.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>We have the two High-priority scripts loaded first, as expected. But then Chrome decides to let in the first five images with Medium priority, then excludes the last five images with Low priority. What. The. Heck.</p>

<p>The reason is a noble one: Chrome wants to load the first five images because, presumably, the <a href="https://www.debugbear.com/docs/metrics/largest-contentful-paint?utm_campaign=sm-7">Largest Contentful Paint</a> (LCP) is often going to be one of those images, and Chrome is hedging its bets that the web will be faster overall if it automatically handles some of that logic. Again, it’s a noble line of reasoning, even if it isn’t going to be 100% accurate. It does muddy the waters, though, and makes understanding Tight Mode a lot harder when we see Medium- and Low-priority items treated as High-priority citizens.</p>

<p>Even muddier is that Chrome appears to only accept up to two Medium-priority resources in this discriminatory process. The rest are marked with Low priority.</p>

<p>That’s what we mean by “less than two in-flight requests.” If Chrome sees that only one or two items are entering Tight Mode, then <strong>it automatically prioritizes up to the first five non-critical images</strong> as an LCP optimization effort.</p>

<p>Truth be told, Safari does something similar, but in a different context. Instead of accepting Low-priority items when there are fewer than two in-flight requests, Safari accepts both Medium- and Low-priority items in Tight Mode, from anywhere in the document, regardless of whether they are located in the <code>&lt;head&gt;</code>. The exception is any asynchronous or deferred script because, as we saw earlier, those get loaded right away anyway.</p>

<h2 id="how-to-manipulate-tight-mode">How To Manipulate Tight Mode</h2>

<p>That might make for a great follow-up article, but for now I’ll refer you directly to Robin’s video because his first-person research is worth consuming firsthand. But here’s the gist:</p>

<ul>
<li>We have these high-level features that can help influence priority, including <strong>resource hints</strong> (i.e., <code>preload</code> and <code>preconnect</code>), the <a href="https://www.debugbear.com/blog/fetchpriority-attribute?utm_campaign=sm-7"><strong>Fetch Priority API</strong></a>, and <strong>lazy-loading techniques</strong>.</li>
<li>We can indicate <code>fetchpriority=&quot;high&quot;</code> and <code>fetchpriority=&quot;low&quot;</code> on items.</li>
</ul>

<div class="break-out">
<pre><code class="language-html">&lt;img src="lcp-image.jpg" fetchpriority="high"&gt;
&lt;link rel="preload" href="defer.js" as="script" fetchpriority="low"&gt;
</code></pre>
</div>

<ul>
<li>Using <code>fetchpriority=&quot;high&quot;</code> is one way we can get items lower in the source included in Tight Mode. Using <code>fetchpriority=&quot;low&quot;</code> is one way we can get items higher in the source excluded from Tight Mode.</li>
<li>For Chrome, this works on images, asynchronous/deferred scripts, and scripts located at the bottom of the <code>&lt;body&gt;</code>.</li>
<li>For Safari, this only works on images.</li>
</ul>
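
<p>To make that concrete, here’s a hypothetical markup sketch (the file names are placeholders) showing both directions: demoting an early decorative image out of Tight Mode and promoting a late LCP candidate into it:</p>

<div class="break-out">
<pre><code class="language-html">&lt;!-- High in the source, but decorative: ask to be excluded from Tight Mode --&gt;
&lt;img src="decorative-banner.jpg" fetchpriority="low" loading="lazy" alt=""&gt;

&lt;!-- Lower in the source, but the likely LCP element: ask to be included --&gt;
&lt;img src="hero.jpg" fetchpriority="high" alt="Product hero"&gt;
</code></pre>
</div>

<p>Per the notes above, expect Chrome to honor these hints more broadly than Safari, which only applies them to images.</p>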

<p>Again, watch Robin’s talk for the full story <a href="https://youtu.be/p0lFyPuH8Zs?feature=shared&amp;t=1712">starting around the 28:32 marker</a>.</p>

<h2 id="that-s-tight-mode">That’s Tight… Mode</h2>

<p>It’s bonkers to me that there is so little information about Tight Mode floating around the web. I would expect something like this to be well-documented somewhere, certainly over at Chrome Developers or somewhere similar, but all we have is a lightweight Google Doc and a thorough presentation to paint a picture of how two of the three major browsers fetch and prioritize resources. Let me know if you have additional information that you’ve either published or found &mdash; I’d love to include it in the discussion.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Saad Khan</author><title>How To Design For High-Traffic Events And Prevent Your Website From Crashing</title><link>https://www.smashingmagazine.com/2025/01/designing-high-traffic-events-cloudways/</link><pubDate>Tue, 07 Jan 2025 14:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/01/designing-high-traffic-events-cloudways/</guid><description>Product drops and sales are a great way to increase revenue, but these events can result in traffic spikes that affect a site’s availability and performance. To prevent website crashes, you’ll have to make sure that the sites you design can handle large numbers of server requests at once. Let’s discuss how!</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/01/designing-high-traffic-events-cloudways/" />
              <title>How To Design For High-Traffic Events And Prevent Your Website From Crashing</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>How To Design For High-Traffic Events And Prevent Your Website From Crashing</h1>
                  
                    
                    <address>Saad Khan</address>
                  
                  <time datetime="2025-01-07T14:00:00&#43;00:00" class="op-published">2025-01-07T14:00:00+00:00</time>
                  <time datetime="2025-01-07T14:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                <p>This article is sponsored by <b>Cloudways</b></p>
                

<p>Product launches and sales typically attract large volumes of traffic. Too many concurrent server requests can lead to website crashes if you’re not equipped to deal with them. This can result in a loss of revenue and reputation damage.</p>

<p>The good news is that you can maximize availability and prevent website crashes by designing websites specifically for these events. For example, you can switch to a scalable cloud-based web host, or compress/optimize images to save bandwidth.</p>

<p>In this article, we’ll discuss six ways to design websites for high-traffic events like product drops and sales:</p>

<ol>
<li><a href="#compress-and-optimize-images">Compress and optimize images</a>,</li>
<li><a href="#choose-a-scalable-web-host">Choose a scalable web host</a>,</li>
<li><a href="#use-a-cdn">Use a CDN</a>,</li>
<li><a href="#leverage-caching">Leverage caching</a>,</li>
<li><a href="#stress-test-websites">Stress test websites</a>,</li>
<li><a href="#refine-the-bankend">Refine the backend</a>.</li>
</ol>

<p>Let’s jump right in!</p>

<h2 id="how-to-design-for-high-traffic-events">How To Design For High-Traffic Events</h2>

<p>Let’s take a look at six ways to design websites for high-traffic events, without worrying about website crashes and other performance-related issues.</p>

<h3 id="1-compress-and-optimize-images">1. Compress And Optimize Images</h3>

<p>One of the simplest ways to design a website that accommodates large volumes of traffic is to optimize and compress images. Images are often the heaviest assets on a page, which means they take longer for browsers to download and display. Additionally, they can be a huge drain on bandwidth and lead to slow loading times.</p>

<p>You can free up space and reduce the load on your server by compressing and optimizing images. It’s also a good idea to resize images so their dimensions are no larger than they need to be. You can often do this using built-in apps on your operating system.</p>

<p>There are also <a href="https://www.smashingmagazine.com/2022/07/powerful-image-optimization-tools/">online optimization tools</a> available like <a href="https://tinypng.com/">Tinify</a>, as well as advanced image editing software like <a href="https://www.adobe.com/products/photoshop.html">Photoshop</a> or <a href="https://www.gimp.org/">GIMP</a>:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://www.gimp.org/">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="231"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/gimp-preview.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/gimp-preview.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/gimp-preview.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/gimp-preview.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/gimp-preview.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/gimp-preview.png"
			
			sizes="100vw"
			alt="GIMP"
		/>
    
    </a>
  

  
</figure>

<p>Image format is also a key consideration. Many designers rely on JPG and PNG, but <a href="https://www.smashingmagazine.com/2021/09/modern-image-formats-avif-webp/">modern image formats</a> like WebP can reduce the weight of the image and <a href="https://www.smashingmagazine.com/2024/06/scent-ux-unrealized-potential-olfactory-design/">provide a better user experience (UX)</a>.</p>

<p>You may even consider installing an image optimization plugin or an image CDN to compress and scale images automatically. Additionally, you can implement lazy loading, which prioritizes the loading of images above the fold and delays those that aren’t immediately visible.</p>
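
<p>As a rough sketch of both techniques together (the file names here are placeholders), a <code>&lt;picture&gt;</code> element can serve WebP with a JPG fallback, while <code>loading=&quot;lazy&quot;</code> defers below-the-fold images:</p>

<div class="break-out">
<pre><code class="language-html">&lt;picture&gt;
  &lt;!-- Browsers that support WebP get the smaller file --&gt;
  &lt;source srcset="product.webp" type="image/webp"&gt;
  &lt;!-- Everyone else falls back to the JPG; width/height prevent layout shifts --&gt;
  &lt;img src="product.jpg" width="800" height="600" loading="lazy" alt="Product photo"&gt;
&lt;/picture&gt;
</code></pre>
</div>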

<h3 id="2-choose-a-scalable-web-host">2. Choose A Scalable Web Host</h3>

<p>The most convenient way to design a high-traffic website without worrying about website crashes is to upgrade your web hosting solution.</p>

<p>Traditionally, when you sign up for a web hosting plan, you’re allocated a fixed amount of resources. This can negatively impact your website performance, particularly if you use a shared hosting service.</p>

<p>Upgrading your web host ensures that you have adequate resources to serve visitors flocking to your site during high-traffic events. If you’re not prepared for this eventuality, your website may crash, or your host may automatically upgrade you to a higher-priced plan.</p>

<p>Therefore, the best solution is to switch to a scalable web host like <a href="https://www.cloudways.com/en/autonomous.php?id=377681&amp;data1=smash&amp;data2=Jan_article">Cloudways Autonomous</a>:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://www.cloudways.com/en/?id=377681&amp;data1=smash&amp;data2=Jan_article">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="387"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/cloudways-preview.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/cloudways-preview.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/cloudways-preview.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/cloudways-preview.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/cloudways-preview.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/cloudways-preview.png"
			
			sizes="100vw"
			alt="Cloudways"
		/>
    
    </a>
  

  
</figure>

<p>This is a fully managed WordPress hosting service that automatically adjusts your web resources based on demand. This means that you’re able to handle sudden traffic surges without the hassle of resource monitoring and without compromising on speed.</p>

<p>With Cloudways Autonomous your website is hosted on multiple servers instead of just one. It uses Kubernetes with advanced load balancing to distribute traffic among these servers. Kubernetes is capable of spinning up additional pods (self-contained units of compute that work much like small servers) based on demand, so there’s no chance of overwhelming a single server with too many requests.</p>

<p>High-traffic events like sales can also make your site a prime target for hackers. This is because, in high-stress situations, many sites enter a state of greater vulnerability and instability. But with Cloudways Autonomous, you’ll benefit from DDoS mitigation and a web application firewall to <a href="https://www.smashingmagazine.com/2010/06/10-ways-to-beef-up-your-websites-security/">improve website security</a>.</p>

<h3 id="3-use-a-cdn">3. Use A CDN</h3>

<p>As you’d expect, large volumes of traffic can significantly impact the security and stability of your site’s network. This can result in website crashes unless you take the proper precautions when designing sites for these events.</p>

<p>A content delivery network (CDN) is an excellent solution to the problem. You’ll get access to a collection of strategically located servers scattered all over the world. This means that you can reduce latency and <a href="https://www.smashingmagazine.com/2023/08/optimize-performance-serve-global-audience/">speed up your content delivery times</a>, regardless of where your customers are based.</p>

<p>When a user makes a request for a website, they’ll receive content from a server that’s physically closest to their location. Plus, having extra servers to distribute traffic can prevent a single server from crashing under high-pressure conditions. Cloudflare is one of the most robust CDNs available, and luckily, you’ll get access to it when you use Cloudways Autonomous.</p>

<p>You can also find optimization plugins or caching solutions that give you access to a CDN. Some tools like Jetpack include a dedicated image CDN, which is built to accommodate and auto-optimize visual assets.</p>

<h3 id="4-leverage-caching">4. Leverage Caching</h3>

<p>When a user requests a website, it can take a long time to load all the HTML, CSS, and JavaScript contained within it. Caching can help your website combat this issue.</p>

<p>A cache functions as a temporary storage location that keeps copies of your web pages on hand (once they’ve been requested). This means that every subsequent request will be served from the cache, enabling users to access content much faster.</p>

<p>Caching mainly deals with static content, such as pre-rendered HTML, which is much quicker to serve than content that has to be generated dynamically on each request. However, you can find caching technologies that accommodate both types of content.</p>

<p>There are different caching mechanisms to consider when designing for high-traffic events. For example, edge caching is generally used to cache static assets like images, videos, or web pages. Meanwhile, database caching enables you to optimize server requests.</p>

<p>If you’re expecting fewer simultaneous sessions (which isn’t likely in this scenario), server-side caching can be a good option. You could even implement browser caching, which controls how long visitors’ browsers store static assets locally based on your <a href="https://www.smashingmagazine.com/2021/12/headers-https-requests-ui-automation-testing/">HTTP headers</a>.</p>
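
<p>For example, a response header along these lines tells browsers they can keep a fingerprinted static asset for a year without revalidating (the exact values depend on your setup and how you version your files):</p>

<div class="break-out">
<pre><code class="language-http">Cache-Control: public, max-age=31536000, immutable
</code></pre>
</div>

<p>Dynamic pages, by contrast, would typically use a much shorter <code>max-age</code> or <code>no-cache</code> so visitors don’t see stale content.</p>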

<p>There are plenty of caching plugins available if you want to add this functionality to your site, but some web hosts provide built-in solutions. For example, Cloudways Autonomous uses Cloudflare’s edge cache and integrated object cache.</p>

<h3 id="5-stress-test-websites">5. Stress Test Websites</h3>

<p>One of the best ways to design websites while preparing for peak traffic is to carry out comprehensive stress tests.</p>

<p>This enables you to find out how your website performs in various conditions. For instance, you can simulate high-traffic events and discover the upper limits of your server’s capabilities. This helps you avoid resource drainage and prevent website crashes.</p>

<p>You might have experience with speed testing tools like Pingdom, which <a href="https://www.smashingmagazine.com/2023/08/running-page-speed-test-monitoring-versus-measuring/">assess your website performance</a>. But these tools don’t help you understand how performance may be impacted by high volumes of traffic.</p>

<p>Therefore, you’ll need to use a dedicated stress test tool like <a href="https://loader.io/">Loader.io</a>:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://loader.io/">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="275"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/loader-io-preview.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/loader-io-preview.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/loader-io-preview.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/loader-io-preview.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/loader-io-preview.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-high-traffic-events-cloudways/loader-io-preview.png"
			
			sizes="100vw"
			alt="Loader.io"
		/>
    
    </a>
  

  
</figure>

<p>This is completely free to use, but you’ll need to register for an account and verify your website domain. You can then download your preferred file and upload it to your server via FTP.</p>

<p>After that, you’ll find three different tests to carry out. Once your test is complete, you can take a look at the average response time and maximum response time, and see how this is affected by a higher number of clients.</p>

<h3 id="6-refine-the-backend">6. Refine The Backend</h3>

<p>The final way to design websites for high-traffic events is to refine the WordPress back end.</p>

<p>The admin panel is where you install plugins, activate themes, and add content. The more of these features that you have on your site, the slower your pages will load.</p>

<p>Therefore, it’s a good idea to delete any old pages, posts, and images that are no longer needed. If you have access to your database, you can even go in and remove any archived materials.</p>

<p>On top of this, it’s best to remove plugins that aren’t essential for your website to function. Again, with database access, you can get in there and delete any tables that sometimes get left behind when you uninstall plugins via the WordPress dashboard.</p>

<p>When it comes to themes, you’ll want to opt for a simple layout with a minimalist design. Themes that come with lots of built-in widgets or rely on third-party plugins will likely add bloat to your loading times. Essentially, the lighter your back end, the quicker it will load.</p>

<h2 id="conclusion">Conclusion</h2>

<p>Product drops and sales are a great way to increase revenue, but these events can result in traffic spikes that affect a site’s availability and performance. To prevent website crashes, you’ll have to make sure that the sites you design can handle large numbers of server requests at once.</p>

<p>The easiest way to support fluctuating traffic volumes is to upgrade to a scalable web hosting service like Cloudways Autonomous. This way, you can adjust your server resources automatically, based on demand. Plus, you’ll get access to a CDN, caching, and an SSL certificate. <a href="https://www.cloudways.com/en/autonomous.php?id=377681&amp;data1=smash&amp;data2=Jan_article">Get started today</a>!</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(il)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Geoff Graham</author><title>Why Optimizing Your Lighthouse Score Is Not Enough For A Fast Website</title><link>https://www.smashingmagazine.com/2024/11/why-optimizing-lighthouse-score-not-enough-fast-website/</link><pubDate>Tue, 05 Nov 2024 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2024/11/why-optimizing-lighthouse-score-not-enough-fast-website/</guid><description>Feeling good with your Lighthouse score of 100%? You should! But you should also know that you’re only looking at part of the performance picture. Learn how Lighthouse scores are measured differently than other tools, the impact that has on measuring performance metrics, and why you need real-user monitoring for a complete picture.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2024/11/why-optimizing-lighthouse-score-not-enough-fast-website/" />
              <title>Why Optimizing Your Lighthouse Score Is Not Enough For A Fast Website</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Why Optimizing Your Lighthouse Score Is Not Enough For A Fast Website</h1>
                  
                    
                    <address>Geoff Graham</address>
                  
                  <time datetime="2024-11-05T10:00:00&#43;00:00" class="op-published">2024-11-05T10:00:00+00:00</time>
                  <time datetime="2024-11-05T10:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                <p>This article is sponsored by <b>DebugBear</b></p>
                

<p>We’ve all had that moment. You’re optimizing the performance of some website, scrutinizing every millisecond it takes for the current page to load. You’ve fired up Google Lighthouse from Chrome’s DevTools because everyone and their uncle uses it to evaluate performance.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/1-google-lighthouse.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="518"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/1-google-lighthouse.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/1-google-lighthouse.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/1-google-lighthouse.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/1-google-lighthouse.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/1-google-lighthouse.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/1-google-lighthouse.png"
			
			sizes="100vw"
			alt="A screenshot from Google DevTools"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/1-google-lighthouse.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>After running your 151st report and completing all of the recommended improvements, you experience nirvana: <strong>a perfect 100% performance score!</strong></p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/2-devtools-performance-score.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="518"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/2-devtools-performance-score.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/2-devtools-performance-score.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/2-devtools-performance-score.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/2-devtools-performance-score.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/2-devtools-performance-score.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/2-devtools-performance-score.png"
			
			sizes="100vw"
			alt="A screenshot with the 100% performance score on DevTools"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Heck yeah. (<a href='https://files.smashing.media/articles/why-optimizing-lighthouse-score-not-enough-fast-website/2-devtools-performance-score.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Time to pat yourself on the back for a job well done. Maybe you can use this to get that pay raise you’ve been wanting! Except, don’t &mdash; at least not using Google Lighthouse as your sole proof. I know a perfect score produces all kinds of good feelings. That’s what we’re aiming for, after all!</p>

<p>Google Lighthouse is merely <em>one</em> tool in a complete performance toolkit. What it’s not is a complete picture of how your website performs in the real world. Sure, we can glean plenty of insights about a site’s performance and even spot issues that ought to be addressed to speed things up. But again, it’s <em>an incomplete picture</em>.</p>

<h2 id="what-google-lighthouse-is-great-at">What Google Lighthouse Is Great At</h2>

<p>I hear other developers boasting about perfect Lighthouse scores and see the screenshots published all over socials. Hey, I just did that myself in the introduction of this article!</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aLighthouse%20might%20be%20the%20most%20widely%20used%20web%20performance%20reporting%20tool.%20I%e2%80%99d%20wager%20its%20ubiquity%20is%20due%20to%20convenience%20more%20than%20the%20quality%20of%20its%20reports.%0a&url=https://smashingmagazine.com%2f2024%2f11%2fwhy-optimizing-lighthouse-score-not-enough-fast-website%2f">
      
Lighthouse might be the most widely used web performance reporting tool. I’d wager its ubiquity is due to convenience more than the quality of its reports.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>Open DevTools, click the Lighthouse tab, and generate the report! There are even many ways we can configure Lighthouse to measure performance in simulated situations, such as slow internet connection speeds or creating separate reports for mobile and desktop. It’s a very powerful tool for something that comes baked into a free browser. It’s also <a href="https://developers.google.com/speed/docs/insights/v5/about">baked right into Google’s PageSpeed Insights tool</a>!</p>

<p>And it’s fast. Run a report in Lighthouse, and you’ll get something back in about 10-15 seconds. Try running reports with other tools, and you’ll find yourself refilling your coffee, hitting the bathroom, and maybe checking your email (in varying order) while waiting for the results. There’s a good reason for that, but all I want to call out is that Google Lighthouse is <em>lightning</em> fast as far as performance reporting goes.</p>

<p><strong>To recap:</strong> Lighthouse is great at many things!</p>

<ul>
<li>It’s convenient to access,</li>
<li>It provides a good deal of configuration for different levels of troubleshooting,</li>
<li>And it spits out reports in record time.</li>
</ul>

<p>And what about that bright and lovely animated green score &mdash; who doesn’t love that?!</p>

<p>OK, that’s the rosy side of Lighthouse reports. It’s only fair to highlight its limitations as well. This isn’t to dissuade you or anyone else from using Lighthouse, but more of a heads-up that your score may not perfectly reflect reality &mdash; or even match the scores you’d get in other tools, including Google’s own <a href="https://pagespeed.web.dev">PageSpeed Insights</a>.</p>

<h2 id="it-doesn-t-match-real-users">It Doesn’t Match “Real” Users</h2>

<p>Not all data is created equal in the world of Web Performance. It’s important to know this because data represents assumptions that reporting tools make when evaluating performance metrics.</p>

<p>The data Lighthouse relies on for its reporting is called <strong>simulated data</strong>. You might already have a solid guess at what that means: it’s <em>synthetic</em> data. Now, before kicking simulated data in the knees for not being “real” data, know that it’s the reason Lighthouse is super fast.</p>

<p>You know how there’s a setting to “throttle” the internet connection speed? That simulates different network conditions that either slow down or speed up the connection, something you configure directly in Lighthouse. By default, Lighthouse collects data on a fast connection, but we can configure it to something slower to gain insights into slow page loads. But beware! <strong>Lighthouse then estimates how quickly the page would have loaded on that different connection</strong>.</p>

<p><a href="https://www.debugbear.com">DebugBear</a> founder <a href="https://www.mattzeunert.com">Matt Zeunert</a> outlines <a href="https://calendar.perfplanet.com/2021/how-does-lighthouse-simulated-throttling-work/">how data runs in a simulated throttling environment</a>, explaining how Lighthouse uses “optimistic” and “pessimistic” averages for making conclusions:</p>

<blockquote>“[Simulated throttling] reduces variability between tests. But if there’s a single slow render-blocking request that shares an origin with several fast responses, then Lighthouse will underestimate page load time.<br /><br />Lighthouse averages optimistic and pessimistic estimates when it’s unsure exactly which nodes block rendering. In practice, metrics may be closer to either one of these, depending on which dependency graph is more correct.”</blockquote>

<p>And again, the environment is a configuration, not reality. It’s unlikely that your throttled conditions match the connection speeds of an average real user on the website, as they may have a faster network connection or run on a slower CPU. What Lighthouse provides is more like <strong>“on-demand” testing</strong> that’s immediately available.</p>

<p>That makes simulated data great for running tests quickly and under certain artificially sweetened conditions. However, it sacrifices accuracy by making assumptions about the connection speeds of site visitors and averages things in a way that divorces it from reality.</p>

<p>While simulated throttling is the default in Lighthouse, it also supports <a href="https://www.debugbear.com/blog/packet-level-throttling?utm_campaign=sm-6">more realistic throttling methods</a>. Running those tests will take more time but give you more accurate data. The easiest way to run Lighthouse with more realistic settings is using an online tool like the <a href="https://www.debugbear.com/test/website-speed?utm_campaign=sm-6">DebugBear website speed test</a> or <a href="https://www.webpagetest.org/">WebPageTest</a>.</p>
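<p>For readers who want to try this locally, here is a minimal sketch of the flags you might pass to the Lighthouse Node API to switch from simulated throttling to applied (“devtools”) throttling. The option names follow my reading of the Lighthouse Node API, and the numbers mirror Lighthouse’s slow-4G preset; treat this as a starting point rather than a canonical configuration.</p>

```javascript
// A sketch of a Lighthouse flags object that swaps the fast, simulated default
// for applied ("devtools") throttling. Values below are assumed from
// Lighthouse's slow-4G preset; adjust them to match your real audience.
const flags = {
  onlyCategories: ['performance'],
  throttlingMethod: 'devtools', // 'simulate' is the faster, less accurate default
  throttling: {
    rttMs: 150,                 // round-trip time
    throughputKbps: 1638.4,     // ~1.6 Mbps download
    cpuSlowdownMultiplier: 4,   // emulate a slower CPU
  },
};

// With the `lighthouse` npm package installed, the audit would then run as:
//   const result = await lighthouse('https://example.com', flags);
console.log(flags.throttlingMethod);
```

<p>Expect these runs to take noticeably longer than the default, which is exactly the trade-off described above: more time for more realism.</p>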

<h2 id="it-doesn-t-impact-core-web-vitals-scores">It Doesn’t Impact Core Web Vitals Scores</h2>

<p>These <a href="https://www.debugbear.com/docs/metrics/core-web-vitals?utm_campaign=sm-6">Core Web Vitals</a> everyone talks about are Google’s standard metrics for measuring performance. They go beyond simple “Your page loaded in X seconds” reports by looking at a slew of more pertinent details that are diagnostic of how the page loads, resources that might be blocking other resources, slow user interactions, and how much the page shifts around from loading resources and content. Zeunert has <a href="https://www.smashingmagazine.com/2024/04/monitor-optimize-google-core-web-vitals/">another great post here on Smashing Magazine</a> that discusses each metric in detail.</p>

<p>The main point here is that the simulated data Lighthouse produces may (and often does) differ from performance metrics from other tools. <a href="https://www.smashingmagazine.com/2023/10/answering-questions-interpreting-page-speed-reports/">I spent a good deal of time explaining this in another article.</a> The gist of it is that <strong>Lighthouse scores do not impact Core Web Vitals data</strong>. The reason for that is Core Web Vitals relies on data about real users pulled from the monthly-updated <a href="https://developer.chrome.com/docs/crux">Chrome User Experience (CrUX) report</a>. While CrUX data may be limited by how recently the data was pulled, it is a more accurate reflection of user behaviors and browsing conditions than the simulated data in Lighthouse.</p>

<p>The ultimate point I’m getting at is that Lighthouse is simply ineffective at measuring Core Web Vitals performance metrics. Here’s how I explain it in the article linked above:</p>

<blockquote>“[Synthetic] data is fundamentally limited by the fact that <strong>it only looks at a single experience in a pre-defined environment</strong>. This environment often doesn’t even match the average real user on the website, who may have a faster network connection or a slower CPU.”</blockquote>

<p>I emphasized the important part. In real life, users are likely to have more than one experience on a particular page. It’s not as though you navigate to a site, let it load, sit there, and then close the page; you’re more likely to do something on that page. And for a Core Web Vital metric that looks for slow paint in response to user input &mdash; namely, <a href="https://www.smashingmagazine.com/2023/12/preparing-interaction-next-paint-web-core-vital/">Interaction to Next Paint (INP)</a> &mdash; there’s no way for Lighthouse to measure that at all!</p>

<p>It’s the same deal for a metric like Cumulative Layout Shift (CLS) that <a href="https://web.dev/articles/user-centric-performance-metrics#types-of-metrics">measures the</a> <a href="https://www.smashingmagazine.com/2023/10/answering-questions-interpreting-page-speed-reports/#why-is-my-lighthouse-cls-score-better-than-the-real-user-data">“visible stability” of a page layout</a> because layout shifts often happen lower on the page <em>after</em> a user has scrolled down. If Lighthouse relied on CrUX data (which it doesn’t), then it would be able to make assumptions based on real users who interact with the page and can experience CLS. Instead, Lighthouse waits patiently for the full page load and never interacts with parts of the page, thus having no way of knowing anything about CLS.</p>

<h2 id="but-it-s-still-a-good-start">But It’s Still a “Good Start”</h2>

<p>That’s what I want you to walk away with at the end of the day. A Lighthouse report is incredibly good at producing reports quickly, thanks to the simulated data it uses. In that sense, I’d say that Lighthouse is a handy “gut check” and maybe even a first step to identifying opportunities to optimize performance.</p>

<p>But a complete picture, it’s not. For that, what we’d want is a tool that leans on <strong>real user data</strong>. Tools that integrate CrUX data are pretty good there. But again, that data covers a rolling window of the <a href="https://developer.chrome.com/docs/crux/methodology/tools">previous 28 days</a>, so even though it is updated daily, it may lag behind the most recent user behaviors and interactions. It is, however, possible to <a href="https://developer.chrome.com/docs/crux/history-api">query historical records</a> for larger sample sizes.</p>
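<p>Those historical records can be pulled programmatically. Here is a hedged sketch of a request to the CrUX History API; the endpoint and field names follow Google’s documentation as I understand it, and <code>API_KEY</code> is a placeholder you would supply yourself.</p>

```javascript
// Build a request body for the CrUX History API. Field names are assumed from
// Google's docs; verify against the current API reference before relying on them.
function buildCruxHistoryRequest(origin) {
  return {
    origin,                                // or use `url` for a single-page query
    formFactor: 'PHONE',                   // PHONE, TABLET, or DESKTOP
    metrics: ['largest_contentful_paint'], // restrict the response to what you need
  };
}

// With a real key, the query would look something like:
// await fetch(
//   `https://chromeuxreport.googleapis.com/v1/records:queryHistoryRecord?key=${API_KEY}`,
//   { method: 'POST', body: JSON.stringify(buildCruxHistoryRequest('https://example.com')) }
// );
console.log(buildCruxHistoryRequest('https://example.com').formFactor);
```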

<p>Even better is using a tool that monitors users in real-time.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aData%20pulled%20directly%20from%20the%20site%20of%20origin%20is%20truly%20the%20gold%20standard%20data%20we%20want%20because%20it%20comes%20from%20the%20source%20of%20truth.%20That%20makes%20tools%20that%20integrate%20with%20your%20site%20the%20best%20way%20to%20gain%20insights%20and%20diagnose%20issues%20because%20they%20tell%20you%20exactly%20how%20your%20visitors%20are%20experiencing%20your%20site.%0a&url=https://smashingmagazine.com%2f2024%2f11%2fwhy-optimizing-lighthouse-score-not-enough-fast-website%2f">
      
Data pulled directly from the site of origin is truly the gold standard data we want because it comes from the source of truth. That makes tools that integrate with your site the best way to gain insights and diagnose issues because they tell you exactly how your visitors are experiencing your site.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>I’ve written about <a href="https://www.smashingmagazine.com/2024/02/reporting-core-web-vitals-performance-api/">using the Performance API in JavaScript</a> to evaluate custom and Core Web Vitals metrics, so it’s possible to roll that on your own. But there are plenty of existing services out there that do this for you, complete with visualizations, historical records, and true <strong>real user monitoring</strong> (often abbreviated as RUM). What services? Well, <a href="https://www.debugbear.com/?utm_campaing=sm-6">DebugBear is a great place to start</a>. I cited Matt Zeunert earlier, and DebugBear is his product.</p>
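<p>As a small taste of rolling your own, here is a sketch that observes LCP with the browser’s <code>PerformanceObserver</code> and rates the value against the published Core Web Vitals thresholds (2.5s for “good”, 4s before “poor”). The observer only runs in a browser; the rating helper is plain JavaScript.</p>

```javascript
// Rate an LCP value (in milliseconds) against the Core Web Vitals thresholds.
function rateLCP(ms) {
  if (ms <= 2500) return 'good';
  if (ms <= 4000) return 'needs-improvement';
  return 'poor';
}

// Browser-only: log the latest LCP candidate as it is observed in the field.
if (typeof PerformanceObserver !== 'undefined' && typeof document !== 'undefined') {
  new PerformanceObserver((list) => {
    const entries = list.getEntries();
    const lcp = entries[entries.length - 1]; // the last candidate is the current LCP
    console.log(`LCP: ${lcp.startTime}ms (${rateLCP(lcp.startTime)})`);
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```

<p>A real RUM setup would also batch these values and send them to your analytics endpoint, which is exactly the plumbing the services above handle for you.</p>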

<p>So, if what you want is a complete picture of your site’s performance, go ahead and start with Lighthouse. But don’t stop there because you’re only seeing part of the picture. You’ll want to augment your findings and <a href="https://www.debugbear.com/blog/synthetic-vs-rum/?utm_campaing=sm-6">diagnose performance with real-user monitoring</a> for the most complete, accurate picture.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(gg, yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Salma Alam-Naylor</author><title>How To Hack Your Google Lighthouse Scores In 2024</title><link>https://www.smashingmagazine.com/2024/06/how-hack-google-lighthouse-scores-2024/</link><pubDate>Tue, 11 Jun 2024 18:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2024/06/how-hack-google-lighthouse-scores-2024/</guid><description>Do perfect Lighthouse scores mean the performance of your website is perfect? As it turns out, Lighthouse is influenced by a number of things that can be manipulated and bent to make sites seem more performant than they really are, as Salma Alam-Naylor demonstrates in several experiments.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2024/06/how-hack-google-lighthouse-scores-2024/" />
              <title>How To Hack Your Google Lighthouse Scores In 2024</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>How To Hack Your Google Lighthouse Scores In 2024</h1>
                  
                    
                    <address>Salma Alam-Naylor</address>
                  
                  <time datetime="2024-06-11T18:00:00&#43;00:00" class="op-published">2024-06-11T18:00:00+00:00</time>
                  <time datetime="2024-06-11T18:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                <p>This article is sponsored by <b>Sentry.io</b></p>
                

<p>Google Lighthouse has been one of the most effective ways to gamify and promote web page performance among developers. Using Lighthouse, we can assess web pages based on overall performance, accessibility, SEO, and what Google considers “best practices”, all with the click of a button.</p>

<p>We might use these tests to evaluate out-of-the-box performance for front-end frameworks or to celebrate performance improvements gained by some diligent refactoring. And you know you love sharing screenshots of your perfect Lighthouse scores on social media. It’s a well-deserved badge of honor worthy of a confetti celebration.</p>

<figure><a href="https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/1-google-lighthouse-scores-800px.gif"><img src="https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/1-google-lighthouse-scores-800px.gif" width="800" height="430" alt="Animated gif of four perfect Google Lighthouse scores with confetti popping in all over the place" /></a></figure>

<p>Just the fact that Lighthouse gets developers like us talking about performance is a win. But, whilst I don’t want to be a party pooper, the truth is that web performance is far more nuanced than this. In this article, we’ll examine how Google Lighthouse calculates its performance scores, and, using this information, we will attempt to “hack” those scores in our favor, <strong>all in the name of fun and science</strong> &mdash; because in the end, Lighthouse is simply a good, but rough guide for debugging performance. We’ll have some fun with it and see to what extent we can “trick” Lighthouse into handing out better scores than we may deserve.</p>

<p>But first, let’s talk about data.</p>

<h2 id="field-data-is-important">Field Data Is Important</h2>

<p>Local performance testing is a great way to understand if your website performance is trending in the right direction, but it won’t paint a full picture of reality. The World Wide Web is the Wild West, and collectively, we’ve almost certainly lost track of the variety of device types, internet connection speeds, screen sizes, browsers, and browser versions that people are using to access websites &mdash; all of which can have an impact on page performance and user experience.</p>

<p>Field data &mdash; and lots of it &mdash; collected by an <a href="http://sentry.io/for/performance/">application performance monitoring</a> tool like Sentry from real people using your website on their devices will give you a far more accurate report of your website performance than your lab data collected from a small sample size using a high-spec super-powered dev machine under a set of controlled conditions. Philip Walton reported in 2021 that “<a href="https://philipwalton.com/articles/my-challenge-to-the-web-performance-community/">almost half of all pages that scored 100 on Lighthouse didn’t meet the recommended Core Web Vitals thresholds</a>” based on data from the HTTP Archive.</p>

<p>Web performance is more than a single <a href="https://sentry.io/for/web-vitals/">core web vital metric</a> or Lighthouse performance score. What we’re talking about goes way beyond the type of raw data we’re working with.</p>

<h2 id="web-performance-is-more-than-numbers">Web Performance Is More Than Numbers</h2>

<p><em>Speed</em> is often the first thing that comes up when talking about web performance &mdash; just how long does a page take to load? This isn’t the worst thing to measure, but we must bear in mind that speed is probably influenced heavily by business KPIs and sales targets. Google <a href="https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/mobile-page-speed-new-industry-benchmarks/">released a report in 2018</a> suggesting that the probability of bounces increases by 32% if the page load time reaches higher than three seconds, and soars to 123% if the page load time reaches 10 seconds. So, we must conclude that converting more sales requires reducing bounce rates. And to reduce bounce rates, we must make our pages <em>load faster</em>.</p>

<p>But what does “load faster” even mean? At some point, we’re physically incapable of making a web page load any faster. Humans &mdash; and the servers that connect them &mdash; are spread around the globe, and modern internet infrastructure can only deliver so many bytes at a time.</p>

<p>The bottom line is that page load is not a single moment in time. In an article titled “<a href="https://web.dev/articles/what-is-speed">What is speed?</a>” Google explains that a page load event is:</p>

<blockquote>[…] “an experience that no single metric can fully capture. There are multiple moments during the load experience that can affect whether a user perceives it as ‘fast’, and if you just focus solely on one, you might miss bad experiences that happen during the rest of the time.”</blockquote>

<p>The key word here is <em>experience</em>. Real web performance is less about <em>numbers</em> and <em>speed</em> than it is about <em>how we experience</em> page load and page usability as users. And this segues nicely into a discussion of how Google Lighthouse calculates performance scores. (It’s much less about pure speed than you might think.)</p>

<h2 id="how-google-lighthouse-performance-scores-are-calculated">How Google Lighthouse Performance Scores Are Calculated</h2>

<p>The Google Lighthouse performance score is calculated using a weighted combination of scores based on Core Web Vitals metrics (i.e., Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS)) and other lab-based speed metrics (i.e., First Contentful Paint (FCP), Speed Index (SI), and Total Blocking Time (TBT)) that are <strong>observable throughout the page load timeline</strong>.</p>

<p>This is <a href="https://developer.chrome.com/docs/lighthouse/performance/performance-scoring/#weightings">how the metrics are weighted</a> in the overall score:</p>

<table class="tablesaw break-out">
  <thead>
    <tr>
      <th>Metric</th>
      <th>Weighting (%)</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Total Blocking Time</td>
      <td>30</td>
    </tr>
    <tr>
      <td>Cumulative Layout Shift</td>
      <td>25</td>
    </tr>
    <tr>
      <td>Largest Contentful Paint</td>
      <td>25</td>
    </tr>
    <tr>
      <td>First Contentful Paint</td>
      <td>10</td>
    </tr>
    <tr>
      <td>Speed Index</td>
      <td>10</td>
    </tr>
  </tbody>
</table>

<p>The weighting assigned to each score gives us insight into how Google prioritizes the different building blocks of a good user experience:</p>

<h3 id="1-a-web-page-should-respond-to-user-input">1. A Web Page Should Respond to User Input</h3>

<p>The highest weighted metric is <strong>Total Blocking Time (TBT),</strong> a metric that looks at the total time after the <strong>First Contentful Paint (FCP)</strong> to help indicate where the main thread may be blocked long enough to prevent speedy responses to user input. The main thread is considered “blocked” any time there’s a JavaScript task running on the main thread for more than 50ms. Minimizing TBT ensures that a web page responds to physical user input (e.g., key presses, mouse clicks, and so on).</p>
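<p>The arithmetic behind TBT is simple enough to sketch. Assuming we already have the durations of the main-thread tasks (real TBT only counts those between FCP and Time to Interactive), each task contributes the time it runs beyond the 50ms threshold:</p>

```javascript
// Sum the "blocking" portion (everything over 50ms) of each long task.
// Simplified sketch: real TBT only counts tasks between FCP and Time to Interactive.
function totalBlockingTime(taskDurationsMs) {
  const THRESHOLD_MS = 50;
  return taskDurationsMs
    .filter((ms) => ms > THRESHOLD_MS)
    .reduce((tbt, ms) => tbt + (ms - THRESHOLD_MS), 0);
}

// A 30ms task contributes nothing; the 70ms and 120ms tasks contribute 20 + 70.
console.log(totalBlockingTime([30, 70, 120])); // → 90
```

<p>This is why breaking one 500ms task into ten 50ms tasks eliminates TBT entirely, even though the total JavaScript work is unchanged.</p>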

<h3 id="2-a-web-page-should-load-useful-content-with-no-unexpected-visual-shifts">2. A Web Page Should Load Useful Content With No Unexpected Visual Shifts</h3>

<p>The next most weighted Lighthouse metrics are <strong>Largest Contentful Paint (LCP</strong>) and <strong>Cumulative Layout Shift (CLS)</strong>. LCP marks the point in the page load timeline when the page’s main content has <em>likely</em> loaded and is therefore <em>useful</em>.</p>

<p>At the point where the main content has likely loaded, you also want to maintain visual stability to ensure that users can use the page and are not affected by unexpected visual shifts (CLS). A good LCP score is anything less than 2.5 seconds (which is a lot higher than we might have thought, given we are often trying to make our websites <em>as fast as possible</em>).</p>

<h3 id="3-a-web-page-should-load-something">3. A Web Page Should Load Something</h3>

<p>The <strong>First Contentful Paint (FCP)</strong> metric marks the first point in the page load timeline where the user can see <em>something</em> on the screen, and the <strong>Speed Index (SI)</strong> measures how quickly content is visually displayed during page load over time until the page is “complete”.</p>

<p>Your page is scored based on the speed indices of real websites using performance <a href="https://developer.chrome.com/docs/lighthouse/performance/performance-scoring#metric-scores">data from the HTTP Archive</a>. A good FCP score is less than 1.8 seconds and a good SI score is less than 3.4 seconds. Both of these thresholds are higher than you might expect when thinking about <em>speed</em>.</p>

<h2 id="usability-is-favored-over-raw-speed">Usability Is Favored Over Raw Speed</h2>

<p>Google Lighthouse’s performance scoring is, without a doubt, less about speed and more about <strong>usability</strong>. Your SI and FCP could be super quick, but if your LCP takes too long to paint, and if CLS is caused by large images or external content taking some time to load and shifting things visually, then your overall performance score will be lower than if your page was a little slower to render the FCP but didn’t cause any CLS. Ultimately, if the page is unresponsive due to JavaScript blocking the main thread for more than 50ms, your performance score will suffer more than if the page was a little slow to paint the FCP.</p>

<p>To understand more about how the weightings of each metric contribute to the final performance score, you can play about with the sliders on the <a href="https://googlechrome.github.io/lighthouse/scorecalc/">Lighthouse Scoring Calculator</a>, and here’s a rudimentary table demonstrating the effect of skewed individual metric weightings on the overall performance score, proving that page usability and responsiveness is favored over raw speed.</p>

<table class="tablesaw break-out">
  <thead>
    <tr>
      <th>Description</th>
      <th>FCP (ms)</th>
      <th>SI (ms)</th>
      <th>LCP (ms)</th>
      <th>TBT (ms)</th>
      <th>CLS</th>
      <th>Overall Score</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Slow to show something on screen</td>
      <td>6000</td>
      <td>0</td>
      <td>0</td>
      <td>0</td>
      <td>0</td>
      <td>90</td>
    </tr>
    <tr>
      <td>Slow to load content over time</td>
      <td>0</td>
      <td>5000</td>
      <td>0</td>
      <td>0</td>
      <td>0</td>
      <td>90</td>
    </tr>
    <tr>
      <td>Slow to load the largest part of the page</td>
      <td>0</td>
      <td>0</td>
      <td>6000</td>
      <td>0</td>
      <td>0</td>
      <td>76</td>
    </tr>
    <tr>
      <td>Visual shifts occurring during page load</td>
      <td>0</td>
      <td>0</td>
      <td>0</td>
      <td>0</td>
      <td>0.82</td>
      <td>76</td>
    </tr>
     <tr>
      <td>Page is unresponsive to user input</td>
      <td>0</td>
      <td>0</td>
      <td>0</td>
      <td>2000</td>
      <td>0</td>
      <td>70</td>
    </tr>
  </tbody>
</table>

<p>The overall Google Lighthouse performance score is calculated by converting each raw metric value into a score from 0 to 100 according to where it falls on its Lighthouse scoring distribution, which is a <strong>log-normal</strong> distribution derived from the performance metrics of real website performance data from the HTTP Archive. There are two main takeaways from this mathematically overloaded information:</p>

<ol>
<li>Your Lighthouse performance score is plotted against real website performance data, not in isolation.</li>
<li>Given that the scoring uses log-normal distribution, the relationship between the individual metric values and the overall score is non-linear, meaning you can make substantial improvements to low-performance scores quite easily, but it becomes more difficult to improve an already high score.</li>
</ol>
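<p>To make the log-normal point concrete, here is a sketch of the scoring curve: a metric scores 0.5 at its “median” control point and 0.9 at its “p10” control point, with a log-normal curve in between. The erfc-based formula and the LCP control points follow my reading of Lighthouse’s scoring code, so treat the exact numbers as assumptions.</p>

```javascript
// Complementary error function via the Abramowitz & Stegun 7.1.26 approximation
// (max error ~1.5e-7) -- plenty of precision for illustrating the scoring curve.
function erfc(x) {
  const t = 1 / (1 + 0.3275911 * Math.abs(x));
  const y =
    t *
    (0.254829592 +
      t * (-0.284496736 + t * (1.421413741 + t * (-1.453152027 + t * 1.061405429)))) *
    Math.exp(-x * x);
  return x >= 0 ? y : 2 - y;
}

// Score a raw metric value on a log-normal curve anchored so that
// score(median) = 0.5 and score(p10) = 0.9.
function logNormalScore(p10, median, value) {
  const INVERSE_ERFC_ONE_FIFTH = 0.9061938024368232; // erfc⁻¹(1/5)
  const xi = (Math.log(median) - Math.log(p10)) / (Math.SQRT2 * INVERSE_ERFC_ONE_FIFTH);
  return 0.5 * erfc((Math.log(value) - Math.log(median)) / (Math.SQRT2 * xi));
}

// Using LCP's assumed mobile control points (p10 = 2500ms, median = 4000ms):
console.log(logNormalScore(2500, 4000, 4000).toFixed(2)); // → 0.50
console.log(logNormalScore(2500, 4000, 2500).toFixed(2)); // → 0.90
```

<p>Plugging in a few values shows the non-linearity: trimming an LCP of 6000ms by one second gains far more score than trimming 2500ms by the same amount, which is why high scores are so much harder to improve.</p>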














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/2-log-normal-distribution-curve-visualization.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="461"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/2-log-normal-distribution-curve-visualization.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/2-log-normal-distribution-curve-visualization.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/2-log-normal-distribution-curve-visualization.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/2-log-normal-distribution-curve-visualization.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/2-log-normal-distribution-curve-visualization.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/2-log-normal-distribution-curve-visualization.png"
			
			sizes="100vw"
			alt="Log-normal distribution curve visualization, high on the left, low on the right."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/2-log-normal-distribution-curve-visualization.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Read more about <a href="https://developer.chrome.com/docs/lighthouse/performance/performance-scoring#metric-scores">how metric scores are determined</a>, including a visualization of the log-normal distribution curve on <a href="http://developer.chrome.com/">developer.chrome.com</a>.</p>

<h2 id="can-we-trick-google-lighthouse">Can We “Trick” Google Lighthouse?</h2>

<p>I appreciate Google’s focus on usability over pure speed in the web performance conversation. It urges developers to think less about aiming for raw numbers and more about the real experiences we build. That being said, I’ve wondered whether today in 2024, it’s possible to fool Google Lighthouse into believing that a bad page in terms of <em>usability and usefulness</em> is actually a great one.</p>

<p>I put on my lab coat and science goggles to investigate. All tests were conducted:</p>

<ul>
<li>Using the Chromium Lighthouse plugin,</li>
<li>In an incognito window in the Arc browser,</li>
<li>Using the “navigation” and “mobile” settings (apart from where described differently),</li>
<li>By me, in a lab (i.e., no field data).</li>
</ul>

<p>That all being said, I fully acknowledge that my controlled test environment contradicts my advice at the top of this post, but the experiment is an interesting ride nonetheless. What I hope you’ll take away from this is that Lighthouse scores are only one piece &mdash; and a tiny one at that &mdash; of a very large and complex web performance puzzle. And, without field data, I’m not sure any of this matters anyway.</p>

<h2 id="how-to-hack-fcp-and-lcp-scores">How to Hack FCP and LCP Scores</h2>

<p><strong>TL;DR: Show the smallest amount of LCP-qualifying content on load to boost the FCP and LCP scores until the Lighthouse test has likely finished.</strong></p>

<p>FCP marks the first point in the page load timeline where the user can see <em>anything</em> at all on the screen, while LCP marks the point in the page load timeline when the main page content (i.e., the largest text or image element) has <em>likely</em> loaded. A fast LCP helps reassure the user that the page is <em>useful</em>. “Likely” and “useful” are the important words to bear in mind here.</p>

<h3 id="what-counts-as-an-lcp-element">What Counts as an LCP Element</h3>

<p>The types of elements on a web page considered by Lighthouse for LCP are:</p>

<ul>
<li><code>&lt;img&gt;</code> elements,</li>
<li><code>&lt;image&gt;</code> elements inside an <code>&lt;svg&gt;</code> element,</li>
<li><code>&lt;video&gt;</code> elements,</li>
<li>An element with a background image loaded using the <code>url()</code> function (and not a CSS gradient), and</li>
<li>Block-level elements containing text nodes or other inline-level text elements.</li>
</ul>

<p>The following elements are <em>excluded</em> from LCP consideration due to the likelihood they do not contain useful content:</p>

<ul>
<li>Elements with zero opacity (invisible to the user),</li>
<li>Elements that cover the full viewport (likely to be background elements), and</li>
<li>Placeholder images or other images with low entropy (i.e., low informational content, such as a solid-colored image).</li>
</ul>

<p>However, the notion of an image or text element being useful is completely subjective in this case and generally out of the realm of what machine code can reliably determine. For example, <a href="https://hacking-lighthouse.netlify.app/lcp/">I built a page</a> containing nothing but a <code>&lt;h1&gt;</code> element where, after 10 seconds, JavaScript inserts more descriptive text into the DOM and hides the <code>&lt;h1&gt;</code> element.</p>

<p>Lighthouse considers the heading element to be the LCP element in this experiment. At this point, the page load timeline has finished, but the page’s main content has <em>not</em> loaded, even though Lighthouse thinks it is <em>likely</em> to have loaded within those 10 seconds. Lighthouse still awards us with a perfect score of 100 even if the heading is replaced by a single punctuation mark, such as a full stop, which is even <em>less useful</em>.</p>
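<p>For reference, a minimal reconstruction of that experiment might look like the following. This is my own sketch, not the actual demo page’s source: a lone heading acts as the LCP element, and JavaScript swaps in the real content well after the audit has finished.</p>

```html
<!DOCTYPE html>
<html>
  <body>
    <!-- A minimal LCP-qualifying element: a block-level element with a text node.
         Per the experiment, even a single full stop earns a perfect score. -->
    <h1 id="splash">.</h1>
    <main id="content" hidden></main>
    <script>
      // Long after Lighthouse has finished, hide the splash heading and
      // reveal the page's actual content.
      setTimeout(() => {
        document.getElementById('splash').hidden = true;
        const main = document.getElementById('content');
        main.textContent = 'The real page content, loaded 10 seconds later.';
        main.hidden = false;
      }, 10000);
    </script>
  </body>
</html>
```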

<p>This test suggests that if you need to load page content via client-side JavaScript, you’ll want to avoid displaying a skeleton loader screen since that requires loading more elements on the page. And since we know the process will take some time &mdash; and that we can offload the network request from the main thread to a web worker so it won’t affect the TBT &mdash; we can use some arbitrary “splash screen” that contains a minimal viable LCP element (for better FCP scoring). This way, we’re giving Lighthouse the <em>impression</em> that the page is useful to users quicker than it actually is.</p>

<p>All we need to do is include a valid LCP element that contains something that counts as the FCP. I would never recommend loading your main page content via client-side JavaScript in 2024 (serve static HTML from a CDN instead, or build as much of the page as you can on a server), and I would definitely not recommend this “hack” for a good user experience, regardless of what the Lighthouse performance score tells you. This approach also won’t earn you any favors with search engines indexing your site, as the robots are unable to discover the main content while it is absent from the DOM.</p>

<p>I also tried this experiment with a variety of random images representing the LCP to make the page even less useful. But given that I used small file sizes &mdash; made smaller and converted into “next-gen” image formats using a third-party image API to help with page load speed &mdash; it seemed that Lighthouse interpreted the elements as “placeholder images” or images with “low entropy”. As a result, those images were disqualified as LCP elements, which is a good thing and makes the LCP slightly less hackable.</p>

<p>View <a href="https://hacking-lighthouse.netlify.app/lcp/">the demo page</a> and use Chromium DevTools in an incognito window to see the results yourself.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/3-non-useful-page-scored-100-lighthouse-performance.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="461"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/3-non-useful-page-scored-100-lighthouse-performance.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/3-non-useful-page-scored-100-lighthouse-performance.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/3-non-useful-page-scored-100-lighthouse-performance.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/3-non-useful-page-scored-100-lighthouse-performance.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/3-non-useful-page-scored-100-lighthouse-performance.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/3-non-useful-page-scored-100-lighthouse-performance.png"
			
			sizes="100vw"
			alt="In-browser proof that the non-useful page scored 100 on Lighthouse performance"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/3-non-useful-page-scored-100-lighthouse-performance.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>This hack, however, probably won’t hold up in many other use cases. Discord, for example, uses the “splash screen” approach when you hard-refresh the app in the browser, and it receives a sad 29 performance score.</p>

<p>Compared to my DOM-injected demo, the LCP element was calculated as some content behind the splash screen rather than elements contained within the splash screen content itself, given there were one or more large images in the focused text channel I tested on. One could argue that Lighthouse scores are less important for apps that are behind authentication anyway: they don’t need to be indexed by search engines.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/4-lighthouse-score-29.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="461"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/4-lighthouse-score-29.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/4-lighthouse-score-29.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/4-lighthouse-score-29.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/4-lighthouse-score-29.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/4-lighthouse-score-29.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/4-lighthouse-score-29.png"
			
			sizes="100vw"
			alt="Lighthouse screenshot of a score of 29 next to a blurred-out Discord server channel."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/4-lighthouse-score-29.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>There are likely many other situations where apps serve user-generated content and you might be unable to control the LCP element entirely, particularly regarding images.</p>

<p>For example, if you can control the sizes of all the images on your web pages, you might be able to take advantage of an interesting hack or “optimization” (in <em>very</em> large quotes) to arbitrarily game the system, as was the case of RentPath. In 2021, developers at RentPath managed to <a href="https://blog.rentpathcode.com/we-increased-our-lighthouse-score-by-17-points-by-making-our-images-larger-83f60b33a942">improve their Lighthouse performance score by 17 points</a> when <em>increasing</em> the size of image thumbnails on a web page. They convinced Lighthouse to calculate the LCP element as one of the larger thumbnails instead of a Google Map tile on the page, which takes considerably longer to load via JavaScript.</p>

<p>The bottom line is that you can gain higher Lighthouse performance scores if you are aware of your LCP element and in control of it, whether that’s through a hack like RentPath’s or mine or a real-deal improvement. That being said, whilst I’ve described the splash screen approach as a hack in this post, that doesn’t mean this type of experience couldn’t be purposeful and joyful. Performance and user experience are about understanding what’s happening during page load, and it’s also about intent.</p>

<h2 id="how-to-hack-cls-scores">How to Hack CLS Scores</h2>

<p><strong>TL;DR: Defer loading content that causes layout shifts until the Lighthouse test has</strong> <strong><em>likely</em></strong> <strong>finished to make the test think it has enough data. CSS transforms do not negatively impact CLS, except if used in conjunction with new elements added to the DOM.</strong></p>

<p>CLS is measured on a decimal scale; a good score is less than 0.1, and a poor score is greater than 0.25. Lighthouse calculates CLS from the largest burst of unexpected layout shifts that occur during a user’s time on the page based on a combination of the viewport size and the movement of unstable elements in the viewport between two rendered frames. Smaller one-off instances of layout shift may be inconsequential, but a bunch of layout shifts happening one after the other will negatively impact your score.</p>
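<p>The arithmetic can be sketched as follows (simplified from the web.dev definition of layout shift scores and session windows; this is illustrative only, not Lighthouse’s actual code):</p>

```javascript
// A single layout shift scores: impact fraction (how much of the viewport
// the unstable element occupied across the two frames) multiplied by the
// distance fraction (how far it moved, relative to the viewport's
// largest dimension).
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// CLS itself is the largest "burst": the session window whose summed
// individual shift scores is highest.
function cls(sessionWindows) {
  return Math.max(
    ...sessionWindows.map((w) => w.reduce((sum, s) => sum + s, 0))
  );
}

// An element covering 75% of the viewport that moves by 25% of the
// viewport's height scores 0.75 * 0.25 = 0.1875 on its own -- already
// past the 0.1 "good" threshold.
```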

<p>If you know your page contains annoying layout shifts on load, you can defer them until after the page load event has been completed, thus fooling Lighthouse into thinking there is no CLS. <a href="https://hacking-lighthouse.netlify.app/cls-bad/">This demo page I created</a>, for example, earns a CLS score of 0.143 even though JavaScript immediately starts adding new text elements to the page, shifting the original content up. By pausing the JavaScript that adds new nodes to the DOM by an arbitrary five seconds with a <code>setTimeout()</code>, Lighthouse doesn’t capture the CLS that takes place.</p>
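<p>The deferral trick boils down to a few lines (a hedged sketch, not the demo’s exact code &mdash; the element, text, and five-second delay are arbitrary):</p>

```html
<script>
  window.addEventListener("load", () => {
    // Wait until the Lighthouse page-load test has (probably) finished...
    setTimeout(() => {
      // ...then mutate the DOM. This shifts all existing content down,
      // but the lab test has already stopped observing.
      const p = document.createElement("p");
      p.textContent = "Surprise! Everything below me just moved.";
      document.body.prepend(p);
    }, 5000);
  });
</script>
```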

<p><a href="https://hacking-lighthouse.netlify.app/cls-hacked/">This other demo page</a> earns a performance score of 100, even though it is arguably less useful and usable than the last page, given that the added elements pop in <em>seemingly</em> at random without any user interaction.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/5-lighthouse-performance-score-100-second-test.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="516"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/5-lighthouse-performance-score-100-second-test.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/5-lighthouse-performance-score-100-second-test.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/5-lighthouse-performance-score-100-second-test.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/5-lighthouse-performance-score-100-second-test.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/5-lighthouse-performance-score-100-second-test.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/5-lighthouse-performance-score-100-second-test.png"
			
			sizes="100vw"
			alt="Lighthouse performance score of 100 following the second test."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/5-lighthouse-performance-score-100-second-test.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Whilst it is possible to defer layout shift events for a page load test, this hack definitely won’t work for field data and user experience over time (which is a more important focal point, as we discussed earlier). If we perform a “time span” test in Lighthouse on the page with deferred layout shifts, Lighthouse will correctly report a non-green CLS score of around 0.186.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/6-timespan-test.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="516"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/6-timespan-test.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/6-timespan-test.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/6-timespan-test.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/6-timespan-test.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/6-timespan-test.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/6-timespan-test.png"
			
			sizes="100vw"
			alt="Screenshot of a timespan test performed on the same page with layout shifts."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/6-timespan-test.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>If you do want to intentionally create a chaotic experience similar to the demo, you can use CSS animations and transforms to more purposefully pop the content into view on the page. In <a href="https://web.dev/articles/cls">Google’s guide to CLS</a>, they state that “content that moves gradually and naturally from one position to another can often help the user better understand what’s going on and guide them between state changes” &mdash; again, highlighting the importance of user experience in context.</p>

<p>On <a href="https://hacking-lighthouse.netlify.app/cls-animated/">this next demo page</a>, I’m using CSS <code>transform</code> to <code>scale()</code> the text elements from <code>0</code> to <code>1</code> and move them around the page. The transforms fail to trigger CLS because the text nodes are already in the DOM when the page loads. That said, I did observe in my testing that if the text nodes are added to the DOM programmatically after the page loads via JavaScript and <em>then</em> animated, Lighthouse will indeed detect CLS and score things accordingly.</p>
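<p>A minimal version of that compositor-friendly entrance might look like this (assumed class and animation names; crucially, the elements must already be present in the served HTML for this to avoid CLS):</p>

```css
/* transform and opacity are not counted as layout shifts,
   unlike animating layout properties such as top, left, or height. */
.pop-in {
  animation: pop-in 300ms ease-out both;
}

@keyframes pop-in {
  from { transform: scale(0); opacity: 0; }
  to   { transform: scale(1); opacity: 1; }
}
```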

<h2 id="you-can-t-hack-a-speed-index-score">You Can’t Hack a Speed Index Score</h2>

<p>The Speed Index score is based on the visual progress of the page as it loads. The quicker your content loads nearer the beginning of the page load timeline, the better.</p>

<p>It is possible to hack the Speed Index into thinking a page load timeline is <em>slower</em> than it actually is. Conversely, there’s no real way to “fake” loading content faster than it does. The only way to improve your Speed Index score is to optimize your web page to load as much of the page as possible, as soon as possible. Whilst not entirely realistic in the web landscape of 2024 (mainly because it would put designers out of a job), you could go all-in to lower your Speed Index as much as possible by:</p>

<ul>
<li>Delivering static HTML web pages only (no server-side rendering) straight from a CDN,</li>
<li>Avoiding images on the page,</li>
<li>Minimizing or eliminating CSS, and</li>
<li>Preventing JavaScript or any external dependencies from loading.</li>
</ul>
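<p>Taken to its logical extreme, such a page is barely a page at all (a deliberately absurd sketch, not a recommendation):</p>

```html
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Nothing to paint but text</title>
</head>
<body>
  <!-- No images, no stylesheets, no scripts: the first paint
       is effectively the final paint. -->
  <h1>Hello</h1>
  <p>Served as static HTML straight from a CDN.</p>
</body>
</html>
```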

<h2 id="you-also-can-t-really-hack-a-tbt-score">You Also Can’t (Really) Hack A TBT Score</h2>

<p>TBT measures the total time after the FCP during which the main thread was blocked by JavaScript tasks for long enough to prevent responses to user input. A good TBT score is anything lower than 200ms.</p>

<p>JavaScript-heavy web applications (such as single-page applications) that perform complex state calculations and DOM manipulation on the client on page load (rather than on the server before sending rendered HTML) are prone to suffering poor TBT scores. In this case, you could probably hack your TBT score by deferring all JavaScript until after the Lighthouse test has finished. That said, you’d need to provide some kind of placeholder content or loading screen to satisfy the FCP and LCP and to inform users that something will happen <em>at some point</em>. Plus, you’d have to go to extra lengths to hack around the front-end framework you’re using. (You don’t want to load a placeholder page that, at some point in the page load timeline, loads a separate React app after an arbitrary amount of time!)</p>

<p>What’s interesting is that while we’re still doing all sorts of fancy things with JavaScript in the client, advances in the modern web ecosystem are helping us all reduce the probability of a less-than-stellar TBT score. Many front-end frameworks, in partnership with modern hosting providers, are capable of rendering pages and processing complex logic on demand without any client-side JavaScript. While eliminating JavaScript on the client is not the goal, we certainly have a lot of options to use a lot <em>less</em> of it, thus minimizing the risk of doing too much computation on the main thread on page load.</p>

<h2 id="bottom-line-lighthouse-is-still-just-a-rough-guide">Bottom Line: Lighthouse Is Still Just A Rough Guide</h2>

<p>Google Lighthouse can’t detect everything that’s wrong with a particular website. Whilst Lighthouse’s performance scoring prioritizes page usability in terms of responding to user input, it still can’t detect every terrible usability or accessibility issue in 2024.</p>

<p>In 2019, Manuel Matuzović <a href="https://www.matuzo.at/blog/building-the-most-inaccessible-site-possible-with-a-perfect-lighthouse-score/">published an experiment</a> where he intentionally created a terrible page that Lighthouse thought was pretty great. I hypothesized that five years later, Lighthouse might do better; but it doesn’t.</p>

<p>On this final <a href="https://hacking-lighthouse.netlify.app/unusable/">demo page</a> I put together, input events are disabled by CSS and JavaScript, making the page technically unresponsive to user input. After five seconds, JavaScript flips a switch and allows you to click the button. The page still scores 100 for both performance <em>and</em> accessibility.</p>
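<p>The mechanics are trivially simple (a hedged sketch along the lines of the demo; the selectors and five-second delay are assumptions):</p>

```html
<style>
  /* Swallow all pointer input for the whole page. */
  body { pointer-events: none; }
</style>

<button disabled>You can't click me yet</button>

<script>
  // Five seconds later, quietly flip the switch. Lighthouse has long
  // since finished scoring by then.
  setTimeout(() => {
    document.body.style.pointerEvents = "auto";
    document.querySelector("button").disabled = false;
  }, 5000);
</script>
```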














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/7-lighthouse-perfect-performance-useless-inaccessible-page.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="461"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/7-lighthouse-perfect-performance-useless-inaccessible-page.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/7-lighthouse-perfect-performance-useless-inaccessible-page.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/7-lighthouse-perfect-performance-useless-inaccessible-page.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/7-lighthouse-perfect-performance-useless-inaccessible-page.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/7-lighthouse-perfect-performance-useless-inaccessible-page.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/7-lighthouse-perfect-performance-useless-inaccessible-page.png"
			
			sizes="100vw"
			alt="Lighthouse showing perfect performance and accessibility scores for a useless, inaccessible page."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/how-hack-google-lighthouse-scores-2024/7-lighthouse-perfect-performance-useless-inaccessible-page.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>You really can’t rely on Lighthouse as a substitute for usability testing and common sense.</p>

<h2 id="some-more-silly-hacks">Some More Silly Hacks</h2>

<p>As with everything in life, there’s always a way to game the system. Here are some more tried and tested guaranteed hacks to make sure your Lighthouse performance score artificially knocks everyone else’s out of the park:</p>

<ul>
<li>Only run Lighthouse tests using the fastest and highest-spec hardware.</li>
<li>Make sure your internet connection is the fastest it can be; relocate if you need to.</li>
<li>Never use field data, only lab data, collected using the aforementioned fastest and highest-spec hardware and super-speed internet connection.</li>
<li>Rerun the tests in the lab using different conditions and all the special code hacks I described in this post until you get the result(s) you want to impress your friends, colleagues, and random people on the internet.</li>
</ul>

<p><strong>Note</strong>: <em>The best way to learn about web performance and how to optimize your websites is to do the complete opposite of everything we’ve covered in this article all of the time. And finally, to seriously level up your performance skills, <a href="https://sentry.io/for/performance/?utm_source=smashingmag&amp;utm_medium=paid-community&amp;utm_campaign=perf-fy25q2-evergreen&amp;utm_content=blog-lighthouseblog-signup">use an application monitoring tool like Sentry</a>. Think of Lighthouse as the canary and Sentry as the real-deal production-data-capturing, lean, mean, <a href="https://docs.sentry.io/product/performance/web-vitals/?utm_source=smashingmag&amp;utm_medium=paid-community&amp;utm_campaign=perf-fy25q2-evergreen&amp;utm_content=blog-lighthouseblog-signup">web vitals</a> machine.</em></p>

<p>And finally-finally, <a href="https://hacking-lighthouse.netlify.app/">here’s the link to the full demo site</a> for educational purposes.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(gg, yk, il)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Addy Osmani</author><title>Scaling Success: Key Insights And Practical Takeaways</title><link>https://www.smashingmagazine.com/2024/06/scaling-success-key-insights-pratical-takeaways/</link><pubDate>Tue, 04 Jun 2024 12:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2024/06/scaling-success-key-insights-pratical-takeaways/</guid><description>The web is still a young platform, and we’re only now beginning to recognize what “success” looks like for large projects. In his recent Smashing book, &lt;a href="https://www.smashingmagazine.com/printed-books/success-at-scale/">Success at Scale&lt;/a>, Addy Osmani presents practical case studies featuring the web’s most renowned companies and their efforts to make big changes to existing apps and sites. In this article, Addy shows some of the key insights he has learned.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2024/06/scaling-success-key-insights-pratical-takeaways/" />
              <title>Scaling Success: Key Insights And Practical Takeaways</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Scaling Success: Key Insights And Practical Takeaways</h1>
                  
                    
                    <address>Addy Osmani</address>
                  
                  <time datetime="2024-06-04T12:00:00&#43;00:00" class="op-published">2024-06-04T12:00:00+00:00</time>
                  <time datetime="2024-06-04T12:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>Building successful web products at scale is a multifaceted challenge that demands a combination of technical expertise, strategic decision-making, and a growth-oriented mindset. In <a href="https://www.smashingmagazine.com/printed-books/success-at-scale/"><em>Success at Scale</em></a>, I dive into case studies from some of the web’s most renowned products, uncovering the strategies and philosophies that propelled them to the forefront of their industries.</p>

<p>Here you will find some of the insights I’ve gleaned from these success stories, part of an ongoing effort to build <strong>a roadmap for teams striving to achieve scalable success</strong> in the ever-evolving digital landscape.</p>

<h2 id="cultivating-a-mindset-for-scaling-success">Cultivating A Mindset For Scaling Success</h2>

<p>The foundation of scaling success lies in fostering the right mindset within your team. The case studies in <em>Success at Scale</em> highlight several critical mindsets that permeate the culture of successful organizations.</p>

<h3 id="user-centricity">User-Centricity</h3>

<blockquote>Successful teams prioritize the user experience above all else.</blockquote>

<p>They invest in understanding their users’ needs, behaviors, and pain points and relentlessly strive to deliver value. Instagram’s performance optimization journey exemplifies this mindset, focusing on improving perceived speed and reducing user frustration, leading to significant gains in engagement and retention.</p>

<p>By placing the user at the center of every decision, Instagram was able to identify and prioritize the most impactful optimizations, such as preloading critical resources and leveraging adaptive loading strategies. This user-centric approach allowed them to deliver a seamless and delightful experience to their vast user base, even as their platform grew in complexity.</p>

<h3 id="data-driven-decision-making">Data-Driven Decision Making</h3>

<blockquote>Scaling success relies on data, not assumptions.</blockquote>

<p>Teams must embrace a data-driven approach, leveraging metrics and analytics to guide their decisions and measure impact. <a href="https://www.smashingmagazine.com/2021/05/improving-performance-shopify-themes-case-study/">Shopify’s UI performance improvements</a> showcase the power of <strong>data-driven optimization</strong>, using detailed profiling and user data to prioritize efforts and drive meaningful results.</p>

<p>By analyzing user interactions, identifying performance bottlenecks, and continuously monitoring key metrics, Shopify was able to make informed decisions that directly improved the user experience. This data-driven mindset allowed them to allocate resources effectively, focusing on the areas that yielded the greatest impact on performance and user satisfaction.</p>

<h3 id="continuous-improvement">Continuous Improvement</h3>

<blockquote>Scaling is an ongoing process, not a one-time achievement.</blockquote>

<p>Successful teams foster a culture of continuous improvement, constantly seeking opportunities to optimize and refine their products. <a href="https://www.smashingmagazine.com/2021/12/core-web-vitals-case-study-smashing-magazine/">Smashing Magazine’s case study on enhancing Core Web Vitals</a> demonstrates <strong>the impact of iterative enhancements</strong>, leading to significant performance gains and improved user satisfaction.</p>

<p>By regularly assessing their performance metrics, identifying areas for improvement, and implementing incremental optimizations, Smashing Magazine was able to continuously elevate the user experience. This mindset of continuous improvement ensures that the product remains fast, reliable, and responsive to user needs, even as it scales in complexity and user base.</p>

<h3 id="collaboration-and-inclusivity">Collaboration And Inclusivity</h3>

<blockquote>Silos hinder scalability.</blockquote>

<p>High-performing teams promote collaboration and inclusivity, ensuring that diverse perspectives are valued and leveraged. <a href="https://www.smashingmagazine.com/2022/08/organization-improved-web-accessibility-case-study/">The Understood’s accessibility journey</a> highlights the power of <strong>cross-functional collaboration</strong>, with designers, developers, and accessibility experts working together to create inclusive experiences for all users.</p>

<p>By fostering open communication, knowledge sharing, and a shared commitment to accessibility, The Understood was able to embed <strong>inclusive design practices</strong> throughout its development process. This collaborative and inclusive approach not only resulted in a more accessible product but also cultivated a culture of empathy and user-centricity that permeated all aspects of their work.</p>


<h2 id="making-strategic-decisions-for-scalability">Making Strategic Decisions For Scalability</h2>

<p>Beyond cultivating the right mindset, scaling success requires making strategic decisions that lay the foundation for sustainable growth.</p>

<h3 id="technology-choices">Technology Choices</h3>

<p>Selecting the right technologies and frameworks can significantly impact scalability. Factors like performance, maintainability, and developer experience should be carefully considered. Notion’s migration to Next.js exemplifies <strong>the importance of choosing a technology stack that aligns with long-term scalability goals</strong>.</p>

<p>By adopting Next.js, <a href="https://www.notion.so/blog/migrating-notion-marketing-to-next-js">Notion was able to leverage its performance optimizations</a>, such as server-side rendering and efficient code splitting, to deliver fast and responsive pages. Additionally, the developer-friendly ecosystem of Next.js and its strong community support enabled Notion’s team to focus on building features and optimizing the user experience rather than grappling with low-level infrastructure concerns. This strategic technology choice laid the foundation for Notion’s scalable and maintainable architecture.</p>

<h3 id="ship-only-the-code-a-user-needs-when-they-need-it">Ship Only The Code A User Needs, When They Need It</h3>

<p>This best practice ensures that pages load fast without over-eagerly delivering JavaScript a user may not need yet. For example, <a href="https://instagram-engineering.com/making-instagram-com-faster-code-size-and-execution-optimizations-part-4-57668be796a8">Instagram made a concerted effort</a> to improve the web performance of <a href="http://instagram.com/">instagram.com</a>, resulting in a nearly 50% cumulative improvement in feed page load time. A key area of focus has been shipping less JavaScript code to users, particularly on the critical rendering path.</p>

<p>The Instagram team found that the uncompressed size of JavaScript is more important for performance than the compressed size, as larger uncompressed bundles take more time to parse and execute on the client, especially on mobile devices. Two optimizations they implemented to reduce JS parse/execute time were inline requires (only executing code when it’s first used vs. eagerly on initial load) and serving ES2017+ code to modern browsers to avoid transpilation overhead. Inline requires improved Time-to-Interactive metrics by 12%, and the ES2017+ bundle was 5.7% smaller and 3% faster than the transpiled version.</p>
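<p>The idea behind inline requires can be sketched in a few lines. The snippet below is an illustrative toy, not Instagram&rsquo;s actual tooling: a module&rsquo;s factory is registered up front, but its body only executes the first time something calls into it.</p>

```javascript
// Toy illustration of inline requires (not Instagram's actual tooling):
// module factories are registered up front, but a factory's body only
// executes the first time the module is required.
const factories = new Map();
const moduleCache = new Map();

function define(name, factory) {
  factories.set(name, factory);
}

function lazyRequire(name) {
  if (!moduleCache.has(name)) {
    moduleCache.set(name, factories.get(name)()); // runs on first use only
  }
  return moduleCache.get(name);
}

// A "heavy" module: defining it costs nothing; in the eager pattern its
// body would have executed during initial page load.
let heavyModuleExecuted = false;
define('heavy-parser', () => {
  heavyModuleExecuted = true; // stands in for expensive parse/execute work
  return { parse: (text) => text.trim().toUpperCase() };
});

// Inline require: the cost is paid when a feed item is first formatted,
// not when the bundle loads.
function formatFeedItem(item) {
  return lazyRequire('heavy-parser').parse(item);
}
```

<p>At page load, nothing in the &ldquo;heavy&rdquo; module has run yet; the parse/execute cost is deferred until the first call to <code>formatFeedItem</code>.</p>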

<p>While good progress has been made, the Instagram team acknowledges there are still many opportunities for further optimization. Potential areas to explore include the following:</p>

<ul>
<li>Improved code-splitting, moving more logic off the critical path,</li>
<li>Optimizing scrolling performance,</li>
<li>Adapting to varying network conditions,</li>
<li>Modularizing their Redux state management.</li>
</ul>

<p>Continued efforts will be needed to keep <a href="http://instagram.com/">instagram.com</a> performing well as new features are added and the product grows in complexity.</p>

<h3 id="accessibility-integration">Accessibility Integration</h3>

<blockquote>Accessibility should be an integral part of the product development process, not an afterthought.</blockquote>

<p>Wix’s <strong>comprehensive approach to accessibility</strong>, encompassing keyboard navigation, screen reader support, and infrastructure for future development, showcases the importance of building inclusivity into the product’s core.</p>

<p>By considering accessibility requirements from the initial design stages and involving accessibility experts throughout the development process, <a href="https://support.wix.com/en/article/accessibility-checklist-for-improving-your-sites-accessibility">Wix was able to create a platform that empowered its users to build accessible websites</a>. This <strong>holistic approach to accessibility</strong> not only benefited end-users but also positioned Wix as a leader in inclusive web design, attracting a wider user base and fostering a culture of empathy and inclusivity within the organization.</p>

<h3 id="developer-experience-investment">Developer Experience Investment</h3>

<blockquote>Investing in a positive developer experience is essential for attracting and retaining talent, fostering productivity, and accelerating development.</blockquote>

<p>Apideck’s case study in the book highlights the impact of a great developer experience on community building and product velocity.</p>

<p><a href="https://blog.apideck.com/how-to-build-a-great-developer-experience">By providing well-documented APIs, intuitive SDKs, and comprehensive developer resources</a>, Apideck was able to cultivate a thriving developer community. This investment in developer experience not only made it easier for developers to integrate with Apideck&rsquo;s platform but also fostered a sense of collaboration and knowledge sharing within the community. As a result, Apideck was able to accelerate product development, leverage community contributions, and continuously improve its offering based on developer feedback.</p>


<h2 id="leveraging-performance-optimization-techniques">Leveraging Performance Optimization Techniques</h2>

<p>Achieving optimal performance is a critical aspect of scaling success. The case studies in <em>Success at Scale</em> showcase various performance optimization techniques that have proven effective.</p>

<h3 id="progressive-enhancement-and-graceful-degradation">Progressive Enhancement and Graceful Degradation</h3>

<p>Building resilient web experiences that perform well across a range of devices and network conditions requires a progressive enhancement approach. Pinafore’s case study in <em>Success at Scale</em> highlights the benefits of ensuring core functionality remains accessible even in low-bandwidth or JavaScript-constrained environments.</p>

<p>By leveraging server-side rendering and delivering a usable experience even when JavaScript fails to load, Pinafore demonstrates the importance of progressive enhancement. This approach not only improves performance and resilience but also ensures that the application remains accessible to a wider range of users, including those with older devices or limited connectivity. By gracefully degrading functionality in constrained environments, Pinafore provides a reliable and inclusive experience for all users.</p>
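<p>In code, progressive enhancement often boils down to feature detection: the server-rendered markup works on its own, and script only upgrades it where the browser can support the upgrade. The sketch below is illustrative; the function names are assumptions, not Pinafore&rsquo;s actual code:</p>

```javascript
// Progressive enhancement in miniature (function names are illustrative,
// not Pinafore's code). The server-rendered list of posts is usable
// as-is; script only upgrades it when the browser can support it.
function enhanceTimeline(root) {
  // With JavaScript disabled this function never runs, and the plain
  // server-rendered markup still works. With JS but without
  // IntersectionObserver, we also keep the basic experience.
  if (typeof IntersectionObserver === 'undefined') return 'basic';

  // Otherwise, upgrade to infinite scroll by watching a sentinel, e.g.:
  // new IntersectionObserver(loadMorePosts)
  //   .observe(root.querySelector('.sentinel'));
  return 'enhanced';
}
```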

<h3 id="adaptive-loading-strategies">Adaptive Loading Strategies</h3>

<p>The book’s case study on Tinder highlights the power of sophisticated adaptive loading strategies. By dynamically adjusting the content and resources delivered based on the user’s device capabilities and network conditions, Tinder ensures a seamless experience across a wide range of devices and connectivity scenarios. Tinder’s adaptive loading approach involves techniques like dynamic code splitting, conditional resource loading, and real-time network quality detection. This allows the application to optimize the delivery of critical resources, prioritize essential content, and minimize the impact of poor network conditions on the user experience.</p>

<p>By adapting to the user’s context, Tinder delivers a fast and responsive experience, even in challenging environments.</p>
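<p>One concrete building block for this kind of adaptation is the Network Information API (<code>navigator.connection</code>), which exposes an <code>effectiveType</code> estimate of the connection. The sketch below shows the general shape; the tier names and fallback policy are illustrative assumptions, not Tinder&rsquo;s actual implementation, and the API is not supported in every browser:</p>

```javascript
// Map the Network Information API's effectiveType to an asset tier.
// Kept as a pure function so the policy is easy to test; the tier
// names are illustrative.
function assetTierFor(effectiveType) {
  switch (effectiveType) {
    case 'slow-2g':
    case '2g':
      return 'low';    // smallest images, skip prefetching
    case '3g':
      return 'medium'; // compressed images, defer non-critical chunks
    default:
      return 'high';   // '4g' or unknown: full-quality assets
  }
}

// Browser wiring, guarded so the sketch also runs outside a browser.
// navigator.connection is not available everywhere, hence the checks.
function currentAssetTier() {
  if (typeof navigator !== 'undefined' && navigator.connection) {
    return assetTierFor(navigator.connection.effectiveType);
  }
  return 'high'; // optimistic fallback when the API is unavailable
}
```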

<h3 id="efficient-resource-management">Efficient Resource Management</h3>

<p>Effective management of resources, such as images and third-party scripts, can significantly impact performance. eBay’s journey showcases the importance of optimizing image delivery, leveraging techniques like <strong>lazy loading</strong> and <strong>responsive images</strong> to reduce page weight and improve load times.</p>

<p>By implementing lazy loading, eBay ensures that images are only loaded when they are likely to be viewed by the user, reducing initial page load time and conserving bandwidth. Additionally, by serving appropriately sized images based on the user’s device and screen size, eBay minimizes the transfer of unnecessary data and improves the overall loading performance. These resource management optimizations, combined with other techniques like caching and CDN utilization, enable eBay to deliver a fast and efficient experience to its global user base.</p>
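<p>Both techniques map directly onto standard HTML attributes: <code>loading="lazy"</code> defers the fetch until the image nears the viewport, while <code>srcset</code> and <code>sizes</code> let the browser pick an appropriately sized file. A minimal sketch, with illustrative file names and breakpoints:</p>

```html
<!-- Lazy loading defers the download; srcset/sizes let the browser
     choose a file sized for the current device instead of always
     downloading the largest one. -->
<img
  src="/images/item-640.jpg"
  srcset="/images/item-320.jpg 320w,
          /images/item-640.jpg 640w,
          /images/item-1280.jpg 1280w"
  sizes="(max-width: 600px) 100vw, 50vw"
  loading="lazy"
  decoding="async"
  width="640" height="480"
  alt="Product photo">
```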

<h3 id="continuous-performance-monitoring">Continuous Performance Monitoring</h3>

<p>Regularly monitoring and analyzing performance metrics is crucial for identifying bottlenecks and opportunities for optimization. The case study on Yahoo! Japan News demonstrates the impact of continuous performance monitoring, using tools like Lighthouse and real user monitoring to identify and address performance issues proactively.</p>

<p>By establishing <strong>a performance monitoring infrastructure</strong>, Yahoo! Japan News <a href="https://web.dev/case-studies/yahoo-japan-news">gains visibility into the real-world performance experienced by their users</a>. This data-driven approach allows them to identify performance regressions, pinpoint specific areas for improvement, and measure the impact of their optimizations. Continuous monitoring also enables Yahoo! Japan News to set performance baselines, track progress over time, and ensure that performance remains a top priority as the application evolves.</p>
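<p>On the real-user-monitoring side, browsers expose metrics like Largest Contentful Paint through the standard <code>PerformanceObserver</code> API. The sketch below collects samples and summarizes them at the 75th percentile, the threshold Core Web Vitals tooling reports; the wiring and any beacon endpoint are illustrative assumptions, not Yahoo! Japan&rsquo;s actual setup:</p>

```javascript
// Real-user-monitoring sketch (endpoint and names are illustrative).
// In a browser, PerformanceObserver delivers Largest Contentful Paint
// entries; outside a browser the guard simply skips the wiring.
const lcpSamples = [];

if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      lcpSamples.push(entry.startTime);
    }
    // On page hide, the final value would typically be beaconed, e.g.:
    // navigator.sendBeacon('/rum', JSON.stringify({ lcp: lcpSamples.at(-1) }));
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}

// Core Web Vitals tooling reports the 75th percentile across page loads.
function percentile(values, p) {
  if (values.length === 0) return 0;
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.ceil((p / 100) * sorted.length) - 1];
}
```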

<h2 id="embracing-accessibility-and-inclusive-design">Embracing Accessibility and Inclusive Design</h2>

<p>Creating inclusive web experiences that cater to diverse user needs is not only an ethical imperative but also a critical factor in scaling success. The case studies in <em>Success at Scale</em> emphasize the importance of accessibility and inclusive design.</p>

<h3 id="comprehensive-accessibility-testing">Comprehensive Accessibility Testing</h3>

<p>Ensuring accessibility requires a combination of automated testing tools and manual evaluation. LinkedIn’s approach to automated accessibility testing demonstrates the value of integrating accessibility checks into the development workflow, catching potential issues early, and reducing the reliance on manual testing alone.</p>

<p>By leveraging tools like <a href="https://www.deque.com/axe/devtools/">Deque’s axe</a> and integrating accessibility tests into their continuous integration pipeline, LinkedIn can identify and address accessibility issues <em>before</em> they reach production. This <strong>proactive approach to accessibility testing</strong> not only improves the overall accessibility of the platform but also reduces the cost and effort associated with retroactive fixes. However, LinkedIn also recognizes <a href="https://www.linkedin.com/help/linkedin/ask/DAD">the importance of manual testing and user feedback in uncovering complex accessibility issues</a> that automated tools may miss. By combining automated checks with manual evaluation, LinkedIn ensures a comprehensive approach to accessibility testing.</p>
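<p>To make the shape of such a CI check concrete, here is a deliberately tiny stand-in: it only flags <code>&lt;img&gt;</code> tags missing an <code>alt</code> attribute in an HTML string. Real pipelines use axe-core, which is vastly more thorough; this toy exists purely to illustrate the &ldquo;fail the build on violations&rdquo; pattern:</p>

```javascript
// A deliberately tiny stand-in for an automated accessibility check of
// the kind a CI pipeline runs (real pipelines use axe-core, which is
// far more thorough). This one only flags <img> tags that lack an alt
// attribute in an HTML string.
function findImagesMissingAlt(html) {
  const violations = [];
  const imgTags = html.match(/<img\b[^>]*>/gi) || [];
  for (const tag of imgTags) {
    if (!/\balt\s*=/i.test(tag)) {
      violations.push(tag);
    }
  }
  return violations;
}

// In CI, a non-empty result would fail the build before the change
// reaches production:
// if (findImagesMissingAlt(renderedPage).length > 0) process.exit(1);
```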

<h3 id="inclusive-design-practices">Inclusive Design Practices</h3>

<p>Designing with accessibility in mind from the outset leads to more inclusive and usable products. <em>Success at Scale</em>&rsquo;s case study on Intercom about creating an accessible messenger highlights the importance of considering diverse user needs, such as keyboard navigation and screen reader compatibility, throughout the design process.</p>

<p>By embracing inclusive design principles, <a href="https://www.intercom.com/blog/live-chat-support/">Intercom ensures that their messenger is usable by a wide range of users</a>, including those with visual, motor, or cognitive impairments. This involves considering factors such as color contrast, font legibility, focus management, and clear labeling of interactive elements. By <strong>designing with empathy</strong> and understanding the diverse needs of their users, Intercom creates a messenger experience that is intuitive, accessible, and inclusive. This approach not only benefits users with disabilities but also leads to a more user-friendly and resilient product overall.</p>

<h3 id="user-research-and-feedback">User Research And Feedback</h3>

<p>Engaging with users with disabilities and incorporating their feedback is essential for creating truly inclusive experiences. <a href="https://www.smashingmagazine.com/2022/08/organization-improved-web-accessibility-case-study/">The Understood’s journey emphasizes the value of user research and collaboration with accessibility experts</a> to identify and address accessibility barriers effectively.</p>

<p>By conducting usability studies with users who have diverse abilities and working closely with accessibility consultants, The Understood gains invaluable insights into the real-world challenges faced by their users. This user-centered approach allows them to identify pain points, gather feedback on proposed solutions, and iteratively improve the accessibility of their platform.</p>

<blockquote>By involving users with disabilities throughout the design and development process, The Understood ensures that their products not only meet accessibility standards but also provide a meaningful and inclusive experience for all users.</blockquote>

<h3 id="accessibility-as-a-shared-responsibility">Accessibility As A Shared Responsibility</h3>

<p>Promoting accessibility as a shared responsibility across the organization fosters a culture of inclusivity. Shopify’s case study underscores the importance of <strong>educating and empowering teams to prioritize accessibility</strong>, recognizing it as a fundamental aspect of the user experience rather than a mere technical checkbox.</p>

<p><a href="https://www.shopify.com/accessibility/policy#8-training-for-staff">By providing accessibility training, guidelines, and resources</a> to designers, developers, and content creators, Shopify ensures that accessibility is considered at <em>every</em> stage of the product development lifecycle. This shared responsibility approach helps to build accessibility into the core of Shopify’s products and fosters <strong>a culture of inclusivity and empathy</strong>. By making accessibility everyone’s responsibility, Shopify not only improves the usability of their platform but also <a href="https://www.shopify.com/partners/blog/theme-store-accessibility-requirements">sets an example for the wider industry on the importance of inclusive design</a>.</p>

<h2 id="fostering-a-culture-of-collaboration-and-knowledge-sharing">Fostering A Culture of Collaboration And Knowledge Sharing</h2>

<p>Scaling success requires a culture that promotes collaboration, knowledge sharing, and continuous learning. The case studies in <em>Success at Scale</em> highlight the impact of effective collaboration and knowledge management practices.</p>

<h3 id="cross-functional-collaboration">Cross-Functional Collaboration</h3>

<p>Breaking down silos and fostering cross-functional collaboration accelerates problem-solving and innovation. Airbnb’s design system journey showcases the power of collaboration between design and engineering teams, leading to a cohesive and scalable design language across web and mobile platforms.</p>

<p><a href="https://airbnb.design/the-way-we-build/">By establishing a shared language and a set of reusable components</a>, Airbnb’s design system enables designers and developers to work together more efficiently. Regular collaboration sessions, such as design critiques and code reviews, help to align both teams and ensure that the design system evolves in a way that meets the needs of all stakeholders. This cross-functional approach not only improves the consistency and quality of the user experience but also <strong>accelerates the development process</strong> by reducing duplication of effort and promoting code reuse.</p>

<h3 id="knowledge-sharing-and-documentation">Knowledge Sharing And Documentation</h3>

<p>Capturing and sharing knowledge across the organization is crucial for maintaining consistency and enabling the efficient onboarding of new team members. <a href="https://slab.com/blog/stripe-writing-culture/">Stripe’s investment in internal frameworks and documentation</a> exemplifies the value of creating a shared understanding and facilitating knowledge transfer.</p>

<p>By maintaining comprehensive documentation, code examples, and best practices, Stripe ensures that developers can quickly grasp the intricacies of their internal tools and frameworks. This documentation-driven culture not only <strong>reduces the learning curve for new hires</strong> but also <strong>promotes consistency and adherence to established patterns and practices</strong>. Regular knowledge-sharing sessions, such as tech talks and lunch-and-learns, further reinforce this culture of learning and collaboration, enabling team members to learn from each other’s experiences and stay up-to-date with the latest developments.</p>

<h3 id="communities-of-practice">Communities Of Practice</h3>

<p>Establishing communities of practice around specific domains, such as accessibility or performance, promotes knowledge sharing and continuous improvement. Shopify’s accessibility guild demonstrates the impact of <a href="https://ux.shopify.com/accessibility-is-more-than-a-technical-problem-ca6bb9dee8ce">creating a dedicated space for experts and advocates to collaborate, share best practices, and drive accessibility initiatives forward</a>.</p>

<p>By bringing together individuals passionate about accessibility from across the organization, Shopify’s accessibility guild fosters <strong>a sense of community</strong> and <strong>collective ownership</strong>. Regular meetings, workshops, and hackathons provide opportunities for members to share their knowledge, discuss challenges, and collaborate on solutions. This community-driven approach not only accelerates the adoption of accessibility best practices but also helps to build a culture of inclusivity and empathy throughout the organization.</p>

<h3 id="leveraging-open-source-and-external-expertise">Leveraging Open Source And External Expertise</h3>

<p>Collaborating with the wider developer community and leveraging open-source solutions can accelerate development and provide valuable insights. <a href="https://nolanlawson.com/2019/11/05/what-ive-learned-about-accessibility-in-spas/">Pinafore’s journey</a> highlights the benefits of engaging with accessibility experts and incorporating their feedback to create a more inclusive and accessible web experience.</p>

<p>By actively seeking input from the accessibility community and leveraging open-source accessibility tools and libraries, Pinafore was able to identify and address accessibility issues more effectively. This collaborative approach not only improved the accessibility of the application but also contributed back to the wider community by sharing their learnings and experiences. By embracing open-source collaboration and learning from external experts, teams can accelerate their own accessibility efforts and contribute to the collective knowledge of the industry.</p>


<h2 id="the-path-to-sustainable-success">The Path To Sustainable Success</h2>

<p>Achieving scalable success in the web development landscape requires a multifaceted approach that encompasses the right mindset, strategic decision-making, and continuous learning. The <em>Success at Scale</em> book provides a comprehensive exploration of these elements, offering deep insights and practical guidance for teams at all stages of their scaling journey.</p>

<p>By cultivating a user-centric, data-driven, and inclusive mindset, teams can prioritize the needs of their users and make informed decisions that drive meaningful results. Adopting a culture of continuous improvement and collaboration ensures that teams are always striving to optimize and refine their products, leveraging the collective knowledge and expertise of their members.</p>

<p>Making strategic technology choices, such as selecting performance-oriented frameworks and investing in developer experience, lays the foundation for scalable and maintainable architectures. Implementing performance optimization techniques, such as adaptive loading, efficient resource management, and continuous monitoring, helps teams deliver fast and responsive experiences to their users.</p>

<p>Embracing accessibility and inclusive design practices not only ensures that products are usable by a wide range of users but also fosters a culture of empathy and user-centricity. By incorporating accessibility testing, inclusive design principles, and user feedback into the development process, teams can create products that are both technically sound and meaningfully inclusive.</p>

<p>Fostering a culture of collaboration, knowledge sharing, and continuous learning is essential for scaling success. By breaking down silos, promoting cross-functional collaboration, and investing in documentation and communities of practice, teams can accelerate problem-solving, drive innovation, and build a shared understanding of their products and practices.</p>

<p>The case studies featured in <em>Success at Scale</em> serve as powerful examples of how these principles and strategies can be applied in <strong>real-world contexts</strong>. By learning from the successes and challenges of industry leaders, teams can gain valuable insights and inspiration for their own scaling journeys.</p>

<p>As you embark on your path to scaling success, remember that <strong>it is an ongoing process of iteration, learning, and adaptation</strong>. Embrace the mindsets and strategies outlined in this article, dive deeper into the learnings from the <em>Success at Scale</em> book, and continually refine your approach based on the unique needs of your users and the evolving landscape of web development.</p>

<h2 id="conclusion">Conclusion</h2>

<p>Scaling successful web products requires <strong>a holistic approach</strong> that combines technical excellence, strategic decision-making, and a growth-oriented mindset. By learning from the experiences of industry leaders, as showcased in the <em>Success at Scale</em> book, teams can gain valuable insights and practical guidance on their journey towards <strong>sustainable success</strong>.</p>

<p><strong>Cultivating a user-centric, data-driven, and inclusive mindset lays the foundation for scalability.</strong> By prioritizing the needs of users, making informed decisions based on data, and fostering a culture of continuous improvement and collaboration, teams can create products that deliver meaningful value and drive long-term growth.</p>

<p>Making strategic decisions around technology choices, performance optimization, accessibility integration, and developer experience investment sets the stage for scalable and maintainable architectures. By leveraging proven optimization techniques, embracing inclusive design practices, and investing in the tools and processes that empower developers, teams can build products that are fast and resilient.</p>

<p>Through ongoing collaboration, knowledge sharing, and a commitment to learning, teams can navigate the complexities of scaling success and create products that make a lasting impact in the digital landscape.</p>

<figure style="margin-bottom:0;padding-bottom:0" class="break-out article__image">
    <a href="https://www.smashingmagazine.com/printed-books/success-at-scale/" title="Success at Scale">
    <img width="900" height="621" style="border-radius: 11px" src="https://files.smashing.media/articles/success-at-scale-book-release/success-at-scale-support-opt.png" alt="Success at Scale. Thanks for your kind support!">
    </a>
</figure>


<div class="book-cta__inverted">
	


	
	
	




















<div class="book-cta" data-handler="ContentTabs" data-mq="(max-width: 480px)">

  
 
<nav class="content-tabs content-tabs--books">
  <ul>
    <li class="content-tab">
      <a href="#">
        <button class="btn btn--small btn--white btn--white--bordered">
          Print + eBook
        </button>
      </a>
    </li>

    <li class="content-tab">
      <a href="#">
        <button class="btn btn--small btn--white btn--white--bordered">
          eBook
        </button>
      </a>
    </li>
  </ul>
</nav>


	<div class="book-cta__col book-cta__hardcover content-tab--content">
		<h3 class="book-cta__title">
			<span>Print + eBook</span>
		</h3>

		
			



	
	
	
	
	
	
	<script class="gocommerce-product" type="application/json" data-sku="success-at-scale" data-type="Book">
	{
		"sku": "success-at-scale",
		"type": "Book",
		"price": "44.00",
		
		"prices": [{
			"amount": "44.00",
			"currency": "USD",
			"items": [
				{"amount": "34.00", "type": "Book"},
				{"amount": "10.00", "type": "E-Book"}
			]
		}, {
			"amount": "44.00",
			"currency": "EUR",
			"items": [
				{"amount": "34.00", "type": "Book"},
				{"amount": "10.00", "type": "E-Book"}
			]
		}
		]
	}
	</script>


<span class="book-cta__price" data-handler="PriceTag" data-sku="success-at-scale" data-type="Book" data-insert="true">
  <span class="placeholder">
    
      
<span class="currency-sign">$</span>
44<span class="sup">.00</span>


    

    
  </span>
</span>

		
		<button class="btn btn--full btn--medium btn--text-shadow"
						
		        data-product-path="/printed-books/success-at-scale/"
						data-product-sku="success-at-scale"
            data-author="Addy Osmani"
            data-authors=""
						data-link=""
						
            data-component="AddToCart">
			 Get Print + eBook
		</button>
		<p class="book-cta__desc">
Quality hardcover. <a href="https://www.smashingmagazine.com/delivery-times/">Free worldwide shipping</a>.<br/> 100-day money-back guarantee.
		</p>
	</div>
	<div class="book-cta__col book-cta__ebook content-tab--content">
		<h3 class="book-cta__title">
			<span>eBook</span>
		</h3>

		
			<div data-audience="anonymous free supporter" data-remove="true">
				



	
	
	
	
	
	
	<script class="gocommerce-product" type="application/json" data-sku="success-at-scale-ebook" data-type="E-Book">
	{
		"sku": "success-at-scale-ebook",
		"type": "E-Book",
		"price": "19.00",
		
		"prices": [{
			"amount": "19.00",
			"currency": "USD"
		}, {
			"amount": "19.00",
			"currency": "EUR"
		}
		]
	}
	</script>


<span class="book-cta__price" data-handler="PriceTag" data-sku="success-at-scale-ebook" data-type="E-Book" data-insert="true">
  <span class="placeholder">
    
      
<span class="currency-sign">$</span>
19<span class="sup">.00</span>


    

    
  </span>
</span>

			</div>
		

    
      <span class="book-cta__price hidden" data-audience="smashing member" data-remove="true">
        <span class="green">Free!</span>
      </span>
    

		<button class="btn btn--full btn--medium btn--text-shadow"
		        data-product-path="/printed-books/success-at-scale/"
						data-product-sku="success-at-scale-ebook"
            data-author="Addy Osmani"
            data-authors=""
						data-link=""
            data-component="AddToCart"
						
            
              data-audience="anonymous free supporter"
              data-remove="true"
            
            >
			  Get the eBook
		</button>
		<p
      class="book-cta__desc"
      
        data-audience="anonymous free supporter"
        data-remove="true"
      
    >
			DRM-free, of course. ePUB, Kindle, PDF.<br/>Included with your <a href="https://www.smashingmagazine.com/membership/">Smashing Membership.</a>
		</p>

    
  <div data-audience="smashing member" class="hidden" data-remove="true">
    <a href="successatscalepdf" class="btn btn--medium btn--green btn--full js-add-to-cart">
      Get the eBook
    </a>
    <p class="book-cta__desc book-cta__desc--light">
      <a href="successatscalepdf">Download PDF</a>, <a href="successatscaleepub">ePUB</a>, <a href="successatscalemobi">Kindle</a>.<br/>Thanks for being smashing!&nbsp;❤️
    </p>
  </div>


	</div>
</div>

</div>

<h2 id="were-trying-out-something-new">We’re Trying Out Something New</h2>

<p>In an effort to conserve resources here at Smashing, we’re trying something new with <em>Success at Scale</em>. The printed book is 304 pages, and we make an <strong>expanded PDF version available to everyone who purchases a print book</strong>. This accomplishes a few good things:</p>

<ul>
<li>We will use <strong>less paper and materials</strong> because we are making a smaller printed book;</li>
<li>We’ll use fewer resources in general to print, ship, and store the books, leading to a <strong>smaller carbon footprint</strong>; and</li>
<li>Keeping the book at a more manageable size means we can <strong>continue to offer free shipping</strong> on all Smashing orders!</li>
</ul>

<p>Smashing Books have always been printed with materials from <a href="https://fsc.org/en">FSC Certified</a> forests. We are committed to finding new ways to conserve resources while still bringing you the best possible reading experience.</p>

<figure style="margin-bottom:0;padding-bottom:0" class="break-out article__image">
    <a href="https://www.smashingmagazine.com/printed-books/success-at-scale/" title="Success at Scale">
    <img width="900" height="621" style="border-radius: 11px" src="https://files.smashing.media/articles/success-at-scale-book-release/success-at-scale-new-addition-opt.png" alt="Success at Scale. Thanks for your kind support!">
    </a>
</figure>

<h2 id="community-matters">Community Matters ❤️</h2>

<p>Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful <strong>community</strong>. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be <a href="https://www.smashingmagazine.com/membership">free for <em>Smashing Members</em></a>. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)</p>

<h2 id="more-smashing-books-and-goodies">More Smashing Books &amp; Goodies</h2>

<p>Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the <strong>core of everything we do</strong> at Smashing.</p>

<p>In the past few years, we have been very lucky to work together with some talented, caring people from the web community to publish their wealth of experience as <a href="/printed-books/">printed books that stand the test of time</a>. Heather and Steven are two of these people. Have you checked out their books already?</p>

<div class="book-grid break-out book-grid__in-post">

</div>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(gg, yk, vf, il)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Lazar Nikolov</author><title>The Forensics Of React Server Components (RSCs)</title><link>https://www.smashingmagazine.com/2024/05/forensics-react-server-components/</link><pubDate>Thu, 09 May 2024 13:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2024/05/forensics-react-server-components/</guid><description>We love client-side rendering for the way it relieves the server of taxing operations, but serving an empty HTML page often leads to taxing user experiences during the initial page load. We love server-side rendering because it allows us to serve static assets on speedy CDNs, but they’re unfit for large-scale projects with dynamic content. React Server Components (RSCs) combine the best of both worlds, and author Lazar Nikolov thoroughly examines how we got here with a deep look at the impact that RSCs have on the page load timeline.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2024/05/forensics-react-server-components/" />
              <title>The Forensics Of React Server Components (RSCs)</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>The Forensics Of React Server Components (RSCs)</h1>
                  
                    
                    <address>Lazar Nikolov</address>
                  
                  <time datetime="2024-05-09T13:00:00&#43;00:00" class="op-published">2024-05-09T13:00:00+00:00</time>
                  <time datetime="2024-05-09T13:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                <p>This article is sponsored by <b>Sentry.io</b></p>
                

<p>In this article, we’re going to look deeply at React Server Components (RSCs). They are the latest innovation in React’s ecosystem, leveraging both server-side and client-side rendering as well as <a href="https://en.wikipedia.org/wiki/Chunked_transfer_encoding">streaming HTML</a> to deliver content as fast as possible.</p>

<p>We will get really nerdy to get a full understanding of how RSCs fit into the React picture, the level of control they offer over the rendering lifecycle of components, and what page loads look like with RSCs in place.</p>

<p>But before we dive into all of that, I think it’s worth looking back at how React has rendered websites up until this point to set the context for why we need RSCs in the first place.</p>

<h2 id="the-early-days-react-client-side-rendering">The Early Days: React Client-Side Rendering</h2>

<p>The first React apps were rendered on the client side, i.e., in the browser. As developers, we wrote apps with JavaScript classes as components and packaged everything up using bundlers, like Webpack, in a nicely compiled and tree-shaken heap of code ready to ship in a production environment.</p>

<p>The HTML that returned from the server contained a few things, including:</p>

<ul>
<li>An HTML document with metadata in the <code>&lt;head&gt;</code> and a blank <code>&lt;div&gt;</code> in the <code>&lt;body&gt;</code> used as a hook to inject the app into the DOM;</li>
<li>JavaScript resources containing React’s core code and the actual code for the web app, which would generate the user interface and populate the app inside of the empty <code>&lt;div&gt;</code>.</li>
</ul>
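<p>That handoff can be sketched with plain strings. This is only a toy model (the <code>root</code> id and bundle path are invented for illustration); a real app calls <code>ReactDOM</code> against the live DOM:</p>

```javascript
// Toy model of the CSR handoff: the server ships a shell with an empty
// mount node, and only the downloaded JavaScript fills it in. The
// "root" id and bundle path are invented for illustration.
const shell =
  '<body><div id="root"></div><script src="/bundle.js"></script></body>';

// Stand-in for what React does on the client once the bundle runs.
function renderApp(html, markup) {
  return html.replace('<div id="root"></div>', `<div id="root">${markup}</div>`);
}

const afterJsRuns = renderApp(shell, "<h1>Hello</h1>");
```

<p>Until that last line runs, the user is staring at an empty <code>&lt;div&gt;</code>.</p>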














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/forensics-react-server-components/1-client-side-rendering-process.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="566"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/1-client-side-rendering-process.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/forensics-react-server-components/1-client-side-rendering-process.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/forensics-react-server-components/1-client-side-rendering-process.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/forensics-react-server-components/1-client-side-rendering-process.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/forensics-react-server-components/1-client-side-rendering-process.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/1-client-side-rendering-process.jpg"
			
			sizes="100vw"
			alt="Diagram of the client-side rendering process of a React app, starting with a blank loading page in the browser followed by a series of processes connected to CDNs and APIs to produce content on the loading page."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 1. (<a href='https://files.smashing.media/articles/forensics-react-server-components/1-client-side-rendering-process.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>A web app under this process is only fully interactive once JavaScript has fully completed its operations. You can probably already see the tension here that comes with an <strong>improved developer experience (DX) that negatively impacts the user experience (UX)</strong>.</p>

<p>The truth is that there were (and are) pros and cons to CSR in React. Looking at the positives, web applications delivered <strong>smooth, quick transitions</strong> that reduced the overall time it took to load a page, thanks to reactive components that update with user interactions without triggering page refreshes. CSR lightens the server load and allows us to serve assets from speedy content delivery networks (CDNs) capable of delivering content to users from a server location geographically closer to the user for even more optimized page loads.</p>

<p>There are also not-so-great consequences that come with CSR, most notably perhaps that components could fetch data independently, leading to <a href="https://blog.sentry.io/fetch-waterfall-in-react/"><strong>waterfall network requests</strong></a> that dramatically slow things down. This may sound like a minor nuisance on the UX side of things, but the damage can actually be quite large on a human level. Eric Bailey’s “<a href="https://ericwbailey.design/published/modern-health-frameworks-performance-and-harm/">Modern Health, frameworks, performance, and harm</a>” should be a cautionary tale for all CSR work.</p>
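<p>A back-of-the-envelope latency model (with invented numbers) shows why waterfalls hurt: when each component fetches only after its parent has rendered, the latencies add up, whereas parallel requests cost only as much as the slowest one.</p>

```javascript
// Toy latency model of a fetch waterfall; the numbers are invented.
const latencies = { user: 100, posts: 150, comments: 120 };

// Waterfall: each component starts fetching only after its parent has
// rendered, so the latencies add up.
const waterfallTotal = Object.values(latencies).reduce((sum, ms) => sum + ms, 0);

// Parallel: all requests start together (e.g., hoisted into a single
// loader), so the slowest request dominates.
const parallelTotal = Math.max(...Object.values(latencies));
```

<p>Three modest 100&ndash;150ms requests already more than double the wait when chained.</p>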

<p>Other negative CSR consequences are not quite as severe but still lead to damage. For example, it used to be that an HTML document containing nothing but metadata and an empty <code>&lt;div&gt;</code> was illegible to search engine crawlers that never get the fully-rendered experience. While that’s solved today, the SEO hit at the time was an anchor on company sites that rely on search engine traffic to generate revenue.</p>

<h2 id="the-shift-server-side-rendering-ssr">The Shift: Server-Side Rendering (SSR)</h2>

<p>Something needed to change. CSR presented developers with a powerful new approach for constructing speedy, interactive interfaces, but users everywhere were inundated with blank screens and loading indicators to get there. The solution was to move the rendering experience from the <strong>client</strong> to the <strong>server</strong>. I know it sounds funny that we needed to improve something by going back to the way it was before.</p>

<p>So, yes, React gained server-side rendering (SSR) capabilities. At one point, SSR was such a topic in the React community that <a href="https://sentry.io/resources/moving-to-server-side-rendering/">it had a moment</a> in the spotlight. The move to SSR brought significant changes to app development, specifically in how it influenced React behavior and how content could be delivered by way of servers instead of browsers.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/forensics-react-server-components/2-diagram-server-side-rendering-process.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="600"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/2-diagram-server-side-rendering-process.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/forensics-react-server-components/2-diagram-server-side-rendering-process.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/forensics-react-server-components/2-diagram-server-side-rendering-process.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/forensics-react-server-components/2-diagram-server-side-rendering-process.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/forensics-react-server-components/2-diagram-server-side-rendering-process.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/2-diagram-server-side-rendering-process.jpg"
			
			sizes="100vw"
			alt="Diagram of the server-side rendering process of a React app, starting with a blank loading page in the browser followed by a screen of un-interactive content, then a fully interactive page of content."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 2. (<a href='https://files.smashing.media/articles/forensics-react-server-components/2-diagram-server-side-rendering-process.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="addressing-csr-limitations">Addressing CSR Limitations</h3>

<p>Instead of sending a blank HTML document with SSR, we rendered the initial HTML on the server and sent it to the browser. The browser was able to immediately start displaying the content without needing to show a loading indicator. This significantly improves the <a href="https://docs.sentry.io/product/performance/web-vitals/web-vitals-concepts/#first-contentful-paint-fcp">First Contentful Paint (FCP) performance metric in Web Vitals</a>.</p>

<p>Server-side rendering also fixed the SEO issues that came with CSR. Since the crawlers received the content of our websites directly, they were then able to index it right away. The data fetching that happens initially also takes place on the server, which is a plus because it’s closer to the data source and can eliminate fetch waterfalls <a href="https://blog.sentry.io/fetch-waterfall-in-react/#fetch-data-on-server-to-avoid-a-fetch-waterfall"><em>if done properly</em></a>.</p>

<h3 id="hydration">Hydration</h3>

<p>SSR has its own complexities. For React to make the static HTML received from the server interactive, it needs to <strong>hydrate</strong> it. Hydration is the process that happens when React reconstructs its Virtual Document Object Model (DOM) on the client side based on what was in the DOM of the initial HTML.</p>

<blockquote><strong>Note</strong>: React maintains its own <a href="https://legacy.reactjs.org/docs/faq-internals.html">Virtual DOM</a> because it’s faster to figure out updates on it instead of the actual DOM. It synchronizes the actual DOM with the Virtual DOM when it needs to update the UI but performs the diffing algorithm on the Virtual DOM.</blockquote>

<p>We now have two flavors of React:</p>

<ol>
<li><strong>A server-side flavor</strong> that knows how to render static HTML from our component tree,</li>
<li><strong>A client-side flavor</strong> that knows how to make the page interactive.</li>
</ol>

<p>We’re still shipping React and code for the app to the browser because &mdash; in order to hydrate the initial HTML &mdash; React needs the same components on the client side that were used on the server. During hydration, <a href="https://css-tricks.com/how-react-reconciliation-works/">React performs a process called</a> <a href="https://css-tricks.com/how-react-reconciliation-works/"><em>reconciliation</em></a> in which it compares the server-rendered DOM with the client-rendered DOM and tries to identify differences between the two. If there are differences between the two DOMs, React attempts to fix them by rehydrating the component tree and updating the component hierarchy to match the server-rendered structure. And if there are <em>still</em> inconsistencies that cannot be resolved, React will throw errors to indicate the problem. This problem is commonly known as a <em>hydration error</em>.</p>
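<p>A toy diff over plain objects can illustrate the idea. The node shape is invented for illustration; React&rsquo;s real reconciliation works on its Virtual DOM and is far more involved:</p>

```javascript
// Toy model of reconciliation: walk a "server-rendered" tree and a
// "client-rendered" tree in lockstep and collect the paths where they
// disagree. The node shape ({ tag, text, children }) is invented.
function diff(server, client, path = "root") {
  const mismatches = [];
  if (server.tag !== client.tag || server.text !== client.text) {
    mismatches.push(path);
  }
  const serverKids = server.children || [];
  const clientKids = client.children || [];
  const count = Math.max(serverKids.length, clientKids.length);
  for (let i = 0; i < count; i++) {
    const childPath = `${path}.children[${i}]`;
    if (!serverKids[i] || !clientKids[i]) {
      mismatches.push(childPath); // a node exists on only one side
      continue;
    }
    mismatches.push(...diff(serverKids[i], clientKids[i], childPath));
  }
  return mismatches;
}

// A classic hydration error: the server rendered a timestamp that the
// client recomputed, so the text no longer matches.
const serverTree = { tag: "p", text: "Rendered at 10:00" };
const clientTree = { tag: "p", text: "Rendered at 10:05" };
const problems = diff(serverTree, clientTree);
```

<p>Any non-empty result here is the moral equivalent of a hydration error.</p>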

<h3 id="ssr-drawbacks">SSR Drawbacks</h3>

<p>SSR is not a silver bullet solution that addresses CSR limitations. SSR comes with its own drawbacks. Since we moved the initial HTML rendering and data fetching to the server, those servers are now experiencing a much greater load than when we loaded everything on the client.</p>

<p>Remember when I mentioned that SSR generally improves the FCP performance metric? That may be true, but the <a href="https://docs.sentry.io/product/performance/web-vitals/web-vitals-concepts/#time-to-first-byte-ttfb">Time to First Byte (TTFB) performance metric</a> took a hit with SSR. The browser literally has to wait for the server to fetch the data it needs, generate the initial HTML, and send the first byte. And while TTFB is not a Core Web Vitals metric in itself, it influences them: a slow TTFB drags down the metrics that depend on it.</p>

<p>Another drawback of SSR is that the entire page is unresponsive until client-side React has finished hydrating it. Interactive elements cannot listen and “react” to user interactions before React hydrates them, i.e., React attaches the intended event listeners to them. The hydration process is typically fast, but the internet connection and hardware capabilities of the device in use can slow down rendering by a noticeable amount.</p>

<h2 id="the-present-a-hybrid-approach">The Present: A Hybrid Approach</h2>

<p>So far, we have covered two different flavors of React rendering: CSR and SSR. Each attempts to overcome the other&rsquo;s shortcomings, and we now get the best of both worlds, so to speak, as SSR has branched into three additional React flavors that offer a hybrid approach in hopes of reducing the limitations that come with CSR and SSR.</p>

<p>We’ll look at the first two &mdash; <strong>static site generation</strong> and <strong>incremental static regeneration</strong> &mdash; before jumping into an entire discussion on React Server Components, the third flavor.</p>

<h3 id="static-site-generation-ssg">Static Site Generation (SSG)</h3>

<p>Instead of regenerating the same HTML code on every request, we came up with SSG. This React flavor compiles and builds the entire app at build time, generating static (as in vanilla HTML and CSS) files that are, in turn, hosted on a speedy CDN.</p>

<p>As you might suspect, this hybrid approach to rendering is a nice fit for smaller projects where the content doesn’t change much, like a marketing site or a personal blog, as opposed to larger projects where content may change with user interactions, like an e-commerce site.</p>

<p>SSG reduces the burden on the server while improving performance metrics related to TTFB because the server no longer has to perform heavy, expensive tasks for re-rendering the page.</p>
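<p>The whole idea fits in a few lines. This sketch, with invented page data, renders every page exactly once at build time:</p>

```javascript
// Toy model of SSG: every page is rendered exactly once, at build time,
// into plain HTML files that a CDN can serve as-is. The page data is
// invented for illustration.
const pages = [
  { slug: "hello-world", title: "Hello World" },
  { slug: "about", title: "About" },
];

function buildSite(allPages) {
  const files = {};
  for (const page of allPages) {
    files[`/${page.slug}.html`] =
      `<html><body><h1>${page.title}</h1></body></html>`;
  }
  return files;
}

const site = buildSite(pages);
```

<p>After the build, no server-side rendering happens at request time at all.</p>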

<h3 id="incremental-static-regeneration-isr">Incremental Static Regeneration (ISR)</h3>

<p>One SSG drawback is having to rebuild all of the app’s code when a content change is needed. The content is set in stone &mdash; being static and all &mdash; and there’s no way to change just one part of it without rebuilding the whole thing.</p>

<p>The Next.js team created the second hybrid flavor of React that addresses the drawback of complete SSG rebuilds: <strong>incremental static regeneration (ISR)</strong>. The name says a lot about the approach in that ISR only rebuilds what’s needed instead of the entire thing. We generate the “initial version” of the page statically during build time but are also able to rebuild any page containing stale data <em>after</em> a user lands on it (i.e., the server request triggers the data check).</p>

<p>From that point on, the server will serve new versions of that page statically in increments when needed. That makes ISR a hybrid approach that is neatly positioned between SSG and traditional SSR.</p>

<p>At the same time, ISR does not address the “stale content” symptom, where users may visit a page before it has finished being generated. Unlike SSG, ISR needs an actual server to regenerate individual pages in response to a user’s browser making a server request. That means we lose the valuable ability to deploy ISR-based apps on a CDN for optimized asset delivery.</p>
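<p>The stale-while-revalidate behavior can be sketched with a manual clock. Everything here, the <code>revalidate</code> window and the markup, is invented for illustration:</p>

```javascript
// Toy model of ISR with a manual clock (in seconds). The `revalidate`
// window and the page markup are invented for illustration.
function createIsrCache(revalidate, renderPage) {
  let entry = null;
  return function handleRequest(now) {
    if (!entry) {
      // The very first request has to wait for a build.
      entry = { html: renderPage(), builtAt: now };
      return entry.html;
    }
    const isStale = now - entry.builtAt > revalidate;
    const served = entry.html; // the cached copy is always served immediately
    if (isStale) {
      // Regenerate in the "background" so the *next* visitor gets fresh HTML.
      entry = { html: renderPage(), builtAt: now };
    }
    return served;
  };
}

let version = 0;
const handleRequest = createIsrCache(60, () => `<p>v${++version}</p>`);
```

<p>Note that the request which detects staleness still receives the old page; only the one after it sees the regenerated version. That is the &ldquo;stale content&rdquo; symptom in miniature.</p>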

<h2 id="the-future-react-server-components">The Future: React Server Components</h2>

<p>Up until this point, we’ve juggled between CSR, SSR, SSG, and ISR approaches, where all make some sort of trade-off, negatively affecting performance, development complexity, and user experience. Newly introduced <a href="https://nextjs.org/docs/app/building-your-application/rendering/server-components">React Server Components</a> (RSC) aim to address most of these drawbacks by allowing us &mdash; the developer &mdash; to <strong>choose the right rendering strategy for each individual React component</strong>.</p>

<p>RSCs can significantly reduce the amount of JavaScript shipped to the client since we can selectively decide which ones to serve statically on the server and which render on the client side. There’s a lot more control and flexibility for striking the right balance for your particular project.</p>

<blockquote><strong>Note:</strong> It’s important to keep in mind that as we adopt more advanced architectures, like RSCs, monitoring solutions become invaluable. Sentry offers robust <a href="https://docs.sentry.io/product/performance/">performance monitoring</a> and error-tracking capabilities that help you keep an eye on the real-world performance of your RSC-powered application. Sentry also helps you gain insights into how your releases are performing and how stable they are, which is yet another crucial feature to have while migrating your existing applications to RSCs. Implementing Sentry in an RSC-enabled framework like <a href="https://sentry.io/for/nextjs/">Next.js</a> is as easy as running a single terminal command.</blockquote>

<p>But what exactly <em>is</em> an RSC? Let’s pick one apart to see how it works under the hood.</p>

<h2 id="the-anatomy-of-react-server-components">The Anatomy of React Server Components</h2>

<p>This new approach introduces two types of rendering components: <strong>Server Components</strong> and <strong>Client Components</strong>. The differences between these two are not <em>how</em> they function but <em>where</em> they execute and the environments they’re designed for. At the time of this writing, the only way to use RSCs is through React frameworks. And at the moment, there are only three frameworks that support them: <a href="https://nextjs.org/docs/app/building-your-application/rendering/server-components">Next.js</a>, <a href="https://www.gatsbyjs.com/docs/conceptual/partial-hydration/">Gatsby</a>, and <a href="https://redwoodjs.com/blog/rsc-now-in-redwoodjs">RedwoodJS</a>.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/forensics-react-server-components/3-wire-diagram-server-client-components.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="763"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/3-wire-diagram-server-client-components.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/forensics-react-server-components/3-wire-diagram-server-client-components.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/forensics-react-server-components/3-wire-diagram-server-client-components.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/forensics-react-server-components/3-wire-diagram-server-client-components.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/forensics-react-server-components/3-wire-diagram-server-client-components.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/3-wire-diagram-server-client-components.jpg"
			
			sizes="100vw"
			alt="Wire diagram showing connected server components and client components represented as gray and blue dots, respectively."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 3: Example of an architecture consisting of Server Components and Client Components. (<a href='https://files.smashing.media/articles/forensics-react-server-components/3-wire-diagram-server-client-components.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="server-components">Server Components</h3>

<p>Server Components are designed to be executed on the server, and their code is never shipped to the browser. The HTML output and any props they might be accepting are the only pieces that are served. This approach has multiple performance benefits and user experience enhancements:</p>

<ul>
<li><strong>Server Components allow for large dependencies to remain on the server side.</strong><br />
Imagine using a large library for a component. If you’re executing the component on the client side, it means that you’re also shipping the full library to the browser. With Server Components, you’re only taking the static HTML output and avoiding having to ship any JavaScript to the browser. Server Components are truly static, and they remove the whole hydration step.</li>
<li><strong>Server Components are located much closer to the data sources &mdash; e.g., databases or file systems &mdash; that they need to generate content.</strong><br />
They also leverage the server’s computational power to speed up compute-intensive rendering tasks and send only the generated results back to the client. They are also generated in a single pass, which <a href="https://blog.sentry.io/fetch-waterfall-in-react/#fetch-data-on-server-to-avoid-a-fetch-waterfall">avoids request waterfalls and HTTP round trips</a>.</li>
<li><strong>Server Components safely keep sensitive data and logic away from the browser.</strong><br />
That’s thanks to the fact that personal tokens and API keys are executed on a secure server rather than the client.</li>
<li><strong>The rendering results can be cached and reused between subsequent requests and even across different sessions.</strong><br />
This significantly reduces rendering time, as well as the overall amount of data that is fetched for each request.</li>
</ul>
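<p>That last point, reusing rendered output between requests, can be sketched as a simple memoization of HTML keyed by props. The component and props are invented for illustration:</p>

```javascript
// Toy model of reusing Server Component output across requests: render
// once per distinct set of props and serve the cached HTML afterwards.
// The component and props are invented for illustration.
const htmlCache = new Map();
let renderCount = 0;

function renderProductList(props) {
  const key = JSON.stringify(props);
  if (!htmlCache.has(key)) {
    renderCount++; // the expensive render runs only on a cache miss
    const items = props.items.map((item) => `<li>${item}</li>`).join("");
    htmlCache.set(key, `<ul>${items}</ul>`);
  }
  return htmlCache.get(key);
}

const first = renderProductList({ items: ["Gloves", "Hat"] });
const second = renderProductList({ items: ["Gloves", "Hat"] });
```
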

<p>This architecture also makes use of <strong>HTML streaming</strong>, which means the server defers generating HTML for specific components and instead renders a fallback element in their place while it works on sending back the generated HTML. Streaming Server Components wrap components in <a href="https://react.dev/reference/react/Suspense"><code>&lt;Suspense&gt;</code></a> tags that provide a fallback value. The implementing framework uses the fallback initially but streams the newly generated content when it’s ready. We’ll talk more about streaming, but let’s first look at Client Components and compare them to Server Components.</p>

<h3 id="client-components">Client Components</h3>

<p>Client Components are the components we already know and love. They’re executed on the client side. Because of this, Client Components are capable of handling user interactions and have access to the browser APIs like <code>localStorage</code> and geolocation.</p>

<p>The term “Client Component” doesn’t describe anything new; it’s merely a label to help distinguish the “old” CSR components from Server Components. Client Components are defined by a <a href="https://react.dev/reference/react/use-client"><code>&quot;use client&quot;</code></a> directive at the top of their files.</p>

<pre><code class="language-javascript">"use client"
export default function LikeButton() {
  const likePost = () =&gt; {
    // ...
  }
  return (
    &lt;button onClick={likePost}&gt;Like&lt;/button&gt;
  )
}
</code></pre>

<p>In Next.js, all components are Server Components by default. That’s why we need to explicitly define our Client Components with <code>&quot;use client&quot;</code>. There’s also a <code>&quot;use server&quot;</code> directive, but it’s used for Server Actions (which are RPC-like actions that are invoked from the client but executed on the server). You don’t use it to define your Server Components.</p>

<p>You might (rightfully) assume that Client Components are only rendered on the client, but Next.js renders Client Components on the server to generate the initial HTML. As a result, browsers can immediately start rendering them and then perform hydration later.</p>

<h3 id="the-relationship-between-server-components-and-client-components">The Relationship Between Server Components and Client Components</h3>

<p>Client Components can only <em>explicitly</em> import other Client Components. In other words, we’re unable to import a Server Component into a Client Component because of re-rendering issues. But we can have Server Components in a Client Component’s subtree &mdash; only passed through the <code>children</code> prop. Since Client Components live in the browser and they handle user interactions or define their own state, they get to re-render often. When a Client Component re-renders, so will its subtree. But if its subtree contains Server Components, how would they re-render? They don’t live on the client side. That’s why the React team put that limitation in place.</p>

<p>But hold on! We actually <em>can</em> import Server Components into Client Components. It’s just not a direct one-to-one relationship because the Server Component will be converted into a Client Component. If you’re using server APIs that you can’t use in the browser, you’ll get an error; if not &mdash; you’ll have a Server Component whose code gets “leaked” to the browser.</p>

<p>This is an incredibly important nuance to keep in mind as you work with RSCs.</p>

<h2 id="the-rendering-lifecycle">The Rendering Lifecycle</h2>

<p>Here’s the order of operations that Next.js takes to stream content:</p>

<ol>
<li>The app router matches the page’s URL to a Server Component, builds the component tree, and instructs the server-side React to render that Server Component and all of its child components.</li>
<li>During render, React generates an “RSC Payload”. The RSC Payload informs Next.js about the page and what to expect in return, as well as what to fall back to during a <code>&lt;Suspense&gt;</code>.</li>
<li>If React encounters a suspended component, it pauses rendering that subtree and uses the suspended component’s fallback value.</li>
<li>When React loops through the last static component, Next.js prepares the generated HTML and the RSC Payload before streaming it back to the client through one or multiple chunks.</li>
<li>The client-side React then uses the instructions it has for the RSC Payload and client-side components to render the UI. It also hydrates each Client Component as they load.</li>
<li>The server streams in the suspended Server Components as they become available as an RSC Payload. Children of Client Components are also hydrated at this time if the suspended component contains any.</li>
</ol>
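<p>Steps 3 to 6 can be sketched with strings: the shell streams first with the fallback in place, and a later chunk replaces it. The segment marker below is invented for illustration; Next.js uses its own comment-based markers and inline scripts:</p>

```javascript
// Toy model of streaming: the shell ships with the <Suspense> fallback
// in place of the suspended component, and a later chunk swaps the real
// HTML in. The data-segment marker is invented for illustration.
let page =
  '<main><section data-segment="S:0">🌀 loading products...</section></main>';

function applyChunk(segmentId, html) {
  const placeholder = new RegExp(
    `<section data-segment="${segmentId}">[^<]*</section>`
  );
  page = page.replace(placeholder, html);
}

// The suspended Server Component resolves and its chunk arrives.
applyChunk("S:0", "<ul><li>Gloves - $20</li></ul>");
```
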

<p>We will look at the RSC rendering lifecycle from the browser’s perspective momentarily. For now, the following figure illustrates the outlined steps we covered.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/forensics-react-server-components/4-wire-diagram-rsc-rendering-lifecycle.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="489"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/4-wire-diagram-rsc-rendering-lifecycle.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/forensics-react-server-components/4-wire-diagram-rsc-rendering-lifecycle.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/forensics-react-server-components/4-wire-diagram-rsc-rendering-lifecycle.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/forensics-react-server-components/4-wire-diagram-rsc-rendering-lifecycle.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/forensics-react-server-components/4-wire-diagram-rsc-rendering-lifecycle.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/4-wire-diagram-rsc-rendering-lifecycle.jpg"
			
			sizes="100vw"
			alt="Wire diagram of the RSC rendering lifecycle going from a blank page to a page shell to a complete page."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 4: Diagram of the RSC Rendering Lifecycle. (<a href='https://files.smashing.media/articles/forensics-react-server-components/4-wire-diagram-rsc-rendering-lifecycle.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>We’ll see this operation flow from the browser’s perspective in just a bit.</p>

<h2 id="rsc-payload">RSC Payload</h2>

<p>The RSC payload is a special data format that the server generates as it renders the component tree, and it includes the following:</p>

<ul>
<li>The rendered HTML,</li>
<li>Placeholders where the Client Components should be rendered,</li>
<li>References to the Client Components’ JavaScript files,</li>
<li>Instructions on which JavaScript files it should invoke,</li>
<li>Any props passed from a Server Component to a Client Component.</li>
</ul>

<p>There’s no reason to worry much about the RSC payload, but it’s worth understanding what exactly the RSC payload contains. Let’s examine an example (truncated for brevity) from a <a href="https://github.com/nikolovlazar/rsc-forensics">demo app I created</a>:</p>

<div class="break-out">
<pre><code class="language-javascript">1:HL["/&#95;next/static/media/c9a5bc6a7c948fb0-s.p.woff2","font",{"crossOrigin":"","type":"font/woff2"}]
2:HL["/&#95;next/static/css/app/layout.css?v=1711137019097","style"]
0:"$L3"
4:HL["/&#95;next/static/css/app/page.css?v=1711137019097","style"]
5:I["(app-pages-browser)/./node&#95;modules/next/dist/client/components/app-router.js",["app-pages-internals","static/chunks/app-pages-internals.js"],""]
8:"$Sreact.suspense"
a:I["(app-pages-browser)/./node&#95;modules/next/dist/client/components/layout-router.js",["app-pages-internals","static/chunks/app-pages-internals.js"],""]
b:I["(app-pages-browser)/./node&#95;modules/next/dist/client/components/render-from-template-context.js",["app-pages-internals","static/chunks/app-pages-internals.js"],""]
d:I["(app-pages-browser)/./src/app/global-error.jsx",["app/global-error","static/chunks/app/global-error.js"],""]
f:I["(app-pages-browser)/./src/components/clearCart.js",["app/page","static/chunks/app/page.js"],"ClearCart"]
7:["$","main",null,{"className":"page&#95;main&#95;&#95;GlU4n","children":[["$","$Lf",null,{}],["$","$8",null,{"fallback":["$","p",null,{"children":"🌀 loading products..."}],"children":"$L10"}]]}]
c:[["$","meta","0",{"name":"viewport","content":"width=device-width, initial-scale=1"}]...
9:["$","p",null,{"children":["🛍️ ",3]}]
11:I["(app-pages-browser)/./src/components/addToCart.js",["app/page","static/chunks/app/page.js"],"AddToCart"]
10:["$","ul",null,{"children":[["$","li","1",{"children":["Gloves"," - $",20,["$...
</code></pre>
</div>

<p>To find this code in the demo app, open your browser’s developer tools, go to the Elements tab, and look at the <code>&lt;script&gt;</code> tags at the bottom of the page. They’ll contain lines like:</p>

<pre><code class="language-javascript">self.&#95;&#95;next&#95;f.push([1,"PAYLOAD&#95;STRING&#95;HERE"]).
</code></pre>

<p>Every line from the snippet above is an individual RSC payload. Each line starts with a number or a letter, followed by a colon, and then an array that’s sometimes prefixed with letters. We won’t go too deep into what each one means, but in general:</p>

<ul>
<li><strong><code>HL</code> payloads</strong> are called “hints” and link to specific resources like CSS and fonts.</li>
<li><strong><code>I</code> payloads</strong> are called “modules,” and they invoke specific scripts. This is also how Client Components are loaded. If the Client Component is part of the main bundle, it’ll execute. If it’s not (meaning it’s lazy-loaded), a fetcher script is added to the main bundle that fetches the component’s CSS and JavaScript files when it needs to be rendered. The server sends an <code>I</code> payload that invokes the fetcher script when needed.</li>
<li><strong><code>&quot;$&quot;</code> payloads</strong> are DOM definitions generated for a certain Server Component. They are usually accompanied by actual static HTML streamed from the server. That’s what happens when a suspended component becomes ready to be rendered: the server generates its static HTML and RSC Payload and then streams both to the browser.</li>
</ul>
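<p>To make the line format more concrete, here’s a rough sketch of how such a line could be split into its row ID, tag, and JSON data. This is purely illustrative and is not React’s actual deserializer:</p>

```javascript
// Toy parser for the "id:TAG<data>" shape of RSC payload lines.
// This is an illustrative sketch, not React's real implementation.
function parsePayloadLine(line) {
  const colon = line.indexOf(":");
  const id = line.slice(0, colon); // row ID, e.g. "1", "a", "10"
  let rest = line.slice(colon + 1);
  // A leading uppercase tag like "HL" (hint) or "I" (module)
  // precedes the JSON body; plain rows have no tag at all.
  const tagMatch = rest.match(/^[A-Z]+/);
  const tag = tagMatch ? tagMatch[0] : null;
  if (tag) rest = rest.slice(tag.length);
  return { id, tag, data: JSON.parse(rest) };
}

const hint = parsePayloadLine('2:HL["/_next/static/css/app/layout.css","style"]');
console.log(hint.id, hint.tag, hint.data[1]); // 2 HL style
```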

<h2 id="streaming">Streaming</h2>

<p>Streaming allows us to progressively render the UI from the server. With RSCs, each component is capable of fetching its own data. Some components are fully static and ready to be sent immediately to the client, while others require more work before loading. Based on this, Next.js splits that work into multiple chunks and streams them to the browser as they become ready. So, when a user visits a page, the server invokes all Server Components, generates the initial HTML for the page (i.e., the page shell), replaces the “suspended” components’ contents with their fallbacks, and streams all of that through one or multiple chunks back to the client.</p>

<p>The server returns a <code>Transfer-Encoding: chunked</code> header that lets the browser know to expect streaming HTML. This prepares the browser for receiving multiple chunks of the document, rendering them as it receives them. We can actually see the header by opening Developer Tools in the Network tab: trigger a refresh and click on the document request.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/forensics-react-server-components/5-streaming-header.jpeg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="238"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/5-streaming-header.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/forensics-react-server-components/5-streaming-header.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/forensics-react-server-components/5-streaming-header.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/forensics-react-server-components/5-streaming-header.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/forensics-react-server-components/5-streaming-header.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/5-streaming-header.jpeg"
			
			sizes="100vw"
			alt="Response header output highlighting the line containing the chunked transfer encoding"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 5: Providing a hint to the browser to expect HTML streaming. (<a href='https://files.smashing.media/articles/forensics-react-server-components/5-streaming-header.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>We can also debug the way Next.js sends the chunks in a terminal with the <code>curl</code> command:</p>

<pre><code class="language-bash">curl -D - --raw localhost:3000 &gt; chunked-response.txt
</code></pre>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/forensics-react-server-components/6-chunked-response.jpeg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="416"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/6-chunked-response.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/forensics-react-server-components/6-chunked-response.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/forensics-react-server-components/6-chunked-response.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/forensics-react-server-components/6-chunked-response.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/forensics-react-server-components/6-chunked-response.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/6-chunked-response.jpeg"
			
			sizes="100vw"
			alt="Headers and chunked HTML payloads."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 6. (<a href='https://files.smashing.media/articles/forensics-react-server-components/6-chunked-response.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>You probably see the pattern. For each chunk, the server first sends the chunk’s size (as a hexadecimal number) and then the chunk’s contents. Looking at the output, we can see that the server streamed the entire page in 16 different chunks. At the end, the server sends back a zero-sized chunk, indicating the end of the stream.</p>
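<p>The raw output in <code>chunked-response.txt</code> can even be decoded by hand: each size line is hexadecimal, the contents follow with a trailing CRLF, and a zero-sized chunk terminates the stream. A simplified decoder sketch (assuming the whole body is already in memory, and ignoring trailers):</p>

```javascript
// Decode a chunked transfer-encoded body into its individual chunks.
// Simplified sketch: in-memory string input, trailers ignored.
function decodeChunked(raw) {
  const chunks = [];
  let pos = 0;
  while (pos < raw.length) {
    const lineEnd = raw.indexOf("\r\n", pos);
    const size = parseInt(raw.slice(pos, lineEnd), 16); // hex chunk size
    if (size === 0) break; // zero-sized chunk ends the stream
    const start = lineEnd + 2;
    chunks.push(raw.slice(start, start + size));
    pos = start + size + 2; // skip the contents and their trailing CRLF
  }
  return chunks;
}

const body = "f\r\n<!DOCTYPE html>\r\n10\r\n<html><body>ok</\r\n0\r\n\r\n";
console.log(decodeChunked(body));
// → [ '<!DOCTYPE html>', '<html><body>ok</' ]
```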

<p>The first chunk starts with the <code>&lt;!DOCTYPE html&gt;</code> declaration. The second-to-last chunk, meanwhile, contains the closing <code>&lt;/body&gt;</code> and <code>&lt;/html&gt;</code> tags. So, we can see that the server streams the entire document from top to bottom, then pauses to wait for the suspended components, and finally, at the end, closes the body and HTML before it stops streaming.</p>

<p>Even though the server hasn’t completely finished streaming the document, the browser’s fault tolerance features allow it to draw and invoke whatever it has at the moment without waiting for the closing <code>&lt;/body&gt;</code> and <code>&lt;/html&gt;</code> tags.</p>

<h3 id="suspending-components">Suspending Components</h3>

<p>We learned from the render lifecycle that when a page is visited, Next.js matches the RSC component for that page and asks React to render its subtree in HTML. When React stumbles upon a suspended component (i.e., async function component), it grabs its fallback value from the <code>&lt;Suspense&gt;</code> component (or the <code>loading.js</code> file if it’s a Next.js route), renders that instead, then continues loading the other components. Meanwhile, the RSC invokes the async component in the background, which is streamed later as it finishes loading.</p>

<p>At this point, Next.js has returned a full page of static HTML that includes either the components themselves (rendered in static HTML) or their fallback values (if they’re suspended). It takes the static HTML and RSC payload and streams them back to the browser through one or multiple chunks.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/forensics-react-server-components/7-fallbacks-suspended-components.jpeg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/7-fallbacks-suspended-components.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/forensics-react-server-components/7-fallbacks-suspended-components.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/forensics-react-server-components/7-fallbacks-suspended-components.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/forensics-react-server-components/7-fallbacks-suspended-components.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/forensics-react-server-components/7-fallbacks-suspended-components.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/7-fallbacks-suspended-components.jpeg"
			
			sizes="100vw"
			alt="Showing suspended component fallbacks"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 7. (<a href='https://files.smashing.media/articles/forensics-react-server-components/7-fallbacks-suspended-components.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>As the suspended components finish loading, React generates HTML recursively while looking for other nested <code>&lt;Suspense&gt;</code> boundaries, generates their RSC payloads and then lets Next.js stream the HTML and RSC Payload back to the browser as new chunks. When the browser receives the new chunks, it has the HTML and RSC payload it needs and is ready to replace the fallback element from the DOM with the newly-streamed HTML. And so on.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/forensics-react-server-components/8-suspended-components-html.jpeg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="399"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/8-suspended-components-html.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/forensics-react-server-components/8-suspended-components-html.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/forensics-react-server-components/8-suspended-components-html.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/forensics-react-server-components/8-suspended-components-html.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/forensics-react-server-components/8-suspended-components-html.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/8-suspended-components-html.jpeg"
			
			sizes="100vw"
			alt="Static HTML and RSC Payload replacing suspended fallback values."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 8. (<a href='https://files.smashing.media/articles/forensics-react-server-components/8-suspended-components-html.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>In Figures 7 and 8, notice how the fallback elements have unique IDs in the form of <code>B:0</code>, <code>B:1</code>, and so on, while the actual components have matching IDs in the form of <code>S:0</code>, <code>S:1</code>, and so on.</p>

<p>Along with the first chunk that contains a suspended component’s HTML, the server also ships an <code>$RC</code> function (i.e., <code>completeBoundary</code> from <a href="https://github.com/facebook/react/blob/main/packages/react-dom-bindings/src/server/fizz-instruction-set/ReactDOMFizzInstructionSetShared.js#L46">React’s source code</a>) that knows how to find the <code>B:0</code> fallback element in the DOM and replace it with the <code>S:0</code> template it received from the server. That’s the “replacer” function that lets us see the component contents when they arrive in the browser.</p>
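<p>Conceptually, the replacer’s job is a keyed swap: find the fallback by its <code>B:0</code>-style ID and substitute the streamed content registered under the matching <code>S:0</code>-style ID. The hypothetical sketch below models that with plain strings; React’s real <code>completeBoundary</code> operates on live DOM nodes:</p>

```javascript
// Toy model of the "$RC" boundary-completion step: swap the fallback
// markup (keyed by a B:n ID) for the streamed content that arrived
// under the matching S:n ID. This sketch works on plain strings
// purely for illustration.
function completeBoundary(page, boundaryId, streamed) {
  const fallback = new RegExp(`<div id="${boundaryId}">[\\s\\S]*?</div>`);
  // Use a replacer function so "$" in the content isn't treated as a
  // special replacement pattern.
  return page.replace(fallback, () => streamed.get("S:" + boundaryId.slice(2)));
}

// Shell HTML as it arrived in the first chunk, fallback in place:
const page = '<main><div id="B:0">🌀 loading products…</div></main>';
// Content that arrived in a later chunk, keyed by its S:n ID:
const streamed = new Map([["S:0", "<ul><li>Gloves - $20</li></ul>"]]);

console.log(completeBoundary(page, "B:0", streamed));
// → <main><ul><li>Gloves - $20</li></ul></main>
```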

<p>The entire page eventually finishes loading, chunk by chunk.</p>

<h3 id="lazy-loading-components">Lazy-Loading Components</h3>

<p>If a suspended Server Component contains a lazy-loaded Client Component, Next.js will also send an RSC payload chunk containing instructions on how to fetch and load the lazy-loaded component’s code. This represents a <em>significant performance improvement</em> because the page load isn’t dragged out by JavaScript that might not even be needed during that session.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/forensics-react-server-components/9-fetching-lazy-loaded-scripts.jpeg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="442"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/9-fetching-lazy-loaded-scripts.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/forensics-react-server-components/9-fetching-lazy-loaded-scripts.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/forensics-react-server-components/9-fetching-lazy-loaded-scripts.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/forensics-react-server-components/9-fetching-lazy-loaded-scripts.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/forensics-react-server-components/9-fetching-lazy-loaded-scripts.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/9-fetching-lazy-loaded-scripts.jpeg"
			
			sizes="100vw"
			alt="Fetching additional JavaScript and CSS files for a lazy-loaded Client Component, as shown in developer tools."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 9. (<a href='https://files.smashing.media/articles/forensics-react-server-components/9-fetching-lazy-loaded-scripts.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>At the time I’m writing this, the <code>dynamic</code> method to lazy-load a Client Component in a Server Component in Next.js does not work as you might expect. To effectively lazy-load a Client Component, put it in a <a href="https://github.com/nikolovlazar/rsc-forensics/blob/main/src/components/addToCartWrapper.js">“wrapper” Client Component</a> that uses the <code>dynamic</code> method itself to lazy-load the actual Client Component. The wrapper will be turned into a script that fetches and loads the Client Component’s JavaScript and CSS files at the time they’re needed.</p>
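<p>The fetcher script’s core job boils down to a tiny caching pattern: load the component’s code the first time it’s needed, then reuse the result. Here’s a hypothetical sketch; the loader below is a simulated stand-in, whereas in the demo app the wrapper delegates the real work to Next.js’s <code>dynamic()</code>:</p>

```javascript
// Sketch of the lazy-load pattern behind the wrapper component:
// fetch a module only when it's first requested, then serve it from
// cache. The "network fetch" below is simulated with a counter.
function lazy(loader) {
  let pending; // cache the in-flight (or resolved) promise
  return () => (pending ??= loader());
}

let fetches = 0;
const loadAddToCart = lazy(async () => {
  fetches += 1; // pretend this downloaded the component's JS + CSS
  return { render: () => "<button>Add to cart</button>" };
});

(async () => {
  const first = await loadAddToCart();  // triggers the "fetch"
  const second = await loadAddToCart(); // served from cache
  console.log(fetches, first === second); // 1 true
})();
```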

<h3 id="tl-dr">TL;DR</h3>

<p>I know that’s a lot of plates spinning and pieces moving around at various times. What it boils down to, however, is that a page visit triggers Next.js to render as much HTML as it can, using the fallback values for any suspended components, and then sends that to the browser. Meanwhile, Next.js triggers the suspended async components and gets them formatted in HTML and contained in RSC Payloads that are streamed to the browser, one by one, along with an <code>$RC</code> script that knows how to swap things out.</p>

<h2 id="the-page-load-timeline">The Page Load Timeline</h2>

<p>By now, we should have a solid understanding of how RSCs work, how Next.js handles their rendering, and how all the pieces fit together. In this section, we’ll zoom in on what exactly happens when we visit an RSC page in the browser.</p>

<h3 id="the-initial-load">The Initial Load</h3>

<p>As we mentioned in the TL;DR section above, when visiting a page, Next.js renders the initial HTML minus the suspended components and streams it to the browser as part of the first streaming chunks.</p>

<p>To see everything that happens during the page load, we’ll visit the “Performance” tab in Chrome DevTools and click on the “reload” button to reload the page and capture a profile. Here’s what that looks like:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/forensics-react-server-components/10-first-chunks-being-streamed.jpeg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="442"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/10-first-chunks-being-streamed.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/forensics-react-server-components/10-first-chunks-being-streamed.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/forensics-react-server-components/10-first-chunks-being-streamed.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/forensics-react-server-components/10-first-chunks-being-streamed.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/forensics-react-server-components/10-first-chunks-being-streamed.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/10-first-chunks-being-streamed.jpeg"
			
			sizes="100vw"
			alt="Showing the first chunks of HTML streamed at the beginning of the timeline in DevTools."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 10. (<a href='https://files.smashing.media/articles/forensics-react-server-components/10-first-chunks-being-streamed.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>When we zoom in at the very beginning, we can see the first “Parse HTML” span. That’s the server streaming the first chunks of the document to the browser. The browser has just received the initial HTML, which contains the page shell and a few links to resources like fonts, CSS files, and JavaScript. The browser starts to invoke the scripts.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/forensics-react-server-components/11-first-frames.jpeg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="442"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/11-first-frames.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/forensics-react-server-components/11-first-frames.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/forensics-react-server-components/11-first-frames.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/forensics-react-server-components/11-first-frames.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/forensics-react-server-components/11-first-frames.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/11-first-frames.jpeg"
			
			sizes="100vw"
			alt="The first frames appear, and parts of the page are rendered"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 11. (<a href='https://files.smashing.media/articles/forensics-react-server-components/11-first-frames.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>After some time, we start to see the page’s first frames appear, along with the initial JavaScript scripts being loaded and hydration taking place. If you look at the frame closely, you’ll see that the whole page shell is rendered, and “loading” components are shown in place of the suspended Server Components. You might notice that this takes place around 800ms, while the browser started receiving the first HTML at 100ms. During those 700ms, the browser is continuously receiving chunks from the server.</p>

<p>Bear in mind that this is a Next.js demo app running locally in development mode, so it’s going to be slower than when it’s running in production mode.</p>

<h3 id="the-suspended-component">The Suspended Component</h3>

<p>Fast-forward a few seconds, and we see another “Parse HTML” span in the page load timeline. This one indicates that a suspended Server Component has finished loading and is being streamed to the browser.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/forensics-react-server-components/12-suspended-component.jpeg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="442"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/12-suspended-component.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/forensics-react-server-components/12-suspended-component.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/forensics-react-server-components/12-suspended-component.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/forensics-react-server-components/12-suspended-component.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/forensics-react-server-components/12-suspended-component.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/12-suspended-component.jpeg"
			
			sizes="100vw"
			alt="The suspended component’s HTML and RSC Payload are streamed to the browser, as shown in the developer tools Network tab."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 12. (<a href='https://files.smashing.media/articles/forensics-react-server-components/12-suspended-component.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>We can also see that a lazy-loaded Client Component is discovered at the same time, and it contains CSS and JavaScript files that need to be fetched. These files weren’t part of the initial bundle because the component isn’t needed until later on; the code is split into its own files.</p>

<p>This way of code-splitting certainly improves the performance of the initial page load. It also makes sure that the Client Component’s code is shipped only if it’s needed. If the Server Component (which acts as the Client Component’s parent component) throws an error, then the Client Component does not load. It doesn’t make sense to load all of its code before we know whether it will load or not.</p>

<p>Figure 12 shows the <code>DOMContentLoaded</code> event is reported at the end of the page load timeline. And, just before that, we can see that the <code>localhost</code> HTTP request comes to an end. That means the server has likely sent the last zero-sized chunk, indicating to the client that the data is fully transferred and that the streaming communication can be closed.</p>

<h3 id="the-end-result">The End Result</h3>

<p>The main <code>localhost</code> HTTP request took around five seconds, but thanks to streaming, we began seeing page contents load much earlier than that. If this were a traditional SSR setup, we would likely be staring at a blank screen for those five seconds before anything arrived. On the other hand, if this were a traditional CSR setup, we would likely have shipped <em>a lot</em> more JavaScript and put a heavy burden on both the browser and network.</p>

<p>This way, however, the app was fully interactive in those five seconds. We were able to navigate between pages and interact with Client Components that have loaded as part of the initial main bundle. This is a pure win from a user experience standpoint.</p>

<h2 id="conclusion">Conclusion</h2>

<p>RSCs mark a significant evolution in the React ecosystem. They leverage the strengths of server-side and client-side rendering while embracing HTML streaming to speed up content delivery. This approach not only addresses the SEO and loading time issues we experience with CSR but also improves SSR by reducing server load, thus enhancing performance.</p>

<p>I’ve refactored the same RSC app I shared earlier so that it uses the Next.js Pages Router with SSR. The improvements in RSCs are significant:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/forensics-react-server-components/13-ssr-vs-rscs.jpeg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/13-ssr-vs-rscs.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/forensics-react-server-components/13-ssr-vs-rscs.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/forensics-react-server-components/13-ssr-vs-rscs.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/forensics-react-server-components/13-ssr-vs-rscs.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/forensics-react-server-components/13-ssr-vs-rscs.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/forensics-react-server-components/13-ssr-vs-rscs.jpeg"
			
			sizes="100vw"
			alt="Comparing Next.js Page Router and App Router, side-by-side."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 13. (<a href='https://files.smashing.media/articles/forensics-react-server-components/13-ssr-vs-rscs.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Looking at these two reports I pulled from Sentry, we can see that streaming allows the page to start loading its resources before the actual request finishes. This significantly improves the Web Vitals metrics, which we see when comparing the two reports.</p>

<p>The conclusion: <strong>Users enjoy faster, more reactive interfaces with an architecture that relies on RSCs.</strong></p>

<p>The RSC architecture introduces two new component types: Server Components and Client Components. This division helps React and the frameworks that rely on it &mdash; like Next.js &mdash; streamline content delivery while maintaining interactivity.</p>

<p>However, this setup also introduces new challenges in areas like state management, authentication, and component architecture. Exploring those challenges is a great topic for another blog post!</p>

<p>Despite these challenges, the benefits of RSCs present a compelling case for their adoption. We definitely will see guides published on how to address RSC’s challenges as they mature, but, in my opinion, they already look like the future of rendering practices in modern web development.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(gg, yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Matt Zeunert</author><title>How To Monitor And Optimize Google Core Web Vitals</title><link>https://www.smashingmagazine.com/2024/04/monitor-optimize-google-core-web-vitals/</link><pubDate>Tue, 16 Apr 2024 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2024/04/monitor-optimize-google-core-web-vitals/</guid><description>The three Core Web Vitals metrics don’t only tell you how visitors experience your website but also impact your Google search result rankings. In this article, we’ll look at what Core Web Vitals are, how they are measured, and how you can use DebugBear to monitor them continuously.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2024/04/monitor-optimize-google-core-web-vitals/" />
              <title>How To Monitor And Optimize Google Core Web Vitals</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>How To Monitor And Optimize Google Core Web Vitals</h1>
                  
                    
                    <address>Matt Zeunert</address>
                  
                  <time datetime="2024-04-16T10:00:00&#43;00:00" class="op-published">2024-04-16T10:00:00+00:00</time>
                  <time datetime="2024-04-16T10:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                <p>This article is sponsored by <b>DebugBear</b></p>
                

<p>Google’s <a href="https://web.dev/articles/vitals">Core Web Vitals initiative</a> has increased the attention website owners need to pay to user experience. You can now more easily see when users have poor experiences on your website, and <a href="https://www.debugbear.com/docs/core-web-vitals-ranking-factor?utm_campaign=sm-5">poor UX also has a bigger impact on SEO</a>.</p>

<p>That means you need to test your website to identify optimizations. Beyond that, <strong>monitoring ensures that you can stay ahead of your Core Web Vitals scores</strong> for the long term.</p>

<p>Let’s find out how to work with different types of Core Web Vitals data and how monitoring can help you gain a deeper insight into user experiences and help you optimize them.</p>

<h2 id="what-are-core-web-vitals">What Are Core Web Vitals?</h2>

<p>There are three web vitals metrics Google uses to measure different aspects of website performance:</p>

<ul>
<li>Largest Contentful Paint (LCP),</li>
<li>Cumulative Layout Shift (CLS),</li>
<li>Interaction to Next Paint (INP).</li>
</ul>
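<p>These three metrics are rated against thresholds that Google publishes: 2.5 seconds for LCP, 0.1 for CLS, and 200 milliseconds for INP, with a second boundary separating “needs improvement” from “poor.” A small helper (hypothetical, not part of any tool mentioned here) makes the rating logic concrete:</p>

```javascript
// Classify a metric value against Google's published rating thresholds.
// "good"/"poor" boundaries follow the values documented on web.dev.
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 }, // milliseconds
  CLS: { good: 0.1,  poor: 0.25 }, // unitless score
  INP: { good: 200,  poor: 500 },  // milliseconds
};

function rateWebVital(metric, value) {
  const t = THRESHOLDS[metric];
  if (!t) throw new Error(`Unknown metric: ${metric}`);
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs-improvement';
  return 'poor';
}
```

<p>For example, an LCP of 2,000&nbsp;ms rates as “good,” while an INP of 350&nbsp;ms falls into “needs improvement.”</p>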














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/1-core-web-vitals.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="255"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/1-core-web-vitals.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/1-core-web-vitals.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/1-core-web-vitals.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/1-core-web-vitals.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/1-core-web-vitals.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/1-core-web-vitals.png"
			
			sizes="100vw"
			alt="Three web vitals metrics that measure different aspects of website performance"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/1-core-web-vitals.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="largest-contentful-paint-lcp">Largest Contentful Paint (LCP)</h3>

<p>The <a href="https://www.debugbear.com/docs/metrics/largest-contentful-paint?utm_campaign=sm-5">Largest Contentful Paint</a> metric is the closest thing to a traditional load time measurement. However, LCP doesn’t track a purely technical page load milestone like the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Window/load_event">JavaScript Load Event</a>. Instead, it focuses on what the user can see by measuring <strong>how soon after a page is opened its largest content element appears</strong>.</p>

<p>The faster the LCP happens, the better, and Google considers an LCP below 2.5 seconds a passing score.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/2-largest-contentful-paint.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="260"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/2-largest-contentful-paint.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/2-largest-contentful-paint.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/2-largest-contentful-paint.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/2-largest-contentful-paint.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/2-largest-contentful-paint.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/2-largest-contentful-paint.png"
			
			sizes="100vw"
			alt="Largest Contentful Paint"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/2-largest-contentful-paint.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="cumulative-layout-shift-cls">Cumulative Layout Shift (CLS)</h3>

<p><a href="https://www.debugbear.com/docs/metrics/cumulative-layout-shift?utm_campaign=sm-5">Cumulative Layout Shift</a> is a bit of an odd metric, as it doesn’t measure how fast something happens. Instead, it looks at <strong>how stable the page layout is once the page starts loading</strong>. Layout shifts mean that content moves around, disorienting the user and potentially causing accidental clicks on the wrong UI element.</p>

<p>The CLS score is calculated from how far each unstable element moves and how large the affected area is. Aim for a score below 0.1 to get a good rating from Google.</p>
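<p>The underlying formula, as documented on web.dev, multiplies an <em>impact fraction</em> (how much of the viewport the shifted content affects) by a <em>distance fraction</em> (how far it moved relative to the viewport’s largest dimension). Here is a simplified sketch of that calculation for a single layout shift:</p>

```javascript
// Simplified sketch of a single layout shift's score:
// score = impact fraction × distance fraction.
// impactRegionArea is the union of the element's area before and
// after the shift, in pixels; moveDistance is in pixels.
function layoutShiftScore(viewport, impactRegionArea, moveDistance) {
  const viewportArea = viewport.width * viewport.height;
  const impactFraction = impactRegionArea / viewportArea;
  const distanceFraction =
    moveDistance / Math.max(viewport.width, viewport.height);
  return impactFraction * distanceFraction;
}
```

<p>For instance, an element whose before-and-after positions cover 75% of a 400&times;640 viewport and that moves 160 pixels down produces a score of 0.75 &times; 0.25 = 0.1875 &mdash; well over the 0.1 threshold from that single shift alone.</p>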














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/3-cumulative-layout-shift.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="313"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/3-cumulative-layout-shift.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/3-cumulative-layout-shift.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/3-cumulative-layout-shift.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/3-cumulative-layout-shift.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/3-cumulative-layout-shift.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/3-cumulative-layout-shift.png"
			
			sizes="100vw"
			alt="Cumulative Layout Shift"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/3-cumulative-layout-shift.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="interaction-to-next-paint-inp">Interaction to Next Paint (INP)</h3>

<p>Even websites that load quickly often frustrate users when interactions with the page feel sluggish. That’s why <a href="https://www.debugbear.com/docs/metrics/interaction-to-next-paint?utm_campaign=sm-5">Interaction to Next Paint</a> measures <strong>how long the page remains visually frozen after a user interaction before the next paint</strong>.</p>

<p>Page interactions should feel practically instant, so Google recommends an INP score below 200 milliseconds.</p>
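<p>The page-level INP value is an aggregate over all interactions during a visit: roughly the worst interaction latency, with one outlier discarded for every 50 interactions (an approximation of the 98th percentile). A rough sketch of that aggregation, assuming you already have a list of interaction durations in milliseconds:</p>

```javascript
// Approximate INP aggregation: take the worst interaction latency,
// skipping one outlier for every 50 recorded interactions.
function estimateINP(durations) {
  if (durations.length === 0) return 0;
  const sorted = [...durations].sort((a, b) => b - a); // worst first
  const skip = Math.min(sorted.length - 1, Math.floor(durations.length / 50));
  return sorted[skip];
}
```

<p>With only a handful of interactions, the single worst one is reported; on a long session with 60 interactions, one extreme outlier would be ignored.</p>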














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/4-interaction-to-next-paint.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="295"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/4-interaction-to-next-paint.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/4-interaction-to-next-paint.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/4-interaction-to-next-paint.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/4-interaction-to-next-paint.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/4-interaction-to-next-paint.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/4-interaction-to-next-paint.png"
			
			sizes="100vw"
			alt="Interaction to Next Paint"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/4-interaction-to-next-paint.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="what-are-the-different-types-of-core-web-vitals-data">What Are The Different Types Of Core Web Vitals Data?</h2>

<p>You’ll often see different page speed metrics reported by different tools and data sources, so it’s important to understand the differences. We’ve <a href="https://www.smashingmagazine.com/2023/10/answering-questions-interpreting-page-speed-reports/">published a whole article just about that</a>, but here’s the high-level breakdown along with the pros and cons of each one:</p>

<ul>
<li><strong>Synthetic Tests</strong><br />
These tests are run on-demand in a controlled lab environment in a fixed location with a fixed network and device speed. They can produce very detailed reports and recommendations.</li>
<li><strong>Real User Monitoring (RUM)</strong><br />
This data tells you how fast your website is for your actual visitors. That means you need to install an analytics script to collect it, and the reporting that’s available is less detailed than for lab tests.</li>
<li><strong>CrUX Data</strong><br />
This is field data that Google collects from Chrome users as part of the <a href="https://www.debugbear.com/blog/chrome-user-experience-report?utm_campaign=sm-5">Chrome User Experience Report</a> (CrUX) and uses as a ranking signal. It’s available for every website with enough traffic, but since it covers a 28-day rolling window, it takes a while for changes on your website to be reflected here. It also doesn’t include any debug data to help you optimize your metrics.</li>
</ul>

<h2 id="start-by-running-a-one-off-page-speed-test">Start By Running A One-Off Page Speed Test</h2>

<p>Before signing up for a monitoring service, it’s best to run a one-off lab test with a free tool like <a href="https://pagespeed.web.dev/">Google’s PageSpeed Insights</a> or the <a href="https://www.debugbear.com/test/website-speed?utm_campaign=sm-5">DebugBear Website Speed Test</a>. Both of these tools report Google CrUX data that reflects whether real users are facing issues on your website.</p>

<p><strong>Note</strong>: <em>The lab data you get from some Lighthouse-based tools &mdash; like PageSpeed Insights &mdash; <a href="https://www.debugbear.com/blog/is-pagespeed-insights-reliable?utm_campaign=sm-5">can be unreliable</a>.</em></p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/5-one-off-page-speed-test.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="452"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/5-one-off-page-speed-test.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/5-one-off-page-speed-test.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/5-one-off-page-speed-test.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/5-one-off-page-speed-test.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/5-one-off-page-speed-test.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/5-one-off-page-speed-test.png"
			
			sizes="100vw"
			alt="One-Off Page Speed Test with DebugBear"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/5-one-off-page-speed-test.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>INP is best measured for real users, where you can see the elements that users interact with most often and where the problems lie. But a free tool like the <a href="https://www.debugbear.com/inp-debugger?utm_campaign=sm-5">INP Debugger</a> can be a good starting point if you don’t have RUM set up yet.</p>

<h2 id="how-to-monitor-core-web-vitals-continuously-with-scheduled-lab-based-testing">How To Monitor Core Web Vitals Continuously With Scheduled Lab-Based Testing</h2>

<p>Running tests continuously has a few advantages over ad-hoc tests. Most importantly, <strong>continuous testing triggers alerts whenever a new issue appears on your website</strong>, allowing you to start fixing them right away. You’ll also have <strong>access to historical data</strong>, allowing you to see exactly when a regression occurred and letting you compare test results before and after to see what changed.</p>

<p>Scheduled lab tests are easy to set up using a website monitoring tool like <a href="https://www.debugbear.com/?utm_campaign=sm-5">DebugBear</a>. Enter a list of website URLs and pick a device type, test location, and test frequency to get things running:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/6-scheduled-lab-based-testing.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="420"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/6-scheduled-lab-based-testing.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/6-scheduled-lab-based-testing.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/6-scheduled-lab-based-testing.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/6-scheduled-lab-based-testing.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/6-scheduled-lab-based-testing.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/6-scheduled-lab-based-testing.png"
			
			sizes="100vw"
			alt="A screenshot of how to schedule lab-based testing with DebugBear"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/6-scheduled-lab-based-testing.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>As this process runs, it feeds data into a detailed dashboard with historical Core Web Vitals data. You can monitor a number of pages on your website or track the speed of your competition to make sure you stay ahead.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/7-dashboard-historical-core-web-vitals-data.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="436"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/7-dashboard-historical-core-web-vitals-data.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/7-dashboard-historical-core-web-vitals-data.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/7-dashboard-historical-core-web-vitals-data.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/7-dashboard-historical-core-web-vitals-data.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/7-dashboard-historical-core-web-vitals-data.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/7-dashboard-historical-core-web-vitals-data.png"
			
			sizes="100vw"
			alt="An example of detailed dashboard with historical Core Web Vitals data"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/7-dashboard-historical-core-web-vitals-data.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>When a regression occurs, you can dive deep into the results using DebugBear’s <a href="https://www.debugbear.com/docs/compare?utm_campaign=sm-5">Compare mode</a>. This mode shows before-and-after test results side by side, giving you the context to identify causes &mdash; you see exactly what changed. For example, in the following case, we can see that <a href="https://en.wikipedia.org/wiki/HTTP_compression">HTTP compression</a> stopped working for a file, leading to an increase in page weight and longer download times.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/8-debubears-compare-mode.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="565"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/8-debubears-compare-mode.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/8-debubears-compare-mode.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/8-debubears-compare-mode.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/8-debubears-compare-mode.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/8-debubears-compare-mode.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/8-debubears-compare-mode.png"
			
			sizes="100vw"
			alt="A screenshot of DebugBear’s Compare mode"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/8-debubears-compare-mode.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="how-to-monitor-real-user-core-web-vitals">How To Monitor Real User Core Web Vitals</h2>

<p>Synthetic tests are great for super-detailed reporting of your page load time. However, other aspects of user experience, like layout shifts and slow interactions, heavily depend on how real users use your website. So, it’s worth <a href="https://www.debugbear.com/real-user-monitoring?utm_campaign=sm-5">setting up real user monitoring with a tool like DebugBear</a>.</p>

<p>To monitor real user web vitals, you’ll need to install an analytics snippet that collects this data on your website. Once that’s done, you’ll be able to see data for all three Core Web Vitals metrics across your entire website.</p>
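<p>The actual snippet and endpoint are provided by DebugBear; conceptually, a RUM script collects the three metrics in the browser and sends them home in a small payload. The payload builder below is purely illustrative &mdash; the field names and the <code>/rum</code> endpoint are hypothetical, not DebugBear’s API:</p>

```javascript
// Hypothetical sketch of the kind of payload a RUM snippet beacons
// back to its collector. Field names are illustrative only.
function buildRumPayload(page, metrics) {
  return JSON.stringify({
    page,                      // e.g., location.pathname
    collectedAt: metrics.timestamp,
    vitals: {
      lcp: metrics.lcp,        // milliseconds
      cls: metrics.cls,        // unitless score
      inp: metrics.inp,        // milliseconds
    },
  });
}

// In the browser, this would typically be sent on page hide with
// navigator.sendBeacon('/rum', buildRumPayload(location.pathname, m));
```
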














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/9-real-user-monitoring-debugbear.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="493"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/9-real-user-monitoring-debugbear.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/9-real-user-monitoring-debugbear.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/9-real-user-monitoring-debugbear.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/9-real-user-monitoring-debugbear.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/9-real-user-monitoring-debugbear.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/9-real-user-monitoring-debugbear.png"
			
			sizes="100vw"
			alt="An analytics snippet to monitor real user web vitals with DebugBear"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/9-real-user-monitoring-debugbear.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>To optimize your scores, you can go into the dashboard for each individual metric, select a specific page you’re interested in, and then dive deeper into the data.</p>

<p>For example, you can see whether a slow LCP score is caused by a slow server response, <a href="https://www.debugbear.com/blog/render-blocking-resources?utm_campaign=sm-5">render blocking resources</a>, or by the LCP content element itself.</p>

<p>You’ll also find that the LCP element varies between visitors. Lab test results are always the same, as they rely on a single fixed screen size. However, in the real world, visitors use a wide range of devices and will see different content when they open your website.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/10-dashboard-lcp-debugbear.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="524"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/10-dashboard-lcp-debugbear.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/10-dashboard-lcp-debugbear.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/10-dashboard-lcp-debugbear.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/10-dashboard-lcp-debugbear.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/10-dashboard-lcp-debugbear.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/10-dashboard-lcp-debugbear.png"
			
			sizes="100vw"
			alt="An example of a dashboard for the LCP metric with data reflecting the LCP score"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/10-dashboard-lcp-debugbear.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>INP is tricky to debug without real user data. Yet an analytics tool like DebugBear can tell you exactly what page elements users are interacting with most often and which of these interactions are slow to respond.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/11-inp-elements-debugbear.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="624"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/11-inp-elements-debugbear.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/11-inp-elements-debugbear.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/11-inp-elements-debugbear.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/11-inp-elements-debugbear.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/11-inp-elements-debugbear.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/11-inp-elements-debugbear.png"
			
			sizes="100vw"
			alt="INP elements"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/11-inp-elements-debugbear.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Thanks to the new <a href="https://www.debugbear.com/blog/long-animation-frames">Long Animation Frames API</a>, we can also see specific scripts that contribute to slow interactions. We can then decide to optimize these scripts, remove them from the page, or run them in a way that does not block interactions for as long.</p>
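<p>In the browser, these entries are exposed through a <code>PerformanceObserver</code> for the <code>long-animation-frame</code> entry type, where each entry carries a <code>scripts</code> array attributing time to individual scripts. The sketch below aggregates that attribution to rank the slowest scripts; the entry shape follows the Long Animation Frames API, but the function works equally on mocked entry-like objects:</p>

```javascript
// Aggregate time spent per script across Long Animation Frame entries
// and return the worst offenders. Each entry is expected to carry a
// scripts[] array with sourceURL/invoker and duration fields, matching
// the shape of 'long-animation-frame' performance entries.
function slowestScripts(loafEntries, topN = 3) {
  const totals = new Map();
  for (const entry of loafEntries) {
    for (const script of entry.scripts ?? []) {
      const key = script.sourceURL || script.invoker || '(inline)';
      totals.set(key, (totals.get(key) ?? 0) + script.duration);
    }
  }
  return [...totals.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN);
}
```

<p>Feeding this the entries from a real observer (or from RUM data) tells you which script URLs to target first when optimizing INP.</p>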














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/12-inp-primary-scripts.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="324"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/12-inp-primary-scripts.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/12-inp-primary-scripts.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/12-inp-primary-scripts.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/12-inp-primary-scripts.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/12-inp-primary-scripts.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/12-inp-primary-scripts.png"
			
			sizes="100vw"
			alt="Long Animation Frames API with a list of INP primary scripts that slow interactions"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/monitor-optimize-google-core-web-vitals/12-inp-primary-scripts.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="conclusion">Conclusion</h2>

<p>Continuously monitoring Core Web Vitals lets you see how website changes impact user experience and ensures you get alerted when something goes wrong. While it’s possible to measure Core Web Vitals using a wide range of tools, those tools are limited by the type of data they use to evaluate performance, not to mention they only provide a single snapshot of performance at a specific point in time.</p>

<p>A tool like DebugBear gives you access to several different types of data that you can use to troubleshoot performance and optimize your website, complete with RUM capabilities that offer a historical record of performance for identifying issues where and when they occur. <a href="https://www.debugbear.com/signup?utm_campaign=sm-5">Sign up for a free DebugBear trial here</a>.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(gg, yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Geoff Graham</author><title>Reporting Core Web Vitals With The Performance API</title><link>https://www.smashingmagazine.com/2024/02/reporting-core-web-vitals-performance-api/</link><pubDate>Tue, 27 Feb 2024 12:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2024/02/reporting-core-web-vitals-performance-api/</guid><description>The Performance API is a set of standards for measuring and evaluating performance metrics with JavaScript. Think of it as a box containing all of the same functionality for reporting on Core Web Vitals and general performance statistics that you’d get in many performance testing tools. This article demonstrates how to use the Performance API to generate performance metrics directly in the DOM to create your own reporting.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2024/02/reporting-core-web-vitals-performance-api/" />
              <title>Reporting Core Web Vitals With The Performance API</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Reporting Core Web Vitals With The Performance API</h1>
                  
                    
                    <address>Geoff Graham</address>
                  
                  <time datetime="2024-02-27T12:00:00&#43;00:00" class="op-published">2024-02-27T12:00:00+00:00</time>
                  <time datetime="2024-02-27T12:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                <p>This article is sponsored by <b>DebugBear</b></p>
                

<p>There’s quite a buzz in the performance community with the Interaction to Next Paint (INP) metric becoming an official <a href="https://www.debugbear.com/docs/metrics/core-web-vitals?utm_campaign=sm-4">Core Web Vitals</a> (CWV) metric in a few short weeks. If you haven’t heard, INP is replacing the First Input Delay (FID) metric, something <a href="https://www.smashingmagazine.com/2023/12/preparing-interaction-next-paint-web-core-vital/">you can read all about here on Smashing Magazine</a> as a guide to prepare for the change.</p>

<p>But that’s not what I really want to talk about. With performance at the forefront of my mind, I decided to head over to MDN for a fresh look at the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Performance_API">Performance API</a>. We can use it to report the load time of elements on the page, even going so far as to report on Core Web Vitals metrics in real time. Let’s look at a few ways we can use the API to report some CWV metrics.</p>

<h2 id="browser-support-warning">Browser Support Warning</h2>

<p>Before we get started, a quick word about browser support. The Performance API is huge in that it contains a lot of different interfaces, properties, and methods. While the majority of it is supported by all major browsers, Chromium-based browsers are the only ones that support all of the CWV properties. The closest runner-up is Firefox, which supports the First Contentful Paint (FCP) and Largest Contentful Paint (LCP) API properties.</p>

<p>So, we’re looking at a feature of features, as it were, where some are well-established, and others are still in the experimental phase. But as far as Core Web Vitals go, we’re going to want to work in Chrome for the most part as we go along.</p>
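<p>Since support varies, it’s worth feature-detecting before wiring anything up. The static <code>PerformanceObserver.supportedEntryTypes</code> array lists every entry type the current browser can report, so a small helper (the function name here is my own, purely illustrative) can filter the CWV-related types down to what’s actually available:</p>

```javascript
// Which of the CWV-related entry types can this browser report?
// The "supported" parameter is injectable so the helper is easy to test.
function supportedCwvTypes(
  supported = (typeof PerformanceObserver !== "undefined" &&
    PerformanceObserver.supportedEntryTypes) || []
) {
  const wanted = ["largest-contentful-paint", "paint", "layout-shift", "event"];
  return wanted.filter((type) => supported.includes(type));
}

// In a Chromium browser, expect all four types; in Firefox, a shorter list.
console.log(supportedCwvTypes());
```

<p>That way, the observers in the rest of this article can be registered only when the browser will actually deliver entries for them.</p>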

<h2 id="first-we-need-data-access">First, We Need Data Access</h2>

<p>There are two main ways to retrieve the performance metrics we care about:</p>

<ol>
<li>Using the <code>performance.getEntries()</code> method, or</li>
<li>Using a <code>PerformanceObserver</code> instance.</li>
</ol>

<p>Using a <code>PerformanceObserver</code> instance offers a few important advantages:</p>

<ul>
<li><strong><code>PerformanceObserver</code> observes performance metrics and dispatches them over time.</strong> By contrast, <code>performance.getEntries()</code> always returns the entire list of entries recorded since measurement began.</li>
<li><strong><code>PerformanceObserver</code> dispatches the metrics asynchronously,</strong> which means they don’t have to block what the browser is doing.</li>
<li><strong>The <code>element</code> performance metric type doesn’t work</strong> with the <code>performance.getEntries()</code> method anyway.</li>
</ul>

<p>That all said, let’s create a <code>PerformanceObserver</code>:</p>

<pre><code class="language-javascript">const lcpObserver = new PerformanceObserver(list =&gt; {});
</code></pre>

<p>For now, we’re passing an empty callback function to the <code>PerformanceObserver</code> constructor. Later on, we’ll change it so that it actually does something with the observed performance metrics. For now, let’s start observing:</p>

<div class="break-out">
<pre><code class="language-javascript">lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });
</code></pre>
</div>

<p>The first very important thing in that snippet is the <code>buffered: true</code> property. Setting this to <code>true</code> means that we not only observe performance metrics dispatched <em>after</em> we start observing, but we also receive the performance metrics that the browser queued <em>before</em> we started observing.</p>

<p>The second very important thing to note is that we’re working with the <code>largest-contentful-paint</code> property. That’s what’s cool about the Performance API: it can be used to measure very specific things but also supports properties that are mapped directly to CWV metrics. We’ll start with the LCP metric before looking at other CWV metrics.</p>

<h2 id="reporting-the-largest-contentful-paint">Reporting The Largest Contentful Paint</h2>

<p>The <code>largest-contentful-paint</code> property looks at everything on the page, identifying the biggest piece of content on the initial view and how long it takes to load. In other words, we’re observing the full page load and getting stats on the largest piece of content rendered in view.</p>

<p>We already have our Performance Observer and callback:</p>

<div class="break-out">
<pre><code class="language-javascript">const lcpObserver = new PerformanceObserver(list =&gt; {});
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });
</code></pre>
</div>

<p>Let’s fill in that empty callback so that it returns a list of entries once performance measurement starts:</p>

<div class="break-out">
<pre><code class="language-javascript">// The Performance Observer
const lcpObserver = new PerformanceObserver(list =&gt; {</code>
  <code style="font-weight: bold;">// Returns the entire list of entries</code>
  <code style="font-weight: bold;">const entries = list.getEntries();</code>
<code class="language-javascript">});

// Call the Observer
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });
</code></pre>
</div>

<p>Next, we want to know which element is pegged as the LCP. It’s worth noting that the element representing the LCP is always the <em>last</em> element in the <a href="https://w3c.github.io/largest-contentful-paint/#sec-report-largest-contentful-paint">ordered list of entries</a>. So, we can look at the list of returned entries and return the last one:</p>

<div class="break-out">
<pre><code class="language-javascript">// The Performance Observer
const lcpObserver = new PerformanceObserver(list =&gt; {
  // Returns the entire list of entries
  const entries = list.getEntries();</code>
  <code style="font-weight: bold;">// The element representing the LCP</code>
  <code style="font-weight: bold;">const el = entries[entries.length - 1];</code>
<code class="language-javascript">});

// Call the Observer
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });
</code></pre>
</div>

<p>The last thing is to display the results! We could create some sort of dashboard UI that consumes all the data and renders it in an aesthetically pleasing way. But rather than switch gears, let’s simply log the results to the console.</p>

<div class="break-out">
<pre><code class="language-javascript">// The Performance Observer
const lcpObserver = new PerformanceObserver(list =&gt; {
  // Returns the entire list of entries
  const entries = list.getEntries();
  // The element representing the LCP
  const el = entries[entries.length - 1];</code>
  
  <code style="font-weight: bold;">// Log the results in the console</code>
  <code style="font-weight: bold;">console.log(el.element);</code>
<code class="language-javascript">});

// Call the Observer
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });
</code></pre>
</div>

<p>There we go!</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/1-smashingmagazine-devtools-console.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="505"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/1-smashingmagazine-devtools-console.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/1-smashingmagazine-devtools-console.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/1-smashingmagazine-devtools-console.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/1-smashingmagazine-devtools-console.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/1-smashingmagazine-devtools-console.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/1-smashingmagazine-devtools-console.png"
			
			sizes="100vw"
			alt="Open Chrome window showing the LCP results in the DevTools console while highlighting the result on the Smashing Magazine homepage."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      LCP support is limited to Chrome and Firefox at the time of writing. (<a href='https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/1-smashingmagazine-devtools-console.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>It’s certainly nice knowing which element is the largest. But I’d like to know more about it, say, how long it took for the LCP to render:</p>

<div class="break-out">
<pre><code class="language-javascript">// The Performance Observer
const lcpObserver = new PerformanceObserver(list =&gt; {

  const entries = list.getEntries();
  const lcp = entries[entries.length - 1];

  // Log the results in the console
  console.log(
    `The LCP is:`,
    lcp.element,
    `The time to render was ${lcp.startTime} milliseconds.`,
  );
});

// Call the Observer
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });

// The LCP is:
// &lt;h2 class="author-post&#95;&#95;title mt-5 text-5xl"&gt;…&lt;/h2&gt;
// The time to render was 832.6999999880791 milliseconds.
</code></pre>
</div>

<h2 id="reporting-first-contentful-paint">Reporting First Contentful Paint</h2>

<p>This is all about the time it takes for the very first piece of DOM to get painted on the screen. Faster is better, of course, but the way Lighthouse reports it, a “passing” score comes in between 0 and 1.8 seconds.</p>
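<p>Those bands are easy to encode if you want your reporting to label values the way Lighthouse does. Here’s a tiny illustrative helper (my own, not part of the Performance API) using the commonly published FCP thresholds, where anything over three seconds is considered poor:</p>

```javascript
// Classify an FCP value (in milliseconds) against the commonly cited bands:
// up to 1.8s is "good", up to 3s "needs improvement", and anything above is "poor".
function rateFcp(ms) {
  if (ms <= 1800) return "good";
  if (ms <= 3000) return "needs improvement";
  return "poor";
}
```

<p>With that in place, a logged value like <code>509.3</code> milliseconds can be reported as a rating instead of a raw number.</p>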














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://www.debugbear.com/docs/web-performance-metrics#first-contentful-paint-fcp?utm_campaign=sm-4">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="411"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/2-timeline-mobile-screen-frames.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/2-timeline-mobile-screen-frames.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/2-timeline-mobile-screen-frames.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/2-timeline-mobile-screen-frames.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/2-timeline-mobile-screen-frames.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/2-timeline-mobile-screen-frames.png"
			
			sizes="100vw"
			alt="Showing a timeline of mobile screen frames measured in seconds and how much is painted to the screen at various intervals."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Image source: <a href='https://www.debugbear.com/docs/web-performance-metrics#first-contentful-paint-fcp?utm_campaign=sm-4'>DebugBear</a>. (<a href='https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/2-timeline-mobile-screen-frames.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Just like we set the <code>type</code> property to <code>largest-contentful-paint</code> to fetch performance data in the last section, we’re going to set a different type this time around: <code>paint</code>.</p>

<p>When we call <code>paint</code>, we tap into the <code>PerformancePaintTiming</code> interface that opens up reporting on <strong>first paint</strong> and <strong>first contentful paint</strong>.</p>

<div class="break-out">
<pre><code class="language-javascript">// The Performance Observer
const paintObserver = new PerformanceObserver(list =&gt; {
  const entries = list.getEntries();
  entries.forEach(entry =&gt; {    
    // Log the results in the console.
    console.log(
      `The time to ${entry.name} took ${entry.startTime} milliseconds.`,
    );
  });
});

// Call the Observer.
paintObserver.observe({ type: "paint", buffered: true });

// The time to first-paint took 509.29999999981374 milliseconds.
// The time to first-contentful-paint took 509.29999999981374 milliseconds.
</code></pre>
</div>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/3-devtools-smashingmagazine.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="505"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/3-devtools-smashingmagazine.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/3-devtools-smashingmagazine.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/3-devtools-smashingmagazine.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/3-devtools-smashingmagazine.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/3-devtools-smashingmagazine.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/3-devtools-smashingmagazine.png"
			
			sizes="100vw"
			alt="DevTools open on the Smashing Magazine website displaying the paint results in the console."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/3-devtools-smashingmagazine.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Notice how <code>paint</code> spits out two results: one for the <code>first-paint</code> and the other for the <code>first-contentful-paint</code>. I know that a lot happens between the time a user navigates to a page and stuff starts painting, but I didn’t know there was a difference between these two metrics.</p>

<p>Here’s how <a href="https://w3c.github.io/paint-timing/#first-paint-and-first-contentful-paint">the spec</a> explains it:</p>

<blockquote>“The primary difference between the two metrics is that [First Paint] marks the first time the browser renders anything for a given document. By contrast, [First Contentful Paint] marks the time when the browser renders the first bit of image or text content from the DOM.”</blockquote>

<p>As it turns out, the first paint and FCP data I got back in that last example are identical. Since first paint can be <a href="https://www.debugbear.com/docs/web-performance-metrics?utm_campaign=sm-4#first-paint-fp"><em>anything</em> that prevents a blank screen</a>, e.g., a background color, I think that the identical results mean that whatever content is first painted to the screen just so happens to also be the first contentful paint.</p>

<p>But there’s apparently a lot more nuance to it, as Chrome measures FCP differently based on what version of the browser is in use. <a href="https://chromium.googlesource.com/chromium/src/+/refs/heads/main/docs/speed/metrics_changelog/fcp.md">Google keeps a full record of the changelog</a> for reference, so that’s something to keep in mind when evaluating results, especially if you find yourself with different results from others on your team.</p>
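<p>You can sanity-check that gap yourself without an observer at all, since <code>performance.getEntriesByType("paint")</code> returns both paint entries once they exist. As a sketch, a small helper (mine, purely illustrative) computes the difference:</p>

```javascript
// Compute the gap between first-paint and first-contentful-paint.
// Takes the entries as an argument so it can also run against mocked data.
function paintDelta(entries) {
  const byName = Object.fromEntries(
    entries.map((entry) => [entry.name, entry.startTime])
  );
  if (!("first-paint" in byName) || !("first-contentful-paint" in byName)) {
    return null; // one or both paints haven't been reported yet
  }
  return byName["first-contentful-paint"] - byName["first-paint"];
}

// In the browser:
// const delta = paintDelta(performance.getEntriesByType("paint"));
// A delta of 0 means the first thing painted was already contentful.
```
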

<h2 id="reporting-cumulative-layout-shift">Reporting Cumulative Layout Shift</h2>

<p>How much does the page shift around as elements are painted to it? Of course, we can get that from the Performance API! Instead of <code>largest-contentful-paint</code> or <code>paint</code>, now we’re turning to the <code>layout-shift</code> type.</p>

<p>This is where browser support is dicier than other performance metrics. The <code>LayoutShift</code> interface is still in “experimental” status at this time, with <a href="https://caniuse.com/mdn-api_layoutshift">Chromium browsers being the sole group of supporters</a>.</p>

<p>As it currently stands, <code>LayoutShift</code> opens up several pieces of information, including a <code>value</code> representing the amount of shifting, as well as the <code>sources</code> causing it to happen. More than that, we can tell if any user interactions took place that would affect the CLS value, such as zooming, changing browser size, or actions like <code>keydown</code>, <code>pointerdown</code>, and <code>mousedown</code>. This is the <a href="https://developer.mozilla.org/en-US/docs/Web/API/LayoutShift/lastInputTime"><code>lastInputTime</code> property</a>, and there’s an accompanying <a href="https://developer.mozilla.org/en-US/docs/Web/API/LayoutShift/hadRecentInput"><code>hasRecentInput</code> boolean</a> that returns <code>true</code> if the <code>lastInputTime</code> is less than <code>500ms</code>.</p>

<p>Got all that? We can use this to both see how much shifting takes place during page load and identify the culprits while excluding any shifts that are the result of user interactions.</p>

<div class="break-out">
<pre><code class="language-javascript">const observer = new PerformanceObserver((list) =&gt; {
  let cumulativeLayoutShift = 0;
  list.getEntries().forEach((entry) =&gt; {
    // Don't count if the layout shift is a result of user interaction.
    if (!entry.hadRecentInput) {
      cumulativeLayoutShift += entry.value;
    }
    console.log({ entry, cumulativeLayoutShift });
  });
});

// Call the Observer.
observer.observe({ type: "layout-shift", buffered: true });
</code></pre>
</div>

<p>Given the experimental nature of this one, here’s what an <code>entry</code> object looks like when we query it:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/4-tree-outline.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="276"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/4-tree-outline.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/4-tree-outline.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/4-tree-outline.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/4-tree-outline.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/4-tree-outline.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/4-tree-outline.png"
			
			sizes="100vw"
			alt="Tree outline showing the object properties and values for entries in the LayoutShift class produced by a query."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/4-tree-outline.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Pretty handy, right? Not only are we able to see how much shifting takes place (<code>0.128</code>) and which element is moving around (<code>article.a.main</code>), but we have the exact coordinates of the element’s box from where it starts to where it ends.</p>
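<p>Since each entry exposes its <code>sources</code>, we can go a step further and summarize which boxes moved and how far. Here’s a sketch (the helper and its output shape are my own invention) that pairs a pure summarizer with the observer wiring:</p>

```javascript
// Summarize layout shifts: for each shift not caused by recent input, report
// its value and how far each source box moved vertically between its old
// (previousRect) and new (currentRect) position.
function describeShifts(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput)
    .flatMap((entry) =>
      (entry.sources || []).map((source) => ({
        value: entry.value,
        movedBy: Math.abs(source.currentRect.y - source.previousRect.y),
      }))
    );
}

// Browser wiring:
// new PerformanceObserver((list) => console.table(describeShifts(list.getEntries())))
//   .observe({ type: "layout-shift", buffered: true });
```
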

<h2 id="reporting-interaction-to-next-paint">Reporting Interaction To Next Paint</h2>

<p>This is the new kid on the block that got me wondering about the Performance API in the first place. It’s been possible for some time now to measure INP as it transitions to replace First Input Delay as a Core Web Vitals metric in March 2024. When we’re talking about INP, we’re talking about measuring the time between a user interacting with the page and the page responding to that interaction.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/5-timeline-illustration-tasks.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="393"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/5-timeline-illustration-tasks.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/5-timeline-illustration-tasks.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/5-timeline-illustration-tasks.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/5-timeline-illustration-tasks.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/5-timeline-illustration-tasks.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/5-timeline-illustration-tasks.jpg"
			
			sizes="100vw"
			alt="Timeline illustration showing the tasks in between input delay and presentation delay in response to user interaction."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/5-timeline-illustration-tasks.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>We need to hook into the <a href="https://developer.mozilla.org/en-US/docs/Web/API/PerformanceEventTiming"><code>PerformanceEventTiming</code> class</a> for this one. And there’s so much we can dig into when it comes to user interactions. Think about it! There’s what type of event happened (<code>entryType</code> and <code>name</code>), when it happened (<code>startTime</code>), which user interaction the event belongs to (<code>interactionId</code>, experimental), and when processing the interaction starts (<code>processingStart</code>) and ends (<code>processingEnd</code>). There’s also a way to exclude interactions that can be canceled by the user (<code>cancelable</code>).</p>

<pre><code class="language-javascript">const observer = new PerformanceObserver((list) =&gt; {
  list.getEntries().forEach((entry) =&gt; {
    // Alias for the total duration.
    const duration = entry.duration;
    // Calculate the time before processing starts.
    const delay = entry.processingStart - entry.startTime;
    // Calculate the time spent running the interaction's event handlers.
    const lag = entry.processingEnd - entry.processingStart;

    // Don't count interactions that the user can cancel.
    if (!entry.cancelable) {
      console.log(`INP Duration: ${duration}`);
      console.log(`INP Delay: ${delay}`);
      console.log(`Event handler duration: ${lag}`);
    }
  });
});

// Call the Observer.
observer.observe({ type: "event", buffered: true });
</code></pre>
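<p>One caveat worth knowing: <code>event</code> entries arrive per event (<code>pointerdown</code>, <code>pointerup</code>, <code>click</code>, and so on), while INP is defined per <em>interaction</em>. As a rough sketch, we can group entries by their <code>interactionId</code> and keep the longest duration in each group; real implementations report a high percentile of those interaction durations rather than a simple maximum, so treat this as an estimate:</p>

```javascript
// Rough INP estimate: group event entries by interactionId, keep the longest
// duration per interaction, and return the worst interaction overall.
function estimateInp(entries) {
  const byInteraction = new Map();
  for (const entry of entries) {
    // Entries with interactionId 0 aren't tied to a discrete interaction.
    if (!entry.interactionId) continue;
    const current = byInteraction.get(entry.interactionId) || 0;
    byInteraction.set(entry.interactionId, Math.max(current, entry.duration));
  }
  return Math.max(0, ...byInteraction.values());
}

// Browser wiring:
// new PerformanceObserver((list) => console.log(`INP estimate: ${estimateInp(list.getEntries())} ms`))
//   .observe({ type: "event", buffered: true, durationThreshold: 16 });
```
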
  

<h2 id="reporting-long-animation-frames-loafs">Reporting Long Animation Frames (LoAFs)</h2>

<p>Let’s build off that last one. We can now track INP scores on our website and break them down into specific components. But what code is actually running and causing those delays?</p>

<p>The <a href="https://www.debugbear.com/blog/long-animation-frames/?utm_campaign=sm-4">Long Animation Frames API</a> was developed to help answer that question. It won’t land in Chrome stable until mid-March 2024, but you can already use it in Chrome Canary.</p>

<p>A <code>long-animation-frame</code> entry is reported every time the browser couldn’t render page content immediately as it was busy with other processing tasks. We get an overall <code>duration</code> for the long frame but also a <code>duration</code> for different <code>scripts</code> involved in the processing.</p>

<div class="break-out">
<pre><code class="language-javascript">const observer = new PerformanceObserver((list) =&gt; {
  list.getEntries().forEach((entry) =&gt; {
    if (entry.duration &gt; 50) {
      // Log the overall duration of the long frame.
      console.log(`Frame took ${entry.duration} ms`)
      console.log(`Contributing scripts:`)
      // Log information on each script in a table.
      entry.scripts.forEach(script =&gt; {
        console.table({
          // URL of the script where the processing starts
          sourceURL: script.sourceURL,
          // Total time spent on this sub-task
          duration: script.duration,
          // Name of the handler function
          functionName: script.sourceFunctionName,
          // Why was the handler function called? For example, 
          // a user interaction or a fetch response arriving.
          invoker: script.invoker
        })
      })
    }
  });
});

// Call the Observer.
observer.observe({ type: "long-animation-frame", buffered: true });
</code></pre>
</div>

<p>When an INP interaction takes place, we can find the closest long animation frame and investigate what processing delayed the page response.</p>
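<p>Matching the two up can be as simple as comparing timestamps. Here’s a sketch (the helper is hypothetical) that, given an interaction’s <code>startTime</code>, finds the long-animation-frame entry whose span contains it, or the nearest frame otherwise:</p>

```javascript
// Find the long-animation-frame entry that contains a given interaction
// start time, falling back to whichever frame is closest in time.
function closestFrame(frames, interactionStart) {
  let best = null;
  for (const frame of frames) {
    const end = frame.startTime + frame.duration;
    // The interaction falls inside this frame's span.
    if (interactionStart >= frame.startTime && interactionStart <= end) return frame;
    // Otherwise track the frame whose edges are nearest to the interaction.
    const distance = Math.min(
      Math.abs(frame.startTime - interactionStart),
      Math.abs(end - interactionStart)
    );
    if (!best || distance < best.distance) best = { frame, distance };
  }
  return best && best.frame;
}
```

<p>Feed it the buffered <code>long-animation-frame</code> entries and an <code>event</code> entry’s <code>startTime</code>, and you get back the frame whose <code>scripts</code> are the most likely culprits.</p>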














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/6-animation-frames-data-chrome-devtools-console.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="344"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/6-animation-frames-data-chrome-devtools-console.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/6-animation-frames-data-chrome-devtools-console.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/6-animation-frames-data-chrome-devtools-console.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/6-animation-frames-data-chrome-devtools-console.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/6-animation-frames-data-chrome-devtools-console.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/6-animation-frames-data-chrome-devtools-console.png"
			
			sizes="100vw"
			alt="Long animation frames data the Chrome DevTools Console"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/6-animation-frames-data-chrome-devtools-console.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="there-s-a-package-for-this">There’s A Package For This</h2>

<p>The Performance API is so big and so powerful. We could easily spend an entire bootcamp learning all of the interfaces and what they provide. There’s network timing, navigation timing, resource timing, and plenty of custom reporting features available on top of the Core Web Vitals we’ve looked at.</p>

<p>If CWVs are what you’re really after, then you might consider looking into the <a href="https://github.com/GoogleChrome/web-vitals">web-vitals library</a> to wrap around the browser Performance APIs.</p>

<p>Need a CWV metric? All it takes is a single function.</p>

<pre><code class="language-javascript">webVitals.onINP(function(info) {
  console.log(info)
}, { reportAllChanges: true });
</code></pre>

<p>Boom! That <code>reportAllChanges</code> property? That’s a way of saying we want data reported every time the metric changes, instead of only when the metric reaches its final value. For example, as long as the page is open, there’s always a chance that the user will encounter an even slower interaction than the current INP interaction. So, without <code>reportAllChanges</code>, we’d only see the INP reported when the page is closed (or when it’s hidden, e.g., if the user switches to a different browser tab).</p>

<p>We can also report only the delta between the previous value of a metric and its latest one. From the <a href="https://github.com/GoogleChrome/web-vitals?tab=readme-ov-file#report-only-the-delta-of-changes">web-vitals docs</a>:</p>

<pre><code class="language-javascript">function logDelta({ name, id, delta }) {
  console.log(`${name} matching ID ${id} changed by ${delta}`);
}

onCLS(logDelta);
onINP(logDelta);
onLCP(logDelta);
</code></pre>
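
<p>In practice, these callbacks usually forward metric data to an analytics endpoint rather than the console. A common pattern from the web-vitals docs is to use <code>navigator.sendBeacon</code> so the report still goes out when the page is being closed. Here’s a minimal sketch &mdash; the <code>/analytics</code> endpoint and the helper names are placeholders, not part of the library:</p>

```javascript
// Serialize the fields of a web-vitals metric object we care about.
// (Hypothetical helper; the metric shape matches web-vitals callbacks.)
function toAnalyticsPayload(metric) {
  return JSON.stringify({
    name: metric.name,
    id: metric.id,
    value: metric.value,
    delta: metric.delta,
  });
}

function reportMetric(metric) {
  const body = toAnalyticsPayload(metric);
  // sendBeacon queues the request even during page unload;
  // fall back to a keepalive fetch where it's unavailable.
  if (navigator.sendBeacon) {
    navigator.sendBeacon("/analytics", body);
  } else {
    fetch("/analytics", { body, method: "POST", keepalive: true });
  }
}

// Wire it up once per metric:
// webVitals.onCLS(reportMetric);
// webVitals.onINP(reportMetric);
// webVitals.onLCP(reportMetric);
```
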

<h2 id="measuring-is-fun-but-monitoring-is-better">Measuring Is Fun, But Monitoring Is Better</h2>

<p>All we’ve done here is scratch the surface of the Performance API as far as programmatically reporting Core Web Vitals metrics. It’s fun to play with things like this. There’s even a slight feeling of <em>power</em> in being able to tap into this information on demand.</p>

<p>At the end of the day, though, you’re probably just as interested <a href="https://www.smashingmagazine.com/2023/08/running-page-speed-test-monitoring-versus-measuring/">in <em>monitoring</em> performance as you are in <em>measuring</em> it</a>. We could do a deep dive and detail what a performance dashboard powered by the Performance API is like, complete with historical records that indicate changes over time. That’s ultimately the sort of thing we can build on this &mdash; we can build our own real user monitoring (RUM) tool or perhaps compare Performance API values against historical data from the <a href="https://developer.chrome.com/docs/crux/history-api">Chrome User Experience Report</a> <a href="https://developer.chrome.com/docs/crux/bigquery">(CrUX)</a>.</p>

<p>Or perhaps you want a solution right now without stitching things together. That’s what you’ll get from a paid commercial service like <a href="https://www.debugbear.com/?utm_campaign=sm-4">DebugBear</a>. All of this is already baked right in with all the metrics, historical data, and charts you need to gain insights into the overall performance of a site over time… <a href="https://www.debugbear.com/real-user-monitoring/?utm_campaign=sm-4">and in real-time, monitoring real users</a>.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/7-debugbear-largest-contentful-paint-dashboard.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="474"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/7-debugbear-largest-contentful-paint-dashboard.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/7-debugbear-largest-contentful-paint-dashboard.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/7-debugbear-largest-contentful-paint-dashboard.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/7-debugbear-largest-contentful-paint-dashboard.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/7-debugbear-largest-contentful-paint-dashboard.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/7-debugbear-largest-contentful-paint-dashboard.png"
			
			sizes="100vw"
			alt="DebugBear Largest Contentful Paint dashboard showing overall speed, a histogram, a  timeline, and a performance breakdown of the most popular pages."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/7-debugbear-largest-contentful-paint-dashboard.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>DebugBear can help you identify why users are having slow experiences on any given page. If there is slow INP, what page elements are these users interacting with? What elements often shift around on the page and cause high CLS? Is the LCP typically an image, a heading, or something else? And does the type of LCP element impact the LCP score?</p>

<p>To help explain INP scores, DebugBear also supports the upcoming Long Animation Frames API we looked at, allowing you to see what code is responsible for interaction delays.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/8-table-css-selectors.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="316"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/8-table-css-selectors.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/8-table-css-selectors.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/8-table-css-selectors.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/8-table-css-selectors.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/8-table-css-selectors.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/8-table-css-selectors.png"
			
			sizes="100vw"
			alt="Table showing CSS selectors identifying different page elements that users have interacted with, along with their INP score."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/8-table-css-selectors.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The Performance API can also report a list of all resource requests on a page. DebugBear uses this information to show a <a href="https://www.debugbear.com/docs/waterfall/?utm_campaign=sm-4">request waterfall chart</a> that tells you not just when different resources are loaded, but also whether a resource was render-blocking, whether it was loaded from the cache, and whether an image resource was used for the LCP element.</p>

<p>In this screenshot, the blue line shows the FCP, and the red line shows the LCP. We can see that the LCP happens right after the LCP image request, marked by the blue “LCP” badge, has finished.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/9-request-waterfall-visualization.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="413"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/9-request-waterfall-visualization.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/9-request-waterfall-visualization.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/9-request-waterfall-visualization.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/9-request-waterfall-visualization.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/9-request-waterfall-visualization.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/9-request-waterfall-visualization.png"
			
			sizes="100vw"
			alt="A request waterfall visualization showing what resources are loaded by a website and when they are loaded."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/reporting-core-web-vitals-performance-api/9-request-waterfall-visualization.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>DebugBear offers a <a href="https://www.debugbear.com/signup/?utm_campaign=sm-4">14-day free trial</a>. See how fast your website is, what’s slowing it down, and how you can improve your Core Web Vitals. You’ll also get monitoring alerts, so if there’s a web vitals regression, you’ll find out before it starts impacting Google search results.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Geoff Graham</author><title>Preparing For Interaction To Next Paint, A New Web Core Vital</title><link>https://www.smashingmagazine.com/2023/12/preparing-interaction-next-paint-web-core-vital/</link><pubDate>Thu, 07 Dec 2023 21:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2023/12/preparing-interaction-next-paint-web-core-vital/</guid><description>Starting in March 2024, Interaction to Next Paint will formally replace First Input Delay as a Core Web Vital metric. Learn how the two metrics differ, why we needed a new way to measure interaction responsiveness, and how you can start optimizing the performance of your site now for a seamless transition to the latest Core Web Vital metric.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2023/12/preparing-interaction-next-paint-web-core-vital/" />
              <title>Preparing For Interaction To Next Paint, A New Web Core Vital</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Preparing For Interaction To Next Paint, A New Web Core Vital</h1>
                  
                    
                    <address>Geoff Graham</address>
                  
                  <time datetime="2023-12-07T21:00:00&#43;00:00" class="op-published">2023-12-07T21:00:00+00:00</time>
                  <time datetime="2023-12-07T21:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                <p>This article is sponsored by <b>DebugBear</b></p>
                

<p>There’s a change coming to the Core Web Vitals lineup. If you’re reading this before March 2024 and fire up your favorite performance monitoring tool, you’re going to get a Core Web Vitals report like this one pulled from PageSpeed Insights:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/1-core-web-vitals-assessment.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="323"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/1-core-web-vitals-assessment.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/1-core-web-vitals-assessment.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/1-core-web-vitals-assessment.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/1-core-web-vitals-assessment.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/1-core-web-vitals-assessment.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/1-core-web-vitals-assessment.png"
			
			sizes="100vw"
			alt="Core Web Vitals report with six mini-charts"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/1-core-web-vitals-assessment.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>You’re likely used to seeing most of these metrics. But there’s a good reason for the little blue icon sitting next to the second metric in the second row, <strong>Interaction to Next Paint (INP)</strong>. It’s the newest metric of the bunch and is set to formally be a ranking factor in Google search results <a href="https://web.dev/blog/inp-cwv">beginning in March 2024</a>.</p>

<p>And there’s a good reason that INP sits immediately below First Input Delay (FID) in that chart: INP will formally replace FID when it becomes an official Core Web Vital metric.</p>

<p>The fact that INP is already available in performance reports means we have an opportunity to familiarize ourselves with it today, in advance of its release. That’s what this article is all about. Rather than pushing off INP until after it starts influencing the way we measure site performance, let’s take a few minutes to level up our understanding of what it is and why it’s designed to replace FID. This way, you’ll not only have the information you need to read your performance reports come March 2024 but can proactively prepare your website for the change.</p>

<h2 id="i-m-not-seeing-those-metrics-in-my-reports">“I’m Not Seeing Those Metrics In My Reports”</h2>

<p>Chances are that you’re looking at Lighthouse or some other report <a href="https://www.smashingmagazine.com/2023/10/answering-questions-interpreting-page-speed-reports/#does-lighthouse-use-rum-data-too">based on lab data.</a> And by that, I mean data that isn’t coming from the field in the form of “real” users. You configure the test by applying some form of simulated throttling and start watching the results pour in. In other words, the data is not looking at your <em>actual</em> web traffic but a <em>simulated</em> environment that gives you an approximate view of traffic when certain conditions are in place.</p>

<p>I say all that because it’s important to remember that <strong>not all performance data is equal</strong>, and some metrics are simply impossible to measure with certain types of data. INP and FID happen to be a couple of metrics where lab data is unsuitable for meaningful results, and that’s because both <strong>INP and FID are measurements of user interactions</strong>. That may not have been immediately obvious by the name “First Input Delay,” but it’s clear as day when we start talking about “<em>Interaction</em> to Next Paint” &mdash; it’s right there in the name!</p>

<p>Simulated lab data, like what is used in Lighthouse reports, does not interact with the page. That means there is no way for it to evaluate the first input a user makes or any other interactions on the page.</p>

<p>So, that’s why you’re not seeing INP or FID in your reports. If you want these metrics, then you will want to use a performance tool that is capable of using real user data, such as DebugBear, which can <a href="https://www.debugbear.com/real-user-monitoring?utm_campaign=sm-3">monitor your actual traffic on an ongoing basis in real time</a>, or PageSpeed Insights, which bases its findings on Google’s “<a href="https://developer.chrome.com/docs/crux/">Chrome User Experience Report</a>” (commonly referred to as CrUX), though DebugBear is capable of providing CrUX reporting as well. The difference between real-time user monitoring and measuring performance against CrUX data is big enough that it’s worth reading up on it, and <a href="https://www.smashingmagazine.com/2023/10/answering-questions-interpreting-page-speed-reports/">we have a full article on Smashing Magazine</a> that goes deeply into the differences for you.</p>

<h2 id="inp-improves-how-page-interactions-are-measured">INP Improves How Page Interactions Are Measured</h2>

<p>OK, so we now know that both INP and FID are about page interactions. <strong>Specifically, they are about measuring the time between a user interacting with the page and the page responding to that interaction.</strong></p>

<p>What’s the difference between the two metrics, then? The answer is two-fold. First, FID is a measure of the time it takes the page to <em>start</em> processing an interaction, i.e., the <strong>input delay</strong>. That sounds fine on the surface &mdash; we want to know how much time it takes for a user to start an interaction and optimize it if we can. The problem, though, is that the input delay is just one part of the time it takes for the page to <em>fully</em> respond to an interaction.</p>

<p>A more complete picture considers the input delay in addition to two other components: <strong>processing time</strong> and <strong>presentation delay</strong>. In other words, we should also look at the time it takes to process the interaction and the time it takes for the page to render the UI in response. As you may have already guessed, INP considers all three delays, whereas FID considers only the input delay.</p>
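
<p>These three phases can be read off an Event Timing API entry: <code>processingStart</code> and <code>processingEnd</code> bracket the event handler work, while <code>startTime</code> and <code>duration</code> span the whole interaction through to the next paint. A minimal sketch &mdash; the helper name is ours, not a standard API:</p>

```javascript
// Break a PerformanceEventTiming-shaped entry into the three INP phases.
// All values are in milliseconds.
function breakdownInteraction(entry) {
  return {
    inputDelay: entry.processingStart - entry.startTime,
    processingTime: entry.processingEnd - entry.processingStart,
    // duration runs from startTime until the next paint, so whatever
    // remains after processing ends is the presentation delay.
    presentationDelay: entry.startTime + entry.duration - entry.processingEnd,
  };
}

// In the browser, entries arrive via a PerformanceObserver:
// new PerformanceObserver((list) => {
//   for (const entry of list.getEntries()) {
//     console.log(entry.name, breakdownInteraction(entry));
//   }
// }).observe({ type: "event", durationThreshold: 16, buffered: true });
```

<p>One caveat: <code>duration</code> in the Event Timing API is rounded to the nearest 8ms, so the presentation delay you compute this way is an approximation.</p>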














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/2-timeline-inp-components.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="393"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/2-timeline-inp-components.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/2-timeline-inp-components.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/2-timeline-inp-components.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/2-timeline-inp-components.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/2-timeline-inp-components.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/2-timeline-inp-components.jpg"
			
			sizes="100vw"
			alt="Diagram of a timeline aligned with the three INP components."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/2-timeline-inp-components.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The second difference between INP and FID is <em>which</em> interactions are evaluated. FID is not shy about which interaction it measures: the very first one, as in the input delay of the <em>first</em> interaction on the page. <strong>We can think of INP as a more complete and accurate representation of how fast your page responds to user interactions because it looks at</strong> <strong><em>every single one</em></strong> <strong>on the page.</strong> It’s probably rare for a page to have only one interaction, and whatever interactions there are after the first interaction are likely located well down the page and happen after the page has fully loaded.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aSo,%20where%20FID%20looks%20at%20the%20first%20interaction%20%e2%80%94%20and%20only%20the%20input%20delay%20of%20that%20interaction%20%e2%80%94%20INP%20considers%20the%20entire%20lifecycle%20of%20all%20interactions.%0a&url=https://smashingmagazine.com%2f2023%2f12%2fpreparing-interaction-next-paint-web-core-vital%2f">
      
So, where FID looks at the first interaction — and only the input delay of that interaction — INP considers the entire lifecycle of all interactions.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<h2 id="measuring-interaction-to-next-paint">Measuring Interaction To Next Paint</h2>

<p>Both FID and INP are measured in milliseconds. Don’t get too worried if you notice your INP time is greater than your FID. That’s bound to happen when <em>all</em> of the interactions on the page are evaluated instead of the first interaction alone.</p>

<p>Google’s guidance is to <a href="https://web.dev/articles/fid#what_is_a_good_fid_score">maintain an FID under 100ms</a>. And remember, FID does not take into account the time it takes for the event to process, nor does it consider the time it takes the page to update following the event. It only looks at the delay before event processing begins.</p>

<p>And since INP does indeed take all three of those factors into account &mdash; the input delay, processing time, and presentation delay &mdash; Google’s threshold for INP is inherently larger than FID’s: <strong>under 200ms for a “good” result, and between 200-500ms for a passing result.</strong> Any interaction that adds up to a delay greater than 500ms is a clear bottleneck.</p>
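
<p>Those thresholds can be expressed as a tiny helper (the function name is ours; the cutoffs follow Google’s guidance):</p>

```javascript
// Classify an INP value (in milliseconds) per Google's thresholds:
// "good" at 200ms or below, "needs-improvement" up to 500ms, "poor" above.
function rateINP(valueMs) {
  if (valueMs <= 200) return "good";
  if (valueMs <= 500) return "needs-improvement";
  return "poor";
}
```
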














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/3-200ms-500ms-range-passing-inp-scores.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="184"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/3-200ms-500ms-range-passing-inp-scores.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/3-200ms-500ms-range-passing-inp-scores.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/3-200ms-500ms-range-passing-inp-scores.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/3-200ms-500ms-range-passing-inp-scores.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/3-200ms-500ms-range-passing-inp-scores.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/3-200ms-500ms-range-passing-inp-scores.jpg"
			
			sizes="100vw"
			alt="Showing the 200ms-500ms range of passing INP scores."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/3-200ms-500ms-range-passing-inp-scores.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The goal is to spot slow interactions and optimize them for a smoother user experience. How exactly do you identify those problems? That’s what we’re looking at next.</p>

<h2 id="identifying-slow-interactions">Identifying Slow Interactions</h2>

<p>There’s already plenty you can do right now to optimize your site for INP before it becomes an official Core Web Vital in March 2024. Let’s walk through the process.</p>

<p>Of course, we’re talking about the user doing <em>something</em> on the page, i.e., an action such as a click or keyboard focus. That might be expanding a panel in an accordion component, or perhaps triggering a modal or a prompt: any change in state where the UI updates in response.</p>

<p>Your page may consist of little more than content and images, making for very few, if any, interactions. It could just as well be some sort of game-based UI with thousands of interactions. INP can be a heckuva lot of work, but it really comes down to how many interactions we’re talking about.</p>

<p>We’ve already talked about the difference between <strong>field data</strong> and <strong>lab data</strong> and how lab data is simply unable to measure page interactions accurately. That means you will want to rely on field data when pulling INP reports to identify bottlenecks. And when we’re talking about field data, we’re talking about two different flavors:</p>

<ol>
<li><strong>Data from the CrUX report</strong> that is based on the results of real Chrome users. This is readily available in PageSpeed Insights and Google Search Console, not to mention DebugBear. If you use either of Google’s tools, just note that their throttling methods collect metrics on a fast connection and then estimate how fast the page would be on a slower connection. DebugBear actually tests with a slower network, resulting in more accurate data.</li>
<li><strong>Monitoring your website’s real-time traffic</strong>, which will require adding a snippet to your source code that sends traffic data to a service. And, yes, DebugBear is one such service, though there are others. You can even take advantage of <a href="https://developer.chrome.com/docs/crux/bigquery/">historical CrUX data integrated with BigQuery</a> to get a historical view of your results dating back as far as 2017 with new data coming in monthly, which isn’t exactly “real-time” monitoring of your actual traffic, but certainly useful.</li>
</ol>

<p>You will get the most bang for your buck with real-time monitoring that keeps a historical record of data you can use to evaluate INP results over time.</p>

<p>That said, you can still start identifying bottlenecks today if you prefer not to dive into real-time monitoring right this second. <a href="https://www.debugbear.com/inp-debugger?utm_campaign=sm-3">DebugBear has a tool that</a> analyzes any URL you throw at it. What’s great about this is that it <em>shows</em> you the elements that receive user interaction and provides the results right next to them. <strong>The result of the element that takes the longest is your INP result.</strong> That’s true whether you have one component above the 500ms threshold or 100 of them on the page.</p>

<p>The fact that DebugBear’s tool highlights all of the interactions and organizes them by INP makes identifying bottlenecks a straightforward process.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/4-debugbear-inp-report.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="690"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/4-debugbear-inp-report.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/4-debugbear-inp-report.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/4-debugbear-inp-report.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/4-debugbear-inp-report.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/4-debugbear-inp-report.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/4-debugbear-inp-report.png"
			
			sizes="100vw"
			alt="DebugBear INP report."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Whoa there! We have a bottleneck! (<a href='https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/4-debugbear-inp-report.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>See that? There’s a clear INP offender on Smashing Magazine’s homepage, and it comes in slightly outside the healthy INP range at 510ms, even though the next “slowest” result is 184ms. There’s a little work we need to do between now and March to remedy that.</p>

<p>Notice, too, that there are actually two scores in the report: the INP Debugger Result and the Real User Google Data. The results aren’t even close! If we were to go by the Google CrUX data, we’re looking at a result that is 201ms <em>faster</em> than the INP Debugger’s result &mdash; a big enough difference that would result in the Smashing Magazine homepage fully passing INP.</p>

<p>Ultimately, what matters is how real users experience your website, and you need to look at the CrUX data to see that. The elements identified by the INP Debugger may cause slow interactions, but if users only interact with them very rarely, that might not be a priority to fix. But for a perfect user experience, you would want both results to be in the green.</p>

<h2 id="optimizing-slow-interactions">Optimizing Slow Interactions</h2>

<p>This is the ultimate objective, right? Once we have identified slow interactions &mdash; whether through a quick test with CrUX data or a real-time monitoring solution &mdash; we need to optimize them so their delays are at least under 500ms, but ideally under 200ms.</p>

<p>Optimizing INP comes down to CPU activity at the end of the day. But as we now know, INP measures two additional components of interactions that FID does not for a total of three components: <strong>input delay</strong>, <strong>processing time</strong>, and <strong>presentation delay</strong>. Each one is an opportunity to optimize the interaction, so let’s break them down.</p>
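<p>To make those pieces concrete, here is an illustrative breakdown (the names below are made up for this sketch, not taken from any browser API): an interaction’s total latency is the sum of the three components, and that total is what gets compared against Google’s published INP thresholds of 200ms (“good”) and 500ms (“needs improvement”):</p>

```javascript
// Illustrative only: the component names are placeholders for this
// sketch, not a real browser API. The thresholds follow Google's
// published INP boundaries: <= 200ms "good", <= 500ms "needs
// improvement", anything above is "poor".

function interactionLatency({ inputDelay, processingTime, presentationDelay }) {
  return inputDelay + processingTime + presentationDelay;
}

function rateINP(latencyMs) {
  if (latencyMs <= 200) return "good";
  if (latencyMs <= 500) return "needs improvement";
  return "poor";
}
```

<p>For example, an interaction with a 60ms input delay, 350ms of processing, and a 100ms presentation delay totals 510ms and rates as “poor”, just like the offender we spotted earlier.</p>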

<h3 id="reduce-the-input-delay">Reduce The Input Delay</h3>

<p>This is what FID is solely concerned with, and it’s the time it takes between the user’s input, such as a click, and for the interaction to start.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/5-diagram-fid-relation-total-blocking-time-ui-update.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="389"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/5-diagram-fid-relation-total-blocking-time-ui-update.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/5-diagram-fid-relation-total-blocking-time-ui-update.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/5-diagram-fid-relation-total-blocking-time-ui-update.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/5-diagram-fid-relation-total-blocking-time-ui-update.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/5-diagram-fid-relation-total-blocking-time-ui-update.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/5-diagram-fid-relation-total-blocking-time-ui-update.jpg"
			
			sizes="100vw"
			alt="Diagram showing FID in relation to Total Blocking Time and UI update."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/5-diagram-fid-relation-total-blocking-time-ui-update.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>This is where the <strong>Total Blocking Time (TBT)</strong> metric is a good one because it looks at CPU activity happening on the main thread, which adds time for the page to be able to respond to a user’s interaction. TBT does not count toward Google’s search rankings, but FID and INP do, and both are directly influenced by TBT. So, it’s a pretty big deal.</p>

<p>You will want to heavily audit what tasks are running on the main thread to improve your TBT and, as a result, your INP. Specifically, you want to watch for <strong>long tasks</strong> on the main thread, which are those that take more than 50ms to execute. You can get a decent visualization of tasks on the main thread in DevTools:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/6-safari-devtools-timelines-report.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="538"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/6-safari-devtools-timelines-report.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/6-safari-devtools-timelines-report.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/6-safari-devtools-timelines-report.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/6-safari-devtools-timelines-report.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/6-safari-devtools-timelines-report.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/6-safari-devtools-timelines-report.png"
			
			sizes="100vw"
			alt="Safari DevTools Timelines report"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The “Timelines” feature in Safari DevTools records the page load and produces a timeline of activity that can be broken down by network requests, layout and rendering, JavaScript and events, and CPU. Chrome and Firefox offer the same features but may look a little different, as browsers tend to do. (<a href='https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/6-safari-devtools-timelines-report.png'>Large preview</a>)
    </figcaption>
  
</figure>
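<p>To put numbers to the relationship between long tasks and TBT: each main-thread task contributes only the portion of its duration beyond the 50ms long-task threshold. A quick sketch of that math:</p>

```javascript
// Sketch of the Total Blocking Time calculation: only the part of
// each main-thread task that exceeds the 50ms long-task threshold
// counts as "blocking" time.

const LONG_TASK_THRESHOLD_MS = 50;

function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs
    .map((duration) => Math.max(0, duration - LONG_TASK_THRESHOLD_MS))
    .reduce((sum, blocking) => sum + blocking, 0);
}
```

<p>Three tasks of 30ms, 80ms, and 120ms add up to a TBT of 100ms: the first blocks nothing, while the other two contribute 30ms and 70ms.</p>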

<p>The bottom line: <strong>Optimize those long tasks!</strong> There are plenty of approaches you could take depending on your app. Not all scripts are equal in the sense that one may be executing a core feature while another is simply a nice-to-have. You’ll have to ask yourself:</p>

<ul>
<li><strong>Who</strong> is the script serving?</li>
<li><strong>When</strong> is it served?</li>
<li><strong>Where</strong> is it served from?</li>
<li><strong>What</strong> is it serving?</li>
</ul>

<p>Then, depending on your answers, you have plenty of options for how to optimize your long tasks:</p>

<ul>
<li><a href="https://www.smashingmagazine.com/2023/04/potential-web-workers-multithreading-web/">Use Web Workers</a> to establish separate threads for tasks to get scripts off the main thread.</li>
<li><a href="https://www.smashingmagazine.com/2022/02/javascript-bundle-performance-code-splitting/">Split JavaScript bundles</a> into individual pieces for smaller payloads.</li>
<li><a href="https://www.debugbear.com/blog/async-vs-defer?utm_campaign=sm-3">Async or defer scripts</a> that can run later without affecting the initial page load.</li>
<li><a href="https://web.dev/articles/preconnect-and-dns-prefetch">Preconnect network connections</a>, so browsers have a hint for other domains they might need to connect to. (It’s worth noting that this could reveal the user’s IP address and <a href="https://css-tricks.com/bunny-fonts/">conflict with GDPR compliance</a>.)</li>
</ul>

<p>Or, nuke any scripts that might no longer be needed!</p>
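<p>One more technique that pairs well with the list above is breaking a single long task into smaller chunks that yield back to the main thread in between, so the browser can respond to input sooner. A minimal sketch, with <code>processItem</code> standing in for your own per-item work:</p>

```javascript
// Minimal sketch of breaking up a long task. `processItem` is a
// stand-in for whatever per-item work your script does; nothing here
// comes from a specific library.

function yieldToMain() {
  // setTimeout(..., 0) schedules a new task, letting the browser
  // handle pending input and paints before the next chunk runs.
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processInChunks(items, processItem, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(processItem(item));
    }
    await yieldToMain(); // give the main thread a breather between chunks
  }
  return results;
}
```

<p>Instead of one 500ms task, the browser sees a series of short tasks with gaps in between where a click or keystroke can be handled.</p>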

<h3 id="reduce-processing-time">Reduce Processing Time</h3>

<p>Let’s say the user’s input triggers a heavy task, and you need to serve a bunch of JavaScript in response &mdash; heavy enough that you know a second or two is needed for the app to fully process the update.</p>

<ul>
<li>Try creating a loading state that triggers immediately and <a href="https://developer.mozilla.org/en-US/docs/Web/API/setTimeout">perform the work in a <code>setTimeout()</code> callback</a> because that’s a much quicker way to respond to the user interaction than waiting for the complete update.</li>
<li>If you’re working in React, make sure you are <a href="https://www.debugbear.com/blog/react-rerenders?utm_campaign=sm-3">preventing your components from re-rendering unnecessarily</a>.</li>
<li>Remember that <code>alert()</code>, <code>confirm()</code>, and <code>prompt()</code> are capable of adding to the total processing time as they run synchronously and block the main thread. That said, it appears <a href="https://twitter.com/mmocny/status/1679110039880040449">there could be plans to change that behavior</a> ahead of INP becoming a formal Core Web Vital.</li>
</ul>
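<p>The first suggestion above can be sketched as a small helper. The names here are placeholders rather than any real API; the key point is that the cheap UI update runs synchronously while the heavy work is deferred to a <code>setTimeout()</code> callback:</p>

```javascript
// Sketch of the "loading state first" pattern. Both callbacks are
// placeholders for your own code: the cheap UI update runs
// synchronously and gets painted right away, while the heavy work is
// deferred to a setTimeout() callback in a new task.

function respondThenProcess(showLoadingState, heavyWork) {
  showLoadingState(); // cheap, synchronous: painted before the heavy work
  return new Promise((resolve) => {
    // Runs in a new task, after the browser has had a chance to
    // render the loading state.
    setTimeout(() => resolve(heavyWork()), 0);
  });
}

// Hypothetical usage in a click handler:
// button.addEventListener("click", () => {
//   respondThenProcess(showSpinner, () => renderResults(filterProducts()));
// });
```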

<h3 id="reduce-presentation-delay">Reduce Presentation Delay</h3>

<p>Reducing presentation delay is really about reducing the time it takes the browser to display updates to the UI, paint styles, and do all of the calculations needed to produce the layout.</p>

<p>Of course, this is entirely dependent on the complexity of the page. That said, there are a few things to consider to help decrease the gap between when an interaction’s callbacks have finished running and when the browser is able to paint the resulting visual changes.</p>

<p>One thing is being mindful of <strong>the overall size of the DOM</strong>. The bigger the DOM, the more HTML that needs to be processed. That’s generally true, at least, even though <a href="https://web.dev/articles/dom-size-and-interactivity#how_can_i_reduce_dom_size">the relationship between DOM size and rendering isn’t exactly 1:1</a>; the browser still needs to work harder to render a larger DOM on the initial page load and when there’s a change on the page. That link will take you to a deep explanation of what contributes to the DOM size, how to measure it, and approaches for reducing it. The gist, though, is trying to <strong>maintain a flat structure</strong> (i.e., limit the levels of nested elements). Additionally, reviewing your CSS for overly complex selectors is another piece of low-hanging fruit to help move things along.</p>

<p>While we’re talking about CSS, you might consider looking into the <code>content-visibility</code> property and how it could possibly help reduce presentation delay. It comes with a lot of considerations, but if used effectively, it can provide the browser with a hint as far as which elements to defer fully rendering. The idea is that we can render an element’s layout containment but skip the paint until other resources have loaded. <a href="https://css-tricks.com/more-on-content-visibility/">Chris Coyier explains how and why that happens</a>, and there are <a href="https://html5accessibility.com/stuff/2020/08/25/short-note-on-content-visibility-hidden/">aspects of accessibility to bear in mind</a>.</p>

<p>And remember, if you’re outputting HTML from JavaScript, that JavaScript will have to load in order for the HTML to render. That’s a potential cost that comes with many single-page application frameworks.</p>

<h2 id="gain-insight-on-your-real-user-inp-breakdown">Gain Insight On Your Real User INP Breakdown</h2>

<p>The tools we’ve looked at so far can help you look at specific interactions, especially when testing them on your own computer. But how close is that to what your actual visitors experience?</p>

<p><strong>Real user-monitoring (RUM)</strong> lets you track how responsive your website is in the real world:</p>

<ul>
<li>What pages have the slowest INP?</li>
<li>What INP components have the biggest impact in real life?</li>
<li>What page elements do users interact with most often?</li>
<li>How fast is the average interaction for a given element?</li>
<li>Is our website less responsive for users in different countries?</li>
<li>Are our INP scores getting better or worse over time?</li>
</ul>

<p>There are many RUM solutions out there, and <a href="https://www.debugbear.com/real-user-monitoring?utm_campaign=sm-3">DebugBear RUM</a> is one of them.</p>
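<p>Conceptually, a RUM script records the duration of every interaction a visitor makes and reports something like the worst one as the page’s INP. Real tooling is more nuanced (for instance, discounting extreme outliers on pages with very many interactions), but a simplified sketch of the aggregation step looks like this:</p>

```javascript
// Greatly simplified sketch of INP aggregation in a RUM context:
// record each interaction's duration and report the worst one so far.
// This is an illustration of the idea, not how any particular RUM
// product actually implements it.

function createInpTracker() {
  const durations = [];
  return {
    record(durationMs) {
      durations.push(durationMs);
    },
    // Worst interaction so far, or null if the user hasn't interacted.
    currentINP() {
      return durations.length ? Math.max(...durations) : null;
    },
  };
}
```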














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/7-debugbear-rum.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="516"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/7-debugbear-rum.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/7-debugbear-rum.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/7-debugbear-rum.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/7-debugbear-rum.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/7-debugbear-rum.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/7-debugbear-rum.png"
			
			sizes="100vw"
			alt="DebugBear RUM"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/7-debugbear-rum.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>DebugBear also supports the proposed <a href="https://www.debugbear.com/blog/long-animation-frames">Long Animation Frames API</a> that can help you identify the source code that’s responsible for CPU tasks in the browser.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/8-long-animation-frames-api.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="397"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/8-long-animation-frames-api.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/8-long-animation-frames-api.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/8-long-animation-frames-api.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/8-long-animation-frames-api.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/8-long-animation-frames-api.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/8-long-animation-frames-api.png"
			
			sizes="100vw"
			alt="Long Animation Frames API"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/preparing-interaction-next-paint-web-core-vital/8-long-animation-frames-api.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="conclusion">Conclusion</h2>

<p>When Interaction to Next Paint makes its official debut as a Core Web Vital in March 2024, we’re gaining a better way to measure a page’s responsiveness to user interactions that is set to replace the First Input Delay metric.</p>

<p>Rather than looking at the input delay of the first interaction on the page, we get a high-definition evaluation of the least responsive component on the page &mdash; including the <strong>input delay</strong>, <strong>processing time</strong>, and <strong>presentation delay</strong> &mdash; whether it’s the first interaction or another one located way down the page. In other words, INP is a clearer and more accurate way to measure the speed of user interactions.</p>

<p>Will your app be ready for the change in March 2024? You now have a roadmap to help optimize your user interactions and prepare ahead of time as well as all of the tools you need, <a href="https://www.debugbear.com/inp-debugger?utm_campaign=sm-3">including a quick, free option from the team over at DebugBear</a>. This is the time to get a jump on the work; otherwise, you could find yourself with unidentified interactions that exceed the 500ms threshold for a “passing” INP score that negatively impacts your search engine rankings… and user experiences.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk, il)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Geoff Graham</author><title>Answering Common Questions About Interpreting Page Speed Reports</title><link>https://www.smashingmagazine.com/2023/10/answering-questions-interpreting-page-speed-reports/</link><pubDate>Tue, 31 Oct 2023 16:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2023/10/answering-questions-interpreting-page-speed-reports/</guid><description>Have you noticed that different tools’ reports result in different performance scores? Shouldn’t they be the same if reports rely on the same underlying tooling to generate scores? Take a closer look at how various performance tools audit and report on performance metrics, such as core web vitals. Geoff Graham answers a set of common questions that pop up during performance audits.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2023/10/answering-questions-interpreting-page-speed-reports/" />
              <title>Answering Common Questions About Interpreting Page Speed Reports</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Answering Common Questions About Interpreting Page Speed Reports</h1>
                  
                    
                    <address>Geoff Graham</address>
                  
                  <time datetime="2023-10-31T16:00:00&#43;00:00" class="op-published">2023-10-31T16:00:00+00:00</time>
                  <time datetime="2023-10-31T16:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                <p>This article is sponsored by <b>DebugBear</b></p>
                

<p>Running a performance check on your site isn’t too terribly difficult. It may even be something you do regularly with Lighthouse in Chrome DevTools, where testing is freely available and produces a very attractive-looking report.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/1-lighthouse-report-smashing-magazine.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="561"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/1-lighthouse-report-smashing-magazine.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/1-lighthouse-report-smashing-magazine.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/1-lighthouse-report-smashing-magazine.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/1-lighthouse-report-smashing-magazine.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/1-lighthouse-report-smashing-magazine.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/1-lighthouse-report-smashing-magazine.png"
			
			sizes="100vw"
			alt="Lighthouse report on Smashing Magazine performance"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Can’t be perfect every time! (<a href='https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/1-lighthouse-report-smashing-magazine.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Lighthouse is only one performance auditing tool out of many. The convenience of having it tucked into Chrome DevTools is what makes it an easy go-to for many developers.</p>

<p>But do you know <em>how</em> Lighthouse calculates performance metrics like First Contentful Paint (FCP), Total Blocking Time (TBT), and Cumulative Layout Shift (CLS)? There’s a <a href="https://googlechrome.github.io/lighthouse/scorecalc/#FCP=620&amp;SI=730&amp;FMP=635&amp;TTI=635&amp;FCI=6500&amp;LCP=980&amp;TBT=0&amp;CLS=0.01&amp;device=desktop&amp;version=10">handy calculator</a> linked up in the report summary that lets you adjust performance values to see how they impact the overall score. Still, there’s nothing in there to tell us about the data Lighthouse is using to evaluate metrics. The <a href="https://developer.chrome.com/docs/lighthouse/performance/performance-scoring/">linked-up explainer</a> provides more details, from how scores are weighted to why scores may fluctuate between test runs.</p>

<p>Why do we need Lighthouse at all when Google also offers similar reports in <a href="https://pagespeed.web.dev">PageSpeed Insights</a> (PSI)? The truth is that the two tools were fairly distinct until <a href="https://developers.google.com/speed/docs/insights/release_notes#november-2018">PSI was updated in 2018</a> to use Lighthouse reporting.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/2-psi-report-smashing-magazine-performance.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="598"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/2-psi-report-smashing-magazine-performance.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/2-psi-report-smashing-magazine-performance.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/2-psi-report-smashing-magazine-performance.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/2-psi-report-smashing-magazine-performance.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/2-psi-report-smashing-magazine-performance.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/2-psi-report-smashing-magazine-performance.png"
			
			sizes="100vw"
			alt="PSI report on Smashing Magazine performance"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/2-psi-report-smashing-magazine-performance.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Did you notice that the Performance score in Lighthouse is different from that PSI screenshot? How can one report result in a near-perfect score while the other appears to find more reasons to lower the score? Shouldn’t they be the same if both reports rely on the same underlying tooling to generate scores?</p>

<p>That’s what this article is about. <strong>Different tools make different assumptions using different data</strong>, whether we are talking about Lighthouse, PageSpeed Insights, or commercial services like <a href="https://www.debugbear.com/?utm_campaign=sm-2">DebugBear</a>. That’s what accounts for different results. But there are more specific reasons for the divergence.</p>

<p>Let’s dig into those by answering a set of common questions that pop up during performance audits.</p>

<h2 id="what-does-it-mean-when-pagespeed-insights-says-it-uses-real-user-experience-data">What Does It Mean When PageSpeed Insights Says It Uses “Real-User Experience Data”?</h2>

<p>This is a great question because it provides a lot of context for why it’s possible to get varying results from different performance auditing tools. In fact, when we say “real user data,” we’re really referring to two different types of data. And when discussing the two types of data, we’re actually talking about what is called <strong>real-user monitoring</strong>, or RUM for short.</p>

<h3 id="type-1-chrome-user-experience-report-crux">Type 1: Chrome User Experience Report (CrUX)</h3>

<p>What PSI means by “real-user experience data” is that it evaluates the performance data used to measure the core web vitals from your tests against the core web vitals data of actual real-life users. That real-life data is pulled from the <a href="https://developer.chrome.com/docs/crux/">Chrome User Experience (CrUX) report</a>, a set of anonymized data collected from Chrome users &mdash; at least <a href="https://developer.chrome.com/docs/crux/methodology/#user-eligibility">those who have consented to share data</a>.</p>

<p>CrUX data is important because it is how core web vitals are measured, which, in turn, <a href="https://www.debugbear.com/docs/core-web-vitals-ranking-factor?utm_campaign=sm-2">are a ranking factor for Google’s search results</a>. <strong>Google focuses on the 75th percentile of users</strong> in the CrUX data when reporting core web vitals metrics. This way, the data represents a vast majority of users while minimizing the possibility of outlier experiences.</p>

<p>But it comes with caveats. For example, the data is pretty slow to update, refreshing every 28 days, meaning it is not the same as real-time monitoring. At the same time, if you plan on using the data yourself, you may find yourself limited to reporting within that floating 28-day range unless you make use of the <a href="https://developer.chrome.com/docs/crux/history-api/">CrUX History API</a> or <a href="https://developer.chrome.com/docs/crux/bigquery/">BigQuery</a> to produce historical results you can measure against. CrUX is what fuels PSI and Google Search Console, but it is also available in other tools you may already use.</p>
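<p>If you want to query CrUX data yourself without waiting on PSI, the CrUX API accepts a POST request describing the page, form factor, and metrics you care about. The sketch below follows the request shape in Google’s public CrUX API documentation, but the API key is your own, and the exact response fields are worth double-checking against the docs before relying on them:</p>

```javascript
// Sketch of querying the CrUX API for a page's INP distribution.
// Requires your own Google API key; the endpoint, request body, and
// response field names follow the public CrUX API docs, but verify
// them against the documentation before relying on this.

const CRUX_ENDPOINT =
  "https://chromeuserexperience.googleapis.com/v1/records:queryRecord";

function buildCruxQuery(url) {
  return {
    url,                                    // the page to look up
    formFactor: "PHONE",                    // or "DESKTOP", "TABLET"
    metrics: ["interaction_to_next_paint"], // just INP in this sketch
  };
}

async function fetchCruxINP(url, apiKey) {
  const response = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildCruxQuery(url)),
  });
  if (!response.ok) throw new Error(`CrUX API error: ${response.status}`);
  const data = await response.json();
  return data.record.metrics.interaction_to_next_paint;
}
```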

<p>Barry Pollard, a web performance developer advocate for Chrome, <a href="https://www.smashingmagazine.com/2021/04/complete-guide-measure-core-web-vitals/">wrote an excellent primer on the CrUX Report for Smashing Magazine</a>.</p>

<h3 id="type-2-full-real-user-monitoring-rum">Type 2: Full Real-User Monitoring (RUM)</h3>

<p>If CrUX offers one flavor of real-user data, then we can consider “full real-user data” to be another flavor that provides even more in the way of individual experiences, such as specific network requests made by the page. This data is distinct from CrUX because it’s collected directly by the website owner by installing an analytics snippet on their website.</p>

<p>Unlike CrUX data, full RUM pulls data from other users using other browsers in addition to Chrome and does so on a continual basis. That means there’s no waiting 28 days for a fresh set of data to see the impact of any changes made to a site.</p>

<p>You can see how you might wind up with different results in performance tests simply by the type of real-user monitoring (RUM) that is in use. Both types are useful, but</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aYou%20might%20find%20that%20CrUX-based%20results%20are%20excellent%20for%20more%20of%20a%20current%20high-level%20view%20of%20performance%20than%20they%20are%20an%20accurate%20reflection%20of%20the%20users%20on%20your%20site%20because%20of%20that%2028-day%20waiting%20period,%20which%20is%20where%20full%20RUM%20shines%20with%20more%20immediate%20results%20and%20a%20greater%20depth%20of%20information.%0a&url=https://smashingmagazine.com%2f2023%2f10%2fanswering-questions-interpreting-page-speed-reports%2f">
      
You might find that CrUX-based results are excellent for more of a current high-level view of performance than they are an accurate reflection of the users on your site because of that 28-day waiting period, which is where full RUM shines with more immediate results and a greater depth of information.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<h2 id="does-lighthouse-use-rum-data-too">Does Lighthouse Use RUM Data, Too?</h2>

<p>It does not! It uses synthetic data, or what we commonly call <strong>lab data</strong>. And, just like RUM, we can explain the concept of lab data by breaking it up into two different types.</p>

<h3 id="type-1-observed-data">Type 1: Observed Data</h3>

<p>Observed data is performance as the browser sees it. So, instead of monitoring real information collected from real users, <strong>observed data</strong> is more like defining the test conditions ourselves. For example, we could add throttling to the test environment to enforce an artificial condition where the test opens the page on a slower connection. You might think of it like racing a car in virtual reality, where the conditions are decided in advance, rather than racing on a live track where conditions may vary.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/3-chrome-devtools-performance-testing-conditions.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="334"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/3-chrome-devtools-performance-testing-conditions.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/3-chrome-devtools-performance-testing-conditions.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/3-chrome-devtools-performance-testing-conditions.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/3-chrome-devtools-performance-testing-conditions.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/3-chrome-devtools-performance-testing-conditions.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/3-chrome-devtools-performance-testing-conditions.png"
			
			sizes="100vw"
			alt="A screenshot with a specific performance tab in Chrome DevTools where you can choose between fast 3G, slow 3G, and offline"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Chrome DevTools includes a separate “Performance” tab where the testing environment’s CPU and network connection can be artificially throttled to mirror a specific testing condition, such as slow internet speeds. (<a href='https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/3-chrome-devtools-performance-testing-conditions.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="type-2-simulated-data">Type 2: Simulated Data</h3>

<p>While we called that last type of data “observed data,” that is not an official industry term or anything. It’s more of a necessary label to help distinguish it from <strong>simulated data</strong>, which describes how Lighthouse (and many other tools that include Lighthouse in their feature sets, such as PSI) <a href="https://www.debugbear.com/blog/simulated-throttling?utm_campaign=sm-2">applies throttling to a test environment and the results it produces</a>.</p>

<p>The reason for the distinction is that <a href="https://www.debugbear.com/blog/network-throttling-methods?utm_campaign=sm-2">there are different ways to throttle a network</a> for testing. Simulated throttling <a href="https://www.debugbear.com/blog/simulated-throttling#what-is-simulated-throttling?utm_campaign=sm-2">starts by collecting data on a fast internet connection</a>, then <strong>estimates how quickly the page would have loaded on a different connection</strong>. The result is a much <em>faster</em> test: Lighthouse can often run the page load and calculate its estimates in less time than it would take to gather and parse the same information on an artificially slowed connection.</p>

<h3 id="simulated-and-observed-data-in-lighthouse">Simulated And Observed Data In Lighthouse</h3>

<p>Simulated data is the data that Lighthouse uses by default for performance reporting. It’s also what PageSpeed Insights uses since it is powered by Lighthouse under the hood, although PageSpeed Insights also relies on real-user experience data from the CrUX report.</p>
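<p>A quick way to see both kinds of data side by side is the PageSpeed Insights API, which returns Lighthouse lab data and CrUX field data in a single response. The following is a minimal sketch, not a full client; the endpoint, the <code>lighthouseResult</code> and <code>loadingExperience</code> response fields, and the <code>largest-contentful-paint</code> audit ID are part of the public v5 API.</p>

```javascript
// Sketch: querying the PageSpeed Insights v5 API, which bundles
// Lighthouse lab data (`lighthouseResult`) and CrUX field data
// (`loadingExperience`) into one response.
const PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";

// Pure helper: build the request URL for a page and strategy ("mobile" or "desktop").
function buildPsiUrl(pageUrl, strategy = "mobile") {
  const params = new URLSearchParams({ url: pageUrl, strategy });
  return `${PSI_ENDPOINT}?${params}`;
}

// Usage (browser, or Node 18+ with global fetch): pull out lab LCP
// alongside the field p75 LCP to compare the two data sources.
async function compareLcpData(pageUrl) {
  const res = await fetch(buildPsiUrl(pageUrl));
  const data = await res.json();
  return {
    labLcpMs: data.lighthouseResult?.audits["largest-contentful-paint"]?.numericValue,
    fieldLcpMs: data.loadingExperience?.metrics?.LARGEST_CONTENTFUL_PAINT_MS?.percentile,
  };
}
```

<p>Comparing the two numbers for the same URL is often the fastest way to spot the lab-versus-field gaps discussed in the rest of this article.</p>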

<p>However, it is also possible to collect observed data with Lighthouse. This data is more reliable since it doesn’t depend on an incomplete simulation of Chrome internals and the network stack. The accuracy of observed data depends on how the test environment is set up. If <a href="https://www.debugbear.com/blog/packet-level-throttling?utm_campaign=sm-2">throttling is applied at the operating system level</a>, then the metrics match what a real user with those network conditions would experience. <a href="https://www.debugbear.com/blog/chrome-devtools-network-throttling#how-exactly-does-devtools-network-throttling-work?utm_campaign=sm-2">DevTools throttling</a> is easier to set up, but doesn’t accurately reflect how server connections work on the network.</p>
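<p>In Lighthouse itself, the choice between these modes is the <code>throttlingMethod</code> setting: <code>simulate</code> (the default), <code>devtools</code> (observed data with DevTools throttling), or <code>provided</code> (no built-in throttling, assuming you throttle externally, e.g., at the OS or packet level). Here is a custom config sketch; the numeric throttling values are illustrative, not recommendations.</p>

```javascript
// custom-config.js — a Lighthouse config sketch choosing how throttling
// is applied. Pass it on the CLI with --config-path=./custom-config.js.
module.exports = {
  extends: "lighthouse:default",
  settings: {
    // "simulate" (default) | "devtools" (observed) | "provided" (external)
    throttlingMethod: "devtools",
    throttling: {
      cpuSlowdownMultiplier: 4,    // slow the CPU 4x during the test
      requestLatencyMs: 150,       // per-request latency (devtools mode)
      downloadThroughputKbps: 1638.4,
      uploadThroughputKbps: 750,
    },
  },
};
```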

<h3 id="limitations-of-lab-data">Limitations Of Lab Data</h3>

<p>Lab data is fundamentally limited by the fact that it only looks at a single experience in a pre-defined environment. This environment often doesn’t even match the average real user on the website, who may have a faster network connection or a slower CPU. Continuous real-user monitoring can actually tell you how users are experiencing your website and whether it’s fast enough.</p>

<p>So why use lab data at all?</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aThe%20biggest%20advantage%20of%20lab%20data%20is%20that%20it%20produces%20much%20more%20in-depth%20data%20than%20real%20user%20monitoring.%0a&url=https://smashingmagazine.com%2f2023%2f10%2fanswering-questions-interpreting-page-speed-reports%2f">
      
The biggest advantage of lab data is that it produces much more in-depth data than real user monitoring.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>Google CrUX data only reports metric values with no debug data telling you how to improve your metrics. In contrast, lab reports contain a lot of analysis and recommendations on how to improve your page speed.</p>

<h2 id="why-is-my-lighthouse-lcp-score-worse-than-the-real-user-data">Why Is My Lighthouse LCP Score <em>Worse</em> Than The Real User Data?</h2>

<p>It’s a little easier to explain different scores now that we’re familiar with the different types of data used by performance auditing tools. We now know that Google reports on the 75th percentile of real users when reporting core web vitals, which includes LCP.</p>

<blockquote>“By using the 75th percentile, we know that most visits to the site (3 of 4) experienced the target level of performance or better. Additionally, the 75th percentile value is less likely to be affected by outliers. Returning to our example, for a site with 100 visits, 25 of those visits would need to report large outlier samples for the value at the 75th percentile to be affected by outliers. While 25 of 100 samples being outliers is possible, it is much less likely than for the 95th percentile case.”<br /><br />&mdash; <a href="https://web.dev/articles/defining-core-web-vitals-thresholds">Brian McQuade</a></blockquote>
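<p>The aggregation described in that quote can be sketched with a small nearest-rank percentile helper. (CrUX’s exact binning and interpolation may differ; this is only meant to show why outliers barely move the p75.)</p>

```javascript
// Simple nearest-rank percentile, as a sketch of the p75 aggregation
// described above.
function percentile(values, p) {
  if (values.length === 0) return undefined;
  const sorted = [...values].sort((a, b) => a - b);
  // Index of the smallest value such that at least p% of samples are <= it.
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Example: 100 LCP samples (ms) where 25 slow outliers leave p75 untouched.
const lcpSamples = [
  ...Array.from({ length: 75 }, () => 1800), // fast visits
  ...Array.from({ length: 25 }, () => 9000), // slow outliers
];
percentile(lcpSamples, 75); // → 1800 — reflects the fast majority
```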

<p>On the flip side, simulated data from Lighthouse neither reports on real users nor accounts for outlier experiences in the same way that CrUX does. <strong>So, if we were to set heavy throttling on the CPU or network of a test environment in Lighthouse, we’re actually embracing outlier experiences that CrUX might otherwise toss out.</strong> Because Lighthouse applies heavy throttling by default, the result is that we get a worse LCP score in Lighthouse than we do in PSI simply because Lighthouse’s data effectively looks at a slow outlier experience.</p>

<h2 id="why-is-my-lighthouse-cls-score-better-than-the-real-user-data">Why Is My Lighthouse CLS Score <em>Better</em> Than The Real User Data?</h2>

<p>Just so we’re on the same page, Cumulative Layout Shift (CLS) measures the <a href="https://web.dev/user-centric-performance-metrics/#types-of-metrics">“visible stability” of a page layout</a>. If you’ve ever visited a page, scrolled down it a bit before the page has fully loaded, and then noticed that your place on the page shifts when the page load is complete, then you know exactly what CLS is and how it feels.</p>

<p>The nuance here has to do with page interactions. We know that real users are capable of interacting with a page even before it has fully loaded. This is a big deal when measuring CLS because layout shifts often occur lower on the page after a user has scrolled down the page. CrUX data is ideal here because it’s based on real users who would do such a thing and bear the worst effects of CLS.</p>

<p>Lighthouse’s simulated data, meanwhile, does no such thing. It waits patiently for the full page load and never interacts with parts of the page. It doesn’t scroll, click, tap, hover, or interact in any way.</p>

<p>This is why you’re more likely to receive a lower CLS score in a PSI report than you’d get in Lighthouse. It’s not that PSI likes you less, but that the real users in its report are a better reflection of how users interact with a page and are more likely to experience CLS than simulated lab data.</p>
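<p>This is also why RUM scripts keep observing layout shifts for as long as the user stays on the page. Here is a simplified sketch using the Layout Instability API; note that the production CLS metric groups shifts into session windows and reports the worst window, while this sketch just sums them.</p>

```javascript
// Sketch of a RUM-style layout shift measurement. Unlike Lighthouse's
// lab run, this keeps observing while real users scroll and interact,
// so it catches shifts that happen further down the page.
function sumLayoutShifts(entries) {
  // Shifts within 500ms of user input are excluded from CLS by spec,
  // which `hadRecentInput` flags for us.
  return entries
    .filter((entry) => !entry.hadRecentInput)
    .reduce((total, entry) => total + entry.value, 0);
}

if (
  typeof PerformanceObserver !== "undefined" &&
  PerformanceObserver.supportedEntryTypes?.includes("layout-shift")
) {
  new PerformanceObserver((list) => {
    console.log("Layout shift total so far:", sumLayoutShifts(list.getEntries()));
  }).observe({ type: "layout-shift", buffered: true });
}
```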

<h2 id="why-is-interaction-to-next-paint-missing-in-my-lighthouse-report">Why Is Interaction to Next Paint Missing In My Lighthouse Report?</h2>

<p>This is another case where it’s helpful to know the different types of data used in different tools and how that data interacts &mdash; or not &mdash; with the page. That’s because the Interaction to Next Paint (INP) metric is <em>all about interactions</em>. It’s right there in the name!</p>

<p>The fact that Lighthouse’s simulated lab data does not interact with the page is a dealbreaker for an INP report. INP is a measure of the latency for all interactions on a given page, where the highest latency &mdash; or close to it &mdash; informs the final score. For example, if a user clicks on an accordion panel and it takes longer for the content in the panel to render than any other interaction on the page, that is what gets used to evaluate INP.</p>

<p>So, when INP <a href="https://web.dev/inp/">becomes an official core web vitals metric in March 2024</a>, and you notice that it’s not showing up in your Lighthouse report, you’ll know exactly why it isn’t there.</p>

<p><strong>Note</strong>: <em>It is possible to script user flows with Lighthouse, including in DevTools. But that probably goes too deep for this article.</em></p>

<h2 id="why-is-my-time-to-first-byte-score-worse-for-real-users">Why Is My Time To First Byte Score <em>Worse</em> For Real Users?</h2>

<p>The Time to First Byte (TTFB) is what immediately comes to mind for many of us when thinking about page speed performance. We’re talking about the time between establishing a server connection and receiving the first byte of data to render a page.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/4-ttfb-page-speed-performance.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="216"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/4-ttfb-page-speed-performance.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/4-ttfb-page-speed-performance.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/4-ttfb-page-speed-performance.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/4-ttfb-page-speed-performance.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/4-ttfb-page-speed-performance.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/4-ttfb-page-speed-performance.png"
			
			sizes="100vw"
			alt="A graph illustrating the Time to First Byte, which consists of full TTFB and HTTP request TTFB"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Source: <a href='https://www.debugbear.com/docs/metrics/time-to-first-byte#ttfb-in-lighthouse'>DebugBear</a>. (<a href='https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/4-ttfb-page-speed-performance.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>TTFB identifies how fast or slow a web server is to respond to requests. What makes it special in the context of core web vitals &mdash; even though it is not considered a core web vital itself &mdash; is that it <em>precedes</em> all other metrics. The web server needs to establish a connection in order to receive the first byte of data and render everything else that core web vitals metrics measure. TTFB is essentially an <strong>indication of how fast users can navigate</strong>, and core web vitals can’t happen without it.</p>
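<p>In the browser, TTFB for the current page can be read from the Navigation Timing API. A minimal sketch: <code>responseStart</code> marks the first byte of the response, measured from the start of the navigation, so it includes redirects, DNS lookup, and connection setup &mdash; one reason field TTFB often exceeds lab TTFB.</p>

```javascript
// Sketch: reading TTFB from a PerformanceNavigationTiming entry.
function ttfbFromNavigationEntry(entry) {
  // startTime is 0 for the page navigation itself, so this is simply
  // the elapsed time until the first response byte arrived.
  return entry.responseStart - entry.startTime;
}

if (typeof performance !== "undefined") {
  const [nav] = performance.getEntriesByType("navigation");
  if (nav) console.log("TTFB (ms):", ttfbFromNavigationEntry(nav));
}
```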

<p>You might already see where this is going. When we start talking about server connections, there are going to be differences between the way that RUM data observes the TTFB versus how lab data approaches it. As a result, we’re bound to get different scores based on which performance tools we’re using and in which environment they are. As such, TTFB is more of a “rough guide,” as <a href="https://web.dev/ttfb/">Jeremy Wagner and Barry Pollard explain</a>:</p>

<blockquote>“Websites vary in how they deliver content. A low TTFB is crucial for getting markup out to the client as soon as possible. However, if a website delivers the initial markup quickly, but that markup then requires JavaScript to populate it with meaningful content […], then achieving the lowest possible TTFB is especially important so that the client-rendering of markup can occur sooner. […] This is why the TTFB thresholds are a “rough guide” and will need to be weighed against how your site delivers its core content.”<br /><br />&mdash; <a href="https://web.dev/ttfb/">Jeremy Wagner and Barry Pollard</a></blockquote>

<p>So, if your TTFB score comes in higher when using a tool that relies on RUM data than the score you receive from Lighthouse’s lab data, it’s probably because of caches being hit when testing a particular page. Or perhaps the real user is coming in from a shortened URL that redirects them before connecting to the server. It’s even possible that a real user is connecting from a place that is really far from your web server, which takes a little extra time, particularly if you’re not using a CDN or running edge functions. It really depends on both the user and how you serve data.</p>

<h2 id="why-do-different-tools-report-different-core-web-vitals-what-values-are-correct">Why Do Different Tools Report Different Core Web Vitals? What Values Are Correct?</h2>

<p>This article has already introduced some of the nuances involved when collecting web vitals data. Different tools and data sources often report different metric values. So which ones can you trust?</p>

<p>When working with lab data, I suggest preferring observed data over simulated data. But you’ll see differences even between tools that all deliver high-quality data. That’s because no two tests are the same, with different test locations, CPU speeds, or Chrome versions. There’s no one right value. Instead, you can use the lab data to identify optimizations and see how your website changes over time when tested in a consistent environment.</p>

<p>Ultimately, what you want to look at is how real users experience your website. From an SEO standpoint, the 28-day Google CrUX data is the gold standard. However, it won’t be accurate if you’ve rolled out performance improvements over the last few weeks. Google also doesn’t report CrUX data for some high-traffic pages because the visitors may not be logged in to their Google profile.</p>

<p>Installing a custom RUM solution on your website can solve that issue, but the numbers won’t match CrUX exactly. That’s because visitors using browsers other than Chrome are now included, as are users with Chrome analytics reporting disabled.</p>

<p>Finally, while Google focuses on the fastest 75% of experiences, that doesn’t mean the 75th percentile is the correct number to look at. Even with good core web vitals, 25% of visitors may still have a slow experience on your website.</p>

<h2 id="wrapping-up">Wrapping Up</h2>

<p>This has been a close look at how different performance tools audit and report on performance metrics, such as core web vitals. Different tools rely on different types of data that are capable of producing different results when measuring different performance metrics.</p>

<p>So, if you find yourself with a CLS score in Lighthouse that is far lower than what you get in PSI or DebugBear, go with the Lighthouse report because it makes you look better to the big boss. Just kidding! That difference is a big clue that the data between the two tools is uneven, and you can use that information to help diagnose and fix performance issues.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/5-debugbear-lcp.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="641"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/5-debugbear-lcp.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/5-debugbear-lcp.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/5-debugbear-lcp.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/5-debugbear-lcp.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/5-debugbear-lcp.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/5-debugbear-lcp.png"
			
			sizes="100vw"
			alt="A screenshot of the DebugBear LCP"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/answering-questions-interpreting-page-speed-reports/5-debugbear-lcp.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Are you looking for a tool to track lab data, Google CrUX data, and full real-user monitoring data? <a href="https://www.debugbear.com/?utm_campaign=sm-2">DebugBear</a> helps you keep track of all three types of data in one place and optimize your page speed where it counts.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item></channel></rss>