
How To Fix Largest Contentful Paint Issues With Subpart Analysis
smashingmagazine.com
This article is sponsored by DebugBear.

The Largest Contentful Paint (LCP) in Core Web Vitals measures how quickly a website loads from a visitor's perspective. It looks at how long after opening a page the largest content element becomes visible. If your website is loading slowly, that's bad for user experience and can also cause your site to rank lower in Google.

When trying to fix LCP issues, it's not always clear what to focus on. Is the server too slow? Are images too big? Is the content not being displayed? Google has been working to address that recently by introducing LCP subparts, which tell you where page load delays are coming from. They've also added this data to the Chrome UX Report, allowing you to see what causes delays for real visitors on your website!

Let's take a look at what the LCP subparts are, what they mean for your website speed, and how you can measure them.

The Four LCP Subparts

LCP subparts split the Largest Contentful Paint metric into four different components:

- Time to First Byte (TTFB): How quickly the server responds to the document request.
- Resource Load Delay: Time spent before the LCP image starts to download.
- Resource Load Time: Time spent downloading the LCP image.
- Element Render Delay: Time before the LCP element is displayed.

The resource timings only apply if the largest page element is an image or background image. For text elements, the Load Delay and Load Time components are always zero.

How To Measure LCP Subparts

One way to measure how much each component contributes to the LCP score on your website is to use DebugBear's website speed test. Expand the Largest Contentful Paint metric to see subparts and other details related to your LCP score.

Here, we can see that TTFB and image Load Duration together account for 78% of the overall LCP score. That tells us that these two components are the most impactful places to start optimizing.

What's happening during each of these stages?
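Before digging into each stage, note where figures like that 78% come from: the four subparts always add up to the total LCP time, so their shares can be compared directly. As an illustration with hypothetical timings (not the actual measurements from this test):

```javascript
// Illustrative LCP subpart timings in milliseconds (hypothetical values).
const subparts = {
  ttfb: 1000,        // Time to First Byte
  loadDelay: 150,    // Resource Load Delay
  loadDuration: 950, // Resource Load Time
  renderDelay: 400,  // Element Render Delay
};

// The four subparts sum to the LCP value itself.
const lcp = Object.values(subparts).reduce((a, b) => a + b, 0); // 2500 ms

// Share of LCP spent on TTFB plus the image download.
const share = (subparts.ttfb + subparts.loadDuration) / lcp;
console.log(`${Math.round(share * 100)}% of LCP`); // "78% of LCP"
```
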
A network request waterfall can help us understand what resources are loading through each stage.

The LCP Image Discovery view filters the waterfall visualization to just the resources that are relevant to displaying the Largest Contentful Paint image. In this case, each of the first three stages contains one request, and the final stage finishes quickly with no new resources loaded. But that depends on your specific website and won't always be the case.

Time To First Byte

The first step to display the largest page element is fetching the document HTML. We recently published an article about how to improve the TTFB metric.

In this example, we can see that creating the server connection doesn't take all that long. Most of the time is spent waiting for the server to generate the page HTML. So, to improve the TTFB, we need to speed up that process or cache the HTML so we can skip the HTML generation entirely.

Resource Load Delay

The resource we want to load is the LCP image. Ideally, we just have an <img> tag near the top of the HTML, and the browser finds it right away and starts loading it.

But sometimes, we get a Load Delay, as is the case here. Instead of loading the image directly, the page uses lazysizes.js, an image lazy loading library that only loads the LCP image once it has detected that it will appear in the viewport.

Part of the Load Delay is caused by having to download that JavaScript library. But the browser also needs to complete the page layout and start rendering content before the library will know that the image is in the viewport. After finishing the request, there's a CPU task (in orange) that leads up to the First Contentful Paint milestone, when the page starts rendering. Only then does the library trigger the LCP image request.

How do we optimize this? First of all, instead of using a lazy loading library, you can use the native loading="lazy" image attribute.
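In markup, swapping the library for the built-in attribute looks like this (a minimal sketch; the file name and alt text are placeholders):

```html
<!-- Below-the-fold image: the browser defers loading it natively,
     with no JavaScript library required. -->
<img src="/images/gallery-photo.jpg" loading="lazy"
     alt="Gallery photo" width="800" height="600">
```
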
That way, loading images no longer depends on first loading JavaScript code.

More importantly, the LCP image should not be lazily loaded at all. That way, the browser can start loading it as soon as the HTML code is ready. According to Google, you should aim to eliminate resource load delay entirely.

Resource Load Duration

The Load Duration subpart is probably the most straightforward: you need to download the LCP image before you can display it!

In this example, the image is loaded from the same domain as the HTML. That's good because the browser doesn't have to connect to a new server.

Other techniques you can use to reduce load duration:

- Use a modern image format that provides better compression.
- Load images at a size that matches the size they are displayed at.
- Deprioritize other resources that might compete with the LCP image.

Element Render Delay

The fourth and final LCP component, Render Delay, is often the most confusing. The resource has loaded, but for some reason, the browser isn't ready to show it to the user yet!

Luckily, in the example we've been looking at so far, the LCP image appears quickly after it's been loaded.

One common reason for render delay is that the LCP element is not an image. In that case, the render delay is caused by render-blocking scripts and stylesheets. The text can only appear after these have loaded and the browser has completed the rendering process.

Another reason you might see render delay is when the website preloads the LCP image. Preloading is a good idea, as it practically eliminates any load delay and ensures the image is loaded early. However, if the image finishes downloading before the page is ready to render, you'll see an increase in render delay on the page. And that's fine!
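Preloading itself is a one-line hint in the document head (a sketch; the image path is a placeholder, and fetchpriority="high" additionally asks the browser to prioritize the request):

```html
<head>
  <!-- Start fetching the hero (LCP) image before the browser
       discovers it in the page layout. -->
  <link rel="preload" as="image" href="/images/hero.jpg" fetchpriority="high">
</head>
```
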
You've improved your website speed overall, but after optimizing your image, you've uncovered a new bottleneck to focus on.

LCP Subparts In Real User CrUX Data

Looking at the Largest Contentful Paint subparts in lab-based tests can provide a lot of insight into where you can optimize. But all too often, the LCP in the lab doesn't match what's happening for real users!

That's why, in February 2025, Google started including subpart data in the CrUX data report. It's not (yet?) included in PageSpeed Insights, but you can see those metrics in DebugBear's Web Vitals tab.

One super useful bit of info here is the LCP resource type: it tells you how many visitors saw the LCP element as a text element or as an image.

Even for the same page, different visitors will see slightly different content. For example, different elements are visible based on the device size, or some visitors will see a cookie banner while others see the actual page content.

To make the data easier to interpret, Google only reports subpart data for images. If the LCP element is usually text on the page, then the subparts info won't be very helpful, as it won't apply to most of your visitors. But breaking down a text LCP is relatively easy: everything that's not part of the TTFB score is render delay.

Track Subparts On Your Website With Real User Monitoring

Lab data doesn't always match what real users experience. CrUX data is superficial, only reported for high-traffic pages, and takes at least four weeks to fully update after a change has been rolled out.

That's why a real user monitoring tool like DebugBear comes in handy when fixing your LCP scores. You can track scores across all pages on your website over time and get dedicated dashboards for each LCP subpart.

You can also review specific visitor experiences, see what the LCP image was for them, inspect a request waterfall, and check LCP subpart timings.
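If you want to compute subpart timings yourself, the arithmetic is straightforward once you have the navigation, LCP, and resource entries from the browser's Performance APIs. Below is a minimal sketch under those assumptions: computeLcpSubparts is a hypothetical helper, and the entry objects in the example are mocked rather than real browser measurements.

```javascript
// Compute LCP subparts from performance entry timings (all values in ms).
// In a browser, navEntry would come from performance.getEntriesByType("navigation"),
// lcpEntry from a PerformanceObserver for "largest-contentful-paint", and
// resourceEntry from the "resource" entry whose name matches lcpEntry.url.
function computeLcpSubparts(navEntry, lcpEntry, resourceEntry) {
  const ttfb = navEntry.responseStart;
  // Text LCP elements have no resource: load delay and duration are zero.
  const start = resourceEntry ? resourceEntry.requestStart : ttfb;
  const end = resourceEntry ? resourceEntry.responseEnd : ttfb;
  return {
    ttfb,
    loadDelay: Math.max(0, start - ttfb),
    loadDuration: Math.max(0, end - start),
    renderDelay: Math.max(0, lcpEntry.startTime - end),
  };
}

// Mocked example timings:
const subparts = computeLcpSubparts(
  { responseStart: 200 },
  { startTime: 1000 },
  { requestStart: 500, responseEnd: 900 },
);
console.log(subparts); // { ttfb: 200, loadDelay: 300, loadDuration: 400, renderDelay: 100 }
```

The four values always add up to the LCP time itself, matching the breakdown described above.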
Sign up for a free trial.

Conclusion

Having more granular metric data available for the Largest Contentful Paint gives web developers a big leg up when making their websites faster. Including subparts in CrUX provides new insight into how real visitors experience your website and can tell you if the optimizations you're considering would really be impactful.