Tracking Core Web Vitals with Honeycomb and Vercel

Google’s Core Web Vitals (CWVs) are used to rank the performance of mobile sites or pages. It’s easy to see when your CWV scores are poor, but it’s not always clear exactly why that’s happening. In Honeycomb’s new guide, Tracking Core Web Vitals with Honeycomb and Vercel, you can learn how to capture, analyze, and debug your real-world CWV performance using a free Honeycomb account.

What are Core Web Vitals?

In 2020, Google introduced three new page performance metrics, known as the Core Web Vitals (CWVs). CWVs measure different aspects of a good user experience on the web and, in 2021, they were added to the Google Search algorithm as a signal for ranking mobile webpages. Sites or pages with good CWVs are eligible for the Search Carousel and can rank higher in search results.

The three CWV metrics are Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS). Collectively, these measures capture how long it takes page elements to render, how responsive (or snappy) the app feels, and how likely it is that page elements might unexpectedly shift (often causing users to click the wrong things). These measures are critical to providing a good user experience for web applications.

Google surfaces the CWV metrics it measures in most of its popular tools (like Search Console or Google Analytics). But many developers experience a discrepancy between how Google reports CWV metrics and what they’re able to replicate themselves.

Why is it hard to measure and debug Core Web Vitals for yourself?

Google scores Core Web Vitals using the Chrome UX Report (CrUX), which is generated from real-world usage data. Most developers, by contrast, use web performance tools that report a site’s CWV scores using test data gathered in a lab setting. Lab testing can’t capture the actual network and device conditions that users experience. As a result, there are significant gaps between what most developers can measure for themselves and what Google reports.

When it comes to CWVs, most developers therefore take a monitoring approach to finding issues: if CWV scores get too high, trigger an alert so that someone can investigate. Next.js, for example, has built-in support for capturing Core Web Vitals performance metrics (a sketch follows below). Some tools use Real User Monitoring (RUM) to analyze real-world user sessions, and they can report CWV scores that more closely match what Google measures.
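As a rough sketch of what that can look like in a Next.js app (Pages Router), you can export a reportWebVitals function from pages/_app.tsx and forward each metric somewhere you can alert on it. The /api/vitals endpoint here is a placeholder of our own, not something the guide prescribes:

```tsx
// pages/_app.tsx: Next.js (Pages Router) lets you export a reportWebVitals hook.
// NOTE: /api/vitals is a placeholder endpoint for wherever you collect metrics.
import type { AppProps, NextWebVitalsMetric } from 'next/app';

export function reportWebVitals(metric: NextWebVitalsMetric) {
  // Forward only the Core Web Vitals; Next.js also reports its own custom metrics.
  if (['LCP', 'FID', 'CLS'].includes(metric.name)) {
    // keepalive lets the request finish even if the user navigates away mid-send.
    fetch('/api/vitals', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(metric),
      keepalive: true,
    });
  }
}

export default function MyApp({ Component, pageProps }: AppProps) {
  return <Component {...pageProps} />;
}
```

From there, whatever receives those metrics can compare them against your alert thresholds.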

However, it’s important to note that CWV scores are captured as metrics: aggregated measures of overall performance. The tools and approaches above can tell you when a page is performing below a desired threshold, but not why that’s happening. To determine why performance is slow, you need to dig beyond aggregate measures and into the individual user experiences at the root of the issues you’re debugging.

For example, let’s say you get an alert that LCP is 3s, FID is 120ms, and CLS is 0.5. That tells you there’s a problem. But it doesn’t tell you where that problem is happening, or under what circumstances. As the page developer, you’d then have to kick off your own murder-mystery-style whodunnit: following a trail of breadcrumbs, checking various systems of record, spelunking through your code, and burning through completely wrong suspects, dead-end hypotheses, and several hours of dramatic twists and turns before you get close to replicating the right set of circumstances and understanding the complex combination of factors behind an elusive bug.

Unfortunately, as developers, we’re all too accustomed to that arduous, crime-scene-investigation style of debugging. But there’s a better way: you could instead debug issues in minutes (or seconds!). And it all starts with capturing wide events and analyzing them with a tool like Honeycomb.

How can you quickly identify elusive issues with Core Web Vitals?

Monitoring tools can tell you that an issue is happening. Observability tools can help you see the causal factors behind why that issue is happening. Honeycomb surfaces issues happening at the individual user level. For mobile sites and pages, that means capturing details about how every single page element performs, every single time a user loads your page. Unlike aggregate measures that simply tell you whether overall performance is good or bad, this type of detailed event data can show you what is different between the “good” (or snappy) page loads and the “bad” (or sluggish) ones.

This low-level analysis is particularly useful when chasing down elusive issues that aren’t immediately obvious. Continuing the example above, let’s say you get an alert that CWV scores are too high. With Honeycomb, you start by visualizing overall system performance. Perhaps you notice that only a particular subset of page loads is responsible for the abnormally high CWV scores (skewing the entire average). You would then dig into what that set of page loads has in common, and how it differs from pages with lower CWV scores. Your analysis might show that all of the page loads with high CWV scores originate from Android users in Canada using the French-Canadian language pack. Now you know enough about what you’re looking for to replicate the issue, and which questions to ask in order to find the answers you need.
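Slicing by device, country, or language pack like that is only possible if each page load carries those context attributes alongside its CWV values. Here’s a minimal sketch of what such wide events might look like, using the open-source web-vitals library (v3 API assumed); the /api/vitals endpoint and the attribute names are illustrative only, not Honeycomb’s actual instrumentation (the guide shows the real setup):

```typescript
// A minimal sketch using the open-source web-vitals library (v3 API assumed).
// The /api/vitals endpoint and attribute names are illustrative placeholders.
import { onCLS, onFID, onLCP, type Metric } from 'web-vitals';

function sendVital(metric: Metric) {
  const event = {
    name: metric.name,   // "CLS" | "FID" | "LCP"
    value: metric.value,
    // Context attributes are what let you group and compare page loads later,
    // e.g. by device, locale, or page.
    'browser.language': navigator.language,   // e.g. "fr-CA"
    'browser.user_agent': navigator.userAgent,
    'page.path': location.pathname,
  };
  // sendBeacon survives page unload, which matters for late-reported metrics like CLS.
  navigator.sendBeacon('/api/vitals', JSON.stringify(event));
}

onCLS(sendVital);
onFID(sendVital);
onLCP(sendVital);
```

The more context each event carries, the more dimensions you can group by when you start asking why a subset of page loads is slow.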

How can you debug Core Web Vitals for your Vercel apps?

A detailed step-by-step approach is available in the Tracking Core Web Vitals with Honeycomb and Vercel guide. It shows you how to capture, analyze, and debug your real-world CWV performance using a free Honeycomb account, a sample app, and a hobby Vercel account. Designed for beginners, the guide walks you through using Honeycomb with a simple frontend web app to debug high Core Web Vitals scores and correctly identify the sources of issues.

The guide delves into the concepts covered in this blog post in much greater detail. It walks you through the steps we took to generate the debugging data from the sample app. It also shows you how to use Honeycomb tools like the Query Builder and BubbleUp to quickly surface causal attributes when analyzing anomalous user sessions.

Try it today

The approaches in this guide fall well within the free usage tiers for both Honeycomb and Vercel. Try it for yourself today with the samples, then apply the same concepts to your own Vercel applications.

Let us know what you think. When you sign up for a free Honeycomb account, you’ll be invited to the Pollinators Community Slack group. Drop in with any questions or comments. We’d love to hear from you.

George Miranda

Head of Ecosystem & Partnerships

George is a talky person that makes with the mouth words and the typey-typey. He loves bringing tools to market that improve the lives of engineers managing production systems (PagerDuty, Buoyant, Chef Software). He enjoys roaming the world in a nomad-ish fashion, small batch artisanal whiskey, and writing third-person biographies no one reads.
