OpenTelemetry Browser Instrumentation

By Michael Sickles  |   Last modified on August 15, 2022

One of the most common questions we get at Honeycomb is “What insights can you get in the browser?” Browser-based code has become orders of magnitude more complex than it used to be. There are many different patterns and, with the rise of Single Page App frameworks, a lot of the work that was traditionally done in a backend or middle layer is now being pushed up to the browser. 

Instead, the questions should be: What insights do frontend engineers want? What types of questions are they going to ask? In this post, I’ll walk through getting started on browser instrumentation so you can get answers to any of your questions. 

Browser instrumentation data flow

Currently, OpenTelemetry browser traces are sent via OTLP (OpenTelemetry Protocol) with HTTP/JSON. Honeycomb only supports directly ingesting data via OTLP with HTTP/protobuf or gRPC/protobuf. 

This means you will need to set up an OpenTelemetry Collector to accept browser traces before sending them on to Honeycomb. Routing through a collector is also the recommended approach: secrets are never safe in the browser, so embedding your Honeycomb API keys in browser-exposed code is a bad idea. With a collector, you can store sensitive credentials server-side and ensure that any data sent to Honeycomb is processed safely (along with any additional data controls the collector enables).


An OpenTelemetry Collector typically collects your browser telemetry and takes responsibility for transmitting that data to Honeycomb.

OpenTelemetry Collector Configuration for Frontend Monitoring

The first step to enable getting browser traces is setting up a collector and making it available to the public. For those in a Kubernetes world, we offer a helm chart to get you up and running quickly. The OpenTelemetry Collector docs site also has alternative ways to deploy the collector to fit your needs (if you’re not using Kubernetes). 

Setting up a collector means you have to set up a pipeline. With OpenTelemetry, think of a pipeline as defining how data is received, processed, then exported to other places. Below is an example pipeline for receiving OTLP data, then sending it to Honeycomb for a collector on v0.41.0.

receivers:
  otlp:
    protocols:
      http:
        endpoint: "0.0.0.0:4318"
        cors_allowed_origins:
          - https://*.<yourdomain>.com
processors:
  batch:
exporters:
  otlp/honeycomb:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"
      "x-honeycomb-dataset": "YOUR_DATASET"
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/honeycomb]

This snippet shows the yaml configuration options to accept HTTP data and route it to Honeycomb.

In the snippet above, notice that the receiver accepts HTTP data on port 4318, and the cors_allowed_origins setting controls which origins are allowed to send that data. Next, you’ll need to make the collector available to the Internet so browsers can post to it. (Configuring an OpenTelemetry Collector to be publicly reachable is beyond the scope of this post, since it depends heavily on your infrastructure.) You will also need a load balancer configured to accept that traffic, terminate the SSL connection, and forward it to your collector on the correct port.
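Once the collector is reachable, a quick way to sanity-check it is to POST an empty OTLP/JSON payload to its /v1/traces path from a browser console on one of your allowed origins. This is a sketch only; the endpoint URL below is a placeholder for your own collector address:

```javascript
// An empty but valid OTLP/JSON payload: no spans, just the top-level shape.
const emptyPayload = JSON.stringify({ resourceSpans: [] });

// Placeholder endpoint; replace with your collector's public address.
const endpoint = 'https://collector.example.com/v1/traces';

// Uncomment in a browser console to test connectivity and CORS:
// fetch(endpoint, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: emptyPayload,
// }).then((res) => console.log('collector status:', res.status));

console.log('POST', endpoint, emptyPayload);
```

A 200-series response means the receiver and your CORS settings are working; a browser CORS error means the origin is not in cors_allowed_origins.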

Connect your browser instrumentation

Next, you need to set up your browser instrumentation. You’ll need NPM to install the required OpenTelemetry libraries. It’s best to start with OpenTelemetry’s auto-instrumentation package for a quick and easy install. Install the necessary packages with NPM, then create a file to load the instrumentation. For example, below is a tracing.js file you can use to get started:

import { WebTracerProvider } from '@opentelemetry/sdk-trace-web';
import { getWebAutoInstrumentations } from '@opentelemetry/auto-instrumentations-web';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { ZoneContextManager } from '@opentelemetry/context-zone';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';

const exporter = new OTLPTraceExporter({
  url: 'https://<your collector endpoint>:443/v1/traces',
});

const provider = new WebTracerProvider({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: 'browser',
  }),
});

provider.addSpanProcessor(new BatchSpanProcessor(exporter));

provider.register({
  contextManager: new ZoneContextManager(),
});

registerInstrumentations({
  instrumentations: [
    getWebAutoInstrumentations({
      // load custom configuration for xml-http-request instrumentation
      '@opentelemetry/instrumentation-xml-http-request': {
        // /.+/g matches every URL and is an example value only;
        // narrow it to your backend origins in production
        propagateTraceHeaderCorsUrls: [/.+/g],
      },
      // load custom configuration for fetch instrumentation
      '@opentelemetry/instrumentation-fetch': {
        propagateTraceHeaderCorsUrls: [/.+/g],
      },
    }),
  ],
});

This snippet shows configuration of OpenTelemetry to send data to your collector with some default auto-instrumentations enabled.

And here are the dependencies used:

npm install --save @opentelemetry/api
npm install --save @opentelemetry/sdk-trace-web
npm install --save @opentelemetry/exporter-trace-otlp-http
npm install --save @opentelemetry/auto-instrumentations-web
npm install --save @opentelemetry/context-zone

This snippet shows the commands to load in the dependencies for OpenTelemetry in your web application.

For JavaScript, you register which instrumentation packages to load. Walking through the configuration above:

  • getWebAutoInstrumentations automatically loads the DocumentLoad, Fetch, UserInteraction, and XMLHttpRequest instrumentation libraries.
  • DocumentLoad is useful to see how fast your page loads and which resources are taking the longest to return.
  • Fetch and XMLHttpRequest will instrument outgoing REST API calls.

*The tracing file and dependencies are based on sdk/api v1.0.1 and exporter v0.27.0.

Frontend to backend

The propagateTraceHeaderCorsUrls setting for the XMLHttpRequest and Fetch instrumentations passes trace headers along with matching backend calls so that frontend requests and backend requests can be traced together. Essentially, this is the tracing glue between the frontend and the backend.
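To illustrate, propagateTraceHeaderCorsUrls takes patterns that are checked against each outgoing request URL, and only matching requests get the trace propagation headers. Below is a simplified sketch of that matching — not the library’s exact implementation, and the URLs and patterns are made up:

```javascript
// Simplified sketch of propagateTraceHeaderCorsUrls matching.
// The pattern and URLs here are illustrative only.
const propagatePatterns = [/^https:\/\/api\.yourdomain\.com\//];

function shouldPropagate(url) {
  // Trace headers are only attached when the request URL matches a pattern.
  return propagatePatterns.some((pattern) => pattern.test(url));
}

console.log(shouldPropagate('https://api.yourdomain.com/users'));
console.log(shouldPropagate('https://third-party.example.com/cdn.js'));
```

Scoping the patterns to your own backend origins matters: sending trace headers to third-party domains can trigger CORS failures on requests that worked before.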

Package your tracing

The configuration file above needs to be packaged and served to the frontend. Depending on your frontend framework of choice, this can be as simple as importing the file in an existing snippet of JavaScript. Or, you may need to use a bundler to put all dependencies into one file. OpenTelemetry documentation recommends using Parcel. To use it, you would run the following command:

npx parcel tracing.js

This snippet shows using Parcel to package and generate a JavaScript file that can be used in the frontend.

This command creates a tracing.js file, with all dependencies loaded and packaged together, that can then be served to your frontend. The default parcel settings will write this file to dist/tracing.js. Somewhere early in your HTML, like in a header, you can load this file like so:

<script type="text/javascript" src="/dist/tracing.js" ></script>

Having done this, you should start seeing some frontend information appear in Honeycomb. Here is an example of the DocumentLoad plugin timing a page:

A trace viewed in Honeycomb of a web page load

Immediately, you can ask some interesting questions like “How long does it take to load a given resource on average? Which resource takes the longest to load?”

A Honeycomb query showing which page resources are taking the longest to load

Navigating instrumentation libraries

There are more instrumentation libraries out in the community ecosystem, depending on what you use. For example, if you use React, there’s a plugin for that. The best place to search for instrumentation libraries is in the OpenTelemetry repositories (like the contrib repository). Searching for @opentelemetry/instrumentation on NPM is also a great way to see other community-created plugins.

An NPM search showing community plugins for OpenTelemetry and javascript.

Make sure to check out each instrumentation’s README for exact configuration details, but in general you can expect to see a pattern similar to:

 instrumentations: [
   new UsefulInstrumentationYouFind(),
 ],

This snippet shows how instrumentation plugins are generally loaded for Javascript in the tracing.js file.

Check out the CNCF #otel-js Slack channel as well; it can be a great resource for seeing what is out there.

Add custom instrumentation

Oftentimes, the most interesting data is specific to your own applications, and you won't get the data you need from auto-instrumentation packages. OpenTelemetry offers an API you can use to add custom instrumentation manually.

Adding context to spans

It’s often beneficial to add context to a currently executing span in a trace. For example, you may have an application or service that handles users, and you want to associate the user with the span when querying your dataset in Honeycomb. In order to do this, get the current span from the context and set an attribute with the user ID:

const api = require("@opentelemetry/api");

function handleUser(user) {
 let activeSpan = api.trace.getSpan(api.context.active());
 activeSpan.setAttribute("user_id", user.getId());
}

This snippet shows loading in the OpenTelemetry API and setting attributes on a span.

Creating new spans

Auto-instrumentation can show the shape of requests to your system, but only you know which parts are important. In order to get a complete picture of what’s happening in your browser requests, you will need to add custom instrumentation and create custom spans. 

To create new spans, grab the tracer from the OpenTelemetry API and use that to start a new span:

const api = require("@opentelemetry/api");

function runQuery() {
 let tracer = api.trace.getTracer("my-tracer");
 let span = tracer.startSpan("expensive-query");
 // ... do cool stuff
 span.end(); // end the span so it gets exported
}

This snippet shows loading the OpenTelemetry API and creating new spans.

Optimize Frontend Performance With Honeycomb

Following the above steps will help you use OpenTelemetry to set up frontend instrumentation so that you can debug browser requests. Between auto instrumentation and custom instrumentation, this setup should get you started down the path of using observability to generate insights on browser requests. With proper instrumentation, there are many different types of questions you may ask about frontend performance. 

Using Honeycomb to analyze your OpenTelemetry data means that you can get fast answers to your questions about frontend performance. Interested in trying it out? Sign up for a free Honeycomb account.

