Training Video

See Inside Your Serverless Execution: Observability in Cloudflare Workers

December 15, 2021



Erwin Van Der Koogh [Product Manager|Cloudflare]: 

Hello, everyone. My name is Erwin van der Koogh, and I’m one of the Product Managers at Cloudflare working on the Workers platform. And, today, I’m really excited to show off one of the integrations we have with Honeycomb. We’re going to go through a demo, show how to install the integration, how to configure it, and then go into a little bit as to how you would use that or what you would use it for. Let’s get started.

One of the challenges, when you start to get into sort of serverless computing, is the debugging story. Like, how do you debug applications where there are multiple components that each talk to each other? And that’s why often debugging one of those things turns very quickly into a murder mystery. And, today, we’re going to show you how you’re going to solve that particular mystery. 

What we’re going to be talking about is how to use the Honeycomb tracer. You can find it at And we’re going to be talking about a super simple demo application. And the application is pretty simple: it plays a super simple game of heads or tails. If you click a button, it will make a request to our backend, and that will tell you whether you’ve won or not. 

Here we have our default export, which is the new syntax that we’re using, but if you’re used to addEventListener, it works pretty much exactly the same. We have the export default with an object that has a fetch method. And if the path is /flip, it will tell you whether you’ve won or lost. And if not, it’ll just serve up the HTML that we just saw. In practice, this looks something like this. That works. Let’s now go and add Honeycomb. And for that, we go to our example two. In example two, our HTML is still exactly the same. We haven’t made any changes here yet. But in our worker, we have made a few small changes.
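The worker just described might look something like the sketch below. This is not the demo’s actual code: the HTML string and the JSON response shape are placeholders.

```javascript
// A minimal sketch of the module-syntax worker described above.
// The HTML and the JSON body shape are assumptions, not the demo's code.
const html = '<html><body><button>Flip a coin!</button></body></html>';

const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === '/flip') {
      // The backend decides whether you won the coin flip.
      const won = Math.random() < 0.5;
      return new Response(JSON.stringify({ won }), {
        headers: { 'content-type': 'application/json' },
      });
    }
    // Any other path just serves up the page with the button.
    return new Response(html, {
      headers: { 'content-type': 'text/html' },
    });
  },
};

export default worker;
```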

The first thing is that we have imported a config and wrap module. You don’t actually need config. It’s just a convenient typing to give you autocomplete. But in here, we have hard-coded our API key and our dataset, which you can definitely do this way, and it’s great for something simple like an example. But for your production applications, you probably want to use environment variables. There’s one for each of those to use. But those are the only two pieces of information that we need.

And, at the bottom, this is the only change that we make. Instead of export default, the object, we just give that a variable name. And we use the method wrapModule, and we give it the config that we created earlier and the worker that we previously exported straight away. And this is everything we need to do. What the wrapModule does is it will automatically create a proxy that will intercept all requests, and it will do all your logging for you.  
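Putting those pieces together, the wrapped worker might look like the sketch below. The package name and the apiKey/dataset field names are assumptions based on what’s shown in the video, so check the library’s README for the exact shape.

```javascript
// A sketch of wrapping a module worker; package and field names are
// assumptions based on the video, so verify them against the library docs.
import { wrapModule } from '@cloudflare/workers-honeycomb-logger';

// Hard-coded keys are fine for an example; in production, read these
// from environment variables instead.
const config = {
  apiKey: 'YOUR_HONEYCOMB_API_KEY',
  dataset: 'my-first-dataset',
};

const worker = {
  async fetch(request) {
    // ... the same /flip handler as before ...
    return new Response('OK');
  },
};

// wrapModule returns a proxy that intercepts every request and does
// all the Honeycomb logging for you.
export default wrapModule(config, worker);
```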

What we can see now is if we do this and then we go to Honeycomb, we now have requests. And the only thing is, like, they’re not very interesting yet because our workers don’t do much. So it takes zero milliseconds to return anything. So that’s where we are. We now know what URLs are used, what browsers and user agents. So we can actually already look at a lot of information, as you can see right here. This is something that I’ll talk about in a bit, but the Honeycomb library will automatically redact certain headers, such as cookies, authorization headers, and referer headers, because we’re trying to make sure we don’t accidentally put any security or personally identifiable information in there.


But, yeah, user agent strings. We already have quite a bit of information here with the response status. So this by itself is already really useful, but how can we make this more useful? Now, what we have to do, going quickly through some of the config settings, because the one we’re going to have to add is a thing called acceptTraceContext. And what this means is that if someone passes in the W3C standard Trace Context headers, we will participate in that trace, and that will allow us to trace from, in this case, the browser to our Cloudflare worker and have them show up in the same trace.

Use data if you want to set any other information on there. Maybe it’s a version of your worker or anything else, really. We talked about the redacted headers for both the request and the response. sampleRates is a really fine-grained way of saying which things you want to trace and which ones you don’t. And this is super important if you have high-traffic sites because you can’t afford to save every single event that happens in your system. So this allows you to very easily go, I’d like 10% of our 200 response codes and all of our errors.

sendTraceContext is the exact same as acceptTraceContext except from the other side. And we’re going to talk about this one in detail as well. And serviceName is just sort of what you want to show up in Honeycomb. So let’s talk about how we’re going to get that acceptTraceContext going. And that takes us to example three, where our worker is exactly the same except for this particular configuration setting. That’s the only thing we have to change on the worker side of things.
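The settings just discussed could be gathered into a config object like the sketch below. The field names follow the video, but the exact shape of sampleRates is an assumption; check the library’s documentation before relying on it.

```javascript
// A sketch of the full configuration discussed above. Field names follow
// the video; the exact shape of sampleRates is an assumption.
const config = {
  apiKey: 'YOUR_HONEYCOMB_API_KEY',
  dataset: 'my-first-dataset',

  // Join an incoming W3C Trace Context trace (browser -> worker).
  acceptTraceContext: true,

  // Propagate the trace context on outgoing subrequests (worker -> service).
  sendTraceContext: true,

  // Extra fields attached to every event, e.g. a version of your worker.
  data: { version: '1.0.3' },

  // Headers scrubbed before events are sent, to avoid leaking secrets or PII.
  redactRequestHeaders: ['authorization', 'cookie', 'referer'],
  redactResponseHeaders: ['set-cookie'],

  // Sampling: keep 1 in 10 of the 2xx responses, but every error.
  sampleRates: { '2xx': 10, '4xx': 1, '5xx': 1, exception: 1 },

  // What this worker shows up as in Honeycomb.
  serviceName: 'coin-flip-worker',
};
```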

But we have to make a few more changes on the HTML side of things. And this looks scary, but it’s just a function for generating unique identifiers. And, as you can see here, we create a regular trace. It takes an action. And trace, as you can see here, generates a trace ID and a span ID, and because this is a root span, it doesn’t have any parent IDs. But once we have this, we can now start to send that trace. The first thing that we do is create that trace. We add our choice to it, because that’s the great thing about Honeycomb: you can just keep adding information that will help you debug later to any trace.


And this is the bit that creates that W3C-defined trace context. The header is called traceparent, and it takes the trace ID and the span ID, and we just add that header to our fetch call. The other thing that we do is record the duration, how long this took. And then what we do here is send our one event, the trace event that we have in the browser, to /_send_honeycomb_event. And what the library does is it intercepts these requests that come in, and it will match them up to see if there’s a trace that came in with that trace ID; and, if so, it’ll send it off to Honeycomb.
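The browser-side snippet just described can be sketched like this. The helper names are assumptions, Math.random stands in for crypto.getRandomValues to keep the sketch self-contained, and /_send_honeycomb_event follows the path mentioned above.

```javascript
// A sketch of the browser-side tracing snippet; helper names are assumed.
// Real code should use crypto.getRandomValues rather than Math.random.
function randomHex(bytes) {
  let out = '';
  for (let i = 0; i < bytes; i++) {
    out += Math.floor(Math.random() * 256).toString(16).padStart(2, '0');
  }
  return out;
}

// A root span: it gets a trace ID and a span ID, but no parent ID.
function startTrace() {
  return { traceId: randomHex(16), spanId: randomHex(8), start: Date.now() };
}

// The W3C Trace Context header: version 00, trace ID, span ID, sampled flag 01.
function traceparent(trace) {
  return `00-${trace.traceId}-${trace.spanId}-01`;
}

async function flip(choice) {
  const trace = startTrace();
  trace.choice = choice; // extra field to help with debugging later
  const res = await fetch('/flip', {
    headers: { traceparent: traceparent(trace) },
  });
  trace.duration_ms = Date.now() - trace.start;
  // Report the browser-side span; the worker library matches it to the
  // trace it already saw and forwards it to Honeycomb.
  await fetch('/_send_honeycomb_event', {
    method: 'POST',
    body: JSON.stringify(trace),
  });
  return res.json();
}
```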

Again, this is everything we need to do to get our… there we go. I won. What we have now is distributed tracing. And here’s what you can see: we have three flips. We can click into this trace ID, and we can now see that we have two spans. This usually happens when you get really low-level timings; it doesn’t always render properly. But this is what we’re going with. We now have distributed tracing between the browser and our worker. But what if our worker wants to, or needs to, talk to another service somewhere on the Internet?

And so, in this case, let’s go talk to Jessica Kerr’s excellent Win With Tracing app she has running on Heroku. Basically, what we do is instead of just returning straight away, if we won, we are going to keep track of how many times someone has won. And what you see here is that we didn’t use the normal global fetch, but we used a thing called tracer.fetch. And tracer is something that the Honeycomb library automatically puts on the request. And this makes sure that this particular fetch request knows which trace it’s a part of.

By using the request.tracer.fetch and by setting our sendTraceContext to true, we now have full end-to-end tracing. Let’s see what that looks like. If we run this query again, we now have this particular trace. And, as you can see now, is here we have our browser and our request. This is the browser. This is the worker. This is a subrequest in the worker. And what you can see is the service, the Win With Tracing service, isn’t particularly fast because it takes a long time to simply get started. 
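That subrequest might look like the sketch below. The Heroku URL is a placeholder, and request.tracer only exists inside a worker that the Honeycomb wrapper library has wrapped.

```javascript
// A sketch of the subrequest with trace propagation. The URL is a
// placeholder; request.tracer is added by the Honeycomb wrapper library.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === '/flip') {
      const won = Math.random() < 0.5;
      if (won) {
        // tracer.fetch knows which trace this request is part of, so the
        // subrequest shows up as a child span in the same trace.
        await request.tracer.fetch('https://win-with-tracing.example.com/win');
      }
      return new Response(JSON.stringify({ won }));
    }
    return new Response('Not found', { status: 404 });
  },
};

export default worker;
```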


But then, we can see the GET coming in. There’s some middleware, parsing a query. There’s a request handler. But here it gets interesting. Like, ah, the ORM is sort of safe. And here, we can start to see that we’re connecting into the database. And this is what’s so great, is we can trace, like, into the database. This is where we start the query. Here’s where we run the insert. And here is where we committed. Here is a select. We connect to and run the select. 

We can now trace all the way from our browser to a database through Cloudflare Workers and an application running on Heroku, which uses the OpenTelemetry library to push events to Honeycomb as well. And here they will be assembled into one trace where you can do your querying. This will help you greatly in finding who did it in your debugging murder mystery when you’re stuck trying to debug something spread out over multiple services. 

I’d love for you to give it a try. If you have any questions, you can find me on the Honeycomb Slack, the Pollinators Slack. You can find me on Twitter or in the Cloudflare Workers Discord channel, which you can find on Thank you very much for your attention. Looking forward to seeing what you do with this.
