Training Videos: OpenTelemetry Observability

OpenTelemetry Collector: Getting Data into Honeycomb

The OpenTelemetry Collector can be used to consolidate and process data from multiple different services. This video explains the OpenTelemetry Collector and how you can use it with Honeycomb to observe existing and new applications.


Pierre Tessier [Sales Engineer|Honeycomb]:

The OpenTelemetry Collector. This is an important piece for your distributed tracing initiatives. Let’s spend some time to understand what it is and how we can use it to get data into Honeycomb. Now, the collector itself is really a telemetry data pipeline service, and its functionality can be broken down into three distinct components. Receivers take in data from multiple sources. Once received, the data can be transformed and processed in multiple ways. Finally, exporters are used to send the telemetry data to the desired backends.

Let’s dig into each component, starting with receivers. Now, as I said, data can be received in multiple formats, be it Jaeger, Zipkin, OpenCensus, or even the new OpenTelemetry protocol (OTLP). Other formats are also supported. You can receive data in all of these formats at once, and once received, it all gets converged into a single pipeline. And thus enters the processing component.
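As a sketch of what this looks like in practice, a receivers section of a Collector configuration might resemble the following (the component names follow the standard Collector conventions; the ports shown are the usual defaults, not values taken from the video):

```yaml
receivers:
  # Jaeger spans over Thrift HTTP
  jaeger:
    protocols:
      thrift_http:
        endpoint: 0.0.0.0:14268
  # Zipkin-format spans
  zipkin:
    endpoint: 0.0.0.0:9411
  # The native OpenTelemetry protocol (OTLP) over gRPC
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
```

All three receivers feed the same pipeline, which is what converges the different formats into a single stream.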

This is where data can be filtered, attributes can be modified, and utility processors can help batch or usher the data through the pipeline, and a lot more. All of these processors can be used to refine, augment, and transform your data to meet your exact observability requirements.
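A hedged sketch of a processors section exercising these ideas (the attribute key `internal.debug_id` is purely illustrative, not from the video):

```yaml
processors:
  # Guard the Collector's memory before other processing happens
  memory_limiter:
    check_interval: 1s
    limit_mib: 400
  # Filter or rewrite span attributes; this key is a made-up example
  attributes:
    actions:
      - key: internal.debug_id
        action: delete
  # Group spans into batches to reduce outbound export calls
  batch:
```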

Finally, we enter the export component. Honeycomb certainly provides an exporter to send all tracing data to. You can set up multiple exporters, and each will receive an exact copy of that data. Perhaps you’re migrating away from Jaeger. With this, you can still send data to both Jaeger and Honeycomb while you migrate your users.
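An exporters section fanning out to both backends during such a migration might be sketched like this (the dataset name and the in-cluster Jaeger hostname are placeholders; the Honeycomb exporter shipped in the Collector contrib distribution of this era):

```yaml
exporters:
  honeycomb:
    api_key: "${HONEYCOMB_API_KEY}"
    dataset: "my-traces"               # placeholder dataset name
  jaeger:
    endpoint: jaeger-collector:14250   # placeholder in-cluster address
```

Because every exporter in a pipeline receives a full copy of the data, both Jaeger and Honeycomb see the same traces.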

With the OpenTelemetry Collector, you can receive data from multiple different inputs, process it, and export it into Honeycomb. You could also work in a scenario where different technologies are used to instrument your different services. Maybe you have some services instrumented using Jaeger, but the future direction is to use the OpenTelemetry protocol instead. Well, the collector will allow you to combine different formats into a single stream while you work on migrating the other services.

Let’s take a look at an example of all of this in action. We’re going to look at a couple of .NET services that talk to each other, instrumented using Jaeger, sending data through an OpenTelemetry Collector. We are looking here at the code used to set up the Jaeger SDK. Of note, we’re using Jaeger’s configuration-from-environment support. We’ll dig into what we’re doing there in a few seconds.

We have a controller set up on our service. When you hit this controller, it’s going to go ahead and call ‘getthingsfromorderAPI’ and get something from the order API. It’s going to make a simple HTTP call to our other .NET service. Let’s go take a look at that one.

This one here also has a controller set up. This controller makes use of the DbContext within .NET. And here we’re doing an orders-to-list call. Let’s take a quick look at what that does. We’re just going to do a lot of database calls here, some database activity, to really generate some spans and some data that we’ll be able to see.

Now, from an environment perspective for Jaeger, we do pass a Jaeger endpoint. This tells Jaeger to use the Thrift HTTP endpoint to send data through. And here we have the OpenTelemetry Collector set up and listening on that endpoint right there.
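In concrete terms, the Jaeger clients read a `JAEGER_ENDPOINT` environment variable for their Thrift HTTP target. A sketch, assuming the Collector is reachable under the placeholder hostname `otel-collector`:

```shell
# Point the Jaeger SDK at the Collector's Jaeger Thrift HTTP receiver.
# 14268 is the conventional port; the hostname is a placeholder.
export JAEGER_ENDPOINT="http://otel-collector:14268/api/traces"
```

With this set, the application’s Jaeger SDK sends spans to the Collector instead of straight to a Jaeger backend.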

Let’s go look at that configuration. Here’s our OpenTelemetry Collector configuration. We’re running this one inside of Kubernetes, so all of our standard receivers here for Jaeger, OTLP, and Zipkin are set up to listen on 0.0.0.0. We have a couple of processors here: batch, memory_limiter, and queued_retry. These are the default recommended processors for the OpenTelemetry Collector. Now, on the exporter side, we’re using Honeycomb, and you can see here we’re going to pass in the API key via an environment variable. There’s also a Jaeger exporter set up, and that one is going to our Jaeger backend, which runs within Kubernetes as well. And this is all tied together within the service section, where we specify which receivers, processors, and exporters to use.
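The service section described here can be sketched as follows (the receiver, processor, and exporter names match the components mentioned in the video; `queued_retry` was a standard processor in Collector versions of that era):

```yaml
service:
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors: [memory_limiter, batch, queued_retry]
      exporters: [honeycomb, jaeger]
```

The pipeline is what ties the three component types together: data from every listed receiver flows through the listed processors, in order, and then to every listed exporter.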

Now, let’s take a look at all this in action. We’re going to start off by hitting our service, running on port 8989, at the API values endpoint. Let’s go ahead and hit that. And just like that, we got that static value string array back, but we’ve generated a trace.

Let’s go look for that trace itself. I’ve already set up this query here in Honeycomb. Let’s go ahead and run it. And when I run it, there it is: my trace. I can click on it and go look at that trace, and you can see it here in its full glory in the waterfall chart, with all the fields, attributes, and span events specified on each span. If we look at one of those database spans, you can see all the various span events that happened underneath it from the .NET engine.

We could also go to Jaeger and do that exact same search, and we’ll see that exact same trace and the exact same waterfall chart. So as you can see, we can use the OpenTelemetry Collector to collect data from Jaeger, or really any other distributed tracing stack out there, and send it to various different backends.
