Irving Popovetsky [Director of Customer Success|Honeycomb]:
Hi, everyone, today I’m going to demonstrate the new Honeycomb exporter functionality, which we’ve contributed to the OpenTelemetry Collector. The OpenTelemetry Collector is a Swiss Army knife service for tracing. It can receive traces in all of the popular, standard wire formats, including Jaeger, Zipkin, and OpenCensus. You can configure the collector to output those traces in any of those formats, plus to commercial observability tools like Honeycomb. It can fan out to multiple destinations, so you can even comparison shop and transition between different systems.
Furthermore, it has processors, which provide both probabilistic and tail-based sampling, as well as batch senders and retries, and it can even edit or drop certain spans or span attributes. Tail-based sampling is really cool because it waits a predetermined amount of time for all the spans of a trace to come in before making a sampling decision. However, this can be quite memory- and processor-intensive.
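As a rough sketch, a tail-based sampling setup in the collector’s YAML configuration looks something like the following. The policy names, thresholds, and trace counts here are illustrative choices, not values from this demo:

```yaml
processors:
  tail_sampling:
    # Wait this long for all spans of a trace to arrive before deciding --
    # this buffering is where the memory and CPU cost comes from
    decision_wait: 10s
    # Number of in-flight traces kept in memory while waiting
    num_traces: 50000
    policies:
      # Keep every trace that contains an error
      - name: errors
        type: status_code
        status_code: {status_codes: [ERROR]}
      # Keep a random 10% of everything else
      - name: baseline
        type: probabilistic
        probabilistic: {sampling_percentage: 10}
```

The processor would then be referenced in the traces pipeline’s `processors` list alongside the batch processor.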
Okay, let’s dive in and build and deploy the OpenTelemetry Collector. On my workstation, I’ve cloned the opentelemetry-collector-contrib Git repository. This is the repo where all of the contributed exporters, like Honeycomb’s, live. For now, I need to build this myself, at least until the OpenTelemetry project cuts their beta release, which should happen soon. I have Go version 1.13, which is pretty recent. All I need to run is the contrib collector’s Docker make target, which will compile the binary and package it in a Docker container for me. Then I simply need to tag this container and push it to my registry.
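The build-and-push step, sketched out. This assumes the Makefile target and image name the contrib repo used at the time (docker-otelcontribcol and otelcontribcol) and a local registry at localhost:32000, which is the MicroK8s registry default; substitute your own registry address:

```shell
# Compile the contrib collector and package it into a Docker image
make docker-otelcontribcol

# Tag the image for the local registry and push it
docker tag otelcontribcol:latest localhost:32000/otelcontribcol:latest
docker push localhost:32000/otelcontribcol:latest
```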
Great. Now let’s go and add this to our project. Here, I’m using Google’s popular microservices demo project, called Hipster Shop. Hipster Shop is a playground that I highly recommend checking out. There are 10 different microservices written in five different languages, including Go, C#, Node.js, Python, and Java, all of which deploy to a Kubernetes cluster. I’ve modified Hipster Shop by adding Jaeger tracing to most of the services. I’ve also added the Jaeger all-in-one Docker image to the stack, so we have a Jaeger UI to play with. Now, let’s add the OpenTelemetry Collector to this stack.
What we have here is a Kubernetes manifest adapted from the example K8s manifest in the OpenTelemetry Collector README. There are three Kubernetes objects here. First, a ConfigMap, which holds the configuration file for the OpenTelemetry Collector. As you can see, I have Jaeger and Zipkin receivers set up to receive traces in those formats. I also have exporters set up: one for Jaeger, which forwards to the Jaeger all-in-one instance, and one for Honeycomb. For Honeycomb, I need to define the API token and the dataset name. For safety, I’ve configured this token so that sending events to Honeycomb is all it can do. Plus, I’ll delete it after this demo is over.
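The collector configuration inside that ConfigMap looks roughly like this. The API key is a placeholder, the dataset name and the Jaeger endpoint are illustrative, and the honeycomb exporter fields follow the contrib exporter’s documented api_key/dataset options:

```yaml
receivers:
  jaeger:
    protocols:
      thrift_http:     # accept spans from the Jaeger HTTP clients in the app
  zipkin:              # accept spans in Zipkin format (e.g., from Istio)

exporters:
  jaeger:
    endpoint: "jaeger-all-in-one:14250"   # gRPC endpoint of the all-in-one pod (assumed service name)
  honeycomb:
    api_key: "REDACTED"       # a send-only key, per the demo
    dataset: "hipster-shop"   # illustrative dataset name

service:
  pipelines:
    traces:
      receivers: [jaeger, zipkin]
      exporters: [jaeger, honeycomb]   # fan out to both destinations
```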
The next object we have is the Deployment. Let’s update the image tag to the one we just built. Great. Finally, we have the Service definition, which will expose our Deployment to the cluster. The name we chose here, otel-collector, will become a DNS entry in the default namespace. Because I’m using Istio, I’ve also defined a second instance of the collector that runs in the istio-system namespace. It will receive trace spans from Istio and forward them along to the collector that runs alongside our app in the default namespace, in this case both receiving and exporting spans in the Zipkin format.
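A minimal sketch of that Service object. The selector label is an assumption, and the ports are the defaults for the two receivers (14268 for Jaeger thrift-over-HTTP, 9411 for Zipkin):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: otel-collector       # becomes the DNS name otel-collector.default.svc
  namespace: default
spec:
  selector:
    app: otel-collector      # must match the Deployment's pod labels
  ports:
    - name: jaeger-thrift-http
      port: 14268
    - name: zipkin
      port: 9411
```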
Okay, now we’re ready to deploy. On my workstation, I’m using MicroK8s, which is great for local development. I’ve enabled a number of MicroK8s add-ons. DNS and storage are pretty standard, and most folks use those. I’ve enabled the registry add-on, which is what I’ve been pushing all of my Docker images to. And I’ve also enabled the Istio add-on.
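For reference, enabling those add-ons looks like this (add-on names as MicroK8s ships them; check microk8s status on your install):

```shell
# Cluster DNS and a default storage class -- most clusters want these
microk8s enable dns storage
# Local image registry, exposed on localhost:32000 by default
microk8s enable registry
# Istio service mesh, used here for the second collector instance
microk8s enable istio
```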
Looks like it’s ready for us to deploy. Hipster Shop uses a tool called Skaffold to manage its continuous deployment, so once Skaffold is installed, all we need to do is run skaffold run. I’ve already configured Skaffold to build and push my Docker images to this local registry. And because I’ve already built these images a couple of times, Skaffold detected that nothing needed to be rebuilt, and that all of the images had already been pushed to the container registry. In this case, it just created new tags for this deployment and has gone and updated all of the Deployments.
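The Skaffold workflow amounts to something like the following sketch. The default-repo setting assumes the MicroK8s registry address; the project’s skaffold.yaml defines the actual images and manifests:

```shell
# Point Skaffold at the local MicroK8s registry
skaffold config set default-repo localhost:32000
# Build (or skip, if unchanged), push, tag, and deploy in one step
skaffold run
```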
All right, and you can see the deployment I did about 10 minutes ago is being replaced with the one we just ran. Great, everything’s up and running. Now, let’s head over to Honeycomb and see the traces from our app. What I can do here is run a new query and look at some of the traces we’ve received within the last couple of minutes. We shouldn’t be too worried about how long some of these requests took, because this is a local development system, after all. Let me open one of these traces up so we can confirm that we are, in fact, getting the complete trace. Great.
All right, I can also go back and take a look at the latency of the system and see that some requests take much, much longer than others, so we can use our BubbleUp tool to dive into those. In this case, it looks like some of the slowest endpoints were the cart endpoint, which goes to the cart service, as well as the home page itself, which isn’t great. That could be interesting to dive into later.
Anyway, that’s all I have for today, thanks for watching.