Honeycomb & Kubernetes

Transcript:

Pierre Tessier [Sales Engineer|Honeycomb]:

Hi, Pierre Tessier here with another Honeycomb training video. In this one, we’re going to go over Honeycomb and Kubernetes. Kubernetes is a great infrastructure platform that gives developers greater flexibility in the deployment, architecture, and scale of their applications. Let’s take a look at how Honeycomb works in these environments for your applications. Let’s start off with a Kubernetes architecture. It’s a couple of nodes, and every node needs a kubelet to do anything on it. So let’s put some workloads in here. We’ve got a couple of pods running on our nodes, and now we’re going to go ahead and add the Honeycomb Kubernetes agent. What this agent will do is start watching the pods we specify for logs in all kinds of different formats. We can be looking for NGINX logs, JSON logs, Redis logs, even regex-parsed logs, and more. Once it’s found those logs, it’s going to go ahead and stream the structured data from them into Honeycomb. And as more pods come online, the logs from those pods will also get streamed into Honeycomb.

With the agent come two optional components: metrics and events. The metrics component will get resource metrics from the kubelet and send those into Honeycomb. The events component works with the Kubernetes API. As the Kubernetes API emits cluster events, those will get brought into Honeycomb, and we’ll be able to use them there as well. So all together, this is the Honeycomb Kubernetes agent, and what really helps to get this all installed is using Helm. Here’s how we’ll get set up with Helm in two shell lines. First, you need to tell Helm where the repo is: “helm repo add honeycomb” and the URL for our repo. Then “helm install” the Honeycomb chart itself, passing in your API key. This will get you up and running with the standard defaults, which will get some data from your kube-controller-manager as well as data from your kube-scheduler.
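As a sketch, those two lines look something like this. The repo URL, chart name, and value key here are assumptions based on Honeycomb’s public Helm charts; check the chart documentation for the current names before running them:

```shell
# Register Honeycomb's Helm repository (URL assumed; verify against the docs)
helm repo add honeycomb https://honeycombio.github.io/helm-charts

# Install the chart, passing your API key as a value
# (chart and value-key names may differ between chart versions)
helm install honeycomb honeycomb/honeycomb --set honeycomb.apiKey=$HONEYCOMB_API_KEY
```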

You can modify these defaults. They’re called watchers, and the modification will look something like this inside of a YAML file, where you’ll also put in your API key. These are what the default watchers look like, here using the glog parser to parse logs from Kubernetes system components. We’ll get more into what these watchers look like in a little bit. Once you’ve defined this YAML file, save it as something like my-values-file.yaml, and we can apply it and install like this instead: rather than passing an API key as a value, we pass in that values file. You can see the agent source as well as the Helm chart source itself, and get all kinds of great documentation, by following these links. Let’s take a look at what that documentation looks like for the Kubernetes agent.
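A values file mirroring the defaults described above might look roughly like this. The dataset name, label selectors, and overall layout are assumptions drawn from the agent’s default configuration; the chart’s README has the authoritative schema:

```yaml
# A hypothetical values file, e.g. my-values-file.yaml
honeycomb:
  apiKey: YOUR_API_KEY   # your Honeycomb API key goes here

watchers:
  # Stream kube-controller-manager logs, parsed with the glog parser
  - dataset: kubernetes-logs
    namespace: kube-system
    labelSelector: "k8s-app=kube-controller-manager"
    parser: glog
  # Stream kube-scheduler logs, also glog-formatted
  - dataset: kubernetes-logs
    namespace: kube-system
    labelSelector: "k8s-app=kube-scheduler"
    parser: glog
```

The install step then passes the file instead of an inline API key:

```shell
helm install honeycomb honeycomb/honeycomb --values my-values-file.yaml
```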

The GitHub page for the Honeycomb Kubernetes agent does have instructions on how to run this yourself, but I recommend you go to the docs folder, where you’ll find a configuration reference as well as some example configurations. Inside, you’ll find everything you need on how to configure the watchers to find the logs you’re looking for and send them off into Honeycomb. Once you have the Honeycomb Kubernetes agent installed using Helm, you’ll get three brand-new datasets inside of your environment: Kubernetes cluster events, Kubernetes logs, and Kubernetes resource metrics. Inside these datasets, you’ll be able to find the information you’re looking for and run queries such as showing the average CPU load for all of the pods. You could filter this by namespace or whatever’s appropriate for you in your world. I hope you found this video informative. Thank you for watching.
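For example, a query like “average CPU load per pod, filtered to one namespace” can be expressed against Honeycomb’s Query API as JSON roughly like this. The column names cpu.usage, k8s.pod.name, and k8s.namespace are assumptions for illustration; check the actual schema of your Kubernetes resource metrics dataset:

```json
{
  "time_range": 3600,
  "calculations": [
    { "op": "AVG", "column": "cpu.usage" }
  ],
  "filters": [
    { "column": "k8s.namespace", "op": "=", "value": "production" }
  ],
  "breakdowns": ["k8s.pod.name"]
}
```

The same query is usually easier to build interactively in the Honeycomb UI; the JSON form is handy for saving or scripting queries.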

If you see any typos in this text or have any questions, reach out to marketing@honeycomb.io.