Kubernetes Data Consolidation with Honeycomb
Kubernetes is complicated, which means identifying the causes of problems within a cluster is frequently difficult. Is something wrong with a particular node? Or an app’s latest code? Did a customer start sending traffic that’s causing the service to behave weirdly? Does a pod have a noisy neighbor?
By consolidating Kubernetes app and cluster logs into a single stream of rich events, Honeycomb makes it possible to ask unconstrained questions, so you can track down and debug issues quickly.
Data Collection
The Honeycomb Kubernetes Agent runs on each node (typically as a DaemonSet), tails the log files for each container, parses their contents into structured events, and sends those events to Honeycomb. Because the agent runs alongside your workloads rather than inside them, it can be rolled out without changing existing deployments.
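As a rough sketch, a DaemonSet like the one below runs one agent pod per node and mounts the node's container log directories so the agent can tail them. The image tag, secret name, and mount paths are assumptions based on typical log-agent deployments; check the official Honeycomb Kubernetes Agent manifest for the exact values.

```yaml
# Minimal sketch: deploy the agent as a DaemonSet (one pod per node).
# Image tag, secret name, and mount paths are illustrative assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: honeycomb-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: honeycomb-agent
  template:
    metadata:
      labels:
        app: honeycomb-agent
    spec:
      containers:
        - name: honeycomb-agent
          image: honeycombio/honeycomb-kubernetes-agent:head   # assumed image tag
          env:
            - name: HONEYCOMB_WRITEKEY
              valueFrom:
                secretKeyRef:
                  name: honeycomb-writekey   # assumed Secret holding your API key
                  key: key
          volumeMounts:
            # Container log files live on the node's filesystem; mount them
            # read-only so the agent can tail them.
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
```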
The agent is also available as a ksonnet mixin, so it can be composed with existing deployments. For example, instead of running the agent as a DaemonSet, you might add it as a sidecar to an existing deployment to watch log files at a particular path.
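For illustration, a sidecar arrangement might look like the sketch below: the application writes its log files to a shared emptyDir volume, and an agent container in the same pod tails that path. The deployment name, images, and paths here are hypothetical.

```yaml
# Sketch of running the agent as a sidecar instead of a DaemonSet.
# The app writes logs to a shared volume; the agent tails that path.
# Names, images, and paths are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example.com/my-app:1.4.2              # hypothetical app image
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/my-app               # app writes its log files here
        - name: honeycomb-agent
          image: honeycombio/honeycomb-kubernetes-agent:head  # assumed image tag
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/my-app
              readOnly: true                           # agent only needs to read the logs
      volumes:
        - name: app-logs
          emptyDir: {}                                 # shared scratch volume for log files
```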
Getting Answers
With Honeycomb, you can ask questions of consolidated app and cluster data:
- How did response time change after the canary deployment?
- When did that deployment’s containers start crashing?
- How did performance change when we decreased resource limits for an application container?
Instead of simply consuming logs, the Honeycomb Kubernetes Agent lets you define how logs from a particular pod are handled (see the configuration sketch after this list). This is important when running third-party apps (e.g., reverse proxies, queues, or databases) whose log output you don't fully control. This makes it possible to follow an investigation across different layers of the architecture, instead of being constrained by cluster-level or app-level metrics:
- What does latency look like for a particular customer's requests?
- How about on a particular container image?
- For a particular customer and on a particular container image?
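To give a feel for per-pod handling, the agent's configuration (typically mounted from a ConfigMap) maps label selectors to parsers, so an nginx proxy's logs are parsed one way and a JSON-logging app another. The selectors, parsers, and dataset names below are example values; check the agent's documentation for the exact configuration keys it supports.

```yaml
# Illustrative agent configuration: each watcher picks out pods by label
# and says how their logs should be parsed and which dataset to send to.
# Selectors, parsers, and dataset names are example values.
watchers:
  - labelSelector: "app=nginx"     # third-party reverse proxy whose log format we don't control
    parser: nginx
    dataset: kubernetes-nginx
  - labelSelector: "app=checkout"  # our own app, emitting structured JSON logs
    parser: json
    dataset: kubernetes-checkout
```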