Honeycomb Blog

The Price is Right

Here at the hive, we’re working on something that isn’t code or new features (!), but is nonetheless a big part of our business: figuring out the best way to help people understand how we price Honeycomb and the built-in assumptions we make about how they use Honeycomb. There are some issues (pricing is hard, film…
Read More...

How Honeycomb Uses Honeycomb Part 8: A Bee’s Life

This post continues our dogfooding series from How Honeycomb Uses Honeycomb, Part 7: Measure twice, cut once: How we made our queries 50% faster…with data. To understand how Honeycomb uses Honeycomb at a high level, check out our dogfooding blog posts first — they do a better job of telling the story of problems we’ve…
Read More...

Metrics: not the observability droids you’re looking for

I went to Monitorama last year for the first time. It was great; I had a terrific time. But I couldn’t help but notice how speaker after speaker, in talk after talk, spent time either complaining about the limitations of their solutions or proudly/sadly showing off whatever terrible hacks they had done to get around…
Read More...

Reflections on Monitorama 2017: From the Metrics We Love to the Events We Need

There were a bunch of talks at Monitorama 2017 that could be summed up as “Let me show you how I built this behemoth of a metrics system, so I could safely handle billions of metrics.” I saw them, and they were impressive creations, but they still made me a little sad inside. The truth…
Read More...

Instrumenting High Volume Services: Part 2

This is the second of three posts focusing on sampling as a part of your toolbox for handling services that generate large amounts of instrumentation data. The first one was an introduction to sampling. Sampling is a simple concept for capturing useful information about a large quantity of data, but can manifest in many different…
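As a rough illustration of the simplest form, here is a hedged sketch of constant-rate sampling in Python: keep one event out of every N and record the sample rate on each kept event so counts can be multiplied back up at read time. SAMPLE_RATE and send_event are hypothetical stand-ins for your own configuration and transport, not part of any Honeycomb library.

```python
import random

# Hypothetical constant sample rate: keep roughly 1 out of every 20 events.
SAMPLE_RATE = 20

def send_event(event: dict) -> None:
    # Stand-in for whatever transport ships your instrumentation data.
    print(event)

def record(event: dict) -> None:
    # Keep 1-in-SAMPLE_RATE events, tagging each kept event with the rate
    # so that one stored event can stand in for SAMPLE_RATE real ones.
    if random.randint(1, SAMPLE_RATE) == 1:
        event["sample_rate"] = SAMPLE_RATE
        send_event(event)

# Only about 5% of these calls actually emit an event.
for _ in range(1000):
    record({"endpoint": "/home", "status": 200, "duration_ms": 12})
```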
Read More...

Instrumenting High Volume Services: Part 1

This is the first of three posts focusing on sampling as a part of your toolbox for handling services that generate large amounts of instrumentation data. Recording tons of data about every request coming in to your service is easy when you have very little traffic. As your service scales, the impact of measuring its…
Read More...

The Problem with Pre-aggregated Metrics: Part 3, the “metrics”

This is the third of three posts focusing on the limitations of pre-aggregated metrics. The first one explained how, by pre-aggregating, you’re tightly constrained when trying to explore data or debug problems; the second one discussed how implementation and storage constraints further limit what you can do with rollups and time series. Finally, we arrive…
Read More...

The Very Long And Exhaustive Guide To Getting Events Into Honeycomb No Matter How Big Or Small, In Any Language Or From Any Log File

How do you get events into Honeycomb? This gets confusing for lots of people, especially when you look at all the gobs of documentation and don’t know where to start. But all you need are these three easy steps: Form JSON blob. Go nuts! Smush as many keys and values as you want into…
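As a hedged illustration of the “form a JSON blob and send it” idea, here is a minimal Python sketch using the requests library against Honeycomb’s events endpoint; the write key and dataset name are placeholders you would substitute with your own.

```python
import requests

# Placeholders: substitute your own team write key and dataset name.
WRITE_KEY = "YOUR_WRITE_KEY"
DATASET = "my-dataset"

# Step 1: form a JSON blob -- smush in whatever keys and values you care about.
event = {
    "endpoint": "/api/widgets",
    "status": 200,
    "duration_ms": 37.2,
    "build_sha": "abc123",
}

# Step 2: send it to the events endpoint for your dataset.
resp = requests.post(
    f"https://api.honeycomb.io/1/events/{DATASET}",
    headers={"X-Honeycomb-Team": WRITE_KEY},
    json=event,
)
resp.raise_for_status()
```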
Read More...

The Problem with Pre-aggregated Metrics: Part 2, the “aggregated”

This is the second of three posts focusing on the limitations of pre-aggregated metrics. The first one explained how, by pre-aggregating, your flexibility is tightly constrained when trying to explore data or debug problems. The third can be found here. The nature of pre-aggregated time series is such that they all ultimately rely on the…
Read More...

The Problem with Pre-aggregated Metrics: Part 1, the “Pre”

This is the first of three posts focusing on the limitations of pre-aggregated metrics, each corresponding to one of the “pre”, “aggregated”, and “metrics” parts of the phrase. The second can be found here. Pre-aggregated, or write-time, metrics are efficient to store, fast to query, simple to understand… and almost always fall short of being…
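To make “pre-aggregated, or write-time” concrete, here is a hedged Python sketch of what write-time aggregation typically looks like (the names are illustrative, not any particular metrics library): each request is folded into a counter and a sum as it arrives, so only the rollup survives to query time.

```python
from collections import defaultdict

# Write-time (pre-aggregated) storage: only a rollup per (endpoint, minute) survives.
rollups = defaultdict(lambda: {"count": 0, "duration_sum": 0.0})

def record_request(endpoint: str, minute: int, duration_ms: float) -> None:
    # Fold each request into the aggregate at write time. The individual
    # request -- which user, which build, what its full latency was -- is
    # discarded; only the count and the sum remain queryable later.
    bucket = rollups[(endpoint, minute)]
    bucket["count"] += 1
    bucket["duration_sum"] += duration_ms

record_request("/home", minute=0, duration_ms=12.0)
record_request("/home", minute=0, duration_ms=950.0)

# At read time you can get an average, but you can no longer ask which
# request was slow or break down by a field you didn't aggregate on.
b = rollups[("/home", 0)]
print(b["duration_sum"] / b["count"])  # 481.0
```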
Read More...