Why Honeycomb

“We’re all distributed systems engineers now.”


Fast queries on raw, high-cardinality data

In a traditional monitoring system, engineers are frequently pinned between two undesirable alternatives: they can either pre-aggregate data and lose precious detail, or keep all of the raw data and suffer slow queries and high storage costs.

Honeycomb, by contrast, hits a sweet spot between these two: it is backed by a blazing-fast columnar store that can query many millions of rows in seconds. It encourages a fluid workflow and rapid iteration when answering questions, and still exposes the raw collected data for analysis in as much detail as desired. Because engineers get both high- and low-level information, they can solve problems more rapidly.

Having the raw data available is helpful here because you don't need to know ahead of time which IP addresses or user IDs to count – only that those fields might be of interest to you later.



Your website is suddenly receiving a lot of traffic and it’s not clear whether it’s “good” traffic (e.g., you’re going viral somewhere) or “bad” traffic (someone is attacking you by flooding the site with requests, or a bot has gone crazy). Using Honeycomb you can:

  • BREAK DOWN by the high-cardinality fields within the requests – user ID, IP address, and more – to quickly see whether the traffic originates from a few sources or from many.
  • If it's a bad actor, blacklist that user and protect the website.
  • If it's desirable traffic, spin up more servers to deal with it.
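The "few or many sources" question in the first step comes down to counting requests per origin and checking how concentrated they are. A minimal Python sketch of that logic (the `ip` field name and the 50% threshold are illustrative assumptions, not Honeycomb defaults):

```python
from collections import Counter

def classify_traffic(requests, concentration_threshold=0.5):
    """Guess whether a spike comes from few sources (likely abuse)
    or many sources (likely organic) by measuring concentration.

    `requests` is an iterable of dicts with an "ip" field; the
    threshold is an illustrative assumption.
    """
    by_ip = Counter(r["ip"] for r in requests)
    top_ip, top_count = by_ip.most_common(1)[0]
    share = top_count / sum(by_ip.values())
    if share >= concentration_threshold:
        return f"concentrated: {share:.0%} of requests from {top_ip}"
    return f"spread across {len(by_ip)} distinct IPs"

# A flood from one address versus a viral burst from many:
attack = [{"ip": "10.0.0.1"}] * 90 + [{"ip": "10.0.0.2"}] * 10
viral = [{"ip": f"203.0.113.{i}"} for i in range(100)]
```

With Honeycomb the equivalent check is a single BREAK DOWN plus COUNT query, with no need to write or deploy code at all.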

Proactive, not just reactive

Honeycomb can help you figure out which specific customers are affected by (or even causing) a particular issue in production. This allows you to not only detect when something is wrong, but to rapidly deduce why and take steps to proactively mitigate its impact on the business, or even spot potential issues before they happen.

Honeycomb is able to do this because it was specifically designed and architected to handle “high-cardinality” data. High-cardinality data has a lot of distinct values (such as a customer ID, of which there could be thousands or millions), and many existing monitoring systems do not handle it well because trying to do so can create explosive complexity.
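To see where the "explosive complexity" comes from: a metrics system that pre-aggregates must keep one time series per distinct combination of tag values, so the series count multiplies across fields. A back-of-the-envelope sketch (the field names and cardinalities are made-up assumptions for illustration):

```python
# Each pre-aggregated time series covers one combination of tag values,
# so the total series count is the product of per-field cardinalities.
fields = {
    "endpoint": 50,          # distinct URL routes (illustrative)
    "status_code": 10,       # (illustrative)
    "customer_id": 100_000,  # the high-cardinality field (illustrative)
}

series = 1
for cardinality in fields.values():
    series *= cardinality
# 50 * 10 * 100_000 = 50 million series just to keep customer_id as a tag
```

A columnar event store sidesteps this: it stores each raw event once and computes any grouping at query time, so adding a high-cardinality field does not multiply storage.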



A high-priority customer writes in to let you know that they are very unhappy: some pages are loading so slowly that they cannot use them.

  • BREAK DOWN by customer ID and URL with a latency calculation (such as P95 of request duration) to quickly spot when, and on which page(s), the latency affected that particular customer.
  • Fix the issue using contextual information from the raw data in this query.
  • See if other users were affected by the issue, so you can proactively reach out to customers who were affected and might be unhappy but silent.
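The first step above maps onto a Honeycomb query with two breakdowns and a percentile calculation. A sketch of that query expressed as a Honeycomb-style JSON query specification (the column names `customer_id`, `url`, and `duration_ms` are assumptions about your event schema, and the exact spec shape should be checked against the Query API documentation):

```python
import json

# Sketch of a query spec: break down by customer and URL, compute P95
# latency over the last two hours, worst pages first. Column names are
# assumed schema, not fixed Honeycomb names.
query = {
    "time_range": 7200,  # seconds of history to query
    "breakdowns": ["customer_id", "url"],
    "calculations": [{"op": "P95", "column": "duration_ms"}],
    "orders": [{"op": "P95", "column": "duration_ms", "order": "descending"}],
    "limit": 100,
}

print(json.dumps(query, indent=2))
```

Dropping the customer ID filter from an otherwise identical query is how you then check whether other customers were affected.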

Democratized debugging

Large technology companies like Facebook and Google use systems like Scuba to debug code and understand how it runs in production—but to date those tools have not been available to engineers outside of those companies. Honeycomb changes all of that, empowering individual engineers, teams, and organizations to explore and understand their systems, in production, at scale, and in real-time, finding the “needles in a haystack of needles” that yesterday’s toolsets routinely miss.