There are no tools quite like Honeycomb on the market. Here's how our approach, feature set, and value compare to four traditional categories.
Monitoring and Metrics
Examples: Datadog, SignalFx, Graphite, InfluxDB, Kibana, statsd, Prometheus, Ganglia
These tools are typically consumed by operations engineers.
A metric is essentially a “dot” of data, e.g. `statsd.increment("api.requests")` is the statsd command to increase the “api.requests” metric by one. Newer time series data stores try to approximate the context of an event with tags or dimensions. You are typically allowed a limited number of tags (because of the write-amplification factor), and you can slice and dice your metrics by those tags.
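To make the distinction concrete, here is an illustrative sketch (in plain Python, not any real SDK) of the same request recorded both ways. Every field name below is a hypothetical example, not a required schema.

```python
# As a metric, all context is collapsed into a single counter name:
#   statsd.increment("api.requests")
# One "dot" of data; the who/what/where of the request is gone.

# As a structured event, the full context travels with the data point.
# These keys are invented for illustration -- an event can be arbitrarily wide.
event = {
    "endpoint": "/api/v1/write",
    "method": "POST",
    "status": 200,
    "duration_ms": 23.4,
    "user_id": 48213,                  # high-cardinality fields are fine here
    "availability_zone": "us-east-1b",
    "instance_family": "r3",
}

# A metrics store would need a pre-declared tag for each of these dimensions;
# an event store accepts arbitrarily wide records with no schema.
```

A tag-limited metrics backend would force you to choose in advance which of these dimensions to keep; the event keeps them all.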
The primary method of interacting with metrics is by constructing dashboards. A dashboard is a view of one or several metrics displayed over time, and it may be generated manually or programmatically.
Honeycomb is different because it is event-driven and interactive. We accept arbitrarily wide events with no schema, so you may have hundreds or more keys in a dataset, and you may ask questions that look more like business-intelligence queries. Example:
“Some users are reporting elevated latency. Latency does appear to be elevated ... but only for write endpoints, and only for requests hitting replica sets with a primary in AWS availability zone 'us-east-1b' on the r3 instance family, and only for nodes using PIOPS. There seems to be network saturation between storage and instances for those nodes.”
You can't get that out of a dashboard unless it was handcrafted in advance for that specific, EXACT question. You could try to pre-generate dashboards for every possible combination of factors, but you shouldn't; asking questions is a much better model, one that helps you perform real data-driven debugging instead of passive eyeball-scanning.
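The debugging story above is just a chain of breakdowns and filters. Here is a toy, in-memory version of that kind of ad-hoc query; the events and field names are invented for illustration, and a real dataset would have millions of rows.

```python
# Hypothetical request events -- each dict is one "wide event".
events = [
    {"endpoint_type": "write", "az": "us-east-1b", "instance_family": "r3",
     "piops": True, "latency_ms": 840},
    {"endpoint_type": "write", "az": "us-east-1a", "instance_family": "r3",
     "piops": False, "latency_ms": 35},
    {"endpoint_type": "read", "az": "us-east-1b", "instance_family": "r3",
     "piops": True, "latency_ms": 28},
]

# Each predicate below is one breakdown you would add interactively --
# the question is composed at read time, not baked into a dashboard.
suspect = [
    e for e in events
    if e["endpoint_type"] == "write"
    and e["az"] == "us-east-1b"
    and e["instance_family"] == "r3"
    and e["piops"]
]

avg_latency = sum(e["latency_ms"] for e in suspect) / len(suspect)
```

No dashboard had to exist ahead of time for this combination of factors; the next question just swaps a predicate.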
Honeycomb provides collaborative “Boards”, where you can bookmark and save any interesting entry points to your data. Teams can share Boards with each other to propagate useful information such as quick links for folks on call, examples for onboarding new teammates, and easy visibility into newly-released features or recent deployments.
Honeycomb has no limits on the number of attributes (tags) you can have (hundreds of millions or more are fine) or combine in a query.
Log Aggregation
Examples: Splunk, Sumo Logic, ELK, Papertrail, Loggly, Graylog, etc.
Logs are closer on the evolutionary tree to us than metrics, because logs are proto-events. However, logs are strings, and Honeycomb accepts only structured data. This means far less costly storage and processing on the server side.
Log aggregation tools typically rely on regular expressions, which are slow; transport layers like rsyslog or logstash; and some sort of schema or indexes you have to predict and choose. Indexes are expensive to maintain, and write perf degrades across the board if you write too many of them (not to mention the physical cost of storage).
Honeycomb accepts JSON objects. You can turn your strings into JSON however you wish. We have lots of helpers to get you started, e.g. honeytail (which understands most common log formats and can run either from cron or as a lightweight agent), SDKs for most major programming languages, and even helpers for databases that sniff high-throughput traffic over the wire and reconstitute your transactions.
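"Turn your strings into JSON however you wish" can be as simple as a few lines of parsing. This sketch converts an nginx-style access-log line into a structured event; the regex and field names are illustrative, not honeytail's actual parser.

```python
import json
import re

line = '127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /api/users HTTP/1.1" 200 612'

# Named groups become the event's field names.
pattern = re.compile(
    r'(?P<remote_addr>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d+) (?P<bytes>\d+)'
)

m = pattern.match(line)
event = m.groupdict()
event["status"] = int(event["status"])  # keep numeric fields numeric
event["bytes"] = int(event["bytes"])

payload = json.dumps(event)  # a JSON object, ready to send as an event
```

Once the line is structured, there is nothing left for the server to regex over; every field is directly queryable.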
Honeycomb is backed by a homegrown column store and has no schema or indexes. We aggregate at read time and can shard horizontally indefinitely, letting us achieve lightning-fast interactive performance at Web Scale (sic).
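The "column store with read-time aggregation" idea can be shown in miniature. This is only a toy illustration of the shape of the approach; a real column store adds compression, sharding, and vectorized scans.

```python
# Column-oriented storage: one list per field, rows aligned by position.
# All values here are invented sample data.
columns = {
    "status":      [200, 200, 500, 200],
    "duration_ms": [12.0, 15.0, 950.0, 11.0],
    "endpoint":    ["/a", "/b", "/a", "/c"],
}

def avg_where(where_col, where_val, agg_col):
    """Filter on one column, aggregate another -- no index, no pre-declared schema."""
    rows = [i for i, v in enumerate(columns[where_col]) if v == where_val]
    vals = [columns[agg_col][i] for i in rows]
    return sum(vals) / len(vals)

# Computed at read time, over only the two columns the query touches.
avg_ok = avg_where("status", 200, "duration_ms")
```

Because aggregation happens at read time over raw columns, adding a new field never requires rebuilding an index, and scans touch only the columns a query references.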
That said, many log aggregation tools are very mature and rich in helpers and features and drop-in connectors to every type of software under the sun. Honeycomb shines at unpredictable workloads and helping you find unknown-unknowns. If you mostly have known-unknowns, logs will probably work fine for you.
Application Performance Monitoring
Examples: New Relic, Dynatrace, AppDynamics
APM tools are typically backed by one of the other two storage backends (metrics or log aggregators) to collect and present data from the perspective of the application itself, as well as surfacing language internals. They often do clever things to sift out the most important data automatically and present it with very little work.
This is terrific! It's a great shortcut for getting started. Some of them also let you define custom triggers or questions at the application level. However, at the presentation layer they have the same shortcoming: you can't ask a new question, and you can't break it down by *just one user* (out of tens of millions of users) or *just one app* and then ask all the same questions as before.
Honeycomb handles these high-cardinality and high-dimensionality cases flawlessly.
APM tools often make it easy to find the “top 10” of something; Honeycomb makes it as trivial to find #100,001 as #10.
With Honeycomb, you have native SDKs: instrumenting your code takes approximately the same effort as adding a comment. You don't get as much pre-baked stuff done for you, but you can insert any data you want in the form of k/v pairs and query on it later.
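To show how light that instrumentation is, here is a sketch of the add-fields-then-send shape. This `Event` class is a stand-in written for this example, not Honeycomb's actual SDK, though real SDKs expose a similar interface.

```python
class Event:
    """A minimal stand-in for an instrumentation event (not a real SDK)."""

    def __init__(self):
        self.fields = {}

    def add_field(self, key, value):
        self.fields[key] = value

    def send(self):
        # A real SDK would serialize the fields and ship them to the API;
        # here we just return them so the example is self-contained.
        return dict(self.fields)

def handle_checkout(user_id, cart_total):
    ev = Event()
    ev.add_field("user_id", user_id)        # any k/v pair you care about
    ev.add_field("cart_total", cart_total)  # queryable later, no schema change
    # ... application logic would go here ...
    ev.add_field("handler", "checkout")
    return ev.send()

payload = handle_checkout(42, 99.95)
```

Each `add_field` call really is about one comment's worth of code, and every field added this way becomes a new dimension you can break down by later.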
Exception Trackers
Examples: Sentry, Airbrake, Rollbar
Exception trackers have some overlap with APM tools but cover an even more specific use case: your application hit an unexpected situation, and you'd like to be notified ASAP—at least, the first time it happens. The next ten, hundred, or thousand times? Maybe not so much.
These tools are essential to any developer workflow. Exception trackers have a lot of magic built into deduplicating stack traces, and often have some fantastic product thinking around an issue-resolution workflow—but can fail to surface subtler problems in your system's health.
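The core of that deduplication magic is fingerprinting: hashing a normalized stack trace so repeat occurrences of the same exception collapse into one issue. This is a hedged sketch of the idea; real trackers use far more sophisticated normalization than this.

```python
import hashlib

def fingerprint(frames):
    """Hash a stack trace as (module, function, line) tuples into an issue ID.

    Line numbers are dropped during normalization so that small, unrelated
    code edits don't spawn "new" issues for the same underlying exception.
    """
    normalized = "|".join(f"{module}:{func}" for module, func, _line in frames)
    return hashlib.sha256(normalized.encode()).hexdigest()[:12]

# The same exception, reported before and after an unrelated edit shifted
# a line number (all frame data here is invented for illustration):
first = fingerprint([("app.views", "checkout", 88), ("app.db", "commit", 120)])
again = fingerprint([("app.views", "checkout", 91), ("app.db", "commit", 120)])

same_issue = first == again  # both occurrences collapse into one issue
```

What to keep and what to drop during normalization is exactly the hard product problem these tools have invested in.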
If exceptions are only thrown when an error is hit, you lose the ability to understand when things get worse-but-not-broken. Increased latencies or poorly-balanced loads only surface in an exception tracker once they cross some threshold, yet catching those signals early is just as important to ensuring a robust system.