“Observability” is a term that comes from control theory
In control theory, observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs. The observability and controllability of a system are mathematical duals. (Wikipedia).
Observability is achieved when a system is understandable—which is difficult in today’s world of increasing software complexity, where most problems are the convergence of many different things failing at once.
One early definition of software observability focused on the three so-called "pillars" of logs, metrics, and traces. That was a good first effort at shifting the priorities of the industry, but the definition is flawed. Those three elements don't guarantee that your systems are understandable, or that you have full visibility into all the blind spots, messy intersections, and long-tail gotchas that logs, metrics, and traces may or may not cover.
A better and more contemporary litmus test for observability is: can you ask the right questions? And can you do so in a way that's predictable, fast, and scalable over time, i.e., without having to re-instrument or launch new code?
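As a sketch of what that litmus test looks like in practice (the events and field names here are hypothetical, not any particular product's schema): if each unit of work was captured as a wide, structured event, answering a brand-new question is a query over data you already have rather than a code change.

```python
from collections import defaultdict

# Hypothetical captured events: one wide, structured record per request.
events = [
    {"endpoint": "/checkout", "build_id": "a1", "duration_ms": 120},
    {"endpoint": "/checkout", "build_id": "a2", "duration_ms": 480},
    {"endpoint": "/home", "build_id": "a1", "duration_ms": 35},
]

def avg_duration_by(events, field):
    """Answer a new question by grouping existing events on any field,
    with no re-instrumentation and no new deploy."""
    groups = defaultdict(list)
    for e in events:
        groups[e.get(field, "unknown")].append(e["duration_ms"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

# "Is the new build slower?" was never planned for at instrumentation time:
avg_duration_by(events, "build_id")  # {'a1': 77.5, 'a2': 480.0}
```

The design point is that the question ("break latency down by build") arrives after the data was collected; the rich per-event context is what makes it answerable.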
Full observability = full control
We built Honeycomb to help you understand how your systems and software actually work, taking the realities of microservices, serverless, distributed systems, polyglot persistence, containers, and CI/CD all as givens. The goal is to shift attention from the health of the system (which is meaningless) to the health of the event. Users don't care whether the system is "up" in general... they care about whether it's working for them.
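A minimal sketch of what "health of the event" means at instrumentation time (every name here is illustrative, not any vendor's API): each unit of work emits one wide event carrying enough context, who, what, and which build, to judge whether the system worked for that specific user, rather than reporting only an aggregate "up/down" signal.

```python
import json
import time

def handle_request(request, user):
    """Serve one request and emit a single wide event describing it.

    The fields below are an assumed, illustrative schema: the point is
    that per-event context (user, endpoint, build, duration, outcome)
    lets you ask later whether things worked *for this user*.
    """
    event = {
        "timestamp": time.time(),
        "endpoint": request["endpoint"],
        "user_id": user["id"],
        "build_id": "2024.06.01-3",  # hypothetical deploy identifier
        "status": None,
    }
    start = time.monotonic()
    try:
        # ... real handler work would go here ...
        event["status"] = 200
    finally:
        event["duration_ms"] = round((time.monotonic() - start) * 1000, 3)
        print(json.dumps(event))  # in practice, ship to your event store
    return event
```

With events shaped like this, "is it working for user u42?" becomes a filter on `user_id`, something a system-wide health metric can never answer.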