
The Next Era of Observability: Founders’ Reflections - Additional Q&A


February 19, 2026

What happens when the people who helped define observability take a hard look at AI?

That’s what Honeycomb co-founders Christine Yen (CEO) and Charity Majors (CTO) dug into during this webinar, starting with the early days of observability (back when it wasn’t even a category yet).

Christine shared an early story from their pre-Honeycomb days that helped shape that philosophy: a moment that exposed the gap between dev and ops and made it painfully clear that throwing metrics over the wall wasn’t going to cut it. If you want reliable software, developers and operators have to share context, and that means observability has to live closer to the code.

They also talked about where the “three pillars” came from and why organizing telemetry isn’t the same thing as understanding your systems. Observability, from the beginning, wasn’t about dashboards. It was about empowering engineers to ask new questions in production and get answers to questions they didn’t know they’d need to ask.

Then the conversation turned to AI—specifically, what new challenges it has brought to the surface: AI breaks the illusion that software is fully deterministic; it makes our mental models shakier; and the hardest problems are deploying, operating, and understanding behavior when it inevitably drifts. However, AI might also be the catalyst that helps us finally bridge the gap between existing ops tooling and the feedback loops developers need.

If you missed it live, you can watch the full recording here:

We also ran out of time before we ran out of questions (always a good sign). Charity took the time to answer a few additional audience questions we didn’t get to. Dive in below!

Additional Q&A

On observability portability and vendor lock-in

Patrick Mclain, AB Inbev: Beyond the foundational step of adopting OpenTelemetry for instrumentation, what strategic approaches have you seen work well for organizations trying to achieve true observability portability across vendors? I'm particularly interested in areas like query language standardization, dashboards-as-code practices, and alert definition portability: the operational layer that often creates the deepest lock-in even after telemetry collection is standardized.

Charity: Yes, you are right. It is the operational layer that creates the deepest lock-in after the OpenTelemetry part is standardized… and arguably even before that.

As for what strategic approaches I’ve seen work to achieve true observability portability across vendors… my answer, unfortunately, is none.

True portability across observability vendors is not going to happen. Sorry, but I’d bet my house on it. It’s hard enough trying to get vendors to stop working to sabotage OpenTelemetry in the sales process. The big vendors have no appetite for anything that would make it easier for customers to move off their platform.

But those specific things you mentioned (dashboards, query languages, alert definitions) are not exactly “growth areas.” Static dashboards are a relic and dashboard proliferation is a curse; query languages are a crutch for power users; the number of alerts should be radically decreasing, replaced by SLOs. And ALL of these are increasingly trivial to migrate using things like Claude Skills to encode local knowledge and translate from one platform to another.

Which is why I think a more promising line of inquiry is around derisking migrations themselves, and building not only tech stacks resilient to change, but also people and teams resilient to change. Will Larson once said that “migrations are the sole scalable fix for technical debt,” and I agree. Build migratable systems.

If you’re migrating away from a tool you’ve been using for 10-15 years… phew, it’s gonna be rough. But it can and does get easier. I’ve also seen evidence that rapid vendor migrations are becoming a reality at some consultancies…think three to five days for an enterprise to migrate from Datadog to Dynatrace or vice versa.

Some of the biggest blockers to migrations are not technical; they’re people problems. If you have a team of SREs who have become experts in one tool, and that tool has become part of their identity, there is no technical solution to that problem.

The last thing I’ll say is this: there are good kinds of vendor lock-in, and bad kinds of vendor lock-in.

Good vendor lock-in:

  • This vendor has something that is uniquely valuable to us, and we’re here because we want to be here.

Bad vendor lock-in:

  • Literally everything else you just described.

What’s the difference? If you are locked in for good reasons, you don’t fucking care. You are HAPPY to be locked in. You’re both cheering: “We’re locked in! We’re together on this mission for at least the next few years!” High fives all around.

The same goes for your bill. You should feel like, “I am stoked to pay this observability bill because I understand how much value I am getting from it. There is no other tool that brings me this much value. I know that every time I spend $1 on observability, I make $5 from my product.” That is a solid investment.

If you don’t feel that way, maybe you need a different tool.

The good kind of vendor lock-in isn’t going to be affected by AI. But the bad kind of vendor lock-in absolutely is.

On where managers fit into the world of AI engineering

Will Bollock, Akamai Technologies, Inc: In the world of human plus AI engineering, where do managers fit in? Are they even more susceptible to layoffs than they were pre-AI? Is engineering management still a good career path for early career folks?

Charity: I would say that engineering management has never been a good career path for early career folks. I’m of the opinion that you should be solidly a senior software engineer before you try your hand at management.

This is not because younger people can’t be good managers. It’s because the less experienced you are, the faster your technical skills will decay. It’s hard to be an effective manager for engineers who are significantly more experienced than you are. If a manager isn’t experienced enough to call an engineer out on their bullshit, that’s a problem.

Yes, I do think managers are likely to be more susceptible to layoffs than engineers are. Managers are overhead, which makes them more vulnerable.

I should wrap all of what I just said with a disclaimer: all of this has historically been true. But many of these things are in the process of changing.

In general, skill sets are merging. Eng + product, eng + manager, eng + design, eng + almost everything. Understanding how to talk about your work in the language of the business, instead of just the language of technology, is becoming critical for every role.

Teams are trending smaller. Five years ago, the ideal was a high ratio of engineers to managers, maybe one manager per 7-10 engineers, and managers did not write code. Now that AI is enabling each individual to own more surface area, the new ideal might be something more like teams of three to four engineers where one of them performs the management duties, but also contributes to building the product.

Formal management powers are in some ways the most fragile of powers. But being a manager can teach you a lot of valuable lessons about how businesses operate, how decisions get made, and how to navigate complex organizations… if you work at a place that values and teaches good management.

But management is not for everyone. I would approach this question through the lens of: what moves you? What kind of technologist do you want to be?

On data lakes

Dale Frohman, Disney: Are data lakes the future of observability, or are we just finally admitting it’s all data anyway and the real magic is connecting the dots to squeeze out the value?

Charity: YES! Or as my friend Hazel Weakly likes to remind me, the future of observability is “data lake houses,” which combine the flexibility and scalability of data lakes with the more advanced querying and schema tools of a data warehouse.

Here’s how I think about it: data is made powerful by context. The more context you have, and the more you can preserve of the relationships that do in fact exist in real life, the more powerful that data becomes.

Your data doesn’t become linearly more powerful with context, it gets exponentially more powerful.
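
One back-of-the-envelope way to see why: every attribute you attach to an event multiplies the questions you can ask of it later, because any subset of attributes is a potential filter or group-by. A tiny sketch (the attribute names are illustrative, not a real schema):

```python
# Back-of-the-envelope: each attribute you attach to an event roughly doubles
# the number of attribute subsets you can later filter or group by.
from itertools import combinations

attributes = ["user_id", "region", "build_id", "feature_flag", "endpoint"]

# Every non-empty subset of attributes is a question you can ask later:
# "latency by region", "errors by build_id AND feature_flag", and so on.
questions = [
    subset
    for r in range(1, len(attributes) + 1)
    for subset in combinations(attributes, r)
]

print(len(questions))  # 2**5 - 1 = 31 askable combinations from 5 attributes
```

Five attributes give 31 combinations; ten give 1,023. That is the exponential part.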

In the past, every team captured telemetry in its own special format and saved it in its own special tool. The cost is astronomical, but worse than that: tools create silos. Teams that should be collaborating on a solution end up arguing over the very nature of reality as reflected in their respective tools.

I think—and Gartner thinks!—the trajectory we are on is unsustainable and must end. Instead of capturing in a different format for every team, capture once. Context preserves optionality, so you can decide how the data will be used at query time, not at ingest time. You don’t know what you don’t know. Give your future self that gift.
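
A minimal sketch of “capture once, decide at query time” (the event fields here are hypothetical, not a real schema): one wide event per unit of work carries all the context, and both a metrics-style aggregate and a logs-style search are derived from the same data on demand, rather than being split apart at ingest:

```python
# Capture once: one wide, context-rich event per unit of work.
wide_events = [
    {"service": "checkout", "endpoint": "/cart", "status": 200, "duration_ms": 41,  "region": "eu"},
    {"service": "checkout", "endpoint": "/cart", "status": 500, "duration_ms": 910, "region": "us"},
    {"service": "checkout", "endpoint": "/pay",  "status": 200, "duration_ms": 230, "region": "us"},
]

# Decide at query time: a metrics-style aggregate, derived on demand...
error_rate = sum(e["status"] >= 500 for e in wide_events) / len(wide_events)

# ...and a logs-style search over the same events, with all context intact.
slow_us_requests = [
    e for e in wide_events
    if e["region"] == "us" and e["duration_ms"] > 100
]

print(round(error_rate, 2), len(slow_us_requests))
```

Because nothing was pre-aggregated or stripped at ingest, a question nobody anticipated ("is the error rate worse on a particular endpoint in one region?") is still answerable later.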

On AI in five years

Carole Pinto: How do you see the future in 5 years as far as the use of AI is concerned?

Charity: Ah, friend, I can’t see that far out. Sorry. <3

On observability 2.0 and eliminating logs platforms

Olga Mirensky: Observability 2.0 advocates against the three pillars; however, how can you completely eliminate the need for a dedicated logs platform? Possibly I am misunderstanding the 2.0 philosophy.

Charity: Thank you, this is a GREAT question! I’ve heard it more than once, so I’m very glad for the chance to answer it in public.

The short answer is: you’re right. There’s really no plausible path to getting rid of dedicated logs platforms, as long as you’re running lots of third-party code you can’t control.

There are two kinds of telemetry: the kind you can control, and the kind you can’t. The kind you can’t tends to be spammy, noisy, and low value, and you should store it as cheaply as possible. You probably have to keep it anyway, for security and compliance reasons. So yeah. Logs are not going away.

The three pillars make sense in a world where you don’t write the code and you don’t control the telemetry, you just have to take what comes out. They make no sense whatsoever in a world where you’re developing software and need to understand the quality of your product and how users are experiencing it.

Both models have their place: the three pillars for infrastructure and ops feedback loops, o11y 2.0 for developers and dev feedback loops.

But for most companies, infrastructure is a cost to be minimized. What they care about most is their own code, their crown jewels: the code that makes them money and defines who they are as a business. That code should follow a different model, one that preserves context and keeps attributes connected instead of breaking them up into silos by signal type. Because you can’t put Humpty back together again.

On justifying the cost vs. business impact of observability

Ben Schutz: What is generally the action with best impact-to-cost ratio for convincing a business that real, conscious investment in observability capability is worth spending time, money and opportunity cost? (Be it an ROI analysis, POC, brainwashing, or something else.)

Charity: You probably want Chapter 24 of the upcoming second edition of Observability Engineering, called “The Business Case for Observability.” It’s in tech review now, should be on O’Reilly’s pre-release platform soon, and will be in print this coming June. Sign up for updates.

In brief: observability for revenue-generating activities should be treated like an investment, observability for cost-center activities should be managed like a cost center.


On dashboards

Nicholas Herring, CCP Games: Are dashboards still a lie?

Charity: Was this ever in doubt? 😉 Static dashboards are indeed still a lie. Still useless. Becoming ever uselesser.