
5 Best Practices for Incorporating AI Into Your Team

Honeycomb’s Jessica Kerr and Fred Hebert recently hosted a webinar with Courtney Nash of The VOID where they dug into one of the biggest questions in tech right now: How do we build systems (and teams) that actually learn with AI, not just use it?

The conversation was surprisingly optimistic about what happens when we stop treating AI as a productivity tool and start seeing it as a teammate. You can watch the full webinar here, or read on below for a quick recap.

1: AI isn’t just a tool. It’s a teammate… for the most part.

As Courtney put it, “AI has been developed and productized as an individual productivity tool—but software is a team sport.”

If you’ve ever worked on a distributed system (or in a distributed team), you know that learning doesn’t happen in isolation. Code evolves as humans observe, adapt, and experiment. Teams and systems grow together. Software teams form a learning system. The code actually participates in that learning: it learns from us because we change it, and we learn from it through observability.

AI, in that sense, shouldn’t just make individuals faster. It should become a participant in the collective feedback loop: part of how we understand our systems, not just how we automate them. Fred cautions, however, that if AI can’t live up to the expectations of a teammate, it must remain a tool and be framed and designed to work as one. He stressed this in a recent blog post: “This is important because if you have a tool that is built to be operated like an autonomous agent, you can get weird results in your integration. You’re essentially building an interface for the wrong kind of component—like using a joystick to ride a bicycle.”

2: Don’t depend too much on automation

Fred brought in a classic piece of research from 1983: Lisanne Bainbridge’s “Ironies of Automation.” As it turns out, the paradox she described then still applies today.

As Fred explained, “The more often the machine is able to handle it, the bigger an emergency it is when it’s not. When the system fails, the people left to respond haven’t been practicing those skills.” Automation removes the toil: the easy, repetitive work. That sounds great in theory, but in practice it means humans are left to handle only the stressful, high-stakes situations. And when those moments come, people are less prepared for them than ever.

Fred’s point was simple but devastating: as automation grows, so does the risk of de-skilling. And when failure happens, the humans who were “in the loop” get blamed for outcomes they no longer have the context or control to influence. That’s what he calls the “accountability scapegoat” effect.

3: Leverage observability as a learning practice

Jessitron described observability as a way for teams to stay in conversation with their systems—a feedback loop that keeps learning alive even as automation increases.

“What I love about observability,” she said, “is that it’s how software learns from us, and how we learn from it. It’s not just a box to check. It’s what makes the whole process easier, safer, and more satisfying.”

That framing shifts observability from an operational necessity to a creative practice. Incidents stop being failures to fix and become opportunities to learn.
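
In practice, that feedback loop usually starts with instrumentation: the code describes what it is doing in its own terms so humans can ask questions of it later. As a rough illustration (not something shown in the webinar), here is a minimal sketch using the OpenTelemetry Python SDK; the service name, `process_order` function, and attribute names are all hypothetical, and a real setup would export to an observability backend such as Honeycomb rather than the console.

```python
# Minimal OpenTelemetry sketch: the service narrates its own behavior
# so that humans can later ask questions about it. All names below
# (service, span, attributes) are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Export spans to the console for this sketch; in production you would
# configure an OTLP exporter pointed at your observability backend.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

def process_order(order: dict) -> None:
    # Each span records the context a human will want during an incident:
    # not just that the code ran, but what it saw while running.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.item_count", len(order["items"]))
        span.set_attribute("order.payment_method", order["payment_method"])
        # ... business logic would go here ...

if __name__ == "__main__":
    process_order({"items": ["coffee", "mug"], "payment_method": "card"})
```

The point isn’t the specific attributes; it’s that the richer the questions the telemetry can answer, the more the system can teach the team when something surprising happens.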

4: Where the human is in the loop matters

Courtney brought in the concept of joint cognitive systems: research that originated in aviation and medicine but is increasingly relevant to software engineering. In those fields, humans and machines are treated as a single, collaborative cognitive system. Each has strengths and limitations, and their success depends on how well their roles are aligned.

She explained it this way: “If you can’t say where the human is in the loop, you probably haven’t really designed for them. You’ve just left them to pick up the pieces when it breaks.”

That line could be the thesis of the entire conversation. We don’t get better systems by removing humans from them. We get better systems by being intentional about where and how humans participate.

5: Avoid common AI traps

Across the discussion, the group pointed out some familiar anti-patterns:

  • Rolling out AI without considering how it reshapes the team’s work.
  • Framing AI as a “helper” for individual productivity instead of as a partner in shared learning.
  • Assuming AI’s confidence equals competence.

The result? Humans become passive monitors of systems they can no longer influence, and AI starts to narrow how we think and where we look. As Fred warned, “It’s easier to search under the streetlight, but if that’s where AI points you, you’ll miss what’s really happening in the dark.”

Bonus: Take advantage of Honeycomb’s human-centered AI

Jessitron and Fred shared how these ideas are shaping Honeycomb’s own approach to AI. We recently released Honeycomb Intelligence, which brings together Model Context Protocol (MCP), Anomaly Detection, and Canvas: AI-native features built to keep humans in the loop, not as bystanders but as partners.

Instead of chasing speed, Honeycomb focuses on time to understand: how long it takes for a human to grasp what’s actually going on in their system. AI helps generate context, surface clues, and suggest lines of inquiry. But every output is transparent and inspectable. Humans stay at the center of interpretation and decision-making.

Moving from blame to learning

In the end, the group came back to culture. Systems fail. Automation fails. Humans fail. What matters is how we learn from it all. Courtney summarized it best: “Incidents should be viewed as opportunities for learning, not as failures of individuals.” That means shifting away from shallow metrics like MTTR and toward measures that reflect understanding and growth. Time to understand captures something deeper: how quickly a team can make sense of an unexpected situation and turn it into knowledge.

The takeaway from the webinar is clear: the future of software is a partnership between humans and machines that observe, learn, and adapt together. And that partnership starts with a mindset shift from merely using AI to learning with it.