
How AI Agents Use Production Feedback to Improve Code

Wednesday, April 1 | 10 a.m. PT / 1 p.m. ET / 6 p.m. GMT

Austin Parker, Director of AI Strategy at Honeycomb
Akshay Utture, AI Engineer at Augment Code

AI coding tools generate code faster than teams can verify it works. The latest DORA (DevOps Research and Assessment) report confirms what many of us already feel: AI drove a net increase in both development throughput and software instability across the industry. More code, more changes, more things breaking in ways nobody anticipated. The missing piece isn't more testing or better code review. It's closing the gap between "code that was written" and "code that actually works in production."

One emerging approach is to shorten the feedback loop between development and production. Instead of treating observability as something engineers reach for only after problems occur, teams can make production telemetry part of the development process itself.
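As a rough illustration (not material from the webinar), here is a minimal OpenTelemetry instrumentation sketch in Python. The service name, function, and attributes are hypothetical, and in practice an OTLP exporter pointed at a backend such as Honeycomb would replace the console exporter. The point is that spans like these are the production signal an AI agent can later query while working on the same code.

```python
# Minimal sketch, assuming the opentelemetry-sdk package is installed.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Configure a tracer provider; the console exporter stands in for an OTLP
# exporter that would ship spans to an observability backend.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def process_order(order_id: str, items: list[dict]) -> None:
    # Each call produces a span; these attributes are what an agent (or a
    # human) can query to see how the code actually behaves in production.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        span.set_attribute("order.item_count", len(items))
        # ... business logic ...
```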

No hand-waving, no slides-only promises. Join Austin Parker (Director of AI Strategy at Honeycomb) and Akshay Utture (AI Engineer at Augment Code) to get insights into the tools, the terminal, and the real-time feedback loops shaping modern development.

What you’ll learn
  • How AI agents use production telemetry to guide debugging
  • How OpenTelemetry instrumentation evolves from real system feedback
  • How to build a feedback loop between AI development and production observability
  • What changes when production context becomes part of the coding workflow

🔥 AMA: We’ll wrap with an open AMA, so bring your toughest questions about AI, MCP (Model Context Protocol), and observability!