
Category: LLMs

LLMs, Conferences & Meetups

The Hater’s Guide to Dealing with Generative AI

Generative AI is having a bit of a moment—well, maybe more than just a bit. It’s an exciting time to be alive for a lot...

Observability, LLMs

Honeycomb + Google Gemini

Today at Google Next, Charity Majors demonstrated how to use Honeycomb to find unexpected problems in our generative AI integration. Software components that integrate with...

LLMs  

Three Properties of Data to Make LLMs Awesome

Back in May 2023, I helped launch my first bona fide feature that uses LLMs in production. It was difficult in lots of different ways,...

LLMs  

Using Honeycomb for LLM Application Development

Ever since we launched Query Assistant last June, we’ve learned a lot about working with—and improving—Large Language Models (LLMs) in production with Honeycomb. Today, we’re...

LLMs  

Effortless Engineering: Quick Tips for Crafting Prompts

Large Language Models (LLMs) are all the rage in software development, and for good reason: they provide crucial opportunities to positively enhance our software. At...

Observability, LLMs

So We Shipped an AI Product. Did it Work?

Like many companies, earlier this year we saw an opportunity with LLMs and quickly (but thoughtfully) started building a capability. About a month later, we...

Observability, LLMs

LLMs Demand Observability-Driven Development

Many software engineers are encountering LLMs for the very first time, while many ML engineers are being exposed directly to production systems for the very...

Software Engineering, LLMs

Improving LLMs in Production With Observability

In early May, we released the first version of our new natural language querying interface, Query Assistant. We also talked a lot about the hard...

Software Engineering, LLMs

All the Hard Stuff Nobody Talks About when Building Products with LLMs

There’s a lot of hype around AI, and in particular, Large Language Models (LLMs). To be blunt, a lot of that hype is just some...