Raw & Real Ep 7
The Tracing You Deserve
So You Can Observe

 

+ Transcript:

Kelly Gallamore [Manager, Demand Gen|Honeycomb]: 

Hello, everybody. Welcome to Raw & Real. Glad to have you here. I know everybody is getting their audio hooked up and signed into Zoom. If you’re here already and want to get a cup of coffee or tea, we’re going to start promptly in two minutes. We’ll start at 10:02. 

You’re at Raw & Real. I will share slides so you know where you are. Maybe it’s more so I know where I am. You’re at Raw & Real, our short, sweet product demo: how Honeycomb uses Honeycomb. Today: how we instrument tracing and how we help our users instrument tracing. We’re glad to have you here today. We start at 10:02. If you need live captions, you can open them at the bottom of your screen in Zoom. If you want to follow along, you can follow this link. Let me paste it in the chat so it’s a little bit easier. Thank you, Kimberly, for joining us today to do captions. Again, you’re at Raw & Real. This is Episode 7, The Tracing You Deserve So You Can Observe. We’ll get started officially in a couple of minutes. 

Again, welcome, everybody, to Raw & Real. We’re glad to have you here today. I’m triple checking all of my buttons to make sure everything’s in place. We are recording this episode today. One second. Something feels off. So I’m just going to check it real fast, everybody. Nope. There we go. I can see that we’re recording. So a reminder that we’re recording for those of you joining us today. This episode will be available after the event. With this new update, I’ve lost some of my buttons. 

Paul, we’re so glad you’re here with us today. Let’s show our faces. Everybody, I’m Kelly. Welcome. Some of you may or may not know our instrumentation engineer, Paul Osman. How are you? 

Paul Osman [Instrumentation Engineer|Honeycomb]: 

I’m great. Thank you. How are you doing? 

Kelly Gallamore: 

I’m okay. Now that I know where my buttons are, everything is okay. Paul, could you give the folks at home a little information about who you are in terms of developing software and why you might care about instrumentation? 

Paul Osman: 

Yeah, certainly. So, yeah, thanks, everybody, for coming. I’m really happy to be here and happy we’re giving this demo today. As Kelly mentioned, I work on our instrumentation team. We’re the team that makes it as easy as possible to get data into Honeycomb. So any SDKs or libraries or integrations to help you get data in. Before joining Honeycomb, I spent most of my career working in some form of SRE or ops. I’ve led ops teams. I’ve been part of ops teams. I come from a background of using a lot of tools like Honeycomb. Now I help build a tool like Honeycomb. Yeah, so lots of experience implementing this stuff and now helping others implement it. 

Kelly Gallamore: 

Okay. It looks like we have a good number of people signed in. I want to remind everyone listening today: if you have questions, ask them at any time. If you put them in the Q and A box, we can see them a little bit easier. We love hearing… hey, there you go. We love hearing from you in chat. Let us know that you can hear and see us. Let us know you can hear and see the presentation. Paul, I think we had a question for everybody out there. What is helpful to you? Do you want to know how many people in the room are already practicing tracing? 

Paul Osman: 

Yeah, that would be great. 

Kelly Gallamore: 

Okay. How many folks are already… where are you in tracing? Do you do it every day? You’re brand new to it? Or you don’t know what it is? Please let us know. Thank you so much, from the chat. Please let us know where you are with tracing because I know it definitely has gotten a bad rap for being difficult to set up. That’s why we’re here, to talk about how easy it is for you to have it. It’s table stakes these days. New, started last week, fantastic. Okay, perfect. Okay, I think this is going to be really helpful for people just getting started with Honeycomb. Yep, this is great. I think it’s also going to be really good for people who are getting teammates onboarded. 

Paul, you were talking a little bit about your experience. Can you share a little bit of that pain that you’re talking about with tracing and how difficult it has been to get set up? Like, what’s so painful about it? 

Paul Osman: 

Sure. I think there’s a couple of things that come to mind for me. One is the idea that tracing is really two things. There’s tracing within an application, which is completely useful on its own, but tracing gets sort of shoeboxed into the world of just microservices and just tracing across distributed applications. You can add tracing to one application. You can do things like custom instrumentation, so you can create spans around critical parts of your application, et cetera. Then there’s the other part, which is distributed tracing. That’s, of course, where you have multiple services calling each other, and you want to tie together those service calls in a tracing visualization in a tool like Honeycomb. 

There’s an impression that in order to get value out of the second, you have to instrument all of your services. That’s another thing that I found is not actually true. So, you know, there are companies out there with four or five services. There are companies with hundreds of services. If you have hundreds of services, you’re not going to want to have to instrument every single one of those services before you start getting value. Start with one. Start with one that’s most critical to you and your team and your customers, and that could be really dependent on your business. Maybe you have a service that’s been a part of a lot of outages and you want to learn more about it, add tracing to that service. As you discover dependencies that that service calls out to, add tracing to those services. What I want to show today is that you can minimize that implementation difficulty by using tools that provide all the instrumentation for applications. You’re muted. 

Kelly Gallamore: 

Story of my life today. Thank you so much for going through that. I think we should just get right to it and show these folks how it’s done. 

Paul Osman: 

Absolutely. Let me just share my screen. What I’m going to share here is two very, very simple, contrived example services. Hopefully, they’re representative, at least, of the type of services that you would encounter in the wild in your companies. The first one is a really simple Ruby service. It’s using the web framework Sinatra. All it’s doing is stubbing out a user store. So all of the data is statically coded, but, in the real world, it would be hitting a database like Dynamo or MySQL or something like that. And it’s got two endpoints. One just checks for an email address, and the other verifies the password. So it checks the password for a user account and then returns a 401 if it’s incorrect or returns the user object if it’s correct. 

So really nothing fancy there. You know, an obvious example service. The other service I have is a Python service. I chose to do this in two languages so you can see it’s relatively similar in different languages. This service has one endpoint, and it accepts a post request, and it accepts a user name and password as a JSON document, and then attempts to verify that password by calling out to the Ruby service. That’s what I wanted to do. I wanted to create one service calling another to demonstrate real-world scenarios where you have service dependencies. So let’s run these quickly just to make sure everything is working. Let’s see. I’m going to run my Python app. I’m going to run these locally on my laptop. I’ve run the Ruby service. So I can hit the Ruby service directly. You know, here is the verify password endpoint. If it’s the wrong password, I get a 401. If I type in the right password, I get a 200. 
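For readers following along at home, the described endpoint contract can be sketched in a few lines of Python. The names and data here are hypothetical stand-ins, not the actual demo code:

```python
# Minimal sketch of the demo's user-service contract: return 401 on a wrong
# password, 200 plus the user object (minus the password) on a match.

USERS = {  # statically coded, standing in for a real database
    "paul@example.com": {"id": 1, "name": "Paul", "password": "hunter2"},
}

def verify_password(email, password):
    user = USERS.get(email)
    if user is None or user["password"] != password:
        return 401, None
    # never return the stored password in the user object
    public = {k: v for k, v in user.items() if k != "password"}
    return 200, public
```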

Let’s call the Python service. This is the one that has a dependency. Here we go. So I’m calling just the only endpoint. I’m sending a post request with a JSON document. That works. As we can see, actually, just by looking at the standard output logs, I’m getting a 200 here, and it’s generating a 200 over here. So it’s clear I’m calling one service. It’s calling another service. So we have a service dependency. But, at the moment, we have no idea what the heck is happening. You know, we can see that they’re successful. That’s great, but if this was actually in a production environment and, you know, users were hitting it with real traffic, I wouldn’t know what the hell was going on. So what I want to do is, first, add just Honeycomb instrumentation. So let’s start with the Ruby library, or the Ruby service. I’m going to go to the documentation for a Ruby Beeline. Beelines, for anybody who’s unfamiliar, these are our auto instrumentation and tracing libraries. Really useful if you’re using any popular web frameworks like Rails or Sinatra, et cetera, in Ruby. 

We have them for other languages, as well, as I will show. I’m going to copy and paste code examples. I have to import the Beeline and just configure it. What I’m going to show is that this just gets us data into Honeycomb. honeycomb-beeline is the gem name. There is my configuration. I’m also going to give it a service name so that on the Honeycomb backend I can see which service the spans and events are generated from. I’ve got my credentials set up as environment variables. So let’s just do this. So my Beeline is configured for the Ruby service. The last thing I need to do, as I mentioned, I’m using Sinatra. I just need to tell the Beeline that I’m using Sinatra, and I do that by using middleware. So I just copy and paste this line. 

Kelly Gallamore: 

This is all just right in our docs. You’re just taking it straight from there. 

Paul Osman: 

This is copying and pasting from our docs. Hopefully, it’s this easy for most people. This is what we really want to aim for. We want it to just be a copy and paste. Search for your framework, grab the line of code you need, and put it in there. And as you can see, I’ve got one, two, three, four, five, six, seven lines of code. It’s all boilerplate. There’s nothing special going on here. So that’s the Ruby service. Let’s actually just restart it quickly and make sure I didn’t screw anything up. All right. It’s, at least, starting. Now I’m going to do the same thing in our Python service. Go to the Beeline for Python documentation page. Same thing. Let’s just grab the initialization code. 

There we go. So I’m importing the Beeline. Same as before, I’m using environment variables. If I could type. There we go. Similarly, I’m going to give it a service name so I can differentiate events on the backend. There we go. Now, just like before, with the Ruby service, I was using Sinatra. This Python service is using a framework called Flask. So I’m looking to go down here and look for this line here. This is, again, just a little bit of middleware that wraps my Flask application. That should be it. Oh yeah. One more piece is because I’m using a library to make outgoing HTTP calls, I also want to import our request wrapper. What this does is it actually just wraps the Python request library so that when I’m making outbound calls, they’re instrumented the same as everything else. With me so far? 
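Conceptually, what the Beeline’s middleware and request wrapper buy you can be sketched in plain Python. This is an illustration of the idea (wrap every handler in a span that records name, duration, and status), not the Beeline’s actual implementation:

```python
# Sketch of auto-instrumentation middleware: wrap each request handler so a
# "span" is recorded without touching the handler's own code.
import time

SPANS = []  # stand-in for the Beeline's event buffer

def instrumented(name):
    def wrap(handler):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            status, body = handler(*args, **kwargs)
            SPANS.append({
                "name": name,
                "duration_ms": (time.perf_counter() - start) * 1000,
                "response.status_code": status,
            })
            return status, body
        return inner
    return wrap

@instrumented("POST /login")
def login():
    # the handler itself knows nothing about instrumentation
    return 200, {"ok": True}
```

The real middleware does this for every route automatically, which is why the demo only needs a few boilerplate lines per service.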

Kelly Gallamore: 

Yes, this is making sense to me. 

Paul Osman: 

Perfect. Let’s try it out. I’m seeing already I have an error here: Honeycomb write key. Okay. Let’s see. It wasn’t finding the environment variable. Same thing with the dataset. It’s not a live demo if everything goes right. I forgot to import the Honeycomb middleware. There we go. That’s all. Cool. Okay. So let’s actually give this a shot. I’m going to do the same thing I did before. I’ve now got both services using the Beelines. What I should expect to see if I’m making these calls is some information on the backend. I’m going to make a few calls that error out. And then, actually, let’s see. There we go. 

(Laughter) 

Paul Osman: 

It’s not interesting if everything is going right. So here’s my Honeycomb account. I can see now that I’ve actually got these traces coming through. I can see total requests. I can see a latency distribution as a heatmap. This is looking good. I know the data is getting to my account. So let’s take a look at some of these. I can see a bunch of traces here. I can click on one here. This is just looking at an individual trace, but what we see is we have the authentication service calling out to the user service. Despite the fact they’re two different codebases, I just added a few lines of code, and, you know, the main takeaway here is I didn’t have to go in and do a bunch of custom stuff. 

If you do that once in each codebase, they’re going to stitch together, and I’m getting traces assembled on Honeycomb’s backend here. So, if I wanted to, I could also go in and do some querying using our query builder, et cetera. Let’s see. Perfect. And then, of course, I could do any kind of slicing and dicing that I want to, and I can look at the view from here as well. So that’s the basic use case. That’s it. You know, that’s when you’re using Beelines across the board. The whole takeaway there is that you just add the library. That’s all you have to do: add and initialize the library. You don’t have to go and hunt down every part of your code that is doing specific operations. Hopefully, that eases concerns with adding tracing. The next thing I want to show is what happens if you’re mixing and matching different instrumentation libraries. 

Kelly Gallamore: 

Okay. 

Paul Osman: 

In this example, I was using Honeycomb Beelines with Honeycomb Beelines. I’m sure a lot of people are aware of OpenTelemetry. OpenTelemetry is a huge community effort to create a standard and implementations around tracing and metrics and other kinds of telemetry data. And we at Honeycomb are really big fans of OpenTelemetry. So we want people to be free to experiment with OpenTelemetry and start to look at adopting OpenTelemetry if it’s right for them. One of the things that can be hard, when you’re working on tracing, is taking one part of your system and changing it. Like, if you have these traces flowing through your system, it’s a big ask to say, okay, take this other service and instrument it with something else, because you run the risk of breaking your traces. Then, when you’re debugging, you lose track of the critical customer journeys in your system. 

So what we’ve done to make it easier for people to try out OpenTelemetry or migrate to OpenTelemetry is we’ve added what we call trace header interoperability. So let me demo that really quickly. 

Kelly Gallamore: 

Okay. Yes, please. 

Paul Osman: 

I’m going to take the Python implementation I had that’s instrumented using the Beeline. We’re going to replace it with another one that is instrumented using the OpenTelemetry Flask auto instrumentation. I’m not going to go through the actual exercise of instrumenting it. I already have one here, but just to show you the difference, it’s really not that different. It’s literally just the boilerplate that changes from one example to the other. 

Kelly Gallamore: 

Okay. 

Paul Osman: 

I set up a Honeycomb exporter, create a trace provider, and then I auto instrument my app by providing a wrapper. The rest is the same. Let’s go in and run that one. That’s now running. Now, what’s going to happen if I run this, though, is, out of the box, the trace header format is different. So if I try to run this, the Python application is going to send trace context in a header format called W3C trace context, which is used by OpenTelemetry. Our Beeline has to be told to look for that. Pardon me. 

So what we’ve done is we’ve added these hooks that allow you to kind of mix and match trace header formats. If I look at the documentation again, interoperability with OpenTelemetry, I see that in my configure block for my Beeline, I can set up an HTTP trace parser hook, and that allows me to either use a canned parser, which in this case I will be, for the W3C trace context header, or I can do custom stuff here. I can choose to parse the trace header depending on the origin of the request, or I could try multiple trace header formats if you have an environment where you have, like, maybe nginx in there sending B3 headers, which is another format, plus some OpenTelemetry in there. I can do whatever I want, but this is the simplest example. 
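The idea of a parser hook that tries multiple trace header formats can be sketched in plain Python. The W3C traceparent layout (version-traceid-parentid-flags) is the real wire format; the B3 handling shown is a simplified single-header variant, and none of this is Honeycomb’s actual hook code:

```python
# Sketch of a trace header "parser hook": inspect incoming headers, try one
# format after another, and return the trace context from the first match.

def parse_w3c(headers):
    # W3C trace context: "traceparent: 00-<32 hex trace id>-<16 hex parent>-<flags>"
    value = headers.get("traceparent")
    if value is None:
        return None
    parts = value.split("-")
    if len(parts) != 4:
        return None
    _version, trace_id, parent_id, _flags = parts
    return {"trace_id": trace_id, "parent_id": parent_id}

def parse_b3(headers):
    # B3 single header: "b3: <trace id>-<span id>[-<sampled>...]"
    value = headers.get("b3")
    if value is None:
        return None
    parts = value.split("-")
    if len(parts) < 2:
        return None
    return {"trace_id": parts[0], "parent_id": parts[1]}

def parse_trace_context(headers):
    for parser in (parse_w3c, parse_b3):
        ctx = parser(headers)
        if ctx is not None:
            return ctx
    return None  # no recognizable trace header: start a new trace
```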

Kelly Gallamore:

Okay. 

Paul Osman: 

Let me do this with my Ruby app. Here we go. And I’ve got to import the codec. Okay. So now I’ve got two apps. One of them instrumented using a Beeline and the other instrumented using OpenTelemetry. Let’s just add a bunch of requests. I can see that’s coming back okay. Let’s go back to Honeycomb. And I’m seeing… Perfect. I’m seeing traces. So you can see they look a little different just because what is auto instrumented by OpenTelemetry is a little different sometimes than what’s auto instrumented from our Beelines. 

But I have a service here using OpenTelemetry calling out to a service using a Beeline, and because we use the trace header parser hook, which does interoperability between different tracer formats, everything just stitches together on the backend and looks great. And that’s all I have to show. 

Kelly Gallamore: 

That’s it. That’s very, very straightforward. Okay. So what I see here is a light lift to instrument tracing with either Beelines or OTel, and it looks like they’re interoperable, which I didn’t expect. That’s going to help a lot of people out there. I also really appreciate the value that you raised: I can imagine that making this big change feels daunting, especially if you’re frustrated about not being able to understand system behavior across all of these distributed services. Being able to take a small chunk and put something in there to give you visibility, you’re definitely taking this mountain and breaking it down into manageable pieces. That takes tracing from difficult to table stakes for understanding user behavior. 

Paul Osman: 

Yeah. Start small. Start where you think the value is going to be the highest and work outwards from there. That’s always my suggestion when taking on some endeavor like adding instrumentation. 

Kelly Gallamore: 

Okay. That sounds fantastic. I will remind everyone in the room… hey, I’ve got one question in the question box. Michael is asking: if we’re working on a tech stack that supports both styles of auto instrumentation, Beeline and OTel, which do you suggest we use? 

 Paul Osman: 

That’s a great question and one that comes up a lot, actually. Honestly, not to dodge, but I don’t have a straightforward answer. What I’m going to suggest is to use whatever is most important to you. If portability and using an open standard is important to you, I would definitely suggest OpenTelemetry. We are committed to making OpenTelemetry a really first-class experience for Honeycomb users. And so that’s going to continue to be somewhere where we focus effort, making sure that experience is really good. 

On the other hand, OpenTelemetry is not GA yet. So that is a concern for a lot of people, you know, depending on how much of an early adopter you want to be. You may find, as a result, that some of the Beelines are more feature-filled. Really, I would suggest, if you’re willing to take a little bit of a risk and that, you know, lack of vendor lock-in is important to you, then definitely I would recommend giving OpenTelemetry a shot. If you would rather wait, then, you know, as you’ve seen, our Beelines are compatible with OpenTelemetry. So you can always add it in later once it’s made generally available, depending on your risk threshold. Just know that things may change in the next little while. They are looking to get to GA pretty quickly, though. So we’re getting closer and closer. 

Kelly Gallamore: 

Here is a good question: what can I do to trace managed services I rely on? I use Timescale Cloud, a Postgres service. I see the directions for both, but this feels in between. That’s here in the chat. What do you have, Paul? 

Paul Osman: 

With anything like RDS, you know, you don’t have the ability to go in and instrument the code there, so obviously your options are a little bit more limited. One thing we suggest is to just do it in the application. When you’re calling out to Postgres, for instance, when you’re about to make a query, instrument your Postgres library. A lot of our Beelines will actually do auto instrumentation of a lot of popular SQL libraries. So that’s one option, and that, at least, gives you the application’s perspective of what’s going on. But it’s not going to give you everything if you want to look at, like, table locks or row lock contention, things like that. We have another tool, though, called Honeytail. Let me bring it up here. 

Kelly Gallamore: 

Yeah, we’re good on time… 

Paul Osman: 

Cool.

Kelly Gallamore:   

…if you want to show it. Sure. 

Paul Osman: 

Honeytail is in our documentation in the same section, and what it will do is actually parse a lot of popular, you know, open-source application log formats and turn those log entries into events that you can look at in Honeycomb. And so what you may want to do is both: instrument your application so that you wrap all your queries to Postgres or whatever RDS service you’re using, and then also run Honeytail and send data directly from your Postgres logs to separate datasets so that you can look at those as well. You can definitely mix and match there, but, yeah, you’re hitting on something good, which is that your options are limited when you can’t touch the code. 
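What a log-to-event tool like Honeytail does can be sketched in plain Python: parse each text log line into a structured event with typed fields that you can then query. The log format below is made up for illustration; it is not a real Postgres log format:

```python
# Sketch of log-line parsing: turn a text log entry into a field/value event.
import re

LINE = re.compile(
    r'(?P<timestamp>\S+) duration_ms=(?P<duration_ms>[\d.]+) query="(?P<query>[^"]*)"'
)

def parse_log_line(line):
    m = LINE.match(line)
    if m is None:
        return None  # unrecognized line; a real tool might fall back or skip
    event = m.groupdict()
    event["duration_ms"] = float(event["duration_ms"])  # give the field a type
    return event
```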

Kelly Gallamore: 

I’m just throwing a link to the doc section you mentioned into the chat. Folks, I will send out as many links as possible in an email following this, but if you’re in the zone, in the context of this today, I want to make sure you can have them. Again, if you have any more questions, we’ll take a few here. What I see today, Paul, is getting started small. You’ve just taken two services and gotten data into a system so that you can understand how to trace along the path of the behavior. Is there a limit on how many services tracing can support? 

Paul Osman: 

No, not at all. So, from Honeycomb’s perspective, spans in a trace are just events that happen to contain certain metadata that allows Honeycomb to stitch that event into a trace. I’ve seen traces with 4,000 or more spans in them. It doesn’t matter where the span is coming from. You can have them all emitting spans as part of a trace, and you should have no problem being able to visualize that. 
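A rough sketch of that stitching idea, in plain Python — spans are just events carrying trace metadata, and the backend groups them by trace ID and rebuilds the parent/child tree. Field names here are illustrative, not Honeycomb’s exact schema:

```python
# Sketch of trace assembly: group events by trace_id, then link each span to
# its parent via parent_id to reconstruct the tree shown in the trace view.

def assemble_trace(events, trace_id):
    spans = [e for e in events if e["trace_id"] == trace_id]
    children = {}
    root = None
    for span in spans:
        parent = span.get("parent_id")
        if parent is None:
            root = span  # the span with no parent is the trace root
        else:
            children.setdefault(parent, []).append(span)

    def tree(span):
        return {
            "name": span["name"],
            "children": [tree(c) for c in children.get(span["span_id"], [])],
        }

    return tree(root) if root else None
```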

Kelly Gallamore: 

Thank you. Let’s see. Can you tell us about exporting Honeycomb telemetry context through OTel headers instead of importing OTel headers into Honeycomb? 

Paul Osman: 

That’s a great question. 

Kelly Gallamore: 

I love these questions. 

Paul Osman: 

So that we actually don’t support. So our Beelines cannot… no, sorry, I shouldn’t say that. Yes, we can do that. So our Beelines actually have what is called a parser hook, which I showed, and then there’s a propagation hook. Because of time, I won’t be able to demo this, but I can show you what that propagation hook looks like. Here we go. This is where the example code lives in our documentation. So, similar to how I added a parser hook to our Ruby app to accept incoming W3C trace context headers, I can add a propagation hook, and, actually, I can just do this here. It’s all the same libraries. 

So, right alongside my parser hook, I can add a propagation hook. What this will do is take any outbound HTTP requests from my Ruby application, which in this case I don’t have any, but if I then added something where I’m calling out to another service, it would include the W3C trace context header in that request. So, if it was an OTel service, just by default, it would be able to pick that up and use it to construct traces. 
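The outbound half can be sketched the same way: a propagation hook serializes the current trace context into a W3C traceparent header on outgoing requests, so the downstream (e.g. OpenTelemetry-instrumented) service can continue the trace. This illustrates the concept, not the Beeline’s actual hook code:

```python
# Sketch of a propagation hook: build the W3C traceparent header and attach
# it to an outbound request's headers.

def w3c_traceparent(trace_id, span_id, sampled=True):
    # version 00, then 32-hex trace id, 16-hex parent span id, then flags
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

def inject_headers(headers, ctx):
    headers = dict(headers)  # don't mutate the caller's headers
    headers["traceparent"] = w3c_traceparent(ctx["trace_id"], ctx["span_id"])
    return headers
```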

Kelly Gallamore: 

Great. Thank you for that answer. I really appreciate it. Let’s see. I know we don’t have time to demo that as well, but there’s a couple of other places that you could possibly see that if you’re interested. We do a weekly live demo. It’s at the top of the website. I will send this out in the email too. I’m going to throw a link right here. It’s really an overall demo, but if you have a specific question, they will try to show it to you. So reach out that way. Reach out if you want to see something specific. 

Or Liz and Shelby also do office hours. That’s our Developer Advocate Team of Liz Fong-Jones and Shelby Spees. They do dedicated office hours. If you want to talk through some of this specifically, we’re at KubeCon this week talking about OpenTelemetry with a lot of people right now. We’re really excited to see more people get the tracing that they need right there, not having to switch tools, but right there in front of them so they can really understand user behavior. Paul, just kind of as an outgoing, I don’t see any last questions. So just as an outgoing thought, do you have any thoughts for folks in the room about the value that tracing can bring? I mean, I can talk about, you know, definitely saving time, understanding if your systems are slower, you can improve them or if a user is having an issue that’s not showing up in your monitoring tools, you can trace along that for sure. But, as an engineer, that pain, how is life better with tracing? 

Paul Osman: 

Yeah, for sure. So, really quickly, I just added a GitHub repo, to answer also the previous question, that has examples of using both trace parser and propagation hooks. But, to answer your question, Kelly, tracing allows you to do something really important, which is to zoom out and look at what part of a user journey may be causing pain. That’s how I see it, at least. There’s a great class of problems that are easy to detect once you’ve got tracing implemented. A famous one, when you add microservice dependencies to a system, is N+1 service calls. So let’s say you have a service, like a user service, and it returns a list of users. Then, for each user, you have to look up some specific piece of information like an avatar or a username or something like that. 

It’s very common, actually, to end up, just by accident, calling out to that service for each user. You can look at that in a trace and see there are all these RPCs being made from service A to service B. It becomes instantly apparent, whereas if you’re depending on other forms of telemetry like logs or metrics, you may wonder why, you know, your user service gets so much more traffic than the service that’s calling it. And it’s because of this N+1 problem. Those are issues waiting to be identified. The other one that’s really popular is just looking to see, based on the duration, which is the length of the span in the visualization: what’s slow in your system? What is taking longer in your system than you expected it to? Those are specific situations that come to mind. 
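The N+1 pattern Paul describes is easy to demonstrate in a few lines of plain Python. A hypothetical avatar service stands in for the downstream dependency; the point is that call volume scales with the user count, which shows up in a trace as a fan of identical sibling spans:

```python
# Sketch of the N+1 service-call pattern versus a single batched call.

CALL_COUNT = {"avatar_service": 0}  # stand-in for the downstream service's traffic

def fetch_avatar(user_id):
    CALL_COUNT["avatar_service"] += 1  # one downstream call per user
    return f"https://example.com/avatars/{user_id}.png"

def list_users_n_plus_one(user_ids):
    # accidental N+1: one avatar lookup for every user in the list
    return [{"id": uid, "avatar": fetch_avatar(uid)} for uid in user_ids]

def fetch_avatars_batch(user_ids):
    CALL_COUNT["avatar_service"] += 1  # one batched call for all users
    return {uid: f"https://example.com/avatars/{uid}.png" for uid in user_ids}
```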

Kelly Gallamore: 

Thank you. I can see how that’s important when you’re on call trying to solve an issue. I can also see how it tightens the feedback loop so that people who are building software can know sooner how their code affects production and might affect users, based on their specific situation. So let’s see. We actually have a couple more questions. Paul, I’m going to keep going, for the folks who can stay, if you have a few minutes. Are you good? 

Paul Osman: 

Yeah.

Kelly Gallamore: 

Great. Do you have any example Node.js apps that have the Beeline SDK set up? Specifically, apps with async promises? 

Paul Osman: 

That is a great question. Something we do have that I will point you to… I’m going to pull up two things. The GitHub repo for the Node.js Beeline. So our Node.js Beeline has the same doc structure as the Python and Ruby ones I showed. Hopefully, this will be able to at least give you enough to get started. If you need more than that, we also have this examples repo. You know, this isn’t the most maintained thing we have, but for things like examples of just, you know, using an app, it does have some pretty good stuff. So we have a Node.js tracing web app, for instance, that uses our Beeline. Unfortunately, I’m not sure if it has an async setup. 

The best place to get an example of that, honestly: if you’re part of our Pollinators Slack, just pop into the SDK Node.js channel and ask somebody, like, “Hey, do you have a quick GitHub gist or, like, a code snippet of setting up the Beeline with an app that uses async promises?” A community member might get back to you, or somebody that works at Honeycomb may get back to you and show you a quick example. 

Kelly Gallamore: 

Thank you so much for that. We have one more here in the chat as well. If you had to implement telemetry on a small app that has none, what would you trace first? 

(Overlapping speakers) 

Paul Osman: 

Great question. 

Kelly Gallamore: 

Go ahead. 

Paul Osman: 

I was going to say it depends on what you mean by “small app.” I’ve been in situations where you have a monolith, for instance, like a monolithic codebase, and it calls out to a bunch of microservices. In that case, I would start with the monolith because you’re going to get the outbound calls. So, you know, it depends if this app lives alone or exists as part of a microservices architecture. I always think, like, go to the thing that the customers hit the most, you know, or the thing that is the center, so to speak, of your user journey. But let me see. 

Ah, they say one service with only two external dependencies. Great. What I would start with, then, is just adding a Beeline or an OpenTelemetry auto instrumentation library to that service, and you’re going to get a lot of data for free. When you go to Honeycomb, you’re going to get things like duration for every request handler. You’re going to get, you know, the HTTP method used, assuming it’s an HTTP service. You’re going to get the path and the status code. That will give you, off the bat, a level of visibility into your error rate, into the durations, and into where the requests are coming from. 

Then you can start to do custom instrumentation. Once you’ve got the Beeline in there, then you find out, you know, what is most important in your app. What are the parts of the application that are doing things that are more complicated? And then add custom spans around those things, where you can add fields to each span and just see what context matters to you. But start with just the auto instrumentation. You’re going to get a ton of data out of that. We always say auto instrumentation can kind of get you there, then you need to tell your library what is most important to your app, because only you know that part. But just start by dropping in a Beeline or OpenTelemetry auto instrumentation library. It will get you pretty far to start. 
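That custom-span step can be sketched with a simple context manager: wrap an interesting piece of code, time it, and attach whatever fields matter to you. The API shape here is illustrative, not the Beeline’s actual interface:

```python
# Sketch of custom instrumentation: a span context manager that records a
# name, caller-supplied fields, and the duration of the wrapped block.
import time
from contextlib import contextmanager

SPANS = []  # stand-in for the instrumentation library's event buffer

@contextmanager
def span(name, **fields):
    record = {"name": name, **fields}
    start = time.perf_counter()
    try:
        yield record  # callers can add more fields mid-span
    finally:
        record["duration_ms"] = (time.perf_counter() - start) * 1000
        SPANS.append(record)

def checkout(cart):
    # wrap the business-critical part and attach the context that matters
    with span("checkout", cart_size=len(cart)) as s:
        total = sum(cart)
        s["cart_total"] = total
        return total
```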

Kelly Gallamore: 

I really appreciate that. I don’t see any more questions at this time. Paul, thank you so much. I really appreciate it. I also appreciate everyone who has participated and listened to us today, who’s asked questions and given us answers to our questions. If you have more that we didn’t get to today, you can reach out to team@honeycomb.io. There’s a link in the chat. If you liked this episode, or if you didn’t like this episode, I want to hear it. I have a short survey. Help me, help our team, make things that are better for you. I need some more coffee; let me try that again sometime. But I really appreciate everybody coming. And, Paul, thank you so much for sharing with us today. I’m glad to have you here. 

Paul Osman: 

No problem. Thanks to everybody for coming. 

Kelly Gallamore:

Have a great week. 

Paul Osman: 

Take care, all.

If you see any typos in this text or have any questions, reach out to marketing@honeycomb.io.