
BubbleUp to Spot Outliers in Production


Summary:


The power of Honeycomb lies in the way you analyze production data using different interactive views. See what's happening across many dimensions (fields) in your system with BubbleUp. Pick the timeframe, break down by any field, such as customer name or ID, then filter by a specific dataset or by where any errors occur. The query results are displayed as a heatmap that highlights events against the baseline over time.

Use BubbleUp to select outliers on the heatmap and drill down to all related fields in that data. It will help you understand which part of the code is misbehaving.

In this episode of #HNYLearn, we show:

- How heatmaps illustrate added dimensionality
- How BubbleUp and Tracing views work side by side for even greater context
- How BubbleUp lets you drill down to see exactly where problems occur... blazing fast!

See a Honeycomb product demo and learn the value of real-time analysis across system events to troubleshoot issues and understand how production is behaving right now.

Transcript

Rachel “pie” Perkins [Wordsmith|Content|Honeycomb]:

Good morning everyone, or afternoon, or perhaps it’s evening where you are. We’re going to wait a couple of minutes for folks to dial in on their old school telephones, and then we’ll start. Thanks for your patience. Good morning again everyone. Thank you for joining us for Honeycomb Learn number four, “BubbleUp to Spot Outliers in Production.” As you may have noted, there are already three installments in the Honeycomb Learn series, which are all available on-demand via our website at Honeycomb.io/resources/webinars.

Before we dive into the presentation, excuse me, I’d like to go over a couple of housekeeping items. The first thing is if you have a question at any time during the webinar, please use the question tab below the player. We’re not going to stop during the presentation to answer questions, but we will address questions at the end. Then the second thing is, please at the end of this webinar take a moment to rate our presentation and provide feedback using the “Rate This” tab below the player.

Let’s do some introductions. Danyel.

Danyel Fisher [Principal, Design Research|Honeycomb]:

Hi, I’m Danyel Fisher. I’m the Principal Design Researcher here at Honeycomb. I’ve been here for about a year. My work centers on thinking about data visualization and analytics. I’m really interested in how people deal with data and understand it and I’m loving ways of applying visualization to the SRE experience.

Rachel “pie” Perkins:

Hi, and I’m Rachel Perkins, AKA Rachel “pie” Perkins. I’m in charge of words and community and, to some degree, the documentation here at Honeycomb. I spent the previous nine years running docs and community at Splunk, starting back when it was about 60 people. What drew me to Honeycomb was seeing the same potential for a huge step forward for folks who run code in production. So I’m excited to share some of that with you today. Danyel, you want to run us through the outline for today?

3:37

Danyel Fisher:

Sure. We’re going to start off by talking a little bit about observability and understanding complex systems, which I’d argue is a fundamental way of thinking about visualization for data. Throughout this talk, and indeed through everything that Honeycomb does, we’re going to talk a lot about the ideas of high-dimensional data and high-cardinality data, so I’m going to talk a little bit about some examples of what those look like. That’s going to lead us to an analysis method that we call the core analysis loop: explore your data and get to know what’s in it. And that’s going to lead us directly into a tool that we call BubbleUp, which I’ll do as a live demo to introduce really new ways to dive into your data and explore it from different perspectives.

Rachel “pie” Perkins:

Awesome, and before we get into the meat of our discussion today, remember we will be gathering questions up in the questions tab. We’ll review and respond to them at the end, so do ask away. We’re also going to be giving a demo of using BubbleUp in Honeycomb, so you will see Honeycomb in action today if you haven’t before.

First, observability. I expect you’re hearing the term a lot lately, but I’d like to make sure that we are all using the same definition. This definition is from Wikipedia. In control theory, observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs. Now we’ve taken this term to heart here at Honeycomb because it really reflects what we’re trying to offer. We want our users to be able to figure out what’s going on inside their systems without having to ship new code. That last part’s important.

Our Co-founder and CTO, Charity Majors, expands on this subject in this Tweet. If you don’t follow Charity on Twitter, I recommend you do. She’s pretty opinionated about a lot of things to do with shipping and running code in production, as well as the occasional whiskey discussions. The thing about observability is that it’s not just a thing you get to and stop. It’s a journey. As I mentioned earlier, there are some previous installments of this webcast series. They represent different steps on the path to observability.

Now we’ve got them in order here. But in terms of a progression, the only one you really have to do before the others is the first one because you have to have instrumentation before you can really do the rest of these things. That first webinar provides a great base on which to build your observability practice, so I recommend you check it out and share it with everyone.

But today we’re gonna focus on how to identify outliers and anomalies in your data using a feature we developed specifically for that. First, we’re going to take a closer look at heat maps, what they’re for, how to read them, and the importance of having access to rich event data and lots of context so you can pinpoint where in your code the problem could be. Then we’ll finish the series out after this one with the next episode, where we’ll cover how to collaborate using Honeycomb, how you can curate and share what you’ve learned as well as learn from other team members by building on their previous experiences, which is huge. We’re promoting that one in the coming weeks, so keep an eye out.

Let’s dive into the meat of today’s session. Danyel, I’ll hand things over to you now to talk about high dimensionality and high cardinality in data and why nowadays those things are more important than ever.

Danyel Fisher:

Absolutely. Thanks. You were just talking about the importance of instrumentation and the value of being able to dive into really understanding the context of what happened with your system. I think if Honeycomb has one underlying theme, it really is this drive towards observability and trying to understand as much as we possibly can about what’s going on in the system. The best way to do that is to do a good job of instrumenting, which means that you can record it.

Now, one of the challenges that we’ve seen, certainly in some of our competitors’ tools and sometimes in our own instrumentation, is that you might not have captured enough data. I’m starting off this conversation with high dimensionality to say we love dealing with many columns of data, as many as you can give us. Today we’re going to be using a demo data set that has, well, as you can see here, about 20 columns of data: things like an availability zone, which is someone’s Amazon zone, the duration that queries took, the number of endpoints, and that sort of thing. And 20-something is a fairly impressive number, but in fact, internally, when we’re doing real analysis on the Honeycomb production systems, well, this is a screenshot of the underlying data for one of the systems that we use. This is, I think, 250 columns at last count.
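As a rough illustration of the kind of wide, many-column event Danyel is describing, here is a minimal sketch using the libhoney Python SDK; the write key, dataset name, and field values are hypothetical, not the demo dataset’s actual schema.

```python
# A minimal sketch of sending one "wide" event with the libhoney Python SDK.
# The write key, dataset name, and field values here are hypothetical.
import libhoney

libhoney.init(writekey="YOUR_WRITE_KEY", dataset="api-production")

ev = libhoney.new_event()
ev.add({
    "availability_zone": "us-east-1a",    # which Amazon zone served the request
    "customer_id": 20109,                 # high-cardinality: one value per customer
    "endpoint": "/api/v2/tickets/export",
    "duration_ms": 38.2,
    "status_code": 200,
    "hostname": "api-14",
    # ...add as many columns as you can; more dimensions mean more questions later
})
ev.send()

libhoney.close()  # flush pending events before exit
```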

Rachel “pie” Perkins:

Reality is painful to look at anyway.

Danyel Fisher:

There’s a lot to know about your system. There’s a lot that you might want to ask. We want to make sure that when you’ve got an opportunity to ask that, we’ve captured that important dimension.

We also talk a lot about high cardinality, because not only are there a lot of columns but they can have a lot of possible values. For example, looking at the data set that we’re going to look at this afternoon. We care about customer ID. Each distinct entry might have a different thing from a different customer. We care about things like what query was called on a SQL call and what error came out of it. These can have a lot of different possible values. We want to make sure that we’re ready for all of this. In fact, I’d go so far as to say that Honeycomb’s superpower is the ability to filter and group by any column in your data set. No matter how many columns there are, no matter how many distinct values there are, we’re ready to look at them all, and we’re ready to share, and we’re ready to let you split up and break across all of them.

What that leads to is a process that we’ve started to call the core analysis loop. Let me give you some context here. We’re dealing with the data sets. We’ve been triggered by an alert or we’ve got a notification from a user that something feels wrong. What we want to do is find some way of looking at the data that shows that something’s unusual. For example, we might see this number of errors has spiked or the duration of our calls has spiked or that the number of processes that are running has dropped. Something that allows us to see what looks unusual.

10:00

Then we want to formulate… we’re going to naturally start formulating hypotheses to figure out what could possibly have done it. We’ll choose a variable to see whether that explains what’s happened. Sometimes that variable is clear, but sometimes we have to guess and stare at the data a little bit and figure out what that is. Then we will break that down. We will either filter it out, to see whether, once we remove that variable, everything looks okay, or we’ll break down by that variable to compare the world with that variable and without that variable, trying to see whether we have successfully explained it.
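The manual version of that loop might look something like the sketch below: notice a spike, guess a column, break down by it, repeat. The column names and the events.json export are hypothetical; this is only meant to make the guess-and-check shape of the loop concrete.

```python
# A rough sketch (with hypothetical column names) of one pass through the manual
# "core analysis loop": notice a spike, pick a candidate column, break down by it,
# and see whether any single value explains the spike.
import pandas as pd

events = pd.read_json("events.json", lines=True)   # hypothetical export of raw events

# 1. Something looks unusual: p99 duration has jumped.
print("overall p99 duration:", events["duration_ms"].quantile(0.99))

# 2. Guess a variable and break down by it.
candidate = "availability_zone"          # next guess might be "hostname", "endpoint", ...
by_value = events.groupby(candidate)["duration_ms"].quantile(0.99)
print(by_value.sort_values(ascending=False))

# 3. If no single value stands out, go back to step 2 with another column.
```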

Rachel “pie” Perkins:

So basically it’s like, “This looks wrong, is it this? Is it that? Is it this other thing?” And you just do that a bunch of times, right, while you’re troubleshooting and click and then no, and then click and click again. And all the while the world is kind of burning down around you. This is a painful process sometimes.

Danyel Fisher:

That certainly can be. Yes, certainly one way that you can make that a lot simpler is by knowing your data really well, by having an intuitive guess. There certainly are some people I know who can stare at a bump in their data and say, “Ah, that looks to me like a SQL failure.” But for all the rest of us, we’d like to, well, find a way that we can support users in doing that.

We built this tool called BubbleUp that’s meant to help humans recognize the parts that are interesting, to use your skills to see what’s going on. We’ll turn some of these big questions, about how the data sets are different, whether this variable makes a difference or not, over to the computer and let the computer do the computing. Then we’ll let you do the looking and the recognizing to figure out what the actually interesting parts are. This method can save a ton of time in analysis and, of course, it makes your customers happy because your system is back up faster and your work goes more smoothly.

Rachel “pie” Perkins:

So we do the calculations all in parallel, so you can do what you’re good at, which is using your instincts and pattern matching with your eyes. You can go, instead of doing that click, no click, no, a bunch of times, it all happens at the same time.

Danyel Fisher:

Yeah. Let me show you how this works. We’re inside the Honeycomb interface. I’d like to weave this scenario that I have just gotten an alert on my phone. It warned me that there are some users whose queries are timing out. Now, I’m in charge of a system that processes backend queries for other people. So they’re off running their systems, and their customers are asking for things, and that’s popping out on our system. Now, what’s important to me, of course, is guaranteeing that when they’re calling our API, we’re giving them a good quality of service, that their queries are being serviced rapidly. I might go ahead and check the standard percentile data for looking at how long queries are taking. We can see at the top of the screen here, I’ve looked at the P50 for duration: how long are queries taking, what’s the median query taking? The answer is, it’s about 35 milliseconds. Sometimes we get a little bit quicker, but we’re looking pretty steady at 35. The median person is doing just fine right now.

P95, the 95th percentile: the 95th-percentile person is actually doing pretty well too. We can see that there was a little bump here, but things are looking okay. Now they’re at 300 milliseconds. Our slowest 5% are still getting roughly a 300-millisecond experience, but that’s been constant.

Now, look at this bottom here, the P99. That’s where you begin to see this substantial bump. Something happened at 10:15 in the morning that raised the 99th percentile time from half a second all the way up to a full second. That’s a little scary. Something’s gone wrong. I’d really like to know what it is. Unfortunately, just looking at this table doesn’t give me enough richness, so I’m going to go ahead and change these over from looking at the duration queries to looking at a heat map query. In Honeycomb, what I’ve done is I’ve gone ahead and I’ve specified that I want to look at the heat map of the duration instead.
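For readers who want the arithmetic behind those three lines, here is a small, self-contained illustration, with made-up numbers, of how a healthy median can coexist with a badly degraded 99th percentile.

```python
# A small illustration (with synthetic numbers) of the percentiles discussed above:
# the median can look healthy while the 99th percentile tells a very different story.
import numpy as np

durations_ms = np.concatenate([
    np.random.normal(35, 5, 9_900),      # most requests: ~35 ms
    np.random.normal(900, 100, 100),     # a small slow tail: ~900 ms
])

p50, p95, p99 = np.percentile(durations_ms, [50, 95, 99])
print(f"P50={p50:.0f} ms  P95={p95:.0f} ms  P99={p99:.0f} ms")
# The P50 stays near 35 ms even though 1% of requests take close to a second.
```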

When this query comes back, we can see that one of our initial hypotheses might have been, “Gosh, maybe the count got high.” I can look at this and see, well, no. Honestly, the number of queries that have come to our system, the number of events that we’re processing, hasn’t actually increased all that much. It will increase later; at 11:00 AM there seems to be a big growth, but right now that’s not the problem. And so I use the heat map.

A heat map is a wonderful visual innovation. I want to see more people getting to know heat maps, because frankly heat maps are one of those things that make me, as a visualization guy, bounce happily in my seat. On the X-axis, we’re looking at time. On the Y-axis, we’re looking at duration. Each cell is colored by the number of people, or the number of events, that fall into that cell.

For example, you can see that these pale greens at the top mean that there are some people who are experiencing overall 700 milliseconds. That ticks all the way across here. We can see on the bottom row that it’s getting darker and darker and darker. That’s reflecting the count. What that shows is that even as the traffic has been increasing, most people are still experiencing short times, much less than one-tenth of a second. We can see this spike coming off here, this group of places where increasing numbers of events were coming in that took between half a second and a full second.
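A duration heat map like the one on screen is essentially a 2D histogram: bucket events by time and by duration, then color each cell by how many events land in it. The sketch below, with synthetic data, shows the idea.

```python
# A minimal sketch of how a duration heat map is built: bucket events by
# (time, duration) and color each cell by how many events fall into it.
# Timestamps and durations here are synthetic.
import numpy as np
import matplotlib.pyplot as plt

timestamps = np.random.uniform(0, 3600, 50_000)                     # seconds into the hour
durations = np.random.lognormal(mean=3.5, sigma=0.4, size=50_000)   # ms, mostly fast

counts, t_edges, d_edges = np.histogram2d(
    timestamps, durations, bins=[60, 40]    # one column per minute, 40 duration buckets
)

plt.pcolormesh(t_edges, d_edges, counts.T)  # darker cells = more events
plt.xlabel("time (s)")
plt.ylabel("duration (ms)")
plt.colorbar(label="event count")
plt.show()
```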

Rachel “pie” Perkins:

Basically the Loch Ness monster is there.

Danyel Fisher:

Pretty much, yeah. Now the standard thing that we might want to do is start hypothesizing and making some guesses about what’s going on. For example, I might say, “Hey, maybe that’s the sign of an Amazon endpoint, or an Amazon availability zone, failing.” Let’s switch over to the availability zone and rerun that query and see whether that explains it. I look over here and I can see that some of our data is in US East 1 and some of it is in US West 1. That seems to make no difference whatsoever.

17:04

We can start hypothesizing about other things. We can start digging around and say, “Ah, maybe it’s a particular host, right? It’s a particular machine that’s failing us.” Again, we can come back here and start jumping through all the different ones. We can see that each of them is pretty sparse and none of them is directly responsible for this spike.

Rachel “pie” Perkins:

They all have Nessie.

Danyel Fisher:

Right. This is a pretty painful process, what I’ve just started doing. Try a variant, see if it works. Try a variant, see if it works. The idea behind BubbleUp is it allows us to pick the data that we think is interesting and see how it’s different. I’ve now switched over to the BubbleUp tab. I’m selecting this little chunk of data right here, some of the stuff that’s experiencing the worst experience. What Honeycomb is doing is it’s looking at those points that I’ve selected, and it’s comparing them to all the rest, and it’s showing that across every dimension in our data.

Rachel “pie” Perkins:

And all you did is just draw the little box, right?

Danyel Fisher:

I just drew the little box.

Rachel “pie” Perkins:

Yeah, okay.

Danyel Fisher:

We can see very quickly, for example, that the stuff inside this box has a very different user ID than everything else. The stuff inside the box is dominantly this user, 20109, while the baseline has a wide variety of users. The stuff inside the box also has this very specific endpoint: it’s all talking to API V2 tickets export. We can see, again, that it’s this one distinct endpoint. We can see that it’s all hitting the same error. We can see which service it’s coming from. We can also see some things that don’t seem to matter, like when we come over here and look at the hostname. We can see, as I said before, that the hostname doesn’t seem to matter. It’s not affecting it. The distributions aren’t obviously different. Customer ID doesn’t seem to be importantly different. Things like platform, whether it’s coming in on Android or iOS, don’t seem to matter much either, nor does the availability zone.

That’s been great. We now know some places to start looking. For example, we can go ahead and just go yell at this user, but I’d like to do something more interesting and point out that these steps can fit together. What I’m going to do is I’m going to grab one particular trace that has experienced this. This trace is from the selection. I’m going to filter on it. I’m going to jump over to traces mode. What I can see is that we’re now going to process one request that came from API V2 tickets export. That’s the thing. It’s a trace that had taken just about a second, which was the time range that we’re concerned about.

I can jump right into that trace now and go look to see what happened. Why was it slow? We can see the structure that’s pretty straightforward about what happens inside a trace, and what happens when we’re processing one of these API V2 tickets export endpoints.

Rachel “pie” Perkins:

That’s a lot of calls.

Danyel Fisher:

Yup. So we called check rate limits and that seemed to go pretty quickly, and we called fetch user info and that went pretty quickly. Then we called fetch tickets for export. Check this out: 26 times. We did a little MySQL query. Every time we did that little MySQL query, we did it serially, one after the next, after the next, after the next. Each of them seems to take a pretty reasonable amount of time, but wow, it really went all the way out there. That’s a little scary.
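The pattern the trace reveals, many fast calls issued one after another, is worth making concrete. The sketch below is not the demo’s code; it just simulates 26 serial round trips at roughly 39 ms each to show how they add up to about a second.

```python
# A runnable sketch of the pattern the trace revealed: 26 sequential round trips
# add up even when each one is individually fast.
import time

ROUND_TRIP_S = 0.039                      # ~39 ms per MySQL query, as in the trace

def fetch_ticket(ticket_id):
    time.sleep(ROUND_TRIP_S)              # stand-in for one serial database call
    return {"id": ticket_id}

def export_tickets_serial(ticket_ids):
    return [fetch_ticket(t) for t in ticket_ids]   # 26 calls, one after another

start = time.time()
export_tickets_serial(range(26))
print(f"serial export took {time.time() - start:.2f}s")   # roughly 26 * 39 ms, about 1 s

# Batching into one "SELECT ... WHERE id IN (...)" or issuing the calls
# concurrently would collapse most of that wall-clock time.
```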

Rachel “pie” Perkins:

Yeah. Somebody really needed to get those tickets, I guess, over and over.

Danyel Fisher:

This makes me wonder, is it something about the way that they’re making their call to our system? Is it something about the way that we’re reflecting on this data and the way that we’re … Is it something about the way that we’re processing those calls? Is it something about the way that they’re invoking us? Maybe they’re calling our API in a way that we didn’t expect. I can also start seeing things like, hey, this is an unusual status code. It’s status code 25.

The scroll part that I’m going through on the right is giving us additional information about the span. For example, this particular span has the name query and comes from the service name MySQL. It ran for 39 milliseconds. I can also see, for example, a specific ID and parent ID. I can also see here what the SQL query that invoked it was. The query that invoked it was “SELECT * FROM tickets.” I can …
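Written out as a plain dictionary, the span being inspected carries roughly the fields below. The field names are approximations for illustration, not the exact schema of the demo dataset, and the IDs are placeholders.

```python
# Roughly what the inspected span carries, written out as a dictionary.
# Field names approximate the demo dataset; IDs are placeholders, not real values.
import json

span = {
    "name": "query",
    "service_name": "mysql",
    "duration_ms": 39,
    "trace.trace_id": "TRACE_ID_PLACEHOLDER",
    "trace.span_id": "SPAN_ID_PLACEHOLDER",
    "trace.parent_id": "PARENT_SPAN_ID_PLACEHOLDER",   # the fetch-tickets-for-export span above it
    "db.query": "SELECT * FROM tickets",
}

print(json.dumps(span, indent=2))
```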

Rachel “pie” Perkins:

Pretty straightforward.

Danyel Fisher:

What’s that? Yeah, and the great thing is I can grab that query now, jump back, bring that into the query builder, and actually go see what the distribution of that time looked like. One of our questions might be, for example, were we calling this in a usual way during this spike? Was it getting longer or shorter than it usually does? Looking at the heat map, we can see that this query, “SELECT * FROM tickets,” is a baseline of our system. We’ve been calling it constantly. It always runs between 35 and 45 milliseconds. But we have started calling it more in this period, and that’s probably because of this high level of parallelism. That, again, becomes the next step of our investigation. We can start asking questions about what we learned from that query and why we’re calling it in this particular way.

Now, some fun things we might do. Now that we know something. For example about the fact that it was a particular endpoint. We could go back and change that breakdown to particularly go look at the endpoint shape. Let’s go take a look at this data again now. We can flip between different breakdowns on the endpoint shape and see how they look. We can see that the endpoint organizations and macros and search and assignable, all these different things that our API can do didn’t change at all. Only this one, ticket exports, is the one that has the spike.

Rachel “pie” Perkins:

Yeah. That one might be ripe for refactoring.

Danyel Fisher:

Absolutely. Now if we’d been really, really lucky upfront, we might have guessed endpoint was the thing to filter on. Then we might have been clever enough to have scrolled through until we found this. I think it was really nice to be able to go into BubbleUp and find out directly what was the one field whose fault it was and be able to figure out how to move forward from it.

Rachel “pie” Perkins:

Definitely, that typically saves a lot of time. I mean, if you already knew what caused something, why would you even be investigating in the first place?

Danyel Fisher:

Exactly.

Rachel “pie” Perkins:

But now we know what caused Nessie. Great.

Danyel Fisher:

So with that, I think, I’m ready to hop back into our slides.

Rachel “pie” Perkins:

Oh yeah. So yeah, go ahead. Go ahead. Sorry.

25:09

Danyel Fisher:

No problem. To recap just a little: BubbleUp looked at every dimension to see how the selected area, the thing inside the rectangle, was different from the things outside the rectangle. For each of those dimensions, it grabbed a few thousand points. It drew these little visualizations of the histogram so that we could see how they were different. It ranked those histograms in order by the amount of difference, so that things with a dramatic standout, like the endpoint shape and the user ID, got pushed to the front, while things with less dramatic outliers got moved toward the back.

For each of these little visualizations, we could slide our mouse over it so that we could actually go look at what percentage of the data it was or wasn’t showing. We didn’t have a whole lot of it here, but there are a number of events in our data sets that didn’t have all the dimensions defined. So if some things don’t have an availability zone, for example, then we show those with partially filled or unfilled circles to help emphasize that difference.

With all these pieces together, you’re really able to pretty quickly zoom in, figure out where this fits or doesn’t fit, and identify how an outlier is different from, or similar to, the data around it.
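The comparison Danyel describes can be approximated in a few lines: for every column, compare the distribution of values inside the selection with the baseline, and rank columns by how different the two distributions are. This is only a back-of-the-envelope sketch with hypothetical column names; Honeycomb’s actual ranking is more sophisticated.

```python
# A back-of-the-envelope sketch of the comparison described above: for every column,
# compare the value distribution inside the selection with the baseline and rank
# columns by how different the two distributions are.
import pandas as pd

def rank_dimensions(events: pd.DataFrame, selected: pd.Series) -> pd.Series:
    scores = {}
    for column in events.columns:
        inside = events.loc[selected, column].value_counts(normalize=True)
        outside = events.loc[~selected, column].value_counts(normalize=True)
        # Total variation distance between the two histograms (0 = identical, 1 = disjoint).
        scores[column] = inside.subtract(outside, fill_value=0).abs().sum() / 2
    return pd.Series(scores).sort_values(ascending=False)

# Usage sketch: select the slow outliers and see which fields stand out first.
# events = pd.read_json("events.json", lines=True)
# ranked = rank_dimensions(events, events["duration_ms"] > 500)
# print(ranked.head())   # expect fields like user_id and endpoint_shape near the top
```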

Rachel “pie” Perkins:

The team built BubbleUp towards the end of last year. We shipped it earlier this year and we’ve made a bunch of improvements based on feedback from our customers, who have gotten a lot of value out of it, as these quotes show. In particular, I also recommend that you take a look at a case study we did with the game studio BHVR. They rely a lot on third-party services to deliver their games to customers, especially via the login to different game platforms. They found that the “Is it this provider? Is it this other provider? Is it the network? Is it us?” dance goes a lot faster with BubbleUp. So check that case study out on our site at Honeycomb.io/case-studies under BHVR.

With that, it’s time for us to get to questions. Anybody? Let’s see. I’m not seeing any questions. Does anyone? Please do enter your questions.

Danyel Fisher:

As we’re waiting for those, I think it is worth saying that the way that BubbleUp thinks about the world right now, it is based on looking at a heat map. And so right now we’ve only enabled BubbleUp coming out of the heat map mode, because that’s the one that allows you to really unambiguously say, “These are the points that I care about. These are the points that are not as interesting.”

Rachel “pie” Perkins:

Yeah, and drawing that little box is pretty mind-blowing. I’ve watched customers do that for the first time. Their eyes really do seem to just get wider and wider. Hopefully, you’ll get to try it out yourself sometime. Yeah. I’m not seeing any questions from the audience, so I think perhaps we have answered all the questions. Oh, here we are. Who’s got one? Is BubbleUp good for finding issues only? What about other kinds of performance optimizations?

Danyel Fisher:

BubbleUp will help you find really any important distinction that sits in your data. We happened to be looking here at a performance issue, but we can look at … Sorry, we happen to be looking here at a particular alert, but we could look at a performance issue. We could look at almost anything. One of my colleagues has been using it recently to look at the build process. Each piece of the build process generates a time that it took. He’s able to go look and say, “Hey, this horizontal stripe of data that I see across here that’s taking five seconds, how is it different from all the others?” It just pops straight out: oh yes, these are the points that are, I don’t know, the compiler, as opposed to the points that are the testing system. We can link it into almost anything. I see another question.

Rachel “pie” Perkins:

Yeah, the next question is how do we model or plan for how much data the Honeycomb agent will send from our production apps to Honeycomb? Now, I know a little bit about this, but Danyel Fisher, go ahead.

Danyel Fisher:

How do you model or plan the amount of data? We have other experts on the team who know a lot more about that than me. However, I am happy to say that we’ve been thinking a lot about ways of helping with sampling. In fact, we just released a report from Liz Fong-Jones talking about ways that you can sample and adaptively sample. One of the nice things about that is that it really lets you set up your system to describe approximately how much data you want to keep and what you want your overall sample rate to be, and to tell it the important dimensions that you want to make absolutely sure you are or aren’t sampling on.

For example, you might say, “I care a lot about HTTP status code and I care a lot about slow queries versus fast queries.” We want to make sure that each of those buckets gets well populated. So if there are very few errors, then rather than dynamically sampling fewer of them, we’ll dynamically sample every one of them. While on the other hand, boring HTTP requests that go quickly, we’ll only sample, you know, one in 100, one in a thousand, one in 10,000. That said, we do know that our facilities for modeling this sort of thing can be challenging. Our solutions engineers have some great tricks up their sleeves and can tell you a lot more about that than I can.
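The dynamic-sampling idea described here can be sketched as a simple per-event decision: keep everything rare or interesting, heavily sample the boring traffic, and record the rate so counts can be re-weighted later. The thresholds and rates below are arbitrary placeholders; Honeycomb’s adaptive samplers adjust them on the fly.

```python
# A toy sketch of per-event sampling keyed on the dimensions you care about.
# Thresholds and rates are arbitrary placeholders, not Honeycomb's real logic.
import random

def choose_sample_rate(event) -> int:
    if event["status_code"] >= 500:        # keep every error
        return 1
    if event["duration_ms"] > 1000:        # keep every slow request
        return 1
    return 100                             # keep ~1 in 100 fast, successful requests

def maybe_send(event, send):
    rate = choose_sample_rate(event)
    if random.randint(1, rate) == 1:
        event["sample_rate"] = rate        # record the rate so counts can be re-weighted
        send(event)

# Usage sketch:
# maybe_send({"status_code": 200, "duration_ms": 42}, print)
```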

Rachel “pie” Perkins:

Yeah, and those things should definitely be based on your business goals as well. It’s super important to start from that perspective and build that into your dynamic sampling. Please do ask further questions about this in our Slack. If you’re already trying out Honeycomb, there’s a lot of great info there and smart people to help you.

Danyel Fisher:

We have a couple more questions.

Rachel “pie” Perkins:

Oh, go ahead.

Danyel Fisher:

Let’s see. One user asks, “Can you create a heat map view from a trace view today?” Today, like literally at this moment, you do have to look at a trace and then copy the field out. However, we’ve been making a lot of improvements to tracing. In the next week or two you are going to see a couple of releases that allow you to more quickly break down from one view into another, as well as some built-in views inside the trace that can help with that.

32:33

Rachel “pie” Perkins:

All right, the next question is “Can I instrument custom metrics from another source, like a synthetic load test, so that I can correlate those metrics with observability metrics from Honeycomb?”

Danyel Fisher:

We’ve had a couple of users begin to play with instrumenting metrics from other sources by adapting statsd-type outputs. The other thing that you can actually do is, while you’re collecting your events in Honeycomb, put an extra layer of metadata into those events themselves. Internally, for example, we’re often interested in the number of routines that are running overall on a machine, or the amount of memory in use, because that can be something that explains what’s going on with a failure. When we send an event for some of our high-volume services, we’ll throw in the amount of free memory on the service and the number of processes that are running simultaneously. Alternatively, you can take those additional events and feed them in through something like the Honeytail log-reading system and bring them in as other data sources.
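A minimal sketch of that enrichment approach, assuming libhoney has already been initialized as in the earlier example: attach runtime metadata (gathered here with psutil) to each event at send time so it can be correlated later. The field names are hypothetical.

```python
# A small sketch of enriching each event with runtime metadata at send time.
# Assumes libhoney.init(...) was called at startup, as in the earlier example.
# Field names are hypothetical; psutil supplies the system numbers.
import threading

import libhoney
import psutil

def send_request_event(fields: dict):
    ev = libhoney.new_event()
    ev.add(fields)                                           # the request's own fields
    ev.add({
        "available_memory_bytes": psutil.virtual_memory().available,
        "process_count": len(psutil.pids()),
        "thread_count": threading.active_count(),            # stand-in for "routines running"
    })
    ev.send()
```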

Rachel “pie” Perkins:

All right. It looks like we’ve got one more question here. That’s a good question to wrap up with, I feel, unless we see some more: “What’s next for BubbleUp from a roadmap perspective?”

Danyel Fisher:

Coming up ahead for BubbleUp in the short term… Well, the most recent changes were about making it more ready for trace data: being able to properly show things like that percentage of events that participate, and hiding the nulls. That was a pretty big change. We’ve also made a couple of small mathematical changes in the backend recently. For example, we got better at the way that we subtract events from each other, which has made for much crisper and cleaner distributions.

Coming up next is broadening the idea of BubbleUp to any selection against any filter. What we’d really like to do is get to the point where, when you have one filter that you think is interesting, status equals 500 or, I don’t know, memory is greater than 200, or anything you can write as a filter inside the system, we’d like to allow you to BubbleUp and go see what other dimensions are correlated with that one and to understand how those dimensions fit.

Overall, all of these fit very well into our broader story of helping users move away from having to do the breakdown dance of breaking down dimension after dimension to see what’s interesting, and instead have the system suggest those differences and help figure out where to look first.

Rachel “pie” Perkins:

Alright, I think we’re through all the questions. Thank you for joining us today everyone. I’d like to remind you that we’re hoping to get your feedback from the feedback tab. Also, I’d like to let you know that you’re gonna be getting a followup email to let you know when this webinar is available for on-demand viewing, and that will include a copy of the slides. The links here and in other parts of this presentation will be clickable. With that, stay tuned for our next and final installment about collaboration and curation.

Danyel Fisher:

Hey, I’m going to hop in real quick.

Rachel “pie” Perkins:

Oh, go right ahead.

Danyel Fisher:

I’m sorry. I see that on this “How to Get Started” slide we remind people about Honeycomb Play.

Rachel “pie” Perkins:

Oh yeah.

Danyel Fisher:

Honeycomb Play is real data that we captured from a past incident and brought into the Honeycomb system. It guides you through a series of queries and a discovery process. It actually has a little bit of a breakdown dance and it does have a heat map. Now, that wasn’t designed for BubbleUp. It was built before BubbleUp happened, but if you want to learn more about BubbleUp, I’m going to encourage you to just drop straight into Play, or go look at our RubyGems interactive data set or one of the other data sets that are available online. Without even having to bring in your own data, you can be bubbling up literally today, heck, right now, because it’s there and it’s fun to play with.

Rachel “pie” Perkins:

Well, there’s the sound of a true convert. I mean, you know, the designer has to be a convert. Thank you again. Yes, please do go through, check out our eGuides, go try it out with real data in Honeycomb Play, and hopefully, we’ll see you in our chat asking and answering questions. Thanks, everybody.
