Getting Data In With Beelines: Python
Transcript
Nathan LeClaire [Sales Engineer|Honeycomb]:
Welcome to another episode of Honeycomb Training. I’m Nathan LeClaire, and today we’re going to talk about getting data in with the Python Beeline. So the Python Beeline is a library that you can bring into your Python applications to start instrumenting your code and sending traces to Honeycomb. It should help you understand what’s going on in your code in production, where things are slow, and much, much more.
So let’s talk about the end goal of what we eventually want to accomplish using the Python Beeline. Ultimately, we want to be able to query the data that it generates and sends to the Honeycomb API, to explain production issues. For instance, you can see a count graph split out by a couple of different fields there, and some of those spikes, like a rise in error rate, might be interesting to us. We also want to examine traces to understand where our programs are spending their time. So over on the left, you can see a little waterfall-style diagram outlining what was going on in each little step of a program as it ran some operation in production.
Installing the Beeline is quite simple: just pip install the honeycomb-beeline package. You might also want to add it to the requirements.txt file for your application. Once you have it installed locally, you need to initialize the beeline somewhere in your code. So you can import it just by saying import beeline, and then you can initialize it by calling beeline.init(). You want to pass in your API key, which you can get in the team settings in Honeycomb, and a dataset name, which determines which dataset your data lands in. A dataset is similar to a MySQL table. Then there’s the service name of your app, so we’ve just said my-python-app here. And I also recommend you turn on debug mode by setting debug equal to true. That will spit a bunch of information out to the console about what’s going on as the beeline is doing its thing.
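Here’s a minimal sketch of that initialization; the API key and dataset name are placeholders you’d swap for your own values:

```python
# install first with: pip install honeycomb-beeline
import beeline

beeline.init(
    writekey='YOUR_API_KEY',       # placeholder: get yours from team settings in Honeycomb
    dataset='my-dataset',          # placeholder: which dataset your data lands in
    service_name='my-python-app',  # the name of your app
    debug=True,                    # print diagnostic output to the console
)
```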
Basic usage of the Beeline API looks a little bit like this. We want to kick off a trace by calling beeline.start_trace(). Then we want to generate a variety of spans that are nested within that top-level trace, and one way to do that is the beeline.tracer() method. So we can start a with block: with beeline.tracer(), and we’ll pass in a span name as an argument. I’ve just called it call_github here, because that’s the operation we’re doing. And in this little with block, we do the thing that we want to track. So I’ve made a request out to GitHub in this example, and I’ve also added some custom context to the span we’re generating: the HTTP status code. That will help us track, additionally, what’s going on.
Side note, the Beeline does have a built-in library for wrapping the requests module, so that might be something you want to take advantage of if you’re using that library. And this span is just for example purposes here.
So once that block of code is finished, the beeline will register that that’s all over, end the timer, and send that span along to Honeycomb. Then, once we’re done with all the little chunks of work in our trace, we want to call beeline.finish_trace(), passing in that original trace object that we created. And then, beeline.close() before our program exits, so that we can flush everything that’s waiting to be sent to Honeycomb. The beeline sends to Honeycomb in the background, so it shouldn’t block your app, and that’s why we need to call beeline.close(): to make sure that everything is sent along before our program terminates.
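Putting those pieces together, a minimal sketch of that whole flow might look like this; the trace name my_batch_job and the GitHub URL are just illustrative:

```python
import beeline
import requests

beeline.init(writekey='YOUR_API_KEY', dataset='my-dataset',
             service_name='my-python-app', debug=True)

# kick off the top-level trace for this chunk of work
trace = beeline.start_trace(context={"name": "my_batch_job"})

# generate a nested span for the operation we want to time
with beeline.tracer(name="call_github"):
    resp = requests.get("https://api.github.com")
    # attach custom context so we can query on it later
    beeline.add_context_field("github.http_code", resp.status_code)

# close out the trace, then flush pending events before exiting
beeline.finish_trace(trace)
beeline.close()
```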
So that’s all well and good, especially if you’re in a background job or something like that, where you might want to create a trace from scratch using that API. But we went one step further for you and also made a way to wrap common libraries. Frameworks like Flask or Django are very popular things to use with Python, and here’s an example of wrapping the common library Flask with our Beeline.
So we have a beeline.middleware.flask module, from which you can import HoneyMiddleware. You’ll do your normal beeline.init() call. At some point, you’re going to create a Flask app, as you can see on that second-to-last line of code there. Then you want to call HoneyMiddleware() and pass in that app. That will install our middleware on your Flask app, and it will generate a span every time a request is made. You can also turn on the instrumentation for Flask-SQLAlchemy if you’re using that for your database calls, and you’ll get a span for every database call that you make. And that’s pretty nifty in my opinion. That really helps a lot, because the database can be very tricky in production. So let’s take a look at an example of what this looks like in practice.
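Here’s a sketch of that wiring, again with placeholder credentials; the db_events flag is what turns on the Flask-SQLAlchemy instrumentation:

```python
import beeline
from beeline.middleware.flask import HoneyMiddleware
from flask import Flask

beeline.init(writekey='YOUR_API_KEY', dataset='my-dataset',
             service_name='my-python-app', debug=True)

app = Flask(__name__)
# generate a span per request; db_events=True also traces Flask-SQLAlchemy queries
HoneyMiddleware(app, db_events=True)
```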
5:09
Here we have the Beeline code added onto a Flask app. And you can see that as we run some HTTP calls against that app in the background, the Beeline is busy at work, intercepting each little request, pulling all of these properties off of it, and sending events along to the Honeycomb API. These all represent what’s going on in our program, and we’ll be able to use them for our analysis later. So coming over to the Honeycomb UI, once we’ve sent some data in, we can query it.
So we start off with just a simple count query, and we can zoom in some more here on that time range. We might want to group by a field, like name, to see the distinct operations we’re looking at here. And here, we can see that they’re all Flask HTTP operations. You can group by any field. You could group by user_id, or region, or anything you can imagine adding onto your data, and calculate these aggregates for each group.
And if we want a closer look, we can then peek at the trace for a given set of spans. So by clicking on the trace_id in the UI, you can see this example trace here. Our total call is up in the purple at the top there, and that’s how long everything in that request took. And then we can see, as part of that call, these little scattershot Flask DB query spans. So it looks like we’re executing a bunch of database queries. If we wanted to speed that up, maybe we could see if we can roll those all up into one operation. Or, when you add this to your code and are running things in production, you might notice some notably slow operations that you can fix, either by adding an index to a database table or by updating your code somehow.
That’s the beauty of all this: we discover what’s going on, and then we iteratively fix it. Another way that you can create custom spans is with the @beeline.traced() decorator. Just decorate your Python methods with it to generate a span automatically.
And whether it’s with the Beeline auto-instrumentation or with instrumentation that you’ve set up yourself, you can always call beeline.add_context_field() to just keep slapping more details onto the context. So for instance, in this function that will generate a span with a name of chunk_of_work, we’re calling beeline.add_context_field() to add on the user_id as something that we can query over later on. That will help us with our analysis: we’ll know when things are happening to particular users, or by any other property that we can imagine. And it’s very, very useful.
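A small sketch of both ideas together; the chunk_of_work name and the user_id parameter are just illustrative:

```python
import beeline

# the decorator generates a span named chunk_of_work around each call
@beeline.traced(name="chunk_of_work")
def chunk_of_work(user_id):
    # attach user_id to the current span so we can query on it later
    beeline.add_context_field("user_id", user_id)
    ...  # the actual work goes here
```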
So once we have all this set up, we might also want to do tracing across the network. It is distributed tracing, after all. And to do that, we have to propagate around a header. If you’re using beelines, the tracing information header that we lean on is X-Honeycomb-Trace. As I mentioned, we do have a wrapper for the popular requests library that will do this automatically, but you also might want to hand off trace context to another process manually on your own. For that, you can see an example here, where we’re calling beeline.get_beeline().tracer_impl.marshal_trace_context() to marshal up that header, and then send it along to something else downstream.
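Here’s a sketch of that manual hand-off; the call_downstream helper and the downstream URL are hypothetical:

```python
import beeline
import requests

def call_downstream(payload):
    # marshal the current trace context into header form
    trace_context = beeline.get_beeline().tracer_impl.marshal_trace_context()
    # hand it off to the downstream service via the X-Honeycomb-Trace header
    return requests.post(
        "https://downstream.example.com/work",  # hypothetical service
        json=payload,
        headers={"X-Honeycomb-Trace": trace_context},
    )
```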
So, one last thing to talk about is that a lot of Python running in production uses a pre-fork model, like gunicorn, which forks processes and manages threads so that your Python app can service requests in a more concurrent fashion. The thing to keep in mind is that if you don’t initialize the beeline at the right time when you’re using one of these models, your results are going to be all screwed up, because the state of the beeline is not going to be set correctly as your app runs. Every thread might try to do it on its own, for instance, and that’s just going to fail. So we do have some instructions in the documentation; you can find them by searching for the Beeline for Python docs. One example here is how you might configure gunicorn to set this up correctly. There’s a post_worker_init() hook that you can set, where you would call beeline.init(). So if you’re using one of those servers, it’s very important to keep in mind that you call beeline.init() there. And you also want to remember, again, to call beeline.close() when everything is finished, because that’s going to help ensure that all of your events are sent along to Honeycomb successfully, and you don’t have any issues with your instrumentation.
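For example, here’s a sketch of a gunicorn config file using those server hooks; the file name and init values are just illustrative:

```python
# gunicorn_config.py -- run with: gunicorn -c gunicorn_config.py myapp:app
import beeline

def post_worker_init(worker):
    # initialize the beeline once per worker, after the process forks
    beeline.init(writekey='YOUR_API_KEY', dataset='my-dataset',
                 service_name='my-python-app')

def worker_exit(server, worker):
    # flush any pending events before the worker shuts down
    beeline.close()
```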
So that’s a bit of training on getting data in with the Python Beeline. I’m sure your head is all abuzz with all the amazing things you can do. I really want to see that first person do machine learning instrumentation with Honeycomb and TensorFlow. I really think it would be nifty to hook into some of those things, too. So consider that a challenge to anyone out there. And for the rest of you, go forth and have fun with the Python Beeline.