
De-Risk Your Deployments with Honeycomb & GitHub Actions Deployment Protection Rules

Summary:


Deployments should happen quickly and often. Yet, increasingly complex and distributed cloud systems make it harder to predict the behavior of new releases, causing teams to fear deploying new code. Using Honeycomb queries in GitHub Actions Deployment Protection Rules de-risks deployments, giving developers the confidence to deliver innovation to their customers frequently and stress-free.

GitHub’s Senior Partner Engineer, Alexis Abril, and Honeycomb’s Senior API & Partnerships Engineer, Jason Harley, dive into how you can leverage Honeycomb’s granular observability in your GitHub Actions workflows to make your deployments safe.

They discuss:
- How GitHub Actions Deployment Protection Rules de-risk your deployment workflows
- Why incorporating Honeycomb query data into your deployment protection rules ensures stable releases
- How to create automated deployments that speed up delivery of new features without compromising safety
- Suggested Honeycomb queries to ensure release stability
- How to utilize SLIs to get SLO-like insights

Transcript

>> REBECCA CARTER: Today we have Alexis Abril at GitHub and Jason Harley with Honeycomb. They will be presenting on GitHub Actions and the new Honeycomb integration. With that, I will pass things over to Alexis.

>> ALEXIS ABRIL: Thank you, Rebecca. I will bring up my screen here, give everyone a crash course on Actions, and lead into what we want to talk about today.

>> ALEXIS ABRIL: Hello, my name is Alexis Abril. I help partner teams develop functionality on the GitHub platform. GitHub Actions has broad reach and functionality, but we will focus on the capabilities centered on continuous deployment. If you are not familiar with GitHub Actions, here’s a quick overview. Actions are how pipelines are automated within the GitHub platform. They can be used to run tests or lint code each time a pull request is created or a branch is merged, with customizable workflows automating well-defined tasks. We review the list of tasks and provision a virtual machine for each job that has been declared. Typically, these are common tasks such as linting or testing code. Really, though, we are talking about executing code when interesting events occur, and what that code does is open to the creativity of the author.
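
     For reference, a minimal workflow of the kind described here might look like the following sketch; the job name and the commands are placeholders:

```yaml
# .github/workflows/ci.yml -- a minimal sketch of a workflow that lints and
# tests code whenever a pull request is opened or a branch is merged to main.
name: CI

on:
  pull_request:
  push:
    branches: [main]

jobs:
  lint-and-test:
    runs-on: ubuntu-latest   # a fresh virtual machine is provisioned per job
    steps:
      - uses: actions/checkout@v4
      - name: Lint
        run: echo "run your linter here"       # placeholder command
      - name: Test
        run: echo "run your test suite here"   # placeholder command
```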

     If you are looking for inspiration, or want to see what functionality is available today, our partner ecosystem, the open source community, and individual contributors are creating new actions every day to accelerate, solidify, and secure your development life cycle. These are all available via the GitHub Marketplace. This leads me to where I want to focus today. A while back, we introduced environments to GitHub Actions. Environments are, essentially, a mechanism to categorize tasks. They act as a tag for each job, associating it with a level of access, or denoting a geographic region; perhaps there is an additional deployment step for the western region versus the eastern. Environments allow for further flexibility in breaking down the most effective automation for publishing your software.

     There are a couple of notable pieces of functionality that come with environments. You can attach secrets or require a number of reviewers for each environment. Most recently, we introduced Environment Protection Rules. These are integrations that allow the developer to define certain conditions before allowing a deployment to proceed. Perhaps you run an enterprise, and there may be a need for a review of an ancillary system before a new deployment is published. This could be due to checking whether dependencies are aligned in the target system, various teams needing to sign off, or licensing being correct. Some of these rules can be automated while others are manual. Environment Protection Rules give you the ability to customize how you want your continuous deployment to be defined.

     To visualize this, here we have a simple workflow to test, build, and deploy some code. GitHub Actions workflows are written in YAML. For this demo, we are using the workflow_dispatch event, which allows us to manually execute the workflow through the GitHub web interface.

     These jobs can be run in serial or in parallel, and you have the freedom to define the runner image for each; I’ll be using the latest image for each of the jobs defined.

     I’m also executing a Bash script within our workflow. Actions can be inline, such as you see here, or point to a file; in many instances, that file may reside within the GitHub Marketplace.
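
     A sketch consistent with the demo workflow described above, with manually triggered, serial jobs on fresh runners and each deploy tied to an environment, could look like this; job names, environments, and the inline commands are placeholders:

```yaml
# A sketch of the demo workflow described above: manually triggered, serial
# jobs, each on its own runner, deploying through environments.
name: deploy

on:
  workflow_dispatch:   # allows manual runs from the Actions tab

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - run: echo "running tests"          # inline Bash placeholder
  build:
    needs: test                            # runs serially after test
    runs-on: ubuntu-latest
    steps:
      - run: echo "building artifacts"
  deploy-staging:
    needs: build
    runs-on: ubuntu-latest
    environment: staging                   # ties the job to an environment
    steps:
      - run: echo "deploying to staging"
  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production
    steps:
      - run: echo "deploying to production"
```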

     Let’s take a look at running our sample action. I’m going to invoke this by clicking on the actions tab, selecting the workflow I’d like to run, and clicking “run workflow.”  We just need a moment for the virtual machine to be provisioned and now we have a new run in our list.

     Clicking into the workflow allows us to view the execution as it progresses. This particular action is executed as a serial set of jobs, each with a unique virtual machine instance. Granted, as we are executing no-ops, this would have been more efficient in a single instance, but the moral of the story is that you have the capability to isolate execution as you deem appropriate.

     This simple action has completed successfully and deployed to our staging and production environments. You can additionally click into each job to see its CLI output. Taking a look at a more complex workflow, we have similar execution, but now we are going to deploy to multiple environments in parallel. Action environments are a conceptual wrapper: perhaps they map to geographic regions, or perhaps we need a rolling deployment based on time. With Actions, we don’t dictate or necessarily recommend how you should deploy, but rather offer the platform features necessary for you to publish your software securely and efficiently.
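
     One way to fan a deployment out to multiple environments in parallel, though not necessarily how the demo itself was built, is a job matrix; the region names here are placeholders:

```yaml
# A sketch of deploying to multiple environments in parallel using a matrix.
jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        region: [east, west]                          # one parallel job per region
    environment: production-${{ matrix.region }}      # e.g. production-east
    steps:
      - run: echo "deploying to the ${{ matrix.region }} region"
```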

     That is a bit of a crash course in Actions, but we have arrived at the core of our story. First, let’s take a look at a new workflow.

     This has similar steps to what we have seen before, but I have added a canary environment prior to staging. Now, I’ve mentioned environments and some of their rules, but let’s take a look at what that really means. In the repo settings, there is a section titled “Environments.” In here you can see the different environments described in our workflows and add customization to each. Required reviewers and waiting for a defined period of time are two rules you can protect your environment with. The true customization, though, is here: custom integrations where a third-party application can listen for deployments being triggered, execute logic, and decide whether the deployment should proceed. The vehicle for this integration is GitHub Apps. We have a couple of apps available for the BBQ Beats demo. You can enable one or many of these protection rules, and they can be configured uniquely per environment. For production east, I will add a 60-second delay. Triggering this workflow manually creates a flow diagram we are familiar with at this point, but I want to draw your attention to the console underneath the visualization, where each protection rule reports back after deciding how to proceed. For the canary environment, Fried Okra has observed that error rates are below a 1% threshold and chosen to approve this deployment. Similar logic has been invoked for staging with a 50% threshold, and we proceed to deploy to our production regions.

     Production has slightly different logic: we’re verifying that resources have been properly allocated and, if confirmed, proceeding with the deployment. East has also been approved, but it is awaiting the completion of our 60-second wait timer before continuing.

     The thresholds and logic here are arbitrary. It’s important to note that we are essentially executing three steps: we’re asking a third party if we can deploy, the third party is executing logic to decide, and then, finally, it is approving or rejecting our deployment.

     There are already many applications utilizing this functionality today, and you can combine these apps from our partner ecosystem with custom apps you develop to compose a more robust CI/CD pipeline.

>> REBECCA CARTER: Awesome. Thank you, Alexis. I have a quick question. Is there any limit to the number of rules or actions that you can implement?

>> ALEXIS ABRIL: Great question. There are a couple of facets to that. In terms of limits to actions, I don’t know of any limits other than maybe overloading, let’s say, the memory of an individual virtual machine; you might run into technical restrictions there. For Deployment Protection Rules, at the moment, we do have a limit, and that’s a knob we can adjust as time progresses. At the moment it’s, I believe, less than 20 independent rules on a particular environment, but that could be adjusted over time. Great question.

>> REBECCA CARTER: Thank you. Yeah. I think we are ready to move on to Jason who is going to show us the Honeycomb integration.

>> JASON HARLEY: Absolutely. Thank you, Rebecca. My name is Jason Harley. I’m a Software Engineer here at Honeycomb. I worked with the team that built the integration we are going to talk about here.

     Prior to jumping into our Deployment Protection Rule, I want to do a tiny plug for the previous incarnation of the Honeycomb and GitHub partnership. This is the GitHub Action that lets you instrument your pipeline. Instead of sitting with your engineering team and wondering why it seems like your builds are taking a long time, which is something I think we have all had the pleasure of doing, this is a way that, for every step in your Actions workflow, you can send an individual span, in a trace that represents the entire build, to Honeycomb. This is available, it works with our free tier, and it works with any GitHub account, as far as I know.
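
     As a rough illustration only (the action reference, input names, and secret name below are from memory and should be verified against the Marketplace listing), wiring it into a job looks roughly like this:

```yaml
# Hypothetical sketch: send build telemetry to Honeycomb from a workflow.
# Verify the action name and its input names against the Marketplace listing.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: honeycombio/gha-buildevents@v1          # assumed action reference
        with:
          apikey: ${{ secrets.HONEYCOMB_API_KEY }}    # assumed secret name
          dataset: gha-builds                         # assumed dataset name
      - uses: actions/checkout@v4
      - run: echo "subsequent steps show up as spans in the build trace"
```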

     For example, here is a trace waterfall view of this action running its own test suite. We started off the build, we moved through an initialization (this part is quite boring), and we did some sweeps. You get the idea very quickly: you can get an overview (this build took 41 seconds) and all the various pieces that made that up are something you can dive into and interact with. So that has existed for a while, and it’s something we have used ourselves at Honeycomb; we maintain an internal goal of 15 minutes or bust for CI. We truly believe that shipping code is the heartbeat of your engineering organization, to paraphrase our CTO, Charity, and in doing so, deployments should be something that happens all the time: really straightforward, not scary. The only way to do that is to get comfortable with your code review process, your test suite, maybe your integration tests or smoke tests, and then really trust your pipeline. Right? Part of why I was excited to work on this project with GitHub is that it allows you to bring, sort of, a last pre-flight check, if you will, into that deployment pipeline. Instead of solely relying on that smoke test or integration test, which are a valuable part of your test suite, when we designed the Deployment Protection Rule feature with the new Honeycomb GitHub App, we said: what if you could just utilize your Honeycomb telemetry to regain that last bit of confidence? So that’s what we did, and we will walk through that here today.

     To install the GitHub App, you can initialize the flow from the GitHub Marketplace, if that’s your thing. But within a Honeycomb team, if you jump into team settings and over to the Integration Center, there is an option available with a blue button, if you happen to be a team owner. We require an owner to do this installation.

     We are brought into a pretty familiar GitHub flow. I’m going to authorize access to the Honey API team. We are going to get information about deployments, which is something that Alexis did a great job highlighting just before now: those events saying an environment is about to be deployed to. That is the metadata we will now have access to. We are also requesting access to a file called “.honeycomb.yaml.” That brings us back to the Integration Center. Great. Now what?

     So we jump into a repository in the Honey API team. Fluffy Spoon was the project name we were given. You can see this is a sparse repository, for the sake of a demo, but we do have an Actions workflow already available. I was chuckling when I was watching Alexis’ demo; our demo is quite similar. This doesn’t do much, but it does illustrate a very good point: we have a build whenever we do a push. We are going to build our stuff, and then we are going to deploy to the staging environment. If that goes well, we will deploy to production. Great. Normally you’d have all kinds of tests happening here.

     What makes these Deployment Protection Rules powerful is that if you come into these environments, you can see our staging has the Honeycomb.io protection rule turned on, and the same for production. What that actually means in practice is all defined here in this Honeycomb YAML file. When we were thinking through how we wanted this to work, we kept in mind that Actions are all controlled via YAML; it’s a very common workflow, and as a former operator and developer, being able to codify these things is important to me and my colleagues. So we decided the best way to interact with this new feature at GitHub was another YAML file. Hopefully you will find this one straightforward.

     What we are doing here, for any arbitrary environment (we saw in our workflow we have two, a staging environment and a production environment), is allowing you to specify a Honeycomb query against a Honeycomb environment. In this particular case, I am setting up staging to run a query against the test Honeycomb environment.

     Now, there is this big query blob, and that may look intimidating. What I’m going to do is jump into the Honeycomb query builder to show you how we can take some of the sting out of this part of the offering.

     So, back in my team, we’re going to run a query. Let’s say we’re really concerned about HTTP status codes. What I’m asking Honeycomb for right now is a count of HTTP 500 errors (you can do greater than 499 if you have multiple error codes in your application), and I’m going to look at the last two hours. You can see here there have been about ten. Maybe the query builder feels nicer to you. One of the tricks of the trade here: if you go up to this, I think it’s called a kebab menu, and grab that, we make the query specification easily available to extract. So, pretty straightforward. There is our count, and the status code check shows up as a filter.
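
     For reference, a query like the one built above can be captured in a specification along these lines (rendered here as YAML); the status-code column name is an assumption about the dataset:

```yaml
# A sketch of the query described above: a COUNT of events with an HTTP
# status code of 500 over the last two hours.
calculations:
  - op: COUNT
filters:
  - column: http.status_code    # assumed column name for your service
    op: "="
    value: 500
time_range: 7200                # seconds, i.e. the last two hours
```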

     Switching back to the query specification, we have written basically the same thing: we are doing a COUNT where the status code is 500. We have also introduced one cool bit that I love about this integration, and this is where doing good telemetry really complements this new feature. A practice we have here at Honeycomb, and something that you folks may practice at home as well (meatball menu, I like that, Andrew), is that whenever we build an application, we stamp it with the ID of that build run. That way we can compare against previous builds, either over time or during a rolling deployment where, say, 20% of instances are now running the new build. If you happen to be doing that sort of thing and using the GitHub run ID as your source, you can further restrict the query: in staging, I only care about 500s from this new build that was, perhaps, already deployed to the test environment. Maybe there are continuous deployments happening in another workflow. Maybe someone is manually deploying to that environment. That’s a terrifying idea, but these things happen.

     The final bit of this, for people who are familiar with Honeycomb and might hear the word “threshold” and think about Honeycomb triggers, is that we have effectively moved the trigger restrictions into this feature with GitHub. So we’re saying that we are going to fail the deployment if there have been more than 300 of those 500 errors within the last 1,800 seconds, which is the last 30 minutes, for those at home. We then turn this into a YAML anchor. For those who do YAML all day, it’s old hat; others may be wondering what that’s about. It allows us to reuse this query for production, in the name of simplifying this demo. The integration doesn’t require that you use the same query: you could have a query that looks at particular performance characteristics in your test environment before you go to production. In the name of simplicity we are reusing that query, but what we have chosen to do is be much more stringent. For production we will query a Honeycomb environment called MS Demo (for staging we were querying the Test environment, and now it’s MS Demo), and we want to see no more than one of those 500 errors in the last 15 minutes. Hopefully that all makes sense. If a YAML workflow feels native or intuitive to you, that’s intentional; we wanted this to live close to your workflows. If you have any feedback, we are happy to receive it. We are curious whether people want to run more queries or different styles of queries. Cool.
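
     Pulling those pieces together, a hypothetical .honeycomb.yaml in this spirit might look like the sketch below. The key names (environments, env, query, threshold, and so on) are assumptions for illustration and should be checked against the integration’s documentation:

```yaml
# Hypothetical sketch of a .honeycomb.yaml for this integration. Key names
# are assumptions; consult the integration's documentation for the real schema.
environments:
  staging:
    env: test                        # Honeycomb environment to query
    query: &error-count              # YAML anchor so production can reuse it
      calculations:
        - op: COUNT
      filters:
        - column: http.status_code   # assumed column name
          op: "="
          value: 500
        # (optionally add a filter on the build/run ID column, as discussed above)
    threshold:
      max: 300                       # fail if more than 300 such errors...
      window: 1800                   # ...in the last 30 minutes
  production:
    env: ms-demo                     # a different Honeycomb environment
    query: *error-count              # reuse the exact same query via the anchor
    threshold:
      max: 1                         # much more stringent:
      window: 900                    # no more than one error in the last 15 minutes
```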

     With that said, let’s see this come together. We have put together a query in our YAML file, and we are saying that when we do a deployment, we want this final pre-flight check to happen. I’m going to trigger CI; I just did a push to the repo. Over here, if the demo gods are kind, and they are today, we will see that our Actions workflow has been kicked off. These are, again, running serially, each as an independent stage; you just saw behind the curtain there. The first thing that happens as we go to staging is that the query executes. The thing I really want to highlight here is that we bring the Honeycomb URL in. If this happens to fail, we can instantly click this and be in a very standard Honeycomb query workflow: all of what you are used to using to troubleshoot your production problems. And let’s be honest, production is chaos; I’ve been calling these pre-flight checks for a good reason. We are all doing our best to get comfortable with that idea of doing the push to prod, and while we believe that deploying should be a multiple-times-a-day thing, things break sometimes, and you need to quickly solve that problem. These checks help us start that exploratory workflow, and for everything else you have your SLOs and your triggers and the things your team relies on every day.

     Jumping back to our workflow, it has continued to move forward: staging passed, and then production. I have two independent links to launch from that whole process.

     We are pretty excited to see what people build with this integration. It feels pretty Honeycomb-native to me, in that we are trying to let you interact with the data that you are already using to troubleshoot and operate your business. And I’m really excited about what this workflow can open up.

     One of the things that we promised in the intro to this webinar was that we would offer some tips; let me jump into the right team here. That query may be feeling a little simplistic. You’re thinking: cool, you are checking to see if there are a couple of 500 errors and then doing a push. That is better than nothing. One thing we talk about a lot at Honeycomb is SLOs, and if you are using Honeycomb SLOs, it would be awesome to gate your deploy to production on one, because you have spent a lot of time fine-tuning the SLI that is protecting that environment. This particular SLI is based on latency, and I wouldn’t advise that this exact one is what you want to protect your production environment with, but by running a query that takes an average of the SLI, where the SLI is present, we actually get the rate that the SLI is controlling. So if you have ever wanted to pull that data in as a final source of assurance, this is a pretty easy way to pull that off.
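
     As a sketch, using the same assumed configuration shape as above, gating production on an SLI might look something like this; the SLI column name and the key names are placeholders:

```yaml
# Hypothetical sketch: gate the production deploy on an SLI-derived rate.
# Averaging a boolean SLI column (1 = good event, 0 = bad) where it is
# present yields the proportion of good events. Key names are assumptions.
environments:
  production:
    env: prod
    query:
      calculations:
        - op: AVG
          column: sli.latency      # assumed derived-column name for the SLI
      filters:
        - column: sli.latency
          op: exists               # only consider events where the SLI applies
    threshold:
      min: 0.99                    # require at least 99% good events
      window: 1800
```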

     That’s my spiel.

     ( Laughter )

     Are there any questions about how that comes together?

>> REBECCA CARTER: Awesome. Thank you, Jason. Yeah, I actually have a question. Is there a way to set alerts so that if a build does fail based on what it has been gated for, then other people, besides the person that kicked off that build, can receive the alert?

>> JASON HARLEY: You can do that from the action; that’s the easiest way to do it. We push the results of workflows to Slack, since Slack is the way we orient ourselves around work, and that is probably the best way to do it. The query is actually executing in such a way that the result only comes back to GitHub, so GitHub is, effectively, the source of truth for whether that Deployment Protection Rule passed or failed. Personally, I’d also look at whether you are piggybacking on existing trigger queries: if it’s something you are looking at anyway, you would probably have a trigger fire in the same window. Great question.
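
     As one hedged sketch of the “notify from the action” approach, a follow-up job gated on failure can post to a chat webhook; the secret name and the message text are placeholders:

```yaml
# Sketch: notify a wider group when the gated deployment job fails or is
# rejected. The secret name and message text are placeholders.
jobs:
  deploy-production:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - run: echo "deploying to production"
  notify-on-failure:
    needs: deploy-production
    if: failure()                   # runs only when the deploy job did not succeed
    runs-on: ubuntu-latest
    steps:
      - name: Post to a Slack webhook
        run: |
          curl -s -X POST "${{ secrets.SLACK_WEBHOOK_URL }}" \
            -H 'Content-Type: application/json' \
            -d '{"text": "Production deploy failed or was rejected (run ${{ github.run_id }})"}'
```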

>> REBECCA CARTER: Thanks. So while we are waiting for people to post any additional questions they might have, I just want to share we have a survey that is going to go out, as well. And we’d love to get your feedback. And if you fill it out, you get a cool T-shirt. Would you mind posting the secret word in the chat for everybody?

>> REBECCA CARTER: Thanks. I also enjoy “meatball menu,” Andrew. I asked how often people are deploying. We had one response that says once a month, another every 30 minutes. And, you know, is this the cadence people want to be deploying at? Is this your preferred cadence? Is there anything that is maybe holding people back from deploying as much as they’d like to? You can hop on and discuss here in the Zoom or add to chat, if you’d like.

     Okay. We have another question here. Any plans to somehow integrate with feature flags?

>> JASON HARLEY: That is a really interesting question. Can you elaborate as to what you mean by that?

>> BRIAN:  Hi, guys. Let me get my camera working. Yeah. I mean, I don’t just deploy code and let it go to customers. I have feature flags that ramp my code. I target my ramps; maybe I want to do 5% of users. If I can get an attribute onto the span, I could tie this in here and do a very similar process.

>> JASON HARLEY: Yep. That would be the way I’d go. We have relevant flags on all of our spans at Honeycomb for our features; partially because being able to do a group-by on a feature flag is illuminating. But that’s how I would try to do it. And if it feels cumbersome to be editing the Honeycomb YAML that lives in the repository, doing something slightly more complex with a derived column that pulls in the feature flags you are interested in at deployment time is a way to separate those concerns.

>> BRIAN:  Okay. Yeah. Yeah. This is tickling my head in terms of how I can try to safeguard our deployment process even further, but also feature flag ramp-ups and ramp-downs, by using the traces.

>> JASON HARLEY: Yeah. Your workflow could orchestrate the manipulation of some of those things. Alexis may know if there are actions for the common feature flag providers in the ecosystem. I would bet so.

>> ALEXIS ABRIL: Definitely. What is nice about the Actions setup, and how Honeycomb orchestrated their action, is that, and this is essentially the goal of Actions in general, you can have a composition of whatever custom pieces you need to execute your workflow. I’m sure there are other actions in the marketplace or ecosystem, and you can build custom actions yourself. You can pipe additional data into Honeycomb’s system and provide additional queries, or execute the same queries multiple times for different flavors of gating. So there is a lot of capability in how you can compose these together. And Honeycomb’s action is itself composable, in that it’s a custom query that can be executed.

>> BRIAN:  Okay.

     (Indiscernible)

>> JASON HARLEY: This is neat.

>> ALEXIS ABRIL: There was a previous Actions webinar we did a couple of months ago where we covered feature flags and different ways to approach them, as well. Be sure to check that out.

>> REBECCA CARTER: Okay. We have another question. Are there plans to expand the integration to automagically do buildevents-type stuff, creating spans for steps?

>> JASON HARLEY: That is something we have gotten a little bit of feedback around, so thank you for sharing it. I assume you mean the Honeycomb buildevents action, not the GitHub Protection Rule feature. Part of that was on purpose, to really allow you to control what a unit of work looks like, because an individual step might not be what you want to show up in your trace, although a lot of the time maybe it is. The team I work on also owns that action, so it’s something we’ve heard a couple of times. Thank you for the feedback.
