
OpenTelemetry 2022 Holiday Goodie Bag

By Phillip Carter  |   Last modified on December 15, 2022

We here at Honeycomb really like OpenTelemetry and goodie bags, so we have a nice little OpenTelemetry-flavored holiday goodie bag to share with you before you’re off for the holidays!

Honeycomb Metrics are now available for Free users

With the stabilization of Metrics in many OpenTelemetry SDKs, we're making our Metrics offering available to Free teams!

OpenTelemetry application metrics let you gather data with different kinds of instruments, such as Counters and Histograms, directly in your app and via automatic instrumentation. Combined with OpenTelemetry Tracing, they give you even more context on what's happening in your apps. For example, if a `HEATMAP(duration_ms)` query over automatically instrumented traces shows a spike in request latency, you can also look at something like JVM GC metrics on a single service to see if there is a corresponding spike in GC collections. If there is, that's another good signal that it's worth adding manual tracing to the affected service. You could come to that conclusion without metrics, but metrics that correlate with a latency spike support it more strongly.
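
To make "instruments" concrete, here's a minimal sketch of creating and using a Counter and a Histogram with the OpenTelemetry metrics API in Node.js (the meter, metric, and attribute names are made up for the example):

import { metrics } from '@opentelemetry/api';

// Meters are namespaced by an instrumentation scope name
const meter = metrics.getMeter('example-service');

// A Counter only goes up: good for counting requests, errors, etc.
const requestCounter = meter.createCounter('app.requests', {
  description: 'Total number of requests handled',
});

// A Histogram records a distribution of values, such as durations
const requestDuration = meter.createHistogram('app.request.duration', {
  description: 'Request duration',
  unit: 'ms',
});

// Record measurements, optionally with attributes for later grouping
requestCounter.add(1, { 'http.route': '/checkout' });
requestDuration.record(42, { 'http.route': '/checkout' });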

Sending this data to Honeycomb is easy: simply configure an OTLP exporter in your app or OpenTelemetry Collector, as with OpenTelemetry Traces and Logs. If you’re using a Honeycomb OpenTelemetry SDK distribution, you can turn on metrics collection in your SDK configuration—or via the HONEYCOMB_METRICS_DATASET environment variable.
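
If you're going the Collector route, a minimal sketch of the exporter side might look like the following (this assumes an `otlp` receiver and `batch` processor are defined elsewhere in your config, and note that OTLP metrics sent to Honeycomb need an explicit dataset header, unlike traces):

exporters:
  otlp/honeycomb-metrics:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "your-api-key"
      "x-honeycomb-dataset": "your-metrics-dataset"

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/honeycomb-metrics]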

Read more about the details in our Metrics documentation.

OpenTelemetry SDK distribution for Node.js

We’ve released an SDK distribution for Node.js. As with our other distributions, it’s focused on several things:

  • Making it as easy as possible to send data directly to Honeycomb with minimal configuration, while still exposing the right knobs to dial later as your needs evolve
  • Multi-span attributes via a Baggage Span Processor (see the sketch after this list)
  • Local trace visualization that emits a Honeycomb trace URL to standard out when you create a trace
  • Deterministic sampling
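
As a taste of the multi-span attributes bullet, here's a minimal sketch of putting an attribute into OpenTelemetry baggage so the distro's Baggage Span Processor can stamp it onto every span created in that context (the `app.user_tier` name and `doWork` function are made up for illustration):

import { context, propagation } from '@opentelemetry/api';

function doWork() {
  // placeholder for real application logic that creates spans
}

// Create a baggage entry; with the Baggage Span Processor active,
// entries are copied onto each span created while this context is set
const baggage = propagation.createBaggage({
  'app.user_tier': { value: 'enterprise' },
});

context.with(propagation.setBaggage(context.active(), baggage), () => {
  // Spans created in here carry the app.user_tier attribute
  doWork();
});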

It’s easy as heck to get started. First, install some packages:

npm install --save \
    @honeycombio/opentelemetry-node \
    @opentelemetry/auto-instrumentations-node

Then, initialize the SDK. Here’s how to do it with TypeScript:

// tracing.ts
import { NodeSDK } from '@opentelemetry/sdk-node';
import { HoneycombSDK } from '@honeycombio/opentelemetry-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';

// uses HONEYCOMB_API_KEY and OTEL_SERVICE_NAME environment variables
const sdk: NodeSDK = new HoneycombSDK({
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk
  .start()
  .then(() => {
    console.log('Tracing initialized');
  })
  .catch((error) => console.log('Error initializing tracing', error));

Finally, run it with a Node.js app, such as one built on Express or Fastify, making sure to --require the tracing file:

export OTEL_SERVICE_NAME="your-service-name"
export HONEYCOMB_API_KEY="your-api-key"
ts-node -r ./tracing.ts APPLICATION_MAIN_FILE.ts
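
Environment variables are the quickest path, but the distro also accepts configuration in code. Here's a hedged sketch of the deterministic sampling and local trace visualization features from the list above; option names like `sampleRate` and `localVisualizations` follow the distro's README, but double-check our docs for the current configuration surface:

// tracing.ts, configured in code instead of via environment variables
import { HoneycombSDK } from '@honeycombio/opentelemetry-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';

const sdk = new HoneycombSDK({
  apiKey: 'your-api-key',
  serviceName: 'your-service-name',
  instrumentations: [getNodeAutoInstrumentations()],
  sampleRate: 5,             // keep 1 in 5 traces, deterministically by trace ID
  localVisualizations: true, // print a Honeycomb trace URL for each local trace
});

sdk.start();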

That’s it! Read more about the different configuration options in our docs or run our sample apps to try things out yourself.

Support for filtering Span Events and other data

This one’s for all of y’all with high volumes of event data. One unfortunate consequence of using a bunch of automatic instrumentation is that it can make heavy use of span events, overwhelming your Honeycomb event quota with data that’s almost exclusively noise. To address this, we contributed to the OpenTelemetry Collector’s filter processor to give it the ability to filter span events out of spans (and lots of other things too, if that matters for your scenario).

Using the latest (v0.66.0 or higher) release of the collector-contrib distribution, you can filter span events from gRPC instrumentation, like so:

processors:
  filter:
    traces:
      # Filter out span events that have the 'grpc' attribute set to true,
      # or whose name contains 'grpc'.
      spanevent:
        - 'attributes["grpc"] == true'
        - 'IsMatch(name, ".*grpc.*") == true'

The conditions in the example above are OR’d together: if either statement is true, the span event is filtered out. If you want AND semantics or other more complex rules, you’ll need to express them as a single OpenTelemetry Transformation Language (OTTL) expression, like so:

processors:
  filter:
    traces:
      # Filter out only span events that both have the 'grpc' attribute
      # set to true and a name containing 'grpc'.
      spanevent:
        - 'attributes["grpc"] == true and IsMatch(name, ".*grpc.*") == true'

Read more about this in our docs, and learn more about OTTL here.

Happy Holidays!

Normally, this section would be reserved for what’s next, and there’d be a bullet-pointed list of stuff you can expect. And indeed, we’re still building more OpenTelemetry things, contributing to the project on behalf of our customers, and more. But instead, we think it’s a good idea for everyone to gradually wind down for the rest of the year and enjoy some much-deserved time off for the holidays. See y’all in the new year, when we’ll have more goodie bags to share in due time!

 
