OpenTelemetry Best Practices #1: Naming

By Martin Thwaites  |   Last modified on February 28, 2024

Naming things, and specifically consistently naming things, is still one of the most useful pieces of work you can do in telemetry. It’s often overlooked as something that will just happen naturally and won’t cause too much of an issue—but it doesn’t happen naturally, it does cause issues, and you end up having to fix the data in pipelines or your backend tool. It’s the biggest problem I’ve dealt with in structured logging over the past 20 years; standardizing the names of attributes is like a superpower when supporting projects in production. So, of course it's the first topic in my best practices series.

Now that OpenTelemetry is the de facto standard for generating and emitting telemetry in applications, you need to think about strategies for how to use it effectively. Personally, I’ve found that giving engineers and teams guidance on how to name things is the most important step. This includes:

  • Attribute names (for logs and traces)
  • Span names (for traces)
  • Label names (for metrics)

With each of these, above all else, you need to focus on consistency and patterns (read: standardization). The actual names themselves are contextual, and therefore very specific to your use case. As an overriding rule, prefer “local” consistency over “global” consistency. By that, I mean that as long as the names are consistent at a local level (inside a system boundary or an organization, for example), don’t worry about whether they match global patterns. The exception is when it comes to “well-known” names for concepts. In OpenTelemetry, we call these Semantic Conventions.

Semantic Conventions

If you’re adding context that you feel is a “framework” concern, or that is wider than your specific business domain, then you might find that the name has already been thought about. Look in the OpenTelemetry Semantic Conventions package, which contains both stable and experimental conventions for names. 

Using the pre-established name will allow your backend vendors (like Honeycomb!) to make assumptions about the shape and purpose of the data. For example, if you provide http.request.body.size, we know that you’re telling us the size of the incoming HTTP request body, and therefore, that this span is about an HTTP request.

If your backend can make assumptions about your telemetry, there is a lot more it can do to help you understand your system, from rendering meaningful icons to building purpose-built visualizations.

So wherever possible, use Semantic Conventions, and use the instrumentation libraries that add them by default. That way, you won’t need to have a naming conversation at all!
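As a concrete sketch of why well-known names matter, here is what the attribute set for an incoming HTTP request might look like using stable Semantic Convention names. The attribute names come from the OpenTelemetry specification; the span values and the `is_http_span` heuristic are invented for illustration (no SDK required):

```python
# Attributes for a hypothetical incoming HTTP request span, using
# stable OpenTelemetry Semantic Convention attribute names. Because
# the names are well-known, any backend can infer what this span is.
http_span_attributes = {
    "http.request.method": "GET",          # semconv: request method
    "url.path": "/products/42",            # semconv: request path
    "http.response.status_code": 200,      # semconv: response status
    "server.address": "shop.example.com",  # semconv: handling host
}

def is_http_span(attributes: dict) -> bool:
    """A backend-style heuristic: semconv names let us classify spans
    without any out-of-band knowledge about the emitting service."""
    return "http.request.method" in attributes

print(is_http_span(http_span_attributes))  # True
```

This is exactly the kind of assumption a backend makes on your behalf when you stick to the conventions instead of inventing your own names.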

What about your domain?

To unlock the real value of telemetry, you need to add your own context to logs or traces, then think about adding new spans, span events, etc. The problem comes with inconsistency, both in people using different names for the same thing (product.id, product.uuid, product.unique_identifier, etc.) and in people using different formats for the names (productId, product_id, product.id, etc.). This matters because the true power in observability and OpenTelemetry comes with correlation of data, both within signal types (spans with attributes of the same name) and across signal types (correlating a pod name in a metric with that of a span).

The advice here breaks down into two forms: 

  • How to structure the names. 
  • How to ensure consistency.


It’s common in OpenTelemetry to use periods (dots) to separate logical groupings of attributes and labels. This allows backends to provide a better UX when viewing attributes.
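To illustrate why dots help, here is a rough sketch of how a backend UI might cluster flat, dot-separated attribute names into groups for display. The attribute names are invented examples:

```python
from collections import defaultdict

def group_by_namespace(attribute_names):
    """Group dotted attribute names by their top-level namespace,
    roughly how a backend UI might cluster them for display."""
    groups = defaultdict(list)
    for name in attribute_names:
        prefix, _, rest = name.partition(".")
        # Names without a dot fall back to their own group.
        groups[prefix].append(rest or name)
    return dict(groups)

names = ["app.cart.item_count", "app.user.id", "http.request.method"]
print(group_by_namespace(names))
# {'app': ['cart.item_count', 'user.id'], 'http': ['request.method']}
```

Consistent dotted namespaces mean this grouping works for free, with no per-team configuration in the backend.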

When prefixing your attribute names, follow these two criteria:

  • Be consistent with naming within your organization—standardize when possible.
  • Avoid reusing prefixes from the Semantic Conventions for domain-specific attributes.

There are then three types of naming conventions to consider.

Reusable library attributes

If you create a reusable library that lives outside your organization (like an OpenSource library you make available to the community), you should consider creating a unique prefix that won’t conflict with other libraries. Having something like mylib.* would be beneficial to allow end users to quickly see where those attributes are generated from.

Organization-wide attributes

For attributes that are shared across your entire organization, use a prefix unique to your organization, like honeycomb.* or hny.*, so that you can use it when querying your data later. Be careful here: “organization” is really a proxy for “system,” as most companies will have a single system. Where you have multiple systems in your organization, scope the attribute names to that boundary and prefix them accordingly.

Application-specific attributes

Where an attribute is only relevant to a single application or service, I’d consider an app.* prefix.

I would avoid team- or service-specific namespaces where possible, as they increase an engineer’s cognitive load: every engineer has to work out which namespace is the right one.
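One way to make prefix rules like these enforceable, for example in a CI lint or a telemetry pipeline, is a small check along these lines. The allowed prefixes and the name pattern are invented examples, not part of any spec:

```python
import re

# Invented example prefixes: one per scope discussed above.
ALLOWED_PREFIXES = (
    "mylib.",  # a reusable library shared outside the organization
    "hny.",    # organization- or system-wide attributes
    "app.",    # attributes local to a single application/service
)

# Lowercase words separated by dots or underscores,
# e.g. app.cart.item_count
NAME_PATTERN = re.compile(r"^[a-z][a-z0-9_]*(\.[a-z][a-z0-9_]*)*$")

def check_attribute_name(name: str) -> list:
    """Return a list of problems with a custom attribute name
    (empty list means the name passes both checks)."""
    problems = []
    if not name.startswith(ALLOWED_PREFIXES):
        problems.append(f"{name!r} lacks an approved prefix")
    if not NAME_PATTERN.match(name):
        problems.append(f"{name!r} is not lowercase dot/underscore style")
    return problems

print(check_attribute_name("app.cart.item_count"))  # []
print(check_attribute_name("ProductId"))            # two problems
```

A check like this turns the naming guidance from a document people forget into a rule the build enforces.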

Shared constants

It’s very common to have shared “packages” or “libraries” in organizations, normally to standardize actions—for example, setting up dependencies or performing common tasks. All languages have some way to share common code, whether that’s NuGet for .NET, pip for Python, npm for JavaScript, or GitHub repositories for Go.

Providing a library to your colleagues that defines standard naming for common attributes can greatly reduce not only the bloat of different names for attributes, but also the cognitive overhead of developers. This is also a great place to provide helper methods for setting values to allow for consistent values, not just names. You don’t want one team using “pre-production” as the environment name and the other team using “pre-prod.”

If you do provide these kinds of packages, make sure to document the desired or intended usage. Utilize functionality like XMLdoc (.NET), Javadoc (Java), Godoc (Go), etc. to provide inline documentation of the attributes and where they should be used. This also has the added benefit of removing “magic strings” from your code, which I personally think makes code more readable and less error-prone.
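A shared-constants module along these lines captures both ideas: documented attribute names and helpers that canonicalize values. The names, docstrings, and aliases below are illustrative, not from any real package:

```python
# telemetry_attributes.py — a hypothetical shared package for one org.

# Attribute names as constants, documented inline so IDEs surface
# the intent and "magic strings" stay out of application code.
ENVIRONMENT = "hny.deployment.environment"
"""The deployment environment the emitting service is running in."""

TEAM = "hny.team.name"
"""The team that owns the emitting service."""

# Canonical environment values, so one team doesn't emit "pre-prod"
# while another emits "pre-production".
_ENVIRONMENT_ALIASES = {
    "prod": "production",
    "pre-prod": "pre-production",
    "preprod": "pre-production",
    "dev": "development",
}

def environment_attribute(value: str) -> tuple:
    """Return the (name, canonical_value) pair for the environment,
    normalizing common shorthand spellings."""
    canonical = _ENVIRONMENT_ALIASES.get(value.lower(), value.lower())
    return ENVIRONMENT, canonical

print(environment_attribute("pre-prod"))
# ('hny.deployment.environment', 'pre-production')
```

Teams then call `environment_attribute(...)` when setting span attributes instead of typing the name and value by hand, which standardizes both at once.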

Stay tuned for OpenTelemetry best practices #2: automatic and custom instrumentation

By implementing the best practices outlined in this blog, your naming conventions should improve dramatically. Remember: be consistent, keep it simple, and standardize where possible. That’s really the key to good naming practices.

Join us for part two of my best practices series, where I’ll dive into automatic and custom instrumentation. But if you simply can’t wait, you can start reading about auto-instrumentation now. 

