How Do We Cultivate the End User Community Within Cloud-Native Projects?

By Rynn Mancuso | Last modified on June 1, 2023
Understandably, an unpaid solo developer has no incentive to fulfill feature requests, to consider stakeholders, to gain a security certification like SOC 2, or to fill out diversity forms. They’re not a vendor, they don’t neatly tick the boxes in a supplier process, and your company doesn’t get more resources if you put in the extra work to pay them. So the only real argument is an ethical one, and in the current economy, that doesn’t fly. There’s a big gap in this situation between the needs of the end user and the maintainer.
There have been many attempts to address the incentive gaps in this model, including Tidelift, GitHub Sponsors, and Gitcoin. Some maintainers—for example, Filippo Valsorda, a Go maintainer—have successfully addressed it for themselves by strategically targeting consulting services at their biggest “customers,” getting hired by a company that pays them in part to maintain their project, or getting the project adopted by an open source software foundation.
However, there can also be misaligned incentives in larger open source projects where there is more corporate participation and the contributors are generally well-compensated, a topic that has been less discussed. We’re here today to outline this challenge, who it typically affects, and potential strategies for addressing it.
Where do cloud-native companies fit in?
By definition, corporate-backed open source projects don’t conform to this classic model, and most cloud-native projects fit in this second category. Cloud hosting generally happens on expensive services. As a result, most people who work with cloud hosting work on projects that pay their bills, and most contributions to cloud-native projects happen on employer-paid time.
Speaking of business advantages, it turns out that the biggest contributors to cloud-native projects are one of two types:
- Large tech companies that pour money into the ecosystem to improve the infrastructure available for cloud hosting and performance.
- Smaller companies with developer-centric SaaS products whose business depends on understanding these cutting-edge open source technologies, and for whom an open standard opens a path to more customers.
For these two groups, the incentives for participation align well.
While these companies have the most obvious use case and clear path to profit from cloud-native technologies, they’re only a small slice of the many diverse cloud-native users. Arguably, it’s the slice that is most tolerant of weaknesses in developer ergonomics, technologies under construction, and other challenges that come with working on the bleeding edge.
In large companies, developers can often carve out personal career niches dealing with new cloud-native technologies, and the folks responsible for funding might never notice inefficiencies. Small developer-centric companies see these weaknesses as strategic opportunities. Solving new state-of-the-art problems is often incentivized much more within these types of companies than incrementally improving existing solutions, but both types of work are necessary to move cloud-native to the mainstream.
Getting input from end users
To “cross the chasm” and make cloud-native technologies go beyond those whose bread and butter is innovation, we need to engage other groups who have problems that our tools could solve. These possible adopters are people who may see these weaknesses in our projects, but aren’t paid by their company to fix them. Another case of misaligned incentives.
In the OpenTelemetry project specifically, vendors who accept OpenTelemetry as input are incentivized to pay their employees to contribute directly. There are a few project participants from larger companies, and the Cloud Native Computing Foundation (CNCF), which hosts the OpenTelemetry project, receives a significant amount of funding from those companies. Few smaller teams who sent telemetry contributed to the project in the first couple of years, but customer-facing employees of vendors heard about the challenges these teams experienced with implementation.
The project set out explicitly to fix this by chartering the End User Working Group (EUWG), which has two overarching goals: to increase adoption and awareness of OTel, and to improve the project by gathering feedback from end users and sharing that feedback with relevant special interest groups (SIGs). To that end, we started out by offering a community survey, and then hosting monthly conversations between teams implementing OTel and OTel contributors.
Based on what we have heard from the participants of these ongoing programs (including project contributors), these activities have been successful in generating discussions and feedback about OTel. At least a couple of these discussions have inspired issues that produced incremental improvements to the project. The monthly user interviews in particular are fun for everyone and tend to generate very detailed feedback, and the survey gives us a broad overview of where our priorities should be. For some projects, this level of engagement might be enough, especially if they are very small, centralized, in a position to quickly act on feedback, and maintain a continued connection with many of their users.
We quickly figured out that implementing OTel was so different from system to system that this type of high-level feedback wasn’t enough. We faced challenges with getting feedback to the right people, since OTel is divided into multiple language-specific working groups and doesn’t have a centralized system for tracking releases. The project is currently working on streamlining this loop to turn feedback into trackable action items. One path a project at this juncture could take might be to dig into the details and perform user tests, but we don’t currently have the capacity to perform this testing or address all the tickets it would create.
Both of these problems are common across larger projects, and point to another incentive gap that’s been discussed elsewhere: Project participation is frequently measured by GitHub stats, which excludes many specialist skills like user experience, and affects folks contributing things other than code.
Blending community engagement and education
In our case, we had a number of community-oriented contributors—mostly developer relations engineers from vendors that accept OpenTelemetry. As a group, we were best qualified to contribute education, content, and events rather than user research. Gradually, the working group’s emphasis shifted toward educational programs and community-building among end users. This included a speaker series, two conferences that blended education and community engagement, open discussion groups that were primarily oriented toward engagement, and an end user discussion channel in the CNCF Slack.
Meanwhile, the communications and community demo special interest groups spent a lot of time improving our onboarding and education experience by building the OpenTelemetry community demo, developing new documentation, and building up our blog as a way to communicate about the diversity of teams who work with OpenTelemetry. Adoption is beginning to move beyond the “cutting edge” to the “new normal.”
Of these efforts, the discussion groups and speaker series were most successful in bringing in new end users, and the community demo and communications working groups have delivered continuous educational value to those users.
The conferences were very successful for the project overall and really brought folks together, but because both were co-located with major CNCF events, they primarily drew folks who were already invested in cloud-native technologies, not newer end users. We plan to use our future conferences at KubeCon as a way to meet with other CNCF projects and build relationships and awareness. This year at KubeCon EU, we have an official observability day featuring all the observability projects. In future years, we’d like to consider working with projects in other areas. We’re currently evaluating whether we want to do community day conferences separate from KubeCon.
In 2023, we aim to focus more on the challenges of routing the feedback we get from the discussion groups, making our education programs and discussion groups more self-sustaining, and delivering more content about our events so that those unable to attend live can feel included. We are also investigating whether and how our contributing companies can pass funding through the CNCF to pay for specific contributions from non-employees to further diversify who contributes to the project.
How you can make this happen for your project
To execute this in your cloud-native project, I would start by mapping out the types of people who might care about your project:
- Who are your end users, stakeholders, and contributors?
- Where do they work, in what capacity, and what are their goals and incentives?
If these three groups don’t line up in your map, you’ll have some work to do to bring them into alignment.
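One lightweight way to run this mapping exercise is to record each person or company, the hats they wear, and their incentives, then look for the groups that fail to overlap. The sketch below is purely illustrative—the names, roles, and incentives are hypothetical examples, not part of any official process:

```python
# A minimal sketch of the stakeholder-mapping exercise described above.
# All organizations, roles, and incentives here are hypothetical examples.

stakeholder_map = {
    "Acme Cloud": {
        "roles": {"contributor", "stakeholder"},
        "incentive": "improve infrastructure for cloud hosting",
    },
    "DevTools SaaS Co": {
        "roles": {"contributor", "stakeholder", "end user"},
        "incentive": "open standard opens a path to more customers",
    },
    "Retail IT team": {
        "roles": {"end user"},
        "incentive": "ship features; not paid to fix project weaknesses",
    },
}

def misaligned(groups):
    """Return parties that use the project but have no path to contribute —
    the incentive gap an end user working group would target."""
    return sorted(
        name
        for name, info in groups.items()
        if "end user" in info["roles"] and "contributor" not in info["roles"]
    )

print(misaligned(stakeholder_map))
```

Even a map this crude makes the gap visible: anyone who shows up only in the “end user” column is someone the project hears from indirectly, if at all.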
You can do this mapping project no matter who you are—you don’t have to be in a position of power in the project, and you especially don’t have to be a code contributor. Most large, complex projects are well aware of challenges in areas like developer ergonomics. They may already know who is involved and who is excluded; what they need is more solutions and more hands to execute them. Going to meetings or reading the issue tracker will generally reveal enough to get started and help you identify who might already be concerned about the issues you’ve identified.
After mapping, identify your allies and take stock of the skill sets available to you, whether user research, demo code, or something else entirely.
Finally, the best approach is always lightweight experimentation: effective efforts involving people often start simple. A monthly open discussion group might be a better first action than a highly curated speaker series.
Identifying under-represented stakeholders and cultivating alternative paths for them to contribute is the most powerful work you can do within a large open source project. It’s a good place to start if you work for one of the companies with strongly-aligned incentives that I mentioned before, because it’s likely this work can create new business. This kind of work is essential to grow your internal team’s usage of any project at a large company and to grow your customer base’s adoption of the project at a SaaS developer tools company. I encourage you to start thinking about how you might do this in your projects today.