Honeycomb is a company offering full-stack observability. Founded on its creators' prior experience debugging applications at massive scale, it focuses on high-cardinality data and collaborative problem solving. Honeycomb's core product enables every engineer to instrument and observe the behavior of their system, supporting a comprehensive understanding and debugging of production software.
1. The query engine and the UX are the real differentiators from a user perspective. High-cardinality exploration across traces and events lets you ask questions about production that no monitoring tool previously at our disposal could answer. When an incident involves a combination of customer segment, deployment version, and infrastructure region, span attributes are a powerful way for teams to correlate subsystems across a landscape as complex as ours. None of our existing monitoring tools let us enrich tracing data on the fly. This is not an incremental improvement over APM tooling; it's a step change in understanding production systems.

2. OTel-native instrumentation is non-negotiable for us, both ingest and export. We were consolidating a fragmented observability stack across multiple vendors and needed portable instrumentation to support our telemetry pipelines. Honeycomb's commitment to OpenTelemetry meant we could invest in instrumentation once and not face rework if our vendor landscape evolved. In a regulated environment where vendor changes require long procurement cycles, portability is strategic and necessary.

3. Unlike other vendors, the team engaged as a strategic partner, not just a product vendor. Honeycomb's willingness to discuss engineering culture change and how observability practice actually scales inside large regulated enterprises was unmatched by any vendor conversation I've had at our company on the buyer side. Product leadership engaged directly with our architecture team on strategic questions.
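To make the high-cardinality point concrete, here is a minimal sketch in plain Python (not Honeycomb's actual query API; the event fields and function are hypothetical) of the wide-event idea: each event carries many attributes, and the incident question above becomes a group-by over customer segment, deployment version, and region.

```python
from collections import Counter

# Hypothetical wide events: one dict per request, each carrying
# many high-cardinality attributes alongside the outcome.
events = [
    {"customer_segment": "enterprise", "deploy_version": "v2.3.1",
     "region": "eu-west-1", "status_code": 500},
    {"customer_segment": "enterprise", "deploy_version": "v2.3.1",
     "region": "eu-west-1", "status_code": 500},
    {"customer_segment": "smb", "deploy_version": "v2.3.0",
     "region": "us-east-1", "status_code": 200},
    {"customer_segment": "enterprise", "deploy_version": "v2.3.0",
     "region": "eu-west-1", "status_code": 200},
]

def error_breakdown(events, *group_keys):
    """Count server errors per combination of attribute values."""
    errors = Counter()
    for e in events:
        if e["status_code"] >= 500:
            errors[tuple(e[k] for k in group_keys)] += 1
    return errors

breakdown = error_breakdown(
    events, "customer_segment", "deploy_version", "region"
)
# The failing combination pops out: enterprise x v2.3.1 x eu-west-1
```

The point of the model is that because every attribute is recorded on every event, any combination of dimensions can be asked about after the fact, without pre-aggregating metrics per dimension.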
The event-driven approach captures great detail once you get used to choosing good attributes, and exploration and correlation become powerful once you learn the workflow.
There are so many second-order positive externalities from a shared vision of production -- the things that matter most. Honeycomb's approach brings clarity to engineers unfamiliar with the ecosystem their code lives in, and the long-term effects compound as more and more service teams join in the fun. Refinery is also a key piece of the product offering that is buried in implementation details. It's been an absolute powerhouse for us, and we wouldn't have been able to see the signal through the noise without it.
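For readers who haven't met Refinery: it makes sampling decisions after seeing a whole trace (tail-based sampling), which is how you keep signal while thinning noise. A toy illustration of the idea in plain Python -- this is not Refinery's actual algorithm or configuration, just the core intuition: keep every trace containing an error, keep only a fraction of the healthy ones.

```python
import random

def tail_sample(trace, sample_rate=10, rng=None):
    """Decide only after the whole trace is assembled:
    keep all error traces, keep ~1/sample_rate of healthy ones."""
    rng = rng or random.Random()
    if any(span.get("error") for span in trace):
        return True                         # signal: always keep
    return rng.randrange(sample_rate) == 0  # noise: sample down

traces = [
    [{"name": "GET /checkout", "error": True}, {"name": "db.query"}],
    [{"name": "GET /health"}],
    [{"name": "GET /health"}],
]

kept = [t for t in traces if tail_sample(t)]
# The error trace always survives; healthy traffic is heavily thinned.
```

Real tail samplers (Refinery included) are more sophisticated -- dynamic rates keyed on trace attributes, budgets, and so on -- but the "decide after the trace completes" property is what lets you see the signal through the noise.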
1. Enterprise procurement and data sovereignty need more attention. Operating under DORA, GDPR, and country- and BU-specific regulations, the SaaS-only model created friction with our group procurement and third-party risk management functions. Data residency requirements, audit trail expectations, and third-party risk assessment processes are not optional for a globally regulated insurer like us. This added months to contracting and required internal sponsorship from our architecture function.

2. The product is built for engineering-led organizations. At our company, the distance between the teams writing code and the teams operating it is significant, given our outsourcing model, and the adoption curve was steeper than Honeycomb's documentation acknowledges. We had to build internal enablement programs to support application and operations teams during the initial rollout. That investment paid off, but it was ours to make, and it was a learning experience for Honeycomb in scaling adoption inside enterprises that do not look like their typical customer base.
The dashboarding works fine, but it's clunky and generally unpleasant; I prefer the visuals and experience of building a dashboard in other products from years ago. I need more flexibility in how I lay out the board itself and its compactness; it's unbelievable there isn't already a collapsible row/section feature. Cloning boards between environments doesn't include text panels, but that's a known limitation. Managing board definitions in code/Terraform is painful; why can't I just export/import from JSON? You can't create a board that includes an attribute until that attribute has been seen in that environment. The separation of datasets is pretty strict, so combining observability data from different systems basically has to be handled outside the product. Alerting works, but various limits (such as how much history can be queried) make it a little harder to use.
I dislike the managed-service nature of platform SaaS for what I think are obvious reasons: adversarial observation and data custody. I know Honeycomb recently released an enterprise version that can run in a private cloud, and that's a step in the right direction, but at what point do I just build my own telemetry platform on ClickHouse internally and keep Honeycomb exclusively for sampled reality? I would love a better data sovereignty solution, something like timbl's Solid and its pods, for both people and orgs. Honeycomb's focus has been on engineering organizations, but the world is going to need a tiered data classification system that OTel spans do not support today, or engineers will have to manage all of that in another layer. I dislike that the AI features changed the ToS such that my employer won't enable them. As one of the earliest MCP users, it hurts my soul that we are sending you our reality and can't query it with the new tool. That's been a pain, but it's one rooted in the platform-SaaS problem I've described.
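The "another layer" I'm describing could sit in front of the exporter and enforce a classification on span attributes before anything leaves your boundary. A hypothetical sketch in plain Python -- the tier names and the classification registry are my invention, not part of OTel or Honeycomb:

```python
# Hypothetical sensitivity tiers; OTel has no native notion of
# attribute classification, so this registry is the extra layer
# engineers currently have to build and maintain themselves.
PUBLIC, INTERNAL, CONFIDENTIAL = 0, 1, 2

CLASSIFICATION = {
    "http.route": PUBLIC,
    "deploy.version": INTERNAL,
    "customer.email": CONFIDENTIAL,
}

def redact_attributes(attributes, clearance):
    """Drop any attribute classified above the destination's clearance.
    Unclassified keys are treated as CONFIDENTIAL (fail closed)."""
    return {
        k: v for k, v in attributes.items()
        if CLASSIFICATION.get(k, CONFIDENTIAL) <= clearance
    }

span_attrs = {
    "http.route": "/checkout",
    "deploy.version": "v2.3.1",
    "customer.email": "a@example.com",
}
safe = redact_attributes(span_attrs, clearance=INTERNAL)
# "customer.email" never leaves the boundary
```

Failing closed on unclassified keys is the important design choice: a new attribute added by a service team should not leak by default just because nobody registered it yet.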