Elementary provides a platform focused on data and AI reliability by integrating observability, quality management, governance, and discovery. The platform supports teams in ensuring the accuracy, governance, and traceability of data workflows, facilitating collaboration between technical and business users. Elementary addresses issues related to data trust and transparency, helping organizations monitor and validate data processes as they scale their AI initiatives.
What I like most about the product:
1) Native, dbt-first integration with extremely low adoption friction: it builds directly on dbt metadata, so it fits naturally into our existing workflow without requiring heavy operational overhead.
2) Clear, production-focused observability: between the health dashboards and the test summaries, it makes data issues (or the lack thereof) visible and actionable.
3) Data quality visibility for non-technical users: business stakeholders can trust the data without relying solely on us telling them that all tests pass; the health dashboards show them the spread of coverage and easy-to-understand pass rates.
4) Strong roadmap and thoughtful evolution: despite being a dbt-native tool, the team is expanding observability to Python workflows as well, and has added AI assistance and BI tool connections that other tools have not.
Integration into our existing tech stack was easy: we were already using dbt (and in particular Elementary's open source data tests), so this felt like a natural extension of the environment. The AI agent has been invaluable in automating our lineage tracing exercises (a fundamental step in our Critical Data Element management process), and has proven capable of suggesting ETL efficiency improvements and good coverage of data quality rules. While it cannot suggest rules derived from business processes, it has accelerated our ability to cover the core DQ rules required for effective monitoring. Tapping into external channels such as Teams and Jira allowed us to define our alert, triage, prioritisation, and issue management processes around our actual tools rather than just a high-level process flow. It centralises the management of these functions and provides end-to-end visibility during remediation.
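For readers unfamiliar with the dbt-native setup the reviewer describes, adoption is typically just a package install plus test configuration in existing dbt YAML files. A minimal sketch, using Elementary's documented anomaly test names (the model and column names, and the package version, are hypothetical/illustrative):

```yaml
# packages.yml — add the Elementary dbt package (pin a real version in practice)
packages:
  - package: elementary-data/elementary
    version: 0.16.1

# models/schema.yml — attach Elementary anomaly tests to an existing model
# ("orders" and "order_total" are hypothetical names)
models:
  - name: orders
    tests:
      - elementary.volume_anomalies        # flags unusual row-count drift
    columns:
      - name: order_total
        tests:
          - elementary.column_anomalies    # flags null-rate/distribution anomalies
```

After `dbt deps` and a normal `dbt test` run, results accumulate in Elementary's own tables and surface in the health dashboards and alert channels the review mentions.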
What stands out the most is how quickly we were able to get meaningful insights into our data pipelines. The platform does a great job of surfacing anomalies and trends in a way that doesn't require deep technical expertise to interpret. From a knowledge management standpoint, that's huge, because it allows more stakeholders to engage with and trust the data. I also appreciate how it integrates into existing workflows rather than forcing a complete process overhaul.
What I dislike most about the product:
1) Expanding scope means teams need to be clear on where Elementary fits: as it grows beyond its original dbt-centric focus, teams may need to align on how the different capabilities are used and how they complement other tools.
2) Differentiation from adjacent tools sometimes requires internal explanation: Elementary is clearly focused on production observability, but in environments with multiple data quality and/or CI tools, some upfront clarification is needed to avoid perceived overlap.
3) The product rewards intentional usage, not just installation: Elementary is close to turnkey and the team is very hands-on, but the most value comes when teams actively lean into all the features rather than treating it as a passive tool.
Dashboards are not user-customisable; we have had many requests to change default views, slice by dimensions not initially available in the package, or rearrange dashboard elements in order of priority (such as the DQ dimensions).
While the core functionality is strong, I've found that some of the customization options, particularly around alerting and reporting, could be more flexible. There are moments when I want to tailor the output for different audiences, and that can require a bit of a workaround. Documentation is solid, but there were a few areas where clear examples would have sped up adoption internally.