Overview
Product Information on CData Virtuality
What is CData Virtuality?
CData Virtuality Pricing
Overall experience with CData Virtuality
“A great companion for a modern data architecture”
About Company
Company Description
CData Software provides data integration and connectivity solutions that enable access to data from a wide range of on-premises and cloud-based applications. Designed to support diverse deployment environments—including on-premises, cloud, and hybrid—CData solutions simplify how users connect, integrate, and work with data. By facilitating easy and secure data access across systems, CData helps organizations accelerate decision-making, improve process efficiency, and advance data-driven initiatives.
Company Details
Key Insights
A Snapshot of What Matters - Based on Validated User Reviews
Reviewer Insights for: CData Virtuality
Performance of CData Virtuality Across Market Features
CData Virtuality Likes & Dislikes
- Flexible, agile system without compromising performance
- Can be built on top of an existing data environment
- Data silos can be easily analyzed and integrated
- Access to real-time data and API-based data
- Ability to historize and materialize data
- Data manipulation capabilities via SQL and scripted SQL
- Self-service analytics capabilities
- An interesting way to support Data Fabric/Data Mesh
The job templates, together with the available previews, make it very clear what data will be available, based on the chosen parameters, before you even create a job. This allowed us to confirm that a job would return the same data we can get from the web. The tool handles large datasets efficiently, even as they grow in size, so you can retrieve data from many years in the past without worrying about processing speed or capacity limits. This has been key for our reporting, since most of the data comparisons we require are between current data and the past one or two years. The customer support team helps us quickly resolve any issues we run into when using the tool and answers any questions we have about the data we are getting or the jobs we are creating. We ran into many errors when we started, from the data warehouse we were setting up to the jobs we were running, but the support team was prompt in helping us resolve all of these issues and get the tool working for us.
It is a robust data federation tool that offers a seamless unified layer for accessing and processing data from multiple data sources in a single interface. Its user-friendly environment lets users query databases from various vendors, APIs, and flat files, all using SQL. This significantly improves data integration and processing tasks in terms of performance and maintainability. In addition, the functionality for building a fully automated data processing pipeline is very impressive: users can intuitively create and schedule jobs that access tables, views, and stored procedures from numerous data sources and perform complex processing tasks periodically with minimal effort. The query optimizer is also very effective at enhancing performance, making retrieval and processing more efficient.
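The "one SQL layer over many sources" idea this reviewer describes can be sketched conceptually. The snippet below is not CData Virtuality's API: it uses SQLite's `ATTACH` to put two independent databases (hypothetical `customers` and `orders` tables standing in for, say, a CRM and an ERP source) behind a single SQL interface, so one join query spans both — loosely analogous to what a virtual data layer does across real systems.

```python
import sqlite3

# Two separate in-memory databases play the role of two independent
# data sources; ATTACH exposes both through one SQL interface.
# (Conceptual stand-in only -- not CData Virtuality's actual API.)
con = sqlite3.connect(":memory:")
con.execute("ATTACH DATABASE ':memory:' AS erp")

# "Source 1": customer master data (hypothetical schema)
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex')")

# "Source 2": order data in the attached database (hypothetical schema)
con.execute("CREATE TABLE erp.orders (customer_id INTEGER, total REAL)")
con.execute("INSERT INTO erp.orders VALUES (1, 120.0), (1, 80.0), (2, 50.0)")

# One SQL statement joins across both "sources"
rows = con.execute("""
    SELECT c.name, SUM(o.total) AS revenue
    FROM customers c
    JOIN erp.orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)  # [('Acme', 200.0), ('Globex', 50.0)]
```

In a real federation layer the two schemas would live in entirely different systems (different vendors, APIs, or flat files), but the consumer-facing experience is the same: a single SQL query.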
The product continues to make leaps and bounds. It is now scalable and offers disaster-recovery-tolerant configurations; essentially, the previous weaknesses have been addressed.
The tool is limited to the job templates they have created; you cannot build custom reports for your specific needs. This means contacting the support team to modify or create a job for a specific use case rather than developing something on my own. Several of the marketplaces we sell on were, or still are, not available to connect to this tool. This again means asking the support team whether it is something they can develop for us, which takes time, or whether they cannot connect to that marketplace at all. The pricing structure promotes bundling multiple connectors into a plan, which may not be the most efficient option for smaller businesses that would prefer to implement one source at a time. When we started our plan, paying for five sources while taking the time to validate just one was not as cost-effective as it could have been for us.
The Python coding feature is an exciting addition to their toolbox that can improve your experience considerably. However, the lack of access to some of the most widely used external Python libraries, such as pandas and NumPy, limits more complex data processing tasks. Access to those libraries would reduce the need to transfer data between multiple development environments, which would enhance both the experience and performance. In addition, although most of the existing utility functions in DV are very practical, there are a few areas where revisions could be considered, mainly for performance reasons. For example, upserting data into tables takes longer using the related utility function than executing standard SQL commands in Postgres. This is not a major issue for smaller datasets, but for relatively large datasets or more complex tasks it should not be neglected.
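The "standard SQL" upsert this reviewer compares against can be sketched as follows. The snippet uses SQLite (3.24+), which supports the same Postgres-style `INSERT ... ON CONFLICT DO UPDATE` clause, as a runnable stand-in for Postgres; the `inventory` table and its columns are hypothetical.

```python
import sqlite3

# Hypothetical inventory table; SQLite stands in for Postgres here,
# since both accept the same ON CONFLICT upsert syntax.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inventory (sku TEXT PRIMARY KEY, qty INTEGER)")
con.execute("INSERT INTO inventory VALUES ('A-1', 10), ('B-2', 5)")

# One set-based statement updates existing rows and inserts new ones,
# instead of per-row check-then-write logic.
con.executemany(
    """INSERT INTO inventory (sku, qty) VALUES (?, ?)
       ON CONFLICT(sku) DO UPDATE SET qty = excluded.qty""",
    [("A-1", 12), ("C-3", 7)],  # A-1 exists (update), C-3 is new (insert)
)

rows = con.execute("SELECT sku, qty FROM inventory ORDER BY sku").fetchall()
print(rows)  # [('A-1', 12), ('B-2', 5), ('C-3', 7)]
```

Because the conflict resolution happens inside the database engine in a single statement, this pattern tends to scale better than row-by-row upsert helpers, which is consistent with the performance gap the reviewer observed.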
Top CData Virtuality Alternatives
Peer Discussions
CData Virtuality Reviews and Ratings
- Head Data Operations, Banking, 50M-1B USD, Review Source
A great companion for a modern data architecture
I'm now working with DV in my second position and, overall, I'm very pleased with the product and service they provide. The Data Virtuality platform is a flexible tool that allowed us to enable several different use cases, such as data governance, data quality checks, self-service BI, data preparation for regulatory reporting, digital marketing analysis, and refinement of the implementation of business logic. The features it offers are distinct from other virtualization products. Data Virtuality is especially useful for business analysis tasks: it allows architects to specify and test the correct integrations and to identify problems in existing integrations. Additionally, I enjoyed working with the Data Virtuality team. I feel that they cared about our concerns and always helped to find solutions.
- Supply Chain Associate, Consumer Goods, <50M USD, Review Source
CData Virtuality: A Scalable Solution for Efficient Data Management
We began looking into this tool because we wanted something that could connect to and pull data from all of the marketplaces we are active on for automated reporting. The tool has been great for creating this reporting as well as automating some of our existing processes. Especially as we have grown and created new reports, it has been very flexible in terms of new data sources, jobs, and updates.
- Data Engineer, Healthcare and Biotech, 50M-1B USD, Review Source
DV: An excellent tool for data integration
Overall, DV is a perfect tool for data integration and process automation. With the addition of support for external Python libraries and some minor performance improvements, it will be an even more powerful tool for complex data processing tasks.
- Database Manager, Gov't/PS/Ed, Education, Review Source
Solving Data Access and Management with CData Virtuality Platform
Our experience with the CData Virtuality Platform has been exceptional. The platform has significantly simplified our data integration challenges, empowered our teams with self-service data access, and accelerated our ability to derive valuable insights from our diverse data landscape.
- Analyst, Gov't/PS/Ed, Education, Review Source
Great tool and awesome support team!!
Overall, my experience has been highly positive. The ability to connect to and integrate with our many data domains and the ease of use of the Web Interface is making this the new favorite tool for many of our users.



