CData Software provides data integration and connectivity solutions that enable access to data from a wide range of on-premises and cloud-based applications. Designed to support diverse deployment environments—including on-premises, cloud, and hybrid—CData solutions simplify how users connect, integrate, and work with data. By facilitating easy and secure data access across systems, CData helps organizations accelerate decision-making, improve process efficiency, and advance data-driven initiatives.
Flexible, agile system without compromising performance.
- Can be built on top of an existing data environment
- Data silos can be easily analyzed and integrated
- Access to real-time data and API-based data
- Ability to historize and materialize data
- Data manipulation capabilities via SQL and scripted SQL
- Self-service analytics capabilities
- An interesting way to support Data Fabric/Data Mesh
The job templates make it very clear what data will be available before you even create a job, based on the different parameters as well as the previews available. This allowed us to verify that a job would return the same data we are able to get from the web. The tool handles large datasets efficiently, even as they grow in size, which lets you retrieve data from many years in the past without worrying about processing speed or capacity limits. This has been key for our reporting, since most of the comparisons we do are between current data and the past one or two years. The customer support team helps us quickly resolve any issues we run into when using the tool and answers any questions we have about the data we are getting or the jobs we are creating. We ran into many errors when we started, from the data warehouse we were setting up to the jobs we were running, but the support team was prompt in helping us resolve all of these issues and get the tool working for us.
It is a robust data federation tool that offers a seamless, unified layer for accessing and processing data from multiple data sources in a single interface. Its user-friendly environment enables users to easily query databases from various vendors, APIs, and flat files, all using SQL. This significantly improves data integration and processing tasks in terms of both performance and maintainability. In addition, the functionality for building a fully automated data processing pipeline is very impressive. Users can intuitively create and schedule jobs that access tables, views, and stored procedures from numerous data sources and perform complex processing tasks periodically with minimal effort. The query optimizer tool is also very effective at enhancing performance, making retrieval and processing more efficient.
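To illustrate the unified-SQL idea the review describes (this is only an analogy, not CData's actual API or connector mechanism), here is a minimal sketch using Python's built-in sqlite3 module, where two attached database files stand in for separate sources and one SQL statement joins across them. All table and column names are invented for the example:

```python
import sqlite3
import tempfile, os

# Two standalone database files standing in for separate data sources.
tmpdir = tempfile.mkdtemp()
sales_path = os.path.join(tmpdir, "sales.db")
crm_path = os.path.join(tmpdir, "crm.db")

sales = sqlite3.connect(sales_path)
sales.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
sales.execute("INSERT INTO orders VALUES (1, 10, 99.5), (2, 11, 20.0)")
sales.commit()
sales.close()

crm = sqlite3.connect(crm_path)
crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
crm.execute("INSERT INTO customers VALUES (10, 'Acme'), (11, 'Globex')")
crm.commit()
crm.close()

# One connection, one SQL dialect, both "sources" joined in a single query.
hub = sqlite3.connect(":memory:")
hub.execute(f"ATTACH DATABASE '{sales_path}' AS sales")
hub.execute(f"ATTACH DATABASE '{crm_path}' AS crm")
rows = hub.execute(
    """
    SELECT c.name, SUM(o.total)
    FROM sales.orders AS o
    JOIN crm.customers AS c ON c.id = o.customer_id
    GROUP BY c.name
    ORDER BY c.name
    """
).fetchall()
print(rows)  # [('Acme', 99.5), ('Globex', 20.0)]
```

A federation layer applies the same principle at larger scale: the consumer writes one SQL statement, and the engine pushes the work down to each underlying source.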
The product continues to make leaps and bounds. It is now scalable and supports disaster-recovery-tolerant configurations; essentially, the previous weaknesses have been addressed.
The tool is limited to the job templates they have created, rather than letting you build custom reports for your specific needs. This means trips to the support team to modify or create a job for a specific use case instead of being able to develop something on my own. Several of the marketplaces we sell on were, and still are, not available to connect to this tool. This again means contacting the support team to find out whether it is something they can develop for us, which takes time, or whether they cannot connect to that marketplace at all. The pricing structure promotes bundling multiple connectors into a plan, which may not be the most efficient option for smaller businesses that would prefer to implement one source at a time. When we started our plan, paying for five sources while it took time to validate just one was not as cost effective as it could have been for us.
The Python coding feature is an exciting addition to their toolbox that can improve your experience considerably. However, the lack of access to some of the most widely used external Python libraries, such as pandas and NumPy, limits more complex data processing tasks. Access to those libraries would reduce the need to transfer data between multiple development environments, enhancing both the experience and performance. In addition, although most of the existing utility functions in DV are very practical, there are a few areas where revisions could be considered, mainly for performance reasons. For example, upserting data into tables takes longer using the related utility function than executing standard SQL commands in Postgres. While this is not a major issue for smaller datasets, for relatively larger datasets or more complex tasks it should not be neglected.
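For context, this is the kind of standard-SQL upsert the review compares the utility function against. The sketch below uses SQLite's `INSERT ... ON CONFLICT DO UPDATE` (available in Python's sqlite3 with SQLite 3.24+), which mirrors the PostgreSQL syntax; the table and column names are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (sku TEXT PRIMARY KEY, units INTEGER)")
conn.execute("INSERT INTO metrics VALUES ('A-1', 5), ('B-2', 3)")

# Upsert: insert new rows, update existing ones on key collision.
# The same ON CONFLICT form works in PostgreSQL (9.5+).
incoming = [("A-1", 7), ("C-3", 1)]
conn.executemany(
    """
    INSERT INTO metrics (sku, units) VALUES (?, ?)
    ON CONFLICT (sku) DO UPDATE SET units = excluded.units
    """,
    incoming,
)
result = sorted(conn.execute("SELECT sku, units FROM metrics").fetchall())
print(result)  # [('A-1', 7), ('B-2', 3), ('C-3', 1)]
```

Because the conflict resolution happens inside the database engine in a single statement, this approach avoids the row-by-row round trips that can make a generic upsert helper slower on large tables.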