Hadoop distributions provide scalable, distributed computing over data held in on-premises and cloud-based file stores. They consist of commercially packaged and supported editions of open-source, Apache Hadoop-related projects, and they bundle applications, query and reporting tools, machine learning capabilities, and data management infrastructure components. First introduced as general-purpose collections of components suited to any use case, distributions are now often delivered as part of a specific solution for data lakes, machine learning or other uses. From there, they frequently expand into additional roles, competing with both older technologies such as database management systems (DBMSs) and newer ones such as Apache Spark.
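To make the "distributed computing against file store data" pattern concrete, the following is a minimal PySpark sketch. It assumes a Hadoop/Spark environment with HDFS access and the S3A cloud connector configured; the paths, bucket name, and column names are hypothetical and shown only for illustration.

```python
# Minimal sketch: querying file-store data from both HDFS and cloud object
# storage in a single distributed job (paths and names are hypothetical).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("file-store-query-sketch")
    .getOrCreate()
)

# Read structured data from an on-premises HDFS path.
clicks = spark.read.parquet("hdfs:///data/web/clicks")

# Read comparable data from a cloud object store via the S3A connector.
archive = spark.read.parquet("s3a://example-bucket/web/clicks-archive")

# Run one distributed query/report across both sources.
daily_totals = (
    clicks.unionByName(archive)
    .groupBy("event_date")
    .count()
    .orderBy("event_date")
)

daily_totals.show()
spark.stop()
```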