Getting data into a cloud data lake is one thing; bringing that data into the right structure so it can be useful for queries and analytics is a different and more difficult undertaking.
Data preparation vendor Upsolver is looking to make it easier for organizations to get data into cloud data lakes, with an approach that enables what the company refers to as an open cloud data lakehouse.
The cloud data lakehouse concept was first espoused by Databricks as an approach that joins capabilities from data warehousing and data lakes. With a cloud lakehouse, data is stored in a data lake in a format that users can query directly.
Upsolver’s platform enables users to load data into a data lake and then transform it in an extract, transform, load (ETL)-style process so that it’s usable by query engines. On Oct. 7, Upsolver made generally available new data ingestion connectors that enable the company’s platform to work with Amazon Redshift and Snowflake. Upsolver had previously largely focused on support for the Amazon Athena query engine.
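To illustrate the general shape of the "transform" step such a process performs (this is not Upsolver's actual API, and all field names here are hypothetical), a minimal Python sketch might flatten semi-structured JSON events landed in a lake into a fixed tabular schema that a downstream query engine could consume:

```python
import json


def transform_events(raw_lines):
    """Flatten raw JSON event records into a fixed, query-friendly schema.

    Missing fields are filled with defaults so every output row has the
    same columns -- the kind of normalization a lake-to-warehouse ETL
    step performs before a query engine reads the data.
    """
    rows = []
    for line in raw_lines:
        event = json.loads(line)
        rows.append({
            "event_id": event.get("id"),
            "event_type": event.get("type", "unknown"),
            "user_id": event.get("user", {}).get("id"),
            "ts": event.get("timestamp"),
        })
    return rows


# Example raw input: one JSON document per line, with inconsistent fields.
raw = [
    '{"id": 1, "type": "click", "user": {"id": "u1"}, "timestamp": "2022-10-07T00:00:00Z"}',
    '{"id": 2, "user": {"id": "u2"}, "timestamp": "2022-10-07T00:01:00Z"}',
]
rows = transform_events(raw)
```

In a production pipeline the output would typically be written as a columnar format such as Parquet and partitioned for the target engine, rather than held in memory.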