The separation of storage and compute allows for each to be scaled up or down independently, blurring the lines between traditional data warehouses and data lakes. The separation also enables companies to architect a multi-modal lakehouse platform, which provides a single source of truth for all analytic initiatives – AI, BI, machine learning, streaming analytics, data science, and more. Qlik Compose facilitates both data lake and data warehouse automation in one unified user interface, enabling you to plan and execute either project with ease.
The architecture comprises the following components:
Data is ingested from transactional systems with low latency. Change data capture for real-time data replication ingests data without impairing production system performance.
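As a rough illustration of how change data capture keeps a target in sync, the sketch below applies a stream of generic change events (insert, update, delete) to an in-memory target table. The event format and function names are illustrative assumptions, not the actual Qlik replication API.

```python
# Minimal CDC apply sketch: each event carries an operation, a primary key,
# and (for inserts/updates) the new row image. The target is a dict keyed
# by primary key, standing in for a warehouse or lake table.

def apply_cdc_events(target, events):
    """Replay change events against the target table in order."""
    for event in events:
        op, pk = event["op"], event["pk"]
        if op in ("insert", "update"):
            target[pk] = event["row"]   # upsert the latest row image
        elif op == "delete":
            target.pop(pk, None)        # drop the row if present
    return target

# Example: three source changes replicated into an empty target.
target = apply_cdc_events({}, [
    {"op": "insert", "pk": 1, "row": {"id": 1, "status": "new"}},
    {"op": "update", "pk": 1, "row": {"id": 1, "status": "shipped"}},
    {"op": "insert", "pk": 2, "row": {"id": 2, "status": "new"}},
])
```

Because events are applied in order, the target always reflects the most recent row image, which is what lets CDC avoid full-table reloads against the production system.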
Data Warehouse Automation accelerates the availability of analytics-ready data by automating the entire data warehouse lifecycle.
Data Lake Automation powers the process of providing continuously updated, accurate, and trusted data sets for business analytics.
Custom Transformation allows users to create flexible, fit-for-purpose data pipelines to transform raw data into data that is ready for analytics.
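A fit-for-purpose pipeline of this kind can be pictured as a sequence of small transformation steps composed into one flow. The step functions below (`clean`, `normalize`) are hypothetical examples, not built-in transformations.

```python
# Sketch of a composable transformation pipeline: raw records pass through
# an ordered list of step functions and emerge analytics-ready.

def clean(rows):
    """Drop records missing a required measure."""
    return [r for r in rows if r.get("amount") is not None]

def normalize(rows):
    """Standardize the currency code, defaulting to USD."""
    return [{**r, "currency": r.get("currency", "USD").upper()} for r in rows]

def pipeline(rows, steps):
    """Apply each transformation step in order."""
    for step in steps:
        rows = step(rows)
    return rows

raw = [{"amount": 10, "currency": "usd"}, {"amount": None}]
ready = pipeline(raw, [clean, normalize])
```

Keeping each step small and single-purpose is what makes the pipeline flexible: steps can be reordered, swapped, or reused across projects.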
Data Profiling enables users to assess the quality and structure of data sources to fix data quality issues and promote good data governance.
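The kind of assessment data profiling performs can be sketched with a simple column profile: row count, null percentage, and distinct-value count, which together surface completeness and cardinality issues before they reach analytics.

```python
# Toy column profiler: summarize a column's completeness and cardinality.

def profile_column(values):
    """Return basic quality metrics for one column of data."""
    total = len(values)
    nulls = sum(v is None for v in values)
    distinct = len({v for v in values if v is not None})
    return {
        "count": total,
        "null_pct": round(100 * nulls / total, 1) if total else 0.0,
        "distinct": distinct,
    }

# A column with one missing value out of four, and two distinct values.
stats = profile_column([1, 1, None, 2])
```

Metrics like these feed data governance decisions, for example flagging a column whose null percentage exceeds an agreed threshold.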
Machine Learning enriches data with prediction, scoring, classification, and more.
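Enrichment by scoring can be illustrated with a logistic model that attaches a probability to each record. The feature weights here are stand-ins for a trained model, and the `churn_score` field name is an invented example.

```python
import math

# Score each record with a logistic function over weighted features,
# then attach the result as a new enrichment column.

def score(row, weights, bias=0.0):
    """Logistic probability from a linear combination of features."""
    z = bias + sum(weights.get(k, 0.0) * v for k, v in row.items())
    return 1 / (1 + math.exp(-z))

def enrich(rows, weights):
    """Append a churn_score field to every record."""
    return [{**r, "churn_score": round(score(r, weights), 3)} for r in rows]

enriched = enrich([{"tenure": 2.0}, {"tenure": -1.0}], {"tenure": 1.0})
```

In practice the weights would come from a trained model; the point is that scoring adds a derived column without altering the source data.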
Catalog & Lineage capabilities empower users to discover, govern, and protect data using AI and machine learning built on a layer of common enterprise metadata.
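One piece of what a lineage layer answers is "what does this dataset ultimately depend on?" The sketch below walks a toy lineage graph, where each dataset maps to its direct upstream sources; dataset names are illustrative.

```python
# Toy lineage graph: dataset -> list of direct upstream datasets.
# upstream() walks the graph to collect all transitive dependencies.

def upstream(lineage, dataset, seen=None):
    """Return every dataset the given one transitively depends on."""
    if seen is None:
        seen = set()
    for src in lineage.get(dataset, []):
        if src not in seen:
            seen.add(src)
            upstream(lineage, src, seen)
    return seen

lineage = {
    "sales_mart": ["orders_clean"],
    "orders_clean": ["orders_raw"],
}
deps = upstream(lineage, "sales_mart")
```

A real catalog stores this graph as shared metadata, which is what lets impact analysis and governance rules span the whole platform rather than one tool.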
Analytics is used to discover, interpret, and communicate meaningful patterns in data that inform effective decision making.
Reverse ETL replicates enriched data from the warehouse back to the operational systems of record.
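The write-back direction can be sketched as copying selected warehouse fields into an operational key-value store, updating only the fields the warehouse owns. The key and field names are hypothetical.

```python
# Reverse ETL sketch: push enriched warehouse rows back into an operational
# store (a dict keyed by customer_id), syncing only the chosen fields.

def reverse_etl(warehouse_rows, operational, key="customer_id",
                fields=("lifetime_value",)):
    """Upsert warehouse-derived fields into the operational store."""
    for row in warehouse_rows:
        record = operational.setdefault(row[key], {key: row[key]})
        for field in fields:
            record[field] = row[field]
    return operational

ops = reverse_etl(
    [{"customer_id": "c1", "lifetime_value": 120.0}],
    {"c1": {"customer_id": "c1", "email": "c1@example.com"}},
)
```

Limiting the sync to named fields keeps the operational system authoritative for everything else, so the write-back enriches records rather than overwriting them.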