Qlik is a strategic partner for Snowflake, and we are proud to have been named Snowflake's 2020 Partner of the Year for Data Engineering. The Qlik Data Integration platform is a market leader in continuous, real-time data ingestion from a wide range of enterprise data sources, and we work closely with Snowflake to provide an aligned and optimized solution. Following the changes Snowflake introduced to its Cloud Services consumption billing in 2020, we realized this might affect our joint customers. To get ahead of this, we embarked on an effort to gain detailed insight into the new billing model, how the use of Qlik Replicate could affect a customer's Snowflake bill, and what changes or guidance we should provide to help customers optimize their cost/performance balance.
Snowflake has historically billed its customers for consumption of data warehouse "compute" resources, along with a fee for storage. It did not bill for the computational resources consumed when parsing a SQL statement, generating an execution plan, and so on. These non-warehouse computational resources are what Snowflake calls "Cloud Services".
In the Fall of 2019, Snowflake announced that it would be adjusting its billing model and begin charging customers for Cloud Services (also sometimes called “GS” or “Global Services”). This Snowflake blog post provides some background on these changes to the billing model.
When you net things out, virtually every interaction you might have with Snowflake involves execution of a SQL statement of some sort, which in turn would result in consumption of Cloud Services resources. As Qlik Replicate can deliver very high volumes of data into Snowflake in real-time, it was important for us to understand how Replicate’s delivery of data into Snowflake would impact a customer’s Snowflake bill.
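Snowflake makes this per-statement Cloud Services consumption visible in the SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY view (its CREDITS_USED_CLOUD_SERVICES column). As a minimal sketch of how that usage could be tallied by statement type, assuming you have already fetched (query_type, credits) rows from that view (the values below are illustrative, not real measurements):

```python
from collections import defaultdict

def credits_by_type(rows):
    """Sum Cloud Services credits per statement type.

    `rows` is an iterable of (query_type, credits) pairs, e.g. as fetched
    from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY.
    """
    totals = defaultdict(float)
    for query_type, credits in rows:
        totals[query_type] += credits
    return dict(totals)

# Illustrative sample rows -- not actual Snowflake measurements.
rows = [
    ("INSERT", 0.00004),
    ("UPDATE", 0.00003),
    ("CREATE_TABLE", 0.00001),
    ("INSERT", 0.00005),
]
print(credits_by_type(rows))
```

A query against QUERY_HISTORY grouped by QUERY_TYPE gives the same breakdown directly in SQL; the helper above is just the post-processing step once the rows are in hand.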
We set out to run a series of benchmark tests to help us:
- Improve our understanding of Snowflake’s Cloud Services
- Understand Cloud Services credit consumption by different types of SQL statements
- Measure objectively the impact of Qlik Replicate on Cloud Services consumption
- Identify Qlik Replicate tuning guidelines that would impact Cloud Services consumption
- Identify product optimization opportunities
Our test environment was isolated: no other users were accessing the Snowflake database and warehouse used in testing. Our tests ran a consistent, repeatable source database load containing a mixture of INSERT, UPDATE, and DELETE operations, with Qlik Replicate delivering the change data into Snowflake. Each test ran for a little more than two and a half hours and generated more than 6 million CDC records, using a different combination of Replicate tuning parameters. The load was not large by any means, but it was enough for us to draw some important conclusions.
As a part of our study, we also worked with several customers to examine their production environments. We looked at the configuration of all their production Qlik Replicate tasks to understand how they had Qlik Replicate configured, and we analyzed over 3 million SQL statements generated by Qlik Replicate in those environments to understand their respective Cloud Services resource consumption. Our analysis included both DML statements (Data Manipulation Language – statements that modify or otherwise access data stored in the database) and DDL statements (Data Definition Language – statements that perform administrative functions, define the structure of tables and indexes … essentially all statements that aren’t DML).
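The DML/DDL split described above can be sketched as a simple classification by each statement's leading keyword. This is an illustrative reconstruction, not the actual analysis tooling; note that, per the definition used here, SELECT counts as DML because it accesses data:

```python
import re
from collections import Counter

# Statements that modify or otherwise access data (per the definition above).
DML_KEYWORDS = {"INSERT", "UPDATE", "DELETE", "MERGE", "SELECT", "COPY"}

def classify(sql: str) -> str:
    """Return 'DML' or 'DDL' based on the statement's first keyword."""
    match = re.match(r"\s*([A-Za-z]+)", sql)
    keyword = match.group(1).upper() if match else ""
    return "DML" if keyword in DML_KEYWORDS else "DDL"

# Illustrative statements only.
statements = [
    "INSERT INTO orders VALUES (1, 'widget')",
    "CREATE TABLE staging_orders (id INT, item STRING)",
    "MERGE INTO orders USING staging_orders ON orders.id = staging_orders.id",
    "ALTER TABLE orders ADD COLUMN shipped BOOLEAN",
]

counts = Counter(classify(s) for s in statements)
print(dict(counts))  # {'DML': 2, 'DDL': 2}
```

Run over millions of logged statements, a tally like this is what shows the DML-heavy skew discussed below.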
We learned quite a bit during our testing. Our key takeaways were:
- All SQL statements result in Cloud Services resource consumption.
- DML statements, rather than DDL, account for the bulk of Cloud Services charges. The reasons for this are twofold:
- Parsing and creation of an execution plan tend to be more involved for DML.
- Over time, there tend to be far more DML than DDL operations.
- As DML execution had the highest impact on Cloud Services consumption, we found that we could significantly lower that consumption by configuring Qlik Replicate to deliver larger batch sizes, accepting a small degree of increased latency in exchange for reduced cost.
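The batching trade-off above can be sketched generically: buffer incoming changes and flush either when the batch reaches a size limit or when the oldest buffered change exceeds a latency cap. Fewer, larger flushes mean fewer DML statements for Snowflake to parse and plan. The class and parameter names here are hypothetical, not actual Qlik Replicate settings:

```python
import time

class ChangeBatcher:
    """Illustrative CDC batcher (hypothetical names, not Replicate's API).

    Changes accumulate until the batch holds max_rows, or until the oldest
    buffered change is max_wait_seconds old. Each flush represents one DML
    round trip to the target, so larger batches trade latency for fewer
    statements (and less Cloud Services parse/plan overhead).
    """

    def __init__(self, max_rows, max_wait_seconds, apply_fn):
        self.max_rows = max_rows
        self.max_wait = max_wait_seconds
        self.apply_fn = apply_fn  # callback that applies one batch
        self.buffer = []
        self.first_change_at = None

    def add(self, change):
        if self.first_change_at is None:
            self.first_change_at = time.monotonic()
        self.buffer.append(change)
        if len(self.buffer) >= self.max_rows:
            self.flush()

    def poll(self):
        """Call periodically: flush if the oldest change has waited too long."""
        if self.buffer and time.monotonic() - self.first_change_at >= self.max_wait:
            self.flush()

    def flush(self):
        if self.buffer:
            self.apply_fn(self.buffer)
            self.buffer = []
            self.first_change_at = None

# With max_rows=500, 2,000 changes arrive in 4 batches; with max_rows=50,
# the same stream would need 40 batches -- 10x the DML statement count.
batches = []
b = ChangeBatcher(max_rows=500, max_wait_seconds=5.0, apply_fn=batches.append)
for i in range(2000):
    b.add(("UPDATE", i))
b.flush()  # drain any partial final batch
print(len(batches))  # 4
```

The max_wait_seconds cap is what bounds the added latency: even a trickle of changes is applied within that window.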
Our findings from our internal benchmarks were substantiated by what we learned from the customers we worked with. In every case, DML operations accounted for the bulk of Cloud Services resource consumption, and the customers consistently had Qlik Replicate configured to deliver data in near-real time, leaving considerable room to tune Qlik Replicate to deliver larger batches at a reduced Cloud Services cost. Each customer will need to find its own balancing point between acceptable latency and cost.
For details on the results of our research, see the whitepaper below.