AKV0524
Contributor III

Volume of data Qlik replicate is ingesting in snowflake

Hi Team,

We are trying to find out how much data Qlik Replicate is replicating from source to target on a daily basis. We can get this information from the AEM dashboard, but we are not sure whether it is correct.

For ex:

For a full-load table, the Qlik Replicate console shows a total Transferred Volume of 2778 MB, but when we check the actual volume of that data in Snowflake it is just 27.5 MB.

We are not sure why there is such a big difference between the volume shown in the Qlik Replicate console and the volume in Snowflake.

Does anyone have an idea about this?

 

Thanks

Amit

1 Solution

Accepted Solutions
john_wang
Support

Greatly agree with @Heinvandenheuvel .

BTW, the default compression for Snowflake is GZIP. GZIP can reduce the size of a file anywhere between 75% and 95%, depending on the data, which significantly reduces storage cost on the cloud side.
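To see why the compressed footprint in Snowflake can be a small fraction of the transferred volume, here is a minimal sketch using Python's standard `gzip` module. The sample data is made up for illustration; real ratios depend entirely on how repetitive your rows are.

```python
import gzip

# Illustrative only: compression ratios vary widely with the data.
# Repetitive text (common in CSV exports and padded char columns)
# compresses very well, often far beyond the 75% lower bound.
raw = ("2024-01-01,ACME Corp,STATUS_OK,          \n" * 10_000).encode()
compressed = gzip.compress(raw)
ratio = 1 - len(compressed) / len(raw)
print(f"raw={len(raw)} bytes, gzip={len(compressed)} bytes, saved {ratio:.0%}")
```

A 2778 MB transfer landing as 27.5 MB of stored data is about a 99% reduction, which is plausible for highly repetitive or heavily padded rows.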

Hope this helps.

John.

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!


4 Replies
Heinvandenheuvel
Specialist III

2778 MB vs. 27.5 MB is a big difference, worth explaining and understanding. And yet, why worry!?

The Replicate documentation only indicates: "As the calculated target data volume includes the Replicate metadata (table_id, stream_position, flags, bookmarks, and so on), the source data volume will always be less than the target data volume."

0) What is the source DB?

1) How did you query the Snowflake storage size? Is compression in play?

2) What is the volume according to the source DB?

3) Are there filters in the task?

4) Row count times row width?

5) Check your REPTASK_xxx.LOG file and look for lines with "load finished", like:

[TARGET_LOAD ]I: Load finished for table 'ATT_USER'.'TEST' (Id = 1). X rows received. 0 rows skipped. Volume transferred XXX.
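Step 5 above can be automated. Here is a hypothetical helper that pulls the row count and transferred volume out of a log line shaped like the one quoted; the exact wording of the log message may differ between Replicate versions, so treat the pattern as an assumption to adjust against your own REPTASK log.

```python
import re

# Sample line modeled on the quoted log message; the row count and
# volume values here are invented for illustration.
line = ("[TARGET_LOAD ]I: Load finished for table 'ATT_USER'.'TEST' "
        "(Id = 1). 100000 rows received. 0 rows skipped. "
        "Volume transferred 2778000000.")

# Named groups capture schema, table, rows received, and bytes transferred.
pattern = (r"Load finished for table '(?P<schema>[^']+)'\.'(?P<table>[^']+)'"
           r".*?(?P<rows>\d+) rows received.*?Volume transferred (?P<vol>\d+)")

m = re.search(pattern, line)
if m:
    print(m.group("schema"), m.group("table"),
          m.group("rows"), "rows,", m.group("vol"), "bytes")
```

Summing the per-table volumes across the log gives an independent check on the AEM dashboard figure.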

 

My guess is that some large (VAR)CHAR columns had lots of trailing spaces, which were read from the source but stripped on the way in (DB2: keepCharTrailingSpaces, default false; Oracle: truncateTrailingBinaryZerosInChars, default true).
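The padding effect is easy to picture with a short sketch; the column width and value below are invented for illustration.

```python
# Sketch of why trailing padding inflates the transferred volume: a
# fixed-width CHAR(200) read from the source carries its padding over
# the wire, while the target (or an endpoint setting like the ones
# named above) may strip it before storage.
value = "short text"
padded = value.ljust(200)          # CHAR(200) as read from the source
stripped = padded.rstrip()         # what may actually land on the target
print(len(padded), len(stripped))  # 200 vs 10
```

A 20x padding ratio on wide char columns alone would account for a large share of the 2778 MB vs. 27.5 MB gap, before compression even enters the picture.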

Hein.

AKV0524
Contributor III
Author

Hi John,

Thanks for the info. I think it is the compression that is reducing the volume in Snowflake.

Thanks

Amit

john_wang
Support

Thank you for your great support! @AKV0524 
