kevinpintokpa
Creator II

Drastic performance hit with Where Not Exists and Map Using in QlikView Publisher 12.40 SR1

I have been using Where Not Exists() to perform incremental loading of some very large QVDs successfully for many years. It was relatively quick under Publisher 12.20 SR4:

      2019-10-21 19:08:22 0772 Concatenating older data from ..\QVD\JDE\JDE_F0911.qvd
      2019-10-21 19:08:22 0773 
      2019-10-21 19:08:22 0774 Concatenate ( 'JDE_F0911' )
      2019-10-21 19:08:22 0775 Load
      2019-10-21 19:08:22 0776 *
      2019-10-21 19:08:22 0777 From ..\QVD\JDE\JDE_F0911.qvd (qvd)
      2019-10-21 19:08:22 0778 Where Not Exists ( %Key )
      2019-10-21 19:10:04       146 fields found: %Key,  ... GLWY, 
      2019-10-21 19:10:04      36,548,963 lines fetched

 

This operation took less than 2 minutes under 12.20 SR4.  The %Key field was constructed using the Hash128() function since the underlying database table has multiple primary key fields.
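For reference, the overall pattern looks something like this (a sketch only: the key fields, date filter, and variable name are illustrative placeholders, not my actual script):

```
// Incremental-load sketch. GLKCO/GLDCT/GLDOC/GLJELN and vLastReload are
// placeholders for the real composite primary key fields and reload marker.
JDE_F0911:
Load
    Hash128(GLKCO, GLDCT, GLDOC, GLJELN) as %Key,
    *;
SQL Select * From F0911
Where GLUPMJ >= $(vLastReload);   // fetch only new/changed rows

// Append historical rows whose key was not just loaded.
Concatenate ( 'JDE_F0911' )
Load *
From ..\QVD\JDE\JDE_F0911.qvd (qvd)
Where Not Exists ( %Key );

// Rewrite the merged table back to the QVD.
Store JDE_F0911 Into ..\QVD\JDE\JDE_F0911.qvd (qvd);
```

The Concatenate step is the one timed in the log excerpts above; with a single Exists() condition on a field loaded as-is, it qualifies for an optimized QVD load.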

We upgraded to 12.40 SR1 yesterday, and this operation now takes 41 minutes!

      2019-10-23 07:16:24 0774 Concatenating older data from ..\QVD\JDE\JDE_F0911.qvd
      2019-10-23 07:16:24 0775 
      2019-10-23 07:16:24 0776 Concatenate ( 'JDE_F0911' )
      2019-10-23 07:16:24 0777 Load
      2019-10-23 07:16:24 0778 *
      2019-10-23 07:16:24 0779 From ..\QVD\JDE\JDE_F0911.qvd (qvd)
      2019-10-23 07:16:24 0780 Where Not Exists ( %Key )
      2019-10-23 07:57:03       146 fields found: %Key, … GLWY, 
      2019-10-23 07:57:03      36,548,963 lines fetched

 

This is seriously impacting the performance of my database loaders that load from very large tables.

Any ideas on how to mitigate or work around this?


2 Replies
kevinpintokpa
Creator II
Author

So I believe I have found the root cause.  There is a behavioral change in script execution between 12.20 SR4 and 12.40 SR1 that results in a big performance hit whenever Map Using and Where Exists are used together.

I use Map Using to map field values in the load. I ran all four combinations and timed the Concatenate operation:

  1. Without Map, without Where Exists: 31 seconds, optimized load (30M rows).
  2. With Map, without Where Exists: 1 min 53 sec, non-optimized load.
  3. Without Map, with Where Exists: 40 seconds, optimized load.
  4. With Map, with Where Exists: 33 minutes, non-optimized load!

Note that the Map Using does not map the %Key field in my case.

Now to figure out how to work around it...
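Schematically, the slow combination is any script shaped like this (the mapping table and mapped field are illustrative placeholders; only the shape matters):

```
// A mapping applied earlier in the script and still active.
AccountNameMap:
Mapping Load * Inline [
Code, Name
100, Cash
200, Receivables
];

Map AccountName Using AccountNameMap;  // AccountName is a placeholder field;
                                       // %Key itself is not mapped

// In 12.40 SR1 this load is forced out of optimized mode while the
// mapping is active, even though the mapped field is not the Exists field.
Concatenate ( 'JDE_F0911' )
Load *
From ..\QVD\JDE\JDE_F0911.qvd (qvd)
Where Not Exists ( %Key );
```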

kevinpintokpa
Creator II
Author

It turns out that there was an easy fix.  I was able to use this statement just before the Concatenate:

Unmap *

This restored the performance to what it had been in 12.20 SR4.

Note that this worked in my case because the data in the QVD being concatenated had already been mapped in a previous reload, so deactivating the mappings before the Concatenate lost nothing.
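In script terms, the placement is simply (shown schematically):

```
Unmap *;   // deactivate all active Map Using statements

// With no mapping active, the QVD append runs as an optimized load
// again, restoring the 12.20 SR4 timing.
Concatenate ( 'JDE_F0911' )
Load *
From ..\QVD\JDE\JDE_F0911.qvd (qvd)
Where Not Exists ( %Key );
```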