Anonymous
Not applicable

How to run the Data load script quicker?

Hi everyone,

I'm new to Qlik Sense and just getting the hang of the scripting. During practice I often change only two or three small things in the data load script, but when I debug or apply it, Qlik Sense runs the entire script again, which takes forever. It really slows my learning down, and I can't find a clear answer in any documentation. Is it possible to set up Qlik Sense so it only reloads the part of the script that was edited, instead of the entire thing?

3 Replies
dplr-rn
Partner - Master III

There is no way to do exactly what you describe, but one way of speeding things up is to use debug mode with the Limited load option to reduce the number of records loaded, so the script runs faster overall.

petter
Partner - Champion III

There are several ways of doing what you request. But there is no simple switch that you can toggle.

I guess that what is taking forever for you is the load statements, right? If so, you can use the BUFFER prefix on your LOAD and SELECT statements. Then the script will not hit the source file or database to get new data every time, but will instead use automatically generated QVD files to get the same data quickly - around 20 times as fast.

https://help.qlik.com/en-US/sense/September2018/Subsystems/Hub/Content/Sense_Hub/Scripting/ScriptPre...
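
A minimal sketch of a buffered load (the library path and field names below are just placeholders, not from your app):

     // The first reload reads the source and writes an automatically managed QVD buffer;
     // later reloads read that buffer instead of hitting the source again.
     Orders:
     BUFFER LOAD
         OrderID,
         CustomerID,
         Amount
     FROM [lib://MyData/orders.csv]
     (txt, utf8, embedded labels, delimiter is ',');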

There are also other methods, like separating the load script into a "pure" load app (a .qvf without any UI or sheets). You can then use another app with a BINARY load statement to pull everything that was previously loaded in the first app quickly into memory. All your changes have to come after the BINARY statement in the load script of the second app. You can daisy-chain apps like this to save a lot of time when developing more complex and time-consuming load scripts.
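
As a sketch, the second app's script would start like this (the app name and library path are placeholders):

     // BINARY must be the very first statement in the load script.
     // It pulls the whole data model of the base app into memory in one step.
     BINARY [lib://Apps/BaseLoad.qvf];

     // All further development changes go here, after the BINARY statement.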

The traditional approach often recommended by Qlik is to use a QVD layer which you handle yourself, without the BUFFER prefix. You stage all your source tables in QVD files that correspond one-to-one with the source tables. You only need to reload the QVD files from the source maybe once a day, and while developing you read the QVD files into your app instead - and that process is about 20 times as fast as loading from the actual source(s).
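
A minimal sketch of such a QVD layer (the connection name, library paths and fields are placeholders): one extract script stores the source table into a QVD, and the app you are developing then reads from that QVD.

     // Extract step - run this only when the source data actually needs refreshing.
     LIB CONNECT TO 'MyDatabase';    // placeholder data connection

     Customers:
     SELECT CustomerID, Name, Country
     FROM Customers;

     STORE Customers INTO [lib://QVDs/Customers.qvd] (qvd);
     DROP TABLE Customers;

     // Development step - a fast (optimized) load straight from the staged QVD.
     Customers:
     LOAD CustomerID, Name, Country
     FROM [lib://QVDs/Customers.qvd] (qvd);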

If you can live with a limited number of rows in the larger tables, you could also make use of the FIRST prefix for the LOAD statement (a sketch follows the SQL examples below), although that wouldn't make the extract from a SQL-based source any faster. Most modern SQL databases, however, can limit the number of rows themselves via the SQL statement itself. The most common and standardised way in SQL is something like this:

     SELECT * FROM table LIMIT 1000 ;

     or for Microsoft SQL Server:

     SELECT TOP 1000 * FROM table ;

     SQL SELECT TOP, LIMIT, ROWNUM
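
On the Qlik side, the FIRST prefix mentioned above works like this (just a sketch - the file path and fields are placeholders):

     // Load only the first 1000 records while developing.
     Sales:
     FIRST 1000 LOAD
         OrderID,
         Amount
     FROM [lib://MyData/sales.csv]
     (txt, utf8, embedded labels, delimiter is ',');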

Many of the more popular databases can even do a random sample of x % of the total rows of a table.

Here is how it's done for Microsoft SQL Server:

How to Return Random Rows Efficiently in SQL Server · Nadeem Afana's Blog
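
One common option in SQL Server is the TABLESAMPLE clause, which returns an approximate percentage of a table's rows (it samples data pages, so the exact row count varies). A sketch of how it could look in a load script - the connection and table names are placeholders:

     LIB CONNECT TO 'MySQLServer';    // placeholder data connection

     SampledOrders:
     SQL SELECT *
     FROM dbo.Orders TABLESAMPLE (10 PERCENT);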

undergrinder
Specialist II

Hi Paulien,

The others have recommended some handy methods for reducing the number of rows to load.

If you only want to run a particular part of your script, you can just comment out the rest.

You can add a header to each section:

/**************
*  Header   *
***************/

Load
     ....

and when you delete the last / symbol, the comment is left open, so everything down to the next section's closing */ - i.e. the entire section - will be commented out.

AND/OR

You can also use the EXIT SCRIPT; statement to interrupt the data load at any point.
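
A small sketch of that (the debug-flag variable is just a hypothetical example):

     // Stop the reload here; nothing below this line is executed.
     EXIT SCRIPT;

     // Or stop only when a condition holds, e.g. a debug flag you set yourself:
     // EXIT SCRIPT WHEN '$(vDebugMode)' = '1';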

G.