RyugaHideki
Contributor III

Application performance

Hello guys,

I'm just wondering how you all deal with large datasets that require multiple aggregations and transformations. For me, the loading time takes forever and sometimes fails to complete, and using the application becomes quite laggy. I tried making views for the aggregations in the database and importing those instead, but the more I create, the slower the app becomes. For example, this:
// Aggregation pushed down to SQL Server, read by the preceding LOAD
Marque_Tom:
LOAD
    [Marques],
    Q35_DES,
    Marques_TOM;
SELECT
    Q35 as Marques,
    Q35_DES,
    Count(Q35) as Marques_TOM
FROM DataShare.dbo.FAIT_Q35
GROUP BY Q35_DES, Q35;
became this:
// The aggregation now lives in a database view
[Marque_TOM]:
SELECT
    Marque,
    Vague,
    Code,
    "Marques_TOM"
FROM "Market_Study".dbo."Marque_TOM";

It's just one example - I removed the aggregations from the data load part. If it were this alone it wouldn't be a problem, but I have plenty more, and it made the app very slow because it keeps creating synthetic keys; it's basically impossible to use. I hope you can help me.

9 Replies
JandreKillianRIC
Partner - Creator III

Hi @RyugaHideki 

A few things.

1. Do you need to load the whole table every time? Could you use an incremental load against the DB and store the result to a QVD? Loading from a QVD is much faster than loading from the database (see the sketch after this list).

2. How many times a day does this need to be updated? I know "4 hours" is long, but if it's only updated once a day, running it at 3 AM "should be fine".

3. Depending on what your data looks like, a count vs. a RowNo() / AutoNumber() might give the same results (just an example).

4. How big is your dataset, and how long is "forever"?
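
On point 1, here's a minimal sketch of the incremental pattern. The ModifiedDate column, the Q35_ID row key, the vLastReload variable and the lib:// path are all assumptions on my side, since I don't know your actual schema:

// Hypothetical sketch: incremental load + QVD store.
// ModifiedDate, Q35_ID, vLastReload and the lib:// path are assumed names.

// 1. Pull only the rows changed since the last reload from SQL Server.
Marque_Tom:
LOAD Q35_ID, Q35 as Marques, Q35_DES;
SELECT Q35_ID, Q35, Q35_DES
FROM DataShare.dbo.FAIT_Q35
WHERE ModifiedDate >= '$(vLastReload)';

// 2. Append the unchanged history from the previous run's QVD.
Concatenate (Marque_Tom)
LOAD Q35_ID, Marques, Q35_DES
FROM [lib://Data/Marque_Tom.qvd] (qvd)
WHERE Not Exists(Q35_ID);   // skip rows already refreshed above

// 3. Store the combined table for the next run.
STORE Marque_Tom INTO [lib://Data/Marque_Tom.qvd] (qvd);

Your Count / Group By aggregation can then run over the QVD in the script instead of hitting the database on every reload.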

Quite keen to help you on this one!

Regards - Jandre

 

Mark the solution as accepted that solved your problem and if you found it useful, press the like button! Check out my YouTube Channel QeepItSimpleQupid
RyugaHideki
Contributor III
Author

The update part: not very often, actually. I thought the problem was the aggregations and that I'd solve it by using views, but the problem persists.

"Forever" is about 10 minutes. Even after the load completes, the app is kind of slow when working with the data. My dataset is quite big (mainly because of the aggregations), I'd say 100k lines. As for the first part, I don't think I quite know what you're talking about, unfortunately.

marksouzacosta

Hi @RyugaHideki ,

Your query is not running on the Qlik side but on your SQL Server - in both cases - so your problem is probably on the database side, not in Qlik.

There are tons of things you can do to improve your database performance. The common problems are lack of indexes on your columns, bad queries, concurrent processes, network issues and storage issues. Another possibility is the connection between your database and Qlik. How are you connecting to your database? Are you using Qlik Data Gateway? Which database are you using?

For a quick test, you could try materialized views with properly indexed columns and see if your results improve. But I highly recommend checking with your DBA for help on this.

Regards,

Mark Costa

Read more at Data Voyagers - datavoyagers.net
Follow me on my LinkedIn | Know IPC Global at ipc-global.com

RyugaHideki
Contributor III
Author

Hi @marksouzacosta, @JandreKillianRIC

What I did was change the indexing; I'm now using a star schema with everything related by one primary key (loaded views and a fact table). Loading no longer takes long, but the problem of exploiting the data persists: there are millions of rows of calculations. I even saved them as QVD files, which improved loading time even more, but working with the data is still slow. Qlik Sense is hosted on a server.
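
For reference, this is roughly the pattern I use now (a sketch with placeholder names, not my exact script): the aggregation runs once in the load script and is stored to a QVD, so the charts read a pre-aggregated table.

// Rough sketch of the current approach; field and path names are placeholders.
AggFact:
LOAD
    Marques,
    Vague,
    Count(Code) as Marques_TOM
FROM [lib://Data/Fact.qvd] (qvd)
GROUP BY Marques, Vague;

STORE AggFact INTO [lib://Data/AggFact.qvd] (qvd);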

JandreKillianRIC
Partner - Creator III

Hi @RyugaHideki 

This can be due to a few things, like the number of distinct values, the number of columns on your tables, etc.
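
Two common footprint reducers, as a rough sketch (the field names here are made up for illustration, not from your model):

// Hypothetical sketch: shrink the symbol tables that drive RAM use and UI speed.
Fact:
LOAD
    AutoNumber(RespondentID) as RespondentKey,  // compact integer instead of a long string key
    Round(Score, 0.1) as Score,                 // fewer distinct values to store and index
    Marques
FROM [lib://Data/Fact.qvd] (qvd);

Dropping fields no chart actually uses (DROP FIELD ...) helps for the same reason.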

What is the CPU spec and RAM size of the server?  

How big is the application (in QMC - Apps - File Size (MB))?

Regards Jandre

Mark the solution as accepted that solved your problem and if you found it useful, press the like button! Check out my YouTube Channel QeepItSimpleQupid
RyugaHideki
Contributor III
Author

@JandreKillianRIC The server has 48 GB of RAM with a Xeon 4216 CPU, so that should be plenty. I do believe the problem is the aggregations and the way I modeled the data: one table (Image) has over 5M rows, and since it has 2 more dimensions that don't figure in the fact table, it keeps adding synthetic keys.

(screenshot of the data model attached: RyugaHideki_0-1737530937775.png)

The size is 18.39 MB. Again, I think the problem is the modeling, but I can't figure out a way to reduce it. The Image table alone has 104k lines, so that does kind of explain why the aggregations got to that number.

JandreKillianRIC
Partner - Creator III

Hi @RyugaHideki 

Agreed - This shouldn't be so "slow"... 

🤔

Mark the solution as accepted that solved your problem and if you found it useful, press the like button! Check out my YouTube Channel QeepItSimpleQupid
marcus_sommer

There exist different opinions about synthetic keys and the tables they create - personally, I regard such data models as invalid. Wherever there are "strange" results and/or poor performance, they should be resolved - ideally in the direction of a star schema, which means having a single fact table and n dimension tables.
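
A minimal sketch of resolving one synthetic key by hand, assuming the fact table and the Image table share the fields Marques and Vague (illustrative assumptions, not the real model):

// Hypothetical sketch: replace a synthetic key with one explicit link field.
Fact:
LOAD
    AutoNumber(Marques & '|' & Vague) as %ImageKey,  // single compact key
    Marques,
    Vague,
    Code
FROM [lib://Data/Fact.qvd] (qvd);

Image:
LOAD
    AutoNumber(Marques & '|' & Vague) as %ImageKey,
    ImageDim1,   // the extra dimensions that don't exist in the fact table
    ImageDim2
FROM [lib://Data/Image.qvd] (qvd);

Because the shared fields are kept only on the fact side, the two tables now associate over %ImageKey alone and no synthetic key is generated.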

RyugaHideki
Contributor III
Author

I am using a star schema model, but since a lot of dimensions were introduced it did create more synthetic keys - not many, but still enough to slow it down. Since it's complicated, the way I sort of solved it is by partitioning the application: I put Image in another app completely.