JacobTews
Creator

Tips on where to start with performance tuning?

My primary CDC task runs every 15 minutes during the workday, updating data for the data marts which feed our BI environment. Typically I expect this to take 1-2 minutes, but sometimes the task runs much longer, occasionally more than an hour. Here's a snapshot from yesterday:

JacobTews_0-1706997584589.png

For example: the 8:00 run took 11+ minutes for 308 inserts and 254 updates. Half an hour later, the run took less than 3 minutes for 383 and 376. Then the 9:15 run took so long for 960 and 831 that the next 3 scheduled runs were missed. Finally, an hour after that, we managed 2,461 inserts and 1,347 updates in just 4 minutes.

I would like to understand why this happens, since the variation in insert and update volumes seems too small to explain it. I have DBA-equivalent access to the SQL Server instance where both the source and target live, so I can dig beneath the surface. However, I have no idea where to begin.

Anyone have suggestions for how I should start diagnosing the issue(s)?

Thanks in advance for the help...

Labels: Qlik Compose, Qlik Compose for Data Warehouses

1 Solution

Accepted Solutions
TimGarrod
Employee

Hi @JacobTews, I would recommend starting by looking at the details of the task run that took a long time. If you review the details, you can see which statements took a long time to process and can start to triage the individual statements/tables.

 

While the run only showed X records processed to the target tables, there are times when intermediate steps actually process a lot more data.  

When the job is running, select the left-most number (e.g. the screenshot below shows 153/153) - click on the left 153 and you'll get the details of the current/prior run, with each statement and how long it took to process.

 

This is a good first step in troubleshooting a long-running process, since it lets you determine whether a single statement or a group of statements is causing the issue. Additionally, you should ensure you have index maintenance and statistics maintenance running on your SQL Server as a general best practice (a starting point for checking this is sketched below the screenshot).

TimGarrod_0-1707223902982.png
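If you don't already have a maintenance routine in place, here is a minimal sketch of the kind of fragmentation check I mean - a standard DMV query, nothing Compose-specific, and the 10% / 1,000-page filters are just common starting points, not hard rules:

-- Fragmentation overview for the current database (run against the data warehouse DB).
SELECT   OBJECT_NAME(ips.object_id)          AS table_name,
         i.name                              AS index_name,
         ips.avg_fragmentation_in_percent,
         ips.page_count
FROM     sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN     sys.indexes AS i
         ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE    ips.avg_fragmentation_in_percent > 10   -- only show meaningful fragmentation
         AND ips.page_count > 1000               -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;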

 


6 Replies
deepaksahirwar
Creator II

Dear @JacobTews ,

Welcome and Thank you for reaching out to us on Qlik Community.

I would like to share some performance tuning tips for Qlik Compose. Here are some general steps you can take:

1. Monitor Task Performance: Keep an eye on the performance of your tasks. Look for any tasks that are taking longer than expected and try to identify any patterns or common factors.


2. Optimize Data Models: Simplify your data models where possible. This can help to improve performance by reducing the complexity of the data processing.


3. Check System Resources: Make sure that your system has sufficient resources (CPU, memory, disk space, etc.) to handle the tasks. If resources are consistently running high, it may be time to consider scaling up your system.


4. Create a Diagnostics Package: To assist in troubleshooting issues, you can create a diagnostics package. This package contains information like the project “data” directory, Java logs and workflow logs, .NET logs, and the deployment package file.


5. Manage Logs: Regularly delete old logs for your projects. Keeping logs for more than 45 days might not be necessary and can consume valuable disk space, potentially slowing down your system.


6. Contact Support: If you’re still having trouble after trying these steps, it may be helpful to reach out to Qlik Support. They can help you to further diagnose and resolve performance issues.

In my experience, point 5 is particularly effective if you find everything normal with the other key points mentioned.
Regularly managing logs can indeed have a significant impact on performance. Over time, logs can consume a lot of disk space and slow down the system. By deleting old logs, you’re not only freeing up disk space but also reducing the amount of data that the system has to manage, which can lead to improved performance.


I hope this helps! Let me know if you have any other queries.

If our response has been helpful, please consider clicking "Accept as Solution". 
This will assist other users in easily finding the answer.

Best Regards,
Deepak

sureshkumar
Support

Hello @JacobTews 


Performance tuning is best handled by our Professional Services team. I would suggest checking with your Account Manager regarding Professional Services, or creating a support case with us and we will route it to the Professional Services team to move forward with the performance issue.

More information about the process:

https://community.qlik.com/t5/Knowledge-Base/How-and-When-to-Contact-the-Consulting-Team/ta-p/171493...

 

 

Regards,

Suresh

JacobTews
Creator
Author

Hi, @TimGarrod!

I wasn't able to catch the process while it was hung up, but I found the offending task in the log:

JacobTews_0-1707245411726.png

Compose populates a temp table with data from our ItemOnHand table before merging changes into the actual data warehouse. Because that table is updated with every customer order, the type 2 fields cause the satellite to balloon quickly. I'm discussing with the business whether a daily snapshot will be sufficient for historical analysis, rather than a real-time type 2 history.
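To put numbers on how fast that satellite is ballooning, I'm looking at something like this sketch (the table and column names are placeholders; substitute the actual satellite and its key/date columns):

-- How many type 2 versions exist per business key, and how recent they are.
SELECT TOP (20)
       ItemID               AS business_key,    -- placeholder key column
       COUNT(*)             AS version_rows,
       MIN(FROM_DATE)       AS first_version,   -- placeholder type 2 date column
       MAX(FROM_DATE)       AS latest_version
FROM   dbo.SAT_ItemOnHand                       -- placeholder satellite table name
GROUP BY ItemID
ORDER BY COUNT(*) DESC;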

Also learning about reorganizing indexes and updating statistics now! I'm new to database management, so this is great stuff to digest.

JacobTews
Creator
Author

Looks like an index reorganization might not be a bad idea:

JacobTews_1-1707245811188.png
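For reference, the maintenance statements I'm reading about look roughly like this - index and table names are placeholders, and the 5% / 30% thresholds are just the commonly cited guidance, not hard rules:

-- Reorganize lightly fragmented indexes (roughly 5-30% fragmentation).
ALTER INDEX IX_SomeIndex ON dbo.SomeTable REORGANIZE;

-- Rebuild heavily fragmented ones (roughly > 30% fragmentation).
ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;

-- Refresh statistics on the index afterwards (REBUILD already does this; REORGANIZE does not).
UPDATE STATISTICS dbo.SomeTable IX_SomeIndex;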

 

TimGarrod
Employee

So you definitely want to set up index maintenance on your SQL Server. There are a lot of great blogs and stored procedures you can schedule to check index fragmentation and re-index based on a threshold. Another item you can always review is your TEMPDB usage - it's used for any sorting and for #TEMP tables. If you see any auto-growth of tempdb by very small amounts, that can have a HUGE performance impact as tempdb gets fragmented (the same goes for your data warehouse files/filegroups). (I was a SQL Server DBA for a number of years, so I've been through the wringer on SQL performance 😉.)
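A quick way to review those growth settings is something like this sketch (standard catalog views, nothing Compose-specific):

-- File sizes and auto-growth settings for tempdb.
-- Percentage growth or very small fixed increments are the red flags mentioned above.
SELECT name                    AS logical_file,
       type_desc,
       size * 8 / 1024         AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END                     AS growth_setting
FROM   sys.master_files
WHERE  database_id = DB_ID('tempdb');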

 

When you implement a re-index strategy, I'd also recommend implementing statistics management as part of that process. With the smaller payloads Compose processes when you run at lower latency, statistics management may be fine left to the database's automatic updates - but it doesn't hurt to update stats once a week or so.
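A minimal sketch of that weekly refresh (standard SQL Server commands; the table name is a placeholder):

-- Update statistics database-wide; sp_updatestats only touches stats whose rows have changed.
EXEC sp_updatestats;

-- Or target a heavily churning table with a full scan for more accurate statistics.
UPDATE STATISTICS dbo.SomeLargeTable WITH FULLSCAN;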