BillZrv
Contributor II

Two parallel tasks in QMC run forever

Hello everyone,

We were running two tasks one after the other; each one reads tables from SQL Server (via SSMS) and creates QVD files. Each task takes around 8 minutes when they run sequentially.

We wanted to run those tasks in parallel to save time, but after doing so the tasks need at least 30 minutes. Occasionally they do work in parallel (a rare occasion though), and sometimes we hit the "false running" situation that I have read some members describe here.

I have checked our resources: while those tasks were running we were using just 11% of our RAM and 5-10% of our CPU. I have also noticed that the download speed stays steady at 100 Mbps, whereas when the two tasks run sequentially the download speed can reach at least 300 Mbps.

I have also checked that the QMC is set to allow up to 5 tasks to run at the same time, and yesterday we rebooted the QlikView Server.

Any ideas? 

Thank you so much!


15 Replies
marcus_sommer

From the Qlik side, tables of these sizes are not a problem - especially given your QlikView environment and the observed CPU/RAM consumption. But on the SQL Server side I'm not so sure.

It's important to know that QlikView (like most other tools) doesn't execute the SQL itself; it just transfers the statement via the driver to the database and receives the results back the same way. This means everything happens within the SQL Server environment - and a query on a raw table of 17 GB with some transformations and some caching/buffering may in the end easily consume 50 GB, depending on settings maybe completely in RAM. Is this amount always available?

Beside this, I suspect the network is the more likely limitation, because your SQL message might just mean that the database can't buffer any more data while the network is (too) slow. Nowadays nearly everything is virtualized, so I suggest monitoring the VLAN + (v-)proxies + load balancer.

Another suggestion is to implement (more) incremental approaches and/or to divide both tasks into smaller chunks.

- Marcus
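
[Editor's note] Marcus's incremental-load suggestion could be sketched in QlikView load script roughly as follows. All object names here (`Orders`, `LastModified`, the QVD path) are placeholders for illustration, not taken from the thread:

```qlikview
// Sketch of an incremental reload: pull only rows changed since the last run,
// then merge them with the history already stored in the QVD.

LET vLastExec = '2024-01-01 00:00:00';   // in practice, persist this between runs

Orders:
SQL SELECT OrderID, CustomerID, Amount, LastModified
FROM dbo.Orders
WHERE LastModified >= '$(vLastExec)';    // only new/changed rows cross the network

// Append the unchanged historical rows from the previous QVD.
Concatenate (Orders)
LOAD * FROM [D:\QVD\Orders.qvd] (qvd)
WHERE Not Exists(OrderID);               // skip rows we just reloaded

STORE Orders INTO [D:\QVD\Orders.qvd] (qvd);
```

Because only the delta crosses the network on each run, two such tasks running in parallel would contend far less for the 100 Mbps link described above.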

Dan-Kenobi
Partner - Contributor III

I agree with @marcus_sommer in the sense that it's unlikely to be on the QlikView side.

One thing I overlooked (because we rarely get this deep into a problem) was the NUMA settings. I'm not sure if you can disable NUMA on your hardware.

And, since we are at the hardware level now, I wouldn't be against you standing up another QDS VM and transitioning the jobs there to see if it makes any difference.

Another thing that shouldn't be affecting you is the heap setting in the registry. But since we're kind of running out of ideas, here you go:

https://community.qlik.com/t5/QlikView-Administration/QlikView-Server-Memory-Heap/td-p/933720


If you do make the change, let us know how it affected your environment.

I feel like I'm one answer away from offering you a Teams support call, and then redirecting you to your Qlik rep for further assistance if we can't figure it out ourselves.

Dan-Kenobi
Partner - Contributor III

@BillZrv -- any news for us? 

We'd really like to get this sorted if possible

BillZrv
Contributor II
Author

Hello Dan,

thank you so much for your interest!

Unfortunately we still haven't managed to fix it. Sometimes it works, sometimes it doesn't. Sometimes, when a task is stuck at 100 Mbps and we manually run another QlikView task, that second task runs normally at 300 Mbps. But again, that is completely random.

We are stuck in no man's land, unable to figure out whether it is a physical network connection problem (since the machines are physical, not virtual) or a problem with the QlikView server's network configuration.

To be honest, we haven't tried your suggestions yet. We will contact the company that set up the system in the first place and ask for assistance, and we will definitely pass on your suggestions.

In the meantime, I have spread the hourly tasks across two hours, so the end users will get an update at least every two hours.

I will keep you updated!

All the best,

Bill

marcus_sommer

Were really all network parts checked - meaning also the proxies and any load balancers? About two years ago we had a case of accessing external data with irregular performance and frequent timeouts, and the IT team swore that nothing had been changed and that the errors must be on our side.

After a while we got them to agree to a few live monitoring sessions with Wireshark running, to inspect every detail of the traffic - and the cause in the end was a new virtualized load-balancer cluster which distributed the requests to the proxy clusters, and not all of them were properly configured. Maybe that is also a possible approach for your issue.

- Marcus

Dan-Kenobi
Partner - Contributor III

@marcus_sommer - that's a great suggestion too. My issue with going to the networking side is that it's quite difficult to provide evidence to the networking team that the ball is indeed in their court.

But, @BillZrv, if you are allowed to have Wireshark on your server, and assuming it's not going to interfere too much with your tasks' resources, then this route is probably better than my previous suggestion (since mine required setting up a whole other box - and the problem might not be related to Qlik or the machine it sits on).