Anonymous
Not applicable

Improving performance: replicating the same tDBInput many times

Hi,

 

I have a job that reads the same table around 60 times, as shown in the attached screenshot. Is there a way to replicate the data without running the same SELECT 60 times, and so improve the job's performance?

Thank you in advance!!

 

(screenshot: 0683p000009M5md.png)

1 Solution

Accepted Solutions
Anonymous
Not applicable
Author

Hi @Emanuele89 ,

 

Did you try the tReplicate component for your use case? With it, you only have to read the data from the database once, and the flow is replicated as many times as you want. See the screenshot below.

(screenshot: 0683p000009M5cc.png)

 

Regards,

Pratheek Manjunath
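Conceptually, tReplicate reads the rows once and hands the same in-memory flow to every downstream branch. A minimal Java sketch of that read-once / fan-out idea (class and method names here are hypothetical, not Talend APIs):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ReplicateSketch {
    // Stands in for the single tDBInput SELECT: runs exactly once.
    static List<String> readOnce() {
        List<String> rows = new ArrayList<>();
        rows.add("row1");
        rows.add("row2");
        return rows;
    }

    public static void main(String[] args) {
        List<String> rows = readOnce();                 // one database read
        List<Consumer<List<String>>> branches = new ArrayList<>();
        for (int i = 0; i < 3; i++) {                   // 3 branches instead of 60, for brevity
            final int n = i;
            branches.add(r -> System.out.println("branch " + n + " got " + r.size() + " rows"));
        }
        // Each branch consumes the same rows; no extra SELECT is issued.
        for (Consumer<List<String>> branch : branches) {
            branch.accept(rows);
        }
    }
}
```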

View solution in original post

6 Replies
TRF
Champion II

If you have to reuse the same dataset, have a look at tHashOutput / tHashInput.
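The idea behind tHashOutput / tHashInput is to write a flow into an in-memory store once and read it back any number of times without another database round trip. A rough Java sketch of that pattern (the class and method names are hypothetical, not the actual component internals):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HashCacheSketch {
    // In-memory store shared between "subjobs".
    private static final Map<String, List<String>> cache = new HashMap<>();

    // Like tHashOutput: persist the rows in memory under a key.
    static void hashOutput(String key, List<String> rows) {
        cache.put(key, rows);
    }

    // Like tHashInput: read them back without touching the database.
    static List<String> hashInput(String key) {
        return cache.get(key);
    }

    public static void main(String[] args) {
        hashOutput("customers", Arrays.asList("a", "b", "c"));
        // Later subjobs read from memory instead of re-running the SELECT.
        System.out.println(hashInput("customers").size()); // prints 3
    }
}
```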

Anonymous
Not applicable
Author

I already tried tHashOutput/tHashInput, but performance did not change.


Anonymous
Not applicable
Author

I already tried it and it is fast, but with too many components I get a Java error.

I can't connect the same tReplicate to 60 components, so I need to create several tReplicate components and connect each of them.

Anonymous
Not applicable
Author

Hi,

 

    It is not advisable to extract data for 60 tables in a single job. Keep it to 5 to 10 tables per job at most (not a hard and fast rule, but a common recommendation). You can orchestrate these jobs in parallel, and if you need to combine the datasets later, you can do that downstream in another job using the tUnite component.

 

    Yes, this means creating several jobs instead of a single one for the same process, but job readability and maintainability are equally important.
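The orchestration idea above (several small extraction jobs running in parallel, with the results combined afterwards, as tUnite would do) can be sketched in plain Java; the class and method names here are made up for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelJobsSketch {
    // Stands in for one small Talend job reading one table.
    static List<String> extract(String table) {
        return Arrays.asList(table + ":row1", table + ":row2");
    }

    public static void main(String[] args) throws Exception {
        List<String> tables = Arrays.asList("t1", "t2", "t3");
        ExecutorService pool = Executors.newFixedThreadPool(3);
        List<Future<List<String>>> futures = new ArrayList<>();
        for (String t : tables) {
            futures.add(pool.submit(() -> extract(t)));   // jobs run in parallel
        }
        List<String> united = new ArrayList<>();          // the tUnite step
        for (Future<List<String>> f : futures) {
            united.addAll(f.get());
        }
        pool.shutdown();
        System.out.println(united.size()); // 6 rows combined
    }
}
```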

 

Warm Regards,
Nikhil Thampi

Please appreciate our Talend community members by giving Kudos for sharing their time for your query. If your query is answered, please mark the topic as resolved 🙂

David_Beaty
Specialist

Hi,

Agree with @nthampi: there's something wrong with your job structure if you want to do the same thing 60 times.

 

Can you not feed the 60 destination table names in via a flow, use a tFlowToIterate, and then perform the DB output 60 times with a much smaller set of components?
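The tFlowToIterate suggestion amounts to looping over a list of destination table names and running the same parameterised output step once per name, instead of 60 hard-wired branches. A minimal Java sketch (names are hypothetical, not Talend APIs):

```java
import java.util.Arrays;
import java.util.List;

public class IterateSketch {
    // Stands in for the tDBOutput step, parameterised by table name.
    static String writeTo(String table, List<String> rows) {
        String msg = "wrote " + rows.size() + " rows to " + table;
        System.out.println(msg);
        return msg;
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("r1", "r2");
        List<String> destinations = Arrays.asList("tbl_a", "tbl_b", "tbl_c");
        // The tFlowToIterate loop: one generic output, many targets.
        for (String table : destinations) {
            writeTo(table, rows);
        }
    }
}
```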