gpinumalla
Creator

How to load all my sub jobs from source to target

Hi All,

I have 60 sub jobs; these are mappings, and the mappings will not change across environments.

My source connection stays the same, but my targets change. Is there a specific way to load contexts whenever we run the main job that calls the sub jobs?

 

like : 

source --> target1

source --> target2

source --> target3

 

 

Is there any robust way to pass this in a prejob?
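In Talend, environment-specific connections are usually handled with context variables loaded at the start of the parent job (the tContextLoad component reads key=value rows from a file or table). A minimal sketch in plain Java of what that load step does; the keys and values below are made-up examples, not a fixed format:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of what tContextLoad does in a prejob: read key=value rows
// (from a file or a DB table; an in-memory list stands in here) and
// expose them as context variables to every sub job that follows.
public class ContextLoadSketch {

    static Map<String, String> loadContext(List<String> rows) {
        Map<String, String> ctx = new LinkedHashMap<>();
        for (String row : rows) {
            int eq = row.indexOf('=');
            if (eq > 0) {
                ctx.put(row.substring(0, eq).trim(), row.substring(eq + 1).trim());
            }
        }
        return ctx;
    }

    public static void main(String[] args) {
        // Hypothetical per-environment values; in a real job these would
        // come from a context_<env>.properties file or a parameter table.
        List<String> rows = Arrays.asList(
            "target_host=targetdb1.example.com",
            "target_schema=MONTHLY_LOAD");
        Map<String, String> ctx = loadContext(rows);
        System.out.println(ctx.get("target_host"));
    }
}
```

Swapping the file per environment then changes the target connection without touching any of the 60 sub jobs.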

4 Replies
rmartin2
Creator II

Hi,

 

To make it a bit more flexible, you can do it as follows:

 

Subjob 1

tDBInput ==> tHashOutput

Subjob 2

tFixedFlowInput ==> tFlowToIterate ==> tHashInput (don't clear) ==> tMap ==> tDBOutput (with the tFlowToIterate values)

Subjob 3

tHashInput (clear cache) with nothing else

 

 

This only works with "small" tables. You can replace the tFixedFlowInput with any configuration file or database; you just need to put your parameters inside.
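The pattern above amounts to: cache the source rows once in memory, then iterate over a list of target parameters and replay the cached rows into each target. A minimal sketch in plain Java; the row and target values are stand-ins, since real connections depend on your environment:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Cache-once, write-many: the role tHashOutput/tHashInput play.
public class HashReplaySketch {

    static List<String> replay(List<String> cachedRows, List<String> targets) {
        List<String> writes = new ArrayList<>();
        for (String target : targets) {
            for (String row : cachedRows) {
                writes.add(target + ":" + row); // stands in for one INSERT into that target
            }
        }
        return writes;
    }

    public static void main(String[] args) {
        // Subjob 1: read the source table once into memory (the "hash").
        List<String> cachedRows = Arrays.asList("row1", "row2", "row3");
        // Subjob 2: iterate over the targets (tFlowToIterate) and replay the cache.
        List<String> targets = Arrays.asList("targetdb1", "targetdb2", "targetdb3");
        System.out.println(replay(cachedRows, targets).size()); // 3 targets x 3 rows = 9 writes
        // Subjob 3 would clear the cache; here it is simply garbage-collected.
    }
}
```

The "small tables only" caveat follows directly: the whole source table has to fit in memory for the duration of the run.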

 

Hope it helps.

 

Sincerely,

gpinumalla
Creator
Author

Hi,

I am not able to find tHash because I am using Open Studio. To explain clearly: I have 60 tables to load, and I have built all the mappings for these 60 tables as 60 sub jobs.

Now I have to truncate and load the 60 tables monthly into different databases, but the table structure and mapping stay the same for all 60 tables.

 

sourcedb --> targetdb1

sourcedb --> targetdb2

sourcedb --> targetdb3

 

My source DB is the same but the target DB changes. What is the best way to design this?

rmartin2
Creator II

You can use the tHash components; they're just hidden.

Go to File => Project properties => Palette and add the "Technical" folder. Click OK to apply.

 

Then, if the structure of the tMap is always the same, you can use the steps above to do it. It's all a matter of parameters from the tFixedFlowInput, so you can map all tables (use inline tables and add a line for each of your 60 tables).

A more appropriate approach would be to load it from a .csv file. Since you have 60 input and output tables to configure, that is pretty heavy for a tFixedFlowInput.
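A parameter file for the 60 tables could be as simple as one CSV line per table. The sketch below parses such a file; the column names are assumptions for illustration, not a fixed Talend format:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Parse a CSV of table parameters (source_table,target_table,target_db)
// instead of hard-coding 60 inline rows in a tFixedFlowInput.
public class TableParamsSketch {

    static List<String[]> parseParams(List<String> csvLines) {
        List<String[]> params = new ArrayList<>();
        for (String line : csvLines.subList(1, csvLines.size())) { // skip the header row
            params.add(line.split(",", -1));
        }
        return params;
    }

    public static void main(String[] args) {
        // Two example rows; a real file would have one line per table (60 lines).
        List<String> csv = Arrays.asList(
            "source_table,target_table,target_db",
            "CUSTOMERS,CUSTOMERS,targetdb1",
            "ORDERS,ORDERS,targetdb1");
        List<String[]> params = parseParams(csv);
        System.out.println(params.size() + " " + params.get(0)[2]);
    }
}
```

Editing the CSV then adds or redirects a table load without opening the job at all.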

gpinumalla
Creator
Author

Hi,

I am using tOracleBulkExec, which uses SQL*Loader: it loads the tDBInput data into a CSV and then into the target DB. The problem is that if one job fails, everything else fails.

Is there any robust way to run all the jobs by calling them from a parent job?


@mhodent wrote:

You can use the tHash components; they're just hidden.

Go to File => Project properties => Palette and add the "Technical" folder. Click OK to apply.

Then, if the structure of the tMap is always the same, you can use the steps above to do it. It's all a matter of parameters from the tFixedFlowInput, so you can map all tables (use inline tables and add a line for each of your 60 tables).

A more appropriate approach would be to load it from a .csv file. Since you have 60 input and output tables to configure, that is pretty heavy for a tFixedFlowInput.



1. What about using a parent job with a context load from a database, and calling all the sub jobs with connection parameters from the context load?
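That approach (context load in the parent job, then one tRunJob per sub job) can also give the failure isolation asked about above: wrap each sub-job call in its own try/catch so a single failure is recorded but the remaining loads still run. A sketch in plain Java; the child-job call is a stub, and the job names are hypothetical:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Parent-job sketch: attempt every sub job even if some fail, then report.
public class ParentJobSketch {

    // Stand-in for a tRunJob call; a real child job would load one table.
    static void runSubJob(String name) {
        if (name.contains("bad")) {
            throw new RuntimeException("load failed for " + name);
        }
    }

    static List<String> runAll(List<String> subJobs) {
        List<String> failed = new ArrayList<>();
        for (String job : subJobs) {
            try {
                runSubJob(job);   // tRunJob equivalent
            } catch (RuntimeException e) {
                failed.add(job);  // record the failure and keep going
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        List<String> subJobs = Arrays.asList("load_customers", "load_bad_orders", "load_items");
        List<String> failed = runAll(subJobs);
        System.out.println(failed.size() + " failed: " + failed);
    }
}
```

In the Studio, the same effect comes from unchecking "Die on child error" on each tRunJob (or using OnComponentOk/OnComponentError links), so the parent can finish the loop and report which loads need a rerun.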