
Anonymous
Not applicable

[resolved] Best practice to build a job chain and to deploy changes

Hi,
I've read through the documentation and plenty of forum posts now, but I can't form a clear opinion on the following:
What is the best/recommended way to build a job chain?
My idea was to just put all my jobs into one big "masterJob" and run this from the Job Conductor. Is that the preferred method, or should I connect the individual jobs in an execution plan? If the latter is the case, how can I save the execution plan outside of the TAC? The TAC is controlled by another department, and I'm not so sure about its "stability" 🙂
Second question, closely related to the first: if I take the masterJob approach described above and change one of my "subjobs", I assume I would also have to re-deploy the "masterJob" to get the changes into production? How do you move your developments to production: via a pre-compiled zip file, or via different SVN tags/branches? If via zip file, what would you do if there are parallel developments in different subjobs and you want to get only one of them into production? The masterJob export would contain both of them in that case.
Thank you very much for your help and best regards!
Markus
13 Replies
Anonymous
Not applicable
Author

Hi again,
you are right of course; my image containing the tParallelize component was not correct. I've updated it with a corrected version now.
Thanks again!
Markus
Anonymous
Not applicable
Author

Execution Plans:
In addition to the problems stated above, I had difficulty with restartability when trying to use execution plans. Recovery checkpoints seem to work just fine until I add parallel executions. Couple that with my ever-changing and growing job list, and it was just unusable.
Third Party Tool:
Using third-party tools is not a great solution either, because then I cannot use my TAC to deploy all my jobs. When I only had to deploy a job or two, simply uploading it manually was fine. When I had an update that stretched across several jobs, exporting/uploading all of those jobs was not a great solution.
I simply used a tRunJob and a trigger-check process to build my "master jobs".
The MasterJob controls the execution order.  This job calls the same Dynamic job repeatedly, passing in a new job_name with each call.
Here you can see the job name passed in as a string from the master job.
This calls the Dynamic job.
The Dynamic job it calls simply executes that job from its dynamic list.

The components surrounding the tRunJob(Dynamic) update a control table. The table is updated with an Error, Running, or Complete status, allowing restart at the point of failure without any user intervention, or at specific restart points by updating the status code to "Null" or "Error".
When a job completes, it simply creates a TEMP_FILE; dependent jobs wait for that file to be created before they begin execution.
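The control-table restart pattern described above can be sketched roughly as follows. This is only an illustration, not the poster's actual Talend implementation: the table name, column names, and job names are all made up, and the `run_job` callable stands in for the tRunJob(Dynamic) call.

```python
import sqlite3

# Hypothetical job list -- in the real setup the master job passes each
# job_name to a Dynamic job via tRunJob.
JOBS = ["load_customers", "load_orders", "build_reports"]

def run_chain(conn, run_job):
    """Run JOBS in order, tracking status in a control table.

    Jobs already marked Complete are skipped, so rerunning the chain
    after a failure resumes at the point of failure. Resetting a row's
    status to 'Error' (or NULL) forces that job to run again.
    """
    cur = conn.cursor()
    cur.execute(
        "CREATE TABLE IF NOT EXISTS job_control "
        "(job_name TEXT PRIMARY KEY, status TEXT)"
    )
    for name in JOBS:
        cur.execute("SELECT status FROM job_control WHERE job_name = ?", (name,))
        row = cur.fetchone()
        if row and row[0] == "Complete":
            continue  # finished in a previous run: restart skips it
        cur.execute("INSERT OR REPLACE INTO job_control VALUES (?, 'Running')", (name,))
        conn.commit()
        try:
            run_job(name)  # stands in for tRunJob(Dynamic)
        except Exception:
            cur.execute("UPDATE job_control SET status = 'Error' WHERE job_name = ?", (name,))
            conn.commit()
            raise
        cur.execute("UPDATE job_control SET status = 'Complete' WHERE job_name = ?", (name,))
        conn.commit()
```

The same table also gives operators a manual override: flipping one row back to "Error" reruns just that subjob on the next invocation.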
Anonymous
Not applicable
Author

You may want to check out the details of the TAC metaservlet API.  They are in Appendix B of the TAC User's Guide and there is a detailed example of using the TAC API in the Knowledge Base.  

Using the TAC API will help with some of the issues you raise. The TAC API will allow you to pass context variables to the child Jobs.
Using the TAC API will allow you to run the child Jobs on other Job Servers if you wish.
The TAC API can invoke jobs either synchronously or asynchronously.  When run asynchronously it returns a handle via the execRequestId that you can use to poll the status of your job.
This allows you to deploy your jobs independently of each other, so it decouples the SDLC of the child jobs from each other and from the parent job. A new child job then requires only the deployment of one job, not the recompilation of all the others into an uber job.
You can still use the tParallel approach with synchronous child job invocation if you wish. But if you are invoking the child jobs asynchronously, just invoke them in order; there is no need for parallel threading.
Because you are invoking your child jobs via the TAC (i.e. just as if you were running them normally) they have access to things like checkpoint recovery.
The execRequestId will give you fine grained control for recovery if you don't like the checkpoint feature.
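The request/poll flow described above can be sketched as follows. Treat this purely as a sketch to check against Appendix B of the TAC User's Guide: the MetaServlet takes a base64-encoded JSON command appended to its URL, but the action names (`runTask`, `getTaskExecutionStatus`), field names, host, and credentials used here are assumptions for your TAC version to verify.

```python
import base64
import json

def metaservlet_url(tac_base, payload):
    """Encode a MetaServlet command as base64 JSON appended to the URL.

    tac_base is the TAC application root, e.g.
    http://tac-host:8080/org.talend.administrator (placeholder host).
    """
    encoded = base64.b64encode(json.dumps(payload).encode("utf-8")).decode("ascii")
    return f"{tac_base}/metaServlet?{encoded}"

# Asynchronous launch of a child job (all values are placeholders):
run_request = {
    "actionName": "runTask",          # assumed action name -- see Appendix B
    "authUser": "admin@example.com",  # placeholder credentials
    "authPass": "secret",
    "taskId": 42,                     # the child job's task id in the TAC
    "mode": "asynchronous",           # async mode returns an execRequestId
}
url = metaservlet_url("http://tac-host:8080/org.talend.administrator", run_request)

# The async response carries an execRequestId; poll it until the task
# finishes (again, the action and field names are assumptions):
poll_request = {
    "actionName": "getTaskExecutionStatus",
    "authUser": "admin@example.com",
    "authPass": "secret",
    "taskId": 42,
    "execRequestId": "<id from the runTask response>",
}
```

Because the parent only ever sends HTTP requests, each child job remains a separately deployed TAC task, which is exactly what decouples their release cycles.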
Anonymous
Not applicable
Author

Hi eost,
this is exactly what the tRunTask component does. Unfortunately, Talend does not provide such a component, so I had to create it.