Dhanoop
Contributor II

Bulk DB update

Hi Team

I have a requirement to fetch data from multiple DB tables where transfer_flag = 'N', write each result set to its own file, and then FTP the files to a particular server.
Once the transfer is done, I need to update transfer_flag to 'Y' for all of these records. I was planning to use tDBRow with a query that sets transfer_flag to 'Y' wherever it is currently 'N'.
However, this may cause a problem: records newly inserted or updated while the job is running would also have their flag set to 'Y', even though they were never transferred via FTP.

Please let me know how to handle this situation. I would rather not use the tDBOutput component, as updating row by row on key columns would take too long.

Thanks
DBS

1 Reply
gvarun_arthasolutions
Partner - Contributor III

Hello,

To handle this situation and ensure that only the transferred records are updated, capture the unique identifiers (e.g., primary keys) of the records during the extraction phase and use those specific IDs in your update query. This avoids the race condition where rows inserted concurrently with transfer_flag = 'N' are accidentally flagged as 'Y'.
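The difference between a blanket flag update and an ID-snapshot update can be simulated outside Talend. The following self-contained Python/SQLite sketch (the table and column names are illustrative, not from your schema) shows how a row inserted mid-run keeps transfer_flag = 'N' when the update is restricted to the snapshotted IDs:

```python
import sqlite3

# In-memory stand-in for the source database (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, payload TEXT, transfer_flag TEXT)")
conn.executemany(
    "INSERT INTO orders (id, payload, transfer_flag) VALUES (?, ?, 'N')",
    [(1, "a"), (2, "b"), (3, "c")],
)

# Steps 1-2: snapshot the IDs of the rows being processed
# (the role tHashOutput plays in the Talend job).
snapshot = [row[0] for row in
            conn.execute("SELECT id FROM orders WHERE transfer_flag = 'N'")]

# ... files are written and FTPed here; meanwhile a new row arrives concurrently.
conn.execute("INSERT INTO orders (id, payload, transfer_flag) VALUES (4, 'd', 'N')")

# Final step: update only the snapshotted IDs, not everything flagged 'N'.
placeholders = ",".join("?" * len(snapshot))
conn.execute(f"UPDATE orders SET transfer_flag = 'Y' WHERE id IN ({placeholders})",
             snapshot)

# The concurrently inserted row keeps its 'N' flag for the next run.
remaining = [row[0] for row in
             conn.execute("SELECT id FROM orders WHERE transfer_flag = 'N'")]
print(remaining)  # [4]
```

A blanket `UPDATE ... WHERE transfer_flag = 'N'` at the same point would have flagged row 4 as transferred even though it was never sent.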
Here is a recommended job design pattern:

Talend Job Design with ID Management
1. Extract Specific Records:
- Use a tDBInput component to select the records with transfer_flag = 'N'. Crucially, include the primary key(s) or unique ID(s) of these records along with any other data columns you need.
2. Store IDs for Update:
- Pipe the output of tDBInput to a component such as tHashOutput to hold the processed record IDs in memory (or write them to a temporary file). This list is a snapshot of the exact records handled by the current job run.
3. Process and FTP Data:
- Replicate the data flow from tDBInput (via a tReplicate component) to your file output components (tFileOutputDelimited, etc.) and then to your tFTPPut components. This part of the job performs the actual data transfer.
4. On Successful Transfer, Update Flags:
- Use an OnSubjobOk trigger from the FTP subjob so that the update runs only if the transfer succeeds.
- Read the stored IDs back with a tHashInput linked to the tHashOutput from step 2.
- Pipe the IDs to a tDBRow component.
5. Execute a Precise Update:
- In tDBRow, write a parameterized SQL UPDATE statement that restricts the rows with a WHERE id IN (...) clause (or similar condition) built from the specific IDs in the input flow.
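For step 5, the IDs are typically collapsed into one delimited string before being handed to tDBRow (in Talend this is often done with tAggregateRow or a tJavaRow writing to globalMap). A minimal Python sketch of the statement assembly, assuming an illustrative `orders` table and numeric IDs, with chunking to stay under typical database limits on IN-list size (e.g. Oracle's 1000-element cap):

```python
# Illustrative sketch: turn a snapshot of record IDs into the UPDATE
# statement(s) a tDBRow component would execute. IDs are chunked so no
# single IN list exceeds the database's expression-list limit.

def build_update_statements(ids, chunk_size=1000):
    """Yield one UPDATE per chunk of IDs, each with a bounded IN list."""
    for start in range(0, len(ids), chunk_size):
        chunk = ids[start:start + chunk_size]
        id_list = ",".join(str(i) for i in chunk)
        yield (f"UPDATE orders SET transfer_flag = 'Y' "
               f"WHERE id IN ({id_list})")

statements = list(build_update_statements([101, 102, 103]))
print(statements[0])
# UPDATE orders SET transfer_flag = 'Y' WHERE id IN (101,102,103)
```

If the IDs come from user-facing input rather than your own key column, prefer bound parameters (as tDBRow supports with prepared statements) over string concatenation to avoid SQL injection.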

Thanks