Anonymous
Not applicable

Read and write at 0.85 rows/s in the same DB

Hi team,
I am trying to populate data from one table into two other tables in the same DB, using a tMap to split the data. Reading and writing takes a very long time: 0.85 rows/s reading from SQL Server.
I have optimized SQL Server with indexes, but the speed is still very slow.
Thanks
Vikram
9 Replies
Anonymous
Not applicable
Author

Hello,
Indeed, the performance you are experiencing is strangely bad; maybe you should check how you perform the inserts.
Did you use "insert or update" or "update or insert"?
Make sure the strategy you apply is consistent with your needs, because it can dramatically change performance.
Moreover, you are reading and writing on the same DB; in that case you should seriously consider an ELT approach instead of ETL (yes, you can do it with Talend: use the ELT input tables and a tELTMap).
Hope it helps.
Bye
Francesco
Anonymous
Not applicable
Author

Hello Francesco,
I am using "update or insert" as of now; I tried "insert or update" also, but it is slower.
The ELT components for MS SQL Server are not available in the palette (see attached screenshot).
I am not sure why this kind of performance issue is cropping up.
Thanks
Vikram
Anonymous
Not applicable
Author

ELT is a fancy way of saying CTAS or INSERT INTO ... SELECT. You can use any of the t<DB>Row components to perform ELT queries.
Anonymous
Not applicable
Author

Hello again,
It's true you can do some ELT simply by sending out a query, but then a tool like TOS is not helping you much with your task.
Still, I am pretty sure MS SQL has ELT components; see attached the ones you probably need.
Second:
"insert or update" and "update or insert" make total sense when you need to "delta update" a table.
If this is your scenario then you probably need this kind of strategy, but be aware it may require multiple queries per record.
If instead you can manage a truncate/insert or delete/insert, you are likely to get better performance.
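As a rough illustration of that per-record cost, here is a minimal sketch using Python's sqlite3 as a stand-in for SQL Server (the table, columns, and data are invented): an "update or insert" strategy can issue up to two statements per incoming row, which is where the row-by-row overhead comes from.

```python
import sqlite3

# sqlite3 standing in for SQL Server; schema and rows are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
con.executemany("INSERT INTO target VALUES (?, ?)", [(1, "old"), (2, "old")])

incoming = [(2, "new"), (3, "new")]  # rows arriving from the source
statements = 0
for rid, val in incoming:
    # "update or insert": try the UPDATE first...
    cur = con.execute("UPDATE target SET val = ? WHERE id = ?", (val, rid))
    statements += 1
    if cur.rowcount == 0:
        # ...and fall back to an INSERT when no row matched.
        con.execute("INSERT INTO target (id, val) VALUES (?, ?)", (rid, val))
        statements += 1
con.commit()

print(statements)  # 3 statements for only 2 incoming rows
print(sorted(con.execute("SELECT id, val FROM target")))
# [(1, 'old'), (2, 'new'), (3, 'new')]
```

A set-based delete/insert does the same work in two statements total, regardless of row count.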

Anonymous
Not applicable
Author

Hello saburo,
I am using TOS 3.2.1 and I am not finding that option; maybe it's not present in this version, and I have to use 3.2.1. Please find attached a screenshot.
I perform update/insert.

Is there any alternative to increase the speed? I am not sure why this is so slow.
Anonymous
Not applicable
Author

What could you try? I would:
1) Load the new records into a staging table (with a truncate/insert).
2) If all the fields must be updated in existing records, delete from the target table all the keys present in the staging table (you can use a tMSSqlRow component to issue the command).
3) Do an insert ... select (again with a tMSSqlRow) from staging to target.
You also mentioned you are reading from the same database you are writing to. If so, step 1 is not mandatory; the staging table can be replaced with a simple query.
Example:

Source table (or view, or query): SOURCE
Target table: TARGET
Primary key: ID
1) a tMSSqlRow that sends the following command: delete from TARGET where ID in (select ID from SOURCE)
2) a tMSSqlRow with the command: insert into TARGET select * from SOURCE (replace the * with the appropriate field list if source and target do not share the same schema)
Hope it helps
Ciao
Francesco
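The two commands above can be tried end-to-end in miniature. This is a hedged sketch with Python's sqlite3 in place of SQL Server; SOURCE, TARGET, and ID follow the example, while the VAL column and the sample rows are invented:

```python
import sqlite3

# sqlite3 standing in for SQL Server; VAL and the data are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SOURCE (ID INTEGER PRIMARY KEY, VAL TEXT)")
con.execute("CREATE TABLE TARGET (ID INTEGER PRIMARY KEY, VAL TEXT)")
con.executemany("INSERT INTO SOURCE VALUES (?, ?)", [(2, "new"), (3, "new")])
con.executemany("INSERT INTO TARGET VALUES (?, ?)", [(1, "old"), (2, "old")])

# 1) drop from TARGET the keys that are about to be re-inserted
con.execute("DELETE FROM TARGET WHERE ID IN (SELECT ID FROM SOURCE)")
# 2) copy everything over in one set-based statement
con.execute("INSERT INTO TARGET SELECT * FROM SOURCE")
con.commit()

print(sorted(con.execute("SELECT ID, VAL FROM TARGET")))
# [(1, 'old'), (2, 'new'), (3, 'new')]
```

Row 2 is refreshed, row 3 is added, and untouched row 1 survives, in exactly two statements.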
_AnonymousUser
Specialist III

Hello Francesco,
I wrote a procedure to do the same and tried calling that procedure from Jasper.
But unfortunately I can't delete from the target table.
Now I am planning to write an update/insert procedure and check its performance.

Thanks
Vikram
Anonymous
Not applicable
Author

Also consider that clustered indexes (i.e. a PK) might significantly slow down the insert process, especially if they are made up of several fields.
Sometimes you can get a better result by dropping and re-creating them (but if it is a PK, this might not always be possible).
On the other hand, with no index, updates can be slower.
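To illustrate the drop-and-recreate idea (which, as noted, may not be possible for a PK), here is a sketch with sqlite3 standing in for SQL Server; the table name, index name, and data are invented:

```python
import sqlite3

# sqlite3 standing in for SQL Server; names and data are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (id INTEGER, a TEXT, b TEXT)")
con.execute("CREATE INDEX ix_target_ab ON target (a, b)")  # multi-column index

rows = [(i, f"a{i}", f"b{i}") for i in range(10_000)]

# Drop the secondary index before the bulk insert, so it is not
# maintained row by row during the load...
con.execute("DROP INDEX ix_target_ab")
con.executemany("INSERT INTO target VALUES (?, ?, ?)", rows)
# ...then rebuild it once, in a single pass over the loaded data.
con.execute("CREATE INDEX ix_target_ab ON target (a, b)")
con.commit()

print(con.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # 10000
```

The rebuilt index is identical to the original; only the per-row maintenance during the load is avoided.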
_AnonymousUser
Specialist III

Hello Francesco,
Thanks for the suggestion; I'll try dropping and recreating the indexes on the target.
Is there any option to perform pre-SQL or post-SQL?

And will it help if I add all the columns as a non-clustered index on the source side?

Thanks
Vikram