Anonymous
Not applicable

Very low throughput for tOracleInput component

Hi,

 

I have a very simple query selecting a set of columns from a table using a tOracleInput component, followed by a tMap and then a tOracleOutput. There is a small lookup table alongside the input component. The throughput of tOracleInput is extremely low (around 3 rows/sec), which is not acceptable for loading 650-700k records.

 

I am not using a tOracleConnection component. I also noticed that the job first reads all the rows from the lookup table and only then starts reading from the main input component.

 

Can anybody help me out with this issue? It needs urgent resolution on our side, so an early response would be highly appreciated!

10 Replies
Anonymous
Not applicable
Author

Can you please send screenshots?

How are you handling the commits?

Anonymous
Not applicable
Author

Attached is an image of the job structure.

 

I tried enabling the cursor with a size of 20000 rows, but that did not help.

The lookup is set to the 'Load once' model.

I have not explicitly handled commits in the job.
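
(For context: the cursor size on tOracleInput corresponds to the JDBC fetch size. Below is a minimal sketch of the equivalent call in plain JDBC, assuming the generated job uses the standard Oracle driver; the connection details and column names are placeholders, not taken from this job:)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FetchSizeSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; replace with your own.
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
        PreparedStatement ps = conn.prepareStatement(
                "SELECT a1, a2, a3 FROM source");
        // Oracle's JDBC driver fetches only 10 rows per network round trip
        // by default; a larger fetch size cuts down the round trips.
        ps.setFetchSize(20000);
        try (ResultSet rs = ps.executeQuery()) {
            long count = 0;
            while (rs.next()) {
                count++;
            }
            System.out.println("Fetched " + count + " rows");
        }
        conn.close();
    }
}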


Talend.png
Anonymous
Not applicable
Author

Are you using tSCD or tDBOutput?

The picture you sent does not show the connection to the last component.

Do you have any key constraints on the table?

We can't help if you don't provide enough info.


Anonymous
Not applicable
Author

I am using a tOracleSCD component. The last component is a tOracleRow; I did not include it in the picture because I am certain it is not contributing to the issue. The delay is in the fetch/read of the data from the source into tMap, and subsequently into the SCD component. When I use a filter like rownum <= 100, the job completes quickly, with only a small amount of time spent committing the operation in the final tOracleRow.
The table has one unique index on the surrogate key. Other than that, it has a few NOT NULL constraints.
I am using Talend Data Integration 6.2.1. All of the components mentioned are at their default settings; I have not explicitly set any additional criteria.
Also, please let me know what other information you need.
Anonymous
Not applicable
Author

Thanks for the clarification.

I think the problem might be the SQL query you are using to fetch the data.
I suggest you test the performance of the same SQL query with another tool (Toad, ...).
Anonymous
Not applicable
Author

Hi,

The SQL query, when tested in DBeaver, takes only around 5-7 seconds. Hope this clarifies things.
Anonymous
Not applicable
Author

Sorry, I don't see why it should be slow; we have used it for years to read from our Oracle database, from a table of 150,000,000 records per day.
Anonymous
Not applicable
Author

Hi @NinaD,

 

Did you get a chance to look at the explain plan of the input query you are using? Do the costs look normal? Also, are the machine where you used DBeaver and the machine where Talend Studio is running the same?

Please also check whether there are any throughput constraints on the network side.
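
(If it helps, generating the plan in Oracle is standard EXPLAIN PLAN / DBMS_XPLAN usage; here is a hedged sketch of running it through plain JDBC, with placeholder connection details and the column list abbreviated:)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShowPlanSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; replace with your own.
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
        try (Statement stmt = conn.createStatement()) {
            // Populate PLAN_TABLE with the plan of the input query.
            stmt.execute("EXPLAIN PLAN FOR "
                    + "SELECT a1, a2, a3 FROM source "
                    + "WHERE source.last_update_dtm > (SELECT MAX(dtm) FROM target)");
            // Print the formatted plan, costs included.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY)")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
        conn.close();
    }
}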

 

Warm Regards,
Nikhil Thampi

Please appreciate our Talend community members by giving Kudos for sharing their time for your query. If your query is answered, please mark the topic as resolved 🙂

Anonymous
Not applicable
Author

Hi,

Yes, the two are running on the same machine. The query goes like:

SELECT a1, a2, a3, ..., aN
FROM source
WHERE source.last_update_dtm > (SELECT MAX(dtm) FROM target);

The explain plan results are attached. 

Hope this helps.
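
(One diagnostic worth trying for this query shape: run the watermark subquery on its own and bind its result into the main extract, which separates the cost of MAX(dtm) from the cost of the outer scan. A hypothetical JDBC sketch; only source, target, last_update_dtm, and dtm come from the query above, everything else is a placeholder:)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Timestamp;

public class WatermarkSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; replace with your own.
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
        Timestamp watermark = null;
        // Step 1: evaluate the watermark subquery by itself (and time it).
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT MAX(dtm) FROM target")) {
            if (rs.next()) {
                watermark = rs.getTimestamp(1);
            }
        }
        // Step 2: bind the precomputed watermark into the main extract.
        PreparedStatement ps = conn.prepareStatement(
                "SELECT a1, a2, a3 FROM source WHERE last_update_dtm > ?");
        ps.setTimestamp(1, watermark);
        ps.setFetchSize(20000); // larger fetch size, as discussed above
        long count = 0;
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                count++;
            }
        }
        System.out.println("Rows since watermark: " + count);
        conn.close();
    }
}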


Talend2.png