mike_garcia
Luminary Alumni

Package Size varies from one table to another - Oracle

Hello,

I am fetching data from different tables residing in an Oracle database, using the 64-bit Oracle 11g ODBC driver. Two tables of approximately the same size in number of records (about 1 million each) are read at completely different rates (records per second).

  • One table is read at about 800 records per second and takes about 24 minutes to load.
  • The second table is read at 5-7 thousand records per second and only takes 3 minutes!


Same database, same connection string, similar query (Select * From Table). What could be causing this problem? Is there some extra configuration to be made?

I am experiencing the same behavior in 3 or 4 different tables; all the other tables are read much faster, some even at rates of 10K records per second.
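For reference, the loads are essentially of this form; the DSN, schema and table names below are placeholders, not the real ones:

    ODBC CONNECT TO [ORA11G_DSN];

    // Fast table: 5-7 thousand records per second, about 3 minutes
    FastTable:
    SQL SELECT * FROM MYSCHEMA.TABLE_A;

    // Slow table: about 800 records per second, roughly 24 minutes
    SlowTable:
    SQL SELECT * FROM MYSCHEMA.TABLE_B;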

Any help is greatly appreciated.

Mike.

Miguel García
Qlik Expert, Author and Trainer

3 Replies
jonathandienst
Partner - Champion III

Mike

There are quite a few things that can affect the load rate. I am no expert on Oracle, but these are some of the things that generally affect relative database table read performance:

  • Select * loads all fields in the table. The 800 records per second table might have a row size that is substantially larger than the second. Check the record size for each table in the database table definitions (a quick way to do this is sketched after this list).
  • Indexing and keys. If the table is a virtual table or a view, the indexing on the underlying tables has a huge effect on performance, as do any joins within the view.
  • Fragmentation in the database. If one table is updated more frequently than the other, then that table could become fragmented, affecting read performance.
  • Concurrency. If other users are on the database, they might be affecting performance through locking and transactions taking place on the slower performing table.
  • Calculated fields and read triggers also cause extra overhead on reading.

There may be others, but these are the ones that come immediately to mind.
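On the first point, one quick way to compare the two tables is to look at their column definitions and average row length in the Oracle data dictionary. A minimal sketch, runnable from the Qlik script over the same connection, where MYSCHEMA, TABLE_A and TABLE_B stand in for the real schema and table names:

    // Compare column types and declared sizes of the two tables
    ColumnInfo:
    SQL SELECT table_name, column_name, data_type, data_length
    FROM all_tab_columns
    WHERE owner = 'MYSCHEMA'
      AND table_name IN ('TABLE_A', 'TABLE_B')
    ORDER BY table_name, column_id;

    // Row count and average row length from optimizer statistics
    // (only as current as the last statistics gathering)
    RowStats:
    SQL SELECT table_name, num_rows, avg_row_len
    FROM all_tables
    WHERE owner = 'MYSCHEMA'
      AND table_name IN ('TABLE_A', 'TABLE_B');

A LONG or LOB column in the first result, or a big gap in avg_row_len in the second, would point straight at the row-size explanation.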

Jonathan

Logic will get you from A to B. Imagination will take you everywhere. - A. Einstein
mike_garcia
Luminary Alumni
Author

Thank you, Jonathan. I will look into it.

Regards,

Mike.

Miguel García
Qlik Expert, Author and Trainer
mike_garcia
Luminary Alumni
Author

I ran some tests, removing some fields to reduce the record size, but the behavior persisted. I then removed even more fields and added them back gradually, and finally noticed that ONE specific field was causing the poor performance: a BLOB field.
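For illustration, one way to leave the BLOB column out is to list the remaining fields explicitly in the SQL SELECT; the field and table names below are made up:

    // All columns except the BLOB field (names are made up for the example)
    SlowTable:
    SQL SELECT ID, NAME, STATUS, CREATED_DATE
    FROM MYSCHEMA.TABLE_B;

If the BLOB is actually needed it can be loaded separately, but leaving it out of the routine load avoids the extra per-row work that LOB columns typically cause over ODBC.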

Regards,

Mike.

Miguel García
Qlik Expert, Author and Trainer