markp201
Creator III

Unexpected end of data with Oracle ODBC

Most of the time the script runs fine, but sometimes we get the following error:

QVX_UNEXPECTED_END_OF_DATA: SQL##f - SqlState: 08S01, ErrorCode: 3113, ErrorMsg: [Oracle][ODBC][Ora]ORA-03113: end-of-file on communication channel

The script pulls about 4M rows, and there is another document load running against the same Oracle DB at the same time.

The SQL is embedded in the load script (i.e. no views). It's somewhat complex, with 8 SQL joins, and the rest of the script uses a Qlik left join.
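For context, the script is shaped roughly like this (all table and field names below are made up for illustration; the real query has 8 joins):

// First data pull - one embedded SELECT; this is the statement that crashes
BigPull:
LOAD *;
SQL SELECT f.order_id, f.amount, c.customer_name, p.product_name
FROM order_fact f
JOIN customers c ON c.customer_id = f.customer_id
JOIN products  p ON p.product_id  = f.product_id
/* ...six more joins... */ ;

// Later in the script, a Qlik-side join rather than a SQL join
LEFT JOIN (BigPull)
LOAD order_id, some_extra_attribute
RESIDENT SomeOtherTable;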

When it runs normally, it takes 1 1/4 - 2 hours. When it crashes, it fails after only about 30 minutes, and the crash consistently occurs on the first data pull - the one with 8 joins.

It's very difficult to find the root cause because Qlik does not return any detail; it's as if the DB connection is being dropped. The connection uses the default timeout. I've read one suggested resolution on the community involving reducing how many cores are active - not the most encouraging news, since fewer cores would hurt performance.

I'm researching the possibility of loading the source tables into QVDs and then building a star schema for the dashboard to load.
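A minimal sketch of that extract layer (the connection name, paths, and table names below are placeholders):

// Extract layer: pull each source table raw and persist it
ODBC CONNECT TO MyOracleDSN;   // placeholder DSN

Orders:
LOAD *;
SQL SELECT * FROM orders;      // no WHERE/GROUP/ORDER - just stream the rows

STORE Orders INTO orders.qvd (qvd);
DROP TABLE Orders;

The dashboard document would then load from the QVDs and build the star schema without ever touching Oracle directly.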


2 Replies
Not applicable

Hi Mark,

I'm running into a similar problem. Were you able to solve this?

markp201
Creator III
Author

The issue was that the backend was not returning rows because it was spending too much time searching (WHERE, ORDER BY, and GROUP BY clauses were the main culprits).

There were a few ways to resolve this:

1. Add the appropriate indexes to the backend tables

2. Read all rows (no WHERE, ORDER BY, or GROUP BY) into a temp table. This was surprisingly fast, since the backend doesn't have to search the table, though it can be time-consuming if you're loading numerous and/or large fields. Then load from the temp table and apply the group/where/order on the Qlik side (see the sketch after this list). Note that in Qlik script, ORDER BY is only valid on a resident load, not when loading from a file such as a QVD.

3. Read all rows into a QVD instead of a temp table. QVD loads are reportedly faster than resident loads.

4. Parallel loads. Schedule multiple scripts to load data into QVDs in parallel, then have a final script build your dimensional model (this was our solution).

5. Use OLE DB in place of ODBC. OLE DB providers are typically dedicated, vendor-specific drivers, but may require licensing.

6. Extend the ODBC timeout parameter - not a 100% solution, but it deserves a mention.
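As a rough sketch of options 2 and 3 combined (table, field, and QVD names are hypothetical): pull the table with no WHERE/ORDER/GROUP so Oracle just streams rows, then do the filtering and aggregation on the Qlik side:

// Step 1: raw pull - no WHERE/ORDER/GROUP, so the DB does no searching
RawOrders:
LOAD *;
SQL SELECT * FROM orders;

STORE RawOrders INTO raw_orders.qvd (qvd);   // option 3: persist the raw pull

// Step 2: filter and aggregate in Qlik script instead of in Oracle
Orders:
LOAD customer_id,
     Sum(amount) AS total_amount
RESIDENT RawOrders
WHERE Year(order_date) >= 2015
GROUP BY customer_id;
// ORDER BY, if you need it, is only valid on RESIDENT loads,
// not on loads from a QVD or other file.

DROP TABLE RawOrders;

For option 5, the connect statement changes from ODBC CONNECT TO ... to an OLE DB connection, e.g. OLEDB CONNECT TO [Provider=OraOLEDB.Oracle;Data Source=...;] using Oracle's OLE DB provider (the exact string depends on your driver install).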

Happy Hunting.