We just upgraded to Qlik Sense September 2018 and we're noticing an issue in the load script. For example, there are 10M rows in table A, 5M in table B and 2M in table C. Each table has a primary key (id), and A has foreign keys to B and C (b_id and c_id).
SELECT a.b_id, a.c_id
FROM A
JOIN B ON B.id = A.b_id
JOIN C ON C.id = A.c_id
returns fewer than 10M rows. I don't know how the script determines when to stop loading.
SELECT a.id, a.b_id, a.c_id
FROM A
JOIN B ON B.id = A.b_id
JOIN C ON C.id = A.c_id
returns all 10M rows.
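For reference, the same joins can be checked directly against the database (outside Qlik Sense) to see how many rows they really produce and how many rows of A the inner joins would drop; a sketch using the tables and columns above:

-- How many rows does the join itself return when the database runs it?
SELECT COUNT(*) AS join_rows
FROM A
JOIN B ON B.id = A.b_id
JOIN C ON C.id = A.c_id;

-- Rows of A that the inner joins drop (NULL or unmatched foreign keys)
SELECT COUNT(*) AS dropped_rows
FROM A
LEFT JOIN B ON B.id = A.b_id
LEFT JOIN C ON C.id = A.c_id
WHERE B.id IS NULL OR C.id IS NULL;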
Since it's a SELECT statement, it's not Qlik Sense that processes it. Qlik Sense passes the SQL statement to the OLE DB or ODBC driver, and that driver is responsible for handing it to the database server, which executes it. The database server returns the resulting data to the driver, and the driver then passes the results to Qlik Sense.
Are you sure this did not happen before you updated Qlik Sense? And are you sure the only change in the environment is the Qlik Sense update? If you're using an ODBC connection, have you tried with an OLE DB connection?
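Switching drivers is only a matter of pointing the load script at a different data connection; a minimal sketch ('MyOdbcConn' and 'MyOledbConn' are hypothetical connection names created in the hub/QMC):

// Current connection (hypothetical ODBC data connection)
LIB CONNECT TO 'MyOdbcConn';
// ... same SQL SELECT as above ...

// To test with OLE DB instead, only the connection line changes:
LIB CONNECT TO 'MyOledbConn';
// ... same SQL SELECT as above ...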
In addition to the hints from Gysbert, you could add a preceding load:
load *, recno() as RecNo, rowno() as RowNo;
Select ...
to check whether there really is a difference in the number of records and, if so, which ones are missing.
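Applied to the query from the original post, the check could look like this (the connection name and table label are hypothetical):

LIB CONNECT TO 'MyConn';   // hypothetical data connection

CheckTable:
Load *, recno() as RecNo, rowno() as RowNo;   // preceding load over the SELECT below
SQL SELECT a.id, a.b_id, a.c_id
FROM A
JOIN B ON B.id = A.b_id
JOIN C ON C.id = A.c_id;

// Comparing the highest RecNo/RowNo with the row count the database reports
// shows whether records are lost on the way into Qlik; the id field shows which ones.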
- Marcus
I ran the script and it had loaded 10M rows when it exited, with no errors.
I checked the query with DBeaver and it returns 44M rows.
This time I was loading from a single - although large - table.
Could this be a memory issue? We're running on a VM.
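One way to pin down how many rows actually land in Qlik is to log the count right after the load; a small sketch (the table label 'BigTable' is hypothetical):

// After the load has finished, log how many rows ended up in the Qlik table
LET vLoadedRows = NoOfRows('BigTable');
TRACE Loaded $(vLoadedRows) rows, versus the 44M the database reports;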
Does DBeaver use the same ODBC or OLE DB driver that you use with Qlik Sense?