eduardo_dimperio
Specialist II

Use a field from a QVD in an Oracle WHERE clause

Hi,

I have a query whose result I store into a QVD. I now need the load to be incremental, but for that I need to use a field from my QVD inside the Oracle WHERE clause.

Until now I have used a kind of workaround: building a string that concatenates every value of the field.

//******************************************************************************

//FIELD from QVD

MO_ISSUE_FIELD_STRING:

LOAD

    ISSUE_ID

    FROM

    [lib://Dados/TESTE/MO_ISSUE_FIELD_STRING.qvd] (qvd);

//******************************************************************************

CONCATENATE(MO_ISSUE_FIELD_STRING)

//******************************************************************************

MO_ISSUE_FIELD_STRING:

Load

    ID_ISSUE_FIELD_STRING AS ISSUE_FIELD_STRING_ID,

    ISSUE_FIELD_ID,

    ISSUE_ID,

    FIELD_ID  AS CATEGORY_ID,

    FIELD_VALUE AS CATEGORY_VALUE,

    1 AS ID_STORAGE_TYPE

;

select

    ID_ISSUE_FIELD_STRING,

    ISSUE_FIELD_ID,

    ISSUE_ID,

    FIELD_ID,

    FIELD_VALUE

from MO_ISSUE_FIELD_STRING

where

????? // I WOULD LIKE TO SELECT HERE ONLY THE ISSUE_ID VALUES THAT ARE NOT IN MY QVD


//********************************************************************************

//WORKAROUND

LET vWHERE = '';

ID_ISSUE_LIST:

LOAD DISTINCT

    ISSUE_ID

RESIDENT MO_ISSUE;

FOR EACH vId IN FieldValueList('ISSUE_ID')

    LET vWHERE = vWHERE & 'ISSUE_ID=' & $(vId) & ' OR ';

NEXT vId

IF vWHERE = '' THEN

    EXIT SCRIPT;

ENDIF

LET vWHERE = Left(vWHERE, Len(vWHERE) - 4); // drop the trailing ' OR '

//********************************************************************************

1 Solution

Accepted Solutions
marcus_sommer

If the amount of your data is rather small and/or your database is fast enough to deliver the data within your needed time-frame, you could use a where exists() clause on the preceding load, like:

...

MO_ISSUE_FIELD_STRING:

LOAD

    ISSUE_ID

    FROM

    [lib://Dados/TESTE/MO_ISSUE_FIELD_STRING.qvd] (qvd);

...

MO_ISSUE_FIELD_STRING:

Load

    ID_ISSUE_FIELD_STRING AS ISSUE_FIELD_STRING_ID,

    ISSUE_FIELD_ID,

    ISSUE_ID,

    FIELD_ID  AS CATEGORY_ID,

    FIELD_VALUE AS CATEGORY_VALUE,

    1 AS ID_STORAGE_TYPE

where not exists(ISSUE_ID)

;

select

    ID_ISSUE_FIELD_STRING,

    ISSUE_FIELD_ID,

    ISSUE_ID,

    FIELD_ID,

    FIELD_VALUE

from MO_ISSUE_FIELD_STRING;

...

But this means that every record is pulled from the database and checked on the Qlik side.

If this is not feasible, I would go with your workaround of loading the data within a loop only if no other alternative is possible, because I assume that would be the slowest approach.

One option is to concatenate the ISSUE_ID values into a string and use them within a not in() clause - AFAIK this is often limited to a certain number of parameters and/or characters. Also thinkable is something like max(ISSUE_ID), if it can be ensured that each new ID is greater than the previous ones.
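The max(ISSUE_ID) variant could look roughly like this - a sketch, assuming ISSUE_ID is numeric and strictly increasing (the temporary table and variable names are illustrative):

//******************************************************************************

// Read the highest ISSUE_ID already stored in the QVD ...

MAX_ID_TMP:

LOAD Max(ISSUE_ID) AS MAX_ISSUE_ID

FROM [lib://Dados/TESTE/MO_ISSUE_FIELD_STRING.qvd] (qvd);

LET vMaxIssueId = Peek('MAX_ISSUE_ID', 0, 'MAX_ID_TMP');

DROP TABLE MAX_ID_TMP;

// ... and let the database do the filtering, so only new rows are transferred

MO_ISSUE_FIELD_STRING_NEW:

SELECT ID_ISSUE_FIELD_STRING, ISSUE_FIELD_ID, ISSUE_ID, FIELD_ID, FIELD_VALUE

FROM MO_ISSUE_FIELD_STRING

WHERE ISSUE_ID > $(vMaxIssueId);

//******************************************************************************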

Another way would be to write the IDs loaded in Qlik back into the database - maybe with something like this: Write Back to Database via ETL process (using CSV or XML) - or, maybe even easier, to store these IDs within a parallel process directly in the database and use them in a join to filter the data.

- Marcus


10 Replies
marcus_sommer

(First reply; identical to the accepted solution shown above.)

eduardo_dimperio
Specialist II
Author

Well, unfortunately this doesn't solve my problem. With more than 999 values I can still use my workaround, but with something like a loop the load time increases a lot. I don't understand why Qlik Sense doesn't offer a smarter way to filter a SQL command using a QVD.

marcus_sommer

It's not a problem on the Qlik side, because once you have the data there you can easily restrict it with something like where exists(). The problem is the SQL, which isn't touched by Qlik at all - it is transferred as-is to the database and Qlik just receives the results. This means the database + driver would need to be able to handle the number of values which Qlik delivers for the where clause ...

Therefore, if there is no other field which could be used with a numeric operator like >=, you need a join approach within the SQL, as hinted in my previous post.

If that is not possible at the moment, or too expensive, you could optimize your loop approach by using an in() where clause with 999 parameters instead of going through each single value - reducing the number of loop iterations by a factor of nearly 1000.
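That batching could be sketched like this - assuming numeric IDs, with chunks of 999 values AND-ed together into one where clause, since Oracle limits an in() list to 1000 entries (the overall statement length is still bounded, so this only stretches the workaround):

ID_ISSUE_LIST:

LOAD DISTINCT ISSUE_ID RESIDENT MO_ISSUE;

LET vRows  = NoOfRows('ID_ISSUE_LIST');

LET vWhere = '';

LET vList  = '';

FOR vRow = 0 TO $(vRows) - 1

    LET vList = vList & Peek('ISSUE_ID', $(vRow), 'ID_ISSUE_LIST') & ',';

    // close a chunk every 999 values, and at the last row

    IF Mod($(vRow) + 1, 999) = 0 OR $(vRow) = $(vRows) - 1 THEN

        LET vList  = Left(vList, Len(vList) - 1); // drop the trailing comma

        LET vWhere = vWhere & ' AND ISSUE_ID NOT IN (' & vList & ')';

        LET vList  = '';

    ENDIF

NEXT vRow

LET vWhere = Mid(vWhere, 6); // drop the leading ' AND '

The variable can then be expanded inside the SQL: SELECT ... FROM MO_ISSUE_FIELD_STRING WHERE $(vWhere);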

- Marcus

eduardo_dimperio
Specialist II
Specialist II
Author

The problem is that the restriction is created from my QVD. Your idea (which I think is very good) only works if I have a table with a PK. How can I create an incremental QVD from a table without a PK?

I expected to be able to use a simple where clause with a QVD field, like:

Select

*

from Database

where

FieldFromDatabase Not In (QvdField)

but OK, I think that's not possible at the moment.

dwforest
Specialist II

Incremental loads are typically over time, so the database table would have a date field that could be used as the limiter. It seems likely that an issue tracker has a date field? Created? Updated?

Select * from table where date_field >= Today()-1
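The full incremental pattern built on such a date field is usually sketched like this (the table, field, variable and QVD names here are placeholders, not from the thread):

// Pull only the rows changed since the last successful run

NEW_ROWS:

SELECT *

FROM MY_TABLE

WHERE UPDATED_AT >= TO_DATE('$(vLastExecDate)', 'YYYY-MM-DD HH24:MI:SS');

// Append the untouched history from the QVD, skipping keys just reloaded

CONCATENATE (NEW_ROWS)

LOAD *

FROM [lib://Dados/MY_TABLE.qvd] (qvd)

WHERE NOT Exists(PRIMARY_KEY);

// Write the combined result back for the next run

STORE NEW_ROWS INTO [lib://Dados/MY_TABLE.qvd] (qvd);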

eduardo_dimperio
Specialist II
Author

Hi Forest,

Imagine that I have a cake book with only two fields, Name (string) and Recipe (string), with no PK and no ascending or descending order. How would you do an incremental load on that?

QVD_TABLE:

LOAD

NAME,

RECIPE

FROM MYQVD (qvd);

CONCATENATE (QVD_TABLE)

DATABASE_TABLE:

Select

NAME,

RECIPE

from Database

where

????

dwforest
Specialist II

If it was a table that represented a changing list where it was important to track changes, I'd have a date in it.

Or it would be a short list where a complete reload would be fine.

If this is your real-world issue, loading even thousands of records with 2 fields should be fast in Qlik; once resident, you could use Exists() as Marcus suggests.

If the database is outside of your control, build a DataWarehouse to create tracking.

Qlik works best when used against normalized or DW/Star Schema designs. It can't work with fields that don't exist.
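Applied to the cake-book example, Exists() can work even without a PK if a synthetic key is derived from the two fields (a sketch; the '|' separator and the ROW_KEY name are assumptions):

QVD_TABLE:

LOAD

    NAME,

    RECIPE,

    NAME & '|' & RECIPE AS ROW_KEY

FROM MYQVD (qvd);

// Pull everything from the database, but keep only unseen combinations

CONCATENATE (QVD_TABLE)

LOAD

    NAME,

    RECIPE

WHERE NOT Exists(ROW_KEY, NAME & '|' & RECIPE);

SELECT NAME, RECIPE FROM Database;

Note that the database still delivers every row - the filtering happens on the Qlik side - which matches the "complete pull should be fast" compromise described above.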

eduardo_dimperio
Specialist II
Author

A complete reload doesn't work for me; if I have 10k rows it will be too expensive to load every day. Unfortunately Qlik Sense can't do what I need if I don't have a good database model, but thank you for your time.

dwforest
Specialist II

Did you try loading the 10K records?

We routinely load more than that daily to support various applications; so what if it takes 10-20 minutes for the scheduler to run? It can be done off hours, ahead of the work day; just saying.