Hi community,
could anybody give me a hint on how to load only records that occur several times? Assume I have the following table:
Id | Description | Referenced Id |
---|---|---|
1 | That | A |
2 | does | B |
3 | not | A |
4 | matter | C |
5 | at | A |
6 | all | B |
Now I want QlikView to load only those records that have a Referenced Id that occurs more than two times. In that case it should only load records of Referenced Id A as it appears in three records.
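The "keep only rows whose key occurs often enough" logic can be sketched outside of QlikView, e.g. in plain Python; the records below are just the sample table above, and all names are illustrative:

```python
from collections import Counter

# Sample data mirroring the table above: (Id, Description, ReferencedId)
records = [
    (1, "That", "A"),
    (2, "does", "B"),
    (3, "not", "A"),
    (4, "matter", "C"),
    (5, "at", "A"),
    (6, "all", "B"),
]

# Count how often each Referenced Id occurs across all records.
counts = Counter(ref for _, _, ref in records)

# Keep only records whose Referenced Id occurs more than twice.
frequent = [rec for rec in records if counts[rec[2]] > 2]
# → only the rows with Referenced Id "A" (Ids 1, 3 and 5) remain
```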
I already tried a mapping load, and it works fine for this simple example. But it doesn't work once the table definition gets more complex, as a mapping load only allows me to use two columns, e.g. when I also want to distinguish between different companies:
Id | Description | Referenced Id | Company |
---|---|---|---|
1 | This | A | QlikTech |
2 | is | B | QlikTech |
3 | just | A | Google |
4 | a | A | QlikTech |
5 | text | A | Google |
6 | that | B | QlikTech |
7 | does | A | QlikTech |
8 | not | B | Google |
9 | matter | A | Google |
In this case QlikView should only load records of Referenced Id A as we have three A's for QlikTech and three A's for Google. Referenced Id B should be ignored as we only have two for QlikTech and one for Google.
Any idea?
See the load script in the attached.
Simply join the tables on those fields. Please find the sample script below:
T:
LOAD Reference_id, Company, Count(Reference_id & '-' & Company) AS CNT
FROM source
group by Reference_id, Company;
MAIN:
LOAD Reference_id, Company Resident T where CNT > 2;
Left Join (MAIN)
LOAD * from source;
Drop table T;
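For readers outside QlikView, the three steps of that script (aggregate the counts, filter them, join back to the source) can be sketched in plain Python; the sample rows are the poster's second table, with "Google" taken from his description of it:

```python
from collections import defaultdict

# Rows from "source": (Id, Description, Reference_id, Company)
source = [
    (1, "This", "A", "QlikTech"),
    (2, "is", "B", "QlikTech"),
    (3, "just", "A", "Google"),
    (4, "a", "A", "QlikTech"),
    (5, "text", "A", "Google"),
    (6, "that", "B", "QlikTech"),
    (7, "does", "A", "QlikTech"),
    (8, "not", "B", "Google"),
    (9, "matter", "A", "Google"),
]

# Step 1 (the "T" table): count per (Reference_id, Company) pair.
cnt = defaultdict(int)
for _, _, ref, company in source:
    cnt[(ref, company)] += 1

# Step 2 (the "MAIN" table): keep only pairs seen more than twice.
keep = {pair for pair, n in cnt.items() if n > 2}

# Step 3 (the join back): pull every source row whose pair survived.
main = [row for row in source if (row[2], row[3]) in keep]
# → the six rows with Reference_id "A" (three per company) remain
```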
I already tried to use some kind of join statement, but it ends in a stack-overflow message.
Nicole's solution, too, statically looks only for two occurrences. I want it a bit more flexible, e.g. I may also want only records that occur four times, six times, ...
To be honest, it should also consider the Country in its selection, but I think that shouldn't be any different from the two-field solution. So the table looks like this:
Id | Description | Reference Id | Company | Country |
---|---|---|---|---|
1 | This | A | QlikTech | UK |
2 | description | B | QlikTech | UK |
3 | might | A | QlikTech | IT |
4 | be | B | IKEA | UK |
5 | different | B | IKEA | UK |
6 | in | A | QlikTech | IT |
7 | every | A | IKEA | IT |
8 | record | B | QlikTech | UK |
9 | but | A | IKEA | IT |
10 | that | A | QlikTech | UK |
11 | does | B | IKEA | UK |
12 | not | A | QlikTech | IT |
13 | matter | A | QlikTech | IT |
Now it should only load records of
UK - IKEA - B
IT - QlikTech - A
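The three-field variant with a flexible threshold can be sketched the same way; this is an illustration in Python (not load script), using exactly the rows from the table above and a hypothetical `min_occurrences` parameter:

```python
from collections import Counter

# Rows from the table above: (Id, Description, ReferenceId, Company, Country)
rows = [
    (1, "This", "A", "QlikTech", "UK"),
    (2, "description", "B", "QlikTech", "UK"),
    (3, "might", "A", "QlikTech", "IT"),
    (4, "be", "B", "IKEA", "UK"),
    (5, "different", "B", "IKEA", "UK"),
    (6, "in", "A", "QlikTech", "IT"),
    (7, "every", "A", "IKEA", "IT"),
    (8, "record", "B", "QlikTech", "UK"),
    (9, "but", "A", "IKEA", "IT"),
    (10, "that", "A", "QlikTech", "UK"),
    (11, "does", "B", "IKEA", "UK"),
    (12, "not", "A", "QlikTech", "IT"),
    (13, "matter", "A", "QlikTech", "IT"),
]

min_occurrences = 2  # flexible threshold: keep groups occurring MORE than this

# Count per (Country, Company, ReferenceId) group, then filter the rows.
counts = Counter((country, company, ref) for _, _, ref, company, country in rows)
result = [r for r in rows if counts[(r[4], r[3], r[2])] > min_occurrences]
# surviving groups: ('UK', 'IKEA', 'B') and ('IT', 'QlikTech', 'A')
```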
I also want to share the code snippet I used:
No_of_occurence_TMP:
LOAD
"Linked Id",
Company,
Country,
Count("Linked Id") as No_of_occurence
Resident LedgerEntry_TMP
group by Country, Company, "Linked Id";
QUALIFY *;
LedgerEntry:
LOAD *
Resident No_of_occurence_TMP
Where No_of_occurence > $(MaxOccur); // MaxOccur is a variable, here set to 2
Left Join (LedgerEntry)
LOAD
Id,
"Linked Id",
"Posting Date",
"Cost Centre",
"Account No",
"Source Code",
"Document No",
Description
Resident LedgerEntry_TMP;
UNQUALIFY *;
Hi,
using your example table you would get what you need with this:
First_run:
LOAD
Id,
Description,
[Reference Id],
Company,
Country,
[Reference Id]&'|'&Company&'|'&Country as Key
FROM
Example_Table;
left join (First_run)
load Key, count(Key) as Occurrences resident First_run group by Key;
Result_table:
NoConcatenate load * Resident First_run where Occurrences > 2;
drop table First_run;
Regards, Janne
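The same composite-key-plus-join idea translates to pandas, for anyone prototyping the filter outside QlikView; the frame below is a small hypothetical stand-in for Example_Table:

```python
import pandas as pd

# Hypothetical stand-in for Example_Table.
df = pd.DataFrame({
    "Id": [1, 2, 3, 4, 5, 6],
    "Reference Id": ["A", "B", "A", "C", "A", "B"],
    "Company": ["X", "X", "X", "Y", "X", "X"],
    "Country": ["UK"] * 6,
})

# Count occurrences per (Reference Id, Company, Country) and broadcast the
# count back onto every row — the analogue of the "left join" of the counts.
occ = df.groupby(["Reference Id", "Company", "Country"])["Id"].transform("count")

# The analogue of "where Occurrences > 2".
result = df[occ > 2]
# → only the three rows with Reference Id "A" remain
```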
Yes, I also think that should work. But when I do this, I get an out-of-memory exception. I use QV 10 SR5 on a Windows XP machine with 2 GB RAM, and I have about 3.1 million records in my table. That doesn't sound undersized, does it?
You could make a loader app for the first table, save the table to a QVD file, and load it with a different QlikView app; that is, splitting the tasks across separate QVW files. It should decrease the memory needed. 3.1 million rows still isn't that much, but if Task Manager shows that you are using plenty of memory, then insufficient memory could indeed be the reason for the error.