Hello,
I have a source table which looks like the one below (columns 1-6), and I need to add some data to it (columns 7-9).
Here is some explanation:
I have an order (1) with plenty of documents (5), where 'Pw' stands for product made and 'Rw' for material used (4). Within every order there are sub orders (9) which I know about, but there is no simple way to show them.
I know the start date (7) and the end date (8) of each sub order (9) because I know that after every group of documents marked as 'Pw' there will be at least one document marked as 'Rw' (4). So a sub order starts with the first 'Pw' that occurs right after an 'Rw', and it ends with the last 'Rw' that occurs right before another 'Pw'.
The question is how to do this in the QlikView load script :-). I will appreciate any help. Thank you.
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
order_no | date | time | document | document_no | material_code | from | to | sub_order_no |
229185 | 2018-02-26 | 22:07:28 | Pw | 18638622 | 2495347 | 2018-02-26 22:07:28 | 2018-02-26 22:08:37 | 1 |
229185 | 2018-02-26 | 22:08:00 | Rw | 18638619 | 2478198 | 2018-02-26 22:07:28 | 2018-02-26 22:08:37 | 1 |
229185 | 2018-02-26 | 22:08:18 | Rw | 18638616 | 2494452 | 2018-02-26 22:07:28 | 2018-02-26 22:08:37 | 1 |
229185 | 2018-02-26 | 22:08:37 | Rw | 18638610 | 2493008 | 2018-02-26 22:07:28 | 2018-02-26 22:08:37 | 1 |
229185 | 2018-02-26 | 22:23:40 | Pw | 18638607 | 2495354 | 2018-02-26 22:23:40 | 2018-02-26 22:24:28 | 2 |
229185 | 2018-02-26 | 22:24:09 | Rw | 18638604 | 2493008 | 2018-02-26 22:23:40 | 2018-02-26 22:24:28 | 2 |
229185 | 2018-02-26 | 22:24:28 | Rw | 18638601 | 2494452 | 2018-02-26 22:23:40 | 2018-02-26 22:24:28 | 2 |
229185 | 2018-02-26 | 23:50:23 | Pw | 18638598 | 2495390 | 2018-02-26 23:50:23 | 2018-02-26 23:51:47 | 3 |
229185 | 2018-02-26 | 23:51:43 | Pw | 18638586 | 2494452 | 2018-02-26 23:50:23 | 2018-02-26 23:51:47 | 3 |
229185 | 2018-02-26 | 23:51:44 | Rw | 18638583 | 2493008 | 2018-02-26 23:50:23 | 2018-02-26 23:51:47 | 3 |
229185 | 2018-02-26 | 23:51:47 | Rw | 18638528 | 2494448 | 2018-02-26 23:50:23 | 2018-02-26 23:51:47 | 3 |
229185 | 2018-02-27 | 00:22:17 | Pw | 18638525 | 2495416 | 2018-02-27 00:22:17 | 2018-02-27 00:23:45 | 4 |
229185 | 2018-02-27 | 00:23:40 | Rw | 18638522 | 2494448 | 2018-02-27 00:22:17 | 2018-02-27 00:23:45 | 4 |
229185 | 2018-02-27 | 00:23:41 | Rw | 18638519 | 2493008 | 2018-02-27 00:22:17 | 2018-02-27 00:23:45 | 4 |
229185 | 2018-02-27 | 00:23:45 | Rw | 18638405 | 2462851 | 2018-02-27 00:22:17 | 2018-02-27 00:23:45 | 4 |
229185 | 2018-02-27 | 00:57:17 | Pw | 18638368 | 2495440 | 2018-02-27 00:57:17 | 2018-02-27 00:59:47 | 5 |
229185 | 2018-02-27 | 00:58:36 | Pw | 18638365 | 2493008 | 2018-02-27 00:57:17 | 2018-02-27 00:59:47 | 5 |
229185 | 2018-02-27 | 00:58:55 | Pw | 18638362 | 2489473 | 2018-02-27 00:57:17 | 2018-02-27 00:59:47 | 5 |
229185 | 2018-02-27 | 00:59:47 | Rw | 18638272 | 2462851 | 2018-02-27 00:57:17 | 2018-02-27 00:59:47 | 5 |
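To put the rule in load-script terms, this is roughly what I am after (just a sketch of the idea on a few inline rows, not working code against my real source; the flag name is made up):
// sketch only: flag the rows where a new sub order begins,
// i.e. a 'Pw' that directly follows an 'Rw' (or the very first row)
Sample:
LOAD * INLINE [
order_no,time,document
229185,22:07:28,Pw
229185,22:08:00,Rw
229185,22:08:37,Rw
229185,22:23:40,Pw
229185,22:24:09,Rw
];
Flagged:
NoConcatenate
LOAD *,
    If(document = 'Pw' and (RowNo() = 1 or Previous(document) = 'Rw'), 1, 0) as new_sub_order
Resident Sample
Order By time;
Drop Table Sample;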
Hi,
just add "and previous(document)<>document" in the first "where" and we are almost there :-). So the final code is:
Submap:
Mapping Load
    order_no&'|'&date&'|'&time&'|'&document as key,
    RowNo() as temp_sub_order_no
FROM [X] (x)
where document='Pw' and previous(document)<>document;

Table1:
Load
    order_no,
    date,
    time,
    document,
    document_no,
    material_code,
    ApplyMap('Submap', key, null()) as sub_order_no;
LOAD
    order_no&'|'&date&'|'&time&'|'&document as key,
    order_no,
    date,
    time,
    document,
    document_no,
    material_code
FROM [X] (x);

NoConcatenate
New:
Load
    order_no,
    date,
    time,
    document,
    document_no,
    material_code,
    If(isnull(sub_order_no), peek(sub_order_no), sub_order_no) as sub_order_no
Resident Table1
order by date, time;

Drop table Table1;
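With sub_order_no in place, the from (7) and to (8) columns can be filled in by joining back a grouped min/max, much like the Group By step shown further down in the thread. A rough sketch only, assuming date and time concatenate into a parseable timestamp:
// sketch: first and last timestamp of every sub order become [from] and [to]
// ([from]/[to] are bracketed because from and to are also script keywords)
Left Join (New)
LOAD order_no,
    sub_order_no,
    Timestamp(Min(Timestamp#(date & ' ' & time, 'YYYY-MM-DD hh:mm:ss'))) as [from],
    Timestamp(Max(Timestamp#(date & ' ' & time, 'YYYY-MM-DD hh:mm:ss'))) as [to]
Resident New
Group By order_no, sub_order_no;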
There are some exceptions which I don't want to bring in here - I just have to work them out by myself. Anyway, thank you for your help!
Hi Mateusz,
I have re-created your table in the load script as per below:
Original: //Load original data
LOAD order_no,
Date(date) as date,
time,
document,
document_no,
material_code,
if(peek(sub_order_no)<>previous(sub_order_no), 1, sub_order_no) as sub_order_no, // create sub_order-no
if(peek(sub_order_no)<>previous(sub_order_no), date&' '&time, from) as from, // create from
order_no&'|'&sub_order_no as ordersub // create ordersub for aggregation purpose below
FROM
(txt, codepage is 1252, embedded labels, delimiter is '\t', msq, header is 1 lines);
Left Join(Original)
Load
ordersub,
Time(max(time)) as totime
Resident Original
Group by ordersub;
NewTable:
load *,
date&' '&totime as to // create to field
Resident Original;
Drop table Original;
Screenshot of the table box:
Hi,
that's not exactly what I meant. The source table is only columns 1-6, and I want to create columns 7-9.
So I can't use "sub_order_no" in the first LOAD, because it doesn't exist yet.
Also a question: aren't Peek() and Previous() equal in this scenario? There was no transformation of the original data, so the input and the output hold the same data.
Peek() vs Previous() – When to Use Each
Anyway, thanks for introducing me to the Peek() and Previous() functions. I have managed to mark the first "Pw" - its date will be the first date of the next sub_order and the end date of the previous one. This way, after a few resident loads, I can build an interval for the sub orders (IntervalMatch).
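Roughly what I have in mind - just a sketch; the SubOrders table with its [from]/[to] range, the [Y] source and the ts timestamp field are assumptions for illustration, not names from my real script:
// sketch: raw document rows on one side...
Docs:
LOAD order_no,
    Timestamp#(date & ' ' & time, 'YYYY-MM-DD hh:mm:ss') as ts,
    document,
    document_no,
    material_code
FROM [X] (x);
// ...and the worked-out sub order ranges on the other (hypothetical table)
SubOrders:
LOAD order_no,
    sub_order_no,
    [from],
    [to]
FROM [Y] (x);
// IntervalMatch builds a bridge table linking every ts to the [from]/[to]
// interval of the same order_no, which associates sub_order_no to each row
Bridge:
IntervalMatch (ts, order_no)
LOAD [from], [to], order_no
Resident SubOrders;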
As you can see if you read the script, only the first 6 columns are loaded directly, the others are derived. I copied only these fields to work out the solution. I created the sub_order_no first as it was the simplest logic.
Peek() will look at the output of the load statement, whereas Previous() will look at the input of the load statement.
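A minimal sketch of the difference on made-up inline data: with a where clause, Previous() still sees the input record that was discarded, while Peek() only sees the last record that actually made it into the output table.
// sketch: Previous() reads the previous INPUT record, Peek() the previous OUTPUT record
Src:
LOAD * INLINE [
id,document
1,Pw
2,Rw
3,Pw
4,Rw
];
Demo:
NoConcatenate
LOAD id,
    document,
    Previous(document) as prev_input,  // for id 3: 'Rw' (row 2, even though the where discards it)
    Peek('document') as prev_output    // for id 3: 'Pw' (row 1, the last row that passed the where)
Resident Src
Where document = 'Pw';
Drop Table Src;
So on a plain 1:1 reload with no where clause and no reordering they do return the same values; they only diverge once the output differs from the input.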
Hi,
isn't it true that when you refer to a field before specifying the output column name, that field should already exist and be defined earlier?
Like here: "if(peek(sub_order_no)<>previous(sub_order_no), 1, sub_order_no) as sub_order_no," you are referring to a field that doesn't exist yet but is the one currently being defined.
Just to be sure, I created an empty .qvw, loaded the same data I posted as an example and pasted your code. The loaded data and the result are as below.
You are right, I think I might have done a few iterations and lost my history and original logic.
Will need to take another look, sorry for the mistake.
No problem, I really appreciate your time.
I already have half a solution to my problem. I have created two separate tables and now I need to join them together somehow. But let's wait for your solution, because I'm not sure mine is that efficient when it comes to performance - and, most of all, it doesn't solve all the problems.
Please don't tell me that you've given up on me.
If you have no idea for this one, maybe we can go further and try to join my tables? I have the raw data and a range for each sub order, and now I need to join them together.
Hi Mateusz,
Sorry, been busy, you are not forgotten ..
I've got the first part for you - a bit more convoluted than before, but it is working for me:
Submap:
Mapping Load order_no&'|'&date&'|'&time&'|'&document as key,
RowNo() as temp_sub_order_no
FROM
(txt, codepage is 1252, embedded labels, delimiter is '\t', msq)
where document='Pw';
Table1:
Load order_no,
date,
time,
document,
document_no,
material_code,
ApplyMap('Submap', key, null()) as sub_order_no;
LOAD order_no&'|'&date&'|'&time&'|'&document as key,
order_no,
date,
time,
document,
document_no,
material_code
FROM
(txt, codepage is 1252, embedded labels, delimiter is '\t', msq);
NoConcatenate
New:
Load order_no,
date,
time,
document,
document_no,
material_code,
If(isnull(sub_order_no), peek(sub_order_no), sub_order_no) as sub_order_no
Resident Table1;
Drop table Table1;