Hi,
Please suggest a good approach to achieve the below result set.
Data-
Output
@marcus_sommer, please suggest.
Maybe like:
t: load c_id, pid1, pc_id, year, amt from X;
concatenate(t) load c_id, pid1, pc_id, nextyear as year, nextamt as amt from X;
concatenate(t) load c_id, pid1, pc_id, nexttwoyear as year, nexttwoamt as amt from X;
@marcus_sommer Thank you. Can we do it with another approach other than concatenate?
You may go with The Crosstable Load - Qlik Community - 1468083 approach, which could also resolve n fields - but in your case you would need some extra measures, because the field-names don't contain the category-information directly (the next, nexttwo, ...) and there are two value-fields, year and amt.
If the number of fields and their names are quite predictable, the concatenate would be logically simpler.
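A minimal sketch of what this crosstable route could look like, assuming the same placeholder source X and the field names from the concatenate example above; the field, value, period and type names are illustrative, not taken from the original post:

tmp:
CrossTable(field, value, 3)
LOAD c_id, pid1, pc_id, year, amt, nextyear, nextamt, nexttwoyear, nexttwoamt
FROM X;

// derive the period-category ('', 'next', 'nexttwo') and the value-type (year or amt)
// from the generated field names
tmp2:
LOAD c_id, pid1, pc_id,
     if(WildMatch(field, '*year'), left(field, len(field) - 4), left(field, len(field) - 3)) as period,
     if(WildMatch(field, '*year'), 'year', 'amt') as type,
     value
RESIDENT tmp;
DROP TABLE tmp;

// pivot the two value-types back into separate year and amt columns per key and period;
// this assumes c_id, pid1, pc_id and period identify a record uniquely enough for the join
result:
LOAD c_id, pid1, pc_id, period, value as year
RESIDENT tmp2
WHERE type = 'year';

LEFT JOIN (result)
LOAD c_id, pid1, pc_id, period, value as amt
RESIDENT tmp2
WHERE type = 'amt';

DROP TABLE tmp2;

These extra resident loads are exactly the additional measures mentioned above, which is why the plain concatenate tends to be simpler when the field names are known in advance.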
Hi @marcus_sommer, but concatenation will create duplicates in the fact table; only the year field will be the differentiator.
Can we instead create two facts, one of them with the nextone and nexttwo data, and link them based on PcId?
But the other concern with this model is: if we make a selection, let's say 2025, then in the raw fact all values for 2025 will come, but it will also filter the second fact.
Regardless of how such a transformation is performed, the result data-set is a table with fewer columns and more records than the source data-set. This kind of measure itself creates no extra or invalid data, and it also won't fix any data-quality issues which may exist within the source data.
In many scenarios the remaining needs in regard to storage/RAM sizing will be significantly reduced, because Qlik stores only distinct field-values in system-tables with bit-stuffed pointers to the data-tables. The more redundant the data are, the more benefit such a column-oriented storage has over a row-oriented storage like in sql/csv.
Separating these data into n fact-tables is the complete opposite of being useful, because linking them is complicated to impossible or requires extra measures like a link-table without providing any benefit - there are only disadvantages.
If there is a further need to differentiate the data between the concatenate-steps - which is not provided uniquely enough by the origin data - you could add a source-information to each load, like:
'load 1' as Source
and/or also include a record-information with recno() and/or rowno(), as in the sketch below.
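For example, a minimal sketch of the concatenate loads from above extended with such a source- and record-information (Source and RecId are illustrative field names, X stays the placeholder source):

// each load tags its rows with a Source label and the input record number
t:
LOAD c_id, pid1, pc_id, year, amt,
     'load 1' as Source, RecNo() as RecId
FROM X;

Concatenate(t)
LOAD c_id, pid1, pc_id, nextyear as year, nextamt as amt,
     'load 2' as Source, RecNo() as RecId
FROM X;

Concatenate(t)
LOAD c_id, pid1, pc_id, nexttwoyear as year, nexttwoamt as amt,
     'load 3' as Source, RecNo() as RecId
FROM X;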
Thank you for the detailed explanation ☺