We have a QVD reload step that we perform to get the latest data. Currently it's a huge table: 490 million rows x 25 columns, quite out of control. If I do not perform an incremental load, the QVD file size is 42.8GB. When I perform an incremental load, the new size is 62.7GB.
Is it creating new records? The problem is that both files contain the same data (same timestamps). I'm not sure what I am doing wrong.
Hi Scott,
it's difficult to tell you what you are doing wrong without seeing your script.
Your problem aside, in my experience I can suggest that you split the QVD by year (or by some other period).
That way you can perform a lighter incremental load.
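For example, a rough sketch of a yearly split might look like this (the fisc_year field here is hypothetical — you would derive it from your own key, e.g. Left(fisc_time_key, 4)):

// Store each fiscal year into its own QVD so the incremental
// reload only has to touch the current year's file.
FOR EACH vYear IN FieldValueList('fisc_year')
    Selling_$(vYear):
    NoConcatenate LOAD * Resident Weekly_Selling
    WHERE fisc_year = $(vYear);
    STORE Selling_$(vYear) INTO QVD\Weekly_Selling_$(vYear).qvd (qvd);
    DROP TABLE Selling_$(vYear);
NEXT vYear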
C u,
Stefano.
It's possible that your incremental reload process is getting an unoptimized load when re-reading the existing QVD, causing some nice tight integers to be converted to strings. See
http://qlikviewnotes.blogspot.com/2008/05/when-less-data-means-more-ram.html
QT has logged the "unoptimized field expansion" issue as a bug, but I have no idea when it will be fixed. In the meantime, check whether you are getting an unoptimized load and do what you can to make it optimized.
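As a rule of thumb, a QVD load stays optimized only when the fields are loaded unchanged (renaming is allowed) and at most a single WHERE Exists() clause is used; any expression or other WHERE clause forces the slower, RAM-hungry unoptimized path. A minimal sketch:

// Optimized: straight field list, no transformations.
Data:
LOAD * FROM QVD\Weekly_Selling2.qvd (qvd);

// Still optimized: a single WHERE Exists() is permitted.
// LOAD * FROM QVD\Weekly_Selling2.qvd (qvd) WHERE Exists(fisc_time_key);

// Unoptimized: any expression in the field list breaks it,
// e.g. LOAD *, 1 AS Dummy FROM QVD\Weekly_Selling2.qvd (qvd);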
-Rob
Looking at the link you sent, I believe I am doing everything correctly. See my script below. The original QVD was written with the same load script (the Weekly_Selling load at the bottom), but it loaded the entire database starting from fiscal time key 201001.
Data:
LOAD
*
FROM
QVD\Weekly_Selling2.qvd (qvd);
//order existing Weekly_Selling.qvd based on fisc_time_key
Weekly_Selling:
NoConcatenate LOAD * Resident Data
Order by fisc_time_key asc;
Drop table Data;
//set variable for the maximum date
LET vMaxFiscalDateKey = peek('fisc_time_key');
Season_Map:
Mapping Load
class1code & '_' & class2code & '_' & class3code as Style_Key,
Left(description, 4) as Season
From QVD\Class3.qvd(qvd);
Product_Status_Map:
Mapping
LOAD * INLINE [
status, status_code
0, E
1, R
2, O
3
4, H
5, D
6, P
];
//load data from database based on vMaxFiscalDateKey variable
Weekly_Selling:
Concatenate
LOAD
store_num &'-'& Num(dept, '00000') &'-'& res_num as %DCMKey, //DCM STRUCTURE KEY
store_num & '-' & sku & '-' & fisc_time_key as %MDRKey, //MARKS DOWNS & RECEIPTS KEYS
store_num &'-'&product_number&'-'&fisc_time_key as %GMKey, //GROSS MARGIN DOLLARS KEY
fisc_time_key,
sku,
"sku_desc",
size1,
size2,
size1 & ' ' & size2 as size_dim,
"store_num",
"store_name",
"style",
"style_desc",
style & ' ' & style_desc as style_dim,
color,
"color_desc",
color & ' ' & color_desc as color_dim,
dept,
"dept_desc",
dept & ' ' & dept_desc as dept_dim,
"class_cde",
"class_desc",
class_cde & ' ' & class_desc as class_dim,
ApplyMap('Season_Map', dept & '_' & class_cde & '_' & style) as season,
"res_num",
"res_nme",
res_num & ' ' & res_nme as res_dim,
"oh_units",
"oh_dollars",
"oo_units",
"oo_dollars",
"ls_sales_units",
"ls_sales_dollars",
"nt_sales_units",
"nt_sales_dollars",
"soq_units",
"soq_dollars",
"stock_out_ind",
ars,
"retail_price",
"fcst_13wk_u",
"fcst_13wk_d",
"fcst_units",
"fcst_dollars",
"product_number",
status,
ApplyMap('Product_Status_Map', status) as status_code,
"nt_sales_units_ytd",
"nt_sales_dollars_ytd",
"nt_sales_units_mtd",
"nt_sales_dollars_mtd",
"nt_sales_units_ltd",
"nt_sales_dollars_ltd",
"nt_sales_units_std",
"nt_sales_dollars_std",
suspended_stock_out_ind,
contribution_code;
SQL SELECT *
FROM DCMCUSDB.weekly_selling where fisc_time_key >= $(vMaxFiscalDateKey);
Hard to say without seeing .mem files. Did you check them out with QlikView Optimizer?
I'm curious about your re-reading the resident table to get the latest date. Could you ensure your SQL SELECT is ordered by adding an ORDER BY fisc_time_key? Then you could just Peek() the last row of the QVD load.
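That is, if the QVD is always stored in fisc_time_key order, the resident copy-and-sort becomes unnecessary; a sketch of what that might look like:

Data:
LOAD * FROM QVD\Weekly_Selling2.qvd (qvd);

// Last row holds the maximum key, assuming the QVD was stored ordered.
LET vMaxFiscalDateKey = Peek('fisc_time_key', -1, 'Data');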
-Rob
I am not familiar with QlikviewOptimizer. Can you point me towards that?
I will try the changes you mentioned about using the ORDER BY.
You can find a copy of Qlikview Optimizer in this thread: