Not applicable

Best practice for SAP Connector (5.2) in QlikView

The following code is an example taken from a QlikView 7 PowerPoint presentation. We use SAP Connector 5.2, and the load takes longer every day. We handle ca. 270,000 records a day and need more than 8 hours to process the data, which goes back to 2006. The consultant who built the application has been gone since 2008, and we inherited the project. I'm thankful for every idea. Thank you.

With kind regards,

Florian


QV_Table:
// 1. Fetch only rows inserted or changed since the last execution
SQL SELECT PrimaryKey, X, Y FROM DB_TABLE
WHERE ModificationTime >= #$(LastExecTime)#
AND ModificationTime < #$(BeginningThisExecTime)#;

// 2. Append the unchanged rows from the previous QVD
Concatenate
LOAD PrimaryKey, X, Y FROM File.QVD
WHERE NOT EXISTS(PrimaryKey);

// 3. Keep only keys still present in the source, removing deleted rows
Inner Join
SQL SELECT PrimaryKey FROM DB_TABLE;

// 4. Write the refreshed table back to the QVD
STORE QV_Table INTO File.QVD;
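For the delta window above to work, the two timestamp variables have to be maintained across runs (QlikView keeps script variables inside the document between reloads). A minimal sketch, using the variable names from the snippet; the timestamp format is an assumption and must match DB_TABLE's ModificationTime:

// Very first run: no window start exists yet, so reach far back
if (len('$(LastExecTime)') = 0) then
let LastExecTime = timestamp(makedate(1900,1,1), 'YYYY-MM-DD hh:mm:ss');
end if

// Freeze this run's upper boundary before the extraction starts
let BeginningThisExecTime = timestamp(now(), 'YYYY-MM-DD hh:mm:ss');

// ... incremental load and STORE from the snippet above ...

// Only after a successful STORE: roll the window forward
let LastExecTime = '$(BeginningThisExecTime)';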
7 Replies
pablolabbe
Luminary Alumni

This code does an incremental load of a QVD.

Are you saying that QlikView needs 8 hours to extract 270,000 records, and all of these records date back to 2006?

Or

that QlikView needs 8 hours to extract all data since 2006, and each day adds another 270,000 records?

How large are the QVDs (record count and file size)?

disqr_rm
Partner - Specialist III

There is no way this code could take 8 hours to extract only 270k records. Do you have any for-next loop running before or after this piece of code?

Is it possible for you to attach the whole QVW document here? Since the script stores the QVD and probably (and hopefully) drops the table, the QVW file should be pretty small.

Not applicable
Author

We get ca. 270k records per day from SAP. The QVDs are split by year, so we have one QVD file per SAP table and year. We must update our QVDs with new, changed, and deleted records (a sketch of delete handling follows the script below). In our workflow, 8 QVW scripts extract and transform the data, and a further 3 QVWs serve as the GUI for the data.

Here is an extract from our code for one SAP table:


// Table: VBAK - sales document header

// First run: no delta pointers yet, so initialize them to load everything
if (isNull(vVBAKLastUDATE)) then
let vVBAKLastUDATE = '00000000';
end if

if (isNull(vVBAKLastERDAT)) then
let vVBAKLastERDAT = '00000000';
end if

// Load new rows from the source
[VBAK]:
Load
[VBELN] as %VBELN,
[VBELN] as [VBELN_VERKAUFSBELEG],
[VKORG] as [VKORG_VERKAUFSORGANISATION],
[VKORG] as %VKORG,
[AUART] as [AUART_VERKAUFSBELEGART],
[VTWEG] as [VTWEG_VERTRIEBSWEG],
[AUGRU] as [AUGRU_AUFTRAGSGRUND],
[BSTDK] as [BSTDK_BESTELLDATUM],
[BSTNK] as [BSTNK_BESTELLNUMMER],
[BNAME] as [BNAME_NAME],
[ERDAT] as [ERDAT_ANGELEGT_AM],
[UDATE] as [UDATE_GEAENDERT_AM],
mid([ERDAT],1,4) as [ERDAT_YEAR];
SQL
SELECT VBAK~VBELN,VBAK~VKORG,VBAK~AUART,VBAK~AUGRU,VBAK~BSTDK,VBAK~BSTNK,VBAK~BNAME,VBAK~ERDAT,VBAK~KNUMV,VBAK~VTWEG,ZPMWM_AEND~UDATE
FROM ZPMWM_AEND
left join VBAK on ZPMWM_AEND~VBELN = VBAK~VBELN
where ZPMWM_AEND~TABNAME = 'VBAK'
and ZPMWM_AEND~UDATE >= '$(vVBAKLastUDATE)'
;

VBAK_Temp:
noconcatenate
Load
[VBELN] as %VBELN_Temp,
[VBELN] as [VBELN_VERKAUFSBELEG],
[VKORG] as [VKORG_VERKAUFSORGANISATION],
[VKORG] as %VKORG,
[AUART] as [AUART_VERKAUFSBELEGART],
[VTWEG] as [VTWEG_VERTRIEBSWEG],
[AUGRU] as [AUGRU_AUFTRAGSGRUND],
[BSTDK] as [BSTDK_BESTELLDATUM],
[BSTNK] as [BSTNK_BESTELLNUMMER],
[BNAME] as [BNAME_NAME],
[ERDAT] as [ERDAT_ANGELEGT_AM],
[ERDAT] as [UDATE_GEAENDERT_AM], // note: ERDAT reused (this SELECT has no UDATE), presumably so the layout matches [VBAK]
mid([ERDAT],1,4) as [ERDAT_YEAR];
SQL
SELECT VBELN,VKORG,AUART,AUGRU,BSTDK,BSTNK,BNAME,ERDAT,KNUMV,VTWEG
FROM VBAK
where ERDAT >= '$(vVBAKLastERDAT)' OR AEDAT >= '$(vVBAKLastERDAT)'
//where ERDAT >= '$(vVBAKLastERDAT)' OR AEDAT >= 0100360360
;

concatenate (VBAK)
Load
%VBELN_Temp,
%VBELN_Temp as %VBELN,
[VBELN_VERKAUFSBELEG],
[VKORG_VERKAUFSORGANISATION],
%VKORG,
[AUART_VERKAUFSBELEGART],
[VTWEG_VERTRIEBSWEG],
[AUGRU_AUFTRAGSGRUND],
[BSTDK_BESTELLDATUM],
[BSTNK_BESTELLNUMMER],
[BNAME_NAME],
[ERDAT_ANGELEGT_AM],
[UDATE_GEAENDERT_AM],
[ERDAT_YEAR],
1 as dummy
resident VBAK_Temp
where not exists(%VBELN,%VBELN_Temp);

drop table VBAK_Temp;
drop fields dummy, [%VBELN_Temp];

// Loop over the years to partition the data that has already been read

for j = $(vStartJahr) to $(vAktJahr)

let vQvdReadFlag = 0;

if (isNull(vVBAKQvdEx$(j))) then
let vVBAKQvdEx$(j) = 1;
let vQvdReadFlag = -1;
end if

[VBAK_$(j)]:
Load
%VBELN,
[VBELN_VERKAUFSBELEG],
[VKORG_VERKAUFSORGANISATION],
%VKORG,
[AUART_VERKAUFSBELEGART],
[VTWEG_VERTRIEBSWEG],
[AUGRU_AUFTRAGSGRUND],
[BSTDK_BESTELLDATUM],
[BSTNK_BESTELLNUMMER],
[BNAME_NAME],
[ERDAT_ANGELEGT_AM],
max([UDATE_GEAENDERT_AM]) as [UDATE_GEAENDERT_AM]
resident [VBAK]
where [ERDAT_YEAR] = '$(j)'
group by %VBELN, [VBELN_VERKAUFSBELEG], [VKORG_VERKAUFSORGANISATION], %VKORG, [AUART_VERKAUFSBELEGART], [VTWEG_VERTRIEBSWEG], [AUGRU_AUFTRAGSGRUND], [BSTDK_BESTELLDATUM], [BSTNK_BESTELLNUMMER], [BNAME_NAME], [ERDAT_ANGELEGT_AM]
;

// If a QVD file for year $(j) already exists, load it
// "Append + update" --> the source only performs inserts/updates
if ($(vQvdReadFlag) >= 0) then
Load * FROM VBAK_$(j).qvd (qvd)
where not exists(%VBELN);
end if

// Store the QVD file with the new rows
store VBAK_$(j) into VBAK_$(j).qvd;
drop table VBAK_$(j);

next


// Determine the last ID (the latest dates loaded)

tmp:
load max([UDATE_GEAENDERT_AM]) as UPDATMax, max([ERDAT_ANGELEGT_AM]) as ERDATMax resident VBAK;
let vUPDATMax = peek('UPDATMax',-1,tmp);
let vERDATMax = peek('ERDATMax',-1,tmp);

if (len('$(vUPDATMax)') <> 0) then
SET DateFormat='YYYYMMDD';
let vVBAKLastUDATE = date($(vUPDATMax));
SET DateFormat='DD.MM.YYYY';
end if

if (len('$(vERDATMax)') <> 0) then
SET DateFormat='YYYYMMDD';
let vVBAKLastERDAT = date($(vERDATMax));
SET DateFormat='DD.MM.YYYY';
end if

drop table tmp;
drop table [VBAK];
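As the comment above notes, this flow only covers inserts and updates from the source. A minimal sketch of how deleted documents could also be purged from one year partition, following the Inner Join idea from the first post (table, file, and field names are taken from the script above; this is an untested assumption, and fetching every key is itself a heavy query on a large table):

// Fetch all document keys that still exist in the source
CurrentKeys:
Load [VBELN] as %VBELN;
SQL SELECT VBELN FROM VBAK;

// Reload the year partition, keeping only rows whose key still exists
[VBAK_$(j)]:
Load * FROM VBAK_$(j).qvd (qvd);

inner join ([VBAK_$(j)])
Load %VBELN resident CurrentKeys;

store VBAK_$(j) into VBAK_$(j).qvd;
drop table [VBAK_$(j)];
drop table CurrentKeys;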


disqr_rm
Partner - Specialist III

OK. Good.

Now, there is a flag under Document Properties to generate a logfile upon execution. Could you please turn it on, if it's not already on, and the next time this VBAK extract runs, grab the log file generated in that folder and attach it here? That will tell us exactly where the time is being spent.

I see it joins with a "Z" table in the first SELECT. It could very well be an index problem as well, but that's what the log file will tell us.

Rakesh

Not applicable
Author

Here is our log file from yesterday. The Z-table logs every table in which records were changed; here is a screenshot.

And here is the log file from the QlikView script.

Attachment: http://community.qlik.com/cfs-file.ashx/__key/CommunityServer.Discussions.Components.Files/27/2742.2_5F00_sap_5F00_jp6_5F00_qvd_5F00_delta.qvw.log

pablolabbe
Luminary Alumni

Florian,

Your script has some queries that take a very long time but return only a few records.

At line 182 of the log, the following SQL takes 51 minutes and returns only 2,080 rows:

SELECT VBELN,VKORG,AUART,AUGRU,BSTDK,BSTNK,BNAME,ERDAT,KNUMV,VTWEG
FROM VBAK
where ERDAT >= '20100526' OR AEDAT >= '20100526'

ERDAT stores the date the document was created, and AEDAT stores the date the record was last changed.

By default, AEDAT is not an indexed field; only ERDAT is indexed, so I think your query does a full table scan to retrieve those few records.

The same filter is used again at lines 358 and 509 of the log, and those queries take about 50 minutes each as well.

Together, these queries take approximately 3 hours to run.

You have another bottleneck at line 113, but that query may be using an index, because VBELN is part of the primary key.

SELECT VBELN,BESTK,LFSTK,LFGSK,WBSTK,ABSTK,GBSTK,AEDAT
FROM VBUK
where ( ( VBELN between '0100490392' and '0199999999' )
or ( VBELN between '1200007042' and '1299999999' )
or ( VBELN between '1300007329' and '1399999999' ) )

8 fields found: VBELN_VERTRIEBSBELEG, BESTK_BESTÄTIGT, LFSTK_LIEFERSTATUS, LFGSK_GESAMTLIEFERSTATUS, WBSTK_GESWARENBEWEGUNGSTAT, ABSTK_ABSAGESTATUS, GBSTK_GESAMTSTATUS, AEDAT_GEÄNDERT_AM, 2,935,300 lines fetched

This query takes 45 minutes to retrieve 2.9 million rows.

Conclusion:
Try to change the way you filter records when retrieving data from VBAK. Why not use the Z-table that logs the updates? I see VBAK is one of the logged tables. Maybe this query could help, using the Z-table as the main filter:
SELECT VBAK~VBELN,VBAK~VKORG,VBAK~AUART,VBAK~AUGRU,
VBAK~BSTDK,VBAK~BSTNK,VBAK~BNAME,VBAK~ERDAT,
VBAK~KNUMV,VBAK~VTWEG
FROM VBAK
JOIN ZPMWM_AEND ON VBAK~VBELN = ZPMWM_AEND~VBELN
where ZPMWM_AEND~TABNAME = 'VBAK'
AND ZPMWM_AEND~UDATE >= '20100526'
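Wrapped into the script posted above, that suggestion would replace the slow VBAK_Temp source query, roughly like this (a sketch combining the LOAD from the script above with the suggested SQL; it assumes newly created documents are also logged in ZPMWM_AEND, which is worth verifying):

VBAK_Temp:
noconcatenate
Load
[VBELN] as %VBELN_Temp,
[VBELN] as [VBELN_VERKAUFSBELEG],
[VKORG] as [VKORG_VERKAUFSORGANISATION],
[VKORG] as %VKORG,
[AUART] as [AUART_VERKAUFSBELEGART],
[VTWEG] as [VTWEG_VERTRIEBSWEG],
[AUGRU] as [AUGRU_AUFTRAGSGRUND],
[BSTDK] as [BSTDK_BESTELLDATUM],
[BSTNK] as [BSTNK_BESTELLNUMMER],
[BNAME] as [BNAME_NAME],
[ERDAT] as [ERDAT_ANGELEGT_AM],
[ERDAT] as [UDATE_GEAENDERT_AM], // placeholder, as in the original script
mid([ERDAT],1,4) as [ERDAT_YEAR];
SQL SELECT VBAK~VBELN,VBAK~VKORG,VBAK~AUART,VBAK~AUGRU,
VBAK~BSTDK,VBAK~BSTNK,VBAK~BNAME,VBAK~ERDAT,
VBAK~KNUMV,VBAK~VTWEG
FROM VBAK
JOIN ZPMWM_AEND ON VBAK~VBELN = ZPMWM_AEND~VBELN
where ZPMWM_AEND~TABNAME = 'VBAK'
AND ZPMWM_AEND~UDATE >= '$(vVBAKLastERDAT)'
;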
Good luck.




Not applicable
Author

This is right: "QlikView needs 8 hours to extract, convert, and aggregate the data since 2006, and each day adds 270,000 records." One of our QVD files, for example "VBAK_2010.qvd", is 24 MB, and the QVD file "VBAK_2009.qvd" is 70 MB. Another question: we have a six-node SAP cluster, but our connection is static, by IP. Is there a way to use load balancing in QV? Thanks for your help and ideas.
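On the load-balancing question: SAP RFC connections can be balanced through a logon group on the message server instead of being pinned to one application server. Whether SAP Connector 5.2 exposes this needs to be checked in its documentation; if it does, the connect statement would swap the static host for message-server parameters, roughly along these lines (hypothetical connection strings using SAP's standard RFC parameter names; provider string, hosts, system ID, and group are all placeholders):

// Current situation: pinned to a single application server by IP
CUSTOM CONNECT TO "Provider=QvSAPConnector.dll;ASHOST=10.1.2.3;SYSNR=00;CLIENT=100;";

// Load-balanced alternative: go through the message server and a logon
// group. MSHOST/R3NAME/GROUP are SAP's standard balancing parameters;
// verify that the installed connector version accepts them.
CUSTOM CONNECT TO "Provider=QvSAPConnector.dll;MSHOST=sapmsg.example.com;R3NAME=PRD;GROUP=PUBLIC;CLIENT=100;";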