Not applicable

How To Tune QV SAP SQL Connector to increase load speed

Hi all,

Does anyone have ideas on tuning the QlikView SAP SQL Connector to increase its load speed?

We have been testing a load of 7.5 million records from a single table and it has taken 7 hrs, 22 mins, which is too slow for our needs.

We have tried the following settings on our SAP system and used transaction ST02 to check that the import/export buffers have not been overloaded.

_____________________________________________________________________________

Initial record buffer IRBD

rsdb/ntab/irbdsize 6000 kB Size of initial record buffer

Data area size for the "Initial record buffer" to store the initial record layout of an R/3 table that

is accessed at runtime.

Change to 12000

rsdb/ntab/entrycount 20000 Max. number / 4 of initial records buffered

Number of directory entries created for the "table definition buffer" and

the "field description buffer". The default appears to be 3.33 * 12000 (value of irbdsize) = 39960

Change to 40000

_____________________________________________________________________________

Program buffer PXA

CUA buffer CUA

rsdb/cua/buffersize 3000 kB Size of CUA buffer

The CUA buffer holds objects from the GUI interface, such as menus, pushbuttons, and so on.

In the data base, these objects are held in tables D342L and D345T.

Recommended settings for large Systems: 3000 - 6000 KB.

change to 6000

Affects ipc/shm_psize_40, as the CUA buffer is assigned to an SAP shared memory pool.

Run sappfpar check pf=\\usr\sap\trans\tp_domain_<sid> to check settings

_____________________________________________________________________________

Generic key table buffer TABL

zcsa/table_buffer_area 30000000 Byte Size of generic key table buffer

This buffer holds entries from tables which have been defined in the Data Dictionary with buffer

type X (buffer all entries that belong to active clients) and G (buffer entries according to a generic

key specification).

Change to 60000000 Bytes

Affects ipc/shm_psize_10: In standard systems, the resident-table buffer is stored in shared

memory pool 10. If you change the size of the buffer, then you must change the amount of memory

in pool 10 by the same amount. You can remove the buffer from the pool by setting the

parameter ipc/shm_psize_19=0.

Removing the buffer from pool 10 simplifies your profile management. The buffer is stored in its own

shared memory segment, which is allocated automatically when the server is started. You do not have

to change ipc/shm_psize_10 when you change the buffer size.

Run sappfpar check pf=\\usr\sap\trans\tp_domain_<sid> to check settings

zcsa/db_max_buftab 5000 Max. number of buffered objects

Specifies the number of directory entries in the resident-table buffer. This is the maximum number of

tables that can be buffered. Set the parameter to a large enough value to ensure that there are always

free directory entries.

No Changes needed.

______________________________________________________________________________________

Single record table buffer TABLP

rtbb/buffer_length 10000 kB Size of single record table buffer

Specifies the size of the partial table buffer. The partial table buffer holds individual records from tables

that are defined in the data dictionary with buffer type P.

Effects upon other parameters: rtbb/max_tables. The efficiency of the partial table buffer depends not only

upon its size but also upon the number of directory entries in the buffer (the maximum number of tables that

can be buffered). You can specify the number of directory entries

in profile parameter rtbb/max_tables.

Change to 20000 KB

rtbb/max_tables 500 Max. number of buffered tables

Specifies the number of directory entries in the partial table buffer. This is the maximum number of tables that

can be buffered. Set the parameter to a large enough value to ensure that there are always free directory entries.

Change to 1000 (this is a count of tables, not KB).

_____________________________________________________________________________

Export/import buffer EIBUF

rsdb/obj/buffersize 40000 kB Size of export/import buffer [kB]

This parameter sets the size of the import/export buffer (KB). The buffer is used for data storage

in the shared memory and is accessed using the ABAP commands EXPORT TO/IMPORT FROM

SHARED BUFFER and DELETE FROM SHARED BUFFER. The commands only affect the local

system. Buffers on external application servers are not changed. If the buffer is full, old buffer

objects are kicked out when new objects are exported to the buffer.

Rule of thumb seems to be to double the current figure.

This should directly affect the QlikView loads.

Change to 400000.

rsdb/obj/max_objects 20000 Maximum number of objects in export / import buffer

According to note 702728, as a rule of thumb the value of rsdb/obj/max_objects should

not be larger than one quarter of the value of rsdb/obj/buffersize.

Change to 100000.

__________________________________________________________________________________
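For reference, here are all of the changes above gathered into a single instance-profile fragment (just a summary of the values listed in the sections above; in practice these would be maintained via transaction RZ10, followed by an instance restart):

```
# Instance profile fragment - values as recommended above
rsdb/ntab/irbdsize      = 12000      # initial record buffer (kB)
rsdb/ntab/entrycount    = 40000      # initial record buffer directory entries
rsdb/cua/buffersize     = 6000       # CUA buffer (kB)
zcsa/table_buffer_area  = 60000000   # generic key table buffer (bytes)
rtbb/buffer_length      = 20000      # single record table buffer (kB)
rtbb/max_tables         = 1000       # single record buffer directory entries
rsdb/obj/buffersize     = 400000     # export/import buffer (kB)
rsdb/obj/max_objects    = 100000     # max objects in export/import buffer
```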

Having carried out the above, we restarted the SAP system.

In the connection string I used Packetsize = 100000 and BufferPercent=12. A higher BufferPercent causes the sap system to issue the following message in the SM37 log file for job name /QTQVC/READ_DATA

'Not enough buffer space available - decrease buffer percentage used to 10 or a lower value. Include a string like: ;bufferpercentage=10; in the connection string.'

The control record in table /QTQVC/CONTROL (via transaction SE16) shows

Packetsize 43,124, Tablename = Data1024, buffer_percent = 12
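For anyone trying to reproduce this, the two settings sit in the connection string of the QlikView load script. A sketch of what such a script looks like (the provider name is from memory, and the host, system number, client, and credentials below are placeholders, not our real values):

```
// QlikView load-script sketch - ASHOST/SYSNR/CLIENT/UserId/Password are placeholders
CUSTOM CONNECT TO "Provider=QvSAPConnector.dll;ASHOST=bwserver;SYSNR=00;CLIENT=100;UserId=QVUSER;Password=xxx;PacketSize=100000;BufferPercentage=10;";

GM_Base:
SQL SELECT * FROM /BI0/AGM_O0100;

// Persist straight to a QVD for later transformation passes
STORE GM_Base INTO [GM_O0100.qvd] (qvd);
```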

__________________________________________________________________________________________________________

What's the best practice for speeding up the connector? I'm currently doing the testing on my PC. Is the QV SAP connector throttling the load due to the PC's spec (Pentium Dual Core 2.8 GHz, 2 GB RAM)?

Is it possible to do parallel loads? How do we synchronise parallel loads so we can produce a single QVD file from them?

Many Thanks for your time.

Dave.

8 Replies
suniljain
Master

Dear friend,

Can you give me details on which table you are extracting? What is the primary memory size of the server, and how much cache memory is allocated by the SAP server?

regards

Sunil Jain.

Not applicable
Author

Hi Sunil,

The table I'm extracting from our BW system is /BI0/AGM_O0100, which is the 'active' table behind the DSO GM Line Items, sourced from the SAP Grants Management module (we are a university).

I'm currently testing against our Quality Assurance system which holds 7,501,459 records in the above table.

The load takes 7.5 hours when nothing else is running on the system.

The table structure consists of the following:

Field | Data Element | Data Type | Length | Decimal Places | Short Description
GM_DOCNR | /BI0/OIGM_DOCNR | CHAR | 10 | 0 | Ledger Document Number
FISCVARNT | /BI0/OIFISCVARNT | CHAR | 2 | 0 | Fiscal year variant
FISCPER | /BI0/OIFISCPER | NUMC | 7 | 0 | Fiscal year / period
GM_DOCCT | /BI0/OIGM_DOCCT | CHAR | 1 | 0 | Document Category
GM_DOCLN | /BI0/OIGM_DOCLN | CHAR | 6 | 0 | Ledger Line Item Document Number
CURTYPE | /BI0/OICURTYPE | CHAR | 2 | 0 | Currency Type
SOURSYSTEM | RSSOURSYSID | CHAR | 2 | 0 | Source system ID
GM_BACTIV | /BI0/OIGM_BACTIV | CHAR | 4 | 0 | Business Process
ASSET_MAIN | /BI0/OIASSET_MAIN | CHAR | 12 | 0 | Main Asset Number
ASSET | /BI0/OIASSET | CHAR | 4 | 0 | Asset Sub-number
FI_AWORG | /BI0/OIFI_AWORG | CHAR | 10 | 0 | Reference Organizational Units
FI_AWTYP | /BI0/OIFI_AWTYP | CHAR | 5 | 0 | Reference Procedure
GM_BDPH | /BI0/OIGM_BDPH | CHAR | 1 | 0 | Budget phase
GM_BDGST | /BI0/OIGM_BDGST | CHAR | 1 | 0 | Workflow State of a Record in the Budget
GM_BDRLS | /BI0/OIGM_BDRLS | CHAR | 1 | 0 | Budget Release Status
ACDOC_NO_F | /BI0/OIACDOC_NO_F | CHAR | 10 | 0 | FI Document Number
GM_BLSTA | /BI0/OIGM_BLSTA | CHAR | 1 | 0 | Billing Status for Item
PSTNG_DATE | /BI0/OIPSTNG_DATE | DATS | 8 | 0 | Posting date in the document
BUS_AREA | /BI0/OIBUS_AREA | CHAR | 4 | 0 | Business area
ACDOC_IT_F | /BI0/OIACDOC_IT_F | NUMC | 3 | 0 | FI Document Posting Lines
CHRT_ACCTS | /BI0/OICHRT_ACCTS | CHAR | 4 | 0 | Chart of accounts
CMMT_ITEM | /BI0/OICMMT_ITEM | CHAR | 24 | 0 | Commitment item
COSTCENTER | /BI0/OICOSTCENTER | CHAR | 10 | 0 | Cost Center
CO_AREA | /BI0/OICO_AREA | CHAR | 4 | 0 | Controlling area
CREATEDON | /BI0/OICREATEDON | DATS | 8 | 0 | Date on which the record was created
CREA_TIME | /BI0/OICREA_TIME | TIMS | 6 | 0 | Time Created
GM_DOCTY | /BI0/OIGM_DOCTY | CHAR | 2 | 0 | FI-SL Document Type
GM_DCORG | /BI0/OIGM_DCORG | CHAR | 1 | 0 | Document's Technical Origin
RES_CLOSED | /BI0/OIRES_CLOSED | CHAR | 1 | 0 | Indicator: Document item completed
FUNC_AREA | /BI0/OIFUNC_AREA | CHAR | 16 | 0 | Functional area
FM_AREA | /BI0/OIFM_AREA | CHAR | 4 | 0 | Financial management area
FUNDS_CTR | /BI0/OIFUNDS_CTR | CHAR | 16 | 0 | Funds Center
GL_ACCOUNT | /BI0/OIGL_ACCOUNT | CHAR | 10 | 0 | G/L Account
RES_DEL | /BI0/OIRES_DEL | CHAR | 1 | 0 | Indicator: Document item deleted
MATERIAL | /BI0/OIMATERIAL | CHAR | 18 | 0 | Material
PU_MEASURE | /BI0/OIPU_MEASURE | CHAR | 24 | 0 | Funded Program
NETWORK | /BI0/OINETWORK | CHAR | 12 | 0 | Network
GM_OKEY_P | /BI0/OIGM_OKEY_P | CHAR | 20 | 0 | Previous Document: Object Key
GM_OORG_P | /BI0/OIGM_OORG_P | CHAR | 10 | 0 | Previous Document: Reference Organisational Units
GM_OPOS_P | /BI0/OIGM_OPOS_P | CHAR | 10 | 0 | Previous Document: Reference Line Item
GM_OTYP_P | /BI0/OIGM_OTYP_P | CHAR | 5 | 0 | Previous Document: Reference Procedure
COORDER | /BI0/OICOORDER | CHAR | 12 | 0 | Order Number
PROFIT_CTR | /BI0/OIPROFIT_CTR | CHAR | 10 | 0 | Profit Center
GM_BDVLN | /BI0/OIGM_BDVLN | CHAR | 3 | 0 | Budget Validity Number
COMP_CODE | /BI0/OICOMP_CODE | CHAR | 4 | 0 | Company code
PU_RACTIV | /BI0/OIPU_RACTIV | CHAR | 4 | 0 | FI-SL Business Reference Procedure
PU_RDOCCT | /BI0/OIPU_RDOCCT | CHAR | 1 | 0 | Reference document category
PU_RDOCLN | /BI0/OIPU_RDOCLN | CHAR | 6 | 0 | Number of Reference Line Item
PU_RDOCNR | /BI0/OIPU_RDOCNR | CHAR | 10 | 0 | Reference document number of an accounting document
GM_RDDET | /BI0/OIGM_RDDET | CHAR | 20 | 0 | Reference Key
REF_DOC_FY | /BI0/OIREF_DOC_FY | NUMC | 4 | 0 | Fiscal Year of Reference Document
FUND | /BI0/OIFUND | CHAR | 10 | 0 | Fund
/BIC/ZGM_FNDTP | /BIC/OIZGM_FNDTP | CHAR | 1 | 0 | Fund Type (Internal or External)
GM_FSCYR | /BI0/OIGM_FSCYR | NUMC | 4 | 0 | Fiscal year of Grantee Organization
GRANT_NBR | /BI0/OIGRANT_NBR | CHAR | 20 | 0 | Grant
GM_SPNCL | /BI0/OIGM_SPNCL | CHAR | 20 | 0 | Sponsored Class
GM_SCLTY | /BI0/OIGM_SCLTY | CHAR | 1 | 0 | Sponsored Class Type
GM_SPNPG | /BI0/OIGM_SPNPG | CHAR | 20 | 0 | Sponsored Program
GM_STATI | /BI0/OIGM_STATI | CHAR | 1 | 0 | GM Statistical Indicator
GM_RTRAN | /BI0/OIGM_RTRAN | CHAR | 4 | 0 | Business Transaction
GM_VLTYP | /BI0/OIGM_VLTYP | CHAR | 2 | 0 | GM Value Type
GM_BDVER | /BI0/OIGM_BDVER | CHAR | 3 | 0 | Budget version
GM_SGTXT | /BI0/OIGM_SGTXT | CHAR | 50 | 0 | Item text
CREATEDBY | /BI0/OICREATEDBY | CHAR | 12 | 0 | Name of person who created the object
VTYPE | /BI0/OIVTYPE | NUMC | 3 | 0 | Value Type for Reporting
VALUATION | /BI0/OIVALUATION | NUMC | 1 | 0 | Valuation view
WBS_ELEMT | /BI0/OIWBS_ELEMT | CHAR | 24 | 0 | Work Breakdown Structure Element (WBS Element)
VALUE_DATE | /BI0/OIVALUE_DATE | DATS | 8 | 0 | Value Date
/BIC/ZGM_TXCO | /BIC/OIZGM_TXCO | CHAR | 15 | 0 | GM TX. SAP (CO) Ref.
/BIC/ZGM_TXDR | /BIC/OIZGM_TXDR | CHAR | 16 | 0 | GM TX. Document Ref.
/BIC/ZGM_TXER | /BIC/OIZGM_TXER | CHAR | 12 | 0 | GM TX. External Ref.
/BIC/ZGM_TXFI | /BIC/OIZGM_TXFI | CHAR | 15 | 0 | GM TX. SAP (FI) Ref.
/BIC/ZGM_TXOS | /BIC/OIZGM_TXOS | CHAR | 10 | 0 | GM TX. Offset
DATEFROM | /BI0/OIDATEFROM | DATS | 8 | 0 | Valid from
DATETO | /BI0/OIDATETO | DATS | 8 | 0 | Valid to
/BIC/ZGM_GLCDE | /BIC/OIZGM_GLCDE | CHAR | 10 | 0 | GL Account Non-Compounded
/BIC/ZGM_RBVP | /BIC/OIZGM_RBVP | CHAR | 3 | 0 | Budget Validity Number from Posting Date
FISCPER3 | /BI0/OIFISCPER3 | NUMC | 3 | 0 | Posting period
FISCYEAR | /BI0/OIFISCYEAR | NUMC | 4 | 0 | Fiscal year
CALMONTH | /BI0/OICALMONTH | NUMC | 6 | 0 | Calendar Year/Month
GM_AMT | /BI0/OIGM_AMT | CURR | 17 | 2 | Amount
/BIC/ZGM_AAMT | /BIC/OIZGM_AAMT | CURR | 17 | 2 | Actual Amount
GM_QTY | /BI0/OIGM_QTY | QUAN | 17 | 3 | Quantity
CURRENCY | /BI0/OICURRENCY | CUKY | 5 | 0 | Currency key
UNIT | /BI0/OIUNIT | UNIT | 3 | 0 | Unit of measure
RECORDMODE | RODMUPDMOD | CHAR | 1 | 0 | BW Delta Process: Record Mode

Total length: 728

The server has 32 GB of memory but only 4 GB has been allocated as 'extended' memory - see the ST02 screenshot below:

Any ideas what the bottleneck could be? Could it be the PC I'm using?

Many Thanks for your Help.

Dave.

Lars_Wahlstedt
Employee

Hi,

You should expect roughly 1 million records/minute for a 5-column table on decent hardware. The table you are downloading is much wider; do you really need all the fields?

The client-side PC definitely affects the total performance; the processor performance is especially important.

Also, be very careful with any manipulations in the load script when reading from SAP; you should instead download directly to a QVD file and then do all the manipulations when reading from that.
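In outline (table and file names here are just examples, not from a real system):

```
// 1) Raw extract: no manipulations while reading from SAP, so the connector runs flat out
Raw:
SQL SELECT * FROM /BI0/AGM_O0100;
STORE Raw INTO [Raw_GM_O0100.qvd] (qvd);
DROP TABLE Raw;

// 2) Later pass: do the manipulations against the QVD, which is far cheaper to read
Fact:
LOAD GM_DOCNR,
     FISCPER,
     Upper(GM_DOCCT) AS DocCategory   // example of a transformation deferred to this pass
FROM [Raw_GM_O0100.qvd] (qvd);
```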

Kind regards,

Lars Wahlstedt

suniljain
Master

Dear Dave,

Your PC configuration is too low to extract larger tables from SAP BW. We are already extracting more than a crore (10 million) of records from SAP BW, but we have 32 GB of RAM, and that much is sufficient.

2 GB is too low.

Normally the QV SAP connector extracts more than a crore of records in 10 minutes.

Regards

Sunil Jain.

Not applicable
Author

Thanks Lars,

1 million records a minute sounds miraculous :-). We are averaging ~20,000 a minute after ramping up the import/export parameter rsdb/obj/buffersize to 400000 (400 MB) and rsdb/obj/max_objects to 100000, and restarting the SAP instance.

We need to load all the line item fields to form a base QVD, which will subsequently be transformed into star-schema-type QVDs. The idea is that the user can 'drill down' to the base data if required, and also that it's far quicker to load data from QVDs. We also want to replace our SAP BW front-end reporting tools with QlikView.

The problem lies with getting the data out of the SAP system in a timely manner.

I'm testing the SAP connector on a PC (dual-core Pentium 2.79 GHz, 2 GB RAM) running 32-bit Windows. This is because we are in the process of purchasing a development server, and the production server is off limits (with good reason).

Checking Windows Task Manager, the PC (CPU/memory) doesn't look overloaded, as I have been careful with the background tasks, and there's hardly any network activity. The only thing I've noticed is that the screen display showing the load is updated every two minutes or so; it then goes away and does some thinking...

I've also noticed that the /QTQVC/CONTROL record on the SAP server is constantly accessed during the 'read' process. Would an index be beneficial?

No manipulations are carried out on the LOAD statement from SAP... as I've seen the other forum posts :-))

I've now run out of ideas... Is there a way of splitting the table, loading in parallel, and reassembling into a QVD file which can then be updated on a daily basis? I've read posts where multiple instances of QV.exe are used; however, there is no 'event' synchronisation whereby multiple job threads can be executed in parallel and then loaded into a single object. In SAP BW we have 'process chains'; is there equivalent or planned functionality for QlikView?
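In outline, what I have in mind is purely a sketch like this, using FISCYEAR as an example slice field (assuming the connector accepts a WHERE clause):

```
// Slice script (one per fiscal year, run as separate QlikView instances):
Slice:
SQL SELECT * FROM /BI0/AGM_O0100 WHERE FISCYEAR = '2009';
STORE Slice INTO [GM_O0100_2009.qvd] (qvd);

// Final assembly script, run only after all slices have finished:
GM_Base:
LOAD * FROM [GM_O0100_*.qvd] (qvd);   // wildcard load auto-concatenates like-structured QVDs
STORE GM_Base INTO [GM_O0100_All.qvd] (qvd);
```

The missing piece is exactly the synchronisation: something has to guarantee the assembly script only runs after every slice script has completed.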

Cheers.

Dave.

Not applicable
Author

Thanks Sunil.

We might be able to wrangle 32 GB of RAM for the DEV server, but it's unlikely my PC will get more RAM. Besides, 32-bit Windows doesn't allow a program to access more than 2 GB, so 64-bit Windows 7 is probably the way forward.

Does anybody know whether the PC client side of the SAP connector actually restricts bandwidth usage depending on the PC spec? That would explain why, whatever changes we make on the SAP system, the loading times remain constant. The packet size increased from a few thousand to just over twenty thousand, but it still took 7.5 hours. I tried increasing the BufferPercentage, but anything over 11% crashes the SAP job.

Cheers.

Dave.

christian77
Partner - Specialist

Hi:

I have the same problem: table FIGL, 20 million records, 102 fields.

I posted my message about using the parallel option in the query for Oracle.

I don't know what to do with SAP. It takes 6 hours and my client wants something faster.

http://community.qlik.com/message/403401#403401

Thanks in advance.

preminqlik
Specialist II

Dave Li, can you share the details of how you solved the above problem?