Hi all,
Any ideas on tuning the QlikView SAP SQL Connector to increase its speed?
We have been testing a load of 7.5 million records from a single table and it has taken 7 hrs, 22 mins, which is too slow for our needs.
We have tried the following settings on our SAP system and used transaction ST02 to check that the import/export buffers have not been overloaded, etc.
_____________________________________________________________________________
Initial record buffer IRBD
rsdb/ntab/irbdsize 6000 kB Size of initial record buffer
Data area size for the "Initial record buffer" to store the initial record layout of an R/3 table that
is accessed at runtime.
Change to 12000
rsdb/ntab/entrycount 20000 Max. number / 4 of initial records buffered
Number of directory entries created for the "table definition buffer" and
the "field description buffer". The default appears to be 3.33 * 12000 (value of irbdsize) = 39960
Change to 40000
_____________________________________________________________________________
Program buffer PXA
CUA buffer CUA
rsdb/cua/buffersize 3000 kB Size of CUA buffer
The CUA buffer holds objects from the GUI interface, such as menus, pushbuttons, and so on.
In the database, these objects are held in tables D342L and D345T.
Recommended settings for large Systems: 3000 - 6000 KB.
Change to 6000
Affects ipc/shm_psize_40, as the CUA buffer is assigned to an SAP shared memory pool.
Run sappfpar check pf=\\usr\sap\trans\tp_domain_<sid> to check settings
_____________________________________________________________________________
Generic key table buffer TABL
zcsa/table_buffer_area 30000000 Byte Size of generic key table buffer
This buffer holds entries from tables which have been defined in the Data Dictionary with buffer
type X (buffer all entries that belong to active clients) and G (buffer entries according to a generic
key specification).
Change to 60000000 Bytes
Affects ipc/shm_psize_10: In standard systems, the resident-table buffer is stored in shared
memory pool 10. If you change the size of the buffer, then you must change the amount of memory
in pool 10 by the same amount. You can remove the buffer from the pool by setting the
parameter ipc/shm_psize_10 = 0.
Removing the buffer from pool 10 simplifies your profile management. The buffer is stored in its own
shared memory segment, which is allocated automatically when the server is started. You do not have
to change ipc/shm_psize_10 when you change the buffer size.
Run sappfpar check pf=\\usr\sap\trans\tp_domain_<sid> to check settings
zcsa/db_max_buftab 5000 Max. number of buffered objects
Specifies the number of directory entries in the resident-table buffer. This is the maximum number of
tables that can be buffered. Set the parameter to a large enough value to ensure that there are always
free directory entries.
No changes needed.
______________________________________________________________________________________
Single record table buffer TABLP
rtbb/buffer_length 10000 kB Size of single record table buffer
Specifies the size of the partial table buffer. The partial table buffer holds individual records from tables
that are defined in the data dictionary with buffer type P.
Effects upon other parameters: rtbb/max_tables. The efficiency of the partial table buffer depends not only
upon its size but also upon the number of directory entries in the buffer (the maximum number of tables that
can be buffered). You can specify the number of directory entries
in profile parameter rtbb/max_tables.
Change to 20000 KB
rtbb/max_tables 500 Max. number of buffered tables
Specifies the number of directory entries in the partial table buffer. This is the maximum number of tables that
can be buffered. Set the parameter to a large enough value to ensure that there are always free directory entries.
Change to 1000 (this is a count of tables, not a size in kB).
_____________________________________________________________________________
Export/import buffer EIBUF
rsdb/obj/buffersize 40000 kB Size of export/import buffer [kB]
This parameter sets the size of the import/export buffer (kB). The buffer is used for data storage
in the shared memory and is accessed using the ABAP commands EXPORT TO/IMPORT FROM
SHARED BUFFER and DELETE FROM SHARED BUFFER. The commands only affect the local
system. Buffers on external application servers are not changed. If the buffer is full, old buffer
objects are kicked out when new objects are exported to the buffer.
Rule of thumb seems to be to double the current figure.
This should directly affect the QlikView loads.
Change to 400000.
rsdb/obj/max_objects 20000 Maximum number of objects in export/import buffer
According to note 702728, as a rule of thumb the value of rsdb/obj/max_objects should
not be larger than one quarter of the value of rsdb/obj/buffersize (400000 / 4 = 100000).
Change to 100000.
__________________________________________________________________________________
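Pulling the above changes together, the edited lines in the instance profile would look something like this (a sketch only, using the values proposed above; remember the notes about also adjusting the shared memory pools ipc/shm_psize_10 and ipc/shm_psize_40 if the buffers stay inside them):
# proposed buffer settings (sizes in kB except zcsa/table_buffer_area, which is in bytes)
rsdb/ntab/irbdsize = 12000
rsdb/ntab/entrycount = 40000
rsdb/cua/buffersize = 6000
zcsa/table_buffer_area = 60000000
rtbb/buffer_length = 20000
rtbb/max_tables = 1000
rsdb/obj/buffersize = 400000
rsdb/obj/max_objects = 100000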
Having carried out the above, we restarted the SAP system.
In the connection string I used Packetsize=100000 and BufferPercent=12. A higher BufferPercent causes the SAP system to issue the following message in the SM37 log for job name /QTQVC/READ_DATA:
'Not enough buffer space available - decrease buffer percentage used to 10 or a lower value. Include a string like: ;bufferpercentage=10; in the connection string.'
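For reference, the connect statement looks something like this (a sketch only: the host, system number, client and credentials are placeholders, and the exact provider string depends on the installed connector version; Packetsize and BufferPercentage are the parameters discussed above):
CUSTOM CONNECT TO "Provider=QvSAPConnector;ASHOST=sapserver;SYSNR=00;CLIENT=100;XUserId=xxxxxx;XPassword=xxxxxx;Packetsize=100000;BufferPercentage=12;";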
The control record in table /QTQVC/CONTROL (via transaction SE16) shows
Packetsize 43,124, Tablename = Data1024, buffer_percent = 12
__________________________________________________________________________________________________________
What's the best practice for speeding up the connector? I'm currently doing the testing on my PC - is the QV SAP connector throttling the load
due to the PC's spec (Pentium Dual Core 2.8 GHz, 2 GB RAM)?
Is it possible to do parallel loads? How do we synchronise parallel loads so we can produce a single QVD file from them?
Many Thanks for your time.
Dave.
Dear friend,
Can you give me details regarding which table you are extracting? What is the primary memory size of the server, and how much cache memory is allocated by the SAP server?
regards
Sunil Jain.
Hi Sunil,
The table I'm extracting from our BW system is /BI0/AGM_O0100, which is the 'active' table behind the GM Line Items DSO,
sourced from the SAP Grants Management module (we are a university).
I'm currently testing against our Quality Assurance system which holds 7,501,459 records in the above table.
The load takes 7.5 hours when nothing else is running on the system.
The table structure consists of the following: [table structure listing not preserved]
The server has 32 GB of memory but only 4 GB has been allocated as 'extended' memory - see the ST02 screenshot below: [screenshot not preserved]
Any ideas what the bottleneck could be? Could it be the PC I'm using?
Many Thanks for your Help.
Dave.
Hi,
You should expect roughly 1 million records/minute for a 5-column table on decent hardware. The table you are downloading is much wider; do you really need all the fields?
The client-side PC definitely affects the total performance; the processor performance is especially important.
Also be very careful with any manipulations in the load script when reading from SAP; you should rather download directly to a QVD file and then do all manipulations when reading from it instead.
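As a minimal sketch (the table name is the one from this thread; the QVD file name is just an example, and the connection is assumed to be already opened):
// raw extract - no transformations, straight into a QVD
GM_Raw:
LOAD *;
SQL SELECT * FROM /BI0/AGM_O0100;
STORE GM_Raw INTO GM_LineItems_raw.qvd (qvd);
DROP TABLE GM_Raw;
// all manipulations are then done against the local QVD instead of against SAP
GM_Transformed:
LOAD *                            // replace * with only the fields you actually need
FROM GM_LineItems_raw.qvd (qvd);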
Kind regards,
Lars Wahlstedt
Dear Dave,
Your PC configuration is too low to extract larger tables from SAP BW. We are already extracting more than a crore (10 million) records from SAP BW, but we have 32 GB of RAM, and that much is sufficient.
But 2 GB is too low.
Normally the QV SAP connector extracts more than 1 crore records in 10 minutes.
Regards
Sunil Jain.
Thanks Lars,
1 million records a minute sounds miraculous :-), we are averaging ~20,000 a minute after ramping up the import/export parameter rsdb/obj/buffersize to 400000 (400 MB) and rsdb/obj/max_objects to 100000, and restarting the SAP instance.
We need to load all the line item fields to form a base QVD, which will subsequently be transformed into star-schema-type QVDs. The idea is that the user can 'drill down' to the base data if required, and also that it's far quicker to load data from QVDs. We also want to replace our SAP BW front-end reporting tools with QlikView.
The problem lies with getting the data out of the sap system in a timely manner.
I'm testing the SAP connector on a PC (dual-core Pentium 2.79 GHz, 2 GB RAM) running 32-bit Windows. This is because we are in the process of purchasing a development server and the production PC server is off limits (with good reason).
Checking Windows Task Manager, the PC (CPU/memory) doesn't look overloaded as I have been careful with the background tasks
and there's hardly any network activity. The only thing I've noticed is that the screen display showing the load is updated every two minutes or so;
it then goes away and does some thinking...
I've also noticed that the /QTQVC/CONTROL record on the SAP server is constantly accessed during the 'read' process. Would an index be beneficial?
No manipulations are carried out on the LOAD statement from SAP... as I've seen the other forum posts :-))
I've now run out of ideas... is there a way of splitting the table, loading in parallel and reassembling into a QVD file which can then be updated on a daily basis? I've read posts where multiple instances of QV.exe are used; however, there is no 'event' synchronisation whereby multiple job threads can be executed in parallel and then loaded into a single object - in SAP BW we have 'process chains' - is there equivalent or 'planned' functionality for QlikView?
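As a rough sketch of what I have in mind (purely illustrative: it assumes the table can be sliced on a field such as FISCPER, and that each parallel QV.exe instance stores its slice to its own QVD):
// each parallel instance extracts one slice, e.g.:
// SQL SELECT * FROM /BI0/AGM_O0100 WHERE FISCPER <= '2009012';
// STORE GM_Slice INTO GM_slice_1.qvd (qvd);
// reassembly script, run once all slice extracts have finished:
Base:
LOAD * FROM GM_slice_*.qvd (qvd);   // a wildcard load concatenates identically structured QVDs
STORE Base INTO GM_LineItems.qvd (qvd);
The missing piece is still the 'event' synchronisation to know when all the slices have finished.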
Cheers.
Dave.
Thanks Sunil.
We might be able to wrangle 32 GB of RAM for the DEV server, but it's unlikely my PC will get more RAM. Besides, 32-bit Windows doesn't allow a program to access more than 2 GB, so 64-bit Windows 7 is probably the way forward.
Does anybody know whether the PC client side of the SAP Connector actually restricts bandwidth usage depending on the PC spec? That would explain why, whatever changes we make on the SAP system, the loading times remain constant. The packet size increased from a few thousand to just over twenty thousand, but it still took 7.5 hours. I tried increasing the BufferPercentage, but anything over 11% crashes the SAP job.
Cheers.
Dave.
Hi:
I have the same problem: table FIGL, 20 million records, 102 fields.
I posted a message about using the parallel option in the query for Oracle (link below).
I don't know what to do with SAP. It takes 6 hours and my client wants something faster.
http://community.qlik.com/message/403401#403401
Thanks in advance.
Dave Li, can you share the details regarding how you solved the above problem?