We are using SAP Connector 5.5. We updated our SAP Oracle DB from 10.2.0.2 to 10.2.0.5. After this upgrade, only one of my jobs is aborting with the following message:
Error Fetch aborted after 241 retries. Key = TIMEOUT_READ_MEMORY (ID:00 Type:E Number:001 Timeout when trying to read shared buffer)
2011-02-28 07:11:28 Progress Disconnected
2011-02-28 07:11:28 Error /QTQVC/FETCH_STREAM failed after 00:00:00 Key = RFC_INVALID_HANDLE (An invalid handle was passed to the API call)
Has anyone experienced this problem? Would adjusting the TimeOutFetch setting help? Thanks
I'm getting this same error. Did you ever get a solution for this?
Thanks
Not yet. We figured out a work-around. What type of abort are you getting? Did you check the logfile?
Are you trying to read an SAP cluster table?
I had to downgrade to SAP Connector 5.3. In 5.5, QlikView uses a percentage of the total shared memory buffer on the SAP system, but sometimes that percentage is larger than the memory actually available in the buffer, which overflows it and causes this error among others. In version 5.3 of the connector, QlikView uses a percentage of the total AVAILABLE shared memory buffer; it is a little slower, but it works fine. Schedule your reloads at night.
The other solution is to increase the shared memory buffer in the SAP System. In my case, for my customer, that was not feasible.
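For reference, the TIMEOUT_READ_MEMORY text suggests the connector is streaming results through the ABAP export/import shared buffer. If that is the case (an assumption you should confirm with your Basis team before touching anything), its size is set by instance profile parameters along these lines; the values below are placeholders only, and changing them requires an instance restart:

# Example instance profile entries (placeholder values, to be sized by Basis)
rsdb/obj/buffersize = 40000
rsdb/obj/max_objects = 20000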
Good luck!
I had the same problem on the 5.6 SR1 version of the connector, and noticed that the error always occurs on the second SELECT statement (at least in ReloadSAPDD.qvw).
A quick workaround is to disconnect and reconnect for each of the five tables.
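For illustration, here is a minimal load-script sketch of that workaround, using the connector's CUSTOM CONNECT and DISCONNECT statements; the connection string and table names are placeholders, not the actual list from ReloadSAPDD.qvw:

// Reconnect before each load and disconnect right after it, so every
// SELECT runs in a fresh session (placeholder connection string and tables).
CUSTOM CONNECT TO "Provider=QvSAPConnector;ASHOST=sapserver;SYSNR=00;CLIENT=100;UserId=myuser;Password=mypwd;";
Table1:
SQL SELECT * FROM DD02L;
DISCONNECT;

CUSTOM CONNECT TO "Provider=QvSAPConnector;ASHOST=sapserver;SYSNR=00;CLIENT=100;UserId=myuser;Password=mypwd;";
Table2:
SQL SELECT * FROM DD03L;
DISCONNECT;

// ...repeat the connect / load / disconnect pattern for the remaining tables

Opening and closing the connection for every table adds some overhead, but it seems to avoid the shared-buffer read timing out on the second SELECT of a long-lived session.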
Hope this helps.
Is there any concrete solution for this? Could someone point me to which area we need to work on to resolve it?
Quick help would be appreciated.
Thanks,
Alok