There is a bit of a gotcha when using Pervasive databases - unless someone is aware of a way around this.
It would appear that when the ODBC connection is used, any indexes set up in the Pervasive database are not employed, so a full table scan is performed each time. When we tried to pull out just the current month's data from a table with a few years of history, it was the best part of an hour before any rows were returned (as the new rows are at the end of the table). Doing this in the desktop version of QlikView, it appears to hang, because the elapsed time is only updated as rows come in; it is also impossible to cancel out cleanly unless rows are being brought in.
This was quite a long time ago now, on a client site that I have not been back to for some time.
We did get the data extracts going at a sensible rate in the end, and I seem to recall that the indexes would be used provided the SQL statement was structured to match the index exactly. For example, make sure the fields in the WHERE clause appear in the same order as they do in the index. Also, if there is a field in the index that you don't want to restrict your extract by, still include it in the WHERE clause as Field LIKE '%' (a wildcard match that does not actually filter any rows) to ensure the index is used.
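As a rough sketch of what I mean - the table and field names here are invented for illustration - suppose the Pervasive table has a composite index on (SalesDate, Region). The fields appear in the WHERE clause in the same order as in the index, and Region is kept in the clause with a wildcard even though we don't actually want to filter on it:

```
// Hypothetical SELECT passed through the ODBC connection from a
// QlikView load script. Index assumed on (SalesDate, Region).
SQL SELECT SalesDate, Region, Amount
FROM SalesHistory
WHERE SalesDate >= '2010-06-01'
  AND SalesDate <  '2010-07-01'
  AND Region LIKE '%';  // no real restriction, but keeps the index usable
```

The point is purely about matching the shape of the index - the wildcard predicate adds nothing to the result, it just stops the query parser from abandoning the index.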
The requirement for this derives from the fact that the query parser in Pervasive (or maybe just the ODBC driver) is not as intelligent as those of most RDBMSs and does not work out execution plans very well.
If you have serious amounts of data to move about and want bespoke drivers, then I can recommend speaking to a company called Attunity. Getting data out of 'classic' systems is what they are all about.
This may be stating the obvious, but if data transfer from your source system is slow, put some thought into QVD usage and incremental loads - search Qlik Community for more information on this.
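To sketch the incremental-load idea (all table, field, and file names here are made up, and I'm assuming an OrderID primary key and a vLastLoadDate variable set earlier in the script): pull only the new rows over the slow connection, then append the history already held locally in a QVD, which loads very quickly:

```
// 1. Fetch only rows added since the last run over the slow ODBC link.
Sales:
SQL SELECT OrderID, SalesDate, Region, Amount
FROM SalesHistory
WHERE SalesDate >= '$(vLastLoadDate)';

// 2. Append the existing history from the local QVD (fast, optimized load),
//    skipping any keys we just fetched so boundary rows are not duplicated.
Concatenate (Sales)
LOAD OrderID, SalesDate, Region, Amount
FROM Sales.qvd (qvd)
WHERE Not Exists(OrderID);

// 3. Write the combined table back out for the next reload.
STORE Sales INTO Sales.qvd (qvd);
```

This way the expensive trip to the source system only ever touches the new rows, and the bulk of the history comes from disk.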