I have a table with 780 million rows and 5 columns, with the following information:
When loaded in isolation it takes only 16 minutes, but it gets stuck afterwards. In case an incremental load is not possible, what are the other loading options?
I use the deployment framework, and the problem occurs both when connecting directly to the database (extract) and when loading from QVD. I need all the data, as per the requirement, and there is at the moment no way to reduce it.
The RAM consumption isn't quite clear from this information. I suggest you restart the QlikView server services, start this reload again and monitor the RAM consumption carefully in the Task Manager. You should also pay attention to other activities on this server, since not only qvb.exe will require RAM but also qvs.exe and perhaps some other services, too.
The getting stuck is clear - there is not enough RAM available. But it looks a bit strange to me that a QVD with 780 M records and only 5 columns takes so much RAM - are you sure it's only 5 columns and you didn't perform a wildcard load on more columns? What kind of data do those columns contain?
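To rule a wildcard load out, the load could name the fields explicitly instead of using LOAD * - a minimal sketch, where the field names and the QVD path are assumptions:

```
// Explicit field list - avoids LOAD * silently picking up
// extra columns that happen to exist in the QVD.
BigTable:
LOAD PrimaryKey,
     DateField,
     Field3,
     Field4,
     Field5
FROM [..\QVD\BigTable.qvd] (qvd);
```

If this explicit load behaves differently from the original one, the original was probably pulling in more columns than expected.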
The table contains a primary key, and the other columns each contain no more than 1% distinct values relative to the total number of primary keys.
I loaded this table in isolation, directly from SQL and also in the transform file. In both cases, after loading all the data rows, it gets stuck while QV shows "..... lines fetched". In the transform file, the "date" field is transformed with a floor(). The table loads really fast, in less than 15 minutes (in both the extract and the transform file), but then it gets stuck and stays at "... lines fetched". What is strange to me is that before our migration took place, this was not the case. User rights were changed/migrated, the Publisher was updated, and then the problem appeared. Is there any place where we can check? (Server side / QV)?
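The transform step described above would look roughly like this (field names and path are assumptions, not the actual script):

```
// Transform layer: Floor() truncates the date/timestamp to a whole
// number (day), which sharply reduces the distinct values in the field.
Transformed:
LOAD PrimaryKey,
     Floor("date") as DateNum,
     Field3,
     Field4,
     Field5
FROM [..\QVD\Extract_BigTable.qvd] (qvd);
```

One thing worth knowing here: any transformation such as Floor() forces an unoptimized QVD load (optimized loads allow only field renaming and a simple WHERE EXISTS), which is slower and needs considerably more RAM than an optimized one.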
Are there other loadings within this application? If yes, synthetic keys and/or circular references could occur between the tables, and QlikView can need a long time to resolve them (with 780 M records many hours, and very probably it will need more RAM than is available). You could check this within the debugger or with a FIRST statement to limit the data - maybe to 100 records - and then see what happens within the table viewer.
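The FIRST check could look like this - a sketch with an assumed QVD path:

```
// FIRST limits the immediately following LOAD to 100 records,
// so the resulting data model (synthetic keys, circular references)
// can be inspected in the table viewer within seconds instead of
// waiting for 780 M rows.
FIRST 100
LOAD *
FROM [..\QVD\BigTable.qvd] (qvd);
```

With all tables limited this way, the table viewer (Ctrl+T) shows immediately whether $Syn tables or loops appear between them.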
Another thought is to leave the primary key out of these loadings/tables - is there really a reason to keep it if you don't apply incremental loading?
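Dropping the key would be as simple as omitting it from the field list - again a sketch with assumed field names:

```
// A primary key with 780 M distinct values is by far the most
// expensive field in the table; if nothing links or references it,
// simply leave it out of the LOAD.
Facts:
LOAD DateNum,
     Field3,
     Field4,
     Field5
FROM [..\QVD\Transform_BigTable.qvd] (qvd);
```

If the key is still needed for linking to another table, AutoNumber(PrimaryKey) would at least replace the wide values with compact sequential integers, though computing it over 780 M rows has its own cost.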