Data loads in Qlik Catalog fail to complete due to an out-of-memory error.
The issue can be addressed by increasing memory on the host running Qlik Catalog and the engine, by decreasing the number of concurrent loads Catalog will process (32 by default), or both.
Resolution
Decrease the number of concurrent loads to 4.
The setting is changed in the core_env.properties file, located at QDC_HOME/<Tomcat>/conf/core_env.properties.
A Tomcat restart is required after making the change in the core_env.properties file.
Make the change when no loads are running. Find and edit the following (do not add a new entry):
# Number of Java threads in pool used for each of loading data, prepare and publish.
# Restart required. Default: 32
hadoop.job.poolsize=4
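As a sketch, the edit above can be scripted on a Linux host. The file creation step below only builds a sample core_env.properties for illustration; against a real install, point CORE_ENV at the actual file under QDC_HOME/<Tomcat>/conf and skip that step.

```shell
# Illustrative path: substitute your real QDC_HOME/<Tomcat>/conf location.
CORE_ENV="core_env.properties"

# Demonstration only: create a sample file with the default value.
cat > "$CORE_ENV" <<'EOF'
# Number of Java threads in pool used for each of loading data, prepare and publish.
# Restart required. Default: 32
hadoop.job.poolsize=32
EOF

# Edit the existing entry in place; do not append a duplicate entry.
sed -i 's/^hadoop\.job\.poolsize=.*/hadoop.job.poolsize=4/' "$CORE_ENV"

# Confirm the new value before restarting Tomcat.
grep '^hadoop.job.poolsize' "$CORE_ENV"
```

After confirming the change, restart Tomcat (for example via the shutdown.sh and startup.sh scripts in the Tomcat bin directory) while no loads are running.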
If the loads are successful, increase the count gradually and continue testing.
If provisioning more RAM on the host is straightforward, consider doing so as well.
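Before provisioning, it can help to confirm how much memory the host actually has available. On a Linux host, a quick check looks like this (these are standard Linux tools, not Qlik-specific commands):

```shell
# Human-readable total vs. available RAM on the Catalog/engine host.
free -h

# Script-friendly values in kB, straight from the kernel.
grep -E '^(MemTotal|MemAvailable)' /proc/meminfo
```

If MemAvailable is consistently low while loads run, that supports either lowering hadoop.job.poolsize further or adding RAM.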
Cause
Qlik Catalog uses the "engine" container to convert QVDs to CSVs (in order to sample and profile the data). The issue is that the engine runs out of memory while attempting to process several QVDs concurrently. Sometimes the engine handles the memory exhaustion gracefully and returns an error to Qlik Catalog; other times the engine crashes and the container is restarted. In both cases, the job status in Qlik Catalog is FAILED.
Environment