Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
We're happy to help! Here's a breakdown of resources for each type of need.
Support | Professional Services (*)
Reactively fixes technical issues and answers narrowly defined, specific questions. Handles administrative issues to keep the product up-to-date and functioning. | Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement.
(*) reach out to your Account Manager or Customer Success Manager
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog covers Qlik products and solutions, such as scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar that facilitates knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat, by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
Log in to manage and track your active cases in Manage Cases.
Please note: to create a new case, it is easiest to do so via our chat (see above). Our chat will log your case through a series of guided intake questions.
When creating a case, you will be prompted to enter problem type and issue level. Definitions shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible but has functional limitations that are not critical to daily operations.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
The QlikSenseUtil tool comes bundled with Qlik Sense Enterprise on Windows. Its executable is stored by default in:
%Program Files%\Qlik\Sense\Repository\Util\QlikSenseUtil\QlikSenseUtil.exe
Qlik Sense Util is no longer supported as a backup and restore tool. For the officially supported backup and restore process, see Backup and Restore.
The native Qlik PostgreSQL connector (ODBC package) does not import strings longer than 255 characters. Such strings are truncated; the characters beyond the limit are not shown in the applications, and no warnings are thrown during script execution.
The problem affects the Qlik connector, but not all DSN drivers.
Qlik Sense February 2023 and higher versions
Qlik Cloud
When TextAsLongVarchar is set, and the Max String Length is set to 4096, 4096 characters are loaded.
Note that there are still limitations to this functionality related to the data type used in the database. Data types like text[] are currently not supported by Simba and are affected by the 255-character limitation even when the TextAsLongVarchar parameter is applied.
Qlik has opened an improvement request with Simba to support them.
As a workaround, it is possible to test a custom connector using a DSN driver to handle these data types.
QB-21497
A DB2 z/OS source endpoint loading a table with date columns leads to the data arriving blank. The source data has valid dates, but they are not loaded to the target by the Qlik Replicate task.
The ODBC driver for DB2 z/OS must be version 11.5.6 or 11.5.8, installed on the Qlik Replicate server.
Cause: a different ODBC driver version (such as 11.5.9) is installed.
z/OS Prerequisites | Qlik Replicate Help
Qlik Replicate 2023.5
After an upgrade installation or a fresh installation of Qlik Replicate 2023.11 (including builds GA, PR01 & PR02), Qlik Replicate reports errors for MySQL or MariaDB source endpoints. The source capture process retries repeatedly but fails; Resume and Startup from timestamp lead to the same results:
[SOURCE_CAPTURE ]T: Read next binary log event failed; mariadb_rpl_fetch error 0 () [1020403] (mysql_endpoint_capture.c:1060)
[SOURCE_CAPTURE ]T: Error reading binary log. [1020414] (mysql_endpoint_capture.c:3998)
Upgrade to Replicate 2023.11 PR03 (coming soon)
If you are running 2022.11, keep running it.
There is no workaround for 2023.11 (GA, PR01, or PR02).
Jira: RECOB-8090 , Description: MySQL source fails after upgrade from 2022.11 to 2023.11
There is a bug in MariaDB library version 3.3.5, which Replicate started using in 2023.11.
The bug was fixed in MariaDB library version 3.3.8, which will be shipped with Qlik Replicate 2023.11 PR03 and later versions.
support case #00139940, #00156611
Replicate - MySQL source defect and fix (2022.5 & 2022.11)
C:\Windows\System32>sftp -P 22 username@ftptest.test.com
The authenticity of host 'ftptest.test.com (18x.6x.15x.21x)' can't be established.
RSA key fingerprint is SHA256:KijFUxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxapWjo
If the above does not retrieve your fingerprint, contact your SFTP server administrator, who can provide the fingerprint or public key details for you.
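As a local illustration of the fingerprint format, the following sketch generates a hypothetical throwaway key and fingerprints it with the standard OpenSSH tools (file path and host name are placeholders):

```shell
# Remove any leftover demo key so ssh-keygen does not prompt to overwrite
rm -f /tmp/demo_sftp_key /tmp/demo_sftp_key.pub

# Generate a throwaway RSA key pair with no passphrase (demo only)
ssh-keygen -q -t rsa -N '' -f /tmp/demo_sftp_key

# Print the SHA256 fingerprint of the public key; a server administrator can run
# the equivalent against the server's host key to give you a value to verify
ssh-keygen -lf /tmp/demo_sftp_key.pub

# Against a live server, the host key can also be fetched and fingerprinted with
# (hypothetical host shown):
#   ssh-keyscan -t rsa ftptest.test.com 2>/dev/null | ssh-keygen -lf -
```

The fingerprint printed locally should match the one shown by the sftp client on first connect.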
If there are connectivity issues when requesting data from your data connections through a firewall, you need to add all three underlying IP addresses for your region to your firewall's allow list. These addresses are static and will not change. See Allowlisting domain names and IP addresses.
Other Diagnostics:
Error in SFTP connector
Diagnostic
Example: (insert a valid SFTP server host IP)
Result:
In case of this or other SFTP connection errors, please work with the SFTP server administrator or SFTP server vendor to enable the generation of a publicly available fingerprint.
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
Reloads fail in the QMC even though the script portion succeeds, in Qlik Sense Enterprise on Windows November 2023 and above.
When you are using NetApp-based storage, you might see an error when trying to publish and replace, or when reloading, a published app.
In the QMC you will see that the script load itself finished successfully, but the task failed after that.
ERROR QlikServer1 System.Engine.Engine 228 43384f67-ce24-47b1-8d12-810fca589657
Domain\serviceuser QF: CopyRename exception:
Rename from \\fileserver\share\Apps\e8d5b2d8-cf7d-4406-903e-a249528b160c.new
to \\fileserver\share\Apps\ae763791-8131-4118-b8df-35650f29e6f6
failed: RenameFile failed in CopyRename
ExtendedException: Type '9010' thrown in file
'C:\Jws\engine-common-ws\src\ServerPlugin\Plugins\PluginApiSupport\PluginHelpers.cpp'
in function 'ServerPlugin::PluginHelpers::ConvertAndThrow'
on line '149'. Message: 'Unknown error' and additional debug info:
'Could not replace collection
\\fileserver\share\Apps\8fa5536b-f45f-4262-842a-884936cf119c] with
[\\fileserver\share\Apps\Transactions\Qlikserver1\829A26D1-49D2-413B-AFB1-739261AA1A5E],
(genericException)'
<<< {"jsonrpc":"2.0","id":1578431,"error":{"code":9010,"parameter":
"Object move failed.","message":"Unknown error"}}
ERROR Qlikserver1 06c3ab76-226a-4e25-990f-6655a965c8f3
20240218T040613.891-0500 12.1581.19.0
Command=Doc::DoSave;Result=9010;ResultText=Error: Unknown error
0 0 298317 INTERNAL
sa_scheduler b3712cae-ff20-4443-b15b-c3e4d33ec7b4
9c1f1450-3341-4deb-bc9b-92bf9b6861cf Taskname Engine Not available
Doc::DoSave Doc::DoSave 9010 Object move failed.
06c3ab76-226a-4e25-990f-6655a965c8f3
Potential workarounds
The most plausible cause at this time is that this engine version has issues releasing file locks. We are actively investigating the root cause, but no fix is available yet.
An update will be provided as soon as there is more information to share.
QB-25096
QB-26125
The dedicated tFTPxxx components do not support reading files directly from FTP servers. Usually the file is first downloaded from the FTP server to the local system and then read; however, this can cause performance issues when many files, or large files, need to be read.
One solution is to use the JSch library, which allows the creation of an InputStream object from a file via SFTP. The tFileInputDelimited component then reads data from the InputStream object, avoiding downloading the file to the local system.
For example, in the following Job, the Java code iterates over the FTP files one by one, using the JSch library and a tFileInputDelimited component to read each file directly.
The tLibraryLoad component loads the jsch-0.1.55.jar JSch library file, then the tJava component uses the following code to create the InputStream object for the current FTP file.
// Required imports (add them in the tJava component's Advanced settings):
// import com.jcraft.jsch.*;
JSch jsch = new JSch();
Session session = null;
try {
    session = jsch.getSession("username", "localhost", 22);
    // Disabled host key checking keeps the example short; verify host keys in production
    session.setConfig("StrictHostKeyChecking", "no");
    session.setPassword("password");
    session.connect();
    Channel channel = session.openChannel("sftp");
    channel.connect();
    ChannelSftp sftpChannel = (ChannelSftp) channel;
    // Open an InputStream on the file currently iterated by tFTPFileList
    java.io.InputStream in = sftpChannel.get((String) globalMap.get("tFTPFileList_1_CURRENT_FILE"));
    // Store the stream so a downstream component can read from it
    globalMap.put("sftp_inputStream", in);
} catch (JSchException e) {
    e.printStackTrace();
} catch (SftpException e) {
    e.printStackTrace();
}
The tFileInputDelimited component reads the data from the InputStream object:
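One common way to wire this up is to point the component at the stored stream. The following is a sketch of the component setting, not standalone code, and the exact field name may differ by Studio version:

```java
// tFileInputDelimited -> "File name/Stream" field (component configuration, not a full program):
(java.io.InputStream) globalMap.get("sftp_inputStream")
```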
From R2024-05, Java 17 will become the only supported version to start most Talend modules, enforcing the improved security of Java 17 and eliminating concerns about Java's end-of-support for older versions. In 2025, Java 17 will become the only supported version for all operations in Talend modules.
Starting from v2.13, Talend Remote Engine requires Java 17 to run. If some of your artifacts, such as Big Data Jobs, require other Java versions, see Specifying a Java version to run Jobs or Microservices.
Content
Qlik Talend Module | Patch Level and Version
Studio | Supported from R2023-10 onwards
Remote Engine | 2.13 or later
Runtime | 8.0.1-R2023-10 or later
For Windows users, please follow the JDK installation guide (docs.oracle.com).
For Linux users, please follow the JDK installation guide (docs.oracle.com).
For macOS users, please follow the JDK installation guide (docs.oracle.com).
When working with software that supports multiple versions of Java, it's important to be able to specify the exact Java version you want to use. This ensures compatibility and consistent behavior across your applications. Here is how you can specify a specific Java version on the following products (such as build servers, shared application server, and similar):
For Studio users who are using multiple JDKs, follow the appropriate instructions listed above, then apply these additional steps:
-vm
<JDK17 HOME>\bin\server\jvm.dll
For Remote Engine (RE) users who are using multiple JDKs, follow the appropriate instructions listed above, then apply these additional steps.
For Runtime users who are using multiple JDKs, follow the appropriate instructions listed above, then apply these additional steps.
If Runtime is not running as a service:
With the Enable Java 17 compatibility option activated, any Job built by Talend Studio cannot be executed with Java 8. For this reason, verify the Java environment on your Job execution servers before activating the option.
To use Talend Administration Center with Java 17, you need to open the <tac_installation_folder>/apache-tomcat/bin/setenv.sh file and add the following commands:
# export modules
export JAVA_OPTS="$JAVA_OPTS --add-opens=java.base/sun.security.x509=ALL-UNNAMED --add-opens=java.base/sun.security.pkcs=ALL-UNNAMED"
Windows users use <tac_installation_folder>\apache-tomcat\bin\setenv.bat
For Java 17 users, Talend CICD process requires the following Maven options:
set "MAVEN_OPTS=%MAVEN_OPTS% --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/sun.security.x509=ALL-UNNAMED --add-opens=java.base/sun.security.pkcs=ALL-UNNAMED"
For Java 17 users, Talend CICD process requires the following Maven options:
export MAVEN_OPTS="$MAVEN_OPTS \
  --add-opens=java.base/java.net=ALL-UNNAMED \
  --add-opens=java.base/sun.security.x509=ALL-UNNAMED \
  --add-opens=java.base/sun.security.pkcs=ALL-UNNAMED"
<name>TALEND_CI_RUN_CONFIG</name>
<description>Define the Maven parameters to be used by the product execution, such as:
- Studio location
- debug flags
These parameters will be put to Maven 'mavenOpts'. If Jenkins is using Java 17, add:
--add-opens=java.base/java.net=ALL-UNNAMED
--add-opens=java.base/sun.security.x509=ALL-UNNAMED
--add-opens=java.base/sun.security.pkcs=ALL-UNNAMED
</description>
Overview
Enable your Remote Engine to run Jobs or Microservices using a specific Java version.
By default, a Remote Engine uses the Java version of its environment to execute Jobs or Microservices. From Remote Engine v2.13 onwards, Java 17 is mandatory for engine startup. However, when running Jobs or Microservices, you can specify a different Java version. This lets you use a newer engine version to run artifacts designed with older Java versions, without rebuilding those artifacts (for example, Big Data Jobs, which rely on Java 8 only).
When developing new Jobs or Microservices that do not rely exclusively on Java 8 (that is, that are not Big Data Jobs), consider building them with the add-opens option to ensure compatibility with Java 17. This option opens the necessary packages for Java 17 compatibility, making your Jobs or Microservices directly runnable on the newer Remote Engine version without the procedure explained in this section for defining a specific Java version. For further information about this add-opens option and its limitations, see Setting up Java in Talend Studio.
Procedure
c:\\Program\ Files\\Java\\jdk11.0.18_10\\bin\\java.exe
org.talend.remote.jobserver.commons.config.JobServerConfiguration.JOB_LAUNCHER_PATH=c:\\jdks\\jdk11.0.18_10\\bin\\java.exe
ms.custom.jre.path=C\:/Java/jdk/bin
Make this modification before deploying your Microservices to ensure that these changes are correctly taken into account.
The Qlik Replicate task log (with source_capture and trace logging enabled) shows LSN information similar to:
00023576: 2024-03-08T17:46:03 [SOURCE_CAPTURE ]T: MS-CDC capture loop. SQL Server start time '2024-03-06 22:41:29.170', start LSN '000A9B8100000050000D', end time '2024-03-08 19:46:04', end LSN '' (sqlserver_mscdc.c:3684)
00023576: 2024-03-08T17:46:08 [SOURCE_CAPTURE ]T: MS-CDC capture loop. SQL Server start time '2024-03-06 22:41:29.170', start LSN '000A9B8100000050000D', end time '2024-03-08 19:46:09', end LSN '' (sqlserver_mscdc.c:3684)
This indicates the LSN is looping and that the LSN was purged.
A Qlik Replicate task with an Oracle target fails with the following in the task log:
[TARGET_LOAD ]T: ORA-03156: OCI call timed out
This can be expected with long-running executions during a full load.
To disable the timeout, set the Internal Parameter executeTimeout on the Oracle target endpoint.
This article briefly explains how to find what tables (articles) are used in a Publication. This method is useful to identify issues when a table does not capture updates.
Run the following query from Microsoft SQL Server Management Studio or any database utility (such as DBeaver):
SELECT
msp.publication AS PublicationName,
msa.publisher_db AS DatabaseName,
msa.article AS ArticleName,
msa.source_owner AS SchemaName,
msa.source_object AS TableName
FROM distribution.dbo.MSarticles msa
JOIN distribution.dbo.MSpublications msp ON msa.publication_id = msp.publication_id
ORDER BY
msp.publication,
msa.article;
Qlik’s Data Integration Platform provides functionality to support near real-time data delivery from traditional sources, files, and some SaaS environments via Qlik Replicate. Qlik Compose for Data Warehouses is integrated with Qlik Replicate to automate the incremental load process of the data warehouse based on change data capture (CDC) from the source system.
However, there are certain scenarios where the automated incremental load processes cannot be leveraged (for example, SAP HANA log-based replication, third-party data ingestion, or cloud data-sharing programs).
In those instances, there are extract, transform, and load (ETL) patterns that can be implemented in Qlik Compose to support incremental data loads. This paper describes these scenarios, shows how to implement custom incremental load patterns in Qlik Compose, and explains how those patterns can be applied to query- or view-based mappings.
HSTS (HTTP Strict-Transport-Security response header) security check failed.
HTTP Strict Transport Security (HSTS) is a policy mechanism that helps to protect websites against man-in-the-middle attacks such as protocol downgrade attacks and cookie hijacking. It allows web servers to declare that web browsers (or other complying user agents) should automatically interact with it using only HTTPS connections, which provide Transport Layer Security (TLS/SSL), unlike the insecure HTTP used alone.
Before adding HSTS to either the QlikView AccessPoint or the QlikView Management Console (QMC), set both up to use HTTPS. See QlikView AccessPoint and QMC with HTTPS and a custom SSL certificate for instructions.
Custom response headers can be set in both the QlikView WebServer (beginning with 12.30) and Microsoft IIS (all QlikView versions).
The custom header needed for HSTS is: Strict-Transport-Security
<Config>
...
<Web>
...
<CustomHeaders>
<Header>
<Name>Strict-Transport-Security</Name>
<Value>max-age=31536000</Value>
</Header>
</CustomHeaders>
</Web>
</Config>
For information on how to configure custom headers with Microsoft IIS, see Setting Custom HTTP Headers in IIS for QlikView. The site https://https.cio.gov/hsts/ provides information on how to set up the web server to enable HSTS.
Testing can be achieved using any number of third party sites, such as:
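The header can also be checked from a command line. In this sketch the response headers are simulated for illustration, and the host in the comment is a placeholder:

```shell
# In practice, capture the real headers from your AccessPoint with:
#   curl -sI https://qlikview.example.com/
# Simulated response from a correctly configured server:
headers='HTTP/1.1 200 OK
Strict-Transport-Security: max-age=31536000
Content-Type: text/html'

# The HSTS check passes when this grep finds the header
printf '%s\n' "$headers" | grep -i '^strict-transport-security'
```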
This setting was introduced with QlikView 12.70 (May 2022) SR1.
QVManagementService.exe.Config Changes:
The Qlik ODBC Connector Package (the database connectors built into Qlik Sense) fails to reload with the error Connector reply error:
Executing non-SELECT queries is disabled. Please contact your system administrator to enable it.
The issue is observed when the query following the SQL keyword is not SELECT but another statement, such as INSERT, UPDATE, WITH .. AS, or a stored procedure call.
See the Qlik Sense February 2019 Release Notes for details on item QVXODBC-1406.
By default, non-SELECT queries are disabled in the Qlik ODBC Connector Package, and users get an error message indicating this if such a query is present in the load script. To enable non-SELECT queries, the allow-nonselect-queries setting must be set to True by the Qlik administrator.
To enable non-SELECT queries:
Because we are modifying configuration files, these files will be overwritten during an upgrade, and the changes will need to be made again.
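The setting name comes from the text above, but the surrounding file structure shown here is an assumption (.exe.config files are XML), so verify it against your installed QvOdbcConnectorPackage.exe.config:

```xml
<!-- Sketch only: key name from this article; element layout assumed -->
<appSettings>
  <add key="allow-nonselect-queries" value="True" />
</appSettings>
```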
Only apply !EXECUTE_NON_SELECT_QUERY if you use the default connector settings (such as bulk reader enabled and the reading strategy set to "connector"). Applying !EXECUTE_NON_SELECT_QUERY with non-default settings may lead to unexpected reload results and/or error messages.
More details are documented in the Qlik ODBC Connector package help site.
Feature Request Delivered: Executing non-SELECT queries with Qlik Sense Business
Execute SQL Set statements or Non Select Queries
Qlik Replicate can take advantage of the SAP HANA CTS mode feature.
CTS mode must be enabled on the SAP HANA Source.
To enable the feature in Qlik Replicate:
You need to manually drop the current triggers, as the new table name with CTS enabled will be attrep_cdc_changes_cts, not attrep_cdc_changes. By default, attrep_cdc_changes_cts is partitioned with a default value of 100,000,000 rows. SAP HANA allows a maximum of 16K partitions. Updating the threshold for each partition can impact performance and should therefore be thoroughly tested in a non-production environment. These steps do not cover the migration to CTS mode, where you can stop and/or resume the tasks after enabling this feature.
Change CTS Configuration | SAP HANA Help Portal
When a reload target is performed on the Log Stream staging task (parent), you will get the following error in every Log Stream remote task (child):
[SOURCE_CAPTURE ]W: End of timeline, no more records to read (ar_cdc_channel.c:1210)
[SORTER ]E: End of time-line reached and all records were applied, task will stop [1020101] (sorter_transaction.c:3486)
The task stops replication to the remote task.
There are two ways to resolve this issue:
Option 1: Reload the Log Stream remote (child) task.
Option 2: Start the Log Stream staging task (parent) from a timestamp before its reload. To do this, please follow the steps below:
Qlik has introduced an update with Qlik Cloud and Qlik Sense Enterprise on Windows which significantly increases the performance of any ODBC database Connector when working with larger datasets.
The change was rolled out to Qlik Cloud in October 2022 and is available in any Qlik Sense Enterprise on Windows version beginning November 2022.
All new connections automatically have a Bulk Reader feature that uses larger portions of data in the iterations within a load (instead of loading data row by row). This can result in faster load times for larger datasets.
To add this feature to an existing connection, open the connection properties window by selecting Edit and then click Save. The Bulk Reader feature is now turned on. You don’t have to change any connection properties to invoke this feature.
While we believe that most customers will want all their ODBC connections to use this new capability, it is possible to turn off the Bulk Reader feature for a particular connector. To do this, add the parameter useBulkReader with a value of False to the Advanced section of the connector properties window.
Qlik Cloud
Qlik Sense Enterprise on Windows
Many reloads fail on different schedulers with General Script Error in statement handling. The issue is intermittent.
The connection to the data source goes through the Qlik ODBC connectors.
The Script Execution log shows a "General Script Error in statement handling" message. Nevertheless, the same execution works if restarted after some time, without modifying the script, which appears correct.
The engine logs show:
CGenericConnection: ConnectNamedPipe(\\.\pipe\data_92df2738fd3c0b01ac1255f255602e9c16830039.pip) set error code 232
Change the reading strategy within the current connector:
In C:\Program Files\Common Files\Qlik\CustomData\QvOdbcConnectorPackage\QvODBCConnectorPackage.exe.config, set the reading-strategy value to engine.
The existing reading-strategy value=connector entry should be removed.
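As a sketch of the resulting entry (the setting name and value come from the text above; the exact XML layout of QvODBCConnectorPackage.exe.config is assumed and should be checked against your installed file):

```xml
<!-- Sketch only: replaces the existing reading-strategy entry -->
<appSettings>
  <add key="reading-strategy" value="engine" />
</appSettings>
```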
Switching the strategy to engine may impact non-SELECT queries. See Connector reply error: Executing non-SELECT queries is disabled. Please contact your system administrator to enable it. for details.
QB-8346
The error message Wrong content node response type and other report failures can be caused by invalid applied filters, or by a cycle that tries to select an invalid value for the field (a value that doesn't exist in the field).
Note: By default, 'verify filter' is enforced programmatically in NPrinting 17, meaning that if a filter produces empty values in a chart, the report will not be produced and you will see the Wrong Content Node Type error.
Example 1:
Your source contains the Year field with the values 2012, 2013, and 2014, but you added a filter Year = 2015 to the report, resulting in an empty data set.
Example 2:
You added the filter Year = 2014 to the report and the Country field to the Levels node. Remember that adding a field to the Levels node acts like a filter. When the report is created, it is possible that a value of Country has an empty data set for the selected year; for example, there are no sales in Italy in 2014. This generates the error.
An automation will not automatically rerun or retry if it fails. You can, however, rerun a failed automation by using an additional automation.
Caution should be taken when implementing this solution to prevent running endless loops or reruns.
Before a solution can be implemented you must decide if the rerun should:
The Retry Automation Run block can be used to retry a specific run of the automation with the same inputs (if any). This block is useful if the monitored automation has its run mode set to Webhook or Triggered. It is also possible to manually rerun these runs using the Retry button in the automation's run history.
The block Run Automation can be used to initiate a new run of the automation. This block is useful if the automation that fails has the run mode set to Scheduled, and the inputs of the failed run wouldn’t be unique.
The following example shows how a failed automation can be retried using the Retry Automation Run. Depending on the specifics of the automation you want to rerun, you may want to use the Run Automation block instead.
The value for stopTime must be transformed using the Date formula so that it can be used for comparisons. The output format must be changed to Unix format (U):