Search our knowledge base, curated by global Support, for answers ranging from account questions to troubleshooting error messages.
Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
We're happy to help! Here's a breakdown of resources for each type of need.
Support | Professional Services (*)
Reactively fixes technical issues as well as answers narrowly defined specific questions. Handles administrative issues to keep the product up-to-date and functioning. | Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement.
(*) reach out to your Account Manager or Customer Success Manager
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about Qlik products and solutions, such as scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar series that facilitates knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat, by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
Log in to manage and track your active cases in the Case Portal.
Please note: to create a new case, it is easiest to do so via our chat (see above). Our chat will log your case through a series of guided intake questions.
When creating a case, you will be prompted to enter problem type and issue level. Definitions shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible, but there are some functional limitations that are not critical to daily operations.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
When other Support Channels are down for maintenance, please contact us via phone for high severity production-down concerns.
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
The purpose of this article is to provide details about enabling the Full Load Passthru filter in Qlik Cloud Data Integration (QCDI) to retrieve only the selected data from the source during the initial load of Landing or Replication Tasks.
The information in this article is provided as-is and will be used at your discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
With the new inclusion of the Get Chart Image block in the Qlik Reporting connector in Qlik Application Automation, you now have more options to notify a group of users with more in-depth data and charts using Slack, Microsoft Teams, and email.
This article will guide you in sending your first chart image to Slack with Qlik Application Automation.
It explains a basic example of a template configured in Qlik Application Automation for this scenario.
You can make use of the template which is available in the template picker. You can find it by navigating to Add new -> New automation -> Search templates and searching for 'Send a Chart Image to Slack' in the search bar, and clicking the Use template option.
For guidance on sending charts via Microsoft Teams and mail, go to the "Next Steps" section at the end of this article.
You can download examples of the automations from this article: Send-chart-image-to-slack.json, Send-chart-image-to-outlook.json, Send-chart-image-to-mail.json, Send-chart-image-to-microsoft-teams.json
Warning: Whenever the “Get Chart Image” block is to be used, we advise you to only use it with temporary bookmarks or pre-existing persistent bookmarks.
If the condition block outcome evaluates to false:
The information in this article is provided as-is and will be used at your discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
It is not possible to set up a Microsoft Office 365 email provider with OAuth 2.0 authentication.
The HAR file shows this message in Network:
{connectionFailed: true, message: "Error during email request", success: false}
Configure the Mail.Send permission as described in Configuring a Microsoft 365 email provider using OAuth2.
This problem occurs when the Mail.Send permission has not been configured in the app registration.
More information about the Mail.Send permission can be found in Application permission to Microsoft Graph (learn.microsoft.com).
This article gives an overview of the available blocks in the Jira connector in Qlik Application Automation. It will also go over some basic examples of retrieving issues by a specified project and creating an issue within a Jira account.
This connector supports CRUD operations (create, read, update, delete) for the following modules in Jira:
There are also a few generic blocks that could help cover the other modules:
Authentication for this connector is based on the OAuth2 protocol.
Let's now go over a few basic examples of how to use the Jira connector:
1. How to list issues from a specific project from a Jira account
2. To create a new issue:
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
After exporting a Logstream parent task and changing its name, the import fails with:
SYS-E-HTTPFAIL, SYS-E-HTTPFAIL, Put full task failed. Task name: <task_name>
Alternatively, if you need two parent tasks:
Line 8: "target_names": ["logstreamnewname"],
Line 41: "target_name": "logstreamnewname",
Line 43: "database_name": "logstreamnewname"
Line 126: "name": "logstreamnewname",
Line 132: "path": "C:\\logstreamnewname",
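Taken together, the renamed values would look roughly like this in the exported JSON (the key names are as listed above; their exact nesting and line positions vary by task and version):

    "target_names": ["logstreamnewname"],
    "target_name": "logstreamnewname",
    "database_name": "logstreamnewname",
    "name": "logstreamnewname",
    "path": "C:\\logstreamnewname",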
The Parent Logstream task has wrongly signalled a duplicate target.
Qlik Sense can process a maximum of 1,048,576 (2^20) characters per row when loading data from a CSV file. If a row in the source CSV file is longer than this limit, Qlik Sense automatically breaks it into multiple rows in the loaded table.
This does not happen when loading other file formats (such as XML), nor when loading the same CSV file in QlikView.
To increase the maximum length, set the LongestPossibleLine parameter in the Qlik Sense Engine's Settings.ini file to a value higher than 1048576.
See How to modify Qlik Sense Engine's Settings.ini for detailed instructions on changing parameters in Settings.ini.
The Qlik Sense engine supports a line length of up to 512 megabytes (512*1024*1024). A script reload can handle strings up to this length in a single data cell. However, when using the data selection wizard, such a long string may break the web socket. Therefore, the maximum string length defaults to 1,048,576 characters to avoid this web socket issue.
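As an illustrative sketch only (the section name shown here is an assumption; follow the linked article for the exact file location and structure in your installation), raising the limit to roughly 4 MB per row could look like this in Settings.ini:

    [Settings 7]
    LongestPossibleLine=4194304

The Engine service typically needs to be restarted for Settings.ini changes to take effect.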
Sometimes the default behavior of a target endpoint does not meet our needs. This article is useful if we want to modify the default syntax.
For example, with MySQL as the target endpoint, Qlik Replicate creates a net changes table and uses it in batch apply mode. The net changes table is created with the default engine type "InnoDB", which is subject to MySQL's "Row size too large" limitation, while "MyISAM" is not.
The below steps demonstrate how to change the net changes table engine type from "InnoDB" (default) to "MyISAM". After the setup is done, Qlik Replicate will create the net changes table automatically with engine type "MyISAM".
From the Qlik Replicate computer where you want to import the task, open the Qlik Replicate command line console by doing the following:
From the Start menu, expand Qlik Replicate and then select Qlik Replicate Command Line.
A command-line console is displayed with the correct prompt for Qlik Replicate.
Alternatively, open a Windows Command Prompt using Run as Administrator and change to "<product dir>\Attunity\Replicate\bin" (the default location)
repctl.exe getprovidersyntax syntax_name=MySQL > MySQL_MyISAM.json
If the DATA folder is in a non-default location, add the option -d data_directory to the command.
command getprovidersyntax response:
[getprovidersyntax command] Succeeded
"provider_syntax": { "name": "MySQL", "query_syntax": {Modified:
"provider_syntax": {
"name": "MySQL_MyISAM",
"repository.provider_syntax": {
"name": "MySQL_MyISAM",
"query_syntax": {
repctl putobject data=MySQL_MyISAM
Do not add the ".json" suffix at the end of the command, as this will cause the command to fail.
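Putting the steps together, a minimal sketch of the export/edit/import cycle (the edit inside the JSON is described above; ENGINE=MyISAM is the standard MySQL clause for changing a table's engine):

    repctl.exe getprovidersyntax syntax_name=MySQL > MySQL_MyISAM.json
    rem edit MySQL_MyISAM.json: rename the syntax to MySQL_MyISAM and adjust the
    rem net changes table creation syntax to use ENGINE=MyISAM
    repctl putobject data=MySQL_MyISAM

If the DATA folder is in a non-default location, add -d data_directory to both repctl commands.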
It is finally here: The first public iteration of the Log Analysis app. Built with love by Customer First and Support.
"With great power comes great responsibility."
Before you get started, a few notes from the author(s):
Chapters:
01:23 - Log Collector
02:28 - Qlik Sense Services
04:17 - How to load data into the app
05:42 - Troubleshooting poor response times
08:03 - Repository Service Log Level
08:35 - Transactions sheet
12:44 - Troubleshooting Engine crashes
14:00 - Engine Log Level
14:47 - QIX Performance sheets
17:50 - General Log Investigation
20:28 - Where to download the app
20:58 - Q&A: Can you see a log message timeline?
21:38 - Q&A: Is this app supported?
21:51 - Q&A: What apps are there for Cloud?
22:25 - Q&A: Are logs collected from all nodes?
22:45 - Q&A: Where is the latest version?
23:12 - Q&A: Are there NPrinting templates?
23:40 - Q&A: Where to download Qlik Sense Desktop?
24:20 - Q&A: Are log from Archived folder collected?
25:53 - Q&A: User app activity logging?
26:07 - Q&A: How to lower log file size?
26:42 - Q&A: How does the QRS communicate?
28:14 - Q&A: Can this identify a problem chart?
28:52 - Q&A: Will this app be in-product?
29:28 - Q&A: Do you have to use Desktop?
Qlik Sense Enterprise on Windows (all modern versions post-Nov 2019)
*It is best used in an isolated environment or via Qlik Sense Desktop. It can be very RAM and CPU intensive.
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
Optimizing Performance for Qlik Sense Enterprise - Qlik Community - 1858594
Qlik Gold Client support cases can be opened on the Qlik Customer Portal. For effective communication with Qlik Support, always include all required information, go into detail when describing your problem, and provide all necessary supplementary material (such as log files).
To log the case:
How to Contact Qlik Support
How to View Cases in Support Portal
The Qlik NPrinting Engine cannot resolve requests for tasks that include QlikView Entity reports with PDF output, or it does not print QlikView Entity reports to PDF.
An error is displayed when attempting to edit or print the QlikView Entity report:
Error: QlikView NPrinting PDF Printer not installed or not properly registered
Or the report fails silently while the following is printed in the Qlik NPrinting logs:
resolution aborted with exception System.Runtime.InteropServices.COMException (0x80004005): Error HRESULT E_FAIL has been returned from a call to a COM component.
   at Tracker.PDFXChange.IPXCControlEx.get_Printer(String pServerName, String pPrinterName, String pRegKey, String pDevCode)
   at Qlik.Reporting.Printers.QlikPdfPrinter.Win64PrinterFactory.get_Item(String pServerName, String pPrinterName, String pRegKey, String pDevCode
When you install Qlik NPrinting, the Windows service "Print spooler" must be up and running. If it is disabled, the Qlik NPrinting Printer will not be added during the installation. Similarly, if a separate PDF-XChange driver is installed, the Qlik NPrinting installer will not install the QlikView NPrinting PDF-Printer.
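Before reinstalling, it can help to confirm the spooler state, for example from an elevated PowerShell prompt (an illustrative sketch, not a Qlik-documented step):

    Get-Service -Name Spooler     # the Print spooler service; Status should be Running
    Start-Service -Name Spooler   # start it if it is stopped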
NPrinting: PDF reports generation fails after disabling the Windows Spooler service
QB-14941
Creating a new git branch persists untracked and unstaged changes from the current branch.
Replication Steps
This is normal behavior in git. You can add these untracked changes to the new git branch, or remove them before switching branches.
Because untracked files are not monitored by git, they are still present in the local file system after switching branches. When Talend Studio loads jobs from the local file system, the untracked changes therefore appear in the newly created branch in the Studio as well.
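For example, the same behavior can be seen outside Talend Studio with plain git commands (a generic sketch):

    git status                   # untracked files are listed under "Untracked files"
    git checkout -b new-branch   # switching branches leaves untracked files in the working tree
    git add path/to/file         # either start tracking them on the new branch...
    git clean -fd                # ...or remove untracked files and directories (destructive)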
Internal Defect ID: TUP-44462
After a Qlik Replicate task is stopped to update parameters, the task will not start. A new task referencing the same source fails to fetch its tables and errors out with the following:
SYS-E-HTTPFAIL, SYS-E-HTTPFAIL, Command get_owner_list failed when creating the stream component...
SYS,GENERAL_EXCEPTION,SYS-E-HTTPFAIL, Command get_owner_list failed when creating the stream component..,SYS,GENERAL_EXCEPTION,Command get_owner_list failed when creating the stream component.,Failed getting stream handle create_stream_handle failed Command create_stream_handle failed when preparing component. ORA-12170: TNS:Connect timeout occurred
ORA-12170 is an Oracle connection error and is not caused by Qlik Replicate. Qlik Replicate will send a connection request through the Oracle client, which will establish a connection to the database based on the sqlnet.ora settings. ORA-12170: TNS:Connect timeout occurred indicates a timeout.
To resolve the error, update sqlnet.ora and increase the available timeouts.
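The exact parameters depend on where the delay occurs; as an illustrative sketch (the values are assumptions to adjust for your environment), the sqlnet.ora used by the Oracle client on the Qlik Replicate server could raise the connect timeouts like this:

    # allow up to 120 seconds for the TCP connection and the connect handshake
    TCP.CONNECT_TIMEOUT = 120
    SQLNET.OUTBOUND_CONNECT_TIMEOUT = 120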
A Qlik Replicate Log Stream task fails with the error:
Stream component 'st_0_T03ERP_TGT_QLK_LSS' terminated Cannot initialize subtask
Failed while preparing stream component 'st_0_T03ERP_TGT_QLK_LSS'. Error reading audit batch
Timeout while waiting to get data from audit file
Verify that sufficient disk space is available. Clear up space or increase the available quota, then:
The cause is a corrupted metastore.sqllite file. The file is created in the Log Stream target location (the LogStream/audit_services directory) when a task is started. Disk space issues are the most common cause of the file's corruption.
Talend v8 Big Data EMR task execution in an HDFS configuration (based on EMR 5.29) hits the issue below in the China region after migrating from v7 to v8.
==Log==
org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: Service Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>AKIATAYZAWDCxxx</AWSAccessKeyId><RequestId>HRZ7WJJFEREGFNGX</RequestId><HostId>2wOWmpLVjpjPfmclmQF8sZ6t3+QVjC1K8zzyyHbgphS==</HostId></Error>
==Log==
The jets3t library currently used in Hadoop does not support the China region (cn-north-1). Because of compatibility issues, other AWS regions still use the V2 signature even though the S3 signature has been upgraded to V4.
As the China region is new and has no such compatibility issues, only V4 support has been added for it.
defining-amazon-emr-connection-parameters-with-spark-universal
Internal defect ID: TBD-16745
The following error may occur when executing a Job in Talend Studio version 8 that references a custom user routine.
java.lang.NoClassDefFoundError: routines/my-custom-routine
The error occurs because the custom routine is not found at runtime, which prevents the execution of the Job.
To resolve this issue, please uncheck the "Offline" checkbox in Talend Studio's Maven preferences (Window -> Preferences -> Maven). This enables Talend to download the required dependencies, thereby resolving the classpath conflict and enabling successful Job execution.
Asking questions using Insight Advisor Chat in the Hub on Qlik Sense Enterprise on Windows may result in the message "Unable to get data" being returned. See Fig 1.
Verify that the LEF file includes either of the following two attributes required for Insight Advisor Chat entitlement:
Then do the following:
[nl-parser]
//Disabled=true
Identity=Qlik.nl-parser
[nl-app-search]
//Disabled=true
Identity=Qlik.nl-app-search
Qlik Sense Enterprise on Windows
Executing tasks or modifying tasks (changing the owner, renaming an app) in the Qlik Sense Management Console and refreshing the page does not update the task status correctly. The issue affects the Content Admin and Deployment Admin roles.
The behaviour began after an upgrade of Qlik Sense Enterprise on Windows.
This issue can be mitigated beginning with August 2021 by enabling the QMCCachingSupport Security Rule.
Enable QmcTaskTableCacheDisabled.
To do so:
Upgrade to the latest Service Release and disable the caching functionality:
To do so:
NOTE: Make sure to use lower case when setting values to true or false, as the capabilities.json file is case sensitive.
Should the issue persist after applying the workaround/fix, contact Qlik Support.
This is a problem which on first impressions should not (and you would think logically cannot) happen. Therefore it is important to understand why it does, and what can be done to resolve it when it does.
The situation is that Replicate is doing a Full Load for a table (individually or as part of a task full loading many tables). The source and target tables have identical unique primary keys. There are no uppercasing or other character set issues relating to any of the columns that make up the key which may sometimes cause duplication problems. Yet as the Full Load for the table progresses, probably nearing the end, you get a message indicating that Replicate has failed to insert a row into the target as a result of a duplicate. That is, there is already a row in the target table with the unique key of the row that it is trying to insert. The Full Load for that table is terminated (often after several hours), and if you try again the same error, perhaps for a different row, will often occur.
Logically this shouldn’t happen, but it does. The likelihood of it doing so depends on the source DBMS type, the type of columns in the source table, and you will find it is always for a table that is being updated (SQL UPDATEs) as Replicate copies it. The higher the update rate and the bigger the table, the more likely it is to happen.
Note: This article discusses the problems related to duplicates in the TARGET_LOAD and not the TARGET_APPLY, that is, during Full Load and before starting to apply the cached changes.
To understand the fix we first need to understand why the problem occurs, and this involves understanding some of the internal workings of most conventional Relational Database Management Systems.
RDBMS’s tend to employ different terminology for things that exist in all of them. I’m going to use DB2 terminology and explain each term the first time I use it. With a different RDBMS the terminology may be different, but the concepts are generally the same?
The first concept to introduce is the Tablespace. That’s what it’s called in DB2, but it exists for all databases and is the physical area where the rows that make up the table are stored. Logically it can be considered as a single contiguous data area, split up into blocks, numbered in ascending order.
This is where your database puts the row data when you INSERT rows into the table. What’s also important is that it tries to update the existing data for a row in place when you do an UPDATE, but may not always be able to do so. If that is the case then it will move the updated row to another place in the tablespace, usually at what is then the highest used (the endpoint) block in the tablespace area.
The next point concerns how the DBMS decides to access data from the tablespace in resolving your SQL calls. Each RDBMS has an optimiser, or something similar that makes these decisions. The role of indexes with a relational database is somewhat strange. They are not really part of the standard Relational Database model, although in practice they are used to guarantee uniqueness and support referential integrity. Other than for these roles, they exist only to help the optimiser come up with faster ways of retrieving rows that satisfy your SELECT (database read) statements.
When any piece of SQL (we’ll focus on simple SELECT statements here) is presented to the optimiser, it decides on what method to use to search for and retrieve any matching rows from the tablespace. The default method is to search through all the rows directly in the tablespace looking for rows that match any selection criteria, this is known as a Tablespace Scan.
A Tablespace Scan may be the best way to access rows from a table, particularly if it is likely that many or most of the rows in the table will match the selection criteria. For other SELECTs though that are more specific about what row(s) are required, a suitable matching index may be used (if one exists) to go directly to the row(s) in the tablespace.
The sort of SQL that Replicate generates to execute against the source table when it is doing a Full Load is of the form SELECT * FROM <table>, or SELECT col1, col2, … FROM <table>. Neither of these has any row-specific selection criteria, and in fact this is to be expected, as a Full Load is in general intended to select all rows from the source table.
As a result the database optimiser is not likely to choose to use an index (even if a unique index on the table exists) to resolve this type of SELECT statement, and instead a Tablespace Scan of the whole tablespace area will take place. This, as you will see later, can be inconvenient to us but is in fact the fastest way of processing all the rows in the table.
When we do a Full Load copy for a table that is ‘live’ (being updated as we copy it), the result we end up with when the SELECT against the source has been completed and we have inserted all the rows into the target is not likely to be consistent with what is then in the source table. The extent of the differences is dependent on the rate of updates and how long the Full Load for that table takes. For high update rates on big tables that take many hours for a Full Load the extent of the differences can be quite considerable.
This all sounds very worrying but it is not as the CDC (Change Data Capture) part of Replicate takes care of this. CDC is mainly known for Replicating changes from source to target after the initial Full Load has been taken, keeping the target copies up to date and in line with the changing source tables. However CDC processing has an equally important role to play in the Full Load process itself, especially when this is being done on ‘live’ tables subject to updates as the Full Load is being processed.
In fact CDC processing doesn't start when Full Load is finished, but before Full Load starts. This is so that it can collect details of changes that are occurring at the source whilst the Full Load (and its associated SELECT statement) is taking place. The changes collected during this period are known as the 'cached changes', and they are applied to the newly populated target table before switching into normal ongoing CDC mode to capture all subsequent changes.
This takes care of and fixes all of the table row data inconsistencies that are likely to occur during a table Full Load, but there is one particular situation that can occur and catch us out before the Full Load completes and the cached changes can be applied. This results in Replicate trying to insert details for the same row more than once in the target table; triggering the duplicates error that we are talking about here.
Consider this situation: the Full Load SELECT is scanning through the tablespace and has already read and copied a given row. That row is then updated and, because it can no longer be updated in place, the database moves it to a new block near the end of the tablespace. When the scan later reaches that block, it reads the moved row a second time, and Replicate tries to insert it into the target again.
That is how the problem occurs. Having variable-length columns and binary object columns in the source table makes this (movement of the row to a new location in the tablespace) much more likely to happen, and the duplicate insert problem to occur.
So how to fix this, or at least how to find a method to stop it happening.
The solution is to persuade the optimiser in the source database to use the unique index on the table to access the rows in the table’s tablespace rather than scanning sequentially through it. The index (which is unique) will only provide one row to read for each key as the execution of our SELECT statement progresses. We don’t have to worry about whether it is the ‘latest’ version of the row or not because that will be taken care of later by the application of the cached changes.
The optimiser can (generally) be persuaded to use the unique index on the source table if the SELECT statement indicates that there is a requirement to return the rows in the result set in the order given by that index. This requires a SELECT statement with an ORDER BY clause matching the columns in the unique index, something of the form SELECT * FROM <table> ORDER BY col1, col2, col3, where col1, col2, col3, etc. are the columns that make up the table's unique primary index.
But how can we do this? Replicate has an undocumented facility that allows the user to configure extra text to be added to the end of the generated SQL for a particular table during Full Load processing, specifically to add a WHERE clause that determines which rows are included and excluded during a Full Load extract.
This is not exactly what we want to do (we want to include all rows), but this ‘FILTER’ facility also provides the option to extend the content of the SELECT statement that is generated after the WHERE part of the statement has been added. So we can use it to add the ORDER BY part of the statement that we require.
Here is the format of the FILTER statement that you need to add.
--FILTER: 1=1) ORDER BY col1, col2, coln --
This is inserted in the ‘Record Selection Condition’ box on the individual table filter screen when configuring the Replicate task. If you want to do this for multiple tables in the Replicate task then you need to set up a FILTER for each table individually.
To explain, the --FILTER: keyword indicates the beginning of the filter information, which is expected to begin with a WHERE clause (the WHERE itself is generated automatically).
The 1=1) component completes that WHERE clause in a way that selects all rows (you could put in something to limit the rows selected if required, but that's not what we are trying to achieve here).
It is then possible to add other clauses and parameters before terminating the additional text with the final --.
In this case an ORDER BY clause is added that guarantees rows are returned in the selected order. This causes the unique index on the table to be used to retrieve rows at the source, assuming that you code col1, col2, etc. to match the columns and their order in the index. If the index has some columns in descending order (rather than ascending), make sure that is coded in the ORDER BY statement as well.
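As an illustrative sketch (the ORDERS table and the ORDER_ID and LINE_NO columns are hypothetical), a filter such as:

    --FILTER: 1=1) ORDER BY ORDER_ID, LINE_NO --

should cause Replicate to generate Full Load SQL along the lines of SELECT * FROM ORDERS WHERE (1=1) ORDER BY ORDER_ID, LINE_NO, which encourages the source optimiser to drive the read through the unique index on ORDER_ID, LINE_NO rather than through a tablespace scan.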
If you code things incorrectly the generated SELECT statement will fail and you will be able to see and debug this through the log.