Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
Index:
We're happy to help! Here's a breakdown of resources for each type of need.
Support | Professional Services (*) |
Reactively fixes technical issues and answers narrowly defined, specific questions. Handles administrative issues to keep the product up to date and functioning. | Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement. |
(*) reach out to your Account Manager or Customer Success Manager
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about Qlik products and solutions, covering scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar that facilitates knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat, by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
Log in to manage and track your active cases in Manage Cases.
Please note: to create a new case, it is easiest to do so via our chat (see above). Our chat will log your case through a series of guided intake questions.
When creating a case, you will be prompted to enter problem type and issue level. Definitions shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible, but there are some functional limitations that are not critical in the daily operation.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
You can install or remove Qlik Sense Extension bundles from your Qlik Sense deployment at any moment. If you have a multi-node installation, Qlik Sense Extension bundles are installed on the central node.
For information on how to install or import other extensions not delivered in our Qlik Sense Extension bundles, see: Installing, importing and exporting your visualizations on Qlik Help.
Note: All these steps need to be carried out in the Programs and Features section of the Control Panel. You cannot modify or uninstall the bundle from the Add and Remove Programs menu.
You can verify that the changes have been correctly applied by checking the Extensions section in the Qlik Management Console (QMC).
You can add or remove extension bundles from your Qlik Sense Desktop installation at any moment.
Do the following:
Then, click Next.
If specific extensions are not available after uninstalling/reinstalling the Extension Bundle, see also the relevant articles below:
Depending on the type of connection to QlikView, a Qlik NPrinting filter may behave differently:
If you encounter this problem, we recommend:
A previously delivered push to server fix (QV-19899) has been reverted. This may lead to some Qlik NPrinting reports failing on servers with Push from Server enabled.
QlikView - May 2024 IR Release Notes
QV-24977
Any interruption in Internet connectivity (caused by Qlik, the customer environment, or third parties), a deployment in QCS, or another QCS incident could cause a disconnect.
If a Gateway is only briefly Disconnected, reloads are unlikely to be affected. For some high-volume customers where seconds count, reloads might fail. Under those circumstances, we suggest an application-level resiliency strategy, such as retries via QAA or the APIs, at least for the Direct Access Gateway.
The DirectAccess.log shows:
41 2024-04-21 04:17:08 [Service ] [ERROR] Connection failed
System.Net.WebSockets.WebSocketException (0x80004005): The remote party closed the WebSocket connection without completing the close handshake.
Make sure the Direct Access Gateway was configured following recommendations documented in our help site.
Install at least Direct Access Gateway 1.6.6 or later.
R&D indicates that the low-level WebSocket disconnects in the DirectAccessAgent.log are expected; one or two per day on average is normal.
Qlik Data Gateway - Direct Access. Reloads failing intermittently
QB-25723
After an upgrade to QlikView 12.90 (May 2024), the QlikView Management Console lists EnableRC4 as a non-default config value. No negative impact is observed.
The config value can be found in the Status > Services > QMS @node Information overview.
QlikView 12.90 removes RC4, leading to a blank default value if EnableRC4 has previously been set. This has no impact on QlikView's functionality.
To remove the alert about a non-default value being found:
QV-25301
Qlik Support communicates Product Releases in its Release Notes board, and information on Product alerts and Support related activities (Webinars and Q&As) on the Qlik Support Updates blog.
This will alert you for activities such as:
The following messages will appear in the engine trace system logs:
QVGeneral: when AAALR(63.312046) is greater than 1.000000, we suggest using new row applicator to improve time and mem effeciency.
QVGeneral: - aggregating on 'RecruiterStats'(%DepartmentID) with Cardinal(87), for Object: in Doc: ffe8a825-b52e-4ceb-aea2-30de0f2c3306
There have also been reports of end users seeing the message "Internal Engine error" when opening apps while the error above is present.
Also for QlikView see article SE_LOG: when AAALR(1072.471418) is greater than 1.000000, we suggest using new row applicator to improve time and mem efficiency.
"AAALR" is a very low-level concept deep in the engine. Generally speaking, it means the average length of the aggregation array. The longer this array is, the more memory and CPU power the Engine uses to compute aggregation results for every hypercube node.
When AAALR is greater than 1.0, normally the customer has a large data set and suffers slow responses and high memory usage in their app. In this case, Qlik Sense has a setting called DisableNewRowApplicator (default value is 1).
By setting this parameter to “0”, Qlik Sense will use a new algorithm which is optimized for large data set to do the aggregation, and will use much less memory and CPU power.
For apps showing AAALR warnings, making this change has resulted in drastic performance increases.
Possible setting values for DisableNewRowApplicator:
[Settings 7]
DisableNewRowApplicator=0
(leave an empty line after the last entry; the cursor should be on that empty line when saving the file)
Reloads with Data Gateway are randomly slower, sometimes they last for 3 hours until they are automatically aborted.
The same reloads usually run faster and complete in time when relaunched manually or automatically.
The logs do not show specific error messages.
We recommend two actions to resolve the problem.
The first is to activate the process isolation in order to reduce the number of reloads running at the same time. Please, follow this article.
It is possible to start with a value of 10 for the ODBC|SAPBW|SAPSQL_MAX_PROCESS_COUNT parameter and adjust it after some tests.
The second action is to add the "DISCONNECT" command after every query and to start every new query with a "LIB CONNECT".
This will force the closure and re-creation of the connection every time it is needed.
More information about the DISCONNECT statement can be found here.
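The disconnect/reconnect pattern described above can be sketched in a load script like the following (the connection name and table names are hypothetical placeholders; substitute your own Data Gateway connection):

```
// Hypothetical Data Gateway connection name
LIB CONNECT TO 'MyGateway_SAP';

Orders:
SQL SELECT * FROM SCHEMA1.ORDERS;

// Close the connection as soon as the query completes
DISCONNECT;

// Re-open the connection immediately before the next query
LIB CONNECT TO 'MyGateway_SAP';

Customers:
SQL SELECT * FROM SCHEMA1.CUSTOMERS;

DISCONNECT;
```

This ensures each query runs on a freshly created connection rather than relying on one that may have been silently closed in the meantime.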
We always recommend keeping Data Gateway on the latest available version.
This intermittent problem can be due to different causes.
In many cases the system can't handle multiple connections efficiently and this can lead to severe slowness in the data connection. Activating a Process Isolation will help to avoid this.
It is also possible that there is a delay between the connection opening and the query.
A connection for a query can be opened at the beginning of a reload, then kept open for a while and called again later for another table load in the script.
A disconnection can also occur while the connection is idle, for example if another connection to the same location is opened by a concurrent reload, or if a timeout automatically closes the connection.
It is possible to force Data Gateway to recreate the connection using the "DISCONNECT" statement in the script.
Log4j, incorporated in Talend software, is an essential tool for discovering and solving problems. This article shows you some tips and tricks for using Log4j.
The examples in this article use Log4j v1, but Talend 7.3 uses Log4j v2. Although the syntax is different between the versions, anything you do in Log4j v1 should work, with some modification, in Log4j v2. For more information on Log4j v2, see Configuring Log4j, available in the Talend Help Center.
Content:
Configure the log4j.xml file in Talend Studio by navigating to File > Edit Project properties > Log4j.
You can also configure Log4j using properties files or built-in classes; however, that is not covered in this article.
You can execute code in a tJava component to create Log4j messages, as shown in the example below:
log.info("Hello World");
log.warn("HELLO WORLD!!!");
This code results in the following messages:
[INFO ]: myproject.myjob - Hello World
[WARN ]: myproject.myjob - HELLO WORLD!!!
You can use Log4j to emit messages by creating a logger class in a routine, as shown in the example below:
public class logSample {
    /* Pick 1 that fits */
    private static org.apache.log4j.Logger log = org.apache.log4j.Logger.getLogger(logSample.class);
    private static org.apache.log4j.Logger log1 = org.apache.log4j.Logger.getLogger("from_routine_logSample");
    /* ... */
    public static void helloExample(String message) {
        if (message == null) {
            message = "World";
        }
        log.info("Hello " + message + " !");
        log1.info("Hello " + message + " !");
    }
}
To call this routine from Talend, use the following command in a tJava component:
logSample.helloExample("Talend");
The log results will look like this:
[INFO ]: routines.logSample - Hello Talend !
[INFO ]: from_routine_logSample - Hello Talend !
Using <routineName>.class includes the class name in the log results. Using free text with the logger includes the text itself in the log results. This is not really different than using System.out, but Log4j can be customized and fine-tuned.
You can use patterns to control the Log4j message format. Adding patterns to Appenders customizes their output. Patterns add extra information to the message itself. For example, when multiple threads are used, the default pattern doesn't provide information about the origin of the message. Use the %t variable to add a thread name to the logs. To easily identify new messages, it's helpful to use %d to add a timestamp to the log message.
To add thread names and timestamps, use the following pattern after the CONSOLE appender section in the Log4j template:
<param name="ConversionPattern" value= "%d{yyyy-MM-dd HH:mm:ss} [%-5p] (%t): %c - %m%n" />
The pattern displays messages as follows:
ISO formatted date [log level] (thread name): class projectname.jobname - message contents
If the following Java code is executed in three parallel threads, using the sample pattern above helps distinguish between the threads.
java.util.Random rand = new java.util.Random();
log.info("Hello World");
Thread.sleep(rand.nextInt(1000));
log.warn("HELLO WORLD!!!");
logSample.helloExample("Talend");
This results in an output that shows which thread emitted the message and when:
2020-05-19 12:18:30 [INFO ] (tParallelize_1_e45bc79b-d61f-45a3-be8f-7089ab6d565d): myproject.myjob_0_1.myjob - Hello World
2020-05-19 12:18:30 [INFO ] (tParallelize_1_4064c9b8-0585-41e0-b9f0-95fb31e602b7): myproject.myjob_0_1.myjob - Hello World
2020-05-19 12:18:30 [INFO ] (tParallelize_1_a8ef1065-0106-4b45-8a60-d02a9cbe1f00): myproject.myjob_0_1.myjob - Hello World
2020-05-19 12:18:30 [WARN ] (tParallelize_1_e45bc79b-d61f-45a3-be8f-7089ab6d565d): myproject.myjob_0_1.myjob - HELLO WORLD!!!
2020-05-19 12:18:30 [INFO ] (tParallelize_1_e45bc79b-d61f-45a3-be8f-7089ab6d565d): routines.logSample - Hello Talend !
2020-05-19 12:18:30 [INFO ] (tParallelize_1_e45bc79b-d61f-45a3-be8f-7089ab6d565d): from_routine.logSample - Hello Talend !
2020-05-19 12:18:30 [WARN ] (tParallelize_1_a8ef1065-0106-4b45-8a60-d02a9cbe1f00): myproject.myjob_0_1.myjob - HELLO WORLD!!!
2020-05-19 12:18:30 [INFO ] (tParallelize_1_a8ef1065-0106-4b45-8a60-d02a9cbe1f00): routines.logSample - Hello Talend !
2020-05-19 12:18:30 [INFO ] (tParallelize_1_a8ef1065-0106-4b45-8a60-d02a9cbe1f00): from_routine.logSample - Hello Talend !
2020-05-19 12:18:31 [WARN ] (tParallelize_1_4064c9b8-0585-41e0-b9f0-95fb31e602b7): myproject.myjob_0_1.myjob - HELLO WORLD!!!
2020-05-19 12:18:31 [INFO ] (tParallelize_1_4064c9b8-0585-41e0-b9f0-95fb31e602b7): routines.logSample - Hello Talend !
2020-05-19 12:18:31 [INFO ] (tParallelize_1_4064c9b8-0585-41e0-b9f0-95fb31e602b7): from_routine.logSample - Hello Talend !
If you want to know which component belongs to which thread, you need to change the log level to add more information.
You can do this in Studio on the Run tab, in the Advanced settings tab of the Job execution.
In Talend Administration Center, you do this in Job Conductor.
Using DEBUG level adds a few extra lines to the log file, which can help you understand which parameters resulted in a certain output:
2020-05-19 12:51:50 [DEBUG] (tParallelize_1_c6de81be-1bbf-4f9b-9b7a-3d92bf345c40): myproject.myjob_0_1.myjob - tParallelize_1 - The subjob starting with the component 'tJava_1' starts.
2020-05-19 12:51:50 [DEBUG] (tParallelize_1_fa636a36-9f53-423f-abc6-b26c4c52c5b4): myproject.myjob_0_1.myjob - tParallelize_1 - The subjob starting with the component 'tJava_3' starts.
2020-05-19 12:51:50 [DEBUG] (tParallelize_1_d4da8ea0-4401-4229-82e9-86ff0ed67c3b): myproject.myjob_0_1.myjob - tParallelize_1 - The subjob starting with the component 'tJava_2' starts.
Keep in mind the following:
The following table describes the Log4j logging levels you can use in Talend applications:
Debug Level | Description |
TRACE | Everything that is available is being emitted at this logging level, which makes every row behave like it has a tLogRow component attached. This can make the log file extremely large; however, it also displays the transformation done by each component. |
DEBUG | This logging level displays the component parameters, database connection information, queries executed, and provides information about which row is processed, but it does not capture the actual data. |
INFO | This logging level includes the Job start and finish times, and how many records were read and written. |
WARN | Talend components do not use this logging level. |
ERROR | This logging level writes exceptions. These exceptions do not necessarily cause the Job to halt. |
FATAL | When this appears, the Job execution is halted. |
OFF | Nothing is emitted. |
These levels offer high-level control over messages. When changed externally, they affect only Appenders that do not specify their own log level and instead rely on the level set by the root logger.
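For reference, the root logger mentioned above is configured in log4j.xml; a minimal Log4j v1 root element might look like the following sketch (the CONSOLE appender name is an assumption based on the appender section referenced earlier in this article):

```xml
<!-- Root logger: sets the default level for all loggers without an explicit level -->
<root>
  <priority value="INFO"/>
  <!-- Route messages to the CONSOLE appender defined elsewhere in log4j.xml -->
  <appender-ref ref="CONSOLE"/>
</root>
```

Appenders with their own Threshold (such as the file appender below) keep their configured level regardless of the root logger's setting.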
Log4j messages are processed by Appenders, which route the messages to different outputs, such as to console, files, or logstash. Appenders can even send messages to databases, but for database logs, the built-in Stats & Logs might be a better solution.
Storing Log4j messages in files can be useful when working with standalone Jobs. Here is an example of a file Appender:
<appender name="ROLLINGFILE" class="org.apache.log4j.RollingFileAppender">
  <param name="file" value="rolling_error.log"/>
  <param name="Threshold" value="ERROR"/>
  <param name="MaxFileSize" value="10000KB"/>
  <param name="MaxBackupIndex" value="5"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d{yyyy-MM-dd HH:mm:ss} [%-5p] (%t): %c - %m%n"/>
  </layout>
</appender>
You can use multiple Appenders to have multiple files with different log levels and formats. Use the parameters to control the content. The Threshold value of ERROR doesn't provide information about the Job execution, but a value of INFO makes errors harder to detect.
For more information on Appenders, see the Apache Interface Appender page.
You can use filters with Appenders to keep messages that are not of interest out of the logs. Log4j v2 offers regular expression based filters too.
The following example filter omits any Log4j messages that contain the string " - Adding the record ".
<filter class="org.apache.log4j.varia.StringMatchFilter">
  <param name="StringToMatch" value=" - Adding the record " />
  <param name="AcceptOnMatch" value="false" />
</filter>
When a Java program starts, it attempts to load its Log4j settings from the log4j.xml file. You can modify this file to change the default settings, or you can force Java to use a different file. For example, you can do this for Jobs deployed to Talend Administration Center by configuring the JVM parameters. This way, you can change the logging behavior for a Job without modifying the original Job, or you can revert back to the original logging behavior by clearing the Active check box.
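As an illustration of forcing Java to use a different configuration file, a JVM parameter along the following lines could be added (the file paths here are hypothetical placeholders; note that the property name differs between Log4j v1 and v2):

```
# Log4j v1 (as used in the examples in this article)
-Dlog4j.configuration=file:/opt/talend/conf/custom-log4j.xml

# Log4j v2 (as used by Talend 7.3)
-Dlog4j.configurationFile=file:/opt/talend/conf/custom-log4j2.xml
```

In Talend Administration Center, such a parameter can be supplied via the Job's JVM parameters, leaving the Job itself unmodified.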
Qlik Sense can process a maximum of 1,048,576 (2^20) characters per row when loading data from a CSV file. If a row in the source CSV file is longer than this limit, Qlik Sense automatically breaks it into multiple rows in the loaded table.
This doesn't happen when loading another file format (like XML) or loading the same CSV file in QlikView.
To increase the maximum length, set the LongestPossibleLine parameter in the Qlik Sense Engine's Settings.ini file to a value higher than 1048576.
See How to modify Qlik Sense Engine's Settings.ini for detailed instructions of changing parameters in Settings.ini.
The Qlik Sense engine supports a line length of up to 512 megabytes (512*1024*1024). A script reload can handle strings up to this length in a single data cell. However, when using the data selection wizard, such a long string may break the web socket; therefore, the maximum string length is limited to 1,048,576 characters by default to avoid this web socket issue.
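As an illustration, the Settings.ini entry might look like the sketch below (2097152 is just an example value, chosen within the supported 512 MB limit; adjust to your needs):

```
[Settings 7]
LongestPossibleLine=2097152
```

As with other Settings.ini changes, leave an empty line after the last entry when saving the file.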
When replicating a CDS View, updates are processed as inserts rather than updates. The SAP CDS view delta works with UPSERT logic, so SAP captures INSERT and UPDATE as one operation, which is treated as an INSERT.
If you have a SAP login you can look up SAP Note 3300238 for more information as shown below:
SAP Note 3300238 - ABAP CDS CDC: ODQ_CHANGEMODE not showing proper status for creation
Component: BW-WHM-DBA-ODA (SAP Business Warehouse > Data Warehouse Management > Data Basis > Operational Data Provider for ABAP CDS, HANA & BW), Version: 4, Released On: 19.01.2024
This is working as expected. It is the designed behavior of the CDC logic. For both insert and update, ODQ_CHANGEMODE = U and ODQ_ENTITYCNTR = 1.
The CDC-delta logic is designed as UPSERT-logic. This means a DB-INSERT (or create) or a DB-UPDATE both get the ODQ_CHANGEMODE = U and ODQ_ENTITYCNTR = 1. It's not possible to distinguish in CDC-delta between Create and Update.
Qlik Replicate
SAP S/4HANA
SAP BW/4HANA
All versions of Replicate with an Oracle source that buffers online redo logs can experience caching issues on Linux-related operating systems. This is seen and verified when the task is stuck in a loop, reading the same redo log over and over again.
[SOURCE_CAPTURE ]V: Reading blocks at offset 0000000000000a00 (from block 5) (oradcdc_redo.c:1096)
[SOURCE_CAPTURE ]V: Start read from online Redo log 5120 bytes at offset 0000000000000800 for requested offset 0000000000000a00, thread '1' (oradcdc_redo.c:1147)
[SOURCE_CAPTURE ]V: Completed to read from Redo log with rc 1 (oradcdc_redo.c:1161)
[SOURCE_CAPTURE ]V: Page validate - iBlockIndex 5 rba.iBlockIndex 4 iBlocksCount 2097153. Current Redo log sequence is 10703. (oradcdc_redo.c:1255)
[SOURCE_CAPTURE ]V: Validate Unverified, current Redo log sequence is 10703, block Redo log sequence is 10700 (oradcdc_redo.c:1330)
[SOURCE_CAPTURE ]V: Reading blocks at offset 0000000000000a00 (from block 5) (oradcdc_redo.c:1096)
When archived redo logs are in use, a log switch will happen when a new archive log is generated from the online redo logs. While the task is unable to get the most recent online redo log, the Replicate task will be able to detect the log switch and be able to read off the archived redo logs to continue the replication process. Latency will be seen as the task is stuck reading the same redo log until the archive log can be generated and read. The default task behavior is to recover from the caching issue when a new archive log is finally generated.
The supportResetlog Internal Parameter (the default option in a Qlik Replicate task) has been found not to detect the log switches. It does not switch to reading the archived log, so the task remains stuck reading the old cached redo logs.
Qlik is actively investigating the issue and will issue a fix. Review the release notes of the latest version for details.
Disabling the supportResetlog Internal Parameter can be used as a workaround.
QB-26734
RECOB-8423
An Oracle table has a primary key defined as Invisible, which the Qlik Replicate task does not use. Qlik Replicate encounters errors when updating rows in a table with an Oracle invisible primary key (PK).
Message in the task log file:
[TASK_MANAGER ]W: Table 'XYZ'.D_XYZ' (subtask 0 thread 1) is suspended. Failed to build 'where' statement; Failed to get update statement for table XYZ'.D_XYZ', stream position 00007776: 2024-03-22T06:52:46:484188 [ASSERTION ]V: 'UPDATE (3)' event of table XYZ'.D_XYZ' with id '2065373' does not contain all key values (0 from 1), stream position '00000792.a5e2a494.00000001.0008.01.0000:44682.246184.16' (streamcomponent.c:2984)
The primary key is correctly defined in both source and target Databases and the insert statements are correctly replicated.
Excerpt from table DDL:
CREATE TABLE XYZ.D_XYZ
(
  MLOT VARCHAR2(4 BYTE) NOT NULL,
  MLDAY NUMBER(6) NOT NULL,
  NUM_SEQ NUMBER INVISIBLE NOT NULL
);

ALTER TABLE XYZ.D_XYZ ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

ALTER TABLE XYZ.D_XYZ ADD (
  CONSTRAINT NUM_SEQ_PK
  PRIMARY KEY (NUM_SEQ)
  USING INDEX XYZ.NUM_SEQ_PK
  ENABLE VALIDATE);
The advanced setting 'Support invisible columns' is in use, but the task does not use the key even when this configuration parameter is set. The table behaves as if there were no primary key at all, and the task tries to build a where clause with all fields.
In this case, that did not work because the source table only had Primary Key supplemental logging enabled. To get this to work, we enabled ALL COLUMN supplemental logging on the source table, after which the task was able to build the correct where clause for updates.
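For illustration, enabling ALL COLUMN supplemental logging on the table from the excerpt above would look something like the following sketch (verify against your own schema and Oracle version before running):

```sql
-- Log all columns in the redo log so Replicate can build a correct
-- where clause even when the invisible primary key is not captured
ALTER TABLE XYZ.D_XYZ ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```

Note that ALL COLUMN supplemental logging increases redo volume, so weigh this against the replication requirements.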
A task which uses the SAP extractor as the source and replicates data to Snowflake may stop at a line feed character during load processing.
In SAP:
Activate the extractors for Replicate
QB-26591
With Qlik Application Automation, you can get data out of Qlik Cloud and distribute it to different users in formatted Excel. The workflow can be automated by leveraging the connectors for Office 365, specifically Microsoft SharePoint and Microsoft Excel.
Here I share two example Qlik Application Automation workspaces that you can use and modify to suit your requirements.
Content:
Video:
Note - These instructions assume you have already created connections as required in Example 1.
This On-Demand Report Automation can be used across multiple apps and tables. Simply copy the extension object between apps & sheets, and update the Object ID (Measure 3) for each instance.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
This article gives an overview of the measure distribution use case. It explains a basic example of a template configured for this scenario and additions for a more advanced use case.
For this use case, we will define the following keywords/expressions:
By using this approach, all you need to do is create/update your master items in your main app, and then push these updates to all your destination apps. This way, all destination apps have the same master items.
To support this use case, we created a basic template, which uses measures as master items.
By running this template, you will be able to distribute all the measures created in your main app to all the apps available in the destination space.
All you need to do is select your main app and your destination space.
Of course, this is just a basic implementation. This template can be upgraded to suit more advanced scenarios.
Let's go over a few examples:
The changes made by this automation won't be accessible immediately in other sessions (such as the Qlik Sense UI); more information can be found here: Automation session delay. It can take up to 40 minutes for these changes to become visible in other sessions. If the changes are needed sooner, the Save App block can be used, but keep in mind it can only be used once for every app changed by the automation. More information on the Save App block can be found here: How to use the Save App block.
For the above example, it's best to add an additional List Apps block that is configured exactly the same as the first one, so that it returns the same apps. We'll add a Save App block in the loop of the new List Apps block and configure it to run for every app that is returned. This way, we make sure that the Save App block is executed only once for every app that was changed. See the image below for an example with the Save App block.
First part: includes an input block for the source/destination apps and for the measure tags.
Second part: includes a measure deletion flow, for a complete sync automation process.
Both these template examples are available as attachments.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
Reload fails in the QMC even though the script part is successful, in Qlik Sense Enterprise on Windows November 2023 and above.
When you are using NetApp-based storage, you might see an error when trying to publish and replace, or when reloading a published app.
In the QMC, you will see that the script load itself finished successfully, but the task failed after that.
ERROR QlikServer1 System.Engine.Engine 228 43384f67-ce24-47b1-8d12-810fca589657
Domain\serviceuser QF: CopyRename exception:
Rename from \\fileserver\share\Apps\e8d5b2d8-cf7d-4406-903e-a249528b160c.new
to \\fileserver\share\Apps\ae763791-8131-4118-b8df-35650f29e6f6
failed: RenameFile failed in CopyRename
ExtendedException: Type '9010' thrown in file
'C:\Jws\engine-common-ws\src\ServerPlugin\Plugins\PluginApiSupport\PluginHelpers.cpp'
in function 'ServerPlugin::PluginHelpers::ConvertAndThrow'
on line '149'. Message: 'Unknown error' and additional debug info:
'Could not replace collection
\\fileserver\share\Apps\8fa5536b-f45f-4262-842a-884936cf119c] with
[\\fileserver\share\Apps\Transactions\Qlikserver1\829A26D1-49D2-413B-AFB1-739261AA1A5E],
(genericException)'
<<< {"jsonrpc":"2.0","id":1578431,"error":{"code":9010,"parameter":
"Object move failed.","message":"Unknown error"}}
ERROR Qlikserver1 06c3ab76-226a-4e25-990f-6655a965c8f3
20240218T040613.891-0500 12.1581.19.0
Command=Doc::DoSave;Result=9010;ResultText=Error: Unknown error
0 0 298317 INTERNAL sa_scheduler b3712cae-ff20-4443-b15b-c3e4d33ec7b4
9c1f1450-3341-4deb-bc9b-92bf9b6861cf Taskname Engine Not available
Doc::DoSave Doc::DoSave 9010 Object move failed.
06c3ab76-226a-4e25-990f-6655a965c8f3
Qlik Sense Client Managed version:
Potential workarounds
The most plausible cause currently is that the specific engine version has issues releasing File Lock operations. We are actively investigating the root cause, but there is no fix available yet.
QB-25096
QB-26125
This article covers the details of how to license a Qlik Sense Enterprise on Windows server with a Signed License Key (SLK).
Index:
To apply a Signed License Key, a secure network connection must be established: a signed license key requires connectivity to license.qlikcloud.com. See List of IP Addresses behind license.qlikcloud.com and lef.qliktech.com for details.
The connection can be established in any of the security scenarios below:
All nodes in a Qlik Sense Enterprise on Windows on-premise multi node environment need access to the license server.
The information in this article is provided as is. Adjustments to error messages cannot be supported by Qlik Support and can lead to difficulties identifying underlying root causes of technical issues experienced later. All changes will be reverted after an upgrade.
The method documented in this article is intended for later versions of Qlik Sense Enterprise on Windows.
The message texts are defined in JavaScript files located by default in path C:\Program Files\Qlik\Sense\Client\translate\. There are subfolders for each supported language, e.g. en-US for English.
To modify a message text, edit the relevant file (e.g. hub.js) in the folder corresponding to the language you want to change the text for.
For example, to change the default message text for access denied messages, find the following line:
"ProxyError.OnLicenseAccessDenied": "You cannot access Qlik Sense because you have no access pass.",
To change the text of the message, modify the second quoted string (the value), as in the example below:
"ProxyError.OnLicenseAccessDenied": "This is a modified message text for access denied events",
To modify the error message:
Users will not be able to see the new message until their browser cache has been cleared.
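The edit above can also be scripted, for instance when the same change must be applied after every upgrade (since upgrades revert the files). The sketch below is an illustrative assumption, not an official Qlik procedure: it replaces the value of a given message key in a translation file and keeps a backup copy. The file path and key name come from the article.

```python
# Sketch: replace the value of a message key in a Qlik Sense translation file.
# The path and key below come from the article; the backup handling is an
# illustrative assumption, not an official Qlik procedure.
import re
import shutil
from pathlib import Path


def set_message(path: Path, key: str, new_text: str) -> None:
    """Rewrite '"<key>": "<old text>"' to use new_text, keeping a .bak copy."""
    shutil.copy2(path, Path(str(path) + ".bak"))  # back up the original first
    content = path.read_text(encoding="utf-8")
    # Match the key and its quoted value; replace only the value.
    pattern = re.compile(r'("%s"\s*:\s*")[^"]*(")' % re.escape(key))
    new_content = pattern.sub(lambda m: m.group(1) + new_text + m.group(2), content)
    path.write_text(new_content, encoding="utf-8")


# Usage (run as an administrator, then have users clear their browser cache):
# set_message(Path(r"C:\Program Files\Qlik\Sense\Client\translate\en-US\hub.js"),
#             "ProxyError.OnLicenseAccessDenied",
#             "This is a modified message text for access denied events")
```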
Qlik Sense Repository Service API (QRS API) contains all data and configuration information for a Qlik Sense site. The data is normally added and updated using the Qlik Management Console (QMC) or a Qlik Sense client, but it is also possible to communicate directly with the QRS using its API. This enables the automation of a range of tasks, for example:
Using Xrfkey header
A common vulnerability in web clients is cross-site request forgery (XSRF), which lets an attacker impersonate a user when accessing a system. The Xrfkey prevents this: the same key must be supplied both in the X-Qlik-xrfkey header and as the xrfkey parameter in the URL. If the Xrfkey is not set in the URL, the server responds with: XSRF prevention check failed. Possible XSRF discovered.
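The Xrfkey mechanism can be sketched as follows: generate a random 16-character alphanumeric key and place the identical value in the header and in the URL. The helper names and the server hostname below are illustrative assumptions; only the header name, parameter name, and key format come from the article.

```python
# Sketch: build a QRS request URL and header with a matching Xrfkey.
# Helper names and the hostname are illustrative; the QRS API requires the
# same 16-character alphanumeric key in the X-Qlik-xrfkey header and in the
# xrfkey URL parameter.
import secrets
import string


def make_xrfkey(length: int = 16) -> str:
    """Generate a random alphanumeric key of the length QRS expects (16)."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))


def qrs_request_parts(host: str, endpoint: str) -> tuple:
    """Return (url, headers) carrying the same xrfkey in both places."""
    key = make_xrfkey()
    url = f"https://{host}/qrs/{endpoint}?xrfkey={key}"
    headers = {"X-Qlik-xrfkey": key}
    return url, headers
```

Any HTTP client can then send the request, as the PowerShell examples further below do with a hard-coded key.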
Environments:
Note: This example relates to token-based licenses. If you need to configure this for Professional/Analyzer license types, you might need to use the following API calls:
Furthermore, if you combine this with QlikCli and need to monitor or, more specifically, remove users, the following community link may be useful: Deallocation of Qlik Sense License
This procedure has been tested in a range of Qlik Sense Enterprise on Windows versions.
# The X-Qlik-xrfkey header must match the xrfkey URL parameter
$hdrs = @{}
$hdrs.Add("X-Qlik-xrfkey","12345678qwertyui")
# Call the QRS API through the proxy with Windows credentials
$url = "https://qlikserver1.domain.local/qrs/about?xrfkey=12345678qwertyui"
Invoke-RestMethod -Uri $url -Method Get -Headers $hdrs -UseDefaultCredentials
# The X-Qlik-xrfkey header must match the xrfkey URL parameter
$hdrs = @{}
$hdrs.Add("X-Qlik-xrfkey","12345678qwertyui")
# When calling port 4242 directly, the user is specified in a header
$hdrs.Add("X-Qlik-User","UserDirectory=DOMAIN;UserId=Administrator")
# Fetch the Qlik client certificate from the current user's certificate store
$cert = Get-ChildItem -Path "Cert:\CurrentUser\My" | Where {$_.Subject -like '*QlikClient*'}
$url = "https://qlikserver1.domain.local:4242/qrs/about?xrfkey=12345678qwertyui"
Invoke-RestMethod -Uri $url -Method Get -Headers $hdrs -Certificate $cert
Execute the command.
A possible response for the two scripts above may look like this (note that PowerShell automatically converts the JSON string to a PSCustomObject):
buildVersion      : 23.11.2.0
buildDate         : 9/20/2013 10:09:00 AM
databaseProvider  : Devart.Data.PostgreSql
nodeType          : 1
sharedPersistence : True
requiresBootstrap : False
singleNodeOnly    : False
schemaPath        : About
If certificates from several Qlik Sense servers are installed, they cannot be fetched by subject: the filter will match multiple certificates with the subject QlikClient and return an array instead of a single certificate, causing the script to fail. In that case, fetch the certificate by thumbprint. This requires more PowerShell knowledge, but an example can be found here: How to find certificates by thumbprint or name with powershell
Qlik Sense allows for three settings that may influence the perceived connection and session timeout period. These are the "Session Inactivity Timeout", "Keep-Alive Timeout", and "TCP Websocket keep-alive" settings.
Note: Adjusting the settings below can help when working over slow internet connections or when you want to extend the session inactivity period. However, session disconnects can also be caused by other network connectivity issues or by a shortage of system resources and may require additional troubleshooting. See Hub access times out with: Error Connection lost. Make sure that Qlik Sense is running properly
This is the maximum timeout for a single HTTP request. The default value is 10 seconds. For the duration of the defined keep-alive timeout, the connection between the end user and Qlik Sense remains open.
It also serves as protection against denial-of-service attacks: if an ongoing request exceeds this period, the Qlik Sense proxy closes the connection.
Increase this value if your users work over slow connections and experience closed connections for which no other workaround has been found. Make sure to take the denial-of-service consideration mentioned above into account.
This is the browser authentication session timeout (30 minutes by default, set under Virtual Proxy in the QMC). It sets a cookie named X-Qlik-Session on the client machine. The cookie can be traced in Fiddler or in the browser's developer tools under the headers tab.
If the session cookie header value is not passed, or is destroyed or modified between the end-user client and the Qlik Sense server while 'in flight', the user session is terminated and the user is logged out.
By default, it will be destroyed after 30 minutes of inactivity or when the browser is closed.
This is another setting that may help keep the connection open in certain environments. See Enabling TCP Keep Alive Functionality In Qlik Sense. Note that customers who do not experience issues with web sockets being terminated by the network due to inactivity SHOULD NOT switch this feature ON, since it may cause Qlik Sense to send unnecessary network traffic towards the client.