Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
Index:
We're happy to help! Here's a breakdown of resources for each type of need.
| Support | Professional Services (*) |
| --- | --- |
| Reactively fixes technical issues and answers narrowly defined, specific questions. Handles administrative issues to keep the product up to date and functioning. | Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement. |
(*) Reach out to your Account Manager or Customer Success Manager.
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about Qlik products and solutions, such as scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar series that facilitates knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
Log in to manage and track your active cases in the Case Portal.
Please note: to create a new case, it is easiest to do so via our chat (see above). Our chat will log your case through a series of guided intake questions.
When creating a case, you will be prompted to enter problem type and issue level. Definitions shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible, but there are some functional limitations that are not critical to daily operations.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
When other Support Channels are down for maintenance, please contact us via phone for high severity production-down concerns.
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
The following error may occur when executing a Job in Talend Studio version 8 that references a custom user routine:
java.lang.NoClassDefFoundError: routines/my-custom-routine
The error occurs because the custom routine is not found at runtime, which prevents the Job from executing.
To resolve this issue, uncheck the "Offline" checkbox in Talend Studio's Maven preferences (Window -> Preferences -> Maven). This enables Talend to download the required dependencies, resolving the classpath issue and enabling successful Job execution.
Asking questions using Insight Advisor Chat in the Hub on Qlik Sense Enterprise on Windows may result in the message "Unable to get data" being returned. See Fig 1.
Verify that the LEF file includes either of the following two attributes, which entitle it to Insight Advisor Chat:
Then do the following:
[nl-parser]
//Disabled=true
Identity=Qlik.nl-parser
[nl-app-search]
//Disabled=true
Identity=Qlik.nl-app-search
Qlik Sense Enterprise on Windows
Executing or modifying tasks (changing the owner, renaming an app) in the Qlik Sense Management Console and refreshing the page does not update the task status correctly. The issue affects the Content Admin and Deployment Admin roles.
The behaviour began after an upgrade of Qlik Sense Enterprise on Windows.
This issue can be mitigated beginning with the August 2021 release by enabling the QMCCachingSupport Security Rule.
Enable QmcTaskTableCacheDisabled.
To do so:
Upgrade to the latest Service Release and disable the caching functionality:
To do so:
NOTE: Make sure to use lower case when setting values to true or false, as the capabilities.json file is case sensitive.
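For illustration only, a minimal sketch of what the relevant capabilities.json entry might look like; the exact surrounding structure varies by release, so treat everything except the flag name and the lowercase boolean as an assumption:

{
  "flag": "QmcTaskTableCacheDisabled",
  "enabled": true
}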
Should the issue persist after applying the workaround/fix, contact Qlik Support.
This is a problem which, on first impressions, should not (and, you would think, logically cannot) happen. It is therefore important to understand why it does happen, and what can be done to resolve it when it does.
The situation is that Replicate is doing a Full Load for a table (individually or as part of a task full loading many tables). The source and target tables have identical unique primary keys, and there are no uppercasing or other character-set issues in the key columns that might otherwise cause duplication problems. Yet as the Full Load for the table progresses, probably nearing the end, you get a message indicating that Replicate has failed to insert a row into the target because of a duplicate: there is already a row in the target table with the unique key of the row it is trying to insert. The Full Load for that table is terminated (often after several hours), and if you try again, the same error, perhaps for a different row, will often occur.
Logically this shouldn't happen, but it does. The likelihood of it happening depends on the source DBMS type and the types of columns in the source table, and you will find it always involves a table that is being updated (SQL UPDATEs) as Replicate copies it. The higher the update rate and the bigger the table, the more likely it is to happen.
Note: This article discusses the problems related to duplicates during TARGET_LOAD, not TARGET_APPLY; that is, during Full Load and before the cached changes start to be applied.
To understand the fix we first need to understand why the problem occurs, and this involves understanding some of the internal workings of most conventional Relational Database Management Systems.
RDBMSs tend to employ different terminology for things that exist in all of them. I'm going to use DB2 terminology and explain each term the first time I use it. With a different RDBMS the terminology may differ, but the concepts are generally the same.
The first concept to introduce is the Tablespace. That’s what it’s called in DB2, but it exists for all databases and is the physical area where the rows that make up the table are stored. Logically it can be considered as a single contiguous data area, split up into blocks, numbered in ascending order.
This is where your database puts the row data when you INSERT rows into the table. What's also important is that it tries to update the existing data for a row in place when you do an UPDATE, but it may not always be able to do so. If it cannot, it will move the updated row to another place in the tablespace, usually at what is then the highest used block (the endpoint) of the tablespace area.
The next point concerns how the DBMS decides to access data from the tablespace when resolving your SQL calls. Each RDBMS has an optimiser, or something similar, that makes these decisions. The role of indexes in a relational database is somewhat strange: they are not really part of the standard relational model, although in practice they are used to guarantee uniqueness and support referential integrity. Beyond those roles, they exist only to help the optimiser come up with faster ways of retrieving the rows that satisfy your SELECT (database read) statements.
When any piece of SQL (we’ll focus on simple SELECT statements here) is presented to the optimiser, it decides on what method to use to search for and retrieve any matching rows from the tablespace. The default method is to search through all the rows directly in the tablespace looking for rows that match any selection criteria, this is known as a Tablespace Scan.
A Tablespace Scan may be the best way to access rows from a table, particularly if it is likely that many or most of the rows in the table will match the selection criteria. For other SELECTs though that are more specific about what row(s) are required, a suitable matching index may be used (if one exists) to go directly to the row(s) in the tablespace.
The sort of SQL that Replicate generates to execute against the source table when it is doing a Full Load is of the form SELECT * FROM, or SELECT col1, col2, … FROM. Neither of these has any row-specific selection criteria, and in fact this is to be expected, as a Full Load is in general intended to select all rows from the source table.
As a result the database optimiser is not likely to choose to use an index (even if a unique index on the table exists) to resolve this type of SELECT statement, and instead a Tablespace Scan of the whole tablespace area will take place. This, as you will see later, can be inconvenient to us but is in fact the fastest way of processing all the rows in the table.
When we do a Full Load copy for a table that is ‘live’ (being updated as we copy it), the result we end up with when the SELECT against the source has been completed and we have inserted all the rows into the target is not likely to be consistent with what is then in the source table. The extent of the differences is dependent on the rate of updates and how long the Full Load for that table takes. For high update rates on big tables that take many hours for a Full Load the extent of the differences can be quite considerable.
This all sounds very worrying, but it is not, as the CDC (Change Data Capture) part of Replicate takes care of it. CDC is mainly known for replicating changes from source to target after the initial Full Load has been taken, keeping the target copies up to date and in line with the changing source tables. However, CDC processing has an equally important role to play in the Full Load process itself, especially when it is being done on 'live' tables subject to updates while the Full Load is being processed.
In fact, CDC processing doesn't start when the Full Load is finished, but before the Full Load starts. This is so that it can collect details of the changes occurring at the source whilst the Full Load (and its associated SELECT statement) is taking place. The changes collected during this period are known as the 'cached changes', and they are applied to the newly populated target table before the task switches into normal ongoing CDC mode to capture all subsequent changes.
This takes care of and fixes all of the table row data inconsistencies that are likely to occur during a table Full Load, but there is one particular situation that can catch us out before the Full Load completes and the cached changes can be applied. It results in Replicate trying to insert details of the same row more than once into the target table, triggering the duplicates error we are talking about here.
Consider this situation:
That is how the problem occurs. Variable-length columns and binary object columns in the source table make this (movement of a row to a new location in the tablespace) much more likely to happen, and with it the duplicate insert problem.
So how do we fix this, or at least find a method to stop it happening?
The solution is to persuade the optimiser in the source database to use the unique index on the table to access the rows in the table's tablespace, rather than scanning sequentially through it. The index (being unique) will only provide one row to read for each key as the execution of our SELECT statement progresses. We don't have to worry about whether it is the 'latest' version of the row or not, because that will be taken care of later by the application of the cached changes.
The optimiser can (generally) be persuaded to use the unique index on the source table if the SELECT statement indicates that the rows in the result set must be returned in the order given by that index. This requires a SELECT statement with an ORDER BY clause matching the columns in the unique index: something of the form SELECT * FROM … ORDER BY col1, col2, col3, etc., where col1, col2, col3, etc. are the columns that make up the table's unique primary index.
But how can we do this? Replicate has an undocumented facility that allows the user to configure extra text to be added to the end of the generated SQL for a particular table during Full Load processing, specifically to add a WHERE clause determining which rows are included or excluded during a Full Load extract.
This is not exactly what we want to do (we want to include all rows), but this 'FILTER' facility also provides the option to extend the content of the generated SELECT statement after the WHERE part has been added. So we can use it to add the ORDER BY clause that we require.
Here is the format of the FILTER statement that you need to add.
--FILTER: 1=1) ORDER BY col1, col2, coln --
This is inserted in the 'Record Selection Condition' box on the individual table filter screen when configuring the Replicate task. If you want to do this for multiple tables in the Replicate task, you need to set up a FILTER for each table individually.
To explain, the --FILTER: keyword marks the beginning of the filter information, which is appended to the WHERE clause that Replicate generates automatically.
The 1=1) component completes that WHERE clause in a way that selects all rows (you could put in something to limit the rows selected if required, but that's not what we are trying to achieve here).
It is then possible to add other clauses and parameters before terminating the additional text with the final --.
In this case an ORDER BY clause is added that guarantees rows are returned in the order specified. This causes the unique index on the table to be used to retrieve rows at the source, assuming that you code col1, col2, etc. to match the columns and their order in the index. If the index has some columns in descending order (rather than ascending), make sure that is coded in the ORDER BY clause as well.
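As a worked illustration (the table EMPLOYEES and its key column EMP_ID are hypothetical, and the exact SQL Replicate generates varies by source endpoint), a filter of:

--FILTER: 1=1) ORDER BY EMP_ID --

would turn the generated Full Load statement into something of the form:

SELECT * FROM EMPLOYEES WHERE (1=1) ORDER BY EMP_ID

which gives the optimiser a reason to drive the read through the unique index rather than a tablespace scan.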
If you code things incorrectly the generated SELECT statement will fail and you will be able to see and debug this through the log.
Qlik Replicate tasks using Oracle as a Source Endpoint fail after installing the Oracle July 2024 patch.
All Qlik Replicate versions older than the 2024.5 SP03 release are affected.
Upgrade to Qlik Replicate 2024.5 SP03 or later once available.
In the meantime, Qlik has made an early build available for 2024.5:
2024.5 SP03: https://files.qlik.com/url/idgdr2nxshgpkij3
password: cygie73l
The Oracle July 2024 patch introduced a change to redo events. Qlik has since provided a fix for Qlik Replicate which parses the redo log correctly.
RECOB-8698
Oracle Database 19c Release Update July 2024 Known Issues
As a general reminder, all changes to the environment such as operating system patches, endpoint and driver patches, etc. should be tested in lower environments before promoting to production.
The Qlik ODBC Connector Package (the database connectors built into Qlik Sense) fails to reload with the error:
Executing non-SELECT queries is disabled. Please contact your system administrator to enable it.
The issue is observed when the query following the SQL keyword is not a SELECT but another statement, such as INSERT, UPDATE, WITH .. AS, or a stored procedure call.
See the Qlik Sense February 2019 Release Notes for details on item QVXODBC-1406.
By default, non-SELECT queries are disabled in the Qlik ODBC Connector Package, and users will get an error message indicating this if such a query is present in the load script. To enable non-SELECT queries, the allow-nonselect-queries setting should be set to True by the Qlik administrator.
To enable non-SELECT queries:
Because we are modifying configuration files, these files will be overwritten during an upgrade, and the changes will need to be made again.
Only apply !EXECUTE_NON_SELECT_QUERY if you use the default connector settings (such as bulk reader enabled and reading strategy "connector"). Applying !EXECUTE_NON_SELECT_QUERY to non-default settings may lead to unexpected reload results and/or error messages.
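As a sketch of how the prefix might appear in a load script (the connection name and stored procedure below are hypothetical, used only for illustration):

LIB CONNECT TO 'My_SQL_Server_Connection';
// The !EXECUTE_NON_SELECT_QUERY prefix tells the connector to run a non-SELECT statement
SQL !EXECUTE_NON_SELECT_QUERY EXEC dbo.usp_RefreshSalesData;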
More details are documented in the Qlik ODBC Connector package help site.
Feature Request Delivered: Executing non-SELECT queries with Qlik Sense Business
Execute SQL Set statements or Non Select Queries
With the new inclusion of the Get Chart Image block in the Qlik Reporting connector in Qlik Application Automation, you now have more options to notify a group of users with more in-depth data and charts using Slack, Microsoft Teams, and email.
This article will guide you in sending your first chart image to Slack with Qlik Application Automation.
It explains a basic example of a template configured in Qlik Application Automation for this scenario.
You can make use of the template available in the template picker: navigate to Add new -> New automation -> Search templates, search for 'Send a Chart Image to Slack' in the search bar, and click the Use template option.
For guidance on sending charts via Microsoft Teams and mail, go to the "Next Steps" section at the end of this article.
You can download examples of the automations from this article: Send-chart-image-to-slack.json, Send-chart-image-to-outlook.json, Send-chart-image-to-mail.json, Send-chart-image-to-microsoft-teams.json
Warning: Whenever the "Get Chart Image" block is used, we advise using it only with temporary bookmarks or pre-existing persistent bookmarks.
If the condition block outcome evaluates to false:
The information in this article is provided as-is and is to be used at your discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
The Qlik Download page or Qlik Ideation app do not show their expected content. The web page and browser console display error messages referring to HTTP 401 Unauthorized access, which may look similar to the below examples.
The Qlik Download page and Ideation App on Qlik Community require 3rd-party cookies as part of the current web integration. The accessing browser must allow 3rd-party cookies while accessing the Qlik Downloads page in order for the page to render successfully.
The browser does not have to allow all 3rd-party cookies; it can allow 3rd-party cookies for just the *.qlik.com or community.qlik.com domain.
The cookie settings are browser-specific, please consult browser help for more details.
The Qlik Download page and Ideation App in Qlik Community are composed of an embedded object hosted in Qlik Cloud. This means cookies for the user session are associated with two different domains: community.qlik.com and qlikcloud.com. The browser treats the parent page (community.qlik.com) as setting 1st-party cookies, while the embedded content from a different domain (qlikcloud.com) sets what are referred to as 3rd-party cookies.
3rd-party cookies may be blocked in a browser as a mechanism to prevent user tracking and advertising. Browser incognito mode may also block 3rd-party cookies by default as part of keeping the user more anonymous.
Clear, allow & manage cookies in Chrome
Third-party cookies and Firefox tracking protection
Microsoft Edge, browsing data, and privacy
Clear cookies in Safari on Mac
This article provides step-by-step instructions for implementing Azure AD as an identity provider for Qlik Cloud. We cover configuring an App registration in Azure AD and configuring group support using MS Graph permissions.
It guides the reader through adding the necessary application configuration in Azure AD and Qlik Sense Enterprise SaaS identity provider configuration so that Qlik Sense Enterprise SaaS users may log into a tenant using their Azure AD credentials.
Content:
Throughout this tutorial, some words will be used interchangeably.
The tenant hostname required in this context is the original hostname provided to the Qlik Enterprise SaaS tenant.
Copy the value of the client secret and paste it somewhere safe. After saving the configuration, the value will become hidden and unavailable.
In the OpenID permissions section, check email, openid, and profile. In the Users section, check user.read.
Failing to grant consent to GroupMember.Read.All may result in errors authenticating to Qlik using Azure AD. Make sure to complete this step before moving on.
In this example, I had to change the email claim to upn to obtain the user's email address from Azure AD. Your results may vary.
While not hard, configuring Azure AD to work with Qlik Sense Enterprise SaaS is not trivial. Most of the legwork to make this authentication scheme work is on the Azure side. However, without making some small tweaks to the IdP configuration in Qlik Sense, you may receive a failure or two during the validation process.
For many of you, adding Azure AD means you potentially have a lot of cleanup to do to remove legacy groups. Unfortunately, there is no way to do this in the UI, but there is an API endpoint for deleting groups. See Deleting guid group values from Qlik Sense Enterprise SaaS for a guide on how to delete groups from a Qlik Sense Enterprise SaaS tenant.
Qlik Cloud: Configure Azure Active Directory as an IdP
After migrating a Spark Job from version 7.3.1 to version 8.0.1, the migrated Spark task execution generated an application log at DEBUG level. For some large Spark task executions, this produced up to 10 GB of logs. The Spark Job design showed that log4jLevel was unchecked by default.
The log configuration for both spark.driver and spark.executor is not set by default, resulting in the Spark batch Job executing at DEBUG level by default.
In Run -> Spark Configuration -> Advanced properties (or in the wizard, if using the repository):
Add the property "spark.driver.extraJavaOptions" with the value "-Dlog4j.configuration=/etc/spark/conf.cloudera.spark_on_yarn/log4j.properties"
Add the property "spark.executor.extraJavaOptions" with the value "-Dlog4j.configuration=/etc/spark/conf.cloudera.spark_on_yarn/log4j.properties"
Note: /etc/spark/conf.cloudera.spark_on_yarn/log4j.properties is the default value provided on CDP, and you have the flexibility to customize the log levels as you prefer. The setting takes effect on the logger value when the Job is executed on YARN.
This article provides a list of best practices for Qlik Sense configuration. It is worth implementing each item, especially in a large environment, so that your database can handle the volume of requests coming from all of its connected nodes.
For basic information, see Max Connections.
Specifies the maximum number of concurrent connections (max_connections) to the database. The default value for a single server is 100.
In a multi-node environment, this should be adjusted to the sum of all repository connection pools + 20. By default, the repository connection pool is 110 per node.
Assuming two nodes, each with the default pool size of 110, the value would be 110 + 110 + 20 = 240.
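For example, on the PostgreSQL host this would be set in postgresql.conf (the value follows the two-node calculation above):

max_connections = 240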
The value of 110 above is a default example. You can further refine the value.
The connection pool for the Qlik Sense Repository is always based on the core count of the machine. To date, our advice is to take the core count of your machine and multiply it by five. This will be the max connection pool for the Repository Service on that node.
This should be a factor of CPU cores multiplied by five: for example, 24 cores × 5 = 120, while 8 cores × 5 = 40.
If 90 is higher than that result (as in the 8-core case), leave 90 in place. Never decrease it.
For more information about Database Max Pool Size Connection, see https://wiki.postgresql.org/wiki/Number_Of_Database_Connections
Optimizing Performance for Qlik Sense Enterprise
PostgreSQL: postgresql.conf and pg_hba.conf explained
Database connection max pool reached in Qlik Sense Enterprise on Windows
This article explains how the Reporting connector in Qlik Application Automation can be used to generate multi-app reports. It also explains how the generated report can be stored on a cloud storage tool, like Microsoft SharePoint.
Multi-app reports
A multi-app report is a report that contains sheets from multiple apps. This type of report can be created in Qlik Application Automation with the Create Multi App Report and the Add Sheet to Multi App Report blocks.
To add selections to these sheets, you can still use the Add Selection To Sheet block. To add selections to the report, you can use the Add Selection To Report block.
Example
In this example, we'll create an automation that generates a report containing two sheets from two different apps, with selections applied to the second sheet.
Before you continue, please create a new automation and search for the reporting connector in the Block Library.
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
During the installation of Qlik Sense with Shared Persistence, an administrator encounters an error message indicating that the database could not be reached during the Shared Persistence Connection Settings section of the installer:
This error message is a generic response when the installer cannot reach the database for a variety of reasons. In order of probability:
This is the most common roadblock for this portion of the installer. To determine whether this cause is at play:
The test in (1) purely checks TCP connectivity and does not verify whether authentication is allowed. Please reference the article PostgreSQL: postgresql.conf and pg_hba.conf explained for how to configure PostgreSQL / the Qlik Sense Repository Database to allow connections from RIM nodes. The likely misconfigurations here are the listen_addresses setting in postgresql.conf and the IP address(es) allowed in the pg_hba.conf files.
The next most common scenario is that the server on which Qlik Sense is installed is configured to enforce FIPS policy. Due to a dependency in the installer, additional steps are needed to allow it to operate in a FIPS environment. Reference the article Unable to install Qlik Sense to a remote PostgreSQL database with FIPS enabled for more details on the resolution for this variant of blocker.
This can be a broad category, but the best method for determining the cause is to test the connection to the database directly using the dependency that the installer uses (npgsql). Attached to this article is npgsql.zip, which contains a compiled .EXE that can be used for testing.
Example:
This implies that exclusive SSL is configured on the PostgreSQL side. Removing this configuration and allowing standard connectivity bypasses this section of the installer.
Reloads fail in the QMC even though the script part is successful, in Qlik Sense Enterprise on Windows November 2023 and above.
When you are using NetApp-based storage, you might see an error when trying to publish and replace, or when reloading, a published app.
In the QMC you will see that the script load itself finished successfully, but the task failed after that.
ERROR QlikServer1 System.Engine.Engine 228 43384f67-ce24-47b1-8d12-810fca589657
Domain\serviceuser QF: CopyRename exception:
Rename from \\fileserver\share\Apps\e8d5b2d8-cf7d-4406-903e-a249528b160c.new
to \\fileserver\share\Apps\ae763791-8131-4118-b8df-35650f29e6f6
failed: RenameFile failed in CopyRename
ExtendedException: Type '9010' thrown in file
'C:\Jws\engine-common-ws\src\ServerPlugin\Plugins\PluginApiSupport\PluginHelpers.cpp'
in function 'ServerPlugin::PluginHelpers::ConvertAndThrow'
on line '149'. Message: 'Unknown error' and additional debug info:
'Could not replace collection
[\\fileserver\share\Apps\8fa5536b-f45f-4262-842a-884936cf119c] with
[\\fileserver\share\Apps\Transactions\Qlikserver1\829A26D1-49D2-413B-AFB1-739261AA1A5E],
(genericException)'
<<< {"jsonrpc":"2.0","id":1578431,"error":{"code":9010,"parameter":
"Object move failed.","message":"Unknown error"}}
ERROR Qlikserver1 06c3ab76-226a-4e25-990f-6655a965c8f3
20240218T040613.891-0500 12.1581.19.0
Command=Doc::DoSave;Result=9010;ResultText=Error: Unknown error
0 0 298317 INTERNAL sa_scheduler b3712cae-ff20-4443-b15b-c3e4d33ec7b4
9c1f1450-3341-4deb-bc9b-92bf9b6861cf Taskname Engine Not available
Doc::DoSave Doc::DoSave 9010 Object move failed.
06c3ab76-226a-4e25-990f-6655a965c8f3
Qlik Sense Client Managed version:
Potential workarounds
The most plausible cause currently is that this specific engine version has issues releasing file locks. We are actively investigating the root cause, but there is no fix available yet.
QB-25096
QB-26125
Note: The concepts 'UPSERT MODE' and 'MERGE MODE' are not documented in the User Guide; that is, they are not terms you can search for in the User Guide, nor keywords in the Replicate UI.
UPSERT MODE: Change an update to an insert if the row doesn't exist on the target
MERGE MODE: Change an insert to an update if the row already exists on the target
Use MERGE MODE: i.e. configure the task under Task Settings --> Error Handling --> Apply Conflicts --> 'Duplicate key when applying INSERT:' UPDATE the existing target record
Use UPSERT MODE: i.e. configure the task under Task Settings --> Error Handling --> Apply Conflicts --> 'No record found for applying an UPDATE:' INSERT the missing target record
Batch Apply and Transactional Apply modes:
There is a big difference in how these Upsert/Merge settings work depending on whether the task is in 'Batch' or 'Transactional' Apply mode.
Batch Apply mode:
Either option (Upsert/Merge) does an unconditional Delete of all rows in the batch, followed by an Insert of all rows.
Note: The other thing to note is that with this setting, the update that fails is applied in a way that may not be obvious and could cause issues with downstream processing. In Batch Apply mode the task will actually issue a pair of statements (first a DELETE of the record, then an INSERT). This pair is unconditional and will result in a newly inserted row every time the record is updated on the source.
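Conceptually, and only as a simplified sketch (not the literal statements Replicate issues), each source UPDATE applied in Batch Apply mode with this setting behaves like:

-- unconditional pair applied for the updated row
DELETE FROM target_table WHERE key_col = :key;
INSERT INTO target_table (key_col, col1) VALUES (:key, :new_val1);

so the target row is physically recreated, which downstream consumers of the target may notice.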
Transactional Apply mode:
Either option (Upsert/Merge): the original statement is run, and if it errors out then the switch is done (try and catch).
For an insert in Transactional Apply mode, the insert statement is performed in a "try / catch" fashion: the insert statement is run, and only if it fails is it switched to an update statement.
Likewise, an update in Transactional Apply mode is performed in a "try / catch" fashion: the update is run, and only if it fails is it switched to an insert statement.
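As a matching simplified sketch for Transactional Apply mode with Merge mode enabled (again illustrative, not the literal statements Replicate issues):

-- try the INSERT first; only on a duplicate-key error is the fallback run
INSERT INTO target_table (key_col, col1) VALUES (:key, :val1);
-- executed only if the INSERT above fails:
UPDATE target_table SET col1 = :val1 WHERE key_col = :key;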
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
Dragging and dropping the Qlik Map object onto a sheet results in the following error:
The visualization was not found on the server. This extension is not available: map ()
Inspecting the error in the browser developer tools (Developer tool console logs) shows the following error:
geo.error.WebmapInvalidkey
Invalid key
Match the serverKey between Qlik Sense Enterprise on Windows and Qlik GeoAnalytics.
The serverKey value must not be surrounded by quotes (" ").
Optional: If a custom URL is used in the mapconf.json file, match the serverURL value to the custom URL in use.
Qlik Sense upgrades and migrations: every time Qlik Sense is upgraded or migrated, the mapconf.json file will be overwritten and needs to be updated again.
Qlik Sense Enterprise on Windows
Qlik GeoAnalytics
The Qlik Sense Mobile app allows you to securely connect to your Qlik Sense Enterprise deployment from your supported mobile device. This is the process of configuring Qlik Sense to function with the mobile app on iPad / iPhone.
This article applies to the Qlik Sense Mobile app used with Qlik Sense Enterprise on Windows. For information regarding the Qlik Cloud Mobile app, see Setting up Qlik Sense Mobile SaaS.
Content:
See the requirements for your mobile app version on the official Qlik Online Help > Planning your Qlik Sense Enterprise deployment > System requirements for Qlik Sense Enterprise > Qlik Sense Mobile app
Out of the box, Qlik Sense is installed with HTTPS enabled on the hub and HTTP disabled. Due to iOS specific certificate requirements, a signed and trusted certificate is required when connecting from an iOS device. If using HTTPS, make sure to use a certificate issued by an Apple-approved Certification Authority.
Also check Qlik Sense Mobile on iOS: cannot open apps on the HUB for issues related to Qlik Sense Mobile on iOS and certificates.
For testing purposes, it is possible to enable port 80.
If not already done, add an address to the White List:
An authentication link is required for the Qlik Sense Mobile App.
NOTE: In the client authentication link host URI, you may need to remove the trailing "/" from the URL; for example, http://10.76.193.52/ would become http://10.76.193.52
Users connecting to Qlik Sense Enterprise need a valid license available. See the Qlik Sense Online Help for more information on how to assign available access types.
Qlik Sense Enterprise on Windows > Administer Qlik Sense Enterprise on Windows > Managing a Qlik Sense Enterprise on Windows site > Managing QMC resource > Managing licenses
The authentication token associated with auto Provisioning is about to expire
Or
The authentication token associated with auto Provisioning has expired
The token that was created for SCIM Auto-provisioning with Azure must be deleted.
curl "https://<tenanthostname>/api/v1/api-keys?subType=externalClient" \
  -H "Authorization: Bearer <dev-api-key>"

Ensure you replace <tenanthostname> with your actual tenant hostname and <dev-api-key> with your generated developer API key. Execute the command in Postman or a similar tool, and make sure to include the API key in the header for authorization.
Once you have obtained the key ID from the output, copy it for later use in the deletion process.
To delete the API key associated with SCIM provisioning, execute the following curl command:
curl "https://<tenanthostname>/api/v1/api-keys/<keyID>" \
  -X DELETE \
  -H "Authorization: Bearer <dev-api-key>"

Replace <tenanthostname> with your actual tenant hostname, <keyID> with the key ID you obtained in step 2, and <dev-api-key> with your developer API key. Execute this command in Postman or a similar tool, and ensure that the API key is included in the header for authorization. By following these steps, you can successfully delete the token created for SCIM Auto-provisioning with Azure.
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
The token was not deleted when the SCIM Auto-Provisioning was disabled or completely removed.
If a TCP connection is possible with Qlik's licensing server endpoint, testing the connection to license.qlikcloud.com will return the message default backend - 404 or 404 Not Found (nginx).
When testing whether or not your Sense installation can successfully connect to the license backend, always test the connection with all nodes.
The 404 HTTP error code indicates the server was reached but could not find any content to display at the URL specified.
To avoid a 404 message, rather than accessing license.qlikcloud.com, open license.qlikcloud.com/sld.
Another test would be to use telnet and confirm a connection to port 443 is possible:
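For example (a minimal check to run from each node; both commands assume the respective client tools are installed):

telnet license.qlikcloud.com 443
curl -v https://license.qlikcloud.com/sld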
If different results are returned:
There was an error when getting license information from the license server