Featured Content
-
How to contact Qlik Support
Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
Index:
- Support and Professional Services: who to contact when
- Qlik Support: How to access the support you need
- 1. Qlik Community, Forums & Knowledge Base
- The Knowledge Base
- Blogs
- Our Support programs:
- The Qlik Forums
- Ideation
- How to create a Qlik ID
- 2. Chat
- 3. Qlik Support Case Portal
- Escalate a Support Case
- Phone Numbers
- Resources
Support and Professional Services: who to contact when
We're happy to help! Here's a breakdown of resources for each type of need.
Support
Reactively fixes technical issues as well as answers narrowly defined specific questions. Handles administrative issues to keep the product up to date and functioning.
- Error messages
- Task crashes
- Latency issues (due to errors or 1-1 mode)
- Performance degradation without config changes
- Specific questions
- Licensing requests
- Bug Report / Hotfixes
- Not functioning as designed or documented
- Software regression
Professional Services (*)
Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement.
- Deployment Implementation
- Setting up new endpoints
- Performance Tuning
- Architecture design or optimization
- Automation
- Customization
- Environment Migration
- Health Check
- New functionality walkthrough
- Real-time upgrade assistance
(*) reach out to your Account Manager or Customer Success Manager
Qlik Support: How to access the support you need
1. Qlik Community, Forums & Knowledge Base
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
The Knowledge Base
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
- Go to the Official Support Articles Knowledge base
- Type your question into our Search Engine
- Need more filters?
- Filter by Product
- Or switch tabs to browse content in the global community, on our Help Site, or even on our YouTube channel
Blogs
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about product and Qlik solutions, such as scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Our Support programs:
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free webinar to facilitate knowledge sharing, held on a monthly basis.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
The Qlik Forums
- Quick, convenient, 24/7 availability
- Monitored by Qlik Experts
- New releases publicly announced within Qlik Community forums
- Local language groups available
Ideation
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
How to create a Qlik ID
Get the full value of the community.
Register a Qlik ID:
- Go to register.myqlik.qlik.com
If you already have an account, please see How To Reset The Password of a Qlik Account for help using your existing account.
- You must enter your company name exactly as it appears on your license; otherwise there will be significant delays in getting access.
- You will receive a system-generated email with an activation link for your new account. Note: this link will expire after 24 hours.
If you need additional details, see: Additional guidance on registering for a Qlik account
If you encounter problems with your Qlik ID, contact us through Live Chat!
2. Chat
Incidents are supported through our Chat, by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
- Answer common questions instantly through our chatbot
- Have a live agent troubleshoot in real time
- For items that need further investigation, we will create a case on your behalf with step-by-step intake questions.
3. Qlik Support Case Portal
Log in to manage and track your active cases in the Case Portal.
Please note: to create a new case, it is easiest to do so via our chat (see above). Our chat will log your case through a series of guided intake questions.
Your advantages:
- Self-service access to all incidents so that you can track progress
- Option to upload documentation and troubleshooting files
- Option to include additional stakeholders and watchers to view active cases
- Follow-up conversations
When creating a case, you will be prompted to enter the problem type and issue level. Definitions are shared below:
Problem Type
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
Priority
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible, but there are some functional limitations that are not critical to daily operation.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
Severity
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
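For illustration only, the severity definitions above can be sketched as a small decision helper (the function and its flags are hypothetical, not part of Qlik's case portal):

```python
# Hypothetical sketch of the product-related severity definitions above.
def pick_severity(production_down: bool,
                  scheduled_maintenance: bool,
                  major_functionality_broken: bool) -> int:
    """Return the severity level suggested by the definitions in this article."""
    if production_down and not scheduled_maintenance:
        return 1  # Severity 1: production software down, not due to maintenance
    if major_functionality_broken:
        return 2  # Severity 2: major functionality or critical operations impacted
    return 3      # Severity 3: any other error

print(pick_severity(production_down=True, scheduled_maintenance=False,
                    major_functionality_broken=False))  # 1
```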
Escalate a Support Case
If you require a support case escalation, you have two options:
- Request to escalate within the case, mentioning the business reasons.
To escalate a support incident successfully, mention your intention to escalate in the open support case. This will begin the escalation process.
- Contact your Regional Support Manager.
If more attention is required, contact your regional support manager. You can find a full list of regional support managers in the How to escalate a support case article.
Phone Numbers
When other Support Channels are down for maintenance, please contact us via phone for high severity production-down concerns.
- Qlik Data Analytics: 1-877-754-5843
- Qlik Data Integration: 1-781-730-4060
- Talend AMER Region: 1-800-810-3065
- Talend UK Region: 44-800-098-8473
- Talend APAC Region: 65-800-492-2269
Resources
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
Recent Documents
-
Talend Job using key pair authentication for Snowflake fails with a ‘Missing Key...
Running a Talend Job using key pair authentication for Snowflake fails with the exception:
Starting job Snowflake_CreateTable at 09:21 19/07/2021.
[statistics] connecting to socket on port 3725
[statistics] connected
Exception in component tDBConnection_2 (Snowflake_CreateTable)
java.lang.RuntimeException: java.io.IOException: Missing Keystore location
    at edw_demo.snowflake_createtable_0_1.Snowflake_CreateTable.tDBConnection_2Process(Snowflake_CreateTable.java:619)
    at edw_demo.snowflake_createtable_0_1.Snowflake_CreateTable.runJobInTOS(Snowflake_CreateTable.java:3881)
    at edw_demo.snowflake_createtable_0_1.Snowflake_CreateTable.main(Snowflake_CreateTable.java:3651)
[FATAL] 09:21:38 edw_demo.snowflake_createtable_0_1.Snowflake_CreateTable - tDBConnection_2 java.io.IOException: Missing Keystore location
java.lang.RuntimeException: java.io.IOException: Missing Keystore location
    at edw_demo.snowflake_createtable_0_1.Snowflake_CreateTable.tDBConnection_2Process(Snowflake_CreateTable.java:619) [classes/:?]
    at edw_demo.snowflake_createtable_0_1.Snowflake_CreateTable.runJobInTOS(Snowflake_CreateTable.java:3881) [classes/:?]
    at edw_demo.snowflake_createtable_0_1.Snowflake_CreateTable.main(Snowflake_CreateTable.java:3651) [classes/:?]
Cause
The Keystore path was not configured correctly at the Job or Studio level before connecting to Snowflake in the metadata and using the same metadata connection in the Jobs.
Resolution
To use key pair authentication for Snowflake, the Keystore settings must be configured in Talend Studio before connecting to Snowflake.
Configuring the Keystore at the Studio level
Perform one of the following options.
Option 1:
Update the appropriate Studio initialization file (Talend-Studio-win-x86_64.ini, Talend-Studio-linux-gtk-x86_64.ini, or Talend-Studio-macosx-cocoa.ini, depending on your operating system) with the following settings:
-Djavax.net.ssl.keyStore={yourPathToKeyStore}
-Djavax.net.ssl.keyStoreType={PKCS12}/{JKS}
-Djavax.net.ssl.keyStorePassword={keyStorePassword}
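As an illustration, the appended settings with hypothetical values (the path and password below are placeholders, not values from this article) might look like:

```
-Djavax.net.ssl.keyStore=C:/keys/snowflake.p12
-Djavax.net.ssl.keyStoreType=PKCS12
-Djavax.net.ssl.keyStorePassword=changeit
```

The {PKCS12}/{JKS} placeholder in the template means you pick the one type matching your keystore file.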
Option 2:
- Update the Keystore configuration in Studio SSL preferences with the required Path, Password, and Keystore Type.
- Add the Key Alias to the Snowflake metadata.
Configuring the Keystore at the Job level
Update the tSetKeystore components in your Job if you plan to run the Job when the target execution is local, Remote Engine, or JobServer (the versions do not matter). Before selecting the Key Pair option for the tSnowflakeConnection component, configure the key pair authentication on the Basic settings tab of the tSetKeystore component:
- Select JKS from the TrustStore type pull-down list.
- Enter " " in the TrustStore file field.
- Clear the TrustStore password field.
- Select the Need Client authentication check box.
- Enter the path to the Keystore file in double quotation marks in the KeyStore file field.
- Enter the Keystore password in the KeyStore password field.
-
QlikView & Qlik Sense Unified (Dual Use) License - User Allocation
With a Unified License (formerly called Dual Use License), the legacy QlikView license is complemented with a Qlik Sense license that can be applied to the QlikView server, as it includes QlikView entitlement license attributes. Such a license needs to be activated with the Signed License Key (SLK). When a customer enters an Analytics Modernization Program (AMP, formerly known as the Dual Use program), QlikView CALs (e.g. Named User CALs, Document CALs, Session CALs, and Usage CALs) are converted into Professional User, Analyzer User, and Analyzer Capacity User allocations based on the Analytics Modernization Program conversion ratios.
Here are two scenarios:
I. If a customer transitions to AMP with on-premise (client-managed) Qlik software (e.g. converts the perpetual estate or converts to subscription Qlik Sense Enterprise on Windows), a Unified License containing the converted quantity of Professional Users, Analyzer Users, and Analyzer Capacity would be delivered. This license contains a customized Qlik Sense Enterprise Signed License Key (SLK) which can also be deployed on QlikView Server and/or QlikView Publisher. If a user is assigned a Professional or Analyzer user license, this assignment information is synchronized to all Qlik Sense and QlikView deployments activated using this Unified License key. As such, one user needs just one license to access the entire Qlik software platform (regardless of whether it is Qlik Sense or QlikView).
In this scenario, as long as the QlikView Server and Qlik Sense edition use the same Identity Provider (IdP) the user can access apps on both environments consuming only one user license allocation.
If the user license is reallocated in any of the systems to a different user, the same will occur across both QlikView and Qlik Sense environments.
II. If a customer transitions to AMP with the Qlik Sense Enterprise SaaS add-on, a Qlik Sense Enterprise SaaS tenant may use a Unified License for the on-premise deployment. The customer is able to upload QlikView documents prepared by the on-premise QlikView software directly into Qlik Sense Enterprise SaaS for distribution.
A QlikView Server or QlikView Publisher software can be activated in two ways:
1) Using a legacy method with 16-digit QlikView license key and the corresponding control number
2) Using a modern Unified License with a Signed License Key containing the needed QlikView Server and Publisher Service attribute(s). In this latter scenario, user license assignment (Professional/Analyzer) and analyzer capacity would be synchronized with other deployments using the same Signed License Key, as is done in the Unified License model.
If the customer opts to remain on Perpetual licensing, the existing QlikView license model can be retained. Otherwise, if the customer opts for conversion to the Subscription licensing model, a set of QlikView subscription license attributes mirroring the existing QlikView perpetual license key setup would be delivered, such that the customer can switch to the subscription QlikView keys without the need for an immediate migration project towards using Unified licensing.
Note: Qlik is no longer starting clients on the Perpetual license model. See End of Perpetual License Sales.
-
How To: Configure Qlik Sense Enterprise SaaS to use Azure AD as an IdP. Now with...
This article provides step-by-step instructions for implementing Azure AD as an identity provider for Qlik Cloud. We cover configuring an App registration in Azure AD and configuring group support using MS Graph permissions.
It guides the reader through adding the necessary application configuration in Azure AD and Qlik Sense Enterprise SaaS identity provider configuration so that Qlik Sense Enterprise SaaS users may log into a tenant using their Azure AD credentials.
Content:
- Prerequisites
- Helpful vocabulary
- Considerations when using Azure AD with Qlik Sense Enterprise SaaS
- Configure Azure AD
- Create the app registration
- Create the client secret
- Add claims to the token configuration
- Add group claim
- Collect Azure AD configuration information
- Configure Qlik Sense Enterprise SaaS IdP
- Recap
- Addendum
- Related Content (VIDEO)
Prerequisites
- A Microsoft Azure account
- A Microsoft Azure Active Directory instance
- A Qlik Sense Enterprise SaaS tenant
- The BYOIDP feature in your Qlik license is set to YES. Contact customer support to find out if you are entitled to bring your own identity provider to your tenant.
Helpful vocabulary
Throughout this tutorial, some words will be used interchangeably.
- Qlik Sense Enterprise SaaS: Qlik Sense hosted in Qlik’s public cloud
- Microsoft Azure Active Directory: Azure AD
- Tenant: Qlik Sense Enterprise SaaS tenant or instance
- Instance: Microsoft Azure AD
- OIDC: Open Id Connect
- IdP: Identity Provider
Considerations when using Azure AD with Qlik Sense Enterprise SaaS
- Qlik Sense Enterprise SaaS allows customers to bring their own identity provider to provide authentication to the tenant using the OpenID Connect (OIDC) specification (https://openid.net/connect/).
- Given that OIDC is a specification and not a standard, vendors (e.g. Microsoft) may implement the capability in ways that are outside the core specification. In this case, Microsoft Azure AD OIDC configurations do not send standard OIDC claims like email_verified. The Azure AD configuration in Qlik Sense Enterprise SaaS therefore includes an advanced option to set email_verified to true for all users that log into the tenant.
- The Azure AD configuration in Qlik Sense Enterprise SaaS includes special logic for contacting Microsoft Graph API to obtain friendly group names. Whether those groups originate from an on-premises instance of Active Directory and sync to Azure AD through Azure AD Connect or from creation within Azure AD, the friendly group name will be returned from the Graph API and added to Qlik Sense Enterprise SaaS.
Configure Azure AD
Create the app registration
- Log into Microsoft Azure by going to https://portal.azure.com.
- Click the Azure Active Directory icon in the browser, or search for "Azure Active Directory" in the search bar at the top. The overview page for the active directory will appear.
- Click the App registrations item in the menu to the left.
- Click the New registration button at the top of the detail window. The application registration page appears.
- Add a name in the Name section to identify the application. In this example, the hostname of the tenant is entered along with the word OIDC.
- The next section contains radio buttons for selecting the Supported account types. In this example, the default – Accounts in this organizational directory only – is selected.
- The last section is for entering the redirect URI. From the dropdown list on the left select “web” and then enter the callback URL from the tenant. Enter the URI https://<tenant hostname>/login/callback.
The tenant hostname required in this context is the original hostname provided to the Qlik Enterprise SaaS tenant. Using the Alias hostname will cause the IdP handshake to fail.
- Complete the registration by clicking the Register button at the bottom of the page.
- Click on the Authentication menu item on the left side of the screen.
- In the middle of the page, the reference to the callback URI appears. No additional configuration is required on this page.
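As a quick sketch of the redirect URI rule above (the helper function and hostname are hypothetical, not part of any Qlik or Azure API):

```python
# Build the web redirect URI to register in Azure AD for a Qlik Cloud tenant.
# Remember: use the ORIGINAL tenant hostname, not the alias hostname,
# or the IdP handshake will fail.
def login_callback_uri(tenant_hostname: str) -> str:
    return f"https://{tenant_hostname}/login/callback"

# Hypothetical tenant hostname:
print(login_callback_uri("mytenant.us.qlikcloud.com"))
# https://mytenant.us.qlikcloud.com/login/callback
```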
Create the client secret
- Click on the Certificates and secrets menu item on the left side of the screen.
- In the center of the Certificates and secrets page, there is a section labeled Client secrets with a button labeled New client secret. Click the button.
- In the dialog that appears, enter a description for the client secret and select an expiration time. Click the Add button after entering the information.
- Once a client secret is added, it will appear in the Client secrets section of the page.
Copy the value of the client secret and paste it somewhere safe.
After saving the configuration, the value will become hidden and unavailable.
Add claims to the token configuration
- Click on the Token configuration menu item on the left side of the screen.
- The Optional claims window appears with two buttons. One for adding optional claims, and another for adding group claims. Click on the Add optional claim button.
- For optional claims, select the ID token type, and then select the claims to include in the token that will be sent to the Qlik Sense Enterprise SaaS tenant. In this example, ctry, email, tenant_ctry, upn, and verified_primary_email are checked. None of these optional claims are required for the tenant identity provider to work properly, however, they are used later on in this tutorial.
- Some optional claims may require adding OpenId Connect scopes from Microsoft Graph to the application configuration. Click the check mark to enable and click Add.
- The claims will appear in the window.
Add group claim
- Click on the API permissions menu item on the left side of the screen.
- Observe the configured permissions set during adding optional claims.
- Click the Add a permission button and select the Microsoft Graph option in the Request API permissions box that appears. Click on the Microsoft Graph banner.
- Click on Delegated permissions. The Select permission search and the OpenId permissions list appears.
In the OpenID permissions section, check email, openid, and profile. In the Users section, check user.read.
- In the Select permissions search, enter the word group. Expand the GroupMember option and select GroupMember.Read.All. This allows users logging into Qlik Sense Enterprise SaaS through Azure AD to read the group memberships they are assigned.
- After making the selection, click the Add permissions button.
- The added permissions will appear in the list. However, the GroupMember.Read.All permission requires admin consent to work with the app registration. Click the Grant button and accept the message that appears.
Failing to grant consent to GroupMember.Read.All may result in errors authenticating to Qlik using Azure AD. Make sure to complete this step before moving on.
Collect Azure AD configuration information
- Click on the Overview menu item to return to the main App registration screen for the new app. Copy the Application (client) ID unique identifier. This value is needed for the tenant's IdP configuration.
- Click on the Endpoints button in the horizontal menu of the overview.
- Copy the OpenID Connect metadata document endpoint URI. This is needed for the tenant’s IdP configuration.
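The metadata document endpoint returns a JSON discovery document describing the IdP. As a sketch of what that document carries (the sample below is abbreviated and hypothetical; the field names come from the OIDC discovery specification):

```python
import json

# Abbreviated, hypothetical discovery document of the shape Azure AD returns.
sample_metadata = json.loads("""
{
  "issuer": "https://login.microsoftonline.com/{tenant-id}/v2.0",
  "authorization_endpoint": "https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/authorize",
  "token_endpoint": "https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token",
  "jwks_uri": "https://login.microsoftonline.com/{tenant-id}/discovery/v2.0/keys"
}
""")

def discovery_summary(metadata: dict) -> dict:
    """Pick out the discovery fields most relevant to an IdP configuration."""
    keys = ("issuer", "authorization_endpoint", "token_endpoint", "jwks_uri")
    return {k: metadata[k] for k in keys if k in metadata}

for name, url in discovery_summary(sample_metadata).items():
    print(f"{name}: {url}")
```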
Configure Qlik Sense Enterprise SaaS IdP
- With the configuration complete and required information in hand, open the tenant’s management console and click on the Identity provider menu item on the left side of the screen.
- Click the Create new button on the upper right side of the main panel.
- Select OIDC from the Type drop-down menu item, and select Microsoft Entra ID (Azure AD) from the Provider drop-down menu item.
- Scroll down to the Application credentials section of the configuration panel and enter the following information:
- ADFS discovery URL: This is the endpoint URI copied from Azure AD.
- Client ID: This is the application (client) id copied from Azure AD.
- Client secret: This is the value copied to a safe location from the Certificates & secrets section in Azure AD.
- The Realm is an optional value used if you want to enter what is commonly referred to as the Active Directory domain name.
- Scroll down to the Claims mapping section of the configuration panel. There are five textboxes to confirm or alter.
- The sub field is the subject of the token sent from Azure AD. This is normally a unique identifier and will represent the UserID of the user in the tenant. In this example, the value “sub” is left and appid is removed. To use a different claim from the token, replace the default value with the name of the desired attribute value.
- The name field is the “friendly” name of the user to be displayed in the tenant. For Azure AD, change the attribute name from the default value to “name”.
- In this example, the groups, email, and client_id attributes are configured properly, therefore, they do not need to be altered.
In this example, I had to change the email claim to upn to obtain the user's email address from Azure AD. Your results may vary.
- Scroll down to the Advanced options and expand the menu. Slide the Email verified override option ON to ensure Azure AD validation works. Scope does not have to be supplied.
- The Post logout redirect URI is not required for Azure AD because upon logging out the user will be sent to the Azure log out page.
- Click the Save button at the bottom of the configuration to save the configuration. A message will appear confirming intent to create the identity provider. Click the Save button again to start the validation process.
- The validation procedure begins by redirecting the person configuring the IdP to the login page for the IdP.
- After successful authentication, Azure AD will confirm that permission should be granted for this user to the tenant. Click the Accept button.
- If the validation fails, the validation procedure will return a window like the following.
- If the validation succeeds, the validation procedure will return a mapped claims window. If the validation states it cannot map the user's email address, it is most likely because the email_verified switch has not been turned on. Go ahead and confirm, move through the remaining steps, and update the configuration as per the previous step. Re-run the validation to map the email.
- After confirming the information is correct, the account used to validate the IdP may be elevated to a TenantAdmin role. It is strongly recommended to make sure the box is checked before clicking Continue.
- The next-to-last screen in the configuration will ask to activate the IdP. Activating the Azure AD IdP in the tenant disables any other identity providers configured in the tenant.
- Success.
- Please log out of the tenant and re-authenticate using the new identity provider connection. Once logged in, change the URL in the address bar to point to https://<tenanthostname>/api/v1/diagnose-claims. This will return the JSON of the claims information Azure AD sent to the tenant. Here is a slightly redacted example.
- Verify groups resolve properly by creating a space and adding members. You should see friendly group names to choose from.
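As a sanity check on the diagnose-claims output, here is a hypothetical sketch (the payload below is invented; only the claim names mirror the ones discussed in this article) that flags two common misconfigurations: a missing email_verified override, and groups left as raw GUIDs because MS Graph consent was not granted:

```python
import json
import re

# Hypothetical claims payload of the shape returned by /api/v1/diagnose-claims.
claims = json.loads("""
{
  "sub": "AzureAD-sample-subject",
  "name": "Jane Example",
  "email": "jane@example.com",
  "email_verified": true,
  "groups": ["Finance", "BI Developers"]
}
""")

# A group that still looks like a GUID was not resolved to a friendly name.
GUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$", re.I)

def unresolved_groups(claims: dict) -> list:
    return [g for g in claims.get("groups", []) if GUID_RE.match(g)]

assert claims.get("email_verified") is True, "turn on the email_verified override"
assert not unresolved_groups(claims), "grant admin consent for GroupMember.Read.All"
print("claims look good")
```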
Recap
While not hard, configuring Azure AD to work with Qlik Sense Enterprise SaaS is not trivial. Most of the legwork to make this authentication scheme work is on the Azure side. However, without some small tweaks to the IdP configuration in Qlik Sense, you may receive a failure or two during the validation process.
Addendum
For many of you, adding Azure AD means you potentially have a lot of cleanup to do to remove legacy groups. Unfortunately, there is no way to do this in the UI, but there is an API endpoint for deleting groups. See Deleting guid group values from Qlik Sense Enterprise SaaS for a guide on how to delete groups from a Qlik Sense Enterprise SaaS tenant.
Related Content (VIDEO)
Qlik Cloud: Configure Azure Active Directory as an IdP
-
Embedding Qlik Analytics in SharePoint
This Techspert Talks session covers:
- Qlik Cloud settings for SharePoint embedding
- Using JWT Authentication
- Using Anonymous Access
-
Qlik Talend Cloud: Single Sign-ON (SSO) Permission Missing for Talend Cloud Appl...
A user with the developer role cannot log in to Talend Cloud, and SSO is enabled for Talend Cloud in the environment.
Resolution
- Add permission for this user on the SSO provider's end to use the Talend Cloud application.
- You can also add the user to a group that has access to the Talend Cloud application.
Cause
Based on the error below, it appears the user lacks the necessary permissions on the Single Sign-On (SSO) provider's end.
Error: Your Administrator has configured the application Talend Cloud to block users. The signed-in user is blocked and doesn't have access to the application.
SSOPermissionMissing
Environment
-
How to load Excel file from online file storage services (Box, Dropbox, Google D...
Qlik Web Connectors allows loading Excel files (xls, xlsx) hosted in online file storage services such as Box, Dropbox, Google Drive, and OneDrive directly into QlikView and Qlik Sense without having to save the file to disk first.
Different methods need to be applied to fetch files depending on what edition of Qlik Sense is used.
- With Qlik Sense SaaS Editions (Qlik Sense Enterprise Business and Qlik Sense Enterprise on Cloud Services), Qlik offers a number of built-in connectors. See Built-in Qlik Web Connectors for a list of available connectors.
- With Qlik Sense Enterprise on Windows, Qlik offers Standard and Premium web connectors for install. See Data sources included in Qlik Web Connectors.
With Qlik Sense SaaS Editions:
Note that the available built-in Web Connectors are:
- Google Drive
- Dropbox
Unavailable at the moment are:
- OneDrive
- Box
- Sharepoint getfile and download file
To connect to, for example, an Excel file stored on Google Drive:
- Open a Qlik Sense App
- Navigate to Add data
- Choose Google Drive
- Click Authenticate and follow the Google Drive authentication steps
- Copy the authentication code and paste it into the provided text bar
- Click Verify
- You can now choose supported files.
More information: Managing data sources in spaces.
You can add data files and data connections directly in shared and personal spaces. This enables data sources to be added outside of apps for use by other space members.
With Qlik Sense Enterprise on Windows:
We use the GetRawFileAsBinary query of the appropriate connector in Qlik Web Connectors.
Once the binary content is successfully loaded in the Qlik Web Connectors web console, you can create a webfile connection in QlikView or Qlik Sense to load data from Qlik Web Connectors.
- Choose the connector (for this test we are using the Qlik Box Connector)
- Select the query ListFilesAndFolders to identify the ID of the Excel file you want to load
- Run the query with the Folder ID parameter left blank
- This returns all files and folders in the root folder of the Box account
- Copy the ID of the folder which the Excel file is located in and paste it into the Folder ID parameter of the ListFilesAndFolders query
- Run the query again
- This will list all the files in the subfolder
- Copy the ID of the file we want to load
- Switch to the query GetRawFileAsBinary, choose Parameters and fill in the File ID
- Click Save Inputs & Run Table
- In our example, no Data Preview is shown.
- Obtain the URL to use by switching to the QlikView tab and copying the connection string
In QlikView:
- Open QlikView and the Script Editor
- Select Web Files...
- Paste the link previously obtained into the Internet File text box
- Click Next >
- Proceed to load in the table
Related Content:
SaaS Topics:
Managing data sources in spaces.
Built-in Qlik Web Connectors
STT - Reloading Your Data in Qlik Sense Business
WebConnectors (not SaaS):
Data sources included in Qlik Web Connectors
Google Drive and Spreadsheets
OneDrive
Dropbox
Qlik Web Connectors authentication fails with error "Could not establish trust relationship for the SSL/TLS secure channel"
-
Qlik Connectors: How to import strings longer than 255 characters
Recent versions of Qlik connectors have an out-of-the-box value of 255 for their DefaultStringColumnLength setting.
This means that, by default, any string containing more than 255 characters is cut when imported from the database.
To import longer strings, specify a higher value for DefaultStringColumnLength.
This can be done in the connection definition, under Advanced Properties, as shown in the example below.
The maximum value that can be set is 2,147,483,647.
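As an illustration of the effect, the following sketch simulates the truncation behavior described above (this is illustrative Python, not connector code; the actual cut happens inside the Qlik connector):

```python
# Simulation of the DefaultStringColumnLength behavior (illustrative only;
# the actual truncation happens inside the Qlik connector, not in user code).

def apply_column_length(value: str, default_string_column_length: int = 255) -> str:
    """Return the value as it would arrive in Qlik, cut to the configured length."""
    return value[:default_string_column_length]

long_text = "x" * 300

# With the out-of-the-box limit of 255, the string is cut:
assert len(apply_column_length(long_text)) == 255

# With a higher DefaultStringColumnLength, the full string survives:
assert len(apply_column_length(long_text, 1024)) == 300
```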
Environment
- Qlik Connectors
- Built-in Connectors
- Qlik Sense Enterprise on Windows November 2024 and later
-
Qlik Replicate: Periodically scheduled task not listed in the Executed Jobs tab
A scheduled Qlik Replicate task does not show up in the Executed Jobs list.
This is working as intended. The Executed Jobs tab only shows executed jobs that were scheduled to run once. In other words, jobs scheduled to run periodically (e.g. Daily, Weekly, Monthly) will not be shown.
See Scheduling jobs.
Environment
-
Advanced options is missing in a Qlik Cloud app
Advanced options is not visible when editing a sheet. This can happen in a specific app even if the option was present before.
Resolution
You can activate the "Show Sheet Header" option in the app settings to make the "Advanced options" button visible again.
- Open the App settings.
- Activate "Show Sheet Header".
Cause
For various reasons, the "Show Sheet Header" option may have been deactivated in an app. "Advanced options" is invisible when this happens because the button is located in the sheet header, which is removed. The app looks like this in editing mode:
Environment
- Qlik Cloud
-
Qlik Talend Product: How to Remove / Clean Up KAR Files from the Talend Runtime Server
For better performance of the Talend Runtime Server, you can uninstall KAR files to free up storage space.
This article briefly introduces how to uninstall KAR files from the Talend Runtime Server.
Prerequisite
The kar:uninstall command uninstalls a KAR file (identified by name). Uninstalling means that:
- The features previously installed by the KAR file are uninstalled
- All files previously "populated" by the KAR file are deleted (from the KARAF_DATA/system repository)
For instance, to uninstall the previously installed my-kar-1.0-SNAPSHOT.kar KAR file:
karaf@root()> kar:uninstall my-kar-1.0-SNAPSHOT
Talend Runtime Server
Run the following Karaf commands to clean KAR files from your repository:
# stop the running artifact task
bundle:list | grep -i <artifact-name>
bundle:uninstall <artifact-name | bundle-id>
# clean the task's KAR cache
kar:list | grep -i <artifact-name>
kar:uninstall <artifact-name>
Related Content
For more information about KAR files, refer to the Apache documentation on the kar:* commands used to manage KAR archives.
Environment
-
How to recreate or just delete certificates in Qlik Sense - No access to QMC or ...
There may be several different symptoms associated with a need to regenerate and redistribute certificates:
- After installing, renewing, or changing a third-party certificate for use with Qlik Sense, the Qlik Management Console (QMC) and Hub may become inaccessible, leading to a Page Cannot Be Displayed error.
This article does not cover the use of a third-party certificate for end-user Hub access, only the certificates used for communication between the Sense services. For recommendations on how to use a third-party certificate for end-user access, see How to: Change the certificate used by the Qlik Sense Proxy to a custom third party certificate
- In the Qlik Sense Proxy trace logs, the last line may indicate that the service is waiting for certificates to be installed, or similar. In addition, even though the Proxy service remains running, port 443 (by default) will fail to bind and start listening for requests.
- Qlik Sense may sometimes fail to create the correct certificates during installation if there are old/unused certificates left from a previous installation. Also, certs can become corrupted, or newly installed certificates configured to be used may not be compatible. See Qlik Sense: Compatibility information for third-party SSL certificates and Requirements for configuring Qlik Sense with SSL.
Do not perform the below steps in a production environment without first backing up the existing certificates. Certificates are used to encrypt information in the QRS database, such as connection strings. By recreating certificates, you may lose information in your current setup.
By removing the old/bad certificates and restarting the Qlik Sense Repository Service (QRS), the correct certificates can be recreated by the service. If you are only trying to remove certificates, only the removal steps need to be followed.
The instructions are to be carried out on the Qlik Sense Central Node. In the case of a multi-node deployment, verify which node is the central node before continuing.
- Open Qlik Sense Management Console (QMC)
- Navigate to Nodes section
- Add the Central Node column through the Column selector
If the current central node role is held by the failover, you need to fail the role back to the original central node by shutting down all the nodes (this implies downtime). Then start the original central node, reissue the certificates on it with this article, and when the central node is working apply the article Rim node not communicating with central node - certificates not installed correctly on each Rim node.
Step by Step instructions:
Test all data connections after the certificates are regenerated. It is likely that data connections with passwords will fail. This is because passwords are saved in the repository database with encryption, and that encryption is based on a hash from the certificates. When the Qlik Sense signed certificates are regenerated, this hash is no longer valid, and the saved data connection passwords cannot be decrypted. The customer must re-enter the password in each data connection and save. See article: Repository System Log Shows Error "Not possible to decrypt encrypted string in database"
- Log on to the Central Node using the Qlik Service Account, open 'Services', and navigate to the Qlik services.
- Stop the QRS (this will also stop the other services; however, make sure the postgresql-64-12 or Qlik Sense Repository Database is still running).
- Open Microsoft Management Console (MMC).
Important: Execute the MMC as the account configured to run the services (using Run as a different user [Ctrl-Shift & Right-click on the exe to see the option]).
- Add the following snap-ins for Certificates:
- My user account
- Local Computer account
- In Certificates (Local Computer) > Trusted Root Certification Authorities > Certificates, delete the Self-Signed certificates created by Qlik Sense, issued by HOSTNAME.domain-CA*
*Where HOSTNAME is the machine name of the server in question and domain is the domain of the server.
For example, if QlikServer1 is the computer hostname and the domain is domain.local, the certificate will be issued by QlikServer1.domain.local-CA
- In Certificates (Local Computer) > Personal > Certificates, delete the Self-Signed certificate issued by HOSTNAME.domain-CA
- In Certificates > Current User > Personal > Certificates, delete the Self-Signed certificate named QlikClient
- Go to the folder C:\ProgramData\Qlik\Sense\Repository, delete the folder 'Exported Certificates'
- Run this command from an elevated (admin) command prompt to create new certificates:
"C:\Program Files\Qlik\Sense\Repository\Repository.exe" -bootstrap -iscentral -restorehostname
Note: If the script doesn't get to "Bootstrap mode has terminated. Press ENTER to exit.." and gets stuck at "[INFO] Entering main startup phase..", start the Qlik Sense Service Dispatcher service and it will get to the end.
- Verify the new certificates have been created by refreshing the screen for each certificate location, and then start the rest of the Qlik Sense services. In addition, verify that duplicate or multiple certificates were not created (this rarely occurs). If so, the article will need to be followed again, starting with the deletion of the certificates.
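The backup urged earlier in this article can be scripted before the 'Exported Certificates' folder is deleted. A minimal sketch (paths and folder names are examples from the article; adapt to your deployment):

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_then_remove(folder: str, backup_root: str) -> str:
    """Copy `folder` into a timestamped directory under `backup_root`,
    then delete the original. Returns the path of the backup copy."""
    src = Path(folder)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / f"{src.name}-{stamp}"
    shutil.copytree(src, dest)   # back up first
    shutil.rmtree(src)           # then remove the original
    return str(dest)

# Example (path from the article; the backup location is hypothetical):
# backup_then_remove(r"C:\ProgramData\Qlik\Sense\Repository\Exported Certificates",
#                    r"C:\CertBackups")
```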
There is no need to perform a full reinstall to propagate new certificates. Certificates are created by the QRS automatically if not found during the service startup process.
For Qlik Sense multi-cloud deployment (September 2020 or later):
The steps in this section must be performed after recreating certificates as described above.
- Start Qlik Sense Repository Database service on CENTRAL NODE, or PostgreSQL Server service if running a dedicated instance of PostgreSQL database server.
- Using the pgAdmin tool or any other database client, connect to the SenseServices database. (IMPORTANT: the query below needs to be executed on the SenseServices DB)
-
Execute following query against SenseServices database:
DROP TABLE IF EXISTS hybrid_deployment_service.mt_doc_asymmetrickeysencrypt CASCADE;
-
Navigate to Deployments page of Multi-cloud Setup Console (MSC).
-
Delete and re-add any existing deployments by following the steps mentioned in Distributing apps from Qlik Sense Enterprise on Windows to Qlik Sense Enterprise SaaS and Distributing apps to Qlik Sense Enterprise on Kubernetes.
Node.js certificates
After the certificates have been recreated and then redistributed to all of the rim nodes, the node.js certificates stored locally on the central and all rim nodes also need to be recreated. Follow the below steps to perform this action:
- Stop all Qlik Sense services
- In Windows File Explorer, navigate to %ProgramData%\Qlik\Sense\Repository\Exported_certificates
- Back up the Local certificates directory and then delete it
- Restart the Qlik Sense services
Test all data connections after the certificates are rebuilt. It is likely that data connections with passwords will fail. This is because passwords are saved in the repository database with encryption. That encryption is based on a hash from the certs. When the Qlik Sense self-signed cert is rebuilt, this hash is no longer valid, and so the saved data connection passwords will fail. The customer must re-enter the passwords in each data connection and save. See article: Repository System Log Shows Error "Not possible to decrypt encrypted string in database"
Self-Signed Certificates:
Note: If you are using an official signed server certificate from a trusted Certificate Authority, the certificate information will also be in the QMC, under Proxies, with the certificate thumbprint listed. If you are trying to remove all aspects of the certificates, this thumbprint will need to be removed as well.
- Go to Proxies
- Select your Proxy and click Edit
- In the right pane, select Security
- Scroll down to the "SSL browser certificate thumbprint" field in the Security section to locate the thumbprint info.
If the Central Node repository service is hanging in the logs:
- Open C:\ProgramData\Qlik\Sense\Log\Repository\Trace
- Look for a line such as "API service initialized with 1501 available methods". A high method count like this indicates the Central Node.
- A line such as "API service initialized with 2 available methods" indicates a Rim node.
- Running the command "C:\Program Files\Qlik\Sense\Repository\Repository.exe" -bootstrap -iscentral -restorehostname will resolve this issue.
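The node-role check described above can be scripted against the trace logs. A small sketch (the log line format is taken from the examples above; the 100-method threshold is an assumption separating the full central-node API surface from a rim node's handful of methods):

```python
import re

def classify_node(log_line: str) -> str:
    """Classify a node as 'central' or 'rim' from the QRS trace log line
    'API service initialized with N available methods'."""
    m = re.search(r"API service initialized with (\d+) available methods", log_line)
    if not m:
        return "unknown"
    # Assumption: central nodes expose the full API surface (hundreds of
    # methods, e.g. 1501), while rim nodes expose only a handful (e.g. 2).
    return "central" if int(m.group(1)) > 100 else "rim"

assert classify_node("API service initialized with 1501 available methods") == "central"
assert classify_node("API service initialized with 2 available methods") == "rim"
```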
If the above does not work, see Qlik Sense Enterprise Hub and Qlik Management Console (QMC) down - bootstrap fails with "Newly created client certificate not valid; root certificate can't sign new certificates"
-
Transformation: Source Lookup - Oracle ROWID
Environment
- Replicate
- Table/Field level transformation
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
Field Transformation
On my source table from Oracle, I am using the source lookup function under Data Enrichment in order to retrieve the Oracle ROWID field value. Not all fields are logged by Oracle into the redo logs. In the case of ROWID, we use a source lookup function to query the source table and retrieve the field value, putting it into the Add Column that we defined and called ROWID. This source lookup function is documented in the Replicate user guide and the online help.
The general form of the function is as follows:
source_lookup(TTL,'SCHM','TBL','EXP','COND',COND_PARAMS)
For this example, I am doing the lookup against the Employees table:
source_lookup('NO_CACHING','HR','EMPLOYEES','ROWID','EMPLOYEE_ID=:1',$EMPLOYEE_ID)
NO_CACHING is important to ensure it keeps changing for each value and doesn’t re-use values
HR is the schema in Oracle
EMPLOYEES is the table
ROWID is what I want returned
EMPLOYEE_ID=:1 is the predicate for the lookup
$EMPLOYEE_ID is the value from the redo log that we are using in the predicate in place of :1
NOTE: If the source lookup needs more than one field in the key:
(Multiple variables would look like: 'EMPLOYEE_ID=:1 AND DEPARTMENT_ID=:2')
$EMPLOYEE_ID, $DEPARTMENT_ID (in this example we are using field values that the task has read from the redo log file.)
NOTE: The lookup may have a performance implication and will need to be tested to see if it meets all of your latency criteria. Please also note that the Oracle ROWID is not guaranteed to persist. Traditional Oracle ROWIDs can change if a table is quiesced or rebuilt with dbms_redefinition.
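To illustrate how the expression is assembled for one or more key fields, here is a small hypothetical helper (for illustration only; in practice the expression is typed directly into the Replicate transformation dialog):

```python
def build_source_lookup(ttl, schema, table, expr, key_fields):
    """Build a Replicate source_lookup() expression string.
    key_fields: list of source column names used in the lookup predicate."""
    # The predicate uses positional placeholders :1, :2, ... joined with AND,
    # and the parameters are the matching $FIELD values from the redo log.
    cond = " AND ".join(f"{f}=:{i}" for i, f in enumerate(key_fields, start=1))
    params = ",".join(f"${f}" for f in key_fields)
    return f"source_lookup('{ttl}','{schema}','{table}','{expr}','{cond}',{params})"

# Single-key example from the article:
assert build_source_lookup("NO_CACHING", "HR", "EMPLOYEES", "ROWID", ["EMPLOYEE_ID"]) == \
    "source_lookup('NO_CACHING','HR','EMPLOYEES','ROWID','EMPLOYEE_ID=:1',$EMPLOYEE_ID)"

# Two-key example:
print(build_source_lookup("NO_CACHING", "HR", "EMPLOYEES", "ROWID",
                          ["EMPLOYEE_ID", "DEPARTMENT_ID"]))
# source_lookup('NO_CACHING','HR','EMPLOYEES','ROWID','EMPLOYEE_ID=:1 AND DEPARTMENT_ID=:2',$EMPLOYEE_ID,$DEPARTMENT_ID)
```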
This expression is created for the Add Column transformation ROWID as seen in the screen shot below.
Transformation Screen shot
NOTE: Unfortunately, data enrichment functions cannot be tested on screen; they must be saved and the task run for the transformation to be tested.
The screen shot below shows the target table with the ROWID field after the task has run.
Target Table with ROWID field
Related Content
-
Qlik Sense Enterprise on Windows: Extended WebSocket CSRF protection
Beginning with Qlik Sense Enterprise on Windows 2024, Qlik has extended CSRF protection to WebSockets. For reference, see the Release Notes.
In the case of mashups, extensions, and/or other cross-site domain setups, the following two steps are necessary:
- Add additional response headers. These headers help protect against Cross-Site Request Forgery (CSRF) attacks.
- Change the applicable code in your mashup or extension.
Content
Add the Response Headers
The three additional response headers are:
Access-Control-Allow-Origin: https://localhost:8080
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: qlik-csrf-token
Localhost and port 8080 are examples. Replace them with the appropriate hostname. Defining the port is optional.
If you have multiple origins, separate them with commas.
For more information about adding response headers to the Qlik Sense Virtual proxy, see Creating a virtual proxy. Expand the Advanced section to access Additional response headers.
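The three headers above can be captured in a small check that you might run against a response observed in your browser's debug tool. This is an illustrative sketch (header names from the article; the origin value is an example):

```python
REQUIRED_HEADERS = {
    "Access-Control-Allow-Origin": "https://localhost:8080",  # example origin
    "Access-Control-Allow-Credentials": "true",
    "Access-Control-Expose-Headers": "qlik-csrf-token",
}

def missing_headers(response_headers: dict) -> list:
    """Return the names of required CSRF/CORS headers absent from a response."""
    # Header names are case-insensitive, so compare in lowercase.
    present = {k.lower() for k in response_headers}
    return [name for name in REQUIRED_HEADERS if name.lower() not in present]

# A response missing the expose header:
assert missing_headers({"Access-Control-Allow-Origin": "https://localhost:8080",
                        "access-control-allow-credentials": "true"}) == \
       ["Access-Control-Expose-Headers"]
```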
Adapt your Mashup or Extension code
In certain scenarios, the additional headers on the virtual proxy will not be enough and a code change is required. In these cases, you need to request the CSRF token and then send it forward when opening the session on the WebSocket. See Workflow for a visualisation of the process.
An example written in Enigma.js is available here:
The information and example in this article are provided as-is and are not directly supported by Qlik Support. More assistance can be found on the Qlik Integration forum. Professional Services are available to help where needed.
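The fetch-token-then-connect flow can be sketched as follows. Note that the 'qlik-csrf-token' query-parameter name and URL shape are assumptions based on Qlik's common CSRF-token pattern; verify them against the Enigma.js example referenced above:

```python
from urllib.parse import urlencode

def websocket_url_with_csrf(ws_base: str, app_id: str, csrf_token: str) -> str:
    """Append the CSRF token as a query parameter to the engine WebSocket URL.
    The 'qlik-csrf-token' parameter name is an assumption, not confirmed here."""
    query = urlencode({"qlik-csrf-token": csrf_token})
    return f"{ws_base}/app/{app_id}?{query}"

url = websocket_url_with_csrf("wss://qlikserver.example.com/virtualproxy",
                              "my-app", "abc123")
assert url == "wss://qlikserver.example.com/virtualproxy/app/my-app?qlik-csrf-token=abc123"
```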
Workflow
Workflow
Verification
To verify if the header information is correctly passed on, capture the web traffic in your browser's debug tool.
Environment
- Qlik Sense Enterprise on Windows November 2024 and later
-
Reload failure via Data Gateway Direct Access from MongoDB Atlas using ODBC (via...
Using the ODBC (via Direct Access gateway) connector to reload from a MongoDB Atlas database via Qlik Data Gateway Direct Access, the task fails with the below error:
Please check the values for Username‚ Password‚ Host and other properties. Description: General warning - ERROR [01000] The driver returned invalid (or failed to return) SQL_DRIVER_ODBC_VER: 03.80")
Resolution
To connect to a MongoDB Atlas source for use with Qlik Data Gateway Direct Access, follow the configuration detailed in Create a MongoDB connection.
Cause
The MongoDB Atlas SQL driver is not supported for use with the ODBC (via Direct Access gateway) connector. Refer to Unsupported or partially supported drivers | ODBC (via Direct Access gateway).
Environment
- Qlik Data Gateway Direct Access
-
Optimizing Performance for Qlik Sense Enterprise
This Techspert Talks session addresses:
- Understanding Back-end Infrastructure
- Measure
- Monitor
- Troubleshoot
Tip: Download the LogAnalyzer app here: LogAnalysis App: The Qlik Sense app for troubleshooting Qlik Sense Enterprise on Windows logs.
00:00 - Intro
01:22 - Multi-Node Architecture Overview
04:10 - Common Performance Bottlenecks
05:38 - Using iPerf to measure connectivity
09:58 - Performance Monitor Article
10:30 - Setting up Performance Monitor
12:17 - Using Relog to visualize Performance
13:33 - Quick look at Grafana
14:45 - Qlik Scalability Tools
15:23 - Setting up a new scenario
18:26 - A look at the QSST Analyzer App
19:21 - Optimizing the Repository Service
21:38 - Adjusting the Page File
22:08 - The Sense Admin Playbook
23:10 - Optimizing PostgreSQL
24:29 - Log File Analyzer
27:06 - Summary
27:40 - Q&A: How to evaluate an application?
28:30 - Q&A: How to fix engine performance?
29:25 - Q&A: What about PostgreSQL 9.6 EOL?
30:07 - Q&A: Troubleshooting performance on Azure
31:22 - Q&A: Which nodes consume the most resources?
31:57 - Q&A: How to avoid working set breaches on engine nodes?
34:03 - Q&A: What do QRS log messages mean?
35:45 - Q&A: What about QlikView performance?
36:22 - Closing
Resources:
LogAnalysis App: The Qlik Sense app for troubleshooting Qlik Sense Enterprise on Windows logs
Qlik Help – Deployment examples
Using Windows Performance Monitor
PostgreSQL Fine Tuning starting point
Qlik Sense Shared Storage – Options and Requirements
Qlik Help – Performance and Scalability
Q&A:
Q: Recently I'm facing Qlik Sense proxy servers RAM overload, although there are 4 nodes and each node has 16 CPUs and 256 GB. We have done app optimization, like deleting duplicate apps, removing old data, removing unused fields... but RAM status is still not good. What is next to fix the performance issue? Add more nodes?
A: Depends on what you mean by “RAM status still not good”. Qlik Data Analytics software will allocate and use memory within the limits established and does not release this memory unless the Low Memory Limit has been reached and the cache needs cleaning. If RAM consumption remains high but there are no other effects, your system is working as expected.
Q: Similar to other databases, do you think we need to perform fine-tuning and clean up bad records within PostgreSQL, e.g. once per year?
A: Periodic cleanup, especially in a rapidly changing environment, is certainly recommended. A good starting point: set your Deleted Entity Log table cleanup settings to appropriate values, and avoid clean-up tasks kicking in before the morning user ramp-up.
Q: Does QliKView Server perform similarly to Qlik Sense?
A: It uses the same QIX Engine for data processing. There may be performance differences to the extent that QVW Documents and QVF Apps are completely different concepts.
Q: Is there a simple way (better than restarting QS services) to clean the cache, because cache around 90% slows down QS?
A: It’s not quite that simple. Qlik Data Analytics software (and by extension, your users) benefits from keeping data cached as long as possible. This way, users consume pre-calculated results from memory instead of computing the same results over and over. Active cache clearing is detrimental to performance. High RAM usage is entirely normal, based on the Memory Limits defined in the QMC. You should not expect Qlik Sense (or QlikView) to manage memory like regular software. If work stops, this does not mean memory consumption will go down; we expect to receive and serve more requests, so we keep as much cached as possible. Long-winded, but I hope this sets better expectations when considering “bad performance” without the full technical context.
Q: When CPU hits 100%, how do we know what the culprit is? For example, too many concurrent users loading apps/datasets, or multiple apps/QVDs reloading? Can we see that anywhere?
A: We will provide links to the Log Analysis app I demoed during the webinar, this is a great place to start. Set Repository Performance logs to DEBUG for the QRS performance part, start analysing service resource usage trends and get to know your user patterns.
Q: Can there be repository connectivity issues with too many nodes?
A: You can only grow an environment so far before hitting physical limits to communication. As a best practice, with every new node added, QRS Connection Pools and DB connectivity should be reviewed and increased where necessary. The most common problem here is adding more nodes than the number of connections allowed to the DB or Repository Services. This will almost guarantee communication issues.
Q: Do the Qlik Scalability Tools measure browser rendering time as well, or do they just work at the API layer?
A: Excellent question. They only evaluate at the API call/response level. For results that include browser-side rendering, other tools are required (e.g. LoadRunner, which is complex to set up and may need expert help).
Transcript:
Hello everyone and welcome to the November edition of Techspert Talks. I’m Troy Raney and I’ll be your host for today's session. Today's presentation is Optimizing Performance for Qlik Sense Enterprise with Mario Petre. Mario why don't you tell us a little bit about yourself?
Hi everyone; good to be here with everybody once again. My name is Mario Petre. I’m a Principal Technical Engineer in the Signature Support Team. I’ve been with Qlik over six years now and since the beginning, I’ve focused on Qlik Sense Enterprise backend services, architecture and performance from the very inception of the product. So, there's a lot of historical knowledge that I want to share with you and hopefully it's an interesting springboard to talk about performance.
Great! Today we're going to be talking about how a Qlik Sense site looks from an architectural perspective; what are things that should be measured when talking about performance; what to monitor after going live; how to troubleshoot and we'll certainly highlight plenty of resources and where to find more details at the end of the session. So Mario, we're talking about performance for Qlik Sense Enterprise on Windows; but ultimately, it's software on a machine.
That's right.
So, first we need to understand what Qlik Sense services are and what type of resources they use. Can you show us an overview from what a multi-node deployment looks like?
Sure. We can take a look at how a large Enterprise environment should be set up.
And I see all the services have been split out onto different nodes. Would you run through the acronyms quickly for us?
Yep. On a consumer node this is where your users come into the Hub. They will come in via the Qlik Proxy Service and consume applications via the Qlik Engine Service, that ultimately connects to the central node and everything else via the Qlik Repository Service.
Okay.
The green box is your front-end services. This is what end users tap into to consume data, but what facilitates that in the background is always the Repository Service.
And what's the difference between the consumer nodes on the top and the bottom?
These two nodes have a Proxy Service that balances against their own engines as well as other engines; while the consumer nodes at the bottom are only there for crunching data.
Okay.
And then we can take a look at the backend side of things. Resources are used to the extent that you're doing reloads, you will have an engine there as well as the primary role for the central node, active and failover which is: the Repository Service to coordinate communication between all the rest of the services. You can also have a separate node for development work. And ultimately we also expect the size of an environment to have a dedicated storage solution and a dedicated central Repository Database host either locally managed or in one of the cloud providers like AWS RDS for example.
Between the front-end and back-end services where's the majority of resource consumption, and what resources do they consume?
Most of the resource allocation here is going to go to the Engine Service; and that will consume CPU and RAM to the extent that it's allocated to the machine. And that is done at the QMC level where you set your Working Set Limits. But in the case of the top nodes, the Proxy Service also has a compute cost as it is managing session connectivity between the end user's browser and the Engine Service on that particular server. And the Repository Service is constantly checking the authorization and permissions. So, ultimately front-end servers make use of both front-end and back-end resources. But you also need to think about connectivity. There is the data streaming from storage to the node where it will be consumed and then loading from that into memory. And these are three different groups of resources: you have compute; you have memory, and you have network connectivity. And all three have to be well suited for the task for this environment to work well.
And we're talking about speed and performance like, how fast is a fast network? How can we even measure that?
So, for any Enterprise environment we would start at a 10 Gb network speed, and ultimately we expect a response time of 4 ms between any node and the storage back end.
Okay. So, what are some common bottlenecks and issues that might arise?
All right. So, let's take a look at some examples. The Repository Service failing to communicate with rim nodes or with local services: I would immediately verify that the Repository Service connection pool and network connectivity are stable and connected. Let's say apps load very, very slowly the first time: this is where network speed really comes into play. Another example: the QMC or the Hub takes a very long time to load. For that, we would have to look into the communication between the Repository Service and the Database, because that's where we store all of the metadata that we use to calculate your permissions.
And could that also be related to the rules that people have set up and the number of users accessing?
Absolutely. You can hurt user experience by writing complex rules.
What about lag in the app itself?
This is now being consumed by the Engine Service on the consumer node. So, I would immediately try to evaluate resource consumption on that node, primarily CPU. Another great example is high Page File usage. We prefer memory for working with applications. So, as soon as we have to cache and pull those results from disk, performance will suffer. And ultimately, the direct connectivity: how good and stable is the network between the end user's machine and the Qlik Sense infrastructure? The symptom will be on the end user side, but the root cause almost always (I mean 99.9% of the time) will come down to something in the environment.
So, to get an understanding of how well the machine works and establish that baseline, what can we use?
One simple way to measure this (CPU, RAM, disk network) is this neat little tool called iPerf.
Okay. And what are we looking at here?
This is my central node.
Okay. And iPerf will measure what exactly?
How fast data transfer is between this central node and a client machine or another server.
And where can people find iPerf?
Great question. iPerf.fr
And it's a free utility, right?
Absolutely.
So, I see you've already got it downloaded there.
Right. You will have to download this package, both on the server and the client machine that you want to test between. We'll run this “As Admin.” We call out the command; we specify that we want it to start in “server mode.” This will be listening for connection attempts.
Okay.
We can define the port. I will use the default one. Those ports can be found in Qlik Help.
Okay.
The format for the output in megabyte; and the interval for refresh 5 seconds is perfectly fine. And then, we want as much output as possible.
Okay.
First, we need to run this. There we go. It started listening. Now, I’m going to switch to my client machine.
So, iPerf is now listening on the server machine and you're moving over to the client machine to run iPerf from there?
Right. Now, we've opened a PowerShell window into iPerf on the client machine. Then we call the iPerf command. This time, we're going to tell it to launch in “Client Mode.” We need to specify an IP address for it to connect to.
And that's the IP address of the server machine?
Right. Again, the port; the format so that every output is exactly the same. And here, we want to update every second.
Okay.
And this is a super cool option: if we use the bytes flag, we can specify the size of the data payload. I’m going to go with a 1 Gb file (1024 Mb). You can also define parallel connections. I want 5 for now.
So, that's like 5 different users or parallel streams of activity of 1 Gb each between the server machine and this client machine?
Right. So, we actually want to measure how fast can we acquire data from the Qlik Sense server onto this client machine. We need to reverse the test. So, we can just run this now and see how fast it performs.
Okay. And did the server machine react the same way?
You can see that it produced output on the listening screen. This is where we started. And then it received and it's displaying its own statistics. And if you want to automate this, so that you have a spot check of throughput capacity between these servers, we need to use the log file option. And then we give it a path. So, I’m gonna say call this “iperf_serverside…” And launch it. And now, no output is produced.
Okay.
So, we can switch back to the client machine.
Okay. So, you're performing the exact same test again, just storing everything in a log file.
The test finished.
Okay. So, that can help you compare between what's being sent to what's being received, and see?
Absolutely. You can definitely have results presented in a way that is easy to compare across machines and across time. And initial results gave us a throughput per file of around 43.6, 46, thereabouts megabytes per second.
So, what about for an end user who's experiencing issues? Can you use iPerf to test the connectivity from a user machine on a different network?
Yep. So, in the background we will have our server; it's running and waiting for connections. And let's run this connection now from the client machine. We will make sure that the IP address is correct; default port; the output format in megabytes; we want it refreshed every second; we are transferring 1 GB; and 5 parallel streams in reverse order, meaning we are copying from the server to the client machine. And let's run it.
Just seeing those numbers, they seem to be smaller than what we're seeing from the other machine.
Right. Indeed, I have some stuff in between to force it to talk a little slower, but this is one quick way to identify a spotty connection. This is where a baseline becomes gold: being able to demonstrate that your platform is experiencing a problem, and to quantify and specify what that problem is, will reduce the time that you spend on outages and make you more effective as an admin.
Okay. That was network. How can admins monitor all the other performance aspects of a deployment? What tools are available and what metrics should they be measuring?
Right. That's a great question. The very basic is just Performance Monitor from Windows.
Okay.
The great thing about that is that we provide templates that also include metrics from our services.
Can you walk us through how to set up the Performance Monitor using one of those templates?
Sure thing. So, we're going to switch over first to the central node. The first thing that I want to do is create a folder where all of these logs will be stored.
Okay. So, that's a shared folder, good.
And this article is a great place to start. So, we can just download this attachment
So, now it's time to set up a Performance Monitor proper. We need to set up a new Data Collector Set.
Giving it a name.
And create from template. Browse for it, and finish.
Okay. So it’s got the template. That's our new one Qlik Sense Node Monitor, right?
Yep. You'll have multiple servers all writing to the same location. The first thing is to define the name of each individual collector, and you do that here. You can also provide a subdirectory for these collectors, and I suggest having one per node name. I will call this Central Node.
Everything that comes from this node, yeah.
Correct. You can also select a schedule for when to start these. We have an article on how to make sure that Data Collectors are started when Windows starts. And then a stop condition.
Now, setting up monitors like this; could this actually impact performance negatively?
There is always an overhead to collecting and saving these metrics to a file. But the overhead is negligible.
Okay.
I am happy with how this is defined. Now, this Data Collector on one of the nodes is already set up. There is an option here that's called Data Manager. What's important to define here is a Minimum Free Disk; we could go with 10 GB, for example. You can also define a Resource Policy; the important bit is to Delete the Oldest (not the largest). In the Data Collector itself, we should change that directory and make sure that it points to our central location instead of locally, and we'll have to do this for every single node where we set this up.
Okay. So, that's that shared location?
Yep.
And you run the Data Collector there. And it creates a CSV file with all those performance counters. Cool.
So, here we have it now. If we just take a very quick look inside, we'll see a whole bunch of metrics. And if you want to visualize these really really quick, I can show you a quick tip that wasn't on the agenda but since we're here: on Windows, there is a built-in tool called Relog that is specifically designed for reformatting Performance Monitor counters. So, we can use Relog; we'll give it the name of this file; the format will be Binary; the output will be the same, but we'll rename it to BLG; and let's run it.
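The Relog invocation described here could look like the following. The input file name is hypothetical; -f BIN selects the binary format, and the output gets the .blg extension so it opens directly in the Performance Monitor viewer.

```shell
# Convert a Performance Monitor CSV log into binary (.blg) form so that
# double-clicking the result opens it in the Performance Monitor viewer.
CMD="relog QlikSenseNodeMonitor.csv -f BIN -o QlikSenseNodeMonitor.blg"
echo "$CMD"
```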
And now it has created a copy in binary format. The cool thing about this, Troy, is that you can just double-click on it.
It's already formatted to be a little more readable. Wow! Check that out.
There we go. Another quick tip: since we're here, first thing to do is: select everything and Scale; just to make sure that you're not missing any of the metrics. And this is also a great way to illustrate which service counters and system counters we collect. As you can see, there's quite a few here.
Okay. So, the Performance Monitor is set up; it's running; we can see how it looks. And is that going to run all the time or just when we manually trigger it?
You can definitely configure it to run all the time, and that would be my advice. Its value is really realized as a baseline.
Yeah. Exactly. That was pretty cool seeing how that worked, using all the built-in utilities. And that Relog formatting for the Performance Monitor logs was new to me. Are there any other tools you like to highlight?
Yeah. So, Performance Monitor is built in. For larger enterprises that may already be monitoring resources in a centralized way, there's no reason not to include the Qlik Sense resources in that live monitoring. This could be done via different solutions out there; a few come to mind, like Grafana, Datadog, and Butler SOS, the last one from one of our own Qlik luminaries.
Can we take a quick look at Grafana? I’ve heard of that but never seen it.
Sure thing. This is my host monitor sheet. It's nowhere near built to a corporate standard, but you can see here I’m looking at resources for the physical host where these VMs are running, as well as the domain controller and the main server where we've been running our CPU tests. And the great part about this is I have historical data going back as far as 90 days, I believe.
So, this is a cool tool that lets you take a look at the performance, zoom in, and find the processes that might be causing some peaks or anything you want to investigate?
Right. Exactly. At least come up with a narrow time frame for you to look into with the other tools, and again narrow down the window of your investigation.
Yeah, that could be really helpful. Now I wanted to move on to the Qlik Sense Scalability Tools. Are those available on Qlik community?
That's right. Let me show you where to find them. You can see that we support all current versions including some of the older ones. You will have to go through and download the package and the applications used for analysis afterwards. There is a link over here. So, once the package is downloaded, you will get an installer. And the other cool thing about Scalability Tools is that you can use it to pre-warm the cache on certain applications since Qlik Sense Enterprise doesn't support application pre-loading.
Oh, cool. So, you can pre-load applications into memory like in QlikView. Can we take a look at it?
Yes, absolutely. This is the first thing that you'll see. We'll have to create a new connection. So, I’ll open a simple one that I’ve defined here and we can take a look at what's required just to establish a quick connection to your Qlik Sense site.
Okay, but basically the scenario that you're setting up will simulate activity on a Qlik Sense site to test its performance?
Exactly. You'll need to define your server hostname; this can be any of your proxy nodes in the environment. Then the virtual proxy prefix, which I’ve defined as Header, and the authentication method is going to be WebSocket.
Okay.
And then, if we want to look at how virtual users are going to be injected into the system, scroll over here to the user section. Just for this simple test, I’ve set it up for User List where you can define a static list of users like so: User Directory and UserName.
Okay. So, it's going to be taking a look at those 2 users you already predefined and their activity?
Exactly. We need to test the connection to make sure that we can connect to the system. Connection Successful. And then we can proceed with the scenario. This is very simple but let me show you how I got this far. So, the very first thing that we should do is to Open an App.
So, you're dragging away items?
Yep. I’m removing actions from this list. Let's try to change the sheet. A very simple action. And now we have four sheets, and we'll go ahead and select one of them.
Okay, so far, we have Opening the App and immediately changing to a sheet?
Yep. That's right. This will trigger actions in sequence exactly how you define them. It will not take into consideration things like Think Time. I will just define a static wait of 15 seconds, and then you can make selections.
But this is an amazing tool for being able to kind of stress test your system.
It's very, very useful, and it also provides a huge amount of detail within the results that it produces. One other quick tip: while defining your scenario, use easy-to-read labels so that you can identify them in the Results Application. Let's assume that the scenario is defined. We will go ahead and add one last action, and that is to close, to Disconnect from the app. We'll call this one “OpenApp,” and we'll call this one “SheetChange.” Make sure you Save. We've tested the connection and defined our list of users. Before we can run the scenario, there is one more step to define, and that is to configure an Executor that will use this scenario file to launch a workload against our system. Create a New Sequence.
This is just where all these settings you're defining here are saved?
Correct. This is simply a mapping between the execution job that you're defining and which script scenario should be used. We'll go ahead and grab that. Save it again; and now we can start it. And now in the background if we were to monitor the Qlik Sense environment, we would see some amount of load coming in. We see that we had some kind of issue here: empty ObjectID. Apparently I left something in the script editor; but yeah, you kind of get the idea.
So, all this performance information would then be loaded into an app that is part of the package downloaded from Qlik community. How does that look?
So, here you will see each individual result set, and you can look at multiple exerciser runs in a single application. Unfortunately, we don't have more than one here to showcase that, but you would see multiple colored lines. There are metrics for a little bit of everything: your session ramp, your throughput by minute; and you can change these.
CPU, RAM. This is great.
Exactly. CPU and RAM. These are not connected; we don't have those logs, but you would have them for a setup run on your system. These come from Performance Monitor as well, so you could just use those logs, provided that the right template is in place. We see Response Time Distribution by Action, and these are the ones that I asked you to change and name so that they're easy to understand.
Once your deployment is large enough to need to be multi-node and the default settings are no longer the best ones for you, what needs to be adjusted with the Repository Service to keep it from choking, or to improve its performance?
That's a great question Troy. So, the first thing that we should take a look at is how the Repository communicates with the backend Database and vice versa. The connection pool for the Repository is always based on core count on the machine. And the best rule of thumb that we have to date is to take your core count on that machine, multiply it by 5, and that will be the max connection pool for the Repository Service for that node.
Can you show us where that connection pool setting can be changed?
Yes. So, we will go ahead and take a look. Here we are on the central node of my environment. You'll have to find your Qlik installation folder. We'll navigate to the Repository folder, Util, QlikSenseUtil, and we'll have to launch this “As Admin.”
Okay.
We'll have to come to the Connection String Editor. Make sure that the path matches. We just have to click on Read so that we get the contents of these files. And the setting that we are about to change is this one.
Okay. So, the maximum number of connections that the Repository can make?
Yes. And this is (again) for each node going towards the Repository Database.
Okay.
Again, this should be the number of CPU cores multiplied by 5. If 90 is higher than that result, leave 90 in place; never decrease it.
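The rule of thumb above can be sketched as a quick calculation (the core count here is purely illustrative):

```shell
# Max Repository connection pool per node: logical cores * 5,
# but never below the shipped default of 90.
cores=24
pool=$(( cores * 5 ))
if [ "$pool" -lt 90 ]; then pool=90; fi
echo "$pool"   # a 24-core node would get a pool of 120
```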
Okay, that's a good tip.
Right. I change this to 120. I have to Save. What I like to do here is: clear the screen and hit Read again; just to make sure that the changes have been persisted in the file.
Okay.
Once that's done, we can close this. We can restart the environment. We can get out of here.
So, there you adjusted the setting of how many connections this node can make to the QSR. Then assuming we do the same on all nodes, where do we adjust the total number of connections the Repository itself can receive?
That should be the sum of the connection pools from all of your nodes, plus 110 extra for the central node. By default, here is where you can find that config file: Repository, PostgreSQL, and we'll have to open this one, postgresql.conf. Towards the end of the file…
Just going all the way to the bottom.
Here we have it; my max_connections is set to 300.
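As a worked example of the sizing rule above (node counts and pool sizes here are hypothetical, not from the demo environment): three nodes with a connection pool of 120 each, plus the 110 extra for the central node, would suggest the following in postgresql.conf:

```ini
# postgresql.conf (values are illustrative)
# 3 nodes x 120 connection pool each = 360, plus 110 for the central node
max_connections = 470
```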
Okay. One other setting you mentioned was the Page File as something to be considered. How would we make changes or adjust that setting?
Right. So, this is a Windows level setting that's found in Advanced System Settings; Advanced tab; Performance; and then again Advanced; and here we have Virtual Memory.
Okay.
We have to hit Change. We should either leave it at System Managed or understand exactly which values we are choosing and why. If you're not sure, the default should always be System Managed.
Now, I want to know what resources are available for Qlik Sense admins; specifically, what is the Admin Playbook?
It's a great starting place for understanding what duties and responsibilities one should be thinking about when administering a Qlik Sense site.
So, these are a bunch of tools built by Qlik to help analyze your deployment in different ways. I see weekly, monthly, quarterly, yearly, and a lot of different things are available there.
Yeah. So, we can take a look at Task Analysis, for example. The first time you run it, it's going to take about 20 minutes; thereafter about 10. The benefits: it shows you really in depth how to get to the data and then how to tweak the system to work better based on what you have.
Yeah, that's great.
Right? So, not only do we put the tools in your hands, but we also show you how to build them. As you can see here, we have instructions on how to build these objects from scratch. An absolute must-read for every system admin out there.
Mario, we've talked about optimizing the Qlik Sense Repository Service, but not about Postgres. Do larger enterprise-level deployments affect its performance?
Sure. The thing about Postgres is again: we have to configure it by default for compatibility and not performance. So, it's another component that has to be targeted for optimization.
The detail there that anything over 1 GB from Postgres might get paged - that sounds like it could certainly impact performance.
Right, because the buffer setting that we have by default is set to 1 GB, and that means only 1 GB of physical memory will be allocated to Postgres work. Now, we're talking about a large environment: 500 to maybe 5,000 apps, thousands of users, with about 1,000 of them at peak concurrency per hour.
So, can we increase that Shared Buffer setting?
Absolutely. And in fact, I want to direct you to a really good article on performance optimization for PostgreSQL. When we talk about fine-tuning, this article is where I’d like to get started. It covers certain important factors, like Shared Buffers. This is what we define as 1 GB by default; their recommendation is to start with 1/4 of the physical memory in your system, and 1 GB is definitely not one quarter of the memory on most machines out there. So, it needs tweaking.
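Following the quarter-of-RAM starting point mentioned above, a hypothetical machine with 16 GB of RAM would move from the default to something like this in postgresql.conf (a sketch only; tune and re-measure against your own baseline):

```ini
# postgresql.conf -- the default ships with:
# shared_buffers = 1GB
# Starting point for a machine with 16 GB of RAM (1/4 of physical memory):
shared_buffers = 4GB
```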
And again these are settings to be changed on the machine that's hosting the Repository Database, right?
That's correct. That's correct.
Now, is there an app that you're aware of that would be good to kind of look at all these logs and analyze what's going on with the performance?
Absolutely. This is an application that was developed to better understand all of the transactions happening in a particular environment. It reads the log files collected with the Log Collector either via the tool or the QMC itself.
Okay.
It's not built for active monitoring, but rather to enhance troubleshooting.
Sure. So, basically it's good for looking at a short period of time to help troubleshooting?
Right. The Repository itself communicates over APIs between all the nodes and keeps track of all of the activities in the system; and these translate to API calls. If we want to focus on Repository API calls, we can start by looking at transactions.
Okay.
So, this will give us detail about cost. For example, per REST call or API call, we can see which endpoints take the most time and the duration per user, and this gives you an opportunity to start at a very high level and slowly drill in, both in message types and timeframe. Another sheet is Threads, Endpoints and Users; here you have performance information about how many worker threads the Repository Service is able to start and what the Repository CPU consumption is, so you can easily identify one. For example, here, just by the count, we can see that the preview privileges call for objects is called…
Yeah, a lot.
Over half a million times, right? And represents 73% of the CPU compute cost.
Wow, nice insights.
And then if we look here at the bottom, we can start evaluating time-based patterns and select specific time frames and go into greater detail.
So, I’m assuming this can also show resource consumption as well?
Right. CPU, memory in gigabytes, and memory in percent. One neat trick is to go to the QMC, look at how you've defined your Working Set Limits, and then pre-define reference lines in this chart, so that it's easier to visualize when those thresholds are close to being reached or breached. You do that via the add-ons' reference lines, and you can define them like this.
That's just to sort of set that to match what's in the QMC?
Exactly.
Makes a powerful visualization. So, you can really map it.
Absolutely. And you can always drill down into specific points in time; we can go and check the log details in the Engine Focus sheet. This will allow us to browse over time, select things like errors and warnings alone, and then we will have all of the messages that are coming from the log files and what their sources are.
Yeah. That's great to have it all kind of collected here in one app, that's great.
Indeed.
To summarize: we've talked about how, to understand system performance, a baseline needs to be established. That involves setting up some monitoring. There are lots of options and tools available to do that, and it's really about understanding how the system performs so that measurement and comparisons are possible if things don't perform as expected.
And to begin to optimize as well.
Okay, great. Well now, it's time for Q&A. Please submit your questions through the Q&A panel on the left side of your On24 console. Mario, which question would you like to address first?
We have some great questions already. So, let's see - first one is: how can we evaluate our existing Qlik Sense applications?
This is not something that I’ve covered today, but it's a great question. We have an application on Community called App Metadata Analyzer. You can import this into your system and use it to understand the memory footprint of applications and the objects within them, and how they scale inside your system. It will very quickly illustrate if you are shipping applications with extremely large data files (for example) that are almost never used. You can use that as a baseline both for optimizing local applications and in your efforts to migrate to SaaS. And if you feel like you don't want to bother with all of this performance monitoring and optimization, you can always choose to use our services, and we'll take care of that for you.
Okay, next question.
So, the next question: worker schedulers errors and engine performance. How to fix?
I think I would definitely point you back to this Log Analysis application. Load that time frame where you think something bad happened, and see what kind of insights you can get by playing with the data, by exploring the data. Then narrow that search down; if you find a specific pattern that seems like the product is misbehaving, talk to Qlik Support. We'll evaluate that with you and determine whether this is a defect, or if it's just a quirk of how your system is set up. But that Sense Log Analysis app is a great place to start. And going back to the sheet that I showed: Repository and Engine metrics are all collected there, and these come from the performance logs that Qlik Sense already produces. You don't need to load any additional performance counters to get those details.
Okay.
All right. So, there is a question here about Postgres 9.6 and the fact that it's soon coming to end of life. I think this is a great moment to talk about this. Qlik Sense client-managed (or Qlik Sense Enterprise for Windows) supports Postgres 12.5 for new installations since the May release. If you have an existing installation, 9.6 will continue to be used, but there is an article on Community on how to in-place upgrade it to 12.5 as a standalone component. So, you don't have to continue using 9.6 if your IT policy is complaining about the fact that it's soon coming to end of life. As we say, we are aware of this fact, and in fact, we are shipping a new version as of the May 2021 release.
Oh, great.
So, here's an interesting question: if we have Qlik Sense in Azure on a virtual machine, why is the performance so sluggish? How do you fine-tune it? I guess first we need to understand what you mean by sluggish. But the first thing that I want to point to is instance types. Virtual machines from cloud providers are optimized for different workloads, and the same is true for AWS, Azure, and Google Cloud Platform. You will have virtual machines that are optimized for storage, ones that are optimized for compute tasks or application analytics, and some that are optimized for memory. Make sure that you've chosen the right instance type and the right level of provisioned IOPS for this application. If you feel that your performance is sluggish, start increasing those resources: go one tier up and reevaluate until you find an instance type that works for you. If you wish to have these results beforehand, you will have to consider using the Scalability Tools together with some of your applications against different instance types in Azure to determine which ones work best.
Just to kind of follow up on that question, if we're looking at that multi-node example from Qlik help, what nodes would you consider would require more resources?
Worker nodes in general. And those would be front and back-end.
So, a worker node is something with an engine, right?
Exactly. Something with an engine. It can either be front-facing, together with a proxy, to serve content, or back-end, together with a scheduler service, to perform reload tasks. These will consume all the resources available on a given machine.
Okay.
And this is how the Qlik Sense engine is developed to work. These resources are almost never released unless there is a reason for it, because keeping those results cached is what makes the product fast.
Okay.
Oh, here's a great one about avoiding working set breaches on engine nodes. The question says: do you have any tips for avoiding the max memory threshold of the QIX engine? We didn't really cover this aspect, but as you know, the engine allows you to configure both a lower and a higher memory limit. To understand how these work, I want to point you back to that QIX engine white paper; the system will perform certain actions when these thresholds are reached. The first prompt that I have for you in this situation is: understand whether these limits are far away from your physical memory limit. By default, Qlik Sense (I believe) uses 70 / 90 as the low and high working sets on a machine. With a lot of RAM, let's say 256 GB to half a terabyte, if you leave that low working set limit at 70 percent, that means that by default 30 percent of your physical RAM will not be used by Qlik Sense. So, always keep in mind that these percentages are based on the physical amount of RAM available on the machine, and as soon as you deploy large machines (large meaning 128 GB and up) you have to redefine these parameters. Raise them so that you utilize almost all of the resources available on the machine, and you should be able to visualize that very easily in the Log Analysis app by going to the Engine Load sheet and inserting reference lines based on where your current working sets are. Of course, the only real way to avoid a working set limit issue is to make sure that you have enough resources, and that the system is configured to utilize those resources. Raise the limit and allow the product to use as much RAM as it can without interfering with Windows operations, which is why you should never set these to something like 98 or 99: Windows needs RAM to operate by itself, and if we let Qlik Sense take all of it, it will break things.
If you've done that and you're still having performance issues, that means you need more resources.
Yeah. It makes sense.
Oh, so here is another interesting question about understanding what certain Qlik Repository Service (QRS) log messages say. The question says: we try to meet the recommendation for network and persistence that network latency should be less than 4 ms, but consistently in our logs we are seeing the QRS security management "retrieved privileges" in so many milliseconds. Could this be a Repository Service issue, or where would you suggest we investigate first? This is an info-level message that you are reporting, and it's simply telling you how long it took for the Repository Service to compute the result for that request. That doesn't mean that this is how long it took to talk to the Database and back, or how long it took for the request to travel from client to server; only how long it took for the Repository Service to look up the metadata, look up the security rules, and then return a result based on that. And I would say this coming back in 384 milliseconds is rather quick. It depends on how you've defined these security rules. If these security rules are super simple and you are still getting slow responses, we would definitely have to look at resource consumption. But if you want to know how these calls affect resource consumption on the Repository and Postgres side, go back to that Log Analysis app. Raise your Repository performance logs in the QMC to Debug level so that you get all of the performance information about how long each call took to execute, and try to establish some patterns. See if you have calls that take longer to execute than others, and where those are coming from: any specific apps, any specific users? All of these answers come from drilling down into the data via that app that I demoed.
Okay Mario, we have time for one last question.
Right. And I think this is an excellent one to end on. We talked a whole bunch here about Qlik Sense, but all of this also applies to QlikView environments. We are always looking at taking a step back and considering all of the resources that are at play in the ecosystem, not just the product itself. The question asks: is QlikView Server performance similar to how Qlik Sense handles resources? The answer is yes. The engine is exactly the same in both products. If you read that white paper, you will understand how it works in both QlikView and Qlik Sense. And the things that you should do to prepare for performance and optimization are exactly the same in both products. Excellent question.
Great. Well, thank you very much Mario!
Oh, it's been my pleasure Troy. That was it for me today. Thank you all for participating. Thank you all for showing up. Thank you Troy for helping me through this very very complicated topic. It's been a blast as always. And to our customers and partners, looking forward to seeing your questions and deeper dives into logs and performance on community.
Okay, great! Thank you everyone! We hope you enjoyed this session. Thank you to Mario for presenting. We appreciate getting experts like Mario to share with us. Here's our legal disclaimer and thank you once again. Have a great rest of your day. -
How to create a working ODBC Firebird connection using Qlik Data Gateway
Currently (March 2025) the ODBC connector package does not have a Qlik internal connector for Firebird. It is necessary to create a connection with a generic ODBC connection, following the steps in ODBC (via Direct Access gateway).
In this article, we provide suggestions on how to set the connection to import data and preview tables correctly.
- It is possible to download an ODBC driver to create a DSN connection from Download Firebird-ODBC version 3.0 (github)
- To make the connection work, share both the C:/ drive and the Program Files folder of the machine where the database is installed. Sharing only the drive itself (C:/) is insufficient.
- An example of the driver configuration:
- In the Qlik Data Connection, add a connection string in this format:
SYSTEM={Server IP};UID={dbAccount};PWD={db Password};DBNAME={Connection to file}.FDB;
Example:
-
Select Custom SQL syntax, choose " as Delimiters and fill SELECT statement template for Data Preview with:
SELECT ${COLUMN_LIST} FROM ${TABLE_NAME}
Example:
Environment
- Qlik Data Gateway
-
Qlik Talend and Camel 4: Migrating routes leveraging Camel RabbitMQ (from cMessa...
When migrating Camel 3 projects that use the camel-rabbitmq library to interact with RabbitMQ, several important changes in components, dependency structure, and connection configuration must be considered. The primary change involves shifting from the native camel-rabbitmq component to the spring-rabbitmq component, as well as updating the connection configuration approach.
This article outlines all the necessary steps to successfully migrate projects that use the cMessagingEndpoint component with RabbitMQ.
Changes in Talend Studio that were made
- A new component, cRabbitMQ, has been added. This component must now be used in all Talend Studio routes to produce or consume messages from RabbitMQ.
- Additionally, a new type of configuration has been introduced in the cMQConnectionFactory component: MQ Server = RabbitMQ. This configuration is used to set up connections to RabbitMQ for any cRabbitMQ component.
- In Camel 4, the camel-rabbitmq dependency has been removed and replaced by the dependency for the new spring-rabbitmq component. This dependency is automatically added when the new component is used.
How to Migrate from cMessagingEndpoint with camel-rabbitmq to the New cRabbitMQ Component
- The first step in the migration process is to replace the cMessagingEndpoint component (Fig. 1) with the new cRabbitMQ component (Fig. 2) in your project.
The cRabbitMQ component has two base parameters: Exchange Name and Connection Factory. - The URI is now constructed based on the Connection Factory properties and the values provided in the Advanced Settings of cRabbitMQ. Any additional options you need can be added in Advanced Settings.
Now generated URI for the cRabbitMQ component will start from “spring-rabbitmq:” instead of “rabbitmq:” that it was in cMessagingEndpoint.Figure 1 - Old approach that used cMessagingEndpoint
Figure 2 - New approach that uses cRabbitMQ and cMQConnectionFactory components
- Since cRabbitMQ uses the cMQConnectionFactory component for connection setup, you will also need to add this component. In its settings, specify the MQ Server type as RabbitMQ, and provide your connection parameters: Host name, Port, Virtual host, as well as authentication parameters or SSL settings if necessary.
Figure 3 - Example of cMQConnectionFactory settings
- With the transition to the spring-rabbitmq component in Camel 4, there may be some changes in the format and support for certain parameters. For up-to-date information on the parameters supported in Camel 4, it is recommended to refer to the official Spring RabbitMQ component documentation: https://camel.apache.org/components/4.8.x/spring-rabbitmq-component.html
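To illustrate the URI change described above, a Camel 3 endpoint URI generated by cMessagingEndpoint and its Camel 4 counterpart generated by cRabbitMQ might look like the following (the exchange, queue, routing key, and host values are hypothetical placeholders):

```
Camel 3 (camel-rabbitmq via cMessagingEndpoint):
  rabbitmq:myExchange?hostname=rabbit.example.com&portNumber=5672&queue=myQueue

Camel 4 (spring-rabbitmq via cRabbitMQ + cMQConnectionFactory):
  spring-rabbitmq:myExchange?queues=myQueue&routingKey=myKey
```

Note that in Camel 4 the connection details (host, port, virtual host, credentials) no longer appear in the URI; they come from the cMQConnectionFactory configuration instead.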
Related Content
- The upgrade to Java 17 and Apache Camel 4
- Qlik Talend and your Move to Camel 4: What you need to know
-
Talend Studio: Error: Could not find or load main class when executing job with ...
The following error occurs: "Could not find or load main class" when executing a job with the tDBConnection component. Creating the connection through Metadata generates the below error:
Connection failure. You must change the Database Settings.
java.lang.RuntimeException: java.lang.ClassNotFoundException: oracle.jdbc.OracleDriver
The issue occurred when connecting to an Oracle 19c database. However, the solution may work for other database issues.
Resolution
- Download the correct jar file ojdbc8-19.3.0.0.jar (https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc8/19.3.0.0)
- Exit out of Talend Studio
- Place the downloaded jar file into \studio\configuration\.m2\repository\com\oracle\ojdbc\ojdbc8\19.3.0.0
- Launch Talend Studio > Setup Metadata > DBConnection or set in the tDBConnection component
- Execute job
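Assuming a default Talend Studio installation (the installation directory shown here is a hypothetical placeholder), the repository folder from the steps above should contain the jar as follows:

```
<Talend Studio install dir>\studio\configuration\.m2\repository\
    com\oracle\ojdbc\ojdbc8\19.3.0.0\
        ojdbc8-19.3.0.0.jar
```

Studio resolves the driver from this embedded Maven repository at job execution time, so the folder structure must match the Maven coordinates exactly.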
-
Qlik Talend Product: Talend Administration Center License Token Will not Automat...
As the License Token is only valid for 90 days after it is created, it is renewed automatically by Talend Administration Center, unless Talend Administration Center is not connected or cannot access the Talend Token Site.
You may experience a situation where the License Token does not automatically renew in Talend Administration Center.
Resolution
- For the Talend Administration Center License Token to automatically renew, the server must have access to the Internet. Please check whether you experienced any Internet connection issues during that time.
- Please check if you have any proxy settings that may block the connection to the license token validation server (https://www.talend.com/api/get_tis_validation_token_form.php) and please reference the following: Proxy and firewall allowlist information
Port 443 is the port to access the license token validation server.
- Please feel free to contact our Customer Support staff to check the expiration date of your Talend Administration Center License Token from here: Support.
- If you are using a patch earlier than TPS-5612, it is advised to move to TPS-5612 or later.
- There has been a fix for the "License Token will expire in x days" alert message in the TPS-5612 patch. The alert message is shown starting 10 days before the License Token expiration date, and the License Token will try to renew automatically once every day during those 10 days. However, if there are Internet connectivity issues during that time, the License will not successfully renew automatically.
For the License Token to renew automatically, a feature was implemented so that, upon failure, the license tries to renew again each day until it succeeds. Please see the following document for details: TPS-5612 (cumulative patch).
- You can always check the technicalvariable table in the Talend Administration Center database. The value of datetokencheck is the date of the Talend Administration Center License Token renewal check (tracked on a weekly basis). The value is calculated as: Year * 1000 + Month * 10 + Week of Month. Month and Week of Month are counted from 0: January is 0 and December is 11; the first week is 0 and the fifth week is 4.
For example, a datetokencheck value of 2025012 means the Year is 2025, the Month is February (index 1), and the Week of Month is the 3rd week (index 2).
- If you already own a license that has been reactivated by Talend after renewal, you may have to validate it manually from the Talend Administration Center web application: How to validate a license after renewal.
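The datetokencheck calculation above can be sketched in Python (the function and variable names are illustrative, not part of the product):

```python
def date_token_check(year: int, month_index: int, week_index: int) -> int:
    """Compute the TAC datetokencheck value.

    month_index: 0 = January ... 11 = December
    week_index:  0 = first week ... 4 = fifth week
    """
    return year * 1000 + month_index * 10 + week_index

# February (index 1), 3rd week (index 2) of 2025:
print(date_token_check(2025, 1, 2))  # → 2025012
```

This reproduces the worked example: 2025 * 1000 + 1 * 10 + 2 = 2025012.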
Cause
There are two potential causes:
- Talend Administration Center has experienced a connectivity problem with the License Token server.
- The License Token has not yet expired.
Related Content
For more information about how to clear Talend Administration Center Cache, please refer to this official article:
How-to-clear-the-Talend-Administration-Center-TAC-cache
Environment
-
How to count sessions in Qlik Sense
How are sessions counted in Qlik Sense?
The following are examples of how sessions are counted within Qlik Sense.
- One user opens Qlik Sense Hub with One browser on One machine = 1 session
- One user opens Qlik Sense Hub with One browser but Multiple tabs on One machine = 1 session
- One user opens Qlik Sense Hub with Two different browsers on One machine = 2 sessions
- One user opens Qlik Sense Hub with One browser, then closes the browser and reopens it = 2 sessions
- One user opens Qlik Sense Hub with One browser on Two different machines = 2 sessions
- One user opens Qlik Sense Hub and two Apps in One browser (two different tabs) and on a mobile device = 2 sessions
- One user opens Qlik Sense Hub from Two virtual proxies with One browser on One machine = 2 sessions
- One user opens Qlik Sense Management Console (QMC) with One browser on One machine = 1 session
- One user opens a Qlik Sense Mashup using Two Apps hosted from the same proxy = 1 session
Sessions are terminated after the Session timeout currently configured in the Qlik Sense Proxy.
If the Qlik Sense Engine or Proxy is terminated or crashes, sessions end right away. Once the maximum number of parallel user connections (5) is reached, this is documented in the AuditSecurity_Repository log. To identify whether this is the issue, review the relevant log and how the user is interacting with the system.
The log is stored in:
C:\Programdata\Qlik\Sense\Log\Repository\Audit\AuditSecurity_Repository.txt
The related message reads:
Access was denied for User: 'Domain\USER', with AccessID '264ff070-6306-4f1b-85db-21a8468939b5', SessionID: 'e3cd957b-a501-4bec-a3f8-d35170a73efa', SessionCount: '5', Hostname: '::1', OperationType: 'UsageDenied'
Related content:
Troubleshoot too many sessions active in parallel
Qlik Sense April 2018 and later - Service account getting "You cannot access Qlik Sense because you have no access pass"