Updated November 14th 15:00 CET: Download link.
Updated November 15th 15:40 CET: Release Notes.
Hello Qlik Cloud Administrators,
The current version of Qlik Data Transfer (November 2022) will expire on the 28th of November 2024.
Your log files may already show the following information:
ABOUT TO EXPIRE: This engine is about to expire. Please upgrade to a newer version! Expiry date: 2024/11/28
Qlik Data Transfer will stop functioning after this date.
A new version of Qlik Data Transfer has been released (14th of November), which will guarantee continued functionality and support until its end-of-support date. The November 2024 release is available on the Download page and its release notes can be accessed in Qlik DataTransfer Release Notes - November 2024.
Whenever possible, Qlik recommends using Qlik Data Gateway - Direct Access to load data from on-premises sources into a Qlik Cloud tenant. The supported databases and the Generic ODBC Connector Package, together with the upcoming support for loading on-premises files, can allow you to decommission Qlik DataTransfer servers (and potentially repurpose them for gateways).
For more information, see Qlik Data Gateway - Direct Access.
Thank you for choosing Qlik,
Qlik Support
Talend Studio is used to visually design and build the job which is published to a repository hosted in the Qlik Talend Cloud. Talend Management Console is used to manage, schedule, configure, and monitor Tasks which run the Studio jobs on Remote Engines.
But scheduling is not the only mechanism for running Tasks. Sometimes it is desirable to trigger Tasks in response to an external event. For example, a Task might be invoked whenever a file is dropped in a folder or perhaps in an S3 bucket. Or maybe the Task needs to be triggered as part of a larger workflow in another application.
In these cases, the TMC API can be used to flexibly invoke the Task. The TMC API also has the advantage that it can pass parameters from the REST API call as context variables to the underlying job represented by the Task configuration. Using the TMC API is more flexible than a scheduled Task, which must run a fixed configuration on a fixed schedule.
The TMC API is quite powerful, and with it you can automate any task that you can do interactively via the TMC in the browser. There are some excellent examples in the TMC API Use Cases documentation. A very important use case is Using a Service Account to Run Tasks. The example in the documentation assumes that you have a Service Account token and that you have the Task Id of the Task you want to execute. In practice you need to make additional API calls to generate a Service Account Token and to get the Task Id given the human readable Task Name.
In this blog post we will explore the basics of the TMC API using the Talend API Tester. The Talend API Tester is a Chrome plugin that allows you to create and manage Requests against REST endpoints that have an OpenAPI Specification v3 (OAS v3) contract. This was previously known as a Swagger contract. OAS v3 is a broadly accepted standard for expressing service contracts and is comparable to, but much simpler than, the older WSDL for SOAP-based service consumption. The Talend API Tester is included as part of the Talend Cloud and is accessible from the same Talend Cloud UI.
We will make an initial API call to retrieve the Environment and Workspace Ids. Then we will make a second API call to get the Task Id given the Task name. Finally, we will invoke the Task using its Task Id. Initially we will authenticate using a User Token.
We will then extend this example to make some oAuth2 calls to retrieve a Service Account Token which will be used instead of the User Token.
In a future blog post we will apply this technique to orchestrate multiple job Tasks from other Tasks to scale jobs horizontally.
You will need an existing Environment and a Workspace for which you have Execute and View permissions. If you are creating a new Job which you will publish to the Workspace, you will also need the Publish permission. To be granted these permissions, your user will need the Operator role. If you do not have this role, ask your administrator to assign it to you.
There must be a Remote Engine associated with the sample Workspace or Environment.
You do not need Studio permissions if a job has already been published as a Task to TMC. But if you are creating your own job in Studio then you will need the Integration Developer role.
Rather than using your personal userid and password, you should create a Personal Access Token.
A Service Account can be created by your Administrator or other privileged user. If you are doing the Service Account examples, verify that your Service Account has access to the target Workspace. It needs the same Execute and View permissions on the target Workspace mentioned earlier.
Below is a screenshot of how your administrator can add Workspace permissions for your Service Account. Initially the Service Accounts tab in the right-hand pane may be empty. Clicking the Add Permissions button will allow the administrator to select the Service Account and select the appropriate Execute and View permissions.
You will also need a sample job which has been published from Studio and configured as a Task in TMC that runs in the target Workspace.
In order to run the examples, you will need the API Tester role. This will allow you to launch the API Tester from the TMC.
The TMC API Reference is available at https://api.talend.com/apis/. We will be using the Orchestration and Processing APIs.
Click on the Orchestration API. It takes you to the OAS v3 UI representation. Scroll down and inspect the Workspaces -> List Workspaces operation. It is a simple GET operation. Note the Query parameters that allow you to specify filter criteria. It is self-explanatory, with name referring to the Workspace name and environment.name referring to the Environment to which the Workspace belongs.
Click on the Try in API Tester dropdown and select the region in which your TMC is located. Now click on the Try in API Tester link.
You will be taken to the API Tester where a new API Tester Project called Orchestration 2021-03 will be created. There is a folder for each section of the API in the left-hand pane.
Expand the Workspaces section folder and you will see four operations including the List Workspaces operation. Select the List Workspaces operation and the right-hand pane will display a form for populating the request parameters based on the OAS v3 specification.
You can modify the input parameters including query parameters, headers, and the body to submit individual requests directly to the API. However, we are going to create a series of requests as a Scenario.
Before we can create our new Scenario, we need to create a separate Project to store it in. We will be using operations from multiple TMC Services, so it does not make sense to store our Scenario in the project of an individual Service.
Click on the ellipsis context menu of the root My Drive folder and select Add a Project.
A new project with an empty Scenario 1 will be created.
Select Scenario 1 and rename it to Run Task by Name.
Now return to the Orchestration project and click on the ellipsis and select Extract to Scenario.
A dialog box is displayed for you to select which operations from the current project you want to include in the Scenario. Select both the Tasks -> Get Available Tasks and the Workspaces -> List Workspaces operations and click Extract. The screenshot below only shows one of these because of the scrollbar, but be sure to select both.
Another dialog window asks in which Project you want to create the new Scenario. Select the TMC API project you just created.
The dialog box will update to show the new path to the selected project. Now select the Run Task by Name scenario.
The path is updated. Finally click Save.
The Get Available Tasks and the List Workspaces operations are copied into the new Run Task by Name scenario folder.
We need one more API operation for our Scenario. Return to the API documentation page and click on the Processing service.
Select your TMC region endpoint and then click Try in API Tester. It takes you back to API Tester and informs you that the new Project is named Processing 2021-03.
Open the Processing 2021-03 project and click the ellipsis next to the Task Executions Service and select Extract to scenario.
Since we selected just the Service rather than the whole Project we get a smaller list of Operations to export. Select the Execute Task operation and click Extract To.
Navigate to the TMC API project and select the Run Task by Name scenario as before and click Save.
The Execute Task operation is added to the Scenario. Select the Run Task by Name scenario in the left-hand pane and then click the Scenarios tab at top to switch to a more detailed perspective.
All three Operation Requests were added to the scenario, but they are not in the correct order. Use the up and down arrows on the right to change the order of the requests. The order you want is: List Workspaces, Get Available Tasks, Execute Task.
Your Scenario is mostly ready but we need to populate the requests with details for authentication and specific parameters for your sample job.
When you imported your Orchestration and Processing Projects a default Environment was created for each Project. The Environment can be used to store common key-value pair configurations. The only such environment variable created by default was the BaseUrl property which points to the regional API endpoint, e.g. https://api.us.cloud.talend.com for the US region. But we will be adding a few more environment variables for things like your authentication token.
First, let’s copy our API project Environment into new Environment for our TMC API project. Select the Run Task by Name scenario if it is not already selected in the left-hand pane and click on the pencil icon in the upper right representing the Environment settings.
You are taken to the Environments editor dialog window. Click on Add an Environment.
Enter TMC API as the name for the new environment to match the name of the Project you created. Also, check the box marked "Copy variables from" and select one of the environments created for the API projects you just imported, either Orchestration 2021-03 or Processing 2021-03. Click the Create button to initialize your new environment with the same settings as those projects' environments.
Your new environment is displayed and not surprisingly it starts off identical to the old environment. In addition to the BaseUrl configuration we are going to add the user token created earlier. We are going to create this as a Private variable. This is important because it means it is private to you. Other users using the Scenario will need to enter their own token.
Name the new environment variable “tmc_token” and paste in the user token you created earlier. Then click Close.
We are ready to test each of the individual requests in our Scenario. As we progress, we will use the output of previous API calls as inputs to subsequent API calls.
Start by testing the List Workspaces operation. Select the List Workspaces operation from the Run Task by Name scenario in the left-hand pane to edit it. In the right-hand pane add a request Header named Authorization.
Set the value of the Authorization Header to "Bearer " (note the space at the end of that string). After entering the Bearer prefix, click the little wand icon to the right of the edit box.
The wand icon opens the Expression Builder dialog box which allows you to reference environment variables or the response body of previous operations in the scenario.
Click on the tmc_token in the Environment variables section of the Expression Builder.
The Expression built for you is displayed in the lower section as well as a concrete evaluation Preview of the Expression in the current context. In the diagram the preview has been redacted since it is a sensitive value. Click Insert to return to the request editor.
In this case we just want an environment variable. We could have just manually entered
Bearer ${"tmc_token"}
But we wanted to use the Expression Builder with this simple example so we will be ready for the more complex steps later.
The result of the Expression Builder has been inserted into the value of the Authorization Header.
Note that although there is a Query Parameter it has not been enabled by the checkbox. As a result, the current request will return all Workspaces in all Environments.
Rather than overwhelm ourselves with a potentially large result set, let’s check the box to enable the Query Parameter and modify the filter criteria to reference the sample Workspace and Environment we created in the Setup section. In the screenshot below I have used eost-dev as the Environment name and eost-lob1-dev as the Workspace name. Yours will be different. Note that there is a semi-colon separator between the two criteria in the query parameter.
name==eost-lob1-dev;environment.name==eost-dev
The documentation and hence the generated operation may have only a comma so you will need to modify it.
Finally, run the request by clicking on the green play arrow.
The results are displayed below the green arrow so you may need to scroll down.
The request output is displayed in Pretty format with collapsible elements. It can also be displayed in raw json format as shown below.
[
  {
    "id": "66ba3b68aea0c50341661f68",
    "name": "eost-lob1-dev",
    "owner": "eost",
    "type": "custom",
    "environment": {
      "id": "66ba3b67aea0c50341661f67",
      "name": "eost-dev",
      "maxCloudContainers": 0,
      "default": false
    }
  }
]
The result of the List Workspaces operation is an array of one element which is the eost-lob1-dev Workspace. It has a child element which is the eost-dev Environment. For both the Workspace and Environment there is an id property in addition to the human readable name. We will need the Workspace id returned from this operation as input to the next step.
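If you later want to make the same call outside the API Tester, a minimal Python sketch using the requests library is shown below. Treat the /orchestration/workspaces path and the query parameter name as assumptions to verify against the Orchestration API reference for your region and API version.

import requests

BASE_URL = "https://api.us.cloud.talend.com"  # the BaseUrl environment variable
TMC_TOKEN = "<personal access token>"         # same value as the tmc_token variable

# Filter by Workspace name and Environment name, as in the API Tester request.
resp = requests.get(
    f"{BASE_URL}/orchestration/workspaces",
    headers={"Authorization": f"Bearer {TMC_TOKEN}"},
    params={"query": "name==eost-lob1-dev;environment.name==eost-dev"},
)
resp.raise_for_status()
workspaces = resp.json()                       # an array of matching Workspaces
workspace_id = workspaces[0]["id"]
environment_id = workspaces[0]["environment"]["id"]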
Select the Get Available Tasks operation from the Run Task by Name scenario in the left-hand pane to edit it. In the right-hand pane add a request Header named Authorization and set its value to "Bearer ${"tmc_token"}" just like the previous request.
There are a lot of Query Parameter options for the Get Available Tasks operation. Check the boxes next to the environmentId, workspaceId, and name query parameters to enable them. The screenshot below shows the query parameters already populated, but they will initially be empty.
Click in the environmentId query parameter text field and then click the wand icon next to it to build the Expression to return the environmentId. The Expression Builder dialog window will be displayed. Use it to drill into the previous List Workspaces result to retrieve the Environment id.
On the left-hand pane of the Expression Builder there are different sections for Projects, Environment variables, and Global Methods. The Projects section has the title "Repository My Drive". Find the TMC API project and click on it. In the right-hand pane you will see the expanded results, which include the Run Task by Name scenario that you created. Click on it and you will see the different Requests you have created within the scenario. Click on the previous step, List Workspaces. Now drill into response->body->0. The 0 is the first element of the array that was returned as the response. Continue drilling into the environment->id elements. Since id is a string, an additional drilldown is possible to further parse the string, but we do not need it.
The full drill down as well as the final expression are shown below.
Repeat this process for the WorkspaceId field. The drill down and final expression are shown below.
For the name field, just specify the human readable name of your sample Task. In the screenshots this has been tmc_sample_job but you can use any job you wish.
Finally, run the request by clicking on the green play arrow.
The request output is displayed in Pretty format with collapsible elements. It can also be displayed in raw json format as shown below.
{
  "items": [
    {
      "executable": "6723bd08708ce135e8d12bf3",
      "name": "tmc_sample_job",
      "workspace": {
        "id": "66ba3b68aea0c50341661f68",
        "name": "eost-lob1-dev",
        "owner": "eost",
        "type": "custom",
        "environment": {
          "id": "66ba3b67aea0c50341661f67",
          "name": "eost-dev",
          "default": false
        }
      },
      "artifactId": "6723bd080bd9b9551259eccc",
      "runtime": {
        "type": "REMOTE_ENGINE",
        "id": "66cccd70a0ca60412e39e758",
        "runProfileId": ""
      }
    }
  ],
  "limit": 100,
  "offset": 0,
  "total": 1
}
The result of the Get Available Tasks operation shows tmc_sample_job. In addition to the human readable name property, the Task has a unique key which is the executable property. The Task has links to the Environment and Workspace as well as the underlying Job (Artifact). For both Workspaces and Environments there is an id in addition to the human readable name. We will need the value of the executable property returned from this operation as input in the next step.
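Continuing the Python sketch from the List Workspaces step, the equivalent scripted call is shown below; again, the /orchestration/executables/tasks path is an assumption to verify against the Orchestration API reference.

# Look up the Task by name within the Workspace and Environment found earlier.
resp = requests.get(
    f"{BASE_URL}/orchestration/executables/tasks",
    headers={"Authorization": f"Bearer {TMC_TOKEN}"},
    params={
        "environmentId": environment_id,  # from the List Workspaces response
        "workspaceId": workspace_id,
        "name": "tmc_sample_job",
    },
)
resp.raise_for_status()
executable = resp.json()["items"][0]["executable"]  # unique key used to run the Task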
Select the Execute Task operation from the Run Task by Name scenario in the left-hand pane to edit it. In the right-hand pane add a request Header named Authorization and set its value to "Bearer ${"tmc_token"}" just like the previous request.
Unlike the previous List Workspaces and Get Available Tasks which were GET operations, Execute Task is a POST operation. It is creating a new Execution which is being appended to the list of active Executions.
Since it is a POST operation, we need to look at the Execute Task API documentation to understand the schema of the request body. The request body is an ExecutableTask. The ExecutableTask object has four properties: executable, parameters, logLevel, and timeout. We will only use the first three properties for our example.
In the Body of the request paste in the following json template.
{
  "executable": "",
  "parameters": {},
  "logLevel": "INFO"
}
We will use the Expression Builder to populate the executable property with the value returned from the previous Get Available Tasks Operation. First position the cursor between the two quotes for the executable value. Then click the wand icon.
Now navigate to the previous Get Available Tasks in the Expression Builder and drill into the desired executable property of the result. The drilldown navigation and the resulting expression are shown below.
After clicking Insert in the Expression Builder dialog window, the resulting request should look like this:
{
  "executable": "${"TMC API"."Run Task by Name"."Get available Tasks"."response"."body"."items"."0"."executable"}",
  "parameters": {},
  "logLevel": "INFO"
}
Now it is time to enter parameters. Parameters depend on the Context Variables used by your job. Any Context Variable default values you have in your job are overridden by the Task configuration properties. Those Task configuration properties can in turn be overridden by the parameters specified in your API call. So if you are happy with the already defined defaults in either the Task or the Job, you can omit the parameters property of the Request.
Parameters for your sample job will differ, but the tmc_sample_job example has just a single Context Variable called message which is a String. The request is shown below.
{
  "executable": "${"TMC API"."Run Task by Name"."Get available Tasks"."response"."body"."items"."0"."executable"}",
  "parameters": {
    "message": "Greeting Earthling"
  },
  "logLevel": "INFO"
}
Run the request by clicking on the green play arrow. The result of the ExecuteTask operation is only a link to the ExecutableTask that was created.
{
  "executionId": "ab9c894d-3664-4a71-bce6-066f055937bc"
}
The operation provides an asynchronous interface, so you can poll the status of the Task execution with the Get Task Execution Status operation, which uses the executionId as part of its path.
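For completeness, the scripted equivalent of the Execute Task call plus a simple status poll might look like the sketch below (continuing the earlier Python snippets). The /processing/executions paths follow the Processing API, and the exact shape of the status response should be checked against the Get Task Execution Status documentation.

import time

# POST a new execution, overriding the message context variable.
resp = requests.post(
    f"{BASE_URL}/processing/executions",
    headers={"Authorization": f"Bearer {TMC_TOKEN}"},
    json={
        "executable": executable,  # from Get Available Tasks
        "parameters": {"message": "Greeting Earthling"},
        "logLevel": "INFO",
    },
)
resp.raise_for_status()
execution_id = resp.json()["executionId"]

# The interface is asynchronous: poll the execution status until it completes.
for _ in range(30):
    status = requests.get(
        f"{BASE_URL}/processing/executions/{execution_id}",
        headers={"Authorization": f"Bearer {TMC_TOKEN}"},
    ).json()
    print(status)  # stop polling once the returned status indicates completion
    time.sleep(10)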
All three operations in the Run Task by Name scenario are now working and wired together. You can run the operations sequentially by clicking on the Scenario play button in the left-hand pane.
The API Tester will attempt to run each operation in sequence. New results for prior steps will be incorporated via the Expression Builder into subsequent steps. The outputs will be available in the individual operations to review.
Regular user accounts for humans use User Access Tokens, which are static. In contrast, Service Accounts must generate temporary access tokens based on the OAuth2 Client Credentials Flow. The temporary token itself can be generated by calls to the Get JWT Token operation of the TMC oAuth API. This example assumes that you already have a service account created with appropriate permissions to access your sample workspace as described in the Pre-requisites section.
From the TMC oAuth API, select the appropriate region for your Talend Cloud and then click "Try in API Tester".
A new project named oAuth 2021-03 is created. Get JWT Token is the only operation in the API. Click on the ellipsis next to the oAuth 2021-03 project and select "Extract to Scenario".
Select the Get JWT Token operation by clicking on the checkbox and then click Extract To.
Navigate to the Run Task by Name scenario as in the previous sections and click Save.
The Get JWT token operation request has been appended to the Run Task by Name scenario.
Since we will need our JWT Service Account token for the subsequent steps, click on the Run Task by Name scenario and click the pencil icon to edit the scenario.
Move the Get JWT token to be the first operation using the up-arrow keys.
Notice that when you imported the oAuth API into API Tester the Environment was changed. The environment is shown in the upper right. Change the environment to the same TMC API environment used for the other requests.
Now click on Edit Request for the Get JWT Token operation. Notice that the Authorization header for the request expects an Environment variable named PublicHeaderAuthorization.
As noted in the Generating a Service Account Token use case, the Authorization header needs to be the base64 representation of the service account id concatenated with a colon and then the service account secret. If you do not know the service account id you can look it up in the TMC from Users and Security -> Service Accounts as shown below.
You could also look it up programmatically using the Service Accounts API with the List Service Accounts operation.
In addition to the service account id, you need the service account secret. That was displayed when the service account was created, and you should have it stored in a secure place, e.g. a secrets manager. If you have lost access to the secret you will need to generate a new service account.
With the service account id and secret in hand you can create the PublicHeaderAuthorization environment variable. Click the pencil icon in the upper right corner to edit the TMC API Environment.
Add a new private environment variable called PublicHeaderAuthorization as shown below. Set it to the plaintext (not Base64) value of <service account id>:<service account secret>. Note the colon between the two values. We will format this in base64 in the next step.
Back in the Get JWT Token request editor, select the Authorization header.
Prefix it with the word “Basic”. It should now read:
Basic ${"PublicHeaderAuthorization"}
Now we need to transform this to Base64 format. Click anywhere within the quotes surrounding PublicHeaderAuthorization and click the wand icon to use the Expression Builder.
The Expression Builder displays the PublicHeaderAuthorization environment variable. In the Methods column select base64. The resulting expression is shown below, as well as a preview of your base64-formatted service-account-id:service-account-secret pair. Keep in mind that base64 is just an encoding, not encryption, so be sure that your environment variables are private and do not share even the base64 value. It is obfuscated, not encrypted.
The payload for the request is always the same. Substitute your API region (us, us-west, eu, ap, au) for the <env> placeholder below and place in the Body section of the form.
{
  "audience": "https://api.<env>.cloud.talend.com",
  "grant_type": "client_credentials"
}
Now click on the green play arrow to execute the operation. The result is an access token in json format as shown below.
{
  "access_token": "--- big long key redacted ---",
  "token_type": "Bearer",
  "expires_in": 1800
}
Note that the token will expire in 1800 seconds (30 minutes).
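The same token request can be scripted. The sketch below assumes the /security/oauth/token path for the Get JWT Token operation (verify it against the oAuth API reference); note that the requests auth argument produces exactly the base64 Basic header built above.

import requests

BASE_URL = "https://api.us.cloud.talend.com"
SA_ID = "<service account id>"
SA_SECRET = "<service account secret>"

# HTTP Basic auth base64-encodes "<id>:<secret>" for us; remember that
# base64 is only obfuscation, so keep the encoded value secret too.
resp = requests.post(
    f"{BASE_URL}/security/oauth/token",
    auth=(SA_ID, SA_SECRET),
    json={"audience": BASE_URL, "grant_type": "client_credentials"},
)
resp.raise_for_status()
access_token = resp.json()["access_token"]  # valid for expires_in seconds (1800)
# Use it on subsequent calls as: Authorization: Bearer <access_token>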
The new temporary Service Account token must be used for our subsequent queries. We will need to update the Authorization header of the List Workspaces, Get Available Tasks, and Execute Task operations.
Select the List Workspaces operation within the Run Task by Name scenario. Select the Authorization header, click within the quotes surrounding tmc_token and click the wand to open the Expression Builder.
The Expression Builder opens and displays the previously selected tmc_token environment variable. We need to change that. Select the TMC API project from the left-hand pane, and then drill into Run Task by Name->Get JWT Token->response->body->access_token as shown below.
After clicking the Insert button, the new Authorization property should read
Bearer ${"TMC API"."Run Task by Name"."Get JWT token"."response"."body"."access_token"}
Click the green play arrow to execute the List Workspaces to verify that the new Service Account based invocation works. If you get an authorization error, double check that you have your service-account:service-account-secret pair set correctly in the environment, and that your service account has correct permissions on the Workspace.
For the Get Available Tasks and the Execute Task operations of the Run Task by Name scenario you can just copy-paste the same Authorization expression from the List Workspaces Authorization shown above.
Execute those operations as well to confirm the end-to-end test.
All three operations in the Run Task by Name scenario are now working and wired together. You can run the operations sequentially by clicking on the Scenario play button in the left-hand panel.
The API Tester will attempt to run each operation in sequence. New results for prior steps will be incorporated via the Expression Builder into subsequent steps. The outputs will be available in the individual operations to review.
Hello Qlik Admins and Developers,
The next major Qlik Sense Enterprise on Windows release is scheduled for November 2024. The update will introduce changes that will have an impact on the following add-ons:
The changes affecting the add-ons are:
New versions of all affected add-ons will be available with the November 2024 release, and the associated Release Notes will provide detailed information on any improvements and changes.
Please plan your upgrade accordingly to prevent interruptions:
If you upgrade to Qlik Sense Enterprise on Windows November 2024, all listed add-ons must be upgraded as well.
Thank you for choosing Qlik,
Qlik Support
Version 5.5. Current as of: 19th November 2024
Qlik and Talend, a Qlik company, may from time to time use the following Qlik and Talend group companies and/or third parties (collectively, “Subprocessors”) to process personal data on customers’ behalf (“Customer Personal Data”) for purposes of providing Qlik and/or Talend Cloud, Support Services and/or Consulting Services.
Qlik and Talend have relevant data transfer agreements in place with the Subprocessors (including group companies) to enable the lawful and secure transfer of Customer Personal Data.
You can receive updates to this Subprocessor list by subscribing to this blog or by enabling RSS feed notifications.
Third party subprocessors for Qlik Cloud

| Third Party | Location of processing (e.g., tenant location) | Service Provided/Details of processing | Address of contracting party | Contact |
| --- | --- | --- | --- | --- |
| Amazon Web Services (AWS) | If EU region is chosen: Ireland (Republic of) & Paris, France (back-up); or Frankfurt, Germany & Milan, Italy (back-up); or London, UK & Spain (back-up). Qlik Anonymous Access: Stockholm, Sweden. Frankfurt, Germany (Blendr only). If US region is chosen: North Virginia, US & Ohio, US (back-up). If APAC region is chosen: Sydney, Australia & Melbourne, Australia (back-up); or Singapore & Seoul, South Korea (back-up); or Tokyo, Japan & Osaka, Japan (back-up); or Mumbai, India & Hyderabad, India (back-up). | Qlik Cloud is hosted through AWS | Amazon Web Services, Inc., 410 Terry Avenue North, Seattle, WA 98109-5210, U.S.A. | |
| MongoDB | If EU region is chosen: Ireland (Republic of) & Paris, France (back-up); or Frankfurt, Germany & Milan, Italy (back-up); or London, UK & Spain (back-up). Frankfurt, Germany (Blendr only). If US region is chosen: North Virginia, US & Ohio, US (back-up). Customer may select one of four APAC locations: Sydney, Australia & Melbourne, Australia (back-up); or Singapore & Seoul, South Korea (back-up); or Tokyo, Japan & Osaka, Japan (back-up); or Mumbai, India & Hyderabad, India (back-up). | Any data inputted into the Notes feature in Qlik Cloud | Mongo DB, Inc. | legal@mongodb.com |
Third party subprocessors for Qlik Support Services and/or Consulting Services

The vast majority of Qlik's support data that it processes on behalf of customers is stored in Germany (AWS). However, in order to resolve and facilitate the support case, such support data may also temporarily reside on the other systems/tools below.

| Third Party | Location of processing | Service Provided/Details of processing | Address of contracting party | Contact |
| --- | --- | --- | --- | --- |
| Amazon Web Services (AWS) | Germany | Support case management tools | Amazon Web Services, Inc., 410 Terry Avenue North, Seattle, WA 98109-5210, U.S.A. | |
| Salesforce | UK | Support case management tools | Salesforce UK Limited | DPO privacy@salesforce.com |
| Grazitti SearchUnify | United States | Support case management tools | Grazitti Interactive | DPO Dpo@grazitti.com |
| Microsoft | United States | Customer may send data through Office 365 | Microsoft Corporation | Chief Privacy Officer |
| Ada | Germany | Support Chatbot | Ada Support | Data Protection Officer |
| Persistent | India | R&D Support Services | 2055 Laurelwood Road | Privacy Officer |
| Atlassian (Jira Cloud) | Germany, Ireland (Back-up) | R&D support management tool | 350 Bush Street | privacy@atlassian.com |
| Altoros | United States | R&D Support Services | Altoros Americas, LLC | Data Protection Officer |
| Ingima | Israel | R&D Support Services | Ha-Khilazon St 3, Ramat Gan, Israel | Mickey Peleg |
| Galil | Israel | R&D Support Services | Galil Software and Technology Services Ltd., Industrial Park, Mount Precipice | info@galilsoftware.com |
Third party subprocessors for Qlik mobile device apps

| Third Party | Location of processing | Service Provided/Details of processing | Address of contracting party | Contact |
| --- | --- | --- | --- | --- |
| Google Firebase | United States | Push notifications | Google LLC | dpo-google@google.com |
Third party subprocessors for Talend Cloud

| Third Party | Location of processing (e.g., tenant location) | Service Provided/Details of Processing | Address of contracting party | Contact |
| --- | --- | --- | --- | --- |
| Amazon Web Services (AWS) | Talend Cloud: AMERICAS: Virginia, US & Oregon, US (backup). EMEA: Frankfurt, Germany & Ireland (Republic of) (backup). APAC: Tokyo, Japan & Singapore (backup); or Sydney, Australia & Singapore (backup). Stitch: AMERICAS: Virginia, US & Oregon, US (backup). EMEA: Frankfurt, Germany & Ireland (Republic of) (backup). | These Talend Cloud locations are hosted through AWS | Amazon Web Services, Inc. | aws-EU-privacy@amazon.com |
| Microsoft Azure | Virginia, United States; California (backup) | These Talend Cloud locations are hosted through Microsoft Azure | Microsoft Corporation | Microsoft Enterprise Service Privacy |
| MongoDB | See Talend Cloud locations above | | Mongo DB, Inc. | privacy@mongodb.com or 1-866-692-1371 |
Third party subprocessors for Talend Support Services and/or Consulting Services: In order to provide Support and/or Consulting Services, the following third party tools may be used.

| Sub-processor | Location of processing | Service Provided/Details of processing | Address of contracting party | Contact |
| --- | --- | --- | --- | --- |
| Atlassian | France, United States | Project management; support issue tracking | Atlassian Pty Ltd, 350 Bush Street, Floor 13 | |
| Atlassian (Jira Cloud) | Germany, Ireland (Back-up) | R&D support management tool | Atlassian Pty Ltd, 350 Bush Street, Floor 13 | |
| Microsoft | United States | Email provider, if the Customer sends Customer Personal Data through email | Microsoft Corporation | Microsoft Enterprise Service Privacy, Microsoft Corporation, 1 Microsoft Way, Redmond, Washington 98052 USA |
| Salesforce | United States | CRM; support case management | Salesforce UK Limited | |
Affiliate Subprocessors

These affiliates may provide services, such as Consulting or Support, depending on your location and agreement(s) with us. Our Support Services are predominantly performed in the customer's region: EMEA – France, Sweden, Spain, Israel; Americas – USA; APAC – Japan, Australia, India. Contact for all affiliates: DPO privacy@qlik.com.

| Subsidiary Affiliate | Location of processing | Address of contracting party |
| --- | --- | --- |
| QlikTech International AB, Talend Sweden AB | Sweden | Scheelevägen 26, 223 63 Lund, Sweden |
| QlikTech Nordic AB | Sweden | |
| QlikTech Latam AB | Sweden | |
| QlikTech Denmark ApS | Denmark | Dampfaergevej 27-29, 5th Floor, 2100 København Ø, Denmark |
| QlikTech Finland OY | Finland | Simonkatu 6 B, 5th Floor, FI-00100 Helsingfors, Finland |
| QlikTech France SARL, Talend SAS | France | 93 Ave Charles de Gaulle, 92200 Neuilly Sur Seine, France |
| QlikTech Iberica SL (Spain), Talend Spain, S.L. | Spain | "Blue Building", 3rd Floor, Avinguda Litoral nº 12-14, 08005 Barcelona, Spain |
| QlikTech Iberica SL (Portugal liaison office), Talend Sucursal Em Portugal | Portugal | |
| QlikTech GmbH, Talend Germany GmbH | Germany | Joseph-Wild-Str. 23, 81829 München, Germany |
| QlikTech GmbH (Austria branch) | Austria | Am Euro Platz 2, Gebäude G, A-1120 Wien, Austria |
| QlikTech GmbH (Swiss branch), Talend GmbH | Switzerland | c/o Küchler Treuhand, Brünigstrasse 25, CH-6055 Alpnach Dorf, Switzerland |
| QlikTech Italy S.r.l., Talend Italy S.r.l. | Italy | Piazzale Luigi Cadorna 4, 20123 Milano (MI) |
| Talend Limited | Ireland | c/o Crowleys DFK, 16/17 College Green, Dublin, D02 V078, Ireland |
| QlikTech Netherlands BV, Talend Netherlands B.V. | Netherlands | Evert van de Beekstraat 1-122 |
| QlikTech Netherlands BV (Belgian branch) | Belgium | Culliganlaan 2D |
| Blendr NV | Belgium | Bellevue Tower, Bellevue 5, 4th Floor, Ledeberg, 9050 Ghent, Belgium |
| QlikTech UK Limited, Talend Ltd. | United Kingdom | 1020 Eskdale Road, Winnersh, Wokingham, RG41 5TS, United Kingdom |
| Qlik Analytics (ISR) Ltd. | Israel | 1 Atir Yeda St, Building 2, 7th floor, 4464301 Kfar Saba, Israel |
| QlikTech International Markets AB (DMCC Branch) | United Arab Emirates | |
| QlikTech Inc., Talend, Inc., Talend USA, Inc. | United States | 211 South Gulph Road, Suite 500, King of Prussia, Pennsylvania 19406 |
| QlikTech Corporation (Canada), Talend | Canada | 1133 Melville Street, Suite 3500, The Stack, Vancouver, BC V6E 4E5, Canada |
| QlikTech México S. de R.L. de C.V. | Mexico | c/o IT&CS International Tax and Consulting Service, San Borja 1208 Int. 8, Col. Narvate Poniente, Alc Benito Juarez, 03020 Ciudad de Mexico, Mexico |
| QlikTech Brasil Comercialização de Software Ltda. | Brazil | 51 – 2o andar, conjunto 201, Vila Olímpia, São Paulo, SP, Brazil |
| QlikTech Japan K.K., Talend KK | Japan | Toranomon Global Square 13F, 1-3-1 Toranomon, Minato-ku, Tokyo 105-0001, Japan |
| QlikTech Singapore Pte. Ltd., Talend Singapore Pte. Ltd. | Singapore | 9 Temasek Boulevard, Suntec Tower Two, Unit 27-01/03, Singapore 038989 |
| QlikTech Hong Kong Limited | Hong Kong | Unit 19 E, Neich Tower, 128 Glouchester Road, Wanchai, Hong Kong |
| Qlik Technology (Beijing) Limited Liability Company, Talend China Beijing Technology Co. Ltd. | China | 51-52, 26F, Fortune Financial Center, No. 5 Dongsan Huanzhong Road, Chaoyang District, Beijing 100020, China |
| QlikTech India Private Limited, Talend Data Integration Services Private Limited | India | "Kalyani Solitaire", Ground Floor & First Floor, 165/2 Krishna Raju Layout, Doraisanipalya, Off Bannerghatta Road, JP Nagar, Bangalore 560076 |
| QlikTech Australia Pty Ltd, Talend Australia Pty Ltd. | Australia | McBurney & Partners, Level 10, 68 Pitt Street, Sydney NSW 2000, Australia |
| QlikTech New Zealand Limited | New Zealand | Kensington Swan, 40 Bowen Street, Wellington 6011, New Zealand |
In addition to the above, other professional service providers may be engaged to provide you with professional services related to the implementation of your particular Qlik and/or Talend offerings; please contact your Qlik account manager or refer to your SOW on whether these apply to your engagement.
Qlik and Talend reserve the right to amend its products and services from time to time. For more information, please see www.qlik.com/us/trust/privacy and/or https://www.talend.com/privacy/.
Don't miss our next Q&A with Qlik! Pull up a chair and chat with our panel of experts to help you get the most out of your Qlik experience.
Not able to make it live? Don't worry! All registrants will get a copy of the recording the following week.
See you there!
Qlik Global Support
In today's data-driven world, organizations are increasingly leveraging Machine Learning (ML) to extract valuable insights from their data. This powerful technology enables businesses to make data-driven predictions, classify data, and uncover hidden patterns. As a result, ML can provide a significant competitive advantage, improve operational efficiency, and enhance customer experiences.
However, implementing ML can be challenging. Two key obstacles often arise: poor data source quality and inefficient ML model development. Overcoming these problems requires providing accurate and complete source data for effective ML model training, as well as reducing the time-consuming and resource-intensive effort needed to develop and deploy these models.
To overcome these challenges, organizations need a streamlined approach to integrate high-quality data with ML capabilities. Qlik simplifies this process, making it easier to build data pipelines that leverage the power of ML to drive better business outcomes.
Introducing Qlik AutoML, Qlik Application Automation and Qlik Talend Cloud
Qlik AutoML automates machine learning by using classification or regression models to find patterns in data that can be used for predictions. Qlik AutoML trains and tests your machine learning experiments, making them ready for deployment. These machine learning models can be integrated within Qlik Sense applications, Qlik Automation workflows and external applications.
When configuring Qlik AutoML experiments, you select the target and features used within the predictive model. Qlik AutoML automatically preprocesses, trains, and optimizes the model, applying automatic feature engineering based on your choices. Once the experiment is complete, your ML models can be deployed through APIs for real-time predictions. Qlik AutoML facilitates an iterative workflow by enabling you to tune your model parameters for better optimization.
Qlik Application Automation is a powerful tool that enables you to automate data, analytics and ML processes without writing any code. It provides a visual interface where you can easily create and manage automations, consisting of a sequence of actions and event triggers. Automations are a simple way to solve the data consumption use case for ML models where users can choose from templates or build their own workflows by assembling predefined connectors and logical blocks.
Bringing it All Together
Qlik Cloud capabilities allow you to use automation to create a workflow that acquires data from any supported source into a target with real-time predictions and load data visualizations within a dashboard application.
We will first demonstrate using a Titanic passenger survival data set to create and deploy a classification model utilizing AutoML. A data pipeline will be used to ingest and transform source data that can be used for predicting whether a passenger would have survived the Titanic. An application automation will be created to invoke the classification model in real time and update the resultant passenger survivor data in a Qlik Sense application.
The Qlik platform example shows how predictions can be provided to a data integration pipeline with the use of automation. (Using the Titanic dataset from Kaggle, we can build a Qlik data pipeline that predicts which passengers survived the Titanic.)
Setting up and running Qlik Talend Cloud Services
Build a classification prediction model with a Qlik AutoML experiment using the Titanic data set. (Choose the deployment model based on F1 score.)
Deploy the generated CatBoost Classification model to predict survivors in our workflow using a Real-time prediction API URL.
Qlik Talend Cloud Data Integration pipeline used to load source data from MySQL and transform the data for model predictions into a Snowflake target.
Create a Qlik Automation to invoke the Qlik ML prediction model on the Titanic Transformed dataset created in the QTC Data pipeline.
Qlik AutoML connector using the deployed Titanic classification API, with features shown below.
Qlik Application Automation workflow sequence with embedded processor blocks.
Qlik Sense Application Dashboard loaded with Real-time prediction data.
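To give a concrete sense of the deployment step, here is a hedged Python sketch of calling a deployed model's real-time prediction endpoint. The URL, authentication, request schema, and feature names below are all illustrative placeholders; copy the actual values from your deployment's real-time prediction details in Qlik Cloud.

import requests

# Placeholders: take the real URL and key from your AutoML deployment.
PREDICTION_URL = "https://<tenant>.<region>.qlikcloud.com/<real-time-prediction-path>"
API_KEY = "<qlik cloud api key>"

# One passenger row using illustrative Titanic feature names.
payload = {"rows": [{"Pclass": 3, "Sex": "male", "Age": 22, "Fare": 7.25}]}

resp = requests.post(
    PREDICTION_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json())  # predicted Survived label (and probabilities, if returned)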
Conclusion
Qlik Talend Cloud delivers real-time prediction capabilities by adding machine learning to your data pipelines. The Qlik Application Automation features make it easier to integrate the services Qlik Talend Cloud provides for data integration and analytics. The platform reduces the complexity of deploying ML models within your data pipeline and integrating the results into your analytics applications. Organizations can quickly adopt the power of machine learning within their enterprise data architecture with the Qlik Talend Cloud platform.
DBeaver is a SQL editor available in both community and enterprise editions. It is available as a web application or a traditional application. The DBeaver application is built as an Eclipse Rich Client Platform (RCP) and also supports a DBeaver Plugin on the Eclipse Marketplace.
It is a great SQL workbench that supports navigating schemas and database metadata, viewing data and DDL, and it includes a very good SQL editor. Integrations with Git for version control are available, and there is even a prompt-to-SQL AI plugin.
Since DBeaver is available as an Eclipse Plugin, it is compatible with Eclipse RCP applications like Talend Studio. The only challenge is that Talend Studio is based on Eclipse 2023-12 (4.30). When Studio is upgraded to a more recent version of Eclipse this problem will go away.
Fortunately, the actual problem is limited to only a single library. DBeaver expects a more recent org.eclipse.text plugin. This is very easy to add and should have minimal risk to the rest of the existing Talend product.
This document provides detailed analysis of the problem as well as simple steps to fix it so you can have DBeaver running in Talend Studio today.
Start by installing the DBeaver Eclipse plugin from the update site in Talend Studio. Although DBeaver is available in the Eclipse Marketplace, Talend Studio has removed the Marketplace plugin in order to minimize the size of Studio. So you must use the Feature Manager and then click the “Go to the wizard” link. This is equivalent to selecting Help->Install New Software in regular Eclipse installations.
Add the DBeaver update site to the list of repositories. The DBeaver update site url is https://dbeaver.io/update/ce/latest/. If you are running Talend Studio in a restricted location without access to the internet, see the appendix for instructions on creating a local copy of the dbeaver update site as a zip file.
Select the DBeaver IDE feature from the list of features offered on the DBeaver update site.
Review the list of features that will be installed and click Next.
The plugin jars are signed. Review and trust the cert authorities. There will be two such dialog windows.
Click through the remaining dialog screens to accept the licenses and install DBeaver into Studio.
Accept the dialog option to restart Studio. At this point DBeaver has been installed in Studio.
Although the DBeaver Eclipse Marketplace web page lists DBeaver as compatible with many older versions of Eclipse, it is in fact not compatible with the 2023-12 (4.30) version of Eclipse that Talend Studio is built upon. But most of DBeaver will still work with Studio, and it is a simple matter to fix the one compatibility bug.
Before we fix it, let's test DBeaver in Studio. Most of the functionality is working. The Studio Window->Perspective menu is more limited than in Eclipse and does not offer an Open Perspective option for other perspectives. So you must use the somewhat obscure Open Perspective button in the upper right of the toolbar.
Select the DBeaver Perspective.
Create a Database Project by right clicking on Database Navigator in the left-hand pane and selecting Create->Other.
Select Database Project in the dialog window.
Give the database project a meaningful name.
Accept the default resource folder locations.
You may want to select a different location for your project or resources rather than the default locations if you wish to apply source control to your SQL independently from your Talend project.
Click on the Projects tab in the left-hand pane and then right click and select Create -> Connection to create a new database connection.
Select your database type for the connection. The screenshot shows Mysql but you can select whatever database you use.
Configure your database connection and then test it with the Test Connection button.
Back in the Projects tab in the left-hand pane, drill into your new connection to see table details. Expand the new connection you created and drill into Databases, then a specific database, and then Tables. Double click on a table and select the Data tab in the right-hand pane to see a tabular view of the data.
You can try other features of DBeaver, but the most important one will not work (yet). Click on the SQL button in the toolbar to open a SQL script.
Either nothing happens or you may see an error message such as
java.lang.ClassNotFoundException: org.eclipse.jface.text.rules.RuleBasedPartitionScanner cannot be found
This is expected albeit undesired behavior, which we will fix in the next sections.
Although the DBeaver Eclipse Marketplace web page lists DBeaver as compatible with many older versions of Eclipse, it is in fact not compatible with the 2023-12 (4.30) version of Eclipse that Talend Studio is built upon.
The root cause of the problem is that a number of classes, including the RuleBasedPartitionScanner, were originally located in the org.eclipse.jface.text.rules package in the org.eclipse.jface.text plugin. But those classes were moved to the org.eclipse.text plugin in version 3.14. This is where DBeaver expects to find them, but they are not available in the org.eclipse.text 3.13 version used by the Eclipse 2023-12 baseline upon which Studio is based.
So we just need to install the org.eclipse.text 3.14 plugin into Studio using one of the approaches discussed in the next two sections.
The most direct way to register the org.eclipse.text_3.14.100.v20240524-2010.jar with Studio is to add the file to the studio/plugins folder.
First, download the plugin file by going to the Eclipse project downloads archive and selecting a recent version of Eclipse such as the Eclipse 2024-09 release, which is version 4.33. The links above are in human readable format and have lots of other information, but all you need is the link to the Eclipse 2024-09 repo zip file.
Unzip the file and go to the plugins folder and you will find the org.eclipse.text_3.14.100.v20240524-2010.jar file. Copy this to the plugins folder of your Talend Studio.
Stop Studio if it is running. Now modify the bundles.info file located in
Studio/configuration/org.eclipse.equinox.simpleconfigurator/bundles.info
Find the following line.
org.eclipse.text,3.13.100.v20230801-1334,plugins/org.eclipse.text_3.13.100.v20230801-1334.jar,4,false
Add a similar line immediately after it.
org.eclipse.text,3.14.100.v20240524-2010,plugins/org.eclipse.text_3.14.100.v20240524-2010.jar,4,false
You should now have two lines with different versions of org.eclipse.text in the bundles.info file.
org.eclipse.text,3.14.100.v20240524-2010,plugins/org.eclipse.text_3.14.100.v20240524-2010.jar,4,false
org.eclipse.text,3.13.100.v20230801-1334,plugins/org.eclipse.text_3.13.100.v20230801-1334.jar,4,false
Note that a line may wrap in this document, but each entry is a single line in the bundles.info file.
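If you prefer to script this edit, a small Python sketch is shown below. It assumes you have already copied the 3.14 jar into the plugins folder and that your bundles.info contains exactly the 3.13 line shown above; adjust the Studio path for your installation.

from pathlib import Path

studio = Path("C:/Talend/Studio")  # adjust to your Studio install location
bundles = studio / "configuration/org.eclipse.equinox.simpleconfigurator/bundles.info"

old = "org.eclipse.text,3.13.100.v20230801-1334,plugins/org.eclipse.text_3.13.100.v20230801-1334.jar,4,false"
new = "org.eclipse.text,3.14.100.v20240524-2010,plugins/org.eclipse.text_3.14.100.v20240524-2010.jar,4,false"

lines = bundles.read_text().splitlines()
if new not in lines:                    # idempotent: do nothing if already patched
    lines.insert(lines.index(old) + 1, new)   # add the 3.14 entry right after 3.13
    bundles.write_text("\n".join(lines) + "\n")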
First, confirm that the org.eclipse.text 3.14 plugin has been installed in Studio. Select Help->About Talend Studio from the menu and then click Installation Details in the resulting dialog box. Select the Plugins tab in the new dialog window and click the Plugin-Id column header to sort by that column. Now scroll down to the entries for org.eclipse.text. Notice that there are two: the original 3.13 and the new 3.14 versions. Co-existence of these libraries is handled smoothly by the Eclipse OSGI framework.
Open the DBeaver perspective.
Click on the SQL Editor.
This time it should open successfully and you should be able to enter and execute a query.
Do you need answers for specific points in time when working with multiple calendars / dates? Then use a calendar bridge. A calendar bridge is used to create what Qlik commonly calls a Canonical Date / Canonical Calendar.

A calendar bridge is nothing more than a simple table that links one or more dates to a single common date, called a canonical date. This is used to simplify time period selection during analysis, when multiple calendar / date filters could confuse the user. The bridge table is linked to a key field in your data and created with a new dimension that simply describes each date type you have. (You can link this to a Master Calendar if you require more granular time periods.) Your charts can then use aggregated measures with the defined date type in a set expression to show the specific results.

Watch the video below to learn more, and see the attached app and sample data if you want to try it yourself.
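As a language-neutral illustration of the bridge table's shape, here is a small pandas sketch with hypothetical field names; in Qlik you would build the equivalent table in the load script, typically with CrossTable.

import pandas as pd

# A fact table where each key has two date roles (hypothetical sample data).
orders = pd.DataFrame({
    "OrderID":   [1, 2],
    "OrderDate": ["2024-01-05", "2024-02-10"],
    "ShipDate":  ["2024-01-09", "2024-02-14"],
})

# The bridge: one row per (key, date type), all pointing at one CanonicalDate.
bridge = orders.melt(
    id_vars="OrderID",
    value_vars=["OrderDate", "ShipDate"],
    var_name="DateType",
    value_name="CanonicalDate",
)
print(bridge)
# Set expressions can then filter by date type, e.g. {<DateType={'ShipDate'}>}.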
Want to learn more tips and tricks like these? Don't forget to join me tomorrow 10AM ET for Set Analysis: Redux on the next Do More with Qlik Webinar
This article by our beloved HIC is a great reference with more detail if needed.
Part 2: Coming soon....
Qlik Data Integration Client Managed November 2024 General Availability Release
November traditionally brings many celebrations, from Diwali and Guy Fawkes Day to Thanksgiving, just to name a few. Another celebration to add to that long list is the General Availability of the November 2024 releases of Qlik Replicate and Qlik Enterprise Manager. Our Qlik Replicate customers looking to empower their SAP data will especially want to celebrate, as we now support OData for sourcing data from SAP systems.
Qlik Replicate November 2024 General Availability Release
New Endpoints
Qlik is uniquely placed, offering several methods of replicating data with ease and automation, plus the ability to combine it with other data seamlessly. SAP and Oracle continue to be mission-critical systems for many organizations, and two new endpoints have been added to enable the most efficient ways to replicate data from these two powerhouses.
New SAP OData source endpoint
Qlik Solutions has supported SAP as a source endpoint for a very long time. We have deep domain knowledge and expertise, and through this, we offer many different ways to source and extract data from your SAP applications that best match your use cases, latency requirements, and SAP licensing considerations.
In this release, we are excited to announce the addition of the new SAP OData source endpoint. We purposely developed this new endpoint with a keen eye on performance and scalability, and it supports Change Data Capture (CDC). The new OData endpoint uses a secured web service connection to extract data from SAP applications.
The new Java-based source endpoint uses the OData v2.0 protocol, consistent with SAP support, to explore SAP and get data out. All ODP-based services, such as dynamic filters and projection list forwarding, are supported to help reduce the load and impact. Supported objects include CDS Views, Extractors, BW data providers, and SLT. Data movement with CDC can be supported via two options: configurable periodic time periods, or running a task on a schedule using Qlik Replicate's built-in scheduler function.
See the online help for using SAP OData as a source
The OData endpoint is coming to Qlik Talend Cloud soon, through the Data Movement Gateway; stay tuned.
Qlik will continue supporting all our existing SAP source endpoints to serve customers' requirements.
To learn more about all the many ways that we support SAP and other sources in Qlik Replicate and how to use them, check out the online help section on managing sources
New Oracle XStream source endpoint
We are pleased to announce the new Oracle XStream source endpoint, which brings several improvements over using only the Oracle Redo logs for extracting data. This new endpoint interfaces directly with the Oracle XStream API, so now a Replicate task creates an XStream Out Server. With this new method, you will experience better performance, increased reliability, and simplified maintenance. Additionally, by utilizing the API, we can ensure future-proofing the endpoint with later Oracle versions.
See the online help for using Oracle XStream as a source
There have also been several improvements across several other endpoints:
Source endpoint improvements
Target endpoint improvements
Scheduling enhancements
The Qlik Replicate Scheduler is used to schedule Replicate task operations such as running, stopping, reloading, and resuming tasks as one-time or recurring jobs. Schedules can be configured to occur once, daily, weekly, or monthly.
This release introduces two new options, monthly and every, to make scheduling even more flexible.
Monthly – gives you more granular options to schedule tasks to run on the <nth> <weekday> of every month and at the specified time.
Every - This lets you schedule tasks to run at regular intervals. This new option allows you to control the interval, starting on the specified date and time.
Note: The minimum interval is 5 minutes, and job intervals are always calculated according to the original start time.
Security and Compliance
Qlik has always taken trust and security seriously, implementing security and privacy by design in our products for a long time. We offer a world-class architecture and experience to meet your security, compliance, and privacy needs confidently.
The following endpoints have enhanced access and authentication methods
Qlik Enterprise Manager May 2024 General Availability Release
Enhanced API Support - We continue to extend the Qlik Enterprise Manager APIs by now supporting the ability to resume change processing from a position in the log (SCN or LSN) using the RunTask method. (RESUME_PROCESSING_FROM_POSITION)
We have also enabled API authentication using access tokens via JSON Web Token (JWT).
⚠️ Note: this requires JWT to be set up and configured. ⚠️
All our APIs can be used via REST, .NET, and Python. More details can be found in the Qlik Enterprise Manager API guide on Qlik Help.
As always, each new release is fully supported for two years. To check the status of support for your currently installed version, please see the relevant product lifecycle pages.
We hope you enjoy using Qlik Data Integration products and would love to hear your feedback and success stories, especially in any improvement gains you achieve.
To get the latest versions, please visit the Downloads and Release Notes section on Qlik Community.
To learn more about what is included in these releases, be sure to check out the Release Notes, which are available here.
To obtain any of these releases, go to the Qlik Downloads Site in the Community and filter “Product Category” by “Qlik Data Integration”, and then select the product and the versions you would like to download.
Note: For most products, selecting “Latest release and patch” under the “Show Releases” filter should be enough.
If required, you can filter further by selecting the latest “Release” and/or Service Release (SR) version under “Release Number”.
We are excited to introduce tabular reporting from Qlik Cloud. Now customers can address common, centrally managed, tabular report distribution requirements from within a Qlik Sense Application! With tabular reporting, report developers can create custom and highly formatted XLS documents from Qlik data and Qlik visualizations; Governed Report Tasks can burst reports to any stakeholder, ensuring that the Qlik platform is the source for operational decisions, customer communications and more.
Access our Getting Started section in your Qlik Cloud app (available for users with Can Edit permissions). Open your app, (a) choose your activity, and select (b) Reporting.
From here, you can begin with our introductory videos and configuration instructions:
Thank you for choosing Qlik,
Qlik Support
It’s been an exciting year for Qlik Cloud Reporting. Back in December 2023, we took a big step by adding tabular reporting to our dashboard-style capabilities, something many of you have been asking for. This update comes packed with features like report task management, easy imports of recipient lists from connected data sources, powerful filtering using bookmarks, and simple uploads for tabular report templates. Now, report developers can easily handle centralized tabular reporting tasks right inside Qlik Sense apps. Thanks to our incredible Qlik Cloud customers, we’re gearing up to roll out even more cool features!
We’re putting the finishing touches on our cloud reporting capabilities for Q4:
As BI leaders get ready for Qlik Answers, Qlik AutoML, and Qlik Talend Cloud, we know that having flexibility in report types and control over operations is key for keeping everyone in the loop.
As we launch these new features, we’ll also be focusing on enhancing operational task controls, including email reporting solutions.
Sign up for this webinar on October 17th to explore Qlik Cloud’s Reporting Evolution and get a sneak peek at everything it has to offer!
Sign up for the Qlik Insider Webinar on November 13th, hosted by Qlik’s VP of Analytics Portfolio Marketing, Mary Kern. This episode of Qlik Insider dives into the role of reporting in the world of modern BI.
A new file management system has been added to Qlik Cloud allowing users to create folders and subfolders for their data files. This hierarchical folder structure is available in your personal space, as well as shared, managed and data spaces. Now, I can add folders to a data space in my tenant and use folders to organize my files. This is extremely helpful when I have a project that has a lot of data files.
There are two ways to add folders to a space. The first is through Space details as seen in the image below.
After clicking on Space details, select Details. Then select Data files.
From the screenshot below, I can use the icons at the top or click on the ellipsis at the end of a folder row to:
I can also use the icons and menu options to cut, copy, and paste files and/or folders somewhere else within the space or to another space. More than one file can be cut, copied, or pasted by selecting the entire folder or by holding Ctrl and selecting each file.
The second way to add a folder to a space is via Administration > Content.
Let’s add one more folder to the CAJ space to see how it is done. To add a 2023 folder to the CAJ space, I will click on the Add folder icon and enter the name of the new folder. Be sure to confirm the path is correct. If not, select the correct path from the Path dropdown. Then click the Create button.
Once the folder is added, I can click on the ellipsis for the 2023 folder and select Upload file to folder to upload files to this folder. If I used the Upload icon at the top, I would have to change the path to the 2023 folder before uploading my files.
This enhancement also lets me create subfolders as seen in the image below. I have a Population folder in the Census folder.
Now, let’s look at how we can load the data files in a Qlik Sense app using the folder structure. From the script editor in Qlik Sense, I can select the CAJ space and then click the Select data icon.
I can see the folder structure I have set up and I can drill down into the folders to select the file I want to load.
Notice that the file path in the script matches the hierarchical file structure I set up. This file structure can also be used when storing QVDs or inserting a QVS file.
I love this new feature in Qlik Cloud. Sometimes I have the need to organize my files by their source or in the example I shared in this blog by the year of the project. This allows me to organize my data in a way that makes sense for development. To learn more about this enhanced file management feature, check out Qlik Help.
Thanks,
Jennell
To ensure continuous support for your data integration processes and to leverage the latest innovations, we are providing this advance notice that Java 17 and Camel 4 will become the new standard versions across Talend Data Fabric version 8. We initiated a transition process by introducing Java 17 support in October 2023, and we are now completing the last leg of this transition.
As of the February 2025 release:
To make this transition smoother:
Step-by-step guides will be made available when the new versions of these components become available.
For further questions, please contact Qlik Talend Support and subscribe to our Support Blog for future updates.
Thank you for choosing Qlik,
Qlik Talend Global Support
Qlik-cli is a command line interface for Qlik Sense SaaS, providing access to all public APIs. The tool enables you to administer your tenant, develop and manage apps, and migrate data, making it easier to script and automate workflows.
Qlik-cli is mainly developed for Qlik Sense SaaS, where the aim is to support all the publicly exposed APIs. However, you can also expect functionality that supports migrating resources, for example apps, from Qlik Sense Enterprise on Windows to Qlik Sense SaaS.
Here is an overview of how you can use the tool.
To get started, simply install the tool. Below is a short video that walks you through the installation process.
I hope you will try the tool soon, and when you do, let us know what you think. If you are already using and loving the tool, please also share your favourite use cases with us.
https://qlik.dev/tutorials/get-started-with-qlik-cli
https://qlik.dev/libraries-and-tools/qlik-cli
No Code Transformation Workflows For Your Data Pipelines
Data transformations and manipulations are usually the domain of experts in SQL, Python, or other programming languages. Because data transformations were hand-coded in the past, developing them was a resource-intensive undertaking. Moreover, once the transformations were implemented, they needed to be updated and maintained as business requirements changed and new data sources were added or modified. One of the goals of data integration solutions is to assist users in their quest to manage and transform data by removing barriers to this process. What if you could automate and build your transformation workflows with zero manual coding, using a visual interface?
Transformation flows from Qlik Talend Cloud help bring advanced transformation capabilities to users of all levels. A transformation flow relies on a no-code graphical interface to guide the user through the data transformation.
The interface only requires knowledge of the data and the desired transformation output. As the user builds the transformation flows, the system generates the SQL code, optimized for the target platform, and displays the results for verification as one goes along.
The key to transformation flows’ “no-code” approach is the concept of configurable processors. These processors function as building blocks that take raw data from a source table or a preceding processor as input and perform an operation to transform the data and produce output. A wide range of processors is available as part of Qlik Talend Cloud, including processors to aggregate, cleanse, filter, join, and more (see below). Currently, all processors execute using a push-down ELT paradigm. That is to say, the processors generate SQL instructions compatible with the project’s target database platform or data warehouse, then execute these instructions utilizing the compute and data present on the target cloud platform – such as Snowflake, Databricks, or others.
Getting Started with Transformation Flows
If you are already using Qlik Talend Cloud Data Pipelines, you can create transformation flows inside the transformation objects of data integration projects.
If you are new to Qlik Talend Cloud, or new to Qlik, contact your sales team to start a new project.
Let us look at a couple of examples for transformation flows in the context of customer data. First, we are going to filter and split customer data from SAP by geography. Then we are going to combine it with data from other systems to arrive at a consolidated customer list.
Filter and split example
The steps to filter and split customer data by location/geography are as follows.
When you create a transformation flow, the Qlik Talend Cloud user interface will display the input data set(s) selected in the prior step and the output data set by default. The output data set initially takes the name of the transformation, but the transformation flow developer can rename it.
Quick tip: to get started with building a transformation flow, select the input data set and turn on the data preview. This will show the fields and data available to use in your transformation flow. Note that the LAND1 field holds the customer’s country.
Combining data from multiple sources
Building upon the previous transformation, we will now combine the filtered SAP customer data with a different set of customers from an Oracle-based system.
Conclusion
Transformation flows in Qlik Talend Cloud allow users without extensive programming skills (SQL, Python, etc.) to easily and effectively transform their data for analytics. The graphical interface levels the playing field by abstracting data knowledge and design away from syntactical language constructs and presenting them as configurable processors. Seemingly complex nuances, like having the transformation flow process incremental changes or adding filters to reduce the set of data being processed, can be handled automatically: simply enable the incremental load option and include the Incremental filter processor. Be aware, however, that incremental loading is only available if the data set has been materialized. Making data transformations more accessible improves requirements communication, which can shorten data pipeline build times and make pipelines easier to update when requirements evolve. Transformation flows are at the core of Qlik Talend Cloud’s transformation capabilities and are available for use today.
Learn more about Qlik Talend Cloud
But in 2023, the arrival of ChatGPT and generative-AI changed the game.
Qlik was quick to respond, but not in haste. We introduced a new AI strategy, coined Qlik Staige, which comprises three pillars – a trusted data foundation for AI, AI-enhanced analytics, and self-service AI solutions. The foundational aspect of our AI-enhanced analytics strategy is to take Insight Advisor to the next level, modernizing the architecture and language model to take advantage of the latest generative-AI capabilities. We are pleased to introduce the first step on that journey – LLM-driven language generation.
Insight Advisor now features generative-AI-driven language generation as a private preview feature, both in Insight Advisor Chat and in-app search experiences. Users will now get more “ChatGPT-like” answers when asking questions, with human-sounding narrative for improved readability and a wider range of observations, summarizations, and additional insights. Qlik has partnered with Amazon Bedrock to utilize state-of-the-art LLM technology through a fully built-in solution that takes advantage of Qlik's security and governance. And of course, all analytical calculations are still generated by the trusted Qlik engine, with the LLM utilized to enhance and improve the output. Going forward, we will continue to modernize the language model in Insight Advisor, improving intent recognition and enhancing the overall capabilities and experience.
In addition, in the coming months we will be introducing automated authoring. Business analysts might be great at building visualizations, but it’s a time-consuming and complex process. Automated authoring is a new AI-driven creation experience that speeds and simplifies authoring in Qlik. With a wide variety of analysis types to choose from, you can now drag-and-drop AI-generated analyses directly onto your sheets, select applicable data from a list of suggestions, and let Qlik do the rest. In a few clicks you get a sophisticated analysis that would have taken far longer to build manually. This enables business analysts and authors who know what they want to gain efficiency, shorten time-to-value, and spend more of their effort actually analyzing the business.
Both of these exciting new features were demonstrated at our annual user conference, Qlik Connect, and received great feedback. The new LLM-driven language generation capability is now offered in private preview, and automated authoring will be generally available later this summer.
Stay tuned as we continue to lead the way in AI-powered analytics – lots more to come.
Vidya Jyothi Institute of Technology (VJIT), a leading educational institution in Hyderabad and one of Qlik's close Academic Program partners, recently hosted a successful datathon.
Vidya Jyothi Institute of Technology was established in 1998 by the Vidya Jyothi Educational Society, created by a group of committed academicians and enterprising educationists. The institute offers many programs in engineering and is one of the most recognized campuses in the State of Telangana in India. The first "Centre of Excellence in Analytics powered by Qlik" was started at VJIT, and so far many students have leveraged Qlik's academic program along with its qualifications and certifications.
In another step to build this relationship further, VJIT organised a datathon in which more than 200 students participated in a Qlik-only event. The departments of IT, CSE, CSE-DS, and AI participated. One of the distinct features of this datathon was that all the student participants had completed the Qlik Sense Business Analyst Qualification.
Students were given datasets, created dashboards, and presented them to the audience. The final selection was based on the quality of analysis using Qlik Sense, the presentation, and knowledge of the technology. In the end, five teams were shortlisted for the final round and three winners were declared.
For more information on the Qlik Academic Program free resources and many other engagement opportunities, visit qlik.com/academicprogram
Customizing your Qlik Sense apps not only enhances their visual appeal but also ensures consistency with your organization's branding guidelines. With custom themes you can modify colors, fonts, and layouts on both global and granular levels, giving you complete control over the look and feel of your analytics.
In this blog post, we'll dive into the essentials of building a custom theme, dissect the anatomy of the theme's JSON file, and share some tips and tricks to help you create themes easily.
Bonus: along the way, we will be creating a Netflix inspired theme. We'll go from this:
to this:
Getting Started: The Essentials of Building a Theme
A custom theme in Qlik Sense is a collection of files stored in a folder.
It typically includes a definition file (.qext) with metadata about the theme, a theme.json file containing your styling rules, and optional assets such as CSS files, fonts, and images. For example, here is the netflix-theme.qext definition file:
{
"name": "Netflix Theme",
"description": "A custom theme inspired by Netflix's branding.",
"type": "theme",
"version": "1.0.0",
"author": "Ouadie Limouni"
}
Folder structure example:
netflix-theme/
├── netflix-theme.qext
├── theme.json
├── netflix.css (optional)
├── BebasNeue-Regular.ttf (optional)
└── images/ (optional)
└── background.jpg
Anatomy of the `theme.json` File
(The full theme code is attached at the end of this blog post)
Variables allow you to define reusable values (like colors and font sizes) that can be referenced throughout your theme. Variables must be prefixed with `@`.
Example:
"_variables": {
"@primaryColor": "#E50914",
"@backgroundColor": "#141414",
"@ObjectBackgroundColor": "#3A3A3A",
"@fontColor": "#FFFFFF",
"@secondaryColor": "#B81D24",
"@fontFamily": "\"Bebas Neue\", Arial, sans-serif",
"@fontSize": "14px"
}
Global styles – these properties set the default styles for your entire app.
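To illustrate, here is a minimal sketch of global defaults that reference the variables defined earlier. This is illustrative only: the property names follow the standard Qlik Sense theme JSON schema, and exact support may vary by version.
{
  "_inherit": true,
  // global defaults – illustrative values based on the standard schema
  "color": "@fontColor",
  "backgroundColor": "@backgroundColor",
  "fontSize": "@fontSize",
  "fontFamily": "@fontFamily"
}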
Sheets – customize the appearance of sheets, including the title backgrounds.
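For example, the sheet title can pick up the Netflix red. A minimal sketch, assuming the sheetTitle section of the standard theme schema:
"sheetTitle": {
  // illustrative values for this theme
  "backgroundColor": "@primaryColor",
  "color": "@fontColor"
}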
Objects – control the styling of various objects (charts, tables, etc.) in your app.
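As a sketch, object titles could be styled as follows. The object > title > main nesting follows the standard theme schema; verify it against your Qlik Sense version:
"object": {
  "title": {
    "main": {
      // object title text – illustrative values
      "color": "@fontColor",
      "fontFamily": "@fontFamily",
      "fontSize": "16px"
    }
  }
}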
Data colors – define how data appears in your visualizations, including the primary data color, colors for null values, and colors for different selection states.
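A minimal dataColors sketch for this theme might look like the following; the key names come from the standard theme schema:
"dataColors": {
  // illustrative mapping to the Netflix palette
  "primaryColor": "@primaryColor",
  "nullColor": "#333333",
  "othersColor": "#B3B3B3"
}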
Learn more here.
Palettes are arrays of colors used for dimensions (categorical data). You can define custom palettes for data and UI elements.
"palettes": {
"data": [
{
"name": "Netflix Data Palette",
"scale": [
"#E50914",
"#B81D24",
"#221F1F",
"#FFFFFF"
]
}
],
"ui": [
{
"name": "Netflix UI Palette",
"colors": [
"#FFFFFF",
"#B3B3B3",
"#333333",
"#000000"
]
}
]
},
Scales are used for measures (numerical data) and can be gradients or classes.
"scales": [
{
"name": "Netflix Red Gradient",
"type": "gradient",
"scale": ["#B81D24", "#E50914"]
},
{
"name": "Netflix Grey Gradient",
"type": "gradient",
"scale": ["#333333", "#B3B3B3"]
}
],
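Scales of type class can be sketched similarly; this entry is illustrative and assumes "class" as the type name in the standard schema:
{
  "name": "Netflix Classes",
  "type": "class",
  "scale": ["#333333", "#B81D24", "#E50914"]
}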
You can apply specific styles to individual chart types, overriding global settings.
Example for a Bar Chart:
"barChart": {
"label": {
"value": {
"color": "@fontColor",
"fontSize": "12px",
"fontFamily": "@fontFamily"
}
},
"bar": {
"fill": "@primaryColor"
},
"outOfRange": {
"color": "#404040"
}
},
Tips and Tricks
Creating custom themes can be a rewarding experience, and here are some tips to help you along the way:
1. Use Variables for Consistency
Defining colors, font sizes, and other reusable values as variables ensures consistency across your theme and makes updates easier.
"_variables": {
"@primaryColor": "#E50914",
"@fontSize": "14px"
}
2. Leverage Inheritance
The `_inherit` property allows your theme to inherit properties from the default theme, reducing the amount of code you need to write.
{
"_inherit": true,
// Your custom properties here
}
3. Test Incrementally
Apply your theme during development and test changes incrementally. This approach helps you catch errors early and see the immediate impact of your changes.
4. Organize Your Theme File
Keep your `theme.json` file organized by grouping related properties. This practice makes it easier to navigate and maintain your theme.
5. Prefix Your Variables and Themes
To avoid conflicts with other themes or variables, use unique prefixes.
"_variables": {
"@netflix-primaryColor": "#E50914",
}
6. Validate Your JSON Files
Always validate your JSON files to prevent syntax errors. Use online tools like JSONLint.
7. Utilize Custom Fonts Carefully
Don't overuse custom fonts and ensure that any custom fonts you use are properly licensed for use in your application.
8. Use High-Quality Images
If you're incorporating images (like backgrounds or logos), make sure they are high-quality and optimized for web use.
-> Stay up-to-date with the latest on qlik.dev
Applying the Netflix Theme to Your App
Once you've created your custom theme, you can apply it to your Qlik Sense app:
1. Upload the Theme: Upload the zipped folder to the Themes section in your Console.
2. Apply the Theme: In your app, go to the App options menu, select Appearance, and choose your custom theme from the list.
📌 If you are an advanced developer, check out the following blog posts that tackle theming in an embedded context:
- Theming with Picasso.js
- Qlik Embed (theming section towards the end)
Happy theming!
Optimization of Route Order: Experimenting with different city sequences reveals that the order of stops significantly affects total travel time. Users discover that planning an efficient sequence can save valuable minutes, especially under a time constraint.
Real-Time Decision Making: The app demonstrates how real-time adjustments, such as changing routes or redistributing speed points, impact overall performance, mirroring the decision-making process in real logistics scenarios.
The app demonstrates how Qlik Sense can be used beyond traditional analytics, enabling businesses to simulate and optimize complex logistical operations in real time. By providing insights into route efficiency and resource allocation, it can help companies streamline delivery processes, reduce costs, and improve overall operational efficiency.
This app concept is designed for logistics managers, BI analysts, and operations teams who would use it to explore and refine delivery strategies.
The total time calculation provides valuable insights derived from the interplay of speed points and total weight on each route. In a real-world scenario, additional variables would come into play, but this app effectively demonstrates the foundational concept of leveraging data to optimize complex logistics operations.