Among the available connectors in Qlik Application Automation are Microsoft OneDrive and Microsoft SharePoint. This article describes some potential use cases for these connectors and provides easy-to-use templates for each. Most examples are built using the OneDrive connector; however, the same use cases can be performed for shared drives using the SharePoint connector.
Authentication for the connector happens with OAuth2. The following scopes are used for the OneDrive connector:
These scopes should be accessible to all users in a default Office 365 tenant. If you have trouble authorizing Qlik Application Automation to obtain these scopes, consult your Office 365 administrator.
Using automations in Qlik Cloud, it is possible to monitor changes to a user's personal OneDrive or to a SharePoint shared drive. When changes to a personal or shared drive happen, automatic actions can be performed, such as automatically obtaining data from an uploaded Excel spreadsheet and updating a Qlik Sense app.
OneDrive uses a delta token to report the changes that have happened since that token was issued. As automations have no built-in data store, a separate store is needed for the delta tokens. Options include an Excel or text file stored on OneDrive, an app in Qlik Sense, or any other place where data is persisted.
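As an illustration of this pattern outside of Automations, here is a minimal Python sketch against the Microsoft Graph delta endpoint. A local text file stands in for the external token store (OneDrive file, Qlik Sense app, ...); the file name is a placeholder.

import requests

TOKEN_STORE = "delta_link.txt"   # hypothetical stand-in for the external store
GRAPH_DELTA = "https://graph.microsoft.com/v1.0/me/drive/root/delta"

def load_delta_link():
    try:
        with open(TOKEN_STORE) as f:
            return f.read().strip()
    except FileNotFoundError:
        return GRAPH_DELTA       # first run: enumerate the whole drive

def poll_changes(access_token):
    url = load_delta_link()
    headers = {"Authorization": f"Bearer {access_token}"}
    changed = []
    while url:
        resp = requests.get(url, headers=headers).json()
        changed.extend(resp.get("value", []))   # items changed since the last token
        if "@odata.deltaLink" in resp:
            # Persist the new delta link so the next run only sees fresh changes.
            with open(TOKEN_STORE, "w") as f:
                f.write(resp["@odata.deltaLink"])
            break
        url = resp.get("@odata.nextLink")       # more pages of changes to fetch
    return changed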
An example automation to monitor changes might look like the following. The automation is also attached to this article as JSON, ready to import into a Qlik Cloud tenant.
The Microsoft OneDrive and Microsoft SharePoint connectors in Qlik Application Automation provide blocks to move and rename items on drives. These can be used in conjunction with other connectors to perform these actions whenever a given event happens.
An example could be to move all files from an incoming folder to an archive folder after a Qlik Sense reload is complete.
A simple example automation is added as an attachment to this article. In the automation editor this might look like the following:
The Create Sharing Link block creates a shareable URL that can be sent over email to a user.
It offers the user the choice of whether the URL works for the user's organization or anonymously. The scope can be set to view, edit, or embed, and it is possible to set a password as well as an expiration time for the shareable link.
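For reference, this block maps onto Microsoft Graph's createLink action. A minimal sketch, assuming a valid access token; the item id and the password/expiration values are placeholders:

import requests

def create_sharing_link(access_token, item_id):
    url = f"https://graph.microsoft.com/v1.0/me/drive/items/{item_id}/createLink"
    body = {
        "type": "view",                                # view | edit | embed
        "scope": "anonymous",                          # anonymous | organization
        "password": "s3cret",                          # optional
        "expirationDateTime": "2030-01-01T00:00:00Z",  # optional
    }
    resp = requests.post(url, json=body,
                         headers={"Authorization": f"Bearer {access_token}"})
    resp.raise_for_status()
    return resp.json()["link"]["webUrl"]               # the shareable URL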
The following automation obtains data from a table in Qlik Sense, copies the rows to an Excel file hosted on OneDrive, creates a shareable link, and sends this link to a user by email:
Sharing permissions can be granted with the Send Sharing Invitation block. This block sends an invitation to an email recipient, including a message, a subject, read or write permissions, the choice of whether an invitation email has to be sent out, and whether the recipient has to be signed in. A password can also be provided, but only if the user is using a personal OneDrive account.
Furthermore, it is possible to list sharing permissions, update these permissions, and remove permission to access a file on a user's OneDrive.
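These operations correspond to Graph's permissions endpoints on a drive item. A hedged sketch; token, item id, and permission id are placeholders:

import requests

BASE = "https://graph.microsoft.com/v1.0/me/drive/items"

def list_permissions(token, item_id):
    r = requests.get(f"{BASE}/{item_id}/permissions",
                     headers={"Authorization": f"Bearer {token}"})
    return r.json()["value"]                   # one entry per permission

def update_permission(token, item_id, perm_id, roles=("read",)):
    r = requests.patch(f"{BASE}/{item_id}/permissions/{perm_id}",
                       json={"roles": list(roles)},    # e.g. read or write
                       headers={"Authorization": f"Bearer {token}"})
    return r.json()

def remove_permission(token, item_id, perm_id):
    requests.delete(f"{BASE}/{item_id}/permissions/{perm_id}",
                    headers={"Authorization": f"Bearer {token}"})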
The information in this article is provided as-is and to be used at your own discretion. Depending on tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
This article explains how a reload task chain can be built that will run longer than the maximum automation run limit of 4 hours.
This is an advanced article; if you’re new to automations or task chaining, I suggest you first read this basic article.
Content:
The maximum run time of an automation is 4 hours. This can be a problem when you want to build a task chain of multiple reloads of big apps that could take longer than that. To solve this, we'll build a system of two automations that run together:
Automation 1: Task Chain Initiator. This automation will run once and start the task chain by triggering the first app reloads.
Automation 2: Task Chain Manager. This automation will be scheduled to run every 5 minutes. It will verify whether any reloads have finished, whether they were successful, and whether any new reloads should be triggered.
The Task Chain Manager automation will need to know which reloads were triggered by the Task Chain Initiator and will need a way to store which reloads it already processed in previous runs. In other words: these automations need some sort of state to track the progress of the reload task chain. We’ll use Google Sheets for this, but of course, feel free to use any other cloud storage tool.
The state will be stored in one file and divided over 4 sheets. Go ahead and create a new file in Google Sheets.
Sheet 1: Apps for initial reload. This sheet contains the ids and names of the apps that should be reloaded in the first automation. They will be executed by the Task Chain Initiator automation and run in parallel. Add a new sheet to the file you created, give it the name “Apps for initial reload”, and add each app's id and name as in the example below. The order of the apps in the sheet is ignored.
Sheet 2: Queued reloads. This sheet will keep track of which apps were already triggered for reloading.
Add the sheet to the file and name it “Queued reloads”, but don’t fill in any fields; the content of the sheet is autogenerated by the automations.
Sheet 3: Finished reloads. This sheet keeps track of all reloads that have finished; it contains their id, name, the status of the reload, and the reload log. Add the sheet to the file and name it “Finished reloads”, but don’t fill in any fields; the content of the sheet is autogenerated by the Task Chain Manager automation.
Sheet 4: Chained reloads. This sheet dictates which apps should be reloaded after certain apps finish reloading. Add a new sheet to the file, name it “Chained reloads”, and specify the order in which apps should be executed.
The order of the rows in the sheet is ignored; the sheet is processed as follows:
If a reload has finished for an app from column “First_id”, then trigger a new reload for the app that’s specified in the column “Next_id” on the same row.
Add this sheet to the file and fill in the app ids in the two columns, First_id and Next_id; a sketch of how the sheet is processed follows below:
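To make the rule concrete, here is Python-style pseudologic for how the Task Chain Manager could process this sheet; the row dictionaries mirror the sheet's two columns, and trigger_reload is a hypothetical helper:

def process_chained_reloads(chained_rows, finished_ids, queued_ids):
    """chained_rows: list of {"First_id": ..., "Next_id": ...} dicts."""
    for row in chained_rows:                  # row order does not matter
        # Trigger the next app once its predecessor has finished,
        # unless it was already queued in an earlier run.
        if row["First_id"] in finished_ids and row["Next_id"] not in queued_ids:
            trigger_reload(row["Next_id"])    # hypothetical helper
            queued_ids.add(row["Next_id"])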
This automation will only run once, at the beginning of the full task chain process, and it will execute the following steps:
Note: an export of this automation is attached to this article as a JSON file named "Task chain initiator.json".
This automation will run on a schedule and will be the engine of our task chain process. Feel free to adjust the schedule according to your needs. On each run of the automation, the following steps will be executed:
Note: an export of this automation is attached to this article as a JSON file named "Task chain manager.json".
The reloads in the Finished reloads sheet will look like this:
When scheduled every 5 minutes, the "Task Chain Manager" automation will generate 288 daily runs. This bonus paragraph explains how the number of total runs can be reduced. Instead of having an automation that has many executions that run short, we'll build an automation that has only a few executions that run for a longer time.
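Conceptually the change looks like this sketch: a single execution loops and sleeps internally, staying safely under the 4-hour limit, instead of being rescheduled 288 times. The helper functions are hypothetical stand-ins for the existing manager logic.

import time

RUN_BUDGET_SECONDS = 3.5 * 3600   # stay safely under the 4-hour run limit
CHECK_INTERVAL = 300              # check every 5 minutes, as before

def manager_run():
    started = time.time()
    while time.time() - started < RUN_BUDGET_SECONDS:
        process_finished_reloads()   # hypothetical: same logic as the short runs
        trigger_due_reloads()        # hypothetical
        time.sleep(CHECK_INTERVAL)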
The information in this article is provided as-is and to be used at your own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
How to do task chaining with Qlik Application Automation
How to import and export automations
This article shows how to generate a report from a Qlik Sense app, upload the report to any cloud storage platform, create a sharing link, and send out an email with the link in the body.
We could send the report as an attachment using the Send Mail block in the Mail connector, but this block only supports attachments up to 40 MB. An alternative solution is to store reports in any of the cloud storage platforms that support the creation of sharing links and send emails containing the links.
I will go through a few examples to show how this can be achieved using the following cloud storage connectors:
I will not go through the steps on how to generate the report using Qlik Reporting blocks as it's already explained in this article.
The four automation files used in the examples are attached to this article. More information on importing automations can be found here.
The following steps explain how to create a public URL using the Dropbox connector.
The following steps explain how to create a public URL using the Microsoft OneDrive connector.
The following steps explain how to create a public URL using the Microsoft SharePoint connector.
Amazon S3
The following steps explain how to create a public URL using the Amazon S3 connector.
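For Amazon S3, the public URL is typically a presigned URL with an expiration. A minimal sketch using boto3, assuming AWS credentials are configured; the bucket and key are placeholders:

import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-reports-bucket", "Key": "reports/sales.xlsx"},
    ExpiresIn=7 * 24 * 3600,      # link validity in seconds
)
print(url)                        # include this URL in the email body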
Authentication and authorization with the Microsoft Excel connector in Qlik Application Automation work through OAuth2. When a user attempts to connect a Microsoft Excel data source, the user is redirected to the Microsoft login page and prompted to give Qlik Application Automation permission to access their Office 365 tenant.
The following scopes are being requested by Qlik Application Automation:
The Microsoft Excel connector has blocks available to perform actions on worksheets, data inside worksheets, and tables inside worksheets. When working with data, we recommend working with tables rather than loose worksheet data, for ease of use.
This block obtains values from a specified range in an Excel file worksheet. The following input parameters are used:
The block returns a double-nested array: the first level contains a row, and the second level contains the value of a column in that row.
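To make the shape concrete, here is a small Python illustration with made-up values:

values = [
    ["Product", "Units", "Revenue"],   # row 1 of the requested range
    ["Widget",  12,      144.0],       # row 2
    ["Gadget",  5,       95.5],        # row 3
]

for row in values:             # first level: rows
    for cell in row:           # second level: column values within the row
        print(cell, end="\t")
    print()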
The same block also exists for working with tables. Here the start cell and end cell are replaced by a Table ID.
With the block Add Worksheet to Excel File a user can add a new worksheet to an Excel document. It takes the following input parameters:
Using the block Update Rows in Excel Sheet you can specify a range in which you want to overwrite existing data. The following input parameters are required:
Using the block Create Excel Table With Headers a user can create a table in Excel. Using the block Add Table to Excel File Worksheet a user can also create a table without headers. The following input parameters are being requested:
Aside from creating a table, we can also delete one using the Delete Table from Excel File Worksheet block. This requires an item ID, a worksheet, and a Table ID.
Two blocks serve to find matching rows in an Excel table: Find Row in Excel Worksheet Table and Find Rows in Excel Worksheet Table. The first one returns only the first row found; the second one returns all rows matching the criteria. The following input parameters are required:
Functionality to share workbooks is not included in the Microsoft Excel connector. These actions can be done using the Microsoft OneDrive or SharePoint connector.
The information in this article is provided as-is and to be used at your own discretion. Depending on tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
Getting data out of Qlik Sense Enterprise SaaS, and distributing it to different users in formatted Excel, has been a manual task until now.
Thanks to the release of Qlik Application Automation, it is now possible to automate this workflow by leveraging the connectors for Office 365 - specifically Microsoft SharePoint and Microsoft Excel.
Here I share two example Qlik Application Automation workspaces that you can use and modify to suit your requirements.
Video:
Note - These instructions assume you have already created connections as required in Example 1.
This On-Demand Report Automation can be used across multiple apps and tables. Simply copy the extension object between apps & sheets, and update the Object ID (Measure 3) for each instance.
The information in this article is provided as-is and to be used at your own discretion. Depending on tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
With the Salesforce Jobs API, you can insert, update, upsert, or delete large data sets. Prepare a comma-separated values (CSV) file representation of the data you want to upload, create a job, upload the job data, and let Qlik Application Automation handle these steps with the Salesforce API.
Here are the steps to use the Upload Jobs APIs:
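As a rough illustration of the Bulk API 2.0 sequence these steps wrap (create a job, upload the CSV, mark the upload complete), here is a hedged Python sketch; instance URL, token, object name, and CSV payload are placeholders:

import requests

INSTANCE = "https://yourInstance.my.salesforce.com"
API = f"{INSTANCE}/services/data/v57.0/jobs/ingest"
HEADERS = {"Authorization": "Bearer <access_token>",
           "Content-Type": "application/json"}

# 1. Create the ingest job
job = requests.post(API, headers=HEADERS, json={
    "object": "Account", "operation": "insert", "contentType": "CSV",
}).json()

# 2. Upload the CSV representation of the data
csv_data = "Name\nAcme\nGlobex\n"
requests.put(f"{API}/{job['id']}/batches", data=csv_data,
             headers={"Authorization": HEADERS["Authorization"],
                      "Content-Type": "text/csv"})

# 3. Close the job so Salesforce starts processing the rows
requests.patch(f"{API}/{job['id']}", headers=HEADERS,
               json={"state": "UploadComplete"})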
This article is intended to get you started with the Microsoft Outlook 365 connector in Qlik Application Automation.
To authenticate with Microsoft Outlook 365, you create a new connection. The connector uses OAuth2 for authentication and authorization. You will be prompted with a popup screen to consent to a list of permissions for Qlik Application Automation to use. The OAuth scopes that are requested are:
The scope of this connector has been limited to sending emails only. Sending email attachments is currently not supported; we are looking to provide this functionality in the future. The suggested approach is to upload files to a different platform, e.g. OneDrive or Dropbox, and create a sharing link that can be included in the email body.
The following parameters are available on the Send Email block:
As email attachments are not currently supported, we first need to generate a sharing link in OneDrive or an alternative file-sharing service. The following automation shows how to generate a report from a Qlik Sense app, upload the report to Microsoft OneDrive, create a sharing link, and send out an email with the sharing link in the body. This automation is also attached to this post as JSON.
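For reference, the two underlying Graph calls look roughly like this sketch; token, item id, and recipient address are placeholders:

import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def share_and_mail(token, item_id, recipient):
    headers = {"Authorization": f"Bearer {token}"}
    # 1. Create the sharing link on the uploaded report
    link = requests.post(f"{GRAPH}/me/drive/items/{item_id}/createLink",
                         json={"type": "view", "scope": "organization"},
                         headers=headers).json()["link"]["webUrl"]
    # 2. Send the mail with the link in the body
    requests.post(f"{GRAPH}/me/sendMail", headers=headers, json={
        "message": {
            "subject": "Your report is ready",
            "body": {"contentType": "HTML",
                     "content": f"Download the report <a href='{link}'>here</a>."},
            "toRecipients": [{"emailAddress": {"address": recipient}}],
        }
    })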
In this article, we explain how to set up one automation as a source and keep multiple target automations in other tenants synced with the changes made to the original. We will use an intermediary tool, in our case GitHub, to store the changes made to the source automation, so this article also doubles as a version-control guide for your future automations.
To get the initial automation data inside a GitHub repository, all we need to do is build a quick automation that contains the "Get Automation" block and run it to receive the needed data in the block history. We can then copy that data and paste it into a file inside our working repository. For example purposes, our working file is called 'automation.json' and is situated in the main directory of the demo repository.
Now that our initial set-up is complete, we will list the steps needed to keep our repository up to date with all the changes to our automation. This is how the workflow will look:
You can see that our initial start block is replaced with a Qlik Cloud Services webhook that listens to all automations being updated in our tenant. We have also created a variable to keep the GitHub branch name in memory and use it as a version number for the cases where it might be needed:
The formula used takes the date at which the automation was modified, transforms it to a UTC timestamp, and attaches it to the branch name: version-{date: {$.AutomationUpdated.data.updatedAt}, 'U'}
We also need to attach a condition block to our flow to make sure only our source automation gets sent to our repository:
In this case, we are hardcoding the id of our automation in the condition input parameter. Next, on the 'yes' branch of the condition, comes the 'Get automation' block that will return the latest version of our source automation, followed by the flow that sends that data to a new branch in our Github repository. First, we create a new branch:
Secondly, we update the automation.json file we had in our main repository with the new data we received from the 'Get automation' block:
As you can see, the GitHub platform only accepts BASE64-encoded file contents, so the JSON content received from 'Get automation' is transformed to that format. To create the commit and pull request, we have one more step, which is to find the branch TREE SHA information:
With this information, we can go ahead and create the commit:
The final step in our flow is to create the pull request:
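As a compact reference, here is a hedged Python sketch of the same branch -> file update -> pull request flow against GitHub's REST API. It uses the contents endpoint, which creates the commit in a single call, so the separate tree-SHA and commit steps of the automation collapse into one request here; owner, repo, and token are placeholders.

import base64
import requests

API = "https://api.github.com/repos/<owner>/<repo>"
HEADERS = {"Authorization": "Bearer <github_token>"}

def push_automation(automation_json: str, branch: str):
    # 1. Branch off main
    main_sha = requests.get(f"{API}/git/ref/heads/main",
                            headers=HEADERS).json()["object"]["sha"]
    requests.post(f"{API}/git/refs", headers=HEADERS,
                  json={"ref": f"refs/heads/{branch}", "sha": main_sha})

    # 2. Update automation.json on the new branch (content must be base64)
    current = requests.get(f"{API}/contents/automation.json",
                           headers=HEADERS).json()
    requests.put(f"{API}/contents/automation.json", headers=HEADERS, json={
        "message": f"Sync automation ({branch})",
        "content": base64.b64encode(automation_json.encode()).decode(),
        "sha": current["sha"],      # SHA of the file being replaced
        "branch": branch,
    })

    # 3. Open the pull request against main
    requests.post(f"{API}/pulls", headers=HEADERS,
                  json={"title": branch, "head": branch, "base": "main"})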
To do this, we now move to our target tenant and create a separate automation there. The workflow involved is a simple one:
Again, the start block is replaced by the GitHub webhook that listens to all new pull requests opened in our repository:
We also create an empty object variable that will be later used to save the incoming information from the repository. We need to create a condition block as well with the following two conditions:
This lets the automation flow continue only if the pull request has been closed and its contents have been merged into the main branch of our repository. It tells us that the pull request has been approved and we can sync all changes into our target automations. For that, we query the file contents of the repository and save that information in our object variable after we have transformed it from a BASE64 encoding format to a text-based format:
The formula used when assigning the value to the variable is {object: '{base64decode: {$.getFileContent}}'} which just removes the base64 formatting and turns the string into a JSON object to better handle it in the next block. Now all that is left is to use the 'Update Automation' block to bring the new changes to our target automation:
As you can see, we hardcoded the GUID of our target automation, but the 'do lookup' functionality can be used as well to select one of the automations in the tenant. Finally, we need to map the correct parameters from our object to the input parameters of the block. This can easily be done by selecting the Name, Description, Workspace, and Schedule parts of the object.
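Put together, the target-tenant side amounts to the sketch below. The GitHub contents call is standard; the Qlik automations endpoint and the lowercase field names are assumptions based on the public API shape, so treat them as illustrative:

import base64
import json
import requests

# Fetch the merged file and strip the base64 wrapping
gh = requests.get(
    "https://api.github.com/repos/<owner>/<repo>/contents/automation.json",
    headers={"Authorization": "Bearer <github_token>"}).json()
automation = json.loads(base64.b64decode(gh["content"]))

# Push the relevant parts to the target automation (endpoint shape assumed)
requests.put(
    "https://<tenant>.us.qlikcloud.com/api/v1/automations/<target-guid>",
    headers={"Authorization": "Bearer <qlik_api_key>"},
    json={k: automation[k] for k in ("name", "description",
                                     "workspace", "schedule")})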
This should let you keep your automations synced over multiple tenants. Your target automation history should not be impacted by the syncs in any way.
Please be aware that if the target automation is open during the sync process, the changes to it will not be reflected in real-time and a reopening of the automation will be needed in order to see them.
Attached to this article you can also find the JSON files that contain the automation flows presented.
The information in this article is provided as-is and to be used at your own discretion. Depending on tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
This article is meant to get users started with the API Key connector in Qlik Application Automation.
This connector is designed for advanced users to connect to platforms that do not have a dedicated connector.
The connector employs the API key authentication flow, which requires only the API key to make requests to a specific API.
Index:
Supported results type:
{
"content": [{
"endDate": "2021-11-01T05:55:00.000+0000",
"status": "unpublished",
"created": "2021-06-16T18:40:12.000+0000",
"websiteUrl": "https://events.bizzabo.com/337104",
"modified": "2021-08-30T20:54:52.000+0000",
"timezone": "America/Denver",
"name": "Pluralsight LIVE 2021"
}]
}
Not supported results type:
[{
"endDate": "2021-04-08T19:00:00.000+0000",
"status": "published",
"websiteUrl": "https://events.bizzabo.com/309250",
"modified": "2021-04-01T16:38:46.000+0000",
"timezone": "America/Denver",
"name": "Pluralsight Summit",
"id": 309250
}]
Begin by looking for the "API Key Connector" connector. When you click connect, the following input parameters are requested:
GET / HTTP/1.1
Host: example.com
Authorization: Basic <api_key>
curl -X GET "https://example.com/endpoint/?api_key=<api_key>"
When selecting the connector in an automation, you will notice that only a few blocks are available, depending on the type of request and the pagination involved. These blocks have the following input variables:
The following example obtains a list of events from the Tripletex application. We do not yet have a dedicated connector for this, so the API Key connector is used to supply the value for the Authorization header / query string. We first start by obtaining the Base URL and API Key. From the Tripletex documentation, we can find these at https://api.tripletex.io/v2-docs/#/ . We use the following values:
Once you fill in the above details in the connector connection popup and click save, it will say the connector is connected. But since this is a generic connector, we don't have a way of verifying that the connector is actually connected. This will become evident when you run an endpoint.
Create a new automation and drag the Raw API List Request Offset Based Paging block from the API Key connector onto the canvas. Since the Tripletex API supports offset-based paging, we will use the Offset Based Paging block to execute the list events API.
The Raw API List Request Offset Based Paging block inputs are as follows:
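Conceptually, offset-based paging keeps requesting pages and advancing the offset until a short page comes back. A sketch in Python; the parameter names (from/count) and the "values" wrapper follow Tripletex's conventions and should be treated as assumptions for other APIs:

import requests

def list_all(base_url, path, api_key, page_size=100):
    results, offset = [], 0
    while True:
        resp = requests.get(f"{base_url}{path}",
                            params={"from": offset, "count": page_size},
                            headers={"Authorization": f"Basic {api_key}"}).json()
        page = resp.get("values", [])      # Tripletex wraps results in "values"
        results.extend(page)
        if len(page) < page_size:          # a short page means we are done
            break
        offset += page_size
    return results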
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
This article gives an overview of the available blocks in the Google Sheets connector in Qlik Application Automation. It also goes over some basic examples: creating a new spreadsheet, adding new sheets to it, adding information to a sheet, and removing information from it.
You will find the mentioned example attached to this article in JSON format so you can upload it to your workspace and get started with it easily.
This connector supports operations to manage spreadsheets, add and remove sheets from spreadsheets, manipulate sheet data, and a few other supporting operations.
Authentication for this connector is based on the OAuth2 protocol.
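For orientation, here is what the same operations look like when done directly against the Google Sheets API with the gspread client (service-account credentials assumed); the connector performs equivalent calls through its blocks:

import gspread

gc = gspread.service_account()             # OAuth2 service-account auth

sh = gc.create("Automation demo")          # create a new spreadsheet
ws = sh.add_worksheet(title="Data", rows=100, cols=10)   # add a sheet

ws.append_row(["Product", "Units"])        # add information to the sheet
ws.append_row(["Widget", 12])

ws.delete_rows(2)                          # remove a row of information
sh.del_worksheet(ws)                       # remove the sheet entirely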
Let's start with the example prepared:
The information in this article is provided as-is and to be used at your own discretion. Depending on tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
Let's say you uploaded the wrong dataset, or want to clean up data files you've generated using AutoML.
These are the steps to delete a dataset from your working space in Qlik Catalog:
The information in this article is provided as-is and to be used at your own discretion. Depending on tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
This article describes how an Excel file can be converted to a PDF file and sent as an attachment via the Send Mail block using Qlik Application Automation.
Automation Structure
The creation of an Excel report from a Qlik Sense straight table is explained in the first part of this article.
Let's walk through the blocks used in the automation workflow to convert the Excel file into a PDF file.
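One way such a conversion can be done is Microsoft Graph's format=pdf download, which returns a PDF rendition of the Excel file. A hedged sketch, assuming the file is stored on OneDrive; token and item id are placeholders:

import requests

def excel_to_pdf(token, item_id):
    url = (f"https://graph.microsoft.com/v1.0/me/drive/items/"
           f"{item_id}/content?format=pdf")
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    with open("report.pdf", "wb") as f:
        f.write(resp.content)     # PDF bytes, ready to attach to a mail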
A JSON file containing the above automation is attached to this article.
Please follow the steps provided in the How to import automation from a JSON file article to import the automation.
This article explains how the Reporting connector in Qlik Application Automation can be used to generate multi-page PowerPoint reports. It also explains how to store the generated report on a cloud storage tool, like Microsoft SharePoint.
You can use the template that is available in the template picker. You can find it by navigating to Add new -> New automation -> Search templates and searching for 'Store a simple multi-page PowerPoint report to Microsoft SharePoint' in the search bar. Click on the Use template option to use it in the automation.
The following steps describe how to build the demo automation. You will find a version of this automation attached to this article: "Store-a-simple-multi-page-PowerPoint-report-to-Microsoft-SharePoint.json".
This article contains information on how to use the “Split QVD Generation” template.
Content:
While QVD files are optimized for loading speed, it is possible to further improve loading time during reloads by splitting a QVD file into several smaller ones. This will allow you to use Qlik Sense Cloud's ability to perform parallel reloads.
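For orientation, triggering the split reloads in parallel boils down to one reloads API call per app, as in this hedged sketch; tenant, API key, and app ids are placeholders, and the template's blocks handle this for you:

import requests

TENANT = "https://<tenant>.us.qlikcloud.com"
HEADERS = {"Authorization": "Bearer <api_key>"}

# One app per slice of the data, e.g. one per region
split_app_ids = ["<app-id-america>", "<app-id-europe>", "<app-id-asia>"]

reload_ids = []
for app_id in split_app_ids:
    r = requests.post(f"{TENANT}/api/v1/reloads",
                      headers=HEADERS, json={"appId": app_id})
    reload_ids.append(r.json()["id"])   # poll these ids to wait for completion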
The template uses two Qlik Sense applications. Demo examples of what these applications may look like are attached to this article.
This app should have a table containing the dimensional values to be used for the splitting. Add the dimension on which you want to base the QVD split as a dimension in the application's Master Items. To visualize the values, you may also add the dimension to a table on a sheet. When considering what dimension to base the data split on, it is a good idea to select a dimension that has evenly distributed values, since this will improve performance. Having the data unevenly split, where most of the data is loaded into just one of the QVDs, would have minimal performance improvements compared to not splitting the QVD.
The attached demo uses sample data from Snowflake (TPCH_SF10) and "Region" is used as the dimension on which to base the data split.
This is the application that produces the data for the split QVD files. It is important that you filter on the dimension used for splitting in the WHERE clause in the load script of this application, as highlighted below in the example taken from the attached demo application:
FROM "SNOWFLAKE_SAMPLE_DATA"."TPCH_SF10"."CUSTOMER","SNOWFLAKE_SAMPLE_DATA"."TPCH_SF10"."ORDERS","SNOWFLAKE_SAMPLE_DATA"."TPCH_SF10"."LINEITEM","SNOWFLAKE_SAMPLE_DATA"."TPCH_SF10"."REGION", "SNOWFLAKE_SAMPLE_DATA"."TPCH_SF10"."NATION","SNOWFLAKE_SAMPLE_DATA"."TPCH_SF10"."SUPPLIER"
where Year("O_ORDERDATE")=$(vYear) and Month("O_ORDERDATE")=$(vMonthNumber) and R_NAME = '$(vQVDName)' and "L_ORDERKEY"="O_ORDERKEY" and "C_CUSTKEY"="O_CUSTKEY" and "N_NATIONKEY"="C_NATIONKEY" and "N_REGIONKEY"="R_REGIONKEY" and "S_SUPPKEY"="L_SUPPKEY";
Once the applications have been set up, they can be used in the automation template. The template requires the configuration of two blocks:
If you wish to add debugging to the reloads you can add this after the “Wait For Reloads To Complete” block.
If you can’t identify the app that failed based on its id, use the "Get App Information" block to retrieve the app name.
Note that automation logs are removed after 30 days. If you wish to keep older logs, then you must add blocks to write the failed reload logs to a cloud storage destination.
The information in this article is provided as-is and to be used at your own discretion. Depending on tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
This article gives an overview of verifying the load script of a Qlik Sense app from a new GitHub pull request using Qlik Application Automation. It explains a basic example of a template configured in Qlik Application Automation for this scenario.
You can make use of the template that is already available in the template picker.
You can find it by navigating to Add new -> New automation -> Search templates, searching for 'Verify load script in Qlik Sense app from new GitHub pull request' in the search bar, and clicking the Use template option to use it in the automation.
Before you begin, you'll need the following:
The following steps describe how to build the demo automation. You can also find an exported version of this automation attached to this article as "Verify-Qlik-Sense-app-load-script-from-new-GitHub-pull-request.json".
A JSON file containing the above automation is attached to this article.
Please follow the steps provided in the How to import automation from a JSON file article to import the automation.
This is a current limitation.
Variables are among the settings that are only saved as values (not as formulas) when the subscription is saved.
The subscription report will continue to use the first value.
We have an improvement planned for 2023 to include variables as formulas, but there is currently no precise ETA.