
How to use Cloud Storage Connector for Amazon S3 with Qlik Application Automation

Alvaro_Palacios
Support

Inside Qlik Application Automation, the Amazon S3 functionality is split into two connectors: the native Cloud Storage connector and the dedicated Amazon S3 connector. To create, update, and delete files, it’s highly recommended to use the native Cloud Storage connector. To get information and metadata about regions and buckets, use the Amazon S3 connector. Below is an example of an automation that uses the Amazon S3 connector to output a paginated list of regions and the buckets in each region (not covered in this article):

 

Alvaro_Palacios_0-1631864473034.png
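For readers who want to see what this listing corresponds to outside of Qlik Application Automation, here is a minimal Python (boto3) sketch of the same regions-and-buckets enumeration. This is an illustration only, not part of the automation, and it assumes AWS credentials are already configured in your environment:

```python
# Minimal boto3 sketch: list all buckets and the region each one lives in.
# Assumes AWS credentials are already configured (e.g. environment variables).
import boto3

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    # get_bucket_location returns None as the constraint for us-east-1
    region = s3.get_bucket_location(Bucket=name)["LocationConstraint"] or "us-east-1"
    print(region, name)
```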

 

 

This article, however, focuses on the blocks available in the native Cloud Storage connector in Qlik Application Automation for working with files stored in S3 buckets. It provides examples of basic operations such as listing the files in a bucket, opening a file, reading from an existing file, creating a new file in a bucket, and writing lines to an existing file.

The Cloud Storage connector supports additional building blocks to copy files, move files, and check whether a file already exists in a bucket, which can help with further use cases. The Amazon S3 connector, however, supports advanced use cases such as generating a URL that grants temporary access to an S3 object, or downloading a file from a public URL and uploading it to Amazon S3.
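Both advanced use cases map onto standard S3 API calls. For reference, the following boto3 sketch illustrates them; the bucket name, object key, and source URL are hypothetical placeholders:

```python
# boto3 sketch of the two advanced use cases mentioned above.
import urllib.request
import boto3

s3 = boto3.client("s3")

# 1. Generate a presigned URL granting temporary (1 hour) read access to an object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "report.csv"},  # placeholders
    ExpiresIn=3600,
)
print(url)

# 2. Download a file from a public URL and upload it to the bucket.
with urllib.request.urlopen("https://example.com/data.csv") as response:
    s3.upload_fileobj(response, "my-bucket", "data.csv")
```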

Let’s get started. Authentication for this connector is based on tokens or keys. Log in to the AWS console with an IAM user to generate the access key ID and secret access key required to authenticate.

Alvaro_Palacios_1-1631864473056.png
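For comparison, this is how the same access key ID and secret access key would authenticate a session outside the automation, in a minimal boto3 sketch. The credential values and region are placeholders; never hard-code real keys in scripts:

```python
# Minimal boto3 sketch of key-based authentication. All values are placeholders.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIA...",        # access key ID (placeholder)
    aws_secret_access_key="wJalr...",   # secret access key (placeholder)
    region_name="eu-west-1",            # hypothetical region
)
s3 = session.client("s3")
```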

 

Now let's go over the basic use cases and the supporting building blocks in the Cloud Storage connector to work with Amazon S3:

  1. How to list files from an existing S3 bucket

a. Create an automation.

b. From the left menu, select the Cloud Storage connector.

Alvaro_Palacios_2-1631864473079.png

 

 

c. Search for the List Files block from the available list of blocks.

Alvaro_Palacios_3-1631864473094.png

 

d. Drag and drop the block into the automation and connect it after the Start block.

Alvaro_Palacios_4-1631864473105.png

 

e. The ‘Path’ parameter of this block lets you list the contents of a specific directory in your S3 bucket. In this example, ‘./’ indicates the root directory of your bucket.

f. Drag and drop the ‘Output’ block into the automation and connect it to the ‘List Files on Amazon S3’ block.

Alvaro_Palacios_5-1631864473120.png

 

g. Run the automation (if it hasn’t been saved previously, a ‘Save automation’ popup will appear). This outputs a paginated list of the files available in the root directory of an S3 bucket. A boto3 equivalent of this listing is sketched below.
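For reference, the following boto3 sketch performs the same root-directory listing outside the automation. The bucket name is a hypothetical placeholder, and list_objects_v2 is paginated, mirroring the paginated output of the block:

```python
# boto3 sketch: list the files in the root "directory" of a bucket.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Delimiter="/" restricts the listing to keys at the bucket root,
# the equivalent of './' in the block's Path parameter.
for page in paginator.paginate(Bucket="my-bucket", Delimiter="/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```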

 

  2. How to open an existing file and read from it

a. The first two steps are the same as described above.

b. Now use the ‘Open File’ block from the list.

c. Drag and drop the block into the automation, link it to the Start block, and fill in the required parameters, i.e. Path, Region, and Bucket. You can use ‘do look up’ to search across your S3 account. Add the file directory, filename, and file extension under ‘Path’, e.g. ./4bxH6V4ac9zoAxZU.csv

Alvaro_Palacios_6-1631864473155.png

 

d. Drag and drop the ‘Read Data From File’ block and link it to the previous block. Use the output from the previous block as its input.

Alvaro_Palacios_7-1631864473176.png

 

e. Drag and drop the ‘Output’ block into the automation before running it. This will output a paginated table with the data stored in the file (a boto3 equivalent follows the screenshots below).

Alvaro_Palacios_8-1631864473201.png

 

Alvaro_Palacios_9-1631864473216.png
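For reference, a minimal boto3 sketch of the same open-and-read flow is shown below. The bucket name is a placeholder, and the key reuses the example filename from step c:

```python
# boto3 sketch: open a CSV file in a bucket and read its rows.
import csv
import io
import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket", Key="4bxH6V4ac9zoAxZU.csv")

# Decode the object body and read it row by row, like the paginated Output table.
body = obj["Body"].read().decode("utf-8")
for row in csv.DictReader(io.StringIO(body)):
    print(row)
```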

 

  3. How to create a new file in the S3 bucket (deleting it first if it already exists), write lines of data, and save and close the opened file

a. The first two steps are the same as in the two previous use cases.

b. Now select the ‘Check If File Exists’ block, drag and drop it into the automation, and link it to the Start block.

Alvaro_Palacios_10-1631864473229.png

 

c. The previous block returns ‘True’ if the file exists and ‘False’ if it doesn’t. Now search for the ‘Condition’ block, drag and drop it into the automation, link it to the previous block, and add the following condition:

Alvaro_Palacios_11-1631864473253.png

 

d. First, let’s focus on the ‘YES’ branch of the condition. Search for the ‘Delete File’ block and drag and drop it into the automation. This deletes the specified file inside the S3 bucket if it already exists.

e. Hide the ‘NO’ branch of the condition and continue building the automation at the loose end of the ‘Condition’ block (this part executes regardless of how the condition evaluates). First search for the ‘Create File’ block, add it to the canvas, and connect it to the previous block.

f. Next, search for the ‘Write Line to File’ block and connect it to the ‘Create File on Amazon S3’ block. Fill in the required input parameters: select CSV as ‘Mode’ and specify the ‘Column names’ (i.e. headers) of the file.

Alvaro_Palacios_12-1631864473316.png

 

 

g. This example shows how to define ‘Column names’ manually, but this can also be automated using the Get Keys formula and reading files stored in S3 buckets, lists, or objects defined as variables. The same applies to the ‘Data’ input parameter: a single line of data has been added manually here, but we could instead read data from other data sources (e.g. tables, flat files, etc.) or loop through a list of items and write each item as a line in the CSV file, although that requires additional data transformations. Check the ‘Csv’ function under the ‘Other Functions’ link.

h. Finally, search for the ‘Save and Close’ block and link it to the ‘Write Line to File on Amazon S3’ block. Optionally, add the ‘Output’ block to the automation, which shows the path where the file has been saved and closed on the S3 bucket.

Alvaro_Palacios_13-1631864473350.png

 

i. Optionally, you can add the following building blocks as a continuation of your automation to check the content of the newly created file in the S3 bucket. If the file has been successfully created and written to, this outputs its content as rows; otherwise it outputs ‘File Not Found’. A boto3 sketch of this entire use case follows the screenshot below.

 

Alvaro_Palacios_14-1631864473382.png
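For reference, the following boto3 sketch mirrors this whole use case outside the automation: check whether the file exists, delete it if it does, write a CSV header plus one line of data, then read the content back to verify. The bucket and key names are placeholders:

```python
# boto3 sketch of the full flow: check/delete, create and write, then verify.
import csv
import io
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket, key = "my-bucket", "output.csv"  # placeholders

# 'Check If File Exists' + 'Condition' + 'Delete File'
try:
    s3.head_object(Bucket=bucket, Key=key)
    s3.delete_object(Bucket=bucket, Key=key)  # file exists: delete it first
except ClientError:
    pass                                      # file doesn't exist: nothing to do

# 'Create File' + 'Write Line to File' (CSV mode) + 'Save and Close'
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["id", "name"])     # column names (headers)
writer.writerow(["1", "example"])   # one line of data
s3.put_object(Bucket=bucket, Key=key, Body=buffer.getvalue())

# Optional verification, as in step i: read the content back as rows.
obj = s3.get_object(Bucket=bucket, Key=key)
for row in csv.reader(io.StringIO(obj["Body"].read().decode("utf-8"))):
    print(row)
```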

 

Attached example file: create_and_write_files_amazon_s3.json

The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
