How to use the Cloud Storage Connector for Amazon S3 with Qlik Application Automation

Alvaro_Palacios
Support

Last Update: Feb 9, 2024 4:27:15 AM
Updated By: Qlik-Lorena
Created date: Sep 17, 2021 3:51:17 AM

Inside Qlik Application Automation, the Amazon S3 functionality is split across two connectors: the native Cloud Storage connector and a dedicated Amazon S3 connector. To create, update, and delete files, it’s highly recommended to use the native Cloud Storage connector. To retrieve information and metadata about regions and buckets, use the Amazon S3 connector.

As an example, the Amazon S3 connector can be used in an automation to output a paginated list of regions and the buckets in each region (that connector is not covered further in this article).


Environment:

Qlik Application Automation

This article focuses on the available blocks in the native Cloud Storage connector in Qlik Application Automation to work with files stored in S3 buckets. It will provide some examples of basic operations such as listing files in a bucket, opening a file, reading from an existing file, creating a new file in a bucket, and writing lines to an existing file.

The Cloud Storage connector supports additional building blocks to copy files, move files, and check if a file already exists in a bucket, which can help with additional use cases. The Amazon S3 connection also supports advanced use cases, such as generating a URL that grants temporary access to an S3 object, or downloading a file from a public URL and uploading it to Amazon S3.
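
For comparison outside of Qlik Application Automation, the temporary-access use case corresponds to an S3 presigned URL. Here is a minimal sketch using the AWS SDK for Python (boto3); the bucket and object names are hypothetical:

import boto3

# Credentials are resolved from the environment (profile, env vars, etc.).
s3 = boto3.client("s3")

# Generate a URL that grants temporary (here, 1 hour) read access to an object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "reports/example.csv"},
    ExpiresIn=3600,  # validity in seconds
)
print(url)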

Let’s get started.

Authentication for this connector is based on access keys.

Log in to the AWS console with an IAM user to generate the access key ID and secret access key required to authenticate.
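
The connector itself only asks for these two values when you configure the connection. For comparison, the same key pair would authenticate the AWS SDK for Python (boto3); a minimal sketch with placeholder values:

import boto3

# Placeholder credentials for illustration only; in practice, prefer an AWS
# profile or environment variables over hard-coding keys.
session = boto3.session.Session(
    aws_access_key_id="AKIAXXXXXXXXXXXXXXXX",    # access key ID of the IAM user
    aws_secret_access_key="your-secret-access-key",
    region_name="eu-west-1",                     # hypothetical region
)
s3 = session.client("s3")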


Now let's go over the basic use cases and the supporting building blocks in the Cloud Storage connector to work with Amazon S3:

  1. How to list files from an existing S3 bucket

    1. Create an Automation.
    2. From the left menu, select the Cloud Storage connector.

    3. Search for the List Files block from the available list of blocks.

    4. Drag and drop the block into the automation and connect it to the Start block.

    5. The ‘Path’ parameter of this block lets you list the contents of a specific directory in your S3 bucket. In this example, ‘./’ indicates the root directory of your bucket.
    6. Drag and drop the Output block into the automation and connect it to the List Files on Amazon S3 block.

    7. Run the automation. If it was not previously saved, a 'Save automation' popup will appear. This will output a paginated list of files available in the root directory of an S3 bucket (for an equivalent outside Qlik, see the first boto3 sketch after this list).

  2. How to open an existing file and read from it

    1. The first two steps are similar to those described before.
    2. Now use the Open File block from the list.
    3. Drag and drop the block into the automation, link it to the Start block, and fill in the required parameters, i.e., Path, Region, and Bucket. You can use ‘do look up’ to search across your S3 account. Add the file directory, filename, and file extension under ‘Path’, e.g. ./4bxH6V4ac9zoAxZU.csv

    4. Drag and drop the Read Data From File block and link it to the previous block. Use the output from the previous block as input.

    5. Drag and drop the Output block into the automation before running it. This will output a paginated table with the data stored in the file (the second boto3 sketch after this list shows an equivalent read outside Qlik).

  3. How to create a new file in the S3 bucket (deleting it first if it already exists), write lines of data, and save and close the file

    1. The first two steps are similar to the two previous use cases.
    2. Now select the Check If File Exists block, drag and drop it into the automation, and link it to the Start block.

    3. The previous block will return ‘True’ if the file exists, and ‘False’ if it doesn’t. Now search for the Condition block to drag and drop it into the automation. Link it to the previous block and add the following condition:

    4. First, let’s focus on the ‘YES’ part of the condition. Search for the Delete File block and drag and drop it into the automation. If the file already exists, this block deletes it from the S3 bucket.
    5. Hide the ‘NO’ part of the condition and continue building the automation at the loose end of the Condition block (this part executes regardless of how the condition evaluates). First, search for the Create File block, add it to the canvas, and connect it to the previous block.
    6. Next, search for the Write Line to File block and connect it to the Create File on Amazon S3 block. Fill in the required input parameters: select CSV as ‘Mode’ and specify the ‘Column names’ (i.e. headers) of the file.

    7. This example shows how to define ‘Column names’ manually, but this can also be automated by using the Get Keys formula on files stored in S3 buckets, or on lists or objects defined as variables. The same applies to the ‘Data’ input parameter: here a single line of data has been added manually, but you could read data from other sources (e.g. tables, flat files) or loop through a list of items and write each item as a line in the CSV file, which requires additional data transformations. See the ‘Csv’ function under the ‘Other Functions’ link.
    8. Finally, search for the Save and Close block and link it to the Write Line to File on Amazon S3 block. Optionally, add the Output block to the automation; it will show the path where the file has been saved and closed in the S3 bucket.

    9. Optionally, you can add further building blocks as a continuation of your automation to check the content of the newly created file in the S3 bucket. If the file has been successfully created and written to, this will output its content as rows; otherwise, it’ll output ‘File Not Found’ (the third boto3 sketch after this list covers the same check-delete-create-write flow).

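
For reference, the following boto3 sketches show roughly what each of the three use cases above does, outside of Qlik Application Automation. All bucket and object names are hypothetical. First, listing files (use case 1) maps to a paginated object listing:

import boto3

s3 = boto3.client("s3")

# Page through all objects in the bucket, mirroring the paginated
# output of the List Files block.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-example-bucket"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])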
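
Second, opening a file and reading from it (use case 2) maps to fetching an object and parsing its body, here as CSV to match the example file:

import csv
import io
import boto3

s3 = boto3.client("s3")

# Fetch the object and decode its body, then read it row by row,
# like the Open File + Read Data From File blocks.
resp = s3.get_object(Bucket="my-example-bucket", Key="4bxH6V4ac9zoAxZU.csv")
text = resp["Body"].read().decode("utf-8")
for row in csv.reader(io.StringIO(text)):
    print(row)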
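
Third, the check-delete-create-write flow (use case 3) can be sketched as follows; note that a single put_object call collapses the Create File, Write Line to File, and Save and Close steps:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket, key = "my-example-bucket", "output/new_file.csv"  # hypothetical names

# Check If File Exists: head_object raises a ClientError with code 404
# when the object is absent.
try:
    s3.head_object(Bucket=bucket, Key=key)
    s3.delete_object(Bucket=bucket, Key=key)  # 'YES' branch: Delete File
except ClientError as err:
    if err.response["Error"]["Code"] != "404":
        raise  # a real error, not just "file doesn't exist"

# Create the file with a header line and one line of data (CSV mode).
s3.put_object(Bucket=bucket, Key=key, Body=b"id,name\n1,example\n")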

Server Side Encryption update:

The Amazon S3 connector in Qlik Application Automation now supports adding an SSE header value when creating new files. This header is available on the Create File and Copy File blocks. The default behavior is AES256 encryption; alternatively, you can choose aws:kms encryption and provide a valid KMS Key ID.
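
For comparison, the equivalent SSE headers in boto3 look like this (bucket, key, and KMS Key ID are hypothetical):

import boto3

s3 = boto3.client("s3")

# Default behavior: AES256 server-side encryption.
s3.put_object(
    Bucket="my-example-bucket",
    Key="encrypted/file.csv",
    Body=b"id,name\n1,example\n",
    ServerSideEncryption="AES256",
)

# Alternative: aws:kms encryption with a valid KMS Key ID.
s3.put_object(
    Bucket="my-example-bucket",
    Key="encrypted/file_kms.csv",
    Body=b"id,name\n1,example\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
)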

Attached example file: create_and_write_files_amazon_s3.json

The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.

Comments
elifapa
Contributor III

Hello,

Thank you for the documentation. 

Does the Amazon S3 connector in Qlik Application Automation support server-side encryption (SSE)? I am not able to pass the SSE header value in automations, but I can do it in the data connection configuration inside the app.

Emile_Koslowski
Employee

Hi,

Answering for consistency (since this was answered on the forum here):

Answer: This is indeed not possible in the Amazon S3 connector in automations.
If you require this functionality, I suggest you ask for it through ideation.

OppilaalDaniel
Partner - Contributor III

Hi, 

Is it possible to read data from a QVD using the Read Data From File block on Amazon S3?

Sonja_Bauernfeind
Digital Support

Hello @OppilaalDaniel 

Please post it in the Qlik Application Automation forum to give your question appropriate reach and attention.

All the best,
Sonja 

bcristescu
Contributor

Hi,

Is it possible to open a file (CSV or JSON), write a line to it, and then close it? In this scenario, I have a file that I want to update with a new line every time the job runs.

Thank you for your answer.

Best regards,

Bogdan
