<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>article Building automations for data loading in Official Support Articles</title>
    <link>https://community.qlik.com/t5/Official-Support-Articles/Building-automations-for-data-loading/ta-p/1788732</link>
    <description>&lt;DIV class="lia-message-template-content-zone"&gt;
&lt;P&gt;This article describes best practices for building automations in Qlik Application Automation, that load data from a source cloud application into data files on cloud storage. Example: writing data from Marketo or Salesforce into CSV files or JSON files on AWS S3.&lt;/P&gt;
&lt;P&gt;The goal of these patterns is to implement automations that are part of an overall ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) flow. For example an automation could write data to S3 files, which are then loaded into Qlik Sense using a load script.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="dataloading.png" style="width: 999px;"&gt;&lt;img src="https://community.qlik.com/t5/image/serverpage/image-id/50520i3C292A6A38AE7A7F/image-size/large?v=v2&amp;amp;px=999" role="button" title="dataloading.png" alt="dataloading.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="6"&gt;Qlik Application Automation is not an ETL or ELT tool&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Before we dive in, please note that Qlik Application Automation is not an ETL or ELT tool. Qlik Application Automation is an iPaaS that uses the APIs of cloud applications to read and write data. Automations process individual records/objects in loops, which is a different approach from batch-oriented CDC (change data capture) solutions such as Qlik Replicate.&lt;/P&gt;
&lt;P&gt;That said, you can create an automation that reads data from a source (e.g. Marketo) and writes that data to, for example, CSV files on cloud storage, as described in the next paragraphs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="6"&gt;Handling the schema&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;Hardcoded schema with manual field mapping&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;The schema is typically hardcoded in the automation by mapping the desired fields from the source to columns in the destination. This means that if a custom field is added in the source, the field mapping must be added in the automation so that the new column appears in the CSV files. If you are writing to e.g. Snowflake, you would have to add the column to the correct table manually.&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;Getting the schema from the source (meta data)&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Some connectors have endpoints (blocks) to fetch metadata, e.g. "Describe object" in the Salesforce connector allows you to query the schema (standard fields and custom fields) of a given object in Salesforce. You can use this metadata to dynamically set the CSV columns in the automation.&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;Introspection of the data&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Alternatively, you can also perform "introspection" on the data, e.g. by looking at one or more records and using the keys as columns. Note that if a certain key is missing in the record used for introspection, that column will be missing in your CSV file.&lt;/P&gt;
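&lt;P&gt;As a rough illustration of this pitfall, the Python sketch below (illustrative only; automations use blocks and formulas rather than Python) derives the columns from the union of keys across several sampled records instead of a single record, so a key that is missing from the first record is not silently dropped:&lt;/P&gt;

```python
def columns_from_sample(records):
    """Collect the union of keys across sampled records,
    preserving first-seen order, so that a key missing from
    the first record still becomes a CSV column."""
    columns, seen = [], set()
    for record in records:
        for key in record:
            if key not in seen:
                seen.add(key)
                columns.append(key)
    return columns

sample = [
    {"id": 1, "name": "Acme"},                         # no "phone" key
    {"id": 2, "name": "Globex", "phone": "555-0100"},  # adds "phone"
]
print(columns_from_sample(sample))  # ['id', 'name', 'phone']
```

&lt;P&gt;Introspecting only the first record here would have produced just "id" and "name".&lt;/P&gt;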
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="6"&gt;1. EXTRACT&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;1.1. Full data dump&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;The automation will read all records from the source and write these to a single CSV file.&lt;BR /&gt;On the next run, the file is removed and a new full dump file is created.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Pro: easy to build, single file per object type&lt;/LI&gt;
&lt;LI&gt;Con: does not scale, e.g. if you have 1 million Accounts in a CRM and you use a daily schedule, you might hit API rate limits and/or the automation could become too slow.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;FONT size="5"&gt;1.2. Incremental data dump&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;The automation will read records from the source incrementally, and it will create a new CSV file on every run.&lt;/P&gt;
&lt;P&gt;The first CSV file will typically contain a full dump of the data.&lt;BR /&gt;Subsequent files will contain all new and updated records since the previous run.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://help.blendr.io/docs/process-new-data-incrementally" target="_blank" rel="noopener"&gt;Learn more about incremental blocks&lt;/A&gt;&lt;/P&gt;
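&lt;P&gt;Conceptually, an incremental block keeps a pointer (e.g. the timestamp of the last run) and only fetches records created or updated after it. The Python sketch below illustrates that idea only; it is not the actual implementation, and fetch_records stands in for a hypothetical source connector's list endpoint:&lt;/P&gt;

```python
from datetime import datetime, timezone

state = {"pointer": None}  # no pointer yet: the first run is a full dump

def incremental_run(fetch_records):
    since = state["pointer"]
    records = fetch_records(updated_after=since)  # None -> fetch everything
    state["pointer"] = datetime.now(timezone.utc).isoformat()
    return records

# Simulated source with ISO "updated" timestamps (string comparison
# works because the format is fixed).
source = [
    {"id": 1, "updated": "2021-08-01T00:00:00+00:00"},
    {"id": 2, "updated": "2021-08-02T00:00:00+00:00"},
]

def fetch(updated_after=None):
    if updated_after is None:
        return list(source)
    return [r for r in source if r["updated"] > updated_after]

first = incremental_run(fetch)   # full dump: both records
second = incremental_run(fetch)  # nothing changed since: empty
```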
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="4"&gt;1.2.1 Only new data is added in the source (no updates)&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Each record will only appear once in your set of CSV files. This makes loading of these CSV files into a final destination (e.g. a BI tool) straightforward.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="4"&gt;1.2.2 Data is updated in the source (create + update)&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;An individual record may appear in multiple CSV files. E.g. let's assume John Doe was added to a CRM on Monday and his email was updated on Wednesday. John Doe will now appear twice in your CSV file set:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;CSV file Monday: John Doe, john@acme.com&lt;/LI&gt;
&lt;LI&gt;CSV file Tuesday: (empty; the record was not created or updated that day)&lt;/LI&gt;
&lt;LI&gt;CSV file Wednesday: John Doe, john@newcompany.com&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This requires attention when loading the CSV files into a final destination such as a BI tool. You want to make sure that John Doe appears only once in your dataset, and you want only the most recent record (from Wednesday) to survive. See below for an example load script in Qlik Sense that solves this issue.&lt;/P&gt;
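&lt;P&gt;The dedupe logic can be illustrated with a small Python sketch (assumptions for the example: files are named with a sortable timestamp such as Leads_2020-01-20.csv, and each record has an "id" column). Files are processed from oldest to newest, and newer rows overwrite older ones, so only the most recent version of each record survives:&lt;/P&gt;

```python
import csv
import io

def latest_records(files_by_name):
    """files_by_name: dict of file name -> CSV text.
    Returns one row per id, taken from the newest file
    (file names contain sortable ISO timestamps)."""
    latest = {}
    for name in sorted(files_by_name):  # oldest first ...
        for row in csv.DictReader(io.StringIO(files_by_name[name])):
            latest[row["id"]] = row     # ... so newer rows overwrite older ones
    return latest

files = {
    "Leads_2020-01-20.csv": "id,name,email\n1,John Doe,john@acme.com\n",
    "Leads_2020-01-22.csv": "id,name,email\n1,John Doe,john@newcompany.com\n",
}
print(latest_records(files)["1"]["email"])  # john@newcompany.com
```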
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="4"&gt;1.2.3 Data is updated and deleted in the source (create + update + delete)&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Records that are deleted in the source will simply "no longer show up" in the API. Most APIs do not have a way to query for deleted records. To make sure deleted records are also removed in the final destination (e.g. your BI dashboards), you can implement the following pattern:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Full sync on Sunday&lt;/LI&gt;
&lt;LI&gt;Incremental sync on Monday&lt;/LI&gt;
&lt;LI&gt;Incremental sync on Tuesday&lt;/LI&gt;
&lt;LI&gt;...&lt;/LI&gt;
&lt;LI&gt;Incremental sync on Saturday&lt;/LI&gt;
&lt;LI&gt;Full sync on Sunday: delete all files, start from scratch by removing the pointer of the incremental block&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This means you will always have a maximum of 7 CSV files per object type on your cloud storage.&lt;/P&gt;
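&lt;P&gt;A minimal Python sketch of this weekly schedule (illustrative only; storage and state are simplified stand-ins for your cloud storage and the incremental block's pointer):&lt;/P&gt;

```python
def plan_run(weekday, storage, state):
    """weekday follows datetime.weekday(): 0 = Monday ... 6 = Sunday."""
    if weekday == 6:             # Sunday: start from scratch
        storage.clear()          # delete all CSV files
        state["pointer"] = None  # reset the incremental block's pointer
        return "full"
    return "incremental"         # Monday .. Saturday

storage = {"Leads_2021-08-23.csv": "...", "Leads_2021-08-24.csv": "..."}
state = {"pointer": "2021-08-24T00:00:00Z"}
print(plan_run(6, storage, state))  # full
print(plan_run(0, storage, state))  # incremental
```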
&lt;P&gt;&lt;FONT size="5"&gt;1.3 Writing to CSV files&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Create a new file and write a CSV header line with your columns.&lt;BR /&gt;Next, loop over your data, and use the CSV formula to convert each record (object) to a CSV line.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://help.blendr.io/docs/using-formulas-in-placeholders#csv" target="_blank" rel="noopener"&gt;Learn more about the CSV formula&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Pay attention to pitfalls when converting a JSON object to CSV:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The order of keys in JSON can vary across records, while the columns are fixed in a CSV file&lt;/LI&gt;
&lt;LI&gt;Some keys may be missing across records, while missing columns in a line would corrupt your CSV file&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The above pitfalls are solved by using the CSV formula with a fixed set of columns that is applied for each line that is written to the CSV file.&lt;/P&gt;
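&lt;P&gt;In Python terms, applying a fixed set of columns to every line looks like the sketch below (illustrative; in an automation you would use the CSV formula instead). Varying key order is harmless because each row is written in the column order, and missing keys become empty cells rather than shifting the line:&lt;/P&gt;

```python
import csv
import io

def to_csv(records, columns):
    """Write records with a fixed column order; missing keys
    become empty cells instead of corrupting the line."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns)
    writer.writeheader()
    for record in records:
        writer.writerow({c: record.get(c, "") for c in columns})
    return buf.getvalue()

records = [
    {"name": "Acme", "city": "Boston"},                         # no "phone"
    {"city": "Berlin", "name": "Globex", "phone": "555-0100"},  # different key order
]
print(to_csv(records, ["name", "city", "phone"]))
```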
&lt;P&gt;Example automation:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="dataloading_example_blend.png" style="width: 430px;"&gt;&lt;img src="https://community.qlik.com/t5/image/serverpage/image-id/50521i24A63A96C8703D9E/image-size/large?v=v2&amp;amp;px=999" role="button" title="dataloading_example_blend.png" alt="dataloading_example_blend.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;Notes on the above example automation:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;We use the following Condition to check if we are processing the first item (to write the CSV header line): { $.listCompanies.index } = 0&lt;/LI&gt;
&lt;LI&gt;We are using "introspection" on the first record to set the CSV columns. We flatten the object first with custom code. Coming soon: new formula "flatten". Then we use the formula "getkeys" to use the keys of the first flattened record as column names.&lt;/LI&gt;
&lt;LI&gt;We apply the CSV columns on each row that we write. This will map the keys of the actual object (Hubspot Company) to the CSV columns. For example, the column name "address.street" will map the nested property to the column with this name.&lt;/LI&gt;
&lt;/UL&gt;
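&lt;P&gt;The flattening step can be sketched in Python as follows (illustrative; in the example automation this is done with custom code, or with the upcoming "flatten" formula). Nested properties become dotted column names such as "address.street":&lt;/P&gt;

```python
def flatten(obj, prefix=""):
    """Flatten nested dicts into dotted keys, e.g. address.street."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat

company = {"name": "Acme", "address": {"street": "Main St 1", "city": "Boston"}}
print(flatten(company))
# {'name': 'Acme', 'address.street': 'Main St 1', 'address.city': 'Boston'}
```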
&lt;P&gt;&lt;FONT size="5"&gt;1.4 Writing to JSON files&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Create a file on cloud storage with a ".json" extension. Loop over your data and write individual objects to the file. Each object will automatically be converted to a JSON representation in the file.&lt;/P&gt;
&lt;P&gt;Note: if your final destination is Qlik Sense, use CSV files instead (see above).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="6"&gt;2. TRANSFORM&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Transformations in automations are accomplished by using a combination of field mappings, formulas and variables. For example, you could transform an object from a source (e.g. an Account in a CRM) to a new object (e.g. a Customer) by using a variable of type "object" and by applying field mappings on individual fields and optionally using formulas.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://help.blendr.io/docs/variable-block" target="_blank" rel="noopener"&gt;Learn more about variables&lt;/A&gt;&lt;BR /&gt;&lt;A href="https://help.blendr.io/docs/using-formulas-in-placeholders" target="_blank" rel="noopener"&gt;Learn more about formulas&lt;/A&gt;&lt;/P&gt;
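&lt;P&gt;As a rough illustration of such a transformation, the Python sketch below maps a hypothetical Account object onto a new Customer object, applying a simple formula (lowercasing) to one field; all field names here are made up for the example:&lt;/P&gt;

```python
def transform_account(account):
    """Map a source Account onto a destination Customer object."""
    return {
        "customer_name": account["Name"],
        "customer_email": account["Email"].lower(),         # formula: lowercase
        "region": account.get("BillingCountry", "unknown"), # default when missing
    }

account = {"Name": "Acme", "Email": "Sales@Acme.com"}
print(transform_account(account))
# {'customer_name': 'Acme', 'customer_email': 'sales@acme.com', 'region': 'unknown'}
```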
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="6"&gt;3. LOAD&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;The automations described here write the data to CSV or JSON files on cloud storage (e.g. AWS S3 or Dropbox or Google Cloud Storage). The goal however is to eventually load the data into a data warehouse, a data lake or a BI/visualisation tool such as Qlik Sense.&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;3.1 Writing to a database or data warehouse (e.g. Snowflake)&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Instead of using CSV files, you could write directly to a database or even a data warehouse (MySQL or Snowflake), using one of the available connectors.&lt;/P&gt;
&lt;P&gt;Note that an automation will write the data record by record using an "Insert" or "Update" block, and depending on the connector you could write in batches of e.g. 10 or 100 records to optimize performance.&lt;/P&gt;
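&lt;P&gt;The batching idea can be sketched in Python as follows (illustrative only; insert_batch stands in for a connector's batch insert block):&lt;/P&gt;

```python
def write_in_batches(records, insert_batch, batch_size=100):
    """Group records into batches and flush each full batch,
    plus any remainder at the end."""
    batch = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            insert_batch(batch)
            batch = []
    if batch:
        insert_batch(batch)  # flush the remainder

calls = []
write_in_batches(list(range(250)), calls.append, batch_size=100)
print([len(b) for b in calls])  # [100, 100, 50]
```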
&lt;P&gt;As mentioned before, schema management is typically not handled in the automation. The automation assumes that the tables exist in the destination with all the required columns.&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;3.2 Loading data into Qlik Sense&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Once the data is written to CSV files on S3, you can load the data into Qlik Sense by setting up a datasource to your S3 bucket and using a load script.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="4"&gt;3.2.1 Loading data into Qlik Sense using "full data dump" files&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In this scenario you have one CSV file per table (object type) and you simply load these "full data dump" files into Qlik Sense.&lt;/P&gt;
&lt;P&gt;Example load script in Qlik Sense:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;LOAD *&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;FROM [lib://Amazon_S3/hubspotcompanies.csv]&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;(txt, utf8, embedded labels, delimiter is ',', msq);&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;We can use the Qlik Cloud Services (QCS) connector to set the load script from the automation and trigger a reload. Example:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="setloadscript_doreload.png" style="width: 999px;"&gt;&lt;img src="https://community.qlik.com/t5/image/serverpage/image-id/50522i57B3821600DEDBD4/image-size/large?v=v2&amp;amp;px=999" role="button" title="setloadscript_doreload.png" alt="setloadscript_doreload.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="4"&gt;3.2.2 Loading data into Qlik Sense using "incremental data dump" files&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;If you have built an "incremental data dump" automation, you will have multiple CSV files per table. Example:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;CSV file Monday: initial dump of Account records&lt;/LI&gt;
&lt;LI&gt;CSV file Tuesday: incremental dump of Account records that were added or updated since Monday&lt;/LI&gt;
&lt;LI&gt;CSV file Wednesday: incremental dump of Account records that were added or updated since Tuesday&lt;BR /&gt;Etc.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;As described in the above example with "John Doe", records may appear in multiple CSV files. The below load script will solve this issue in 3 steps:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Load all CSV files one by one in a loop&lt;/LI&gt;
&lt;LI&gt;Order the data by most recent records first (we only want the most recent record to survive)&lt;/LI&gt;
&lt;LI&gt;Remove duplicates (so that only the most recent record survives)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Below is a load script example that loads "Leads" from a folder "Marketo" on S3.&lt;BR /&gt;Naming of each file: &lt;STRONG&gt;Leads_[timestamp].csv&lt;/STRONG&gt;&lt;BR /&gt;Example: Leads_2020-01-20.csv (daily schedule) or Leads_2020-01-20t14:50:00.csv (hourly schedule)&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;for each file in filelist('lib://Amazon_S3/Marketo/')&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; If Index('$(file)','csv') &amp;gt; 0 And Index('$(file)','leads_') &amp;gt; 0 Then&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; &amp;nbsp; Leads_from_all_csv_files:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; &amp;nbsp; Load&amp;nbsp;&lt;/FONT&gt;&lt;FONT face="courier new,courier"&gt;*,&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; &amp;nbsp; Num(Textbetween(Filename(), '_', '.csv')) As FileTimestampNum&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; &amp;nbsp; From [$(file)]&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; &amp;nbsp; (txt, utf8, embedded labels, delimiter is ',');&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; End If&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Next file&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;Leads_most_recent_first:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;NoConcatenate&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Load *&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Resident Leads_from_all_csv_files&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Order By FileTimestampNum Desc;&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;Leads:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Load Distinct&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;id As id_temp, *&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Resident Leads_most_recent_first&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Where Not exists(id_temp, id);&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Drop Table Leads_from_all_csv_files, Leads_most_recent_first;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;You will have to extend this load script for each object type (or table), e.g. one section for Accounts, one section for Leads and one section for Contacts. Each section will be a copy of the above with "leads" replaced with e.g. "accounts".&lt;/P&gt;
&lt;P&gt;Here's another load script example, where the data is stored in a QVD, and on each reload only one new CSV file is processed. The CSV file can be deleted by the automation, once the load script has executed:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;//load all data from QVD (if file already exists)&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;IF (FileSize('lib://DataFiles/Full_Data.qvd')&amp;gt;0) THEN&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; Full_Data:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; LOAD&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; *&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; FROM [lib://DataFiles/Full_Data.qvd] (qvd);&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;END IF;&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;//append new data from one CSV file (contains new &amp;amp; updated records)&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;//add column with row number, needed in sorting, see below&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Full_Data:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;LOAD&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;*,&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;rowno() as rowno&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;FROM [lib://DataFiles/new_data.csv]&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;(txt, codepage is 28591, embedded labels, delimiter is ',', msq);&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;//sort by newest records first, needed to dedupe and keep only most recent version of a record&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Full_Data_Sorted:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;NoConcatenate&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Load *&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Resident Full_Data&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Order By rowno Desc;&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;//remove duplicate records, this will keep most recent version of each record&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Full_Data_Deduped:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Load Distinct&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;name As name_temp, *&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Resident Full_Data_Sorted&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Where Not exists(name_temp, name);&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;Drop field name_temp;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Drop Table Full_Data_Sorted, Full_Data;&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;//Store new dataset in QVD&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Store Full_Data_Deduped into lib://DataFiles/Full_Data.qvd (qvd);&lt;/FONT&gt;&lt;/P&gt;
&lt;/DIV&gt;</description>
    <pubDate>Thu, 26 Aug 2021 14:11:23 GMT</pubDate>
    <dc:creator>NikoNelissen_Qlik</dc:creator>
    <dc:date>2021-08-26T14:11:23Z</dc:date>
    <item>
      <title>Building automations for data loading</title>
      <link>https://community.qlik.com/t5/Official-Support-Articles/Building-automations-for-data-loading/ta-p/1788732</link>
      <description>&lt;DIV class="lia-message-template-content-zone"&gt;
&lt;P&gt;This article describes best practices for building automations in Qlik Application Automation, that load data from a source cloud application into data files on cloud storage. Example: writing data from Marketo or Salesforce into CSV files or JSON files on AWS S3.&lt;/P&gt;
&lt;P&gt;The goal of these patterns is to implement automations that are part of an overall ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) flow. For example an automation could write data to S3 files, which are then loaded into Qlik Sense using a load script.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="dataloading.png" style="width: 999px;"&gt;&lt;img src="https://community.qlik.com/t5/image/serverpage/image-id/50520i3C292A6A38AE7A7F/image-size/large?v=v2&amp;amp;px=999" role="button" title="dataloading.png" alt="dataloading.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="6"&gt;Qlik Application Automation is not an ETL or ELT tool&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Before we dive in, please note that Qlik Application Automation is not an ETL or ELT tool. Qlik Application Automation is an iPaaS that uses the APIs of cloud applications to read and write data. Automations process individual records/objects in loops, which is a different approach from batch-oriented CDC (change data capture) solutions such as Qlik Replicate.&lt;/P&gt;
&lt;P&gt;That said, you can create an automation that reads data from a source (e.g. Marketo) and writes that data to, for example, CSV files on cloud storage, as described in the next paragraphs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="6"&gt;Handling the schema&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;Hardcoded schema with manual field mapping&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;The schema is typically hardcoded in the automation by mapping the desired fields from the source to columns in the destination. This means that if a custom field is added in the source, the field mapping must be added in the automation so that the new column appears in the CSV files. If you are writing to e.g. Snowflake, you would have to add the column to the correct table manually.&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;Getting the schema from the source (meta data)&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Some connectors have endpoints (blocks) to fetch metadata, e.g. "Describe object" in the Salesforce connector allows you to query the schema (standard fields and custom fields) of a given object in Salesforce. You can use this metadata to dynamically set the CSV columns in the automation.&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;Introspection of the data&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Alternatively, you can also perform "introspection" on the data, e.g. by looking at one or more records and using the keys as columns. Note that if a certain key is missing in the record used for introspection, that column will be missing in your CSV file.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="6"&gt;1. EXTRACT&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;1.1. Full data dump&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;The automation will read all records from the source and write these to a single CSV file.&lt;BR /&gt;On the next run, the file is removed and a new full dump file is created.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Pro: easy to build, single file per object type&lt;/LI&gt;
&lt;LI&gt;Con: does not scale, e.g. if you have 1 million Accounts in a CRM and you use a daily schedule, you might hit API rate limits and/or the automation could become too slow.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;FONT size="5"&gt;1.2. Incremental data dump&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;The automation will read records from the source incrementally, and it will create a new CSV file on every run.&lt;/P&gt;
&lt;P&gt;The first CSV file will typically contain a full dump of the data.&lt;BR /&gt;Subsequent files will contain all new and updated records since the previous run.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://help.blendr.io/docs/process-new-data-incrementally" target="_blank" rel="noopener"&gt;Learn more about incremental blocks&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="4"&gt;1.2.1 Only new data is added in the source (no updates)&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Each record will only appear once in your set of CSV files. This makes loading of these CSV files into a final destination (e.g. a BI tool) straightforward.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="4"&gt;1.2.2 Data is updated in the source (create + update)&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;An individual record may appear in multiple CSV files. E.g. let's assume John Doe was added to a CRM on Monday and his email was updated on Wednesday. John Doe will now appear twice in your CSV file set:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;CSV file Monday: John Doe, john@acme.com&lt;/LI&gt;
&lt;LI&gt;CSV file Tuesday: (empty; the record was not created or updated that day)&lt;/LI&gt;
&lt;LI&gt;CSV file Wednesday: John Doe, john@newcompany.com&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This requires attention when loading the CSV files into a final destination such as a BI tool. You want to make sure that John Doe appears only once in your dataset, and you want only the most recent record (from Wednesday) to survive. See below for an example load script in Qlik Sense that solves this issue.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="4"&gt;1.2.3 Data is updated and deleted in the source (create + update + delete)&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Records that are deleted in the source will simply "no longer show up" in the API. Most APIs do not have a way to query for deleted records. To make sure deleted records are also removed in the final destination (e.g. your BI dashboards), you can implement the following pattern:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Full sync on Sunday&lt;/LI&gt;
&lt;LI&gt;Incremental sync on Monday&lt;/LI&gt;
&lt;LI&gt;Incremental sync on Tuesday&lt;/LI&gt;
&lt;LI&gt;...&lt;/LI&gt;
&lt;LI&gt;Incremental sync on Saturday&lt;/LI&gt;
&lt;LI&gt;Full sync on Sunday: delete all files, start from scratch by removing the pointer of the incremental block&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This means you will always have a maximum of 7 CSV files per object type on your cloud storage.&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;1.3 Writing to CSV files&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Create a new file and write a CSV header line with your columns.&lt;BR /&gt;Next, loop over your data, and use the CSV formula to convert each record (object) to a CSV line.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://help.blendr.io/docs/using-formulas-in-placeholders#csv" target="_blank" rel="noopener"&gt;Learn more about the CSV formula&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Pay attention to pitfalls when converting a JSON object to CSV:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The order of keys in JSON can vary across records, while the columns are fixed in a CSV file&lt;/LI&gt;
&lt;LI&gt;Some keys may be missing across records, while missing columns in a line would corrupt your CSV file&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The above pitfalls are solved by using the CSV formula with a fixed set of columns that is applied for each line that is written to the CSV file.&lt;/P&gt;
&lt;P&gt;Example automation:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="dataloading_example_blend.png" style="width: 430px;"&gt;&lt;img src="https://community.qlik.com/t5/image/serverpage/image-id/50521i24A63A96C8703D9E/image-size/large?v=v2&amp;amp;px=999" role="button" title="dataloading_example_blend.png" alt="dataloading_example_blend.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;Notes on the above example automation:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;We use the following Condition to check if we are processing the first item (to write the CSV header line): { $.listCompanies.index } = 0&lt;/LI&gt;
&lt;LI&gt;We are using "introspection" on the first record to set the CSV columns. We flatten the object first with custom code. Coming soon: new formula "flatten". Then we use the formula "getkeys" to use the keys of the first flattened record as column names.&lt;/LI&gt;
&lt;LI&gt;We apply the CSV columns on each row that we write. This will map the keys of the actual object (Hubspot Company) to the CSV columns. For example, the column name "address.street" will map the nested property to the column with this name.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;FONT size="5"&gt;1.4 Writing to JSON files&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Create a file on cloud storage with a ".json" extension. Loop over your data and write individual objects to the file. Each object will automatically be converted to a JSON representation in the file.&lt;/P&gt;
&lt;P&gt;Note: if your final destination is Qlik Sense, use CSV files instead (see above).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="6"&gt;2. TRANSFORM&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Transformations in automations are accomplished by using a combination of field mappings, formulas and variables. For example, you could transform an object from a source (e.g. an Account in a CRM) to a new object (e.g. a Customer) by using a variable of type "object" and by applying field mappings on individual fields and optionally using formulas.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://help.blendr.io/docs/variable-block" target="_blank" rel="noopener"&gt;Learn more about variables&lt;/A&gt;&lt;BR /&gt;&lt;A href="https://help.blendr.io/docs/using-formulas-in-placeholders" target="_blank" rel="noopener"&gt;Learn more about formulas&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="6"&gt;3. LOAD&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;The automations described here write the data to CSV or JSON files on cloud storage (e.g. AWS S3 or Dropbox or Google Cloud Storage). The goal however is to eventually load the data into a data warehouse, a data lake or a BI/visualisation tool such as Qlik Sense.&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;3.1 Writing to a database or data warehouse (e.g. Snowflake)&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Instead of using CSV files, you could write directly to a database or even a data warehouse (MySQL or Snowflake), using one of the available connectors.&lt;/P&gt;
&lt;P&gt;Note that an automation will write the data record by record using an "Insert" or "Update" block, and depending on the connector you could write in batches of e.g. 10 or 100 records to optimize performance.&lt;/P&gt;
&lt;P&gt;As mentioned before, schema management is typically not handled in the automation. The automation assumes that the tables exist in the destination with all the required columns.&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;3.2 Loading data into Qlik Sense&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Once the data is written to CSV files on S3, you can load the data into Qlik Sense by setting up a datasource to your S3 bucket and using a load script.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="4"&gt;3.2.1 Loading data into Qlik Sense using "full data dump" files&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In this scenario you have one CSV file per table (object type) and you simply load these "full data dump" files into Qlik Sense.&lt;/P&gt;
&lt;P&gt;Example load script in Qlik Sense:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;LOAD *&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;FROM [lib://Amazon_S3/hubspotcompanies.csv]&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;(txt, utf8, embedded labels, delimiter is ',', msq);&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;We can use the QCS connector to set the load script from the automation and do a reload. Example:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="setloadscript_doreload.png" style="width: 999px;"&gt;&lt;img src="https://community.qlik.com/t5/image/serverpage/image-id/50522i57B3821600DEDBD4/image-size/large?v=v2&amp;amp;px=999" role="button" title="setloadscript_doreload.png" alt="setloadscript_doreload.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="4"&gt;3.2.2 Loading data into Qlik Sense using "incremental data dump" files&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;If you have built an "incremental data dump" automation, you will have multiple CSV files per table. Example:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;CSV file Monday: initial dump of Account records&lt;/LI&gt;
&lt;LI&gt;CSV file Tuesday: incremental dump of Account records that were added or updated since Monday&lt;/LI&gt;
&lt;LI&gt;CSV file Wednesday: incremental dump of Account records that were added or updated since Tuesday&lt;BR /&gt;Etc.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;As described in the above example with "John Doe", records may appear in multiple CSV files. The load script below solves this issue in 3 steps:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Load all CSV files one by one in a loop&lt;/LI&gt;
&lt;LI&gt;Order the data by most recent records first (we only want the most recent record to survive)&lt;/LI&gt;
&lt;LI&gt;Remove duplicates (so that only the most recent record survives)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Below is a load script example that loads "Leads" from a folder "Marketo" on S3.&lt;BR /&gt;Naming of each file: &lt;STRONG&gt;Leads_&lt;EM&gt;timestamp&lt;/EM&gt;.csv&lt;/STRONG&gt;&lt;BR /&gt;Example: Leads_2020-01-20.csv (daily schedule) or Leads_2020-01-20t14:50:00.csv (hourly schedule)&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;for each file in filelist('lib://Amazon_S3/Marketo/')&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; If Index('$(file)','csv') &amp;gt; 0 And Index('$(file)','leads_') &amp;gt; 0 Then&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; &amp;nbsp; Leads_from_all_csv_files:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; &amp;nbsp; Load&amp;nbsp;&lt;/FONT&gt;&lt;FONT face="courier new,courier"&gt;*,&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; &amp;nbsp; Num(Textbetween(Filename(), '_', '.csv')) As FileTimestampNum&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; &amp;nbsp; From [$(file)]&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; &amp;nbsp; (txt, utf8, embedded labels, delimiter is ',');&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; End If&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Next file&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;Leads_most_recent_first:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;NoConcatenate&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Load *&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Resident Leads_from_all_csv_files&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Order By FileTimestampNum Desc;&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;Leads:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Load Distinct&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;id As id_temp, *&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Resident Leads_most_recent_first&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Where Not exists(id_temp, id);&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Drop Table Leads_from_all_csv_files, Leads_most_recent_first;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;You will have to repeat this load script for each object type (table), e.g. one section for Accounts, one for Leads and one for Contacts. Each section is a copy of the above with "Leads" replaced by e.g. "Accounts".&lt;/P&gt;
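The same three-step dedupe can be illustrated in Python, independently of the load script. This is a sketch only; the records and file timestamps are made-up examples matching the Monday/Tuesday dumps above.

```python
# Rows collected from several incremental dump files; id 1 appears twice
# because the record was updated in a later dump.
rows = [
    {"id": 1, "name": "John Doe",     "file_ts": "2020-01-20"},  # initial dump
    {"id": 2, "name": "Jane Roe",     "file_ts": "2020-01-20"},
    {"id": 1, "name": "John Doe Jr",  "file_ts": "2020-01-21"},  # updated record
]

# Step 2: order by most recent file first (ISO timestamps sort lexically).
rows.sort(key=lambda r: r["file_ts"], reverse=True)

# Step 3: keep only the first (i.e. most recent) occurrence of each id.
seen, deduped = set(), []
for r in rows:
    if r["id"] not in seen:
        seen.add(r["id"])
        deduped.append(r)
```

This mirrors what the `Order By FileTimestampNum Desc` and `Where Not exists(...)` clauses accomplish in the load script.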
&lt;P&gt;Here's another load script example, where the data is stored in a QVD and on each reload only one new CSV file is processed. The automation can delete the CSV file once the load script has executed:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;//load all data from QVD (if file already exists)&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;IF (FileSize('lib://DataFiles/Full_Data.qvd')&amp;gt;0) THEN&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; Full_Data:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; LOAD&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; *&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp; FROM [lib://DataFiles/Full_Data.qvd] (qvd);&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;END IF;&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;//append new data from one CSV file (contains new &amp;amp; updated records)&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;//add column with row number, needed in sorting, see below&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Full_Data:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;LOAD&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;*,&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;rowno() as rowno&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;FROM [lib://DataFiles/new_data.csv]&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;(txt, codepage is 28591, embedded labels, delimiter is ',', msq);&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;//sort by newest records first, needed to dedupe and keep only most recent version of a record&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Full_Data_Sorted:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;NoConcatenate&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Load *&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Resident Full_Data&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Order By rowno Desc;&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;//remove duplicate records, this will keep most recent version of each record&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Full_Data_Deduped:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Load Distinct&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;name As name_temp, *&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Resident Full_Data_Sorted&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Where Not exists(name_temp, name);&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;Drop field name_temp;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Drop Table Full_Data_Sorted, Full_Data;&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT face="courier new,courier"&gt;//Store new dataset in QVD&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Store Full_Data_Deduped into lib://DataFiles/Full_Data.qvd (qvd);&lt;/FONT&gt;&lt;/P&gt;
&lt;/DIV&gt;</description>
      <pubDate>Thu, 26 Aug 2021 14:11:23 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Official-Support-Articles/Building-automations-for-data-loading/ta-p/1788732</guid>
      <dc:creator>NikoNelissen_Qlik</dc:creator>
      <dc:date>2021-08-26T14:11:23Z</dc:date>
    </item>
  </channel>
</rss>

