<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Job memory performance in Talend Studio</title>
    <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296350#M69026</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The message you are getting, as it says, is that the connection is closed. So, this could really be one of two things:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
 &lt;LI&gt;The Server is closing the connection.&lt;/LI&gt;
 &lt;LI&gt;Your Talend job is closing the connection.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The most likely cause is the first option: check with the DBAs whether there are any open-connection timeouts, etc. Is the destination on-premise, in the cloud, or elsewhere (with network contention)? Either way, I'd consider splitting the job into two distinct sections: one that accumulates the data you want to put into the DB (into a temp file), and one that actually outputs the data into the DB (from temp file to DB).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Fri, 17 May 2019 08:04:45 GMT</pubDate>
    <dc:creator>David_Beaty</dc:creator>
    <dc:date>2019-05-17T08:04:45Z</dc:date>
    <item>
      <title>Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296337#M69013</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;I have a job that takes data from multiple ODS tables, joins them with multiple tMap components, and inserts the result into a table. I should have around 80 GB of data, and the main flow has around 85,000,000 rows (around 15 GB).&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;All the lookup tables are stored in temp files and RAM available is 25 GB for this job.&lt;/P&gt; 
&lt;P&gt;The inserts are batched, with manual commits.&lt;/P&gt; 
&lt;P&gt;Even with this, the job is quite slow and has been running for several days without finishing.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Is there another kind of optimization I can do besides changing the Talend maps to SQL code?&lt;/P&gt; 
&lt;P&gt;The problem is clearly not coming from the SQL engine.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;What do you think is the average time for Talend to manage 80 to 100 GB of data?&lt;/P&gt; 
&lt;P&gt;&lt;SPAN class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="image.png" style="width: 999px;"&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="0683p000009M4BI.png"&gt;&lt;img src="https://community.qlik.com/t5/image/serverpage/image-id/156102i7882323779396185/image-size/large?v=v2&amp;amp;px=999" role="button" title="0683p000009M4BI.png" alt="0683p000009M4BI.png" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Thanks in advance&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Regards,&lt;/P&gt; 
&lt;P&gt;Sofiane&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 07 May 2019 14:39:12 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296337#M69013</guid>
      <dc:creator>castiellll</dc:creator>
      <dc:date>2019-05-07T14:39:12Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296338#M69014</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt; 
&lt;P&gt;I'd look at:&lt;/P&gt; 
&lt;UL&gt; 
 &lt;LI&gt;Check that all of the DB inputs that feed the tMap lookups are reading in only the rows and columns that are needed.&lt;/LI&gt; 
 &lt;LI&gt;Consider splitting the flow into two sections: output the main flow to a temporary file just after the 3rd tUnite, then read back in from the temporary file into the tMap that follows.&lt;/LI&gt; 
 &lt;LI&gt;Externalise the two tMap-on-a-tMap sections into a single SQL query, again reading in only the rows and columns needed.&lt;/LI&gt; 
 &lt;LI&gt;Remove the tMap that has no lookups near the beginning of the main flow and replace it with a tJavaRow.&lt;/LI&gt; 
&lt;/UL&gt; 
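A rough sketch of the first point, in plain Python with made-up column names: keep only the join key and the one column the lookup actually supplies, instead of loading every column of every lookup row.

```python
# Sketch: prune a lookup to just the key column and the single value column
# the join uses. "id" and "label" are invented names for illustration.
def build_lookup(rows, key_col, value_col):
    """rows: iterable of dicts, as a DB cursor might yield them."""
    return {r[key_col]: r[value_col] for r in rows}

def join_main_flow(main_rows, lookup, key_col):
    """Stream the main flow, enriching each row from the pruned lookup."""
    for row in main_rows:
        row["label"] = lookup.get(row[key_col])  # left-join semantics
        yield row
```

The smaller each lookup's in-memory footprint, the less pressure on the 25 GB heap.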
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 08 May 2019 17:02:02 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296338#M69014</guid>
      <dc:creator>David_Beaty</dc:creator>
      <dc:date>2019-05-08T17:02:02Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296339#M69015</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;I would break the flow in a slightly different way. I would merge all the initial data into a temporary table, and that would be my stage 1 (if the processes inside the merge are taking time, you can run them in parallel using tParallelize and merge them later).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;Now this temp table will be my source, and each of the lookup stages will be handled by joining with the lookup tables within the DB itself and pushing the result set to a new temp table. This means you are not extracting the full lookup table contents; the filtering is done at the source itself.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&amp;nbsp; &amp;nbsp; This approach means there will be a lot of writes to temp tables, which can be made faster by using Bulk components instead of the normal ones (considering the high input volume).&lt;/P&gt;
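A sketch of the join-pushdown idea, building the in-database join statement in plain Python; the table and column names are invented, and the real job would run this through its DB connection (e.g. a tDBRow) rather than pulling the lookup into Talend.

```python
# Sketch: land each lookup stage in a new temp table via an in-database
# join (SQL Server SELECT ... INTO syntax). Names are hypothetical.
def pushdown_join_sql(stage_table, lookup_table, target_table):
    return (
        f"SELECT s.*, l.code "
        f"INTO {target_table} "
        f"FROM {stage_table} s "
        f"JOIN {lookup_table} l ON l.id = s.lookup_id"
    )

sql = pushdown_join_sql("#stage1", "dim_lookup", "#stage2")
```

Each stage reads and writes entirely inside the DB, so Talend never materialises the lookup tables.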
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;In a way, we are bypassing the full lookup table download to disk within Talend (which is time-consuming), and all the interim table writes are also made faster using Bulk components.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;Could you please create a duplicate of the current job flow using this approach and let us know the results?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Warm Regards,&lt;BR /&gt;Nikhil Thampi&lt;/P&gt; 
&lt;P&gt;Please appreciate our Talend community members by giving Kudos for sharing their time for your query. If your query is answered, please mark the topic as resolved &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 08 May 2019 18:14:11 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296339#M69015</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2019-05-08T18:14:11Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296340#M69016</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Thanks for your replies. I've tried splitting the job following your suggestions, but I still get the memory error. The main job is too big even for 25 GB of RAM.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Any other ideas for managing the memory?&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Thanks in advance.&lt;/P&gt; 
&lt;P&gt;Sofiane.&lt;/P&gt;</description>
      <pubDate>Thu, 09 May 2019 16:34:16 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296340#M69016</guid>
      <dc:creator>castiellll</dc:creator>
      <dc:date>2019-05-09T16:34:16Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296341#M69017</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;&lt;BR /&gt;All of the tMap components should have “Store on disk” enabled, along with a directory path and a sensible buffer size in rows. Say, 1,000,000.&lt;/P&gt;</description>
      <pubDate>Thu, 09 May 2019 16:49:20 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296341#M69017</guid>
      <dc:creator>David_Beaty</dc:creator>
      <dc:date>2019-05-09T16:49:20Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296342#M69018</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;Thanks for replying.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The flow is already set up this way. After 2 days of running, I got this error: java.lang.RuntimeException: java.io.IOException&lt;/P&gt;&lt;P&gt;Let's retry.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 12 May 2019 16:23:30 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296342#M69018</guid>
      <dc:creator>castiellll</dc:creator>
      <dc:date>2019-05-12T16:23:30Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296343#M69019</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;So, staying connected for an extended duration is probably now your issue.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You could try outputting the data locally to a file, finding some kind of blocking key in the data (say, year and month), and then iterating through the file multiple times: on each pass, read from the file, filter on the blocking key, and write to the destination table, connecting and disconnecting the DB connection each time.&lt;/P&gt;
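A minimal sketch of this blocking-key loop, assuming a year-month key in an ISO date column; connect() and insert_block() are stand-ins for the real DB work.

```python
# Sketch: group the temp file's rows by a coarse blocking key (year-month),
# then load one block per short-lived connection, so no single connection
# has to stay open for days. Column name "d" is hypothetical.
from collections import defaultdict

def blocks_by_month(rows, date_col):
    groups = defaultdict(list)
    for row in rows:
        groups[row[date_col][:7]].append(row)  # "YYYY-MM" prefix as the key
    return groups

def load_in_blocks(rows, date_col, connect, insert_block):
    for key, block in sorted(blocks_by_month(rows, date_col).items()):
        conn = connect()          # fresh connection per block
        try:
            insert_block(conn, block)
        finally:
            conn.close()          # disconnect before starting the next block
```

In the actual job this would be an iterate link over the distinct key values, with the connect/commit/close components inside the loop.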
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 12 May 2019 19:32:59 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296343#M69019</guid>
      <dc:creator>David_Beaty</dc:creator>
      <dc:date>2019-05-12T19:32:59Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296344#M69020</link>
      <description>&lt;P&gt;Hello everybody,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I've split the tMaps into multiple sub-jobs and reduced the batch and commit sizes from 100,000 to 50,000. This way, at least, the job runs without the connection closing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It has now been running for two days.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Have a good day.&lt;/P&gt;
&lt;P&gt;Sofiane.&lt;/P&gt;</description>
      <pubDate>Wed, 15 May 2019 09:08:41 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296344#M69020</guid>
      <dc:creator>castiellll</dc:creator>
      <dc:date>2019-05-15T09:08:41Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296345#M69021</link>
      <description>&lt;P&gt;Hello everybody,&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;I'm again facing the “java.sql.SQLException: Invalid state, the Connection object is closed.” issue, while everything is OK on the SQL Server side.&lt;/P&gt; 
&lt;P&gt;Is there a parameter or a timeout set somewhere in Talend?&lt;/P&gt; 
&lt;P&gt;The flow is super simple: reading from a temporary table, through 2 tMaps, and writing into another table. There are around 110,000,000 rows, which is nothing for a DWH.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Thanx for your help.&lt;/P&gt; 
&lt;P&gt;Sofiane&lt;/P&gt;</description>
      <pubDate>Thu, 16 May 2019 13:16:55 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296345#M69021</guid>
      <dc:creator>castiellll</dc:creator>
      <dc:date>2019-05-16T13:16:55Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296346#M69022</link>
      <description>&lt;P&gt;Hi Sofiane,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;Did you check the maximum connection open time allowed for your SQL Server? Usually DBAs set a value to stop connections from remaining open for a very long time. This could have prompted the error.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; Considering your data size, could you please use Bulk components to load the data instead of the normal tDBOutput? This could change the whole equation of the processing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Warm Regards,&lt;BR /&gt;Nikhil Thampi&lt;/P&gt;
&lt;P&gt;Please appreciate our Talend community members by giving Kudos for sharing their time for your query. If your query is answered, please mark the topic as resolved &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 16 May 2019 13:58:45 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296346#M69022</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2019-05-16T13:58:45Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296347#M69023</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We are working together with the DBA on these flows, and everything is OK from that side.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For the bulk component, I've read that the running job should be on the same server as the SQL Server, which is not the case for me.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanx&lt;/P&gt;&lt;P&gt;Sofiane&lt;/P&gt;</description>
      <pubDate>Thu, 16 May 2019 14:05:52 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296347#M69023</guid>
      <dc:creator>castiellll</dc:creator>
      <dc:date>2019-05-16T14:05:52Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296348#M69024</link>
      <description>&lt;P&gt;Did you try increasing this option in the Run (Job) tab?&lt;/P&gt;&lt;P&gt;Use specific JVM arguments, increasing the Xms&amp;nbsp;and Xmx values.&lt;/P&gt;&lt;P&gt;The default is -Xms256M, -Xmx1024M.&lt;/P&gt;&lt;P&gt;You could increase that to -Xms1024M, -Xmx4096M.&lt;/P&gt;</description>
      <pubDate>Thu, 16 May 2019 17:22:10 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296348#M69024</guid>
      <dc:creator>JaneYu</dc:creator>
      <dc:date>2019-05-16T17:22:10Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296349#M69025</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I am already using 30 GB of RAM, and this is not a RAM problem.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanx&lt;/P&gt;
&lt;P&gt;Sofiane&lt;/P&gt;</description>
      <pubDate>Thu, 16 May 2019 17:52:16 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296349#M69025</guid>
      <dc:creator>castiellll</dc:creator>
      <dc:date>2019-05-16T17:52:16Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296350#M69026</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The message you are getting, as it says, is that the connection is closed. So, this could really be one of two things:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
 &lt;LI&gt;The Server is closing the connection.&lt;/LI&gt;
 &lt;LI&gt;Your Talend job is closing the connection.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The most likely cause is the first option: check with the DBAs whether there are any open-connection timeouts, etc. Is the destination on-premise, in the cloud, or elsewhere (with network contention)? Either way, I'd consider splitting the job into two distinct sections: one that accumulates the data you want to put into the DB (into a temp file), and one that actually outputs the data into the DB (from temp file to DB).&lt;/P&gt;
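A minimal sketch of the two-section split, assuming a local CSV as the temp file: section 1 only writes, section 2 only reads back and batches rows for the DB load, so extraction and loading never share one long-lived connection.

```python
# Sketch: stage rows in a local CSV, then stream them back in fixed-size
# batches for the DB insert (e.g. via cursor.executemany per batch).
import csv

def write_stage_file(rows, path, fieldnames):
    """Section 1: accumulate the prepared rows into the temp file."""
    with open(path, "w", newline="") as f:
        w = csv.DictWriter(f, fieldnames=fieldnames)
        w.writeheader()
        w.writerows(rows)

def read_in_batches(path, batch_size):
    """Section 2: yield rows back in batches sized for the DB commit."""
    with open(path, newline="") as f:
        batch = []
        for row in csv.DictReader(f):
            batch.append(row)
            if len(batch) == batch_size:
                yield batch
                batch = []
        if batch:
            yield batch  # final partial batch
```

If the connection drops mid-load, only section 2 needs rerunning; the staged file survives.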
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 17 May 2019 08:04:45 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296350#M69026</guid>
      <dc:creator>David_Beaty</dc:creator>
      <dc:date>2019-05-17T08:04:45Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296351#M69027</link>
      <description>&lt;P&gt;David,&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Thank you for your reply.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;I'm sure it doesn't come from the server; as previously mentioned, I'm working with the DBA on it.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Is there a way to set the Talend timeout to 0 (I don't know where this parameter is)? I only know that the JDBC timeout defaults to 0.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;The infrastructure is all on-premise, and the Talend server is connected to the SQL Server through the intranet.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Is it possible to use the bulk component when the Talend job is not on the same server as SQL Server? Or should I manage the temp file manually?&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Have a good day.&lt;/P&gt; 
&lt;P&gt;Sofiane.&lt;/P&gt;</description>
      <pubDate>Fri, 17 May 2019 09:18:37 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296351#M69027</guid>
      <dc:creator>castiellll</dc:creator>
      <dc:date>2019-05-17T09:18:37Z</dc:date>
    </item>
    <item>
      <title>Re: Job memory performance</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296352#M69028</link>
      <description>There may be some additional connection parameters regarding inactivity timeouts, which you could pass in the “Additional parameters” field. 
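As an illustration only, the resulting connection string might look like this for Microsoft's JDBC driver. The exact property names and their semantics are driver-specific, so verify them against your driver version's documentation before relying on them.

```python
# Sketch: appending timeout-related properties to a SQL Server JDBC URL.
# Property names (socketTimeout, queryTimeout) are the ones Microsoft's
# JDBC driver documents; treat this as an assumption to verify, not fact.
base = "jdbc:sqlserver://dbhost:1433;databaseName=dwh"
extra = ";socketTimeout=0;queryTimeout=0"  # intent: disable the timeouts
url = base + extra
```

In Talend, only the `extra` part would go into the component's “Additional parameters” field.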
&lt;BR /&gt; 
&lt;BR /&gt;Previously, for SQL Server, I’ve only done the bulk load from a file on the server; it was a file share I could write to. However, for Vertica, the bulk file could be remote. 
&lt;BR /&gt; 
&lt;BR /&gt;Maybe create yourself a small test job to try it. I’m not able to test it right now to confirm. 
&lt;BR /&gt; 
&lt;BR /&gt;Thanks 
&lt;BR /&gt;</description>
      <pubDate>Fri, 17 May 2019 10:50:44 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Job-memory-performance/m-p/2296352#M69028</guid>
      <dc:creator>David_Beaty</dc:creator>
      <dc:date>2019-05-17T10:50:44Z</dc:date>
    </item>
  </channel>
</rss>

