<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Talend for Big Data - Newbie in Talend Studio</title>
    <link>https://community.qlik.com/t5/Talend-Studio/Talend-for-Big-Data-Newbie/m-p/2302213#M74224</link>
<description>&lt;P&gt;Hi,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I am a newbie to Talend and am evaluating it; I would appreciate your feedback on the points below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are planning to replace Ab Initio with Talend for Big Data to create Spark jobs on Hadoop. So, at a high level, I need to find out how Talend maps to our existing experience.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Talend should be able to support complex ETL tasks to curate/model data into snowflake and denormalized views. Are there any limitations on typical use cases for curating big data and then creating marts on Hive ORC/Parquet?&lt;/P&gt;
&lt;P&gt;- Talend generates Spark code; if customization is required, how easy is it to maintain?&lt;/P&gt;
&lt;P&gt;- Does Talend generate optimized Spark code, or is further optimization needed afterwards?&lt;/P&gt;
&lt;P&gt;- Is the generated code Spark SQL based, and does it target the latest Spark versions?&lt;/P&gt;
&lt;P&gt;- Does the generated Spark code include checkpoints, so that if a job fails it can resume once the issue is fixed?&lt;/P&gt;
&lt;P&gt;- How can metadata-driven ETL be implemented with Talend for Big Data?&lt;/P&gt;
&lt;P&gt;- Anything else you would like to mention (limitations, workarounds, etc.)?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks in advance.&lt;/P&gt;
&lt;P&gt;CK&lt;/P&gt;</description>
    <pubDate>Thu, 11 Jul 2019 14:51:37 GMT</pubDate>
    <dc:creator>Anonymous</dc:creator>
    <dc:date>2019-07-11T14:51:37Z</dc:date>
    <item>
      <title>Talend for Big Data - Newbie</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Talend-for-Big-Data-Newbie/m-p/2302213#M74224</link>
<description>&lt;P&gt;Hi,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I am a newbie to Talend and am evaluating it; I would appreciate your feedback on the points below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are planning to replace Ab Initio with Talend for Big Data to create Spark jobs on Hadoop. So, at a high level, I need to find out how Talend maps to our existing experience.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Talend should be able to support complex ETL tasks to curate/model data into snowflake and denormalized views. Are there any limitations on typical use cases for curating big data and then creating marts on Hive ORC/Parquet?&lt;/P&gt;
&lt;P&gt;- Talend generates Spark code; if customization is required, how easy is it to maintain?&lt;/P&gt;
&lt;P&gt;- Does Talend generate optimized Spark code, or is further optimization needed afterwards?&lt;/P&gt;
&lt;P&gt;- Is the generated code Spark SQL based, and does it target the latest Spark versions?&lt;/P&gt;
&lt;P&gt;- Does the generated Spark code include checkpoints, so that if a job fails it can resume once the issue is fixed?&lt;/P&gt;
&lt;P&gt;- How can metadata-driven ETL be implemented with Talend for Big Data?&lt;/P&gt;
&lt;P&gt;- Anything else you would like to mention (limitations, workarounds, etc.)?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks in advance.&lt;/P&gt;
&lt;P&gt;CK&lt;/P&gt;</description>
      <pubDate>Thu, 11 Jul 2019 14:51:37 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Talend-for-Big-Data-Newbie/m-p/2302213#M74224</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2019-07-11T14:51:37Z</dc:date>
    </item>
  </channel>
</rss>

