<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Help on tHiveOutput in Talend Studio</title>
    <link>https://community.qlik.com/t5/Talend-Studio/Help-on-tHiveOutput/m-p/2362269#M126437</link>
    <description>In Big Data Batch Spark jobs (but not Map Reduce) I see the component tHiveOutput. This component is not documented in the Help, though.
&lt;BR /&gt;I have a use case to insert into a number of partitioned Hive tables in Parquet format. I would like to understand this component's behaviour to see if it is appropriate for my needs.</description>
    <pubDate>Sat, 16 Nov 2024 10:35:22 GMT</pubDate>
    <dc:creator>Anonymous</dc:creator>
    <dc:date>2024-11-16T10:35:22Z</dc:date>
    <item>
      <title>Help on tHiveOutput</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Help-on-tHiveOutput/m-p/2362269#M126437</link>
      <description>In Big Data Batch Spark jobs (but not Map Reduce) I see the component tHiveOutput. This component is not documented in the Help, though.
&lt;BR /&gt;I have a use case to insert into a number of partitioned Hive tables in Parquet format. I would like to understand this component's behaviour to see if it is appropriate for my needs.</description>
      <pubDate>Sat, 16 Nov 2024 10:35:22 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Help-on-tHiveOutput/m-p/2362269#M126437</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2024-11-16T10:35:22Z</dc:date>
    </item>
    <item>
      <title>Re: Help on tHiveOutput</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Help-on-tHiveOutput/m-p/2362270#M126438</link>
      <description>Hi,
&lt;BR /&gt;So far, the component reference for tHiveOutput is not available yet.
&lt;BR /&gt;We can send it (a PDF file) to you by email if you need it.
&lt;BR /&gt;Best regards
&lt;BR /&gt;Sabrina</description>
      <pubDate>Mon, 13 Jun 2016 09:25:47 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Help-on-tHiveOutput/m-p/2362270#M126438</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2016-06-13T09:25:47Z</dc:date>
    </item>
    <item>
      <title>Re: Help on tHiveOutput</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Help-on-tHiveOutput/m-p/2362271#M126439</link>
      <description>Hi,
&lt;BR /&gt;We have already sent you an email with the tHiveOutput component reference (PDF file). Could you please check it?
&lt;BR /&gt;Best regards
&lt;BR /&gt;Sabrina</description>
      <pubDate>Mon, 13 Jun 2016 10:04:03 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Help-on-tHiveOutput/m-p/2362271#M126439</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2016-06-13T10:04:03Z</dc:date>
    </item>
    <item>
      <title>Re: Help on tHiveOutput</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Help-on-tHiveOutput/m-p/2362272#M126440</link>
      <description>Many thanks. I have a couple of questions regarding the component, if that's OK.
&lt;BR /&gt;
&lt;BR /&gt;1. Why is this component only available in Spark Big Data jobs and not in Map Reduce jobs?
&lt;BR /&gt;
&lt;BR /&gt;2. It's good to see that it has a Parquet option (which is what my target table uses). Does that include Snappy compression?
&lt;BR /&gt;
&lt;BR /&gt;3. Does the component support partitioned Hive tables? I.e. will it correctly write records into files in the correct HDFS directory structure according to the "partitioned by" clause in the DDL of the Hive tables?
&lt;BR /&gt;
&lt;BR /&gt;4. Does the component support bucketed Hive tables? I.e. will it correctly distribute the records across the buckets according to the "clustered by" clause in the DDL of the Hive tables?
&lt;BR /&gt;
&lt;BR /&gt;We are looking to use these features in the design of our Hive tables, so I'm hoping that Talend will be a more elegant and efficient way to transform and load my Hive tables than Hive SQL with INSERT INTO statements.</description>
      <pubDate>Tue, 14 Jun 2016 09:24:27 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Help-on-tHiveOutput/m-p/2362272#M126440</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2016-06-14T09:24:27Z</dc:date>
    </item>
    <item>
      <title>Re: Help on tHiveOutput</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Help-on-tHiveOutput/m-p/2362273#M126441</link>
      <description>Hi Team,&lt;BR /&gt;I have installed the Talend Sandbox and am trying to understand the job designs and components. I have questions on Big Data Batch Job design.&lt;BR /&gt;1. I am not seeing the tHiveOutput and tHiveInput components in a Big Data Batch Job. If I want to read data from Hive tables, do I need to use the tJDBCInput component only?&lt;BR /&gt;2. I am not seeing Partitioners/Collectors in a Big Data Batch Job.&lt;BR /&gt;3. Is a Big Data Batch Job converted into Java and then executed on the Hadoop cluster? And is a Spark Job converted into Scala and then executed on the Hadoop/Spark cluster? Could you please confirm.</description>
      <pubDate>Tue, 07 Feb 2017 12:55:08 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Help-on-tHiveOutput/m-p/2362273#M126441</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2017-02-07T12:55:08Z</dc:date>
    </item>
  </channel>
</rss>