Big Data batch job for writing to Hive

Hello all,
I am trying to insert into Hive from a Big Data batch job using the tJDBCOutput component, as shown in the attached screenshot, but I am getting the error below. Is there a way to accomplish this?
There are no Hive output components available for Big Data batch jobs in Talend 6.1.1 Enterprise.
Running job: job_1461884593044_0007
 map 0% reduce 0%
Task Id : attempt_1461884593044_0007_m_000000_0, Status : FAILED
Error: java.io.IOException: Method not supported
at local_project.test_a_0_1.test_a$tJDBCOutput_1StructOutputFormat$DBRecordWriter.write(test_a.java:624)
at local_project.test_a_0_1.test_a$tJDBCOutput_1StructOutputFormat$DBRecordWriter.write(test_a.java:1)
at org.apache.hadoop.mapred.MapTask$DirectMapOutputCollector.collect(MapTask.java:858)
at org.apache.hadoop.mapred.MapTask$OldOutputCollector.collect(MapTask.java:610)
at org.talend.hadoop.mapred.lib.Chain$ChainOutputCollector.collect(Chain.java:466)
at local_project.test_a_0_1.test_a$tMap_2Mapper.map(test_a.java:472)
at local_project.test_a_0_1.test_a$tMap_2Mapper.map(test_a.java:1)
at org.talend.hadoop.mapred.lib.ChainMapper.map(ChainMapper.java:63)
at org.talend.hadoop.mapred.lib.DelegatingMapper.map(DelegatingMapper.java:44)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Task Id : attempt_1461884593044_0007_m_000000_1, Status : FAILED
Task Id : attempt_1461884593044_0007_m_000000_2, Status : FAILED
(both retries failed with the identical "java.io.IOException: Method not supported" stack trace shown above)
 map 100% reduce 0%
Job complete: job_1461884593044_0007
Counters: 8
Job Counters 
Failed map tasks=4
Launched map tasks=4
Other local map tasks=4
Total time spent by all maps in occupied slots (ms)=36377
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=36377
Total vcore-seconds taken by all map tasks=36377
Total megabyte-seconds taken by all map tasks=37250048
: org.apache.hadoop.security.UserGroupInformation - PriviledgedActionException as:root (auth:SIMPLE) cause:java.io.IOException: Job failed!
disconnected
Job Failed: Task failed task_1461884593044_0007_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
java.io.IOException: Job failed!
at org.talend.hadoop.mapred.lib.MRJobClient.runJob(MRJobClient.java:166)
at local_project.test_a_0_1.test_a.runMRJob(test_a.java:1371)
at local_project.test_a_0_1.test_a.access$0(test_a.java:1361)
at local_project.test_a_0_1.test_a$1.run(test_a.java:921)
at local_project.test_a_0_1.test_a$1.run(test_a.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at local_project.test_a_0_1.test_a.tFixedFlowInput_2Process(test_a.java:884)
at local_project.test_a_0_1.test_a.tLibraryLoad_1Process(test_a.java:317)
at local_project.test_a_0_1.test_a.tLibraryLoad_2Process(test_a.java:964)
at local_project.test_a_0_1.test_a.tLibraryLoad_3Process(test_a.java:1002)
at local_project.test_a_0_1.test_a.tLibraryLoad_4Process(test_a.java:1040)
at local_project.test_a_0_1.test_a.tLibraryLoad_5Process(test_a.java:1078)
at local_project.test_a_0_1.test_a.tLibraryLoad_6Process(test_a.java:1116)
at local_project.test_a_0_1.test_a.tLibraryLoad_7Process(test_a.java:1154)
at local_project.test_a_0_1.test_a.tLibraryLoad_8Process(test_a.java:1192)
at local_project.test_a_0_1.test_a.run(test_a.java:1339)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at local_project.test_a_0_1.test_a.runJobInTOS(test_a.java:1277)
at local_project.test_a_0_1.test_a.main(test_a.java:1256)
Job test_a ended at 16:28 28/04/2016.
(two screenshots attached)
1 Reply
willm1
Creator

Interesting use case, which I've not encountered myself. I normally insert records into Hive via files, using tHDFSOutput or tHDFSPut to stage the data and then tHiveLoad (or tHiveRow) to load it. The "Method not supported" message suggests that the Hive JDBC driver does not implement the write methods that tJDBCOutput's record writer relies on; it may effectively support only selects in that context. I've used the JDBC URL to read from Hive from outside the cluster.
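A minimal sketch of the file-based approach, assuming a standard (non-MapReduce) job: stage a delimited file in HDFS, then issue a LOAD DATA statement over the Hive JDBC driver. The HiveServer2 URL, HDFS path, and table name here are placeholders, and the hive-jdbc driver is assumed to be on the classpath; this is roughly what tHiveLoad does under the hood, not a drop-in replacement for it.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveFileLoad {

    // Build the Hive LOAD DATA statement for a staged HDFS file.
    // Path and table names are hypothetical examples.
    static String buildLoadSql(String hdfsPath, String table) {
        return "LOAD DATA INPATH '" + hdfsPath + "' INTO TABLE " + table;
    }

    public static void main(String[] args) throws Exception {
        String sql = buildLoadSql("/tmp/staging/test_a.csv", "test_a");
        System.out.println(sql);

        // Only attempt the JDBC call when a HiveServer2 URL is supplied,
        // e.g. jdbc:hive2://hiveserver-host:10000/default
        if (args.length > 0) {
            try (Connection con = DriverManager.getConnection(args[0]);
                 Statement st = con.createStatement()) {
                // HiveServer2 performs the load itself, so nothing goes
                // through the unsupported DBRecordWriter write path.
                st.execute(sql);
            }
        }
    }
}
```

Reading from Hive over the same JDBC URL with a plain SELECT works fine, as noted above; it is only the insert path through tJDBCOutput's MapReduce record writer that the driver rejects.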