Hi @rhall and other experts,
I have delimited .tsv files generated from one of my jobs. I need a way to read the files one by one and output them into a Redshift DB table.
My initial job looks like this.
Is there a way I can create multiple temp tables, one per iteration, and then later merge all of these temp tables into the main Redshift table?
Thanks,
Harshal
@Parikhharshal, if you use an OnComponentOk trigger from tDBOutput to tDBRow, the tDBRow can execute with a different table name on each iteration to insert the data from the temp/stage tables into the main table.
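To illustrate the suggestion above, the per-iteration statement a tDBRow could run might be built like this. This is only a sketch: the main table name (`main_table`) is an assumption, and in Talend the temp-table name would come from `globalMap.get("tempTable")` rather than a literal.

```java
// Sketch: building the SQL a tDBRow component could run on each iteration
// to merge one temp/stage table into the main table.
// "main_table" is an assumed name, not from the original thread.
public class MergeSqlSketch {

    // Builds an INSERT ... SELECT that copies one temp table into the main table.
    // The temp name is double-quoted because names like "820-14345" contain a
    // hyphen and are not valid as unquoted Redshift identifiers.
    static String mergeSql(String tempTable, String mainTable) {
        return "INSERT INTO " + mainTable + " SELECT * FROM \"" + tempTable + "\"";
    }

    public static void main(String[] args) {
        // In the Talend job this would be ((String) globalMap.get("tempTable"))
        System.out.println(mergeSql("820-14345", "main_table"));
    }
}
```

In the tDBRow's Query field the equivalent would be an expression concatenating the globalMap value into the SQL string.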
@manodwhb: I changed my job flow a little, and now everything goes straight into a table, after which I do the other processing.
My tDBOutput component config looks like below:
It keeps giving me an error on this component:
java.sql.SQLException: [Amazon][JDBC](11220) Parameters cannot be used with normal Statement objects, use PreparedStatements instead.
at com.amazon.exceptions.ExceptionConverter.toSQLException(Unknown Source)
Is it because I am using a table name that comes from context?
@Parikhharshal, it looks like the context-based table name is the issue. As a test, could you use just one file and hard-code the table name instead of the context variable?
@manodwhb: The context value is what creates the multiple tables for me in the loop:
globalMap.put("tempTable", context.course_id + "-" + context.student_id);
Course_id  Student_id
820        14345
820        43555
623        34664
445        35656
and so on... The temp tables will be 820-14345, 820-43555, 623-34664, 445-35656. There is no issue with the context.
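One thing worth checking with those generated names: "820-14345" starts with a digit and contains a hyphen, so Redshift will only accept it as a double-quoted (delimited) identifier, in both the CREATE and the later SELECT/INSERT. A small sketch of quoting the name built by the globalMap snippet above (the ids are the sample values from this thread):

```java
// Sketch: names like "820-14345" are not valid unquoted Redshift
// identifiers (leading digit, embedded hyphen) and must be double-quoted.
public class TempTableName {

    // Wraps a generated name in double quotes, escaping any embedded quotes.
    static String quoteIdent(String name) {
        return "\"" + name.replace("\"", "\"\"") + "\"";
    }

    // Mirrors the globalMap.put(...) concatenation shown above.
    static String tempTable(int courseId, int studentId) {
        return courseId + "-" + studentId;
    }

    public static void main(String[] args) {
        System.out.println(quoteIdent(tempTable(820, 14345)));
    }
}
```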
@manodwhb: If you look at my job flow, tMap is connected to tDBOutput. So what query would I write using tDBRow? I do not think that is really possible. What are your thoughts?