Hello Friends,
I have a requirement to ingest data from MS SQL Server into a Hortonworks Hive database (the HDFS files and tables must be created dynamically, without defining any schema up front). So I planned to use the tSqoopImport component. I am using Java API mode and am able to load the data into an HDFS file, but the problem is that it works only for text and sequence files, not for Avro and Parquet files.
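For reference, what currently works looks roughly like the following when expressed directly against Sqoop's Java API (a sketch only; the connection string, credentials, table, and target directory are placeholders, not my real values):

```java
import org.apache.sqoop.Sqoop;

public class WorkingTextImport {
    public static void main(String[] args) {
        String[] sqoopArgs = {
            "import",
            // Placeholder connection details for a SQL Server source.
            "--connect", "jdbc:sqlserver://dbhost:1433;databaseName=sales",
            "--username", "sqoop_user",
            "--password", "secret",
            "--table", "orders",
            "--target-dir", "/user/talend/orders",
            "--as-textfile"  // text works, and so do sequence files; Avro/Parquet do not
        };
        // runTool parses the tool name ("import") from the first argument.
        System.exit(Sqoop.runTool(sqoopArgs));
    }
}
```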
Now my questions are:
1) Is it possible to work with the other file formats as well?
2) Is it possible to change the delimiter of the HDFS text files (by default it uses a comma ",")?
3) Is it possible to load the Hive tables using the same tSqoopImport component?
Hello,
Please have a look at this reference: TalendHelpCenter: Which big data formats are supported.
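Regarding your questions 1) and 2): the output format and the text delimiter are both ordinary Sqoop import arguments, so in Java API mode they can be passed straight through. Here is a minimal sketch against Sqoop's Java API, with placeholder connection details; note that --as-parquetfile requires Sqoop 1.4.6 or later:

```java
import org.apache.sqoop.Sqoop;

public class FormatAndDelimiterSketch {
    public static void main(String[] args) {
        // 1) Avro output instead of the default text format.
        String[] avroImport = {
            "import",
            "--connect", "jdbc:sqlserver://dbhost:1433;databaseName=sales", // placeholder
            "--username", "sqoop_user",
            "--password", "secret",
            "--table", "orders",
            "--target-dir", "/user/talend/orders_avro",
            "--as-avrodatafile"  // or "--as-parquetfile" (Sqoop 1.4.6+)
        };

        // 2) Text output with a tab delimiter instead of the default comma.
        //    --fields-terminated-by only affects text output.
        String[] tabDelimitedImport = {
            "import",
            "--connect", "jdbc:sqlserver://dbhost:1433;databaseName=sales", // placeholder
            "--username", "sqoop_user",
            "--password", "secret",
            "--table", "orders",
            "--target-dir", "/user/talend/orders_text",
            "--as-textfile",
            "--fields-terminated-by", "\t"
        };

        int rc = Sqoop.runTool(avroImport);
        if (rc == 0) {
            rc = Sqoop.runTool(tabDelimitedImport);
        }
        System.exit(rc);
    }
}
```

If tSqoopImport does not expose a dedicated option for one of these, you should be able to append it through the component's additional-arguments table in the Advanced settings.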
Best regards
Sabrina
Hello,
If you want to load data into Hive in a Big Data Spark Job, please have a look at this example shared on the Talend Help Center:
TalendHelpCenter: Loading the database table data into the Hive internal table
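Regarding your question 3): yes, Sqoop itself can create and load the Hive table in the same import via --hive-import, which also derives the Hive table schema from the source table, so no schema needs to be defined up front. A rough sketch with placeholder connection details; be aware that in Sqoop 1 Hive import works with text output (Parquet support depends on the Sqoop version, and Avro is not supported for Hive import):

```java
import org.apache.sqoop.Sqoop;

public class HiveImportSketch {
    public static void main(String[] args) {
        String[] sqoopArgs = {
            "import",
            "--connect", "jdbc:sqlserver://dbhost:1433;databaseName=sales", // placeholder
            "--username", "sqoop_user",
            "--password", "secret",
            "--table", "orders",
            "--hive-import",                  // load into Hive after the HDFS copy
            "--create-hive-table",            // create the table; fails if it already exists
            "--hive-table", "default.orders"  // target database.table in Hive
        };
        System.exit(Sqoop.runTool(sqoopArgs));
    }
}
```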
Best regards
Sabrina