Hi Team,
I need to understand some points regarding Hadoop target endpoint.
1. If multiple jobs hit the target (Hadoop), will they fail, wait until a connection becomes available, or resume automatically after a failure?
2. How is DDL handling carried out in Hadoop/Hive?
Regards,
Chirag
Hello @Chirag_ ,
Thanks for reaching out to the Qlik Community.
If multiple jobs hit the target (Hadoop), Qlik Replicate generally handles this by queuing the operations rather than failing outright or waiting indefinitely for a connection: operations are queued and executed in the order they were received once resources become available. If a job fails due to a connection issue or another error, Qlik Replicate may retry the operation based on its retry settings and error-handling configuration. These settings can usually be customized to suit the specific requirements of your environment.
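The retry behavior described above can be sketched generically in Python. This is only an illustration of retry-with-backoff; it is not Qlik Replicate's actual implementation, and the function and parameter names here are hypothetical (in Replicate the retry count and interval come from the task's error-handling settings, not user code):

```python
import time

def apply_with_retry(operation, max_retries=3, backoff_seconds=1.0):
    """Run an operation against a target, retrying on connection errors.

    Illustrative sketch only: the real retry interval and count are
    configured in the Replicate task's error-handling settings.
    """
    attempt = 0
    while True:
        try:
            return operation()
        except ConnectionError:
            attempt += 1
            if attempt > max_retries:
                raise  # give up after the configured number of retries
            # Wait before retrying, doubling the delay each time.
            time.sleep(backoff_seconds * 2 ** (attempt - 1))

# Usage: simulate a target that is unavailable twice, then succeeds.
state = {"calls": 0}

def flaky_write():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("target unavailable")
    return "committed"

print(apply_with_retry(flaky_write, backoff_seconds=0.01))  # prints "committed"
```

The point of the sketch is simply that a transient connection failure does not immediately fail the job; the operation is retried until the retry budget is exhausted, after which the error is surfaced.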
When DDL changes occur, Replicate applies them according to the task's DDL handling policy. We request you to go through the following user guides for more information:
https://help.qlik.com/en-US/replicate/November2023/Content/Replicate/Main/Hadoop/hadoop_target.htm
https://help.qlik.com/en-US/replicate/May2022/Content/Replicate/Main/Endpoints/DDLStatements.htm
Thanks & Regards
Arun
Hello @Chirag_ ,
You may also refer to the article below on the retry configuration that can be done in Qlik Replicate:
https://community.qlik.com/t5/Official-Support-Articles/Changing-Task-Recovery-options-for-Replicate...
Regards
Arun
Hello team,
If our response has been helpful, please consider clicking "Accept as Solution". This will assist other users in easily finding the answer.
Regards,
Arun
Hi @aarun_arasu ,
Thank you for the response!
Regarding point 2: since Hadoop is a file system, assume I have 4 fields and 10 records in my Hive table, and later 1 column is added. What will be the output for those 10 records? Will the additional column show a NULL value or a blank? In an RDBMS it would be NULL.
Similarly, how is this handled in Text or Parquet format on HDFS?
Regards,
Chirag
Hello @Chirag_ ,
Well, I have not tested this scenario, but I would expect the newly added column to show NULL for the existing rows.
Regards
Arun
Hi @Chirag_ ,
Aarun is correct, and I've conducted a quick test to confirm that NULL is the value of the newly added column(s) for the existing rows. A sample: a new column "notes" was added to the table after "id=2" had been replicated to the target side.
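The same semantics can be reproduced with plain SQL. Below is a minimal sketch using Python's built-in sqlite3 module. This is SQLite, not Hive, so it is only an analogy, but it demonstrates the behavior confirmed above: after ALTER TABLE ... ADD COLUMN, rows that existed before the DDL change read back NULL (None in Python), not a blank, for the new column. The table and column names are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Existing table with rows replicated before the DDL change.
cur.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Alice"), (2, "Bob")])

# A DDL change arrives: a new "notes" column is added.
cur.execute("ALTER TABLE customers ADD COLUMN notes TEXT")

# Rows that existed before the ALTER read back NULL (None),
# not an empty string, for the new column.
for row in cur.execute("SELECT id, name, notes FROM customers ORDER BY id"):
    print(row)
# (1, 'Alice', None)
# (2, 'Bob', None)
```

In Hive the effect is analogous at read time: data files written before the schema change simply lack the new column, and the missing values surface as NULL.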
Hope this helps.
John.
Hi @aarun_arasu , @john_wang ,
Thank you for the response and for providing detailed info on the raised questions.
Regards,
Chirag
Thank you so much for your great support, @Chirag_!