Hi everyone,
I am currently trying to understand how to use Talend to ingest CSV files on Azure Data Lake (Gen2) into a Staging Delta Lake Table on Databricks.
I am new to Talend and trying to understand all the components that can be used, but to be honest I am overwhelmed by the number of options available...
From my research, I think the first step is to configure a tJDBCConfiguration component to connect to the Databricks cluster, but I don't know which driver to install or select for this to work, i.e. what to configure in the "Drivers" option. I have tried downloading the Databricks Simba 4.1 driver, but it does not seem to work.
Second, I would like to know if anyone here has already worked with Delta Lake, and how an incremental load approach could be implemented... Are there any components for that, or does it need to be scripted?
Any help would be greatly appreciated.
Thank you
Regards
Hi Luis,
Regarding the driver, please download it from the link below and use the tDeltaLakeOutput component:
https://databricks.com/spark/jdbc-drivers-download
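For reference, with the Simba Spark JDBC driver the driver class is typically `com.simba.spark.jdbc.Driver`, and the connection URL follows the pattern sketched below. The host, HTTP path, and token values here are hypothetical placeholders; copy the real ones from your cluster's "JDBC/ODBC" tab in the Databricks UI.

```python
def databricks_jdbc_url(host: str, http_path: str, token: str) -> str:
    """Assemble a Databricks Simba Spark JDBC connection URL.

    host, http_path and token are placeholders -- take the real values
    from the cluster's "JDBC/ODBC" tab in the Databricks workspace.
    """
    return (
        f"jdbc:spark://{host}:443/default;"
        "transportMode=http;ssl=1;"
        f"httpPath={http_path};"
        f"AuthMech=3;UID=token;PWD={token}"
    )

# Hypothetical example values, for illustration only:
url = databricks_jdbc_url(
    "adb-1234567890123456.7.azuredatabricks.net",
    "sql/protocolv1/o/1234567890123456/0123-456789-abcde123",
    "<personal-access-token>",
)
print(url)
```

You would paste the resulting URL into the JDBC URL field of tJDBCConfiguration and point the "Drivers" table at the downloaded Simba Spark JAR.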
I also recommend reading the article below:
https://help.talend.com/r/en-US/7.3/delta-lake/linking-components-to-design-flow-of-delta-lake-data-in
If you have any further questions, please write here.
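On the incremental question: as far as I know there is no dedicated "incremental load" component, so the usual pattern is to land the CSV data in a staging table and then run a Delta Lake MERGE INTO statement against the target (for example via a tJDBCRow component). A minimal sketch of generating such a statement; all table and column names below are made up for illustration:

```python
def build_merge_sql(target: str, staging: str, key_cols: list, update_cols: list) -> str:
    """Build a Delta Lake MERGE INTO statement for an incremental upsert.

    Table and column names are caller-supplied placeholders; nothing here
    is specific to a real schema.
    """
    on = " AND ".join(f"t.{c} = s.{c}" for c in key_cols)
    sets = ", ".join(f"t.{c} = s.{c}" for c in update_cols)
    cols = ", ".join(key_cols + update_cols)
    vals = ", ".join(f"s.{c}" for c in key_cols + update_cols)
    return (
        f"MERGE INTO {target} t USING {staging} s ON {on} "
        f"WHEN MATCHED THEN UPDATE SET {sets} "
        f"WHEN NOT MATCHED THEN INSERT ({cols}) VALUES ({vals})"
    )

# Hypothetical tables and columns:
sql = build_merge_sql("sales_delta", "sales_staging", ["order_id"], ["amount", "updated_at"])
print(sql)
```

Rows already present in the target (matched on the key columns) are updated, and new rows are inserted, which gives you incremental behavior without reloading the whole table.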