vintac
Partner - Contributor II

Using Delta Lake components in standard jobs with S3 storage

Hello,

I'm trying to set up a data lakehouse on AWS S3 using Delta Lake tables. I don't need the big data framework, so I want to use a standard job, but I can't clearly understand how to set up a connection with the Delta Lake components so that they use S3, or whether that is even possible. I can't find any explanation in the documentation, which is frankly very poor on this topic and left me confused.
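To make clear what I'm after: as far as I understand, a standard job runs plain Java, and reading a Delta table on S3 from plain Java would look roughly like the sketch below using the delta-standalone library (the bucket path, credentials and column name are only placeholders). I'd like to know whether the Delta Lake components work something like this locally, or whether they always go through a remote engine.

import io.delta.standalone.DeltaLog;
import io.delta.standalone.Snapshot;
import io.delta.standalone.data.CloseableIterator;
import io.delta.standalone.data.RowRecord;
import org.apache.hadoop.conf.Configuration;

public class DeltaS3Probe {
    public static void main(String[] args) throws Exception {
        // Needs delta-standalone plus hadoop-aws on the classpath for the s3a:// scheme.
        // Hadoop configuration carrying the S3 credentials (placeholders).
        Configuration conf = new Configuration();
        conf.set("fs.s3a.access.key", "MY_ACCESS_KEY");
        conf.set("fs.s3a.secret.key", "MY_SECRET_KEY");

        // Point the Delta log reader at the table root on S3 (placeholder path)
        DeltaLog log = DeltaLog.forTable(conf, "s3a://my-bucket/lakehouse/orders");
        Snapshot snapshot = log.snapshot();

        // Iterate the rows of the latest snapshot
        try (CloseableIterator<RowRecord> rows = snapshot.open()) {
            while (rows.hasNext()) {
                RowRecord row = rows.next();
                System.out.println(row.getString("order_id"));
            }
        }
    }
}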

Does a standard job need Databricks, or does it still use Spark? Or is Talend (Java) the compute engine instead? I see I should use a JDBC driver, and I wonder how to configure the JDBC connection string. Can anyone help me, or point me to a guide or additional documentation?
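From what I could piece together so far, I guessed the connection string might follow the Databricks (Simba Spark) JDBC driver format, roughly like the sketch below, but I'm not sure at all that this is what the component expects; every value (host, httpPath, token, table name) is a placeholder:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DeltaJdbcProbe {
    public static void main(String[] args) throws Exception {
        // My guess at the URL, based on the Databricks (Simba Spark) JDBC driver docs;
        // the workspace host, httpPath and personal access token are placeholders
        String url = "jdbc:spark://dbc-xxxxxxxx.cloud.databricks.com:443/default;"
                + "transportMode=http;ssl=1;"
                + "httpPath=sql/protocolv1/o/0000000000000000/0000-000000-xxxxxxxx;"
                + "AuthMech=3;UID=token;PWD=<personal-access-token>";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM my_delta_table LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}

If that is the right direction, then I suppose the standard job only pushes SQL to a Databricks cluster or SQL endpoint rather than processing the data itself, which is exactly the point I'd like to have confirmed.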

Thanks in advance,
Vincenzo.

0 Replies