vintac
Partner - Contributor III

Using Delta Lake components in standard jobs with S3 storage

Hello,

I'm trying to set up a data lakehouse on AWS S3 using Delta Lake tables. I don't need the big data framework, so I want to use a standard job, but I can't clearly understand how to set up a connection with the Delta Lake components to use S3, if that's even possible. I can't find any explanation in the documentation, which is frankly very sparse on this topic and left me confused.

Does a standard job need Databricks, or does it still use Spark? Or is Talend (Java) the compute engine instead? I see I should use a JDBC driver, and I wonder how to configure the JDBC connection string. Can anyone help me, or point me to a guide or additional documentation?

Thanks in advance,
Vincenzo.

1 Reply
vintac
Partner - Contributor III
Author

Hi @patricia845,
Thank you very much for your answer. Looking at the JDBC driver documentation, though, it asks for a connection to a Databricks instance, not just plain S3 storage: https://docs.databricks.com/aws/en/integrations/jdbc/configure . Do you have a reference for the different JDBC drivers you mentioned?
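For context, the connection URL that the linked driver documentation expects looks roughly like this (placeholders are mine, following the general Databricks JDBC pattern; I don't have working values):

```
jdbc:databricks://<server-hostname>:443;httpPath=<http-path>;AuthMech=3;UID=token;PWD=<personal-access-token>
```

Both `<server-hostname>` and `<http-path>` refer to a Databricks workspace and a SQL warehouse or cluster, which is why I don't see how this driver could point at a bare S3 bucket.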

Regards,

Vincenzo.