Hello,
I'm trying to set up a data lakehouse on AWS S3 using Delta Lake tables. I don't need the big data framework, so I want to use a standard job, but I can't clearly understand how to set up a connection to S3 with the Delta Lake component, if that's even possible. I can't find any explanation in the documentation, which frankly is very poor on this topic and left me confused.
Does a standard job need Databricks, or does it still use Spark? Or is Talend (Java) the compute engine instead? I see I should use a JDBC driver, and I wonder how to configure the JDBC connection string. Can anyone help me, or point me to a guide or additional documentation?
Thanks in advance,
Vincenzo.
Hi @patricia845 ,
thank you very much for your answer, but looking at the JDBC driver documentation, it asks for a connection to a Databricks instance, not just pure S3 storage: https://docs.databricks.com/aws/en/integrations/jdbc/configure Do you have any reference to the different JDBC drivers you mentioned?
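
For reference, this is the connection URL format that documentation describes (the placeholders come from the Databricks docs, not from my environment):

```
jdbc:databricks://<server-hostname>:443;httpPath=<http-path>;AuthMech=3;UID=token;PWD=<personal-access-token>
```

Both `<server-hostname>` and `<http-path>` are values taken from a Databricks workspace or cluster, which is why I don't see how this driver could point at plain S3 storage.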
Regards,
Vincenzo.