Hello,
I'm trying to set up a data lakehouse on AWS S3 using Delta Lake tables. I don't need the big data framework, so I want to use a standard job, but I can't figure out how to configure the Delta Lake component to connect to S3, or whether that's even possible. I can't find any explanation in the documentation, which frankly is very thin on this topic and left me confused.
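For context, the closest thing to a Spark-free approach I've found so far is the Delta Standalone library, so I put together a small plain-Java sketch just to check that a table on S3 is reachable at all. The bucket, path, and credentials are placeholders, and I have no idea whether the Talend component works like this under the hood:

```java
import io.delta.standalone.DeltaLog;
import io.delta.standalone.Snapshot;
import org.apache.hadoop.conf.Configuration;

public class DeltaOnS3Check {
    public static void main(String[] args) {
        // Needs io.delta:delta-standalone_2.12 and org.apache.hadoop:hadoop-aws on the classpath.
        // S3A credentials -- placeholders; normally I'd use an AWS credentials provider instead.
        Configuration conf = new Configuration();
        conf.set("fs.s3a.access.key", "<AWS_ACCESS_KEY_ID>");
        conf.set("fs.s3a.secret.key", "<AWS_SECRET_ACCESS_KEY>");

        // Open the Delta transaction log of a table stored on S3 (path is a placeholder).
        DeltaLog log = DeltaLog.forTable(conf, "s3a://my-bucket/path/to/delta-table");
        Snapshot snapshot = log.snapshot();

        // If this prints, the table metadata is readable without any Spark cluster.
        System.out.println("Latest version: " + snapshot.getVersion());
        System.out.println("Data files: " + snapshot.getAllFiles().size());
    }
}
```

This reads the table without Spark, but I don't know whether that's what a Talend standard job actually does internally.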
Does a standard job require Databricks, or does it still use Spark? Or is Talend (Java) itself the compute engine? I gather I should use a JDBC driver, and I'm wondering how to configure the JDBC connection string. Can anyone help, or point me to a guide or additional documentation?
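From the few hints I did find, I'm guessing the component expects a Databricks JDBC URL, i.e. the Databricks JDBC driver pointed at a cluster or SQL warehouse endpoint. Below is the kind of string I've been trying as a smoke test; the hostname, HTTP path, and token are placeholders, and I can't confirm this is what the Delta Lake component actually wants:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DeltaLakeJdbcCheck {
    public static void main(String[] args) throws Exception {
        // Needs the Databricks JDBC driver (com.databricks:databricks-jdbc) on the classpath.
        // Host, httpPath, and personal access token below are placeholders.
        String url = "jdbc:databricks://<workspace-host>:443/default;"
                + "transportMode=http;ssl=1;"
                + "httpPath=<cluster-or-warehouse-http-path>;"
                + "AuthMech=3;UID=token;PWD=<personal-access-token>";

        // Simple smoke test: if SELECT 1 comes back, the connection string works.
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println("Connected, got: " + rs.getInt(1));
            }
        }
    }
}
```

Even if that URL format is right, it implies a Databricks endpoint on the other side, which is exactly what I'm trying to understand: can the component talk to plain S3 without one?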
Thanks in advance,
Vincenzo.