I'm using tAzureAdlsGen2Output to load data into an ADLS Gen2 container as parquet files. Is there any way to partition the output so it dynamically creates a new folder per partition and places each parquet file in its corresponding folder? For example, there is a column "Date" in my data flow, and I'd like to partition the output by date so it creates a new folder per date.
Hello @COW_WW BA ,
You can define a job context variable, e.g. date1, then use a tJavaRow component to set the value of context.date1 from your data flow. Next, set the Blob path of the tAzureAdlsGen2Output component to context.date1. When the job is executed, it will create a folder named after the value of context.date1 in the ADLS Gen2 container and put the related input data file under that folder.
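To illustrate the idea outside of Talend, here is a minimal, self-contained Java sketch of the same pattern: derive a folder name from each row's Date value (the assignment a tJavaRow would make to context.date1) and write the row's data under that folder. The Row record, the sample data, and the local temp directory are all hypothetical stand-ins; in the real job, tAzureAdlsGen2Output performs the write to the container and the file would be parquet rather than text.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;

public class PartitionByDate {
    // Hypothetical row type standing in for the Talend flow's schema:
    // a "Date" column plus some payload.
    record Row(String date, String payload) {}

    // Writes each row under a folder named after its Date value and
    // returns how many distinct folders were created.
    static long run() throws IOException {
        List<Row> rows = List.of(
                new Row("2024-01-01", "a"),
                new Row("2024-01-02", "b"),
                new Row("2024-01-01", "c"));

        Path base = Files.createTempDirectory("adls-sim");
        for (Row r : rows) {
            // Mirrors what tJavaRow would do in the job:
            //   context.date1 = input_row.Date;
            Path dir = base.resolve(r.date);           // one folder per date value
            Files.createDirectories(dir);
            Files.writeString(dir.resolve("part.txt"), // stand-in for the parquet file
                    r.payload + System.lineSeparator(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
        // Count the per-date folders that were created.
        try (var children = Files.list(base)) {
            return children.count();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(run()); // prints 2: folders 2024-01-01 and 2024-01-02
    }
}
```

Note that because context.date1 holds a single value at a time, the Talend approach writes one folder per job execution; to cover several dates you would iterate the job (e.g. with tLoop or tFlowToIterate) so each pass sets a new date value.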