Hi All,
I have a job which loads around 140 files using a dynamic schema. My problem is that one of the CSV files I receive has more than 1,000 columns, and Oracle does not allow creating a table with more than 1,000 columns. So I need to split that CSV file, as in the example below.
Example : test.csv
"request_id","name","location","desc","product","price"
"1","abc","ind","ab@gmail.com
hi all ,
regards
rayes","talend","$20000"
"2","xyx","pak"," ","tax","$1000"
Required output files:
"request_id","name","location"
"1","abc","ind"
"2","xyx","pak"
"request_id","desc","product","price"
"1","ab@gmail.com
hi all ,
regards
rayes","talend","$20000"
"2"," ","tax","$1000"
Note: the request_id column is needed in both files.
I have tried splitting the file with the Unix cut command via a tSystem component:
command: "/bin/bash"
"-c"
"cut -f 1,2-500 -d ',' test.csv > test_1.csv"
"cut -f 1,501- -d ',' test.csv > test_2.csv"
The above splits the file on the comma delimiter, but the catch is that the desc column contains commas (and even newlines) inside its quoted value, so cut splits in the wrong places and I get incorrect data in my output files.
Can anyone help me achieve this requirement? Any approach other than a Unix command would also be welcome.
Thanks in advance.
Thanks
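Since cut is not quote-aware, one alternative is to parse the file with a CSV-aware library, which handles commas and newlines inside quoted fields. A minimal Python sketch, using the stdlib csv module on the sample data from the post (the split index 3 and the in-memory buffers are just for illustration; for the real file you would split at column 500 and write to test_1.csv / test_2.csv):

```python
import csv
import io

# Sample from the post: the "desc" field contains commas and newlines,
# which is exactly what breaks a plain `cut -d ','`.
sample = '''"request_id","name","location","desc","product","price"
"1","abc","ind","ab@gmail.com
hi all ,
regards
rayes","talend","$20000"
"2","xyx","pak"," ","tax","$1000"
'''

SPLIT_AT = 3  # hypothetical split index for the sample; 500 for the real file

out1, out2 = io.StringIO(), io.StringIO()
w1 = csv.writer(out1, quoting=csv.QUOTE_ALL)
w2 = csv.writer(out2, quoting=csv.QUOTE_ALL)

# csv.reader yields one logical record per row, even when a quoted
# field spans several physical lines.
for row in csv.reader(io.StringIO(sample)):
    w1.writerow(row[:SPLIT_AT])             # request_id, name, location
    w2.writerow([row[0]] + row[SPLIT_AT:])  # request_id + remaining columns

print(out1.getvalue())
print(out2.getvalue())
```

Because the key column (row[0]) is prepended to the second slice, request_id appears in both output files, matching the required output above.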
Hi, if your desc column has a pattern, you can easily replace the commas using the replaceAll method and a regex.
Send me love and Kudos.
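The idea above, replacing the commas inside quoted fields so that a plain comma-delimited split works afterwards, can be sketched as follows. This is a Python sketch with re.sub (in a Talend job the same logic would presumably go through Java's String.replaceAll in a tJavaRow); the sample line, the semicolon placeholder, and the assumption that fields never contain escaped quotes are all illustrative, and note this line-based trick still breaks if a value contains embedded newlines:

```python
import re

line = '"1","abc","ind","hi, all , regards","talend","$20000"'

# Match each quoted field and swap its internal commas for semicolons;
# the field-separator commas sit between quotes, so they are untouched.
cleaned = re.sub(r'"[^"]*"',
                 lambda m: m.group(0).replace(",", ";"),
                 line)

print(cleaned)  # '"1","abc","ind","hi; all ; regards","talend","$20000"'
```

After this pass, `cut -d ','` splits the line on the real field separators only.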