How to write to multiple files within the same Big Data Spark Streaming job
My use case is to stream from either Kafka or Kinesis (we are on AWS), buffer the data in one-minute windows, and then write each one-minute buffer out to HDFS or S3. The constraint is that each buffer must be written to a separate file. Is this possible using a Talend Spark Big Data streaming job?
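For reference, below is a minimal sketch of the behaviour I am after in plain Spark Streaming (Scala). The broker address, topic name, and bucket path are placeholders; the point is that `saveAsTextFiles` suffixes each batch's output path with the batch timestamp, which gives exactly the one-file-per-buffer layout I need. I'd want the Talend job to produce something equivalent:

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Minutes, StreamingContext}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

object MinuteWindowWriter {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("minute-window-writer")
    // A one-minute batch interval, so each micro-batch is itself a one-minute buffer.
    val ssc = new StreamingContext(conf, Minutes(1))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "broker:9092", // placeholder broker address
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "minute-window-writer",
      "auto.offset.reset" -> "latest"
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("events"), kafkaParams) // placeholder topic
    )

    // saveAsTextFiles appends the batch timestamp to the path prefix, so every
    // one-minute buffer lands in its own directory of part files on S3 (or HDFS).
    stream.map(_.value)
      .saveAsTextFiles("s3a://my-bucket/minute-buffers/batch", "txt") // placeholder bucket

    ssc.start()
    ssc.awaitTermination()
  }
}
```

If Talend can't express this directly, I'd also be open to pointers on achieving it via a custom component or a tJava-style code step.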