I am trying to read a huge amount of data, about 1 million+ messages, from a message streaming service (Kafka). My current flow is tKafkaInput > tExtractJsonFields > tMap > tTeradataOutput.
When I run the job, I get the following error:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
Could someone please suggest what can be done to avoid this issue, and how to handle unexpectedly large volumes of 5 million messages and up?
First write the data to a file, then load the file into the table. Staging to a flat file keeps only a small batch of rows in memory at any time instead of the whole stream, and the file can then be bulk-loaded into Teradata, which also avoids millions of row-by-row inserts. In Talend terms, that means replacing the direct tTeradataOutput step with tFileOutputDelimited for the staging step, followed by a Teradata bulk-load component such as tTeradataFastLoad.
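
Outside of Talend, the same staging pattern looks roughly like the sketch below in plain Java. This is a minimal illustration, not the poster's actual job: the broker address, topic name, group id, output path, and batch settings are all placeholder assumptions you would replace with your own values.

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaToFileStaging {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "staging-loader");            // placeholder group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Cap how many records one poll() may return, so memory use stays bounded.
        props.put("max.poll.records", "10000");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             BufferedWriter out = Files.newBufferedWriter(Paths.get("staging.csv"))) {
            consumer.subscribe(Collections.singletonList("messages")); // placeholder topic
            int emptyPolls = 0;
            while (emptyPolls < 3) { // stop after a few quiet polls; adjust for your job
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                if (records.isEmpty()) {
                    emptyPolls++;
                    continue;
                }
                emptyPolls = 0;
                for (ConsumerRecord<String, String> record : records) {
                    out.write(record.value()); // one raw JSON message per line
                    out.newLine();
                }
                out.flush(); // each batch goes to disk instead of piling up on the heap
            }
        }
        // The staged file can now be bulk-loaded into Teradata.
    }
}

Because each batch is flushed to disk before the next poll, heap usage stays roughly constant whether 1 million or 5 million messages arrive; the staged file is then loaded in one shot with Teradata's bulk tooling rather than held in the job's memory.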