Anonymous
Not applicable

Which is the best way to perform a lookup operation in a Spark batch job?

Hi,

 

I want to perform a lookup operation (defined below) in a Spark batch job.

Lookup operation:

My main input flow file, let's say 'ABC', has columns U, V, W.

I have another file (the lookup file), say 'DEF', with columns X, Y, Z.

My logic should check something like: if (X == "APPLE"), then fetch the corresponding 'Y' value and populate it into "V"; otherwise populate Null.

This is explained in more detail in this thread:

https://community.talend.com/t5/Design-and-Development/Calling-a-user-routine-to-lookup-a-file-is-ta...

 

In the Data Integration job, I stored the data in a HashMap and wrote a custom Java function to fetch values by key. Now, given Spark's distributed framework, what is the best way to achieve the lookup operation (a HashMap, an RDD, a pair RDD, a DataFrame, etc.)? Also, if possible, please elaborate on why it is the best option. A rough sketch of what I am imagining is below.
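For illustration, here is a minimal sketch of the DataFrame route using a broadcast join. I am assuming the value compared against X comes from a column of the main flow (I use U as the join key here), and the file paths and CSV format are placeholders:

```java
import static org.apache.spark.sql.functions.broadcast;
import static org.apache.spark.sql.functions.col;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class LookupJoinSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("lookup-join-sketch")
                .getOrCreate();

        // Main flow 'ABC' (columns U, V, W); path and format are placeholders.
        Dataset<Row> abc = spark.read().option("header", "true").csv("/path/to/ABC.csv");

        // Lookup file 'DEF' (columns X, Y, Z); only X and Y are needed.
        Dataset<Row> lookup = spark.read().option("header", "true").csv("/path/to/DEF.csv")
                .select(col("X"), col("Y"));

        // broadcast() hints Spark to ship the small lookup side to every
        // executor, so the large main flow is never shuffled. A left join
        // leaves Y as null when no match is found, which covers the
        // "else populate Null" rule.
        Dataset<Row> result = abc
                .join(broadcast(lookup), abc.col("U").equalTo(lookup.col("X")), "left_outer")
                .select(abc.col("U"), lookup.col("Y").alias("V"), abc.col("W"));

        result.show();
        spark.stop();
    }
}
```

My understanding is that a broadcast join suits a small lookup file because each executor gets a local copy, much like the shared HashMap in the DI job; please correct me if a pair RDD or another structure would be better.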

 

Appreciate your suggestion/help. 

2 Replies
Anonymous
Not applicable
Author

Hello,

The tCacheIn and tCacheOut components are available in the Spark Batch and Spark Streaming Job frameworks.

Best regards

Sabrina

Anonymous
Not applicable
Author

Hi,
Thank you for the reply.
I agree the tCache components are helpful in a connected lookup scenario.
But my use case needs an unconnected lookup. In the DI job I achieved this through a UDF using HashMaps.
How would I achieve the same thing in a Spark batch job? A rough sketch of what I am considering is below.
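To make the question concrete, here is a minimal sketch of how I imagine porting the HashMap UDF with a Spark broadcast variable. The key column U, the file paths, and the CSV format are placeholder assumptions carried over from my earlier example:

```java
import static org.apache.spark.sql.functions.callUDF;
import static org.apache.spark.sql.functions.col;

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.broadcast.Broadcast;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.types.DataTypes;

public class BroadcastLookupSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("broadcast-lookup-sketch")
                .getOrCreate();
        JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());

        // Build the X -> Y lookup map on the driver from the small file DEF.
        Map<String, String> lookupMap = new HashMap<>();
        for (Row r : spark.read().option("header", "true").csv("/path/to/DEF.csv")
                .select("X", "Y").collectAsList()) {
            lookupMap.put(r.getString(0), r.getString(1));
        }

        // Broadcast ships one read-only copy of the map to each executor,
        // much like the shared HashMap in the DI job.
        Broadcast<Map<String, String>> bLookup = jsc.broadcast(lookupMap);

        // A UDF that consults the broadcast map; a missing key returns null,
        // which covers the "else populate Null" branch.
        spark.udf().register("lookupY",
                (UDF1<String, String>) key -> bLookup.value().get(key),
                DataTypes.StringType);

        // Apply the unconnected lookup wherever needed, with no join step.
        Dataset<Row> abc = spark.read().option("header", "true").csv("/path/to/ABC.csv");
        Dataset<Row> result = abc.withColumn("V", callUDF("lookupY", col("U")));

        result.show();
        spark.stop();
    }
}
```

Is this broadcast-variable pattern the recommended Spark equivalent of an unconnected lookup?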