Greetings,
Problem: stitching a Spark component containing Python scripts to its source and destination (PostgreSQL) in Talend Data Catalog.
What I have tried:
Created a Spark component and imported its scripts, but I am unable to stitch it to its source and destination.
Created a Data Mapper and manually defined sources, destinations, and transformations. However, this approach is impractical for companies relying on large ELT scripts.
Given the massive Python codebase we have, is there a way to automatically discover the transformations that a dataset's fields have undergone within a script? (A rough sketch of the kind of discovery I mean is below.)
OR
What would be a more effective approach to handle this scenario?
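To make the question concrete, here is a minimal, purely illustrative sketch of what I mean by "automatic discovery": a static scan of a PySpark script with Python's `ast` module that reports which columns are created by `withColumn` calls and from which expressions. The sample SCRIPT contents are invented, and this is my own rough idea, not a Talend Data Catalog feature; it also only catches direct `withColumn` usage, not `select`/`selectExpr`, SQL strings, or UDF internals.

```python
import ast

# Illustrative PySpark snippet to scan; it is parsed, never executed.
SCRIPT = '''
df = spark.read.jdbc(url, "public.orders", properties=props)
df = df.withColumn("net_amount", df["gross_amount"] - df["discount"])
df = df.withColumn("order_year", year(df["order_date"]))
df.write.jdbc(url, "public.orders_clean", properties=props)
'''

class WithColumnVisitor(ast.NodeVisitor):
    def visit_Call(self, node):
        # Match calls of the form <something>.withColumn("target", <expr>)
        if (isinstance(node.func, ast.Attribute)
                and node.func.attr == "withColumn"
                and node.args
                and isinstance(node.args[0], ast.Constant)):
            target = node.args[0].value
            expr = ast.unparse(node.args[1]) if len(node.args) > 1 else "?"
            print(f"column {target!r} derived from: {expr}")
        self.generic_visit(node)

WithColumnVisitor().visit(ast.parse(SCRIPT))
```

Something along these lines could be run over our scripts to pre-populate field-level mappings, but maintaining it for a large codebase would be its own project, which is why I am asking whether the tool can do this natively.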
#Talend Data Catalog
Hey @Shicong_Hong, any comments, or do you know who could help?
The replies from the community on this post are not visible on any of my devices.
Moreover, my reply to someone else's reply has also disappeared.
What's going on?