Hi, we are trying to catalog a table with the same name that exists in two different applications (database sources). For example, we have a table named "person" in two different databases. When we add the second table to the catalog, we get the error: "Error! pgui.error.code.ENTITY_NAME_EXISTS - Entity name 'person' already exists"
Is this a limitation of QDC or is there a recommended workaround?
That message should only occur if the source name is the same. If the entity name is unique within the source, you should not receive that message. Is it possible that it was once created, deleted, and added again? If so, perhaps the cleanup didn't fully take place?
Can you please provide some more details?
Thanks.
Thank you! This was our issue.
I am facing the same error. I added and deleted an entity, and when re-adding it I get this error.
@rupaligupta As mentioned in the previous comment, the cleanup has to be done completely (sources with the same name, in this case). In the UI, when deleting an entity, make sure the options 'delete filesystem data' and 'delete table structure' are set to Yes; by default they are set to No. If you have already deleted the entity from the UI without setting these options to Yes, then clean up the underlying HDFS path (/<podium_base_path>/receiving/<source>/) and the Hive tables (log into beeline, run SHOW DATABASES, and you should see a database named after the source you deleted earlier via the UI).
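For reference, the manual cleanup described above might look roughly like the following. This is a sketch only: <podium_base_path> and <hiveserver2-host> are placeholders for your installation, and "mysource" is a hypothetical source name standing in for the source you deleted.

```shell
# Sketch only: replace <podium_base_path>, <hiveserver2-host>, and the
# hypothetical source name "mysource" with your own values.

# 1. Remove the entity's leftover data files from the HDFS receiving path.
hdfs dfs -rm -r -skipTrash /<podium_base_path>/receiving/mysource

# 2. Drop the leftover Hive database for the source via beeline.
#    CASCADE also drops any tables still inside it.
beeline -u "jdbc:hive2://<hiveserver2-host>:10000" \
  -e "DROP DATABASE IF EXISTS mysource CASCADE;"
```

After both steps, re-adding the entity from the UI should no longer hit the ENTITY_NAME_EXISTS error.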
Good luck!
@JitenderR Thanks for the quick reply. Yes, I did the delete without changing No to Yes for one entity, then realized I had to set Yes for all and did so for the others. I get your point, but can you help with how to access HDFS and do the Hive delete? I can't figure it out. I'm guessing I don't have access to beeline.
You will need access to the edge node where QDC is deployed. Use beeline to access the Hive tables, and with the proper permissions you should be able to run hdfs dfs -ls to get to the HDFS files. If you are new to both of these concepts, there are many articles on the web, or refer to the Apache documentation. Ideally, in a PROD scenario, access to these components will be controlled by your Hadoop admins.
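Once you have edge-node access, the read-only inspection steps might look like this (a sketch; the host and base path are placeholders, not values from your environment):

```shell
# List the QDC receiving area in HDFS to spot leftover source directories
# (<podium_base_path> is a placeholder for your installation's base path).
hdfs dfs -ls /<podium_base_path>/receiving/

# Connect to HiveServer2 with beeline and list databases; a database named
# after the deleted source indicates leftover table structures.
beeline -u "jdbc:hive2://<hiveserver2-host>:10000" -e "SHOW DATABASES;"
```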
Regards
JR