I have an issue with changes to _CDC tasks when I need to clear the landing cache and generate after columns are added. When I run a generate after this, I get an error on the __ar tables used to archive the __ct tables. Each time I run generate, the process stops with an error stating a *__ar table already exists. If I hit generate again, it stops at the next table in the list with the same error. I can keep doing this until it runs through all the tables and finally generates, but this is 150+ tables. Has anyone else had this issue?
Hello Brian,
Thanks for reaching out to us on this issue. We haven't encountered this issue so far.
Which version of Compose are you using? If you can send me the steps, I can try duplicating the issue and, based on that, check with R&D.
In the meantime, you may want to try using the command line to clear the cache to see if that helps you. Let me know how it goes.
Here are the commands to clear the cache:
1. From the Start menu, open the Compose Command Line console.
2. Run the following command:
ComposeCli.exe connect
3. Run the following command:
ComposeCli.exe clear_cache --project project_name [--type landing|storage] [--landing_zone source_name]
where:
--project is the name of your project.
--type is where to clear the metadata cache. Possible values are landing or storage.
If --type is landing and you want to clear a specific landing zone, you must set the --landing_zone parameter as well.
To clear the metadata cache in all landing zones, specify --type landing and omit the --landing_zone parameter.
--landing_zone is the name of the landing zone to clear when --type is landing.
Example:
ComposeCli.exe clear_cache --project myproject --type landing --landing_zone mysource1
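Based on the same syntax, clearing the cache for all landing zones, or for the storage instead, should look something like this (myproject is just a placeholder for your own project name):
ComposeCli.exe clear_cache --project myproject --type landing
ComposeCli.exe clear_cache --project myproject --type storage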
Thanks,
Nanda
Hi @Anonymous
I have moved your question to the community section that deals with Qlik Compose. Hopefully someone here will be better suited to answer your question.
Hi @Anonymous
Since you are adding a new column, are you also clearing the metadata cache along with the landing cache?
If the Replicate process has already run the DDL and you can see the new column in both the __ct and __ar tables, then you can just run a validate on the Data Warehouse without clearing the cache. Please try this.
There is another option to refresh the metadata only: under Project Settings, check the "Generate DDLs but do not run them" option and validate the Data Warehouse. This ensures the scripts run only in the metadata layer and not on the database. Then uncheck the option and validate again to confirm both the metadata and the database are in sync.
Hope this helps.
Thank you,
@Anonymous Did any of our suggestions help you in resolving the issue you reported? If so, please mark the solution as accepted so other users can find the answer.
If you are still seeing the issue, I think it's better to open a new Support case and the team should be able to find the root cause and fix it.
Thanks,
Nanda