Hello,
The attrep_changes table resides in the target database. It is used as a staging table for CDC processing when using Batch Apply:
The attrep_changes table is a temporary table that holds the operations being applied to the target tables. It makes the apply faster because a single statement can execute many DML operations at once, e.g. 'insert into table select * from attrep_changes' or 'delete from table where pk in (select pk from attrep_changes)'. This table is only used in CDC. Depending on the target endpoint type there is either one attrep_changes table for the entire task (attrep_changes<hash(task_uuid)>) or one table per target table for each task (attrep_changes<hash(task_uuid)>_<hash(table_id)>). This is due to performance considerations and limitations of the various databases that Replicate supports.
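To make the set-based apply idea concrete, here is a minimal sketch using SQLite purely for illustration. The table names (orders, net_changes) are hypothetical stand-ins; Replicate's real staging table is named attrep_changes<hash> and is created and managed by the task itself.

```python
import sqlite3

# In-memory database standing in for the target endpoint.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE orders (pk INTEGER PRIMARY KEY, amount INTEGER)")
cur.execute("INSERT INTO orders VALUES (1, 10), (2, 20), (3, 30)")

# Staging table holding the CDC operations accumulated for one bulk.
cur.execute("CREATE TABLE net_changes (pk INTEGER, amount INTEGER, op TEXT)")
cur.executemany("INSERT INTO net_changes VALUES (?, ?, ?)",
                [(2, None, "D"), (4, 40, "I"), (5, 50, "I")])

# One set-based DELETE covers every queued delete...
cur.execute("DELETE FROM orders WHERE pk IN "
            "(SELECT pk FROM net_changes WHERE op = 'D')")
# ...and one set-based INSERT covers every queued insert.
cur.execute("INSERT INTO orders SELECT pk, amount FROM net_changes "
            "WHERE op = 'I'")

print(sorted(cur.execute("SELECT pk FROM orders")))
```

The point is that two statements apply the whole batch, instead of one round-trip per captured change.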
The attrep_changes tables are created while the task is running and dropped when the task is stopped normally. If a task crashes, those tables are not dropped and you will have to perform a manual cleanup.
Thanks,
Lyka
Hi Lyka, I appreciate the information you have given me on the attrep_changes table.
Regarding the manual cleanup of the attrep_changes table, I have a few questions. We need to convert the attrep_changes table to HEAP in order to improve the load performance in the target application.
To do that, I have to add the parameter and stop the task. We expected the attrep_changes table to be empty, but after stopping the task we found that a considerable number of records were still present. Ultimately, we had to drop the tables.
My question is: even though the table will be recreated when I resume the task, what happens to the data that was left in it? Will it be copied to the newly created table again? In other words, will Replicate treat this as a rollback and re-read the data from the original source position?
Is there a way for me to use a query to determine which task the attrep_changes table belongs to?
Hello @Kohila ,
Qlik Replicate moves the stream position forward based on successful target apply. If some events were not applied successfully (i.e. they are still left in the attrep_changes table), Replicate will re-read those changes from the source endpoint.
Hope this helps
John.
>> We must convert the attrep_changes table to HEAP in order to improve the load performance in the target application.
I doubt you can get a meaningful improvement. Replicate is ready for it, judging by the sample log output "Permanent Table or View 'HEIN.attrep_changesE987E911_0000001' was not found" just before creating a temporary one.
You could benchmark this, looking at the exact helper-table usage in the task log with LOGGING VERBOSE TARGET_APPLY. Just re-run the operation as usual, and then with the suggested modification, and measure the overall improvement, not just the helper-table handling. If there is a significant change, submit the suggestion to Qlik with all details (target DB type/version/server). For Oracle, Replicate engineering came up with "CREATE GLOBAL TEMPORARY TABLE ". Admittedly they may not have given the same attention to a Postgres target, but I suspect it will be a giant waste of time for all involved to try this. But by all means have fun trying, and please do report back for us all to learn from or help with.
If you think there is a performance issue to be improved on, maybe just submit that, well documented, to Qlik Support in a formal case. Or elaborate your question here with the pain points and improvement thinking and see if other users have useful inputs on the matter. - don't start with a 'solution' without articulating the problem!
>> we find that a considerable number of records are still present. Ultimately, we had to drop them.
That suggests an improper task end. As Lyka indicates, the helper table is normally created on task start, deleted on task end, and truncated while preparing for a new bulk to be applied. After the bulk is applied, the rows (possibly hundreds of thousands) sit there until the next bulk is started. There is no point pre-loading before task start.
>> Is there a way for me to use a query to determine which task the attrep_changes table belongs to?
No. You'll have to parse the reptask logs, or figure out how to hash the task UUID to 64 bits and add the high and low 32 bits. That UUID can be found in a task export or in the REPCTL GETTASKSTATUS output, not in the AEM API, best I can tell.
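As discussed further down in this thread, the exact hash function Replicate applies to the UUID is not documented here (it is not MD5). The sketch below only illustrates the folding arithmetic described above, adding the high and low 32-bit halves of a 64-bit hash, using FNV-1a as a hypothetical stand-in for the unknown hash; the example UUID is made up.

```python
def fnv1a_64(data: bytes) -> int:
    # Stand-in 64-bit hash (FNV-1a); NOT Replicate's actual algorithm.
    h = 0xCBF29CE484222325
    for b in data:
        h = ((h ^ b) * 0x100000001B3) & 0xFFFFFFFFFFFFFFFF
    return h

def fold_to_32(uuid_str: str) -> str:
    # Add the high and low 32-bit halves, keep 32 bits, print as hex,
    # which is the shape of the attrep_changes<hash> suffix.
    h = fnv1a_64(uuid_str.encode())
    folded = ((h >> 32) + (h & 0xFFFFFFFF)) & 0xFFFFFFFF
    return f"{folded:08X}"

print("attrep_changes" + fold_to_32("d9f03a5e-0000-0000-0000-000000000000"))
```

Even with the folding right, this only reproduces Replicate's table names if the 64-bit hash matches, so checking the task log (see below in the thread) remains the reliable route.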
Good luck,
Hein
What algorithm is used for the hash? I see it is not MD5.
So I got the UUID from the task export. Now I am trying to find which attrep_changes table the task has created in the target DB.
Just set the task to LOGGING LEVEL TRACE for TARGET_APPLY, start the task, and check the log.
You'll see.
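Once the log is written, the table names can be pulled out mechanically. A small sketch, using two log fragments quoted elsewhere in this thread as sample input (in practice you would read the actual reptask log file; its path depends on your installation):

```python
import re

# Sample lines taken from log output quoted in this thread.
sample_log = """\
Permanent Table or View 'HEIN.attrep_changesE987E911_0000001' was not found
Failed to begin bcp bulk to net changes table [attrep_changesA6056954911721A8]
"""

# Net-changes table names are attrep_changes followed by hex (and an
# optional per-table suffix), so a simple pattern is enough to collect them.
tables = set(re.findall(r"attrep_changes[0-9A-F_]+", sample_log))
print(sorted(tables))
```

Matching the names found in a task's log against the tables present in the target database answers the earlier question of which task an attrep_changes table belongs to.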
Hein.
Facing the below error, and the attrep table is not created at the target end. The task is in a failed state during full load only. Is this related to the attrep table? As per the reading above, the attrep table should be created in the CDC phase, am I correct? If so, why is this failing? Requesting your help here.
[TARGET_APPLY ]E: RetCode: SQL_SUCCESS SqlState: 42000 NativeError: 229 Message: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The SELECT permission was denied on the object 'attrep_changesA6056954911721A8', database 'master', schema 'dbo'. [1022502] (sqlserver_endpoint_imp.c:5528)
00015820: 2024-03-26T01:41:18 [TARGET_APPLY ]E: RetCode: SQL_SUCCESS SqlState: 42000 NativeError: 11529 Message: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The metadata could not be determined because every code path results in an error; see previous errors for some of these. [1022502] (sqlserver_endpoint_imp.c:5528)
[TARGET_APPLY ]E: Failed to begin bcp bulk to net changes table [attrep_changesA6056954911721A8] [1022517] (sqlserver_bulk.c:711)
00015820: 2024-03-26T01:41:18 [TARGET_APPLY ]E: Error executing command [1022517] (streamcomponent.c:1981)
Hello @suvbin ,
The table attrep_changesA6056954911721A8 is the Net Changes table, which is used for CDC Batch Optimized Apply mode. A Full Load-only task does not need it at all.
Would you please check the task log file to see if the task is running in Full Load-only mode? For example:
2024-03-26T09:58:29:821227 [TASK_MANAGER ]I: Task 'Ora-to-SQL' running full load only in fresh start mode (replicationtask.c:1818)
thanks,
John.
@john_wang I could not find the above statement in the logs, but the task stopped in full load only. And it's batch apply mode.