After configuring the Qlik Stitch user in Snowflake to use key-pair authentication and updating the Stitch destination settings accordingly, existing integrations begin to fail with the following error:
SQL access control error:
Insufficient privileges to operate on schema 'XXXX'.
Permissions are verified to be correct: All role grants and schema privileges are properly configured in Snowflake as documented in Connecting a Snowflake Destination to Stitch.
Switching back to password authentication restores the integrations, but does not satisfy security requirements for connecting to Snowflake.
Run the following SQL command in Snowflake:
ALTER USER STITCH_USER SET DEFAULT_ROLE = STITCH_ROLE;
This ensures the default role is used automatically when connecting via key-pair authentication: if no role is explicitly specified during login, Snowflake assumes STITCH_ROLE.
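To confirm the setting, you can inspect the user definition and the role grants (a minimal check using the example names STITCH_USER and STITCH_ROLE from above):
DESC USER STITCH_USER;           -- DEFAULT_ROLE should now show STITCH_ROLE
SHOW GRANTS TO USER STITCH_USER; -- STITCH_ROLE must be granted to the user
SHOW GRANTS TO ROLE STITCH_ROLE; -- the role must hold the documented schema and warehouse privileges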
The issue might occur if Qlik Stitch is using outdated schema references or if the Stitch user’s privileges have changed after switching to key-pair authentication.
Running a Target Exec of a Qlik Talend Runtime (OSGi) build type artifact fails in Studio with the following:
Execution failed :java.lang.Exception: Job was not built successfully, please check the logs for more details available on C:/Talend/Talend8/Talend8Workspace/ESB2/poms/jobs/process/testmicroservice_0.1/lastGenerated.log
[Job was not built successfully, please check the logs for more details available on C:/Talend/Talend8/Talend8Workspace/ESB2/poms/jobs/process/testmicroservice_0.1/lastGenerated.log
The lastGenerated.log will contain messages similar to:
[INFO] Scanning for projects...
[WARNING]
[WARNING] Some problems were encountered while building the effective model for org.example.esb2.service:TestMicroservice:bundle:0.1.0
[WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: org.apache.commons:commons-lang3:jar -> version 3.10 vs 3.17.0 @ line 171, column 17
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING]
[WARNING] The requested profile "signature" could not be activated because it does not exist.
[INFO]
[INFO] -------------< org.example.esb2.service:TestMicroservice >--------------
[INFO] Building ESB2 TestMicroservice-0.1.0 (0.1,Job Designs) Bundle 0.1.0
[INFO] from pom.xml
[INFO] -------------------------------[ bundle ]-------------------------------
[INFO]
[INFO] --- osgihelper:8.0.11:generate (osgi-helper) @ TestMicroservice ---
[INFO] Resolve mvn:org.example.esb2.service/TestMicroservice/0.1.0
[INFO]
[INFO] --- resources:3.3.1:resources (default-resources) @ TestMicroservice ---
[INFO] Copying 2 resources from src\main\resources to target\classes
[INFO]
[INFO] --- compiler:3.11.0:compile (default-compile) @ TestMicroservice ---
[INFO] Not compiling main sources
[INFO]
[INFO] --- resources:3.3.1:testResources (default-testResources) @ TestMicroservice ---
[INFO] Not copying test resources
[INFO]
[INFO] --- compiler:3.11.0:testCompile (default-testCompile) @ TestMicroservice ---
[INFO] Not compiling test sources
[INFO]
[INFO] --- surefire:3.3.0:test (default-test) @ TestMicroservice ---
[INFO] Tests are skipped.
[INFO]
[INFO] --- bundle:6.0.0:bundle (default-bundle) @ TestMicroservice ---
[WARNING] Bundle org.example.esb2.service:TestMicroservice:bundle:0.1.0 : Invalid package name: '@BundleConfigExportPackage@' in Export-Package
[WARNING] Bundle org.example.esb2.service:TestMicroservice:bundle:0.1.0 : Invalid package name: '@BundleConfigImportPackage@' in Import-Package
[INFO] Building bundle: C:\Talend\Talend8\Talend8Workspace\ESB2\poms\jobs\process\testmicroservice_0.1\target\TestMicroservice-bundle-0.1.0.jar
[INFO] Writing manifest: C:\Talend\Talend8\Talend8Workspace\ESB2\poms\jobs\process\testmicroservice_0.1\target\classes\META-INF\MANIFEST.MF
[INFO]
[INFO] --- assembly:3.6.0:single (default) @ TestMicroservice ---
[INFO] Reading assembly descriptor: C:\Talend\Talend8\Talend8Workspace\ESB2\poms\jobs\process\testmicroservice_0.1/src/main/assemblies/assembly.xml
[WARNING] The following patterns were never triggered in this artifact inclusion filter:
o 'org.example.esb2.service:TestMicroservice:jar:0.1.0'
[INFO] Building jar: C:\Talend\Talend8\Talend8Workspace\ESB2\poms\jobs\process\testmicroservice_0.1\target\TestMicroservice-bundle-0.1.0.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.650 s
[INFO] Finished at: 2025-10-23T09:12:57-04:00
[INFO] ------------------------------------------------------------------------
[WARNING] The requested profile "signature" could not be activated because it does not exist.
Change Build Type to Microservice.
The Remote Engine cannot run Talend Runtime (OSGi) build type artifacts. Therefore, Target Exec is not supported.
A Qlik Stitch Jira Integration encounters the following error during extraction:
500 JiraInternalServerError The server encountered an unexpected condition which prevented it from fulfilling the request.
...
CRITICAL HTTP-error-code: 500, Error: Internal server error
...
tap_jira.http.JiraInternalServerError: HTTP-error-code: 500, Error: Internal server error
Check Jira status page: https://status.atlassian.com/
If there are no active incidents and the error is recurring, please reach out to Jira Support and request a review of the logs for your authorizing Jira account. The Jira Support team should be able to provide more insight into these occurrences. Jira Support can be reached at: https://support.atlassian.com/
In the event that they suggest Stitch needs to make specific accommodations, please reach out to Qlik Support and provide the details.
The Stitch Jira integration utilizes the JIRA Cloud REST API v2, which uses the standard HTTP status codes.
A 500 error generally indicates an issue with Jira's server(s). The integration has built-in retry logic to account for connection errors returned without an API response from Jira's server(s).
Ultimately, this error indicates that the Jira server was unable to complete our request. The 500 response is a raw error message from Jira that the extractions will capture in the logs.
A NetSuite Suite Analytics integration encounters the following error:
Failed to login using TBA
Qlik Stitch supports connecting to NetSuite2 (netsuite2.com) through token based authentication.
To use token based authentication, select the Use token based authentication option in your integration settings.
Note that this setting cannot be modified once the integration runs an extraction. If you are seeing this error in existing integration(s), you will need to create new ones with token based authentication selected.
For guidance on re-using schema names, see Qlik Stitch: How to Upgrade the Integration to Latest Version with Same Destination Schema.
NetSuite Suite Analytics has deprecated netsuite.com in favor of netsuite2.com. Token based authentication is required for a successful connection to netsuite2.com.
This article describes how to handle dynamic tables in Qlik Replicate, such as when source tables are frequently added and deleted. This avoids having to manually add the table to a task each time.
The methods described will work for most source endpoints.
Our example uses Oracle as the source.
Create a new table in the source:
create table SYSTEM.suri4(id int primary key, anane varchar(10))
If you are using the SAP Application Endpoint for CDC processing and want to implement the new Full Record Mode feature with the Trigger-based endpoint, you will also need to convert the existing SAP Application Endpoint to use the SAP Application DB Endpoint.
Qlik Replicate 2025.5 exposes the setting directly in the endpoint Advanced tab.
The Endpoint now has the SAP HANA source endpoint defined as the Backend DB with Version 4 Trigger in Full Record Mode enabled.
The SAP HANA endpoint has Full Record Mode (version 4) enabled, which requires a full reload of the source to move to this new mode.
A Google Ads integration encounters the following extraction error:
tap - CRITICAL (<_InactiveRpcError of RPC that terminated with:
tap - CRITICAL status = StatusCode.PERMISSION_DENIED
tap - CRITICAL details = "The caller does not have permission"
tap - CRITICAL debug_error_string = "UNKNOWN:Error received from peer ipv4:172.253.115.95:443 {grpc_message:"The caller does not have permission", grpc_status:7}"
tap - CRITICAL >
tap - CRITICAL errors {
tap - CRITICAL error_code {
tap - CRITICAL authorization_error: USER_PERMISSION_DENIED
tap - CRITICAL }
tap - CRITICAL message: "User doesn't have permission to access customer. Note: If you're accessing a client customer, the manager's customer id must be set in the 'login-customer-id' header. See https://developers.google.com/google-ads/api/docs/concepts/call-structure#cid"
tap - CRITICAL }
tap - CRITICAL request_id: "xxxxxxxxxxxxxx"
Sign in to the Google Ads UI and ensure that:
You have access to the customer account ID you’re trying to query.
If it’s a client account, it must be linked to a manager (MCC) account that has API access.
If you see that the customer account is cancelled or inactive, reactivate it by following Google’s guide:
Reactivate a cancelled Google Ads account | support.google.com
If the issue persists, reach out to Google Ads API support with:
The error snippet
The request_id from your extraction logs (used by Google to trace the failed call)
Re-authorize the Google Ads integration:
Open Stitch in an incognito browser window.
Go to the Google Ads integration settings.
Click Re-authorize and follow the OAuth flow.
After re-authorizing, navigate to the Extractions tab and click Run Extraction Now.
If you manage multiple Google Ads accounts, note that:
Some accounts may work while others fail if they’re not connected to a manager account.
Only Ads accounts linked to a manager (MCC) have Ads API access.
Regular advertiser accounts must be linked to a manager account for Stitch to extract data successfully.
Prevention Tips
Periodically verify that the connected Google Ads account is linked to a manager account and the OAuth token has not expired.
Check for account status (ENABLED, CANCELLED, etc.) using the CustomerStatus enum | developers.google.com if you suspect deactivation.
Document the manager–client hierarchy for clarity when managing multiple accounts.
The error message indicates that the Google Ads API denied permission for the request. This is a raw authorization error returned by Google Ads, specifically:
USER_PERMISSION_DENIED
The user or OAuth credentials being used don’t have permission to access the target Ads customer account.
If you’re accessing a client (managed) account, the manager account ID must be provided in the login-customer-id header.
See Google’s reference documentation for AuthorizationError | developers.google.com.
Qlik Data Gateway connection details can no longer be viewed or modified. New connections cannot be created.
Existing Qlik Data Gateway connectors continue to work as expected.
The following error is displayed in the connection details:
Obsolete connection type
A temporary fix has been deployed in the Qlik Cloud back end while a permanent fix is being developed. In the meantime, the issue will resolve itself after a period of time, or you can restart your Data Gateway Direct Access server.
DCAAS-2117
To integrate Auth0 JWT Bearer Token authentication with the Talend tRESTRequest component, use the JWT Bearer Token security option with the Java Keystore (*.jks) keystore type.
The steps are similar to those described in Obtaining a JWT from Microsoft Entra ID | Qlik Help. Save the Auth0 signing certificate (PEM format) to a file, for example talend-esb.cer:
-----BEGIN CERTIFICATE-----
MGLqj98VNLoXaFfpJCBpgB4JaKs
-----END CERTIFICATE-----
Import the certificate into a Java keystore:
keytool -import -keystore talend-esb.jks -storepass changeit -alias talend-esb -file talend-esb.cer -noprompt
Then configure the following settings:
Security: JWT Bearer Token
Keystore File: /path_to/talend-esb.jks
Keystore Password: changeit
Keystore Alias: talend-esb
Audience: "https://dev-xxxx.us.auth0.com/api/v2/"
An AWS S3 CSV integration fails with the following error:
tap - CRITICAL 'search_prefix'
tap - Traceback (most recent call last):
tap - File "/code/tap-env/bin/tap-s3-csv", line 10, in <module>
tap - sys.exit(main())
tap - ^^^^^^
tap - File "/code/tap-env/lib/python3.12/site-packages/singer/utils.py", line 235, in wrapped
tap - return fnc(*args, **kwargs)
tap - ^^^^^^^^^^^^^^^^^^^^
tap - File "/code/tap-env/lib/python3.12/site-packages/tap_s3_csv/__init__.py", line 81, in main
tap - config['tables'] = validate_table_config(config)
tap - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tap - File "/code/tap-env/lib/python3.12/site-packages/tap_s3_csv/__init__.py", line 63, in validate_table_config
tap - table_config.pop('search_prefix')
tap - KeyError: 'search_prefix'
main - INFO Tap exited abnormally with status 1
main - INFO No tunnel subprocess to tear down
main - INFO Exit status is: Discovery failed with code 1 and error message: "'search_prefix'".
Improving the integration to gracefully handle the missing key when an update to the connection/config occurs is currently on the roadmap. The R&D team is working on this behavior and a minor version upgrade is expected down the line; however, there is currently no ETA.
If you encounter this error with your AWS S3 CSV integration, please reach out to Qlik Support for further assistance.
The issue occurs due to missing keys in the configuration. Specifically, an update to the connection settings removed or modified the search_prefix, resulting in the key being absent from the config expected by the integration.
The following IBM mainframe error occurs with the ARC CDC Solutions endpoint when the wrong ARC installation files are used:
Daemon (ATTDAEMN):
09.30.44 STC06957 ASTB1001E CDC library provided in STEPLIB but ATYLIB DD card missing - no CDC
09.30.44 STC06957 ASTB1012E ERROR=ABEND in effect
Sp15-620271-ARC-mvs.zip is confirmed to be the full patch while ARC_620271_mvs.zip is a partial patch.
Use ARC installation files that have SP prefixes in the file name.
ARC installation files without the SP prefixes are partial installation files that may not contain all the components that you need.
MySQL source tables with invisible columns crash a Qlik Replicate task on start. The following error is logged:
[INFRASTRUCTURE ]E: Process crashed with signal 11, backtrace: !{/opt/attunity/replicate/lib/at_base.so!4db3bd,/opt/attunity/replicate/lib/at_base.so!3a3938,/opt/attunity/replicate/lib/at_base.so!505053,/usr/lib64/libc.so.6!4e5b0,/opt/attunity/replicate/lib/libarepmysql.so!2b931,/opt/attunity/replicate/lib/libarepmysql.so!2e094,/opt/attunity/replicate/lib/libarepmysql.so!2e3c9,/opt/attunity/replicate/lib/libarepbase.so!4cecbb,/opt/attunity/replicate/lib/libarepbase.so!4d814e,/opt/attunity/replicate/lib/libarepbase.so!4681e2,/opt/attunity/replicate/lib/libarepbase.so!55fd36,/opt/attunity/replicate/lib/libarepbase.so!560108,/opt/attunity/replicate/lib/libarepbase.so!56b0ee,/opt/attunity/replicate/lib/libarepbase.so!5a1bca,/opt/attunity/replicate/lib/libarepbase.so!77d496,/usr/lib64/libpthread.so.0!81ca,/usr/lib64/libc.so.6!398d3,}! [1000100] (at_system_posix.c:575)
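If you need to identify which source tables contain invisible columns, a minimal catalog query (assuming MySQL 8.0.23 or later, where invisible columns are flagged in the EXTRA column of information_schema.COLUMNS) is:
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
FROM information_schema.COLUMNS
WHERE EXTRA LIKE '%INVISIBLE%'
ORDER BY TABLE_SCHEMA, TABLE_NAME;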
This issue is caused by defect RECOB-10177. The handling of invisible columns for MySQL tables is incorporated in Qlik Replicate 2025.5 SP03 and newer versions.
Qlik Replicate could not handle invisible columns for the MySQL source.
2025.5 SP03
After a successful upgrade to Qlik Sense Enterprise on Windows November 2024 patch 8, changing the MS SQL Server connection's password from the Data Load Editor (DLE) generates the following error:
Bad request 400
This error occurs when clicking 'SAVE connection', even after the connection has been successfully tested.
Changing the MS SQL Server connection's password from the Qlik Sense Enterprise Console (in the Data Connection view) works as expected.
This is a known defect (QCB-32467 | SUPPORT-4457) in Qlik Sense November 2024 and Qlik Sense May 2025.
Upgrade to:
QCB-32467 | SUPPORT-4457
When replicating data with a MySQL integration, you may encounter the following extraction error:
Fatal Error Occurred - Streaming result set com.mysql.cj.protocol.a.result.ResultsetRowsStreaming@xxxx is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
2025-09-30 20:30:00,000Z tap - INFO [main] tap-hp-mysql.sync-strategies.common - Querying: SELECT `pk_col`, `col1`, `col2`, `col3` FROM `schema`.`table` WHERE ((`pk_col` > ? OR `pk_col` IS NULL)) AND ((`pk_col` <= ?)) ORDER BY `pk_col` (<last PK value checked>, <max PK value>)
2025-09-30 20:32:00,000Z tap - FATAL [main] tap-hp-mysql.main - Fatal Error Occurred - Streaming result set com.mysql.cj.protocol.a.result.ResultsetRowsStreaming@XXXX is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
The query running at the time of the error resembles:
SELECT `pk_col`, `col1`, `col2`, `col3`
FROM `schema`.`table`
WHERE (`pk_col` IS NULL OR `pk_col` > [last PK value checked]) AND `pk_col` <= [max PK value]
ORDER BY `pk_col`;
To check which sessions are still sending rows (and may therefore hold an open streaming result set), run SHOW FULL PROCESSLIST; or query the process list directly:
SELECT ID, USER, HOST, DB, COMMAND, TIME, STATE, INFO
FROM information_schema.PROCESSLIST
WHERE STATE = 'Sending data';
If you are unable to alleviate the error following the above, please reach out to Qlik Support.
This error occurs when Stitch has an active server-side streaming ResultSet on a MySQL connection and tries to execute another statement on that same connection before the stream is fully consumed and closed. MySQL’s JDBC driver allows only one active statement per connection while a streaming result is open.
Potential Contributors
When using Amazon S3 as a target in a Qlik Replicate task, the Full Load data are written to CSV, TEXT, or JSON files (depending on the endpoint settings). By default, the Full Load files are named using incremental counters, e.g. LOAD00000001.csv, LOAD00000002.csv.
In some scenarios, you may want to use the table name as the file name rather than LOAD########.
This article describes how to rename the output files from LOAD######## to the <schemaName>_<tableName>__######## format while Qlik Replicate is running on a Windows platform.
The article focuses on cloud target endpoints (ADLS, S3, etc.); the example uses Amazon S3 as the remote cloud storage.
This customization is provided as is. Qlik Support cannot provide continued support for the solution. For assistance, reach out to Professional Services.
@Echo on
rem Point the AWS CLI at the credentials file created earlier
setx AWS_SHARED_CREDENTIALS_FILE C:\Users\demo\.aws\credentials
rem %1 = ${FILENAME}, %2 = ${TABLE_OWNER}, %3 = ${TABLE_NAME} (the Parameters configured on the endpoint)
rem Extract the file name without its extension, e.g. LOAD00000001
for %%a in (%1) do set "fn=%%~na"
echo %fn%
rem Keep the 8-digit counter that follows the LOAD prefix, e.g. 00000001
set sn=%fn:~4,8%
echo %sn%
rem Move (rename) the uploaded object to <owner>_<table>__<counter>.csv under the <owner>.<table> folder
aws s3 mv s3://%1 s3://qmi-bucket-1234567868c4deded132f4ca/APAC_Test/%2.%3/%2_%3__%sn%.csv
where C:\Users\demo\.aws\credentials is the credentials file generated in step 3 above. The values in this sample are obfuscated.
General
Bucket name : qmi-bucket-1234567868c4deded132f4ca
Bucket region : US East (N. Virginia)
Access options : Key pair
Access key : DEMO~~~~~~~~~~~~UXEM
Secret key : demo~~~~~~~~~~~~ciYW7pugMTv/0DemoSQtfw1m
Target folder : /APAC_Test
Advanced
Post Upload Processing, choose "Run command after upload"
Command name : myrename_S3.bat
Working directory: leave blank
Parameters : ${FILENAME} ${TABLE_OWNER} ${TABLE_NAME}
7. Start or reload the Full Load only task and verify the file output.
C:\Users\demo>aws s3 ls s3://qmi-bucket-1234567868c4deded132f4ca/APAC_Test --recursive --human-readable --summarize
2023-08-14 11:20:36 0 Bytes APAC_Test/
2023-08-15 08:10:24 0 Bytes APAC_Test/SCOTT.KIT/
2023-08-15 08:10:28 9 Bytes APAC_Test/SCOTT.KIT/SCOTT_KIT__00000001.csv
2023-08-15 08:10:24 0 Bytes APAC_Test/SCOTT.KIT500K/
2023-08-15 08:10:34 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000001.csv
2023-08-15 08:10:44 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000002.csv
2023-08-15 08:10:54 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000003.csv
2023-08-15 08:11:05 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000004.csv
2023-08-15 08:11:15 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000005.csv
2023-08-15 08:11:24 2.7 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000006.csv
Total Objects: 10
Total Size: 22.7 MiB
Qlik Replicate
Amazon S3 target
Qlik Replicate and File target: How to rename output files LOAD######## to table name format on Wind...
Qlik Replicate and File target: How to rename output files LOAD######## to table name format on Linu...
A replication task fails with the following:
[TARGET_APPLY ]I: ORA-03135: connection lost contact Process ID: 19637 Session ID: 1905 Serial number: 3972 [1022307] (oracle_endpoint_load.c:862)
[TARGET_APPLY ]I: Failed to truncate net changes table [1022307] (oracle_endpoint_bulk.c:1162)
[TARGET_APPLY ]I: Error executing command [1022307] (streamcomponent.c:1987)
[TASK_MANAGER ]I: Stream component failed at subtask 0, component st_0_PCA UAT DW Target [1022307] (subtask.c:1474)
[TARGET_APPLY ]I: Target component st_0_PCA UAT DW Target was detached because of recoverable error. Will try to reattach (subtask.c:1589)
[TARGET_APPLY ]E: Failed executing truncate table statement: TRUNCATE TABLE "PAYOR_DW"."attrep_changesBF9CC327_0000402" [1020403] (oracle_endpoint_load.c:856)
This may require additional review by your database admin.
In this instance, the issue was caused by a database-level trigger named TSDBA.AUDIT_DDL_TRG, which monitors DROP, TRUNCATE, and ALTER statements and is currently invalid.
To resolve the issue, validate the trigger and add logic to exclude attrep_changes% tables, as these are temporary tables used for Qlik Replicate batch processing.
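A minimal sketch of that validation and of the exclusion logic, assuming the trigger is a database-level DDL trigger as described above (the existing auditing logic must replace the placeholder; ORA_DICT_OBJ_NAME is the standard Oracle DDL event attribute):
-- Check whether the trigger is invalid
SELECT owner, object_name, status
FROM dba_objects
WHERE object_type = 'TRIGGER'
  AND owner = 'TSDBA'
  AND object_name = 'AUDIT_DDL_TRG';
-- Recompile the trigger with an exclusion for Qlik Replicate's temporary net changes tables
CREATE OR REPLACE TRIGGER TSDBA.AUDIT_DDL_TRG
AFTER DROP OR TRUNCATE OR ALTER ON DATABASE
BEGIN
  IF UPPER(ora_dict_obj_name) NOT LIKE 'ATTREP_CHANGES%' THEN
    NULL;  -- existing auditing logic goes here
  END IF;
END;
/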
Tables from your integration are not being loaded to Snowflake. The loading error is:
Cannot perform CREATE FILE FORMAT. This session does not have a current schema. Call 'USE SCHEMA', or use a qualified name.
Ensure the Stitch role has the required privileges on the warehouse and database, and that it is set as the Stitch user's default role:
GRANT ALL ON WAREHOUSE <stitch_warehouse> TO ROLE <stitch_role>;
GRANT ALL ON DATABASE <stitch_database> TO ROLE <stitch_role>;
ALTER USER <stitch_user> SET DEFAULT_ROLE = <stitch_role>;
The root cause of the issue is likely related to permissions or role settings for the Stitch user in Snowflake. If you run a destination connection check in the Stitch user interface for your Snowflake connection and it is successful, but your loads fail, then the error comes down to a permissions issue with loading the data.
When replicating data from MSSQL to Azure SQL tables, the task fails with the error:
The metadata for source table 'table_name' is different than the corresponding MS-CDC Change Table. The table will be suspended.
Verify if the tables you are replicating are temporal or system tables. Temporal or system tables are not supported by Qlik Replicate. See Limitations and considerations for details.
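A minimal check for whether a table is a system-versioned temporal table (assuming SQL Server 2016 or later; '<the table name>' is the placeholder used in the statements below):
SELECT name, temporal_type_desc   -- SYSTEM_VERSIONED_TEMPORAL_TABLE indicates a temporal table
FROM sys.tables
WHERE name = '<the table name>';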
If you want to capture changes to these tables with MS-CDC and Qlik Replicate, then you have to unhide the system-generated columns:
ALTER TABLE <the table name> ALTER COLUMN [SysStartTime] drop HIDDEN;
ALTER TABLE <the table name> ALTER COLUMN [SysEndTime] drop HIDDEN;
Depending on how the table was created, the hidden column names may be different, such as ValidFrom, ValidTo.
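If you are unsure of the period column names for a given table, a minimal catalog query (again assuming SQL Server 2016 or later) lists the system-generated, hidden period columns:
SELECT name, is_hidden, generated_always_type_desc
FROM sys.columns
WHERE object_id = OBJECT_ID('<the table name>')
  AND generated_always_type <> 0;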
If you don't want to make the above change, you can use the ODBC with CDC endpoint and capture both the base table and the history table using SysStartTime as the context column.
See Qlik Replicate: W: The metadata for source table 'dbo.table' is different than the corresponding MS-CDC Change Table for details.
The following error may be encountered in Qlik Replicate when reading from an Oracle Standby database node:
[SOURCE_CAPTURE ]E: Cannot create Oracle directory name 'ATTUREP_9C9D285sample_directory' with path '/RDSsamplefilepath/db/node_C/archive' [-1] (oradcdc_bfilectx.c:165)
Qlik Replicate accesses Oracle archive logs through Oracle directories from the file path assigned to the node, as retrieved from the v$Archived_log view. The mentioned error occurs when the Qlik Replicate task is unable to use the Oracle directory and file path set in the DB. In this instance, Qlik Replicate attempts to create its own custom directory.
If the user does not have Create Any Directory permissions, then this error occurs.
Read permissions on the file path of the Oracle directory are required; otherwise, the task will remain unable to access the archive logs, even when permissions to the Oracle directory are provided.
See Access privileges when using Replicate Log Reader to access the redo logs for details.
Example:
When working with the standby (secondary) node C, the Oracle user does not have default permissions to the Oracle directory and file path. Granting permissions on just the Oracle directory is not enough for the task to access the file path. Read permissions must be given to both ARCHIVELOG_DIR_C and abc_C/arch in this example.
Provide Read permissions to both the Oracle Directory and the file path in use.
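A minimal sketch of the directory grant (REPLICATE_USER is a hypothetical user name; read access on the underlying file path, abc_C/arch in this example, must additionally be granted at the operating system level):
GRANT READ ON DIRECTORY ARCHIVELOG_DIR_C TO REPLICATE_USER;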
The task was missing read permissions on the file path of the Oracle directory.