Oct 14, 2021 10:02:25 AM
Jul 24, 2020 8:58:58 AM
The following article provides detailed guidelines for configuring Qlik Replicate to work with the copy redo logs to temporary folder feature with ASM.
The general recommendation when using Oracle ASM is to work with the new parallel mode; see the article titled:
Oracle - Optimizing read speeds using Oracle ASM storage
The Copy option is only available when the Qlik Log Reader redo log access method is selected.
When the Copy redo logs to temporary folder option is enabled, Replicate instructs Oracle to copy the full redo log or chunks of the redo log to a local folder residing on the Oracle Server machine or to a shared network folder that can be accessed by Oracle Server.
Chunks of redo logs are copied from the ASM online redo logs using the Oracle DBMS_DISKGROUP package, whereas archived redo logs are copied in their entirety using the COPY_FILE method of the Oracle DBMS_FILE_TRANSFER package. In addition, Replicate uses Oracle directory objects to denote the source and target folders in this transfer (ASM is the source and the temporary folder is the target).
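As a rough sketch, the directory objects and the archived-log copy described above look like the following (the directory names, paths, and file names here are illustrative assumptions, not the objects Replicate actually creates):

```sql
-- Illustrative only: directory names, paths, and file names are assumptions.
CREATE DIRECTORY asm_src_dir  AS '+DATA/ORCL/ARCHIVELOG/2021_10_14';
CREATE DIRECTORY temp_tgt_dir AS '/mnt/share/replicate_tmp';

-- Copy an archived redo log from ASM to the temporary folder:
BEGIN
  DBMS_FILE_TRANSFER.COPY_FILE(
    source_directory_object      => 'ASM_SRC_DIR',
    source_file_name             => 'thread_1_seq_100.262.1086337905',
    destination_directory_object => 'TEMP_TGT_DIR',
    destination_file_name        => 'thread_1_seq_100.arc');
END;
/
```

Replicate issues the equivalent calls itself; the sketch is only meant to show which Oracle objects are involved.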
After the redo logs have been copied to the temporary folder, Replicate reads them using one of two methods: the "BFILE" method, in which Oracle reads the files and sends their content to Replicate, or the "Direct Access" method, in which Replicate reads the files directly from a shared network folder.
The "Direct Access" option using a shared network folder is the faster of the two and has the least impact on Oracle Server resources. This is because Replicate reads the files directly from the shared folder, eliminating the need for Oracle to send Replicate the file content. However, using the "Direct Access" option requires some additional configuration (as described below).
To prevent old redo logs from accumulating in the temporary folder, Replicate should be configured to delete the redo log files from the temporary folder once they have been processed. The delete operation is performed using Oracle file groups and the Oracle DBMS_FILE_GROUP package.
Certain Oracle permissions are required, regardless of which access method is selected. These permissions are fully documented in Using Oracle as a source. This section describes only those permissions required for the Qlik Log Reader and Copy redo logs to temporary folder options.
The following permissions are required:

- SELECT on DBA_FILE_GROUPS. Example:
GRANT SELECT ON DBA_FILE_GROUPS to attu_user;
- EXECUTE on SYS.DBMS_FILE_GROUP. Example:
GRANT EXECUTE ON SYS.DBMS_FILE_GROUP to attu_user;
- The MANAGE_ANY_FILE_GROUP privilege, granted through DBMS_FILE_GROUP.GRANT_SYSTEM_PRIVILEGE. Example:
execute DBMS_FILE_GROUP.GRANT_SYSTEM_PRIVILEGE (DBMS_FILE_GROUP.MANAGE_ANY_FILE_GROUP, 'attu_user', FALSE);
Oracle ASM Access Permissions
Replicate requires ASM access permissions in order to read the online redo logs from ASM. This is because reading the online redo logs from ASM is performed using a function from the Oracle DBMS_DISKGROUP package, which requires the SYSASM or SYSDBA privilege.
From Oracle 11g Release 2 (11.2.0.2), Qlik Replicate must be granted the SYSASM privilege in order to access the ASM account. For older supported versions, granting Qlik Replicate the SYSDBA privilege should be sufficient.
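For example, assuming the ASM account used by Replicate is named asmuser (an illustrative name), a DBA connected to the ASM instance could grant the privilege as follows:

```sql
-- Run against the ASM instance as a user with the SYSASM privilege.
-- 'asmuser' is an illustrative account name, not a Replicate default.
GRANT SYSASM TO asmuser;

-- On older supported versions (before 11.2.0.2):
GRANT SYSDBA TO asmuser;
```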
Also validate ASM account access by opening a command prompt and issuing one of the following statements:
sqlplus asmuser/asmpassword@+asmserver as sysdba
OR
sqlplus asmuser/asmpassword@+asmserver as sysasm
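Once connected, a quick sanity check is to query the ASM disk groups; if the connection and privileges are correct, this should list the disk groups the instance manages:

```sql
-- Lists the ASM disk groups visible to the connected account:
SELECT name, state, total_mb, free_mb FROM v$asm_diskgroup;
```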
Setting up the File Share if the "Direct Access" option was chosen
If the Replicate has file level access to temporary folder option is selected (the "Direct Access" method), make sure to set up a shared network folder (using NFS or Samba, for example) that can be accessed from all Oracle nodes.
The shared folder must allow write access and grant delete permission to the user and group under which the Oracle database is running. In addition, it must allow read access to Replicate Server. Make sure that you can read the contents of the share when logged in to the Replicate Server machine as the user under which the Qlik Replicate Server service runs (the 'attunity' user).
This configuration should be done by the customer's IT team, but Replicate Support is always available to assist.
In the Advanced tab of the Oracle source endpoint:
Security and Folder Location
The Oracle database must be able to write to the temporary folder and delete from it. If the Oracle source database is part of a RAC cluster, the temporary folder must be located on a file share that is accessible to all of the nodes in the Oracle RAC. If the "BFILE" method is chosen for accessing the temporary folder, the folder does not need to be a shared network folder. In such a case, only authorized Oracle users can access it, and only through Oracle.
If the "Direct Access" option is chosen, only the Oracle user and the "remote" Replicate user (the NFS or Samba user) that will read the redo logs should be granted permission to access the folder.
Also, as the archived redo logs are deleted from the temporary folder after processing, the risk of unauthorized access to the redo logs is greatly diminished.
If multiple tasks use the same temporary folder, conflicts may arise, such as one task needing to access a redo log that another task has already deleted.
To prevent such conflicts, only one Replicate task should access a given temporary folder at any time. Instead, create a different subfolder under the same root folder for each task. For example, instead of specifying /mnt/share as the temporary folder for both tasks, specify /mnt/share/task1 for one task and /mnt/share/task2 for the other.
When working with the temporary folder, one should enable the Delete processed archived redo log files option described in Configuring the "Copy to Temp Folder" Option in Replicate. This ensures that the redo logs will not accumulate in the temporary folder.
As Replicate needs to copy the redo logs from all the RAC nodes, make sure to allocate enough space in the temporary folder for (at least) the maximum size of the redo logs multiplied by the number of RAC nodes in the Oracle source. Also, for each temporary folder, allocate up to 50 MB multiplied by the number of RAC nodes. This is for small files which are copies of the chunk that Replicate is currently reading from the ASM online redo log. Although these files are not deleted, there will only be one such small file per RAC node in each temporary folder at any given time.
In the event that multiple tasks are running, disk space calculation should also be multiplied by the number of tasks as each task has its own temporary folder.
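To plug real numbers into this calculation, the maximum online redo log size can be read from v$log; the arithmetic below assumes a 2-node RAC and a single task, which are illustrative assumptions:

```sql
-- Largest online redo log size, in MB:
SELECT MAX(bytes) / 1024 / 1024 AS max_redo_mb FROM v$log;

-- Rough per-task space estimate for a 2-node RAC (illustrative):
--   required space >= (max_redo_mb * 2 nodes) + (50 MB * 2 nodes)
-- Multiply the result by the number of tasks if each task has its own folder.
```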
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support for the solution may not be provided by Qlik Support.
In a POC we are connecting to an Oracle RAC that uses ASM. Since we do not have SYSASM access and have no real-time requirement, we configured the endpoint settings this way:
Replicate Log Reader
Use path as it appears in database
Copy redo logs to temporary folder (/qlik_fsqa which is an NFS share and required because we have 2 nodes in the RAC)
Delete processed archive redo log files
Use archived redo logs only
/qlik_fsqa is on Linux and is set to 755.
When Qlik copies the file to the Linux folder, it writes the file as read-only (440). The requirement is to copy the file to the Linux folder with read-write for the owner (640) so Qlik can delete the processed files automatically, which is not happening due to the read-only permissions.
We created another Oracle directory to point to a folder on the local Oracle machine. When we execute the DBMS_FILE_TRANSFER.COPY_FILE command to send a file to the local storage, it gets created with correct read-write permissions (644). But when we execute the DBMS_FILE_TRANSFER.COPY_FILE to send the file to the NFS share, it gets created as read only (440).
We spent quite a bit of time troubleshooting with an Oracle DBA, a Linux admin, and some smart Qlikkies, but haven't yet found the root cause of the permissions issue. Has anyone run into a similar issue? Do you have any other ideas we could test?