Replicate Error for Oracle source endpoint:
ORA-01801: date format is too long for internal buffer.
Cause
The date format string was too long to process. This should occur only if several long literals are specified as part of the date at the source.
Resolution
Remove long literals from the date format string.
Identify the exact record causing the issue on the database side, for example by using the sqlplus utility to scan the affected column.
Related Content
ORA-12899: value too large for column
Environment
- Oracle as source endpoint for Qlik Replicate
Labels: Data Connection
This article lists the resolution and troubleshooting steps that must be followed if images are not rendered correctly in Qlik NPrinting reports.
NOTE: Before proceeding below, please check this article
Troubleshooting Steps:
Image export failures may be due to configuration errors. Check the following:
- Insufficient system resources: Low Qlik Sense, QlikView, and NPrinting system resources can cause this issue. Monitor performance to check peak CPU and RAM usage on all NPrinting, Qlik Sense, and QlikView servers. NPrinting can exhaust a production Qlik Sense or QlikView server if those servers are not sufficiently provisioned, with enough RAM in particular. See https://community.qlik.com/t5/Official-Support-Articles/Finetuning-and-preparing-your-NPrinting-202x-Deployment-for-use/ta-p/1716222
- It may be useful to enable performance logging on the NPrinting, Qlik Sense, and/or QlikView servers to verify whether they are being heavily taxed while NPrinting reports or report publish tasks are being executed. See Qlik Sense - How to monitor resources using Microsoft Performance Monitor
- Assistance/suggestions mode for the Qlik Sense charts is not supported. Ensure it is disabled for all the objects imported in the report.
- The imported object ideally will be a native Qlik Sense chart object. 3rd party extensions may result in unexpected results. Contact your extensions vendor for 3rd party extension support
- Certain Qlik Sense visualizations from the visualization bundle are not supported: See compatibility matrix here:
Creating a visualization using a custom object
- For a listing of compatible, built-in Qlik Sense objects, check the compatibility matrix found in the following link to better understand specific Qlik Sense object limitations with NPrinting reporting.
Working with Qlik objects
- Check and validate minimum cipher requirements on the Qlik NPrinting and Qlik Sense servers. See NPRINTING ERROR CEF RENDERING EXCEPTION Cipher Issue
- Check that the internet options/proxy settings are cleared as per article:
NPrinting Issues to generate report and previews, connect to Qlik Sense apps, image and objects issues and on demand issues
- Check that Qlik NPrinting Engine and Server Internet Options are configured. See:
How to Configure NPrinting Server and NPrinting Engine computer Internet Options
- Check that inbound and outbound ports between the Qlik NPrinting Server, the Qlik NPrinting Engine, and the Qlik Sense server are configured. Keep in mind that ports 4997 and 443 must also be configured since the November 2018 release of Qlik NPrinting.
- Check that ports 2727 and 15672 are not being used by any other process/program on the NPrinting server and engine computers. These ports are required for normal NPrinting operation. See Ports - Qlik NPrinting
- Check that the Qlik Sense objects and visualizations used in your NPrinting report are supported, as per the compatibility information above
- Check whether HTTP is used in the 'Proxy Address' field of the Qlik NPrinting connection to the Qlik Sense app. If so, see the article
NPrinting Unexpected CEF rendering exception with HTTP connection
- Check that the Qlik Sense Proxy default 'https://' listening port setting has not been changed. Open the following link for more information:
NPrinting Designer CEF Rendering error and or Error Object reference not set to an instance
- Check for the use of a Qlik Sense proxy 'prefix' in the Qlik Sense Management Console. If one exists, the prefix must be added to the NPrinting connection string via the NPrinting web console, e.g. https://qlikserver1.qlik.com/qlik
- Check to see if anything has been entered into Additional Response Headers under the Qlik Sense Virtual Proxy being used (select Advanced to view), if so remove and test.
- Check the Qlik NPrinting logs for 'Websocket' errors. If found, check this article to resolve the issue:
NPrinting Engine connection stays offline on February 2019 release and newer - RabbitMQ log shows "Insufficient Security no_suitable_ciphers"
- Abort all running task and On-Demand task executions, then reboot the Qlik NPrinting Server, the Qlik NPrinting Engine computer, and the Qlik Sense server at the first available opportunity, and retest whether the issue persists. Ensure all services on each computer are up and running before retesting.
If the issue persists after checking all points above, please check the solutions in the resolutions section of this article.
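Several items in the checklist above come down to verifying that required ports (for example 2727 and 15672) are free or reachable. As a rough illustration (a sketch, not an official Qlik utility), a small Python snippet can test whether something is already listening on a local port:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0

# NPrinting messaging ports from the checklist above
for port in (2727, 15672):
    status = "IN USE" if port_in_use(port) else "free"
    print(f"port {port}: {status}")
```

A port reported as IN USE by another process would need to be freed or reassigned before NPrinting can claim it.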
Resolution steps:
Resolution 1
The Fully Qualified Domain Name path or another parameter of the original Qlik Sense server certificates may have changed. Use the following article to resolve this issue: NPrinting Verification process does not capture certificate FQDN mismatch in turn resulting in GRPC errors
- Open windows File Explorer
- Click 'View' tab
- In 'Show/Hide' area of the 'View' ribbon, Click checkbox "File Name Extensions" in order to show the full file name
Resolution 2
GRPC error: Check the NPrinting CEF Engine log file. The issue may be represented by the error: "The remote server returned an error: (401) Unauthorized". (Internal JIRA defect reference: OP-8814)
- Check that the path to the QMC proxy is the same as the connection path used in the NP connection proxy address path.
- You must also ensure that port 443 and port 4997 are not blocked in your environment as mentioned in the article description.
Resolution 3
If the NPrinting error logging or preview errors contain the message "CEF rendering exception - Buffer cannot be Null", go to the following article for the solution: CEF rendering exception - "Buffer cannot be Null" Error in NP report preview with Qlik Sense Connection
Resolution 4
If Resolutions 1 through 3 above do not resolve the issue and all image export requirements are met, check and perform the steps below:
- Stop the NPrinting Engine Service.
- Open the renderer.config file in C:\Program Files\NPrintingServer\NPrinting\Scheduler. You may need to open the file in administrator mode.
- Make a backup copy of the file.
- Add (or modify if present) the following parameter:
<add key="webrenderer-timeout-sec" value="120" />
120 is the timeout in seconds. It can be increased up to 300 if necessary; retest after each change.
- Restart the NPrinting Engine service
Increasing these timeout values may lead to improved performance as NPrinting may simply need more time to execute tasks and report generation. These changes also need to be made after an NPrinting upgrade. NP Config files are overwritten during the upgrade process.
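Because these config files are overwritten during upgrades, the renderer.config edit above may need to be reapplied. As an illustration only (it assumes renderer.config follows the standard .NET appSettings layout; back up the file and verify on your own server first), a Python sketch that adds or updates such a key in a config document:

```python
import xml.etree.ElementTree as ET

def set_app_setting(xml_text, key, value):
    """Add or update an <add key="..." value="..."/> under <appSettings>."""
    root = ET.fromstring(xml_text)
    settings = root.find("appSettings")
    for add in settings.findall("add"):
        if add.get("key") == key:
            add.set("value", value)   # key already present: update it
            break
    else:
        ET.SubElement(settings, "add", key=key, value=value)
    return ET.tostring(root, encoding="unicode")

config = "<configuration><appSettings></appSettings></configuration>"
print(set_app_setting(config, "webrenderer-timeout-sec", "120"))
```

Run any such script only while the NPrinting Engine service is stopped, as described in the steps above.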
How to Enable 'Logging'
If the resolutions above have not resolved the issue, perform the following steps and reproduce the issue once again in order to retrieve the debug logging necessary for a deeper investigation. Then zip and upload the requested debug logs to the support case.
- Enable debug logging as per Troubleshooting logs - Qlik NPrinting
Causes:
Possible causes of GRPC error
- Assistance/suggestions mode is enabled (full support for this feature is currently on the road map)
- the chart is an unsupported 3rd party extension (map extensions for example). See article below regarding Qlik Sense extensions and how to enable them for image export.
- In some cases, image web rendering default time while connected to Qlik Sense needs to be adjusted perhaps due to network latency or other network performance related issues
- Communication between the NPrinting server and the Qlik Sense server is discovered to be problematic.
- The Qlik Sense server itself could be overloaded and not responding to the NPrinting server in a timely manner.
- When NPrinting is used in conjunction with Qlik Sense deployments from September 2018 onward where "HTTP" is used rather than "HTTPS", specific CEF or GRPC errors will be logged. See the article Qlik NPrinting Unexpected CEF rendering exception with HTTP connection for more information.
- Certificate mismatch as identified in article NPrinting Verification process does not capture certificate FQDN mismatch in turn resulting in GRPC errors
- Invalid path used in the NP connection 'proxy' address path. Path used with the NP connection should be the same as the path used to open the QMC or QS Hub
- NPrinting log errors to look for: "Failed to open a resolver for connection" and "The remote server returned an error: (403) Forbidden"
Reference:
Creating visualizations using chart suggestions
Enabling export of your visualization extension
Using Qlik Sense third-party extensions
Labels: Configuration
The following steps outline a test Multi-Cloud App distribution setup in QlikView Server where applications are distributed to Cloud Hub on Qlik Cloud (Qlik Sense Enterprise SaaS).
Note that starting with the April 2020 release, re-distributions of apps have persistent Cloud Hub Space assignment: once an app is assigned to a Space by a tenant admin, it will remain assigned to that specific Space after the app is reloaded or edited and then re-distributed to the Cloud Hub(s).
Environment
- QlikView April 2020
- Qlik Cloud
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
Note: Steps covered in the video and some additional information are found below:
1. Connecting QlikView Server to a Qlik Sense Enterprise cloud deployment:
After adding the new Deployment via Qlik Management Console (QMC), under the General tab, copy the generated Local Bearer Token (Copy to clipboard). This will be pasted when setting up the Multi-Cloud Identity Provider in Qlik Sense Enterprise SaaS (QSE SaaS) using Qlik Cloud Services format.
2. Adding the bearer token to the Qlik Sense Enterprise Cloud Hub:
Login to the Management Console and under Identity Provider, create a new Multi-Cloud type provider for the QlikView Cloud Deployment connection. Then paste the Local Bearer Token previously copied above.
3. Publish a QlikView document or a link to the document in a Qlik Sense cloud hub:
Make sure to have Distribute to Cloud-Native and/or Distribute Link selected, with the correct created Deployment selected. If distributing the document link, a QlikView Server also needs to be added under Distribute to QlikView Server. As an option, the app can have Tag values associated in order to control App access in Cloud Hub.
5. Now the app should show up in the Qlik Cloud Console under Apps, and can be assigned to a Space.
6. Note that with QlikView Cloud Deployments, if the document/app or Link needs to be removed from Cloud hub, it needs to be deleted manually. This is not the case with Qlik Sense Multi-Cloud Deployments where the app is removed by removal of associated Deployments custom property value. More information under Example Multi-Cloud App distribution setup in Qlik Sense
7. Tags assigned to the app can be seen in the App's Details information in Cloud Hub > Explore > ... > Details
Related Content
- Example auth0 authentication setup on Qlik Cloud Services
- Setting Up Qlik Sense Enterprise Multi-Cloud or a SaaS edition
- Qlik Multi-Cloud Frequently Asked Questions (FAQ)
- Publish QlikView documents and links in a Qlik Sense cloud hub
- How to configure Unified Hub and publish QlikView document links in Qlik Sense Hub
Labels: Administration, Configuration
This article describes the possible designs of backfill patterns. A backfill sync is a process that syncs historical data from a source to a target.
Long-running data syncs in Blendr
Record processing blends can run into some issues when they are used to sync many records.
We usually see this in contact sync use-cases where a blend is used to sync contacts between 2 CRM systems. During its first run, the blend will sync all historical contact information. In the subsequent runs, it will only sync new and updated contacts.
In some cases where an account has many contacts (more than 1 million), this first run can cause challenges:
- It can take longer than the maximum blend run duration
- While this run is being executed other blends in a bundle are blocked
Backfill
These challenges can be overcome by building backfill blends. These blends won't sync all data in the first run, but they will only process a small batch of the total data in each run and rerun when that is finished. When all data is synced, the blend will stop rerunning.
By splitting the total amount of records over multiple blend runs, we allow the sync process to run longer than one blend run. And depending on the mechanism used to rerun the blend, it will be possible to run other blends between the runs of the backfill blend.
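The batching idea described above can be sketched in Python (hypothetical names; Blendr blends are built from blocks rather than code, so this only illustrates the control flow of a single run):

```python
def sync_to_target(record):
    """Stand-in for the per-record sync into the target CRM."""
    pass

def backfill_step(records, pointer, batch_size):
    """One blend run: process a single batch and return the new pointer.

    records must be ordered oldest first; pointer is the id of the last
    record synced in the previous run. Returns None when the sync is done.
    """
    batch = [r for r in records if r["id"] > pointer][:batch_size]
    if not batch:
        return None              # all data synced: stop rerunning the blend
    for record in batch:
        sync_to_target(record)
    return batch[-1]["id"]       # persist as the state for the next run
```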
Currently, there are 2 approaches to rerunning blends: using a triggered blend that triggers itself or using a scheduled blend.
Triggered blend
This blend will trigger itself in order to perform the backfill operation. To specify which portion of the data should be synced, the blend needs to keep track of a field from the record that was synced last in its previous run (an id, an updated_at timestamp, or another field). This can be done by storing this record as a parameter in the Data Store or by sending it in the payload when retriggering the blend.
This approach will be faster than working with scheduled blends. But if a run fails, the retriggering is interrupted. And when used in a bundle, it will only allow webhook blends to run between the triggered runs. Other blends will need to wait until the initial sync is finished.
Triggered blends: https://help.qlik.com/en-US/blendr/Content/blend-editor/calling-a-data-blend-via-a-webhook-url-rest-api-endpoint.htm
Scheduled blend
This blend will be executed according to a schedule. Similar to the triggered blend approach, it needs to be specified which portion of the data should be synced. For a scheduled blend, this can't be done by using a payload so it needs to store a "state" parameter in the Data Store.
This approach will be slower than working with triggered blends. But a failed run won't interrupt the sync and it can even be set to retry a failed batch X times before processing the next batch.
How to build a backfill sync with a scheduled blend
Scheduled blends: https://help.qlik.com/en-US/blendr/Content/blend-editor/scheduling-data-blends.htm
Tags: blendr.io
Labels: Workflow and Automation
A backfill sync is a process that syncs historical data from a source to a target.
When performing a backfill sync, the total amount of records that need to be synced between multiple systems is split over multiple blend runs. This allows the total sync to take longer than the maximum execution time of a single blend run.
In this article, scheduling will be used to execute a single blend multiple times.
It would also be possible to build a solution with a triggered blend that calls itself, but this is more error-prone as a failed blend execution can break the chain.
When building backfills, a scheduled blend offers 2 advantages over a triggered blend:
- other scheduled and triggered blends won't be blocked by the partial sync
- a failed run won't impact the schedule and a retry strategy can be implemented
The downside of using a scheduled blend is that it might take longer: the shortest possible interval for the schedule is 30 seconds. If many runs have to process an empty range of records, some time will be lost on those runs.
1. Setting the schedule
Set the template to run in "scheduled" run-mode. As long as the schedule is set, the blend will keep executing according to it.
When the sync is finished, the schedule needs to be disabled.
This can be done by executing the "Update Blend" block with the "Schedule every" parameter set to "Disabled". Combine this block with a condition block that checks if the sync has finished.
2. Dividing records and storing a state
The total amount of records needs to be split over multiple blend runs. And each blend run will need to know the current state of the sync. In other words, which part of the records it should process.
How this is done depends on the available list blocks for the connector belonging to the platform where data is fetched from. Our Shopify connector will be used as an example as it has all 3 types of list blocks.
2.1 Incremental block
If an incremental block is available, the pointer can be set using the "Set Pointer" block. This pointer is stored in the Blendr backend so there's no need to worry about storing the state. To prevent the incremental block from fetching all records, set the "Maximum numbers of items to retrieve" parameter in the block's settings tab to the number of records each run should process.
This approach only works if the records are returned sorted from old to new.
After all records for a certain run are processed, use the "Update Pointer" block to change the pointer to the timestamp of the newest record returned by the incremental list block.
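In Python terms, the incremental-block pattern looks roughly like this (a sketch with hypothetical names; the real blocks live in the Blendr editor):

```python
def list_incremental(records, pointer, max_items):
    """Mimic an incremental list block: records newer than the pointer,
    sorted oldest first, capped at the 'Maximum numbers of items' setting."""
    newer = sorted((r for r in records if r["updated_at"] > pointer),
                   key=lambda r: r["updated_at"])
    return newer[:max_items]

def next_pointer(batch, pointer):
    """'Update Pointer': advance to the newest processed timestamp."""
    return batch[-1]["updated_at"] if batch else pointer
```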
2.2 Search block
For an example, see the attached file: Shopify scheduled backfill sync - search block.json
How to import a blend from a JSON file
A search block is a block that allows you to retrieve a list of records matching a certain query. This can be an SQL-like query or a set of predefined fields. The block can be used with multiple strategies; this article focuses on a time window strategy, which consists of fetching records between two timestamps, for example "timestamp_min" and "timestamp_max". These timestamps define the time window.
Time window
The size of each batch is defined by the time between the timestamps that make up the time window. This should be a fixed value, for example 5 days or a couple of hours, depending on the "density" of the data.
To keep track of the current batch that needs to be processed (the state of the backfill), one timestamp should be saved in the CDP or Data Store. The other timestamp can be determined by adding or subtracting the time window size. The formula {blendguid} can be used as a unique identifier to scope variables to their template. During development, this will be parsed into the template's GUID. After a user installs the template, it will be parsed into the GUID of that specific installation.
When processing records from new to old, save the pointer in the CDP (or Data Store) during the setup flow. Use the "Date" formula with "now" as input time to generate a datetime string for the time of the template's installation.
The first steps of the template's main flow will be to fetch the state from the CDP or Data Store and define variables for the timestamps that make up the time window. In the included example, these variables are called "WindowStartDate" and "WindowEndDate".
Casting these values to clearly named variables will lower the template's complexity and will make it easier to use them later on in the backfill sync.
When performing the backfill from new to old, WindowEndDate will be set equal to the saved pointer. WindowStartDate will be set to the pointer minus the time window size.
Once these variables are defined, they can be used in the search block.
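For the new-to-old direction described above, the window arithmetic can be sketched as follows (variable names follow the example in this article; the 5-day window size is an assumption to be tuned per dataset):

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=5)   # fixed window size; tune to the data's density

def time_window(pointer):
    """Backfill from new to old: the saved pointer closes the window."""
    window_end = pointer             # WindowEndDate
    window_start = pointer - WINDOW  # WindowStartDate
    return window_start, window_end
```

After the run completes, window_start becomes the new pointer saved in the CDP or Data Store.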
Other strategies
There are many other possible strategies to split the records into batches when using a search block. This will differ from connector to connector as not all APIs have a search endpoint. Please contact our support team if a connector is missing a search block (that should be available according to the API documentation).
2.3 Regular list block
If neither an incremental list block nor a search block is available for a certain connector, a regular list block is the only possibility. This approach is not efficient and can take a long time.
The strategy is to store the ids of every processed record. After fetching a new batch of records, compare the ids of those records to the ids of already processed records by using the "Compare Lists" block.
This approach will take a long time and the "Compare Lists" block might take too long to run on big lists.
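The id comparison that the "Compare Lists" block performs can be expressed as a set difference, which also hints at why it slows down on very large lists (a hypothetical sketch, not the block's actual implementation):

```python
def unprocessed(batch_ids, processed_ids):
    """Keep only the ids that were not processed in earlier runs."""
    seen = set(processed_ids)                 # set lookup is O(1) per id
    return [i for i in batch_ids if i not in seen]
```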
3. Implementing a retry strategy
3.1 Manual retry
Optionally, an input field can be included in the settings flow with additional parameters such as:
- a final date, records older than this date will be ignored
- a custom start date, records younger than this date will be ignored
The custom start date can be used to restart the full sync from a point in time different than the moment of installation. It's even possible to retry a specific time window by specifying both a final date and a custom pointer.
3.2 Auto retry
Add a new variable "max_retry" to the CDP or Data Store and assign a numeric value to it. This equals the number of times the blend should retry a run if an error is encountered.
Go to the settings tab for each block that could cause an error and change "On Error" to "Warning". After each of these blocks, catch a potential error with a condition block. Retrieve the max_retry variable; if it is greater than 0, subtract 1 and stop the blend without updating the pointer in the CDP or Data Store. This will cause the next run to use the same time window. If the max_retry variable equals 0, update the pointer in the CDP or Data Store and use an alerting connector (for example, email) to notify a user or your services team that a portion of the backfill has failed for a certain time window.
Reset the max_retry variable and stop the blend. Optionally, store the information of failed runs in a new variable and send only one notification of all incomplete runs.
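The retry bookkeeping from step 3.2 can be sketched as follows (hypothetical names; the reset value of 3 retries is an assumption):

```python
def advance(pointer):
    """Stand-in for moving the time window to the next batch."""
    return pointer + 1

def handle_run(run_failed, state):
    """One scheduled run's retry logic; state mirrors the Data Store values."""
    if run_failed:
        if state["max_retry"] > 0:
            state["max_retry"] -= 1          # retry the same window next run
            return "retry"
        state["pointer"] = advance(state["pointer"])  # give up on this window
        state["max_retry"] = 3               # reset the retry budget
        return "notify"                      # alert about the failed window
    state["pointer"] = advance(state["pointer"])
    return "ok"
```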
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
Tags: blendr.io
Labels: How To, Workflow and Automation
Environment:
Qlik Sense Enterprise on Windows
Microsoft SQL Server
Problem:
The Test Connection option fails when you try to configure Qlik Sense to connect to an SQL Server.
When the test fails, the following error message displays:
Test Failed: Network IOException: certificate_unknown(46)
The above occurs when the SQL Server is configured to use a custom certificate for its connections.
Cause:
The cryptography provider requires the certificate that the SQL Server uses to have the Digital Signature parameter set under the Key Usage field. If the certificate used by your SQL Server does not have this parameter set, the error occurs.
To validate the issue:
- Open SQL Server Configuration Manager.
- Expand SQL Server Network Configuration.
- Select the Protocols for <instance name>.
- Right-click and select Properties.
- Click the Certificate tab.
- Click View, and view the cert details.
- Click the Details tab.
- Click the Key Usage field and validate whether Digital Signature is present or not.
Solution:
Configure your SQL Server to use a certificate that has Digital Signature enabled under Key Usage.
NOTE: It might be required to restart the SQL Server after you change the certificate.
Environment:
Qlik Sense Enterprise on Windows
Microsoft SQL Server
Problem 1:
Upgrade of Qlik Sense fails when you try to install Qlik Sense on a computer where:
- The SQL 2012 Native Client is already installed
And - The Native Client version is lower than 11.3.6538.0.
The installation fails with the following symptoms:
- When you click Next on the Database Information window, the following message is displayed, even though the information entered is correct.
Unable to make a connection to the database server.
- The Qlik Sense setup.log contains the following entries:
SQL::connect to server <SQL server name>
Testing NT Authentication to SQL Server.
Failed to connect to SQL Server [<SQL server name>] with error code [0x80004005]
Description for error code is [TCP Provider: An existing connection was forcibly closed by the remote host.]
Failed in connectToSQLServer with error code [0].
Where <SQL_Server_Name> is the name of the SQL Server being used.
Problem 2:
Qlik Sense stops functioning, even after it was successfully installed on a computer, where:
- The SQL 2012 Native Client is already installed, and the version is lower than 11.3.6538.0.
And - The Transport Layer Security (TLS) 1.0 is then disabled on the SQL Server.
Cause:
Qlik Sense uses the SQL 2012 Native Client to make connections to SQL. When the client is not present at the time that Qlik Sense is installed, the Qlik Sense installer installs the correct version. But, if an existing version with a lower minor version number is installed, the Qlik Sense installer does not upgrade it.
If the previously installed version only supports TLS 1.0, Qlik Sense is unable to connect to an SQL Server that has TLS 1.0 disabled. This fact can cause the following problems:
- The initial installation of Qlik Sense to fail.
- Qlik Sense stops functioning, if:
- Qlik Sense was installed when TLS 1.0 was enabled on the SQL Server.
And - TLS 1.0 was later disabled.
Solution:
The solution to both these problems is the same. You must upgrade the SQL 2012 Native Client on the Qlik Sense server to a version that supports TLS 1.2.
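Note that version strings such as 11.3.6538.0 compare numerically per component, not as plain text. A small Python sketch of the check (illustrative only, not a Qlik tool; the threshold is the minimum version named in this article):

```python
def version_tuple(v):
    """Split '11.3.6538.0' into (11, 3, 6538, 0) for numeric comparison."""
    return tuple(int(part) for part in v.split("."))

MINIMUM = version_tuple("11.3.6538.0")   # threshold cited in this article

def needs_upgrade(installed):
    """True when the installed Native Client is below the minimum version."""
    return version_tuple(installed) < MINIMUM
```

A naive string comparison would get this wrong (for example, "11.10..." sorts before "11.3..." as text), which is why the components are compared as integers.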
Method 1: From the Data Sources
- Open the Qlik Analytics Service
- Select Catalog
- Pick your Space
- Click Space Details, then select Data Files from the context menu
- Locate the data you wish to delete
- Click the ellipses to open the next menu
- Click Delete
- Alternatively, add /data at the end of the URL.
Method 2: From the Data Load Editor
- Open the App with a connection to the data files you wish to edit
- Go to Data Load Editor
- On the right side of the Data Load Editor, under the Add data section, select your data space.
- Click the table icon under DataFiles.
- Select the Delete option on the right side of each file that you want to delete.
Environment
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
Labels: Cloud Migration, Data Connection, How To
When configuring the mail server in Qlik Alerting, the following symptoms are all experienced:
- "Internal Server Error" in the web interface
- "409" error in the browser's developer tools
- Log files contain a Global ERROR similar to:
Global ERROR {"stack":"Error: 10716:error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol:[c:\\ws\\deps\\openssl\\openssl\\ssl\\statem\\statem_lib.c]:1958:\n","message":"10716:error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol:[c:\\ws\\deps\\openssl\\openssl\\ssl\\statem\\statem_lib.c]:1958:\n","library":"SSL routines","function":"ssl_choose_client_version","reason":"unsupported protocol","code":"ESOCKET","command":"CONN"}
Using other software on the same machine, it is possible to send emails via the same mail server.
Environment
- Qlik Alerting February 2021
Resolution
Upgrade to Qlik Alerting May 2021 (upcoming) to solve this.
Cause
This is an issue with Node.js's handling of TLS connections; an improvement was added in Qlik Alerting May 2021 to resolve it.
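The "unsupported protocol" message is a TLS version negotiation failure: the client refuses to fall back to the protocol version the mail server offers. Python's ssl module illustrates the same mechanism (an illustration only, not the Qlik Alerting code path):

```python
import ssl

# A client context whose minimum version is TLS 1.2 will abort the handshake
# against a server that only offers an older protocol, producing an
# "unsupported protocol" style error like the one in the log above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```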
Internal Investigation ID(s):
QIAB-366
Labels: Data Connection
Unable to Start the Qlik Sense Services in Services.msc due to:
Error 1068: The dependency service or group failed to start
Environment
- Qlik Sense Enterprise on Windows All Versions
Resolution
- Check the dependency of the services in Services.msc and start the services accordingly.
- Right-Click on the Service which is throwing the error -> Properties -> Check Dependencies Tab
- Use a service account (with local admin rights) to run the services in Services.msc
- Check the Service account if it has 'Log on as a service' rights
Launch the Local Group Policy Editor by running gpedit.msc from the Run command (Windows+R), then browse to Computer Configuration -> Windows Settings -> Security Settings -> Local Policies -> User Rights Assignment. Find 'Log on as a service' and make sure this account is added; the account needs this privilege to run the services you are trying to start.
- Try to update/verify the service account credentials for the service throwing the error in Services.msc
- Qlik Sense should be installed on Windows Server editions, not on desktop editions such as Windows XP/7/10.
- As a last resort, repair or reinstall (try Repair first)
Labels: Configuration
Downloading or transferring the QlikSense.exe from a shared path fails with:
No hash value found
This means that the file is being blocked by anti-virus software or other restrictions set by the local IT department. A possible set of solutions is documented below, but we recommend speaking to your local IT department for details.
Environment:
Qlik Sense Enterprise on Windows
Resolution
Step 1: Obtain the hash.
To obtain a hash, use Powershell:
- Open PowerShell with administrative access
- Run the following command: Get-FileHash "Shared file location"
For example: Get-FileHash "\\QlikSense1\Dropzone\Applications\Qlik Sense\2020-11\Qlik_Sense_setup.exe"
- Make a note of the hash value so that it can be excluded.
Step 2: Add the hash value to your endpoint anti-virus configuration
Step 3: Exclude the file extensions
If excluding the hash value in step 2 did not fix the issue, the file extensions need to be excluded from restrictions.
For example: .exe; .msi; .*
For more information about hash values, please refer to the Microsoft article below:
- Labels:
-
Deployment
A Qlik NPrinting report fails to fetch data from Qlik Sense.
The server task log records the error:
This server's clock is not synchronized with the primary domain controller's clock.
Environment:
Qlik NPrinting
Qlik Sense Enterprise on Windows
Either of the following applies:
- The Windows Time service is not running and is set to Disabled on the client.
- The time has not been synced with the domain controller.
Resolution
Make sure that the Windows Time service is set to Automatic and that it is running.
To sync the time with the domain controller, run the following commands in an administrative command window:
w32tm /resync
net time \\DC /set
Where DC is the full domain name of the domain controller.
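A minimal command sequence for the checks above, run in an elevated command prompt (sc, net, and w32tm are standard Windows tools):

```shell
REM Set the Windows Time service to start automatically and start it.
sc config w32time start= auto
net start w32time

REM Check the current synchronization status and time source.
w32tm /query /status

REM Force a resync with the domain time source.
w32tm /resync
```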
- Labels:
-
Configuration
Environment:
IMPORTANT: If you are trying to capture traffic to and from an SQL Server, do not use Wireshark, because it does not readily display such traffic. Instead, use the Microsoft Message Analyzer. For information about how to use Microsoft Message Analyzer to capture SQL Server traffic for analysis, see Data Collection - How to use Microsoft Message Analyzer to capture SQL Server traffic for analysis.
On the system where you want to capture loopback traffic, do the following:
- Download and install the latest version of Wireshark from https://www.wireshark.org/.
- During the installation, a dialog displays where you can choose to install npcap. Accept the default settings and install npcap.
- To start capturing traffic, run Wireshark.
- At the initial screen, double-click the 'Adapter for loopback traffic capture' adapter.
Wireshark now captures loopback traffic. After the traffic has been captured, stop and save the Wireshark capture.
NOTES:
- To capture local loopback traffic, Wireshark needs to use the npcap packet capture library.
- This package is included with later versions of Wireshark, but older versions included the WinPcap library, which does not support loopback capture.
If you have an older version of Wireshark on your Qlik server, remove both Wireshark and WinPcap, and then install the latest Wireshark version.
Related Content:
Useful Wireshark features and tests for communication troubleshooting
- Labels:
-
Configuration
-
Deployment
Environment:
- Microsoft Message Analyzer (MMA)
- Any Qlik Software
Traffic to and from a Microsoft SQL Server is encapsulated in Tabular Data Stream (TDS) packets, which make it hard to analyze using common tools such as Wireshark. Instead, you can use the Microsoft Message Analyzer (MMA) to capture and analyze this traffic.
NOTE: Microsoft retired MMA in November 2019, and it can no longer be downloaded from Microsoft. But if you already have a copy available, you can continue to use it, following the advice below.
On the system where you want to capture traffic:
- To start capturing traffic, run MMA.
- At the initial screen, start a new trace:
- If you want to capture loopback traffic, for example, if Qlik server and SQL are on the same system, select Loopback and Unencrypted IPsec from the Favorite Scenarios menu.
- If Qlik server and SQL are on separate systems, select Local Network Interfaces from the Favorite Scenarios menu.
MMA now captures traffic. After the traffic has been captured, stop and save the MMA capture as a .matp file.
- Labels:
-
Data Connection
-
How To
Environment:
- Wireshark
- Microsoft Windows Server 2012 and later
- Qlik Sense Enterprise on Windows
- QlikView
- Qlik NPrinting
Qlik Technical Support has requested a packet capture, but your security policy or a warranty restriction prevents you from installing Wireshark.
Resolution:
Use the following steps to generate a packet capture in Windows 2012 and later.
- Open a command-line session using Run as administrator.
- Start the capture:
Type netsh trace start capture=yes protocol=TCP and press Enter.
NOTE: View the command output. The output lists where the capture is saved.
- Keep the command-line session open.
- Reproduce your issue.
NOTE: Technical Support strongly recommends that you list all IP addresses and hosts used in the session.
- Return to the open session or open a new command-line session using Run as administrator.
- Stop the packet capture:
Type netsh trace stop and press Enter.
- Navigate to the folder the session listed as the output location.
The capture file is in ETL (Microsoft Tracelog) format.
- Copy the files from the output directory and send them to Qlik Technical Support.
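The steps above can be run end-to-end from one elevated command prompt, for example:

```shell
REM Start the built-in Windows packet capture.
REM Note the trace file path printed in the command output.
netsh trace start capture=yes protocol=TCP

REM ... keep this session open and reproduce the issue here ...

REM Stop the capture; this flushes the .etl file to the listed output location.
netsh trace stop
```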
- Labels:
-
General Question
-
How To
As of Replicate 6.5, when working with an Oracle endpoint using the Attunity log reader to access the redo log, the following grant is required for capturing RESETLOGS operations:
Grant SELECT ON V_$DATABASE_INCARNATION
If Replicate detects that the user specified in the endpoint settings does not have the required permission, a warning will be written to the log and the task will continue as normal.
However, when replicating from a RAC environment and/or capturing RESETLOGS operations, not setting this permission may result in data loss and may cause the following error to be written to the log:
]E: Cannot find any Archived Redo log in the current incarnation (probably, the provided destination id is incorrect), thread ..
Resolution
To solve this error, you can either add the missing permission, i.e.:
Grant SELECT ON V_$DATABASE_INCARNATION
or
turn off the supportResetLog option:
- Go to the Oracle source endpoint --> Advanced tab --> scroll down to the internal parameters
- In the text box, add the following: supportResetLog
- Uncheck the box
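The first option, adding the missing grant, can be sketched from SQL*Plus as follows. The user name replicate_user is a placeholder; substitute the Oracle user configured in the Replicate endpoint:

```shell
# Connect as a DBA user and grant the required privilege.
# "replicate_user" is a placeholder for the endpoint's Oracle user.
sqlplus / as sysdba <<'SQL'
GRANT SELECT ON V_$DATABASE_INCARNATION TO replicate_user;
SQL
```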
- Labels:
-
Configuration
Question
Which DB2Z utilities can cause table suspend?
Answer
In general, whenever there is an action/change in a table that is not logged to the database redo logs, Replicate is not aware of the change. If Replicate were to continue as if nothing happened, there would be a risk of data loss or of the target endpoint getting out of sync. When Replicate detects such an action, it usually suspends the table. The only way to 'unsuspend' a table is by reloading it, which ensures that the target endpoint is in sync with the source, since the reload includes all the changes performed by the action that was not captured. Replicate can then safely continue CDC without risk of data loss.
When working with the DB2 z/OS endpoint, several database utility runs result in subtype 83 diagnostic log records being written to the database log, which causes the table to be suspended.
The known DB2/Z utilities that will cause suspend table are:
- LOAD RESUME ( all except shrlevel change )
- LOAD REPLACE (note that LOG YES vs. LOG NO has no impact on suspension from LOAD) - the internal parameter db2LoadOption can be used to control whether to ignore the action or suspend the table (as of Replicate 7.0 this parameter defaults to "IGNORE", in which case a warning message is still printed).
- REORG DISCARD ( if any data is deleted )
- CHECK DATA DELETE YES ( if any data is deleted )
- RECOVER TO POINT IN TIME
- Labels:
-
Configuration
When replicating from an MS SQL Always On Availability Group endpoint, Replicate may be unable to establish a connection to the replicas, failing with the following error:
Failed to connect to replica 'XXXXXXXX\YYYYYY'
RetCode: SQL_ERROR SqlState: 08001 NativeError: 2 Message: [Microsoft][ODBC Driver 17 for SQL Server]Named Pipes Provider: Could not open a connection to SQL Server [2]
[SOURCE_CAPTURE ]T: sqlserver_src_set_odbc_connection_string(): connection string (without credentials): 'DRIVER={ODBC Driver 17 for SQL Server};SERVER=XXXXXXXX.ZZZZ.net;DATABASE=dbname;Trusted_Connection=no;' (sqlserver_endpoint_util.c:222)
Resolution
The problem is caused by an incorrect alias setting on the AlwaysOn Availability Group.
The name of the alias needs to be XXXXXXXX.ZZZZ.net to connect to replica XXXXXXXX\YYYYYY.
- Labels:
-
Configuration
Question
How does Replicate read changes from DB2 LUW db log?
Answer
Replicate reads the log records from the DB2 database logs using the db2ReadLog API method. The log reading frequency is set by default to 5 seconds. The reading frequency can be controlled in the endpoint settings under:
DB2 LUW source endpoint --> Advanced tab --> "Check for changes every (sec):"
- Labels:
-
Configuration
Question
How does Replicate handle foreign keys?
Does Replicate support delete cascade?
Answer
In general, applying changes to tables with foreign keys is not supported.
However, since Replicate reads the database log to retrieve the change events, and a delete cascade is usually translated to a series of delete statements on two tables that are written to the database log, Replicate will read those deletes and process them like any regular deletes.
Replicate does not create the master-child relationship on the target. If that relationship is manually created on the target, Replicate is not aware of it. In this case, when a delete arrives for a row in the master table, Replicate will delete it. If a delete cascade is then performed automatically (by the database, not by Replicate), Replicate will generate errors when it tries to delete the child rows. In this case, Replicate should be configured to ignore those errors (No rows found for delete --> Ignore record).
- Labels:
-
Configuration
-
General Question