Search our knowledge base, curated by global Support, for answers ranging from account questions to troubleshooting error messages.
Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
Index:
We're happy to help! Here's a breakdown of resources for each type of need.
Support: Reactively fixes technical issues as well as answers narrowly defined specific questions. Handles administrative issues to keep the product up-to-date and functioning.
Professional Services (*): Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement.
(*) reach out to your Account Manager or Customer Success Manager
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about product and Qlik solutions, such as scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar series that facilitates knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat, by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
Log in to manage and track your active cases in Manage Cases.
Please note: to create a new case, it is easiest to do so via our chat (see above). Our chat will log your case through a series of guided intake questions.
When creating a case, you will be prompted to enter problem type and issue level. Definitions shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible, but there are some functional limitations that are not critical to daily operations.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation, or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
This Techspert Talks session covers:
Chapters:
Resources:
Q&A:
Q: Multi-language apps are really critical to my business as we globalise - where is there more content about how we can handle dimension name translation in line with the native Qlik language translations?
A: Dimension names can be renamed in the load script, but it may not be necessary; just translate the dimension labels in the app instead.
Making a Multilingual Qlik Sense App
Q: Do the objects within the new Container have to be master visualizations?
A: No, you don't need to use Master visualizations. You can add new charts to the object or drag and drop existing charts from the sheet.
Q: When will the Layout Container be available?
A: Most likely later this year
Q: After the update, there is a problem with filtering a "toString" field. I can't open the application for 10 minutes. What is wrong with that field? (e.g. load * inline [toString test1];)
A: Hard to tell without seeing the app and knowing what the field is. As a general rule, keep the cardinality of fields down. If I had to guess, toString in this case may stop the engine from optimizing the field. To learn more, read HIC's post.
Symbol Tables and Bit-Stuffed Pointers
Q: Will we be able to pin objects to certain locations on the grid? As shown, a sheet menu built using the layout container would be nice to pin to the top left corner, for example.
A: In the first release, positioning and size will use percentages. So, if you set the position to 0% for both axes, it would be pinned in the corner.
Click here to see video transcript
"C:\Program Files\NPrintingServer\Settings\SenseCertificates"
NOTE: Reminder that the NPrinting Engine service domain user account MUST be ROOTADMIN on each Qlik Sense server to which NPrinting connects.
The Qlik NPrinting server target folder for exported Qlik Sense certificates
"C:\Program Files\NPrintingServer\Settings\SenseCertificates"
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
When using Kafka as a target in a Qlik Replicate task, the source table's "Schema Name" and "Table Name" are not included in the Kafka message by default.
In some scenarios, you may want to add additional information to the Kafka messages.
In this article, we summarize the available options and weigh their pros and cons. We use the "Table Name" as the example in the alternatives below:
Cons:
-- No variable is available, so the value is not dynamic but a fixed string. In our sample, the expression string is 'kit'
-- Affects the single table only
-- The table name appears in the message's data part (rather than the headers part)
{ "magic": "atMSG", "type": "DT", "headers": null, "messageSchemaId": null, "messageSchema": null, "message": { "data": { "ID": "2", "NAME": "test Kafka", "tableName": "kit" }, "beforeData": { "ID": "2", "NAME": "ok", "tableName": "kit" }, "headers": { "operation": "UPDATE", "changeSequence": "20230911032325000000000000000000005", "timestamp": "2023-09-11T03:23:25.000", "streamPosition": "00000000.00bb2531.00000001.0000.02.0000:154.6963.16", "transactionId": "00000000000000000000000000060008", "changeMask": "02", "columnMask": "07", "transactionEventCounter": 1, "transactionLastEvent": true } } }
Global Rules can be used to add a table name column to all tables' messages.
Pros:
-- Affects all tables
-- Variables are available; in our sample, the variable $AR_M_SOURCE_TABLE_NAME is used.
-- The table name can be customized by combining it with other transformations, e.g. adding the suffix "__QA"
Cons:
-- The table name appears in the message's data part (rather than the headers part)
If both a table-level transformation and a global rule transformation are defined (and their values differ), the table-level transformation overrides the global transformation settings.
{ "magic": "atMSG", "type": "DT", "headers": null, "messageSchemaId": null, "messageSchema": null, "message": { "data": { "ID": "2", "NAME": "test Kafka 2", "tableName": "KIT" }, "beforeData": { "ID": "2", "NAME": "test Kafka", "tableName": "KIT" }, "headers": { "operation": "UPDATE", "changeSequence": "20230911034827000000000000000000005", "timestamp": "2023-09-11T03:48:27.000", "streamPosition": "00000000.00bb28db.00000001.0000.02.0000:154.7632.16", "transactionId": "00000000000000000000000000170006", "changeMask": "02", "columnMask": "07", "transactionEventCounter": 1, "transactionLastEvent": true } } }
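For reference, the global rule above is an "Add column" transformation. A minimal sketch of its configuration, based on the variable and optional suffix mentioned in the Pros list (the column name is an example, and the exact dialog fields may differ by Replicate version):
New column name: tableName
Expression: $AR_M_SOURCE_TABLE_NAME
Expression with an optional suffix: $AR_M_SOURCE_TABLE_NAME || '__QA'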
Enabling the "Table Name" option in the table-level Message Format settings includes the table name in the Kafka message headers.
Cons:
-- Affects the single table only
Pros:
-- Available in Replicate 2023.5 and later
-- The table name appears in the message's headers part (rather than the data part)
{ "magic": "atMSG", "type": "DT", "headers": null, "messageSchemaId": null, "messageSchema": null, "message": { "data": { "ID": "2", "NAME": "test Kafka 3" }, "beforeData": { "ID": "2", "NAME": "test Kafka 2" }, "headers": { "operation": "UPDATE", "changeSequence": "20230911041053000000000000000000005", "timestamp": "2023-09-11T04:10:53.000", "streamPosition": "00000000.00bb2c30.00000001.0000.02.0000:154.9378.16", "transactionId": "00000000000000000000000000060005", "changeMask": "02", "columnMask": "03", "transactionEventCounter": 1, "transactionLastEvent": true, "tableName": "KIT" } } } |
Enabling the "Table Name" option in the task-level Message Format settings includes the table name in the Kafka message headers for all tables.
Pros:
-- Affects all tables
-- Available in Replicate 2023.5 and later
-- The table name appears in the message's headers part (rather than the data part)
If both table-level and task-level "Message Format" settings are defined (and their values differ), the table-level setting overrides the task-level setting.
{ "magic": "atMSG", "type": "DT", "headers": null, "messageSchemaId": null, "messageSchema": null, "message": { "data": { "ID": "2", "NAME": "test Kafka 4" }, "beforeData": { "ID": "2", "NAME": "test Kafka 3" }, "headers": { "operation": "UPDATE", "changeSequence": "20230911042445000000000000000000005", "timestamp": "2023-09-11T04:24:45.000", "streamPosition": "00000000.00bb2e56.00000001.0000.02.0000:154.9799.16", "transactionId": "00000000000000000000000000080001", "changeMask": "02", "columnMask": "03", "transactionEventCounter": 1, "transactionLastEvent": true, "tableName": "KIT" } } } |
Qlik Replicate (versions 2023.5 and above)
Kafka target
When using Kafka as a target in a Qlik Replicate task, the Control Table names in the target are created in lower case; for example, the Apply Exceptions topic name is "attrep_apply_exceptions" (if auto.create.topics.enable=true is set in the Kafka broker's config/server.properties file). This is the default behavior.
In some scenarios, you may want to use a non-default topic name, or a topic name in upper case, to match your organization's naming standards. This article describes how to rename the control table topics.
In this article, we will use the topic name "attrep_apply_exceptions" as an example. You can customize the control topics below using the same process:
The same approach also works at a more generic level, not only for the Kafka target endpoint; at that level, other metadata such as "target_schema" can be renamed too.
Kafka topic names cannot exceed 255 characters (249 from Kafka 0.10) and can only contain the following characters: a-z | A-Z | 0-9 | . (dot) | _ (underscore) | - (minus)
More detailed information can be found at Limitations and considerations.
The safest maximum topic name length is 209 characters (rather than 255/249).
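As a quick, illustrative sanity check (not part of the original article), a candidate topic name can be tested against these rules from a shell before you configure it:
topic="ATTREP_APPLY_EXCEPTIONS"
# allowed characters: a-z A-Z 0-9 . _ -  ; recommended maximum length: 209
if [ "${#topic}" -le 209 ] && printf '%s' "$topic" | grep -Eq '^[A-Za-z0-9._-]+$'; then
  echo "topic name looks valid"
else
  echo "topic name violates the length or character rules"
fi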
Qlik Replicate (versions 2022.11, 2023.5 and above)
Kafka target
case #00010983, #00108779
Generate a record to attrep_apply_exceptions topic for Kafka endpoint
In the Replicate Oracle source endpoint, there was a limitation:
Object names exceeding 30 characters are not supported. Consequently, tables with names exceeding 30 characters or tables containing column names exceeding 30 characters will not be replicated.
The limitation comes from the behavior of older Oracle versions. However, since Oracle 12.2, Oracle supports object names up to 128 bytes, and long object names are now in common use. The User Guide limitation "Object names exceeding 30 characters are not supported" can now be overcome.
There are two major types of long identifier names in Oracle: 1- long table names, and 2- long column names.
1- Error messages for long table names
[METADATA_MANAGE ]W: Table 'SCOTT.VERYVERYVERYLONGLONGLONGTABLETABLETABLENAMENAMENAME' cannot be captured because the name contains 51 bytes (more than 30 bytes)
Add an internal parameter skipValidationLongNames to the Oracle source endpoint and set its value to true (default is false) then re-run the task:
2- Error messages for long column names
There are different messages if a column name exceeds 30 characters:
[METADATA_MANAGE ]W: Table 'SCOTT.TEST1' cannot be captured because it contains column with too long name (more than 30 bytes)
Or
[SOURCE_CAPTURE ]E: Key segment 'CASE_LINEITEM_SEQ_NO' value of the table 'SCOTT.MY_IMPORT_ORDERS_APPLY_LINEITEM32' was not found in the bookmark
Or (incomplete WHERE clause)
[TARGET_APPLY ]E: Failed to build update statement, statement 'UPDATE "SCOTT"."MY_IMPORT_ORDERS_APPLY_LINEITEM32"
SET "COMMENTS"='This is final status' WHERE ', stream position '0000008e.64121e70.00000001.0000.02.0000:1529.17048.16']
There are two steps to resolve the above errors for long column names:
(1) Add the internal parameter skipValidationLongNames (see above) to the endpoint
(2) The Oracle parameter "enable_goldengate_replication" must also be enabled. This can only be done by the end user and their DBA:
alter system set ENABLE_GOLDENGATE_REPLICATION=true;
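To verify the parameter before and after the change, the DBA can check it with a standard Oracle query (this verification step is not in the original article):
SHOW PARAMETER enable_goldengate_replication
-- or, from any SQL client:
SELECT name, value FROM v$parameter WHERE name = 'enable_goldengate_replication';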
Note that this is supported when the user has a GoldenGate license, and Oracle routinely audits licenses. Consult with the user's DBA before altering the system settings.
Internal support case ID: # 00045265.
Keeping trailing spaces in the IBM DB2 for iSeries and IBM DB2 for z/OS source endpoints is supported by adding the internal parameter keepCharTrailingSpaces.
Source data type and length: holds the trailing spaces.
Target data type and length: the trailing spaces were removed.
To add the Internal Parameter, the detailed steps are:
While working with the DB2 LUW endpoint, Replicate reports an error after the 64-bit IBM DB2 Data Server Client 11.5 installation:
SYS-E-HTTPFAIL, Cannot connect to DB2 LUW Server.
SYS,GENERAL_EXCEPTION,Cannot connect to DB2 LUW Server,RetCode: SQL_ERROR SqlState: IM003 NativeError: 160 Message: Specified driver could not be loaded due to system error 1114: A dynamic link library (DLL) initialization routine failed. (IBM DB2 ODBC DRIVER, C:\Program Files\IBM\SQLLIB\BIN\DB2CLIO.DLL).
Install the 64-bit IBM DB2 Data Server Client 11.5.4 (for example 11.5.4.1449) rather than 11.5.0 (actual version is 11.5.0.1077).
Qlik Replicate : all versions
Replicate Server platform: Windows Server 2019
DB2 Data Server Client : version 11.5.0.xxxx
Support cases, #00076295
Replicate reported errors when resuming a task if the source MySQL was running on Windows (there was no problem when MySQL was running on Linux):
[SOURCE_CAPTURE ]I: Stream positioning at context '$.000034:3506:-1:3506:0'
[SOURCE_CAPTURE ]T: Read next binary log event failed; mariadb_rpl_fetch error 1236 (Could not find first log file name in binary log index file)
Replicate sometimes reported errors for MySQL source endpoints (regardless of the MySQL source platform):
[SOURCE_CAPTURE ]W: The given Source Change Position points inside a transaction. Replicate will ignore this transaction and will capture events from the next BEGIN or DDL events.
Upgrade to Replicate 2022.11 PR2 (2022.11.0.394, released already) or higher, or Replicate 2022.5 PR5 (coming soon)
If you are running 2022.5 PR3 (or lower), then keep running it, or upgrade to PR5 (or higher).
There is no workaround for 2022.11 (GA or PR01).
Jira: RECOB-6526 , Description: It would not be possible to resume a task if MySQL Server was on Windows
Jira: RECOB-6499 , Description: Resuming a task from a CTI event would sometimes result in missing events and/or a redundant warning message
support case #00066196
support case #00063985 (#00049357)
While working with a PostgreSQL ODBC DSN as the source endpoint, the ODBC driver interprets the JSONB data type as VARCHAR(255) by default. This leads to JSONB column values being truncated, no matter how the LOB size or data type length in the target table is defined.
In general, the task reports a warning such as:
2022-12-22T21:28:49:491989 [SOURCE_UNLOAD ]W: Truncation of a column occurred while fetching a value from array (for more details please use verbose logs)
There are several options to solve the problem (any single one is sufficient):
I) Change the PostgreSQL ODBC source endpoint connection string
II) Or, on a Windows/Linux Replicate Server, add one line to the DSN definition in "odbc.ini" (a sample stanza is shown after option III below):
MaxVarCharSize=0
III) Or, on Windows, set "Max Varchar" to 0 (from the default value of 255) in the ODBC Data Source Administrator GUI (64-bit):
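For option II, a fuller odbc.ini entry might look like the following minimal sketch; the DSN name, driver path, host, and database are placeholders, and only the MaxVarCharSize line is the relevant addition:
[PostgreSQL_Source]
Description = Example PostgreSQL source DSN
Driver = /usr/lib64/psqlodbcw.so
Servername = pg-host.example.com
Port = 5432
Database = mydb
MaxVarCharSize = 0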
Qlik Replicate all versions
PostgreSQL all versions
Support cases, #00062911
Ideation article, Support JSONB
This is a guide to get you started working with Qlik AutoML.
AutoML is an automated machine learning tool in a code-free environment. Users can quickly generate models for classification and regression problems with business data.
Qlik AutoML is available to customers with the following subscription products:
Qlik Sense Enterprise SaaS
Qlik Sense Enterprise SaaS Add-On to Client-Managed
Qlik Sense Enterprise SaaS - Government (US) and Qlik Sense Business do not support Qlik AutoML.
For subscription tier information, please reach out to your sales or account team for exact information on pricing. The metered pricing depends on how many models you would like to deploy, dataset size, API rate, number of concurrent tasks, and advanced features.
Qlik AutoML is a part of the Qlik Cloud SaaS ecosystem. Code changes for the software, including upgrades, enhancements, and bug fixes, are handled internally and reflected in the service automatically.
AutoML supports Classification and Regression problems.
Binary Classification: used for models with a Target of only two unique values. Examples: payment default, customer churn.
Customer Churn.csv (see downloads at top of the article)
Multiclass Classification: used for models with a Target of more than two unique values. Examples: grading (gold/platinum/silver), milk grade.
MilkGrade.csv (see downloads at top of the article)
Regression: used for models with a Target that is a number. Examples: how much a customer will purchase, predicting housing prices.
AmesHousing.csv (see downloads at top of the article)
What is AutoML (14 min)
Exploratory Data Analysis (11 min)
Model Scoring Basics (14 min)
Prediction Influencers (10 min)
Qlik AutoML Complete Walk Through with Qlik Sense (24 min)
Non video:
How to upload data, then train, deploy, and predict with a model
Data for modeling can be uploaded from a local source or via the data connections available in Qlik Cloud.
You can add a dataset or data connection with the 'Add new' green button in Qlik Cloud.
There are a variety of data source connections available in Qlik Cloud.
Once data is loaded and available in the Qlik Catalog, it can be selected to create ML experiments.
AutoML uses a variety of data science pre-processing techniques such as Null Handling, Cardinality, Encoding, and Feature Scaling. Additional reference here.
Please reference these articles to get started using the realtime-prediction API
By leveraging Qlik Cloud, predicted results can be surfaced in Qlik Sense to visualize and draw additional conclusions from the data.
How to join predicted output with original dataset
If you need additional help please reach out to the Support group.
It is helpful if you have your tenant ID and subscription info, which can be found with these steps.
Please check out our articles in the AutoML Knowledge Base.
Or post questions and comments to our AutoML Forum.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
When working with Qlik Replicate, log.key file(s) are used to Decrypt Qlik Replicate Verbose Task Log Files. The log.key file can be re-created by restarting the tasks, or restarting the Replicate services, if the file is missing or deleted. However, sometimes we need to create the file prior to the first task run, e.g.:
(1) A DBA needs to set proper file protection manually
(2) The task is moved between environments, e.g. UAT and PROD
(3) In rare cases, the automatic file creation fails for some reason
This article provides some methods to generate the "log.key" file manually.
There are several methods to get a "log.key" file manually.
1. Copy an existing "log.key" file from UAT/TEST task folder;
It's better to ensure the "log.key" value is unique, so method (2) below is recommended:
2. Run "openssl" command on Linux or Windows
openssl rand -base64 32 >> log.key
The command writes a random, unique 44-character string (the last character is "=") to the "log.key" file. For example:
n1NJ7r2Ec+1zI7/USFY2H1j/loeSavQ/iUJPaiOAY9Y=
Support cases, #00059433
This is a handy tool to run against a diagnostic package downloaded from Replicate. The diagnostic package contains recent log files (less than 10MB) and task information which is helpful for troubleshooting issues.
The goal of the script is to search for common key words or phrases quickly without having to open and read each log manually.
It is meant to be run in a Linux environment with bash or shell scripting enabled. In the steps below, I am connecting to a CentOS machine with MobaXterm. Then I am viewing the report with WinSCP after connecting to the same machine.
runT.sh : shell script to set up instance folder, and then trigger the health_check.sh script
health_check.sh : script to search through task.json and the log files for information related to metadata, errors, and warnings, and then print a report.
These files are included in health_check.zip which is attached to the article. When you unpack them make sure they are executable (chmod +x ..).
1. Download both scripts and move them to an environment with bash/shell enabled.
2. In the same folder or directory location, upload the diagnostic package as a zipped file.
*Note: this must be the only zip file in the directory when running ./runT.sh
3. When you run the script, ./runT.sh, you must supply a folder name. When I use this, I call the folder the case name, but it could be the task name, etc.
Example:
#./runT.sh squeeze_example
This will create a new folder in the directory called 'squeeze_example' with the unzipped contents of the diagnostic package and the report_date.out file.
Here is a sample of the report_date.out file.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
The scheduling feature is now available in Qlik AutoML to run a prediction on a daily, weekly, or monthly cadence.
1. Open a deployed model from Qlik Catalog
2. Navigate to 'Dataset predictions' and click on 'Create prediction' on bottom right
3. Select the Apply Dataset, name the prediction dataset, select your options, then click on 'Create Schedule'
4. Set your schedule options you would like to follow then click confirm
5. Your options now are 'Save and close' (this will not run a prediction until the next scheduled run) or 'Save and predict now' (this will run a prediction now in addition to the schedule)
Note: Users need to ensure the predicted dataset is updated and refreshed ahead of the prediction schedule.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
I needed to test an ML experiment recently with a QVD instead of a CSV file.
Here are the steps I followed below to create a QVD file which was then available in Catalog.
1. Upload the local CSV dataset (or XLSX, etc.) and analyze it, which will create an analytics app
2. Open up the app and navigate to "Data Load Editor"
3. Add a new section under the Auto-generated section (with the + symbol). Note: this section must run after the Auto-generated section, or it will error because the data is not loaded.
Add the following statement:
Store train into [lib://DataFiles/train.qvd];
or
Store tablename into [lib://DataFiles/tablename.qvd];
4. Run "Load data"
5. Check Catalog for recently created QVD
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
During the 'Create' phase of an AutoML experiment, there is a section in the right-hand pane called 'Data Treatment'.
This section tracks any Feature Type changes you make to your training dataset.
I started the process of creating a ML experiment for the Ames housing dataset.
Then I changed 'Wood Deck SF' and 'Open porch SF' to Categorical instead of Numeric type under the 'Feature Type' column.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
This Techspert Talks session addresses the following:
00:00 - Intro
01:02 - What Qlik Replicate does
01:41 - How to create a task
05:02 - Troubleshooting Full Load error
06:06 - Analyzing the task logs
08:54 - Setting the correct schema in task settings
11:49 - Troubleshooting Change Data Capture (CDC)
13:38 - Solution in Mainframe
15:10 - Solution in Replicate
16:34 - Testing Data Capture Changes
18:03 - Q&A: How to do Replication without Primary Keys?
19:05 - Q&A: How to fix missing records?
19:58 - Q&A: What are best practices for Full Load and CDC?
21:03 - Q&A: Where is help for installing Replicate?
21:46 - Q&A: What about communication error?
22:49 - Q&A: Can Replicate have multiple tasks with same Source?
23:31 - Q&A: Where to find driver version info?
24:43 - Q&A: How to know the build number?
25:43 - Q&A: What is the N column in SQL error message?
26:38 - Q&A: Where is Kafka endpoint documentation?
27:28 - Q&A: How to find the table ID referenced in the logs?
28:16 - Q&A: What is restarting with Advanced Run Options?
28:50 - Q&A: What is Resume Processing?
29:12 - Q&A: Can you have multiple tasks pointing to the same target?
29:31 - Q&A: What log files are there?
30:21 - Q&A: Where to turn on Verbose logging?
31:19 - Q&A: How to get error-triggered verbose logging?
32:36 - Troubleshooting Recommendations
How to analyze a Qlik Replicate log
Qlik Replicate User Guide (Help.Qlik.com)
Installing Qlik Replicate documentation
Troubleshooting CDC Missing Data
Troubleshooting Missing Data During Full Load
Adding and managing End Points - documentation
Release notes
Ideation
Q&A:
Q: Is it possible to convert to consumable data?
A: Further research is needed. Can you open a case and give an example so that we may thoroughly answer your question?
Q: Sometimes log files will be missing? Have you ever had that kind of issue? Did customers ever report that?
A: Yes, there can be multiple reasons for missing database logs. Sometimes it is due to a situation where failovers occur and archive logs are not available on the other node, or in DB2 LUW they can become unavailable if not managed properly, in a timely fashion, by the log management facility for archiving and de-archiving. Sometimes IBM DB2 patches have needed to be applied. Definitely search the Community, and if there are no solutions there then open a case and attach a Diagnostic Package. This article should be helpful: What information to provide when troubleshooting missing data?
Q: I don’t see the option to automatically enable data capture change from the Replicate side. Is that option only in db2 source endpoint?
A: Yes, this is for the DB2 z/OS and DB2 LUW endpoints. It is not available for the DB2 iSeries endpoint, nor for other sources like Oracle or SQL Server.
Q: Hi, is there an option to find out if Replicate was able to replicate a specific record based on a specific key? Do we need to enable control tables for it?
A: Setting SOURCE_CAPTURE to Verbose may help, and other components may need Verbose logging as well; it really depends on the source type. You can try setting DATA_RECORD to verbose. Unfortunately, it does not allow you to select by key, but you can turn on Store Changes (in Task Settings) to help expose the data records and their keys, and then find the record in the target in a table called "your schema.tablename__CT".
Q: I have a specific problem when I activate CDC; it shows the following error:
"Cannot initialize subtask. Failed while preparing stream component 'st_0_SqlServer2019-111'. Please add TRACE logging to the component that is showing that message.
RetCode: SQL_ERROR SqlState: 42000 NativeError: 20028 Message: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The Distributor has not been installed correctly. Could not enable database for publishing. Line: 1 Column: -1 Failed (retcode -1) to execute statement: 'exec sp_replicationdboption @dbname = N'Northwind', @optname = N'publish', @value = N'true''
A: Yes this message is a SQL Server Message. I would first check with the SQL Server DBA on what the message means and search the Community for what other customers did to resolve the issue. Also, you can add TRACE logging to the component that is showing that message. Rerun the task and if it does not show enough information then open a case and attach the Diagnostic Package.
Q: I had a "snapshot too big" issue on one of my tasks when using Oracle as a source. Is Replicate doing a snapshot of the table to help it keep track of the changes happening on the table during the full load?
A: Yes, search the Community for that error message. See if these articles might be helpful: Loading a big table in Qlik Replicate or ORA-12899: value too large for column
Q: I am seeing now and then that some records get added to the apply_exceptions table. How do I debug the reason - is there a specific log level to be increased from Info?
A: Yes, you can change TARGET_APPLY to Trace or Verbose as needed, but only keep the extra logging on for a few minutes and then set it back to INFO when done. Then you can examine the contents of the attrep_apply_exceptions table, as it will have the INSERT/UPDATE/DELETE statements and the associated error message. Depending on the error, search the Community and ask your DBA; if you cannot resolve it, open a case and attach the Diagnostic Package to the case.
Q: Is the troubleshooting process similar in cases where Logstream is used for CDC?
A: Yes, it is the exact same. Great question! View the logs search for the problem, add TRACE or VERBOSE logging only for a few minutes during the problem, then put the logging back to INFO. Search the community for the messages. If you cannot find the resolution and if it is a SQL error you can either Google it and discuss it with your DBA, or if needed, open the case and attach the Diagnostic Package to the case.
Q: Which material should we go through in order to prepare for the Qlik Replicate certification?
A: Review the User Guide extensively and peruse through the Community for How To’s. There are also some excellent courses on Learning.Qlik.com for ramping up on everything to do with Qlik Replicate
Q: When using DB2z as a source and Kafka as a target - we see that empty fields (not NULL fields - actual empty fields filled with blanks) at the source are not sent at all in the data message. Is there an option to force Replicate to send empty fields? We would like to be able to differentiate empty fields from NULL fields when reading the attunity messages in Kafka.
A: Yes, if you have DB2 z/OS going to Kafka, you should select the row of the table and view the row in HEX (HEX ON). Do you see x'40's, which are blanks (spaces), in the field? If so, they can come through to the target. We would need your Diagnostic Package and CREATE TABLE DDL to reproduce it in-house. A transformation may or may not be needed. Please create a new case.
Q: Tasks can be exported in JSON, and then you can compare versions exported on a regular basis.
A: Yes, you can compare the versions of Replicate in the exported task. It will look like this:
"version": "2022.5.0.499"
Q: Barb, in your demo you mentioned you can re-run Full Load to grab the 8 records that were missed; if you do that, does it not duplicate records in your MS SQL Database?
A: Duplicate records can easily be handled by going into the task settings, then Error Handling, then Apply Conflicts. Change the setting to "Change to task policy". Then "Duplicate key when applying INSERT" should become "UPDATE the existing target record", and "No record found for applying an UPDATE" should become "INSERT the missing target record". Save the settings and reload the table.
When working with Qlik Replicate, there is a limitation that applies to LOB columns:
• When replicating a table that has no Primary Key or Unique Index, LOB columns will not be replicated
This is because Replicate uses the SOURCE_LOOKUP function to retrieve the LOB columns, using the PK/UI to position to the row(s) in the source database table. Hence the PK/UI is mandatory.
Another limitation about ROWID data type:
This article describes how to overcome the above limitations and set up a Replicate task to replicate LOB columns when the source table has no Primary Key or Unique Index. The workaround works for an Oracle source only, as an Oracle materialized view is introduced and the Oracle internal hidden ROWID pseudocolumn is used as the PK/UI of the materialized view.
Basically, the idea is:
(1) Expose the hidden ROWID column as a regular column in the materialized view (in step 3)
(2) Change the ROWID data type to CHAR (explicitly or implicitly) (in step 3)
(3) Define the ROWID column as the PK/UI of the materialized view (in step 4)
Then the materialized view can be replicated just the same as a regular Oracle table.
1. Assume there is a table in the Oracle source database which has NO Primary Key or Unique Index. Column "NOTES" is a CLOB column.
CREATE TABLE kitclobnopk (
id integer,
name varchar(20),
notes clob
);
2. Create materialized view log for the above table
create materialized view log on kitclobnopk WITH ROWID including new values;
3. Create materialized view which exposes ROWID hidden column as regular column (alias KITROWID) to Replicate.
CREATE MATERIALIZED VIEW KITCLOBNOPK_MV
REFRESH FAST ON COMMIT
WITH ROWID
AS
select ROWIDTOCHAR(t.rowid) kitrowid, t.id, t.name,t.notes from kitclobnopk t;
The materialized view data refreshes whenever changes (insert/update/delete) to the base master table are committed. The MV can be customized to meet other requirements and add richer logic, etc.
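As a quick sanity check of the on-commit refresh (sample values only, not part of the original steps), changes committed on the base table should appear in the view:
INSERT INTO kitclobnopk (id, name, notes) VALUES (1, 'row one', 'some clob text');
COMMIT;
-- the new row, including its KITROWID key, should now be visible in the MV
SELECT kitrowid, id, name FROM kitclobnopk_mv;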
4. Define the column ROWID as the Primary Key of the materialized view
ALTER MATERIALIZED VIEW KITCLOBNOPK_MV ADD CONSTRAINT KITCLOBNOPK_MV_PK PRIMARY KEY (kitrowid);
5. Add the materialized view kitclobnopk_mv to the task just as if it were a regular table; the MV meets both Full Load and CDC demands. (There is NO need to include the original source master table "kitclobnopk" in the Qlik Replicate task.)
6. Limitations and considerations of the workaround
(1) A single UPDATE operation on the master table may translate to a DELETE+INSERT pair of operations in the materialized view, depending on how the materialized view data refreshes. We may see this in the Replicate GUI monitoring perspective.
(2) If the ROWID changes in the materialized view (e.g. due to special operations such as ALTER TABLE <tableName> SHRINK SPACE), then a table or task reload is required. This is just like the RRN column in DB400.
(3) This is only a sample which works in internal labs (sanity tested with an Oracle 12c source + Replicate 2022.5). No large-scale stress testing has been done, nor has it been verified in a PROD system yet. For implementation and further questions, engage Professional Services.
Introduction
This article outlines the steps for generating API keys for AutoML in the Qlik Cloud environment. API keys can be used for real time prediction pipelines.
Steps
1. Log into your Qlik Cloud Environment
2. Navigate to Management Console
3. On the left-hand side, scroll down to Integration -> API Keys
4. Click to open this page. On the right-hand side, click 'Generate Keys'
An API key is generated.
Copy the API key and store it in a safe place.
Note: you need the Developer role on your tenant to generate API keys.
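A quick way to confirm the key works is to call a simple Qlik Cloud REST endpoint with it as a bearer token; real-time prediction calls use the same Authorization header. A minimal sketch (the tenant hostname below is a placeholder):
curl -s "https://your-tenant.us.qlikcloud.com/api/v1/users/me" \
  -H "Authorization: Bearer <YOUR_API_KEY>"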
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
The Replicate Oracle source endpoint reports the following errors:
2022-08-03T10:48:53 [SOURCE_UNLOAD ]W: ALL COLUMN supplemental logging is required for table '<schemaName>.<tableName>' (oracle_endpoint_utils.c:600)
2022-08-03T10:48:53 [SOURCE_UNLOAD ]E: Supplemental logging for table '<schemaName>.<tableName>' is not enabled properly [1022310] (oracle_endpoint_unload.c:190).
When the task's Apply Conflicts handling is set to use UPSERT, supplemental logging is required for all columns.
User Guide description: Step 4: When the "Insert the missing target record" Apply Conflicts option is selected, supplemental logging must be enabled for ALL the source table columns.
1- Turn off UPSERT; use options other than "INSERT the missing target record"
If UPSERT is necessary then:
2- Add supplemental logging for all columns:
ALTER TABLE <schema>.<table name> ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
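To confirm that ALL-column supplemental logging is now in place for the table, the DBA can query dba_log_groups (a standard Oracle check, not specific to Replicate; replace the placeholders with the actual owner and table name in upper case):
SELECT log_group_name, log_group_type
FROM dba_log_groups
WHERE owner = '<SCHEMA>' AND table_name = '<TABLE NAME>';
-- a row with LOG_GROUP_TYPE = 'ALL COLUMN LOGGING' confirms the setting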
1. Internal support case ID: # 00047346.
2. Michael_Litz Oracle ALL Column Supplemental Logging
3. Michael_Litz Qlik Replicate: Implementing UPSERT and MERGE modes by applying a Conflicts Handling Policy