Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
Index:
We're happy to help! Here's a breakdown of resources for each type of need.
Support: Reactively fixes technical issues as well as answers narrowly defined specific questions. Handles administrative issues to keep the product up-to-date and functioning.
Professional Services (*): Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement.
(*) reach out to your Account Manager or Customer Success Manager
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about Qlik products and solutions, such as scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar held to facilitate knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat, by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
Log in to manage and track your active cases in the Case Portal.
Please note: to create a new case, it is easiest to do so via our chat (see above). Our chat will log your case through a series of guided intake questions.
When creating a case, you will be prompted to enter problem type and issue level. Definitions shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible, but there are some functional limitations that are not critical to daily operation.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
When other Support Channels are down for maintenance, please contact us via phone for high severity production-down concerns.
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
When using Kafka as a target in a Qlik Replicate task, the "Control Table" name in the target is created in lower case; for example, the Apply Exceptions topic name is "attrep_apply_exceptions" (if auto.create.topics.enable=true is set in the Kafka broker's config/server.properties file). This is the default behavior.
In some scenarios, you may want to use a non-default topic name, or a topic name in upper case, to match your organization's naming standards. This article describes how to rename the control table topics.
In this article, we use the topic name "attrep_apply_exceptions" as an example. You can customize the control topics below using the same process:
The same approach works at a more generic level, not only for the Kafka target endpoint. At that level, other metadata such as "target_schema" can also be renamed.
Kafka topic names cannot exceed 255 characters (249 from Kafka 0.10) and can only contain the following characters: a-z | A-Z | 0-9 | . (dot) | _ (underscore) | - (minus)
More detailed information can be found at Limitations and considerations.
The safest maximum topic name length is 209 characters (rather than 255/249).
Qlik Replicate (versions 2022.11, 2023.5 and above)
Kafka target
case #00010983, #00108779
Generate a record to attrep_apply_exceptions topic for Kafka endpoint
In the Replicate Oracle source endpoint there was a limitation:
Object names exceeding 30 characters are not supported. Consequently, tables with names exceeding 30 characters or tables containing column names exceeding 30 characters will not be replicated.
The limitation comes from the behavior of older Oracle versions. However, since Oracle 12.2, object names of up to 128 bytes are supported, and long object names are now common. The User Guide limitation "Object names exceeding 30 characters are not supported" can therefore be overcome.
There are two major types of long identifier names in Oracle: 1 - long table names, and 2 - long column names.
1 - Error messages for long table names
[METADATA_MANAGE ]W: Table 'SCOTT.VERYVERYVERYLONGLONGLONGTABLETABLETABLENAMENAMENAME' cannot be captured because the name contains 51 bytes (more than 30 bytes)
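For illustration, the warning above is triggered simply by a table whose identifier exceeds 30 bytes, which Oracle 12.2 and later permits (up to 128 bytes). A minimal, hypothetical example of such a table (the column names are assumptions, only the long table name matters):

-- Hypothetical DDL, valid in Oracle 12.2+ where identifiers may be up to 128 bytes
CREATE TABLE SCOTT.VERYVERYVERYLONGLONGLONGTABLETABLETABLENAMENAMENAME (
    ID       NUMBER PRIMARY KEY,
    COMMENTS VARCHAR2(100)
);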
Add the internal parameter skipValidationLongNames to the Oracle source endpoint and set its value to true (default is false), then re-run the task:
2 - Error messages for long column names
There are different messages if the column name exceeds 30 characters:
[METADATA_MANAGE ]W: Table 'SCOTT.TEST1' cannot be captured because it contains column with too long name (more than 30 bytes)
Or
[SOURCE_CAPTURE ]E: Key segment 'CASE_LINEITEM_SEQ_NO' value of the table 'SCOTT.MY_IMPORT_ORDERS_APPLY_LINEITEM32' was not found in the bookmark
Or (incomplete WHERE clause)
[TARGET_APPLY ]E: Failed to build update statement, statement 'UPDATE "SCOTT"."MY_IMPORT_ORDERS_APPLY_LINEITEM32"
SET "COMMENTS"='This is final status' WHERE ', stream position '0000008e.64121e70.00000001.0000.02.0000:1529.17048.16']
There are two steps to solve the above errors for long column names:
(1) Add the internal parameter skipValidationLongNames (see above) to the endpoint.
(2) The Oracle parameter "enable_goldengate_replication" must also be enabled. This can only be done by the end user and their DBA:
alter system set ENABLE_GOLDENGATE_REPLICATION=true;
Note that this is only supported when the user has a GoldenGate license, and Oracle routinely audits licenses. Consult with the user's DBA before altering the system settings.
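As a quick check (a generic Oracle query, assuming the user has privileges to read v$parameter), the current setting can be verified with:

SELECT name, value FROM v$parameter WHERE name = 'enable_goldengate_replication';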
Internal support case ID: # 00045265.
"C:\Program Files\NPrintingServer\Settings\SenseCertificates"
NOTE: Reminder that the NPrinting Engine service domain user account MUST be ROOTADMIN on each Qlik Sense server which NPrinting is connecting to.
The Qlik NPrinting server target folder for exported Qlik Sense certificates
"C:\Program Files\NPrintingServer\Settings\SenseCertificates"
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
Qlik Support communicates Product Releases in its Release Notes board, and information on Product alerts and Support related activities (Webinars and Q&As) on the Qlik Support Updates blog.
This will alert you to activities such as:
The IBM DB2 for iSeries source endpoint occasionally encounters an error during the CDC stage. This issue appears to be linked to the presence of the IBM i Access ODBC Driver versions 7.1.26 and 7.1.27.
The error message in the task log file:
[SOURCE_CAPTURE ]E: Error parsing [1020109] (db2i_endpoint_capture.c:652)
The issue specifically arises during the CDC stage; however, the Full Load stage operates smoothly without any complications.
As a workaround, please downgrade the IBM i Access ODBC client from version '07.01.027'/'07.01.026' to '07.01.025'.
The most recent version of IBM i Access ODBC Client is '07.01.027' as of today. For compatibility reasons, it's advisable to revert to version '07.01.025', as '07.01.026' exhibits the same issue.
Various factors can contribute to encountering the 'Error parsing' message, including:
• DB2i ODBC Version '07.01.027' (as described in this article)
• In a single task, the total number of captured tables exceeds 300
• The source table is created by DDS
• Garbage data in table
• Special characters in table object identifier (table name, or column name)
If you continue to encounter the error after switching to '07.01.025', please reach out to Qlik Support for further assistance.
The behavior of the IBM DB2i ODBC versions '07.01.026' & '07.01.027' differs slightly from that of '07.01.025'. In certain scenarios, they may return incorrect column lengths.
#00158029, #00160002, QB-26413
After an upgrade installation or fresh installation of Qlik Replicate 2023.11 (builds GA, PR01 & PR02), Qlik Replicate reports errors for MySQL or MariaDB source endpoints. The task retries the source capture process over and over but fails; Resume and Start from timestamp lead to the same results:
[SOURCE_CAPTURE ]T: Read next binary log event failed; mariadb_rpl_fetch error 0 () [1020403] (mysql_endpoint_capture.c:1060)
[SOURCE_CAPTURE ]T: Error reading binary log. [1020414] (mysql_endpoint_capture.c:3998)
Upgrade to Replicate 2023.11 PR03 (expires 8/31/2024).
The fix is included in Replicate 2024.05 GA.
If you are running 2022.11, you can keep running it.
There is no workaround for 2023.11 (GA, PR01, or PR02).
Jira: RECOB-8090 , Description: MySQL source fails after upgrade from 2022.11 to 2023.11
There is a bug in the MariaDB library version 3.3.5 that we started using in Replicate in 2023.11.
The bug was fixed in MariaDB library version 3.3.8, which is shipped with Qlik Replicate 2023.11 PR03 and later versions.
support case #00139940, #00156611
Replicate - MySQL source defect and fix (2022.5 & 2022.11)
Starting from version "November 2021" (version 2021.11), Qlik Replicate introduced support for log data encryption to safeguard customer data snippets from being exposed in verbose task log files. However, this enhancement also presented its own set of challenges, such as making the logs very difficult to read and complicating troubleshooting; the steps for Decrypting Qlik Replicate Verbose Task Log Files take time.
In response to customer feedback and feature requests, we implemented a new feature allowing users the flexibility to disable log encryption in Qlik Replicate. This enhancement was rolled out with the release of Qlik Replicate version 2023.5 GA. This article serves as a guide on how to effectively disable Task Log Data Encryption.
{
"port": 3552,
"plugins_load_list": "repui",
... ...
"enable_data_logging": true,
"disable_log_encryption": true
}
Important Note:
Please exercise caution as verbose task logs may contain sensitive end-user data, and disabling encryption could potentially lead to data leakage. Always ensure appropriate measures are taken to protect sensitive information.
How to Decrypt Qlik Replicate Verbose Task Log Files
Sometimes we need to join different tables in the source database and filter records according to another table's values. In this article, we use an Oracle source endpoint to demonstrate how to build such a task in Qlik Replicate.
In the sample task below, table testfilter is replicated from an Oracle source database to a SQL Server target database. During replication, records whose value in table testfiltercondition is INACTIVE are filtered out and ignored in both the Full Load and CDC stages. The two tables are joined on the same primary key column "id".
In the expression above, the two tables testfilter and testfiltercondition are joined on the PK column "id" using the source_lookup function.
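The exact expression from the original screenshots is not reproduced here, but a sketch of what such a record selection condition might look like is shown below. The status column name, schema, TTL value, and placeholder syntax are assumptions for illustration; consult the Replicate online help for the exact source_lookup argument list:

source_lookup(10000,'SCOTT','TESTFILTERCONDITION','STATUS','ID=:1',$ID) <> 'INACTIVE'

With a condition along these lines, a row from testfilter is kept only when the matching row in testfiltercondition (looked up by the shared "id" value) does not carry the value 'INACTIVE'.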
#00145952
QnA with Qlik: Qlik Replicate Tips
Qlik Replicate SAP Extractor Endpoint can be tuned to improve the performance of data being processed from an SAP environment. This is achieved by adding an internal parameter: sqlConnector. To confirm the performance improvements of the task, first record the task duration before implementing the parameter, then compare it to the subsequent runs.
Before the internal parameter can be added to the Qlik Replicate Extractor Endpoint, the below SAP Transports must be installed:
Once the transports have been installed, make sure you follow the steps below to set the added permissions on the SAP Replicate user:
Add the following authorization objects and settings to the RFC user ID:
Do the following:
For more information on how to import/upgrade the sqlConnector Transport, see Importing the SAP transports.
Configure the SAP Extractor Endpoint by defining the useSqlConnector Internal Parameter.
SAP Transports
Importing the SAP transports
This Techspert Talks session covers:
Chapters:
Resources:
Q&A:
Q: Multi-language apps are really critical to my business as we globalise - where is there more content about how we can handle dimension name translation in line with the native Qlik language translations?
A: Dimension names can be renamed in the load script, but it may not be necessary; just translate the dimension labels in the app instead.
Making a Multilingual Qlik Sense App
Q: Do the objects within the new Container have to be master visualizations?
A: No, you don't need to use Master visualizations. You can add new charts to the object or drag and drop existing charts from the sheet.
Q: When will the Layout Container be available?
A: Most likely later this year
Q: After the update, there is a problem with filtering a "toString" field. I can't open the application for 10 minutes. What is wrong with that field? (e.g. load * inline [toString test1];)
A: Hard to tell without seeing the app and knowing what the field is. As a general rule, keep the cardinality of fields down. If I had to guess, toString in this case may stop the engine from optimizing the field. To learn more, read HiC's post.
Symbol Tables and Bit-Stuffed Pointers
Q: Will we be able to pin objects to certain locations on the grid? As shown, a sheet menu build using the layout container would be nice to pin to the top left corner for example?
A: In the first release positioning and size will be using percentages. So, if you would have the position 0% for both axis then it would be pinned in the corner.
Click here to see video transcript
When using Kafka as a target in a Qlik Replicate task, the source table's "Schema Name" & "Table Name" are not included in the Kafka message; this is the default behavior.
In some scenarios, you may want to add some additional information to the Kafka messages.
In this article, we summarize all available options and weigh the pros and cons of each. We use the "Table Name" as an example in the alternatives below:
Cons:
-- No variable is available, so it is not a dynamic value but a fixed string. In our sample the expression string is 'kit'
-- Affects the single table only
-- The table name appears in the message's data part (rather than the headers part)
{ "magic": "atMSG", "type": "DT", "headers": null, "messageSchemaId": null, "messageSchema": null, "message": { "data": { "ID": "2", "NAME": "test Kafka", "tableName": "kit" }, "beforeData": { "ID": "2", "NAME": "ok", "tableName": "kit" }, "headers": { "operation": "UPDATE", "changeSequence": "20230911032325000000000000000000005", "timestamp": "2023-09-11T03:23:25.000", "streamPosition": "00000000.00bb2531.00000001.0000.02.0000:154.6963.16", "transactionId": "00000000000000000000000000060008", "changeMask": "02", "columnMask": "07", "transactionEventCounter": 1, "transactionLastEvent": true } } } |
Global Rules can be used to add a table name column to the messages of all tables.
Pros:
-- Affects all the tables
-- Variables are available; in our sample the variable $AR_M_SOURCE_TABLE_NAME is used.
-- The table name can be customized by combining it with other transformations, e.g. appending a suffix such as "__QA" (see the expression sketch below)
Cons:
-- The table name appears in the message's data part (rather than the headers part)
If both a table-level transformation and a Global Rules transformation are defined (and their values are different), the table-level transformation overrides the global transformation settings.
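For illustration only (a sketch, not taken from the original screenshots), the Global Rule's new column could be populated with an expression along these lines, using the built-in variable and an optional suffix:

$AR_M_SOURCE_TABLE_NAME || '__QA'

Replicate transformation expressions use SQLite syntax, so || concatenates the source table name with the literal suffix.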
{ "magic": "atMSG", "type": "DT", "headers": null, "messageSchemaId": null, "messageSchema": null, "message": { "data": { "ID": "2", "NAME": "test Kafka 2", "tableName": "KIT" }, "beforeData": { "ID": "2", "NAME": "test Kafka", "tableName": "KIT" }, "headers": { "operation": "UPDATE", "changeSequence": "20230911034827000000000000000000005", "timestamp": "2023-09-11T03:48:27.000", "streamPosition": "00000000.00bb28db.00000001.0000.02.0000:154.7632.16", "transactionId": "00000000000000000000000000170006", "changeMask": "02", "columnMask": "07", "transactionEventCounter": 1, "transactionLastEvent": true } } } |
Enable the "Table Name" option will include the header information in Kafka messages.
Cons:
-- Affects the single table only
Pros:
-- This feature was released in Replicate 2023.5 and later versions
-- The table name appears in the message's headers part (rather than the data part)
{ "magic": "atMSG", "type": "DT", "headers": null, "messageSchemaId": null, "messageSchema": null, "message": { "data": { "ID": "2", "NAME": "test Kafka 3" }, "beforeData": { "ID": "2", "NAME": "test Kafka 2" }, "headers": { "operation": "UPDATE", "changeSequence": "20230911041053000000000000000000005", "timestamp": "2023-09-11T04:10:53.000", "streamPosition": "00000000.00bb2c30.00000001.0000.02.0000:154.9378.16", "transactionId": "00000000000000000000000000060005", "changeMask": "02", "columnMask": "03", "transactionEventCounter": 1, "transactionLastEvent": true, "tableName": "KIT" } } } |
Enable the "Table Name" option will include the header information in Kafka messages.
Prons:
-- Affects all the tables
-- This new feature was released in Replicate 2023.5 and above versions
-- The table name appears in message's headers part (rather than data part)
If both table level and task level "Message Format" are defined (and their values are different) then table level settings overwrites the task settings.
{ "magic": "atMSG", "type": "DT", "headers": null, "messageSchemaId": null, "messageSchema": null, "message": { "data": { "ID": "2", "NAME": "test Kafka 4" }, "beforeData": { "ID": "2", "NAME": "test Kafka 3" }, "headers": { "operation": "UPDATE", "changeSequence": "20230911042445000000000000000000005", "timestamp": "2023-09-11T04:24:45.000", "streamPosition": "00000000.00bb2e56.00000001.0000.02.0000:154.9799.16", "transactionId": "00000000000000000000000000080001", "changeMask": "02", "columnMask": "03", "transactionEventCounter": 1, "transactionLastEvent": true, "tableName": "KIT" } } } |
Qlik Replicate (versions 2023.5 and above)
Kafka target
Keeping the trailing spaces in the IBM DB2 for iSeries and IBM DB2 for z/OS source endpoints is supported by adding the internal parameter keepCharTrailingSpaces.
Source: the data type/length holds the trailing spaces.
Target: the data type/length has the trailing spaces removed.
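A simple way to confirm whether trailing spaces survived replication is to compare raw and trimmed lengths on the target (a generic SQL sketch; the table and column names are placeholders, and some databases use LEN instead of LENGTH):

SELECT LENGTH(my_char_col) AS raw_len,
       LENGTH(RTRIM(my_char_col)) AS trimmed_len
FROM   my_target_table;

If the two values are equal for rows that originally ended in spaces, the trailing spaces were stripped and the keepCharTrailingSpaces parameter is needed.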
To add the internal parameter, the detailed steps are:
While working with the DB2 LUW endpoint, Replicate reports an error after installing the 64-bit IBM DB2 Data Server Client 11.5:
SYS-E-HTTPFAIL, Cannot connect to DB2 LUW Server.
SYS,GENERAL_EXCEPTION,Cannot connect to DB2 LUW Server,RetCode: SQL_ERROR SqlState: IM003 NativeError: 160 Message: Specified driver could not be loaded due to system error 1114: A dynamic link library (DLL) initialization routine failed. (IBM DB2 ODBC DRIVER, C:\Program Files\IBM\SQLLIB\BIN\DB2CLIO.DLL).
Install the 64-bit IBM DB2 Data Server Client 11.5.4 (for example 11.5.4.1449) rather than 11.5.0 (actual version is 11.5.0.1077).
Qlik Replicate : all versions
Replicate Server platform: Windows Server 2019
DB2 Data Server Client : version 11.5.0.xxxx
Support cases, #00076295
Replicate reported errors when resuming a task if the source MySQL is running on Windows (there is no problem when MySQL runs on Linux):
[SOURCE_CAPTURE ]I: Stream positioning at context '$.000034:3506:-1:3506:0'
[SOURCE_CAPTURE ]T: Read next binary log event failed; mariadb_rpl_fetch error 1236 (Could not find first log file name in binary log index file)
Replicate sometimes reported errors for MySQL source endpoints (regardless of the MySQL source platform):
[SOURCE_CAPTURE ]W: The given Source Change Position points inside a transaction. Replicate will ignore this transaction and will capture events from the next BEGIN or DDL events.
Upgrade to Replicate 2022.11 PR2 (2022.11.0.394, released already) or higher, or Replicate 2022.5 PR5 (coming soon)
If you are running 2022.5 PR3 (or lower), you can keep running it, or upgrade to PR5 (or higher).
There is no workaround for 2022.11 (GA or PR01).
Jira: RECOB-6526 , Description: It would not be possible to resume a task if MySQL Server was on Windows
Jira: RECOB-6499 , Description: Resuming a task from a CTI event, would sometimes result in missing events or/and a redundant warning message
support case #00066196
support case #00063985 (#00049357)
Qlik Replicate - MySQL source defect and fix (2023.11)
While working with a PostgreSQL ODBC DSN as the source endpoint, the ODBC driver interprets the JSONB data type as VARCHAR(255) by default. This leads to truncated JSONB column values, no matter how the LOB size or data type length in the target table is defined.
In general, the task reports a warning such as:
2022-12-22T21:28:49:491989 [SOURCE_UNLOAD ]W: Truncation of a column occurred while fetching a value from array (for more details please use verbose logs)
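Before applying a fix, you can check on the PostgreSQL side whether any JSONB values actually exceed the 255-character default (a generic query; the table and column names are placeholders):

SELECT MAX(LENGTH(jsonb_col::text)) AS max_len
FROM   my_table;

Any value longer than 255 characters will be truncated by the driver's default VARCHAR(255) mapping.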
There are several options to solve the problem (any single one is good enough):
I) Change PostgreSQL ODBC source endpoint connection string
II) Or on Windows/Linux Replicate Server, add one line to "odbc.ini" in the DSN definition:
MaxVarCharSize=0
III) Or on Windows, set "Max Varchar" to 0 from default value 255 in ODBC Manager GUI (64-bit):
Qlik Replicate all versions
PostgreSQL all versions
Support cases, #00062911
Ideation article, Support JSONB
This is a guide to get you started working with Qlik AutoML.
AutoML is an automated machine learning tool in a code-free environment. Users can quickly generate models for classification and regression problems with business data.
Qlik AutoML is available to customers with the following subscription products:
Qlik Sense Enterprise SaaS
Qlik Sense Enterprise SaaS Add-On to Client-Managed
Qlik Sense Enterprise SaaS - Government (US) and Qlik Sense Business do not support Qlik AutoML.
For subscription tier information, please reach out to your sales or account team for exact pricing. The metered pricing depends on how many models you would like to deploy, dataset size, API rate, number of concurrent tasks, and advanced features.
Qlik AutoML is a part of the Qlik Cloud SaaS ecosystem. Code changes for the software including upgrades, enhancements and bug fixes are handled internally and reflected in the service automatically.
AutoML supports Classification and Regression problems.
Binary Classification: used for models with a Target of only two unique values. Examples: payment default, customer churn.
Customer Churn.csv (see downloads at top of the article)
Multiclass Classification: used for models with a Target of more than two unique values. Examples: tier grading (gold/platinum/silver), milk grade.
MilkGrade.csv (see downloads at top of the article)
Regression: used for models with a Target that is a number. Examples: how much a customer will purchase, predicting housing prices.
AmesHousing.csv (see downloads at top of the article)
What is AutoML (14 min)
Exploratory Data Analysis (11 min)
Model Scoring Basics (14 min)
Prediction Influencers (10 min)
Qlik AutoML Complete Walk Through with Qlik Sense (24 min)
Non-video:
How to upload data, train, deploy, and predict with a model
Data for modeling can be uploaded from a local source or via data connections available in Qlik Cloud.
You can add a dataset or data connection with the 'Add new' green button in Qlik Cloud.
There are a variety of data source connections available in Qlik Cloud.
Once data is loaded and available in the Qlik Catalog, it can be selected to create ML experiments.
AutoML uses a variety of data science pre-processing techniques such as null handling, cardinality, encoding, and feature scaling. Additional reference here.
Please reference these articles to get started with the real-time prediction API.
By leveraging Qlik Cloud, predicted results can be surfaced in Qlik Sense to visualize and draw additional conclusions from the data.
How to join predicted output with original dataset
If you need additional help please reach out to the Support group.
It is helpful if you have your tenant ID and subscription info, which can be found with these steps.
Please check out our articles in the AutoML Knowledge Base.
Or post questions and comments to our AutoML Forum.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
When working with Qlik Replicate, log.key file(s) are used to Decrypt Qlik Replicate Verbose Task Log Files. The log.key file can be re-created by restarting tasks, or by restarting the Replicate services, if it is missing or deleted. However, sometimes we need to create the file before the first task run, e.g.:
(1) Set proper file protection manually by the DBA
(2) Task movement among different environments, e.g. UAT and PROD
(3) In rare cases, the file auto-creation fails for some reason
This article provides some methods to generate the "log.key" file manually.
There are several methods to get a "log.key" file manually.
1. Copy an existing "log.key" file from UAT/TEST task folder;
It is better to ensure the "log.key" file is unique, so method (2) below is recommended:
2. Run "openssl" command on Linux or Windows
openssl rand -base64 32 >> log.key
The command writes a 44-character random unique string (the last character is "=") to the "log.key" file. For example:
n1NJ7r2Ec+1zI7/USFY2H1j/loeSavQ/iUJPaiOAY9Y=
Support cases, #00059433
This is a handy tool to run against a diagnostic package downloaded from Replicate. The diagnostic package contains recent log files (less than 10MB) and task information which is helpful for troubleshooting issues.
The goal of the script is to search for common key words or phrases quickly without having to open and read each log manually.
It is meant to be run in a Linux environment with bash or shell scripting enabled. In the steps below, I am connecting to a CentOS machine with MobaXterm. Then I am viewing the report with WinSCP after connecting to the same machine.
runT.sh: shell script to set up the instance folder and then trigger the health_check.sh script
health_check.sh: script that searches through task.json and log files for information related to metadata, errors, and warnings, and then prints a report.
These files are included in health_check.zip, which is attached to the article. When you unpack them, make sure they are executable (e.g. chmod +x runT.sh health_check.sh).
1. Download both scripts and move them to an environment with bash/shell enabled.
2. In the same folder or directory location, upload the diagnostic package as a zipped file.
*Note: this must be the only zip file in the directory when running ./runT.sh
3. When you run the script, ./runT.sh, you must supply a folder name. When I use this, I call the folder the case name, but it could be the task name, etc.
Example:
#./runT.sh squeeze_example
This will create a new folder in the directory called 'squeeze_example' with the unzipped contents of the diagnostic package and the report_date.out file.
Here is a sample of the report_date.out file.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
The scheduling feature is now available in Qlik AutoML to run a prediction on a daily, weekly, or monthly cadence.
1. Open a deployed model from Qlik Catalog
2. Navigate to 'Dataset predictions' and click on 'Create prediction' on bottom right
3. Select Apply Dataset, name the prediction dataset, select your options, then click on 'Create Schedule'
4. Set the schedule options you would like to follow, then click Confirm
5. Your options now are 'Save and close' (this will not run a prediction until the next scheduled run) or 'Save and predict now' (this will run a prediction now in addition to the schedule)
Note: Users need to ensure the predicted dataset is updated and refreshed ahead of the prediction schedule.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
I needed to test an ML experiment recently with a QVD instead of a csv file.
Here are the steps I followed below to create a QVD file which was then available in Catalog.
1. Upload the local csv dataset (or xlsx, etc.) and analyze it, which will create an analytics app
2. Open up the app and navigate to "Data Load Editor"
3. Add a new section under the Auto-generated section (with the + symbol marked with a red arrow above). Note this section must run after the Auto-generated section, or it will raise an error that the data is not loaded.
Add the following statement:
Store train into [lib://DataFiles/train.qvd];
or
Store tablename into [lib://DataFiles/tablename.qvd];
4. Run "Load data"
5. Check Catalog for recently created QVD
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.