Qlik offers a wide range of channels to assist you with troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to ensure the best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
Index:
We're happy to help! Here's a breakdown of resources for each type of need.
Support | Professional Services (*)
Reactively fixes technical issues and answers narrowly defined, specific questions. Handles administrative issues to keep the product up to date and functioning. | Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement.
(*) Reach out to your Account Manager or Customer Success Manager.
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about the product and Qlik solutions, covering scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar series that facilitates knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat, by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us.
Log in to manage and track your active cases in the Case Portal.
Please note: to create a new case, it is easiest to do so via our chat (see above). Our chat will log your case through a series of guided intake questions.
When creating a case, you will be prompted to enter a problem type and an issue level. Definitions are shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible, but there are functional limitations that are not critical to daily operations.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
When other Support Channels are down for maintenance, please contact us via phone for high severity production-down concerns.
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
After a successful upgrade to Qlik Sense Enterprise on Windows November 2024 patch 8, changing the MS SQL Server connection's password from the Data Load Editor (DLE) generates the following error:
Bad request 400
This error occurs when clicking 'SAVE connection', even after the connection has been successfully tested.
Changing the MS SQL Server connection's password from the Qlik Sense Enterprise Console (in the Data Connection view) works as expected.
This is a known defect (QCB-32467 | SUPPORT-4457) in Qlik Sense November 2024 and Qlik Sense May 2025.
Upgrade to:
QCB-32467 | SUPPORT-4457
Your company might need to migrate its users from an old Active Directory domain to a new one. Sometimes usernames will also be renamed.
In some cases, it won't be possible to use the QMC to migrate document permissions, due to users having the same name in the old and new domains.
If documents are being distributed using the QlikView Publisher functionality, then the DistributionDetail.xml can be edited to have the new and old domain and user names replaced.
Prior to doing this, ensure that a QVPR backup exists.
RecipientName="domain1\user1"
RecipientName="domain2\user2"
This article describes the procedure for when QlikView Server is migrated to a new domain. In this scenario, the existing QlikView Server that will be moved to a new domain is a single server QlikView Server installation and has a static IP address.
What you need to take into account are permissions (Service Account, User access to files) and the name of the machine in case that changes as well. License assignments such as User CALs and Document CALs will need to be redone, as those will reference the previous domain name.
Changing the hostname of the QlikView Server requires a change of the references to the hostname for each service. See Migrate and restore your backup in the QlikView upgrade and migration section on our Help for details.
CALs will not automatically refer to the new domain\ prefix. You will need to manually re-assign them.
Refer to the Power Tools for QlikView, specifically the User Management tool.
NOTE: The CALs will not be available for 7 days; no exceptions. Plan to perform the migration during an appropriate date range. The only alternative to avoid the quarantine is to completely clear the license and then, after reapplying it, reassign all the CALs.
The QlikView Administrator will have to edit the domain\ prefix for all available objects.
The QlikView Shared File Cleanup tool can be used to change ownership of objects. See How to change Server Object Owner in QlikView using the inbuilt Cleanup Tool for details.
See How to migrate Active Directory Users in QlikView for details.
When replicating data from a MySQL integration, users may encounter the following extraction error:
Fatal Error Occurred - Streaming result set com.mysql.cj.protocol.a.result.ResultsetRowsStreaming@xxxx is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
2025-09-30 20:30:00,000Z tap - INFO [main] tap-hp-mysql.sync-strategies.common - Querying: SELECT `pk_col`, `col1`, `col2`, `col3` FROM `schema`.`table` WHERE ((`pk_col` > ? OR `pk_col` IS NULL)) AND ((`pk_col` <= ?)) ORDER BY `pk_col` (<last PK value checked>, <max PK value>)
2025-09-30 20:32:00,000Z tap - FATAL [main] tap-hp-mysql.main - Fatal Error Occurred - Streaming result set com.mysql.cj.protocol.a.result.ResultsetRowsStreaming@XXXX is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
SELECT `pk_col`, `col1`, `col2`, `col3`
FROM `schema`.`table` WHERE (`pk_col` IS NULL OR `pk_col` > [last PK value checked]) AND `pk_col` <= [max PK value] ORDER BY `pk_col`;
SHOW FULL PROCESSLIST;

SELECT ID, USER, HOST, DB, COMMAND, TIME, STATE, INFO
FROM information_schema.PROCESSLIST
WHERE STATE = 'Sending data';
If you are unable to alleviate the error following the above, please reach out to Qlik Support.
This error occurs when Stitch has an active server-side streaming ResultSet on a MySQL connection and tries to execute another statement on that same connection before the stream is fully consumed and closed. MySQL’s JDBC driver allows only one active statement per connection while a streaming result is open.
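To illustrate the constraint, here is a minimal JDBC sketch, assuming MySQL Connector/J; the table name and connection details are hypothetical:

import java.sql.*;

public class StreamingExample {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/demo", "user", "pass");
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        // Integer.MIN_VALUE tells Connector/J to stream rows one at a time
        // instead of buffering the entire result set in memory.
        stmt.setFetchSize(Integer.MIN_VALUE);
        ResultSet rs = stmt.executeQuery("SELECT id FROM big_table");
        while (rs.next()) {
            // While rs is open and streaming, issuing another statement on
            // this connection fails with "Streaming result set ... is still active".
        }
        rs.close();  // fully consume and close the stream first
        stmt.close();
        conn.createStatement().execute("SELECT 1"); // only now is this allowed
        conn.close();
    }
}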
Potential Contributors
https://dev.mysql.com/doc/refman/8.4/en/server-system-variables.html
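When investigating, it may also help to review the session timeouts described in the reference above, since they interact with long-running streaming reads. A minimal check (adjust the variable list to your setup):

SHOW VARIABLES WHERE Variable_name IN ('net_read_timeout', 'net_write_timeout', 'wait_timeout');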
When using Amazon S3 as a target in a Qlik Replicate task, the Full Load data is written to CSV, TEXT, or JSON files (depending on the endpoint settings). The Full Load files are named using incremental counters, e.g. LOAD00000001.csv, LOAD00000002.csv. This is the default behavior.
In some scenarios, you may want to use the table name as the file name rather than LOAD########.
This article describes how to rename the output files from LOAD######## to the <schemaName>_<tableName>__######## format while Qlik Replicate is running on a Windows platform.
We focus on cloud target endpoint types (ADLS, S3, etc.). The example uses Amazon S3 as the remote cloud storage.
This customization is provided as is. Qlik Support cannot provide continued support for the solution. For assistance, reach out to Professional Services.
@echo on
REM Point the AWS CLI at the credentials file generated in step 3.
setx AWS_SHARED_CREDENTIALS_FILE C:\Users\demo\.aws\credentials
REM %1 = uploaded object path; extract the file name without extension (e.g. LOAD00000001).
for %%a in (%1) do set "fn=%%~na"
echo %fn%
REM Keep the 8-digit counter that follows the LOAD prefix (e.g. 00000001).
set sn=%fn:~4,8%
echo %sn%
REM %2 = table owner, %3 = table name; rename LOAD########.csv to <owner>_<table>__########.csv.
aws s3 mv s3://%1 s3://qmi-bucket-1234567868c4deded132f4ca/APAC_Test/%2.%3/%2_%3__%sn%.csv
where C:\Users\demo\.aws\credentials is the file generated in step 3 above. The values in the sample are obfuscated.
General
Bucket name : qmi-bucket-1234567868c4deded132f4ca
Bucket region : US East (N. Virginia)
Access options : Key pair
Access key : DEMO~~~~~~~~~~~~UXEM
Secret key : demo~~~~~~~~~~~~ciYW7pugMTv/0DemoSQtfw1m
Target folder : /APAC_Test
Advanced
Under Post Upload Processing, choose "Run command after upload"
Command name : myrename_S3.bat
Working directory: leave blank
Parameters : ${FILENAME} ${TABLE_OWNER} ${TABLE_NAME}
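For illustration, Replicate would then invoke the script roughly as follows. This expansion is hypothetical and assumes ${FILENAME} resolves to the bucket-qualified object path, since the script prefixes it with s3://:

myrename_S3.bat qmi-bucket-1234567868c4deded132f4ca/APAC_Test/SCOTT.KIT/LOAD00000001.csv SCOTT KIT

The script extracts the counter 00000001 from LOAD00000001 and moves the file to APAC_Test/SCOTT.KIT/SCOTT_KIT__00000001.csv, matching the listing below.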
7. Start or reload the Full Load ONLY task and verify the file output.
C:\Users\demo>aws s3 ls s3://qmi-bucket-1234567868c4deded132f4ca/APAC_Test --recursive --human-readable --summarize
2023-08-14 11:20:36 0 Bytes APAC_Test/
2023-08-15 08:10:24 0 Bytes APAC_Test/SCOTT.KIT/
2023-08-15 08:10:28 9 Bytes APAC_Test/SCOTT.KIT/SCOTT_KIT__00000001.csv
2023-08-15 08:10:24 0 Bytes APAC_Test/SCOTT.KIT500K/
2023-08-15 08:10:34 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000001.csv
2023-08-15 08:10:44 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000002.csv
2023-08-15 08:10:54 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000003.csv
2023-08-15 08:11:05 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000004.csv
2023-08-15 08:11:15 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000005.csv
2023-08-15 08:11:24 2.7 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000006.csv
Total Objects: 10
Total Size: 22.7 MiB
Qlik Replicate
Amazon S3 target
Qlik Replicate and File target: How to rename output files LOAD######## to table name format on Windows
Qlik Replicate and File target: How to rename output files LOAD######## to table name format on Linux
QVS files cannot be uploaded to Managed Spaces in Qlik Cloud.
.qvs (QlikView Script) files cannot be directly uploaded to a managed space in Qlik Cloud. QlikView Script files are intended as reusable load script blocks and are not considered application files (such as .qvf and .qvw).
To use a .qvs file, copy the script's contents into an app's load script editor, or use an $(Include=...) statement to reference the file, which must be stored elsewhere and made accessible to the app.
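For example, a minimal include statement in the load script might look like this; the lib:// path is a placeholder for wherever the file is actually stored:

// Pull the .qvs contents into the app's load script at reload time.
// Must_Include is a stricter variant that raises an error if the file is missing.
$(Include=lib://DataFiles/my_load_script.qvs);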
This capability has been rolled out across regions over time.
With the introduction of shared automations, it is now possible to create, run, and manage automations in shared spaces.
Limit the execution of an automation to specific users.
Every automation has an owner. When an automation runs, it will always run using the automation connections configured by the owner. Any Qlik connectors that are used will use the owner's Qlik account. This guarantees that the execution happens as the owner intended it to happen.
The user who created the run, along with the automation's owner at run time, are both logged in the automation run history.
There are five options for running an automation:
Collaborate on an automation through duplication.
Automations are used to orchestrate various tasks; from Qlik use cases like reload task chaining, app versioning, or tenant management, to action-oriented use cases like updating opportunities in your CRM, managing supply chain operations, or managing warehouse inventories.
To prevent users from editing these live automations, we're putting forward a collaborate-through-duplication approach. This makes it impossible for non-owners to make changes to an automation that could negatively impact operations.
When a user duplicates an existing automation, they will become the owner of the duplicate. This means the new owner's Qlik account will be used for any Qlik connectors, so they must have sufficient permissions to access the resources used by the automation. They will also need permissions to use the automation connections required in any third-party blocks.
Automations can be duplicated through the context menu:
As it is not possible to display a preview of the automation blocks before duplication, please use the automation's description to provide a clear summary of the purpose of the automation:
The Automations Activity Centers have been expanded with information about the space in which an automation lives. The Run page now also tracks which user created a run.
Note: Triggered automation runs will be displayed as if the owner created them.
The Automations view in Administration Center now includes the Space field and filter.
The Runs view in Administration Center now includes the Executed by and Space at runtime fields and filters.
The Automations view in the Automations Activity Center now includes the Space field and filter.
Note: Users can configure which columns are displayed here.
The Runs view in the Automations Activity Center now includes the Space at runtime, Executed by, and Owner fields and filters.
In this view, you can see all runs from automations you own, as well as runs executed by other users. You can also see runs of other users' automations where you are the executor.
To see the full details of an automation run, go to Run History through the automation's context menu. This is also accessible to non-owners with sufficient permissions in the space.
The run history view will show the automation's runs across users, and the user who created the run is indicated by the Executed by field.
The Metrics tab in the Automations Activity Center has been deprecated in favor of the automations usage app, which gives a more detailed view of automation consumption.
Qlik is aware of some industry concerns around the use of the NPM library fast-glob. To address these concerns, Qlik is taking steps to remove this library from the Qlik Sense for Windows product. The removal is expected to be complete as of the November 2025 release.
A replication fails with the following:
[TARGET_APPLY ]I: ORA-03135: connection lost contact Process ID: 19637 Session ID: 1905 Serial number: 3972 [1022307] (oracle_endpoint_load.c:862)
[TARGET_APPLY ]I: Failed to truncate net changes table [1022307] (oracle_endpoint_bulk.c:1162)
[TARGET_APPLY ]I: Error executing command [1022307] (streamcomponent.c:1987)
[TASK_MANAGER ]I: Stream component failed at subtask 0, component st_0_PCA UAT DW Target [1022307] (subtask.c:1474)
[TARGET_APPLY ]I: Target component st_0_PCA UAT DW Target was detached because of recoverable error. Will try to reattach (subtask.c:1589)
[TARGET_APPLY ]E: Failed executing truncate table statement: TRUNCATE TABLE "PAYOR_DW"."attrep_changesBF9CC327_0000402" [1020403] (oracle_endpoint_load.c:856)
This may require additional review by your database admin.
In this instance, the issue was caused by a database-level trigger, TSDBA.AUDIT_DDL_TRG, that monitors DROP, TRUNCATE, and ALTER statements and was in an invalid state.
To resolve the issue, validate the trigger and add logic to exclude attrep_changes% tables, as these are temporary tables used by Qlik Replicate for batch processing.
Tables from your integration are not being loaded to Snowflake. The loading error is:
Cannot perform CREATE FILE FORMAT. This session does not have a current schema. Call 'USE SCHEMA', or use a qualified name.
GRANT ALL ON WAREHOUSE <stitch_warehouse> TO ROLE <stitch_role>;
GRANT ALL ON DATABASE <stitch_database> TO ROLE <stitch_role>;
ALTER USER <stitch_user> SET DEFAULT_ROLE = MY_ROLE;
ALTER USER <stitch_user> SET DEFAULT_ROLE = STITCH;
The root cause of the issue is likely related to permissions or role settings for the Stitch user in Snowflake. If you run a destination connection check in the Stitch User Interface for your Snowflake connection and it is successful, but your loads fail, then the error boils down to a permissions issue with loading the data.
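As a sketch, the following statements can help verify and correct the user's defaults in Snowflake; the names are placeholders for your own user, database, and schema:

-- Inspect the user's current DEFAULT_ROLE and DEFAULT_NAMESPACE.
DESC USER <stitch_user>;

-- Give the user a default database and schema so statements such as
-- CREATE FILE FORMAT run with a current schema.
ALTER USER <stitch_user> SET DEFAULT_NAMESPACE = <stitch_database>.<schema>;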
After copying data from MSSQL to Azure SQL tables, the copy fails with the error:
The metadata for source table 'table_name' is different than the corresponding MS-CDC Change Table. The table will be suspended.
Verify if the tables you are replicating are temporal or system tables. Temporal or system tables are not supported by Qlik Replicate. See Limitations and considerations for details.
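To identify temporal tables, a query along these lines can help; this is a sketch using the temporal metadata that SQL Server 2016 and later expose in sys.tables:

-- List system-versioned temporal tables and their history tables.
SELECT t.name AS table_name,
       t.temporal_type_desc,
       h.name AS history_table_name
FROM sys.tables t
LEFT JOIN sys.tables h ON t.history_table_id = h.object_id
WHERE t.temporal_type = 2; -- SYSTEM_VERSIONED_TEMPORAL_TABLE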
If you want to capture changes to these tables with MS-CDC and Qlik Replicate, then you have to unhide the system-generated columns:
ALTER TABLE <the table name> ALTER COLUMN [SysStartTime] DROP HIDDEN;
ALTER TABLE <the table name> ALTER COLUMN [SysEndTime] DROP HIDDEN;
Depending on how the table was created, the hidden column names may be different, such as ValidFrom, ValidTo.
If you don't want to make the above change, you can use the ODBC with CDC endpoint and capture both the base table and the history table using SysStartTime as the context column.
See Qlik Replicate: W: The metadata for source table 'dbo.table' is different than the corresponding MS-CDC Change Table for details.
The following error may be encountered in Qlik Replicate when reading from an Oracle Standby database node:
[SOURCE_CAPTURE ]E: Cannot create Oracle directory name 'ATTUREP_9C9D285sample_directory' with path '/RDSsamplefilepath/db/node_C/archive' [-1] (oradcdc_bfilectx.c:165)
Qlik Replicate accesses Oracle archive logs through Oracle directories from the file path assigned to the node, as retrieved from the v$Archived_log view. The mentioned error occurs when the Qlik Replicate task is unable to use the Oracle directory and file path set in the DB. In this instance, Qlik Replicate attempts to create its own custom directory.
If the user does not have Create Any Directory permissions, then this error occurs.
Read permissions on the file path of the Oracle directory are required; otherwise, the task will remain unable to access the archive logs, even when permissions to the Oracle directory are provided.
See Access privileges when using Replicate Log Reader to access the redo logs for details.
Example:
When working with the standby (secondary) node C, the Oracle user will not have default permissions on the Oracle directory and file path. Granting permissions on just the Oracle directory is not enough for the task to access the file path. Read permissions must be given to both ARCHIVELOG_DIR_C and abc_C/arch in this example:
Provide Read permissions to both the Oracle Directory and the file path in use.
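As a sketch, the grants might look like this; the directory name follows the example above, and the user name is a placeholder. Operating-system read permissions on the underlying path are still required in addition to these grants.

-- Read access to the directory object used for the standby archive logs.
GRANT READ ON DIRECTORY ARCHIVELOG_DIR_C TO <replicate_user>;

-- Only needed if Replicate must create its own directory object.
GRANT CREATE ANY DIRECTORY TO <replicate_user>;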
The task was missing read permissions on the file path of the Oracle directory.
This article outlines the steps needed to prevent data loss and to resume tasks in Qlik Replicate after moving an Oracle database.
This step is crucial to prevent any changes from being missed.
Facebook Ads integration extraction fails with the following error:
SingerSyncError POST: 400 Message: (#3018) The start date of the time range cannot be beyond 37 months from the current date.
If you are suddenly seeing this error, it is likely due to resetting the integration or table while having an older Start Date.
To resolve, change the Start Date in your Facebook Ads integration settings to a value within the last 37 months. This aligns with Facebook's current policy and allows the integration to function properly.
If you have any questions about this limitation or to discuss potential options for accessing older data, please contact Facebook. They may have additional insights or alternative solutions for businesses needing to access older advertising data.
Going forward, it's important to be aware of this 37-month limitation when working with Facebook Ads data, especially when setting up or resetting integrations. Regular data backups or exports might be advisable to retain historical data beyond this window for long-term analysis and reporting needs.
Can we find out who changed the date?
Qlik Stitch does not track this type of user activity. You will need to check with other users in your organisation.
This is a Facebook Ads API limitation documented by Stitch.
To investigate a task failure, it is necessary to collect the Diagnostics Package from Qlik Cloud Data Integration.
Option Two: Monitor view within the task
Often, Support will request that specific logging components be increased to Verbose or Trace in order to troubleshoot effectively. To modify them, click Logging options in the right-hand corner of the logs view. The options presented in the UI do not use the same terminology as the logs themselves. For better understanding, refer to this mapping:
UI | Logs
Source - full load | SOURCE_UNLOAD
Source - CDC | SOURCE_CAPTURE
Source - data | SOURCE_UNLOAD, SOURCE_CAPTURE, SOURCE_LOG_DUMP, DATA_RECORD
Target - full load | TARGET_LOAD
Target - CDC | TARGET_APPLY
Target - Upload | FILE_FACTORY
Extended CDC | SORTER, SORTER_STORAGE
Performance | PERFORMANCE
Metadata | SERVER, TABLES_MANAGER, METADATA_MANAGER, METADATA_CHANGES
Infrastructure | IO, INFRASTRUCTURE, STREAM, STREAM_COMPONENT, TASK_MANAGER
Transformation | TRANSFORMATION
Please note that if the View task logs option is not present in the dropdown menu, it indicates that the type of task you are working with does not have available task logs. In the current design, only Replication and Landing tasks have task logs.
This article gives an overview of the available blocks in the dbt Cloud connector in Qlik Application Automation.
The purpose of the dbt Cloud connector is to schedule or trigger your dbt jobs from an automation in Qlik Sense SaaS.
Authentication to dbt Cloud happens through an API key, which can be found in the user profile under API Settings when logged in to dbt Cloud. Instructions are in the dbt documentation: https://docs.getdbt.com/dbt-cloud/api-v2#section/Authentication
The available blocks are built around the Jobs and Runs objects. For ease of use, there are also helper blocks for accounts and projects. For any gaps, raw API request blocks give end users more freedom where the standard blocks do not suffice.
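For example, a raw API request that triggers a job run could look like the following; this is a sketch against the documented v2 endpoint, and the account ID, job ID, and API key are placeholders:

curl -X POST "https://cloud.getdbt.com/api/v2/accounts/<account_id>/jobs/<job_id>/run/" \
  -H "Authorization: Token <api_key>" \
  -H "Content-Type: application/json" \
  -d '{"cause": "Triggered from Qlik Application Automation"}'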
Blocks for Jobs:
Blocks for Runs:
The following automation, added as an attachment and shown as an image, runs a job in dbt Cloud and, if successful, reloads an app in Qlik Sense SaaS. It always sends out an email afterward; this can of course be changed to a different channel. It would also be possible to extend this to multiple dbt jobs:
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tools used, customizations, and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
This article aims to answer the following questions:
Stitch is a cloud-based ETL platform, which means it is not real-time and may experience latency due to the nature of cloud infrastructure and its step-based processing model.
Stitch’s replication process consists of three independent steps:
Extraction → Preparation → Loading
Each step takes time to complete and is influenced by various factors.
For more information, see: Stitch’s Replication Process | stitchdata.com
The speed and efficiency of Stitch’s replication process can be affected by:
These factors can vary over time and across integrations, which is why replication durations are not always predictable.
The replication frequency determines how often Stitch initiates a new extraction job (when one isn’t already in progress). Stitch tracks your tables and updates them based on the replication method you’ve selected.
However, this frequency does not guarantee that data will be prepared and loaded within the same time window. For example, a 30-minute frequency does not mean the full replication cycle completes in 30 minutes.
Stitch extracts one table at a time per integration (sequentially). It must finish extracting one table before moving to the next.
Once data is extracted, Stitch begins the preparation phase, which involves cutting records into rectangular staging files. This step is batch-based and starts as soon as data is returned from the source. The duration of this phase depends on the structure and volume of the data.
Stitch can load up to 5 tables concurrently per destination. If 5 tables are already loading, others must wait until a slot becomes available. For example, with 10 integrations and 20 tables each, Stitch will load 5 tables at a time per destination.
Stitch’s loading systems check every 15–20 minutes for batches of records that are fully prepared and ready to be loaded into your destination.
What may appear as missing data is often just incomplete processing. Most data discrepancies resolve themselves once Stitch finishes processing.
The Qlik Cloud and Qlik Sense Enterprise on Windows Straight Table come with a menu option to Adjust Column Size.
Clicking this option does not have an immediate effect.
What does it do?
Adjust Column Size sets the column into a state that allows you to change its width using your arrow keys. Once in this state, you can use the left and right arrow keys to make the column larger or smaller.
IBM DB2 iSeries connector in Qlik Cloud Data Integration requires setting up a Data Gateway - Data Movement (see Setting up Data Movement gateway) and installing the supported DB2 iSeries ODBC driver on the same server (see Preparing the installation | IBM DB2 for iSeries).
This article aims to guide you through the process.
Currently, Qlik Cloud Data Integration supports DB2i ODBC driver version 1.1.0.26, which can be confirmed by viewing /opt/qlik/gateway/movement/drivers/manifests/db2iseries.yaml after the data gateway is installed.
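For example, to check the expected version quickly from the command line:

grep -i version /opt/qlik/gateway/movement/drivers/manifests/db2iseries.yaml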
# Download the supported driver RPM into the expected folder:
cd /opt/qlik/gateway/movement/drivers/bin
sudo mkdir -p /opt/qlik/gateway/movement/drivers/db2iseries
sudo wget -O /opt/qlik/gateway/movement/drivers/db2iseries/ibm-iaccess-1.1.0.26-1.0.x86_64.rpm "https://public.dhe.ibm.com/software/ibmi/products/odbc/rpms/x86_64/ibm-iaccess-1.1.0.26-1.0.x86_64.rpm"
# Install the driver, then restart the Data Movement gateway service:
./install db2iseries
sudo systemctl restart repagent
To ensure CDC works with this connector, set the internal property useStorageForStringSize to true. There is a known issue with the BOOLEAN data type and driver version 1.1.0.26, and this parameter ensures smooth replication. Otherwise, you will see an error like:
[TASK_MANAGER ]I: Starting replication now (replicationtask.c:3500)
[SOURCE_CAPTURE ]E: Error parsing [1020109] (db2i_endpoint_capture.c:679)
[TASK_MANAGER ]I: Task error notification received from subtask 0, thread 0, status 1020109 (replicationtask.c:3641)
[TASK_MANAGER ]W: Task 'TASK_gG1--Wsyl3drvCJf636TqQ' encountered a recoverable error (repository.c:6372)
[SORTER ]I: Final saved task state. Stream position QCDI_TEST:QSQJRN0001:6504, Source id 3, next Target id 1, confirmed Target id 0, last source timestamp 1759171643379185 (sorter.c:772)
[SOURCE_CAPTURE ]E: Error executing source loop [1020109] (streamcomponent.c:1946)
[TASK_MANAGER ]E: Stream component failed at subtask 0, component st_0_EP_SYcKapbEJQZiVETw_g5z4w [1020109] (subtask.c:1504)
[SOURCE_CAPTURE ]E: Stream component 'st_0_EP_SYcKapbEJQZiVETw_g5z4w' terminated [1020109] (subtask.c:1675)
To configure useStorageForStringSize: