During a company audit, we were alerted to several vulnerabilities in the embedded Java VM for Replicate v2024.11.0.177:
https://nvd.nist.gov/vuln/detail/CVE-2024-21235
https://nvd.nist.gov/vuln/detail/CVE-2024-21208
https://nvd.nist.gov/vuln/detail/CVE-2024-21217
https://nvd.nist.gov/vuln/detail/CVE-2024-21211
https://nvd.nist.gov/vuln/detail/CVE-2024-21210
Is there a newer version of Replicate that addresses these? If not, what is the recommended path to fix these vulnerabilities?
To resolve the issue, upgrade OpenJDK to version 17.0.13 or later, or upgrade Replicate to version 2025.5.
To upgrade only Java while keeping your current Replicate version (2024.11):
Upgrade to a Java version that addresses the CVEs (for example, Java 17.0.14+7), but do not move to a new major version. Alternatively, download any other relevant JRE binaries.
Manual steps to update the JRE for Replicate
The steps are similar for Replicate on Windows or Unix.
Defects: CVE-2024-21235; CVE-2024-21208; CVE-2024-21217; CVE-2024-21211; CVE-2024-21210
When a task is processing CDC data, the internal Replicate engine uses the Linux iconv API to convert the incoming delta records. If the conversion fails, the following warning and error are displayed in the logs:
2025-07-09T06:44:56:68127[SERVER W: Conversion error 84,CodepageFrom=943, CodepageTo=65001, insize=16, output='', outSize=99(str.c:509)
2025-07-09T06:44:56:70391[SOURCE_CAPTURE]E: Error converting column 'column name' in table 'table name' to UTF8 using codepage 943 [1020112](db2luw_endpoint_capture.c:1321)
2025-07-09T06:44:56:70414[ASSERTION]W:The problematic column value:
7f7a040a921a: 834F838A815B8358834183628376FCFC | .O...[.X.A.b.v..
7f7a040a922a: 8140814081408140814081408140| .@.@.@.@.@.@.@
(db2luw_endpoint_capture.c:1323)
The change data, shown in hexadecimal, contained the byte value "FCFC", which could not be converted to Unicode.
To validate whether the issue is caused by the data itself, you can also use a third-party tool such as DBeaver to check whether the same error is thrown when selecting the data from the source database.
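You can also reproduce this kind of conversion failure outside Replicate with a strict Java charset decode. This is an illustrative sketch only: it assumes your JDK ships the Japanese charsets, and it uses Shift_JIS as a stand-in for IBM codepage 943 (an IBM Shift-JIS variant), which is close but not identical.

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;

public class CodepageCheck {

    // Decodes raw bytes with a strict decoder so that invalid sequences fail
    // loudly instead of being silently replaced, mirroring the iconv error.
    static String decodeStrict(byte[] bytes, String charsetName)
            throws CharacterCodingException {
        CharsetDecoder decoder = Charset.forName(charsetName).newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        return decoder.decode(ByteBuffer.wrap(bytes)).toString();
    }

    public static void main(String[] args) {
        byte[] good = {(byte) 0x83, (byte) 0x4F}; // valid Shift-JIS (katakana "gu")
        byte[] bad  = {(byte) 0xFC, (byte) 0xFC}; // the byte value from the Replicate log
        try {
            System.out.println("decoded: " + decodeStrict(good, "Shift_JIS"));
            System.out.println("decoded: " + decodeStrict(bad, "Shift_JIS"));
        } catch (CharacterCodingException e) {
            // A strict decoder surfaces the bad sequence, much like the iconv error.
            System.out.println("conversion failed: " + e);
        }
    }
}
```

If the bytes fail to decode here as well, the problem is in the source data rather than in Replicate.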
When upgrading to Qlik Talend Studio R2025-06v3 or later, a tSnowflakeRow component with Snowflake JDBC driver 3.22.0 that uses the function LAST_QUERY_ID() in its command always returns the query id of "ALTER SESSION SET JDBC_USE_SESSION_TIMEZONE=false" instead of the correct query id (the id of the last query that ran).
To have LAST_QUERY_ID() return the last query rather than the ALTER SESSION statement, modify the command to use LAST_QUERY_ID(-2).
For example:
Select query used in tSnowflakeRow component
"SELECT LAST_QUERY_ID(-2) AS LAST_QUERY_ID;"
In Studio R2024-04, a new check box "Use Session Timezone" was added to the tSnowflakeInput and tSnowflakeRow components. The default value is unchecked (which issues ALTER SESSION SET JDBC_USE_SESSION_TIMEZONE=false) to avoid the Snowflake regression issue "Incorrect timezone handling for java.sql.Time". However, this brings the side effect of Snowflake functions like LAST_QUERY_ID() returning the wrong value.
2025-04-studio-known-issues for 'Use Session Timezone' option in the Snowflake components
After upgrading to Remote Engine 2.13.13, when enabling the option to execute a job from Studio on a remote engine, the process fails due to SSL and PKCS-related errors.
sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
SSL Configuration for Talend Studio to Connect with Remote Engine
During the installation of Talend Remote Engine, SSL credentials are automatically generated. To retrieve the keystore password, execute the following command:
cat /opt/TalendRemoteEngine/bin/sysenv
and locate the line
TMC_ENGINE_JOB_SERVER_SSL_KEY_STORE_PASSWORD=<PASSWORD>
The following files are necessary for secure communication between Talend Studio and the Remote Engine:
/opt/TalendRemoteEngine/etc/keystores/jobserver-client-keystore.p12
/opt/TalendRemoteEngine/etc/keystores/jobserver-client-truststore.p12 (Truststore file added with RE 2.13.13)
Transfer these files to your Talend Studio workstation and store them in a dedicated folder.
Edit the Studio startup configuration file (the Studio .ini file; its name depends on your operating system) and add the following JVM arguments:
-Dorg.talend.remote.client.ssl.force=true
-Dorg.talend.remote.client.ssl.keyStore=path_to_keystore/jobserver-client-keystore.p12
-Dorg.talend.remote.client.ssl.keyStoreType=PKCS12
-Dorg.talend.remote.client.ssl.keyStorePassword=<password_from_step_1>
-Dorg.talend.remote.client.ssl.keyPassword=<password_from_step_1>
-Dorg.talend.remote.client.ssl.trustStore=path_to_truststore/jobserver-client-truststore.p12
-Dorg.talend.remote.client.ssl.trustStoreType=PKCS12
-Dorg.talend.remote.client.ssl.trustStorePassword=<password_from_step_1>
-Dorg.talend.remote.client.ssl.disablePeerTrust=false
-Dorg.talend.remote.client.ssl.enabled.protocols=TLSv1.2,TLSv1.3
Talend Remote Engine enforces SSL communication by default, ensuring that all interactions with the engine are encrypted. If Studio does not have the appropriate certificates installed, it will be unable to establish a secure connection with the Remote Engine.
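Before wiring the paths into the Studio configuration, you can sanity-check that the copied .p12 files open with the password retrieved from sysenv. A minimal sketch (the path and password below are placeholders):

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;

public class KeystoreCheck {

    // Attempts to open a PKCS12 keystore with the given password.
    // Throws an exception if the file is unreadable or the password is wrong.
    static int verifyPkcs12(String path, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        try (InputStream in = new FileInputStream(path)) {
            ks.load(in, password);
        }
        return ks.size(); // number of entries (keys/certificates) in the store
    }

    public static void main(String[] args) throws Exception {
        // Usage: java KeystoreCheck <path-to-p12> <password>
        // Substitute the keystore/truststore copied from the Remote Engine and
        // the TMC_ENGINE_JOB_SERVER_SSL_KEY_STORE_PASSWORD value from sysenv.
        if (args.length == 2) {
            System.out.println("entries: " + verifyPkcs12(args[0], args[1].toCharArray()));
        }
    }
}
```

If this check fails for either file, fix the file transfer or the password before troubleshooting the Studio-side SSL settings.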
When creating or editing a task in Talend Management Console, under Lineage in the Processing step, you may be unable to select the "Allow Lineage collection of this task" check box to enable it.
To verify your Qlik Talend Cloud License Type
The "Allow Lineage collection of this task" option is not visible when creating tasks in Qlik Talend Management Console because of the type of license currently in use. The Lineage feature is only available with Qlik Talend Cloud Enterprise Edition or Qlik Talend Cloud Premium Edition licenses.
For more details, please see the links below.
For the Qlik Talend on-premises solution, when global context values are used in all jobs and you want to change the file path values, for example to replace d:/ with t:/, you can propagate the change to all jobs in the Studio.
What about Qlik Talend Cloud and hybrid solutions? This article provides a brief introduction to changing the value of context parameters and propagating the change to all tasks in Talend Cloud.
You can use the following API to update artifact parameters:
PUT https://api.eu.cloud.talend.com/orchestration/executables/tasks/{taskid}
In the body, set the parameters you want to update.
Example: change "ContextFilePath" from "C:/Franking/in.csv" to "h:/Franking/in.csv":
{
"name": "contextpath",
"description": "CC",
"workspaceId": "61167bef18d7d656bfae071d",
"artifact": {
"id": "689b5d04febbe74489779c31",
"version": "0.1.0.20251208032554"
},
"parameters":{
"ContextFilePath": "h:/Franking/in.csv"
}
}
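As a sketch of how this call can be scripted, the request can be built with Java's built-in HTTP client (Java 11+). The task id, token, and body below are placeholders, and authentication via a personal access token in the Authorization header is assumed:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

public class UpdateTaskParameters {

    // Builds (but does not send) the PUT request that updates a task's
    // parameters in Talend Cloud. taskId, token, and json are supplied
    // by the caller; the endpoint is the EU region shown in the article.
    static HttpRequest buildRequest(String taskId, String token, String json) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://api.eu.cloud.talend.com/orchestration/executables/tasks/" + taskId))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .timeout(Duration.ofSeconds(30))
                .PUT(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        // Placeholder values: substitute your real task id and access token.
        String json = "{\"parameters\":{\"ContextFilePath\":\"h:/Franking/in.csv\"}}";
        HttpRequest request = buildRequest("your-task-id", "your-access-token", json);
        System.out.println(request.method() + " " + request.uri());
        // To execute the call:
        // HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    }
}
```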
To maintain context values better in the long run for multiple jobs, create a Connection in Talend Cloud. This way, all context in the job can be updated by updating the file name in the Qlik Talend Management Console Connection as opposed to running the API multiple times. For information on how to set up a connection, see Managing connections.
If using the tap-shopify connector in Stitch for the orders stream, you may encounter the following error:
"An error occurred with the GraphQL API"
Why This Happens
This is a Shopify platform constraint, not a Stitch limitation.
To resolve the error, make sure no other bulk operation of the same type is running for the shop, then retry the extraction.
The Shopify Bulk API has an important restriction that only one bulk operation per type (e.g., orders) can run at a time per shop.
This means: If a bulk job is already running, any new request of the same type will fail. Overlapping jobs from multiple Stitch connections or other platforms can trigger this error.
When You Might See This Error
You may encounter this message if a bulk job of the same type is already running for your shop, for example when multiple Stitch connections or other platforms run overlapping extractions.
Shopify Bulk API documentation:
https://shopify.dev/docs/api/usage/bulk-operations/queries#limitations
Bulk operations with the GraphQL Admin API
When migrating a web service job from Talend Studio 7 to Talend Studio 8, the job fails to run in Talend 8 with Java 17, which was previously executed successfully under Java 8 or Java 11.
The execution is failing with the error:
The package javax.xml.namespace is accessible from more than one module: <unnamed>, java.xml
Upon detailed review, it was identified that a Code → Routines utility class references third-party JARs (such as jaxrpc and xpp3) that include javax.xml packages, causing a module conflict.
To resolve the conflict, these JARs should either be removed from the project's library references or modified to exclude the javax.* packages, thereby preventing duplication with the JDK-provided java.xml module.
This seems to be related to Java module conflicts introduced in Java 9 and above. In Talend 8, the job runtime relies on Java 17, which appears to be incompatible with the way certain XML packages were handled in the earlier environment.
This error message indicates that the same package is being loaded from multiple sources. In this context, it means there is a duplicate JAR reference in the job design.
Potential Causes
Since Java 9, the JDK already includes the java.xml module, which contains the javax.xml.namespace package. If third-party JARs also provide this package, it causes a module conflict.
You may be experiencing the following compilation error in your Talend ESB route's cJMS component with "WebSphere MQ" when migrating from v7 to Talend v8-R2025-06.
The method jmsComponent(JmsConfiguration) in the type JmsComponent is not applicable for the arguments (ConnectionFactory)
Use the studio patch v8-R2025-08 which contains updated javajet file and README with details of installation process.
The code generator for the cJMS component with "WebSphere MQ" is still using javax.jms.ConnectionFactory in Talend Studio, which is not aligned with camel-jms (Camel 4.8.1), which uses jakarta.jms.ConnectionFactory.
Internal Jira case ID: QAPPINT-1661
The following error is found in log when working on a Git project in Talend Studio:
org.talend.commons.exception.CommonExceptionHandler - Branches configuration is out of sync, can't find the specified branch ****, will try to recover it using it's remote branch.
In order to fix this, please modify the remote repository as well.
For example, if you delete the branch on Talend Studio, please delete it from the remote repository as well.
If you are using GitHub, this means deleting the branch on GitHub at the following link:
https://github.com/[repository name]/branches
or using Git command-line tools such as Git Bash to delete the branch from the remote repository (for example, git push origin --delete <branch_name>).
The Git branch structure in Talend Studio does not match the branches in the actual remote Git repository.
This usually occurs when you modify a branch in the local project without making the same modification in the remote repository.
You may be facing the error as below when trying to run a job (ESB) in Talend Studio.
"Too many constants, the constant pool for **** would exceed 65536 entries"
To avoid this, you can redesign your job and use a separate job to handle some of the job flow, or use a dynamic schema.
For example, you can use tRunJob component and put some part of the job flow into another job.
As the error message states, too many constants are being used in the generated job code.
A Java class's constant pool is limited to 65,536 entries; this limit is usually reached when too many components are used in one job or the schemas contain too many columns.
If you are receiving an error that is related to the 65535 bytes limit, but the error message is different, please refer to this documentation:
How to prevent errors in code generation caused by the 65535 bytes limit
When configuring a Google Analytics integration in Stitch, users can define custom combinations of metrics and dimensions. However, there are two common issues that may arise:
Below are steps to troubleshoot each scenario.
A common error message users may encounter is:
"Invalid combination. The combination of metrics and dimensions you entered is invalid. Please see the Dimensions & Metrics Explorer to learn which combinations of metrics and dimensions are valid."
This error is raised within Stitch but originates from Google’s API. When a report is created, Stitch sends the selected metrics and dimensions to Google. If the combination is unsupported, Google returns an error.
Test the combination using Google Query Explorer
Use the same metrics and dimensions as defined in the Stitch integration, with a single start and end date.
Interpret the results:
If users report that the data in Stitch doesn’t match what they see in the Google Analytics UI, it’s important to understand the following:
Use Google Query Explorer
Run a query using the same metrics and dimensions defined in the Stitch integration, with a single start and end date.
Compare the Results
Next Steps
If the data in Query Explorer does not align with what Stitch has replicated, refer to Stitch’s Data Discrepancy Troubleshooting Guide for further investigation and guidance on contacting support.
Question
I need to read data from a DB2 database where the field type is defined as CHAR () FOR BIT DATA. When I create the connection in Talend metadata and try to view the data, it appears as hex. Using a tool like DBeaver I can see the data. How can I get Talend to read the data correctly?
Tools like DBeaver automatically cast data types. To get the same result before processing in a component, add this to your SQL statement:
SELECT CAST(your_column AS VARCHAR(100) CCSID 37) AS utf8_col FROM your_table
CCSID 37 is US EBCDIC, used by IBM AS/400.
See a table here:
https://www.cs.umd.edu/~meesh/cmsc311/clin-cmsc311/Lectures/lecture6/ebcdic.html
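Alternatively, if you read the column as raw bytes, the same conversion can be done inside the job (for example in a routine or a tJavaRow). A minimal sketch, assuming your JDK includes the IBM037 charset for CCSID 37:

```java
import java.nio.charset.Charset;

public class EbcdicDecode {

    // Decodes raw CHAR(...) FOR BIT DATA bytes as EBCDIC CCSID 37 using the
    // JDK's IBM037 charset (shipped in standard JDK builds that include the
    // jdk.charsets module).
    static String decodeCcsid37(byte[] raw) {
        return new String(raw, Charset.forName("IBM037"));
    }

    public static void main(String[] args) {
        // 0xC1 0xC2 0xC3 is "ABC" in EBCDIC CCSID 37.
        byte[] raw = {(byte) 0xC1, (byte) 0xC2, (byte) 0xC3};
        System.out.println(decodeCcsid37(raw)); // prints "ABC"
    }
}
```

This keeps the SQL untouched and performs the EBCDIC-to-Unicode conversion in the job instead.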
A Talend Spark Stream Job configured with Yarn cluster mode and Kerberos enabled is encountering issues and failing to execute, presenting the following errors:
YarnClusterScheduler- Lost executor 3 on me-worker1.xxx.co.id: Unable to create executor due to Unable to register with external shuffle server due to : java.lang.IllegalStateException: Expected SaslMessage, received something else (maybe your client does not have SASL enabled?)
jaas.conf content:
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/tmp/adm.keytab"
principal="cld_adm@XXX.CO.ID"
doNotPrompt=true;
};
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/tmp/adm.keytab"
principal="cld_adm@XXX.CO.ID"
doNotPrompt=true;
};
The external shuffle service within YARN is configured to mandate SASL authentication; however, the Spark executor is either improperly configured to use SASL or is transmitting an incompatible message.
Common causes include SASL settings that do not match between the Spark executors and the YARN external shuffle service.
This error results in the executor's failure to register with the shuffle service, prompting the YarnClusterScheduler to mark it as lost.
Ensure that the external shuffle service is enabled, and that the SASL settings are in accordance with YARN's configurations:
spark.authenticate true
# Optional: only needed if over-the-wire encryption is also required
spark.network.crypto.enabled true
spark.network.crypto.saslFallback true
When passing parameters from Postman to a Talend service, the tRestRequest component receives null values.
Review the schema of the tRestRequest component to ensure the Comment field is appropriately assigned.
In the REST API Mapping section, if you leave the Comment field empty, the parameter is by default considered a path parameter.
Some parameters are missing or misconfigured in the Comment field of the tRestRequest component; you need to define each parameter's type in the Comment field of the schema.
Below is a list of supported Comment values:
empty or path corresponds to the default @PathParam,
query corresponds to @QueryParam,
form corresponds to @FormParam,
header corresponds to @HeaderParam.
matrix corresponds to @MatrixParam.
multipart corresponds to the CXF specific @Multipart, representing the request body. It can be used only with POST and PUT HTTP methods.
https://help.qlik.com/talend/en-US/components/8.0/esb-rest/trestrequest
When attempting to build a set of items (jobs, routes, services, etc.) from a parent project with a reference project, Maven and P2 may only see the items in the parent. Additionally, there may be errors about certain items (child jobs, routines, etc.) missing during different stages of the build, such as (but not limited to) the following:
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal on project <project_name>: Could not resolve dependencies for project org.example.<project_name>.job:<job_name>:jar:0.1.0: The following artifacts could not be resolved: org.example.<reference_project>.joblet:<joblet_name>:pom:0.1.0 (absent)
The YAML or pipeline scripts used with the orchestrator for Talend CI/CD may need to be modified so that the parent and reference projects are both checked out and copied into one folder. The process depends on the orchestrator used (such as Jenkins/CloudBees, Azure DevOps, or GitLab Actions); please check with your DevOps team for the specific commands and process.
In short, the process should look similar to the following:
Most YAML or pipeline scripts are set up to pull and check out a single repository/branch; however, if a reference project is being used, its repository/branch must also be pulled down. If the reference project is not copied into the same folder as the parent project, Maven/Talend may only see the parent project, without the reference project.
Question
After upgrading Talend Administration Center (TAC) to TPS-5612 (R2024-12) or later, why do the four MetaServlet commands createBranch, branchExist, createTag and deleteBranch no longer work as before and throw an error like the one below?
{"error":"Unknown command 'branchExist'","returnCode":2}
With the release of Talend Administration Center patch TPS-5612, project references and Git access have been removed from Talend Administration Center to improve the performance.
The following elements have been removed:
This is documented in the change notes here: https://help.qlik.com/talend/en-US/release-notes/8.0/r2024-12-administration-center
You may be experiencing a ClassCastException error with the tHTTPClient component during an HTTP call.
java.lang.LinkageError: ClassCastException
You can resolve this by changing the <version> element of the HTTP client dependency in the project POM file to the proper version:
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<version>4.3.3</version>
</dependency>
This is caused by the version of the HTTP client declared in the project POM file:
customizing-project-pom-settings
You may have a job designed with several tRunJob components whose child jobs are supposed to use the context group values from the parent job.
While the job itself runs well within Studio and publishes with no issues, it may fail when deployed to a Remote Engine via TMC. The error found within the task logs may show something such as:
Error locating <insert resource>, not found
Make sure to select the "Transmit Whole Context" check box if the child job is supposed to use the parent job's context values.
When you are using multiple tRunJob components, the child jobs need to use the context group values from the parent job. If the "Transmit Whole Context" option is not enabled in the tRunJob component settings, each child job will attempt to use its own context group, which may contain the wrong values for your environment. In some cases the issue is simply that the correct contexts were not being transmitted to a child job.
In the Talend tDBInput component, CLOB data types cannot be handled properly when you are ingesting data from a DB2 source to a target.
This article briefly introduces how to properly handle CLOB data types when ingesting from DB2 to Target using Talend.
The code below is a scratch version that needs testing and rewriting.
Use this as an example to study:
=== Example code ===
package routines;

import java.io.IOException;
import java.io.InputStream;
import java.io.StringWriter;
import java.sql.Clob;
import java.sql.SQLException;
import java.util.logging.Level;
import java.util.logging.Logger;

import org.apache.commons.io.IOUtils;

public class ClobUtils {

    // Converts an arbitrary object to a String if it is a Clob;
    // returns null for null or non-Clob values.
    public static String getClobAsString(Object obj) {
        String out = null;
        if (obj instanceof Clob) {
            out = getClobAsString((Clob) obj);
        } else {
            Logger.getLogger("ClobUtils").log(Level.FINE, "null or non-Clob value");
        }
        return out;
    }

    // Reads the full content of a Clob into a String.
    public static String getClobAsString(Clob clobObject) {
        String clobAsString = null;
        if (clobObject != null) {
            try {
                long clobLength = clobObject.length();
                if (clobLength <= Integer.MAX_VALUE) {
                    // Small enough to read in a single call.
                    clobAsString = clobObject.getSubString(1, (int) clobLength);
                } else {
                    // Stream very large CLOBs instead of materializing one call.
                    InputStream in = clobObject.getAsciiStream();
                    StringWriter w = new StringWriter();
                    IOUtils.copy(in, w, "UTF-8");
                    clobAsString = w.toString();
                }
            } catch (SQLException | IOException e) {
                e.printStackTrace();
            }
        }
        return clobAsString;
    }
}
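To sanity-check the core conversion without a DB2 connection, the JDK's SerialClob can stand in for a database CLOB. A minimal, self-contained sketch of the same getSubString-based approach:

```java
import java.sql.Clob;
import javax.sql.rowset.serial.SerialClob;

public class ClobExample {

    // Converts a Clob to a String using getSubString, the same approach the
    // routine takes for CLOBs that fit in memory.
    static String clobToString(Clob clob) throws Exception {
        return clob.getSubString(1, (int) clob.length());
    }

    public static void main(String[] args) throws Exception {
        // SerialClob is a JDK in-memory Clob implementation, used here as a
        // stand-in for a value read from DB2.
        Clob clob = new SerialClob("Hello CLOB".toCharArray());
        System.out.println(clobToString(clob)); // prints "Hello CLOB"
    }
}
```

In a job, the routine would typically be called from a tJavaRow on the column read by tDBInput.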