In the era of artificial intelligence (AI), the need for accurate, high-quality, real-time data has never been greater. Qlik Talend Cloud, our market-leading data integration and quality offering, utilizes AI-enriched, no-code pipelines to rapidly deliver trusted, real-time data throughout your organization - driving AI innovation, intelligent decisions, and business modernization.
Just a few months ago, we announced the general availability of Qlik Talend Cloud, and we continue to make it easier for developers and data engineers to build, manage, and fine-tune high-performance data pipelines for their enterprise needs and use cases.
Today, we are excited to announce several new DevOps capabilities and innovations that make it even simpler for our customers to adopt and manage Qlik Talend Cloud for ingesting, transforming, modeling, and collaborating on data for analytics and AI needs.
Some of the key new capabilities being launched include:
- Automated Schema Evolution for data pipelines
- Version Control with GitHub integration and branching support
- New Import/Export REST APIs to enable CI/CD workflows
Let’s dive into each of them a little bit more.
In modern data ecosystems, data sources and business requirements are often dynamic, so being able to evolve schemas—i.e., the structure of data—without interrupting the flow of data is critical for smooth operations.
Schema evolution allows users to easily detect structural changes (aka schema drift) across multiple data sources and then control how those changes are applied to the project. Previously, Qlik supported schema evolution and schema drift for replication pipelines, but multi-step data pipelines required a manual process: data engineers had to monitor for changes and update the pipelines themselves.
Today, we are thrilled to launch Automated Schema Evolution, which detects all of the DDL changes made to the source database schema and applies them automatically, dramatically simplifying the effort required to modify pipelines.
Any change to the data structure at the source database is automatically picked up and applied to the target structure - including Type 2 history - and reflected in the pipelines without any manual intervention, giving you a comprehensive, live view of the architecture.
The result is a far more automated, well-oiled data operation, with fewer errors and less downstream breakage, and a storage (bronze) layer that is always up to date - even when new values or columns are added at the source.
So, if a new column is added to the source database, it is automatically detected and captured with Type 2 history, and the appropriate changes are reflected in the landing and storage zone tables without any data reloads. For anything beyond the bronze layer, users receive notifications/alerts about schema changes in the bronze layer so they can quickly update downstream pipelines.
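To make the behavior concrete, here is a minimal Python sketch of the idea - the data structures and function names are illustrative, not Qlik's implementation. A new source column is detected and propagated to the target, while a Type 2 history record preserves the prior schema version:

```python
from datetime import datetime, timezone

def detect_drift(source_schema, target_schema):
    """Return columns present in the source but missing from the target."""
    return [c for c in source_schema if c not in target_schema]

def apply_drift(target_schema, history, new_columns):
    """Add drifted columns to the target and append a Type 2 history
    record, so the previous schema version remains queryable."""
    if not new_columns:
        return target_schema
    evolved = target_schema + new_columns
    history.append({
        "columns": evolved,
        "valid_from": datetime.now(timezone.utc).isoformat(),
    })
    return evolved

# A new "loyalty_tier" column appears at the source:
source = ["customer_id", "name", "loyalty_tier"]
target = ["customer_id", "name"]
history = [{"columns": list(target), "valid_from": "2024-01-01T00:00:00+00:00"}]

target = apply_drift(target, history, detect_drift(source, target))
```

The key point is that the old schema version is never overwritten; each change appends a new history entry, which is what keeps the bronze layer current without reloads.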
Here is a quick demo on the new Automated Schema Evolution functionality:
With Automated Schema Evolution, users can set distinct rules and have access to a series of fine-grained controls for schema evolution. These include the specific actions to take for various DDL events - such as automatically adding a column to the target, or suspending a table on rename. These configurations let users prevent downstream impact and control behavior in the target data platform.
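As an illustration of this kind of per-event rule configuration - the event names and action labels below are hypothetical placeholders, not Qlik Talend Cloud's actual settings - a simple action map might look like:

```python
# Hypothetical rule map: event names and action labels are illustrative,
# not Qlik Talend Cloud's actual configuration keys.
DDL_RULES = {
    "add_column":   "apply_to_target",  # propagate new columns automatically
    "rename_table": "suspend_table",    # pause the table and alert engineers
    "drop_column":  "ignore",           # keep the column in the target
}

def handle_ddl_event(event_type):
    """Resolve the configured action for a DDL event; unknown events
    default to suspending the table so nothing breaks silently."""
    return DDL_RULES.get(event_type, "suspend_table")

action = handle_ddl_event("rename_table")  # resolves to "suspend_table"
```

Defaulting unknown events to a safe action (here, suspending the table) is the same design choice described above: it protects downstream consumers from unreviewed structural changes.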
See example below.
Automated Schema Evolution is now generally available, and you can learn more about the feature on the Qlik help/documentation page here.
Version Control empowers developers to work concurrently on different aspects of a project—such as adding new features or fixing bugs—without disrupting the main version of the project. This approach supports incremental, collaborative, and secure development, allowing teams to release updates progressively while maintaining stability.
Today we are excited to launch Version Control for Qlik Talend Cloud Pipelines through efficient and secure GitHub integration and branching support.
Every Qlik Talend Cloud user in the organization can utilize their GitHub account with a personal access token to connect their projects to any authorized GitHub repository.
More importantly, the new 'branching' feature enables multiple developers to:
- create separate branches of a project for different features or fixes
- develop in parallel without disrupting the main version of the project
- merge their changes back once they have been reviewed
This parallel development model allows team members to sync their changes efficiently and collaborate without conflicts or putting each other's work at risk.
Users can even open an existing project located on a GitHub repository. This allows sharing projects across spaces and tenants.
Here is a quick demo on the new Version Control functionality:
Using GitHub, developers can submit pull requests, where other team members can review and approve the code before it’s merged back into the main project. This review process ensures collaborative quality control and reduces the risk of introducing errors while enabling developers to safely commit and push the changes to the central repository.
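For teams that script this review flow, a pull request can also be opened programmatically through GitHub's standard REST API. The sketch below builds (but does not send) such a request; the repository, branch names, and token are placeholders:

```python
import json
import urllib.request

def build_pull_request(owner, repo, token, head, base, title):
    """Construct a GitHub 'create pull request' API call. The token is a
    personal access token, as used to connect projects in Qlik Talend Cloud."""
    payload = json.dumps({"title": title, "head": head, "base": base}).encode()
    return urllib.request.Request(
        url=f"https://api.github.com/repos/{owner}/{repo}/pulls",
        data=payload,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )

# Placeholder repository, branches, and token for illustration:
req = build_pull_request("acme", "qtc-pipelines", "ghp_example",
                         head="feature/new-transform", base="main",
                         title="Add customer transform")
# urllib.request.urlopen(req) would submit it; omitted here.
```

Once the pull request is approved, merging it back into `main` completes the review loop described above.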
Version control, branching, and schema prefixes further enhance security as well, minimizing the risk of data loss or corruption while fostering collaborative engagement.
The Version Control feature is now generally available in Qlik Talend Cloud Standard edition (and upwards). For more details on Version Control, and how to get started, please visit the documentation page here.
Along with the other DevOps innovations, we are delighted to announce a new set of REST API endpoints for importing and exporting pipelines or projects in Qlik Talend Cloud. This enables users to build and manage their data pipelines using a Continuous Integration/Continuous Deployment (CI/CD) approach.
These new APIs programmatically reproduce the capabilities available in the user interface, allowing developers to easily manage projects across tenants and spaces for deployment purposes.
In just a few API calls, users can now read project variables (referred to as bindings), export and re-import projects.
The export API creates a ZIP file containing all necessary project contents for re-import. Besides all project-related resources (tasks, datasets, etc.), the export API also generates a separate "bindings" file listing all project parameters and variables for users to customize on re-import.
To import a project, users can either create a new project (using the dedicated API) or import into an existing project. In the latter case, users only need to read and update the bindings, then import the project contents to overwrite the existing one.
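The export/update/re-import round trip can be sketched in Python as follows. This is a simulation for illustration only: the file names inside the ZIP are assumptions, and a real export is produced by the API endpoints themselves, not built locally.

```python
import io
import json
import zipfile

def make_export(project, bindings):
    """Simulate an export ZIP: project contents plus a bindings file
    (file names are assumptions, not the actual export layout)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("project.json", json.dumps(project))
        zf.writestr("bindings.json", json.dumps(bindings))
    return buf.getvalue()

def prepare_import(export_bytes, overrides):
    """Read the bindings file from an export, apply per-environment
    overrides, and return the customized bindings to use on re-import."""
    with zipfile.ZipFile(io.BytesIO(export_bytes)) as zf:
        bindings = json.loads(zf.read("bindings.json"))
    bindings.update(overrides)
    return bindings

# Export from a dev environment, then retarget the bindings for prod:
exported = make_export({"name": "sales_pipeline"},
                       {"target_schema": "dev", "connection": "dev_db"})
prod_bindings = prepare_import(exported, {"target_schema": "prod"})
```

In a real CI/CD pipeline, the exported ZIP would come from the export endpoint, the updated bindings would go to the bindings endpoint, and the import endpoint would apply the project contents.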
The list of Import/Export API endpoints that we are launching includes:
| API | Description | Command |
| --- | --- | --- |
| Export API Definition | Exports the project content as a ZIP file. | GET /v1/di-projects/{projectId}/actions/export |
| Get Project Binding | Retrieves the bindings for the specified project. | GET /v1/di-projects/{projectId}/bindings |
| Create a New Project | Creates a new data integration project with the specified parameters. | POST /v1/di-projects |
| Update Project Binding | Updates the bindings for the import of the specified project. | PUT /v1/di-projects/{projectId}/bindings |
| Import Project Content | Imports project content into an existing project from a ZIP file. | POST /v1/di-projects/{projectId}/actions/import |
Here is a quick demo of the new Import/Export APIs:
This feature is now generally available. For more details on the Export/Import APIs, visit the documentation page here.
These new DevOps features in Qlik Talend Cloud represent a significant step forward in simplifying and optimizing data pipeline management. By automating key processes like schema evolution, integrating robust version control capabilities, and enabling seamless CI/CD workflows through new APIs, we're empowering data teams to work faster, smarter, and with greater confidence.
As businesses continue to rely on real-time, trusted data for AI-driven decision-making, these innovations ensure that your data pipelines are not only resilient and scalable but also future-proof. Whether you're managing complex data environments or developing cutting-edge analytics solutions, these enhancements make it easier to adapt, collaborate, and stay ahead of the curve. We’re excited to see how these new features will help you unlock even greater value from your data, driving innovation and business transformation.
Authors: Vijay Raja, Vincent Menard
Here are some useful resources to learn more: