

Q&A with Qlik: AI with Qlik Talend Data Integration

Troy_Raney
Digital Support


Last Update:

Feb 4, 2026 4:20:05 AM

Updated By:

Troy_Raney

Created date:

Feb 4, 2026 4:20:05 AM


 

Environment

  • Qlik Cloud
  • Qlik Talend Data Integration

Transcript


Hello, and welcome to Q&A with Qlik! Today, we're going to be talking about AI and Qlik Talend Data Integration.
I'd like to quickly introduce myself. My name's Troy Raney. I work to put together helpful videos and webinars like this one.
We've got a great expert panel for you today.
I'll just pass it around and let everyone quickly introduce themselves. Antoine, why don't we start with you?
Hello everyone, so I'm Antoine Richard. I'm based in France, I'm a product manager working on Qlik Talend Cloud.
I'm especially focusing on data transformations and Qlik Open Lakehouse.
Great, thanks. Damien?
Damien Edwards, Master Principal Integration Architect, focused a lot on Data Integration as a subject matter expert, so helping you build out your data pipelines and helping you solve Data Integration use cases.
Fantastic. Manuel?
Ah, hello everybody. So, Manuel Salgado here, uh, based out of Houston, Texas, and like Damien, a colleague on the same team, um, Principal Integration Architect here, and subject matter expert on Qlik Talend Cloud and Data Integration.
Uh, and I hope to answer as many of your questions as you have today.
Great, thank you. And Simon.
Hi folks, my name is Simon Swan. I work on the product marketing team at Qlik, formerly of Talend, so a little bit of experience there.
Um, I'm not as technically capable as some of these folks that we have on the Q&A today, but where I can, hopefully I can add a little bit of color to some of the answers.
Fantastic, thank you. Well, when you registered, you had an opportunity to submit some questions ahead of time, so we're going to start with those.
If you have any questions already in mind, go ahead and you can post them in the Q&A tool, and we'll get to them.
First question. What's the difference between Talend Cloud and Qlik?
Simon, you want to take that?
Yes. Um, I saw this question come in earlier, and it's an interesting question, because, um, at Qlik, we're not focused on the differences, we're focused on the similarities. Now, as many of you know, um,
Qlik and Talend were two organizations. Um, Talend was acquired by Qlik, I think two, three years ago now. Um, so we've worked on an integration between the two.
Now, Qlik Talend Cloud is a composite of all of the features that we had across both portfolio companies brought together.
And, um, this basically allows us to build for the future, as well as support our legacy and a lot of the functionality that customers have used for many, many years.
Now, our aim is to consolidate these into a very powerful singular platform that covers all of these use cases.
As an example, um, Talend users can benefit from, um, CDC now, and a whole variety of other tools that came from the Qlik acquisition.
And on the Qlik side, obviously, the power of the Data Integration tools makes this a very compelling sort of offering. Now, um, in terms of the differences: when you go to Qlik Talend Cloud, you have perspectives.
And the perspectives are designed to serve people who were using those tools under the previous arrangement, both on the Qlik side and the Talend side. Um, so basically, most people use Qlik Talend Cloud Pipelines, as an example, to access some of the features that we had from the Attunity side.
Now, um, what we're finding is that through this consolidation process, we're building a very strong singular platform.
And this is going to be manifest a little later this year, when we work on the orchestration side, bringing together remote engines and gateways into a single portal, so people can access all of their jobs from that particular experience.
So, when we talk about, sort of, um, uh, was it, uh, Talend Cloud and Qlik Cloud, basically, these are perspectives within Qlik Talend Cloud, allowing people to access.
the features and functionality that they've used for many years.
as we then bring these two powerful brands together.
Fantastic. Good answer. Next question, and this came from our previous Techspert Talk on Knowledge Marts.
Can you monitor the usage of the Knowledge Mart?
Yes, so, um, I can take this one as well. So, first of all, thank you to everyone who attended, and for some of the great questions and feedback that we had from that session. Um, it's pretty satisfying to know that people are very interested in the new features that we roll out. Um, in terms of Knowledge Marts themselves, just to refresh a lot of people: this is a feature that we introduced to Qlik Talend Cloud last year.
Um, it allows people to ingest both structured and unstructured data into vector stores, to allow them to create AI experiences over the top.
Now, um, if you haven't seen the demonstration, please go back to that previous session. I think Troy will be able to link it. He's got it up on the screen here, this is great.
Um, so that allows people to see what happened previously. Now, when we look at how we allow people access to Knowledge Marts: typically what happens is that we use the supporting platform, the data platform, to actually do the processing, and so it can be considered a pass-through in many circumstances. So, as an example, we have dedicated support both for Databricks and Snowflake.
So as such, we don't actually monitor or give these metrics to customers directly, because all of that insight sits alongside all of the other workloads that people have on these platforms today. So, um, what I would advocate for is people using the tools that we have on top of Snowflake. Well, actually… yes, you just brought it up, fantastic, Troy. So, we actually have a series of dashboards that work on top of Snowflake to help people monitor and predict what's happening in terms of how they're consuming compute resources as well.
What I'll do, I'll just share my screen out, Troy, because I've got a video that I use for another session, uh, let's have a look here.
Sure.
Uh, it's gonna allow me to do this. Here we go.
There we go. So, um, this is a short video I put together showing the actual dashboard. It's very comprehensive, actually, and what it allows you to do is to drill down into costs if you're FinOps-minded, um, the utilization, usage of the different data assets as well. And what happens is you can, if I can scroll here.
Yeah, am I able to scroll? Sorry, my mouse cursor's very challenging. Here we go. So you can see here that I'm able to look at our query distribution. So this is everything that's been pushed down
from what we do in Qlik onto Snowflake, and we can actually look at those workloads which have been generated or initiated on the Qlik side as well.
As we move forward, we can get very granular with this in terms of the warehouse, the application, the workload, and also the number of queries that have been presented. So, this dashboard connects to your Snowflake instance, and it allows you to view all of this information.
What's quite interesting is when we get to the distribution of the AI workloads, and you can see here we have a perspective which allows you to look at the different classifications of COMPLETE, CLASSIFY, SENTIMENT, and TRANSLATE within Snowflake, using the Cortex features as well.
Now, further to that, if we… if I scan ahead a little bit more, um, here we go, we can also see how we can forecast.
potential costs as well, and we can look for anomalies, in terms of seeing spikes in these types of workloads. So this really gives a very comprehensive view of what's happening in terms of your Cortex utilization, and it's always going to be your first stop.
Um, just, well, I don't want to dominate this conversation with this, but scrolling through a little bit longer, you can see here, uh, I can also look at the performance of queries, and in this instance, and apologies if it's very small on the screen here, what we're able to do is actually predict what's happening. So you can see here that I have a SQL query, which is, um, uh, in that box there, and it's actually giving an explanation as to.
how it's being run, and how optimized it is.
Finally, I can… there we go, you can see it there clearly, and then finally, in this dashboard, what we have.
is the ability to basically use answers to then drill down and ask a question. This involves us basically pushing through… there we go, I've got a question here. So, if I was to basically use this type of query, it can then compute the costs on the Snowflake side to help us determine exactly how expensive that's going to be. So, a tremendous level of granularity. So, to answer the question: we do have the features; we basically rely on the destination platform for people to investigate it, because it's very, very deep and robust. Right now, we have something available for Snowflake on the FinOps side, and we'd encourage people to go to that link on GitHub to download it, and obviously speak to our team if you need any help with the implementation.
Moving forward into Databricks, we're going to be producing similar types of dashboards around that as well. But again, all of this is built on top of our Qlik Analytics features, and so if not all of this, some of this can be implemented by customers as well.
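As a rough illustration of the kind of anomaly check the dashboard performs, here is a toy sketch (not the dashboard's actual logic): flagging daily compute-cost spikes with a simple z-score.

```python
from statistics import mean, stdev

def flag_cost_anomalies(daily_costs, threshold=2.0):
    """Flag days whose compute cost deviates from the mean by more
    than `threshold` standard deviations (a toy z-score check)."""
    mu = mean(daily_costs)
    sigma = stdev(daily_costs)
    if sigma == 0:
        return []
    return [i for i, cost in enumerate(daily_costs)
            if abs(cost - mu) / sigma > threshold]

# The day-5 spike stands out against otherwise stable spend.
costs = [10.2, 9.8, 10.5, 10.1, 9.9, 42.0, 10.3]
print(flag_cost_anomalies(costs))  # → [5]
```

A real FinOps dashboard would work from warehouse metering views rather than a hand-fed list, but the spike-detection idea is the same.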
That's very cool, and that app you showed, that's the installed version of what's available on GitHub, that link.
That's correct, yes.
Fantastic. All right.
And in case people didn't catch that: it's free of charge, gratis, right? No additional cost; you can just go to GitHub and deploy this on your Qlik tenant.
Excellent, thank you for clarifying that. Alright, next question. Can you use QVD data already running through Qlik Cloud?
Yes, I'll take that question. Uh, currently, yes, you can. Um, you can catalog existing QVD data, and it can be used and enabled for other features. One feature specifically, which has already been mentioned, is Qlik Answers.
You can use QVD data to be part of your Knowledge base, and index that data to be able to chat with it using LLMs as well within our RAG use case.
You can also use it with another product we have called Data Products: you can bring QVD data in and use it to create governed data assets that can be shareable. Um, and also, we have a concept called an AI Trust Score, which can basically use data validation rules, or data quality, and you can generate an AI Trust Score to show how trustworthy the data is, as you bring data in and use it for analytics.
You can then build out trustworthy products that can be shared and consumed also to create other applications as well.
Or you can even just use QVDs that already exist in another Qlik Sense analytics application, so you can definitely make use of
QVDs, as well as in the Data Integration pipeline, create more QVDs to bring into your application as well.
Fantastic. Yeah, a lot of options there.
And I would add, uh, to everything that Damien said: also, if you have a need to expose QVDs to external consumers in a neutral data format, using data products, you can actually expose QVDs using an OData API. So let's say you have an application outside of Qlik, or you have a web service that needs to
Yes.
you know, consume this data, but the QVD format is not suitable.
And this is all code-free, no effort on your part. All you have to do is bring the QVD over to a data product, and you can say, I want to consume this as an application API, and we will automatically generate an OData API for you to consume that QVD.
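To picture what consuming such a generated API might look like, here is a hedged sketch. The field names and response shape are hypothetical, not Qlik's actual schema, and the payload stands in for a real HTTP response body:

```python
import json

# Hypothetical response body, shaped like an OData-style JSON payload;
# the fields are invented for illustration.
payload = """
{
  "value": [
    {"CustomerID": 1, "Region": "EMEA", "Sales": 1250.0},
    {"CustomerID": 2, "Region": "AMER", "Sales": 980.5}
  ]
}
"""

def rows_from_odata(body):
    """Pull the record list out of an OData-style JSON body."""
    return json.loads(body)["value"]

for row in rows_from_odata(payload):
    print(row["CustomerID"], row["Region"], row["Sales"])
```

In practice the body would come from an authenticated HTTP GET against the generated endpoint; the parsing step stays the same.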
Very cool. All right, uh, next question.
What new features provide a competitive advantage over other tools?
Yeah, I'll take that one as well. There's two features in particular I want to mention. Uh, we have some documentation, and we've also had tech sessions on our Open Lakehouse solution. Uh, we definitely think this provides a great advantage, because we're able to provide lower ingestion cost and near-real-time ingestion.
By utilizing the open format, such as Iceberg, where we can bring the data in, ingest it, and then we can mirror it to other different targets.
So this allows you to be able to save on your ingestion costs in a scalable solution, and then also move it to multiple different targets, so you don't have to re-ingest. So this gives you a lot of capabilities around.
saving costs, and then also making sure the data is available, and not just one, but multiple different.
targets downstream. You also have the ability to use us for transformations with this capability as well to help you out on more pushdown and saving on compute as well.
Uh, the second feature, which is gonna be GA relatively soon, is our MCP server, which we enable in Qlik Talend Cloud. This will give you access to our agentic framework and agents that expose tools, where you can then use third-party tools, uh, such as ChatGPT or Claude, to basically call these specific tools and help build out assets with our platform. So you'll be able to access tools, generate applications, and also data products, which we mentioned as well, using this agent framework. So this gives us the ability to use MCP with things such as our associative engine, which will give you a lot of different use cases over some of our competitors as well.
That's very exciting, and while you're on the topic, a question did come in on that. When will it be available, and will it be free? The MCP server.
Um, well, it will be available this month; it should go GA. As far as free, no, there is going to be an associated cost, and there will be some documentation on that.
Depends on the tier you're at, right?
Well, if I may also, I have additional information on that. Uh, it should come out in the middle of February, I believe; safe harbor here in case I got the date wrong, but it's sometime in the middle of February for…
Um, full access through the MCP server to our Qlik Cloud Analytics functions. For the Data Integration functions, those will come out later this year; we don't have a date yet for that.
In terms of cost: as long as you're a Qlik Cloud or Qlik Talend Cloud customer, it will be added to your tier without any additional charge. It will be part of every tier of Qlik Talend Cloud and Qlik Cloud.
Fantastic. That's very exciting. Next question. We're running into some issues creating tasks mirroring data from Iceberg to Snowflake.
Is there an online tutorial to help with that?
I can take this one for you. So, yes, there is online documentation on our website. I will provide a link in the chat here afterwards.
But, uh, just a quick recap here. This relates to the Open Lakehouse solution that Damien just presented as a differentiator for Qlik. The whole zero-copy mirroring is really about how quickly customers can benefit from low-cost, low-latency data ingestion into Iceberg without using any data warehouse. And with zero-copy mirroring, we are enabling customers to access this data from their data warehouse of choice, such as Snowflake, without any copy, by keeping one single source of truth, right?
Uh, so, we do have documentation on that. Sorry, one comment first: the documentation right now is for Snowflake only, because we only support Snowflake as a mirror target today. We are about to release mirroring to Redshift, like, next week or so, and next will come Databricks, and then BigQuery as well. So you'll have every data platform you need, and you can mirror from one Iceberg table to multiple targets with all of these, right? Snowflake and Redshift, for example. So right now the documentation is only about Snowflake, and it will be extended to cover the other platforms as well, right?
So, basically, you have to configure, like, a refresh mechanism; that's how the metadata gets reflected into your target data warehouse.
There are two ways with Snowflake. One is Snowflake-managed. It's really a serverless operation that's taken care of by Snowflake: the refresh interval of those pointers to the Iceberg tables is managed automatically by Snowflake, and you have an automatic refresh configuration that you can adjust, I think, right?
But Qlik loses ownership of this mirroring if you pick that, right?
Uh, something we would recommend is maybe using the Qlik-managed option. That's the second option here,
which does require, uh, an active Snowflake warehouse, but will provide you with full control over when you want the mirror to be refreshed. And if you have downstream transformations on Snowflake that are using these Iceberg tables, you can control that as well. You can trigger downstream transformations, and you can monitor and schedule the mirror task here.
So, that is what we recommend, and this is particularly relevant if you have multiple tables, and especially multi-table transformations downstream, as we will manage the metadata update for all tables simultaneously and provide you with consistency, which you might lose if you use the Snowflake-managed option, right?
So, that's really something to keep in mind. The documentation details all of that, and, you know, all the prerequisites that you need to properly configure these external objects in Snowflake that point to your Iceberg tables. So I'm sharing the link right now in the chat.
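The consistency argument can be made concrete with a toy sketch (not Qlik's implementation): refreshing table metadata one table at a time can expose mixed versions to a concurrent reader, while refreshing all pointers in one step cannot.

```python
def refresh_per_table(catalog, updates):
    """Refresh table pointers one at a time; a reader between steps
    can observe a mix of old and new table versions."""
    observed = []
    for table, version in updates.items():
        catalog[table] = version
        observed.append(dict(catalog))  # snapshot a concurrent reader could see
    return observed

def refresh_all_at_once(catalog, updates):
    """Apply every table's new pointer in one step, so downstream
    transformations only ever see a consistent set of versions."""
    catalog.update(updates)
    return dict(catalog)

updates = {"orders": 2, "customers": 2}
print(refresh_per_table({"orders": 1, "customers": 1}, updates))
print(refresh_all_at_once({"orders": 1, "customers": 1}, updates))
```

The first call passes through a state where `orders` is at version 2 while `customers` is still at version 1; the second never does, which is the point of letting one controller refresh all mirrored tables together.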
That's great. Alright, I'll pass that along.
There it is, thank you, find that link.
Yeah, that's the one.
Alright, next question. Is there a NetSuite connector in Qlik Talend Data Integration?
Um, I can answer this one quite quickly. Um, yes, it is, in Studio.
We have support for, I think it's API version 2025.1.
Um, and it's been in the software for many, many years. Um, I think it uses the SOAP protocol to connect to the data. So, go to Studio, the component's available there, and you can use it like any other database component to read or write information to that source or destination.
Fantastic. I'll share that link as well. Great. Next question.
Is it possible to send multiple files from a folder via the Talend tSendMail component? Very specific.
Yeah, um, I kind of agonized over this answer, because I like to tell people the answer, but then I always want to dig into the why a little bit, asking, you know, why are people doing that? So, um… The answer is yes, you can. Um, you have to use an iterative process.
Yeah.
I think there's a component called tFileList, um, which basically allows folks to iterate through the contents of a folder, as an example.
Um, and this allows them to send out the attachments with an email.
Now, in terms of best practices, um, I believe it's one attachment per email that's sent, so if you want multiple attachments, then you may have to zip them prior to sending them out, or potentially look for another solution where they can be sent. But I do sometimes question why people are sending attachments via email, you know, looking at what a lot of our customers are saying about essentially being governed organizations. Um, their preference is to have discrete object storage, you know, for exactly these types of reports, where they live centrally, so people aren't working with versions of things that may be wrong or out of sync and things like that. I understand that for convenience's sake this can happen, but, you know, in a similar way, you could write the files to an S3 bucket or to SharePoint, um, and access them that way as well. So the answer is yes, you can do it.
Um, but maybe look a little bit further, um, with that type of requirement to figure out exactly what's best fit.
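Simon's zip-then-attach suggestion can be sketched outside of Studio as well. A minimal Python equivalent of a tFileList / zip / tSendMail flow, building the message only, without actually sending it; the file contents and addresses here are illustrative:

```python
import io
import zipfile
from email.message import EmailMessage

def build_mail_with_zipped_files(files, to_addr):
    """Zip the given {filename: bytes} files into one archive and
    attach it to a single message (one attachment instead of many).
    In Studio, a tFileList iteration over a folder would supply
    these file names."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name, data in sorted(files.items()):
            zf.writestr(name, data)
    msg = EmailMessage()
    msg["To"] = to_addr
    msg["Subject"] = "Reports"
    msg.set_content("Attached: all files from the folder, zipped.")
    msg.add_attachment(buf.getvalue(), maintype="application",
                       subtype="zip", filename="reports.zip")
    return msg

msg = build_mail_with_zipped_files({"a.csv": b"1,2", "b.csv": b"3,4"},
                                   "team@example.com")
print([part.get_filename() for part in msg.iter_attachments()])
```

Sending would then be a single `smtplib.SMTP(...).send_message(msg)` call against your mail server.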
I love that answer, Simon. I come from support, and a lot of times we're trying to solve the problem right in front of our nose, but not looking at the bigger picture, or what may be better for the future. Like, maybe that's not the best solution.
So that's great.
Sure.
you know, and I'm guilty of this, I still email myself stuff, so, um, you know, it happens all the time, just for convenience sake, um, but, you know, if we're looking at the way that people are sharing data across an enterprise, especially with reports and things like that.
It's almost an imperative that they are centrally managed and governed now, because, um, any numbers which are incorrect just lead people to make worse decisions.
Yeah, definitely. Great, uh, next question.
What is best practice for ingesting SQL Server to Snowflake across dev, QA, production environments?
Do we need multiple tenants? That's a fair question.
Yeah, no, you don't need multiple tenants. We have features as part of intelligent pipelines that help you segregate different environments and promote from one to another. So, the feature is called a space, right?
And we would recommend that you create a space for each environment you need, right? One for development, one for QA, one for production.
You would also create, like, dedicated data connections to your sources and targets in each space, right? You are not using the same connection, probably, to your production SQL Server as to your dev SQL Server, right? Alongside that, the spaces and the connections by space, we do have a mechanism
to take, like, a development pipeline or project and move it to QA, and then to prod. So, the basic way to do that is to use the export and import functionality, right? You take your dev project, you export it, and you import it back in QA, right? When you do the import process, you will be prompted to select which connections you want to use, so that you can, you know, use the QA sources and the QA targets as well, right?
One other, more advanced way of doing that is to use the GitHub integration as well, right? This is probably what more mature or, like, coder-type customers would do, and for critical deliverables, that's really recommended. So you would have your spaces, and you would also have one branch of your project for each space, right? Uh, and then you would do Git operations to move from one environment to another.
We do have best practices on that documented, so I'm looking for the link, and I will share it in the chat here. You should have everything you need in the docs.
Okay.
But that's great.
I just want to add as well, we also have REST APIs that can help you do the export, as well as all of the features and functionality that Antoine just mentioned.
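The connection re-selection step during import can be pictured as a simple mapping from dev connections to their QA counterparts. This is an illustrative sketch only; the project structure and connection names are invented, not the actual export format:

```python
def remap_connections(project, mapping):
    """Return a copy of an exported project definition with each dev
    connection swapped for its target-environment counterpart."""
    return {
        **project,
        "connections": [mapping.get(c, c) for c in project["connections"]],
    }

dev_project = {"name": "sales_pipeline",
               "connections": ["sqlserver_dev", "snowflake_dev"]}
qa_project = remap_connections(dev_project,
                               {"sqlserver_dev": "sqlserver_qa",
                                "snowflake_dev": "snowflake_qa"})
print(qa_project["connections"])  # → ['sqlserver_qa', 'snowflake_qa']
```

Whether you promote through the UI's export/import dialog, the REST APIs, or Git branches, the promotion boils down to the same idea: same project definition, different connections per space.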
Fantastic. Here's that link… Wonderful.
If you scroll down a bit, Troy, you should see, yeah, "working in branches" is there, and the part about promoting between environments and spaces.
Great.
All right, next question. talking about AI. With AI, there's no advantage of using GUI-based tools like Talend anymore. Are there any major updates planned for Qlik to address this?
I can take this one, and also combine it with the next one and another question that we had. And anyone, if you want to chime in, please feel free.
Yeah.
I would say that, first, I would question the premise, right? I don't know that GUI-based tools are going to go away now that we have a way to verbalize and transcribe what we need the system to do, and the agents can do it for us.
There's still gonna be a need to fine-tune some of the solutions the agents come up with, at least at the beginning, and maybe throughout the entire term of the development.
I would look at it more as a complement, right? Now that we have generative AI and agentic AI that we can delegate to, from our expression of what we want the system to do in plain English or another language, we can have the system do what we want, right? Think of that as another interface into the system. Whereas before, you only had the UI, or maybe you had the script, the code, right? First you had the script, then we had the UI that translated objects on the screen into scripts. Now we have the AI agent that can take your words and translate them into what we want the system to do. So I would say that all three of those things are still going to be needed going forward, particularly for the really hard use cases and problems.
Now, to the question itself, you know, we are very aware, as Qlik, as a company of this.
new and exciting way of interacting with systems, and that's why.
Uh, we are bringing to market the MCP server, uh, that's gonna come out the middle of this month.
Um, there's gonna be extensions of that into, uh, Talend and into Qlik Talend Cloud.
Uh, to enable those systems to work in that fashion, right? Where you can just tell it what you want it to do, and it could create a pipeline in the cloud, it could create a Talend job, right, basically based on your prompt of what you want the system to do.
Um, so that is definitely coming later this year. First, we're gonna start with our stronger heritage, which is the analytics side of the house, where we have a robust and mature set of APIs. Because if you understand MCP, you know that for it to really work, and for the agents to actually perform, they need well-defined and documented APIs behind it that can be called upon to do the different things in the system. And those are more mature and advanced on the analytics side, and through the rest of the year, we're working on bringing that capability up on the Data Integration side of the house.
So that's… hopefully that answers the question as to what we're planning to do.
Did you want to talk about the difference between Qlik Answers and Knowledge Marts, or…
Um… Yes, but I'll pause here, I don't know if any other panelist wants to chime in and add anything that I may have missed.
Yeah, just to add in, even with the use of agents and AI, you still need trustworthy data, so you're still going to want to have.
Very good.
data that has quality, that's trusted, and has guardrails around it, so that you can get really good results from the AI models that you use, and you can take action. Because if you don't have good data, then the results that you get aren't going to be good. So we don't foresee, um, any of these pipelines going away as of now, because these are still good ways to get that quality data.
Thank you for bringing that up, Damien.
Yep, excellent, excellent point. Um, so let me answer now a question here about Qlik Answers, what it can do, and the difference between Qlik Answers and Knowledge Marts. I'll start with the difference between Qlik Answers and Knowledge Marts as it is today. So let me share my screen here. I have one slide that can very succinctly show this.
So, um, if you can see my slide: basically, the differences between Qlik Answers and Qlik Talend Cloud Knowledge Marts. As it exists today, Qlik Answers basically offers you that out-of-the-box, self-service solution. All you have to do is add your documents, and you have a full-fledged retrieval-augmented-generation-based agent that you can ask questions, and it will, based on the knowledge base that you provide, the documents that you have provided, find you the best answer, right, using RAG and generative AI models that are all within the Qlik Answers box, right? So it's kind of like: just add your data, your unstructured data, and the thing is implemented, right?
On the bottom of the slide, we have what Knowledge Marts do today. I like to call it this way: Qlik Answers is like RAG in a box, and Qlik Knowledge Marts is build-your-own RAG. The idea here is that you want more control over, you know, what embedding model am I using, what platform and vector store am I using, am I storing my vectors in Snowflake, or Databricks, or am I storing them in Pinecone, Elastic, or any of the supported vector databases.
And then, you know, we get the data to the point where it's Gen-AI ready, right? So the output of a pipeline with a Knowledge Mart is really a vector store populated with embedded data that's ready for consumption. Now it's on you to build an agent using a completion model, a chat model, right, of your choice, and then to provide the last mile of the customer experience.
So, as it stands today, Qlik Answers gives you the full end-to-end experience for a RAG-based chatbot use case, and Knowledge Marts gives you the ability to customize the components of that experience, while still giving you the automation, right? Even though Qlik Talend Cloud Knowledge Marts is sort of like half of the work, it's still a no-code way to do that half, which would otherwise take you a lot of effort to implement, so you get the benefit of that automation for the vectorization and embedding of that data, okay?
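"Build your own RAG" on top of a populated vector store boils down to: embed the question, find the nearest chunks, then hand them to a chat model of your choice. Here is a stdlib-only toy sketch of the retrieval half, with a hash-based stand-in for a real embedding model:

```python
import hashlib
import math

def toy_embed(text, dim=16):
    """Stand-in for a real embedding model: hash words into buckets.
    A Knowledge Mart pipeline would call an actual embedding model."""
    vec = [0.0] * dim
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks, k=1):
    """Return the k chunks closest to the question in embedding space;
    a completion/chat model would then answer from these."""
    q = toy_embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, toy_embed(c)),
                    reverse=True)
    return ranked[:k]

docs = ["invoices are stored in snowflake",
        "the vector store holds embedded chunks"]
print(retrieve("where are invoices stored", docs))
```

In the real setup, the chunks and their embeddings live in the vector store the pipeline populated (Snowflake, Databricks, Pinecone, Elastic, and so on), and the nearest-neighbor search runs there rather than in Python.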
So that is as it stands today. Now, what I want to say about where we're going, in answering the question here about whether we can include structured data in Qlik Answers: well, as it stands today, we cannot. However…
As of the middle of this month, when the next generation of Qlik Answers and the MCP server come out, which, by the way, are gonna be part of one bundle, right? When we get this next version of Qlik Answers, you will be able to plug in both documents and structured data from your Qlik Sense applications into the Qlik Answers engine, and have that be available to answer questions, and refer back to charts in the application and so forth, when you're using the Qlik Answers chat interface.
Okay? Now, what is Qlik Answers actually becoming? This is the third part of the question. It's actually more than that. This new version, together with the MCP server, is gonna be a full agentic experience on the platform. So not only can Qlik Answers do what it did before, with now both structured and unstructured data; it's also going to be able to trigger actions in the system for you, to build an entire Qlik application using your language-based prompt, translated into objects in Qlik Sense, right? So that's gonna be the next Qlik Answers iteration, towards the middle of the month.
Um, and then alongside that comes the MCP server, which will give you the ability
to use that same functionality within your Claude Code or, you know, OpenAI Codex, Copilot, right?
So with the MCP server, what that means is that you no longer have to log into Qlik
to create objects in Qlik. You can set up the connection to the MCP server from your platform of choice, let's say Claude Code,
and then from that platform, trigger actions within Qlik to generate, uh, the analysis that you want to perform on
Anything you can do within the Qlik Analytics platform.
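To make the mechanics described above concrete: MCP clients such as Claude Code talk to an MCP server using JSON-RPC 2.0, invoking server-exposed tools via the `tools/call` method. A minimal sketch of what such a request looks like on the wire; note that the tool name `semantic_search` and its arguments are purely hypothetical, since the actual tools exposed by Qlik's MCP server are defined by Qlik:

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request as used by the
    Model Context Protocol (MCP). The tool name and arguments are
    hypothetical placeholders, not Qlik's actual tool catalog."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical example: ask the server to run a semantic search.
payload = make_tool_call(1, "semantic_search", {"query": "sales by region"})
print(payload)
```

In a real client, this payload would be sent over the MCP transport (stdio or HTTP) and the server would reply with the tool's result, which is how a platform like Claude Code can trigger actions inside Qlik without the user logging in.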
I know it was a long answer, there were several elements in there, so I was trying to cover them all, but hopefully… and anybody that wants to chime in.
That's very cool.
Please, go ahead.
That was great, and uh… basically, it's a very exciting release that's coming up with the MCP server, and what that opens up for us.
Uh, next question, try to get through them all. Besides using AI in Talend for the actual
doing, what plans exist for agentic coding in Talend? I just want to bring that up since
It's all on the same topic where you're just saying.
Manuel.
Yeah, so this is just more of the same, um: the release of the MCP server, once enabled within the tenant, will allow us to use tools
within these third-party platforms to do things such as semantic search and building different, uh, applications, as well as, when we start to release them on the DI side, helping build pipelines
and different data products as well. So this is our approach for more of an agentic framework with MCP, and we're starting that journey this month, uh, once it's released.
Fantastic, thank you. Next question, and it's a bit of a long one, they're using Qlik Data Gateway Direct Access to pull on-prem SQL Server data into Qlik Cloud.
The installation and SQL connectivity are working as expected.
And we can successfully refresh data during reloads, that's great. However… They just need a clarification on one specific requirement, and that is to enable real-time user-driven SQL.
queries from the Qlik front end. For example, a user inputs a parameter in the Qlik application, such as a company code or
supplier, like Walmart. And then they want to query SQL Server using those user-entered variables and return results instantly without performing a
full reload. Alright, so that was a very specific requirement.
Anyone got any thoughts on that?
Uh, so the first thought I have is that this is a really great question for the analytics experts. Unfortunately, the people we have on this panel are more Data Integration experts,
whereas this is about Qlik Sense and Qlik analytics. However.
Yeah, especially since it's a front-end solution, yeah.
Based on my limited knowledge of that side of the platform, right, and anybody else can chime in, Damien, you probably could too as well.
Mhm.
I believe there's two ways to address this, uh, within Qlik Sense, right? One is to use.
Yeah.
direct SQL, and there is a capability within… at least there was, if it's still there, to use a direct SQL query for, you know, a Qlik Sense application.
That would address, I believe, what this user is asking for. I'm not sure if it's compatible with the Data Gateway Direct Access, though I don't know why it wouldn't be. That used to be a function, as I remember, right? Um, Damien, I'm not crazy, right?
Yeah. No, no.
The second option, which is a little bit more involved, and maybe a little bit trickier: there is a way to set up
a data integration pipeline that gives you an actively updated QVD, right? And within Qlik Talend Cloud, you can take
data from SQL Server and, in near real time, and I say near meaning, like, minutes or so,
have the QVD updated, and then when the application goes to that QVD, the data would be updated as of that frequency.
There's also the capability, and even more… so I mentioned two things. The third thing that I understand from
my analytics expert colleagues, uh, is that there is a way to create a QVD that you can update in real time
without having to reload the entire data set.
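The general pattern behind updating a dataset without a full reload is a watermark-based incremental load: keep the latest timestamp you already hold, and pull only newer rows. A minimal, language-agnostic sketch of that pattern in Python; the field names and rows are invented for illustration, and this is not Qlik's implementation:

```python
from datetime import datetime

def incremental_merge(existing_rows, source_rows, ts_field="updated_at"):
    """Append only source rows newer than the latest timestamp we
    already hold (the 'watermark'), instead of reloading everything."""
    if existing_rows:
        watermark = max(r[ts_field] for r in existing_rows)
        new_rows = [r for r in source_rows if r[ts_field] > watermark]
    else:
        new_rows = list(source_rows)
    return existing_rows + new_rows

existing = [{"id": 1, "updated_at": datetime(2026, 1, 1)}]
source = [
    {"id": 1, "updated_at": datetime(2026, 1, 1)},  # already loaded
    {"id": 2, "updated_at": datetime(2026, 2, 1)},  # new since watermark
]
merged = incremental_merge(existing, source)
print(len(merged))  # 2
```

Running this on a short schedule, each pass touches only the rows changed since the last pass, which is what keeps the refresh to minutes rather than a full reload.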
For the user here that has this question, I would definitely recommend that you set up some time with, uh, your account
executive, so we can guide you, because there are, uh, services folks and experts here that can definitely
help figure out what's the best solution for the use case.
Yeah.
But I'll leave you with those things; I'm, uh… pretty sure one of those recommendations is gonna hit the nail on the head here.
And I totally agree, I come from the analytics side, and I would say reach out to your account representative to get some services help, because.
There's definitely ways to do it, and we just need to find the best way for you that would be best fit.
Uh, next question. Does Talend provide a native utility for real-time JVM monitoring?
We're currently lacking granular visibility into memory consumption per job,
which makes it difficult to identify high-resource consumers. Okay.
Yeah.
I think it's fair to say we may have reached the extent of our knowledge on this. What we're talking about here is obviously an on-prem installation, and so, um, I would really advocate speaking with the support team specifically on this.
Um, Troy, if you're able to get the details of this individual, then we'll be happy to follow up and see what we can do here. I'm sure this is not a unique requirement. We see this in cloud-based solutions all the time, so, um.
Let's maybe, um, take this one offline with a quick follow-up to see what we can do to help.
Great. Great plan. Next question.
Uh, can you talk about AI components in Talend Studio?
Yes, I can. Um, we actually have quite a bevy of AI componentry, which many people may not have seen. Um, we started rolling out these features, I think it was at the end of…
into… Um, rather than telling, let me show, uh, because that's what I'm asked to do.
Here we go. Um, so, um, here's some of the componentry that we have today in Studio. So, um.
As you can see, we have support for LLMs such as Claude.
Ollama and OpenAI. Um, in each instance, you can select your token, which is provided by the platform.
You then select which model you want to use, and then… and also your prompt.
What I love about these components is that you can drop them into workflows and pipelines that you have today. And so, I'm going to show one example of something you can do.
But this is one area that gets quite exciting. You do need to be a little bit careful, though, in terms of how you use these. So, as an example.
Asking general questions about data is much better than, say, iterating through every single data point,
because we know that that increases costs. So, um, strategically use these in the best way possible.
In addition to that, we have support for our RAG and Vector sort of databases.
Um, we support both Milvus and Pinecone today, with, um, dedicated components for chunking, which incorporates the ability to determine the chunk size and the overlap between chunks, so you can get better results.
And also, um, the embeddings as well, how you basically configure the embedding model that you're using. So.
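The chunk size and overlap knobs mentioned above can be sketched in a few lines. This is a generic illustration of fixed-size chunking with overlap, not the Talend component's internals; the sizes are deliberately tiny so the behavior is visible:

```python
def chunk_text(text: str, chunk_size: int = 20, overlap: int = 5):
    """Split text into fixed-size chunks where each chunk shares
    `overlap` characters with the previous one -- the two knobs a
    chunking step typically exposes for RAG ingestion."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_text("The quick brown fox jumps over the lazy dog", 20, 5)
for c in chunks:
    print(repr(c))
```

The overlap means a sentence cut at a chunk boundary is still partially present in the next chunk, which is why tuning these two values tends to improve retrieval quality.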
That's pretty good. Um, and as an example, um, this is one area where I get excited. You can see here that I've got OpenAI
helping us classify customer feedback. So, um, this could be unstructured data coming into a Studio job, and then we can classify it.
And it's going to basically, um, look at what the customer's saying in the notes, and then classify it according to whether it's a delivery problem, a technical problem, they're dissatisfied with the product, etc. And then this allows us to channel the information to the correct workflow.
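The classify-then-route pattern described here is easy to see in a small sketch. In the real job the classification is done by the LLM component; below, a keyword stub stands in for the model so the routing logic is runnable, and the category names and workflows are invented for illustration:

```python
def classify(note: str) -> str:
    """Stand-in for the LLM classifier: map a feedback note to one
    hypothetical category."""
    text = note.lower()
    if "late" in text or "delivery" in text:
        return "delivery_problem"
    if "crash" in text or "error" in text:
        return "technical_problem"
    return "dissatisfied"

# Each category channels the note into a different (hypothetical) workflow.
WORKFLOWS = {
    "delivery_problem": lambda note: f"logistics ticket: {note}",
    "technical_problem": lambda note: f"support ticket: {note}",
    "dissatisfied": lambda note: f"customer-care follow-up: {note}",
}

def route(note: str) -> str:
    """Send each piece of feedback to the workflow matching its
    predicted category."""
    return WORKFLOWS[classify(note)](note)

print(route("My delivery is three days late"))
```

Swapping the stub for a real model call is the only change needed; the routing stays the same, which is what makes dropping an LLM component into an existing pipeline attractive.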
Um, this is fantastic, because basically we move beyond just the data driving this. We're looking at the insight, specifically when you're talking about a model that you've trained.
with your enterprise data. One area that gets exciting to me is that if you pass it to data stewardship, as an example,
um, what you're able to do is also add some context on
why it's happening, to help inform the downstream user and make that decision quicker. So, um… these components can be used for almost anything: drop them into pipelines; they can even be used for translation.
There's tremendous scope here, just from these components, so I encourage people.
to incorporate those into their workflows today. In addition to that, um, routes, uh, Camel routes, um, we have some
really cool technology which allows us to implement LangChain into our routes.
What this does, we can create a series of different tasks.
And to create, like, an agentic experience for the users. What this means is that the user query comes in, plain textual, uh, sort of, uh, information comes in, and then we can determine which workflow can be used for resolution.
So we move beyond giving generalist answers. In fact, we can capture a generalist answer and say this wasn't created through a specific workflow.
But being able to identify what's happening, it might say, um, people say, my delivery's late, why is that? It might check the weather.
As an example, as one of these tasks, you can create these very modular experiences for your users.
Which really helps reduce things like hallucinations. And remember, because these are Camel routes, these are ESB-grade sorts of implementations.
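The "modular tasks" idea behind these routes can be sketched as a dispatcher that resolves a query by chaining small, focused tasks rather than returning one generalist answer. Everything here (the task names, the weather and carrier lookups, the context fields) is invented purely to illustrate the shape of the pattern:

```python
def check_weather(city: str) -> str:
    # Stand-in for a real weather lookup task in the route.
    return f"storms reported near {city}"

def check_carrier(order_id: str) -> str:
    # Stand-in for a real carrier-status lookup task.
    return f"carrier delay on order {order_id}"

def resolve(query: str, context: dict) -> str:
    """Pick and chain the tasks that resolve this query; fall back to
    a flagged generalist answer when no workflow matches."""
    if "delivery" in query.lower():
        return "; ".join([
            check_weather(context["city"]),
            check_carrier(context["order_id"]),
        ])
    return "no specific workflow matched, falling back to generalist answer"

print(resolve("Why is my delivery late?", {"city": "Houston", "order_id": "A42"}))
```

Because every answer is assembled from named, auditable tasks, the route can tell the difference between a grounded resolution and a fallback, which is the mechanism that keeps hallucinations in check.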
These can happen in real time as well, really allowing you to accelerate through. So, um, if anybody has any questions on the Talend side, I know, um, that the team really want to do another session with you, Troy, on these agentic routes, actually showcasing how we can do it. So, um, hopefully that will be coming pretty soon.
Great.
Um, because not enough people know about it, and it's extremely potent technology, which really allows technologists to help accelerate these AI use cases.
Wow, it's always fun getting to see the product and the capabilities. That looked really cool, that flow.
Thank you, uh… a few more questions have come in that.
Let's see, next question… Question regarding Lake House on Iceberg.
how to manage provisioning the instances, since the instances are sometimes large, medium, and small, and how can those
be managed without referencing the workload?
So… so, you know, when you create a lakehouse cluster as part of the Open Lakehouse solution, you are asked to provide different information, like which instance family type you want to use.
Type R, for example, which instance size you want to use.
Mm-hmm.
xlarge. This is what we recommend by default, right? And then you are asked to provide the number of
minimum spot instances and maximum spot instances that the cluster can scale out
on, right? So if you configure that properly, you shouldn't
see, you know, different instance sizes or families being used, right? You specify one, which will be used for all
the spot instances of your cluster.
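In other words, the configuration fixes one instance type, and the workload only moves the node count within the configured bounds. A toy sketch of that clamping behavior; the field names and values are illustrative, not Qlik's actual API:

```python
def nodes_to_provision(desired: int, min_nodes: int, max_nodes: int) -> int:
    """Clamp whatever node count the workload would ask for to the
    configured [min, max] spot-instance range."""
    if min_nodes > max_nodes:
        raise ValueError("min must not exceed max")
    return max(min_nodes, min(desired, max_nodes))

# Hypothetical cluster config: one family/size for every node.
cluster = {"instance_family": "r", "instance_size": "xlarge",
           "min_spot": 1, "max_spot": 8}
print(nodes_to_provision(12, cluster["min_spot"], cluster["max_spot"]))  # 8
```

So even a spike in workload never provisions beyond `max_spot`, and every node it does provision uses the single configured family and size.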
Great. And, uh… a follow-up question to that: when using that demo environment,
they're not sure how to stop the AWS consumption without deleting it. Are there plans to add a way, or is there a way, to pause that?
So there are ways to stop your cluster, but there's still what we call a network integration, which is basically the link between
Qlik Cloud and your own, you know, the customer infrastructure, which uses one instance, right, to ensure the communication.
We, uh… we don't have, right now, uh, an option to stop or pause that network integration, but we are working on it. That will be coming soon as part of the project.
Great, that's good to know about that. Next question.
Will transformation and Datamart functionalities be available with Qlik Open Lakehouse without mirroring?
Short answer, yes. So we are working on that as well, right? We will progressively add.
transformation capabilities in the Open Lakehouse. And that will be progressive, you know: in the first half of the year, maybe through the summer, you will see incremental transformation capabilities being made available in the Open Lakehouse to support
different sorts of transformation, like, you know, data cleansing, standardization, but also hierarchical data sets, unnesting,
array flattening; uh, we will be supporting aggregations, joins at some point.
So, yeah, you will see, uh, incremental releases of those capabilities, which will reduce the need for you to use, uh, like, a data warehouse to create, for example, in your Medallion architecture, your…
You'll be able to support, like, your silver layer, and maybe at some point.
Not maybe, but at some point, uh, the gold layer as well, right? And help you reduce your compute costs.
Fantastic. It's great hearing about all the, uh. the updates and improvements.
The next question… Is Python-based Talend still in the pipeline?
Um, yeah, I can give an answer on that. I'm not sure it was a commitment that we made in the past, and, um, please correct me if I'm wrong here.
I think what we're seeing is that, um, Studio really has a very robust feature set, very mature feature set.
And for many people who don't know, it uses Java in the backend to do a lot of the processing.
You know, obviously, as an organization, we're talking all the time about how we can evolve.
Um, how we can do new things. Um, but I just want to set expectations that potentially, you know, a Python version of Studio is not something we… I don't think we've discussed widely, um, and certainly not a commitment that we've made to people.
That said, um, we're very well aware of people's use cases around Python, and it's our… Our aim to meet people where they work. Um, a lot of what we're seeing in terms of MCP.
Um, type features, um, being able to bring that functionality to people within AI, is probably a better route for a lot of enterprise organizations than us
creating software. Um, it would probably be wiser to follow that path, rather than trying to re-architect a significantly sizable
application, um, re-engineering the backend to do something with Python. So, um, we do have some plans, but unfortunately, we don't have anything that we can announce today.
That's fair. Great, thank you. And these are all great questions, thank you so much for your… Your participation today. Last question that's in the Q&A tool.
Are there any built-in components or SDK integrations in Talend Studio to interact with.
Gemini AI models?
It's very specific.
Okay. Um, I don't have an answer. I was actually trying to contact one of my colleagues to see if it was on the roadmap.
What we can do, um, is we can probably get back to that particular attendee, um, with our future plans. I know that we are looking to do a lot more.
That said, there is the feature to be able to call RESTful services as well, which does allow a limited,
um, uh, sort of connection to that kind of platform. And that's the way we did the initial implementation, when we did the first iteration
of our AI componentry for prompts, followed by something which was componentized more, um, solidly. So, um, I will get an answer for that, but we're just going to have to ask some people on the PM side.
Sure. And I… when it comes to very specific requests like this, I always like to bring up.
Qlik Ideation, you can find it on Qlik Community.
Under Support and Ideation, and this is a forum
for customers and partners like yourselves to be able to
see what suggestions for future improvements or additions to the product other customers are requesting,
be able to look at those, vote on those.
and to suggest your own, so… I definitely recommend you take a look at this; we love getting feedback directly from customers about their needs, and
our product management team definitely considers this when weighing
which improvements to make next, so… there's a link to that. Well, those are all the questions that have come in so far. Thank you so much to everybody for their participation and all these great questions.
Thank you for the expert panel today. You guys have been great.
There is one thing before we go. Qlik Connect.
Hey, before you go: Qlik Connect, please, Troy, let's not miss that.
Yeah.
how could I forget?
Damien, myself… Simon, are you going? Oh no, Antoine, I saw something from you.
Okay, so all of us will be at Qlik Connect.
We have sessions, if you look in the session catalog, uh, we, you know, Damien and I have sessions, we also have a.
Fantastic workshop to get you, if you want to know more about Qlik Talend Cloud.
How do I build pipelines and integrate data all the way to analytics?
Please join our introductory workshop. Uh, so we'll be there at our sessions, and we'd love to chat with you on the floor as well, if you catch us.
Very cool, thank you for not letting me forget that. Qlik Connect, definitely go; you get to meet these people in person.
And definitely sign up for those workshops; you get to actually try out all these features and ask your unique questions in person. Great!
Well, everybody, hope you have a great rest of your day, and hope to see you in the next Q&A session.
Bye-bye
