Troy_Raney
Digital Support

Q&A with Qlik: Qlik Replicate Best Practices 2024

Last Update:

Feb 28, 2024 10:31:18 AM

Updated By:

Troy_Raney

Created date:

Feb 28, 2024 10:31:18 AM


Environment

  • Qlik Replicate

Transcript


Welcome to Q&A with Qlik. Today's topic is Qlik Replicate Best Practices. My name is Troy Raney; I help put together videos and webinars like this one. Let me introduce the expert panel we have for you today. Let's go around the room. Bill, why don't you introduce yourself?
Hi, good afternoon, good morning folks. This is Bill Steinagle. I work in the QDI product suite, and I've been with Qlik for 5 years.
Great, Steve?
Hi, this is Steve Nguyen. Pretty much the same as Bill; I'm a Principal Support Engineer for QDI products, and I've been with Qlik for over 15 years.
Awesome, Shashi?
Hi everyone. I'm Shashi; I work as a Senior Support Engineer, I'm certified in both the Qlik Replicate and Qlik Compose products, and I support them as an SME.
Great. Thank you and Swathi?
Hi everyone, this is Swathi. I'm a Senior Support Engineer; I work on Qlik Replicate and QCDI, and I have been with Qlik for 3 years.
Fantastic. Thank you. All right, so we've already got a few questions coming in, so I'll just take them from the top. What is the best way to run a multi-node system, or are there any considerations for such?
I can take that one, Troy. I'm not sure whether the question is about an endpoint or a Replicate configuration, but for multi-node we usually do a cluster failover, and that way you can have a multi-node setup. So, for whoever asked that question: if it's a cluster environment, yes, we can handle that with a multi-node setup.
Great, thanks, and next question - oh, did you want to say some more on that?
No, Troy, you can continue, sorry.
Okay, should the data folder be separate from the log stream folder? Swathi, it looked like you were ready to take that one?
Yeah, so it is not mandatory, but if you use log stream heavily, then it is better to keep it on a different drive. That way, your I/O operations are isolated to that particular task.
Okay, great the next question: how do you set up notifications for task failures?
I can handle that one. Can I share my screen for that? How is that -
Do you want to share your screen for that?
Yeah.
It'd be great to walk us through what those settings look like.
Screen number four, okay, that one, sure. Tell me when you see my screen?
Yep we see it.
Okay, so for task notifications: in the Replicate UI, from the upper left corner you go down to Server, and right here you'll notice Notifications. But before you can set any notifications, make sure your mail settings are configured and test your mail connectivity. You can also add a list of recipients or a distribution list. After that, from the Notifications drop-down you can add a new notification for task or server events. In this question we're talking about task events, so here we can cover task starting, task stopping, any error handling, task errors, task warnings, or performance thresholds such as latency and memory. We also already have a great article from Kelly on how to create a notification for an individual task. I'll put that into the chat window as well, so you have a reference to it. I just need to find my chat window, where is that... I think this is it.
That's all right; we'll get that link to everybody, don't worry.
Okay, no problem. So that sums it up in terms of notifications for tasks.
Awesome, okay, perfect. Thank you. All right, there it is. Which files should be archived or backed up prior to upgrading?
Okay, I'll take this one. For the upgrade, the best practice is: first, ensure that all the tasks are stopped, and then stop the Replicate services. Once everything is stopped, take a backup of the data folder. The data folder is the brain behind Replicate; it has everything, all the task definitions and so on, and it is not backward compatible. That's the main reason we take that backup: if something goes wrong in the upgrade and we want to revert to the previous version, we need the data folder. That's the only thing we back up.
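For illustration, a minimal Python sketch of that backup step, assuming a default Windows install path for the Replicate data folder and an arbitrary backup drive (adjust both for your environment, and stop all tasks and the Replicate services first):

```python
import shutil
from datetime import datetime
from pathlib import Path

# Assumed default install location on Windows; adjust for your environment
# (on Linux the data folder typically lives under the Replicate install path).
DATA_DIR = Path(r"C:\Program Files\Attunity\Replicate\data")
BACKUP_DIR = Path(r"D:\replicate_backups")

def backup_data_folder() -> Path:
    """Zip the Replicate data folder before an upgrade.

    Run this only after all tasks and the Replicate services are stopped,
    so the repository files are not being written to mid-copy.
    """
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    archive = shutil.make_archive(str(BACKUP_DIR / f"replicate_data_{stamp}"), "zip", DATA_DIR)
    return Path(archive)

if __name__ == "__main__":
    print(f"Backup written to {backup_data_folder()}")
```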
Great, just want to highlight we had a recent session from Alan all about Upgrading Qlik Replicate Best Practices, and he goes through and demonstrates that so I'll put the link to that as well. Thank you, okay next question: any special considerations for setting up Replicate with SAP as a source?
Yeah, excuse me, I'll take this one. It's dependent on which SAP source, because with the SAP endpoints there's SAP Application, SAP Application (DB), SAP Extractor, and then SAP ODP, so it's really dependent on which source endpoint you use. But as far as setting up the endpoint, as long as the prerequisites are installed and configured on that Replicate server, whether it's on Windows or Linux, that's the first thing to do, because you won't see certain endpoints in the drop-down if some of those configurations are not done first. There are also some JAR files for the ODP endpoint, and for the SAP Extractor you have to do a transport on the SAP system. So, like I was saying, in general it varies depending on which endpoint you're going to use and set up as your source.
All right, but there are some files depending on the specifics.
Yeah yes correct yep yep.
All right, great. Next question: what type of joins can you create in Replicate, for example: left, inner, full, etc., and how can you join three tables? Is there a maximum number of supported table joins? A maximum number allowed?
Yeah, I can take this.
Great.
We cannot use joins in Replicate; instead, we can create a view with joins on the source and replicate the view, or we can use QCDI to create transformations on the landed tables. Anyone else want to add to this?
Yeah, so in Replicate we cannot join multiple tables. To do that, we can use Compose or we can go with QCDI, yeah.
So that's possible in QCDI? Is that what you're saying?
Yeah.
Oh, that's good to know. Great, all right, next question: in any particular task, is there a way to use API or command to reload a single table?
I'll take this. Currently, with either the command line or the API, there's no way to reload a single table. I'm pretty sure there are multiple Ideation entries on this already, and product management is aware of it, but currently you can only reload the task itself, not a single table, from the API or command line.
Okay, so –
Just to add to that, Steve: if there's a need to do that, they can set up a single-table task and then use the repctl command, I guess. That's the only option until that Ideation is implemented.
Okay, all right, moving on. Next question: when you're going to schedule a Qlik Compose or Qlik - oh, we got a suggestion for the next couple of topics: Qlik Compose and Qlik Enterprise Manager. Well, you know what, we've got experts on those here, so if you have any specific questions, go ahead and submit them. Are there any Qlik extensions available that allow users to draw and save annotations on a Qlik map layer? Like playing SimCity, for instance, drawing and saving boundaries and airfields. I think we're -
I don't think that's ours; I think that's more on the analytics side, yeah.
Yeah, there could be some external extensions to allow that, but that's not our expertise at the moment, so we'll just move on: does Qlik provide any data validation for the data being loaded via Qlik Replicate?
Yeah, I can take it. There is no validation tool available with Qlik Replicate. If they want that, they have to write a script or use a third-party validation tool.
Okay, that's fair. Next question: is there a suggested amount of data that a Qlik Replicate instance can handle or perhaps a Max?
I think you skipped a question, Troy, but we can handle that later: the recommended HA/DR strategy for Qlik Replicate.
Okay: what is the recommended HA/DR strategy for Qlik Replicate when you have a Qlik server in a separate data center?
So, that's going to be dependent on your source, and on whether you have Replicate in a cluster failover environment, but it's really dependent on the source, because for some source endpoints you have to configure the source. As an example: you have a DB2 environment set up on two nodes, one primary, one standby. With Replicate you'd have to make sure you define your endpoint against the primary; Replicate will connect to the primary and do its processing, and if there's a failover to a disaster recovery site or whatever, you would have to change the endpoint setting. Then with Oracle, you can set the Oracle endpoint as standby-only so it points to the standby node in your disaster recovery site; it wouldn't connect to the primary, it would just connect to your standby. So it would have to be handled at the source if it's a source HA/DR setup, and/or with a cluster if it's a Replicate setup.
Great. Okay then the next question is one I read earlier: is there a suggested amount of data that a Qlik Replicate instance can handle or max out at?
Yeah, go ahead. We can always fine-tune our tasks and the Replicate server to achieve maximum performance with huge loads. We don't have any documented limits that we've encountered so far.
Great, so no documented limits.
It depends on the server RAM and CPU, yeah.
Of course. Hardware always comes into play.
Yeah.
Great, next question: are you guys aware of a guide to rep CTL commands out there?
I can handle that. Currently, we use the repctl command mostly for export and import. There is still some legacy documentation on things like resuming or stopping a task using the repctl command line; however, the best recommendation is to use a QEM API call, which communicates through QEM to the Replicate server, to control your tasks in terms of stopping and starting or getting more information. I'll also put in the chat window the current repctl documentation on how to stop and resume. Some of the features you might try to run are not available, because we aren't adding much to the repctl command line on Replicate itself, and we lean toward more development on the Enterprise Manager API.
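For illustration only, here is a minimal Python sketch of controlling a task through the Enterprise Manager REST API, as recommended above. The base URL, endpoint paths, HTTP verbs, query parameters, and header names are assumptions based on the general shape of the QEM API; verify each call against the Enterprise Manager API guide for your version before using it.

```python
import requests

# Illustrative values only; none of these come from the session.
QEM_BASE = "https://qem-server/attunityenterprisemanager/api/v1"  # assumed base path
SERVER = "MyReplicateServer"
TASK = "MyTask"

session = requests.Session()
# session.verify = False  # only if your QEM uses a self-signed certificate

# Log in with a QEM user; the API session ID is returned in a response header
# (header name assumed from the Enterprise Manager API guide).
resp = session.get(f"{QEM_BASE}/login", auth=("DOMAIN\\qem_user", "password"))
resp.raise_for_status()
session.headers["EnterpriseManager.APISessionID"] = resp.headers["EnterpriseManager.APISessionID"]

# Stop the task, then resume processing (HTTP verbs and query parameters assumed).
session.post(f"{QEM_BASE}/servers/{SERVER}/tasks/{TASK}?action=stop").raise_for_status()
session.post(f"{QEM_BASE}/servers/{SERVER}/tasks/{TASK}?action=run&option=RESUME_PROCESSING").raise_for_status()
```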
Great hope that helps. I'll be sure to include those links with the recording that I'll post of this soon. Okay, next question: is there a tool to check data quality? For example: sources matching the Target?
I think Swathi just answered that in an earlier question already, am I correct?
Validation yeah.
Yeah, nothing with Qlik, but as far as I know a tool from Quest can do this job at less cost. With Qlik, there is no such data quality check we can do.
Okay next question: if using SAP ODP endpoint, and the SAP Source goes down for patching, can we make Replicate retry automatically over a 30 to 45 minute period instead of requiring a manual restart of the CDC task?
Yes, well, with the ODP endpoint specifically, you would have to do it manually, because there are some restrictions and limitations for when a task stops or the server stops with these types of source endpoints. It's due to the mechanism in Replicate: the connections and calls to SAP are done through an API call, whereas with some of the other native ODBC endpoints there's a lot more control over the stream position, and you can set the retry/restart parameters within the task to retry after 30 minutes. So with the ODP endpoint there are certain limitations and restrictions, and it would be a manual intervention: you would just stop your Replicate task, do your maintenance, and then resume the task.
Okay, thanks. Next question: there are a couple that have come in through the chat, so I'm going to try and address those first. Swathi, what was the name of that third-party tool you mentioned regarding data validation and quality?
Quest. Q-U-E-S-T.
Great, and next question, about change processing tuning: I'd like to change the batch tuning settings on the weekend to have the target updated less frequently, and during the week have it update more frequently. I have several tasks I'd like to apply this to. Is there a way to do this programmatically or via command line? Is there something you can point me to?
Where are you reading this question?
It came in on the chat.
Let me just reread it.
Sure. I also pasted it in that Word doc we started with.
So there's no direct way to do that programmatically. My thinking is: first, of course, to change the batch tuning you have to stop the task, so you can use the API to stop the task, then go in and change the batch tuning parameters, then resume the task. Another way you can do this, very carefully, is to use the QEM API to stop the task, export the task JSON file, run your own script to edit the batch tuning values within the task JSON, then reimport the task and resume it.
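A rough sketch of that export-edit-import flow, reusing the authenticated QEM session from the earlier example. The endpoint paths, HTTP verbs, `action` query parameters, and especially the batch-tuning key names inside the task JSON are assumptions for illustration; confirm them against the Enterprise Manager API guide and a manually exported copy of your own task first.

```python
import json

def apply_batch_tuning(task_json: dict, new_settings: dict) -> None:
    # Placeholder: walk the exported structure and update the batch-tuning
    # keys you identified from a manually exported copy of your own task
    # (the key names differ by version, so none are hard-coded here).
    for key, value in new_settings.items():
        task_json[key] = value

def retune_task(session, qem_base, server, task, new_settings: dict) -> None:
    """Stop a task, patch its batch-tuning settings in the exported JSON,
    reimport it, and resume processing. Endpoint paths, HTTP verbs, and
    query parameters are assumptions; verify against the QEM API guide."""
    base = f"{qem_base}/servers/{server}/tasks/{task}"

    session.post(f"{base}?action=stop").raise_for_status()

    # Export the task definition and keep an untouched backup copy first.
    resp = session.get(f"{base}?action=export")
    resp.raise_for_status()
    task_json = resp.json()
    with open(f"{task}_backup.json", "w") as f:
        json.dump(task_json, f, indent=2)

    apply_batch_tuning(task_json, new_settings)

    # Reimport the edited definition, then resume the task.
    session.post(f"{base}?action=import", json=task_json).raise_for_status()
    session.post(f"{base}?action=run&option=RESUME_PROCESSING").raise_for_status()
```

As noted in the session, a mistake in the edited JSON can corrupt the task, which is why the untouched backup copy is written before any change.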
Okay all right; so basically: no.
Well, basically you can do it, but you have to be very careful when you edit the JSON file. If something goes wrong, you could corrupt your task altogether.
Right, right, okay. We can move on to the next question: what is the impact of increasing the log retention period, and what's the minimum value by default?
So, increasing the log retention period will consume more storage. The default is 45 days. We should also know whether they're asking about log stream or about the logs; if it's for the logs, as I answered, the default is 45 days.
By default, is that enabled at all? The log cleanup, or do you have to enable that?
No, you actually have to enable that under logging clean-up; it's off by default. Now, if they're talking about log retention for the logs, like Swathi was saying, it's 45 days; in log stream it is 48 hours.
Yeah it is 45 days.
Okay.
Okay. There was one more that came in on the chat. Let's try and keep the questions to the Q&A tool so I don't accidentally miss them, but I caught this one: let's say we have a source where we're replicating 500+ tables, and some tables are big and busy and some tables are small. Do you have any best practices on how you think about setting up tasks for this? Do you put all 500+ tables in one task, or do you try and break up the source into a couple of different tasks?
So, I can take that. It depends on the tables and, like you said, the activity: highly transactional tables, and another consideration would be LOB data. If you have some LOB columns, there's a lot of LOB processing, and that can add overhead to your task. And if you have 500 tables in a single task, it makes things harder if you have to troubleshoot or if there's an issue with one of the highly transactional tables. So it's really dependent on the environment and the set of tables you're going to have within your task. Customers usually split by application: say one application has 200 tables, another application has 100 tables, and another has 200 tables; you would have three separate tasks, one task for each application, if that helps.
Yeah, thank you. Okay, next question: do the disk space utilization notifications in the server events track the log stream staging area space when using a separate mount point from the Qlik Replicate binaries?
Good question. No, that would be a feature request.
Okay, and by feature request, we're talking about Ideation.
Yeah.
The Qlik Ideation page - I had that pre-loaded up here because I knew it would come up - is on Qlik Community. It's a space where you can submit feature requests and recommendations to product management, and you can upvote ones that have already been submitted. This is something that our product management team definitely considers when it comes time to make improvements to the product, so I recommend you check it out. Our next question: the new pivot table doesn't seem to display in PDF. Okay, questions regarding the pivot table - I don't know if you guys are aware, but that's a new feature on Qlik Cloud. The new pivot table doesn't seem to display in the PDF attached to a subscription to a sheet. Can I remove the link to the PDF entirely? That's a good question. I'm familiar with how you can create subscriptions to objects and tables, but unfortunately I'm not familiar with that feature that well. I recommend going to the Community forums under Analytics and App Development; a lot of the community out there, including product management and Qlik Support experts, keep an eye on those, so you can submit that question there and hopefully get a quick reply. All right, next question: is there a preferred method for creating tasks from multiple tables? Especially for CDC, is it better to have a table per task, or would it be better to have all the tables within one task?
Yeah, I can take this question. It's better to have all similar tables in one task: for example, all non-LOB tables in one task, all non-PK tables in one task, and PK tables in one task. That way the PK tables won't be affected by performance or latency issues from the others, so it's always better to group similar tables in one task.
Thanks, that's great. And the next question was: can we elaborate on what QCDI is? That's Qlik Cloud Data Integration, so it's the cloud product for data integration.
Do you have a link for that?
Qlik.com. That's where you log in.
Okay, yeah.
All right, next question: what is the best starting point and learning path for someone new to Qlik Replicate? Which documents should be read first, and what should the learning road map be? That is a great question. There's a page called Learning.Qlik.com, which is where we have all of our learning paths. In here there are learning plans for data architects, webinars, and you can search for things. I recommend - yeah, go ahead.
I didn't mean to cut you off, Troy; just a side note: for the best learning and understanding, I would look at the Community and at the user guide for the product. There's a lot of good detailed information there on sources, targets, how CDC works, how full load works. It would be a good starting point to look at the user guide for that specific version of Replicate as well.
Yeah, and by user guide we're talking about Help.Qlik.com. You can go over here, select Replicate, even select your specific version on the left, and there's a lot of information here: introduction, installing, security considerations, and you can also search it. This is a great resource. All right, next question: is it possible to create reports based on Qlik tasks and tables?
Through Enterprise Manager you can get some reporting for different tasks and information; that would mostly be from Enterprise -
QEM Analytics. Yeah, using QEM Analytics we can create reports on a task basis, but not at the table level.
Right.
Great, that is nice to know; I wasn't aware of that. All right, next question: are there any webinars on Qlik Replicate expression building with SQLite?
Oh, I'm not aware of any resources for expression building, do you guys know of anything that's out there?
No, and that's a component within Replicate, with the transformations: the Expression Builder. Within that context, when you're in Replicate you use the Expression Builder, which uses SQLite syntax, so whatever syntax SQLite supports is what Replicate supports.
That's good to know. Okay, moving on: is there a way to do CDC for an entire Source schema? For example: bring in all new tables that are generated automatically?
Yeah.
Yeah, go ahead, Shashi.
I believe you can use the include and exclude patterns in Replicate itself. Shashi, if you can share your screen - Troy, I have to drop for my other call, I apologize, but I think Shashi can demonstrate how to use a pattern within the selection of schemas or tables.
Yeah, I can stop sharing if you want to share your screen.
Let me bring up my Replicate. I'll share it in a couple of minutes; right now I'm having some issues.
Yeah, we'll get back to this question, sure. A question came in regarding AWS RDS Oracle (in the chat): we use an AWS RDS Oracle instance as our source, and we're getting latency of about 5 minutes due to RDS caching issues. I was told by Support there is no workaround to this latency issue. Since this is a product supported by Qlik, when are you going to fix the caching issue with the Oracle RDS instance? Okay, so it's very specific.
Yeah, but back to that point about caching: that's at the database level more so than the Replicate level, so it would be hard for Replicate to handle that type of caching if it's at the database level. That's probably something within Oracle and the RDS type of database that caches with Oracle. If there is a specific issue, please share it with us, and we could always offer our Professional Services team to be engaged as well; but yeah, if you need more on that, I would open a case so we can better understand the issue.
Sure. Sounds like they may already have been in contact.
I am ready for that other question so maybe…
Okay, yeah go ahead and share your screen whenever you're ready.
Yeah, yeah, let me know if you can see my screen.
Yep, yes.
Okay.
Okay, so tell us what you're doing here?
Yeah, I don't know why I cannot bring up that other one, okay. So when we select the tables, right - let me know, you're seeing my screen, right? You're seeing my...
The Replicate screen, selecting tables, DBO, yeah.
Yeah, so you would select the schema you want - I have multiple schemas, right - so you choose whichever schema you want the data from, use that % as a wildcard, and include it.
Okay.
So anything in the DBO schema will be added automatically.
All right, so that'll pull in all the tables from that schema?
Tables within that schema, yeah, DBO. If it is a Full Load-only task, then it will get all the views as well.
Great.
If it is CDC, then only the tables.
So the “%” is the Wild Card?
Right.
That's good to know, thank you very much for showing that.
Yeah, no problem.
Okay, next question: we have a use case with a large SAP table that does not support delta loading, but it does have a date field, and data from previous years is fairly static. Is there a straightforward way to have Replicate filter the source table's date field by the current year when loading the data, and append the current year to the target table name, so we can minimize the amount of data we have to load on daily reloads?
So yeah, I'll take that one. That's also dependent - SAP is a broad term - on which component. If this is ODP or Extractor, you can actually go within SAP, within the Replicate extractor tool if it's the SAP Extractor, and set the filter there. If it's ODP, you would go into your SAP environment; I believe the t-code is RSA3 or RSA7, I'd have to get confirmation, but you can actually set the filter there as well. And if the actual table or object in SAP has that date field and you're using the ODP endpoint, you can actually use Replicate to filter on that date field. I could share out real quick just to give a visual here.
Fantastic.
Okay, so this is just an example - let me close that window. This is for the SAP ODP endpoint, and we can do a basic one here: this is a certain table, I would double-click the table, and with the ODP endpoint it works like other tables, specifically in the newer 2023.5 and 2023.11 versions, where a lot of filtering work was added to the product for this endpoint. Here are your different source columns for that table or object, however you want to call it. So just as an example: this is your date column; you could filter it over here, slide it out this way, put it in your range, then add a range, say equal to, and give it your date. So as long as that field is defined in the object, the API call from Replicate should pass that filter condition to the ODP server and pull the data. That's specific to the SAP ODP endpoint. If you're using the SAP Extractor endpoint, there's another place - this is my SAP environment - you would go into the Replicate (I don't think it's this t-code here). Let me get to it, I have another system, let me just share real quick; and this is where you add the filter if you're using the SAP Extractor.
Okay.
Okay, oh, it's not here, okay, one sec. So this goes back to using the help link that we shared. Here - I just wanted to get the t-code, I can't remember them offhand - prerequisites, there it is. So this is where you actually activate it; here's the transaction code.
Okay.
And this would be in here, oh and it's not there. Well, I can't share that with you, let me –
That's all right.
Okay.
But that link to the documentation, can you copy that and share? That was –
Yes.
Pretty handy, just to find out where that code is.
Yep.
All right, we'll move on to the next.
Apologies, the server must have been down for some maintenance, but I'll share the link.
Great. Is it possible to export replication tasks with endpoint details to CSV?
Yeah, we can export tasks using the API, with the task names maintained in a CSV. It's a simple loop through the CSV records, running the export API call for each one. You just need to write a PowerShell or Python script for that.
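As a rough sketch of what that loop could look like in Python, reusing the authenticated QEM session from the earlier example. The CSV layout (columns named server and task), the output paths, and the action=export endpoint are all assumptions to illustrate the approach, not a documented recipe.

```python
import csv
import json
from pathlib import Path

def export_tasks_from_csv(session, qem_base, csv_path, out_dir="exports"):
    """Loop through a CSV of task names and export each task definition as
    JSON via the QEM API. The CSV column names and the action=export
    endpoint are assumptions for illustration."""
    Path(out_dir).mkdir(exist_ok=True)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: server, task
            url = f"{qem_base}/servers/{row['server']}/tasks/{row['task']}?action=export"
            resp = session.get(url)
            resp.raise_for_status()
            out_file = Path(out_dir) / f"{row['server']}_{row['task']}.json"
            out_file.write_text(json.dumps(resp.json(), indent=2))
            print(f"Exported {row['task']} -> {out_file}")
```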
Fantastic. Thank you. I just posted that link to the documentation that Bill had done. It's a follow-up question about joins: is Source lookup an alternative to joins based on –
Yeah, I can - based on the Community article, right?
Yes.
Yeah, I can answer this. Source lookup is a heavy operation. For simple joins, using source lookup is not a good option; as I said earlier, Replicate is designed for replicating data. We have other tools to handle standard ETL.
Right, and were you saying - sorry, I apologize for not remembering all the details - what would be a better tool for that?
Yeah, as Shashi mentioned, we can go through Compose, or we can use QCDI.
Right. Thank you. Okay, next question: our company creates Replicate tasks on the fly with the API in an automated process, by generating task JSON configurations and submitting them to Enterprise Manager. Great. There is some concern in my organization that these JSON configurations could change during version upgrades. Is there any documentation on these JSON configurations that we can follow from version to version?
The only information that we provide is the release notes for the given service pack or service release of QEM. If there's a change with the JSON, sometimes it's missed - it's software - but sometimes a variable may, just as an example, read in megabytes in a previous version and now read in kilobytes, and that kind of information may not be in the release notes. But if there's a change within the Enterprise Manager tool, that would be in the release notes for that given service pack version.
Great, thank you. Next question: in the task state options, we have "Running, Stopped, Recovering, and Error." Is there any way we can add "Success" as a state option? After completing successfully, a task goes into the "Stopped" state, and it's confusing to identify whether the job completed or not. Would that be a feature request?
I'm not sure I understand the question fully. So if you have a task that stops normally, that would be -
That may be a Full Load-only task.
They're asking about Full Load, I think.
Yeah, that should be a feature request.
Yeah, that does sound like a good idea, to be able to see it in reports, so that would be back to Ideation, and I'll definitely share the link to that. But we can move on to the next question: Replicate is supported on Windows and Linux. Is there a preferred platform?
So, depending on your environment, your data center, and where you're going to have Replicate installed, it's really dependent on your company's infrastructure and what the requirements are. We run on Windows and Linux; it just depends on your environment setup. Obviously Windows is easier: you have the ODBC configurations to consider, whereas on Linux you have the different paths, you have to update the ODBC driver manager, and you have your ODBC configuration files. So it's just dependent on your organization or your company.
Yeah, I totally concur. Okay, next question: any Qlik Replicate certification study guide? That's a good question, because a lot of our team have taken that certification exam. Any tips for the people out there working on that?
We have to go through the user guide.
All right.
Mostly experience, yeah.
Yeah, no, we don't have a specific study guide; it's based on your day-to-day experience and going through the user guide. The certification also has questions specific to some of the endpoints, which you can get into in the user guide.
Great.
Like some of the limitations and prerequisites, yeah.
Also, I would recommend Learning.Qlik.com, especially if you're getting started with it. There are a lot of endpoint specific courses here as well.
Yeah, actually in the Replicate certification they ask about Enterprise Manager as well, so you should have hands-on experience; and, as Shashi mentioned, know the prerequisites, limitations, permissions, what kind of access you should have - yeah, everything.
It's not easy, yeah. All right, next question: if the source tables do not have PKs (primary keys), but the target is Kafka or S3, does not having primary keys have any negative effects on the performance or the output?
Yeah. With a Kafka target we only do transactional apply, so we still have to follow the source requirements.
Okay great. Next question: what's the maximum number of tables you can put in a task? Is there a limit?
We cannot state that we can have, say, 100 tables or 200 tables. It all depends on the size of the tables we are trying to load in a task, and what RAM, disk, and CPU are available on the Qlik Replicate server.
Yeah.
I think they should test, and based on that consider: okay, this many tables in a task will work for their environment. But you can add to that, if you guys want.
Yeah, because that also comes down to the row length of the actual table, like Swathi was indicating: the number of rows, and the actual row length, which comes from the number of columns you have in the table and the size of those specific columns. You might run into an issue where there are too many columns within a certain table, and that's going to cause an error with the task.
Sure, okay, there's a question: why can we not replicate functions? Do you guys know what that's about?
Yeah, because we only support data-related objects: tables and views.
And yeah, a lot of that information is not captured within the transaction logs of the given source endpoint, so it would be another good Ideation entry for the product.
Okay, great. All right, next question: let's say we have a long-running uncommitted transaction, running for three days on a DB2 for i system, and the log stream retention is set to 48 hours. If such a transaction is subsequently committed, would the LSS audit trails get cleared out while the transaction is still in an uncommitted state? How does Qlik Replicate handle long-running uncommitted transactions when using log stream?
So with the DB2 for i, or iSeries, source, it's really dependent on your retention period. If you're using log stream, when we're processing the CDC changes, log stream is basically capturing whatever is in the transaction logs from your given source and putting it into these proprietary Replicate files within your log stream directory. If there are uncommitted transactions, they're not going to be committed to log stream yet, so you could actually miss that event if your retention period is set too low. This goes back to the whole design and setup of your task and the environment: knowing the application, knowing your commit rates. There's a whole setup and design of your tasks before you can decide what your retention periods are for log stream, because in this specific example there is the potential to miss some of the transactions to your target.
Thank you.
Sure.
Next question, it's pretty basic: what's the pricing on this product? I would say you need to reach out to our Sales team or your Account Manager; you can find somebody on Qlik.com if you don't have one, but they would be able to better answer that question. We deal more with the technical side. And the question after that is: is there a link available for the recording of this session? And yes, that will be available on Qlik Community under Techspert Events. If you just switch to Past Events, it's the second little tab there, you can find the recordings of all previous Q&A with Qlik sessions, and it'll also be available on the Qlik Support YouTube channel; just go to YouTube and search for Qlik Support and you'll see it. I'll be posting that later this week, as soon as I can get all the links and files put together, so you should be able to find it, and if you registered for this event you will get an email with a link directly to it. That will be available hopefully by Friday. Next question: coming back to the question about JSON configuration for Replicate tasks, are those JSON settings documented anywhere? I've been unable to find any documentation, only examples from creating a task manually and exporting it.
I put in the chat, from the user guide, some information that might be helpful. I'll share that in the chat.
Yeah, was it this link here?
No, that was for the SAP one; there's one right here - oh, where's my chat, oh, here it is, I'll send it. Wait, I just shared that. So, we do have information in the user guide about exporting and importing tasks, and then, within the actual JSON itself, some of the connection properties and parameters in the JSON file. Other than that, that's the information we have, apart from doing what the customer stated in the chat: trial and error - export the task and take a look. But that's the documentation we have.
Okay, well, it's good to know what's available. We have a few questions left; I think we'll get through them all before the top of the hour. Is there a planned JDBC data connection feature release for an on-prem Windows configuration?
Yeah, this would be Ideation.
Yeah. Yeah, and unfortunately we don’t –
It's available through QCDI, right, Shashi and Swathi? That's the connector that we use in QCDI.
Yes.
Replicate On-Prem is ODBC.
Yeah, it should be a feature request.
So for Replicate on-prem it's ODBC, and it would be a feature request to get JDBC; but that is available on Qlik Cloud, is that what you're saying?
Yes, yeah QCDI.
And just to be clear, that's Qlik Cloud Data Integration?
Data integration, yeah.
Cool, next question: how common is it to split replication into multiple tasks? We're replicating a single DB2 iSeries source to a single SQL Server target at the moment, but it's 500+ tables in that one task. Would there be any benefit to splitting that task into multiple tasks? Good question.
Yeah, so - yeah, go ahead, sorry.
So yeah, like we talked about earlier, it depends on the tables and the application servicing that task, and then you've also got to look at the SQL Server on your target, because obviously the target is where the data is being consumed. The other consideration is that with direct source endpoints you're going to have four to six threads active on the iSeries for each given task; so just as an example, if you have five tasks set up reading from one source, the DB2 for i DBAs or the system folks will see 50 connections or 50 threads. Then you might consider log stream, and then separate your single task going to the same SQL Server. So it's really part of the setup and design of the task when you initially set it up, but there is no restriction; you could have 500 tables in a single task.
Great. I'm just posting a QR code to a survey about today's session; we love getting your feedback. We've got two more questions at the moment. Next one: we have a Qlik Replicate task which is using a SQL view; is the performance dependent on the query performance of the joins used in the source endpoint?
Yeah, so Replicate will run that view on the database, and once it gets the result set it will start transferring the data, so yes, there will be a performance dependency.
Okay, thank you for that. Potentially the last question (if you have any questions you haven't submitted yet, this is your last chance): we see Qlik Replicate running on Windows 2022 trying to negotiate TLS 1.3 with an AlloyDB target thousands of times and failing. Subsequently, we see it communicate over TLS 1.2 and succeed. AlloyDB supports both TLS 1.3 and TLS 1.2 but prefers TLS 1.3. Is there a way to configure Qlik to connect using TLS 1.3?
And Troy, I shared a link with you to an article that was posted. It's related to TLS 1.0 and 1.2, and it goes on with other information. It's really dependent on the operating system Replicate is running on, because Replicate supports both 1.2 and 1.3, but it's the actual OS that's doing the low-level I/O call to the given system. You can set that at the Replicate server level: within Linux there's a TLS config file, and on Windows there are registry settings you can set so Windows only uses 1.3.
All right, and Shashi just shared this.
Yeah, the same thing Bill said; it's all dependent on the Replicate server's OS.
It's nice to see it documented, thank you. Okay, last question for today: is it possible to set up Global Rules defaults so that the generated fields are added to future tasks that have change tracking? Setting up defaults, is that possible?
It depends - excuse me - so, defaults for future tasks: the global rules are set up at the task level, so it depends on your task, and you can reuse the same global rule depending on what it is set up to do. You may have a global rule to prefix the table name, or upper/lowercase your columns; it's really dependent on what task and what global rules you're going to set up.
Yeah, currently the global rules are only at the task level, so if you want them available for all tasks, then I think that would be a feature request - something where we could set up that global rule in the server settings. Currently we don't have it.
Okay. Well, everybody, thank you so much. Thank you again to the very knowledgeable expert panel we had for this today; I really appreciate you guys going through everything. And thank you, everybody, for attending and for these wonderful questions. We appreciate your engagement with us. Thank you all again, and I hope you have a great rest of your day. Take care, folks. Thank you. Thank you, everyone. Thanks, everyone. Bye.
