Author: eherndon (Employee)

Q&A with Qlik: Qlik Replicate Sources and Targets

Last Update:

Nov 28, 2023 8:13:37 AM

Updated By:

Troy_Raney

Created date:

Nov 15, 2023 2:03:45 PM


Environment

  • Qlik Replicate

Transcript


Welcome, everyone, to another Q&A with Qlik. Today's topic is Qlik Replicate Sources and Targets. I am one half of the host team for Q&A with Qlik. My name is Emmanuel Herndon, and I'm a digital customer success specialist here at Qlik, focused on customers from trials to renewals and everything in between, including making content like this Q&A. I'm not doing it alone; my colleague Troy is helping me. Will you say hello, Troy? Of course. Hi, everybody. Welcome to Q&A with Qlik. My name is Troy Raney, and I help out creating videos and webinars like this one. And we have an excellent expert panel for you all today. We'll pass it around and let everyone introduce themselves, starting with Dinesh. Could you say hi?

Hey, this is Dinesh. I've been with Qlik for a little more than a year and a half. I'm a technical adoption specialist from the support group, so I work on support cases and also help customers kick off their basic configurations and things like that. Great. Thanks. Steve? Hi, this is Steve. Can you hear me? Yes. Okay. This is Steve Nguyen. I've been with Qlik for over 15 years. I'm a senior support engineer, mainly with Replicate, also dealing with the rest of the QDI products and all the upgrade work. So I specialize mostly in Replicate and QEM. And Barbara? Hi, I'm Barb Fill, and I've been with Qlik for almost eight years. I'm in Florida now; I used to be in New Jersey, for those who may know me. I'm a QDI tech support and adoption specialist for the Replicate family of tools and endpoints, and that's it really, thank you. And Shashi? Hi, everyone. I'm Shashi. I've been with Qlik for more than two years now. I'm a senior support engineer, and I work on QEM, Replicate, and Compose.

Awesome. Well, we've already got a couple of questions coming in, so we'll start off with the first one. But please keep entering any questions you might have in the Q&A panel, and we'll address them as they come in.

Where is there a list of Sources and Targets?

First question: where can I find a list of possible sources and targets? Steve, you mentioned you wanted to take that one. Yeah, pretty much all the sources and targets are in our user guide. If you go to our website, go to the Qlik documentation, and pick the product, for example Replicate, then in the left panel you go down to platform support and endpoint support. It will list your platforms, source endpoints, and target endpoints, and whether they are compatible with Linux or Windows, et cetera. That's great. And I'll go ahead and post this link in the chat so people can have that. Okay, thank you.

Could Qlik Replicate replace Stitch?

There's a question that came in through the webinar chat rather than the Q&A panel. It says: we're currently using Stitch data loader. Could Qlik Replicate replace Stitch or complement it? What is the upside of using Qlik over Stitch? Hmm. Barb, anybody else know about Stitch? I don't know the Stitch products, so I'm not sure. Yeah, I'm not sure either. I believe it may be similar, so I'm sure there are some overlapping capabilities, of course, but I also think there may be some uniqueness that separates them. I cannot tell you off the top of my head, though. Yeah, unfortunately, I don't know what is similar and what is different, so I apologize. That's fine.
And that's fine. We're here to try and address things live, but there's always the forum on the community; if you have any questions that we're not able to address, that's a good forum for them. I'll move on to the next question.

Will this apply to Qlik Cloud Data Integration?

Do we expect to be addressing QCDI as well, or are we just talking about Replicate today? I think we have experts for both, so feel free to ask about Qlik Cloud as well, but we're trying to stay on topic for Replicate. I'll move on to the next one.

Will Oracle Autonomous (ATP) database be a Source?

Is the Oracle Autonomous or ATP database on the Qlik roadmap as a source endpoint? Anybody have any insight into that? Checking real fast on that, Troy. I think there's a case that was already opened on it, but I believe that particular endpoint is currently not supported. Okay. Yeah, that endpoint is not supported yet. Most likely, what the customer can do is open a feature request on our Ideation space, so that if product management sees there's a need for such a platform, they will consider the input. Okay, thank you. And Troy is now showing Ideation on the screen; that's where you can go to put your idea in. So, we'll move on to the next question.

If adding a Destination ID, must the task be stopped?

When updating an existing source that has tasks running, to add a destination ID in the Advanced tab, do we need to bounce the running tasks for it to take effect? Okay, yeah, I can take that. I'm assuming this is in regard to the Oracle endpoint, where you can set the destination ID for reading the redo logs. Any time you edit an endpoint, you definitely have to stop the task and resume the task for it to pick up the new definition of the endpoint. Great. Thank you.

Can D365 Cloud be a source?

I see another question came in through the chat. There are two different tools; it would be really helpful if you could place all of your questions in the Q&A tool, but I'll go ahead and read it. Is there any way to use D365 Cloud as a source with the Attunity integration software on premises? And if not, are there any plans for the implementation of that? Barbara, do you know that endpoint? Sounds like an IBM endpoint or something like that. D365, is that a form of DB2? Let's see if they can quickly reply to us in the chat. But I'm not sure. Oh, there it is: Microsoft Dynamics 365. Yeah, that's pretty new. I've never heard of it yet. So, it's the same scenario: if it's not on the support matrix yet, then it would be a feature request. I apologize, it's not available yet. Okay. We'll move on. I'll give Troy a minute; I think he's searching for something. I just wanted to put up the list of supported target endpoints so people can browse it themselves. And I put that link in the chat for everybody, so they can start looking through what is actually supported and available in our documentation.

Which endpoints are supported for Qlik Cloud?

Okay, I'll move on to the next question. In the documentation, does it state which endpoints are supported for Qlik Cloud and which are supported for an on-prem installation? Yeah, we have separate user guides for on-prem versus cloud.
So, we have to go to the respective ones to see which endpoints are supported. The one being shown on the screen is for on-prem, so let me put the link for cloud. I think, Troy, sorry, they're ahead of you right there. Oh, okay. So, connecting to data sources, or to target platforms in your project. If we go to "connecting to databases," that is for sources, and the next link is for targets. I'll go to this link and put that in the chat. There you go. Okay. Thanks. That was a good question. I see there was a follow-up that Stitch is for cloud sources, not CDC. I'll move on to the next.

What are best practices for HANA maintenance tasks?

For HANA trigger-based CDC, what's the best practice to follow in releasing a transport, like adding a new column or any maintenance task for a given table? Darn it, Bill should be on this call. Unfortunately, I'm not that familiar with the HANA trigger-based CDC portion; we might have to loop back and get you an answer in the background somewhere, Troy. Yeah, maybe I can try to answer that. Yeah, thanks. We always reference the primary key columns, so any other change to the tables will not affect things. You can do it like regular maintenance; it won't affect the CDC. I'm also curious: the third part of that question says "any maintenance task for a given table," so I guess it also depends on whether there are any dramatic changes to that table. It could alter the layout, and therefore maybe a reload of that table would be required. I think that's the answer, right? Unless we change the primary key columns, I don't think the triggers are affected. So, unless we make a drastic change to the table, like changing the primary key columns, the triggers won't be affected. The triggers are on the primary key. Okay, thank you.

Are there architecture diagrams for Replicate Gateway?

Next question: are there any architecture diagrams that show how the Replicate gateway device (a Linux box) can be deployed on premises or in cloud environments like Azure or AWS? I believe we do have some type of diagram on that, Dinesh. I just don't remember off the top of my head, but we did have a practice session where they showed a diagram; I'm just not sure whether that can be shared out. Dinesh, do you know if we can? Yeah, I'm not sure, let me check; I should have it. Well, I know that the gateway device, which is a Linux box, can pretty much exist anywhere, on-prem or in the cloud. It doesn't really matter to us, as long as we're able to connect to it. And then that box has to be able to connect to your sources as well, right, if it's an on-prem source? Well, pretty much it's the Data Movement gateway, so it needs to be deployed anywhere that can connect to your source. Yes. Hopefully that answers Stefan's question. Great. Thanks, guys.

Is there an error code list for SQL?

All right, moving on. Next question: is there any error code list for the source endpoints, like Microsoft SQL? Interesting question. I'm not sure what error codes they're looking for, but in terms of SQL Server, Oracle, et cetera, most of the connections in Replicate go through ODBC, so the error codes usually come from the ODBC errors themselves.
Most of the time, when you see an error in our logs, it's coming from the ODBC driver itself. So, I think that's what they're looking for in terms of error codes. And I just wanted to follow up on that: when you all, as technicians, don't understand an error code, how do you find out what it means? Well, pretty much I do exactly what any good tech would do: copy that particular error code from the ODBC message in Replicate and Google it as much as I can. That's what I figured; I just wanted to hear it from the horse's mouth. Yeah, we use Google. Most of the time, the error codes or the error messages are from SQL itself; they're SQL error codes, like Microsoft SQL Server errors, typical SQL query errors. We've got this: Shashi Googled it for us and came up with this list, and I'll post that to everybody. Yeah, I had a similar case earlier where the customer wanted it. Fantastic.

Okay, next question. Well, I think the next question is a statement, so I'll read it: QCDI and the new data factory connectors are closer to the way Stitch works. Okay. Thank you for that.

Where will the recording be posted?

And I'll read the next one. It says: will the webinar be available for replay? Yes. The recording comes out next Tuesday. It will be on YouTube as well as Qlik Community. Yeah, just to let everybody know, a quick little plug: in blogs and events, under the past events tab, you can find the recordings of all of our Q&A with Qlik sessions. You can also go to the Qlik Support YouTube channel, where we post those recordings as well; you can see all of our previous recordings there. Like and subscribe.

Can QVDs be used as an endpoint?

All right, moving on. The next question: can, or will, we be able to use QVDs as a source or target? QVDs are a proprietary Qlik file format for data. Yeah, those are Qlik data themselves. That's up to management; so far, I haven't seen anything of that nature yet, so I would just say no for now. You never know at this point. I believe they are cloud compatible, but don't take my word for it. Yeah, I believe they're cloud compatible, you're right, Troy, but not on the Replicate side right now. Okay, thank you.

Tips for large parallel loads from Oracle to Azure?

Next question: what is the optimal way to leverage parallel load for large tables, greater than a billion rows, when the source endpoint is Oracle and the target endpoint is Azure Data Lake Storage? Okay, it comes down to how fast we can read the source and how fast we can write to the target, right? With Oracle, you definitely want Replicate running as close to the source as possible. Then you can leverage segmented parallel load, and on top of that you can do a bulk array read, which is an internal parameter, so that it can read Oracle faster. From there, we can leverage the write to Azure Data Lake Storage. Now, the best way to get performance is to engage our Professional Services team, so that they can understand what you're trying to accomplish, see what you're trying to move, look at your environment, and help you fine-tune your performance at the transfer level. That's great. Thanks.
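(For illustration only: the segmentation Steve describes is configured in Replicate's parallel load settings, not written by hand, but the idea behind it can be sketched in a few lines of Python. The table name, key range, and segment count here are all hypothetical.)

```python
# Conceptual sketch of segmented parallel load: split a big table into
# key ranges and copy each segment on its own worker. Replicate does this
# via task configuration; this is only an illustration of the idea.
from concurrent.futures import ThreadPoolExecutor

SEGMENTS = 8                        # hypothetical number of parallel segments
MIN_ID, MAX_ID = 1, 1_000_000_000   # hypothetical key range of the source table

def copy_segment(lo: int, hi: int) -> int:
    # In a real pipeline this would read ORDERS rows with lo <= ID < hi
    # from Oracle and bulk-write them to the target (e.g., ADLS).
    print(f"loading ORDERS where {lo} <= ID < {hi}")
    return hi - lo                  # pretend we copied that many rows

step = (MAX_ID - MIN_ID + 1) // SEGMENTS
bounds = [(MIN_ID + i * step,
           MIN_ID + (i + 1) * step if i < SEGMENTS - 1 else MAX_ID + 1)
          for i in range(SEGMENTS)]

with ThreadPoolExecutor(max_workers=SEGMENTS) as pool:
    total = sum(pool.map(lambda b: copy_segment(*b), bounds))
print(f"copied ~{total:,} rows across {SEGMENTS} segments")
```

The design point is the same one made above: total throughput is bounded by how fast the segments can read the source and write the target, which is why Replicate should sit close to the source.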
All right. Someone gave us a little follow-up to the D365 question as a statement: Windows Azure does not allow a CDC connection to D365 yet. So, thank you for that.

Is there a Source to Target counts reconciliation option?

Moving on to the next question: is there a source-to-target data counts reconciliation check option in Replicate? Right now, we're performing count checks outside of the tool. I think that would be more of a data remediation capability that Replicate currently does not have. Yeah, that would be a good feature request. Enhancement request. Okay. So again, there's the Ideation space on Qlik Community that we keep referring to; I'll go ahead and put that link in the chat as well, just so everybody can get to it quickly. Thank you.

How does task priority affect the loading?

Next question: how do the table load order settings (low, normal, high priority) affect the table loading? For example, should large tables be set to the lowest priority? And a follow-up: does this setting apply only during full table loads, or to CDC as well? Yeah, that's definitely for full load only. The purpose of the order is to allow you to give priority, basically: which tables do you want loaded first? Now, which is better, loading the large tables first or the small ones? It's an "it depends" type of answer. It could be that you need the data from the large ones first, so you may want to prioritize them. If the small ones are quick, you can prioritize them to be loaded first and allow the large tables to be loaded last. I think it really comes down to which tables need to be loaded first due to business needs. Hopefully that answers your question. Thanks, Thomas. That's great. Thanks, Barbara.

Are there loading best practices?

Next question: is there a best practice document available to help make sure we're configuring new and existing sources and targets correctly? It just feels like we're missing a setting. I saw that one coming in, and Michael Litz hosted a session called Qlik Replicate Loading Best Practices, where I believe he looked at some of those options and gave some tips. You can find it on Qlik Community; there are chapters you can see, and he talked about that a bit. Thanks, Troy. Now, to add on to that question: for existing sources and targets, there's what Michael Litz presented. For configuring a new source or target endpoint: if the endpoint can be set up using an ODBC connection outside Replicate, on the Replicate server, and it's able to talk, then Replicate will be able to talk. We have many customers who try to set up an endpoint, but the communication with the target or source endpoint doesn't work outside Replicate either, and they wonder why Replicate doesn't work. Well, if you can't connect outside Replicate, Replicate definitely will not work; a minimal connectivity check is sketched below.
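(A minimal version of that pre-check, assuming Python with the pyodbc package is available on the Replicate server; the DSN name, credentials, and test query are placeholders for your own endpoint's driver setup.)

```python
# Minimal ODBC connectivity check, run on the Replicate server itself.
# If this simple connect + query fails, Replicate's endpoint will fail too.
# DSN, user, and password are placeholders for your own driver configuration.
import pyodbc

try:
    conn = pyodbc.connect("DSN=my_source;UID=repl_user;PWD=secret", timeout=10)
    row = conn.cursor().execute("SELECT 1").fetchone()
    print("ODBC connection OK:", row)
    conn.close()
except pyodbc.Error as exc:
    # The driver's own error text (SQLSTATE + message) is the same kind of
    # error you would see surfaced in the Replicate task log.
    print("ODBC connection failed:", exc)
```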
I would also like to add to that. If you're looking for a best practice on setting up a new or existing source or target environment, everything is done in phases, sort of. What you should probably do first is set up your source and target as plain vanilla, taking all the default options. And since the question says "just feels like we're missing a setting," I'm thinking it may really be about performance or something. So, I would start as plain vanilla as possible, just take the defaults, and then if you see behaviors such as latency, chances are you need to go back in and look at some of the settings that are already there as defaults and maybe increase or decrease them, depending on the behavior of the task. I don't think there's necessarily a best practices document for setting up sources and targets, because with every customer, and every source and target endpoint, it could be very different, even between individual endpoints. So, in my opinion: start plain, take the defaults, run it, see what happens. If it's not performing, then you can go back in and try to adjust some of those buffers or max file sizes, things like that. Yeah, and just to add to that: also go through the user guide for the specific endpoints you're working with. For example, ADLS as an endpoint, I think, has a particular setting where you can say, wait this many seconds before writing to the target. So you might think nothing is making it to your target, when the endpoint is actually configured to write to the target only after, say, ten minutes. So go through the specific endpoint's configuration in the user guide. And just to support what Dinesh is saying, the user guide also shows you what version of the ODBC driver is required. If it says something like, I need 11.5.6 for a DB2 z/OS source endpoint, don't try installing 11.7; it's asking specifically for 11.5.6. So maybe another best practice is following the user guide, exactly as it says. There you go. Thanks, Dinesh. Okay, thanks to you all for that.

Documentation for exporting tasks?

Next question: is there documentation or helpful hints available for exporting a task from a development Qlik Replicate to a production Qlik Replicate, to minimize building from scratch? We have a lot of those on the forum already; let me just search for that, bear with me. There might be something on the forum, and I'm pretty sure there is, because I think I've seen one as well. While he's looking, I can talk a little bit about the fact that when you're exporting a task, depending on where you're exporting it from, like Qlik Enterprise Manager or just Replicate, the export will bring the list of tables, and I believe it's without the full endpoint definitions, but it does show the endpoints: where you're pointing to from a source or target perspective. Then you would import it into your production environment, and depending on what the business need is, you might need to either reload the task (do a fresh start), or, if it's a CDC-only task, you may just want to do an advanced run: start from timestamp, or start from now, meaning start fresh, and it'll start picking up the changes from that point onwards. Also, to add another point, because I've explained a similar situation before: you're talking about moving from development to production. If they're identical right now and you're making changes in development, say adding some columns to make everything work as expected, and now you're moving it over to production with export and import, be careful: that table might be reloaded in the production environment. Correct. And also keep in mind what version is in your non-prod versus your prod. You should be going through a QA environment that is the same version as your prod, because QA is designed to simulate what would happen in production as much as possible. So the version of Replicate in QA should be the same version as the Replicate in production; then, when you export and import into production, it should work fine. If you're trying to export from a very old version, the translation of it may not sync up right as you go to production, so you're going to want to test those things. You might also need to go through a conversion process: if you're going from, say, 7.0 to the May 2023 release, from an unsupported version to the most current version, you probably should go through an upgrade conversion path to make sure that Replicate interprets that JSON correctly. Great.
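(Since task exports are JSON, as noted above, migrations like this are often scripted. A hypothetical sketch follows: swap environment-specific strings in the exported file before importing. The file names and host names are placeholders, and the export's exact structure varies by version and endpoint, so this deliberately avoids assuming specific JSON keys.)

```python
# Hypothetical sketch: adjust an exported Replicate task JSON before
# importing it into production. Because the export's structure differs by
# version/endpoint, this does a conservative whole-document swap of
# environment-specific strings rather than editing specific keys.
import json

DEV_HOST, PROD_HOST = "dev-db.example.com", "prod-db.example.com"  # placeholders

with open("my_task_dev.json", encoding="utf-8") as f:
    text = f.read()

json.loads(text)                      # sanity check: valid JSON before the swap
text = text.replace(DEV_HOST, PROD_HOST)
json.loads(text)                      # re-check after the swap

with open("my_task_prod.json", "w", encoding="utf-8") as f:
    f.write(text)
print("wrote my_task_prod.json; review the endpoints before importing")
```

As Barb notes above, after importing you would still decide between a reload (fresh start) or an advanced run from a timestamp, depending on the business need.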
Well, let's move on to the next question.

Can Triggers affect HANA performance?

Due to some limitations, our company is looking to move from SAP HANA log-based CDC to trigger-based CDC. We're concerned about triggers hindering HANA performance. Have any other customers had issues with triggers affecting HANA performance, and if so, what suggestions do you have? If you ask me, triggers are more performance-effective than log-based CDC. I have not seen any customers having issues with the performance of triggers, but if you have a large volume of data coming in: with the trigger-based approach, we use two control tables, the attrep_changes table and the attrep_cdc_log table, so you need to make sure you purge the data at sufficient intervals so those tables don't get too big. That's the main thing. Other than that, triggers are much better. I hope that answers it. You're on mute. Ah, thank you.

Why were some changes not applied with CDC?

The next question seems to be maybe a support case, but I will read it. When using SQL Server as a source, sometimes the error "source changes that would have had no impact were not applied to the target database; refer to the error table for details" pops up, and sometimes this results in a record getting missed in the target. The source DB is SQL Server and the target DB is Snowflake. Any ideas? I think we probably all have some ideas; I'll let you start, Barb. Oh, okay. My quick take is that the purpose of that message, saying refer to the attrep_apply_exceptions table, is that it did not apply that transaction to the target. Now, why didn't it? Because there would have been no impact on the target. In other words, let's say you're running an update to a row that is not actually there on the target, and your task settings are set so that if the row is not there, the transaction is just written to the exceptions table so you can inspect it later, maybe look at the PK and search for it on your Snowflake target. It's basically saying the row is not there to update, and your task settings have controlled Replicate's behavior such that it writes an exceptions record to the exceptions table. So it is correct that that row did not get written to the target: it's not there, though maybe it should have been. If you change the task settings, you can alter the behavior so that if the row is missing, for whatever reason, we convert that update into an insert, and that way the row does get to the target. That's my initial take. Yeah, thanks, Barb. Pretty much, the task configuration by default always writes to the attrep_apply_exceptions table, and whenever you get that particular message, like Barb was saying, go to that table and look at what the error message says. It could be a primary key violation, a duplicate, or something like that. That's why you're not getting it on the target: we're logging it to the exceptions table by default. Okay, great.
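(A minimal sketch of inspecting that control table, assuming pyodbc and an ODBC DSN for the target. The table name attrep_apply_exceptions comes from the discussion above, but its exact column layout can vary by Replicate version, so the query stays generic.)

```python
# Minimal sketch: inspect recent rows in the attrep_apply_exceptions
# control table on the target to see why changes were skipped.
# The DSN is a placeholder; we print whatever columns the table has,
# since the layout can vary by version.
import pyodbc

conn = pyodbc.connect("DSN=my_target;UID=repl_user;PWD=secret")
cur = conn.cursor()
cur.execute("SELECT * FROM attrep_apply_exceptions")
cols = [d[0] for d in cur.description]
for row in cur.fetchmany(20):          # just the first 20 exceptions
    print(dict(zip(cols, row)))
conn.close()
```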
Moving on.

Any plans for pre/post load scripts for full load?

Any plans for supporting pre-load and post-load scripts, for example, dropping and recreating an index for a full load? I believe we have the ability for you to control your task such that, say you're doing a full load: as far as a post-load step goes, you can tell Replicate to stop the task after the full load, either before or after applying cached changes. The task will stop and not continue on with CDC, and that gives you the opportunity to do things manually if you need to. And in the Replicate task settings, I believe there is also another option; I'm just going to double-check that for a second. Yeah, go ahead and check, Barb. From what you said, I believe they're talking about an endpoint like SQL Server or Oracle, and currently there's no pre or post command in the Replicate task for those particular endpoints; that would be considered a feature request. But, for example, if you write to an S3 target, we do have pre and post commands for the S3 target. So certain endpoints have pre and post, but for the endpoint you're talking about, I believe SQL Server or Oracle, there's no pre or post command procedure at this point. Right. And in the full load settings of your task, there is a checkbox for primary key or unique index creation: you can check the box that says create the primary key or unique index after the full load completes. So maybe that might be what you're looking for; it's just something to check. Okay. Thank you.

Why is a primary key required?

Next question: why is a primary key or index required when replicating from an Oracle source endpoint to an Oracle target endpoint? Some tables have no primary constraint or unique index, but Qlik forces us to create a unique key. Why? Barb, you in?
Yeah. I've experienced so many customer situations where there was no primary key and nothing unique on the target, and the latency was outrageously large, because basically it's doing table scans, trying to match an entire row of data to perform an update or a delete, for example. So, even if the table doesn't have a primary key or unique index, you can just double-click one of the column names that you think has a good chance of being unique, and when you full load, or reload, and get fresh metadata to the target, it will make performance significantly better. And it doesn't matter whether it's Oracle to Oracle; it could be any source going to a relational target. You want some sort of primary key, unique index, some sort of key field, so that table scans are not being run. That's an excellent answer. Thank you, Barbara.
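(A minimal sketch of that advice, assuming pyodbc and a relational target; the table, column, index, and DSN names are all hypothetical. After adding the index, get fresh metadata to the target, e.g., by reloading, as described above.)

```python
# Minimal sketch: give the target table a unique index so UPDATE/DELETE
# applies use an index lookup instead of full table scans.
# Table, column, and DSN names are hypothetical.
import pyodbc

conn = pyodbc.connect("DSN=my_target;UID=repl_user;PWD=secret", autocommit=True)
conn.cursor().execute(
    "CREATE UNIQUE INDEX ix_orders_order_id ON orders (order_id)"
)
print("unique index created; reload/refresh the task metadata afterwards")
conn.close()
```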
Next question.

What are Professional Services payment options?

Is there a pay-as-you-go option for Professional Services assistance? Might be. I don't know if you all are that familiar with Professional Services, but it feels like that's a question for your account manager; Professional Services offerings are really specific to your needs. So there might be; hit up your account manager. I know that a lot of customers buy a small-to-medium-sized chunk of hours, for example, and they can use them an hour here or two hours there, maybe to ask some questions, get a little education in, or spend one or two hours on the configuration of a certain endpoint. So it's a very flexible process from that angle; I don't know if it's pay-as-you-go, stick-your-credit-card-out like years ago. But I agree, I was about to say the same thing: they are very flexible depending on your specific needs and requirements, so definitely talk to your account manager. Thank you.

Can scripts auto-export tasks without stopping?

The next question is commenting on exporting tasks: we use automated scripts to migrate tasks through environments behind the scenes; our developers do not need to stop their tasks. I don't know if that was a question or a statement. It sounds like both. If they're using a back-end script or something like that to export: exporting a task just grabs the information of the task itself, so it can be done on a task that is already running. You can export a task that is currently running; that's no issue. Now, importing the task back is different: if you import the same task back into the server that already has that task running, that's a completely different issue; but importing it into a different server is no problem at all, because that task doesn't exist on the new server. So exporting a running task: no problem.

How does AWS DMS compare?

Okay, next question: what are the differences between AWS DMS and Qlik Replicate? That's maybe a legal issue; I know what it is, but I don't think I can answer it. So, is it fair for us to say ours is better? Yes, fair to say. AWS DMS is very limited in terms of configuration; Replicate has the full back end of configuration options, and DMS does just a minor portion of it. So yes, Replicate is the full-blown version compared to AWS DMS. Okay.

Can we write to SAP directly with APIs?

Before we go to the next question, there was a question that popped up in the chat a while ago that I missed. It says: is there some way to write to an SAP system using an API, without connecting directly to the database? I don't think there is. No, you have to connect one way or another. Yeah, that's kind of the whole point of SAP: it's very locked down, so you have to have the full connection. Without connecting directly to the database, I don't think there is a way, but I'm not an SAP system admin or anything like that. Unless they're asking whether they can connect directly to a back-end database. Sometimes you have SAP on the front end, the front-end image, and then you've got the back-end database environment, whether it's HANA, SQL Server, DB2, or Oracle. I do have some customers that go directly to the back-end databases and not through the SAP front end, and that's okay too. It just depends on whether the data has any meaning that way; you may be missing some columns that are specific to SAP, where the back-end database has a different format or layout. So technically, physically, you could connect to those back-end databases. But also, I think the permissions are important to consider here, right? Yes, what level of permissions you have. Okay. Well, next question. And yeah, man, thank you for catching that; I totally missed that one.

Does anything get written to the Apply Exceptions topic for Kafka?

For a Kafka target endpoint, does anything get written to the apply exceptions topic? Probably, if there's an exception and if your task settings tell Replicate to write any exceptions to that topic. So at least it's possible. Yes, it is. You have to create that topic manually, though; I don't think Replicate will create it, but yeah, it should work. Okay, thank you.

Possible to use change tables with a Snowflake target?

Next question: is it possible to use change tables if the target is Snowflake? Yes. I don't know if that was a statement or a question, but yes. I'll go ahead and read the next one, which is a statement: all CDC software requires a PK due to the same issue; without a PK you can also drop the target data. All right, we'll move on to the next one.

Why does the exception table say 0 rows affected?

In the exception table, the error is captured as "zero rows affected." We're not sure why the record is missing in the target. In the cases we checked, this has resulted in a count mismatch between source and target: the source has the record, while in the target, the record did not flow through at all. Zero rows affected. Same thing.
So, with zero rows affected, most likely it's going to go to the exceptions table again, like the other question we picked up on. When you see zero rows affected, read the log a little higher up; it'll say something about it, and then look at your exceptions table, I'm pretty sure of it. Yeah, I think all these scenarios would be better handled if we enable upsert mode in the apply conflicts settings. Yeah, most of the endpoints can be set up using upsert: Snowflake, SQL Server, Oracle, many of them allow it. So you can use upsert, and you won't see that zero-rows-affected error, because zero rows affected is usually just a duplicate key or a missing key, or a delete where the row isn't there to be deleted. Yeah. I was on mute. Okay, thank you.

Documentation on SAP HANA performance?

We have a few more questions, and we're trying to get to them as quickly as possible. The next one: I have not quite understood the response regarding HANA performance when using trigger-based CDC; please explain or provide some white papers on this. Our SAP instance has around 10 to 20 million changes daily. Yeah, we have other customers with even more than that. But if you're looking for a specific white paper, I can find it and share it offline. Okay. We'll make sure to have that link with the video, so you'll be able to see it when the recording comes out.

Do Triggers affect SAP task performance?

Okay, next question then. Have any of your customers experienced SAP performance degradation when using trigger-based replication, and if so, what was their solution? I'm not sure I remember a lot of performance degradation; I'm not sure which side they're talking about. If it's the task, there might have been some degradation due to extremely massive volumes, but I'm not quite sure. I think more details would need to be added here: we would have to get to know your environment and understand how your database environment is set up, and, for example, how far Replicate is from the source, the SAP system, because our best practice is that you always keep the Replicate server closest to the source; it can be a little farther from the target. So I think it depends on what you mean by SAP performance degradation: is it the SAP environment or the task? I think they replied: source side, SAP. Yeah. I think we usually provide an internal parameter for that, a performance hint: whatever query hint SAP uses, we can provide and use it, and that will enhance the performance. Okay.

How is the table affected when resuming after adding a new column?

I think the last question was still being typed in, but we have some time, so I'll read it aloud. If full load is disabled, and you add a new column to a table and resume the task, what happens to the table? Does it continue CDC, or fail, or still force a full load? And the next one is another scenario: when the target is Kafka, you make changes to a source table column and use the event run options to start the task; what happens to the topic on the target side?
The first one is: full load is disabled, and you add a new column to a table and resume the task. So, assuming the task got stopped and you're adding a new column, it depends on where you added the column. Was it added on the original source table, or was it added in a transformation? Replicate should basically get that new metadata information if it's been added to the original source table; if it's been added on the fly as a transformation, Replicate will probably try to refresh the target metadata for that table. Obviously, a full load won't kick in, because full load is turned off, but it still may get fresh metadata. Does it continue CDC? Yeah, it should continue with CDC after it refreshes the target metadata. Does it still force a full load? It can't force a full load, and if anything, it wouldn't be a real full load; it might be what they call a dummy full load, where it just goes back to the source to get fresh metadata, if that new column was added on the original source table. Anyone else want to take number two?

How is a Kafka target affected after adding a column?

Let's see, this is similar: when the target is Kafka, you make a change to a source column and use the event run options to start the task. What happens to the topic on the target? Well, when you use the event run options, I'm assuming it's going to be something like starting from a timestamp. Any time you run from a timestamp, it's going to do what we call a fresh run, so it's going to drop the topic when you run it from a timestamp in the event run options. I think that's the answer right there. Fantastic. Go ahead, Manny.

Okay, well, since there are no more questions, we will wrap up this Q&A with Qlik. I want you to look at the screen Troy has up: the next Q&A with Qlik is coming up on Tuesday, November 28th, and it is on Qlik Visualization. We have Patrick, who I believe was on Techspert Talks; he's locked in for that session, so please join us. It should be very good. I also want to bring to your attention the session survey for today: you can scan the QR code with your phone and give us your feedback, because we are looking to make Q&A with Qlik even greater. Also, check out Qlik Community and Techspert Talks. Tomorrow there is actually a one-on-one with a technical adoption specialist, Umberto; along with me, we run what we call one-on-one TAS workshops, where we get into some deep one-on-one topics with demos. You can't register for the one tomorrow, as that is closed, but you can register for ones in the future. Also look out for Techspert Talks, which is up November 16th; that is with Troy Raney, our host, as well. And also look at the rest of Qlik Community: there's Ideation, there's Support, there are the blogs. If you need more information or understanding on different topics, Qlik Community is where you can go. We would also like to thank our panelists; they were great, very knowledgeable, and we thank you for your time. And from myself, from Troy, the panelists, and from Qlik, we say have a great day, and thank you.