Here you'll find the recordings of our Techspert Talks sessions (formerly Support Techspert Thursdays).
00:00 – Intro
01:00 - What is Replicate and how does it work?
03:38 - What is pureScale?
05:55 - Setting up a Db2 pureScale endpoint
06:52 - Creating a full load task
10:03 - Demonstrating how CDC works
12:22 - What happens if a task fails?
13:21 - Error handling settings
14:03 - How to manually recover a pureScale task
17:46 - Tip for after recovery
18:55 - A look at the Oracle target
20:05 - Q&A
To list the log files and see the current log number:
db2pd -log -db NAMEOFDATABASE
To get the LRI position:
db2flsn -db NAMEOFDATABASE -lrirange LOGFILENUMBER
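Put together, the recovery lookup above can be sketched as a short shell session; DBNAME and the log number 85 are placeholders, and the commands only do anything on a machine where the Db2 tools are installed:

```shell
# Sketch of the manual-recovery lookup described above.
# DBNAME and log number 85 are placeholders for your own database and log file.
if command -v db2pd >/dev/null 2>&1; then
  db2pd -log -db DBNAME            # lists the log files and the current log number
  db2flsn -db DBNAME -lrirange 85  # prints the LRI range for that log file
  status="ran db2 commands"
else
  status="db2 tools not installed on this machine"
  echo "$status"
fi
```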
Q: What is Qlik Replicate versus Qlik Data Integration?
A: Qlik Replicate is one of many products under the Qlik Data Integration (QDI) umbrella.
Q: What about large tables and frequently changing data?
A: Table size makes no difference; all tables are treated the same. Replicate can even handle LOB data, depending on the source.
Q: Can we use Replicate for a DR (disaster recovery) use case with Oracle or SQL Server?
A: Yes, as long as you have access to the transaction logs (redo logs for Oracle, TLOG backups for SQL Server).
Administrative commands for Db2
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
Hello everyone and welcome to the February edition of Support Techspert Thursdays.
Today’s presentation is Configure & Recover Db2 pureScale as a Qlik Replicate Source Endpoint with Bill Steinagle. Bill, why don’t you tell us a little bit about yourself and what we're going to be talking about today.
My name is Bill Steinagle. I’m a customer support engineer. I’ve been at Qlik for the last three years, and I’ve been in the IT industry for about 27.
All right, and what are we talking about today.
Today we're going to talk about Qlik Replicate and accessing a Db2 pureScale environment: setting up a Replicate endpoint, how to recover, some troubleshooting tips, and some takeaways and things you should remember.
Bill, could you start by summarizing for us for some of us: what is Qlik Replicate and how does it work?
Yeah, Qlik Replicate is a data replication tool that you configure to move data from your source to your target. The main component of Replicate is the CDC component, which is change data capture. That's the component that gives you real-time data movement between your source and target environments: we capture all of the changes coming into your source, and we replicate them to your target environments, like your test and QA systems, or for sanitization of the data.
So basically, Qlik Replicate allows you to keep all your data live and up-to-date regardless of where the sources are?
Correct, correct. Yeah, and that's a very cost-effective methodology. You don't have to run loads from your production to your test and other environments. And you keep your data in real time, so whatever image you have in your source environment, you're going to have in your target.
So, how does it work?
Sure. These are the types of systems that we access. This is the integration piece. And then these are your different landing zones. And then these are the applications that are reading from the landing zones. This piece right here: here's all your source environment.
This is where the CDC solution comes in, depending on the database or the environment you're reading from. What Replicate does is go against the log files that are defined for your source environment. As an example, with Db2 you have transaction logs; with Oracle you have redo logs. What Replicate does, and does very effectively, is read the actual transactions being written on your source and process those through the CDC solution to your target. That's where you read the inserts, updates, and deletes, and that's how we keep track of the information.
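The log-reading idea Bill describes can be sketched as a toy change-stream replay. This is purely illustrative of the concept, not how Replicate is implemented internally:

```shell
# Toy sketch of what CDC does conceptually: replay the ordered change stream
# read from the source's logs (inserts, updates, deletes) onto a target copy.
# This illustrates the idea only; it is not Replicate's actual implementation.
declare -A target                      # the target table, keyed by primary key
apply() {                              # apply one logged change: op, key, value
  case "$1" in
    INSERT|UPDATE) target["$2"]="$3" ;;
    DELETE)        unset "target[$2]" ;;
  esac
}
apply INSERT 1001 "Alice"              # changes, in source commit order
apply UPDATE 1001 "Alicia"
apply INSERT 1002 "Bob"
apply DELETE 1002
echo "target rows: ${#target[@]}, row 1001: ${target[1001]}"
```

Replaying the stream in commit order is what keeps the target an exact image of the source.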
I know you’re about to demonstrate this, but how will this work specifically with Db2 pureScale?
Replicate will call the Db2 log reader API that actually gets the data from the transaction log, pulls it out, runs it through the CDC process, and applies it to your target.
And we have intermediate tables that keep track of that. Also we have something that's called Sorter files that are defined and they're basically the cached events before they're committed to the target.
So, it's keeping track of the data, all the way along the process?
Today we're gonna be talking about pureScale. What exactly is pureScale? How is it different from regular Db2?
pureScale is an add-on that IBM built for more power, more availability, more uptime.
And the ability to have multiple database instances across a clustered environment. Db2 pureScale is basically the traffic cop for all the Db2 instances underneath it, which are called members.
How many members can a pureScale environment have?
The max number of members, which are really just Db2 instances, is 128. That's the max the pureScale environment will handle.
Okay, so that's a cluster of up to 128 Db2 instances?
And as you can see in the image here, there's the primary and secondary CF; CF stands for caching facility, so this is like the brain of pureScale right here.
It keeps track of everything that's happening and all the members. These are your Db2 instances, and these are the applications coming in, reading from Db2.
On the back side, pureScale is keeping track of this information and what's going on with all the members. Because it's a clustered environment, you have a primary and a secondary; so if one goes down, you have a backup.
Now, is pureScale supportability with Qlik Replicate a new feature?
Yes, so we added pureScale support in Replicate 6.6.
It’s not much different from the regular, standard Db2 LUW that you're accessing.
It's got the add-on features: the caching facility, and the ability to add multiple members. Your Db2 could be in a cluster as well, but without the pureScale piece on top that keeps track of everything.
Okay, so you could set up individual sources as Db2, but you'd just be connecting to each individual instance.
All right, can you show us an example of how to set this up to Replicate data?
Oh sure. I’m running the April 2020 6.6 release of Replicate. And the current release is 7.0.
I’m just one release back.
And so, what are you going to do first?
So, what I'd like to show you is: set up a basic endpoint accessing the Db2 member that's in a pureScale environment.
Alright. Where do we start?
So here, I already have a task running, we can close that. You want to do: Manage Endpoints.
New Endpoint Connection.
Give it a name.
And this is going to be your source.
We don't have Db2 pureScale listed, so you want to make sure you select Db2 for LUW.
You have different connection methods. You can either use a DB alias that you have set up in your client, or, as I do, use the connection properties.
My database environment is on my local host.
So that’s your IP address?
And it doesn't have to be local. The server could be in California and I could be living in New Jersey; it’s the IP address of your database server.
And, then you select your database.
Put in your username.
Test your connection.
And that's the password of the database?
So, I want to save this.
So now, I have a pureScale data source, or endpoint, that we're going to access.
So now that you’ve got your endpoint setup, what’s next?
So, now that I have my endpoint setup, and I know what target I want to write to; we're gonna set up a new task.
So, new task in the toolbar window here. That's my task name.
It's gonna be a full load and apply changes.
This is basically the CDC piece. It's going to be Unidirectional.
And, say okay.
So over here on the left side is all of your source endpoints and targets.
And what’s the difference between a source and a target, for those who aren't that familiar?
Yeah, so the source is what you're going to be accessing. You have a bunch of tables, you want to access and replicate to your target. So your source is where you're going to read from.
Okay it's like the original.
Exactly that's going to be your baseline. That's what you want to replicate, that's the data.
So in this case our source is pureScale.
Yep. You can drag it or use the arrow. And we click on Targets.
And all these targets and sources that are listed here; are these kind of built-in by default?
No. No, so that's the other key point here. When you're accessing your source and target environments, you have to have the client software already installed on the operating system where you're running Replicate. So whether it's Linux or Windows, you want to make sure you have the database client software installed to connect to the database. That puts less load on the database, and we require it as part of the product implementation.
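A quick way to check the prerequisite Bill mentions: `db2level` is the Db2 client's version command, and using it here is an assumption about your setup (for an Oracle source or target you'd check the Oracle client instead):

```shell
# Quick check that the database client software is present on the Replicate
# machine. "db2level" assumes an IBM Db2 client; adjust for other databases.
if command -v db2level >/dev/null 2>&1; then
  have_db2_client="yes"
  db2level                   # prints the installed Db2 client level
else
  have_db2_client="no"
  echo "Db2 client software not found; install it before defining the endpoint"
fi
```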
I love how it does this visualization for you.
It helps. So, from here, we want to go on table selection. We want to select the list of tables from our Db2 environment. You select the schema; whatever schema of tables you want to look for. You just do a search. So, we'll get the employee table, copied over. We can do inventory. We can do product.
So, I have four tables in my task. These are the tables I want to replicate to my Oracle environment. My Oracle environment is used by another team that runs reporting, or however they want to use the data. We send it to them and we keep it as live and active as what we have in our source.
And from this screen, basically you just want to start processing.
So here, it’s loading the table. Making a copy from source to target. And then going into CDC mode.
Yep. So, this is the monitor tab in Replicate. It's doing a full load; it says it's completed.
These are sample tables, but if you have a table with a million rows or 200 million rows, this is where your transfer counts are going to show.
Now that the full load is complete, Replicate's going to switch into CDC, because this is a full load and CDC task.
One more time, what does CDC stand for?
Sure: Change Data Capture.
I just want to demonstrate a couple of points here specific to pureScale. You can use any of your applications; this is just an example. This is the employee table.
So, this is a standard database admin tool to look directly at the source data?
Correct. So, this will kind of give you a visual of how Replicate works. I’m making some changes.
I’ll do one more.
Okay, so you're making a few changes to the database itself.
Right, this is the source system, the sample database I’m connected to, which is Db2 10.5. It's a member of the pureScale cluster.
So, I just made some changes and I’m going to commit the changes. Once I commit the changes to the source database, you're going to see what happens in Replicate.
There it is. See the four updates on the employee table.
So, walk us through exactly what happened there?
Sure, so on my source side, I’m just using a database administration tool. What I was doing was actually changing the data in the source system using this tool. And Replicate is keeping track of those updates. That's what this screen here shows. We did four updates on the source side; it got four updates to the target.
So, it's keeping track of every change made to the data?
Correct, on those tables. So, as an example, I’ll use another table that I had defined.
This is kind of a wow moment to get to see it happen so immediately.
Right. If you missed it, I just deleted a row. So now we're going to insert a row. Some random value.
Saving those changes.
Correct. So, as you can see, for the inventory table that we were just updating, we inserted a row, and we deleted a row. Replicate’s keeping track of that.
This way you're keeping your source and target in sync at all times, meaning the structure of your source and the structure of your target are the same.
And it's all happening instantly, it's all happening live, as you say.
Correct. So, there's really not much manual intervention. That's the power of the tool.
So, Bill, what happens if a task like the one you set up fails?
Okay, that's a good question. So what I’m gonna do is I’m going to force a crash of my task, and then we can go through the steps to recover.
This is specific to Db2 pureScale.
We’re looking at your list of tasks running on the system. You’re forcing it to crash now?
Correct. Now watch what happens. This task is trying to recover, because it crashed.
Okay, so it’s giving an error message there.
I go over to monitor.
The big things I’m picking up are the message on the far right with the big red X saying ‘task stopped abnormally.’
Correct. And I can say: view logs. So this is what’s actually happening with this task. It's trying to recover from the crash. It says: ‘running full load CDC resume recoverable error, retry 1.’
Replicate is saying: “Something happened to the task; I’m going to recover.”
If it’s not able to recover, then you have to manually recover the task, using the steps I’m going to show.
And if I go in here (you remember, I said recoverable errors), here's the error handling. It's got a maximum retry count; it's infinite because it's set to -1.
So, by default, it'll just always try and pick up where it left off?
Correct, without any manual intervention.
I’m going to stop the task. Because I have unlimited retries on recoverable errors, it's not going to crash on its own.
Even if the database goes away, when the database comes back, Replicate will try to come back online as well. We're gonna stop the task.
So, it looks like it's built not to fail, but we’re in Support, right? Sometimes bad things happen. And this is the process for manual recovery.
And you know, it depends on your source environment: a missing log file, a connection issue, different scenarios can cause a task to fail. And with pureScale, you have to know a few Db2 commands to get the information you need to resume your task.
All right, could you walk us through that?
Sure, you have to run a few Db2 commands to get what's called the LRI position within the log file that was being accessed.
Okay, so we've got a command prompt setup you've taken us to that path. Which path is that?
Yeah, this is the Db2 administrative command window.
I use this command window to make sure I have all the Windows rights and permissions to run these commands.
The first command you want to run gets the list of log files associated with your database.
It's: “db2pd -log -db” and then the name of your database.
It may take a minute to come back.
So that's just getting a list of all the logs from your database?
Correct, and it also gives you the log number that it's reading, which is important for the next command that we're going to run.
This field here, the current log number, is 85.
See this? I have three log files associated with this database instance.
Now I have the current log number and its starting LSN.
This is the Db2 command to get the stream position within that log file.
So this will take us to the specific point where the fail occurred? So data-wise, we can just pick up where we left off without any loss of data?
Exactly. Correct. As you can see, it gave an LRI range between this value and this value.
This is the beginning, and this is where it was. This is the value you want to copy; it's 34 bytes long, so you want to make sure you copy all 34 bytes.
To make sure, just open an editor.
34 bytes. This is the value you need to resume your task that failed.
That's the exact position where it stopped?
Correct, within the log file. There’s an internal parameter on the Db2 source endpoint that you have to set with that value, so it'll be able to pick up from that position.
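One sanity check worth doing before pasting: confirm the value you copied really is 34 characters. The LRI value below is made up for illustration:

```shell
# Check that the copied LRI value is exactly 34 characters before pasting it
# into the endpoint's internal parameter. This LRI string is a made-up example.
lri="0100000000000000270F000000000000AB"
if [ "${#lri}" -eq 34 ]; then
  lri_ok="yes"
else
  lri_ok="no"
fi
echo "LRI length: ${#lri} (ok=$lri_ok)"
```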
So, you open up your endpoint.
You can click on the advanced tab. Internal parameters.
There it is. This is the recovery method for a pureScale task.
Copy and paste. Say okay.
Don't forget to Save the endpoint.
And now we're ready for this option: Advanced Run Options.
With pureScale, you have to have that context value in the source endpoint.
Even though it says the tables are already loaded and offers to start processing from now, we're not going to use that value here.
We're going to read the Start From Context from the source endpoint. That's why it was important to make sure you have the right value.
I can just click OK. And the task should start.
So now we'll do the same thing. We'll do a quick test.
Delete couple rows.
Now they show up, see that?
Those were the steps to recover manually when your task fails in a pureScale environment. And if there was an issue with the LRI value, the stream position we got when I ran that command, we would see an error when the task started.
One other tip as far as the pureScale environment goes: once the task has been up and running for a little bit, you want to stop the task.
We keep that stream position in the STS file that's associated with the task. Now that the task is OK, go back into the source endpoint.
Go back to your internal parameters and remove it.
Now that it's saved, you can just do a resume processing. Resume is basically going to pick up where it left off.
Now, those Db2 commands you used, where did you find those?
They’re basic Db2 utility commands.
Google's my best friend. If I need to know something I usually just go ask Google.
But I actually have links that will be available for Db2 pureScale information, and there's also, I believe, some Db2 command material as well.
Just so everyone knows, we will include links to the resources mentioned along with the recording. So keep an eye out for that.
Bill, we’ve talked a lot about the source side. What about the target? The task has been replicating to Oracle?
Let me connect to Oracle. This is basically SQL*Plus.
Password. This is my user.
So if I do: select count(*) from employee
So what are you trying to demonstrate here?
I have 40 rows in my target.
And that was the employee table.
That's your source?
Yeah, this is the source. 40 rows.
I thought it'd be beneficial to see on the target side.
So now both tables are in sync; both tables are loaded. So anytime there's an update in this table... I mean, we can do one real quick.
We can delete a row.
So, look: 39 rows.
We do the same command.
That's the power of the product. That's the Change Data Capture. We keep track of your changes, keeping your source and target in sync.
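The spot check Bill runs can be sketched as follows; the connect string `repuser/secret@ORCLPDB1` and the guard around `sqlplus` are illustrative assumptions, not values from the demo:

```shell
# Hedged sketch: spot-check the row count on the Oracle target with SQL*Plus.
# The connect string and credentials are illustrative, not from the demo.
if command -v sqlplus >/dev/null 2>&1; then
  sqlplus -s repuser/secret@ORCLPDB1 <<'SQL'
SELECT COUNT(*) FROM employee;
EXIT
SQL
else
  echo "sqlplus not installed here; run the COUNT(*) from your Oracle client"
fi
checked="done"
```

Running the same `COUNT(*)` on source and target after a change is a simple way to confirm CDC is keeping them in sync.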
Okay. Now it’s time for some Q&A.
Please submit your questions on the left side of your On24 console.
Bill, which question would you like to address first?
Okay, it looks like we have a first question: what is the default character set for Replicate? Is it UTF-8? If so, where's it defined?
Yes, in the Replicate tool the default encoding character set is UTF-8.
And it's defined within the product.
You don't have to set it anywhere; it's set automatically.
Great. Next question.
Okay: does the connection to pureScale require a special license, or is it an add-on?
As we discussed earlier in the presentation,
pureScale is an add-on for Db2, so any licensing, I believe, would be on the Db2 side.
On the Replicate side, as long as you have Db2 LUW as your source, you wouldn't need a separate license for pureScale.
Just to add on to that, Bill: is there a specific version of Replicate that’s necessary?
Yes, you need to be running at least Replicate 6.6; it might be 6.5, but we didn't introduce it until 6.5 or 6.6 of Replicate.
The user guide recommends that on Db2 servers the Replicate user must have SYSADM or DBADM and data access authorities. Is there a way to limit this access?
Good question, but no. The reason is that the
Replicate user requires that access because of where the data is being pulled from: the logs. If you don't have the proper permissions, we're not going to be able to capture the data from the transaction logs defined to Db2.
That makes sense.
And these are all documented in the Replicate user guide as prerequisites before you set up a source endpoint.
Okay. Next question.
Are there any data extraction limits for pureScale with Replicate?
No, I do not believe there are any data extraction limitations.
I guess the question might be: is there a limit on how many rows of data you can extract
from pureScale with Replicate? I do not believe there is a limit from a pureScale environment,
but I could double-check and get back to you.
Next question: after a task has been running for 22 hours, it fails with a Db2 connection timeout. Any idea what could be the problem?
Okay, that's a good question, and this comes up a lot. Replicate is basically a data replication tool; we utilize the connections you have in that environment to access your database. If there's a database issue or a connection issue and
Replicate can't connect, we usually push the issue on to
the DBA or the network admins, because we utilize the connections that are already there.
So if there's a connection problem or a Db2 timeout, we usually get the DBA involved to help see what was happening on the source side when the connection problem occurred.
Is it possible to import existing tasks from an older version of Replicate to a newer version that is installed on a different server?
Great question; it comes up all the time, because you have different environments: prod, test, development. So yes, it is possible. You export a task in JSON format, and you can take that export file,
send it to your other environment, and use Replicate to import the task; your task is then live and ready to go.
As long as you're accessing the same source and targets; that information is also defined in your task as your source and target endpoints.
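One small tip when moving an exported task between servers: confirm the file is still well-formed JSON before importing it. The file contents below are a stand-in for illustration, not a real Replicate export:

```shell
# Before importing on the new server, sanity-check the exported task JSON.
# The file contents here are a made-up stand-in, not a real Replicate export.
cat > /tmp/task_export.json <<'EOF'
{"name": "PureScale-to-Oracle", "source": "Db2-pureScale", "target": "Oracle"}
EOF
if python3 -m json.tool /tmp/task_export.json >/dev/null 2>&1; then
  json_ok="yes"
else
  json_ok="no"
fi
echo "export file is valid JSON: $json_ok"
```

A truncated transfer or an editor that re-encoded the file will fail this check before it can fail the import.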
All right, Bill we have time for one last question.
Oh, here we go: we're planning on upgrading our Replicate server soon. What version would you recommend we upgrade to?
So currently the current version is 7.0 with a service release, which is a bundle of fixes, APARs, whatever terminology you use. That's the current version, and that would be the recommended version to upgrade to if you're going from 6.5 or 6.6.
Doing an upgrade in place basically upgrades the current version you have installed. And it's always a great idea,
before you do any kind of upgrade, to make sure you take a backup of the Replicate data directory. That's where we store all the information for all the tasks
you already have set up in Replicate. This way, if you have to fall back to the version you were upgrading from, you just bring back your data directory, start up Replicate, and you're good to go.
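The backup step can be sketched as a one-liner; the data-directory path below is a demo stand-in, so point it at your real installation before using it:

```shell
# Back up the Replicate "data" directory before upgrading. The default path
# below is a demo stand-in; set REPLICATE_DATA_DIR to your real directory.
DATA_DIR="${REPLICATE_DATA_DIR:-/tmp/replicate-demo/data}"
BACKUP="/tmp/replicate-data-$(date +%Y%m%d).tar.gz"
mkdir -p "$DATA_DIR"     # demo only: make sure the path exists for this sketch
tar -czf "$BACKUP" -C "$(dirname "$DATA_DIR")" "$(basename "$DATA_DIR")"
echo "backed up $DATA_DIR to $BACKUP"
```

Falling back is then just restoring this archive over the data directory and starting Replicate again.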
Thank you everyone for attending. And again it was my pleasure, hopefully, the information I shared was beneficial take care and have a great day.
Thank you everyone. We hope you enjoyed this session. Thank you to Bill for presenting. We always appreciate getting experts like Bill to share with us.
Here is our legal disclaimer.
And thank you once again. Have a great rest of your day.