powerqlik
Contributor III

QlikView Services CAL issue

Can someone please help?  I upgraded QlikView to the latest version (May 2022) on a Windows 2008 R2 server hosted on Amazon EC2.  Then I created another instance in Amazon, inside a VPC, running Windows Server 2019.  These are the steps I followed:

1. Took a copy of the QlikTech folder and a copy of the Documents folder (where all the QVD dashboard files are stored) from the 2008 server.
2. Stopped the QlikView services and shut down the old Amazon EC2 Windows 2008 server.
3. Stopped the QlikView services on the Amazon VPC Windows 2019 server.
4. Deleted the Windows 2019 server's existing QlikTech folder and copied the QlikTech folder from the Windows 2008 server in its place.
5. Put the QlikView Documents folder with all the old dashboards onto the Windows 2019 server.
6. Licensed QlikView with the old license number that I used on the 2008 server.
7. Restarted all QlikView services.

The issue is that all the Named CALs that previously had complete access to all the QVD dashboard files on Windows 2008 now can't access QlikView AccessPoint; it appears they were not carried over, and only the "administrator" account can get in.  I have 2 other Named CAL users under the Administrators group in Local Users and Groups on the 2008 server, and neither of them can access anything on the 2019 server.  I ran lusrmgr on the 2019 server, and it doesn't look like any of the Named CALs or User CALs were carried over from the 2008 server, even though I copied the QlikTech folder and the QVD folder that holds all the QlikView documents.  What am I doing wrong?  Below is my setup.

Another question: I am unable to reach the public DNS address to test the Document CALs that I assigned to an outside user.  The link won't open and says the site can't be reached:  ec2-xx-xx-xxx-xx.compute-1.amazonaws.com 

The link used to be different on the EC2-Classic instance with the 2008 server, and it opened without problems and let the Document CAL user in; however, since I stopped that server and now run the 2019 server in a VPC, it won't give me access online.

powerqlik_1-1659405844701.png

 

1 Solution

Accepted Solutions
marcus_sommer

I have no real experience with AWS or similar cloud environments, with using containers/instances for the various services, or with distributed/clustered QlikView installations. But I think the multiple instances here are regarded as multiple servers from a QlikView point of view.

Independent of the type (and distribution) of the installation, I suggest replicating it completely 1:1: no change to the number of servers or instances, to the QlikView release, to the program data, to other configurations, or to any data and applications - a complete mirror of the environment - except for the unavoidable renaming of all server-name references within the various config files. The only change in your case would be the upgraded OS release.

The aim is a smooth transition from one environment to the other. Since only the OS changes, I would expect the new environment to run immediately without any issues. But if not, you could easily switch back to the old environment. By applying the OS upgrade this way, any issue that occurs is directly related to the changed OS, which is much easier to detect and solve than if various other changes could influence each other and create a cascade of dependencies whose individual parts might be very hard to pin down.

I understand it sounds cumbersome to replicate everything - all data and applications included - but in many scenarios it is the most efficient way to do the job, because otherwise most of the time is spent on troubleshooting afterwards rather than on carefully preparing the task (deeply reviewing the old environment + updating all documentation + some cleaning + a full BACKUP + planning the new environment + creating/preparing the new environment + starting everything step by step). The final move can then be done in a nightly session, over a weekend, or during similar suitable downtime. In a nightly session there is not much time to check everything, but over a weekend there is also time to fix issues if something doesn't work at once.

- Marcus


5 Replies
marcus_sommer

The general approach of migrating a QlikView environment from one server to another by copying the QlikView program data plus all related data and applications does work. I have done it multiple times, and in most cases it went very smoothly.

Important points: stop the QlikView services before starting the copy tasks; use the identical folder structure for installing QlikView and for all data/applications (otherwise more or less effort is needed to adjust all the paths); and rename the old server name to the new server name within many of the config files of the various QlikView services (that's the hardest part - searching with CTRL + F for the old name in two or three dozen files and replacing every occurrence) - and, most importantly, do this BEFORE starting the QlikView services for the first time on the new server.
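That search step can be automated. A minimal sketch, assuming the config files live under the usual C:\ProgramData\QlikTech tree (the root path and extension list are assumptions - adjust both to your installation):

```python
import pathlib

def find_server_references(root, old_name):
    """Return every .config/.xml/.ini file under `root` that still
    mentions `old_name` (case-insensitive)."""
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in {".config", ".xml", ".ini"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip locked/unreadable files
        if old_name.lower() in text.lower():
            hits.append(str(path))
    return sorted(hits)

# e.g. find_server_references(r"C:\ProgramData\QlikTech", "OLD-SERVER-NAME")
```

This only lists the files; do the actual replacing by hand so you can see each occurrence in context before changing it.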

Beside this, there may be more things to save and transfer from the old environment to the new one, for example installed database drivers, user settings for Excel or the PDF printer, the batch-settings.ini of the Distribution Service (stored within Windows), any Windows tasks in use, and similar items. Without proper documentation it is easy to overlook which additional measures exist.

In your case it's difficult to say what went wrong. It is very important that the multiple QlikView services are not only running but can also communicate with each other. This happens per HTTP on various ports, and various network/security settings may prevent that communication: restricted ports, firewall rules, proxy settings, load balancers, and so on.
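A rough sketch of that kind of connectivity check - the port numbers below are the usual QlikView defaults, not values read from your configuration, so verify them against your own config files first:

```python
import socket

# Usual default ports of the QlikView services (an assumption - verify
# against your own installation's config files):
SERVICE_PORTS = {
    "QlikView Server (QVS)": 4747,
    "Management Service (QMS)": 4799,
    "Distribution Service (QDS)": 4720,
    "Directory Service Connector (DSC)": 4730,
}

def check_services(host, ports=SERVICE_PORTS, timeout=2.0):
    """Return {service_name: True/False} for a plain TCP connect to each port."""
    results = {}
    for name, port in ports.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[name] = True
        except OSError:
            results[name] = False
    return results
```

A port that refuses the TCP connect from the machine where another service runs points at a firewall/security-group rule rather than at QlikView itself.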

Given that it works for the admin but not for the normal users, the things mentioned above are probably not the cause. One screenshot showed that you use a local directory for authorization - so make sure that all your normal users also have the appropriate access rights on the root/mounted folder. Another possibility is that a Section Access in the applications prevents the access.

Regarding your first screenshot: try replacing localhost with the server name or the server's IP (if the server name doesn't work but the IP does, it's a sign that DNS resolution for the server name isn't working).
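That name-vs-IP test can be scripted with a tiny lookup helper (a minimal sketch; the host name in the comment is a placeholder):

```python
import socket

def resolve(name):
    """Return the IPv4 address a hostname resolves to, or None if DNS fails."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

# If resolve("MYSERVER") returns None but the services answer on the raw IP,
# DNS resolution for the server name is the problem.
```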

- Marcus

powerqlik
Contributor III
Author

Hi Marcus, I am relatively new to QlikView and in my head this is a big task - can you please be more specific?  Which folders need to be moved from the old server to the new server?  I thought only the QlikTech folder (in ProgramData) in Windows and the documents folder (D:\Qlikview\Dashboards) needed to be copied from the old server and placed on the new server under a different name (e.g. Qliktech_Old).  From there, we take only the .pgo files and the Ful.dat files and copy them to the new server using the same file path.  Is that right, or should we just replace all the contents on the new server by deleting its QlikTech folder and replacing it with the old server's folder?

Also, attached below are the 2008 Management Console and the 2019 Management Console.  The only difference I see between the two is that the "QMS" Service Name and "Running on" show a different name in the 2019 version.  Do I need to change the name to match what was on the 2008 server?  If so, how would I do this?

The other difference between the 2008 server and the 2019 server is the location of the QlikView Assigned CALs.  In 2008, the assigned CALs reference a machine name tied to the Publisher license, but in 2019 the assigned CALs reference two locations that I can't make sense of (Ec2amaz-6f and ip-0a519 - the latter looks like the Local Directory name from the DSC, and it matches on both servers).  Side note: when I try entering a Publisher license it says I don't have an enterprise account, so I don't know if that's related.

Everything else looks identical to the old server - please see below, and thank you!

OLD Qlikview Management Console

 

NEW Qlikview Management Console

 

Old Server 2008

 

New Server 2019

 

2019 and 2008 Server (Same)

 

2019 and 2008 Server (Same)

 

2019 and 2008 Server (Same)

 

2019 and 2008 Server (Same)

 

marcus_sommer

Taking only parts of the program data - here, the mentioned .pgo files - can cause issues. At least for us it would be a problem, because we use DMS authentication with custom users, and the relevant access information is stored within the meta files. I'm not sure whether it's also relevant for other authentication methods.

Further, it looked as if the services were running on multiple servers - in both the old and the new environment. Were all the old servers migrated equally to new servers, with all their program data transferred? Also, with multiple servers the renaming of server names mentioned above is more difficult, because you now need to make sure each reference points to the right server.

Beside this, I noticed that on your sixth screenshot the Directory Service Connector references ...864, while the second screenshot shows the DSC running on localhost - this may be a mismatch.

Ideally, you replicate the new environment 1:1 from the old one - at least as a first step. Any other adjustments - distributing the services differently across servers, another folder structure or authentication method, upgrading QlikView to a new release, or any other change - should be done afterwards. This may sound a bit cumbersome, but it simplifies matters, especially if it really comes to troubleshooting.
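For the wholesale replace (rather than cherry-picking .pgo files), a minimal sketch - the folder paths in the example are placeholders, and the QlikView services must be stopped on both machines before copying:

```python
import pathlib
import shutil

def mirror_folder(src, dst):
    """Replace the folder tree at `dst` with an exact copy of `src`.
    Run only while the QlikView services are stopped."""
    dst = pathlib.Path(dst)
    if dst.exists():
        shutil.rmtree(dst)      # drop the new server's folder first
    shutil.copytree(src, dst)   # then mirror the old one 1:1

# e.g. mirror_folder(r"D:\Backup\QlikTech_Old", r"C:\ProgramData\QlikTech")
```

Take a full backup of the destination before running anything like this - the removal step is deliberate (a merge would leave stale files from the fresh install behind), which also makes it destructive.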

- Marcus

powerqlik
Contributor III
Author

Hi Marcus, so you are saying that rather than copying the .pgo and Ful.dat files from the C:\ProgramData\QlikTech folder and the documents folder housing the .qvd dashboard files, I would need to replace the entire contents of those two folders with the files from the old Windows 2008 server?

As for authorization, we are using NTFS authorization.  As for the old environment, we had only one server: the Windows 2008 machine on Amazon EC2-Classic, known as "New Fast Server"; the new Windows 2019 server on the VPC is known as "Public Instance 2A".  I stopped the 2008 "New Fast Server".  I didn't really "migrate" the server; I created the 2019 "Public Instance 2A" from scratch, since the only thing we use that server for is running QlikView.  Was I supposed to copy the entire C:\ProgramData folder over, or just the QlikTech folder? 

Regarding your comment on the 864 mismatch: it's set like that on the 2008 server and it was working fine (see below).  If I need to correct the mismatch, what would I enter, and where?  (I am a newbie here, so forgive any silly questions.)

2019 Server

 

 

2008 Server

 

2008 Server

 
