PAM Appliances Migration to a different ESX server environment


Article ID: 221388


CA Privileged Access Manager (PAM)


We are planning to migrate our PAM appliances to an HCI environment. We would like to know whether there are any foreseen issues with this, and the best option for migrating the PAM clustered appliances without downtime. The network configuration (IPs, subnet, and gateway) will not change, but the MAC addresses of the interfaces are expected to change. Will this cause problems?


Release : 3.4

Component :


The MAC addresses are used to calculate the hardware ID that is used for licensing. In older releases (PAM 2.X), a change in the MAC address, or in the number of interfaces defined for a PAM VM, would break the license. This is no longer the case with PAM releases supported as of August 2021: PAM updates the license file at boot time, and although you will notice that the hardware ID has changed, PAM should function normally. As with the application of maintenance or upgrade patches, it is prudent to make sure a recent SSH debug patch is installed, and that the "Remote CA PAM Debugging Services" option on the Configuration > Diagnostics > System page is turned on, before starting the migration, so that PAM Support can access the VM via SSH should any problem arise after the migration.
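To illustrate why a changed MAC address produces a different hardware ID, here is a minimal sketch assuming the ID is derived from a hash over the interface MAC addresses. PAM's actual derivation is internal and not documented here, so the `hardware_id` function name and the hashing scheme are purely illustrative:

```python
import hashlib

def hardware_id(mac_addresses):
    """Illustrative only: derive a stable ID from a set of MAC addresses.

    PAM's real algorithm is not public; this merely shows why changing a
    MAC address (or the number of interfaces) changes the resulting ID,
    which is what forces the license file to be updated at boot time.
    """
    canonical = ",".join(sorted(m.lower() for m in mac_addresses))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# MACs before and after migration to the new ESX/HCI host (example values):
old_id = hardware_id(["00:50:56:aa:bb:cc", "00:50:56:aa:bb:cd"])
new_id = hardware_id(["00:50:56:11:22:33", "00:50:56:11:22:34"])
print(old_id != new_id)  # the IDs differ, so the license data must be refreshed
```

The point of the sketch is only that the ID is a pure function of the MACs: same interfaces in, same ID out, so a host that assigns new MACs necessarily yields a new hardware ID.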

When migrating a cluster site, make sure you have at least three nodes in the primary site so that when one goes down the remaining nodes still have a quorum. Migrate one node at a time. If the migration is fast, the node can be powered off while still in the cluster, migrated, and powered back on. Since the network configuration did not change, it should sync with the other nodes right away. Once the node is in sync and fully functional, move on to the next node. If the migration is expected to take time, it may be better to have one node at a time leave the cluster, migrate it, and then have it rejoin the cluster. While a node is out of the cluster, make sure it is in maintenance mode and the expired password processor is disabled, and delete any scheduled jobs on the node once it comes back up, so that it does not update any target account passwords while out of the cluster.
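The three-node requirement follows from simple majority arithmetic. This sketch (the `has_quorum` helper is hypothetical, not a PAM command) shows why a two-node primary site cannot tolerate taking a node down for migration, while a three-node site can:

```python
def has_quorum(total_nodes, nodes_down):
    """A cluster keeps quorum while a strict majority of its nodes remain up."""
    remaining = total_nodes - nodes_down
    return remaining > total_nodes // 2

# With only 2 primary-site nodes, powering one off loses quorum:
print(has_quorum(2, 1))  # False
# With 3 nodes, one can be migrated while the other two keep quorum:
print(has_quorum(3, 1))  # True
# But migrating more than one node at a time still breaks quorum:
print(has_quorum(3, 2))  # False
```

This is why the article insists on migrating strictly one node at a time and waiting for each node to fully resync before moving to the next.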