Before swapping a persistent disk, and while your deployment is still in a working state, it is recommended to back up your VMs with
BOSH Backup and Restore (BBR) or through some other means.
Prerequisites:
- To minimize downtime, ensure every release referenced in the manifests you are restoring (for example, matching concourse, garden-runc, and postgres versions) is uploaded to the BOSH Director before beginning this procedure. Most releases can be found at http://bosh.io.
- To minimize downtime, ensure all stemcells with versions matching those referenced in the release manifests are uploaded to the BOSH Director before beginning this procedure.
- Ensure that all VMs and disks that are part of the releases being restored are removed from vMotion or live-migration policies.
- Ensure any variables referenced in the manifests being restored are defined either in the deploy command or in CredHub (examples: ((postgres_password)), ((token_signing_key))).
- Know the names of all the deployment's VMs as found in vSphere and their jobs (Concourse for VMware Tanzu example: know which VM is the db, which is the web, and which are the workers).
- All releases being restored must be the same versions as those in the current deployment.
- The persistent disk size defined in the manifest must be the same for the current brownfield persistent disk and the new greenfield persistent disk.
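The upload prerequisites above can be sketched with the BOSH CLI. The release and stemcell versions below are placeholders; substitute whatever versions your brownfield manifest pins.

```shell
# Upload each release pinned in the deployment manifest (versions are examples).
bosh upload-release "https://bosh.io/d/github.com/concourse/concourse-bosh-release?v=7.11.2"
bosh upload-release "https://bosh.io/d/github.com/cloudfoundry/garden-runc-release?v=1.59.0"
bosh upload-release "https://bosh.io/d/github.com/cloudfoundry/postgres-release?v=52"

# Upload the stemcell version the manifest references (example: Jammy on vSphere).
bosh upload-stemcell "https://bosh.io/d/stemcells/bosh-vsphere-esxi-ubuntu-jammy-go_agent?v=1.506"

# Confirm what the Director now has.
bosh releases
bosh stemcells
```
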
We will refer to the brownfield VM persistent disk with existing data as
disk-<brownfield-uuid>.vmdk. This existing deployment will be referred to as the
brownfield deployment.
We will refer to the new BOSH deployment VM persistent disk without existing data as
disk-<greenfield-uuid>.vmdk. This new deployment will be referred to as the
greenfield deployment.
Proceed with these steps only after all prerequisites are complete.
Shut down the current brownfield deployment:
- Shut down all VMs in vSphere using Shut Down Guest OS. (Concourse for VMware Tanzu example: make sure to shut down all worker, web, db, and any other VMs in the deployment.)
- Ensure all the current brownfield VMs have shut down successfully. We want to avoid any IP address conflicts when we redeploy.
- Proceed with the "Copying the persistent disk from the VM in vSphere" steps.
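If you drive vSphere from the command line, the guest shutdowns above can be scripted with govc; the VM names here are hypothetical, and GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD are assumed to be exported already.

```shell
# Gracefully shut down each brownfield VM via VMware Tools (names are examples).
for vm in concourse-db concourse-web concourse-worker-0 concourse-worker-1; do
  govc vm.power -s "$vm"
done

# List anything still powered on; this should return no brownfield VMs.
govc find / -type m -runtime.powerState poweredOn
```
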
Copying the persistent disk from the VM in vSphere:
- Make note of the brownfield VM's persistent disk name. The disk's name should be in the format disk-<brownfield-uuid>.vmdk.
- Clone the brownfield VM's persistent disk in vSphere. It should be the third disk listed in the VM's disks under Settings, and it will match the size of the persistent disk defined in the manifest used for this deployment. There should be no need to detach the persistent disk to complete the clone operation.
- Make note of the cloned persistent disk's name and datastore location as found in vSphere (disk-<cloned-brownfield-name>.vmdk).
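If cloning through the vSphere UI is inconvenient, the same clone can be made from an ESXi shell with vmkfstools. The datastore, folder, and UUID below are placeholders for the values noted above.

```shell
# Full-clone the brownfield persistent disk (paths and UUIDs are examples).
# -i performs a clone; the source can stay attached to the powered-off VM.
vmkfstools -i \
  /vmfs/volumes/datastore1/brownfield-vm/disk-<brownfield-uuid>.vmdk \
  /vmfs/volumes/datastore1/brownfield-vm/disk-clone-backup.vmdk \
  -d thin
```
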
Deploy and shut down a new deployment:
- Deploy a new deployment using the same manifest and release versions as the existing (brownfield) deployment.
- Ensure the new deployment is healthy with bosh vms --vitals before proceeding!
- Use bosh -d <deployment> stop to stop all monit processes on all VMs in new deployment.
- In vSphere, shut down guest OS for all greenfield (just redeployed) deployment VMs. Ensure the VMs shut down successfully before proceeding!
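A minimal sketch of the deploy-then-quiesce sequence above, assuming the original manifest file; the deployment name and manifest filename are examples.

```shell
# Redeploy with the original manifest so release/stemcell versions match.
bosh -d concourse deploy concourse.yml

# Confirm every instance is running and healthy before stopping anything.
bosh -d concourse vms --vitals

# Stop all monit processes on every VM; without --hard this keeps the VMs.
bosh -d concourse stop
```
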
Replace greenfield VM persistent disk with cloned brownfield (existing) VM persistent disk:
- Move the cloned brownfield VM persistent disk to the datastore and folder of the greenfield VM persistent disk in vSphere.
- Detach the greenfield VM's persistent disk in vSphere. This should be the third disk in the VM's Settings (it will match the size of the persistent disk as defined in the deployment manifest). Note the datastore location and name (with extension) of the greenfield persistent disk in vSphere; this name will be used in a later step.
- Rename the detached greenfield persistent disk in vSphere to disk-<greenfield-uuid>_new.vmdk.
- Rename the cloned brownfield persistent disk in vSphere to disk-<greenfield-uuid>.vmdk.
- Ensure disk-<greenfield-uuid>.vmdk now sits in the same datastore and folder as the detached greenfield persistent disk (disk-<greenfield-uuid>_new.vmdk).
- Attach disk-<greenfield-uuid>.vmdk to the greenfield VM.
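The move/rename/attach steps above can also be scripted with govc once the greenfield disk has been detached in vSphere. Every datastore, folder, VM name, and UUID here is a placeholder for the values you noted earlier.

```shell
# Move the cloned disk into the greenfield VM's folder under a temporary name.
govc datastore.mv -ds datastore1 \
  "clone-folder/disk-<cloned-brownfield-name>.vmdk" \
  "greenfield-vm/disk-<greenfield-uuid>.vmdk.tmp"

# Set the original greenfield disk aside, then give the clone its name.
govc datastore.mv -ds datastore1 \
  "greenfield-vm/disk-<greenfield-uuid>.vmdk" \
  "greenfield-vm/disk-<greenfield-uuid>_new.vmdk"
govc datastore.mv -ds datastore1 \
  "greenfield-vm/disk-<greenfield-uuid>.vmdk.tmp" \
  "greenfield-vm/disk-<greenfield-uuid>.vmdk"

# Attach the renamed clone to the powered-off greenfield VM.
govc vm.disk.attach -vm greenfield-db \
  -disk "greenfield-vm/disk-<greenfield-uuid>.vmdk"
```
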
Powering on the new deployment:
- Power on new deployment VMs in vSphere.
- All VMs should be in a stopped state in bosh vms --vitals before proceeding.
- Execute bosh -d <deployment-name> start.
- All deployment VMs must be in a started state before continuing.
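The power-on verification above, sketched with the BOSH CLI; the deployment name is an example.

```shell
# After powering the VMs on in vSphere, instances should still show as stopped.
bosh -d concourse vms --vitals

# Restart all jobs; monit processes come up against the swapped disk.
bosh -d concourse start

# Every instance and process should now report "running".
bosh -d concourse instances --ps
```
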
Check deployment health (examples):
- If Concourse for VMware Tanzu, check that the existing pipelines appear in Concourse.
- If MySQL for VMware Tanzu, check the cluster status.
- Keep original deployment VMs in vSphere as backups until the new deployment's health is confirmed.
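A few hedged spot checks for the products named above; the fly target, Concourse URL, and deployment names are placeholders, and the mysql-diag invocation is an assumption that may vary by tile version.

```shell
# Concourse: existing pipelines should be listed after logging in with fly.
fly -t my-target login -c https://concourse.example.com
fly -t my-target pipelines

# MySQL for VMware Tanzu: check cluster state from a mysql VM
# (mysql-diag ships with the tile; treat the exact path as an assumption).
bosh -d mysql ssh mysql/0 -c "sudo mysql-diag"
```
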