Migrate Tanzu Kubernetes Grid Cluster nodes to another Datastore in the same vCenter

Article ID: 372575

Products

VMware Tanzu Kubernetes Grid, VMware Tanzu Kubernetes Grid Plus

Issue/Introduction

This article explains how to migrate TKG cluster VMs from an existing datastore to another datastore in the same vCenter.

If you have deployed a class-based or legacy plan-based cluster in Tanzu Kubernetes Grid and want to migrate the cluster VMs to a different datastore, follow the procedure referenced below.

Environment

VMware Tanzu Kubernetes Grid.

Resolution

If you have already deployed a TKG cluster on one datastore and have a second datastore in the same vCenter under the same compute cluster, you can migrate a class-based cluster by following the procedure below.

  1. Switch the kubectl context to the management cluster: kubectl config use-context <name-of-mgmt@context>
  2. Run kubectl get clusters -A and select the cluster you want to migrate from the output list.
  3. Run kubectl edit cluster <cluster-name> -n <namespace>
  4. Change the datastore path in the cluster object to the path of the second (new) datastore, as shown in the sketch after this list.
  5. Save the cluster object. This recreates the cluster nodes and provisions them on the new datastore.
  6. Run kubectl get machines -A | grep -i <cluster-name> to watch the new nodes being provisioned on the new datastore in the same vCenter.
  7. To validate, open the new datastore in the vCenter UI and check the VMs tab; your cluster VMs will be listed there.
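
For reference, in a class-based cluster the vCenter placement settings are typically defined under the vcenter variable in the Cluster object's topology section. The sketch below is illustrative only; the exact variable layout can vary between TKG versions, and the paths shown are placeholders:

    spec:
      topology:
        variables:
        - name: vcenter
          value:
            datacenter: /Datacenter
            datastore: /Datacenter/datastore/new-datastore   # update this path to the new datastore
            resourcePool: /Datacenter/host/Cluster/Resources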



If you want to migrate the TKG cluster VMs to a datastore that belongs to a different compute cluster, follow the procedure below.

 

  • Switch the kubectl context to the management cluster: kubectl config use-context <name-of-mgmt@context>
  • Run kubectl get clusters -A and select the cluster you want to migrate from the output list.
  • Run kubectl edit cluster <cluster-name> -n <namespace>
  • Change the datastore path to the new datastore.
  • Additionally, change the resourcePool path to the resource pool created in the new compute cluster (see the sketch after this procedure).

After editing, save the cluster object. This recreates the cluster nodes and provisions them on the new datastore.




*As the new compute cluster is part of the same datacenter, there is no need to change the datacenter in the cluster object.
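
As an illustrative sketch of the two edits for this case (paths are placeholders and the variable layout may differ between TKG versions):

    - name: vcenter
      value:
        datacenter: /Datacenter                               # unchanged: same datacenter
        datastore: /Datacenter/datastore/new-datastore        # new datastore
        resourcePool: /Datacenter/host/NewCluster/Resources   # resource pool in the new compute cluster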

Procedure to migrate a legacy plan-based cluster in Tanzu Kubernetes Grid to a different datastore in the same vCenter.



  • When you change the KCP object (kubeadmcontrolplanes) for the control plane, the old machines in the cluster are deleted and new ones are created, following the rolling update strategy. Similarly, when you edit the MD object (machinedeployments) for the worker nodes, the old machines are deleted and new ones are created.
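
You can watch the machines being replaced while the rolling update runs, for example (the namespace is a placeholder):

    kubectl get machines -n <namespace> -w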
 
 
  1. Set your kubectl context to the management cluster

    kubectl config use-context <your MC context name>

  2. Obtain the current vspheremachinetemplates used by your cluster VMs. Control plane and worker VMs use different templates:

    kubectl get vspheremachinetemplates -n <namespace> 

  3. Create new vspheremachinetemplates YAML files from the ones used for both the control plane (CP) and workers, e.g. as shown below:


    kubectl get vspheremachinetemplates -n tkg-system workload-control-plane-vmware-n7xua -o yaml > workload-control-plane-vmware-n7xua-newdatastore.yaml
     
    kubectl get vspheremachinetemplates -n tkg-system workload-worker-vmware-gskfv -o yaml > workload-worker-vmware-gskfv-newdatastore.yaml

  4. Edit the new vspheremachinetemplates YAML files for the control plane and worker:

    vim workload-control-plane-vmware-n7xua-newdatastore.yaml

    vim workload-worker-vmware-gskfv-newdatastore.yaml

    Change .metadata.name to your new vspheremachinetemplate name.

    Example: workload-control-plane-newdatastore

    Change .spec.template.spec.datastore to reflect your new datastore. You may also need to remove server-generated fields such as resourceVersion and uid from the copied files; otherwise the create in step 5 can be rejected by the API server:

    metadata:
      name: workload-control-plane-newdatastore
      namespace: tkg-system
      ownerReferences:
      - apiVersion: cluster.x-k8s.io/v1alpha3
        kind: Cluster
        name: workload
        uid: 150dc79d-9ab1-4ae3-8dc4-f2e4b50323c7
      resourceVersion: "24880211"
      uid: 797a2f90-f975-4c81-8d9d-249ceb2de712
    spec:
      template:
        spec:
          cloneMode: fullClone
          datacenter: /Datacenter
          datastore: /Datacenter/datastore/mydc




  5. Once the edits are done, create the new vspheremachinetemplates objects:


    kubectl apply -f workload-control-plane-vmware-n7xua-newdatastore.yaml
    kubectl apply -f workload-worker-vmware-gskfv-newdatastore.yaml


  6. Validate that the new vspheremachinetemplates objects are created:


    kubectl get vspheremachinetemplates -n <namespace>


  7. Now rotate the control plane nodes of the workload cluster to point to the new vspheremachinetemplate --> datastore.
         Identify the workload cluster kubeadmcontrolplanes object used by the control plane nodes. You will edit this.
         
         kubectl get kubeadmcontrolplanes -n <namespace>
       
         Edit it, changing the VSphereMachineTemplate name to the new template:
 
         kubectl edit kubeadmcontrolplanes -n <namespace> workload-control-plane-xxx
 
         Change .spec.infrastructureTemplate.name and the VMs will begin their rolling update.
 
         Example change below:

         
spec:
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: VSphereMachineTemplate
    name: workload-control-plane-newdatastore
    namespace: tkg-system
 
 
         Then repeat the same for the workload cluster worker nodes using the steps below. You can wait for the control plane nodes to finish first.
          Identify the workload cluster machinedeployments object used by the worker nodes. You will edit this.
         
          kubectl get machinedeployments -n <namespace>
         
          Edit it, changing the VSphereMachineTemplate name to the new template:
 
          kubectl edit machinedeployments -n <namespace> <machinedeployment-name>
 
          Change .spec.template.spec.infrastructureRef.name
           
          Example change below:
 
 
spec:
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
          name: workload-md-0
      clusterName: workload
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: VSphereMachineTemplate
        name: workload-worker-newdatastore
        namespace: tkg-system
      version: v1.20.1+vmware.2

 

Now validate that the datastore migration of the cluster VMs to the new target datastore completed successfully.
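
For example, you can confirm from the management cluster context that all machines were re-provisioned and are in the Running phase before checking the new datastore in the vCenter UI (the grep pattern is illustrative):

    kubectl get machines -n <namespace> | grep -i <cluster-name>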

 

**Note: By following this procedure, only the cluster VMs are migrated to another datastore. Persistent volumes are not migrated.

 




Additional Information


Migration of clusters across different vCenters is not tested and not supported.