VMware is aware of this issue, which affects TCA 2.1; it will be fixed in the upcoming TCA 2.1.1 release. Until then, the workaround below can be used to address the issue.
Workaround:
To prevent day-2 operation failures on such transformed clusters, the following workaround steps have to be executed once per management cluster after at least one cluster transformation has been performed.
Step 1. Collect the following values:
A. hcxUUID of the TCA-CP where the workload cluster is deployed
1. SSH login to TCA-M
2. Connect to mongo shell using command: mongo hybridity
3. Identify the vim name of the TCA-CP where the workload cluster is deployed, substitute it into the following query, and run it:
db.VimTenants.find({"vimName": "__fill_tca-cp_vim-name_here__"}).pretty();
4. The result will contain "hcxUUID"
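Steps A.2 through A.4 can also be scripted non-interactively. The sketch below is illustrative only: the vim name and the sample output line are hypothetical placeholders, and on a real TCA-M you would capture the line from the mongo query shown in the comment.

```shell
# On TCA-M you would capture the matching line with (vim name is a placeholder):
#   line=$(mongo hybridity --quiet --eval 'db.VimTenants.find({"vimName": "my-tca-cp"}).pretty()' | grep '"hcxUUID"')
# Hypothetical sample of that line, for illustration:
line='	"hcxUUID" : "20b1b606-4b45-4a0e-9e8c-2b1d6f0c3a11",'
# Extract just the UUID value from the JSON field
hcx_uuid=$(printf '%s\n' "$line" | sed -n 's/.*"hcxUUID"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
echo "$hcx_uuid"
```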
B. vCenter host of the TCA-CP where the workload cluster is deployed:
1. SSH login to the TCA-CP where the workload cluster is deployed
2. Connect to mongo shell using command: mongo hybridity
3. Run the following query:
db.ApplianceConfig.find({"section": "vcenter"}).pretty();
4. The result will contain "url", which will look like this: "https://__vcenter_fqdn_or_ip__". Extract the host value, which is __vcenter_fqdn_or_ip__
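Extracting the host value can be done with plain shell parameter expansion. A minimal sketch, using a hypothetical "url" value in place of the one returned by the ApplianceConfig query above:

```shell
# Hypothetical "url" value taken from the ApplianceConfig query result:
url="https://vcenter.example.com"
# Strip the scheme prefix to obtain the host value
vcenter_host=${url#https://}
echo "$vcenter_host"
```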
C. SHA1 Fingerprint of vCenter:
1. SSH login to the vCenter Server Appliance as the root user
2. Execute the following command in shell:
openssl x509 -in /etc/vmware-vpx/ssl/rui.crt -fingerprint -sha1 -noout
3. The result will contain the SHA1 fingerprint. It will look like this:
AB:26:CD:6A:82:09:36:C9:9B:A0:EF:90:3E:DF:73:A3:D4:37:68:58
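Note that openssl prefixes the value with "SHA1 Fingerprint="; only the part after the "=" is needed for the CR. A minimal sketch of stripping the prefix, shown against the example fingerprint above (on the appliance you would capture the real line as in the comment):

```shell
# On the vCenter appliance you would capture the openssl output with:
#   fp_line=$(openssl x509 -in /etc/vmware-vpx/ssl/rui.crt -fingerprint -sha1 -noout)
# Sample of that output, using the example fingerprint from above:
fp_line='SHA1 Fingerprint=AB:26:CD:6A:82:09:36:C9:9B:A0:EF:90:3E:DF:73:A3:D4:37:68:58'
# Keep only the value after the '=' sign
fingerprint=${fp_line#*=}
echo "$fingerprint"
```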
Step 2. Update the VCenterPrime CR:
1. SSH login, using the capv user, to the TKG management cluster that manages the concerned workload cluster
2. Execute the following command:
kubectl get tkc __fill_workload_cluster_name_here__ -n __fill_workload_cluster_name_here__ -oyaml
3. From the result, get the VCenterPrime name. It will be under spec > cloudProviders > primeRef > name and should look like this: p-59fc78cd83f58ea4781bfa1ce788c47f
4. Execute the following command next:
kubectl get VCenterPrime -n tca-system __fill_vcenterprime_name_here__ -oyaml
5. From the result, get the datacenter name. It will be under spec > subConfig > datacenter
6. Paste the following content into a new file (let's say filename.yaml), updating the placeholder fields with the values collected above:
apiVersion: telco.vmware.com/v1alpha1
kind: VCenterPrime
metadata:
  labels:
    tca-Id: __fill_hcxUUID_here__
    tca-datacenter: __fill_datacenter_name_here__
  name: __fill_vcenterprime_name_here__
  namespace: tca-system
spec:
  server:
    address: __fill_vcenter_host_here__
    credentialRef:
      kind: Secret
      # the following name should look like this: p-59fc78cd83f58ea4781bfa1ce788c47f-secret
      name: __fill_vcenterprime_name_here__-secret
      namespace: tca-system
  subConfig:
    datacenter: __fill_datacenter_name_here__
    thumbprint: __fill_sha1_fingerprint_here__
7. Execute the following command next:
kubectl delete VCenterPrime -n tca-system __fill_vcenterprime_name_here__
This should delete the VCenterPrime CR. (To confirm, run the command from Step 2.4 again; this time it should report that the resource was not found.)
8. Next, execute this:
kubectl apply -f filename.yaml
This should create the VCenterPrime CR. (To validate, run the command from Step 2.4 again. You can also check the status section of the output: "phase" should be "Succeeded".)
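The final validation can also be scripted. A minimal sketch, assuming the CR name from Step 2.3 and that the kubectl jsonpath output has been captured into the phase variable (the value shown here is a stand-in):

```shell
# On the management cluster you would read the phase with (CR name is the one from Step 2.3):
#   phase=$(kubectl get VCenterPrime -n tca-system p-59fc78cd83f58ea4781bfa1ce788c47f -o jsonpath='{.status.phase}')
# Substitute the captured value here; "Succeeded" is the expected terminal phase:
phase="Succeeded"
if [ "$phase" = "Succeeded" ]; then
  echo "VCenterPrime CR reconciled successfully"
else
  echo "VCenterPrime CR not ready, phase: $phase" >&2
fi
```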
After the workaround has been executed, day-2 operations on the transformed clusters will no longer fail.