Attempting to rehome a workload cluster using Workflow Hub fails with an error similar to the following:
Failed to Edit cluster. Reason: Request for moving cluster <cluster name> sub-workflow Failed. Response: {"errors":[{"errorCode":"INTERNAL_SERVER_ERROR","internalMessage":"Error executing workflow : errorCode: 7,errorMessage: Failed to validate job, error: failed to validate the moved cluster <cluster name>: TBR <tbr-bom-3.0.0-v1.26.8---vmware.1-tkg.2-tca.24420203> referenced by TcaKubernetesCluster <<cluster name>> doesn't exist in the target management cluster"}],"warnings":[]}/
[{"jobId":"xxxxxxxxxxxx-xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx","state":"SEND_MOVE_CLUSTER_REQUEST","didFail":true,"jobData":{"spanId":"0000000000000000","payload":{"spec":{"args":{"clusterName":"<cluster name>","sourceManagementClusterUUID":"xxxxxx-xxxx-xxxx-xxxx-d29e194xxxx","targetManagementClusterUUID":"xxxxxxxxxxxx-xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx"},"jobType":"PivotWorkloadCluster"},"metadata":{"name":"pivot-<cluster name>-1747910971"}},"tcaCpId":"xxxxxxxxxxxx-xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx","traceId":"00000000000000000000000000000000","intentId":"xxxxxxxxxxxx-xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx","metadata":{"name":"<cluster name>","tcaCpId":"xxxxxxxxxxxx-xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx","mgmtClusterName":"<management cluster>"},"response":{"errors":[{"errorCode":"INTERNAL_SERVER_ERROR","internalMessage":"Error executing workflow : errorCode: 7,errorMessage: Failed to validate job, error: failed to validate the moved cluster <cluster name>: TBR <tbr-bom-3.0.0-v1.26.8---vmware.1-tkg.2-tca.24420203> referenced by TcaKubernetesCluster <<cluster name>> doesn't exist in the target management cluster"}]},"clusterId":"xxxxxxxxxxxx-xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx","clusterName":"<cluster name>","originHcxUUID":"xxxxxxxxxxxx-xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx","mgmtClusterName":"<management cluster>","endpointUserInfo":[{"userName":"HybridityAdmin","endpointId":"xxxxxxxxxxxx-xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx"}],"sourceManagementClusterName":"ci-lab-mgmt-1-carrier","targetManagementClusterName":"<management cluster>"},"jobType":"ClusterAutomation","lastUpdated":"2025-05-22T10:49:34.401684","parentState":"MOVE_CLUSTER","workflowType":"RequestPivot","previousState":"BEGIN"}]
Affected VMware Telco Cloud Automation versions: 3.2, 3.3
The root cause of this issue is that the target management cluster does not contain the Tanzu BOM Release (TBR) required by the workload cluster. Despite this incompatibility, Workflow Hub does not perform sufficient pre-validation and still allows the “Move to” operation to be initiated. As a result, the backend validation fails due to the missing TBR, and the workload cluster enters a “move failed” state. This leads to a mismatch between the cluster’s actual state and what is displayed in the TCA UI.
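Before applying the workaround, it can help to confirm whether the TBR named in the error is actually present on the target management cluster. The commands below are a hedged sketch: the kubectl resource name for TBR objects is an assumption, so discover the real name first and adjust accordingly.
# Hedged sketch: "tbr" is an assumed resource name, not a confirmed kubectl resource;
# discover the actual name for TBR objects on the target management cluster first.
kubectl --kubeconfig <target management cluster kubeconfig> api-resources | grep -i tbr
# List the TBRs available on the target management cluster and compare against the TBR
# named in the error, e.g. tbr-bom-3.0.0-v1.26.8---vmware.1-tkg.2-tca.24420203.
kubectl --kubeconfig <target management cluster kubeconfig> get <tbr resource name>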
Workaround:
1. Connect to the TCA Manager PostgreSQL database:
kubectl exec -it postgres-0 -n tca-mgr -- psql -d tca -U tca_admin -h localhost
2. Check which management cluster is currently recorded for the workload cluster's node pool rows, then set it to the management cluster that actually manages the workload cluster. In the statements below, e2e-wc3 is an example workload cluster name and e2e-mc1 is an example management cluster name; substitute your own values.
select val->'metadata'->>'mgmtClusterName' from "K8sClusterDetails" where val->>'rowType'='nodePool' and val->'metadata'->>'clusterName'='e2e-wc3';
select val->'metadata'->>'mgmtClusterName' from "K8sClusterNodeConfiguration" where val->>'rowType'='nodePool' and val->'metadata'->>'clusterName'='e2e-wc3';
UPDATE public."K8sClusterDetails" SET val = jsonb_set(val, '{metadata, mgmtClusterName}', '"e2e-mc1"') where val->>'rowType'='nodePool' and val->'metadata'->>'clusterName'='e2e-wc3';
UPDATE public."K8sClusterNodeConfiguration" SET val = jsonb_set(val, '{metadata, mgmtClusterName}', '"e2e-mc1"') where val->>'rowType'='nodePool' and val->'metadata'->>'clusterName'='e2e-wc3';
3. Repeat the check and update for the add-on and cluster rows, then remove the pivot status flags left behind by the failed move:
select val->'metadata'->>'mgmtClusterName' from "K8sClusterDetails" where val->>'rowType'='addOn' and val->'metadata'->>'clusterName'='e2e-wc3';
UPDATE public."K8sClusterDetails" SET val = jsonb_set(val, '{metadata, mgmtClusterName}', '"e2e-mc1"') where val->>'rowType'='addOn' and val->'metadata'->>'clusterName'='e2e-wc3';
select val->'metadata'->>'mgmtClusterName' from "K8sClusterDetails" where val->>'rowType'='cluster' and val->'metadata'->>'name'='e2e-wc3';
UPDATE public."K8sClusterDetails" SET val = jsonb_set(val, '{metadata, mgmtClusterName}', '"e2e-mc1"') where val->>'rowType'='cluster' and val->'metadata'->>'name'='e2e-wc3';
UPDATE public."K8sClusterDetails" SET val = val #- '{status, hasPivotError}' where val->>'rowType'='cluster' and val->'metadata'->>'name'='e2e-wc3';
UPDATE public."K8sClusterDetails" SET val = val #- '{status, pivotAccepted}' where val->>'rowType'='cluster' and val->'metadata'->>'name'='e2e-wc3';
4. After the database remediation above, the workload cluster still shows a warning status in the TCA UI because of the stale error message; however, this does not block any cluster operations.
5. Ensure that the original error condition has been resolved (for example, the cluster has been upgraded to a TBR that exists on the target management cluster), then rehome the cluster again.
6. If the cluster status in the TCA UI remains in the Processing state, initiate a dummy cluster update without modifying any parameters. This triggers a cluster update request, but because no configuration changes are introduced, the backend completes it almost immediately. During this update, TCA Manager re-synchronizes its database with the latest status and conditions from the cluster, so the UI, which reads status data from the TCA-M database through the API, correctly reflects the Provisioned state. A query sketch for inspecting what the TCA-M database currently records follows this list.
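As a quick check after steps 1-3 and step 6, the following query is a hedged sketch that reuses the example names from the steps above (e2e-wc3) and the table already shown in the workaround; it prints which management cluster the TCA-M database records for the cluster row and the raw status object that the UI reads through the API. The exact fields under 'status' vary, but the hasPivotError and pivotAccepted flags removed in step 3 should no longer appear.
-- Hedged verification sketch: show the recorded management cluster and the status
-- object for the workload cluster row (substitute your own cluster name).
select val->'metadata'->>'mgmtClusterName' as mgmt_cluster,
       val->'status' as status
from "K8sClusterDetails"
where val->>'rowType'='cluster' and val->'metadata'->>'name'='e2e-wc3';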