When launching a terminal or running a diagnostics test from TCA against a cluster, the task's newly created pods fail with an ImagePullBackOff error.
Describing the pod and checking its events shows the following message:
Failed to pull image "vmwaresaas.jfrog.io/registry/kubectl:2.2.0": rpc error: code = Unknown desc = failed to pull and unpack image "vmwaresaas.jfrog.io/registry/kubectl:2.2.0": failed to resolve reference "vmwaresaas.jfrog.io/registry/kubectl:2.2.0": failed to do request: Head "https://vmwaresaas.jfrog.io/v2/registry/kubectl/manifests/2.2.0": dial tcp: lookup vmwaresaas.jfrog.io on 127.0.0.53:53: read udp 127.0.0.1:12345->127.0.0.53:53: i/o timeout
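The telltale detail in the event above is that the pod is pulling from the public registry (vmwaresaas.jfrog.io) instead of the local airgap server, and the DNS lookup times out because that public FQDN is unreachable in an airgapped environment. A minimal sketch of spotting this pattern (the event text is a trimmed sample of the message above, not a live query):

```shell
# Sample pod event text (trimmed copy of the error above).
EVENT='Failed to pull image "vmwaresaas.jfrog.io/registry/kubectl:2.2.0": dial tcp: lookup vmwaresaas.jfrog.io on 127.0.0.53:53: i/o timeout'

# A pull attempt against the public registry, rather than the airgap FQDN,
# suggests the airgap configuration is missing from the TCA-CP database.
if echo "$EVENT" | grep -q 'vmwaresaas.jfrog.io'; then
  DIAGNOSIS="public registry pull - airgap FQDN likely missing"
fi
echo "$DIAGNOSIS"
```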
Affected versions: 2.1, 2.2
The airgap server FQDN is missing from the database on the TCA Control Plane (TCA-CP) node.
This can happen after cluster certificates are renewed using the manual or automated workarounds provided.
Update the airgap FQDN for the management cluster in the TCA-CP database.
On the TCA-CP appliance, open the database shell:
mongo hybridity
Find the cluster's configuration document:
db.ApplianceConfig.find({"config.clusterName": "<CLUSTER_NAME>"}).pretty();
Set the airgap FQDN:
db.ApplianceConfig.update({"config.clusterName": "<CLUSTER_NAME>"}, {$set: { "config.airgapFqdn": "<AIRGAP_FQDN>" }});
Note: in the commands above, replace <CLUSTER_NAME> and <AIRGAP_FQDN> with the appropriate values.
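The same update can be run non-interactively with mongo --eval, which is convenient for scripting. A sketch assuming example placeholder values (the actual database call is commented out because it must run on the TCA-CP appliance):

```shell
# Example placeholder values - replace with your cluster name and airgap server FQDN.
CLUSTER_NAME="mgmt-cluster01"
AIRGAP_FQDN="airgap.example.com"

# Build the update statement that sets config.airgapFqdn for the cluster document.
UPDATE="db.ApplianceConfig.update({\"config.clusterName\": \"$CLUSTER_NAME\"}, {\$set: {\"config.airgapFqdn\": \"$AIRGAP_FQDN\"}});"
echo "$UPDATE"

# On the TCA-CP appliance (where the mongo shell is available), run:
# mongo hybridity --eval "$UPDATE"
```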
Examples:
Before adding the airgap FQDN:
{ "_id" : ObjectId("63ee0fced91947fc27f9b96d"), "config" : { "url" : "https://10.0.0.1:6443", "clusterName" : "<CLUSTER_NAME>", "UUID" : "a7640b6b-f3b5-4a6d-98f3-b19e16c6dbe2", "version" : "1.22", "kubeSystemUUID" : "9f24e947-61bc-46c3-874f-2a430554befb", "clusterType" : "WORKLOAD" } , "section" : "kubernetes", "enterprise" : "HybridityAdmin", "organization" : "HybridityAdmin", "lastUpdated" : ISODate("2024-03-20T07:59:24.211Z"), "lastUpdateEnterprise" : "HybridityAdmin", "lastUpdateOrganization" : "HybridityAdmin", "lastUpdateUser" : "HybridityAdmin", "creationDate" : ISODate("2023-02-16T11:13:18.802Z"), "creationEnterprise" : "HybridityAdmin", "creationOrganization" : "HybridityAdmin", "creationUser" : "HybridityAdmin", "isDeleted" : false }
After updating the airgap FQDN in the database:
{ "_id" : ObjectId("63ee0fced91947fc27f9b96d"), "config" : { "url" : "https://10.0.0.1:6443", "clusterName" : "<CLUSTER_NAME>", "kubeconfig" : "", "UUID" : "a7640b6b-f3b5-4a6d-98f3-b19e16c6dbe2", "version" : "1.22", "kubeSystemUUID" : "9f24e947-61bc-46c3-874f-2a430554befb", "clusterType" : "WORKLOAD", "airgapFqdn" : "<AIRGAP_FQDN>" } , "section" : "kubernetes", "enterprise" : "HybridityAdmin", "organization" : "HybridityAdmin", "lastUpdated" : ISODate("2024-03-20T07:59:24.211Z"), "lastUpdateEnterprise" : "HybridityAdmin", "lastUpdateOrganization" : "HybridityAdmin", "lastUpdateUser" : "HybridityAdmin", "creationDate" : ISODate("2023-02-16T11:13:18.802Z"), "creationEnterprise" : "HybridityAdmin", "creationOrganization" : "HybridityAdmin", "creationUser" : "HybridityAdmin", "isDeleted" : false }
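To confirm the update took effect, re-run the find command and verify the document now contains an airgapFqdn field. A minimal sketch of that check against a sample document (on the appliance you would pipe the real find output instead):

```shell
# Trimmed sample of what the updated document should contain.
DOC='{ "config" : { "clusterName" : "mgmt-cluster01", "airgapFqdn" : "airgap.example.com" } }'

# On the TCA-CP appliance, the real check would be:
# mongo hybridity --eval 'db.ApplianceConfig.find({"config.clusterName": "<CLUSTER_NAME>"}).pretty()' | grep airgapFqdn
if echo "$DOC" | grep -q '"airgapFqdn"'; then
  RESULT="airgapFqdn present"
else
  RESULT="airgapFqdn missing - update did not apply"
fi
echo "$RESULT"
```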