GemFire for K8s Operator installation issue - failed to pull and unpack image

Article ID: 385496


Products

VMware Tanzu GemFire

Issue/Introduction

The GemFire for K8s Operator installation fails to pull and unpack the image, even after all prerequisites have been satisfied, including accepting the terms and using the correct username and password.

Error Messages:

kubelet  Failed to pull image "registry.packages.broadcom.com/tanzu-gemfire-for-kubernetes/gemfire-controller:2.4.0": rpc error: code = DeadlineExceeded desc = failed to pull and unpack image "registry.packages.broadcom.com/tanzu-gemfire-for-kubernetes/gemfire-controller:2.4.0": failed to resolve reference "registry.packages.broadcom.com/tanzu-gemfire-for-kubernetes/gemfire-controller:2.4.0": failed to do request: Head "https://registry.packages.broadcom.com/v2/tanzu-gemfire-for-kubernetes/gemfire-controller/manifests/2.4.0": dial tcp aa.bbb.cc.ddd:xxx: i/o timeout   Normal   BackOff 

 

Environment

Applicable to all Tanzu GemFire on Kubernetes versions.

Cause

The error indicates that the kubelet is unable to pull the image "registry.packages.broadcom.com/tanzu-gemfire-for-kubernetes/gemfire-controller:2.4.0" because its request to the registry timed out. The problem is likely caused by one or more of the following (a quick check to distinguish them is sketched after the list):

1. Network connectivity problems: The connection to the registry may be slow or unstable, causing the pull to exceed the default timeout.
2. Registry issues: The Broadcom registry might be experiencing slowdowns or intermittent availability problems.
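
A quick way to tell these apart is to reproduce the failing request directly from an affected worker node. This is an illustrative sketch only: it reuses the manifest URL reported in the error message and assumes curl and nslookup are available on the node; the exact HTTP status returned for an unauthenticated request may vary.

# From an affected node, repeat the HEAD request that timed out:
curl -v --head https://registry.packages.broadcom.com/v2/tanzu-gemfire-for-kubernetes/gemfire-controller/manifests/2.4.0

# Confirm that the registry hostname resolves from the node:
nslookup registry.packages.broadcom.com

A prompt HTTP response (even an error status) shows that the registry is reachable from the node; a hang that ends in a timeout points to a network problem between the node and registry.packages.broadcom.com.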

 

Resolution

To resolve this issue, please try the following solutions:

1. Check network connectivity: Ensure that the worker nodes have a stable, fast connection to the Broadcom registry (a manual pull test from a node is sketched below).
2. Verify registry status: Contact Broadcom Support to check whether there are any known issues with their container registry.
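
If connectivity from the nodes looks healthy, a manual pull can confirm that the image reference and registry credentials work outside of the kubelet. This is a minimal sketch assuming a containerd-based node with crictl installed (or a Docker-based node for the second variant); <username> and <password> are placeholders for the Broadcom registry credentials.

# Containerd nodes: pull the image the same way the kubelet does, supplying credentials directly:
crictl pull --creds "<username>:<password>" registry.packages.broadcom.com/tanzu-gemfire-for-kubernetes/gemfire-controller:2.4.0

# Docker-based nodes:
docker login registry.packages.broadcom.com
docker pull registry.packages.broadcom.com/tanzu-gemfire-for-kubernetes/gemfire-controller:2.4.0

If the manual pull succeeds but the operator pod still fails, the problem is more likely in the cluster's image pull configuration than in the network path to the registry.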

If the problem persists after trying these solutions, investigate further by examining the kubelet logs and the events associated with the affected pods, for example:
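
The events and kubelet logs can be collected as follows; <namespace> and <gemfire-controller-pod> are placeholders for the namespace and pod name of the affected GemFire operator deployment.

# Pod events (the image pull errors shown above appear here):
kubectl describe pod <gemfire-controller-pod> -n <namespace>
kubectl get events -n <namespace> --sort-by=.lastTimestamp

# Kubelet logs on the node that runs the pod (systemd-based nodes):
journalctl -u kubelet --no-pager | grep -i gemfire-controller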

Additional Information