CWP deployer status incorrectly set to stopped after kubectl rollout

Article ID: 408318


Products

CA API Gateway

Issue/Introduction

When running kubectl rollout restart for a Gateway pod in Kubernetes, the CWP (cluster-wide property) "portal.deployer.status" becomes "stopped" even though deployments are working correctly.
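
For reference, the trigger is a standard rolling restart. The deployment name "gateway" below is illustrative; substitute the name and namespace used in your environment:

    # Triggers a rolling restart: the new pod starts before the old one stops
    kubectl rollout restart deployment/gateway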

Environment

Gateway 11.0

Cause

During a kubectl rollout restart, the following sequence affects the CWP "portal.deployer.status":

1. The old pod is still running the deployer service, so the CWP is set to "started".
2. The new pod starts and its deployer service tries to set the CWP to "started"; because it is already "started", no update message appears in the logs.
3. Once the new pod is fully running, the old pod is terminated. Stopping its deployer service sets the CWP to "stopped", about 10 seconds after the new pod set it to "started".
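
The overlap window that causes this can be observed by watching the pods while the restart runs. This is a minimal sketch; the label selector app=gateway is an assumption and should match your deployment's labels:

    # Watch pods cycle during the rolling restart; the old and new pods
    # run concurrently for a short time before the old one terminates
    kubectl get pods -l app=gateway -w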

Resolution

To prevent this from happening, consider not using kubernetes rollout restart: either scale the deployment down to 0 (and back up), or change the rollout strategy to Recreate.
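
A sketch of both alternatives, assuming a Deployment named "gateway" (adjust the name and namespace for your environment):

    # Option A: scale to 0 so the old pod stops before a new one starts
    kubectl scale deployment/gateway --replicas=0
    kubectl scale deployment/gateway --replicas=1

    # Option B: switch the rollout strategy to Recreate
    # (rollingUpdate must be cleared when changing the type)
    kubectl patch deployment/gateway -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'

With the Recreate strategy, subsequent restarts terminate the old pod before starting the new one, so the deployer service's "stopped" update lands before the new pod sets the CWP back to "started".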

The other option is to recycle the deployer service after the rollout has completed, so that it sets the CWP back to "started".
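
kubectl rollout status can be used to detect when the rollout has finished before the recycle; how the deployer service itself is restarted depends on your Gateway setup:

    # Blocks until the rollout has finished and the new pod is ready
    kubectl rollout status deployment/gateway
    # ...then recycle the deployer service so it re-sets the CWP to "started"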