After setting the grace-period of a Kubernetes cluster at the cluster level with the following command:
$ tkgi update-cluster <cluster-name> --kubelet-drain-grace-period 300 --compute-profile <compute-profile-name> --node-pool-instances <node-pool>:<number-of-instances>
which results in the following setting:
$ tkgi cluster <cluster-name> --details
PKS Version: 1.21.1-build.7
[..]
Kubernetes Settings Details:
Set by Cluster:
Kubelet Node Drain timeout (mins) (kubelet-drain-timeout): 15
Kubelet Node Drain grace-period (seconds) (kubelet-drain-grace-period): 300
Kubelet Node Drain ignore-daemonsets (kubelet-drain-ignore-daemonsets): true
Kubelet Node Drain delete-emptydir-data (formerly delete-local-data) (kubelet-drain-delete-local-data): true
Set by Plan:
Kubelet Node Drain force (kubelet-drain-force): true
Kubelet Node Drain force-node (kubelet-drain-force-node): false
you may want to remove the grace-period set at the cluster level so that the cluster uses the grace-period provided by the plan.
TKGi 1.2.x
Unfortunately, once you update the grace-period at the cluster level, there is no way to revert it so that it is managed by the plan again.
The only possible workaround is to set --kubelet-drain-grace-period to the same value that the plan defines.
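For example, assuming the plan's grace-period is 10 seconds (a hypothetical value; substitute the value actually configured for the plan), the cluster-level setting can be aligned with it:
$ tkgi update-cluster <cluster-name> --kubelet-drain-grace-period 10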
The plan-based setting applies until a change is made with update-cluster; after that, there is no available method to annul the cluster-level setting and return control to the plan defaults. The cluster can, however, be updated with the plan's default values by running the update-cluster command again.
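Once the update completes, the effective values can be re-checked with the same details command shown above; the grace-period will still be listed under "Set by Cluster", now matching the plan's value:
$ tkgi cluster <cluster-name> --details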