For management and workload legacy clusters using ytt overlay:
- Create a ytt overlay; replace 1.2.3.4 with your NTP server hostname/IP:
$ cat > ~/.tanzu/tkg/providers/ytt/03_customizations/add_ntp.yaml <<EOF
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
kubeadmConfigSpec:
#@overlay/match missing_ok=True
ntp:
enabled: true
servers:
- 1.2.3.4
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"}),expects="1+"
---
spec:
template:
spec:
#@overlay/match missing_ok=True
ntp:
enabled: true
servers:
- 1.2.3.4
EOF
- Dry-run the cluster creation and verify that your NTP server is configured:
$ tanzu cluster create dryrun-cluster --dry-run --file cluster-config.yaml > dryrun-cluster.yaml
$ cat dryrun-cluster.yaml | yq e 'select(.kind == "KubeadmControlPlane") | .spec.kubeadmConfigSpec.ntp' -
enabled: true
servers:
- 1.2.3.4
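- The same dry-run output can be checked for the worker bootstrap template that the overlay also patches (a hedged example; the output should mirror the control plane values):
$ cat dryrun-cluster.yaml | yq e 'select(.kind == "KubeadmConfigTemplate") | .spec.template.spec.ntp' -
enabled: true
servers:
- 1.2.3.4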
- Create the cluster
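A minimal sketch of the create command, assuming the same cluster-config.yaml used for the dry run and a hypothetical cluster name my-cluster:
$ tanzu cluster create my-cluster --file cluster-config.yaml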
- Once the cluster is created, there are additional data points for verifying that your NTP server was picked up.
- The NTP configuration is stored in the cloud-init script that is used to provision the IaaS VM. The TKG management cluster stores the cloud-init script in a Secret named after the corresponding Machine.
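Because the Secret is named after its Machine, you can list the Machines first to find the right Secret name (a hedged lookup, assuming your kubectl context points at the management cluster):
$ kubectl get machines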
$ kubectl get secrets mgmt-control-plane-98zlp -o json | jq '.data.value' -r | base64 -d | grep ntp -A 4
ntp:
enabled: true
servers:
- 1.2.3.4
- On the default Ubuntu node VMs, the NTP configuration is picked up by the chrony service; inspect it on the node:
$ cat /etc/chrony/chrony.conf | grep server
# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
# servers
server 1.2.3.4 iburst
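To confirm that chrony is actually syncing against the configured server, you can query the daemon on the node (a hedged check; chronyc ships with the chrony package):
$ chronyc sources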
For management and workload ClusterClass clusters:
1. When creating the cluster, add your NTP servers to the NTP_SERVERS variable in your cluster configuration file (see the Configuration File Variable Reference: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/2.3/tkg-deploy-mc/mgmt-deploy-config-ref.html#vsphere-11):
NTP_SERVERS: "1.2.3.4,1.2.3.5"
2. After the cluster is created, the resulting class-based object structure carries the NTP servers as a topology variable:
kind: Cluster
spec:
  topology:
    variables:
      - name: ntpServers
        value:
          - "1.2.3.4,1.2.3.5"
3. On the nodes of the cluster, the NTP server settings appear in the chrony configuration:
$ cat /etc/chrony/chrony.conf | grep server
# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
# servers
server 1.2.3.4 iburst
server 1.2.3.5 iburst
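You can also read the variable back from the live Cluster object (a hedged example, assuming the cluster is named my-cluster):
$ kubectl get cluster my-cluster -o yaml | yq e '.spec.topology.variables' -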
Workaround:
If the TKG clusters have already been created and you want to modify the NTP parameters without downtime, you can edit /etc/chrony/chrony.conf with the new NTP servers and restart chronyd.service. Be aware that this workaround is not persistent: the change is lost if the VM is recreated.
$ vim /etc/chrony/chrony.conf
# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
# servers
server 172.31.23.40 iburst
server 172.16.26.40 iburst
$ systemctl restart chronyd.service
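After the restart, you can confirm chrony has picked up the new servers (a hedged check; chronyc tracking reports the currently selected source):
$ chronyc tracking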