Use custom NTP server in TKG

Article ID: 337407


Products

VMware Tanzu Kubernetes Grid

Issue/Introduction

Symptoms:
Cluster API allows configuring custom NTP servers for Kubernetes cluster nodes. Documentation:
  • https://cluster-api.sigs.k8s.io/tasks/kubeadm-bootstrap.html?highlight=ntp#additional-features
This article explains how to configure a custom NTP server in legacy TKG clusters (using a ytt overlay) and in ClusterClass-based TKG clusters (using cluster configuration variables).

Environment

VMware Tanzu Kubernetes Grid 1.x

Resolution


For legacy management and workload clusters, using a ytt overlay:
  1. Create the ytt overlay; replace 1.2.3.4 with your NTP server's hostname or IP address:
    $ cat > ~/.tanzu/tkg/providers/ytt/03_customizations/add_ntp.yaml <<EOF
    #@ load("@ytt:overlay", "overlay")
    #@ load("@ytt:data", "data")
    
    #@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
    ---
    spec:
      kubeadmConfigSpec:
        #@overlay/match missing_ok=True
        ntp:
          enabled: true
          servers:
          - 1.2.3.4
    
    #@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"}),expects="1+"
    ---
    spec:
      template:
        spec:
          #@overlay/match missing_ok=True
          ntp:
            enabled: true
            servers:
            - 1.2.3.4
    EOF
  2. Dry-run the cluster creation and verify that your NTP server is configured:
    
    $ tanzu cluster create dryrun-cluster --dry-run --file cluster-config.yaml > dryrun-cluster.yaml
    
    $ cat dryrun-cluster.yaml | yq e 'select(.kind == "KubeadmControlPlane") | .spec.kubeadmConfigSpec.ntp' -
    enabled: true
    servers:
      - 1.2.3.4
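    
    The overlay also patches KubeadmConfigTemplate, which covers the worker nodes; you can verify that path the same way against the dry-run output:
    
    $ cat dryrun-cluster.yaml | yq e 'select(.kind == "KubeadmConfigTemplate") | .spec.template.spec.ntp' -
    enabled: true
    servers:
      - 1.2.3.4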
  3. Create the cluster
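    For example, for a workload cluster, reusing the configuration file from the dry run (the cluster name is a placeholder):
    
    $ tanzu cluster create my-cluster --file cluster-config.yaml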
  4. Once the cluster is created, there are additional data points to verify that your NTP server was applied.
    • The NTP configuration is stored in the cloud-init script used to provision the IaaS VM. The TKG management cluster stores the cloud-init script as a Secret named after the corresponding Machine.
      
      $ kubectl get secrets mgmt-control-plane-98zlp -o json | jq '.data.value' -r | base64 -d | grep ntp -A 4
      ntp:
        enabled: true
        servers:
          - 1.2.3.4
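      
      If you are unsure which Secret to read, you can resolve the bootstrap Secret name from the Machine object (the Machine name below is the example from above):
      
      $ kubectl get machine mgmt-control-plane-98zlp -o jsonpath='{.spec.bootstrap.dataSecretName}'
      mgmt-control-plane-98zlp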
    • The NTP configuration is picked up by the chrony service on the default Ubuntu VMs.
      
      $ cat /etc/chrony/chrony.conf | grep server
      # Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
      # servers
      server 1.2.3.4 iburst
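      
      You can also confirm that chrony is actually using the configured server; in the output, the ^* marker indicates the currently selected time source (output varies by environment):
      
      $ chronyc sources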


For management and workload ClusterClass clusters:

1. When creating the cluster, add your NTP servers to the cluster configuration file variables (see the Configuration File Variable Reference: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/2.3/tkg-deploy-mc/mgmt-deploy-config-ref.html#vsphere-11):

NTP_SERVERS: "1.2.3.4,1.2.3.5"


2. After the cluster is created, the resulting class-based Cluster object will include the NTP servers as a topology variable, like the following:

kind: Cluster
spec:
  topology:
    variables:
    - name: ntpServers
      value:
      - "1.2.3.4,1.2.3.5"



3. On the cluster nodes, the NTP server settings appear in the chrony configuration, like the following:

$ cat /etc/chrony/chrony.conf | grep server
# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
# servers
server 1.2.3.4 iburst
server 1.2.3.5 iburst



Workaround:
If the TKG clusters have already been created and you want to modify the NTP parameters without downtime, you can edit /etc/chrony/chrony.conf on each node with the new NTP servers and restart chronyd.service. Be aware that this workaround does not persist if the VM is recreated.

$ vim /etc/chrony/chrony.conf
# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
# servers
server 172.31.23.40 iburst
server 172.16.26.40 iburst
$ systemctl restart chronyd.service
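
To verify that chronyd picked up the new configuration after the restart, you can check its sync state (output varies by environment):

$ chronyc tracking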


Additional Information

Impact/Risks:

If DHCP does not provide NTP servers and you do not use the ytt overlay (legacy clusters) or the configuration variables (ClusterClass clusters), the NTP server configuration may not persist when the chronyd service is restarted.