Kubernetes Cluster Deployment on vSphere VMs with NFS Storage

Article ID: 368374


Updated On:

Products

VMware Telco Cloud Service Assurance

Issue/Introduction

This article contains instructions for deploying the Kubernetes cluster on vSphere VMs with the vSphere CSI StorageClass provisioner for VMware Telco Cloud Service Assurance.

 

Environment

v2.4 and above

Resolution

1) Log in to the deployment host.
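For example, from your workstation (assuming SSH access as root to the deployment host):
# ssh root@<deployment-host-ip>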

2) Download the tar.gz file of the deployment container from My Downloads in the customer portal onto the deployment host under the home directory. Typically, this package is named VMware-Deployment-Container-<VERSION>-<TAG>.tar.gz.
For example, VMware-Deployment-Container-2.4.2-451.tar.gz

# On deployment host
# podman load -i <dir/on/deployment host>/VMware-Deployment-Container-2.4.2-451.tar.gz

Verify the deployment container image
# On deployment host
# podman images
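Illustrative output (the image ID, creation time, and size below are placeholders):
REPOSITORY                                    TAG        IMAGE ID      CREATED      SIZE
projects.registry.vmware.com/tcx/deployment   2.4.2-451  <image-id>    <created>    <size>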

3) Download the K8s Installer from My Downloads in the customer portal onto the deployment host under the home directory. Typically, this package is named VMware-K8s-Installer-<VERSION>-<BUILD ID>.tar.gz.
For example, VMware-K8s-Installer-2.1.0-509.tar.gz
Note: To verify the downloaded package, run the following command on your deployment host.
# sha256sum VMware-K8s-Installer-<VERSION>-<BUILD ID>.tar.gz
This command displays the SHA256 fingerprint of the file. Compare this string with the SHA256 fingerprint provided next to the file in the VMware Customer Connect download site and ensure that they match.
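Alternatively, sha256sum can perform the comparison for you; a minimal sketch, where <expected-sha256> is the fingerprint published next to the file on the download site:
# echo "<expected-sha256>  VMware-K8s-Installer-<VERSION>-<BUILD ID>.tar.gz" | sha256sum -c -
VMware-K8s-Installer-<VERSION>-<BUILD ID>.tar.gz: OK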

4) Extract the K8s Installer as follows.
This creates a folder called k8s-installer under the home directory.
Note: Always extract the K8s Installer under the /root directory.
# tar -xzvf VMware-K8s-Installer-<VERSION>-<BUILD ID>.tar.gz

5) Change into the k8s-installer directory; it contains the folders named scripts and cluster.
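For example:
# cd /root/k8s-installer
# ls
cluster  scripts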

6) Launch the Deployment Container as follows:
# podman run \
--rm \
-v $HOME:/root \
-v $HOME/.ssh:/root/.ssh \
-v /var/run/podman.sock:/var/run/podman.sock \
-v $(which podman):/usr/local/bin/podman:ro \
-v /etc/docker:/etc/docker:rw \
-v /opt:/opt \
--network host \
-it projects.registry.vmware.com/tcx/deployment:2.4.2-451 \
bash

7) Update the deployment parameters by editing the /root/k8s-installer/scripts/ansible/vars.yml file inside the Deployment Container.

    • Configure the general parameters.
      Note: Set the values according to your environment.
      cluster_name: # Unique name for your cluster
      ansible_user: # SSH username for the Cluster Node VMs
      ansible_become_password: # SSH password for the Cluster Node VMs

Update the admin_public_keys_path parameter with the path of the public key generated during SSH key generation.
admin_public_keys_path: /root/.ssh/id_rsa.pub # Path to the SSH public key. This will be a .pub file under $HOME/.ssh/
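If you have not yet generated the SSH key pair, the sketch below shows one common way to create it and copy it to each cluster node for password-less SSH (the key type, path, and node IPs are placeholders, not the mandated procedure):
# ssh-keygen -t rsa -b 4096 -f /root/.ssh/id_rsa
# ssh-copy-id -i /root/.ssh/id_rsa.pub <your-SSH-username>@<cluster-node-IP>   # repeat for every control plane and worker node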

Update the control_plane_ips and worker_node_ips parameters in the following format.
control_plane_ips: # The list of control plane IP addresses of your VMs. This should be a YAML list.
- <IP 1>
- <IP 2>
...
- <IP n>

worker_node_ips: # The list of worker node IP addresses of your VMs. This should be a YAML list.
- <IP 1>
- <IP 2>
...
- <IP n>

    • Update the Deployment host IP and the YUM server IP address.
      ## Deployment host IP address
      ## Make sure the firewall is disabled on the deployment host
      # The IP address of your deployment host

      deployment_host_ip: <your-deployment-host-ip>
      ## default value is http. Use https for secure communication.
      yum_protocol: http
      # The IP address/hostname of your yum/package repository
      yum_server: <your-yum-server-ip>

    • Enter the default keepalived_vip as:
      keepalived_vip: "192.168.64.145"
    • For the Harbor Container Registry, uncomment and update the harbor_registry_ip parameter with the selected static IP address. The free static IP address must be in the same subnet as the management IPs of the Cluster Nodes.
      ### Harbor parameters ###
      ## The static IP address to be used for Harbor Container Registry
      ## This IP address must be in the same subnet as the VM IPs.
      harbor_registry_ip: <harbor-static-IPAddress>
    • Set the following parameter to a location that has sufficient storage space for storing all application data.
      Note: In the example below, ensure that the /mnt file system has at least 200 GB of storage space and 744 permissions.
      storage_dir: /mnt
      Note: Create the directory or partition specified in storage_dir on all nodes if it does not already exist (see the sketch after this list).
    • For storage related parameters, uncomment and set the following parameters to true.
      ### Storage related parameters ###
      use_external_storage: "true"
      install_vsphere_csi: "true"

A sample snippet of the vars.yml file (sample_vars.yml) is attached to this KB article.
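The storage_dir note above asks you to create the directory on all nodes; a minimal sketch run from the deployment host, assuming password-less SSH to each node (node IPs and username are placeholders):
# for NODE in <IP 1> <IP 2> <IP n>; do ssh <your-SSH-username>@${NODE} "sudo mkdir -p /mnt && sudo chmod 744 /mnt"; done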

8) Execute the prepare command inside the Deployment Container.
Note: If you used a non-empty passphrase when generating the SSH key (SSH key generation is required for password-less SSH communication), you must execute the following commands inside the Deployment Container before running the Ansible script.
# eval "$(ssh-agent -s)"
Agent pid 3112829
# ssh-add ~/.ssh/id_rsa
Enter passphrase for /root/.ssh/id_rsa: <== Enter the non-empty passphrase that was provided during SSH key generation
Identity added: /root/.ssh/id_rsa (root@<hostname>)

# cd /root/k8s-installer/
# export ANSIBLE_CONFIG=/root/k8s-installer/scripts/ansible/ansible.cfg
# ansible-playbook scripts/ansible/prepare.yml -e @scripts/ansible/vars.yml --become
Note: Some fatal messages may be displayed on the console during execution and are ignored by the Ansible script. These messages do not have any functional impact and can be safely ignored.

9) Execute the Kubernetes Cluster installation command inside the Deployment Container.
# cd $HOME/k8s-installer/
# ansible-playbook -i inventory/<your-cluster-name>/hosts.yml scripts/ansible/deploy_k8s.yml -u <your-SSH-username> --become -e @scripts/ansible/internal_vars.yml -e @scripts/ansible/vars.yml

10) Execute the post-install step with the vsphere-csi tag included:
# ansible-playbook -i inventory/<your-cluster-name>/hosts.yml scripts/ansible/post_install.yml -u <your-SSH-username> --become -e @scripts/ansible/internal_vars.yml -e @scripts/ansible/vars.yml --tags vsphere-csi

11) Export the kubeconfig and verify that the "tcx-standard-sc" StorageClass exists and that its "ReclaimPolicy" is set to "Retain".
Example:
# kubectl get storageclass
NAME              PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
tcx-standard-sc   csi.vsphere.vmware.com   Retain          Immediate           true                   3d20h
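If kubectl does not yet point at the new cluster, export the kubeconfig first; a minimal sketch, assuming the installer wrote the admin kubeconfig under /root/.kube (adjust the path to its actual location in your environment):
# export KUBECONFIG=/root/.kube/<your-cluster-name>
# kubectl get storageclass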


12) Execute the post-install step again, this time skipping the vsphere-csi tag.
# ansible-playbook -i inventory/<your-cluster-name>/hosts.yml scripts/ansible/post_install.yml -u <your-SSH-username> --become -e @scripts/ansible/internal_vars.yml -e @scripts/ansible/vars.yml --skip-tags vsphere-csi


13) Ensure that the Kubernetes installation is successful and that the message "k8s Cluster Deployment successful" is displayed on the console.
# kubectl get nodes
Note: Ensure that all the nodes are in the Ready state before starting the VMware Telco Cloud Service Assurance deployment.
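To block until every node reports Ready, a minimal sketch using kubectl's built-in wait:
# kubectl wait --for=condition=Ready nodes --all --timeout=600s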


14) Verify the Harbor pods are up and running.
# kubectl get pods | grep harbor


15) Once the Kubernetes deployment is complete, the next step is to deploy VMware Telco Cloud Service Assurance.

Attachments

sample_vars.yml