This article contains instructions for deploying a Kubernetes cluster on vSphere VMs with the vSphere CSI StorageClass provisioner for Telco Cloud Service Assurance v2.4 and above.
1) Log in to the deployment host.
2) Download the tar.gz file of the Deployment Container from My Downloads in the customer portal onto the deployment host, under the home directory. Typically, this package is named VMware-Deployment-Container-<VERSION>-<TAG>.tar.gz.
For example, VMware-Deployment-Container-2.4.2-451.tar.gz
# On deployment host
# podman load -i <dir/on/deployment host>/VMware-Deployment-Container-2.4.2-451.tar.gz
Verify the deployment container image:
# On deployment host
# podman images
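If the load succeeded, the deployment image should appear in the listing. The following output is illustrative only; the image ID, creation time, and size are placeholders, not actual values:
REPOSITORY                                    TAG        IMAGE ID     CREATED     SIZE
projects.registry.vmware.com/tcx/deployment   2.4.2-451  <IMAGE ID>   <CREATED>   <SIZE>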
3) Download the K8s Installer from My Downloads in the customer portal onto the deployment host, under the home directory. Typically, this package is named VMware-K8s-Installer-<VERSION>-<BUILD ID>.tar.gz.
For example, VMware-K8s-Installer-2.1.0-509.tar.gz
Note: To verify the downloaded package, run the following command on your deployment host.
# sha256sum VMware-K8s-Installer-<VERSION>-<BUILD ID>.tar.gz
This command displays the SHA256 fingerprint of the file. Compare this string with the SHA256 fingerprint provided next to the file in the VMware Customer Connect download site and ensure that they match.
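As a minimal sketch, the comparison can also be automated with sha256sum's check mode; <EXPECTED SHA256> is a placeholder for the fingerprint published next to the file on the download site:
# echo "<EXPECTED SHA256>  VMware-K8s-Installer-<VERSION>-<BUILD ID>.tar.gz" | sha256sum -c -
VMware-K8s-Installer-<VERSION>-<BUILD ID>.tar.gz: OK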
4) Extract the K8s Installer as follows. This creates a folder called k8s-installer under the home directory.
Note: Always extract the K8s Installer within the /root directory.
# tar -xzvf VMware-K8s-Installer-<VERSION>-<BUILD ID>.tar.gz
5) Change into the k8s-installer directory and verify that it contains the folders named scripts and cluster, as shown in the sketch below.
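A minimal sketch of this step, assuming the installer was extracted under /root as noted above:
# cd /root/k8s-installer
# ls
cluster  scripts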
6) Launch the Deployment Container as follows:
# podman run \
--rm \
-v $HOME:/root \
-v $HOME/.ssh:/root/.ssh \
-v /var/run/podman.sock:/var/run/podman.sock \
-v $(which podman):/usr/local/bin/podman:ro \
-v /etc/docker:/etc/docker:rw \
-v /opt:/opt \
--network host \
-it projects.registry.vmware.com/tcx/deployment:2.4.2-451 \
bash
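Because the host home directory is bind-mounted at /root (-v $HOME:/root), the extracted installer and its configuration files should be visible from inside the container; a quick sanity check, assuming the extraction location used above:
# ls /root/k8s-installer/scripts/ansible/vars.yml
/root/k8s-installer/scripts/ansible/vars.yml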
7) Update the deployment parameters by editing the /root/k8s-installer/scripts/ansible/vars.yml file inside the Deployment Container.
Update the parameter admin_public_keys_path with the path of the public key generated during SSH key generation.
admin_public_keys_path: /root/.ssh/id_rsa.pub # Path to the SSH public key. This will be a .pub file under $HOME/.ssh/
Update the control_plane_ips and worker_node_ips in the following format.
control_plane_ips: # The list of control plane IP addresses of your VMs. This should be a YAML list.
- <IP 1>
- <IP 2>
...
- <IP n>
worker_node_ips: # The list of worker node IP addresses of your VMs. This should be a YAML list.
- <IP 1>
- <IP 2>
...
- <IP n>
keepalived_vip: "192.168.64.145"
A sample snippet of the vars.yml file, sample_vars.yaml, is attached to this KB article; an illustrative sketch with hypothetical values follows.
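For illustration only, a minimal sketch of these parameters with hypothetical IP addresses (the addresses and node counts are assumptions; substitute the values for your environment and use the attached sample_vars.yaml as the reference):
admin_public_keys_path: /root/.ssh/id_rsa.pub
control_plane_ips:
- 192.168.64.101
- 192.168.64.102
- 192.168.64.103
worker_node_ips:
- 192.168.64.111
- 192.168.64.112
- 192.168.64.113
keepalived_vip: "192.168.64.145"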
8) Execute the prepare command inside the Deployment Container.
Note: If you used a non-empty passphrase during SSH key generation (required for password-less SSH communication), then you must execute the following commands inside the Deployment Container before running the Ansible script.
# eval "$(ssh-agent -s)"
Agent pid 3112829
# ssh-add ~/.ssh/id_rsa
Enter passphrase for /root/.ssh/id_rsa: <== Enter the non-empty passphrase provided during SSH key generation
Identity added: /root/.ssh/id_rsa (root@<hostname>)
# cd /root/k8s-installer/
# export ANSIBLE_CONFIG=/root/k8s-installer/scripts/ansible/ansible.cfg
# ansible-playbook scripts/ansible/prepare.yml -e @scripts/ansible/vars.yml --become
Note: Some fatal messages may be displayed on the console and ignored by the Ansible script during execution. These messages do not have any functional impact and can be safely ignored.
9) Execute the Kubernetes Cluster installation command inside the Deployment Container.
# cd $HOME/k8s-installer/
# ansible-playbook -i inventory/<your-cluster-name>/hosts.yml scripts/ansible/deploy_k8s.yml -u <your-SSH-username> --become -e @scripts/ansible/internal_vars.yml -e @scripts/ansible/vars.yml
10) Execute the post-install step with the vsphere-csi tag included:
# ansible-playbook -i inventory/<your-cluster-name>/hosts.yml scripts/ansible/post_install.yml -u <your-SSH-username> --become -e @scripts/ansible/internal_vars.yml -e @scripts/ansible/vars.yml --tags vsphere-csi
11) Export the kubeconfig and verify that the "tcx-standard-sc" StorageClass exists and that its ReclaimPolicy is set to "Retain"; a sketch of the export step follows the example output below.
Example:
# kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
tcx-standard-sc csi.vsphere.vmware.com Retain Immediate true 3d20h
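A minimal sketch of exporting the kubeconfig before running kubectl, assuming the installer wrote the admin kubeconfig to $HOME/.kube/config (this path is an assumption; adjust it to wherever the kubeconfig was generated in your environment):
# export KUBECONFIG=$HOME/.kube/config
# kubectl get storageclass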
12) Execute the post-install step again, this time skipping the vsphere-csi tag:
# ansible-playbook -i inventory/<your-cluster-name>/hosts.yml scripts/ansible/post_install.yml -u <your-SSH-username> --become -e @scripts/ansible/internal_vars.yml -e @scripts/ansible/vars.yml --skip-tags vsphere-csi
13) Ensure that the Kubernetes installation is successful and that the following message is displayed on the console: k8s Cluster Deployment successful.
# kubectl get nodes
Note: Ensure that all the nodes are in the Ready state (as in the illustrative output below) before starting the VMware Telco Cloud Service Assurance deployment.
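For illustration, the kubectl get nodes output should resemble the following; the node names, roles, ages, and version are placeholders, not actual values:
NAME               STATUS   ROLES           AGE   VERSION
<control-plane-1>  Ready    control-plane   20m   <version>
<worker-node-1>    Ready    <none>          18m   <version>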
14) Verify that the Harbor pods are up and running.
# kubectl get pods | grep harbor
15) Once the Kubernetes deployment is complete, the next step is to deploy VMware Telco Cloud Service Assurance.