This article describes how to plug in your own storage backend and Kubernetes volume provisioner into the CaaS installer.
This is a special case of the standard installation workflow.
The TCSA CaaS installer is a bundle of binaries, containers, scripts, and Ansible playbooks that installs a production-grade CaaS on virtual machines.
The CaaS components include Kubernetes, a container registry (Harbor) and other components that interface between the user's IaaS and Kubernetes.
TCSA 2.4.X
*********************************************************************************************************************
Download the installer bundle VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz to the deployment host, then verify its checksum and extract it:
$ sha256sum VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz
$ tar -xzvf VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz
$ docker load -i <download dir on the deployment host>/VMware-Deployment-Container-2.4.2-497.tar.gz
$ docker images
$ export DOCKER_HOME=$(docker run --rm localhost/deployment:2.4.2-497 sh -c 'echo $HOME')
$ docker run \
--rm \
-v $HOME:$DOCKER_HOME \
-v $HOME/.ssh:$DOCKER_HOME/.ssh \
-v /var/run/podman/podman.sock:/var/run/podman.sock \
-v $(which podman):/usr/local/bin/podman:ro \
-v /etc/docker:/etc/docker:rw \
-v /opt:/opt \
--network host \
-it localhost/deployment:2.4.2-497 \
bash
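Once inside the deployment container, you can optionally confirm that the installer contents and tooling are available. A minimal sanity check, assuming the installer bundle was extracted under $HOME as in the steps above:
$ ls $HOME/k8s-installer/scripts/ansible
$ ansible-playbook --version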
Note 1: The following are examples only. Please set these values according to your environment.
Note 2: It is the user's responsibility to save and secure the vars.yml file.
cluster_name: my-cluster # Unique name for your cluster.
ansible_user: my-user # SSH username for the VMs.
ansible_become_password: <your SSH user's sudo password> # sudo password for your SSH user on the VMs (needed unless the user has passwordless sudo).
admin_public_keys_path: $HOME/.ssh/id_rsa.pub # Path to the SSH public key (if you need passwordless SSH). This will be a .pub file under $HOME/.ssh/.
control_plane_ips: # The IP addresses of your control plane VMs. This should be a YAML list.
- <IP1>
- <IP2>
worker_node_ips: # The IP addresses of your worker node VMs. This should be a YAML list.
- <IP3>
- <IP4>
deployment_host_ip: # The IP address of your deployment host.
yum_protocol: # Protocol supported by the yum repository for communication.
yum_server: # The IP address/hostname of your yum/package repository for pulling packages required for installing Kubernetes.
keepalived_vip: # The keepalived virtual IP used for internal container registry HA. Set it to an available virtual IP if the default virtual IP is not available.
storage_dir: "/mnt"
use_external_storage: true # Set to true to bring your own storage backend and volume provisioner.
install_vsphere_csi: false # Set to false to skip deploying the default vSphere CSI driver.
# The vCenter settings below can be left empty when install_vsphere_csi is false.
vcenter_ip:
vcenter_name:
vcenter_username:
vcenter_password:
vcenter_data_centers:
-
vcenter_insecure:
datastore_url:
If you don't have free static IP addresses or if you are using DHCP for your VMs exclusively, do not configure 'harbor_registry_ip'.
Harbor will be exposed as a 'NodePort' Kubernetes service.
The access URL for Harbor would be 'https://<First-Control-Plane-Node-IP>:30001'.
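Once the deployment described below has completed, you can verify that the registry responds on the NodePort from the deployment host. This is a minimal sketch; the '-k' flag assumes Harbor is served with a self-signed certificate:
$ curl -k -s -o /dev/null -w '%{http_code}\n' https://<First-Control-Plane-Node-IP>:30001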
$ cd $HOME/k8s-installer/
$ export ANSIBLE_CONFIG=$HOME/k8s-installer/scripts/ansible/ansible.cfg LANG=en_US.UTF-8
$ ansible-playbook scripts/ansible/prepare.yml -e @scripts/ansible/vars.yml --become
Note: You can run or skip individual stages mentioned above by appending '--tags <tag>' or '--skip-tags <tag>' to the ansible-playbook command, where '<tag>' is the Ansible tag assigned to the stage/role.
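For example, to skip a single stage of the prepare playbook (the tag name is a placeholder; substitute the tag of the stage you want to skip):
$ ansible-playbook scripts/ansible/prepare.yml -e @scripts/ansible/vars.yml --become --skip-tags <tag>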
$ cd $HOME/k8s-installer/
$ ansible-playbook -i inventory/<your-cluster-name>/hosts.yml scripts/ansible/deploy_k8s.yml -u <your-SSH-username> --become -e @scripts/ansible/internal_vars.yml -e @scripts/ansible/vars.yml
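After the playbook completes, confirm that the cluster is healthy before deploying your volume provisioner. A minimal check, assuming kubectl and the cluster kubeconfig are available on the deployment host (otherwise run it from a control plane node):
$ kubectl get nodes -o wide
$ kubectl get pods -A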
Follow your volume provisioner's documentation to deploy it, keeping the following requirements in mind:
- The Storage Class for your volume provisioner **must** be named 'tcx-standard-sc'.
- The 'reclaimPolicy' in your storage class **must** be set to 'Retain'.
This is required to allow for redeployments of stateful applications without data loss and without manually marking 'Released' PVs as 'Available'.
Please refer to the official Kubernetes documentation for more details on volume provisioners and storage classes: https://kubernetes.io/docs/concepts/storage/storage-classes
For specific information on the Cinder CSI plugin, please refer to: https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md
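If you are using the Cinder CSI plugin, a storage class that satisfies the requirements above might look like the sketch below. This is an illustration only: the 'cinder.csi.openstack.org' provisioner name comes from the upstream Cinder CSI driver, and any additional parameters depend on your OpenStack environment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tcx-standard-sc
provisioner: cinder.csi.openstack.org
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer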
The following example deploys the NFS subdir external provisioner.
Download: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/archive/refs/heads/master.zip
Unzip the NFS provisioner archive.
Install the Helm chart on the Deployer VM:
$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=<NFS Server IP> --set nfs.path=<NFS-Path>
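You can confirm that the release deployed before checking the pods. Note that the 'provisioner' value used in the storage class below typically follows the pattern 'cluster.local/<release-name>-nfs-subdir-external-provisioner', so keep track of the release name you chose:
$ helm list | grep nfs-subdir-external-provisioner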
Check the NFS provisioner status:
$ kubectl get pods | grep nfs
Create the storage class for the NFS provisioner.
nfs-csi.yml: Update the 'provisioner', 'server', and 'path' attributes to match your NFS provisioner and NFS server:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tcx-standard-sc
provisioner: cluster.local/nfs-provisioner-qa-nfs-subdir-external-provisioner
reclaimPolicy: Retain
parameters:
  server: #.#.#.#
  path: /NFSShare/BYOS-QA
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
Deploy nfs-csi.yml to create the storage class:
$ kubectl apply -f nfs-csi.yml
Check the storage class status by executing the command below:
$ kubectl get sc
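Optionally, you can validate dynamic provisioning end to end. Because the storage class uses 'WaitForFirstConsumer', a PersistentVolumeClaim stays Pending until a pod consumes it, so the sketch below creates both a test claim and a pod that mounts it. The 'byos-test' names are placeholders and not part of the installer, and the pod assumes the cluster can pull the busybox image:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: byos-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: tcx-standard-sc
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: byos-test-pod
spec:
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "echo ok > /data/test && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: byos-test-pvc
Save this as byos-test.yml (a placeholder name), apply it with 'kubectl apply -f byos-test.yml', and the claim should reach 'Bound' once the pod is scheduled. Delete the pod and claim afterwards; with 'reclaimPolicy: Retain', the released PersistentVolume must also be cleaned up manually.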