CaaS installation with "Bring Your Own Kubernetes Volume Provisioner", also known as "Bring Your Own Storage (BYOS)"

Article ID: 380708

Products

VMware Telco Cloud Service Assurance

Issue/Introduction

This article describes how to plug in your own storage backend and Kubernetes volume provisioner into the CaaS installer.
This is a special case of the standard installation workflow.

The TCSA CaaS installer is a bundle of binaries, containers, scripts, and Ansible playbooks used to install a production-grade CaaS on virtual machines.
The CaaS components include Kubernetes, a container registry (Harbor) and other components that interface between the user's IaaS and Kubernetes.

# Prerequisites:

  • IP addresses of virtual machines (control plane nodes and worker nodes) with SSH access to those IP addresses.
  • A Linux deployment host to trigger the installation:
    • This host must have Docker installed.
    • This host must have network connectivity to the VMs.
    • This host must have at least 10 GB of disk space under the $HOME directory to download and extract the installer bundle.
  • A Linux package repository that will be used for installing standard Linux packages on the VMs.
  • For online installation, internet access is required for downloading packages and container images.
  • For offline installation, ensure that all firewalls are disabled on the deployment host.
  • All VMs (including the deployment host) must be time synchronized (via an NTP service such as ntpd, chronyd, or timesyncd). A quick verification sketch follows this list.
  • Load Balancer:
    • The installer deploys the MetalLB (https://metallb.universe.tf/) load balancer and supports only the L2 mode of MetalLB.
      You can use the installer's MetalLB instance to expose Kubernetes services outside the cluster.
    • The installer creates a default MetalLB IP pool from all the subnets that your VMs' management IPs belong to.
      You can allocate static IPs to expose your Kubernetes services, but these IPs must be in the same subnet(s) as your VMs' management IPs.
    • MetalLB is not compatible with DHCP-enabled environments. It is recommended to use MetalLB only with static IP addresses.
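
A minimal sketch to spot-check these prerequisites from the deployment host (assumes a systemd-based Linux host with 'timedatectl' and the standard OpenSSH client; the username and IP are placeholders for your environment):

  # At least 10 GB free under $HOME is required for the installer bundle.
  $  df -h $HOME

  # Time synchronization status on the deployment host; repeat on each VM.
  $  timedatectl status | grep -i synchronized

  # Confirm SSH connectivity to each control plane and worker VM.
  $  ssh <your-SSH-username>@<VM-IP> 'hostname && date'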

# General Guidelines:

  • The TCSA CaaS installer is based on Ansible, so all features of Ansible are available.
    Please consult the official Ansible documentation website (https://docs.ansible.com/) for Ansible-specific documentation.

Environment

TCSA 2.4.X

Resolution

 HIGH-LEVEL OVERVIEW

*********************************************************************************************************************

  • Step 1: Download and extract the installer bundle on your deployment host. 

    • Download the Kubernetes installer from the Broadcom support site (https://support.broadcom.com/) onto the deployment host, under the home directory.
      This package is typically named VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz
      For example: VMware-K8s-Installer-1.0.0-113.tar.gz

    • Verify the integrity of the downloaded package.
      Run the following command on your deployment host (you will need the 'sha256sum' package to run the below command):
      $  sha256sum VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz
    • This command displays the SHA256 fingerprint of the file.
      Compare this string with the SHA256 fingerprint published next to the file on the download site and ensure that they match (a scripted check is sketched at the end of this step).

    • Extract the installer bundle
      $  tar -xzvf VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz
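
    If you prefer a scripted comparison, the integrity check above can be done in one step with 'sha256sum -c' (a minimal sketch; '<expected-sha256>' is a placeholder for the fingerprint published next to the file on the download site):

      # Paste the published fingerprint, then let sha256sum compare it against the local file.
      $  echo "<expected-sha256>  VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz" | sha256sum -c -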
  • Step 2: Download and Launch the deployment container on the deployment host.

    • Download the deployment container from Broadcom Downloader site (https://support.broadcom.com/) onto the deployment host under the home directory.
      This package is typically named VMware-Deployment-Container-<VERSION>-<TAG>.tar.gz.
      For example: VMware-Deployment-Container-2.4.2-497.tar.gz.

    • Load the VMware-Deployment-Container-<VERSION>-<TAG>.tar.gz file into the local image store on the deployment host
      $  docker load -i <downloaded dir in deployment host>/VMware-Deployment-Container-2.4.2-497.tar.gz

      Verify that the deployment container image is now present on the deployment host by executing the command below
      $  docker images
    • Export and run the deployment container using the commands below
      Note: If it does not already exist, create a folder named 'docker' under /etc/ on the deployment host before launching the deployment container.
      $  export DOCKER_HOME=$(docker run --rm localhost/deployment:2.4.2-497 sh -c 'echo $HOME')
      $  docker run \
           --rm \
           -v $HOME:$DOCKER_HOME \
           -v $HOME/.ssh:$DOCKER_HOME/.ssh \
           -v /var/run/podman/podman.sock:/var/run/podman.sock \
           -v $(which podman):/usr/local/bin/podman:ro \
           -v /etc/docker:/etc/docker:rw \
           -v /opt:/opt \
           --network host \
           -it localhost/deployment:2.4.2-497 \
           bash
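    • Before continuing, a quick check that you are now inside the deployment container (a minimal sketch; it assumes the installer bundle from Step 1 was extracted under $HOME, which is mounted into the container):
      # Ansible is provided by the deployment container and is used in the following steps.
      $  ansible --version
      # The extracted installer bundle should be visible inside the container.
      $  ls $HOME/k8s-installer/scripts/ansible/vars.yml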
  • Step 3: Customize your environment and the deployment by configuring the $HOME/k8s-installer/scripts/ansible/vars.yml file (inside the deployment container).

Note 1: The following are examples only. Please set these values according to your environment.
Note 2: It is the user's responsibility to save and secure the vars.yml file.

    • Configure general parameters:
         cluster_name: my-cluster                                  # Unique name for your cluster.

         ansible_user: my-user                                     # SSH username for the VMs.

         ansible_become_password: <your SSH user's sudo password>  # sudo password for the VMs (if you need passwordless SSH).

         admin_public_keys_path: $HOME/.ssh/id_rsa.pub             # Path to the SSH public key (if you need passwordless SSH). This will be a .pub file under $HOME/.ssh/.

         control_plane_ips:    # The list of control plane IP addresses of your VMs. This should be a YAML list.
           - <IP1>
           - <IP2>

         worker_node_ips:      # The list of worker node IP addresses of your VMs. This should be a YAML list.
           - <IP3>
           - <IP4>

         deployment_host_ip:   # The IP address of your deployment host.

         yum_protocol:         # Protocol supported by the yum repository for communication.

         yum_server:           # The IP address/hostname of your yum/package repository for pulling packages required for installing Kubernetes.

         keepalived_vip:       # The 'keepalived' virtual IP used for internal container registry HA. Set it to an available virtual IP if a default virtual IP is not available.
    • If using local storage (direct attached storage) for storing application data on the VMs, you can change the following parameter.
      For example, if you have a dedicated filesystem named '/external' for storing all third-party configuration and data, set the location
      where the application data gets stored on the VMs:
         storage_dir: "/external"
    • The following two parameters must be set to true and false respectively.
      They indicate that you intend to use "external storage" (i.e. not local/direct attached storage) and that you are not on a vSphere environment.
         use_external_storage: true
         install_vsphere_csi: false
    • All vSphere parameters can be skipped in vars.yml when using your own (non-vSphere) volume provisioner, for example the OpenStack Cinder CSI:
         vcenter_ip:
         vcenter_name:
         vcenter_username:
         vcenter_password:
         vcenter_data_centers:
           -
         vcenter_insecure:
         datastore_url:
    • Set the Harbor container registry access parameters.
      If you have free static IPs in your VMs' subnet range, you can configure the Harbor container registry to use one of these IPs.
      Uncomment and set the 'harbor_registry_ip' parameter to a free static IP.
      Setting 'harbor_registry_ip' will expose Harbor as a 'LoadBalancer' Kubernetes service.
      The access URL for Harbor would be 'https://<harbor_registry_ip>'.

If you don't have free static IP addresses or if you are using DHCP for your VMs exclusively, do not configure 'harbor_registry_ip'.
Harbor will be exposed as a 'NodePort' Kubernetes service.
The access URL for Harbor would be 'https://<First-Control-Plane-Node-IP>:30001'.
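
Putting the settings above together, a vars.yml sketch for a BYOS deployment might look like the following (illustrative placeholders only; only the parameters discussed in this step are shown, and 'harbor_registry_ip' is left commented out for DHCP-only environments):

         cluster_name: my-cluster
         ansible_user: my-user
         admin_public_keys_path: $HOME/.ssh/id_rsa.pub
         control_plane_ips:
           - <IP1>
           - <IP2>
         worker_node_ips:
           - <IP3>
           - <IP4>
         deployment_host_ip: <deployment-host-IP>
         yum_protocol: https
         yum_server: <yum-repo-IP-or-hostname>
         keepalived_vip: <free-virtual-IP>
         use_external_storage: true     # BYOS: application data is kept on external storage.
         install_vsphere_csi: false     # Not a vSphere environment; vCenter parameters are left unset.
         # harbor_registry_ip: <free-static-IP>   # Optional: exposes Harbor as a LoadBalancer service.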

  • Step 4: Prepare for deployment

    This step does the following:
    - Validates the configuration.
    - Sets up passwordless SSH on all the VMs (Ansible tag: passwordless-ssh).
    - For airgapped/offline installation:
        - Sets up a web server to serve static files required for CaaS installation (Ansible tag: nginx-http-server).
        - Sets up a local container registry to serve container images required for CaaS installation (Ansible tag: docker-internal-registry).
        - Pushes images from the unpacked installer bundle to the local container registry (Ansible tag: push-images).
    - Generates an Ansible inventory file (Ansible tag: generate-inventory).
    - Generates an internal_vars.yml file containing variables for internal use only.
    • Execute the following commands inside the deployment container:
      $  cd $HOME/k8s-installer/
      $  export ANSIBLE_CONFIG=$HOME/k8s-installer/scripts/ansible/ansible.cfg LANG=en_US.UTF-8
      $  ansible-playbook scripts/ansible/prepare.yml -e @scripts/ansible/vars.yml --become
      Note: You can run or skip individual stages mentioned above by appending '--tags <tag>' or '--skip-tags <tag>' respectively to the ansible-playbook command, where '<tag>' is the Ansible tag assigned to the stage/role (see the example below).
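      For example, to run only the image push stage of an offline installation, or to skip it when the images are already in the local registry (illustrative invocations using the tag listed above):
      $  ansible-playbook scripts/ansible/prepare.yml -e @scripts/ansible/vars.yml --become --tags push-images
      $  ansible-playbook scripts/ansible/prepare.yml -e @scripts/ansible/vars.yml --become --skip-tags push-images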
  • Step 5: Install the Kubernetes cluster using the generated inventory file

    This step does the following:
    - Customizes the VM configuration (OS dependent).
    - Installs Kubernetes on the VMs using Kubespray.
    - Installs Harbor (container registry) on the Kubernetes cluster.
    - Installs networking components (MetalLB, etc.).
    • Execute the following commands inside the deployment container:
      $  cd $HOME/k8s-installer/
      $  ansible-playbook -i inventory/<your-cluster-name>/hosts.yml scripts/ansible/deploy_k8s.yml -u <your-SSH-username> --become -e @scripts/ansible/internal_vars.yml -e @scripts/ansible/vars.yml
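    • Optionally, once the playbook finishes, sanity-check the new cluster from the deployment container (an illustrative check, not part of the installer; it assumes kubectl in the container is configured against the newly installed cluster):
      # Every control plane and worker node should report a Ready status.
      $  kubectl get nodes -o wide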
  • Step 6: Deploy your Kubernetes volume provisioner

Follow your volume provisioner's documentation to deploy the volume provisioner:
- The Storage Class for your volume provisioner **must** be named 'tcx-standard-sc'.
- The 'reclaimPolicy' in your storage class **must** be set to 'Retain'.
  This is required to allow for redeployments of stateful applications without data loss and without manually marking 'Released' PVs as 'Available'.

Please refer to the official Kubernetes documentation for more details on volume provisioners and storage classes: https://kubernetes.io/docs/concepts/storage/storage-classes
For specific information on the Cinder CSI plugin, please refer to: https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md

    • Example: Procedure to bring your own NFS storage for a TCSA deployment
      • Download and deploy the NFS provisioner
        • Download: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/archive/refs/heads/master.zip

        • Unzip the NFS provisioner

        • Install the Helm chart on the deployer VM

          $ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
          $ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=<NFS Server IP> --set nfs.path=<NFS-Path>
        • Check the NFS provisioner status

          $ kubectl get pods | grep nfs
      • Create the storage class manifest (nfs-csi.yml)

        nfs-csi.yml: update the provisioner, server, and path attributes to match your NFS storage before creating the storage class.
                apiVersion: storage.k8s.io/v1
                kind: StorageClass
                metadata:
                  name: tcx-standard-sc
                provisioner: cluster.local/nfs-provisioner-qa-nfs-subdir-external-provisioner
                reclaimPolicy: Retain
                parameters:
                  server: <NFS Server IP>
                  path: /NFSShare/BYOS-QA
                volumeBindingMode: WaitForFirstConsumer
                allowVolumeExpansion: true
      • Apply the manifest to create the storage class

        $ kubectl apply -f nfs-csi.yml
      • Check the storage class status by executing the command below (an optional test PVC is sketched after this example)

        $ kubectl get sc
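      • Optionally, confirm that dynamic provisioning through 'tcx-standard-sc' works by creating and deleting a throwaway PersistentVolumeClaim (an illustrative check; the file and claim name 'byos-test-pvc' are arbitrary):

        byos-test-pvc.yml:
                apiVersion: v1
                kind: PersistentVolumeClaim
                metadata:
                  name: byos-test-pvc
                spec:
                  accessModes:
                    - ReadWriteOnce
                  storageClassName: tcx-standard-sc
                  resources:
                    requests:
                      storage: 1Gi

        $ kubectl apply -f byos-test-pvc.yml
        # With volumeBindingMode set to WaitForFirstConsumer the claim stays Pending until a pod mounts it;
        # a provisioner with Immediate binding should show it as Bound.
        $ kubectl get pvc byos-test-pvc
        $ kubectl delete -f byos-test-pvc.yml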
  • Step 7: Run the post-Kubernetes installation steps

    $  cd $HOME/k8s-installer/
    $  ansible-playbook -i inventory/<your-cluster-name>/hosts.yml scripts/ansible/post_install.yml -u <your-SSH-username> --become -e @scripts/ansible/internal_vars.yml -e @scripts/ansible/vars.yml
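
    After the post-install playbook completes, a brief health check of the CaaS layer can be run from the deployment container (illustrative commands, not part of the installer):
    # All pods across namespaces should be Running or Completed.
    $  kubectl get pods -A
    # The BYOS storage class should still exist and report a Retain reclaim policy.
    $  kubectl get sc tcx-standard-sc -o jsonpath='{.reclaimPolicy}'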