Silicom STS NIC deployment with VMware Telco Cloud Automation
Article ID: 303604

Products

VMware Telco Cloud Automation

Issue/Introduction

VMware Telco Cloud Automation (TCA) expedites onboarding of worker nodes with Silicom STS NICs (STS4 and STS2) for LLS-C1 and LLS-C3 deployments.

Support for Silicom STS cards has been integrated into VMware Telco Cloud Automation (TCA) to simplify workload deployments. The Silicom timing pod is created and runs in a workload cluster managed by VMware Tanzu Kubernetes Grid, which is integrated with VMware Telco Cloud Automation. The Silicom device is onboarded through the Kubernetes operator framework in VMware Telco Cloud Automation.

The system clock is synchronized to the high-accuracy timing provided by the STS timing pod, and DU workloads running within the VM derive their timing from the VM. The timing pod distributes time to connected RUs using a virtual function (VF) from each physical function (PF).

 

LLS-C1 Deployment with STS card



Resolution

Prerequisites / Recommended Versions

The following firmware and driver versions (or higher) are recommended for use with STS cards.

 
Firmware version: 4.01 0x80014757 1.3256.0
ESXi driver (icen): 1.9.5
ice (PCI passthrough): 1.9.11
iavf: 4.5.3
tsyncd: 2.1.2.11
 

Enable SRIOV in BIOS Settings

BIOS -> Advanced -> PCIe/PCI/PnP Configuration -> SR-IOV Support -> Enabled
 

Port Bifurcation

STS2 has 8 x 10G ports. Port bifurcation is not required to see all the ports.
The STS4 card needs to be inserted into one of the PCI slots supporting x8x8 bifurcation.


BIOS -> Advanced -> Chipset Configuration -> North Bridge -> IIO Configuration -> CPU Configuration
As shown in the screenshot below, PCI slots 1, 2, 4, and 5 support port bifurcation on this system. Insert the STS4 card into one of these slots and select x8x8.

[Screenshot: BIOS IIO Configuration showing x8x8 bifurcation options for PCI slots 1, 2, 4, and 5]


Summary of steps to enable STS cards through TCA:

  • Prepare and deploy the Host Profile
  • Create a node pool on the ESXi host with the STS card
  • Onboard a CSAR and add details for the STS PF/VF and USB devices
  • Instantiate the Network Function using the CSAR
  • On the workload cluster, deploy the tsyncd DaemonSet using Helm

Enable STS cards through TCA:
  1. Prepare and apply the STS Host Profile to enable passthrough on PF0 and SRIOV on other PFs. 
    1. Log into TCA Manager and navigate to Infrastructure > Infrastructure Automation. 
    2. Create a new Host Profile and configure the Physical Function (PF) drivers to have Passthrough enabled on PF0 and SRIOV enabled on the other PFs (1-3). 
    3. Refer to the illustrative Host Profile sketch below for an outline of PCI passthrough on PF0 and SRIOV on the other PFs (1-3). 
    4. Use the attached sts2-ptp-vf Host Profile to create a new Cell Site Group named sts2-ptp-vf. 
    5. Attach the ESXi Host with the STS card to the new sts2-ptp-vf Cell Site Group. 
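
    For orientation, the outline below sketches the intent of such a Host Profile: PCI passthrough on PF0 and SR-IOV on PF1-PF3. It is an illustrative YAML sketch only; the field names (pciDevices, mode, numVFs) are hypothetical placeholders, and the attached sts2-ptp-vf Host Profile remains the authoritative example.

      # Illustrative sketch only -- not the exact TCA Host Profile schema.
      hostProfile:
        name: sts2-ptp-vf
        pciDevices:
          - pf: 0                 # STS PF0: PCI passthrough, consumed by the timing pod
            mode: passthrough
          - pf: 1                 # PF1-PF3: SR-IOV enabled; VFs are consumed as SRIOV adapters
            mode: sriov
            numVFs: 1
          - pf: 2
            mode: sriov
            numVFs: 1
          - pf: 3
            mode: sriov
            numVFs: 1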
  2. Create a node pool using the ESXi host with the STS card. 
  3. Log in to the workload cluster as the capv user and obtain the full names of the worker nodes associated with the new node pool. 
    kubectl get nodes -A 
  4. Confirm the worker node VM is labeled with 'telco.vmware.com.node-restriction.kubernetes.io/sts-silicom=true' by running the following command: 
    kubectl get nodes newNodeName --show-labels 
     Note: Replace newNodeName with the node name from step 3. 
  5. Log into the management cluster. 

  6. Validate the PF groups and their associated devices by reviewing the output of the esxinfoprofile and esxinfo resources.  

    1. List all the ESXi profiles by running the following command:  
      kubectl get esxinfoprofile -n tca-system 

    2. Select the esxinfoprofile for the sts2-ptp-vf host profile by running the following command: 
      kubectl get esxinfoprofile -n tca-system sts2-ptp-vf -o yaml 

    3. Run the following command to obtain all the ESXi information: 
      kubectl get esxinfo -A 

    4. Run the following command to select the ESXInfo for the STS host by matching the ESXi Host FQDN from step 3:  
      kubectl get esxinfo -n tca-system w2-hs6-h0211.eng.vmware.com -o yaml 

  7. Attach the STS card’s PF/VF and USB devices. 

    1. Onboard a new CSAR with the following spec details added under the infra_requirements: section. 

    2. Add STS PF0 in PCI passthrough mode under the passthrough_devices: section. 

    3. Add a VF from each of the other PFs as an SRIOV Network Adapter under the network:devices: section. 

    4. Add the USB devices under the usb_devices: section. 

    5. Remove the linuxptp, phc2sys, and ptp4l settings from the CSAR. Refer to the attached example CSAR, and see the illustrative infra_requirements sketch below. 
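
    For orientation, the sketch below shows the shape of the additions to the infra_requirements: section. It is illustrative only; the entry-level field names (pf_group, device_type, device_name, vendor_id, product_id) are hypothetical placeholders, the sts-ethX names are examples, and the attached example CSAR is the authoritative reference.

      # Illustrative sketch only -- not the exact TCA CSAR schema.
      infra_requirements:
        passthrough_devices:            # STS PF0 in PCI passthrough mode
          - pf_group: <PF0 group from the esxinfoprofile output>
            device_name: sts-eth0
        network:
          devices:                      # one VF per remaining PF, exposed as SRIOV Network Adapters
            - device_type: sriov
              device_name: sts-eth1
            - device_type: sriov
              device_name: sts-eth2
            - device_type: sriov
              device_name: sts-eth3
        usb_devices:                    # STS USB (GNSS) devices
          - vendor_id: <from the attached example CSAR>
            product_id: <from the attached example CSAR>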

  8. Instantiate the Silicom NF CSAR with the modifications detailed in the previous step. 

  9.  Confirm the names of the STS devices match the names defined in the Silicom NF CSAR. 

    1. Log into the worker node VM. 

    2. Run the following command to confirm the STS device names match what is defined in the CSAR, e.g. sts-ethX: 
      ip a 

  10. Download the Silicom Tsyncd Helm charts and deploy the Tsyncd DaemonSet onto the workload cluster. 
    1. Run the following command to add the helm repository: 
      helm repo add sts-charts https://silicom-td.github.io/STS_HelmCharts/ 

    2. Run the following command to confirm the Silicom STS-charts are present:  
      helm search repo sts-charts 

    3. Run the following command to download the sts-charts package:  
      helm pull sts-charts/sts-silicom 

      Note: You can ignore the following warnings: 
      WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/capv/.kube/config 
      WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/capv/.kube/config 

    4. Un-tar the sts-charts package:   
      tar -xvzf sts-silicom-0.0.8.tgz 

  11. Create a new YAML file to override the default Tsyncd Helm chart values so that Tsyncd runs in Grand Master (GM) mode. An illustrative sketch of such an override file is shown below.
    1. Download the attached gm.yaml file. 

    2. Using the gm.yaml file, run the following command to install Tsyncd into the tca-system namespace: 
      helm install -f gm.yaml --debug sts-gm --namespace tca-system ./sts-silicom 
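
    For orientation, an override file such as gm.yaml typically selects GM mode, points Tsyncd at the STS interfaces, and pins the DaemonSet to the labeled STS node. The sketch below is illustrative only; the key names (mode, interfaces, nodeSelector) are assumptions, and the authoritative values come from the attached gm.yaml and from the chart itself (helm show values sts-charts/sts-silicom).

      # Illustrative override values only -- not the exact sts-silicom chart schema.
      mode: gm                          # run Tsyncd as Grand Master
      interfaces:                       # STS interfaces used to distribute time to the RUs
        - sts-eth1
        - sts-eth2
      nodeSelector:
        telco.vmware.com.node-restriction.kubernetes.io/sts-silicom: "true"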

  12. Run the following command to confirm there are 4 pods, with the sts-gm prefix, in READY state:
    kubectl get pods -n tca-system

  13. Run the following command to confirm the sts-gm-cfg configmap has been created with the DATA value set to 2. 
    kubectl get configmap -n tca-system 
  14. Run the following command to confirm the sts-gm-grp service has been created: 
    kubectl get services -n tca-system

Troubleshooting STS installation:
  1. Use the following command to review the Tsyncd log messages and confirm Tsyncd has started without error: 
    kubectl logs -n tca-system sts-gm-tsy-<podname> -c sts-gm-tsy -f | grep -v APR_QIF_MSG_EVENT_AWAKE

  2. Check GNSS log messages using the following command: 
    kubectl logs -n tca-system sts-gm-tsy-<podname> -c sts-gm-gps 

  3. Check host time synchronization logs using the following command: 
    kubectl logs -n tca-system sts-gm-tsy-<podname> -c sts-gm-phc 

Note: Replace sts-gm-tsy-<podname> with the actual pod name.

Attachments