Network Configuration & Requirements
Workload Management is supported on two types of vSphere networking:
- VDS networking
- NSX-T networking
The prerequisites and networking requirements differ between the two types of vSphere networking. Below you can find the detailed steps for both.
Network requirements for VDS-based Workload Management
Create a distributed port group on the existing distributed switch
- Navigate to the Networking view and right-click a distributed virtual switch > Distributed Port Group > New Distributed Port Group
- Enter a name for the new distributed port group > Click Next
- Configure settings > Click Next
- Review and click Finish to complete the new distributed port group creation
Network requirements for NSX-T-based Workload Management
Create a distributed switch
- Navigate to the Networking view and right-click the data center > Distributed Switch > New Distributed Switch
- Enter a name for the new distributed switch > Click Next
- Select the distributed switch version > Click Next
- Configure settings > Enter the port group name > Click Next
- Review and click Finish to complete the new distributed switch creation
Add Hosts to the distributed switch
- Navigate to the Networking view and right-click the distributed switch created in the previous step > Click Add and Manage Hosts
- Select Add hosts > Click Next
- Click New hosts
- Select the ESXi hosts from compute-cluster > Click OK
Note: Compute-cluster refers to the vSAN cluster where the Supervisor Cluster and Workload Network are enabled.
- Verify the hosts are selected > Click Next
- Select vmnic1 > Click Assign uplink
- Select Apply this uplink assignment to the rest of the hosts > Click OK
- Verify vmnic1 is assigned on all the hosts > Click Next
- Manage and assign VMkernel network adapters, if any > Click Next
- Select virtual machines or network adapters to migrate, if any > Click Next
- Review and click Finish to complete adding hosts to the distributed switch
Configure distributed port group VLAN settings
- Navigate to the Networking view and right-click the distributed port group created in the previous step > Click Edit Settings
- Click on VLAN > Select VLAN type as VLAN > Enter VLAN ID > Click OK
- Assign privileges to the workload-storage user from the vCenter UI:
- Navigate to Administration > Roles
- Select Workload Storage Manager > Click EDIT
- Select Host > Configuration > Enable Storage partition configuration > Click Next > Click Finish
Configure File Services
Note: Complete documentation on how to configure file services is available at: Configure File Services
- Navigate to the vSAN cluster and click Configure > vSAN > Services.
- On the File Service row, click Enable. The Configure File Service wizard opens.
- Review the checklist on the Introduction page, and click Next.
- In the File service agent page, select one of the options to download the OVF file.
- In the Domain page, enter the following information and click Next:
- File service domain
- DNS servers: To ensure the proper configuration of File Services, enter the DNS server available from the Network Settings > Workload Network tab
- DNS suffixes: Provide the DNS suffix used with the file services, and include all other DNS suffixes from which clients can access these file servers. File Services does not support single-label DNS domains, such as "app", "wiz", or "com". A domain name given to file services should be of the format thisdomain.registeredrootdnsname. The DNS name and suffix must adhere to the best practices detailed in https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain.
- Directory Service: Configure an Active Directory domain to vSAN File Services for authentication. If you plan to create an SMB file share, or an NFSv4.1 file share with Kerberos authentication, you must configure an AD domain to vSAN File Services.
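The single-label restriction above can be sketched as a quick pre-flight check in shell. This is a hypothetical helper for illustration, not part of the wizard, and it only checks the label count, not full DNS validity:

```shell
# Hypothetical check mirroring the wizard's rule that a file services
# domain must have more than one DNS label.
is_valid_fs_domain() {
  case "$1" in
    *.*) return 0 ;;   # at least two labels, e.g. fileservice.example.com
    *)   return 1 ;;   # single label such as "app" is rejected
  esac
}

is_valid_fs_domain "fileservice.example.com" && echo "valid"
is_valid_fs_domain "app" || echo "rejected: single-label domain"
```

Running the sketch prints "valid" for fileservice.example.com and "rejected: single-label domain" for app.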
- In the Networking page, enter the following information, and click Next:
- Network: Select the port group created in the Network Configuration step above
- Protocol
- Subnet mask
- Gateway: Enter the IP address of the external gateway VM. This can be obtained from the edge-cluster where the Supervisor Cluster and Workload Management are not enabled
- In the IP Pool page, enter the following information, select a Primary IP, and then click Next.
- Consider the following while configuring the IP addresses and DNS names:
- To ensure proper configuration of File Services, the IP addresses you enter in the IP Pool page should be static addresses and the DNS server should have records for those IP addresses. For best performance, the number of IP addresses must be equal to the number of hosts in the vSAN cluster.
- You can enter up to 32 IP addresses.
- You can use the following options to automatically fill the IP address and DNS server name text boxes:
AUTO FILL: This option is displayed after you enter the first IP address in the IP address text box. Click the AUTO FILL option to automatically fill the remaining fields with sequential IP addresses, based on the subnet mask and gateway address of the IP address that you provided in the first row. You can edit the auto-filled IP addresses.
LOOK UP DNS: This option is displayed after you enter the first IP address in the IP address text box. Click the LOOK UP DNS option to automatically retrieve the FQDN corresponding to the IP addresses in the IP address column.
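The AUTO FILL behavior can be illustrated with a small bash sketch. The helper name is hypothetical, and the sketch only increments the last octet; the real wizard also accounts for the subnet mask and gateway:

```shell
# Sketch of AUTO FILL: given the first IP address, fill the remaining
# rows with sequential addresses (assumes the pool fits in the last octet).
auto_fill_ips() {
  local first=$1 count=$2
  IFS=. read -r a b c d <<< "$first"
  for ((i = 0; i < count; i++)); do
    echo "$a.$b.$c.$((d + i))"
  done
}

# One address per vSAN host, e.g. a 4-node cluster:
auto_fill_ips "192.168.10.11" 4
# prints 192.168.10.11 through 192.168.10.14
```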
Activate File Volume Support
You can activate file volume support for Workload Management on the vSAN cluster to deploy ReadWriteMany volumes.
- Navigate to the vSAN cluster and click Configure > Storage > File Volume > Click the Activate file volume support button
- Select Confirm > Click on ACTIVATE
Deactivate File Volume Support
You can deactivate file volume support for Workload Management on the vSAN cluster.
- Navigate to the vSAN cluster and click Configure > Storage > File Volume > Click the Deactivate file volume support button
- Select Confirm > Click on DEACTIVATE
Creating ReadWriteMany Persistent Volumes in TKG clusters
Now you can provision file volumes and pods in TanzuKubernetesClusters. Below you can find examples of both.
Example ReadWriteMany Persistent Volume Claim
$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-vanilla-file-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: example-policy
Create the Volume
$ kubectl create -f pvc.yaml
Verify volume is created successfully
$ kubectl get pvc
NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
example-vanilla-file-pvc   Bound    pvc-########-####-####-####-########fcd5   1Gi        RWX            example-policy   48s
Example ReadWriteMany Pod
$ cat pod.yaml
# Example read-write pod
apiVersion: v1
kind: Pod
metadata:
  name: example-vanilla-file-pod1
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox:1.24
      command: ["/bin/sh", "-c", "echo 'Hello! This is Pod1' >> /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
      volumeMounts:
        - name: test-volume
          mountPath: /mnt/volume1
  restartPolicy: Never
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: example-vanilla-file-pvc
Create the Pod
$ kubectl create -f pod.yaml
Verify pod is in Running state
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
example-vanilla-file-pod1 1/1 Running 0 23m
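Because the claim is ReadWriteMany, a second pod can mount the same volume concurrently. A minimal sketch (the pod name example-vanilla-file-pod2 is illustrative; it reads the file the first pod wrote):

```yaml
# Hypothetical second pod sharing the same RWX claim
apiVersion: v1
kind: Pod
metadata:
  name: example-vanilla-file-pod2
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox:1.24
      command: ["/bin/sh", "-c", "cat /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
      volumeMounts:
        - name: test-volume
          mountPath: /mnt/volume1
  restartPolicy: Never
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: example-vanilla-file-pvc
```

After creating it with kubectl create -f, both pods see the same /mnt/volume1 contents, which is the behavior ReadWriteMany access provides.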