How to create custom ClusterClasses in TKG2.0 on vSphere with Tanzu



Article ID: 319380



Products

VMware vSphere ESXi, VMware vSphere with Tanzu

Issue/Introduction


This KB is intended to provide guidance to users seeking more customized Guest Cluster configurations on TKG2.0 clusters built in vSphere with Tanzu
 
CAUTION:
 
REQUIRED VERSIONS:
  • vSphere 8.0U2 provides automated functionality for CustomClusterClass creation and management. PLEASE USE VSPHERE 8.0 U2 FOR CUSTOMCLUSTERCLASS CREATION AND MANAGEMENT.
    • Find documentation HERE
  • Custom ClusterClass functionality is available starting with the vSphere 8.0A/8.0.0.10100 (build 20920323) release, for Supervisor Clusters running Kubernetes versions 1.22, 1.23, and 1.24 and for TKG2.0 Guest Clusters deployed from non-legacy TKrs compatible with vSphere 8.x
  • Supervisor Clusters running on vSphere versions prior to 8.0A will fail to completely deploy Guest Clusters from a CCC due to a known issue. Please upgrade to later versions of vSphere and Supervisor Cluster to deploy Guest Clusters with custom ClusterClasses.


Environment

VMware vSphere 8.0 with Tanzu

Resolution


Best Practices: This is the recommended process for building and using a new CCC along with all required components for Guest Cluster configuration:

1. Clone the existing default TanzuKubernetesCluster ClusterClass for initial CCC creation. This will ensure all configuration variables are present and will minimize time spent investigating variations in the configuration.
2. NOTE that any changes to the CCC will be pushed to Guest Clusters owned by the CCC. Ensure modifications to existing CCC's are minor and ideally have been tested prior to application.
3. If large scale changes are required on the CCC, it is recommended to create a new one with a new associated Guest Cluster for testing prior to application on any existing CCC's/workloads.
4. Ensure you have created all required PREREQUISITES to minimize time spent investigating Guest Cluster deployments.
5. Deploy a Guest Cluster from the custom ClusterClass using default values FIRST, then apply any changes to the CCC to ensure the basic configuration is sound.


High Level Process:

1. Create Prerequisites: vSphere Namespace, Attach StoragePolicy, Attach VMClass, Attach ContentLibrary
2. Clone a Custom ClusterClass from the existing default ClusterClass
3. Create Supervisor Objects required for Guest Cluster initial Deployment
4. Create Guest Cluster
5. Create Supervisor Objects for Guest Cluster Management
6. Create Guest Cluster Security Policies for User Authentication
7. Synchronize SSO roles configured in vSphere Client Namespace view into Guest Cluster for management


Detailed Custom ClusterClass creation steps:

  • PREREQUISITES
    
    • Start by building out and configuring a Namespace in the vSphere Web Client for later use:
      • Create a Content Library using the VMware vSphere 8.0 documentation for reference  
      • Log into the vSphere Web Client, navigate to MENU →  Workload Management
        • Click on "NEW NAMESPACE" → select a Supervisor to create this namespace and enter the namespace name → click "CREATE"
          • In command examples, we will use Namespace name custom-ns
    • Navigate to the newly created Namespace
      • Click on "ADD STORAGE" → select a Storage Policy → click "OK"
      • Click on "ADD VM Class" → select a VM Class Name → click "OK"
      • Click on the "Content Library" under "Tanzu Kubernetes Grid Service" → Select the previously created Content Library → click "OK"

 

  • Create Custom ClusterClass by cloning the default CC:

    • Connect to the Supervisor Cluster as a vCenter SSO user with kubectl. Use this Documentation for reference if unfamiliar with the process
    • Create CCC named custom-cc in the custom-ns namespace using the following command:

# kubectl -n custom-ns get clusterclass tanzukubernetescluster -o json | jq '.metadata.name="custom-cc"' | kubectl apply -f -
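The jq filter in this command only rewrites metadata.name before the object is re-applied. As a local illustration of the pattern (assuming jq is installed on the jumpbox; no cluster connection required):

```shell
# Rewrite the name field of a Kubernetes object's JSON, then print the new name.
echo '{"metadata":{"name":"tanzukubernetescluster"}}' \
  | jq '.metadata.name="custom-cc"' \
  | jq -r '.metadata.name'
# → custom-cc
```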

 

  • Create Supervisor Objects Required for Guest Cluster Initial Deployment

1. Download and extract the contents of attached file CCC_config_yamls.tar.gz to the jumpbox used to connect to the Supervisor Cluster

2. Create a Self Signed Extension issuer cert in the custom-ns namespace: 

# kubectl -n custom-ns apply -f self-signed-extensions-issuer.yaml

3. For the examples below, we will use the default cluster name ccc-cluster; the steps demonstrate how to change this if needed.

  • Create ExtensionCACertificate: Edit the "extension-ca-certificate.yaml" file and change ccc-cluster-extensions-ca to use the chosen cluster name in the metadata.name and spec.secretName fields. We will use cluster name "ccc-cluster" by default; if using a cluster named "test", we would change the file accordingly:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-extensions-ca
spec:
  commonName: kubernetes-extensions
  duration: 87600h0m0s
  isCA: true
  issuerRef:
    kind: Issuer
    name: self-signed-extensions-issuer
  secretName: test-extensions-ca
  usages:
  - digital signature
  - cert sign
  - crl sign

  • Create an Extension CA certificate and secret using the yaml modified in the previous step:

# kubectl -n custom-ns apply -f extension-ca-certificate.yaml

  • You should see a secret with name ccc-cluster-extensions-ca created

  • Create ExtensionsCAIssuer: Edit the "extensions-ca-issuer.yaml" file and change the ccc-cluster prefix to the chosen cluster name in the metadata.name and spec.ca.secretName fields. We will use cluster name "ccc-cluster" by default; if using a cluster named "test", we would change the file accordingly:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: test-extensions-ca-issuer
spec:
  ca:
    secretName: test-extensions-ca

  • Create an Issuer certificate using the yaml modified in the previous step:

# kubectl -n custom-ns apply -f extensions-ca-issuer.yaml

  • Create AuthServiceCertificate: Edit the "auth-svc-cert.yaml" file and change the ccc-cluster prefix to the chosen cluster name in the metadata.name, spec.issuerRef.name, and spec.secretName fields. We will use cluster name "ccc-cluster" by default; if using a cluster named "test", we would change the file accordingly:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-auth-svc-cert
spec:
  commonName: authsvc
  dnsNames:
  - authsvc
  - localhost
  - 127.0.0.1
  duration: 87600h0m0s
  issuerRef:
    kind: Issuer
    name: test-extensions-ca-issuer
  secretName: test-auth-svc-cert
  usages:
  - server auth
  - digital signature

  • Apply the yaml modified in the previous step to complete creation:

# kubectl -n custom-ns apply -f auth-svc-cert.yaml

  • You should see a secret with name ccc-cluster-auth-svc-cert created

 

  • Create a Guest Cluster
  • Creating a cluster that can consume a CCC is done via the Cluster v1beta1 API. A cluster requires a bare minimum set of variables to work with a CCC; refer to the Cluster v1beta1 API documentation for what each variable does. This bare minimum set of variables includes:
    • "vmClass" - for details see PREREQUISITES
    • "storageClass" - for details see PREREQUISITES
    • "ntp" - how to gather this value is covered in the steps below
    • "extensionCert" - auto-generated after the "extension CA certificate" is created in the steps above
    • "clusterEncryptionConfigYaml" - the section below walks through the process of getting this value
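These variables come together in the cluster manifest. The following is a minimal sketch of the shape of cluster-with-ccc.yaml, not a substitute for the file shipped in CCC_config_yamls.tar.gz; every value marked as a placeholder is an assumption and must be replaced with values from your environment:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: ccc-cluster
  namespace: custom-ns
spec:
  topology:
    class: custom-cc                    # the cloned Custom ClusterClass
    version: v1.24.9+vmware.1-tkg.4     # placeholder TKr; use one synced into your Content Library
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
      - class: node-pool                # assumed worker class name from the default ClusterClass
        name: workers-1
        replicas: 1
    variables:
    - name: vmClass
      value: best-effort-small          # placeholder; VM Class attached during PREREQUISITES
    - name: storageClass
      value: my-storage-policy          # placeholder; StoragePolicy attached during PREREQUISITES
    - name: ntp
      value: time.example.com           # placeholder; NTP server gathered from the Supervisor
    - name: extensionCert
      value:
        contentSecret:
          name: ccc-cluster-extensions-ca   # secret created in the previous section
    - name: clusterEncryptionConfigYaml
      value: "<encryption config matching encryption-secret.yaml>"
```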

 

1. Create the ClusterEncryptionConfig secret:

  • Modify the encryption-secret.yaml file from the downloaded CCC_config_yamls.tar.gz and replace "ccc-cluster" with the chosen cluster name
    • data.key can be replaced if desired, however, this is not necessary: 
      • The steps are noted on the encryption-data page for users wishing to update data.key.
      • If the data.key value is changed in the encryption-secret.yaml file, the cluster.spec.topology.variables.clusterEncryptionConfigYaml.values will also need to be updated in the cluster-with-ccc.yaml file before cluster deployment
    • Apply the encryption-secret.yaml:  
# kubectl -n custom-ns apply -f encryption-secret.yaml
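If you do choose to replace data.key, a common approach (a sketch, assuming the field expects a base64-encoded 32-byte key, as in a standard Kubernetes EncryptionConfiguration aescbc provider) is:

```shell
# Generate a random 32-byte key and base64-encode it for use as data.key.
head -c 32 /dev/urandom | base64
```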

 

2. Gather NTP from Supervisor Cluster if desired:

# kubectl -n vmware-system-vmop get configmap vmoperator-network-config -o jsonpath={.data.ntpservers}
 

3. Modify cluster-with-ccc.yaml file with desired settings:

  • Change the metadata.name to the cluster name used to Create Supervisor Objects Required for Guest Cluster Initial Deployment
  • Change the spec.topology.class to match the CCC name used to Create Custom ClusterClass by cloning the default CC
  • Change the spec.topology.variables.storageClass.value to match the StoragePolicy attached to the namespace during the PREREQUISITES steps
  • Change the spec.topology.variables.ntp.value to match the NTP server address gathered from the Supervisor Cluster
  • Change the spec.topology.variables.extensionCert.value.contentSecret.name to match the cluster name used to Create Supervisor Objects Required for Guest Cluster Initial Deployment
  • If the data.key was changed in the Create the ClusterEncryptionConfig secret steps, change the spec.topology.variables.clusterEncryptionConfigYaml.value accordingly. Otherwise, leave this as default.


4.  Deploy the Cluster:

# kubectl -n custom-ns apply -f cluster-with-ccc.yaml
 

NOTE: The Guest Cluster will deploy machine and VM objects after this step, HOWEVER, they will stay in provisioning state until completion of the Create Supervisor Objects for Guest Cluster Management in the steps below.
 

  • Create Supervisor Objects for Guest Cluster Management:
  • Once the cluster with the CCC is applied, various controllers will try to provision it; however, the underlying infrastructure resources still require additional objects to be bootstrapped properly. The following is a high-level summary of the actions performed in steps 1-5 below.
    • Authentication values will be gathered and appended into a file named values.yaml (an example file is available in CCC_config_yamls.tar.gz)
    • After updating the values.yaml file with the required authentication values, the file will be encoded into a base64 string
    • The resulting base64-encoded string will be added to the guest-cluster-auth-service-data-values.yaml file (downloaded from CCC_config_yamls.tar.gz) before that file is applied as the GuestClusterAuthSvcDataValues secret.
    • Finally, the Guest Cluster Bootstrap must be modified to reference the newly created GuestClusterAuthSvcDataValues secret.
  • For reference, the values.yaml file included in CCC_config_yamls.tar.gz is presented below (note the Guest Cluster UID embedded in the client_id value of the authServicePublicKeys section):

authServicePublicKeys: '{"issuer_url":"https://<VCENTER_FQDN>/openidconnect/vsphere.local","client_id":"vmware-tes:vc:vns:k8s:2dee51bb-ee4c-48b8-8baf-af1107973ef2","keyset":{"keys":[{"alg":"RS256","e":"AQAB","kid":"46FED7FE37DD23BAA07834827EACF15AF598FD92","kty":"RSA","n":"t6IupxaZT-i8RO7tMmWcqmksDNSiaU6L7sv6vLl8iW1cj1IDJVXLoAAFiuhv39UmMNNSt8adNcFzQIULU7E_phtEabbCJ1Ojmz-J42mOdb_GMqqDAPcQOOajH_Iq1q9PwN4KNpUoNsPdCBGlKGD7g1-fmp585i72U9BNpB0qdE1GRsjazueZEYNNWQenvWJsT5Q_-bp9uWzrSTz304B20R62g8RmbPt6zsnDt4vV1LajyTn_PnIaMxIb7el1Ss1LTNrZ0b6tOiQ0G4kDkX0eifZn3UBqJknmnGN7QK0xBrxYy7BBslRH3L06VCalyyGrAPxN46eO5jWf9T568iMrElTu9uPQ1zOjhJpJMZRyxCoXKVqhVnW-B5xOfJuKkomyU7gg0Ijn-1989YFZLC_ZdETyT6sSq_pOZrCrbY2QifqxR82b7FeTcuZ1sh4fJmisu1GbIKoGsuzXD-J9hRUbUM5sdXyJ2AaO108gTAAUH_FVCPGsVIGxJVS6MoP-33Yd","use":"sig","x5c":["MIIFKz.......O0uQ=="]}]}}'
certificate: |
    -----BEGIN CERTIFICATE-----
    MIIDPTCCAiWgAwIBAgIQMibGSjeuJelQoPxCof/+xzANBgkqhkiG9w0BAQsFADAg
    MR4wHAYDVQQDExVrdWJlcm5ldGVzLWV4dGVuc2lvbnMwHhcNMjMwNDA4MDUxMDQ5
    WhcNMzMwNDA1MDUxMDQ5WjASMRAwDgYDVQQDEwdhdXRoc3ZjMIIBIjANBgkqhkiG
    9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnsKLZEJ3pXb/QjaZXFXfazplyRRmRzg6E1VW
    BHaeX3qLEKXKvdmAT807hZtyUVaSrxwjXoQDshuwNtSjDfKSCGsOiZWGPimidBFk
    U/zM2qrb+Ez0ygSI4L/Et3tMy/iSTB6TyIUUH6t4R7JWY56xZRCqXOJt5aUnEjDK
    +lBqxyfcXSf+4syHXckiTv+IvstqjPhcKcZs1Rij+44B8bUSeNpP0cHz1i2HSxwF
    CP1lD0gHA76/fd7Rp8fn/5nQl0OGZsZDwMwrs+UFY44iDBFz6VoX0qvOVbB0sZGm
    TsVaMu124zmafKUM0eyuUQhT90Pt0725NVlrjmYudplfd6rFVwIDAQABo4GAMH4w
    DgYDVR0PAQH/BAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQC
    MAAwHwYDVR0jBBgwFoAUx2TZ+AHswjOx5OcKFuhHrZPBxDwwKAYDVR0RBCEwH4IH
    YXV0aHN2Y4IJbG9jYWxob3N0ggkxMjcuMC4wLjEwDQYJKoZIhvcNAQELBQADggEB
    AKeyw9dyPX8JWJyJaUSwpbZSgAP8s6WSI7jQ+xHPFDpzKpACDTrTKA+SbxWZsD8U
    TclWkw4V/MEQiVZnKPARs6/mQ6Y5JnlpxxB/g+YOh+eNnRIdrIqN9Uc918J2aULJ
    xDH4H9YbFm78Cr8iw7R/UEc8HV7/oVs12ojsKA7nWv9T1+vxY7xqKaH/+r+6M8Kw
    seDpmyBtDpfAe3viOsMMZQ6H+RkymaA+MnNu6TUAfrSRHHIA/VXZcH13G59t0qua
    sESk/RDTB1UAvi8PD3zcbEKZuRxuo4IAJqFFbAabwULhjUo0UwT+dIJo1gLf5/ep
    VoIRJS7j6VT98WbKyZp5B4I=
    -----END CERTIFICATE-----
privateKey: LS0tLS1CRUdJTiBSU.....LS0tLQo=

 

1. Gather required values for the GuestClusterAuthSvcDataValues secret and append them into values.yaml file (reference the above example)

  • Gather the authServicePublicKeys value 
# kubectl -n vmware-system-capw get configmap vc-public-keys -o jsonpath="{.data.vsphere\.local\.json}"
  • Append the output from the above command into the values.yaml file, replacing the authServicePublicKeys section.
  • Next, gather the Guest ClusterUID from the Guest Cluster created earlier to update the authServicePublicKeys section:
# kubectl get cluster -n custom-ns ccc-cluster -o yaml | grep uid
 
  • Update the "client_id" value in the values.yaml file to "vmware-tes:vc:vns:k8s:<clusterUID>", substituting the cluster UID gathered above
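Composing the client_id string can also be scripted; the uid value below is a placeholder standing in for the UID returned by the command above:

```shell
# Build the client_id string from the Guest Cluster UID (placeholder UID shown).
uid="2dee51bb-ee4c-48b8-8baf-af1107973ef2"
client_id="vmware-tes:vc:vns:k8s:${uid}"
echo "$client_id"
# → vmware-tes:vc:vns:k8s:2dee51bb-ee4c-48b8-8baf-af1107973ef2
```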
 
  • Gather the certificate value (replace ccc-cluster with the chosen cluster name):
# kubectl -n custom-ns get secret ccc-cluster-auth-svc-cert -o jsonpath="{.data.tls\.crt}" | base64 -d
  • Replace the certificate value in the values.yaml file with the output from the above command
 
NOTE: When appending the certificate into the values.yaml file, each line must be indented 4 spaces to avoid failure
 
  • Gather the privateKey value 
# kubectl -n custom-ns get secret ccc-cluster-auth-svc-cert -o jsonpath="{.data.tls\.key}"
 
  • Replace the privateKey value in values.yaml with the output from the above command
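The 4-space indentation required for the certificate (see the NOTE above) can be applied mechanically rather than by hand; this sketch pipes the decoded certificate through sed:

```shell
# Prefix every line of the decoded PEM certificate with 4 spaces for YAML embedding.
kubectl -n custom-ns get secret ccc-cluster-auth-svc-cert -o jsonpath="{.data.tls\.crt}" \
  | base64 -d | sed 's/^/    /'
```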

 

2. Base64-encode the values.yaml file to generate the string for the guest-cluster-auth-service-data-values.yaml file

# base64 -i values.yaml -w 0
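A quick sanity check that the one-line string will decode back to the original file (assumes GNU coreutils base64; on macOS, drop the -w 0 flag):

```shell
# Verify the base64 string round-trips to the original values.yaml.
base64 -w 0 values.yaml | base64 -d | diff - values.yaml && echo "encoding OK"
```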

3. Edit the guest-cluster-auth-service-data-values.yaml file (downloaded from CCC_config_yamls.tar.gz), replace the "ccc-cluster" values in the metadata.labels.cluster-name and metadata.name fields with the chosen cluster name, then replace the data.values.yaml string with the base64 value output by the command in step 2 above

4. Apply the GuestClusterAuthSvcDataValues file:

# kubectl -n custom-ns apply -f guest-cluster-auth-service-data-values.yaml

5. Edit Cluster Bootstrap to reference the secret created above:

# kubectl -n custom-ns edit clusterbootstrap ccc-cluster

  • Find the additional package referenced under "spec.additionalPackages" with the refName that starts with "guest-cluster-auth-service". 
  • Add the following lines below the refName: guest-cluster-auth-service line:

EXAMPLE:

- refName: guest-cluster-auth-service.tanzu.vmware.com.1.0.0+tkg.2-zshippable
  valuesFrom:
    secretRef: ccc-cluster-guest-cluster-auth-service-data-values

  • Save and quit to apply the clusterbootstrap modifications

 

  • Create Guest Cluster Security Policies for Workload Management
    • Pods within the Guest Cluster require additional PSP-related objects and rolebinding sync to be able to run properly for authentication. In order to apply the required resource objects on the cluster level, use the following process:
1. Gather the Guest Cluster kubeconfig
 
# kubectl -n custom-ns get secret ccc-cluster-kubeconfig -o jsonpath="{.data.value}" | base64 -d > ccc-cluster-kubeconfig

2. Apply the psp.yaml file from CCC_config_yamls.tar.gz using the KUBECONFIG gathered above
# KUBECONFIG=ccc-cluster-kubeconfig kubectl apply -f psp.yaml
 

 

  • Synchronize SSO roles configured in vSphere Client Namespace view into Guest Cluster for management
    • Rolebindings for SSO users built in the Namespaces view of the vSphere Web Client must be synchronized from the Supervisor Cluster to the Guest Cluster in order for developers to manage Guest Cluster workloads.
    • This process requires exporting the existing rolebinding list from the Supervisor Cluster, gathering the rolebindings that have the "edit" role, and updating the sync-cluster-edit-rolebinding.yaml file before applying it to the Guest Cluster using the GC's KUBECONFIG.


1. Gather existing rolebindings from SV cluster:


# kubectl -n custom-ns get rolebinding -o yaml


2. From the returned list of rolebinding objects, identify ones with roleRef.name equal to "edit"
 

EXAMPLE:
 

- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    creationTimestamp: "2023-04-14T17:23:36Z"
    labels:
      managedBy: vSphere
    name: wcp:custom-ns:group:SSODOMAIN.COM:testuser
    namespace: custom-ns
    resourceVersion: "3243943"
    selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/custom-ns/rolebindings/wcp:custom-ns:group:vsphere.local:administrators
    uid: e0371d9b-5ec3-4553-952f-34bde3fb5d06
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: edit
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: sso:[email protected]
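If jq is available on the jumpbox, the "edit" rolebindings can be listed directly instead of scanning the full YAML by eye (a sketch):

```shell
# Print only the names of rolebindings whose roleRef.name is "edit".
kubectl -n custom-ns get rolebinding -o json \
  | jq -r '.items[] | select(.roleRef.name=="edit") | .metadata.name'
```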


3. Edit the sync-cluster-edit-rolebinding.yaml file from CCC_config_yamls.tar.gz to add any extra rolebindings other than the default [email protected]


EXAMPLE:
 

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    run.tanzu.vmware.com/vmware-system-synced-from-supervisor: "yes"
  name: vmware-system-auth-sync-wcp:custom-ns:group:vsphere.local:administrators
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: sso:[email protected]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    run.tanzu.vmware.com/vmware-system-synced-from-supervisor: "yes"
  name: vmware-system-auth-sync-wcp:custom-ns:group:SSODOMAIN.COM:testuser
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: sso:[email protected]

NOTE: In the metadata.name, the user role is prepended with vmware-system-auth-sync- for all users. The metadata.name and subjects.name entries will require modification for all non-default roles.

4. Apply the sync-cluster-edit-rolebinding.yaml config to synchronize rolebindings:

# KUBECONFIG=ccc-cluster-kubeconfig kubectl apply -f sync-cluster-edit-rolebinding.yaml

 

Attachments

CCC_config_yamls.tar.gz