Gathering Logs for vSphere with Tanzu



Article ID: 345464


Products

VMware vSphere ESXi
VMware vSphere with Tanzu

Issue/Introduction

There are two components to gathering logs in vSphere with Tanzu:

1. Workload Management Logs: Contains logs for the Supervisor Control Plane VMs/Supervisor Cluster.

2. Guest Cluster (TKG) Logs: Contains all of the logs for a specific guest cluster.

Environment

VMware vSphere 8.0 with Tanzu
VMware vSphere 7.0 with Tanzu

Resolution

Gather Workload Management Support Bundle
Workload Management support bundles can be retrieved by logging into the vCenter UI and selecting Menu -> Workload Management -> Clusters -> Export Logs, with the appropriate cluster selected.
- This works even if the cluster is stuck in a removing, configuring, or updating state.
- This includes a vCenter log bundle.
- This does not include ESXi logs. If the issue pertains to vSphere Pods or to the Guest Cluster VMs themselves, customers should additionally gather ESXi logs and upload them to their support ticket.

Gather Guest Cluster (TKGs) Support Bundle

This bundle is gathered via a CLI tool attached to this KB. The tool is supported only on macOS and Linux jumpboxes.
 

Prerequisites:

1. A Linux or macOS jumpbox to run the tool from. If you are a Windows-only shop, you can use the vCenter Server or a Supervisor Control Plane VM as your jumpbox to run the bundler from.

Note: To run kubectl commands on vCenter, you can pull kubectl from the Supervisor Cluster by running this command on vCenter as root:
# curl -k https://$(/usr/lib/vmware-wcp/decryptK8Pwd.py | grep IP -m 1 | awk '{print $2}')/wcp/plugin/linux-amd64/vsphere-plugin.zip -o /tmp/vsphere-plugin.zip && unzip -d /usr /tmp/vsphere-plugin.zip


2. The Supervisor Cluster kubeconfig file present on the system from which the tkc-support-bundler command will be run. This can either be copied from another system or generated by running the kubectl vsphere login command.


3. Your current Kubernetes context must be set to the supervisor cluster.
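Prerequisites 2 and 3 can be satisfied from the jumpbox roughly as follows. This is a sketch: the Supervisor address and SSO user are placeholders, `kubectl vsphere login` comes from the vsphere-plugin.zip mentioned above, and the commands are printed as a dry run rather than executed:

```shell
# Placeholder values (assumptions): replace with your Supervisor address and SSO user.
SUPERVISOR="192.0.2.15"
SSO_USER="[email protected]"

# Log in to the Supervisor Cluster; this writes/updates ~/.kube/config.
LOGIN_CMD="kubectl vsphere login --server=${SUPERVISOR} --vsphere-username=${SSO_USER} --insecure-skip-tls-verify"

# Then set the current context to the Supervisor Cluster
# (the login typically creates a context named after the server address).
CONTEXT_CMD="kubectl config use-context ${SUPERVISOR}"

# Dry run: print the commands; run them directly once the values are set.
echo "$LOGIN_CMD"
echo "$CONTEXT_CMD"
```

`kubectl config get-contexts` can be used to confirm which context is current before running the bundler.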

 


Flags available for tkc-support-bundler:

./tkc-support-bundler create --help
Create a TKC support bundle

Usage:
  tkc-support-bundler create [flags]

Flags:
  -b, --batch-size int            Number of nodes on which parallel collection is triggered (default 5)
  -c, --cluster string            Tanzu Kubernetes cluster to collect support-bundle for
  -d, --domain-name string        VC Domain name (default "vsphere.local")
  -h, --help                      help for create
  -i, --insecure                  Creates an insecure connection to the VC
  -k, --kubeconfig string         Absolute path to the Supervisor cluster Kubeconfig. If this flag is not set, kubeconfig will be picked up from KUBECONFIG environment variable. (default "$HOME/.kube/config")
  -t, --kubectl-commands string   comma seperated kubectl commands
  -l, --log-ns string             comma seperated namespaces list whose logs should be included
  -n, --namespace string          Supervisor Cluster namespace where the Tanzu Kubernetes cluster resides
  -s, --node-stats                To include the node stats in the support bundle
  -o, --output string             Absolute path to the directory where the support-bundle will be stored, e.g. /home/myuser/mybundle
  -p, --progress-bar              To progress-bar for support-bundle collection per node
  -u, --user string               VC User name
  -v, --vc string                 VC IP:<port>. By default, 443 is considered as the https port 

Required flags:

-c, --cluster string Tanzu Kubernetes cluster to collect support-bundle for
-n, --namespace string Supervisor Cluster namespace where the Tanzu Kubernetes cluster resides
-o, --output string Absolute path to the directory where the support-bundle will be stored, e.g. /home/myuser/mybundle
-u, --user string VC User name
-v, --vc string VC IP:<port>. By default, 443 is considered as the https port


Example of a default support bundle collection, where:
- The kubeconfig file lives at ~/.kube/config and has its context set to the Supervisor Cluster
- 192.0.2.15 is the vCenter IP address
- The admin user is Administrator and the VMware SSO domain is vsphere.local
- The Guest Cluster name is guestcluster01
- The Supervisor Cluster Namespace where the Guest Cluster lives is supcluster01
- The log bundle output directory is the user's home directory, ~/

./tkc-support-bundler create -k ~/.kube/config -v 192.0.2.15 -u [email protected] -c guestcluster01 -n supcluster01 -o ~/ -i -p

 

If a service account named "tkc-support-bundler-user-{cluster-name}-{cluster-namespace}", or permissions associated with it (role name: "tkc-support-bundler-guestops-role-{cluster-name}-{cluster-namespace}"), already exists, log bundle collection will fail. Users must therefore clean up this service account and the related role before collecting logs.
There are two methods to delete them:
1. Automatic Deletion: after running the binary, the system will prompt for automatic deletion.
2. Manual Deletion:

To delete a role, navigate in the VC UI to Administration -> Roles, find the specific role, and then click the delete button.
To delete a user account, navigate in the VC UI to Administration -> Single Sign-On -> Users and Groups, find the specific user account, and then click the delete button.
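The service-account and role names follow the patterns above; a quick shell sketch (using the cluster and namespace names from the earlier example) shows the exact names to look for when cleaning up:

```shell
# Example names from this article (substitute your own cluster and namespace).
CLUSTER="guestcluster01"
NAMESPACE="supcluster01"

# Names the bundler creates, per the patterns documented above.
SA_NAME="tkc-support-bundler-user-${CLUSTER}-${NAMESPACE}"
ROLE_NAME="tkc-support-bundler-guestops-role-${CLUSTER}-${NAMESPACE}"

echo "$SA_NAME"
echo "$ROLE_NAME"
```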


Note:

  • When the log bundler fails, it generates a .log file in the output directory with more details on why it failed.
  • If the bundler finishes very quickly and produces only a small log tar file, it's likely that the vmware-system-user account has expired. Follow this KB to resolve the issue: https://knowledge.broadcom.com/external/article?legacyId=90469

Attachments

tkc-support-bundler-v1.5.0-darwin-amd64.tar
tkc-support-bundler-v1.5.0-darwin-arm64.tar
tkc-support-bundler-v1.5.0-linux-amd64.tar