This is expected behavior. Running the tkg command with sudo causes the tkg and kubectl configuration files to be created under the root user's home directory and owned by the root user.
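For example, after creating a management cluster via sudo (with a command such as tkg init), the configuration files end up at the following locations, assuming the root home directory is /root:

/root/.tkg/config.yaml   (rather than $HOME/.tkg/config.yaml)
/root/.kube/config       (rather than $HOME/.kube/config)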
Workaround:
You will need to merge the contents of the root user's .tkg/config.yaml and .kube/config files into the current user's copies by adding the management cluster created via sudo and then importing credentials for any clusters that were created.
To add a management cluster created via sudo, first change the ownership or permissions on the root user's .kube/config file (with the chown or chmod commands) so that the current user can read it.
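A minimal sketch of that step, assuming the root user's home directory is /root (adjust the path for your environment):

sudo chown $(id -un) /root/.kube/config
# or grant read access without changing ownership:
# sudo chmod 644 /root/.kube/config
# Note: /root and /root/.kube must also be traversable by the current user;
# if you would rather not change their permissions, copy the file to a
# readable location instead.

Once the file is readable, you can run commands similar to the following: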
tkg add mc --kubeconfig <root home directory>/.kube/config
tkg set mc <management cluster name>
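For example, assuming the root home directory is /root and a management cluster named my-mgmt-cluster (both hypothetical):

tkg add mc --kubeconfig /root/.kube/config
tkg set mc my-mgmt-cluster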
You can now run tkg get clusters to list the available clusters and then use the tkg get credentials <cluster name> command to add each cluster's credentials to your .kube/config file.
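For example, with a hypothetical workload cluster named my-cluster:

tkg get clusters
tkg get credentials my-cluster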
Note: If you want to perform additional TKG operations against the same vSphere environment, you can copy the vSphere-specific deployment variables from the root user's .tkg/config.yaml file to the current user's .tkg/config.yaml file.
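The vSphere-specific entries typically look similar to the following (variable names can vary by TKG version, and the values shown are placeholders; copy the actual values from the root user's file):

VSPHERE_SERVER: <vCenter Server address>
VSPHERE_USERNAME: <vSphere username>
VSPHERE_PASSWORD: <vSphere password>
VSPHERE_DATACENTER: <datacenter path>
VSPHERE_DATASTORE: <datastore name>
VSPHERE_NETWORK: <network name>
VSPHERE_RESOURCE_POOL: <resource pool path>
VSPHERE_SSH_AUTHORIZED_KEY: <public SSH key>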
You will still need to add a kubeconfig entry for the management cluster to your .kube/config file. You can manually copy the relevant entries from the root user's .kube/config file to the current user's .kube/config file or use the following steps:
export KUBECONFIG=<current user's home directory>/.kube/config:<root user's home directory>/.kube/config
kubectl config view --flatten > /tmp/merged_kubeconfig
mv /tmp/merged_kubeconfig <current user's home directory>/.kube/config
Note: The merged output must be written to a temporary file first. Redirecting it directly into the current user's .kube/config would cause the shell to truncate that file before kubectl reads it, losing its existing contents.
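A concrete sketch of the same steps, assuming hypothetical home directories of /home/ubuntu for the current user and /root for the root user:

export KUBECONFIG=/home/ubuntu/.kube/config:/root/.kube/config
kubectl config view --flatten > /tmp/merged_kubeconfig
mv /tmp/merged_kubeconfig /home/ubuntu/.kube/config
unset KUBECONFIG   # so kubectl uses only the merged file from now on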
Note: This will result in all contexts from the root user's .kube/config file being present in the current user's .kube/config file. If any of the added contexts are unwanted, you can delete them with the kubectl config delete-context <context name> command.
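For example, to review the merged contexts and remove an unwanted one:

kubectl config get-contexts
kubectl config delete-context <unwanted context name>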