How to get TKGI credentials when the BOSH Director is down


Article ID: 298732


Products

VMware Tanzu Kubernetes Grid Integrated Edition

Issue/Introduction

As per the current architecture of TKGI, if the BOSH Director VM is down while all other components are up, the tkgi get-credentials command fails with an error similar to the one below:
Error: Status: 500; ErrorMessage: <nil>; Description: There was a problem completing your request. Please contact your operations team providing the
following information: service: p.pks, service-instance-guid: ad5b159-65e9-2223-8759-9278c516g6e4, broker-request-id: a52966b6-ab18-48g8-00bv-3454f3bc45b6, operation: bind - error-message: gathering deployment list Cannot get the list of deployments: 

Finding deployments: Performing request GET 'https://172.50.0.3:25555/deployments': Performing GET request: Requesting token via client credentials grant: Performing request POST 'https://172.50.0.3:8443/oauth/token': Performing POST request: Retry: Post https://172.50.0.3:8443/oauth/token: dial tcp 172.50.0.3:8443: connect: connection refused; ResponseError: <nil>
This happens because the TKGI on-demand broker contacts the BOSH Director to list the available deployments (service instances), as seen in the GET request to 'https://172.50.0.3:25555/deployments' above. With the Director down, this call fails and the bind operation cannot complete.
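Before raising a ticket, the two unreachable endpoints named in the error (the Director API on 25555 and UAA on 8443) can be checked directly. A minimal sketch, assuming curl is available; DIRECTOR_IP is a placeholder to be replaced with your BOSH Director address (172.50.0.3 in the sample error above):

```shell
# DIRECTOR_IP is a placeholder; substitute the BOSH Director address
# from your environment (172.50.0.3 in the sample error above)
DIRECTOR_IP=${DIRECTOR_IP:-127.0.0.1}

# Probe the Director API (25555) and UAA (8443) ports; a "connection
# refused" here matches the failure the broker reports
for port in 25555 8443; do
  if curl -k -s --max-time 5 "https://${DIRECTOR_IP}:${port}/" >/dev/null; then
    echo "port ${port}: reachable"
  else
    echo "port ${port}: unreachable (curl exit $?)"
  fi
done
```

If both ports are unreachable, the Director VM itself is down and the behavior described in this article applies.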

Environment

Product Version: 1.9
OS: Ubuntu

Resolution

In such scenarios, the following steps can be used to obtain the kubeconfig of a Kubernetes cluster. Before proceeding, please make sure to do the following:
  • Note: These instructions only cover the use case where TKGI is configured to use UAA as an OIDC provider
  • Replace the username admin with a non-admin user to test it first
  • Replace pks.corp.local with the PKS API FQDN
  • Replace cluster1 with the cluster name
  • Replace cluster1.corp.local with the cluster FQDN
  • Replace password wherever applicable
Send a POST request to generate id_token and refresh_token:
curl 'https://pks.corp.local:8443/oauth/token' -k -s -X POST \
    -H 'Content-Type: application/x-www-form-urlencoded' \
    -H 'Accept: application/json' \
    -d 'response_type=token id_token&client_id=pks_cluster_client&client_secret=&grant_type=password&username=admin&password=HehXCHBuGUBP2FQ-bmEmiS71dK986TTv' > token.json
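The response saved to token.json can be sanity-checked before building the kubeconfig: a UAA id_token is a JWT, so its payload (the second dot-separated segment) is base64url-encoded JSON. The sketch below fabricates a sample token.json for illustration (the real one comes from the curl request above) and decodes the payload with jq and base64:

```shell
# Fabricated sample for illustration only; the real token.json comes
# from the curl request above. The claim values here are made up.
PAYLOAD_JSON='{"user_name":"admin","exp":1700000000}'
PAYLOAD_B64=$(printf '%s' "$PAYLOAD_JSON" | base64 | tr -d '=\n' | tr '/+' '_-')
printf '{"id_token":"header.%s.signature","refresh_token":"rt"}\n' "$PAYLOAD_B64" > token.json

# Extract the payload segment, convert base64url back to base64,
# re-pad to a multiple of 4, and decode to inspect the claims
SEG=$(jq -r .id_token token.json | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#SEG} % 4 )) -ne 0 ]; do SEG="${SEG}="; done
printf '%s' "$SEG" | base64 -d | jq .
```

Checking the user_name and exp claims this way confirms the token was issued for the expected user and has not already expired.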

Get the cluster CA certificate:
echo | openssl s_client -showcerts -servername cluster1.corp.local -connect cluster1.corp.local:8443 2>/dev/null | openssl x509 -outform PEM > cluster-ca.crt
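To confirm what was captured, the saved PEM can be inspected with openssl x509. The sketch below generates a throwaway self-signed certificate (demo-ca.crt) in place of the real cluster-ca.crt so the inspection commands can be shown end to end; substitute cluster-ca.crt to check the real file:

```shell
# Throwaway self-signed cert standing in for the real cluster-ca.crt;
# cluster1.corp.local is the placeholder FQDN from the steps above
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=cluster1.corp.local" \
  -keyout demo-ca.key -out demo-ca.crt 2>/dev/null

# Print the subject and validity window of the saved certificate;
# run this against cluster-ca.crt to verify the captured cert
openssl x509 -in demo-ca.crt -noout -subject -dates
```

The subject should match the cluster FQDN and the notAfter date should be in the future; otherwise re-run the s_client command and check which certificate the endpoint actually presented.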

Create a kubeconfig for this cluster and set the current context in the newly generated file:
# Create the context entry, linking the cluster to the user
kubectl config set-context cluster1 --cluster cluster1 --user admin --kubeconfig config.json

# Embed certificates
kubectl config set-cluster cluster1 --server=https://cluster1.corp.local:8443 --certificate-authority=cluster-ca.crt --embed-certs=true --kubeconfig config.json

# Configure auth provider to redirect to IDP
kubectl config set-credentials admin \
  --auth-provider oidc \
  --auth-provider-arg client-id=pks_cluster_client \
  --auth-provider-arg client-secret="" \
  --auth-provider-arg idp-issuer-url=https://pks.corp.local:8443/oauth/token \
  --kubeconfig config.json

# Set certificate authority data for the OIDC provider
kubectl config set-credentials admin --auth-provider-arg=idp-certificate-authority-data="$(base64 -w 0 cluster-ca.crt)" --kubeconfig config.json

# Set id and refresh tokens from the UAA response
kubectl config set-credentials admin --auth-provider-arg=id-token="$(jq -r .id_token token.json)" --kubeconfig config.json
kubectl config set-credentials admin --auth-provider-arg=refresh-token="$(jq -r .refresh_token token.json)" --kubeconfig config.json

# Switch the kubeconfig's current context to cluster1
kubectl config use-context cluster1 --kubeconfig config.json
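Once all of the commands above have run, config.json (kubectl writes YAML regardless of the file extension) should have roughly the following shape. The token and certificate values below are shortened placeholders, not real data:

```yaml
apiVersion: v1
kind: Config
current-context: cluster1
clusters:
- name: cluster1
  cluster:
    server: https://cluster1.corp.local:8443
    certificate-authority-data: LS0tLS1CRUdJTi...   # embedded cluster-ca.crt
contexts:
- name: cluster1
  context:
    cluster: cluster1
    user: admin
users:
- name: admin
  user:
    auth-provider:
      name: oidc
      config:
        client-id: pks_cluster_client
        client-secret: ""
        idp-issuer-url: https://pks.corp.local:8443/oauth/token
        idp-certificate-authority-data: LS0tLS1CRUdJTi...
        id-token: eyJhbGciOi...
        refresh-token: eyJhbGciOi...
```

With this file in place, kubectl commands run with --kubeconfig config.json authenticate against UAA directly, without involving the BOSH Director.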