As per the current architecture of TKGI, if the BOSH Director VM is down while all other components are up, the tkgi get-credentials command fails with an error similar to the one below:
Error: Status: 500; ErrorMessage: <nil>; Description: There was a problem completing your request. Please contact your operations team providing the following information: service: p.pks, service-instance-guid: ad5b159-####-####-8759-9278c516g6e4, broker-request-id: a52966b6-ab18-48g8-00bv-3454f3bc45b6, operation: bind - error-message: gathering deployment list Cannot get the list of deployments: Finding deployments: Performing request GET 'https://172.###.###.3:25555/deployments': Performing GET request: Requesting token via client credentials grant: Performing request POST 'https://172.###.###.3:8443/oauth/token': Performing POST request: Retry: Post https://172.###.###.3:8443/oauth/token: dial tcp 172.###.###.3:8443: connect: connection refused; ResponseError: <nil>
This happens because the TKGI on-demand broker tries to list the available deployments (service instances) by contacting the BOSH Director, as seen in the GET request to 'https://172.###.###.3:25555/deployments' above. Since the Director is down, this operation fails.
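To confirm that the Director itself is unreachable (rather than a broker-side problem), a quick check can be run against the Director API port. The DIRECTOR_IP placeholder below is hypothetical; substitute the BOSH Director address from your environment:

```shell
# Hypothetical reachability check against the BOSH Director API port (25555).
# Replace DIRECTOR_IP with the Director address from your environment.
# "director unreachable" here matches the "connection refused" in the bind error.
DIRECTOR_IP="<bosh-director-ip>"
curl -k -s -m 5 "https://${DIRECTOR_IP}:25555/info" \
  || echo "director unreachable"
```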
Product Version: 1.9+
OS: Ubuntu
In such scenarios, the kubeconfig of a Kubernetes cluster can still be obtained manually using the steps below.
Send a POST request to generate an id_token and a refresh_token:
curl 'https://pks.corp.local:8443/oauth/token' -k -s -X POST \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -H 'Accept: application/json' \
  -d 'response_type=token id_token&client_id=pks_cluster_client&client_secret=&grant_type=password&username=admin&password=###############-################' > token.json
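If the request succeeds, token.json contains both tokens. A quick sanity check before building the kubeconfig (assuming jq is available, as it is used later in this procedure):

```shell
# Print the two fields the kubeconfig steps below rely on;
# "null" for either value means the OAuth request did not succeed.
jq -r '.id_token, .refresh_token' token.json
```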
Get the cluster CA certificate:
echo | openssl s_client -showcerts -servername cluster1.corp.local -connect cluster1.corp.local:8443 2>/dev/null | openssl x509 -outform PEM > cluster-ca.crt
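Optionally, the saved certificate can be inspected to confirm the right one was captured, for example:

```shell
# Show the subject and expiry of the certificate written to cluster-ca.crt
openssl x509 -in cluster-ca.crt -noout -subject -enddate
```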
Create kubeconfig for this cluster and set the current context to use the newly generated kubeconfig:
# Set context
kubectl config set-context cluster1 --cluster cluster1 --user admin --kubeconfig config.json

# Embed certificates
kubectl config set-cluster cluster1 --server=https://cluster1.corp.local:8443 --certificate-authority=cluster-ca.crt --embed-certs=true --kubeconfig config.json

# Configure auth provider to redirect to IDP
kubectl config set-credentials admin \
  --auth-provider oidc \
  --auth-provider-arg client-id=pks_cluster_client \
  --auth-provider-arg client-secret="" \
  --auth-provider-arg idp-issuer-url=https://pks.corp.local:8443/oauth/token \
  --kubeconfig config.json

# Set certificate authority data
kubectl config set-credentials admin --auth-provider-arg=idp-certificate-authority-data=$(base64 -w 0 cluster-ca.crt) --kubeconfig config.json

# Set id and refresh tokens
kubectl config set-credentials admin --auth-provider-arg=id-token=$(jq -r .id_token token.json) --kubeconfig config.json
kubectl config set-credentials admin --auth-provider-arg=refresh-token=$(jq -r .refresh_token token.json) --kubeconfig config.json

# Use the generated kubeconfig in the current context
kubectl config use-context cluster1 --kubeconfig config.json
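Once the context is set, subsequent kubectl commands can be pointed at the generated file either with the --kubeconfig flag, as above, or by exporting KUBECONFIG for the session; a minimal sketch:

```shell
# Use the generated kubeconfig for every kubectl command in this shell session
export KUBECONFIG="$PWD/config.json"
# ...then verify access to the cluster, for example:
#   kubectl get nodes
#   kubectl get pods -A
```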