PKS cluster details do not show NSXT Network Details

Article ID: 298667


Products

VMware Tanzu Kubernetes Grid Integrated Edition

Issue/Introduction

After updating the bosh config for a PKS service instance, pks cluster <cluster-name> --details no longer displays the cluster's NSXT Network Details; all fields are blank.

What you should see:
 
pks cluster test --details
:
NSXT Network Details:
  Load Balancer Size                       (lb_size):                  "small"
  Nodes DNS Setting                        (nodes_dns):                ["10.1.2.3","10.1.2.4"]
  Node IP addresses are routable [no-nat]  (node_routable):            false
  Nodes subnet prefix                      (node_subnet_prefix):       24
  POD IP addresses are routable [no-nat]   (pod_routable):             false
  PODs subnet prefix                       (pod_subnet_prefix):        24
  NS Group ID of master VMs                (master_vms_nsgroup_id):    ""
  Tier 0 Router identifier                 (t0_router_id):             "439297bf-e62b-46bf-9d95-a5464f78be2b"
  Floating IP Pool identifiers             (fip_pool_ids):             ["77dbca8b-11be-4ef8-96a6-da260354a115"]
  Node IP block identifiers                (node_ip_block_ids):        ["663d23c6-c4d8-43b6-830a-473c4b323209"]
  POD IP block identifiers                 (pod_ip_block_ids):         ["061ca94e-358c-4532-b8c6-1fd3f76de52e"]
  Shared tier1                             (single_tier_topology):     true
  Infrastructure networks                  (infrastructure_networks):  ""

What you actually see:
NSXT Network Details:
  Load Balancer Size                       (lb_size):                  ""
  Nodes DNS Setting                        (nodes_dns):                ""
  Node IP addresses are routable [no-nat]  (node_routable):            ""
  Nodes subnet prefix                      (node_subnet_prefix):       ""
  POD IP addresses are routable [no-nat]   (pod_routable):             ""
  PODs subnet prefix                       (pod_subnet_prefix):        ""
  NS Group ID of master VMs                (master_vms_nsgroup_id):    ""
  Tier 0 Router identifier                 (t0_router_id):             ""
  Floating IP Pool identifiers             (fip_pool_ids):             ""
  Node IP block identifiers                (node_ip_block_ids):        ""
  POD IP block identifiers                 (pod_ip_block_ids):         ""
  Shared tier1                             (single_tier_topology):     ""
  Infrastructure networks                  (infrastructure_networks):  ""

If you used the Bosh Commandline Credentials from the Ops Manager Credentials page to retrieve and update the config, you were acting as a bosh admin user. That means when you updated the service instance's bosh config, the config lost its ownership by the Pivotal Container Service bosh client. As a result, the PKS user is no longer able to see the updated config.
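For reference, the Bosh Commandline Credentials from Ops Manager typically take the following form (the values here are placeholders, not taken from this environment):

BOSH_CLIENT=ops_manager BOSH_CLIENT_SECRET=<secret> BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate BOSH_ENVIRONMENT=<director-address> bosh

Any config updated while these admin credentials are in effect is written without the service instance's team.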

The loss of access can be seen by running bosh configs as the bosh admin user.

You should see something like the following, where the Team column has a value for each service instance:
bosh configs
ID   Type   Name                                                   Team                                            Created At
:
17*  cloud  service-instance_78c71a51-65a0-4e9e-87ba-5004370fe652  pivotal-container-service-cedfe8ef830cf6bcadcd  2021-01-05 10:46:09 UTC

When the ownership is lost, the Team column shows no value:
bosh configs
ID   Type   Name                                                   Team                                            Created At

18*  cloud  service-instance_78c71a51-65a0-4e9e-87ba-5004370fe652  -                                               2021-01-05 11:43:53 UTC
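If you manage many service instances, one quick way to list every config that has lost its team is to filter the JSON output of bosh configs. This is a sketch that assumes jq is installed; the lowercase row field names are an assumption about your bosh CLI version's JSON output:

bosh configs --json | jq -r '.Tables[0].Rows[] | select(.team == "" or .team == "-") | .name'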



Further evidence of the problem may be seen in the pivotal-container-service VM logs. In the /var/vcap/sys/log/pks-nsx-t-osb-proxy/pks-nsx-t-osb-proxy.stdout.log file, you may see an error like:
 
{"timestamp":"1609850048.164758205","source":"pks-nsx-t-osb-proxy","message":"pks-nsx-t-osb-proxy.proxy.get-network-settings-operation: failed to infer Network Profile","log_level":1,"data":{"error":"No config\nerror fetching latest cloud config with name service-instance_78c71a51-65a0-4e9e-87ba-5004370fe652 ....


Environment

Product Version: 1.9

Resolution

First, retrieve the bosh config for the service instance again and save it to a text file:

bosh config --type=cloud --name=service-instance_GUID --column Content > service-instance_GUID-config.yml
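Before deleting anything, confirm the saved file contains the cloud config YAML rather than table formatting (a quick sanity check on the file created above):

head service-instance_GUID-config.yml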

Next, delete the bad bosh config so the pks bosh client can take ownership of the new cloud config:

bosh delete-config --type=cloud --name=service-instance_GUID
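You can optionally confirm the config is gone before re-uploading it:

bosh configs --type=cloud --name=service-instance_GUID

This should return no matching entry.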

Then, from the PKS tile > Credentials > UAA Client Credentials page, use these credentials to replace the BOSH_CLIENT and BOSH_CLIENT_SECRET environment variables, then upload the config again as the PKS client:
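For example (the client name and secret below are placeholders; use the actual values from the tile):

export BOSH_CLIENT=pivotal-container-service-<identifier>
export BOSH_CLIENT_SECRET=<uaa-client-secret>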

bosh update-config --type=cloud --name=service-instance_GUID service-instance_GUID-config.yml

After that, pks cluster <cluster-name> --details should show the NSXT Network Details again.
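To verify that ownership was restored, run bosh configs again and confirm the Team column for the service instance once more shows the pivotal-container-service client:

bosh configs --type=cloud --name=service-instance_GUID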