Accessing Prometheus metrics data failed with INVALID_VANITY_HOST

Article ID: 375522

Products

VIP Authentication Hub

Issue/Introduction

I was trying to curl the performance monitoring data from the /metrics endpoint but kept getting an "Invalid vanity host" error:

$ curl -vk https://<IP Address>:8080/metrics
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):

> GET /metrics HTTP/1.1
> User-Agent: curl/7.35.0
> Host: <IP Address>:8080
> Accept: */*

< HTTP/1.1 400
< X-TRANSACTION-ID: xxxxx
< Content-Type: application/json;charset=utf-8
< Content-Length: 60
< Date: Mon, 19 Aug 2024 23:51:26 GMT
< Connection: close

* SSLv3, TLS alert, Client hello (1):
{"errorCode":"0000043","errorMessage":"Invalid vanity host"}

The Pod's log shows:

"msg":"<< TenantFilter processing failed to assert tenancy. Reason 'INVALID_VANITY_HOST', Service FQDN '<fqdn>, Request host FQDN '<fqdn>',  Request tenant hint '', Request tenant id 'null', Resolved issuer type 'TENANT_VANITY_HOST', Resolved tenant hint '<ip address>'. >>"

Environment

VIP Authentication Hub 3.x

Resolution

First of all, make sure that performance monitoring data is collected from an internal cluster endpoint and not sent through the external ingress.

Refer to the following documentation:

  Deploying Prometheus

It explains that Authentication Hub services expose performance monitoring data in Prometheus metrics format. The endpoint exposing the Prometheus metrics is indicated by the Authentication Hub service's label. If a Prometheus Operator is deployed in your cluster, Authentication Hub creates the proper ServiceMonitor object to communicate this information to Prometheus.
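
If the Prometheus Operator is present, you can confirm that the ServiceMonitor objects were created (a quick check, assuming the 'ssp' namespace used in the examples below):

# kubectl get servicemonitors -n ssp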

  • Every service with the label ssp-prometheus-metrics=actuator exposes the endpoint /actuator/prometheus over TLS (port 443, HTTPS).
  • Every service with the label ssp-prometheus-metrics=metrics exposes the endpoint /metrics over TLS (port 443, HTTPS).
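
To see at a glance which label value, and therefore which metrics endpoint, each service carries, you can list all services with the label value shown as an extra column (a sketch, assuming the 'ssp' namespace):

# kubectl get svc -n ssp -L ssp-prometheus-metrics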

Hence, depending on the Pod, we need to identify whether its endpoint is /actuator/prometheus or /metrics.

For example, running the kubectl command below shows that the 'factor' Pod's endpoint is /actuator/prometheus, not /metrics:

# kubectl get svc -l ssp-prometheus-metrics=actuator -n ssp
NAME                  TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
ssp-ssp-admin         ClusterIP   nn.nn.nn.nn   <none>        443/TCP   106d
ssp-ssp-auth-mgr      ClusterIP   nn.nn.nn.nn   <none>        443/TCP   106d
ssp-ssp-azserver      ClusterIP   nn.nn.nn.nn   <none>        443/TCP   106d
ssp-ssp-factor        ClusterIP   nn.nn.nn.nn   <none>        443/TCP   106d
ssp-ssp-geolocation   ClusterIP   nn.nn.nn.nn   <none>        443/TCP   106d
ssp-ssp-iarisk        ClusterIP   nn.nn.nn.nn   <none>        443/TCP   106d
ssp-ssp-identity      ClusterIP   nn.nn.nn.nn   <none>        443/TCP   106d
ssp-ssp-scheduler     ClusterIP   nn.nn.nn.nn   <none>        443/TCP   106d

# kubectl get endpoints ssp-ssp-factor -n ssp
NAME             ENDPOINTS                AGE
ssp-ssp-factor   <factor Pod's IP>:8080   111d
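
The Pod's IP shown in the ENDPOINTS column can also be read directly from the Pod listing, for example (filtering on 'factor' is illustrative):

# kubectl get pods -n ssp -o wide | grep factor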

So the following curl command should work to retrieve the performance monitoring data of the 'factor' Pod:

# curl -k https://<factor Pod's IP>:8080/actuator/prometheus
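
A successful call returns plain text in the Prometheus exposition format, that is, '# HELP' and '# TYPE' lines followed by metric samples (the metric shown below is illustrative):

# curl -ks https://<factor Pod's IP>:8080/actuator/prometheus | head
# HELP jvm_memory_used_bytes The amount of used memory
# TYPE jvm_memory_used_bytes gauge
...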

Likewise, running the kubectl command below shows that the 'signin' Pod's endpoint is /metrics, not /actuator/prometheus:

# kubectl get svc -l ssp-prometheus-metrics=metrics -n ssp
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
ssp-ssp-adminconsole   ClusterIP   nn.nn.nn.nn   <none>        443/TCP   106d
ssp-ssp-opa            ClusterIP   nn.nn.nn.nn   <none>        443/TCP   106d
ssp-ssp-signin         ClusterIP   nn.nn.nn.nn   <none>        443/TCP   106d

# kubectl get endpoints ssp-ssp-signin -n ssp
NAME             ENDPOINTS                AGE
ssp-ssp-signin   <signin Pod's IP>:3000   111d

So the following curl command should work to retrieve the performance monitoring data of the 'signin' Pod:

# curl -k https://<signin Pod's IP>:3000/metrics
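
Note that both the ClusterIP services and the Pod IPs are reachable only from inside the cluster. If you cannot run curl on a cluster node, one option (a sketch; the curlimages/curl image and the temporary Pod name are examples only) is to run it from a short-lived Pod:

# kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -n ssp -- curl -k https://<signin Pod's IP>:3000/metrics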

 
