This is a known issue affecting TKG 1.2 and 1.3. There is currently no resolution.
You can work around this issue by modifying the Prometheus extension. The same steps apply to both TKG 1.2 and TKG 1.3.
Update prometheus-data-values.yaml so that monitoring.prometheus_server.config.prometheus_yaml includes the following alerting configuration:
alerting:
  alertmanagers:
  - scheme: http
    static_configs:
    - targets:
      - "prometheus-alertmanager.tanzu-system-monitoring.svc:80"
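For context, the alerting block belongs under the monitoring.prometheus_server.config.prometheus_yaml key in the data-values file. A minimal sketch of that nesting is shown below; the surrounding keys are inferred from the dotted key path, and the rest of the real data-values file is omitted:

```yaml
# Sketch only: surrounding structure inferred from the key path
# monitoring.prometheus_server.config.prometheus_yaml; other settings
# in the real prometheus-data-values.yaml are omitted.
monitoring:
  prometheus_server:
    config:
      prometheus_yaml: |
        alerting:
          alertmanagers:
          - scheme: http
            static_configs:
            - targets:
              - "prometheus-alertmanager.tanzu-system-monitoring.svc:80"
```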
For reference, the modified prometheus-data-values.yaml file is attached to this article.
Follow steps one and two in the previous section, then replace the existing secret using the following command. (Note: the command assumes you are running it from the directory that contains the monitoring/ folder, so that the path to prometheus-data-values.yaml resolves.)
kubectl create secret generic prometheus-data-values --from-file=values.yaml=monitoring/prometheus/vsphere/prometheus-data-values.yaml -n tanzu-system-monitoring -o yaml --dry-run=client | kubectl replace -f -
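As a local sanity check (no cluster required), you can mimic what the kubectl create secret --dry-run step does: the file contents end up base64-encoded under the Secret's data.values.yaml key. This sketch round-trips an abbreviated, illustrative data-values file through base64 to confirm the encoding is lossless:

```shell
# Write a minimal illustrative data-values file (content abbreviated;
# the real file contains the full extension configuration).
cat > prometheus-data-values.yaml <<'EOF'
alerting:
  alertmanagers:
  - scheme: http
    static_configs:
    - targets:
      - "prometheus-alertmanager.tanzu-system-monitoring.svc:80"
EOF

# kubectl stores the file base64-encoded in the Secret's data field;
# encode, then decode it back to verify the round trip.
encoded=$(base64 < prometheus-data-values.yaml | tr -d '\n')
printf '%s' "$encoded" | base64 -d | grep 'svc:80'
```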
This will update the prometheus-server ConfigMap. You may need to wait up to five minutes for kapp to reconcile the change, depending on the syncPeriod configured for the extension.
You can verify that the port has changed by running the following command:
kubectl -n tanzu-system-monitoring get cm prometheus-server -o yaml | grep prometheus-alertmanager.tanzu-system-monitoring.svc
- "prometheus-alertmanager.tanzu-system-monitoring.svc:80"
After this change, Prometheus should begin sending alerts to Alertmanager.