While testing redundancy, we shut down Elasticsearch Node 1 and no longer see data in the AXA UIs
DX APPLICATION EXPERIENCE ANALYTICS (AXA) 20.2.x
This problem is related to defect DE517070:
The product deployments are configured to use Elasticsearch Node #1 directly instead of the Elasticsearch load balancer.
Below are the steps to reconfigure the AXA, BPA, and Kibana components to point to the Elasticsearch load balancer:
1) For AXA
a) Open the axa-configmap and bpa-configmap config maps:
- kubectl edit cm axa-configmap -n<namespace>
- kubectl edit cm bpa-configmap -n<namespace>
b) Replace 'jarvis-elasticsearch' with 'jarvis-elasticsearch-lb' in the keys that reference the Elasticsearch host, for example as shown below.
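For reference, the change inside each config map typically looks like the fragment below. The key name 'elasticsearch.host' is only a placeholder used for illustration; apply the change to whichever keys in your config maps currently contain 'jarvis-elasticsearch'.
# Before (placeholder key name; match the real keys in axa-configmap / bpa-configmap)
data:
  elasticsearch.host: jarvis-elasticsearch
# After
data:
  elasticsearch.host: jarvis-elasticsearch-lb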
2) For Kibana
a) Open the kibana Deployment:
kubectl edit deployment kibana -n<namespace>
b) Replace 'jarvis-elasticsearch' with 'jarvis-elasticsearch-lb' in the keys that reference the Elasticsearch host, for example as shown below.
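For reference, in the kibana Deployment the Elasticsearch reference is usually an environment variable on the container, and only the host portion of the value changes. The variable name ELASTICSEARCH_HOSTS and port 9200 are assumptions for illustration; edit whichever env var or URL in your Deployment currently contains 'jarvis-elasticsearch'.
# Before (variable name and port are assumptions; match your Deployment)
env:
- name: ELASTICSEARCH_HOSTS
  value: http://jarvis-elasticsearch:9200
# After
env:
- name: ELASTICSEARCH_HOSTS
  value: http://jarvis-elasticsearch-lb:9200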
3) Restart AXA and BPA services:
a) Scale down all AXA and BPA deployments (a scripted alternative to steps a-d is sketched after step d):
kubectl scale --replicas=0 deployment axaservices-amq -n<namespace>
kubectl scale --replicas=0 deployment axaservices-axa-ng-aggregator -n<namespace>
kubectl scale --replicas=0 deployment axaservices-axa-user-processor -n<namespace>
kubectl scale --replicas=0 deployment axaservices-ba-routing-service -n<namespace>
kubectl scale --replicas=0 deployment axaservices-crashhandler -n<namespace>
kubectl scale --replicas=0 deployment axaservices-decryptor -n<namespace>
kubectl scale --replicas=0 deployment axaservices-dxc -n<namespace>
kubectl scale --replicas=0 deployment axaservices-indexer -n<namespace>
kubectl scale --replicas=0 deployment axaservices-kibana-indexer -n<namespace>
kubectl scale --replicas=0 deployment axaservices-ngutils -n<namespace>
kubectl scale --replicas=0 deployment axaservices-notify-filter -n<namespace>
kubectl scale --replicas=0 deployment axaservices-platelemetry -n<namespace>
kubectl scale --replicas=0 deployment axaservices-readserver -n<namespace>
kubectl scale --replicas=0 deployment axaservices-scheduler -n<namespace>
kubectl scale --replicas=0 deployment axaservices-transformer -n<namespace>
kubectl scale --replicas=0 deployment bpa-adminui-configserver -n<namespace>
kubectl scale --replicas=0 deployment bpa-diviner-capture -n<namespace>
kubectl scale --replicas=0 deployment bpa-diviner-discovery -n<namespace>
kubectl scale --replicas=0 deployment bpaservices-urlnormalization -n<namespace>
b) Check that the pods are no longer in Running status:
kubectl get pods -n<namespace> | grep -e 'axa\|bpa'
c) Scale the deployments back up:
kubectl scale --replicas=1 deployment axaservices-amq -n<namespace>
kubectl scale --replicas=1 deployment axaservices-axa-ng-aggregator -n<namespace>
kubectl scale --replicas=1 deployment axaservices-axa-user-processor -n<namespace>
kubectl scale --replicas=1 deployment axaservices-ba-routing-service -n<namespace>
kubectl scale --replicas=1 deployment axaservices-crashhandler -n<namespace>
kubectl scale --replicas=1 deployment axaservices-decryptor -n<namespace>
kubectl scale --replicas=1 deployment axaservices-dxc -n<namespace>
kubectl scale --replicas=1 deployment axaservices-indexer -n<namespace>
kubectl scale --replicas=1 deployment axaservices-kibana-indexer -n<namespace>
kubectl scale --replicas=1 deployment axaservices-ngutils -n<namespace>
kubectl scale --replicas=1 deployment axaservices-notify-filter -n<namespace>
kubectl scale --replicas=1 deployment axaservices-platelemetry -n<namespace>
kubectl scale --replicas=1 deployment axaservices-readserver -n<namespace>
kubectl scale --replicas=1 deployment axaservices-scheduler -n<namespace>
kubectl scale --replicas=1 deployment axaservices-transformer -n<namespace>
kubectl scale --replicas=1 deployment bpa-adminui-configserver -n<namespace>
kubectl scale --replicas=1 deployment bpa-diviner-capture -n<namespace>
kubectl scale --replicas=1 deployment bpa-diviner-discovery -n<namespace>
kubectl scale --replicas=1 deployment bpaservices-urlnormalization -n<namespace>
d) Check that the pods are back in Running status:
kubectl get pods -n<namespace> | grep -e 'axa\|bpa'
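As an alternative to running each command above individually, steps a) through d) can be scripted with a small shell loop. This is a minimal sketch; it assumes every listed deployment exists in the target namespace and should be restored to a single replica.
NS=<namespace>
DEPLOYMENTS="axaservices-amq axaservices-axa-ng-aggregator axaservices-axa-user-processor \
axaservices-ba-routing-service axaservices-crashhandler axaservices-decryptor axaservices-dxc \
axaservices-indexer axaservices-kibana-indexer axaservices-ngutils axaservices-notify-filter \
axaservices-platelemetry axaservices-readserver axaservices-scheduler axaservices-transformer \
bpa-adminui-configserver bpa-diviner-capture bpa-diviner-discovery bpaservices-urlnormalization"
# Scale everything down and confirm the pods terminate
for d in $DEPLOYMENTS; do kubectl scale --replicas=0 deployment $d -n$NS; done
kubectl get pods -n$NS | grep -e 'axa\|bpa'
# Scale everything back up and confirm the pods return to Running
for d in $DEPLOYMENTS; do kubectl scale --replicas=1 deployment $d -n$NS; done
kubectl get pods -n$NS | grep -e 'axa\|bpa'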
4) Verify that AXA data is available in the UIs.
Note: data may be only partially available until the issue with the Elasticsearch node is resolved.
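If the UIs still show no data after the restart, it can help to confirm that the load balancer service can reach the remaining Elasticsearch nodes, for example by querying the cluster health endpoint from inside the cluster. This is a hedged sketch: the service name follows from the steps above, but port 9200 and an unauthenticated HTTP endpoint are assumptions about your Elasticsearch deployment.
# Run a one-off curl pod against the load balancer service (port and protocol are assumptions)
kubectl run es-health-check --rm -it --restart=Never --image=curlimages/curl -n<namespace> -- curl -s "http://jarvis-elasticsearch-lb:9200/_cluster/health?pretty"
A 'yellow' or 'red' cluster status is expected while one Elasticsearch node remains down.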