DX Platform - How to clear the ElasticSearch data?


Article ID: 216670


Products

DX Operational Intelligence
DX Application Performance Management
CA App Experience Analytics

Issue/Introduction

DX Platform consists of multiple data stores, including:

a) ElasticSearch, which stores:

- Alarms from integrations (APM, UIM, Spectrum, NetOps, etc.)
- APM traces
- OI, AXA and BPA data
- Ingested logs

b) NASS: metrics
c) TAS: topology

This KB explains how to clear (delete) the contents of the ElasticSearch data store ONLY.

It will NOT affect the contents of the existing NASS and TAS data stores.


Why would you need to clear the ElasticSearch database?

1) You cannot reinstall DX Platform and need to delete the existing ElasticSearch data

2) APM is working as expected but ElasticSearch data is corrupted

3) You are testing data ingestion and need to recreate only the ElasticSearch database

 

Environment

DX Operational Intelligence 20.2.1 only

DX Application Performance Management 20.2.1 only

Resolution

IMPORTANT: This KB is intended for TEST environments ONLY. For assistance, contact Broadcom Support.

 


STEP # 1 : Clear the ElasticSearch, Kafka and Zookeeper data


1. Switch to the DXI project or namespace:

If OpenShift:
oc project <your-project>

If Kubernetes:
kubectl config set-context --current --namespace=<your-namespace>

 

2. Stop all the DX-Platform services.

cd <dx-platform install folder>/tools
./dx-admin.sh stop

Wait until all the pods are terminated.
Run the command below to check the status of the pods. You can ignore pods in Completed status.

kubectl get pods
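Instead of re-running the command manually, the wait can be scripted. A minimal sketch; POLL_SECS is a knob invented here for illustration, everything else is the plain kubectl call shown above:

```shell
# Poll "kubectl get pods" until only Completed pods remain,
# matching the "wait until all the pods are terminated" step.
wait_for_pods_terminated() {
    while kubectl get pods --no-headers 2>/dev/null \
            | grep -v Completed | grep -q .; do
        echo "pods still terminating..."
        sleep "${POLL_SECS:-10}"
    done
    echo "all pods terminated"
}

# Usage (on the cluster):
# wait_for_pods_terminated
```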


3. Go to the ElasticSearch servers and delete the current data as below:

a) Delete ElasticSearch data

rm -rf /dxi/jarvis/elasticsearch/*
rm -rf /dxi/jarvis/elasticsearch-backup/*


b) Delete Kafka data

rm -rf /dxi/jarvis/kafka/*
rm -rf /dxi/jarvis/kafka/.lock*


c) Delete Zookeeper data

rm -rf /dxi/jarvis/zookeeper/*


d) Verify that all content has been deleted:

cd /dxi/jarvis
find .

Expected output:

./elasticsearch
./elasticsearch-backup
./es_plugin
./es_plugin/ca_es_acl-7.5.1.0.zip
./kafka
./zookeeper
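Steps 3a through 3c can be combined into one small function. A minimal sketch, assuming the /dxi/jarvis layout shown above; the DATA_ROOT variable is added here only for illustration:

```shell
#!/bin/sh
# DATA_ROOT defaults to the /dxi/jarvis path used in this KB.
DATA_ROOT="${DATA_ROOT:-/dxi/jarvis}"

clear_jarvis_data() {
    # Remove the contents but keep the top-level directories themselves,
    # matching the expected "find ." output above (es_plugin is untouched).
    rm -rf "${DATA_ROOT}/elasticsearch/"* \
           "${DATA_ROOT}/elasticsearch-backup/"* \
           "${DATA_ROOT}/kafka/"* \
           "${DATA_ROOT}/kafka/".lock* \
           "${DATA_ROOT}/zookeeper/"*
}

# On the ElasticSearch server:
# clear_jarvis_data && find "${DATA_ROOT}"
```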

 

4. Start all the DX-Platform services

cd <dx-platform install folder>/tools
./dx-admin.sh start

Wait until all the services are up and running.
Run the command below to check the status of the pods:

kubectl get pods
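The "wait until all the services are up" check can also be scripted. A minimal sketch; POLL_SECS is a knob invented here, the kubectl call is the one shown above:

```shell
# Poll until every pod is Running or Completed.
wait_for_pods_running() {
    while kubectl get pods --no-headers 2>/dev/null \
            | grep -vE 'Running|Completed' | grep -q .; do
        echo "pods still starting..."
        sleep "${POLL_SECS:-10}"
    done
    echo "all pods up"
}

# Usage (on the cluster):
# wait_for_pods_running
```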

 

STEP # 2 : Verify that the DX OI and AXA indices have been recreated

1. Find out the ElasticSearch and Jarvis APIs endpoints:

In OpenShift:

oc get routes | grep jarvis

For example:

In Kubernetes:

kubectl get ingress | grep jarvis

For example:

 

2. List all available OI indices:

http(s)://<elastic-endpoint>/_cat/indices/*itoa*?v

Expected output:

3. List all available AXA indices:

http(s)://<elastic-endpoint>/_cat/indices/*_axa*?v

Expected output:
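Both listings can also be fetched from a shell instead of a browser. A minimal sketch; <elastic-endpoint> is the placeholder from step 1, and cat_indices_url is a helper name invented here:

```shell
# ES_ENDPOINT keeps the <elastic-endpoint> placeholder from this KB.
ES_ENDPOINT="${ES_ENDPOINT:-http://<elastic-endpoint>}"

cat_indices_url() {
    # $1 = index pattern, e.g. '*itoa*' or '*_axa*'
    printf '%s/_cat/indices/%s?v\n' "$ES_ENDPOINT" "$1"
}

cat_indices_url '*itoa*'
cat_indices_url '*_axa*'

# Against a live endpoint:
# curl -s "$(cat_indices_url '*itoa*')"
```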

 

STEP # 3 : Onboard the APM indices

1. Create the script to onboard the APM indices:

cd <dx-platform install folder>/post_install
cp 3.apm.onboard.sh my.apm.onboard.sh

2. Update the jarvis_host and elastic_host variables with the endpoints found in STEP #2:

vi my.apm.onboard.sh

3. Run the script: 

./my.apm.onboard.sh

Expected output:

Waiting for jarvis-apis to be up...
Error! Failed to load template ao_apm_tt_analyzer loaded into ES.

HTTPStatus=000

Doc type - apm_tt loaded.

Doc type - apm_itoa_alarms_apm_1 loaded.

Doc type - itoa_inventory_apm loaded.

 

4. Check that the apm_tt indices have been created:

http(s)://<elastic-endpoint>/_cat/indices/*_apm_*?v

For example:

 

STEP # 4 : Onboard the LogAnalytic indices


1. Connect to the log parser pod:

kubectl get pods | grep parser
kubectl exec -ti <doi-logparser-pod> -- bash


2. Delete the file "current_version.txt" 

cd /logparser_config/
rm current_version.txt
exit         (to exit the pod)


3. Scale down and up doi-logparser

kubectl scale --replicas=0 deployment doi-logparser

wait for a minute

kubectl scale --replicas=1 deployment doi-logparser


Wait for a minute; the newly created doi-logparser pod will onboard all the LogAnalytic indices.


4. Verify that the LogAnalytic indices are available:

http(s)://<elastic-endpoint>/_cat/indices/*_log*?v

For example:

 

STEP # 5 : Onboard the BPA indices


1. Scale down and up bpa-diviner-discovery

kubectl scale --replicas=0 deployment bpa-diviner-discovery

wait for a minute

kubectl scale --replicas=1 deployment bpa-diviner-discovery


Wait for a minute; the new bpa-diviner-discovery pod will onboard all the BPA indices.
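The scale-down/up cycle used for doi-logparser and bpa-diviner-discovery can be wrapped in a small helper. A minimal sketch; WAIT_SECS is a knob invented here, the kubectl commands and deployment names are the ones used in this KB:

```shell
bounce_deployment() {
    # $1 = deployment name
    kubectl scale --replicas=0 deployment "$1"
    sleep "${WAIT_SECS:-60}"
    kubectl scale --replicas=1 deployment "$1"
}

# Usage (on the cluster):
# bounce_deployment doi-logparser
# bounce_deployment bpa-diviner-discovery
```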


2. Verify all BPA indices are available:

http(s)://<elastic-endpoint>/_cat/indices/*_aum*?v

For example:

STEP # 6 : Onboard the existing Tenant(s)

 
To onboard an existing tenant you need: TENANT-NAME, TAS-TENANT-ID and TENANT-ID.
 

1. From DX Cluster Management you can see all the active Tenants; however, the TENANT-ID is not available.
 
Here is an example:


2. The Elastic data has been recreated, so Tenant information is no longer available from the ao_dxi_tenants_1_1 index:

http(s)://<elastic-endpoint>/ao_dxi_tenants_1_1/_search?size=200&pretty 

Here is an example:

3. However, you can obtain the Tenant information from the apmservices-oimetricpublisher.log file as below:

cd <NFS>/ca/dxi/apmservices
grep -r "Tenants eligible for processing" *

Here is an example of the output

The line includes all the information you need to onboard any existing tenants:

....
test8=TenantDetails [internalTenantId=13, cohortId=8AF6E56D-75FD-452A-B84E-3EF6C297E6F0, isDeleted=false], 
support=TenantDetails [internalTenantId=10, cohortId=E9C39947-BDDE-44F9-A40F-9E8C46762D76, isDeleted=false], 
test19=TenantDetails [internalTenantId=15, cohortId=3FFE5236-E3E3-4455-A257-E686DC997482, isDeleted=false]}
....
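The name, cohortId and internalTenantId values can be pulled out of that line with standard text tools. A minimal sketch using the sample output above; per the onboarding example later in this step, cohortId is the TENANT-ID and internalTenantId is the TAS-TENANT-ID:

```shell
# Sample line taken from the grep output above.
line='test8=TenantDetails [internalTenantId=13, cohortId=8AF6E56D-75FD-452A-B84E-3EF6C297E6F0, isDeleted=false], support=TenantDetails [internalTenantId=10, cohortId=E9C39947-BDDE-44F9-A40F-9E8C46762D76, isDeleted=false]'

# One "TENANT-NAME TAS-TENANT-ID TENANT-ID" triple per output line.
tenants=$(printf '%s\n' "$line" \
  | grep -oE '[A-Za-z0-9_-]+=TenantDetails \[internalTenantId=[0-9]+, cohortId=[0-9A-Fa-f-]+' \
  | sed -E 's/^([^=]+)=TenantDetails \[internalTenantId=([0-9]+), cohortId=/\1 \2 /')
printf '%s\n' "$tenants"
```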

4. Onboard the Tenant with Jarvis

a) Go to the Jarvis apis UI and onboard a tenant

Endpoint:http(s)://<apis-endpoint>/#/Tenants/createTenantUsingPOST

Body syntax:

{
  "product_id": "ao",
  "tenant_id": "<CohortID you obtain in previous step>"
}

Example:

using one of the above tenants:

{
  "product_id": "ao",
  "tenant_id": "E9C39947-BDDE-44F9-A40F-9E8C46762D76"
}

The return code should be 204.

 

b) Verify that the tenant has been added:

Endpoint: http(s)://<apis-endpoint>/#/Tenants/getTenantUsingGET1

When prompted for "product_id", enter ao.

Example:


Click Execute

Verify that the tenant ID appears in the output. Here is an example based on the above:

 

5. Onboard the Tenant with DX Operational Intelligence

a) Download POSTMAN

b) Find out the doi nginx endpoint

In OpenShift:

oc get routes | grep nginx

For example:

In Kubernetes:

kubectl get ingress | grep nginx

For example:

 

c) Perform a POST call to the doi-nginx ingestion point as below:

Endpoint: http(s)://<doi-nginx-endpoint>/ingestion

Headers:

Content-Type : application/json

Body Syntax:

{
  "documents": [
    {
      "header": {
        "product_id": "ao",
        "tenant_id": "<TENANT-ID>",
        "doc_type_id": "dxi_tenants",
        "doc_type_version": "1"
      },
      "body": [
        {
          "tenant_id": "<TENANT-ID>",
          "tenant_name": "<TENANT-NAME>",
          "tas_tenant_id": "<TAS-TENANT-ID>"
        }
      ]
    }
  ]
}

The expected return code is 202.

Example:

using one of the above tenants:

{
  "documents": [
    {
      "header": {
        "product_id": "ao",
        "tenant_id": "E9C39947-BDDE-44F9-A40F-9E8C46762D76",
        "doc_type_id": "dxi_tenants",
        "doc_type_version": "1"
      },
      "body": [
        {
          "tenant_id": "E9C39947-BDDE-44F9-A40F-9E8C46762D76",
          "tenant_name": "support",
          "tas_tenant_id": "10"
        }
      ]
    }
  ]
}

Here are screenshots illustrating both header and body:
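The same call can be made from a shell instead of POSTMAN. A minimal sketch: make_payload is a helper name invented here, the JSON matches the body syntax above, and the commented curl invocation targets the ingestion endpoint and Content-Type header shown in this step:

```shell
make_payload() {
    # $1 = TENANT-ID (cohortId), $2 = TENANT-NAME, $3 = TAS-TENANT-ID
    printf '{"documents":[{"header":{"product_id":"ao","tenant_id":"%s","doc_type_id":"dxi_tenants","doc_type_version":"1"},"body":[{"tenant_id":"%s","tenant_name":"%s","tas_tenant_id":"%s"}]}]}' \
        "$1" "$1" "$2" "$3"
}

payload=$(make_payload 'E9C39947-BDDE-44F9-A40F-9E8C46762D76' 'support' '10')
printf '%s\n' "$payload"

# Against a live endpoint (expect HTTP 202):
# curl -s -o /dev/null -w '%{http_code}\n' -X POST \
#      -H 'Content-Type: application/json' \
#      -d "$payload" 'http://<doi-nginx-endpoint>/ingestion'
```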

 

6. Verify that the Tenant(s) appear in ElasticSearch:

http(s)://<elastic-endpoint>/ao_dxi_tenants_1_1/_search?size=200&pretty


Here is an example based on above example:

 

STEP # 7 : Verify that data is available from the DX Operational Intelligence UIs

1. Login to a Tenant

2. Go to "DX Operational Intelligence"

3. Verify that data is available from Service Analytics, Alarms Analytics and Performance Analytics 

NOTE: You might need to restart the integrations in order to see the alarms, topology and metrics.

 

Additional Information

DX AIOps - Troubleshooting, Common Issues and Best Practices
https://knowledge.broadcom.com/external/article/190815
