AIOps - How to clear the ElasticSearch data?
Article ID: 216670

Products

- DX Operational Intelligence
- DX Application Performance Management
- CA App Experience Analytics

Issue/Introduction

DX Platform consists of multiple data stores, including:

a) ElasticSearch, which stores:

- Alarms from integrations (APM, UIM, Spectrum, NetOps, etc.)
- APM traces
- OI, AXA and BPA data
- Ingested logs

b) NASS: metrics
c) TAS: topology

This KB explains how to clear or delete the content of the ElasticSearch data store ONLY.

It will NOT affect the content of the existing NASS and TAS data stores.


Why would you need to clear the ElasticSearch database?

1) You cannot reinstall the DX Platform and need to delete the existing ElasticSearch data

2) APM is working as expected, but the ElasticSearch data is corrupted

3) You are testing data ingestion and need to recreate only the ElasticSearch database

 

Environment

DX Operational Intelligence 20.2.1 only

DX Application Performance Management 20.2.1 only

Resolution

IMPORTANT: This KB is intended for TEST environments ONLY. For assistance, contact Broadcom Support.

 


STEP # 1 : Clear the ElasticSearch, Kafka and Zookeeper data


1. Switch to the DXI project or namespace:

If openshift:
oc project <your-project>

If kubernetes:
kubectl config set-context --current --namespace=<your-namespace>

 

2. Stop all the DX-Platform services.

cd <dx-platform install folder>/tools
./dx-admin.sh stop

Wait until all the pods are terminated.
Run the command below to check the status of the pods. You can ignore pods in Completed status:

kubectl get pods


3. Go to the ElasticSearch servers and delete the current data as below:

a) Delete ElasticSearch data

rm -rf /dxi/jarvis/elasticsearch/*
rm -rf /dxi/jarvis/elasticsearch-backup/*


b) Delete Kafka data

rm -rf /dxi/jarvis/kafka/*
rm -rf /dxi/jarvis/kafka/.lock*


c) Delete Zookeeper data

rm -rf /dxi/jarvis/zookeeper/*


d) Verify that all content has been deleted:

cd /dxi/jarvis
find .

Expected output:

./elasticsearch
./elasticsearch-backup
./es_plugin
./es_plugin/ca_es_acl-7.5.1.0.zip
./kafka
./zookeeper

 

4. Start all the DX-Platform services

cd <dx-platform install folder>/tools
./dx-admin.sh start

Wait until all the services are up and running.
Run the command below to check the status of the pods:

kubectl get pods

 

STEP # 2 : Verify that the DX OI and AXA indices have been recreated

1. Find out the ElasticSearch and Jarvis APIs endpoints:

In Openshift:

oc get routes | grep jarvis

 

In Kubernetes:

kubectl get ingress | grep jarvis

 

2. List all available OI indices:

http(s)://<elastic-endpoint>/_cat/indices/*itoa*?v

 

3. List all available AXA indices:

http(s)://<elastic-endpoint>/_cat/indices/*_axa*?v
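Both checks can be run with curl. A sketch, where es.dxi.example.com is a placeholder for the elastic route or ingress host found in step 1:

```shell
# Sketch: es.dxi.example.com is a placeholder; substitute the elastic
# route/ingress host found in step 1. -k accepts self-signed certificates.
ES_ENDPOINT="https://es.dxi.example.com"
OI_URL="${ES_ENDPOINT}/_cat/indices/*itoa*?v"
AXA_URL="${ES_ENDPOINT}/_cat/indices/*_axa*?v"
# A freshly restarted cluster should list the OI and AXA indices again:
curl -sk --max-time 5 "${OI_URL}"  || echo "placeholder host not reachable"
curl -sk --max-time 5 "${AXA_URL}" || echo "placeholder host not reachable"
```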

 

STEP # 3 : Onboard the APM indices

1. Create the script to onboard the APM indices:

cd <dx-platform install folder>/post_install
cp 3.apm.onboard.sh my.apm.onboard.sh

2. Update the jarvis_host and elastic_host variables with the apis and es routes or endpoints found in STEP # 2 above:

vi my.apm.onboard.sh


3. Run the script: 

./my.apm.onboard.sh

Expected output:

Waiting for jarvis-apis to be up...
Error! Failed to load template ao_apm_tt_analyzer loaded into ES.

HTTPStatus=000

Doc type - apm_tt loaded.

Doc type - apm_itoa_alarms_apm_1 loaded.

Doc type - itoa_inventory_apm loaded.

 

4. Check that the apm_tt indices have been created:

http(s)://<elastic-endpoint>/_cat/indices/*_apm_*?v

 

 

STEP # 4 : Onboard the LogAnalytic indices


1. Connect to the log parser pod:

kubectl get pods | grep parser
kubectl exec -ti <doi-logparser-pod> -- bash


2. Delete the file "current_version.txt":

cd /logparser_config/
rm current_version.txt
exit    # leave the pod


3. Scale down and up doi-logparser

kubectl scale --replicas=0 deployment doi-logparser

Wait for a minute, then:

kubectl scale --replicas=1 deployment doi-logparser

Wait for another minute; the newly created doi-logparser pod will onboard all the LogAnalytic indices.
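The scale-down/scale-up pattern used here (and again in STEP # 5) can be wrapped in a small helper. A sketch, assuming kubectl is on the PATH and the DXI namespace is already selected; the call itself needs cluster access, so it is shown commented out:

```shell
# Sketch: scale a deployment to 0 and back to 1, as done for doi-logparser
# and bpa-diviner-discovery in this KB. Requires cluster access, so the
# example call below is commented out.
restart_deployment() {
  dep="$1"
  kubectl scale --replicas=0 deployment "${dep}"
  sleep 60   # give the pod time to terminate
  kubectl scale --replicas=1 deployment "${dep}"
}
# restart_deployment doi-logparser
echo "helper defined"
```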


4. Verify that the LogAnalytic indices are available:

http(s)://<elastic-endpoint>/_cat/indices/*_log*?v

 

STEP # 5 : Onboard the BPA indices


1. Scale down and up bpa-diviner-discovery

kubectl scale --replicas=0 deployment bpa-diviner-discovery

Wait for a minute, then:

kubectl scale --replicas=1 deployment bpa-diviner-discovery

Wait for another minute; the new bpa-diviner-discovery pod will onboard all the BPA indices.


2. Verify all BPA indices are available:

http(s)://<elastic-endpoint>/_cat/indices/*_aum*?v

 

STEP # 6 : Onboard the existing Tenant(s)

 
To onboard an existing tenant you need: TENANT-NAME, TAS-TENANT-ID and TENANT-ID.
 

1. From DX Cluster Management you can see all active Tenants; however, the TENANT-ID is not available there.


2. Since the Elastic data has been recreated, the Tenant information is no longer available from the ao_dxi_tenants_1_1 index:

http(s)://<elastic-endpoint>/ao_dxi_tenants_1_1/_search?size=200&pretty 

 

3. However, you can obtain the Tenant information from the apmservices-oimetricpublisher.log file as below:

cd <NFS>/ca/dxi/apmservices
grep -r "Tenants eligible for processing" *

The line includes all the information you need to onboard any existing tenants:

....
test8=TenantDetails [internalTenantId=13, cohortId=<tenant COHORT ID1>, isDeleted=false], 
support=TenantDetails [internalTenantId=10, cohortId=<tenant COHORT ID 2>, isDeleted=false], 
test19=TenantDetails [internalTenantId=15, cohortId=<tenant COHORT ID 3>, isDeleted=false]}
....
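Extracting the three values from one of those log lines can be done with sed. A sketch; the sample line and its cohort ID value "ABC123" are illustrative only:

```shell
# Sketch: parse one "TenantDetails" log line into the values needed for
# onboarding. The sample line (cohort ID "ABC123") is illustrative only.
LINE='support=TenantDetails [internalTenantId=10, cohortId=ABC123, isDeleted=false],'
TENANT_NAME=$(echo "${LINE}" | sed -n 's/^\([^=]*\)=TenantDetails.*/\1/p')
TAS_TENANT_ID=$(echo "${LINE}" | sed -n 's/.*internalTenantId=\([0-9]*\).*/\1/p')
TENANT_ID=$(echo "${LINE}" | sed -n 's/.*cohortId=\([^,]*\).*/\1/p')
echo "TENANT-NAME=${TENANT_NAME} TAS-TENANT-ID=${TAS_TENANT_ID} TENANT-ID=${TENANT_ID}"
```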

4. Onboard a Tenant with Jarvis

a) Go to the Jarvis apis UI and onboard a tenant

Endpoint: http(s)://<apis-endpoint>/#/Tenants/createTenantUsingPOST

Body syntax:

{
  "product_id": "ao",
  "tenant_id": "<CohortID you obtain in previous step>"
}

The return code should be 204.

 

b) Verify that the tenant has been added:

Endpoint: http(s)://<apis-endpoint>/#/Tenants/getTenantUsingGET1

When prompted for "product_id", enter ao

Click Execute and verify that the tenant ID appears in the output.

 

5. Onboard a Tenant with DX Operational Intelligence

a) Download POSTMAN

b) Find out the doi nginx endpoint

In Openshift:

oc get routes | grep nginx

In Kubernetes:

kubectl get ingress | grep nginx

 

c) Perform a POST call to the doi-nginx ingestion endpoint as below:

Endpoint: http(s)://<doi-nginx-endpoint>/ingestion

Headers:

Content-Type : application/json

Body Syntax:

{
  "documents": [
    {
      "header": {
        "product_id": "ao",
        "tenant_id": "<TENANT-ID>",
        "doc_type_id": "dxi_tenants",
        "doc_type_version": "1"
      },
      "body": [
        {
          "tenant_id": "<TENANT-ID>",
          "tenant_name": "<TENANT-NAME>",
          "tas_tenant_id": "<TAS-TENANT-ID>"
        }
      ]
    }
  ]
}

The expected return code is 202.

Example, using the "support" tenant from the log output above (both tenant_id fields take the same cohort ID):

{
  "documents": [
    {
      "header": {
        "product_id": "ao",
        "tenant_id": "<tenant COHORT ID 2>",
        "doc_type_id": "dxi_tenants",
        "doc_type_version": "1"
      },
      "body": [
        {
          "tenant_id": "<tenant COHORT ID 2>",
          "tenant_name": "support",
          "tas_tenant_id": "10"
        }
      ]
    }
  ]
}
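The same call can be made from the command line. A sketch for the "support" tenant, where COHORT_ID and the nginx host are placeholders you must substitute; python3 is used only to confirm the payload is valid JSON:

```shell
# Sketch: build the ingestion payload for the "support" tenant and POST it.
# COHORT_ID and the nginx host below are placeholders; substitute real values.
COHORT_ID="REPLACE_WITH_COHORT_ID"
PAYLOAD=$(cat <<EOF
{
  "documents": [
    {
      "header": {
        "product_id": "ao",
        "tenant_id": "${COHORT_ID}",
        "doc_type_id": "dxi_tenants",
        "doc_type_version": "1"
      },
      "body": [
        {
          "tenant_id": "${COHORT_ID}",
          "tenant_name": "support",
          "tas_tenant_id": "10"
        }
      ]
    }
  ]
}
EOF
)
# The real call should return HTTP 202:
# curl -sk -X POST -H 'Content-Type: application/json' \
#      -d "${PAYLOAD}" "https://nginx.dxi.example.com/ingestion"
echo "${PAYLOAD}" | python3 -c 'import json,sys; json.load(sys.stdin); print("payload is valid JSON")'
```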

 

6. Verify that the Tenant(s) appear in ElasticSearch:

http(s)://<elastic-endpoint>/ao_dxi_tenants_1_1/_search?size=200&pretty

 

STEP # 7 : Verify that data is available from the DX Operational Intelligence UIs

1. Login to a Tenant

2. Go to "DX Operational Intelligence"

3. Verify that data is available from Service Analytics, Alarms Analytics and Performance Analytics 

NOTE: You might need to restart the integrations in order to see the alarms, topology and metrics.

 

Additional Information

DX AIOps - Troubleshooting, Common Issues and Best Practices
https://knowledge.broadcom.com/external/article/190815