Symptoms:
- No traces available; the “Business Transaction” tab cannot be opened or is empty
- Jarvis appears in a partial or corrupted state; some jarvis indices are missing when listing them via:
http(s)://{es_endpoint}/_cat/indices?v&s=ss&h=health,status,index,uuid,pri,rep,docs.count,docs.deleted,store.size,pri.store.size,cds
Expected Results:
a) In the Map, the Business Transaction tab should be available when selecting a vertex for which you know there is trace activity:
b) In Elastic, the below Jarvis indices should be available:
jarvis_config
jarvis_jmessages_1.0_1
jarvis_kron
jarvis_jmetrics_1.0_1
jarvis_metadata
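From the command line, the same index check can be sketched with curl. ES_ENDPOINT below is a placeholder for your Elasticsearch endpoint; the URL is only echoed here so you can review it before querying a live cluster:

```shell
# Placeholder -- substitute your own Elasticsearch endpoint.
ES_ENDPOINT="https://es.example.com:9200"
# _cat/indices URL used throughout this article, restricted to jarvis* indices.
url="${ES_ENDPOINT}/_cat/indices/jarvis*?v&h=health,status,index,docs.count"
echo "$url"
# Uncomment to query a live cluster (-k allows self-signed certificates):
# curl -ks "$url"
```

All five jarvis_* indices listed above should appear in the output with health green.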
Cause:
Jarvis is in a partial or corrupted state
Environment:
APM 11.x, 19.x
Resolution:
Recreate the Jarvis data as below:
Step #1: Scale down all Jarvis components/deployments and apmservices-tracestore:
a) Go to the Kubernetes Admin console
b) Set Namespaces = All namespaces
c) Select Deployments
d) Search for Jarvis
e) Select each of the below deployments and click the “… > SCALE” option to decrease it to 0:
jarvis-ldds-web
jarvis-kafka
jarvis-couchdb
jarvis-apis
jarvis-kron
jarvis-verifier
jarvis-indexer
jarvis-esutils
jarvis-elasticsearch
jarvis-zookeeper
apmservices-tracestore
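The console clicks above can also be sketched as a kubectl loop. This assumes the deployments live in the dxi namespace (as the -ndxi flags later in this article suggest) and only prints the commands for review rather than running them:

```shell
NS=dxi   # assumed namespace, per the kubectl flags used later in this article
cmds=""
for d in jarvis-ldds-web jarvis-kafka jarvis-couchdb jarvis-apis \
         jarvis-kron jarvis-verifier jarvis-indexer jarvis-esutils \
         jarvis-elasticsearch jarvis-zookeeper apmservices-tracestore; do
  # Collect one scale-down command per deployment; review, then pipe to sh to execute.
  cmds="${cmds}kubectl scale deployment ${d} --replicas=0 -n ${NS}
"
done
printf '%s' "$cmds"
```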
Step #2: Recreate Jarvis data in the below order:
jarvis-elasticsearch:
a) In Deployments, find and select jarvis-elasticsearch
b) click Edit
c) Find the data folder: locate the “volumeMounts” > mountPath section
d) Delete the content. On the Elastic node server:
cd /dxi_data/jarvis/elasticsearch
rm -rf *
ls -la (to verify that all files have been deleted)
cd /dxi_data/jarvis/elasticsearch-backup
rm -rf *
ls -la (to verify that all files have been deleted)
e) scale up jarvis-elasticsearch
jarvis-zookeeper:
a) Edit jarvis-zookeeper
b) Find the data folder: locate the “volumeMounts” > mountPath section
c) delete the content of the volume path, for example:
cd /data/jarvis/zookeeper
rm -rf *
d) scale up jarvis-zookeeper
e) check the logs; they should not report any ERROR messages
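The log check can be sketched as a small helper that flags ERROR lines; feed it live output with `kubectl logs deploy/jarvis-zookeeper -n dxi | check_errors` (the deployment name and namespace are taken from this article; the sample input below is for illustration only):

```shell
# Minimal log-scan helper (a sketch): returns nonzero if its input
# contains "error" (case-insensitive).
check_errors() {
  if grep -qi "error"; then
    echo "ERROR entries found -- investigate before continuing"
    return 1
  fi
  echo "no ERROR entries"
}
# Illustrative sample input only:
printf 'INFO binding to port 2181\nINFO started\n' | check_errors
```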
jarvis-kafka:
a) Edit jarvis-kafka
b) Find the data folder: locate the “volumeMounts” > mountPath section
c) delete the content of the volume path, for example:
cd /dxi_data/jarvis/kafka
rm -rf *
d) Scale up “jarvis-couchdb”; it is required by Kafka
e) Scale up “jarvis-kafka”
jarvis-kron:
a) Scale up jarvis-kron
b) check the logs
kubectl get pods -n dxi | grep kron ==> to find the kron pod name
kubectl logs jarvis-kron-<xxxxx> -n dxi
you should see entries as below:
jarvis-ldds-web:
Scale up
jarvis-apis:
a) Scale up
b) check the logs
kubectl get pods -n dxi | grep api ==> to find the api pod name
kubectl logs jarvis-apis-<xxxxx> -n dxi
Verification check:
a) Check Topics
kubectl get pods -n dxi | grep kafka ==> to find the kafka pod name
kubectl exec -it jarvis-kafka-<xxxxx> -n dxi -- bash
sh /opt/ca/kafka/bin/kafka-topics.sh --zookeeper jarvis-zookeeper:2181 --list
Result should be:
b) Check indices:
http(s)://{es_endpoint}/_cat/indices?v&s=ss&h=health,status,index,uuid,pri,rep,docs.count,docs.deleted,store.size,pri.store.size,cds
jarvis-verifier:
Scale up
jarvis-indexer:
Scale up
jarvis-esutils:
Scale up
Step #3: Onboard AO indices
a) Go to the Jarvis APIs page:
http://apis.<master-node>.nip.io
b) Select POST /onboarding/products
c) Click Try it out
d) Configure product_id and product_name = “ao”
e) Click Execute
f) Verify result:
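The same onboarding call can be sketched as curl. The endpoint path and the product_id/product_name field names come from the Swagger steps above, but the exact JSON body shape is an assumption here; verify it against your API before running. APIS is a placeholder and the command is only echoed, not executed:

```shell
APIS="https://apis.master.example.nip.io"   # placeholder for your Jarvis APIs endpoint
# Assumed JSON body: field names from the Swagger form above; the shape is a guess.
payload='{"product_id":"ao","product_name":"ao"}'
echo curl -k -X POST -H "Content-Type: application/json" \
     -d "$payload" "${APIS}/onboarding/products"
```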
g) Download the attached apm_tt.json.txt file and transfer it to the master node:
Rename it to apm_tt.json
Run the below curl command:
curl -k -X POST --header "Content-Type: application/json" 'https://apis.<master-node>.nip.io/onboarding/doc_type' --data-binary @apm_tt.json
NOTE: You should not get any error or message back
h) check ao indices:
http(s)://{es_endpoint}/_cat/indices?v&s=ss&h=health,status,index,uuid,pri,rep,docs.count,docs.deleted,store.size,pri.store.size,cds
i) Scale up “apmservices-tracestore”
j) check “ao” indices again to see if documents are getting ingested:
http(s)://{es_endpoint}/_cat/indices?v&s=ss&h=health,status,index,uuid,pri,rep,docs.count,docs.deleted,store.size,pri.store.size,cds
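One way to confirm ingestion is to total docs.count across two snapshots of the _cat/indices output and compare the totals; a sketch over made-up sample lines (real input would come from the curl/browser call above, with the h= column list from the URL):

```shell
# Made-up sample of _cat/indices output (health status index docs.count),
# for illustration only.
snapshot='green open ao_sample_index 120
green open jarvis_config 5'
# Sum the docs.count column; take a second snapshot later and compare totals --
# a growing total means documents are being ingested.
total=$(printf '%s\n' "$snapshot" | awk '{sum += $4} END {print sum}')
echo "docs.count total: $total"
```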
Step #4: Verify the results in ATC