DX OI - New Tenants reporting ERROR "400 /oi/v3/queryTas/analytics" and "500 /oi/v2/api/Inventory/fields"


Article ID: 217736


Updated On:

Products

DX Operational Intelligence

Issue/Introduction

Symptoms:

- When opening "Service Analytics" and creating a new service, the following error is reported: "400 /oi/v3/queryTas/analytics"

- When opening "Performance Analytics", the following error is reported: "500 /oi/v2/api/Inventory/fields"

- Restarting all services using <dx-platform-HOME>/tools/dx-admin.sh stop and start does not resolve the issue.

Environment

DX Operational Intelligence 2.x

 

Cause

Possibility #1) Problem during Tenant creation

Check that all Tenants from Cluster Management appear in the dxi_tenants index:

http(s)://<elastic-endpoint>/ao_dxi_tenants_1_1/_search?size=200&pretty
http(s)://<elastic-endpoint>/ao_tenants_1_1/_search?size=200&pretty

If any tenant is missing, it indicates a problem during tenant creation or index corruption. Ensure you have enough capacity in the system.
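As a sketch, the two indices can be compared from the command line. This assumes the tenant documents expose a "tenant_id" field in _source; that field name is an assumption, so adjust the grep pattern to match your actual index mapping.

```shell
# Hypothetical sketch: list tenants registered in each index and diff them.
# Replace <elastic-endpoint> with your Elasticsearch endpoint.
ES="http://<elastic-endpoint>"

# Extract the (assumed) tenant_id field from each index:
curl -s "$ES/ao_dxi_tenants_1_1/_search?size=200&pretty" \
  | grep -o '"tenant_id"[[:space:]]*:[[:space:]]*"[^"]*"' | sort -u > /tmp/dxi_tenants.txt

curl -s "$ES/ao_tenants_1_1/_search?size=200&pretty" \
  | grep -o '"tenant_id"[[:space:]]*:[[:space:]]*"[^"]*"' | sort -u > /tmp/tenants.txt

# A tenant listed in one file but not the other points to a failed creation:
diff /tmp/dxi_tenants.txt /tmp/tenants.txt
```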
 
 
Possibility #2) LAG in Kafka due to the high volume of inventory/topology data
 
a) Check for a possible LAG in the jarvis_indexer consumer group:
 
/opt/ca/kafka/bin/kafka-consumer-groups.sh --bootstrap-server jarvis-kafka:9092,jarvis-kafka-2:9092,jarvis-kafka-3:9092 --describe --group jarvis_indexer

Below is an example illustrating a LAG issue:
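To quantify the lag, the LAG column of the --describe output can be summed with awk. This is a sketch: it assumes LAG is the 6th column (GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG ...), which can vary by Kafka version, so check the header line of your output first.

```shell
# Sketch: total up the LAG column for the jarvis_indexer group.
# Assumes LAG is column 6 -- verify against your Kafka version's header line.
/opt/ca/kafka/bin/kafka-consumer-groups.sh \
  --bootstrap-server jarvis-kafka:9092,jarvis-kafka-2:9092,jarvis-kafka-3:9092 \
  --describe --group jarvis_indexer \
  | awk 'NR > 1 && $6 ~ /^[0-9]+$/ { total += $6 } END { print "total lag:", total+0 }'
```

A steadily growing total across repeated runs indicates the consumers are not keeping up with the inventory/topology volume.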

 

b) Check for a possible LAG in the verifier consumer group:
 
/opt/ca/kafka/bin/kafka-consumer-groups.sh --bootstrap-server jarvis-kafka:9092,jarvis-kafka-2:9092,jarvis-kafka-3:9092 --describe --group verifier

Result: confirmed there was a LAG and consumers kept disconnecting.
 

 

Resolution

Suggestion# 1:
 
- Try to create a new tenant and ensure it is registered properly in the dxi_tenants index:
 
http(s)://<elastic-endpoint>/ao_dxi_tenants_1_1/_search?size=200&pretty
 
- Ensure you have enough capacity in the OpenShift or Kubernetes setup to create a new tenant.
 
 
Suggestion# 2: If the problem is related to the high volume of inventory/topology data
 
The oi_connector inventory ingestion feature should be disabled, since this data is already sent by the apm_bridge.

From: https://techdocs.broadcom.com/us/en/ca-enterprise-software/it-operations-management/ca-unified-infrastructure-management-probes/GA/alphabetical-probe-articles/oi-connector-ca-digital-operational-intelligence-gateway/oi-connector-ca-digital-operational-intelligence-gateway-release-notes.html#concept.dita_c00d3ea6aac2fdf0cdb9a39396f9e72b5f096057_UpgradeConsiderations

"While upgrading to oi_connector 1.38 or later versions, check and update the key subscribe_to_uim_inventory_ci value to 'no' in the oi_connector.cfg file. The key subscribe_to_uim_inventory_ci can be found in the setup section of the oi_connector.cfg file (/setup/subscribe_to_uim_inventory_ci)."

 

Go to the UIM oi_connector raw configuration and set /setup/subscribe_to_uim_inventory_ci = no
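After the change, the relevant fragment of oi_connector.cfg should look similar to the sketch below (other keys in the setup section are omitted here, and exact contents vary by probe version):

```
<setup>
   ...
   subscribe_to_uim_inventory_ci = no
   ...
</setup>
```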

 
Optional:
 
1) Reset the CAA_unverified_p1 offsets for the verifier consumer group as below:
 
sh /opt/ca/kafka/bin/kafka-consumer-groups.sh --bootstrap-server jarvis-kafka:9092,jarvis-kafka-2:9092,jarvis-kafka-3:9092 --group verifier --topic CAA_unverified_p1 --reset-offsets --to-latest --execute
 
This command moves the verifier group's offsets to the latest position, effectively skipping all messages currently pending in the Kafka topic. To preview the change without applying it, replace --execute with --dry-run.
 
2) You might consider increasing the jarvis-verifier replicas to 2 or 4, depending on your requirements. Here is an example:
 
kubectl scale --replicas=4 deployment jarvis-verifier -n <namespace>
 
 

Additional Information

DX AIOps Troubleshooting, Common Issues and Best Practices
https://knowledge.broadcom.com/external/article/190815
