VIDM 3.3.7: Search function for users or groups fails, or OpenSearch shows a failure in the diagnostic dashboard

Article ID: 373759

Updated On:

Products

VMware Aria Suite

Issue/Introduction

  • The VIDM System Diagnostics dashboard shows a failure for OpenSearch.
  • vRSLCM shows a failure for VIDM health that is traced back to OpenSearch.
  • The dashboard within the Admin UI on all nodes does not load and appears blank.
  • Unable to sync the directory with domain users; only built-in administrators work.
  • Unable to open the 'Roles' tab in the Workspace ONE Access console.
  • Error with Integrated Components: "Error retrieving component status."
  • The sync log shows a very old date for the most recent sync result.
  • Error with ACS Health - Application Deployment Status - Web Application Status: "Error when connecting to the application."
  • analytics-service.log (/opt/vmware/horizon/workspace/logs/analytics-service.log) shows validation failure errors such as:

          Unable to create index: v4_2022-11-07_audit - {"root_cause":[{"type":"validation_exception","reason":"Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [3220]/[1000] maximum shards open;"}]
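To confirm whether a node is hitting this error, you can search the log for the failure string directly (the log path and error type are as shown above):

grep -i "validation_exception" /opt/vmware/horizon/workspace/logs/analytics-service.log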

Environment

VMware Identity Manager 3.3.7

Cause

- The migration from Elasticsearch to OpenSearch in VMware Identity Manager 3.3.7 can cause the OpenSearch maximum shard count to be exceeded, which results in the validation failures above.

 

Resolution

Note: Take snapshots and a database backup before proceeding with the action plan below. The OpenSearch service must be running; if it is not, start it with: service opensearch start

1. Run the command below on the primary node to increase the OpenSearch max shard count to 6500. A subsequent increase to 8200 may be necessary if the validation error shown above is still observed after raising the limit to 6500:

curl -X PUT localhost:9200/_cluster/settings -H "Content-Type: application/json" -d '{ "persistent":
{ "cluster.max_shards_per_node": "6500" }
}'
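
To verify that the new limit has been applied, you can read the setting back; the _cluster/settings API is standard OpenSearch, and the value should appear under "persistent":

curl http://localhost:9200/_cluster/settings?pretty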

 

2. Check the cluster status with the command below. For clustered deployments consisting of multiple VIDM nodes, the status should be Green with 0 unassigned shards, in which case you can skip to step 4. On single-node deployments the status may be Yellow, with the number of unassigned shards equal to the number of assigned shards; in that case you can also skip to step 4.

curl http://localhost:9200/_cluster/health?pretty=true
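
For reference, a healthy response looks similar to the following abridged output; the values (including the cluster name) are illustrative, and the fields to watch are "status" and "unassigned_shards":

{
  "cluster_name" : "opensearch",
  "status" : "green",
  "number_of_nodes" : 3,
  "active_primary_shards" : 1610,
  "active_shards" : 3220,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}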

 

3. If the above command shows a Red/Yellow status with an UNASSIGNED shard count greater than 0, follow the article below (step 9) to delete the unassigned shards:
    https://knowledge.broadcom.com/external/article?legacyId=71297

 curl -XGET http://localhost:9200/_cat/shards | grep UNASSIGNED | awk '{print $1}' | xargs -i curl -XDELETE "http://localhost:9200/{}"
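
Before running the delete, you may want a read-only preview of what would be removed. The first column of _cat/shards output is the index name, so this prints the affected indices without deleting anything:

curl -XGET http://localhost:9200/_cat/shards | grep UNASSIGNED | awk '{print $1}' | sort -u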

 

4. Release the Liquibase locks. Running this once for the cluster is enough; run it on the node hosting the primary PostgreSQL database:

/usr/sbin/hznAdminTool liquibaseOperations -forceReleaseLocks

 

5. Restart the main vIDM service, first on the primary node; wait a minute or two, then restart it on the remaining cluster nodes:

service horizon-workspace restart
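
Before moving on to the next node, you can confirm the service has come back up (standard init-script status check; exact output varies by build):

service horizon-workspace status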

 

6. Check and confirm that all of the issues reported above are resolved.
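
Two quick checks help confirm the fix: re-run the cluster health query from step 2 and verify the status and unassigned shard count, then watch the analytics service log (path as in the symptoms above) for any new validation failures:

curl http://localhost:9200/_cluster/health?pretty=true
tail -f /opt/vmware/horizon/workspace/logs/analytics-service.log | grep -i "validation"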

Additional Information

The max shard count for OpenSearch is a persistent cluster-wide setting, so it only needs to be raised once and takes effect on all nodes.
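
To see how close the cluster is to the configured limit, you can count the shards currently open; each line of _cat/shards output represents one shard copy:

curl -s http://localhost:9200/_cat/shards | wc -l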