The LCM remediation on vIDM failed with error LCMVIDM74066.

Article ID: 409517

Products

VCF Operations/Automation (formerly VMware Aria Suite)

Issue/Introduction

VMware Identity Manager (vIDM) Remediation Fails (LCMVIDM74066) Due to Network Configuration and Service Issues


Attempts to remediate a VMware Identity Manager (vIDM) environment through Aria Suite Lifecycle (ASL) fail with the error LCMVIDM74066. Manual troubleshooting following VMware Knowledge Base article 367175 also encounters errors.

Environment

VMware Identity Manager (vIDM) in a clustered deployment (e.g., 3-node cluster).

Aria Suite Lifecycle (ASL) for vIDM management and remediation.

Cause

The root cause is a combination of critical network configuration files that are incorrect or missing and essential services that are not running or are misconfigured on the vIDM nodes. Specifically:

  • On one node (e.g., node02), the /etc/hosts and /etc/resolv.conf files were blank or incomplete, missing crucial IP-to-hostname mappings (including the master and delegate IPs and the node's own FQDN/shortname) and DNS server entries.
  • On other nodes (e.g., node01 and node03), the /etc/hosts file contained an incorrect IP address for the vIDM master node, likely because a master role change was not propagated.

These network misconfigurations prevent proper inter-node communication and DNS resolution, both of which are vital for vIDM cluster health and for the horizon-workspace service. In addition, essential services such as OpenSearch may not be running, contributing to the degraded state and preventing successful remediation or manual troubleshooting.
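
To confirm this condition, the affected files and services can be inspected on each node. The commands below are a minimal check sketch; run them over SSH on every cluster node and compare the output against the expected values for your environment:

  cat /etc/hosts                      # should map the master, the delegate, and the node's own FQDN/shortname
  cat /etc/resolv.conf                # should contain valid nameserver entries
  service horizon-workspace status    # service should be running
  service opensearch status           # service should be running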

Resolution

To restore vIDM cluster health and enable successful remediation, perform the following steps to correct network configurations and restart critical services:

  1. Correct /etc/hosts on node02:

    • SSH to node02.
    • Edit the /etc/hosts file.
    • Add the correct IP addresses for the vIDM Master and any Delegate nodes, as well as the FQDN and shortname for node02 itself. Ensure all cluster nodes are correctly mapped (see the example /etc/hosts entries after this procedure).
  2. Correct /etc/resolv.conf on node02:

    • SSH to node02.
    • Edit the /etc/resolv.conf file.
    • Add the appropriate DNS server entries (e.g., nameserver 192.168.1.1); see the example /etc/resolv.conf after this procedure.
  3. Correct /etc/hosts on node01 and node03 (or other replica nodes):

    • SSH to node01 and node03 (or other replica nodes) individually.
    • Edit their respective /etc/hosts files.
    • Ensure the IP addresses for the vIDM Master and the delegate are correct on every replica node (see the example /etc/hosts entries after this procedure).
  4. Stop horizon-workspace service on replica nodes:

    • SSH to both replica nodes (e.g., node01 and node03).
    • Run the command: service horizon-workspace stop
  5. Restart horizon-workspace service on the Master node:

    • SSH to the vIDM Master node.
    • Run the command: service horizon-workspace restart
  6. Start horizon-workspace on replica nodes:

    • SSH to both replica nodes (e.g., node01 and node03).
    • Run the command: service horizon-workspace start (the stop/restart/start order in steps 4 through 6 is summarized after this procedure)
  7. Start opensearch on affected nodes (if not running):

    • SSH to node01 and node02.
    • Run the command: service opensearch start (a health-check example follows this procedure)
  8. Reboot node02:

    • Initiate a full system reboot of node02. This ensures all network configuration changes are fully applied and resolves any lingering port connectivity issues visible in the vIDM System Diagnostics dashboard. Post-reboot verification commands are provided after this procedure.
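
Example /etc/hosts entries (steps 1 and 3). The IP addresses and hostnames below are placeholders only; substitute the actual values for your cluster and keep the master and delegate mappings identical on every node:

  127.0.0.1       localhost
  192.168.10.11   vidm-node01.example.com   vidm-node01    # current vIDM Master
  192.168.10.12   vidm-node02.example.com   vidm-node02    # Delegate; also this node's own FQDN and shortname
  192.168.10.13   vidm-node03.example.com   vidm-node03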
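
Example /etc/resolv.conf (step 2). The search domain and nameserver addresses are placeholders for your environment's DNS servers:

  search example.com
  nameserver 192.168.1.1
  nameserver 192.168.1.2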
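
Service restart sequence (steps 4 through 6), shown here as a summary of the order described above. Run each block on the node indicated and confirm the status output before moving to the next block:

  # On each replica node (e.g., node01 and node03):
  service horizon-workspace stop

  # On the vIDM Master node:
  service horizon-workspace restart
  service horizon-workspace status

  # On each replica node again:
  service horizon-workspace start
  service horizon-workspace status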
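
OpenSearch verification (step 7). The curl call below assumes the embedded OpenSearch instance listens on its default port 9200 on the local node; adjust the port if your deployment differs:

  service opensearch status
  curl -s 'http://localhost:9200/_cluster/health?pretty'    # expect a "status" of green (or yellow while replicas catch up)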
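
Post-reboot checks (step 8). The hostnames below are placeholders; the /SAAS/API/1.0/REST/system/health URL is the vIDM health endpoint commonly used for load balancer checks, and an HTTP 200 response indicates the node is serving requests. Confirm these checks pass on all nodes, then review the vIDM System Diagnostics dashboard and re-run the remediation from Aria Suite Lifecycle:

  ping -c 2 vidm-node01.example.com    # confirms DNS resolution and reachability from node02
  curl -k -s -o /dev/null -w '%{http_code}\n' https://vidm-node02.example.com/SAAS/API/1.0/REST/system/health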