Unable to scale out or replace a node in an already patched Aria Automation/Orchestrator environment
Article ID: 326149

Updated On:

Products

VMware Aria Suite

Issue/Introduction

Symptoms:
Unable to scale out or replace a node in an already patched Aria Automation/Orchestrator environment.

Environment

VMware Aria Automation 8.x
VMware Aria Automation Orchestrator 8.x
VMware Aria Suite Lifecycle 8.x

Cause

Currently, there is no automatic procedure in VMware Aria Suite Lifecycle (LCM) for:

  • Replacing a node of an already patched Aria Automation/Orchestrator environment.
  • Scaling out an already patched Aria Automation/Orchestrator environment.

Resolution

A fix for this issue is being considered for inclusion in a later release. See the Workaround section below for additional information.

Workaround:
  • To work around this, the new node must manually receive the same patches as the existing environment before it can be joined to the cluster.
  • However, because no application services are running on the newly deployed node (for example, vIDM and the license are not yet set up), the standard patch upgrade process would fail.
  • To allow the patch to be applied while the node is in this state, a custom upgrade configuration profile has to be used, containing overrides for certain upgrade settings.
  • For those overrides to take effect, a temporary patch of the upgrade engine needs to be applied first.
Note: The upgrade engine modifications from the procedure below will be automatically reverted once the product patch is applied.

Prerequisites

  • You have taken simultaneous snapshots of the appliance(s) or daily backups of each appliance.
  • You have SSH access, either directly or through a utility such as PuTTY.
  • You have root username and password for the Aria product appliances.

Procedure

  1. Patch the upgrade profile override mechanism by running the following command on each node that is going to be patched and added to the cluster:
    base64 -d <<< '/Td6WFoAAATm1rRGAgAhARYAAAB0L+Wj4AddA0hdABGIQkY99Bhqpmevep9wH2Kss2bl9ban5u26OflIdl9rtPKrgpzqglm5vaoK3n4lPmT/m8ObnrBF8BzfjG9lvNcV70KquVjjjpWL6HwENPFUW1TitJrfxV2oE04JYBGxevjVh41v8wm3Ct8Bpitbx9TKuevcsz+sZPgBF+YphQa5oGk/q7Wox7kL7uk5nvFatu8rUXgjeAhR7fN3pgKtsFcrTnreAmUoNVn3aEByYvt5F8/4RG8u8VMujFGRlEh4tFhKhVULnk6s3X3vZ5hbG42XBuzfP/VVBK8RJvbQ86BkSeTy57l9dtGoKSYMOzLt83FQhUpdny8zUrGD4reW/ZL0V12GOgTCIiwBk/021QIHfuJ6frnA7AkVGj9QEkPlY9BwGJtrT3HyqUtQ+7s+WmmdJkLc8OX0vlrttCclXDj/3guxd3OPr256P9ERqdKNJzgS6NLik3m/gvlaxHjSEgLSfPGU6eLAq0xbYaao+T/wxtbn9GCZ6zze2EE2LCqcKTb5hu+0q/n4QinzdgWIBq7Y5wiZTmcmLho+NYeWOciB72n7Ee7V9i8PmHmxCXZ0fquR0XXtMDJOUVNEouXMdECmgQtXAohVGVgX31k5l0c74MSLnhZ7aQ6AN/f/wZ1GdLwYRMzeaFsnJGXVdhpo+CiW8Jg53XiaN0dEk9qr9V2mC7vwYjueXtMj5+4shox7JgkHfaANUIWHTQ/gldyHSuvrlWZshZP9MKSvV6xh4D+7SNye2Yqm8ZBwkELFXBwXAop8se1+LQ9azA8NnDr/9PT0Mrs1rmJUcdOYMBaCp5C/7h/EC6Um4X3ToLlwcHvGPwfTcELUKIUyWBqWMo1RF21QRvCAN50crDYT4DprX0/6Sf4x7p2JGjLqEQruWEqpSJtIGOc9UoYg+GKlqKLIryDKQjOlxRETJFFV+8C7qZnE+l2vx8J5dak5xsrtDjn2Y4sq/Ph6PMIsCW7xAFl6fKPkaVSWh1B9M77NvS5m+U2gsAsMgKEAaiKSnjxPgqQpDVgZISaXpKCtgYAXFONGlfGjboLJ/v4DbTirLL+JxcBw+yWKOtDK8H8AFZF0a4hitzWrofTfqa1opSg+kFLIKxf8U8/LnoNIhgC/kzrkvslF4QAB5AbeDgAASZvVDbHEZ/sCAAAAAARZWg==' | xz -d | bash -
Note: Logs can be found at /var/log/vmware/prelude
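The command above pipes the decoded payload straight into bash. As a precaution, you can decode it to a file and review it before executing. The sketch below illustrates the pattern with a stand-in payload; paste the actual base64 string from step 1 in its place:

```shell
# Stand-in payload so the sketch is self-contained -- replace PAYLOAD with
# the base64 string from step 1.
PAYLOAD="$(printf '%s\n' 'echo engine-patch-applied' | xz -c | base64 -w0)"

# Decode and decompress into a reviewable script instead of piping to bash.
base64 -d <<< "$PAYLOAD" | xz -d > /tmp/upgrade-engine-patch.sh

# Review /tmp/upgrade-engine-patch.sh, then apply it.
bash /tmp/upgrade-engine-patch.sh   # prints: engine-patch-applied
```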
  2. To generate a custom upgrade profile with the necessary overrides, run the following command on each node that is going to be patched and added to the cluster:
    base64 -d <<< '/Td6WFoAAATm1rRGAgAhARYAAAB0L+Wj4At9BMRdABGCgJLNqZknztucLBbFypdKvxve2R/KDi/TrCmYW4DwJfXNc07v6U8bNyfhBKwm4zh0psLX+hViMllEAg6bZvORKYxkQfWIGgEmoNc3tnljqy06S7nkAiC9FXdvU1Zzvkj+7hwsd7jvA2W34rm4e6yNctyMYjhA2El7b1EsB/5FXbdyyBFQjubpn0ATTH31o7QBfN3Ae8r4RMoUKT+NIkwm2nIQBPsUDNP/vbGqthmtA7Qx2Fy1SLWc3FrRdGTkjEhKvBaQrX1fma8I7dpw2O6ZR91wGlZYu8Hzcrqwl7AaHzqXc/9i5ICiFSCW82rJEsGjiMpU7cUuhrhgRcEmdjaaVnCpi03JjZo4vYZPJ4XZ74dpkQiVv1hzg3dMyyKS4h+rQqyrq6FCgsQpEyRWcNh+TrUWn/e7ngbtm25cUJ0SXiddlqzMbFQgjajybw5ifrUt1KujE7fqEvqWPxD+8b2Ql3Jm+D8pSg9zp6FbPh6GDSCr9sMS+X0qGwcUKB3G3E/Xi48rjZCYkW2IovVDJxhDBHuVLS+ImdmXTGcXJaSIo4faufItT/Zn8atzUKRUnXEIFYfvGyywUnrNM1ewqbKYAQESCtS9qcWf755kzDh60oJ1jKm/8bQoAbCyeA69jthx/A5+6DXwmdUIklJwr6cQfJOxrw2SchrAVbLmT2DdTLOaPc3/XIq3hpqiUsnEO18X2Lf1JIVP4UduNKM565kCDlQHWhoxOxQ5VP3JTdgssEP7jhXpXIS+m66O7tbzg0terQFNyJLLY4SQnuCkgcLEkXEL9QjaXNjk97K1wilFOmigfmQ7rsFLgQpovZejjTCGihwjLTaK+B1g10O3/tprcO437p3vlPQ3uTfApGGQK0YIn4LCacXBQP40CKK09TZyoznexeIkZtTvsMeSElzec/mZ+cARifKvSgvnD2IwXyvhYcLBY1xjL5Zjv2x2/C3chdJfTMyn814JRxoeC4vYQKNZ2pd3ubyIOD2JEyrpNfnEyAiUtEHIzX0cpzWWVXXoG0igHW1kOjgxqcEjmkQKUuuYJa6+ECuJwnnKL7DCmwoksfCeMQJ1b5feDKluvkonwOmKJBHWcZZXeVBwtyUofTGlgLaKnmuDm6BMGSgOcUe7QkJwut5wSxLs12xjQ0sXVpgzC3tQ34/8rBMa2QJYsL5VQ1yoCGFhf9nh0c6MnUJG3B4OQ8jmMMuAFQKjtFY5ri4DpMaWtpleM/x3QWMIAXkkxAIhwfliH1T0P1mycw2tnf876FnkPnVhFlUcpVBalnAifobCptm62SgyAM1EU/dm/EcJ0GvuvAvUAVXQWXgSD4ap4LoKEb5nbnd3RE7wuq37unm21DcafiI5Fy4LBOa4mfPCpq0TIbzWf/LVGn4B/j3tMbBcDCOQdQ7MjZETRbDkBxHJjEwJqIUv4dxJrNjJ7dh34oHiKD3r4gr8fqF2L2o+iYzV/NwCDfbsbI645tZGaH/Qs+gQPK/0ayUftd4AmNNT86YNiw0pZ/92ILRdMdNDQRdn4j2ReUOw6Hu+idUjPtmLOe4HKVntiQmkB66y/Vpc/iiJvus3j2B9jCIL65PLddTQ6/GWqf3uVSpBSgWLAaZwRh2b2r22dX7lPRbUAAt5i0F2dJ/4AAHgCf4WAAB4ExOmscRn+wIAAAAABFla' | xz -d > /etc/vmware-prelude/upgrade-ignore-pods-health.conf
  3. Apply the desired product patch on the node(s) that are going to be added to the cluster.
    1. In LCM UI add the patch binary via Settings > Binary Mappings > Patch Binaries > Add Patch Binary.
    2. To manually apply the desired patch execute the following command on each node that is about to be added to the cluster:
      vracli upgrade exec -y --repo "[LCM_PATCH_REPO_URL]" --profile ignore-pods-health
      1. Where [LCM_PATCH_REPO_URL] has the following structure:
        https://[LCM_NODE]/repo/productPatchRepo/patches/vra/[VRA_VERSION]/patchrepo.iso/update
      2. In the above URL replace:
        • [LCM_NODE] with the FQDN/IP of the LCM node
        • [VRA_VERSION] with the product version the patch is intended for, up to the patch section (e.g. 8.13.1)

Example:

https://lcm.node.fqdn/repo/productPatchRepo/patches/vra/8.13.1/patchrepo.iso/update
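The repository URL can also be assembled in the shell before invoking the patch command. A minimal sketch, assuming placeholder values for the LCM node and product version (substitute your own):

```shell
# Hypothetical values -- substitute your own LCM node FQDN/IP and version.
LCM_NODE="lcm.node.fqdn"
VRA_VERSION="8.13.1"

# Build the patch repository URL with the structure described above.
LCM_PATCH_REPO_URL="https://${LCM_NODE}/repo/productPatchRepo/patches/vra/${VRA_VERSION}/patchrepo.iso/update"
echo "${LCM_PATCH_REPO_URL}"
# prints: https://lcm.node.fqdn/repo/productPatchRepo/patches/vra/8.13.1/patchrepo.iso/update

# The patch command from step 3 would then be:
# vracli upgrade exec -y --repo "${LCM_PATCH_REPO_URL}" --profile ignore-pods-health
```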
  4. To join the newly patched node(s) to the existing cluster execute the following command on each new node:
    vracli cluster join VRA-LEAD-NODE-FQDN
    Note: Where VRA-LEAD-NODE-FQDN is the node you are joining the newly patched nodes to.
  5. Start Aria Automation/Orchestrator services by running the following command on one of the appliances:
    /opt/scripts/deploy.sh
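deploy.sh can take some time to bring all services back up. A small polling helper, sketched below, can wait on a health probe of your choice; the helper and the example probe are illustrative assumptions, not part of the product tooling:

```shell
# Hypothetical helper: repeatedly run a health-check command until it
# succeeds or the timeout (in seconds) elapses.
wait_for_services() {
  local check_cmd="$1" timeout="${2:-1800}" interval="${3:-30}" elapsed=0
  until eval "$check_cmd" >/dev/null 2>&1; do
    elapsed=$((elapsed + interval))
    if [ "$elapsed" -gt "$timeout" ]; then
      echo "timed out waiting for services" >&2
      return 1
    fi
    sleep "$interval"
  done
  echo "services are up"
}

# Example (the probe is an assumption -- use whatever check you rely on
# for the appliance, such as vracli status):
# wait_for_services "vracli status" 1800 30
```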


Additional Information

Impact/Risks:
Any patched Aria Automation/Orchestrator environment where a node needs to be scaled out or replaced is affected.