Vertical scaling in Aria Operations for Logs (Formerly vRealize Log Insight) 8.2 and Newer


Article ID: 332393


Products

VMware Aria Suite

Issue/Introduction

vRealize Log Insight 8.2 adds support for extra-large node instances, enabling more effective use of the additional memory and computational resources provided to the node.
As of now, these node sizes have not been added to the Aria Operations for Logs sizing guidelines, and as such, some information is subject to change.
 

Node Size   XL           XXL          XXXL
Memory      64 GB        128 GB       256 GB
vCPUs       16 or more   16 or more   16 or more
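The tier-to-memory mapping above can be captured in a small helper, for example as a shell function (illustrative only; the function name is an assumption, not part of any VMware tooling):

```shell
# Illustrative mapping of the extra-large node size tiers to memory in GB,
# per the chart above. Not an official VMware utility.
tier_mem_gb() {
  case "$1" in
    XL)   echo 64  ;;
    XXL)  echo 128 ;;
    XXXL) echo 256 ;;
    *)    echo "unknown tier: $1" >&2; return 1 ;;
  esac
}

tier_mem_gb XL
```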


Unlike the Aria Operations for Logs sizing guideline, which bases its calculations mainly on data-feed and ingestion rates, the recommendations for selecting extra-large instances are based primarily on query-processing load.
Also note that, unlike the sizing guideline, these recommendations are for expanding existing Aria Operations for Logs nodes rather than deploying new ones.

Environment

VMware vRealize Log Insight 8.2 and newer.
Aria Operations for Logs 8.x

Resolution

Prerequisites

  • A running Aria Operations for Logs cluster with large (L) nodes, each with 32 GB of memory and 16 vCPUs.
 

Criteria

Use the following informal criteria to identify whether you need to expand the nodes further:
  • You see dozens of active queries listed on the System Monitor > Active Queries page each time you open or refresh it.
  • You see dozens of active queries for each host in the System Monitor > Statistics > Active Queries By Host table each time you open or refresh the corresponding page.

If the answer to both of the above points is "yes", then moving to XL (or larger) node instances may be beneficial.
 

Steps

Below are the steps for moving to XL (or higher) node instances.

Note: All nodes must be the same size; this process must be completed on all nodes to size them equally.

For each cluster node, starting with the master:

  1. Bring the node down by shutting down the guest OS through vSphere.
  2. Increase the memory size to 64 GB.
Note: Adjust the memory size for the desired tier in accordance with the chart above.
  3. Increase the number of vCPUs to at least 16.
  4. Start the node and wait for its Status to show as Connected on the Administration > Cluster page before moving to the next node.
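The steps above can be sketched with the open-source govc CLI (from the vmware/govmomi project), assuming the CLI is installed and the GOVC_URL environment variables point at your vCenter. The VM name and DRY_RUN behavior here are illustrative assumptions, not part of the product; review the commands before running against a real cluster.

```shell
# Illustrative resize of one Aria Operations for Logs node VM via govc.
# VM_NAME, MEM_MB, and VCPUS are placeholders; with DRY_RUN=1 (default)
# the script only prints the commands it would run.
VM_NAME="${VM_NAME:-loginsight-node-1}"
MEM_MB="${MEM_MB:-65536}"   # 64 GB = XL tier; adjust per the chart above
VCPUS="${VCPUS:-16}"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Step 1: shut down the guest OS and wait for power-off.
run govc vm.power -s=true -vm "$VM_NAME"
# Steps 2-3: increase memory (MB) and vCPU count while powered off.
run govc vm.change -vm "$VM_NAME" -m "$MEM_MB" -c "$VCPUS"
# Step 4: power the node back on; then confirm its Status shows as
# Connected on the Administration > Cluster page before continuing.
run govc vm.power -on=true -vm "$VM_NAME"
```

Repeat for each node in turn, master first, so that all nodes end up identically sized.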