vRealize Operations Manager 6.7 Sizing Guidelines

Article ID: 340749


Updated On:

Products

VMware Aria Suite

Issue/Introduction

This article provides information on using the sizing guidelines for vRealize Operations Manager 6.7 to determine the configurations to use during installation.

Environment

VMware vRealize Operations Manager 6.7.x

Resolution

By default, VMware offers Extra Small, Small, Medium, Large, and Extra Large configurations during installation. Size the environment according to the existing infrastructure to be monitored. If the vRealize Operations Manager instance outgrows its existing size, you must expand the cluster by adding nodes of the same size.
 
The values below are listed per column in this order: vRealize Operations Node sizes (Extra Small | Small | Medium | Large | Extra Large), followed by Remote Collector (RC) sizes (Standard | Large).

Configuration
  • vCPU: 2 | 4 | 8 | 16 | 24 | 2 | 4
  • Default Memory (GB): 8 | 16 | 32 | 48 | 128 | 4 | 16
  • Maximum Memory Configuration (GB): N/A | 32 | 64 | 96 | N/A | 8 | 32
  • vCPU : physical core ratio for data nodes (*): 1 vCPU to 1 physical core at scale maximums
  • Network latency for data nodes (*****): < 5 ms
  • Network latency for remote collectors (*****): < 200 ms
  • Network latency for agents (to vRealize Operations node or RC) (*****): < 20 ms
  • Datastore latency: Consistently lower than 10 ms, with possible occasional peaks up to 15 ms
  • IOPS and Disk Space: See the attached Sizing Guidelines worksheet for details.

Objects and Metrics
  • Single-Node Maximum Objects: 350 | 3,500 | 11,000 | 20,000 | 45,000 | 6,000 (****) | 32,000 (****)
  • Single-Node Maximum Collected Metrics (**): 70,000 | 800,000 | 2,500,000 | 4,000,000 | 10,000,000 | 1,200,000 | 6,500,000
  • Maximum number of nodes in a cluster: 1 | 2 | 8 | 16 | 6 | 60 | 60
  • Multi-Node Maximum Objects Per Node (***): N/A | 3,000 | 8,500 | 16,500 | 40,000 | N/A | N/A
  • Multi-Node Maximum Collected Metrics Per Node (***): N/A | 700,000 | 2,000,000 | 3,000,000 | 7,500,000 | N/A | N/A
  • Maximum Objects for the configuration with the maximum supported number of nodes (***): 350 | 6,000 | 68,000 | 200,000 | 240,000 | N/A | N/A
  • Maximum Metrics for the configuration with the maximum supported number of nodes (***): 70,000 | 1,400,000 | 16,000,000 | 37,500,000 | 45,000,000 | N/A | N/A

End Point Operations agent
  • Maximum number of agents per node: 100 | 300 | 1,200 | 2,500 | 2,500 | 250 | 2,000
  • (*) It is critical to allocate enough CPU resources for environments running at scale maximums to avoid performance degradation. Refer to the vRealize Operations Manager Cluster Node Best Practices in the vRealize Operations Manager 6.7 Help for more guidelines regarding CPU allocation.
  • (**) Metric numbers reflect the total number of metrics that are collected from all adapter instances in vRealize Operations Manager. To get this number, you can go to the Administration page, expand History and generate an Audit report.
  • (***) In large configurations that consist of more than 8 nodes, note the reduction in maximum metrics and objects to allow some headroom. This adjustment is accounted for in the calculations.
  • (****) The object limit for the remote collector is based on the VMware vCenter adapter.
  • (*****) The latency limits are provided as Round Trip Time (RTT).
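
As a rough illustration of how the figures above combine, the following Python sketch (the names and structure are ours, not part of the product) looks up the per-size limits from this table and estimates how many analytics nodes of one size are needed for a given object count. It conservatively applies the multi-node per-node figure to any multi-node cluster and caps the result at the documented cluster maximums.

import math

# Per-size limits taken from the table above (illustrative only):
# (single-node max objects, multi-node max objects per node,
#  max nodes in a cluster, max objects at the maximum node count)
NODE_SIZES = {
    "extra_small": (350,    None,   1,  350),
    "small":       (3_500,  3_000,  2,  6_000),
    "medium":      (11_000, 8_500,  8,  68_000),
    "large":       (20_000, 16_500, 16, 200_000),
    "extra_large": (45_000, 40_000, 6,  240_000),
}

def nodes_needed(size, objects):
    """Estimate how many nodes of the given size are needed for an object count.

    Returns None if the count exceeds the documented maximum for that size.
    """
    single_max, multi_max, max_nodes, cluster_max = NODE_SIZES[size]
    if objects > cluster_max:
        return None                      # does not fit; pick a larger node size
    if objects <= single_max:
        return 1                         # a single node is enough
    return min(max_nodes, math.ceil(objects / multi_max))

print(nodes_needed("large", 60_000))     # -> 4 Large nodes (60,000 / 16,500, rounded up)
print(nodes_needed("small", 10_000))     # -> None, exceeds the 2-node Small maximum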

What's new in vRealize Operations Manager 6.7 sizing

  • Manage larger environments with the same footprint: Up to 30% more objects than in 6.6.x.
  • Manage more vCenter Servers: Up to 120 vCenter Servers (it was 60 in 6.6.x).
  • Collect from larger vCenter Servers: Up to 65,000 objects, achieved by using a large Remote Collector (8 vCPU and 32 GB of RAM).
  • Support for scaled-down CPU configurations: Reclaim up to 4 vCPUs from the Large and Extra Large node VMs if the cluster is not running at the upper limits and the CPU usage in the node VMs is less than 60% (see the sketch below). The cluster performs better if the nodes stay within a single socket (do not cross NUMA boundaries).
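
A minimal check that mirrors the scale-down guidance above; the function name and inputs are illustrative, not part of the product:

# Illustrative only: True when the conditions described above for reclaiming
# up to 4 vCPUs from a Large or Extra Large node are met.
def can_reclaim_vcpus(node_size, at_upper_limits, node_cpu_usage_pct):
    return (node_size in ("large", "extra_large")
            and not at_upper_limits
            and node_cpu_usage_pct < 60.0)

print(can_reclaim_vcpus("large", at_upper_limits=False, node_cpu_usage_pct=45.0))  # True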
 

Important Notes

  • The sizing guides are version specific. Use the sizing guide that matches the vRealize Operations version you plan to deploy.
  • An object in this table represents a basic entity in vRealize Operations Manager that is characterized by properties and metrics collected from adapter data sources. Examples of objects include a virtual machine, a host, or a datastore for the VMware vCenter adapter; a storage switch port for a storage devices adapter; an Exchange server, a Microsoft SQL Server, a Hyper-V server, or a Hyper-V virtual machine for a Hyperic adapter; and an AWS instance for an AWS adapter.
 

Other Maximums

  • Maximum number of remote collectors: 60
  • Maximum number of vCenter adapter instances: 120
  • Maximum number of vCenter Servers on a single collector: 100
  • Maximum number of concurrent users per node (*): 10 (when the cluster is running with all nodes filled with objects or metrics at maximum levels, measured on a 4-node Large cluster)
  • Maximum number of certified concurrent users (**): 300
  • Maximum number of End Point Operations agents: 10,000
* The maximum number of concurrent users is 10 per node when objects or metrics are at maximum levels (for example, a 16-node Large cluster with 200K objects can support 160 concurrent users).
** The maximum number of certified concurrent users is achieved on a system configured with objects and metrics at 50% of the supported maximums (for example, a 4-node Large cluster with 32K objects).
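
The arithmetic behind these two footnotes, as a quick illustration (the variable names are ours):

# 10 concurrent users per node when the cluster is at maximum objects/metrics.
users_per_node_at_max_load = 10
large_nodes = 16
print(large_nodes * users_per_node_at_max_load)   # -> 160 concurrent users
# The certified 300-user maximum was measured on 4 Large nodes running at
# roughly 50% of the supported object/metric maximums (about 32K objects).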

VDI use case

  • A large node can collect more than 20,000 vRealize Operations for Horizon objects when a dedicated remote collector is used.
  • A large node can collect more than 20,000 vRealize Operations for Published Apps objects when a dedicated remote collector is used.
 

Constraints

  • Extra small and small nodes are designed for test environments and proofs of concept. We do not recommend scaling out small nodes beyond two nodes, and we do not recommend scaling out extra small nodes at all.
  • If you have more than one node, all nodes must be sized equally. Do not mix nodes of different sizes.
  • Snapshots impact performance. A snapshot on the disk causes slow I/O and high CPU co-stop values, which degrades the performance of vRealize Operations.
  • With HA enabled, each object is replicated on another node of the cluster, so the limits for an HA-enabled instance are half of those for a non-HA instance (see the example after this list).
  • vRealize Operations HA supports only a single node failure. Avoid a single point of failure by placing each node on a different host in the vSphere cluster.
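
A one-line illustration of the HA constraint, using the Large cluster maximum from the sizing table above (variable names are ours):

non_ha_max_objects_large_16_nodes = 200_000    # from the sizing table above
ha_max_objects = non_ha_max_objects_large_16_nodes // 2
print(ha_max_objects)                          # -> 100,000 objects with HA enabled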
 

Scaling Tips

  • Scale up, not out.
Use the configuration that has the smallest number of nodes.
Example: For 180,000 objects, deploy 4 Extra Large nodes instead of 12 Large nodes. You will use half the CPU (see the arithmetic after this list).
  • You can increase the RAM size instead of increasing both RAM and CPU.
This is useful if the number of objects is close to the upper limit. Check that there is enough RAM on the underlying hardware.
Example: A Large node has 48 GB of memory by default and the number of objects is close to 20,000. You can increase the memory up to 96 GB. This assumes the underlying ESXi host has more than 96 GB per socket.
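
The CPU comparison behind the scale-up example above, using the vCPU counts from the sizing table (a quick illustration; the names are ours):

extra_large_vcpu, large_vcpu = 24, 16          # vCPU per node, from the sizing table
print(4 * extra_large_vcpu)                    # 4 Extra Large nodes -> 96 vCPU
print(12 * large_vcpu)                         # 12 Large nodes      -> 192 vCPU (twice as much)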
 

Collectors

The collection process on a node supports adapter instances whose total number of objects is no more than 3,000, 8,500, 16,500, and 40,000 on small, medium, large, and extra large multi-node vRealize Operations Manager clusters, respectively.
For example, a 4-node cluster of medium nodes supports a total of 34,000 objects.
However, if an adapter instance needs to collect 12,000 objects, a collector running on a medium node cannot support it, because a medium node can handle only 8,500 objects. In this situation, you can add a large remote collector and pin the adapter instance to the remote collector, or scale up to a configuration that supports more objects (see the sketch below).
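
A small sketch of this placement rule; the dictionary keys and function name are illustrative, and the limits come from this article:

# Per-collector object limits (illustrative): multi-node data nodes and remote collectors.
COLLECTOR_OBJECT_LIMITS = {
    "small": 3_000, "medium": 8_500, "large": 16_500, "extra_large": 40_000,
    "rc_standard": 6_000, "rc_large": 32_000,
}

def fits_on_collector(collector_size, adapter_instance_objects):
    """True if a single adapter instance fits within the collector's object limit."""
    return adapter_instance_objects <= COLLECTOR_OBJECT_LIMITS[collector_size]

print(fits_on_collector("medium", 12_000))    # False: too large for a medium node's collector
print(fits_on_collector("rc_large", 12_000))  # True: pin the adapter instance to a large RC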

Attachments

vRealizeOperationsManagerSizing_6.7