In Aria Operations for Network UI Platform nodes 2 and 3 show Critical: Disk Read IOPS is low and High Indexer Lag



Article ID: 409826


Products

VCF Operations for Networks

Issue/Introduction

In the Aria Operations for Networks UI, Platform nodes 2 and 3 show the error: Critical: Disk Read IOPS is low.

You also see a High Indexer Lag alarm.

When checking the Platform node read IOPS, you see that the values for nodes 2 and 3 differ considerably from the other nodes:

$ ./run_all.sh 'file="/var/lib/ubuntu/fio-tmp"; fio --filename=$file --direct=1 --ioengine=libaio --bs=4K --name=bw-test --rw=randread --iodepth=4 --size=100M | grep -i IOPS; rm -f $file'

When checking the Platform node write IOPS, you can also see differences between the nodes:

$ ./run_all.sh 'file="/var/lib/ubuntu/fio-tmp"; fio --filename=$file --direct=1 --ioengine=libaio --bs=4K --name=bw-test --rw=randwrite --iodepth=4 --size=100M | grep -i IOPS; rm -f $file'
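When comparing nodes, it helps to reduce each fio run to a single number. The helper below is a minimal sketch (the function name `parse_iops` and the sample line are illustrative, not captured from a real node); it extracts the value from fio's standard `IOPS=` summary field:

```shell
# Minimal sketch: pull the IOPS value out of fio's summary line.
# fio prints a line such as "read: IOPS=3199, BW=12.5MiB/s ..." per job.
parse_iops() {
  grep -io 'IOPS=[0-9.]*[km]*' | head -1 | cut -d= -f2
}

# Illustrative input (not real node output):
echo 'read: IOPS=3199, BW=12.5MiB/s (13.1MB/s)(100MiB/8001msec)' | parse_iops
# → 3199
```

Piping each node's fio output through a filter like this makes the outlier nodes easier to spot at a glance.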


Environment

Aria Operations for Networks 6.13.0

Aria Operations for Networks 6.14.0

Aria Operations for Networks 6.14.1

Cause

This can be caused by restoring the nodes to snapshots after a failed upgrade.

This can also occur if there are excessive snapshots that need to be deleted.
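One way to check for leftover snapshot data is to look for snapshot delta disks in the VM's datastore folder. The sketch below is illustrative, assuming the datastore folder is accessible locally and that snapshot disks follow the standard vSphere `-delta.vmdk` / `-sesparse.vmdk` naming; the function name and path are placeholders:

```shell
# Hedged sketch: count snapshot delta disks under a VM folder.
# vSphere names snapshot disks "<vm>-NNNNNN-delta.vmdk" (or
# "-sesparse.vmdk" for SEsparse disks); a large count suggests
# snapshots pending deletion or consolidation.
count_snapshot_disks() {
  find "$1" -name '*-delta.vmdk' -o -name '*-sesparse.vmdk' 2>/dev/null | wc -l
}
```

For example, `count_snapshot_disks /vmfs/volumes/<datastore>/<vm-folder>` would report how many snapshot disks that VM folder holds.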


Resolution

If you are using a vSAN storage cluster, move the affected nodes' disks to another datastore, then retest the read and write IOPS.
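If you manage the cluster with `govc` (the CLI from the govmomi project), the per-node storage migration can be scripted. This is an illustrative sketch only, not a tested procedure; the VM names and datastore name are placeholders for your environment:

```shell
# Illustrative only: storage-migrate each affected Platform node VM to
# another datastore with govc, then re-test the IOPS as described below.
# GOVC_URL and credentials must already be set in the environment.
for vm in platform-node-2 platform-node-3; do   # placeholder VM names
  govc vm.migrate -ds other-datastore "$vm"     # placeholder datastore
done
```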


To re-test the disk read IOPS:

$ ./run_all.sh 'file="/var/lib/ubuntu/fio-tmp"; fio --filename=$file --direct=1 --ioengine=libaio --bs=4K --name=bw-test --rw=randread --iodepth=4 --size=100M | grep -i IOPS; rm -f $file'

To re-test the disk write IOPS:

$ ./run_all.sh 'file="/var/lib/ubuntu/fio-tmp"; fio --filename=$file --direct=1 --ioengine=libaio --bs=4K --name=bw-test --rw=randwrite --iodepth=4 --size=100M | grep -i IOPS; rm -f $file'

Additional Information