VMware Aria Operations for Networks 6.X

/var/log/elasticsearch reports an issue with a hardware checksum:

org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:662) ~[lucene-core-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera - 2020-04-08 08:55:42]
Caused by: org.apache.lucene.index.CorruptIndexException: checksum failed (hardware problem?) : expected=8a3c786b actual=aa4c34ea (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/var/lib/elasticsearch/nodes/0/indices/eylyF2pcRxGc-lRKCi9x4w/0/index/_6mqy.fdt")))
    at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:419) ~[lucene-core-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera - 2020-04-08 08:55:42]
    at org.apache.lucene.codecs.CodecUtil.checksumEntireFile(CodecUtil.java:526) ~[lucene-core-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera - 2020-04-08 08:55:42]
    at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.checkIntegrity(CompressingStoredFieldsReader.java:669) ~[lucene-core-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera - 2020-04-08 08:55:42]
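Before moving or rebuilding anything, it can help to confirm that the corruption is persistent on disk rather than a transient read error. The sketch below is illustrative only, not an official AON procedure: it assumes shell access to the platform VM, that the Elasticsearch service has been stopped first, and that lucene-core 8.5.1 is on the classpath. It runs Lucene's standard CheckIndex tool against the shard path reported in the stack trace above.

  import java.nio.file.Paths;
  import org.apache.lucene.index.CheckIndex;
  import org.apache.lucene.store.Directory;
  import org.apache.lucene.store.FSDirectory;

  // Offline integrity check of the affected shard's Lucene index.
  // Run only while Elasticsearch is stopped (CheckIndex needs the index write lock);
  // the path is the one taken from the log excerpt above.
  public class VerifyShardIndex {
      public static void main(String[] args) throws Exception {
          String shardIndexPath =
              "/var/lib/elasticsearch/nodes/0/indices/eylyF2pcRxGc-lRKCi9x4w/0/index";
          try (Directory dir = FSDirectory.open(Paths.get(shardIndexPath));
               CheckIndex checker = new CheckIndex(dir)) {
              checker.setInfoStream(System.out);   // print per-segment progress
              CheckIndex.Status status = checker.checkIndex();
              System.out.println(status.clean
                      ? "Index is clean"
                      : "Index is corrupt - checksum or structural failure detected");
          }
      }
  }

The same check can also be run from the command line via CheckIndex's built-in main method (java -cp lucene-core-8.5.1.jar org.apache.lucene.index.CheckIndex <shard index path>).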
Cause:
This issue is caused by an underlying storage problem on the datastore where the AON VMs are deployed. The checksum mismatch indicates a hardware-level failure; further investigation by the storage support team is required to determine the root cause and ensure a long-term resolution.
Workaround:
If the environment is currently operating as a single node, scaling out to a cluster can mitigate the issue. Alternatively, migrating the affected AON VMs to a healthy storage location may also serve as a temporary workaround.
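After scaling out or migrating the VMs, cluster and index health can be verified against Elasticsearch's standard health APIs. The sketch below is only an illustration and assumes the Elasticsearch HTTP endpoint is reachable on localhost:9200 from inside the platform VM (the actual port and access controls in an AON deployment may differ); it polls _cluster/health and lists any indices that remain red.

  import java.net.URI;
  import java.net.http.HttpClient;
  import java.net.http.HttpRequest;
  import java.net.http.HttpResponse;

  // Quick post-workaround check: overall cluster health plus any indices still red.
  public class ClusterHealthCheck {
      public static void main(String[] args) throws Exception {
          HttpClient client = HttpClient.newHttpClient();
          for (String path : new String[] {"/_cluster/health?pretty", "/_cat/indices?v&health=red"}) {
              HttpRequest request = HttpRequest.newBuilder()
                      .uri(URI.create("http://localhost:9200" + path))   // assumed endpoint
                      .GET()
                      .build();
              HttpResponse<String> response =
                      client.send(request, HttpResponse.BodyHandlers.ofString());
              System.out.println("GET " + path + " -> HTTP " + response.statusCode());
              System.out.println(response.body());
          }
      }
  }

A green status (or yellow on a single node, where replica shards cannot be allocated) with no red indices indicates the workaround has taken effect; any index still reported red points back at the corrupted shard.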