High disk consumption by Elasticsearch

Article ID: 275060


Products

DX Application Performance Management

Issue/Introduction

Elasticsearch is consuming a large amount of NFS storage. On 10/04/2023 we added 1 TB to the NFS volume, but Elasticsearch usage is still growing quickly.

We need help identifying what is driving the consumption and reducing it before the disk reaches its limit.

2.2T    /nfs/example/dxi/jarvis/elasticsearch
2.4T    /nfs/example/dxi/jarvis/

[<user>@<host> jarvis]# df -kh .
Filesystem      Size  Used Avail Use% Mounted on
/dev/example1   3.0T  2.6T  410G  87% /opt/example
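A quick way to cross-check the filesystem numbers against what Elasticsearch itself reports is the _cat/allocation API; a minimal sketch, assuming Elasticsearch is reachable on localhost:9200 as in the curl command further below:

curl -s 'http://localhost:9200/_cat/allocation?v&h=node,shards,disk.indices,disk.used,disk.avail,disk.percent'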

Apparently the indices are not being rotated:

sh-5.1$ curl -X GET 'http://localhost:9200/_cat/indices/*ao_apm_tt*?h=h,s,i,id,p,r,dc,dd,pri.segments.count,pri.store.size,ss,creation.date.string&s=creation.date.string&v'
h     s    i              id                     p r       dc dd pri.segments.count pri.store.size     ss creation.date.string
green open ao_apm_tt_2_18 <number> 1 1  1123866  0                  1         26.1gb 52.3gb 2023-09-15T20:26:01.718Z
green open ao_apm_tt_2_19 <number> 1 1  2264767  0                  1         24.8gb 49.7gb 2023-09-16T15:08:00.894Z
green open ao_apm_tt_2_20 <number> 1 1  1372694  0                  1         24.4gb 48.9gb 2023-09-18T11:39:01.046Z
green open ao_apm_tt_2_21 <number>  1 1  1562220  0                  1         25.4gb 50.8gb 2023-09-18T17:00:01.749Z
green open ao_apm_tt_2_22 <number>  1 1  1674373  0                  1         25.7gb 51.5gb 2023-09-19T08:22:30.914Z
green open ao_apm_tt_2_23 <number> 1 1  1098995  0                  1         21.3gb 42.7gb 2023-09-19T16:30:31.955Z
green open ao_apm_tt_2_24 <number>  1 1  1934225  0                  1         25.6gb 51.3gb 2023-09-19T20:18:31.084Z
green open ao_apm_tt_2_25 <number>  1 1  2012722  0                  1         27.9gb 55.8gb 2023-09-20T14:05:02.396Z
green open ao_apm_tt_2_26 <number>  1 1  2374736  0                  1         23.3gb 46.6gb 2023-09-20T20:48:00.667Z
green open ao_apm_tt_2_27 <number>  1 1  2410191  0                  1         23.8gb 47.6gb 2023-09-21T15:05:02.133Z
green open ao_apm_tt_2_28 <number>  1 1  2231682  0                  1         25.9gb 51.9gb 2023-09-22T04:05:31.648Z
green open ao_apm_tt_2_29 <number>  1 1  1597985  0                  1         27.2gb 54.4gb 2023-09-22T14:24:31.482Z
green open ao_apm_tt_2_30 <number>  1 1  1268400  0                  1         21.2gb 42.4gb 2023-09-22T17:30:32.563Z
green open ao_apm_tt_2_31 <number>  1 1 65066687  0                261        775.5gb  1.5tb 2023-09-22T20:18:32.409Z
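To see which indices across the whole cluster are largest (not only the ao_apm_tt ones), the same _cat/indices API can be sorted by store size; a minimal sketch, using the same local endpoint as above:

curl -s 'http://localhost:9200/_cat/indices?v&s=store.size:desc&h=index,pri,rep,docs.count,pri.store.size,store.size,creation.date.string'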

Environment

Release: 23.1

Resolution

First, check whether the jarvis-kron pod keeps restarting (see the command sketch after this list).

- If yes, follow this KB:
https://knowledge.broadcom.com/external/article/236940

- If not, collect the logs from the jarvis-kron, jarvis-esutils, and jarvis-elasticsearch-x pods.
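A minimal sketch of those checks, assuming a standard kubectl context; <namespace> and the pod names are placeholders to be replaced with the actual values from kubectl get pods:

kubectl get pods -n <namespace> | grep jarvis-kron
# the RESTARTS column shows whether the pod keeps restarting

kubectl logs -n <namespace> <jarvis-kron-pod-name>          > jarvis-kron.log
kubectl logs -n <namespace> <jarvis-esutils-pod-name>       > jarvis-esutils.log
kubectl logs -n <namespace> <jarvis-elasticsearch-pod-name> > jarvis-elasticsearch.log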

***

After the KB procedure was applied, a new index was created. We are now monitoring its growth and whether the oldest indices are deleted.

The jarvis-kron pod no longer restarts.
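To verify that rotation has resumed, the same checks can be repeated; a minimal sketch, assuming the Elasticsearch endpoint and <namespace> placeholder used earlier:

# a new ao_apm_tt index should appear and the oldest ones should disappear over time
curl -s 'http://localhost:9200/_cat/indices/*ao_apm_tt*?v&s=creation.date.string&h=index,pri.store.size,store.size,creation.date.string'

# the RESTARTS count for jarvis-kron should stay stable
kubectl get pods -n <namespace> | grep jarvis-kron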

 

Additional Information

If the KB procedure does not resolve the issue, collect the logs from the jarvis-kron, jarvis-esutils, and jarvis-elasticsearch-x pods.