This KB article provides steps to clear the Elasticsearch indexes when the persistent volume claim becomes full and no space is left on the device.
VIP Authentication/Auth Hub
The following steps can be performed to clear the Elasticsearch indexes and regain the space.

First, pause the Fluent-Bit DaemonSet so that it temporarily stops shipping logs to Elasticsearch. The patch below adds a node selector that matches no nodes, which removes the DaemonSet's pods:
$ kubectl patch \
"$(kubectl get daemonsets.apps --namespace "${SSP_NAMESPACE}" --selector 'app.kubernetes.io/name=fluent-bit' --output name)" \
-n "${SSP_NAMESPACE}" \
-p '{"spec": {"template": {"spec": {"nodeSelector": {"non-existing": "true"}}}}}'
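Optionally, you can confirm that Fluent-Bit has been paused by checking that no Fluent-Bit pods remain in the namespace; the label selector below is the same one used in the patch command:

$ kubectl get pods -n "${SSP_NAMESPACE}" --selector 'app.kubernetes.io/name=fluent-bit'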
Remove the Elasticsearch indexes with the following commands. Please note that ES_PWD must be set to the password of the Elasticsearch elastic user.
$ kubectl exec -n "${ES_NAMESPACE}" "elasticsearch-es-default-0" -c "elasticsearch" -- curl --insecure -u"elastic:$ES_PWD" -s -XDELETE "https://localhost:9200/ssp_audit"
$ kubectl exec -n "${ES_NAMESPACE}" "elasticsearch-es-default-0" -c "elasticsearch" -- curl --insecure -u"elastic:$ES_PWD" -s -XDELETE "https://localhost:9200/ssp_log"
$ kubectl exec -n "${ES_NAMESPACE}" "elasticsearch-es-default-0" -c "elasticsearch" -- curl --insecure -u"elastic:$ES_PWD" -s -XDELETE "https://localhost:9200/ssp_tp_log"
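If you want to see which indexes are consuming space (for example, before and after the deletion), the standard Elasticsearch _cat/indices API can be queried from the same pod; the pod and container names below are the ones used in the delete commands above:

$ kubectl exec -n "${ES_NAMESPACE}" "elasticsearch-es-default-0" -c "elasticsearch" -- curl --insecure -u"elastic:$ES_PWD" -s -XGET "https://localhost:9200/_cat/indices?v"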
Scale the Fluent-Bit DaemonSet back up by removing the node selector that was added above:
$ kubectl patch \
"$(kubectl get daemonsets.apps --namespace "${SSP_NAMESPACE}" --selector 'app.kubernetes.io/name=fluent-bit' --output name)" \
-n "${SSP_NAMESPACE}" \
--type json \
-p '[{"op": "remove", "path": "/spec/template/spec/nodeSelector/non-existing"}]'
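You can then watch the rollout to confirm that the Fluent-Bit pods are scheduled and running again, for example:

$ kubectl rollout status -n "${SSP_NAMESPACE}" "$(kubectl get daemonsets.apps --namespace "${SSP_NAMESPACE}" --selector 'app.kubernetes.io/name=fluent-bit' --output name)"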
Note: Ensure that the ES_NAMESPACE (Elasticsearch namespace) and SSP_NAMESPACE environment variables are set before running the above commands.
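For example, using placeholder values that must be replaced with the namespaces used in your environment, and assuming an ECK-managed cluster named elasticsearch whose elastic user password is stored in the elasticsearch-es-elastic-user secret:

$ export SSP_NAMESPACE="<ssp-namespace>"           # placeholder, replace with your SSP namespace
$ export ES_NAMESPACE="<elasticsearch-namespace>"  # placeholder, replace with your Elasticsearch namespace
$ export ES_PWD="$(kubectl get secret -n "${ES_NAMESPACE}" elasticsearch-es-elastic-user -o jsonpath='{.data.elastic}' | base64 -d)"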