
Issues with the filebeat while pushing logs to logcollector and kibana


Article ID: 214404


Updated On:

Products

DX Operational Intelligence

Issue/Introduction

We are observing issues with our Filebeat.

Our setup is that all logs are pushed to an NFS share; Filebeat is configured to watch those paths and forward the logs to the log collector and Kibana.

Is there a way to ensure that there are no issues at Filebeat, or settings we can tweak in Filebeat or the logging app, so that we do not miss any logs?

Environment

Release : 20.2

Component : CA DOI Foundations

Resolution

Per the Filebeat documentation, reading files across network volumes can be problematic. The Beats documentation recommends installing Filebeat on the edge nodes where the logs are produced; there is no direct workaround from Beats for this issue other than not using a network drive.

If rotation clears off the previous file before Filebeat has finished reading it, data can be lost. The rotated file should be kept readable for some time (renamed rather than deleted) so that Filebeat can finish reading it.

Regarding the error in the screenshot, network volumes might be the problem in this case.

Network volumes are not recommended for use with Filebeat, so they are best avoided. As a general principle, when using Filebeat to read log files that are rotated from time to time, we need to do the following to avoid losing data.

First, check the log rotation strategy to see whether the rotation tool copies the old log data into a new file and then truncates (empties) the original file. If so, data loss cannot be avoided: any data appended after the rotation tool copies the file contents, but before it truncates the file, is deleted as part of the truncation. A second problem with this approach is that a new (rotated) file is created, and Filebeat will read that file as well if its name matches the lookup criteria, which can lead to duplicate data ingestion.

The best way to avoid these issues is to rename the original log file, as per the rotation policy, instead of truncating or deleting its contents, and to create a new file to store new incoming log data. This has to be done by the rotation policy or whatever tool handles file rotation; it cannot be done by Filebeat. Renaming the file for rotation does not cause duplicate data to be ingested, because it is not a new file and Filebeat is aware of it (Filebeat tracks files internally by inode, not by name).

One more possible cause of data loss due to rotation: if the renamed file's name or path does not match Filebeat's lookup criteria and the file is renamed before Filebeat has read its entire contents (which can happen due to backpressure), the unread data is skipped forever.
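As an illustration, rename-based rotation can be set up with a tool such as logrotate. This is a minimal sketch under stated assumptions: logrotate itself is not named in this case, and the path /var/log/myapp/app.log is a hypothetical example, not part of the product configuration.

# Rename-based rotation (logrotate's default behavior).
# "copytruncate" is deliberately NOT used, because lines written
# between the copy and the truncate would be lost.
/var/log/myapp/app.log {
    daily
    rotate 7
    missingok
    notifempty
    # create a fresh, empty file for new log data after the rename
    create 0644 appuser appgroup
    # compress, but delay by one cycle so Filebeat can finish reading app.log.1
    compress
    delaycompress
}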

With the above in mind, we have two things to do:
1. Change the file rotation policy so that the rotated file is simply renamed instead of copied/truncated/deleted; after rotation, new log data should go into a newly created file from which Filebeat can read.
2. Keep the rotated/renamed file readable, and make sure its name/path is matched by Filebeat's search paths (see the sketch below).
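For reference, below is a minimal sketch of a Filebeat input that follows these two points. The paths are hypothetical examples and the values are illustrative assumptions, not product-specific recommendations; close_renamed, close_inactive, and ignore_older are standard options of the Filebeat log input.

filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/app.log     # hypothetical live log file
      - /var/log/myapp/app.log.*   # also match renamed/rotated files so unread data is still picked up
    # Keep the harvester open when the file is renamed during rotation;
    # Filebeat tracks files by inode, not name, so this does not duplicate data.
    close_renamed: false
    # Release the file handle after 5 minutes without new data.
    close_inactive: 5m
    # Skip rotated files older than this, so long-retained files are not re-read.
    ignore_older: 48h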