There are no activity logs anymore

Article ID: 239232

Updated On:

Products

Web Isolation

Issue/Introduction

For the "no activity logs" challenge reported, we assume what you see is similar to what's shown in the snippet below.

Please work through the troubleshooting steps detailed below and send us clear output/feedback for each step. Working through the steps should identify the cause of the issue and resolve it; in any case, please send us the results for each step.

The Flow:

TIE

The engine, async services, and the proxy write their logs to fireglass_activity.log in the /var/log directory.

Other log files, such as gateway_audit and mgmt_audit, are collected as well.

The log_shipper_logstash service reads the logs and pushes them into log_shipper_redis.

Mgmt

The report_server_logstash service pulls the logs from the TIE's Redis and writes them into report_server_elastic.

So:

file → logstash → redis → logstash → elastic

If log forwarding is configured, then:

file → logstash → redis → logstash → elastic + S3 (or any other storage)

Note 1: If Logstash cannot deliver data to the external storage for any reason, Elasticsearch will also NOT receive data. Logstash moves data only when all of its output endpoints can accept it.
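
As a quick sanity check of both ends of this flow (a hedged sketch: it assumes Elasticsearch on the Management listens on 169.254.0.1:9200, the same endpoint used by the curl commands later in this article):

# On the TIE: confirm the engine/proxy are still writing new activity lines
tail -n 5 /var/log/fireglass_activity.log

# On the Management: confirm Elasticsearch is reachable and holds logstash-* indices
curl -s 'http://169.254.0.1:9200/_cat/indices/logstash-*?v'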

Resolution

Please collect the specific outputs requested below and complete the following tasks on the Management server.

1. We need to check whether the shards are still initializing. To do this, please run the curl command shown below.

See sample output in the snippet below.

https://api-broadcom-ca.wolkenservicedesk.com/attachment/get_attachment_content?uniqueFileId=hiShPP5CY+ywW9A6VHseMQ==
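
The exact command is shown in the attached snippet; an equivalent cluster-health check, assuming the Management's Elasticsearch listens on 169.254.0.1:9200 as in the delete commands later in this article, is:

curl 'http://169.254.0.1:9200/_cluster/health?pretty'

The response includes the status (green/yellow/red) and the initializing_shards counter referenced below.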

Check the initializing_shards field. If the cluster status remains red, please run the next curl command.

See sample output below.

https://api-broadcom-ca.wolkenservicedesk.com/attachment/get_attachment_content?uniqueFileId=rA7uF/IDMaxn5hgnfp3SRQ==
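
Again, the exact command is in the attached snippet; a typical way to see which individual indices are red, under the same endpoint assumption, is:

curl 'http://169.254.0.1:9200/_cluster/health?level=indices&pretty'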

2. We want to check whether the Elasticsearch server starts up and whether it has any corrupt indices. To help with this, please run the following curl commands on the management server.

See sample output in the snippet below.

https://api-broadcom-ca.wolkenservicedesk.com/attachment/get_attachment_content?uniqueFileId=g+XZLVCyoZ6cNpUEXy2K/g==
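
The commands themselves are shown in the attached snippet; a hedged equivalent, under the same endpoint assumption, is to first confirm Elasticsearch responds at all and then list every index with its health column:

# Confirm Elasticsearch is up and answering requests
curl 'http://169.254.0.1:9200/'

# List all indices with their health; lines beginning with "red" point to problem indices
curl -s 'http://169.254.0.1:9200/_cat/indices?v' | grep -E '^health|^red'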

If an index remains red after an Elasticsearch restart, it is most likely corrupt and should be deleted. Run one of the curl commands below (adjusted to the affected index) to delete it, and please send us the output before the deletion.

curl -XDELETE http://169.254.0.1:9200/logstash-2016.04.*

or

curl -XDELETE http://169.254.0.1:9200/metrics-2017.52

 

In one reported case, resetting the Logstash container, which deletes all of the logs it holds, including the corrupt indices, allowed new logs to populate the activity logs as expected and resolved the issue.
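
A quick way to confirm that activity logs are flowing again after such a reset, assuming the same Elasticsearch endpoint and the logstash-YYYY.MM.DD index naming seen above, is to check the total document count twice a few minutes apart and verify that it increases:

curl -s 'http://169.254.0.1:9200/logstash-*/_count?pretty'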