High CPU utilization observed after Enterprise PKS integration with vRealize Log Insight

Article ID: 316791


Products

VMware

Issue/Introduction

Symptoms:
  • CPU utilization spikes after Enterprise PKS is integrated with vRealize Log Insight
  • The issue occurs regardless of the node plan selected
  • The issue occurs regardless of the rate-limiting value selected
  • The ruby process consumes most of the CPU
  • The fluentd log (fluentd.stdout.log) is filled with BufferOverflowError messages, as in the following example (a quick way to count these messages is sketched after the note below): 
2019-06-07 07:50:18 +0000 [warn]: emit transaction failed: error_class=Fluent::Plugin::Buffer::BufferOverflowError error="buffer space has too many data" location="/var/vcap/data/packages/vrli-fluentd/9b6af2d881d008a24610c0a33eca4ee986251417/gem_home/ruby/2.4.0/gems/fluentd-1.2.6/lib/fluent/plugin/buffer.rb:269:in `write'" tag="XXXX"

Note: The preceding log excerpt is only an example. Date, time, and environment-specific values will vary depending on your environment.
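To confirm how frequently the symptom occurs, you can count the BufferOverflowError warnings in the fluentd log. The following is a minimal sketch; the log path shown is an assumed location under the BOSH job's log directory and may differ in your deployment.

#!/usr/bin/env python3
"""Sketch: count BufferOverflowError warnings in the fluentd log."""
from pathlib import Path

# Assumed log location; adjust to where fluentd.stdout.log lives in your deployment.
LOG_PATH = Path("/var/vcap/sys/log/vrli-fluentd/fluentd.stdout.log")

count = 0
with LOG_PATH.open(errors="replace") as log:
    for line in log:
        if "BufferOverflowError" in line:
            count += 1

print(f"BufferOverflowError occurrences: {count}")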


Environment

VMware PKS 1.x

Cause

This issue can occur when port 9000 is blocked between the Enterprise PKS cluster and the vRealize Log Insight nodes. 

Port 9000 is used by the vRealize Log Insight ingestion API, so it must be open to network traffic from any source that sends data to vRealize Log Insight.

Resolution


To resolve the issue, check for any firewall rules that block traffic over port 9000 between the Enterprise PKS clusters and the vRealize Log Insight clusters. If such rules exist, modify them so that traffic over port 9000 is allowed.
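After adjusting the firewall rules, you can verify that port 9000 is reachable from a PKS cluster node. The following is a minimal sketch; the node names are hypothetical placeholders for your vRealize Log Insight nodes.

#!/usr/bin/env python3
"""Sketch: verify TCP connectivity to the vRealize Log Insight ingestion port."""
import socket

# Hypothetical addresses; replace with your vRealize Log Insight node IPs or FQDNs.
VRLI_NODES = ["vrli-node-1.example.com", "vrli-node-2.example.com"]
INGESTION_PORT = 9000  # vRealize Log Insight ingestion API port

for node in VRLI_NODES:
    try:
        # Attempt a TCP connection with a short timeout.
        with socket.create_connection((node, INGESTION_PORT), timeout=5):
            print(f"{node}:{INGESTION_PORT} is reachable")
    except OSError as err:
        # A timeout or connection refusal here can indicate a firewall rule blocking port 9000.
        print(f"{node}:{INGESTION_PORT} is NOT reachable: {err}")

Run the check from a node in the Enterprise PKS cluster; if any vRealize Log Insight node reports as not reachable, the blocking firewall rule has not yet been fully removed.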



Additional Information

Default Configuration of the vRealize Log Insight Linux Agent