Users experiencing performance issues using HCX L2 extensions when the NE appliances are running on the same ESXi host.


Article ID: 421647


Updated On:

Products

VMware HCX

Issue/Introduction

  • When using HCX L2 Network Extension, users experience performance issues.

  • Packet drops occur when the NE appliance CPU reaches 100%.

  • Reviewing the output of the top command on the HCX NE appliance shows CPU usage reaching 100%. The process consuming the most CPU is the kernel softirq thread (ksoftirqd) handling IPsec processing:

    SSH into the HCX Manager, then run:
    ccli > list > go <NE_ID#> > ssh > top

  • This high CPU usage by the ksoftirqd kernel thread indicates that the traffic load is causing heavy per-packet IPsec encryption processing on the NE appliance VM.

  • In vCenter, the performance chart for the NE appliance VM shows a high percentage of CPU usage.

Environment

HCX 4.11.0

Cause

Both HCX Network Extension (NE) appliances are running on the same ESXi host.

Resolution

Create a VM/Host rule of type "Separate Virtual Machines" to keep the two HCX Network Extension (NE) appliances on separate ESXi hosts. vSphere Distributed Resource Scheduler (DRS) will then ensure the virtual machines are placed on distinct ESXi hosts.
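The rule can be created in the vSphere Client (Cluster > Configure > VM/Host Rules), or scripted. As a hedged sketch, the open-source govc CLI can create the same "Separate Virtual Machines" (VM anti-affinity) rule; the cluster name "Compute-Cluster" and the VM names "NE-I1" and "NE-I1-peer" below are placeholders, not values from this article — substitute the names from your own environment:

```shell
# Placeholder connection details -- replace with your vCenter and credentials.
export GOVC_URL='https://vcenter.example.com'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='<password>'

# Create an enabled DRS anti-affinity ("Separate Virtual Machines") rule
# for the two NE appliance VMs so DRS never places them on the same host.
govc cluster.rule.create \
  -cluster Compute-Cluster \
  -name separate-hcx-ne \
  -enable \
  -anti-affinity \
  NE-I1 NE-I1-peer
```

DRS must be enabled on the cluster for the rule to take effect; once it is, DRS will migrate one of the appliances if both ever land on the same host.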