Multihoming on ESXi
Article ID: 318546

Products

VMware vSphere ESXi

Issue/Introduction

This article provides information about multihoming in ESXi.

Multihoming in a VMkernel networking context means that there are multiple VMkernel adapters in a single TCP/IP stack.
Note: There can be multiple TCP/IP stacks each with a single vmknic. That is not considered multihoming.

Multihoming should not be confused with iSCSI port-binding, Multi-NIC vMotion, or the use of multiple Netstacks / TCP/IP stacks.

For more information about VMkernel networking in vSphere, see VMkernel Networking Layer.
For more information about TCP/IP stacks in vSphere, see View TCP/IP Stack Configuration on a Host.
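
A quick way to see how many TCP/IP stacks exist on a host and which stack each VMkernel adapter belongs to is the ESXi Shell (a minimal sketch; output columns vary slightly by release):

# List all TCP/IP stacks on the host (default, vmotion, provisioning, and any custom stacks)
esxcli network ip netstack list

# List all VMkernel adapters with their port group and the TCP/IP stack they belong to
esxcli network ip interface list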


Environment

VMware vSphere ESXi 

Cause

A VMkernel TCP/IP stack uses a single routing table to route traffic and maintains its own DNS configuration, congestion control algorithm, and maximum number of allowed connections.

ESXi does not run a routing protocol, so it cannot learn or add dynamic routes.


For example, if you have VMkernel ports configured like this (unsupported):

One VMkernel port for Management, named vmk0 with IP 192.168.0.8/24 and Default Gateway 192.168.0.1 as part of the default TCP/IP stack.
One VMkernel port for vMotion, named vmk1 with IP 192.168.0.10/24 and Default Gateway 192.168.0.1 as part of the default TCP/IP stack.
Another VMkernel port for iSCSI, named vmk2 with IP 192.168.0.12/24 and Default Gateway 192.168.0.1 as part of the default TCP/IP stack.

Because all three of these vmknics are configured within the same IP subnet (in this example), the VMkernel TCP/IP stack may choose any one of the three interfaces for outgoing (egress) traffic (Management, vMotion, and iSCSI) on that subnet.

In this example, vMotion operations can fail, storage latency and timeouts may be reported, and the ESXi host can disconnect from vCenter.
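
A minimal sketch of confirming such a configuration from the ESXi Shell, using the example vmknics above: inspect the interface addresses and the stack's routing table.

# List the IPv4 configuration of all vmknics; in this example all three report an address in 192.168.0.0/24
esxcli network ip interface ipv4 get

# Show the routing table of the default TCP/IP stack; a single connected route for
# 192.168.0.0/24 covers all egress traffic on that subnet, regardless of traffic type
esxcli network ip route ipv4 list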

Resolution

Ensure the VMkernel ports are in different TCP/IP stacks.

Alternatively, if the vmknics are in different IP subnets within the same TCP/IP stack, each subnet needs its own gateway and route configuration.
This means that any required route entries must be added statically, either from the vCenter Server (VCSA) or from the ESXi host CLI, as sketched below.
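
A minimal sketch of adding a static route from the ESXi host CLI (the network and gateway values below are placeholders, not values from this article):

# Add a static route to a remote network through a specific gateway in the default TCP/IP stack
# (10.20.30.0/24 and 192.168.0.254 are placeholder values)
esxcli network ip route ipv4 add --network=10.20.30.0/24 --gateway=192.168.0.254

# Verify the resulting routing table
esxcli network ip route ipv4 list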

Configurations with more than one vmknic interface on the same IP subnet are not supported.


Example 1 of a supported VMkernel port configuration (a CLI sketch follows the list):

One VMkernel port for vMotion, named vmk1 with IP 192.168.0.10/24 and Default Gateway 192.168.0.1 (in the pre-defined vMotion TCP/IP stack).
Another VMkernel port for iSCSI, named vmk2 with IP 172.1.0.12/24 and Default Gateway 172.1.0.1 (in a custom TCP/IP stack).
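
A minimal sketch of creating the iSCSI vmknic in a custom TCP/IP stack from the ESXi Shell. The stack name "iscsi-stack" and the port group name "iSCSI-PG" are assumptions for illustration, not names taken from this article:

# Create a custom TCP/IP stack (run once per host); "iscsi-stack" is an example name
esxcli network ip netstack add --netstack=iscsi-stack

# Create vmk2 on an existing port group ("iSCSI-PG" is an example name) and place it in the custom stack
esxcli network ip interface add --interface-name=vmk2 --portgroup-name="iSCSI-PG" --netstack=iscsi-stack

# Assign the static IPv4 address from the example
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=172.1.0.12 --netmask=255.255.255.0 --type=static

The custom stack's gateway and DNS settings are then configured separately (for example, from the vSphere Client under the host's TCP/IP configuration).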


Example 2 of a supported VMkernel port configuration:

One VMkernel port for Management, named vmk0 with IP 192.168.0.10/24 and Default Gateway 192.168.0.1 (in the pre-defined Default TCP/IP stack).
Another VMkernel port for vMotion, named vmk1 with IP 10.10.128.12/24, using the stack's Default Gateway 192.168.0.1 (in the pre-defined Default TCP/IP stack).
Another VMkernel port for iSCSI, named vmk2 with IP 172.1.0.12/24 and a Default Gateway Override using gateway 172.1.0.1 (in the pre-defined Default TCP/IP stack).

All of these interfaces share the same DNS information, use the same congestion control algorithm, and count toward the maximum number of allowed connections of the Default TCP/IP stack. A sketch of setting the per-vmknic gateway override from the CLI follows.
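A minimal sketch of the gateway override for vmk2 from the ESXi Shell. The --gateway option of esxcli network ip interface ipv4 set is assumed to be available (it is version-dependent); the same override can also be set in the vSphere Client with "Override default gateway for this adapter" in the VMkernel adapter's IPv4 settings:

# Assign the static address for vmk2 and override its default gateway
# (the --gateway option is version-dependent; check the esxcli reference for your release)
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=172.1.0.12 --netmask=255.255.255.0 --type=static --gateway=172.1.0.1

# Confirm the per-interface configuration, including the overridden gateway
esxcli network ip interface ipv4 get --interface-name=vmk2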
 

 

Additional Information






Legacy content from when ESX was supported:
 
Take special care when migrating an ESX system to ESXi. On ESX, the management interface (vswif) is owned by the Service Console and any routing decision for outgoing management traffic is done using the routing table in the Service Console. VMkernel interfaces (vmknics), however, are owned by the VMkernel and any routing decision for outgoing VMkernel traffic (vMotion, FT, iSCSI and NFS) is done using the routing table in the VMkernel TCP/IP stack.
 
On ESX, the management interface can be in the same IP subnet as one of the VMkernel NICs without causing any issues.
 
ESXi does not have a Service Console. The management agents/daemons run on the VMkernel and use the VMkernel's TCP/IP stack. Therefore the management interface is just another VMkernel interface (vmknic). On ESXi, the management traffic and the VMkernel traffic use the same routing table in the VMkernel TCP/IP stack. Because of this, there is a possibility of unexpected network behavior if the management interface is on the same IP subnet as one of the other VMkernel interfaces.
 
Note: Having more than one vmknic obtain its configuration from the same DHCP server (and therefore end up in the same IP subnet) leads to the same situation. VMware recommends avoiding this scenario as well.



Impact/Risks:
Deploying such a configuration can lead to unexpected results like connectivity issues, low throughput and asymmetric routing.

Configurations with more than one vmknic interface on the same IP subnet are not supported.

For exceptions, see Considerations for using software iSCSI port binding in ESX/ESXi (2038869) and Multiple-NIC vMotion in vSphere (2007467).

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Scenario 1: The ESXi host is in a "not responding" state, and multiple VMkernel interfaces (vmk) are configured on the same management subnet. The ESXi management VMkernel (vmk) is connected to the correct physical NIC (vmnic), but the other VMkernel interfaces, which are on the same subnet, are not connected to any vmnic and are showing as "void" in esxtop.

Even if the correct vmnic is connected to the management VMkernel interface, the ESXi host may not reconnect to vCenter because of the multihoming configuration. To resolve this issue, all VMkernel interfaces on the same subnet must be connected to the appropriate vmnics. Although multihoming on the same subnet is not a best practice, ensuring that all VMkernel interfaces are properly mapped to their respective vmnics can help restore connectivity and resolve the host's "not responding" state in vCenter. The vmknic-to-vmnic mapping can be verified as sketched below.
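
A minimal sketch of verifying the mapping from the ESXi Shell. The port group name "Management Network" is the usual default and is used here only as an example. In esxtop, the network panel (press n) also shows which uplink, if any, each vmknic is using:

# List vmknics with their port group and TCP/IP stack
esxcli network ip interface list

# Show standard vSwitches with their uplinks, and the port groups on each vSwitch
esxcli network vswitch standard list
esxcli network vswitch standard portgroup list

# Show the teaming/failover policy (active and standby uplinks) of a specific port group
# ("Management Network" is an example port group name)
esxcli network vswitch standard portgroup policy failover get --portgroup-name="Management Network"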