SR-IOV is disabled by default on ESXi 5.1 and 5.5. To enable it, use the esxcli command or host profiles in ESXi 5.1. In vSphere 5.5, you can also enable it through the vSphere Web Client.
Note: For SR-IOV to function, the physical host must meet these requirements:
- Supported Processor: Intel VT-d or AMD-Vi
- Motherboard firmware support for Intel VT-d or AMD-Vi
- IOMMU is enabled in the BIOS/UEFI
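Before you begin, it can help to confirm which driver module an SR-IOV-capable adapter uses, because the module name is required in the esxcli procedure below. As a hedged example (the vmnic names in the output depend on your hardware), run:
esxcli network nic list
The Driver column of the output shows the module name (for example, ixgbe) to use with the max_vfs parameter.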
To enable SR-IOV on a Physical Adapter using the esxcli command in ESXi 5.1 or 5.5:
- At the host console, or via SSH as root, run the command:
esxcli system module parameters set -m NIC_Driver_Module -p "max_vfs=n"
Where:
- NIC_Driver_Module is the module name of the NIC that is SR-IOV capable (for example, ixgbe).
- n is the number of virtual functions (VFs) provided by the NIC (for example, 8).
For example, to configure virtual functions for an Intel X540 10 Gb Ethernet Adapter, run the command:
esxcli system module parameters set -m ixgbe -p "max_vfs=8"
If you have a dual-port NIC or two NICs that use the same driver module, run the command:
esxcli system module parameters set -m ixgbe -p "max_vfs=8,8"
Note: Add a comma and the value 8 for each additional NIC (for example, max_vfs=8,8,8 for three NICs, and so on). The number of virtual functions supported and available for configuration depends on your system configuration.
- Reboot the host to reload the driver with the configured parameters.
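After the reboot, you can optionally confirm that the parameter was applied. A minimal check, assuming the ixgbe example above:
esxcli system module parameters list -m ixgbe | grep max_vfs
The Value column should show the values you configured (for example, 8,8).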
To enable SR-IOV on a Physical Adapter using host profiles in ESXi 5.1 or 5.5:
- From the vSphere Web Client Home, click Rules and Profiles > Host Profiles.
- Select the host profile from the list and click the Manage tab.
- Click Edit Host Profile and expand the General System Settings folder.
- Expand Kernel Module Parameter and select the parameter of the physical function driver for creating virtual functions.
For example, the parameter for the physical function driver of an Intel physical NIC is max_vfs.
- In the Value text box, type a comma-separated list of valid virtual function numbers.
Each list entry is the number of virtual functions that you want to configure for each physical function. A value of 0 means SR-IOV is not enabled for that physical function.
If you have a dual-port NIC, set the value for each port, separated by a comma, as described in the esxcli command procedure.
- Click Finish.
- Remediate the modified host profile to the target host.
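After the host profile is remediated and the target host is rebooted, you can optionally confirm on that host that the kernel module parameter was applied, for example (assuming the ixgbe driver from the earlier example):
esxcli system module parameters list -m ixgbe | grep max_vfs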
To enable SR-IOV on a Host Physical Adapter in the vSphere Web Client in ESXi 5.5:
- In the vSphere Web Client, navigate to the Host.
- On the Manage > Networking tab, select Physical adapters. The SR-IOV property indicates whether a physical adapter supports SR-IOV.
- Select the physical adapter and click Edit Settings.
- Under SR-IOV, select Enabled from the Status dropdown.
- In the Number of virtual functions text box, type the number of virtual functions that you want to configure for the adapter.
- Click OK.
- Restart the host.
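After the restart, you can optionally confirm from the host shell that the adapter is SR-IOV enabled and that its virtual functions are active. These commands assume the sriovnic namespace available in ESXi 5.5, and the vmnic name is only an example:
esxcli network sriovnic list
esxcli network sriovnic vf list -n vmnic4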
Notes:
- Although SR-IOV is supported on ESXi 5.1 hosts that meet the requirements, it cannot be configured on those hosts by using the vSphere Web Client, and you cannot assign an SR-IOV passthrough adapter to a virtual machine on such a host. The SR-IOV passthrough adapter is available only for virtual machines compatible with ESXi 5.5 and later. Even when a vCenter Server 5.5 release manages an ESXi 5.1 host, the configuration is the same as in release 5.1: you must add a PCI device to the virtual machine hardware and manually select a VF for the device.
- The virtual functions become active on the NIC port represented by the physical adapter entry. They appear in the PCI Devices list in the Settings tab for the host.
- LACP is currently unsupported with SR-IOV.
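As an optional check from the host shell, the virtual functions should also be visible as PCI devices; the exact device description depends on the NIC model:
lspci | grep -i "virtual function"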
For more information:
- For ESXi 5.1, see the Configure a Virtual Machine to Use SR-IOV in the vSphere Client in ESXi 5.1 and Configure a Virtual Machine to Use SR-IOV in the vSphere Web Client sections in the vSphere Networking Guide.
- For ESXi 5.5, see the Configure a Virtual Machine to Use SR-IOV in the vSphere Web Client section in the vSphere Networking Guide.
Note: The preceding links were correct as of August 29, 2013. If you find a link is broken, provide feedback and a VMware employee will update the link.