vSphere/vSAN Support for Intel's Optane Persistent Memory (PMem)



Article ID: 326910


Products

VMware vSphere ESXi

Issue/Introduction

This article provides information on vSphere and vSAN support for Intel's Optane Persistent Memory (PMem).

Environment

VMware vSphere ESXi 6.7
VMware vSphere ESXi 7.0.0

Resolution

VMware vSphere supports the Intel Optane persistent memory 100 series (Intel Optane PMem 100 series) with 2nd Gen Intel Xeon Scalable processors and the Intel Optane PMem 200 series with 3rd Gen Intel Xeon Scalable processors.

Intel's PMem technology can be enabled in the following modes on the vSphere platform:

  •  App Direct mode (AD): vSphere 6.7 EP10 and later enable the Intel Optane PMem 100 series in App Direct mode; vSphere 7.0 U2 and later enable the Intel Optane PMem 200 series in App Direct mode. This mode offers large capacity, affordability, and persistence: any supported third-party application can be deployed in production without restriction, with full VMware support. VMware encourages customers to leverage this technology in App Direct mode.

 

  • Memory Mode (MM): vSphere 6.7 EP10 and later enable the Intel Optane PMem 100 series in Memory Mode; vSphere 7.0 U2 and later enable the PMem 200 series in Memory Mode. Using Intel Optane PMem in Memory Mode can offer increased memory capacity and TCO improvements for relevant workloads. VMware will support customers' production deployment of the Intel Optane PMem 100 and 200 series in Memory Mode.


Review the Best Practices for Memory Mode configurations section that follows for guidance on how VMware supports production deployments of the Intel Optane PMem 100 and 200 series in Memory Mode.


Best Practices for Memory Mode configurations:

Starting with vSphere 7.0U3, customers no longer need to go through the RPQ process.

VMware highly recommends that customers and partners adhere to the following best practices:

  • DRAM capacity must be at least 12.5% of PMem capacity; VMware highly recommends a DRAM capacity of 25% or more of PMem capacity as a best practice.
  • The system must be populated with at least 4 Intel Optane PMem DIMMs per socket.
  • Active memory of the host in steady state should at no time exceed 50% of the available DRAM capacity on the host for the specific workload use case.
  • Use the vSphere Memory Monitoring and Remediation (vMMR) feature, introduced in 7.0U3, to get insights on host- and VM-level statistics for both PMem and DRAM, in the Advanced Memory section of the Performance view in the UI. vMMR raises alarms on critical memory conditions.
  • Ensure that the server platform is running with the Balanced Profile BIOS setting recommended by the server OEM.
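The first three best practices above are simple capacity checks that can be validated up front. The following is a minimal sketch of such a check; the function name and unit choices are illustrative, not part of any VMware tool:

```python
def check_memory_mode_config(dram_gb, pmem_gb, pmem_dimms_per_socket, active_mem_gb):
    """Check a Memory Mode host configuration against the KB best practices.

    Returns a list of issues; an empty list means the config passes.
    """
    issues = []
    ratio = dram_gb / pmem_gb
    if ratio < 0.125:
        # Hard requirement: DRAM must be at least 12.5% of PMem capacity.
        issues.append("DRAM below required 12.5% of PMem capacity")
    elif ratio < 0.25:
        # Best practice: 25% or more DRAM relative to PMem.
        issues.append("DRAM below recommended 25% of PMem capacity")
    if pmem_dimms_per_socket < 4:
        issues.append("fewer than 4 PMem DIMMs per socket")
    if active_mem_gb > 0.5 * dram_gb:
        issues.append("steady-state active memory exceeds 50% of DRAM")
    return issues

# Example: 2-socket host, 384 GB DRAM, 1.5 TB PMem (12x128G, 6 DIMMs/socket),
# ~150 GB steady-state active memory.
print(check_memory_mode_config(384, 1536, 6, 150))  # → []
```

Note that real host values for installed and active memory would come from the vSphere UI or host monitoring; this sketch only encodes the thresholds stated above.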


Note: Contact your OEM server vendor for supported server configurations that meet these requirements.

Prior to vSphere 7.0U3, customers are required to meet the following guidelines: for servers running vSphere versions older than 7.0U3, an RPQ process is required unless the configuration is one of those listed in the table below.

 

| Intel Optane Memory | #Sockets | DRAM | PMem | Ratio | Available Memory | Slots Populated/socket (DRAM, PMem) |
|---|---|---|---|---|---|---|
| PMem 100 | 2 | 384G = 12x32G | 1T = 8x128G | "1:3" | 1TB | 6, 4 |
| PMem 100 | 2 | 384G = 12x32G | 1.5T = 12x128G | "1:4" | 1.5TB | 6, 6 |
| PMem 100 | 2 | 768G = 12x64G | 3T = 12x256G | "1:4" | 3TB | 6, 6 |
| PMem 100 | 2 | 1.5T = 12x128G | 6T = 12x512G | "1:4" | 6TB | 6, 6 |
| PMem 100 | 4 | 768G = 24x32G | 2T = 16x128G | "1:3" | 2TB | 6, 4 |
| PMem 100 | 4 | 768G = 24x32G | 3T = 24x128G | "1:4" | 3TB | 6, 6 |
| PMem 100 | 4 | 1.5T = 24x64G | 6T = 24x256G | "1:4" | 6TB | 6, 6 |
| PMem 100 | 4 | 3T = 24x128G | 12T = 24x512G | "1:4" | 12TB | 6, 6 |
| PMem 200 | 2 | 256G = 16x16G | 1T = 8x128G | "1:4" | 1TB | 8, 4 |
| PMem 200 | 2 | 512G = 16x32G | 2T = 16x128G | "1:4" | 2TB | 8, 8 |
| PMem 200 | 2 | 1T = 16x64G | 4T = 16x256G | "1:4" | 4TB | 8, 8 |
| PMem 200 | 2 | 2T = 16x128G | 8T = 16x512G | "1:4" | 8TB | 8, 8 |
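Each row's DIMM populations multiply out to the stated capacities, and the quoted ratios are approximate, which is why they appear in quotation marks. A small sketch (the function name is illustrative) showing the arithmetic for the first PMem 100 row:

```python
def row_ratio(dram_dimms, dram_dimm_gb, pmem_dimms, pmem_dimm_gb):
    """Compute total DRAM, total PMem, and the PMem:DRAM multiple for a row."""
    dram = dram_dimms * dram_dimm_gb
    pmem = pmem_dimms * pmem_dimm_gb
    return dram, pmem, pmem / dram  # 4.0 means an exact 1:4 DRAM:PMem ratio

# First PMem 100 row: 384G = 12x32G DRAM, 1T = 8x128G PMem, listed as "1:3"
dram, pmem, multiple = row_ratio(12, 32, 8, 128)
print(dram, pmem, round(multiple, 2))  # → 384 1024 2.67 (roughly 1:3)
```

The 2.67 result illustrates that the "1:3" entry is a rounded label; the 12x128G / 12x512G rows work out to an exact 4.0.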

 

FAQ

Where do I find the vSphere compatible server hardware for this technology?

To find servers compatible with this technology, use the VMware Compatibility Guide (VCG). The VCG lists compatible servers under the "Persistent Memory" feature. Review the associated KB article, linked in the footnote of the VCG listing, for more specific configuration and technology compatibility. If you do not see the server of your choice in the list, contact the server vendor about availability.
 

What specific editions of vSphere support this technology?

vSphere Enterprise Plus Edition™ and higher editions only.
 

What is the maximum host capacity of Intel® Optane™ Persistent Memory enabled in vSphere?

  • Specific limits of the Intel Optane PMem 100 series available on specific OEM server SKUs are listed in the VMware Compatibility Guide. vSphere certifies the following maximum capacities for the vSphere 6.7 release:
  • Intel® Optane™ Persistent Memory for PMem 100 series:
    • Up to 6TB PMem for 2-socket and 12TB PMem for 4-socket and 8-socket – both in AppDirect mode and Memory mode
    • A combination of DRAM and Intel Optane PMem in App Direct mode with a combined limit of:
      • 15TB (up to 12TB of PMem + up to 3TB of DRAM) (applicable for vSphere 6.7 & 7.0 GA)
      • 18TB (up to 12TB of PMem + up to 6TB of DRAM) (applicable for vSphere 7.0U1 and onwards)
  • Intel® Optane™ Persistent Memory for PMem 200 series:
    • Up to 8TB PMem for 2-socket - both in AppDirect mode and Memory mode
    • A combination of DRAM and Intel Optane PMem in App Direct mode with a combined limit of 12TB (up to 8TB of PMem + up to 4TB of DRAM)

For specific limits per server platform, use the VMware Compatibility Guide.
 

What is the maximum VM capacity of Intel® Optane™ Persistent Memory enabled in vSphere?

A VM can be configured with the following limits in various configurations:

  • Intel Optane PMem 100 series:
    • Memory Mode:
      • Up to 6TB virtual RAM for 2-socket, 4-socket and 8-socket (applicable for vSphere 6.7 & 7.0 GA)
      • Up to 6TB virtual RAM for 2-socket and 12TB virtual RAM for 4-socket and 8-socket (applicable for vSphere 7.0U1 and onwards)
    • App Direct Mode (vNVDIMM is backed by the Intel Optane PMem 100 series):
      • Up to 6TB of virtual RAM and vNVDIMM in total (applicable for vSphere 6.7 & 7.0 GA)
      • Up to 18TB of virtual RAM and vNVDIMM in total - up to 12TB PMem + up to 6TB DRAM (applicable for vSphere 7.0U1 and onwards)

 

  • Intel Optane PMem 200 series:
    • Memory Mode:
      • Up to 8TB virtual RAM for 2-socket
    • App Direct Mode (vNVDIMM is backed by the Intel Optane PMem 200 series):
      • Up to 12TB virtual RAM and vNVDIMM in total - up to 8TB PMem + up to 4TB DRAM
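The App Direct limits above combine a per-component cap with a combined total. A minimal sketch of that check (the dictionary keys and function name are illustrative, sizes in TB, vSphere 7.0U1+ limits):

```python
# Per-series caps from the KB: PMem 100 allows up to 12 TB vNVDIMM + 6 TB vRAM
# (18 TB combined); PMem 200 allows up to 8 TB vNVDIMM + 4 TB vRAM (12 TB combined).
LIMITS = {
    "pmem100": {"pmem": 12, "dram": 6},
    "pmem200": {"pmem": 8, "dram": 4},
}

def vm_config_ok(series, vnvdimm_tb, vram_tb):
    """Return True if a proposed App Direct VM config is within the quoted limits."""
    lim = LIMITS[series]
    return (vnvdimm_tb <= lim["pmem"]
            and vram_tb <= lim["dram"]
            and vnvdimm_tb + vram_tb <= lim["pmem"] + lim["dram"])

print(vm_config_ok("pmem100", 12, 6))  # → True  (18 TB combined, at the cap)
print(vm_config_ok("pmem200", 8, 5))   # → False (vRAM exceeds the 4 TB cap)
```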

For specific limits per server platform, use the VMware Compatibility Guide.
 

Can I use vSAN with this technology?

VMware vSAN supports PMem in Memory Mode (MM) for the configurations listed in the table below:

| Intel Optane Memory | #Sockets | DRAM | PMem | Ratio | Available Memory | Slots Populated/socket (DRAM, PMem) |
|---|---|---|---|---|---|---|
| PMem 100 | 2 | 384G = 12x32G | 1T = 8x128G | "1:3" | 1TB | 6, 4 |
| PMem 100 | 2 | 384G = 12x32G | 1.5T = 12x128G | "1:4" | 1.5TB | 6, 6 |
| PMem 200 | 2 | 256G = 16x16G | 1T = 8x128G | "1:4" | 1TB | 8, 4 |
| PMem 200 | 2 | 512G = 16x32G | 2T = 16x128G | "1:4" | 2TB | 8, 8 |

 

The “Best Practices for Memory Mode Configurations” section above also applies to vSAN. In addition, follow these guidelines:

  • The configuration needs to be All-Flash vSAN. VMware recommends using a vSAN ReadyNode; however, you can consume PMem in Memory Mode in any All-Flash configuration whose components are certified and listed in the vSAN HCL. For example, you can choose any All-Flash ReadyNode and use the table above to create your PMem configuration, or you can use supported components listed in the vSAN HCL together with the table above.
  • The server platform needs to be vSphere certified for Persistent Memory.
  • Configurations with DRAM capacity less than 25% of PMem capacity, or workloads whose active memory exceeds 50% of DRAM capacity, may see unpredictable performance while running multiple VMs.