Error "Could not create the Java Virtual Machine"

Article ID: 296212

Products

VMware Tanzu Greenplum

Issue/Introduction

Symptoms:

A Java heap space error is displayed even though the system has enough free memory.

Note: This error is not limited to PHD or GPDB. It can be observed when running any Java application with similar heap space requirements.

 

Error Message:

Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Error occurred during initialization of VM
Could not reserve enough space for object heap

INFO: os::commit_memory(0x0000000718000000, 2818572288, 0) failed; error='Cannot allocate memory' (errno=12) 
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 2818572288 bytes for committing reserved memory.
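
This failure is easy to reproduce outside of Greenplum or PHD. As a rough sketch (the heap size below is illustrative, not taken from the original report), starting any JVM with a heap larger than the remaining commit headroom fails in the same way:

# java -Xms2700m -Xmx2700m -version
Error occurred during initialization of VM
Could not reserve enough space for object heap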

Environment


Cause

The application requests more virtual memory than the operating system is willing to reserve, so it fails to start. This usually happens during the startup phase, when the application reserves a large amount of virtual memory for the JVM heap space.

The issue is that the system has enough physical memory installed to accommodate the application's memory requirement, yet the error is still produced.

Most systems with GPDB or PHD installed have the Linux kernel parameters vm.overcommit_memory = 2 and vm.overcommit_ratio = 50. With these settings, the available virtual memory space is calculated as (SwapTotal + MemTotal * vm.overcommit_ratio / 100). As a result, an unexpected problem can arise when the swap size is very small.

For example, say a node has 256GB of memory installed but only 4GB of swap space. In this case the OS will only allow up to 132GB (4GB + 256GB * 50 / 100) of virtual memory to be committed, effectively wasting almost 50% of the installed memory. While the memory not covered by the commit limit can still be used for buffers and file cache, this is not an ideal way of utilizing it.
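
As a quick sketch (it assumes vm.overcommit_memory = 2, where the kernel computes the limit as SwapTotal + MemTotal * vm.overcommit_ratio / 100), the expected limit can be computed from values read off the running system; the result should come out close to the CommitLimit reported in /proc/meminfo:

# MEM_KB=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
# SWAP_KB=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
# RATIO=$(sysctl -n vm.overcommit_ratio)
# echo "Expected CommitLimit: $(( SWAP_KB + MEM_KB * RATIO / 100 )) kB"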

 

How to detect similar situations 

1. Check the physical memory and swap space configured. Observe the large disparity between MemTotal and SwapTotal.

# cat /proc/meminfo | grep Mem
MemTotal: 264418044 kB
MemFree: 124329352 kB

# cat /proc/meminfo | grep Swap
SwapTotal: 4194300 kB
SwapFree: 4194300 kB

2. Check the current values of the virtual memory related kernel parameters, specifically vm.overcommit_memory and vm.overcommit_ratio.


# sysctl -a | grep overcommit 
vm.overcommit_memory = 2
vm.overcommit_ratio = 80

3. Check the current virtual memory overcommit status.


# cat /proc/meminfo | grep Commit
CommitLimit: 215728732 kB  # SwapTotal(4194300) + MemTotal(264418044) * vm.overcommit_ratio(80) / 100
Committed_AS: 207266812 kB  # This is the real amount of memory actually being used by applications

Observe that the CommitLimit (the virtual memory limit) is far less than the physical memory installed (MemTotal: 264418044 kB), and that Committed_AS is approaching that limit, which can trigger an out of memory (OOM) situation at any time.
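
A quick way to see how much commit headroom remains (a generic check, not specific to Greenplum) is to subtract Committed_AS from CommitLimit:

# awk '/^CommitLimit:/ {limit=$2} /^Committed_AS:/ {used=$2} END {printf "Headroom: %d kB\n", limit-used}' /proc/meminfo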

 

Resolution

In this specific case, where the swap size is very small compared to the physical memory size, vm.overcommit_ratio needs to be recalculated using the formula below so that the installed physical memory is used appropriately.
 

vm.overcommit_ratio = (MemTotal - SwapTotal) / MemTotal * 100 
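
With the example values above (MemTotal 264418044 kB, SwapTotal 4194300 kB), this works out to roughly (264418044 - 4194300) / 264418044 * 100 ≈ 98. The same calculation can be scripted as a sketch; the output shown corresponds to the example values and will differ on your system:

# awk '/^MemTotal:/ {m=$2} /^SwapTotal:/ {s=$2} END {printf "vm.overcommit_ratio = %d\n", (m-s)/m*100}' /proc/meminfo
vm.overcommit_ratio = 98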

Once the new vm.overcommit_ratio is calculated, update /etc/sysctl.conf and run sysctl -p as root.
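
For example, assuming the calculated ratio from above is 98 (substitute the value for your own system):

# vi /etc/sysctl.conf     (add or update the line below)
vm.overcommit_ratio = 98

# sysctl -p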

After this, run cat /proc/meminfo | grep CommitLimit and check whether the limit has been raised appropriately.
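
With the example numbers above, the new CommitLimit should land close to the installed physical memory. The output below is hypothetical, computed from the example values, and the exact figure on a real system may differ slightly:

# cat /proc/meminfo | grep CommitLimit
CommitLimit: 263323983 kB  # SwapTotal(4194300) + MemTotal(264418044) * vm.overcommit_ratio(98) / 100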