Collectors are failing to start with a Java out-of-memory error


Article ID: 143454


Products

CA Application Performance Management Agent (APM / Wily / Introscope)
CA Application Performance Management (APM / Wily / Introscope)
INTROSCOPE
DX Application Performance Management

Issue/Introduction

The error below is seen on many of the collectors and is impacting performance testing. How can we fix it?

 

Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000005c0000000, 8589934592, 0) failed; error='Cannot allocate memory' (errno=12)

#

# There is insufficient memory for the Java Runtime Environment to continue.

# Native memory allocation (mmap) failed to map 8589934592 bytes for committing reserved memory.

# An error report file with more information is saved as:

# /usr/wily/introscope10.7/colx/hs_err_pid14470.log

./EMCtrl.sh status: Enterprise Manager stopped

Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000005c0000000, 8589934592, 0) failed; error='Cannot allocate memory' (errno=12)

 

# There is insufficient memory for the Java Runtime Environment to continue.

# Native memory allocation (mmap) failed to map 8589934592 bytes for committing reserved memory.

# Possible reasons:

#   The system is out of physical RAM or swap space

#   In 32 bit mode, the process size limit was hit

# Possible solutions:

#   Reduce memory load on the system

#   Increase physical memory or swap space

#   Check if swap backing store is full

#   Use 64 bit Java on a 64 bit OS

#   Decrease Java heap size (-Xmx/-Xms)

#   Decrease number of Java threads

#   Decrease Java thread stack sizes (-Xss)

#   Set larger code cache with -XX:ReservedCodeCacheSize=

# This output file may be truncated or incomplete.

#

#  Out of Memory Error (os_linux.cpp:2627), pid=12901, tid=0x00007fe558745700

#

# JRE version:  (8.0_112-b15) (build )

# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.112-b15 mixed mode linux-amd64 compressed oops)

# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again

#

 

---------------  T H R E A D  ---------------

 

Current thread (0x00007fe550009800):  JavaThread "Unknown thread" [_thread_in_vm, id=12996, stack(0x00007fe5586c6000,0x00007fe558746000)]

 

Stack: [0x00007fe5586c6000,0x00007fe558746000],  sp=0x00007fe558743e10,  free space=503k

Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)

V  [libjvm.so+0xac6f7a]  VMError::report_and_die()+0x2ba

V  [libjvm.so+0x4fc71b]  report_vm_out_of_memory(char const*, int, unsigned long, VMErrorType, char const*)+0x8b

V  [libjvm.so+0x923b13]  os::Linux::commit_memory_impl(char*, unsigned long, bool)+0x103

V  [libjvm.so+0x923fb5]  os::pd_commit_memory_or_exit(char*, unsigned long, unsigned long, bool, char const*)+0x35

V  [libjvm.so+0x91e096]  os::commit_memory_or_exit(char*, unsigned long, unsigned long, bool, char const*)+0x26

V  [libjvm.so+0x5c36bf]  G1PageBasedVirtualSpace::commit_internal(unsigned long, unsigned long)+0xbf

V  [libjvm.so+0x5c394c]  G1PageBasedVirtualSpace::commit(unsigned long, unsigned long)+0x11c

V  [libjvm.so+0x5c6590]  G1RegionsLargerThanCommitSizeMapper::commit_regions(unsigned int, unsigned long)+0x40

V  [libjvm.so+0x629bf7]  HeapRegionManager::commit_regions(unsigned int, unsigned long)+0x77

V  [libjvm.so+0x62ae91]  HeapRegionManager::make_regions_available(unsigned int, unsigned int)+0x31

V  [libjvm.so+0x62b410]  HeapRegionManager::expand_by(unsigned int)+0xb0

V  [libjvm.so+0x59a849]  G1CollectedHeap::expand(unsigned long)+0x199

"hs_err_pid12901.log" 397L, 19736C

 

 

Environment

Release : 10.7.0

Component : APM Agents

Resolution


The server's memory was heavily utilized by other processes, and those processes had also consumed the entire system swap space. Because a dedicated heap size was defined for the Introscope process, the JVM could not commit that memory and the process failed to start. Rebooting the server released the memory, after which the Introscope process started successfully.
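Before restarting, the host's available memory can be compared against the heap the JVM is trying to commit (8589934592 bytes = 8 GiB in the error above). The sketch below assumes a Linux host; the `has_enough_memory` helper is ours for illustration, not part of any CA tooling:

```shell
# Sketch: check whether the host can commit the 8 GiB heap the JVM requested.

# has_enough_memory AVAILABLE_KB REQUIRED_KB -> exit 0 if enough
has_enough_memory() {
    [ "$1" -ge "$2" ]
}

required_kb=$((8 * 1024 * 1024))   # 8 GiB heap, expressed in KiB

# MemAvailable is the kernel's estimate of memory usable without swapping.
avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo 2>/dev/null)
avail_kb=${avail_kb:-0}

if has_enough_memory "$avail_kb" "$required_kb"; then
    echo "enough free memory to start the Enterprise Manager"
else
    echo "not enough free memory for an 8 GiB heap"
    # List the top memory consumers before deciding what to stop (or reboot).
    ps -eo pid,rss,comm --sort=-rss | head -5
fi
```

If the check fails, identifying and stopping the largest consumers (or rebooting, as was done here) frees the memory the collector needs.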

Additional Information

See the "Possible solutions" list in the JVM error output above for other options, such as increasing physical memory or swap space, or reducing the Java heap size.
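If the host cannot be freed of other workloads, one of the listed solutions is to reduce the Java heap. For an Introscope Enterprise Manager this is typically configured in the Introscope_Enterprise_Manager.lax file in the EM home directory; the values below are illustrative only, so check the file and sizing guidance for your own release before editing:

```
# Introscope_Enterprise_Manager.lax (illustrative values, not a recommendation)
# Lower -Xms/-Xmx so the heap fits within available physical memory and swap.
lax.nl.java.option.additional=-Xms4096m -Xmx4096m
```

Restart the Enterprise Manager with EMCtrl.sh after any heap change.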