This issue can occur in a Tanzu GemFire service instance or a Tanzu GemFire cluster, where the gemfire-locator job fails on the VMware Tanzu GemFire service instance VMs. The GemFire logs (locator.log or one of its rolled-over logs), located in /var/vcap/sys/log/gemfire-locator/gemfire, show the following exception:
Uncaught exception in thread Thread[IDLE p2pDestreamer,5,P2P Reader Threads]
java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:3332)
    at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:596)
    at java.lang.StringBuffer.append(StringBuffer.java:367)
    at java.io.BufferedReader.readLine(BufferedReader.java:370)
    at java.io.BufferedReader.readLine(BufferedReader.java:389)
    at org.apache.geode.management.internal.beans.BeanUtilFuncs.decompress(BeanUtilFuncs.java:358)
    at org.apache.geode.management.internal.beans.QueryDataFunction.callFunction(QueryDataFunction.java:276)
    at org.apache.geode.management.internal.beans.QueryDataFunction.queryData(QueryDataFunction.java:423)
    at org.apache.geode.management.internal.beans.DistributedSystemBridge.queryData(DistributedSystemBridge.java:1421)
    at org.apache.geode.management.internal.beans.DistributedSystemMBean.queryData(DistributedSystemMBean.java:398)
    at sun.reflect.GeneratedMethodAccessor550.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
    at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
    at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:193)
    at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:175)
    at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:117)
    at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:54)
    at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
    at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
    at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
    at org.apache.geode.management.internal.security.MBeanServerWrapper.invoke(MBeanServerWrapper.java:221)
    at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
    at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
    at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
This is a known limitation. Queries executed from gfsh or Pulse are coordinated by the locator: the members send their results back to the locator, which in turn forwards them to Pulse or gfsh. When the result set is large, the locator can run out of heap memory.
You have a few options to work around this:
Execute heavy queries programmatically using spring-data-gemfire or the GemFire Java APIs. In this case, a cache server takes on the responsibility of collecting and returning the results instead of the locator. Cache servers usually run on higher-capacity VMs (especially with respect to RAM) than locators, so they avoid the issue.
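As a sketch of the programmatic approach using the plain GemFire/Geode Java client API (the locator hostname, port 10334, and the /example region name below are illustrative placeholders, not from this article; the code also assumes the geode-core client dependency is on the classpath):

```java
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.query.Query;
import org.apache.geode.cache.query.QueryService;
import org.apache.geode.cache.query.SelectResults;

public class HeavyQueryClient {
    public static void main(String[] args) throws Exception {
        // Connect through the locator. The query itself runs on the cache
        // servers, which stream results back to this client rather than
        // through the locator's JMX manager.
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("locator.example.com", 10334) // hypothetical host
                .create();
        try {
            QueryService queryService = cache.getQueryService();
            // /example is an illustrative region name; a LIMIT clause keeps
            // the result set bounded even for a programmatic query.
            Query query = queryService.newQuery("SELECT * FROM /example LIMIT 1000");
            SelectResults<?> results = (SelectResults<?>) query.execute();
            System.out.println("Rows returned: " + results.size());
        } finally {
            cache.close();
        }
    }
}
```

With spring-data-gemfire, the equivalent pattern is a repository or GemfireTemplate query backed by a client pool pointing at the locators.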
For queries run from Pulse, another option is to lower the QueryResultSetLimit value from its default of 1000.
If you use jconsole, you can change this value by browsing to the following MBean attribute: GemFire > Distributed > System > Attributes > QueryResultSetLimit
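The same attribute can also be changed without jconsole, over JMX from a small JDK-only program. A minimal sketch, assuming an unsecured JMX manager (the host, port, and new limit are placeholder arguments; authentication and SSL options for a secured manager are omitted):

```java
import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SetQueryResultSetLimit {
    // The MBean that jconsole shows under GemFire > Distributed > System
    // is registered with this ObjectName.
    static ObjectName distributedSystemBean() throws Exception {
        return new ObjectName("GemFire:service=System,type=Distributed");
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 2) {
            // Without a live JMX manager to connect to, just show the target.
            System.out.println(distributedSystemBean());
            return;
        }
        // args: <jmx-manager-host> <jmx-manager-port> [newLimit]
        String url = "service:jmx:rmi:///jndi/rmi://" + args[0] + ":" + args[1] + "/jmxrmi";
        int newLimit = args.length > 2 ? Integer.parseInt(args[2]) : 100;
        try (JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(url))) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            conn.setAttribute(distributedSystemBean(),
                    new Attribute("QueryResultSetLimit", newLimit));
            System.out.println("QueryResultSetLimit is now "
                    + conn.getAttribute(distributedSystemBean(), "QueryResultSetLimit"));
        }
    }
}
```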
This value resets to 1000 if you restart the GemFire JMX manager (typically, the locator).