Intermittent JVM crash under load - JBoss 5.x

I am load testing my clustered application, which runs in JBoss 5.1 on JDK 1.6.0_45, and I am experiencing intermittent JVM crashes. From the error report (further details below) it appears that the eden space is 100% full at the time of the crash, so I suspect this is the most likely culprit.
I have therefore been running JVisualVM to look for memory leaks, specifically monitoring my own classes. I can see these classes growing in memory, but they are periodically cleaned up by the garbage collector.
Even if there were a memory leak, I would have expected to see OutOfMemoryError before a complete JVM crash anyway. Can anyone help to point me in the right direction as to where the problem may lie? Any guidance would be very much appreciated.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x000000006dba43f7, pid=3980, tid=2556
#
# JRE version: 6.0_45-b06
# Java VM: Java HotSpot(TM) 64-Bit Server VM (20.45-b01 mixed mode windows-amd64 compressed oops)
# Problematic frame:
# V [jvm.dll+0x2c43f7]
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
#
snip
Heap
PSYoungGen total 670272K, used 662831K [0x00000007d5560000, 0x0000000800000000, 0x0000000800000000)
eden space 641728K, 100% used [0x00000007d5560000,0x00000007fc810000,0x00000007fc810000)
from space 28544K, 73% used [0x00000007fc810000,0x00000007fdcabf68,0x00000007fe3f0000)
to space 28352K, 12% used [0x00000007fe450000,0x00000007fe7d0e60,0x0000000800000000)
PSOldGen total 1398144K, used 1096904K [0x0000000780000000, 0x00000007d5560000, 0x00000007d5560000)
object space 1398144K, 78% used [0x0000000780000000,0x00000007c2f32250,0x00000007d5560000)
PSPermGen total 422848K, used 378606K [0x0000000760000000, 0x0000000779cf0000, 0x0000000780000000)
object space 422848K, 89% used [0x0000000760000000,0x00000007771bb800,0x0000000779cf0000)
snip
VM Arguments:
jvm_args: -Dprogram.name=run.bat -XX:MaxPermSize=512m -Xms2G -Xmx2G -Dhttp.proxyHost=testproxy -Dhttp.proxyPort=8010 -Dhttps.proxyHost=testproxy -Dhttps.proxyPort=8010 -Djavax.net.ssl.trustStore=cacerts -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.ssl.keyStore=testkeystore.jks -Djavax.net.ssl.keyStorePassword=testkeystore -Djboss.messaging.ServerPeerID=2 -Dhttp.nonProxyHosts=*.mydomain.com -Dsun.rmi.dgc.client.gcInterval=900000 -Dsun.rmi.dgc.server.gcInterval=900000 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Djava.library.path=C:\jboss-5.1.0.GA\bin\native;C:\Program Files (x86)\Windows Resource Kits\Tools\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\jboss-5.1.0.GA\bin -Djava.endorsed.dirs=C:\jboss-5.1.0.GA\lib\endorsed
java_command: org.jboss.Main -c hops-cnf -b 0.0.0.0
Launcher Type: SUN_STANDARD

This is most likely a problem with the JVM version and the eden space. Your best bet is probably to reduce the GC threading and switch collectors. Try with:
-XX:LargePageSizeInBytes=5m -XX:ParallelGCThreads=1 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
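For what it's worth, on the setup shown above (launched via run.bat) these flags would typically be appended to JAVA_OPTS; a hedged sketch, assuming you edit run.bat (or run.conf.bat if your distribution has one):

set JAVA_OPTS=%JAVA_OPTS% -XX:LargePageSizeInBytes=5m -XX:ParallelGCThreads=1 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC

Since -XX:+PrintGCDetails and -XX:+PrintGCTimeStamps are already present in the VM arguments above, the behaviour of the new collector will show up in the existing GC output.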

Related

Vertx program memory keeps growing

I used the top command on the Linux server and found that the memory of the deployed Vert.x program keeps increasing. I printed the memory usage with Native Memory Tracking and found that the Internal memory keeps increasing. I did not manually allocate any off-heap memory in the code.
Native Memory Tracking:
Total: reserved=7595MB, committed=6379MB
-                 Java Heap (reserved=4096MB, committed=4096MB)
                            (mmap: reserved=4096MB, committed=4096MB)
-                     Class (reserved=1101MB, committed=86MB)
                            (classes #12776)
                            (malloc=7MB #18858)
                            (mmap: reserved=1094MB, committed=79MB)
-                    Thread (reserved=122MB, committed=122MB)
                            (thread #122)
                            (stack: reserved=121MB, committed=121MB)
-                      Code (reserved=253MB, committed=52MB)
                            (malloc=9MB #12566)
                            (mmap: reserved=244MB, committed=43MB)
-                        GC (reserved=155MB, committed=155MB)
                            (malloc=6MB #302)
                            (mmap: reserved=150MB, committed=150MB)
-                  Internal (reserved=1847MB, committed=1847MB)
                            (malloc=1847MB #35973)
-                    Symbol (reserved=17MB, committed=17MB)
                            (malloc=14MB #137575)
                            (arena=3MB #1)
-    Native Memory Tracking (reserved=4MB, committed=4MB)
                            (tracking overhead=3MB)
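(For reference, a summary like the one above is typically dumped from a JVM started with -XX:NativeMemoryTracking, as in the startup script below, using jcmd; the PID is a placeholder:)

jcmd <pid> VM.native_memory summary scale=MB

Taking a baseline and diffing later can help narrow down which category is actually growing:

jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff scale=MB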
Vert.x version: 3.9.8
Cluster: Hazelcast
Startup script: su web -s /bin/bash -c "/usr/bin/nohup /usr/bin/java -XX:NativeMemoryTracking=detail -javaagent:../target/showbiz-server-game-1.0-SNAPSHOT-fat.jar -javaagent:../../quasar-core-0.8.0.jar=b -Dvertx.hazelcast.config=/data/appdata/webdata/Project/config/cluster.xml -jar -Xms4G -Xmx4G -XX:-OmitStackTraceInFastThrow ../target/server-1.0-SNAPSHOT-fat.jar start-Dvertx-id=server -conf application-conf.json -Dlog4j.configurationFile=log4j2_logstash.xml -cluster >nohup.out 2>&1 &"
If your producers are much faster than your consumers and back pressure isn't handled properly, it is possible for memory to keep increasing.
This can also vary depending on how the code is written.
This similar reported issue could be of help; consider exploring WriteStream too.
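As a rough illustration of the back-pressure handling mentioned above (a minimal sketch; ReadStream and WriteStream are the generic Vert.x interfaces, while the relay method and its names are illustrative, not from the original post):

import io.vertx.core.buffer.Buffer;
import io.vertx.core.streams.ReadStream;
import io.vertx.core.streams.WriteStream;

public class BackPressureSketch {

  // Relay data from source to sink without letting the producer outrun the consumer.
  static void relay(ReadStream<Buffer> source, WriteStream<Buffer> sink) {
    source.handler(buffer -> {
      sink.write(buffer);
      if (sink.writeQueueFull()) {                 // consumer is saturated
        source.pause();                            // stop reading ...
        sink.drainHandler(v -> source.resume());   // ... until the write queue drains
      }
    });
    // In Vert.x 3.7+ the same pattern is built in: source.pipeTo(sink);
  }
}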

ExecutionSetupException: One or more nodes lost connectivity during query

While running a query on Dremio 4.6.1 installed on Kubernetes, we are getting the following error message from Dremio UI:
ExecutionSetupException: One or more nodes lost connectivity during query. Identified nodes were [dremio-executor-2.dremio-cluster-pod.dremio.svc.cluster.local:0].
Dremio-env config has the following settings:
DREMIO_MAX_DIRECT_MEMORY_SIZE_MB=13384
DREMIO_MAX_HEAP_MEMORY_SIZE_MB is not set
We are using workers with 16 GB / 8 cores (10 workers in total)
1 master coordinator with the same config
ZooKeeper with 1 GB / 1 core
Any idea what's causing this behavior?
Tailing the live logs just before the worker crashes, here is what we see:
An irrecoverable stack overflow has occurred.
Please check if any of your loaded .so files has enabled executable stack (see man page execstack(8))
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f41cdac4fa8, pid=1, tid=0x00007f41dc2ed700
#
# JRE version: OpenJDK Runtime Environment (8.0_262-b10) (build 1.8.0_262-b10)
# Java VM: OpenJDK 64-Bit Server VM (25.262-b10 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C 0x00007f41cdac4fa8
#
# Core dump written. Default location: /opt/dremio/core or core.1
#
# An error report file with more information is saved as:
# /tmp/hs_err_pid1.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
[error occurred during error reporting , id 0xb]

JBoss WildFly 15.0.1 Final not starting on Ubuntu 14.04 vServer with 2 GB: insufficient memory for JRE

I am trying to get JBoss WildFly 15.0.1 Final to start on a rather small Ubuntu 14.04 vServer. The server has only 2 GB of RAM.
I tried to start WildFly many times without success. The JVM seems to require a lot more RAM than I had ever expected.
Here's the console output:
root@t2g55:~# service wildfly start
* Starting WildFly Application Server wildfly
* WildFly Application Server failed to start within the timeout allowed.
root@t2g55:~# cat /var/log/wildfly/console.log
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /opt/wildfly
JAVA: /usr/bin/java
JAVA_OPTS: -server -Xms768m -Xmx1536m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
=========================================================================
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000a0000000, 536870912, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 536870912 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /opt/wildfly-15.0.1.Final/hs_err_pid1379.log
1379
root@t2g55:~# free
             total       used       free     shared    buffers     cached
Mem:       2097152     258748    1838404         64          0      38644
-/+ buffers/cache:      220104    1877048
Swap:      2097152          0    2097152
root@t2g55:~# java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-8u222-b10-1~14.04-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
root@t2g55:~#
As you can see, I specified JAVA_OPTS: -server -Xms768m -Xmx1536m ..., which I thought should be enough for a WildFly server to start. Please note that standalone.xml has a datasource defined for a MySQL DB.
Here's the start of the dump .log:
root@t2g55:~# cat /opt/wildfly-15.0.1.Final/hs_err_pid1379.log
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 536870912 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2757), pid=1379, tid=0x00007f62486c6700
#
# JRE version: (8.0_222-b10) (build )
# Java VM: OpenJDK 64-Bit Server VM (25.222-b10 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
--------------- T H R E A D ---------------
.
.
.
QUESTION:
Can this be solved with this amount of memory, or do I simply have too little RAM?
What else could I try?
I don't really want my provider to keep bumping up the memory step by step, only to find there's some other problem with Java, the JVM or something else...
Thanks
EDIT 1:
The vServer provider uses OpenVZ for its virtualization.
Info: they just bumped me up to 4 GB, and I then got JBoss up and running once. After a reboot, WildFly again refuses to start: same thing, not enough memory (even though I switched between the Java 8 and Java 11 runtimes).
Command to start JBoss WildFly: sh /opt/wildfly/bin/standalone.sh &; standalone.xml appears to be OK. I removed the ExampleDS (three entries commented out).
It was indeed a server virtualization issue with OpenVZ.
Quote (in German):
Hi,
das Problem lag bei den user_beancounters, genauer gesagt bei privvmpages, diese waren zu gering eingestellt.
https://wiki.openvz.org/UBC_secondary_parameters#privvmpages
Mit freundlichen Grüßen
Mr X
Translation:
Hi,
the problem was with the user_beancounters, that is with the privvmpages, these were set too low.
https://wiki.openvz.org/UBC_secondary_parameters#privvmpages
Best regards
Mr X
I don't know exactly what he did in detail, but that resolved it.
I now run on a 2GB machine without any problems and memory usage of mysqld + standalone.sh (WildFly + webapp) is around 800 MB.
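For anyone hitting the same wall, a hedged way to check this limit from inside an OpenVZ container (assuming /proc/user_beancounters is readable there):

grep privvmpages /proc/user_beancounters

A growing failcnt value in the last column means allocations are being refused by the container limit rather than by physical RAM.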

OrientDB 2.2.2 - Is there a way to suppress OAbstractProfiler$MemoryChecker Messages?

We are running on 32-bit Windows and, since upgrading from 1.4.1 to 2.2.2, we are seeing the following memory messages in stdout (numbers not exact):
INFO: Database 'BLAH' uses 770MB/912MB of DISKCACHE memory, while Heap is not completely used (usedHeap=123MB maxHeap=512MB). To improve performance set maxHeap to 124MB and DISKCACHE to 1296MB
With 32-bit, we can only set a maximum of Xmx + storage.diskCache.bufferSize ~= 1.4 GB without getting OOM or performance issues. Any combination of different sizes for these two configurable variables results in a variant of the above message.
Is there a way to suppress the above profiler/memory checker messages?
You can disable the profiler with:
java ... -Dprofiler.enabled=false ...
Set that configuration in your server.sh or in the last section of the config/orientdb-server-config.xml file.
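For the config-file route, a hedged sketch of what the entry would look like in the properties section at the end of config/orientdb-server-config.xml (same setting as the -D flag above):

<properties>
    <!-- same effect as -Dprofiler.enabled=false on the command line -->
    <entry name="profiler.enabled" value="false"/>
</properties>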

Out of memory issue in JDK but works fine in OpenJDK, Java application deployed on JBoss 5.1

I have deployed my Java application on JBoss on Linux 2.6.32.
The machine has 8 GB of memory. When I run the application on OpenJDK using
JAVA_OPTS="$JAVA_OPTS -server -Xms2048m -Xmx2048m -XX:MaxPermSize=700m
-XX:NewRatio=3 -XX:+DisableExplicitGC -XX:+UseParallelOldGC -XX:ParallelGCThreads=4 -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Dofbiz.home=adasdfasdf"
it works fine.
But when I try to run the same on JDK 1.6, it gives me an out of memory error, as below:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 444 bytes for vframeArray::allocate
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (allocation.inline.hpp:44), pid=11749, tid=707259248
#
# JRE version: 6.0_32-b05
# Java VM: Java HotSpot(TM) Server VM (20.7-b02 mixed mode linux-x86 )
--------------- T H R E A D ---------------
Current thread (0x2a5b5000): JavaThread "main" [_thread_in_Java, id=11768, stack(0x2a22e000,0x2a27f000)]
Stack: [0x2a22e000,0x2a27f000], sp=0x2a27cd50, free space=315k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x7257e0]
How can I make the application run using JDK 1.6?
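One hedged observation on the report above: the failing VM identifies itself as "Java HotSpot(TM) Server VM ... linux-x86", i.e. a 32-bit build, which lines up with the "In 32 bit mode, the process size limit was hit" and "Use 64 bit Java on a 64 bit OS" hints in the log. A quick way to check which build a given installation is (the path is illustrative):

/path/to/jdk1.6/bin/java -version

A 64-bit build reports "64-Bit Server VM" in the last line, as the working OpenJDK presumably does; a 32-bit build reports just "Server VM" or "Client VM".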