ExecutionSetupException: One or more nodes lost connectivity during query - kubernetes

While running a query on Dremio 4.6.1 installed on Kubernetes, we are getting the following error message from Dremio UI:
ExecutionSetupException: One or more nodes lost connectivity during query. Identified nodes were [dremio-executor-2.dremio-cluster-pod.dremio.svc.cluster.local:0].
The dremio-env config has the following settings:
DREMIO_MAX_DIRECT_MEMORY_SIZE_MB=13384
DREMIO_MAX_HEAP_MEMORY_SIZE_MB is not set
We are using workers with 16 GB / 8 cores (10 workers in total)
1 master coordinator with the same config
ZooKeeper with 1 GB / 1 core
Any idea what's causing this behavior?
Tailing the live logs just before a worker crashes shows the following:
An irrecoverable stack overflow has occurred.
Please check if any of your loaded .so files has enabled executable stack (see man page execstack(8))
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f41cdac4fa8, pid=1, tid=0x00007f41dc2ed700
#
# JRE version: OpenJDK Runtime Environment (8.0_262-b10) (build 1.8.0_262-b10)
# Java VM: OpenJDK 64-Bit Server VM (25.262-b10 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C 0x00007f41cdac4fa8
#
# Core dump written. Default location: /opt/dremio/core or core.1
#
# An error report file with more information is saved as:
# /tmp/hs_err_pid1.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
[error occurred during error reporting , id 0xb]
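A common cause of this on Kubernetes is the JVM's total footprint (heap + direct memory + metaspace and stacks) exceeding the pod's memory limit, so the executor is killed mid-query. A minimal dremio-env sizing sketch for a 16 GB worker; the numbers are illustrative assumptions, not verified recommendations:

```shell
# dremio-env: cap both pools explicitly so heap + direct memory stays
# well under the 16 GB container limit (values are assumptions).
# When DREMIO_MAX_HEAP_MEMORY_SIZE_MB is unset, Dremio picks its own
# heap size on top of the configured direct memory.
DREMIO_MAX_HEAP_MEMORY_SIZE_MB=4096
DREMIO_MAX_DIRECT_MEMORY_SIZE_MB=10240
```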

Related

JBoss WildFly 15.0.1 Final not starting on ubuntu 14.04 vServer with 2 GB: insufficient memory for JRE

I am trying to get JBoss WildFly 15.0.1.Final to start on a rather small Ubuntu 14.04 vServer with only 2 GB of RAM.
I have tried to start WildFly many times without success; the JVM seems to require far more RAM than I had ever expected.
Here's the console output:
root@t2g55:~# service wildfly start
* Starting WildFly Application Server wildfly
* WildFly Application Server failed to start within the timeout allowed.
root@t2g55:~# cat /var/log/wildfly/console.log
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /opt/wildfly
JAVA: /usr/bin/java
JAVA_OPTS: -server -Xms768m -Xmx1536m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
=========================================================================
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000a0000000, 536870912, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 536870912 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /opt/wildfly-15.0.1.Final/hs_err_pid1379.log
1379
root@t2g55:~# free
             total       used       free     shared    buffers     cached
Mem:       2097152     258748    1838404         64          0      38644
-/+ buffers/cache:      220104    1877048
Swap:      2097152          0    2097152
root@t2g55:~# java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-8u222-b10-1~14.04-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
root@t2g55:~#
As you can see, I specified JAVA_OPTS: -server -Xms768m -Xmx1536m ..., which I thought should suffice for a WildFly server to start. Please note that standalone.xml has a datasource defined for a MySQL DB.
Here's the start of the hs_err dump log:
root@t2g55:~# cat /opt/wildfly-15.0.1.Final/hs_err_pid1379.log
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 536870912 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2757), pid=1379, tid=0x00007f62486c6700
#
# JRE version: (8.0_222-b10) (build )
# Java VM: OpenJDK 64-Bit Server VM (25.222-b10 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
--------------- T H R E A D ---------------
.
.
.
QUESTION:
Can this be solved with this amount of memory, or do I simply have too little RAM?
What else could I try?
I don't really want my provider to keep bumping up the memory only to find there's some other problem with Java, the JVM, or anything else...
Thanks
EDIT 1:
The vServer provider uses OpenVZ for its virtualization.
Info: they just bumped me up to 4 GB, and I got JBoss up and running once. After a reboot, WildFly again refuses to start: same thing, not enough memory (even though I switched between Java 8 and Java 11 runtimes).
The command used to start JBoss WildFly is sh /opt/wildfly/bin/standalone.sh &; standalone.xml appears to be OK. I removed the ExampleDS; three entries are commented out.
It was indeed a server virtualization issue with OpenVZ.
Quote from the provider (translated from German):
Hi,
the problem was with the user_beancounters, more precisely with privvmpages, which were set too low.
https://wiki.openvz.org/UBC_secondary_parameters#privvmpages
Best regards,
Mr X
I don't know exactly what he did in detail, but that resolved it.
I now run on a 2GB machine without any problems and memory usage of mysqld + standalone.sh (WildFly + webapp) is around 800 MB.
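For reference, the privvmpages accounting the provider adjusted can be inspected from inside the guest; a sketch, assuming an OpenVZ container (the container ID and values in the host-side command are hypothetical):

```shell
# Inside the guest: show the privvmpages barrier/limit and its failcnt
# column (a non-zero failcnt means allocations were refused, which the
# JVM reports as "Cannot allocate memory", errno=12).
grep -E 'uid|privvmpages' /proc/user_beancounters

# On the host, the provider can raise the limit (values are in 4 KB
# pages; the container ID and numbers here are hypothetical):
# vzctl set 101 --privvmpages 1048576:1153434 --save
```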

Out of memory issue in jdk but works fine in openjdk, java application deployed on jboss 5.1

I have deployed my Java application on JBoss on Linux 2.6.32.
The machine has 8 GB of memory. When I run the application on OpenJDK using
JAVA_OPTS="$JAVA_OPTS -server -Xms2048m -Xmx2048m -XX:MaxPermSize=700m -XX:NewRatio=3 -XX:+DisableExplicitGC -XX:+UseParallelOldGC -XX:ParallelGCThreads=4 -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Dofbiz.home=adasdfasdf"
it works fine.
But when I try to run the same on JDK 1.6, it gives me an out of memory error as below:
There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 444 bytes for vframeArray::allocate
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (allocation.inline.hpp:44), pid=11749, tid=707259248
#
# JRE version: 6.0_32-b05
# Java VM: Java HotSpot(TM) Server VM (20.7-b02 mixed mode linux-x86 )
--------------- T H R E A D ---------------
Current thread (0x2a5b5000): JavaThread "main" [_thread_in_Java, id=11768, stack(0x2a22e000,0x2a27f000)]
Stack: [0x2a22e000,0x2a27f000], sp=0x2a27cd50, free space=315k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x7257e0]
How can I make the application run using JDK 1.6?
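The hs_err header ("linux-x86") shows a 32-bit Server VM: 2048 MB of heap plus 700 MB of PermGen already consumes most of the roughly 3 GB of user address space a 32-bit process gets, so even a 444-byte native allocation can fail. A sketch of the two usual ways out, assuming a 64-bit JDK is available on the box (paths and option values below are illustrative assumptions):

```shell
# Option 1: run the same settings on a 64-bit JDK 1.6 so the 32-bit
# address space is no longer the bottleneck (path is hypothetical):
export JAVA_HOME=/usr/java/jdk1.6.0_32-amd64
export PATH="$JAVA_HOME/bin:$PATH"

# Option 2: stay 32-bit but shrink the Java-managed pools to leave
# room for native allocations (values are assumptions):
JAVA_OPTS="$JAVA_OPTS -server -Xms1024m -Xmx1024m -XX:MaxPermSize=256m"
```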

DDS 9th topic causes a crash

I am using DDS (more specifically RTI DDS) for a Java application. I am creating each topic for my DDS implementation one by one in code, so that I can test each one with a DDS spy after it is written. When I wrote the 8th topic, everything worked fine. However, when I then wrote the 9th topic, nothing seemed to happen and the program appeared to stop somewhere. I debugged, and after a lot of stepping through code, got this printed to the console.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x01349a58, pid=16109, tid=2429123440
#
# JRE version: Java(TM) SE Runtime Environment (7.0_65-b17) (build 1.7.0_65-b17)
# Java VM: Java HotSpot(TM) Server VM (24.65-b04 mixed mode linux-x86 )
# Problematic frame:
# V [libjvm.so+0x48aa58] java_lang_String::utf8_length(oopDesc*)+0x58
#
# Core dump written. Default location: /home/foo/core or core.16109
#
# An error report file with more information is saved as:
#
# /home/foo/corehs_err_pid16109.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
#
[D0000|ENABLE]COMMENDSrReaderService_new:!create worker-specific object
[D0000|ENABLE]PRESPsService_enable:!create srr (strict reliable reader)
[D0000|ENABLE]DDS_DomainParticipantService_enable:!enable publish/subscribe service
[D0000|ENABLE]DDS_DomainParticipant_enableI:!enable service
I am not sure why this happened all of a sudden when I created my 9th topic, yet with only 8 it works great. I have tried to increase my resource limits as well, and I get an "immutable QoS policy" error. Does anyone know why this error occurs, in terms of why my 9th topic causes a failure, and how to fix the problem? I am running my application on 32-bit RHEL 6.6.
I found that this happens because the maximum number of objects per thread is set too low by default in the QoS. To change this setting, you must do the following before your first topic is created.
// Read the current factory QoS, raise the per-thread object limit,
// and write it back. This must run before the first participant or
// topic is created, while the policy is still mutable.
DomainParticipantFactoryQos factoryQos = new DomainParticipantFactoryQos();
DomainParticipantFactory.TheParticipantFactory.get_qos(factoryQos);
factoryQos.resource_limits.max_objects_per_thread = 2048;
DomainParticipantFactory.TheParticipantFactory.set_qos(factoryQos);
This sets the limit before DDS starts, while the QoS policy is still editable rather than immutable.

Oracle sql developer does not start in fedora 20

I downloaded the latest RPM version of Oracle SQL Developer from the Oracle website and installed it on my Fedora 20. After running SQL Developer, the program does not start, with the following result:
[mohsen@localhost bin]$ sqldeveloper
Oracle SQL Developer
Copyright (c) 1997, 2014, Oracle and/or its affiliates. All rights reserved.
LOAD TIME : 655#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f3d8abb5910, pid=18459, tid=139904211367680
#
# JRE version: Java(TM) SE Runtime Environment (7.0_67-b01) (build 1.7.0_67-b01)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C[thread 139904223835904 also had an error]
0x00007f3d8abb5910
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /tmp/hs_err_pid18459.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
#
/opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 1193: 18459 Aborted (core dumped) ${JAVA} "${APP_VM_OPTS[@]}" ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} "${APP_APP_OPTS[@]}"
I am using SQL Developer 4.0.3.16 with Java 7u67 on Fedora 20.

log file automatically created when open eclipse

I am trying to open Eclipse, but it doesn't respond.
It automatically creates a log file (hs_err_pid#no.log).
I tried downloading another copy of Eclipse, but the same problem occurs.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# Internal Error (signature.cpp:53), pid=3808, tid=1460
# fatal error: expecting (
#
# JRE version: (7.0_51-b13) (build )
# Java VM: Java HotSpot(TM) Client VM (24.51-b03 mixed mode windows-x86 )
# Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
#
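The "expecting (" assertion in signature.cpp means the VM read a malformed method signature, which usually points to a corrupted class file or a corrupted/mismatched JDK install rather than an Eclipse setting. One low-risk thing to try (an assumption, not a confirmed fix) is reinstalling the JDK and pinning Eclipse to it in eclipse.ini; the path below is hypothetical:

```ini
-vm
C:\Program Files\Java\jdk1.7.0_51\bin\javaw.exe
-vmargs
-Xmx512m
```

Note that -vm and its path must be on separate lines and appear before -vmargs.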