I am using wildfly-10.1.0.Final.zip, started in domain mode on CentOS 6.6. Somehow it shuts down unexpectedly, with the error below:
2017-05-15 21:01:20,103 INFO [org.jboss.as.host.controller] (Thread-2) WFLYHC0181: Host Controller shutdown has been requested via an OS signal
What may cause this error? Thank you.
These are the likely causes:
Ctrl+C, kill -HUP, kill -INT, kill -TERM, or a System.exit() call
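If it is not obvious which of these happened, one way to find the sender the next time it occurs (assuming auditd is available, as it normally is on CentOS 6; the key name wildfly-shutdown is arbitrary) is to audit the kill syscall and then inspect the audit log:
auditctl -a exit,always -F arch=b64 -S kill -k wildfly-shutdown
ausearch -k wildfly-shutdown -i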
I tried following the basic instructions to run a node from https://geode.apache.org/docs/guide/114/getting_started/15_minute_quickstart_gfsh.html, but I am getting the following issue:
gfsh>start locator --name=locator1
Starting a Geode Locator in /home/thiago/geode/locator1...
........
Locator in /home/thiago/geode/locator1 on 192.168.50.225[10334] as locator1 is currently online.
Process ID: 28137
Uptime: 5 seconds
Geode Version: 1.14.0
Java Version: 16.0.2
Log File: /home/thiago/geode/locator1/locator1.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /home/thiago/apache-geode-1.14.0/lib/geode-core-1.14.0.jar:/home/thiago/apache-geode-1.14.0/lib/geode-dependencies.jar
Unable to auto-connect (Security Manager may be enabled). Please use "connect --locator=192.168.50.225[10334]" to connect Gfsh to the locator.
Failed to connect; unknown cause: Exception caused JMX Manager startup to fail because: 'java.rmi.server.ExportException: Port already in use: 1099; nested exception is:
java.net.BindException: Failed to create server socket on 127.0.1.1[1099]'
Before starting I made sure nothing was running on port 1099, but after running the command I found that what was using port 1099 was the spawned java process itself:
❯ lsof -i:1099
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 28137 thiago 206u IPv6 262648 0t0 TCP *:rmiregistry (LISTEN)
Sounds like the issue was that I was using Java 16, which does not seem to be supported. Using Java 1.8 fixes it.
The same thing still happens with the latest Geode and JDK 17. JDK 11 is the latest Java release that still works.
First try downgrading or switching to a supported Java version (1.8 or 11).
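To double-check which Java gfsh will pick up, and to switch versions on distributions that use the alternatives mechanism (whether yours does is an assumption), something like:
java -version
sudo update-alternatives --config java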
Separately, the locator needs port 10334 to start and port 1099 for the JMX manager connection.
To check port availability:
lsof -i:<port-number>
Then take the PID from the above result to free up the port.
To free up the port:
sudo kill PID
If that doesn't work, then try
sudo kill -2 PID
or
sudo kill -1 PID
or
sudo kill -9 PID
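If you prefer a one-liner, lsof's -t flag prints just the PID, so something like the following works (again, escalate to -9 only if the gentler signals fail):
sudo kill $(sudo lsof -t -i:1099)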
I am trying to start up Zookeeper via the CLI with the command:
bin/zookeeper-server-start.sh ../config/zookeeper.properties
And it hums along for a second with what seems to be correct output until it says this:
INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
and then the below loops indefinitely until I exit:
[2018-08-10 15:07:48,223] INFO Accepted socket connection from /172.31.39.32:46374 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2018-08-10 15:07:48,228] WARN Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running (org.apache.zookeeper.server.NIOServerCnxn)
[2018-08-10 15:07:48,228] INFO Closed socket connection for client /172.31.39.32:46374 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
This is a single server and I believe a single node test server, so there isn't a quorum or other pieces running. My zookeeper config is basic, it only contains this:
dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
The weird thing is, my zookeeper had been running fine, and I had made NO changes to the config. I pulled it down for a quick restart while trying to fix something else, and now it won't budge. I've checked, and nothing else is running on port 2181.
I see this question has been asked several times with no answers, any ideas?
This might be happening because of some corruption in the ZooKeeper data. You should not set dataDir to /tmp/*; if your machine purges data from /tmp, it will be difficult for ZooKeeper to restore its state on restart. If you check the ZooKeeper logs, you should see some kind of exception there.
Since you mentioned this ZooKeeper instance is for test purposes only, set dataDir to anything outside /tmp and try restarting.
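For example, keeping the rest of the config from the question and only moving dataDir to a persistent location (the exact path is just an illustration):
dataDir=/var/lib/zookeeper
clientPort=2181
maxClientCnxns=0
If the existing data under /tmp/zookeeper looks corrupted, moving it aside before restarting gives ZooKeeper a clean slate, which is usually acceptable for a test instance.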
I'm using Confluent Platform. ZooKeeper shows as active when I check its status, but when I try to start Kafka with confluent it says ZooKeeper is down.
$ sudo service zookeeper status
Redirecting to /bin/systemctl status zookeeper.service
● zookeeper.service - Zookeeper
Loaded: loaded (/etc/systemd/system/zookeeper.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2017-08-08 17:25:34 PDT; 16h ago
Docs: http://kafka.apache.org/documentation.html
Process: 3774 ExecStop=/var/www/confluent/bin/zookeeper-server-stop (code=exited, status=1/FAILURE)
Main PID: 3785 (java)
CGroup: /system.slice/zookeeper.service
└─3785 java -Xmx512M -Xms512M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Xloggc:/var/log...
zookeeper[3785]: [2017-08-08 17:26:09,005] INFO Processed session termination for sessionid: 0x15dc460fd0c0000 (org.apache.zooke...Processor)
zookeeper[3785]: [2017-08-08 17:26:39,000] INFO Expiring session 0x15dc4364baf0004, timeout of 60000ms exceeded (org.apache.zook...perServer)
zookeeper[3785]: [2017-08-08 17:26:39,000] INFO Expiring session 0x15dc4364baf0002, timeout of 60000ms exceeded (org.apache.zook...perServer)
zookeeper[3785]: [2017-08-08 17:26:39,000] INFO Expiring session 0x15dc4364baf0003, timeout of 60000ms exceeded (org.apache.zook...perServer)
zookeeper[3785]: [2017-08-08 17:26:39,001] INFO Processed session termination for sessionid: 0x15dc4364baf0004 (org.apache.zooke...Processor)
zookeeper[3785]: [2017-08-08 17:26:39,002] INFO Processed session termination for sessionid: 0x15dc4364baf0002 (org.apache.zooke...Processor)
zookeeper[3785]: [2017-08-08 17:26:39,002] INFO Processed session termination for sessionid: 0x15dc4364baf0003 (org.apache.zooke...Processor)
zookeeper[3785]: [2017-08-09 09:56:26,711] INFO Accepted socket connection from /127.0.0.1:46446 (org.apache.zookeeper.server.NI...xnFactory)
zookeeper[3785]: [2017-08-09 09:59:14,796] WARN Exception causing close of session 0x0 due to java.io.IOException: Len error -72...erverCnxn)
zookeeper[3785]: [2017-08-09 09:59:14,796] INFO Closed socket connection for client /127.0.0.1:46446 (no session established for...erverCnxn)
Hint: Some lines were ellipsized, use -l to show in full.
$ confluent start kafka
Starting zookeeper
|Zookeeper failed to start
zookeeper is [DOWN]
Cannot start Kafka, Zookeeper is not running. Check your deployment
This is because ZooKeeper is already running. You can check the process with
ps aux | grep zookeeper
and kill the process manually; then it will work.
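Alternatively, pkill can match the ZooKeeper main class directly (assuming the standard QuorumPeerMain entry point):
sudo pkill -f QuorumPeerMain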
The most common cause for the message you are seeing when running:
confluent start kafka
which informs you that ZooKeeper is down, is that there is another ZooKeeper instance currently running, and the new ZooKeeper instance cannot bind to its required port (by default this port is 2181).
A few options at your disposal for figuring out which other ZooKeeper instance is running when you issue confluent start kafka are:
Run jps to see the running Java processes; ZooKeeper is the process named QuorumPeerMain next to its process ID (sample output below; roughly equivalent to ps xuaww | grep -i zookeeper).
Run lsof -i :2181 to figure out which process is running and has reserved the default ZooKeeper port (2181 in this example, but it might be different on your system).
Try running confluent start kafka again after stopping the above process.
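For illustration, jps output looks something like this (using the Main PID from the systemctl output above; the second PID is made up) — the QuorumPeerMain entry is the ZooKeeper instance you need to stop:
3785 QuorumPeerMain
31001 Jps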
I received the same message. In my case I had not set the $JAVA_HOME variable properly.
You are mixing two installations.
confluent start kafka would depend on you running confluent start zookeeper.
Rather, it seems you already have systemd running ZooKeeper, so you should ideally just configure your server.properties and use the regular kafka-server-start script, and/or create a systemd unit file for Kafka.
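If you go the systemd route, a minimal unit file sketch might look like this (say /etc/systemd/system/kafka.service; the paths are assumptions modelled on the zookeeper.service shown in the question, so adjust them to your install):
[Unit]
Description=Apache Kafka
Requires=zookeeper.service
After=zookeeper.service

[Service]
Type=simple
ExecStart=/var/www/confluent/bin/kafka-server-start /var/www/confluent/etc/kafka/server.properties
ExecStop=/var/www/confluent/bin/kafka-server-stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
After creating it, run systemctl daemon-reload and systemctl start kafka.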
Run confluent log zookeeper and you will be able to see the log for any errors.
There is a high chance ZooKeeper is already running and using port 2181.
Use sudo lsof -i :2181 to see which process is using that port, then kill it and try again, or
run sudo netstat -plten | grep java to see the Java processes and the ports they are on.
Run kill -9 <pid> to kill the process.
I am having issues stopping JBoss. Most of the time when I execute the shutdown, it stops the server in a couple of seconds.
But sometimes it takes forever to stop and I have to kill the process.
Whenever the shutdown takes long, I see that the scheduler was running, and in the logs I see:
2014-07-14 19:19:29,124 INFO [org.springframework.scheduling.quartz.SchedulerFactoryBean] (JBoss Shutdown Hook) Shutting down Quartz Scheduler
2014-07-14 19:19:29,124 INFO [org.quartz.core.QuartzScheduler] (JBoss Shutdown Hook) Scheduler scheduler_$_s608203at1vl07shutting down.
2014-07-14 19:19:29,124 INFO [org.quartz.core.QuartzScheduler] (JBoss Shutdown Hook) Scheduler scheduler_$_s608203at1vl07 paused.
and nothing after that.
Make sure the Quartz scheduler thread and all threads in its thread pool are marked as daemon threads so that they do not prevent the JVM from exiting.
This can be achieved by setting the following Quartz properties respectively:
org.quartz.scheduler.makeSchedulerThreadDaemon=true
org.quartz.threadPool.makeThreadsDaemons=true
While it is safe to mark the scheduler thread as a daemon thread, you should think before you mark your thread pool threads as daemon threads, because when the JVM exits, these "worker" threads can be in the middle of executing logic that you do not want to abort abruptly. If that is the case, you can have your jobs implement the org.quartz.InterruptableJob interface and implement a JVM shutdown hook somewhere in your application that interrupts all currently executing jobs (the list of which can be obtained from the org.quartz.Scheduler API); a sketch follows.
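A minimal sketch of such a hook, assuming the Quartz 2.x API (the class, method, and thread names here are made up; scheduler is whatever Scheduler instance your application already holds):
import java.util.List;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;

public final class QuartzInterruptHook {

    // Register a JVM shutdown hook that interrupts every currently executing job.
    public static void register(final Scheduler scheduler) {
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    List<JobExecutionContext> running = scheduler.getCurrentlyExecutingJobs();
                    for (JobExecutionContext ctx : running) {
                        // Only has an effect if the job implements org.quartz.InterruptableJob.
                        scheduler.interrupt(ctx.getJobDetail().getKey());
                    }
                } catch (SchedulerException e) {
                    // Best effort during shutdown; just log and continue.
                    e.printStackTrace();
                }
            }
        }, "quartz-interrupt-hook"));
    }
}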
I have supervisord installed on my Ubuntu 10.04, and it runs a Java process continuously and is supposed to heal (restart) the process when it somehow dies or crashes.
From htop I send SIGKILL, SIGTERM, SIGHUP, and SIGSEGV signals to that Java process and watch the /etc/logs/supervisord.log file, which says:
08:09:46,182 INFO success: myprogram entered RUNNING state,[...]
08:38:10,043 INFO exited: myprogram (exit status 0; expected)
At 08:38 I killed the process with SIGSEGV. How come it exited with code 0, and why does supervisord not restart it at all?
All my supervisord.conf about this specific program is as follows:
[program:play-9000]
command=play run /var/www/myprogram/ --%%prod
stderr_logfile = /var/log/supervisord/myprogram-stderr.log
stdout_logfile = /var/log/supervisord/myprogram-stdout.log
The process works fine when I launch supervisord; however, it does not get healed.
By the way, any ideas how to start supervisord as a service so that it automatically launches when the whole system reboots?
Try setting autorestart=true. By default, autorestart is set to "unexpected", which means it will only restart a process if it exits with an unexpected exit code, and by default exit code 0 is expected.
http://supervisord.org/configuration.html#program-x-section-settings
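For example, extending the [program:play-9000] section from the question:
[program:play-9000]
command=play run /var/www/myprogram/ --%%prod
autorestart=true
stderr_logfile = /var/log/supervisord/myprogram-stderr.log
stdout_logfile = /var/log/supervisord/myprogram-stdout.log
With autorestart=true, supervisord restarts the process regardless of its exit code, which covers the "exit status 0; expected" case you are seeing.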
You can use the chkconfig program to make sure that supervisor starts on reboot.
$ sudo apt-get install chkconfig
$ chkconfig -l supervisor
supervisor 0:off 1:off 2:on 3:on 4:on 5:on 6:off
You can see that it's enabled for runlevels 2-5 by default when I installed it.
$ man 7 runlevel
for more info on run levels.