Supervisord does not start killed processes - supervisord

I have supervisord installed on Ubuntu 10.04. It runs a Java process continuously and is supposed to heal (restart) the process when it somehow dies or crashes.
From htop I send SIGKILL, SIGTERM, SIGHUP, and SIGSEGV signals to that Java process and watch the /etc/logs/supervisord.log file, which says:
08:09:46,182 INFO success: myprogram entered RUNNING state,[...]
08:38:10,043 INFO exited: myprogram (exit status 0; expected)
At 08:38 I killed the process with SIGSEGV. How come it exited with code 0, and why doesn't supervisord restart it at all?
The relevant part of my supervisord.conf for this specific program is as follows:
[program:play-9000]
command=play run /var/www/myprogram/ --%%prod
stderr_logfile = /var/log/supervisord/myprogram-stderr.log
stdout_logfile = /var/log/supervisord/myprogram-stdout.log
The process runs fine when I launch supervisord, but it does not get healed.
By the way, any ideas on how to start supervisord as a service so that it launches automatically when the whole system reboots?

Try setting autorestart=true. By default, autorestart is set to "unexpected" which means it will only restart a process if it exits with an unexpected exit code. By default, exit code 0 is expected.
http://supervisord.org/configuration.html#program-x-section-settings
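For example, a minimal sketch of the question's program section with the restart behavior made unconditional (only the autorestart line is new; everything else is the original config):
[program:play-9000]
command=play run /var/www/myprogram/ --%%prod
; restart on any exit, expected or not
autorestart=true
stderr_logfile = /var/log/supervisord/myprogram-stderr.log
stdout_logfile = /var/log/supervisord/myprogram-stdout.log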
You can use the chkconfig program to make sure that supervisor starts on reboot.
$ sudo apt-get install chkconfig
$ chkconfig -l supervisor
supervisor 0:off 1:off 2:on 3:on 4:on 5:on 6:off
You can see that it's enabled for runlevels 2-5 by default when I installed it.
$ man 7 runlevel
for more info on run levels.
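If a runlevel you care about shows off, chkconfig's standard enable syntax turns it on:
$ sudo chkconfig supervisor on
On Debian/Ubuntu the native equivalent is update-rc.d, for example sudo update-rc.d supervisor defaults.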

Related

Celery worker exited prematurely on restart using systemd

I'm using Celery with systemd. I noticed that most times on restart, I lose the workers mid-task. From the celery multi documentation, it seems like celery multi stopwait should wait for the tasks to finish.
Got the following error on restart:
Process "ForkPoolWorker-10" pid:16902 exited with "signal 15 (SIGTERM)"
celery.conf
[Unit]
Description=Celery background worker
After=network.target
[Service]
Type=forking
User=celery
Group=celery
WorkingDirectory=/src
ExecStart=celery multi start worker -A main.celery -Q celery --logfile=/data/celery.log --loglevel=info --concurrency=10 --pidfile=/var/run/celery/%%n.pid
ExecStop=celery multi stopwait worker --pidfile=/var/run/celery/%%n.pid
[Install]
WantedBy=multi-user.target
I also read the systemd documentation; systemd should wait at least 90 seconds for the tasks to complete before sending SIGTERM, yet I get this error less than 10 seconds after running the restart command.
What am I doing wrong?
Using celery version: 5.2.2 (dawn-chorus)
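For reference, that 90-second figure is systemd's default stop timeout (DefaultTimeoutStopSec=90s). A minimal sketch of spelling the stop behavior out in the [Service] section; both values below are illustrative assumptions, not taken from the unit above:
[Service]
# illustrative: allow up to 10 minutes for stopwait to drain tasks
TimeoutStopSec=600
# illustrative: SIGTERM only the main process on stop; the rest of
# the control group is SIGKILLed only after the timeout expires
KillMode=mixed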

Unable to start tailscaled service on debian 11 container

Tailscale version 1.22.0
Your operating system & version Debian bullseye 11
Hello sir, could you guide me on how to start tailscaled.service? I got an error message like this:
failed to connect to local tailscaled; it doesn’t appear to be running (sudo systemctl start tailscaled ?)
And when I try to run the command sudo systemctl start tailscaled, I get a different error:
Job for tailscaled.service failed because the control process exited with error code.
See "systemctl status tailscaled.service" and "journalctl -xe" for details.
Thanks
The journalctl output it suggests to run would show the error it is experiencing. I'd recommend running:
journalctl -u tailscaled --since="2 hours ago"
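If the failure repeats, one way to catch the error live is to follow the unit's log while retrying the start (standard journalctl/systemctl options):
# in one terminal, follow the log as new lines arrive:
journalctl -u tailscaled -f
# in another terminal, retry the start:
sudo systemctl restart tailscaled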

Supervisord (exit status 2; not expected) ubuntu

I'm trying to run Celery with Supervisord on Ubuntu, but am getting:
INFO exited: celery (exit status 2; not expected)
INFO spawned: 'celery' with pid 15517
INFO gave up: celery entered FATAL state, too many start retries too quickly
This is the Supervisord script:
# cd into the directory and activate the virtual environment
celery -A [APP_NAME].celery worker -E -l info --concurrency=2
If I run this script manually, Celery starts up without any issues, but running sudo supervisorctl start celery fails with the error messages above.
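For context, supervisord does not launch programs through a login shell, so the activate step in a wrapper script is a common sticking point. A hypothetical sketch that points supervisord straight at the virtualenv's celery binary instead (all paths below are assumptions, not from the question):
[program:celery]
; hypothetical paths -- adjust to the real project layout
directory=/home/user/myproject
command=/home/user/myproject/venv/bin/celery -A [APP_NAME].celery worker -E -l info --concurrency=2
autostart=true
autorestart=true
Invoking the venv's own celery entry point makes activation unnecessary, because the interpreter path is baked into the script's shebang.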

WildFly10 shutdown error WFLYHC0181

I am using wildfly-10.1.0.Final.zip, started in domain mode on CentOS 6.6. Somehow it shuts down unexpectedly, with the error below:
2017-05-15 21:01:20,103 INFO [org.jboss.as.host.controller] (Thread-2) WFLYHC0181: Host Controller shutdown has been requested via an OS signal
What may cause this error? Thank you.
These are the likely causes: Ctrl+C, kill -HUP, kill -INT, kill -TERM, or a System.exit() call.

Failed to start puppetserver Service

While trying to run a puppet update from a node:
sudo /opt/puppetlabs/bin/puppet agent -t
I get an error:
Error: Could not retrieve catalog; skipping run
Error: Could not send report: Connection refused - connect(2) for "puppet" port 8140
Other sources indicate this is likely a problem with the puppetserver service and suggest rebooting the server. Restarting didn't help, and when I try to restart the service I get a failure:
~$ sudo service puppetserver restart
Job for puppetserver.service failed because the control process exited with error code. See "systemctl status puppetserver.service" and "journalctl -xe" for details.
I've looked at these logs, and as a puppet/linux noob, I'm not sure what to do next.
systemctl status puppetserver.service
● puppetserver.service - puppetserver Service
Loaded: loaded (/lib/systemd/system/puppetserver.service; enabled; vendor preset: enabled)
Active: activating (start-post) since Fri 2016-09-02 15:54:26 PDT; 2s ago
Process: 22301 ExecStartPre=/usr/bin/install --directory --owner=puppet --group=puppet --mode=775 /var/run/puppetlabs/puppetserver (code=exited
Main PID: 22306 (java); Control PID: 22307 (bash)
Tasks: 17
Memory: 335.7M
CPU: 5.535s
CGroup: /system.slice/puppetserver.service
├─22306 /usr/bin/java -Xms6g -Xmx6g -XX:MaxPermSize=256m -XX:OnOutOfMemoryError=kill -9 %p -Djava.security.egd=/dev/urandom -cp /opt/p
└─control
├─22307 /bin/bash /opt/puppetlabs/server/apps/puppetserver/ezbake-functions.sh wait_for_app
└─22331 sleep 1
Sep 02 15:54:26 puppet systemd[1]: Starting puppetserver Service...
Sep 02 15:54:26 puppet java[22306]: OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
puppet version 4.6.1
The puppet master communicates with the other node using port number 8140.
I don't think a restart will help, since this looks like a connection issue between the server and the node.
Please try the following.
First, make sure that the puppet master is actually listening on port 8140. Run the following command on the puppetmaster:
netstat -ntlp | grep 8140
This command should return something like this:
tcp 0 0 0.0.0.0:8140 0.0.0.0:* LISTEN 1783/puppetmaster
If you don't get similar output, your puppetmaster is not listening and therefore cannot compile catalogs for the node.
Try checking the puppet master log at /var/log/puppetmaster.log
Check that the node can communicate with the puppetmaster on the relevant port. You can check this quickly with the telnet command. Run this on your node:
telnet <puppetmaster-IP-or-DNS-name> 8140
You should get something like this:
Connected to <puppet-master-IP/DNS-name>
Escape character is '^]'.
If you don't get this output, something is blocking you from accessing the puppetmaster; try opening the port in your firewall.
If you're still stuck, try using the --debug flag for verbose output and edit your question.
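As a concrete example of the firewall step above, assuming the Debian/Ubuntu environment suggested elsewhere in the thread and the ufw front end (an assumption; iptables or firewalld setups need their own commands), run on the puppetmaster:
sudo ufw allow 8140/tcp
sudo ufw status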
It could be one of two things: (1) you have configured more memory for the Puppet server than your machine actually has, or (2) you installed both apt-get install puppetserver and apt-get install puppet.
If you get a 'Failed to start puppet.service: Unit not found' error on the slave machine while connecting to the Puppet master, close PuTTY, then open it again and reconnect. The issue won't recur when starting PuTTY on the slave.
The error can also occur because there is not enough RAM. To fix it, open the Puppet server configuration file (on Debian/Ubuntu the equivalent file is /etc/default/puppetserver):
sudo nano /etc/sysconfig/puppetserver
And reduce the amount of allocated RAM for the Puppet server (for example, I specified 512m instead of 2g):
JAVA_ARGS="-Xms512m -Xmx512m"
Now let’s start the Puppet server:
sudo systemctl start puppetserver
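If the change took effect, standard systemctl usage will confirm the service is up:
sudo systemctl status puppetserver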