How can I program an automatic restart of a process in a systemd cgroup?

I have an apache2 systemd service running on my server. It has a bunch of subprocesses grouped in a cgroup. Under heavier workload it can happen that one of these processes fails; in particular it is /usr/sbin/fcgi-pm (I am running a genome browser, GBrowse2).
I want the process to restart itself if it fails. Restarting the Apache2 service fixes it, but I have to do that manually. Where is the setting in Apache2/systemd that lets me order a restart on failure of a subprocess?
Thanks in advance guys.
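A minimal sketch of one approach, assuming the stock apache2.service on a Debian/Ubuntu-style system: a drop-in that restarts the whole unit on failure. One caveat: systemd reacts only when the unit's main process exits; a crashed fcgi-pm child is normally respawned by Apache itself, so the drop-in helps only when the whole service goes down.

# /etc/systemd/system/apache2.service.d/restart.conf (assumed path)
[Service]
Restart=on-failure
RestartSec=5s

# reload and apply:
#   sudo systemctl daemon-reload
#   sudo systemctl restart apache2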

Related

Stop mosquitto autorestart on CentOS 7

I'm trying to stop the Mosquitto broker service on a centos 7 server.
I've stopped the service with
sudo systemctl stop mosquitto.service
then I've disabled it with
sudo systemctl disable mosquitto.service
with ps I still get
/usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf
If I kill it, it restarts automatically, and even after a reboot it's still running.
The process is owned by another user (admin).
How can I stop it definitively?
This has nothing to do with mosquitto but with how systemd manages its services.
systemctl disable only affects the autostart of the service; a disabled service will still get started if another service depends on it.
Let's say you have a service mqtt-client that depends on mosquitto, e.g. via Wants=mosquitto. Every time mqtt-client is started, the mosquitto service gets started as well, even if it is disabled.
So one way is to prevent mqtt-client (and all the other services which depend on mosquitto) from starting as well, or to remove the dependency.
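For illustration, such a dependent unit might look like this; the unit name and dependency here are assumptions for the example:

# mqtt-client.service (hypothetical)
[Unit]
Description=Example MQTT client
Wants=mosquitto.service    # pulls mosquitto in whenever this unit starts
After=mosquitto.service

[Service]
ExecStart=/usr/local/bin/mqtt-client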
Another approach is to totally prevent the service from being loaded by masking it:
systemctl mask mosquitto - this way you can neither start it manually nor can another service start it.
In the long run I would recommend reworking your dependencies, since masking just creates a symlink to /dev/null: nothing happens when the service gets loaded, and you cannot start it yourself without unmasking it first.
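In practice the sequence might look like this; systemctl list-dependencies --reverse additionally shows which units pull mosquitto in:

# which units depend on mosquitto?
systemctl list-dependencies --reverse mosquitto.service

# prevent it from being started by anything, then stop the running instance
sudo systemctl mask mosquitto.service
sudo systemctl stop mosquitto.service

# undo later with:
sudo systemctl unmask mosquitto.service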

Systemd - always have a service running and reboot if service stops more than X times

I need to have a systemd service which runs continuously. The system in question is an embedded Linux built with Yocto.
If the service stops for any reason (either failure or normal completion), it should be restarted automatically.
If it is restarted more than X times, the system should reboot.
What options are there for this? I can think of the following two, but both seem suboptimal:
1) a cron job which literally does the check above and keeps the retry count somewhere in /tmp or another tmpfs location
2) having the service itself track the number of times it has been started (again in some tmpfs location) and reboot if necessary; systemd would just have to keep trying to start the service whenever it's not running
edit: as suggested by an answer, I modified the service to use StartLimitAction as given below. It causes the unit to restart correctly, but at no point does it reboot the system, even if I kill the script repeatedly:
[Unit]
Description=myservice system
[Service]
Type=simple
WorkingDirectory=/home/root
ExecStart=/home/root/start_script.sh
Restart=always
StartLimitAction=reboot
StartLimitIntervalSec=600
StartLimitBurst=5
[Install]
WantedBy=multi-user.target
This in your service file should do something very close to your requirements:
[Service]
Restart=always
[Unit]
StartLimitAction=reboot
StartLimitIntervalSec=60
StartLimitBurst=5
It will restart the service if it stops, except if there are more than 5 restarts in 60 seconds: in that case it will reboot.
You may also want to look at the WatchdogSec value, but this software watchdog functionality requires support from the service itself (very easy to add, though; see the documentation for WatchdogSec).
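A minimal sketch of what that support could look like, assuming the service is the shell script from the question; NotifyAccess=all is needed so systemd accepts the watchdog pings, since systemd-notify runs as a short-lived child of the script:

[Service]
Type=notify
NotifyAccess=all    # accept sd_notify messages from children of the script
WatchdogSec=30
Restart=on-failure
ExecStart=/home/root/start_script.sh

# and inside start_script.sh:
systemd-notify --ready
while true; do
    systemd-notify WATCHDOG=1   # must arrive at least once per WatchdogSec
    sleep 10                    # ... do the actual work here ...
done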
My understanding is that the Restart= line should be in [Service], as in the example, but the StartLimit*= lines should be in [Unit].
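Merging that answer into the unit file from the question gives a sketch like the one below. On systemd v229 and later the StartLimit* settings belong in [Unit]; having them in [Service], as in the original file, is a plausible reason the reboot never fired:

[Unit]
Description=myservice system
# rate-limit counters and the resulting action belong in [Unit]
StartLimitIntervalSec=600
StartLimitBurst=5
StartLimitAction=reboot

[Service]
Type=simple
WorkingDirectory=/home/root
ExecStart=/home/root/start_script.sh
Restart=always

[Install]
WantedBy=multi-user.target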

How to properly check if a slow-starting Java Tomcat application is running so you can restart it?

I want to implement automatic service restarting for several Tomcat applications, applications that take a lot of time to start, even over 10 minutes.
Mainly, the test would check whether the application responds on HTTP with a valid response.
Still, this is not the problem; the problem is how to prevent this uptime check from failing while the service is under maintenance, scheduled or not.
I don't want this service to be started if it was stopped manually with "service appname stop".
I considered creating .maintenance files on stop or restart actions of the daemon and checking for them before triggering an automated restart.
So far the only problem that I wasn't able to solve properly was how to detect that the app has finished starting up, so the .maintenance file can be removed and the automatic restart works properly.
Note that an init.d script is not supposed to wait, so the daemon should start a background command that solves this problem.
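A sketch of the scheme being described; the URL, paths, and service name are assumptions for illustration. A cron-driven check skips restarts while a .maintenance flag exists, and the init script's start action spawns a background waiter that clears the flag once the app answers:

#!/bin/sh
# check_app.sh - run periodically from cron
APP_URL="http://localhost:8080/health"       # assumed health endpoint
MAINT_FILE="/var/run/appname.maintenance"    # assumed flag path

# maintenance in progress: skip the check entirely
[ -f "$MAINT_FILE" ] && exit 0

# restart only when the app stops answering with a valid response
curl -fsS --max-time 10 "$APP_URL" >/dev/null || service appname restart

# In the init script's start/restart actions, set the flag and spawn a
# background waiter (so the init script itself does not block):
#
#   touch /var/run/appname.maintenance
#   ( until curl -fsS --max-time 10 http://localhost:8080/health >/dev/null; do
#         sleep 15
#     done
#     rm -f /var/run/appname.maintenance ) &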

Using Monit to monitor a custom daemon

I created a daemon using the System_Daemon PEAR package. How do I use Monit to restart the daemon when it fails?
I have the following code that I was going to place in the Monit config file:
check process merge with pidfile /var/www/merge/merge.pid
  group 1000
  start program = "/etc/init.d/merge start"
  stop program = "/etc/init.d/merge stop"
  # restart the daemon when it stops answering on port 80
  if failed host IPADDRESS port 80
  then restart
  # give up monitoring after 5 restarts within 5 monitoring cycles
  if 5 restarts within 5 cycles then timeout
Is that the right way to monitor a custom daemon?
I'd say it looks quite correct.
Since you're asking, I guess you are facing issues; can you elaborate?

Upstart initctl start|restart on Ubuntu

When using Upstart on Ubuntu, how do I issue a command for starting a job if it is not running and restarting it if it is already running? When deploying an app to a new node, the job is not defined yet.
initctl restart JOB complains if not already running
initctl start JOB complains if already running.
I can script it to do
initctl start JOB
initctl restart JOB
But it doesn't seem to be the nicest thing to do.
I was in front of the same problem.
Short of a straight "lazy stop-then-start" command built into initctl, we have to script.
Invoke start, and fall back to restart if start fails:
initctl start JOB || initctl restart JOB
This script is probably not the answer both of us were looking for but it is short enough to mention it.
As long as the service works nicely, it will do the trick.
When the service fails, this script fails twice. For example, if the service was stopped and actually fails to start, it will also fail to restart.
Definitely looking for an improvement to this.
I hope this helps.
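One possible improvement, sketched here purely as an illustration (not from the original thread): query the job's state first, then pick start or restart accordingly.

#!/bin/sh
# restart-or-start: idempotent (re)start of an Upstart job,
# job name taken from the first argument
JOB="$1"

if initctl status "$JOB" 2>/dev/null | grep -q "start/running"; then
    initctl restart "$JOB"   # already running: restart in place
else
    initctl start "$JOB"     # stopped or never started: start it
fi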
I also tried the "start or restart" method that hmalphettes suggested, but ran into trouble: with that approach, updates to the Upstart script would not be applied. Instead I use this, which works as I would expect:
sudo stop JOB || true && sudo start JOB
This basically reads 'Stop the job if it's running, then start it.'
sudo service JOB restart
The service command was patched in Ubuntu to make it work the same on Upstart as it does in the most common cases on sysvinit.
initctl restart JOB
has some unexpected effects and in general should be studied carefully before using. It is mostly there so you can restart a job without re-loading the job definition, which is a really uncommon case.