systemd not executing ExecStop script - CentOS

I have created a service (named develop) using systemd. Following is the content of my develop unit file:
[Unit]
Description=Develop Manager Service
[Service]
Type=forking
PIDFile=/home/nayasa/data/var/run/developPid
User=root
Group=root
ExecStartPre=/bin/bash /home/nayasa/control_scripts/develop_startPre.sh
ExecStart=/bin/bash /home/nayasa/control_scripts/develop_start.sh
ExecStop=/bin/bash /home/nayasa/control_scripts/develop_stop.sh
[Install]
WantedBy=multi-user.target
My develop.service forks multiple processes during runtime.
Whenever I run systemctl stop develop.service, systemd stops all processes in the cgroup of my develop service, whereas the develop_stop script that I have provided only kills the main process using the pid from the pidfile. I want to stop only the main process. It seems to me that systemd is not using my stop script. How do I force systemd to execute my stop script to stop the service rather than killing all processes in the cgroup? FYI - I know that with the KillMode option I can direct systemd to kill only the main process and leave the other processes alone, but I want to know why my script is not being executed.
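For reference, the KillMode option mentioned here is set in the [Service] section and would look like this (shown only to be explicit about what is being ruled out):
KillMode=process
With that setting, systemd signals only the main process on stop and leaves the rest of the unit's control group alone.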

It's a little weird to expect orphaned processes to persist after stopping a service. You would be left with a system that's in an unknown state. What would happen if you started the service again?
I think what you probably want is more complicated than a single service.
Let's say you wanted develop.service to launch proc1 and proc2. You want systemctl stop develop.service to kill proc1 but not proc2. In this case, you still need something to manage proc2, otherwise you have a rogue orphaned process that is unmanaged and unmonitored. The answer is to use another service.
Instead, try making two services. develop.service would launch proc1, possibly using your scripts. Then add a Wants=proc2.service to your [Unit] section. proc2.service would be responsible for proc2.
This means systemctl start develop.service will launch proc1 and proc2. Meanwhile systemctl stop develop.service will only kill proc1. proc2 can still be stopped/monitored by inspecting proc2.service.
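As a rough sketch of that layout (the proc2.service name, its binary path, and the descriptions are made up for illustration; your develop scripts stay as they are):
develop.service
[Unit]
Description=Develop Manager Service (proc1)
Wants=proc2.service
[Service]
Type=forking
PIDFile=/home/nayasa/data/var/run/developPid
ExecStartPre=/bin/bash /home/nayasa/control_scripts/develop_startPre.sh
ExecStart=/bin/bash /home/nayasa/control_scripts/develop_start.sh
ExecStop=/bin/bash /home/nayasa/control_scripts/develop_stop.sh
[Install]
WantedBy=multi-user.target
proc2.service
[Unit]
Description=proc2, managed separately from develop.service
[Service]
ExecStart=/opt/develop/bin/proc2
With this split, systemctl start develop.service pulls in proc2.service via Wants=, while systemctl stop develop.service stops only develop.service (proc1); proc2 keeps running and can be stopped or inspected through proc2.service.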


Which config files could disable the automatically starting ssh server, so a headless connect becomes impossible?
I need to know which config files might prevent the ssh server from starting up normally at boot.
I believe that you are looking for the following commands (assuming you are running the latest version of Raspbian):
sudo systemctl stop sshd
sudo systemctl disable sshd
sudo systemctl mask sshd
stop basically stops the service immediately. disable prevents the service from starting at boot. Additionally, mask makes it impossible to load (and therefore start) the service.
Digging deeper into what each command does: on modern Linux distributions there are configuration files for each service called unit files. They are usually stored in /usr/lib/systemd/system. These are basically the evolution of the init scripts that used to start services.
the stop command simply tells systemd to stop the unit described by sshd.service, in order to shut down the server.
the disable (or enable) command removes (or creates) a symlink to the unit file in a directory where systemd looks when deciding what to start at boot (usually under /etc/systemd/system).
systemctl mask creates a symlink to /dev/null in place of the unit file. That way the service can't be loaded.
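For example, assuming the unit is named sshd on your system, you can see (and later undo) the effect of masking with:
sudo systemctl mask sshd
ls -l /etc/systemd/system/sshd.service   # now a symlink pointing at /dev/null
sudo systemctl unmask sshd               # removes the symlink so the unit can be loaded again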

Systemd timer for a bound service restarts the service it is bound to

I have two systemd services a and b, where b is "After" and "BindsTo" a, and b is a short command that is launched every minute with a systemd timer.
Here's my config:
$ cat /systemd/a.service
[Unit]
After=foo
BindsTo=foo
[Service]
ExecStart=/opt/a/bin/a
Group=lev
User=lev
Restart=Always
WorkingDirectory=/opt/a
$ cat /systemd/b.service
[Unit]
After=a
BindsTo=a
[Service]
ExecStart=/opt/b/bin/b
Group=lev
User=lev
WorkingDirectory=/opt/b
$ cat /systemd/b.timer
[Unit]
[Timer]
OnCalendar=*:0/1:00
When I run sudo systemctl stop a, service a is indeed stopped, but then it is started back up at the top of the next minute, when the timer for service b runs b.
The systemd documentation states that BindsTo
declares that if the unit bound to is stopped, this unit will be stopped too.
(https://www.freedesktop.org/software/systemd/man/systemd.unit.html#BindsTo=)
I expect that by stopping a, b will also be stopped, and the timer disabled. This is not the case. Can you help explain why the b timer restarts not only b (which should fail), but also a?
Can you also help me edit these services such that:
on boot, a is started first, then b is started
when I sudo systemctl stop a, b's timer does not run
when I sudo systemctl start a, b's timer begins running again
Thanks in advance!
Here are the simplest units that meet your constraints:
test-a.service
[Service]
# long-running command
ExecStart=sleep 3600
test-b.service
[Service]
# short command
ExecStart=date
test-b.timer
[Unit]
After=test-a.service
# makes test-b.timer stop when test-a.service stops
BindsTo=test-a.service
[Timer]
OnCalendar=* *-*-* *:*:00
[Install]
# makes test-b.timer start when test-a.service starts
WantedBy=test-a.service
Don't forget to
systemctl daemon-reload
systemctl disable test-b.timer
systemctl enable test-b.timer
to apply the changes in the [Install] section.
Explanations:
what you want is to bind b.timer (not b.service) to a.service
b.service is only a short command, and systemctl start b.service will only run the command, not start the associated timer
only systemctl start b.timer will start the timer
The WantedBy tells systemd to start test-b.timer when test-a.service starts
The BindsTo tells test-b.timer to stop when test-a.service stops
The After only ensures that test-b.timer is not started at the same time as test-a.service: it forces systemd to start test-b.timer only after test-a.service has finished starting.
About the behaviour you observed:
When you stopped your a.service, b.timer was still active, and it tried starting b.service to run its short command. Since your b.service specifies BindsTo=a.service, systemd concluded that b.service required a.service to be started as well, and effectively restarted a.service so that b.service could run correctly.
I could be mistaken, but I believe that the "Restart=Always" option is the reason that the service named a gets started again, and hence why the service named b is not subsequently stopped.
The man page for systemd.service states if this option is set to always
the service will be restarted regardless of whether it exited cleanly
or not, got terminated abnormally by a signal, or hit a timeout.
https://www.freedesktop.org/software/systemd/man/systemd.service.html#Restart=
So even though you are stopping the service, this option is starting it again.
You can test this by running the following commands. Since you have service "b" on a one-minute timer, I would run the stop command 10 seconds after the top of the minute (e.g. at 10:00:10). Then I would run the status command 20 seconds later and see if the service has been restarted.
sudo systemctl stop a
sudo systemctl status a b

systemd service fails to activate

I am working to create a service that triggers a script upon boot. The script then installs and activates a piece software. I only want this service to run once so that it installs the software on initial boot. This is being built into an AMI for standard deployment in an enterprise.
I currently have the following:
/etc/systemd/system/startup.service (executable using chmod +x; enabled using "systemctl enable startup.service")
/var/tmp/LinuxDeploymentScript.sh
The service contains:
[Unit]
After=remote-fs.target
[Service]
Type=oneshot
User=root
ExecStart=/var/tmp/LinuxDeploymentScript.sh
[Install]
WantedBy=multi-user.target
When I test the service by using systemctl start startup.service it runs successfully, but when I leave it enabled and reboot the system, it fails to activate:
Screenshot of failure log
Any help would be great. One thought I have is that my After= setting may not be late enough in the boot process for the script to succeed.
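In case it helps while debugging, a reasonable first step (assuming the unit really is named startup.service and the script path is as above) is to pull the unit's log from the failed boot and double-check the script itself:
journalctl -b -u startup.service              # log for this unit from the current boot
ls -l /var/tmp/LinuxDeploymentScript.sh       # the script (not the unit file) needs the execute bit
head -n 1 /var/tmp/LinuxDeploymentScript.sh   # and a shebang line such as #!/bin/bash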

Best way to run a pyramid pserve server as daemon

I used to run my pyramid server as a daemon with the pserve --daemon command.
Given that it's deprecated, I'm looking for the best replacement. This link recommends running it with screen or tmux, but that seems too heavyweight just to run a web server. Another idea would be to launch it with setsid.
What would be a good way to run it?
Create a service file in /etc/systemd/system. Here is an example (pyramid.service):
[Unit]
Description=pyramid_development
After=network.target
[Service]
# your working directory
WorkingDirectory=/srv/www/webgis/htdocs/app
# your pserve path and the ini file to serve
ExecStart=/srv/www/app/env/bin/pserve /srv/www/app/development.ini
[Install]
WantedBy=multi-user.target
Enable the service:
systemctl enable pyramid.service
Start/Stop/Restart the service with:
systemctl start pyramid.service
systemctl restart pyramid.service
systemctl stop pyramid.service
The simplest option is to install supervisord and set up a conf file for the service. The program would just be env/bin/pserve production.ini. There are countless examples online of how to do this.
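A minimal program section for that could look something like this (the program name, paths and user are assumptions; adjust them to your layout):
[program:pyramid]
command=/srv/www/app/env/bin/pserve /srv/www/app/production.ini
directory=/srv/www/app
user=www-data
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/supervisor/pyramid.log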
The best option is to integrate with your system's process manager (systemd usually, but maybe also upstart or sysvinit or openrc). It is very easy to write a systemd unit file for starting pserve and then it will be started/stopped along with the rest of your system. Log files are even handled automatically in these cases.

supervisord: How to stop supervisord on PROCESS_STATE_FATAL

I'm using supervisord to manage multiple processes in a docker container.
However, one process is always the 'master', and the others are monitoring and reporting processes.
What I want to do is kill supervisord if the master process fails to start after startretries.
What I tried to do is use eventlistener to kill the process:
[eventlistener:master]
events=PROCESS_STATE_FAIL
command=supervisorctl stop all
But I don't think the events subsystem is this sophisticated. I think I need to actually write an event listener to handle the events.
Is that correct? Is there a simpler way to kill the entire supervisord process if one of the processes dies?
Thanks
Another try:
[eventlistener:quit_on_failure]
events=PROCESS_STATE_FATAL
command=sh -c 'echo "READY"; while read -r line; do echo "$line"; supervisorctl shutdown; done'
Especially for docker containers, it would literally be a killer feature to have a simple, straightforward shutdown on errors. A container should go down when its processes die.
Answered by:
supervisord event listener
The command parameter MUST be an event handler; it can't be an arbitrary command.
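To make that concrete, here is a rough sketch of a shell-based handler that follows the event listener protocol (the script path and listener name are assumptions, and it shuts supervisord down on the first FATAL event it sees):
[eventlistener:quit_on_fatal]
events=PROCESS_STATE_FATAL
command=/usr/local/bin/shutdown_on_fatal.sh
And /usr/local/bin/shutdown_on_fatal.sh:
#!/bin/sh
# Minimal supervisord event listener: announce readiness, then shut
# supervisord down as soon as a PROCESS_STATE_FATAL event arrives.
printf 'READY\n'
while read -r header; do
    # The header ends in "len:<N>"; read and discard the N-byte payload.
    len=${header##*len:}
    dd bs=1 count="$len" >/dev/null 2>&1
    supervisorctl shutdown
    # Acknowledge the event, then signal readiness for the next one.
    printf 'RESULT 2\nOK'
    printf 'READY\n'
done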