This is the first time I've used systemd and I'm a bit unsure about something.
I've got a service that I've set up (for geoserver running under tomcat):
[Unit]
Description=Geoserver
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/local/geoserver/bin/startup-optis.sh
ExecStop=/usr/local/geoserver/bin/shutdown-optis.sh
User=geoserver
[Install]
WantedBy=multi-user.target
The startup script does an exec to run java/tomcat. Starting the service from the command line appears to work:
sudo systemctl start geoserver
However, the command does not return until I press Ctrl-C, which doesn't seem right to me. The Java process remains running afterwards, though, and functions normally. I'm reluctant to reboot the box to test this in case it causes problems during init; it's a remote machine, and it would be a pain to get someone to address it.
You need to set the correct "Type" in the "Service" section:
[Service]
...
Type=simple
...
Type
Configures the process start-up type for this service unit. One of simple, forking, oneshot, dbus, notify or idle.
If set to simple (the default if neither Type= nor BusName=, but
ExecStart= are specified), it is expected that the process configured
with ExecStart= is the main process of the service. In this mode, if
the process offers functionality to other processes on the system, its
communication channels should be installed before the daemon is
started up (e.g. sockets set up by systemd, via socket activation), as
systemd will immediately proceed starting follow-up units.
If set to forking, it is expected that the process configured with
ExecStart= will call fork() as part of its start-up. The parent
process is expected to exit when start-up is complete and all
communication channels are set up. The child continues to run as the
main daemon process. This is the behavior of traditional UNIX daemons.
If this setting is used, it is recommended to also use the PIDFile=
option, so that systemd can identify the main process of the daemon.
systemd will proceed with starting follow-up units as soon as the
parent process exits.
Behavior of oneshot is similar to simple; however, it is expected that
the process has to exit before systemd starts follow-up units.
RemainAfterExit= is particularly useful for this type of service. This
is the implied default if neither Type= nor ExecStart= are specified.
Behavior of dbus is similar to simple; however, it is expected that
the daemon acquires a name on the D-Bus bus, as configured by
BusName=. systemd will proceed with starting follow-up units after the
D-Bus bus name has been acquired. Service units with this option
configured implicitly gain dependencies on the dbus.socket unit. This
type is the default if BusName= is specified.
Behavior of notify is similar to simple; however, it is expected that
the daemon sends a notification message via sd_notify(3) or an
equivalent call when it has finished starting up. systemd will proceed
with starting follow-up units after this notification message has been
sent. If this option is used, NotifyAccess= (see below) should be set
to open access to the notification socket provided by systemd. If
NotifyAccess= is not set, it will be implicitly set to main. Note that
currently Type=notify will not work if used in combination with
PrivateNetwork=yes.
Behavior of idle is very similar to simple; however, actual execution
of the service binary is delayed until all jobs are dispatched. This
may be used to avoid interleaving of output of shell services with the
status output on the console.
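Putting this together for the question above: since the startup script exec's Java in the foreground, that exec'd process becomes the service's main process, so Type=simple is the right fit. A minimal corrected unit might look like:

```ini
[Unit]
Description=Geoserver
After=network.target

[Service]
# simple: systemd tracks the exec'd java process as the main process,
# so `systemctl start` returns immediately instead of waiting for exit
Type=simple
ExecStart=/usr/local/geoserver/bin/startup-optis.sh
ExecStop=/usr/local/geoserver/bin/shutdown-optis.sh
User=geoserver

[Install]
WantedBy=multi-user.target
```

With Type=oneshot, by contrast, `systemctl start` blocks until the ExecStart process exits, which is exactly the hang described in the question.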
I'm using a Raspberry Pi 4, running Raspbian v10 (Buster).
I installed supervisor per the instructions here: http://supervisord.org/installing.html
Except I changed "pip" to "pip3", because the things I want to monitor run under Python 3.
I'm using Prefect, and the supervisord.conf is running the program with command=/home/pi/.local/bin/prefect "agent local start" (I tried this with and without double quotes)
Looking at the supervisord.log file, it seems the Prefect agent does start: I see the ASCII art that normally shows up when I start it from the command line. But then the log shows it was killed: "terminated by SIGTERM; not expected" and "WARN received SIGTERM indicating exit request".
I saw this post: Supervisor gets a SIGTERM for some reason, quits and stops all its processes, but I don't even have the 10Periodic file it references.
Does anyone know why/how Supervisor processes are getting killed by SIGTERM?
It could be that your process exits immediately because you don't have an API key in your command, and this is required to connect your agent to the Prefect Cloud API. Additionally, it's a best practice to always assign a unique label to your agents; below is an example with "raspberry" as the label.
You can also check the logs/status:
supervisorctl status
Here is a command you can try; you can also specify a directory in your supervisor config (I'm not sure whether the environment variables are needed, but I saw them used by another Raspberry Pi supervisor user):
[program:prefect-agent]
command=prefect agent local start -l raspberry -k YOUR_API_KEY --no-hostname-label
directory=/home/pi/.local/bin/prefect
user=pi
environment=HOME="/home/pi/.local/bin/prefect",USER="pi"
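If the agent still exits, capturing its own stdout/stderr (rather than only supervisord's log) usually reveals why. A hedged extension of the program section above, using standard supervisor options (the log path is illustrative):

```ini
[program:prefect-agent]
command=prefect agent local start -l raspberry -k YOUR_API_KEY --no-hostname-label
user=pi
autostart=true
autorestart=true
; merge stderr into stdout and write it to a dedicated file for debugging
redirect_stderr=true
stdout_logfile=/var/log/prefect-agent.log
```

After editing the config, `supervisorctl reread` followed by `supervisorctl update` picks up the changes without restarting supervisord.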
Environment: Ubuntu 16.04, a daemon programmed in C, using systemd for process management.
So I have the unit file as:
[Unit]
Description=Fantastic Service
After=network.target
[Service]
Restart=always
Type=forking
ExecStart=/opt/fan/tastic
[Install]
WantedBy=multi-user.target
And in my tastic.c code, it basically fork()s X child processes, each using SO_REUSEPORT, and then the main process exits, leaving the children to handle requests.
With the above setup it works fine, and I get the expected behavior.
However, if I put PIDFile= in the service unit file, I get an error that the PID provided by my application is non-existent, which it is, since my main process exits after starting the requested number of children.
Now, the systemd documentation clearly states that with Type=forking you should provide a PIDFile=, but how am I supposed to provide a single PID file when there are multiple children and the main parent process exits once the children start?
Am I missing something?
As you found, the system works fine without PIDFile= in your case. The docs recommend the use of PIDFile=, but I believe that's for the case when there is a single main process, which doesn't apply to your case.
Also see man systemd.kill which explains how processes will be killed. The default is "control-group", which kills "all remaining processes in the control group".
So by default, systemd is going to clean up all those child processes at "stop" time for you, which is what you want.
For someone who did have a main process, they might want to use KillMode=process, and in that case setting PIDFile= may help with that, but this does not apply to your case.
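As a sketch, the two setups discussed look like this (the default KillMode= is written out explicitly for clarity; the PID file path is illustrative):

```ini
# This case: systemd kills every remaining process in the unit's cgroup
# on stop, so the SO_REUSEPORT children are cleaned up even though the
# parent exited and no PIDFile= is set.
[Service]
Type=forking
ExecStart=/opt/fan/tastic
KillMode=control-group

# Alternative, for a daemon that *does* keep a single main process:
# only that process receives the stop signal, so PIDFile= matters here.
#KillMode=process
#PIDFile=/run/tastic.pid
```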
My application may be started by systemd socket activation or (the end-user's choice) directly as a service. In the former case, I like to code my application so that it shuts down after a given idle time (and will be restarted by systemd once a new connection is received); in the latter case it should not shut down.
How can my application distinguish whether it has been started by systemd through socket activation versus by 'systemctl start myapplication'?
The systemd logs gave me no hints. Is it possible at all to distinguish those two startup cases?
Al_
PS: in case it matters: my application is written in C++/Qt and follows the systemd 'notify' scheme.
In the latter case it will have to listen on a TCP socket itself. So if you can listen on that socket, systemd isn't, and therefore you weren't started via socket activation.
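Another way to tell the two cases apart, without trying to bind: under socket activation, systemd sets LISTEN_FDS (the number of passed sockets) and LISTEN_PID (the PID they are intended for) in the service's environment. This is the same protocol that sd_listen_fds(3) checks. A minimal sketch:

```shell
#!/bin/sh
# Detect systemd socket activation via the activation protocol.
# When activated, the first passed socket is always file descriptor 3.
if [ "${LISTEN_PID:-}" = "$$" ] && [ "${LISTEN_FDS:-0}" -ge 1 ]; then
    echo "socket-activated: ${LISTEN_FDS} fd(s) passed, starting at fd 3"
else
    echo "started directly: open and listen on the TCP socket ourselves"
fi
```

In a C++/Qt application, calling sd_listen_fds() from libsystemd performs the equivalent check and returns the number of passed descriptors (0 when started directly).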
I need to have a systemd service which runs continuously. The system in question is an embedded Linux built with Yocto.
If the service stops for any reason (either failure or just completed), it should be restarted automatically
If restarted more than X times, system should reboot.
What options are there for this? I can think of the following two, but both seem suboptimal:
1) having a cron job which will literally do the check above and keep the number of retries somewhere in /tmp or another tmpfs location
2) having the service itself track the number of times it has been started (again, in some tmpfs location) and reboot if necessary. Systemd would just have to keep trying to start the service whenever it's not running.
edit: as suggested by an answer, I modified the service to use StartLimitAction as given below. It causes the unit to restart correctly, but at no point does it reboot the system, even if I continuously kill the script:
[Unit]
Description=myservice system
[Service]
Type=simple
WorkingDirectory=/home/root
ExecStart=/home/root/start_script.sh
Restart=always
StartLimitAction=reboot
StartLimitIntervalSec=600
StartLimitBurst=5
[Install]
WantedBy=multi-user.target
This in your service file should do something very close to your requirements:
[Service]
Restart=always
[Unit]
StartLimitAction=reboot
StartLimitIntervalSec=60
StartLimitBurst=5
It will restart the service if it stops, except if there are more than 5 restarts in 60 seconds: in that case it will reboot.
You may also want to look at the WatchdogSec= value, but this software watchdog functionality requires support from the service itself (very easy to add, though; see the documentation for WatchdogSec=).
My understanding is that the Restart= line should be in [Service], as in the example, but the StartLimit*= lines should be in [Unit].
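Merging that placement back into the unit from the question gives, as a sketch:

```ini
[Unit]
Description=myservice system
# the rate-limit directives belong here, not in [Service]
StartLimitAction=reboot
StartLimitIntervalSec=600
StartLimitBurst=5

[Service]
Type=simple
WorkingDirectory=/home/root
ExecStart=/home/root/start_script.sh
Restart=always

[Install]
WantedBy=multi-user.target
```

With this layout, systemd restarts the service on every exit, but if it is restarted more than 5 times within 600 seconds, the StartLimitAction=reboot fires instead.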
I'm currently attempting to develop a sandbox using Docker. Docker spawns processes through a running daemon, and I am having a great deal of trouble getting the limits set in the limits.conf file to apply to the daemon. Specifically, I am running a fork bomb such that the daemon is the process that spawns all the new processes. The nproc limitation I placed on the user making this call doesn't seem to get applied, and for the life of me I cannot figure out how to make it work. I'm quite positive it will be as simple as adding the correct file to /etc/pam.d/, but I'm not certain.
The PAM limits only apply to processes playing nice with PAM. By default, when you start a shell in a container, it won't have anything to do with PAM, and setting limits through PAM just won't work.
Here are some other ways to make it happen!
Instead of starting your process immediately, you can start a tiny wrapper script, which will do the appropriate ulimit calls before executing your process.
If you want an interactive shell, you can run login -f <username> (e.g. login -f root); that will use the normal login process to auto-log you on the machine (and that should go through the normal PAM mechanisms).
If you want all containers to be subject to those limits, you can set the limits on your system, then restart Docker with those lower limits; containers are created by Docker, and by default, they will inherit those limits as well.