systemd service: redirect stdout to custom filename

I am working with systemd services in order to start an application. Stdout should be redirected to a file whose name contains the current date (when the service was started). Logging to a file works fine; however, I don't know how to provide the date for the filename within the service. Any ideas?
...
[Service]
ExecStart=/bin/mybin
StandardOutput=file:/my/path/<filename should contain date>.log
...

systemd cannot generate the file name dynamically, but you can wrap the command in a shell and let date build the log file name:
[Service]
ExecStart=/bin/bash -c "/bin/mybin >/my/path/filename-$(date +%%y-%%d-%%m).log"
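Note the doubled percent signs: in a unit file, % introduces a specifier, so a literal percent must be written %%. In plain shell the same expression uses single % characters; a quick sanity check of the resulting name, run directly in a shell:

```shell
# Plain-shell equivalent of the unit-file expression (single %, not %%).
# The path is the question's placeholder, not a real requirement.
logfile="/my/path/filename-$(date +%y-%d-%m).log"
echo "$logfile"
```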

I had the same need and found a solution that works well for me. Hope it can help you too.
1. Create a script.
You must put the script under /usr/bin. Mine is /usr/bin/ruoyi-gen.sh.
2. Add the contents below:
#!/bin/bash
java -jar /root/xf-service/ruoyi-modules-gen-2.3.0.jar > /root/xf-service/ilogs/modules-gen-`date "+%Y-%m-%d"`.log 2>&1 &
Make the script executable -> chmod +x /usr/bin/ruoyi-gen.sh.
3. Add service description
Run vi /etc/systemd/system/ruoyi-gen.service and add a description like the one below:
[Unit]
Description=ruoyi-gen
[Service]
Type=forking
ExecStart=/usr/bin/ruoyi-gen.sh
[Install]
WantedBy=multi-user.target
4. Reload all systemd service files
systemctl daemon-reload
5. Start your service
systemctl start ruoyi-gen
It works on CentOS 7.
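The same pattern can be sketched as a generic wrapper; every path and command below is a placeholder, not part of the original answer:

```shell
#!/bin/bash
# Generic sketch of the wrapper above: run a command in the background,
# logging stdout and stderr to a dated file.
APP_CMD="echo app started"        # e.g. java -jar /root/xf-service/app.jar
LOG_DIR="/tmp/ilogs"              # e.g. /root/xf-service/ilogs
mkdir -p "$LOG_DIR"
LOG_FILE="$LOG_DIR/app-$(date +%Y-%m-%d).log"
$APP_CMD > "$LOG_FILE" 2>&1 &     # background it, as Type=forking expects
wait                              # (demo only) wait so the log is written
```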

Create a SysV Init script for Prometheus node exporter in old linux system (RHL6)

I have a server machine that has RHL6 (Red Hat Linux 6) and is based on SysV initialization (does not have systemd package), and I want to make my prometheus node exporter collect metrics from this machine.
All I can find online is how to create a node exporter service with systemctl (systemd): basically you create a .service file under /etc/systemd/system and then write something like this in it:
[Unit]
Description=Node Exporter
After=network.target
[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter
[Install]
WantedBy=multi-user.target
And then you start the service, enable it at startup, etc with systemctl command like this
sudo systemctl start node_exporter
sudo systemctl status node_exporter
sudo systemctl enable node_exporter
But the problem is that I don't have systemd installed, and I don't have the rights to update the server machine, so I am trying to find a way to write an init script for node exporter to be placed under /etc/rc.d/init.d in my case.
It seems that all scripts under init.d are shell scripts that declare many methods like start(), stop(), restart(), reload(), force_reload(), ...
So it's not as easy as writing the service based on systemd.
Does anyone have an idea how to do that with SysV init?
Thanks,
I managed to find a solution to my problem.
Here is what the script looks like:
#!/bin/bash
#
# chkconfig: 2345 90 12
# description: node-exporter server
#
# Get functions from the functions library
. /etc/init.d/functions

# Start the node-exporter service
start() {
    echo -n "Starting node-exporter service: "
    /usr/sbin/node_exporter_service &
    ### Create the lock file ###
    touch /var/lock/subsys/node-exporter
    success $"node-exporter service startup"
    echo
}

# Stop the node-exporter service
stop() {
    echo -n "Shutting down node-exporter service: "
    killproc node_exporter_service
    ### Now, delete the lock file ###
    rm -f /var/lock/subsys/node-exporter
    echo
}

### main logic ###
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status node_exporter_service
        ;;
    restart|reload)
        stop
        start
        ;;
    *)
        echo $"Usage: $0 {start|stop|restart|reload|status}"
        exit 1
esac
exit 0
We place the above script under /etc/init.d named "node-exporter" (without .sh), and we place the node exporter binary under /usr/sbin (whereas with systemd we place binaries under /usr/local/bin).
You can download the binary file for node exporter from here https://github.com/prometheus/node_exporter/releases.
Then we add the script file to the list of services with command chkconfig --add node-exporter (to check if it already exists use command chkconfig --list node-exporter).
Enable the service with command chkconfig node-exporter on.
And then to start/stop/restart ... we use command /etc/init.d/node-exporter start/stop/restart ....
In the start script we basically run the binary file and in the stop script we kill the process by its name.
I hope this will be useful.

How do I run a Bash script before shutdown or reboot of a Raspberry Pi (running Raspbian)?

I want to run a Bash script prior to either shutdown or reboot of my Pi (running the latest Raspbian, a derivative of Debian).
e.g. if I type in sudo shutdown now or sudo reboot now into the command prompt, it should run my Bash script before continuing with shutdown/reboot.
I created a very simple script just for testing, to ensure I get the method working before I bother writing the actual script:
#!/bin/bash
touch /home/pi/ShutdownFileTest.txt
I then copied the file (called CreateFile.sh) to /etc/init.d/CreateFile
I then created symlinks in /etc/rc0.d/ and /etc/rc6.d/:
sudo ln -s /etc/init.d/CreateFile K99Dave
I'm not certain what the proper naming should be for the symlink. Some websites say to start the filename with a K, some say to start with an S, and one said to start with K99 so it runs at the right time.
I actually ended up trying all of the following (not all at once, of course, but one at a time):
sudo ln -s /etc/init.d/CreateFile S00Dave
sudo ln -s /etc/init.d/CreateFile S99Dave
sudo ln -s /etc/init.d/CreateFile K00Dave
sudo ln -s /etc/init.d/CreateFile K01rpa
sudo ln -s /etc/init.d/CreateFile K99Dave
After creating each symlink, I always ran:
sudo chmod a+x /etc/init.d/CreateFile && sudo chmod a+x /etc/rc6.d/<name of symlink>
I then rebooted each time.
Each time, the file at /home/pi/ShutdownFileTest.txt was not created; the script is not executed.
I found this comment on an older post, suggesting that the above was the outdated method:
The modern way to do this is via systemd. See "man systemd-shutdown"
for details. Basically, put an executable shell script in
/lib/systemd/system-shutdown/. It gets passed an argument like "halt"
or "reboot" that allows you to distinguish the various cases if you
need to.
I copied my script into /lib/systemd/system-shutdown/, chmod +x'd it, and rebooted, but still no success.
I note the above comment says that the script is passed "halt" or "reboot" as an argument. Since my script should run identically in both cases, I assume it doesn't need to deal with that argument. I don't know how to handle that argument anyway, so I'm not sure whether I need to do something to make that work.
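For reference, a script under /lib/systemd/system-shutdown/ receives a single argument ("halt", "poweroff", "reboot", or "kexec"); if all cases should behave identically it can simply be ignored. A minimal sketch of branching on it:

```shell
#!/bin/sh
# Sketch only: systemd-shutdown passes one argument; default to "unknown"
# here so the snippet also runs standalone.
action="${1:-unknown}"
case "$action" in
    reboot)         msg="reboot-specific steps" ;;
    halt|poweroff)  msg="halt-specific steps" ;;
    *)              msg="steps common to all cases" ;;
esac
echo "$msg"
```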
Could someone please tell me where I'm going wrong?
Thanks in advance,
Dave
As it turns out, part of the shutdown sequence has already run (and remounted the filesystem read-only) before these scripts are executed.
Therefore, remounting the filesystem read-write at the start of the script and read-only again at the end is necessary.
Simply add:
mount -oremount,rw /
...at the start of the script (beneath the #!/bin/bash)
...then have the script's code...
and then finish the script with:
mount -oremount,ro /
So, the OP's script should become:
#!/bin/bash
mount -oremount,rw /
touch /home/pi/ShutdownFileTest.txt
mount -oremount,ro /
...that then creates the file /home/pi/ShutdownFileTest.txt just before shutdown/reboot.
That said, it may not be best practice to use this method. Instead, it is better to create a service that runs whenever the computer is on and running normally, but runs the desired script when the service is terminated (which happens at shutdown/reboot).
This is explained in detail here, but essentially:
1: Create a file (let's call it example.service).
2: Add the following into example.service:
[Unit]
Description=This service calls shutdownScript.sh upon shutdown or reboot.
[Service]
Type=oneshot
RemainAfterExit=true
ExecStop=/home/pi/shutdownScript.sh
[Install]
WantedBy=multi-user.target
3: Move it into the correct directory for systemd by running sudo mv /home/pi/example.service /etc/systemd/system/example.service
4: Ensure the script to launch upon shutdown has appropriate permissions: chmod u+x /home/pi/shutdownScript.sh
5: Start the service: sudo systemctl start example
6: Make the service automatically start upon boot: sudo systemctl enable example
7: Stop the service: sudo systemctl stop example
This last command will mimic what would happen normally when the system shuts down, i.e. it will run /home/pi/shutdownScript.sh (without actually shutting down the system).
You can then reboot twice and it should work from the second reboot onwards.
EDIT: nope, no it doesn't. It worked the first time I tested it, but stopped working after that. Not sure why. If I figure out how to get it working, I'll edit this answer and remove this message (or if someone else knows, please feel free to edit the answer for me).
As I do not have enough seniority to post comments, this is a new answer, for which I apologize.
I added a step to ZPMMaker's answer, and it seems to work, at least for me.
sudo chmod u+x /etc/systemd/system/example.service

how to change systemd unit files directory?

Since I want to put all my service unit files in my own directory, like /opt/myservice/, I found the approach using $SYSTEMD_UNIT_PATH in https://unix.stackexchange.com/questions/224992/where-do-i-put-my-systemd-unit-file/367237#367237. However, systemctl can't find my service file in /opt/myservice/ after I set SYSTEMD_UNIT_PATH with the shell command SYSTEMD_UNIT_PATH=/opt/myservice/. Does anyone know how to make it work? Thanks.
[root@localhost system]# ls /opt/myservice/
test.service
[root@localhost system]# export SYSTEMD_UNIT_PATH=/opt/myservice/
[root@localhost system]# echo $SYSTEMD_UNIT_PATH
/opt/myservice/
[root@localhost system]# systemctl daemon-reload
[root@localhost system]# systemctl status test.service
Unit test.service could not be found.
According to the documentation, the environment variable must be set in the kernel environment.
So, if the PID of systemd is 1, the variable must be set as a kernel option; anything else (e.g. /etc/profile, systemctl set-environment) is invalid.
If you use a GRUB-driven system, you can set it in /etc/default/grub: change the line starting with GRUB_CMDLINE_LINUX= and append SYSTEMD_UNIT_PATH=/absolute/path/to/your/services:. The trailing colon is required if you want to append your services path to (rather than replace) the default unit search path.
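For example, with this question's /opt/myservice/ directory, the edited line in /etc/default/grub might look like this (the other options shown are placeholders for whatever your line already contains):

```
GRUB_CMDLINE_LINUX="rhgb quiet SYSTEMD_UNIT_PATH=/opt/myservice:"
```

Afterwards, regenerate the GRUB configuration (grub2-mkconfig -o /boot/grub2/grub.cfg on RHEL/CentOS-style systems, update-grub on Debian-style ones) and reboot so that PID 1 picks up the variable.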

How to run different files conditionally in systemd config?

I hope this isn't a duplicate question; systemd is really hard to search for.
I have a systemd file that looks like
[Unit]
Description=My Daemon
[Service]
User=root
Type=simple
PIDFile=/var/run/app.pid
ExecStart=/usr/bin/python /opt/app/app.pyc
Restart=always
[Install]
WantedBy=multi-user.target
I want ExecStart to run /usr/bin/python /opt/app/app.pyc if it exists, and /usr/bin/python /opt/app/app.py if it doesn't.
The goal is that the deployed system will have only the .pyc file, not the .py file, while dev systems might have only the .py file. How can I get this to work?
Make a small bash script which does what you want and then put that script on the ExecStart line.
#!/bin/bash
# Run the compiled app if present, otherwise fall back to the source file.
if [ -f /opt/app/app.pyc ]; then
    exec /usr/bin/python /opt/app/app.pyc
else
    exec /usr/bin/python /opt/app/app.py
fi
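Assuming the script is saved somewhere like /usr/local/bin/start-app.sh (a hypothetical path) and made executable, the unit's ExecStart then points at it; alternatively, the same conditional can be inlined without a separate script:

```
[Service]
ExecStart=/usr/local/bin/start-app.sh
# or, inlined:
# ExecStart=/bin/sh -c 'if [ -f /opt/app/app.pyc ]; then exec /usr/bin/python /opt/app/app.pyc; else exec /usr/bin/python /opt/app/app.py; fi'
```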

Symlinking unicorn_init.sh into /etc/init.d doesn't show with chkconfig --list

I'm symlinking my config/unicorn_init.sh to /etc/init.d/unicorn_project with:
sudo ln -nfs config/unicorn_init.sh /etc/init.d/unicorn_<project>
Afterwards, when I run chkconfig --list, my unicorn_<project> script doesn't show up. I'm adding this script so my application loads on server boot.
chkconfig unicorn_<project> on
Any help / advice would be awesome :).
Edit:
Also, when I'm in /etc/init.d/ and run:
sudo service unicorn_project start
It says: "unrecognized service"
I figured this out. There were two things wrong with what I was doing:
1) You have to make sure your unicorn script plays nice with chkconfig by adding the lines below just under #!/bin/bash. Props to DigitalOcean's blog for the help.
# chkconfig: 2345 95 20
# description: Controls Unicorn sinatra server
# processname: unicorn
2) I was attempting to symlink config/unicorn_init.sh while already inside the project directory, which created a dangling symlink (pink-colored in ls output, where a valid one shows teal) because the relative path was stored as-is. To fix this, I removed the dangling symlink and provided the absolute path to unicorn_init.sh.
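The dangling-link behaviour is easy to reproduce: a relative symlink target is resolved against the symlink's own directory, not the directory ln was run from. A small demonstration with throwaway paths:

```shell
# All paths are throwaway demo locations, not the project's real layout.
mkdir -p /tmp/symdemo/project/config /tmp/symdemo/init.d
touch /tmp/symdemo/project/config/unicorn_init.sh
cd /tmp/symdemo/project
# Relative target: stored verbatim, resolved against /tmp/symdemo/init.d -> dangles.
ln -nfs config/unicorn_init.sh /tmp/symdemo/init.d/unicorn_rel
# Absolute target: resolves no matter where the link lives.
ln -nfs "$PWD/config/unicorn_init.sh" /tmp/symdemo/init.d/unicorn_abs
```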
To debug this, I used ll in the /etc/init.d/ directory to see r,w,x permissions and file types, ran chkconfig --list to see the list of services in /etc/init.d/, and tried running the dangling symlink with sudo service unicorn_<project> restart.
Hope this helps someone.