Since I want to keep all my service unit files in my own directory such as /opt/myservice/, I found the suggestion to use $SYSTEMD_UNIT_PATH in this question: https://unix.stackexchange.com/questions/224992/where-do-i-put-my-systemd-unit-file/367237#367237. However, systemctl can't find my service file in /opt/myservice/ after I set SYSTEMD_UNIT_PATH with the shell command SYSTEMD_UNIT_PATH=/opt/myservice/. Does anyone know how to make this work? Thanks.
[root@localhost system]# ls /opt/myservice/
test.service
[root@localhost system]# export SYSTEMD_UNIT_PATH=/opt/myservice/
[root@localhost system]# echo $SYSTEMD_UNIT_PATH
/opt/myservice/
[root@localhost system]# systemctl daemon-reload
[root@localhost system]# systemctl status test.service
Unit test.service could not be found.
According to the documentation, the environment variable must be set in the environment of systemd itself.
So, if the PID of systemd is 1, the variable must be passed on the kernel command line; anything else (e.g. /etc/profile, systemctl set-environment) has no effect.
If you use a GRUB-driven system, you can set it in /etc/default/grub: change the line starting with GRUB_CMDLINE_LINUX= and append SYSTEMD_UNIT_PATH=/absolute/path/to/your/services:. The trailing colon is required if you want to append your services path to the default search paths rather than replace them.
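For example, a minimal sketch using the path from the question (replace <existing options> with whatever your GRUB_CMDLINE_LINUX line already contains; grub2-mkconfig and its output path are typical for RHEL/CentOS, while Debian-based systems use update-grub instead):
GRUB_CMDLINE_LINUX="<existing options> SYSTEMD_UNIT_PATH=/opt/myservice:"
grub2-mkconfig -o /boot/grub2/grub.cfg
Then reboot so that PID 1 picks up the new kernel command line.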
I am working with systemd services in order to start an application. Stdout should be redirected to a file containing the current date (when the service was started). Logging to a file works fine, however, I don't know how to provide the date for the filename within the service. Any ideas?
...
[Service]
ExecStart=/bin/mybin
StandardOutput=file:/my/path/<filename should contain date>.log
...
systemd cannot generate the file name dynamically, but you can wrap the command in a shell and use date to create such a logfile. Note that a literal % must be escaped as %% in a unit file, and date needs a leading + before its format string:
[Service]
ExecStart=/bin/bash -c "/bin/mybin >/my/path/filename-$(date +%%Y-%%m-%%d).log"
I had the same need and found a nice solution that works well for me. Hope it can help you too.
1. Create a script.
You must put the script under /usr/bin. It's /usr/bin/ruoyi-gen.sh for me.
2. Add the contents below.
#!/bin/bash
java -jar /root/xf-service/ruoyi-modules-gen-2.3.0.jar > /root/xf-service/ilogs/modules-gen-`date "+%Y-%m-%d"`.log 2>&1 &
Make the script executable: chmod +x /usr/bin/ruoyi-gen.sh.
3. Add a service description
Run vi /etc/systemd/system/ruoyi-gen.service and add a description like below:
[Unit]
Description=ruoyi-gen
[Service]
Type=forking
ExecStart=/usr/bin/ruoyi-gen.sh
[Install]
WantedBy=multi-user.target
4. Reload all systemd service files
systemctl daemon-reload
5. Start your service
systemctl start ruoyi-gen
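To verify the service started, you can check its status:
systemctl status ruoyi-gen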
It works on CentOS 7.
I have a pod running a Python image as user 199. My code app.py is placed in the /tmp/ directory. Now when I run a copy command to replace the running app.py, the command simply fails with a "file exists" error.
Please try the --no-preserve=true flag with the kubectl cp command. It passes the --no-same-owner and --no-same-permissions flags to the tar utility while extracting the copied file in the container.
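For example (mypod and the local file path are placeholders for your own pod and file):
kubectl cp ./app.py mypod:/tmp/app.py --no-preserve=true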
The GNU tar manual suggests adding --skip-old-files or --overwrite to the tar --extract command to avoid the error message you encountered, but to my knowledge there is no way to pass these optional arguments through kubectl cp.
I want to run a Bash script prior to either shutdown or reboot of my Pi (running the latest Raspbian, a derivative of Debian).
e.g. if I type in sudo shutdown now or sudo reboot now into the command prompt, it should run my Bash script before continuing with shutdown/reboot.
I created a very simple script just for testing, to ensure I get the method working before I bother writing the actual script:
#!/bin/bash
touch /home/pi/ShutdownFileTest.txt
I then copied the file (called CreateFile.sh) to /etc/init.d/CreateFile
I then created symlinks in /etc/rc0.d/ and /etc/rc6.d/:
sudo ln -s /etc/init.d/CreateFile K99Dave
I'm not certain what the proper naming for the symlink should be. Some websites say "start the filename with a K", some say "start with an S", and one said "start with K99 so it runs at the right time"...
I actually ended up trying all of the following (not all at once, of course, but one at a time):
sudo ln -s /etc/init.d/CreateFile S00Dave
sudo ln -s /etc/init.d/CreateFile S99Dave
sudo ln -s /etc/init.d/CreateFile K00Dave
sudo ln -s /etc/init.d/CreateFile K01rpa
sudo ln -s /etc/init.d/CreateFile K99Dave
After creating each symlink, I always ran:
sudo chmod a+x /etc/init.d/CreateFile && sudo chmod a+x /etc/rc6.d/<name of symlink>
I then rebooted each time.
Each time, the file at /home/pi/ShutdownFileTest.txt was not created; the script is not executed.
I found this comment on an older post, suggesting that the above was the outdated method:
The modern way to do this is via systemd. See "man systemd-shutdown" for details. Basically, put an executable shell script in /lib/systemd/system-shutdown/. It gets passed an argument like "halt" or "reboot" that allows you to distinguish the various cases if you need to.
I copied my script into /lib/systemd/system-shutdown/, chmod +x'd it, and rebooted, but still no success.
I note the above comment says the script is passed "halt" or "reboot" as an argument. As my script should run identically in both cases, I assume it shouldn't need to examine that argument; I don't know how to handle that argument either, so I'm not sure whether I need to do something to make it work.
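For reference, here is a minimal sketch of how a script in /lib/systemd/system-shutdown/ could branch on that argument (per man systemd-shutdown, it is one of "halt", "poweroff", "reboot" or "kexec"):
#!/bin/bash
# $1 is the action systemd-shutdown is performing
case "$1" in
    halt|poweroff) : ;;  # power-off-only logic would go here
    reboot)        : ;;  # reboot-only logic would go here
esac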
Could someone please tell me where I'm going wrong?
Thanks in advance,
Dave
As it turns out, part of the shutdown sequence has already executed (and remounted the root filesystem read-only) before these scripts run.
Therefore, remounting the filesystem read-write at the start of the script and remounting it read-only at the end is necessary.
Simply add:
mount -oremount,rw /
...at the start of the script (beneath the #!/bin/bash)
...then have the script's code...
and then finish the script with:
mount -oremount,ro /
So, the OP script should become:
#!/bin/bash
mount -oremount,rw /
touch /home/pi/ShutdownFileTest.txt
mount -oremount,ro /
...that then creates the file /home/pi/ShutdownFileTest.txt just before shutdown/reboot.
That said, it may not be best practice to use this method. Instead, it is better to create a service that runs whenever the computer is on and running normally, but runs the desired script when the service is terminated (which happens at shutdown/reboot).
This is explained in detail here, but essentially:
1: Create a file (let's call it example.service).
2: Add the following into example.service:
[Unit]
Description=This service calls shutdownScript.sh upon shutdown or reboot.
[Service]
Type=oneshot
RemainAfterExit=true
ExecStop=/home/pi/shutdownScript.sh
[Install]
WantedBy=multi-user.target
3: Move it into the correct directory for systemd by running sudo mv /home/pi/example.service /etc/systemd/system/example.service
4: Ensure the script to launch upon shutdown has appropriate permissions: chmod u+x /home/pi/shutdownScript.sh
5: Start the service: sudo systemctl start example
6: Make the service automatically start upon boot: sudo systemctl enable example
7: Stop the service: sudo systemctl stop example
This last command will mimic what would happen normally when the system shuts down, i.e. it will run /home/pi/shutdownScript.sh (without actually shutting down the system).
You can then reboot twice and it should work from the second reboot onwards.
EDIT: nope, no it doesn't. It worked the first time I tested it, but stopped working after that. Not sure why. If I figure out how to get it working, I'll edit this answer and remove this message (or if someone else knows, please feel free to edit the answer for me).
As I do not have enough seniority to post comments, this is a new answer, for which I apologize.
I added a step to ZPMMaker's answer and it seems to work, for me at least.
sudo chmod u+x /etc/systemd/system/example.service
I'm symlinking my config/unicorn_init.sh to /etc/init.d/unicorn_project with:
sudo ln -nfs config/unicorn_init.sh /etc/init.d/unicorn_<project>
Afterwards, when I run chkconfig --list, my unicorn_<project> script doesn't show. I'm adding my unicorn script so that it loads my application on server boot.
Obviously, this is not allowing me to add my script with:
chkconfig unicorn_<project> on
Any help / advice would be awesome :).
Edit:
Also, when I'm in /etc/init.d/ and run:
sudo service unicorn_project start
It says: "unrecognized service"
I figured this out. There were two things wrong with what I was doing:
1) You have to make sure your unicorn script plays nice with chkconfig by adding the code below just beneath #!/bin/bash. Props to DigitalOcean's blog for the help.
# chkconfig: 2345 95 20
# description: Controls Unicorn sinatra server
# processname: unicorn
2) I was attempting to symlink config/unicorn_init.sh while already inside the project directory, which created a dangling symlink (shown pink in ls output instead of teal) because the path was relative. To fix this, I removed the dangling symlink and provided the absolute path to unicorn_init.sh, as shown below.
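For example (with /path/to/project standing in for the real project directory):
sudo ln -nfs /path/to/project/config/unicorn_init.sh /etc/init.d/unicorn_<project>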
To debug this, I used ll in the /etc/init.d/ directory to check permissions and file types, ran chkconfig --list to see the services registered in /etc/init.d/, and tried running the dangling symlink with sudo service unicorn_<project> restart.
Hope this helps someone.
I am trying to figure out why the provided init.d script is not working on CentOS. I tried starting it manually:
/etc/init.d/mongod start
But I get the following error:
Starting mongod: /usr/bin/dirname: extra operand `2>&1.pid'
Try `/usr/bin/dirname --help' for more information.
I looked in the script where it tries to start:
daemon --user "$MONGO_USER" "$NUMACTL $mongod $OPTIONS >/dev/null 2>&1"
So I looked where mongod var is defined:
mongod=${MONGOD-/usr/bin/mongod}
Also tried:
service mongod start
Same error.
Not sure what I have set up wrong, but I have verified that I have the latest script, and I still cannot get the mongod process to start.
Any ideas?
The following link appears to address the issue well
https://ma.ttias.be/mongodb-startup-dirname-extra-operand-pid/
In a nutshell, a bad script was distributed, but the output it produces is not harmful; mongod still runs. If you run yum update you'll get a fixed script, but mongod will likely still fail, because the script was not what was making it fail. Check your mongo logs (usually /var/log/mongodb/mongod.log, but the path can differ if specified differently in /etc/mongod.conf). The log file should tell you the real reason it's failing.
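For example (the path below is the default; adjust it if your config points elsewhere):
tail -n 50 /var/log/mongodb/mongod.log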
Check the mongo pid file location in the config file /etc/mongod.conf:
awk -F'[:=]' -v IGNORECASE=1 '/^[[:blank:]]*pidfilepath[[:blank:]]*[:=][[:blank:]]*/{print $2}' /etc/mongod.conf
By default there should be this line in mongod.conf: 'pidfilepath = /var/run/mongodb/mongod.pid'. Add it if it doesn't exist.
If you are using the YAML version of /etc/mongod.conf, check out this issue: https://jira.mongodb.org/browse/SERVER-13595. In short, you need to change this line in /etc/rc.d/init.d/mongod:
PIDFILE=`awk -F= '/^pidfilepath[[:blank:]]*=[[:blank:]]*/{print $2}' "$CONFIGFILE"`
to:
PIDFILE=`awk -F: '/^[[:blank:]]*pidFilePath[[:blank:]]*:[[:blank:]]*/{print $2}' "$CONFIGFILE" | tr -d ' '`
For me the problem was in pidfilepath. The init script can't deal with a path in a format like this:
pidfilepath = /var/run/mongodb/mongod.pid
The PIDFILE variable inside the init script then contains ' /var/run/mongodb/mongod.pid' (with a leading space) and not '/var/run/mongodb/mongod.pid'.
FIX:
Replace the PIDFILE line with this and it will work:
PIDFILE=`awk -F= '/^pidfilepath[[:blank:]]*=[[:blank:]]*/{gsub(" ", "", $2);print $2}' "$CONFIGFILE"`
I have also faced the same issue.
The fix is to make a small change in the script file (/etc/init.d/mongod) as mentioned below.
Line 63, I guess:
daemon --user "$MONGO_USER" "$NUMACTL $mongod $OPTIONS >/dev/null 2>&1"
to
daemon --user "$MONGO_USER" --pidfile "$PIDFILE" "$NUMACTL $mongod $OPTIONS >/dev/null 2>&1"
Hope this helps!
It could be a Red Hat bug in the initscripts package:
goog.le forum
redhat bugzilla