Upstart / init script not working - init

I'm trying to create a service / script to automatically start and control my Node.js server, but it doesn't seem to work at all.
First of all, I used this source as my main reference: http://kvz.io/blog/2009/12/15/run-nodejs-as-a-service-on-ubuntu-karmic/
After testing around, I minimized the content of the actual file to avoid any kind of error, resulting in this (the bare minimum, but it still doesn't work):
description "server"
author "blah"
start on started mountall
stop on shutdown
respawn
respawn limit 99 5
script
export HOME="/var/www"
exec nodejs /var/www/server/server.js >> /var/log/node.log 2>&1
end script
The file is saved in /etc/init/server.conf
When trying to start the job (as root, or as a normal user), I get:
root@iof304:/etc/init# start server
start: Job failed to start
Then, I tried to check my syntax with init-checkconf, resulting in:
$ init-checkconf /etc/init/server.conf
File /etc/init/server.conf: syntax ok
I tried various other things, like initctl reload-configuration, with no result.
What can I do? How can I get this to work? It can't be that hard, right?
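For reference, the things that usually narrow a failure like this down are Upstart's per-job log and running the exec line by hand (the log path assumes Upstart 1.4 or later, i.e. Ubuntu 12.04+, and the binary may be called node rather than nodejs depending on how Node was installed):
sudo initctl reload-configuration          # pick up edits to /etc/init/server.conf
sudo start server; sudo tail /var/log/upstart/server.log   # job output lands here on newer Upstart
which nodejs node                          # check which binary name actually exists
nodejs /var/www/server/server.js           # run the exec line by hand to see the real error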

This is what our typical startup script looks like. As you can see we're running our node processes as user nodejs. We're also using the pre-start script to make sure all of the log file directories and .tmp directories are created with the right permissions.
#!upstart
description "grabagadget node.js server"
author "Jeffrey Van Alstine"
start on started mysql
stop on shutdown
respawn
script
export HOME="/home/nodejs"
exec start-stop-daemon --start --chuid nodejs --make-pidfile --pidfile /var/run/nodejs/grabagadget.pid --startas /usr/bin/node -- /var/nodejs/grabagadget/app.js --environment production >> /var/log/nodejs/grabagadget.log 2>&1
end script
pre-start script
mkdir -p /var/log/nodejs
chown nodejs:root /var/log/nodejs
mkdir -p /var/run/nodejs
mkdir -p /var/nodejs/grabagadget/.tmp
# Git likes to reset permissions on this file, but it really needs to be writable on server start
chown nodejs:root /var/nodejs/grabagadget/views/layout.ejs
chown -R nodejs:root /var/nodejs/grabagadget/.tmp
# Date format same as (new Date()).toISOString() for consistency
sudo -u nodejs echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Starting" >> /var/log/nodejs/grabagadget.log
end script
pre-stop script
rm /var/run/nodejs/grabagadget.pid
sudo -u nodejs echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Stopping" >> /var/log/nodejs/grabagadget.log
end script

As of Ubuntu 15.04, Upstart is no longer used as the default init system; see systemd.
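For newer releases, a rough systemd equivalent of the job above might look like this (a sketch only, not a drop-in: the paths and the nodejs user are carried over from the example above, your node binary may be /usr/bin/nodejs instead, and output goes to the journal rather than a log file):
[Unit]
Description=grabagadget node.js server
After=network.target mysql.service

[Service]
User=nodejs
WorkingDirectory=/var/nodejs/grabagadget
ExecStart=/usr/bin/node /var/nodejs/grabagadget/app.js --environment production
Restart=on-failure

[Install]
WantedBy=multi-user.target
Save it as /etc/systemd/system/grabagadget.service, then run sudo systemctl enable grabagadget followed by sudo systemctl start grabagadget, and read its output with journalctl -u grabagadget.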

Related

How do I run a Bash script before shutdown or reboot of a Raspberry Pi (running Raspbian)?

I want to run a Bash script prior to either shutdown or reboot of my Pi (running the latest Raspbian, a derivative of Debian).
e.g. if I type in sudo shutdown now or sudo reboot now into the command prompt, it should run my Bash script before continuing with shutdown/reboot.
I created a very simple script just for testing, to ensure I get the method working before I bother writing the actual script:
#!/bin/bash
touch /home/pi/ShutdownFileTest.txt
I then copied the file (called CreateFile.sh) to /etc/init.d/CreateFile
I then created symlinks in /etc/rc0.d/ and /etc/rc6.d/:
sudo ln -s /etc/init.d/CreateFile K99Dave
I'm not certain what the proper naming should be for the symlink. Some websites say "start the filename with a K", some say "start with an S", and one said "start with K99 so it runs at the right time".
I actually ended up trying all of the following (not all at once, of course, but one at a time):
sudo ln -s /etc/init.d/CreateFile S00Dave
sudo ln -s /etc/init.d/CreateFile S99Dave
sudo ln -s /etc/init.d/CreateFile K00Dave
sudo ln -s /etc/init.d/CreateFile K01rpa
sudo ln -s /etc/init.d/CreateFile K99Dave
After creating each symlink, I always ran:
sudo chmod a+x /etc/init.d/CreateFile && sudo chmod a+x /etc/rc6.d/<name of symlink>
I then rebooted each time.
Each time, the file at /home/pi/ShutdownFileTest.txt was not created; the script is not executed.
I found this comment on an older post, suggesting that the above was the outdated method:
The modern way to do this is via systemd. See "man systemd-shutdown"
for details. Basically, put an executable shell script in
/lib/systemd/system-shutdown/. It gets passed an argument like "halt"
or "reboot" that allows you to distinguish the various cases if you
need to.
I copied my script into /lib/systemd/system-shutdown/, chmod +x'd it, and rebooted, but still no success.
I note the above comment says that the script is passed "halt" or "reboot" as an argument. As my script should run identically in both cases, I assume it shouldn't need to deal with that argument; I don't know how to handle it anyway, so I'm not sure whether I need to do something to make that work.
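For reference, a script in /lib/systemd/system-shutdown/ can branch on that argument like this (a sketch; the filename is made up, and per man systemd-shutdown the argument is one of "halt", "poweroff", "reboot" or "kexec"):
#!/bin/sh
# Hypothetical /lib/systemd/system-shutdown/testarg.sh (must be executable).
case "$1" in
    reboot)
        : # things to do only before a reboot
        ;;
    halt|poweroff)
        : # things to do only before powering off
        ;;
esac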
Could someone please tell me where I'm going wrong?
Thanks in advance,
Dave
As it turns out, part of the shutdown sequence has already run (and remounted the filesystem read-only) before these scripts are executed.
Therefore, remounting the filesystem read-write at the start of the script and remounting it read-only again at the end is necessary.
Simply add:
mount -oremount,rw /
...at the start of the script (beneath the #!/bin/bash)
...then have the script's code...
and then finish the script with:
mount -oremount,ro /
So, the OP script should become:
#!/bin/bash
mount -oremount,rw /
touch /home/pi/ShutdownFileTest.txt
mount -oremount,ro /
...that then creates the file /home/pi/ShutdownFileTest.txt just before shutdown/reboot.
That said, it may not be best practice to use this method. Instead, it is better to create a service that runs whenever the computer is on and running normally, but runs the desired script when the service is terminated (which happens at shutdown/reboot).
This is explained in detail here, but essentially:
1: Create a file (let's call it example.service).
2: Add the following into example.service:
[Unit]
Description=This service calls shutdownScript.sh upon shutdown or reboot.
[Service]
Type=oneshot
RemainAfterExit=true
ExecStop=/home/pi/shutdownScript.sh
[Install]
WantedBy=multi-user.target
3: Move it into the correct directory for systemd by running sudo mv /home/pi/example.service /etc/systemd/system/example.service
4: Ensure the script to launch upon shutdown has appropriate permissions: chmod u+x /home/pi/shutdownScript.sh
5: Start the service: sudo systemctl start example
6: Make the service automatically start upon boot: sudo systemctl enable example
7: Stop the service: sudo systemctl stop example
This last command will mimic what would happen normally when the system shuts down, i.e. it will run /home/pi/shutdownScript.sh (without actually shutting down the system).
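To see whether the ExecStop= hook actually ran, and any errors it produced, the unit's status and journal are the easiest places to look (unit name as in the example above):
sudo systemctl status example.service
sudo journalctl -u example.service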
You can then reboot twice and it should work from the second reboot onwards.
EDIT: nope, no it doesn't. It worked the first time I tested it, but stopped working after that. Not sure why. If I figure out how to get it working, I'll edit this answer and remove this message (or if someone else knows, please feel free to edit the answer for me).
As I do not have enough seniority to post comments, this is a new answer, for which I apologize.
I added a step to ZPMMaker's answer, and it seems to work for me at least:
sudo chmod u+x /etc/systemd/system/example.service

Start shrew vpn client (iked & ikec) on start-up of OSMC on Raspberry 2

I would like to connect to a VPN on start-up of OSMC.
Environment:
installed OSMC on Raspberry 2
downloaded, compiled and installed shrew soft vpn on the device
As user 'osmc' with ssh
> sudo iked starts the daemon successfully
> ikec -r "test.vpn" -a starts the client, loads the config and connects successfully
rc.local:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
sudo iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
ikec -a -r "test.vpn" >> /home/osmc/ikec.log 2>> /home/osmc/ikec.error.log &
exit 0
After the Raspberry Pi starts, iked is visible as a process with ps -e,
but ikec is not running.
Running /etc/rc.local manually (osmc@osmc:~$ /etc/rc.local) starts the script and connects to the VPN successfully.
Problem:
Why does the script not work correctly on start-up?
Thank you for your help!
I was also looking to do the same thing as you and ran into the same problem. I'm no linux expert, but I did figure out a workaround.
I created a script called ikec_after_reboot.sh and it looks like this...
$ cat ikec_after_reboot.sh
#!/bin/bash
echo "Starting ikec"
ikec -r test.vpn -a
I then installed cron.
sudo apt-get update
sudo apt-get install cron
Edit the crontab as root so that it runs the ikec script 60 seconds after reboot.
sudo crontab -e
SHELL=/bin/bash
@reboot sleep 60 && /home/osmc/ikec_after_reboot.sh >> /home/osmc/ikec.log 2>&1 &
Now edit your /etc/rc.local file and add the following.
sudo iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
exit 0
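After the next boot, give the cron job its 60 seconds and then check that both the daemon and the client came up, for example:
ps -ef | grep -i ike    # both iked and ikec should be listed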
Hopefully, this is helpful to you.

How to run script in Solaris after boot once

I'm looking for the right way to run a shell script on the first boot of Solaris.
I need to run a resize command; here is my script:
#!/bin/sh -ux
echo "#!/bin/sh -ux" > /etc/rc3.d/S90scale
echo "/sbin/zpool set autoexpand=on rpool" >> /etc/rc3.d/S90scale
echo "/sbin/zpool online -e rpool c1d0" >> /etc/rc3.d/S90scale
echo "rm /etc/rc3.d/S90scale" >> /etc/rc3.d/S90scale
echo "/sbin/shutdown -y -i6 -g0" >> /etc/rc3.d/S90scale
chmod a+x /etc/rc3.d/S90scale
Actually the script runs properly, but unfortunately the resize does not work. When I do the same things from a user session, everything is just fine.
What exactly am I doing wrong?
Your method is not the "right" one to run a script once after boot, as it uses the legacy approach. The correct way would be to create an SMF service that runs once. However, your approach does work anyway with Solaris 10 and 11, as the rc scripts, while deprecated, are still processed, so I won't elaborate more about SMF.
The main issue is that you don't check for errors; whatever happens, the script removes itself and reboots, preventing any analysis.
I would suggest modifying your script to log what is happening to a file and to quit on error:
#!/bin/ksh
cat > /etc/rc3.d/S90scale <<%EOF%
exec > /var/tmp/S90scale.log 2>&1 # logs everything to file
set -xe # show commands and exits on error
/sbin/zpool set autoexpand=on rpool
/sbin/zpool online -e rpool c1d0
mv /etc/rc3.d/S90scale /etc/rc3.d/_S90scale
/sbin/shutdown -y -i6 -g0
%EOF%
chmod a+x /etc/rc3.d/S90scale
After the next reboot completes, have a look at the /var/tmp/S90scale.log file; you will possibly see an error message there.
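If the log shows both zpool commands succeeding but the pool still did not grow, it can also help to verify the property and the size by hand (assuming the rpool and c1d0 names from the question):
/sbin/zpool get autoexpand rpool
/sbin/zpool list rpool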

Mongo mongod init.d script not working on CentOS

I am trying to figure out why the provided init.d script is not working on CentOS. I tried starting it manually:
/etc/init.d/mongod start
But I get the following error:
Starting mongod: /usr/bin/dirname: extra operand `2>&1.pid'
Try `/usr/bin/dirname --help' for more information.
I looked in the script where it tries to start:
daemon --user "$MONGO_USER" "$NUMACTL $mongod $OPTIONS >/dev/null 2>&1"
So I looked where mongod var is defined:
mongod=${MONGOD-/usr/bin/mongod}
Also tried:
service mongod start
Same error.
I'm not sure what I have set up wrong; I have verified that I have the latest script, but I cannot get the mongod process to start.
Any ideas?
The following link appears to address the issue well
https://ma.ttias.be/mongodb-startup-dirname-extra-operand-pid/
In a nutshell, a bad script appears to have been distributed, but the output it produces is not harmful; mongod still runs. If you run yum update you'll get a fixed script, but mongod will likely still fail, because the script was not what was making it fail. Check your mongo logs (usually /var/log/mongodb/mongod.log, but the path can be different if specified differently in /etc/mongod.conf). The log file should tell you the real reason it's failing.
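For example, to look at the last log entries and at the ownership of the data directory (the dbpath below is the usual RPM default; adjust it if /etc/mongod.conf says otherwise):
sudo tail -n 50 /var/log/mongodb/mongod.log
ls -ld /var/lib/mongo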
Check the mongo pid file location in the config file /etc/mongod.conf:
awk -F'[:=]' -v IGNORECASE=1 '/^[[:blank:]]*pidfilepath[[:blank:]]*[:=][[:blank:]]*/{print $2}' /etc/mongod.conf
By default there should be this line in mongod.conf: 'pidfilepath = /var/run/mongodb/mongod.pid'. Add it if it doesn't exist.
If you are using the YAML version of /etc/mongod.conf, check out this issue: https://jira.mongodb.org/browse/SERVER-13595. In short, you need to change this line in /etc/rc.d/init.d/mongod:
PIDFILE=`awk -F= '/^pidfilepath[[:blank:]]*=[[:blank:]]*/{print $2}' "$CONFIGFILE"`
to:
PIDFILE=`awk -F: '/^[[:blank:]]*pidFilePath[[:blank:]]*:[[:blank:]]*/{print $2}' "$CONFIGFILE" | tr -d ' '`
For me the problem was in pidfilepath. The init script can't deal with a path in a format like this:
pidfilepath = /var/run/mongodb/mongod.pid
The PIDFILE variable inside the init script then contains ' /var/run/mongodb/mongod.pid' (with a leading space) and not '/var/run/mongodb/mongod.pid'.
Fix:
Replace the PIDFILE line with this and it will work:
PIDFILE=`awk -F= '/^pidfilepath[[:blank:]]*=[[:blank:]]*/{gsub(" ", "", $2);print $2}' "$CONFIGFILE"`
I have also faced the same issue.
The fix is to make a small change in the script file (/etc/init.d/mongod), as mentioned below, around line 63:
daemon --user "$MONGO_USER" "$NUMACTL $mongod $OPTIONS >/dev/null 2>&1"
to
daemon --user "$MONGO_USER" --pidfile "$PIDFILE" "$NUMACTL $mongod $OPTIONS >/dev/null 2>&1"
Hope this helps!
It could be a Red Hat bug in the initscripts package:
goog.le forum
redhat bugzilla

How to run a bash command as a different user in Capistrano?

How would I accomplish the following in Capistrano?
sudo su - postgres
/usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data/
The following task doesn't work:
task :postgres_check do
  on roles(:db), in: :sequence do |host|
    execute "sudo su - postgres << EOF
/usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data/
EOF"
  end
end
The commands in the execute statement work in a bash script.
EDIT 1:
I also tried the following:
task :postgres_check do
  on roles(:postgres_pref_db), in: :sequence do |host|
    execute "/usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data", :shell => "sudo su - postgres"
  end
end
Which errors with:
DEBUG [68eb95f2] Command: /usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data
DEBUG [68eb95f2] pg_ctl: could not open PID file "/var/lib/pgsql/9.2/data/postmaster.pid": Permission denied
cap aborted!
SSHKit::Command::Failed: /usr/pgsql-9.2/bin/pg_ctl status -D /var/lib/pgsql/9.2/data stdout: Nothing written
It appears that it is still executing the command as the SSH user.
I came across this question and explored the answer for myself. I wouldn't have accepted the existing answer either, so I'll provide what I did.
task :copy_files do
  on roles(:web) do |host|
    as 'other_user' do
      execute "whoami"
    end
  end
end
Capistrano 3 uses SSHKit, and I found these examples really helpful for getting bash commands to work inside my tasks.
https://github.com/capistrano/sshkit/blob/master/EXAMPLES.md
You'll want to check out SSHKit and read about on(), within(), with(), and as(); they can be nested in any order, so you end up having a lot of control, even if it takes a few minutes to learn.
I think for your specific example you will want to use as() and within() to become the postgres user and run commands within a certain directory.
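For instance, a sketch along these lines (untested; it assumes the deploy user is allowed to sudo to postgres, and the paths are taken from the question). Here as() alone is enough, because pg_ctl's -D flag already points at the data directory, which the plain SSH user may not even be able to cd into:
task :postgres_check do
  on roles(:db), in: :sequence do |host|
    as 'postgres' do
      # SSHKit wraps this roughly as: sudo -u postgres -- sh -c '...'
      execute :'/usr/pgsql-9.2/bin/pg_ctl', 'status', '-D', '/var/lib/pgsql/9.2/data'
    end
  end
end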
Also, I had to disable requiretty in /etc/sudoers for my deploy user.