How to run a script once after boot in Solaris

I'm looking for the right way to run a shell script on the first boot of Solaris.
I need to run a resize command; here is my script:
#!/bin/sh -ux
echo "#!/bin/sh -ux" > /etc/rc3.d/S90scale
echo "/sbin/zpool set autoexpand=on rpool" >> /etc/rc3.d/S90scale
echo "/sbin/zpool online -e rpool c1d0" >> /etc/rc3.d/S90scale
echo "rm /etc/rc3.d/S90scale" >> /etc/rc3.d/S90scale
echo "/sbin/shutdown -y -i6 -g0" >> /etc/rc3.d/S90scale
chmod a+x /etc/rc3.d/S90scale
The script itself runs as expected, but unfortunately the resize does not happen. When I run the same commands from a user session, everything works fine.
What exactly am I doing wrong?

Your method is not the "right" way to run a script once after boot, as it relies on the legacy approach; the correct way would be to create an SMF service that runs once. However, it does work anyway on Solaris 10 and 11, since the rc scripts, while deprecated, are still processed, so I won't elaborate further on SMF.
The main issue is that you don't check for errors: whatever happens, the script removes itself and reboots, which prevents any analysis.
I would suggest modifying your script to log what is happening to a file and to quit on error:
#!/bin/ksh
cat > /etc/rc3.d/S90scale <<%EOF%
exec > /var/tmp/S90scale.log 2>&1 # logs everything to file
set -xe # show commands and exits on error
/sbin/zpool set autoexpand=on rpool
/sbin/zpool online -e rpool c1d0
mv /etc/rc3.d/S90scale /etc/rc3.d/_S90scale
/sbin/shutdown -y -i6 -g0
%EOF%
chmod a+x /etc/rc3.d/S90scale
After the next reboot completes, have a look at the /var/tmp/S90scale.log file; you will possibly see an error message there.
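Once the system has come back up, you can also verify that the resize actually took effect (an optional check, assuming the pool is named rpool as in your script):
zpool get autoexpand rpool
zpool list rpool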


How do I run a Bash script before shutdown or reboot of a Raspberry Pi (running Raspbian)?

I want to run a Bash script prior to either shutdown or reboot of my Pi (running the latest Raspbian, a derivative of Debian).
e.g. if I type in sudo shutdown now or sudo reboot now into the command prompt, it should run my Bash script before continuing with shutdown/reboot.
I created a very simple script just for testing, to ensure I get the method working before I bother writing the actual script:
#!/bin/bash
touch /home/pi/ShutdownFileTest.txt
I then copied the file (called CreateFile.sh) to /etc/init.d/CreateFile
I then created symlinks in /etc/rc0.d/ and /etc/rc6.d/:
sudo ln -s /etc/init.d/CreateFile K99Dave
I'm not certain what the proper name for the symlink should be. Some websites say "start the filename with a K", some say "start with an S", and one said "start with K99 so it runs at the right time"...
I actually ended up trying all of the following (not all at once, of course, but one at a time):
sudo ln -s /etc/init.d/CreateFile S00Dave
sudo ln -s /etc/init.d/CreateFile S99Dave
sudo ln -s /etc/init.d/CreateFile K00Dave
sudo ln -s /etc/init.d/CreateFile K01rpa
sudo ln -s /etc/init.d/CreateFile K99Dave
After creating each symlink, I always ran:
sudo chmod a+x /etc/init.d/CreateFile && sudo chmod a+x /etc/rc6.d/<name of symlink>
I then rebooted each time.
Each time, the file at /home/pi/ShutdownFileTest.txt was not created; the script is not executed.
I found this comment on an older post, suggesting that the above was the outdated method:
The modern way to do this is via systemd. See "man systemd-shutdown"
for details. Basically, put an executable shell script in
/lib/systemd/system-shutdown/. It gets passed an argument like "halt"
or "reboot" that allows you to distinguish the various cases if you
need to.
I copied my script into /lib/systemd/system-shutdown/, chmod +x'd it, and rebooted, but still no success.
I note the above comment says that the script is passed "halt" or "reboot" as an argument. As it should run identically in both cases, I assume it shouldn't need to actually deal with that argument. I don't know how to deal with that argument, either, so I'm not sure if I need to do something to make that work or not...
Could someone please tell me where I'm going wrong?
Thanks in advance,
Dave
As it turns out, part of the shutdown sequence has already run (and remounted the root filesystem read-only) before these scripts are executed.
Therefore, remounting the filesystem read-write at the start of the script and remounting it read-only again at the end is necessary.
Simply add:
mount -oremount,rw /
...at the start of the script (beneath the #!/bin/bash)
...then have the script's code...
and then finish the script with:
mount -oremount,ro /
So, the OP script should become:
#!/bin/bash
mount -oremount,rw /
touch /home/pi/ShutdownFileTest.txt
mount -oremount,ro /
...that then creates the file /home/pi/ShutdownFileTest.txt just before shutdown/reboot.
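If you ever do need the "halt"/"reboot" argument mentioned above, systemd-shutdown passes the action as the first positional parameter to scripts in /lib/systemd/system-shutdown/, so a variant of the test script could branch on it (just a sketch; the file names are only illustrative):
#!/bin/bash
mount -oremount,rw /
case "$1" in            # $1 is "halt", "poweroff", "reboot" or "kexec"
    reboot) touch /home/pi/ShutdownFileTest-reboot.txt ;;
    *)      touch /home/pi/ShutdownFileTest.txt ;;
esac
mount -oremount,ro /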
That said, it may not be best practice to use this method. Instead, it is better to create a service that runs whenever the computer is on and running normally, but runs the desired script when the service is terminated (which happens at shutdown/reboot).
This is explained in detail here, but essentially:
1: Create a file (let's call it example.service).
2: Add the following into example.service:
[Unit]
Description=This service calls shutdownScript.sh upon shutdown or reboot.
[Service]
Type=oneshot
RemainAfterExit=true
ExecStop=/home/pi/shutdownScript.sh
[Install]
WantedBy=multi-user.target
3: Move it into the correct directory for systemd by running sudo mv /home/pi/example.service /etc/systemd/system/example.service
4: Ensure the script to launch upon shutdown has appropriate permissions: chmod u+x /home/pi/shutdownScript.sh
5: Start the service: sudo systemctl start example
6: Make the service automatically start upon boot: sudo systemctl enable example
7: Stop the service: sudo systemctl stop example
This last command will mimic what would happen normally when the system shuts down, i.e. it will run /home/pi/shutdownScript.sh (without actually shutting down the system).
You can then reboot twice and it should work from the second reboot onwards.
EDIT: nope, no it doesn't. It worked the first time I tested it, but stopped working after that. Not sure why. If I figure out how to get it working, I'll edit this answer and remove this message (or if someone else knows, please feel free to edit the answer for me).
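In the meantime, the standard systemd inspection commands are the quickest way to see whether the ExecStop ever ran and what it printed (nothing here is specific to this unit):
sudo systemctl status example
sudo journalctl -u example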
As I do not have enough seniority to post comments, this is a new answer, for which I apologize.
I added a step to ZPMMaker's answer and it seems to work for me at least.
sudo chmod u+x /etc/systemd/system/example.service

Start shrew vpn client (iked & ikec) on start-up of OSMC on Raspberry 2

I would like to connect to a VPN on start-up of OSMC.
Environment:
installed OSMC on Raspberry 2
downloaded, compiled and installed shrew soft vpn on the device
As user 'osmc' with ssh
> sudo iked starts the daemon successfully
> ikec -r "test.vpn" -a starts the client, loads the config and connects successfully
rc.local:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
sudo iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
ikec -a -r "test.vpn" >> /home/osmc/ikec.log 2>> /home/osmc/ikec.error.log &
exit 0
After the Raspberry Pi starts, iked shows up as a process in ps -e,
but ikec is not running.
Running /etc/rc.local manually (osmc@osmc:~$ /etc/rc.local) starts the script and connects to the VPN successfully.
Problem:
Why does the script not work correctly on start-up?
Thank you for your help!
I was also looking to do the same thing as you and ran into the same problem. I'm no Linux expert, but I did figure out a workaround.
I created a script called ikec_after_reboot.sh and it looks like this...
$ cat ikec_after_reboot.sh
#!/bin/bash
echo "Starting ikec"
ikec -r test.vpn -a
I then installed cron.
sudo apt-get update
sudo apt-get install cron
Edit the cron table as root and add an entry that runs the ikec script 60 seconds after reboot:
sudo crontab -e
SHELL=/bin/bash
@reboot sleep 60 && /home/osmc/ikec_after_reboot.sh >> /home/osmc/ikec.log 2>&1
Now edit your /etc/rc.local file and add the following:
sudo iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
exit 0
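After the next boot you can quickly check whether the workaround took effect (generic commands, using the paths from above):
sudo crontab -l
pgrep -l ikec
tail /home/osmc/ikec.log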
Hopefully, this is helpful to you.

How to send data to command line after calling .sh file?

I want to install Anaconda through EasyBuild. EasyBuild is a tool for managing software installations on clusters. Anaconda can be installed with sh Anaconda.sh.
However, after running it I have to accept the license agreement and give the installation location on the command line by entering <Enter>, yes <Enter>, path/where/to/install/ <Enter>.
Because this has to happen automatically, I want to accept the terms and give the install location in one go. I tried to do it like this:
sh Anaconda.sh < <(echo) >/dev/null < <(echo yes) >/dev/null \
< <(echo /apps/software/Anaconda/1.8.0-Linux-x86_64/) > test.txt
From test.txt I can see that the first echo works as <Enter>, but I can't figure out how to accept the license agreement; the installer behaves as if yes were never sent:
Do you approve the license terms? [yes|no]
[no] >>> The license agreement wasn't approved, aborting installation.
How can I send the yes correctly to the script input?
Edit: Sorry, I missed the part about having to enter more than one thing. You can take a look at writing expect scripts: thegeekstuff.com/2010/10/expect-examples. You may need to install expect, however.
You could try piping with the following command: yes yes | sh Anaconda.sh. Read the man page (man yes) for more information.
Expect is a great way to go and probably the most error-proof way. If you know all the questions, I think you could do this by just writing a file with the answers in the correct order, one per line, and piping it in.
That install script is huge, so as long as you can verify that you know all the questions, you could give this a try.
In my simple tests it works.
I have a test script that looks like this:
#!/bin/sh
echo -n "Do you accept "
read ANS
echo $ANS
echo -n "Install path: "
read ANS
echo $ANS
and an answers file that looks like this:
Y
/usr
Running it like so works... perhaps it will work for your monster install file as well.
cat answers | ./test.sh
Do you accept Y
Install path: /usr
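As a small variation, the answers can also be fed in with a here-document instead of a separate file (same idea, just self-contained):
./test.sh <<'EOF'
Y
/usr
EOF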
If that doesn't work then the script is likely flushing and you will have to use expect or pexpect.
Good luck!
Actually, I downloaded and looked at the Anaconda install script. It looks like it takes command-line arguments.
/bin/bash Anaconda-2.2.0-Linux-x86_64.sh -h
usage: Anaconda-2.2.0-Linux-x86_64.sh [options]
Installs Anaconda 2.2.0
-b run install in batch mode (without manual intervention),
it is expected the license terms are agreed upon
-f no error if install prefix already exists
-h print this help message and exit
-p PREFIX install prefix, defaults to /home/cody.stevens/anaconda
Use the -b and -p options...
so use it like so:
/bin/bash Anaconda-2.2.0-Linux-x86_64.sh -b -p /usr
Also of note: the script explicitly says not to run it with '.' or 'sh' but with 'bash', so it must depend on some bash-specific feature.
--
Cody

Upstart / init script not working

I'm trying to create a service / script to automatically start and control my nodejs server, but it doesn't seem to work at all.
First of all, I used this source as main reference http://kvz.io/blog/2009/12/15/run-nodejs-as-a-service-on-ubuntu-karmic/
After testing around, I minimized the content of the actual file to avoid any kind of error, resulting in this (the bare minimum, but it doesn't work):
description "server"
author "blah"
start on started mountall
stop on shutdown
respawn
respawn limit 99 5
script
export HOME="/var/www"
exec nodejs /var/www/server/server.js >> /var/log/node.log 2>&1
end script
The file is saved in /etc/init/server.conf
When trying to start the job (as root or as a normal user), I get:
root#iof304:/etc/init# start server
start: Job failed to start
Then, I tried to check my syntax with init-checkconf, resulting in:
$ init-checkconf /etc/init/server.conf
File /etc/init/server.conf: syntax ok
I tried various other things, like initctl reload-configuration, with no result.
What can I do? How can I get this to work? It can't be that hard, right?
This is what our typical startup script looks like. As you can see we're running our node processes as user nodejs. We're also using the pre-start script to make sure all of the log file directories and .tmp directories are created with the right permissions.
#!upstart
description "grabagadget node.js server"
author "Jeffrey Van Alstine"
start on started mysql
stop on shutdown
respawn
script
export HOME="/home/nodejs"
exec start-stop-daemon --start --chuid nodejs --make-pidfile --pidfile /var/run/nodejs/grabagadget.pid --startas /usr/bin/node -- /var/nodejs/grabagadget/app.js --environment production >> /var/log/nodejs/grabagadget.log 2>&1
end script
pre-start script
mkdir -p /var/log/nodejs
chown nodejs:root /var/log/nodejs
mkdir -p /var/run/nodejs
mkdir -p /var/nodejs/grabagadget/.tmp
# Git likes to reset permissions on this file, but it really needs to be writable on server start
chown nodejs:root /var/nodejs/grabagadget/views/layout.ejs
chown -R nodejs:root /var/nodejs/grabagadget/.tmp
# Date format same as (new Date()).toISOString() for consistency
sudo -u nodejs echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Starting" >> /var/log/nodejs/grabagadget.log
end script
pre-stop script
rm /var/run/nodejs/grabagadget.pid
sudo -u nodejs echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Stopping" >> /var/log/nodejs/grabagadget.log
end script
As of Ubuntu 15.04, Upstart is no longer used by default; see systemd instead.
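For reference, a roughly equivalent systemd unit for the job above might look like the following (a sketch only; the paths, the nodejs user and the mysql dependency are carried over from the upstart job, and logging is left to the journal):
[Unit]
Description=grabagadget node.js server
After=mysql.service

[Service]
User=nodejs
ExecStart=/usr/bin/node /var/nodejs/grabagadget/app.js --environment production
Restart=on-failure

[Install]
WantedBy=multi-user.target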

How to modify /etc/network/interfaces without gedit?

I'm writing a script and I would like to add the following line
pre-up iptables-restore < /etc/iptables.rules
to the interfaces file, located at /etc/network/interfaces, but although I have set write permissions on this file (I work on Ubuntu), I'm not able to do it. I'm trying to use the following command in my bash script:
sudo echo "pre-up iptables-restore < /etc/iptables.rules" >> /etc/network/interfaces
Any suggestion on how to do it without using gedit or vi?
Thanks in advance!
The problem is that the redirection is performed by your own (non-root) shell before sudo even starts, so you need the whole command, including the redirection, to run as root:
sudo bash -c 'echo "pre-up iptables-restore < /etc/iptables.rules" >> /etc/network/interfaces'
This way the complete command, including the >> redirection, is executed with root access, not only the echo "pre-up iptables-restore < /etc/iptables.rules" part.
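Another common pattern is to avoid the root-owned redirection entirely and let tee do the privileged writing:
echo "pre-up iptables-restore < /etc/iptables.rules" | sudo tee -a /etc/network/interfaces
Here tee -a appends to the file while running as root; use sudo tee without -a only if you actually want to overwrite the file.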