How does SoftLayer handle post-install scripts on vSphere? - ibm-cloud

I am reloading ESXi hosts using slcli. Here is an example of the call: slcli hardware reload <hwd_id> --postinstall <url_to_post_install_script>. My post-install script is not being executed after the reload. Since these are ESXi hosts, they have a different way of executing post-install scripts. How do I make my post-install script execute after the reload? Below is the script that I want to run after boot.
#!/bin/sh
# log file for the cleanup output (adjust the path as needed)
logfile=/tmp/vsan_cleanup.log

esxcli vsan storage automode set --enabled=false >> ${logfile} 2>&1
esxcli vsan cluster leave >> ${logfile} 2>&1
esxcli vsan storage list | grep "Is SSD: true" -C5 | grep "Display Name" | awk '{print $3}' |
while IFS= read -r line
do
    echo "removing: [$line]" >> ${logfile} 2>&1
    esxcli vsan storage remove -s "$line" >> ${logfile} 2>&1
    sleep 10
done
echo "VSAN cleanup script has finished" >> ${logfile} 2>&1

I'm afraid your script will not be executed after the reload; vSphere/ESXi uses a different provisioning mechanism that is not supported by the post-install script feature.
Supported systems are listed here https://knowledgelayer.softlayer.com/topic/provisioning-scripts.
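If the cleanup still has to happen after a reload, one possible workaround (only a sketch, assuming SSH is enabled on the reloaded ESXi host; the host name is a placeholder) is to copy the script over and run it by hand once the reload completes:

scp vsan_cleanup.sh root@<esxi_host>:/tmp/vsan_cleanup.sh
ssh root@<esxi_host> 'sh /tmp/vsan_cleanup.sh'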

Related

Start Shrew VPN client (iked & ikec) on start-up of OSMC on Raspberry Pi 2

I would like to connect to a VPN on start-up of OSMC.
Environment:
installed OSMC on Raspberry 2
downloaded, compiled and installed shrew soft vpn on the device
As user 'osmc' with ssh
> sudo iked starts the daemon successfully
> ikec -r "test.vpn" -a starts the client, loads the config and connects successfully
rc.local:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
sudo iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
ikec -a -r "test.vpn" >> /home/osmc/ikec.log 2>> /home/osmc/ikec.error.log &
exit 0
After the Raspberry Pi starts, iked is visible as a process with ps -e,
but ikec is not running.
Running osmc@osmc:~$ /etc/rc.local manually starts the script and connects to the VPN successfully.
Problem:
Why does the script not work correctly on start-up?
Thank you for your help!
I was also looking to do the same thing as you and ran into the same problem. I'm no Linux expert, but I did figure out a workaround.
I created a script called ikec_after_reboot.sh and it looks like this...
$ cat ikec_after_reboot.sh
#!/bin/bash
echo "Starting ikec"
ikec -r test.vpn -a
I then installed cron.
sudo apt-get update
sudo apt-get install cron
Edit the cron job as root and run the ikec script 60 seconds after reboot.
sudo crontab -e
SHELL=/bin/bash
@reboot sleep 60 && /home/osmc/ikec_after_reboot.sh >> /home/osmc/ikec.log 2>&1
Now edit your /etc/rc.local file and add the following.
sudo iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
exit 0
Hopefully, this is helpful to you.
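As a quick sanity check (just a suggestion, reusing the paths from above), you can confirm the job is installed and watch the client log after a reboot:

sudo crontab -l                 # should list the @reboot entry
tail -f /home/osmc/ikec.log     # follow the client output after boot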

howto: elastic beanstalk + deploy docker + graceful shutdown

Hi great people of Stack Overflow,
We're hosting a Docker container on EB with Node.js-based code running on it.
When redeploying our Docker container we'd like the old one to do a graceful shutdown.
I've found help & guides on how our code could receive a SIGTERM signal produced by the 'docker stop' command.
However, further investigation on the EB machine running Docker, at:
/opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh
shows that when "flipping" from the current to the new staged container, the old one is killed with 'docker kill'.
Is there any way to change this behaviour to docker stop?
Or in general a recommended approach to handling graceful shutdown of the old container?
Thanks!
Self-answering, as I've found a solution that works for us:
tl;dr: use .ebextensions scripts to run your script before 01flip; your script will make sure a graceful shutdown of whatever is inside the Docker container takes place.
First,
your app (or whatever you're running in Docker) has to be able to catch a signal, SIGINT for example, and shut down gracefully upon it.
This is totally unrelated to Docker; you can test it running anywhere (locally, for example).
There is a lot of info about getting this kind of behaviour done for different kinds of apps on the net (be it Ruby, Node.js, etc.).
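As a minimal illustration of that first point (a plain shell sketch, not the actual Node.js worker; the cleanup step is a placeholder), catching SIGINT and exiting cleanly looks like this:

#!/bin/sh
# toy "worker" that shuts down gracefully on SIGINT/SIGTERM
cleanup() {
    echo "signal received, finishing up..."
    # placeholder for real cleanup: close connections, flush queues, etc.
    sleep 1
    exit 0
}
trap cleanup INT TERM
echo "worker running with PID $$"
while true; do
    sleep 1
done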
Second,
your EB/Docker-based project can have a .ebextensions folder that holds all kinds of scripts to execute while deploying.
We put 2 custom scripts into it, gracefulshutdown_01.config and gracefulshutdown_02.config, which look something like this:
# gracefulshutdown_01.config
commands:
  backup-original-flip-hook:
    command: cp -f /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh /opt/elasticbeanstalk/hooks/appdeploy/01flip.sh.bak
    test: '[ ! -f /opt/elasticbeanstalk/hooks/appdeploy/01flip.sh.bak ]'
  cleanup-custom-hooks:
    command: rm -f 05gracefulshutdown.sh
    cwd: /opt/elasticbeanstalk/hooks/appdeploy/enact
    ignoreErrors: true
and:
# gracefulshutdown_02.config
commands:
  reorder-original-flip-hook:
    command: mv /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh /opt/elasticbeanstalk/hooks/appdeploy/enact/10flip.sh
    test: '[ -f /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh ]'
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/enact/05gracefulshutdown.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      # find currently running docker
      EB_CONFIG_DOCKER_CURRENT_APP_FILE=$(/opt/elasticbeanstalk/bin/get-config container -k app_deploy_file)
      EB_CONFIG_DOCKER_CURRENT_APP=""
      if [ -f $EB_CONFIG_DOCKER_CURRENT_APP_FILE ]; then
        EB_CONFIG_DOCKER_CURRENT_APP=`cat $EB_CONFIG_DOCKER_CURRENT_APP_FILE | cut -c 1-12`
        echo "Graceful shutdown on app container: $EB_CONFIG_DOCKER_CURRENT_APP"
      else
        echo "NO CURRENT APP TO GRACEFUL SHUTDOWN FOUND"
        exit 0
      fi
      # give graceful kill command to all running .js files (not stats!!)
      docker exec $EB_CONFIG_DOCKER_CURRENT_APP sh -c "ps x -o pid,command | grep -E 'workers' | grep -v -E 'forever|grep' " | awk '{print $1}' | xargs docker exec $EB_CONFIG_DOCKER_CURRENT_APP kill -s SIGINT
      echo "sent kill signals"
      # wait (max 5 mins) until processes are done and terminate themselves
      TRIES=100
      until [ $TRIES -eq 0 ]; do
        PIDS=`docker exec $EB_CONFIG_DOCKER_CURRENT_APP sh -c "ps x -o pid,command | grep -E 'workers' | grep -v -E 'forever|grep' " | awk '{print $1}' | cat`
        echo TRIES $TRIES PIDS $PIDS
        if [ -z "$PIDS" ]; then
          echo "finished graceful shutdown of docker $EB_CONFIG_DOCKER_CURRENT_APP"
          exit 0
        else
          let TRIES-=1
          sleep 3
        fi
      done
      echo "failed to graceful shutdown, please investigate manually"
      exit 1
gracefulshutdown_01.config is a small util that backs up the original 01flip and deletes (if it exists) our custom script.
gracefulshutdown_02.config is where the magic happens.
It creates a 05gracefulshutdown enact script and makes sure flip will happen afterwards by renaming it to 10flip.
05gracefulshutdown, the custom script, basically does this:
find the currently running docker container
find all processes that need to be sent a SIGINT (for us it's processes with 'workers' in their name)
send a sigint to the above processes
loop:
check if processes from before were killed
continue looping for an amount of tries
if the tries run out, exit with status "1" and don't continue to 10flip; manual intervention is needed.
This assumes you only have 1 docker container running on the machine, and that you are able to manually hop on to check what's wrong in case it fails (for us this has never happened yet).
I imagine it can also be improved in many ways, so have fun.
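After a deploy you can verify (for example over SSH on the instance) that the hooks ended up in the intended order; this simply lists the enact directory used above:

ls -l /opt/elasticbeanstalk/hooks/appdeploy/enact/
# expected: 05gracefulshutdown.sh sorted before 10flip.sh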

How to run script in Solaris after boot once

I am looking for the right way to run a shell script on the first boot of Solaris.
I need to run a resize command; here is my script:
#!/bin/sh -ux
echo "#!/bin/sh -ux" > /etc/rc3.d/S90scale
echo "/sbin/zpool set autoexpand=on rpool" >> /etc/rc3.d/S90scale
echo "/sbin/zpool online -e rpool c1d0" >> /etc/rc3.d/S90scale
echo "rm /etc/rc3.d/S90scale" >> /etc/rc3.d/S90scale
echo "/sbin/shutdown -y -i6 -g0" >> /etc/rc3.d/S90scale
chmod a+x /etc/rc3.d/S90scale
Actually, the script itself runs properly, but unfortunately the resize does not work. When I do the same things from a user session, everything is just fine.
What exactly am I doing wrong?
Your method is not the "right" one to run a script once after boot, as it uses the legacy approach. The correct way would be to create an SMF service that runs once. However, it does work anyway with Solaris 10 and 11, as the rc scripts, while deprecated, are still processed, so I won't elaborate more about SMF.
The main issue is that you don't check for errors, and whatever happens, the script removes itself and reboots, preventing any analysis.
I would suggest modifying your script to log what is happening to a file and quit on error:
#!/bin/ksh
cat > /etc/rc3.d/S90scale <<%EOF%
exec > /var/tmp/S90scale.log 2>&1 # logs everything to file
set -xe # show commands and exits on error
/sbin/zpool set autoexpand=on rpool
/sbin/zpool online -e rpool c1d0
mv /etc/rc3.d/S90scale /etc/rc3.d/_S90scale
/sbin/shutdown -y -i6 -g0
%EOF%
chmod a+x /etc/rc3.d/S90scale
After the next reboot completes, you should have a look at the /var/tmp/S90scale.log file and will possibly see an error message there.
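Once the box is back up, a quick way to confirm both the script output and the pool settings (standard zpool commands, nothing specific to this setup):

cat /var/tmp/S90scale.log       # output captured by the rc script
zpool get autoexpand rpool      # should report autoexpand=on
zpool list rpool                # SIZE should reflect the expanded disk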

Where should I add the --rest option for MongoDB?

I need to use MongoDB with the --rest option. But MongoDB is started automatically on boot, so I guess I need to modify a file or something.
Where can I add this --rest option?
I have this file at /etc/init/mongodb.conf, but I'm not sure what to edit:
# Ubuntu upstart file at /etc/init/mongodb.conf
limit nofile 20000 20000
kill timeout 300 # wait 300s between SIGTERM and SIGKILL.
pre-start script
    mkdir -p /var/lib/mongodb/
    mkdir -p /var/log/mongodb/
end script
start on runlevel [2345]
stop on runlevel [06]
script
    ENABLE_MONGODB="yes"
    if [ -f /etc/default/mongodb ]; then . /etc/default/mongodb; fi
    if [ "x$ENABLE_MONGODB" = "xyes" ]; then exec start-stop-daemon --start --quiet --chuid mongodb --exec /usr/bin/mongod -- --config /etc/mongodb.conf; fi
end script
And this file at /etc/init.d/mongodb:
#!/bin/sh -e
# upstart-job
#
# Symlink target for initscripts that have been converted to Upstart.
set -e
INITSCRIPT="$(basename "$0")"
JOB="${INITSCRIPT%.sh}"
if [ "$JOB" = "upstart-job" ]; then
    if [ -z "$1" ]; then
        echo "Usage: upstart-job JOB COMMAND" 1>&2
        exit 1
    fi
    JOB="$1"
    INITSCRIPT="$1"
    shift
else
    if [ -z "$1" ]; then
        echo "Usage: $0 COMMAND" 1>&2
        exit 1
    fi
fi
COMMAND="$1"
shift
if [ -z "$DPKG_MAINTSCRIPT_PACKAGE" ]; then
    ECHO=echo
else
    ECHO=:
fi
$ECHO "Rather than invoking init scripts through /etc/init.d, use the service(8)"
$ECHO "utility, e.g. service $INITSCRIPT $COMMAND"
# Only check if jobs are disabled if the currently _running_ version of
# Upstart (which may be older than the latest _installed_ version)
# supports such a query.
#
# This check is necessary to handle the scenario when upgrading from a
# release without the 'show-config' command (introduced in
# Upstart for Ubuntu version 0.9.7) since without this check, all
# installed packages with associated Upstart jobs would be considered
# disabled.
#
# Once Upstart can maintain state on re-exec, this change can be
# dropped (since the currently running version of Upstart will always
# match the latest installed version).
UPSTART_VERSION_RUNNING=$(initctl version|awk '{print $3}'|tr -d ')')
if dpkg --compare-versions "$UPSTART_VERSION_RUNNING" ge 0.9.7
then
    initctl show-config -e "$JOB"|grep -q '^ start on' || DISABLED=1
fi
case $COMMAND in
status)
    $ECHO
    $ECHO "Since the script you are attempting to invoke has been converted to an"
    $ECHO "Upstart job, you may also use the $COMMAND(8) utility, e.g. $COMMAND $JOB"
    $COMMAND "$JOB"
    ;;
start|stop)
    $ECHO
    $ECHO "Since the script you are attempting to invoke has been converted to an"
    $ECHO "Upstart job, you may also use the $COMMAND(8) utility, e.g. $COMMAND $JOB"
    if status "$JOB" 2>/dev/null | grep -q ' start/'; then
        RUNNING=1
    fi
    if [ -z "$RUNNING" ] && [ "$COMMAND" = "stop" ]; then
        exit 0
    elif [ -n "$RUNNING" ] && [ "$COMMAND" = "start" ]; then
        exit 0
    elif [ -n "$DISABLED" ] && [ "$COMMAND" = "start" ]; then
        exit 0
    fi
    $COMMAND "$JOB"
    ;;
restart)
    $ECHO
    $ECHO "Since the script you are attempting to invoke has been converted to an"
    $ECHO "Upstart job, you may also use the stop(8) and then start(8) utilities,"
    $ECHO "e.g. stop $JOB ; start $JOB. The restart(8) utility is also available."
    if status "$JOB" 2>/dev/null | grep -q ' start/'; then
        RUNNING=1
    fi
    if [ -n "$RUNNING" ] ; then
        stop "$JOB"
    fi
    # If the job is disabled and is not currently running, the job is
    # not restarted. However, if the job is disabled but has been forced into the
    # running state, we *do* stop and restart it since this is expected behaviour
    # for the admin who forced the start.
    if [ -n "$DISABLED" ] && [ -z "$RUNNING" ]; then
        exit 0
    fi
    start "$JOB"
    ;;
reload|force-reload)
    $ECHO
    $ECHO "Since the script you are attempting to invoke has been converted to an"
    $ECHO "Upstart job, you may also use the reload(8) utility, e.g. reload $JOB"
    reload "$JOB"
    ;;
*)
    $ECHO
    $ECHO "The script you are attempting to invoke has been converted to an Upstart" 1>&2
    $ECHO "job, but $COMMAND is not supported for Upstart jobs." 1>&2
    exit 1
esac
It's probably cleaner to enable the REST interface via /etc/mongodb.conf by adding a line of:
rest = true
That setting is documented here.
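For example (a sketch only, assuming the stock Ubuntu layout from the question, where the upstart job points mongod at /etc/mongodb.conf), the change plus a restart would look like:

echo "rest = true" | sudo tee -a /etc/mongodb.conf   # append the option
sudo service mongodb restart                         # restart so mongod picks it up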
MongoDB version 2.6 has switched to a YAML config file. The following two entries are required to prevent the following startup warning:
mongodb WARNING: --rest is specified without --httpinterface
net:
  http:
    enabled: true
    RESTInterfaceEnabled: true
When you start the server with the mongod command, add the --rest option, like this: mongod --rest.
Refer to mongod in the MongoDB Manual 2.6.
After the command completes, you can use the following simple RESTful API:
http://127.0.0.1:28017/databaseName/collectionName/
Here is the simple RESTful API doc.
Just start the server using mongod --rest
Note: By default, the REST APIs are inaccessible for security reasons. The web interface is accessible at localhost:<port>, where the port number is 1000 more than the mongod port. For example, if your MongoDB server is running at 27017 (the default), then you can access it at
http://127.0.0.1:28017/<db-name>/<collection-name>/
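As a quick check (assuming the default ports, and a hypothetical database test with a collection foo), the interface can be queried with curl:

curl http://127.0.0.1:28017/test/foo/   # returns the collection contents as JSON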

Postgres command line install from zip file fails to properly start the Windows service after the database has been configured

Below is the batch file I am kicking off at the end of an installer to properly configure my Postgres database; however, an issue arises wherein the service is unable to be started.
This is what the service parameters look like:
C:\Program Files\pgsql\bin/pg_ctl.exe runservice -N "MyPostGres" -D "C:/Program Files/pgsql/PGDATA"
Batch file
@echo off
if "%1" == "" goto displayUsage
if "%2" == "" goto displayUsage
if "%3" == "" goto displayUsage
goto setup
:displayUsage
echo supply arguments: SUPER_USER_NAME SUPER_USER_PASSWORD USER_PASSWORD \ >> C:\myLog.log
goto end
:setup
echo %~1 >> C:\myLog.log
echo %~2 >> C:\myLog.log
echo %~3 >> C:\myLog.log
echo calling initdb! >> C:\myLog.log
initdb -U MY_PG ../PGDATA
echo calling pg_ctl register! >> C:\myLog.log
rem provide a variable for PGDATA
pg_ctl register -N TravisPostGres -U MY_PG -P wooo! -D "C:\Program Files\pgsql\PGDATA"
echo calling pg_ctl start! >> C:\myLog.log
pg_ctl start -D ../PGDATA -w
echo calling createuser! %~1 >> C:\myLog.log
createuser -s -UMY_PG %~1
:end
echo the end! >> C:\myLog.log
exit /B 0
I discovered that the issue was the existence of the following file:
C:\Program
This file is invisible in the Explorer view, but when executing dir from the root of C: it showed up as a 0 KB file. The likely reason it matters is that the service's image path is unquoted, so Windows can try to run C:\Program (with the rest of the path as arguments) instead of the intended binary under C:\Program Files. Deleting this file proved to be the fix.
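A sketch of how to spot and remove such a stray file from an elevated command prompt (the /a switch also lists hidden entries):

rem list everything in the root of C:, including hidden entries
dir C:\ /a
rem remove the stray 0 KB file so unquoted "C:\Program Files\..." paths resolve correctly
del "C:\Program"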