Restart Play Framework Activator using a script on mac & linux - scala

I am trying to develop a script that can restart an Activator instance running on a specified port. I normally run my Activator project on port 15000 and I am aiming to have it restarted by the script, so that I can later call the script from a web page to restart Activator remotely, etc.
So far I have found a really handy utility on Linux called fuser, which can find the process listening on a specified port and kill it. Something like:
fuser -k 15000/tcp
This works fine on Linux but NOT on a Mac.
I guess I would also need to somehow track the activator project location to start it later.
Please let me know your suggestions and comments on how this can be done.
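For reference, a rough cross-platform equivalent of fuser -k 15000/tcp is to let lsof find the PID(s) listening on the port; lsof ships with both macOS and Linux. A minimal sketch (the port number is the one from the question; adjust the signal to taste):

#!/bin/bash
# kill-port.sh - stop whatever is listening on a given TCP port.
port="${1:-15000}"
pids=$(lsof -ti tcp:"$port")
if [ -n "$pids" ]; then
    echo "Killing PID(s) $pids on port $port"
    kill $pids              # unquoted on purpose: there may be several PIDs
else
    echo "Nothing listening on port $port"
fi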

I'm using a bash script for this. It works on Linux and Mac OS.
It's named loader.sh and placed in your distribution's root.
To stop the app it uses the kill command with the PID stored in RUNNING_PID.
#!/bin/bash

# Change IP address and port here
address="127.0.0.1"
port="9000"

# Get directory and add it to PATH
dir="$( cd "$( dirname "$0" )" && pwd )"
export PATH="$dir:$dir/bin:$PATH"

function start() {
    # Check if we started already
    [ -f "$dir/RUNNING_PID" ] && return

    echo -n "Starting"
    # You can specify a config file with -Dconfig.resource
    # or a secret with -Dplay.crypto.secret
    myApp -Dhttp.port=$port -Dhttp.address=$address > /dev/null &
    echo "...started"
}

function stop() {
    [ -f "$dir/RUNNING_PID" ] || return

    echo -n "Stopping"
    kill -SIGTERM $(cat "$dir/RUNNING_PID")
    while [ -f "$dir/RUNNING_PID" ]
    do
        sleep 0.5
    done
    echo "...stopped"
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    *)
        echo "Usage: loader.sh start|stop|restart"
        exit 1
        ;;
esac
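For completeness, a typical invocation from the distribution root looks like this (assuming the script has been made executable):

chmod +x loader.sh
./loader.sh restart     # or: ./loader.sh start / ./loader.sh stop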

Related

How do I capture output from a running process in a bash variable

I have a Swift command line program which runs a server and prints the server's URL when it starts. I'm then trying to capture the URL in a bash shell variable so I can pass it to other programs.
Basically my Swift program looks like this
@main
struct MyApplication {
    static func main() throws {
        let server = try VoodooServer {
            Endpoints.config
        }
        print(server.url.absoluteString)
        server.wait()
    }
}
and when I run it from the command line I get output that looks like this:
% .build/release/server run -c Tests/files/TestConfig3
http://127.0.0.1:8082
However, when I try to capture the URL using
% export SERVER_URL=`.build/release/server run -c Tests/files/TestConfig3` &
[3] 19101
and then check the exported variables with export, there's nothing there.
I've tried commenting out the wait() call so the server exits immediately, and then I do get the URL in the variable, i.e. running
% export SERVER_URL=`.build/release/server run -c Tests/files/TestConfig3`
% echo $SERVER_URL
http://127.0.0.1:8080
So I'm guessing the problem is that, because the server is not exiting, the value is never stored, since stdout has not been closed or something like that.
So how can I capture the output from the server into a variable without stopping it?
Your problem is the usage of &:
$ export HELLO=`echo world` &
[1] 3774017
[1]+ Done export HELLO=`echo world`
$ export | grep HELLO
$ export HELLO=`echo world`
$ export | grep HELLO
declare -x HELLO="world"
When you run a command "regularly", the shell just runs it as you would expect. Examples of regular running:
echo world
.build/release/server run -c Tests/files/TestConfig3
export HELLO=`echo world`
export SERVER_URL=`.build/release/server run -c Tests/files/TestConfig3`
When you run things with &, you're asking the shell to run them in the background, while you continue about your day.
That means that your shell has to keep accepting your commands, but also run the background command.
So the shell launches a background shell where it runs your commands. Meaning, when you run:
export SERVER_URL=`.build/release/server run -c Tests/files/TestConfig3` &
The shell launches a background shell, and the background shell runs:
export SERVER_URL=`.build/release/server run -c Tests/files/TestConfig3`
That background shell will indeed export SERVER_URL to its own subprocesses, but your regular, foreground shell, isn't a subprocess of the background shell. Rather, the background shell is a subprocess of the foreground shell.
That is why the export isn't visible in the foreground shell.
Unfortunately, there's no simple way to capture that URL while the server is still running. What people usually do is have the server write that information to a file, so that the foreground shell can read the file, e.g.:
$ ( (sleep 1; echo world > config; sleep 50) & ) &
[1] 3775004
[1]+ Done ( ( sleep 1; echo world > config; sleep 50 ) & )
$ sleep 1
$ export HELLO=`cat config`
$ export | grep HELLO
declare -x HELLO="world"
(I have replaced your Swift server with a simple bash command that goes to the background via fancy bash syntax)
As you can see, the background process writes its configuration to the file config, but it's difficult to know when config will be written, so you have to resort to something more complex:
$ ( (sleep 10; echo world > config.tmp; mv config.tmp config; sleep 50) & ) &
[1] 3775481
[1]+ Done ( ( sleep 10; echo world > config.tmp; mv config.tmp config; sleep 50 ) & )
$ while ! [ -f config ]; do sleep 1; done
$ export HELLO=`cat config`
$ export | grep HELLO
declare -x HELLO="world"
Here, we're writing to config.tmp, and we're only renaming it to config after we finish, to ensure that when the foreground shell tries to read, it reads the full configuration after the server definitely finished writing it.
But on the foreground side, we actually have to wait for it to finish writing it, which is what the while loop is for.
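Applied to the original server, a hedged sketch of the same idea is to redirect the server's stdout to a file and poll that file for the URL line. This assumes the server flushes the URL promptly even when stdout is not a terminal; if it buffers, have the Swift code write the URL to a file itself, as described above.

# Start the server in the background, capturing its stdout in a file.
.build/release/server run -c Tests/files/TestConfig3 > server.out &
SERVER_PID=$!

# Wait until a full URL line has appeared in the file, then grab it.
while ! grep -q '^http' server.out 2>/dev/null; do sleep 0.5; done
export SERVER_URL=$(grep -m1 '^http' server.out)

echo "$SERVER_URL"       # e.g. http://127.0.0.1:8082
# ... use $SERVER_URL, then stop the server when you're done:
# kill $SERVER_PID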

Start shrew vpn client (iked & ikec) on start-up of OSMC on Raspberry 2

I would like to connect to a VPN on start-up of OSMC.
Environment:
installed OSMC on Raspberry 2
downloaded, compiled and installed shrew soft vpn on the device
As user 'osmc' with ssh
> sudo iked starts the daemon successfully
> ikec -r "test.vpn" -a starts the client, loads the config and connects successfully
rc.local:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
sudo iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
ikec -a -r "test.vpn" >> /home/osmc/ikec.log 2>> /home/osmc/ikec.error.log &
exit 0
After the Raspberry Pi starts, iked is visible as a process with ps -e,
but ikec is not running.
Running the script manually with osmc@osmc:~$ /etc/rc.local starts it and connects to the VPN successfully.
Problem:
Why does the script not work correctly on start-up?
Thank you for your help!
I was also looking to do the same thing as you and ran into the same problem. I'm no linux expert, but I did figure out a workaround.
I created a script called ikec_after_reboot.sh and it looks like this...
$ cat ikec_after_reboot.sh
#!/bin/bash
echo "Starting ikec"
ikec -r test.vpn -a
I then installed cron.
sudo apt-get update
sudo apt-get install cron
Edit the cron job as root so that it runs the ikec script 60 seconds after reboot:
sudo crontab -e
SHELL=/bin/bash
@reboot sleep 60 && /home/osmc/ikec_after_reboot.sh >> /home/osmc/ikec.log 2>&1
Now edit your /etc/rc.local file and add the following.
sudo iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
exit 0
Hopefully, this is helpful to you.
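If you would rather not install cron, a hedged variant of the same workaround is to keep the delay inside /etc/rc.local itself. The 60-second sleep is the same guess used in the cron job; adjust it to your boot time. Note that rc.local already runs as root, so sudo is not strictly needed there.

#!/bin/sh -e
# /etc/rc.local - sketch: start iked, then start ikec after a delay so the
# daemon (and the network) have time to come up.
iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
( sleep 60 && ikec -a -r "test.vpn" >> /home/osmc/ikec.log 2>> /home/osmc/ikec.error.log ) &
exit 0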

howto: elastic beanstalk + deploy docker + graceful shutdown

Hi great people of stackoverflow,
We're hosting a Docker container on EB with Node.js based code running on it.
When redeploying our Docker container we'd like the old one to do a graceful shutdown.
I've found help & guides on how our code could receive a SIGTERM signal produced by the 'docker stop' command.
However, further investigation on the EB machine running Docker, at:
/opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh
shows that when "flipping" from the current to the new staged container, the old one is killed with 'docker kill'.
Is there any way to change this behaviour to docker stop?
Or in general a recommended approach to handling graceful shutdown of the old container?
Thanks!
Self answering as I've found a solution that works for us:
tl;dr: use .ebextensions scripts to run your script before 01flip; your script will make sure a graceful shutdown of whatever is inside the Docker container takes place.
First,
your app (or whatever you're running in Docker) has to be able to catch a signal, SIGINT for example, and shut down gracefully upon it.
This is totally unrelated to Docker; you can test it running anywhere (locally, for example).
There is a lot of info on the net about getting this kind of behaviour for different kinds of apps (be it Ruby, Node.js etc...).
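As a minimal sketch of that idea in shell (a Node.js worker would do the equivalent with process.on('SIGINT', ...)):

#!/bin/bash
# Toy "worker" that shuts down gracefully on SIGINT/SIGTERM.
cleanup() {
    echo "caught signal, finishing in-flight work..."
    # close connections, flush queues, etc.
    exit 0
}
trap cleanup INT TERM

echo "worker running (pid $$), send SIGINT to stop"
while true; do
    sleep 1
done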
Second,
your EB/Docker based project can have a .ebextensions folder that holds all kinds of scripts to execute while deploying.
We put 2 custom scripts into it, gracefulshutdown_01.config and gracefulshutdown_02.config, which look something like this:
# gracefulshutdown_01.config
commands:
  backup-original-flip-hook:
    command: cp -f /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh /opt/elasticbeanstalk/hooks/appdeploy/01flip.sh.bak
    test: '[ ! -f /opt/elasticbeanstalk/hooks/appdeploy/01flip.sh.bak ]'
  cleanup-custom-hooks:
    command: rm -f 05gracefulshutdown.sh
    cwd: /opt/elasticbeanstalk/hooks/appdeploy/enact
    ignoreErrors: true
and:
# gracefulshutdown_02.config
commands:
  reorder-original-flip-hook:
    command: mv /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh /opt/elasticbeanstalk/hooks/appdeploy/enact/10flip.sh
    test: '[ -f /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh ]'
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/enact/05gracefulshutdown.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh

      # find currently running docker
      EB_CONFIG_DOCKER_CURRENT_APP_FILE=$(/opt/elasticbeanstalk/bin/get-config container -k app_deploy_file)
      EB_CONFIG_DOCKER_CURRENT_APP=""
      if [ -f $EB_CONFIG_DOCKER_CURRENT_APP_FILE ]; then
        EB_CONFIG_DOCKER_CURRENT_APP=`cat $EB_CONFIG_DOCKER_CURRENT_APP_FILE | cut -c 1-12`
        echo "Graceful shutdown on app container: $EB_CONFIG_DOCKER_CURRENT_APP"
      else
        echo "NO CURRENT APP TO GRACEFUL SHUTDOWN FOUND"
        exit 0
      fi

      # give graceful kill command to all running .js files (not stats!!)
      docker exec $EB_CONFIG_DOCKER_CURRENT_APP sh -c "ps x -o pid,command | grep -E 'workers' | grep -v -E 'forever|grep' " | awk '{print $1}' | xargs docker exec $EB_CONFIG_DOCKER_CURRENT_APP kill -s SIGINT
      echo "sent kill signals"

      # wait (max 5 mins) until processes are done and terminate themselves
      TRIES=100
      until [ $TRIES -eq 0 ]; do
        PIDS=`docker exec $EB_CONFIG_DOCKER_CURRENT_APP sh -c "ps x -o pid,command | grep -E 'workers' | grep -v -E 'forever|grep' " | awk '{print $1}' | cat`
        echo TRIES $TRIES PIDS $PIDS
        if [ -z "$PIDS" ]; then
          echo "finished graceful shutdown of docker $EB_CONFIG_DOCKER_CURRENT_APP"
          exit 0
        else
          let TRIES-=1
          sleep 3
        fi
      done

      echo "failed to graceful shutdown, please investigate manually"
      exit 1
gracefulshutdown_01.config is a small util that backs up the original 01flip.sh and deletes our custom script (if it exists).
gracefulshutdown_02.config is where the magic happens.
It creates the 05gracefulshutdown enact script and makes sure the flip happens afterwards by renaming the original to 10flip.sh.
05gracefulshutdown, the custom script, basically does this:
find the currently running Docker container
find all processes that need to be sent a SIGINT (for us it's the processes with 'workers' in their name)
send a SIGINT to those processes
loop:
check whether the processes from before have exited
keep looping for a limited number of tries
if the tries run out, exit with status "1" and don't continue to 10flip; manual intervention is needed.
This assumes you only have 1 Docker container running on the machine, and that you are able to manually hop on and check what went wrong if it fails (for us that has never happened yet).
I imagine it can also be improved in many ways, so have fun.

Where should I add the --rest option for MongoDB?

I need to use mongodb with the --rest option. But mongodb is started automatically on boot, so I guess I need to modify a file or something.
Where can I add this --rest option?
I have this file at /etc/init/mongodb.conf, not sure what to edit:
# Ubuntu upstart file at /etc/init/mongodb.conf

limit nofile 20000 20000

kill timeout 300 # wait 300s between SIGTERM and SIGKILL.

pre-start script
    mkdir -p /var/lib/mongodb/
    mkdir -p /var/log/mongodb/
end script

start on runlevel [2345]
stop on runlevel [06]

script
    ENABLE_MONGODB="yes"
    if [ -f /etc/default/mongodb ]; then . /etc/default/mongodb; fi
    if [ "x$ENABLE_MONGODB" = "xyes" ]; then exec start-stop-daemon --start --quiet --chuid mongodb --exec /usr/bin/mongod -- --config /etc/mongodb.conf; fi
end script
And this file at /etc/init.d/mongodb:
#!/bin/sh -e
# upstart-job
#
# Symlink target for initscripts that have been converted to Upstart.

set -e

INITSCRIPT="$(basename "$0")"
JOB="${INITSCRIPT%.sh}"

if [ "$JOB" = "upstart-job" ]; then
    if [ -z "$1" ]; then
        echo "Usage: upstart-job JOB COMMAND" 1>&2
        exit 1
    fi
    JOB="$1"
    INITSCRIPT="$1"
    shift
else
    if [ -z "$1" ]; then
        echo "Usage: $0 COMMAND" 1>&2
        exit 1
    fi
fi

COMMAND="$1"
shift

if [ -z "$DPKG_MAINTSCRIPT_PACKAGE" ]; then
    ECHO=echo
else
    ECHO=:
fi

$ECHO "Rather than invoking init scripts through /etc/init.d, use the service(8)"
$ECHO "utility, e.g. service $INITSCRIPT $COMMAND"

# Only check if jobs are disabled if the currently _running_ version of
# Upstart (which may be older than the latest _installed_ version)
# supports such a query.
#
# This check is necessary to handle the scenario when upgrading from a
# release without the 'show-config' command (introduced in
# Upstart for Ubuntu version 0.9.7) since without this check, all
# installed packages with associated Upstart jobs would be considered
# disabled.
#
# Once Upstart can maintain state on re-exec, this change can be
# dropped (since the currently running version of Upstart will always
# match the latest installed version).
UPSTART_VERSION_RUNNING=$(initctl version|awk '{print $3}'|tr -d ')')

if dpkg --compare-versions "$UPSTART_VERSION_RUNNING" ge 0.9.7
then
    initctl show-config -e "$JOB"|grep -q '^ start on' || DISABLED=1
fi

case $COMMAND in
status)
    $ECHO
    $ECHO "Since the script you are attempting to invoke has been converted to an"
    $ECHO "Upstart job, you may also use the $COMMAND(8) utility, e.g. $COMMAND $JOB"
    $COMMAND "$JOB"
    ;;
start|stop)
    $ECHO
    $ECHO "Since the script you are attempting to invoke has been converted to an"
    $ECHO "Upstart job, you may also use the $COMMAND(8) utility, e.g. $COMMAND $JOB"
    if status "$JOB" 2>/dev/null | grep -q ' start/'; then
        RUNNING=1
    fi
    if [ -z "$RUNNING" ] && [ "$COMMAND" = "stop" ]; then
        exit 0
    elif [ -n "$RUNNING" ] && [ "$COMMAND" = "start" ]; then
        exit 0
    elif [ -n "$DISABLED" ] && [ "$COMMAND" = "start" ]; then
        exit 0
    fi
    $COMMAND "$JOB"
    ;;
restart)
    $ECHO
    $ECHO "Since the script you are attempting to invoke has been converted to an"
    $ECHO "Upstart job, you may also use the stop(8) and then start(8) utilities,"
    $ECHO "e.g. stop $JOB ; start $JOB. The restart(8) utility is also available."
    if status "$JOB" 2>/dev/null | grep -q ' start/'; then
        RUNNING=1
    fi
    if [ -n "$RUNNING" ] ; then
        stop "$JOB"
    fi
    # If the job is disabled and is not currently running, the job is
    # not restarted. However, if the job is disabled but has been forced into the
    # running state, we *do* stop and restart it since this is expected behaviour
    # for the admin who forced the start.
    if [ -n "$DISABLED" ] && [ -z "$RUNNING" ]; then
        exit 0
    fi
    start "$JOB"
    ;;
reload|force-reload)
    $ECHO
    $ECHO "Since the script you are attempting to invoke has been converted to an"
    $ECHO "Upstart job, you may also use the reload(8) utility, e.g. reload $JOB"
    reload "$JOB"
    ;;
*)
    $ECHO
    $ECHO "The script you are attempting to invoke has been converted to an Upstart" 1>&2
    $ECHO "job, but $COMMAND is not supported for Upstart jobs." 1>&2
    exit 1
esac
It's probably cleaner to enable the REST interface via /etc/mongodb.conf by adding a line of:
rest = true
That setting is documented here.
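For the upstart setup shown in the question, a hedged sketch of applying that change would be:

# Append the setting to the config file used by the upstart job,
# then restart the service so mongod picks it up.
echo "rest = true" | sudo tee -a /etc/mongodb.conf
sudo service mongodb restart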
MongoDB version 2.6 has switched to a YAML config file. The following two entries are required to prevent the following startup warning:
mongodb WARNING: --rest is specified without --httpinterface
net:
  http:
    enabled: true
    RESTInterfaceEnabled: true
When you start the server using the mongod command, add the --rest option, like this: mongod --rest.
Refer to mongod in the MongoDB Manual 2.6.
Once the command is running, you can use the following simple RESTful API:
http://127.0.0.1:28017/databaseName/collectionName/
Here is the simple RESTful API doc.
Just start the server using mongod --rest.
Note: By default, the REST API is inaccessible for security reasons. The web interface is accessible at localhost:<port>, where the port number is 1000 more than the mongod port. For example, if your MongoDB server is running on 27017 (the default), then you can access the web interface at
http://127.0.0.1:28017/<db-name>/<collection-name>/
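A quick way to check that the interface is up (the database and collection names here are hypothetical):

# mongod on the default port 27017, so the HTTP/REST interface is on 28017
curl http://127.0.0.1:28017/test/users/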

init.d celery script for CentOS?

I'm writing a Django app that uses celery. So far I've been running on Ubuntu, but I'm trying to deploy to CentOS.
Celery comes with a nice init.d script for Debian-based distributions, but it doesn't work on RedHat-based distributions like CentOS because it uses start-stop-daemon. Does anybody have an equivalent one for RedHat that uses the same variable conventions so I can reuse my /etc/default/celeryd file?
This is better solved here:
Celery CentOS init script
You should be good using that one.
Since I didn't get an answer, I tried to roll my own:
#!/bin/sh
#
# chkconfig: 345 99 15
# description: celery init.d script

# Defines the following variables
#   CELERYD_CHDIR
#   DJANGO_SETTINGS_MODULE
#   CELERYD
#   CELERYD_USER
#   CELERYD_GROUP
#   CELERYD_LOG_FILE

CELERYD_PIDFILE=/var/run/celery.pid

if test -f /etc/default/celeryd; then
    . /etc/default/celeryd
fi

# Source function library.
. /etc/init.d/functions

# Celery options
CELERYD_OPTS="$CELERYD_OPTS -f $CELERYD_LOG_FILE -l $CELERYD_LOG_LEVEL"

if [ -n "$2" ]; then
    CELERYD_OPTS="$CELERYD_OPTS $2"
fi

start () {
    cd $CELERYD_CHDIR
    daemon --user $CELERYD_USER --pidfile $CELERYD_PIDFILE $CELERYD $CELERYD_OPTS &
}

stop () {
    if [[ -s $CELERYD_PIDFILE ]] ; then
        echo "Stopping Celery"
        killproc -p $CELERYD_PIDFILE python
        echo "done!"
        rm -f $CELERYD_PIDFILE
    else
        echo "Celery not running."
    fi
}

check_status() {
    status -p $CELERYD_PIDFILE python
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    status)
        check_status
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
        ;;
esac
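A hedged install/usage sketch for this script, assuming it is saved as celeryd (the chkconfig header above is what makes chkconfig --add work):

sudo cp celeryd /etc/init.d/celeryd
sudo chmod 755 /etc/init.d/celeryd
sudo chkconfig --add celeryd
sudo service celeryd start
sudo service celeryd status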