How do you stop a Perl Dancer/Starman/Plack server?

I started a Dancer/Starman server using:
sudo plackup -s Starman -p 5001 -E deployment --workers=10 -a mywebapp/bin/app.pl
but I'm unsure how I can stop the server. Can someone provide me with a quick way of stopping it and all the workers it has spawned?

Start the server with the
--pid /path/to/the/pid.file
option and you can kill the process based on its PID.
So, using the above option, you can run
kill $(cat /path/to/the/pid.file)
The pid file simply stores the master's PID; there is no need to dig through ps output.
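For example, a full start/stop cycle might look like this (a sketch assuming the same app and port as in the question; the pid file path is arbitrary):
sudo plackup -s Starman -p 5001 -E deployment --workers=10 --pid /var/run/mywebapp.pid -a mywebapp/bin/app.pl
# later, stop the master and all of its workers:
sudo kill $(cat /var/run/mywebapp.pid)
Starman's master should also honor QUIT for a graceful stop (workers finish in-flight requests first), so kill -QUIT $(cat /var/run/mywebapp.pid) is worth trying as well.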

pkill -f starman
This kills processes by matching their name; with -f the match is against the full command line.
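If you want to preview what would be matched before killing anything, pgrep uses the same matching rules:
pgrep -f -l starman
pkill -f starman then sends SIGTERM to every process listed.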

On Windows you can press Ctrl+C in the console where the server is running (the same keystroke as copy, but in a console it cancels the running process). Tested and working.

Related

How to stop kubectl proxy

I executed the commands below:
kubectl proxy --port=8081 &
kubectl proxy --port=8082 &
and of course I have 2 accessible endpoints:
curl http://localhost:8081/api/
curl http://localhost:8082/api/
But at the same time I have two running processes serving the same content.
How do I stop one of these processes in a "kubectl" manner?
Of course I can kill the process, but that seems like a less elegant way...
I believe the "kubectl way" is to not background the proxy at all, as it is intended to be a short-running process to access the API on your local machine without further authentication.
There is no way to stop it other than kill or ^C (if it is not in the background).
You can use standard shell job control though: run fg and then press ^C, or use kill %1, as in the sketch below.
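For example, in the shell where both proxies were started (a hypothetical session; %1 and %2 are the shell's job numbers):
jobs            # list the backgrounded proxies
kill %2         # stop the proxy started second (port 8082)
fg %1           # foreground the other one, then press ^C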
Run this command to figure out the process id (pid):
netstat -tulp | grep kubectl
Then run sudo kill -9 <pid> to kill the process.
ps -ef | grep "kubectl proxy"
will show you the PID of the process
Then you can stop it with
kill -9 <pid>
Depending on the platform, you could wrap the proxy in a service/daemon, but that seems like overkill; I would just add aliases or functions to start and kill it, and source them in your terminal/shell profile to make it easier.
https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingLaunchdJobs.html
or
kubectl-proxy-start() {
  kubectl proxy &
}
kubectl-proxy-kill() {
  pkill -9 -f "kubectl proxy"
}
The following works for me on macOS:
pkill -9 -f "kubectl proxy"
Filter (grep) all "kube" PIDs and kill each in a loop (note this matches anything with "kube" in its command, e.g. kubelet, so be careful on a cluster node):
for pid in `netstat -tulp | grep kube | awk '{print $7}' | awk -F"/" '{print $1}' | uniq`
do
  kill -9 $pid
done
Try this, using your own port numbers of course:
$ pkill -f 'kubectl proxy --port=8080'
On Windows, to stop a background job in PowerShell:
Get-Job
Stop-Job Job1

howto: elastic beanstalk + deploy docker + graceful shutdown

Hi great people of stackoverflow,
We're hosting a Docker container on EB with a Node.js-based app running in it.
When redeploying our Docker container, we'd like the old one to shut down gracefully.
I've found help & guides on how our code can receive the SIGTERM signal produced by the 'docker stop' command.
However, further investigation into the EB machine running Docker, at
/opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh
shows that when "flipping" from the current container to the newly staged one, the old one is killed with 'docker kill'.
Is there any way to change this behaviour to docker stop?
Or, in general, is there a recommended approach to handling graceful shutdown of the old container?
Thanks!
Self-answering, as I've found a solution that works for us:
tl;dr: use .ebextensions scripts to run your script before 01flip; your script will make sure whatever is inside the Docker container gets shut down gracefully.
First,
your app (or whatever you're running in Docker) has to be able to catch a signal, SIGINT for example, and shut down gracefully upon receiving it.
This is totally unrelated to Docker; you can test it running anywhere (locally, for example).
There is a lot of info on the net about getting this kind of behaviour for different kinds of apps (be it Ruby, Node.js, etc.).
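As a minimal illustration of the idea (a toy shell sketch, not our actual Node.js code), this is the behaviour your app needs, whatever the language: trap the signal, finish up, exit cleanly:
#!/bin/sh
# toy worker: catches SIGINT and shuts down gracefully
cleanup() {
    echo "caught SIGINT, finishing current work..."
    # flush queues / close connections here
    exit 0
}
trap cleanup INT
while true; do
    sleep 1    # stand-in for real work
done
Run it, then send it kill -INT <its pid> and watch it exit cleanly.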
Second,
your EB/Docker-based project can have a .ebextensions folder that holds all kinds of scripts to execute while deploying.
We put two custom scripts into it, gracefulshutdown_01.config and gracefulshutdown_02.config, which look something like this:
# gracefulshutdown_01.config
commands:
  backup-original-flip-hook:
    command: cp -f /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh /opt/elasticbeanstalk/hooks/appdeploy/01flip.sh.bak
    test: '[ ! -f /opt/elasticbeanstalk/hooks/appdeploy/01flip.sh.bak ]'
  cleanup-custom-hooks:
    command: rm -f 05gracefulshutdown.sh
    cwd: /opt/elasticbeanstalk/hooks/appdeploy/enact
    ignoreErrors: true
and:
# gracefulshutdown_02.config
commands:
  reorder-original-flip-hook:
    command: mv /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh /opt/elasticbeanstalk/hooks/appdeploy/enact/10flip.sh
    test: '[ -f /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh ]'
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/enact/05gracefulshutdown.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      # find the currently running docker container
      EB_CONFIG_DOCKER_CURRENT_APP_FILE=$(/opt/elasticbeanstalk/bin/get-config container -k app_deploy_file)
      EB_CONFIG_DOCKER_CURRENT_APP=""
      if [ -f $EB_CONFIG_DOCKER_CURRENT_APP_FILE ]; then
        EB_CONFIG_DOCKER_CURRENT_APP=`cat $EB_CONFIG_DOCKER_CURRENT_APP_FILE | cut -c 1-12`
        echo "Graceful shutdown on app container: $EB_CONFIG_DOCKER_CURRENT_APP"
      else
        echo "NO CURRENT APP TO SHUT DOWN GRACEFULLY"
        exit 0
      fi
      # send a graceful kill signal to all running worker processes (not stats!!)
      docker exec $EB_CONFIG_DOCKER_CURRENT_APP sh -c "ps x -o pid,command | grep -E 'workers' | grep -v -E 'forever|grep'" | awk '{print $1}' | xargs docker exec $EB_CONFIG_DOCKER_CURRENT_APP kill -s SIGINT
      echo "sent kill signals"
      # wait (max 5 mins) until the processes finish and terminate themselves
      TRIES=100
      until [ $TRIES -eq 0 ]; do
        PIDS=`docker exec $EB_CONFIG_DOCKER_CURRENT_APP sh -c "ps x -o pid,command | grep -E 'workers' | grep -v -E 'forever|grep'" | awk '{print $1}' | cat`
        echo TRIES $TRIES PIDS $PIDS
        if [ -z "$PIDS" ]; then
          echo "finished graceful shutdown of docker $EB_CONFIG_DOCKER_CURRENT_APP"
          exit 0
        else
          TRIES=$((TRIES - 1))   # POSIX arithmetic; 'let' is bash-only and this script runs under /bin/sh
          sleep 3
        fi
      done
      echo "failed to shut down gracefully, please investigate manually"
      exit 1
gracefulshutdown_01.config is a small utility that backs up the original 01flip and deletes our custom script if it exists.
gracefulshutdown_02.config is where the magic happens.
It creates a 05gracefulshutdown enact script and makes sure the flip happens afterwards by renaming 01flip to 10flip.
05gracefulshutdown, the custom script, basically does this:
- find the currently running docker container
- find all processes that need to be sent a SIGINT (for us, it's the processes with 'workers' in their names)
- send a SIGINT to those processes
- loop:
  - check whether the processes from before have exited
  - keep looping for a set number of tries
- if the tries run out, exit with status 1 and don't continue to 10flip; manual intervention is needed.
This assumes you have only one Docker container running on the machine, and that you are able to hop on manually and check what's wrong in case it fails (which for us hasn't happened yet).
I imagine it can also be improved in many ways, so have fun.

How can I have a custom restart script for runit?

I'm using runit to manage an HAProxy and want to do a safe restart to reload a configuration file (specifically: haproxy -f /etc/haproxy/haproxy.cfg -sf $OLD_PROCESS_ID). I figure that I could run sv restart haproxy and tried to add a custom script named /etc/service/haproxy/restart, but it never seems to execute. How do I have a special restart script? Is my approach even good here? How do I reload my config with minimal impact using runit?
HAProxy runit service script
/etc/service/haproxy/run
#!/bin/sh
#
# runit haproxy
#
# forward stderr to stdout for use with runit svlogd
exec 2>&1
PID_PATH=/var/run/haproxy/haproxy.pid
BIN_PATH=/opt/haproxy/sbin/haproxy
CFG_PATH=/opt/haproxy/etc/haproxy.cfg
exec /bin/bash <<EOF
$BIN_PATH -f $CFG_PATH -D -p $PID_PATH
trap "echo SIGHUP caught; $BIN_PATH -f $CFG_PATH -D -p $PID_PATH -sf \\\$(cat $PID_PATH)" SIGHUP
trap "echo SIGTERM caught; kill -TERM \\\$(cat $PID_PATH) && exit 0" SIGTERM SIGINT
while true; do  # Iterate to keep job running.
  sleep 1       # Wake up to handle signals
done
EOF
Graceful reload that keeps things up and running:
sv reload haproxy
Full stop and start:
sv restart haproxy
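If I read the runit docs correctly, sv reload sends SIGHUP to the run process; the trap above then launches a new HAProxy with -sf, so the old process finishes its existing connections before exiting. With the paths from the script, the manual equivalent is roughly:
/opt/haproxy/sbin/haproxy -f /opt/haproxy/etc/haproxy.cfg -D -p /var/run/haproxy/haproxy.pid -sf $(cat /var/run/haproxy/haproxy.pid)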
This solution was inspired by https://gist.github.com/gfrey/8472007

Why is sleep needed after fabric call to pg_ctl restart

I'm using Fabric to initialize a postgres server. I have to add a "sleep 1" at the end of the command, or the postgres server processes die without explanation and without an entry in the log:
sudo('%(pgbin)s/pg_ctl -D %(pgdata)s -l /tmp/pg.log restart && sleep 1' % env, user='postgres')
That is, I see this output on the terminal:
[dbserv] Executing task 'setup_postgres'
[dbserv] run: /bin/bash -l -c "sudo -u postgres /usr/lib/postgresql/9.1/bin/pg_ctl -D /data/pg -l /tmp/pg.log restart && sleep 1"
[dbserv] out: waiting for server to shut down.... done
[dbserv] out: server stopped
[dbserv] out: server starting
Without the && sleep 1, there's nothing in /tmp/pg.log (though the file is created), and no postgres processes are running. With the sleep, everything works fine.
(And if I execute the same command directly on the target machine's command line, it works fine without the sleep.)
Since it's working, it doesn't really matter, but I'm asking anyway: Does someone know what the sleep is allowing to happen and why?
You might also try setting the pty option to False, to see whether this is related to how Fabric handles pseudo-ttys. A plausible explanation: when the command returns, the pseudo-terminal is torn down, and any process still attached to it can receive a SIGHUP; the && sleep 1 would then simply give the freshly restarted postmaster enough time to finish detaching from the terminal before it disappears.

How can I tail a remote file?

I am trying to find a good way to tail a file on a remote host. This is on an internal network of Linux machines. The requirements are:
Must be well behaved (no extra processes lying around, no continuing output)
Cannot require someone's pet Perl module.
Can be invoked through Perl.
If possible, doesn't require a custom built script or utility on the remote machine (regular linux utilities are fine)
The solutions I have tried are generally of this sort:
ssh remotemachine -f <some command>
where "some command" has been:
tail -f logfile
A basic tail doesn't work because the remote process continues to write output to the terminal after the local ssh process dies.
$socket = IO::Socket::INET->new(...);
$pid = fork();
if(!$pid)
{
    exec("ssh $host -f '<script which connects to socket and writes>'");
    exit;
}
$client = $socket->accept;
while(<$client>)
{
    print $_;
}
This works better because there is no output to the screen after the local process exits, but the remote process doesn't figure out that its socket is down, so it lives on indefinitely.
Have you tried
ssh -t remotemachine <some command>
The -t option, from the ssh man page:
-t      Force pseudo-tty allocation. This can be used to execute
        arbitrary screen-based programs on a remote machine, which
        can be very useful, e.g. when implementing menu services.
        Multiple -t options force tty allocation, even if ssh has
        no local tty.
instead of:
-f      Requests ssh to go to background just before command execution.
        This is useful if ssh is going to ask for passwords or
        passphrases, but the user wants it in the background.
        This implies -n. The recommended way to start X11 programs
        at a remote site is with something like ssh -f host xterm.
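For example (hypothetical host and log path), the remote tail dies together with the local ^C:
ssh -t remotemachine tail -f /var/log/messages
From Perl this can be as simple as system("ssh -t $host tail -f $logfile").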
Some ideas:
You could mount it over NFS or CIFS, and then use File::Tail.
You could use one of Perl's SSH modules (there are a number of them), combined with tail -f.
You could try Survlog, though it is OS X only.
netcat should do it for you.
You can tail files remotely using bash and rsync. The following script is taken from this tutorial: Tail files remotely using bash and rsync.
#!/bin/bash
#Code Snippet from and copyright by sshadmincontrol.com
#You may use this code freely as long as you keep this notice.
PIDHOME=/a_place/to/store/flag/file
FILE=`echo ${0} | sed 's:.*/::'`
RUNFILEFLAG=${PIDHOME}/${FILE}.running
if [ -e $RUNFILEFLAG ]; then
    echo "Already running ${RUNFILEFLAG}"
    exit 1
else
    touch ${RUNFILEFLAG}
fi
hostname=$1  # host name to access remotely
log_dir=$2   # log directory on the remote host
log_file=$3  # remote log file name
username=$4  # username to use to access the remote host
log_base=$5  # where to save the log locally
ORIGLOG="$log_base/$hostname/${log_file}.orig"
INTERLOG="$log_base/$hostname/${log_file}.inter"
FINALLOG="$log_base/$hostname/${log_file}.log"
# pull down the whole remote file, then strip static-asset noise
rsync -q -e ssh $username@$hostname:$log_dir/$log_file ${ORIGLOG}
grep -Ev ".ico|.jpg|.gif|.png|.css" ${ORIGLOG} > ${INTERLOG}
if [ ! -e $FINALLOG ]; then
    cp ${INTERLOG} ${FINALLOG}
else
    # append only the lines added since the last run
    LINE=`tail -1 ${FINALLOG}`
    grep -F "$LINE" -A 999999999 ${INTERLOG} \
        | grep -Fv "$LINE" >> ${FINALLOG}
fi
rm ${RUNFILEFLAG}
exit 0
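An invocation would then look something like this (hypothetical arguments; assumes the script is saved as remotetail.sh, is executable, and the $log_base/$hostname directory already exists):
./remotetail.sh webhost01 /var/log/apache2 access.log deploy /home/me/logs
Each run appends only the lines added since the last one, so you would drive it from cron or a loop to approximate a remote tail -f.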
rsync://[USER@]HOST[:PORT]/SRC... [DEST] | tail [DEST] ?
Someone suggested using nc (netcat). This solution does work, but is less ideal than just using ssh -t. The biggest problem is that you have to use nc on both sides of the connection, and you need to do some port discovery on the local machine to find a suitable port to connect over. Here is the adaptation of the above code to use netcat:
$pid = fork();
if(!$pid)
{
    # remote side: pipe the tail into a netcat client pointed back at us
    exec("ssh $host -f 'tail -f $filename | nc $localhost $port'");
    exit;
}
# local side: listen for the incoming stream (replaces the current process)
exec("nc -l -p $port");
There is File::Tail. Don't know if it helps?