How to get Puma port inside Sinatra

I have two bash scripts. The first, start_dummy:
#!/usr/bin/env bash
#killall puma
ulimit -s 16384
ofile=logs/access_`date +%F_%H%M%S`.log
RACK_ENV=production puma -b tcp://0.0.0.0:22522 2>&1 | tee $ofile
and the second, start_test:
#!/usr/bin/env bash
#killall puma
ulimit -s 16384
ofile=logs/access_`date +%F_%H%M%S`.log
RACK_ENV=production puma -b tcp://0.0.0.0:22577 2>&1 | tee $ofile
I want to connect to a different database depending on which port the app is running on:
require 'sinatra'
# *snip* sinatra configuration
require 'data_mapper'
DataMapper::Model.raise_on_save_failure = true
if # __WHAT__ # when puma listens on 22577
DataMapper.setup(:default, 'postgres://original@127.0.0.1/original')
else # when puma listens on 22522
DataMapper.setup(:default, 'postgres://dummy@127.0.0.1/dummy')
end
What should I put in place of __WHAT__?

Inside a request handler you can check the incoming port with request.port:
if request.port == 22577
DataMapper.setup(:default, 'postgres://original@127.0.0.1/original')
else
DataMapper.setup(:default, 'postgres://dummy@127.0.0.1/dummy')
end
A better way to do this would be to store the connection string in an environment variable that can be different per server:
DataMapper.setup(:default, ENV['DATABASE_URL'])
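For example, the two start scripts could each export a different connection string before launching Puma. A minimal sketch of start_dummy under that approach (the DATABASE_URL name and database URIs are only illustrations):
#!/usr/bin/env bash
ulimit -s 16384
ofile=logs/access_`date +%F_%H%M%S`.log
# each script exports its own connection string for the app to pick up
export DATABASE_URL='postgres://dummy@127.0.0.1/dummy'
RACK_ENV=production puma -b tcp://0.0.0.0:22522 2>&1 | tee $ofile
start_test would export 'postgres://original@127.0.0.1/original' and bind to port 22577 in the same way.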

Related

Kubectl port forward reliably in a shell script

I am using kubectl port-forward in a shell script, but I find it is not reliable, or it doesn't come up in time:
kubectl port-forward ${VOLT_NODE} ${VOLT_CLUSTER_ADMIN_PORT}:${VOLT_CLUSTER_ADMIN_PORT} -n ${NAMESPACE} &
if [ $? -ne 0 ]; then
echo "Unable to start port forwarding to node ${VOLT_NODE} on port ${VOLT_CLUSTER_ADMIN_PORT}"
exit 1
fi
PORT_FORWARD_PID=$!
sleep 10
Often after I sleep for 10 seconds, the port isn't open or the forwarding hasn't happened. Is there any way to wait for this to be ready? Something like kubectl wait would be ideal, but I'm open to shell options as well.
I took @AkinOzer's comment and turned it into this example, where I port-forward a PostgreSQL database's port so I can make a pg_dump of the database:
#!/bin/bash
set -e
localport=54320
typename=service/pvm-devel-kcpostgresql
remoteport=5432
# This would show that the port is closed
# nmap -sT -p $localport localhost || true
kubectl port-forward $typename $localport:$remoteport > /dev/null 2>&1 &
pid=$!
# echo pid: $pid
# kill the port-forward regardless of how this script exits
trap '{
# echo killing $pid
kill $pid
}' EXIT
# wait for $localport to become available
while ! nc -vz localhost $localport > /dev/null 2>&1 ; do
# echo sleeping
sleep 0.1
done
# This would show that the port is open
# nmap -sT -p $localport localhost
# Actually use that port for something useful - here making a backup of the
# keycloak database
PGPASSWORD=keycloak pg_dump --host=localhost --port=54320 --username=keycloak -Fc --file keycloak.dump keycloak
# the 'trap ... EXIT' above will take care of kill $pid
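If there is a chance that the forward never comes up at all (wrong resource name, missing permissions), the wait loop above can be bounded so the script fails instead of hanging. A hedged variation of the same loop:
# give up after roughly 10 seconds (100 * 0.1s) instead of waiting forever
attempts=100
while ! nc -vz localhost $localport > /dev/null 2>&1 ; do
  attempts=$((attempts - 1))
  if [ "$attempts" -le 0 ]; then
    echo "port-forward to $typename never became ready on port $localport" >&2
    exit 1
  fi
  sleep 0.1
done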

How to check if a WildFly Server has started successfully using command/script?

I want to write a script to manage WildFly start-up and deployment, but I'm having trouble. To check whether the server has started, I found the command
./jboss-cli.sh -c command=':read-attribute(name=server-state)' | grep running
But while the server is still starting, the controller is not yet available, so ./jboss-cli.sh -c fails to connect and returns an error.
Is there a better way to check whether WildFly has started completely?
I found a better solution. The command is
netstat -an | grep 9990 | grep LISTEN
Check the state of the management port (9990) first; it starts listening before WildFly is ready to accept management commands, so this check avoids the connection errors from jboss-cli.
After that, use ./jboss-cli.sh -c command=':read-attribute(name=server-state)' | grep running to check whether the server has started. Change the port if the management port is not configured to the default 9990.
Here is my start & deploy script. The idea is to check continually until the server has started, then use the jboss-cli command to deploy my application. It also prints the log to the screen, so there is no need to tail the log file from another shell.
#!/bin/sh
totalRow=0
printLog(){ #output the new log in the server.log to screen
local newTotal=$(awk 'END{print NR}' ./standalone/log/server.log) #quicker than wc -l
local diff=$(($newTotal-$totalRow))
tail -n $diff ./standalone/log/server.log
totalRow=$newTotal
}
nohup bin/standalone.sh > /dev/null 2>&1 &
echo '======================================== Jboss-eap-7.1 is starting now ========================================'
while true #check if the port is ready
do
sleep 1
if netstat -an | grep 9990 | grep LISTEN
then
printLog
break
fi
printLog
done
while true #check if the server started successfully
do
if bin/jboss-cli.sh --connect command=':read-attribute(name=server-state)' | grep running
then
printLog
break
fi
printLog
sleep 1
done
echo '======================================== Jboss-eap-7.1 has started!!!!!! ========================================'
bin/jboss-cli.sh --connect command='deploy /bcms/jboss-eap-7.1/war/myApp.war' &
tail -f -n0 ./standalone/log/server.log
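On minimal images where netstat is not installed, the same port check can be done with ss from iproute2. A hedged equivalent of the netstat line used above:
# equivalent of `netstat -an | grep 9990 | grep LISTEN` using ss
if ss -ltn | grep -q ':9990 '
then
    echo "management port 9990 is listening"
fi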

How can I have a custom restart script for runit?

I'm using runit to manage an HAProxy instance and want to do a safe restart to reload a configuration file (specifically: haproxy -f /etc/haproxy/haproxy.cfg -sf $OLD_PROCESS_ID). I figured I could run sv restart haproxy, and tried adding a custom script named /etc/service/haproxy/restart, but it never seems to execute. How do I provide a custom restart script? Is my approach even sound here? How do I reload my config with minimal impact using runit?
HAProxy runit service script
/etc/service/haproxy/run
#!/bin/sh
#
# runit haproxy
#
# forward stderr to stdout for use with runit svlogd
exec 2>&1
PID_PATH=/var/run/haproxy/haproxy.pid
BIN_PATH=/opt/haproxy/sbin/haproxy
CFG_PATH=/opt/haproxy/etc/haproxy.cfg
exec /bin/bash <<EOF
$BIN_PATH -f $CFG_PATH -D -p $PID_PATH
trap "echo SIGHUP caught; $BIN_PATH -f $CFG_PATH -D -p $PID_PATH -sf \\\$(cat $PID_PATH)" SIGHUP
trap "echo SIGTERM caught; kill -TERM \\\$(cat $PID_PATH) && exit 0" SIGTERM SIGINT
while true; do # Iterate to keep job running.
sleep 1 # Wake up to handle signals
done
EOF
Graceful reload that keeps things up and running.
sv reload haproxy
Full stop and start.
sv restart haproxy
This solution was inspired by https://gist.github.com/gfrey/8472007
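One small addition that may be worth making: validate the new configuration before signalling the reload, so a syntax error does not take the proxy down. A hedged sketch using HAProxy's config-check mode and the paths from the run script above:
# parse-check the config, then send HUP via runit only if it is valid
if /opt/haproxy/sbin/haproxy -c -f /opt/haproxy/etc/haproxy.cfg
then
    sv reload haproxy
else
    echo "haproxy.cfg has errors; reload skipped" >&2
fi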

Stopping supervisord: Shut down

I tried to start supervisord but I am getting an error. Can anyone help? Thanks.
Here is my /etc/init.d/supervisord file:
SUPERVISORD=/usr/local/bin/supervisord
SUPERVISORCTL=/usr/local/bin/supervisorctl
case $1 in
start)
echo -n "Starting supervisord: "
$SUPERVISORD
echo
;;
stop)
echo -n "Stopping supervisord: "
$SUPERVISORCTL shutdown
echo
;;
restart)
echo -n "Stopping supervisord: "
$SUPERVISORCTL shutdown
echo
echo -n "Starting supervisord: "
$SUPERVISORD
echo
;;
esac
Then I run these commands:
sudo chmod +x /etc/init.d/supervisord
sudo update-rc.d supervisord defaults
sudo /etc/init.d/supervisord start
And I get this:
Stopping supervisord: Shut down
Starting supervisord: /usr/local/lib/python2.7/dist-packages/supervisor/options.py:286: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
Error: Another program is already listening on a port that one of our HTTP servers is configured to use. Shut this program down first before starting supervisord.
For help, use /usr/local/bin/supervisord -h
Conf file (located at /etc/supervisord.conf):
[unix_http_server]
file=/tmp/supervisor.sock; (the path to the socket file)
[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock; use a unix:// URL for a unix socket
[program:myproject]
command=/home/richard/envs/myproject_stage/bin/python /home/richard/webapps/myproject/manage.py run_gunicorn -b 127.0.0.1:8002 --log-file=/tmp/myproject_stage_gunicorn.log
directory=/home/richard/webapps/myproject/
user=www-data
autostart=true
autorestart=true
stdout_logfile=/tmp/myproject_stage_supervisord.log
redirect_stderr=true
First of all, type this in your console or terminal:
ps -ef | grep supervisord
You will get the PIDs of supervisord processes, something like this:
root 2641 12938 0 04:52 pts/1 00:00:00 grep --color=auto supervisord
root 29646 1 0 04:45 ? 00:00:00 /usr/bin/python /usr/local/bin/supervisord
If you get output like that, the supervisord PID is on the second line. To shut down supervisord, you can then run:
kill -s SIGTERM 29646
Hope it's helpful. Ref: http://supervisord.org/running.html#signals
sudo unlink /tmp/supervisor.sock
This .sock file is defined by the file value in the [unix_http_server] section of /etc/supervisord.conf (the default is /tmp/supervisor.sock).
$ ps aux | grep supervisor
alexamil 54253 0.0 0.0 2506960 6440 ?? Ss 10:09PM 0:00.26 /usr/bin/python /usr/local/bin/supervisord -c supervisord.conf
so we can use:
$ pkill -f supervisord # kill it
This is what works for me. Run the following in the terminal (for Linux machines).
To check if the process is running:
sudo systemctl status supervisor
To stop the process:
sudo systemctl stop supervisor
Try running these commands
sudo unlink /run/supervisor.sock
and
sudo /etc/init.d/supervisor start
As of version 3.0a11, you could do this one-liner:
sudo kill -s SIGTERM $(sudo supervisorctl pid), which piggybacks on the supervisorctl pid command.
There are many answers available already; I shall present a cleaner way to shut down supervisord.
By default, supervisord creates a file named supervisord.pid in the directory where the supervisord.conf file lives. That file contains the PID of the supervisord daemon. Read the PID from the file and kill the supervisord process.
You can configure where the supervisord.pid file is created; see http://supervisord.org/configuration.html
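With the configuration shown in the question (pidfile=/tmp/supervisord.pid), that amounts to something like:
# read the daemon's PID from its pidfile and ask it to shut down cleanly
sudo kill -s SIGTERM $(cat /tmp/supervisord.pid)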

Where should I add the --rest option for MongoDB?

I need to use MongoDB with the --rest option, but mongodb is started automatically on boot, so I guess I need to modify a file or something.
Where can I add this --rest option?
I have this file at /etc/init/mongodb.conf, and I'm not sure what to edit:
# Ubuntu upstart file at /etc/init/mongodb.conf
limit nofile 20000 20000
kill timeout 300 # wait 300s between SIGTERM and SIGKILL.
pre-start script
mkdir -p /var/lib/mongodb/
mkdir -p /var/log/mongodb/
end script
start on runlevel [2345]
stop on runlevel [06]
script
ENABLE_MONGODB="yes"
if [ -f /etc/default/mongodb ]; then . /etc/default/mongodb; fi
if [ "x$ENABLE_MONGODB" = "xyes" ]; then exec start-stop-daemon --start --quiet --chuid mongodb --exec /usr/bin/mongod -- --config /etc/mongodb.conf; fi
end script
And this file at /etc/init.d/mongodb:
#!/bin/sh -e
# upstart-job
#
# Symlink target for initscripts that have been converted to Upstart.
set -e
INITSCRIPT="$(basename "$0")"
JOB="${INITSCRIPT%.sh}"
if [ "$JOB" = "upstart-job" ]; then
if [ -z "$1" ]; then
echo "Usage: upstart-job JOB COMMAND" 1>&2
exit 1
fi
JOB="$1"
INITSCRIPT="$1"
shift
else
if [ -z "$1" ]; then
echo "Usage: $0 COMMAND" 1>&2
exit 1
fi
fi
COMMAND="$1"
shift
if [ -z "$DPKG_MAINTSCRIPT_PACKAGE" ]; then
ECHO=echo
else
ECHO=:
fi
$ECHO "Rather than invoking init scripts through /etc/init.d, use the service(8)"
$ECHO "utility, e.g. service $INITSCRIPT $COMMAND"
# Only check if jobs are disabled if the currently _running_ version of
# Upstart (which may be older than the latest _installed_ version)
# supports such a query.
#
# This check is necessary to handle the scenario when upgrading from a
# release without the 'show-config' command (introduced in
# Upstart for Ubuntu version 0.9.7) since without this check, all
# installed packages with associated Upstart jobs would be considered
# disabled.
#
# Once Upstart can maintain state on re-exec, this change can be
# dropped (since the currently running version of Upstart will always
# match the latest installed version).
UPSTART_VERSION_RUNNING=$(initctl version|awk '{print $3}'|tr -d ')')
if dpkg --compare-versions "$UPSTART_VERSION_RUNNING" ge 0.9.7
then
initctl show-config -e "$JOB"|grep -q '^ start on' || DISABLED=1
fi
case $COMMAND in
status)
$ECHO
$ECHO "Since the script you are attempting to invoke has been converted to an"
$ECHO "Upstart job, you may also use the $COMMAND(8) utility, e.g. $COMMAND $JOB"
$COMMAND "$JOB"
;;
start|stop)
$ECHO
$ECHO "Since the script you are attempting to invoke has been converted to an"
$ECHO "Upstart job, you may also use the $COMMAND(8) utility, e.g. $COMMAND $JOB"
if status "$JOB" 2>/dev/null | grep -q ' start/'; then
RUNNING=1
fi
if [ -z "$RUNNING" ] && [ "$COMMAND" = "stop" ]; then
exit 0
elif [ -n "$RUNNING" ] && [ "$COMMAND" = "start" ]; then
exit 0
elif [ -n "$DISABLED" ] && [ "$COMMAND" = "start" ]; then
exit 0
fi
$COMMAND "$JOB"
;;
restart)
$ECHO
$ECHO "Since the script you are attempting to invoke has been converted to an"
$ECHO "Upstart job, you may also use the stop(8) and then start(8) utilities,"
$ECHO "e.g. stop $JOB ; start $JOB. The restart(8) utility is also available."
if status "$JOB" 2>/dev/null | grep -q ' start/'; then
RUNNING=1
fi
if [ -n "$RUNNING" ] ; then
stop "$JOB"
fi
# If the job is disabled and is not currently running, the job is
# not restarted. However, if the job is disabled but has been forced into the
# running state, we *do* stop and restart it since this is expected behaviour
# for the admin who forced the start.
if [ -n "$DISABLED" ] && [ -z "$RUNNING" ]; then
exit 0
fi
start "$JOB"
;;
reload|force-reload)
$ECHO
$ECHO "Since the script you are attempting to invoke has been converted to an"
$ECHO "Upstart job, you may also use the reload(8) utility, e.g. reload $JOB"
reload "$JOB"
;;
*)
$ECHO
$ECHO "The script you are attempting to invoke has been converted to an Upstart" 1>&2
$ECHO "job, but $COMMAND is not supported for Upstart jobs." 1>&2
exit 1
esac
It's probably cleaner to enable the REST interface via /etc/mongodb.conf by adding a line of:
rest = true
That setting is documented here.
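After editing /etc/mongodb.conf, the daemon has to be restarted for the setting to take effect; with the Upstart job shown in the question, something like the following should work (assuming the job is named mongodb):
sudo service mongodb restart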
MongoDB 2.6 switched to a YAML config file. The following two entries are required to avoid this startup warning:
mongodb WARNING: --rest is specified without --httpinterface
net:
  http:
    enabled: true
    RESTInterfaceEnabled: true
When you start the server with the mongod command, add the --rest option, like this: mongod --rest.
Refer to mongod in the MongoDB Manual 2.6.
After the command completes, you can use the simple RESTful API:
http://127.0.0.1:28017/databaseName/collectionName/
There is also documentation for the simple RESTful API.
Just start the server using mongod --rest
Note: By default, the REST APIs are inaccessible for security reasons. The web interface is available at localhost:<port>, where the port number is 1000 more than the mongod port. For example, if your MongoDB server is running on 27017 (the default), you can access it at
http://127.0.0.1:28017/<db-name>/<collection-name>/
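For instance, once the interface is enabled, a collection can be read over plain HTTP; a quick check with curl (the database and collection names here are hypothetical):
# list documents in the `users` collection of the `test` database
curl http://127.0.0.1:28017/test/users/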