How can we run two instances of memcached on the same server on different ports? - memcached

I tried adding
-l 11211
-l 11212
to the memcached conf file, but it only listens on the first one, i.e. 11211.

First I used mikewied's solution, but then I ran into the problem of auto-starting the daemon. Another confusing thing about that solution is that it doesn't use the config from /etc. I was about to create my own start-up scripts in /etc/init.d, but then I looked into the /etc/init.d/memcached file and saw this beautiful solution:
# Usage:
# cp /etc/memcached.conf /etc/memcached_server1.conf
# cp /etc/memcached.conf /etc/memcached_server2.conf
# start all instances:
# /etc/init.d/memcached start
# start one instance:
# /etc/init.d/memcached start server1
# stop all instances:
# /etc/init.d/memcached stop
# stop one instance:
# /etc/init.d/memcached stop server1
# There is no "status" command.
Basically readers of this question just need to read the /etc/init.d/memcached file.
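For illustration, a minimal two-instance setup could look like this (a sketch; it assumes a Debian-style /etc/memcached.conf containing a -p line, and ports 11211/11212 are just examples):
sudo cp /etc/memcached.conf /etc/memcached_server1.conf
sudo cp /etc/memcached.conf /etc/memcached_server2.conf
# give each copy its own port
sudo sed -i 's/^-p .*/-p 11211/' /etc/memcached_server1.conf
sudo sed -i 's/^-p .*/-p 11212/' /etc/memcached_server2.conf
# start both instances
sudo /etc/init.d/memcached start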
Cheers

Here's what memcached says the -l option is for:
-l <addr> interface to listen on (default: INADDR_ANY, all addresses)
<addr> may be specified as host:port. If you don't specify
a port number, the value you specified with -p or -U is
used. You may specify multiple addresses separated by comma
or by using -l multiple times
First off, you need to specify the interface you want memcached to listen on when you use the -l flag. Use 0.0.0.0 for all interfaces, or 127.0.0.1 if you just want to be able to access memcached from localhost. Second, don't use two -l flags; use only one and separate the addresses with a comma. The command below should do what you want.
memcached -l 0.0.0.0:11211,0.0.0.0:11212
Keep in mind that this will have one memcached instance listening on two ports. To run two memcached instances on one machine, run these two commands instead.
memcached -p 11211 -d
memcached -p 11212 -d
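To confirm that both instances are answering (a quick check, assuming nc is installed), you can send the stats command to each port:
printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | head -n 3
printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11212 | head -n 3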

The answer from David Dzhagayev is the best one. If you don't have the correct version of the memcached init script, here is the one he is talking about:
It should work with any Linux distro using init.
#! /bin/bash
### BEGIN INIT INFO
# Provides: memcached
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Should-Start: $local_fs
# Should-Stop: $local_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start memcached daemon
# Description: Start up memcached, a high-performance memory caching daemon
### END INIT INFO
# Usage:
# cp /etc/memcached.conf /etc/memcached_server1.conf
# cp /etc/memcached.conf /etc/memcached_server2.conf
# start all instances:
# /etc/init.d/memcached start
# start one instance:
# /etc/init.d/memcached start server1
# stop all instances:
# /etc/init.d/memcached stop
# stop one instance:
# /etc/init.d/memcached stop server1
# There is no "status" command.
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/bin/memcached
DAEMONNAME=memcached
DAEMONBOOTSTRAP=/usr/share/memcached/scripts/start-memcached
DESC=memcached
test -x $DAEMON || exit 0
test -x $DAEMONBOOTSTRAP || exit 0
set -e
. /lib/lsb/init-functions
# Edit /etc/default/memcached to change this.
ENABLE_MEMCACHED=no
test -r /etc/default/memcached && . /etc/default/memcached
FILES=(/etc/memcached_*.conf)
# check for alternative config schema
if [ -r "${FILES[0]}" ]; then
CONFIGS=()
for FILE in "${FILES[@]}";
do
# remove prefix
NAME=${FILE#/etc/}
# remove suffix
NAME=${NAME%.conf}
# check optional second param
if [ $# -ne 2 ];
then
# add to config array
CONFIGS+=($NAME)
elif [ "memcached_$2" == "$NAME" ];
then
# use only one memcached
CONFIGS=($NAME)
break;
fi;
done;
if [ ${#CONFIGS[@]} == 0 ];
then
echo "Config not exist for: $2" >&2
exit 1
fi;
else
CONFIGS=(memcached)
fi;
CONFIG_NUM=${#CONFIGS[@]}
for ((i=0; i < $CONFIG_NUM; i++)); do
NAME=${CONFIGS[${i}]}
PIDFILE="/var/run/${NAME}.pid"
case "$1" in
start)
echo -n "Starting $DESC: "
if [ $ENABLE_MEMCACHED = yes ]; then
start-stop-daemon --start --quiet --exec "$DAEMONBOOTSTRAP" -- /etc/${NAME}.conf $PIDFILE
echo "$NAME."
else
echo "$NAME disabled in /etc/default/memcached."
fi
;;
stop)
echo -n "Stopping $DESC: "
start-stop-daemon --stop --quiet --oknodo --retry 5 --pidfile $PIDFILE --exec $DAEMON
echo "$NAME."
rm -f $PIDFILE
;;
restart|force-reload)
#
# If the "reload" option is implemented, move the "force-reload"
# option to the "reload" entry above. If not, "force-reload" is
# just the same as "restart".
#
echo -n "Restarting $DESC: "
start-stop-daemon --stop --quiet --oknodo --retry 5 --pidfile $PIDFILE
rm -f $PIDFILE
if [ $ENABLE_MEMCACHED = yes ]; then
start-stop-daemon --start --quiet --exec "$DAEMONBOOTSTRAP" -- /etc/${NAME}.conf $PIDFILE
echo "$NAME."
else
echo "$NAME disabled in /etc/default/memcached."
fi
;;
status)
status_of_proc -p $PIDFILE $DAEMON $NAME && exit 0 || exit $?
;;
*)
N=/etc/init.d/$NAME
echo "Usage: $N {start|stop|restart|force-reload|status}" >&2
exit 1
;;
esac
done;
exit 0

In case someone else stumbles upon this question: there is a bug in the Debian packaging of memcached (which means derivatives like Ubuntu are also affected).
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=784357
Because of this bug, even when you have separate configuration files, running sudo service memcached restart loads only the default configuration file, /etc/memcached.conf.
As mentioned in the comment here, the temporary solution (sketched as commands below) is to:
1. Remove /lib/systemd/system/memcached.service
2. Run sudo systemctl daemon-reload (don't worry, it is safe to do so)
3. Finally, run sudo service memcached restart if you are okay with losing all cache information. If not, run sudo service memcached force-reload
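As a rough sketch, those steps amount to the following commands (paths as given in the bug report above):
sudo rm /lib/systemd/system/memcached.service
sudo systemctl daemon-reload
sudo service memcached restart   # or: sudo service memcached force-reload, to keep the cache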

OK, very good answer, Tristan CHARBONNIER.
Replace the code in the file /usr/share/memcached/scripts/start-memcached with the following:
#!/usr/bin/perl -w
# start-memcached
# 2003/2004 - Jay Bonci
# This script handles the parsing of the /etc/memcached.conf file
# and was originally created for the Debian distribution.
# Anyone may use this little script under the same terms as
# memcached itself.
use strict;
if($> != 0 and $< != 0)
{
print STDERR "Only root wants to run start-memcached.\n";
exit;
}
my $params; my $etchandle; my $etcfile = "/etc/memcached.conf";
# This script assumes that memcached is located at /usr/bin/memcached, and
# that the pidfile is writable at /var/run/memcached.pid
my $memcached = "/usr/bin/memcached";
my $pidfile = "/var/run/memcached.pid";
if (scalar(@ARGV) == 2) {
$etcfile = shift(@ARGV);
$pidfile = shift(@ARGV);
}
# If we don't get a valid logfile parameter in the /etc/memcached.conf file,
# we'll just throw away all of our in-daemon output. We need to re-tie it so
# that non-bash shells will not hang on logout. Thanks to Michael Renner for
# the tip
my $fd_reopened = "/dev/null";
sub handle_logfile
{
my ($logfile) = @_;
$fd_reopened = $logfile;
}
sub reopen_logfile
{
my ($logfile) = @_;
open *STDERR, ">>$logfile";
open *STDOUT, ">>$logfile";
open *STDIN, ">>/dev/null";
$fd_reopened = $logfile;
}
# This is set up in place here to support other non -[a-z] directives
my $conf_directives = {
"logfile" => \&handle_logfile,
};
if(open $etchandle, $etcfile)
{
foreach my $line (<$etchandle>)
{
$line ||= "";
$line =~ s/\#.*//g;
$line =~ s/\s+$//g;
$line =~ s/^\s+//g;
next unless $line;
next if $line =~ /^\-[dh]/;
if($line =~ /^[^\-]/)
{
my ($directive, $arg) = $line =~ /^(.*?)\s+(.*)/;
$conf_directives->{$directive}->($arg);
next;
}
push @$params, $line;
}
}else{
$params = [];
}
push @$params, "-u root" unless(grep "-u", @$params);
$params = join " ", @$params;
if(-e $pidfile)
{
open PIDHANDLE, "$pidfile";
my $localpid = <PIDHANDLE>;
close PIDHANDLE;
chomp $localpid;
if(-d "/proc/$localpid")
{
print STDERR "memcached is already running.\n";
exit;
}else{
`rm -f $localpid`;
}
}
my $pid = fork();
if($pid == 0)
{
reopen_logfile($fd_reopened);
exec "$memcached $params";
exit(0);
}else{
if(open PIDHANDLE,">$pidfile")
{
print PIDHANDLE $pid;
close PIDHANDLE;
}else{
print STDERR "Can't write pidfile to $pidfile.\n";
}
}
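For reference, the init script above invokes this helper with the config file and the pidfile as its two arguments, so a single instance can also be started by hand, e.g.:
sudo /usr/share/memcached/scripts/start-memcached /etc/memcached_server1.conf /var/run/memcached_server1.pid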

A simple solution for CentOS 6
First copy /etc/sysconfig/memcached to /etc/sysconfig/memcached2 and write new settings to the new file.
Then copy /etc/init.d/memcached to /etc/init.d/memcached2 and change the following in the new file (a command sketch follows after this list):
- PORT to your new port (it should be overridden by /etc/sysconfig/memcached2 anyway, so we do it just in case)
- /etc/sysconfig/memcached to /etc/sysconfig/memcached2
- /var/run/memcached/memcached.pid to /var/run/memcached/memcached2.pid
- /var/lock/subsys/memcached to /var/lock/subsys/memcached2
Now you can use service memcached2 start, service memcached2 stop, etc. Don't forget chkconfig memcached2 on to start it when the machine boots up.
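A rough sketch of those steps (the sed expressions are only illustrative and assume the stock CentOS 6 files; port 11212 is just an example):
sudo cp /etc/sysconfig/memcached /etc/sysconfig/memcached2
sudo cp /etc/init.d/memcached /etc/init.d/memcached2
# new port in the new sysconfig file
sudo sed -i 's/^PORT=.*/PORT="11212"/' /etc/sysconfig/memcached2
# point the new init script at the new sysconfig, pid and lock files
sudo sed -i -e 's|/etc/sysconfig/memcached|/etc/sysconfig/memcached2|g' \
            -e 's|memcached/memcached\.pid|memcached/memcached2.pid|g' \
            -e 's|subsys/memcached|subsys/memcached2|g' /etc/init.d/memcached2
sudo service memcached2 start
sudo chkconfig memcached2 on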

In /etc/memcached.conf you can just edit the -l line like below:
-l 192.168.112.22,127.0.0.1
You must use a comma between the two IP addresses.
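After restarting memcached, you can check (for example with netstat or ss) that it is bound to both addresses:
sudo netstat -lntp | grep memcached
# or: sudo ss -lntp | grep memcached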

Related

sbt-native-packager: Scala App on Alpine Docker Image fails with permission denied

I have a Scala application that I want to run inside a Docker container. To build the docker image, I use sbt-native-packager.
The base image I am using is "openjdk:8-jre-alpine".
Tried "openjdk:8-jdk-alpine" - does not make any difference
Tried sbt-native-packager 1.3.20 - does not make any difference
project/plugins.sbt
resolvers += Resolver.typesafeRepo("releases")
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.17")
build.sbt
enablePlugins(JavaAppPackaging)
mainClass in Compile := Some("MyAppClass")
enablePlugins(DockerPlugin)
dockerBaseImage := "openjdk:8-jre-alpine" // startup fails with permission denied if using alpine :-(
Running a container with the resulting image leads to following error on startup:
docker run my-app:latest
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/opt/docker/bin/my-app\": permission denied": unknown.
ERRO[0000] error waiting for container: context canceled
The app starts up normally when using "openjdk:8-jre".
Update
Contents of /opt/docker/bin/my-app:
#!/usr/bin/env bash
### ------------------------------- ###
### Helper methods for BASH scripts ###
### ------------------------------- ###
die() {
echo "$@" 1>&2
exit 1
}
realpath () {
(
TARGET_FILE="$1"
CHECK_CYGWIN="$2"
cd "$(dirname "$TARGET_FILE")"
TARGET_FILE=$(basename "$TARGET_FILE")
COUNT=0
while [ -L "$TARGET_FILE" -a $COUNT -lt 100 ]
do
TARGET_FILE=$(readlink "$TARGET_FILE")
cd "$(dirname "$TARGET_FILE")"
TARGET_FILE=$(basename "$TARGET_FILE")
COUNT=$(($COUNT + 1))
done
if [ "$TARGET_FILE" == "." -o "$TARGET_FILE" == ".." ]; then
cd "$TARGET_FILE"
TARGET_FILEPATH=
else
TARGET_FILEPATH=/$TARGET_FILE
fi
# make sure we grab the actual windows path, instead of cygwin's path.
if [[ "x$CHECK_CYGWIN" == "x" ]]; then
echo "$(pwd -P)/$TARGET_FILE"
else
echo $(cygwinpath "$(pwd -P)/$TARGET_FILE")
fi
)
}
# TODO - Do we need to detect msys?
# Uses uname to detect if we're in the odd cygwin environment.
is_cygwin() {
local os=$(uname -s)
case "$os" in
CYGWIN*) return 0 ;;
*) return 1 ;;
esac
}
# This can fix cygwin style /cygdrive paths so we get the
# windows style paths.
cygwinpath() {
local file="$1"
if is_cygwin; then
echo $(cygpath -w $file)
else
echo $file
fi
}
# Make something URI friendly
make_url() {
url="$1"
local nospaces=${url// /%20}
if is_cygwin; then
echo "/${nospaces//\\//}"
else
echo "$nospaces"
fi
}
# This crazy function reads in a vanilla "linux" classpath string (only : are separators, and all /),
# and returns a classpath with windows style paths, and ; separators.
fixCygwinClasspath() {
OLDIFS=$IFS
IFS=":"
read -a classpath_members <<< "$1"
declare -a fixed_members
IFS=$OLDIFS
for i in "${!classpath_members[@]}"
do
fixed_members[i]=$(realpath "${classpath_members[i]}" "fix")
done
IFS=";"
echo "${fixed_members[*]}"
IFS=$OLDIFS
}
# Fix the classpath we use for cygwin.
fix_classpath() {
cp="$1"
if is_cygwin; then
echo "$(fixCygwinClasspath "$cp")"
else
echo "$cp"
fi
}
# Detect if we should use JAVA_HOME or just try PATH.
get_java_cmd() {
if [[ -n "$JAVA_HOME" ]] && [[ -x "$JAVA_HOME/bin/java" ]]; then
echo "$JAVA_HOME/bin/java"
else
echo "java"
fi
}
echoerr () {
echo 1>&2 "$@"
}
vlog () {
[[ $verbose || $debug ]] && echoerr "$@"
}
dlog () {
[[ $debug ]] && echoerr "$@"
}
execRunner () {
# print the arguments one to a line, quoting any containing spaces
[[ $verbose || $debug ]] && echo "# Executing command line:" && {
for arg; do
if printf "%s\n" "$arg" | grep -q ' '; then
printf "\"%s\"\n" "$arg"
else
printf "%s\n" "$arg"
fi
done
echo ""
}
# we use "exec" here for our pids to be accurate.
exec "$@"
}
addJava () {
dlog "[addJava] arg = '$1'"
java_args+=( "$1" )
}
addApp () {
dlog "[addApp] arg = '$1'"
app_commands+=( "$1" )
}
addResidual () {
dlog "[residual] arg = '$1'"
residual_args+=( "$1" )
}
addDebugger () {
addJava "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=$1"
}
require_arg () {
local type="$1"
local opt="$2"
local arg="$3"
if [[ -z "$arg" ]] || [[ "${arg:0:1}" == "-" ]]; then
die "$opt requires <$type> argument"
fi
}
is_function_defined() {
declare -f "$1" > /dev/null
}
# Attempt to detect if the script is running via a GUI or not
# TODO - Determine where/how we use this generically
detect_terminal_for_ui() {
[[ ! -t 0 ]] && [[ "${#residual_args}" == "0" ]] && {
echo "true"
}
# SPECIAL TEST FOR MAC
[[ "$(uname)" == "Darwin" ]] && [[ "$HOME" == "$PWD" ]] && [[ "${#residual_args}" == "0" ]] && {
echo "true"
}
}
# Processes incoming arguments and places them in appropriate global variables. called by the run method.
process_args () {
local no_more_snp_opts=0
while [[ $# -gt 0 ]]; do
case "$1" in
--) shift && no_more_snp_opts=1 && break ;;
-h|-help) usage; exit 1 ;;
-v|-verbose) verbose=1 && shift ;;
-d|-debug) debug=1 && shift ;;
-no-version-check) no_version_check=1 && shift ;;
-mem) echo "!! WARNING !! -mem option is ignored. Please use -J-Xmx and -J-Xms" && shift 2 ;;
-jvm-debug) require_arg port "$1" "$2" && addDebugger $2 && shift 2 ;;
-main) custom_mainclass="$2" && shift 2 ;;
-java-home) require_arg path "$1" "$2" && jre=`eval echo $2` && java_cmd="$jre/bin/java" && shift 2 ;;
-D*|-agentlib*|-XX*) addJava "$1" && shift ;;
-J*) addJava "${1:2}" && shift ;;
*) addResidual "$1" && shift ;;
esac
done
if [[ no_more_snp_opts ]]; then
while [[ $# -gt 0 ]]; do
addResidual "$1" && shift
done
fi
is_function_defined process_my_args && {
myargs=("${residual_args[@]}")
residual_args=()
process_my_args "${myargs[@]}"
}
}
# Actually runs the script.
run() {
# TODO - check for sane environment
# process the combined args, then reset "$#" to the residuals
process_args "$@"
set -- "${residual_args[@]}"
argumentCount=$#
#check for jline terminal fixes on cygwin
if is_cygwin; then
stty -icanon min 1 -echo > /dev/null 2>&1
addJava "-Djline.terminal=jline.UnixTerminal"
addJava "-Dsbt.cygwin=true"
fi
# check java version
if [[ ! $no_version_check ]]; then
java_version_check
fi
if [ -n "$custom_mainclass" ]; then
mainclass=("$custom_mainclass")
else
mainclass=("${app_mainclass[@]}")
fi
# Now we check to see if there are any java opts on the environment. These get listed first, with the script able to override them.
if [[ "$JAVA_OPTS" != "" ]]; then
java_opts="${JAVA_OPTS}"
fi
# run sbt
execRunner "$java_cmd" \
${java_opts[@]} \
"${java_args[@]}" \
-cp "$(fix_classpath "$app_classpath")" \
"${mainclass[@]}" \
"${app_commands[@]}" \
"${residual_args[@]}"
local exit_code=$?
if is_cygwin; then
stty icanon echo > /dev/null 2>&1
fi
exit $exit_code
}
# Loads a configuration file full of default command line options for this script.
loadConfigFile() {
cat "$1" | sed $'/^\#/d;s/\r$//'
}
# Now check to see if it's a good enough version
# TODO - Check to see if we have a configured default java version, otherwise use 1.6
java_version_check() {
readonly java_version=$("$java_cmd" -version 2>&1 | awk -F '"' '/version/ {print $2}')
if [[ "$java_version" == "" ]]; then
echo
echo No java installations was detected.
echo Please go to http://www.java.com/getjava/ and download
echo
exit 1
else
local major=$(echo "$java_version" | cut -d'.' -f1)
if [[ "$major" -eq "1" ]]; then
local major=$(echo "$java_version" | cut -d'.' -f2)
fi
if [[ "$major" -lt "6" ]]; then
echo
echo The java installation you have is not up to date
echo $app_name requires at least version 1.6+, you have
echo version $java_version
echo
echo Please go to http://www.java.com/getjava/ and download
echo a valid Java Runtime and install before running $app_name.
echo
exit 1
fi
fi
}
### ------------------------------- ###
### Start of customized settings ###
### ------------------------------- ###
usage() {
cat <<EOM
Usage: $script_name [options]
-h | -help print this message
-v | -verbose this runner is chattier
-d | -debug set sbt log level to debug
-no-version-check Don't run the java version check.
-main <classname> Define a custom main class
-jvm-debug <port> Turn on JVM debugging, open at the given port.
# java version (default: java from PATH, currently $(java -version 2>&1 | grep version))
-java-home <path> alternate JAVA_HOME
# jvm options and output control
JAVA_OPTS environment variable, if unset uses "$java_opts"
-Dkey=val pass -Dkey=val directly to the java runtime
-J-X pass option -X directly to the java runtime
(-J is stripped)
# special option
-- To stop parsing built-in commands from the rest of the command-line.
e.g.) enabling debug and sending -d as app argument
\$ ./start-script -d -- -d
In the case of duplicated or conflicting options, basically the order above
shows precedence: JAVA_OPTS lowest, command line options highest except "--".
Available main classes:
MyAppClass
EOM
}
### ------------------------------- ###
### Main script ###
### ------------------------------- ###
declare -a residual_args
declare -a java_args
declare -a app_commands
declare -r real_script_path="$(realpath "$0")"
declare -r app_home="$(realpath "$(dirname "$real_script_path")")"
# TODO - Check whether this is ok in cygwin...
declare -r lib_dir="$(realpath "${app_home}/../lib")"
declare -a app_mainclass=(MyAppClass)
declare -r script_conf_file="${app_home}/../conf/application.ini"
declare -r app_classpath="$lib_dir/my-app-0.1.0-SNAPSHOT.jar:$lib_dir/org.scala-lang.scala-library-2.12.8.jar:$lib_dir/com.thenewmotion.ocpp.ocpp-j-api_2.12-9.0.1.jar:$lib_dir/com.thenewmotion.ocpp.ocpp-messages_2.12-9.0.1.jar:$lib_dir/com.thenewmotion.enum-utils_2.12-0.2.1.jar:$lib_dir/com.thenewmotion.ocpp.ocpp-json_2.12-9.0.1.jar:$lib_dir/org.json4s.json4s-native_2.12-3.6.1.jar:$lib_dir/org.json4s.json4s-core_2.12-3.6.1.jar:$lib_dir/org.json4s.json4s-ast_2.12-3.6.1.jar:$lib_dir/org.json4s.json4s-scalap_2.12-3.6.1.jar:$lib_dir/com.thoughtworks.paranamer.paranamer-2.8.jar:$lib_dir/org.slf4j.slf4j-api-1.7.25.jar:$lib_dir/org.java-websocket.Java-WebSocket-1.3.9.jar:$lib_dir/org.apache.logging.log4j.log4j-api-2.11.2.jar:$lib_dir/org.apache.logging.log4j.log4j-core-2.11.2.jar:$lib_dir/org.apache.logging.log4j.log4j-slf4j-impl-2.11.2.jar:$lib_dir/com.typesafe.config-1.3.4.jar"
# java_cmd is overrode in process_args when -java-home is used
declare java_cmd=$(get_java_cmd)
# if configuration files exist, prepend their contents to $@ so it can be processed by this runner
[[ -f "$script_conf_file" ]] && set -- $(loadConfigFile "$script_conf_file") "$@"
run "$@"

Run the sbt task docker:stage, then analyze the output created in the folder target/docker/stage.
In my case the Dockerfile contains the following:
FROM openjdk:11-jre-slim as stage0
WORKDIR /opt/docker
COPY opt /opt
USER root
RUN ["chmod", "-R", "u=rX,g=rX", "/opt/docker"]
RUN ["chmod", "u+x,g+x", "/opt/docker/bin/sample"]
FROM openjdk:11-jre-slim
LABEL MAINTAINER="your name"
USER root
RUN id -u demiourgos728 2> /dev/null || useradd --system --create-home --uid 1001 --gid 0 demiourgos728
WORKDIR /opt/docker
COPY --from=stage0 --chown=demiourgos728:root /opt/docker /opt/docker
EXPOSE 9000
USER 1001
ENTRYPOINT ["/opt/docker/bin/sample"]
CMD []
I had the problem that the PID file could not be created. I think in your case it will be something similar. There is no magic involved here.
The folder /opt/docker does not have write permissions by default. As the documentation states, you could add the following line to your build.sbt:
dockerAdditionalPermissions += (DockerChmodType.UserGroupWriteExecute, "/opt/docker")
which will add an additional line:
RUN ["chmod", "u=rwX,g=rwX", "/opt/docker"]
to the stage0 container. See the native packager docs.
Alternatively, disable the PID file by passing a parameter to the JVM:
bashScriptExtraDefines ++= Seq( "addJava '-Dpidfile.path=/dev/null'" )
in your build.sbt. See the Play production configuration docs.
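If you want to confirm that file permissions are really the culprit, one quick check (overriding the entrypoint; my-app:latest is the image name from the question) is:
docker run --rm --entrypoint sh my-app:latest -c "ls -l /opt/docker/bin"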

Perl: Unable to exec

I am trying to run few child processes on different platforms in parallel. Parent should only proceed further once all the child processes have completed on respective platforms.
The problem is that when I use fork and then run the 'exec' command in the child process, it ends almost instantly. Also, the output isn't consistent; almost every time the log shows only one line.
-bash-2.05b$ cat Agent.SOLSPARC
caught SIGTERM signal, cleaning up
or
-bash-2.05b$ cat Agent.SOLSPARC
Host: EBSO9SPC Login: esm2
Sometimes there are a few extra lines and, at the end, the message 'Killed by signal 15'. The command that I use in 'exec' actually calls a script which connects to remote boxes and runs a make command on them. For testing purposes, I am currently passing only one platform, i.e. SOLSPARC. Also, I am only interested in knowing whether a command finished on any given platform.
I was not sure whether I was passing all the arguments to 'exec' correctly, so I tried different combinations (after referring to different links on the Internet), but to no avail. One important observation is that when I used strace to debug this issue, the command worked fine. I saw in the perldoc that exec uses /bin/sh -c on Unix platforms, but varies on other platforms. Do exec and strace use different shells?
Here’s the relevant portion of my code:
sub compile {
my %child_pids;
foreach $plat (0 .. $#plat_list) {
my $pid = fork;
# Didn't check the undef condition for child
if ($plat_list[$plat] eq "SOLSPARC") {
print "\nStarted Solaris build \n";
if ($pid == 0) {
print "Inside Child Process \n\n";
exec ( "${ROOT}/${REM_EXEC} -t 1200 -c \"make LANG=en_US distclean \" -b ${ROOT} -l Agent. $plat_list[$plat]" ) or die "exec failed";
} elsif ($pid > 0) {
$child_pids{"SOLSPARC"} = $pid;
}
} else {
print "\nStarted build for other platforms \n";
if ($pid == 0) {
print "Inside Child Process \n\n";
exec ( "${ROOT}/${REM_EXEC} -t 1200 -c \"make LANG=en_GB clean \" -b ${ROOT} -l Agent. $plat_list[$plat]" ) or die "exec failed";
} elsif ($pid > 0) {
$child_pids{"$plat_list[$plat]"} = $pid;
}
}
}
my %rev_child_pids = reverse %child_pids;
while ((my $kid = waitpid -1, WNOHANG) > 0) {
if ($rev_child_pids{$kid} eq "SOLSPARC") {
print "\nChild process completed for SOLARIS platform $rev_child_pids{$kid} \n";
print "Run some other command here \n";
} else {
print "\nChild process completed for other platform $rev_child_pids{$kid} \n";
print "No more commands to run \n";
}
}
}
Any suggestions?
Try using 'system' instead of 'exec'.
system ( "${ROOT}/${REM_EXEC} -t 1200 -c \"make LANG=en_US distclean \" -b ${ROOT} -l Agent. $plat_list[$plat]" );
'system' works slightly differently in relation to fork so it might solve the problem.

can't run thin web server as a service - thin: unrecognized service

I tried to set up a thin service in accordance with RVM and thin, root vs. local user and http://wiki.rubyonrails.org/deployment/nginx-thin?rev=1233246014
And I get thin: unrecognized service when starting the service. How can I fix that?
~ > sudo /usr/sbin/update-rc.d -f thin defaults
update-rc.d: warning: thin stop runlevel arguments (0 1 6) do not match LSB Default-Stop values (S 0 1 6)
Adding system startup for /etc/init.d/thin ...
/etc/rc0.d/K20thin -> ../init.d/thin
/etc/rc1.d/K20thin -> ../init.d/thin
/etc/rc6.d/K20thin -> ../init.d/thin
/etc/rc2.d/S20thin -> ../init.d/thin
/etc/rc3.d/S20thin -> ../init.d/thin
/etc/rc4.d/S20thin -> ../init.d/thin
/etc/rc5.d/S20thin -> ../init.d/thin
I changed DAEMON to point to /usr/local/rvm/bin/bootup_thin
~ > sudo cat /etc/init.d/thin
#!/bin/sh
### BEGIN INIT INFO
# Provides: thin
# Required-Start: $local_fs $remote_fs
# Required-Stop: $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: S 0 1 6
# Short-Description: thin initscript
# Description: thin
### END INIT INFO
# Original author: Forrest Robertson
# Do NOT "set -e"
DAEMON=/usr/local/rvm/bin/bootup_thin
SCRIPT_NAME=/etc/init.d/thin
CONFIG_PATH=/etc/thin
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
case "$1" in
start)
$DAEMON start --all $CONFIG_PATH
;;
stop)
$DAEMON stop --all $CONFIG_PATH
;;
restart)
$DAEMON restart --all $CONFIG_PATH
;;
*)
echo "Usage: $SCRIPT_NAME {start|stop|restart}" >&2
exit 3
;;
esac
Hm, just found the cause.
First I deleted /etc/init.d/thin, then created it again and forgot to chmod +x it.
So after chmod +x /etc/init.d/thin it works.
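In other words, the fix boiled down to making the script executable again and starting it:
sudo chmod +x /etc/init.d/thin
sudo /etc/init.d/thin start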

Daemon network process (perl) under RedHat RHEL5 refuses network connections

I wrote a program that uses the Perl POE framework to implement a JSON web service.
So far so good. I have no problems running that application on Debian systems, but when I run it under RHEL5 the network connection is refused. There is no setuid involved, so the service is running as root, and the port (9991) is outside the service-port range.
My workaround is to start the application as a non-daemon with nohup $CMD &. That is far from ideal, but I have absolutely no idea what else to try.
My init.d scripts:
DEBIAN:
#!/bin/sh -e
### BEGIN INIT INFO
# Provides: jobserver
# Required-Start: $local_fs $network $syslog
# Required-Stop: $local_fs $network $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: start/stop jobserver
### END INIT INFO
APPLICATION_CONFIG="/etc/jobserver/jobserver.conf"
test -f $APPLICATION_CONFIG && . $APPLICATION_CONFIG
set -e
if [ ! -x $APPLICATION_PATH ] ; then
echo "No jobserver installed"
exit 0
fi
#load init.d helper functions
. /lib/lsb/init-functions
PIDFILE=$APPLICATION_PIDFILE
CONF=$APPLICATION_CONFIG
DAEMON=$APPLICATION_DAEMON
PARAMETER="--configuration $CONF --daemon"
if [ -z "$PIDFILE" ] ; then
echo "ERROR: APPLICATION_PIDFILE needs to be defined in application config" >&2
exit 2
fi
jobserver_start() {
#log_daemon_msg "Starting Jobserver Daemon"
log_success_msg "Starting Jobserver Daemon"
start-stop-daemon --start --quiet --oknodo --make-pidfile --pidfile "$PIDFILE" --exec "$DAEMON" -- $PARAMETER
#log_end_msg $?
}
jobserver_stop() {
log_success_msg "Stopping Jobserver Daemon"
start-stop-daemon --stop --quiet --oknodo --pidfile "${PIDFILE}"
rm -f "${PIDFILE}"
#log_end_msg $?
}
case $1 in
start)
jobserver_start
;;
stop)
jobserver_stop
;;
restart)
jobserver_stop
jobserver_start
;;
*)
log_success_msg "Usage: /etc/init.d/jobserver {start|stop|restart}"
exit 1
;;
esac
exit $?;
REDHAT:
# Source function library.
. /etc/rc.d/init.d/functions
APPLICATION_CONFIG="/etc/jobserver/jobserver.conf"
test -f $APPLICATION_CONFIG && . $APPLICATION_CONFIG
PIDFILE=$APPLICATION_PIDFILE
CONF=$APPLICATION_CONFIG
DAEMON=$APPLICATION_DAEMON
PARAMETER="--configuration $CONF --daemon"
RETVAL=0
if [ -z "$PIDFILE" ] ; then
echo "ERROR: APPLICATION_PIDFILE needs to be defined in application config" >&2
exit 2
fi
jobserver_start() {
echo -n $"Starting jobserver daemon: "
daemon --pidfile=$PIDFILE $DAEMON $PARAMETER
RETVAL=$?
if [ $RETVAL -ne 0 ]; then
failure;
fi;
echo
return $RETVAL
}
jobserver_stop() {
echo -n $"Stopping jobserver daemon: "
if [ ! -f $PIDFILE ]; then
echo -n $"Jobserver daemon is not running: ";
else
killproc -p $PIDFILE
RETVAL=$?
fi
echo
return $RETVAL;
}
case $1 in
start)
jobserver_start
;;
stop)
jobserver_stop
;;
restart)
jobserver_stop
jobserver_start
;;
*)
echo "Usage: /etc/init.d/jobserver {start|stop|restart}"
exit 1
;;
esac
exit $?;
Daemon process:
sub daemonize {
# start a new child process
if (fork()) {
exit(0);
}
# become process group leader
unless (POSIX::setsid) {
die("POSIX setsid failed: $!");
}
# change to root dir
chdir("/");
foreach (0 .. (POSIX::sysconf(&POSIX::_SC_OPEN_MAX) || 1024)) {
POSIX::close($_);
}
# allow only user based io
umask(077);
# reopen pipes
open(STDIN, "<", "/dev/null");
open(STDOUT, ">", "/dev/null");
open(STDERR, ">", "/dev/null");
# Advisory. Fork one more time; this is not "necessary" for most toolserver
# daemons but it is a best practice as there are situations where
# forking twice is needed to avoid zombies. A second fork also
# prevents the daemon from ever re-acquiring a terminal, by making
# the main daemon process not be the process group leader
if (fork()) {
exit(0);
}
# write pidfile
my $pidfile = "/var/run/jobserverd.pid";
open(PIDFILE, ">$pidfile");
print(PIDFILE "$$");
close(PIDFILE);
}
Serverfront:
sub listen {
my $self = shift;
logInfo("Starting webservice. Listening to port ".$self->{'_port'}, 1);
# Spawn the webservice
POE::Component::Server::HTTP->new (
Port => $self->{'_port'},
ContentHandler => {
"/json/" => \&dispatchJSONService
},
Headers => {
"Server" => "Perl JobServer version ".$self->{'_version'}
},
);
$poe_kernel->run();
}
Problem solved. The problem was the double fork. Thanks to all for reading. Never trust advisories! ;)

taint-mode perl: preserve suid when running external program via system()

I'm trying to add a feature to a legacy script. The script is suid, and uses perl -T (taint mode: man perlsec), for extra security. The feature I need to add is implemented in Python.
My problem is that I can't convince perlsec to preserve the suid permissions, no matter how much I launder the environment and my command lines.
This is frustrating, since it preserves the suid for other binaries (such as /bin/id). Is there an undocumented special case for /usr/bin/perl? This seems unlikely.
Does anyone know a way to make this work? (As-is: We don't have the resources to re-architect this whole thing.)
Solution (as per @gbacon):
# use the -p option to bash
system('/bin/bash', '-p', '-c', '/usr/bin/id -un');
# or set real user and group ids
$< = $>;
$( = $);
system('/usr/bin/python', '-c', 'import os; os.system("/usr/bin/id -un")');
Gives the desired results!
Here's a cut-down version of my script, which still shows my problem.
#!/usr/bin/perl -T
## This is an SUID script: man perlsec
%ENV = ( "PATH" => "" );
##### PERLSEC HELPERS #####
sub tainted (@) {
# Prevent errors, stringifying
local(@_, $@, $^W) = @_;
#let eval catch the DIE signal
$SIG{__DIE__} = '';
my $retval = not eval { join("",@_), kill 0; 1 };
$SIG{__DIE__} = 'myexit';
return $retval
}
sub show_taint {
foreach (@_) {
my $arg = $_; #prevent "read-only variable" nonsense
chomp $arg;
if ( tainted($arg) ) {
print "TAINT:'$arg'";
} else {
print "ok:'$arg'";
}
print ", ";
}
print "\n";
}
### END PERLSEC HELPERS ###
# Are we SUID ? man perlsec
my $uid = `/usr/bin/id --user` ;
chomp $uid;
my $reluser = "dt-pdrel";
my $reluid = `/usr/bin/id --user $reluser 2> /dev/null`;
chomp $reluid;
if ( $uid ne $reluid ) {
# what ? we are not anymore SUID ? somebody must do a chmod u+s $current_script
print STDERR "chmod 4555 $myname\n";
exit(14);
}
# comment this line if you don't want to autoflush after every print
$| = 1;
# now, we're safe, single & SUID
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# BEGIN of main code itself
print "\nENVIRON UNDER BASH:\n";
run('/bin/bash', '-c', '/bin/env');
print "\nTAINT DEMO:\n";
print "\@ARGV: ";
show_taint(@ARGV);
print "\%ENV: ";
show_taint(values %ENV);
print "`cat`: ";
show_taint(`/bin/cat /etc/host.conf`);
print "\nworks:\n";
run('/usr/bin/id', '-un');
run('/usr/bin/id -un');
print "\ndoesn't work:\n";
run('/bin/bash', '-c', '/usr/bin/id -un');
run('/bin/bash', '-c', '/bin/date >> /home/dt-pdrel/date');
run('/bin/date >> /home/dt-pdrel/date');
run('/usr/bin/python', '-c', 'import os; os.system("/usr/bin/id -un")');
run('/usr/bin/python', '-c', 'import os; os.system("/usr/bin/id -un")');
sub run {
my @cmd = @_;
print "\tCMD: '@cmd'\n";
print "\tSEC: ";
show_taint(@cmd);
print "\tOUT: ";
system @cmd;
print "\n";
}
And here's the output:
$ id -un
bukzor
$ ls -l /proj/test/test.pl
-rwsr-xr-x 1 testrel asic 1976 Jul 22 14:34 /proj/test/test.pl*
$ /proj/test/test.pl foo bar
ENVIRON UNDER BASH:
CMD: '/bin/bash -c /bin/env'
SEC: ok:'/bin/bash', ok:'-c', ok:'/bin/env',
OUT: PATH=
PWD=/proj/test2/bukzor/test_dir/
SHLVL=1
_=/bin/env
TAINT DEMO:
@ARGV: TAINT:'foo', TAINT:'bar',
%ENV: ok:'',
`cat`: TAINT:'order hosts,bind',
works:
CMD: '/usr/bin/id -un'
SEC: ok:'/usr/bin/id', ok:'-un',
OUT: testrel
CMD: '/usr/bin/id -un'
SEC: ok:'/usr/bin/id -un',
OUT: testrel
doesn't work:
CMD: '/bin/bash -c /usr/bin/id -un'
SEC: ok:'/bin/bash', ok:'-c', ok:'/usr/bin/id -un',
OUT: bukzor
CMD: '/bin/bash -c /bin/date >> /home/testrel/date'
SEC: ok:'/bin/bash', ok:'-c', ok:'/bin/date >> /home/testrel/date',
OUT: /bin/bash: /home/testrel/date: Permission denied
CMD: '/bin/date >> /home/testrel/date'
SEC: ok:'/bin/date >> /home/testrel/date',
OUT: sh: /home/testrel/date: Permission denied
CMD: '/usr/bin/python -c import os; os.system("/usr/bin/id -un")'
SEC: ok:'/usr/bin/python', ok:'-c', ok:'import os; os.system("/usr/bin/id -un")',
OUT: bukzor
CMD: '/usr/bin/python -c import os; os.system("/usr/bin/id -un")'
SEC: ok:'/usr/bin/python', ok:'-c', ok:'import os; os.system("/usr/bin/id -un")',
OUT: bukzor
You need to set your real userid to the effective (suid-ed) one. You probably want to do the same for your real group id:
#! /usr/bin/perl -T
use warnings;
use strict;
$ENV{PATH} = "/bin:/usr/bin";
system "id -un";
system "/bin/bash", "-c", "id -un";
# set real user and group ids
$< = $>;
$( = $);
system "/bin/bash", "-c", "id -un";
Sample run:
$ ls -l suid.pl
-rwsr-sr-x 1 nobody nogroup 177 2010-07-22 20:33 suid.pl
$ ./suid.pl
nobody
gbacon
nobody
What you're seeing is documented bash behavior:
-p
Turn on privileged mode. In this mode, the $BASH_ENV and $ENV files are not processed, shell functions are not inherited from the environment, and the SHELLOPTS, BASHOPTS, CDPATH and GLOBIGNORE variables, if they appear in the environment, are ignored. If the shell is started with the effective user (group) id not equal to the real user (group) id, and the -p option is not supplied, these actions are taken and the effective user id is set to the real user id. If the -p option is supplied at startup, the effective user id is not reset. Turning this option off causes the effective user and group ids to be set to the real user and group ids.
This means you could also
#! /usr/bin/perl -T
use warnings;
use strict;
$ENV{PATH} = "/bin:/usr/bin";
system "/bin/bash", "-p", "-c", "id -un";
to get
nobody
Recall that passing multiple arguments to system bypasses the shell. A single argument does go to the shell, but probably not bash—look at the output of perl -MConfig -le 'print $Config{sh}'.