I tried to set up the thin service in accordance with RVM and thin, root vs. local user and http://wiki.rubyonrails.org/deployment/nginx-thin?rev=1233246014,
but I get "thin: unrecognized service" when starting the service. How can I fix that?
~ > sudo /usr/sbin/update-rc.d -f thin defaults
update-rc.d: warning: thin stop runlevel arguments (0 1 6) do not match LSB Default-Stop values (S 0 1 6)
Adding system startup for /etc/init.d/thin ...
/etc/rc0.d/K20thin -> ../init.d/thin
/etc/rc1.d/K20thin -> ../init.d/thin
/etc/rc6.d/K20thin -> ../init.d/thin
/etc/rc2.d/S20thin -> ../init.d/thin
/etc/rc3.d/S20thin -> ../init.d/thin
/etc/rc4.d/S20thin -> ../init.d/thin
/etc/rc5.d/S20thin -> ../init.d/thin
I changed DAEMON to point to /usr/local/rvm/bin/bootup_thin:
~ > sudo cat /etc/init.d/thin
#!/bin/sh
### BEGIN INIT INFO
# Provides: thin
# Required-Start: $local_fs $remote_fs
# Required-Stop: $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: S 0 1 6
# Short-Description: thin initscript
# Description: thin
### END INIT INFO
# Original author: Forrest Robertson
# Do NOT "set -e"
DAEMON=/usr/local/rvm/bin/bootup_thin
SCRIPT_NAME=/etc/init.d/thin
CONFIG_PATH=/etc/thin
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
case "$1" in
start)
$DAEMON start --all $CONFIG_PATH
;;
stop)
$DAEMON stop --all $CONFIG_PATH
;;
restart)
$DAEMON restart --all $CONFIG_PATH
;;
*)
echo "Usage: $SCRIPT_NAME {start|stop|restart}" >&2
exit 3
;;
esac
Hm, I just found the cause:
I had deleted /etc/init.d/thin, then recreated it and forgot to chmod +x it.
So after chmod +x /etc/init.d/thin it works.
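For anyone hitting the same error, the full recovery sequence looks roughly like this (same paths as above; the update-rc.d step is only needed if the script was re-created):
sudo chmod +x /etc/init.d/thin
sudo /usr/sbin/update-rc.d thin defaults
sudo service thin start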
I am writing an sh script and need to replace the . and - with a _
Current:
V123_45_678_910.11_1213-1415.sh
Wanted:
V123_45_678_910_11_1213_1415.sh
I have used a few mv commands, but I am having trouble.
for file in /virtualun/rest/scripts/IOL_Extra/*.sh ; do mv $file ${file//V15_IOL_NVMe_01./V15_IOL_NVMe_01_} ; done
You don't need to match any of the other parts of the file name, just the characters you want to replace. To avoid turning foo.sh into foo_sh, remove the extension first, then add it back to the result of the replacement.
for file in /virtualun/rest/scripts/IOL_Extra/*.sh ; do
base=${file%.sh}
mv -i -- "$file" "${base//[-.]/_}".sh
done
Use the -i option to make sure you don't inadvertently replace one file with another when the modified names coincide.
This should work:
#!/usr/bin/env sh
# Fail on error
set -o errexit
# Disable undefined variable reference
set -o nounset
# Enable wildcard character expansion
set +o noglob
# ================
# CONFIGURATION
# ================
# Pattern
PATTERN="/virtualun/rest/scripts/IOL_Extra/*.sh"
# ================
# LOGGER
# ================
# Fatal log message
fatal() {
printf '[FATAL] %s\n' "$@" >&2
exit 1
}
# Info log message
info() {
printf '[INFO ] %s\n' "$@"
}
# ================
# MAIN
# ================
{
# Check directory exists
[ -d "$(dirname "$PATTERN")" ] || fatal "Directory '$PATTERN' does not exists"
for _file in $PATTERN; do
# Skip if not file
[ -f "$_file" ] || continue
info "Analyzing file '$_file'"
# File data
_file_dirname=$(dirname -- "$_file")
_file_basename=$(basename -- "$_file")
_file_name="${_file_basename%.*}"
_file_extension=
case $_file_basename in
*.*) _file_extension=".${_file_basename##*.}" ;;
esac
# New file name
_new_file_name=$(printf '%s\n' "$_file_name" | sed 's/[.-][.-]*/_/g')
# Skip if equals
[ "$_file_name" != "$_new_file_name" ] || continue
# New file
_new_file="$_file_dirname/${_new_file_name}${_file_extension}"
# Rename
info "Renaming file '$_file' to '$_new_file'"
mv -i -- "$_file" "$_new_file"
done
}
You can try this:
for f in /virtualun/rest/scripts/IOL_Extra/*.sh; do
mv "$f" $(sed 's/[.-]/_/g' <<< "$f")
done
The sed command replaces all . and - characters with _.
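Note that this also rewrites the dot in the .sh extension (and any dots or dashes in the directory part of the path). A variant that strips the extension first and only rewrites the base name might look like this (a sketch, assuming bash for the <<< here-string):
for f in /virtualun/rest/scripts/IOL_Extra/*.sh; do
    base=$(basename "$f" .sh)                                   # name without the .sh suffix
    mv -i -- "$f" "$(dirname "$f")/$(sed 's/[.-]/_/g' <<< "$base").sh"
done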
I prefer using sed substitute as posted by oliv.
However, if you are not familiar with regular expressions, using rename is faster/easier to understand:
Example:
$ touch V123_45_678_910.11_1213-1415.sh
$ rename -va '.' '_' *sh
`V123_45_678_910.11_1213-1415.sh' -> `V123_45_678_910_11_1213-1415_sh'
$ rename -va '-' '_' *sh
`V123_45_678_910_11_1213-1415_sh' -> `V123_45_678_910_11_1213_1415_sh'
$ rename -vl '_sh' '.sh' *sh
`V123_45_678_910_11_1213_1415_sh' -> `V123_45_678_910_11_1213_1415.sh'
$ ls *sh
V123_45_678_910_11_1213_1415.sh
Options explained:
-v prints the name of the file before -> after the operation
-a replaces all occurrences of the first argument with the second argument
-l replaces the last occurrence of the first argument with the second argument
Note that this might not be suitable depending on the other files you have in the given directory that would match *sh and that you do NOT want to rename.
I have a Scala application that I want to run inside a Docker container. To build the docker image, I use sbt-native-packager.
The base image I am using is "openjdk:8-jre-alpine".
Tried "openjdk:8-jdk-alpine" - does not make any difference
Tried sbt-native-packager 1.3.20 - does not make any difference
project/plugins.sbt
resolvers += Resolver.typesafeRepo("releases")
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.17")
build.sbt
enablePlugins(JavaAppPackaging)
mainClass in Compile := Some("MyAppClass")
enablePlugins(DockerPlugin)
dockerBaseImage := "openjdk:8-jre-alpine" // startup fails with permission denied if using alpine :-(
Running a container with the resulting image leads to following error on startup:
docker run my-app:latest
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/opt/docker/bin/my-app\": permission denied": unknown.
ERRO[0000] error waiting for container: context canceled
The app starts up normally when using "openjdk:8-jre".
Update
Contents of /opt/docker/bin/my-app:
#!/usr/bin/env bash
### ------------------------------- ###
### Helper methods for BASH scripts ###
### ------------------------------- ###
die() {
echo "$#" 1>&2
exit 1
}
realpath () {
(
TARGET_FILE="$1"
CHECK_CYGWIN="$2"
cd "$(dirname "$TARGET_FILE")"
TARGET_FILE=$(basename "$TARGET_FILE")
COUNT=0
while [ -L "$TARGET_FILE" -a $COUNT -lt 100 ]
do
TARGET_FILE=$(readlink "$TARGET_FILE")
cd "$(dirname "$TARGET_FILE")"
TARGET_FILE=$(basename "$TARGET_FILE")
COUNT=$(($COUNT + 1))
done
if [ "$TARGET_FILE" == "." -o "$TARGET_FILE" == ".." ]; then
cd "$TARGET_FILE"
TARGET_FILEPATH=
else
TARGET_FILEPATH=/$TARGET_FILE
fi
# make sure we grab the actual windows path, instead of cygwin's path.
if [[ "x$CHECK_CYGWIN" == "x" ]]; then
echo "$(pwd -P)/$TARGET_FILE"
else
echo $(cygwinpath "$(pwd -P)/$TARGET_FILE")
fi
)
}
# TODO - Do we need to detect msys?
# Uses uname to detect if we're in the odd cygwin environment.
is_cygwin() {
local os=$(uname -s)
case "$os" in
CYGWIN*) return 0 ;;
*) return 1 ;;
esac
}
# This can fix cygwin style /cygdrive paths so we get the
# windows style paths.
cygwinpath() {
local file="$1"
if is_cygwin; then
echo $(cygpath -w $file)
else
echo $file
fi
}
# Make something URI friendly
make_url() {
url="$1"
local nospaces=${url// /%20}
if is_cygwin; then
echo "/${nospaces//\\//}"
else
echo "$nospaces"
fi
}
# This crazy function reads in a vanilla "linux" classpath string (only : are separators, and all /),
# and returns a classpath with windows style paths, and ; separators.
fixCygwinClasspath() {
OLDIFS=$IFS
IFS=":"
read -a classpath_members <<< "$1"
declare -a fixed_members
IFS=$OLDIFS
for i in "${!classpath_members[#]}"
do
fixed_members[i]=$(realpath "${classpath_members[i]}" "fix")
done
IFS=";"
echo "${fixed_members[*]}"
IFS=$OLDIFS
}
# Fix the classpath we use for cygwin.
fix_classpath() {
cp="$1"
if is_cygwin; then
echo "$(fixCygwinClasspath "$cp")"
else
echo "$cp"
fi
}
# Detect if we should use JAVA_HOME or just try PATH.
get_java_cmd() {
if [[ -n "$JAVA_HOME" ]] && [[ -x "$JAVA_HOME/bin/java" ]]; then
echo "$JAVA_HOME/bin/java"
else
echo "java"
fi
}
echoerr () {
echo 1>&2 "$#"
}
vlog () {
[[ $verbose || $debug ]] && echoerr "$@"
}
dlog () {
[[ $debug ]] && echoerr "$@"
}
execRunner () {
# print the arguments one to a line, quoting any containing spaces
[[ $verbose || $debug ]] && echo "# Executing command line:" && {
for arg; do
if printf "%s\n" "$arg" | grep -q ' '; then
printf "\"%s\"\n" "$arg"
else
printf "%s\n" "$arg"
fi
done
echo ""
}
# we use "exec" here for our pids to be accurate.
exec "$#"
}
addJava () {
dlog "[addJava] arg = '$1'"
java_args+=( "$1" )
}
addApp () {
dlog "[addApp] arg = '$1'"
app_commands+=( "$1" )
}
addResidual () {
dlog "[residual] arg = '$1'"
residual_args+=( "$1" )
}
addDebugger () {
addJava "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=$1"
}
require_arg () {
local type="$1"
local opt="$2"
local arg="$3"
if [[ -z "$arg" ]] || [[ "${arg:0:1}" == "-" ]]; then
die "$opt requires <$type> argument"
fi
}
is_function_defined() {
declare -f "$1" > /dev/null
}
# Attempt to detect if the script is running via a GUI or not
# TODO - Determine where/how we use this generically
detect_terminal_for_ui() {
[[ ! -t 0 ]] && [[ "${#residual_args}" == "0" ]] && {
echo "true"
}
# SPECIAL TEST FOR MAC
[[ "$(uname)" == "Darwin" ]] && [[ "$HOME" == "$PWD" ]] && [[ "${#residual_args}" == "0" ]] && {
echo "true"
}
}
# Processes incoming arguments and places them in appropriate global variables. called by the run method.
process_args () {
local no_more_snp_opts=0
while [[ $# -gt 0 ]]; do
case "$1" in
--) shift && no_more_snp_opts=1 && break ;;
-h|-help) usage; exit 1 ;;
-v|-verbose) verbose=1 && shift ;;
-d|-debug) debug=1 && shift ;;
-no-version-check) no_version_check=1 && shift ;;
-mem) echo "!! WARNING !! -mem option is ignored. Please use -J-Xmx and -J-Xms" && shift 2 ;;
-jvm-debug) require_arg port "$1" "$2" && addDebugger $2 && shift 2 ;;
-main) custom_mainclass="$2" && shift 2 ;;
-java-home) require_arg path "$1" "$2" && jre=`eval echo $2` && java_cmd="$jre/bin/java" && shift 2 ;;
-D*|-agentlib*|-XX*) addJava "$1" && shift ;;
-J*) addJava "${1:2}" && shift ;;
*) addResidual "$1" && shift ;;
esac
done
if [[ no_more_snp_opts ]]; then
while [[ $# -gt 0 ]]; do
addResidual "$1" && shift
done
fi
is_function_defined process_my_args && {
myargs=("${residual_args[@]}")
residual_args=()
process_my_args "${myargs[@]}"
}
}
# Actually runs the script.
run() {
# TODO - check for sane environment
# process the combined args, then reset "$#" to the residuals
process_args "$#"
set -- "${residual_args[#]}"
argumentCount=$#
#check for jline terminal fixes on cygwin
if is_cygwin; then
stty -icanon min 1 -echo > /dev/null 2>&1
addJava "-Djline.terminal=jline.UnixTerminal"
addJava "-Dsbt.cygwin=true"
fi
# check java version
if [[ ! $no_version_check ]]; then
java_version_check
fi
if [ -n "$custom_mainclass" ]; then
mainclass=("$custom_mainclass")
else
mainclass=("${app_mainclass[#]}")
fi
# Now we check to see if there are any java opts on the environment. These get listed first, with the script able to override them.
if [[ "$JAVA_OPTS" != "" ]]; then
java_opts="${JAVA_OPTS}"
fi
# run sbt
execRunner "$java_cmd" \
${java_opts[#]} \
"${java_args[#]}" \
-cp "$(fix_classpath "$app_classpath")" \
"${mainclass[#]}" \
"${app_commands[#]}" \
"${residual_args[#]}"
local exit_code=$?
if is_cygwin; then
stty icanon echo > /dev/null 2>&1
fi
exit $exit_code
}
# Loads a configuration file full of default command line options for this script.
loadConfigFile() {
cat "$1" | sed $'/^\#/d;s/\r$//'
}
# Now check to see if it's a good enough version
# TODO - Check to see if we have a configured default java version, otherwise use 1.6
java_version_check() {
readonly java_version=$("$java_cmd" -version 2>&1 | awk -F '"' '/version/ {print $2}')
if [[ "$java_version" == "" ]]; then
echo
echo No java installations was detected.
echo Please go to http://www.java.com/getjava/ and download
echo
exit 1
else
local major=$(echo "$java_version" | cut -d'.' -f1)
if [[ "$major" -eq "1" ]]; then
local major=$(echo "$java_version" | cut -d'.' -f2)
fi
if [[ "$major" -lt "6" ]]; then
echo
echo The java installation you have is not up to date
echo $app_name requires at least version 1.6+, you have
echo version $java_version
echo
echo Please go to http://www.java.com/getjava/ and download
echo a valid Java Runtime and install before running $app_name.
echo
exit 1
fi
fi
}
### ------------------------------- ###
### Start of customized settings ###
### ------------------------------- ###
usage() {
cat <<EOM
Usage: $script_name [options]
-h | -help print this message
-v | -verbose this runner is chattier
-d | -debug set sbt log level to debug
-no-version-check Don't run the java version check.
-main <classname> Define a custom main class
-jvm-debug <port> Turn on JVM debugging, open at the given port.
# java version (default: java from PATH, currently $(java -version 2>&1 | grep version))
-java-home <path> alternate JAVA_HOME
# jvm options and output control
JAVA_OPTS environment variable, if unset uses "$java_opts"
-Dkey=val pass -Dkey=val directly to the java runtime
-J-X pass option -X directly to the java runtime
(-J is stripped)
# special option
-- To stop parsing built-in commands from the rest of the command-line.
e.g.) enabling debug and sending -d as app argument
\$ ./start-script -d -- -d
In the case of duplicated or conflicting options, basically the order above
shows precedence: JAVA_OPTS lowest, command line options highest except "--".
Available main classes:
MyAppClass
EOM
}
### ------------------------------- ###
### Main script ###
### ------------------------------- ###
declare -a residual_args
declare -a java_args
declare -a app_commands
declare -r real_script_path="$(realpath "$0")"
declare -r app_home="$(realpath "$(dirname "$real_script_path")")"
# TODO - Check whether this is ok in cygwin...
declare -r lib_dir="$(realpath "${app_home}/../lib")"
declare -a app_mainclass=(MyAppClass)
declare -r script_conf_file="${app_home}/../conf/application.ini"
declare -r app_classpath="$lib_dir/my-app-0.1.0-SNAPSHOT.jar:$lib_dir/org.scala-lang.scala-library-2.12.8.jar:$lib_dir/com.thenewmotion.ocpp.ocpp-j-api_2.12-9.0.1.jar:$lib_dir/com.thenewmotion.ocpp.ocpp-messages_2.12-9.0.1.jar:$lib_dir/com.thenewmotion.enum-utils_2.12-0.2.1.jar:$lib_dir/com.thenewmotion.ocpp.ocpp-json_2.12-9.0.1.jar:$lib_dir/org.json4s.json4s-native_2.12-3.6.1.jar:$lib_dir/org.json4s.json4s-core_2.12-3.6.1.jar:$lib_dir/org.json4s.json4s-ast_2.12-3.6.1.jar:$lib_dir/org.json4s.json4s-scalap_2.12-3.6.1.jar:$lib_dir/com.thoughtworks.paranamer.paranamer-2.8.jar:$lib_dir/org.slf4j.slf4j-api-1.7.25.jar:$lib_dir/org.java-websocket.Java-WebSocket-1.3.9.jar:$lib_dir/org.apache.logging.log4j.log4j-api-2.11.2.jar:$lib_dir/org.apache.logging.log4j.log4j-core-2.11.2.jar:$lib_dir/org.apache.logging.log4j.log4j-slf4j-impl-2.11.2.jar:$lib_dir/com.typesafe.config-1.3.4.jar"
# java_cmd is overrode in process_args when -java-home is used
declare java_cmd=$(get_java_cmd)
# if configuration files exist, prepend their contents to $# so it can be processed by this runner
[[ -f "$script_conf_file" ]] && set -- $(loadConfigFile "$script_conf_file") "$#"
run "$#"
Run the sbt task docker:stage, then analyze the output created in the folder target/docker/stage.
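For example (commands assumed to be run from the project root; the exact layout may differ between plugin versions):
sbt docker:stage
cat target/docker/stage/Dockerfile
ls -l target/docker/stage/opt/docker/bin    # check whether the start script is executable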
In my case the Dockerfile contains the following:
FROM openjdk:11-jre-slim as stage0
WORKDIR /opt/docker
COPY opt /opt
USER root
RUN ["chmod", "-R", "u=rX,g=rX", "/opt/docker"]
RUN ["chmod", "u+x,g+x", "/opt/docker/bin/sample"]
FROM openjdk:11-jre-slim
LABEL MAINTAINER="your name"
USER root
RUN id -u demiourgos728 2> /dev/null || useradd --system --create-home --uid 1001 --gid 0 demiourgos728
WORKDIR /opt/docker
COPY --from=stage0 --chown=demiourgos728:root /opt/docker /opt/docker
EXPOSE 9000
USER 1001
ENTRYPOINT ["/opt/docker/bin/sample"]
CMD []
I had the problem that the PID file could not be created. I think in your case it will be something similar. There is no magic involved here.
The folder /opt/docker does not have write permissions by default. As the documentation states, you could add the following line to your build.sbt:
dockerAdditionalPermissions += (DockerChmodType.UserGroupWriteExecute, "/opt/docker")
which will add an additional line:
RUN ["chmod", "u=rwX,g=rwX", "/opt/docker"]
to the stage0 container. See the native packager docs.
Alternatively, disable the PID file by passing a parameter to the JVM:
bashScriptExtraDefines ++= Seq( "addJava '-Dpidfile.path=/dev/null'" )
to your build.sbt. See the Play production configuration docs.
I'm trying to create an autocomplete script for use with fish; I'm porting over a bash completion script for the same program.
The program has three top-level commands, say foo, bar, and baz, and each has some subcommands; say a, b, and c for each.
What I'm seeing is that the top-level commands autocomplete OK: if I type f, it completes to foo. But if I then hit tab again to see its subcommands, I see foo, bar, baz, a, b, c, when it should just be a, b, c.
I am using the git completion script as a reference, since it seems to work right, and the git flow script as well.
I think this is handled in the git completion script by:
function __fish_git_needs_command
set cmd (commandline -opc)
if [ (count $cmd) -eq 1 -a $cmd[1] = 'git' ]
return 0
end
return 1
end
Which makes sense: you can only use the completion if there is a single arg to the command, the script itself; if you use that as the condition (-n) for the call to complete on the top-level commands, I think the right thing would happen.
However, what I'm seeing is not the case. I copied that function over to my script, changed "git" appropriately, and did not have any luck.
The trimmed down script is as follows:
function __fish_prog_using_command
set cmd (commandline -opc)
set subcommands $argv
if [ (count $cmd) = (math (count $subcommands) + 1) ]
for i in (seq (count $subcommands))
if not test $subcommands[$i] = $cmd[(math $i + 1)]
return 1
end
end
return 0
end
return 1
end
function __fish_git_needs_command
set cmd (commandline -opc)
set startsWith (echo "$cmd[1]" | grep -E 'prog$')
# there's got to be a better way to do this regex, fish newb alert
if [ (count $cmd) = 1 ]
# Is this always false? Is this the problem?
if [ $cmd[1] -eq $cmd[1] ]
return 1
end
end
return 0
end
complete --no-files -c prog -a bar -n "__fish_git_needs_command"
complete --no-files -c prog -a foo -n "__fish_git_needs_command"
complete --no-files -c prog -a a -n "__fish_prog_using_command foo"
complete --no-files -c prog -a b -n "__fish_prog_using_command foo"
complete --no-files -c prog -a c -n "__fish_prog_using_command foo"
complete --no-files -c prog -a baz -n "__fish_git_needs_command"
Any suggestions on how to make this work are much appreciated.
I guess you are aware that return 0 means true and that return 1 means false?
From your output it looks like your needs_command function is not working properly, thus showing bar even when it has subcommands.
I just tried the following code and it works as expected:
function __fish_prog_needs_command
set cmd (commandline -opc)
if [ (count $cmd) -eq 1 -a $cmd[1] = 'prog' ]
return 0
end
return 1
end
function __fish_prog_using_command
set cmd (commandline -opc)
if [ (count $cmd) -gt 1 ]
if [ $argv[1] = $cmd[2] ]
return 0
end
end
return 1
end
complete -f -c prog -n '__fish_prog_needs_command' -a bar
complete -f -c prog -n '__fish_prog_needs_command' -a foo
complete -f -c prog -n '__fish_prog_using_command foo' -a a
complete -f -c prog -n '__fish_prog_using_command foo' -a b
complete -f -c prog -n '__fish_prog_using_command foo' -a c
complete -f -c prog -n '__fish_prog_needs_command' -a baz
Output from completion:
➤ prog <Tab>
bar baz foo
➤ prog foo <Tab>
a b c
➤ prog foo
Is this what you want?
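As a side note, fish ships helper functions for exactly this pattern, so you may not need to write your own; a minimal sketch using them (untested against your prog):
complete -f -c prog -n '__fish_use_subcommand' -a 'foo bar baz'
complete -f -c prog -n '__fish_seen_subcommand_from foo' -a 'a b c'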
I tried to add
-l 11211
-l 11212
in the memcached conf file. But it only listens on the first one, i.e. 11211.
First I used mikewied's solution, but then I ran into the problem of auto-starting the daemon. Another confusing thing about that solution is that it doesn't use the config from /etc. I was about to create my own startup scripts in /etc/init.d, but then I looked into the /etc/init.d/memcached file and saw this beautiful solution:
# Usage:
# cp /etc/memcached.conf /etc/memcached_server1.conf
# cp /etc/memcached.conf /etc/memcached_server2.conf
# start all instances:
# /etc/init.d/memcached start
# start one instance:
# /etc/init.d/memcached start server1
# stop all instances:
# /etc/init.d/memcached stop
# stop one instance:
# /etc/init.d/memcached stop server1
# There is no "status" command.
Basically readers of this question just need to read the /etc/init.d/memcached file.
Cheers
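For example, to bring up a second instance on its own port (file name and port are placeholders):
sudo cp /etc/memcached.conf /etc/memcached_server1.conf
# edit /etc/memcached_server1.conf and give it its own port (-p) or address (-l)
sudo /etc/init.d/memcached start server1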
Here's what memcached says the -l command is for:
-l <addr> interface to listen on (default: INADDR_ANY, all addresses)
<addr> may be specified as host:port. If you don't specify
a port number, the value you specified with -p or -U is
used. You may specify multiple addresses separated by comma
or by using -l multiple times
First off, you need to specify the interface you want memcached to listen on if you are using the -l flag. Use 0.0.0.0 for all interfaces and 127.0.0.1 if you just want to be able to access memcached from localhost. Second, don't use two -l flags: use only one and separate each address with a comma. The command below should do what you want:
memcached -l 0.0.0.0:11211,0.0.0.0:11212
Keep in mind that this will have one memcached instance listening on two ports. To run two memcached instances on one machine, run these two commands:
memcached -p 11211 -d
memcached -p 11212 -d
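If you daemonize both instances like that, it also helps to give each one its own PID file and run them as a dedicated user; a sketch using memcached's standard flags (user name and paths are assumptions):
memcached -d -p 11211 -u memcache -P /var/run/memcached_11211.pid
memcached -d -p 11212 -u memcache -P /var/run/memcached_11212.pid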
The answer from David Dzhagayev is the best one. If you don't have the correct version of the memcached init script, here is the one he is talking about.
It should work with any Linux distro using init.
#! /bin/bash
### BEGIN INIT INFO
# Provides: memcached
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Should-Start: $local_fs
# Should-Stop: $local_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start memcached daemon
# Description: Start up memcached, a high-performance memory caching daemon
### END INIT INFO
# Usage:
# cp /etc/memcached.conf /etc/memcached_server1.conf
# cp /etc/memcached.conf /etc/memcached_server2.conf
# start all instances:
# /etc/init.d/memcached start
# start one instance:
# /etc/init.d/memcached start server1
# stop all instances:
# /etc/init.d/memcached stop
# stop one instance:
# /etc/init.d/memcached stop server1
# There is no "status" command.
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/bin/memcached
DAEMONNAME=memcached
DAEMONBOOTSTRAP=/usr/share/memcached/scripts/start-memcached
DESC=memcached
test -x $DAEMON || exit 0
test -x $DAEMONBOOTSTRAP || exit 0
set -e
. /lib/lsb/init-functions
# Edit /etc/default/memcached to change this.
ENABLE_MEMCACHED=no
test -r /etc/default/memcached && . /etc/default/memcached
FILES=(/etc/memcached_*.conf)
# check for alternative config schema
if [ -r "${FILES[0]}" ]; then
CONFIGS=()
for FILE in "${FILES[#]}";
do
# remove prefix
NAME=${FILE#/etc/}
# remove suffix
NAME=${NAME%.conf}
# check optional second param
if [ $# -ne 2 ];
then
# add to config array
CONFIGS+=($NAME)
elif [ "memcached_$2" == "$NAME" ];
then
# use only one memcached
CONFIGS=($NAME)
break;
fi;
done;
if [ ${#CONFIGS[@]} == 0 ];
then
echo "Config not exist for: $2" >&2
exit 1
fi;
else
CONFIGS=(memcached)
fi;
CONFIG_NUM=${#CONFIGS[@]}
for ((i=0; i < $CONFIG_NUM; i++)); do
NAME=${CONFIGS[${i}]}
PIDFILE="/var/run/${NAME}.pid"
case "$1" in
start)
echo -n "Starting $DESC: "
if [ $ENABLE_MEMCACHED = yes ]; then
start-stop-daemon --start --quiet --exec "$DAEMONBOOTSTRAP" -- /etc/${NAME}.conf $PIDFILE
echo "$NAME."
else
echo "$NAME disabled in /etc/default/memcached."
fi
;;
stop)
echo -n "Stopping $DESC: "
start-stop-daemon --stop --quiet --oknodo --retry 5 --pidfile $PIDFILE --exec $DAEMON
echo "$NAME."
rm -f $PIDFILE
;;
restart|force-reload)
#
# If the "reload" option is implemented, move the "force-reload"
# option to the "reload" entry above. If not, "force-reload" is
# just the same as "restart".
#
echo -n "Restarting $DESC: "
start-stop-daemon --stop --quiet --oknodo --retry 5 --pidfile $PIDFILE
rm -f $PIDFILE
if [ $ENABLE_MEMCACHED = yes ]; then
start-stop-daemon --start --quiet --exec "$DAEMONBOOTSTRAP" -- /etc/${NAME}.conf $PIDFILE
echo "$NAME."
else
echo "$NAME disabled in /etc/default/memcached."
fi
;;
status)
status_of_proc -p $PIDFILE $DAEMON $NAME && exit 0 || exit $?
;;
*)
N=/etc/init.d/$NAME
echo "Usage: $N {start|stop|restart|force-reload|status}" >&2
exit 1
;;
esac
done;
exit 0
In case someone else stumbles upon this question, there is a bug in the Debian distribution of memcached (which means flavours like Ubuntu are also affected).
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=784357
Because of this bug, even when you have separate configuration files, when you run sudo service memcached restart, only the default configuration file in /etc/memcached.conf will be loaded.
As mentioned in the comment here, the temporary solution is to
Remove /lib/systemd/system/memcached.service
Run sudo systemctl daemon-reload (don't worry, it is safe to do so)
Finally, run sudo service memcached restart if you are okay with losing all cache information. If not, run sudo service memcached force-reload
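On a systemd-based system you can also skip the init script entirely and use a template unit, one instance per port; a minimal sketch (unit path and options are assumptions, adjust for your distribution):
# /etc/systemd/system/memcached@.service
[Unit]
Description=memcached instance on port %i
After=network.target

[Service]
ExecStart=/usr/bin/memcached -u memcache -p %i
Restart=on-failure

[Install]
WantedBy=multi-user.target
Then sudo systemctl enable --now memcached@11211 memcached@11212 starts and enables both instances.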
OK, very good answer, Tristan CHARBONNIER.
Please replace the code in the file /usr/share/memcached/scripts/start-memcached with:
#!/usr/bin/perl -w
# start-memcached
# 2003/2004 - Jay Bonci
# This script handles the parsing of the /etc/memcached.conf file
# and was originally created for the Debian distribution.
# Anyone may use this little script under the same terms as
# memcached itself.
use strict;
if($> != 0 and $< != 0)
{
print STDERR "Only root wants to run start-memcached.\n";
exit;
}
my $params; my $etchandle; my $etcfile = "/etc/memcached.conf";
# This script assumes that memcached is located at /usr/bin/memcached, and
# that the pidfile is writable at /var/run/memcached.pid
my $memcached = "/usr/bin/memcached";
my $pidfile = "/var/run/memcached.pid";
if (scalar(@ARGV) == 2) {
$etcfile = shift(@ARGV);
$pidfile = shift(@ARGV);
}
# If we don't get a valid logfile parameter in the /etc/memcached.conf file,
# we'll just throw away all of our in-daemon output. We need to re-tie it so
# that non-bash shells will not hang on logout. Thanks to Michael Renner for
# the tip
my $fd_reopened = "/dev/null";
sub handle_logfile
{
my ($logfile) = @_;
$fd_reopened = $logfile;
}
sub reopen_logfile
{
my ($logfile) = @_;
open *STDERR, ">>$logfile";
open *STDOUT, ">>$logfile";
open *STDIN, ">>/dev/null";
$fd_reopened = $logfile;
}
# This is set up in place here to support other non -[a-z] directives
my $conf_directives = {
"logfile" => \&handle_logfile,
};
if(open $etchandle, $etcfile)
{
foreach my $line (< $etchandle>)
{
$line ||= "";
$line =~ s/\#.*//g;
$line =~ s/\s+$//g;
$line =~ s/^\s+//g;
next unless $line;
next if $line =~ /^\-[dh]/;
if($line =~ /^[^\-]/)
{
my ($directive, $arg) = $line =~ /^(.*?)\s+(.*)/;
$conf_directives->{$directive}->($arg);
next;
}
push @$params, $line;
}
}else{
$params = [];
}
push #$params, "-u root" unless(grep "-u", #$params);
$params = join " ", #$params;
if(-e $pidfile)
{
open PIDHANDLE, "$pidfile";
my $localpid = <PIDHANDLE>;
close PIDHANDLE;
chomp $localpid;
if(-d "/proc/$localpid")
{
print STDERR "memcached is already running.\n";
exit;
}else{
`rm -f $localpid`;
}
}
my $pid = fork();
if($pid == 0)
{
reopen_logfile($fd_reopened);
exec "$memcached $params";
exit(0);
}else{
if(open PIDHANDLE,">$pidfile")
{
print PIDHANDLE $pid;
close PIDHANDLE;
}else{
print STDERR "Can't write pidfile to $pidfile.\n";
}
}
Simple solution for CentOS 6:
First copy /etc/sysconfig/memcached to /etc/sysconfig/memcached2 and write new settings to the new file.
Then copy /etc/init.d/memcached to /etc/init.d/memcached2 and change in the new file:
PORT to your new port (it should be reset from /etc/sysconfig/memcached2, so we do it just in case)
/etc/sysconfig/memcached to /etc/sysconfig/memcached2
/var/run/memcached/memcached.pid to /var/run/memcached/memcached2.pid
/var/lock/subsys/memcached to /var/lock/subsys/memcached2
Now you can use service memcached2 start, service memcached2 stop, etc. Don't forget chkconfig memcached2 on so it runs when the machine boots up.
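A condensed sketch of those steps (file names as above):
sudo cp /etc/sysconfig/memcached /etc/sysconfig/memcached2    # then edit PORT etc. in the copy
sudo cp /etc/init.d/memcached /etc/init.d/memcached2          # then adjust the paths listed above
sudo chkconfig memcached2 on
sudo service memcached2 start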
In /etc/memcached.conf you can just edit it like below:
-l 192.168.112.22,127.0.0.1
You must use a comma between the two IP addresses.
I wrote a program that uses the Perl POE framework to implement a JSON web service.
So far so good: I have no problems running the application on Debian systems. But when I run it under RHEL5, the network connection is refused. There is no setuid, so the service runs as root, and the port (9991) is outside the service-port range.
My workaround is to start the application as a non-daemon with nohup $CMD &. That's far from ideal, but I have absolutely no idea why it fails.
My init.d scripts:
DEBIAN:
#!/bin/sh -e
### BEGIN INIT INFO
# Provides: jobserver
# Required-Start: $local_fs $network $syslog
# Required-Stop: $local_fs $network $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: start/stop jobserver
### END INIT INFO
APPLICATION_CONFIG="/etc/jobserver/jobserver.conf"
test -f $APPLICATION_CONFIG && . $APPLICATION_CONFIG
set -e
if [ ! -x $APPLICATION_PATH ] ; then
echo "No jobserver installed"
exit 0
fi
#load init.d helper functions
. /lib/lsb/init-functions
PIDFILE=$APPLICATION_PIDFILE
CONF=$APPLICATION_CONFIG
DAEMON=$APPLICATION_DAEMON
PARAMETER="--configuration $CONF --daemon"
if [ -z "$PIDFILE" ] ; then
echo "ERROR: APPLICATION_PIDFILE needs to be defined in application config" >&2
exit 2
fi
jobserver_start() {
#log_daemon_msg "Starting Jobserver Daemon"
log_success_msg "Starting Jobserver Daemon"
start-stop-daemon --start --quiet --oknodo --make-pidfile --pidfile "$PIDFILE" --exec "$DAEMON" -- $PARAMETER
#log_end_msg $?
}
jobserver_stop() {
log_success_msg "Stopping Jobserver Daemon"
start-stop-daemon --stop --quiet --oknodo --pidfile "${PIDFILE}"
rm -f "${PIDFILE}"
#log_end_msg $?
}
case $1 in
start)
jobserver_start
;;
stop)
jobserver_stop
;;
restart)
jobserver_stop
jobserver_start
;;
*)
log_success_msg "Usage: /etc/init.d/jobserver {start|stop|restart}"
exit 1
;;
esac
exit $?;
REDHAT:
# Source function library.
. /etc/rc.d/init.d/functions
APPLICATION_CONFIG="/etc/jobserver/jobserver.conf"
test -f $APPLICATION_CONFIG && . $APPLICATION_CONFIG
PIDFILE=$APPLICATION_PIDFILE
CONF=$APPLICATION_CONFIG
DAEMON=$APPLICATION_DAEMON
PARAMETER="--configuration $CONF --daemon"
RETVAL=0
if [ -z "$PIDFILE" ] ; then
echo "ERROR: APPLICATION_PIDFILE needs to be defined in application config" >&2
exit 2
fi
jobserver_start() {
echo -n $"Starting jobserver daemon: "
daemon --pidfile=$PIDFILE $DAEMON $PARAMETER
RETVAL=$?
if [ $RETVAL -ne 0 ]; then
failure;
fi;
echo
return $RETVAL
}
jobserver_stop() {
echo -n $"Stopping jobserver daemon: "
if [ ! -f $PIDFILE ]; then
echo -n $"Jobserver daemon is not running: ";
else
killproc -p $PIDFILE
RETVAL=$?
fi
echo
return $RETVAL;
}
case $1 in
start)
jobserver_start
;;
stop)
jobserver_stop
;;
restart)
jobserver_stop
jobserver_start
;;
*)
echo "Usage: /etc/init.d/jobserver {start|stop|restart}"
exit 1
;;
esac
exit $?;
Daemon process:
sub daemonize {
# start a new child process
if (fork()) {
exit(0);
}
# become process group leader
unless (POSIX::setsid) {
die("POSIX setsid failed: $!");
}
# change to root dir
chdir("/");
foreach (0 .. (POSIX::sysconf(&POSIX::_SC_OPEN_MAX) || 1024)) {
POSIX::close($_);
}
# allow only user based io
umask(077);
# reopen pipes
open(STDIN, "<", "/dev/null");
open(STDOUT, ">", "/dev/null");
open(STDERR, ">", "/dev/null");
# Advisory. Fork one more time; this is not "necessary" for most toolserver
# daemons but it is a best practice as there are situations where
# forking twice is needed to avoid zombies. A second fork also
# prevents the daemon from ever re-acquiring a terminal, by making
# the main daemon process not be the process group leader
if (fork()) {
exit(0);
}
# write pidfile
my $pidfile = "/var/run/jobserverd.pid";
open(PIDFILE, ">$pidfile");
print(PIDFILE "$$");
close(PIDFILE);
}
Server frontend:
sub listen {
my $self = shift;
logInfo("Starting webservice. Listening to port ".$self->{'_port'}, 1);
# Spawn the webservice
POE::Component::Server::HTTP->new (
Port => $self->{'_port'},
ContentHandler => {
"/json/" => \&dispatchJSONService
},
Headers => {
"Server" => "Perl JobServer version ".$self->{'_version'}
},
);
$poe_kernel->run();
}
Problem solved: the problem was the double fork. Thanks to all for reading. Never trust advisories! ;)