Logstash v2.2.0 fails to start as a service

+ PATH=/sbin:/usr/sbin:/bin:/usr/bin
+ export PATH
+ id -u
+ [ 0 -ne 0 ]
+ name=logstash
+ pidfile=/var/run/logstash.pid
+ LS_USER=logstash
+ LS_GROUP=logstash
+ LS_HOME=/var/lib/logstash
+ LS_HEAP_SIZE=1g
+ LS_LOG_DIR=/var/log/logstash
+ LS_LOG_FILE=/var/log/logstash/logstash.log
+ LS_CONF_DIR=/etc/logstash/conf.d
+ LS_OPEN_FILES=16384
+ LS_NICE=19
+ LS_OPTS=
+ [ -r /etc/default/logstash ]
+ . /etc/default/logstash
+ KILL_ON_STOP_TIMEOUT=0
+ [ -r /etc/sysconfig/logstash ]
+ program=/opt/logstash/bin/logstash
+ args=agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log
+ status
+ [ -f /var/run/logstash.pid ]
+ cat /var/run/logstash.pid
+ pid=12716
+ kill -0 12716
+ return 2
+ code=2
+ [ 2 -eq 0 ]
+ start
+ LS_JAVA_OPTS= -Djava.io.tmpdir=/var/lib/logstash
+ HOME=/var/lib/logstash
+ export PATH HOME LS_HEAP_SIZE LS_JAVA_OPTS LS_USE_GC_LOGGING
+ id -Gn logstash
+ tr ,
+ sed s/,$//
+ echo
+ SGROUPS=logstash
+ [ ! -z logstash ]
+ EXTRA_GROUPS=--groups logstash
+ ulimit -n 16384
+ echo 22073
+ echo logstash started.
logstash started.
+ return 0
+ code=0
+ exit 0
+ nice -n 19 chroot --userspec logstash:logstash --groups logstash / sh -c
cd /var/lib/logstash
ulimit -n 16384
exec "/opt/logstash/bin/logstash" agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log
The above is the command output when the Logstash service is started.
(I got this by adding the -x switch to the shebang of the init.d script: #!/bin/sh -x)
Logstash v2.2.0 is installed using the DEB package on Ubuntu 14.04.
When I run the exec command used by the init script (as seen in the output above) directly, Logstash starts up fine and works flawlessly:
"/opt/logstash/bin/logstash" agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log
Something in the init.d service script causes Logstash to fail as a service. I suspect the nice command that the init script uses.
Do you guys see any issues with the 'nice' command that is being used in the init script?
+ nice -n 19 chroot --userspec logstash:logstash --groups logstash / sh -c
cd /var/lib/logstash
ulimit -n 16384
exec "/opt/logstash/bin/logstash" agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log

When you uninstalled the older version, the logstash user and group were removed as well. The new installation then created a new logstash user and group with a different uid and gid, yet the ownership of the common Logstash directories still belongs to the old uid and gid.
When the new Logstash starts, it tries to read/write these directories and fails, which is why Logstash is not running.
Try changing the ownership of these directories, removing the stale pid file, and starting Logstash again:
chown -R logstash:logstash /var/log/logstash
chown -R logstash:logstash /var/lib/logstash
chown -R logstash:logstash /etc/logstash
rm -rf /var/run/logstash.pid
/etc/init.d/logstash start
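A quick way to confirm the mismatch first (a suggestion of mine, using only standard tools and the paths from the output above): compare the numeric owners of the directories with the current logstash uid/gid.

# If the numeric owner shown by ls -ln differs from the uid/gid reported
# by id, the directories still belong to the old, deleted logstash user.
id logstash
ls -lnd /var/lib/logstash /var/log/logstash /etc/logstash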

Related

boot.img too large when I compile AOSP11 source code for OTA

I am trying to enable A/B OTA in the AOSP 11 source code.
I followed the steps below:
Step 1: Set BOARD_USES_AB_IMAGE := true.
Step 2: Set CONFIG_ANDROID_AB=y in the U-Boot source code.
I am getting the compilation error below:
libcameradevice curr board is yy356x
[ 91% 48807/53497] Target boot image from recovery: out/target/product/xx/boot.img
FAILED: out/target/product/xx/boot.img
/bin/bash -c "(out/host/linux-x86/bin/mkbootimg --kernel out/target/product/xx/kernel --ramdis
k out/target/product/xx/ramdisk-recovery.img --cmdline "console=ttyFIQ0 androidboot.baseband=N/
A androidboot.wificountrycode=CN androidboot.veritymode=enforcing androidboot.hardware=yy30board andro
idboot.console=ttyFIQ0 androidboot.verifiedbootstate=orange firmware_class.path=/vendor/etc/firmware i
nit=/init rootwait ro init=/init androidboot.selinux=permissive buildvariant=userdebug" --recovery_dt
bo out/target/product/xx/rebuild-dtbo.img --dtb out/target/product/xx/dtb.img --os_version
11 --os_patch_level 2021-08-05 --second kernel/resource.img --header_version 2 --output out/target/
product/xx/boot.img ) && (size=$(for i in out/target/product/xx/boot.img; do stat -c "%s" "$i" | tr -d '\n'; echo +; done; echo 0); total=$(( $( echo "$size" ) )); printname=$(ec
ho -n " out/target/product/xx/boot.img" | tr " " +); maxsize=$(( 100663296-0)); if [ "
$total" -gt "$maxsize" ]; then echo "error: $printname too large ($total > $maxsize)"; false;
elif [ "$total" -gt $((maxsize - 32768)) ]; then echo "WARNING: $printname approaching size li
mit ($total now; limit $maxsize)"; fi )"
error: +out/target/product/xx/boot.img too large (103485440 > 100663296)
[ 91% 48808/53497] //libcore/mmodules/intracoreapi:art-module-intra-core-api-stubs-source metalava mer
metalava detected access to files that are not explicitly specified. See /home/test/aosp/aa_android/out/soong/.intermediates/libcore/mmodules/intracoreapi/art-module-intra-core-api-stubs-source/android_common/art-module-intra-core-api-stubs-source-violations.txt for details.
01:25:09 ninja failed with: exit status 1
failed to build some targets (02:42:20 (hh:mm:ss))
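For what it's worth, an observation based only on the numbers in the log (the variable name is my assumption, since the post does not show the device's BoardConfig.mk): the limit of 100663296 bytes is exactly 96 MiB, a value that typically comes from BOARD_BOOTIMAGE_PARTITION_SIZE, and the built image exceeds it by about 2.7 MiB. A minimal sketch reproducing the failing check:

# Recompute the size check from the failing build step.
size=$(stat -c "%s" out/target/product/xx/boot.img)
maxsize=$((96 * 1024 * 1024))   # 100663296; assumed to be BOARD_BOOTIMAGE_PARTITION_SIZE
echo "boot.img is $size bytes, $((size - maxsize)) bytes over the $maxsize limit"

So the choice is between shrinking the kernel/ramdisk and growing the boot partition, assuming the device's partition table allows the latter.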

Generate many files with wildcard, then merge into one

I have two rules in my Snakefile: one generates several sets of files using wildcards, the other merges everything into a single file. This is how I wrote it:
chr = range(1, 23)

rule generate:
    input:
        og_files = config["tmp"] + '/chr{chr}.bgen',
    output:
        out = multiext(config["tmp"] + '/plink/chr{{chr}}',
                       '.bed', '.bim', '.fam')
    shell:
        """
        plink \
            --bgen {input.og_files} \
            --make-bed \
            --oxford-single-chr \
            --out {config[tmp]}/plink/chr{chr}
        """
rule merge:
    input:
        plink_chr = expand(config["tmp"] + '/plink/chr{chr}.{ext}',
                           chr = chr,
                           ext = ['bed', 'bim', 'fam'])
    output:
        out = multiext(config["tmp"] + '/all',
                       '.bed', '.bim', '.fam')
    shell:
        """
        plink \
            --pmerge-list-dir {config[tmp]}/plink \
            --make-bed \
            --out {config[tmp]}/all
        """
Unfortunately, this does not allow me to track the files coming from the first rule into the second rule:
$ snakemake -s myfile.smk -c1 -np
Building DAG of jobs...
MissingInputException in line 17 of myfile.smk:
Missing input files for rule merge:
[list of all the files made by expand()]
What can I use to generate the 22 sets of files with the wildcard chr in generate, but still be able to track them in the input of merge? Thank you in advance for your help.
In rule generate I think you don't want to escape the {chr} wildcard, otherwise it doesn't get replaced. I.e.:
out = multiext(config["tmp"] + '/plink/chr{{chr}}',
               '.bed', '.bim', '.fam')
should be:
out = multiext(config["tmp"] + '/plink/chr{chr}',
               '.bed', '.bim', '.fam')
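One more thing worth checking, although this is an assumption on my part since the exception you posted only concerns the merge input: inside the shell directive of generate, a bare {chr} is not defined in Snakemake's formatting context; the wildcard has to be referenced as {wildcards.chr} there. A minimal sketch of the last line of that shell command under this assumption:

            --out {config[tmp]}/plink/chr{wildcards.chr}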

Cannot launch VS Code from WSL terminal after disabling the path sharing feature

I have exported the path in .bashrc, but when I type code in the terminal, there is no response.
Debug output:
+ COMMIT=e5a624b788d92b8d34d1392e4c4d9789406efe8f
+ APP_NAME=code
+ QUALITY=stable
+ NAME=Code
+ DATAFOLDER=.vscode
+ realpath /mnt/c/Users/Chen/AppData/Local/Programs/Microsoft VS Code/bin/code
+ dirname /mnt/c/Users/Chen/AppData/Local/Programs/Microsoft VS Code/bin/code
+ dirname /mnt/c/Users/Chen/AppData/Local/Programs/Microsoft VS Code/bin
+ VSCODE_PATH=/mnt/c/Users/Chen/AppData/Local/Programs/Microsoft VS Code
+ ELECTRON=/mnt/c/Users/Chen/AppData/Local/Programs/Microsoft VS Code/Code.exe
+ IN_WSL=false
+ [ -n Ubuntu-20.04 ]
+ IN_WSL=true
+ [ true = true ]
+ export WSLENV=ELECTRON_RUN_AS_NODE/w:WT_SESSION::WT_PROFILE_ID
+ wslpath -m /mnt/c/Users/Chen/AppData/Local/Programs/Microsoft VS Code/resources/app/out/cli.js
+ CLI=C:/Users/Chen/AppData/Local/Programs/Microsoft VS Code/resources/app/out/cli.js
+ WSL_EXT_ID=ms-vscode-remote.remote-wsl
+ ELECTRON_RUN_AS_NODE=1 /mnt/c/Users/Chen/AppData/Local/Programs/Microsoft VS Code/Code.exe C:/Users/Chen/AppData/Local/Programs/Microsoft VS Code/resources/app/out/cli.js --locate-extension ms-vscode-remote.remote-wsl
+ cat /tmp/remote-wsl-loc.txt
+ WSL_EXT_WLOC=
+ [ -n ]
+ ELECTRON_RUN_AS_NODE=1 /mnt/c/Users/Chen/AppData/Local/Programs/Microsoft VS Code/Code.exe C:/Users/Chen/AppData/Local/Programs/Microsoft VS Code/resources/app/out/cli.js .
+ exit 1
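Reading the trace, the script exits because the --locate-extension step left /tmp/remote-wsl-loc.txt empty, so WSL_EXT_WLOC is never set. A hypothetical next step (my suggestion: simply re-run that step outside the wrapper so its error output is visible):

ELECTRON_RUN_AS_NODE=1 "/mnt/c/Users/Chen/AppData/Local/Programs/Microsoft VS Code/Code.exe" \
  "C:/Users/Chen/AppData/Local/Programs/Microsoft VS Code/resources/app/out/cli.js" \
  --locate-extension ms-vscode-remote.remote-wsl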

LSF serial jobs on HPC performance worse than local sequential executions

I'm learning how to use the HPC cluster in our lab, which uses LSF. I tried a set of simple serial jobs, each of which counts the frequency of the words in a text file. I wrote a Python script for counting word frequencies named count_word_freq.py and a job script named myjob.job, as follows:
python count_word_freq.py --in ~/books/1.txt --out ~/freqs/freq1.txt
python count_word_freq.py --in ~/books/2.txt --out ~/freqs/freq2.txt
python count_word_freq.py --in ~/books/3.txt --out ~/freqs/freq3.txt
python count_word_freq.py --in ~/books/4.txt --out ~/freqs/freq4.txt
python count_word_freq.py --in ~/books/5.txt --out ~/freqs/freq5.txt
and an LSF script to submit the serial jobs to the selfscheduler:
#!/bin/bash
#BSUB -J test01
#BSUB -P acc_pandeg01a
#BSUB -q alloc
#BSUB -W 20
#BSUB -n 20
#BSUB -m manda
#BSUB -o %J.stdout
#BSUB -eo %J.stderr
#BSUB -L /bin/bash
module load python
module load py_packages
module load selfsched
# And run the program; output will be on stdout
mpirun selfsched < myjobs001.jobs
The Python code is as follows:
import argparse
import time as T

def readBookAsFreqDict(infile):
    # count words that consist only of alphabetic characters
    dic = {}
    with open(infile, "r") as f:
        for line in f:
            contents = line.split(" ")
            for cont in contents:
                if str.isalpha(cont):
                    if cont not in dic:
                        dic[cont] = 1
                    else:
                        dic[cont] = dic[cont] + 1
    return dic

if __name__ == "__main__":
    start = T.time()
    parser = argparse.ArgumentParser()
    # flag names match the job file (--in/--out); dest avoids the
    # reserved word 'in' as an attribute name
    parser.add_argument('--in', dest='infile', type=str, help='input file')
    parser.add_argument('--out', dest='outfile', type=str, help='output file')
    args = parser.parse_args()
    dic = readBookAsFreqDict(args.infile)
    with open(args.outfile, "w") as outfile:
        for key, freq in dic.items():
            outfile.write(key + ":" + str(freq) + "\n")
    end = T.time()
    print(end - start)
The five input texts are almost the same size, around 3.5 MB each. My question is that the CPU time for running this serial job is 980 s, which is worse than running the tasks sequentially.
As I understand it, the selfscheduler can automatically assign the five jobs to empty nodes, which should save running time compared to executing them sequentially. Is that because the execution time of each job is too short compared to the time needed to find an empty node? Are there any other approaches that can be used to make it faster?
Thank you!
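A first diagnostic worth trying (my suggestion, with paths taken from the job file above): time one task locally, outside LSF, and compare that against the per-task time of the cluster run.

# If a single task finishes in a few seconds, MPI startup plus scheduling
# overhead can easily dominate five such short jobs.
time python count_word_freq.py --in ~/books/1.txt --out /tmp/freq1.txt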

Compile a static version of pngquant

I'm trying to create a statically linked version of pngquant on Oracle Linux Server release 7.1. I've compiled a static version of zlib and a static version of libpng.
Then, when I configure pngquant, it always reports that it will be linked against the shared version of zlib.
$ ./configure --with-libpng=../libpng-1.6.21 --extra-cflags="-I../zlib-1.2.8" --extra-ldflags="../zlib-1.2.8/libz.a"
Compiler: gcc
Debug: no
SSE: yes
OpenMP: no
libpng: static (1.6.21)
zlib: shared (1.2.7)
lcms2: no
When I execute make, the options seem to be passed to the compiler correctly. However, the resulting binary requires libz.so to be executed. It seems that my directives are ignored, or that the installed version always takes precedence.
Is there any way of forcing pngquant to be compiled with the static version of zlib?
I'm not sure if I got it right, but here's a patch to pngquant's configure that worked for me. With it, configure accepts --with-zlib=<dir> as a parameter. Save it as pngquant.patch and apply it with patch -uN -p1 -i pngquant.patch.
diff -ur pngquant-2.9.0/configure pngquant-2.9.0.fixed/configure
--- pngquant-2.9.0/configure 2017-03-06 09:37:30.000000000 +0100
+++ pngquant-2.9.0.fixed/configure 2017-03-07 09:57:20.246012152 +0100
@@ -48,6 +48,7 @@
help "--with-cocoa/--without-cocoa use Cocoa framework to read images"
fi
help "--with-libpng=<dir> search for libpng in directory"
+ help "--with-zlib=<dir> search for zlib in directory"
echo
help "CC=<compiler> use given compiler command"
help "CFLAGS=<flags> pass options to the compiler"
@@ -97,6 +98,9 @@
--with-libpng=*)
LIBPNG_DIR=${i#*=}
;;
+ --with-zlib=*)
+ ZLIB_DIR=${i#*=}
+ ;;
--prefix=*)
PREFIX=${i#*=}
;;
@@ -238,6 +242,19 @@
echo "${MAJ}${MIN}"
}
+# returns full zlib.h version string
+zlibh_string() {
+ echo "$(grep -m1 "define ZLIB_VERSION" "$1" | \
+ grep -Eo '"[^"]+"' | grep -Eo '[^"]+')"
+}
+
+# returns major minor version numbers from png.h
+zlibh_majmin() {
+ local MAJ=$(grep -m1 "define ZLIB_VER_MAJOR" "$1" | grep -Eo "[0-9]+")
+ local MIN=$(grep -m1 "define ZLIB_VER_MINOR" "$1" | grep -Eo "[0-9]+")
+ echo "${MAJ}${MIN}"
+}
+
error() {
status "$1" "error ... $2"
echo
@@ -420,11 +437,42 @@
error "libpng" "not found (try: $LIBPNG_CMD)"
fi
-# zlib
-if ! find_library "zlib" "z" "zlib.h" "libz.a" "libz.$SOLIBSUFFIX*"; then
- error "zlib" "not found (please install zlib-devel package)"
+# try if given flags are enough for zlib
+HAS_ZLIB=0
+if echo "#include \"zlib.h\"
+ int main(){
+ uLong test = zlibCompileFlags();
+ return 0;
+}" | "$CC" -xc -std=c99 -o /dev/null $CFLAGS $LDFLAGS - &> /dev/null; then
+ status "zlib" "custom flags"
+ HAS_ZLIB=1
fi
+if [ "$HAS_ZLIB" -eq 0 ]; then
+ # try static in the given directory
+ ZLIBH=$(find_h "$ZLIB_DIR" "zlib.h")
+ if [ -n "$ZLIBH" ]; then
+ ZLIBH_STRING=$(zlibh_string "$ZLIBH")
+ ZLIBH_MAJMIN=$(zlibh_majmin "$ZLIBH")
+ if [[ -n "$ZLIBH_STRING" && -n "$ZLIBH_MAJMIN" ]]; then
+ ZLIBA=$(find_f "$ZLIB_DIR" "libz${ZLIBH_MAJMIN}.a")
+ if [ -z "$ZLIBA" ]; then
+ ZLIBA=$(find_f "$ZLIB_DIR" "libz.a")
+ fi
+ if [ -n "$ZLIBA" ]; then
+ cflags "-I${ZLIBH%/*}"
+ lflags "${ZLIBA}"
+ status "zlib" "static (${ZLIBH_STRING})"
+ HAS_ZLIB=1
+ fi
+ fi
+ fi
+fi
+# zlib
+if ! find_library "zlib" "z" "zlib.h" "libz.a" "zlib.$SOLIBSUFFIX*"; then
+ error "zlib" "not found (please install zlib-devel package)"
+fi
+
# lcms2
if [ "$LCMS2" != 0 ]; then
if find_library "lcms2" "lcms2" "lcms2.h" "liblcms2.a" "liblcms2.$SOLIBSUFFIX*"; then
Sorry, the configure script does not support it. It shouldn't be too hard to modify configure to pass appropriate flags to pkg-config or do the same workaround it does for libpng.
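If the patched configure then reports zlib as static, a quick sanity check on the final binary (my suggestion; ldd is a standard tool, and the binary name comes from the build):

# No libz line in the output means zlib was linked in statically.
ldd pngquant | grep libz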