IBM HPC 4.2: different behavior between IBM MPI and OpenMPI under LSF

We have an IBM HPC 4.2 cluster with 32 compute nodes.
We compiled and installed OpenMPI 1.10.1 with LSF support.
The problem:
We see different behavior between IBM MPI (the MPI shipped with the platform, a.k.a. PMPI) and OpenMPI when we use them under LSF.
Example:
I compiled the hello_world.c MPI example with both implementations. When I launch the execution without LSF (without bsub) I get:
PMPI: mpirun -np 4 -hostlist "compute000 compute001" ./hello_world_pmpi.exe
Hello world! I'm 1 of 4 on compute000
Hello world! I'm 2 of 4 on compute001
Hello world! I'm 3 of 4 on compute001
Hello world! I'm 0 of 4 on compute000
OpenMPI: mpirun -np 4 --host "compute000,compute001" --mca btl self,sm --mca mtl psm ./hello_world_ompi.exe
Hello world! I'm 1 of 4 on compute000
Hello world! I'm 2 of 4 on compute000
Hello world! I'm 3 of 4 on compute001
Hello world! I'm 0 of 4 on compute001
Which is logical. But when I use LSF, things change, and PMPI behaves even more weirdly! I get:
PMPI: bsub -n 4 -R "span[ptile=2]" -o pmpi-%J.out mpirun ./hello_world_pmpi.exe
cat pmpi-xxx.out
...
Hello world! I'm 0 of 1 on compute017
OpenMPI: bsub -n 4 -R "span[ptile=2]" -o ompi-%J.out mpirun --mca btl self,sm --mca mtl psm ./hello_world_ompi.exe
cat ompi-xxx.out
...
Hello world! I'm 1 of 4 on compute005
Hello world! I'm 2 of 4 on compute010
Hello world! I'm 3 of 4 on compute010
Hello world! I'm 0 of 4 on compute005
It seems like just one PMPI process is launched instead of 4.
I have the same problem with IMB (the Intel MPI Benchmarks) and HPCC: they complain about a lack of processes. Without LSF they both work fine; with LSF, only OpenMPI works correctly.
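For reference, one quick diagnostic (a sketch; LSB_HOSTS and LSB_MCPU_HOSTS are standard LSF job environment variables, though this exact bsub line is only an illustration) is to print the allocation LSF hands to the job:
bsub -n 4 -R "span[ptile=2]" -o env-%J.out 'echo LSB_HOSTS=$LSB_HOSTS; echo LSB_MCPU_HOSTS=$LSB_MCPU_HOSTS'
If the allocation looks right here, the question becomes whether PMPI's mpirun is actually reading it.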
Any idea?
Thanks in advance.

Related

I cannot submit a job on HPC (QSUB)

For the past 2 months, I have been trying to find out why I cannot submit a job on our HPC (using qsub). Recently, I found out that my home directory is
/export/home/wrfuser
while my co-workers' are
/home/wrfuser1
(note the /export prefix).
I can submit a job, but it never shows a result. Here's my sample hello.qsub:
#!/bin/bash --login
#PBS -j oe
#PBS -l walltime=00:01:00,nodes=1,ppn=1,mem=50mb
export WORKDIR=/mnt/NFS003/WRF/WRF_hist/qsub_test
cd ${WORKDIR}
echo "HELLO WORLD"
[wrfuser@HPC qsub_test]$ vi hello.qsub
[wrfuser@HPC qsub_test]$ qsub hello.qsub
Your job 7618 ("hello.qsub") has been submitted
[wrfuser@HPC qsub_test]$ qstat
job-ID  prior    name        user     state  submit/start at      queue  slots  ja-task-ID
7617    0.55500  hello.qsub  wrfuser  Eqw    04/06/2018 10:21:35         1
7618    0.55500  hello.qsub  wrfuser  Eqw    04/06/2018 10:35:15         1
[wrfuser@HPC qsub_test]$
If it's not possible to do that on /export/home, is there any other way to submit a job on the HPC?
I solved it! I changed my qsub script to:
#!/bin/bash
#
#$ -cwd
#$ -j y
#$ -S /bin/bash
#$ -pe orte 64
echo "HELLO JOHN"
mkdir Hello_world
[wrfuser@CADHPC01 run]$
I was using number of nodes, ppn, and memory in my previous script, and I changed it to a number of cores (#$ -pe orte 64). However, I'm not 100% sure that this was the main reason for the error.
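As an aside (a hedged suggestion, assuming Grid Engine, which the qstat output above indicates): the reason a job sits in the Eqw state can usually be read directly from the scheduler:
qstat -j 7617
The error reason line in its output normally points at the failing path or directive.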
I am a newbie here on Stack Overflow, and it feels like I will learn and enjoy exponentially here! Thanks! :D

How to get mallet to load all tokens from a line without a label?

I'm trying to perform topic modeling on a dataset that's in a whitespace-delimited file with no label, and I can't get Mallet to load all the tokens. I'm using version 2.0.8 on Linux and Mac.
As a test for the issue, I created a file with the one line:
1 2 3 4 5
Then ran
mallet import-file --token-regex [0-9]+ --keep-sequence true --label 0 --input testData --output testLoaded
mallet train-topics --input testLoaded
I should get 4 tokens, but I only get 3:
Data loaded.
max tokens: 3
total tokens: 3
It gets even worse if I try to use the --data flag (same result whether I combine it with --label 0 or use --data 2 on its own):
mallet import-file --token-regex [0-9]+ --keep-sequence true --label 0 --data 2 --input testData --output testLoaded2
mallet train-topics --input testLoaded2
Data loaded.
max tokens: 1
total tokens: 1
So either I lose the first token, or I only get the first token (2 is appearing in the output later on, so I know it's not loading the rest of the line as a single token in the latter case).
Mallet parses lines in two phases: first, it segments the line into fields using the --line-regex option; then it maps those fields onto the three instance parts (name, label, data) using the --name, --label, and --data options.
The command isn't working because it only changes the second phase, the mapping from regex groups to instance parts. It tells Mallet to separate off the first two fields, but then to ignore them. Here's an example of the default behavior:
$ bin/mallet import-file --input token_test.txt --keep-sequence \
--token-regex [0-9]+ --print-output
name: 1
target: 2
input: 0: 3 (0)
1: 4 (1)
2: 5 (2)
If we add --label 0, it still captures the second field but ignores it:
$ bin/mallet import-file --input token_test.txt --keep-sequence \
--token-regex [0-9]+ --label 0 --print-output
name: 1
target: <null>
input: 0: 3 (0)
1: 4 (1)
2: 5 (2)
Now, if we redefine the line regex, we can grab the whole line as a single field and use it all as data:
$ bin/mallet import-file --input token_test.txt --keep-sequence \
--token-regex [0-9]+ --line-regex '(.*)' --data 1 --name 0 --label 0 --print-output
name: csvline:1
target: <null>
input: 0: 1 (0)
1: 2 (1)
2: 3 (2)
3: 4 (3)
4: 5 (4)
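Putting it together with the file names from the question (a sketch reusing the question's own testData/testLoaded names and the flags shown above), the full import-plus-train run would be:
mallet import-file --input testData --output testLoaded --keep-sequence \
  --token-regex '[0-9]+' --line-regex '(.*)' --data 1 --name 0 --label 0
mallet train-topics --input testLoaded
With this, all five tokens of the test line should be loaded.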

strip the last and first character from a String

Is it fairly easy to strip the first and last character from a string using awk/sed?
Say I have this string
( 1 2 3 4 5 6 7 )
I would like to strip the parentheses from it. How should I do this?
sed way
$ echo '( 1 2 3 4 5 6 7 )' | sed 's/^.\(.*\).$/\1/'
1 2 3 4 5 6 7
awk way
$ echo '( 1 2 3 4 5 6 7 )' | awk '{print substr($0, 2, length($0) - 2)}'
1 2 3 4 5 6 7
POSIX sh way
$ var='( 1 2 3 4 5 6 7 )'; var="${var#?}"; var="${var%?}"; echo "$var"
1 2 3 4 5 6 7
bash way
$ var='( 1 2 3 4 5 6 7 )'; echo "${var:1: -1}"
1 2 3 4 5 6 7
If you use bash, then use the bash way.
If not, prefer the POSIX sh way; it is faster than loading sed or awk.
Other than that, you may also be doing other text processing that you can combine with this, so depending on the rest of the script you may benefit from using sed or awk in the end.
Why doesn't sed '..' s_res.temp > s_res.temp work?
It does not work because the redirection > truncates the file before it is read.
To solve this you have some choices:
What you really want to do is edit the file. sed is a stream editor, not a file editor.
ed, though, is a file editor (and the standard one, too!). So, use ed:
$ printf '%s\n' "%s/^.\(.*\).$/\1/" "." "wq" | ed s_res.temp
use a temporary file, and then mv it to replace the old one.
$ sed 's/^.\(.*\).$/\1/' s_res.temp > s_res.temp.temp
$ mv s_res.temp.temp s_res.temp
use the -i option of sed. Note that -i is not POSIX: GNU sed accepts it as shown below, while BSD/macOS sed requires an explicit suffix argument (e.g. sed -i '' ...):
$ sed -i 's/^.\(.*\).$/\1/' s_res.temp
abuse the shell (not really recommended): the < redirection opens the file before rm unlinks it, so the subshell can still read the old contents while sed writes a fresh file under the same name:
$ (rm s_res.temp; sed 's/^.\(.*\).$/\1/' > s_res.temp) < s_res.temp
On Mac OS X (latest version 10.12, Sierra), bash is stuck at version 3.2.57, which is quite old. One can always install bash with brew and get version 4.x, which includes the substring expansion needed for the bash way above to work.
There is a collection of bash versions and their respective changes, compiled on the bash-hackers wiki.
To remove the first and last characters from a given string, I like this sed:
sed -e 's/^.//' -e 's/.$//'
# ^^ ^^
# first char last char
See an example:
sed -e 's/^.//' -e 's/.$//' <<< "(1 2 3 4 5 6 7)"
1 2 3 4 5 6 7
And also a perl way:
perl -pe 's/^.|.$//g'
To remove the first character and the last two characters using sed, e.g. with input "t2.large", and desired output t2.large:
sed -e 's/^.//' -e 's/..$//'
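More generally (a sketch using POSIX BRE intervals; M and N are illustrative variable names), you can parameterize how many characters to drop from each end:
$ M=1 N=2
$ echo '"t2.large",' | sed "s/^.\{$M\}//; s/.\{$N\}\$//"
t2.large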

split a large text (xyz) database into x equal parts

I want to split a large text database (~10 million lines). I can use commands like:
$ sed -i -e '4 s/(dB)//' -e '4 s/Best\ unit/Best_Unit/' -e '1,3 d' '/cygdrive/c/Radio Mobile/Output/TRC_TestProcess/trc_longlands.txt'
$ split -l 1000000 /cygdrive/P/2012/Job_044_DM_Radio_Propogation/Working/FinalPropogation/TRC_Longlands/trc_longlands.txt 1
The first command cleans the database and the second splits it, but then the output files do not have the field names. How can I incorporate the field names into each dataset, and produce a list with the original file, the new file name, and the line numbers (from the original file)? This is so it can be used in the ArcGIS model to re-join the final simplified polygon datasets.
Alternatively, and more usefully, as this needs to go into an ArcGIS model, a Python-based solution would be best. More details are in https://gis.stackexchange.com/questions/21420/large-point-to-polygon-by-buffer-join-buffer-dissolve-issues#comment29062_21420 and Remove specific lines from a large text file in python.
So, going with a Cygwin-based solution as per the answer by icyrock.com, we have process_text.sh:
cd /cygdrive/P/2012/Job_044_DM_Radio_Propogation/Working/FinalPropogation/TRC_Longlands
mkdir processing
cp trc_longlands.txt processing/trc_longlands.txt
cd txt_processing
sed -i -e '4 s/(dB)//' -e '4 s/Best\ unit/Best_Unit/' -e '1,3 d' 'trc_longlands.txt'
split -l 1000000 trc_longlands.txt trc_longlands_
cat > a
h
1
2
3
4
5
6
7
8
9
^D
split -l 3
split -l 3 a 1
mv 1aa 21aa
for i in 1*; do head -n1 21aa|cat - $i > 2$i; done
for i in 21*; do echo ---- $i; cat $i; done
how can "TRC_Longlands" and the path be replaced with the input filename -in python we have %path%/%name for this.
in the last line is "do echo" necessary?
and this is called by python using
import os
os.system("process_text.bat")
where process_text.bat is basically
bash process_text.sh
I get the following error when running it from DOS:
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Users\georgec>bash P:\2012\Job_044_DM_Radio_Propogation\Working\FinalPropogation\TRC_Longlands\process_text.sh
'bash' is not recognized as an internal or external command, operable program or batch file.
Also, when I run the bash command from Cygwin, I get:
georgec@ATGIS25 /cygdrive/P/2012/Job_044_DM_Radio_Propogation/Working/FinalPropogation/TRC_Longlands
$ bash process_text.sh
: No such file or directory: /cygdrive/P/2012/Job_044_DM_Radio_Propogation/Working/FinalPropogation/TRC_Longlands
cp: cannot create regular file `processing/trc_longlands.txt\r': No such file or directory
: No such file or directory: txt_processing
: No such file or directoryds.txt
But the files are created in the root directory. Why is there a "." after the directory name? How can the outputs be given a .txt extension?
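As an aside: the \r in `processing/trc_longlands.txt\r' suggests process_text.sh was saved with Windows CRLF line endings, which would also explain the oddly overwritten error text. A common fix (assuming dos2unix is available under Cygwin; the sed form works too) is:
$ dos2unix process_text.sh
$ # or, equivalently:
$ sed -i 's/\r$//' process_text.sh
Note, too, that the script creates a directory named processing but then does cd txt_processing, which matches one of the messages above.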
If you want to just prepend the first line of the original file to all but the first of the splits, you can do something like:
$ cat > a
h
1
2
3
4
5
6
7
^D
$ split -l 3
$ split -l 3 a 1
$ ls
1aa 1ab 1ac a
$ mv 1aa 21aa
$ for i in 1*; do head -n1 21aa|cat - $i > 2$i; done
$ for i in 21*; do echo ---- $i; cat $i; done
---- 21aa
h
1
2
---- 21ab
h
3
4
5
---- 21ac
h
6
7
Obviously, the first file will have one line less than the middle parts, and the last part might be shorter too, but if that's not a problem, this should work just fine. Of course, if your header has more lines, just change head -n1 to head -nX, X being the number of header lines.
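To address the follow-up about replacing the hard-coded path with the input filename and giving the parts a .txt extension, here is a parameterized sketch (the script name, the lines-per-part default, and the header.tmp file are made up for illustration):
#!/bin/bash
# usage: bash split_with_header.sh /path/to/trc_longlands.txt 1000000
f=$1                                # input file, instead of the hard-coded TRC_Longlands path
lines=${2:-1000000}                 # lines per part
dir=$(dirname "$f"); base=$(basename "$f" .txt)
head -n 1 "$f" > "$dir/header.tmp"  # save the field names
tail -n +2 "$f" | split -l "$lines" - "$dir/${base}_"  # split the data rows
for p in "$dir/${base}_"*; do       # prepend the header and add a .txt extension
  cat "$dir/header.tmp" "$p" > "$p.txt" && rm "$p"
done
rm "$dir/header.tmp"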
Hope this helps.

How can I get Perl to accept negative numbers as command-line arguments?

Is there a way to get Perl to avoid treating negative values as command-line switches? Neither stringifying nor backslashing the argument seems to help under Linux:
$ perl -e 'print "#ARGV\n";' 4 5
4 5
$ perl -e 'print "#ARGV\n";' -4 5
Unrecognized switch: -4 (-h will show valid options).
$ perl -e 'print "#ARGV\n";' "-4" 5
Unrecognized switch: -4 (-h will show valid options).
$ perl -e 'print "#ARGV\n";' '-4' 5
Unrecognized switch: -4 (-h will show valid options).
$ perl -e 'print "#ARGV\n";' \-4 5
Unrecognized switch: -4 (-h will show valid options).
$ perl -E "say join ', ', #ARGV" -- -1 2 3
-1, 2, 3
The trick is using the double-hyphen (--) to end option parsing. The double-hyphen is a POSIX convention (also followed by GNU tools):
$ touch -a
usage: touch [-acfm] [-r file] [-t [[CC]YY]MMDDhhmm[.SS]] file ...
$ touch -- -a
$ ls
-a
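Applied back to the question's one-liner, the same -- trick works (a quick check of the same fix):
$ perl -e 'print "@ARGV\n";' -- -4 5
-4 5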