I made a test script test.qsub:
#!/bin/bash
#PBS -q batch
#PBS -o output.txt
#PBS -e Error.err
echo "hello world"
When I run qsub test.qsub, it does not generate the output.txt file, nor the error file. I also believe the other options are not working either; I would appreciate your help! It is said that you should configure torque.cfg, but in my installation that file was never generated and it is not in /var/spool/torque.
Try "#PBS -k oe". This directs PBS to keep stdout and stderr.
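For example, a sketch of the same test script with that directive added (with -k oe the streams are typically kept in your home directory on the execution host, under default names like test.qsub.o<jobid>, rather than at the -o/-e paths):
#!/bin/bash
#PBS -q batch
#PBS -k oe
echo "hello world"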
I have tried to submit the script below to our HPC:
#!/bin/bash
#PBS -N bwa_mem_tumor
#PBS -q batch
#PBS -l walltime=02:00:00
#PBS -l nodes=2:ppn=2
#PBS -j oe
sample=x
ref=absolute/path/GRCh38.p13.genome.fa
fwd=absolutepath/forward_read.fq.gz
rev=absolutepath/reverse_read.fq.gz
module load bio/samtools/1.9
bwa mem $ref $fwd $rev > $sample.tumor.sam && samtools view -S $sample.tumor.sam -b > $sample.tumor.bam && samtools sort $sample.tumor.bam > $sample.tumor.sorted.bam
However, as output I only get the $sample.tumor.sam file, and the log file says:
Lmod has detected the following error: The following module(s) are unknown:
"bio/samtools/1.9"
Please check the spelling or version number. Also try "module spider ..."
It is also possible your cache file is out-of-date; it may help to try:
$ module --ignore-cache load "bio/samtools/1.9"
Also make sure that all modulefiles written in TCL start with the string
#%Module
However, when I run module avail, it shows that bio/samtools/1.9 is on the list.
Also, when I use module --ignore-cache load "bio/samtools/1.9", the result is the same.
If I try to continue working with the SAM file and enter the following command manually:
samtools view -b RS0107.tumor.sam > RS0107.tumor.bam
it shows
[W::sam_read1] Parse error at line 200943
[main_samview] truncated file.
What could possibly be wrong with the samtools module, or with the script?
There are multiple Perl scripts that are run from the Cygwin terminal. An example is:
$ perl IdGeneratorTool.pl JSmith -i userInfo.adb -o JSmith.txt
The above is an example. Based on the input parameter JSmith, it reads a db file, generates an ID, and outputs it to a text file.
Now the list of these Perl invocations keeps growing, and they are added to a text file as shown below:
$ perl IdGeneratorTool.pl JSmith -i userInfo.adb -o JSmith.txt
$ perl IdGeneratorTool.pl PTesk -i userInfo.adb -o PTesk.txt
$ perl IdGeneratorTool.pl CMorris -i userInfo.adb -o CMorris.txt
$ perl IdGeneratorTool.pl JLawrence -i userInfo.adb -o JLawrence.txt
$ perl IdGeneratorTool.pl TCruise -i userInfo.adb -o TCruise.txt
...
And the list keeps growing.
I would like to know whether there's a way to execute all these Perl commands, which are in a text file, in one go.
I'm new to Perl and don't have much idea of what the options are.
An ideal scenario might be a tool where I can open this text file, click an execute button, and it executes all the commands and outputs multiple *.txt files into the same directory.
Or maybe a simple Perl script that can do it.
Put them into a file makeall (or whatever you want to call it).
Put #!/bin/bash as the first line of the file.
In Cygwin, enter chmod +x makeall.
In Cygwin, enter ./makeall.
With this you've created a bash script which will do all your calls of the Perl script.
Another option would be to just put all the user information into a CSV file and read that in order to call your script.
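A minimal sketch of that CSV idea (users.csv and its user,db,output field layout are just assumptions for illustration):
#!/bin/bash
# Read users.csv (assumed layout: user,dbfile,outputfile) and run the tool once per line
while IFS=, read -r user db out; do
    perl IdGeneratorTool.pl "$user" -i "$db" -o "$out"
done < users.csv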
WAIT! Even easier!
Put into the makeall script this:
#!/bin/bash
for user in \
JSmith \
PTesk \
CMorris \
JLawrence \
TCruise \
; do
perl IdGeneratorTool.pl "$user" -i userInfo.adb -o "$user".txt
done
Now you just need to add any additional user the same way I did for your examples.
Without seeing the source for IdGeneratorTool.pl it's hard to give any specific advice; but it is generally not hard to turn something like
do_stuff($ARGV[0], $opt_i, $opt_o);
into
while (<>) {
    chomp;
    my ($user, $adb, $outputfile) = split /\t/;
    do_stuff($user, $adb, $outputfile);
}
to read the input from a tab-delimited file instead of from command-line arguments.
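For illustration, with a loop like that in place you could feed the tool a tab-separated list from the shell (users.tsv is a hypothetical file name; the three fields are user, db file, and output file):
$ cat users.tsv
JSmith	userInfo.adb	JSmith.txt
PTesk	userInfo.adb	PTesk.txt
$ perl IdGeneratorTool.pl users.tsv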
You can create a text file with a list of users (one per line), for example user_list.txt:
JSmith
PTesk
CMorris
JLawrence
TCruise
Then create a bash script process_list.sh in the same directory with the following content:
#!/bin/bash
for user in $(cat user_list.txt)
do
    perl IdGeneratorTool.pl "$user" -i userInfo.adb -o "${user}.txt"
done
Now make the bash script executable with chmod +x process_list.sh and it is ready to run with ./process_list.sh.
When you need to add a new user, edit user_list.txt and add one more line to the file.
For the past 2 months, I have been trying to find out why I cannot submit a job on our HPC (using qsub). Recently, I found out that my home directory is
/export/home/wrfuser
while my co-workers' home directories are
/home/wrfuser1
(note the /export prefix).
I can submit a job but it never shows a result. Here's my sample hello.qsub:
#!/bin/bash --login
#PBS -j oe
#PBS -l walltime=00:01:00,nodes=1,ppn=1,mem=50mb
export WORKDIR=/mnt/NFS003/WRF/WRF_hist/qsub_test
cd ${WORKDIR}
echo "HELLO WORLD"
[wrfuser@HPC qsub_test]$ vi hello.qsub
[wrfuser@HPC qsub_test]$ qsub hello.qsub
Your job 7618 ("hello.qsub") has been submitted
[wrfuser@HPC qsub_test]$ qstat
job-ID prior name user state submit/start at queue slots ja-task-ID
7617 0.55500 hello.qsub wrfuser Eqw 04/06/2018 10:21:35 1
7618 0.55500 hello.qsub wrfuser Eqw 04/06/2018 10:35:15 1
[wrfuser@HPC qsub_test]$
If it's not possible to do that from /export/home, is there any other way to submit a job on the HPC?
I solved it!!! I changed my qsub script to
#!/bin/bash
#
#$ -cwd
#$ -j y
#$ -S /bin/bash
#$ -pe orte 64
echo "HELLO JOHN"
mkdir Hello_world
[wrfuser@CADHPC01 run]$
I was specifying the number of nodes, ppn, and memory in my previous script, and now I changed it to a number of cores (#$ -pe orte 64). However, I'm not 100% sure that this was the main reason for the error.
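If it happens again, one way to check (assuming a Grid Engine scheduler, which the Eqw state and the -pe orte syntax suggest) is to ask the scheduler why the job is stuck; qstat -j prints an error reason for a job sitting in Eqw, e.g. for one of the job IDs shown above:
qstat -j 7617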
I am a newbie here on Stack Overflow, and it feels like I will learn and enjoy exponentially here! Thanks! :D
I've submitted my job by the following command:
bsub -e error.log -o output.log ./myScript.sh
I have a question: why are the output and error logs available only once the job has ended?
Thanks
LSF doesn't stream the output back to the submission host. If the submission host and the execution host have a shared file system, and the JOB_SPOOL_DIR is in that shared file system (the spool directory is $HOME/.lsbatch by default), then you should see the stdout and stderr there. After the job finishes, the files there are copied back to the location specified by bsub.
Check bparams -a | grep JOB_SPOOL_DIR to see if the admin has changed the location of the spool dir. With or without the -o/-e options, while the job is running its stdout/err will be captured in the job's spool directory. When the job is finished, the stdout/stderr is copied to the filenames specified by bsub -o/-e. The location of the files in the spool dir is $JOB_SPOOL_DIR/<jobsubmittime>.<jobid>.out or $JOB_SPOOL_DIR/<jobsubmittime>.<jobid>.err
[user1@beta ~]$ cat log.sh
LINE=1
while :
do
echo "line $LINE"
LINE=$((LINE+1))
sleep 1
done
[user1@beta ~]$ bsub -o output.log -e error.log ./log.sh
Job <930> is submitted to default queue <normal>.
[user1@beta ~]$ tail -f .lsbatch/*.930.out
line 1
line 2
line 3
...
According to the LSF documentation the behaviour is configurable:
If LSB_STDOUT_DIRECT is not set and you use the bsub -o option, the standard output of a job is written to a temporary file and copied to the file you specify after the job finishes.
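As a hedged sketch (the exact location is site-specific; on many installations this parameter is set by an LSF administrator in $LSF_ENVDIR/lsf.conf):
# write the bsub -o/-e files directly as the job runs, instead of via the spool copy
LSB_STDOUT_DIRECT=Y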
I'm looking for a way to log information to a file about a submitted job immediately after it starts.
Normally all the job status is appended to the log file after a job has completed, but I'd like to know the information it has when it starts.
I know there's the -B flag but I want it in a file, and I could also do something like:
bsub -J jobby -o run_job.log bjobs -l -J jobby > jobby.log; run_job
but maybe someone knows of a funkier way of doing this.
There are some subtle variations that essentially accomplish the same thing:
You can use a pre-exec to do a similar thing instead of doing the bjobs as part of the command:
bsub -J jobby -E "bjobs -l -J jobby > jobby.log" run_job
You can use the job's environment to get your own jobid instead of using -J if you write your submission as a script:
#!/bin/sh
#BSUB -o run_job.log
bjobs -l $LSB_JOBID > $LSB_JOBID.log
run_job
Then submit your job like this:
bsub < jobscript.sh
You can do some combination of the above: use $LSB_JOBID in a pre-execution script.
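For example (a sketch; the single quotes matter so that $LSB_JOBID is expanded on the execution host when the pre-exec command runs, not at submission time):
bsub -E 'bjobs -l $LSB_JOBID > $LSB_JOBID.log' run_job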
That's about as 'funky' as it gets AFAIK :)