Can I suppress an LSF job report without sending mail?

I would like to submit a job with Platform LSF and have the output placed in a file (bsub -o), without a job report at the end of it. Using bsub -N removes the job report from the file, but instead sends the report via e-mail. Is there a way to suppress it completely?

As explained in http://www-01.ibm.com/support/knowledgecenter/SSETD4_9.1.2/lsf_admin/email_notification.dita, you can set the environment variable LSB_JOB_REPORT_MAIL=N at job submission to disable email notification, e.g.:
LSB_JOB_REPORT_MAIL=N bsub -N -o command.stdout command options

How about redirecting the output of the command to a file and sending the report to /dev/null:
bsub -o /dev/null "ls > job.\$LSB_JOBID.out"

I managed to make this work by passing the -N switch to bsub as suggested AND suppressing email delivery by setting the environment variable:
For (t)csh users:
setenv LSB_JOB_REPORT_MAIL N
For the Bourne shell and its variants:
export LSB_JOB_REPORT_MAIL=N
It's a bit convoluted, but it gets the job done.
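Putting the two together, a minimal sketch for a Bourne-style shell (the command and output file name are placeholders):
export LSB_JOB_REPORT_MAIL=N
bsub -N -o command.stdout command options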

I can think of two ways to do this. One is a bit of a "sledgehammer" approach, but it might work for you.
First, you could set up some email alias that is the email equivalent of /dev/null, and then use the -u option to bsub to send the email report to that user.
Second (the sledgehammer), you could set LSB_MAILPROG in the LSF configuration to point to a sendmail wrapper script that could either a) parse the report and bin it based on certain text matches, or b) bin all email.
Otherwise you're kind of stuck with the header in the file indicated by -o.
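A minimal sketch of such a wrapper, assuming LSB_MAILPROG has been pointed at it in the LSF configuration; the path to the real sendmail and the report text it matches on are assumptions you would adapt to your site:
#!/bin/sh
# Hypothetical sendmail wrapper for LSB_MAILPROG: drop anything that looks
# like an LSF job report, pass everything else on to the real sendmail.
msg=$(cat)
case "$msg" in
  *"Your job looked like"*|*"Successfully completed"*)
    # Looks like an LSF job report - discard it.
    exit 0
    ;;
  *)
    printf '%s\n' "$msg" | /usr/sbin/sendmail "$@"
    ;;
esac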

Use a combination of "-o", "-e", "-N" and "-u /dev/null" on the bsub command line to completely suppress job report and e-mail, e.g.:
$ bsub -N -u /dev/null -o command.stdout -e command.stderr command options
Unfortunately, this would also completely disable any reports of job failures.

Related

AUTOSYS: Command to fetch job status for multiple jobs at the same time

I don't have the access privileges in UNIX to fetch Autosys job reports by executing commands: when I execute the autorep command it throws a "command not found" error.
So I tried the GUI method, which has an Enterprise Command Line option, but there I could only fetch the status of one job (or box) at a time, and the command I used was
autorep -j JOB1
Because I need to fetch reports for multiple jobs, the above approach would be time-consuming and involve a lot of manual work.
My question is: what do I need to do to fetch the job report for multiple jobs (or boxes) at the same time? I tried the commands below:
autorep -j JOB1, JOB2
autorep -j JOB1 JOB2
autorep -j JOB1 & autorep -j JOB2
but none of them worked, so can anyone please tell me the solution?
To enable access to Autosys from Linux, you would need to install the Autosys binary and configure a few variables.
From the GUI, to help out with a few queries:
autorep -J ALL -s
Returns the current job status report of all the jobs in the particular instance.
autorep -j APP_ID-APP_NAME* -s
Unlike in Linux, you can use globbing patterns here.
autorep -M Machine-Name -s
Returns the current job status report of all the jobs in the particular machine/host/server.
Refer to AUTOSYS WORKLOAD AUTOMATION 12.0.01 for more.
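If the autorep binary has been set up on a Linux host as described above, a minimal sketch for querying several jobs in one pass (the job names and output file are placeholders):
#!/bin/sh
# Query the status of several Autosys jobs and collect the output
# into a single report file.
for job in JOB1 JOB2 JOB3; do
    autorep -j "$job" -s
done > all_jobs_report.txt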
You can do that in 3 ways:
Use WCC (GUI) "Quick View" and search for jobs separated by commas,
i.e. job1,job2,job3,...,job-n. This will give you the status of all the jobs you need, but it won't give you a report as such (copy and paste in bulk and filter it in Excel).
If you need a report of those jobs, then I would do it like this:
Autosys version - 11.3.6 SP8.
Write a script as follows:
report.bat
echo off
cls
set mydir="user directory, complete path"
set input_dir=D:\test\scripts\test
for /f "tokens=1 delims=|" %%A in (%input_dir%\test_jobs.txt) do (autorep -j -w -10 %%A | findstr -V "^$ Job Name ______" )
test_jobs.txt:
job1
job2
job3
.
.
job-n
The above will give the results for the jobs as expected. If you want to save the output to a text file and view it, add ">" at the end of the command and redirect it to a text file.
If you have access to the iDash tool, then log in to iDash and generate a report in the reports section based on your requirement. (It's easy to use and many report formats are available for download.)
Note: use the Autosys command line instead of the server command line if you prefer a script (search for "command line").
Hope it helps!

How to ssh as different user, change group, and run a script within Perl

I need to be able to run a script from within a script but first I need to ssh as a different user and then change my group.
I am currently doing the following inside my perl script:
`ssh <user>@<host> ; newgrp <group> ; /script/to/run.pl`
When running this command from the command line it doesn't seem to switch groups. I assume this is because it's changing to a new shell.
How do I get around this and get it to work?
Also, please note, I do not have sudo/root privileges.
The first semicolon is interpreted by the local shell. So the three commands are run on the same host. I think you want this
ssh <user>@<host> "newgrp <grp>; /bin/run.pl"
salva, in his reply, answered my question:
sg $group -c '$cmd'
The reason the following command:
newgrp <int>
doesn't work is that it creates a new shell. At least that is my best guess; the "sg" command gets around this.
I have found the following to work (with ksh on HP-UX):
ssh user@host "echo 'date;pwd;echo bozo;id' | newgrp nerds;"
which basically executes the commands as user:nerds.
I think OP wants to construct a string to execute from Perl, notice the backticks. Not sure but OP might have to use:
$s='ssh <user>@<host> ; newgrp <group> ; /script/to/run.pl'; # Normal single quotes, not backticks
exec($s);
OP, there are different ways to execute shell commands from a Perl script. You used backticks. There are also exec($s) and system($s).
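Putting those answers together, a minimal sketch of the command the Perl script could run via system() or backticks (user, host, group, and script path are placeholders; it assumes sg is available on the remote host):
ssh <user>@<host> "sg <group> -c '/script/to/run.pl'"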

How to send stderr in email shell script (ash)

I wrote a shell script that I use under ash, and I redirect stderr and stdout to a log file. I would like that log file to be emailed to me only if stderr is not empty.
I tried:
exec >mylog.log 2>&1
# Perform various find commands
if [TEST_IF_STDERR_NOT_EMPTY]; then
/usr/bin/mail -s "mylog" email@mydomain.com < mylog.log
fi
My question is twofold:
1- I get a -sh: /usr/bin/mail: not found error. It seems that the mail command doesn't exist under ash (or at least on my Linux box, which is a Synology NAS); what would be the alternative? Worst case, Perl is available, but I would prefer to use standard sh commands.
2- How do I test that stderr is not empty?
Thanks
See: How to check if a file is empty in bash
As for the first question, in your code you are calling mail but lower in the post you are calling email. Check your code and make sure it is mail.
Use which mail to get the full path. Maybe it is not installed in /usr/bin/.
Use find to locate mail.
If you can switch to another shell, run it and then execute which mail to get the full path of mail, in case the path is only set up in the alternative shells.
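For the second question, one approach (a minimal sketch assuming a working mail command; the file names and address are placeholders) is to send stderr to its own file and test whether that file is non-empty with test -s, which ash supports:
#!/bin/sh
# Send stdout to the log and stderr to a separate file.
exec >mylog.log 2>myerrors.log

# Perform various find commands here...

# Mail the log only if anything was written to stderr.
if [ -s myerrors.log ]; then
    cat myerrors.log >> mylog.log
    mail -s "mylog" email@mydomain.com < mylog.log
fi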

extend runtime limit for a USUSP job

When a calculation was halfway done, I found that the runtime limit of 50:00 may not be sufficient. So I used $ bstop 1234 to stop job 1234 and tried to modify the old runtime limit of -W 50:00 to -W 100:00.
Can you suggest a command to do so?
I tried
$ bmod -W 100:00 1234
Please request for a minimum of 32 cores!
For more information, please contact XXX#XXX.
Request aborted by esub. Job not modified.
$ bmod [-W 100:00| -Wn ] 1234
-bash: -Wn]: command not found
100:00[8217]: Illegal job ID.
. Job not modified.
according to
[-W [hour:]minute[/host_name | /host_model] | -Wn]
from http://www.cisl.ucar.edu/docs/LSF/7.0.3/command_reference/bmod.cmdref.html
I don't quite understand the syntax; does -Wn mean "new wall time"?
Many thanks for your help!
The first command fails because LSF calls the mandatory esub defined by your administrator to do some preprocessing on the command line, and this is returning an error. Here's the relevant quote from the page you linked:
Like bsub, bmod calls the master esub (mesub), which invokes any mandatory esub executables configured by an LSF administrator, and any executable named esub (without .application) if it exists in LSF_SERVERDIR.
You're going to have to come up with a bmod command line that passes the esub checks, but that might cause other problems because some parameters (like -n I believe) can't be changed at runtime by default so bmod will reject the request if you specify it.
The -Wn option is used to remove the run limit from the job entirely rather than change it to a different value.
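For example, if removing the limit entirely is acceptable and your site's esub checks allow it, something like the following might work (a sketch using the job ID from the question, not verified against your esub):
bmod -Wn 1234
bresume 1234
The second command resumes the job that was suspended earlier with bstop.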

CakePHP Shell Cron email error

I am using CakePHP 1.3 and I was able to successfully set up the cron job to run shells using the example given in the CakePHP Book.
*/5 * * * * /full/path/to/cakeshell myshell myparam -cli /usr/bin -console /cakes/1.2.x.x/cake/console -app /full/path/to/app >> /path/to/log/file.log
This outputs the results into a log file but I want to receive email when there is an error so I can try to resolve the problem.
I tried the following with no luck:
1. If I remove the >> /path/to/log/file.log then even a successful run is emailed.
2. > /dev/null; my assumption was that it would send successful output to /dev/null and errors to email.
3. 1> /dev/null, another variation of 2.
Any help is appreciated.
Thanks
Huseyin,
This is not a CakePHP error then, and it is maybe a question better suited for Server Fault, as you would script your solution.
Bash's built-in facilities are up to the task; try The Linux Documentation Project's introductory tutorials on shell scripting and man bash.
Your solution basically has to use a temporary file or variable in which you store the output of the last cron job run. If there is an error:
cat THE_TMP_FILE | mail -s "Error from Server Huseyin's server" huseyin@fancy_domain.com
else:
cat THE_TMP_FILE >> blah.blah.log
Unfortunately, you need an MTA available in order for the mail command to work. If you do not have access to the mail command, then you can set up another cron job, scheduled after the first, which simply runs a if [ -e THE_FILE_CONTAINING_THE_LAST_ERROR ]; then { echo $(cat THE_FILE_CONTAINING_THE_LAST_ERROR); rm -v THE_FILE... ;} ; fi
Of course this is not working code, but pretty close, so you'll get the idea.
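A minimal sketch of such a wrapper, assuming a working mail command; the temp file path and the decision to key off the exit status of the CakePHP shell are assumptions:
#!/bin/sh
# Hypothetical cron wrapper: run the CakePHP shell, capture its output in a
# temporary file, and mail that output only when the run fails.
TMP_FILE=/tmp/myshell_cron.$$
LOG_FILE=/path/to/log/file.log

if /full/path/to/cakeshell myshell myparam -cli /usr/bin \
     -console /cakes/1.2.x.x/cake/console -app /full/path/to/app \
     > "$TMP_FILE" 2>&1
then
    # Success: just append the output to the log.
    cat "$TMP_FILE" >> "$LOG_FILE"
else
    # Failure: keep the log up to date and mail the output.
    cat "$TMP_FILE" >> "$LOG_FILE"
    mail -s "Error from Huseyin's server" huseyin@fancy_domain.com < "$TMP_FILE"
fi
rm -f "$TMP_FILE"
If the CakePHP shell does not return a non-zero exit status on failure, you would need to grep the captured output for error text instead of testing the exit status.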