AUTOSYS: Command to fetch job status for multiple jobs at the same time

I don't have access privileges in UNIX for AutoSys to fetch job reports by executing commands; when I run the autorep command it throws a "command not found" error.
So I tried the GUI method, where there is an Enterprise Command Line option, but there I could only fetch the status for one job (or box) at a time, and the command I used was
autorep -j JOB1
Because I need to fetch reports for multiple jobs, this approach is time-consuming and involves a lot of manual work.
My question is: what do I need to do to fetch the job report for multiple jobs (or boxes) at the same time? I tried the commands below:
autorep -j JOB1, JOB2
autorep -j JOB1 JOB2
autorep -j JOB1 & autorep -j JOB2
But none of them worked, so can anyone please tell me the solution?

To enable access to AutoSys from Linux, you would need to install the AutoSys client binaries and configure a few environment variables.
From the GUI, here are a few queries to help out:
autorep -J ALL -s
Returns the current job status report of all the jobs in the particular instance.
autorep -j APP_ID-APP_NAME* -s
You can use globbing patterns in the job name (unlike in Linux).
autorep -M Machine-Name -s
Returns the current job status report of all the jobs in the particular machine/host/server.
See the AutoSys Workload Automation 12.0.01 documentation for more details.
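If you do get the client working from a Linux shell (so that autorep is on your PATH), a simple loop is one way to pull the status of an arbitrary list of jobs. This is only a sketch; the jobs.txt file name is an illustration, and it just reuses the -J and -s options shown above:
while read -r job; do
    autorep -J "$job" -s
done < jobs.txt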

You can do that in 3 ways:
Use WCC (GUI) "Quick View" and search for the jobs separated by commas,
i.e. job1,job2,job3,......job-n. This will give you the status of all the jobs you need, but it won't give you a report as such (copy and paste in bulk and filter it out in Excel).
If you need a report of those jobs, then I would do it like this:
AutoSys version: 11.3.6 SP8.
Write a script as follows:
report.bat
@echo off
cls
REM Adjust these paths for your environment
set mydir="user directory, complete path"
set input_dir=D:\test\scripts\test
REM For each job name in test_jobs.txt, print its status and filter out
REM blank lines and the autorep header/underline lines
for /f "tokens=1 delims=|" %%A in (%input_dir%\test_jobs.txt) do (autorep -j %%A -w -10 | findstr -V "^$ Job Name ______")
test_jobs.txt:
job1
job2
job3
.
.
job-n
The above will give you the results for the jobs as expected. If you want to save the output to a text file and view it later, add ">" at the end to redirect it to a file.
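For instance, one way is to redirect the whole script's output when you run it (the file name here is only an illustration):
report.bat > all_jobs_status.txt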
If you have access to the iDash tool, then log in to iDash and generate a report in the Reports section based on your requirement. (It is easy to use, and many report formats are available for download.)
Note: use the AutoSys command line instead of the server command line if you prefer a script (search for "command line").
Hope it helps!

How to make a workflow run for an infinitely long duration when running it using command line?

I am running a Cadence workflow using the command line. I don't want my workflow to time out (i.e., I want it to run for an infinitely long duration). How can I do so?
You can set the startToCloseTimeout to a very large number; e.g. 100 years can effectively represent an infinite duration for you.
There are also two ways to start workflows using the command line: start or run.
./cadence --address <> --domain <> workflow run --tl helloWorldGroup --wt <WorkflowTypeName> --et 31536000 -i '<inputInJson>'
or
./cadence --address <> --domain <> workflow start --tl helloWorldGroup --wt <WorkflowTypeName> --et 31536000 -i '<inputInJson>'
Note that --et is short for execution_timeout, which is the startToCloseTimeout in seconds.
start returns once the server accepts the start request, while run waits for the workflow to complete and returns the result at the end. In your case, you should use start, because you don't know when the workflow will complete. If you also want to get the workflow result after it has started, you can use the observe command.
./cadence --address <> --domain <> workflow observe --workflow_id <>
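Putting the timeout advice into numbers: a decade's worth of seconds is 10 × 365 × 24 × 3600 = 315360000, which is "effectively never" for most purposes (check your server's configured limits before going larger). A start call with that value could look like the following sketch, with the same placeholders as above:
./cadence --address <> --domain <> workflow start --tl helloWorldGroup --wt <WorkflowTypeName> --et 315360000 -i '<inputInJson>'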

qsub -t job "has died"

I am trying to submit an array job to our cluster using qsub. The script is like:
#!/bin/bash
#PBS -l nodes=1:ppn=1 # Number of nodes and processor
#..... (Other options)
#PBS -t 0-50 # Array job: task indices 0-50
cd $PBS_O_WORKDIR
./programname << EOF
some parameters
EOF
This script runs without a problem when the -t option is removed, but every time I add -t, I get the following output:
---------------------------------------------
Check nodes and clean them of stray processes
---------------------------------------------
Checking node XXXXXXXXXX
-> User XXXX running job XXXXX.XXX:state=X:ncpus=X
-> Job XXX.XXX has died
Done clearing all the allocated nodes
------------------------------------------------------
Concluding PBS prologue script - XX-XX-XXXX XX:XX:XX
------------------------------------------------------
-------------- Job will be requeued --------------
It died there and started to requeue, with no error message. I did not find any similar issue online. Has anyone experienced this before? Thank you!
(I wrote another "manual" array qsub script which works, but I do wish to get the -t version working, as it is a built-in command option and much cleaner.)

Parallel execution of robot tests in Sauce Labs

I am using Eclipse+Maven based Robot Framework with Java implementation of SeleniumLibrary.
I can execute tests in Sauce Labs, but they execute on only one VM. Has anyone achieved parallel execution of Robot tests in Sauce Labs, say on multiple VMs? Or can anyone guide me on how to achieve this? Thanks in advance.
This is what I am using to run on multiple concurrent VMs on Sauce Labs. I have a one-click batch file that uses start pybot to invoke parallel execution. Example:
ECHO starting parallel run on saucelabs.com
cd c:\base\dir\script
ECHO Win7/Chrome40:
start pybot -v REMOTE_URL:http://user:key@ondemand.saucelabs.com:80/wd/hub -T -d results/Win7Chrome40 -v DESIRED_CAPABILITIES:"name:Win7 + Chrome40, platform:Windows 7, browserName:chrome, version:40" tests/test.robot
ECHO Win8/IE11
start pybot -v REMOTE_URL:http://user:key@ondemand.saucelabs.com:80/wd/hub -T -d results/Win8IE11 -v DESIRED_CAPABILITIES:"name:Win8 + IE11, platform:Windows 8.1, browserName:internet explorer, version:11" tests/test.robot
-T tells pybot not to overwrite result logs but to create a timestamped log for each run
-d specifies where the results will go
Works like a charm!
Pabot is a parallel executor for Robot Framework tests. With Pabot you can split one execution into multiple parallel executions and save test execution time.
https://github.com/mkorpela/pabot
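As a rough sketch (the process count, output directory, and test path here are only placeholders; check the Pabot README for the options your version supports), a run split across two parallel processes could look like:
pabot --processes 2 --outputdir results tests/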

Using Plink to execute command within another command/shell

My question is regarding the way the file ("-m" switch) is used by Plink.
My command is:
plink.exe -ssh admin@10.20.30.1 -pw p#ss30rd -m commandfile.txt
I'm trying to connect to a switch and execute these 3 commands:
system-view
user-interface vty 0
screen-length 200
The issue here is that each command depends on its predecessor. In other words, executing system-view gives access to a new level, or context, in which the second command user-interface vty 0 becomes valid and can be executed; likewise, the third command is only valid (and available) once user-interface vty 0 has been executed.
Is there a way or a workaround to achieve this using Plink?
My goal here is to put the Plink command line in a script and analyse the output.
Thanks in advance
If you specify multiple commands using the -m switch, they are executed one after another, while you (if I understand you correctly) want to execute the commands within each other. That's not possible with the -m switch.
What you can do is feed the commands to Plink using input redirection. This way, Plink behaves just as if you had typed those commands.
(
echo system-view
echo user-interface vty 0
echo screen-length 200
) | plink.exe -ssh admin@10.20.30.1 -pw p#ss30rd
Note that by default, with the -m switch, Plink does not allocate a pseudo terminal, while with the input redirection, it does. So the behavior is not identical. You can use the -t/-T switches to override that.
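For example, if you want the redirected-input run to match -m's behaviour of not allocating a pseudo terminal, you could add -T (a sketch, reusing the placeholder credentials from above):
(
echo system-view
echo user-interface vty 0
echo screen-length 200
) | plink.exe -ssh -T admin@10.20.30.1 -pw p#ss30rd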

Can I suppress an LSF job report without sending mail?

I would like to submit a job with Platform LSF and have the output placed in a file (bsub -o), without a job report at the end of it. Using bsub -N removes the job report from the file, but instead sends the report via e-mail. Is there a way to suppress it completely?
As explained in http://www-01.ibm.com/support/knowledgecenter/SSETD4_9.1.2/lsf_admin/email_notification.dita , you can set the environment variable LSB_JOB_REPORT_MAIL=N at job submission to disable email notification, e.g.:
LSB_JOB_REPORT_MAIL=N bsub -N -o command.stdout command options
How about redirecting the output of the command to a file and sending the report to /dev/null:
bsub -o /dev/null "ls > job.\$LSB_JOBID.out"
I managed to make this work by using the -N switch to bsub as suggested AND suppressing email delivery by setting the environment variable:
For (t)csh users:
setenv LSB_JOB_REPORT_MAIL N
For the Bourne shell and its variants:
export LSB_JOB_REPORT_MAIL=N
It's a bit convoluted, but it gets the job done.
I can think of two ways to do this. One is a bit of a "sledgehammer" approach, but it might work for you.
First, you could set up an email alias that is the email equivalent of /dev/null, and then use the -u option to bsub to send the email report to that user.
Second (the sledgehammer), you could set LSB_MAILPROG in the LSF configuration to point to a sendmail wrapper script that could either a) parse the report and bin it based on certain text matches, or b) bin all email; a minimal sketch of option b) follows below.
Otherwise you're kind of stuck with the header in the file indicated by -o.
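A minimal sketch of such a wrapper for option b), assuming LSF hands the message to LSB_MAILPROG on standard input (check your LSF version's documentation for the exact calling convention):
#!/bin/sh
# Hypothetical LSB_MAILPROG wrapper: discard every job report e-mail.
cat > /dev/null
exit 0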
Use a combination of "-o", "-e", "-N" and "-u /dev/null" on the bsub command line to completely suppress job report and e-mail, e.g.:
$ bsub -N -u /dev/null -o command.stdout -e command.stderr command options
Unfortunately, this also completely disables any reports of job failures.