Extend runtime limit for a USUSP job - LSF

While doing a calculation, I realized halfway through that the runtime limit of 50:00 may not be sufficient. So I used $ bstop 1234 to suspend job 1234 and tried to modify the old runtime limit -W 50:00 to -W 100:00.
Can you suggest a command to do so?
I tried
$ bmod -W 100:00 1234
Please request for a minimum of 32 cores!
For more information, please contact XXX#XXX.
Request aborted by esub. Job not modified.
$ bmod [-W 100:00| -Wn ] 1234
-bash: -Wn]: command not found
100:00[8217]: Illegal job ID.
. Job not modified.
according to
[-W [hour:]minute[/host_name | /host_model] | -Wn]
from http://www.cisl.ucar.edu/docs/LSF/7.0.3/command_reference/bmod.cmdref.html
I don't quite understand the syntax. Does -Wn mean wall time new?
Many thanks for your help!

The first command fails because LSF calls the mandatory esub defined by your administrator to do some preprocessing on the command line, and this is returning an error. Here's the relevant quote from the page you linked:
Like bsub, bmod calls the master esub (mesub), which invokes any
mandatory esub executables configured by an LSF administrator, and any
executable named esub (without .application) if it exists in
LSF_SERVERDIR.
You're going to have to come up with a bmod command line that passes the esub checks, but that might cause other problems: some parameters (like -n, I believe) can't be changed at runtime by default, so bmod will reject the request if you specify them.
The -Wn option is used to remove the run limit from the job entirely rather than change it to a different value.
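If removing the limit is acceptable, the -Wn form takes no value. A minimal sketch, reusing your job ID and assuming the mandatory esub lets these requests through:
bmod -Wn 1234      # remove the run limit from job 1234 entirely
bjobs -l 1234      # verify the job's limits and state
bresume 1234       # resume the job you suspended with bstop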


Exec PowerShell script taking corrective action every time

I'm getting the corrective action for an Exec every time while using PowerShell to add users and groups to the local admin group. Please note I am not a scripting guy, so I'm not sure what I am doing wrong.
Notice: /
By default, an Exec resource is applied on every run. That is mediated, where desired, by the resource's unless, onlyif, and/or creates parameters, as described in that resource type's documentation.
The creates parameter is probably not appropriate for this particular case, so choose one of unless or onlyif. Each one is expected to specify a command for Puppet to run, whose success or failure (as judged by its exit status) determines whether the Exec should be applied. These two parameters differ primarily in how they interpret the exit status:
unless interprets exit status 0 (success) as indicating that the Exec's main command should not be run
onlyif interprets exit statuses other than 0 (success) as indicating that the Exec's main command should not be run
I cannot advise you about the specific command to use here, but the general form of the resource declaration would be:
exec { 'Add-LocalGroupMember Administrators built-in':
  command  => '... PowerShell command to do the work ...',
  unless   => '... PowerShell command that exits with status 0 if the work is already done ...',
  provider => 'powershell',
}
(That assumes that the puppetlabs-powershell module is installed, which I take to be the case for you based on details presented in the question.)
I see your comment on the question claiming that you tried this approach without success, but this is the answer. If your attempts to implement this were unsuccessful then you'll need to look more deeply into what went wrong with those. You haven't presented any of those details, and I'm anyway not fluent in PowerShell, but my first guess would be that the exit status of your unless or onlyif script was computed wrongly.
Additionally, you probably should set the Exec's refresh property to a command that succeeds without doing anything. I'm not sure what that would be on Windows, but on most other systems that Puppet supports, /bin/true would be idiomatic. (That's not correct for Windows; I give it only as an example of the kind of thing I mean.) This will prevent running the main command twice in the same Puppet run in the event that the Exec receives an event.

Perl script file runs manually but not in crontab

I have a Perl script file that was running fine from crontab, but suddenly it stopped running without any modification.
cd /home/user/public_html/crons && ./script.pl 2>&1 >/dev/null
The top of the script file is #!/usr/bin/perl -X
The output expected from this script is changes in the database
I have another script file with the same setup and it still works fine
When I run the file in the browser it works fine and execute all lines without any problem
I tried the full path /usr/bin/perl but it didn't work
I tried putting perl at the beginning of the command but it didn't work
I ran the command over SSH using PuTTY but nothing happened
I checked log file /var/log/cron but no errors at all
I created a temporary log file with cd /home/user/public_html/crons && ./script.pl > /tmp/temp.log 2>&1 to see the errors, but the log is empty
Here is the solution:
I found the issue: there was a stuck process for the same cron file, so I killed this process and it's fixed.
You can find your script's process like this:
ps aux | grep 'your cron file here'
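Once you have the PID from that output (12345 below is only a placeholder), the stuck process can be terminated:
kill 12345       # ask the stuck process to exit
kill -9 12345    # force-kill it only if it ignores the TERM signal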
This is a really common antipattern people seem to tend toward with cron.
Cron sends you an email with the output of your script, if it generates any output. People often redirect output to /dev/null to prevent cron from sending the email. This is bad because now the output of your script is lost entirely. Even if the script has some built-in logging, it might generate errors before it gets the log file opened and those are lost. It also might crash in a way that doesn't get written to the logging mechanism.
At a bare minimum, you should just remove 2>&1 >/dev/null to start receiving the email. (and also, test your mail setup using a temporary cron job like 1 * * * * echo "Test" )
The next better solution is to change it to >> /var/log/myscript/current.log and then also set up something to rotate the log files (like logrotate) and also make sure to create that directory with permissions that the script user is allowed to write to it. By only redirecting STDOUT of the script, any errors or warnings it writes to STDERR cause you to get an email, and if there are no errors/warnings the output goes to the log file and no email gets sent.
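Applied to the crontab command from the question, that change would look roughly like this (the log path is just the example above; only STDOUT is redirected, so anything written to STDERR still reaches you by email):
cd /home/user/public_html/crons && ./script.pl >> /var/log/myscript/current.log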
Neither of those changes solve the root problem though, which is that when cron runs your script it does so with a different environment than you have on the command line. What you really want is a way to run the script with a consistent environment, and log it. The "ultimate solution" is to define your task in some kind of service manager, and then use cron to occasionally start it. For instance, you could use systemd and define a service that doesn't restart, then use systemctl start my_custom.service in your cron job. Now you can test independent of cron, and your tests will have the same exact environment, and be logged by the service manager. As extra bonuses, you are protected from accidentally running your script twice at once, and you get a clean way to stop a running cron job without the danger of stale pid files.
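A minimal sketch of that arrangement, assuming a systemd-based distro and that you have defined a my_custom.service unit wrapping the script (the schedule is only an example):
# crontab entry: cron merely triggers the unit; systemd runs and logs it
0 3 * * * systemctl start my_custom.service
# run it by hand with exactly the same environment, then inspect the logs
systemctl start my_custom.service
journalctl -u my_custom.service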
I don't particularly advocate systemd myself, but thankfully there are lots of alternatives:
Runit : http://smarden.org/runit/runsvdir.8.html
S6 : https://skarnet.org/software/s6/
Perp : http://b0llix.net/perp/site.cgi?page=perpd.8
(but installing and configuring a service manager is a bigger task than just using systemd if your distro is based on systemd) Each of these allows you to define a service that doesn't restart. Then you use a shell command to issue a "run once" directive to the supervisor, which runs the task as a child. Now you can easily launch the jobs yourself and see all the errors in the log, and then add that command to the crontab and know that it will run identically when cron starts it.
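With runit, for instance, the "run once" directive mentioned above is issued through sv (the service name my_custom is hypothetical):
sv once my_custom    # start the service if it is down, and do not restart it when it exits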
Back to your original problem: once you get some logging, you are likely to discover it is a permissions problem or an upgraded module in the system perl.

How do I run Jmeter test from command line with -Djava.rmi.server.hostname=IP and command line parameter?

I need to run a distributed test with some command line parameters, and I also need to pass my server IP with -Djava.rmi.server.hostname=IP. Since I am running it from the command line, I need to give it as
jmeter -n -t C:\Jmxfile.jmx -r -Gsomeproperty=value on the command line.
I am confused about passing both the command line parameter and the hostname. Can somebody help me with sending both at the same time?
Check the documentation:
Java system properties and JMeter properties can be overridden directly on the command line (instead of modifying jmeter.properties). To do so, use the following options:
-D[prop_name]=[value]
defines a java system property value.
-J[prop_name]=[value]
defines a local JMeter property.
-G[prop_name]=[value]
defines a JMeter property to be sent to all remote servers.
-G[propertyfile]
defines a file containing JMeter properties to be sent to all remote servers.
So, you can send both at a time through the command line.
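For example, a sketch combining both on one line (the IP address and property name are placeholders):
jmeter -n -t C:\Jmxfile.jmx -r -Djava.rmi.server.hostname=192.168.0.10 -Gsomeproperty=value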

Not able to execute script from crontab

I am able to execute the script from the command line.
I'm executing it like this:
/path/to/script run
But when executing it from cron like below, the page is not coming up:
55 11 * * 2-6 /path/to/script.pl run >> /tmp/script.log 2>&1
The line which is getting a webpage uses LWP::Simple:
my $site = get("http://sever.com/page") ;
I'm not modifying anything. The page is valid and accessible.
I'm getting an empty page only when I execute this script from crontab. I am able to execute it from the command line!
The crontab is owned by root, and the job is executed as root.
Thanks in advance for any clue!
It's difficult to say what might be causing this, but there are differences between your environment, and the environment created by crontab.
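One quick way to see those differences is to capture cron's environment and compare it with your interactive shell's (the paths and schedule are arbitrary):
# temporary crontab entry that dumps cron's environment once a minute
* * * * * env > /tmp/cron_env.txt
# then, from an interactive bash shell, compare the two environments
diff <(sort /tmp/cron_env.txt) <(env | sort)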
You could try running it through a shell with appropriate args to construct your user environment:
55 11 * * 2-6 /bin/tcsh -l /path/to/script.pl run >> /tmp/script.log 2>&1
I'm assuming you are running it by cron with your own user ID of course. If you aren't, then obviously you should try running it manually with the user ID that cron is using to run it.
If it's not a difference in environment variables (e.g. those that specify a proxy to use), I believe you are running afoul of SELinux. Among other things, it prevents background applications (e.g. cron jobs) from accessing the internet unless you explicitly allow them to do so. I don't know how to do so, but you should be able to find out quite easily.
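If a proxy variable does turn out to be the missing piece, one way to test is to set it directly in the crontab entry; the proxy URL below is purely a placeholder:
55 11 * * 2-6 http_proxy=http://proxy.example.com:3128 /path/to/script.pl run >> /tmp/script.log 2>&1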

Perl and SNMP - input options

The script uses the Net::SNMP module for Perl.
I'm trying to run an snmpget command with some options added, e.g. -Ir (here is the list of options), but I haven't found any way to do that. In the documentation for this module I didn't find anything about adding input options to the SNMP command.
If there is any other module that supports this it would be nice, but it wouldn't be my first pick, as that would require a lot of changes in the script (not my script, I'm just making minor changes).
I could run a system (or backticks) command from Perl, e.g.:
snmpget -v2c -c COMMUNITY -Ir HOST OID
and parse the output, but I would like to avoid that as well.
Any advice or solution would be welcome since I'm still new to Perl.
Thx.
You linked to the documentation of Net::SNMP so I'm sure you've read it all before asking... Right?
There is no "command", there is only your script's calls to the API.
[Edit after the below comments]
Net::SNMP has no option to check indexes before sending the request. So, you could say the equivalent of -Ir is enabled by default. In fact, Net::SNMP does not load your MIB, so it has no way of checking the validity of your requested variables before sending the request.