Can I make Rundeck read a log file on the remote node as job output?

I'm using Rundeck to run remote jobs through the SSH executor. Some of the jobs I run log to specific files on the host, rather than STDOUT, and I don't have the ability to change this.
Is there any way to tell Rundeck to read those files as they get written (using something like tail -f), and treat what appears there as the job output?
Adding tail -f itself as a step wouldn't work, since it will never terminate.
If need be, a 'hacky' solution will do (like adding extra job steps for copying and reading logs), but ideally I'd like it to be neater. So if you could give me some guidelines on how to build a plugin that takes the filename as a parameter and reads the output from there, that would be even better.

If you just want to read a file and print it on STDOUT, then just use this inline script as an additional step in the workflow.
#!/usr/bin/env python3
import os
import sys

# Path of the log file is passed as the first argument
file_name = sys.argv[1]

if os.path.isfile(file_name):
    with open(file_name) as f:
        for line in f:
            print(line, end='')  # lines already include their newline
else:
    print("file doesn't exist")
Give the file name as an argument to the step.
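For example, if the job logs to /var/log/myapp/job.log (a hypothetical path), put that path in the step's argument field; run by hand, the step is equivalent to something like:
python read_log.py /var/log/myapp/job.log
(read_log.py is just an illustrative name for the inline script above.)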

Related

Why does a PowerShell script execute a previous version of a file that has been changed?

I am trying to modify several values of a PowerShell script template prior to execution. The approach I took was to use the Get-Content command to read the template file, then use the -replace operator to replace the content with content of my choosing, then the Set-Content command to update the file, and then execute the script. However, according to the error messages that I encounter, it seems that it is not the modified file that is running, but the template one.
What I find hard to understand is why, since when I use Get-Content once again and print the result prior to execution, I can see that the file did change.
I use simple PowerShell scripting with no threads whatsoever; the script is being executed on an Azure VM via the Run Command feature. I wonder why this happens.
Can anyone please explain?

Perl script file runs manually but not in crontab

I have a Perl script that was running fine from crontab, but suddenly it stopped running without any modification.
cd /home/user/public_html/crons && ./script.pl 2>&1 >/dev/null
The top of the script file is #!/usr/bin/perl -X
The expected output from this script is changes in the database.
I have another script file set up the same way, and it still works fine.
When I run the file in the browser, it works fine and executes all lines without any problem.
I tried the full path /usr/bin/perl, but it didn't work.
I tried perl at the beginning, but it didn't work.
I ran the command over SSH using PuTTY, but nothing happened.
I checked the log file /var/log/cron, but there were no errors at all.
I ran /home/user/public_html/crons/script.pl > /tmp/temp.log 2>&1 to create a temporary log file and see the errors, but the log is empty.
Here is the solution:
I found the issue. There was a stuck process for the same cron file, so I killed that process and it's fixed.
You can find the process for your file like this:
ps aux | grep 'your cron file here'
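If you'd rather not pick the PID out of ps output yourself, pgrep/pkill can match the command line directly (check with pgrep first, since pkill kills every match):
pgrep -f 'script.pl'
pkill -f 'script.pl'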
This is a really common antipattern people seem to tend toward with cron.
Cron sends you an email with the output of your script, if it generates any output. People often redirect output to /dev/null to prevent cron from sending the email. This is bad because now the output of your script is lost entirely. Even if the script has some built-in logging, it might generate errors before it gets the log file opened and those are lost. It also might crash in a way that doesn't get written to the logging mechanism.
At a bare minimum, you should just remove 2>&1 >/dev/null to start receiving the email. (And also test your mail setup using a temporary cron job like 1 * * * * echo "Test".)
The next better solution is to change it to >> /var/log/myscript/current.log, then set up something to rotate the log files (like logrotate), and make sure to create that directory with permissions that allow the script's user to write to it. By redirecting only STDOUT of the script, any errors or warnings it writes to STDERR cause you to get an email, and if there are no errors or warnings the output goes to the log file and no email gets sent.
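A minimal logrotate sketch for that file (the rotation policy here is an assumption, not part of the answer), e.g. in /etc/logrotate.d/myscript:
/var/log/myscript/current.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}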
Neither of those changes solve the root problem though, which is that when cron runs your script it does so with a different environment than you have on the command line. What you really want is a way to run the script with a consistent environment, and log it. The "ultimate solution" is to define your task in some kind of service manager, and then use cron to occasionally start it. For instance, you could use systemd and define a service that doesn't restart, then use systemctl start my_custom.service in your cron job. Now you can test independent of cron, and your tests will have the same exact environment, and be logged by the service manager. As extra bonuses, you are protected from accidentally running your script twice at once, and you get a clean way to stop a running cron job without the danger of stale pid files.
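A sketch of that setup, reusing the script path from the question (Type=oneshot with no Restart= gives a service that runs to completion and doesn't restart):
# /etc/systemd/system/my_custom.service
[Unit]
Description=Cron-started database update script

[Service]
Type=oneshot
WorkingDirectory=/home/user/public_html/crons
ExecStart=/home/user/public_html/crons/script.pl
and the crontab entry becomes (schedule illustrative):
0 * * * * systemctl start my_custom.service
journalctl -u my_custom.service then shows the output of every run.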
I don't particularly advocate systemd myself, but thankfully there are lots of alternatives:
Runit: http://smarden.org/runit/runsvdir.8.html
S6: https://skarnet.org/software/s6/
Perp: http://b0llix.net/perp/site.cgi?page=perpd.8
(but installing and configuring a service manager is a bigger task than just using systemd if your distro is based on systemd) Each of these allows you to define a service that doesn't restart. Then you use a shell command to issue a "run once" directive to the supervisor, which runs the task as a child. Now you can easily launch the jobs yourself and see all the errors in the log, and then add that command to the crontab and know that it will run identically when cron starts it.
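With runit, for instance, that "run once" directive is a single command (assuming the service is defined under the default service directory as my_custom):
sv once my_custom
runit starts the service if it isn't running and does not restart it when it exits.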
Back to your original problem: once you get some logging, you are likely to discover it is a permission problem or an upgraded module in the system Perl.

Standard for feeding test data to a Nagios plugin?

I'm developing a Nagios plugin in Perl (no Nagios::Plugin, just plain Perl). The error condition I'm checking for normally comes from a command output, called inside the plugin. However, it would be very inconvenient to create the error condition, so I'm looking for a way to feed test output to the plugin to see if it works correctly.
The easiest way I found at the moment would be with a command line option to optionally read input from a file instead of calling the command.
if ($opt_f) {
    # Read saved output from the file given with -f
    open(FILE, '<', $opt_f) or die "Cannot open $opt_f: $!";
    @output = <FILE>;
    close FILE;
}
else {
    # Normal operation: run the real command
    @output = `my_command`;
}
Are there other, better ways to do this?
Build a command line switch into your plugin; if you set -t on the command line, you use your test command at /path/to/test/command, else you run the 'production' command at /path/to/production/command.
The default action is production; it only tests if the switch indicating test mode is present.
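A minimal sketch of that switch, using Getopt::Std (the two command paths are the placeholders from the answer):
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Std;

# -t switches the plugin into test mode
my %opts;
getopts('t', \%opts);

my $command = $opts{t} ? '/path/to/test/command'
                       : '/path/to/production/command';

my @output = `$command`;
my $status = $? >> 8;    # the command's exit status, for the check result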
Or you could have a test version of the command that returns various statuses for you to test (via a command line argument, perhaps).
You put the test version of my_command in some test directory (/my/nagios/tests/bin).
Then you manipulate the PATH environment variable on the command line that runs the test.
$ env PATH=/my/nagios/tests/bin:$PATH nagios_plugin.pl
The change to $PATH will only last for as long as that one command executes. The change is localized to the subshell that is spawned to run the plugin.
The backticks used to execute the command will cause the shell to use the PATH to locate the command, and that will be the test version of the command, which lives in the directory that is now the first one on the search path.
Let me know if I wasn't clear.

Should I turn a perl script that parses a /var/log/.* file into a daemon?

I am writing a Perl script to parse, for example, /var/log/syslog.
The Perl script triggers subsequent tasks when particular events appear in the log. The log is parsed following the advice of this post:
Command line: monitor log file and add data to database
which I believe amounts to reading from a pipe.
Now I'd like this script to forever run in the background.
This sounds like a daemon to me, and the daemon program referenced in the following question seems ideal:
How can I run a Perl script as a system daemon in linux?
But from this post, it seems clear that daemons have no open file handles. So how can I have a daemon, or a Perl script that becomes a daemon, that monitors a logfile?
It sounds like what you want is a daemon. In that case the advice given in the second post you reference is the best practice. However, you do have other options, like daemontools, which removes the fork complexity.
Daemons are allowed to have filehandles, but you should close STDIN, STDOUT, and STDERR because you shouldn't really use them anymore. A lot of this has to do with the way fork works in *nix systems. Just open the pipe filehandle after your second fork, and you shouldn't have any issues.
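A minimal sketch of that sequence (double fork, detach the standard handles, then open the pipe; tail -F and /var/log/syslog are taken from the question's setup):
use strict;
use warnings;
use POSIX qw(setsid);

# First fork: the parent exits, the child continues in the background
defined(my $pid = fork()) or die "fork failed: $!";
exit 0 if $pid;
setsid() or die "setsid failed: $!";   # detach from the controlling terminal
# Second fork: guarantee we can never reacquire a controlling terminal
defined($pid = fork()) or die "fork failed: $!";
exit 0 if $pid;

# A daemon shouldn't use the standard handles, so point them at /dev/null
open STDIN,  '<', '/dev/null';
open STDOUT, '>', '/dev/null';
open STDERR, '>', '/dev/null';

# Only now open the pipe that follows the log
open my $log, '-|', 'tail', '-F', '/var/log/syslog'
    or die "cannot start tail: $!";
while (my $line = <$log>) {
    # ... trigger the subsequent tasks when particular events appear ...
}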
This doesn't answer your question, but is another route to consider which may or may not be appropriate for you:
rsyslog can execute a program when a certain message is logged
see Filter Conditions for setting up the trigger, Templates for formatting the output that's passed to the script, and Actions > Shell Execute for specifying the executable.
Be sure to read the security implications, and note that rsyslog blocks while the external program runs. But if your script runs reliably quickly, it may be an option.
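A sketch of such a rule in rsyslog's legacy syntax (the match text and script path are hypothetical):
:msg, contains, "out of memory" ^/usr/local/bin/react.sh;RSYSLOG_TraditionalFileFormat
Here ^ is the shell-execute action: rsyslog runs the program with the message, formatted by the template named after the semicolon, as its argument.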

Why does my command-line not run from cron?

I have a perl script (part of the XMLTV family of "grabbers", specifically tv_grab_oztivo).
I can successfully run it like this:
/sw/bin/perl /path/to/tv_grab_oztivo --output /path/to/tv.xml
I use the full paths to everything to eliminate issues with the Working Directory. Permissions shouldn't be a problem.
So, if I run it from the Terminal (Mac OSX) it works just fine.
But when I set it to run via a cron job, nothing appears to happen at all. No output is created etc.
There isn't anything wrong with the crontab as far as I can see, because if I substitute a helloworld.pl for the actual script, it runs just fine at the right time.
So, what can I do to debug? I can see from looking at %ENV in the two cases that the environment is very different, but what other approaches can I take to debugging? How can I see the output of the cron job, which might be some kind of perl "die" message or "not found" message from the shell or whatever?
Or should I be trying to somehow give the cron version of the command the same environment as when it's running as me?
It's often because you don't get the full environment when running under cron. Best bet is to capture the output by using the command:
( /sw/bin/perl /path/to/tv_grab_oztivo ... ) >/tmp/qq 2>&1
and then have a look at /tmp/qq.
If it does turn out to be a missing environment, then you may need to put:
. ~/.profile
or something similar, into the execution chain of your cron job, such as:
( . ~/.profile ; /sw/bin/perl /path/to/tv_grab_oztivo ... ) >/tmp/qq 2>&1
If you're looking at %ENV in the two cases, I'd suggest that, as a first step in your Perl script, you set %ENV to what it is in a cron job, and then try to run it from the command line. You may need to exec yourself once for this to take full control:
BEGIN {
    if (exists $ENV{something_in_your_env_not_in_cron}) {
        %ENV = (...);
        exec $^X, $0, @ARGV;
    }
}
Now try running it, and seeing if there's anything you can do to debug it (including running under perl -d if required). Most likely, you'll find that you end up adding items back into %ENV one at a time until it magically starts working (LD_LIBRARY_PATH is a good one for this, but ORACLE_HOME or DB2HOME for Oracle or DB2 apps might be good choices, too). Then you can either set the variable in your script, or in the crontab.
I'd run a simple shell script by absolute path from the cron command.
Inside that script, I'd ensure that I trapped stdout and stderr to a known (or knowable) file. I'd also ensure that enough of your environment is set. On Unix, you get almost no environment set at all when you run a command via cron - I'm not sure about MacOS X. The standard culprit for problems is PATH. I have a separate .cronfile that sets my working environment enough that I usually don't have problems - that's an analogue of .profile.
On occasion if you can't figure out what's going wrong with your command line, the simplest way to fix it is to turn the whole thing into a shell script. Ideally you shouldn't have to do this, but it can be the fastest way to solve the problem.
File: /files/cron1.sh
#!/bin/sh
/sw/bin/perl /path/to/tv_grab_oztivo --output /path/to/tv.xml
And then in cron:
/files/cron1.sh
This allows you to test the script independent of cron. Remember though that your login shell runs with different environment variables than cron does.
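For reference, the matching crontab entry also needs a schedule (the time here is only an example):
0 6 * * * /files/cron1.sh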
cron usually captures the output of stdout and stderr and e-mails any output to the crontab owner.
Did you double check your crontab entry to make sure it's valid and will execute at the right time?
Make sure that the script does not need any environment variables set. Otherwise wrap it in another (bash) script, where you can set the environment variables that the other script expects.