Nagios plugin gives "no output returned" using compiled perl

I have a custom Nagios plugin which is written in Perl. For complicated political reasons I am required to hide the source code of this plugin. The only way I found to do this was by using perlc (http://marginalhacks.com/Hacks/perlc).
In the words of the author:
"Takes a single perl script, converts the block using a simple encoding with an optionally defined key. The script is decoded at runtime and fed to the perl library, to avoid it getting in the hands of the user."
The problem I am getting is that Nagios shows "No output returned from plugin" when I use the compiled version of the plugin. The raw Perl source works just fine.
After debugging for a while I narrowed the problem down to using exit in Perl, i.e.:
This works fine when compiled.
print "OK: Everything is working fine.\n";
This, however, does not work and results in "No output returned from plugin":
print "OK: Everything is working fine.\n";
exit 1;
It doesn't matter how I exit (0, 1, 2, or 3); I still get the same problem.

According to the cross-posted PerlMonks thread, the issue was resolved by enabling autoflushing:
$| = 1;
From perlvar:
HANDLE->autoflush( EXPR )
$OUTPUT_AUTOFLUSH
$|
If set to nonzero, forces a flush right away and after every write or
print on the currently selected output channel. Default is 0
(regardless of whether the channel is really buffered by the system or
not; $| tells you only whether you've asked Perl explicitly to flush
after each write). STDOUT will typically be line buffered if output is
to the terminal and block buffered otherwise. Setting this variable is
useful primarily when you are outputting to a pipe or socket, such as
when you are running a Perl program under rsh and want to see the
output as it's happening.
See the article Suffering from Buffering? for more details.
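Putting that together, a minimal sketch of the fixed plugin (assuming the standard Nagios exit-code conventions):
$| = 1;  # autoflush STDOUT so the status line isn't left in a buffer when exit is called
print "OK: Everything is working fine.\n";
exit 0;  # Nagios conventions: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN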

Related

back ticks not working in perl

I'm stuck on one problem on our live server.
We have a Perl script which runs almost 15 to 18 hours a day and creates 100+ subprocesses every day. In one place it runs a command (a product command which we run on the command line of a Solaris box) that is triggered with backticks inside the Perl code.
It looks like the backticks command gets skipped or fails randomly.
For example, if I need to run it for 50 customers, 2 or 3 fail randomly.
I do not see evidence anywhere that the command was triggered.
Since it's a live server, we can't try making many code changes until we are sure about the problem.
Here is the code:
my $comm = "inventory -noX customer1"; #sample command i have given here
my $newLogFile = "To capture command output here we have path whre the file gets created");
my $piddy = `$comm 2>&1 > $newLogFile`;
Is it because of the backticks that this happens? I am really not sure :(.
I also tried various analyses (memory/CPU/disk space/adding librtld_db.so to LD_LIBRARY_PATH, etc.) but no luck. The Perl is 64-bit. What else can I do? :(
I suspect you are not checking for errors (and perl doesn't make that easy to do correctly for backticks).
Consider using IPC::System::Simple's capture in place of your backticks/qx.
As its doc says, "If there's an error, it will die with a detailed description of what went wrong."
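A minimal sketch of that approach (the inventory command line is the hypothetical one from the question):
use IPC::System::Simple qw(capture);

my $comm = "inventory -noX customer1";

# capture() runs the command, returns its STDOUT, and dies with a
# detailed message if the command can't be started or exits non-zero.
my $output = eval { capture($comm) };
if ($@) {
    warn "Command failed: $@";
}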
It shouldn't fail just because of backticks. However, because backticks spawn a new process, that process may periodically be subject to failure due to system conditions (e.g. system load). Backticks are really a "fire and forget" method and should never be used for anything critical in a production environment. As previously suggested, there are far more robust ways to manage spawning external processes.
If the command's output is being lost due to buffering, you might try turning off buffering, but keep an eye on it for performance degradation (it's usually not significant).
Buffering can be turned off for an entire script by adding this near the top:
$|=1;
When calling external commands, I use system from IPC::System::Simple or open3 from IPC::Open3.
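For the IPC::Open3 route, a minimal sketch (again using the hypothetical command from the question):
use IPC::Open3;
use Symbol 'gensym';

my $err = gensym;  # open3 needs a pre-created glob for the error handle
my $pid = open3(my $in, my $out, $err, 'inventory', '-noX', 'customer1');
close $in;            # we send no input
my @stdout = <$out>;  # collect the command's output
my @stderr = <$err>;  # and its errors, separately
# note: for commands with large output, read both handles via IO::Select
# instead of sequentially, to avoid a pipe-buffer deadlock
waitpid($pid, 0);
my $status = $? >> 8; # the command's exit code
warn "inventory exited with $status: @stderr" if $status != 0;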

What are the alternatives to the magic punctuation "$|" to turn off the print buffer in Perl?

I was refactoring some old code (by other people) and I came across the following at the top of some CGI scripts:
#Turn on output buffering
local $| = 1;
perlcritic, as usual, unhelpfully points out the obvious: "Magic punctuation used". Are there any alternatives to this, or is perlcritic just grumpy?
Furthermore, on closer inspection, I think the code is wrong.
If I'm not mistaken, it means exactly the opposite of what the comment says: it turns off output buffering. My memory is a little rusty and I can't seem to find the Perl documentation that describes this magic punctuation. The scripts run under mod_perl.
Is messing around with Perl's buffering behavior desirable, and does it result in any performance gain? Most of what has been written about this dates from the early 2000s. Is this still a valid good practice?
$| is one of a number of punctuation variables that are really per-filehandle. The variable gets or sets the value for the currently selected output filehandle (by default, STDOUT). ($. is slightly different; it is bound to the last filehandle read from.)
The "modern" way to access these is via a method on the filehandle:
use IO::Handle;
$fh->autoflush(1); # instead of $|=1
The method corresponding to each variable is documented in perldoc perlvar.
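To illustrate the "currently selected filehandle" point: setting $| for a handle other than STDOUT traditionally required the select dance, which the method form avoids (the $log filehandle here is a hypothetical example):
# old style: temporarily select the handle, set $|, then restore
my $previous = select($log);
$| = 1;
select($previous);

# modern equivalent, no select dance needed
use IO::Handle;
$log->autoflush(1);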
Your question seems a bit scattered, but I'll try my best to answer thoroughly.
You want to read perldoc perlvar. The relevant section says:
$| If set to nonzero, forces a flush right away and after every write or print on the currently selected output channel. Default is 0
(regardless of whether the channel is really buffered by the system or not; $| tells you only whether you've asked Perl explicitly to
flush after each write). STDOUT will typically be line buffered if output is to the terminal and block buffered otherwise. Setting this
variable is useful primarily when you are outputting to a pipe or socket, such as when you are running a Perl program under rsh and want
to see the output as it's happening. This has no effect on input buffering. See "getc" in perlfunc for that. See "select" in perlfunc
on how to select the output channel. See also IO::Handle. (Mnemonic: when you want your pipes to be piping hot.)
So yes, the comment is incorrect. Setting $| = 1 does indeed disable buffering, not turn it on.
As for performance, the reason output buffering is enabled by default is because this improves performance--even in 2011--and probably until the end of time, unless quantum I/O somehow changes the way we understand I/O entirely.
The reasons to disable output buffering are not to improve performance, but to change some other behavior at the expense of performance.
Since I have no idea what your code does, I cannot speculate as to its reason for wanting to disable output buffering.
Some (but by no means all) possible reasons to disable output buffering:
You're writing to a socket or pipe, and the other end expects an immediate response.
You're writing status updates to the console, and want the user to see them immediately, not at the end of a line. This is especially common when you output a period after each of many operations (see the sketch after this list).
For a CGI script, you may want the browser to display some of the HTML output before processing has finished.
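A minimal sketch of the progress-dots case (@tasks and do_work are hypothetical stand-ins):
$| = 1;  # without this, the dots appear all at once at the end
for my $task (@tasks) {
    do_work($task);
    print ".";  # visible immediately because autoflush is on
}
print " done\n";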
The comment, as others have stated, is incorrect. On the contrary, local $| = 1 disables output buffering.
To comply with Perl::Critic's policies, you could make use of the English module:
use English qw( -no_match_vars );
local $OUTPUT_AUTOFLUSH = 1; # equivalent to: local $| = 1
As you can check in the manuals, $|=1 turns off buffering: it says it is true that the buffers must be flushed, so the comment is wrong.
As for whether it is good or not, I don't know, but I too have always seen this done in CGI scripts, so I suspect it is a good thing in this particular case: normally CGI scripts want to make the data available as soon as it is written.

How can I debug a Perl program that suddenly exits?

I have a Perl program based on IO::Async, and it sometimes just exits after a few hours/days without printing any error message whatsoever. There's nothing in dmesg or /var/log either. STDOUT/STDERR both have autoflush(1) set, so data shouldn't be lost in buffers. It doesn't actually return from IO::Async::Loop->loop_forever - a print I put there just to make sure of that never gets triggered.
Now one way would be to keep peppering the program with more and more prints and hope one of them gives me some clue. Is there a better way to get information about what was going on in the program when it exited/silently crashed?
One trick I've used is to run the program under strace or ltrace (or attach to the process using strace). Naturally that was under Linux. Under other operating systems you'd use ktrace or dtrace or whatever is appropriate.
A trick I've used for programs which only exhibit sparse issues over days or weeks, and then only on handfuls among hundreds of systems, is to direct the output from my tracer to a FIFO, and have a custom program keep only 10K lines in a ring buffer, with a handler on SIGPIPE and SIGHUP to dump the current buffer contents into a file. (It's a simple program, but I don't have a copy handy and I'm not going to re-write it tonight; my copy was written for internal use and is owned by a former employer.)
The ring buffer allows the program to run indefinitely without fear of running systems out of disk space; we usually only need a few hundred, or at most a couple thousand, lines of the trace in such matters.
If you are capturing STDERR, you could start the program as perl -MCarp::Always foo_prog. Carp::Always forces a stack trace on all errors.
A sudden exit without any error message is possibly a SIGPIPE. Traditionally SIGPIPE is used to stop things like the cat command in the following pipeline:
cat file | head -10
It doesn't usually result in anything being printed either by libc or perl to indicate what happened.
Since in an IO::Async-based program you'd not want to silently exit on SIGPIPE, my suggestion would be to put somewhere in the main file of the program a line something like
$SIG{PIPE} = sub { die "Aborting on SIGPIPE\n" };
which will at least alert you to this fact. If instead you use Carp::croak without the \n you might even be lucky enough to get the file/line number of the syswrite, etc... that caused the SIGPIPE.
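A sketch of both variants described above:
# plain die: the trailing "\n" suppresses the "at FILE line N" suffix
$SIG{PIPE} = sub { die "Aborting on SIGPIPE\n" };

# or, with Carp::croak and no "\n", the report is made from the caller's
# point of view, which may point at the syswrite that triggered the signal
use Carp;
$SIG{PIPE} = sub { Carp::croak "Aborting on SIGPIPE" };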

"inappropriate ioctl for device"

I have a Perl script running in an AIX box.
The script tries to open a file from a certain directory, and it fails to read the file because the file has no read permission, but I get a different error saying inappropriate ioctl for device.
Shouldn't it say something like no read permission for file or something similar?
What does this inappropriate ioctl for device message mean?
How can I fix it?
EDIT: This is what I found when I did strace.
open("/local/logs/xxx/xxxxServer.log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE,
0666) = 4 _llseek(4, 0, [77146], SEEK_END) = 0
ioctl(4, SNDCTL_TMR_TIMEBASE or TCGETS, 0xbffc14f8) = -1 ENOTTY
(Inappropriate ioctl for device)
Most likely it means that the open didn't fail.
When Perl opens a file, it checks whether or not the file is a TTY (so that it can answer the -t $fh filetest operator) by issuing the TCGETS ioctl against it. If the file is a regular file and not a tty, the ioctl fails and sets errno to ENOTTY (string value: "Inappropriate ioctl for device"). As ysth says, the most common reason for seeing an unexpected value in $! is checking it when it's not valid -- that is, anywhere other than immediately after a syscall failed. Testing the result codes of your operations is critically important.
If open actually did return false for you, and you found ENOTTY in $! then I would consider this a small bug (giving a useless value of $!) but I would also be very curious as to how it happened. Code and/or truss output would be nifty.
Odd errors like "inappropriate ioctl for device" are usually a result of checking $! at some point other than just after a system call failed. If you'd show your code, I bet someone would rapidly point out your error.
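To make that concrete, a minimal sketch of checking $! at the right moment (the file path is hypothetical):
# Wrong: open succeeded, but $! still holds a leftover errno
# (for example ENOTTY from Perl's internal isatty() check)
open(my $fh, '<', '/some/file');
print "error: $!\n";   # meaningless unless open actually failed

# Right: check the return value, and read $! immediately on failure
open(my $fh2, '<', '/some/file')
    or die "Can't open /some/file: $!";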
"inappropriate ioctl for device" is the error string for the ENOTTY error. It used to be triggerred primarily by attempts to configure terminal properties (e.g. echo mode) on a file descriptor that was no terminal (but, say, a regular file), hence ENOTTY. More generally, it is triggered when doing an ioctl on a device that does not support that ioctl, hence the error string.
To find out what ioctl is being made that fails, and on what file descriptor, run the script under strace/truss. You'll recognize ENOTTY, followed by the actual printing of the error message. Then find out what file number was used, and what open() call returned that file number.
Since this is a fatal error and also quite difficult to debug, maybe the fix could be put somewhere (in the provided command line?):
export GPG_TTY=$(tty)
From: https://github.com/keybase/keybase-issues/issues/2798
"files" in *nix type systems are very much an abstract concept.
They can be areas on disk organized by a file system, but they could equally well be a network connection, a bit of shared memory, the buffer output from another process, a screen or a keyboard.
In order for Perl to be really useful it mirrors this model very closely, and does not treat files by emulating a magnetic tape as many 4GLs do.
So it tried an "IOCTL" operation, 'open for write', on a file handle which does not allow write operations, which is an inappropriate IOCTL operation for that device/file.
The easiest thing to do is to stick an or die "Cannot open $myfile: $!" statement at the end of your open, and you can choose your own meaningful message.
I just fixed this perl bug.
See https://rt.perl.org/Ticket/Display.html?id=124232
When we push the buffer layer to PerlIO and do a failing isatty() check
which obviously fails on all normal files, ignore the wrong errno ENOTTY.
Eureka moment!
I have had this error before.
Did you invoke the Perl debugger with something like:
perl -d yourprog.pl > log.txt
If so, what's going on is that the Perl debugger tries to query, and perhaps reset, the terminal width.
When STDOUT is not a terminal, this fails with the ioctl message.
The alternative would be for your debug session to hang forever because you did not see the prompt for instructions.
Ran into this error today while trying to use code to delete a folder/files living on a Windows 7 box mounted as a share on a CentOS server. Got the inappropriate ioctl for device error and tried everything that came to mind. Read just about every post on the net related to this.
Obviously the problem was isolated to the mounted Windows share on the Linux server. Looked at the file permissions on the Windows box and noted the files had their permissions set to read-only.
Changed those, went back to the Linux server, and all worked as expected. This may not be the solution for most, but hopefully it saves someone some time.
I tried the following code, which seemed to work:
if (open(my $FILE, '<', 'File.txt')) {
    while (<$FILE>) {
        print $_;
    }
} else {
    print "File could not be opened or does not exist\n";
}
I got the error Can't open file for reading. Inappropriate ioctl for device recently when I migrated an old UB2K forum with a DBM file-based database to a new host. Apparently there are multiple, incompatible implementations of DBM. I had a backup of the database, so I was able to load that, but there seem to be other options; see e.g. the questions "moving a perl script/dbm to a new server" and "shifting out of dbm?".
I also got this error "inappropriate ioctl for device" when trying to fetch a file's stat.
It was the first time I had a chance to work on a Perl script.
my $mtime = (stat("/home/ec2-user/sample/test/status.xml"))[9];
The above code snippet was throwing the error. The Perl script was written against version 5.12 on Windows, and I had to run it on Amazon Linux with Perl 5.15.
In my case the error was because of an array index out of bounds (in the Java sense).
When I modified the code to my $var = (stat("/home/ec2-user/sample/test/status.xml"))[0][9]; the error was gone and I got the correct value.
Of course, it is too late to answer, but I am posting my finding so that it can be helpful for the developer community.
If some Perl expert can verify this, that would be great.
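For what it's worth, a more self-diagnosing way to fetch the mtime is the core File::stat module, whose stat returns false and sets $! on failure, so the real error surfaces immediately (the path is the one from the answer above):
use File::stat;

my $st = stat("/home/ec2-user/sample/test/status.xml")
    or die "stat failed: $!";  # $! is meaningful here, right after the failure
my $mtime = $st->mtime;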

Why does IIS crash when I print to stderr in Perl?

This has been driving me crazy. We have IIS (6) and Windows 2008 and ActiveState Perl 5.10. For some reason, whenever we do a warn or a carp it eventually corrupts the app pool. Of course, that's a pretty big deal, since it means that our errors actually cause problems.
This happened with the previous versions of Perl (5.8), Windows (2003), and IIS (5). Anyway, basically I put in a carp or a warn, I get an error message, and then some garbage text. Any thoughts?
Check to make sure that IIS and the perl DLL are linked with the same version of the C runtime library. (Use depends.exe or dumpbin /dependents).
To expand: the problem may be that IIS has its FILE* table in one place, and the perl DLL thinks it's going to be in a slightly different place. When perl goes to find the stderr handle, it treats random memory as a file handle, with predictable results.
Try adding the following to the top of your scripts:
BEGIN {
open STDERR, '>> c:/iisError.log'
or die "Can't write to c:/issError.log: $!\n";
binmode STDERR;
}
I'm not sure why you would have this problem. But several "wild" guesses as to sources for such a problem would be addressed by the above code.
(It has been a while since I read the source code for appending to files in Win32, but, as I recall, the >> mode plus binmode means that writes to the file from different processes are unlikely to collide, preventing overlapping text in the log.)
A couple of suggestions:
Make sure that the id of the worker process has write permission to the directory/file you are writing. I probably wouldn't give it full control of C:, though. Better to make a sub-directory.
Write to the event log instead of a file, using Win32::EventLog.
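A minimal sketch of the event-log route, following the usage shown in Win32::EventLog's documentation (the message string is a hypothetical example):
use Win32::EventLog;

# open the Application event log on the local machine
my $log = Win32::EventLog->new('Application');
$log->Report({
    EventType => EVENTLOG_WARNING_TYPE,  # constant exported by Win32::EventLog
    Category  => 0,
    EventID   => 0,
    Data      => '',
    Strings   => "something went wrong: $!",
});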
Update: I discovered that this error only happens when you have a variable in the warn. If the warn is just regular text, there are no issues. Also, the variable cannot be empty, and it looks like you have to have two warns with non-empty variables to hit the bug.