"inappropriate ioctl for device" - perl

I have a Perl script running in an AIX box.
The script tries to open a file in a certain directory, and the open fails because the file has no read permission. But instead of a permissions error I get a different one: inappropriate ioctl for device.
Shouldn't it say something like no read permission for file or something similar?
What does this inappropriate ioctl for device message mean?
How can I fix it?
EDIT: This is what I found when I did strace.
open("/local/logs/xxx/xxxxServer.log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE,
0666) = 4 _llseek(4, 0, [77146], SEEK_END) = 0
ioctl(4, SNDCTL_TMR_TIMEBASE or TCGETS, 0xbffc14f8) = -1 ENOTTY
(Inappropriate ioctl for device)

Most likely it means that the open didn't fail.
When Perl opens a file, it checks whether or not the file is a TTY (so that it can answer the -T $fh filetest operator) by issuing the TCGETS ioctl against it. If the file is a regular file and not a tty, the ioctl fails and sets errno to ENOTTY (string value: "Inappropriate ioctl for device"). As ysth says, the most common reason for seeing an unexpected value in $! is checking it when it's not valid: that is, anywhere other than immediately after a syscall failed. Testing the result codes of your operations is therefore critically important.
If open actually did return false for you and you found ENOTTY in $!, then I would consider this a small bug (giving a useless value of $!), but I would also be very curious as to how it happened. Code and/or truss output would be nifty.
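A minimal sketch of the right pattern (the path and message are illustrative): test the syscall's return value, and read $! immediately, before anything else can overwrite it.
if (!open(my $fh, '<', '/local/logs/xxx/xxxxServer.log')) {
    my $err = $!;                 # capture errno right away; it is only valid here
    die "open failed: $err\n";
}
# Inspecting $! here, after a successful open, may show leftovers
# such as ENOTTY from Perl's internal isatty() probe.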

Odd errors like "inappropriate ioctl for device" are usually a result of checking $! at some point other than just after a system call failed. If you'd show your code, I bet someone would rapidly point out your error.

"inappropriate ioctl for device" is the error string for the ENOTTY error. It used to be triggerred primarily by attempts to configure terminal properties (e.g. echo mode) on a file descriptor that was no terminal (but, say, a regular file), hence ENOTTY. More generally, it is triggered when doing an ioctl on a device that does not support that ioctl, hence the error string.
To find out what ioctl is being made that fails, and on what file descriptor, run the script under strace/truss. You'll recognize ENOTTY, followed by the actual printing of the error message. Then find out what file number was used, and what open() call returned that file number.
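A sketch that reproduces the trap deliberately (the path is illustrative): the -t filetest performs an isatty() probe, which fails on a plain file and will typically leave ENOTTY in errno even though nothing is actually wrong.
open(my $fh, '<', '/etc/hosts') or die "open: $!";
my $is_tty = -t $fh;             # isatty() probe; false for a regular file
print "errno now reads: $!\n";   # likely "Inappropriate ioctl for device"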

Since this is a fatal error that is also quite difficult to debug, it may be worth putting the fix somewhere it always runs (e.g. in the command line you provide):
export GPG_TTY=$(tty)
From: https://github.com/keybase/keybase-issues/issues/2798

"files" in *nix type systems are very much an abstract concept.
They can be areas on disk organized by a file system, but they could equally well be a network connection, a bit of shared memory, the buffer output from another process, a screen or a keyboard.
In order for Perl to be really useful, it mirrors this model very closely and does not treat files by emulating a magnetic tape, as many 4GLs do.
So it tried an ioctl operation, 'open for write', on a file handle which does not allow write operations, and that is an inappropriate ioctl operation for that device/file.
The easiest thing to do is stick an or die "Cannot open $myfile: $!" at the end of your open, so you can choose your own meaningful message.
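For instance, a minimal sketch ($myfile stands in for whatever path the script actually uses):
my $myfile = '/some/path';              # illustrative
open(my $fh, '>', $myfile)
    or die "Cannot open $myfile: $!";   # double quotes so $myfile and $! interpolate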

I just fixed this perl bug.
See https://rt.perl.org/Ticket/Display.html?id=124232
When the buffer layer is pushed to PerlIO and the isatty() check fails (as it obviously does on all normal files), the spurious errno ENOTTY is now ignored.

Eureka moment!
I have had this error before.
Did you invoke the Perl debugger with something like:
perl -d yourprog.pl > log.txt
If so, what's going on is that the Perl debugger tries to query, and perhaps reset, the terminal width. When stdout is not a terminal, this fails with the ioctl message. The alternative would be for your debug session to hang forever because you never saw the prompt for instructions.

Ran into this error today while trying to delete a folder/files living on a Windows 7 box that is mounted as a share on a CentOS server. Got the inappropriate ioctl for device error and tried everything that came to mind. Read just about every post on the net related to this.
The problem was clearly isolated to the mounted Windows share on the Linux server. Looking at the file permissions on the Windows box, I noted the files had their permissions set to read-only. I changed those, went back to the Linux server, and all worked as expected. This may not be the solution for most, but hopefully it saves someone some time.

I tried the following code that seemed to work:
if (open(my $FILE, '<', 'File.txt')) {
    while (<$FILE>) {
        print $_;
    }
} else {
    print "File could not be opened or does not exist: $!\n";
}

I got the error Can't open file for reading. Inappropriate ioctl for device recently when I migrated an old UB2K forum with a DBM file-based database to a new host. Apparently there are multiple, incompatible implementations of DBM. I had a backup of the database, so I was able to load that, but it seems there are other options; see, e.g., "moving a perl script/dbm to a new server" and "shifting out of dbm?".

I also got this error, "inappropriate ioctl for device", when trying to fetch a file's stat.
It was the first time I had a chance to work on a Perl script.
my $mtime = (stat("/home/ec2-user/sample/test/status.xml"))[9];
The code snippet above was throwing the error. The Perl script was written against version 5.12 on Windows, and I had to run it on Amazon Linux with Perl 5.15.
In my case the error was because of an array index out of bounds (in the Java sense).
When I modified the code to my $var = (stat("/home/ec2-user/sample/test/status.xml"))[0][9]; the error went away and I got the correct value.
Of course, it is too late to answer, but I am posting my finding so that it can be helpful for the developer community.
If some Perl expert can verify this, that would be great.
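For reference, the documented behaviour is that stat returns a flat 13-element list with the mtime at index 9, and an empty list on failure; a minimal sketch (the path is the asker's):
my @st = stat("/home/ec2-user/sample/test/status.xml")
    or die "stat failed: $!";   # $! is only valid right after the failure
my $mtime = $st[9];             # element 9 is the modification time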

Related

Nagios plugin gives “no output returned” using compiled perl

I have a custom nagios plugin which is written in Perl. For complicated political reasons I am required to hide the source code of this plugin. The only way I found to do this was by using perlc (http://marginalhacks.com/Hacks/perlc).
In the words of author:
"Takes a single perl script, converts the block using a simple encoding with an optionally defined key. The script is decoded at runtime and fed to the perl library, to avoid it getting in the hands of the user."
The problem I am getting is that Nagios shows "No output returned from plugin" when I used the compiled version of the plugin. The raw perl source works just fine.
After debugging for a while I narrowed the problem down to using exit in Perl, i.e. this works fine when compiled:
print "OK: Everything is working fine.\n";
This, however, does not work and results in "No output returned from plugin":
print "OK: Everything is working fine.\n";
exit 1;
It doesn't matter how I exit (0, 1, 2, or 3); I still get the same problem.
According to the cross-posted PerlMonks thread, the issue was resolved by enabling autoflushing:
$| = 1;
From perlvar:
HANDLE->autoflush( EXPR )
$OUTPUT_AUTOFLUSH
$|
If set to nonzero, forces a flush right away and after every write or
print on the currently selected output channel. Default is 0
(regardless of whether the channel is really buffered by the system or
not; $| tells you only whether you've asked Perl explicitly to flush
after each write). STDOUT will typically be line buffered if output is
to the terminal and block buffered otherwise. Setting this variable is
useful primarily when you are outputting to a pipe or socket, such as
when you are running a Perl program under rsh and want to see the
output as it's happening.
See the article Suffering from Buffering? for more details.
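A minimal sketch of the fix in plugin form (the message is illustrative; by Nagios convention exit 0 is OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN):
$| = 1;   # autoflush STDOUT so the output is not stuck in a buffer at exit
print "OK: Everything is working fine.\n";
exit 0;   # 0 = OK under the Nagios plugin convention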

Why does MATLAB break the terminal's stdout, and how do I fix it?

Every time I finish running a MATLAB code collection on the command line and exit MATLAB, the standard output gets messed up. I can still use the terminal window, but whatever I type won't show up on the screen, leaving me either typing blind or opening a new terminal and tediously cd'ing back to the old place.
This happens every single time I use make to run a MATLAB collection, and since I'm working a lot on this, it has become very annoying. Does anyone know what the problem is and how I should fix it?
As was pointed out in the comments, the make script is probably dumping "bad" characters to the terminal. You could prevent this (but possibly lose useful information) by redirecting the output: instead of sending it to the terminal window, you can send it to a file, or even /dev/null ("the great bit bucket in the sky").
The underlying problem, however, is that your makefile is sending these characters to the terminal in the first place. I would recommend that you pipe the output to a file with something like make > myDump.txt, then examine the resulting file to see what is going on and where in your makefile the problem is created. You may still get some output when you do this: by default, > redirects only stdout and not stderr, a second output stream used for error messages. You can redirect both to a file with make > myDump.txt 2>&1.
You have already seen the recommendation to use stty sane to restore the status of the terminal - I am repeating it here in case someone only looks at answers, and not at comments; but I don't take credit for it :-).

back ticks not working in perl

Got stuck with one problem on our live server.
Have a Perl script which runs almost 15 to 18 hours a day and creates 100+ subprocesses every day. In one place it runs a command (a product command which we normally run on the command line of the Solaris box) triggered with backticks inside the Perl code.
It looks like the backticks command gets skipped or fails randomly; e.g. if I need to run it for 50 customers, 2 or 3 fail randomly.
I do not see any evidence anywhere that the command was even triggered.
Since it's a live server, we can't try many code changes until we are sure about the problem.
Here is the code:
my $comm = "inventory -noX customer1";  # sample command
my $newLogFile = "...";                 # path where the command-output file gets created
my $piddy = `$comm 2>&1 > $newLogFile`;
Is it because of the backticks that this happens? I am really not sure :(.
Also tried various analyses (memory/CPU/disk space/adding librtld_db.so to LD_LIBRARY_PATH, etc.) but no luck. The Perl is 64-bit. What else can I do? :(
I suspect you are not checking for errors (and perl doesn't make that easy to do correctly for backticks).
Consider using IPC::System::Simple's capture in place of your backticks/qx.
As its doc says, "If there's an error, it will die with a detailed description of what went wrong."
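A minimal sketch of that suggestion (the command is the asker's sample):
use IPC::System::Simple qw(capture);

# capture() runs the command, returns its stdout, and dies with a
# detailed message if it cannot be started or exits non-zero.
my $output = capture("inventory -noX customer1");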
It shouldn't fail just because of backticks; however, because it is spawning a new process, that process may be periodically subject to failure due to system conditions (e.g. system load). Backticks are really a "fire and forget" method and should never be used for anything critical in a production environment. As previously suggested, there are far more robust ways to manage spawning external processes.
If the command's output is being lost due to buffering, you might try turning off buffering, but keep an eye on it for performance degradation (it's usually not significant).
Buffering can be turned off for an entire script by adding this near the top:
$|=1;
When calling external commands, I use system from IPC::System::Simple or open3 from IPC::Open3.
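A sketch of the open3 route (the command is the asker's sample; for large output you would read both streams with a select loop to avoid deadlock):
use IPC::Open3;
use Symbol qw(gensym);

my $err = gensym;   # open3 autovivifies stdin/stdout handles, but not stderr
my $pid = open3(my $in, my $out, $err,
                "inventory", "-noX", "customer1");
my @stdout = <$out>;
my @stderr = <$err>;
waitpid($pid, 0);
die "command failed (exit ", $? >> 8, "): @stderr" if $?;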

How can I debug a Perl program that suddenly exits?

I have a Perl program based on IO::Async, and it sometimes just exits after a few hours/days without printing any error message whatsoever. There's nothing in dmesg or /var/log either. STDOUT/STDERR both have autoflush(1) set, so data shouldn't be lost in buffers. It doesn't actually exit from IO::Async::Loop->loop_forever; a print I put there just to make sure of that never gets triggered.
Now one way would be to keep peppering the program with more and more prints and hope one of them gives me some clue. Is there a better way to find out what was going on in a program when it exits or silently crashes?
One trick I've used is to run the program under strace or ltrace (or attach to the process using strace). Naturally that was under Linux. Under other operating systems you'd use ktrace or dtrace or whatever is appropriate.
A trick I've used for programs which only exhibit sparse issues over days or weeks, and then only on handfuls among hundreds of systems, is to direct the output from my tracer to a FIFO and have a custom program keep only 10K lines in a ring buffer, with a handler on SIGPIPE and SIGHUP to dump the current buffer contents into a file. (It's a simple program, but I don't have a copy handy and I'm not going to re-write it tonight; my copy was written for internal use and is owned by a former employer.)
The ring buffer allows the program to run indefinitely without fear of running systems out of disk space; we usually only need a few hundred, or at most a couple thousand, lines of the trace in such matters.
If you are capturing STDERR, you could start the program as perl -MCarp::Always foo_prog. Carp::Always forces a stack trace on all errors.
A sudden exit without any error message is possibly a SIGPIPE. Traditionally SIGPIPE is used to stop things like the cat command in the following pipeline:
cat file | head -10
It doesn't usually result in anything being printed either by libc or perl to indicate what happened.
Since in an IO::Async-based program you'd not want to silently exit on SIGPIPE, my suggestion would be to put somewhere in the main file of the program a line something like
$SIG{PIPE} = sub { die "Aborting on SIGPIPE\n" };
which will at least alert you to this fact. If instead you use Carp::croak without the \n, you might even be lucky enough to get the file/line number of the syswrite, etc., that caused the SIGPIPE.
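A sketch of that variant (Carp is in core, so nothing extra needs installing):
use Carp;

# croak without a trailing newline reports the caller's location,
# which for a signal handler is the code interrupted by SIGPIPE.
$SIG{PIPE} = sub { croak "Aborting on SIGPIPE" };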

Why does IIS crash when I print to stderr in Perl?

This has been driving me crazy. We have IIS 6 on Windows 2008 and ActiveState Perl 5.10. For some reason, whenever we do a warn or a carp, it eventually corrupts the app pool. Of course, that's a pretty big deal, since it means that our errors actually cause problems.
This also happened with the previous versions of Perl (5.8), Windows (2003), and IIS (5). Anyway, basically I put in a carp or a warn, and I get an error message and then some garbage text. Any thoughts?
Check to make sure that IIS and the perl DLL are linked with the same version of the C runtime library. (Use depends.exe or dumpbin /dependents).
To expand: the problem may be that IIS has its FILE* table in one place, and the perl DLL thinks it's going to be in a slightly different place. When perl goes to find the stderr handle, it treats random memory as a file handle, with predictable results.
Try adding the following to the top of your scripts:
BEGIN {
open STDERR, '>> c:/iisError.log'
or die "Can't write to c:/issError.log: $!\n";
binmode STDERR;
}
I'm not sure why you would have this problem. But several "wild" guesses as to sources for such a problem would be addressed by the above code.
(It has been a while since I read the source code for appending to files in Win32, but, as I recall, the >> mode plus binmode means that writes to the file from different processes are unlikely to collide, preventing overlapping text in the log.)
A couple of suggestions:
- Make sure that the id of the worker process has write permission to the directory/file you are writing. I probably wouldn't give it full control of C:, though; better to make a sub-directory.
- Write to the event log instead of a file, using Win32::EventLog (a sketch follows this list).
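A minimal sketch of the Win32::EventLog route (the source name and message text are illustrative):
use Win32::EventLog;

my $log = Win32::EventLog->new('Application')
    or die "Cannot open event log";
$log->Report({
    EventType => EVENTLOG_ERROR_TYPE,   # constant exported by Win32::EventLog
    Category  => 0,
    EventID   => 0,
    Data      => '',
    Strings   => 'something went wrong in the CGI script',
});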
Update: I discovered that this error only happens when the warn includes a variable. If the warn is just literal text, there are no issues. Also, the variable cannot be empty, and it looks like you have to have two warns with nonempty variables to hit the bug.