When debugging complex code with the standard Perl debugger in NonStop mode, we sometimes hit the "100 levels deep in subroutine calls!" error.
Is there a way to set the $DB::deep variable without touching the code?
I tried to use the dumpDepth option, but it seems it's not available in NonStop mode.
I know about the perl -MPerlIO='via;$DB::deep=500' hack, but it doesn't work with Perl versions >= 5.20.
Create a ~/.perldb file with
$DB::deep=500
Set permissions on this file to something secure (0444 or 0644) so you don't get this admonishment:
perldb: Must not source insecure rcfile /Users/mob/.perldb.
You or the superuser must be the owner, and it must not
be writable by anyone but its owner.
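Note that ~/.perldb is itself just Perl code that the debugger sources at startup, so a minimal file could look like this (comments and the exact threshold are illustrative):

    # ~/.perldb -- sourced by the Perl debugger before your program runs
    $DB::deep = 500;    # raise the "100 levels deep" recursion threshold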
Maybe I'm wrong, but I am convinced there is some facility provided by UNIX and by the C standard library to get the OS to delete a file once a process exits. But I can't remember what it's called (or maybe I imagined it). In my particular case I would like to access this functionality from Perl.
Java has the deleteOnExit function, but I understand the deletion is done by the JVM rather than by the OS, which means that if the JVM exits uncleanly (e.g. a power failure) the file will never get deleted.
But as I understand it, the facility I am looking for (if it exists) is provided by the OS: the OS looks after the file's deletion, presumably doing some cleanup work at OS start after a power failure and the like, and certainly cleaning up when a process exits uncleanly.
A very very simple solution to this (that only works on *nix systems) is to:
Create and open the file (keep the file handle around)
Immediately call unlink on the file
Proceed as normal using the file handle, and exit when you feel like it
Then when your program is complete, the file descriptor is closed and the file is truly deleted. This will even work if the program crashes.
Of course this only works within the context of a single script (i.e. other scripts won't be able to directly manipulate the file, although you COULD pass them the file descriptor).
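In Perl, the trick looks something like this (a minimal sketch; the path is just an example):

    use strict;
    use warnings;

    my $path = '/tmp/scratch.dat';
    open my $fh, '+>', $path or die "open: $!";
    unlink $path or die "unlink: $!";    # the name is gone, the data is not

    print {$fh} "temporary data\n";
    seek $fh, 0, 0;                      # rewind and read it back
    print scalar <$fh>;
    # when $fh is closed (or the process dies), the kernel frees the file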
If you are looking for something that the OS may automatically take care of on restart after power failure, an END block isn't enough, you need to create the file where the OS is expecting a temporary file. And once you are doing that, you should just use one of the File::Temp routines (which even offer the option of opening and immediately unlinking the file for you, if you want).
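For instance, File::Temp's tempfile() covers both styles (a sketch; behavior as per its documentation):

    use File::Temp qw(tempfile);

    # List context: you also get a name; UNLINK => 1 deletes it at program end
    my ($fh, $name) = tempfile(UNLINK => 1);

    # Scalar context: no name is kept, and File::Temp arranges for the file
    # to be deleted automatically -- effectively the open-then-unlink trick
    my $anon_fh = tempfile();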
You're looking for atexit(). In Perl this is usually done with END blocks. Java and Perl provide their own because they want to be portable to systems that don't follow the relevant standards (in this case C90).
That said, on Unix the common convention is to open a file and then unlink it; the kernel will delete it when the last reference (which is to say, your file descriptor) is closed. You almost always want to open for read+write.
I think you are looking for a function called tmpfile(), which creates a file when called and deletes it when the handle is closed; see the tmpfile(3) man page.
You could do your work in an END block.
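A sketch of that approach (the filename is illustrative; note that END blocks run on normal exit and after die(), but not after kill -9 or a power failure):

    use strict;
    use warnings;

    my $workfile = "/tmp/myjob.$$";
    open my $out, '>', $workfile or die "open: $!";
    print {$out} "in progress\n";
    close $out;

    # ... do the real work ...

    END { unlink $workfile if defined $workfile }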
In Perl, how do I test if a file is open by another Perl program? I need the first program to be done before the second program can start.
FILES DO NOT WORK THAT WAY!
But seriously, folks, advisory locks with flock are generally the best you can do. There's no way to guarantee that no other program wants to read or write a file at the same time as you.
You may be able to coordinate the two programs using flock: The first program would lock the file, and the second program would also try to acquire a lock on it, and it would block until the first program releases the lock.
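A sketch of that coordination (the lock-file name is illustrative; both programs must use flock for it to work, since the locks are advisory):

    use Fcntl qw(:flock);

    # First program: take an exclusive lock on a dedicated lock file
    open my $lock, '>>', 'data.txt.lock' or die "open: $!";
    flock $lock, LOCK_EX or die "flock: $!";
    # ... work on data.txt ...
    close $lock;    # closing the handle releases the lock

    # Second program: the same flock call simply blocks until the lock is free
    open my $lock2, '>>', 'data.txt.lock' or die "open: $!";
    flock $lock2, LOCK_EX or die "flock: $!";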
Is flock() available on your system? Otherwise, the two programs have to be synchronized; they can communicate through a pipe or a socket, or via the presence/absence of a file.
Another direction, if you are on a Unix-like system, could be to use lsof output.
I assume having the first program starting the second one is not feasible.
In my experience, flock works fine on local systems on both Windows and Linux.
You could also, presumably, have the first program exec the second program when it's done processing the file.
If you are running on Windows, you could call CreateFile directly with a dwShareMode of 0.
According to MSDN:
Prevents other processes from opening a file or device if they request delete, read, or write access.
Win32API::File gives access to this call.
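A sketch of what that could look like (argument order per Win32API::File's CreateFile wrapper; the path is illustrative and the details should be checked against the module's documentation):

    use Win32API::File qw(:ALL);

    # Share mode 0 (the third argument): no other process may open the file
    my $h = CreateFile('C:/temp/data.txt', GENERIC_READ, 0, [], OPEN_EXISTING, 0, [])
        or die "CreateFile failed: $^E";
    # ... the file is exclusively ours until the handle is closed ...
    CloseHandle($h);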
To specifically look for whether or not a file is open or in use, if you're on unix there is a wrapper to the lsof command to list open files: Unix::Lsof
If you're in Unix, you could also call fuser.
I might be misunderstanding the context and my comment may have limited usability in your case, but depending on what you are using the code for/on, using serial queues to ensure that tasks execute in a predictable order may be an option. Your application (written in Perl) will need to explicitly create and manage the serial queues. For more details refer to the following link: GCD
I've got a Perl script that as one of its final steps creates a compressed version of a file it has created. Due to some unfortunate setups, we cannot guarantee that a given customer will have a specific compression function. I want to put something like this together:
if ($has_jar) {
    system("jar -c $compressed_file $infile");
}
elsif ($has_zip) {
    system("zip -j $compressed_file $infile");
}
else {
    copy($infile, $compressed_file);
}
Where if they don't have either of the compression apps, it will just copy the file into the location of the compressed file without compressing it.
My sticky wicket here is that I'm not quite sure of the best way to determine whether they have jar or zip. It looks like I could use exec() instead of system() and take advantage of the fact that it only returns if it fails, but the script actually does a couple of things after this, so that wouldn't work.
I also need this to be a portable solution as this script runs on both Windows and various Unix distros. Thanks in advance.
I think your best bet is File::Which.
See my multi-which.
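A minimal sketch with File::Which:

    use File::Which qw(which);

    my $has_jar = defined which('jar');   # full path if found, undef otherwise
    my $has_zip = defined which('zip');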
For *nix based systems, this should work:
my $has_jar = `which jar` ne '';
This could potentially work for Windows as well if you include which.
Alternatively, you could try the command suggested by this answer,
my $has_jar = `for %i in (jar.exe) do #echo. %~$PATH:i` ne '';
It most likely doesn't return '' if it doesn't find it, however, but I don't have Perl available on a Windows machine to test it out.
Look through the directories specified by the PATH environment variable.
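For example, a hypothetical helper that scans PATH by hand (File::Spec->path splits the variable portably; the extension list is a crude nod to Windows):

    use strict;
    use warnings;
    use File::Spec;

    sub find_in_path {
        my ($prog) = @_;
        for my $dir (File::Spec->path) {
            for my $ext ('', '.exe', '.bat') {
                my $candidate = File::Spec->catfile($dir, "$prog$ext");
                return $candidate if -f $candidate && -x _;
            }
        }
        return;    # not found
    }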
Usually, things like that don't suddenly disappear from the system, so I suggest checking for the presence of the tools during setup/installation and saving the one to use in the config.
How about just trying to run the program? If it can't be run, then you know there's a problem.
Why not use the Archive::Zip package to do the compression, eliminating the need for an external program altogether?
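Something along these lines (a sketch using Archive::Zip's documented interface; variable names as in the question):

    use Archive::Zip qw(:ERROR_CODES);
    use File::Basename qw(basename);

    my $zip = Archive::Zip->new();
    $zip->addFile($infile, basename($infile));   # store without leading directories
    $zip->writeToFileNamed($compressed_file) == AZ_OK
        or die "Failed to write $compressed_file";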
There are a couple of things to think about if you are going to do this:
Use system and exec in the list form so the shell doesn't get a chance to interpret special characters (see the sketch after this list).
Can you store this as configuration instead of putting it in the code? See how CPAN.pm does it, for instance.
How do you know that you are running what you think you are running? If someone makes a trojan horse of the same name, is your program going to happily execute it? Note that using the PATH, as noted in Sinan's multi-which, still has this problem since it relies on the user setting the PATH.
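For the first point, the list form passes arguments straight to the program with no shell in between (a sketch; variables as in the question):

    # List form: no shell, so spaces and metacharacters in filenames are safe
    system('zip', '-j', $compressed_file, $infile) == 0
        or warn "zip exited with status $?";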
I have a Perl script running in an AIX box.
The script tries to open a file from a certain directory, and it fails to read the file because the file has no read permission, but I get a different error saying inappropriate ioctl for device.
Shouldn't it say something like no read permissions for file or something similar?
What does this inappropriate ioctl for device message mean?
How can I fix it?
EDIT: This is what I found when I did strace.
open("/local/logs/xxx/xxxxServer.log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE,
0666) = 4 _llseek(4, 0, [77146], SEEK_END) = 0
ioctl(4, SNDCTL_TMR_TIMEBASE or TCGETS, 0xbffc14f8) = -1 ENOTTY
(Inappropriate ioctl for device)
Most likely it means that the open didn't fail.
When Perl opens a file, it checks whether or not the file is a TTY (so that it can answer the -t $fh filetest operator) by issuing the TCGETS ioctl against it. If the file is a regular file and not a tty, the ioctl fails and sets errno to ENOTTY (string value: "Inappropriate ioctl for device"). As ysth says, the most common reason for seeing an unexpected value in $! is checking it when it's not valid: $! is only meaningful immediately after a syscall has failed, so testing the result codes of your operations is critically important.
If open actually did return false for you, and you found ENOTTY in $! then I would consider this a small bug (giving a useless value of $!) but I would also be very curious as to how it happened. Code and/or truss output would be nifty.
Odd errors like "inappropriate ioctl for device" are usually a result of checking $! at some point other than just after a system call failed. If you'd show your code, I bet someone would rapidly point out your error.
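In other words, consult $! only on the failure branch of the call that just failed (a sketch):

    open my $fh, '<', 'input.txt'
        or die "Can't open input.txt: $!";   # $! is meaningful right here
    # ... a few statements later, $! may already hold a stale, unrelated value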
"inappropriate ioctl for device" is the error string for the ENOTTY error. It used to be triggerred primarily by attempts to configure terminal properties (e.g. echo mode) on a file descriptor that was no terminal (but, say, a regular file), hence ENOTTY. More generally, it is triggered when doing an ioctl on a device that does not support that ioctl, hence the error string.
To find out what ioctl is being made that fails, and on what file descriptor, run the script under strace/truss. You'll recognize ENOTTY, followed by the actual printing of the error message. Then find out what file number was used, and what open() call returned that file number.
Since this is a fatal error and also quite difficult to debug, maybe the fix could be put somewhere (in the provided command line?):
export GPG_TTY=$(tty)
From: https://github.com/keybase/keybase-issues/issues/2798
"files" in *nix type systems are very much an abstract concept.
They can be areas on disk organized by a file system, but they could equally well be a network connection, a bit of shared memory, the buffer output from another process, a screen or a keyboard.
In order for Perl to be really useful, it mirrors this model very closely and does not treat files as emulated magnetic tapes the way many 4GLs do.
So it tried an ioctl operation, 'open for write', on a file handle that does not allow write operations, which is an inappropriate ioctl operation for that device/file.
The easiest thing to do is stick an or die "Cannot open $myfile: $!" at the end of your open; that way you can choose your own meaningful message.
I just fixed this perl bug.
See https://rt.perl.org/Ticket/Display.html?id=124232
When we push the buffer layer to PerlIO and do a failing isatty() check, which obviously fails on all normal files, ignore the wrong errno ENOTTY.
Eureka moment!
I have had this error before.
Did you invoke the perl debugger with something like:
perl -d yourprog.pl > log.txt
If so, what's going on is that the perl debugger tries to query, and perhaps reset, the terminal width.
When stdout is not a terminal, this fails with the ioctl message.
The alternative would be for your debug session to hang forever because you did not see the prompt for instructions.
Ran into this error today while trying to use code to delete a folder/files living on a Windoze 7 box mounted as a share on a CentOS server. Got the inappropriate ioctl for device error and tried everything that came to mind. Read just about every post on the net related to this.
Obviously the problem was isolated to the mounted Windoze share on the Linux server. Looked at the file permissions on the Windoze box and noted the files had their permissions set to read only. Changed those, went back to the Linux server, and all worked as expected. This may not be the solution for most, but hopefully it saves someone some time.
I tried the following code that seemed to work:
if (open(my $FILE, "<", "File.txt")) {
    while (<$FILE>) {
        print "$_";
    }
} else {
    print "File could not be opened or does not exist\n";
}
I got the error Can't open file for reading. Inappropriate ioctl for device recently when I migrated an old UB2K forum with a DBM file-based database to a new host. Apparently there are multiple, incompatible implementations of DBM. I had a backup of the database, so I was able to load that, but there are other options; see, e.g., "moving a perl script/dbm to a new server" and "shifting out of dbm?".
I also got this error, "inappropriate ioctl for device", when trying to fetch file stats.
It was the first time I got a chance to work on a Perl script.
my $mtime = (stat("/home/ec2-user/sample/test/status.xml"))[9];
The above code snippet was throwing the error. The Perl script was written for version 5.12 on Windows, and I had to run it on Amazon Linux with Perl 5.15.
In my case the error was because of an array index out of bounds (in the Java sense).
When I modified the code to my $var = (stat("/home/ec2-user/sample/test/status.xml"))[0][9]; the error went away and I got the correct value.
Of course, it is too late to answer, but I am posting my findings so that they can be helpful to the developer community.
If some Perl expert can verify this, it would be great.
Hi, I am using a Perl script written by another person who is no longer with the company.
If I run the script standalone, the output is as expected. But when I call the script repeatedly from another script, the output is wrong except for the first time.
I suspect some variables are not initialised properly. When it is called standalone, it exits each time and all the variable values are initialised to defaults. But when called from another Perl script, the modules and the variable values are probably carried over to the next call of the script.
Is there any way to flush out the called script from memory before I call it next time?
I tried enabling warnings and it threw up 1000s of lines of warnings...!
EDIT: How I am calling the other script:
The code looks like this:
do "processing.pl";
...
...
...
process(params); #A function in processing.pl
...
...
...
If you want to force the module to be reloaded, delete its entry from %INC and then reload it.
For example:
sub reload_module {
    delete $INC{'Your/Silly/Module.pm'};
    require Your::Silly::Module;
    Your::Silly::Module->import;
}
Note that if this module relies on globals in other modules being set, those may need to be reloaded as well. There's no easy way to know without taking a peek at the code.
To quote the question: "I am using a Perl script written by another person who is no longer with the company. ... I tried enabling warnings and it threw up 1000s of lines of warnings...!"
There's your problem right there. The script was not written properly, and should be rewritten.
Ask yourself this question: if it has 1000s of warnings when you enable strict checking, how can you be sure that it is doing the right thing? How can you be sure that it is not clobbering files, trashing data sets, making a mess of your filesystem? Chances are it is doing all of these things, either deliberately or accidentally.
I wouldn't trust running an error-filled script written by someone no longer with the company. I'd rewrite it and be sure that it was doing what I needed it to do.
Unloading a module is a more difficult task than simply removing the %INC entry of the module. Take a look at Class::Unload from CPAN.
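In code, that could look like this (a sketch; the module name is hypothetical):

    use Class::Unload;

    Class::Unload->unload('Your::Silly::Module');
    require Your::Silly::Module;    # a subsequent require really reloads it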
If you don't want to rewrite/fix the script, I suggest calling the script via exec() or one of its varieties. While it is not very elegant to do, it will definitely fix your problem.
Are you sure that you need to reload the module? By using do, you are reading the source every time and executing it. What happens if you change that to require, which will only read and evaluate the source once?
Another possibility (just thinking aloud here) could be to do with the local directory? Are they running from the same place? It probably wouldn't work the first time, though.
Another option is to use system('doprocessing.pl');. Lazily, we do this with a few scripts to force re-initialisation of a number of classes/variables etc., and to force the log files to rotate properly.
Edit: I have just re-read your question, and it would appear that you are not calling it like this.