How can I check if a file is open by another program in Perl? - perl

In Perl, how do I test if a file is open by another Perl program? I need the first program to be done before the second program can start.

FILES DO NOT WORK THAT WAY!
But seriously, folks, advisory locks with flock are generally the best you can do. There's no way to guarantee that no other program wants to read or write a file at the same time as you.

You may be able to coordinate the two programs using flock: The first program would lock the file, and the second program would also try to acquire a lock on it, and it would block until the first program releases the lock.
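A minimal sketch of that coordination, assuming both programs agree on which file to lock (the file name here is made up):

use Fcntl qw(:flock);

# First program: take an exclusive advisory lock while working.
open my $fh, '>>', 'shared.log' or die "open: $!";
flock $fh, LOCK_EX or die "flock: $!";
# ... process the file ...
close $fh;                    # closing the handle releases the lock

# Second program: this flock call blocks until the lock is released.
open my $fh2, '<', 'shared.log' or die "open: $!";
flock $fh2, LOCK_SH or die "flock: $!";
# ... safe to start now ...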

Is flock() available on your system? Otherwise, the two programs have to be synchronized some other way: they can communicate through a pipe or a socket, or via the presence/absence of a file.
Another option, if you are on a Unix-like system, is to use the output of lsof.
I assume having the first program start the second one is not feasible.

In my experience, flock works fine on local systems on both Windows and Linux.
You could also, presumably, have the first program exec the second program when it's done processing the file.
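For example (the second script's name is hypothetical):

# exec replaces the current process, so it never returns on success.
exec 'perl', 'second_program.pl' or die "exec failed: $!";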

If you are running on Windows, you could call CreateFile directly with a dwShareMode of 0.
According to MSDN:
Prevents other processes from opening a file or device if they request delete, read, or write access.
Win32API::File gives access to this call.
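A rough sketch of that approach, as I recall the Win32API::File interface (verify the parameter order against the module's documentation; the path below is hypothetical):

use Win32API::File qw(:ALL);

# dwShareMode of 0: no other process may open the file while we
# hold this handle, and CreateFile fails if it is already open.
my $handle = CreateFile(
    'C:\\logs\\run.log',        # hypothetical path
    GENERIC_READ,               # desired access
    0,                          # share mode 0 = exclusive
    [],                         # default security attributes
    OPEN_EXISTING,              # do not create the file
    FILE_ATTRIBUTE_NORMAL,
    [],                         # no template handle
);
if ($handle) {
    print "file is not held open elsewhere\n";
    CloseHandle($handle);
} else {
    print "file appears to be in use: ", fileLastError(), "\n";
}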

To look specifically at whether or not a file is open or in use: if you're on Unix, there is a wrapper around the lsof command to list open files: Unix::Lsof
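A sketch, assuming the interface shown in the module's synopsis, where lsof() returns parsed output plus any error string (the path is hypothetical, and the lsof binary must be installed):

use Unix::Lsof;

my ($output, $error) = lsof('/var/log/app.log');
die "lsof problem: $error" if $error;
if (%$output) {
    print "file is open by at least one process\n";
}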

If you're in Unix, you could also call fuser.
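For example, fuser's -s flag suppresses output and it exits 0 when at least one process is using the file (the path is hypothetical):

# system() returns the wait status, so 0 means fuser found a user.
my $in_use = system('fuser', '-s', '/var/log/app.log') == 0;
print $in_use ? "file is in use\n" : "file is free\n";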

I might be misunderstanding the context, and my comment may have limited usability in your case, but depending on what you are using the code for, using serial queues to ensure that tasks execute in a predictable order may be an option. Your application (written in Perl) would need to explicitly create and manage the serial queues. For more details refer to the following link: GCD

Related

Perl: system command returns before it is done

I'm using a few system() commands in my Perl script that is running on Linux.
The commands I run with the system() function output their data to a log which I then parse to decide what to do next.
I noticed that sometimes it looks like the code that parses the log file (which comes after the system() function) doesn't use the final log.
For example, I search for a "test pass" phrase in the log file - and the script doesn't find it even though when I open the log it is there.
Another example - I try to delete the folder where the log was placed, but it doesn't let me because it's "Not empty". When I try to delete it manually it is deleted without errors.
(These examples happen every now and then, but most of the time they don't)
It looks like some kind of "timing" problem to me. How can I solve it?
If you want to be safe and are on Linux, call system("sync"); after every command that writes to the disk and before reading from the disk. That forces the OS to write everything still buffered to the filesystem and only return afterwards. Thus you can be sure that when it is finished, everything you wrote to files has actually arrived there.
But be aware that this may be overkill in many situations. There are reasons for those buffers, and manually calling sync constantly is most likely not the fastest way of achieving things. See also http://linux.die.net/man/8/sync
If, for example, you have something else that you could do between the writing and the reading, like some calculations or whatever, that would likely be enough, and you would not waste time by telling the OS that you know better how and when it has to do its job. ^^
But if perfect efficiency is not your main concern (and you should not be using Perl if it were), system("sync"); after everything that modifies files and before accessing those files is probably okay and safe.
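A sketch of that pattern (the tool name and log path are hypothetical):

# Run the tool, force buffered data to disk, then parse the log.
system('some_test_tool', '--log', '/tmp/run.log') == 0
    or die "tool failed: $?";
system('sync') == 0 or warn "sync failed: $?";

open my $log, '<', '/tmp/run.log' or die "open log: $!";
my $passed = grep { /test pass/ } <$log>;
close $log;
print $passed ? "pass\n" : "fail\n";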

Executing system commands safely while coding in Perl

Should one really use external commands while coding in Perl? I see several disadvantages: it's not system-independent, and it may pose security risks. What do you think? If there is no way around it and you have to use shell commands from Perl, then what is the safest way to execute that particular command (like checking the pid, uid, etc.)?
It depends on how hard it is going to be to replicate the functionality in Perl. If I needed to run the m4 macro processor on something, I'd not think of trying to replicate that functionality in Perl myself, and since there's no module on http://search.cpan.org/ that looks suitable, it would appear others agree with me. In that case, then, using the external program is sensible. On the other hand, if I needed to read the contents of a directory, then the combination of readdir() et al plus stat() or lstat() inside Perl is more sensible than futzing with the output of ls.
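For instance, a minimal native sketch of the directory-listing case (the directory path is hypothetical):

opendir my $dh, '/some/dir' or die "opendir: $!";
my @entries = grep { !/^\.\.?$/ } readdir $dh;   # skip . and ..
closedir $dh;

# stat() gives size, timestamps, etc. without parsing ls output.
for my $name (@entries) {
    my $size = (stat "/some/dir/$name")[7];
    print "$name: $size bytes\n";
}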
If you need to execute commands, think very carefully about how you invoke them. In particular, you probably want to avoid the shell interpreting the arguments, so use the array form of system (see also exec), etc, rather than a single string for the command plus arguments (which means the shell is used to process the command line).
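For example:

my ($pattern, $file) = ('test pass', 'run.log');   # hypothetical values

# Single-string form: the shell parses the line, so metacharacters
# in $pattern or $file can be interpreted (risky).
system("grep $pattern $file");

# List form: no shell is involved; arguments are passed verbatim.
system('grep', $pattern, $file) == 0
    or warn "grep exited with status $?";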
Executing external commands can be expensive simply because it involves forking a new process and watching for its output if you need it.
Probably more importantly, should the external process fail for any reason, it may be difficult for your script to understand what happened. Worse still, surprisingly often the external process gets stuck forever, and so will your script. You can use special tricks like opening a pipe and watching for output in a loop, but this is itself error-prone.
Perl is very capable of doing many things. So, if you stick to using only native Perl constructs and modules to accomplish your tasks, not only will it be faster (because you never fork), it will also be more reliable and easier to catch errors, by looking at the native Perl objects and structures returned by library routines. And of course, it will be automatically portable to different platforms.
If your script runs under elevated permissions (like root or under sudo), you should be very careful about which external programs you execute. One simple way to ensure basic security is to always specify commands by full path, like /usr/bin/grep (but still think twice, and consider just doing the grep in Perl itself!). However, even this may not be enough if an attacker is using the LD_PRELOAD mechanism to inject rogue shared libraries.
If you are willing to go very secure, consider enabling Perl's taint checks with the -T flag, like this:
#!/usr/bin/perl -T
The taint flag will also be enabled automatically if Perl determines that your script is running with different real and effective user or group IDs.
Taint mode will severely limit your ability to do many things (like calling system()) without Perl complaining - see more at http://perldoc.perl.org/perlsec.html#Taint-mode - but it will give you much higher confidence in your security.
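A sketch of the usual taint-mode idiom from perlsec: scrub %ENV and untaint external input with a capturing regex before passing it to system() (the pattern below is only an illustration; choose one appropriate for your data):

#!/usr/bin/perl -T
use strict;
use warnings;

# Taint mode insists on a trustworthy environment before system().
$ENV{PATH} = '/bin:/usr/bin';
delete @ENV{qw(IFS CDPATH ENV BASH_ENV)};

my $arg = $ARGV[0] // '';
if ($arg =~ m{\A([\w./-]+)\z}) {
    my $safe = $1;   # captured groups are considered untainted
    system('/bin/ls', '-l', $safe) == 0 or die "ls failed: $?";
} else {
    die "argument looks unsafe, refusing to run\n";
}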
Should one really use external commands while coding in Perl?
There's no single answer to this question. It all depends on what you are doing within the wide range of potential uses of Perl.
Are you using Perl as a glorified shell script on your local machine, or just trying to find a quick-and-dirty solution to your problem? In that case, it makes a lot of sense to run system commands if that is the easiest way to accomplish your task. Security and speed are not that important; what matters is the ability to code quickly.
On the other hand, are you writing a production program? In that case, you want secure, portable, efficient code. It is often preferable to write the functionality in Perl (or use a module), rather than calling an external program. At least, you should think hard about the benefits and drawbacks.

Delete file on exit

Maybe I'm wrong, but I am convinced there is some facility provided by UNIX and by the C standard library to get the OS to delete a file once a process exits. But I can't remember what it's called (or maybe I imagined it). In my particular case I would like to access this functionality from Perl.
Java has the deleteOnExit method, but I understand the deletion is done by the JVM as opposed to the OS, which means that if the JVM exits uncleanly (e.g. power failure) then the file will never get deleted.
But as I understand it, the facility I am looking for (if it exists) is provided by the OS: the OS looks after the file's deletion, presumably doing some cleanup work on OS start in the case of a power failure, and certainly doing cleanup if the process exits uncleanly.
A very very simple solution to this (that only works on *nix systems) is to:
Create and open the file (keep the file handle around)
Immediately call unlink on the file
Proceed as normal using the file handle, and exit when you feel like it
Then when your program is complete, the file descriptor is closed and the file is truly deleted. This will even work if the program crashes.
Of course this only works within the context of a single script (i.e. other scripts won't be able to directly manipulate the file, although you COULD pass them the file descriptor).
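A minimal sketch of the idiom (the path is hypothetical):

# Create, open, and immediately unlink; the data stays reachable
# through the handle until it is closed, even if we crash.
open my $fh, '+>', '/tmp/scratch.dat' or die "open: $!";
unlink '/tmp/scratch.dat' or die "unlink: $!";

print {$fh} "intermediate results\n";
seek $fh, 0, 0;               # rewind to read back what we wrote
my $line = <$fh>;
close $fh;                    # the kernel reclaims the space here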
If you are looking for something that the OS may automatically take care of on restart after power failure, an END block isn't enough, you need to create the file where the OS is expecting a temporary file. And once you are doing that, you should just use one of the File::Temp routines (which even offer the option of opening and immediately unlinking the file for you, if you want).
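A sketch using File::Temp (UNLINK is passed explicitly here rather than relying on its context-dependent default):

use File::Temp qw(tempfile);

# Create a temp file in the system temp dir; File::Temp removes
# it automatically when the program exits.
my ($fh, $filename) = tempfile('myapp_XXXXXX', TMPDIR => 1, UNLINK => 1);
print {$fh} "scratch data\n";

# In scalar context, tempfile() returns only a filehandle to a file
# that is already unlinked, on systems that support it.
my $anon = tempfile();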
You're looking for atexit(). In Perl this is usually done with END blocks. Java and Perl provide their own because they want to be portable to systems that don't follow the relevant standards (in this case C90).
That said, on Unix the common convention is to open a file and then unlink it; the kernel will delete it when the last reference (which is to say, your file descriptor) is closed. You almost always want to open for read+write.
I think you are looking for a C function called tmpfile(), which creates a temporary file when called and deletes it when it is closed.
You could do your work in an END block.
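For example (the scratch path is hypothetical; an END block runs at normal interpreter shutdown, including on die(), but not after a power failure or SIGKILL):

my $tmp = "/tmp/myscript.$$";
open my $fh, '>', $tmp or die "open: $!";

END {
    # Clean up when the interpreter shuts down.
    unlink $tmp if defined $tmp && -e $tmp;
}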

How do I daemonize a perl script from within a perl script?

I have a Perl script which calls another Perl script using backticks. I want to instead call this script and have it daemonize itself. How do I go about doing this?
edit:
I don't care to communicate back with the process/daemon. I'll most likely just stick it in an SQLite3 table or something.
You refer to backticks, so I suppose that you want to communicate with the daemon after it's started? Since daemons do not use STDOUT, you will have to think of some other way of passing information to and from it.
The Perl interprocess communication man page (perlipc) has several good examples of this, especially the section "Complete dissociation of child from parent".
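A condensed sketch of the steps from that perlipc section:

use POSIX qw(setsid);

sub daemonize {
    chdir '/'                     or die "chdir /: $!";
    open STDIN,  '<', '/dev/null' or die "read /dev/null: $!";
    open STDOUT, '>', '/dev/null' or die "write /dev/null: $!";
    defined(my $pid = fork())     or die "fork: $!";
    exit 0 if $pid;               # parent exits, child carries on
    setsid()                      or die "setsid: $!";
    open STDERR, '>&', \*STDOUT   or die "dup stdout: $!";
}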
The Proc::Daemon module provides convenient functions for daemonizing a process.
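At its simplest (a sketch; Proc::Daemon::Init forks and detaches the process from the controlling terminal):

use Proc::Daemon;

Proc::Daemon::Init();   # from here on, we are a daemon
# ... do the background work, e.g. write results to SQLite ...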

How to use Perl for bi-directional communication with dsmadmc.exe?

I have a simple web form with a little JS script that sends form values to a text box. This combined value becomes a database query.
This will be sent to dsmadmc (the TSM administrative command line).
How can I use Perl to keep the dsmadmc process open for consecutive input/output, without the dsmadmc process closing between each input command sent?
And how can I capture the output - this is to be sent back to the same web page, in a separate div.
Any thoughts, anyone?
Probably IPC::Open2 could help. It lets you read from and write to both the input and the output of an external process.
Beware of deadlocks though (i.e. situations where both your code and the app wait for their counterpart). You might want to use IO::Select to handle that.
P.S. I don't know how these modules behave on Windows (.exe?..), but from a quick Google search it looks like they are compatible.
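A rough sketch, assuming dsmadmc reads commands on stdin and that you can recognize the end of each response (the login flags and the end-of-output marker below are assumptions, not verified against dsmadmc):

use IPC::Open2;

# open2() dies on failure, so no explicit error check is needed.
my $pid = open2(my $out, my $in, 'dsmadmc', '-id=admin', '-password=secret');

print {$in} "query session\n";
while (my $line = <$out>) {
    print $line;
    last if $line =~ /^ANS\d+/;   # hypothetical end-of-output marker
}

close $in;          # sends EOF, letting dsmadmc exit cleanly
waitpid $pid, 0;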