In a nutshell: I wrote a Perl script using flock(). On Linux, it behaves as expected. On AIX, flock() always returns 1, even though another instance of the script should be holding an exclusive lock on the lockfile.
We ship a Bash script to restart our program, relying on flock(1) to prevent simultaneous restarts from making multiple processes. Recently we deployed on AIX, where flock(1) doesn't come by default and won't be provided by the admins. Hoping to keep things simple, I wrote a Perl script called flock, like this:
#!/usr/bin/perl
use Fcntl ':flock';
use Getopt::Std 'getopts';

getopts("nu:x:");

# Map each switch to its Fcntl lock constant.
%switches = (LOCK_EX => $opt_x, LOCK_UN => $opt_u, LOCK_NB => $opt_n);
my $lockFlags = 0;
foreach $key (keys %switches) {
    if ($switches{$key}) { $lockFlags |= eval($key) }
}

# The -x/-u argument is an already-open file descriptor number.
$fileDesc = $opt_x || $opt_u;
open(my $lockFile, ">&=$fileDesc") || die "Can't open file descriptor: $!";
flock($lockFile, $lockFlags) || die "Can't change lock - $!\n";
I tested the script by running (flock -n -x 200; sleep 60) 200>lockfile twice, nearly simultaneously, from two terminal tabs.
On Linux, the second run dies with "Resource temporarily unavailable", as expected.
On AIX, the second run acquires the lock, with flock() returning 1, as most definitely not expected.
I understand that flock() is implemented differently on the two systems, the Linux version using flock(2) and the AIX one using, I think, fcntl(2). I don't have enough expertise to understand how this causes my problem or how to solve it.
Many thanks for any advice.
This doesn't have anything to do with AIX; the open() call in your script is incorrect.
It should be something like:
open(my $lockfile, ">>", $fileDesc) or die "Can't open $fileDesc: $!"; # for LOCK_EX, the handle must be writable
You were using the "dup() a previously opened file handle" syntax with >&=, but the script had not opened any files to duplicate, nor should it.
My quick tests show the correct behavior (debugging output added).
first window:
$ ./flock.pl -n -x lockfile
opened lockfile
locked
second window:
$ ./flock.pl -n -x lockfile
opened lockfile
Can't change lock - Resource temporarily unavailable
$
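For completeness, here is a minimal sketch of the wrapper with that fix applied, treating the -x/-u argument as a file name rather than a descriptor number; the trailing sleep is only an assumption for testing, to hold the lock long enough for a second invocation to collide:

#!/usr/bin/perl
use strict;
use warnings;
use Fcntl ':flock';
use Getopt::Std 'getopts';

getopts("nu:x:", \my %opts);

# Build the flag word from the switches, without string eval.
my $lockFlags = 0;
$lockFlags |= LOCK_EX if $opts{x};
$lockFlags |= LOCK_UN if $opts{u};
$lockFlags |= LOCK_NB if $opts{n};

my $file = $opts{x} || $opts{u};
open my $lockFile, ">>", $file or die "Can't open $file: $!";
print "opened $file\n";

flock($lockFile, $lockFlags) or die "Can't change lock - $!\n";
print "locked\n";

sleep 60;    # hold the lock; it is released when the process exits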
I suppose it's not about different commands; it's more about general differences between AIX and Linux.
On POSIX systems, file locks are advisory: each program is expected to check the lock state itself and then decide what to do. No explicit checks = no locking.
On Linux, one can try to enforce a mandatory lock, although the documentation itself states that it would be unwise to rely on this: the implementation is (and probably always will be) buggy.
Therefore, I suggest implementing such checks of advisory flags within the script itself.
More about it: man 2 fcntl, man 2 flock.
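In Perl, that explicit check might look like the following sketch; every cooperating process has to attempt the lock itself, because the kernel enforces nothing on processes that never ask (the file name is illustrative):

use Fcntl ':flock';

# Advisory locking only works if every participant runs this check.
open my $fh, ">>", "lockfile" or die "Can't open lockfile: $!";
if (flock($fh, LOCK_EX | LOCK_NB)) {
    # We hold the exclusive lock; do the protected work here.
}
else {
    die "Another instance holds the lock: $!";
}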
I have a question about this answer, quoted below, by friedo to another question here. (I don't have permission to comment on it, so I am asking this as a question.)
"You can use File::Tee.
use File::Tee qw(tee);
tee STDOUT, '>>', 'some_file.out';
print "w00p w00p";
If File::Tee is unavailable, it is easily simulated with a pipeline:
open my $tee, "|-", "tee some_file.out";
print $tee "w00p w00p";
close $tee;
Are both of these tees the same? Or is one from Perl and the other from Linux/Unix?
They're mostly the same, but the implementation details differ.
Opening a pipe to tee some_file.out forks a new process and runs the Unix / Linux utility program tee(1) in it. This program reads its standard input (i.e. anything you write to the pipe) and writes it both to some_file.out as well as to stdout (which it inherits from your program).
Obviously, this will not work under Windows, or on any other system that doesn't provide a Unix-style tee command.
The File::Tee module, on the other hand, is implemented in pure Perl, and doesn't depend on any external programs. However, according to its documentation, it also works by forking a new process and running what is essentially a Perl reimplementation of the Unix tee command under it. This does have some advantages, as the documentation states:
"It is implemeted around fork, creating a new process for every tee'ed stream. That way, there are no problems handling the output generated by external programs run with system or by XS modules that don't go through perlio."
On the other hand, the use of fork has its down sides as well:
"BUGS
Does not work on Windows (patches welcome)."
If you do want a pure Perl implementation of the tee functionality that works on all platforms, consider using IO::Tee instead. Unlike File::Tee, this module is implemented using PerlIO and does not use fork.
Alas, this also means that it may not correctly capture the output of external programs executed with system or XS modules that bypass PerlIO.
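For illustration, a minimal IO::Tee sketch (the file name is made up) might look like this:

use IO::Tee;

# Build a tee'd handle entirely in-process; no fork, no external tee(1).
open my $log, ">>", "some_file.out" or die "Can't open some_file.out: $!";
my $tee = IO::Tee->new(\*STDOUT, $log);
print $tee "w00p w00p\n";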
I am making a few calls to system, mainly cd commands, as certain functions need to be called from certain directories on my system. However, I have noticed that once a call is finished, its effects are lost.
For example, lets say that I start in /home/project and then call:
system("setenv home/project/env/NeededEnvironment");
system("make cfile.o");
The second system call doesn’t know about the first call setting the environment needed for the file to compile. I have tried putting them into one system call separated by ; as well, but I have the same problem. Is there any way to make the effect of the first call persist?
That is how system works: it creates a subshell to execute your command, and when the command completes, the subshell exits, leaving your Perl process unaffected.
Section 8 of the Perl FAQ (perlfaq8) also answers this question.
I {changed directory, modified my environment} in a perl script. How come the change disappeared when I exited the script? How do I get my changes to be visible?
Unix
In the strictest sense, it can't be done—the script executes as a different process from the shell it was started from. Changes to a process are not reflected in its parent—only in any children created after the change. There is shell magic that may allow you to fake it by eval()ing the script's output in your shell; check out the comp.unix.questions FAQ for details.
You want code along the lines of
system("cd /home/project/env/NeededEnvironment && make cfile.o") == 0
    or warn "$0: make failed";
or use the -C option to make and avoid shell argument parsing, as in
system("make", "-C", "/home/project/env/NeededEnvironment", "cfile.o") == 0
    or warn "$0: make failed";
If you are writing a Perl script, use Perl itself and shell out as rarely as possible.
If you need to change your directory:
chdir 'some/other/dir';
If you need to set an environment variable:
$ENV{ SOME_VAR } = 'Some value';
Update
Here are some more commands where the shell equivalent should not be used:
mkdir
unlink
rmdir
Modules everyone should know about:
File::Copy
File::Path
File::Basename
File::Spec
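A few illustrative one-liners (the paths are made up) showing the built-ins and modules in place of their shell equivalents:

use File::Copy qw(copy);
use File::Path qw(make_path remove_tree);

mkdir 'build'               or die "mkdir: $!";    # instead of system("mkdir build")
copy 'a.txt', 'build/a.txt' or die "copy: $!";     # instead of system("cp a.txt build")
unlink 'old.log'            or warn "unlink: $!";  # instead of system("rm old.log")
make_path 'deep/sub/dir';                          # instead of system("mkdir -p deep/sub/dir")
remove_tree 'deep';                                # instead of system("rm -r deep")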
I have a Perl script that
queries a database for a list of files to process
processes the files
and then exits
Upon startup this script creates a file (let's say script.lock), and upon exit it removes this file. I have a crontab entry that runs this script every minute. If the lockfile exists then the script exits, assuming that another instance of itself is running.
The above process works fine but I am not very happy with the robustness of this approach. Specifically, if for some reason the script exits prematurely and the lockfile is not removed then a new instance will not execute properly.
I would appreciate some advice on the following:
Is using the lock file a good approach or is there a better/more robust way to do this?
Is using crontab for this a good idea or could I better write an endless loop with sleep()?
Should I use the GNU 'daemon' program or the Perl Proc::Daemon module (or some other equivalent) for this?
Let's assume you take the continuous loop route. You rejigger your program to be one infinite loop. You sleep for a certain amount of time, then wake up and process your database files, and then go back to sleep.
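A sketch of that loop, with the actual work stubbed out under an assumed name:

# Run forever: process, then sleep, then repeat.
sub process_database_files {
    # hypothetical: query the database and process the files, as the cron job did
}

while (1) {
    process_database_files();
    sleep 60;    # roughly the old once-a-minute cron cadence
}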
You now need a mechanism to make sure your program is still up and running. This could be done via something like inetd.
However, your program basically does a single task, and does that task repeatedly through the day. This is what crontab is for. The inetd mechanism is for servers that are waiting for a client, like https or sshd. In these cases, you need a mechanism to recreate the server process as soon as it dies.
One way you can improve your lockfile mechanism is to include the PID with it. For example, in your Perl script you do this:
open my $lock_file_fh, ">", LOCK_FILE_NAME or die "Can't open lock file: $!";
say {$lock_file_fh} $$;    # record this process's PID
close $lock_file_fh;
Now, if your crontab sees the lock file, it can test to see if that process ID is still running or not:
if [ -f $lock_file ]
then
    pid=$(cat $lock_file)
    if ! ps -p $pid
    then
        # Stale lock: the recorded process is dead, so clean up and restart.
        rm $lock_file
        restart_program
    fi
else
    restart_program
fi
Using a lock file is a fine approach if using cron, although I would recommend a database if you can install and use one easily (MySQL/Postgres/whatever. NOT SQLite). This is more portable than a file on a local filesystem, among other reasons, and can be re-used.
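As a sketch of that database variant, MySQL's named locks can stand in for the lock file; the connection details here are placeholders, and the lock is released automatically if the connection dies:

use DBI;

my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'password', { RaiseError => 1 });

# GET_LOCK returns 1 if we got the lock, 0 if another session holds it.
my ($got) = $dbh->selectrow_array("SELECT GET_LOCK('my_script_lock', 0)");
exit 0 unless $got;    # another instance is running

# ... do the work ...

$dbh->selectrow_array("SELECT RELEASE_LOCK('my_script_lock')");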
You are indeed correct. cron is not the best idea for this scenario just for the reason you described - if the process dies prematurely, it's hard to recover (you can, by checking timestamps, but not very easily).
What you should use cron for is a "start_if_daemon_died" job instead.
This is well covered on StackOverflow already, e.g. in "How can I run a Perl script as a system daemon in linux?" and other posts.
This is not meant as a new answer but simply a worked out example in Perl of David W.'s accepted answer.
my $LOCKFILE = '/tmp/precache_advs.lock';

create_lockfile();
do_something_interesting();
remove_lockfile();

sub create_lockfile {
    check_lockfile();
    open my $fh, ">", $LOCKFILE or die "Unable to open $LOCKFILE: $!";
    print $fh "$$";    # record our PID
    close $fh;
    return;
}

sub check_lockfile {
    if ( -e $LOCKFILE ) {
        my $pid = `cat $LOCKFILE`;
        if ( system("ps -p $pid") == 0 ) {
            # Script is still running, so don't start a new instance.
            exit 0;
        }
        else {
            # Stale lock file left behind by a dead process; clean it up.
            remove_lockfile();
        }
    }
    return;
}

sub remove_lockfile {
    unlink $LOCKFILE or die "Unable to remove $LOCKFILE: $!";
    return;
}
I have a folder full of script files. When I run one, the program they belong to opens, does some stuff, and generates a CSV file. I wrote some code to run each script file and produce a bunch of CSV files, one for each script.
What happens is the following: when my Perl application is executed, the software is launched and the first of the scripts runs successfully (a CSV file is created). However, at this point the Perl application waits for me to close the software before it continues, and it does this for every script. What can I do to prevent this?
use strict;
use warnings;
use Cwd;

my $dir = cwd();
opendir(DIR, $dir) or die "Can't open $dir: $!";
my @files = grep { /\.acs$/ } readdir(DIR);
closedir(DIR);

$dir =~ s/\//\\/g;    # switch to Windows-style path separators
chdir $dir;

foreach (@files) {
    print "$_\n";
    system("$_");
}
I think you'll want to fork and exec and waitpid - i.e. set up and run your own process and wait for it to finish on your own.
http://larc.ee.nthu.edu.tw/~cthuang/courses/ee2320/12_process.html
This isn't easy, but unfortunately, you're doing this on an OS that isn't script-friendly. Doing this under Linux or OS X you wouldn't have any troubles like this.
You'll need to figure out if these commands are even available under Windows. You may have to find some similar things that are available under windows if there is no posix compatibility library.
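A rough Unix-style sketch of that approach follows; the program and script names are made up, and note that on Windows Perl emulates fork with threads, so behavior there may differ:

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: replace ourselves with the external program.
    exec 'the_program', 'script.acs' or die "exec failed: $!";
}

# Parent: block until the child exits.
waitpid($pid, 0);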
Your best choice is to ask the application nicely to close itself after it is done.
For example, the cmd.exe command has a /C parameter that does exactly that.
Try running the application with /? and see if anything useful comes up.
Failing that, you can use Win32::Process to create the process and then kill it after you are sure it is done; see that module's documentation.
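For instance, a hedged Win32::Process sketch (the application path and command line are placeholders) might be:

use Win32;
use Win32::Process;

# Launch the application without a shell, then wait for it to finish.
Win32::Process::Create(
    my $proc,
    'C:\\path\\to\\app.exe',    # placeholder: full path to the application
    'app.exe script.acs',       # placeholder: command line it should see
    0,                          # don't inherit handles
    NORMAL_PRIORITY_CLASS,
    '.',                        # working directory
) or die Win32::FormatMessage(Win32::GetLastError());

$proc->Wait(INFINITE);          # or Wait($ms) and then $proc->Kill(0) if it hangs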
I have a Perl script that runs a different utility (called Radmind, for those interested) that has the capability to edit the filesystem. The Perl script monitors output from this process, so it would be running throughout this whole situation.
What would happen if the utility being run by the script tried to edit the script file itself, that is, replace it with a newer version? Does Perl load the script and any linked libraries at the start of its execution and then ignore the script file itself unless told specifically to mess with it? Or perhaps, would all hell break loose, and executions might or might not fail depending on how the new file differed from the one being run?
Or maybe something else entirely? Apologies if this belongs on SuperUser—seems like a gray area to me.
It's not quite as simple as pavel's answer states, because Perl doesn't actually have a clean division of "first you compile the source, then you run the compiled code"[1]. But the basic point stands: each source file is read from disk in its entirety before any code in that file is compiled or executed, and any subsequent changes to the source file will have no effect on the running program unless you specifically instruct perl to re-load the file and execute the new version's code[2].
[1] BEGIN blocks will run code during compilation, while commands such as eval and require will compile additional code at run time.
[2] Most likely by using eval or do, since require and use check whether the file has been loaded already and ignore it if it has.
For a fun demonstration, consider
#! /usr/bin/perl
die "$0: where am I?\n" unless -e $0;
unlink $0 or die "$0: unlink $0: $!\n";
print "$0: deleted!\n";
for (1 .. 5) {
    sleep 1;
    print "$0: still running!\n";
}
Sample run:
$ ./prog.pl
./prog.pl: deleted!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
Your Perl script will be compiled first, then run; so changing your script while it runs won't change the running compiled code.
Consider this example:
#!/usr/bin/perl
use strict;
use warnings;
push @ARGV, $0;
$^I = '';
my $foo = 42;
my $bar = 56;
my %switch = (
    foo => 'bar',
    bar => 'foo',
);
while (<ARGV>) {
    s/my \$(foo|bar)/my \$$switch{$1}/;
    print;
}
print "\$foo: $foo, \$bar: $bar\n";
and watch the result when run multiple times.
The script file is read once into memory. You can edit the file from another utility after that -- or from the Perl script itself -- if you wish.
As the others said, the script is read into memory, compiled, and run. GBacon shows that you can delete the file and it will keep running. The code below shows that you can change the file, run it with do, and get the new behavior.
use strict;
use warnings;
use English qw<$PROGRAM_NAME>;
open my $ph, '>', $PROGRAM_NAME;
print $ph q[print "!!!!!!\n";];
close $ph;
do $PROGRAM_NAME;
... DON'T DO THIS!!!
Perl scripts are simple text files that are read into memory, compiled in memory, and the text file script is not read again. (Exceptions are modules that come into lexical scope after compilation and do and eval statements in some cases...)
There is a well-known utility that exploits this behavior. Look at CPAN and its many versions, which are probably in your /usr/bin directory. There is a CPAN version for each version of Perl on your system. CPAN will sense when a new version of CPAN itself is available, ask if you want to install it, and if you say "y" it will download the newer version and respawn itself right where you left off without losing any data.
The logic of this is not hard to follow. Read /usr/bin/CPAN and then follow the individualized versions related to what $Config::Config{version} would generate on your system.
Cheers.