Unable to get Perl's flock working

I can't seem to make Perl's flock work. I'm locking a file and checking the return value to make sure it's actually locked, yet I'm still able to open and write to the file from another process as if nothing were the matter.
Here is how I lock the file:
#!/usr/bin/perl -w
use strict;
use Fcntl ':flock';
$| = 1;
my $f = $ARGV[0];
open( my $fh, '>>', $f ) or die "Could not open '$f' - $!";
print "locking '$f'...";
flock($fh, LOCK_EX) or die "Could not lock '$f' - $!";
print "locked\n";
sleep 10;
print "waking up and unlocking\n";
close( $fh );
While that script is sleeping, I can fiddle with the same text file from a different process:
#!/usr/bin/perl -w
use strict;
my $f = $ARGV[0];
open( my $fh, '>>', $f ) or die "Could not open '$f' - $!";
print $fh "This line was appended to a locked file!\n";
close( $fh );
Why am I then able to open the file and write to it without being told that it's locked?

flock() takes an advisory lock, so you have to have all of your processes using flock() for it to have any effect.
Also realize that, the way you are calling flock(), it will block until it can get the lock. If you want it to fail instead, you have to use the LOCK_NB flag as well:
use Fcntl qw( :flock );

open( my $lf, '>>', 'fileIWantToLockOn' ) or die "Could not open: $!";

# LOCK_NB makes flock return immediately instead of blocking.
my $gotLock = flock( $lf, LOCK_EX | LOCK_NB );
unless ($gotLock) {
    print "Couldn't get lock. Exiting.\n";
    exit 0;
}
EDIT: Also note that flock() won't work over NFS.

I don't think flock does what you think it does. Locking a file doesn't prevent anybody from doing anything to the file except trying to obtain a lock on the same file.
From man 2 flock on my system:
flock(2) places advisory locks only; given suitable permissions on a file, a process is free to ignore the use of flock(2) and perform I/O on the file.

flock works with advisory locks. Among other things, this means that only other processes that try to flock the same file will realize it's locked.
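To see the advisory locking in action, the appending script from the question only needs to take the lock itself; a minimal sketch:
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl ':flock';

my $f = $ARGV[0];
open( my $fh, '>>', $f ) or die "Could not open '$f' - $!";

# This call now blocks until the first script releases its lock.
flock( $fh, LOCK_EX ) or die "Could not lock '$f' - $!";

print $fh "This line waited for the lock before being appended\n";
close( $fh );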

Related

In a Perl script, I can open, write to, and close a file, but I get "bad file descriptor" when I try to flock it

I can OPEN the file using the file handle, but when I try to FLOCK using the same file handle I get "bad file descriptor."
my $file='/Library/WebServer/Documents/myFile.txt';
open(my $fh, '>', $file) or die "Could not open '$file' - $!";
# I DO NOT GET AN ERROR FROM OPENING THE FILE
flock($fh, LOCK_EX) or die "Could not lock '$file' - $!";
# HERE IS WHERE I GET THE "BAD FILE DESCRIPTOR" ERROR
# IF I COMMENT THIS LINE OUT, THE PRINT AND CLOSE COMMANDS BELOW EXECUTE NORMALLY
print $fh "hello world";
close($fh) or die "Could not close '$file' - $!";
It's the same file handle, so why do OPEN and PRINT work, but not FLOCK? I have tried setting the permissions for the file to 646, 666, and 777, but I always get the same results.
Thanks!
Did you import the constant LOCK_EX per the flock documentation?
use Fcntl ':flock';
If not, LOCK_EX doesn't mean anything, and the flock call will fail. Using strict and/or warnings would have identified the problem with this call at compile time.
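With the import in place, the original snippet works as intended; a minimal sketch:
use strict;
use warnings;
use Fcntl ':flock';   # imports LOCK_EX (and LOCK_SH, LOCK_UN, LOCK_NB)

my $file = '/Library/WebServer/Documents/myFile.txt';
open( my $fh, '>', $file ) or die "Could not open '$file' - $!";
flock( $fh, LOCK_EX )      or die "Could not lock '$file' - $!";
print $fh "hello world";
close($fh) or die "Could not close '$file' - $!";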

What is the easiest way to test error handling when writing to a file in Perl?

I have bog-standard Perl file-writing code with (hopefully) adequate error handling, of this type:
open(my $fh, ">", $filename) or die "Could not open file $filename for writing: $!\n";
# Some code to get data to write
print $fh $data or die "Could not write to file $filename: $!\n";
close $fh or die "Could not close file $filename after writing: $!\n";
# No I can't use File::Slurp, sorry.
(I just wrote this code from memory, pardon any typos or bugs)
It is somewhat easy to test the error handling in the first "die" line (for example, create a non-writable file with the same name you plan to write to).
How can I test the error handling in the second (print) and third (close) "die" lines?
The only way I know of to induce an error on close is to run out of space on the filesystem while writing, which is NOT easy to do as a test.
I would prefer integration test type solutions rather than unit test type (which would involve mocking IO methods in Perl).
Working with a bad filehandle will make them both fail:
use warnings;
use strict;
use feature 'say';
my $file = shift || die "Usage: $0 out-filename\n";
open my $fh, '>', $file or die "Can't open $file: $!";
# Replace the handle with a glob for the unopened filehandle "10"
$fh = \*10;
say $fh 'writes ok, ', scalar(localtime) or warn "Can't write: $!";
close $fh or warn "Error closing: $!";
Prints
say() on unopened filehandle 10 at ...
Can't write: Bad file descriptor at ...
close() on unopened filehandle 10 at ...
Error closing: Bad file descriptor at ...
If you don't want to see perl's own warnings, capture them with $SIG{__WARN__} and print your messages to a file (or to STDOUT), for example.
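A minimal sketch of such a handler (the log file name is a placeholder):
local $SIG{__WARN__} = sub {
    my ($msg) = @_;
    open( my $log, '>>', 'warnings.log' ) or return;
    print $log $msg;   # perl's warning text, e.g. "say() on unopened filehandle"
};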
Riffing on zdim's answer, two more ways to induce the failures (a sketch of both follows):
Write to a file handle that was opened only for reading.
Close a file handle that has already been closed.
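A minimal sketch of both (the file names are placeholders):
use strict;
use warnings;

# A print failure: the handle is open for reading only.
open( my $ro, '<', $0 ) or die "Can't open $0: $!";
print {$ro} 'this fails' or warn "Can't write: $!";

# A close failure: close the same handle twice.
open( my $fh, '>', 'tmp.out' ) or die "Can't open tmp.out: $!";
close $fh or warn "First close failed: $!";
close $fh or warn "Second close failed: $!";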

Synchronize processes by locking a file

One of my scripts installs a component. When run in parallel, the same script tries to install the same component, so I thought about synchronizing the processes by locking a file while one script is installing and waiting while another script is installing something.
The code would look like this:
# this will create a file handle on a file from the TEMP dir with the
# name of the component; if the file doesn't exist in TEMP, it will be created
my $file = $ENV{"TEMP"}. "\\" . $componentName;
open (my $fh, ">", "$file") or die "Couldn't open file!";
# this will apply an exclusive lock meaning that if another process
# already locked the file, it will wait until the lock is removed
flock($fh, 2) or die "Failed to lock the file";
# install the component..
# closing the file handle automatically removes the lock
close $fh;
I am concerned about the situation where one script locks the file and starts the installation, and then a second script comes along and tries to create a file handle on the locked file. I didn't see any errors, but I don't want to miss something.
Will there be a problem with this?
The thing that's important to remember is that the open will work in either case, because open doesn't test the lock. It's the flock operation that will block until the lock is released.
And this should work just fine, although once the lock is released you might want to check whether you still need to run the install at all, unless you don't really care about doing it twice (e.g. if the rest of the script relies on it having run).
Also - are there other sources of 'installing' that aren't your script, that could cause the same problem? A lock is an advisory thing.
It would be a style improvement in your program to also do the following (a sketch combining these follows the list):
Test $ENV{'TEMP'} to see that it exists, and default (or fail) if it doesn't.
use Fcntl qw( :flock ); so that you can write flock( $fh, LOCK_EX ); and make it clear you're taking an exclusive lock.
You appear to be using \\ as a file separator. It would probably be better to use something like File::Spec for that, for portability reasons.
You can use LOCK_NB for a non-blocking lock: flock( $fh, LOCK_EX | LOCK_NB ) and then just skip the install if the file is locked.
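Putting those suggestions together, a rough sketch (the component name is a placeholder):
use strict;
use warnings;
use Fcntl qw( :flock );
use File::Spec;

my $componentName = 'my-component';   # placeholder
my $tempDir = $ENV{TEMP} or die "TEMP is not set in the environment";
my $file = File::Spec->catfile( $tempDir, $componentName );

open( my $fh, '>', $file ) or die "Couldn't open '$file': $!";

# LOCK_NB makes flock return immediately instead of waiting.
if ( flock( $fh, LOCK_EX | LOCK_NB ) ) {
    # ... install the component ...
}
else {
    print "Another process holds the lock; skipping the install.\n";
}
close $fh;   # closing the handle releases the lock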
A lock doesn't prevent the file from being opened or modified; it prevents it from being locked.
This means the open won't fail, and it will clobber the file even if it's locked and still being used. If the lock is meant to protect access to the file (i.e. if the programs actually write to the locked file), you want to use sysopen to avoid clobbering the file if it already exists.
use Fcntl qw( LOCK_EX O_CREAT O_WRONLY );

# Open the file without clobbering it, creating it if necessary.
sysopen(my $fh, $qfn, O_WRONLY | O_CREAT)
    or die($!);

# Wait for the file to become available.
flock($fh, LOCK_EX)
    or die($!);

truncate($fh, 0)
    or die($!);

...
or
use Fcntl qw( LOCK_EX LOCK_NB O_CREAT O_WRONLY );

# Open the file without clobbering it, creating it if necessary.
sysopen(my $fh, $qfn, O_WRONLY | O_CREAT)
    or die($!);

# Check if the file is locked.
flock($fh, LOCK_EX | LOCK_NB)
    or die($!{EWOULDBLOCK} ? "File already in use\n" : $!);

truncate($fh, 0)
    or die($!);

...
You could also use open(my $fh, '>>', $qfn) if you don't mind having the file in append mode.
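A sketch of that variant (same placeholder $qfn as above; append-mode opens never clobber, but the file still has to be emptied before writing):
use Fcntl qw( LOCK_EX );

# '>>' opens without clobbering, creating the file if necessary.
open(my $fh, '>>', $qfn)
    or die($!);

flock($fh, LOCK_EX)
    or die($!);

truncate($fh, 0)
    or die($!);

...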

How can I create a TCP server daemon process in Perl?

I wish to create a TCP server daemon process in Perl.
Which is the best framework/module for it?
Is there anything that comes bundled with Perl?
Edit: Something that has start | stop | restart options would be great.
Edit: It has to be a multithreaded server.
Well - it would be better if you could state what this daemon is supposed to do, as there are specialized frameworks/libraries for various tasks.
For the simplest daemon, one that does nothing but exist, you can easily do this:
#!/usr/bin/perl
use strict;
use warnings;
use Carp;
use POSIX qw( setsid );
daemonize();
do_your_daemon_stuff();
exit;
sub daemonize {
    chdir '/'                 or croak "Can't chdir to /: $!";
    open STDIN, '/dev/null'   or croak "Can't read /dev/null: $!";
    open STDOUT, '>/dev/null' or croak "Can't write to /dev/null: $!";
    defined( my $pid = fork ) or croak "Can't fork: $!";
    exit if $pid;
    setsid                    or croak "Can't start a new session: $!";
    open STDERR, '>&STDOUT'   or croak "Can't dup stdout: $!";
}
The daemonize() sub was lifted from perldoc perlipc (with a minor change).
That's all - the code now properly daemonizes, and can do anything you want.
I just read your edit saying that you want a TCP server. OK. Here is some simplistic code:
#!/usr/bin/perl
use strict;
use warnings;
use Carp;
use POSIX qw( setsid );
use IO::Socket;
my $server_socket = get_server_socket();
daemonize();
handle_connections( $server_socket );
exit;

sub daemonize {
    chdir '/'                 or croak "Can't chdir to /: $!";
    open STDIN, '/dev/null'   or croak "Can't read /dev/null: $!";
    open STDOUT, '>/dev/null' or croak "Can't write to /dev/null: $!";
    defined( my $pid = fork ) or croak "Can't fork: $!";
    exit if $pid;
    setsid                    or croak "Can't start a new session: $!";
    open STDERR, '>&STDOUT'   or croak "Can't dup stdout: $!";
}

# Returns the listening socket (not a port number).
sub get_server_socket {
    my $server = IO::Socket::INET->new(
        'Proto'     => 'tcp',
        'LocalPort' => 31236,
        'Listen'    => SOMAXCONN,
        'Reuse'     => 1,
    ) or die "Can't set up server: $@";
    return $server;
}

sub handle_connections {
    my $socket  = shift;
    my $handled = 0;
    while ( my $client = $socket->accept() ) {
        $handled++;
        print $client "Hi, you're client #$handled\n";
        chomp( my $input = <$client> );
        my $output = reverse $input;
        print $client $output, "\n";
        print $client "Bye, bye.\n";
        close $client;
    }
    return;
}
Just remember that this is a blocking TCP server, so it will only be able to handle one connection at a time. If you want more than one, it becomes more complex, and you have to ask yourself whether you prefer a multithreaded (or multi-process) server or a single-process, event-based one.
You do not actually want to write multithreaded Perl. Perl threads are broken - they do not work properly (in my opinion).
Creating a new Perl thread clones the entire interpreter, including all the data currently in scope; it is therefore basically WORSE than creating a new process (which would, of course, use copy-on-write) and less useful.
So you definitely don't want it multithreaded.
If possible, I would consider looking at something like AnyEvent as an alternative to a pure threaded approach.
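For instance, the reversing server from the earlier example could look roughly like this with AnyEvent (a sketch, not tested; same port number as above):
use strict;
use warnings;
use AnyEvent;
use AnyEvent::Socket;
use AnyEvent::Handle;

tcp_server undef, 31236, sub {
    my ( $fh, $host, $port ) = @_;
    my $handle;
    $handle = AnyEvent::Handle->new(
        fh       => $fh,
        on_error => sub { $handle->destroy },
    );
    $handle->push_read( line => sub {
        my ( $h, $line ) = @_;
        $h->push_write( scalar( reverse $line ) . "\n" );
        $h->on_drain( sub { $h->destroy } );   # close after flushing
    } );
};

AnyEvent->condvar->recv;   # run the event loop forever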
I've never had occasion to try it myself, but I believe POE is highly regarded for that sort of thing.
Here are some examples of TCP servers written with POE.
A quick search reveals quite a few possibilities. Daemon::Generic seems to be straightforward to use.
In addition, there are many server modules for various protocols. In fact, HTTP::Daemon has been a core module for a while now.
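For the HTTP case, a minimal sketch with HTTP::Daemon (the port number is a placeholder):
use strict;
use warnings;
use HTTP::Daemon;
use HTTP::Response;

my $d = HTTP::Daemon->new( LocalPort => 8080 ) or die "Can't listen: $!";
print "Serving at ", $d->url, "\n";

while ( my $c = $d->accept ) {
    while ( my $r = $c->get_request ) {
        # Answer every request with a plain-text hello.
        $c->send_response(
            HTTP::Response->new( 200, 'OK', [ 'Content-Type' => 'text/plain' ], "hello\n" )
        );
    }
    $c->close;
}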
"Something that has start | stop | restart options would be great"
There are modules on CPAN which will provide this. E.g., I can see Daemon::Generic which claims to be a framework to provide start/stop/reload for a daemon.

How can I redirect the output from one filehandle into another?

I want to set up a pipeline of processes from within Perl (running on Linux), consisting of two parts run at separate times.
Eg:
Start the consumer process:
open( OUT, "| tar xvf - " ) || die "Failed: tar: $!";
then much later start the producer process:
open( IN, "gpg -d $file |" ) || die "Failed: gpg: $!";
but then somehow redirect the output from gpg into the input to tar.
I can do this by building a loop:
while (<IN>) {
    print OUT;
}
But I would like to know if I can somehow glue the two processes together with redirection.
Add
pipe( IN, OUT );
before the two open statements. That's it!
If you want to do anything more complicated, I would recommend the IPC::Run CPAN module:
http://search.cpan.org/dist/IPC-Run/
It lets you start processes, tie their inputs and outputs together, and add logging or redirection at any point in the chain.
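For the gpg-to-tar pipeline from the question, that would look roughly like this (a sketch, with $file as in the question):
use IPC::Run qw( run );

run [ 'gpg', '-d', $file ], '|', [ 'tar', 'xvf', '-' ]
    or die "pipeline failed: $?";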
If the two processes are completely unrelated, use a FIFO.
use POSIX qw(mkfifo);
my $path = '/tmp/my.fifo';   # any path both processes agree on
mkfifo($path, 0700) or die "mkfifo $path failed: $!";
This creates a FIFO at $path. Now have one process write to that file, and the other process read from it.
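A minimal sketch of the two ends (opening a FIFO blocks until the other end is opened, so these must run in separate processes):
# Producer process:
open( my $out, '>', $path ) or die "Can't open FIFO for writing: $!";
print $out "data flows through the kernel's pipe buffer\n";
close $out;

# Consumer process:
open( my $in, '<', $path ) or die "Can't open FIFO for reading: $!";
print while <$in>;
close $in;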
I like Proc::SafeExec. It lets you tie together processes and file handles in almost arbitrary ways easily. Here's an example:
use strict;
use warnings;
use Proc::SafeExec;
open(my $ls, "-|", "ls", "-l") or die "Err: $!";
open(my $fh, ">", "tmp.txt") or die "Err: $!";
my $p = Proc::SafeExec->new({
    exec   => [qw(sed -e s/a/b/)],
    stdin  => $ls,
    stdout => $fh,
});
$p->wait();
After looking at IPC::Run, though, it looks a lot simpler... here's the same example using IPC::Run instead:
use IPC::Run qw(run);
run [qw(ls -l)], "|", [qw(sed -e s/a/b/)], ">", "tmp.txt";