Flock in Perl doesn't work

I have a Perl script. The user opens a file, reads the data, and displays it in a grid; the user then edits the data and saves it back to the file.
I am trying to use flock so that when one user reads the file, the file gets locked. I tried the code below, but it didn't work.
I am referring to the accepted answer of this post: How do I lock a file in Perl?
use Fcntl ':flock'; # added this at the start
my $filename = 'dsfs.com/folder1/test.txt'; # location of my file
open(my $fh, '<', $filename) or die $!; # file open
flock($fh, LOCK_EX) or die "Could not lock '$filename' - $!"; # inserted flock before reading starts so that no other user can use this file
# reading of file starts here
# once read, user saves file.
close($fh) or die "Could not close '$filename' - $!"; # release lock after user writes
I guess this is a normal operation without any race conditions, but it does not work for me. I am not sure whether the Perl script is able to detect flock or not.
For testing purposes, I try to open the file before the writing-and-saving function has completed. If I can open the same file before saving has completed, the lock has not been released yet. In this situation, if I open the file at the back end and edit it, I am still able to save changes. In practice, it should not be possible to edit anything once the file is locked.
Can anyone suggest any troubleshooting for this, or is my procedure for using flock incorrect?

There's another problem if your flock implementation is based on lockf(3) or fcntl(2), which it probably is. Namely, LOCK_EX should be used with "write intent", on a file opened for output.
For lockf(3), perldoc -f flock says
Note that the emulation built with lockf(3) doesn't provide shared locks, and it requires that FILEHANDLE be open with write intent.
and for fcntl(2):
Note that the fcntl(2) emulation of flock(3) requires that FILEHANDLE be open with read intent to use LOCK_SH and requires that it be open with write intent to use LOCK_EX.
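So for the original poster's read-then-save workflow, one option is to open the file read-write, which satisfies the write-intent requirement under either emulation. A minimal sketch (the filename is illustrative):
use Fcntl ':flock';

# '+<' opens read-write without truncating, so LOCK_EX is allowed
# even under the lockf(3)/fcntl(2) emulations.
open my $fh, '+<', 'test.txt' or die "Could not open test.txt: $!";
flock($fh, LOCK_EX) or die "Could not lock test.txt: $!";
# ... read, edit, seek back, rewrite ...
close($fh) or die "Could not close test.txt: $!";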
A workaround for input files or for more complicated synchronized operations is for all processes to sync on a trivial lock file, like:
open my $lock, '>>', "$filename.lock" or die "Can't open $filename.lock: $!";
flock $lock, LOCK_EX or die "Can't lock $filename.lock: $!";
# can't get here until our process has the lock ...

open(my $fh, '<', $filename) or die $!; # file open
# ... read file, manipulate ...
close $fh;

open my $fh2, '>', $filename or die "Can't rewrite $filename: $!";
# ... rewrite file ...
close $fh2;

# done with this operation, can release the lock and let another
# process access the file
close $lock;

There are two problems:
flock will block until it can acquire the lock. If you don't want to block, you need flock($fh, LOCK_EX | LOCK_NB) or die $!; (see the sketch after this list).
flock (on Unix) is advisory. It won't stop other processes from accessing the file unless they also check for the lock.
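A minimal sketch of the non-blocking variant (the fallback behavior is up to you):
use Fcntl ':flock';

open my $fh, '>>', 'shared.dat' or die "Could not open shared.dat: $!";
# LOCK_NB makes flock return false immediately instead of blocking.
flock($fh, LOCK_EX | LOCK_NB)
    or die "shared.dat is locked by another process: $!";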

Related

Open a file and overwrite the file with adjustments and no backup

I have the following three lines:
rename($file_path, $file_path.'.bak');
open( my $file_IN_fh, '<' , $file_path.'.bak') || die "die message";
open( my $file_OUT_fh, '>' , $file_path) || die "die message";
It works great. It allows me to go through the input file with while (<$file_IN_fh>), make a bunch of changes with the script (s///g, if() to determine whether a line stays, etc.), and write to the out file. In the end I get my edited file, and the file name is unchanged.
My issue is that I am at a point where I no longer (currently) want the backup files, so I want to replace the code with something similar that won't create the backup file, and comment back and forth the three lines over the years if my needs change.
How do I do this kind of editing in place not from the command line?
One basic way is to read the file line by line and write the desired output lines to a temporary file, which is then renamed to overwrite the original.
use File::Copy qw(move);

open my $fh, '<', $file or die "Can't open $file: $!";
open my $fh_out, '>', $outfile or die "Can't open $outfile: $!";

while (<$fh>) {
    next if /line_to_skip/;
    s/patt/repl/g;
    print $fh_out $_;
}

close $_ for ($fh, $fh_out);

move($outfile, $file) or die "Can't move $outfile to $file: $!";
This is what is normally done by tools that edit files "in place" (with additional safety, checks, and flexibility). Since $outfile is temporary, use File::Temp, as sketched below.
Add checks when close-ing files.
Note that this changes the file's inode number, which may matter for some applications.†
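A sketch of the same loop using File::Temp; the template name is illustrative, and the temporary file is created in the target's directory so the final move() is an atomic rename:
use File::Basename qw(dirname);
use File::Copy qw(move);
use File::Temp qw(tempfile);

# Create the temporary file next to the original (same filesystem).
my ($fh_out, $outfile) = tempfile('editXXXX', DIR => dirname($file));

open my $fh, '<', $file or die "Can't open $file: $!";
while (<$fh>) {
    next if /line_to_skip/;
    s/patt/repl/g;
    print $fh_out $_;
}
close $fh     or die "Can't close $file: $!";
close $fh_out or die "Can't close $outfile: $!";

move($outfile, $file) or die "Can't move $outfile to $file: $!";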
If the file isn't huge, you can simplify this and read it in first:
open my $fh, '<', $file or die "Can't open $file: $!";
my @lines = <$fh>;

open $fh, '>', $file or die "Can't open $file for writing: $!";
for (@lines) {
    next if /line_to_skip/;
    s/patt/repl/g;
    print $fh $_;
}
close $fh;
which preserves the inode number, since '>' mode truncates the existing inode's data rather than creating a new file.
† If this is indeed a problem, you can still keep the same inode. After the temporary file is written, open it for reading and open the original file for writing; that truncates the contents of that inode. Then copy the temporary file to the original. Close handles and delete the temporary file.
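A sketch of that inode-preserving variant, picking up after the temporary file has been written:
use File::Copy qw(copy);

# copy() truncates and rewrites the original in place, so the
# original file keeps its inode number.
copy($outfile, $file) or die "Can't copy $outfile to $file: $!";
unlink $outfile or warn "Can't remove $outfile: $!";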
If the file is huge, then I'd question why you'd want to avoid the temporary file. Otherwise, I'd suggest just loading the file into memory, making modifications, then writing it back out.
use File::Slurp qw( read_file write_file );

my $in = read_file($qfn, array_ref => 1);

my @out;
while (defined( $_ = shift(@$in) )) {
    s/a/b/g;              # For example.
    push @out, $_ if /c/; # For example.
}

write_file($qfn, \@out);
I avoided using expensive splice by using two arrays.
Note that using Tie::File might save one line of code, but this will be 30x faster[1], and probably use less memory (despite memory-saving being Tie::File's goal). Tie::File is never the answer!!!
[1] This is not necessarily representative of all Tie::File uses, but I have indeed timed Tie::File taking 30x longer than the alternative at some basic task. That means that 2 seconds worth of work would have taken 1 minute with Tie::File!
Take a look at the Tie::File module. It is a core module and so shouldn't need installing, and the code is as simple as
use Tie::File;
tie my @file, 'Tie::File', $filepath or die $!;
Thereafter the array @file will hold the contents of the file, one line per element, and any changes to the array will be reflected in the file. All array operations such as push, splice, etc. will work fine.
Note that line one of the file is in element zero of the array, etc.
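For example, the same skip-and-substitute edit from above might look like this with Tie::File (patterns are illustrative):
use Tie::File;

tie my @file, 'Tie::File', $filepath or die $!;
s/patt/repl/g for @file;                # each write goes straight to the file
@file = grep { !/line_to_skip/ } @file; # drop unwanted lines
untie @file;                            # flush and release the file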

Creating a file name using variables in Perl

I am trying to write out to a file whose name is created from variables (a name + the user ID + the date and time + a file extension).
I have read various things on Stack Overflow on which I have based my code.
my $windowsfile = "winUserfile-$User_ID-$datetime.csv";
open winUserfile, ">>", $windowsfile) or die "$!";
print winUserfile "User_ID, Expression\n";
close winUserfile;
I would have assumed this would work, but I am getting a syntax error. Would anyone be able to help?
Your second line has a close-paren without a matching open-paren before it:
open winUserfile, ">>", $windowsfile) or die "$!";
You likely want to add the opening paren:
open(winUserfile, ">>", $windowsfile) or die "$!";
Or just not bother with parens entirely here, as they're optional in this case:
open winUserfile, ">>", $windowsfile or die "$!";
Also, it's bad style to use a bareword filehandle, as it becomes a global. Better to use a lexical one:
open my $winUserfile, ">>", $windowsfile or die "$!";
print $winUserfile "User_ID, Expression\n";
You don't then need to close it; the close will be automatic when the $winUserfile variable goes out of scope.
I like using the IO::All module for file I/O.
use IO::All;
my $windowsfile = "winUserfile-$User_ID-$datetime.csv";
io($windowsfile) << "User_ID, Expression\n"; # '<<' appends to the file
my $windowsfile = "winUserfile-$User_ID-$datetime.csv";
open (winUserfile, ">>$windowsfile") or die "$!";
print winUserfile "User_ID, Expression\n";
close winUserfile;

How do I give parallel write access to a file without collisions?

I have some child processes which should write some logs into a common file. I am wondering if this code works so that the processes will write into the common file without collisions:
use Fcntl ':flock'; # needed for the LOCK_EX constant

sub appendLogs {
    open FILE, "+>>", $DMP or die "$!";
    flock FILE, LOCK_EX or die "$!";
    print FILE "xyz\n";
    close FILE;
}
If not, could you give me any hints on how I could fix or improve it?
For logging purposes, I would use Log4perl instead of reinventing the wheel. It has support for what you are looking for:
How can I synchronize access to an appender?
Log4Perl bundling logging from several programs into one log
Yes. As long as every process that tries to write to the file uses flock, the writes will happen without collisions.
If you would like your code to be portable, you should seek to the end of the file after you lock the filehandle but before you write to it. See the "mailbox appender" example in perldoc -f flock, which is similar to what you are doing.
sub appendLogs {
    open FILE, "+>>", $DMP or die "$!";
    flock FILE, LOCK_EX or die "$!";
    seek FILE, 0, 2; # <--- after the lock, move cursor to end of file
    print FILE "xyz\n";
    close FILE;
}
The seek may be necessary because another process could append to the file (and move the position of the end of the file) after you open the filehandle but before you acquire the lock.
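The same routine with a lexical filehandle, imported constants, and checked calls might look like this (a sketch, not the poster's code):
use Fcntl qw(:flock :seek);

sub appendLogs {
    open my $fh, '>>', $DMP or die "Can't open $DMP: $!";
    flock($fh, LOCK_EX)     or die "Can't lock $DMP: $!";
    seek($fh, 0, SEEK_END)  or die "Can't seek in $DMP: $!";
    print $fh "xyz\n"       or die "Can't write to $DMP: $!";
    close($fh)              or die "Can't close $DMP: $!"; # flushes, then unlocks
}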

PERL CGI program

I was trying out an elementary Perl/CGI script to keep track of visitors coming to a web page. The Perl code looks like this:
#!/usr/bin/perl
#KEEPING COUNT OF VISITORS IN A FILE
use CGI ':standard';
print "content-type:text/html\n\n";
#opening file in read mode
open (FILE,"<count.dat");
$cnt= <FILE>;
close(FILE);
$cnt=$cnt+1;
#opening file to write
open(FILE,">count.dat");
print FILE $cnt;
close(FILE);
print "Visitor count: $cnt";
The problem is that the web page does not increment the count of visitors on each refresh. The count remains at the initial value of $cnt, i.e. 1. Any ideas where the problem lies?
You never test whether the attempt to open the filehandle works. Given a file that I had permission to read from and write to, containing a single number and nothing else, the code behaved as intended. If the file did not exist, the count would always be 1; if it was read-only, it would remain at whatever value the file started with.
More general advice:
use strict; and use warnings; (and correct the code based on their complaints)
Use the three-argument call to open, as per the first example in the documentation
When you open a file, always || handle_the_error_in($!);
Don't use a file to store data like this; you have potential race conditions.
Get the name of the language correct (it's Perl, not PERL)
Here's an alternate solution that uses only one open() and creates the file if it doesn't already exist. Locking eliminates a potential race condition among multiple updaters.
#!/usr/bin/env perl
use strict;
use warnings;
use Fcntl qw(:DEFAULT :flock);
my $file = 'mycount';
sysopen(my $fh, $file, O_RDWR|O_CREAT) or die "Can't open '$file' $!\n";
flock($fh, LOCK_EX) or die "Can't lock $file: $!\n";
my $cnt = <$fh>;
$cnt=0 unless $cnt;
$cnt++;
seek $fh, 0, 0;
print $fh $cnt;
close $fh or die "Can't close $file: $!\n";
print "Visitor count: $cnt\n";
A few potential reasons:
'count.dat' is not being opened for reading. Always test with or die $!; at minimum to check if the file opened or not
The code is not being executed and you think it is
The most obvious thing you might have forgotten is to change the permissions of the file count.dat.
Do this:
sudo chmod 777 count.dat
That should do the trick
You will need to close the webpage and reopen it again. Just refreshing the page won't increment the count.

How do I lock a file in Perl?

What is the best way to create a lock on a file in Perl?
Is it best to flock the file itself, or to create a lock file to place a lock on, and check for a lock on that lock file?
If you end up using flock, here's some code to do it:
use Fcntl ':flock'; # Import LOCK_* constants
# We will use this file path in error messages and function calls.
# Don't type it out more than once in your code. Use a variable.
my $file = '/path/to/some/file';
# Open the file for appending. Note the file path is quoted
# in the error message. This helps debug situations where you
# have a stray space at the start or end of the path.
open(my $fh, '>>', $file) or die "Could not open '$file' - $!";
# Get exclusive lock (will block until it does)
flock($fh, LOCK_EX) or die "Could not lock '$file' - $!";
# Do something with the file here...
# Do NOT use flock() to unlock the file if you wrote to the
# file in the "do something" section above. This could create
# a race condition. The close() call below will unlock the
# file for you, but only after writing any buffered data.
# In a world of buffered i/o, some or all of your data may not
# be written until close() completes. Always, always, ALWAYS
# check the return value of close() if you wrote to the file!
close($fh) or die "Could not write '$file' - $!";
Some useful links:
PerlMonks file locking tutorial (somewhat old)
flock() documentation
In response to your added question, I'd say either place the lock on the file itself, or use a lock file: create a file called 'lock' whenever the file is locked, delete it when it is no longer locked, and make sure your programs obey those semantics.
The other answers cover Perl flock locking pretty well, but on many Unix/Linux systems there are actually two independent locking systems: BSD flock() and POSIX fcntl()-based locks.
Unless you provide special options to configure when building Perl, its flock will use flock() if available. This is generally fine and probably what you want if you just need locking within your application (running on a single system). However, sometimes you need to interact with another application that uses fcntl() locks (like Sendmail, on many systems) or perhaps you need to do file locking across NFS-mounted filesystems.
In those cases, you might want to look at File::FcntlLock or File::lockf. It is also possible to do fcntl()-based locking in pure Perl (with some hairy and non-portable bits of pack()).
Quick overview of flock/fcntl/lockf differences:
lockf is almost always implemented on top of fcntl and has file-level locking only. If it is implemented using fcntl, the limitations below also apply to lockf.
fcntl provides range-level locking (within a file) and network locking over NFS, but locks are not inherited by child processes after a fork(). On many systems, you must have the filehandle open read-only to request a shared lock, and read-write to request an exclusive lock.
flock has file-level locking only, and locking works only within a single machine (you can lock an NFS-mounted file, but only local processes will see the lock). Locks are inherited by children (assuming that the file descriptor is not closed).
Sometimes (SYSV systems) flock is emulated using lockf, or fcntl; on some BSD systems lockf is emulated using flock. Generally these sorts of emulation work poorly and you are well advised to avoid them.
CPAN to the rescue: IO::LockedFile.
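A sketch of its use, going from the module's documented IO::File-style interface (the filename is illustrative):
use IO::LockedFile;

# The constructor opens the file and locks it in one step.
my $file = IO::LockedFile->new('>locked.txt') or die $!;
print $file "that's mine!\n";
$file->close(); # unlocks the file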
Ryan P wrote:
In this case the file is actually unlocked for a short period of time while the file is reopened.
So don’t do that. Instead, open the file for read/write:
open my $fh, '+<', 'test.dat'
or die "Couldn’t open test.dat: $!\n";
When you are ready to write the counter, just seek back to the start of the file. Note that if you do that, you should truncate just before close, so that the file isn’t left with trailing garbage if its new contents are shorter than its previous ones. (Usually, the current position in the file is at its end, so you can just write truncate $fh, tell $fh.)
Also, note that I used three-argument open and a lexical file handle, and I also checked the success of the operation. Please avoid global file handles (global variables are bad, mmkay?) and magic two-argument open (which has been a source of many a(n exploitable) bug in Perl code), and always test whether your opens succeed.
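Putting that together, a counter update along these lines might look like the following sketch:
use Fcntl ':flock';

open my $fh, '+<', 'test.dat' or die "Couldn't open test.dat: $!\n";
flock($fh, LOCK_EX)           or die "Couldn't lock test.dat: $!\n";

chomp(my $count = <$fh> // 0); # read the current value
$count++;

seek $fh, 0, 0 or die "Couldn't seek: $!\n";
print $fh "$count\n";
truncate $fh, tell $fh;        # drop any leftover old contents
close $fh or die "Couldn't close test.dat: $!\n";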
I think it would be much better to show this with lexical variables as filehandles, and with error handling.
It is also better to use the constants from the Fcntl module than to hard-code the magic number 2, which might not be the right number on all operating systems.
use Fcntl ':flock'; # import LOCK_* constants
# open the file for appending
open (my $fh, '>>', 'test.dat') or die $!;
# try to lock the file exclusively, will wait till you get the lock
flock($fh, LOCK_EX);
# do something with the file here (print to it in our case)
# actually you should not unlock the file
# close the file will unlock it
close($fh) or warn "Could not close file $!";
Check out the full documentation of flock and the File locking tutorial on PerlMonks, even though the latter also uses the old style of filehandles.
Actually, I usually skip the error handling on close() as there is not much I can do if it fails anyway.
Regarding what to lock: if you are working with a single file, then lock that file. If you need to lock several files at once then, in order to avoid deadlocks, it is better to pick one designated file to lock. It does not really matter whether that is one of the several files you actually need to lock or a separate file you create just for the locking purpose.
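A sketch of that: one designated lock file taken before touching any of the real files (names are illustrative):
use Fcntl ':flock';

# Every process takes this one lock first, so no two processes can
# each hold part of the set while waiting for the rest (no deadlock).
open my $guard, '>>', 'master.lock' or die "Can't open master.lock: $!";
flock($guard, LOCK_EX) or die "Can't lock master.lock: $!";
# ... open and modify the several data files here ...
close $guard; # releases the lock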
Have you considered using the LockFile::Simple module? It does most of the work for you already.
In my past experience, I have found it very easy to use and sturdy.
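A sketch based on the module's documented interface (the retry settings are illustrative):
use LockFile::Simple;

my $lockmgr = LockFile::Simple->make(
    -format => '%f.lck', # lock file name derived from the target file
    -max    => 5,        # retry up to 5 times...
    -delay  => 2,        # ...waiting 2 seconds between attempts
);

$lockmgr->lock('/some/file') or die "Can't lock /some/file";
# ... work with the file ...
$lockmgr->unlock('/some/file');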
My goal in this question was to lock a file being used as a data store for several scripts. In the end I used similar code to the following (from Chris):
open (FILE, '>>', 'test.dat'); # open the file
flock FILE, 2;                 # try to lock the file
# do something with the file here
close(FILE);                   # close the file
In his example I removed the flock FILE, 8, as the close(FILE) performs this action as well. The real problem was that when the script starts it has to hold the current counter, and when it ends it has to update the counter. This is where Perl has a problem: to read the file you do
open (FILE, '<', 'test.dat');
flock FILE, 2;
Now I want to write out the results, and since I want to overwrite the file I need to reopen and truncate, which results in the following:
open (FILE, '>', 'test.dat'); # a single arrow truncates, a double appends
flock FILE, 2;
In this case the file is actually unlocked for a short period of time while the file is reopened. This demonstrates the case for the external lock file. If you are going to be changing contexts of the file, use a lock file. The modified code:
open (LOCK_FILE, '<', 'test.dat.lock') or die "Could not obtain lock";
flock LOCK_FILE, 2;
open (FILE, '<', 'test.dat') or die "Could not open file";
# read file
# ...
open (FILE, '>', 'test.dat') or die "Could not reopen file";
# write file
close (FILE);
close (LOCK_FILE);
Developed off of http://metacpan.org/pod/File::FcntlLock
use Carp qw(confess);        # for confess(); assumed import
use IO::Handle;              # needed for $fh->autoflush()
use Log::Log4perl qw(:easy); # assumed source of LOGDIE and ALWAYS
use Fcntl qw(:DEFAULT :flock :seek :Fcompat);
use File::FcntlLock;

sub acquire_lock {
    my $fn = shift;
    my $justPrint = shift || 0;
    confess "Too many args" if defined shift;
    confess "Not enough args" if !defined $justPrint;

    my $fh;
    sysopen($fh, $fn, O_RDWR | O_CREAT) or LOGDIE "failed to open: $fn: $!";
    $fh->autoflush(1);
    ALWAYS "acquiring lock: $fn";
    my $fs = File::FcntlLock->new;
    $fs->l_type( F_WRLCK );
    $fs->l_whence( SEEK_SET );
    $fs->l_start( 0 );
    $fs->lock( $fh, F_SETLKW ) or LOGDIE "failed to get write lock: $fn: " . $fs->error;
    my $num = <$fh> || 0;
    return ($fh, $num);
}

sub release_lock {
    my $fn = shift;
    my $fh = shift;
    my $num = shift;
    my $justPrint = shift || 0;

    seek($fh, 0, SEEK_SET) or LOGDIE "seek failed: $fn: $!";
    print $fh "$num\n" or LOGDIE "write failed: $fn: $!";
    truncate($fh, tell($fh)) or LOGDIE "truncate failed: $fn: $!";
    my $fs = File::FcntlLock->new;
    $fs->l_type(F_UNLCK);
    ALWAYS "releasing lock: $fn";
    $fs->lock( $fh, F_SETLK ) or LOGDIE "unlock failed: $fn: " . $fs->error;
    close($fh) or LOGDIE "close failed: $fn: $!";
}
One alternative to the lock file approach is to use a lock socket. See Lock::Socket on CPAN for such an implementation. Usage is as simple as the following:
use Lock::Socket qw/lock_socket/;
my $lock = lock_socket(5197); # raises exception if lock already taken
There are a few advantages to using a socket:
guaranteed (through the operating system) that no two applications will hold the same lock: there is no race condition.
guaranteed (again through the operating system) to clean up neatly when your process exits, so there are no stale locks to deal with.
relies on functionality that is well supported by anything that Perl runs on: no issues with flock(2) support on Win32 for example.
The obvious disadvantage is of course that the lock namespace is global. It is possible for a kind of denial-of-service if another process decides to lock the port you need.
[disclosure: I am the author of the aforementioned module]
Use the flock, Luke.
Edit: This is a good explanation.
flock creates Unix-style file locks and is available on most OSes Perl runs on. However, flock's locks are advisory only.
edit: emphasized that flock is portable
Here's my solution to reading and writing in one lock...
use Fcntl ':flock'; # for LOCK_EX

open (TST, "+< readwrite_test.txt") or die "Cannot open file\n$!";
flock(TST, LOCK_EX);
# Read the file:
@LINES = <TST>;
# Wipe the file:
seek(TST, 0, 0); truncate(TST, 0);
# Do something with the contents here:
push @LINES, "grappig, he!\n";
$LINES[3] = "Gekke henkie!\n";
# Write the file:
foreach $l (@LINES)
{
    print TST $l;
}
close(TST) or die "Cannot close file\n$!";
Flock is probably the best, but it requires you to write all the supporting code around it: timeouts, stale locks, non-existent files, etc.
I tried LockFile::Simple, but found it set the default umask to read-only and did not clean that up, resulting in random permissions problems in a multi-process/multi-threaded application under mod_perl.
I've settled on wrapping NFSLock with some empty-file handling.
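Assuming that refers to File::NFSLock, usage per its documentation looks roughly like this (the timeouts are illustrative):
use Fcntl qw(LOCK_EX);
use File::NFSLock;

my $lock = File::NFSLock->new({
    file               => 'shared.dat',
    lock_type          => LOCK_EX,
    blocking_timeout   => 10,      # give up after 10 seconds
    stale_lock_timeout => 30 * 60, # steal locks older than 30 minutes
}) or die "Couldn't lock shared.dat: $File::NFSLock::errstr";

# ... work with shared.dat ...
$lock->unlock();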