Is it possible for Perl move to succeed while returning 0? - perl

Every function in the Perl File::Copy module is supposed to return 1 on success and 0 on failure.
In my case, I have noticed (using whatever logs I had) that move returns 0 even when the operation succeeds (the files are actually moved), with $! set to "No such file or directory".
Has anyone noticed such an issue before?

If move returns 0, trying to rename the file failed, and then either trying to copy it failed or trying to unlink the original file after copying it failed. I don't see other possibilities, at least in File::Copy version 2.33.
If you need better error reporting, you may want to just try the rename and, if needed, the copy and unlink yourself.
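For instance, here is a minimal sketch of doing those steps by hand so each failure is reported separately (my_move is a hypothetical helper, not part of File::Copy, and it only handles plain files):

use strict;
use warnings;
use File::Copy qw(copy);

# Hypothetical replacement for File::Copy::move, for plain files, that reports
# which individual step failed.
sub my_move {
    my ($from, $to) = @_;
    return 1 if rename $from, $to;    # fast path: rename works within one filesystem
    my $rename_err = $!;
    copy($from, $to)
        or die "copy $from -> $to failed: $! (rename failed first: $rename_err)";
    unlink $from
        or die "copied $from -> $to, but unlink of $from failed: $!";
    return 1;
}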
What version of File::Copy are you using? What version of perl? What operating system?

From the File::Copy documentation, on copy:
If an error occurs in setting permissions, cp will return 0, regardless of whether the file was successfully copied.
While this is for copy, move may also copy the file and then delete the original (if it can't rename it).
There are yet other possibilities that involve other processes interfering with the file.

Related

Unable to delete the file from Perl script

I have an old Perl script which was always working, but suddenly something is broken and it is no longer deleting the file.
-rw-r--r-- 1 nobody uworld 6 Dec 03 11:15 shot32.file
The command to delete the above file is inside a Perl script:
`rm $shotfile`;
I have checked that $shotfile is shot32.file and that it is in the right location.
So the file location and filename are not the problem.
Regarding permissions, the Perl script is running as the nobody user as well, so what other reasons could there be for this not to work?
Appreciate your help.
To delete a file, you need write permissions on the directory the file is in. The permissions on the file don't matter.
That said, that's some pretty awful code you've got there. You're shelling out (without escaping anything, hello shell injection!) just to run rm (which you could've run directly without going through the shell), and you're capturing its output for no reason (and you're ignoring whatever was captured anyway). Also, you're not checking for errors (which would be harder in this form as well).
This is all much more complicated than it has to be. Perl has a built-in function for deleting files:
unlink $shotfile or warn "$0: can't unlink $shotfile: $!\n";
This will delete the file or warn you about any problems (with $! containing the reason for the failure). Change warn to die if you want the program to abort instead.

gsutil cp: concurrent execution leads to local file corruption

I have a Perl script which calls 'gsutil cp' to copy a selected file from GCS to a local folder:
$cmd = "[bin-path]/gsutil cp -n gs://[gcs-file-path] [local-folder]";
$output = `$cmd 2>&1`;
The script is called via HTTP and hence can be initiated multiple times (e.g. by double-clicking on a link). When this happens, the local file can end up being exactly double the correct size, and hence obviously corrupt. Three things appear odd:
1) gsutil seems not to be locking the local file while it is writing to it, allowing another thread (in this case another instance of gsutil) to write to the same file.
2) The '-n' option seems to have no effect. I would have expected it to prevent the second instance of gsutil from attempting the copy action.
3) The MD5 signature check is failing: normally gsutil deletes the target file if there is a signature mismatch, but this is clearly not always happening.
The files in question are larger than 2MB (typically around 5MB) so there may be some interaction with the automated resume feature. The Perl script only calls gsutil if the local file does not already exist, but this doesn't catch a double-click (because of the time lag for the GCS transfer authentication).
gsutil version: 3.42 on FreeBSD 8.2
Anyone experiencing a similar problem? Anyone with any insights?
Edward Leigh
1) You're right, I don't see a lock in the source.
2) This can be caused by a race condition - Process 1 checks, sees the file is not there. Process 2 checks, sees the file is not there. Process 1 begins upload. Process 2 begins upload. The docs say this is a HEAD operation before the actual upload process -- that's not atomic with the actual upload.
3) No input on this.
You can fix the issue by having your script maintain an atomic lock of some sort on the file prior to initiating the transfer - i.e. your check would be something along the lines of:
use Lock::File qw(lockfile);

if (my $lock = lockfile("$localfile.lock", { blocking => 0 })) {
    # ... perform transfer ...
    undef $lock;
}
else {
    die "Unable to retrieve $localfile, file is locked";
}
1) gsutil doesn't currently do file locking.
2) -n does not protect against other instances of gsutil run concurrently with an overlapping destination.
3) Hash digests are calculated on the bytes as they are being downloaded as a performance optimization. This avoids a long-running computation once the download completes. If the hash validation succeeds, you're guaranteed that the bytes were written successfully at one point. But if something (even another instance of gsutil) modifies the contents in-place while the process is running, the digesters will not detect this.
Thanks to Oesor and Travis for answering all points between them. As an addendum to Oesor's suggested solution, I offer this alternative for systems lacking Lock::File:
use Fcntl ':flock';    # import LOCK_* constants

# if lock file exists ...
if (-e($lockFile))
{
    # abort if lock file still locked (or sleep and re-check)
    abort() if !unlink($lockFile);
    # otherwise delete local file and download again
    unlink($filePath);
}

# if file has not been downloaded already ...
if (!-e($filePath))
{
    $cmd = "[bin-path]/gsutil cp -n gs://[gcs-file-path] [local-dir]";
    abort() if !open(LOCKFILE, ">$lockFile");
    flock(LOCKFILE, LOCK_EX);
    my $output = `$cmd 2>&1`;
    flock(LOCKFILE, LOCK_UN);
    unlink($lockFile);
}

File::Copy reports incorrect failure on NFS write?

My Perl script is moving files onto an NFS mounted filesystem, using the move function from File::Copy. Recently, some of the files returned an error, causing my script to print the message "move returned 0, A file or directory in the path name does not exist." (move returns 1 on success, 0 on error, the error message is from $!)
The really weird thing is that the system that processes the files has reported back that it successfully processed the files that failed! I have never seen an error message from a successful write before, so I wonder if it has something to do with NFS. I thought it was strange that in a run where 28 files were moved, the first 24 failed and the last 4 succeeded. The script has been running with no errors for several months, and has now demonstrated this problem twice in 2 weeks.
The host is running on AIX, though I doubt that makes a difference.
I think it is an NFS issue, not a Perl one. NFS can be really weird in some cases.
You should stat/read the written file and not depend on the reported errors.
The File::Copy::Reliable module uses the same error handling, so it will fail with the same error.
From its source:
copy( $source, $destination )
    || croak("copy_reliable($source, $destination) failed: $!");
Simply put the copy into an eval block, then try to read/stat the file at the destination.
If you are really cautious, you could take an md5/sha1 hash of both files to be sure that they are the same.
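A rough sketch of that approach (the paths and the md5_of helper are hypothetical; Digest::MD5 ships with Perl):

use strict;
use warnings;
use File::Copy qw(move);
use Digest::MD5;

# Hypothetical helper: MD5 of a file's contents.
sub md5_of {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    binmode $fh;
    return Digest::MD5->new->addfile($fh)->hexdigest;
}

my ($src, $dst) = ('report.dat', '/nfs/outgoing/report.dat');    # placeholder paths
my $src_md5 = md5_of($src);    # hash the source before it disappears

eval { move($src, $dst) or die "move reported failure: $!\n" };
my $move_err = $@;
chomp $move_err;

# Trust the filesystem rather than the reported error: if the destination exists
# and its checksum matches the source's, treat the move as successful.
if (-e $dst && md5_of($dst) eq $src_md5) {
    warn "move complained ($move_err) but $dst is intact; continuing\n" if $move_err;
}
else {
    die $move_err ? "$move_err\n" : "move claimed success but $dst is missing or corrupt\n";
}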
Regards,

How to copy a read-only directory in Perl?

I'm using File::Copy::Recursive::dircopy( $original_dirname, $new_dirname ) or die $!; to copy a read-only directory from within a Perl script. I get a Permission denied error.
I can see that $new_dirname is created, but it is marked as read-only (like the original directory). Maybe this prevents the content from being copied into it?
Yes, this definitely seems to be a bug in File::Copy::Recursive. A temporary workaround is to set $File::Copy::Recursive::KeepMode to 0 and do the chmod yourself.
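A minimal sketch of that workaround (the directory names are placeholders standing in for the ones from the question):

use strict;
use warnings;
use File::Copy::Recursive qw(dircopy);

my ($original_dirname, $new_dirname) = ('template_dir', 'template_copy');    # placeholders

# Don't preserve modes during the copy, so the new tree stays writable while it is filled.
$File::Copy::Recursive::KeepMode = 0;
dircopy($original_dirname, $new_dirname) or die "dircopy failed: $!";

# Then re-apply the original directory's mode (e.g. read-only) by hand.
my $mode = (stat $original_dirname)[2] & 07777;
chmod $mode, $new_dirname or die "chmod $new_dirname: $!";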
It appears to have already been reported and the author is working on a fix, but it was coming "soon" on 2009-05-20 and "this weekend" on 2010-04-14.

How can I resume downloads in Perl?

I have a project that depends upon some other binaries being downloaded from the web at install time. For this, what I do is:
if ( file is already present in src/ ) {
    # skip that file
}
else {
    # use wget to download the file
}
The problem with this approach is that when I interrupt a download in the middle and invoke the script the next time, the partially downloaded file is also skipped (which is not desired); I also want wget to resume the download of the partially downloaded file.
How should I go about it?
Possible solutions I could think of:
Download to a temporary file, say download_tmp, and move it to the original filename only if the download succeeds.
Handle $SIG{INT} to run proper cleanup code.
But neither of these would help resume the partial file download.
Any insights?
First, I don't understand what this has to do with Perl, since you're using wget to do the downloading... You could use libwww-perl (perldoc LWP) and have more control over the download process.
Then I second your idea of downloading to a "tmp" filename and moving the file into place on success.
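For what it's worth, a rough sketch of both ideas with LWP (the URL/target handling is hypothetical, and resuming relies on the server honouring Range requests; ':content_cb' streams chunks to disk as they arrive):

use strict;
use warnings;
use LWP::UserAgent;

my ($url, $target) = @ARGV;    # hypothetical: remote URL and final local name
my $tmp  = "$target.tmp";
my $have = -s $tmp || 0;       # bytes left over from an interrupted run

open my $fh, '>>', $tmp or die "open $tmp: $!";
binmode $fh;

my $restarted = 0;
my $ua   = LWP::UserAgent->new;
my $resp = $ua->get(
    $url,
    ( $have ? ( 'Range' => "bytes=$have-" ) : () ),    # ask the server to resume
    ':content_cb' => sub {
        my ($chunk, $res) = @_;
        # If the server ignored the Range header (200 instead of 206),
        # discard the partial data and start over from byte 0.
        truncate $fh, 0 if $have && $res->code == 200 && !$restarted++;
        print {$fh} $chunk;
    },
);
close $fh or die "close $tmp: $!";

die "download failed: ", $resp->status_line, "\n" unless $resp->is_success;
rename $tmp, $target or die "rename $tmp -> $target: $!";    # move into place on success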
However I think you need to go further and verify the integrity of the files. Computing an MD5 or SHA hash is very easy; match the hash of the downloaded file against what you're expecting. You can have a short file on the server containing the checksum (filename.md5). Declare success only when you have a match.
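A sketch of that check, assuming the server publishes an md5sum-style "filename.md5" next to each binary (the filename is a placeholder):

use strict;
use warnings;
use Digest::MD5;

my $file = 'tool.bin';    # placeholder for the downloaded binary

# Expected checksum: first field of the server-provided "$file.md5" (md5sum format).
open my $md5_fh, '<', "$file.md5" or die "open $file.md5: $!";
my ($expected) = split ' ', scalar <$md5_fh>;
close $md5_fh;

# Actual checksum of what was downloaded.
open my $fh, '<', $file or die "open $file: $!";
binmode $fh;
my $got = Digest::MD5->new->addfile($fh)->hexdigest;
close $fh;

die "checksum mismatch for $file (expected $expected, got $got)\n"
    unless lc $got eq lc $expected;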
Note that catching all the signals and generally trying to make the process unkillable, and then expecting it to have worked is bound to fail at one point or another. There could be a network timeout, a crash, power failure, configuration problem on the server ... you should instead assume downloads can fail, because they will, and code so that your process can recover.
Finally, you're not telling us what kind of binaries you're downloading and what you're doing with them. Since you use wget I'm going to assume you're on Unix; you should consider using RPM+Yum or the like, as they handle all this for you. RPMs are easy to write, really.
Use your first approach:
download to "FileName".tmp
move "FileName".tmp to "FileName" (move, not copy!)
once per day, clean out all .tmp files (paranoia rulez)
You could just use wget's -N and -c options and remove the entire "if file exists" logic.
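A minimal sketch of that (the URL and the target directory are placeholders):

use strict;
use warnings;

my ($url, $src_dir) = @ARGV;    # placeholders: remote URL and the local src/ directory

# -c continues a partial download, -N only re-fetches when the remote copy is newer,
# so the explicit "is the file already there?" check goes away.
system('wget', '-c', '-N', '-P', $src_dir, $url) == 0
    or die 'wget exited with status ', $? >> 8, "\n";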