I have an old Perl script that has always worked, but suddenly something is broken and it is no longer deleting the file.
-rw-r--r-- 1 nobody uworld 6 Dec 03 11:15 shot32.file
The command that deletes the above file is inside a Perl script:
`rm $shotfile`;
I have checked that $shotfile is shot32.file and that it is in the right location, so the file location and filename are not the problem.
Regarding permissions, the Perl script is running as the nobody user as well, so what other reasons could there be for this not working?
Appreciate your help.
To delete a file, you need write permission on the directory the file is in. The permissions on the file itself don't matter.
That said, that's some pretty awful code you've got there. You're shelling out (without escaping anything, hello shell injection!) just to run rm, which you could have run directly without going through the shell. You're also capturing its output for no reason (and ignoring whatever was captured anyway), and you're not checking for errors (which would be harder in this form as well).
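If you did want to keep calling rm, a safer sketch would use the list form of system, which bypasses the shell entirely (the -- guards against filenames that begin with a dash):

system('rm', '--', $shotfile) == 0
    or warn "$0: rm failed for $shotfile\n";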
This is all much more complicated than it has to be. Perl has a built-in function for deleting files:
unlink $shotfile or warn "$0: can't unlink $shotfile: $!\n";
This will delete the file or warn you about any problems (with $! containing the reason for the failure). Change warn to die if you want the program to abort instead.
Every function in the Perl File::Copy module is supposed to return 1 on success and 0 on failure.
In my case, I have noticed (from the logs I have) that move returns 0 even when the operation succeeds (the files are actually moved), with $! set to "No such file or directory".
Has anyone noticed this issue before?
If move returns 0, trying to rename the file failed, and then either trying to copy it failed or trying to unlink the original file after copying it failed. I don't see other possibilities, at least in File::Copy version 2.33.
If you need better error reporting, you may want to just try the rename yourself and, if needed, the copy and unlink.
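A minimal sketch of doing that by hand, assuming $source and $destination are plain file paths:

use File::Copy qw(copy);

# Try a cheap rename first; it only works within a single filesystem.
unless (rename $source, $destination) {
    # Fall back to copy + unlink, reporting exactly which step failed.
    copy($source, $destination)
        or die "copy $source -> $destination failed: $!";
    unlink $source
        or die "unlink $source after copy failed: $!";
}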
What version of File::Copy are you using? What version of Perl? What operating system?
From File::Copy, on copy:
If an error occurs in setting permissions, cp will return 0, regardless of whether the file was successfully copied.
While this is about copy, move may also copy the file and then delete it (if it can't rename it).
There are yet other possibilities, that involve other processes interfering with the file.
I have a program that checks whether the files in my directory are readable, writable, and executable.
I have it set up so it looks like this:
if (-e $file) {
    print "exists";
}
if (-x $file) {
    print "executable";
}
and so on
but my issue is that when I run it, it shows that the text files are executable too - plain text files with one word in them. I feel like there is an error. What did I do wrong? I am a complete Perl noob, so forgive me.
It is quite possible for a text file to be executable. It might not be particularly useful in many cases, but it's certainly possible.
In Unix (and your Mac is running a Unix-like operating system) the "executable" setting is just a flag stored in the file's metadata (its inode). That flag can be set on or off for any file.
There are actually three of these permissions, which record whether you can read, write, or execute a file. You can see these permissions by using the ls -l command in a terminal window (see man ls for more details of what the various ls options mean). There are probably ways to view these permissions in the Finder too (perhaps a "Get Info" menu item or something like that - I don't have a Mac handy to check).
You can change these permissions with the chmod ("change mode") command. See man chmod for details.
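From Perl, you can inspect the same permission bits with the built-in stat function; here is a minimal sketch (using the $file variable from the question):

my $mode = (stat $file)[2];    # field 2 of stat is the mode word
printf "permissions of %s: %04o\n", $file, $mode & 07777;    # e.g. 0644, 0755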
For more information about Unix file modes, see this Wikipedia article.
But whether or not a file is executable has nothing at all to do with its contents.
The statement if (-x $file) does not check whether a file is an executable, but whether your user has execute privileges on it.
For checking whether a file really is an executable, I'm afraid there isn't a magic method. You may try to use:
if (-T $file) to check whether the file has an ASCII or UTF-8 encoding.
if (-B $file) to check whether the file is binary.
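If a rough heuristic is enough, you could combine these tests; a minimal sketch (an approximation, not a guarantee):

if (-x $file && -B $file) {
    print "executable and binary - probably a real program\n";
}
elsif (-x $file && -T $file) {
    print "executable bit set on a text file - possibly a script\n";
}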
If this is unsuitable for your case, consider the following:
Assuming you are in a Linux environment, note that any file can be executed. The question here is: will the execution of, e.g., test.txt throw anything on standard error (STDERR)?
Most likely, it will.
If the test.txt file contains:
Some text
and you launch it from your Perl script with system("./test.txt");, this will print an error on STDERR like:
./test.txt: line 1: Some: command not found
If for some reason you want to run all the files in your directory (in a for loop, for instance), be warned that this is pretty dangerous, since you will launch every one of your files and you may not want to do so, especially if the Perl script itself is in the directory you are checking (this will lead to undesirable script behaviour).
Hope it helps ;)
Ok, so here's my issue. I have written a build script in bash that pipes output to tee and sorts different output into different log files (so I can summarize errors/warnings at the end and get some statistics on the files built). I wanted to use the colorgcc Perl script (colorgcc.1.3.2) to colorize the output from gcc, and had found in other places that this won't work when piping to tee, since the script checks whether it is writing to something that is not a tty. Having disabled this check, everything was working until I did a full build and discovered that some of the code we receive from another group builds C dependency files (we don't control this code; changing it or the build process for these isn't really an option).
The problem is that these .d files have the following form:
filename.o filename.d : filename.c \
    dependant_file1.h \
    dependant_file2.h

(and so on for however many dependencies there are)
This output from GCC gets written into the .d file, but since it is close enough to a warning/error message, colorgcc emits color codes (I believe it's the check for filename:lineno:message, though I'm not 100% sure; it could be the filename:message check in the GCCOUT while loop). I've tried editing the regex to avoid matching this, but my perl-fu is admittedly pretty weak. So what I end up with is a color code on each line of these dependency files, which obviously causes the build to fail.
I ended up just replacing the check for ! -t STDOUT with a check for a NO_COLOR environment variable that I set and unset in the build script for these directories (this emulates the previous behavior of no color for non-tty output). This works great if I run the full script, but not if I cd into one of the directories and just run make (obviously setting and unsetting the variable manually would work, but that is a pain to do every time). Does anyone have any ideas how to prevent this script from writing color codes into dependency files?
Here's how I worked around this. I added the following to colorgcc to search the gcc arguments for the flags that generate the .d files, and to call the compiler directly in that case. This was inserted in place of the original TTY check.
foreach my $argnum (0 .. $#ARGV)
{
    # If gcc is being asked to generate dependency (.d) files via -M
    # or -MM, skip colorization and run the real compiler directly.
    if ($ARGV[$argnum] =~ m/^-M{1,2}/)
    {
        exec $compiler, @ARGV
            or die("Couldn't exec");
    }
}
I don't know if this is the proper 'Perl' way of doing this sort of operation, but it seems to work. Compiling inside directories that build .d files no longer inserts color codes, and the source file builds still get color (both to the terminal and to my log files, like I wanted). I guess sometimes the answer is more hacks instead of "hey, did you try giving up?".
My Perl script is moving files onto an NFS-mounted filesystem, using the move function from File::Copy. Recently, some of the files returned an error, causing my script to print the message "move returned 0, A file or directory in the path name does not exist." (move returns 1 on success and 0 on error; the error message comes from $!).
The really weird thing is that the system that processes the files has reported back that it successfully processed the files that failed! I have never seen an error message from a successful write before, so I wonder if it has something to do with NFS. I thought it was strange that in a run where 28 files were moved, the first 24 failed and the last 4 succeeded. The script has been running with no errors for several months, and has now demonstrated this problem twice in 2 weeks.
The host is running on AIX, though I doubt that makes a difference.
I think it is an NFS issue, not Perl. NFS can be really weird in some cases.
You should stat/read the written file yourself and not depend on the reported errors.
The File::Copy::Reliable module uses the same error handling, so it will fail with the same error.
From the source:
copy( $source, $destination )
|| croak("copy_reliable($source, $destination) failed: $!");
Simply put the copy into an eval block, and try to read/stat the file at the destination.
If you are really cautious, you could compare md5/sha1 hashes of both files to be sure that they are the same.
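A minimal sketch of that approach, assuming $source and $destination hold the paths in question (Digest::MD5 ships with Perl):

use File::Copy qw(move);
use Digest::MD5;

sub md5_of {
    my ($path) = @_;
    open my $fh, '<:raw', $path or die "open $path: $!";
    return Digest::MD5->new->addfile($fh)->hexdigest;
}

my $before = md5_of($source);                   # hash the file before moving it
my $ok = eval { move($source, $destination) };  # eval in case a wrapper croaks

# Don't trust the return value or $! alone; verify the destination directly.
if (-e $destination && md5_of($destination) eq $before) {
    print "move verified\n";
} else {
    die "move really failed: $!";
}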
Regards,
I'm using File::Copy::Recursive::dircopy( $original_dirname, $new_dirname ) or die $!; to copy a read-only directory from within a Perl script, and I get a "Permission denied" error.
I can see that $new_dirname is created, but it is marked as read-only (like the original directory). Maybe this prevents the contents from being copied into it?
Yes, this definitely seems to be a bug in File::Copy::Recursive. A temporary workaround is to set $File::Copy::Recursive::KeepMode to 0 and do the chmod yourself, as sketched below.
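A minimal sketch of that workaround, reusing the variables from the question (the 0555 read-only mode is an assumption; use whatever mode the original directory had):

use File::Copy::Recursive qw(dircopy);

$File::Copy::Recursive::KeepMode = 0;    # don't copy the read-only mode over

dircopy( $original_dirname, $new_dirname ) or die $!;

# Reapply the read-only permissions on the copy yourself (assumed mode).
chmod 0555, $new_dirname or warn "chmod $new_dirname: $!";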
It appears to have already been reported and the author is working on a fix, but it was coming "soon" on 2009-05-20 and "this weekend" on 2010-04-14.