Checking to see if files are executable in Perl

I have a program that checks to see if the files in my directory are readable, writable, and executable.
I have it set up so it looks like:
if (-e $file) {
    print "exists";
}
if (-x $file) {
    print "executable";
}
and so on
But my issue is that when I run it, it shows that the text files are executable too: plain text files with one word in them. I feel like there is an error. What did I do wrong? I am a complete Perl noob, so forgive me.

It is quite possible for a text file to be executable. It might not be particularly useful in many cases, but it's certainly possible.
In Unix (and your Mac is running a Unix-like operating system), the "executable" setting is just a flag stored in the file's metadata (its inode). That flag can be turned on or off for any file.
There are actually three of these permissions, which record whether you can read, write, or execute a file. You can see these permissions by using the ls -l command in a terminal window (see man ls for more details of what the various ls options mean). There are probably ways to view these permissions in the Finder too (perhaps a "properties" menu item or something like that - I don't have a Mac handy to check).
You can change these permissions with the chmod ("change mode") command. See man chmod for details.
For more information about Unix file modes, see this Wikipedia article.
But whether or not a file is executable has nothing at all to do with its contents.
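To make that concrete, here is a small sketch (not part of the original answer; the directory name is just an example) that reports all three permissions for everything in a directory using Perl's file test operators:
#!/usr/bin/perl
# Sketch: report read/write/execute permissions for every entry in a
# directory, using Perl's built-in file test operators (-r, -w, -x).
use strict;
use warnings;

my $dir = '.';    # example directory
opendir my $dh, $dir or die "Can't open $dir: $!";

for my $name (sort readdir $dh) {
    next if $name eq '.' or $name eq '..';
    my $file = "$dir/$name";
    my @perms;
    push @perms, 'readable'   if -r $file;
    push @perms, 'writable'   if -w $file;
    push @perms, 'executable' if -x $file;
    printf "%-20s %s\n", $name, @perms ? join(', ', @perms) : 'no access';
}
closedir $dh;
Note that these operators report what the current (effective) user is allowed to do, which is the point the next answer makes about -x.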

The statement if (-x $file) does not check whether a file is an executable, but whether your user has execute privileges on it.
For checking whether a file actually is an executable program, I'm afraid there isn't a magic method. You may try to use:
if (-T $file) to check whether the file has an ASCII or UTF-8 encoding.
if (-B $file) to check whether the file is binary.
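Combining these with -x is one way to flag the suspicious case from the question, an "executable" plain text file. A rough sketch, not part of the original answer:
# Rough sketch: flag files that have the execute bit set even though
# they look like plain text (the situation described in the question).
use strict;
use warnings;

for my $file (glob '*') {
    next unless -f $file;
    if (-x $file && -T $file) {
        print "$file is executable but looks like a plain text file\n";
    }
}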
If -T and -B are unsuitable for your case, consider the following:
Assuming you are in a Linux environment, note that every file can be executed. The question here is: will executing, e.g., test.txt throw an error on standard error (STDERR)?
Most likely, it will.
If the test.txt file contains:
Some text
and you launch it from your Perl script with system("./test.txt");, this will produce output on STDERR like:
./test.txt: line 1: Some: command not found
If for some reason you are looking to run all the files in your directory (in a for loop, for instance), be warned that this is pretty dangerous, since you will launch all your files and you may not want to do so, especially if the Perl script is in the same directory that you are checking (this will lead to undesirable script behaviour).
Hope it helps ;)

Related

How to get vim to list the PIDs of selected files that are presently being edited, avoiding recovery mode, and not list all the other files

The vim manual page contains two similar -r type options. I'll give more background below; this question is really about how to invoke the first type of -r to list the swap files, but avoid the second -r that invokes recovery:
-r          List swap files, with information about using them for recovery.
-r {file}   Recovery mode. The swap file is used to recover a crashed
            editing session. The swap file is a file with the same
            filename as the text file with ".swp" appended. See ":help
            recovery".
The -r without a filename (the first -r above) reports on the swap files of other files too, including ones in other directories.
Background:
I'm trying to have vim report the swap files of a specific file (mostly to determine if vim is still editing the file). If the file is being edited (in another window, either on Linux or Cygwin), I can 'raise' that window to the top with "\e[2t\e[1t", as I have successfully been able to do thanks to Bring Window to Front.
Vim has multiple swap file names and multiple directories where it could put a swap file, so I want to ask vim: please tell me the names of the swap files that are currently in use for a given file, and whether there is a current vim process on the file. Unfortunately, sometimes vim will open a command file in recovery mode in unexpected ways.
I'm invoking vim like this: vim -r -c :q file. Well, actually, I'm invoking it from script, since I want vim to see something more like a terminal, and then I look at the output file; so it's more like script -q -c "vim -r -c :q foo" fooscript, and then I look in the fooscript file for messages like /Note: process STILL RUNNING: (\d+)/.
It is beginning to look like I need to use vim -r without a file name, and parse the output of the -r report, and that there isn't a way to get the report pre-filtered to a single file in question.
After switching my focus to just vim -r, and
knowing that vim will try to put the swap file into the same directory as the file it's editing (thanks to @romainl for the pointer to :help swap-file),
observing that vim -r reports on the files in the current directory first,
observing that the file name associated with the swap file is reported before the process ID of the vim process, and
observing that vim appends (STILL RUNNING) if it finds the active process,
I changed the current directory appropriately and ran this code after plugging in the name of the file-to-search-for:
perl -lne '
    last if /^\s+In directory/;
    undef $f if /^\d+/;
    $f = $1 if /^\s+file name:\s+(.*)\s*$/;
    if ( $f =~ m#/file-to-search-for# && /^\s+ process ID:\s(\d+).*?STILL RUNNING/ ) {
        print $1;
        $pid //= $1;
    }
    END { exit !$pid; }'
The PID of the running vim process is printed, and the exit status is zero when the appropriate swap file is found and non-zero if the file was not being edited.
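For reference, the same parse can be wrapped in a standalone script instead of a one-liner. This is only a sketch of the approach, not part of the original answer: it assumes the vim -r report arrives on stderr (hence the 2>&1) and that you run it from the directory containing the file, as described above.
#!/usr/bin/perl
# Sketch of the one-liner above as a standalone script. Assumes the
# vim -r report is written to stderr and uses the field layout shown
# earlier: a "file name:" line followed by a "process ID: ...
# (STILL RUNNING)" line for each swap file.
use strict;
use warnings;

my $target = shift or die "usage: $0 file-to-search-for\n";
my ($f, $pid) = ('', undef);

for (qx(vim -r 2>&1)) {
    last if /^\s+In directory/;     # stop after the current directory's entries
    $f = '' if /^\d+/;              # a numbered line starts a new swap file entry
    $f = $1 if /^\s+file name:\s+(.*?)\s*$/;
    if ($f =~ /\Q$target\E/ && /^\s+process ID:\s*(\d+).*?STILL RUNNING/) {
        print "$1\n";
        $pid //= $1;
    }
}
exit !$pid;    # zero exit status only when an active vim process was found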

Unable to delete the file from Perl script

I have an old Perl script which was always working, but suddenly something is broken and it is no longer deleting the file.
-rw-r--r-- 1 nobody uworld 6 Dec 03 11:15 shot32.file
The command to delete the above file is inside a perl script
`rm $shotfile`;
I have checked $shotfile is shot32.file and it is in the right location.
So the file location and filename are not the problem.
Regarding the permissions, the Perl script is running as the nobody user as well, so what could be other reasons for this not to work?
Appreciate your help.
To delete a file, you need write permissions on the directory the file is in. The permissions on the file don't matter.
That said, that's some pretty awful code you've got there. You're shelling out (without escaping anything, hello shell injection!) just to run rm (which you could've run directly without going through the shell), and you're capturing its output for no reason (and you're ignoring whatever was captured anyway). Also, you're not checking for errors (which would be harder in this form as well).
This is all much more complicated than it has to be. Perl has a built-in function for deleting files:
unlink $shotfile or warn "$0: can't unlink $shotfile: $!\n";
This will delete the file or warn you about any problems (with $! containing the reason for the failure). Change warn to die if you want the program to abort instead.
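Since the first point above is about the directory rather than the file, here is a minimal sketch (using the file name from the question) that checks the containing directory before attempting the delete:
# Minimal sketch: check write permission on the containing directory
# first, since that (not the file's own mode) is what the delete needs.
use strict;
use warnings;
use File::Basename qw(dirname);

my $shotfile = 'shot32.file';    # file name from the question
my $dir      = dirname($shotfile);

warn "$0: no write permission on directory '$dir'\n" unless -w $dir;
unlink $shotfile or die "$0: can't unlink $shotfile: $!\n";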

colorgcc perl script with output to non-tty enabled writing to C dependency files

Ok, so here's my issue. I have written a build script in bash that pipes output to tee and sorts different output to different log files (so I can summarize errors/warnings at the end and get some statistics on files built). I wanted to use the colorgcc perl script (colorgcc.1.3.2) to colorize the output from gcc and had found in other places that this won't work piping to tee, since the script checks if it is writing to something that is not a tty. Having disabled this check everything was working until I did a full build and discovered some of the code we receive from another group builds C dependency files (we don't control this code, changing it or the build process for these isn't really an option).
The problem is that these .d files have the form as follows:
filename.o filename.d : filename.c \
dependant_file1.h \
dependant_file2.h (and so on for however many dependencies there are)
This output from GCC gets written into the .d file, but, since it is close enough to a warning/error message, colorgcc outputs color codes (I believe it's the check for filename:lineno:message, but I'm not 100% sure; it could be the filename:message check in the GCCOUT while loop). I've tried editing the regex to attempt not to match this, but my perl-fu is admittedly pretty weak. So what I end up with is a color code on each line for these dependency files, which obviously causes the build to fail.
I ended up just replacing the check for ! -t STDOUT with a check for a NO_COLOR envar I set and unset in the build script for these directories (emulates the previous behavior of no color for non-tty). This works great if I run the full script, but doesn't if I cd into the directory and just run make (obviously setting and unsetting manually would work but this is a pain to do every time). Anyone have any ideas how to prevent this script from writing color codes into dependency files?
Here's how I worked around this. I added the following to colorgcc to search the gcc arguments for the flags that generate the .d files and just call the compiler directly in that case. This was inserted in place of the original TTY check.
# If gcc was asked to generate dependency (.d) files, skip the
# colorizing and run the real compiler directly.
foreach my $argnum (0 .. $#ARGV)
{
    if ($ARGV[$argnum] =~ m/-M{1,2}/)
    {
        exec $compiler, @ARGV
            or die("Couldn't exec");
    }
}
I don't know if this is the proper 'perl' way of doing this sort of operation but it seems to work. Compiling inside directories that build .d files no longer inserts color codes and the source file builds do (both to terminal and my log files like I wanted). I guess sometimes the answer is more hacks instead of "hey, did you try giving up?".
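If you also want to keep the NO_COLOR override from the question as a fallback, one hedged way to fold it into the same spot (sketch only; $compiler is the real-compiler path colorgcc already holds):
# Sketch only: same idea as above, but also honouring a NO_COLOR
# environment variable (the workaround mentioned in the question).
if ($ENV{NO_COLOR} or grep { /^-M{1,2}/ } @ARGV) {
    exec $compiler, @ARGV
        or die "Couldn't exec $compiler: $!";
}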

Why does grep hang when run against the / directory?

My question is in two parts :
1) Why does grep hang when I grep all files under "/" ?
for example :
grep -r 'h' ./
(Note: right before the hang/crash, I see some "no such device or address" messages regarding sockets...)
Of course, I know that grep shouldn't run against a socket, but I would think that since sockets are just files in Unix, it should return a negative result, rather than crashing.
2) Now, my follow up question : In any case -- how can I grep the whole filesystem? Are there certain *NIX directories which we should leave out when doing this ? In particular, I'm looking for all recently written log files.
As #ninjalj said, if you don't use -D skip, grep will try to read all your device files, socket files, and FIFO files. In particular, on a Linux system (and many Unix systems), it will try to read /dev/zero, which appears to be infinitely long.
You'll be waiting for a while.
If you're looking for a system log, starting from /var/log is probably the best approach.
If you're looking for something that really could be anywhere in your file system, you can do something like this:
find / -xdev -type f -print0 | xargs -0 grep -H pattern
The -xdev argument to find tells it to stay within a single filesystem; this will avoid /proc and /dev (as well as any mounted filesystems). -type f limits the search to ordinary files. -print0 prints the file names separated by null characters rather than newlines; this avoids problems with files having spaces or other funny characters in their names.
xargs reads a list of file names (or anything else) on its standard input and invokes the specified command on everything in the list. The -0 option works with find's -print0.
The -H option to grep tells it to prefix each match with the file name. By default, grep does this only if there are two or more file names on its command line. Since xargs splits its arguments into batches, it's possible that the last batch will have just one file, which would give you inconsistent results.
Consider using find ... -name '*.log' to limit the search to files with names ending in .log (assuming your log files have such names), and/or using grep -I ... to skip binary files.
Note that all this depends on GNU-specific features. Some of these options might not be available on MacOS (which is based on BSD) or on other Unix systems. Consult your local documentation, and consider installing GNU findutils (for find and xargs) and/or GNU grep.
Before trying any of this, use df to see just how big your root filesystem is. Mine is currently 268 gigabytes; searching all of it would probably take several hours. A few minutes spent (a) restricting the files you search and (b) making sure the command is correct will be well worth the time you spend.
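Since this is a Perl thread, here is a rough File::Find equivalent of that pipeline for comparison, a sketch only: stay on the starting filesystem, visit only ordinary files, and print matches prefixed with the file name.
#!/usr/bin/perl
# Rough Perl equivalent of the find/xargs/grep pipeline above.
use strict;
use warnings;
use File::Find;

my $pattern  = qr/pattern/;          # substitute your search pattern
my $start    = '/var/log';
my @st_start = stat $start or die "Can't stat $start: $!";
my $dev      = $st_start[0];         # device number of the starting filesystem

find(sub {
    my @st = lstat $_ or return;
    if (-d _) {
        # Equivalent of find's -xdev: don't descend into other filesystems.
        $File::Find::prune = 1 if $st[0] != $dev;
        return;
    }
    return unless -f _;              # -type f: ordinary files only
    open my $fh, '<', $_ or return;
    while (my $line = <$fh>) {
        print "$File::Find::name: $line" if $line =~ $pattern;
    }
    close $fh;
}, $start);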
By default, grep tries to read every file. Use -D skip to skip device files, socket files and FIFO files.
If you keep seeing error messages, then grep is not hanging. Keep iotop open in a second window to see how hard your system is working to pull all the contents off its storage media into main memory, piece by piece. This operation should be slow, or you have a very barebones system.
Now, my follow up question: In any case -- how can I grep the whole filesystem? Are there certain *NIX directories which we should leave out when doing this? In particular, I'm looking for all recently written log files.
Grepping the whole FS is very rarely a good idea. Try grepping the directory where the log files should have been written; likely /var/log. Even better, if you know anything about the names of the files you're looking for (say, they have the extension .log), then do a find or locate and grep the files reported by those programs.

Where does CGI.pm normally create temporary files?

On all my Windows servers, except for one machine, when I execute the following code to allocate a temporary files folder:
use CGI;
my $tmpfile = new CGITempFile(1);
print "tmpfile='", $tmpfile->as_string(), "'\n";
The variable $tmpfile is assigned the value '.\CGItemp1' and this is what I want. But on one of my servers it's incorrectly set to C:\temp\CGItemp1.
All the servers are running Windows 2003 Standard Edition, IIS6 and ActivePerl 5.8.8.822 (upgrading to later version of Perl not an option). The result is always the same when running a script from the command line or in IIS as a CGI script (where scriptmap .pl = c:\perl\bin\perl.exe "%s" %s).
How I can fix this Perl installation and force it to return '.\CGItemp1' by default?
I've even copied the whole Perl folder from one of the working servers to this machine but no joy.
@Hometoast:
I checked the 'TMP' and 'TEMP' environment variables and also $ENV{TMP} and $ENV{TEMP} and they're identical.
From command line they point to the user profile directory, for example:
C:\DOCUME~1\[USERNAME]\LOCALS~1\Temp\1
When run under IIS as a CGI script they both point to:
c:\windows\temp
In registry key HKEY_USERS/.DEFAULT/Environment, both servers have:
%USERPROFILE%\Local Settings\Temp
The ActiveState implementation of CGITempFile() is clearly using an alternative mechanism to determine how it should generate the temporary folder.
@Ranguard:
The real problem is with the CGI.pm module and attachment handling. Whenever a file is uploaded to the site CGI.pm needs to store it somewhere temporary. To do this CGITempFile() is called within CGI.pm to allocate a temporary folder. So unfortunately I can't use File::Temp. Thanks anyway.
@Chris:
That helped a bunch. I did have a quick scan through the CGI.pm source earlier but your suggestion made me go back and look at it more studiously to understand the underlying algorithm. I got things working, but the oddest thing is that there was originally no c:\temp folder on the server.
To obtain a temporary fix I created a c:\temp folder and set the relevant permissions for the website's anonymous user account. But because this is a shared box I couldn't leave things that way, even though the temp files were being deleted. To cut a long story short, I renamed the c:\temp folder to something different and magically the correct '.\' folder path was being returned. I also noticed that the customer had enabled FrontPage extensions on the site, which removes write access for the anonymous user account on the website folders, so this permission needed re-applying. I'm still at a loss as to why at the start of this issue CGITempFile() was returning c:\temp, even though that folder didn't exist, and why it magically started working again.
The name of the temporary directory is held in $CGITempFile::TMPDIRECTORY and initialised in the find_tempdir function in CGI.pm.
The algorithm for choosing the temporary directory is described in the CGI.pm documentation (search for -private_tempfiles).
IIUC, if a C:\Temp folder exists on the server, CGI.pm will use it. If none of the directories checked in find_tempdir exist, then the current directory "." is used.
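If it helps to confirm which directory is being chosen on the problem server, a quick check along these lines (a sketch using the package variable named above) should show it:
use CGI;
# Create a temp file the same way the question does, then print the
# directory CGI.pm settled on (the package variable described above).
my $tmpfile = new CGITempFile(1);
print "temp dir:  $CGITempFile::TMPDIRECTORY\n";
print "temp file: ", $tmpfile->as_string(), "\n";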
I hope this helps.
Not the direct answer to your question, but have you tried using File::Temp?
It is specifically designed to work on any OS.
If you're running this script as you, check the %TEMP% environment variable to see if it differs.
If IIS is executing, check the values in registry for TMP and TEMP under
HKEY_USERS/.DEFAULT/Environment