In cPanel, they tell you to insert this code at the beginning of Perl files. I'm not sure what it does. I've tried code with and without this at the beginning of the file and it all seems to work the same. I haven't tested it with cron running the code, only as myself. By "tested it out", I mean using print lines, database connections & returns, subs, vars, etc.
BEGIN
{
my $base_module_dir = (-d '/home/root/perl' ? '/home/root/perl' : ( getpwuid($>) )[7] . '/perl/');
unshift @INC, map { $base_module_dir . $_ } @INC;
}
It is designed to set your module search path. Specifically, it makes the user's local perl/ directory the first location checked. It not only adds that directory but makes it a new root for every entry in @INC. In a limited-access environment such as those that use cPanel, this ensures your scripts (generally CGI) use your modules over any others.
BEGIN means the block runs at compile time, as soon as it has been parsed, before any code outside the block.
The first line determines whether /home/root/perl exists and is a directory. If both are true, it assigns that path to $base_module_dir; otherwise it assigns <user home>/perl/ to the variable. Remember, in Perl you can index a function call directly if it returns a list.
It finds the user's home directory with getpwuid($>). getpwuid() gets user account information for a given user (generally from passwd on a Unix system) and returns it as a list. $> is the effective user ID of the script. The index of 7 is used because the home directory is the 8th element of the list getpwuid returns (the returned list has a couple of fields, such as $quota and $comment, that the passwd file itself does not).
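For example, on a Unix system this prints the effective user's home directory by indexing the getpwuid call directly:
my $home = ( getpwuid($>) )[7];
print "effective user's home directory: $home\n";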
It then prepends $base_module_dir to ALL entries in @INC and inserts those modified entries at the front of @INC. So it's not just adding $base_module_dir as a directory; it is adding it as a new root for all entries in @INC. That's why it uses map instead of just adding a single entry.
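As an illustration with made-up paths, here is what the map does before the unshift:
my $base_module_dir = '/home/root/perl';
my @original = ('/usr/lib/perl5', '/usr/share/perl5');    # stand-in for @INC
my @prefixed = map { $base_module_dir . $_ } @original;
# @prefixed is now ('/home/root/perl/usr/lib/perl5', '/home/root/perl/usr/share/perl5')
unshift @original, @prefixed;    # the prefixed copies now come first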
Maybe a bit easier to read:
# The BEGIN block is explained in perldoc perlmod
BEGIN {
# Prefix all dirs already in the include path, with root's perl path if it exists, or the
# current user's perl path if not and make perl look for modules in those paths first:
# Example:
# "/usr/lib/perl" => "/home/root/perl/usr/lib/perl, /usr/lib/perl"
my $root_user_perl_dir = '/home/root/perl';
# Fetch user home dir in a non-intuitive way:
# my $user_perl_dir = ( getpwuid($>) )[7] . '/perl/';
# Fetch user home dir slightly more intuitive:
my $current_userid = $>; # EFFECTIVE_USER_ID see perldoc perlvar
# See perldoc perlfunc / perldoc -f getpwuid
my ($name,$passwd,$uid,$gid,$quota,$comment,$gcos,$dir,$shell,$expire)
= getpwuid($current_userid);
my $current_user_home_dir = $dir;
my $user_perl_dir = $current_user_home_dir . '/perl/';
my $base_module_dir = '';
if (-d $root_user_perl_dir ) {
# Use this if the path exists
$base_module_dir = $root_user_perl_dir;
}
else {
# or fallback to current user's path
$base_module_dir = $user_perl_dir;
}
# Generate the new paths
my @prefixed_INC = map { $base_module_dir . $_ } @INC;
# Add the generated paths in front of the existing ones.
@INC = (@prefixed_INC, @INC);
}
This code sets up Perl to prefer modules in either /home/root/perl--if it exists and is a directory--or ~/perl when looking for modules to load. It basically takes every path that Perl would normally search and re-bases it under the chosen directory.
Likely, this allows the user to keep a debugging or bugfix version of a system module and have Perl prefer it over the installed one.
It does this in a BEGIN block because @INC must be modified at compile time, before the use statements it is meant to affect are processed.
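A minimal sketch of the timing (My::Module is a made-up name):
BEGIN { unshift @INC, '/home/root/perl'; }    # runs at compile time, during parsing
use My::Module;    # also compile time, so it already sees the new search path
# a plain unshift @INC placed here would run too late to affect any use statement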
I tried using File::Spec->case_tolerant, but it returns false on HFS+, which is wrong. I suspect it's because File::Spec::Unix always returns false. My current workaround is this function:
my $IS_CASE_INSENSITIVE;
sub _is_case_insensitive {
unless (defined $IS_CASE_INSENSITIVE) {
$IS_CASE_INSENSITIVE = 0;
my ($uc) = glob uc __FILE__;
if ($uc) {
my ($lc) = glob lc __FILE__;
$IS_CASE_INSENSITIVE = 1 if $lc;
}
}
return $IS_CASE_INSENSITIVE;
}
But that's a hack since: 1) on a case-sensitive file system both of those files might exist; and 2) different volumes can have different file systems.
In truth, every directory considered must be checked on its own. This is because, on Unix-like systems, any directory can be on a different file system than another directory. Furthermore, use of glob is not very reliable; from perlport:
Don't count on filename globbing. Use opendir, readdir, and closedir instead.
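Following that advice, here is a sketch of such a check built on opendir/readdir instead of glob (the helper name is my own):
use File::Basename qw(dirname basename);

sub name_exists_ignoring_case {
    my $path = shift;
    my $want = lc basename($path);
    opendir my $dh, dirname($path) or return;
    # count directory entries that match the name, ignoring case
    my $found = grep { lc($_) eq $want } readdir $dh;
    closedir $dh;
    return $found;
}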
But I think that @Borodin is onto something with the use of -e. So here's a function that uses -e to determine whether the specified directory is on a case-insensitive file system:
my %IS_CASE_INSENSITIVE;
sub is_case_insensitive {
my $dir = shift;
unless (defined $IS_CASE_INSENSITIVE{$dir}) {
$IS_CASE_INSENSITIVE{$dir} = -e uc $dir && -e lc $dir;
}
return $IS_CASE_INSENSITIVE{$dir};
}
You could probably add some heuristics for Windows to just cache the value for the drive letter, since that defines a mount point. And of course, it will fail on case-sensitive file systems if both uppercase and lowercase variations of the directory exist. But otherwise, unless there is some other way to tell more globally which directories map to which mount points, you have to check each directory.
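For example, a quick run over a couple of directories (the paths are only examples):
for my $dir ('/tmp', '/Volumes/Data') {
    printf "%s is on a case-%s file system\n",
        $dir, is_case_insensitive($dir) ? 'insensitive' : 'sensitive';
}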
I suggest you make use of the core File::Temp module to create a new unique file that has lower-case characters in its name. The file is set to be deleted when the object is destroyed, which is when the subroutine exits if not before.
If the file doesn't exist when accessed by the upper-cased file name, then the filing system is case-sensitive.
If the upper-cased name does exist then we have to check that we haven't happened upon a file whose upper-case version was already there, so we delete the file. If the upper-case entry has now gone then the filing system is case-insensitive.
If the upper-cased name is still there, then it is a file that existed before we created the temporary file. We just loop around and create a new temporary file with a different name, although the chances of this happening are absolutely tiny. If you prefer, you can minimize this possibility even further by using an outlandish value for SUFFIX. Just be careful that the characters you use are valid on any filing system that you wish to test.
I've tested this on both Windows 7 and Ubuntu.
use strict;
use warnings;
use 5.010;
use autodie;
use File::Temp ();
printf "File system %s case_insensitive\n", case_insensitive() ? "is" : "isn't";
sub case_insensitive {
while (1) {
my $tmp = File::Temp->new(
TEMPLATE => 'tempXXXXXX',
SUFFIX => '.tmp',
UNLINK => 1,
);
my $uc_filename = uc $tmp->filename;
return 0 if not -e $uc_filename;
$tmp = undef;
return 1 if not -e $uc_filename;
}
}
I was trying to get help when I posted a previous question. Also, I don't want to use any modules unless they are built-in; I prefer to write my own. I know how to do the recursive part to list all files from multiple directories, but I don't understand where exactly or how I would specify the desired level of search: if I pass root and 3 as parameters, it should look through at most 3 directory levels and retrieve all files at depth less than or equal to 3. Any help is greatly appreciated.
Do you just want it to list all files, or to return them in an array? If merely printing them is enough, you can do something like this:
sub print_txt_recurse {
    my ($filepath, $level) = @_;
    # ... some code to get file paths and print txt files, going through each entry ...
    elsif (-d $file && $level > 1) {
        print_txt_recurse($file, $level - 1);
    }
    return;
}
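Here is a fuller, runnable sketch of the same idea using only built-ins (opendir/readdir), since you want to avoid non-core modules; it prints .txt files at most $level directories deep:
use strict;
use warnings;

sub print_txt_recurse {
    my ($filepath, $level) = @_;
    opendir my $dh, $filepath or die "Can't open $filepath: $!";
    for my $entry (readdir $dh) {
        next if $entry eq '.' or $entry eq '..';
        my $file = "$filepath/$entry";
        if (-f $file && $file =~ /\.txt\z/i) {
            print "$file\n";
        }
        elsif (-d $file && $level > 1) {
            print_txt_recurse($file, $level - 1);
        }
    }
    closedir $dh;
    return;
}

print_txt_recurse('root', 3);    # e.g. search 'root' at most 3 levels deep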
You could use File::Find, a core module of Perl, which means it should be available everywhere.
See Core modules (F)
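For comparison, a sketch of the same depth-limited search with File::Find, pruning directories once they are $max_depth levels below the start (the names here are mine):
use strict;
use warnings;
use File::Find;

my ($start, $max_depth) = ('root', 3);
find(sub {
    # depth = number of path separators below the start directory
    my $depth = ($File::Find::name =~ tr{/}{}) - ($start =~ tr{/}{});
    if (-d $_ && $depth >= $max_depth) {
        $File::Find::prune = 1;    # don't descend any deeper
        return;
    }
    print "$File::Find::name\n" if -f $_ && /\.txt\z/i;
}, $start);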
I'm fairly new to Perl, so this is most likely not the best code, which is why I am posting. I got this to work but was wondering if there is a better way. I do not have the ability to download modules. I am copying the most recently modified directory in a build folder from one server to another server. The argument allows me to choose which build directory to copy from.
Thanks
#!C:\strawberry\perl
use warnings;
use strict;
use File::Copy::Recursive;
my $NewFolder = `(dir /o-d/ad/b \\\\myserver1.name.com\\builds\\$ARGV[0] | head -1)`;
chomp($NewFolder);
my $dir1 = "\\\\myserver1.name.com\\builds\\$ARGV[0]/$NewFolder";
my $dir2 = "\\\\myserver2.name.com\\builds\\$ARGV[0]/Backup/$NewFolder";
File::Copy::Recursive::dircopy $dir1, $dir2 or die "Copy failed: $!";
Use forward slashes. It just makes your code easier to read:
$dir1 = "\\\\myserver1.name.com\\builds\\$ARGV[0]/$NewFolder";
vs.
$dir1 = "//myserver1.name.com/builds/$ARGV[0]/$NewFolder";
Also, don't make system calls where Perl can do the job. For example, Perl can get the last modification date of a file via the stat function. Even better is the File::stat module, which makes stat so much easier to use.
Don't use @ARGV in your programs. Instead, read the variables from @ARGV into your own variables. It makes your program easier to understand, and your own variables have limited scope while @ARGV is global.
Use modern conventions. Variable names should be in all lower case, and use underscores to separate out words. That is $new_folder vs. $NewFolder. Is this arbitrary? Yes, but it's a convention followed by most Perl developers. It means not wondering if the variable is $newFolder, $NewFolder, or $newfolder because you know by these rules it is $new_folder.
And finally, use autodie;. This will kill your program whenever a file operation fails, turning Perl from a language where you must check every I/O call for errors into one that throws exceptions for you. This way, you don't have to worry about whether you remembered to check for a failed IO operation.
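A tiny illustration of the difference (the file name is made up):
use strict;
use warnings;
use autodie;

# Without autodie you'd have to write: open(...) or die "Can't open: $!";
# With autodie, a failed open throws an exception automatically.
open my $fh, '<', 'no_such_file.txt';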
Here's a completely untested example:
use strict;
use warnings;
use autodie;
use File::Copy::Recursive qw(dircopy);  # Optional module
use File::stat;

use constant {
    ORIG_SERVER => '//myserver1.name.com/builds',
    TO_SERVER   => '//myserver2.name.com/builds',
};

my $from_directory = shift;

#
# Find newest directory
#
opendir my $dir_fh, ORIG_SERVER . "/$from_directory";
my $newest_directory;
while ( my $sub_directory = readdir $dir_fh ) {
    next if $sub_directory eq "." or $sub_directory eq "..";
    next unless -d ORIG_SERVER . "/$from_directory/$sub_directory";
    if ( not defined $newest_directory ) {
        $newest_directory = $sub_directory;
        next;
    }
    my $newest_directory_stat = stat ORIG_SERVER . "/$from_directory/$newest_directory";
    my $sub_directory_stat    = stat ORIG_SERVER . "/$from_directory/$sub_directory";
    if ( $sub_directory_stat->mtime > $newest_directory_stat->mtime ) {
        $newest_directory = $sub_directory;
    }
}
closedir $dir_fh;

dircopy ORIG_SERVER . "/$from_directory/$newest_directory",
    TO_SERVER . "/$from_directory/Backup/$newest_directory";
My program is a lot longer than yours because your program depended upon various operating-system commands, like dir and head, and head, I don't believe, is a standard Windows command. Instead, I read each entry under that directory in my loop. Anything that's not a directory I toss (next unless -d), and I toss out the special directories . and .. as well.
After that, I use stat to find the newest directory, which to me means the one with the most recent modification time. Note that Unix doesn't store creation time. However, according to perlport, ctime is creation time on Win32, so you might prefer that instead of mtime.
If I didn't use File::stat, instead of this:
my $newest_directory_stat = stat ORIG_SERVER . "/$from_directory/$newest_directory";
my $sub_directory_stat    = stat ORIG_SERVER . "/$from_directory/$sub_directory";
if ( $sub_directory_stat->mtime > $newest_directory_stat->mtime ) {
    $newest_directory = $sub_directory;
}
I could have done this:
my $newest  = ORIG_SERVER . "/$from_directory/$newest_directory";
my $sub_dir = ORIG_SERVER . "/$from_directory/$sub_directory";
if ( (stat $sub_dir)[9] > (stat $newest)[9] ) {
    $newest_directory = $sub_directory;
}
The stat command without File::stat returns an array of values, and I could have simply used element [9] of that array. However, what is 9? Even though it could have saved me a few lines of code and the loading of an extra Perl module, it's better to use File::stat.
One thing you notice is that constants don't interpolate which means I have to keep doing things like this:
my $newest_directory_stat = stat ORIG_SERVER . "/$from_directory/$newest_directory";
However, you can use this bit of Perlish black magic to interpolate constants inside quotes:
my $youngest_directory_stat = stat "#{[ORIG_SERVER]}/$directory/$newest_directory";
Hope that helps.
I need to check whether any of a set of directories exist in a Perl script. The directories are named in the format XXXX*YYY - I need to check for each XXXX and enter an if statement if true.
In my script I have two variables $monitor_location (contains the path to the root directory being scanned) and $clientid (contains the XXXX).
The code snippet below has been expanded to show more of what I'm doing. I have a query which returns each client ID; I'm then looping over each record returned and trying to calculate the disk space used by that client ID.
I have the following code so far (doesn't work):
# loop for each client
while ( ($clientid, $email, $name, $max_record) = $query_handle1->fetchrow_array() )
{
# add leading zeroes to client ID if needed
$clientid=sprintf"%04s",$clientid;
# scan file system to check how much recording space has been used
if (-d "$monitor_location/$clientid\*") {
# there are some call recordings for this client
$str = `du -c $monitor_location/$clientid* | tail -n 1 2>/dev/null`;
$str =~ /^(\d+)/;
$client_recspace = $1;
print "Client $clientid has used $client_recspace of $max_record\n";
}
}
To be clear, I want to enter the if statement if there are any folders that start with XXXX.
Hope this makes sense! Thanks
You can use glob to expand the wildcard:
for my $dir (grep -d, glob "$monitor_location/$clientid*") {
...
}
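Applied to your task, the loop might look like this sketch ($monitor_location, $clientid, and $max_record come from your surrounding code; du -s gives the per-directory total directly):
for my $dir (grep -d, glob "$monitor_location/$clientid*") {
    my ($client_recspace) = (`du -s $dir` =~ /^(\d+)/);
    print "Client $clientid has used $client_recspace of $max_record\n";
}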
I have a "thing" against glob. (It seems to only work once (for me), meaning you couldn't re-glob that same dir again later in the same script. It's probably just me, though.)
I prefer readdir(). This is definitely longer, but it WFM.
chdir("$monitor_location") or die;
open(DIR, ".") or die;
my #items = grep(-d, grep(/^$clientid/, readdir(DIR)));
close(DIR);
Everything in @items matches what you want.
How can I find a file owned by user tree and group tree? And how can I find a whole directory in which the files are owned by tree?
The File::Find module is a standard Perl module (i.e., it is available on all installations of Perl). You can use File::Find to go through a directory tree and search for the file you want.
To use it, you create a wanted subroutine that examines each file, then pass that routine to the find subroutine. The File::Find module is a bit klutzy because it was originally meant only to be used by the find2perl command.
Here's some completely untested code. Notice that you have to do yucky stuff like using global and package variables. It's one of the reasons I don't like File::Find.
use File::Find;

our $myUid = getpwnam('tree');
our $myGid = getgrnam('tree');

find(\&wanted, @dirList);

sub wanted {
    # File::Find chdirs into each directory, so stat the plain file name in $_
    my ($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,
        $atime,$mtime,$ctime,$blksize,$blocks) = stat($_);
    return if not -f _;        # use return, not next, to leave a sub
    return if $uid != $myUid;
    return if $gid != $myGid;
    print qq(File "$File::Find::name" is owned by group 'tree' and user 'tree'\n);
}
I wrote my own File::Find called File::OFind because it's more object oriented. You can get that from here. It's a bit easier to understand. (Again, completely untested):
use File::OFind;
# Really should test if these return something
my $myUid = getpwnam('tree');
my $myGid = getgrnam('tree');
# Create your directory search object
my $find = File::OFind->new(STAT => 1, $directory);
# Now keep looping and examining each file
while($find->Next) {
next if ($find->Uid != $myUid);
next if ($find->Gid != $myGid);
next if ($find->Type ne "f"); #Is this a file?
print $find->Name . " is owned by group and user tree\n";
}
The builtin Perl functions you will need to accomplish this task include getpwnam, getgrnam, and stat.
($name,$passwd,$uid,$gid,
$quota,$comment,$gcos,$dir,$shell,$expire) = getpwnam 'tree';
will return a lot of useful information about the user tree. For this task you will be particularly interested in the $uid field. Likewise,
($name,$passwd,$gid,$members) = getgrnam 'tree';
retrieves data about the group tree. You will be most interested in the $gid field. Finally, the stat function
($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,
$atime,$mtime,$ctime,$blksize,$blocks)
= stat($filename);
returns a 13-element array with system information about a file (or a directory). For your task, you are looking for files such that the user and group ids returned from stat($filename) match the user and group ids returned from getpwnam and getgrnam.
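Putting those three together, a minimal sketch that uses the core File::Find module to report matching files under the current directory ('.' is just an example starting point):
use File::Find;

my $uid = getpwnam('tree');
my $gid = getgrnam('tree');

find(sub {
    return unless -f $_;
    my ($file_uid, $file_gid) = (stat $_)[4, 5];    # fields 4 and 5 are uid and gid
    print "$File::Find::name\n" if $file_uid == $uid && $file_gid == $gid;
}, '.');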
File::Find::Rule makes this clean and simple:
use File::Find::Rule;
my $uid_tree = getpwnam('tree');
my $gid_tree = getgrnam('tree');
my @files =
File::Find::Rule
->file()
->uid($uid_tree)
->gid($gid_tree)
->in('.');
Ref:
File::Find::Rule
getpw* and getgr*