Tilde (~) directories in Perl

I found a slight misbehaviour in my Perl script when I create and check for the existence of directories with a tilde sign, which doesn't happen if I use a full /home/user path. When I run this script for the first time, it creates the new directory. When I run it the second time, it doesn't recognise the existence of the directory, and tries to create it a second time:
#!/usr/bin/perl
use strict;
my $outdir = '~/test';
my $cmd = "mkdir $outdir";
unless (-d $outdir) {
    0 == system($cmd) or die "Error creating outdir $outdir\n $?";
}
1;
[~] $ rm test/ -rf
[~] $ perl dir.pl
[~] $ perl dir.pl
mkdir: cannot create directory `/home/avilella/test': File exists
Error creating outdir ~/test
256 at dir.pl line 7.
How can I reliably deal with directories that use the tilde ~ sign in Perl?

The tilde is interpreted by the shell to mean your home directory.
Hence Perl's -d operator sees something different (a literal file or directory called ~) from your shell invocation 'mkdir ~/whatever' (where the shell expands ~ to /home/user).
I would use Perl's built-in functions exclusively for these operations. You'll avoid spawning new processes, and your file access will be performed in a consistent fashion.
Note Perl's mkdir built-in function. Note also the File::Glob module, which does perform expansion of the ~ character (perhaps useful if you have users entering directory names manually).
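For example, a minimal sketch using File::Glob's bsd_glob, whose default flags include tilde expansion, to resolve ~ before testing or creating the directory:
use strict;
use warnings;
use File::Glob ':bsd_glob';

# bsd_glob expands ~ itself, so -d and mkdir see the real path
my $outdir = bsd_glob('~/test');
unless (-d $outdir) {
    mkdir $outdir or die "Error creating $outdir: $!\n";
}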

You can get the home directory from %ENV, the hash of environment variables imported from the shell:
my $home = $ENV{HOME};
You should also know that mkdir is a Perl built-in function:
mkdir "$home/test" or die "Cannot create test: $!";

~ is interpreted by the shell that is invoked by the system function. It's the shell that replaces ~ with the user's home directory. As far as Perl or the kernel is concerned, ~ is just a file or directory with a one-character name, with no special meaning. So the test done by -d fails, because there's no directory literally called ~.
If you'd used Perl's built-in mkdir function rather than calling an external command via a shell, you would have had an error at that point, because the parent directory ~ doesn't exist.
The user's home directory is almost always available in the environment variable HOME. If you like, you can fall back to querying the user database if HOME is not present, but that's an abnormal situation. Do use the HOME environment variable if it is present, because it is sometimes useful to change it to run a program with different configuration files, and the environment variable is always available in practice whereas the user database could be unavailable due to network trouble in some configurations (e.g. NIS or LDAP).
my $home_directory = $ENV{HOME};
if (!defined $home_directory) {
    # getpwuid returns the full passwd entry in list context; field 7 is the home directory
    $home_directory = (getpwuid($<))[7];
}
my $outdir = "$home_directory/test";
unless (-d $outdir) {
    mkdir $outdir or die "Error creating $outdir: $!\n";
}

Your script can't create a directory that already exists. That's the error you showed us:
[~] $ rm test/ -rf
[~] $ perl dir.pl
[~] $ perl dir.pl
mkdir: cannot create directory `/home/avilella/test': File exists
Error creating outdir ~/test
256 at dir.pl line 7.
The problem is your delete line:
[~] $ rm test/ -rf
Like most commands, the correct syntax is:
[~] $ <command> <options> <parameters>
so it would be:
[~] $ rm -rf test/

Is there an equivalent of shell's "pwd -L" in perl?
I want the current working directory with symlinks left unresolved.
My current working directory is "/path1/dir1/dir2/dir3", where dir1 is a symlink to test1/test2. I want a Perl script to report the current working directory as "/path1/dir1/dir2/dir3", but what I get is /path1/test1/test2/dir2/dir3.
How can I get the current working directory with no symlinks resolved? In other words, I want to implement the shell's pwd -L.
Use the Perl backtick operator to run the pwd -L command on your system and capture the output into a variable. This works on my system:
perl -e 'chomp( my $pwdl = `pwd -L` ); print "$pwdl\n";'
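Spelled out as a small script, with a basic check that the command succeeded (a sketch; the backticks inherit your environment, including PWD):
use strict;
use warnings;

# capture the shell's pwd -L output and strip the trailing newline
chomp( my $pwdl = `pwd -L` );
die "pwd -L failed\n" if $? != 0;
print "$pwdl\n";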
An attempt to replicate the behavior of bash's pwd builtin using just perl (In particular, with the aid of the Path::Tiny and core Cwd modules):
First, from help pwd in a bash shell:
-L print the value of $PWD if it names the current working directory
-P print the physical directory, without any symbolic links
(The GNU coreutils version of pwd(1) also reads the PWD environment variable for its implementation of -L, which is why running it with qx// works even though it doesn't have access to the shell's internal variables keeping track of the working directory and path taken to it)
$ pwd -P # First, play with absolute path with symlinks resolved
/.../test1/test2/dir2/dir3
$ perl -MCwd -E 'say getcwd'
/.../test1/test2/dir2/dir3
$ perl -MPath::Tiny -E 'say Path::Tiny->cwd'
/.../test1/test2/dir2/dir3
$ pwd -L # Using $PWD to preserve the symlinks
/.../dir1/dir2/dir3
$ /bin/pwd -L
/.../dir1/dir2/dir3
$ PWD=/foo/bar /bin/pwd -L # Try to fake it out
/.../test1/test2/dir2/dir3
$ perl -MPath::Tiny -E 'my $pwd = path($ENV{PWD}); say $pwd if $pwd->realpath eq Path::Tiny->cwd'
/.../dir1/dir2/dir3
As a function (with some added checks so it can handle a missing $PWD environment var or one that points to a non-existent path):
#!/usr/bin/env perl
use strict;
use warnings;
use feature qw/say/;
use Path::Tiny;
sub is_same_file ($$) {
    my $s1 = $_[0]->stat;
    my $s2 = $_[1]->stat;
    return $s1->dev == $s2->dev && $s1->ino == $s2->ino;
}

sub get_working_dir () {
    my $cwd = Path::Tiny->cwd;
    # $ENV{PWD} must exist and be non-empty
    if (exists $ENV{PWD} && $ENV{PWD} ne "") {
        my $pwd = path($ENV{PWD});
        # And must point to a directory that is the same filesystem entity as cwd
        return $pwd->is_dir && is_same_file($pwd, $cwd) ? $pwd : $cwd;
    } else {
        return $cwd;
    }
}
say get_working_dir;
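If pulling in Path::Tiny is undesirable, roughly the same check can be sketched with core modules only, comparing device and inode numbers from stat (an illustration, not a drop-in replacement):
#!/usr/bin/env perl
use strict;
use warnings;
use Cwd qw(getcwd);

# Trust $ENV{PWD} only if it names the same directory as the
# physical working directory returned by getcwd().
sub logical_cwd {
    my $cwd = getcwd();
    my $pwd = $ENV{PWD};
    return $cwd unless defined $pwd && $pwd ne '' && -d $pwd;
    my @a = stat $pwd;
    my @b = stat $cwd;
    return $cwd unless @a && @b;
    # dev and ino together identify a filesystem entity
    return ($a[0] == $b[0] && $a[1] == $b[1]) ? $pwd : $cwd;
}

print logical_cwd(), "\n";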

How to ignore read-only files with `perl -i`?

Perl’s -i switch appears to modify read-only files:
$ echo 'foobar' > tmp.txt
$ chmod -w tmp.txt
$ perl -pi -w -e 's/foobar/FOOBAR/' tmp.txt
$ cat tmp.txt
FOOBAR
This is unexpected, as the command should not have been able to modify the file given its permissions. As expected, trying to update it by other means fails:
$ echo 'barbaz' > tmp.txt
-bash: tmp.txt: Permission denied
Why is Perl modifying read-only files (and how?), and, most importantly: how can I get Perl to not do so?
The only somewhat informative resource I can find on this is in the Perl FAQ:
The permissions on a file say what can happen to the data in that file. … If you try to write to the file, the permissions of the file govern whether you're allowed to.
This ultimately seems like it's saying Perl shouldn't be able to write to the file, since the file system says you cannot.
Filter @ARGV in a BEGIN block:
perl -pi -e 'BEGIN{@ARGV=grep{-w $_}@ARGV} s/foobar/FOOBAR/' files
Now if none of the files on the command line are writable, @ARGV will be empty and the ARGV filehandle will try to read from STDIN. I can think of two ways to keep this from being a problem:
Close STDIN in the BEGIN block, too:
perl -pi -e 'BEGIN{close STDIN;@ARGV=grep{-w $_}@ARGV}s/foobar/FOOBAR/' files
Always call this one-liner redirecting input from /dev/null:
perl -pi -e 'BEGIN{@ARGV=grep{-w $_}@ARGV}s/foobar/FOOBAR/' files < /dev/null
See the documentation in perlrun:
renaming the input file, opening the output file by the original name, and selecting that output file as the default for print() statements
(...)
For a discussion of issues surrounding file permissions and -i, see "Why does Perl let me delete read-only files? Why does -i clobber protected files? Isn't this a bug in Perl?" in perlfaq5.
From perlrun:
-i
specifies that files processed by the <> construct are to be edited in-place. It does this by renaming the input file, opening the output file by the original name, and selecting that output file as the default for print() statements.
So it doesn't really modify the file. It moves the file out of the way (which requires write permission on the directory, not on the file) and then creates a new one with the old name.
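A rough sketch of that mechanism, following the perlrun description for an explicit backup suffix (illustrative only; the real logic lives inside perl, which also adjusts permissions on the new file):
use strict;
use warnings;

my $file = 'tmp.txt';                              # illustrative filename
rename $file, "$file.bak" or die "rename: $!";     # needs write permission on the directory, not the file
open my $in,  '<', "$file.bak" or die "open: $!";
open my $out, '>', $file       or die "open: $!";  # a brand-new file, so the old read-only bit never applies
while (<$in>) {
    s/foobar/FOOBAR/;
    print {$out} $_;
}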
how can I get Perl to not do so?
I don't think you can when you use -i.

How can I make a shell script indicate that it was successful?

If I have a basic .sh file containing the following script code:
#!/bin/sh
rm -rf "MyFolder"
How do I make this running script file display results to the terminal that will indicate if the directory removal was successful?
You don't really need to make it say it was successful. You could have it say something only on error ✖, and then silence means success ✔.
That's how the Unix philosophy works:
The rule of silence, also referred to as the silence is golden rule, is an important part of the Unix philosophy that states that when a program has nothing surprising, interesting or useful to say, it should say nothing. It means that well-behaved programs should treat their users' attention and concentration as being valuable and thus perform their tasks as unobtrusively as possible. That is, silence in itself is a virtue. http://www.linfo.org/rule_of_silence.html
That's the way rm itself behaves.
If you are asking about the general case, as suggested by your question's title, you can run your script with sh -x scriptname to see what it's doing. It's also quite common to write diagnostic output into the script itself, and control it with an option.
#!/bin/sh
verbose=false
case $1 in
    -v | --verbose )
        verbose=true
        shift ;;
esac

dir="MyFolder"   # the directory to remove; hardcoded here for the example

say () {
    $verbose || return
    echo "$0: $@" >&2
}

say "Removing $dir ..."
rm -rf "$dir" || say "Failed."
If you run this script without any options, it will run silently, like a well-behaved Unix utility should. If you run it with the -v option, it will print some diagnostics to standard error.
rm -rf "My Folder" && echo "Done" || echo "Error!"
You can read more about creating sequences of pipelines in the Bash manual.
In bash (and other similar shells), the special parameter $? gives you the exit status of the last executed command. So you can do:
#!/bin/sh
rm -rf "My Folder"
echo $?
UPDATE
If, once the rm command has been executed, the directory doesn't exist (because it was successfully removed, or because it didn't exist in the first place), the script will print 0. If the directory still exists (which means the command was unable to remove it), the script will print an exit code other than 0. If I understand the question properly, this is exactly the requested behavior. If it is not, please correct me.
The previous answers were wrong: with -f, rm doesn't exit with an error code when the directory isn't present. Instead, I recommend using:
dir='/path/to/dir'
if [[ -d $dir ]]; then
rm -rf "$dir"
fi
If you want rm to return a status, remove the -f flag.
Example on Linux Mint (the directory doesn't exist):
$ rm -rf /tmp/sdfghjklm
$ echo $?
0
$ rm -r /tmp/sdfghjklm
$ echo $?
1

How to check if a Perl script doesn't have any compilation errors?

I am calling many Perl scripts in my Bash script (sometimes from csh also).
At the start of the Bash script I want to put a test which checks if all the Perl scripts are devoid of any compilation errors.
One way of doing this would be to actually call the Perl script from the Bash script and grep for "compilation error" in the piped log file, but this becomes messy as different Perl scripts are called at different points in the code, so I want to do this at the very start of the Bash script.
Is there a way to check if the Perl script has no compilation error?
Beware!!
Using the command below to check for compilation errors in your Perl program can be dangerous.
$ perl -c yourperlprogram
Randal has written a very nice article on this topic which you should check out:
Sanity-checking your Perl code (Linux Magazine Column 91, Mar 2007)
Quoting from his article:
Probably the simplest thing we can tell is "is it valid?". For this, we invoke perl itself, passing the compile-only switch:
perl -c ourprogram
For this operation, perl compiles the program, but stops just short of the execution phase. This means that every part of the program text is translated into the internal data structure that represents the working program, but we haven't actually executed any code. If there are any syntax errors, we're informed, and the compilation aborts.
Actually, that's a bit of a lie. Thanks to BEGIN blocks (including their layered-on cousin, the use directive), some Perl code may have been executed during this theoretically safe "syntax check". For example, if your code contains:
BEGIN { warn "Hello, world!\n" }
then you will see that message, even during perl -c! This is somewhat surprising to people who consider "compile only" to mean "executes no code". Consider code that contains:
BEGIN { system "rm", "-rf", "/" }
and you'll see the problem with that argument. Oops.
Apart from perl -c program.pl, it's also worth enabling warnings during the check by combining the switches:
perl -wc program.pl
For details see: http://www.perl.com/pub/2004/08/09/commandline.html
I use the following part of a bash function for larger Perl projects:
# foreach perl app in the src/perl dir
while read -r dir ; do
    echo -e "\n"
    echo "start compiling $dir ..."
    cd $product_instance_dir/src/perl/$dir

    # run the autoloader utility
    find . -name '*.pm' -exec perl -MAutoSplit -e 'autosplit($ARGV[0], $ARGV[1], 0, 1, 1)' {} \;

    # foreach perl file check the syntax by setting the correct INC dirs
    while read -r file ; do
        perl -MCarp::Always -I `pwd` -I `pwd`/lib -wc "$file"
        ret=$?
        # run the perltidy inline
        # perltidy -b "$file"
        # sleep 3
        test $ret -ne 0 && break 2
    done < <(find "." -type f \( -name "*.pl" -or -name "*.pm" \))

    test $ret -ne 0 && break
    echo "stop compiling $dir ..."
    echo -e "\n\n"
    cd $product_instance_dir
done < <(ls -1 "src/perl")
When you need to check errors/warnings before running, but your file depends on multiple other files, you can add the -I option:
perl -I /path/to/dependency/lib -c /path/to/file/to/check
Edit: from man perlrun:
Directories specified by -I are prepended to the search path for modules (@INC).
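Putting the pieces together, one way to fail fast at the top of a wrapper script is a small driver that compile-checks everything first. A sketch; the glob pattern and lib path are illustrative, and the BEGIN-block caveat above still applies:
#!/usr/bin/env perl
use strict;
use warnings;

my @scripts = glob 'scripts/*.pl';    # adjust to wherever your scripts live
my @broken;
for my $script (@scripts) {
    # perl -c exits non-zero when compilation fails
    system('perl', '-I', 'lib', '-c', $script) == 0
        or push @broken, $script;
}
die "Compilation failed for: @broken\n" if @broken;
print "All scripts compiled cleanly\n";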

Does perl's -i with no argument create a backup file on Cygwin?

I have a bug report from a reliable person that on Cygwin and Perl 5.14.2, using perl's -i switch with no value creates a .bak backup file. It shouldn't according to the documentation in perlrun:
If no extension is supplied, no backup is made and the current file is overwritten.
I don't have access to Cygwin at the moment. Does anyone else see this behavior? Can you explain it? Is it something about creating the backup file, which should only be a temporary file, and failing to remove it?
Here are the steps I suggest to recreate it. Remember, this is for Cygwin:
Create and change into empty directory
Create a text file in that directory. The contents are not important
Run perl -p -i -e 's/perl/Perl/g' filename
Check for a .bak file when you are done
Save the answers for an explanation of what might be happening if you find that backup file. Upvoting a prior comment for "Yes I see that" or "No, can't reproduce it" can be an informal poll.
perldoc perlcygwin sayeth (edited for clarity):
Because of Windows-ish restrictions, in-place editing of files with perl -i must create a backup of each file being edited. Therefore Perl adds the suffix .bak automatically (as though invoked with perl -i.bak) if you use perl -i with no explicit backup extension.
Arguably this information should be in perlport also.
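If the stray .bak files are a problem, one workaround is to skip -i entirely and do the rewrite in Perl. A sketch using Path::Tiny's edit method (assumes the file fits in memory; the filename is illustrative):
use strict;
use warnings;
use Path::Tiny;

# slurp, substitute on $_, and write back; no backup file is left behind
path('filename')->edit(sub { s/perl/Perl/g });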
Yes. For example:
# show we're in cygwin
% uname -a
CYGWIN_NT-6.1-WOW64 xzodin 1.7.15(0.260/5/3) 2012-05-09 10:25 i686 Cygwin
# show that directory is empty
% ls
# create a file
% touch foo
# invoke 'perl -pi' (but do nothing)
% perl -pi -e "" foo
# show that a backup file with extension '.bak' is created.
% ls
foo foo.bak