Perl removing files using the system command always returns success - perl

I have a script which takes filenames (with their full paths) as arguments and deletes them from the system.
Here is the code:
#!/usr/bin/perl
use strict; use warnings;
warn "No arguments/file names passed to the script: $!\n" unless @ARGV;
my $count = 0;
foreach (@ARGV) {
    my $cmd = "rm -rf $_";
    my $exit_code = system($cmd);
    if ($exit_code != 0) {
        print "Command $cmd failed with an exit code of $exit_code.\n";
        exit($exit_code >> 8);
    } else {
        print "Command $cmd successful!\n";
        $count++;
    }
}
print "Out of ".scalar(@ARGV)." file(s) ".$count." file(s) deleted\n";
I have two questions:
Here if I pass a dummy file, say a file which doesn't exist, it gives me $exit_code as 0. How is that possible? Shouldn't it return an exit code other than 0?
When I delete the files the Perl way with unlink $_; it doesn't delete them. How can I forcefully delete using the unlink command?

Here if I pass a dummy file, say a file which doesn't exist, it gives me $exit_code as 0. How is that possible? Shouldn't it return an exit code other than 0?
You are using rm with the -f option. From the man page of rm:
-f, --force
ignore nonexistent files and arguments, never prompt
With this option, as far as I know, you will always get a return code of 0 when trying to remove a file that does not exist.
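You can demonstrate that from Perl itself. A small sketch (the path is a made-up example and GNU rm behaviour is assumed):
#!/usr/bin/perl
use strict; use warnings;
# With -f, rm ignores the missing file and exits 0
my $status = system("rm -rf /tmp/no_such_file_hopefully");
print "rm -rf exit status: ", $status >> 8, "\n";    # prints 0
# Without -f, rm reports the error and exits non-zero
$status = system("rm /tmp/no_such_file_hopefully 2>/dev/null");
print "rm exit status: ", $status >> 8, "\n";        # prints 1 on GNU rm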
When I delete the files the Perl way with unlink $_; it doesn't delete them. How can I forcefully delete using the unlink command?
There are lots of reasons a file will not delete: it may have been set immutable, the sticky bit may be set on the directory containing the files (and you are not their owner), or the user running your script may simply lack write permission on the directory holding the files. The point is that none of this has to do with unlink. You have to have the proper permissions before removing a file by any method at all, whether it's rm or unlink.
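The practical advantage of unlink is that it tells you why a deletion failed via $!. A minimal sketch along the lines of your script:
#!/usr/bin/perl
use strict; use warnings;
my $count = 0;
foreach my $file (@ARGV) {
    if (unlink $file) {
        $count++;
    } else {
        # $! explains the failure, e.g. "Permission denied" or "No such file or directory"
        warn "Could not unlink $file: $!\n";
    }
}
print "Deleted $count of ", scalar(@ARGV), " file(s)\n";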

I like to use rmtree from File::Path. No need to shell out at all to get a recursive delete.
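A minimal sketch of that approach; rmtree deletes files or whole directory trees and returns the number of items it removed:
use strict; use warnings;
use File::Path qw(rmtree);
for my $path (@ARGV) {
    my $removed = rmtree($path);          # recursive delete, no shell involved
    warn "Nothing removed for $path\n" unless $removed;
}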
As BryanK already answered, 0 is the expected exit code with the -f option. When you run into these issues, test the command in the shell to see whether the problem is Perl (or whatever) or the command itself. The exit value of the command shows up in $? (the shell version, which is why Perl's variable has the same name):
$ rm -rf test_dir
$ echo $?
0

Related

How best (idiomatically) to fail perl script (run with -n/-p) when input file not found?

$ perl -pe 1 foo && echo ok
Can't open foo: No such file or directory.
ok
I'd really like the perl script to fail when the file does not exist. What's the "proper" way to make -p or -n fail when the input file does not exist?
The -p switch is just a shortcut for wrapping your code (the argument following -e) in this loop:
LINE:
while (<>) {
    ...             # your program goes here
} continue {
    print or die "-p destination: $!\n";
}
(-n is the same but without the continue block.)
The <> empty operator is equivalent to readline *ARGV, and that opens each argument in succession as a file to read from. There's no way to influence the error handling of that implicit open, but you can make the warning it emits fatal (note, this will also affect several warnings related to the -i switch):
perl -Mwarnings=FATAL,inplace -pe 1 foo && echo ok
Set a flag in the body of the loop, check the flag in the END block at the end of the oneliner.
perl -pe '$found = 1; ... ;END {die "No file found" unless $found}' -- file1 file2
Note that it only fails when no file was processed.
To report the problem when not all files have been found, you can use something like
perl -pe 'BEGIN{ $files = @ARGV } $found++ if eof; ... ;END {die "Some files not found" unless $files == $found}'

Handling Perforce message in Perl when there are no new files submitted

I am trying to code a Perl subroutine that returns an array of files that have been modified and submitted to the Perforce repository from $previous_date until now. This is what the subroutine looks like:
sub p4_files {
    my ($previous_date) = @_;
    my $files = "//depot/project/design/...rtl.sv";
    my $p4cmd = "p4 files -e $files\@$previous_date,\@now";
    my @filelist = `$p4cmd`;
    chomp @filelist;
    return @filelist;
}
The subroutine works as expected if there are files submitted between the given dates. However, it happens that no new changes are made, and executing the p4 files command returns a message instead:
prompt% p4 files -e //depot/project/design/...rtl.sv\@25/05/2017,\@now
//depot/project/design/...rtl.sv\@25/05/2017,\@now - no revision(s) after that date.
How should I handle this in my Perl script? I would like to exit the script when such a situation is encountered.
Unfortunately, p4 returns exit code 0 regardless of whether it finds some files or whether it returns the "no revision(s) after that date" message. That means you have to parse the output.
The simplest solution is probably to exit the script if $filelist[0] =~ / - no revision\(s\) after that date\./. The downside is that we don't know how "stable" that message is: will future versions of Perforce emit exactly this message, or might they reword it?
Another option is to use the -s switch: my $p4cmd = "p4 -s files -e $files\@$previous_date,\@now";. That causes p4 to prepend the "severity" before every line of output. If a file is found, the line will start with info:, while the "no revision(s) after that date" message will start with error:. That looks a bit more stable to me: exit if grep /^error:/, @filelist. Watch out for the last line; when you use the -s switch, you get an extra line with the exit code.
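A rough sketch of the whole subroutine using that approach (the trailing exit: line that -s appends is dropped along with everything that isn't info:):
sub p4_files {
    my ($previous_date) = @_;
    my $files = "//depot/project/design/...rtl.sv";
    my @output = `p4 -s files -e $files\@$previous_date,\@now`;
    chomp @output;
    die "No revisions after $previous_date\n" if grep { /^error:/ } @output;
    # Keep only the info: lines and strip the severity prefix
    return map { s/^info:\s*//; $_ } grep { /^info:/ } @output;
}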
Yet another option would be to use P4Perl. In that case you'd get the results as structured data, which will obviate the parsing. That's arguably the most elegant, but you'd need the P4Perl module first.
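Untested, but roughly what the P4Perl route looks like; the P4 module's Connect/Run/Disconnect methods and the depotFile field are assumptions based on the standard P4Perl API, with connection settings taken from the environment:
use strict; use warnings;
use P4;
my $p4 = P4->new;
$p4->Connect or die "Failed to connect to Perforce server\n";
# Run returns structured results (one hashref per file), so there is nothing to parse
my $results = $p4->Run('files', '-e', "//depot/project/design/...rtl.sv\@25/05/2017,\@now");
die "No revisions after that date\n" unless @$results;
print "$_->{depotFile}\n" for @$results;
$p4->Disconnect;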
I suggest using the -F flag to tame the output:
my $p4cmd = "p4 -F %depotFile% files -e $files\#$previous_date,\#now";
and then go ahead with:
my @filelist = `$p4cmd`;
good_bye() unless @filelist; # Say goodbye and exit.
@filelist will be empty if there are no lines of output containing a %depotFile% field, and now your caller doesn't need to try to parse the depot path out of the standard p4 files output.
If you want to massage the p4 files output further, take a look at p4 -e files (args) so you can see what the different fields are that you can plug into -F.
Just do nothing if the array isn't populated.
my @filelist = `$p4cmd`;
good_bye() unless @filelist; # Say goodbye and exit.
chomp @filelist;
To suppress the message, just redirect stderr of the command to the bit bucket:
my $p4cmd = "p4 files -e $files\@$previous_date,\@now 2> /dev/null";

Perl script that invokes shell command doesn't work

I am writing a simple Perl program to test a shell script for changing directory. But it doesn't work.
This is my code :
$result = `cd /`;
print $result;
It works fine when I use
$result = `dir`;
If you need to change the current working directory in your script, then you should use Perl's built-in chdir function.
perldoc -f chdir
cd (by default) doesn't output anything, so you're assigning an empty string to your $result variable.
If you want to output the (full) path of the directory you changed to, simply append && pwd inside the backticks:
$result = `cd / && pwd`;
Note that `...` creates a child process for running the shell with the specified command, so whatever environment changes you perform there - including changing the directory - do NOT affect the Perl script itself.
In other words: you're NOT changing the Perl script's current directory with your shell command.
If your intent is:
to simply test whether the shell command you enclose in `...` succeeds or not, use the system() function instead; e.g.:
system('cd /') == 0 || die "Command failed";
to capture the output from the shell command, presume it to be a directory path and change the Perl script's working directory to it:
$result = `cd / && pwd` || die "Command failed.";
chomp $result; # trim trailing newline
# Change Perl script's working dir.
chdir $result or die "Could not change to $result.";
To affect the current working directory of the perl process, use the chdir() function.
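A minimal sketch combining chdir with Cwd::getcwd to confirm where the script ended up:
use strict; use warnings;
use Cwd qw(getcwd);
# Change the Perl process's own working directory, dying with the reason on failure
chdir '/' or die "Cannot chdir to /: $!\n";
print "Now in ", getcwd(), "\n";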

perl script to add line of code only modifies one file

I have this:
perl -pi -e 'print "code I want to insert\n" if $. == 2' *.php
which puts the line "code I want to insert" on the second line of the file, which is what I need done to every single PHP file.
If I run it in a directory with both PHP files and non-PHP files it does the right thing, but only to one PHP file. I thought *.php would apply it to all PHP files, but it doesn't.
How can I write it so it will modify every PHP file in a directory? Bonus if there is an easy way to do this recursively through all directories. I don't mind running the Perl script for each directory as there aren't that many, but don't want to hand edit every single file.
The problem is that the file handle ARGV that Perl uses to read the files passed on the command line is never explicitly closed, so the line number $. just keeps incrementing after the end of the first file and never goes back to one.
Fix this by closing ARGV when it has reached end of file. Perl will reopen it to read the next file in the list, and so reset $.
perl -i -pe 'print "code I want to insert\n" if $. == 2; close ARGV if eof' *.php
If you can use sed, this should work:
sed -si '2i\CODE YOU WANT TO INSERT' *.php
To do it recursively, you might try:
find -name '*.php' -execdir sed -si '2i\CODE YOU WANT TO INSERT' '{}' +
Using File::Find.
Note, I've included 3 sanity checks to verify that things are actually being processed the way that you want.
Initially the script will just print out the found files until you comment out the bare return.
Then the script will save backups unless you uncomment the unlink statement.
Finally, the script will only process a single file until you comment out the exit statement.
These three checks are just so you can verify that everything is working as you desire before editing a whole directory tree.
use strict;
use warnings;
use File::Find;

my $to_insert = "code I want to insert\n";

find(sub {
    return unless -f && /\.php$/;

    print "Edit $File::Find::name\n";
    return; # Comment out once satisfied with found files

    local $^I = '.bak';
    local @ARGV = $_;

    while (<>) {
        print $to_insert if $. == 2 && $_ ne $to_insert;
        print;
    }

    # unlink "$_$^I"; # Uncomment to delete backups once certain that first file is processed correctly.
    exit; # Comment out once certain that first file is processed correctly
}, '.');

Unix commands in Perl?

I'm very new to Perl, and I would like to make a program that creates a directory and moves a file into that directory, using Unix commands like:
mkdir test
Which I know would make a directory called "test". From there I would like to give more options like:
mv *.jpg test
That would move all .jpg files into my new directory.
So far I have this:
#!/usr/bin/perl
print "Folder Name:";
$fileName = <STDIN>;
chomp($fileType);
$result = `mkdir $fileName`;
print "Your folder was created \n";
Can anyone help me out with this?
Try doing this :
#!/usr/bin/perl
use strict; use warnings;
print "Folder Name:";
my $dirName = <STDIN>;
chomp($dirName);
mkdir($dirName) && print "Your folder was created \n";
rename $_, "$dirName/$_" for <*.jpg>;
You will have better control using built-in Perl functions than shelling out to Unix commands. That's the point of my snippet.
Most (if not all) Unix commands have a corresponding version as a Perl function, e.g.
mkdir - see here
mv - see here
Etc. Either get a printout of the various manual pages, or take a trip down to the book shop - the O'Reilly Nutshell book is quite good, along with others.
In Perl you can run shell commands in backticks. However, what happens when the directory isn't created by the mkdir command? Your program doesn't get notified and continues on its merry way, thinking that everything is fine.
You should use the built-in Perl functions that do the same thing:
http://perldoc.perl.org/functions/mkdir.html
http://perldoc.perl.org/functions/rename.html
It is much easier to trap errors with those functions and fail gracefully. In addition, they run faster because you don't have to fork a new process for each command you run.
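For instance, a minimal sketch of the task from the question with error checking on each call (the directory name test and the *.jpg pattern are taken from the question):
use strict;
use warnings;

my $dir = 'test';
# mkdir returns false and sets $! on failure, so we can die with a useful message
mkdir $dir or die "Cannot create directory '$dir': $!\n";

# Move every .jpg into the new directory, reporting any that fail
for my $file (glob '*.jpg') {
    rename $file, "$dir/$file"
        or warn "Cannot move '$file' into '$dir': $!\n";
}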
Perl has some functions similar to those of the shell. You can just use
mkdir $filename;
You can use backquotes to run a shell command, but that is only useful if the command writes something to its standard output, which mkdir does not. For commands without output, use system:
0 == system "mv *.jpg $folder" or die "Cannot move: $?";