Handling Perforce message in Perl when there are no new files submitted - perl

I am trying to code a Perl subroutine that returns an array of files that have been modified and submitted to the Perforce repository from $previous_date until now. This is what the subroutine looks like:
sub p4_files {
    my ($previous_date) = @_;
    my $files = "//depot/project/design/...rtl.sv";
    my $p4cmd = "p4 files -e $files\#$previous_date,\#now";
    my @filelist = `$p4cmd`;
    chomp @filelist;
    return @filelist;
}
The subroutine works as expected if there are files submitted between the given dates. However, it can happen that no new changes were made, in which case the p4 files command returns a message instead:
prompt% p4 files -e //depot/project/design/...rtl.sv\#25/05/2017,\#now
//depot/project/design/...rtl.sv#25/05/2017,#now - no revision(s) after that date.
How should I handle this in my Perl script? I would like to exit the script when such a situation is encountered.

Unfortunately, p4 returns exit code 0 regardless of whether it finds some files or whether it returns the "no revision(s) after that date" message. That means you have to parse the output.
The simplest solution is probably to exit the script if $filelist[0] =~ / - no revision\(s\) after that date\./. The downside is that we don't know how "stable" that message is. Will future versions of Perforce emit this message exactly, or might they reword it?
Another option is to use the -s switch: my $p4cmd = "p4 -s files -e $files\#$previous_date,\#now";. That causes p4 to prepend the "severity" before every line of output. If a file is found, the line will start with info:, while the "no revision(s) after that date" message will start with error:. That looks a bit more stable to me: exit if grep /^error:/, @filelist. Watch out for the last line; when you use the -s switch, you get an extra line with the exit code.
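For illustration, a minimal sketch of that approach, reusing $files and $previous_date from the question (untested; the exact prefixes and the trailing exit line are whatever your p4 version emits, and the /r substitution flag needs Perl 5.14+):

my $p4cmd  = "p4 -s files -e $files\#$previous_date,\#now";
my @output = `$p4cmd`;
chomp @output;

# Bail out if p4 reported an error line such as
# "error: ... - no revision(s) after that date."
exit 1 if grep { /^error:/ } @output;

# Keep only the info: lines and strip the prefix; this also drops the
# trailing "exit: 0" line that -s appends.
my @filelist = map { s/^info:\s*//r } grep { /^info:/ } @output;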
Yet another option would be to use P4Perl. In that case you'd get the results as structured data, which obviates the parsing. That's arguably the most elegant, but you'd need to install the P4Perl module first.
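For illustration, a rough sketch of what that could look like, assuming the P4 module's Connect/Run/ErrorCount/Errors interface (field names and the exact way the "no revisions" case is reported may differ between P4Perl and server versions):

use strict;
use warnings;
use P4;

sub p4_files {
    my ($previous_date) = @_;
    my $files = "//depot/project/design/...rtl.sv";

    my $p4 = P4->new;
    $p4->Connect() or die "Failed to connect to Perforce server\n";

    # Tagged output: each result should be a hashref rather than a line to parse.
    my @results = $p4->Run( "files", "-e", "$files#$previous_date,#now" );

    # Depending on the server, "no revision(s) after that date" may show up
    # via Errors() or simply as an empty result list, so check both.
    if ( $p4->ErrorCount or !@results ) {
        warn "$_\n" for $p4->Errors;
        $p4->Disconnect;
        exit 1;
    }

    $p4->Disconnect;
    return map { ref $_ eq 'HASH' ? $_->{depotFile} : $_ } @results;
}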

I suggest using the -F flag to tame the output:
my $p4cmd = "p4 -F %depotFile% files -e $files\#$previous_date,\#now";
and then go ahead with:
my @filelist = `$p4cmd`;
good_bye() unless @filelist; # Say goodbye and exit.
@filelist will be empty if there are no lines of output containing a %depotFile% field, and now your caller doesn't need to try to parse the depot path out of the standard p4 files output.
If you want to massage the p4 files output further, take a look at p4 -e files (args) so you can see what the different fields are that you can plug into -F.
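Putting that together with the original subroutine, a possible rewrite might look like this (a sketch, not verified against a live server; good_bye() is the exit routine suggested above):

sub p4_files {
    my ($previous_date) = @_;
    my $files = "//depot/project/design/...rtl.sv";
    my $p4cmd = "p4 -F %depotFile% files -e $files\#$previous_date,\#now";

    my @filelist = `$p4cmd`;
    good_bye() unless @filelist;   # no revisions in that range: say goodbye and exit
    chomp @filelist;
    return @filelist;              # bare depot paths, one per element
}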

Just do nothing if the array isn't populated.
my @filelist = `$p4cmd`;
good_bye() unless @filelist; # Say goodbye and exit.
chomp @filelist;
To suppress the message, just redirect stderr of the command to a bitbucket:
my $p4cmd = "p4 files -e $files\#$previous_date,\#now 2> /dev/null";

Why is perltidy going to stdout?

I have a bash command, get-modified-perl-files, that returns all the Perl files I have modified in my repository. I would like to use perltidy on all of these files.
I created a bash function to do the job:
tidy() {
    for f in `get-modified-perl-files`
    do
        echo $f
        perltidy -b $f
    done
}
According to the help page of perltidy, the -b option should create a backup of my original file and modify it in-place:
-b backup original to .bak and modify file in-place
However, when I launch my bash function, no backup is created. My files are not modified, but the output of perltidy is printed on the standard output. As a consequence, I decided to change my call to perltidy this way:
\cp $f $f.bak
perltidy $f > $f
Now, when I run my command, the backup of my file is correctly done, but the original file is emptied, and the following message is displayed:
skipping file: file.pl: Zero size
I've found a workaround which gives the result I want, but it seems far-fetched:
\cp -f $f $f.bak
echo "$(perltidy $f)" > $f
Why doesn't the -b option work? Is there a way to do the same job without using this weird redirection?
EDIT: Here is my .perltidyrc file:
--perl-best-practices
--no-standard-error-output
--closing-side-comments
--closing-side-comment-interval=10
--blanks-before-subs
--blanks-before-blocks
--maximum-line-length=130
By default perltidy does not print the file contents to STDOUT. To do so requires the -st option (or --standard-output). Since you are not using this option on the perltidy command line, there is likely a .perltidyrc file with -st in it that is being used.
To ignore the .perltidyrc file, use the -npro (--noprofile) option:
perltidy -npro -b $f
Refer to the "Using a .perltidyrc command file" section of the man page for your installed version:
perldoc perltidy
For additional debug information, you can run:
perltidy -dpro
perltidy -dop
Another possibility is that you aliased the perltidy command to perltidy -st. You should be able to avoid an alias with:
\perltidy -npro -b $f
Now that you edited your Question to show your .perltidyrc file, it looks like the culprit is:
--perl-best-practices
Either change the rc file, or ignore it as above.
See also Perltidy always prints to standard out
perltidy $f > $f
This will never do what you want, with any program. When you run a program with > $f, that tells the shell that you want the program to run with its stdout connected to $f. So before the program is run, the shell opens $f for writing, which destroys the contents of the file. Then it connects the handle to stdout in the child, then it runs perltidy, which tries to read $f and finds... nothing, because the original contents were already wiped out. Not a recipe for success. This is why perltidy has its own "in-place editing" feature in the first place.
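For comparison, here is a minimal sketch in Perl of how safe "in-place" editing generally has to work: write the result somewhere else first, then swap the files. The filenames and the no-op transform are purely illustrative:

use strict;
use warnings;
use File::Copy qw(move);

my $file = 'file.pl';

open my $in,  '<', $file       or die "Can't read $file: $!";
open my $out, '>', "$file.tmp" or die "Can't write $file.tmp: $!";

while (my $line = <$in>) {
    # ... transform $line here; this is where a tidier would do its work ...
    print {$out} $line;
}

close $in;
close $out or die "Can't finish writing $file.tmp: $!";

move($file, "$file.bak")  or die "Can't back up $file: $!";   # like perltidy -b
move("$file.tmp", $file)  or die "Can't replace $file: $!";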

Perl removing the files using system command returns success always

I have a script which takes filenames (with their full paths) as arguments and deletes them from the system.
Here is the code:
#!/usr/bin/perl
use strict; use warnings;
warn "No arguments/file names passed to the script: $!\n" unless @ARGV;
my $count = 0;
foreach (@ARGV) {
    my $cmd = "rm -rf $_";
    my $exit_code = system($cmd);
    if ($exit_code != 0) {
        print "Command $cmd failed with an exit code of $exit_code.\n";
        exit($exit_code >> 8);
    } else {
        print "Command $cmd successful!\n";
        $count++;
    }
}
print "Out of ".scalar(@ARGV)." file(s) ".$count." file(s) deleted\n";
I have two questions:
Here if I pass a dummy file, say a file which doesn't exist, it gives me $exit_code as 0. How is that possible? Shouldn't it return an exit code other than 0?
When I delete the files the Perl way, with unlink $_;, they aren't deleted. How can I forcefully delete them using unlink?
Here if I pass a dummy file, say a file which doesn't exist, it gives me
$exit_code as 0. How is that possible? Shouldn't it return an exit code
other than 0?
You are using rm with the -f option. From the man page of rm:
-f, --force
ignore nonexistent files and arguments, never prompt
With this option, as far as I know, you will always get a return code of 0 when trying to remove a file that does not exist.
When I delete the files the Perl way, with unlink $_;, they aren't deleted.
How can I forcefully delete them using unlink?
There are lots of reasons a file might not be deleted: it may have been made immutable, the sticky bit may be set on the directory containing it (and you are not the owner of the file), or the user running your script may simply not have write permission on the directory that contains the file. The point is that none of this has anything to do with unlink. You have to have the proper permissions before removing a file using any method at all, whether it's rm or unlink etc.
I like to use rmtree from File::Path. No need to shell out at all to get a recursive delete.
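For illustration, a sketch of the pure-Perl version, assuming the paths are passed in @ARGV as in the question (the messages are illustrative):

use strict;
use warnings;
use File::Path qw(remove_tree);

my $count = 0;
for my $path (@ARGV) {
    if (-d $path) {
        # remove_tree handles directories recursively, like rm -r
        remove_tree($path, { error => \my $errors });
        if ($errors && @$errors) {
            warn "Could not remove $path\n";
            next;
        }
    }
    elsif (!unlink $path) {
        # Unlike rm -f, this tells you why, e.g. "No such file or directory"
        warn "Could not unlink $path: $!\n";
        next;
    }
    $count++;
}
print "Deleted $count of ", scalar(@ARGV), " path(s)\n";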
As BryanK already answered, 0 is the expected exit code with the -f option. When you run into these issues, test the command in the shell to see whether the behaviour comes from Perl (or whatever you're calling it from) or from the command itself. The exit value of the command shows up in $? (the shell version, which is why Perl's variable has the same name):
$ rm -rf test_dir
$ echo $?
0

perl script to add line of code only modifies one file

I have this:
perl -pi -e 'print "code I want to insert\n" if $. == 2' *.php
which puts the line code I want to insert on the second line of the file, which is what I need done to every single PHP file.
If I run it in a directory with both PHP files and non-PHP files it does the right thing, but only to one PHP file. I thought *.php would apply it to all PHP files, but it doesn't.
How can I write it so it will modify every PHP file in a directory? Bonus if there is an easy way to do this recursively through all directories. I don't mind running the Perl script for each directory as there aren't that many, but don't want to hand edit every single file.
The problem is that the file handle ARGV that Perl uses to read the files passed on the command line is never explicitly closed, so the line number $. just keeps incrementing after the end of the first file and never goes back to one.
Fix this by closing ARGV when it has reached end of file. Perl will reopen it to read the next file in the list, and so reset $.
perl -i -pe 'print "code I want to insert\n" if $. == 2; close ARGV if eof' *.php
If you can use sed, this should work:
sed -si '2i\CODE YOU WANT TO INSERT' *.php
To do it recursively, you might try:
find -name '*.php' -execdir sed -si '2i\CODE YOU WANT TO INSERT' '{}' +
Using File::Find.
Note, I've included 3 sanity checks to verify that things are actually being processed the way that you want.
Initially the script will just print out the found files until you comment out the bare return.
Then the script will save backups unless you uncomment the unlink statement.
Finally, the script will only process a single file until you comment out the exit statement.
These three checks are just so you can verify that everything is working as you desire before editing a whole directory tree.
use strict;
use warnings;
use File::Find;

my $to_insert = "code I want to insert\n";

find(sub {
    return unless -f && /\.php$/;

    print "Edit $File::Find::name\n";
    return; # Comment out once satisfied with found files

    local $^I = '.bak';
    local @ARGV = $_;

    while (<>) {
        print $to_insert if $. == 2 && $_ ne $to_insert;
        print;
    }

    # unlink "$_$^I"; # Uncomment to delete backups once certain that the first file is processed correctly.

    exit; # Comment out once certain that the first file is processed correctly
}, '.');

Copy output and extract number from it in perl

I am working on a Perl script in which I run a command and get output like: your id is <895162>. I want to store this string and read only the number from it. The problem is that my main command runs in the shell, using the system command from Perl.
like:
#ids.csh is "echo your id is <1123221>"
my $p = system ("./ids.csh 2>&1 > /dev/null");
print "$p\n";
$p =~ s/[^0-9]//g;
but the output is not being copied into $p. Where am I going wrong?
system runs a command but doesn't capture its output. For that, you want qx/backticks:
my $p = `./ids.csh 2>/dev/null`;
As Len Jaffe said, you probably want to throw away stderr output (rather than displaying it to your screen or wherever your stderr is going), but not stdout (that contains the message you want to capture).
Note that when qx fails, it can do so for several different reasons and constructing a meaningful error message is not trivial. If you run into problems, consider using IPC::System::Simple's capture() instead.
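For example, a minimal sketch using capture() (the <...> pattern below is an assumption based on the sample output shown in the question):

use strict;
use warnings;
use IPC::System::Simple qw(capture);

# capture() returns stdout and dies with a useful message if the command
# can't be run or exits non-zero.
my $output = capture("./ids.csh 2>/dev/null");

# Pull the number out of "your id is <895162>".
my ($id) = $output =~ /<(\d+)>/;
print "$id\n" if defined $id;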
You have redirected the command's stdout to /dev/null, which means the output you want to capture is being discarded.
I think you probably mean:
./ids.csh 2>/dev/null
Which will redirect stderr to /dev/null while leaving stdout unchanged.

Inserting headers into multiple files

I found a Perl command line that inserts headers into my files without going through the tedious process of inserting them one by one. Can someone walk me through the Perl aspects of this command line? I'm new to this and can't seem to find the right explanations for what I have.
cat header.txt | perl -0 -i -pe 'BEGIN{$h = <STDIN>}; print $h' 1*
-e
rather than provide a script in a xxxx.pl file, provide it on the command line
-p
makes it iterate over filename arguments somewhat like sed but also prints the contents of $_ at the end of the script.
the two above are combined in -pe
-i
indicates that you want to edit the file in place and write the output to the same file. In practice, Perl renames the input file and reads from this renamed version while writing to a new file with the original name
-0
redefines the end of record character (\n by default) so that you can read the entire input file as a single line
1*
is the command line argument to your script, so I guess you are modifying any file with a name that starts with 1 (you could have used *.c, or whatever depending on the type of files you are trying to modify)
print $h
prints the variable $h; this is the "main" of your script. If it was initialized with the contents of the header file (the intent of this one-liner), then it will print the header file
BEGIN{ some code here }
this is stuff you execute before the script starts. This is where I'm stumped: this doesn't seem like valid Perl code
so basically:
this will supposedly slurp the entire header file (because of -0) in the BEGIN block and store it in the variable $h
iterate over all the files specified by the wildcards at the end of the command line
for each file: print the header (print $h) then print the file itself (because of -pe)
so it's equivalent to spelling the script out:
$h = content of the entire header file   # pseudo-code for the BEGIN block
while (<>) {   # loop implied by -pe, iterates over all the 1* files
    # the main contents of the "-e" script are inserted below as part of executing -pe
    print $h;  # print the header we saved
    print $_;  # implied by -pe, and since we are using -0, this prints the entire file content in one shot
    # end of the "-e" script. again it was a single print $h statement, the second print is implied by -pe
}
It's a bit hard to explain, take a look at the perlrun documentation for details (run man perlrun).
This is not a 100% complete explanation because I don't think the BEGIN block is right. I tried it on my Ubuntu machine and it complained about the syntax too.
Here's something similar, with an explanation. The program in the question doesn't run on my Mac.
I needed to add the #nullable disable directive to the top of all my csharp files as part of migrating to nullable reference types.
perl -w -i -p -0777 -e 's/^/#nullable disable\n\n/' $(find . -iname '*.cs')
-w enable warnings
-i edit files in place
-p read each file block by block, printing each block after applying a Perl expression. The default block size is one line
-0777 changes the default block size to the entire file
-e the perl expression to execute
The final argument uses shell command substitution to create a list of files. It passes that list of file paths to the perl command. The find command searches for files that end in .cs.
The perl program is a single substitution command. It matches the very beginning of the block and replaces (prepends, really) with "#nullable disable" and a couple of newlines.
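For anyone curious, here is roughly what that one-liner expands to as a plain script (a sketch; the real -p loop adds extra error handling, and -w is approximated with use warnings), assuming the .cs paths are passed as arguments:

# Roughly equivalent to: perl -w -i -p -0777 -e 's/^/#nullable disable\n\n/' FILES...
use warnings;

$^I = '';      # -i: edit each file in place, no backup suffix
$/  = undef;   # -0777: slurp each file whole, so ^ matches only once per file

while (<>) {                       # iterate over the file paths in @ARGV
    s/^/#nullable disable\n\n/;    # prepend the directive and a blank line
    print;                         # -p prints each (modified) record
}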