Perl command line - assumed while loop around code

Can anyone explain the difference in output of the two Perl commands below (using Cygwin):
$ echo abc | perl -n -e 'if ($_ =~ /a/) {print 1;}'
prints:
1
$ echo abc | perl -e 'if ($_ =~ /a/) {print 1;}'
The first prints '1' while the second one outputs nothing. Why?
Thanks

The -n switch adds a while loop around your code, so in your case $_ is populated from standard input. In the second example there is no while loop, so $_ is left undefined.
Using B::Deparse you can ask perl to show how your code is parsed:
perl -MO=Deparse -n -e 'if ($_ =~ /a/) {print 1;}'
LINE: while (defined($_ = <ARGV>)) {
    if ($_ =~ /a/) {
        print 1;
    }
}
perl -MO=Deparse -e 'if ($_ =~ /a/) {print 1;}'
if ($_ =~ /a/) {
    print 1;
}

Related

Perl one-liner that slurps @ARGV

I am looking for a Perl one-liner (to insert into a Bash script), and I need the following interface:
perl -0777 -nlE 'commands' file1 file2 .... fileN
I came up with the following:
perl -0777 -lnE 'BEGIN{$str=quotemeta(do{local(@ARGV, $/)="file1"; <>})} say "working on $ARGV" if $_ =~ /./' "$@"
Prettier:
perl -0777 -lnE '
    BEGIN{
        $str = quotemeta(
            do{
                local(@ARGV, $/) = "file1"; <>   # localize @ARGV to "file1" for <>
            }
        )
    }
    say "working on $ARGV" if $_ =~ /./          # demo-only action
' "$@"
It works, but I have to edit the source code every time I need to change file1.
How do I change the script to do the following?
Slurp the $ARGV[0] (file1) into $str (in the BEGIN block)
And slurp the other arguments into $_ in the main loop
Pass it as an argument, removing it from @ARGV in the BEGIN block.
$ echo foo >refile
$ echo -ne 'foo\nbar\nfood\nbaz\n' >file1
$ echo -ne 'foo\nbar\nfood\nbaz\n' >file2
$ perl -lnE'
    BEGIN {
        local @ARGV = shift(@ARGV);
        chomp( my @pat = <> );
        $re = join "|", map quotemeta, @pat;
    }
    say "$ARGV:$.:$_" if /$re/;
    close(ARGV) if eof;  # Reset $.
' refile file1 file2
file1:1:foo
file1:3:food
file2:1:foo
file2:3:food
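An alternative sketch (same refile/file1/file2 setup as above) that avoids localizing @ARGV by opening the pattern file directly in the BEGIN block:
perl -lnE'
    BEGIN {
        open my $fh, "<", shift(@ARGV) or die "open: $!";
        chomp( my @pat = <$fh> );
        $re = join "|", map quotemeta, @pat;
    }
    say "$ARGV:$.:$_" if /$re/;
    close(ARGV) if eof;  # Reset $.
' refile file1 file2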

Awk Equivalent in perl

I am learning Perl on the command line. Please help me:
what are the Perl equivalents of the following awk commands?
awk '{for(i=1;i<=NF;i++)printf i < NF ? $i OFS : $i RS}' file
awk '!x[$0]++' file
awk 'FNR==NR{A[$0];next}($0 in A)' file1 file2
awk 'FNR==NR{A[$1]=$5 OFS $6;next}($1 in A){print $0,A[$1];delete A[$1]}' file1 file1
Please someone help me...
Try the awk-to-Perl translator, a2p. For example:
$ echo '!x[$0]++' | a2p
#!/usr/bin/perl
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
    if $running_under_some_shell;
        # this emulates #! processing on NIH machines.
        # (remove #! line above if indigestible)

eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
        # process any FOO=bar switches

while (<>) {
    chomp;  # strip record separator
    print $_ if !$X{$_}++;
}
You can ignore the boilerplate at the beginning and see the meat of the Perl in the while loop. The translation is seldom perfect (even in this simple example, the Perl code omits newlines), but it usually provides a reasonable approximation.
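For comparison, the idiomatic hand-written Perl for awk '!x[$0]++' (print each line only the first time it is seen) is simply:
perl -ne 'print unless $seen{$_}++' file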
Another example (the one Peter is having trouble with in the comments):
$ echo '{for(i=1;i<=NF;i++)printf( i < NF ? ( $i OFS ) : ($i RS))}' | a2p
#!/usr/bin/perl
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
    if $running_under_some_shell;
        # this emulates #! processing on NIH machines.
        # (remove #! line above if indigestible)

eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
        # process any FOO=bar switches

$, = ' ';   # set output field separator

while (<>) {
    chomp;  # strip record separator
    @Fld = split(' ', $_, -1);
    for ($i = 1; $i <= ($#Fld+1); $i++) {
        printf (($i < ($#Fld+1) ? ($Fld[$i] . $,) : ($Fld[$i] . $/)));
    }
}
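Hand-written sketches for two of the other one-liners: the field-joining awk maps onto the -a/-l autosplit switches, and the FNR==NR membership test can rely on @ARGV still holding the second file name while the first file is being read (this assumes exactly two files, as in the awk version):
# awk '{for(i=1;i<=NF;i++)printf i < NF ? $i OFS : $i RS}' file
perl -lane 'print join " ", @F' file

# awk 'FNR==NR{A[$0];next}($0 in A)' file1 file2
perl -ne 'if (@ARGV) { $seen{$_}++ } else { print if $seen{$_} }' file1 file2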

How to add blank line after every grep result using Perl?

How to add a blank line after every grep result?
For example, grep -o "xyz" may give something like -
file1:xyz
file2:xyz
file2:xyz2
file3:xyz
I want the output to be like this -
file1:xyz

file2:xyz
file2:xyz2

file3:xyz
I would like to do something like
grep "xyz" | perl (code to add a new line after every grep result)
This is the direct answer to your question:
grep 'xyz' | perl -pe 's/$/\n/'
But this is better:
perl -ne 'print "$_\n" if /xyz/'
EDIT
Ok, after your edit, you want (almost) this:
grep 'xyz' * | perl -pe 'print "\n" if /^([^:]+):/ && ! $seen{$1}++'
If you don’t like the blank line at the beginning, make it:
grep 'xyz' * | perl -pe 'print "\n" if /^([^:]+):/ && ! $seen{$1}++ && $. > 1'
NOTE: This won’t work right on filenames with colons in them. :)
If you want to use perl, you could do something like
grep "xyz" | perl -p -e 's/(.*)/\1\n/g'
If you want to use sed (where I seem to have gotten better results), you could do something like
grep "xyz" | sed 's/.*/\0\n/g'
This prints a newline after every single line of grep output:
grep "xyz" | perl -pe 'print "\n"'
This prints a newline in between results from different files. (Answering the question as I read it.)
grep 'xyz' * | perl -pe '/(.*?):/; if ($f ne $1) {print "\n"; $f=$1}'
Use a state machine to determine when to print a blank line:
#!/usr/bin/env perl
use strict;
use warnings;

# state variable to determine when to print a blank line
my $prev_file = '';

# change DATA to the appropriate input file handle
while( my $line = <DATA> ){
    # did the state change?
    if( my ( $file ) = $line =~ m{ \A ([^:]*) \: .*? xyz }msx ){
        # blank lines between states
        print "\n" if $file ne $prev_file && length $prev_file;
        # set the new state
        $prev_file = $file;
    }
    # print every line
    print $line;
}
__DATA__
file1:xyz
file2:xyz
file2:xyz2
file3:xyz
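The same per-file state machine condensed into a one-liner (a sketch; it assumes grep output on STDIN and filenames without colons):
grep 'xyz' * | perl -ne 'if (/^([^:]*):/) { print "\n" if defined $prev && $1 ne $prev; $prev = $1 } print'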

Finding pipe and redirects in perl @ARGV

For writing a traditional Unix/Linux filter program, Perl provides the diamond operator <>. I'm trying to understand how to test whether no input was passed at all, to avoid the script sitting in a wait loop for STDIN when it should not.
#!/usr/bin/perl
# Reading @ARGV when pipe or redirect on the command line
use warnings;
use strict;

while ( defined (my $line = <ARGV>)) {
    print "$ARGV: $. $line" if ($line =~ /eof/) ; # an example
    close(ARGV) if eof;
}

sub usage {
    print << "END_USAGE" ;
Usage:
$0 file
$0 < file
cat file | $0
END_USAGE
    exit();
}
A few test runs show that <> works, but with no arguments the script is held waiting for STDIN input, which is not what I want.
$ cat grab.pl | ./grab.pl
-: 7 print "$ARGV: $. $line" if ($line =~ /eof/) ; # an example
-: 8 close(ARGV) if eof;
$ ./grab.pl < grab.pl
-: 7 print "$ARGV: $. $line" if ($line =~ /eof/) ; # an example
-: 8 close(ARGV) if eof;
$ ./grab.pl grab.pl
grab.pl: 7 print "$ARGV: $. $line" if ($line =~ /eof/) ; # an example
grab.pl: 8 close(ARGV) if eof;
$ ./grab.pl
^C
$ ./grab.pl
[Ctrl-D]
$
My first thought was to test $#ARGV, which holds the index of the last argument in @ARGV. So I added a test to the above script, before the while loop, like so:
if ( $#ARGV < 0 ) { # initialized to -1 by perl
    usage();
}
This did not produce the desired results: $#ARGV is -1 for a redirect or a pipe on the command line as well. Running with this check (grabchk.pl), the problem changed, and now I can't read the file content through <> in the pipe or redirect cases.
$ ./grabchk.pl grab.pl
grab.pl: 7 print "$ARGV: $. $line" if ($line =~ /eof/) ;
grab.pl: 8 close(ARGV) if eof;
$ ./grabchk.pl < grab.pl
Usage:
./grabchk.pl file
./grabchk.pl < file
cat file | ./grabchk.pl
$ cat grab.pl | ./grabchk.pl
Usage:
./grabchk.pl file
./grabchk.pl < file
cat file | ./grabchk.pl
Is there a better test to find all the command line parameters passed to perl by the shell?
You can use the file test operator -t to check whether the STDIN file handle is open to a TTY.
If it is open to a terminal and there are no arguments, then you display the usage text.
if ( -t STDIN and not @ARGV ) {
    # print usage and exit
}
Use the -t operator to check whether STDIN is connected to a tty. With a pipe or shell redirection it returns false, so you can write:
if ( -t STDIN and not @ARGV ){ exit Usage(); }
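Putting the check in front of the read loop from the question, a minimal self-contained sketch:
#!/usr/bin/perl
use warnings;
use strict;

# Show usage only when there is truly nothing to read:
# no file arguments and STDIN attached to a terminal.
usage() if -t STDIN and not @ARGV;

while ( defined (my $line = <ARGV>) ) {
    print "$ARGV: $. $line" if $line =~ /eof/;  # an example
    close(ARGV) if eof;                         # reset $. per file
}

sub usage {
    print "Usage:\n  $0 file\n  $0 < file\n  cat file | $0\n";
    exit;
}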

How to write a Perl script to convert file to all upper case?

How can I write a Perl script to convert a text file to all upper case letters?
perl -ne "print uc" < input.txt
The -n wraps your command-line script (which is supplied by -e) in a while loop. uc returns the ALL-UPPERCASE version of the default variable $_, and what print does, well, you already know. ;-)
The -p is just like -n, but it does a print in addition. Again, acting on the default variable $_.
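So for this task the two switches are interchangeable; a quick illustration (uc defaults to $_ in both cases):
perl -ne 'print uc' input.txt    # -n: loop, print explicitly
perl -pe '$_ = uc' input.txt     # -p: loop and print $_ automatically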
To store that in a script file:
#!perl -n
print uc;
Call it like this:
perl uc.pl < in.txt > out.txt
perl -pe '$_ = uc($_)' input.txt > output.txt
But then you don't even need Perl if you're using Linux (or *nix). Some other ways are:
awk:
awk '{ print toupper($0) }' input.txt >output.txt
tr:
tr '[:lower:]' '[:upper:]' < input.txt > output.txt
$ perl -Tpe " $_ = uc; " --
$ perl -MO=Deparse -Tpe " $_ = uc; " -- a s d f
LINE: while (defined($_ = <ARGV>)) {
    $_ = uc $_;
}
continue {
    die "-p destination: $!\n" unless print $_;
}
-e syntax OK
$ cat myprogram.pl
#!/usr/bin/perl -T --
LINE: while (defined($_ = <ARGV>)) {
    $_ = uc $_;
}
continue {
    die "-p destination: $!\n" unless print $_;
}