When writing a traditional Unix/Linux filter program, Perl provides the diamond operator <>. I'm trying to understand how to test whether no arguments were passed at all, so the script doesn't sit in a wait loop on STDIN when it shouldn't.
#!/usr/bin/perl
# Reading @ARGV when a pipe or redirect is used on the command line
use warnings;
use strict;

while ( defined (my $line = <ARGV>) ) {
print "$ARGV: $. $line" if ($line =~ /eof/) ; # an example
close(ARGV) if eof;
}
sub usage {
    print <<"END_USAGE";
Usage:
$0 file
$0 < file
cat file | $0
END_USAGE
    exit();
}
A few test runs show that <> works, but with no arguments the script just sits waiting for STDIN input, which is not what I want.
$ cat grab.pl | ./grab.pl
-: 7 print "$ARGV: $. $line" if ($line =~ /eof/) ; # an example
-: 8 close(ARGV) if eof;
$ ./grab.pl < grab.pl
-: 7 print "$ARGV: $. $line" if ($line =~ /eof/) ; # an example
-: 8 close(ARGV) if eof;
$ ./grab.pl grab.pl
grab.pl: 7 print "$ARGV: $. $line" if ($line =~ /eof/) ; # an example
grab.pl: 8 close(ARGV) if eof;
$ ./grab.pl
^C
$ ./grab.pl
[Ctrl-D]
$
My first thought was to test $#ARGV, which holds the index of the last element of @ARGV. So I added a test to the script above, before the while loop, like so:
if ( $#ARGV < 0 ) {    # $#ARGV is initialized to -1 by perl
    usage();
}
This did not produce the desired results: $#ARGV is -1 for the redirect and pipe cases as well. Running with this check in place (grabchk.pl), the problem merely changes, and I can no longer read the file content via <> in the pipe and redirect cases.
$ ./grabchk.pl grab.pl
grab.pl: 7 print "$ARGV: $. $line" if ($line =~ /eof/) ;
grab.pl: 8 close(ARGV) if eof;
$ ./grabchk.pl < grab.pl
Usage:
./grabchk.pl file
./grabchk.pl < file
cat file | ./grabchk.pl
$ cat grab.pl | ./grabchk.pl
Usage:
./grabchk.pl file
./grabchk.pl < file
cat file | ./grabchk.pl
Is there a better test to find all the command line parameters passed to perl by the shell?
You can use the file test operator -t to check whether the STDIN filehandle is opened to a TTY.
So if it is open to a terminal and there are no arguments, you display the usage text:
if ( -t STDIN and not @ARGV ) {
    # print usage and exit
}
Use the -t operator to check whether STDIN is connected to a tty. When you use a pipe or shell redirection it returns false, so you can write:
if ( -t STDIN and not @ARGV ) { exit usage(); }
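Put together with the original loop, the check might look something like this (a sketch; the one-line usage message is just for illustration):
#!/usr/bin/perl
use warnings;
use strict;

# Bail out only when STDIN is a terminal AND no file arguments were given;
# pipes, redirects and named files all still reach the read loop below.
usage() if -t STDIN and not @ARGV;

while ( defined(my $line = <ARGV>) ) {
    print "$ARGV: $. $line" if $line =~ /eof/;
    close(ARGV) if eof;    # reset $. for the next file
}

sub usage {
    print "Usage: $0 file | $0 < file | cat file | $0\n";
    exit;
}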
I am looking for a Perl one-liner (to insert into a Bash script), and I need the following interface:
perl -0777 -nlE 'commands' file1 file2 .... fileN
I created the following:
perl -0777 -lnE 'BEGIN{$str=quotemeta(do{local(@ARGV, $/)="file1"; <>})} say "working on $ARGV" if $_ =~ /./' "$@"
Prettier:
perl -0777 -lnE '
    BEGIN {
        $str = quotemeta(
            do {
                local(@ARGV, $/) = "file1";   # localize @ARGV to "file1" for <>
                <>
            }
        )
    }
    say "working on $ARGV" if $_ =~ /./       # demo-only action
' "$@"
It works, but this way I need to edit the source code every time file1 changes.
How do I change the script to the following?
Slurp the $ARGV[0] (file1) into $str (in the BEGIN block)
And slurp the other arguments into $_ in the main loop
Pass it as an argument, removing it from @ARGV in the BEGIN block.
$ echo foo >refile
$ echo -ne 'foo\nbar\nfood\nbaz\n' >file1
$ echo -ne 'foo\nbar\nfood\nbaz\n' >file2
$ perl -lnE'
    BEGIN {
        # Consume the first argument with <> inside a localized @ARGV;
        # the restored @ARGV then holds only file1 and file2 for the main loop.
        local @ARGV = shift(@ARGV);
        $re = join "|", map quotemeta, <>;
    }
    say "$ARGV:$.:$_" if /$re/;
    close(ARGV) if eof;  # Reset $.
' refile file1 file2
file1:1:foo
file1:3:food
file2:1:foo
file2:3:food
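Since the goal was to embed this in a Bash script, the same one-liner can take the wrapper's own arguments instead of hard-coded file names. A sketch, where the wrapper name and layout are only illustrative and the pattern file is expected as the first argument:
#!/bin/bash
# usage: ./grepall.sh patternfile file1 file2 ... fileN   (the script name is hypothetical)
perl -lnE '
    BEGIN {
        local @ARGV = shift(@ARGV);          # consume the first argument (the pattern file)
        $re = join "|", map quotemeta, <>;   # build one alternation from its lines
    }
    say "$ARGV:$.:$_" if /$re/;
    close(ARGV) if eof;                      # reset $. per file
' "$@"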
Can anyone explain the difference in output of the two Perl commands (run under Cygwin) below?
$ echo abc | perl -n -e 'if ($_ =~ /a/) {print 1;}'
prints :
1
$ echo abc | perl -e 'if ($_ =~ /a/) {print 1;}'
The first prints '1' while the second one outputs nothing. Why is that?
Thanks
The -n switch adds a while loop around your code, so in your case $_ is populated from standard input. In the second example there is no while loop, so $_ is left undefined.
With B::Deparse you can ask perl to show how your code is parsed:
perl -MO=Deparse -n -e 'if ($_ =~ /a/) {print 1;}'
LINE: while (defined($_ = <ARGV>)) {
if ($_ =~ /a/) {
print 1;
}
}
perl -MO=Deparse -e 'if ($_ =~ /a/) {print 1;}'
if ($_ =~ /a/) {
print 1;
}
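If you want the second form to behave like the first without -n, you have to read STDIN yourself; a minimal sketch:
echo abc | perl -e '$_ = <STDIN>; if ($_ =~ /a/) { print 1; }'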
I am learning Perl on the command line; please help me. What are the Perl equivalents of the following awk commands?
awk '{for(i=1;i<=NF;i++)printf i < NF ? $i OFS : $i RS}' file
awk '!x[$0]++' file
awk 'FNR==NR{A[$0];next}($0 in A)' file1 file2
awk 'FNR==NR{A[$1]=$5 OFS $6;next}($1 in A){print $0,A[$1];delete A[$1]}' file1 file1
Please someone help me...
Try the awk-to-Perl translator, a2p. For example:
$ echo awk '!x[$0]++' file | a2p
#!/usr/bin/perl
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
if $running_under_some_shell;
# this emulates #! processing on NIH machines.
# (remove #! line above if indigestible)
eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
# process any FOO=bar switches
while (<>) {
    chomp; # strip record separator
    print $_ if $awk;print $_ if !($X{$_}++ . $file);
}
You can ignore the boilerplate at the beginning and see the meat of the Perl in the while loop. The translation is seldom perfect (even in this simple example, the generated code omits newlines), but it usually provides a reasonable approximation.
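For this particular recipe you don't really need a2p at all; the usual hand-written Perl equivalent of awk '!x[$0]++' (print each line only the first time it is seen) is simply:
perl -ne 'print unless $seen{$_}++' file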
Another example (the one Peter is having trouble with in the comments):
$ echo '{for(i=1;i<=NF;i++)printf( i < NF ? ( $i OFS ) : ($i RS))}' | a2p
#!/usr/bin/perl
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
if $running_under_some_shell;
# this emulates #! processing on NIH machines.
# (remove #! line above if indigestible)
eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
# process any FOO=bar switches
$, = ' '; # set output field separator
while (<>) {
    chomp; # strip record separator
    @Fld = split(' ', $_, -1);
    for ($i = 1; $i <= ($#Fld+1); $i++) {
        printf (($i < ($#Fld+1) ? ($Fld[$i] . $,) : ($Fld[$i] . $/)));
    }
}
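Here too a hand-written one-liner is much shorter than the a2p output. Assuming the intent of the original awk is just to print each line's whitespace-separated fields joined by single spaces, the following sketch should behave the same:
perl -lane 'print "@F"' file
The -a switch autosplits each line into @F, -l appends the newline on output, and interpolating @F inside double quotes joins the fields with single spaces.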
I'm looking for a simple/elegant way to grep a file such that every returned line must match every line of a pattern file.
With input file
acb
bc
ca
bac
And pattern file
a
b
c
The command should return
acb
bac
I tried to do this with grep -f, but that returns a line if it matches any single pattern in the file (not all of them). I also tried something with a recursive call to perl -ne (for each line of the pattern file, call perl -ne on the search file and try to grep in place), but I couldn't get the syntax to parse a call to perl from within perl, so I'm not sure that's possible.
I thought there's probably a more elegant way to do this, so I thought I'd check. Thanks!
===UPDATE===
Thanks for your answers so far, and sorry if I wasn't clear: I was hoping for just a one-line result (creating a script for this seems too heavyweight; I just wanted something quick). I've been thinking about it some more, and this is what I've come up with so far:
perl -n -e 'chomp($_); print " | grep $_ "' pattern | xargs echo "cat input"
which prints
cat input | grep a | grep b | grep c
This string is what I want to execute; I just need to somehow execute it now. I tried an additional pipe to eval:
perl -n -e 'chomp($_); print " | grep $_ "' pattern | xargs echo "cat input" | eval
Though that gives the message:
xargs: echo: terminated by signal 13
I'm not sure what that means?
One way using perl:
Content of input:
acb
bc
ca
bac
Content of pattern:
a
b
c
Content of script.pl:
use warnings;
use strict;
## Check arguments.
die qq[Usage: perl $0 <input-file> <pattern-file>\n] unless @ARGV == 2;

## Open files.
open my $pattern_fh, qq[<], pop @ARGV or die qq[ERROR: Cannot open pattern file: $!\n];
open my $input_fh, qq[<], pop @ARGV or die qq[ERROR: Cannot open input file: $!\n];

## Variable to save the regular expression.
my $str;

## Read patterns to match, and create a regex, with each string in a positive
## look-ahead.
while ( <$pattern_fh> ) {
    chomp;
    $str .= qq[(?=.*$_)];
}
my $regex = qr/$str/;

## Read each line of data and test if the regex matches.
while ( <$input_fh> ) {
    chomp;
    printf qq[%s\n], $_ if m/$regex/o;
}
Run it like:
perl script.pl input pattern
With following output:
acb
bac
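The same positive look-ahead idea also fits on one line, which is closer to what the question asked for; a sketch using the same input and pattern file names as above:
perl -ne 'BEGIN { open my $p, "<", "pattern" or die $!; $re = join "", map { chomp; "(?=.*\Q$_\E)" } <$p> } print if /$re/' input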
Using Perl, I suggest you read all the patterns into an array and compile them. Then you can read through your input file using grep to make sure all of the regexes match.
The code looks like this
use strict;
use warnings;
open my $ptn, '<', 'pattern.txt' or die $!;
my @patterns = map { chomp(my $re = $_); qr/$re/ } grep /\S/, <$ptn>;
open my $in, '<', 'input.txt' or die $!;
while (my $line = <$in>) {
    print $line unless grep { $line !~ $_ } @patterns;
}
output
acb
bac
Another way is to read all the input lines and then start filtering by each pattern:
#!/usr/bin/perl
use strict;
use warnings;
open my $in, '<', 'input.txt' or die $!;
my @matches = <$in>;
close $in;
open my $ptn, '<', 'pattern.txt' or die $!;
for my $pattern (<$ptn>) {
    chomp($pattern);
    @matches = grep(/$pattern/, @matches);
}
close $ptn;
print @matches;
output
acb
bac
Not grep and not a one liner...
MFILE=file.txt
PFILE=patterns
i=0
while read line; do
    let i++
    pattern=$(head -$i $PFILE | tail -1)
    if [[ $line =~ $pattern ]]; then
        echo $line
    fi
    # (or use sed instead of bash regex:
    # echo $line | sed -n "/$pattern/p")
done < $MFILE
A bash (Linux) based solution:
#!/bin/sh
INPUTFILE=input.txt #Your input file
PATTERNFILE=patterns.txt # file with patterns
# replace new line with '|' using awk
PATTERN=`awk 'NR==1{x=$0;next}NF{x=x"|"$0}END{print x}' "$PATTERNFILE"`
PATTERNCOUNT=`wc -l <"$PATTERNFILE"`
# build regex of style :(a|b|c){3,}
PATTERN="($PATTERN){$PATTERNCOUNT,}"
egrep "${PATTERN}" "${INPUTFILE}"
Here's a grep-only solution:
#!/bin/sh
foo ()
{
    FIRST=1
    cat pattern.txt | while read line; do
        if [ $FIRST -eq 1 ]; then
            FIRST=0
            echo -n "grep \"$line\""
        else
            echo -n " | grep \"$line\""
        fi
    done
}
STRING=`foo`
eval "cat input.txt | $STRING"
How can I write a Perl script to convert a text file to all upper case letters?
perl -ne "print uc" < input.txt
The -n wraps your command-line script (the one supplied with -e) in a while loop. uc returns the ALL-UPPERCASE version of the default variable $_, and what print does, well, you know that yourself. ;-)
The -p is just like -n, but it does a print in addition. Again, acting on the default variable $_.
To store that in a script file:
#!perl -n
print uc;
Call it like this:
perl uc.pl < in.txt > out.txt
$ perl -pe '$_= uc($_)' input.txt > output.txt
perl -pe '$_ = uc($_)' input.txt > output.txt
But then you don't even need Perl if you're using Linux (or *nix). Some other ways are:
awk:
awk '{ print toupper($0) }' input.txt >output.txt
tr:
tr '[:lower:]' '[:upper:]' < input.txt > output.txt
$ perl -Tpe " $_ = uc; " --
$ perl -MO=Deparse -Tpe " $_ = uc; " -- a s d f
LINE: while (defined($_ = <ARGV>)) {
$_ = uc $_;
}
continue {
die "-p destination: $!\n" unless print $_;
}
-e syntax OK
$ cat myprogram.pl
#!/usr/bin/perl -T --
LINE: while (defined($_ = <ARGV>)) {
$_ = uc $_;
}
continue {
die "-p destination: $!\n" unless print $_;
}