For example, my.txt contains
a
b
xx
c
d
I want to print starting from the second line below the line that contains xx.
I tried
perl -nle 'if(/xx/){$n=$.};print if $.>($n+1)' my.txt
But it didn't work. It just prints all lines.
Before $n is defined it evaluates to 0 (zero), so the condition is effectively $. > 1 and lines are printed even before xx is seen. This might be what you wanted:
perl -nle 'if(/xx/){$n=$.}; print if defined($n) and $. > $n+1' my.txt
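With the sample my.txt above, that should print only d, the second line below xx (and any lines after it, had the file been longer):
$ perl -nle 'if(/xx/){$n=$.}; print if defined($n) and $. > $n+1' my.txt
d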
I have a Perl command line that runs on Solaris (maybe this is irrelevant, as it is UNIX-like) and inserts a "wait" string every 6 lines:
perl -pe 'print "wait\n" if ($. % 6 == 0);' file
However, I want to replace that 6 with a parameter ($ARGV[0]), resulting in something like this:
perl -pe 'print "wait\n" if ($. % ARGV[0] == 0);' file 6
It goes well, giving me the right output, until it finishes reading the file and then treats "6" as the next file to read (even though it had understood it as $ARGV[0] before).
Is there any way to use the -p option and specify which parameters are files and which ones are not?
Edited: I thought there was a problem with using the -f option but, as @ThisSuitIsBlackNot pointed out, I was using it wrongly.
-p, as a superset of -n, wraps the code with a while (<>) { } loop, which reads from the files named on the command line. You need to extract the argument before entering the loop.
perl -e'$n = shift; while (<>) { print "wait\n" if $. % $n == 0; print }' 6 file
or
perl -pe'BEGIN { $n = shift } print "wait\n" if $. % $n == 0' 6 file
Alternatively, you could use an environment variable.
N=6 perl -pe'print "wait\n" if $. % $ENV{N} == 0' file
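As a quick sanity check (a sketch using N=3 and a hypothetical 7-line file containing the numbers 1 through 7), the wait line is emitted before every third input line, just as with the original hard-coded 6:
$ printf '%s\n' 1 2 3 4 5 6 7 > file
$ N=3 perl -pe'print "wait\n" if $. % $ENV{N} == 0' file
1
2
wait
3
4
5
wait
6
7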
Here is the example:
$ cat test.tsv
AAAATTTTCCCCGGGG foo
GGGGCCCCTTTTAAAA bar
$ perl -wne 'while(<STDIN>){ print $_;}' <test.tsv
GGGGCCCCTTTTAAAA bar
This should work like cat and not like tail -n +2. What is happening here? And what is the correct way?
The use of the -n option effectively creates this (taken from man perlrun):
while (<STDIN>) {
    while (<STDIN>) { print $_; }   #< your code
}
This shows two while(<STDIN>) loops. Both read from the same STDIN, one line at a time.
When you run with a test.tsv which is at least two lines long, the first (outer) use of while(<STDIN>) takes the first line, and the second (inner) one takes the second line - so your print statement is first passed the second line.
If you had more than two lines in test.tsv then the inner loop would print out all lines from the second line onwards.
The correct way to make this work is simply to rely on the -n option you pass to perl:
perl -wne 'print $_;' < test.tsv
This works because the -n switch implicitly puts your code inside a loop that goes through the file line by line. Remove the n from the list of switches, or (even better) remove your own loop from the code and leave only the print statement.
nbokor@nbokor:~/tmp$ perl -wne 'print $_;' <test.csv
AAAATTTTCCCCGGGG foo
GGGGCCCCTTTTAAAA bar
Remove the -n command line option. It duplicates the while(<STDIN>){ ... } loop.
$ perl -MO=Deparse -wne 'while(<STDIN>){ print $_;}'
BEGIN { $^W = 1; }
LINE: while (defined($_ = <ARGV>)) {
    while (defined($_ = <STDIN>)) {
        print $_;
    }
}
-e syntax OK
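For comparison, deparsing the fixed one-liner (the exact output may vary slightly between Perl versions) should show only the single implicit loop:
$ perl -MO=Deparse -wne 'print $_;'
BEGIN { $^W = 1; }
LINE: while (defined($_ = <ARGV>)) {
    print $_;
}
-e syntax OK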
I have the following query on the command line, and I would like the output values to show up on a single line so I can feed them to my monitoring system. I'm wondering how I can accomplish this with perl, sed, or awk.
My command line
activemq:query -QQueue=PCA --view QueueSize,ConsumerCount,EnqueueCount,DequeueCount
output
ConsumerCount = 1
QueueSize = 0
DequeueCount = 148248
EnqueueCount = 148248
Desired output
1 0 148248 148248
Thank you
Using command line switches is fun:
perl -anwe'print "$F[2] "'
-a autosplits the line on whitespace, which also strips the trailing newline from the last field. We print that field followed by a space.
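Fed the four output lines shown above, that should print the values space-separated (with a trailing space):
... | perl -anwe'print "$F[2] "'
1 0 148248 148248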
Pipe it to awk:
... | awk -F= '{printf "%s",$2}'
Output, as desired:
1 0 148248 148248
Here is another one. Keep pushing values onto an array, and print the entire array at the end. This gives you a newline at the end.
However, if your file is very large, this will not be ideal. In that case, go with TLP's crafty one-liner.
... | perl -lane 'push @a, $F[2] }{ print "@a"'
Perl version:
... | perl -l40 -ane 'print $F[2]'
or (Perl 5.8.8)
... | perl -ne 'chomp; split /=/; print $_[1]'
Note: Since Perl 5.12.0, "split() no longer modifies @_ when called in scalar or void context", so the second version will not work on Perl >= 5.12.0, but the first version should still work.
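On a newer Perl, a sketch of an equivalent that does not rely on split populating @_ (same input assumed) keeps the split in list context and takes the second field directly:
... | perl -ne 'chomp; print((split /=/)[1])'
This behaves the same on 5.8 and on 5.12+.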
Testing:
$ cat t00.txt
ConsumerCount = 1
QueueSize = 0
DequeueCount = 148248
EnqueueCount = 148248
$ cat t00.txt | perl -ne 'chomp; split /=/; print $_[1]'
1 0 148248 148248
$ cat t00.txt | perl -l40 -ane 'print $F[2]'
1 0 148248 148248
I have a file with many lines containing "x_y=XXXX", where XXXX can be a number from 0 to some N.
Now,
a) I would like to get only the XXXX part of every such line.
b) I would like to get the average
Possibly both of these as one-liners.
I am trying out something like
cat filename.txt | grep x_y | (this needs to be filled in)
I am not sure what to fill in there.
In the past I have used commands like
perl -pi -e 's/x_y/m_n/g'
to replace all the instances of x_y.
But now, I would like to match for x_y=XXXX and get the XXXX out and then possibly average it out for the entire file.
Any help on this will be greatly appreciated. I am fairly new to perl and regexes.
Timtowtdi (as usual).
perl -nE '$s+=$1, ++$n if /x_y=(\d+)/; END { say "avg:", $s/$n }' data.txt
The following should do:
... | grep 'x_y=' | perl -ne '$x += (split /=/, $_)[1]; $y++ }{ print $x/$y, "\n"'
The }{ is colloquially referred to as the eskimo operator; it works because of the code which -n places around the -e code (see perldoc perlrun).
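Roughly speaking, -n turns that }{ one-liner into something like this (a sketch of the expansion, not literal Deparse output):
while (<>) {
    $x += (split /=/, $_)[1];
    $y++;
}
{
    print $x/$y, "\n";
}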
Using awk:
/^[^_]+_[^=]+=[0-9]+$/ {sum=sum+$2; cnt++}
END {
print "sum:", sum, "items:", cnt, "avg:", sum/cnt
}
$ awk -F= -f cnt.awk data.txt
sum: 55 items: 10 avg: 5.5
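That output is what you would see if, for example, data.txt held the ten lines x_y=1 through x_y=10 (a hypothetical input, generated here only for illustration):
$ seq 1 10 | sed 's/^/x_y=/' > data.txt
$ awk -F= -f cnt.awk data.txt
sum: 55 items: 10 avg: 5.5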
Pure bash-solution:
#!/bin/bash
while IFS='=' read str num
do
if [[ $str == *_* ]]
then
sum=$((sum + num))
cnt=$((cnt + 1))
fi
done < data.txt
echo "scale=4; $sum/$cnt" | bc ;exit
Output:
$ ./cnt.sh
5.5000
As a one-liner, split up with comments.
perl -nlwe '
push @a, /x_y=(\d+)/g   # push all matches onto an array
}{                      # eskimo-operator, is evaluated last
$sum += $_ for @a;      # get the sum
print "Average: ", $sum / @a;   # divide by the size of the array
' input.txt
Will extract multiple matches on a line, if they exist.
Paste version:
perl -nlwe 'push @a, /x_y=(\d+)/g }{ $sum += $_ for @a; print "Average: ", $sum / @a;' input.txt
I want to extract rows 1 to n from my .csv file. Using this
perl -ne 'if ($. == 3) {print;exit}' infile.txt
I can extract only one row. How do I put a range of rows into this script?
If you have only a single range and a single, possibly concatenated input stream, you can use:
#!/usr/bin/perl -n
if (my $seqno = 1 .. 3) {
    print;
    exit if $seqno =~ /E/;
}
But if you want it to apply to each input file, you need to catch the end of each file:
#!/usr/bin/perl -n
print if my $seqno = 1 .. 3;
close ARGV if eof || $seqno =~ /E/;
And if you want to be kind to people who forget args, add a nice warning in a BEGIN or INIT clause:
#!/usr/bin/perl -n
BEGIN { warn "$0: reading from stdin\n" if @ARGV == 0 && -t }
print if my $seqno = 1 .. 3;
close ARGV if eof || $seqno =~ /E/;
Notable points include:
You can use -n or -p on the #! line. You could also put some (but not all) other command line switches there, like -l or -a.
Numeric literals as operands to the scalar flip-flop operator are each compared against the readline counter $., so a scalar 1 .. 3 is really ($. == 1) .. ($. == 3). (See the Deparse sketch after this list.)
Calling eof with neither an argument nor empty parens means the last file read in the magic ARGV list of files. This contrasts with eof(), which is the end of the entire <ARGV> iteration.
A flip-flop operator's final sequence number is returned with an "E0" appended to it, which is what the /E/ checks catch.
The -t operator, which calls libc's isatty(3), defaults to the STDIN handle, unlike any of the other filetest operators.
A BEGIN{} block happens during compilation, so if you try to decompile this script with -MO=Deparse to see what it really does, that check will execute. With an INIT{}, it will not.
Doing just that will reveal that the implicit input loop has a label called LINE, which you might in other circumstances use to your advantage.
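You can check the flip-flop rewriting mentioned above with B::Deparse (the exact formatting may differ between Perl versions):
$ perl -MO=Deparse -ne 'print if 1 .. 3'
LINE: while (defined($_ = <ARGV>)) {
    print $_ if $. == 1 .. $. == 3;
}
-e syntax OK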
HTH
What's wrong with:
head -3 infile.txt
If you really must use Perl then this works:
perl -ne 'if ($. <= 3) {print} else {exit}' infile.txt
You can use the range operator:
perl -ne 'if (1 .. 3) { print } else { last }' infile.txt