How to print values on a single line - perl

I have the following query on the command line, and I would like the output values to show up on a single line so I can feed it to my monitoring system. I'm wondering how I can accomplish this with either perl, sed, or awk.
My command line
activemq:query -QQueue=PCA --view QueueSize,ConsumerCount,EnqueueCount,DequeueCount
output
ConsumerCount = 1
QueueSize = 0
DequeueCount = 148248
EnqueueCount = 148248
Desired output
1 0 148248 148248
Thank you

Using command line switches is fun:
perl -anwe'print "$F[2] "'
-a autosplits the line on whitespace, which also strips the newline. We print the last field (the value) followed by a space.
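Assuming the query output shown in the question, the full pipeline would look something like this (note the trailing space and the missing final newline):
activemq:query -QQueue=PCA --view QueueSize,ConsumerCount,EnqueueCount,DequeueCount | perl -anwe'print "$F[2] "'
1 0 148248 148248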

Pipe it to awk:
... | awk -F= '{printf "%s",$2}'
Output, as desired:
1 0 148248 148248
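The spaces separating the values come from the second field itself, since splitting "ConsumerCount = 1" on = leaves " 1". If you also want a trailing newline, a slight variation (splitting on " = " instead) would be:
... | awk -F' = ' '{printf "%s ", $2} END {print ""}'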

Here is another one. Keep pushing values onto an array, and print the entire array at the end. This gives you a newline at the end.
However, if your file is very large, this will not be ideal. In that case, go with TLP's crafty one-liner.
... | perl -lane 'push @a, $F[2] }{ print "@a"'
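For example, with the question's output saved in t00.txt (as in the answer below):
$ cat t00.txt | perl -lane 'push @a, $F[2] }{ print "@a"'
1 0 148248 148248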

Perl version:
... | perl -l40 -ane 'print $F[2]'
(-l40 sets the output record separator $\ to octal 040, a space, so each value is printed followed by a space)
or (Perl 5.8.8)
... | perl -ne 'chomp; split /=/; print $_[1]'
Note: Since Perl 5.12.0, "split() no longer modifies @_ when called in scalar or void context", so the second version will not work for Perl >= 5.12.0, but the first version should still work.
Testing:
$ cat t00.txt
ConsumerCount = 1
QueueSize = 0
DequeueCount = 148248
EnqueueCount = 148248
$ cat t00.txt | perl -ne 'chomp; split /=/; print $_[1]'
1 0 148248 148248
$ cat t00.txt | perl -l40 -ane 'print $F[2]'
1 0 148248 148248

Related

perl: how to print after a specific line

For example, my.txt contains
a
b
xx
c
d
I want to print starting from the second line below the line that contains xx.
I tried
perl -nle 'if(/xx/){$n=$.};print if $.>($n+1)' my.txt
But it didn't work. It just prints all lines.
Before $n is defined it is treated as 0 (zero), meaning that lines with $. > 1 are also printed before xx is seen. This might be what you wanted:
perl -nle 'if(/xx/){$n=$.}; print if defined($n) and $. > $n+1' my.txt
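Running this on the sample my.txt prints only the line that is two below the xx line:
$ perl -nle 'if(/xx/){$n=$.}; print if defined($n) and $. > $n+1' my.txt
d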

Perl autosplit inside quotes

I tried to print my apache2 access log dates with the following one-liner. Within single quotes it shows the value, but within double quotes it shows an ARRAY reference instead. My command is as follows:
sudo perl -ane 'print $F[3] . "\n" ' /var/log/apache2/access.log | head -n 10
Output
[21/Jul/2014:09:16:05
[21/Jul/2014:09:16:05
[21/Jul/2014:11:32:55
[21/Jul/2014:11:32:55
[21/Jul/2014:11:32:59
[21/Jul/2014:11:33:02
[21/Jul/2014:11:33:02
[21/Jul/2014:11:33:10
[21/Jul/2014:11:33:14
[21/Jul/2014:11:33:14
sudo perl -ane "print $F[3] . \"\n\" " /var/log/apache2/access.log | head -n 10
output
ARRAY(0x195fca8)
ARRAY(0x1960128)
ARRAY(0x195fca8)
ARRAY(0x195fd38)
ARRAY(0x1960128)
ARRAY(0x195fca8)
ARRAY(0x195fd38)
ARRAY(0x1960128)
ARRAY(0x195fca8)
ARRAY(0x195fd38)
When I try to dereference the array it shows 1.
sudo perl -ane "print @{$F[3]} . \"\n\" " /var/log/apache2/access.log | head -n 10
output
1
1
1
1
1
1
1
1
1
1
I am using ubuntu 12.04 64 bit with guake terminal.
When you're using double quotes, $F gets interpolated by Bash and not by Perl. Since $F is not set in the shell, Perl actually receives print [3] . "\n", and [3] is an anonymous array reference, which stringifies as ARRAY(0x...). Likewise, in the dereferencing attempt Perl receives print @{[3]} . "\n"; the concatenation puts the one-element array @{[3]} in scalar context, which yields its length, 1. That's why this is happening.
Example:
a='foo bar'; perl -le "print \"$a\""
foo bar
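You can reproduce the effect directly, since an anonymous array reference stringifies as ARRAY(0x...) (the address will differ):
perl -e 'print [3] . "\n"'
ARRAY(0x...)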

Any way to find if two adjacent new lines start with certain words?

Say I have a file like so:
+jaklfjdskalfjkdsaj
fkldsjafkljdkaljfsd
-jslakflkdsalfkdls;
+sdjafkdjsakfjdskal
I only want to find and count the number of times in this file that a line starting with - is immediately followed by a line starting with +.
Rules:
No external scripts
Must be done from within a bash script
Must be inline
I could figure out how to do this in a Python script, for instance, but I've never had to do something this extensive in Bash.
Could anyone help me out? I figure it'll end up being grep, perl, or maybe a talented sed line -- but these are things I'm still learning.
Thank you all!
grep -A1 "^-" $file | grep "^+" | wc -l
The first grep finds all of the lines starting with -, and the -A1 causes it to also output the line after each match.
We then grep that output for any lines starting with +. Logically:
We know the output of the first grep is only the -XXX lines and the following lines
We know that a +xxx line cannot also be a -xxx line
Therefore, any +xxx lines must be following lines, and should be counted, which we do with wc -l
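For example, with the question's four-line sample saved as a file, this reports the single -/+ pair:
$ grep -A1 "^-" file | grep "^+" | wc -l
1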
Easy in Perl:
perl -lne '$c++ if $p and /^\+/; $p = /^-/ }{ print $c' FILE
awk one-liner (with FS='' every character is its own field, so $1 is the first character of each line; the string of first characters is accumulated in x, and the END block counts how many times "-+" occurs in it):
awk -v FS='' '{x=x sprintf("%s", $1)}END{print gsub(/-\+/,"",x)}' file
e.g.
kent$ cat file
+jaklfjdskalfjkdsaj
fkldsjafkljdkaljfsd
-jslakflkdsalfkdls;
+sdjafkdjsakfjdskal
-
-
-
+
-
+
foo
+
kent$ awk -v FS='' '{x=x sprintf("%s", $1)}END{print gsub(/-\+/,"",x)}' file
3
Another Perl example. Not as terse as choroba's, but more transparent in how it works:
perl -e'while (<>) { $last = $cur; $cur = $_; print $last, $cur if substr($last, 0, 1) eq "-" && substr($cur, 0, 1) eq "+" }' < infile
Output:
-jslakflkdsalfkdls;
+sdjafkdjsakfjdskal
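If you want a count instead of the pairs themselves, a small variation on the same idea (a sketch along the same lines, not the original answer) is:
perl -e 'while (<>) { $c++ if $last =~ /^-/ && /^\+/; $last = $_ } print $c+0, "\n"' < infile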
Pure bash:
unset c p
while read line ; do
    [[ $line == +* && $p == 0 ]] && (( c++ ))
    [[ $line == -* ]]
    p=$?
done < FILE
echo $c
Here $p holds the exit status of the previous line's -* test, so the counter is incremented only when a + line directly follows a - line.

need perl one-liner to get specific content out of the line and possibly average it

I have a file with many lines containing "x_y=XXXX", where XXXX can be a number from 0 to some N.
Now,
a) I would like to get only the XXXX part of the line in every such line.
b) I would like to get the average
Possibly both of these in one liners.
I am trying out something like
cat filename.txt | grep x_y | (this needs to be filled in)
I am not sure what to fill in there.
In the past I have used commands like
perl -pi -e 's/x_y/m_n/g'
to replace all the instances of x_y.
But now, I would like to match for x_y=XXXX and get the XXXX out and then possibly average it out for the entire file.
Any help on this will be greatly appreciated. I am fairly new to perl and regexes.
Timtowtdi (as usual).
perl -nE '$s+=$1, ++$n if /x_y=(\d+)/; END { say "avg:", $s/$n }' data.txt
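For example, assuming data.txt holds the lines x_y=1 through x_y=10:
$ perl -nE '$s+=$1, ++$n if /x_y=(\d+)/; END { say "avg:", $s/$n }' data.txt
avg:5.5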
The following should do:
... | grep 'x_y=' | perl -ne '$x += (split /=/, $_)[1]; $y++ }{ print $x/$y, "\n"'
The }{ is colloquially referred to as the eskimo operator and works because of the code which -n wraps around the -e body (see perldoc perlrun).
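To see why it works, this is roughly the program -n builds around the code; the }{ closes the implicit input loop and opens a bare block that runs once afterwards:
while (<>) {
    $x += (split /=/, $_)[1]; $y++
}
{
    print $x/$y, "\n"
}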
Using awk:
/^[^_]+_[^=]+=[0-9]+$/ {sum=sum+$2; cnt++}
END {
print "sum:", sum, "items:", cnt, "avg:", sum/cnt
}
$ awk -F= -f cnt.awk data.txt
sum: 55 items: 10 avg: 5.5
Pure bash-solution:
#!/bin/bash
while IFS='=' read str num
do
    if [[ $str == *_* ]]
    then
        sum=$((sum + num))
        cnt=$((cnt + 1))
    fi
done < data.txt
echo "scale=4; $sum/$cnt" | bc ;exit
Output:
$ ./cnt.sh
5.5000
As a one-liner, split up with comments.
perl -nlwe '
push @a, /x_y=(\d+)/g # push all matches onto an array
}{ # eskimo-operator, is evaluated last
$sum += $_ for @a; # get the sum
print "Average: ", $sum / @a; # divide by the size of the array
' input.txt
Will extract multiple matches on a line, if they exist.
Paste version:
perl -nlwe 'push @a, /x_y=(\d+)/g }{ $sum += $_ for @a; print "Average: ", $sum / @a;' input.txt

variable for field separator in perl

In awk I can write: awk -F: 'BEGIN {OFS = FS} ...'
In Perl, what's the equivalent of FS? I'd like to write
perl -F: -lane 'BEGIN {$, = [what?]} ...'
update with an example:
echo a:b:c:d | awk -F: 'BEGIN {OFS = FS} {$2 = 42; print}'
echo a:b:c:d | perl -F: -ane 'BEGIN {$, = ":"} $F[1] = 42; print @F'
Both output a:42:c:d
I would prefer not to hard-code the : in the Perl BEGIN block, but refer to wherever the -F option saves its argument.
To sum up, what I'm looking for does not exist:
there's no variable that holds the argument for -F, and more importantly
Perl's "FS" is fundamentally a different data type (regular expression) than the "OFS" (string) -- it does not make sense to join a list of strings using a regex.
Note that the same holds true in awk: FS is a string but acts as regex:
echo a:b,c:d | awk -F'[:,]' 'BEGIN {OFS=FS} {$2=42; print}'
outputs "a[:,]42[:,]c[:,]d"
Thanks for the insight and workarounds though.
You can use perl's -s (similar to awk's -v) to pass a "FS" variable, but the split becomes manual:
echo a:b:c:d | perl -sne '
BEGIN {$, = $FS}
@F = split $FS;
$F[1] = 42;
print @F;
' -- -FS=":"
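With the question's sample input this produces the same result as the awk version:
$ echo a:b:c:d | perl -sne 'BEGIN {$, = $FS} @F = split $FS; $F[1] = 42; print @F;' -- -FS=":"
a:42:c:d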
If you know the exact length of input, you could do this:
echo a:b:c:d | perl -F'(:)' -ane '$, = $F[1]; @F = @F[0,2,4,6]; $F[1] = 42; print @F'
If the input is of variable lengths, you'll need something more sophisticated than @F[0,2,4,6].
EDIT: -F seems to simply provide input to an automatic split() call, which takes a complete RE as an expression. You may be able to find something more suitable by reading the perldoc entries for split, perlre, and perlvar.
You can sort of cheat it, because perl is actually using the split function with your -F argument, and you can tell split to preserve what it splits on by including capturing parens in the regex:
$ echo a:b:c:d | perl -F'(:)' -ane 'print join("/", @F);'
a/:/b/:/c/:/d
You can see what perl's doing with some of these "magic" command-line arguments by using -MO=Deparse, like this:
$ perl -MO=Deparse -F'(:)' -ane 'print join("/", @F);'
LINE: while (defined($_ = <ARGV>)) {
    our(@F) = split(/(:)/, $_, 0);
    print join('/', @F);
}
-e syntax OK
You'd have to change your @F subscripts to double what they'd normally be ($F[2] = 42).
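For example, a quick check with the doubled subscript (here $F[2] is the b field):
$ echo a:b:c:d | perl -F'(:)' -ane '$F[2] = 42; print join("", @F)'
a:42:c:d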
Darnit...
The best I can do is:
echo a:b:c:d | perl -ne '$v=":";@F = split("$v"); $F[1] = 42; print join("$v", @F) . "\n";'
You don't need the -F: this way, and you're only stating the colon once. I was hoping there was some way of setting variables on the command line like you can with Awk's -v switch.
For one liners, Perl is usually not as clean as Awk, but I remember using Awk before I knew of Perl and writing 1000+ line Awk scripts.
Trying things like this made people think Awk was either named after the sound someone made when they tried to decipher such a script, or stood for AWKward.
There is no variable in Perl that holds the -F field separator. You're basically emulating awk by using the -a and -F flags. If you really don't want to hard-code the value, then why not just use an environment variable?
$ export SPLIT=":"
$ perl -F$SPLIT -lane 'BEGIN { $, = $ENV{SPLIT}; } ...'
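Filling in the question's example, this would look something like:
$ export SPLIT=":"
$ echo a:b:c:d | perl -F$SPLIT -lane 'BEGIN { $, = $ENV{SPLIT} } $F[1] = 42; print @F'
a:42:c:d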