Extract text with different delimiters - perl

My text file looks like this:
foo.en 14 :: xyz 1;foo bar 2;foofoo 5;bar 9
bar.es 18 :: foo bar 4;kjp bar 2;bar 6;barbar 8
Ignoring the text before the :: delimiter, is there a one-liner Unix command (many pipes allowed) or a one-liner Perl script that extracts the text so that it yields the unique ;-delimited words as output?:
xyz
foo bar
foofoo
bar
kjp bar
barbar
I've tried looping through the text file with a Python script, but I'm looking for a one-liner for the task.
ans = set()
for line in open(textfile):
    for field in line.partition(" :: ")[2].split(";"):
        ans.add(field.rsplit(" ", 1)[0])
for a in ans:
    print a

With Perl:
perl -nle 's/.*?::\s*//;!$s{$_}++ and print for split /\s*\d+;?/' input
Description:
s/.*?::\s*//; # delete up to the first '::'
This part:
!$s{$_}++ and print for split /\s*\d+;?/
can be rewritten like this:
foreach my $word (split /\s*\d+;?/) {   # for split /\s*\d+;?/
    if (!$seen{$word}) {                # !$s{$_}
        print $word;                    # and print
    }
    $seen{$word}++;                     # $s{$_}++
}
Since the increment in !$s{$_}++ is a post-increment, Perl first tests the value for falsehood and only then increments it. An undefined hash value evaluates as 0, i.e. false. If the test fails, i.e., $s{$_} was previously incremented, then the and part is skipped due to short-circuiting.

cat textfile | sed 's/.*:://g' | tr '[0-9];' '\n' | sort -u
Explanation:
sed 's/.*:://g' Take everything up to and including `::` and replace it with nothing
tr '[0-9];' '\n' Replace digits and semicolons with newlines
sort -u Sort, and return unique instances
It does result in a sorted output, I believe...
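As written, the tr step turns every digit into its own newline and leaves trailing spaces, so blank and padded lines survive into the output. A tightened variant (with tr -s to squeeze the translated newlines and an extra sed to trim trailing blanks, both my additions) yields exactly the unique phrases on the sample data:

```shell
# Recreate the sample input from the question
cat > textfile <<'EOF'
foo.en 14 :: xyz 1;foo bar 2;foofoo 5;bar 9
bar.es 18 :: foo bar 4;kjp bar 2;bar 6;barbar 8
EOF

# -s squeezes the runs of newlines produced by multi-digit numbers;
# the second sed trims the space left in front of each number
cat textfile | sed 's/.*:: //' | tr -s '0-9;' '\n' | sed 's/ *$//' | sort -u
```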

You can try this:
$ awk -F ' :: ' '{print $2}' input.txt | grep -oP '[^0-9;]+' | sort -u
bar
barbar
foo bar
foofoo
kjp bar
xyz
If your phrases contain numbers, try this Perl regex instead: '[^;]+?(?=\s+\d+(;|$))'

With only awk:
$ awk -F' :: ' '{
    gsub(/[0-9]+/, "")
    split($2, arr, /;/)
    for (a in arr) arr2[arr[a]]=""
}
END {
    for (i in arr2) print i
}' textfile.txt
And a one-liner version:
awk -F' :: ' '{gsub(/[0-9]+/, "");split($2, arr, /;/ );for (a in arr) arr2[arr[a]]="";}END{for (i in arr2) print i}' textfile.txt


to arrange the value in columns as per value in 1st column

I have a file with following data
cat text.txt
281475473926267,46,47
281474985385546,310,311
281474984889537,248,249
281475473926267,16,17
281474985385546,20,28
281474984889537,112,68
The values in the 1st column are duplicated in some places.
I want the output as given below:
cat output.txt
281475473926267 16,17,46,47
281474985385546 20,28,310,311
281474984889537 68,112,248,249
It should print the unique values of column 1, then a space, then the respective values from the other columns on one line, arranged in ascending order.
I tried the below:
cat text.txt | perl -F, -lane ' $kv{$F[0]}{$F[1]}++; END { while(my($x,$y) = each(%kv)) { print "$x ",join(",",keys %$y) }}'
281474984889537 112,248
281474985385546 310,20
281475473926267 46,16
Here I am not able to print all the values against the value in the 1st column: for 281474984889537 it should print 68,112,248,249, but it prints only 112,248.
Also I am not sure how to arrange them in ascending order.
A multi-step approach:
$ awk -F, '{print $1,$2; print $1,$3}' file |
sort -k1n -k2n |
awk 'p!=$1{if(p) print p,a[p]; a[$1]=$2; p=$1; next}
{a[$1]=a[$1] "," $2}
END {print p,a[p]}' |
sort -k2n
281475473926267 16,17,46,47
281474985385546 20,28,310,311
281474984889537 68,112,248,249
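The whole pipeline can be reproduced on the sample data (written here to a file named file, matching the command above):

```shell
# Sample data from the question
cat > file <<'EOF'
281475473926267,46,47
281474985385546,310,311
281474984889537,248,249
281475473926267,16,17
281474985385546,20,28
281474984889537,112,68
EOF

# 1) emit one key/value pair per line, 2) sort by key then value,
# 3) collect values per key, 4) order the output by the smallest value
awk -F, '{print $1,$2; print $1,$3}' file |
sort -k1n -k2n |
awk 'p!=$1{if(p) print p,a[p]; a[$1]=$2; p=$1; next}
     {a[$1]=a[$1] "," $2}
     END {print p,a[p]}' |
sort -k2n
```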
With GNU awk for true multi-dimensional arrays and sorted_in:
$ cat tst.awk
BEGIN { FS="," }
{
    for (i=2; i<=NF; i++) {
        keyVals[$1][$i]
    }
}
END {
    PROCINFO["sorted_in"] = "#ind_num_asc"
    for (key in keyVals) {
        vals = ""
        for (val in keyVals[key]) {
            vals = (vals == "" ? "" : vals ",") val
        }
        print key, vals
    }
}
$ awk -f tst.awk file
281474984889537 68,112,248,249
281474985385546 20,28,310,311
281475473926267 16,17,46,47
The above will work no matter how many fields you have on each line and it will remove duplicate values when they occur on multiple lines for the same key value.
This might work for you (GNU sed):
sed -r 'H;x;s/((\n[^\n,]*),[^\n]*)(.*)\2([^\n]*)\n?/\1\4\3/;x;$!d;x;s/.//;:b;h;s/\n.*//;s/[^,]*,//;s/,/\n/g;s/.*/echo "&"|sort -n|paste -sd,/e;G;s/^([^\n]*)\n([^\n,]*),[^\n]*/\2 \1/;P;:c;tc;s/[^\n]*\n//;tb;d' file
The script works in two parts. In the first part of the processing the lines of the file are held in memory and reduced in size by appending values of the same key to a single key. At the end of file the second part of processing is enacted. Each line is broken into two, the appended values are sorted and re-appended to the key, printed and removed, until all the lines have been processed.
To correct your Perl one-liner, use this.
$ cat text.txt
281475473926267,46,47
281474985385546,310,311
281474984889537,248,249
281475473926267,16,17
281474985385546,20,28
281474984889537,112,68
$ cat text.txt | perl -F, -lanE ' @t1=@{$kv{$F[0]}}; push(@t1,@F[1..2]); $kv{$F[0]}=[@t1]; END { while(my($x,$y) = each(%kv)) { print "$x ",join(",",@{$y}) }}'
281474985385546 310,311,20,28
281475473926267 46,47,16,17
281474984889537 248,249,112,68
$
When you have more columns, a small change in the above one-liner from 1..2 to 1..$#F will do the trick. Check this out:
$ cat > text2.txt
281475473926267,46,47,49
281474985385546,310,311
281474984889537,248,249,311,677,213
281475473926267,16,17
281474985385546,20,28
281474984889537,112,68,54,78,324,67
$ cat text2.txt | perl -F, -lanE ' @t1=@{$kv{$F[0]}}; push(@t1,@F[1..$#F]); $kv{$F[0]}=[@t1]; END { while(my($x,$y) = each(%kv)) { print "$x ",join(",",@{$y}) }}'
281474984889537 248,249,311,677,213,112,68,54,78,324,67
281474985385546 310,311,20,28
281475473926267 46,47,49,16,17
$

Sed - replace words

I have a problem with replacing a string.
|Stm=2|Seq=2|Num=2|Svc=101|MsgSize(514)=514|MsgType=556|SymbolIndex=16631
I want to find the occurrence of Svc up to the next | and swap its place with Stm up to the next |.
My attempts ended up replacing individual characters, which is not my goal.
awk -F'|' -v OFS='|' '{a=b=0;
  for(i=1;i<=NF;i++){a=$i~/^Stm=/?i:a;b=$i~/^Svc=/?i:b}
  t=$a;$a=$b;$b=t}7' file
outputs:
|Svc=101|Seq=2|Num=2|Stm=2|MsgSize(514)=514|MsgType=556|SymbolIndex=16631
The code exchanges the columns of Stm.. and Svc.., no matter which one comes first.
If a Perl solution is okay (this assumes only one column matches each search term):
$ cat ip.txt
|Stm=2|Seq=2|Num=2|Svc=101|MsgSize(514)=514|MsgType=556|SymbolIndex=16631
$ perl -F'\|' -lane '
@i = grep { $F[$_] =~ /Svc|Stm/ } 0..$#F;
$t=$F[$i[0]]; $F[$i[0]]=$F[$i[1]]; $F[$i[1]]=$t;
print join "|", @F;
' ip.txt
|Svc=101|Seq=2|Num=2|Stm=2|MsgSize(514)=514|MsgType=556|SymbolIndex=16631
-F'\|' -lane split input line on |, see also Perl flags -pe, -pi, -p, -w, -d, -i, -t?
@i = grep { $F[$_] =~ /Svc|Stm/ } 0..$#F get the indexes of columns matching Svc and Stm
$t=$F[$i[0]]; $F[$i[0]]=$F[$i[1]]; $F[$i[1]]=$t swap the two columns
Or use ($F[$i[0]], $F[$i[1]]) = ($F[$i[1]], $F[$i[0]]); courtesy of How can I swap two Perl variables
print join "|", @F print the modified array
You need to use capture groups and backreferences in a string substitution.
The below will swap the two:
echo '|Stm=2|Seq=2|Num=2|Svc=101|MsgSize(514)=514|MsgType=556|SymbolIndex=16631' | sed 's/\(Stm.*|\)\(.*\)\(Svc.*|\)/\3\2\1/'
As pointed out in the comment from @Kent, this will not work if the strings were not in that order.

need perl one liner to get a specific content out of the line and possibly average it

I have a file with many lines containing "x_y=XXXX", where XXXX can be a number from 0 to some N.
Now,
a) I would like to get only the XXXX part of the line in every such line.
b) I would like to get the average
Possibly both of these in one liners.
I am trying out something like
cat filename.txt | grep x_y | (this needs to be filled in)
I am not sure what to fill in.
In the past I have used commands like
perl -pi -e 's/x_y/m_n/g'
to replace all the instances of x_y.
But now, I would like to match for x_y=XXXX and get the XXXX out and then possibly average it out for the entire file.
Any help on this will be greatly appreciated. I am fairly new to perl and regexes.
Timtowtdi (as usual).
perl -nE '$s+=$1, ++$n if /x_y=(\d+)/; END { say "avg:", $s/$n }' data.txt
The following should do:
... | grep 'x_y=' | perl -ne '$x += (split /=/, $_)[1]; $y++ }{ print $x/$y, "\n"'
The }{ is colloquially referred to as the eskimo operator, and works because of the code which -n places around the -e (see perldoc perlrun).
Using awk:
/^[^_]+_[^=]+=[0-9]+$/ {sum=sum+$2; cnt++}
END {
    print "sum:", sum, "items:", cnt, "avg:", sum/cnt
}
$ awk -F= -f cnt.awk data.txt
sum: 55 items: 10 avg: 5.5
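The sum: 55 items: 10 avg: 5.5 shown above is consistent with a data.txt containing x_y=1 through x_y=10; assuming that input, the run can be reconstructed:

```shell
# Reconstruct a data.txt matching the output shown: x_y=1 .. x_y=10
seq 1 10 | sed 's/^/x_y=/' > data.txt

cat > cnt.awk <<'EOF'
/^[^_]+_[^=]+=[0-9]+$/ {sum=sum+$2; cnt++}
END {
    print "sum:", sum, "items:", cnt, "avg:", sum/cnt
}
EOF

# With -F= the number lands in $2 on every matching line
awk -F= -f cnt.awk data.txt
```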
Pure bash-solution:
#!/bin/bash
while IFS='=' read str num
do
    if [[ $str == *_* ]]
    then
        sum=$((sum + num))
        cnt=$((cnt + 1))
    fi
done < data.txt
echo "scale=4; $sum/$cnt" | bc
Output:
$ ./cnt.sh
5.5000
As a one-liner, split up with comments.
perl -nlwe '
push @a, /x_y=(\d+)/g           # push all matches onto an array
}{                              # eskimo operator, is evaluated last
$sum += $_ for @a;              # get the sum
print "Average: ", $sum / @a;   # divide by the size of the array
' input.txt
Will extract multiple matches on a line, if they exist.
Paste version:
perl -nlwe 'push @a, /x_y=(\d+)/g }{ $sum += $_ for @a; print "Average: ", $sum / @a;' input.txt

sed/awk: replace N occurrence

Is it possible to change the Nth occurrence (for example the second occurrence) in a file using a one-line sed/awk, other than with a method like this?:
line_num=`awk '/WHAT_TO_CHANGE/ {c++; if (c>=2) {c=NR;exit}}END {print c}' INPUT_FILE` && sed "$line_num,$ s/WHAT_TO_CHANGE/REPLACE_TO/g" INPUT_FILE > OUTPUT_FILE
Thanks
To change the Nth occurrence in a line you can use this:
$ echo foo bar foo bar foo bar foo bar | sed 's/foo/FOO/2'
foo bar FOO bar foo bar foo bar
So all you have to do is to create a "one-liner" of your text, e.g. using tr
tr '\n' ';'
do your replacement and then convert it back to a multiline again using
tr ';' '\n'
This awk solution assumes that WHAT_TO_CHANGE occurs only once per line. The following replaces the second 'one' with 'TWO':
awk -v n=2 '/one/ { if (++count == n) sub(/one/, "TWO"); } 1' file.txt
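For instance, on a hypothetical three-line file in which "one" occurs once per line:

```shell
# Hypothetical input: "one" appears once on each line
printf 'one a\nb one\none c\n' > file.txt

# Replace "one" only on the 2nd line that contains it
awk -v n=2 '/one/ { if (++count == n) sub(/one/, "TWO"); } 1' file.txt
```

Only the second matching line is altered; the output is "one a", "b TWO", "one c".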

variable for field separator in perl

In awk I can write: awk -F: 'BEGIN {OFS = FS} ...'
In Perl, what's the equivalent of FS? I'd like to write
perl -F: -lane 'BEGIN {$, = [what?]} ...'
update with an example:
echo a:b:c:d | awk -F: 'BEGIN {OFS = FS} {$2 = 42; print}'
echo a:b:c:d | perl -F: -ane 'BEGIN {$, = ":"} $F[1] = 42; print @F'
Both output a:42:c:d
I would prefer not to hard-code the : in the Perl BEGIN block, but refer to wherever the -F option saves its argument.
To sum up, what I'm looking for does not exist:
there's no variable that holds the argument for -F, and more importantly
Perl's "FS" is fundamentally a different data type (regular expression) than the "OFS" (string) -- it does not make sense to join a list of strings using a regex.
Note that the same holds true in awk: FS is a string but acts as regex:
echo a:b,c:d | awk -F'[:,]' 'BEGIN {OFS=FS} {$2=42; print}'
outputs "a[:,]42[:,]c[:,]d"
Thanks for the insight and workarounds though.
You can use perl's -s (similar to awk's -v) to pass a "FS" variable, but the split becomes manual:
echo a:b:c:d | perl -sne '
BEGIN {$, = $FS}
@F = split $FS;
$F[1] = 42;
print @F;
' -- -FS=":"
If you know the exact length of input, you could do this:
echo a:b:c:d | perl -F'(:)' -ane '$, = $F[1]; @F = @F[0,2,4,6]; $F[1] = 42; print @F'
If the input is of variable length, you'll need something more sophisticated than @F[0,2,4,6].
EDIT: -F seems to simply provide input to an automatic split() call, which takes a complete RE as an expression. You may be able to find something more suitable by reading the perldoc entries for split, perlre, and perlvar.
You can sort of cheat it, because perl is actually using the split function with your -F argument, and you can tell split to preserve what it splits on by including capturing parens in the regex:
$ echo a:b:c:d | perl -F'(:)' -ane 'print join("/", @F);'
a/:/b/:/c/:/d
You can see what perl's doing with some of these "magic" command-line arguments by using -MO=Deparse, like this:
$ perl -MO=Deparse -F'(:)' -ane 'print join("/", @F);'
LINE: while (defined($_ = <ARGV>)) {
    our(@F) = split(/(:)/, $_, 0);
    print join('/', @F);
}
}
-e syntax OK
You'd have to change your @F subscripts to double what they'd normally be ($F[2] = 42).
Darnit...
The best I can do is:
echo a:b:c:d | perl -ne 'chomp; $v=":"; @F = split("$v"); $F[1] = 42; print join("$v", @F) . "\n";'
You don't need the -F: this way, and you're only stating the colon once. I was hoping there was some way of setting variables on the command line like you can with Awk's -v switch.
For one liners, Perl is usually not as clean as Awk, but I remember using Awk before I knew of Perl and writing 1000+ line Awk scripts.
Trying things like this made people think Awk was either named after the sound someone made when they tried to decipher such a script, or stood for AWKward.
There is no variable in Perl that holds the field separator. You're basically emulating awk by using the -a and -F flags. If you really don't want to hard-code the value, then why not just use an environment variable?
$ export SPLIT=":"
$ perl -F$SPLIT -lane 'BEGIN { $, = $ENV{SPLIT}; } ...'