My input has a mix of tabs and spaces for readability. I want to modify a field using perl -a, then print out the line in its original form. (The data is from findup, showing me a count of duplicate files and the space they waste.) Input is:
2 * 4096 backup/photos/photo.jpg photos/photo.jpg
2 * 111276032 backup/books/book.pdf book.pdf
The output would convert field 3 to kilobytes, like this:
2 * 4 KB backup/photos/photo.jpg photos/photo.jpg
2 * 108668 KB backup/books/book.pdf book.pdf
In my dream world, this would be my code, since I could just will perl to automatically recombine @F and preserve the original whitespace:
perl -lanE '$F[2]=int($F[2]/1024)." KB"; print;'
In real life, joining with a single space seems like my only option:
perl -lanE '$F[2]=int($F[2]/1024)." KB"; print join(" ", @F);'
Is there any automatic variable which remembers the delimiters? If I had a magic array like that, the code would be:
perl -lanE 'BEGIN{use List::Util "reduce";} $F[2]=int($F[2]/1024)." KB"; print reduce { $a . shift(@magic) . $b } @F;'
No, there is no such magic object. You can do it by hand, though:
perl -wnE'@p = split /(\s+)/; $p[4] = int($p[4]/1024); print @p' input.txt
The capturing parens in split's pattern mean that the separators are returned as well, so you capture the exact whitespace. Since the separators are in the array, the size is now the fifth element, $p[4].
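For instance (an illustrative snippet; the exact whitespace in this string is made up):
perl -wE '@p = split /(\s+)/, "2   *  4096  backup/photos/photo.jpg"; say join "|", @p'
2|   |*|  |4096|  |backup/photos/photo.jpg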
As it turns out, -F has this same property. Thanks to Сухой27. Then
perl -F'(\s+)' -lanE'$F[4] = int($F[4]/1024); say @F' input.txt
Note: with 5.20.0 "-F now implies -a and -a implies -n". Thanks to ysth.
You could just find the correct part of the line and modify it:
perl -wpE's/^\s*+(?>\S+\s+){2}\K(\S+)/int($1\/1024) . " KB"/e'
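Applied to the sample input this touches only the third field, so the original tabs and spaces survive. For example (the same one-liner, just with brace delimiters so the / in the division needs no escaping):
perl -wpE's{^\s*+(?>\S+\s+){2}\K(\S+)}{int($1/1024) . " KB"}e' input.txt
2 * 4 KB backup/photos/photo.jpg photos/photo.jpg
2 * 108668 KB backup/books/book.pdf book.pdf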
I have a file where some of the lines contain a number encoded as text -> binary -> octets, and I need to decode that to end up with the number.
All the lines containing this encoded string begin with STRVID:
For example I have in one of the lines:
STRVID: SarI3gXp
If I do this: echo "SarI3gXp" | perl -lpe '$_=unpack"B*"', I get the number in binary:
0101001101100001011100100100100100110011011001110101100001110000
Now, just to decode from binary to octets, I do this (assign the previous command's output to a variable and then convert it from binary to octets):
variable=$(echo "SarI3gXp" | perl -lpe '$_=unpack"B*"') ; printf '%x\n' "$((2#$variable))"
The result is the number but not in the correct order
5361724933675870
To get the previous number in the correct order, I have to take each pair of digits and put the second digit first and then the first, which finally gives the number I'm looking for. Something like this:
variable=$(echo "SarI3gXp" | perl -lpe '$_=unpack"B*"') ; printf '%x\n' "$((2#$variable))" | gawk 'BEGIN {FS = ""} {print $2 $1 $4 $3 $6 $5 $8 $7 $10 $9 $12 $11 $14 $13 $16 $15}'
And finally I have the number I'm looking for:
3516279433768507
I don't have any clue how to do this automatically for every line that begins with STRVID: in my file. In the end, what I need is the whole file, but with the decoded value on every line that begins with STRVID:.
When I find this:
STRVID: SarI3gXp
I will have in my file
STRVID: 3516279433768507
Can someone help with this?
First of all, all you need for the conversion is
unpack "h*", "SarI3gXp"
A perl one-liner using -p will execute the provided program for each line, and s///e allows us to modify a string with code as the replacement expression.
perl -pe's/^STRVID:\s*\K\S+/ unpack "h*", $& /e'
See Specifying file to process to Perl one-liner.
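For instance, fed the sample line it should give:
echo 'STRVID: SarI3gXp' | perl -pe's/^STRVID:\s*\K\S+/ unpack "h*", $& /e'
STRVID: 3516279433768507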
Please inspect the following sample code and see whether it fits your problem.
You do not need a double conversion when it can be done in one go.
Note: please read the pack documentation; unpack uses the same TEMPLATE.
use strict;
use warnings;
use feature 'say';

while ( <DATA> ) {
    chomp;
    /^STRVID: (.+)/
        ? say 'STRVID: ' . unpack("h*", $1)
        : say;
}
__DATA__
It would be nice if you provide proper input data sample
STRVID: SarI3gXp
Output
It would be nice if you provide proper input data sample
STRVID: 3516279433768507
Perhaps the result of this script complies with your requirements.
To work with a real input data file, replace
while( <DATA> ) {
with
while( <> ) {
and pass the filename as an argument to the script:
./script.pl input_file.dat
You can cross-flip the digit pairs entirely via regex (and without back-references either):
variable=$(echo "SarI3gXp" | perl -lpe '$_=unpack"B*"') ;
printf '%x\n' "$((2#$variable))" |
mawk -F'^$' 'gsub("..", "_&=&_") + gsub(\
"(^|[0-9]_)(_[0-9]|$)", _)+gsub("=",_)^_'
1 3516279433768507
The idea is to make a duplicate copy of each digit pair on the other side of an equal sign, like this:
_53=53__61=61__72=72__49=49__33=33__67=67__58=58__70=70_
then scrub out the leftovers, since the digits you now want are the ones anchoring the two sides of each equal sign ("=").
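For comparison, the same pairwise swap can also be written as a single Perl substitution (an alternative sketch, not part of the awk answer above):
echo 5361724933675870 | perl -pe 's/(.)(.)/$2$1/g'
3516279433768507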
I have different files in the format below.
Scenario 1 :
File1
no,name
1,aaa
20,bbb
File2
no,name,address
5,aaa,ghi
7,ccc,mn
I would like to get the header that has more columns, provided the shorter header's columns appear in it in the same order.
Expected output for scenario 1:
no,name,address
Scenario 2 :
File1
no,name
1,aaa
20,bbb
File2
no,age,name,address
5,2,aaa,ghi
7,3,ccc,mn
Expected result for scenario 2:
The message "Both file headers and positions are different"
I am interested in any short solution using bash / perl / sed / awk.
Perl solution:
perl -lne 'push @lines, $_;
           close ARGV;
           next if @lines < 2;
           @lines = sort { length $a <=> length $b } @lines;
           if (0 == index "$lines[1],", "$lines[0],") {
               print $lines[1];
           } else {
               print "Both file headers and positions are different";
           }' -- File1 File2
-n reads the input line by line and runs the code for each line
-l removes newlines from input and adds them to printed lines
closing the special file handle ARGV makes Perl open the next file and read from it instead of processing the rest of the currently opened file.
next makes Perl go back to the beginning of the code; it can only continue once more than one input line has been read.
sort sorts the lines by length so that we know the longer one is in the second element of the array.
index is used to check whether the shorter header is a prefix of the longer one (a trailing comma is appended to both headers, so e.g. no,names is correctly rejected while an identical header still matches)
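Written out as a small script, the same logic reads like this (an expanded sketch of the one-liner above, not a different algorithm; pass the two file names as arguments):
#!/usr/bin/perl
use strict;
use warnings;

my @lines;
while (<>) {
    chomp;
    push @lines, $_;    # collect the header line of each file
    close ARGV;         # ...and skip the rest of that file
    next if @lines < 2; # wait until both headers have been read
    @lines = sort { length $a <=> length $b } @lines;
    if (0 == index "$lines[1],", "$lines[0],") {
        print "$lines[1]\n";
    } else {
        print "Both file headers and positions are different\n";
    }
}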
I'm trying to split a file by ",". It is a CSV file.
However, one "column" has values that includes "/" and spaces. And it seems to freak out with that column and does not print anything after that column but moves on to the next row.
My code is simply:
perl -lane '@values = split(",",$F[0]); print $values[0]."\t".$values[3];' basefile.txt > newfile.txt
The basefile.txt looks like:
"1","text","abc // 123 /// some more text // text","filename1"
"2","text","abc // 123 /// some more text // text","filename2"
"3","text","abc // 123 /// some more text // text","filename3"
My newfile.txt should have an output of:
"1","filename1"
"2","filename2"
"3","filename3"
Instead I get:
"1",
"2",
"3",
Thanks!
It's not the / that is confusing perl here, it's the spaces combined with the -a flag. Try:
perl -lne '@values = split(",",$_); print $values[0]."\t".$values[3]' basefile
Or, better yet, use Text::CSV_XS to do the splitting.
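A minimal sketch of the Text::CSV_XS route (assuming basefile.txt is the quoted CSV shown above; note that, unlike the plain split, the module strips the surrounding quotes from each field):
use strict;
use warnings;
use Text::CSV_XS;

my $csv = Text::CSV_XS->new({ binary => 1 });
open my $fh, '<', 'basefile.txt' or die "basefile.txt: $!";
while (my $row = $csv->getline($fh)) {
    # $row is an array ref of unquoted fields, e.g. 1 and filename1
    print "$row->[0]\t$row->[3]\n";
}
close $fh;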
It's not the '/', it's the spaces.
The -a flag causes perl to split each line of input and put the fields into the variable @F. The delimiter for this split operation is whitespace, unless you override it with the -Fdelimiter option on the command line, too.
So for the input
"1","text","abc // 123 /// some more text // text","filename"
with the -lan flags specified, perl sets
$F[0] = '"1","text","abc';
$F[1] = '//';
$F[2] = '123';
$F[3] = '///';
$F[4] = 'some';
etc.
It seems like you just want to do your split operation on the whole line. In which case you should stop using the -a flag and just say
@values = split(",",$_); ...
or leverage the -a and -F... options and say
perl -F/,/ -lane '@values=@F; ...'
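For instance, to get exactly the output shown in the question (quotes and all) from the sample basefile.txt:
perl -F/,/ -lane 'print join ",", @F[0,3]' basefile.txt
"1","filename1"
"2","filename2"
"3","filename3"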
I am trying to remove all but the first character of a specific field in a .tab file. I want to keep only the first character in fields 10 and 11.
Normally the fields have 35 characters in them, so I used:
awk '{gsub ("..................................$","",$10);print}' file
however, there are some fields which have fewer than 35 characters and were ignored by this replace function. I tried using substring, but I cannot figure out how to make it field specific. I believe there is a way to use perl inside awk so that I can use the function
perl -pe 's/(.).*/$1/g'
but I am not sure how to do that and use the field as the input value, so the file comes out identical except for the altered field.
Is there a way to do the perl equivalent with gsub, or the awk equivalent with perl?
Help is appreciated!
One way using awk:
awk '{ for (i=10;i<=11;i++) { $i = substr( $i, 1, 1) } } { print }' infile
Another way, using the gensub function of gawk:
gawk '{ for (i=10;i<=11;i++) { $i = gensub(/(.).*/, "\\1", "G", $i) } }1' infile
The shortest awk version I could figure out:
awk '($10=substr($10,1,1))&&$11=substr($11,1,1)' infile
If the 10th and/or 11th field does not exist, then the line is not printed.
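If lines with fewer than 11 fields should still be printed unchanged, a small tweak does it (a sketch; like the other awk versions, assigning to a field makes awk rejoin the line with OFS):
awk '{ if (NF >= 10) $10 = substr($10,1,1); if (NF >= 11) $11 = substr($11,1,1) } 1' infile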
A similar version in perl:
perl -ane '$F[9]=~s/(.).*/$1/;$F[10]=~s/(.).*/$1/;print "@F\n"' infile
This prints the line even if the 10th and/or 11th field is not defined.
Another way with perl:
perl -pe '$c=0; s/(\S+)/(++$c < 10 || $c > 11) ? $1 : substr($1,0,1)/eg' filename
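A quick check with made-up data (the field values below are invented, since the question shows no sample) confirms that only fields 10 and 11 are trimmed and everything else is left alone:
echo 'f1 f2 f3 f4 f5 f6 f7 f8 f9 ABCDE FGHIJ rest' | perl -pe '$c=0; s/(\S+)/(++$c < 10 || $c > 11) ? $1 : substr($1,0,1)/eg'
f1 f2 f3 f4 f5 f6 f7 f8 f9 A F rest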
I have a directory full of files containing records like:
FAKE ORGANIZATION
799 S FAKE AVE
Northern Blempglorff, RI 99xxx
01/26/2011
These items are being held for you at the location shown below each one.
IF YOU ASKED THAT MATERIAL BE MAILED TO YOU, PLEASE DISREGARD THIS NOTICE.
The Waltons. The complete DAXXXX12118198
Pickup at:CHUPACABRA LOCATION 02/02/2011
GRIMLY, WILFORD
29 FAKE LANE
S. BLEMPGLORFF RI 99XXX
I need to remove all entries with the expression Pickup at:CHUPACABRA LOCATION.
The "record separator" issue:
I can't touch the input file's formatting -- it must be retained as is. Each record
is separated by roughly 40+ newlines.
Here's some awk ( this works ):
BEGIN {
    RS="\n\n\n\n\n\n\n\n\n+"
    FS="\n"
}
!/CHUPACABRA/{print $0}
My stab with perl:
perl -a -F\n -ne '$/ = "\n\n\n\n\n\n\n\n\n+";$\ = "\n";chomp;$regex="CHUPACABRA";print $_ if $_ !~ m/$regex/i;' data/lib51.000
Nothing is returned. I'm not sure how to specify 'field separator' in perl except at the commandline. Tried the a2p utility -- no dice. For the curious, here's what it produces:
eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
# process any FOO=bar switches
#$FS = ' '; # set field separator
$, = ' '; # set output field separator
$\ = "\n"; # set output record separator
$/ = "\n\n\n\n\n\n\n\n\n+";
$FS = "\n";
while (<>) {
    chomp;  # strip record separator
    if (!/CHUPACABRA/) {
        print $_;
    }
}
This has to run under someone's Windows box, otherwise I'd stick with awk.
Thanks!
Bubnoff
EDIT ( SOLVED )
Thanks mob!
Here's a ( working ) perl script version ( adjusted a2p output ):
eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
# process any FOO=bar switches
#$FS = ' '; # set field separator
$, = ' '; # set output field separator
$\ = "\n"; # set output record separator
$/ = "\n"x10;
$FS = "\n";
while (<>) {
    chomp;  # strip record separator
    if (!/CHUPACABRA/) {
        print $_;
    }
}
Feel free to post improvements or CPAN goodies that make this more idiomatic and/or perl-ish. Thanks!
In Perl, the record separator is a literal string, not a regular expression. As the perlvar doc famously says:
Remember: the value of $/ is a string, not a regex. awk has to be better for something. :-)
Still, it looks like you can get away with $/="\n" x 10 or something like that:
perl -a -F\n -ne '$/="\n"x10;$\="\n";chomp;$regex="CHUPACABRA";
print if /\S/ && !m/$regex/i;' data/lib51.000
Note the extra /\S/ &&, which will skip empty paragraphs from input that has more than 20 consecutive newlines.
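If the exact newline runs between the surviving records must be kept, another option is to slurp the file and reuse the capturing-split trick from the first question above, so the separators come back as list elements and are printed untouched. A sketch, assuming the file fits in memory (note that the newline run following a removed record is still printed, so the gap where it sat remains):
perl -0777 -ne 'print grep { !/CHUPACABRA/i } split /(\n{10,})/' data/lib51.000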
Also, have you considered just installing Cygwin and having awk available on your Windows machine?
There is no need for (much) conversion if you can download gawk for Windows.
Did you know that Perl comes with a program called a2p that does exactly what you described you want to do in your title?
And, if you have Perl on your machine, the documentation for this program is already there:
C> perldoc a2p
My own suggestion is to get the Llama book and learn Perl anyway. Despite what the Python people say, Perl is a great and flexible language. If you know shell, awk and grep, you'll understand many of the Perl constructs without any problems.