Splitting on pipe character in Perl

I have a little problem: I want to split a line at every pipe character using the split operator, as in this example.
echo "000001d17757274585d28f3e405e75ed|||||||||||1||||||||||||||||||||||||" | \
perl -ane '$data = $_ ; chop $data ; @d = split(/\|/ , $data) ; print $#d+1,"\n" ;'
I would expect an output of 36,
since awk splitting on the delimiter | returns 36, but instead I get 12, as if the split stopped at the 1 in the line.
echo "000001d17757274585d28f3e405e75ed|||||||||||1|||||||||||||||||||||||||||||||||||||||" | \
awk -F"|" '{print NF}'
Any ideas? I have tried many ways of quoting the |, but without success.
Many thanks in advance.

According to the documentation for split:
By default, empty leading fields are preserved, and empty trailing ones are deleted.
You need to pass a negative limit to split to keep the trailing ones:
split(/\|/, $data, -1)
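Applied to the one-liner from the question (a quick check; chomp is used here instead of chop):
echo "000001d17757274585d28f3e405e75ed|||||||||||1||||||||||||||||||||||||" | \
perl -ne 'chomp; my @d = split(/\|/, $_, -1); print $#d+1, "\n";'
36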

How to surround a string in double quotes

I have a file with the following:
firsttext=cat secondtext=dog thirdtext=mouse
and I want it to return this string:
"firsttext=cat" "secondtext=dog" "thirdtext=mouse"
I have tried this one-liner but it gives me an error.
cat oneline | perl -ne 'print \"$_ \" '
Can't find string terminator '"' anywhere before EOF at -e line 1.
I don't understand the error. Why can't it just add the quotation marks?
Also, if I have a variable in this string, I want it to be interpolated like:
firsttext=${animal} secondtext=${othervar} thirdtext=mouse
Which should output
"firsttext=cat" "secondtext=dog" "thirdtext=mouse"
perl -lne '@f = map qq/"$_"/, split; print "@f";' oneline
What you want is this:
cat oneline | perl -ne 'print join " ", map { qq["$_"] } split'
The -n option only reads the input line by line; it won't split each line on whitespace unless other options (such as -a) are set.
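With the sample file from the question:
$ cat oneline
firsttext=cat secondtext=dog thirdtext=mouse
$ perl -ne 'print join " ", map { qq["$_"] } split' oneline
"firsttext=cat" "secondtext=dog" "thirdtext=mouse"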

Perl: Using Text::Wrap and specifying the end of line

Yes, I'm re-writing cowsay :)
#!/usr/bin/perl
use Text::Wrap;
$Text::Wrap::columns = 40;
my $FORTUNE = "The very long sentence that will be outputted by another command and it can be very long so it is word-wrapped The very long sentence that will be outputted by another command and it can be very long so it is word-wrapped";
my $TOP = " _______________________________________
/ \\
";
my $BOTTOM = "\\_______________________________________/
";
print $TOP;
print wrap('| ', '| ', $FORTUNE) . "\n";
print $BOTTOM;
Produces this:
 _______________________________________
/                                       \
| The very long sentence that will be
| outputted by another command and it
| can be very long so it is
| word-wrapped The very long sentence
| that will be outputted by another
| command and it can be very long so it
| is word-wrapped
\_______________________________________/
How can I get this?
 _______________________________________
/                                       \
| The very long sentence that will be   |
| outputted by another command and it   |
| can be very long so it is             |
| word-wrapped The very long sentence   |
| that will be outputted by another     |
| command and it can be very long so it |
| is word-wrapped                       |
\_______________________________________/
I could not find a way in the documentation, but you can apply a small hack if you save the wrapped string first. It is possible to assign a new line ending by using a package variable:
$Text::Wrap::separator = "|$/";
You also need to stop the module from turning leading spaces into tabs, which would throw off the character count:
$Text::Wrap::unexpand = 0;
This is simply a pipe | followed by the input record separator $/ (most often a newline). It adds a pipe to the end of each line, but no padding spaces, which have to be added manually:
my $text = wrap('| ', '| ', $FORTUNE) . "\n";
$text =~ s/(^.+)\K\|/' ' x ($Text::Wrap::columns - length($1)) . '|'/gem;
print $text;
This matches each line up to its final |, and adds the padding by repeating a space (columns minus the length of the captured text) times. The /m modifier makes ^ match at the start of every line inside the string; since .+ by itself does not match newlines, each match is confined to a single line. The /e modifier evaluates the replacement part as code, not as a string.
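To see the padding substitution in isolation, here is a minimal sketch on a hardcoded two-line string, with a width of 20 instead of 40:
perl -e '$_ = "| foo|\n| longer line|\n"; s/(^.+)\K\|/" " x (20 - length($1)) . "|"/gem; print'
| foo               |
| longer line       |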
Note that it is somewhat of a quick hack, so bugs are possible.
If you're willing to download a more powerful module, you can use Text::Format. It has a lot more options for customizing, but the most relevant one is rightFill, which fills the rest of the columns in each line with spaces.
Unfortunately, you can't customize the left and right sides with non-space characters. You can use a workaround by doing regex substitutions, just as Text::NWrap does in its source code.
#!/usr/bin/env perl
use utf8;
use Text::Format;
chop(my $FORTUNE = "The very long sentence that will be outputted by another command and it can be very long so it is word-wrapped " x 2);
my $TOP = "/" . '‾'x39 . "\\\n";
my $BOTTOM = "\\_______________________________________/\n";
my $formatter = Text::Format->new({ columns => 37, firstIndent => 0, rightFill => 1 });
my $text = $formatter->format($FORTUNE);
$text =~ s/^/| /mg;
$text =~ s/\n/ |\n/mg;
print $TOP;
print $text;
print $BOTTOM;

Printing reverse complement of DNA in single-line Perl

I want to write a quick single-line perl script to produce the reverse complement of a sequence of DNA. The following isn't working for me, however:
$ cat sample.dna.sequence.txt | perl -ne '{while (<>) {$seq = $_; $seq =~ tr /atcgATCG/tagcTAGC/; $revComp = reverse($seq); print $revComp;}}'
Any suggestions? I'm aware that
tr -d "\n " < input.txt | tr "[ATGCatgcNn]" "[TACGtacgNn]" | rev
works in bash, but I want to do it with perl for the practice.
Your problem is that you're using both -n and while (<>) { }, so you end up with while (<>) { while (<>) { } }.
If you know how to do <file.txt, why did you switch to cat file.txt|?!
perl -0777ne's/\n //g; tr/ATGCatgcNn/TACGtacgNn/; print scalar reverse $_;' input.txt
or
perl -0777pe's/\n //g; tr/ATGCatgcNn/TACGtacgNn/; $_ = reverse $_;' input.txt
Or if you don't need to remove the newlines:
perl -pe'tr/ATGCatgcNn/TACGtacgNn/; $_ = reverse $_;' input.txt
If you need to use cat, the following one-liner should work for you.
ewolf@~ $ cat foo.txt
atNgNt
gatcGn
ewolf@~ $ cat foo.txt | perl -lne '$seq = $_; $seq =~ tr/atcgATCG/tagcTAGC/; print scalar reverse($seq)'
aNcNat
nCgatc
Considering the DNA sequences in single-line format in a multifasta file:
cat multifasta_file.txt | while IFS= read L; do if [[ $L == ">"* ]]; then echo "$L"; else echo $L | rev | tr "ATGCatgc" "TACGtacg"; fi; done > output_file.txt
If your multifasta file is not in single-line format, you can transform your file to single-line before using the command above, like this:
awk '/^>/ {printf("\n%s\n",$0);next; } { printf("%s",$0);} END {printf("\n");}' < multifasta_file.txt > multifasta_file_singleline.txt
Then,
cat multifasta_file_singleline.txt | while IFS= read L; do if [[ $L == ">"* ]]; then echo "$L"; else echo $L | rev | tr "ATGCatgc" "TACGtacg"; fi; done > output_file.txt
Hope it is useful for someone. It took me some time to build it.
The problem is that you're using the -n flag, yet you've written your own loop. -n wraps your supplied code in a loop like while(<STDIN>){...}. So the STDIN file handle has already been read from, and when your code reads it again it gets EOF (end of file), or rather undef. You either need to remove the n from -ne or remove the while loop from your code.
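A minimal corrected version of the original attempt (a sketch; the inner loop is dropped so -n does the reading, and -l strips the newline so it isn't swept into the reversed string):
cat sample.dna.sequence.txt | perl -lne 'tr/atcgATCG/tagcTAGC/; print scalar reverse $_'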
Incidentally, a complete complement tr pattern, including ambiguous bases, is:
tr/ATGCBVDHRYKMatgcbvdhrykm/TACGVBHDYRMKtacgvbhdyrmk/
Ambiguous bases have complements too. For example, a V stands for an A, C, or G. Their complements are T, G, and C, which is represented by the ambiguous base B. Thus, V and B are complementary.
You don't need to include any N's or n's in your tr pattern (as was demonstrated in another answer) because the complement is the same and leaving them out will leave them untouched. It's just extra processing to put them in the pattern.
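As a minimal sketch of the full pattern (again using -l to keep the newline out of the reversal; per the explanation above, the complement of V is B, and N maps to itself):
$ echo 'AVDN' | perl -lpe 'tr/ATGCBVDHRYKMatgcbvdhrykm/TACGVBHDYRMKtacgvbhdyrmk/; $_ = reverse $_'
NHBT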

Insert comma after certain byte range

I'm trying to turn a big list of data into a CSV. It's basically a giant list with no spaces, and the rows are separated by newlines. I have made a bash script that loops through the document, awks out the line, cuts the byte range, and then adds a comma and appends it to the end of the line. It looks like this:
awk -v n=$x 'NR==n { print;exit}' PROP.txt | cut -c 1-12 | tr -d '\n' >> $x.tmp
awk -v n=$x 'NR==n { print;exit}' PROP.txt | cut -c 13-17 | tr -d '\n' | xargs -I {} sed -i '' -e 's~$~,{}~' $x.tmp
awk -v n=$x 'NR==n { print;exit}' PROP.txt | cut -c 18-22 | tr -d '\n' | xargs -I {} sed -i '' -e 's~$~,{}~' $x.tmp
awk -v n=$x 'NR==n { print;exit}' PROP.txt | cut -c 23-34 | tr -d '\n' | xargs -I {} sed -i '' -e 's~$~,{}~' $x.tmp
The problem is that this is EXTREMELY slow, and the data has about 400k rows. I know there must be a better way to accomplish this. Essentially I just need to add a comma after the 12th, 17th, 22nd, 34th, etc. character of each line.
Any help is appreciated, thank you!
There are many ways to do this with Perl. Here is one:
perl -pe 's/(.{12})(.{5})(.{5})(.{12})/$1,$2,$3,$4,/' < input-file > output-file
The matching pattern in the substitution captures four groups of text from the beginning of each line with 12, 5, 5, and 12 arbitrary characters. The replacement pattern places a comma after each group.
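For example, with a 40-character line of digits:
$ echo '1234567890123456789012345678901234567890' | perl -pe 's/(.{12})(.{5})(.{5})(.{12})/$1,$2,$3,$4,/'
123456789012,34567,89012,345678901234,567890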
With GNU awk, you could write
gawk 'BEGIN {FIELDWIDTHS="12 5 5 12"; OFS=","} {$1=$1; print}'
The $1=$1 part forces awk to rewrite the line, incorporating the output field separator, without changing anything else.
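For example (note that gawk ignores any characters beyond the declared field widths, so the widths must cover the whole line):
$ echo '1234567890123456789012345678901234' | gawk 'BEGIN {FIELDWIDTHS="12 5 5 12"; OFS=","} {$1=$1; print}'
123456789012,34567,89012,345678901234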
This is very much a job for substr.
use strict;
use warnings;

my @widths = (12, 5, 5, 12);

while (my $line = <DATA>) {
    my $offset = 0;                     # reset the offset for each line
    for my $width (@widths) {
        $offset += $width;
        substr $line, $offset, 0, ',';  # insert a comma at the current offset
        ++$offset;                      # account for the comma just inserted
    }
    print $line;
}
__DATA__
1234567890123456789012345678901234567890
Output:
123456789012,34567,89012,345678901234,567890

Counting lines ignored by grep

Let me try to explain this as clearly as I can...
I have a script that at some point does this:
grep -vf ignore.txt input.txt
This ignore.txt has a bunch of lines with things I want my grep to ignore, hence the -v (meaning I don't want to see them in the output of grep).
Now, what I want to do is I want to be able to know how many lines of input.txt have been ignored by each line of ignore.txt.
For example, if ignore.txt had these lines:
line1
line2
line3
I would like to know how many lines of input.txt were ignored by ignoring line1, how many by ignoring line2, and so on.
Any ideas on how can I do this?
I hope that made sense... Thanks!
Note that the sum of the ignored lines plus the shown lines may NOT add up to the total number of lines; a line like "line1 and line2 are here" will be counted twice.
#!/usr/bin/perl
use warnings;
use strict;

local @ARGV = 'ignore.txt';
chomp(my @pats = <>);

foreach my $pat (@pats) {
    print "$pat: ", qx/grep -c $pat input.txt/;
}
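For example, assuming ignore.txt holds the three patterns above and input.txt contains:
line1 is here
line1 and line2 are here
line3
the script prints:
line1: 2
line2: 1
line3: 1
The second input line is counted for both line1 and line2, which illustrates the double-counting caveat above.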
According to unix.stackexchange,
grep -o pattern file | wc -l
counts the total number of occurrences of a given pattern in the file. Given this, and since you already use a script, one solution is to run a separate grep instance for each pattern you want to ignore and count its matches.
However, I'd try to build a more comfortable solution in a scripting language such as Python.
This script will count the matched lines via a hash lookup and save the lines that would be printed in @result, where you may process them as you wish. To emulate grep, just print them.
I made the script so it prints an example. To use it with the real files, uncomment the commented-out code and remove the lines marked # example line.
Code:
use strict;
use warnings;
use v5.10;
use Data::Dumper; # example line
# Example data.
my @ignore = ('line1' .. 'line9');                                 # example line
my @input  = ('line2' .. 'line9', 'fo' .. 'fx', 'line2', 'line3'); # example line

#my $ignore = shift; # first argument is ignore.txt
#open my $fh, '<', $ignore or die $!;
#chomp(my @ignore = <$fh>);
#close $fh;

my @result;
my %lookup = map { $_ => 0 } @ignore;
my $rx = join '|', map quotemeta, @ignore;

#while (<>) { # This processes the remaining arguments, input.txt etc
for (@input) { # example line
    chomp;     # Required to avoid bugs due to missing newline at eof
    if (/($rx)/) {
        $lookup{$1}++;
    } else {
        push @result, $_;
    }
}

#say for @result;      # This will emulate grep
print Dumper \%lookup; # example line
Output:
$VAR1 = {
          'line6' => 1,
          'line1' => 0,
          'line5' => 1,
          'line2' => 2,
          'line9' => 1,
          'line3' => 2,
          'line8' => 1,
          'line4' => 1,
          'line7' => 1
        };
while IFS= read -r pattern ; do
    printf '%s:' "$pattern"
    grep -c -v "$pattern" input.txt
done < ignore.txt
grep with -c counts matching lines, but with -v added it counts non-matching lines. So, simply loop over the patterns and count once for each pattern.
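A quick illustration of the difference with inline data:
$ printf 'apple\nbanana\napricot\n' | grep -c 'ap'
2
$ printf 'apple\nbanana\napricot\n' | grep -c -v 'ap'
1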
This will print the number of ignored matches along with the matching pattern:
grep -of ignore.txt input.txt | sort | uniq -c
For example:
$ perl -le 'print "Coroline" . ++$s for 1 .. 21' > input.txt
$ perl -le 'print "line2\nline14"' > ignore.txt
$ grep -of ignore.txt input.txt | sort | uniq -c
1 line14
3 line2
That is, a line matching "line14" was ignored once, and lines matching "line2" were ignored 3 times.
If you just wanted to count the total ignored lines this would work:
grep -cof ignore.txt input.txt
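With the example files above, this prints 4 (the one line matching "line14" plus the three matching "line2").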
Update: modified the example above to use strings so that the output is a little clearer.
This might work for you:
# seq 1 15 | sed '/^1/!d' | sed -n '$='
7
Explanation:
Delete all lines except those that match, then pipe these matching (ignored) lines to another sed command, which deletes them all but shows the line number of the last one. So in this example of 1 through 15, lines 1 and 10 through 15 are ignored - a total of 7 lines.
EDIT:
Sorry misread the question (still a little confused!):
sed 's,.*,sed "/&/!d;s/.*/matched &/" input.txt| uniq -c,' ignore.txt | sh
This shows the number of matches for each pattern in ignore.txt
sed 's,.*,sed "/&/d;s/.*/non-matched &/" input.txt | uniq -c,' ignore.txt | sh
This shows the number of non-matches for each pattern in ignore.txt
If using GNU sed, these should work too:
sed 's,.*,sed "/&/!d;s/.*/matched &/" input.txt | uniq -c,;e' ignore.txt
or
sed 's,.*,sed "/&/d;s/.*/non-matched &/" input.txt | uniq -c,;e' ignore.txt
N.B. Your success with patterns may vary i.e. check for meta characters beforehand.
On reflection I thought this can be improved to:
sed 's,.*,/&/i\\matched &,;$a\\d' ignore.txt | sed -f - input.txt | sort -k2n | uniq -c
or
sed 's,.*,/&/!i\\non-matched &,;$a\\d' ignore.txt | sed -f - input.txt | sort -k2n | uniq -c
But NO, on large files this is actually slower.
Are both ignore.txt and input.txt sorted?
If so, you can use the comm command!
$ comm -12 ignore.txt input.txt
How many lines are ignored?
$ comm -12 ignore.txt input.txt | wc -l
Or, if you want to do more processing, combine comm with awk:
$ comm ignore.txt input.txt | awk '
    END {print "Ignored lines = " igtotal " Lines not ignored = " commtotal " Lines unique to Ignore file = " uniqtotal}
    {
        if ($0 !~ /^\t/)     {uniqtotal += 1}
        if ($0 ~ /^\t[^\t]/) {commtotal += 1}
        if ($0 ~ /^\t\t/)    {igtotal += 1}
    }'
Here I'm taking advantage of the tabs that the comm command places in its output:
* If there are no tabs, the line is in ignore.txt only.
* If there is a single tab, it is in input.txt only.
* If there are two tabs, the line is in both files.
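For example, with two small sorted files (a hypothetical input.txt containing a, b, c and ignore.txt containing b, d):
$ comm ignore.txt input.txt
	a
		b
	c
d
Here a and c are unique to input.txt (one leading tab), b is in both files (two tabs), and d appears only in ignore.txt (no tab).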
By the way, not all the lines in ignore.txt are ignored. If the line isn't also in input.txt, the line can't really be said to be ignored.
With Dennis Williamson's suggestion:
comm ignore.txt input.txt | awk '
    !/^\t/     {uniqtotal++}
    /^\t[^\t]/ {commtotal++}
    /^\t\t/    {igtotal++}
    END {print "Ignored lines = " igtotal " Lines not ignored = " commtotal " Lines unique to Ignore file = " uniqtotal}'