I'm very new to Perl and I've been trying to implement a function that prints out the line containing the largest number from standard input. For example, if the input is:
Hello, i'm 18
1 this year is 2019 1
1 2 3 - 4
The output should be: 1 this year is 2019 1
And secondly, I would like to know what $line =~ /-?(?:\d+.?\d*|.\d+)/g means.
The following is what I've tried; it is not working and I hope someone could fix it for me. I'm struggling with filtering out the random characters while keeping the digits.
Is it necessary to push the largest number onto an array? Is there any way to do this in one step?
#!/usr/bin/perl -w
while ($lines = <STDIN>){
@input = $lines =~ /\d+/g;
if (!@input){
} else {
$largest_number = sort {$a <=> $b} @input[0];
push(@number, $largest_number);
}
}
if (!@number){
}else{
print $largest_number;
}
@input[0] returns just the first value from the array. You probably want to use @input instead - but this way you'd get the numbers from one line sorted. Also, you need to store the whole line somewhere in order to be able to display it later.
Here's how I would do it:
#!/usr/bin/perl
use warnings;
use strict;
my @max;
while (my $line = <>) {
my @numbers = $line =~ /\d+/g;
for my $n (@numbers) {
if (! @max || $n > $max[0]) {
@max = ($n, $line);
}
}
}
print $max[1] if @max;
The @max array stores the largest number in $max[0], while $max[1] keeps the whole line. You just compare each number to the largest one; there's no need to search for the maximum on each line.
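For example, saving the script above as largest.pl (a name chosen here for illustration) and putting the question's three sample lines in input.txt, running it should print the line containing 2019:
$ perl largest.pl input.txt
1 this year is 2019 1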
To store all the lines containing the largest number, change the loop body to
if (! @max || $n > $max[0]) {
@max = ($n, $line);
} elsif ($n == $max[0]) {
push #max, $line;
}
and the last line to
print @max[ 1 .. $#max ] if @max;
I have the below FIX file and I want to find out how many orders are sent at the same time. I am using tag 52 as the sending time.
Below is the file:
8=FIX.4.2|9=115|35=A|52=20080624-12:43:38.021|10=186|
8=FIX.4.2|52=20080624-12:43:38.066|10=111|
8=FIX.4.2|9=105|35=1|22=BOO|52=20080624-12:43:39.066|10=028|
How can I count how many occurrences of the same tag 52 value were sent?
So far, I have written the below code, but it is not giving me the frequency.
#!/usr/bin/perl
$f = '2.txt';
open (F,"<$f") or die "Can not open\n";
while (<F>)
{
chomp $_;
@data = split (/\|/,$_);
foreach $data (@data)
{
if ( $data == 52){
@data1 = split ( /=/,$data);
for my $j (@data1)
{
$hash{$j}++;
} for my $j (keys %hash)
{
print "$j: ", $hash{j}, "\n";
}
}
}
}
Here is your code corrected:
#!/usr/bin/perl
$f = '2.txt';
open (F,"<$f") or die "Can not open\n";
my %hash;
while (<F>) {
chomp $_;
@data = split (/\|/,$_);
foreach $data (@data) {
if ($data =~ /^52=(.*)/) {
$hash{$1}++;
}
}
}
for my $j (keys %hash) {
print "$j: ", $hash{j}, "\n";
}
Explanation:
if ( $data == 52) compares the whole field against the value 52, not a substring of the field. Of course, you do not have such fields, so the test always fails. I replaced it with a regexp comparison.
The same regexp also captures the timestamp immediately, without needing to split the field again. This is done by (.*) in the regexp and $1 in the following assignment.
It hardly makes sense to output the hash for every line of input data (your code outputs it within the foreach loop), so I moved it below the loop. But maybe outputting the current hash for every line is what you wanted; I do not know.
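For instance, a tiny standalone demonstration of the capture, using one of the sample fields from the file above:
my $field = '52=20080624-12:43:38.021';
if ($field =~ /^52=(.*)/) {
    print "$1\n";   # prints 20080624-12:43:38.021
}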
I'm quite new to Perl and I have some problems skipping lines using a foreach loop. I want to copy some lines of a text file to a new one.
When the first words of a line are FIRST ITERATION, skip two more lines and print everything following until the end of the file or an empty line is encountered.
I've tried to find a similar post, but nobody talks about working with text files.
This is the form I thought of:
use 5.010;
use strict;
use warnings;
open( INPUT, "xxx.txt" ) or die("Could not open log file.");
open( OUT, ">>yyy.txt" );
foreach my $line (<INPUT>) {
if ( $line =~ m/^FIRST ITERATION/ ) {
# print OUT
}
}
close(OUT);
close(INPUT);
I tried using next and $line++ but my program prints only the line that begins with FIRST ITERATION.
I may try to use a for loop but I don't know how many lines my file may have, nor do I know how many lines there are between "First Iteration" and the next empty line.
The simplest way is to process the file a line at a time and keep a state flag, which is set to 1 if the current line begins with FIRST ITERATION and to 0 if the line is blank; otherwise it is incremented if it is already positive, so that it provides a count of the line number within the current block.
This solution expects the path to the input file as a parameter on the command line and prints its output to STDOUT, so you will need to redirect the output to a file on the command line as necessary.
Note that the regex pattern /\S/ checks whether there is a non-blank character anywhere in the current line, so not /\S/ is true if the line is empty or contains only blank characters.
use strict;
use warnings;
my $lines = 0;
while ( <> ) {
if ( /^FIRST ITERATION/ ) {
$lines = 1;
}
elsif ( not /\S/ ) {
$lines = 0;
}
elsif ( $lines > 0 ) {
++$lines;
}
print if $lines > 3;
}
This can be simplified substantially by using Perl's built-in range operator, which keeps its own internal state and returns the number of times it has been evaluated. So the above may be written
use strict;
use warnings;
while ( <> ) {
my $s = /^FIRST ITERATION/ ... not /\S/;
print if $s and $s > 3;
}
And that in turn can be rewritten as a one-line command-line program like this:
$ perl -ne '$s = /^FIRST ITERATION/ ... not /\S/; print if $s and $s > 3' myfile.txt
Use an additional counter that tells you when to print the line. Something like this:
$skipCounter = 3;
And in foreach:
if ($skipCounter == 2) {
# print OUT
}
if ( $line =~ m/^FIRST ITERATION/) {
$skipCounter = 0;
}
$skipCounter++;
Advice: Use STDIN and STDOUT instead of files; this will allow you to change them without modifying the script.
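For example, assuming the script is saved as script.pl (a hypothetical name), the input and output files can then be chosen on the command line:
perl script.pl < xxx.txt >> yyy.txt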
Code:
#!/usr/bin/perl
use 5.010;
use strict;
use warnings;
open(INPUT, "xxx.txt" ) or die "Could not open log file: $!.";
open(OUT, ">yyy.txt") or die "Could not open output file: $!";
while( my $line = <INPUT> )
{
if ( $line =~ m/^FIRST ITERATION/) {
<INPUT>; # skip line
<INPUT>; # skip line
while( $line = <INPUT>) # print till empty line
{
last if $line eq "\n";
print OUT $line;
}
};
};
close (OUT);
close (INPUT);
You're on the right track. What you need to use is the flip-flop operator (which is basically the range operator) ... It will toggle for you between two matches, so you get everything in between. After that, it's a matter of keeping track of the lines you want to skip.
So basically we check for FIRST ITERATION and for an empty line, and grab everything in between. $skip is used to remember how many lines have been skipped. It starts at 0 and gets incremented while we are inside the flip-flop's if block, which skips the FIRST ITERATION line itself plus the next two lines. In the else case, once we are outside the flip-flop, it gets reset to 0 so we can start over with the next block.
Since you know how to open and write files, I'll skip that.
use strict;
use warnings;
my $skip = 0;
while (<DATA>) {
if (/^FIRST ITERATION$/ .. /^$/) {
next if $skip++ <= 2;
print $_;
} else {
$skip = 0;
}
}
__DATA__
FIRST ITERATION
skip1
skip2
foo
bar
baz
don't print this
The output of this is:
foo
bar
baz
To stick with your own code, here's a very verbose solution that uses a foreach and no flip-flop. It does the same thing, just with a lot more words.
my $skip = 0; # skip lines
my $match = 0; # keep track of if we're in between the borders
foreach my $line (<DATA>) {
if ( $line =~ m/^FIRST ITERATION/ ) {
$match = 1; # we are inside the match
next;
}
if ($line =~ m/^$/) {
$match = 0; # we are done matching
next;
}
if ($match) {
$skip++; # count skip-lines
if ($skip <= 2) {
next; # ... and skip the first two
}
print $line; # this is the content we want
}
}
Using paragraph mode (which returns blocks separated by blank lines rather than lines):
local $/ = ""; # Paragraph mode.
while (<>) {
s/\n\K\n+//; # Get rid of trailing empty lines.
print /^FIRST ITERATION\n[^\n]*\n[^\n]*\n(.*)/ms;
}
Using the flip-flop operator:
while (<>) {
if (my $line_num = /^FIRST ITERATION$/ .. /^$/) {
print if $line_num > 3 && $line_num !~ /E0/;
}
}
$line_num =~ /E0/ is true when the flip-flop is flopping (i.e. on the first empty line after FIRST ITERATION), so the !~ /E0/ check avoids printing that blank line.
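A small sketch (using inline DATA rather than a real file) of the values the flip-flop returns; note the E0 suffix on the line that ends the range:
use strict;
use warnings;
while (my $line = <DATA>) {
    my $state = $line =~ /^FIRST ITERATION$/ .. $line =~ /^$/;
    printf "%-4s| %s", $state || '-', $line;   # '-' outside the range
}
__DATA__
before
FIRST ITERATION
skip1
skip2
keep

after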
I have code which adds up all the vectors in all the files.
There can be any number of input files. For example first input file is:
0.55 0 0.3335 1.2
0.212 0 2.2025 1
and the second one is:
0.25 0 0.3333 1.0
0.1235 0 0.2454 1
What I get is the sum of all the vectors, so as a result I get one vector, which is:
1.13550 0 3.1147 4.2
But I'm trying to sum the first vector of the first file with the first vector of the second file, and so on. As a result, according to this example, I should get 2 vectors.
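For the two sample files above, that would be these two vectors (my reading of the question, summing the files line by line):
0.8 0 0.6668 2.2
0.3355 0 2.4479 2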
For now I have this:
use strict;
use warnings;
if ($ARGV[0] ne "vector1.dat"){
die ("vector1.dat is necessary as first argument");
}
my @sum = 0;
my $dim = 0;
while (<>) {
#Ignore blank lines, hashtags
#and lines starting with $
if ($_ =~ /#/ || $_ =~ /^$/ || $_ =~ /^\s$/){
next;
}
my @vectors = split(" ", $_);
my $vector_length = @vectors;
if ($dim eq 0) {
$dim = $vector_length;
}
else {
if ($dim ne $vector_length) {
die ("Vector dimensions do not match. : $!");
}
}
for (my $i = 0; $i <= $#vectors; $i++) {
$sum[$i] += $vectors[$i];
}
}
$" = "\t\t";
print "\n --- \n #sum \n";
What I need is just to find out how to identify each file's nth line and to sum the column values of those lines, keeping in mind that there can be any number of files.
I saw a filehandling question over here with a similar issue; however, I didn't find my answer there.
Just looking for some suggestions and guidance. Got stuck on this.
Open each file yourself and use the $. variable to know which line you are on (or count the lines yourself). Here's the basic structure:
foreach my $file ( @files ) {
open my $fh, '<', $file or die ...;
while( <$fh> ) {
chomp;
$sum[ $. ] = ...; # $. is the line number
}
}
If you don't like $., you can use its longer name. You have to turn on English (which comes with Perl):
use English;
## use English qw( -no_match_vars ); # for v5.16 and earlier
foreach my $file ( @files ) {
open my $fh, '<', $file or die ...;
while( <$fh> ) {
chomp;
$sum[ $INPUT_LINE_NUMBER ] = ...;
}
}
Or, you can count yourself, which might be handy if the vectors in the files don't line up by strict line number (perhaps because of comments or some other formatting oddity):
foreach my $file ( @files ) {
open my $fh, '<', $file or die ...;
my $line = -1;
while( <$fh> ) {
$line++;
chomp;
$sum[ $line ] = ...;
}
}
The harder way is the answer bart gives, which inspects eof at the end of every line to see if the magical ARGV handle is looking at a new file, resetting $. if it is. It's an interesting trick, but hardly anyone is going to understand what it's doing (or even notice it).
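For reference, a minimal sketch of that trick in isolation; the continue block runs after every pass through the loop, and closing ARGV at end-of-file makes $. restart at 1 for the next file:
while (<>) {
    printf "%s line %d: %s", $ARGV, $., $_;
} continue {
    close ARGV if eof;   # reset $. between files
}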
For the other part of the problem, I think you're doing the vector sum wrong, or using confusing variable names. A line is a vector, and the numbers in the line are its components. A two-dimensional array will work: the first index is the line number and the second is the component index:
while( <$fh> ) {
chomp;
... skip unwanted lines
my @components = split;
... various dimension checks
foreach my $i ( 0 .. $#components ) {
$sum[ $. ][ $i ] += $components[ $i ];
}
}
The Data::Dumper module is handy for complex data structures. You can also see the perldsc (Perl Data Structures Cookbook) documentation. The $. variable is found in perlvar.
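For example, to inspect the accumulated structure (assuming the @sum array of arrays built in the loop above):
use Data::Dumper;
print Dumper(\@sum);   # dump the array-of-arrays of component sums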
$. is the line number of the most recently read file handle. close(ARGV) if eof; can be used to reset the line number between files (as documented in eof). (Note: eof() is different from eof.) So you now have line numbers.
The second problem you have is that you are adding vector components ($vectors[$i]) to vectors ($sum[$i]). You need to add vector components to vector components. Start by using more appropriate variable names.
This is what we get:
my #sum_vectors;
while (<>) {
s/#.*//; # Remove comments.
next if /^\s*$/; # Ignore blank lines.
my @vector = split;
if ($sum_vectors[$.] && @{ $sum_vectors[$.] } != @vector) {
die("$ARGV:$.: Vector dimensions do not match\n");
}
for my $i (0..$#vector) {
$sum_vectors[$.][$i] += $vector[$i];
}
} continue {
close(ARGV) if eof; # Reset line numbers for each file.
}
Two other errors fixed:
$! did not contain anything meaningful when you used it.
You ignored lines that contain comments, even if they contained valid data too.
Try this:
#!/usr/bin/perl
use strict;
use warnings;
if ($ARGV[0] ne "vector1.dat"){
die ("vector1.dat is necessary as first argument");
}
my %sum;
my $dim = 0;
my $vector_length;
my $line_number;
while (<>) {
#Ignore blank lines, hashtags
#and lines starting with $
if ($_ =~ /#/ || $_ =~ /^$/ || $_ =~ /^\s$/){
next;
}
my @vectors = split(" ", $_);
$vector_length = @vectors;
if ($dim eq 0) {
$dim = $vector_length;
}
else {
if ($dim ne $vector_length) {
die ("Vector dimensions do not match. : $!");
}
}
for (my $i = 0; $i <= $#vectors; $i++) {
$sum{$.}{$i} += $vectors[$i];
}
$line_number = $.;
$. = 0 if eof;
}
$" = "\t\t";
for (my $line=1; $line<=$line_number; $line++)
{
print $line;
for (my $vector=0; $vector<$vector_length; $vector++)
{
print " " . $sum{$line}{$vector};
}
print "\n";
}
This is really frustrating me. The script I'm writing is indexing coordinates in a hash and then using those index numbers to pull out values from an array.
The weird thing is that if the value begins with 2 or 22 it will not print. Any other number works. I'll show you two variations and output of the script.
First variation. This is what I want the script to do. Print chromosome, position, value.
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use Scalar::Util qw(looks_like_number);
open IN, "/home/big/scratch/affy_map.txt" or die "Cannot open reference\n";
my %ref;
my $head = <IN>;
my $index = 0;
while(<IN>){
chomp $_;
my @row = split /\t/, $_;
my $value = join "\t", $row[1],$row[2];
if($row[1] == 2 && $row[2] <= 50000 && $row[2] <= 51113178) { $ref{$index}=$value; print $index."\t".$value."\n";}
if($row[1] == 22 && $row[2] <= 16300001 && $row[2] <= 20500000) { $ref{$index}=$value; print $index."\t".$value."\n"; }
$index++;
}
close(IN);
my @files;
my $masterDirect = "/nfs/archive02/big/Norm/norm_gcc/";
find(\&file_names, $masterDirect);
sub file_names {
if( -f && $File::Find::name=~/\.nzd$/)
{
push @files, $File::Find::name;
}
}
my $count=0;
foreach(@files){
$count++;
if($count % 100 == 0 ){ print "\n","-" x 10, " $count ", "-" x 10,"\n";}
undef my @probes;
open IN, $_;
#file name handling
my @inDir = split "\/", $_;
my $id = pop(@inDir);
$id =~ s/\.gcc.nzd$//;
#header test
$head =<IN>;
if(looks_like_number($head)) { push @probes, $head; }
#open output
open OUT, ">/home/big/scratch/phase1_affy/".$id."_select_probeset.txt";
#load probe array
@probes = <IN>;
close(IN);
foreach my $key (sort keys %ref){
#intended function
print OUT $ref{$key}."\t".$probes[$key];
#testing
my @temp = split "\t", $ref{$key};
foreach(@temp){if($temp[0] == 2){print $key."\t".$ref{$key}."\t".$probes[$key];}}
}
close(OUT);
}
Here's the output for the test. The printing from the reference file is flawless. The first number is the $key or index number. The second is from $probes[$key]. Why is $ref{$key} missing?
146529 0.777314368326637
146529 0.777314368326637
146530 0.116241153901913
146530 0.116241153901913
146531 0.940593233609167
146531 0.940593233609167
Variation 2.
...
foreach my $key (sort keys %ref){
print OUT $ref{$key}."\t".$probes[$key];
my @temp = split "\t", $ref{$key};
foreach(@temp){if($temp[0] == 2){print $key."\t".$ref{$key}."\n";}}
}
And its output. See, now it's printing correctly: $key and $ref{$key}.
146542 2 31852
146542 2 31852
146543 2 37693
146543 2 37693
146544 2 40415
146544 2 40415
146545 2 40814
I thought it might be a DOS->UNIX file problem, but I ran perl -pi -e 's/\R/\n/g' input_files.txt on all the input the script sees. It prints the same value twice because there are two elements in the @temp array. I'm really at a loss right now.
Here is a hint at a possible issue. In the beginning part:
if($row[1] == 2 && $row[2] <= 50000 && $row[2] <= 51113178) { $ref{$index}=$value; print $index."\t".$value."\n";}
Note that you used two "<=" for $row[2], which looks peculiar. The next line has the same "problem" too. Please double-check it first; otherwise you may have filtered those rows out in the first place.
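Presumably the intent was a range check on the position, i.e. something like the following for those two lines (a sketch only; the rest of the loop stays as in the question):
if($row[1] == 2 && $row[2] >= 50000 && $row[2] <= 51113178) { $ref{$index}=$value; print $index."\t".$value."\n";}
if($row[1] == 22 && $row[2] >= 16300001 && $row[2] <= 20500000) { $ref{$index}=$value; print $index."\t".$value."\n"; }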
I'm new to Perl, and I've hit a mental roadblock. I need to extract information from a tab delimited file as shown below.
#name years risk total
adam 5 100 200
adam 5 50 100
adam 10 20 300
bill 20 5 100
bill 30 10 800
In this example, the tab delimited file shows length of investment, amount of money risked, and total at the end of investment.
I want to parse through this file and, for each name (e.g. adam), calculate the sum of years invested (5 + 5 + 10) and the sum of earnings ((200 - 100) + (100 - 50) + (300 - 20)). I also would like to save the totals for each name (200, 100, 300).
Here's what I have tried so far:
my $filename;
my $seq_fh;
open $seq_fh, $frhitoutput
or die "failed to read input file: $!";
while (my $line = <$seq_fh>) {
chomp $line;
## skip comments and blank lines and optional repeat of title line
next if $line =~ /^\#/ || $line =~ /^\s*$/ || $line =~ /^\+/;
#split each line into array
my @line = split(/\s+/, $line);
my $yeartotal = 0;
my $earning = 0;
#$line[0] = name
#$line[1] = years
#$line[2] = start
#$line[3] = end
while (@line[0]){
$yeartotal += $line[1];
$earning += ($line[3]-$line[2]);
}
}
Any ideas of where I went wrong?
The Text::CSV module can be used to read tab-delimited data. It's often much nicer than manually hacking something up with split when it comes to things like quoting, escaping, etc.
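A minimal sketch (not the poster's code) of reading the question's tab-separated data with Text::CSV; the file name investments.tsv is a placeholder:
use strict;
use warnings;
use Text::CSV;

my $csv = Text::CSV->new({ sep_char => "\t", binary => 1 })
    or die "Cannot use Text::CSV: " . Text::CSV->error_diag;

open my $fh, '<', 'investments.tsv' or die "Cannot open file: $!";
while (my $row = $csv->getline($fh)) {
    my ($name, $years, $risk, $total) = @$row;
    next if !defined $name or $name =~ /^#/;   # skip the commented header line
    print "$name: $years years, risk $risk, total $total\n";
}
close $fh;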
You're wrong here: while(@line[0]){
I'd do:
my $seq_fh;
my %result;
open($seq_fh, $frhitoutput) || die "failed to read input file: $!";
while (my $line = <$seq_fh>) {
chomp $line;
## skip comments and blank lines and optional repeat of title line
next if $line =~ /^\#/ || $line =~ /^\s*$/ || $line =~ /^\+/;
#split each line into array
my @line = split(/\s+/, $line);
$result{$line[0]}{yeartotal} += $line[1];
$result{$line[0]}{earning} += $line[3] - $line[2];
}
You should use a hash, something like this:
my %hash;
while (my $line = <>) {
next if $line =~ /^#/;
my ($name, $years, $risk, $total) = split /\s+/, $line;
next unless defined $name and defined $years
and defined $risk and defined $total;
$hash{$name}{years} += $years;
$hash{$name}{risk} += $risk;
$hash{$name}{total} += $total;
$hash{$name}{earnings} += $total - $risk;
}
foreach my $name (sort keys %hash) {
print "$name earned $hash{$name}{earnings} in $hash{$name}{years}\n";
}
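With the sample data from the question, this should print something like:
adam earned 430 in 20
bill earned 885 in 50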
Nice opportunity to explore Perl's powerful command line options! :)
Code
Note: this code should be a command-line one-liner, but it's a little bit easier to read this way. When writing it in a proper script file, you really should enable strict and warnings and use slightly better names. This version won't compile under strict; you would have to declare our %d.
#!/usr/bin/perl -nal
# collect data
$d{$F[0]}{y} += $F[1];
$d{$F[0]}{e} += $F[3] - $F[2];
# print summary
END { print "$_:\tyears: $d{$_}{y},\tearnings: $d{$_}{e}" for sort keys %d }
Output
adam: years: 20, earnings: 430
bill: years: 50, earnings: 885
Explanation
I make use of the -n switch here, which basically lets your code iterate over the input records (-l handles the line endings). The -a switch lets perl split the lines into the array @F. Simplified version:
while (defined($_ = <STDIN>)) {
chomp $_;
our(@F) = split(' ', $_, 0);
# collect data
$d{$F[0]}{y} += $F[1];
$d{$F[0]}{e} += $F[3] - $F[2];
}
%d is a hash with the names as keys and hashrefs as values, which contain years (y) and earnings (e).
The END block is executed after finishing the input line processing and outputs %d.
Use O's Deparse to view the code which is actually executed:
book:/tmp memowe$ perl -MO=Deparse tsv.pl
BEGIN { $/ = "\n"; $\ = "\n"; }
LINE: while (defined($_ = <ARGV>)) {
chomp $_;
our(@F) = split(' ', $_, 0);
$d{$F[0]}{'y'} += $F[1];
$d{$F[0]}{'e'} += $F[3] - $F[2];
sub END {
print "${_}:\tyears: $d{$_}{'y'},\tearnings: $d{$_}{'e'}" foreach (sort keys %d);
}
;
}
tsv.pl syntax OK
It seems like a fixed-width file; I would use unpack for that.
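A rough sketch of that idea, assuming the columns really are space-padded to fixed widths (the A8 widths and the inline sample below are illustrative, not taken verbatim from the question):
use strict;
use warnings;

while (my $line = <DATA>) {
    chomp $line;
    next if $line =~ /^#/;   # skip the header line
    my ($name, $years, $risk, $total) = unpack 'A8 A8 A8 A*', $line;
    print "$name: years=$years risk=$risk total=$total\n";
}

__DATA__
#name   years   risk    total
adam    5       100     200
adam    5       50      100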