How to Hash in Perl - perl

I am finding unique URLs in a log file along with the response time, which is available in $line[7]. I am using a hash to get the unique URLs.
How can I get the count of unique URLs?
How can I get the average response time along with the count for each unique URL?
With the code below I am getting
url1
url2
url3
but I want it along with the average response time and count for each URL:
URL Av.RT Count
url1 10.5 125
url2 9.3 356
url3 7.8 98
Code:
#!/usr/bin/perl
open(IN, "web1.txt") or die "can not open file";

# Hash to store final list of unique URLs
my %uniqueURLs = ();
my $z;

# Read log file line by line
while (<IN>) {
    @line = split(" ", $_);
    $uniqueURLs{$line[9]} = 1;
}

# Go through the hash table and print the keys,
# which are the unique URLs
for $url (keys %uniqueURLs) {
    print $url . "\n";
}

Store an array reference (listref) in your hash:
$uniqueURLs{$line[9]} = [ <avg response time>, <count> ];
Adjust the elements accordingly, e.g. the count:
if (defined($uniqueURLs{$line[9]})) {
    # url known, increment count,
    # update average response time with data from current log entry
    $uniqueURLs{$line[9]}->[0] =
        (($uniqueURLs{$line[9]}->[0] * $uniqueURLs{$line[9]}->[1]) + ($line[7] + 0.0))
        / ($uniqueURLs{$line[9]}->[1] + 1);
    $uniqueURLs{$line[9]}->[1] += 1;
}
else {
    # url not yet known,
    # init count with 1 and average response time with actual response time from log entry
    $uniqueURLs{$line[9]} = [ $line[7] + 0.0, 1 ];
}
To print the results:
# Go through the hash table and print the keys,
# which are the unique URLs
for $url (keys %uniqueURLs) {
    printf("%s %f %d\n", $url, $uniqueURLs{$url}->[0], $uniqueURLs{$url}->[1]);
}
Adding 0.0 guarantees type coercion from string to number as a safeguard.
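Put together with the question's read loop, a minimal end-to-end sketch might look like this (assuming, as in the question, that the URL is field 9 and the response time is field 7 of each log line):
#!/usr/bin/perl
use strict;
use warnings;

open my $in, '<', 'web1.txt' or die "can not open file: $!";

my %uniqueURLs;
while (<$in>) {
    my @line = split ' ', $_;
    my ($url, $rt) = ($line[9], $line[7] + 0.0);
    if (defined $uniqueURLs{$url}) {
        # update the running average first, then the count
        $uniqueURLs{$url}->[0] =
            ($uniqueURLs{$url}->[0] * $uniqueURLs{$url}->[1] + $rt)
            / ($uniqueURLs{$url}->[1] + 1);
        $uniqueURLs{$url}->[1] += 1;
    }
    else {
        $uniqueURLs{$url} = [ $rt, 1 ];
    }
}
close $in;

for my $url (keys %uniqueURLs) {
    printf "%s %f %d\n", $url, $uniqueURLs{$url}->[0], $uniqueURLs{$url}->[1];
}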

Read up on References. Also, read up on modern Perl practices which will help improve your programming skills.
Instead of just using the keys of your hash of unique URLs, you could store information in the values of that hash. Let's start with just a count of each unique URL:
#!/usr/bin/env perl
use strict;
use warnings;
use autodie;
use feature qw(say);

use constant {
    WEB_FILE => "web1.txt",
};

open my $web_fh, "<", WEB_FILE;    # autodie will catch a failure for you

my %unique_urls;
while ( my $line = <$web_fh> ) {
    my $url = (split /\s+/, $line)[9];
    if ( not exists $unique_urls{$url} ) {   # not really needed
        $unique_urls{$url} = 0;
    }
    $unique_urls{$url} += 1;
}
close $web_fh;
Now, each key in your %unique_urls hash is a unique URL, and its value is the number of times that URL was seen; the number of keys gives you the count of unique URLs.
This, by the way, is your code written in a bit more modern style. The use strict; and use warnings; pragmas will catch about 90% of the standard programming errors. The use autodie; pragma turns failures you forget to check into exceptions; in this case, the program will automatically die if the file doesn't exist.
The three-parameter version of the open command is preferred, and so is using scalar variables for file handles. Using a scalar variable for the file handle makes it easier to pass to subroutines, and the file will automatically close if the file handle falls out of scope.
However, we want to store two items per URL: the count, and something that will help us find the average response time. This is where references come in.
In Perl, variables deal with single data items. A scalar variable (like $foo) deals with an individual data item. Arrays and hashes (like @foo and %foo) deal with lists of individual data items. References help you get around this limitation.
Let's look at an array of people:
$person[0] = "Bob";
$person[1] = "Ted";
$person[2] = "Carol";
$person[3] = "Alice";
However, people are more than just first names. They have last names, phone numbers, addresses, etc. Let's take a look at a hash for Bob:
my %bob_hash;
$bob_hash{FIRST_NAME} = "Bob";
$bob_hash{LAST_NAME} = "Jones";
$bob_hash{PHONE} = "555-1234";
We can take a reference to this hash by putting a backslash in front of it. A reference is merely the memory address where this hash is stored:
$bob_reference = \%bob_hash;
print "$bob_reference\n"; # Prints out something like HASH(0x7fbf79004140)
However, that memory address is a single item, and could be stored in our array of people!
$person[0] = $bob_reference;
If we want to get to the items in our reference, we dereference it by putting the right data type symbol in front. Since this is a hash, we will use %:
%bob_hash = %{ $person[0] };
Perl provides an easy way to dereference hashes with the -> syntax:
$person[0]->{FIRST_NAME} = "Bob";
$person[0]->{LAST_NAME} = "Jones";
$person[0]->{PHONE} = "555-1212";
We'll use the same technique in %unique_urls to store the number of times, and the total amount of response time. (Average will be total time / number of times).
#!/usr/bin/env perl
use strict;
use warnings;
use autodie;
use feature qw(say);

use constant {
    WEB_FILE => "web1.txt",
};

open my $web_fh, "<", WEB_FILE;    # autodie will catch a failure for you

my %unique_urls;
while ( my $line = <$web_fh> ) {
    my $url           = (split /\s+/, $line)[9];
    my $response_time = (split /\s+/, $line)[10];   # taking a guess at the field
    if ( not exists $unique_urls{$url} ) {   # not really needed
        $unique_urls{$url}->{INSTANCES}       = 0;
        $unique_urls{$url}->{TOTAL_RESP_TIME} = 0;
    }
    $unique_urls{$url}->{INSTANCES}       += 1;
    $unique_urls{$url}->{TOTAL_RESP_TIME} += $response_time;
}
close $web_fh;
Now we can print them out:
printf "%-20.20s %-6s %-8s\n", "URL", "INST", "AVE";
for my $url ( sort keys %unique_urls ) {
    my $total_resp_time = $unique_urls{$url}->{TOTAL_RESP_TIME};
    my $instances       = $unique_urls{$url}->{INSTANCES};
    my $average         = $total_resp_time / $instances;
    printf "%-20.20s %-6d %-8.5f\n", $url, $instances, $average;
}
I like using printf for tables.

Instead of setting the value to 1 here:
$uniqueURLs{$line[9]}=1;
Store a data structure indicating the response time and the number of times this URL has been seen (so you can properly calculate the average). You can use an array ref, or hashref if you want. If the key doesn't exist yet, that means it hasn't been seen yet, and you can set some initial values.
# Initialize 3-element arrayref: [count, total, average]
$uniqueURLs{$line[9]} = [0, 0, 0] if not exists $uniqueURLs{$line[9]};
$uniqueURLs{$line[9]}->[0]++;              # Count
$uniqueURLs{$line[9]}->[1] += $line[7];    # Total time
# Calculate average
$uniqueURLs{$line[9]}->[2] = $uniqueURLs{$line[9]}->[1] / $uniqueURLs{$line[9]}->[0];
One way you can get the count of unique URLs is by counting the keys:
print scalar(keys %uniqueURLs); # Print the number of unique URLs
In your loop, you can print out the URL and average time like this:
for $url (keys %uniqueURLs) {
    print $url, ' - ', $uniqueURLs{$url}->[2], " seconds\n";
}
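To match the exact table layout asked for in the question (URL, average response time, count), a printf-based sketch of that loop might look like this, using the [count, total, average] layout from above:
print "URL Av.RT Count\n";
for my $url (sort keys %uniqueURLs) {
    printf "%s %.1f %d\n", $url, $uniqueURLs{$url}->[2], $uniqueURLs{$url}->[0];
}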

Related

Why is my Tie::IxHash program taking a long time?

Basically, I have a script to create a hash for COGs with corresponding gene IDs:
# Open directory and get all the files in it
opendir(DIR, "/my/path/to/COG/");
my @infiles = grep(/OG-.*\.fasta/, readdir(DIR));
closedir(DIR);

# Create hash for COGs and their corresponding gene IDs
tie my %ids_for, 'Tie::IxHash';
if (! -e '/my/path/to/COG/COG_hash.ref') {
    for my $infile (@infiles) {
        ## $infile
        %ids_for = (%ids_for, read_COG_fasta($infile));
    }
    ## %ids_for
    store \%ids_for, '/my/path/to/COG/COG_hash.ref';
}
my $id_ref = retrieve('/my/path/to/COG/COG_hash.ref');
%ids_for = %$id_ref;
## %ids_for
## %ids_for
The problem isn't that it doesn't work (at least I think it works), but that it is extremely slow for some reason. When I tried a test run, it looked like it would take weeks to get an actual result. Somehow the hash creation is really, really slow, and I'm sure there is some way to optimize it so it runs much faster.
Ideally, the paths should be inputs to the script, so there would be no need to constantly change the script in case a path changes.
It would also be great if there were a way to see the progress of the hash creation, like having it show that it is 25% done, 50% done, 75% done and finally 100% done. Regarding this last point, I have seen things like Term::ProgressBar, but I am not sure whether it would be appropriate in this case.
Do you really need Tie::IxHash?
That aside, I suspect your culprit is this set of lines:
for my $infile (@infiles) {
    ## $infile
    %ids_for = (%ids_for, read_COG_fasta($infile));
}
To add a key to the hash, you are creating a list of the current key-value pairs, adding the new pair, then assigning it all back to the hash. That means every pass copies the entire hash, which gets slower and slower as the hash grows.
What happens if you take the results from read_COG_fasta and add the keys one at a time?
for my $infile (@infiles) {
    my %new_hash = read_COG_fasta($infile);
    foreach my $key ( keys %new_hash ) {
        $ids_for{$key} = $new_hash{$key};
    }
}
As for progress, I usually have something like this when I'm trying to figure out something:
use v5.26;

my $file_count = @files;
foreach my $n ( 0 .. $#files ) {
    say "[$n/$file_count] Processing $files[$n]";
    my %result = ...;
    printf "\tGot %d results\n", scalar %result; # counting a hash in scalar context is a v5.26 feature!
}
You could do the same sort of thing with the keys that you get back so you can track the size.
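For example, a rough sketch of the rewritten loop with a key-count progress line (reusing read_COG_fasta from the question, and the say feature enabled by use v5.26;):
use v5.26;

my $file_count = @infiles;
my $n = 0;
for my $infile (@infiles) {
    my %new_hash = read_COG_fasta($infile);
    $ids_for{$_} = $new_hash{$_} for keys %new_hash;
    $n++;
    say "[$n/$file_count] $infile: hash now holds ", scalar(keys %ids_for), " gene IDs";
}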

How to access a hash value by index stored in variable

I am working on some code to extract data stored in JSON format from a web page. The JSON data is extracted and decoded properly into a hash.
The structure of the JSON data is very complex, so I wrote an assisting function which traverses the hash and finds the 'index' (location) of a value of interest in the hash.
The find function returns the 'index' (location) of the data in the hash, which is stored in a variable.
I attempt to use this variable (the stored 'index') in other operations, but so far without any success.
Please see included simple demo snippet code for explanation of the problem.
Thank you,
Polar Bear
use strict;
use warnings;
use JSON qw(decode_json);

my $index;
my $slice;

my $data = decode_json( join '', <DATA> );

printf "TITLE: %-15s TIME: %5s TIMES: %5s FAVORITE: %s\n",
    $data->{playList}[1]{title},
    $data->{playList}[1]{time},
    $data->{playList}[1]{played},
    $data->{playList}[1]{favorite} ? "yes" : "no";

$index = '{playList}[1]';

$slice = $data{$index};    # does not pass 'use strict'; compilation error
$slice = $data->{$index};  # empty slice
$slice = $data->$index;    # Can't call method "{playList}[1]" on unblessed reference at

printf "TITLE: %-15s TIME: %5s TIMES: %5s FAVORITE: %s\n",
    $slice->{title},
    $slice->{time},
    $slice->{played},
    $slice->{favorite} ? "yes" : "no";
__DATA__
{
    "playList": [
        {
            "title": "Song name 1",
            "time": "3:25",
            "played": "240",
            "favorite": "1"
        },
        {
            "title": "Song name 2",
            "time": "4:12",
            "played": "30",
            "favorite": "0"
        },
        {
            "title": "Song name 3",
            "time": "2:56",
            "played": "85",
            "favorite": "0"
        }
    ]
}
I expect to access the data by utilizing the 'index' stored in a variable, but I could not find a way to achieve this. See the comments in the code for details.
NOTE: In real life the index looks like the following:
my $index = "{contents}{twoColumnBrowseResultsRenderer}{tabs}[0]{tabRenderer}{content}{sectionListRenderer}{contents}[0]{itemSectionRenderer}{contents}[0]{playlistVideoListRenderer}{contents}[0]{playlistVideoRenderer}{title}{accessibility}{accessibilityData}{label}";
SOLUTION:
I would like to extend my thanks to Håkon Hægland and lordadmira for the offered solutions.
use Data::Diver qw/Dive/;  # or Data::DPath, etc.

# Capture web page, extract JSON data, convert to hash, assign hash ref to $data
my $data = ...;

# Find index/location in the hash
# my $index = find($data, $value);
my $index = "{contents}{twoColumnBrowseResultsRenderer}{tabs}[0]{tabRenderer}{content}{sectionListRenderer}{contents}[0]{itemSectionRenderer}{contents}[0]{playlistVideoListRenderer}{contents}[0]{playlistVideoRenderer}{title}{accessibility}{accessibilityData}{label}";

$index =~ s/[{\[]//g;                 # throw away opening brackets
my @index = split /[}\]]/, $index;    # split into array on closing brackets
pop @index for 1..8;                  # 8 levels to back up to

my $slice = Dive( $data, @index );    # extract hash slice of interest

# extract playlist
my $playlist = $slice->{playlistVideoListRenderer}{contents};

# go through playlist and extract information of interest
foreach ( @$playlist ) {
    my $video = $_->{playlistVideoRenderer};
    printf "%s %8s %s\n",
        $video->{videoId},
        $video->{lengthText}{simpleText},
        $video->{title}{simpleText};
}
Both of them referred me to Data::Diver. With the help of this module I can back up a few levels from the depth of the hash and extract the slice of interest.
I also learned that, with this module, an index in the form of an array is easier to work with. For this reason I will alter my find function to return an index array.
You can use Data::Diver:
use Data::Diver qw( DiveVal );
use JSON qw( decode_json );

my $data = decode_json( join '', <DATA> );

my $slice = DiveVal( $data, qw( playList 1 ) );

printf "TITLE: %-15s TIME: %5s TIMES: %5s FAVORITE: %s\n",
    $slice->{title},
    $slice->{time},
    $slice->{played},
    $slice->{favorite} ? "yes" : "no";
Output:
TITLE: Song name 2 TIME: 4:12 TIMES: 30 FAVORITE: no
In the comments, you said you had a function that finds an element in your JSON data structure and returns a "path" to that element, and that your question is about finding a higher-level container of that element.
If this were XML, I'd use an XPath to do both the search and finding the right container. But worry not, someone has developed an XPath-like language for JSON, and someone has provided this functionality via the Perl module JSON::Path.
The problem is that your code is returning all of the keys and elements to find the given element but not the element value itself. The simplest answer is to return a reference to that located value.
... # bunch of lookdown code
return \ $this_level->{the_key_I_want};
That way the caller can have direct read/write access to the leaf value.
If you want to use the list of keys and elements directly to access a deep value, you will have to do a string eval. This is NOT recommended unless you trust the data 1000 percent, because some joker can name a hash key "; system qw{rm -rf /}".
$index = "{contents}{twoColumnBrowseResultsRenderer}{tabs}[0]{tabRenderer}{content}{sectionListRenderer}{contents}[0]{itemSectionRenderer}{contents}[0]{playlistVideoListRenderer}{contents}[0]{playlistVideoRenderer}{title}{accessibility}{accessibilityData}{label}";
$value = eval "\$HASH$index";
The right way to use a list like that is another lookdown function that takes the list of keys etc and accesses them one by one.
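For completeness, a rough sketch of such a lookdown helper (the name and structure are illustrative, not from the question): it walks the keys and indices one at a time, picking hash or array access based on what it finds at each level.
sub lookdown {
    my ($node, @path) = @_;
    for my $step (@path) {
        return undef unless defined $node;
        if    ( ref $node eq 'HASH' )  { $node = $node->{$step} }
        elsif ( ref $node eq 'ARRAY' ) { $node = $node->[$step] }
        else                           { return undef }
    }
    return $node;
}
# e.g. lookdown($data, 'playList', 1, 'title') returns "Song name 2" for the sample data above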
HTH
PS: I slightly misunderstood your original question. Just write in your code:
$slice = $data->{playList}[1];
2nd Edit to actually use $index:
use Data::Diver qw/Dive/;  # or Data::DPath, etc.

my $data = ...;
my $index = "{contents}{twoColumnBrowseResultsRenderer}{tabs}[0]{tabRenderer}{content}{sectionListRenderer}{contents}[0]{itemSectionRenderer}{contents}[0]{playlistVideoListRenderer}{contents}[0]{playlistVideoRenderer}{title}{accessibility}{accessibilityData}{label}";

my @index = split /[{}[\]]+/, $index;
shift @index;
pop @index for 1..2;   # however many levels to back up to

my $slice = Dive( $data, @index );

Perl - Data comparison taking huge time

open(INFILE1, "INPUT.txt");
my $modfile = 'Data.txt';
open MODIFIED, '>', $modfile or die "Could not open $modfile : $!";

for (;;) {
    my $line1 = <INFILE1>;
    last if not defined $line1;
    my $line2 = <INFILE1>;
    last if not defined $line2;

    my ($tablename1, $colname1, $sql1) = split(/\t/, $line1);
    my ($tablename2, $colname2, $sql2) = split(/\t/, $line2);

    if ($tablename1 eq $tablename2)
    {
        my $sth1 = $dbh->prepare($sql1);
        $sth1->execute;
        my $hash_ref1 = $sth1->fetchall_hashref('KEY');

        my $sth2 = $dbh->prepare($sql2);
        $sth2->execute;
        my $hash_ref2 = $sth2->fetchall_hashref('KEY');

        my @fieldname = split(/,/, $colname1);
        my $colcnt = 0;
        my $rowcnt = 0;

        foreach $key1 ( keys(%{$hash_ref1}) )
        {
            foreach (@fieldname)
            {
                $colname = $_;

                my $strvalue1 = '';
                @val1 = $hash_ref1->{$key1}->{$colname};
                if (defined @val1)
                {
                    my @filtered = grep /@val1/, @metadata;
                    my $strvalue1 = substr(@filtered[0], index(@filtered[0], '||') + 2);
                }

                my $strvalue2 = '';
                @val2 = $hash_ref2->{$key1}->{$colname};
                if (defined @val2)
                {
                    my @filtered = grep /@val2/, @metadata2;
                    my $strvalue2 = substr(@filtered[0], index(@filtered[0], '||') + 2);
                }

                if ($strvalue1 ne $strvalue2)
                {
                    $colcnt = $colcnt + 1;
                    print MODIFIED "$tablename1\t$colname\t$strvalue1\t$strvalue2\n";
                }
            }
        }

        if ($colcnt > 0)
        {
            print "modified count is $colcnt\n";
        }

        %$hash_ref1 = ();
        %$hash_ref2 = ();
    }
}
The program reads an input file in which every line contains three strings separated by tabs. The first is the table name, the second is all the column names with commas in between, and the third contains the SQL to be run. As this utility compares data, there are two rows for every table name, one for each DB. So the data needs to be picked from each respective DB and then compared column by column.
The SQL returns IDs in the result set, and when a value comes from the DB it needs to be translated to a string by reading from an array (that array contains 100K records, with key and value separated by ||).
I ran this for one set of tables which contains 18K records in each DB, with 8 columns picked from the DB in each SQL statement. So for every record out of 18K, and then for every one of the 8 fields in that record, this script is taking a lot of time.
My question is whether someone can look and see if it can be improved so that it takes less time.
File contents sample
INPUT.TXT
TABLENAME COL1,COL2 select COL1,COL2 from TABLENAME where ......
TABLENAMEB COL1,COL2 select COL1,COL2 from TABLENAMEB where ......
The metadata array contains something like this (there are two, i.e. one for each DB):
111||Code 1
222||Code 2
Please suggest
Your code does look a bit unusual, and could gain clarity from using subroutines vs. just using loops and conditionals. Here are a few other suggestions.
The excerpt
for (;;) {
    my $line1 = <INFILE1>;
    last if not defined $line1;
    my $line2 = <INFILE1>;
    last if not defined $line2;
    ...;
}
is overly complicated: not everyone knows the C-ish for(;;) idiom, there is a lot of code duplication, and aren't you actually saying "loop while I can read two lines"?
while (defined(my $line1 = <INFILE1>) and defined(my $line2 = <INFILE1>)) {
    ...;
}
Yes, that line is longer, but I think it's a bit more self-documenting.
Instead of doing
if ($tablename1 eq $tablename2) { the rest of the loop }
you could say
next if $tablename1 ne $tablename2;
the rest of the loop;
and save a level of indentation. Better indentation means better readability, which makes it easier to write good code. And better code might perform better.
What are you doing at foreach $key1 (keys ...) — something tells me you didn't use strict! (Just a hint: lexical variables with my can perform slightly better than global variables)
Also, doing $colname = $_ inside a for-loop is a dumb thing, for the same reason.
for my $key1 (keys ...) {
    ...;
    for my $colname (@fieldname) { ... }
}
my $strvalue1 = '';
@val1 = $hash_ref1->{$key1}->{$colname};
if (defined @val1)
{
    my @filtered = grep /@val1/, @metadata;
    my $strvalue1 = substr(@filtered[0], index(@filtered[0], '||') + 2);
}
I don't think this does what you think it does.
From $hash_ref1 you retrieve a single element, then assign that element to an array (a collection of multiple values).
Then you call defined on this array. An array cannot be undefined, and what you are doing is quite deprecated. Calling defined on an aggregate returns information about memory management, but does not indicate ① whether the array is empty or ② whether the first element in that array is defined.
Interpolating an array into a regex isn't likely to be useful: the elements of the array are joined with the value of $", usually a space, and the resulting string is treated as a regex. This will wreak havoc if there are metacharacters present.
When you only need the first value of a list, you can force list context, but assign to a single scalar like
my ($filtered) = produce_a_list;
This frees you from weird subscripts you don't need and that only slow you down.
Then you assign to a $strvalue1 variable you just declared. This shadows the outer $strvalue1. They are not the same variable. So after the if branch, you still have the empty string in $strvalue1.
I would write this code like:
my $val1 = $hash_ref1->{$key1}{$colname};
my $strvalue1 = defined $val1
    ? do {
        my ($filtered) = grep /\Q$val1/, @metadata;
        substr $filtered, 2 + index $filtered, '||'
    } : '';
But this would be even cheaper if you pre-split @metadata into pairs and tested for equality against the correct field. This would remove some of the bugs that are still lurking in that code.
$x = $x + 1 is commonly written $x++.
Emptying the hashrefs at the end of the iteration is unnecessary: the hashrefs are assigned new values at the next iteration of the loop. Also, it is unnecessary to assist Perl's garbage collection for such simple tasks.
About the metadata: 100K records is a lot, so either put it in a database itself, or at the very least a hash. Especially for so many records, using a hash is a lot faster than looping through all entries and using slow regexes … aargh!
Create the hash from the file, once at the beginning of the program:
my %metadata;
while (<METADATA>) {
    chomp;
    my ($key, $value) = split /\|\|/;
    $metadata{$key} = $value;   # assumes each key only has one value
}
Then simply look up the key inside the loop:
my $strvalue1 = defined $val1 ? $metadata{$val1} // '' : '';
That should be so much faster.
(Oh, and please consider using better names for variables. $strvalue1 doesn't tell me anything, except that it is a stringy value (d'oh). $val1 is even worse.)
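Putting those pieces together, the inner comparison can shrink to a pair of hash lookups per value; a rough sketch, keeping the question's variable names and assuming %metadata and %metadata2 have been built once as above:
for my $key1 ( keys %{$hash_ref1} ) {
    for my $colname (@fieldname) {
        my $val1 = $hash_ref1->{$key1}{$colname};
        my $val2 = $hash_ref2->{$key1}{$colname};
        my $strvalue1 = defined $val1 ? $metadata{$val1}  // '' : '';
        my $strvalue2 = defined $val2 ? $metadata2{$val2} // '' : '';
        if ($strvalue1 ne $strvalue2) {
            $colcnt++;
            print MODIFIED "$tablename1\t$colname\t$strvalue1\t$strvalue2\n";
        }
    }
}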
This is not really an answer but it won't really fit well in a comment either so, until you provide some more information, here are some observations.
Inside your inner for loop, there is:
@val1 = $hash_ref1->{$key1}->{$colname};
Did you mean @val1 = @{ $hash_ref1->{$key1}->{$colname} };?
Later, you check if (defined @val1). What did you really want to check? As perldoc -f defined points out:
Use of "defined" on aggregates (hashes and arrays) is
deprecated. It used to report whether memory for that aggregate
had ever been allocated. This behavior may disappear in future
versions of Perl. You should instead use a simple test for size:
In your case, if (defined @val1) will always be true.
Then, you have my @filtered = grep /@val1/, @metadata; Where did @metadata come from? What did you actually intend to check?
Then you have my $strvalue1 = substr(@filtered[0], index(@filtered[0], '||') + 2);
There is some interesting stuff going on in there.
You will need to verbalize what you are actually trying to do.
I strongly suspect there is a single SQL query you can run that will give you what you want but we first need to know what you want.

Regarding getting the size of array in a hash of array structure

I wrote the following Perl function
sub Outputing
{
    my $featureMatrix = shift;
    my $indexRow      = shift;

    my $fileName = "/projectworkspace/input.dat";
    open(DATA, "> $fileName");

    printf DATA "%d", $#$indexRow;
    print DATA "\n";

    my $numDataPoints = $#{$featureMatrix{$indexRow->[1]}};
    printf DATA "%d", $numDataPoints;
    print DATA "\n";

    close DATA;
}
I call Outputing as follows:
Outputing($matrix, $Rows);
$matrix is a hash of arrays, whose structure is like this:
my $matrix
    = { 200 => [ 0.023, 0.035, 0.026 ],
        110 => [ 0.012, 0.020, 0.033 ],
      };
Rows is an array storing the sorted keys of the matrix; it is obtained as follows:
my @Rows = sort keys %matrix;
Both matrix and Rows are used as parameters passed to Outputing.
The printed $numDataPoints is -1, which is not correct. I do not know what might be causing this problem. If we use the above example and assume $indexRow->[1] = 110, then $numDataPoints should be 2. I am not sure whether $#{$featureMatrix{$indexRow->[1]}} is the correct way to get the size of this array.
Assuming that you've included all the relevant code, this:
my #indexRow = sort keys %featureMatrix;
should be this:
my #indexRow = sort keys %$featureMatrix;
and this:
my $numDataPoints = $#{$featureMatrix{$indexRow->[1]}};
should be this:
my $numDataPoints = $#{$featureMatrix->{$indexRow->[1]}};
That is, the problem is that in some places, you're using a hash named %featureMatrix, and in others, you're using a hashref named $featureMatrix that refers to an anonymous hash.
You should be using use warnings and use strict to prevent such mistakes: those would have prevented you from using %featureMatrix when you've only declared $featureMatrix. (Actually, use warnings might not help in this case — it could detect if you used %featureMatrix exactly once, but in your case, you use it a few times — but use strict would almost certainly have helped.)
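As a side note on getting the size itself: $#{ ... } gives the last index of the array, while scalar @{ ... } gives the number of elements. A tiny sketch against the $matrix from the question (the $aref name is just illustrative):
my $aref = $featureMatrix->{ $indexRow->[1] };  # e.g. [ 0.012, 0.020, 0.033 ]
my $last_index = $#{ $aref };        # 2
my $count      = scalar @{ $aref };  # 3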

Why does my Perl for loop exit early?

I am trying to get a Perl loop to work that is working from an array that contains 6 elements. I want the loop to pull out two elements from the array, perform certain functions, and then loop back and pull out the next two elements from the array, until the array runs out of elements. The problem is that the loop only pulls out the first two elements and then stops. Some help here would be greatly appreciated.
open(infile, 'dnadata.txt');
my @data = <infile>;
chomp @data;
#print @data; # Debug

my $aminoacids = 'ARNDCQEGHILKMFPSTWYV';
my $aalen = length($aminoacids);

my $i = 0;
my $j = 0;
my @matrix = ();
for (my $i = 0; $i < 2; $i++) {
    for (my $j = 0; $j < $aalen; $j++) {
        $matrix[$i][$j] = 0;
    }
}
The guidelines for this program state that the program should ignore the presence of gaps, which means that DNA code matched up with a gap should be ignored. So the code that is pushed through needs to have alignments linked with gaps removed.
I need to divide the length of the array by two, since I am comparing two sequences in this part of the loop.
#$lemseqcomp = $lenarray / 2;
#print $lenseqcomp;

# I need to initialize these scalar values.
$junk1 = " ";
$junk2 = " ";
$seq1 = " ";
$seq2 = " ";
This is the loop that is causing issues. I believe that the first loop should move back to the array and pull out the next element each time it loops, but it doesn't.
for ($i = 0; $i < $lenarray; $i++) {
    # This code should remove the last value of the array once and
    # then a second time. The sequences should be the same length at this point.
    my $last1 = pop(@data1);
    my $last2 = pop(@data1);
    for ($i = 0; $i < length($last1); $i++) {
        my $letter1 = substr($last1, $i, 1);
        my $letter2 = substr($last2, $i, 1);
        if (($letter1 eq '-') || ($letter2 eq '-')) {
            # I need to put the sequences I am getting rid of somewhere. Here is as good a place as any.
            $junk1 = $letter1 . $junk1;
            $junk2 = $letter1 . $junk2;
        }
        else {
            $seq1 = $letter1 . $seq1;
            $seq2 = $letter2 . $seq2;
        }
    }
}
print "$seq1\n";
print "$seq2\n";
print "@data1\n";
I am actually trying to create a substitution matrix from scratch and return the data. The reason the code looks weird is that it isn't actually finished yet and I got stuck.
These are the test sequences, if anyone is curious:
YFRFR
YF-FR
FRFRFR
ARFRFR
YFYFR-F
YFRFRYF
First off, if you're going to work with sequence data, use BioPerl. Life will be so much easier. However...
Since you know you'll be comparing the lines from your input file as pairs, it makes sense to read them into a data structure that reflects that. As suggested elsewhere, an array of pairs like @data = ([$line1, $line2], [$line3, $line4]) ensures that the correct pairs of lines are always together.
What I'm not clear on is what you're trying to do:
a) are you generating a consensus sequence where the 2 sequences differ only by gaps, or
b) are your 2 sequences significantly different, and you're trying to exclude the non-aligning parts and then generate a consensus?
So, does the first pair below represent your data, or is it more like the second?
ATCG---AAActctgGGGGG--taGC
ATCGcccAAActctgGGGGGTTtaGC
ATCG---AAActctgGGGGG--taGCTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT
ATCGcccAAActctgGGGGGTTtaGCGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
The problem is that you're using $i as the counter variable for both your loops, so the inner loop modifies the counter out from under the outer loop. Try changing the inner loop's counter to $j, or using my to localize them properly.
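A minimal sketch of that fix, changing only the inner loop's counter so it no longer clobbers the outer one:
for ($i = 0; $i < $lenarray; $i++) {
    my $last1 = pop(@data1);
    my $last2 = pop(@data1);
    for (my $j = 0; $j < length($last1); $j++) {   # $j, not $i
        my $letter1 = substr($last1, $j, 1);
        my $letter2 = substr($last2, $j, 1);
        # ... rest of the gap/junk handling as before ...
    }
}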
Don't store your values as a flat array; store them as a two-dimensional array:
my @dataset = ([$val1, $val2], [$val3, $val4]);
or
my @dataset;
push(@dataset, [$val_n1, $val_n2]);
Then:
for my $value (@dataset) {
    ### Do stuff with $value->[0] and $value->[1]
}
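For example, the pairs could be built from the @data array read at the top of the question (a sketch that assumes an even number of input lines):
my @dataset;
while (@data >= 2) {
    push @dataset, [ shift @data, shift @data ];
}

for my $pair (@dataset) {
    # compare $pair->[0] and $pair->[1] position by position
}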
There are lots of strange things in your code: you are initializing a matrix and then not using it; reading a whole file into an array; scanning a string C-style but then not doing anything with the unmatched values; and finally, just printing the two last processed values (which, in your case, are the first two elements of your array, since you are using pop).
Here's a guess.
use strict;

my $aminoacids = 'ARNDCQEGHILKMFPSTWYV';

# Preparing a regular expression. This is kind of useful if processing large
# amounts of data. This will match anything that is not in the string above.
my $regex = qr([^$aminoacids]);

# Our work function.
sub do_something {
    my ($a, $b) = @_;
    $a =~ s/$regex//g;  # removing unwanted characters
    $b =~ s/$regex//g;  # ditto
    # Printing, saving, whatever...
    print "Something: $a - $b\n";
    return ($a, $b);
}

my $prev;
while (<>) {
    chomp;
    if ($prev) {
        do_something($prev, $_);
        $prev = undef;
    } else {
        $prev = $_;
    }
}
print STDERR "Warning: trailing data: $prev\n"
    if $prev;
Since you are a total Perl/programming newbie, I am going to show a rewrite of your first code block, then I'll offer you some general advice and links.
Let's look at your first block of sample code. There is a lot of stuff all strung together, and it's hard to follow. I, personally, am too dumb to remember more than a few things at a time, so I chop problems into small pieces that I can understand. This is (was) known as 'chunking'.
One easy way to chunk your program is to write subroutines. Take any particular action or idea that is likely to be repeated, or that would make the current section of code long and hard to understand, wrap it up into a nice neat package, and get it out of the way.
It also helps if you add space to your code to make it easier to read. Your mind is already struggling to grok the code soup, why make things harder than necessary? Grouping like things, using _ in names, blank lines and indentation all help. There are also conventions that can help, like making constant values (values that cannot or should not change) all capital letters.
use strict;        # Using strict will help catch errors.
use warnings;      # Ditto for warnings.
use diagnostics;   # Diagnostics will help you understand the error messages.

# Put constants at the top of your program.
# It makes them easy to find, and change as needed.
my $AMINO_ACIDS = 'ARNDCQEGHILKMFPSTWYV';
my $AMINO_COUNT = length($AMINO_ACIDS);
my $DATA_FILE   = 'dnadata.txt';

# Here I am using subroutines to encapsulate complexity:
my @data   = read_data_file( $DATA_FILE );
my @matrix = initialize_matrix( 2, $AMINO_COUNT, 0 );
# now we are done with the first block of code and can do more stuff
...
# This section down here looks kind of big, but it is mostly comments.
# Remove the didactic comments and suddenly the code is much more compact.
# Here are the actual subs that I abstracted out above.
# It helps to document your subs:
# - what they do
# - what arguments they take
# - what they return
# Read a data file and returns an array of dna strings read from the file.
#
# Arguments
# data_file => path to the data file to read
sub read_data_file {
    my $data_file = shift;

    # Here I am using a 3 argument open, and a lexical filehandle.
    open( my $infile, '<', $data_file )
        or die "Unable to open dnadata.txt - $!\n";

    # I've left slurping the whole file intact, even though it can be very inefficient.
    # Other times it is just what the doctor ordered.
    my @data = <$infile>;
    chomp @data;

    # I return the data array rather than a reference
    # to keep things simple since you are just learning.
    #
    # In my code, I'd pass a reference.
    return @data;
}
# Initialize a matrix (or 2-d array) with a specified value.
#
# Arguments
# $i => width of matrix
# $j => height of matrix
# $value => initial value
sub initialize_matrix {
    my $i     = shift;
    my $j     = shift;
    my $value = shift;

    # I use two powerful perlisms here: map and the range operator.
    #
    # map is a list construction function that is very very powerful.
    # It calls the code in brackets for each member of the list it operates against.
    # Think of it as a for loop that keeps the result of each iteration,
    # and then builds an array out of the results.
    #
    # The range operator `..` creates a list of intervening values. For example:
    # (1..5) is the same as (1, 2, 3, 4, 5)
    my @matrix = map {
        [ ($value) x $i ]
    } 1..$j;

    # So here we make a list of numbers from 1 to $j.
    # For each member of the list we
    # create an anonymous array containing a list of $i copies of $value.
    # Then we add the anonymous array to the matrix.

    return @matrix;
}
Now that the code rewrite is done, here are some links:
Here's a response I wrote titled "How to write a program". It offers some basic guidelines on how to approach writing software projects from specification. It is aimed at beginners. I hope you find it helpful. If nothing else, the links in it should be handy.
For a beginning programmer, beginning with Perl, there is no better book than Learning Perl.
I also recommend heading over to Perlmonks for Perl help and mentoring. It is an active Perl specific community site with very smart, friendly people who are happy to help you. Kind of like Stack Overflow, but more focused.
Good luck!
Instead of using a C-style for loop, you can read data from an array two elements at a time using splice inside a while loop:
while (my ($letter1, $letter2) = splice(@data, 0, 2))
{
    # stuff...
}
I've cleaned up some of your other code below:
use strict;
use warnings;

open(my $infile, '<', 'dnadata.txt');
my @data = <$infile>;
close $infile;
chomp @data;

my $aminoacids = 'ARNDCQEGHILKMFPSTWYV';
my $aalen = length($aminoacids);

# initialize a 2 x 20 array for holding the amino acid data
my $matrix;
foreach my $i (0 .. 1)
{
    foreach my $j (0 .. $aalen-1)
    {
        $matrix->[$i][$j] = 0;
    }
}

# Process all letters in the DNA data
while (my ($letter1, $letter2) = splice(@data, 0, 2))
{
    # do something... not sure what?
    # you appear to want to look up the letters in a reference table, perhaps $aminoacids?
}