How to filter numbers using a range in Tcl?

set a [ 100 200 300 1677 ]
foreach s {
if { 1000 <= $s <= 2000 } {
puts "$s"
}
}
I want to filter out the number 1677 using the range 1000 to 2000. Any solutions? Please guide.

Your code has multiple issues:
To store a list in a variable, either use list, or put the values in braces (there are other methods too, but those are the most common):
set a [list 100 200 300 1677]
set a {100 200 300 1677}
You need to pass the list to the foreach command
foreach s $a {...}
The condition in your if statement doesn't evaluate as you might expect: 1000 <= $s results in a boolean, 0 or 1, which is always less than 2000. So the condition is always true. Instead you should use something like:
if {1000 <= $s && $s <= 2000} {...}

Related

Perl simple way to do multiple decision such as grade scores (except if-elif)?

I'm trying to get Perl experts' feedback on this simple approach to grading scores. What I came up with is using the conditional operator to mimic a multiway branch, but I wonder if there is a simpler or more straightforward syntax that would enhance readability (and future maintainability).
I'm coming from Python/C and picking up this new language, so I'm trying to explore new syntax. Apologies if this sounds too naïve.
my $grade =
($score < 60) ? "F" :
($score < 68) ? "C" :
($score < 75) ? "B" :
($score < 90) ? "B+" :
($score < 95) ? "A" :
($score <= 100) ? "A+" :
"No Grade"; # catch-all default
You'd either need a hash or array of 101 entries, or one with 21 entries and special cases. The optimization of using a lookup table in this fashion would be premature without cause.
One would often use a hash or array as a dispatch table in similar situations, but it doesn't work here since you'd have to include all possible values.
Well, maybe not all possible values. For example, we could truncate the numbers to remove decimal points. And if all the values were divisible by 5, we could divide by 5 to obtain a manageable 21 values. Then a dispatch table could be used. (Though an array would work better than a hash.)
That 68 means we can't do that. Either 65, 66 and 67, or 68 and 69 would need to be special-cased. The 100 is also a special value since you apparently want to tolerate invalid inputs. And that suggests a bad design unless performance is crucial.
Let's say the optimization was warranted.
You could set up the following table:
my @grades_lkup;
$grades_lkup[ $_ ] = "F" for 0 .. 59;
$grades_lkup[ $_ ] = "C" for 60 .. 67;
$grades_lkup[ $_ ] = "B" for 68 .. 74;
$grades_lkup[ $_ ] = "B+" for 75 .. 89;
$grades_lkup[ $_ ] = "A" for 90 .. 94;
$grades_lkup[ $_ ] = "A+" for 95 .. 100;
Then all you'd need is
my $grade = $grades_lkup[ $score ] // "No Grade";
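A small usage sketch (the @scores values here are made up for illustration):
# Hypothetical scores; anything above 100 has no table entry and
# falls through to the "No Grade" default.
my @scores = (42, 67, 73, 88, 93, 100, 105);
for my $score (@scores) {
    my $grade = $grades_lkup[$score] // "No Grade";
    print "$score => $grade\n";
}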

How can I improve Perl compare performance

I have an array ref of about 50,000 users. I want to go through all those users and compare each one to all the others in order to build a weighted list of matches (if the name is an exact match it's worth x, a partial match is worth y etc).
After going through the list and doing all the checks, I then want to get the 10 highest-weighted matches. Here is sort of an example of what I'm doing, to help explain:
#!/usr/bin/perl
######################################################################
# Libraries
# ---------
use strict;
use warnings;
my $users = [];
$users->[0]{'Name'} = 'xxx';
$users->[0]{'Address'} = 'yyyy';
$users->[0]{'Phone'} = 'xxx';
$users->[1]{'Name'} = 'xxx';
$users->[1]{'Address'} = 'yyyy';
$users->[1]{'Phone'} = 'xxx';
$users->[2]{'Name'} = 'xxx';
$users->[3]{'Address'} = 'yyyy';
$users->[4]{'Phone'} = 'xxx';
foreach my $user_to_check (@$users) {
my $matched_users = [];
foreach my $user (@$users) {
$user_to_check->{'Weight'} = 0;
if (lc($user_to_check->{'Name'}) eq lc($user->{'Name'})) {
$user_to_check->{'Weight'} = ($user_to_check->{'Weight'} + 10);
} elsif ((length($user_to_check->{'Name'}) > 2) && (length($user->{'Name'}) > 2) && ($user_to_check->{'Name'} =~ /\Q$user->{'Name'}\E/i)) {
$user_to_check->{'Weight'} = ($user_to_check->{'Weight'} + 5);
}
if (lc($user_to_check->{'Address'}) eq lc($user->{'Address'})) {
.....
}
if ($user_to_check->{'Weight'} > 0) {
# We have matches, add to matched users
push(@$matched_users, $user);
}
}
# Now we want to get just the top 10 highest matching users
foreach my $m_user (sort { $b->{'Weight'} <=> $a->{'Weight'} } @$matched_users) {
last if $counter == 10;
.... # Do stuff with the 10 we want
}
}
The problem is, it's sooo slow. It takes more than a day to run (and I've tried it on multiple machines). I know that the sort is a killer, but I also tried inserting the results into a temporary MySQL table and then, instead of doing the Perl sort at the end, doing an ORDER BY select; the difference in time was very minor.
As I'm just going through an existing data structure and comparing it, I'm not sure what (if anything) I could do to speed it up. I'd appreciate any advice.
O(n²)
You compare each element in @$users against every element in there. That is 5E4² = 2.5E9 comparisons. For example, you don't need to compare an element against itself. You also don't need to compare an element against one you have already compared. I.e., in this comparison table
  X Y Z
X - + +
Y - - +
Z - - -
only three comparisons are needed to have compared each element against all others. The nine comparisons you are doing are 66% unnecessary (asymptotically: 50% unnecessary).
You can implement this by looping over indices:
for my $i (0 .. $#$users) {
my $userA = $users->[$i];
for my $j ($i+1 .. $#$users) {
my $userB = $users->[$j];
...;
}
}
But this means that upon match, you have to increment the weight of both matching users.
Do things once, not 100,000 times
You lowercase the name of each user 1E5 times. This is 1E5 - 1 times too much! Just do it once for each element, possibly at data input.
As a side note, you shouldn't perform lowercasing, you should do case folding. This is available since Perl v5.16 via the fc feature. Plain lowercasing will be buggy when you have non-English data.
use feature 'fc'; # needs Perl v5.16
$user->[NAME] = fc $name;
or
use Unicode::CaseFold;
$user->[NAME] = fc $name;
When hashes are not fast enough
Hashes are fast, in that a lookup takes constant time. But a single hash lookup is more expensive than an array access. As you only have a small, predefined set of fields, you can use the following trick to use hash-like arrays:
Declare some constants with the names of your fields that map to indices, e.g.
use constant {
WEIGHT => 0,
NAME => 1,
ADDRESS => 2,
...;
};
And then put your data into arrays:
$users->[0][NAME] = $name; ...;
You can access the fields like
$userA->[WEIGHT] += 10;
While this looks like a hash, this is actually a safe method to access only certain fields of an array with minimal overhead.
Regexes are slow
Well, they are quite fast, but there is a better way to determine if a string is a substring of another string: use index. I.e.
$user_to_check->{'Name'} =~ /\Q$user->{'Name'}\E/i
Can be written as
(-1 != index $user_to_check->{Name}, $user->{Name})
assuming both are already case folded.
Alternative implementation
Edit: this appears to be invalidated by your edit to your question. This assumed you were trying to find some global similarities, not to obtain a set of good matches for each user
Implementing these ideas would make your loops look somewhat like
for my $i (0 .. $#$users) {
my $userA = $users->[$i];
for my $j ($i+1 .. $#$users) {
my $userB = $users->[$j];
if ($userA->[NAME] eq $userB->[NAME]) {
$userA->[WEIGHT] += 10;
$userB->[WEIGHT] += 10;
} elsif ((length($userA->[NAME]) > 2) && (length($userB->[NAME]) > 2)) {
$userA->[WEIGHT] += 5 if -1 != index $userA->[NAME], $userB->[NAME];
$userB->[WEIGHT] += 5 if -1 != index $userB->[NAME], $userA->[NAME];
}
if ($userA->[ADDRESS] eq $userB->[ADDRESS]) {
..... # More checks
}
}
}
my @top_ten = (sort { $b->[WEIGHT] <=> $a->[WEIGHT] } @$users)[0 .. 9];
Divide and conquer
The task you show is highly parallelizable. If you have the memory, using threads is easy here:
use threads;
use Thread::Queue;

my $top10 = Thread::Queue->new;
my $users = ...; # each thread gets a copy of this data
my @threads = map threads->create(\&worker, $_), [0, int($#$users/2)], [int($#$users/2)+1, $#$users];
# process output from the threads
while (defined(my $ret = $top10->dequeue)) {
my ($user, @top10) = @$ret;
...;
}
$_->join for @threads;
sub worker {
my ($from, $to) = @{ shift @_ }; # each worker receives one [$from, $to] array ref
for my $i ($from .. $to) {
my $userA = $users->[$i];
for my $userB (@$users) {
...;
}
my @top10 = ...;
$top10->enqueue([ $userA, @top10 ]); # yield data to the main thread
}
}
You should probably return your output via a queue (as shown here), but do as much processing as possible inside the threads. With more advanced partitioning of the workload, you should spawn as many threads as you have processors available.
But if any kind of pipelining, filtering or caching can decrease the number of iterations needed in the nested loops, you should do such optimizations (think map-reduce-style programming).
Edit: Elegantly reducing complexity through hashes for deduplication
What we are essentially doing is calculating a matrix of how good our records match, e.g.
  X Y Z
X 9 4 5
Y 3 9 2
Z 5 2 9
If we assume that X is similar to Y implies Y is similar to X, then the matrix is symmetric, and we only need half of it:
  X Y Z
X \ 4 5
Y   \ 2
Z     \
Such a matrix is equivalent to a weighted, undirected graph:
  4 X 5      |   X – Y: 4
   / \       |   X – Z: 5
  Y---Z      |   Y – Z: 2
    2        |
Therefore, we can represent it elegantly as a hash of hashes:
my %graph;
$graph{X}{Y} = 4;
$graph{X}{Z} = 5;
$graph{Y}{Z} = 2;
However, such a hash structure implies a direction (from node X to node Y). To make querying the data easier, we might as well include the other direction too (due to the implementation of hashes, this won't lead to a large memory increase).
$graph{$x}{$y} = $graph{$y}{$x} += 2;
Because each node is now only connected to those nodes it is similar to, we don't have to sort through 50,000 records. For the 100th record, we can get the ten most similar nodes like
my $node = 100;
my @top10 = (sort { $graph{$node}{$b} <=> $graph{$node}{$a} } keys %{ $graph{$node} })[0 .. 9];
This would change the implementation to
my %graph;
# build the graph, using the array indices as node ID
for my $i (0 .. $#$users) {
my $userA = $users->[$i];
for my $j ($i+1 .. $#$users) {
my $userB = $users->[$j];
if ($userA->[NAME] eq $userB->[NAME]) {
$graph{$j}{$i} = $graph{$i}{$j} += 10;
} elsif ((length($userA->[NAME]) > 2) && (length($userB->[NAME]) > 2)) {
$graph{$j}{$i} = $graph{$i}{$j} += 5
if -1 != index $userA->[NAME], $userB->[NAME]
or -1 != index $userB->[NAME], $userA->[NAME];
}
if ($userA->[ADDRESS] eq $userB->[ADDRESS]) {
..... # More checks
}
}
}
# the graph is now fully populated.
# do something with each top10
while (my ($node_id, $similar) = each %graph) {
my @most_similar_ids = (sort { $similar->{$b} <=> $similar->{$a} } keys %$similar)[0 .. 9];
my ($user, @top10) = @$users[ $node_id, @most_similar_ids ];
...;
}
Building the graph this way should take half the time of naive iteration, and if the average number of edges for each node is low enough, going through similar nodes should be considerably faster.
Parallelizing this is a bit harder, as the graph each thread produces has to be combined before the data can be queried. For this, it would be best for each thread to perform the above code, with the exception that the iteration bounds are given as parameters and that only one edge should be produced. The pair of edges will be completed in the combination phase:
THREAD A [0 .. 2/3]  -- partial graph --\
                                         =====> COMBINE -> full graph -> QUERY
THREAD B [2/3 .. 1]  -- partial graph --/

# note: the bounds reflect the triangular distribution of the workload
However, this is only beneficial if there are only very few similar nodes for a given node, as combination is expensive.
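As a hedged illustration of that combination phase (the sub name and data layout are assumptions, not from the original answer), merging the partial hash-of-hashes graphs returned by the workers could look like this:
# Sketch: merge partial graphs, assuming each worker returns a hashref of
# the form $partial->{$i}{$j} = $weight, with each pair stored in one
# direction only.
sub combine_graphs {
    my @partials = @_;
    my %graph;
    for my $partial (@partials) {
        while (my ($i, $edges) = each %$partial) {
            while (my ($j, $weight) = each %$edges) {
                $graph{$i}{$j} += $weight;
                $graph{$j}{$i} = $graph{$i}{$j}; # complete the pair of edges
            }
        }
    }
    return \%graph;
}
The querying step can then run on the combined graph exactly as shown above.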

Randomly selecting letters by frequency of use

After feeding a few Shakespeare books to my Perl script, I have a hash with the 26 English letters as keys and the number of their occurrences in the texts as values:
%freq = (
a => 24645246,
b => 1409459,
....
z => 807451,
);
and of course the total number of all letters - let's say in the $total variable.
Is there a nice trick to generate a string holding 16 random letters (a letter can occur several times), weighted by their frequency of use? It would be used in a word game similar to Ruzzle.
I'm after something elegant, like picking a random line from a file, as suggested by a Perl Cookbook recipe:
rand($.) < 1 && ($line = $_) while <>;
The Perl Cookbook trick for picking a random line (which can also be found in perlfaq5) can be adapted for weighted sampling too:
my $chosen;
my $sum = 0;
foreach my $item (keys %freq) {
$sum += $freq{$item};
$chosen = $item if rand($sum) < $freq{$item};
}
Here, $sum corresponds to the line counter $. and $freq{$item} to the constant 1 in the Cookbook version.
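A small usage sketch (not part of the original answer): wrapping that weighted pick in a helper and calling it 16 times gives the requested string.
# Sketch: build a 16-letter string by repeating the weighted pick.
sub pick_weighted {
    my ($freq) = @_;
    my ($chosen, $sum) = (undef, 0);
    foreach my $item (keys %$freq) {
        $sum += $freq->{$item};
        $chosen = $item if rand($sum) < $freq->{$item};
    }
    return $chosen;
}

my $string = join '', map { pick_weighted(\%freq) } 1 .. 16;
print "$string\n";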
If you're going to be picking a lot of weighted random samples, you can speed this up a bit with some preparation (note that this destroys %freq, so make a copy first if you want to keep it):
# first, scale all frequencies so that the average frequency is 1:
my $avg = 0;
$avg += $_ for values %freq;
$avg /= keys %freq;
$_ /= $avg for values %freq;
# now, prepare the array we'll need for fast weighted sampling:
my @lookup;
while (keys %freq) {
my ($lo, $hi) = (sort {$freq{$a} <=> $freq{$b}} keys %freq)[0, -1];
push @lookup, [$lo, $hi, $freq{$lo} + @lookup];
$freq{$hi} -= (1 - $freq{$lo});
delete $freq{$lo};
}
Now, to draw a random weighted sample from the prepared distribution, you just do this:
my $r = rand @lookup;
my ($lo, $hi, $threshold) = @{$lookup[$r]};
my $chosen = ($r < $threshold ? $lo : $hi);
(This is basically the Square Histogram method described in Marsaglia, Tsang & Wang (2004), "Fast Generation of Discrete Random Variables", J. Stat. Soft. 11(3) and originally due to A.J. Walker (1974).)
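As a small usage sketch (not part of the original answer), building the 16-letter string from the prepared @lookup table:
# Sketch: draw 16 letters from the prepared square-histogram table.
my @picked;
for (1 .. 16) {
    my $r = rand @lookup;
    my ($lo, $hi, $threshold) = @{ $lookup[$r] };
    push @picked, ($r < $threshold ? $lo : $hi);
}
print join('', @picked), "\n";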
I have no clue about Perl syntax, so I'll just write pseudo-code. You can do something like this:
sum <- 0
foreach letter in {a .. z}
    sum <- sum + freq[letter]

pick r, a random integer in [0, sum)

letter <- 'a' - 1
do
    letter <- letter + 1
    r <- r - freq(letter)
while r >= 0

letter is the resulting value
The idea behind this code is to make a stack of boxes, one for each letter. The size of each box is the frequency of the letter. Then we choose a random location on this stack and see which letter's box we landed in.
Example:
freq(a) = 5
freq(b) = 3
freq(c) = 3
sum = 11

|    a    |  b  |  c  |
0         5     8     11
When we choose a 0 <= r < 11, we have the following probabilities
Pick a 'a' = 5 / 11
Pick a 'b' = 3 / 11
Pick a 'c' = 3 / 11
Which is exactly what we want.
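Since that answer is pseudo-code, here is a hedged Perl translation (a sketch; it assumes %freq holds integer counts, and the variable names are mine):
# Sketch: Perl version of the "stack of boxes" pseudo-code above.
my $sum = 0;
$sum += $_ for values %freq;

my @letters = sort keys %freq;   # any fixed order of the boxes will do
my $r = int rand $sum;           # random integer in [0, sum)

my $chosen;
for my $letter (@letters) {
    $r -= $freq{$letter};
    if ($r < 0) {                # we landed in this letter's box
        $chosen = $letter;
        last;
    }
}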
You can first build a table of the running sums of the frequencies. So if you have the following data:
%freq = (
a => 15,
b => 25,
c => 30,
d => 20
);
the running sums would be:
%running_sums = (
a => 0,
b => 15,
c => 40, # 15 + 25
d => 70, # 15 + 25 + 30
);
$max_sum = 90; # 15 + 25 + 30 + 20
To pick a single letter with the weighted frequency, you need to select a number in [0, 90), then you can do a linear search on the running_sum table for the range that includes the letter. For example, if your random number is 20, then the appropriate range is 15-40, which is for the letter 'b'. Using linear search gives a total running time of O(m*n), where m is the number of letters we need and n is the size of the alphabet (therefore m=16, n=26). This is essentially what @default locale does.
Instead of linear search, you can also do a binary search on the running_sum table to get the closest number rounded down. This gives a total running time of O(m*log(n)).
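For illustration, a hedged sketch of that binary search (not from the original answer; it assumes @letters is sorted and %running_sum is built as in the code below):
# Sketch: find the letter whose cumulative range contains $num, i.e. the
# last letter whose running sum is <= $num.
sub find_letter {
    my ($num, $letters, $running_sum) = @_;
    my ($lo, $hi) = (0, $#$letters);
    while ($lo < $hi) {
        my $mid = int(($lo + $hi + 1) / 2);   # round up to guarantee progress
        if ($running_sum->{ $letters->[$mid] } <= $num) {
            $lo = $mid;
        } else {
            $hi = $mid - 1;
        }
    }
    return $letters->[$lo];
}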
For picking m letters, though, there is a faster way than O(m*log(n)), particularly if n < m. First you generate the m random numbers in sorted order (which can be done without sorting, in O(m)), then you do a linear matching between the list of sorted random numbers and the list of running sums. This gives a total runtime of O(m+n). The code in its entirety follows.
use List::Util qw(shuffle);
my %freq = (...);
# list of letters in sorted order, i.e. "a", "b", "c", ..., "x", "y", "z"
# sorting is O(n*log(n)) but it can be avoided if you already have
# a list of letters you're interested in using
my @letters = sort keys %freq;
# compute the running_sums table in O(n)
my $sum = 0;
my %running_sum;
for (@letters) {
$running_sum{$_} = $sum;
$sum += $freq{$_};
}
# generate a string with letters in $freq frequency in O(m)
my $curmax = 1;
my $curletter = $#letters;
my $i = 16; # the number of letters we want to generate
my @result;
while ($i > 0) {
# $curmax generates a uniformly distributed decreasing random number in [0,1)
# see http://repository.cmu.edu/cgi/viewcontent.cgi?article=3483&context=compsci
$curmax = $curmax * (1-rand())**(1. / $i);
# scale the random number $curmax to [0,$sum)
my $num = int ($curmax * $sum);
# find the range that includes $num
while ($num < $running_sum{$letters[$curletter]}) {
$curletter--;
}
push(@result, $letters[$curletter]);
$i--;
}
# since @result is sorted, you may want to shuffle it first
# Fisher-Yates shuffle is O(m)
print "", join('', shuffle(#result));

Get regions from a file that are part of regions in other file (Without loops)

I have two files:
regions.txt: First column is the chromosome name, second and third are start and end position.
1 100 200
1 400 600
2 600 700
coverage.txt: First column is chromosome name, again second and third are start and end positions, and last column is the score.
1 100 101 5
1 101 102 7
1 103 105 8
2 600 601 10
2 601 602 15
This file is huge: it is about 15 GB with about 300 million lines.
I basically want to get the mean of all scores in coverage.txt that fall within each region in regions.txt.
In other words, start at the first line in regions.txt; if there is a line in coverage.txt which has the same chromosome, whose start is >= the region start, and whose end is <= the region end, then save its score to a new array. After finishing the search through all of coverage.txt, print the region's chromosome, start, end, and the mean of all scores that were found.
Expected output:
1 100 200 14.6 which is (5+7+8)/3
1 400 600 0 no match in coverage.txt
2 600 700 12.5 which is (10+15)/2
I built the following MATLAB script, which takes a very long time since I have to loop over coverage.txt many times. I don't know how to write a fast awk-like script.
My matlab script
fc = fopen('coverage.txt', 'r');
ft = fopen('regions.txt', 'r');
fw = fopen('out.txt', 'w');
while feof(ft) == 0
linet = fgetl(ft);
scant = textscan(linet, '%d%d%d');
tchr = scant{1};
tx = scant{2};
ty = scant{3};
coverages = [];
frewind(fc);
while feof(fc) == 0
linec = fgetl(fc);
scanc = textscan(linec, '%d%d%d%d');
cchr = scanc{1};
cx = scanc{2};
cy = scanc{3};
cov = scanc{4};
if (cchr == tchr) && (cx >= tx) && (cy <= ty)
coverages = cat(2, coverages, cov);
end
end
covmed = median(coverages);
fprintf(fw, '%d\t%d\t%d\t%d\n', tchr, tx, ty, covmed);
end
Any suggestions for an alternative using AWK, Perl, etc.? I would also be pleased if someone could teach me how to get rid of all the loops in my MATLAB script.
Thanks
Here is a Perl solution. I use hashes (aka dictionaries) to access the various ranges via the chromosome, thus reducing the number of loop iterations.
This is potentially efficient, as I don't do a full loop over regions.txt on every input line. Efficiency could perhaps be increased further when multithreading is used.
#!/usr/bin/perl
my ($rangefile) = @ARGV;
open my $rFH, '<', $rangefile or die "Can't open $rangefile";
# construct the ranges. The chromosome is used as range key.
my %ranges;
while (<$rFH>) {
chomp;
my @field = split /\s+/;
push @{$ranges{$field[0]}}, [@field[1,2], 0, 0];
}
close $rFH;
# iterate over all the input
while (my $line = <STDIN>) {
chomp $line;
my ($chrom, $lower, $upper, $value) = split /\s+/, $line;
# only loop over ranges with matching chromosome
foreach my $range (@{$ranges{$chrom}}) {
if ($$range[0] <= $lower and $upper <= $$range[1]) {
$$range[2]++;
$$range[3] += $value;
last; # break out of foreach early because ranges don't overlap
}
}
}
# create the report
foreach my $chrom (sort {$a <=> $b} keys %ranges) {
foreach my $range (@{$ranges{$chrom}}) {
my $value = $$range[2] ? $$range[3]/$$range[2] : 0;
printf "%d %d %d %.1f\n", $chrom, #$range[0,1], $value;
}
}
Example invocation:
$ perl script.pl regions.txt <coverage.txt >output.txt
Output on the example input:
1 100 200 6.7
1 400 600 0.0
2 600 700 12.5
(because (5+7+8)/3 = 6.66…)
Normally I would load the files into R and calculate it, but given that one of them is so huge, this would become a problem. Here are some thoughts that might help you solve it.
Consider splitting coverage.txt by chromosome. This would make the calculations less demanding.
Instead of looping over coverage.txt, first read regions.txt fully into memory (I assume it is much smaller). For each region, keep a score and a count.
Then process coverage.txt line by line. For each line, determine the chromosome and the region that this particular stretch belongs to. This will require some footwork, but if regions.txt is not too large, it might be more efficient. Add the score to the score of the region and increment its count by one.
An alternative, most efficient way requires both files to be sorted first by chromosome, then by position.
1. Take a line from regions.txt. Record the chromosome and positions. If there is a line left over from the previous iteration, go to 3; otherwise go to 2.
2. Take a line from coverage.txt.
3. Check whether it is within the current region.
   yes: add the score to the region, increment the count. Go to 2.
   no: divide the score by the count, write the current region to the output, go to 1.
This last method requires some fine tuning, but it will be the most efficient: it goes through each file only once and does not need to keep almost anything in memory.
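A hedged Perl sketch of this sorted-merge idea (an illustration, not the answerer's code; it assumes both files are already sorted by chromosome and then by start position, that chromosome names are numeric, and that coverage intervals never straddle a region boundary):
#!/usr/bin/perl
use strict;
use warnings;

# Single pass over both pre-sorted files; coverage lines that fall into no
# region are simply skipped.
open my $rfh, '<', 'regions.txt'  or die "Can't open regions.txt: $!";
open my $cfh, '<', 'coverage.txt' or die "Can't open coverage.txt: $!";

my $cov = <$cfh>;   # current coverage line, carried over between regions

while (my $region = <$rfh>) {
    my ($rchr, $rstart, $rend) = split ' ', $region;
    my ($sum, $count) = (0, 0);

    while (defined $cov) {
        my ($cchr, $cstart, $cend, $score) = split ' ', $cov;
        # stop once the coverage line lies past the current region
        last if $cchr > $rchr or ($cchr == $rchr and $cstart >= $rend);
        if ($cchr == $rchr and $cstart >= $rstart and $cend <= $rend) {
            $sum += $score;
            $count++;
        }
        $cov = <$cfh>;   # consume the line and move on
    }
    printf "%d %d %d %.1f\n", $rchr, $rstart, $rend, $count ? $sum / $count : 0;
}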
Here's one way using join and awk. Run like:
join regions.txt coverage.txt | awk -f script.awk - regions.txt
Contents of script.awk:
FNR==NR && $4>=$2 && $5<=$3 {
sum[$1 FS $2 FS $3]+=$6
cnt[$1 FS $2 FS $3]++
next
}
{
if ($1 FS $2 FS $3 in sum) {
printf "%s %.1f\n", $0, sum[$1 FS $2 FS $3]/cnt[$1 FS $2 FS $3]
}
else if (NF == 3) {
print $0 " 0"
}
}
Results:
1 100 200 6.7
1 400 600 0
2 600 700 12.5
Alternatively, here's the one-liner:
join regions.txt coverage.txt | awk 'FNR==NR && $4>=$2 && $5<=$3 { sum[$1 FS $2 FS $3]+=$6; cnt[$1 FS $2 FS $3]++; next } { if ($1 FS $2 FS $3 in sum) printf "%s %.1f\n", $0, sum[$1 FS $2 FS $3]/cnt[$1 FS $2 FS $3]; else if (NF == 3) print $0 " 0" }' - regions.txt
Here is a simple MATLAB way to bin your coverage into regions:
% extract the regions extents
bins = regions(:,2:3)';
bins = bins(:);
% extract the coverage - only the start is needed
covs = coverage(:,2);
% use histc to place the coverage start into proper regions
% this line counts how many coverages there are in a region
% and assigns them proper region ids.
[h, i]= histc(covs(:), bins(:));
% sum the scores into correct regions (second output of histc gives this)
total = accumarray(i, coverage(:,4), [numel(bins),1]);
% average the score in regions (first output of histc is useful)
avg = total./h;
% remove every second entry - our regions are defined by start/end
avg = avg(1:2:end);
Now this works assuming that the regions are non-overlapping, but I guess that is the case. Also, every entry in the coverage file has to fall into some region.
Also, it is trivial to 'block' this approach over coverages, if you want to avoid reading in the whole file. You only need the bins, your regions file, which presumably is small. You can process the coverages in blocks, incrementally add to total and compute the average in the end.

Identifying subarrays in matrices in Perl

I am relatively new to Perl, and I need to do a fairly sophisticated matrix computation but don't know what data structures to use.
Not sure if this is the appropriate forum for this, but say you have the following matrix in a multi-dimensional array in Perl:
0.2 0.7 0.2
0.6 0.8 0.7
0.6 0.1 0.8
0.1 0.2 0.9
0.6 0.3 0.0
0.6 0.9 0.2
I am trying to identify column segments in this matrix corresponding to runs of consecutive values that are higher than a given threshold, e.g. 0.5.
For example, if we threshold this matrix, we have:
0 1 0
1 1 1
1 0 1
0 0 1
1 0 0
1 1 0
If we now focus on the first column:
0
1
1
0
1
1
we can see that there are two continuous segments:
0 1 1 0 1 1
The first track (sequence of ones) starts with index 1 and ends with index 2
The second track (sequence of ones) starts with index 4 and ends with index 5
I would like to detect all such tracks in the original matrix, but I don't know how to proceed or what Perl data structures are most appropriate for this.
Ideally I would like something easy to index, e.g. assuming that we use the variable tracks, I can store the indices for the first column (index 0) as follows:
# First column, first track
$tracks{0}{0}{'start'} = 1;
$tracks{0}{0}{'end'} = 2;
# First column, second track
$tracks{0}{1}{'start'} = 4;
$tracks{0}{1}{'end'} = 5;
# ...
What are good data structures and/or libraries I can use to approach this problem in Perl?
I am just giving the algorithmic answer and you can code it in whatever language you like.
Split the problem into subproblems:
Thresholding: depending on how you store your input, this can be as simple as an iteration over an n-dimensional matrix, or a tree/list traversal if your matrices are sparse. This is the easy bit.
The algorithm for finding continuous segments is called 'run-length encoding' (RLE). It takes a sequence with possible duplicates like
1 0 0 1 1 1 1 0 1
and returns another sequence which tells you which element comes next and how many of them there are. So, for example, the above sequence would become 1 1 0 2 1 4 0 1 1 1. The encoding is unique, so if you ever want to invert it you are OK.
The first 1 is there because the original input starts with 1, the first 0 is there because after the 1 there is a 0, and the fourth number is 2 because there are two consecutive zeros. There are zillions of RLE encoders out there if you don't want to write your own.
Its main purpose is compression, and it works reasonably well for that purpose if you have long runs of identical items. Depending on your needs you may have to run it horizontally, vertically and even diagonally.
You can find the precise algorithm in all the classical books on data structures and algorithms. I'd suggest Cormen-Leiserson-Rivest-Stein, 'Introduction to Algorithms', first, then Knuth.
Once you get the gist, you can safely 'fuse' the thresholding with RLE to avoid iterating twice over your inputs.
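A hedged Perl sketch of that fusion for a single column (the sub and its interface are illustrative, not part of this answer):
# Threshold a column and run-length encode it in a single pass.
# Returns a list of [value, run_length] pairs for the 0/1 sequence.
sub threshold_rle {
    my ($threshold, @column) = @_;
    my @runs;
    for my $x (@column) {
        my $bit = $x > $threshold ? 1 : 0;
        if (@runs and $runs[-1][0] == $bit) {
            $runs[-1][1]++;            # extend the current run
        } else {
            push @runs, [$bit, 1];     # start a new run
        }
    }
    return @runs;
}

# e.g. the first column of the example matrix:
my @runs = threshold_rle(0.5, 0.2, 0.6, 0.6, 0.1, 0.6, 0.6);
# @runs is ([0,1], [1,2], [0,1], [1,2])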
This seems to do what you want. I have represented the data in the form you suggested, as the ideal form depends entirely on what you want to do with the result.
It works by calculating the list of 0s and 1s for each column, adding barrier values of zero at each end (one in $prev and one in the for list), and then scanning the list for changes between 1 and 0.
Every time a change is found, a track start or end is recorded. If $start is undefined then the current index is recorded as the start of a segment; otherwise the current segment ended at one less than the current index. A hash is built with start and end keys and pushed onto the @segments array.
The final set of nested loops dumps the calculated data in the form you show in the question.
use strict;
use warnings;
use constant THRESHOLD => 0.5;
my @data = (
[ qw/ 0.2 0.7 0.2 / ],
[ qw/ 0.6 0.8 0.7 / ],
[ qw/ 0.6 0.1 0.8 / ],
[ qw/ 0.1 0.2 0.9 / ],
[ qw/ 0.6 0.3 0.0 / ],
[ qw/ 0.6 0.9 0.2 / ],
);
my @tracks;
for my $colno (0 .. $#{$data[0]}) {
my @segments;
my $start;
my $prev = 0;
my $i = 0;
for my $val ( (map { $_->[$colno] > THRESHOLD ? 1 : 0 } @data), 0 ) {
next if $val == $prev;
if (defined $start) {
push #segments, { start => $start, end=> $i-1 };
undef $start;
}
else {
$start = $i;
}
}
continue {
$prev = $val;
$i++;
}
push @tracks, \@segments;
}
# Dump the derived @tracks data
#
for my $colno (0 .. $#tracks) {
my $col = $tracks[$colno];
for my $track (0 .. $#$col) {
my $data = $col->[$track];
printf "\$tracks[%d][%d]{start} = %d\n", $colno, $track, $data->{start};
printf "\$tracks[%d][%d]{end} = %d\n", $colno, $track, $data->{end};
}
print "\n";
}
output
$tracks[0][0]{start} = 1
$tracks[0][0]{end} = 2
$tracks[0][1]{start} = 4
$tracks[0][1]{end} = 5
$tracks[1][0]{start} = 0
$tracks[1][0]{end} = 1
$tracks[1][1]{start} = 5
$tracks[1][1]{end} = 5
$tracks[2][0]{start} = 1
$tracks[2][0]{end} = 3
Lamenting Perl's poor support for multidimensional arrays, I soon found myself throwing together a small solution of my own. The algorithm is rather similar to Borodin's idea, but with a slightly different structure:
sub tracks {
my ($data) = @_; # this sub takes a callback as argument
my @tracks; # holds all found ranges
my @state; # is true if we are inside a range/track. Also holds the starting index of the current range.
my $rowNo = 0; # current row number
while (my @row = $data->()) { # fetch new data
for my $i (0..$#row) {
if (not $state[$i] and $row[$i]) {
# a new track is found
$state[$i] = $rowNo+1; # we have to pass $rowNo+1 to ensure a true value
} elsif ($state[$i] and not $row[$i]) {
push @{$tracks[$i]}, [$state[$i]-1, $rowNo-1]; # push a found track onto the @tracks array. We have to adjust the values to revert the previous adjustment.
$state[$i] = 0; # reset state to false
}
}
} continue {$rowNo++}
# flush remaining tracks
for my $i (0..$#state) {
push @{$tracks[$i]}, [$state[$i]-1, $rowNo-1] if $state[$i];
}
return #tracks;
}
@state doubles as a flag indicating whether we are inside a track and as a record of the track's starting index. In the @state and @tracks arrays, the index indicates the current column.
As a data source I used an external file, but this can easily be plugged into anything, e.g. a preexisting array. The only contract is that it must return an arbitrary sequence of true and false values, and the empty list when no further data is available.
my $limit = 0.5;
my $data_source = sub {
defined (my $line = <>) or return (); # return empty list when data is empty
chomp $line;
return map {$_ >= $limit ? $_ : 0} split /\s+/, $line; # split the line and map the data to true and false values
};
With the data you gave copy-pasted as input, I get the following printout as output (printing code omitted):
[ [1 2], [4 5] ]
[ [0 1], [5 5] ]
[ [1 3] ]
With your structure, this would be
$tracks[0][0][0] = 1;
$tracks[0][0][1] = 2;
$tracks[0][1][0] = 4;
...;
If this is modified to a hash, further data like the original value could be incorporated.
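For instance (a sketch of that suggestion, not code from the answer), the push inside the sub could store a hash instead of an array reference, matching the structure from the question and leaving room for extra data:
# Sketch: store each track as a hash so more fields can be added later.
push @{$tracks[$i]}, {
    start => $state[$i] - 1,
    end   => $rowNo - 1,
    # values => [ ... ],   # e.g. the original column values could go here
};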