How can I find the top 100 most used strings (words) in a .txt file using Perl? So far I have the following:
use 5.012;
use warnings;

open(my $file, "<", "file.txt");
my %word_count;
while (my $line = <$file>) {
    foreach my $word (split ' ', $line) {
        $word_count{$word}++;
    }
}

for my $word (sort keys %word_count) {
    print "'$word': $word_count{$word}\n";
}
But this only counts each word and lists the results in alphabetical order. I want the top 100 most frequently used words in the file, sorted by number of occurrences. Any ideas?
From reading the fine perlfaq4(1) manpage, one learns how to sort hashes by value. So try this. It’s rather more idiomatically “perlian” than your approach.
#!/usr/bin/env perl
use v5.12;
use strict;
use warnings;
use warnings FATAL => "utf8";
use open qw(:utf8 :std);

my %seen;

while (<>) {
    $seen{$_}++ for split /\W+/;    # or just split;
}

my $count = 0;
for (sort {
         $seen{$b} <=> $seen{$a}
             ||
         lc($a) cmp lc($b)    # XXX: should be v5.16's fc() instead
             ||
         $a cmp $b
     } keys %seen)
{
    next unless /\w/;
    printf "%-20s %5d\n", $_, $seen{$_};
    last if ++$count >= 100;    # >= so that exactly 100 words are printed
}
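Because the script reads from the magic <> filehandle, it takes filenames on the command line or reads STDIN. Assuming you saved it as top100.pl (the name is hypothetical):

perl top100.pl file.txt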
When run against itself, the first 10 lines of output are:
seen                     6
use                      5
_                        3
a                        3
b                        3
cmp                      2
count                    2
for                      2
lc                       2
my                       2
sub Solution {
    my $n = $_[0];
    my $m = lc $_[1];
    my @chars = split("", $m);
    my $result = 0;
    my @vowels = ("a", "e", "i", "o", "u");
    # OUTPUT [uncomment & modify if required]
    for (my $i = 0; $i < $n; $i = $i + 1) {
        for (my $j = 0; $j < 5; $j = $j + 1) {
            if ($chars[$i] == $vowels[$j]) {
                $result = $result + 1;
                last;
            }
        }
    }
    print $result;
}

# INPUT [uncomment & modify if required]
my $n = <STDIN>; chomp($n);
my $m = <STDIN>; chomp($m);
Solution($n, $m);
So I wrote this solution to find the number of vowels in a string. $n is the length of the string and $m is the string.
However, for the input 3 nam I always get the output as 3.
Can someone help me debug it?
== compares numbers. eq compares strings. So instead of $chars[$i]==$vowels[$j] you should write $chars[$i] eq $vowels[$j]. If you had used use warnings;, which is recommended, you'd have gotten a warning about that.
And by the way, there's no need to work with extra variables for the length. You can get the length of a string with length(), and the number of elements in an array with scalar(@array). Also, the last index of an array @a can be accessed with $#a. Or you can use foreach to iterate over all elements of an array.
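Putting those fixes together, a minimal corrected sketch of your sub might look like this (only the eq change is strictly required; the foreach loops are just the idiom mentioned above):

sub Solution {
    my $m = lc $_[1];                    # the string; the length argument is no longer needed
    my @chars  = split "", $m;
    my @vowels = ("a", "e", "i", "o", "u");
    my $result = 0;
    foreach my $char (@chars) {
        foreach my $vowel (@vowels) {
            if ($char eq $vowel) {       # eq, not ==, for string comparison
                $result = $result + 1;
                last;
            }
        }
    }
    print $result;
}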
A better solution is using a tr operator which, in scalar context, returns the number of replacements:
perl -le 'for ( @ARGV ) { $_ = lc $_; $n = tr/aeiouy//; print "$_: $n"; }' Use Perl to count how many vowels are in each string
use: 2
perl: 1
to: 1
count: 2
how: 1
many: 2
vowels: 2
are: 2
in: 1
each: 2
string: 1
I also included y, which is sometimes a vowel; see https://simple.wikipedia.org/wiki/Vowel
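For reference, the same tr/// counting inside a script rather than a one-liner (a minimal sketch; "nam" is the sample input from the question above):

use strict;
use warnings;

my $string = lc "nam";                   # sample input from the question
my $count  = ($string =~ tr/aeiouy//);   # tr in scalar context returns the match count
print "$count\n";                        # prints 1 (just the "a")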
Let me suggest a better approach to counting letters in a text:
#!/usr/bin/env perl
#
# vim: ai:ts=4:sw=4
#
use strict;
use warnings;
use feature 'say';
use Data::Dumper;

my $debug = 0;    # debug flag

my %count;
my @vowels = qw/a e i o u/;

map {
    chomp;
    my @chars = split '';
    map { $count{$_}++ } @chars;
} <DATA>;

say Dumper(\%count) if $debug;

foreach my $vowel (@vowels) {
    say "$vowel: $count{$vowel}";
}
__DATA__
So I wrote this solution to find the number of vowels in a string. $n is the length of the string and $m is the string. However, for the input 3 nam I always get the input as 3.
Can someone help me debug it?
Output
a: 7
e: 18
i: 12
o: 12
u: 5
Your code, in slightly modified form:
#!/usr/bin/env perl
#
# vim: ai:ts=4:sw=4
#
use strict;
use warnings;
use feature 'say';

my $input = get_input('Please enter sentence:');
say "Counted vowels: " . solution($input);

sub get_input {
    my $prompt = shift;
    my $input;

    say $prompt;
    $input = <STDIN>;
    chomp($input);

    return $input;
}

sub solution {
    my $str    = lc shift;
    my @chars  = split('', $str);
    my $count  = 0;
    my @vowels = qw/a e i o u/;

    map {
        my $c = $_;
        map { $count++ if $c eq $_ } @vowels;
    } @chars;

    return $count;
}
I am trying to write a small program that takes file(s) from the command line and prints out the number of occurrences of a word across all files, and in which file it occurs. The first part, finding the number of occurrences of a word, seems to work well.
However, I am struggling with the second part, namely finding in which file (i.e. the file name) the word occurs. I am thinking of using an array that stores the words, but I don't know if this is the best way, or what the best way is.
This is the code I have so far and seems to work well for the part that counts the number of times a word occurs in given file(s):
use strict;
use warnings;

my %count;
while (<>) {
    my $casefoldstr = lc $_;
    foreach my $str ($casefoldstr =~ /\w+/g) {
        $count{$str}++;
    }
}

foreach my $str (sort keys %count) {
    printf "$str $count{$str}:\n";
}
The filename is accessible through $ARGV.
You can use this to build a nested hash with the filename and word as keys:
use strict;
use warnings;
use List::Util 'sum';

my %count;
while (<>) {
    $count{$_}{$ARGV}++ for map +lc, /\w+/g;
}

foreach my $word ( keys %count ) {
    my @files = keys %{ $count{$word} };    # All files containing $word
    print "Total word count for '$word': ", sum( @{ $count{$word} }{@files} ), "\n";
    for my $file ( @files ) {
        print "$count{$word}{$file} counts of '$word' detected in '$file'\n";
    }
}
Using an array seems reasonable, if you don't visit any file more than once - then you can always just check the last value stored in the array. Otherwise, use a hash.
#!/usr/bin/perl
use warnings;
use strict;

my %count;
my %in_file;
while (<>) {
    my $casefoldstr = lc;
    for my $str ($casefoldstr =~ /\w+/g) {
        ++$count{$str};
        push @{ $in_file{$str} }, $ARGV
            unless ref $in_file{$str} && $in_file{$str}[-1] eq $ARGV;
    }
}

foreach my $str (sort keys %count) {
    printf "$str $count{$str}: @{ $in_file{$str} }\n";
}
I would like to sort a file numerically using a Perl script.
My file looks like this:
1:file1:filed2
3:filed1:field2
10:filed1:field2
4:field1:field2
7:field1:field2
I would like to display it as:
1:file1:filed2
3:filed1:field2
4:field1:field2
7:field1:field2
10:filed1:field2
The way sort works in Perl is that it works through your list, setting pairs of elements to $a and $b, then comparing them. By default, it uses cmp, which is an alphanumeric sort.
You've also got <=>, which is a numeric sort, and the kind you're looking for. (An alpha sort puts 10 ahead of 2.)
So all we need to do is extract the numeric value of your key. There are a number of ways you could do this, the obvious one being a subroutine that temporarily copies the variables:
#!/usr/bin/env perl
use strict;
use warnings;

sub compare_first_num {
    my ( $a1 ) = split( /:/, $a );
    my ( $b1 ) = split( /:/, $b );
    return $a1 <=> $b1;
}

print sort compare_first_num <>;
This uses <>, the magic filehandle, to read from STDIN or from files specified on the command line.
Or alternatively, in newer perls (5.14+, which introduced the /r modifier):
print sort { $a =~ s/:.*//r <=> $b =~ s/:.*//r } <>;
We use the 'substitute-and-return' operation to compare just the substrings we're interested in. (Numerically).
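One caveat: both comparators above re-extract the key every time two lines are compared. For large inputs, a Schwartzian transform (not part of the original answer, but a standard Perl idiom) extracts each key exactly once:

print map  { $_->[1] }                       # 3. keep just the original line
      sort { $a->[0] <=> $b->[0] }           # 2. sort numerically on the key
      map  { [ (split /:/)[0], $_ ] } <>;    # 1. pair each line with its leading number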
Split on : and store in a hash of arrays. Then you can sort and print out the hash keys:
my %data;
while (<DATA>) {
    my @field = split(/:/);
    $data{$field[0]} = [ @field[1..2] ];
}

print join(':', $_, @{ $data{$_} }) for sort { $a <=> $b } keys %data;
print "\n";
1:file1:filed2
3:filed1:field2
4:field1:field2
7:field1:field2
10:filed1:field2
For a simple and fast solution, use the Sort::Key::Natural (fast natural sorting) module:
use warnings;
use strict;
use Sort::Key::Natural qw( natsort );

open my $fh, "<", "file.txt" or die $!;
my @files = natsort <$fh>;
close $fh;

print @files;
Output:
1:file1:filed2
3:filed1:field2
4:field1:field2
7:field1:field2
10:filed1:field2
I have been trying to find values that match between two columns (column a and column b) of a large file and print the common values, plus the corresponding column d. I have been doing this by iterating through hashes; however, because the file is so large, there is not enough memory to produce the output file. Is there any other way to do the same thing using fewer memory resources?
Any help is much appreciated.
The script I have written thus far is below:
#!/usr/bin/perl
use warnings;
use strict;

open (FILE1, "<input.txt")   || die "$!\n Couldn't open input.txt\n";
open (Output, ">output.txt") || die "Can't open output.txt";

my $hash1 = {};
my $hash2 = {};
while (<FILE1>) {
    chomp (my $line = $_);
    my ($a, $b, $c, $d) = split (/\t/, $line);
    if ($a) {
        $hash1->{$a}{info1} = "$d";    # original_ID -> YOB
    }
    if ($b) {
        $hash2->{$b}{info2} = "$a";    # original_ID -> sire
    }
    foreach my $key (keys %$hash2) {
        if (exists $hash1->{$key}) {
            my $info1 = $hash1->{$key}{info1};
            print "$key\t$info1\n";
        }
    }
}
close FILE1;
close Output;
print "Done\n";
To clarify, the input file is a large pedigree file. An example is:
1 2 3 1977
2 4 5 1944
3 4 5 1950
4 5 6 1930
5 7 6 1928
An example of the output file is:
2 1944
4 1950
5 1928
Does the below work for you? DBM::Deep keeps the hash on disk rather than in memory, which should keep the memory footprint small.
#!/usr/local/bin/perl
use strict;
use warnings;
use DBM::Deep;
use List::MoreUtils qw(uniq);

my @seen;
my $db = DBM::Deep->new(
    file      => "foo.db",
    autoflush => 1
);

while (<>) {
    chomp;
    my @fields = split /\s+/;
    $$db{$fields[0]} = $fields[3];
    push @seen, $fields[1];
}

for (uniq @seen) {
    print $_ . " " . $$db{$_} . "\n" if exists $$db{$_};
}
What is the best/right way, in Perl, of keeping information from the previous and/or following line? For example, with this code:
while (<IN>) {
    print;
}
how can it be changed so that a line is not printed if the previous or the next line in the file matches foo, but is printed otherwise?
Could you give code examples. Thanks.
Updated: Simplified exposition.
Basically, you need to keep track of two extra lines if you want to print the current line based on information contained in the two lines around it. Here is a simple script with everything hard-coded:
#!/usr/bin/env perl
use strict;
use warnings;

my $prev = undef;
my $candidate = scalar <DATA>;
while (defined $candidate) {
    my $next = <DATA>;
    unless (
        (defined($prev) && ($prev =~ /foo/)) ||
        (defined($next) && ($next =~ /foo/))
    ) {
        print $candidate;
    }
    ($prev, $candidate) = ($candidate, $next);
}
__DATA__
1
2
foo
3
4
5
foo
6
foo
7
8
9
foo
We can generalize this to a function that takes a filehandle and a test (as a subroutine reference):
#!/usr/bin/env perl
use strict;
use warnings;

print_mid_if(\*DATA, sub { return !(
    (defined($_[0]) && ($_[0] =~ /foo/)) ||
    (defined($_[1]) && ($_[1] =~ /foo/))
)});

sub print_mid_if {
    my $fh   = shift;
    my $test = shift;

    my $prev      = undef;
    my $candidate = scalar <$fh>;
    while (defined $candidate) {
        my $next = <$fh>;
        print $candidate if $test->($prev, $next);
        ($prev, $candidate) = ($candidate, $next);
    }
}
__DATA__
1
2
foo
3
4
5
foo
6
foo
7
8
9
foo
You could read your lines into an array, and then if you get something that signals you in some way, pop the last element off the array. Once you've finished reading everything in, you can print it:
use strict;
use warnings;
use feature qw(say);
use autodie;    # Won't catch attempt to read from an empty file

use constant FILE_NAME => "some_name.txt";

open my $fh, "<", FILE_NAME
    or die qq(Cannot open ) . FILE_NAME . qq( for reading: $!\n);

my @output;
LINE:
while ( my $line = <$fh> ) {
    chomp $line;
    if ( $line eq "foo" ) {
        pop @output;    # The line before foo
        <$fh>;          # Discard the line after foo
        next LINE;      # Skip line foo. Don't push it into the array
    }
    push @output, $line;
}
From there, you can print out the array with the values you don't want printed already taken care of.
for my $line ( @output ) {
    say $line;
}
The only problem is that this takes memory. If your file is extremely large, you could run out of memory.
One way to get around this is to use a buffer. You store your values in an array, and shift out the oldest value when you push another one in. If the value read in is foo, you can empty the array. In this case, the buffer will contain at most one line:
#!/usr/bin/env perl
use strict;
use warnings;
use autodie;
use feature qw(say);

my @buffer;
LINE:
while ( my $line = <DATA> ) {
    chomp $line;
    if ( $line eq "foo" ) {
        @buffer = ();    # Empty buffer of previous line
        <DATA>;          # Get rid of the next line
        next LINE;       # Foo doesn't get pushed into the buffer
    }
    push @buffer, $line;
    if ( @buffer > 1 ) {      # Buffer is "full"
        say shift @buffer;    # Print out previous line
    }
}

#
# Empty out buffer
#
for my $line ( @buffer ) {
    say $line;
}
__DATA__
2
3
4
5
6
7
8
9
10
11
12
13
1
2
foo
3
4
5
foo
6
7
8
9
foo
Note that it is very possible that I might attempt to read past the end of the file when I skip the line after foo. This is okay: the read will simply return undef, and I can ignore that. I'll catch the end of input when I go back to the top of my loop.
I didn't see that you had any specific criteria for "best", so I'll give you a solution that may be "best" along a different axis than those presented so far. You could use Tie::File and treat the entire file as an array, then iterate the array using an index. The previous and next lines are just $index-1 and $index+1 respectively. You just have to worry a little about your indices going beyond the bounds of your array. Here's an example:
#!/usr/bin/env perl
use strict;
use warnings;
use 5.010;    # just for "say"
use Tie::File;

tie my @array, 'Tie::File', "filename" or die;

for my $i (0 .. $#array) {
    if ($i > 0 && $i < $#array) {    # ensure $i-1 and $i+1 make sense
        next if $array[$i-1] =~ /BEFORE/ &&
                $array[$i+1] =~ /AFTER/;
    }
    say $array[$i];
}
If it's more convenient, you can specify a filehandle instead of a filename and Tie::File also has some parameters to control memory usage or change what it means to be a "line" if you want that. Check the docs for more info.
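For instance, a hedged sketch (the memory figure is arbitrary) passing an open filehandle and capping the record cache:

use strict;
use warnings;
use Tie::File;

open my $fh, "<", "filename" or die "Cannot open filename: $!";
tie my @array, 'Tie::File', $fh,
    memory => 10_000_000,    # cache at most ~10 MB of records
    recsep => "\n"           # record separator, i.e. what a "line" is (this is the default)
    or die "Cannot tie: $!";

print scalar @array, " lines\n";    # the file now behaves like an ordinary array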
Anyway, that's another way to do what you want that might be conceptually simpler if you are familiar with arrays and like to think in terms of arrays.
I would read the file into an array, with each line being an array element, then you can do the comparisons. The only real design consideration is the size of the file being read into memory.
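A minimal sketch of that approach, using the same foo rule as the question (filename assumed):

use strict;
use warnings;

open my $fh, "<", "filename" or die "Cannot open filename: $!";
my @lines = <$fh>;    # the whole file ends up in memory
close $fh;

for my $i (0 .. $#lines) {
    next if $i > 0       && $lines[$i - 1] =~ /foo/;    # previous line matches
    next if $i < $#lines && $lines[$i + 1] =~ /foo/;    # next line matches
    print $lines[$i];
}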