I have hundreds of thousands of files that I would like to analyze. Specifically, I would like to calculate the percentage of printable characters in a sample of arbitrary size taken from each file. Some of these files are from mainframes, Windows, Unix, etc., so it is likely that binary and control characters are included.
I started by using the Linux "file" command, but it did not provide enough detail for my purposes. The following code conveys what I am trying to do, but does not always work.
#!/usr/bin/perl -n
use strict;
use warnings;
my $cnt_n_print = 0;
my $cnt_print = 0;
my $cnt_total = 0;
my $prc_print = 0;
#Count the number of non-printable characters
while ($_ =~ m/[^[:print:]]/g) {$cnt_n_print++};
#Count the number of printable characters
while ($_ =~ m/[[:print:]]/g) {$cnt_print++};
$cnt_total = $cnt_n_print + $cnt_print;
$prc_print = $cnt_print/$cnt_total;
#Print the # total number of bytes read followed by the % printable
print "$cnt_total|$prc_print\n"
This is a test call that works:
echo "test_string of characters" | /home/user/scripts/prl/s16_count_chars.pl
This is how I intend to call it, and works for one file:
find /fct/inbound/trans/ -name "TRNST.20121115231358.xf2" -type f -print0 | xargs -0 head -c 2000 | /home/user/scripts/prl/s16_count_chars.pl
This does not work correctly:
find /fct/inbound/trans/ -type f -print0 | xargs -0 head -c 2000 | /home/user/scripts/prl/s16_count_chars.pl
Neither does this:
find /fct/inbound/trans/ -type f -print0 | xargs -0 head -c 2000 | perl -0 /home/user/scripts/prl/s16_count_chars.pl
Instead of executing the script once for EACH line returned by the find, it executes ONCE for ALL the results.
Thanks in advance.
Research so far:
Pipe and XARGS and separators
http://help.lockergnome.com/linux/help-understand-pipe-xargs--ftopict549399.html
http://en.wikipedia.org/wiki/Xargs#The_separator_problem
Clarification(s):
1.) Desired output: If there are 932 files in a directory, the output would be a 932-line list of file names, the total bytes read from each file, and the % that were printable characters.
2.) Many of the files are binary. The script needs to handle embedded binary EOL or EOF sequences.
3.) Many of the files are large, so I would like to only read the first/last xx bytes. I had been trying to use head -c 256 or tail -c 128 to read either the first 256 bytes or the last 128 bytes respectively. The solution could either work in a pipeline or limit the bytes read within the Perl script.
The -n option wraps your entire code in a while (defined($_ = <ARGV>)) { ... } block. This means your my $cnt_print and other variable declarations are repeated for every line of input, essentially resetting all your variable values.
The workaround is to use global variables (declare them with our if you want to keep using use strict), and not to initialize them to 0, as they would be reinitialized for every line of input. You could say something like
our $cnt_print //= 0;
if you don't want $cnt_print and its friends to be undefined for the first line of input.
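For concreteness, here is a minimal sketch of the script adjusted along those lines. It keeps the -n wrapper; //= requires Perl 5.10 or later, and the END block is my own addition so that the totals are printed once at the end rather than after every input line:
#!/usr/bin/perl -n
use strict;
use warnings;
# Package globals survive the implicit per-line loop; //= initializes them only once.
our $cnt_n_print //= 0;
our $cnt_print   //= 0;
$cnt_n_print++ while /[^[:print:]]/g;
$cnt_print++   while /[[:print:]]/g;
# Assumption: report the grand totals once, after all input has been read.
END {
    my $cnt_total = $cnt_n_print + $cnt_print;
    print "$cnt_total|", $cnt_print / $cnt_total, "\n" if $cnt_total;
}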
See this recent question with a similar issue.
You could have find pass you one arg at a time.
find /fct/inbound/trans/ -type f -exec perl script.pl {} \;
But I'd continue passing multiple files at a time, either through xargs, or using GNU find's -exec +.
find /fct/inbound/trans/ -type f -exec perl script.pl {} +
The following code snippets support both.
You can continue reading a line at a time:
#!/usr/bin/perl
use strict;
use warnings;
my $cnt_total = 0;
my $cnt_n_print = 0;
while (<>) {
    $cnt_total += length;
    ++$cnt_n_print while /[^[:print:]]/g;
} continue {
    if (eof) {
        my $cnt_print = $cnt_total - $cnt_n_print;
        my $prc_print = $cnt_print/$cnt_total;
        print "$ARGV: $cnt_total|$prc_print\n";
        $cnt_total = 0;
        $cnt_n_print = 0;
    }
}
Or you could read a whole file at a time:
#!/usr/bin/perl
use strict;
use warnings;
local $/;
while (<>) {
    my $cnt_n_print = 0;
    ++$cnt_n_print while /[^[:print:]]/g;
    my $cnt_total = length;
    my $cnt_print = $cnt_total - $cnt_n_print;
    my $prc_print = $cnt_print/$cnt_total;
    print "$ARGV: $cnt_total|$prc_print\n";
}
Here is my working solution based on the feedback provided.
I would appreciate any further feedback on form or more efficient methods:
#!/usr/bin/perl
use strict;
use warnings;
# This program receives a file path and name.
# The program attempts to read the first 2000 bytes.
# The output is a list of files, the number of bytes
# actually read and the percent of the bytes that are
# ASCII "printable" aka [\x20-\x7E].
my ($data, $n_bytes, $file_name, $cnt_n_print, $cnt_print, $prc_print);
# loop through each file named on the command line
die "Pass at least one file name on the command line.\n" unless @ARGV;
foreach $file_name (@ARGV) {
    # open the file read only with "<" in "<$file_name"
    open(FILE, "<$file_name") or die "Can't open $file_name: $!";
    # open each file in binary mode to handle non-printable characters
    binmode FILE;
    # try to read 2000 bytes from FILE, save the results in $data and the
    # actual number of bytes read in $n_bytes
    $n_bytes = read FILE, $data, 2000;
    $cnt_n_print = 0;
    $cnt_print = 0;
    # count the number of non-printable characters
    ++$cnt_n_print while ($data =~ m/[^[:print:]]/g);
    $cnt_print = $n_bytes - $cnt_n_print;
    $prc_print = $cnt_print/$n_bytes;
    print "$file_name|$n_bytes|$prc_print\n";
    close(FILE);
}
Here is a sample of how to call the above script:
find /some/path/to/files/ -type f -exec perl this_script.pl {} +
Here's a list of references I found helpful:
POSIX Bracket Expressions
Opening files in binmode
Read function
Open file read only
I needed to use "iconv" to convert the character encoding of some files generated on Windows. Sometimes those files are very big and execution fails because it runs out of RAM. Googling, I found a script called "iconv-chunks.pl", which is basically a Perl script that processes the file in chunks and works pretty well, but it generates temporary files in my /tmp folder.
The problem is that this script runs automatically every day for many files, and it keeps generating garbage in my /tmp dir even though it has the cleanup flag ON.
The script I'm talking about is:
https://code.google.com/p/clschool-team4/source/browse/trunk/iconv-chunks.pl?r=53
#!/usr/bin/perl
our $CHUNK_SIZE = 1024 * 1024 * 100; # 100M
=head1 NAME
iconv-chunks - Process huge files with iconv
=head1 SYNOPSIS
iconv-chunks <filename> [iconv-options]
=head1 DESCRIPTION
The standard iconv program reads the entire input file into
memory, which doesn't work for large files (such as database exports).
This script is just a wrapper that processes the input file
in manageable chunks and writes it to standard output.
The first argument is the input filename (use - to specify standard input).
Anything else is passed through to iconv.
The real iconv needs to be somewhere in your PATH.
=head1 EXAMPLES
# Convert latin1 to utf-8:
./iconv-chunks database.txt -f latin1 -t utf-8 > out.txt
# Input filename of - means standard input:
./iconv-chunks - -f iso8859-1 -t utf8 < database.txt > out.txt
# More complex example, using compressed input/output to minimize disk use:
zcat database.txt.gz | ./iconv-chunks - -f iso8859-1 -t utf8 | \
gzip - > database-utf.dump.gz
=head1 AUTHOR
Maurice Aubrey <maurice.aubrey+iconv@gmail.com>
=cut
# $Id: iconv-chunks 6 2007-08-20 21:14:55Z mla $
use strict;
use warnings;
use bytes;
use File::Temp qw/ tempfile /;
# iconv errors:
# iconv: unable to allocate buffer for input: Cannot allocate memory
# iconv: cannot open input file `database.txt': File too large
@ARGV >= 1 or die "Usage: $0 <inputfile> [iconv-options]\n";
my @options = splice @ARGV, 1;
my($oh, $tmp) = tempfile(undef, CLEANUP => 1);
# warn "Tempfile: $tmp\n";
my $iconv = "iconv @options $tmp";
sub iconv { system($iconv) == 0 or die "command '$iconv' failed: $!" }
my $size = 0;
# must read by line to ensure we don't split multi-byte character
while (<>) {
    $size += length $_;
    print $oh $_;
    if ($size >= $CHUNK_SIZE) {
        iconv;
        truncate $oh, 0 or die "truncate '$tmp' failed: $!";
        seek $oh, 0, 0 or die "seek on '$tmp' failed: $!";
        $size = 0;
    }
}
iconv if $size > 0;
Any help finding the problem, or with making the script delete its temporary files after finishing, would be appreciated.
Regards
Change
my($oh, $tmp) = tempfile(undef, CLEANUP => 1);
to
my($oh, $tmp) = tempfile(UNLINK => 1);
CLEANUP is used to trigger removal of temporary directories on exit, not files. Note that passing undef as the first argument in order to use the default template is unnecessary.
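For reference, here is a minimal, self-contained sketch of the fixed call in isolation (the print is purely illustrative); UNLINK => 1 is File::Temp's documented option for removing the file itself when the program exits:
use strict;
use warnings;
use File::Temp qw(tempfile);
# UNLINK => 1 registers this temporary file for deletion when the program exits.
my ($oh, $tmp) = tempfile(UNLINK => 1);
print {$oh} "scratch data\n";   # use $oh and $tmp as before; no manual cleanup needed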
I'm new to Perl, so this may be a very basic case that I still can't understand.
Case:
The program tells the user to type the file name.
The user types the file name (1 or more files).
The program reads the content of the input file(s).
If it's a single file input, it just prints the entire content of that file.
If it's a multi-file input, it combines the contents of each file in sequence.
It then prints the result to a new temporary file, located in the same directory as program.pl.
file1.txt:
head
a
b
end
file2.txt:
head
c
d
e
f
end
SINGLE INPUT program ioSingle.pl:
#!/usr/bin/perl
print "File name: ";
$userinput = <STDIN>; chomp ($userinput);
#read content from input file
open ("FILEINPUT", $userinput) or die ("can't open file");
# print the content as long as there is content in the file
while (<FILEINPUT>) {
    print;
}
close FILEINPUT;
SINGLE RESULT in cmd:
>perl ioSingle.pl
File name: file1.txt
head
a
b
end
I found tutorial code that combines content from multi-file input, but I cannot adapt the while argument to the code above:
while ($userinput = <>) {
    print ($userinput);
}
I was stuck at making it work for multi-file input.
How am I supposed to reformat the code so my program gives a result like this?
EXPECTED MULTIFILES RESULT in cmd:
>perl ioMulti.pl
File name: file1.txt file2.txt
head
a
b
end
head
c
d
e
f
end
I appreciate your response :)
A good way to start working on a problem like this, is to break it down into smaller sections.
Your problem seems to break down to this:
get a list of filenames
for each file in the list
display the file contents
So think about writing subroutines that do each of these tasks. You already have something like a subroutine to display the contents of the file.
sub display_file_contents {
    # filename is the first (and only) argument to the sub
    my $filename = shift;
    # Use a lexical filehandle and three-arg open
    open my $filehandle, '<', $filename or die $!;
    # Shorter version of your code
    print while <$filehandle>;
}
The next task is to get our list of files. You already have some of that too.
sub get_list_of_files {
    print 'File name(s): ';
    my $files = <STDIN>;
    chomp $files;
    # We might have more than one filename. Need to split input.
    # Assume filenames are separated by whitespace
    # (Might need to revisit that assumption - filenames can contain spaces!)
    my @filenames = split /\s+/, $files;
    return @filenames;
}
We can then put all of that together in the main program.
#!/usr/bin/perl
use strict;
use warnings;
my @list_of_files = get_list_of_files();
foreach my $file (@list_of_files) {
    display_file_contents($file);
}
By breaking the task down into smaller tasks, each one becomes easier to deal with. And you don't need to carry the complexity of the whole program in your head at one time.
p.s. But like JRFerguson says, taking the list of files as command line parameters would make this far simpler.
The easy way is to use the diamond operator <> to open and read the files specified on the command line. This would achieve your objective:
while (<>) {
    chomp;
    print "$_\n";
}
Thus: ioSingle.pl file1.txt file2.txt
If this is the sole objective, you can reduce this to a command line script using the -p or -n switch like:
perl -pe '1' file1.txt file2.txt
perl -ne 'print' file1.txt file2.txt
These switches create implicit loops around the -e commands. The -p switch prints $_ after every loop as if you had written:
LINE:
while (<>) {
    # your code...
} continue {
    print;
}
Using -n creates:
LINE:
while (<>) {
    # your code...
}
Thus, -p adds an implicit print statement.
Ok, so I'm very new to Perl. I have a text file, and in the file there are 4 columns of data (date, time, size of file, file name). I need to create a small script that can open the file and get the average size of the files. I've read so much online, but I still can't figure out how to do it. This is what I have so far, but I'm not sure if I'm even close to doing this correctly.
#!/usr/bin/perl
open FILE, "files.txt";
#@array = File;
while(FILE){
    #chomp;
    ($date, $time, $numbers, $type) = split(/ /,<FILE>);
    $total += $numbers;
}
print"the total is $total\n";
This is how the data looks in the file. These are just a few of them. I need to get the numbers in the third column.
12/02/2002 12:16 AM 86016 a2p.exe
10/10/2004 11:33 AM 393 avgfsznew.pl
11/01/2003 04:42 PM 38124 c2ph.bat
Your program is reasonably close to working. With these changes it will do exactly what you want
Always use use strict and use warnings at the start of your program, and declare all of your variables using my. That will help you by finding many simple errors that you may otherwise overlook
Use lexical file handles, the three-parameter form of open, and always check the return status of any open call
Declare the $total variable outside the loop. Declaring it inside the loop means it will be created and destroyed each time around the loop and it won't be able to accumulate a total
Declare a $count variable in the same way. You will need it to calculate the average
Using while (FILE) {...} just tests that FILE is true. You need to read from it instead, so you must use the readline operator like <FILE>
You want the default call to split (without any parameters) which will return all the non-space fields in $_ as a list
You need to add a variable in the assignment to allow for the AM or PM field in each line
Here is a modification of your code that works fine
use strict;
use warnings;
open my $fh, '<', "files.txt" or die $!;
my $total = 0;
my $count = 0;
while (<$fh>) {
    my ($date, $time, $ampm, $numbers, $type) = split;
    $total += $numbers;
    $count += 1;
}
print "The total is $total\n";
print "The count is $count\n";
print "The average is ", $total / $count, "\n";
output
The total is 124533
The count is 3
The average is 41511
It's tempting to use Perl's awk-like auto-split option. There are 5 columns; three containing date and time information, then the size and then the name.
The first version of the script that I wrote is also the most verbose:
perl -n -a -e '$total += $F[3]; $num++; END { printf "%12.2f\n", $total / ($num + 0.0); }'
The -a (auto-split) option splits a line up on white space into the array @F. Combined with the -n option (which makes Perl run in a loop that reads the file name arguments in turn, or standard input, without printing each line), the code adds $F[3] (the fourth column, counting from 0) to $total, which is automagically initialized to zero on first use. It also counts the lines in $num. The END block is executed when all the input is read; it uses printf() to format the value. The + 0.0 ensures that the arithmetic is done in floating point, not integer arithmetic. This is very similar to the awk script:
awk '{ total += $4 } END { print total / NR }'
First drafts of programs are seldom optimal — or, at least, I'm not that good a programmer. Revisions help.
Perl was designed, in part, as an awk killer. There is still a program a2p distributed with Perl for converting awk scripts to Perl (and there's also s2p for converting sed scripts to Perl). And Perl does have an automatic (built-in) variable that keeps track of the number of lines read. It has several names. The tersest is $.; the mnemonic name $NR is available if you use English; in the script; so is $INPUT_LINE_NUMBER. So, using $num is not necessary. It also turns out that Perl does a floating point division anyway, so the + 0.0 part was unnecessary. This leads to the next versions:
perl -MEnglish -n -a -e '$total += $F[3]; END { printf "%12.2f\n", $total / $NR; }'
or:
perl -n -a -e '$total += $F[3]; END { printf "%12.2f\n", $total / $.; }'
You can tune the print format to suit your whims and fancies. This is essentially the script I'd use in the long term; it is fairly clear without being long-winded in any way. The script could be split over multiple lines if you desired. It is a simple enough task that the legibility of the one-liner is not a problem, IMNSHO. And the beauty of this is that you don't have to futz around with split and arrays and read loops on your own; Perl does most of that for you. (Granted, it does blow up on empty input; that fix is trivial; see below.)
Recommended version
perl -n -a -e '$total += $F[3]; END { printf "%12.2f\n", $total / $. if $.; }'
The if $. tests whether the number of lines read is zero or not; the printf and division are omitted if $. is zero so the script outputs nothing when given no input.
There is a noble (or ignoble) game called 'Code Golf' that was much played in the early days of Stack Overflow, but Code Golf questions are no longer considered good questions. The object of Code Golf is to write a program that does a particular task in as few characters as possible. You can play Code Golf with this and compress it still further if you're not too worried about the format of the output and you're using at least Perl 5.10:
perl -Mv5.10 -n -a -e '$total += $F[3]; END { say $total / $. if $.; }'
And, clearly, there are a lot of unnecessary spaces and letters in there:
perl -Mv5.10 -nae '$t+=$F[3];END{say$t/$.if$.}'
That is not, however, as clear as the recommended version.
#!/usr/bin/perl
use warnings;
use strict;
open my $file, "<", "files.txt";
my ($total, $cnt);
while(<$file>){
    $total += (split(/\s+/, $_))[3];
    $cnt++;
}
close $file;
print "number of files: $cnt\n";
print "total size: $total\n";
printf "avg: %.2f\n", $total/$cnt;
Or you can use awk:
awk '{t+=$4} END{print t/NR}' files.txt
Try doing this:
#!/usr/bin/perl -l
use strict; use warnings;
open my $file, '<', "my_file" or die "open error [$!]";
my ($total, $count);
while (<$file>){
    chomp;
    next if /^$/;
    my ($date, $time, $x, $numbers, $type) = split;
    $total += $numbers;
    $count++;
}
print "the average is " . $total/$count . " and the total is $total";
close $file;
It is as simple as this:
perl -F -lane '$a+=$F[3];END{print "The average size is ".$a/$.}' your_file
tested below:
> cat temp
12/02/2002 12:16 AM 86016 a2p.exe
10/10/2004 11:33 AM 393 avgfsznew.pl
11/01/2003 04:42 PM 38124 c2ph.bat
Now the execution:
> perl -F -lane '$a+=$F[3];END{print "The average size is ".$a/$.}' temp
The average size is 41511
>
explanation:
-F -a stores each line split into the array @F, using the default separator of space or tab.
So now $F[3] has the size of the file.
Sum up all the sizes in the 4th column until all the lines are processed.
END will be executed after processing all the lines in the file.
So $. at the end gives the number of lines.
So $a/$. gives the average.
This solution opens the file and loops through each line of the file. It then splits each line into the five variables by splitting on 1 or more whitespace characters.
Open the file for reading with "<", and if it fails, raise an error with die "...".
my ($total, $cnt) declares our running column total and the count of files added.
while(<FILE>) { ... } loops through each line of the file using the file handle and stores the line in $_.
chomp removes the input record separator from $_. On Unix, the default separator is a newline, \n.
split(/\s+/, $_) splits the current line, represented by $_, on the delimiter \s+. \s represents whitespace; the + afterward means "1 or more". So we split each line on 1 or more whitespace characters.
Next we update $total and $cnt.
#!/usr/bin/perl
open FILE, "<", "files.txt" or die "Error opening file: $!";
my ($total, $cnt);
while(<FILE>){
    chomp;
    my ($date, $time, $am_pm, $numbers, $type) = split(/\s+/, $_);
    $total += $numbers;
    $cnt++;
}
close FILE;
print "the total is $total and count of $cnt\n";
Duplicate data removal using Perl, called via a batch file within Windows.
A DOS window in Windows is opened via a batch file.
The batch file calls the Perl script which carries out the actions. I have the batch file.
The script I have works for duplicate data removal so long as the data file is not too big.
The problem that needs resolving is with larger data files (2 GB or more): with files of this size, a memory error occurs when trying to load the complete file into an array for duplicate data removal.
The memory error occurs in the subroutine at:-
@contents_of_the_file = <INFILE>;
(A completely different method is acceptable so long as it solves this issue, please suggest).
The subroutine is:-
sub remove_duplicate_data_and_file
{
    open(INFILE,"<" . $output_working_directory . $output_working_filename) or dienice ("Can't open $output_working_filename : INFILE :$!");
    if ($test ne "YES")
    {
        flock(INFILE,1);
    }
    @contents_of_the_file = <INFILE>;
    if ($test ne "YES")
    {
        flock(INFILE,8);
    }
    close (INFILE);
    ### TEST print "$#contents_of_the_file\n\n";
    @unique_contents_of_the_file = grep(!$unique_contents_of_the_file{$_}++, @contents_of_the_file);
    open(OUTFILE,">" . $output_restore_split_filename) or dienice ("Can't open $output_restore_split_filename : OUTFILE :$!");
    if ($test ne "YES")
    {
        flock(OUTFILE,1);
    }
    for($element_number=0;$element_number<=$#unique_contents_of_the_file;$element_number++)
    {
        print OUTFILE "$unique_contents_of_the_file[$element_number]\n";
    }
    if ($test ne "YES")
    {
        flock(OUTFILE,8);
    }
}
You are unnecessarily storing a full copy of the original file in @contents_of_the_file and -- if the amount of duplication is low relative to the file size -- nearly two other full copies in %unique_contents_of_the_file and @unique_contents_of_the_file. As ire_and_curses noted, you can reduce the storage requirements by making two passes over the data: (1) analyze the file, storing information about the line numbers of non-duplicate lines; and (2) process the file again to write non-dups to the output file.
Here is an illustration. I don't know whether I've picked the best module for the hashing function (Digest::MD5); perhaps others will comment on that. Also note the 3-argument form of open(), which you should be using.
use strict;
use warnings;
use Digest::MD5 qw(md5);
my (%seen, %keep_line_nums);
my $in_file = 'data.dat';
my $out_file = 'data_no_dups.dat';
open (my $in_handle, '<', $in_file) or die $!;
open (my $out_handle, '>', $out_file) or die $!;
while ( defined(my $line = <$in_handle>) ){
    my $hashed_line = md5($line);
    $keep_line_nums{$.} = 1 unless $seen{$hashed_line};
    $seen{$hashed_line} = 1;
}
seek $in_handle, 0, 0;
$. = 0;
while ( defined(my $line = <$in_handle>) ){
    print $out_handle $line if $keep_line_nums{$.};
}
close $in_handle;
close $out_handle;
You should be able to do this efficiently using hashing. You don't need to store the data from the lines, just identify which ones are the same. So...
Don't slurp - Read one line at a time.
Hash the line.
Store the hashed line representation as a key in a Perl hash of lists. Store the line number as the first value of the list.
If the key already exists, append the duplicate line number to the list corresponding to that value.
At the end of this process, you'll have a data-structure identifying all the duplicate lines. You can then do a second pass through the file to remove those duplicates.
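A rough sketch of that approach, with a couple of assumptions of my own (Digest::MD5 as the line hash, an input name of data.dat, and the de-duplicated result written to standard output):
use strict;
use warnings;
use Digest::MD5 qw(md5);
my %lines_by_digest;          # digest of line => [ line numbers where it appears ]
my $in_file = 'data.dat';     # hypothetical input file name
open my $in, '<', $in_file or die "Can't open $in_file: $!";
while (my $line = <$in>) {
    push @{ $lines_by_digest{ md5($line) } }, $.;
}
# A line is kept only if its number is the first entry in its digest's list.
my %keep = map { $_->[0] => 1 } values %lines_by_digest;
# Second pass: re-read the file and print only the kept line numbers.
seek $in, 0, 0;
$. = 0;
while (my $line = <$in>) {
    print $line if $keep{$.};
}
close $in;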
Perl does heroic things with large files, but 2GB may be a limitation of DOS/Windows.
How much RAM do you have?
If your OS doesn't complain, it may be best to read the file one line at a time, and write immediately to output.
I'm thinking of something using the diamond operator <> but I'm reluctant to suggest any code because on the occasions I've posted code, I've offended a Perl guru on SO.
I'd rather not risk it. I hope the Perl cavalry will arrive soon.
In the meantime, here's a link.
Here's a solution that works no matter how big the file is. But it doesn't use RAM exclusively, so it's slower than a RAM-based solution. You can also specify the amount of RAM you want this thing to use.
The solution uses a temporary file that the program treats as a database with SQLite.
#!/usr/bin/perl
use DBI;
use Digest::SHA 'sha1_base64';
use Modern::Perl;
my $input= shift;
my $temp= 'unique.tmp';
my $cache_size_in_mb= 100;
unlink $temp if -f $temp;
my $cx= DBI->connect("dbi:SQLite:dbname=$temp");
$cx->do("PRAGMA cache_size = " . $cache_size_in_mb * 1000);
$cx->do("create table x (id varchar(86) primary key, line int unique)");
my $find= $cx->prepare("select line from x where id = ?");
my $list= $cx->prepare("select line from x order by line");
my $insert= $cx->prepare("insert into x (id, line) values(?, ?)");
open(FILE, $input) or die $!;
my ($line_number, $next_line_number, $line, $sha) = 1;
while($line = <FILE>) {
    $line =~ s/\s+$//s;
    $sha = sha1_base64($line);
    unless($cx->selectrow_array($find, undef, $sha)) {
        $insert->execute($sha, $line_number);
    }
    $line_number++;
}
seek FILE, 0, 0;
$list->execute;
$line_number= 1;
$next_line_number= $list->fetchrow_array;
while($line = <FILE>) {
    $line =~ s/\s+$//s;
    if($next_line_number == $line_number) {
        say $line;
        $next_line_number = $list->fetchrow_array;
        last unless $next_line_number;
    }
    $line_number++;
}
close FILE;
Well, you could use the in-place edit mode of command-line Perl.
perl -i~ -ne 'print unless $seen{$_}++' uberbigfilename
In the "completely different method" category, if you've got Unix commands (e.g. Cygwin):
cat infile | sort | uniq > outfile
This ought to work - no need for Perl at all - which may, or may not, solve your memory problem. However, you will lose the ordering of the infile (as outfile will now be sorted).
EDIT: An alternative solution that's better able to deal with large files may be by using the following algorithm:
Read INFILE line-by-line
Hash each line to a small number (e.g. a hash value mod 10)
Append each line to a file unique to the hash number (e.g. tmp-1 to tmp-10)
Close INFILE
Open and sort each tmp-# to a new file sortedtmp-#
Mergesort sortedtmp-[1-10] (i.e. open all 10 files and read them simultaneously), skipping duplicates and writing each iteration to the end output file
This will be safer, for very large files, than slurping.
Parts 2 & 3 could be changed to use a random number instead of a hash value mod 10.
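Here is a rough Perl sketch of steps 1-4 (the partitioning pass); the digest choice and the tmp-0 .. tmp-9 file names are my own assumptions for illustration:
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);
# Open one bucket file per hash value 0..9.
my @bucket_fh;
for my $n (0 .. 9) {
    open $bucket_fh[$n], '>', "tmp-$n" or die "Can't open tmp-$n: $!";
}
open my $in, '<', 'INFILE' or die "Can't open INFILE: $!";
while (my $line = <$in>) {
    # Identical lines always hash to the same bucket, so the buckets can be
    # sorted and merge-sorted (steps 5-6) with duplicates dropped safely.
    my $bucket = hex(substr(md5_hex($line), 0, 8)) % 10;
    print { $bucket_fh[$bucket] } $line;
}
close $in;
close $_ for @bucket_fh;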
Here's a script BigSort that may help (though I haven't tested it):
# BigSort
#
# sort big file
#
# $1 input file
# $2 output file
#
# equ sort -t";" -k 1,1 $1 > $2
BigSort()
{
    if [ -s $1 ]; then
        rm $1.split.* > /dev/null 2>&1
        split -l 2500 -a 5 $1 $1.split.
        rm $1.sort > /dev/null 2>&1
        touch $1.sort1
        for FILE in `ls $1.split.*`
        do
            echo "sort $FILE"
            sort -t";" -k 1,1 $FILE > $FILE.sort
            sort -m -t";" -k 1,1 $1.sort1 $FILE.sort > $1.sort2
            mv $1.sort2 $1.sort1
        done
        mv $1.sort1 $2
        rm $1.split.* > /dev/null 2>&1
    else
        # work for empty file !
        cp $1 $2
    fi
}
I have a file with contents
abc
def
high
lmn
...
...
There are more than 2 million lines in the file.
I want to randomly sample lines from the files and output 50K lines. Any thoughts on how to approach this problem? I was thinking along the lines of Perl and its rand function (Or a handy shell command would be neat).
Related (Possibly Duplicate) Questions:
Randomly Pick Lines From a File Without Slurping It With Unix
How can I get exactly n random lines from a file with Perl?
Assuming you basically want to output about 2.5% of all lines, this would do:
print if 0.025 > rand while <$input>;
Shell way:
sort -R file | head -n 50000
From perlfaq5: "How do I select a random line from a file?"
Short of loading the file into a database or pre-indexing the lines in the file, there are a couple of things that you can do.
Here's a reservoir-sampling algorithm from the Camel Book:
srand;
rand($.) < 1 && ($line = $_) while <>;
This has a significant advantage in space over reading the whole file in. You can find a proof of this method in The Art of Computer Programming, Volume 2, Section 3.4.2, by Donald E. Knuth.
You can use the File::Random module which provides a function for that algorithm:
use File::Random qw/random_line/;
my $line = random_line($filename);
Another way is to use the Tie::File module, which treats the entire file as an array. Simply access a random array element.
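For example, a minimal sketch of that approach (read-only mode via Fcntl; the file name is a placeholder). Note that Tie::File keeps an index of line offsets in memory rather than the file contents, so it copes with large files:
use strict;
use warnings;
use Fcntl 'O_RDONLY';
use Tie::File;
# Tie the file to an array; elements are lines with the record separator removed.
tie my @lines, 'Tie::File', 'file.txt', mode => O_RDONLY
    or die "Cannot tie file: $!";
print $lines[ int rand @lines ], "\n";   # one random line; repeat as needed
untie @lines;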
Perl way:
Use CPAN: there is a module, File::RandomLine, that does exactly what you need.
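A short sketch based on File::RandomLine's documented new/next interface (the file name and count are placeholders; it samples with replacement, and its fast random-access selection may not be perfectly uniform across lines):
use strict;
use warnings;
use File::RandomLine;
my $rl = File::RandomLine->new('file.txt');   # hypothetical input file
print $rl->next, "\n" for 1 .. 50_000;        # emit 50K randomly chosen lines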
If you need to extract an exact number of lines:
use strict;
use warnings;
# Number of lines to pick and file to pick from
# Error checking omitted!
my ($pick, $file) = @ARGV;
open(my $fh, '<', $file)
or die "Can't read file '$file' [$!]\n";
# count lines in file
my ($lines, $buffer);
while (sysread $fh, $buffer, 4096) {
    $lines += ($buffer =~ tr/\n//);
}
# limit number of lines to pick to number of lines in file
$pick = $lines if $pick > $lines;
# build list of N lines to pick, use a hash to prevent picking the
# same line multiple times
my %picked;
for (1 .. $pick) {
    my $n = int(rand($lines)) + 1;
    redo if $picked{$n}++;
}
# loop over file extracting selected lines
seek($fh, 0, 0);
while (<$fh>) {
    print if $picked{$.};
}
close $fh;