Sort files by their dated names and create filehandles for each file in Perl

I have about 1000 files named like
CU20130404160033.TXT
CU20130405160027.TXT ....
CUYYYYMMDDHHMMSS.TXT
I need to append all the files into a single file, ordered by the dates in their names from min(date) to max(date).
How can I make the program efficient? I need to sort the files by their dates and create filehandles for them.
opendir (DIR, $Directory) or die $!;
@files = grep { (!/^\./) && -f "$Directory/$_" } readdir(DIR);
chdir($Directory);
# create an array of open filehandles
@fh = map { open my $f, $_ or die "Can't open $_: $!"; $f } @files;
# create new file for output
open $out_file, ">$filename" or die "can't open new file $!";

Something like this should be helpful.
opendir (DIR, $dir);
@dir = readdir(DIR);
closedir(DIR);
# Sort the file list by modification time (oldest first) and write the names to an output file
open(my $fh, ">", "output.txt") or die $!;
print $fh sort { -M "$dir/$b" <=> -M "$dir/$a" } @dir;
close $fh;
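Note that this orders the files by modification time and writes only their names. If the order should come from the timestamp embedded in the name, a plain string sort is enough, because the CUYYYYMMDDHHMMSS fields are fixed-width and zero-padded. A minimal sketch that also concatenates the contents ($dir as above; the output file name is illustrative):
# string sort of CU<timestamp>.TXT names is chronological,
# since the embedded timestamp is fixed-width and zero-padded
my @sorted = sort grep { /^CU\d{14}\.TXT$/ } @dir;
open my $out, '>', "combined.txt" or die $!;
for my $name (@sorted) {
    open my $in, '<', "$dir/$name" or die "Can't open $dir/$name: $!";
    print {$out} $_ while <$in>;
    close $in;
}
close $out;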

I propose this script:
use strict;
use warnings;

# 1. Set up the general values
my $directory = "... the dir ...";
my $out_file  = "... the out file ...";

# 2. Fill an array with the names of your files
my @files;
opendir(my $dh, $directory) or die $!;
while ( my $file = readdir($dh) ) {
    next if $file eq '.' or $file eq '..';
    push @files, $file;
}
closedir $dh;

# 3. Sort the array (string order is date order for CUYYYYMMDDHHMMSS.TXT names)
@files = sort { $a cmp $b } @files;

# 4. Open the target file
open my $ofh, '>', $out_file or die $!;

# 5. Iterate over each input file, open it,
#    and write its contents line by line to the target
foreach my $filename (@files) {
    open my $ifh, '<', "$directory/$filename" or die $!;
    while ( my $line = <$ifh> ) {
        print $ofh $line;
    }
    close $ifh;
}

# 6. Close the target
close $ofh;


To parse multiple files in Perl

Please correct my code; I cannot seem to open my file to parse it.
The error comes from this line: open(my $fh, $file) or die "Cannot open file, $!";
Cannot open file, No such file or directory at ./sample.pl line 28.
use strict;

my $dir = $ARGV[0];
my $dp_dpd = $ENV{'DP_DPD'};
my $log_dir = $ENV{'DP_LOG'};
my $xmlFlag = 0;
my @fileList = "";

my @not_proc_dir = `find $dp_dpd -type d -name "NotProcessed"`;
#print "@not_proc_dir\n";
foreach my $dir (@not_proc_dir) {
    chomp ($dir);
    #print "$dir\n";
    opendir (DIR, $dir) or die "Couldn't open directory, $!";
    while ( my $file = readdir DIR) {
        next if $file =~ /^\.\.?$/;
        next if (-d $file);
        # print "$file\n";
        next if $file eq "." or $file eq "..";
            if ($file =~ /.xml$/ig) {
                $xmlFlag = 1;
                print "$file\n";
                open(my $fh, $file) or die "Cannot open file, $!";
                @fileList = <$fh>;
                close $file;
            }
    }
    closedir DIR;
}
Quoting readdir's documentation:
If you're planning to filetest the return values out of a readdir, you'd better prepend the directory in question. Otherwise, because we didn't chdir there, it would have been testing the wrong file.
Your open(my $fh, $file) should therefore be open my $fh, '<', "$dir/$file" (note that '<' was added as well: you should always use 3-argument open).
Your next if (-d $file); is also wrong and should be next if -d "$dir/$file";
Some additional remarks on your code:
always add use warnings to your script (in addition to use strict, which you already have)
use lexical file/directory handle rather than global ones. That is, do opendir my $DH, $dir, rather than opendir DH, $dir.
properly indent your code (if ($file =~ /.xml$/ig) { is one level too deep; it makes your code harder to read)
next if $file =~ /^\.\.?$/; and next if $file eq "." or $file eq ".."; are redundant (even though not technically equivalent); I'd suggest using only the latter.
the variable $dir defined in my $dir = $ARGV[0]; is never used.
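Putting those remarks together, the read loop might look something like this (a sketch keeping the original variable names; tightening the unescaped dot in /.xml$/ to /\.xml$/i is an extra fix beyond the remarks above):
opendir my $DH, $dir or die "Couldn't open directory, $!";
while ( my $file = readdir $DH ) {
    next if $file eq '.' or $file eq '..';
    next if -d "$dir/$file";              # test the full path, not just the name
    if ( $file =~ /\.xml$/i ) {           # \. matches a literal dot; /g is unnecessary here
        $xmlFlag = 1;
        print "$file\n";
        open my $fh, '<', "$dir/$file" or die "Cannot open file, $!";
        @fileList = <$fh>;
        close $fh;                        # close the handle, not the file name
    }
}
closedir $DH;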

Perl script returning 0 when a file is read

I'm just trying to copy a file to a different directory before I process it. Here is the code:
use File::stat;
use File::Copy;
use LWP::UserAgent;
use strict;
use warnings;
use Data::Dumper;
use Cwd qw(getcwd);

my $dir = "\\folder\\music";
my $dir1 = "c:\\temp";

opendir(my $dh, $dir) or die "Cant open directory : $!\n";
#my @list = readdir($dh)
my @files = map { [ stat "$dir/$_", $_ ] }
            grep( /Shakira.*.mp3$/, readdir( $dh ) );
closedir($dh);

sub rev_by_date {
    $b->[0]->ctime <=> $a->[0]->ctime
}
my @sorted_files = sort rev_by_date @files;
my @newest = @{$sorted_files[0]};
my $name = pop(@newest);
print "Name: $name\n";

#**********************
# Up to here it is working fine

my $new;
open OLD, "<", $name or die "cannot open $old: $!";
# from here the problem starts
open(NEW, "> $new") or die "can't open $new: $!";
while () {
    print NEW $_ or die "can't write $new: $!";
}
close(OLD) or die "can't close $old: $!";
close(NEW) or die "can't close $new: $!";
The error I'm getting is:
cannot open Shakira - Try Everything (Official Video).mp3: No such file or directory at copy.pl line 49.
When I'm chomping the filename, like
my $oldfile = chomp($name);
then the error is:
Name: Shakira - Try Everything (Official Video).mp3
old file is 0
cannot open 0: No such file or directory at copy.pl line 49.
Any idea?
chomp changes its argument in place and returns the number of removed characters. So the correct usage is
chomp(my $oldfile = $name);
Also, you probably wanted
while (<OLD>) {
instead of
while () {
which just loops infinitely.
Moreover, you correctly prepend $dir/ to the filename in the stat call, but you should do so everywhere.
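Putting those fixes together, and since the script already loads File::Copy, the copy step could be as simple as this sketch ($dir and $dir1 come from the top of the script; the destination path is an assumption):
use File::Copy;

chomp(my $oldfile = $name);        # chomp in place; $oldfile keeps the name
my $old_path = "$dir/$oldfile";    # prepend the directory, as in the stat call
my $new_path = "$dir1/$oldfile";   # assumed destination under c:\temp

copy($old_path, $new_path)
    or die "cannot copy $old_path to $new_path: $!";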

Remove the first line from each file in my directory

How can I remove the first line from each file in my list of files? This is my code.
Open my directory:
use strict;
use warnings;
use utf8;
use Encode;
use Encode::Guess;
use Devel::Peek;

my $new_directory = '/home/lenovo/corpus';
my $directory = '/home/lenovo/corpus';
open( my $FhResultat, '>:encoding(UTF-8)', $FichierResulat );
my $dir = '/home/corpus';
opendir (DIR, $directory) or die $!;
my @tab;
while (my $file = readdir(DIR)) {
    next if ($file eq "." or $file eq ".." );
    #print "$file\n";
    my $filename_read = decode('utf8', $file);
    #print $FichierResulat "$file\n";
    push @tab, "$filename_read";
}
closedir(DIR);
Open my file:
foreach my $val (@tab) {
    utf8::encode($val);
    my $filename = $val;
    open(my $in, '<:utf8', $filename) or die "Unable to open '$filename' for read: $!";
Rename the file:
    my $newfile = "$filename.new";
    open(my $out, '>:utf8', $newfile) or die "Unable to open '$newfile' for write: $!";
Remove the first line:
    my @ins = <$in>;   # read the contents into an array
    chomp @ins;
    shift @ins;        # remove the first element from the array
    print $out @ins;
    close($in);
    close $out;
    rename $newfile, $filename or die "unable to rename '$newfile' to '$filename': $!";
}
The problem: my new file is empty! It seems correct to me, but the result is an empty file.
The accepted pattern for doing this kind of thing is as follows:
use strict;
use warnings;

my $old_file = '/path/to/old/file.txt';
my $new_file = '/path/to/new/file.txt';

open(my $old, '<', $old_file) or die $!;
open(my $new, '>', $new_file) or die $!;

while (<$old>) {
    next if $. == 1;
    print $new $_;
}

close($old) or die $!;
close($new) or die $!;

rename($old_file, "$old_file.bak") or die $!;
rename($new_file, $old_file) or die $!;
In your case, we're using $. (the input line number variable) to skip over the first line.
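For a single file, the same $.-based skip fits in a one-liner; -i.bak edits the file in place and keeps the original with a .bak extension:
perl -i.bak -ne 'print if $. > 1' file.txt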

Combining two CSV files in Perl

Hi, I'm very new to Perl and have little knowledge of it, but I'm trying to create a script that combines two .csv files into a new one.
#!/usr/bin/env perl
use strict;
use warnings;
use Text::CSV_XS;

my @rows;

{ # Read the CSV file
    my $csv = Text::CSV_XS->new() or die "Cannot use Text::CSV_XS ($!)";
    my $file = "file.csv";
    open my $fh, '<', $file or die "Cannot open $file ($!)";
    while (my $row = $csv->getline($fh)) {
        push @rows, $row;
    }
    $csv->eof or $csv->error_diag();
    close $fh or die "Failed to close $file ($!)";
}

{ # Gather the data
    foreach my $row (@rows) {
        foreach my $col (@{$row}) {
            $col = uc($col);
        }
        print "\n";
    }
}

# (over)Write the data
# Needs to be changed to ADD data
{
    my $csv = Text::CSV_XS->new({ binary => 1, escape_char => undef })
        or die "Cannot use Text::CSV ($!)";
    my $file = "output.csv";
    open my $fh, '>', $file or die "Cannot open $file ($!)";
    $csv->eol("\n");
    foreach my $row (@rows) {
        $csv->print($fh, \@{$row}) or die "Failed to write $file ($!)";
    }
    close $fh or die "Failed to close $file ($!)";
}
This is my current code. I do know this overwrites the data instead of actually adding it to the new file, but this is as far as I managed to get with the limited time and knowledge of Perl that I have.
The CSV format of both files is the same:
"Header1";"Header2";"Header3";"Header4";"Header5"
"Data1";"Data2";"Data3";"Data4";"Data5"
"Data1";"Data2";"Data3";"Data4";"Data5"
"Data1";"Data2";"Data3";"Data4";"Data5"
"Data1";"Data2";"Data3";"Data4";"Data5"
"Data1";"Data2";"Data3";"Data4";"Data5"
I believe the issue is here:
open my $fh, '>', $file
or die "Cannot open $file ($!)";
If I remember my Perl properly, the line should read:
open my $fh, '>>', $file
or die "Cannot open $file ($!)";
The >> should open the file handle $fh for append instead of for overwrite.
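Since both files carry the same header row, appending the second file wholesale would duplicate that row. A sketch of the reading side that keeps only the first file's header (the @input_files names are hypothetical; sep_char => ';' matches the semicolon-separated sample above):
my @input_files = ('file.csv', 'file2.csv');   # hypothetical names

my @rows;
my $csv = Text::CSV_XS->new({ binary => 1, sep_char => ';' })
    or die "Cannot use Text::CSV_XS";
for my $i (0 .. $#input_files) {
    open my $fh, '<', $input_files[$i] or die "Cannot open $input_files[$i] ($!)";
    my $header = $csv->getline($fh);   # first row of every file
    push @rows, $header if $i == 0;    # keep the header only once
    while (my $row = $csv->getline($fh)) {
        push @rows, $row;
    }
    close $fh;
}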
You could try something like this:
opendir(hand, "DIRPATH");
@files = readdir(hand);
closedir(hand);
foreach (@files) {
    if (/\.csv$/i) {   # if the filename ends in .csv
        push(@csvfiles, $_);
    }
}
$count = -1;
foreach (@csvfiles) {
    $csvfile = $_;
    open(hanr, "DIRPATH" . $csvfile) or die "error $!\n";              # read handle
    open(hanw, ">>DIRPATH" . "outputfile.csv") or die "error $! \n";   # append handle for the combined file
    @lines = ();
    @lines = <hanr>;
    foreach $line (@lines) {
        chomp $line;
        $count++;
        next unless $count;   # skip the header, i.e. the first line
        print hanw $line, "\n";
    }
    $count = -1;
    close(hanw);
    close(hanr);
}

Merging two files using Perl, keeping a copy of the original file elsewhere

I have two files, A.ini and B.ini, and I want to merge both files into A.ini.
Examples of the files:
A.ini:
a=123
b=xyx
c=434
B.ini contains:
a=abc
m=shank
n=paul
My output in A.ini should be:
a=123abc
b=xyx
c=434
m=shank
n=paul
I want this merging to be done in Perl, and I want to keep a copy of the old A.ini file somewhere else so that I can still use the old copy.
A command line variant:
perl -lne '
($a, $b) = split /=/;
$v{$a} = $v{$a} ? $v{$a} . $b : $_;
END {
print $v{$_} for sort keys %v
}' A.ini B.ini >NEW.ini
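Spelled out as a script, the one-liner above does roughly this (%v maps the text before each = to its merged line; -l handles the chomping and trailing newlines):
my %v;
while (<>) {
    chomp;                              # -l does this in the one-liner
    my ($key, $val) = split /=/;
    # first sighting stores the whole "key=value" line;
    # later sightings append just the value
    $v{$key} = $v{$key} ? $v{$key} . $val : $_;
}
print "$v{$_}\n" for sort keys %v;      # -l restores the newline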
How about:
#!/usr/bin/perl
use strict;
use warnings;

my %out;

my $file = 'path/to/A.ini';
open my $fh, '<', $file or die "unable to open '$file' for reading: $!";
while (<$fh>) {
    chomp;
    my ($key, $val) = split /=/;
    $out{$key} = $val;
}
close $fh;

$file = 'path/to/B.ini';
open $fh, '<', $file or die "unable to open '$file' for reading: $!";
while (<$fh>) {
    chomp;
    my ($key, $val) = split /=/;
    if (exists $out{$key}) {
        $out{$key} .= $val;
    } else {
        $out{$key} = $val;
    }
}
close $fh;

$file = 'path/to/A.ini';
open $fh, '>', $file or die "unable to open '$file' for writing: $!";
foreach (keys %out) {
    print $fh $_, '=', $out{$_}, "\n";
}
close $fh;
The two files to be merged can be read in a single pass and don't need to be treated as separate source files. That allows the use of <> to read all files passed as parameters on the command line.
Keeping a backup copy of A.ini is simply a matter of renaming it before writing the merged data to a new file of the same name.
This program appears to do what you need.
use strict;
use warnings;

my $file_a = $ARGV[0];

my (@keys, %values);

while (<>) {
    if (/\A\s*(.+?)\s*=\s*(.+?)\s*\z/) {
        push @keys, $1 unless exists $values{$1};
        $values{$1} .= $2;
    }
}

rename $file_a, "$file_a.bak" or die qq(Unable to rename "$file_a": $!);
open my $fh, '>', $file_a or die qq(Unable to open "$file_a" for output: $!);
printf $fh "%s=%s\n", $_, $values{$_} for @keys;
Output (in A.ini):
a=123abc
b=xyx
c=434
m=shank
n=paul