I have two CSV files that I want to compare. However, their headers and rows/values are in a different order.
Here's a simple example:
INPUT FILE1:
NAME,AGE,BDAY
ABC,1,090214
DEF,1,122514
INPUT FILE2:
BDAY,NAME,AGE
122514,DEF,1
090214,ABC,1
INPUT FILE3:
BDAY,NAME,AGE
122514,DEFG,1
090214,ABC,1
Diff FILE1 and FILE2
No diffs.
Diff FILE1 and FILE3
Found diffs in FILE1 and FILE3.
<Any format of diffs is okay.>
I can easily create a Perl script for this, but before I do: does anyone know of an existing script/tool that already does this?
I have tried copying the files from UNIX to Windows and sorting them in Excel. That works well, but I run into problems saving the result.
I have also googled but can't find a reference for this.
Thanks for any input.
I think you need some kind of advanced comparison (which requires deeper analysis), so a relational database approach may be interesting.
In this respect, the module DBD::CSV is helpful. It allows writing SELECT statements, including joins between tables.
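For illustration, here is a rough sketch of that approach. Treat it as a starting point, not drop-in code: DBD::CSV's SQL dialect (provided by SQL::Statement) is limited, and the table-to-file mapping and lower-cased column names below are assumptions based on the f_ext attribute and the sample headers.

use strict;
use warnings;
use DBI;

# Table "file1" maps to ./file1.csv, "file2" to ./file2.csv
my $dbh = DBI->connect( 'dbi:CSV:', undef, undef, {
    f_dir      => '.',
    f_ext      => '.csv/r',
    RaiseError => 1,
} );

# Example: rows of file1 whose NAME also appears in file2
# (NAME is just an example key column from the sample data)
my $sth = $dbh->prepare(
    'SELECT file1.name, file1.age, file1.bday
       FROM file1 JOIN file2 ON file1.name = file2.name'
);
$sth->execute;
while ( my @row = $sth->fetchrow_array ) {
    print join( ',', @row ), "\n";
}
$dbh->disconnect;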
Normalize your data
Use Text::CSV to reorder the columns of your CSV files.
Then you can use Perl's sort or some other utility to reorder the rows.
The following also uses Text::Wrap to display the normalized files in a pleasing format:
use strict;
use warnings;
use autodie;

# Set up fake data
my @files;
{
    local $/ = '';    # Paragraph mode
    while (<DATA>) {
        chomp;
        my ( $file, $data ) = split "\n", $_, 2;
        open my $fh, '>', $file;
        print $fh $data, "\n";
        push @files, $file;
    }
}

# Normalize files by column order
use Text::CSV;

my $csv = Text::CSV->new( { binary => 1, eol => $/ } )
    or die "Cannot use CSV: " . Text::CSV->error_diag();

for my $file (@files) {
    local @ARGV = $file;
    local $^I   = '.bak';

    my @old_order;
    my @new_order;

    while (<>) {
        if ( !$csv->parse($_) ) {
            die "Bad parse $file, line $.: " . $csv->error_diag();
        }
        my @columns = $csv->fields();
        if ( $. == 1 ) {
            @old_order = @columns;
            @new_order = sort @columns;
        }
        my %hash;
        @hash{@old_order} = @columns;
        if ( !$csv->combine( @hash{@new_order} ) ) {
            die "Bad combine $file, line $.: " . $csv->error_diag();
        }
        print $csv->string();
    }
    unlink "$file$^I";    # Optionally delete backup
}

# Normalize files by row order
for my $file (@files) {
    my ( $header, @data ) = do { local @ARGV = $file; <> };
    open my $fh, '>', $file;
    print $fh $header, sort @data;
}

# View normalized files
use Text::Wrap;

for my $file (@files) {
    open my $fh, '<', $file;
    print wrap( sprintf( "%-12s", $file ), ' ' x 12, <$fh> ), "\n";
}
__DATA__
file1.csv
NAME,AGE,BDAY
ABC,1,090214
DEF,1,122514
file2.csv
BDAY,NAME,AGE
122514,DEF,1
090214,ABC,1
file3.csv
BDAY,NAME,AGE
122514,DEFG,1
090214,ABC,1
Outputs:
file1.csv   AGE,BDAY,NAME
            1,090214,ABC
            1,122514,DEF

file2.csv   AGE,BDAY,NAME
            1,090214,ABC
            1,122514,DEF

file3.csv   AGE,BDAY,NAME
            1,090214,ABC
            1,122514,DEFG
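Once the files are normalized, an ordinary diff reports the row differences; plain diff at the shell works, or, staying in Perl, Text::Diff. A small sketch, reusing the normalized files created above:

use Text::Diff;

# diff() accepts file names and returns the differences as a string
# (empty when the files are identical)
my $diff = diff 'file1.csv', 'file3.csv';
print length($diff) ? "Found diffs:\n$diff" : "No diffs.\n";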
I'm trying to build a primary key into a new file from an original file which has the following structure (tbl_20180615.txt):
573103150033,0664,54,MSS02VEN*',INT,zxzc,,,,,
573103150033,0665,54,MSS02VEN,INT,zxzc,,,,,
573103150080,0659,29,MSS05ARA',INT,zxzc,,,,,
573103150080,0660,29,MSS05ARA ,INT,zxzc,,,,,
573103154377,1240,72,MSSTRI01,INT,zxzc,,,,,
573103154377,1240,72,MSSTRI01,INT,zxzc,,,,,
I launch my script, Verify.pl, with two arguments: the first is the number of columns used to build the primary key in the new file, and the second is the name of the original file.
(Verify.pl)
#!/usr/bin/perl
use strict;
use warnings;

my $n1   = $ARGV[0];
my $name = $ARGV[1];
$n1 =~ s/"//g;
my $n2 = $n1 + 1;
my %seen;

my ($file3) = qw(log.txt);
open my $fh3, '>', $file3 or die "Can't open $file3: $!";

print "Loading file ...\n";
open( my $file, "<", "$name" ) || die "Can't read file $name: $!";
while (<$file>) {
    chomp;
    my @rec = split( /,/, $_, $n2 );    # $n2 is used to build the primary key, splitting only the desired fields
    for ( my $i = 0; $i < $n1; $i++ ) {
        print $fh3 "$rec[$i],";
    }
    print $fh3 "\n";
}
close($file);
print "Done!\n";

######### to check duplicates #########
print "Checking duplicate records...\n\n";
open( my $fh4, "<", "log.txt" ) || die "Can't read file log.txt: $!";
while (<$fh4>) {
    print if $seen{$_}++;
}
close($fh4);
If I run the following command:
perl Verify.pl 2 tbl_20180615.txt
the code builds a new file called "log.txt" with the following structure, splitting the original file into the two columns given by the first argument:
(log.txt)
573103150033,0664,
573103150033,0665,
573103150080,0659,
573103150080,0660,
573103154377,1240,
573103154377,1240,
That works OK, but when I then read the new file log.txt to check for duplicates, it doesn't work. However, if I comment out the lines that generate log.txt (the part above the "######### to check duplicates" marker in the code) and run only the second part, it works, giving me two duplicate lines:
(Result in command line)
573103154377,1240
573103154377,1240
How can I solve this issue?
I think this does what you're asking for. It builds a unique list of derived keys before printing any of them, using a hash to check whether a key has already been generated. (For the record, your original version fails because $fh3 is still open for writing, with possibly unflushed buffers, when you read log.txt back in; closing that handle first would also fix it.)
Note that I have assigned values to @ARGV to emulate input values. You must remove that statement before running the program with input from the command line.
#!/usr/bin/perl
use strict;
use warnings;
use autodie;    # Handle bad IO statuses automatically

local @ARGV = qw/ 2 tbl_20180615.txt /;    # For testing only

tr/"//d for @ARGV;
my ( $key_fields, $input_file ) = @ARGV;
my $output_file = 'log.txt';

my ( @keys, %seen );

print "Loading input ... ";
open my $in_fh, '<', $input_file;
while (<$in_fh>) {
    chomp;
    my @rec = split /,/;
    my $key = join ',', @rec[ 0 .. $key_fields - 1 ];
    push @keys, $key unless $seen{$key}++;
}
print "Done\n";

open my $out_fh, '>', $output_file;
print $out_fh "$_\n" for @keys;
close $out_fh;
Output (log.txt):
573103150033,0664
573103150033,0665
573103150080,0659
573103150080,0660
573103154377,1240
I basically want to do an out-of-order diff between two text files (in CSV style), comparing only the fields in the first two columns (I don't care about the third column's value). I then print out the lines that file1.txt has but file2.txt doesn't, and vice versa for file2.txt compared to file1.txt.
file1.txt:
cat,val 1,43432
cat,val 2,4342
dog,value,23
cat2,value,2222
hedgehog,input,233
file2.txt:
cat2,value,312
cat,val 2,11
cat,val 3,22
dog,value,23
hedgehog,input,2145
bird,output,9999
Output would be something like this:
file1.txt:
cat,val 1,43432
file2.txt:
cat,val 3,22
bird,output,9999
I'm new to Perl so some of the better, less ugly methods to achieve this are outside of my knowledge currently. Thanks for any help.
Current code:
#!/usr/bin/perl -w
use Cwd;
use strict;
use Data::Dumper;
use Getopt::Long;

my $myName = 'MyDiff.pl';
my $usage  = "$myName is blah blah blah";

# Retrieve the command line options and set up the environment
use vars qw($file1 $file2);

# Grab the specified values or exit the program
GetOptions(
    "file1=s" => \$file1,
    "file2=s" => \$file2
) or die $usage;

( $file1 and $file2 ) or die $usage;

open( FH, "< $file1" ) or die "Can't open $file1 for read: $!";
my @array1 = <FH>;
close FH or die "Cannot close $file1: $!";

open( FH, "< $file2" ) or die "Can't open $file2 for read: $!";
my @array2 = <FH>;
close FH or die "Cannot close $file2: $!";

# ...do a sort and match
Use a hash for this, with the first two columns as the key.
Once you have the two hashes, you can iterate over them and delete the common entries; whatever remains in the respective hashes is what you are looking for.
Initialize,
my %hash1 = ();
my %hash2 = ();
Read in the first file, join the first two columns to form the key, and save it in a hash. This assumes the fields are comma-separated; you could also use a CSV module for the same, as sketched after the next code block.
open( my $fh1, "<", $file1 ) || die "Can't open $file1: $!";
while ( my $line = <$fh1> ) {
    chomp $line;
    # join first two columns for key
    my $key = join ",", ( split ",", $line )[ 0, 1 ];
    # create hash entry for file1
    $hash1{$key} = $line;
}
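As an aside, here is a sketch of the same key-building loop with Text::CSV, reusing $fh1 and %hash1 from above; unlike a plain split, it copes with quoted fields that contain commas:

use Text::CSV;

my $csv = Text::CSV->new( { binary => 1 } )
    or die "Cannot use Text::CSV: " . Text::CSV->error_diag();

while ( my $line = <$fh1> ) {
    chomp $line;
    $csv->parse($line) or die "Bad CSV at line $.: " . $csv->error_diag();
    # join first two columns for key
    my $key = join ",", ( $csv->fields() )[ 0, 1 ];
    $hash1{$key} = $line;
}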
Do the same for file2 and create %hash2
open( my $fh2, "<", $file2 ) || die "Can't open $file2: $!";
while ( my $line = <$fh2> ) {
    chomp $line;
    # join first two columns for key
    my $key = join ",", ( split ",", $line )[ 0, 1 ];
    # create hash entry for file2
    $hash2{$key} = $line;
}
Now go over the entries and delete the common ones,
foreach my $key ( keys %hash1 ) {
    if ( exists $hash2{$key} ) {
        # common entry, delete from both hashes
        delete $hash1{$key};
        delete $hash2{$key};
    }
}
%hash1 will now have lines which are only in file1.
You could print them as,
foreach my $key ( keys %hash1 ) {
    print "$hash1{$key}\n";
}

foreach my $key ( keys %hash2 ) {
    print "$hash2{$key}\n";
}
Perhaps the following will be helpful:
use strict;
use warnings;

my @files = @ARGV;
pop;

my %file1 = map { chomp; /(.+),/; $1 => $_ } <>;

push @ARGV, $files[1];
my %file2 = map { chomp; /(.+),/; $1 => $_ } <>;

print "$files[0]:\n";
print $file1{$_}, "\n" for grep !exists $file2{$_}, keys %file1;

print "\n$files[1]:\n";
print $file2{$_}, "\n" for grep !exists $file1{$_}, keys %file2;
Usage: perl script.pl file1.txt file2.txt
Output on your datasets:
file1.txt:
cat,val 1,43432
file2.txt:
cat,val 3,22
bird,output,9999
This builds a hash for each file. The keys are the first two columns and the associated values are the full lines. grep is used to filter the shared keys.
Edit: On relatively smaller files, using map as above to process the file's lines will work fine. However, a list of all of the file's lines is first created and then passed to map. On larger files, it may be better to use a while (<>) { ... } construct to read one line at a time. The code below does this, generating the same output as above, and uses a hash of hashes (HoH). Because it uses a HoH, you'll note some dereferencing:
use strict;
use warnings;

my %hash;
my @files = @ARGV;

while (<>) {
    chomp;
    $hash{$ARGV}{$1} = $_ if /(.+),/;
}

print "$files[0]:\n";
print $hash{ $files[0] }{$_}, "\n"
    for grep !exists $hash{ $files[1] }{$_}, keys %{ $hash{ $files[0] } };

print "\n$files[1]:\n";
print $hash{ $files[1] }{$_}, "\n"
    for grep !exists $hash{ $files[0] }{$_}, keys %{ $hash{ $files[1] } };
I think the above problem can be solved with either of these algorithms:
a) Use a hash, as mentioned above.
b) Sort-merge comparison, as sketched below:
1. Sort both files on key1 and key2 (use the sort function).
2. Iterate through FILE1:
   - Match the key1/key2 entry of FILE1 against the current entry of FILE2.
   - If they match, print the common line to the desired file as required and move to the next row of FILE1 (continue the loop).
   - If not, iterate through FILE2 starting from POS-FILE2 until a match is found:
     - If a match is found, print the common line as required, note the position in FILE2, and exit the inner loop; if FILE2 is exhausted, set FILE2-END to true.
     - If no match, print the unmatched line to the desired file as required and move to the next row of FILE2.
3. Once FILE2-END is true, the rest of the lines in FILE1 don't exist in FILE2.
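A minimal Perl sketch of approach (b), assuming plain comma-separated lines, string comparison on the first two fields, and placeholder file names:

use strict;
use warnings;

# Read a file and return its rows sorted on the first two fields
sub read_sorted {
    my ($name) = @_;
    open my $fh, '<', $name or die "Can't open $name: $!";
    my @rows = map { chomp; [ split /,/ ] } <$fh>;
    return [ sort { $a->[0] cmp $b->[0] || $a->[1] cmp $b->[1] } @rows ];
}

my $f1 = read_sorted('file1.txt');
my $f2 = read_sorted('file2.txt');

# Walk both sorted lists in step (merge-join style)
my ( $i, $j ) = ( 0, 0 );
while ( $i < @$f1 and $j < @$f2 ) {
    my $cmp = $f1->[$i][0] cmp $f2->[$j][0]
           || $f1->[$i][1] cmp $f2->[$j][1];
    if    ( $cmp == 0 ) { $i++; $j++ }    # common line, skip both
    elsif ( $cmp < 0 )  { print "only in file1: @{ $f1->[$i] }\n"; $i++ }
    else                { print "only in file2: @{ $f2->[$j] }\n"; $j++ }
}

# Drain whichever file still has rows left
print "only in file1: @{ $f1->[$i++] }\n" while $i < @$f1;
print "only in file2: @{ $f2->[$j++] }\n" while $j < @$f2;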
I am trying to solve the issue below.
I have two files, Address.txt and File.txt. I want to replace every A/B/C/D token in File.txt with the corresponding string value read from Address.txt, using a Perl script. The replacement is not happening in my output file; I get the same content as File.txt.
I tried the code below.
Here is Address.txt file
A,APPLE
B,BAL
C,CAT
D,DOG
E,ELEPHANT
F,FROG
G,GOD
H,HORCE
Here is File.txt
A B C
X Y X
M N O
D E F
F G H
Here is my code:
use strict;
use warnings;

open( MYFILE, 'Address.txt' );
foreach (<MYFILE>) {
    chomp;
    my @data_new = split /,/;
    open INPUTFILE, "<", $ARGV[0] or die $!;
    open OUT, '>ariout.txt' or die $!;
    my $src = $data_new[0];
    my $des = $data_new[1];
    while (<INPUTFILE>) {
        # print "In while: $src \t$des\n";
        $_ =~ s/$src/$des/g;
        print OUT $_;
    }
    close INPUTFILE;
    close OUT;
    # /usr/bin/perl -p -i -e "s/A/APPLE/g" ARGV[0];
}
close(MYFILE);
If I write $_ =~ s/A/Apple/g; then the output file is fine and A is replaced with "Apple". But when the pattern comes in dynamically, it doesn't get replaced.
Thanks in advance. I am new to the Perl scripting language; correct me if I am wrong anywhere.
Update 1: I updated the code below and it's working fine now. My question: what is the big-O complexity of this algorithm?
Code:
#!/usr/bin/perl
use warnings;
use strict;

open( my $out_fh, ">", "output.txt" ) || die "Can't open the output file for writing: $!";
open( my $address_fh, "<", "Address.txt" ) || die "Can't open the address file: $!";

my %lookup = map { chomp; split( /,/, $_, 2 ) } <$address_fh>;

open( my $file_fh, "<", "File1.txt" ) || die "Can't open the file.txt file: $!";

while (<$file_fh>) {
    my @line = split;
    for my $char (@line) {
        ( exists $lookup{$char} ) ? print $out_fh " $lookup{$char} " : print $out_fh " $char ";
    }
    print $out_fh "\n";
}
Not entirely sure how you want your output formatted. Do you want to keep the rows and columns as they are?
I took a similar approach as above but kept the formatting the same as in your file.txt:
#!/usr/bin/perl
use warnings;
use strict;

open( my $out_fh, ">", "output.txt" ) || die "Can't open the output file for writing: $!";
open( my $address_fh, "<", "address.txt" ) || die "Can't open the address file: $!";

my %lookup = map { chomp; split( /,/, $_, 2 ) } <$address_fh>;

open( my $file_fh, "<", "file.txt" ) || die "Can't open the file.txt file: $!";

while (<$file_fh>) {
    my @line = split;
    for my $char (@line) {
        ( exists $lookup{$char} ) ? print $out_fh " $lookup{$char} " : print $out_fh " $char ";
    }
    print $out_fh "\n";
}
That will give you the output:
APPLE BAL CAT
X Y X
M N O
DOG ELEPHANT FROG
FROG GOD HORCE
Here's another option that lets Perl handle the opening and closing of files:
use strict;
use warnings;

my $file_txt = pop;
my %hash = map { $1 => $2 if /(.+?),(.+)/ } <>;

push @ARGV, $file_txt;

while (<>) {
    my @array;
    push @array, $hash{$_} // $_ for split;
    print "@array\n";
}
Usage: perl script.pl Address.txt File.txt [>outFile.txt]
The last, optional parameter directs output to a file.
Output on your dataset:
APPLE BAL CAT
X Y X
M N O
DOG ELEPHANT FROG
FROG GOD HORCE
The name of File.txt is implicitly popped off of @ARGV for use later. Then a hash is built using the key/value pairs in Address.txt.
Next, File.txt is read, splitting each line into its single elements; the defined-or operator (//) is used to return the defined hash item or the original element, which is then pushed onto @array. Finally, the array is interpolated in a print statement.
Hope this helps!
First, here is your existing program, rewritten slightly. It does the following:
- open the address file
- convert the address file to a hash, so that the letters are the keys and the strings the values
- open the other file
- read in the single line in it
- split the line into single letters
- use the letters to look up values in the hash
use strict;
use warnings;

open( my $a, "Address.txt" ) || die $!;
my %address = map { split(/,/) } map { split(' ') } <$a>;

open( my $f, "File.txt" ) || die $!;
my $list = <$f>;
for my $letter ( split( ' ', $list ) ) {
    print $address{$letter} . "\n" if ( exists $address{$letter} );
}
To make another file with the substitutions in place, alter the loop that processes $list:
my @output;
for my $letter ( split( ' ', $list ) ) {
    if ( exists $address{$letter} ) {
        push @output, $address{$letter};
    }
    else {
        push @output, $letter;
    }
}

open( my $o, ">newFile.txt" ) || die $!;
print $o "@output";
Your problem is that in every iteration of your foreach loop you overwrite any changes made earlier to the output file.
My solution:
use strict;
use warnings;

open my $replacements, 'Address.txt' or die $!;
my %r;
foreach (<$replacements>) {
    chomp;
    my ( $k, $v ) = split /,/;
    $r{$k} = $v;
}

my $re = '(' . join( '|', keys %r ) . ')';

open my $input, "<", $ARGV[0] or die $!;
while (<$input>) {
    s/$re/$r{$1}/g;
    print;
}
#!/usr/bin/perl -w
# Replace multiple text strings in a file with text from another file:
# select text from the 1st file, replace in the 2nd file

$file1 = 'Address.txt';
$file2 = 'File.txt';

# save the strings by which to replace
%replacement = ();
open IN, "$file1" or die "can't open $file1\n";
while (<IN>) {
    chomp $_;
    @a = split ',', $_;
    $replacement{ $a[0] } = $a[1];
}
close IN;

open OUT, ">replaced_file";
open REPL, "$file2" or die "can't open $file2\n";
while (<REPL>) {
    chomp $_;
    @a = split ' ', $_;
    @replaced_data = ();
    # replace strings wherever possible
    foreach $i (@a) {
        if ( exists $replacement{$i} ) { push @replaced_data, $replacement{$i}; }
        else                           { push @replaced_data, $i; }
    }
    print OUT trim( join " ", @replaced_data ), "\n";
}
close REPL;
close OUT;

########################################
sub trim {
    my $str = $_[0];
    $str =~ s/^\s*(.*)/$1/;
    $str =~ s/\s*$//;
    return $str;
}
I want to convert Excel files to CSV files with Perl. For convenience, I like to use the module File::Slurp for read/write operations; I need it in a subfunction.
While the program prints the desired output to the screen, the generated CSV files unfortunately contain just one row of semicolons with empty fields.
Here is the code:
#!/usr/bin/perl
use File::Copy;
use v5.14;
use Cwd;
use File::Slurp;
use Spreadsheet::ParseExcel;

sub xls2csv {
    my $currentPath = getcwd();
    my @files = <$currentPath/stage0/*.xls>;

    for my $sourcename (@files) {
        print "Now working on $sourcename\n";

        my $outFile = $sourcename;
        $outFile =~ s/xls/csv/g;
        print "Output CSV-File: " . $outFile . "\n";

        my $source_excel = Spreadsheet::ParseExcel->new;
        my $source_book  = $source_excel->Parse($sourcename)
            or die "Could not open source Excel file $sourcename: $!";

        foreach my $source_sheet_number ( 0 .. $source_book->{SheetCount} - 1 ) {
            my $source_sheet = $source_book->{Worksheet}[$source_sheet_number];

            next unless defined $source_sheet->{MaxRow};
            next unless $source_sheet->{MinRow} <= $source_sheet->{MaxRow};
            next unless defined $source_sheet->{MaxCol};
            next unless $source_sheet->{MinCol} <= $source_sheet->{MaxCol};

            foreach my $row_index ( $source_sheet->{MinRow} .. $source_sheet->{MaxRow} ) {
                foreach my $col_index ( $source_sheet->{MinCol} .. $source_sheet->{MaxCol} ) {
                    my $source_cell = $source_sheet->{Cells}[$row_index][$col_index];
                    if ($source_cell) {
                        print $source_cell->Value, ";";    # correct output!
                        write_file( $outFile, { binmode => ':utf8' }, $source_cell->Value, ";" );    # only one row of semicolons with empty fields!
                    }
                }
                print "\n";
            }
        }
    }
}

xls2csv();
I know it has something to do with the parameter passing in the write_file function, but I couldn't manage to fix it.
Does anybody have an idea?
Thank you very much in advance.
write_file will overwrite the file unless the append => 1 option is given. So this:
write_file( $outFile, { binmode => ':utf8' }, $source_cell->Value, ";" );
will write a new file for each new cell value. It does, however, not match your description of "one row with semicolons, fields are empty", as it should leave exactly one value and one semicolon.
I am doubtful about this sentiment of yours: "For convenience I like to use the module File::Slurp". While the print statement works as it should, using File::Slurp does not. So how is that convenient?
What you should do, if you still want to use write_file is to gather all the lines to print, and then print them all at once at the end of the loop. E.g.:
$line .= $source_cell->Value . ";";    # use concatenation to build the line
...
push @out, "$line\n";                  # store in array
...
write_file( ...., \@out );             # print the array
Another simple option would be to use join, or to use the Text::CSV module.
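For illustration, here is a rough sketch of the cell loop rewritten with Text::CSV, writing each sheet's rows in one pass. It assumes the $source_sheet and $outFile variables from the question's loop, so treat it as a starting point rather than drop-in code:

use Text::CSV;

my $csv = Text::CSV->new( { binary => 1, sep_char => ';', eol => "\n" } )
    or die "Cannot use Text::CSV: " . Text::CSV->error_diag();

open my $out, '>:utf8', $outFile or die "Can't open $outFile: $!";
for my $row_index ( $source_sheet->{MinRow} .. $source_sheet->{MaxRow} ) {
    # collect the row's cell values, empty string for missing cells
    my @values = map {
        my $cell = $source_sheet->{Cells}[$row_index][$_];
        defined $cell ? $cell->Value : '';
    } $source_sheet->{MinCol} .. $source_sheet->{MaxCol};
    $csv->print( $out, \@values );    # one CSV line per spreadsheet row
}
close $out;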
Well, in this particular case File::Slurp was indeed complicating things for me. I just wanted to avoid repeating myself, which I did in the following clumsy but working solution:
#!/usr/bin/perl
use warnings;
use strict;
use File::Copy;
use v5.14;
use Cwd;
use File::Basename;
use File::Slurp;
use Tie::File;
use Spreadsheet::ParseExcel;
use open qw/:std :utf8/;

# ... other functions

sub xls2csv {
    my $currentPath = getcwd();
    my @files = <$currentPath/stage0/*.xls>;
    my $fh;

    for my $sourcename (@files) {
        say "Now working on $sourcename";

        my $outFile = $sourcename;
        $outFile =~ s/xls/csv/gi;

        if ( -e $outFile ) {
            unlink($outFile) or die "Error: $!";
            print "Old $outFile deleted.";
        }

        my $source_excel = Spreadsheet::ParseExcel->new;
        my $source_book  = $source_excel->Parse($sourcename)
            or die "Could not open source Excel file $sourcename: $!";

        foreach my $source_sheet_number ( 0 .. $source_book->{SheetCount} - 1 ) {
            my $source_sheet = $source_book->{Worksheet}[$source_sheet_number];

            next unless defined $source_sheet->{MaxRow};
            next unless $source_sheet->{MinRow} <= $source_sheet->{MaxRow};
            next unless defined $source_sheet->{MaxCol};
            next unless $source_sheet->{MinCol} <= $source_sheet->{MaxCol};

            foreach my $row_index ( $source_sheet->{MinRow} .. $source_sheet->{MaxRow} ) {
                foreach my $col_index ( $source_sheet->{MinCol} .. $source_sheet->{MaxCol} ) {
                    my $source_cell = $source_sheet->{Cells}[$row_index][$col_index];
                    if ($source_cell) {
                        print $source_cell->Value, ";";
                        open( $fh, '>>', $outFile ) or die "Error: $!";
                        print $fh $source_cell->Value, ";";
                        close $fh;
                    }
                }
                print "\n";
                open( $fh, '>>', $outFile ) or die "Error: $!";
                print $fh "\n";
                close $fh;
            }
        }
    }
}

xls2csv();
I'm actually NOT happy with it, since I'm opening and closing the files so often (I have many files with many lines), which is not very clever in terms of performance.
Currently I still don't know how to use split or Text::CSV in this case, in order to put everything into an array and open, write, and close each file only once.
Thank you for your answer, TLP.
I have two CSV files, file1.csv and file2.csv. I have to pick each row of column 3 in file1 and iterate through column 3 of file2 to find a match; if a match occurs, I need to display the complete matched rows (columns 1, 2, and 3) from file2.csv in a third CSV file. My code so far only fetches column 3 from both CSV files. How can I match column 3 of both files and display the matched rows? Please help.
File1:
Comp_Name,Date,Files
Component1,2013/04/01,/Com/src/folder1/folder2/newfile.txt;
Component1,2013/04/24,/Com/src/folder1/folder2/testfile24;
Component1,2013/04/24,/Com/src/folder1/folder2/testfile25;
Component1,2013/04/24,/Com/src/folder1/folder2/testfile26;
Component1,2013/04/25,/Com/src2;
File2:
Comp_name,Date,Files
Component1,2013/04/07,/Com/src/folder1/folder2/newfile.txt;
Component1,2013/04/24,/Com/src/folder1/folder2/testfile24;
Component1,2013/04/24,/Com/src/folder1/folder2/testfile25;
Component2,2013/04/23,/Com/src/folder1/folder2/newfile.txt;
Component3,2013/04/27,/Com/src/folder1/folder2/testfile24;
Component1,2013/04/25,/Com/src2;
Output format:
Comp_Name,Date,Files
Component1,2013/04/07,/Com/src/folder1/folder2/newfile.txt;
Component2,2013/04/23,/Com/src/folder1/folder2/newfile.txt;
Component1,2013/04/24,/Com/src/folder1/folder2/testfile24;
Component3,2013/04/27,/Com/src/folder1/folder2/testfile24;
Component1,2013/04/24,/Com/src/folder1/folder2/testfile25;
Component1,2013/04/25,/Com/src2;
Code:
use strict;
use warnings;

my $file1 = "C:\\pick\\file1.csv";
my $file2 = "C:\\pick\\file2.csv";
my $file3 = "C:\\pick\\file3.csv";
my $type;
my $type1;
my @fields;
my @fields2;

open( my $fh, '<:encoding(UTF-8)', $file1 ) or die "Could not open file '$file1' $!";    # Throw error if file doesn't open
while ( my $row = <$fh> ) {    # read each row till end of file
    chomp $row;
    @fields = split ",", $row;
    $type   = $fields[2];
    print "\n$type";
}

open( my $fh2, '<:encoding(UTF-8)', $file2 ) or die "Could not open file '$file2' $!";    # Throw error if file doesn't open
while ( my $row2 = <$fh2> ) {    # read each row till end of file
    chomp $row2;
    @fields2 = split ",", $row2;
    $type1   = $fields2[2];
    print "\n$type1";
    foreach ($type) {
        if ( $type eq $type1 ) {
            print $row2;
        }
    }
}
This is not a matter to overcomplicate. I would personally use the module Text::CSV_XS or, as already mentioned, Tie::Array::CSV here.
If you're having trouble using a module, I suppose this would be an alternative. You can modify it to your wants and needs; I used the data you supplied and got the results you want.
use strict;
use warnings;

open my $fh1, '<', 'file1.csv' or die "failed open: $!";
open my $fh2, '<', 'file2.csv' or die "failed open: $!";
open my $out, '>', 'file3.csv' or die "failed open: $!";

my %hash1 = map { $_ => 1 } <$fh1>;
my %hash2 = map { $_ => 1 } <$fh2>;

close $fh1;
close $fh2;

my @result =
    map  { join ',', $hash1{ $_->[2] } ? () : $_->[0], $_->[1], $_->[2] }
    sort { $a->[1] <=> $b->[1] || $a->[2] cmp $b->[2] || $a->[0] cmp $b->[0] }
    map  { s/\s*$//; [ split /,/ ] } keys %hash2;

print $out "$_\n" for @result;
close $out;
__OUTPUT__
Comp_name,Date,Files
Component1,2013/04/07,/Com/src/folder1/folder2/newfile.txt;
Component2,2013/04/23,/Com/src/folder1/folder2/newfile.txt;
Component1,2013/04/24,/Com/src/folder1/folder2/testfile24;
Component3,2013/04/27,/Com/src/folder1/folder2/testfile24;
Component1,2013/04/24,/Com/src/folder1/folder2/testfile25;
Component1,2013/04/25,/Com/src2;
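For completeness, here is a sketch of the same matching done with Text::CSV_XS, keyed on the third column only, as the question asks (file names as in the question):

use strict;
use warnings;
use Text::CSV_XS;

my $csv = Text::CSV_XS->new( { binary => 1, auto_diag => 1 } );

# Collect the third-column values present in file1
open my $fh1, '<', 'file1.csv' or die "failed open: $!";
my %in_file1;
while ( my $row = $csv->getline($fh1) ) {
    $in_file1{ $row->[2] } = 1;
}
close $fh1;

# Write out the file2 rows whose third column matched
open my $fh2, '<', 'file2.csv' or die "failed open: $!";
open my $out, '>', 'file3.csv' or die "failed open: $!";
$csv->eol("\n");
while ( my $row = $csv->getline($fh2) ) {
    $csv->print( $out, $row ) if $in_file1{ $row->[2] };
}
close $fh2;
close $out;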
This is a job for a hash (my %file1): instead of continually opening the files, you can read the contents into hashes:
@fields = split ",", $row;
$type = $fields[2];
$hash1{$type} = $row;
I see you have duplicates too, so a plain hash entry would be replaced upon each duplication. To cope with that, you can store an array of values in the hash:
$hash1{$type} = [] unless $hash1{$type};
push @{ $hash1{$type} }, $row;
Your next problem is how to traverse the arrays inside the hashes.
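For instance, a minimal sketch of that traversal, assuming %hash1 and %hash2 both hold array refs of rows keyed by the third column as above:

for my $type ( sort keys %hash1 ) {
    next unless exists $hash2{$type};        # key present in both files
    print "$_\n" for @{ $hash2{$type} };     # every file2 row with that key
}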
Here is an example using my Tie::Array::CSV module. It uses some clever Perl tricks to represent each CSV file as a Perl array of array refs. I use it to make an index of the first file, then to loop over the second file, and finally to output to the third.
#!/usr/bin/env perl
use strict;
use warnings;

use Tie::Array::CSV;

tie my @file1,  'Tie::Array::CSV', 'file1'  or die 'Cannot tie file1';
tie my @file2,  'Tie::Array::CSV', 'file2'  or die 'Cannot tie file2';
tie my @output, 'Tie::Array::CSV', 'output' or die 'Cannot tie output';

# set up a match table from file1
my %match = map { ( $_->[-1] => 1 ) } @file1[ 1 .. $#file1 ];

# header
push @output, $file2[0];

# iterate over file2
for my $row ( @file2[ 1 .. $#file2 ] ) {
    next unless $match{ $row->[-1] };    # check for match
    push @output, $row;                  # print to output if match
}
The output I get is different from yours, but I cannot figure out why your output does not include testfile25 and src2.