Getting a "No such file or directory" error in Perl

Here is my code. I am passing the files to a subroutine, but from inside the subroutine I am not able to open the file, and it throws this error:
"Couldn't open
inputFiles/Fundamental.FinancialLineItem.FinancialLineItem.SelfSourcedPublic.SHE.1.2017-01-11-2259.Full.txt
: No such file or directory at Practice_Dubugg.pl line 40."
use strict;
use warnings;
use Getopt::Std;
use FileHandle;
my %opts;
my $optstr = "i:o:";
getopts("$optstr", \%opts);
if($opts{i} eq '' || $opts{o} eq '' )
{
print "usage: perl TextCompare_Fund.pl <-i INPUTFILE> <-o MAPREDUCE OUTPUTFILE>\n";
die 1;
}
my $inputFilesPath=$opts{i};
my $outputFilesPath=$opts{o};
my @ifiles=`ls $inputFilesPath`;
my @ofiles=`ls $outputFilesPath`;
foreach my $ifile (@ifiles)
{
my $ifile_substr=substr("$ifile",0,-25);
foreach my $ofile (@ofiles)
{
my $ofile_substr=substr("$ofile",0,-21);
my $result=$ifile_substr cmp $ofile_substr;
if($result eq 0)
{
print "$result\n";
#print "$ifile\n";
compare($ifile,$ofile)
}
}
}
sub compare
{
my $afile="$_[0]";
my $bfile="$_[1]";
my $path1="$inputFilesPath/$afile";
my $path2="$outputFilesPath/$bfile";
#open FILE, "<", $path1 or die "$!:$path1";
open my $infile, "<", $path1 or die "Couldn't open $path1: $!";
my %a_lines;
my %b_lines;
my $count1=0;
while (my $line = <$infile>)
{
chomp $line;
$a_lines{$line} = undef;
$count1=$count1+1;
}
print"File1 records count : $count1\n";
close $infile;
my $file=substr("$afile",0,-25);
my $OUTPUT = "/hadoop/user/m6034690/Kishore/$file.comparision_result";
open my $outfile, "<", $path2 or die "Couldn't open $path2: $!";
open (OUTPUT, ">$OUTPUT") or die "Cannot open $OUTPUT \n";
my $count=0;
my $count2=0;
while (my $line = <$outfile>)
{
chomp $line;
$b_lines{$line} = undef;
$count2=$count2+1;
next if exists $a_lines{$line};
$count=$count+1;
print OUTPUT "$line \t===> The Line which is selected from file2/arg2 is mismatching/not available in file1\n";
}
print "File2 records count : $count2\n";
print "Total mismatching/unavailable records in file1 : $count\n";
close $outfile;
close OUTPUT;
}

Try adding the chomp lines shown below, right after this portion of your code (see the comment):
my $afile="$_[0]";
my $bfile="$_[1]";
my $path1="$inputFilesPath/$afile";
my $path2="$outputFilesPath/$bfile";
# Now add the chomps below, right under the portion above.
chomp $path1;
chomp $path2;
My test works now that the path is properly formatted.
My result:
File1 records count : 1
File2 records count : 1
Total mismatching/unavailable records in file1 : 1
It took a while to figure this issue out, and it was a bit confusing because the script's usage message says -i INPUTFILE and -o OUTPUTFILE,
which suggests passing file paths rather than folder paths. Regardless, the issue should be resolved.
Edit:
An even better option: add the chomp where the ls happens.
chomp( my @ifiles=`ls $inputFilesPath` );
chomp( my @ofiles=`ls $outputFilesPath` );
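For what it's worth, here is a minimal sketch (not part of the fix above, and assuming the same -i/-o directory options as the question) that avoids ls entirely by using opendir/readdir, so there are no trailing newlines to chomp in the first place:

use strict;
use warnings;
use Getopt::Std;

# Sketch only: list the directories with readdir instead of `ls`,
# so the entries never carry a trailing newline.
my %opts;
getopts('i:o:', \%opts);
my $inputFilesPath  = $opts{i};
my $outputFilesPath = $opts{o};

opendir my $idh, $inputFilesPath or die "Cannot open $inputFilesPath: $!";
my @ifiles = grep { -f "$inputFilesPath/$_" } readdir $idh;   # plain files only, skips . and ..
closedir $idh;

opendir my $odh, $outputFilesPath or die "Cannot open $outputFilesPath: $!";
my @ofiles = grep { -f "$outputFilesPath/$_" } readdir $odh;
closedir $odh;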

Related

Can't write to the file

Why can't I write output to the input file?
It prints fine, but it isn't writing to the file.
my $i;
my $regex = $ARGV[0];
for (@ARGV[1 .. $#ARGV]){
open (my $fh, "<", "$_") or die ("Can't open the file[$_] ");
$i++;
foreach (<$fh>){
open (my $file, '>>', '/results.txt') or die ("Can't open the file "); #input file
for (<$file>){
print "Given regexp: $regex\nfile$i:\n line $.: $1\n" if $_ =~ /\b($regex)\b/;
}
}
}
It's unclear whether your problem has been solved.
My best guess is that you want your program to search for the regex passed as the first parameter in the files named by the following parameters, appending the results to results.txt.
If that is right, then this is closer to what you need:
use strict;
use warnings;
use autodie;
my $i;
my $regex = shift;
open my $out, '>>', 'results.txt';
for my $filename (@ARGV) {
open my $fh, '<', $filename;
++$i;
while (<$fh>) {
next unless /\b($regex)\b/;
print $out "Given regexp: $regex\n";
print $out "file$i:\n";
print $out "line $.: $1\n";
last;
}
}
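Assuming the script is saved as, say, search.pl (the name is just for illustration), it would be run as

perl search.pl 'some_word' file1.txt file2.txt

and each run appends any matches it finds to results.txt in the current directory.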

perl + read multiple csv files + manipulate files + provide output_files

Apologies if this is a bit long-winded, but I would really appreciate an answer here as I am having difficulty getting this to work.
Building on this question here, I have a script that works on one CSV file (orig.csv) and produces the CSV file that I want (format.csv). What I want is to make this more generic so that it accepts any number of .csv files and produces an output CSV for each input file. Can anyone help?
#!/usr/bin/perl
use strict;
use warnings;
open my $orig_fh, '<', 'orig.csv' or die $!;
open my $format_fh, '>', 'format.csv' or die $!;
print $format_fh scalar <$orig_fh>; # Copy header line
my %data;
my @labels;
while (<$orig_fh>) {
chomp;
my @fields = split /,/, $_, -1;
my ($label, $max_val) = @fields[1,12];
if ( exists $data{$label} ) {
my $prev_max_val = $data{$label}[12] || 0;
$data{$label} = \@fields if $max_val and $max_val > $prev_max_val;
}
else {
$data{$label} = \@fields;
push @labels, $label;
}
}
for my $label (@labels) {
print $format_fh join(',', @{ $data{$label} }), "\n";
}
I was hoping to use this script from here, but am having great difficulty putting the two together:
#!/usr/bin/perl
use strict;
use warnings;
#If you want to open a new output file for every input file
#Do it in your loop, not here.
#my $outfile = "KAC.pdb";
#open( my $fh, '>>', $outfile );
opendir( DIR, "/data/tmp" ) or die "$!";
my @files = readdir(DIR);
closedir DIR;
foreach my $file (@files) {
open( FH, "/data/tmp/$file" ) or die "$!";
my $outfile = "output_$file"; #Add a prefix (anything, doesn't have to say 'output')
open(my $fh, '>', $outfile);
while (<FH>) {
my ($line) = $_;
chomp($line);
if ( $line =~ m/KAC 50/ ) {
print $fh $_;
}
}
close($fh);
}
The script reads all the files in the directory, finds the lines containing the string 'KAC 50', and appends those lines to an output_$file for that input file, so there will be one output_$file for every input file that is read.
Issues with this script that I have noted and was looking to fix:
- it reads the '.' and '..' entries in the directory and produces an
'output_.' and an 'output_..' file
- it will also do the same with this script file itself.
I was also trying to make the script dynamic, so that it works in whatever directory it is run from, by adding this code:
use Cwd qw();
my $path = Cwd::cwd();
print "$path\n";
and
opendir( DIR, $path ) or die "$!"; # open the current directory
open( FH, "$path/$file" ) or die "$!"; #open the file
EDIT: I have tried combining the versions but am getting errors. Advice greatly appreciated.
UserName#wabcl13 ~/Perl
$ perl formatfile_QforStackOverflow.pl
Parentheses missing around "my" list at formatfile_QforStackOverflow.pl line 13.
source dir -> /home/UserName/Perl
Can't use string ("/home/UserName/Perl/format_or"...) as a symbol ref while "strict refs" in use at formatfile_QforStackOverflow.pl line 28.
Combined code:
use strict;
use warnings;
use autodie; # this is used for the multiple files part...
#START::Getting current working directory
use Cwd qw();
my $source_dir = Cwd::cwd();
#END::Getting current working directory
print "source dir -> $source_dir\n";
my $output_prefix = 'format_';
opendir my $dh, $source_dir; #Changing this to work on current directory; changing back
for my $file (readdir($dh)) {
next if $file !~ /\.csv$/;
next if $file =~ /^\Q$output_prefix\E/;
my $orig_file = "$source_dir/$file";
my $format_file = "$source_dir/$output_prefix$file";
# .... old processing code here ...
## Start:: This part works on one file edited for this script ##
#open my $orig_fh, '<', 'orig.csv' or die $!; #line 14 and 15 above already do this!!
#open my $format_fh, '>', 'format.csv' or die $!;
#print $format_fh scalar <$orig_fh>; # Copy header line #orig needs changeing
print $format_file scalar <$orig_file>; # Copy header line #orig needs changing
my %data;
my @labels;
#while (<$orig_fh>) { #orig needs changing
while (<$orig_file>) {
chomp;
my @fields = split /,/, $_, -1;
my ($label, $max_val) = @fields[1,12];
if ( exists $data{$label} ) {
my $prev_max_val = $data{$label}[12] || 0;
$data{$label} = \@fields if $max_val and $max_val > $prev_max_val;
}
else {
$data{$label} = \@fields;
push @labels, $label;
}
}
for my $label (@labels) {
#print $format_fh join(',', @{ $data{$label} }), "\n"; #orig needs changing
print $format_file join(',', @{ $data{$label} }), "\n";
}
## END:: This part works on one file edited for this script ##
}
How do you plan on inputting the list of files to process and their preferred output destination? Maybe just have a fixed directory in which you want to process all the csv files, and prefix the results.
#!/usr/bin/perl
use strict;
use warnings;
use autodie;
my $source_dir = '/some/dir/with/csv/files';
my $output_prefix = 'format_';
opendir my $dh, $source_dir;
for my $file (readdir($dh)) {
next if $file !~ /\.csv$/;
next if $file =~ /^\Q$output_prefix\E/;
my $orig_file = "$source_dir/$file";
my $format_file = "$source_dir/$output_prefix$file";
# .... old processing code here ...
}
Alternatively, you could just have an output directory instead of prefixing the files. Either way, this should get you on your way.
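For what it's worth, here is a rough sketch (untested, reusing the variable names from the snippets above) of how the combined version might look once the per-file opens are done on real filehandles. Printing to the string $format_file instead of a filehandle is what produces the "Can't use string ... as a symbol ref" error shown in the question:

#!/usr/bin/perl
use strict;
use warnings;
use autodie;    # open/opendir failures die automatically

use Cwd qw();
my $source_dir    = Cwd::cwd();
my $output_prefix = 'format_';

opendir my $dh, $source_dir;
for my $file (readdir $dh) {
    next if $file !~ /\.csv$/;
    next if $file =~ /^\Q$output_prefix\E/;

    my $orig_file   = "$source_dir/$file";
    my $format_file = "$source_dir/$output_prefix$file";

    # Open real filehandles and print to $format_fh, not to the path string.
    open my $orig_fh,   '<', $orig_file;
    open my $format_fh, '>', $format_file;

    print $format_fh scalar <$orig_fh>;    # copy header line

    my (%data, @labels);
    while (<$orig_fh>) {
        chomp;
        my @fields = split /,/, $_, -1;
        my ($label, $max_val) = @fields[1, 12];
        if ( exists $data{$label} ) {
            my $prev_max_val = $data{$label}[12] || 0;
            $data{$label} = \@fields if $max_val and $max_val > $prev_max_val;
        }
        else {
            $data{$label} = \@fields;
            push @labels, $label;
        }
    }

    print $format_fh join(',', @{ $data{$_} }), "\n" for @labels;

    close $format_fh;
    close $orig_fh;
}
closedir $dh;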

opening file in perl gives errors?

I am writing a Perl script that reads a text file (which contains the absolute paths of many files, one per line), extracts the file name from each absolute path, and then appends all of the file names, separated by spaces, to the same file. So, consider a test.txt file:
D:\work\project\temp.txt
D:\work/tests/test.abc
C:/office/work/files.xyz
So after running the script the same file will contain:
D:\work\project\temp.txt
D:\work/tests/test.abc
C:/office/work/files.xyz
temp.txt test.abc files.xyz
I have this script revert.pl:
use strict;
foreach my $arg (@ARGV)
{
open my $file_handle, '>>', $arg or die "\nError trying to open the file $arg : $!";
print "Opened File : $arg\n";
my @lines = <$file_handle>;
my $all_names = "";
foreach my $line (@lines)
{
my @paths = split(/\\|\//, $line);
my $last = @paths;
$last = $last - 1;
my $name = $paths[$last];
$all_names = "$all_names $name";
}
print $file_handle "\n\n$all_names";
close $file_handle;
}
When I run the script I am getting the following error:
>> perl ..\revert.pl .\test.txt
Too many arguments for open at ..\revert.pl line 5, near "$arg or"
Execution of ..\revert.pl aborted due to compilation errors.
What is wrong over here?
UPDATE: The problem is that we are using a very old version of Perl, so I changed the code to:
use strict;
for my $arg (@ARGV)
{
print "$arg\n";
open (FH, ">>$arg") or die "\nError trying to open the file $arg : $!";
print "Opened File : $arg\n";
my $all_names = "";
my $line = "";
for $line (<FH>)
{
print "$line\n";
my @paths = split(/\\|\//, $line);
my $last = @paths;
$last = $last - 1;
my $name = $paths[$last];
$all_names = "$all_names $name";
}
print "$line\n";
if ($all_names == "")
{
print "Could not detect any file name.\n";
}
else
{
print FH "\n\n$all_names";
print "Success!\n";
}
close FH;
}
Now it's printing the following:
>> perl ..\revert.pl .\test.txt
.\test.txt
Opened File : .\test.txt
Could not detect any file name.
What could be wrong now?
Perhaps you are running an old Perl version, so you have to use the two-argument form of open:
open(File_handle, ">>$arg") or die "\nError trying to open the file $arg : $!";
Note that I wrote File_handle without the $. Also, the read and write operations on the file will be:
@lines = <File_handle>;
#...
print File_handle "\n\n$all_names";
#...
close File_handle;
Update: reading file lines:
open FH, "+>>$arg" or die "open file error: $!";
#...
while( $line = <FH> ) {
#...
}
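Another option, not from the answer above but a common pattern, is to avoid a mixed read/append handle entirely: read the whole file first, close it, then reopen it for appending. A rough sketch of the same task (two-argument open, to stay compatible with an old Perl), taking the last path component of each line:

use strict;

for my $arg (@ARGV) {
    # First pass: read the existing lines.
    open FH, "<$arg" or die "Error trying to open the file $arg : $!";
    my @lines = <FH>;
    close FH;

    # Build the list of file names from the last path component of each line.
    my $all_names = "";
    for my $line (@lines) {
        chomp $line;
        next if $line eq "";
        my @paths = split(/\\|\//, $line);
        $all_names = "$all_names $paths[-1]";
    }

    if ($all_names eq "") {
        print "Could not detect any file name.\n";
    }
    else {
        # Second pass: reopen for appending and add the names at the end.
        open FH, ">>$arg" or die "Error trying to open the file $arg : $!";
        print FH "\n\n$all_names";
        close FH;
        print "Success!\n";
    }
}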

copy text after a specific string from a file and append to another in perl

I want to extract the desired information from a file and append it to another. The first file starts with some header lines that have no specific pattern and simply end with the string "END OF HEADER". I wrote the following code to find the line that marks the end of the header:
$find = "END OF HEADER";
open FILEHANDLE, $filename_path;
while (<FILEHANDLE>) {
my $line = $_;
if ($line =~ /$find/) {
#??? what shall I do here???
}
}
but I don't know how I can get the rest of the file and append it to the other file.
Thank you for any help.
I guess if the content of the file isn't enormous, you can just load the whole file into a scalar, split it on "END OF HEADER", and then print the right-hand side of the split to the new file (appending):
open READHANDLE, 'readfile.txt' or die $!;
my $content = do { local $/; <READHANDLE> };
close READHANDLE;
my (undef,$restcontent) = split(/END OF HEADER/,$content);
open WRITEHANDLE, '>>writefile.txt' or die $!;
print WRITEHANDLE $restcontent;
close WRITEHANDLE;
This code will take the filenames from the command line, print all lines up to END OF HEADER from the first file, followed by all lines from the second file. Note that the output is sent to STDOUT, so you will have to redirect it, like this:
perl program.pl headfile.txt mainfile.txt > newfile.txt
Update: Now modified to print all of the first file after the END OF HEADER line, followed by all of the second file.
use strict;
use warnings;
my ($header_file, $main_file) = @ARGV;
open my $fh, '<', $header_file or die $!;
my $print;
while (<$fh>) {
print if $print;
$print ||= /END OF HEADER/;
}
open $fh, '<', $main_file or die $!;
print while <$fh>;
use strict;
use warnings;
use File::Slurp;
my @lines = read_file('readfile.txt');
while ( my $line = shift @lines) {
next unless ($line =~ m/END OF HEADER/);
last;
}
append_file('writefile.txt', @lines);
I believe this will do what you need:
use strict;
use warnings;
my $find = 'END OF HEADER';
my $fileContents;
{
local $/;
open my $fh_read, '<', 'theFile.txt' or die $!;
$fileContents = <$fh_read>;
}
my ($restOfFile) = $fileContents =~ /$find(.+)/s;
open my $fh_write, '>>', 'theFileToAppend.txt' or die $!;
print $fh_write $restOfFile;
close $fh_write;
my $status = 0;
my $find = "END OF HEADER";
open my $fh_write, '>', $file_write
or die "Can't open file $file_write $!";
open my $fh_read, '<', $file_read
or die "Can't open file $file_read $!";
LINE:
while (my $line = <$fh_read>) {
if ($line =~ /$find/) {
$status = 1;
next LINE;
}
print $fh_write $line if $status;
}
close $fh_read;
close $fh_write;

merging two files using perl, keeping a copy of the original file in another file

I have two files, A.ini and B.ini, and I want to merge both files into A.ini.
Examples of the files:
A.ini:
a=123
b=xyx
c=434
B.ini contains:
a=abc
m=shank
n=paul
My output in file A.ini should be like:
a=123abc
b=xyx
c=434
m=shank
n=paul
I want this merging to be done in Perl, and I want to keep a copy of the old A.ini file in some other place so that I can use the old copy.
A command line variant:
perl -lne '
($a, $b) = split /=/;
$v{$a} = $v{$a} ? $v{$a} . $b : $_;
END {
print $v{$_} for sort keys %v
}' A.ini B.ini >NEW.ini
How about:
#!/usr/bin/perl
use strict;
use warnings;
my %out;
my $file = 'path/to/A.ini';
open my $fh, '<', $file or die "unable to open '$file' for reading: $!";
while(<$fh>) {
chomp;
my ($key, $val) = split /=/;
$out{$key} = $val;
}
close $fh;
$file = 'path/to/B.ini';
open my $fh, '<', $file or die "unable to open '$file' for reading: $!";
while(<$fh>) {
chomp;
my ($key, $val) = split /=/;
if (exists $out{$key}) {
$out{$key} .= $val;
} else {
$out{$key} = $val;
}
}
close $fh;
$file = 'path/to/A.ini';
open $fh, '>', $file or die "unable to open '$file' for writing: $!";
foreach(keys %out) {
print $fh $_,'=',$out{$_},"\n";
}
close $fh;
The two files to be merged can be read in a single pass and don't need to be treated as separate source files. That allows the use of <> to read all files passed as parameters on the command line.
Keeping a backup copy of A.ini is simply a matter of renaming it before writing the merged data to a new file of the same name.
This program appears to do what you need.
use strict;
use warnings;
my $file_a = $ARGV[0];
my (@keys, %values);
while (<>) {
if (/\A\s*(.+?)\s*=\s*(.+?)\s*\z/) {
push @keys, $1 unless exists $values{$1};
$values{$1} .= $2;
}
}
rename $file_a, "$file_a.bak" or die qq(Unable to rename "$file_a": $!);
open my $fh, '>', $file_a or die qq(Unable to open "$file_a" for output: $!);
printf $fh "%s=%s\n", $_, $values{$_} for @keys;
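Assuming the program is saved as, say, merge.pl (the name is just for illustration), it would be run as

perl merge.pl A.ini B.ini

which reads both files through <>, renames the original A.ini to A.ini.bak as the backup copy, and writes the merged result to a new A.ini.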
Output (in A.ini):
a=123abc
b=xyx
c=434
m=shank
n=paul