How to read a file in Perl and create it if it doesn't exist?

In Perl, I know this method:
open( my $in, "<", "inputs.txt" );
reads a file, but it only does so if the file exists.
Doing it the other way, with the +:
open( my $in, "+>", "inputs.txt" );
writes/truncates the file if it exists, so I don't get the chance to read the file and store its contents in the program.
How do I read a file in Perl, taking into account whether or not the file exists?
Okay, I've edited my code, but the file still isn't being read. The problem is that it doesn't enter the loop. Anything mischievous in my code?
open( my $in, "+>>", "inputs.txt" ) or die "Can't open inputs.txt : $!\n";
while (<$in>) {
    print "Here!";
    my @subjects = ();
    my %information = ();
    $information{"name"}     = $_;
    $information{"studNum"}  = <$in>;
    $information{"cNum"}     = <$in>;
    $information{"emailAdd"} = <$in>;
    $information{"gwa"}      = <$in>;
    $information{"subjNum"}  = <$in>;
    for ( $i = 0; $i < $information{"subjNum"}; $i++ ) {
        my %subject = ();
        $subject{"courseNum"} = <$in>;
        $subject{"courseUnt"} = <$in>;
        $subject{"courseGrd"} = <$in>;
        push @subjects, \%subject;
    }
    $information{"subj"} = \@subjects;
    push @students, \%information;
}
print "FILE LOADED.\n";
close $in or die "Can't close inputs.txt : $!\n";

Use the proper file test operator:
use strict;
use warnings;
use autodie;
my $filename = 'inputs.txt';
unless (-e $filename) {
    # Create the file if it doesn't exist
    open my $fc, ">", $filename;
    close $fc;
}
# Work with the file
open my $fh, "<", $filename;
while ( my $line = <$fh> ) {
    # ...
}
close $fh;
But if the file is new (and therefore empty), the while loop body won't run. It's easier to read the file only if the test succeeds:
if (-e $filename) {
    # Work with the file
    open my $fh, "<", $filename;
    while ( my $line = <$fh> ) {
        # ...
    }
    close $fh;
}

You can use +>> for read/append; it creates the file if it doesn't exist and doesn't truncate it:
open(my $in,"+>>","inputs.txt");

First check whether the file exists or not. Check the sample code below:
#!/usr/bin/perl
use strict;
use warnings;

my $InputFile = $ARGV[0];

if ( -e $InputFile ) {
    print "File exists!";
    open FH, "<$InputFile";
    my @Content = <FH>;
    open OUT, ">outfile.txt";
    print OUT @Content;
    close(FH);
    close(OUT);
} else {
    print "File does not exist! Creating a new file.";
    open OUT, ">$InputFile";
    print OUT "Hello World";
    close(OUT);
}

Related

How to split a file of 5000 lines into several files of 200 lines each with Perl?

I have a file of 5000 lines and I want several files of 200 lines each, and I tried this:
#!/usr/bin/perl
use strict;
use warnings;
my $targetfile = '200_lines.txt';
my $filename = '5000_lines.txt';
open ( my $DATA, '<', $filename ) or die "Could not open file '$filename': $!";
while ( my $line = <$DATA> ) {
    my $counter++;
    open (my $FILE, '>>', $targetfile ) or die "Could not open file '$targetfile': $!";
    print $FILE $line;
    close $FILE;
    if ( $counter % 200 == 0 ) {
        $targetfile =~ s/200/$counter/;
    }
}
My $counter variable is still at 1, and I don't know why. So I got just one file, named 200_lines.txt, with 5000 lines in it.
You can use sprintf to generate a new filename after every 200 lines, and use $. to keep track of the line number in the file.
The script below will split the larger file into smaller files of 200 lines each.
#!/usr/bin/perl
use strict;
use warnings;
open my $rfh,'<','file_5000' or die "unable to open file : $! \n";
my $filecount=0;
my $wfh;
while (<$rfh>) {
    if ( ($. - 1) % 200 == 0 ) {
        close($wfh) if $wfh;
        open($wfh, '>', sprintf("file%02d", ++$filecount)) or die $!;
    }
    print $wfh $_;
}
close($rfh);
$counter is declared with my inside your while block, so a fresh variable is created (and thrown away) on every iteration.
You don't really need to do it like that, as you can test $. for the current line number.
Don't call your filehandle $DATA. There's a special filehandle called <DATA>.
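For illustration only, here is a minimal sketch that keeps the counter-based approach by declaring the counter once, before the loop, so it persists across iterations (the filenames follow the original 200/400/600 scheme and are assumptions):
use strict;
use warnings;

open my $input, '<', '5000_lines.txt' or die "Could not open '5000_lines.txt': $!";

my $counter    = 0;                  # declared once, outside the loop
my $targetfile = '200_lines.txt';
open my $out, '>', $targetfile or die "Could not open '$targetfile': $!";

while ( my $line = <$input> ) {
    $counter++;
    print {$out} $line;
    # start a new output file after every 200th line, unless we just read the last line
    if ( $counter % 200 == 0 and not eof $input ) {
        close $out;
        $targetfile = ( $counter + 200 ) . '_lines.txt';
        open $out, '>', $targetfile or die "Could not open '$targetfile': $!";
    }
}
close $out;
close $input;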
How about:
#!/usr/bin/perl
use strict;
use warnings;
open( my $input, '<', '5000_lines.txt' ) or die $!;
open( my $output, '>', '200_lines.txt' ) or die $!;
while ( my $line = <$input> ) {
    print {$output} $line;
    unless ( $. % 200 or eof $input ) {
        close($output);
        open( $output, '>', ( $. + 200 ) . "_lines.txt" ) or die $!;
    }
}
close($input);
close($output);
This creates files:
200_lines.txt
400_lines.txt
600_lines.txt
etc.
You have a bunch of errors in your code.
#!/usr/bin/perl
use strict;
use warnings;
# Creating a file with 5000 lines
my $filename = '5000_lines.txt';
open ( my $DATA, '>', $filename ) or die "Could not open file '$filename': $!";
for ( my $i = 0; $i < 5000; $i++ ) {
    print $DATA "$i\n";
}
close ($DATA);

my $targetfile = '200_lines.txt';
open ( $DATA, '<', $filename ) or die "Could not open file '$filename': $!";
my $counter = 1;
my $num = 0;
my $flag = 1;
while ( my $line = <$DATA> ) {
    if ( $flag == 1 ) {
        open ( FILE, '>', $targetfile . '_' . $num ) or die "Could not open file '${targetfile}_$num': $!";
    }
    print FILE $line;
    $flag = 0;
    if ( $counter % 200 == 0 ) {
        $num = $counter / 200;
        close FILE;
        $flag = 1;
    }
    $counter++;
}
close (FILE);
It will break the large file into smaller chunks of 200 lines each.

Combining two CSV files together in Perl

Hi, I'm very new to Perl and I've got little knowledge of it, but I'm trying to create a script that combines two .csv files into a new one.
#!/usr/bin/env perl
use strict;
use warnings;
use Text::CSV_XS;
my @rows;
{   # Read the CSV file
    my $csv = Text::CSV_XS->new() or die "Cannot use Text::CSV_XS ($!)";
    my $file = "file.csv";
    open my $fh, '<', $file or die "Cannot open $file ($!)";
    while (my $row = $csv->getline($fh)) {
        push @rows, $row;
    }
    $csv->eof or $csv->error_diag();
    close $fh or die "Failed to close $file ($!)";
}
{   # Gather the data
    foreach my $row (@rows) {
        foreach my $col (@{$row}) {
            $col = uc($col);
        }
        print "\n";
    }
}
# (over)Write the data
# Needs to be changed to ADD data
{
    my $csv = Text::CSV_XS->new({ binary => 1, escape_char => undef })
        or die "Cannot use Text::CSV ($!)";
    my $file = "output.csv";
    open my $fh, '>', $file or die "Cannot open $file ($!)";
    $csv->eol("\n");
    foreach my $row (@rows) {
        $csv->print($fh, \@{$row}) or die "Failed to write $file ($!)";
    }
    close $fh or die "Failed to close $file ($!)";
}
This is my current code. I know it overwrites the data instead of actually adding it to the new file, but this is as far as I managed to get with the limited time and knowledge of Perl I've got.
The CSV format of both files is the same:
"Header1";"Header2";"Header3";"Header4";"Header5"
"Data1";"Data2";"Data3";"Data4";"Data5"
"Data1";"Data2";"Data3";"Data4";"Data5"
"Data1";"Data2";"Data3";"Data4";"Data5"
"Data1";"Data2";"Data3";"Data4";"Data5"
"Data1";"Data2";"Data3";"Data4";"Data5"
I believe the issue is here:
open my $fh, '>', $file
or die "Cannot open $file ($!)";
If I remember my Perl properly, the line should read:
open my $fh, '>>', $file
or die "Cannot open $file ($!)";
The >> should open the file handle $fh for append instead of for overwrite.
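If both files carry the same header row, appending the second file wholesale will duplicate that header. Here is a hedged sketch of appending only the data rows of a second file, using Text::CSV_XS as in the question (the file names and the ';' separator are assumptions based on the sample data shown):
#!/usr/bin/env perl
use strict;
use warnings;
use Text::CSV_XS;

my $csv = Text::CSV_XS->new({ binary => 1, sep_char => ';', eol => "\n" })
    or die "Cannot use Text::CSV_XS";

open my $in,  '<',  'file2.csv'  or die "Cannot open file2.csv: $!";
open my $out, '>>', 'output.csv' or die "Cannot open output.csv: $!";

my $header = $csv->getline($in);          # read and discard the header row
while ( my $row = $csv->getline($in) ) {
    $csv->print($out, $row);              # append each data row
}

close $in;
close $out;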
You could try something like this:
opendir(hand,"DIRPATH");
#files = readdir(hand);
closedir(hand);
foreach(#files){
if(/\.csv$/i) { #if the filename has .csv at the end
push(#csvfiles,$_);
}
}
foreach(#csvfiles) {
$csvfile=$_;
open(hanr,"DIRPATH".$csvfile)or die"error $!\n"; #read handler
open(hanw , ">>DIRPATH"."outputfile.csv") or die"error $! \n"; #write handler for creating new sorted files
#lines=();
#lines=<hanr>;
foreach $line (#lines){
chomp $line;
$count++;
next unless $count; # skip header i.e the first line containing stock details
print hanw join $line,"\n";
}
$count= -1;
close(hanw);
close(hanr);
}`

Extracting specific multi-line records that are pipe-delimited in Perl

I have a file that looks like
NAME|JOHN|TOKYO|JPN
AGE|32|M
INFO|SINGLE|PROFESSIONAL|IT
NAME|MARK|MANILA|PH
AGE|37|M
INFO|MARRIED|PROFESSIONAL|BPO
NAME|SAMANTHA|SYDNEY|AUS
AGE|37|F
INFO|MARRIED|PROFESSIONAL|OFFSHORE
NAME|LUKE|TOKYO|JPN
AGE|27|M
INFO|SINGLE|PROFESSIONAL|IT
I want to separate the records by country. I have stored each line in the array variable @fields:
my @fields = split(/\|/, $_ );
making $fields[3] the basis for sorting. I want it separated into 2 output text files:
OUTPUT TEXT FILE 1:
NAME|JOHN|TOKYO|JPN
AGE|32|M
INFO|SINGLE|PROFESSIONAL|IT
NAME|LUKE|TOKYO|JPN
AGE|27|M
INFO|SINGLE|PROFESSIONAL|IT
OUTPUT TEXT FILE 2
NAME|MARK|MANILA|PH
AGE|37|M
INFO|MARRIED|PROFESSIONAL|BPO
NAME|SAMANTHA|SYDNEY|AUS
AGE|37|F
INFO|MARRIED|PROFESSIONAL|OFFSHORE
Putting everything that is from JPN into output text file 1 and the non-JPN countries into output text file 2.
Here's the code that I'm trying to get working:
use strict;
use warnings;
use Data::Dumper;
use Carp qw(croak);
my @fields;
my $tmp_var;
my $count;
my ($line, $i);
my $filename = 'data.txt';
open(my $input_fh, '<', $filename ) or croak "Can't open $filename: $!";
open(OUTPUTA, ">", 'JPN.txt') or die "wsl_reformat.pl: could not open $ARGV[0]";
open(OUTPUTB, ">", 'Non-JPN.txt') or die "wsl_reformat.pl: could not open $ARGV[0]";
my $fh;
while (<$input_fh>) {
    chomp;
    my @fields = split /\|/;
    if ($fields[0] eq 'NAME') {
        for ($i = 1; $i < @fields; $i++) {
            if ($fields[3] eq 'JPN') {
                $fh = $_;
                print OUTPUTA $fh;
            }
            else {
                $fh = $_;
                print OUTPUTB $fh;
            }
        }
    }
}
close(OUTPUTA);
close(OUTPUTB);
Still no luck with it :(
Here is the way I think ikegami was suggesting, but I've never tried this before (although it gave the correct results).
#!/usr/bin/perl
use strict;
use warnings;
open my $jpn_fh, ">", 'o33.txt' or die $!;
open my $other_fh, ">", 'o44.txt' or die $!;
my $fh;
while (<DATA>) {
    if (/^NAME/) {
        if (/JPN$/) {
            $fh = $jpn_fh;
        }
        else {
            $fh = $other_fh;
        }
    }
    print $fh $_;
}
close $jpn_fh or die $!;
close $other_fh or die $!;
__DATA__
NAME|JOHN|TOKYO|JPN
AGE|32|M
INFO|SINGLE|PROFESSIONAL|IT
NAME|MARK|MANILA|PH
AGE|37|M
INFO|MARRIED|PROFESSIONAL|BPO
NAME|SAMANTHA|SYDNEY|AUS
AGE|37|F
INFO|MARRIED|PROFESSIONAL|OFFSHORE
NAME|LUKE|TOKYO|JPN
AGE|27|M
INFO|SINGLE|PROFESSIONAL|IT
You didn't say what you needed help with, so I'm assuming it's coming up with an algorithm. Here's a good one:
Open the file to read.
Open the file for the JPN entries.
Open the file for the non-JPN entries.
While not eof,
    Read a line.
    Parse the line.
    If it's the first line of a record,
        If the person's country is JPN,
            Set current file handle to the file handle for JPN entries.
        Else,
            Set current file handle to the file handle for non-JPN entries.
    Print the line to the current file handle.
use feature qw( say );    # needed for say below

my $jpn_qfn = '...';
my $other_qfn = '...';

open(my $jpn_fh, '>', $jpn_qfn)
    or die("Can't create $jpn_qfn: $!\n");
open(my $other_fh, '>', $other_qfn)
    or die("Can't create $other_qfn: $!\n");

my $fh;
while (<>) {
    chomp;
    my @fields = split /\|/;
    if ($fields[0] eq 'NAME') {
        $fh = $fields[3] eq 'JPN' ? $jpn_fh : $other_fh;
    }
    say $fh $_;
}
#!/usr/bin/env perl
use 5.012;
use autodie;
use strict;
use warnings;
# store per country output filehandles
my %output;
# since this is just an example, read from __DATA__ section
while (my $line = <DATA>) {
    # split the fields
    my @cells = split /[|]/, $line;
    # if first field is NAME, this is a new record
    if ($cells[0] eq 'NAME') {
        # get the country code, strip trailing whitespace
        (my $country = $cells[3]) =~ s/\s+\z//;
        # if we haven't created an output file for this
        # country yet, do so
        unless (defined $output{$country}) {
            open my $fh, '>', "$country.out";
            $output{$country} = $fh;
        }
        my $out = $output{$country};
        # output this and the next two lines to the
        # country-specific output file
        print $out $line, scalar <DATA>, scalar <DATA>;
    }
}
close $_ for values %output;
__DATA__
NAME|JOHN|TOKYO|JPN
AGE|32|M
INFO|SINGLE|PROFESSIONAL|IT
NAME|MARK|MANILA|PH
AGE|37|M
INFO|MARRIED|PROFESSIONAL|BPO
NAME|SAMANTHA|SYDNEY|AUS
AGE|37|F
INFO|MARRIED|PROFESSIONAL|OFFSHORE
NAME|LUKE|TOKYO|JPN
AGE|27|M
INFO|SINGLE|PROFESSIONAL|IT
Thanks for your help, heaps! I was able to solve this problem in Perl, many thanks.
#!/usr/local/bin/perl
use strict;
use warnings;
use Data::Dumper;
use Carp qw(croak);
my @fields;
my $tmp_var;
my ($rec_type, $country);
my $filename = 'data.txt';

open (my $input_fh, '<', $filename ) or croak "Can't open $filename: $!";
open my $OUTPUTA, ">", 'o33.txt' or die $!;
open my $OUTPUTB, ">", 'o44.txt' or die $!;

my $Combline;
while (<$input_fh>) {
    $_ = _trim($_);
    @fields   = split (/\|/, $_);
    $rec_type = $fields[0];
    $country  = $fields[3];
    if ($rec_type eq 'NAME') {
        if ($country eq 'JPN') {
            *Combline = $OUTPUTA;
        }
        else {
            *Combline = $OUTPUTB;
        }
    }
    print Combline;
}
close $OUTPUTA or die $!;
close $OUTPUTB or die $!;

sub _trim {
    my $word = shift;
    if ( $word ) {
        $word =~ s/\s*\|/\|/g;    # remove spaces before the delimiters
        $word =~ s/"//g;          # remove double quotes
    }
    return $word;
}

Copy text after a specific string from a file and append it to another in Perl

I want to extract the desired information from a file and append it to another. The first file starts with some header lines that have no specific pattern and simply end with the "END OF HEADER" string. I wrote the following code to find the matching line for the end of the header:
$find = "END OF HEADER";
open FILEHANDLE, $filename_path;
while (<FILEHANDLE>) {
    my $line = $_;
    if ($line =~ /$find/) {
        # ??? what shall I do here ???
    }
}
but I don't know how I can get the rest of the file and append it to the other file.
Thank you for any help.
I guess if the content of the file isn't enormous, you can just load the whole file into a scalar, split it on "END OF HEADER", then print the right-hand side of the split to the new file (appending):
open READHANDLE, 'readfile.txt' or die $!;
my $content = do { local $/; <READHANDLE> };
close READHANDLE;
my (undef,$restcontent) = split(/END OF HEADER/,$content);
open WRITEHANDLE, '>>writefile.txt' or die $!;
print WRITEHANDLE $restcontent;
close WRITEHANDLE;
This code will take the filenames from the command line, print all lines up to END OF HEADER from the first file, followed by all lines from the second file. Note that the output is sent to STDOUT, so you will have to redirect it, like this:
perl program.pl headfile.txt mainfile.txt > newfile.txt
Update: Now modified to print all of the first file after the END OF HEADER line, followed by all of the second file.
use strict;
use warnings;
my ($header_file, $main_file) = @ARGV;
open my $fh, '<', $header_file or die $!;
my $print;
while (<$fh>) {
    print if $print;
    $print ||= /END OF HEADER/;
}
open $fh, '<', $main_file or die $!;
print while <$fh>;
use strict;
use warnings;
use File::Slurp;
my @lines = read_file('readfile.txt');
while ( my $line = shift @lines ) {
    next unless ($line =~ m/END OF HEADER/);
    last;
}
append_file('writefile.txt', @lines);
I believe this will do what you need:
use strict;
use warnings;
my $find = 'END OF HEADER';
my $fileContents;
{
    local $/;
    open my $fh_read, '<', 'theFile.txt' or die $!;
    $fileContents = <$fh_read>;
}
my ($restOfFile) = $fileContents =~ /$find(.+)/s;
open my $fh_write, '>>', 'theFileToAppend.txt' or die $!;
print $fh_write $restOfFile;
close $fh_write;
my $status = 0;
my $find = "END OF HEADER";
open my $fh_write, '>', $file_write
or die "Can't open file $file_write $!";
open my $fh_read, '<', $file_read
or die "Can't open file $file_read $!";
LINE:
while (my $line = <$fh_read>) {
    if ($line =~ /$find/) {
        $status = 1;
        next LINE;
    }
    print $fh_write $line if $status;
}
close $fh_read;
close $fh_write;

Merging two files using Perl, keeping a copy of the original file in another file

I have two files, A.ini and B.ini, and I want to merge both files into A.ini.
Examples of the files:
A.ini:
a=123
b=xyx
c=434
B.ini contains:
a=abc
m=shank
n=paul
My output in file A.ini should be like:
a=123abc
b=xyx
c=434
m=shank
n=paul
I want this merging to be done in Perl, and I want to keep a copy of the old A.ini file somewhere else so I can use the old copy.
A command line variant:
perl -lne '
($a, $b) = split /=/;
$v{$a} = $v{$a} ? $v{$a} . $b : $_;
END {
print $v{$_} for sort keys %v
}' A.ini B.ini >NEW.ini
How about:
#!/usr/bin/perl
use strict;
use warnings;
my %out;

my $file = 'path/to/A.ini';
open my $fh, '<', $file or die "unable to open '$file' for reading: $!";
while (<$fh>) {
    chomp;
    my ($key, $val) = split /=/;
    $out{$key} = $val;
}
close $fh;

$file = 'path/to/B.ini';
open $fh, '<', $file or die "unable to open '$file' for reading: $!";
while (<$fh>) {
    chomp;
    my ($key, $val) = split /=/;
    if (exists $out{$key}) {
        $out{$key} .= $val;
    } else {
        $out{$key} = $val;
    }
}
close $fh;

$file = 'path/to/A.ini';
open $fh, '>', $file or die "unable to open '$file' for writing: $!";
foreach (keys %out) {
    print $fh $_, '=', $out{$_}, "\n";
}
close $fh;
The two files to be merged can be read in a single pass and don't need to be treated as separate source files. That allows the use of <> to read all files passed as parameters on the command line.
Keeping a backup copy of A.ini is simply a matter of renaming it before writing the merged data to a new file of the same name.
This program appears to do what you need.
use strict;
use warnings;
my $file_a = $ARGV[0];
my (@keys, %values);

while (<>) {
    if (/\A\s*(.+?)\s*=\s*(.+?)\s*\z/) {
        push @keys, $1 unless exists $values{$1};
        $values{$1} .= $2;
    }
}

rename $file_a, "$file_a.bak" or die qq(Unable to rename "$file_a": $!);

open my $fh, '>', $file_a or die qq(Unable to open "$file_a" for output: $!);
printf $fh "%s=%s\n", $_, $values{$_} for @keys;
Output (in A.ini):
a=123abc
b=xyx
c=434
m=shank
n=paul