perl "or" to go through a list - perl

I have written a small script that goes looking for a file with a certain ID-number in three different folders. My first attempt to do this was this:
open my $fh, "<", "dir1/file.o$ARGV[0]"
or open my $fh, "<", "dir2/file.o$ARGV[0]"
or open my $fh, "<", "dir3/file.o$ARGV[0]"
or die "Couldn't open `file.o$ARGV[0]' for reading: $!";
This resulted in an empty file handle for files in dir2, which I used for testing, and I have now written an elaborate if(...) else if (...) structure to do what I want. But I still don't understand why my first approach didn't work, so I'm hoping for some insights.
My expectation was that it would try to open the first file, if that failed, look at what comes after the or, try to open that file and so on. Where am I going wrong?
Bonus question: Is there an elegant way to do this?

You're duplicating the declaration of $fh. Move that out:
my $fh;
open $fh, '<', 'test1'
or open $fh, '<', 'test2'
or open $fh, '<', 'test3'
or die "Can't open whatever: $!";

If you enable warnings, it will tell you: "my" variable $fh masks earlier declaration in same statement.
As an alternative, you can use foreach to loop over files,
my @files = map "dir$_/file.o$ARGV[0]", 1 .. 3;
my $fh;
for (@files) {
open($fh, "<", $_) and last;
undef $fh;
}
$fh or die $!;
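If you prefer to avoid the explicit loop, here is a minimal sketch of the same idea using the core module List::Util; the directory layout is the one from the question, and the same caveat about open failures applies:
use List::Util qw(first);

# Pick the first candidate path that exists as a plain file, then open it once.
my $file = first { -f } map "dir$_/file.o$ARGV[0]", 1 .. 3;
defined $file or die "No file.o$ARGV[0] found in dir1..dir3\n";
open my $fh, '<', $file or die "Couldn't open '$file' for reading: $!";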

"Is there an elegant way to do this?"
The warning is because of multiple declarations of $fh, as has already been explained, but I would prefer to see the file selected separately, because
There may be more than one file with the desired ID
If the file is in dir1 or dir2 but the open fails for some other reason, your program will go on to try dir3 and wrongly report No such file or directory
I suggest something like this
use strict;
use warnings;
my ($id) = @ARGV;
my $filename = do {
my @files = grep -f, map "$_/file.o$id", qw/ dir1 dir2 dir3 /;
die @files . " matching files found" unless @files == 1;
$files[0];
};
open my $fh, '<', $filename or die qq{Couldn't open "$filename" for reading: $!};

Related

Counting number of lines with conditions

This is my script count.pl. I am trying to count the number of lines in a file.
The script's code:
chdir $filepath;
if (-e "$filepath"){
$total = `wc -l < file.list`;
printf "there are $total number of lines in file.list";
}
I can get a correct output, but I do not want to count blank lines or anything in the file that starts with #. Any idea?
As this is already a Perl program, open the file and read it, filtering out the lines that don't count:
open my $fh, '<', $filename or die "Can't open $filename: $!";
my $num_lines = grep { not /^$|^\s*#/ } <$fh>;
where $filename is "file.list." If by "blank lines" you mean also lines with spaces only then chagne regex to /^\s*$|^\s*#/. See grep, and perlretut for regex used in its condition.
The filehandle $fh gets closed when control exits the current scope, or you can add close $fh; once the file isn't needed any more. Or wrap it in a block with do:
my $num_lines = do {
open my $fh, '<', $filename or die "Can't open $filename: $!";
grep { not /^$|^\s*#/ } <$fh>;
};
This makes sense if the sole purpose of opening that file is to count lines.
Another thing though: an operation like chdir should always be checked, and then there is no need for the race-sensitive if (-e $filepath) either. Altogether
# Perhaps save the old cwd first so to be able to return to it later
#my $old_cwd = Cwd::cwd;
chdir $filepath or die "Can't chdir to $filepath: $!";
open my $fh, '<', $filename or die "Can't open $filename: $!";
my $num_lines = grep { not /^$|^\s*#/ } <$fh>;
A couple of other notes:
There is no reason for printf here. For all normal prints use say, for which you need use feature qw(say); at the beginning of the program. See the feature pragma.
Just in case, allow me to add: every program must have at the beginning
use warnings;
use strict;
Perhaps the original intent of the code in the question is to allow a program to try a non-existing location, and not die? In any case, one way to keep the -e test, as asked for
#my $old_cwd = Cwd::cwd;
chdir $filepath or warn "Can't chdir to $filepath: $!";
my $num_lines;
if (-e $filepath) {
open my $fh, '<', $filename or die "Can't open $filename: $!";
$num_lines = grep { not /^$|^\s*#/ } <$fh>;
}
where I still added a warning if chdir fails. Remove that if you really don't want it. I also added a declaration of the variable that is assigned the number of lines, with my $num_lines;. If it is declared earlier in your real code then of course remove that line here.
perl -ne '$n++ unless /^$|^#/ or eof; print "$n\n" if eof'
Works with multiple files too.
perl -ne '$n++ unless /^$|^#/ or eof; END {print "$n\n"}'
Better for a single file.
open(my $fh, '<', $filename);
my $n = 0;
for(<$fh>) { $n++ unless /^$|^#/}
print $n;
Using sed to filter out the "unwanted" lines in a single file:
sed '/^\s*#/d;/^\s*$/d' infile | wc -l
Obviously, you can also replace infile with a list of files.
The solution is very simple, no magic needed.
use strict;
use warnings;
use feature 'say';
my $count = 0;
while( <> ) {
$count++ unless /^\s*$|^\s*#/;
}
say "Total $count lines";
Reference:
<>

Perl - Copying a file from one location to another but the content is not copied

I am writing a script in Perl where I create a file and get input from the user for it, but when I copy that file to another location, the copy is created but it is empty. My code is:
#!/usr/bin/perl -w
for($i = 1;$i<5;$i++)
{
open(file1,"</u/man/fr$i.txt");
print "Enter text for file $i";
$txt = <STDIN>;
print file1 $txt;
open(file2,">/u/man/result/fr$i.txt");
while(<file1>)
{
print file2 $_;
}
close(file1);
close(file2);
}
fr1 to fr4 are being created, but they are empty. When I run my code it asks for input, I provide the input, and the code runs without error, but the files are still empty. Please help.
In line number 4 I also changed < to >, as I thought creating a new file might need that, but it is still not working.
You need to close the filehandle that was written to in order to be able to read from that file.
use warnings;
use strict;
use feature 'say';
for my $i (1..4)
{
my $file = "file_$i.txt";
open my $fh, '>', $file or die "Can't open $file: $!";
say $fh "Written to $file";
# Opening the same filehandle first *closes* it if already open
open $fh, '<', $file or die "Can't open $file: $!";
my $copy = "copy_$i.txt";
open my $fh_cp, '>', $copy or die "Can't open $copy: $!";
while (<$fh>) {
print $fh_cp $_;
}
close $fh_cp; # in case of early errors in later iterations
close $fh;
}
This creates the four files, file_1.txt etc, and their copies, copy_1.txt etc.
Please note the compulsory checking whether open worked.
You can't write to a filehandle that's not open for writing. You can't read from a filehandle that's not open for reading. Never ignore the return value of open.
#!/usr/bin/perl
use warnings; # Be warned about mistakes.
use strict; # Prohibit stupid things.
for my $i (1 .. 4) { # lexical variable, range
open my $FH1, '>', "/u/man/fr$i.txt" # 3 argument open, lexical filehandle, open for writing
or die "$i: $!"; # Checking the return value of open
print "Enter text for file $i: ";
my $txt = <STDIN>;
print {$FH1} $txt;
open my $FH2, '<', "/u/man/fr$i.txt" # Reopen for reading.
or die "$i: $!";
open my $FH3, '>', "/u/man/result/fr$i.txt" or die "$i: $!";
while (<$FH2>) {
print {$FH3} $_;
}
close $FH3;
}
I opened the file in write mode using filehandle 1. Then I opened the file again in read mode using the same filehandle 1, and then I opened filehandle 2 for the destination. It is working fine for me now.
system("cp myfile1.txt /somedir/myfile2.txt")
`cp myfile1.txt /somedir/myfile2.txt`
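Within Perl itself, the core module File::Copy does the same thing without shelling out; a minimal sketch:
use File::Copy qw(copy);

copy('myfile1.txt', '/somedir/myfile2.txt')
    or die "Copy failed: $!";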

Perl Script: sorting through log files.

Trying to write a script which opens a directory and reads multiple log files line by line, searching for information such as, for example:
"Attendance = 0". Previously I have used grep "Attendance =" * to find this information, but now I am trying to write a script to do the search.
Need your help to finish this task.
#!/usr/bin/perl
use strict;
use warnings;
my $dir = '/path/';
opendir (DIR, $dir) or die $!;
while (my $file = readdir(DIR))
{
print "$file\n";
}
closedir(DIR);
exit 0;
What's your perl experience?
I'm assuming each file is a text file. I'll give you a hint. Try to figure out where to put this code.
# Now to open and read a text file.
my $fn='file.log';
# $! is a variable which holds a possible error msg.
open(my $INFILE, '<', $fn) or die "ERROR: could not open $fn. $!";
my @filearr=<$INFILE>; # Read the whole file into an array.
close($INFILE);
# Now look in @filearr, which has one entry per line of the original file.
exit; # Normal exit
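One way to put that hint together with your readdir loop is the sketch below; the path and the pattern are the ones from the question, so adjust as needed:
use strict;
use warnings;

my $dir = '/path/';
opendir my $dh, $dir or die "Can't open $dir: $!";
while (my $file = readdir $dh) {
    next unless -f "$dir/$file";    # skip . and .. and anything that is not a plain file
    open my $fh, '<', "$dir/$file" or die "Can't open $dir/$file: $!";
    while (my $line = <$fh>) {
        print "$file: $line" if $line =~ /Attendance =/;
    }
    close $fh;
}
closedir $dh;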
I prefer to use File::Find::Rule for things like this. It preserves path information, and it's easy to use. Here's an example that does what you want.
use strict;
use warnings;
use File::Find::Rule;
my $dir = '/path/';
my $type = '*';
my @files = File::Find::Rule->file()
->name($type)
->in(($dir));
for my $file (@files){
print "$file\n\n";
open my $fh, '<', $file or die "can't open $file: $!";
while (my $line = <$fh>){
if ($line =~ /Attendance =/){
print $line;
}
}
}

Perl reading and writing in files

Alright, so I'm back with another question. I know in Python there is a way to read in a file without specifying which file it will be, until you are in the command prompt. So basically you can set the script up so that you can read in any file you want and don't have to go back and change the coding every time. Is there a way to do this in Perl? If so, can you write files like that too? Thanks.
This is what I have:
open (LOGFILE, "UNSUCCESSFULOUTPUT.txt") or die "Can't find file";
open FILE, ">", "output.txt" or die $!;
while(<LOGFILE>){
print FILE "ERROR in line $.\n" if (/Error/);
}
close FILE;
close LOGFILE;
This is what I have now:
#!/usr/local/bin/perl
my $argument1 = $ARGV[0];
open (LOGFILE, "<$argument1") or die "Can't find file";
open FILE, ">>output.txt" or die $!;
while(<LOGFILE>){
print FILE "ERROR in line $.\n" if (/Error/);
}
close FILE;
close LOGFILE;
And it's still not appending...
Command line arguments are provided in @ARGV. You can do as you please with them, including passing them as file names to open.
my ($in_qfn, $out_qfn) = @ARGV;
open(my $in_fh, '<', $in_qfn ) or die $!;
open(my $out_fh, '>', $out_qfn) or die $!;
print $out_fh $_ while <$in_fh>;
But that's not a very unixy way of doing things. In unix tradition, the following will read from every file specified on the command line, one line at a time:
while (<>) {
...
}
Output is usually placed in files through redirection.
#!/usr/bin/env perl
# This is mycat.pl
print while <>;
# Example usage.
mycat.pl foo bar > baz
# Edit foo in-place.
perl -i mycat.pl foo
The only time one usually touches @ARGV is to process options, and even then, one usually uses Getopt::Long instead of touching @ARGV directly.
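A minimal Getopt::Long sketch (the option name here is only an example): options are consumed first, and whatever is left in @ARGV is still available to the diamond operator.
use strict;
use warnings;
use Getopt::Long;

GetOptions(
    'verbose' => \my $verbose,    # example --verbose flag
) or die "Bad options\n";

while (<>) {                      # remaining arguments are treated as file names
    print "ERROR in line $.\n" if /Error/;
}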
Regarding your code, your script should be:
#!/usr/bin/env perl
while (<>) {
print "ERROR in line $.\n" if /Error/;
}
Usage:
perl script.pl UNSUCCESSFULOUTPUT.txt >output.txt
You can get rid of perl from the command if you make script.pl executable (chmod u+x script.pl).
This is what I believe you want:
#!/usr/bin/perl
my $argument1 = $ARGV[0];
open (LOGFILE, "<$argument1") or die "Can't find file";
open (FILE, ">output.txt") or die $!;
while(<LOGFILE>){
print FILE "ERROR in line $.\n" if (/Error/);
}
close FILE;
close LOGFILE;
Ran as from the command line:
> perl nameofpl.pl mytxt.txt
For appending change this line:
open (FILE, ">output.txt") or die $!;
To the remarkably similar:
open (FILE, ">>output.txt") or die $!;
I assume you are asking how to pass an argument to a perl script. This is done with the @ARGV variable.
use strict;
use warnings;
my $file = shift; # implicitly shifts from @ARGV
print "The file is: $file\n";
You can also make use of the magic of the diamond operator <>, which will open the arguments to the script as files, or use STDIN if no arguments are supplied. The diamond operator is used as a normal file handle, typically while (<>) ...
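For example, a minimal sketch that also reports which file each match came from ($ARGV holds the name of the file currently being read by <>):
while (my $line = <>) {
    print "$ARGV: ERROR in line $.\n" if $line =~ /Error/;
}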
ETA:
With the code you supplied, you can make it more flexible by doing this:
use strict;
use warnings; # always use these
my $file = shift; # first argument, required
my $outfile = shift // "output.txt"; # second argument, optional
open my $log, "<", $file or die $!;
open my $out, ">", $outfile or die $!;
while (<$log>) {
print $out "ERROR in line $.\n" if (/Error/);
}
Also see ikegami's answer on how to make it more like other unix tools, e.g. accept STDIN or file arguments, and print to STDOUT.
As I commented in your earlier question, you may simply wish to use an already existing tool for the job:
grep -n Error input.txt > output.txt

perl: Writing file at Nth position

I am trying to write to a file at the Nth position. I have tried the example below, but it writes at the end. Please help me achieve this.
#!/usr/bin/perl
open(FILE,"+>>try.txt")
or
die ("Cant open file try.txt");
$POS=5;
seek(FILE,$POS,0);
print FILE "CP1";
You are opening the file in read-write appending mode. Try opening the file in read-write mode:
my $file = "try.txt";
open my $fh, "+<", $file
or die "could not open $file: $!";
Also, note the use of the three argument open, the lexical filehandle, and $!.
#!/usr/bin/perl
use strict;
use warnings;
#create an in-memory file
my $fakefile = "1234567890\n";
open my $fh, "+<", \$fakefile
or die "Cant open file: $!";
my $offset = 5;
seek $fh, $offset, 0
or die "could not seek: $!";
print $fh "CP1";
print $fakefile;
The code above prints:
12345CP190
If I understand you correctly, if the file contents are
123456789
you want to change that to
1234CP156789
You cannot achieve that using modes supplied to open (regardless of programming language).
You need to open the source file and another temporary file (see File::Temp). Read from the source up to the insertion point and write that to the temporary file, write what you want to insert, then write the remainder of the source file to the temporary file. Finally, close both files and rename the temporary file to the source.
If you are going to do this using seek, both files must be opened in binary mode.
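A byte-oriented sketch of that approach (the file name and offset are just illustrative; note the binmode calls):
use strict;
use warnings;
use File::Temp qw(tempfile);

my ($source, $offset, $insert) = ('try.txt', 5, 'CP1');

open my $src_h, '<', $source or die "Failed to open '$source': $!";
my ($tmp_h, $temp) = tempfile(DIR => '.');   # same directory, so the rename below stays on one filesystem
binmode $_ for $src_h, $tmp_h;               # byte-exact copying

my $got = read $src_h, my $head, $offset;    # everything before the insertion point
die "'$source' is shorter than $offset bytes" unless defined $got and $got == $offset;
print {$tmp_h} $head, $insert;
print {$tmp_h} $_ while <$src_h>;            # the remainder, unchanged

close $tmp_h or die "Failed to close temp file: $!";
close $src_h;
rename $temp, $source or die "Failed to rename '$temp' to '$source': $!";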
Here is an example using line oriented input and text mode:
#!/usr/bin/perl
use strict; use warnings;
use File::Temp qw( :POSIX );
my $source = 'test.test';
my $temp = tmpnam;
open my $source_h, '<', $source
or die "Failed to open '$source': $!";
open my $temp_h, '>', $temp
or die "Failed to open '$temp' for writing: $!";
while ( my $line = <$source_h> ) {
if ( $line =~ /^[0-9]+$/ ) {
$line = substr($line, 0, 5) . "CP1" . substr($line, 5);
}
print $temp_h $line;
}
close $temp_h
or die "Failed to close '$temp': $!";
close $source_h
or die "Failed to close '$source': $!";
rename $temp => $source
or die "Failed to rename '$temp' to '$source': $!";
This works for me:
use strict;
use warnings;
open( my $fh, '+<', 'foo.txt' ) or die $!;
seek( $fh, 3, 0 );
print $fh "WH00t?";
This is also a more "modern" use of open(); see http://perldoc.perl.org/functions/open.html
The file will be closed when $fh goes out of scope.
"Inserting" a string into a function can (mostly) be done in place. See the lightly used truncate built-in function.
open my $fh, '+<', $file or die $!;
seek $fh, 5, 0;
$/ = undef;
$x = <$fh>; # read everything after the 5th byte into $x
truncate $fh, 5;
print $fh "CPI";
print $fh $x;
close $fh;
If your file is line or record oriented, you can insert lines or modify individual lines easily with the core module Tie::File. This will allow the file to be treated as an array, and Perl string and array manipulation can be used to modify the file in memory. You can safely operate on huge files larger than your RAM with this method.
Here is an example:
use strict; use warnings;
use Tie::File;
#create the default .txt file:
open (my $out, '>', "nums.txt") or die $!;
while(<DATA>) { print $out "$_"; }
close $out or die $!;
tie my #data, 'Tie::File', "nums.txt" or die $!;
my $offset=5;
my $insert="INSERTED";
#insert in a string:
$data[0]=substr($data[0],0,$offset).$insert.substr($data[0],$offset)
if (length($data[0])>$offset);
#insert a new array element that becomes a new file line:
splice @data,$offset,0,join(':',split(//,$insert));
#insert vertically:
$data[$_]=substr($data[$_],0,$offset) .
substr(lc $insert,$_,1) .
substr($data[$_],$offset) for (0..length($insert));
untie @data; #close the file too...
__DATA__
123456789
234567891
345678912
456789123
567891234
678912345
789123456
891234567
912345678
Output:
12345iINSERTED6789
23456n7891
34567s8912
45678e9123
56789r1234
I:N:St:E:R:T:E:D
67891e2345
78912d3456
891234567
912345678
The file modifications with Tie::File are made in place, as the array is modified. You could use Tie::File just on the first line of your file to modify and insert as you requested. You can put sleep between the array mods and use tail -n +0 -f on the file and watch the file change if you wish...
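For just the first line, that boils down to something like this sketch (the offset, text, and file name are the same illustrative values as above):
use strict;
use warnings;
use Tie::File;

tie my @data, 'Tie::File', 'nums.txt' or die $!;
my ($offset, $insert) = (5, 'CP1');
# Modify only line 0; the change is written back to the file in place.
$data[0] = substr($data[0], 0, $offset) . $insert . substr($data[0], $offset);
untie @data;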
Alternatively, if your file is a reasonable size and you want to treat it as characters, you can read the entire file into memory, do string operations on the data, then write the modified data back out. Consider:
use strict; use warnings;
#create the default .txt file:
open (my $out, '>', "nums.txt") or die $!;
while(<DATA>) { print $out "$_"; }
close $out or die $!;
my $data;
open (my $in, '<', "nums.txt") or die $!;
{ local $/=undef; $data=<$in>; }
close $in or die $!;
my $offset=5;
my $insert="INSERTED";
open (my $out, '>', "nums.txt") or die $!;
print $out substr($data,0,$offset).$insert.substr($data,$offset);
close $out or die $!;
__DATA__
123456789
2
3
4
5
6
7
8
9
Output:
12345INSERTED6789
2
3
4
5
6
7
8
9
If you treat files as characters, beware that under Windows, files in text mode have a \r\n for a new line. That is two characters if opened in binary mode.
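A small sketch of the difference (assumes a file written with CRLF line endings on Windows):
open my $fh, '<', 'nums.txt' or die $!;
binmode $fh;              # raw bytes: "\r\n" stays two characters, seek/tell match the bytes on disk
seek $fh, 5, 0 or die $!; # byte 5 on disk, which may differ from "character 5" seen in text mode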