I am writing a script in Perl where I have to open the same file twice. Here is an outline of the code:
#!/usr/bin/perl
use strict;
use warnings;
my %forward=();
my %reverse=();
while (<>) {
    chomp;
    # store something
}

while (<>) { # open the same file again
    chomp;
    # print something
}
I am using the diamond operator, so I am running the script like this:
perl script.pl input.txt
But this is not producing any output. If I open the file using a filehandle, the script works. What could possibly be wrong here?
Save your @ARGV before exhausting it. Of course, this will only work for actual files specified on the command line, and not with STDIN.
#!/usr/bin/env perl
use strict;
use warnings;

run(@ARGV);

sub run {
    my @argv = @_;
    first(@argv);
    second(@argv);
}

sub first {
    local @ARGV = @_;
    print "First pass: $_" while <>;
}

sub second {
    local @ARGV = @_;
    print "Second pass: $_" while <>;
}
You read all there was to be read in the first loop, leaving nothing to read in the second.
If the input isn't huge, you can simply load it into memory.
my @lines = <>;
chomp( @lines );

for (@lines) {
    ...
}

for (@lines) {
    ...
}
I'm writing a Perl script that prints a message/sends a mail if a repeated line is found in a file.
My code so far:
#!/usr/bin/perl
use strict;

my %prv_line;
open(FILE, "somefile") || die "$!";
while (<FILE>) {
    if ($prv_line{$_}) {
        $prv_line{$_}++;
    }
    # my problem: print "I saw this line X times"
}
close FILE;
My problem: how do I generate a message of the form "I saw this line X times" without printing the rest of the script's output?
Thanks
Here's probably what you want:
#!/usr/bin/perl
use strict;
use warnings;
my %lines;
while (<DATA>) {
    chomp;
    $lines{$_}++;
}
while (my ($key, $value) = each %lines) {
    print "I saw the line '$key' $value times\n";
}
__DATA__
abc
def
def
def
abc
blabla
avaddv
bla
abc
Of course, it can be improved.
Your original code is very close. Well done for using use strict and putting $! in the die string. You should also always use warnings, use the three-parameter form of open, and use lexical file handles.
This program should help you.
use strict;
use warnings;
my %prv_line;
open (my $FILE, '<', 'somefile') || die $!;
while (<$FILE>) {
    if ( $prv_line{$_} ) {
        print "I saw this line $prv_line{$_} times\n";
    }
    $prv_line{$_}++;
}
I already did some research on Perl script debugging but couldn't find what I was looking for.
Let me explain my problem here.
I have a Perl script which, it seems, is not entering the last while loop, because it is not printing anything inside it as instructed.
So I want to know whether there is any easier method available to see the lines being executed one by one, like we can in a shell script using
set -x
Here is my Perl script code
#!/usr/bin/perl -w
my $ZONEADM = "/usr/sbin/zoneadm list -c";
use strict;
use diagnostics;
use warnings;
system("clear");
print "Enter the app\n";
chomp(my $INS = <>);
print "\nEnter the Symmitrix ID\n";
chomp(my $SYMM = <>);
print "\nEnter the Server\n";
chomp(my $SRV = <>);
print "\nEnter the devices\n";
while (<>) {
    if ($_ !~ m/(q|quit)/) {
        chomp($_);
        my $TEMP_FILE = "/export/home/ptiwari/scripts/LOG.11";
        open (my $FH, '>>', $TEMP_FILE);
        my @arr = split(/:/, $_);
        if ($arr[3]) {
            print $FH "/".$INS."db/".$arr[0]." ".$SYMM." ".$arr[1]." ".$arr[2]." ".$arr[3]."\n";
        }
        else {
            print $FH "/".$INS."db/".$arr[0]." ".$SYMM." ".$arr[1]." ".$arr[2]."\n";
        }
        undef @arr;
        close $FH;
    }
    else {
        exit;
    }
}
my $IS_ZONE = qx($ZONEADM|grep -i $SRV|grep -v global);
if ($IS_ZONE) {
    $IS_ZONE = "yes";
}
else {
    $IS_ZONE = "no";
}
open(my $FLH, '<', "/export/home/ptiwari/scripts/LOG.11");
my @lines;
while (<$FLH>) {
    my ($GLOBAL_MTPT, $SYM, $SYM_DEV, $SIZE, $NEWFS) = split;
    print $GLOBAL_MTPT." ".$SYM." ".$SYM_DEV;
    print "\n";
}
I already tried perl -d, but it didn't show me anything that could help me troubleshoot why it didn't enter the while loop.
Your while(<>) loop doesn't have sensible termination conditions. The /q|quit/ regex is buggy.
You exit the whole script if any line contains a q or quit. You will also exit if a device description contains something like quill or acquisition. The effect of typing an accidental q is similar to a Ctrl-C.
The only way to finish the loop and go on with the rest of the script is to send an EOF. This requires the user to type Ctrl-D at the keyboard, or a redirected input file to simply end. Then your script will continue.
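For illustration, an anchored pattern (the full rewrite below uses the same idea) exits only when the entire line is q or quit:

exit if /^(?:q|quit)$/;   # leave only when the whole line is "q" or "quit"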
There are some other things wrong/weird with this script.
Main criticism: (a) all-uppercase names are informally reserved for Perl built-ins and pragmatic modules; lowercase or mixed-case variables work just as well. (b) Your script contains quite a lot of redundant code. Either refactor it into subs, or rewrite your logic.
Here is an example rewrite that may be easier to debug and avoids some of the bugs.
#!/usr/bin/perl
use strict;
use warnings;
use diagnostics;
use constant DEBUG_FLAG => 1; # set to false value for release
my $zoneadm_command = "/usr/sbin/zoneadm list -c";
my $temp_file_name = "/export/home/ptiwari/scripts/LOG.11";
sub prompt { print "\n", $_[0], "\n"; my $answer = <>; chomp $answer; return $answer }
sub DEBUG { print STDERR "DEBUG> ", @_, "\n" if DEBUG_FLAG }
system("clear");
my $app_name = prompt("Enter the app");
my $symm_id = prompt("Enter the Symmitrix ID");
my $server = prompt("Enter the server name");
print "Enter the devices.\n";
print qq(\tTo terminate the script, type "q" or "quit".\n);
print qq(\tTo finish the list of devices, type Ctrl+D.\n);
open my $temp_file, ">>", $temp_file_name
    or die "Can't open log file: $!";
while (<>) {
    chomp;                    # remove the trailing newline
    exit if /^q(?:uit)?$/;    # terminate the script if the input line *is* `q` or `quit`
    my @field = split /:/;
    # grep: select all true values
    @field = grep {$_} ("/${app_name}db/$field[0]", $symm_id, @field[1 .. 3]);
    print $temp_file join(" ", @field), "\n";
}
close $temp_file;
DEBUG("finished the reading loop");
# get the zones with only *one* extra process
my @zones =
    grep { not /global/ }
    grep { /\Q$server\E/i }
    map  { chomp; $_ }
    qx($zoneadm_command);
my $is_zone = @zones ? "yes" : "no";
DEBUG("Am I in the zone? $is_zone");
open my $device_file, "<", $temp_file_name or die "Can't open $temp_file_name: $!";
while (<$device_file>) {
    chomp;
    my ($global_mtpt, $sym, $sym_dev) = split;
    print join(" ", $global_mtpt, $sym, $sym_dev), "\n";
    # or, shorter: print join(" ", (split)[0 .. 2]), "\n";
}
You need something like this for stepping into the script:
http://www.devshed.com/c/a/Perl/Using-The-Perl-Debugger/
You really can use the debugger: http://perldoc.perl.org/perldebug.html
But if your preference is to trace like bash -x, take a look at this discussion:
http://www.perlmonks.org/?node_id=419653
The Devel::Trace Perl module is designed to mimic sh -x tracing for shell programs.
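Assuming Devel::Trace is installed from CPAN, you can enable it from the command line without modifying the script (the script name here is just an example):

perl -d:Trace script.pl input.txt

It prints each line of the program to STDERR just before it executes, much like sh -x.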
Try removing the "my $" from the last open statement and the "$" from the filehandle in the last while statement. Or better yet, try this:
open(FLH, '<', "/export/home/ptiwari/scripts/LOG.11");
my @lines = <FLH>;
foreach (@lines) {
    my ($GLOBAL_MTPT, $SYM, $SYM_DEV, $SIZE, $NEWFS) = split;
    print $GLOBAL_MTPT." ".$SYM." ".$SYM_DEV;
    print "\n";
}
Let me know about the results.
I want each (small) file specified via @ARGV read into its own array. If I don't test $ARGV, <> will slurp all the files into a single table. Is there a better/shorter/simpler way of doing it?
# invocation: ./prog.pl *.txt
@table = ();
$current = "";
while (<>)
{
    if ($ARGV ne $current)
    {
        @ar = ();
        $current = $ARGV;
        if ($current)
        {
            push @table, \@ar;
        }
    }
    push @ar, $_;
}
The eof function can be used to detect the end of each file:
#!/usr/bin/env perl
use strict;
use warnings;
my @files;
my $file_ctr = 0;
while (<>) {
    chomp;
    push @{ $files[$file_ctr] }, $_;
}
continue { $file_ctr++ if eof }
Relevant documentation:
In a while (<>) loop, eof or eof(ARGV) can be used to detect the
end of each file, whereas eof() will detect the end of the very last
file only.
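Once the loop has run, each element of @files holds one file's lines; for example, to report how many lines each input file contained (just an illustrative follow-up):

for my $i (0 .. $#files) {
    print "File ", $i + 1, " contained ", scalar @{ $files[$i] }, " lines\n";
}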
Please always use strict and use warnings at the top of your programs, and declare variables close to their first point of use using my.
It is simplest to test end of file on the ARGV filehandle to determine when a new file is about to be opened.
This code uses a state variable $eof to record whether the previous file has been completely read, to avoid unnecessarily adding a new element to the @table array when the end of the @ARGV list is reached.
use strict;
use warnings;
my @table;
my $eof = 1;

while (<>) {
    chomp;
    push @table, [] if $eof;
    push @{ $table[-1] }, $_;
    $eof = eof;
}
@Alan Haggai Alavi's idea of incrementing an index at end of file instead of setting a flag is far better, as it avoids the need to explicitly create an empty array at the start of each file.
Here is my take on his solution, but it is completely dependent on Alan's post and he should get the credit for it.
use strict;
use warnings;
my @table;
my $index = 0;

while (<>) {
    chomp;
    push @{ $table[$index] }, $_;
    $index++ if eof;
}
You can leverage File::Slurp to avoid opening and closing the files yourself.
use strict;
use warnings;
use File::Slurp;
my @table = ();

foreach my $arg ( @ARGV ) {
    push @table, read_file( $arg, array_ref => 1 );
}
A hash for array refs of files:
my %files;
while (<>) {
    push @{ $files{$ARGV} }, $_;
}
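Since the hash is keyed by the current filename in $ARGV, each file's lines can be pulled back out afterwards, for example (an illustrative follow-up):

for my $name (sort keys %files) {
    print "$name: ", scalar @{ $files{$name} }, " lines\n";
}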
What I am trying to do is get my program to recurse through a directory and, for all of the files within that directory, search each file for the word "ERROR" and then print each instance of it out to a separate file. I was able to do this without making it recursive, i.e. just entering which files to check manually on the command line. I was wondering what the proper way to use ARGV when recursing is. Here is my code thus far:
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
my $dir = "c:/programs";
find(\&searchForErrors, $dir);
sub searchForErrors()
{
    my $seen = 0;
    if (-f) {
        my $file = $_;
        my @errors = ();
        open FILE, $file;
        my @lines = <FILE>;
        close FILE;
        for my $line (@lines) {
            if (/ERROR/) {
                push(@errors, "ERROR in line $.\n");
                print FILE "ERROR in line $.:$1\n" if (/Error\s+(.+)/);
            }
            open FILE, ">$file";
            print FILE @lines;
            close FILE;
        }
    }
}
What I need to know is how I can incorporate ARGV so that the program will read in each file in the directory, perform the search, and then output the results of the search to a file. I hope I have explained my question adequately, if you need any clarification, let me know what is confusing. The more explanation you can give with the answer, the better. Thank you!
ARGV is usually used to iterate over files provided from outside of Perl.
$ find . -type f -exec perl script.pl {} +
# script.pl
while (<>) {
    print "$ARGV:$.: $1\n" if /Error\s+(.+)/;
} continue {
    close(ARGV) if eof;  # Reset $.
}
But it's not necessary. You could also do:
use File::Find::Rule qw( );
@ARGV = File::Find::Rule->file->in('.');
while (<>) {
    print "$ARGV:$.: $1\n" if /Error\s+(.+)/;
} continue {
    close(ARGV) if eof;  # Reset $.
}
I prefer File::Find::Rule, but you could stick with File::Find for reasons that should be obvious if you compare the above snippet with the following snippet:
use File::Find qw( find );
@ARGV = ();
find({ wanted => sub { push @ARGV, $_ if -f }, no_chdir => 1 }, '.');
while (<>) {
    print "$ARGV:$.: $1\n" if /Error\s+(.+)/;
} continue {
    close(ARGV) if eof;  # Reset $.
}
PS - You're replacing each file with an exact copy of itself, and you're populating an array you never use. I omitted that code from my version.
I quickly jotted off a Perl script to average a few files that contain just columns of numbers. It involves reading from an array of filehandles. Here is the script:
#!/usr/local/bin/perl
use strict;
use warnings;
use Symbol;
die "Usage: $0 file1 [file2 ...]\n" unless scalar(#ARGV);
my #fhs;
foreach(#ARGV){
my $fh = gensym;
open $fh, $_ or die "Unable to open \"$_\"";
push(#fhs, $fh);
}
while (scalar(#fhs)){
my ($result, $n, $a, $i) = (0,0,0,0);
while ($i <= $#fhs){
if ($a = <$fhs[$i]>){
$result += $a;
$n++;
$i++;
}
else{
$fhs[$i]->close;
splice(#fhs,$i,1);
}
}
if ($n){ print $result/$n . "\n"; }
}
This doesn't work. If I debug the script, after I initialize @fhs it looks like this:
DB<1> x #fhs
0 GLOB(0x10443d80)
-> *Symbol::GEN0
FileHandle({*Symbol::GEN0}) => fileno(6)
1 GLOB(0x10443e60)
-> *Symbol::GEN1
FileHandle({*Symbol::GEN1}) => fileno(7)
So far, so good. But it fails at the part where I try to read from the file:
DB<3> x $fhs[$i]
0 GLOB(0x10443d80)
-> *Symbol::GEN0
FileHandle({*Symbol::GEN0}) => fileno(6)
DB<4> x $a
0 'GLOB(0x10443d80)'
$a is filled with this string rather than something read from the glob. What have I done wrong?
You can only use a simple scalar variable inside <> to read from a filehandle. <$foo> works. <$foo[0]> does not read from a filehandle; it's actually equivalent to glob($foo[0]). You'll have to use the readline builtin, a temporary variable, or use IO::File and OO notation.
$text = readline($foo[0]);
# or
my $fh = $foo[0]; $text = <$fh>;
# or
$text = $foo[0]->getline; # If using IO::File
If you weren't deleting elements from the array inside the loop, you could easily use a temporary variable by changing your while loop to a foreach loop.
Personally, I think using gensym to create filehandles is an ugly hack. You should either use IO::File, or pass an undefined variable to open (which requires at least Perl 5.6.0, but that's almost 10 years old now). (Just say my $fh; instead of my $fh = gensym;, and Perl will automatically create a new filehandle and store it in $fh when you call open.)
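A minimal sketch of those two alternatives (the file name is just a placeholder):

use IO::File;

# Lexical filehandle autovivified by open (Perl 5.6.0 and later)
open my $fh, '<', 'numbers.txt' or die "Can't open numbers.txt: $!";
my $line = <$fh>;

# Or the object-oriented route via IO::File
my $io = IO::File->new('numbers.txt', '<') or die "Can't open numbers.txt: $!";
my $first = $io->getline;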
If you are willing to use a bit of magic, you can do this very simply:
use strict;
use warnings;
die "Usage: $0 file1 [file2 ...]\n" unless #ARGV;
my $sum = 0;
# The current filehandle is aliased to ARGV
while (<>) {
$sum += $_;
}
continue {
# We have finished a file:
if( eof ARGV ) {
# $. is the current line number.
print $sum/$. , "\n" if $.;
$sum = 0;
# Closing ARGV resets $. because ARGV is
# implicitly reopened for the next file.
close ARGV;
}
}
Unless you are using a very old perl, the messing about with gensym is not necessary. IIRC, perl 5.6 and newer are happy with normal lexical handles: open my $fh, '<', 'foo';
I have trouble understanding your logic. Do you want to read several files, each of which just contains numbers (one number per line), and print their averages?
use strict;
use warnings;
my @fh;
foreach my $f (@ARGV) {
    open(my $fh, '<', $f) or die "Cannot open $f: $!";
    push @fh, $fh;
}

foreach my $fh (@fh) {
    my ($sum, $n) = (0, 0);
    while (<$fh>) {
        $sum += $_;
        $n++;
    }
    print "$sum / $n: ", $sum / $n, "\n" if $n;
}
Seems like a for loop would work better for you, where you could actually use the standard read (iteration) operator.
for my $fh ( @fhs ) {
    while ( defined( my $line = <$fh> ) ) {
        # since we're reading integers we test for *defined*
        # so we don't close the file on '0'
        # ...
    }
    close $fh;
}
It doesn't look like you want to shortcut the loop at all. Therefore, while seems to be the wrong loop idiom.