I'm trying to get a Perl program working. I get an error
readline() on closed filehandle IN at Test.pl line 368, <IN> line 65.
Lines 363-369 of the program look like this
print "Primer3 is done! \n";
my $forward = "$snpid.for";
my @forward_out;
my $i=0;
open(IN,$forward);
while(<IN>){
chomp; s/\r//;
And refers to the configuration file. The last line (line 65) looks like this
num_cpus = 40
So either the configuration file is not correct, or Perl does not recognize that this is the end of the file.
Is there a way to solve this?
Update
Based on the comments I added the open() or die command and got this:
No such file or directory at Test.pl line 367.
The open command is part of a subroutine Primer3_Run
sub Primer3_Run {
my $snpid = shift;
my $seq = shift;
my $tmp_input = "Primer3.tmp.input";
my $len = length($seq);
open(OUT, ">$tmp_input");
close OUT;
if ( -e "$snpid.for" ) {
system "del $snpid.for";
}
if ( -e "$snpid.rev" ) {
system "del $snpid.rev";
}
system "$params{'primer3'} < Primer3.tmp.input > Primer3.Log.txt 2>&1";
my $forward = "$snpid.for";
my @forward_out;
my $i = 0;
open(IN, $forward) or die $!;
From what you've revealed, the problem will be this block
if ( -e "$snpid.for" ) { system "del $snpid.for"; }
followed soon afterwards by this
my $forward = "$snpid.for";
open(IN, $forward) or die $!;
The new or die $! is presumably from my advice, so previously you were deleting "$snpid.for" and then trying to open it for input. The error you saw should be expected
No such file or directory at Test.pl line 367.
You just deleted it!
All I can think of that may help is to be more organised with your coding. What did you mean when you tried to open a file that you had just made sure didn't exist?
Beyond that, you must always add use strict and use warnings 'all' at the top of every Perl program you write. Use lexical file handles together with the three-parameter form of open, and always check the status of every open call with die, including the value of $! in the die string to explain why the open failed.
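A minimal sketch of that advice; the file name here is a throwaway chosen purely for illustration:

```perl
#!/usr/bin/env perl
use strict;
use warnings 'all';

my $filename = 'demo_config.txt';    # hypothetical name, for illustration only

# Write a line first so the read below has something to open
open my $out, '>', $filename
    or die qq(Cannot open "$filename" for writing: $!);
print {$out} "num_cpus = 40\n";
close $out or die qq(Cannot close "$filename": $!);

# Lexical handle, three-argument open, and a die that says why the open failed
open my $in, '<', $filename
    or die qq(Cannot open "$filename" for reading: $!);
while ( my $line = <$in> ) {
    chomp $line;
    print "read: $line\n";
}
close $in;
unlink $filename;
```

Because the handle is lexical, it closes automatically when `$in` goes out of scope, and because open is checked, a missing file dies immediately with the reason instead of producing a "readline() on closed filehandle" warning later.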
Related
I'm trying to create a script which gets a log file's content from a website and writes it to a text file, but I am getting errors when use strict is present:
Can't use string ("/home/User/Downloads/text") as a symbol ref while "strict refs" in use at ./scriptname line 92.
Also, by removing use strict I get another error, which is:
File name too long at ./scriptname line 91.
I tried the approach from Perl: Read web text file and "open" it, but it did not work for me. I am also a newbie at Perl and confused by its syntax.
Are there any suggestions or advice?
Note: The code is supposed to grep every line containing RoomOutProcessTT, display those lines, and count how many times the pattern appears.
Here is the code.
my $FOutput = get "http://website/Logs/Log_number.ini";
my $FInput = "/home/User/Downloads/text";
open $FInput, '<', $FOutput or die "could not open $FInput: $!";
my $ctr;
my @results;
my @words = <$FInput>;
@results = grep /RoomOutProcessTT/, @words;
print "@results\n";
close $FInput;
open $FInput, '<', $FOutput or die "could not open $FInput: $!";
while(<$FInput>){
$ctr = grep /RoomOutProcessTT/, split ' ' , $_;
$ctr += $ctr;
}
print "RoomOutProcessTT Count: $ctr\n";
close $FInput;
The first argument to open is the filehandle name, not the actual name of the file. That comes later in the open function.
Change your code to:
my $FOutput = get "http://website/Logs/Log_number.ini"; # your content should be stored in this
# variable, you need to write data to your output file.
my $FInput = "/home/User/Downloads/text";
open OUTPUT_FILEHANDLE, '>', $FInput or die "could not open $FInput: $!"; # give a name to the file
# handle, then supply the file name itself after the mode specifier.
# You want to WRITE data to this file, open it with '>'
my $ctr;
my @results;
my @words = split /\r?\n/, $FOutput; # create an array of 'lines' from the logfile content
                                     # (a capturing split like /(\r|\n)/ would also return
                                     # the separators themselves, so this form is safer)
# here, you want to print the results of your grep to the output file
@results = grep /RoomOutProcessTT/, @words;
print OUTPUT_FILEHANDLE "@results\n"; # print to your output file
# close the output file here, since you re-open it in the next few lines.
close OUTPUT_FILEHANDLE;
# not sure why you're re-opening the file here... but that's up to your design I suppose
open INPUT_FILEHANDLE, '<', $FInput or die "could not open $FInput: $!"; # open it for read
while(<INPUT_FILEHANDLE>){
$ctr = grep /RoomOutProcessTT/, split ' ' , $_;
$ctr += $ctr;
}
print "RoomOutProcessTT Count: $ctr\n"; # print to stdout
close INPUT_FILEHANDLE; # close your file handle
I might suggest switching the terms you use to identify "input and output", as it's somewhat confusing. The input in this case is actually the file you pull from the web, output being your text file. At least that's how I interpret it. You may want to address that in your final design.
I open a file and print some data to the screen, but I want to clear the screen after outputting the data. I use clear; in the program, but the screen is not cleared. Is there a command or function that can do that?
I want to see the contents of the current file only, without remnants of the previous file still on the screen.
Here is my program
`ls > File_List`;
open List , "<./File_List";
while(eof(List)!=1)
{
$Each = readline(*List);
chomp $Each;
print $Each;
print "\n";
`clear`;
open F , "<./$Each";
while(eof(F)!=1)
{
for($i=0;$i<20;$i++)
{
$L = readline(*F);
print $L;
}
last;
}
close(F);
sleep(3);
$Each = "";
}
close List;
Thanks
Your program uses non-idiomatic Perl. A more natural style would be
#!/usr/bin/env perl
use strict;
use warnings;
no warnings 'exec';
opendir my $dh, "." or die "$0: opendir: $!";
while (defined(my $name = readdir $dh)) {
if (-T $name) {
system("clear") == 0 or warn "$0: clear exited " . ($? >> 8);
print $name, "\n";
system("head", "-20", $name) == 0 or warn "$0: head exited " . ($? >> 8);
sleep 3;
}
}
Instead of writing a list of names to another file, read the names directly with opendir and readdir. The defined check is necessary in case you have a file named 0, which Perl considers to be a false value and would terminate the loop prematurely.
You don’t want to print everything. The directory entry may be a directory or an executable image or a tarball. The -T file test attempts to guess whether the file is a text file.
Invoke the external clear command using Perl’s system.
Finally, use the external head command to print the first 20 lines of each text file.
clear isn't working because the control sequence it outputs to clear the screen is being captured and returned to your program instead of being sent to the display.
Try
print `clear`
or
system('clear')
instead
The solution you provided doesn't work because the clear command runs in a sub-shell. I suggest using a CPAN module that is supported on multiple platforms: Term::Screen::Uni
Example:
use Term::Screen::Uni;
my $screen = Term::Screen::Uni->new;
$screen->clrscr;
Use system(), it works.
system("ls > File_List");
system("clear");
I have a Perl script to which I supply an input text file, either from a batch file or sometimes from the command prompt. When the input comes from a batch file, the file sometimes doesn't exist. I want to catch the "No such file" error and do some other task when it is thrown. Here is sample code:
while(<>) {    # an error is thrown here when the file doesn't exist
    # parse the file.
}
# if the error is thrown, I want to handle it and do some other task.
Filter @ARGV before you use <>:
@ARGV = grep { -e $_ } @ARGV;
die('no files') if scalar(@ARGV) == 0;
# now carry on, if we've got here there is something to do with files that exist
while(<>) {
#...
}
<> reads from the files listed in @ARGV, so if we filter that before it gets there, it won't try to read non-existent files. I've added the check on the size of @ARGV because if you supply a list of files which are all absent, it will wait on stdin (the flipside of using <>). This assumes that you don't want that.
However, if you don't want to read from stdin, <> is probably a bad choice; you might as well step through the list of files in @ARGV. If you do want the option of reading from stdin, then you need to know which mode you're in:
my $have_files = scalar(@ARGV);
@ARGV = grep { -e $_ } @ARGV;
die('no files') if $have_files && scalar(@ARGV) == 0;
# now carry on, if we've got here there is something to do;
# have files that exist or expecting stdin
while(<>) {
#...
}
The diamond operator <> means:
Look at the names in @ARGV and treat them as files you want to open.
Just loop through all of them, as if they were one big file.
Actually, Perl uses the ARGV filehandle for this purpose
If no command line arguments are given, use STDIN instead.
So if a file doesn't exist, Perl gives you an error message (Can't open nonexistent_file: ...) and continues with the next file. This is what you usually want. If not, just do it manually. Stolen from the perlop page:
unshift(@ARGV, '-') unless @ARGV;
FILE: while ($ARGV = shift) {
open(ARGV, $ARGV);
LINE: while (<ARGV>) {
... # code for each line
}
}
The open function returns a false value when a problem is encountered. So always invoke open like
open my $filehandle, "<", $filename or die "Can't open $filename: $!";
The $! variable contains the reason for the failure. Instead of dying, we can do some other error recovery:
use feature qw(say);
@ARGV or @ARGV = "-"; # the - symbolizes STDIN
FILE: while (my $filename = shift @ARGV) {
my $filehandle;
unless (open $filehandle, "<", $filename) {
say qq(Oh dear, I can't open "$filename". What do you want me to do?);
my $tries = 5;
do {
say qq(Type "q" to quit, or "n" for the next file);
my $response = <STDIN>;
exit if $response =~ /^q/i;
next FILE if $response =~ /^n/i;
say "I have no idea what that meant.";
} while --$tries;
say "I give up" and exit!!1;
}
LINE: while (my $line = <$filehandle>) {
# do something with $line
}
}
I'm trying to remove one line from a text file. Instead, what I have wipes out the entire file. Can someone point out the error?
removeReservation("john");
sub removeTime() {
my $name = shift;
open( FILE, "<times.txt" );
@LINES = <FILE>;
close(FILE);
open( FILE, ">times.txt" );
foreach $LINE (@LINES) {
print NEWLIST $LINE unless ( $LINE =~ m/$name/ );
}
close(FILE);
print("Reservation successfully removed.<br/>");
}
Sample times.txt file:
04/15/2012&08:00:00&bob
04/15/2012&08:00:00&john
perl -ni -e 'print unless /whatever/' filename
oalders' answer is correct, but he should have tested whether the open statements succeeded. If the file times.txt doesn't exist, your program would continue on its merry way without a word of warning that something terrible has happened.
Same program as oalders' but:
Testing the results of the open.
Using the three-part open statement, which is more goof-proof. If your file name begins with > or |, your program will fail with the old two-part syntax.
Not using global file handles, especially in subroutines. File handles are normally global in scope. Imagine if I had a file handle named FILE in my main program and was reading from it when I called this subroutine: that would cause problems. Use locally scoped file handle names.
Variable names should be in lowercase. Constants are all uppercase. It's just a standard that developed over time. Not following it can cause confusion.
Since oalders put the program in a subroutine, you should pass the name of your file to the subroutine as well...
Here's the program:
#!/usr/bin/env perl
use strict;
use warnings;
removeTime( "john", "times.txt" );
sub removeTime {
my $name = shift;
my $time_file = shift;
if (not defined $time_file) {
#Make sure that the $time_file was passed in too.
die qq(Name of Time file not passed to subroutine "removeTime"\n);
}
# Read file into an array for processing
open( my $read_fh, "<", $time_file )
or die qq(Can't open file "$time_file" for reading: $!\n);
my @file_lines = <$read_fh>;
close( $read_fh );
# Rewrite file with the line removed
open( my $write_fh, ">", $time_file )
or die qq(Can't open file "$time_file" for writing: $!\n);
foreach my $line ( @file_lines ) {
print {$write_fh} $line unless ( $line =~ /$name/ );
}
close( $write_fh );
print( "Reservation successfully removed.<br/>" );
}
It looks like you're printing to a filehandle which you have not yet defined. At least you haven't defined it in your sample code. If you enable strict and warnings, you'll get the following message:
Name "main::NEWLIST" used only once: possible typo at remove.pl line 16.
print NEWLIST $LINE unless ($LINE =~ m/$name/);
This code should work for you:
#!/usr/bin/env perl
use strict;
use warnings;
removeTime( "john" );
sub removeTime {
my $name = shift;
open( FILE, "<times.txt" );
my @LINES = <FILE>;
close( FILE );
open( FILE, ">times.txt" );
foreach my $LINE ( @LINES ) {
print FILE $LINE unless ( $LINE =~ m/$name/ );
}
close( FILE );
print( "Reservation successfully removed.<br/>" );
}
A couple of other things to note:
1) Your sample code calls removeReservation() when you mean removeTime()
2) You don't need the round brackets in your subroutine definition unless your intention is to use prototypes. See my example above.
This is in the FAQ.
How do I change, delete, or insert a line in a file, or append to the beginning of a file?
It's always worth checking the FAQ.
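One of the approaches described there can be sketched with the core Tie::File module, which maps a file's lines to an array, so filtering the array rewrites the file in place. The file name and setup data below are illustrative only:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Tie::File;

my $filename = 'times_demo.txt';    # hypothetical name, for illustration

# Set up a small demo file like the one in the question
open my $fh, '>', $filename or die "Cannot create $filename: $!";
print {$fh} "04/15/2012&08:00:00&bob\n04/15/2012&08:00:00&john\n";
close $fh;

# Tie the file's lines to an array; changing the array changes the file
tie my @lines, 'Tie::File', $filename
    or die "Cannot tie $filename: $!";
@lines = grep { !/john/ } @lines;
untie @lines;
```

After this runs the file contains only the bob line; there is no separate read pass and rewrite pass to get out of sync.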
Just in case someone wants to remove lines matching a pattern from a file.
For example, a file (4th line is empty; 5th line has 3 spaces):
t e st1
test2 a
e

   
aa
bb bb
test3a
cc
To remove lines which match a pattern some might use:
# Remove all lines with a character 'a'
perl -pi -e 's/.*a.*//' fileTest && sed -i '/^$/d' fileTest;
The result:
t e st1
e
bb bb
cc
Related:
perl -h
# -p assume loop like -n but print line also, like sed
# -i[extension] edit <> files in place (makes backup if extension supplied)
# -e program one line of program (several -e's allowed, omit programfile)
sed -h
# -i[SUFFIX], --in-place[=SUFFIX]
# edit files in place (makes backup if SUFFIX supplied)
input file:
1,a,USA,,
2,b,UK,,
3,c,USA,,
I want to update the 4th column in the input file with values taken from one of the tables.
my code looks like this:
my $number_dbh = DBI->connect("DBI:Oracle:$INST", $USER, $PASS) or die "Couldn't connect to database $INST";
my $num_smh;
print "connected \n ";
open FILE , "+>>$input_file" or die "can't open the input file";
print "echo \n";
while(my $line=<FILE>)
{
my #line_a=split(/\,/,$line);
$num_smh = $number_dbh->prepare("SELECT phone_no from book where number = $line_a[0]");
$num_smh->execute() or die "Couldn't execute stmt, error : $DBI::errstr";
my $number = $num_smh->fetchrow_array();
$line_a[3]=$number;
}
Looks like your data is in CSV format. You may want to use Parse::CSV.
+>> doesn't do what you think it does. In fact, in testing it doesn't seem to do anything at all. Further, +< does something very strange:
% cat file.txt
1,a,USA,,
2,b,UK,,
3,c,USA,,
% cat update.pl
#!perl
use strict;
use warnings;
open my $fh, '+<', 'file.txt' or die "$!";
while ( my $line = <$fh> ) {
$line .= "hello\n";
print $fh $line;
}
% perl update.pl
% cat file.txt
1,a,USA,,
1,a,USA,,
hello
,,
,,
hello
%
+> appears to truncate the file.
Really, what you want to do is to write to a new file, then copy that file over the old one. Opening a file for simultaneous read/write looks like you'd be entering a world of hurt.
As an aside, you should use the three-argument form of open() (safer for "weird" filenames) and use lexical filehandles (they're not global, and when they go out of scope your file automatically closes for you).
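A sketch of that write-then-replace pattern, assuming input like the CSV in the question; the file name is illustrative and the phone number is a stand-in for the real DBI lookup:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

my $input_file = 'update_demo.csv';    # hypothetical name, for illustration

# Set up demo input resembling the question's file
open my $setup, '>', $input_file or die "Cannot create $input_file: $!";
print {$setup} "1,a,USA,,\n2,b,UK,,\n";
close $setup;

# Write updated rows to a temporary file, then rename it over the original
open my $in,  '<', $input_file       or die "Cannot read $input_file: $!";
open my $out, '>', "$input_file.tmp" or die "Cannot write $input_file.tmp: $!";
while ( my $line = <$in> ) {
    chomp $line;
    my @fields = split /,/, $line, -1;   # -1 keeps trailing empty fields
    $fields[3] = '555-0100';             # stand-in for the database lookup
    print {$out} join(',', @fields), "\n";
}
close $in;
close $out;
rename "$input_file.tmp", $input_file
    or die "Cannot rename over $input_file: $!";
```

The rename is atomic on most filesystems, so readers see either the old file or the fully updated one, never a half-written mixture, which is exactly the hurt that in-place `+<` editing invites.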