When writing a daemon, I want to close STDIN, STDOUT and STDERR for "good daemon behavior". But I got a surprise: files opened afterwards are apparently expected to have the same properties as the old STDIN, STDOUT and STDERR (because their file descriptors get reused?).
Here is warn.pl:
use warnings;
my $nrWarnings = 0;
$SIG{__WARN__} = sub {
    no warnings;
    $nrWarnings++;
    open my $log, '>>', '/tmp/log.txt'
        or die;
    printf $log "%d: %s", $nrWarnings, @_;
    close $log;
};
close STDOUT;
close STDERR;
close STDIN;
open my $o, '>', '/tmp/foobar.txt'
or die;
open my $i, '<', '/etc/passwd'
or die;
open my $i2, '<', '/etc/passwd'
or die;
open my $i3, '<', '/etc/passwd'
or die;
exit $nrWarnings;
And here I run it:
> rm -f /tmp/log.txt ; perl warn.pl; echo $? ; cat /tmp/log.txt
3
1: Filehandle STDIN reopened as $o only for output at warn.pl line 20.
2: Filehandle STDOUT reopened as $i only for input at warn.pl line 22.
3: Filehandle STDERR reopened as $i2 only for input at warn.pl line 24.
I was expecting no warnings and $? == 0. Where is the bug? In my code or in perl?
This may appear similar to How can I reinitialize Perl's STDIN/STDOUT/STDERR?, but there the accepted solution was to close STDIN, STDOUT and STDERR like I do.
Those are warnings, not errors. I suppose they exist because if your program subsequently forked and execed a different program, that program would be mightily confused that its standard input stream is opened for output and its standard output and error streams are opened for input.
It's perfectly legitimate to suppress warnings when you're sure you know what you're doing. In this case, you'd just add no warnings 'io'; prior to your opens.
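For the code above, that would look something like this (a minimal sketch based on warn.pl, suppressing only the io category around the opens):
{
    no warnings 'io';    # silence "Filehandle ... reopened as ..." warnings
    open my $o, '>', '/tmp/foobar.txt' or die;
    open my $i, '<', '/etc/passwd'     or die;
}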
Now, right after hitting submit, I thought of looking in perldoc perldiag, and the warning text is listed there. That leads me to Perl bug #23838, which basically states: "Well, don't close those handles, re-open them to '/dev/null' instead".
And the bug is marked as resolved after that.
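For reference, the approach the bug report suggests looks roughly like this (a sketch, not quoted from the report): instead of closing the standard handles, re-open them onto /dev/null so their descriptors stay occupied.
# Keep file descriptors 0-2 occupied instead of freeing them:
open STDIN,  '<', '/dev/null' or die "can't reopen STDIN: $!";
open STDOUT, '>', '/dev/null' or die "can't reopen STDOUT: $!";
open STDERR, '>', '/dev/null' or die "can't reopen STDERR: $!";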
I disagree that re-opening to '/dev/null' is the correct way (tm), but now we venture into opinion, which is off-topic for stackoverflow, so I'll mark this as answered.
Sorry for the noise.
I am completely new to this, and this should be the easiest thing to do, but for some reason I cannot get my local text file to print. After trying multiple times with different code, I came to use the following code, but it doesn't print.
I have searched for days on various threads to solve this and have had no luck. Please help. Here is my code:
#!/usr/bin/perl
$newfile = "file.txt";
open (FH, $newfile);
while ($file = <FH>) {
print $file;
}
I updated my code to the following:
#!/usr/bin/perl
use strict;    # Always use strict
use warnings;  # Always use warnings.
open(my $fh, "<", "file.txt") or die "unable to open file.txt: $!";
# Above we open the file using the three-argument method,
# or die with an error if we are unable to open it.
while (<$fh>) {    # While there are lines in the file.
    print $_;      # Print each line.
}
close $fh;         # Close the file.
system('C:\Users\RSS\file.txt');
It returns the following: my first report generated by perl. I do not know where this is coming from. Nowhere do I have a print "my first report generated by perl."; statement and it definitely is not in my text file.
My text file is full of various emails, addresses, phone numbers and snippets of emails.
Thank you all for your help. I figured out my problem. I somehow managed to kick myself out of my directory and did not realize it.
This is most likely a combination of a failure to open the file, and a failure to check the return value of open.
If you are completely new to perl, I warmly recommend reading the excellent "perlintro" man page, using either man perlintro or perldoc perlintro on the command line, or taking a look here: https://perldoc.perl.org/perlintro.html.
The "Files and I/O" section there gives a good and concise way of doing this:
open(my $in, "<", "input.txt") or die "Can't open input.txt: $!";
while (<$in>) { # assigns each line in turn to $_
print "Just read in this line: $_";
}
This version will give you an explanation and abort if anything goes wrong while trying to open the file. For example, if there is no file named file.txt in the current working directory, your version will quietly fail to open the file, and afterwards it will quietly fail to read from the closed file handle.
Also, always adding at least one of these to your perl scripts will save you a lot of trouble in the long run:
use warnings; # or use the -w command line switch to turn warnings on globally
use diagnostics;
These won't catch the failure to open the file, but will alert on the failed read.
In the first example here you can see that without the diagnostics module, the code fails without any error messages. The second example shows how the diagnostics module changes this.
$ perl -le 'open FH, "nonexistent.txt"; while(<FH>){print "foo"}'
$ perl -le 'use diagnostics; open FH, "nonexistent.txt"; while(<FH>){print "foo"}'
readline() on closed filehandle FH at -e line 1 (#1)
(W closed) The filehandle you're reading from got itself closed sometime
before now. Check your control flow.
By the way, the legendary "Camel Book" is basically the perl man pages formatted for paper printing, so reading the perldocs in the order listed in perldoc perl will give you a high level of understanding of the language in a reasonably accessible and inexpensive manner.
Happy hacking!
This is simple and includes explanations.
use strict;    # Always use strict
use warnings;  # Always use warnings.
open(my $fh, "<", "file.txt") or die "unable to open file.txt: $!";
# Above we open the file using the three-argument method,
# or die with an error if we are unable to open it.
while (<$fh>) {    # While there are lines in the file.
    print $_;      # Print each line.
}
close $fh;         # Close the file.
There is also the case where the file you are trying to open is not in the location you think it is. If it is not in the same directory, consider using the full path:
open(my $fh, "<", 'F:\Workdir\file.txt') or die "unable to open < input.txt: $!";
EDIT: After your comments, it seems that you are opening an empty file. Please add this at the bottom of that same script and rerun; it will open the file in C:\Users\RSS so you can check that it actually contains data:
system('C:\Users\RSS\file.txt');
First of all, as you are starting out, it is better to enable all warnings with the 'use warnings' pragma and to disable constructs that can lead to uncertain behavior or are difficult to debug with the 'use strict' pragma.
As you are dealing with a file stream, it is always recommended to check whether you were able to open the stream, so use croak or die; both terminate the program with a given message.
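If you go the croak route, note that it comes from the Carp module; here is a minimal sketch, reusing the file.txt name from above:
use Carp qw(croak);

open(my $fh, '<', 'file.txt')
    or croak "unable to open file.txt: $!";   # like die, but blames the caller of the enclosing sub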
Instead of reading inside the while condition, I would recommend checking for end of file, so the loop stops once the end is reached. Usually, when reading a line, you will use it for further processing, so it is a good idea to remove the trailing newline with chomp.
A sample for reading a file in perl can be as follows:
#!/usr/bin/perl
use strict;
use warnings;
my $newfile = "file.txt";
open (my $fh, '<', $newfile) or die "Could not open file '$newfile': $!";
while (!eof($fh))
{
    my $line = <$fh>;
    chomp($line);
    print $line, "\n";
}
Is there a simple way in Perl to send STDOUT or STDERR to multiple places without forking, using File::Tee, or opening a pipe to /usr/bin/tee?
Surely there is a way to do this in pure perl without writing 20+ lines of code, right? What am I missing? Similar questions have been asked, both here on SO and elsewhere, but none of the answers satisfy the requirements that I not have to:
- fork
- use File::Tee / IO::Tee / some other module+dependencies whose code footprint is 1000x larger than my actual script
- open a pipe to the actual tee command
I can see the use of a Core module as a tradeoff here, but really is that needed?
It looks like I can simply do this:
BEGIN {
    open my $log, '>>', 'error.log' or die $!;
    $SIG{__WARN__} = sub { print $log @_ and print STDERR @_ };
    $SIG{__DIE__}  = sub { warn @_ and exit 1 };
}
This simply and effectively sends most error messages both to the original STDERR and to a log file (apparently stuff trapped in an eval doesn't show up, I'm told). So there are downsides to this, mentioned in the comments. But as mentioned in the original question, the need was specific. This isn't meant for reuse. It's for a simple, small script that will never be more than 100 lines long.
If you are looking for a way to do this that isn't a "hack", the following was adapted from http://grokbase.com/t/perl/beginners/096pcz62bk/redirecting-stderr-with-io-tee
use IO::Tee;
open my $save_stderr, '>&STDERR' or die $!;
close STDERR;
open my $error_log, '>>', 'error.log' or die $!;
*STDERR = IO::Tee->new( $save_stderr, $error_log ) or die $!;
I'm looking for an example of redirecting stdout to a file using Perl. I'm doing a fairly straightforward fork/exec tool, and I want to redirect the child's output to a file instead of the parents stdout.
Is there an equivalent of dup2() I should use? I can't seem to find it.
From perldoc -f open:
open STDOUT, '>', "foo.out"
The docs are your friend...
As JS Bangs said, an easy way to redirect output is to use the 'select' statement.
Many thanks to stackoverflow and its users. I hope this is helpful.
For example:
print "to console\n";
open OUTPUT, '>', "foo.txt" or die "Can't create filehandle: $!";
select OUTPUT; $| = 1; # make unbuffered
print "to file\n";
print OUTPUT "also to file\n";
print STDOUT "to console\n";
# close current output file
close(OUTPUT);
# reset stdout to be the default file handle
select STDOUT;
print "to console";
The child itself can do select $filehandle to specify that all of its print calls should be directed to a specific filehandle.
The best the parent can do is use system or exec or something of the sort to do shell redirection.
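For completeness, the shell-redirection route from the parent might look like this (a sketch; ./child.pl and child.out are made-up names):
system('./child.pl > child.out') == 0
    or die "child exited with status $?";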
open my $fh, '>', $file or die "open '$file': $!";
defined(my $pid = fork) or die "fork: $!";
if (!$pid) {
    open STDOUT, '>&', $fh or die "redirect STDOUT: $!";
    # do whatever you want
    ...
    exit;
}
waitpid $pid, 0;
print $? == 0 ? "ok\n" : "nok\n";
A strictly informational but impractical answer:
Though there's almost certainly a more elegant way of going about this depending on the exact details of what you're trying to do, if you absolutely must have dup2(), its Perl equivalent is present in the POSIX module. However, in this case you're dealing with actual file descriptors and not Perl filehandles, and correspondingly you're restricted to using the other provided functions in the POSIX module, all of which are analogous to what you would be using in C. To some extent, you would be writing C in very un-Perlish Perl.
http://perldoc.perl.org/POSIX.html
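A rough sketch of what that looks like, assuming you really do want dup2() semantics (child.out is a made-up name):
use POSIX qw(dup2);

open my $out, '>', 'child.out' or die "open: $!";
# Duplicate $out's descriptor onto file descriptor 1 (stdout), C-style.
defined(dup2(fileno($out), 1)) or die "dup2: $!";
print "this now ends up in child.out\n";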
I am having some trouble trying to print from a file. Any ideas? Thanks
open(STDOUT,">/home/int420_101a05/shttpd/htdocs/receipt.html");
#Results of a sub-routine
&printReceipt;
close(STDOUT);
open(INF,"/home/int420_101a05/shttpd/htdocs/receipt.html"); $emailBody = <INF>;
close(INF);
print $emailBody;
ERRORS: Filehandle STDOUT reopened as INF only for input at ./test.c line 6.
print() on closed filehandle STDOUT at ./test.c line 9.
This discussion addresses the technical reason for the message. Relevant info from the thread is this:
From open(2) manpage:
When the call is successful, the file descriptor returned will be the lowest file descriptor not currently open for the process.
But STDOUT still refers to the filehandle #1. This warning could be useful. Although one can argue that further uses of STDOUT as an output filehandle will trigger a warning as well...
So, to summarize, you closed STDOUT (file descriptor 1) and your file will be open as FD#1. That's due to open()'s properties.
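A quick way to see that descriptor reuse for yourself (a throwaway sketch; /tmp/demo.txt is a made-up name):
close STDOUT;
open(my $fh, '>', '/tmp/demo.txt') or die $!;
warn "the new handle got file descriptor ", fileno($fh), "\n";   # typically prints 1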
As others have noted, the real reason you're having this problem is that you should not use STDOUT for printing to a file unless there's some special case where it's required.
Instead, open a file for writing using a new file handle:
open(OUTFILE,">/home/int420_101a05/shttpd/htdocs/receipt.html")
|| die "Could not open: $!";
print OUTFILE "data";
close(OUTFILE);
To print to a filehandle from a subroutine, just pass the filehandle as a parameter.
The best way of doing so is to create an IO::File object and pass that object around:
my $filehandle = IO::File->new(">$filename") || die "error: $!";
mySub($filehandle);
sub mySub {
    my $fh = shift;
    print $fh "stuff" or die "could not print: $!";
}
You can also set a particular filehandle as the default filehandle, so that print writes to it by default, using select, but that is a LOT more fragile and should be avoided in favor of the IO::File solution.
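For completeness, the select-based approach mentioned there looks roughly like this (reusing the $filehandle from the snippet above):
my $old_default = select($filehandle);   # make $filehandle the default for print
print "this goes to the file, with no handle argument\n";
select($old_default);                    # restore the previous default (usually STDOUT)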
If you want to temporarily change the standard output, use the select builtin. Another option is to localize the typeglob first:
{
    local *STDOUT;
    open STDOUT, '>', 'outfile.txt' or die $!;
    print "Sent to file\n";
}
Don't try to open the STDOUT handle. If you want to print to STDOUT, just use print (with no filehandle argument). If you want to print to something other than STDOUT, use a different name.
I am getting this error while executing my Perl script. Please tell me how to rectify this error in Perl.
print() on closed filehandle MYFILE
This is the code that is giving the error:
sub return_error
{
    $DATA  = "Sorry this page is corrently being updated...<p>";
    $DATA .= " Back ";
    open(MYFILE, ">/home/abc/xrt/sdf/news/top.html");
    print MYFILE $DATA;
    close(MYFILE);
    exit;
}
I hope that now I'm clearer.
You are trying to do something with MYFILE after you (or the interpreter itself, because of an error) closed it.
Judging from your code sample, the problem is probably that open doesn't actually open the file; the script may not have permission to write to it.
Change your code to the following to see if there was an error:
open(MYFILE, ">", "/home/abc/xrt/sdf/news/top.html") or die "Couldn't open: $!";
Update
ysth pointed out that -w is not really good at checking if you can write to the file, it only ‘checks that one of the relevant flags in the mode is set’. Furthermore, brian d foy told me that the conditional I've used isn't good at handling the error. So I removed the misleading code. Use the code above instead.
It appears that the open call is failing. You should always check the status when opening a filehandle.
my $file = '/home/abc/xrt/sdf/news/top.html';
open(MYFILE, ">$file") or die "Can't write to file '$file' [$!]\n";
print MYFILE $DATA;
close MYFILE;
If the open is unsuccessful, the built-in variable $! (a.k.a. $OS_ERROR) will contain the OS-dependent error message, e.g. "Permission denied".
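Note that the $OS_ERROR name is only available once the English module has been loaded; a small sketch:
use English qw(-no_match_vars);

open(my $fh, '<', '/no/such/file')
    or die "Can't open: $OS_ERROR\n";   # same value as $!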
It's also preferable (for non-archaic versions of Perl) to use the three-argument form of open and lexical filehandles:
my $file = '/home/abc/xrt/sdf/news/top.html';
open(my $fh, '>', $file) or die "Can't write to file '$file' [$!]\n";
print {$fh} $DATA;
close $fh;
An alternate solution to saying or die is to use the autodie pragma:
#!/usr/bin/perl
use strict;
use warnings;
use autodie;
open my $fh, "<", "nsdfkjwefnbwef";
print "should never get here (unless you named files weirdly)\n";
The code above produces the following error (unless a file named nsdfkjwefnbwef exists in the current directory):
Can't open 'nsdfkjwefnbwef' for reading: 'No such file or directory' at example.pl line 7
This:
open(MYFILE,">/home/abc/xrt/sdf/news/top.html");
In modern Perl, it could be written as:
open(my $file_fh, ">", "/home/abc/xrt/sdf/news/top.html") or die($!);
This way you get a lexical $variable restricted to the enclosing scope, there is no "funky business" if you have weird filenames (e.g. starting with ">"), and you get error handling (you can replace die with warn or with error-handling code).
Once you close $file_fh, or it simply goes out of scope, you can no longer print to it.
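A small sketch of that scoping behaviour (out.txt is a made-up name):
{
    open(my $file_fh, '>', 'out.txt') or die $!;
    print $file_fh "this works\n";
    close $file_fh;
    # print $file_fh "too late\n";   # would warn: print() on closed filehandle
}
# outside the block, $file_fh does not exist at all under strict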
I had this problem when my files were set to READ-ONLY.
Check this also, before giving up! :)
Check that the open worked
if (open(my $FH, ">", "filename") || die("error: $!"))
{
    print $FH "stuff";
    close($FH);
}
If you use a global symbol MYFILE as your filehandle, rather than a local lexical ($myfile), you will invariably run into issues if your program is multithreaded, e.g. if it is running via mod_perl. One process could be closing the filehandle while another process is attempting to write to it. Using $myfile will avoid this issue as each instance will have its own local copy, but you will still run into issues where one process could overwrite the data that another is writing. Use flock() to lock the file while writing to it.
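A minimal sketch of the flock() idea (shared.log is a made-up name):
use Fcntl qw(:flock);

open(my $myfile, '>>', 'shared.log') or die "open: $!";
flock($myfile, LOCK_EX) or die "flock: $!";   # take an exclusive lock while we write
print $myfile "one record\n";
close($myfile);                               # closing the handle releases the lock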
Somewhere in your script you will be doing something like:
open MYFILE, "> myfile.txt";
# do stuff with myfile
close MYFILE;
print MYFILE "Some more stuff I want to write to myfile";
The last line will throw an error because MYFILE has been closed.
Update
After seeing your code, it looks like the file you are trying to write to can't be opened in the first place. As others have already mentioned try doing something like:
open MYFILE, "> myfile.txt" or die "Can't open myfile.txt: $!\n"
Which should give you some feedback on why you can't open the file.