I'm using some system commands in Perl. In the case below I was getting the following output:
ls: import-log.*: No such file or directory
ls: error-log.*: No such file or directory
No specified files found for deletion
My code:
sub monthoryear()
{
    @importlog = `ls -al import-log.*`;
    @errorlog  = `ls -al error-log.*`;
}
I don't want to see the following in the output even if there are no files:
ls: import-log.*: No such file or directory
ls: error-log.*: No such file or directory
While the other answers are correct about the exact technical question you asked, you should also consider not writing what is effectively a shell script in Perl.
You should use Perl-native methods of getting a file list (e.g. glob() or File::Find) instead of calling a backticked ls.
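For instance, a minimal sketch using glob() with the patterns from the question:
my @importlog = glob('import-log.*');
my @errorlog  = glob('error-log.*');
# glob() returns an empty list when nothing matches, so no error message is printed.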
Redirect STDERR to the null device:
use File::Spec;
open STDERR, '>', File::Spec->devnull() or die "could not open STDERR: $!\n";
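Note that this silences STDERR for the rest of the program, including whatever the backticked commands write, since child processes inherit the redirected descriptor.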
You can add stderr redirection in your subshell commands:
@importlog = `ls -al import-log.* 2>/dev/null`;
@errorlog = `ls -al error-log.* 2>/dev/null`;
Check out perlfaq8. If you don't care whether it's STDOUT or STDERR, you can get both redirected to STDOUT.
$output = `$cmd 2>&1`;
In your case, you probably just want to get rid of STDERR:
$output = `$cmd 2>/dev/null`;
However, I agree with DVK's answer: using an external command to get a list of files just seems silly. You should use File::Find. That way you can use Perl's normal error handling in case something fails.
#!/usr/bin/perl

use strict;
use warnings;
use File::Find;

my @importlog;
my @errorlog;

find(sub {
    push @importlog, $File::Find::name if /^import-log\./;
    push @errorlog,  $File::Find::name if /^error-log\./;
}, '.');

print "Import log:\n", join("\n", @importlog), "\n";
print "Error log:\n",  join("\n", @errorlog),  "\n";
Create a new warn hook, then do something with the message: store it, ignore it, etc.
local $SIG{__WARN__} = sub {
    my $message = shift;

    ## do nothing to ignore altogether

    ## or ignore a specific message
    # warn $message unless $message =~ /No such file or directory/;

    ## or do something else
    # die $message;  # make fatal
    # open my $fh, '>>', 'file.log'; print $fh $message;
};
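As a sketch, the hook can be localized around just the noisy code (the filtering pattern and the surrounding call are illustrative):
{
    local $SIG{__WARN__} = sub {
        my $message = shift;
        # pass everything through except the noise we expect
        warn $message unless $message =~ /No such file or directory/;
    };
    # ... code that may emit the unwanted warnings ...
}
# outside the block, warnings behave normally again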
You can redirect stderr to /dev/null like this:
@importlog = `ls -al import-log.* 2> /dev/null`;
@errorlog = `ls -al error-log.* 2> /dev/null`;
Subshells will inherit the parent's STDERR, so if you want to do it on a global level, you can do this:
open(STDERR,'>/dev/null');
`ls non-existent-file`;
`ls non-existent-file2`;
`ls non-existent-file3`;
`ls non-existent-file4`;
`ls non-existent-file5`;
Often you also want to restore STDERR later. I do this like this (note that the readlink trick relies on Linux's /proc filesystem):
#!/usr/bin/perl
print STDERR "This will be sent to the old (readable) STDERR\n";
my $oldstderr = readlink "/proc/self/fd/2"; # remember the location of the old STDERR
open(STDERR, ">/dev/null");                 # ignore STDERR now
print STDERR "This will be sent to a STDERR that is 'gone'\n";
open(STDERR, ">$oldstderr");                # restore STDERR to the old state
print STDERR "This will be sent to the old (readable) STDERR again\n";
OR
You can use the Capture::Tiny module, which is easier to read and more portable.
#!/usr/bin/perl
use Capture::Tiny qw/capture_stderr/;

print STDERR "This will be sent to the old (readable) STDERR\n";
capture_stderr sub {
    print STDERR "This will be sent to a STDERR that is 'gone'\n";
};
print STDERR "This will be sent to the old (readable) STDERR again\n";
Here's how you can suppress STDERR, capture error messages that occur while it is turned off, restore STDERR, and report back any captured error messages.
#!/usr/bin/perl
use warnings;
use strict;
print STDERR "STDERR is on.\n";
my ($stderr_fh, $err_msg_ref) = suppress_std_err();
print "STDERR is now off and error messages are being suppressed and saved.\n";
print STDERR "I'm an error message.\n";
restore_std_err($stderr_fh);
print STDERR "STDERR is back on\n";
print "Errors reported while STDERR was off: $$err_msg_ref\n";
# Saves STDERR in a filehandle, then turns it off.
# Any error messages that occur while STDERR is off are stored for safekeeping.
sub suppress_std_err {
    my $suppressed_std_error_messages;
    open(my $saved_std_err_fh, '>&', \*STDERR);
    close STDERR;
    open(STDERR, '>', \$suppressed_std_error_messages);
    return ($saved_std_err_fh, \$suppressed_std_error_messages);
}

# Restores STDERR from the saved filehandle.
sub restore_std_err {
    my $old_std_err_fh = shift;
    close STDERR;
    open(STDERR, '>&', $old_std_err_fh);
}
Redirecting STDERR to an external file is pretty easy:
my $stderr = '/home/logs/stderr.log.txt';
open STDERR, '>', $stderr;
...
$_ = 1/0; # Error: Illegal division by zero
To make this error log file more readable, I want to prepend a timestamp each time something is sent to STDERR.
How can that be accomplished?
The easiest way, without too much disruption to the rest of your code, is to use tied filehandles. Tied filehandles allow you to write customized functions that are invoked when your filehandle is read from, written to, or has any other operation performed on it.
Proof-of-concept
package TimeStamper;

sub TIEHANDLE {
    my ($pkg, $file) = @_;
    open my $fh, ">", $file;    # the "real" filehandle
    return bless [$fh], $pkg;
}

sub PRINT {
    my ($self, @msg) = @_;
    print {$self->[0]} "[", scalar localtime, "] ", @msg;
}

package main;

my $stderr = "/home/logs/stderr.log.txt";
tie *STDERR, "TimeStamper", $stderr;
print STDERR "Hello world\n";
I am writing a Perl script to open a text file and perform some transformations on it. The script throws a "No such file or directory exists" error whenever the text file is unavailable.
I want to catch that error and create the text file then.
while (<>) { # i am passing the filename from the batch file
    #some task
}
# If the above while loop fails, it throws a "no such file or directory exists" error. I want to catch it and do some other task.
Those particular errors are warnings sent to STDERR by the "magic" behind ARGV. Why don't you just redirect STDERR?
perl script.pl foo bar 2>error.log
If that's not good enough, you'll have to start using $SIG{__WARN__} (yuck) or stop using ARGV (<> with no file handle defaults to using ARGV).
for my $argv (@ARGV ? @ARGV : '-') {
    open(my $argv_fh, $argv)
        or do {
            ... print message to log file ...
            next;
        };
    while (<$argv_fh>) {
        ...
    }
}
Instead of trying to catch the warning that the file doesn't exist, why not pass the file path via getopt and test for existence/readability with the file test operators before opening?
edit: updated with example
#!/usr/bin/perl

use strict;
use warnings;
use Getopt::Std;

my %opts;
getopt('f', \%opts);
die "use -f to specify the file" unless defined $opts{f};

if (! -e $opts{f}) {
    print "file doesn't exist\n";
}
elsif (! -r $opts{f}) {
    print "file isn't readable\n";
}
elsif (! -f $opts{f}) {
    print "file is not a normal file\n";
}
else {
    open(my $fh, '<', $opts{f}) or print "whatever error handling logic\n";
}
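You would then invoke it as, for example, perl script.pl -f input.txt (the file name here is illustrative).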
I need to work with some libraries that unfortunately log diagnostic messages to STDOUT and STDERR. By using tie, I can redirect those writes to a function that captures them. Since I don't want all STDOUT and STDERR output of my programs to be captured through the tied handle, I'd like to do this only for certain packages.
I have come up with a solution where the actual behavior is determined by looking at caller(), as can be seen below, but I have the feeling that there has to be a better way... Is there a more elegant solution?
package My::Log::Capture;
use strict;
use warnings;
use 5.010;

sub TIEHANDLE {
    my ($class, $channel, $fh, $packages) = @_;
    bless {
        channel  => lc $channel,
        fh       => $fh,
        packages => $packages,
    }, $class;
}

sub PRINT {
    my $self = shift;
    my $caller = (caller)[0];

    if ($caller ~~ $self->{packages}) {
        local *STDOUT = *STDOUT;
        local *STDERR = *STDERR;
        given ($self->{channel}) {
            when ('stdout') {
                *STDOUT = $self->{fh};
            }
            when ('stderr') {
                *STDERR = $self->{fh};
            }
        }
        # Capturing/logging code goes here...
    } else {
        $self->{fh}->print(@_);
    }
}

1;
package main;
use My::Foo;
# [...]
use My::Log::Capture;
open my $stderr, '>&', *STDERR;
tie *STDERR, 'My::Log::Capture', (stderr => $stderr, [qw< My::Foo >]);
# My::Foo's STDERR output will be captured, everyone else's STDERR
# output will just be relayed.
Aside from fixing the libraries, I can think of only one solution that might be better.
You can re-open STDOUT and STDERR file handles into your own file handles. Then, re-open STDOUT and STDERR with your tied handles.
For example, here's how you do it for STDOUT:
open my $fh, ">&", \*STDOUT or die "cannot reopen STDOUT: $!";
close STDOUT;
open STDOUT, ">", "/tmp/test.txt";
say $fh "foo"; # goes to real STDOUT
say "bar"; # goes to /tmp/test.txt
You can read perldoc -f open for all the gory details on what ">&" and such does.
Anyway, instead of "/tmp/test.txt" you can replace that open call with the setup for your tied file handle.
Your code will always have to either write through an explicit file handle or use select to switch the default file handle:
select $fh;
say "foo"; # goes to real STDOUT
select STDOUT;
say "bar"; # goes to /tmp/test.txt
I want to redirect STDERR and STDOUT to a variable. I did this:
close(STDOUT);
close(STDERR);
my $out;
open(STDOUT, ">>", \$out);
open(STDERR, ">>", \$out);
for(1..10)
{
print "print\n"; # this is ok.
warn "warn\n"; # same
system("make"); # this is lost. neither in screen nor in variable.
}
The problem is with system: I want the output of this call to be captured too.
use Capture::Tiny!
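A minimal sketch (the make call mirrors the question; Capture::Tiny redirects the real file descriptors, so output from child processes is captured as well):
use Capture::Tiny qw(capture_merged);

my $out = capture_merged {
    print "print\n";
    warn "warn\n";
    system("make");   # child output is captured too
};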
Are you seeking to capture the output in a variable? If so, you have to use backticks or qx{} with appropriate redirection. For example, you could use:
#!/usr/bin/env perl
use strict;
use warnings;

# Ensure we have a way to write messages
open my $fh, '>', "output" or die;

close(STDOUT);
close(STDERR);

my $out;
open(STDOUT, ">>", \$out) or do { print $fh "failed to open STDOUT ($!)\n"; die };
open(STDERR, ">>", \$out) or do { print $fh "failed to open STDERR ($!)\n"; die };

foreach my $i (1..10)
{
    print "print $i\n";
    warn "warn $i\n";
    my $extra = qx{make pth$i 2>&1};
    print $fh "<<$i>><<$out>><<$extra>>\n";
}
(I happen to have programs pth1, pth2 and pth3 in the directory - they were made OK; pth4 and above write errors to stderr; the redirection was necessary.)
You should always check the success of operations such as open().
Why is this necessary? Because writing to a variable requires the cooperation of the process doing the writing - and make doesn't know how to cooperate.
There are several ways to redirect and restore STDOUT. Some of them work with STDERR too. Here are my two favorites:
Using select:
my $out;
open my $fh, ">>", \$out;
select $fh;
print "written to the variable\n";
select STDOUT;
print "written to original STDOUT\n";
Using local:
my $out;
do {
    local *STDOUT;
    open STDOUT, ">>", \$out;
    print "written to the variable\n";
};
print "written to original STDOUT\n";
Enjoy.
The reason this is happening is that STDOUT and STDERR, once reopened to a scalar reference, are in-memory filehandles with no underlying file descriptor; they are not equivalent to the stdout and stderr handles the shell provides to the perl binary, so a child process spawned by system has nothing to inherit and its output is lost. To achieve what you want, you should use a piped open instead of system.
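A sketch of that piped-open approach (the make command mirrors the question; 2>&1 merges the child's stderr into the pipe):
my $out;
open my $pipe, '-|', 'make 2>&1' or die "cannot run make: $!";
while (my $line = <$pipe>) {
    $out .= $line;    # accumulate the child's output in the variable
}
close $pipe;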
Why not use IPC::Open3?
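For instance, a hedged sketch with IPC::Open3 (reading both streams to the end like this can deadlock on large output; IPC::Run or Capture::Tiny handle that for you):
use IPC::Open3;
use Symbol qw(gensym);

my $err = gensym;    # open3 needs a pre-created glob for the stderr handle
my $pid = open3(my $in, my $out, $err, 'make');
close $in;

my $stdout = do { local $/; <$out> };
my $stderr = do { local $/; <$err> };
waitpid($pid, 0);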
TLDR Answer
use Capture::Tiny qw(capture capture_merged);
Merged STDOUT and STDERR
If you want STDOUT (from print()s) and STDERR (from warn()s) to be merged, then use...
my ($merged, @result) = capture_merged { print "Hello, world!" }; # static code
my ($merged, @result) = capture_merged { eval $codetoeval };      # code in variable
Separated STDOUT and STDERR
If you want them separated...
my ($stdout, $stderr, @result) = capture { print "Hello, world!" }; # static code
my ($stdout, $stderr, @result) = capture { eval $codetoeval };     # code in variable
Results of Eval
@result indicates the success, with success being [1] and failure being []. Capture::Tiny has a ton of other functions that you can look through for other cases, like code references, etc., but I think the code above should cover most of any Perl developer's needs.
You are given either an IO::File object or a typeglob (\*STDOUT or Symbol::qualify_to_ref("main::FH")); how would you go about determining if it is a read or write handle? The interface cannot be extended to pass this information (I am overriding close to add calls to flush and sync before the actual close).
Currently I am attempting to flush and sync the filehandle and ignoring the error "Invalid argument" (which is what I get when I attempt to flush or sync a read filehandle):
eval { $fh->flush; 1 } or do {
    # this seems to exclude flushes on read handles
    unless ($! =~ /Invalid argument/) {
        croak "could not flush $fh: $!";
    }
};

eval { $fh->sync; 1 } or do {
    # this seems to exclude syncs on read handles
    unless ($! =~ /Invalid argument/) {
        croak "could not sync $fh: $!";
    }
};
Have a look at the fcntl options. Maybe F_GETFL with O_ACCMODE.
Edit: I did a little googling and playing over lunch, and here is some probably non-portable code, but it works on my Linux box and probably on any POSIX system (perhaps even Cygwin, who knows?).
use strict;
use Fcntl;
use IO::File;

my $file;
my %modes = (0 => 'Read only', 1 => 'Write only', 2 => 'Read / Write');

sub open_type {
    my $fh = shift;
    my $mode = fcntl($fh, F_GETFL, 0);
    print "File is: " . $modes{$mode & O_ACCMODE} . "\n";
}

print "out\n";
$file = IO::File->new();
$file->open('> /tmp/out');
open_type($file);
print "\n";

print "in\n";
$file = IO::File->new();
$file->open('< /etc/passwd');
open_type($file);
print "\n";

print "both\n";
$file = IO::File->new();
$file->open('+< /tmp/out');
open_type($file);
Example output:
$ perl test.pl
out
File is: Write only
in
File is: Read only
both
File is: Read / Write