I'm redirecting standard out for a perl program. Example:
perl run_program.pl > /log/run_program.log
Is there a way to know what standard out has been redirected to? In this case I'm looking to get the value '/log/run_program.log'.
If it's not possible is there another/better way to get the same result?
Thanks in advance!
EDIT: The reason I'm not setting STDOUT in the program is because I'm calling a bunch of .pm files that have print lines that I want to go to STDOUT without having to pass a file handle to them.
On my system, you can use
readlink("/proc/$$/fd/1")
Just to let you know, you might be able to use the select function to change the default output filehandle:
use strict;
use warnings;
use autodie;
open my $output_fd, ">", "/log/run_program.log";
my $old_default_fd = select( $output_fd );
print "I'm now going into /log/run_program.log\n";
select($old_default_fd); # Restore the default when you no longer need it
This may work with most of your Perl modules. Just hope that they're not doing something stupid like:
print STDOUT "Ha, ha. I'm still going to STDOUT.\n".
I hate it when Perl modules print stuff.
<soapbox>
To you Perl Module writers:
Perl modules should not be printing (unless that's their main purpose). You should instead return what you want to print and let the caller decide what to do with the output.
</soapbox>
For the first part of your question, no. There's no way for the perl program to know where STDOUT is directed to.
The redirection happens external to the program, and is "wired up" before the perl process even starts. STDOUT could be pointed to a device, a file, or another process (a pipe).
The whole purpose of redirection from stdout to a file is to adapt a program which typically writes to stdout and redirect it to a file. The OS doesn't give you the name of the file, because it figures your program is too stupid to know what to do with a file name.
So your best bet is to get it as my $file_name = shift; and open it yourself. (A shift in the mainline pulls from @ARGV.)
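A minimal sketch of that approach (the handle name and the usage message are just for illustration):
# run as: perl run_program.pl /log/run_program.log
my $file_name = shift @ARGV or die "Usage: $0 logfile\n";
open my $log_fh, '>', $file_name or die "Can't open $file_name: $!";
select $log_fh;                                  # plain print now goes to the file
print "This line ends up in $file_name\n";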
Give these ideas a chance:
...
my $log_path = "/log/run_program.log"; # or using $0 in some manner
open my $log_handler, ">>", $log_path or die "Can't open $log_path: $!";
...
Now you could write a myprint subroutine that calls print $log_handler and use it throughout the program, or better, after having a look at OVERRIDING CORE FUNCTIONS you could redefine print yourself like this:
...
use subs 'print';
sub print { #redefine here }
...
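For the first option, a minimal myprint might look like this (a sketch, assuming the $log_handler and $log_path from the snippet above are in scope):
sub myprint {
    print {$log_handler} @_;    # everything goes to the log handle
}

myprint("This goes to $log_path\n");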
I've been learning about filehandles in Perl, and I was curious to see if there's a way to alter the source code of a program as it's running. For example, I created a script named "dynamic.pl" which contained the following:
use strict;
use warnings;
open(my $append, ">>", "dynamic.pl");
print $append "print \"It works!!\\n\";\n";
This program adds the line
print "It works!!\n";
to the end of its own source file, and I hoped that once that line was added, it would then execute and output "It works!!"
Well, it does correctly append the line to the source file, but it doesn't execute it then and there.
So I assume that when Perl executes a program, it loads it into memory and runs it from there. My question is: is there a way to access this loaded version of the program, so you can have a program that alters itself as it runs?
The missing piece you need is eval EXPR. This compiles and evaluates any string as code.
my $string = q[print "Hello, world!";];
eval $string;
This string can come from any source, including a filehandle.
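For instance, in the question's scenario you could read the newly appended line back out of the source file and evaluate it (a rough sketch with minimal error handling):
open my $src, '<', 'dynamic.pl' or die $!;
my @lines = <$src>;
close $src;

eval $lines[-1];              # run the last line that was appended
warn "eval failed: $@" if $@;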
It also doesn't have to be a single statement. If you want to modify how a program runs, you can replace its subroutines.
use strict;
use warnings;
use v5.10;
sub speak { return "Woof!"; }
say speak();
eval q[sub speak { return "Meow!"; }];
say speak();
You'll get a Subroutine speak redefined warning from that. It can be suppressed with no warnings "redefine".
{
    # The block is so this "no warnings" only affects
    # the eval and not the entire program.
    no warnings "redefine";
    eval q[sub speak { return "Shazoo!"; }];
}
say speak();
Obviously this is a major security hole. There are many, many things to consider here, too many for an answer, and I strongly recommend you not do this and find a better solution to whatever problem you're trying to solve this way.
One way to mitigate the potential for damage is to use the Safe module. This is like eval but limits what built in functions are available. It is by no means a panacea for the security issues.
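A minimal sketch of what using Safe looks like (the string being evaluated is just an example; the default compartment already blocks dangerous built-ins such as system and open):
use Safe;

my $compartment = Safe->new;                  # default op mask restricts what the code may do
my $result = $compartment->reval('2 + 2');    # compile and run the string inside the compartment
warn "reval failed: $@" if $@;
print "Result: $result\n" if defined $result;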
With a warning about all kinds of issues, you can reload modules.
There are packages for that, for example, Module::Reload. Then you can write code that you intend to change in a module, change the source at runtime, and have it reloaded.
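A minimal sketch of the Module::Reload route (using the same placeholder module name as in the by-hand version below):
use Module::Reload;
use ModuleWithCodeThatChanges;

# ... change the source of ModuleWithCodeThatChanges.pm on disk ...

Module::Reload->check;   # reloads any loaded modules whose files have changed on disk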
By hand you would delete that from %INC and then require, like
# ... change source code in the module ...
delete $INC{'ModuleWithCodeThatChanges.pm'};
require ModuleWithCodeThatChanges;
The only reason I can think of for doing this is experimentation and play. Otherwise, there are all kinds of concerns with doing something like this, and whatever your goal may be there are other ways to accomplish it.
Note The question does specify a filehandle. However, I don't see that to be really related to what I see to be the heart of the question, of modifying code at runtime.
The source file isn't used after it's been compiled.
You could just eval it.
use strict;
use warnings;
my $code = <<'__EOS__';
print "It works!!\n";
__EOS__

open(my $append_fh, ">>", "dynamic.pl")
    or die($!);

print($append_fh $code);

eval("$code; 1")
    or die($@);
There's almost definitely a better way to achieve your end goal here. BUT, you could recursively make exec() or system() calls -- the latter if you need a return value. Be sure to set up some condition or the dominoes will keep falling. Again, you should rethink this, unless it's just practice of some sort, or maybe I don't get it!
Each call should execute the latest state of the file; also be sure to close the file before each call.
i.e.,
exec("dynamic.pl"); or
my retval;
retval = system("perl dynamic.pl");
Don't use eval ever.
I basically want to reopen STDERR/STDOUT so they write to one logfile with both the stream and the timestamp included on every line. So print STDERR "Hello World" prints STDERR: 20130215123456: Hello World. I don't want to rewrite all my print statements into function calls, also some of the output will be coming from external processes via system() calls anyway which I won't be able to rewrite.
I also need for the output to be placed in the file "live", i.e. not only written when the process completes.
(p.s. I'm not asking particularly for details of how to generate timestamps, just how to redirect to a file and prepend a string)
I've worked out the following code, but it's messy:
my $mode = ">>";
my $file = "outerr.txt";
open(STDOUT, "|-", qq(perl -e 'open(FILE, "$mode", "$file"); while (<>) { print FILE "STDOUT: \$\_"; }'));
open(STDERR, "|-", qq(perl -e 'open(FILE, "$mode", "$file"); while (<>) { print FILE "STDERR: \$\_"; }'));
(The above doesn't add dates, but that should be trivial to add)
I'm looking for a cleaner solution, one that doesn't require quoting perl code and passing it on the command line, or at least module that hides some of the complexity. Looking at the code for Capture::Tiny it doesn't look like it can handle writing a part of output, though I'm not sure about that. annotate-output only works on an external command sadly, I need this to work on both external commands and ordinary perl printing.
The child launched via system writes directly to the real STDOUT; it does not have access to the variables in your program. That means any approach based on running Perl code when a Perl file handle is written to (e.g. tie) won't work.
Write another script that runs your script with STDOUT and STDERR replaced with pipes. Read from those pipes and print out the modified output. I suggest using IPC::Run to do this, because it'll save you from using select. You can get away without it if you combine STDOUT and STDERR in one stream.
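A rough sketch of the IPC::Run version (the script name is a placeholder, the timestamp format is just an example, and it assumes output arrives roughly line by line):
use strict;
use warnings;
use IPC::Run qw(run);
use POSIX qw(strftime);

# Returns a callback that prefixes each line of a chunk with the stream name and a timestamp.
sub prefixer {
    my ($stream) = @_;
    return sub {
        my ($chunk) = @_;
        my $stamp = strftime('%Y%m%d%H%M%S', localtime);
        print "$stream: $stamp: $_\n" for split /\n/, $chunk;
    };
}

run [ 'perl', 'your_script.pl' ],
    '>',  prefixer('STDOUT'),
    '2>', prefixer('STDERR');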
We maintain a huge number of Perl modules, actually so many that we don't even know all the modules we are responsible for. We would like to track what scripts and modules access another module in some sort of log, preferably stored by module name, so that we can evaluate whether it's a risk to update a module and so that we can know what we might affect.
Is there any simple way to do this?
You could do a simple regex search:
use strict;
use warnings;
my %modules;
foreach my $perl_file (@file_list) {
    open my $fh, '<', $perl_file or die "Can't open $perl_file ($!)";
    while (<$fh>) {
        if (/\s*(?:use|require)\s+([^;]+);/) {
            $modules{$1}{$perl_file}++;
        }
    }
    close $fh;
}
This is quick and dirty, but it should work pretty well. You end up with a hash of modules, each of which points to a hash of the files that use it.
Of course, it will catch things like use strict; but those will be easy enough to ignore.
If you have things like use Module qw/function/;, you will grab the whole thing before the semicolon, but that shouldn't be a big deal. You can just search the keys for your known module names.
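For instance, to report which files use a handful of modules you care about (a small sketch; the module names are placeholders):
my @known = ('My::Module', 'Another::Module');
for my $name (@known) {
    for my $key (grep { /^\Q$name\E\b/ } keys %modules) {
        print "$name is used by: ", join(', ', sort keys %{ $modules{$key} }), "\n";
    }
}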
A downside is that it doesn't track dependencies. If you need that information you could add it by getting it from cpan or something.
Update: If you want to log this at runtime, you could create a wrapper script and have your perl command point to the wrapper on your system. Then make the wrapper something like this:
use strict;
use warnings;
use Module::Loaded;
my $script = shift @ARGV;
#run program
do $script;
#is_loaded() gets the path of these modules if they are loaded.
print is_loaded('Some::Module');
print is_loaded('Another::Module');
You might run the risk of funny side effects, though, since the method of calling your script has changed. It depends on what you are doing.
Maybe edit sitecustomize.pl so that each time Perl runs, it writes some info to a log that you can then analyse? Add something like this to sitecustomize.pl:
open (LOG, '>>',"absolutepathto/logfile.txt");
print LOG $0,"\t",$$,"\t",scalar(localtime),"\n";
open SELF, $0;
while (<SELF>) {
    print LOG $_ if (/use|require/);
}
close SELF;
print LOG "_" x 80,"\n";
close LOG;
EDIT:
Also, we forgot about the %INC hash, so the code above may be rewritten as follows to include more data about which modules were actually loaded, including files pulled in by the do function:
open (LOG, '>>',"absolutepathto/logfile.txt");
print LOG $0,' ',$$,' ',scalar(localtime),"\n";
open SELF, $0;
while (<SELF>) {
    print LOG $_ if (/use|require/);
}
close SELF;
END {
    local $" = "\n";
    print LOG "Files loaded by use, eval, or do functions at the end of this program run:\n";
    print LOG "@{[values %INC]}","\n";
    print LOG "_" x 80,"\n";
    close LOG;
}
I have a Perl script that has to wrap a PHP script that produces a lot of output, and takes about half an hour to run.
At moment I'm shelling out with:
print `$command`;
This works in the sense that the PHP script is called, and it does its job, but there is no output rendered by Perl until the PHP script finishes half an hour later.
Is there a way I could shell out so that the output from PHP is printed by perl as soon as it receives it?
The problem is that Perl's not going to finish reading until the PHP script terminates, and only when it finishes reading will it write. The backticks operator blocks until the child process exits, and there's no magic to make a read/write loop implicitly.
So you need to write one. Try a piped open:
open my $fh, '-|', $command or die 'Unable to open';
while (<$fh>) {
    print;
}
close $fh;
This should then read each line as the PHP script writes it, and immediately output it. If the PHP script doesn't output in convenient lines and you want to do it with individual characters, you'll need to look into using read to get data from the file handle, and disable output buffering ($| = 1) on stdout for writing it.
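A rough sketch of that character-level variant, using sysread (a close relative of the read mentioned above, which returns whatever data is currently available) so each chunk is forwarded as soon as it arrives:
$| = 1;                                         # unbuffer STDOUT so output appears immediately
open my $fh, '-|', $command or die "Unable to open: $!";
while (sysread($fh, my $buf, 4096)) {
    print $buf;                                 # forward the chunk straight away
}
close $fh;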
See also http://perldoc.perl.org/perlipc.html#Using-open()-for-IPC
Are you really doing print `$command`?
If you are only running a command and not capturing any of its output, simply use system $command. It will write to stdout directly without passing through Perl.
You might want to investigate Capture::Tiny. IIRC something like this should work:
use strict;
use warnings;
use Capture::Tiny qw/tee/;
my ($stdout, $stderr, @result) = tee { system $command };
Actually, just using system might be good enough, YMMV.
I am a newb to Perl. I am writing some scripts and want to define my own print called myprint() which will print the stuff passed to it based on some flags (verbose/debug flag)
open(FD, "> /tmp/abc.txt") or die "Cannot create abc.txt file";
print FD "---Production Data---\n";
myprint "Hello - This is only a comment - debug data";
Can someone please help me with some sample code for the myprint() function?
Do you care more about writing your own logging system, or do you want to know how to put logging statements in appropriate parts of your program which you can turn off (and, incur little performance penalty when they are turned off)?
If you want a logging system that is easy to start using, but also offers a world of features which you can incrementally discover and use, Log::Log4perl is a good option. It has an easy mode, which allows you to specify the desired logging level, and emits only those logging messages that are above the desired level.
#!/usr/bin/env perl
use strict; use warnings;
use File::Temp qw(tempfile);
use Log::Log4perl qw(:easy);
Log::Log4perl->easy_init({level => $INFO});
my ($fh, $filename) = tempfile;
print $fh "---Production Data---\n";
WARN 'Wrote something somewhere somehow';
The snippet also shows a better way of opening a temporary file using File::Temp.
As for overriding the built-in print … It really isn't a good idea to fiddle with built-ins except in very specific circumstances. perldoc perlsub has a section on Overriding Built-in Functions. The accepted answer to this question lists the Perl built-ins that cannot be overridden. print is one of those.
But, then, one really does not need to override a built-in to write a logging system.
So, if an already-written logging system does not do it for you, you really seem to be asking "how do I write a function that prints stuff conditionally depending on the value of a flag?"
Here is one way:
#!/usr/bin/env perl
package My::Logger;
{
    use strict; use warnings;
    use Sub::Exporter -setup => {
        exports => [
            DEBUG => sub {
                return sub {} unless $ENV{MYDEBUG};
                return sub { print 'DEBUG: ' => @_ };
            },
        ]
    };
}
package main;
use strict; use warnings;
# You'd replace this with use My::Logger qw(DEBUG) if you put My::Logger
# in My/Logger.pm somewhere in your @INC
BEGIN {
    My::Logger->import('DEBUG');
}

sub nicefunc {
    print "Hello World!\n";
    DEBUG("Isn't this a nice function?\n");
    return;
}
nicefunc();
Sample usage:
$ ./yy.pl
Hello World!
$ MYDEBUG=1 ./yy.pl
Hello World!
DEBUG: Isn't this a nice function?
I wasn't going to answer this because Sinan already has the answer I'd recommend, but tonight I also happened to be working on the "Filehandle References" chapter of the upcoming Intermediate Perl. Here are a couple of relevant paragraphs, which I'll copy directly without adapting them to your question:
IO::Null and IO::Interactive
Sometimes we don't want to send our output anywhere, but we are forced
to send it somewhere. In that case, we can use IO::Null to create
a filehandle that simply discards anything that we give it. It looks
and acts just like a filehandle, but does nothing:
use IO::Null;
my $null_fh = IO::Null->new;
some_printing_thing( $null_fh, @args );
Other times, we want output in some cases but not in others. If we are
logged in and running our program in our terminal, we probably want to
see lots of output. However, if we schedule the job through cron, we
probably don't care so much about the output as long as it does the job.
The IO::Interactive module is smart enough to tell the difference:
use IO::Interactive;
print { is_interactive } 'Bamboo car frame';
The is_interactive subroutine returns a filehandle. Since the
call to the subroutine is not a simple scalar variable, we surround
it with braces to tell Perl that it's the filehandle.
Now that you know about "do nothing" filehandles, you can replace some
ugly code that everyone tends to write. In some cases you want output
and in some cases you don't, so many people use a post-expression
conditional to turn off a statement in some cases:
print STDOUT "Hey, the radio's not working!" if $Debug;
Instead of that, you can assign different values to $debug_fh based
on whatever condition you want, then leave off the ugly if $Debug
at the end of every print:
use IO::Null;
my $debug_fh = $Debug ? *STDOUT : IO::Null->new;
$debug_fh->print( "Hey, the radio's not working!" );
The magic behind IO::Null might give a warning about "print() on
unopened filehandle GLOB" with the indirect object notation (e.g.
print $debug_fh) even though it works just fine. We don't get that
warning with the direct form.