I am new to Perl. I am writing some scripts and want to define my own print, called myprint(), which will print the stuff passed to it depending on some flags (a verbose/debug flag):
open(FD, "> /tmp/abc.txt") or die "Cannot create abc.txt file";
print FD "---Production Data---\n";
myprint "Hello - This is only a comment - debug data";
Can someone please help me with some sample code for the myprint() function?
Do you care more about writing your own logging system, or do you want to know how to put logging statements in appropriate parts of your program which you can turn off (and incur little performance penalty when they are turned off)?
If you want a logging system that is easy to start using, but also offers a world of features which you can incrementally discover and use, Log::Log4perl is a good option. It has an easy mode, which allows you to specify the desired logging level, and emits only those logging messages that are above the desired level.
#!/usr/bin/env perl
use strict; use warnings;
use File::Temp qw(tempfile);
use Log::Log4perl qw(:easy);
Log::Log4perl->easy_init({level => $INFO});
my ($fh, $filename) = tempfile;
print $fh "---Production Data---\n";
WARN 'Wrote something somewhere somehow';
The snippet also shows a better way of opening a temporary file using File::Temp.
As for overriding the built-in print … It really isn't a good idea to fiddle with built-ins except in very specific circumstances. perldoc perlsub has a section on Overriding Built-in Functions. The accepted answer to this question lists the Perl built-ins that cannot be overridden. print is one of those.
But, then, one really does not need to override a built-in to write a logging system.
So, if an already-written logging system does not do it for you, you really seem to be asking "how do I write a function that prints stuff conditionally depending on the value of a flag?"
Here is one way:
#!/usr/bin/env perl
package My::Logger;
{
use strict; use warnings;
use Sub::Exporter -setup => {
exports => [
DEBUG => sub {
return sub {} unless $ENV{MYDEBUG};
return sub { print 'DEBUG: ', @_ };
},
]
};
}
package main;
use strict; use warnings;
# You'd replace this with use My::Logger qw(DEBUG) if you put My::Logger
# in My/Logger.pm somewhere in your @INC
BEGIN {
My::Logger->import('DEBUG');
}
sub nicefunc {
print "Hello World!\n";
DEBUG("Isn't this a nice function?\n");
return;
}
nicefunc();
Sample usage:
$ ./yy.pl
Hello World!
$ MYDEBUG=1 ./yy.pl
Hello World!
DEBUG: Isn't this a nice function?
I wasn't going to answer this because Sinan already has the answer I'd recommend, but tonight I also happened to be working on the "Filehandle References" chapter of the upcoming Intermediate Perl. Here are a couple of relevant paragraphs, which I'll just copy directly without adapting them to your question:
IO::Null and IO::Interactive
Sometimes we don't want to send our output anywhere, but we are forced
to send it somewhere. In that case, we can use IO::Null to create
a filehandle that simply discards anything that we give it. It looks
and acts just like a filehandle, but does nothing:
use IO::Null;
my $null_fh = IO::Null->new;
some_printing_thing( $null_fh, @args );
Other times, we want output in some cases but not in others. If we are
logged in and running our program in our terminal, we probably want to
see lots of output. However, if we schedule the job through cron, we
probably don't care so much about the output as long as it does the job.
The IO::Interactive module is smart enough to tell the difference:
use IO::Interactive qw(is_interactive);
print { is_interactive } 'Bamboo car frame';
The is_interactive subroutine returns a filehandle. Since the
call to the subroutine is not a simple scalar variable, we surround
it with braces to tell Perl that it's the filehandle.
Now that you know about "do nothing" filehandles, you can replace some
ugly code that everyone tends to write. In some cases you want output
and in some cases you don't, so many people use a post-expression
conditional to turn off a statement in some cases:
print STDOUT "Hey, the radio's not working!" if $Debug;
Instead of that, you can assign different values to $debug_fh based
on whatever condition you want, then leave off the ugly if $Debug
at the end of every print:
use IO::Null;
my $debug_fh = $Debug ? *STDOUT : IO::Null->new;
$debug_fh->print( "Hey, the radio's not working!" );
The magic behind IO::Null might give a warning about "print() on
unopened filehandle GLOB" with the indirect object notation (e.g.
print $debug_fh) even though it works just fine. We don't get that
warning with the direct form.
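For instance, a minimal illustration of the two call forms (assuming IO::Null is installed):
use IO::Null;
my $null_fh = IO::Null->new;
print {$null_fh} "indirect form\n";  # may warn: print() on unopened filehandle
$null_fh->print("direct form\n");    # the method form stays quiet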
Related
I have a pretty big Perl script that is executed quite frequently (from cron).
Most executions require only pretty short & simple tests.
How can I split the single-file script into two parts, with "part two" compiled only when "part one" decides it is needed?
Considered solution:
using a BEGIN{ …; exit if …; } block for the trivial test (a sketch follows below).
a two-file solution, with file_1 using require to compile & execute file_2.
I would prefer a single-file solution to ease maintenance, if the cost is reasonable.
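For clarity, here is a minimal sketch of the BEGIN-block idea (the test condition is only a placeholder):
use strict;
use warnings;
BEGIN {
    # hypothetical trivial test; replace with the real check
    if (-e "/tmp/nothing_to_do") {
        print "Nothing to do.\n";
        exit 0;  # nothing past this point is even compiled
    }
}
# ... heavy modules and the rest of the script follow here ...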
First, you should measure how long the compilation really takes, to see if this "optimization" is even necessary. If it does happen to be, then since you said you'd prefer a one-file solution, one possible solution is using the __DATA__ section for code like so:
use warnings;
use strict;
# measure compilation and execution time
use Time::HiRes qw/ gettimeofday tv_interval /;
my $start;
BEGIN { $start = [gettimeofday] }
INIT { printf "%.06f\n", tv_interval($start) }
END { printf "%.06f\n", tv_interval($start) }
my $condition = 1; # dummy for testing
# conditionally compile and run the code in the DATA section
if ($condition) {
eval do { local $/; <DATA> . '; 1' } or die $@;
}
__DATA__
# ... lots of code here ...
I see two ways of achieving what you want. The simple one would be to divide the script into two parts. The first part will do the simple tests. Then, if you need to do more complicated tests, you may "add" the second part. The way to do this is using eval, like this:
<first-script.pl>
...
eval `cat second-script.pl`;
if( $@ ) {
print STDERR $@, "\n";
die "Errors in the second script.\n";
}
Or using File::Slurp in a more robust way:
eval read_file("second-script.pl", binmode => ':utf8');
Or, following @amon's suggestion:
do "second-script.pl";
Only beware that do is different from eval in this way:
It also differs in that code evaluated with do FILE cannot see lexicals in the enclosing scope; eval STRING does. It's the same, however, in that it does reparse the file every time you call it, so you probably don't want to do this inside a loop.
The eval will execute in the context of the first script, so any variables or initializations will be available to that code.
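A small illustration of that scoping difference, reusing the second-script.pl name from above:
my $secret = 42;
eval 'print "eval sees: $secret\n"';  # prints 42; eval STRING sees enclosing lexicals
do "second-script.pl";                # code in that file cannot see $secret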
Related to this, there is this question: Best way to add dynamic code to a perl application, which I asked some time ago (and answered myself with the help of the comments provided and some research.) I took some time to document everything I could think of for anyone (and myself) to refer to.
The second way I see would be to turn your testing script into a daemon and have the crontab bit call this daemon as necessary. The daemon remains alive, so any data structures that you may need will remain in memory. On the down side, the daemon process will always be running, so it consumes resources continuously.
I've been learning about filehandles in Perl, and I was curious to see if there's a way to alter the source code of a program as it's running. For example, I created a script named "dynamic.pl" which contained the following:
use strict;
use warnings;
open(my $append, ">>", "dynamic.pl");
print $append "print \"It works!!\\n\";\n";
This program adds the line
print "It works!!\n";
to the end of its own source file, and I hoped that once that line was added, it would then execute and output "It works!!".
Well, it does correctly append the line to the source file, but it doesn't execute it then and there.
So I assume that when perl executes a program, it loads it into memory and runs it from there. My question is: is there a way to access this loaded version of the program, so you can have a program that alters itself as it runs?
The missing piece you need is eval EXPR. This compiles and evaluates any string as code.
my $string = q[print "Hello, world!";];
eval $string;
This string can come from any source, including a filehandle.
It also doesn't have to be a single statement. If you want to modify how a program runs, you can replace its subroutines.
use strict;
use warnings;
use v5.10;
sub speak { return "Woof!"; }
say speak();
eval q[sub speak { return "Meow!"; }];
say speak();
You'll get a Subroutine speak redefined warning from that. It can be suppressed with no warnings "redefine".
{
# The block is so this "no warnings" only affects
# the eval and not the entire program.
no warnings "redefine";
eval q[sub speak { return "Shazoo!"; }];
}
say speak();
Obviously this is a major security hole. There are many, many things to consider here, too many for one answer, and I strongly recommend you not do this and instead find a better solution to whatever problem you're trying to solve this way.
One way to mitigate the potential for damage is to use the Safe module. This is like eval, but it limits which built-in functions are available. It is by no means a panacea for the security issues.
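A minimal sketch of the Safe approach:
use Safe;
my $untrusted = q[ 2 + 2 ];  # stands in for code read from elsewhere
my $compartment = Safe->new;
my $result = $compartment->reval($untrusted);
die "reval failed: $@" if $@;
print "Got: $result\n";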
With a warning about all kinds of issues, you can reload modules.
There are packages for that, for example, Module::Reload. Then you can write code that you intend to change in a module, change the source at runtime, and have it reloaded.
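Usage amounts to something like this (see the module's docs for the details):
use Module::Reload;
# somewhere in the program's main loop:
Module::Reload->check;  # reloads any loaded module whose file changed on disk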
By hand you would delete that from %INC and then require, like
# ... change source code in the module ...
delete $INC{'ModuleWithCodeThatChanges.pm'};
require ModuleWithCodeThatChanges;
The only reason I can think of for doing this is experimentation and play. Otherwise, there are all kinds of concerns with doing something like this, and whatever your goal may be there are other ways to accomplish it.
Note The question does specify a filehandle. However, I don't see that to be really related to what I see to be the heart of the question, of modifying code at runtime.
The source file isn't used after it's been compiled.
You could just eval it.
use strict;
use warnings;
my $code = <<'__EOS__';
print "It works!!\n";
__EOS__
open(my $append_fh, ">>", "dynamic.pl")
or die($!);
print($append_fh $code);
eval("$code; 1")
or die($@);
There's almost definitely a better way to achieve your end goal here. BUT, you could recursively make exec() or system() calls -- the latter if you need a return value. Be sure to set up some termination condition or the dominoes will keep falling. Again, you should rethink this, unless it's just practice of some sort, or maybe I don't get it!
Each call should execute the latest state of the file; also be sure to close the file before each call.
i.e.,
exec("dynamic.pl"); or
my retval;
retval = system("perl dynamic.pl");
Don't use eval ever.
I'm redirecting standard out for a perl program. Example:
perl run_program.pl > /log/run_program.log
Is there a way to know what standard out is pointing to? In this case, I'm looking to get the value '/log/run_program.log'.
If it's not possible is there another/better way to get the same result?
Thanks in advance!
EDIT: The reason I'm not setting STDOUT in the program is because I'm calling a bunch of .pm files that have print lines that I want to go to STDOUT without having to pass the file to them.
On my system, you can use
readlink("/proc/$$/fd/1")
EDIT: The reason I'm not setting STDOUT in the program is because I'm calling a bunch of .pm files that have print lines that I want to go to STDOUT without having to pass the file to them.
Just to let you know, you might be able to use the select command to redefine the FD for the default output:
use strict;
use warnings;
use autodie;
open my $output_fd, ">", "/log/run_program.log";
my $old_default_fd = select( $output_fd );
print "I'm now going into /log/run_program.log\n";
select($old_default_fd); # Restore the default when you no longer need it
This may work with most of your Perl modules. Just hope that they're not doing something stupid like:
print STDOUT "Ha, ha. I'm still going to STDOUT.\n".
I hate it when Perl modules print stuff.
<soapbox>
To you Perl Module writers:
Perl modules should not be printing (unless that's their main purpose). You should instead return what you want to print and let the caller decide what to do with the output.
</soapbox>
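In other words (a contrived sketch):
# instead of printing inside the module:
sub report_bad  { print "42 widgets processed\n"; }
# return the text and let the caller pick the destination:
sub report_good { return "42 widgets processed\n"; }
print report_good();  # the caller chooses STDOUT, a log file, or nothing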
For the first part of your question, no. There's no way for the perl program to know where STDOUT is directed to.
The redirection happens external to the program, and is "wired up" before the perl process even starts. STDOUT could be pointed to a device, a file, or another process (a pipe).
The whole purpose of redirection from stdout to a file is to adapt a program which typically writes to stdout and redirect it to a file. The OS doesn't give you the name of the file, because it figures your program is too stupid to know what to do with a file name.
So your best bet is to get it as my $file_name = shift; and open it yourself. (A shift in the mainline pulls from @ARGV.)
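Something along these lines:
#!/usr/bin/perl
use strict;
use warnings;
# invoked as: perl run_program.pl /log/run_program.log
my $file_name = shift or die "Usage: $0 logfile\n";
open my $log_fh, '>', $file_name or die "Cannot open $file_name: $!";
select $log_fh;  # make it the default handle for print
print "This line lands in $file_name\n";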
Give these ideas a chance:
...
my $log_path = "/log/run_program.log"; # or using $0 in some manner
open my $log_handler, ">>", $log_path or die "Cannot open $log_path: $!";
...
Now you could code a myprint subroutine that will call print $log_handler and use it throughout the whole program, or better, after having a look at OVERRIDING CORE FUNCTIONS, you could redefine print yourself like this:
...
use subs 'print';
sub print { ... } # redefine here
...
I have two files, tmp.pl and tmp2.pl. I want to call tmp2.pl with a require command but also send it a parameter. Is there a better way of doing this?
tmp.pl
require "tmp2.pl" "passed parameter";
tmp2.pl
print @_;
As far as I know, require cannot be used to send a parameter. But that's a good thing, I think, because I cannot think of a reason why you would want to. It looks to me like your design is wrong.
tmp2.pl should be either:
an independent Perl program, which you should run with system or qx()
a module, with optional exported tags etc.
a package which defines a class
but that's just my idea....
There's probably a better way to accomplish whatever it is you're trying to do, but you could achieve your current sub goal with something like
{
local @_ = ("passed parameter");
require "tmp2.pl";
}
I might consider this idiom in a place where I wanted to run a perl script from within a perl script. That is, I could say
{
local @ARGV = ("foo","bar");
require "my_script.pl";
}
instead of
system("perl","my_script.pl","foo","bar");
(There are plenty of subtle and not-so-subtle differences between these two calls, so a lot depends on what "features" of these calls you need)
Yes, I think this should work. Use require to include the script, and then pass the parameter by calling a sub defined in it. The modified scripts could be:
tmp.pl
require "tmp2.pl";
subfunc("passed parameter");
tmp2.pl
sub subfunc { print @_; }
Supposing that you want to execute another program and capture its output, I'm partial to using the IPC::Run module.
#!/usr/bin/perl
use strict;
use warnings;
use IPC::Run qw( run );
run( [ './tmp2.pl', @ARGV ], \'', \my $out, \my $err );
print "out: $out\n";
print "err: $err\n";
Here's a scenario. You have a large number of legacy scripts, all using a common library. Said scripts use the 'print' statement for diagnostic output. No changes are allowed to the scripts - they range far and wide, have their approvals, and have long since left the fruitful valleys of oversight and control.
Now a new need has arrived: logging must now be added to the library. This must be done automatically and transparently, without users of the standard library needing to change their scripts. Common library methods can simply have logging calls added to them; that's the easy part. The hard part lies in the fact that diagnostic output from these scripts were always displayed using the 'print' statement. This diagnostic output must be stored, but just as importantly, processed.
As an example of this processing, the library should only record the printed lines that contain the words 'warning', 'error', 'notice', or 'attention'. The below Extremely Trivial and Contrived Example Code (tm) would record some of said output:
sub CheckPrintOutput
{
my @output = @_; # args passed to print eventually find their way here.
foreach my $value (@output) {
Log->log($value) if $value =~ /warning|error|notice|attention/i;
}
}
(I'd like to avoid such issues as 'what should actually be logged', 'print shouldn't be used for diagnostics', 'perl sucks', or 'this example has the flaws x y and z'...this is greatly simplified for brevity and clarity. )
The basic problem comes down to capturing and processing data passed to print (or any perl builtin, along those lines of reasoning). Is it possible? Is there any way to do it cleanly? Are there any logging modules that have hooks to let you do it? Or is it something that should be avoided like the plague, and I should give up on ever capturing and processing the printed output?
Additional: This must run cross-platform - windows and *nix alike. The process of running the scripts must remain the same, as must the output from the script.
Additional additional: An interesting suggestion made in the comments of codelogic's answer:
You can subclass http://perldoc.perl.org/IO/Handle.html and create your
own file handle which will do the logging work. – Kamil Kisiel
This might do it, with two caveats:
1) I'd need a way to export this functionality to anyone who uses the common library. It would have to apply automatically to STDOUT and probably STDERR too.
2) the IO::Handle documentation says that you can't subclass it, and my attempts so far have been fruitless. Is there anything special needed to make subclassing IO::Handle work? The standard use base 'IO::Handle' and then overriding the new/print methods seems to do nothing.
Final edit: Looks like IO::Handle is a dead end, but Tie::Handle may do it. Thanks for all the suggestions; they're all really good. I'm going to give the Tie::Handle route a try. If it causes problems I'll be back!
Addendum: Note that after working with this a bit, I found that Tie::Handle will work, if you don't do anything tricky. If you use any of the features of IO::Handle with your tied STDOUT or STDERR, it's basically a crapshoot to get them working reliably - I could not find a way to get the autoflush method of IO::Handle to work on my tied handle. If I enabled autoflush before I tied the handle it would work. If that works for you, the Tie::Handle route may be acceptable.
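In other words, the ordering that works looks like this (the tie class name is hypothetical):
use IO::Handle;
STDOUT->autoflush(1);           # set autoflush first...
tie *STDOUT, 'My::TiedHandle';  # ...then tie the handle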
There are a number of built-ins that you can override (see perlsub). However, print is one of the built-ins that doesn't work this way. The difficulties of overriding print are detailed at this perlmonk's thread.
However, you can
Create a package
Tie a handle
Select this handle.
Now, a couple of people have given the basic framework, but it works out kind of like this:
package IO::Override;
use base qw<Tie::Handle>;
use Symbol qw<geniosym>;
sub TIEHANDLE { return bless geniosym, __PACKAGE__ }
sub PRINT {
shift;
# You can do pretty much anything you want here.
# And it's printing to what was STDOUT at the start.
#
print $OLD_STDOUT join( '', 'NOTICE: ', @_ );
}
tie *PRINTOUT, 'IO::Override';
our $OLD_STDOUT = select( *PRINTOUT );
You can override printf in the same manner:
sub PRINTF {
shift;
# You can do pretty much anything you want here.
# And it's printing to what was STDOUT at the start.
#
my $format = shift;
print $OLD_STDOUT join( '', 'NOTICE: ', sprintf( $format, @_ ));
}
See Tie::Handle for what all you can override of STDOUT's behavior.
You can use Perl's select to redirect STDOUT.
open my $fh, ">log.txt";
print "test1\n";
my $current_fh = select $fh;
print "test2\n";
select $current_fh;
print "test3\n";
The file handle could be anything, even a pipe to another process that post processes your log messages.
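For example, selecting a pipe (the post-processing script is an assumption):
# hypothetical: filter_log.pl reads lines on STDIN and files them away
open my $pipe_fh, '|-', 'perl filter_log.pl'
    or die "Cannot start log filter: $!";
my $old_fh = select $pipe_fh;
print "this line goes through the filter\n";
select $old_fh;  # restore the previous default handle
close $pipe_fh;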
PerlIO::tee in the PerlIO::Util module seems to allow you to 'tee' the output of a file handle to multiple destinations (e.g. a log processor and STDOUT).
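If I remember the interface correctly, it goes something like this (treat it as a sketch and check the module's docs):
use PerlIO::Util;
# duplicate everything printed to STDOUT into a log file as well
*STDOUT->push_layer(tee => ">>/tmp/out.log");
print "appears on the terminal and in /tmp/out.log\n";
*STDOUT->pop_layer;  # stop teeing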
Lots of choices. Use select() to change the filehandle that print defaults to. Or tie STDOUT. Or reopen it. Or apply an IO layer to it.
This isn't the answer to your issue but you should be able to adopt the logic for your own use. If not, maybe someone else will find it useful.
Catching malformed headers before they happen...
package PsychicSTDOUT;
use strict;
my $c = 0;
my $malformed_header = 0;
open(TRUE_STDOUT, '>', '/dev/stdout');
tie *STDOUT, __PACKAGE__, (*STDOUT);
sub TIEHANDLE {
my $class = shift;
my $handles = [@_];
bless $handles, $class;
return $handles;
}
sub PRINT {
my $class = shift;
if (!$c++ && $_[0] !~ /^content-type/i) {
my (undef, $file, $line) = caller;
print STDERR "Missing content-type in $file at line $line!!\n";
$malformed_header = 1;
}
return 0 if ($malformed_header);
return print TRUE_STDOUT @_;
}
1;
usage:
use PsychicSTDOUT;
print "content-type: text/html\n\n"; #try commenting out this line
print "<html>\n";
print "</html>\n";
You could run the script from a wrapper script that captures the original script's stdout and writes the output somewhere sensible.
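A sketch of such a wrapper (the script name and log path are assumptions), reusing the filter regex from the question:
#!/usr/bin/perl
use strict;
use warnings;
# run the legacy script and process everything it prints
open my $pipe, '-|', $^X, 'legacy_script.pl'
    or die "Cannot run legacy_script.pl: $!";
open my $log, '>>', '/tmp/script.log'
    or die "Cannot open log: $!";
while (my $line = <$pipe>) {
    print $line;  # preserve the normal output
    print {$log} $line if $line =~ /warning|error|notice|attention/i;
}
close $pipe;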