Have a look at this question: Can Perl method calls be intercepted?
It shows how to rewrite the symbol table for a simple sub. The print command takes a list, I believe, so what is the right way to intercept/rewrite it? I want my program to delay printing while keeping the same call signature: push the output into an array, pre-sort it, then regurgitate all of it at the very end.
Intercepting print itself isn't the way to go -- it has a number of operating modes, including writing to a file or socket. Instead, take a look at the select function, which can be used to change the default filehandle which print will write to.
Also, look at the concept of a "tied" IO handle, as used by IO::Capture.
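Here is a minimal sketch of the select approach (assuming Perl 5.8+, which lets print target an in-memory scalar): redirect the default filehandle to a buffer, then sort and emit everything at the end.
use strict;
use warnings;

# Redirect unqualified print calls to an in-memory buffer via select,
# then sort the captured lines and print them all at the very end.
my $buffer = '';
open my $mem, '>', \$buffer or die "can't open in-memory handle: $!";

my $old_fh = select($mem);       # print with no filehandle now goes to $mem

print "banana\n";                # existing print statements are unchanged
print "apple\n";
print "cherry\n";

select($old_fh);                 # restore STDOUT as the default handle
close $mem;

print sort split /^/m, $buffer;  # regurgitate the captured output, pre-sorted
A tied handle (as in IO::Capture) amounts to the same idea, but lets you hook every write as it happens rather than post-processing a buffer.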
I do not want to write my own recursive-descent math parser or think too deeply about grammar, so I am (re-)using the Perl module Safe.pm as an arithmetic calculator with variables. My task is to let one anonymous web user A type into a textfield a couple of math expressions, like:
**Input Formula:** $x= 2; $y=sqrt(2*$x+(25+$x)*$x); $z= log($y); ...
Ideally, this should only contain math expressions, but not generic Perl code. Later, I want to use it for web user B:
**Input Print:** you start with x=$x and end with z=$z . you don't know $a.
and turn it into <pre> text output that looks like this:
**Output Txt:** you start with x=2 and end with z=2.03 . you don't know $a.
(The fact that $a was not replaced is its own warning.) Ideally, I want to check that my web users have not only not tried to break in, but also have made no syntax errors.
My current Safe.pm-based implementation has drawbacks:
I want only math expressions in the first textfield. Alas, :base_math only extends Safe.pm beyond :base_core, so I have to live with the user having access to more than just algebraic expressions. For example, web users could accidentally use a Perl reserved name, define subs, or do who knows what. Is there a better solution that accepts only a math-expression grammar? (And subs like system() should certainly not be permitted as math functions!)
For the printing, I can just wrap a print "..." around the text and run it through another Safe eval, but that replaces $a with undef. What I really want my code to do is go through the table of newly added variables ($x, $y, and $z) and substitute them wherever they appear unescaped; everything else should be left alone. I also have to watch carefully here that my users are not working together to escape, typing text like "; system("rm -rf *"); print ", though Safe would catch this particular issue. More likely, A could try to inject some nasty JavaScript for B, or who knows what.
Questions:
Is Safe.pm the right tool for the job? Perl seems like a heavy cannon here, but not having to reinvent the wheel is nice.
Can one further restrict Safe.pm to Perl's arithmetic only?
Is there a "new symbols" table that I can iterate over for substitution?
Safe.pm seems like a bad choice, because you're going to run the risk of overlooking some exploitable operation. I would suggest looking at a parsing tool, such as Marpa. It even has the beginnings of a calculator implementation which you could probably adapt to your purposes.
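That said, if you do stay with Safe for now, question 3 at least has an answer: each compartment compiles code into its own root package, so you can walk that package's symbol table to find what the user defined. A rough sketch using Safe's root and varglob methods (the formulas and the substitution rule are only illustrative):
use strict;
use warnings;
use Safe;

# Evaluate the user's formulas in a restricted compartment, then walk
# the compartment's own package to find the variables it created.
my $cpt = Safe->new;
$cpt->permit_only(qw(:base_core :base_math));    # still broader than "algebra only"

$cpt->reval('$x = 2; $y = sqrt(2*$x + (25+$x)*$x); $z = log($y);');
die "formula error: $@" if $@;

my $root = $cpt->root;                           # compartment package, e.g. "Safe::Root0"
my %vars;
{
    no strict 'refs';
    for my $name (keys %{"${root}::"}) {
        next unless $name =~ /^[a-zA-Z]\w*$/;    # skip punctuation/internal entries
        my $val = ${ $cpt->varglob($name) };     # scalar slot of that symbol
        $vars{$name} = $val if defined $val;
    }
}

# Substitute only the variables the formulas actually defined; leave $a alone.
# (Format with sprintf '%.2f' if you want x=2 / z=2.03 style rounding.)
my $template = q{you start with x=$x and end with z=$z . you don't know $a.};
(my $output = $template) =~ s/\$(\w+)/exists $vars{$1} ? $vars{$1} : "\$$1"/ge;
print "$output\n";
Only variables the formulas defined end up in %vars, so $a survives untouched, which doubles as the warning the question asks for.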
I was looking at a rather inconclusive question about whether it is best to use for(;;) or while(1) when you want an infinite loop, and I saw an interesting solution in C where you can #define "EVER" as a constant equal to ";;" and literally loop for(EVER).
I know defining an extra constant to do this is probably not the best programming practice but purely for educational purposes I wanted to see if this could be done with Perl as well.
I tried to make the Perl equivalent, but it only loops once and then exits the loop.
#!/usr/bin/perl -w
use strict;
use constant EVER => ';;';
for (EVER) {
    print "FOREVER!\n";
}
Output:
FOREVER!
Why doesn't this work in Perl?
C's pre-processor constants are very different from the constants in most languages.
A normal constant acts like a variable which you can only set once; it has a value which can be passed around in most of the places a variable can be, with some benefits from you and the compiler knowing it won't change. This is the type of constant that Perl's constant pragma gives you. When you pass the constant to for, it just sees a one-element list containing the string ';;', so it loops exactly once.
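A tiny demonstration of that point:
use strict;
use warnings;
use constant EVER => ';;';

# EVER is just the string ';;', so the parentheses hold a one-element list:
for my $item (EVER) {
    print "item = $item\n";   # prints exactly once: item = ;;
}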
C, however, has a step which runs before the compiler even sees the code, called the pre-processor. This actually manipulates the text of your source code without knowing or caring what most of it means, so it can do all sorts of things that you couldn't do in the language itself. In the case of #define EVER ;;, you are telling the pre-processor to replace every occurrence of EVER with ;;, so that when the actual compiler runs, it only sees for(;;). You could go a step further and define the word forever as for(;;), and it would still work.
As mentioned by Andrew Medico in comments, the closest Perl has to a pre-processor is source filters, and indeed one of the examples in the manual is an emulation of #define. These are actually even more powerful than pre-processor macros, allowing people to write modules like Acme::Bleach (replaces your whole program with whitespace while maintaining functionality) and Lingua::Romana::Perligata (interprets programs written in grammatically correct Latin), as well as more sensible features such as adding keywords and syntax for class and method declarations.
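For comparison, here is a rough sketch of that #define emulation built on Filter::Simple (the package name EverFilter is made up for this example, and the filter only applies to code that comes after the use line):
# EverFilter.pm -- a hypothetical source-filter module
package EverFilter;
use Filter::Simple;

# Rewrite the source text of any file that says "use EverFilter":
# every bare EVER becomes ;; before Perl compiles that code.
FILTER { s/\bEVER\b/;;/g };

1;

# script.pl
use EverFilter;

for (EVER) {              # the filter has already turned this into: for (;;)
    print "FOREVER!\n";
}
Filter::Simple's FILTER_ONLY variant can restrict the substitution to code (skipping strings and POD), which is one reason it is friendlier than raw Filter::Util::Call.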
It doesn't run forever because ';;' is an ordinary string, not a preprocessor macro (Perl doesn't have an equivalent of the C preprocessor). As such, for (';;') runs a single time, with $_ set to ';;' that one time.
Andrew Medico mentioned in his comment that you could hack it together with a source filter.
I confirmed this, and here's an example.
use Filter::cpp;
#define EVER ;;
for (EVER) {
    print "Forever!\n";
}
Output:
Forever!
Forever!
Forever!
... keeps going ...
I don't think I would recommend doing this, but it is possible.
This is not possible with Perl's constants alone. However, you can define a subroutine named forever which takes a code block as a parameter and runs it again and again:
#!/usr/bin/perl
use warnings;
use strict;
sub forever (&) {
    $_[0]->() while 1;
}

forever {
    print scalar localtime, "\n";
    sleep 1;
};
My WinDbg command window always gets polluted by useless AFW traces... Therefore I would like to print my own stuff in another window. How can I do that?
No way that I'm aware of. You can use .ofilter to filter the output though, which may be sufficient.
I'm running the debugger in noninteractive mode, with the output written to a file. I want to print out each line of my Perl script as it executes, but only lines in the script itself. I don't want to see the library code (File::Basename, Exporter::import, etc.) that the script calls. This seems like the sort of thing that should be easy to do, but the documentation for perldebug only discusses limiting the depth for dumping structures. Is what I want possible, and if so, how?
Note that I'm executing my program as follows:
PERLDB_OPTS="LineInfo=temp.txt NonStop=1 AutoTrace=1 frame=2" perl -dS myprog.pl arg0 arg1
By default, Devel::DumpTrace doesn't step into system modules, and you can exercise fine control over what modules the debugger will step into (it's not easy, but it's possible). Something like
DUMPTRACE_FH=temp.txt perl -d:DumpTrace=quiet myprog.pl
would be similar to what you're apparently trying to do.
Devel::DumpTrace also does a lot more processing on each line -- figuring out variable values and including them in the output -- so it may be overkill and run a lot slower than perl -dS ...
(Crikey, that's now two plugs for Devel::DumpTrace this week!)
Are you talking about not wanting to step through functions outside of your own program? For that, you want to use n instead of s.
From perldebug:
s [expr]    Single step. Executes until the beginning of another
            statement, descending into subroutine calls. If an
            expression is supplied that includes function calls, it too
            will be single-stepped.

n [expr]    Next. Executes over subroutine calls, until the beginning
            of the next statement. If an expression is supplied that
            includes function calls, those functions will be executed
            with stops before each statement.
I was refactoring some old code (by other people) and I came across the following at the top of some CGI scripts:
#Turn on output buffering
local $| = 1;
perlcritic, as usual, unhelpfully points out the obvious: "Magic punctuation used". Are there any alternatives to this, or is perlcritic just grumpy?
Furthermore, on closer inspection, I think the code is wrong.
If I'm not mistaken, it means exactly the opposite of what the comment says: it turns off output buffering. My memory is a little bit rusty and I can't seem to find the Perl documentation that describes this magic punctuation. The scripts run under mod_perl.
Is messing around with Perl's buffering behavior desirable, and does it result in any performance gain? Most of the stuff written about this comes from the early 2000s. Is this still good practice?
$| is one of a number of punctuation variables that are really per-filehandle. The variable gets or sets the value for the currently selected output filehandle (by default, STDOUT). ($. is slightly different; it is bound to the last filehandle read from.)
The "modern" way to access these is via a method on the filehandle:
use IO::Handle;
$fh->autoflush(1); # instead of $|=1
The method corresponding to each variable is documented in perldoc perlvar.
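For completeness, the pre-IO::Handle idiom does the same thing by temporarily selecting the handle, since $| always applies to whichever handle is currently selected (a small sketch, with a made-up log file name):
use strict;
use warnings;

open my $log, '>', 'out.log' or die "can't open out.log: $!";

my $previous = select($log);   # $log is now the currently selected handle
$| = 1;                        # so this unbuffers $log, not STDOUT
select($previous);             # restore the old default handle

# The same thing as the famously cryptic one-liner:
# select((select($log), $| = 1)[0]);

print $log "flushed immediately\n";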
Your question seems a bit scattered, but I'll try my best to answer thoroughly.
You want to read perldoc perlvar. The relevant section says:
    $|      If set to nonzero, forces a flush right away and after every write or print on the currently selected output channel. Default is 0 (regardless of whether the channel is really buffered by the system or not; $| tells you only whether you've asked Perl explicitly to flush after each write). STDOUT will typically be line buffered if output is to the terminal and block buffered otherwise. Setting this variable is useful primarily when you are outputting to a pipe or socket, such as when you are running a Perl program under rsh and want to see the output as it's happening. This has no effect on input buffering. See "getc" in perlfunc for that. See "select" in perlfunc on how to select the output channel. See also IO::Handle. (Mnemonic: when you want your pipes to be piping hot.)
So yes, the comment is incorrect. Setting $| = 1 does indeed disable buffering, not turn it on.
As for performance, the reason output buffering is enabled by default is because this improves performance--even in 2011--and probably until the end of time, unless quantum I/O somehow changes the way we understand I/O entirely.
The reasons to disable output buffering are not to improve performance, but to change some other behavior at the expense of performance.
Since I have no idea what your code does, I cannot speculate as to its reason for wanting to disable output buffering.
Some (but by no means all) possible reasons to disable output buffering:
You're writing to a socket or pipe, and the other end expects an immediate response.
You're writing status updates to the console, and want the user to see them immediately, not at the end of a line. This is especially common when you output a period after each of many operations, etc.
For a CGI script, you may want the browser to display some of the HTML output before processing has finished (a minimal sketch of this case follows the list).
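As a concrete illustration of that last point, here is a minimal CGI-style sketch (the loop and sleep just stand in for real work):
#!/usr/bin/perl
use strict;
use warnings;

$| = 1;                               # disable output buffering on STDOUT

print "Content-Type: text/html\n\n";
print "<pre>\n";
for my $step (1 .. 5) {
    print "finished step $step\n";    # reaches the browser right away
    sleep 1;                          # stand-in for real work
}
print "</pre>\n";
With buffering left on, the browser would typically see nothing until the script exits (or the block buffer fills).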
The comment, as others have stated, is incorrect: local $| = 1 disables output buffering.
To comply with Perl::Critic's policies, you could make use of the English module:
use English qw( -no_match_vars );
local $OUTPUT_AUTOFLUSH = 1; # equivalent to: local $| = 1
As you can check in the manuals, $| = 1 turns buffering off: it says it is true that the buffer must be flushed after every write, so the comment is wrong.
As for whether it is a good idea or not, I don't know, but I too have seen it done routinely in CGI scripts, so I suspect it is useful in this particular case, probably because CGI scripts usually want to make their output available as soon as it is written.