I'm looking to play with perl parser manipulation. It looks like the various B::Hooks modules are what people use. I was wondering:
Best place to start for someone who has no XS experience (yet). Any relevant blog posts?
How much work would be involved in creating a new operator, for example:
$a~>one~>two~>three
~> would work like ->, but instead of trying to call a method on undef it would simply return undef.
Although a source filter would work -- I'm more interested in seeing how you can manipulate the parser at a deeper level.
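For clarity, here is a plain-Perl spelling of the behaviour I have in mind (maybe_call and the Demo class are made-up names to illustrate the semantics, not a proposed implementation):

use strict;
use warnings;

# A throwaway helper that does what ~> is supposed to do: call the method
# if the invocant is defined, otherwise just pass undef along.
sub maybe_call {
    my ($obj, $method, @args) = @_;
    return undef unless defined $obj;
    return $obj->$method(@args);
}

package Demo;
sub new { bless {}, shift }
sub one { Demo->new }      # returns another object
sub two { undef }          # the chain "breaks" here

package main;
my $obj = Demo->new;

# $obj~>one~>two~>three would then be equivalent to:
my $result = maybe_call( maybe_call( maybe_call($obj, 'one'), 'two' ), 'three' );
print defined $result ? "$result\n" : "undef\n";   # prints "undef", no fatal error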
I don't believe you can add infix operators (operators whose operands are before and after the operator), much less symbolic ones (as opposed to named operators), but you could write an op checker that replaces method calls. This means you could cause ->foo to behave differently. By writing your module as a pragma, you could limit its effect to a lexical scope (e.g. { use mypragma; ...}).
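A very rough sketch of the lexically scoped pragma part (the hints-hash key and the module name are made up; the op-check installation itself would still need XS, e.g. via B::Hooks::OP::Check):

package mypragma;
use strict;
use warnings;

# Flip a flag in the lexical hints hash (%^H) so the effect is limited to
# the scope that says "use mypragma".
sub import   { $^H{'mypragma/enabled'} = 1 }
sub unimport { $^H{'mypragma/enabled'} = 0 }

# Later (e.g. from the replacement op) you can ask whether the pragma was
# in effect where the call was compiled:
sub in_effect {
    my $level = shift // 0;
    my $hinthash = (caller($level))[10];
    return $hinthash && $hinthash->{'mypragma/enabled'};
}

1;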
Is there a reason why CGI.pm examples rely on list concatenation rather than on string concatenation? Are the two interchangeable?
print $q->hidden(-name =>'rm', -value => $var).
      $q->submit(-name =>"rm$var");
vs.
print $q->hidden(-name =>'rm', -value => $var),
      $q->submit(-name =>"rm$var");
I have a specific reason for asking. It is very convenient to build up a page from strings; after all, Perl treats scalar strings as a basic type.
However, I am completely perplexed by some odd behavior in the string-concatenation case. Specifically, I have encountered occasional cases in which the $var in the hidden field is not the same as the one in the submit button. I could work around this, but I would rather understand CGI.pm.
Could someone please explain whether string concatenation should work?
Unless you change the default value of $, (the output field separator),
print EXPR1, EXPR2;
and
print EXPR1 . EXPR2;
produce the same result if the expressions aren't context-specific. The functions in question always return an HTML string, so you're good.
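A minimal illustration of that point, with plain strings standing in for the CGI.pm calls:

use strict;
use warnings;

my $var = 42;

# With the default $, (undef), these two statements print the same thing:
print "foo", "bar$var", "\n";     # foobar42
print "foo" . "bar$var" . "\n";   # foobar42

# They only diverge if you change $, (the output field separator):
{
    local $, = '|';
    print "foo", "bar$var", "\n";     # foo|bar42|   (separator between arguments)
    print "foo" . "bar$var" . "\n";   # foobar42     (a single argument, no separator)
}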
You're right that the two examples have the same effect (well, as ikegami says, unless you have changed $,). The only difference is, of course, that in the first example, print is passed one string and in the second example, it gets two.
But have you read the comments about the HTML generation functions in recent versions of the CGI.pm documentation?
All HTML generation functions within CGI.pm are no longer being
maintained. Any issues, bugs, or patches will be rejected unless they
relate to fundamentally broken page rendering.
The rationale for this is that the HTML generation functions of CGI.pm
are an obfuscation at best and a maintenance nightmare at worst. You
should be using a template engine for better separation of concerns.
See CGI::Alternatives for an example of using CGI.pm with the
Template::Toolkit module.
These functions, and perldoc for them, will continue to exist in the
v4 releases of CGI.pm but may be deprecated (soft) in v5 and beyond.
I would seriously consider moving away from these functions (and, indeed, CGI.pm itself) for new work.
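For new code, the template-engine route the documentation points to looks roughly like this (a sketch using Template::Toolkit; the template text and variable name are invented for the example):

use strict;
use warnings;
use Template;

my $tt = Template->new or die Template->error;

# The markup lives in a template, not in Perl code.
my $template = <<'EOT';
<input type="hidden" name="rm" value="[% var %]">
<input type="submit" name="rm[% var %]">
EOT

# process() fills in the variables and prints the result to STDOUT.
$tt->process(\$template, { var => 42 })
    or die $tt->error;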
I was just reading this page: https://github.com/book/perlsecret/blob/master/lib/perlsecret.pod and was really surprised by statements like:
Discovered by Philippe Bruhat, 2012.
Discovered by Abigail, 2010. (Alternate nickname: "grappling hook")
Discovered by Rafaël Garcia-Suarez, 2009.
Discovered by Philippe Bruhat, 2007.
and so on...
The above operators were DISCOVERED, so they are not intentional in Perl's design?
Does that mean there is a possibility that Perl still contains some random character sequences which, in the right order, do something useful, like the ()x!! "operator"?
Is there any other language that has discovered operators?
From the page you linked:
They are like operators in the sense that these Perl programmers see
them often enough to recognize them without thinking about their
smaller parts, and eventually add them to their toolbox. And they are
like secrets in the sense that they have to be discovered by their
future user (or be transmitted by a fellow programmer), because they
are not explicitly documented.
That is, they are not really operators in their own right; they are made up of smaller operators combined to do something useful.
For example, the 'venus' operator (0+ or +0) numifies the object on its left or right. That's what adding zero in any form does, "secret" operator or not.
Perl has a bunch of operators that do special things, as well as characters that do special things when interpreted in a specific context. Rather than being actual "operators" explicitly recognized by the Perl parser, these are combinations of certain functions/operations. For example ()x!!, which is known as the "Enterprise" operator, consists of (), which is a list, followed by x, which is the repetition operator, followed by !! (the "bang bang" operator), which performs a boolean conversion. This is one of the reasons that Perl is so expressive.
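To make the "secret" operators above concrete (ordinary Perl throughout; the variable names are invented):

use strict;
use warnings;

# "venus" (0+ or +0): force numeric context
my $version = "3.0";
my $num     = 0 + $version;      # 3, a number, rather than the string "3.0"

# "bang bang" (!!): force boolean context (1 or the empty string)
my $flag = !! $version;          # 1

# "enterprise" ( LIST )x!! COND: include the list only when COND is true
my $debugging = 0;
my @options = (
    verbose => 1,
    ( debug => 1 ) x!! $debugging,   # this pair vanishes when $debugging is false
);
# @options is (verbose => 1); it would also contain (debug => 1) if $debugging were true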
Is there a good reason for map to not read from @_ (in functions) or @ARGV (anywhere else) when not given an argument list?
I can't say why Larry didn't make map, grep and the other list functions operate on @_ like pop and shift do, but I can tell you why I wouldn't. Default variables used to be in vogue, but Perl programmers have discovered that most of the "default" behaviors cause more problems than they solve. I doubt they would make it into the language today.
The first problem is remembering what a function does when passed no arguments. Does it act on a hidden variable? Which one? You just have to know by rote, and that makes it a lot more work to learn, read and write the language. You're probably going to get it wrong, and that means bugs. This could be mitigated by Perl being consistent about it (i.e. ALL functions which take lists operate on @_ and ALL functions which take scalars operate on $_), but there are more problems.
The second problem is that the behavior changes based on context. Take some code outside of a subroutine, or put it into a subroutine, and suddenly it works differently. That makes refactoring harder. If you made it work on just @_ or just @ARGV then this problem goes away.
Third, default variables have a tendency to be quietly modified as well as read. $_ is dangerous for this reason: you never know when something is going to overwrite it. If @_ were adopted as the default list variable, this behavior would likely leak in.
Fourth, it would probably lead to complicated syntax problems. I'd imagine this was one of the original reasons keeping it from being added to the language, back when $_ was in vogue.
Fifth, @ARGV as a default makes some sense when you're writing scripts that primarily work with @ARGV... but it doesn't make any sense when working on a library. Perl programmers have shifted from writing quick scripts to writing libraries.
Sixth, using $_ as default is a way of chaining together scalar operations without having to write the variable over and over again. This might have been mitigated if Perl was more consistent about its return values, and if regexes didn't have special syntax, but there you have it. Lists can already be chained, map { ... } sort { ... } grep /.../, @foo, so that use case is handled by a more efficient mechanism.
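For example, this pipeline needs no default variable at all; each stage simply hands its list to the next (data and stages invented for illustration):

use strict;
use warnings;

my @lines = ("banana 3", "apple 10", "apple 2", "cherry 5");

my @counts =
    map  { (split ' ')[1] }      # keep just the count from each line
    sort { $a cmp $b }           # sort the surviving lines
    grep { /^apple/ }            # select only the apple lines
    @lines;

print "@counts\n";               # prints "10 2"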
Finally, it's of very limited use. It's very rare that you want to pass @_ to map and grep. The problems with hidden defaults are far greater than the benefit of avoiding typing two characters. This space saving might have made slightly more sense when Perl was primarily for quick and dirty work, but it makes no sense when writing anything beyond a few pages of code.
PS: shift defaulting to @_ has found a niche in my $self = shift, but I find this only shines because Perl's argument handling is so poor.
The map function takes in a list, not an array. shift takes an array. With lists, on the other hand, @_/@ARGV may or may not be fair defaults.
I have seen several modules (example: Iterator::Simple) that make use of Perl's angle operator as an approximate equivalent to Python generators. Specifically, providing the ability to iterate over a list of values without actually loading the whole list in memory. Is this generally considered to be an appropriate extension of the functionality of the operator, or is it considered to be an abuse of it?
The <HANDLE> operator is just syntactic sugar for the readline HANDLE function, which is very much an iterator over the handle. If an object provides iterative access, I don't see any problem with overloading <> to provide flexibility to the end user.
The <> operator does not approximate the generator, the module does that. All that
while (<$iterator>) {...}
gives you is a fancy way to write
while (defined ($_ = $iterator->next)) {...}
Perl is a very expressive language due to the many different ways it allows you to solve problems. Many modules choose to offer alternative interfaces in this spirit. This allows users to code the way that works best for them.
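For what it's worth, providing that interface only takes an overloaded <> handler; a bare-bones sketch (the Counter class is made up for illustration):

package Counter;
use strict;
use warnings;
use overload '<>' => \&next_value;

sub new { my ($class, $max) = @_; bless { n => 0, max => $max }, $class }

# Called whenever the object is used inside <...>; returning undef ends a
# while (defined ...) loop just as readline does at end of file.
sub next_value {
    my $self = shift;
    return undef if $self->{n} >= $self->{max};
    return ++$self->{n};
}

package main;
my $it = Counter->new(3);
while (defined(my $x = <$it>)) {
    print "$x\n";                # 1, 2, 3
}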
It is "common knowledge" that source filters are bad and should not be used in production code.
When answering a similar, but more specific, question I couldn't find any good references that explain clearly why filters are bad and when they can be safely used. I think now is the time to create one.
Why are source filters bad?
When is it OK to use a source filter?
Why source filters are bad:
Nothing but perl can parse Perl. (Source filters are fragile.)
When a source filter breaks pretty much anything can happen. (They can introduce subtle and very hard to find bugs.)
Source filters can break tools that work with source code. (PPI, refactoring, static analysis, etc.)
Source filters are mutually exclusive. (You can't use more than one at a time -- unless you're psychotic).
When they're okay:
You're experimenting.
You're writing throw-away code.
Your name is Damian and you must be allowed to program in Latin.
You're programming in Perl 6.
Only perl can parse Perl (see this example):
@result = (dothis $foo, $bar);
# Which of the following is it equivalent to?
@result = (dothis($foo), $bar);
@result = dothis($foo, $bar);
This kind of ambiguity makes it very hard to write source filters that always succeed and do the right thing. When things go wrong, debugging is awkward.
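You can see the ambiguity concretely: the same call syntax parses differently depending on a prototype that a source filter cannot reliably know about (the sub names here are invented):

use strict;
use warnings;

sub dothis_one ($) { "one arg: @_" }   # ($) prototype: parsed like a named unary op
sub dothis_list    { "list: @_"    }   # no prototype: grabs the whole list

my ($foo, $bar) = ('a', 'b');

my @r1 = (dothis_one  $foo, $bar);     # parsed as (dothis_one($foo), $bar)
my @r2 = (dothis_list $foo, $bar);     # parsed as  dothis_list($foo, $bar)

print scalar(@r1), "\n";   # 2, $bar stayed in the outer list
print scalar(@r2), "\n";   # 1, both arguments went to the sub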
After crashing and burning a few times, I have developed the superstitious approach of never trying to write another source filter.
I do occasionally use Smart::Comments for debugging, though. When I do, I load the module on the command line:
$ perl -MSmart::Comments test.pl
so as to avoid any chance that it might remain enabled in production code.
See also: Perl Cannot Be Parsed: A Formal Proof
I don't like source filters because you can't tell what code is going to do just by reading it. Additionally, things that look like they aren't executable, such as comments, might magically be executable with the filter. You (or more likely your coworkers) could delete what you think isn't important and break things.
Having said that, if you are implementing your own little language that you want to turn into Perl, source filters might be the right tool. However, just don't call it Perl. :)
It's worth mentioning that Devel::Declare keywords (and starting with Perl 5.11.2, pluggable keywords) aren't source filters, and don't run afoul of the "only perl can parse Perl" problem. This is because they're run by the perl parser itself, they take what they need from the input, and then they return control to the very same parser.
For example, when you declare a method in MooseX::Declare like this:
method frob ($bubble, $bobble does coerce) {
    ... # complicated code
}
The word "method" invokes the method keyword parser, which uses its own grammar to get the method name and parse the method signature (which isn't Perl, but it doesn't need to be -- it just needs to be well-defined). Then it leaves perl to parse the method body as the body of a sub. Anything anywhere in your code that isn't between the word "method" and the end of a method signature doesn't get seen by the method parser at all, so it can't break your code, no matter how tricky you get.
The problem I see is the same problem you encounter with any C/C++ macro more complex than defining a constant: It degrades your ability to understand what the code is doing by looking at it, because you're not looking at the code that actually executes.
In theory, a source filter is no more dangerous than any other module, since you could easily write a module that redefines builtins or other constructs in "unexpected" ways. In practice, however, it is quite hard to write a source filter in a way where you can prove that it's not going to make a mistake. I tried my hand at writing a source filter that implements the Perl 6 feed operators in Perl 5 (Perl6::Feeds on CPAN). You can take a look at the regular expressions to see the acrobatics required simply to figure out the boundaries of expression scope. While the filter works, and provides a test bed to experiment with feeds, I wouldn't consider using it in a production environment without many, many more hours of testing.
Filter::Simple certainly comes in handy by dealing with 'the gory details of parsing quoted constructs', so I would be wary of any source filter that doesn't start there.
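For reference, a Filter::Simple based filter is only a few lines; here is a skeleton (the module name and the 'swear' keyword are made up):

package MyFilter;
use strict;
use warnings;
use Filter::Simple;

# FILTER_ONLY 'code' hands you just the code parts of the source in $_,
# leaving strings, regexes and heredocs alone, which is exactly the set of
# gory details you do not want to reimplement yourself.
FILTER_ONLY
    code => sub { s/\bswear\b/die/g };

1;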
All in all, it really depends on the filter you are using, and how broad a scope it tries to match against. If it is something simple like a C macro, then it's "probably" OK, but if it's something complicated then it's a judgement call. I personally can't wait to play around with Perl 6's macro system. Finally, Lisp won't have anything on Perl :-)
There is a nice example here that shows what kind of trouble you can get into with source filters:
http://shadow.cat/blog/matt-s-trout/show-us-the-whole-code/
They used a module called Switch, which is based on source filters. And because of that, they were unable to find the source of an error message for days.