When using keys %:: to get a list of the currently loaded root namespaces, the Internals:: package is loaded by default (along with UNIVERSAL:: and a few others). However, I haven't found any documentation for the functions in Internals::.
keys %{Internals::} returns: SvREFCNT, hv_clear_placeholders, hash_seed, SvREADONLY, HvREHASH, rehash_seed
All of these can probably be looked up in Perl's C API docs, but is there any Perl-level documentation for them? Is the package stable? It's used by several core modules (Hash::Util, for one), so I imagine it is, but the lack of documentation is a bit troubling.
I didn't see Internals.pm in the Perl distribution (different name maybe?), and it is not the Internals module up on CPAN.
Note: I fully understand that the functions in Internals:: are potentially dangerous, and I do not have any particular use in mind. I was reading through Hash::Util's source and came across it.
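For what it's worth, the functions can be called directly; here is a hedged sketch (these are undocumented internals, so the behavior may change between Perl versions):

use strict;
use warnings;

# Undocumented internals; a sketch only, may change between Perl versions
my %h = (a => 1);
Internals::SvREADONLY(%h, 1);          # make the hash itself read-only
print Internals::SvREFCNT(%h), "\n";   # the hash's reference count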
IIRC the code is not Internals.pm but libinternals.c. It looks like they used to be in universal.c in Perl 5.8 but got migrated out.
As of March 2009 and Perl 5.10, they were not documented, according to this PerlMonks thread.
Also, in the same thread, ysth states:
Undocumented things in universal.c should not be depended on; they should only be used by core modules. They aren't documented on purpose, to allow them to be changed whenever and however necessary. For those purposes, the code is good enough documentation.
Related
In this question I saw two different answers on how to directly call functions written in C++:
Inline::CPP (and there are more, like Inline::C, Inline::Lua, etc.)
SWIG
Handmade (as daxim said, the majority of modules are handwritten)
I just browsed nearly all the SO questions tagged [perl][swig] looking for answers to the following questions:
What are the main differences when choosing between SWIG, Inline::CPP, and handwritten bindings?
When is it considered good practice to use Inline::CPP (or Inline::C), and when is SWIG or a handwritten binding recommended?
As I think about it, using SWIG is more universal for other uses, like asked in this question, while Inline::CPP is Perl-specific. But from Perl's point of view, is there any significant difference?
I haven't used SWIG, so I cannot speak directly to it. But I'm pretty familiar with Inline::CPP.
If you would like to compose C++ code that gets compiled and becomes callable from within Perl, Inline::CPP facilitates this. So long as the C++ code doesn't change, it should only compile once. If you base a module on Inline::CPP, the code will be compiled at module install time, so another user never really sees the first time compilation lag; it happens at install time, just before the testing phase.
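For example, a minimal script might look like this (a sketch; the add function is hypothetical):

use strict;
use warnings;

# Compiled on first run, then cached until the C++ source changes:
use Inline CPP => <<'END_CPP';
int add(int a, int b) {
    return a + b;
}
END_CPP

print add(2, 3), "\n";   # prints 5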
Inline::CPP is not 100% free of portability issues. The target user must have a C++ compiler that is of similar flavor to the C compiler used to build Perl, and the C++ standard libraries should be of versions that produce binary-compatible code with Perl. Inline::CPP has about a 94% success rate with the CPAN testers, and those last 6% almost always boil down to the installation process not correctly deciphering which C++ compiler and libraries to use... and of those, it usually comes down to the libraries.
Let's assume you as a module author find yourself in that 94% who have no problem getting Inline::CPP installed. If you know that your target audience will fall into the same category, then producing a module based on Inline::CPP is simple. You basically have to add a couple of directives (VERSION and NAME), and swap out your Makefile.PL's ExtUtils::MakeMaker call for Inline::MakeMaker (which will invoke ExtUtils::MakeMaker). You might also want a CONFIGURE_REQUIRES directive to specify a current version of ExtUtils::MakeMaker when you create your distribution; this ensures that your users have a cleaner install experience.
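A minimal Makefile.PL along those lines might look like this (a sketch; the module name and version numbers are hypothetical):

use Inline::MakeMaker;   # wraps ExtUtils::MakeMaker

WriteMakefile(
    NAME               => 'My::CppModule',        # hypothetical module
    VERSION_FROM       => 'lib/My/CppModule.pm',
    CONFIGURE_REQUIRES => {
        'ExtUtils::MakeMaker' => '6.52',          # a reasonably current version
        'Inline::CPP'         => '0',
    },
);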
Now if you're creating the module for general consumption and have no idea whether your target user will fit that 94% majority who can use Inline::CPP, you might be better off removing the Inline::CPP dependency. You might want to do this just to minimize the dependency chain anyway; it's nicer for your users. In that case, compose your code to work with Inline::CPP, and then use InlineX::CPP2XS to convert it to a plain old XS module. Your user will now be able to install without the process pulling Inline::CPP in first.
C++ is a large language, and Inline::CPP handles a large subset of it. Pay attention to the typemap file to determine what sorts of parameters can be passed (and converted) automatically, and what sorts are better dealt with using "guts and API" calls. One feature I wouldn't recommend using is automatic string conversion, as it would produce Unicode-unfriendly conversions. Better to handle strings explicitly through API calls.
The portion of C++ that isn't handled gracefully by Inline::CPP is template metaprogramming. You're free to use templates in your code, and free to use the STL. However, you cannot simply pass STL-type parameters and hope that Inline::CPP will know how to convert them. It deals with POD (plain old data) types, not STL types. Furthermore, if you compose a template-based function or object method, the C++ compiler won't know what context Perl plans to call the function in, so it won't know what type to apply to the template at compile time. Consequently, the functions and object methods exposed directly to Inline::CPP need to be plain functions or methods, not template functions or classes.
These limitations in practice aren't hard to deal with as long as you know what to expect. If you want to expose a template class directly to Inline::CPP, just write a wrapper class that either inherits or composes itself of the template class, but gives it a concrete type for Inline::CPP to work with.
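For instance, something along these lines should work (a sketch; the class names are hypothetical, and how gracefully template-typed members are parsed can vary by Inline::CPP version):

use strict;
use warnings;

use Inline CPP => <<'END_CPP';
#include <vector>

// Wrapper with a concrete element type, composed of the template class:
class IntStack {
  private:
    std::vector<int> v;
  public:
    IntStack() {}
    void push(int n) { v.push_back(n); }
    int  pop()       { int n = v.back(); v.pop_back(); return n; }
    int  size()      { return (int)v.size(); }
};
END_CPP

my $stack = IntStack->new;
$stack->push($_) for 1 .. 3;
print $stack->pop, "\n";   # prints 3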
Inline::CPP is also useful in automatically generating function wrappers for existing C++ libraries. The documentation explains how to do that.
One of the advantages of Inline::CPP over SWIG is that if you already have some experience with perlguts, perlapi, and perlcall, you will feel right at home already. With SWIG, you'll have to learn the SWIG way of doing things first, then figure out how to apply that to Perl, and possibly how to do it in a way that is CPAN-distributable.
Another advantage of using Inline::CPP is that it is a somewhat familiar tool in the Perl community. You are going to find a lot more people who understand Perl XS, Inline::C, and to some extent Inline::CPP than you will find people who have used SWIG with Perl. Although XS can be messy, it's a road more heavily travelled than using Perl with SWIG.
Inline::CPP is also a common topic on the inline@perl.org mailing list. In addition to myself, the maintainer of Inline::C and several other Inline-family maintainers frequent the list, and we do our best to assist people who need a hand getting going with the Inline family of modules.
You might also find my Perl Mongers talk on Inline::CPP useful in exploring how it might work for you. Additionally, Math::Prime::FastSieve stands as a proof-of-concept for basing a module on Inline::CPP (with an Inline::CPP dependency). Furthermore, Rob (sisyphus), the current Inline maintainer and author of InlineX::CPP2XS, has actually included an example in the InlineX::CPP2XS distribution that takes my Math::Prime::FastSieve and converts it to plain XS code using his InlineX::CPP2XS.
You should probably also give ExtUtils::XSpp a look. I think it requires you to declare a bit more stuff than Inline::CPP or SWIG, but it's rather powerful.
Perl is one of those languages that support function overloading by return type.
A simple example of this is wantarray().
A few nice modules are available on CPAN which extend this wantarray() idea and provide overloading for many other return types. These modules are Contextual::Return and Want. Unfortunately, I cannot use these modules, as both of them fail Perl::Critic under Perl version 5.8.9 (I cannot upgrade this Perl version).
So I am thinking of writing my own, very minimal module along the lines of Contextual::Return and Want. I tried to understand the code of Contextual::Return and Want, but I am not an expert.
I need function overloading for return types BOOL, OBJREF, LIST, SCALAR only.
Please help me with some guidelines on how I can start.
Modules that play with Perl's syntax in the way that Contextual::Return and Want do are pretty much bound to fall foul of Perl::Critic. In this case the main transgressions are occasionally disabling strict and using subroutine prototypes, which are minimal.
I personally believe it is a foolish rule that insists that all code must pass an arbitrary set of tests with no exceptions, but I also think that any code that behaves very differently depending on the context in which it is called is likely to be badly designed and difficult to understand and maintain. It is rare to see even wantarray used, as Perl generally does the right thing without you having to explain.
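For reference, plain wantarray already covers the common LIST/SCALAR split without any helper module; a minimal sketch:

sub fetch_items {
    my @items = ('a', 'b', 'c');
    return wantarray ? @items : scalar @items;   # list vs. scalar context
}

my @list  = fetch_items();   # ('a', 'b', 'c')
my $count = fetch_items();   # 3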
I think you may have come across a module that looks interesting to use, and want to incorporate it into your code somehow. Can you change my mind by showing an example of a subroutine that would require the comprehensive context checking that you describe?
I'm looking for a directory of perl modules which are perl only. That means (in DSLIP speak):
Perl-only, no compiler needed, should be platform independent
As far as I can see, the well-known search engines do not allow searching in this category only. Am I wrong? Maybe there is a nice way to get a complete module list...?
http://deps.cpantesters.org/ shows whether a module is not pure-perl. The criteria are explained in A Brief Note on Purity warnings, the source implementing this is somewhere in http://www.cantrell.org.uk/cgit/cgit.cgi/cpandeps/.
Combine this with visitcpan to get data for every distribution.
It seems strange to me that Perl would allow a package to export symbols into another package's namespace. The exporting package doesn't know if the using package already defined a symbol by the same name, and it certainly can't guarantee that it's the only package exporting a symbol by that name.
A very common problem caused by this is using CGI and LWP::Simple at the same time. Both packages export head() and cause an error. I know it's easy enough to work around, but that's not the point. You shouldn't have to employ workarounds to use two practically-core Perl libraries.
As far as I can see, the only reason to do this is laziness. You save some key strokes by not typing Foo:: or using an object interface, but is it really worth it?
The practice of exporting all the functions from a module by default is not the recommended one for Perl. You should only export functions if you have a good reason. The recommended practice is to use EXPORT_OK so that the user has to type the name of the required function, like:
use My::Module 'my_function';
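For completeness, the exporting side of that looks like this (a minimal sketch; the module and function names are hypothetical):

package My::Module;
use strict;
use warnings;
use Exporter 'import';

our @EXPORT_OK = ('my_function');   # exported only on request, never by default

sub my_function { return 'hello' }

1;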
Modules from way back when, like LWP::Simple and CGI, were written before this recommendation came in, and it's hard now to alter them to not export things since it would break existing software. I guess the recommendation came about through people noticing problems like that.
Anyway, Perl's object-oriented interface style doesn't require you to export anything, and you don't have to say Foo:: either, so that part of your question is wrong.
Exporting is a feature. Like every other feature in any language, it can cause problems if you (ab)use it too frequently, or where you shouldn't. It's good when used wisely and bad otherwise, just like any other feature.
Back in the day when there weren't many modules around, it didn't seem like such a bad thing to export things by default. With 15,000 packages on CPAN, however, there are bound to be conflicts and that's unfortunate. However, fixing the modules now might break existing code. Whenever you make a poor interface choice and release it to the public, you're committed to it even if you don't like it.
So, it sucks, but that's the way it is, and there are ways around it.
The exporting package doesn't know if the using package already defined a symbol by the same name, and it certainly can't guarantee that it's the only package exporting a symbol by that name.
If you wanted to, I imagine your import() routine could check, but the default Exporter.pm routine doesn't (and probably shouldn't: it gets used a lot, and always checking whether a name is already defined would cause a major slowdown; besides, if it found a conflict, what would Exporter be expected to do?).
As far as I can see, the only reason to do this is laziness. You save some key strokes by not typing Foo:: or using an object interface, but is it really worth it?
Foo:: isn't so bad, but consider My::Company::Private::Special::Namespace:: - your code will look much cleaner if we just export a few things. Not everything can (or should) be in a top-level namespace.
The exporting mechanism should be used when it makes code cleaner. When namespaces collide, it shouldn't be used, because it obviously isn't making code cleaner, but otherwise I'm a fan of exporting things when requested.
It's not just laziness, and it's not just old modules. Take Moose, "the post-modern object system", and Rose::DB::Object, the object interface to a popular ORM. Both import the meta method into the use-ing package's symbol table in order to provide features in that module.
It's not really any different than the problem of multiply inheriting from modules that each provide a method of the same name, except that the order of your parentage would decide which version of that method would get called (or you could define your own overridden version that somehow manually folded the features of both parents together).
Personally I'd love to be able to combine Rose::DB::Object with Moose, but it's not that big a deal to work around: one can make a Moose-derived class that “has a” Rose::DB::Object-derived object within it, rather than one that “is a” (i.e., inherits from) Rose::DB::Object.
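A minimal sketch of that "has a" arrangement, with hypothetical class names:

package My::Model;
use Moose;

# delegate selected methods to the contained Rose::DB::Object-derived object
has record => (
    is      => 'ro',
    isa     => 'My::RDBO::Record',   # hypothetical Rose::DB::Object subclass
    handles => [qw(load save)],
);

1;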
One of the beautiful things about Perl's "open" packages is that if you aren't crazy about the way a module author designed something, you can change it.
package LWPS;
require LWP::Simple;

for my $sub (@LWP::Simple::EXPORT, @LWP::Simple::EXPORT_OK) {
    no strict 'refs';
    # install each LWP::Simple function as a class method that
    # discards the class name and jumps to the original function
    *$sub = sub { shift; goto &{'LWP::Simple::' . $sub} };
}

package main;

my $page = LWPS->get('http://...');
Of course, in this case LWP::Simple::get() would probably be better.
Is there a Perl module that allows me to view diffs between actual and reference output of programs (or functions)? The test fails if there are differences.
Also, in case there are differences but the output is OK (because the functionality has changed) I want to be able to commit the actual output as future reference output.
Perl has excellent utilities for doing testing. The most commonly used module is probably Test::More, which provides all the infrastructure you're likely to need for writing regression tests. The prove utility provides an easy interface for running test suites and summarizing the results. The Test::Differences module (which can be used with Test::More) might be useful to you as well: it formats differences as side-by-side comparisons.
As for committing the actual output as the new reference material, that will depend on how your code under test provides output and how you capture it. It should be easy if you write to files and then compare them. If that's the case, you might want to use the Text::Diff module within your test suite.
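A sketch of how those pieces fit together (the program and file names are hypothetical):

use strict;
use warnings;
use Test::More tests => 1;
use Test::Differences;

# capture actual output and slurp the stored reference output
my $actual    = `perl myprog.pl`;
my $reference = do { local (@ARGV, $/) = 'expected.txt'; <> };

eq_or_diff $actual, $reference, 'output matches reference';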
As mentioned, Test::Differences is one of the standard ways of accomplishing this, but I needed to mention PerlUnit: please do not use this. It's "abandonware" and does not integrate with standard Perl testing tools. Thus, for all new test modules coming out, you would have to port their functionality if you wanted to use them. (If someone has picked up the maintenance of this abandoned module, drop me a line. I need to talk to them as I maintain core testing tools I'd like to help integrate with PerlUnit).
Disclaimer: while I didn't write it, I currently maintain Test::Differences, so I might be biased.
I tend to use more of the Test::Simple and Test::More functionality. I looked at PerlUnit, and it seems to provide much of the functionality that is already built into the standard Test::Simple and Test::More libraries.
I question those of you who recommend the use of PerlUnit. It hasn't had a release in 3 years. If you really want xUnit-style testing, have a look at Test::Class, it does the same job, but in a more Perlish way. The fact that it's still maintained and has regular releases doesn't hurt either.
Just make sure that it makes sense for your project. Maybe good old Test::More is all you need (it usually is for me). I recommend reading the "Why you should [not] use Test::Class" sections in the docs.
The community standard workhorses are Test::Simple (for getting started with testing) and Test::More (for once you want more than Test::Simple can do for you). Both are built around the concept of expected versus actual output, and both will show you differences when they occur. The perldoc for these modules will get you on your way.
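A minimal sketch of that expected-versus-actual style (the functions under test are toy stand-ins):

use strict;
use warnings;
use Test::More tests => 2;

sub add      { my ($x, $y) = @_; return $x + $y }   # toy code under test
sub greeting { return 'Hello, world' }

is( add(2, 2), 4, 'add() returns the expected sum' );
like( greeting(), qr/^Hello/, 'greeting() starts with Hello' );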
You might also want to check out the Perl QA wiki, and if you're really interested in perl testing, the perl-qa mailing list might be worth looking into -- though it's generally more about creation of testing systems for Perl than using those systems within the language.
Finally, using the module-starter tool (from Module::Starter) will give you a really nice "CPAN standard" layout for new work -- or for dropping existing code into -- including a readymade test harness setup.
For testing the output of a program, there is Test::Command. It allows you to easily verify the stdout and stderr (and the exit value) of programs. E.g.:
use Test::Command tests => 3;
my $echo_test = Test::Command->new( cmd => 'echo out' );
$echo_test->exit_is_num(0, 'exit normally');
$echo_test->stdout_is_eq("out\n", 'echoes out');
$echo_test->stderr_unlike( qr/something went (wrong|bad)/, 'nothing went bad' );
The module also has a functional interface, if that's more to your liking.
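For instance, the same checks might be written functionally like this (a sketch):

use Test::Command tests => 2;

# each function takes the command as its first argument
exit_is_num( 'echo out', 0, 'exit normally' );
stdout_is_eq( 'echo out', "out\n", 'echoes out' );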