I'm looking for a directory of perl modules which are perl only. That means (in DSLIP speak):
Perl-only, no compiler needed, should be platform independent
As far as I can see, the well-known search engines do not allow searching in this category only. Am I wrong? Maybe there is a nice way to get a complete module list...?
http://deps.cpantesters.org/ shows whether or not a module is pure Perl. The criteria are explained in A Brief Note on Purity warnings; the source implementing this is somewhere in http://www.cantrell.org.uk/cgit/cgit.cgi/cpandeps/.
Combine this with visitcpan to get data for every distribution.
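If you only need a rough local check rather than the cpandeps logic, one crude heuristic is to look for .xs or .c sources in an unpacked distribution. A minimal sketch (the directory argument and the file-extension test are assumptions for illustration, not what deps.cpantesters.org actually does):

use strict;
use warnings;
use File::Find;

# Rough purity heuristic: a distribution shipping .xs or .c files almost
# certainly needs a compiler. Approximation for illustration only.
my $dist_dir = shift @ARGV or die "usage: $0 <unpacked-dist-dir>\n";

my $needs_compiler = 0;
find( sub { $needs_compiler = 1 if /\.(?:xs|c)\z/ }, $dist_dir );

print $needs_compiler
    ? "$dist_dir: ships XS/C sources, probably not pure Perl\n"
    : "$dist_dir: looks like pure Perl\n";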
I have been on a "cleaning spree" lately at work, doing a lot of touch-up stuff that should have been done a while ago. One thing I have been doing is deleting modules that were imported into files and never used, or that were used at one point but aren't anymore. To do this I have just been deleting an import and running the program's test file, which gets really, really tedious.
Is there any programmatic way of doing this? Short of me writing a program myself to do it.
Short answer, you can't.
Longer, possibly more useful answer: you won't find a general-purpose tool that will tell you with 100% certainty whether the module you're purging is actually used. But you may be able to build a special-purpose tool to help with the manual search you're currently doing on your codebase. For example, try a wrapper around your test suite that removes the use statements for you and ignores any error messages except those that say Undefined subroutine &__PACKAGE__::foo, or other messages that occur when accessing missing features of a module. The wrapper could then automatically perform a dumb source scan on the codebase of the module being purged to see whether the missing subroutine foo (or other feature) might be defined in the unwanted module.
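A minimal sketch of such a wrapper, assuming the tests live in t/ and run under prove; the file handling and the single error pattern it greps for are simplifications for illustration:

use strict;
use warnings;
use File::Copy qw(copy move);

# Comment out "use Some::Module;" in one source file, run the tests, and
# report only the failures that look like a missing subroutine.
my ( $file, $module ) = @ARGV;
die "usage: $0 <file> <module>\n" unless defined $module;

copy( $file, "$file.bak" ) or die "backup failed: $!";

open my $in,  '<', "$file.bak" or die $!;
open my $out, '>', $file       or die $!;
while ( my $line = <$in> ) {
    $line = "# $line" if $line =~ /^\s*use\s+\Q$module\E\b/;
    print {$out} $line;
}
close $in;
close $out;

my $output = `prove -l t 2>&1`;
print "Something still seems to need $module:\n$output"
    if $output =~ /Undefined subroutine/;

move( "$file.bak", $file ) or die "restore failed: $!";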
You can supplement this with Devel::Cover to determine which parts of your code don't have tests so you can manually inspect those areas and maybe get insight into whether they are using code from the module you're trying to purge.
Due to the halting problem, you can't statically determine whether any program of sufficient complexity will exit or not. This applies to your problem because the "last" instruction of your program might be the one that uses the module you're purging. And since it is impossible to determine what the last instruction is, or whether it will ever be executed, it is impossible to statically determine whether that module will be used. Further, in a dynamic language, which can extend the program during its run, analysis of the source or even of the post-compile symbol tables would only tell you what was calling the unwanted module just before run time (whatever that means).
Because of this you won't find a general purpose tool that works for all programs. However, if you are positive that your code doesn't use certain run-time features of Perl you might be able to write a tool suited to your program that can determine if code from the module you're purging will actually be executed.
You might create alternative versions of the modules in question which have only an AUTOLOAD method (and an import method; see the comments) in them. Make this AUTOLOAD method croak on use. Put this module first in the include path.
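A minimal sketch of such a stub, assuming the module being retired is called Legacy::Module and that a stubs/ directory is placed at the front of @INC (for example via use lib or PERL5LIB):

# stubs/Legacy/Module.pm -- stand-in that complains on any real use
package Legacy::Module;
use strict;
use warnings;
use Carp qw(croak);

sub import { }    # let "use Legacy::Module;" itself succeed quietly

our $AUTOLOAD;
sub AUTOLOAD {
    my $called = $AUTOLOAD;
    return if $called =~ /::DESTROY\z/;    # don't complain about object cleanup
    croak "Legacy::Module is still needed: something called $called";
}

1;

With stubs/ first in @INC, any remaining fully-qualified or method call into Legacy::Module dies loudly; calls to functions the real module used to export will instead show up as ordinary "Undefined subroutine" errors in the caller's package.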
You might refine this method by making AUTOLOAD only log the usage and then load the real module and forward the original function call. You could also have a subroutine first in @INC which creates the fake module on the fly if necessary.
Of course you need a good test coverage to detect even rare uses.
This concept is definitely not perfect, but it might work with lots of modules and simplify the testing.
In this question I saw two different answers on how to directly call functions written in C++:
Inline::CPP (and here are more, like Inline::C, Inline::Lua, etc..)
SWIG
Handmade (as daxim told - majority of modules are handwritten)
I just browsed nearly all questions on SO tagged [perl][swig] looking for answers to the following questions:
What are the main differences when choosing between SWIG, Inline::CPP, or handwritten bindings?
When is it good practice (recommended) to use Inline::CPP (or Inline::C), and when is it recommended to use SWIG or handwritten bindings?
As I think about it, SWIG is more universal for other uses, as asked in this question, while Inline::CPP is Perl-specific. But from Perl's point of view, is there any significant difference?
I haven't used SWIG, so I cannot speak directly to it. But I'm pretty familiar with Inline::CPP.
If you would like to compose C++ code that gets compiled and becomes callable from within Perl, Inline::CPP facilitates this. So long as the C++ code doesn't change, it should only compile once. If you base a module on Inline::CPP, the code will be compiled at module install time, so another user never really sees the first time compilation lag; it happens at install time, just before the testing phase.
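A minimal sketch of what that looks like in practice (assuming Inline::CPP and a compatible C++ compiler are installed; the function is made up for illustration):

use strict;
use warnings;

# The C++ below is compiled on first run and cached; add() then becomes
# callable as an ordinary Perl subroutine.
use Inline CPP => <<'END_CPP';
int add(int a, int b) {
    return a + b;
}
END_CPP

print add( 2, 3 ), "\n";    # prints 5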
Inline::CPP is not 100% free of portability issues. The target user must have a C++ compiler that is of similar flavor to the C compiler used to build Perl, and the C++ standard libraries should be of versions that produce binary-compatible code with Perl. Inline::CPP has about a 94% success rate with the CPAN testers. And those last 6% almost always boil down to issues of the installation process not correctly deciphering what C++ compiler and libraries to use. ...and of those, it usually comes down to the libraries.
Let's assume you as a module author find yourself in that 94% who have no problem getting Inline::CPP installed. If you know that your target audience will fall into that same category, then producing a module based on Inline::CPP is simple. You basically have to add a couple of directives (VERSION and NAME), and swap out your Makefile.PL's ExtUtils::MakeMaker call for Inline::MakeMaker (it will invoke ExtUtils::MakeMaker). You might also want a CONFIGURE_REQUIRES directive specifying a current version of ExtUtils::MakeMaker when you create your distribution; this ensures that your users have a cleaner install experience.
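A rough sketch of both pieces (the module name, paths, and version numbers are placeholders; check the Inline::MakeMaker docs for the exact incantation your Inline version expects):

# lib/My/Module.pm -- the NAME and VERSION directives mentioned above;
# the C++ source itself lives in the __DATA__ section under a __CPP__ marker
package My::Module;
our $VERSION = '0.01';

use Inline CPP => 'DATA',
    NAME    => 'My::Module',
    VERSION => '0.01';

1;

# Makefile.PL -- Inline::MakeMaker wraps ExtUtils::MakeMaker
use Inline::MakeMaker;

WriteMakefile(
    NAME               => 'My::Module',
    VERSION_FROM       => 'lib/My/Module.pm',
    CONFIGURE_REQUIRES => {
        'ExtUtils::MakeMaker' => '6.64',    # assumed minimum; pick your own
        'Inline::CPP'         => '0',
    },
);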
Now if you're creating the module for general consumption and have no idea whether your target user will fit that 94% majority who can use Inline::CPP, you might be better off removing the Inline::CPP dependency. You might want to do this just to minimize the dependency chain anyway; it's nicer for your users. In that case, compose your code to work with Inline::CPP, and then use InlineX::CPP2XS to convert it to a plain old XS module. Your user will now be able to install without the process pulling Inline::CPP in first.
C++ is a large language, and Inline::CPP handles a large subset of it. Pay attention to the typemap file to determine what sorts of parameters can be passed (and converted) automatically, and what sorts are better dealt with using "guts and API" calls. One feature I wouldn't recommend using is automatic string conversion, as it would produce Unicode-unfriendly conversions. Better to handle strings explicitly through API calls.
The portion of C++ that isn't handled gracefully by Inline::CPP is template metaprogramming. You're free to use templates in your code, and free to use the STL. However, you cannot simply pass STL-typed parameters and hope that Inline::CPP will know how to convert them. It deals with POD (plain old data) types, not STL types. Furthermore, if you compose a template-based function or object method, the C++ compiler won't know what context Perl plans to call the function in, so it won't know what type to apply to the template at compile time. Consequently, the functions and object methods exposed directly to Inline::CPP need to be plain functions or methods, not template functions or classes.
These limitations in practice aren't hard to deal with as long as you know what to expect. If you want to expose a template class directly to Inline::CPP, just write a wrapper class that either inherits or composes itself of the template class, but gives it a concrete type for Inline::CPP to work with.
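For example, a thin concrete wrapper around an STL container might look like the sketch below (whether a given Inline::CPP version parses a particular member declaration can vary, so treat this as an illustration of the pattern rather than guaranteed-to-compile code):

use strict;
use warnings;

use Inline CPP => <<'END_CPP';
#include <vector>

// Concrete, non-template class for Inline::CPP to bind; the template
// machinery (std::vector<int>) stays hidden inside it.
class IntStack {
  public:
    IntStack() { }
    void push(int value) { data.push_back(value); }
    int  pop()           { int v = data.back(); data.pop_back(); return v; }
    int  size()          { return (int)data.size(); }
  private:
    std::vector<int> data;
};
END_CPP

my $stack = IntStack->new;
$stack->push($_) for 1 .. 3;
print $stack->pop,  "\n";    # 3
print $stack->size, "\n";    # 2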
Inline::CPP is also useful in automatically generating function wrappers for existing C++ libraries. The documentation explains how to do that.
One of the advantages of Inline::CPP over SWIG is that if you already have some experience with perlguts, perlapi, and perlcall, you will feel right at home already. With SWIG, you'll have to learn the SWIG way of doing things first, and then figure out how to apply that to Perl, and possibly how to do it in a way that is CPAN-distributable.
Another advantage of using Inline::CPP is that it is a somewhat familiar tool in the Perl community. You are going to find a lot more people who understand Perl XS, Inline::C, and to some extent Inline::CPP than you will find people who have used SWIG with Perl. Although XS can be messy, it's a road more heavily travelled than using Perl with SWIG.
Inline::CPP is also a common topic on the inline@perl.org mailing list. In addition to myself, the maintainer of Inline::C and several other Inline-family maintainers frequent the list, and we do our best to assist people who need a hand getting going with the Inline family of modules.
You might also find my Perl Mongers talk on Inline::CPP useful in exploring how it might work for you. Additionally, Math::Prime::FastSieve stands as a proof of concept for basing a module on Inline::CPP (with an Inline::CPP dependency). Furthermore, Rob (sisyphus), the current Inline maintainer and author of InlineX::CPP2XS, has actually included an example in the InlineX::CPP2XS distribution that takes my Math::Prime::FastSieve and converts it to plain XS code using his InlineX::CPP2XS.
You should probably also give ExtUtils::XSpp a look. I think it requires you to declare a bit more stuff than Inline::CPP or SWIG, but it's rather powerful.
Whenever I see the term source filter I am left wondering what it refers to.
Aside from a formal definition, I think an example would also be helpful to drive the message home.
A source filter is a module that modifies some other code before it is evaluated, so the code that is executed is not what the programmer sees when it is written. You can read more about source filters (in the Perl context) in perldoc perlfilter. Some examples are Smart::Comments, which allows the programmer to leave debugging commands in comments in the code and employ them only if desired, and PDL::NiceSlice, which is sugar for slicing PDL objects.
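As a small illustration of Smart::Comments (the variables are made up), the specially formatted comments below do nothing unless the module is loaded, e.g. with perl -MSmart::Comments script.pl:

use strict;
use warnings;
use Smart::Comments;    # or load it from the command line and leave this out

my @queue = ( 1 .. 3 );
my $next  = shift @queue;

### $next
### @queue

Remove the use line (or don't pass -MSmart::Comments) and they are ordinary comments again.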
Edit:
For more information on usage (should you wish to brave the beast), read the documentation for Filter::Simple, which is a typical way to create filters.
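A toy example of the Filter::Simple approach (the package name and the keyword it rewrites are invented for illustration):

# MyFrench.pm -- rewrites "afficher" to "print" in any code compiled after
# the "use MyFrench;" line
package MyFrench;
use Filter::Simple;

FILTER { s/\bafficher\b/print/g };

1;

# elsewhere:
#   use MyFrench;
#   afficher "bonjour\n";    # is compiled as: print "bonjour\n";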
Alternatively, there is a new and different way to muck about with the source: Devel::Declare lets you interact with Perl's own parser, letting you do many of the same kinds of things as a source filter, but without the source filter. This can be "safer" in some respects, yet it has a more limited scope.
A source filter is a form of module which affects the way in which a file use-ing it will be parsed. They are commonly used to simulate syntactical features which Perl does not have natively -- for instance, the Switch source filter was often used to simulate switch statements before Perl's given { } construction was available.
Source filters work by taking the text of the module as input, performing some processing on it, and outputting the filtered source code. For a simple example of how a source filter is implemented, as well as more details, see the perldoc page for perlfilter.
They are pre-processors. They change the source code before it reaches the Perl compiler. You can do scary things with them, in effect implementing your own language, with all the effects this has on readability (for others), robustness (writing parsers is hard) and maintainability (debugging gets tricky when your idea of what the source code is differs from what compiler and runtime think it is).
When using keys %:: to get a list of the currently loaded root namespaces, the Internals:: package is loaded by default (along with UNIVERSAL:: and a few others). However, I haven't found any documentation for the functions in the Internals:: package.
keys %{Internals::} returns: SvREFCNT, hv_clear_placeholders, hash_seed, SvREADONLY, HvREHASH, rehash_seed
All of these can probably be looked up in Perl's C API docs, but is there any Perl level documentation for them? Is the package stable? It's used by several core modules (Hash::Util for one), so I imagine it is, but the lack of documentation is a bit troubling.
I didn't see Internals.pm in the Perl distribution (different name maybe?), and it is not the Internals module up on CPAN.
Note: I fully understand that the functions in Internals:: are potentially dangerous, and I do not have any particular use in mind. I was reading through Hash::Util's source and came across it.
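For what it's worth, a couple of them can be poked at from ordinary Perl code; this is illustration only, since (as noted below) none of it is documented or guaranteed to stay stable:

use strict;
use warnings;

my $x = "hello";

# Reference count of the scalar (undocumented, may change between versions)
print Internals::SvREFCNT($x), "\n";

# Mark the scalar readonly (the kind of call Hash::Util builds on)
Internals::SvREADONLY( $x, 1 );
eval { $x = "world" };
print "modification failed: $@" if $@;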
IIRC the code is not Internals.pm but libinternals.c. It looks like they used to be in universal.c in Perl 5.8 but got migrated out.
As of 03/2009 and Perl 5.10, they were not documented, as per this perlmonks thread.
Also, in the same thread, ysth states:
Undocumented things in universal.c should not be depended on; they should only be used by core modules. They aren't documented on purpose, to allow them to be changed whenever and however necessary. For those purposes, the code is good enough documentation.
Is there a Perl module that allows me to view diffs between actual and reference output of programs (or functions)? The test fails if there are differences.
Also, in case there are differences but the output is OK (because the functionality has changed) I want to be able to commit the actual output as future reference output.
Perl has excellent utilities for doing testing. The most commonly used module is probably Test::More, which provides all the infrastructure you're likely to need for writing regression tests. The prove utility provides an easy interface for running test suites and summarizing the results. The Test::Differences module (which can be used with Test::More) might be useful to you as well. It formats differences as side-by-side comparisons. As for committing the actual output as the new reference material, that will depend on how your code under test provides output and how you capture it. It should be easy if you write to files and then compare them. If that's the case you might want to use the Text::Diff module within your test suite.
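A sketch of how Test::Differences fits into that (the script and reference-file paths are placeholders):

use strict;
use warnings;
use Test::More tests => 1;
use Test::Differences;

# Capture the actual output and compare it with the stored reference;
# eq_or_diff shows a side-by-side diff on failure.
my $got = `perl bin/report.pl`;

my $expected = do {
    open my $fh, '<', 't/expected/report.txt' or die $!;
    local $/;
    <$fh>;
};

eq_or_diff( $got, $expected, 'report output matches the reference' );

When a difference turns out to be a legitimate change, overwriting t/expected/report.txt with the captured output is the "commit new reference" step you describe; no special module is needed for that part.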
As mentioned, Test::Differences is one of the standard ways of accomplishing this, but I needed to mention PerlUnit: please do not use this. It's "abandonware" and does not integrate with standard Perl testing tools. Thus, for all new test modules coming out, you would have to port their functionality if you wanted to use them. (If someone has picked up the maintenance of this abandoned module, drop me a line. I need to talk to them as I maintain core testing tools I'd like to help integrate with PerlUnit).
Disclaimer: while I didn't write it, I currently maintain Test::Differences, so I might be biased.
I tend to use more of the Test::Simple and Test::More functionality. I looked at PerlUnit and it seems to provide much of the functionality which is already built into the standard libraries with the Test::Simple and Test::More libraries.
I question those of you who recommend the use of PerlUnit. It hasn't had a release in 3 years. If you really want xUnit-style testing, have a look at Test::Class, it does the same job, but in a more Perlish way. The fact that it's still maintained and has regular releases doesn't hurt either.
Just make sure that it makes sense for your project. Maybe good old Test::More is all you need (it usually is for me). I recommend reading the "Why you should [not] use Test::Class" sections in the docs.
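If you do go the Test::Class route, a self-contained example looks something like this (the class and method names are made up):

package Stack::Test;
use strict;
use warnings;
use parent 'Test::Class';
use Test::More;

# Runs before every test method
sub setup : Test(setup) {
    my $self = shift;
    $self->{stack} = [];
}

# The number in Test(...) declares how many assertions this method makes
sub push_and_pop : Test(2) {
    my $self = shift;
    push @{ $self->{stack} }, 42;
    is scalar @{ $self->{stack} }, 1,  'one element after push';
    is pop @{ $self->{stack} },    42, 'pop returns what was pushed';
}

Stack::Test->runtests;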
The community standard workhorses are Test::Simple (for getting started with testing) and Test::More (for once you want more than Test::Simple can do for you). Both are built around the concept of expected versus actual output, and both will show you differences when they occur. The perldoc for these modules will get you on your way.
You might also want to check out the Perl QA wiki, and if you're really interested in perl testing, the perl-qa mailing list might be worth looking into -- though it's generally more about creation of testing systems for Perl than using those systems within the language.
Finally, using the module-starter tool (from Module::Starter) will give you a really nice "CPAN standard" layout for new work -- or for dropping existing code into -- including a readymade test harness setup.
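For example, a single command such as the following (the module name, author, and email are placeholders) lays out the distribution skeleton, tests included:

module-starter --module=My::New::Module --author="Your Name" --email=you@example.com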
For testing the output of a program, there is Test::Command. It allows you to easily verify the stdout and stderr (and the exit value) of programs. E.g.:
use Test::Command tests => 3;
my $echo_test = Test::Command->new( cmd => 'echo out' );
$echo_test->exit_is_num(0, 'exit normally');
$echo_test->stdout_is_eq("out\n", 'echoes out');
$echo_test->stderr_unlike( qr/something went (wrong|bad)/, 'nothing went bad' );
The module also has a functional interface, if that's more to your liking.