I am working on a Perl module that I would like to submit to CPAN, but I have a small question about the module's directory structure.
According to the PerlMonks article I read, the module's directory structure should look like this:
Foo-Bar-0.01/Bar.pm
Foo-Bar-0.01/Makefile.PL
Foo-Bar-0.01/MANIFEST
Foo-Bar-0.01/Changes
Foo-Bar-0.01/test.pl
Foo-Bar-0.01/README
But when I use the following command, the generated structure is different:
h2xs -AX Foo::Bar
Writing Foo-Bar/lib/Foo/Bar.pm
Writing Foo-Bar/Makefile.PL
Writing Foo-Bar/README
Writing Foo-Bar/t/Foo-Bar.t
Writing Foo-Bar/Changes
Writing Foo-Bar/MANIFEST
The article in question is advocating a considerably older module structure. It certainly could still be used, but it misses out on many of the improvements that have since been made in testing, building, and distribution practices.
To break down the differences:
modules have moved from the top level to the lib/ directory. This unifies the location where your module "lives" (i.e., the place where you work on the code and create the baseline modules to be tested and eventually distributed). It also makes it easier to set up any hierarchy that you need (e.g. subclasses, or helper modules); the newer setup will just pick these up. The older one may as well, but I'm not familiar enough with it to say for certain.
Makefile.PL in the newer setup will, when "make" is run, create a directory called "blib", the *b*uild *lib*rary - this is where the code is built for actual testing. It will pretty much be a copy of lib/ unless you have XS code, in which case this is where the compiled XS code ends up. This makes the process of building and testing the code simpler: if you update a file in lib/, the Makefile will rebuild the file into blib before trying to test it.
the t/ directory replaces test.pl; "make test" will execute all the *.t files in t/, as opposed to you having to put all your tests in test.pl. This makes it far easier to write tests, as you can be sure you have a consistent state at the beginning of each test (see the sample test file after this list).
MANIFEST and Changes are the same in both: MANIFEST (built by "make manifest") is used to determine which files should be included when the module is packaged for upload, and to verify that a package is complete when it's downloaded and unpacked for building. Changes is simply a changelog, which you edit by hand to record the changes made in each distributed version. A sample MANIFEST is sketched after this list.
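As an illustration, a minimal t/Foo-Bar.t might look like this sketch (use_ok is the usual starting point; anything beyond it depends on your module's actual API):

use strict;
use warnings;
use Test::More;

# Each .t file runs in a fresh interpreter, so any state this
# particular test needs is set up right here.
use_ok('Foo::Bar');

done_testing();

And for the h2xs-generated layout shown earlier, the corresponding MANIFEST would simply list:

Changes
lib/Foo/Bar.pm
Makefile.PL
MANIFEST
README
t/Foo-Bar.t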
As recommended in the comments on your question, using Module::Starter or Dist::Zilla (be warned that Dist::Zilla is Moose-based and will install a lot of prerequisites) is a better approach to building modules in a more modern way. Of the two layouts, the h2xs version is closer to modern packaging standards, but you're really better off using one of the recommended package-starter options (and probably Module::Build, which uses a Perl script called Build instead of a Makefile to build the code).
Related
I maintain multiple (unix-ish) applications written in Perl whose current installation process consists of a hand-written Makefile and installs configuration files into /etc/.
I'd really like to switch their development to use Dist::Zilla, but so far I haven't found any Dist::Zilla plugin or feature which allows me to put given files into /etc/ when the make install (or ./Build install in case of using Module::Build instead of ExtUtils::MakeMaker) is run by the local administrator who's installing my application.
With pure ExtUtils::MakeMaker, I could define additional make targets in MY::postamble and then let the install target depend on one of them via the depend { install => … } attribute. Doing something similar, but via dzil build, would probably suffice, but I'd appreciate a more obvious way.
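For reference, a minimal sketch of that pure-ExtUtils::MakeMaker approach might look like this (the distribution name and file paths are made up):

use strict;
use warnings;
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME   => 'My::App',                    # hypothetical name
    # Make the standard "install" target also run our extra target.
    depend => { install => 'install_etc' },
);

sub MY::postamble {
    # Extra Makefile text; the recipe line must start with a hard tab.
    return <<'MAKE';
install_etc :
	cp etc/myapp.conf /etc/myapp.conf
MAKE
}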
One orthogonal approach would be to make the application not require the files under /etc/ to exist, but for merely switching to Dist::Zilla that seems like too much change to the actual code, given that I only want to change the build system for now.
For the curious: the two applications I currently have in mind for switching to Dist::Zilla are xen-tools and unburden-home-dir.
The best thing to do is to avoid installing files into /etc from any Perl distribution. You cannot ensure that the cpan client (or the installing user) has permissions to install there, and there can be multiple Perls installed on a system, so each one of them would clobber the /etc files of another install. You can't really prevent the file from being overwritten by a subsequent install, so you shouldn't put config data there that you don't want to lose.
You could put the config file in /etc/, if the application knows to look for it there, but you should allow for that path to be customized (say on a test system, look for the file in the local directory, or in a user's home directory).
For installing read-only module-specific data, the best practice in Perl is to install into a Perl-install-specific location, and the module to do that is File::ShareDir::Install. You can use it from Dist::Zilla using the [ShareDir] plugin, Dist::Zilla::Plugin::ShareDir. It is even included in the [@Basic] plugin bundle, so if you use [@Basic] in your dist.ini, you don't need to do anything at all, other than drop your data files into the share/ directory in your distribution repository.
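A minimal dist.ini along those lines might look like this sketch (name, author, and version are hypothetical):

name             = My-App
version          = 0.001
author           = A. U. Thor <author@example.com>
license          = Perl_5
copyright_holder = A. U. Thor

; [@Basic] includes [ShareDir], which picks up the share/ directory.
[@Basic]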
To access the contents of the sharedir from code, use File::ShareDir.
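On the runtime side, a sketch (distribution and file names are hypothetical):

use File::ShareDir qw(dist_file);

# Locate a file that was installed from the distribution's share/ directory.
my $config_path = dist_file('My-App', 'myapp.conf');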
For porting a complex module installer to Dist::Zilla, I recommend my plugins MakeMaker::Custom or ModuleBuild::Custom, depending on which installer you prefer. These allow you to keep your existing Makefile.PL or Build.PL and just have Dist::Zilla plug in necessary bits like the dependencies.
I'm trying to set up a large-ish project, written in Perl. The IBM MakeMaker tutorial has been very helpful so far, but I don't understand how to link all the modules into the main program. In my project root, I have MANIFEST, Makefile.PL, README, a bin directory, and a lib directory. In my bin directory, I have my main script (Main.pl). In the lib directory, I have each of my modules, divided up into their own respective directories (i.e. Utils::Util1 and Utils::Util2 in the Utils directory, etc.). In each module directory, there is also a t directory containing tests.
My MANIFEST file has the following:
bin/Main.pl
lib/Utils/Util1.pm
lib/Utils/Util2.pm
lib/Utils/t/Utils1.t
lib/Utils/t/Utils2.t
Makefile.PL
MANIFEST
README
Makefile.PL is the following:
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME         => 'Foo',
    VERSION_FROM => 'bin/Main.pl',
    # The libraries that we need and their minimum version numbers.
    PREREQ_PM    => { 'XML::Simple' => 2.18 },
    EXE_FILES    => ['bin/Main.pl'],
);
After I run make and then run the program, it crashes, complaining that it cannot find Utils::Util1; and when I run 'make test', it says no tests are defined. Can anyone make any suggestions? I have never done a large-scale project like this in Perl, and I will need to add many more modules.
If you are just starting to create Perl modules (which are also Perl's equivalent of a project), don't use MakeMaker. Module::Build is the way to go, and it's now part of the standard library. MakeMaker is for us old salts who haven't converted to Module::Build yet. :) I'll strike that advice now that Module::Build is unmaintained and out of favor; I still use MakeMaker.
You should never start off a Perl project by trying to create the structure yourself. It's too much work and you'll always forget something.
There's h2xs, a program that comes with perl and was originally meant as a tool to convert .h files into Perl's glue language XS. It works fine for starting a module, and its big advantage is that it comes with perl:
% h2xs -AXn Module::Name
Something like Module::Starter is a bit more sophisticated, although you have to get it from CPAN. It's the tool we use in Intermediate Perl because it's simple. It fills in some templates with your information:
% module-starter --author=... --email=... --module=...
If you are going to do this quite a bit, you might then convert that to Distribution::Cooker so you can customize your files and contents. It's a dinky utility I wrote for myself so I could use my own templates.
% dist_cooker Module::Name
If you're really hard core, you might want Dist::Zilla, but that's more for people who already know what they are doing.
Might I also suggest module-starter? It'll automatically create a skeleton project which "Just Works". I learned what little I know about Perl modules organization by reading the generated skeleton files. It's all well-documented, and quite easy to use as a base for growing a larger project in. You can check out the getting-started docs to see what it gives you.
Running module-starter will give you a Perl distribution consisting of a number of modules (use the command-line option --module, such as:
module-starter --distro=Project --module=Project::Module::A,Project::Module::B [...]
to create multiple modules in a single distribution). It's then up to you whether you'd prefer to organize your project as a single distribution consisting of a number of modules working together, or as a number of distributions which can be released separately but which depend on each other (as configured in your Build.PL or Makefile.PL) to provide a complete system.
Try this structure:
bin/Main.pl
lib/Utils/Util1.pm
lib/Utils/Util2.pm
Makefile.PL
MANIFEST
README
t/Utils1.t
t/Utils2.t
As ysth said, make does not install your modules, it just builds them in a blib directory. (In your case it just copies them there, but if you had XS code, it would be compiled with a C compiler.) Use make install to install your modules for regular scripts to use.
If you want to run your script between make and make install, you can do:
perl -Mblib bin/Main.pl
The -Mblib instructs perl to temporarily add the appropriate directories to the search path, so you can try out an uninstalled module. (make test does that automatically.)
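Under the hood, -Mblib is roughly equivalent to adding the build directories to @INC yourself, i.e. something like:

perl -Iblib/lib -Iblib/arch bin/Main.pl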
By default, tests are looked for in a top-level t directory (or a test.pl file, but that has some limitations, so should be avoided).
You say "After I make and run"...make puts things into a blib directory structure ready to be installed, but doesn't do anything special to make running a script access them. (make test is special; it does add appropriate paths from blib to perl's #INC to be able to run the tests.) You will need to do a "make install" to install the modules where your script will find them (or use a tool like PAR to package them together with your script).
I have an app that I pack into "binary" form using PerlApp for distribution. Since my clients want a simple install for their Win32 systems, this works very nicely.
Now a client has decided that they need to run all unit tests, like in a standard install. However, they still will not install a normal Perl.
So, I find myself in need of a way to package my unit tests for operation on my client's systems.
My first thought was that I could pack up prove in one file and pack each of my tests separately. Then ship a zip file with the appropriate structure.
A bit of research showed that Test::Harness::Straps invokes perl from the command line.
Is there an existing tool that helps with this process?
Perhaps I could use PAR::Packer's parl tool to handle invocation of my test scripts.
I'm interested in thoughts on how to apply either PAR or PerlApp, as well as any thoughts on how to approach overriding Test::Harness and friends.
Thanks.
Update: I don't have my heart set on PAR or PerlApp. Those are just the tools I am familiar with. If you have an idea or a solution that requires a different packager (such as Cava Packager), I would love to hear about it.
Update 2: tsee pointed out a great new feature in PAR that gets me close. Are there any TAP experts out there that can supply some ideas or pointers on where to look in the new Test::Harness distribution?
I'm probably not breaking big news if I tell you that PAR (and probably also PerlApp) isn't meant to package the whole test suite and the plethora of CPAN-module build byproducts. These tools are intended to package stand-alone applications or binary JAR-like module libraries.
This being said, you can add arbitrary files to a PAR archive (both to .par libraries and stand-alone .exe's) using pp's -a switch. In case of the stand-alone executable, the contents will be extracted to $ENV{PAR_TEMP}."/inc" at run-time.
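For example, to bundle the whole t/ directory into a packaged executable, the invocation might look like this sketch (script and output names are hypothetical):

% pp -o myapp.exe -a t bin/myapp.pl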
That leaves you with the problem of reusing the PAR-packaged executable to run the test harness (and letting that harness run your executable as a "perl"). Now, I have no ready-made solution for that, but I've recently worked on making PAR-packaged executables reusable as more-or-less general-purpose perl interpreters. Two gotchas before I explain how you can use that:
Your application isn't going to magically be called "perl" and add itself to your $PATH.
The "reuse" of the application as a general purpose perl requires special options and does not currently support the normal perl options (those in perlrun). It can simply run an external perl script of your choice.
Unfortunately, the latter problem is what may kill this approach for you. Support for perl command line options is something I've been thinking about, but won't implement any time soon.
Here's the recipe for getting PAR with "reusable exe" support:
Install the newest version of PAR from CPAN.
Install the newest, developer version of PAR::Packer from CPAN (0.992_02 or 03).
Add the "--reusable" option to your pp command line.
Run your executable with the following options to run an external script "foo.pl":
./myapp --par-options --reuse foo.pl FOO-PL-OPTIONS-HERE
Unfortunately, how you're going to teach Test::Harness that "./myapp --par-options --reuse" is a perl interpreter is beyond me.
Cava Packager allows you to package test scripts with your packaged executables. This is primarily to allow you to run tests against the packaged code before distribution. However the option is there to also distribute the tests and test capability to your end users.
Note: As indicated by my name, I am affiliated with Cava Packager.
This is a follow-up to my previous question about developing Perl applications. Let’s say I develop an application as a CPAN module using Module::Install. Now I upload the code to the production server, say using a git push, and I would like to install the application dependencies listed in Makefile.PL. If I simply run cpan ., the thing tries to install the application like a regular Perl module, i.e. it starts to copy the modules and documentation to the standard Perl directories all over the system.
Is this the way it’s supposed to be? Do you install the application into the standard Perl directories? I am used to having my Perl applications in one directory with a separate lib directory. Otherwise it seems I’d have to manage a lot of other things, like installing the resources somewhere on the path, etc. If I just want to install the deps declared in Makefile.PL and run the application tests to make sure everything works, what should I do?
(Is this documented somewhere? I mean, is there something like best practice for deploying and updating non-trivial Perl applications? Or is everybody doing this his own way?)
I might be misunderstanding, but I think what you're looking for is
perl Makefile.PL
make installdeps
If you are using Module::Install, you're really using ExtUtils::MakeMaker behind the scenes. You can use all of the MakeMaker features and the targets it provides. Although the documentation doesn't show every feature, there are some valuable things to be found in the generated Makefile.
However, MakeMaker is old news and most everyone has asked Santa Claus for it to disappear. If you want better control, including creating your own targets and process, Module::Build is orders of magnitude easier to work with, as well as cross-platform (even if that just means not having to use a different make, gmake, or whatever on the same OS on different boxes). If you deviate from the normal, consumer-grade installation process, your life will be easier without MakeMaker.
Some people appreciate the brevity of the Module::Install build file, but once constructed, you don't spend a lot of time messing with your build file so it's not that much of a real benefit. When the little benefit you get locks you into MakeMaker, it's not a win at all.
A 2014 update: Module::Build has now fallen out of favor and needs a maintainer. It never quite got to the point where people could use it to build and distribute XS modules. It was deprecated in Perl v5.19 although you can still get it from CPAN.
You could look at Module::ScanDeps to generate a list of dependent modules for installing, or PAR::Packer for packaging up the whole thing as an "app".
I want another developer to run a Perl script I have written. The script uses many CPAN modules that have to be installed before the script can be run. Is it possible to make the script (or the perl binary) to dump a list of all the missing modules? Perl prints out the missing modules’ names when I attempt to run the script, but this is verbose and does not list all the missing modules at once. I’d like to do something like:
$ cpan -i `said-script --list-deps`
Or even:
$ list-deps said-script > required-modules # on my machine
$ cpan -i `cat required-modules` # on his machine
Is there a simple way to do it? This is not a show stopper, but I would like to make the other developer’s life easier. (The required modules are sprinkled across several files, so that it’s not easy for me to make the list by hand without missing anything. I know about PAR, but it seems a bit too complicated for what I want.)
Update: Thanks, Manni, that will do. I did not know about %INC; I only knew about @INC. I settled on something like this:
print join("\n", map { s|/|::|g; s|\.pm$||; $_ } keys %INC);
Which prints out:
Moose::Meta::TypeConstraint::Registry
Moose::Meta::Role::Application::ToClass
Class::C3
List::Util
Imager::Color
…
Looks like this will work.
Check out Module::ScanDeps and the "scandeps.pl" utility that comes with it. It can do a static (and recursive) analysis of your code for dependencies as well as the %INC dump either after compiling or running the program.
Please note that the static source scanning always errs on the side of including too many dependencies. (It is the dependency scanner used by PAR and aims at being easiest on the end-user.)
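Typical invocations look something like this (the default is a purely static scan; -c compiles the code before dumping dependencies, and -x actually executes it):

% scandeps.pl myscript.pl
% scandeps.pl -x myscript.pl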
Finally, you could choose to distribute your script as a CPAN distribution. That sounds much more complicated than it really is. You can use something like Module::Starter to set up a basic skeleton of a tentative App::YourScript distribution. Put your script in the bin/ subdirectory and edit the Makefile.PL to reference all of your direct dependencies. Then, for distribution you do:
perl Makefile.PL
make
make dist
The last step generates a nice App-YourScript-VERSION.tar.gz
Now, when the client wants to install all dependencies, he does the following:
Set up the CPAN client correctly. Simply run it and answer the questions. But you're requiring that already anyway.
"tar -xz App-YourScript-VERSION.tar.gz && cd App-YourScript-VERSION"
Run "cpan ."
The CPAN client will now install all direct dependencies and the dependencies of those distributions automatically. Depending on how you set it up, it will either follow the prerequisites recursively automatically or prompt with a y/n each time.
As an example of this, you might check out a few of the App::* distributions on CPAN. I would think App::Ack is a good example. Maybe one of the App::* distributions from my CPAN directory (SMUELLER).
You could dump %INC at the end of your script. It will contain all used and required modules. But of course, this will only be helpful if you don't require modules conditionally (require Foo if $bar).
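A minimal sketch of that, using an END block so the dump runs after everything else has loaded:

# Runs at interpreter shutdown; %INC maps loaded file names
# (Foo/Bar.pm) to where they were found on disk.
END {
    for my $file (sort keys %INC) {
        (my $module = $file) =~ s{/}{::}g;
        $module =~ s{\.pm\z}{};
        print "$module\n";
    }
}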
For quick-and-dirty, infrequent use, dumping %INC is the best way to go. If you have to do this with continuous integration testing or something more robust, there are some other tools to help.
Steffen already mentioned Module::ScanDeps.
The code in Test::Prereq does this, but it has an additional layer that ensures your Makefile.PL or Build.PL lists them as dependencies. If you make your scripts look like a normal Perl distribution, that makes it fairly easy to check for new dependencies: just run the test suite again.
Aside from that, you might use a tool such as Module::Extract::Use, which parses the static code looking for use and require statements (although it won't find them in string evals). That gets you just the modules you told your script to load.
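A sketch of how that might look (to the best of my recollection of the interface; the script name is the one from your question):

use Module::Extract::Use;

my $extor = Module::Extract::Use->new;

# Static scan: finds use/require statements, but not string evals.
my @modules = $extor->get_modules('said-script');
print "$_\n" for @modules;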
Also, once you know which modules you loaded, you can combine that with David Cantrell's CPANdeps tool that has already created the dependency tree for most CPAN modules.
Note that you also have to think about optional features. Your code in this case may not have them, but sometimes you don't load a module until you need it:
sub foo {
    require Bar;   # don't load until we need to use it
    ...
}
If you don't exercise that feature in your trial run or test, you won't see that you need Bar for that feature. A similar problem comes up when a module loads a different set of dependency modules in a different environment (say, mod_perl or Windows, and so on).
There's not a good, automated way of testing optional features like that so you can get their dependencies. However, I think that should be on my To Do list since it sounds like an interesting problem.
Another tool in this area, which is used by Dist::Zilla and its AutoPrereqs plugin, is Perl::PrereqScanner. It installs a scan-perl-prereqs program that uses PPI and a few plugins to search for most kinds of prereq declaration, using the minimum versions you define. In general, I suggest this over scanning %INC, which can bring in bogus requirements and ignores versions.
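For example, pointing it at the code you want scanned (paths hypothetical; as far as I know it accepts file and directory arguments and prints the discovered prerequisites):

% scan-perl-prereqs lib/ bin/myscript.pl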
Today I develop my Perl apps as CPAN-like distributions using Dist::Zilla, which can take care of the dependencies through the AutoPrereqs plugin. Another interesting piece of code in this area is Carton.