ExtUtils::MakeMaker how to install configuration file - perl

I'm using ExtUtils::MakeMaker to distribute my Perl module, which is composed of a .pm file and an executable.
The executable has to load a configuration file.
I want my Makefile.PL to:
generate the configuration file;
install it in the correct PREFIX;
modify the executable to set the real path of the configuration file.
But I have no idea how to do this.

This is where Makemaker is a real pain. You have to know how to do this in make.
First, Makefile.PL is just a Perl program, so you can do anything you want with that.
Second, you can use a .PL file as a program that runs to generate the real file. For instance, you could have a lib/Module.pm.PL. At build time, the system runs that program and uses the output to create blib/lib/Module.pm. See the documentation for PL_FILES.
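For instance (an illustrative sketch; the file and package names are made up), the Makefile.PL entry maps the generator to its output, and the generator program gets the target path as its first argument:
# In Makefile.PL:
#     PL_FILES => { 'lib/Module.pm.PL' => 'lib/Module.pm' },

# lib/Module.pm.PL - run at build time; writes the real module
my $target = shift @ARGV;
open my $out, '>', $target or die "Can't write $target: $!";
print {$out} "package Module;\n";
print {$out} "1;\n";
close $out;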
Third, you can add tasks to the make targets so your processing happens at the right time. Double colon targets add to the tasks already defined.
all ::
	perl create_config.pl > config.pl
install ... config.pl
To add this to Makefile from Makefile.PL, you have to add special subroutines to the MY namespace. It's all documented in Makemaker, but the docs assume you are comfortable with make.
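For example, roughly like this (an untested sketch; create_config.pl is from the snippet above, everything else is illustrative, and remember that make recipe lines must start with a tab):
# In Makefile.PL, alongside the WriteMakefile() call:
sub MY::postamble {
    return <<'MAKE';
config.pl : create_config.pl
	$(PERL) create_config.pl > config.pl

all :: config.pl
MAKE
}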
But, all of that is really a pain in the ass even if you know how to do it. You think you have it figured out, then someone has a different setup or a different sort of make. You spend all your time with fiddly bits to make it compatible everywhere. This is one of the major reasons the Perl gods invented Module::Build. It's much easier to add custom processing and modify build targets, and you get to do it all in Perl! The rule of thumb is that if you don't know how to do it with Makemaker, it's time to use Module::Build.
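To give a taste of that, here's a sketch of the Module::Build way (untested; My::Module and create_config.pl are placeholders). Module::Build->subclass and the ACTION_* methods are its documented extension points:
# Build.PL
use Module::Build;

my $class = Module::Build->subclass(
    class => 'My::Builder',
    code  => q{
        sub ACTION_code {
            my $self = shift;
            # generate the config file before the normal build steps
            system("$^X create_config.pl > config.pl") == 0
                or die "create_config.pl failed";
            $self->SUPER::ACTION_code(@_);
        }
    },
);

$class->new(
    module_name => 'My::Module',
    license     => 'perl',
)->create_build_script;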

Related

Perl to generate one executable file for a script which uses any number of modules and libraries

I am working on creating an agent in Perl which does several actions. It uses several modules which are in .pm format and also a few libraries. Now I want to convert it into one executable file so that I can install it on any number of servers by copying that single file. Is this something I can achieve in Perl? I am just a beginner in Perl; perhaps my question might sound dumb, but it will teach me something.
pp script provided with PAR::Packer is able to create single-file executables. An example from its page:
pp -o foo foo.pl bar.pl # Pack 'foo.pl' and 'bar.pl' into 'foo'
Some modules are included with Perl, so even though they're separate modules, they will work on other Perl installs without installing those modules. These include File::Copy, File::Find, Time::Piece.
You can see the listing of all standard modules on the Perldoc home page. Be sure to set the drop-down version field (located on the left side) to the version that you're using. It goes all the way back to Perl 5.8.8, which is the version found on Solaris.
It is entirely possible that the modules you need are already included in the standard Perl distribution, so there's no need to worry. Sometimes, you can substitute a non-standard module that's being used for one that's a standard module with little rewriting.
Some modules include compiled C code and can't simply be copied around; they must be compiled on the machine they're running on and installed. However, most modules are pure Perl modules, and can be redistributed with a program.
If a module isn't a standard module, and it's a pure Perl module, there are two ways it can be redistributed:
Perl has an @INC list that says what directories to search for modules. There's a Perl use lib pragma that allows you to add directories. You could include the modules as subdirectories of your program's directory, and then zip up the entire structure (see the sketch below). Users would unzip the entire directory tree, which would include your program and the modules you need. By the way, the default @INC usually includes the current directory.
The other way is to append the modules to your program and then remove the use statement for that module (since it's now part of the file). This is a bit tricky, but it means a single program file.
Just remember that a module might require another module, so check thoroughly.
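A minimal sketch of the first approach (names are illustrative), assuming the bundled modules live in a lib/ directory next to the script:
#!/usr/bin/perl
use strict;
use warnings;

# Add the lib/ directory next to this script to @INC, so the
# bundled modules are found no matter where the script is run from.
use FindBin qw($Bin);
use lib "$Bin/lib";

use My::Bundled::Module;   # loads lib/My/Bundled/Module.pm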
Another thing you can do is check for the module, and if it isn't there, download it via CPAN. Testing is easy:
BEGIN {
    eval {
        require My::Module;
        My::Module->import( LIST );
    };
    if ($@) {
        die qq(Module doesn't exist);
    }
}
Of course, doing a die is sort of silly because use would do that anyway. However, instead of dying it might be possible to load the module via the CPAN module's programmer interface. I've never done that, and I don't know people who have. But, it is possible.
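If you want to try it, a rough sketch using CPAN::Shell->install might look like this (untested; My::Module is a placeholder, and it assumes a configured CPAN client and permission to install):
BEGIN {
    eval {
        require My::Module;
        My::Module->import;
    };
    if ($@) {
        # Not installed: fetch and install it programmatically,
        # then load it for real.
        require CPAN;
        CPAN::Shell->install('My::Module');
        require My::Module;
        My::Module->import;
    }
}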
So, your best bet is to check to see if your program uses standard Perl modules, and if not, see if you can modify the program to use them. For example, if your program uses Archive::Zip, you might be able to modify it to use IO::Uncompress::Unzip and IO::Compress::Zip instead.
Otherwise, your choice is to try to include those modules for installation (and watch out for recursive dependencies and modules that aren't pure Perl) or to try to detect that a module isn't installed, and programmatically install it.
The answer is a bit complicated.
The nature of Perl makes it practically impossible to compile a Perl script in most use cases, so a single executable (in the Windows sense) can't really be distributed. There are ways to do something similar, but sadly I don't know them.
But you can actually embed the Perl interpreter inside any C application, including the Perl source (your scripts + modules). When you statically link all C libraries, this should work as well. You can then use the Perl API to call your scripts.
If all of the servers you target are guaranteed to run the exact same OS, using the exact same libraries, and are preferably a *nix of some sort, it would be possible to pack all required files into an archive and write an install script. It is possible to write self-extracting shell scripts that contain the archive they are about to unpack. The same goes for Perl, using the special __DATA__ token and the DATA filehandle:
#!/usr/bin/perl
print for <DATA>;
__DATA__
1
2
3
prints
1
2
3
Works great for piping data to tar as well.
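That variant would look roughly like this (untested sketch; it assumes a gzipped tar archive has been appended verbatim after the __DATA__ line):
#!/usr/bin/perl
use strict;
use warnings;

# Stream everything after __DATA__ into tar for extraction.
open my $tar, '|-', 'tar', '-xzf', '-' or die "Can't run tar: $!";
binmode DATA;
binmode $tar;
print {$tar} $_ while <DATA>;
close $tar or die "tar reported failure: $!";
__DATA__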
You should include all dependent modules and all compiled libraries in the archive, and figure out a metadata system to install all files to the correct place.
As a general rule, software should be compiled on the target system itself rather than just copying the binary files over. It is too easy to overlook architecture differences, configuration files or special registration entries hidden from view.
If you have to target different systems, it might be better to write a script that delegates the bulk of the installation to cpan or whatever perl package manager you prefer. This will be more flexible than hard-coding filepaths.
#!/bin/bash
cpan Foo::Bar
cpan Acme
cpan ...
# etc.
I would stick with that.
The most elegant solution would be to create your own package or distribution like the ones you download from CPAN. As you would include a metadata file referencing all your dependencies, cpan would figure out everything by itself and do any necessary compilation. I don't think this is exactly a beginner's topic, but it would give you maximum flexibility and maintainability (easy upgrades!). This should make it fairly easy to include some installation tests.
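As a taste of that metadata, a minimal Makefile.PL declaring dependencies could look like this sketch (all names and versions are illustrative), so the CPAN client can resolve everything for you:
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME         => 'My::Agent',           # hypothetical distribution
    VERSION_FROM => 'lib/My/Agent.pm',
    EXE_FILES    => ['bin/agent.pl'],
    PREREQ_PM    => {                      # installed first by cpan
        'Foo::Bar' => '0.01',
        'Acme'     => 0,
    },
);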
This is just for starters, I am sure the internet or somebody else with more knowledge will elaborate.

Can I move a Perl installation from one computer to another computer?

I am trying to set up an application dependent on a few Perl modules, but the server I am installing to does not have an Internet connection. I read about offline module installs via ppd files; however, I would have to resolve all the dependencies one by one, which is all the more tedious considering I don't have a direct internet connection.
I am hoping to find a solution where I install ActivePerl on my PC, install all the libraries that I want, and then copy the directories to my server. If it is just a matter of fixing some environment variables, that would be fine; I just want to know the definitive list of variables to modify. Also, is it mandatory to install the Perl libraries on the computer on which they are intended to run? (One is a 32-bit platform and the other is 64-bit, but the server is already running various 32-bit applications, so I hope it is not a major problem.) For best compatibility, I plan to install ActivePerl on both systems and keep the library directories identical.
The answer was in the Perl FAQ; my bad, I didn't go through it properly.
I copied the perl binary from one machine to another, but scripts don't work.

That's probably because you forgot libraries, or library paths differ. You really should build the whole distribution on the machine it will eventually live on, and then type "make install". Most other approaches are doomed to failure.

One simple way to check that things are in the right place is to print out the hard-coded @INC that perl looks through for libraries:

% perl -le 'print for @INC'

If this command lists any paths that don't exist on your system, then you may need to move the appropriate libraries to these locations, or create symbolic links, aliases, or shortcuts appropriately. @INC is also printed as part of the output of

% perl -V

You might also want to check out "How do I keep my own module/library directory?" in perlfaq8.
From this link
Occasionally, you will not be able to use any of the methods to install modules. This may be the case if you are a particularly under-privileged user - perhaps you are renting web space on a server, where you are not given rights to do anything.

It is possible, for some modules, to install the module without compiling anything, and so you can just drop the file in place and have it work.

Without going into a lot of the detail, some Perl modules contain a portion written in some other language (such as C or C++) and some are written in just Perl. It is the latter type that this method will work for. How will you know? Well, if there are no files called something.c and something.h in the package, chances are that it is a module that contains only Perl code.

In these cases, you can just unpack the file, and then copy just the *.pm files to a directory from which you will run the modules. Two examples of this should suffice to illustrate how this is done.

IniConf.pm is a wonderful little module that allows you to read configuration information out of a .ini-style config file. IniConf.pm is written only in Perl, and has no C portion. When you unpack the .tar.gz file that you got from CPAN, you will find several files in there, and one of them is called IniConf.pm. This is the only file that you are actually interested in. Copy that file to the directory where you have the Perl programs that will be using this module. You can then use the module as you would if it was installed "correctly," with just the line:

use IniConf;

Time::CTime is another very handy module that lets you print times in any format that strikes your fancy. It is written just in Perl, without a C component. You will install it the same way as you did with IniConf, except that the file, called CTime.pm, must be placed in a subdirectory called Time. The colons, as well as indicating an organization of modules, also indicate a directory structure on your file system.
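Putting the two examples together, the layout next to your script and the way to make sure it's searched would look something like this sketch (myprogram.pl is a placeholder):
# Layout:
#   myprogram.pl
#   IniConf.pm
#   Time/CTime.pm

use FindBin qw($Bin);
use lib $Bin;        # search the script's own directory

use IniConf;         # found as IniConf.pm
use Time::CTime;     # found as Time/CTime.pm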

Do I have to run make/make install to test each change to a Perl distribution file?

Do I have to run make and make install each time I change a .pm file for Perl? I'm doing a ton of testing and this is becoming cumbersome.
You don't have to install the module to test it.
If I'm testing inside my distribution directory, I just use the test target:
% make test
Or, if I'm using Module::Build:
% ./Build test
Since make is a dependency management tool, it also takes care of any other steps it needs to perform so it can run the test target. You don't need to run each target separately. Module::Build does the same thing.
If I want to test a single file, I combine the make command with a call to perl that also uses the blib module to set the right @INC:
% make; perl -Mblib t/single_test.t
Some people like using prove for the same thing. No matter which method I use, I'm probably using the arrow keys to move back to a previous command line to re-run it. I do very little typing in any of this.
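For instance, prove's -b switch loads blib for you, so the equivalent of the one-liner above is:
% make; prove -b t/single_test.t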
It depends on module setup, but under the standard MakeMaker I use, "make test" runs a "make" if any files have been modified, so when doing intra-module development "make test" is the only command you need until you've finished.
Evan Carroll got it basically right. To expand on his answer: use the testing tools that come with Perl to tighten the workflow.
Let's say you are in your project directory and you hack on the files in its lib/ subdirectory. Execute prove -l to run all tests. That's easier than messing with absolute paths in the PERL5LIB environment variable.
Presumably you're editing a lib module in a non-lib location, rather than clobbering a global library for each modification - do the sensible thing and change the library path perl uses with PERL5LIB, which is appended internally to @INC (the module search path):
PERL5LIB=/home/user/code/perl/project/lib perl myapp.pl
If your program isn't pure Perl and requires a make step, there is no way to do this short of rebuilding, but pure-Perl (PP) code doesn't really require make under normal circumstances. If you do it this way, running perl under a normal environment will yield the predictable and tested results, while running it with your PERL5LIB will let you test the program.

How do I start a new Perl module distribution?

I'm trying to set up a large-ish project, written in Perl. The IBM MakeMaker tutorial has been very helpful so far, but I don't understand how to link all the modules into the main program. In my project root, I have MANIFEST, Makefile.PL, README, a bin directory, and a lib directory. In my bin directory, I have my main script (Main.pl). In the lib directory, I have each of my modules, divided up into their own respective directories (i.e., Utils::Util1 and Utils::Util2 in the Utils directory, etc.). In each module directory, there is also a t directory containing tests.
My MANIFEST file has the following:
bin/Main.pl
lib/Utils/Util1.pm
lib/Utils/Util2.pm
lib/Utils/t/Utils1.t
lib/Utils/t/Utils2.t
Makefile.PL
MANIFEST
README
Makefile.PL is the following:
use ExtUtils::MakeMaker;
WriteMakefile(
    'NAME'         => 'Foo',
    'VERSION_FROM' => 'bin/Main.pl',
    'PREREQ_PM'    => {
        'XML::Simple' => 2.18,  # the libraries that we need and their
                                # minimum version numbers
    },
    'EXE_FILES'    => [ 'bin/Main.pl' ],
);
After I make and run, the program crashes, complaining that it cannot find Utils::Util1, and when I run 'make test', it says no tests defined. Can anyone make any suggestions? I have never done a large-scale project like this in Perl, and I will need to add many more modules.
If you are just starting to create Perl modules (which is also Perl's equivalent of a project), don't use Makemaker. Module::Build is the way to go, and it's now part of the standard library. Makemaker is for us old salts who haven't converted to Module::Build yet. :) I'll strike that now that Module::Build is unmaintained and out of favor; I still use MakeMaker.
You should never start off a Perl project by trying to create the structure yourself. It's too much work and you'll always forget something.
There's h2xs, a program that comes with perl and was supposed to be a tool to convert .h files into Perl's glue language XS. It works fine, but its advantage is that it comes with perl:
% h2xs -AXn Module::Name
Something like Module::Starter is a bit more sophisticated, although you have to get it from CPAN. It's the tool we use in Intermediate Perl because it's simple. It fills in some templates with your information:
% module-starter --author=... --email=... --module=...
If you are going to do this quite a bit, you might then convert that to Distribution::Cooker so you can customize your files and contents. It's a dinky utility I wrote for myself so I could use my own templates.
% dist_cooker Module::Name
If you're really hard core, you might want Dist::Zilla, but that's more for people who already know what they are doing.
Might I also suggest module-starter? It'll automatically create a skeleton project which "Just Works". I learned what little I know about Perl modules organization by reading the generated skeleton files. It's all well-documented, and quite easy to use as a base for growing a larger project in. You can check out the getting-started docs to see what it gives you.
Running module-starter will give you a Perl distribution, consisting of a number of modules (use the command line option --module, such as:
module-starter --distro=Project --module=Project::Module::A,Project::Module::B [...]
to create multiple modules in a single distribution). It's then up to you whether you'd prefer to organize your project as a single distribution consisting of a number of modules working together, or as a number of distributions which can be released separately but which depend on each other (as configured in your Build or Makefile.PL file) to provide a complete system.
Try this structure:
bin/Main.pl
lib/Utils/Util1.pm
lib/Utils/Util2.pm
Makefile.PL
MANIFEST
README
t/Utils1.t
t/Utils2.t
As ysth said, make does not install your modules, it just builds them in a blib directory. (In your case it just copies them there, but if you had XS code, it would be compiled with a C compiler.) Use make install to install your modules for regular scripts to use.
If you want to run your script between make and make install, you can do:
perl -Mblib bin/Main.pl
The -Mblib instructs perl to temporarily add the appropriate directories to the search path, so you can try out an uninstalled module. (make test does that automatically.)
By default, tests are looked for in a top-level t directory (or a test.pl file, but that has some limitations, so should be avoided).
You say "After I make and run"...make puts things into a blib directory structure ready to be installed, but doesn't do anything special to make running a script access them. (make test is special; it does add appropriate paths from blib to perl's #INC to be able to run the tests.) You will need to do a "make install" to install the modules where your script will find them (or use a tool like PAR to package them together with your script).

How do I find the module dependencies of my Perl script?

I want another developer to run a Perl script I have written. The script uses many CPAN modules that have to be installed before the script can be run. Is it possible to make the script (or the perl binary) to dump a list of all the missing modules? Perl prints out the missing modules’ names when I attempt to run the script, but this is verbose and does not list all the missing modules at once. I’d like to do something like:
$ cpan -i `said-script --list-deps`
Or even:
$ list-deps said-script > required-modules # on my machine
$ cpan -i `cat required-modules` # on his machine
Is there a simple way to do it? This is not a show stopper, but I would like to make the other developer’s life easier. (The required modules are sprinkled across several files, so that it’s not easy for me to make the list by hand without missing anything. I know about PAR, but it seems a bit too complicated for what I want.)
Update: Thanks, Manni, that will do. I did not know about %INC, I only knew about @INC. I settled on something like this:
print join("\n", map { s|/|::|g; s|\.pm$||; $_ } keys %INC);
Which prints out:
Moose::Meta::TypeConstraint::Registry
Moose::Meta::Role::Application::ToClass
Class::C3
List::Util
Imager::Color
…
Looks like this will work.
Check out Module::ScanDeps and the "scandeps.pl" utility that comes with it. It can do a static (and recursive) analysis of your code for dependencies as well as the %INC dump either after compiling or running the program.
Please note that the static source scanning always errs on the side of including too many dependencies. (It is the dependency scanner used by PAR and aims at being easiest on the end-user.)
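A typical invocation looks like this (the script name is a placeholder; see the scandeps.pl documentation for all of its switches, such as -c for a compile-time scan):
% scandeps.pl -c said-script.pl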
Finally, you could choose to distribute your script as a CPAN distribution. That sounds much more complicated than it really is. You can use something like Module::Starter to set up a basic skeleton of a tentative App::YourScript distribution. Put your script in the bin/ subdirectory and edit the Makefile.PL to reference all of your direct dependencies. Then, for distribution you do:
perl Makefile.PL
make
make dist
The last step generates a nice App-YourScript-VERSION.tar.gz
Now, when the client wants to install all dependencies, he does the following:
Set up the CPAN client correctly. Simply run it and answer the questions. But you're requiring that already anyway.
"tar -xz App-YourScript-VERSION.tar.gz && cd App-YourScript-VERSION"
Run "cpan ."
The CPAN client will now install all direct dependencies and the dependencies of those distributions automatically. Depending on how you set it up, it will either follow the prerequisites recursively automatically or prompt with a y/n each time.
As an example of this, you might check out a few of the App::* distributions on CPAN. I would think App::Ack is a good example. Maybe one of the App::* distributions from my CPAN directory (SMUELLER).
You could dump %INC at the end of your script. It will contain all used and required modules. But of course, this will only be helpful if you don't require modules conditionally (require Foo if $bar).
For quick-and-dirty, infrequent use, dumping %INC is the best way to go. If you have to do this with continuous integration testing or something more robust, there are some other tools to help.
Steffen already mentioned Module::ScanDeps.
The code in Test::Prereq does this, but it has an additional layer that ensures that your Makefile.PL or Build.PL lists them as a dependency. If you make your scripts look like a normal Perl distribution, that makes it fairly easy to check for new dependencies; just run the test suite again.
Aside from that, you might use a tool such as Module::Extract::Use, which parses the static code looking for use and require statements (although it won't find them in string evals). That gets you just the modules you told your script to load.
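A small sketch of that, using Module::Extract::Use's documented interface (the file name is a placeholder):
use Module::Extract::Use;

my $extractor = Module::Extract::Use->new;

# Statically parse the file and list the modules it loads with
# use or require (string evals are invisible to this).
print "$_\n" for $extractor->get_modules('said-script.pl');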
Also, once you know which modules you loaded, you can combine that with David Cantrell's CPANdeps tool that has already created the dependency tree for most CPAN modules.
Note that you also have to think about optional features. Your code may not have them, but sometimes you don't load a module until you need it:
sub foo {
    require Bar;  # don't load until we need to use it
    ....
}
If you don't exercise that feature in your trial run or test, you won't see that you need Bar for that feature. A similar problem comes up when a module loads a different set of dependency modules in a different environment (say, mod_perl or Windows, and so on).
There's not a good, automated way of testing optional features like that so you can get their dependencies. However, I think that should be on my To Do list since it sounds like an interesting problem.
Another tool in this area, which is used by Dist::Zilla and its AutoPrereqs plugin, is Perl::PrereqScanner. It installs a scan-perl-prereqs program that will use PPI and a few plugins to search for most kinds of prereq declaration, using the minimum versions you define. In general, I suggest this over scanning %INC, which can bring in bogus requirements and ignores versions.
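For example, pointing it at the script in question (the file name is a placeholder):
% scan-perl-prereqs said-script.pl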
Today I develop my Perl apps as CPAN-like distributions using Dist::Zilla, which can take care of the dependencies through the AutoPrereqs plugin. Another interesting piece of code in this area is Carton.