I am working on a module that I would like to have two backends: Module(::PerlArray) and Module::PDL (which will depend on Module). Both need access to a functions.c/.h file for building. This file holds the rather complicated logic needed by the module. Rather than distributing it separately with each module, is there some way to keep it with the base Module distribution on the system and then add it to the appropriate build flags in EU::MM or M::B (given the complexity here, probably the latter)?
To put it more visually:
--Module--
Module.pm
Module/PerlArray.pm
Module/PerlArray.xs   (#include "functions.h"
                       #include "perlarray_backend.h")
Module/src/functions.c
Module/src/perlarray_backend.c
Module/inc/functions.h
Module/inc/perlarray_backend.h
--Module::PDL--
Module/PDL.pm
Module/PDL.xs         (#include "functions.h"  /* from Module */
                       #include "pdl_backend.h")
Module/src/pdl_backend.c
Module/inc/pdl_backend.h
The compilation makes functions.o and links against it. I'm sure I can figure out how to set the flags appropriately, but how can I make Module keep the functions.c file when it is installed, and how can I then find that file when installing Module::PDL? Is there some standard location where I can place functions.c/.h?
Have you looked at DBI? It does what you suggest: it installs some .h file(s) that the DBD drivers can #include in their XS code, as well as a library that the DBD drivers can call.
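For a concrete picture of the pattern, here is a minimal sketch of a DBD-style Makefile.PL using the DBI::DBD helper that ships with DBI; the driver name DBD::Foo and its file layout are hypothetical:

use ExtUtils::MakeMaker;
use DBI::DBD;    # build helper that ships with DBI

WriteMakefile(
    NAME         => 'DBD::Foo',               # hypothetical driver
    VERSION_FROM => 'lib/DBD/Foo.pm',
    # Point the C compiler at the headers DBI installed on this system:
    INC          => '-I' . DBI::DBD::dbd_dbi_arch_dir(),
);

# dbd_postamble adds make rules that pull in DBI's installed
# Driver.xst boilerplate when the driver's XS code is built.
sub MY::postamble { return DBI::DBD::dbd_postamble(@_); }

Your Module could ship an analogous helper that returns the directory where it installed functions.c/.h, and Module::PDL's Makefile.PL would call it to construct its INC (and object) flags.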
Modules should be independently installable. That is, provided I have the prerequisite Perl modules installed (but not necessarily still lying around in source form), it should be possible to install the modules in one distributed tar file without reference to the source of any other module.
You have options. One is to have a single source directory create several distributed tarballs, each of which carries its own copy of the shared functions.[ch] in the distributed source.
The other main option is to bundle both modules into a single distributed tarball.
Related
I maintain multiple (unix-ish) applications written in Perl whose current installation process consists of a manually written Makefile and installs configuration files into /etc/.
I'd really like to switch their development to use Dist::Zilla, but so far I haven't found any Dist::Zilla plugin or feature which allows me to put given files into /etc/ when make install (or ./Build install, in case of using Module::Build instead of ExtUtils::MakeMaker) is run by the local administrator who's installing my application.
With pure ExtUtils::MakeMaker, I could define additional make targets in MY::postamble and then let the install target depend on one of them via the depend { install => … } attribute. Doing something similar via dzil build would probably suffice, but I'd appreciate a more obvious way.
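For reference, here is a minimal sketch of that pure-ExtUtils::MakeMaker approach; the target name install_etc and the file paths are hypothetical, and recipe lines in the postamble must begin with a literal tab:

use ExtUtils::MakeMaker;

WriteMakefile(
    NAME    => 'My::App',       # hypothetical
    VERSION => '0.01',
    depend  => { install => 'install_etc' },   # run install_etc as part of install
);

sub MY::postamble {
    return <<'MAKE';
install_etc :
	$(NOECHO) $(ECHO) Installing config files into /etc
	$(CP) etc/myapp.conf /etc/myapp.conf
MAKE
}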
One orthogonal approach would be to make the application not require the files under /etc/ to exist, but for a mere switch to Dist::Zilla that seems like too much change to the actual code, given that I only want to change the build system for now.
For the curious: the two applications I currently have in mind for switching to Dist::Zilla are xen-tools and unburden-home-dir.
The best thing to do is to avoid installing files into /etc from any Perl distribution. You cannot ensure that the cpan client (or the installing user) has permissions to install there, and there can be multiple Perls installed on a system, so each one of them would clobber the /etc files of another install. You can't really prevent the file from being overwritten by a subsequent install, so you shouldn't put config data there that you don't want to lose.
You could put the config file in /etc/, if the application knows to look for it there, but you should allow for that path to be customized (say on a test system, look for the file in the local directory, or in a user's home directory).
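A minimal sketch of such a lookup, with hypothetical names for the environment variable and the config files:

use strict;
use warnings;

# Candidate locations, most specific first.
my @candidates = (
    $ENV{MYAPP_CONFIG},                                      # explicit override (e.g. on a test system)
    ( defined $ENV{HOME} ? "$ENV{HOME}/.myapp.conf" : () ),  # per-user config
    '/etc/myapp.conf',                                       # system-wide default
);

# Use the first candidate that actually exists.
my ($config) = grep { defined $_ && -e $_ } @candidates;
die "no config file found\n" unless defined $config;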
For installing read-only module-specific data, the best practice in Perl is to install into a Perl-install-specific location, and the module to do that is File::ShareDir::Install. You can use it from Dist::Zilla using the [ShareDir] plugin, Dist::Zilla::Plugin::ShareDir. It is even included in the [#Basic] plugin bundle, so if you use [#Basic] in your dist.ini, you don't need to do anything at all, other than drop your data files into the share/ directory in your distribution repository.
To access the contents of the sharedir from code, use File::ShareDir.
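For example, a minimal sketch (the distribution name My-App and the file name app.conf are hypothetical):

use File::ShareDir qw(dist_dir dist_file);

my $dir  = dist_dir('My-App');               # root of the installed sharedir
my $conf = dist_file('My-App', 'app.conf');  # a single file inside it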
For porting a complex module installer to Dist::Zilla, I recommend my plugins MakeMaker::Custom or ModuleBuild::Custom, depending on which installer you prefer. These allow you to keep your existing Makefile.PL or Build.PL and just have Dist::Zilla plug in necessary bits like the dependencies.
I would like to include a few additional .pl files in my CPAN module. These files are not essential to using the module, but they provide useful functionality/glue when the module is used in some common frameworks and applications.
Currently, I just include the .pl files in an "extras" directory of the distribution. This has the drawback that the files are not installed on make install. Is there a way to include them in the installation, and where should they go? (They aren't executables and don't belong in "bin".) Would "share" make sense? Or are these kinds of files usually just not installed, leaving the user to get them out of the .tgz archive and use them as needed?
I use Dist::Zilla to manage my distribution.
I would suggest the following:
If they're complete programs, or nearly so, polish them up and make them standalone items that can go into bin with POD of their own.
If they're utility glue, make a ::Utils module for them to live in and document their usage.
If these are useful code snippets but not something you can install somewhere or are sample usages or handy idioms, create a ::Cookbook all-POD module and include them there with the appropriate illuminating explanation for each one.
I don't know exactly how Dist::Zilla works, but the resulting archive has to be compatible with what ExtUtils::MakeMaker creates.
When you create a module with module-starter, it creates a module template using ExtUtils::MakeMaker. It creates several files and directories, like the lib directory where your module lives and the t directory where your tests live.
One thing it doesn't create is a bin directory. However, if you create a bin directory and put files under it (such as Perl scripts), these files will be installed under the bin directory of your Perl installation and linked from /usr/local/bin or /usr/bin. Would this be a good place for your scripts?
I liked @Joe's answer, except that in my case the files were WebWork macros: individual .pl files that make my module callable from a WebWork end-user's code. So they don't fit any of the categories discussed here, and as .pl files they can't be made into a module.
This is what I ended up doing:
put all .pl macro files into 'extras/WebWork' in the distribution.
add to "dist.ini" file a [ShareDir] stanza with dir = extras property.
now the WebWork admin can install my distribution from CPAN and then use perl -MFile::ShareDir -e 'print File::ShareDir::dist_dir("Statistics-R-IO")' to find the macros and make them available in WebWork.
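The relevant dist.ini stanza is simply:

[ShareDir]
dir = extras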
I am working on a Perl module that I would like to submit to CPAN, but I have a small query regarding the directory structure of the module.
As per the PerlMonks article, the module's code directory structure should be as below:
Foo-Bar-0.01/Bar.pm
Foo-Bar-0.01/Makefile.PL
Foo-Bar-0.01/MANIFEST
Foo-Bar-0.01/Changes
Foo-Bar-0.01/test.pl
Foo-Bar-0.01/README
But when I use the h2xs command, the structure is generated as below:
h2xs -AX Foo::Bar
Writing Foo-Bar/lib/Foo/Bar.pm
Writing Foo-Bar/Makefile.PL
Writing Foo-Bar/README
Writing Foo-Bar/t/Foo-Bar.t
Writing Foo-Bar/Changes
Writing Foo-Bar/MANIFEST
The article in question advocates a considerably older module structure. It could certainly still be used, but it misses a lot of the advances that have since been made in testing, building, and distribution practice.
To break down the differences:
modules have moved from the top level to the lib/ directory. This unifies the location where your module "lives" (i.e., the place where you work on the code and create the baseline modules to be tested and eventually distributed). It also makes it easier to set up any hierarchy that you need (e.g., subclasses or helper modules); the newer setup will just pick these up. The older one may too, but I'm not familiar enough with it to say for sure.
Makefile.PL in the newer setup will, when "make" is run, create a directory called "blib", the *b*uild *lib*rary; this is where the code is built for actual testing. It will pretty much be a copy of lib/ unless you have XS code, in which case this is where the compiled XS code ends up. This makes the process of building and testing the code simpler: if you update a file in lib/, the Makefile will rebuild it into blib before trying to test it.
the t/ directory replaces test.pl; "make test" will execute all the *.t files in t/, as opposed to you having to put all your tests in test.pl. This makes it far easier to write tests, as you can be sure you have a consistent state at the beginning of each test. (A minimal test file is sketched just after this list.)
MANIFEST and Changes are the same in both: MANIFEST (built by "make manifest") lists the files that should be shipped when the module is packaged for upload, and it is used to verify that a package is complete when it's downloaded and unpacked for building. Changes is simply a changelog, which you edit by hand to record the changes made in each distributed version.
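As promised above, a minimal sketch of a modern test file, say t/basic.t (the module and function names are hypothetical):

use strict;
use warnings;
use Test::More tests => 2;

use_ok('Foo::Bar');                        # the module loads cleanly
is(Foo::Bar::add(2, 3), 5, 'add() sums');  # hypothetical function under test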
As recommended in the comments on your question, using Module::Starter or Dist::Zilla (be warned that Dist::Zilla is Moose-based and will install a lot of prereqs) is a better approach to building modules in a more modern way. Of the two, the h2xs version is closer to modern packaging standards, but you're really better off using one of the recommended package starter options (and probably Module::Build, which uses a Build Perl script instead of a Makefile to build the code).
I am trying to set up an application dependent on a few Perl modules, but the server I am installing to does not have an Internet connection. I read about offline module installs via ppd files; however, I would have to resolve all the dependencies one by one, which is all the more tedious considering I don't have a direct Internet connection.
I am hoping to find a solution where I install ActivePerl on my PC, install all the libraries that I want, and then copy the directories over to my server. If it is just a matter of fixing some environment variables, that would be fine; I just want to know the definitive list of variables to modify. Is it mandatory to install the Perl libraries on the computer on which they are intended to run? (One machine is a 32-bit platform and the other is 64-bit, but the server is already running various 32-bit applications, so I hope that is not a major problem.) For best compatibility, I plan to install ActivePerl on both systems and make the library directories identical.
The answer was in the Perl FAQ; my bad, I didn't go through it properly.
I copied the perl binary from one machine to another, but scripts don't work.
That's probably because you forgot libraries, or library paths differ.
You really should build the whole distribution on the machine it will
eventually live on, and then type "make install". Most other approaches
are doomed to failure.
One simple way to check that things are in the right place is to print
out the hard-coded @INC that perl looks through for libraries:
% perl -le 'print for @INC'
If this command lists any paths that don't exist on your system, then
you may need to move the appropriate libraries to these locations, or
create symbolic links, aliases, or shortcuts appropriately. @INC is also
printed as part of the output of
% perl -V
You might also want to check out "How do I keep my own module/library
directory?" in perlfaq8.
From this link
Occasionally, you will not be able to use any of the methods to install modules. This may be the case if you are a particularly under-privileged user - perhaps you are renting web space on a server where you are not given rights to do anything.

It is possible, for some modules, to install the module without compiling anything, and so you can just drop the file in place and have it work.

Without going into a lot of detail, some Perl modules contain a portion written in some other language (such as C or C++) and some are written in just Perl. It is the latter type that this method will work for. How will you know? Well, if there are no files called something.c and something.h in the package, chances are that it is a module that contains only Perl code.

In these cases, you can just unpack the file, and then copy just the *.pm files to a directory from which you will run the modules. Two examples should suffice to illustrate how this is done.

IniConf.pm is a wonderful little module that allows you to read configuration information out of a .ini-style config file. IniConf.pm is written only in Perl, and has no C portion. When you unpack the .tar.gz file that you got from CPAN, you will find several files in there, and one of them is called IniConf.pm. This is the only file that you are actually interested in. Copy that file to the directory where you have the Perl programs that will be using this module. You can then use the module as you would if it were installed "correctly," with just the line:

use IniConf;

Time::CTime is another very handy module that lets you print times in any format that strikes your fancy. It is written just in Perl, without a C component. You will install it just the same way as you did with IniConf, except that the file, called CTime.pm, must be placed in a subdirectory called Time. The colons, as well as indicating an organization of modules, also indicate a directory structure on your file system.
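To make that directory mapping concrete, here is a minimal sketch of a script using a hand-copied module, assuming CTime.pm was placed in a Time/ subdirectory next to the script and that Time::CTime's default exports include strftime (as its documentation describes):

use strict;
use warnings;
use FindBin qw($Bin);
use lib $Bin;        # look for modules in the script's own directory
use Time::CTime;     # Perl resolves this to Time/CTime.pm under @INC

print strftime('%Y-%m-%d %H:%M', localtime(time)), "\n";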
I'm trying to set up a large-ish project, written in Perl. The IBM MakeMaker tutorial has been very helpful so far, but I don't understand how to link all the modules into the main program. In my project root, I have MANIFEST, Makefile.PL, README, a bin directory, and a lib directory. In my bin directory, I have my main script (Main.pl). In the lib directory, I have each of my modules, divided up into their own respective directories (e.g., Utils::Util1 and Utils::Util2 in the Utils directory, etc.). In each module directory, there is also a t directory containing tests.
My MANIFEST file has the following:
bin/Main.pl
lib/Utils/Util1.pm
lib/Utils/Util2.pm
lib/Utils/t/Utils1.t
lib/Utils/t/Utils2.t
Makefile.PL
MANIFEST
README
Makefile.PL is the following:
use ExtUtils::MakeMaker;
WriteMakefile(
    'NAME'         => 'Foo',
    'VERSION_FROM' => 'bin/Main.pl',
    'PREREQ_PM'    => {
        'XML::Simple' => 2.18,    # required libraries and their minimum versions
    },
    'EXE_FILES'    => [ 'bin/Main.pl' ],
);
After I make and run, the program crashes, complaining that it cannot find Utils::Util1, and when I run 'make test', it says no tests are defined. Can anyone make any suggestions? I have never done a large-scale project like this in Perl, and I will need to add many more modules.
If you are just starting to create Perl modules (which are also Perl's equivalent of a project), don't use MakeMaker. Module::Build is the way to go, and it's now part of the standard library. MakeMaker is for us old salts who haven't converted to Module::Build yet. :) (Update: I'll strike that advice now that Module::Build is unmaintained and out of favor; I still use MakeMaker.)
You should never start off a Perl project by trying to create the structure yourself. It's too much work and you'll always forget something.
There's h2xs, a program that comes with perl and was supposed to be a tool to convert .h files into Perl's glue language XS. It works fine, but its advantage is that it comes with perl:
% h2xs -AXn Module::Name
Something like Module::Starter is a bit more sophisticated, although you have to get it from CPAN. It's the tool we use in Intermediate Perl because it's simple. It fills in some templates with your information:
% module-starter --author=... --email=... --module=...
If you are going to do this quite a bit, you might then convert that to Distribution::Cooker so you can customize your files and contents. It's a dinky utility I wrote for myself so I could use my own templates.
% dist_cooker Module::Name
If you're really hard core, you might want Dist::Zilla, but that's more for people who already know what they are doing.
Might I also suggest module-starter? It'll automatically create a skeleton project which "Just Works". I learned what little I know about Perl modules organization by reading the generated skeleton files. It's all well-documented, and quite easy to use as a base for growing a larger project in. You can check out the getting-started docs to see what it gives you.
Running module-starter will give you a Perl distribution, consisting of a number of modules (use the command line option --module, such as:
module-starter --distro=Project --module=Project::Module::A,Project::Module::B [...]
to create multiple modules in a single distribution). It's then up to you whether you'd prefer to organize your project as a single distribution consisting of a number of modules working together, or as a number of distributions which can be released separately but which depend on each other (as configured in your Build or Makefile.PL file) to provide a complete system.
Try this structure:
bin/Main.pl
lib/Utils/Util1.pm
lib/Utils/Util2.pm
Makefile.PL
MANIFEST
README
t/Utils1.t
t/Utils2.t
As ysth said, make does not install your modules, it just builds them in a blib directory. (In your case it just copies them there, but if you had XS code, it would be compiled with a C compiler.) Use make install to install your modules for regular scripts to use.
If you want to run your script between make and make install, you can do:
perl -Mblib bin/Main.pl
The -Mblib instructs perl to temporarily add the appropriate directories to the search path, so you can try out an uninstalled module. (make test does that automatically.)
By default, tests are looked for in a top-level t directory (or a test.pl file, but that has some limitations, so should be avoided).
You say "After I make and run"...make puts things into a blib directory structure ready to be installed, but doesn't do anything special to make running a script access them. (make test is special; it does add appropriate paths from blib to perl's #INC to be able to run the tests.) You will need to do a "make install" to install the modules where your script will find them (or use a tool like PAR to package them together with your script).