Let's say I've created a directory using module-starter, and written several additional modules and tests since.
make test would then run all tests in t/ against all modules in lib/; however, make dist will only pack the files listed in MANIFEST into the tar.gz.
So I got burnt recently by running make test && make dist and still getting a broken package.
My question is: am I missing something, or can this be reported as a minor bug in MakeMaker (which Makefile.PL seems to rely upon)?
You can use make disttest which will create a distribution directory from the MANIFEST (equivalent to make distdir) and run make test in that. This guarantees you're running against the same files as will be shipped.
I also rebuild my MANIFEST as part of making a release, which requires keeping your MANIFEST.SKIP up to date.
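If you take that route, a minimal MANIFEST.SKIP might look something like this (the patterns here are only an illustration; tailor them to your repository):

^\.git/
\bblib\b
^Makefile$
^MYMETA\.
^pm_to_blib$
~$
\.bak$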
All in all, my basic release script is:
perl Makefile.PL
make manifest
make disttest
make dist
Run make distcheck before you release your package. This will warn you about anything potentially missing from your MANIFEST.
Some modules generate files during the build process (including under lib/), so a file missing from the MANIFEST shouldn't necessarily cause make dist to fail.
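For example, right before cutting a release:

perl Makefile.PL
make distcheck   # reports files present in the working tree but missing from MANIFEST (and vice versa)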
I have a simple CMake find module I've written, for a library of mine used by other projects. It's pretty simplistic, with its full text available here. Mainly there's one find_path() and one find_library(), and then some variables are set.
Now, I want CMake, when trying to find my package, to fall back on:
git-cloning or downloading the package/library from its GitHub repository,
Unpacking the archive, if it was a download
Building the package, either by using the running CMake itself somehow (the package has its own CMakeLists.txt), or by running an arbitrary shell command in the directory into which the package was downloaded/cloned
The specifics of what happens post-download are less important to me than actually having a download fall-back.
How can I / how should I make this happen?
Notes:
Of course, if the download/git clone fails, then finding the package has failed.
No need to worry about specific versions at the repo, although you can if you want to.
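Not part of the question, but one possible sketch of such a fallback, assuming a reasonably recent CMake and using placeholder package names and URLs, is to try find_package() first and fall back on the FetchContent module:

find_package(MyLib QUIET)

if(NOT MyLib_FOUND)
  include(FetchContent)
  FetchContent_Declare(
    MyLib
    GIT_REPOSITORY https://github.com/example/mylib.git
    GIT_TAG        v1.2.3   # a specific tag or branch
  )
  # Clones the repository and adds its CMakeLists.txt to the current build
  FetchContent_MakeAvailable(MyLib)
endif()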
I've been reading about building RPMs, and the process is quite complex. Is there any program/software that works like this:
Download tar.gz file. Extract to directory
cd into directory
Run the program
RPM file is output into the directory
Does any such program exist? It seems as if it should. After all, when I run make, make install etc, I don't need to specify spec files, provide locations for where the software has to be installed. So why should I have to do all that for creating RPMs?
I've tried using checkinstall, but I keep getting errors like "Directory not found: /root/rpmbuild/BUILDROOT/hello-2.10-1.x86_64/usr"
So is there an easier way?
No. There is no easier way.
Sometimes upstream provides a 'make rpm' target. Sometimes checkinstall works. But often you have to create the spec file manually.
BTW that error from checkinstall reveals two things:
you are running that command as root. That is very, very unwise.
you need to create a few build directories. Run the command rpmdev-setuptree and it will create them for you.
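As a rough sketch of the non-root workflow (the package and spec file names are placeholders; you still have to write hello.spec yourself):

rpmdev-setuptree                          # creates ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
cp hello-2.10.tar.gz ~/rpmbuild/SOURCES/
rpmbuild -ba ~/rpmbuild/SPECS/hello.spec  # build both the source and binary RPMs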
I'm building an RPM that needs to run a number of scripts after it's been installed to complete the configuration. I have to run the scripts in the %post section because the configuration depends on the type of host. All this is fair enough, but every time I run into a bug in the %post section, I have to rebuild the entire package, which takes about 20 minutes. Is there a way to skip recompiling everything and just build a new package with only the changes to %post?
If your spec file doesn't create a random build directory and doesn't delete the build tree afterwards, make can skip the time-consuming compile step, similar to not passing the --clean option to rpmbuild.
You can then also use the --short-circuit flag to rpmbuild to skip the earlier build stages.
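For example (the spec path is a placeholder):

rpmbuild -bc --short-circuit ~/rpmbuild/SPECS/mypkg.spec   # jump straight to the %build stage
rpmbuild -bi --short-circuit ~/rpmbuild/SPECS/mypkg.spec   # jump straight to the %install stage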
Is the script doing something different when run manually vs. run from the RPM install? You can make a test RPM that has no %post, then manually run the script under test (as root). Do this on every host type that needs to be tested until you think you've got it. Then add it as the %post and give it a try.
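Once the package is built, you can double-check what actually ended up in the scriptlets (the filename is a placeholder):

rpm -qp --scripts mypkg-1.0-1.x86_64.rpm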
I maintain multiple (unix-ish) applications written in Perl whose current installation process consists of a manually written Makefile that installs configuration files into /etc/.
I'd really like to switch their development to use Dist::Zilla, but so far I haven't found any Dist::Zilla plugin or feature which allows me to put given files into /etc/ when the make install (or ./Build install in case of using Module::Build instead of ExtUtils::MakeMaker) is run by the local administrator who's installing my application.
With pure ExtUtils::MakeMaker, I could define additional make targets in MY::postamble and then let the install target depend on one of them via the depend { install => … } attribute. Doing something similar, but via dzil build, would probably suffice, but I'd appreciate a more obvious way.
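For reference, a minimal sketch of that ExtUtils::MakeMaker approach (the distribution name, extra target name, and config file paths are hypothetical):

use ExtUtils::MakeMaker;

WriteMakefile(
    NAME   => 'My::App',
    # ... usual arguments ...
    # make the standard install target also run our extra target
    depend => { install => 'conf_install' },
);

sub MY::postamble {
    # note: recipe lines below must start with a literal tab
    return <<'MAKE';
conf_install :
	install -d $(DESTDIR)/etc/my-app
	install -m 644 etc/my-app.conf $(DESTDIR)/etc/my-app/my-app.conf
MAKE
}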
One orthogonal approach would be to make the application not require the files under /etc/ to exist, but that seems like too much change to the actual code when all I want to change for now is the build system.
For the curious: the two applications I currently have in mind for switching to Dist::Zilla are xen-tools and unburden-home-dir.
The best thing to do is to avoid installing files into /etc from any Perl distribution. You cannot ensure that the cpan client (or the installing user) has permissions to install there, and there can be multiple Perls installed on a system, so each one of them would clobber the /etc files of another install. You can't really prevent the file from being overwritten by a subsequent install, so you shouldn't put config data there that you don't want to lose.
You could put the config file in /etc/ if the application knows to look for it there, but you should allow for that path to be customized (say, on a test system, look for the file in the local directory or in a user's home directory).
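For instance (the environment variable and file names here are made up), the lookup could work something like:

# check an override first, then the user's home directory, then the system-wide default
my $config_file = $ENV{MYAPP_CONFIG}
    // ( -r "$ENV{HOME}/.myapp.conf" ? "$ENV{HOME}/.myapp.conf" : '/etc/myapp.conf' );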
For installing read-only module-specific data, the best practice in Perl is to install into a Perl-install-specific location, and the module to do that is File::ShareDir::Install. You can use it from Dist::Zilla using the [ShareDir] plugin, Dist::Zilla::Plugin::ShareDir. It is even included in the [@Basic] plugin bundle, so if you use [@Basic] in your dist.ini, you don't need to do anything at all, other than drop your data files into the share/ directory in your distribution repository.
To access the contents of the sharedir from code, use File::ShareDir.
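For example (the distribution and data file names are placeholders):

use File::ShareDir 'dist_file';

# locate a data file that was installed from the distribution's share/ directory
my $defaults = dist_file('My-App', 'defaults.conf');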
For porting a complex module installer to Dist::Zilla, I recommend my plugins MakeMaker::Custom or ModuleBuild::Custom, depending on which installer you prefer. These allow you to keep your existing Makefile.PL or Build.PL and just have Dist::Zilla plug in necessary bits like the dependencies.
I am working on a Perl module that I would like to submit to CPAN.
But I have a small query regarding the directory structure of the module.
As per the PerlMonks article, the module directory structure should be as below:
Foo-Bar-0.01/Bar.pm
Foo-Bar-0.01/Makefile.PL
Foo-Bar-0.01/MANIFEST
Foo-Bar-0.01/Changes
Foo-Bar-0.01/test.pl
Foo-Bar-0.01/README
But when I use the h2xs command, the structure is generated as below:
h2xs -AX Foo::Bar
Writing Foo-Bar/lib/Foo/Bar.pm
Writing Foo-Bar/Makefile.PL
Writing Foo-Bar/README
Writing Foo-Bar/t/Foo-Bar.t
Writing Foo-Bar/Changes
Writing Foo-Bar/MANIFEST
The article in question is advocating a considerably older module structure. It certainly could be used, but it loses a lot of the advances that have been made in testing, building, and distribution practices.
To break down the differences:
modules have moved from the top level to the lib/ directory. This unifies the location where your module "lives" (i.e., the place where you work on the code and create the baseline modules to be tested and eventually distributed). It also makes it easier to set up any hierarchy you need (e.g. subclasses, or helper modules); the newer setup will just pick these up. The older one may too, but I'm not familiar enough with it to say.
Makefile.PL in the newer setup will, when "make" is run, create a directory called "blib", the *b*uild *lib*rary - this is where the code is staged for actual testing. It will pretty much be a copy of lib/ unless you have XS code, in which case this is where the compiled XS code ends up. This makes the process of building and testing the code simpler: if you update a file in lib/, the Makefile will rebuild the file into blib/ before trying to test it.
the t/ directory replaces test.pl; "make test" will execute all the *.t files in t/, as opposed to you having to put all your tests in test.pl. This makes it far easier to write tests, as you can be sure you have a consistent state at the beginning of each test (see the sample test file after this breakdown).
MANIFEST and Changes are the same in both: MANIFEST (built by "make manifest") is used to determine which files in the build library should be redistributed when the module is packaged for upload, and used to verify that a package is complete when it's downloaded and unpacked for building. Changes is simply a changelog, which you edit by hand to record the changes made in each distributed version.
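A minimal t/ file for the module in the question might look like this (the test itself is just a compile check):

use strict;
use warnings;
use Test::More tests => 1;

# "make test" builds lib/ into blib/ and loads the module from there
use_ok('Foo::Bar');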
As recommended in the comments on your question, using Module::Starter or Dist::Zilla (be warned that Dist::Zilla is Moose-based and will install a lot of prereqs) is a better, more modern approach to building modules. Of the two layouts you show, the h2xs version is closer to modern packaging standards, but you're really better off using one of the recommended package-starter options (and probably Module::Build, which uses a Build Perl script instead of a Makefile to build the code).
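For example (the author name and email address are placeholders; --mb selects Module::Build as the installer):

module-starter --module=Foo::Bar --author="Your Name" --email=you@example.com --mb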