I have set up one of my recipes to build my project with autotools. I recently ran bitbake with verbose output turned on and noticed that autogen.sh is never called anywhere in my build or compilation; it goes straight to my configure.ac.
Why is this? I thought autogen.sh was required. Is there a way to make it use autogen.sh?
OE-Core's autotools.bbclass is set up to run autoreconf, libtoolize, and the other needed tools itself, without using autogen.sh, since autogen.sh scripts vary a lot in quality. Your software will still have its configure script regenerated; it just won't use the helper script.
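If you really do want autogen.sh to run, a minimal sketch (assuming autogen.sh sits in ${S} and leaves a ready-to-run configure; oe_runconf is the helper that autotools.bbclass provides) is to override do_configure in your recipe:

# Sketch: bypass the class's autoreconf step and bootstrap manually.
do_configure() {
    cd ${S}
    ./autogen.sh
    oe_runconf
}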
The automated code review service LGTM uses "wrapper scripts around the popular build tools like pkg-config, CMake, and qmake" to detect missing files during the build process and to install the corresponding packages automatically.
My project uses CMake, but to get some path from Qt, I need qmake in one place:
# Locate the qmake binary through the imported Qt5::qmake target.
get_target_property(qt5_qmake Qt5::qmake IMPORTED_LOCATION)
# Ask qmake where Qt installs its translation (.qm) files.
execute_process(COMMAND ${qt5_qmake} -query QT_INSTALL_TRANSLATIONS OUTPUT_VARIABLE QT_QM_PATH)
These lines work fine everywhere I've tested them, except in the LGTM environment.
In the LGTM environment, the desired information is not captured in QT_QM_PATH but is printed instead (it's visible in the build log).
I strongly suspect the wrapper is causing this, since other commands work as expected.
Question: How can I stop LGTM from wrapping qmake, or how can I trick CMake into capturing the output of the wrapped qmake?
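For reference, one variant I could try (this assumes, without any confirmation, that the wrapper forwards qmake's output on stderr or appends trailing noise) captures both streams and strips whitespace:

# Capture stderr too, in case the wrapper redirects qmake's output,
# and strip the trailing newline that -query emits.
execute_process(
  COMMAND ${qt5_qmake} -query QT_INSTALL_TRANSLATIONS
  OUTPUT_VARIABLE QT_QM_PATH
  ERROR_VARIABLE QT_QM_PATH_ERR
  OUTPUT_STRIP_TRAILING_WHITESPACE)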
I'm using a couple of macros from the autoconf archive in my configure.ac. When aclocal is run, the macros are placed in aclocal.m4. Since this file is automatically generated, I typically wouldn't put it in source control. However, the autogeneration won't work unless the user has the macros installed on their computer in the first place (on Ubuntu I had to do apt-get install autoconf-archive). Is it typical practice to include aclocal.m4 in source control?
Edit (summary): Do not include aclocal.m4 in source control. It is acceptable to include acinclude.m4.
No, it is not best practice to do so, though it is probably typical practice. Best practice and common practice often diverge widely when dealing with the autotools. In my opinion, the expectation is that a developer who is running the autotools is capable of satisfying the dependencies and making the macros available (e.g., by installing autoconf-archive), so aclocal.m4 should not be kept in the version control system.
It is, however, perfectly acceptable to copy the macros into acinclude.m4 and put that file in source control: when invoked, aclocal will look for definitions in acinclude.m4, so a developer running aclocal does not need to install the autoconf archive. (This is the point that seems to throw a lot of projects. There are really only a small handful of people who should ever be invoking the autotools on a project; everyone else should be building from a release distribution. If developers working on a project are not modifying the autotool meta-files, there is no reason for them to be running the autotools.)
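As a concrete illustration (the macro file name here is just an example; the path is where Ubuntu's autoconf-archive package installs its macros), you would copy each macro definition you use into acinclude.m4 before committing:

# Append an archive macro's definition to acinclude.m4 so that
# developers running aclocal don't need autoconf-archive installed.
cat /usr/share/aclocal/ax_cxx_compile_stdcxx.m4 >> acinclude.m4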
I have a module I'd like to release to CPAN, and I like using dzil to do the packaging and releasing. However, the module relies on an external application, and while I know where it is installed on my machine, I'd like to ask users where it is installed on theirs. Having read "Prompt user during unit test in Perl", I see that ExtUtils::MakeMaker::prompt does just what I want.
How would I incorporate that (or something similar) when using dzil?
The standard MakeMaker dzil plugin has no support for anything but a basic Makefile.PL. (Well, it can use File::ShareDir::Install, but that's its limit.) If you need more complex install-time behavior, you'll need to use something else.
I recommend my MakeMaker::Custom plugin. You write your own Makefile.PL, which can do anything that ExtUtils::MakeMaker is capable of, including prompt for information. You can still have dzil add things like your prerequisites at dzil build time, so you can still use AutoPrereqs. (Actually, I recommend ModuleBuild::Custom instead, but if you want to stick with MakeMaker, that's ok.)
Note: You should also allow the information you're prompting for to be supplied on the command line. This will help people who are trying to package your distribution using automated build tools. But that's a MakeMaker issue, not a Dist::Zilla one.
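As a rough sketch of such a Makefile.PL (the APP_PATH knob, module name, and default path are all hypothetical):

use ExtUtils::MakeMaker;

# Accept APP_PATH=/some/path on the command line so automated builders
# can skip the prompt; remove it from @ARGV so MakeMaker doesn't see it.
my $app_path;
@ARGV = grep { !(/^APP_PATH=(.*)$/ and $app_path = $1) } @ARGV;

# Otherwise fall back to an interactive prompt. prompt() honours
# PERL_MM_USE_DEFAULT, so unattended builds get the default answer.
$app_path ||= prompt('Where is the external application installed?',
                     '/usr/local/bin/external-app');

WriteMakefile(
    NAME    => 'My::Module',
    VERSION => '0.01',
);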
The user should not be installing via Dist::Zilla at all. It is an author-only tool, as its documentation explicitly says. Dist::Zilla is meant to build a distribution that is then installed via EUMM or M::B.
Edit: Given your comment, I would instead say that your build process doesn't sound like a good candidate for using Dist::Zilla, at least not consistently. I would suggest using it to build the distribution once more, then switching to the EUMM or M::B setup it generates: modify that to your purposes and keep developing from there.
If you're using ExtUtils::MakeMaker to install your distribution, then you can use the dzil plugin Dist::Zilla::Plugin::MakeMaker::Runner (that's a mouthful) to bundle a custom Makefile.PL with your dist instead of generating the default one.
That will allow you to use prompt to gather custom information from within the Makefile.PL if you need it.
Let's say I've created a directory using module-starter, and written several additional modules and tests since.
make test will then run all tests in t/ against all modules in lib/; however, make dist will only pack the files listed in MANIFEST into the tar.gz.
So I got burnt recently by running make test && make dist and still getting a broken package.
My question is: am I missing something, or can this be reported as a minor bug in MakeMaker (which Makefile.PL seems to rely upon)?
You can use make disttest, which will create a distribution directory from the MANIFEST (equivalent to make distdir) and run make test in that. This guarantees you're testing the same files that will be shipped.
I also rebuild my MANIFEST as part of making a release, which requires keeping your MANIFEST.SKIP up to date.
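For reference, a typical MANIFEST.SKIP contains patterns like these (an illustrative subset, not a complete file):

# Version control and editor droppings
\B\.git\b
~$
# Build byproducts
^Makefile$
^blib/
^pm_to_blib$
# Previous release tarballs
\.tar\.gz$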
All in all, my basic release script is:
perl Makefile.PL
make manifest
make disttest
make dist
Run make distcheck before you release your package. This will warn you about anything potentially missing from your MANIFEST.
Some modules generate files during the build process (including under lib/), so a file missing from the MANIFEST shouldn't necessarily cause make dist to fail; that's why you only get a warning.
I have an app that I pack into "binary" form using PerlApp for distribution. Since my clients want a simple install for their Win32 systems, this works very nicely.
Now a client has decided that they need to run all of the unit tests, as in a standard install. However, they still will not install a normal Perl.
So, I find myself in need of a way to package my unit tests for operation on my client's systems.
My first thought was that I could pack up prove in one file and pack each of my tests separately. Then ship a zip file with the appropriate structure.
A bit of research showed that Test::Harness::Straps invokes perl from the command line.
Is there an existing tool that helps with this process?
Perhaps I could use PAR::Packer's parl tool to handle invocation of my test scripts.
I'm interested in thoughts on how to apply either PAR or PerlApp, as well as any thoughts on how to approach overriding Test::Harness and friends.
Thanks.
Update: I don't have my heart set on PAR or PerlApp. Those are just the tools I am familiar with. If you have an idea or a solution that requires a different packager (such as Cava Packager), I would love to hear about it.
Update 2: tsee pointed out a great new feature in PAR that gets me close. Are there any TAP experts out there that can supply some ideas or pointers on where to look in the new Test::Harness distribution?
I'm probably not breaking big news if I tell you that PAR (and probably also PerlApp) isn't meant to package a whole test suite and the plethora of CPAN-module build byproducts. These tools are intended to package stand-alone applications or binary, JAR-like module libraries.
That being said, you can add arbitrary files to a PAR archive (both to .par libraries and stand-alone .exe's) using pp's -a switch. In the case of a stand-alone executable, the contents will be extracted to $ENV{PAR_TEMP}."/inc" at run-time.
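For example (the script and output names are hypothetical), bundling the t/ directory alongside the packaged script:

# Add the t/ directory to the archive under the alias "t".
pp -o myapp.exe -a "t;t" myapp.pl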
That leaves you with the problem of reusing the PAR-packaged executable to run the test harness (and letting that harness run your executable as a "perl"). I have no ready-made solution for that, but I've recently worked on making PAR-packaged executables reusable as more-or-less general-purpose perl interpreters. Two gotchas before I explain how you can use that:
Your application isn't going to magically be called "perl" and add itself to your $PATH.
The "reuse" of the application as a general purpose perl requires special options and does not currently support the normal perl options (those in perlrun). It can simply run an external perl script of your choice.
Unfortunately, the latter problem is what may kill this approach for you. Support for perl command line options is something I've been thinking about, but won't implement any time soon.
Here's the recipe for getting PAR with "reusable exe" support:
Install the newest version of PAR from CPAN.
Install the newest developer version of PAR::Packer from CPAN (0.992_02 or 03).
Add the "--reusable" option to your pp command line.
Run your executable with the following options to run an external script "foo.pl":
./myapp --par-options --reuse foo.pl FOO-PL-OPTIONS-HERE
Unfortunately, how you're going to teach Test::Harness that "./myapp --par-options --reuse" is a perl interpreter is beyond me.
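That said, one avenue that might be worth exploring (untested with a reusable PAR executable, so treat it as an assumption): the harness lets you substitute the interpreter that runs each test script, either via prove's --exec switch or the HARNESS_PERL environment variable, e.g.:

# Run every test file through the packaged executable instead of perl.
prove --exec "./myapp --par-options --reuse" t/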
Cava Packager allows you to package test scripts with your packaged executables. This is primarily to allow you to run tests against the packaged code before distribution. However the option is there to also distribute the tests and test capability to your end users.
Note: As indicated by my name, I am affiliated with Cava Packager.