How can I make a FindMyPackage.cmake module fall back to downloading?

I have a simple CMake find module I've written, for a library of mine used by other projects. It's pretty simplistic, with its full text available here. Mainly there's one find_path() and one find_library(), and then some variables are set.
Now, I want CMake, when trying to find my package, to fall back on:
git-cloning or downloading the package/library from its GitHub repository,
Unpacking the archive, if it was a download
Building the package, either by using the running CMake itself somehow (the package has its own CMakeLists.txt), or by running an arbitrary shell command in the directory into which the package was downloaded/cloned (see the sketch after the notes below)
The specifics of what happens post-download are less important to me than actually having a download fall-back.
How can I / how should I make this happen?
Notes:
Of course, if the download/git clone fails, then finding the package has failed.
No need to worry about specific versions at the repo, although you can if you want to.
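For reference, a minimal sketch of one way to get this behaviour, using CMake's FetchContent module (available in CMake 3.14 and later). The repository URL, package name, and tag below are placeholders, not values taken from the original find module:

    # Try the find module first; fall back to fetching and building from source.
    find_package(MyPackage QUIET)

    if(NOT MyPackage_FOUND)
        include(FetchContent)
        FetchContent_Declare(
            MyPackage
            GIT_REPOSITORY https://github.com/example-user/mypackage.git  # placeholder URL
            GIT_TAG        master                                         # or a specific tag/commit
        )
        # Clones the repository (or downloads and unpacks an archive if URL/URL_HASH
        # is used instead of GIT_REPOSITORY) and runs add_subdirectory() on it, so
        # the package's own CMakeLists.txt builds it as part of the current project.
        FetchContent_MakeAvailable(MyPackage)
    endif()

If an arbitrary shell command is needed after download instead of a CMake build, ExternalProject_Add() with its BUILD_COMMAND option covers that case, at the cost of configuring and building the dependency at build time rather than configure time.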

Related

Yocto: deploy Debug or Release prebuilt?

I am writing a bitbake recipe to deploy a third party pre-built tool, similar to this wiki page: https://wiki.yoctoproject.org/wiki/TipsAndTricks/Packaging_Prebuilt_Libraries
However, I have Release and Debug pre-built versions of the tool available as *.so files. How do I distinguish inside the recipe which of the two build types I should deploy?
Thanks and regards,
Martin
You can have two different virtual recipes, each with its own .so file. This then warrants a selection in a configuration file (with PREFERRED_PROVIDER_virtual/my-recipe), so either in a machine or a distro configuration file. This is probably the preferred approach if you are considering having release and debug distros.
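For the first option, the selection would look something like this in a machine or distro configuration file (the recipe and tool names are hypothetical):

    # local.conf, machine .conf, or distro .conf
    # Both mytool-release and mytool-debug would contain PROVIDES = "virtual/mytool".
    PREFERRED_PROVIDER_virtual/mytool = "mytool-debug"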
A second option is to install the libraries in two different paths, in two different PACKAGES (use FILES_my-package for that), and make them RCONFLICTS_my-package each other to be sure they can't both end up in the rootfs. After that, you can write a pkg_postinst_my-package() function specific to each package that moves the library from its package-specific path to the intended one. This runs both at build time, when the rootfs is created, and at runtime on first boot, so you need to make sure only one of the two cases takes effect (it's usually done by checking whether ${D} exists, which it does at build time but not at runtime).
c.f.: http://docs.yoctoproject.org/dev-manual/dev-manual-common-tasks.html#post-installation-scripts
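A rough sketch of that second option, with hypothetical package, path, and library names (do_install is assumed to have already installed both .so files into the two package-specific paths):

    # Split the two builds into separate, mutually exclusive packages.
    PACKAGES =+ "${PN}-release ${PN}-debugbuild"

    FILES_${PN}-release    = "${libdir}/release/libmytool.so"
    FILES_${PN}-debugbuild = "${libdir}/debugbuild/libmytool.so"

    RCONFLICTS_${PN}-release    = "${PN}-debugbuild"
    RCONFLICTS_${PN}-debugbuild = "${PN}-release"

    # Move the library into its final location. $D is set when the rootfs is
    # created at build time, but not at runtime on first boot, so the check
    # keeps the move from running twice.
    pkg_postinst_${PN}-release() {
        if [ -n "$D" ]; then
            mv $D${libdir}/release/libmytool.so $D${libdir}/libmytool.so
        fi
    }

    pkg_postinst_${PN}-debugbuild() {
        if [ -n "$D" ]; then
            mv $D${libdir}/debugbuild/libmytool.so $D${libdir}/libmytool.so
        fi
    }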
If you can manage to have both libraries installed in your rootfs and select the one you want with the LIBRARY_PATH environment variable, a simple recipe, with two packages with each library in a different location, will be sufficient.

Chocolatey: Simple Automatic Upgrades of Local Folder Packages

I have a folder like c:\chocopkg where I put a couple of packages which I can't find on the official repo.
Creating nupkg archives was really simple and fun. Instead, the Automatic Updater (AU) is too much for me: there is no simple cinst au; one has to clone a git repo and also set up a new one even for a local run.
What I need is very simple. I added a script <package>\tools\chocolateyBeforeUpgrade.ps1, with trivial Invoke-WebRequest regexps. It checks for new versions on the vendor's site and can update chocolateyInstall.ps1.
My first question is: is there some config option to have cup all run a script like this before checking each package's status?
If this is not possible, it would also be simple to wrap cup in, say, a cup2 that checks for and runs automatic upgrades, but what file should this wrapper edit before handing control to the actual cup?
The cup page just says it "upgrades a package or a list of packages", but I don't understand how. I can speculate that it looks at the .nuspec version. However, in a local share there is no such information without unzipping the .nupkg file, and for remote packages this would require a possibly large download.
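For what it's worth, one possible shape for such a cup2 wrapper, assuming the packages live as source folders under c:\chocopkg and that each chocolateyBeforeUpgrade.ps1 also bumps the version element in its package's .nuspec (the paths, folder layout, and script behaviour here are assumptions, not Chocolatey built-ins):

    # cup2.ps1 -- hypothetical wrapper around choco upgrade (cup)
    $pkgRoot = 'C:\chocopkg'

    Get-ChildItem $pkgRoot -Directory | ForEach-Object {
        $before = Join-Path $_.FullName 'tools\chocolateyBeforeUpgrade.ps1'
        if (Test-Path $before) {
            # Check the vendor's site, update chocolateyInstall.ps1 and
            # (assumed) the <version> in the .nuspec.
            & $before
            # Repack so the local folder source exposes the new version.
            choco pack (Join-Path $_.FullName "$($_.Name).nuspec") --output-directory $pkgRoot
        }
    }

    # Hand control to the real upgrade, using the local folder as the source.
    choco upgrade all --source $pkgRoot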
AU essentially does what you'll need, even if it is a little more setup and work. I know it may feel like too much to start, but you can just start with the files that run the updates.
What I need is very simple. I added a script <package>\tools\chocolateyBeforeUpgrade.ps1, with trivial Invoke-WebRequest regexps. It checks for new versions on the vendor's site and can update chocolateyInstall.ps1.
This isn't going to help with upgrades, as it is a chicken-and-egg scenario: you need the updated package first to be able to upgrade a package. So putting something into beforeModify or the install script is only going to help you on installation. BeforeModify only runs from the already-installed package on upgrade/uninstall, so unless there

How to get useful files from a github package?

Although I've been using GitHub for a while, every now and then this problem pops up. From a GitHub project, how do I know which files/folders are actually what I need? (Generally, I don't need the files that are only used by developers.)
Take this project (https://github.com/mupchrch/split-diff) for example: I am trying to install this plugin into my Atom editor so I can compare two files, but I find very little is said about how and which files are to be installed/copied, and to where. This "lack" of information happens with many, many GitHub projects I have stumbled upon. Some indicate npm install xxx, which I am OK with; some say nothing at all, like the example above. So I conclude I must have missed some very important information regarding how to use a GitHub package, something that goes without saying.
Can anyone help me or give me any hint?
I noticed that there is a package.json file, which must indicate this package can be installed by npm. But I need more specific instructions:
Do I need to download all the files and folders? To where? And where do I run npm install? Or, as far as I know, the Atom editor has its own install command (apm, is it?); where do I run this apm?
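For an Atom package like this one, there is usually no need to copy any repository files by hand: the published package can be installed with apm from any shell, assuming Atom (and therefore apm) is on the PATH:

    # Install the published package from the Atom package registry.
    apm install split-diff

    # Or install straight from the GitHub repository (e.g. an unpublished version).
    apm install mupchrch/split-diff

Atom's own Settings > Install view does the same thing from inside the editor.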

Version controlled South migrations in virtualenv

I have a Django site placed in folder site/. It's under version control. I use South for schema and data migrations for my applications. Site-specific applications are under folder site/ so they are all version-controlled along with their migrations.
I manage a virtualenv to keep third-party components dry and safe. I install packages via PyPI. The list of installed packages is frozen in requirements.txt so they can easily be installed in another environment. The virtualenv is not under VCS; I think it is good if the virtualenv can be easily deleted and reconstructed at any time. If I need to test my site with another version of the Python interpreter, for instance, I simply activate another virtualenv.
I'd like to use South for third-party packages too, though, and here comes the problem: migration scripts are stored in the application's folder, so they are outside of my site's repository. But I want the migration scripts to be under version control so I can run them on different stages as well.
I don't want to version-control the whole virtualenv, only the migration scripts for third-party applications. How can I resolve this conflict? Is there any misconception in my scenario?
The SOUTH_MIGRATION_MODULES setting allows you to put migration modules for specified apps wherever you want them (i.e. inside your project tree).
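A minimal sketch of what that looks like in settings.py; the app label and module path below are placeholders:

    # settings.py
    # Keep South migrations for a third-party app inside the version-controlled
    # project tree instead of inside the installed package in the virtualenv.
    SOUTH_MIGRATION_MODULES = {
        'some_thirdparty_app': 'migrations_override.some_thirdparty_app.migrations',
    }

The named module just needs to be importable, e.g. a package (with __init__.py files) somewhere inside your project tree.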
I think it depends a little bit on your version control system. I recommend using a sparse tree, one that only manages the migration folders of the various packages. Here I see two alternatives:
Make a truly sparse tree for all packages, one that you check out before creating the virtualenv. Then populate the virtualenv, putting stuff into the existing folders.
Collect all migrations into a separate repository, with a folder per project/external dependency. Check this out into the virtualenv, and create symlinks, linking from each project to its migration folder.
In either case, I believe you can arrange for the migrations to exist as a separate project, so you can install it with the same process as you install everything else (easy_install/pip/whatever).

What is a good way to deploy a Perl application?

I posted this question looking for something similar to Buildout for Perl. I think Shipwright is what I'm looking for, but I'm not really sure. I've played around with it: I created a project, imported all of my source and dependencies, and exported everything to a vessel, and then the documentation sort of just stopped. What do I do with a shipyard vessel? Do I do my actual development work in the vessel, or do I do my development in the shipyard? I'm assuming that the vessel is only for deployment, but how do I actually deploy a vessel to a web server (say I'm using Linux, Apache, and just running straight CGI)?
Is Shipwright the right thing for what I'm trying to accomplish or is there something else that would be more appropriate? Ideally I could use Shipwright similar to how I use Buildout. I use Buildout to create a nice isolated environment for my development, and also I use Buildout when deploying to live servers to manage all of my application's dependencies.
EDIT: Here are the highlights of what I can do with Buildout that I would like to be able to do in Perl.
With Buildout, I have a file in my codebase that lists dependencies (which for Perl would either be CPAN modules or other source repositories). I can run a bootstrap script that will fetch all of those dependencies and drop them into a directory within my project and NOT install them at a system level. Buildout also creates utility scripts which can do anything you want (run tests, other command line tools, anything really) and those scripts explicitly add the dependencies to the path so that as my scripts are running all of my dependencies are available to be imported.
What this really does very well is that it allows me to manage my dependencies without ever having to install anything at a system level, which makes changing from one version to another very easy. It also allows me to have multiple Buildout projects running on the same system using different versions of the same module. Finally, one huge benefit is that with Buildout's directory structure I can just commit the dependencies to source control; to deploy to a new machine, I just need to do a checkout and all of my dependencies are already satisfied, without having to touch anything installed at a system level.
I don't think you'll find anything exactly like Buildout in Perl, but you could put together a couple of things that would do the trick.
You could use a standard Build.PL script with Module::Build to manage your dependencies and provide commands to run tests, etc.
Then you could use cpanminus to install those dependencies into a local (non-system) directory (a rough example of this step follows below).
Then you might be able to use Shipwright to do the bundling and deployment of the project with these now-local dependencies.
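A minimal sketch of the cpanminus step, assuming the dependencies are declared in the project's Build.PL (or a cpanfile) and using a hypothetical ./local directory and script name:

    # Install every declared dependency into ./local instead of the system perl.
    cpanm --local-lib ./local --installdeps .

    # Run the application (or its tests) with the local dependency tree on @INC.
    perl -Mlocal::lib=./local bin/myapp.pl

Like a Buildout directory, the ./local tree can be thrown away and rebuilt at any time, or even committed to source control if you want checkouts to be self-contained.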