Configure dependencies in RPM - CentOS

I have built an RPM package for CentOS 6.6 that is installed on a machine of our customer.
This package contains our own software, customized for the specific use case, but also uses the open-source package HAProxy.
HAProxy (RPM version 1.5.4-2.el6_7.1) comes with a default configuration in /etc/haproxy/haproxy.cfg, and it cannot be customized without changing this file.
But I want the configuration to be part of my generated package. RPM throws an error if the /etc/haproxy/haproxy.cfg file is in my package, because it is also part of the haproxy package.
I have worked around this problem by providing a custom upstart script which starts HAProxy with a different config file, but this does not seem to be the right way to do this.
Is there a preferred way to handle such customizations?

In cases like this, I've created an RPM which installs configuration files into a different subdirectory, and in its %post and %preun scriptlets modifies the uncooperative package's config-files:
when installing, I renamed the original config-files, and made symbolic links from those pathnames to the overwriting config-files, and
when uninstalling, the package removed the symbolic links and restored the original package's files.
Doing it that way of course meant that my config-RPM was dependent on the original RPM. A little awkward to describe, but it works.
In followup, the issue of updating was mentioned. Updating an RPM requires special handling to avoid uninstalling things. The rpm program passes a parameter $1 which you can test in the %post and %preun scriptlets to notice that this is an upgrade and that there is no need to save the original config files (or restore them). The rest of the scriptlet would be the same: copy the new versions of your config files over the others.
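A rough sketch of what those scriptlets could look like, assuming the stock config lives at /etc/haproxy/haproxy.cfg and the config package ships its own copy under /etc/haproxy/conf-mypkg/ (names are illustrative, not from the question):

%post
# $1 is 1 on first install, 2 or more on upgrade
if [ "$1" -eq 1 ] ; then
    mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.orig
    ln -s /etc/haproxy/conf-mypkg/haproxy.cfg /etc/haproxy/haproxy.cfg
fi

%preun
# $1 is 0 on full removal, 1 or more on upgrade
if [ "$1" -eq 0 ] ; then
    rm -f /etc/haproxy/haproxy.cfg
    mv /etc/haproxy/haproxy.cfg.orig /etc/haproxy/haproxy.cfg
fi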
Further reading:
Defining installation scripts (shows the use of $1)
RPM upgrade uninstalls the RPM

Your approach is correct. On EL6 with SysV init there is no other choice than creating a custom haproxy package, a custom haproxy service, or a script which the customer runs after installation. I see creating another service as the best option.
Note that on EL7 with systemd you have a much better option, as you can use systemd's drop-in feature. For more information see:
https://coreos.com/os/docs/latest/using-systemd-drop-in-units.html
https://wiki.archlinux.org/index.php/systemd#Drop-in_snippets
https://wiki.archlinux.org/index.php/Systemd/User#Service_example
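For example, on EL7 a drop-in that points HAProxy at a different configuration file could look like this sketch (the exact ExecStart line must mirror whatever the distribution's unit file ships; the file names here are assumptions):

mkdir -p /etc/systemd/system/haproxy.service.d
cat > /etc/systemd/system/haproxy.service.d/custom-config.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/sbin/haproxy -f /etc/haproxy/haproxy-custom.cfg -p /run/haproxy.pid
EOF
systemctl daemon-reload
systemctl restart haproxy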

The usual way this is done is to have a drop-in configuration directory, e.g. /etc/httpd/conf.d/, where your package would drop its configuration, and you would tell the other daemon, e.g. httpd, to do a graceful restart in your %post/%postun.
I don't know anything about HAProxy, but a quick search implies that they do not support this configuration directory concept that has been around for many years. A few people have hacked it in, but unless it is out-of-the-box, you will run into your original problem again.
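For a daemon that does support such a directory (httpd being the classic example), the packaging side is small. A sketch, with the snippet name invented for illustration:

%files
%config(noreplace) /etc/httpd/conf.d/mypkg.conf

%post
# pick up the new snippet; on EL6 the init script's reload does a graceful restart
service httpd reload >/dev/null 2>&1 || :

%postun
service httpd reload >/dev/null 2>&1 || :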

Related

Perl module installation structure and version control

I am just starting to organize some stuff on the cluster and would like some advice on it. I posted a recent question, How to organize Perl modules, and got some good answers about what I was doing incorrectly. I was trying to install each Perl module independently by setting the PREFIX for Makefile.PL each time to /path/to/lib/module-name/module-version/ (that is where the installation goes).
For example, for the module JSON, I installed it like this:
perl Makefile.PL PREFIX=/path/to/lib/perl5/5.22.1/JSON/2.53
make
make test
make install
For module Data-UUID, I did it like this:
perl Makefile.PL PREFIX=/path/to/lib/perl5/5.22.1/Data-UUID/1.221
make
make test
make install
So it made a directory JSON/2.53 in /path/to/lib/perl5/5.22.1 and that's where it installed the package. But because I change the PREFIX for each individual module, I have to set the module search path (PERL5LIB) in .bash_profile accordingly, which is kind of messy.
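For illustration, this is roughly the kind of per-module setup that accumulates in .bash_profile (hypothetical paths; the exact subdirectories depend on how MakeMaker maps PREFIX on your system):

export PERL5LIB=/path/to/lib/perl5/5.22.1/JSON/2.53/lib/perl5:$PERL5LIB
export PERL5LIB=/path/to/lib/perl5/5.22.1/Data-UUID/1.221/lib/perl5:$PERL5LIB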
My main goal to do this was for version control. In a hypothetical scenario where different versions work for two of your teammates, say JSON/2.52 works for X and JSON/2.53 works for Y, how do you control for versions without having to ask them to install their versions locally? In another scenario, what if a version worked for you 3 months back and the updated version doesn't work for you anymore? How do you keep track of versions if you install everything in one directory?
I also have more questions on the module local::lib, but I will post those as another question.
Thanks!
Maintaining concurrent versions of CPAN modules is asking for grief. I would suggest instead: don't. Use Docker for anything that has any sort of deployment. That way you can have a local installation of your code plus its dependencies in an isolated container.
It's still early days for Docker, but there is a significant amount of enthusiasm and support for it from some very big names.
Personally I'm just using it to bundle up Mojolicious Perl webapps behind a reverse proxy and maintain their dependencies as a self-contained installation (which I can run/test/deploy autonomously).
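A rough sketch of that setup (the image tag, port and file names are assumptions, and the official perl images are assumed to ship cpanm, as they currently do):

cat > Dockerfile <<'EOF'
FROM perl:5.36
WORKDIR /app
COPY cpanfile .
RUN cpanm --installdeps --notest .
COPY . .
CMD ["perl", "script/my_app", "daemon", "-l", "http://*:3000"]
EOF
docker build -t my_app .
docker run -p 3000:3000 my_app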

How to make a Dist::Zilla based Perl module (or app) install files into /etc/?

I maintain multiple Perl-written (unix-ish) applications whose current installation process consists of a manually written Makefile and installs configuration files into /etc/.
I'd really like to switch their development to use Dist::Zilla, but so far I haven't found any Dist::Zilla plugin or feature which allows me to put given files into /etc/ when the make install (or ./Build install in case of using Module::Build instead of ExtUtils::MakeMaker) is run by the local administrator who's installing my application.
With pure ExtUtils::MakeMaker, I could define additional make targets in MY::postamble and then let the install target depend on one of them via the depend { install => … } attribute. Doing something similar, but via dzil build, would probably suffice, but I'd appreciate a more obvious way.
One orthogonal approach would be to make the application not require the files under /etc/ to exist, but just for switching to Dist::Zilla that seems like too much change to the actual code, given that I only want to change the build system for now.
For the curious: the two applications I currently have in mind for switching to Dist::Zilla are xen-tools and unburden-home-dir.
The best thing to do is to avoid installing files into /etc from any Perl distribution. You cannot ensure that the cpan client (or the installing user) has permissions to install there, and there can be multiple Perls installed on a system, so each one of them would clobber the /etc files of another install. You can't really prevent the file from being overwritten by a subsequent install, so you shouldn't put config data there that you don't want to lose.
You could put the config file in /etc/, if the application knows to look for it there, but you should allow for that path to be customized (say on a test system, look for the file in the local directory, or in a user's home directory).
For installing read-only module-specific data, the best practice in Perl is to install into a Perl-install-specific location, and the module to do that is File::ShareDir::Install. You can use it from Dist::Zilla using the [ShareDir] plugin, Dist::Zilla::Plugin::ShareDir. It is even included in the [@Basic] plugin bundle, so if you use [@Basic] in your dist.ini, you don't need to do anything at all, other than drop your data files into the share/ directory in your distribution repository.
To access the contents of the sharedir from code, use File::ShareDir.
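A minimal sketch of the run-time side, assuming a distribution named My-Dist that shipped a file share/config.yml in its repository:

# locate the installed copy of the shared file from the command line
perl -MFile::ShareDir=dist_file -E 'say dist_file("My-Dist", "config.yml")'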
For porting a complex module installer to Dist::Zilla, I recommend my plugins MakeMaker::Custom or ModuleBuild::Custom, depending on which installer you prefer. These allow you to keep your existing Makefile.PL or Build.PL and just have Dist::Zilla plug in necessary bits like the dependencies.

What is a good way to deploy a Perl application?

I posted this question looking for something similar to Buildout for Perl. I think Shipwright is what I'm looking for, but I'm not really sure. I've played around with it: I created a project, imported all of my source and dependencies, and exported everything to a vessel, and then the documentation sort of just stopped. What do I do with a shipyard vessel? Do I do my actual development work in the vessel, or do I do my development in the shipyard? I'm assuming that the vessel is only for deployment, but how do I actually deploy a vessel to a web server (say I'm using Linux, Apache and just running straight CGI)?
Is Shipwright the right thing for what I'm trying to accomplish or is there something else that would be more appropriate? Ideally I could use Shipwright similar to how I use Buildout. I use Buildout to create a nice isolated environment for my development, and also I use Buildout when deploying to live servers to manage all of my application's dependencies.
EDIT: Here are the highlights of what I can do with Buildout that I would like to be able to do in Perl.
With Buildout, I have a file in my codebase that lists dependencies (which for Perl would either be CPAN modules or other source repositories). I can run a bootstrap script that will fetch all of those dependencies and drop them into a directory within my project and NOT install them at a system level. Buildout also creates utility scripts which can do anything you want (run tests, other command line tools, anything really) and those scripts explicitly add the dependencies to the path so that as my scripts are running all of my dependencies are available to be imported.
What this really does very well is that it allows me to manage my dependencies without having to ever install anything at a system level. Which makes changing from one version to another very easy. Also, it allows me to have multiple Buildout projects running on the same system using different versions of the same module. Finally, one huge benefit is that with Buildout's directory structure, I can just commit the dependencies to source control and to deploy to a new machine I just need to do a checkout and all of my dependencies are already satisfied without having to touch anything installed at a system level.
I don't think you'll find anything exactly like Buildout in Perl, but you could put together a couple of things that would do the trick.
You could use a standard Build.PL script for Module::Build for managing your dependencies and having commands to run tests, etc.
Then you could use cpanminus to do the installation of those dependencies into a local (non-system) directory.
Then you might be able to use Shipwright to do the bundling and deployment of the project with these now-local dependencies.
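A sketch of the middle step with cpanminus, assuming the dependencies are declared in the Build.PL or a cpanfile:

cpanm --local-lib ./local --installdeps .   # install dependencies under ./local, not into the system perl
export PERL5LIB="$PWD/local/lib/perl5${PERL5LIB:+:$PERL5LIB}"   # make them visible to your scripts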

How do I install Perl modules on machines without an Internet connection?

I need to install my Perl-based software on networked machines which aren't connected to the internet. Therefore, I would like to download specific versions and/or latest versions of the Perl modules and I would also like to know if there is an install procedure required for these modules.
Background:
The machines aren't connected to the internet for security reasons, and it's deemed unnecessary as well.
I would place the downloaded modules on a machine that I call the 'install server' and it contains my Perl based software and would also contain the local copies of the Perl modules.
I call a machine that I want to install my Perl-based software on the 'target machine', also not connected to the internet. There can be several target machines, each of which can run this software that I want to install. I log onto the target machine and run an install script which would connect to the install server via the local network to obtain the Perl-based software and the dependent Perl modules, and install them.
So I need to know:
How/Where to get specific versions of Perl modules, e.g. CGI.pm etc
How to install these Perl modules. Is it a case of just placing them in a directory somewhere, e.g. a library path, and making sure that this directory path is in the @INC module search path (e.g. via the PERL5LIB environment variable), if it is not already?
I would prefer not to have to do anything like make install etc. as part of installing the modules. I would like the modules to be pre-compiled or prepared as necessary so it is as simple as possible to install them. I want to avoid additional dependencies like make and its configuration, and having to parse its output to check whether it was successful.
Please help me by answering the above specific questions, as I am not able to change the concept of 'install server' and 'target machine' which aren't connected to the internet - I have to provide a solution that works within this arrangement.
The usual way to solve "I want to install stuff from CPAN but without network" problems is to use a minicpan as David Dorward wrote in his answer. But since you're going one step further, saying that you'd rather not do any real installation on the client (target) machines at all, and that you want to use precompiled modules if possible, I urge you to check out PAR and specifically PAR::Repository (server) and PAR::Repository::Client.
Since this approach needs some research before you're up to speed, I wouldn't suggest it for "I just need Foo.pm" like problems. Once you're talking about a handful of dependencies and at least a handful of clients, then it becomes a more appropriate solution.
For an outline of how it works, check out the slides of my talk at YAPC::EU 2008. It also hints at solutions to the bootstrapping problem of making the PAR::Repository::Client module available on the clients (hint: PAR can generate self-contained executables).
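As a flavour of the self-contained-executable side (PAR::Packer's pp; the script name is hypothetical):

pp -o myapp myapp.pl   # bundle the script, its module dependencies and a perl into one executable
./myapp                # runs on a target of the same OS/architecture with no CPAN modules installed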
You can create a MiniCPAN that has just the latest versions of everything from CPAN. You can insert additional, non-public modules into it with CPAN::Mini::Inject. If you need greater control over versions (i.e. not choosing the latest versions), you might want to create a DPAN.
With any of these solutions, you can configure your CPAN client to pull from your local source. That could be a directory you know ahead of time or something that you figure out dynamically, like a CD or a thumb-drive. It's just a matter of setting up the configuration correctly.
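A sketch of the MiniCPAN route (the paths are placeholders):

minicpan -l /srv/minicpan -r http://www.cpan.org   # run this where there is internet access
# copy or mount /srv/minicpan onto the install server, then point the CPAN client at it:
cpan> o conf urllist unshift file:///srv/minicpan
cpan> o conf commit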
You might be able to get away with creating operating-system packages for most of your work, but that still means you have to compile them at least the first time.
1) How/Where to get specific versions of Perl modules, e.g. CGI.pm etc
http://search.cpan.org/
If you don't want the latest version, you can get an earlier version by following the link in the breadcrumbs.
http://img.skitch.com/20091209-bu7kt3bj65374k7iijfnhrue2y.png
2) How to install these Perl modules. Is it a case of just placing them in a directory somewhere, e.g. a library path, and making sure that this directory path is in the @INC module search path (e.g. via the PERL5LIB environment variable), if it is not already?
That sometimes works, but you really should go through the perl Makefile.PL && make && make test && make install process.
Doing this would require that you manually chase all the dependencies though. You would probably be better off with something like minicpan.

How do I use rpm to update/replace existing files?

I have several applications that I wish to deploy using rpm. Some of the files in my application deployments override files from other deployed packages. Simply including the new files in the deployment package will cause rpm conflicts.
I am looking for the proper way to use rpm to update/replace already installed files.
I have already come up with a few solutions but nothing seems quite right.
Maintain custom versions of the rpms containing the original files.
This seems like a large amount of work for a relatively small reward even though it feels less like a hack than some of the other possible solutions.
Include the files in the rpm with another name and copy them over in the post section.
This would work but will mean littering the system with multiple copies of the files. Also it means additional maintenance in the rpm build spec for each file.
Use wget in the post section to replace the original files from some known server.
This is similar to the copy technique but the files wouldn't even live in the rpm. This might act like a nice central configuration authority though.
Deploy the files as new files, then use symlinks to override the originals.
This is also similar to the copy technique but with less clutter. The problem here is that some files don't behave well as symlinks.
To the best of my knowledge, RPM is not designed to permit updating / replacing existing files, so anything that you do is going to be a hack.
Of the options you list, I'd choose #1 as the least bad hack if the target systems are systems that I admin (as you say, it's more work but is the cleanest solution) and a combination of #2 and #4 (symlinks where possible, copies where not) if I'm creating the RPMs for others' systems (to avoid having to distribute a bunch of RPMs, but I'd make it very clear in the docs what I'm doing).
You haven't described which files need to be updated or replaced and how they need to be updated. Depending on the answers to those questions, you may have a couple of other options:
Many programs are designed to use a single default configuration file and also to grab configuration files from a .d subdirectory. For example, Apache uses /etc/httpd/conf/httpd.conf and /etc/httpd/conf.d/*.conf, so your RPMs could drop files under /etc/httpd/conf.d instead of modifying /etc/httpd/conf/httpd.conf. And if the files that you need to modify are config files that don't follow this pattern but could be made to, you can suggest to the package maintainers that they add this capability; this wouldn't help you immediately but would make future releases easier.
For command-line utilities like sendmail and lpr that can be provided by multiple packages, the alternatives system (see man alternatives) permits more than 1 RPM that provides these utilities to be installed side by side. Again, if the files that you need to modify are command-line utilities that don't follow this pattern but could be made to, you can suggest to the package maintainers that they add this capability.
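A sketch of what registering an alternative looks like (the link name, path and priority are invented for illustration):

alternatives --install /usr/bin/my-tool my-tool /opt/mypkg/bin/my-tool 50
alternatives --config my-tool   # interactively pick which provider the symlink points at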
Config file changes on systems that you administer are better managed through a tool like Cfengine or Puppet rather than through custom RPMs. I think that Red Hat favors Puppet.
If I were creating the RPMs for systems I don't administer, I'd consider using a third-party tool like Bitrock and dumping all of my stuff under /opt just so I wouldn't have to stomp on files installed by other admins' RPMs.
Edit (2019): Nowadays, Software Collections offers a useful alternative. You can create packages that install somewhere under /opt, and the Software Collections tools offer a standardized way for users to opt in to using those instead of whatever's normally installed under /usr. Red Hat uses this to distribute newer versions of tools for their otherwise stable and long-lived (i.e., older) Red Hat Enterprise Linux distributions.
You can also execute rpm -U --replacefiles --replacepkgs ..., which will give you what you want.
See here for more info on RPM %files directives:
http://www.rpm.org/max-rpm/s1-rpm-inside-files-list-directives.html
You can use the argument ($1) passed to the %pre, %post, %preun and %postun scriptlets to determine whether you are installing, upgrading or removing the package.
If $1 is 0 - we're removing old stuff; zero instances of the package will remain installed.
If $1 is 1 - we're installing new stuff; a total of one instance of the package will be installed.
If $1 is 2 or more - we're upgrading this package, and $1 represents the number of instances counted as installed at that point (the new one plus any older ones not yet removed).
These sections help with managing files among the versions.
Keep track of what you're doing between versions and consider what one might do if they were to skip a version or two.
Have consideration for these things and you should be good to go!
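A compact sketch of that kind of version-aware scriptlet (the actions are placeholders for whatever your package needs to do):

%post
case "$1" in
    1) echo "fresh install" ;;
    *) echo "upgrade: previous version still present until its %postun runs" ;;
esac

%preun
if [ "$1" -eq 0 ] ; then
    echo "full removal: clean up anything this package created at runtime"
fi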