RPM conditional Requires in spec file possible - rpm-spec

As the subject says, I wonder whether it is possible in an RPM spec file to make required packages dependent on a condition.
For instance, check in a shell statement on the install target whether e.g. the host is using bonding interfaces,
and only then have the Requires become effective.

As an answer to your original question - yes, this is possible, but what you can implement depends on what you want to use as a condition, and conditions are evaluated at build time of the package, not during installation. You could easily do something in the .spec like
%if some_condition_is_true
Requires: foo
%else
Requires: bar
%endif
Question is, how much sense does that make, and what is your particular use case?
For what you described, that is not possible in that way, as you cannot change the package at install time. You have a few options for your scenario:
create two versions of your package, one for bonding hosts and one for hosts without
separate the part that is only needed on bonding hosts into a subpackage, and install that only on the hosts that need it
put the logic for the bonding hosts into a %pre/%post scriptlet, so that it only runs conditionally (see the sketch after this list)
use a virtual Requires which is fulfilled by multiple packages, and then add some configuration on the host which gives priority to the package you need, the one for bonding or the one without. But that is highly distro-specific ...
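A minimal sketch of the scriptlet option, assuming a Linux target where /proc/net/bonding/ exists whenever bonding is in use; the helper script name is made up:
%post
# run the bonding-specific setup only when a bonding interface is present,
# and never let it fail the package installation
if ls /proc/net/bonding/* >/dev/null 2>&1 ; then
    /usr/share/mypackage/setup-bonding.sh || :
fi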
For more details on conditional/dynamic macros, which are mostly available since rpm.org v4.12, see https://web.archive.org/web/20160513021804/http://rpm.org/wiki/DynamicDependencies
For more details on the %pre/%post and other scriptlets see e.g. https://fedoraproject.org/wiki/Packaging:ScriptletSnippets

No. You can hand-modify the Requires lines and turn off automatic dependency detection, and then handle what is optional as needed.


Configure dependencies in RPM

I have built an RPM package for CentOS 6.6 that is installed on a machine of our customer.
This package contains our own software, customized for the specific use case, but it also uses the open-source package HAProxy.
HAProxy (RPM version 1.5.4-2.el6_7.1) comes with a default configuration in /etc/haproxy/haproxy.conf, and it cannot be customized without changing this file.
But I want the configuration to be part of my generated package. RPM throws an error if the /etc/haproxy/haproxy.conf file is in my package, because it is also part of the haproxy package.
I have worked around this problem by providing a custom Upstart script which starts HAProxy with a different config file, but this does not seem to be the right way to do it.
Is there a preferred way to handle such customizations?
In cases like this, I've created an RPM which installs configuration files into a different subdirectory, and in its %post and %preun scriptlets modifies the uncooperative package's config-files:
when installing, I renamed the original config-files, and made symbolic links from those pathnames to the overwriting config-files, and
when uninstalling, the package removed the symbolic links and restored the original package's files.
Doing it that way of course meant that my config-RPM was dependent on the original RPM. A little awkward to describe, but it works.
In followup, the issue of updating was mentioned. Updating an RPM requires special handling to avoid uninstalling things. The rpm program passes a parameter $1 which you can test in the %pre and %preun scriptlets to notice that this is an upgrade and that there is no need to save the original config-files (or restore them). The rest of the scriptlet would be the same, by copying the new versions of your config-files over the others.
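A sketch of what those scriptlets could look like, using the haproxy path from the question above; the backup suffix and the location of the replacement config are my own invention:
%post
# $1 is 1 on first install and 2 (or more) on upgrade: divert the file only once
if [ "$1" -eq 1 ] ; then
    mv /etc/haproxy/haproxy.conf /etc/haproxy/haproxy.conf.rpmorig
    ln -s /etc/mypackage/haproxy.conf /etc/haproxy/haproxy.conf
fi

%preun
# $1 is 0 only on full removal (on upgrade it is 1): restore the original file
if [ "$1" -eq 0 ] ; then
    rm -f /etc/haproxy/haproxy.conf
    mv /etc/haproxy/haproxy.conf.rpmorig /etc/haproxy/haproxy.conf
fi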
Further reading:
Defining installation scripts (shows the use of $1)
RPM upgrade uninstalls the RPM
Your approach is correct. On EL6 with SysV init there is no choice other than creating a custom haproxy package, a custom haproxy service, or a script which the customer runs after installation. I see creating another service as the best option.
Note that on EL7 with systemd you have a much better option, as you can use the drop-in feature of systemd. For more information see:
https://coreos.com/os/docs/latest/using-systemd-drop-in-units.html
https://wiki.archlinux.org/index.php/systemd#Drop-in_snippets
https://wiki.archlinux.org/index.php/Systemd/User#Service_example
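As an illustration, a drop-in that points haproxy at a different config file might look roughly like this; the exact ExecStart line must be copied from the unit file your haproxy package actually ships, and the file names here are placeholders:
# /etc/systemd/system/haproxy.service.d/custom-config.conf
[Service]
# clear the shipped ExecStart, then set our own with a different -f argument
ExecStart=
ExecStart=/usr/sbin/haproxy -f /etc/haproxy/haproxy-custom.cfg
Afterwards run systemctl daemon-reload and restart the service so the override takes effect.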
The usual way this is done is to have a drop-in configuration directory, e.g. /etc/httpd/conf.d/, where your package would drop its configuration, and you would tell the other daemon, e.g. httpd, to do a graceful restart in your %post/%postun.
I don't know anything about HAProxy, but a quick search implies that they do not support this configuration directory concept that has been around for many years. A few people have hacked it in, but unless it is out-of-the-box, you will run into your original problem again.
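For example, a package that drops a file into /etc/httpd/conf.d/ might reload the daemon like this on EL6 (a sketch; the reload is deliberately allowed to fail so it never aborts the package transaction):
%post
/sbin/service httpd reload >/dev/null 2>&1 || :

%postun
/sbin/service httpd reload >/dev/null 2>&1 || :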

How would I make "read" command work in RPM spec file?

I wrote a spec file to build an RPM package. I need to let the end user determine the value of a variable in the %pre section, so I use a "read <my_variable>" command there. But when installing, the "read" command seems to be ignored by the system, because the system didn't wait for me to enter a value. Why? And is there a good way to do this kind of thing?
Rather than embedding the read within your package, use RPM's conditional mechanism, which can be driven by command-line parameters. Most usage of conditionals in RPM tests constants defined in the system's RPM macros or makes simple filesystem checks. You should investigate those first, because that allows your package to install without help from the person doing the install.
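A minimal sketch of such a build-time conditional, using the %bcond macros; the feature and package names are placeholders:
# in the spec file
%bcond_with extra_feature

%if %{with extra_feature}
Requires: some-extra-package
%endif

# at build time, enable it with:
#   rpmbuild -ba mypackage.spec --with extra_feature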
Here are some useful pages discussing RPM conditionals:
Passing conditional parameters into a rpm build (rpm.org)
PackagerDocs/ConditionalBuilds (rpm.org)
Conditionals (Maximum RPM: Taking the Red Hat Package Manager to the Limit)
openSUSE:RPM conditional builds
As one can see from the suggested reading, these are build-time rather than install-time features. You cannot make an "interactive" RPM install. To read more about that, see these pages:
Is it possible to get user's input during installation of rpm?
RPM - Install time parameters
The latter makes it clear that this is intentional on the part of the developers. As an aside, one response mentions the --relocate option, implying that it solves the problem. However, it is actually something different. Read more about that here:
Relocatable packages
Chapter 15. Making a Relocatable Package (Maximum RPM)

how to define OR logic for an RPM dependency

I'm creating an RPM and I need to check that a version of Java 8 is installed on the machine.
The problem is that Oracle provides version-tied RPMs with names like jdk1.8.0_45 and Red Hat provides RPMs with names like java-oracle-8. I don't really care which one is installed, as long as one of them is, so how can I define OR logic for a Java 8 dependency? (Note this is for a RHEL5 or RHEL6 target, so newfangled features can't be used.)
As far as I'm aware RPM has no such functionality. You cannot declare a requirement like this.
That being said, what RPM does have, and what is used as a rough equivalent, is the concept of "Provides".
Any package can Provide: some_capability and then other packages can Require: some_capability the same way they can Require: <some_package>.
You can also Require: /some/file/path if absolutely necessary (though avoid this whenever possible).
So, you need to compare the provided capabilities of the RPMs you care about and look for any common capability that you can depend on instead. Hopefully there's something in common there you can use. If there isn't then you are left with no choice other than to drop the requirement in your RPM and hope they have it and detect it at runtime (with a startup script perhaps).
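A rough way to do that comparison, and what the resulting requirement might look like; the jre capability below is only a guess, so verify it against the actual packages first:
# inspect what each candidate package claims to provide
rpm -q --provides jdk1.8.0_45
rpm -q --provides java-oracle-8

# if both really do provide a common capability, depend on that in your spec
Requires: jre >= 1.8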
(Technically you could also do a check during %pre and exit with a failure if you can't find java somewhere but I strongly recommend not doing that.)
FYI - boolean logic in dependencies is called a "rich dependency" and is currently being introduced into RPM; it will probably land in Fedora 24. See http://lists.rpm.org/pipermail/rpm-maint/2014-February/003666.html
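For reference, with rich dependencies (RPM 4.13 and later, so not usable on RHEL5/6) the spec syntax looks like this, using the package names from the question:
Requires: (jdk1.8.0_45 or java-oracle-8)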

Internal CPAN - what module

I want to set up an in-house CPAN for distributing our internal code.
So I was looking at CPAN::Mini as recommended here. But it looks like there are other options, such as CPAN::Site, CPAN::Dark, Dist::Zilla ...
I'm a little bit overwhelmed by all these options. What do people mostly use/recommend?
What I need is a way to push internal modules to a repository which can be accessed from several machines.
The quick answer is that you want to use CPAN::Mini to create a local mirror of all that is current on the CPAN, and then CPAN::Mini::Inject to add your own distributions to it.
The long answer is that it helps to understand how a CPAN mirror is constructed. Broadly speaking, it is simply a directory that contains two sub-directories.
The 'modules' directory in turn contains two files. The first is 03modlist.data.gz, whose contents are ignored by modern CPAN clients, but there is legacy code that assumes this file exists, so just copy it from an existing mirror. The other is 02packages.details.txt.gz, which I shall describe later.
The 'authors' directory contains a file, 01mailrc.txt.gz, which is another relic of the past whose contents can be ignored, so just copy it from another mirror. It also contains the 'id' directory, which in turn contains sub-directories and distributions, whose names follow a pattern. For example, my PAUSE id is DCANTRELL, and one of my distributions is XML-Tiny-2.06.tar.gz, so that file lives at .../authors/id/D/DC/DCANTRELL/XML-Tiny-2.06.tar.gz.
The 02packages.details.txt.gz file is the index that maps module names to distributions, and this must be up to date for your mirror to work properly. It consists of a few header lines, which must be present and correct, followed by a blank line, followed by one line for each module. Those lines are three fields separated by spaces:
module name
module version
distribution filename
eg
XML::Tiny 2.06 D/DC/DCANTRELL/XML-Tiny-2.06.tar.gz
(you may also see .tgz, .zip, and a couple of others)
A distribution may appear in several lines, once for each module it contains. eg
XML::Tiny::DOM 1.1 D/DC/DCANTRELL/XML-Tiny-DOM-1.1.tar.gz
XML::Tiny::DOM::Element 1.1 D/DC/DCANTRELL/XML-Tiny-DOM-1.1.tar.gz
In a normal CPAN mirror, there may be several versions of a distribution, and several versions of a module - for example, the current version and a few older ones, or the current stable one and a dev one. The index file contains the most recent stable version. You can tell dev versions of distributions because they have an underscore in their version, or contain the string '-TRIAL'.
So, knowing all that, you can construct a CPAN-a-like that contains only your code. But using CPAN::Mini and CPAN::Mini::Inject to add your stuff to a "real" CPAN is less work.
Once you've created your CPAN-a-like, you can either expose it on HTTP and access it using any client as normal, or you could just have it in the filesystem and configure the CPAN client to access it using a file:/// URL.
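A rough sketch of that workflow; the paths and the LOCAL author id are placeholders, and mcpani additionally needs a small config file (see the CPAN::Mini::Inject documentation):
# mirror the public CPAN locally
minicpan -l /srv/minicpan -r http://www.cpan.org/

# register your own distribution and inject it into the mirror's index
mcpani --add --module Local::Module --authorid LOCAL \
       --modversion 0.01 --file Local-Module-0.01.tar.gz
mcpani --inject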
You might also consider Pinto. Pinto allows you to curate your own stable CPAN repository, which can contain any number of both public and private distributions. Pinto also helps you to manage change as your dependencies evolve over time.
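Roughly, a Pinto workflow looks like this; the paths and module names are placeholders, so check the pinto command help for the exact usage:
pinto --root ~/pinto init
pinto --root ~/pinto add Local-Module-0.01.tar.gz
pinto --root ~/pinto pull Some::CPAN::Dependency
# then serve the repository (e.g. with pintod) or point your CPAN client at it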
DrHyde gave a very nice answer to the question. But if you don't want to maintain a CPAN mirror, you can use MyCPAN::App::DPAN together with MyCPAN::Indexer.
Caveat: both distributions are under development. Not all combinations will work. What I use is the latest version of MyCPAN::App::DPAN on GitHub (1.28_11) and MyCPAN::Indexer version 1.28_10 (later versions don't work with MyCPAN::App::DPAN).
MyCPAN::App::DPAN will create a CPAN-like directory structure on your local disk from the distributions you feed it. You will need to create a config file for it (say, .dpanrc):
# contents of .dpanrc
indexer_id Edward Baudrez <my.email.address@example.org>
dpan_dir /home/ebaudrez/rsync.net/dpan
merge_dirs /home/ebaudrez/rsync.net/dpan/dists
report_dir /home/ebaudrez/rsync.net/dpan/indexer_reports
Put your distribution tarballs in the directory merge_dirs (I think there's no reason that directory should reside under dpan_dir, but I'm too lazy to figure it out right now). Then call dpan:
dpan -f $HOME/.dpanrc
dpan will create a CPAN-like structure in dpan_dir (containing, in particular, authors and modules). This directory can then be used with cpanm (for instance):
cpanm --mirror $HOME/rsync.net/dpan --mirror http://search.cpan.org/CPAN
Note that I use real CPAN as fallback, because the DarkPAN is by definition incomplete. If you also happen to have a mini CPAN mirror, you can also use it here:
cpanm --mirror $HOME/rsync.net/dpan --mirror $HOME/mirrors/minicpan --mirror-only
Do note that, for this scheme to work, you will need to create distribution tarballs from your source code. I like and use Dist::Zilla, but note that you can also generate tarballs from Makefile.PL, so you definitely don't need to use Dist::Zilla. But it takes care of a lot of the details.
Creating a real distribution from your source code may seem like a lot of work, but Dist::Zilla helps lift the burden, and the transition to a real CPAN module, some day in the future ;-), is also simplified a lot when you already have a distribution.
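For completeness, a minimal Dist::Zilla setup is just a dist.ini next to your lib/ and t/ directories; the values below are placeholders:
; dist.ini
name             = Local-Module
author           = Your Name <you@example.org>
license          = Perl_5
copyright_holder = Your Company
version          = 0.01

[@Basic]
Running dzil build then produces Local-Module-0.01.tar.gz, ready to be fed to dpan or mcpani.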

Automating Solaris custom software deployment and configuration for multiple nodes

Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10.
Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Pretty much like any server side software package out there, I need to ensure that a list of prerequisites are met before extracting and running the software. For example:
Checking that certain users exist and that they are associated with one or more user groups. If not, then create them and their group associations.
Checking that target application folders exist and if not, then create them with pre-configured path values defined when the package was assembled.
Checking that such folders have the appropriate access control level and ownership for a certain user. If not, then set them.
Checking that a set of environment variables are defined in /etc/profile, pointed to predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files include /etc/services and /etc/system.
Obviously, doing this for many boxes (the goal in question) by hand will certainly be slow and error prone.
I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another.
Traditional shell scripts. I've only troubleshot these before, and I don't really have much experience with them. These would be my last resort.
Python scripts using the pexpect library for analyzing system command output. This was my initial choice since the target Solaris environments have it installed. However, I want to make sure that I'm not reinventing the wheel again :P.
Ant or Gradle scripts. They may be an option since the boxes also have Java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when dealing with user and folder permissions checking/setting.
It seems obvious to me that I'm not the first person in this situation, but I can't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this.
I thank you for your time and help.
Most of those steps sound like things handled by use of a packaging system to install your package. On Solaris 10, that would be the SVR4 packaging system included with the OS.
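A very rough outline of an SVR4 package build, with placeholder names; see pkgmk(1), pkgproto(1) and pkgadd(1M) for the details. The user creation, directory ownership, and edits to /etc/profile, /etc/services and /etc/system you describe would go into the package's checkinstall/preinstall/postinstall scripts:
# pkginfo (package metadata; values are placeholders)
PKG=MYSWdaemon
NAME=My daemon package
VERSION=1.0
ARCH=sparc
CATEGORY=application

# generate a prototype file from a staging directory, then add the
# pkginfo and install-script entries to it by hand
(cd stage && find . -print | pkgproto) > prototype

# build the package and install it on a target host
pkgmk -o -d /var/spool/pkg
pkgadd -d /var/spool/pkg MYSWdaemon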