Redistributing PostgreSQL Command Line Tools - postgresql

Does anyone know if there is a proper way to redistribute the PostgreSQL command line tools, such as pg_dump and pg_restore?
For Windows, the binaries are available in the 'One-Click' installers, but I only want to redistribute the command line tools for automating backups and restores, so a payload of more than 100 MB is complete overkill.
I could obviously just try and copy the entire bin folder, but I'd prefer knowing exactly what I need.

There is no official client-only distribution. When this question comes up on the mailing list the usual answer is that you should grab the .zip distribution then use Dependency Walker (depends.exe) to determine which DLLs are required by the binaries you wish to distribute, bundle them up and include them in your app.
The PostgreSQL server (postgres.exe) uses dlopen() / LoadLibrary() very heavily, so you can't rely on depends.exe for it. Thankfully the client side is simpler, and you can just use depends.exe to determine what it needs. There are no additional scripts, dlopen'd libraries, etc. required by pg_dump, pg_restore or psql.
I've never been entirely satisfied with that answer, but I've never wanted to improve it badly enough to mess with the packaging either. It doesn't help that EnterpriseDB's one-click installer is closed source, so I can't modify it to offer a client-only minimal package. For now, extracting the executables and associated libraries is the best you're going to get.
If you think about it, it's not really that different from what would happen if you compiled it yourself: you'd still be pulling out only the client bits to distribute.
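Once you've pulled out the client executables and the DLLs depends.exe reports (typically pg_dump.exe, pg_restore.exe, libpq.dll and whatever libpq itself links against, but confirm the exact list for your build), the automation side is just invoking the tools. A minimal sketch; the connection details and file names are placeholders, and on Windows the same commands go into a .bat or a scheduled task:

pg_dump -h localhost -p 5432 -U backup_user -Fc -f nightly.dump mydb
pg_restore -h localhost -p 5432 -U backup_user -d mydb_restored nightly.dump

Credentials are best supplied via a pgpass file or the PGPASSWORD environment variable rather than on the command line.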

Related

How to know what silentArg to use when creating Chocolatey packages?

How do I find out what silentArg I need to use for creating a Chocolatey package?
I know that each installer will have different silentArgs, but I don't know where to find out which installer needs which. Also, I am working strictly with .exe files (embedded ones too).
You'll have to work with the documentation/support provided by the software maintainer, but I'll provide some suggestions here.
Typically, MSI installers support the same silent installation parameters (many will simply work with /qn), but sometimes an installer might support additional variables or an input file you must provide.
EXE installers are a free-for-all, unfortunately. It depends on which parameters the setup program was built to support, even for setup.exe installers that just wrap another MSI. Depending on what tool built the EXE installer, you might be able to try some common options. The following techniques are suggestions to get you started on de-mystifying the common EXE installer types:
A setup.exe that extracts and runs MSIs might be able to have the MSIs extracted and run on their own, but this is likely unsupported by the software maintainer. You will need to test this on your own per package to know if this approach will work.
Nullsoft Installers typically support a common array of options that can be used to deploy your application.
InstallShield Installers typically support the /S parameter along with an answer file, but you would still need to work with the software maintainer or read the software documentation to know what to put in the answer file.
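For reference, a few switch patterns that are commonly worth trying first, depending on which toolkit built the EXE (none of these are guaranteed for a given package, so verify against the vendor's documentation):

msiexec /i package.msi /qn /norestart          (plain MSI)
setup.exe /S                                   (NSIS / Nullsoft; the capital S matters)
setup.exe /s /v"/qn"                           (InstallShield wrapping an MSI)
setup.exe /VERYSILENT /SUPPRESSMSGBOXES        (Inno Setup)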
As I've mentioned in other answers, the best thing you can do here is reach out to the vendor or software maintainer and ask for a deployment guide for that software, or at least documentation on how to silently install.

Should we deploy scrrun.dll (Windows Scripting Runtime)?

Our VB6 application relies heavily on the use of scrrun.dll (Windows Scripting Host). Until a year ago we used to deploy this dll with our installer. Since the Windows Scripting Host is supposed to be part of Windows, we removed the dll from the installation package. However, every now and then a customer surfaces with a non-functional scrrun.dll on their system, and we have to help them reinstall or re-register it.
So, should we put the scrrun.dll back in the installation package? Should we perform some check on installation? Or should we just live with the fact that we have to provide hands on support to some of our customers to set their systems right?
Don't try to deploy these libraries as part of a normal setup.
Microsoft Scripting Runtime must be installed through the use of a self-extracting .exe file. For versions of Scripting Runtime mentioned at the beginning of this article, the only way to distribute it is to use the complete self-extracting .exe file located at the following locations...
It is possible that some users employ an older anti-malware suite, many of which tried to disable scripting. It is more likely, though, that some users have managed to break their Windows installation, either by themselves or through improperly packaged applications that try to include these libraries - and then blindly remove them from the system on uninstall (cough, cough - Inno).
The libraries involved have been tailored to each Windows version for some time now. This is why the ancient .CAB file was "recalled" long ago. There is no single copy of them intended to run on any random version of Windows, and there are no redist packs for any modern version of Windows. The correct fix is a system restore or a repair install.
While this can't be blamed directly on Inno Setup, because it is the result of poorly authored scripts, it is frustrating enough and common enough that I won't cry if its signature gets added to anti-malware suites. There are just too many poorly written examples loose in the wild, copy/pasted by too many people.
I spend plenty of time undoing the damage caused by uninstalls of these applications and have grown quite weary of it. Where possible I now use isolated assemblies in self-defense, which helps a lot. Windows File Protection is also getting better at preventing abusive actions against system files.
But in general you are much better off avoiding any dependency on scripting tools in an application. There isn't very much that they can do as well as straight code anyway, though it may take some time to write alternative logic.

Automating Solaris custom software deployment and configuration for multiple nodes

Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10.
Specifically, I have a set of software components, delivered as tar files, that run as daemon processes after being extracted and configured in the host environment. As with pretty much any server-side software package, I need to ensure that a list of prerequisites is met before extracting and running the software. For example (a rough shell sketch of these checks follows the list):
Checking that certain users exist and that they are associated with one or more user groups. If not, then create them and their group associations.
Checking that target application folders exist and if not, then create them with pre-configured path values defined when the package was assembled.
Checking that such folders have the appropriate access control level and ownership for a certain user. If not, then set them.
Checking that a set of environment variables is defined in /etc/profile, pointing to predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files involved include /etc/services and /etc/system.
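Roughly, each box needs something along these lines per component (the user, group, paths and permissions here are placeholders):

getent group appgrp >/dev/null || /usr/sbin/groupadd appgrp
id appuser >/dev/null 2>&1 || /usr/sbin/useradd -g appgrp -m -d /export/home/appuser appuser
[ -d /opt/myapp ] || mkdir -p /opt/myapp
chown -R appuser:appgrp /opt/myapp && chmod 750 /opt/myapp
grep MYAPP_HOME /etc/profile >/dev/null || \
  echo 'MYAPP_HOME=/opt/myapp; PATH=$PATH:$MYAPP_HOME/bin; export MYAPP_HOME PATH' >> /etc/profile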
Obviously, doing this for many boxes (the goal in question) by hand will certainly be slow and error prone.
I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another.
Traditional shell scripts. I've only ever troubleshot existing ones and don't really have much experience writing them, so these would be my last resort.
Python scripts using the pexpect library for analyzing system command output. This was my initial choice since the target Solaris environments have it installed. However, I want to make sure that I'm not reinventing the wheel again :P.
Ant or Gradle scripts. They may be an option since the boxes also have Java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when dealing with user and folder permissions checking/setting.
It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this.
I thank you for your time and help.
Most of those steps sound like things handled by use of a packaging system to install your package. On Solaris 10, that would be the SVR4 packaging system included with the OS.
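For concreteness, a minimal SVR4 package is driven by a pkginfo file and a prototype file, with optional scripts (preinstall/postinstall/checkinstall) for things like creating users and groups. A hedged sketch; the package name, paths, ownership and the daemon itself are placeholders:

# pkginfo
PKG=MYdaemon
NAME=my daemon
VERSION=1.0
ARCH=sparc
CATEGORY=application
BASEDIR=/opt/mydaemon

# prototype (i = install script, d = directory, f = file)
i pkginfo
i postinstall
d none /opt/mydaemon/bin 0755 appuser appgrp
f none /opt/mydaemon/bin/mydaemon 0755 appuser appgrp

# build and install
pkgmk -o -d /var/spool/pkg -f prototype
pkgadd -d /var/spool/pkg MYdaemon

The postinstall (or checkinstall) script is where the user/group creation and the /etc/profile, /etc/services and /etc/system changes from the question would go.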

How do I use rpm to update/replace existing files?

I have several applications that I wish to deploy using rpm. Some of the files in my application deployments override files from other deployed packages. Simply including the new files in the deployment package will cause rpm conflicts.
I am looking for the proper way to use rpm to update/replace already installed files.
I have already come up with a few solutions but nothing seems quite right.
Maintain custom versions of the rpms containing the original files.
This seems like a large amount of work for a relatively small reward even though it feels less like a hack than some of the other possible solutions.
Include the files in the rpm with another name and copy them over in the post section.
This would work but will mean littering the system with multiple copies of the files. Also it means additional maintenance in the rpm build spec for each file.
Use wget in the post section to replace the original files from some known server.
This is similar to the copy technique but the files wouldn't even live in the rpm. This might act like a nice central configuration authority though.
Deploy the files as new files, then use symlinks to override the originals.
This is also similar to the copy technique but with less clutter. The problem here is that some files don't behave well as symlinks.
To the best of my knowledge, RPM is not designed to permit updating / replacing existing files, so anything that you do is going to be a hack.
Of the options you list, I'd choose #1 as the least bad hack if the target systems are systems that I admin (as you say, it's more work but is the cleanest solution) and a combination of #2 and #4 (symlinks where possible, copies where not) if I'm creating the RPMs for others' systems (to avoid having to distribute a bunch of RPMs, but I'd make it very clear in the docs what I'm doing).
You haven't described which files need to be updated or replaced and how they need to be updated. Depending on the answers to those questions, you may have a couple of other options:
Many programs are designed to use a single default configuration file and also to grab configuration files from a .d subdirectory. For example, Apache uses /etc/httpd/conf/httpd.conf and /etc/httpd/conf.d/*.conf, so your RPMs could drop files under /etc/httpd/conf.d instead of modifying /etc/httpd/conf/httpd.conf. And if the files that you need to modify are config files that don't follow this pattern but could be made to, you can suggest to the package maintainers that they add this capability; this wouldn't help you immediately but would make future releases easier.
For command-line utilities like sendmail and lpr that can be provided by multiple packages, the alternatives system (see man alternatives) permits more than 1 RPM that provides these utilities to be installed side by side. Again, if the files that you need to modify are command-line utilities that don't follow this pattern but could be made to, you can suggest to the package maintainers that they add this capability.
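As a tiny illustration of the first option, your RPM then owns only its own drop-in file instead of modifying the stock config (the file name here is hypothetical):

%files
%config(noreplace) /etc/httpd/conf.d/myapp.conf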
Config file changes on systems that you administer are better managed through a tool like Cfengine or Puppet rather than through custom RPMs. I think that Red Hat favors Puppet.
If I were creating the RPMs for systems I don't administer, I'd consider using a third-party tool like Bitrock and dumping all of my stuff under /opt just so I wouldn't have to stomp on files installed by other admins' RPMs.
Edit (2019): Nowadays, Software Collections offers a useful alternative. You can create packages that install somewhere under /opt, and the Software Collections tools offer a standardized way for users to opt in to using those instead of whatever's normally installed under /usr. Red Hat uses this to distribute newer versions of tools for their otherwise stable and long-lived (i.e., older) Red Hat Enterprise Linux distributions.
You can also execute rpm -U --replacefiles --replacepkgs ..., which will give you what you want.
See here for more info on RPM %files directives:
http://www.rpm.org/max-rpm/s1-rpm-inside-files-list-directives.html
You can use the argument passed to the %pre, %post, %preun and %postun scriptlets in the RPM spec to determine whether you are installing, upgrading or removing the package. The value of $1 tells you which case you are in:
If $1 is 0 (only ever seen in %preun/%postun) - the package is being erased, and no copies will be left installed.
If $1 is 1 - it is either a first-time install (in %pre/%post) or the cleanup side of an upgrade (in %preun/%postun); one copy will be left installed.
If $1 is 2 or more (in %pre/%post) - the package is being upgraded; $1 counts the new copy plus the old one(s) that have not been removed yet.
These scriptlet arguments help with managing files across versions.
Keep track of what you're doing between versions and consider what one might do if they were to skip a version or two.
Have consideration for these things and you should be good to go!
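A minimal sketch of how those arguments are usually tested in the spec scriptlets (the actions themselves are placeholders):

%post
if [ "$1" -ge 2 ] ; then
    # upgrade: migrate or merge anything carried over from the old version
    :
fi

%preun
if [ "$1" -eq 0 ] ; then
    # full removal: clean up anything the package created at runtime
    :
fi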

What's the best system for installing a Perl web app?

It seems that most of the installers for Perl are centered around installing Perl modules, not applications. Things like ExtUtils::MakeMaker and Module::Build are very well suited for modules, but require some additional work for Web Apps.
Ideally it would be nice to be able to do the following after checking out the source from the repository:
Have missing dependencies detected
Download and install dependencies from CPAN
Run a command to "Build" the source into a final state (perform any source parsing or configuration necessary for the local environment).
Run a command to install the built files into the appropriate locations. Not only the perl modules, but also things like template (.tt) files, and CGI scripts, JS and image files that should be web-accessible.
Make sure proper permissions are set on installed files (and SELinux context if necessary).
Right now we have a system based on Module::Build that does most of this. The work was done by my co-worker, who was learning to use Module::Build at the time, and we'd like some advice on generalizing our solution, since it's fairly app-specific right now. In particular, our system requires us to install dependencies by hand (although it does detect them).
Is there any particular system you've used that's been particularly successful? Do you have to write an installer based on Module::Build or ExtUtils::MakeMaker that's particular to your application, or is something more general available?
EDIT: To answer brian's questions below:
We can log into the machines
We do not have root access to the machines
The machines are all (ostensibly) identical builds of RHEL5 with SELinux enabled
Currently, the people installing the machines are only programmers from our group, and our source is not available to the general public. However, it's conceivable our source could eventually be installed on someone else's machines in our organization, to be installed by their programmers or systems people.
We install by checking out from the repository, though we'd like to have the option of using a distributed archive (see above).
The answer suggesting RPM is definitely a good one. Using your system's package manager can definitely make your life easier. However, it might mean you also need to package up a bunch of other Perl modules.
You might also take a look at Shipwright. This is a Perl-based tool for packaging up an app and all its Perl module dependencies. It's early days yet, but it looks promising.
As far as installing dependencies goes, it wouldn't be hard to simply package up a bunch of tarballs and then have your Module::Build-based solution install them. You should take a look at pip, which makes installing a module from a tarball quite trivial. You could package this with your code base and simply call it from your own installer to handle the deps.
I question whether relying on CPAN is a good idea. The CPAN shell always fetches the latest version of a distro, rather than a specific version. If you're interested in ensuring repeatable installs, it's not the right tool.
What are your limitations for installing web apps? Can you log into the machines? Are all of the machines running the same thing? Are the people installing the web apps co-workers or random people from the general public? Are the people installing this sysadmins, programmers, web managers, or something else? Do you install by distributing an archive or by checking out from source control?
For most of my stuff, which involves sysadmins familiar with Perl installing in controlled environments, I just use MakeMaker. It's easy to get it to do all the things you listed if you know a little about MakeMaker. If you want to know more about that, ask another question. ;) Module::Build is just as easy, though, and the way to go if you don't already like using MakeMaker.
Module::Build would be a good way to handle lots of different situations if the people installing are moderately clueful about the command line and installing software. You'll have a lot of flexibility with Module::Build, but also a bit more work. And the cpan tool (which comes with Perl) can install from the current directory and handle dependencies for you. Just tell it to install the current directory:
$ cpan .
If you only have to install on a single platform, you'll probably have an easier time making a package in the native format. You could even have Module::Build make that package for you, so the developers have the flexibility of Module::Build but the installers have the ease of the native process. Sticking with Module::Build also means that you could create different packages for different platforms from a single build tool.
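If you stay with Module::Build, the day-to-day sequence for someone deploying from a checkout might look roughly like this (the installdeps action needs a reasonably recent Module::Build, and the extra install locations for templates, CGI scripts and static files are whatever your Build.PL configures):

$ perl Build.PL          # check the environment and report missing dependencies
$ ./Build installdeps    # fetch and install those dependencies from CPAN (or your own mirror)
$ ./Build                # build/configure the source for the local environment
$ ./Build install        # install the modules plus templates, CGI, JS and image files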
If the people installing the web application really have no idea about command lines, CPAN, and other things, you'll probably want to use a packager and installer that doesn't scare them or make them think about what is going on, and can accurately report problems to you automatically.
As Dave points out, using a real CPAN mirror always gets you the latest version of a module, but you can also make your own "fake" CPAN mirror with exactly the distributions you want and have the normal CPAN tools install from that. For our customers, we make "CPAN on a CD" (although thumb drives are good now too). With a simple "run me" script everything gets installed in exactly the versions they need. See, for instance, my Making my own CPAN talk if you're interested in that. Again, consider the audience when you think about that. It's not something you'd hand to the general public.
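If you go the private-mirror route, CPAN::Mini is one way to get started; a sketch, with the path and URL as placeholders (CPAN::Mini mirrors the current releases, so you would prune or extend it to exactly the distributions you want to ship):

$ minicpan -l /srv/minicpan -r http://www.cpan.org/
$ cpan
cpan> o conf urllist file:///srv/minicpan
cpan> o conf commit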
Good luck, :)
I'd recommend seriously considering a package system such as RPM to do this. Even if you're running on Windows I'd consider RPM and cygwin to do the installation. You could even set up a yum or apt repository to deliver the packages to remote systems.
If you're looking for a general installer for customers running any number of OSes and distros, then the problem becomes much harder.
Take a look at PAR.
Jonathan Rockway has a small section on using this with Catalyst in his book.