Can I run Postgres 8.4 and Postgres 9 on the same machine? - postgresql

Is it possible to run Postgres 8.4 AND 9 at the same time (two installations)?
Thank you

Short answer: yes
Long answer:
You didn't specify your OS, so it's difficult to say exactly how to do it. For example, on Debian/Ubuntu you can just install the second version from its package (postgresql-8.4 and postgresql-9.0) and everything works out of the box (thanks to postgresql-common). On other systems you will probably need to do it manually using "low level" commands such as initdb and pg_ctl. Make sure that the second installation (database cluster) uses a different port (for example 5433) and not the same data directory.
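As a rough sketch, a manually created second cluster might look something like the following (the paths, version numbers and port are only examples; on Debian/Ubuntu the pg_createcluster tool from postgresql-common does the equivalent for you):

# run as the postgres user
/usr/lib/postgresql/9.0/bin/initdb -D /var/lib/postgresql/9.0/main
/usr/lib/postgresql/9.0/bin/pg_ctl -D /var/lib/postgresql/9.0/main -o "-p 5433" start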

Yes, provided the following three preconditions are satisfied:
Each PostgreSQL instance is listening on a unique IP/port (check out pgbouncer; you can probably hide both copies of PostgreSQL behind a single IP/port and reduce your memory footprint by reducing the number of active connections)
You have enough SYSV shared memory available (this is frequently the limiting factor)
You use different PGDATA directories.
I can't recommend using pgbouncer enough.
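To illustrate the pgbouncer idea, a minimal pgbouncer.ini could route to both clusters behind one listener; the database names, ports and file paths below are examples only, not a recommendation:

[databases]
app_84 = host=127.0.0.1 port=5432 dbname=app
app_90 = host=127.0.0.1 port=5433 dbname=app

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt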

On Windows you don't need to do anything: the installer creates a unique data directory, detects an existing installation, and adjusts the port automatically.
For example, your first installation will listen on 5432 and your second installation on 5433; the installer configures this for you.

You always can, the question is how hard it will be to install two versions at the same time, and that depends on your operating system. On RedHat Linux derived systems for example, this is very hard to do. The PostgreSQL RPM packages are only intended to have a single version installed at any one time. Sometimes the only reasonable way to proceed is to build your own PostgreSQL from source for the second version you want to install, which is an interesting adventure if you've never done it before.
On Debian Linux two versions at once is pretty easy. I believe it's straightforward on Windows too, but it may depend on which installer you are using.
Once you get two different versions of the database installed, only then do you have to worry about the things everyone else is talking about: making each database run on its own port and have its own installation directory. Those are often trivial compared with the work it takes to get two versions installed at once though.
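If you do end up building the second version from source, the usual trick is to give it its own prefix and default port so it cannot collide with the packaged one. A sketch, with illustrative paths, version and port:

./configure --prefix=/opt/pgsql-9.0 --with-pgport=5433
make && make install
/opt/pgsql-9.0/bin/initdb -D /opt/pgsql-9.0/data
/opt/pgsql-9.0/bin/pg_ctl -D /opt/pgsql-9.0/data start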

Yes, you just put the data directories in different locations.

Yes you can. You'll need to run them on different ports and use different data directories.
The port and data directory can both be set in postgresql.conf.
There are, I believe, several other ways of specifying the data directory, including the PGDATA environment variable.
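For instance, the second cluster's postgresql.conf might only need to differ in these two settings (values are examples; data_directory is optional when the config file lives inside the data directory):

port = 5433
data_directory = '/var/lib/postgresql/9.0/main'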

Related

What is the recommended workflow and environment for working on the FreeBSD code base?

I want to develop a new feature or change an existing program of the FreeBSD distribution, specifically in user space¹. To do so, I need to make changes to the FreeBSD code base and then compile and test them.²
Doing so in the tree at /usr/src and installing the result on the system seems like a bad idea, given that it requires you to run your development machine on CURRENT and to develop with root privileges, and it hoses your system if you make a mistake. I suppose there must be a better way, and possibly a standard setup FreeBSD developers use.³
What is the recommended workflow to develop the FreeBSD code base?
¹ so considerations specific to kernel development aren't terribly important
² I'm familiar with the process to submit changes after I have developed them
³ I have previously read both the development handbook and the FreeBSD handbook chapter on building the source but neither seem to recommend a specific process.
I am a src committer.
I often start with the lowest release that I intend to back-port to (e.g., RELENG_11_3).
I would then do (before or after making changes):
make buildworld
then deploy to a jail directory:
make DESTDIR=/usr/jails/test installworld
This jail directory, as the first responder hinted, can be used with bhyve, but I find it easier to configure a jail or even just use chroot.
I like to configure my jails in /etc/rc.conf instead of /etc/jail.conf:
Example /etc/rc.conf contents:
jail_enable="YES"
jail_list="test"
jail_test_rootdir="/usr/jails/test"
jail_test_hostname="test"
jail_test_devfs_enable="YES"
I can provide more in-depth examples, ones where the jail has a private networking stack so you can SSH into it, for example, but I don't get the sense that a networking stack is important to your testing from the posted question.
You can see the running jail with "jls" and you can enter the running jail with "jexec test bash"
Inside the jail you can test your changes.
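To tie that together, a minimal start/test/stop cycle with the rc.conf entries above might look like this (assuming bash is installed in the jail; /bin/sh always works):

service jail start test
jls
jexec test /bin/sh
service jail stop test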
When doing this kind of sandboxing, jails work so long as the /usr/src that you built and installed into the jail is from a release that is:
Older than the host OS, or
In the same STABLE branch as the host OS, or
At the very least binary-compatible with the host OS
Situations 1 and 2 are pretty safe, while situation 3 (e.g., running a newer /usr/src than the host OS) can get dodgy: for example, trying to run /usr/src head (13.0-CURRENT) on a 12.0-RELEASE-pX host, where the KBI, KPI, and API can all differ between kernel and userland (with jails, each jail runs under the host OS's kernel).
If you find that you have to run the newest sources against an older host OS, then bhyve is definitely the solution. You would take that jail directory and, instead of running a jail with that root directory, run a bhyve instance with the jail directory as its root. I don't use bhyve that often, so I can't recall whether you have to deposit the contents inside a disk image and point bhyve at that image first -- others and/or Google would know the answer to that.
I'm a ports committer, not a src one, but AFAIK running CURRENT is a common practice amongst developers.
Another way to work is to set up a CURRENT VM, share it over NFS, mount it from the host, and install into it by running make install DESTDIR=/mnt/current. You can use bhyve for virtualization, by the way.
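A rough sketch of that NFS workflow (the VM hostname, mount point and program directory are placeholders):

mount -t nfs current-vm:/ /mnt/current
cd /usr/src/usr.bin/yourprog
make install DESTDIR=/mnt/current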

Changing installation directory of installed DB2 in AIX 7.1

I have installed DB2 10.1 in AIX 7.1 at /opt/IBM/db2/V10.1. But there is a script which is expecting DB2 at /opt/db2_10_1.
I am not sure whether it is possible to change the directory of installed software and, if I do it, what points I have to keep in mind before performing this step.
FYI- I am not an AIX or DB2 expert. I am just performing this task as instructed.
Did your instructions specify a non-default path for the Db2-installation?
(The path /opt/IBM/db2/V10.1 is a typical default for AIX.)
Do not manually hack the installation directory of Db2 just because a script is badly written! Responsible admins would never allow such mistakes in production environments.
It is an error for a script to hard-code a Db2-installation path. That script should be coded correctly to determine the Db2-installation path, or to have that information provided via configuration or arguments.
A possible option is to create a symbolic link so that /opt/db2_10_1 points to the real path at /opt/IBM/db2/V10.1, but this is not guaranteed to work in all situations; it depends on how badly written the script is, so other errors may appear later from that script (although Db2 itself will function normally).
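If you do go the symlink route, it is a one-liner run as root (adjust the paths if your installation differs):

ln -s /opt/IBM/db2/V10.1 /opt/db2_10_1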
A separate matter is that it is unwise to install a Db2 version that is already out of support (end of life). Does the business understand the consequences of installing an out-of-support version? (unless the business has purchased an extended support contract from IBM).
You have to make a new install:
stop the instance
rename the sqllib directory
recreate the instance using db2icrt from the binaries in the new install directory
import the catalogued databases with db2cfimp, previously exported using db2cfexp
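Very roughly, and only as a sketch of those steps (the instance name db2inst1, fenced user db2fenc1, home directory and new install path are placeholders; check the Db2 documentation for the exact options):

su - db2inst1 -c "db2cfexp /tmp/connect.cfg backup"
su - db2inst1 -c "db2stop"
mv /home/db2inst1/sqllib /home/db2inst1/sqllib.old
/opt/db2_10_1/instance/db2icrt -u db2fenc1 db2inst1
su - db2inst1 -c "db2cfimp /tmp/connect.cfg"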

Vagrant Berkshelf - Shelf Path?

Is it possible to set the path where the berkshelf plugin puts the cookbooks it installs? (As in the .berkshelf folder)
I am running Windows 7.
I am currently trying to install a MySQL server to a VM using an Opscode cookbook, and here at work the %HOMEDRIVE% system variable is set to a network drive. So when Berkshelf starts at the beginning of the Vagrantfile, it pushes the cookbooks to the network drive, which makes it slow and, well, it's not where it should be. Is there a fix for this?
VirtualBox did this as well, but I fixed it by altering its settings. I tried looking for an equivalent setting for Berkshelf, but the closest I got was for standalone Berkshelf (that's not a Vagrant plugin), where it appears you can set this environment variable:
ENV['BERKSHELF_PATH']
Found here:
http://www.rubydoc.info/github/RiotGames/berkshelf/Berkshelf#berkshelf_path-class_method
I need the cookbooks it reads from the Berksfile to be stored on my laptop's local drive instead, as in my scenario I cannot have the mobility of the VM limited to the building because of files stored on the network.
Any insight would be much appreciated.
Perhaps it's better to use actual Berkshelf over the Vagrant plugin?
Thanks.
If you want portability (a full chef-repo ready for chef-solo runs), you are better off using standalone Berkshelf instead of the vagrant-berkshelf plugin, which is not that flexible.
For complex cookbooks, I prefer standalone Berkshelf, as it allows me to run berks install --path chef/cookbooks to copy all the required cookbooks from ~/.berkshelf/cookbooks; then I can just tar the whole thing and transfer it to other machines for the same chef-solo run. Some people use Capistrano to automate the tar and scp/rsync over the network; I just use rsync/scp ;-)
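A sketch of that standalone workflow (the BERKSHELF_PATH value and target host are examples only):

export BERKSHELF_PATH=$HOME/.berkshelf
berks install --path chef/cookbooks
tar czf chef-repo.tar.gz chef/
scp chef-repo.tar.gz othermachine:/tmp/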
HTH

Automating Solaris custom software deployment and configuration for multiple nodes

Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10.
Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Much like any server-side software package out there, I need to ensure that a list of prerequisites is met before extracting and running the software (a shell sketch of these checks follows the list). For example:
Checking that certain users exist and are associated with one or more user groups; if not, creating them and their group associations.
Checking that target application folders exist and, if not, creating them with pre-configured path values defined when the package was assembled.
Checking that those folders have the appropriate access control level and ownership for a certain user; if not, setting them.
Checking that a set of environment variables is defined in /etc/profile, pointing to predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files involved include /etc/services and /etc/system.
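Here is the shell sketch of those checks on Solaris 10; the user, group, path and variable names are made up for illustration:

#!/bin/sh
getent group appgrp >/dev/null || /usr/sbin/groupadd appgrp
id appuser >/dev/null 2>&1 || /usr/sbin/useradd -g appgrp -m -d /export/home/appuser appuser
mkdir -p /opt/myapp
chown -R appuser:appgrp /opt/myapp
chmod 0755 /opt/myapp
grep MYAPP_HOME /etc/profile >/dev/null || cat >> /etc/profile <<'EOF'
MYAPP_HOME=/opt/myapp
PATH=$PATH:$MYAPP_HOME/bin
export MYAPP_HOME PATH
EOF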
Obviously, doing this for many boxes (the goal in question) by hand will certainly be slow and error prone.
I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another.
Traditional shell scripts. I've only troubleshot these before and don't really have much experience with them. They would be my last resort.
Python scripts using the pexpect library for analyzing system command output. This was my initial choice, since the target Solaris environments have it installed. However, I want to make sure I'm not reinventing the wheel again :P.
Ant or Gradle scripts. They may be an option since the boxes also have Java 1.5, and the fileset abstractions can be very useful. However, they may fall short when it comes to checking/setting users and folder permissions.
It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this.
I thank you for your time and help.
Most of those steps sound like things handled by use of a packaging system to install your package. On Solaris 10, that would be the SVR4 packaging system included with the OS.
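For illustration only, the rough shape of building such an SVR4 package (the package name, paths and staging directory are invented; see pkgmk(1) and pkgadd(1M) for details, and note that a pkginfo file with PKG, NAME, VERSION, ARCH and CATEGORY entries is needed in the staging directory):

cd /var/tmp/stage
echo "i pkginfo=./pkginfo" > prototype
pkgproto . >> prototype
pkgmk -o -r /var/tmp/stage -d /var/spool/pkg
pkgadd -d /var/spool/pkg MYAPPd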

Deploying Perl to a share nothing cluster

Does anyone have suggestions for deployment methods for Perl modules to a shared-nothing cluster?
Our current method is very manual.
Take down half the cluster
Copy Perl modules (CPAN-style modules) to the downed cluster members
ssh to each member and run perl Makefile.PL; make; make install for each module to be installed
Confirm deployment
Put the newly deployed cluster members back in service, take the old cluster members out of service, and repeat steps 2 -> 4
This is obviously far from optimal. Does anyone have or know of a good tool chain for deploying Perl modules to a shared-nothing cluster?
Take one node offline, install Perl, and then use it to reimage the other nodes.
At least, that's how I imagine you'd want to install software in a shared-nothing cluster. Perl is just the application you happen to be installing.
Assuming all the machines are identical, you should be able to keep one canonical installation and use rsync or something similar to keep the others updated.
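For example, something along these lines (the path and node name are placeholders):

rsync -az --delete /opt/perl/ node02:/opt/perl/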
I have, in the past, developed a Perl program which used the Expect module (from CPAN) to automate basically the process you described, automatically sshing to each host, copying any necessary files, and performing the installations. Unfortunately, this was developed on-site for a client, so I do not have access to the code to share. If you're familiar with Expect, it shouldn't be too difficult to set up, though.
We currently have a clustered Perl application that does data processing. We also have numerous CPAN modules and modules that we've developed that the software depends on. When you say 'shared nothing', I'm assuming you're referring to things like NFS mounts.
If the machines have identical configurations, then you may be able to build your entire application into a single directory structure (e.g., /opt/my-app), tar it up, and that could become the only thing you need to push to the boxes.
As far as deploying it to the boxes, you might be able to use Capistrano. We developed a couple of our own cluster utilities that piggybacked off of ssh - I've released one form of that utility: parallel-jobs. Its README shows an example of executing multiple parallel ssh commands. It's a small step to extend that program to be able to know about your cluster and then be able to execute the same command across the cluster (as opposed to a series of different commands).
If you are using a Debian or Ubuntu OS, you could package your Perl modules. I have open-sourced some code to help with this: Perl module builder. It's still very rough but does work, and it can be made to work on your own code as well as CPAN modules; this then makes deployment much easier.
There is also a project to get RedHat RPMs for all of CPAN; Dave Cross gave a talk, Perl in RPM-Land, which may be of use.
If you are on some other system which doesn't have packaging, then the rsync option (install on one machine and then rsync to the others) should work as well; note that you can mount a Windows share and rsync to it from Unix if needed.
Using a central manager like Puppet makes creating and maintaining machines in a cluster a lot easier, from installing code to managing users and email configuration. There is also a Perl project in the pipeline to do something similar, but this has not been made public yet.
Capistrano is a tool that allows you to run commands on a group of servers; it is perfectly suited to making your task considerably easier.
Further down the line of automation, but also complexity, is Puppet, which allows you to define a group of servers, give them roles, and then push out sets of code to every machine subscribing to a certain role.
I am not sure exactly what a shared-nothing cluster is, but if it uses some base *nix system like Fedora, Mandriva, or Ubuntu, many of the Perl modules are precompiled for specific architectures and you can easily run these.
If these systems are of the same arch, you can do as someone else said and just copy the compiled modules from system to system; just make sure you have all of the dependencies as well on the recipient system.