PostgreSQL with different data directories

Today I installed PostgreSQL. While reading the PostgreSQL documentation, I found that there can be more than one data directory present. Can there be more than one data directory for a single installation, or have I misunderstood?
In my installation, the data directory is
C:\Program Files\PostgreSQL\8.3\data
If there can be more than one data directory for a single installation, what would the directory structure look like? Please help me understand.

I think you mean a tablespace; please check the manual: CREATE TABLESPACE
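A tablespace lets you keep a single data directory while placing particular tables or databases on another drive. A minimal sketch via psql (the directory, tablespace, and table names here are placeholders; the directory must already exist and be owned by the PostgreSQL service account):

    # register a directory on another drive as a tablespace
    psql -U postgres -c "CREATE TABLESPACE fastspace LOCATION 'D:/pgdata/fastspace'"
    # create a table whose files live in that tablespace
    psql -U postgres -c "CREATE TABLE images (id int, data bytea) TABLESPACE fastspace"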

There can be only one data directory per PostgreSQL cluster. A cluster is a postmaster listening on a port, managing several databases out of a single data directory. You can have multiple clusters by starting multiple postmasters with pg_ctl or via a system service, each listening on a different port and using a different data directory.
If you have multiple clusters on a machine, you have multiple data directories. Needing to do this is unusual, but it is possible; a sketch follows.
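If you do want a second cluster alongside the first, it looks roughly like this (paths and the port are placeholders; on Windows you would run initdb.exe and pg_ctl.exe from the installation's bin directory):

    # initialize a second data directory and start it on its own port
    initdb -D /var/lib/postgresql/cluster2
    pg_ctl -D /var/lib/postgresql/cluster2 -o "-p 5433" start
    # connect to the second cluster by port
    psql -p 5433 -U postgres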
It would be immensely helpful if you'd link to the documents you're talking about when asking questions about them.

Related

Can two Postgres services share one common PGDATA folder, one at a time?

Can I share data between two Postgres services on separate machines (the PGDATA folder would be in a shared location) while only one service runs at a time?
PostgreSQL has a number of ways to make sure that you cannot start two postmaster processes on the same data directory, but if you mount a filesystem on two machines, these mechanisms will fail. So you would have to make very sure that you never start servers on both machines, since that would lead to data corruption. Moreover, you would have to make sure that the remote file system is reliable; a Windows network share, for example, is not.
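You can see one of those mechanisms on a single machine (a sketch; the data directory path is a placeholder):

    pg_ctl -D /var/lib/postgresql/data start
    # a second start on the same data directory fails because
    # postmaster.pid already exists
    pg_ctl -D /var/lib/postgresql/data start
    # that lock file check works per host, which is exactly why it
    # cannot protect a data directory mounted on two machines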
So, all in all, my only recommendation is "don't do that". For high availability, use a proven shared-nothing architecture like Patroni.

OrientDB Distributed Database

I have been trying for hours now to set up a distributed database with OrientDB. I have followed their instructions here https://orientdb.com/docs/last/Tutorial-Setup-a-distributed-database.html but have had no luck. I am able to start a server on the first node. I then copy the directory and start a server in the copied directory, but the two nodes won't communicate with each other; each just acts as the first node. I tried using TCP instead of multicast in the hazelcast.xml file, but that didn't seem to help. Any help would be greatly appreciated.
If you really want to set up a distributed environment that way, just copy the OrientDB folder before you set up any nodes, or just download a fresh new installation.
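In other words, each node should start from an untouched copy of the distribution, not from a directory that has already been initialized. A rough sketch (version and paths are placeholders):

    # unpack one fresh distribution per node before first start
    tar xzf orientdb-community-x.y.z.tar.gz
    cp -r orientdb-community-x.y.z node1
    cp -r orientdb-community-x.y.z node2
    # start each node with the distributed server script
    node1/bin/dserver.sh &
    node2/bin/dserver.sh &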

How to configure mongo to use different volumes for databases?

I see that Mongo has the configuration option storage.directoryPerDB, but I only see storage.dbPath for specifying where data is stored.
We have 2 small, frequently used "settings" databases that will be stored locally in the default location. There is another "results" database for large image files that will be written often but queried infrequently; it has a dedicated SSD drive for its storage. This data needs to be on its own drive because our application can generate hundreds of gigs of image data in a small amount of time.
How can I configure mongod to store a database on a different drive? The server is running on Windows, if that makes any difference.
Never mind. The documentation at http://docs.mongodb.org/manual/reference/configuration-options/#storage.directoryPerDB explains how to do it perfectly, along with http://technet.microsoft.com/en-us/library/cc753321.aspx#BKMK_CMD, which describes how to mount a drive to a folder location.
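For anyone landing here later: with directoryPerDB enabled, each database gets its own subdirectory under dbPath, so you can mount the SSD volume at the subdirectory belonging to the large database. A sketch for a Windows command prompt (paths are placeholders, and the volume GUID must come from your own mountvol output):

    :: create the folder for the "results" database and mount the SSD volume there
    mkdir C:\data\db\results
    mountvol C:\data\db\results \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
    :: start mongod with one subdirectory per database
    mongod --dbpath C:\data\db --directoryperdb

Note that directoryPerDB cannot simply be switched on for an existing dbPath; it has to be enabled on a fresh data directory (or after a dump and restore).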

Using Vagrant & Puppet, how to create and restore a database on a fresh postgresql-server instance?

I have freshly provisioned instances of Apache and Postgres all set to go. I would like to restore a dump, or mount a logical volume with data, to the Postgres instance. Likewise, I'd like to ensure that the dump is written out, or the volume unmounted, when I bring the instance down.
Can I use a logical volume this way? How should I approach this?
I see this:
How to handle data such as Mysql, web sites sources with Vagrant?
The other answer had the following suggestions. Below I will discuss their implications for PostgreSQL.
In the current version of Vagrant (1.0.3), you have two main options:
1. Use shared folders. You can put your MySQL data directory into a shared folder so that the data comes back onto your host machine. The con of this is that shared folders are actually quite slow compared to the native VM filesystem in VirtualBox, and you can run into weird permission issues as well.
2. Set up a task (rake, make, etc.) to copy your MySQL data to your shared folder on demand. Then, before you decide to destroy your VM, you can run the task to export your data to your shared folder, and you can reimport the data when you bring your VM back up.
The shared folders approach may work, but if you do this you need to be extremely careful with file permissions. PostgreSQL is very picky here: it refuses to start if the data directory is group- or world-accessible, so watch the permissions a shared folder imposes.
I would recommend something based on the second approach, using a base backup (pg_basebackup), since you get a physical copy of your database. You can also archive your WAL segments to that directory to have something that can be restored on demand to near-present state.
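A sketch of that flow (host, user, and paths are placeholders; -X stream needs a reasonably recent PostgreSQL):

    # copy the whole cluster into the shared folder, streaming the WAL
    # needed to make the copy consistent
    pg_basebackup -h localhost -U postgres -D /vagrant/pg_backup -X stream -P
    # restoring on a fresh VM: stop the server and swap in the backup
    pg_ctl -D /var/lib/postgresql/data stop
    rm -rf /var/lib/postgresql/data
    cp -r /vagrant/pg_backup /var/lib/postgresql/data
    chmod 700 /var/lib/postgresql/data
    pg_ctl -D /var/lib/postgresql/data start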

Sphinx - NFS index

We are running Sphinx version 1.10. We have multiple Sphinx servers running searchd behind a load balancer. We want to share the same index files across the servers via NFS. We do not want to use rsync, because different servers would pick up updated indexes at different times, creating inconsistency in the search output.
Due to the .lock file creation, we are currently unable to start searchd on multiple servers over NFS. Any solution would be of great help!
You can use rsync and still rotate all the servers in unison: do the reindexing and the syncing first, and then control when the servers actually rotate in the new index.
Works well. A couple of mentions of it here:
http://sphinxsearch.com/forum/search.html?q=rsync+sighup&f=1
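A sketch of that staging-then-rotating flow (the index name, paths, hosts, staging config, and pid file location are placeholders). searchd looks for index files carrying a .new infix when it receives SIGHUP, so you can distribute first and rotate everywhere at a moment you choose:

    # build the index into a staging area on the indexing box
    indexer --config /sphinx/staging.conf myindex
    # push every index file to each search server under the .new infix
    for host in search1 search2; do
      for f in /sphinx/staging/myindex.sp?; do
        rsync -a "$f" "$host:/sphinx/data/myindex.new.${f##*.}"
      done
    done
    # once all servers have the staged files, rotate them in unison
    for host in search1 search2; do
      ssh "$host" 'kill -HUP "$(cat /var/run/searchd.pid)"'
    done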
I can say it is impossible to share indexes between two or more searchd instances. You have to implement something similar to rsync; see how we are doing Sphinx replication.