Is it possible to run Postgres on a write-protected file system? Or a shared file system? - postgresql

I'm trying to set up a distributed processing environment,
with all of the data sitting on a single shared network drive.
I will only ever read from it, never write to it,
so we're considering write-protecting the network drive as well.
I remember when I was working with MSSQL,
I could back up databases to a DVD and load it directly as a read-only database.
If I can do something like that in Postgres,
I should be able to give it an abstraction like a read-only DVD,
and all will be good.
Is something like this possible in Postgres,
if not, any alternatives? (MySQL? sqlite even?)
Or if that's not possible is there some way to specify a shared file system?
(Make it know that other processes are reading from it as well?)
For various reasons, using a parallel DBMS is not possible,
and I need two DB processes running in parallel...
Any help is greatly appreciated.
Thanks!!

Write-protecting the data directory will cause PostgreSQL to fail to start, as it needs to be able to write postmaster.pid. PostgreSQL also needs to be able to write temporary files and tablespaces, set hint bits, manage the visibility map, and more.
In theory it might be possible to modify the PostgreSQL server to support running on a read-only database, but right now AFAIK this is not supported. Don't expect it to work. You'll need to clone the data directory for each instance.
If you want to run multiple PostgreSQL instances for performance reasons, having them fighting over shared storage would be counter-productive anyway. If the DB is small enough to fit in RAM it'd be OK ... but in that case it's also easy to just clone it to each machine. If the DB isn't big enough to be cached in RAM then both DB instances would be I/O bottlenecked and unlikely to perform any better than (probably slightly worse than) a single DB not subject to storage contention.
There's some chance that you could get it to work by:
Moving the constant data into a new tablespace onto read-only shared storage
Taking a basebackup of the database, minus the newly separated tablespace for shared data
Copying the basebackup of the DB to read/write private storage on each host that'll run a DB
Mounting the shared storage and linking the tablespace in place where Pg expects it
Starting pg
... at least if you force hint-bit setting and VACUUM FREEZE everything in the shared tablespace first. It isn't supported, it isn't tested, it probably won't work, there's no benefit over running private instances, and I sure as hell wouldn't do it, but if you really insist you could try it. Crashes, wrong query results, and other bizarre behaviour are not unlikely.
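If you really want to experiment, the steps above might look something like the following. This is an untested sketch; the database, tablespace and table names and the paths are all made up for illustration:

psql -d mydb -c "CREATE TABLESPACE shared_ro LOCATION '/mnt/shared/pg_tblspc'"
psql -d mydb -c "ALTER TABLE reference_data SET TABLESPACE shared_ro"
# force hint bits and freeze all tuples before the storage goes read-only
psql -d mydb -c "VACUUM FREEZE reference_data"
# then take a base backup that excludes /mnt/shared/pg_tblspc, copy it to each
# host's private read/write storage, recreate the tablespace symlink under
# pg_tblspc/ on each clone, and only then start the server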

I've never tried it, but it may be possible to run Postgres with a data dir that is mostly on a read-only file system, if all your use really is read-only. You will need to be sure to disable autovacuum. I think even read activity may generate xlog writes, so you will probably have to symlink the pg_xlog directory onto a writable file system. Read queries can also spill to disk for large sorts or other temp requirements, so you should link base/pgsql_tmp to a writable disk area as well.
As Richard points out, there are visibility hint bits in the data heap. You may want to try VACUUM FULL FREEZE ANALYZE on the db before putting it on the read-only file system.
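For what it's worth, the relinking described above might look like this. A sketch only, run with the server stopped; $PGDATA and the writable paths are assumptions:

# postgresql.conf: autovacuum = off
mkdir -p /var/local/pg_writable/pgsql_tmp
mv $PGDATA/pg_xlog /var/local/pg_writable/pg_xlog
ln -s /var/local/pg_writable/pg_xlog $PGDATA/pg_xlog
rm -rf $PGDATA/base/pgsql_tmp          # recreate it as a symlink instead
ln -s /var/local/pg_writable/pgsql_tmp $PGDATA/base/pgsql_tmp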

"Is something like this possible in Postgres, if not, any alternatives? (MySQL? sqlite even?)"
I'm trying to figure out whether I can do this with Postgres as well, to port over a system from sqlite. I can confirm that this works just fine with sqlite3 database files on a read-only NFS share; sqlite works nicely for this purpose.
With sqlite, we cut over to a new directory with new sqlite files whenever there are updates; we never insert into the in-use database. I'm not sure whether inserts would pose any problems (with either database). Caching read-only data at the OS level could be an issue if another database instance mounted the dir read-write. This is something I would ideally like to be able to do.
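For reference, opening an sqlite file strictly read-only looks something like this (the path and table name are made up; recent sqlite3 shells accept URI filenames, and immutable=1 additionally promises the file cannot change, which suits read-only media):

sqlite3 'file:/mnt/nfs_share/data.db?mode=ro&immutable=1' 'SELECT count(*) FROM readings;'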

Related

Which types of sanitization processes does PostgreSQL support?

If we just drop a database from PostgreSQL, can we say that the data is deleted permanently and securely?
How can we follow industry-standard sanitization processes, e.g. NIST 800-88, with PostgreSQL?
There is no support for that built into PostgreSQL. Secure deletion is a physical property of the underlying storage, and PostgreSQL only uses the kernel's file system API; it has no knowledge of the file system's inner workings.
Even if PostgreSQL went as far as overwriting files with random data before deleting them (which it doesn't), that wouldn't achieve anything on a copy-on-write file system after a snapshot has been taken.
You are approaching this on the wrong layer. This requirement has to be handled on the file system level.
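One file-system-level approach that maps onto NIST 800-88 is cryptographic erase: keep the data directory on an encrypted volume, so sanitization reduces to destroying the key. A rough sketch, assuming a dedicated device /dev/sdb1 and a reasonably recent cryptsetup:

cryptsetup luksFormat /dev/sdb1
cryptsetup open /dev/sdb1 pgcrypt
mkfs.ext4 /dev/mapper/pgcrypt
mount /dev/mapper/pgcrypt /var/lib/postgresql
# later, to sanitize everything at once: destroy all key slots
cryptsetup luksErase /dev/sdb1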

Postgres - Is it necessary to create tablespace in my case?

I have a mobile/web project, using pg9.3 as the database and Linux as the server.
The data won't be huge, but it will grow as time goes on.
With the long term in mind, I want to know about:
Questions:
1. Is it necessary for me to create a tablespace for my database, or should I just use the default one?
2. If I create a new tablespace, what is the proper location on Linux to create the folder, and why?
3. If I don't create it now and wait until I have to, will it be easy to migrate the db with its data to a new tablespace then?
Just use the default tablespace; do not create new tablespaces. Tablespaces are only useful if you have multiple physical disks, so you can define which data is stored on which physical disk. The directory where your data is located is not that important for the workings of Postgres, so if you only have one disk there is no point in using tablespaces.
Should your data grow beyond the capacity of one disk, you will have to perform a full data migration anyway to move it to another physical disk, so you can configure tablespaces at that time.
The idea behind defining which data is located on which disk (with tablespaces) is that you can do things like putting a big table which is hardly used on a slow disk, and putting a very intensively used table on a separate, faster disk. But I assume you're not there yet, so don't overcomplicate things; when you do get there, the sketch below shows what the move involves.
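A minimal sketch, with the mount point, tablespace name and table name all hypothetical:

# as root: the directory must exist and belong to the postgres OS user
mkdir -p /mnt/disk2/pg_tblspc
chown postgres:postgres /mnt/disk2/pg_tblspc
# as a database superuser:
psql -d mydb -c "CREATE TABLESPACE disk2 LOCATION '/mnt/disk2/pg_tblspc'"
psql -d mydb -c "ALTER TABLE big_table SET TABLESPACE disk2"  # rewrites the table under an exclusive lock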

postgres copy database to another server reduces database size

I installed Postgres 9.1 on both machines.
Initially the DB size was 7052 MB; then I used the following command to copy it to the other server.
pg_dump -C dbname | bzip2 | ssh remoteuser@remotehost "bunzip2 | psql dbname"
After the copy completed successfully, I checked the size on the destination machine; it shows 6653 MB.
Then I checked the table count; it is the same.
Has there been data loss? Is there missing data?
Note:
Two machines have same hardware and software configuration.
i used:
SELECT pg_size_pretty(pg_database_size('dbname'));
One of PostgreSQL's most sophisticated features is the so-called Multi-Version Concurrency Control (MVCC), a standard technique for avoiding conflicts between reads and writes of the same object in a database. MVCC guarantees that each transaction sees a consistent view of the database by reading non-current data for objects modified by concurrent transactions. Thanks to MVCC, PostgreSQL has great scalability, a robust hot backup tool and many other nice features comparable to the most advanced commercial databases.
Unfortunately, there is one downside to MVCC: the database tends to grow over time, and sometimes that can be a problem. In recent versions of PostgreSQL there is a separate server process called the autovacuum daemon, whose purpose is to keep the database size reasonable. It does that by trying to recover reusable chunks of the database files. Still, there are many scenarios that will force the database to grow even if the amount of useful data in it doesn't really change. That happens typically if you have lots of UPDATE and/or DELETE statements in the applications that are using the database.
When you dump and restore the database, that dead space is not carried over, so your copied DB appears smaller.
That looks normal. Databases are often smaller after restore, because a newly created b-tree index is more compact than one that's been progressively built by inserts. Additionally, UPDATEs and DELETEs leave empty space in the tables.
So you have nothing to worry about. You'll find that if you diff an SQL dump from the old DB and a dump taken from the just-restored DB, they'll be the same except for comments.
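If you want more reassurance than the table count, compare per-table row counts on both servers, or diff fresh dumps from each side. A quick sketch (note that n_live_tup is an estimate, so run ANALYZE first, or use count(*) for exact numbers):

psql -d dbname -c "SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY 1;"
# or dump on each server and compare:
pg_dump dbname > /tmp/dump_source.sql        # on the source
pg_dump dbname > /tmp/dump_destination.sql   # on the destination
diff /tmp/dump_source.sql /tmp/dump_destination.sql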

Physical location of objects in a PostgreSQL database?

I'd like to find the physical locations of the tables, views, functions, and data/content of a PostgreSQL installation on Linux. In my scenario, PostgreSQL could be installed on an SD card or on a hard disk. If I have tables, views, functions, and data on the SD card, I want to find their physical locations and merge/copy them onto my hard disk whenever I wish to change the storage device. I assume the database is stored as plain files.
Also, is it possible to view the contents of those files? I mean, can I access them?
Kevin and Mike already provided pointers where to find the data directory. For the physical location of a table in the file system, use:
SELECT pg_relation_filepath('my_table');
Don't mess with the files directly unless you know exactly what you are doing.
A database as a whole is represented by a subdirectory in PGDATA/base.
If you use tablespaces it gets more complicated. Read details in the chapter Database File Layout in the manual:
For each database in the cluster there is a subdirectory within
PGDATA/base, named after the database's OID in pg_database. This
subdirectory is the default location for the database's files; in
particular, its system catalogs are stored there.
...
Each table and index is stored in a separate file. For ordinary
relations, these files are named after the table or index's filenode
number, which can be found in pg_class.relfilenode.
...
The pg_relation_filepath() function shows the entire path (relative to
PGDATA) of any relation.
See the manual for the function pg_relation_filepath().
The query SHOW data_directory; will show you the main data directory. But that doesn't necessarily tell you where things are stored.
PostgreSQL lets you define new tablespaces. A tablespace is a named directory in the filesystem. PostgreSQL lets you store individual tables, indexes, and entire databases in any permissible tablespace. So if a database were created in a specific tablespace, I believe none of its objects would appear in the data directory.
For solid run-time information about where things are stored on disk, you'll probably need to query pg_database, pg_tablespace, or pg_tables from the system catalogs. Tablespace information might also be available in the information_schema views.
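As a starting point, something like this query (a sketch assuming ordinary tables in the public schema) maps each table to its file path and tablespace; a NULL tablespace means the default one:

psql -d mydb -c "
SELECT c.relname,
       pg_relation_filepath(c.oid) AS file_path,
       t.spcname AS tablespace
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
WHERE c.relkind = 'r' AND n.nspname = 'public';"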
But for merging or copying to your hard disk, using these files is almost certainly a Bad Thing. For that kind of work, pg_dump is your friend.
If you're talking about copying the disk files as a form of backup, you should probably read this, especially the section on Continuous Archiving and Point-in-Time Recovery (PITR):
http://www.postgresql.org/docs/current/interactive/backup.html
If you're thinking about trying to directly access and interpret data in the disk files, bypassing the database management system, that is a very bad idea for a lot of reasons. For one, the storage scheme is very complex. For another, it tends to change in every new major release (issued once per year). Thirdly, the ghost of E.F. Codd will probably haunt you; see rules 8, 9, 11, and 12 of Codd's 12 rules.

Postgresql PITR backup: best practices to handle multiple databases?

Hi guys, I have a PostgreSQL 8.3 server with many databases.
I'm planning to back up those DBs with a script that will store each backup in a folder with the same name as the DB, for example:
/mypath/backup/my_database1/
/mypath/backup/my_database2/
/mypath/backup/foo_database/
Every day I take one dump every 2 hours, overwriting the previous day's files. For example, in the my_database1 folder I have:
my_database1.backup-00.sql //backup made every day at 00:00
my_database1.backup-02.sql //backup made every day at 02:00
my_database1.backup-04.sql //backup made every day at 04:00
my_database1.backup-06.sql //backup made every day at 06:00
my_database1.backup-08.sql //backup made every day at 08:00
my_database1.backup-10.sql //backup made every day at 10:00
[...and so on...]
This is how I currently make sure I can restore every database while losing at most 2 hours of data.
Two hours still looks like too much.
I've had a look at PostgreSQL PITR through the WAL files, but those files seem to contain the data for all of my databases.
I would need to separate those files the same way I separate the dump files.
How?
Otherwise, is there another easy-to-install backup procedure that would let me restore a single database to 10 seconds earlier, without creating a dump file every 10 seconds?
It is not possible with one instance of PostgreSQL.
You can divide your 500 tables between several instances, each listening on a different port, but it would mean that they will not use resources like memory effectively (memory reserved but unused in one instance cannot be used by another).
Slony will also not work here, as it does not replicate DDL statements, like dropping a table.
I'd recommend doing both:
continue to do your pg_dump backups, but try to smooth them out - throttle pg_dump's I/O bandwidth so it does not cripple the server, and run it continuously - when it finishes with the last database, immediately start again with the first one;
additionally, set up PITR (see the sketch below).
This way you can restore a single database fast, but you can lose some data. If you decide that you cannot afford to lose that much data, you can restore your PITR backup to a temporary location (with fsync=off and pg_xlog symlinked to a ramdisk for speed), pg_dump the affected database from there, and restore it into your main database.
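A minimal PITR setup on 8.3 might look like this; the archive directory and data directory paths are hypothetical:

# postgresql.conf:
#   archive_mode = on
#   archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'
# then take a base backup while the server keeps running:
psql -c "SELECT pg_start_backup('base_backup');"
rsync -a --exclude pg_xlog /var/lib/pgsql/data/ /mnt/backup/base/
psql -c "SELECT pg_stop_backup();"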
Why do you want to separate the databases?
The way PITR works, this is not possible, since it operates on the complete cluster.
What you can do in that case is create a separate data directory and cluster for each of those databases (not recommended, though, since it will require different ports and postmaster instances).
I believe that the benefits of using PITR instead of regular dumps outweigh having separate backups for each database, so perhaps you can rethink the reasons why you need to separate them.
Another way could be to set up replication with Slony-I, but that would require a separate machine (or instance) that receives the data. On the other hand, that way you would have a replicated system in near real time.
Update for comment:
To recover from mistakes like deleting a table, PITR would be perfect, since you can replay to a specific point in time. However, for 500 databases I understand that can be a lot of overhead. Slony-I would probably not work, since it replicates changes - it would replicate the mistake as well. I'm not sure how it handles table deletions.
I am not aware of any other ways to go. What I would do would probably still be to go for PITR and just not make any mistakes ;). Jokes aside, depending on how frequently mistakes are made, this could be a solution:
Set it up for PITR
have a second instance ready on standby.
When a mistake happens, replay the restore to the point in time on the second instance.
Do a pg_dump of the affected database from that instance.
Do a pg_restore on the production instance for that database.
However, it would require you to have a second instance ready, either on the same server or a different one (different is recommended). Also, the restore time would be a bit longer since it would require you to do one extra dump and restore.
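The replay-and-extract part might look like this on the second instance; a sketch, with the port numbers, paths and target time all made up:

# recovery.conf on the second instance:
#   restore_command = 'cp /mnt/wal_archive/%f %p'
#   recovery_target_time = '2010-05-10 09:59:00'
# once it has recovered to that point:
pg_dump -p 5433 affected_db > affected_db.sql    # dump from the second instance
# drop or rename the damaged objects in production, then:
psql -p 5432 -d affected_db -f affected_db.sql   # restore into production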
I think the way you are doing this is flawed. You should have one database with multiple schemas and roles. Then you can use PITR. However, PITR is not a replacement for dumps.