How safe is it to use multiple Postgres schemas? - postgresql

Heroku warns against using multiple schemas with Postgres, but does not specify what the numerous operational problems are.
As posted in the Heroku docs:
The most common use case for using multiple schemas in a database is
building a software-as-a-service application wherein each customer has
their own schema. While this technique seems compelling, we strongly
recommend against it as it has caused numerous cases of operational
problems. For instance, even a moderate number of schemas (> 50) can
severely impact the performance of Heroku’s database snapshots tool,
PG Backups.
I think the backup problem can be solved by adding a follower database.
I have 60 tables per schema, so with 1000 schemas I will have 60,000 tables. How will this impact database performance? What kinds of problems can I expect while scaling?

The first issues with running a large number of schemas and/or tables don't typically interfere with a running database. The major problem operators run into is that they become unable to create logical backups of the database. Running heroku pg:backups is likely to fail, as is running pg_dump manually. Typically, this is the error you'll see in the logs of the attempted backup:
ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction
The large number of locks needed ends up causing OOM conditions for the database. This isn't always a problem on Heroku: if you're using a production database, you can rely on their point-in-time recovery option as your DR solution. That said, exporting the data off Heroku will be difficult if you can't run a logical backup, since they don't currently support external replication. It's less than ideal, but hypothetically you could dump the database schema by schema in order to avoid the OOM conditions.
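As a rough sketch only (the schema-listing query and file naming here are assumptions, not anything Heroku documents), a schema-by-schema dump could look like this:
# Dump each schema into its own custom-format file so each pg_dump run
# only needs locks for one schema's tables. Assumes DATABASE_URL is set.
for schema in $(psql "$DATABASE_URL" -At -c "SELECT nspname FROM pg_namespace WHERE nspname NOT LIKE 'pg_%' AND nspname <> 'information_schema'"); do
  pg_dump "$DATABASE_URL" --schema="$schema" -Fc -f "${schema}.dump"
done
Each run then holds far fewer locks than a single whole-database pg_dump would.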

Related

Postgres pg_dumpall consistency across databases that are backed up

Is there a way to back up all of the databases on a hyperscaler-managed Postgres server as of a certain point in time, in order to maintain data consistency between the databases, using either pg_dumpall, pg_dump, or something else?
Background:
With the use of micro-services, an application may have many databases associated with it on a single hyperscaler-managed Postgres server. The hyperscalers do perform a functional snapshot backup; however, when a hyperscaler-managed Postgres server is accidentally deleted, its backups are lost as well. The hyperscalers provide locks to prevent accidental deletion of a Postgres server and say their support teams can be contacted to restore a deleted server, yet we still had a Postgres server get deleted. We were able to recover by contacting the hyperscaler's support team, but we would like a second way of backing up a hyperscaler-managed Postgres server.
I realize that the micro-services should be able to auto-recover to a data-consistent point, but the reality is that many of them have not been designed or written to that requirement. I really do not want to get into micro-service design here and want to keep this a DBA backup question.
pg_dumpall will not take a consistent backup across all databases. Each database backup will be consistent, but the snapshots for the backups of the different databases will be taken at different times.
If you need a consistent backup across several databases in a single cluster, use an online file system backup with pg_basebackup.
You can use pg_basebackup, which most PostgreSQL DBAs end up using on a daily basis, whether via scripts or manually. It creates a base backup of the entire cluster, which can help with recovery in many situations. Because it takes an online backup of the database, it is very useful in production.
You can review the pg_basebackup documentation for more details.
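As a rough illustration (host, user, and paths are placeholders), a basic pg_basebackup run could look like this; it copies the whole cluster, so every database in it is captured at a single consistent point:
# Online physical backup of the entire cluster into a dated directory.
# The connecting role needs the REPLICATION attribute.
pg_basebackup -h db.example.com -U backup_user -D /backups/base_$(date +%F) -Ft -z -X stream -P
# -Ft -z writes compressed tar files; -X stream includes the WAL needed to
# make the backup self-consistent; -P prints progress.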

Postgres and multiple locations of data storage

Postgres stores its data in the default location on my C drive. I would like to restore a backup to another database but access it via the same Postgres server instance. The issue is that the database is too big to be restored on the same C drive. Would it be possible to tell Postgres that the second database should be restored and placed on another location/drive (while keeping the first one where it is)? Like database1 on my C drive and database2 on my D drive?
Otherwise, the second-best solution would be to install two separate Postgres instances, but that also seems a bit of an overkill?
That should be entirely achievable, assuming the backup was made with the Postgres pg_dump command.
The pg_dump command does not create the database, so you create it yourself first. Use CREATE TABLESPACE to specify the location.
CREATE TABLESPACE secondspace LOCATION 'D:\postgresdata';
CREATE DATABASE seconddb TABLESPACE secondspace;
This creates an empty database on the D: drive.
Then the standard restore from a pg_dump should work:
psql seconddb < dumpfile
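If the dump was instead made in pg_dump's custom format, the equivalent restore (dumpfile again being a placeholder name) would be:
pg_restore -d seconddb dumpfile
Objects that sat in the source database's default tablespace should then land in secondspace, since the dump carries no explicit tablespace assignment for them.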
Replication
Sounds like you need database replication.
There are several ways to do this with Postgres, one built-in, and other approaches using add-on libraries.
Built-in replication feature
The built-in replication feature is likely to suit your needs. See the manual. In this approach, you have an instance of Postgres running on your primary server, doing reads and writes of your data. On a second server, an entirely separate computer, you run another instance of Postgres known as the replica. You first set up the replica by taking a full backup of your database on the first server and restoring it on the second server.
Next you configure the replication feature. The replica needs to know it is playing the role of a replica rather than a regular database server. And the primary server needs to know the replica exists, so that every database change, every insert, modification, and deletion, can be communicated.
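As a very rough sketch of the streaming flavour of this setup (host, role, paths, and auth method are placeholders, and details differ between Postgres versions), the replica is often seeded and configured in one step:
# On the primary, allow replication connections:
#   postgresql.conf:  wal_level = replica
#   pg_hba.conf:      host  replication  repl_user  10.0.0.2/32  scram-sha-256
# On the replica, clone the primary and write the standby configuration:
pg_basebackup -h primary.example.com -U repl_user -D /var/lib/postgresql/data -R -X stream -P
# -R creates standby.signal and records primary_conninfo, so this data
# directory starts up as a replica and keeps following the primary.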
WAL
This communication happens via WAL files.
With the Write-Ahead Log (WAL) feature in Postgres, the database writes every change to the WAL first, and only after that is complete does it write to the actual data files. In case of a crash, power outage, or other failure, the database on restarting can detect a transaction left incomplete. An incomplete transaction is rolled back, while the rest of the work is recovered by replaying the "To-Do" list recorded in the WAL.
Every so often the current WAL is closed, with a new WAL file created to take over the work. With replication enabled, the closed WAL file is copied to the replica. The replica then incorporates that WAL file, to follow the same "To-Do" list of changes as written in that WAL file. So all changes are made to the replica database exactly as they were made to the primary database. Your replica is an exact match to the primary, except for a slight lag in time. The replica is always just one WAL file behind the progress of the primary.
In times of trouble, the replica serves as a warm stand-by. You can shut down the primary, then tell the replica that it is now the primary. You can even configure the replica to be a hot stand-by, meaning it will automatically take over when the primary seems to have failed. There are pros and cons to hot stand-by.
Offload read-only queries
As a bonus feature, the replica can be used for read-only queries. If your database is heavily used, you can offload some of the work burden from your primary to the replica. Any queries that do not require the absolute latest information can be shifted by connecting to the replica rather than the original. For example, a quarterly sales report likely does not need the latest data stored in the active WAL file that has not yet arrived on the replica.
Physical replication means all databases are copied
Caveat: This built-in replication feature is physical replication. This means all the changes to the entire Postgres installation (formally known as a cluster, not to be confused with a hardware cluster) are copied to the replica. If you use one Postgres server to serve multiple databases, all those databases must be replicated; you cannot pick and choose which get copied over. Newer Postgres versions also offer logical replication, which allows more selective copying.
More to learn
I am being brief here. The topics of replication, high-availability, and disaster-recovery are broad and complex, too much for an Answer on Stack Overflow.
Tip: This kind of Question might have been better asked on the sister site, DBA.StackExchange.com.

Do backups using pg_dump cause server outage if the database is too busy?

I have a Postgres database in a production environment, and it has millions of records in its tables. I wanted to take a backup using pg_dump for some investigation.
But this database is very busy, so I am afraid the backup operation could cause server issues such as slowing down the server or crashing the database.
Can anyone tell me whether there is any risk, and share some best practices for taking a backup from a busy Postgres database safely?
Running pg_dump will not cause a server crash, but it will add some extra CPU and, in particular, I/O load. You can test whether that is a problem; pg_dump can be canceled at any time.
On a busy database, it can also lead to table bloat, because old row versions have to be retained for the duration of pg_dump and cannot be vacuumed.
There are some alternatives:
Run pg_dump against a standby server.
Use pg_basebackup to perform a physical backup. That can be throttled to reduce the I/O load.
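For the second alternative, pg_basebackup has a built-in rate limit; a minimal sketch with placeholder connection details:
# Physical backup throttled to roughly 20 MB/s to keep the extra I/O gentle.
pg_basebackup -h db.example.com -U backup_user -D /backups/base -X stream --max-rate=20M -P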

How to restore multiple databases on a single Google Cloud SQL for PostgreSQL instance to the same point in time?

I have 2 separate PostgreSQL databases in Google Cloud SQL for PostgreSQL. The databases should be separate due to security concerns, but they can reside in the same database instance. When restoring from a backup, it is important for me to have the databases restored to the same point in time, due to data relations.
I tried to understand how the backup works; I researched the backup documentation and tried restoring one, but found no clear answer.
Can I be sure that restoring an instance would bring all its databases to the same point in time?
If you restore a Cloud SQL instance backup, it will restore all the databases in that instance to the point when the backup was taken. One thing to note is that if you restore the backup into the same instance, it will overwrite the current data with the data in the backup.
You can see the general tips on performing a restore to further help you understand the restore process.
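If it helps, the same restore can be driven from the command line; this is only a sketch with a placeholder instance name and backup ID, and the exact flags should be confirmed with gcloud sql backups restore --help:
# List the automated backups for the instance, then restore one of them
# onto the instance (overwriting its current data, as noted above).
gcloud sql backups list --instance=my-postgres-instance
gcloud sql backups restore BACKUP_ID --restore-instance=my-postgres-instance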

Backing up the DB vs. backing up the VM

We're serving a Django/Postgres site running on a VM hypervisor. We're now trying to figure out our back up strategy and have two probable options:
Back up the DB directly using pg_dump
Back up the VM directly by copying the VM image
I lean toward the latter, as I think I could simply back up everything that has to do with the site. I'm not sure whether I have to shut down the VM for this, though.
What is a better and more recommended way of backing up a DB? Are there any reasons for not using the VM backup?
Thanks
The question basically boils down to, can you consider a hot copy of PostgreSQL's data files a backup?
The answer is: not really. PostgreSQL tries very hard through the use of WAL to ensure that its files are in a consistent state all the time and that it can survive a power failure, but starting it up from a copy of these files puts PostgreSQL into recovery mode. If the backup happened at the wrong second and PostgreSQL can't recover from the state of these files, your backup is useless. You don't want your backup/restore mechanism to depend on the recovery mechanism (unless you're dealing with "crash only" software, which PostgreSQL is not).
The probability of PostgreSQL not being able to recover from these files is not high, but it's not zero either. The probability of PostgreSQL not being able to load an SQL dump that it made, on the other hand, is zero. I prefer backup choices with lower probabilities of failure. pg_dump was designed for doing backups.
PostgreSQL recommends using pg_dump for backups, as a file system (or VM) backup requires the database to be shut down (and has other drawbacks):
http://www.postgresql.org/docs/8.1/static/backup-file.html
Edit: Also, a pg_dump backup will be significantly smaller than a filesystem dump of the same database.
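For a sense of scale, a custom-format dump is compressed by default; a minimal sketch, assuming a database named mydb:
# Logical backup: schema definitions and table data only, compressed.
# Index contents are rebuilt on restore rather than stored, which is one
# reason the dump is far smaller than the data directory.
pg_dump -Fc -f mydb.dump mydb
# Restore later with: pg_restore -d mydb mydb.dump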
There is an additional option. With PostgreSQL you can make an online backup that allows you to snapshot the file system and maintain consistency. You can see details here:
http://www.postgresql.org/docs/9.0/static/continuous-archiving.html
We use this exact method for making backups when we run PostgreSQL in a VM.
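For reference, the continuous-archiving setup described in that link boils down to a couple of settings plus somewhere to keep the WAL; a minimal sketch, assuming /mnt/wal_archive exists and is writable by the postgres user:
# postgresql.conf: archive every completed WAL segment so a filesystem
# snapshot plus the archive can be recovered to a consistent state.
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'
A base backup (pg_basebackup or the low-level backup API) combined with that archive is what makes snapshotting the filesystem of a running server safe.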