PostgreSQL base backup - older version to newer version - postgresql

Can we take a base backup from an old-version PostgreSQL database into a new-version database?
For example, I have two DB servers: one running old Postgres 9.x and another running Postgres 13.
If I want to take a base backup from Postgres 9.x for Postgres 13, can we do that directly using the pg_basebackup command on the Postgres 13 DB server?
Is there any other way to complete this task in Postgres?
If base backup does not work from a lower to a higher version, I have the plan below, but it is a slow process.
DB A has the lower version and DB B has the higher version.
So, to take a base backup, upgrading DB A to the same version as DB B is the only option.
Once DB A is upgraded successfully, start the base backup from DB A (now on the updated Postgres version) to DB B.
We can refer to the site below for the upgrade process:
https://www.migops.com/blog/upgrading-postgresql-9-6-to-postgresql-13/
The upgrade will take time to complete, and during that time writes to the primary DB will not work; all write processes will be affected. If anything goes wrong during the upgrade, almost everything will be a mess.
So I would prefer a direct old-version-to-new-version base backup process to complete this activity.
Do you have any suggestions?

You are mixing up a physical, file-system-level backup (pg_basebackup) with a logical backup, also known as a dump or export (pg_dump). Only the latter can be used for upgrading: a pg_basebackup can only be restored on the same major PostgreSQL version.
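The logical route described above can be sketched as below. Hostnames, users, and database names are placeholders; the dump should be taken with the newer version's client tools, since a newer pg_dump can read older servers but not the other way around.

```shell
# Placeholders: old 9.x server on "oldhost", new 13 server on "newhost".
# Run the Postgres 13 client tools against the old server.

# Dump everything (roles, tablespaces, all databases) as plain SQL:
pg_dumpall -h oldhost -p 5432 -U postgres -f full_dump.sql

# Restore into the new cluster:
psql -h newhost -p 5432 -U postgres -f full_dump.sql postgres

# Or per database, using the custom format so pg_restore can run in parallel:
pg_dump -h oldhost -U postgres -Fc -f mydb.dump mydb
pg_restore -h newhost -U postgres -d mydb -j 4 mydb.dump
```

The per-database custom-format route is usually preferable for large databases, since `pg_restore -j` parallelizes the restore.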

Related

Update Postgresql in Ubuntu - pg_upgrade vs pg_upgradecluster

I would like to switch from Postgres 9.6 to version 14, running on Ubuntu 21.04. I have a cluster with 3 databases.
I would like to know what is the difference between upgrading with pg_upgrade and pg_upgradecluster? Which one is faster and safer?
pg_upgrade is a tool from PostgreSQL itself that operates on a single database cluster (data directory).
pg_upgradecluster, however, is a wrapper provided by your operating system (Ubuntu/Debian) around pg_upgrade or pg_dump/pg_restore. In addition to very conveniently upgrading your database, it also does some housekeeping, like moving the config files to the correct folder in /etc/postgresql/ .
So, if you have set up your database by pg_createcluster and it is hence listed by pg_lsclusters, I'd strongly recommend using pg_upgradecluster to upgrade it.
In terms of "faster vs. safer", be sure to read about the various options on the manpage.
If you can take a reliable backup (e.g. snapshot), you can safely use the -m upgrade --link option which will be fastest and allow for a very short downtime (depending on database size and resources, but I've recently upgraded a 700GB database in ~25 seconds).
The safest option, of course, is not using pg_upgrade but the default pg_dump/pg_restore method, which will shut down your original database and copy the data to a new database in a new location (i.e. it will use approximately twice the space, at least temporarily, until you decide to delete the original folder).
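A minimal sketch of the two methods, assuming a Debian/Ubuntu cluster named main that was created with pg_createcluster (the cluster name and versions are examples):

```shell
# Confirm the old cluster is managed by the Debian tooling:
pg_lsclusters

# Fast in-place upgrade using hard links -- take a snapshot/backup first,
# since the old data directory is no longer usable afterwards:
sudo pg_upgradecluster -m upgrade --link 9.6 main

# Default (safest) method: dump the old cluster and restore it into a
# new version-14 cluster, leaving the original data untouched:
sudo pg_upgradecluster 9.6 main

# Once everything is verified, remove the old cluster:
sudo pg_dropcluster 9.6 main
```

`-m upgrade --link` trades safety for speed; the default dump/restore method keeps the original cluster intact until you drop it yourself.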

GCP database migration gets stuck - PostgreSQL

I have a Postgres database instance in GCP running on version 9.6.
I want to upgrade the Postgres version to a newer version, and I use GCPs "Database migration" for that purpose.
I have 2 databases in the instance and they fill around 800 GB in total.
The problem is that the migration gets stuck. There are no errors in the migration log.
[Screenshot of the migration monitoring]
In short, my question is:
How can I check which phase the migration is in and what the issue is?
Thanks.

How can I copy my local PostgreSQL database to Heroku for a Spring Boot app

I have deployed my Spring Boot app to Heroku. Now I would like to copy my local PostgreSQL database to Heroku.
I have found some information on devcenter.heroku.com.
However, I don't fully understand the use of the db.changelog-master.yaml file.
Could anyone give me details about the simplest way to copy the database?
Create a valid dump of your local postgres database and host it somewhere publicly available. Now you will be able to restore this entire dataset (schema and records) with pg:backups:restore as shown here. The sole caveat here is that the target database must be completely empty for this to work. You can empty a Heroku postgres database with heroku pg:reset.
If you cannot take the approach listed above then you can run pg_restore directly from your local instance, provided your local version of Postgres is >= the target version of Postgres. This also applies to creating the dumpfile and is a requirement because pg utilities are not guaranteed to be forward compatible. Documentation for pg_restore is here.
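A sketch of both approaches; the app name, dump URL, and database name are placeholders. `heroku pg:push` is an additional route that avoids hosting the dump anywhere:

```shell
# Create a compressed dump of the local database:
pg_dump -Fc --no-acl --no-owner -U postgres -f mydb.dump mydb

# Option 1: host the dump at a publicly reachable URL, empty the target
# database, then restore from the URL:
heroku pg:reset DATABASE_URL --app my-app --confirm my-app
heroku pg:backups:restore 'https://example.com/mydb.dump' DATABASE_URL --app my-app

# Option 2: push the local database directly (local Postgres version
# must be >= the Heroku Postgres version):
heroku pg:push mydb DATABASE_URL --app my-app
```

Note that `heroku pg:push` also requires the target database to be empty, so a `pg:reset` may be needed first in either case.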

org.sonar.api.utils.MessageException: Database relates to a more recent version of sonar

I am facing the error below.
org.sonar.api.utils.MessageException: Database relates to a more recent version of sonar. Please check your settings.
I have 2 different servers: one for Sonar and another for the database.
1. I took a snapshot of the Sonar server, but I did not take a snapshot of the database (forgot to take it).
2. I upgraded Sonar 4.0 to SonarQube 4.5.1 after backing up the database (PostgreSQL) using the pg_dump command, but because I faced some data loss after upgrading Sonar and the database, I reverted to the previous snapshot (Sonar server only).
3. We have now restored the database successfully using the pg_restore command, but Sonar 4.0 does not start and gives the above error.
Could anyone help me with this?
The message is quite obvious: you are starting your SQ 4.0 instance connected to a DB which is recognized as more recent.
Options are limited:
either the DB is really your original SQ 4.0 DB and you are not running the exact same SQ 4.0 software you used to
or you are running the same SQ 4.0 software and the DB is not your original SQ 4.0 DB
My guess is that you did not successfully restore your DB to its SQ 4.0 state, or only partially.
Under the hood, SQ uses table schema_migrations to know which version of the DB it is connected to.
Each DB migration "script" (let's use that name for simplicity's sake) has a unique number (numbers are strictly increasing), and each SQ version knows the number of the last migration script it bundles. When a script is successfully executed, a row is added to the schema_migrations table.
SQ checks at startup its last script's number against the highest number in schema_migrations:
same number: everything is OK
lower number: the DB needs an upgrade
higher number: the error message you got
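The startup check described above can be sketched as a small shell function. The migration numbers and the query are illustrative only, not SQ's actual internals:

```shell
# Hypothetical sketch of SQ's startup check: compare the last migration
# number bundled with the binary against the highest number recorded in
# the schema_migrations table.
check_schema_version() {
  local last_bundled=$1 db_max=$2
  if [ "$db_max" -eq "$last_bundled" ]; then
    echo "ok"                 # same number: versions match
  elif [ "$db_max" -lt "$last_bundled" ]; then
    echo "db-needs-upgrade"   # lower number: DB needs an upgrade
  else
    echo "db-more-recent"     # higher number: the error from the question
  fi
}

# The DB-side number could be inspected manually with something like:
#   psql -d sonar -c "SELECT max(version) FROM schema_migrations;"
check_schema_version 500 530  # prints "db-more-recent"
```

In the question's scenario, the restored DB still carries migration rows from 4.5.1, which is why SQ 4.0 refuses to start against it.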

How to salvage data from Heroku Postgres

We are using Heroku Postgres with Ruby on Rails 3.2.
A few days ago, we deleted important data by mistake using 'heroku run db:load' with a misconfigured data.yml - that is, it dropped the tables and recreated them with almost no data.
Our only backup is from two weeks before, so we have lost two weeks of data.
So we need to recover not via PG Backups/pg_dump but from PostgreSQL's system data files.
I think the only way to recover the data is to restore it from the xlog or archive files, but of course we do not have the superuser/replication-role permissions needed to copy the Postgres database on Heroku (or Amazon EC2) to a local server.
Has anyone faced such a case and resolved the problem?
Your only option is the backups provided by the PgBackups service (if you had that running). If not, Heroku support might have more options available.
At a minimum, you will have some data loss, but you can guarantee you won't do it again ;)
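If PG Backups was enabled, inspecting and restoring a backup with the current Heroku CLI (the commands have been renamed since the old heroku pgbackups plugin) looks roughly like this; the app name and backup ID are placeholders:

```shell
# List the backups Heroku has captured for the app:
heroku pg:backups --app my-app

# Download a specific backup (IDs look like b101) for local inspection:
heroku pg:backups:download b101 --app my-app

# Restore that backup into the app's database:
heroku pg:backups:restore b101 DATABASE_URL --app my-app
```

Downloading the dump first and restoring it locally lets you cherry-pick the lost rows instead of overwriting the last few days of good data.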