rsync command not found (while backing up PostgreSQL using Barman)

I am trying to use Barman to schedule backups of my PostgreSQL database. The IPs of my DB server and backup server (both of which run CentOS 7.2.1511) are 10.113.12.200 and 10.133.12.205 respectively. After studying Barman's documentation and the PostgreSQL documentation on continuous archiving and point-in-time recovery, I set up archiving in the PostgreSQL configuration file as
archive_mode = on
wal_level = 'replica'
archive_command = 'rsync --rsync-path=/usr/bin/rsync -a %p barman@10.133.12.205:/var/lib/barman/incoming/my_database/%f'
I have also enabled password-less SSH from postgres@10.113.12.200 to barman@10.113.12.205 and vice versa, as the Barman docs say.
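For reference, the key exchange the Barman documentation describes usually amounts to something like the sketch below; the user names and addresses are taken from the question, while the key type and paths are assumptions.
# On the DB server, as the postgres user (skip ssh-keygen if a key already exists)
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id barman@10.133.12.205
# On the backup server, as the barman user
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id postgres@10.113.12.200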
Below is an excerpt from my barman.conf's DB server section.
[my-database]
description = "My Database"
ssh_command = ssh postgres@10.113.12.200
conninfo = host=10.113.12.200 user=postgres
retention_policy_mode = auto
retention_policy = RECOVERY WINDOW OF 7 days
wal_retention_policy = main
I'm really puzzled by this error: sh: rsync command not found. I know what the error means after reading up on rsync and going through some questions here on SO, including this one: https://unix.stackexchange.com/questions/198756/why-is-rsync-not-found
I SSHed from postgres@10.113.12.200 to barman@10.113.12.205 to find out the $PATH, executed which rsync, and only then added --rsync-path as shown above. But the issue persists, and none of my WAL archives are getting copied as a result. Can someone shed some light? I think I'm missing something fundamental here.
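One way to see exactly what the archive_command sees is to run the same calls non-interactively as the postgres user, since a non-interactive shell can have a different $PATH than an interactive login; the sketch below uses the hosts from the question and a placeholder test file.
# On the DB server, as the postgres user: check rsync locally and over a
# non-interactive SSH session to the backup server
which rsync
ssh barman@10.133.12.205 'echo $PATH; which rsync'
# Simulate the archive_command with a throwaway file
touch /tmp/waltest
rsync --rsync-path=/usr/bin/rsync -a /tmp/waltest barman@10.133.12.205:/var/lib/barman/incoming/my_database/waltest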

Related

Postgresql 10 replication mode error

I'm trying to set up a basic master-slave configuration using streaming replication for Postgres 10 and Docker.
Since the official Docker image provides a docker-entrypoint-initdb.d folder for placing initialization scripts, I thought it would be a swell idea to start placing my preparation code there.
What I'm trying to do is automate the way the database is restored before starting the slave in standby mode, so I run
rm -rf /var/lib/postgresql/data/* && pg_basebackup 'host=postgres-master port=5432 user=foo password=foo' -D /var/lib/postgresql/data/
and this succeeds.
Then the server is shut down and restarted as per the Docker initialization script, which pops up a message saying
database system identifier differs between the primary and standby
Now I've been searching online for a while, and the only two explanations I found are that I either have a misconfigured recovery.conf file, which looks like this
standby_mode = 'on'
primary_conninfo = 'host=postgres-master port=5432 user=foo password=foo'
trigger_file = '/tmp/postgresql.trigger'
Where the connection string is the same one I used for the base backup.
The second explanation circulating is that the backup command could be messed up, but the only thing I add to the data folder after the backup is the recovery.conf file.
Does anyone have any idea where I'm messing up?
Should I just go for repmgr and call it a day?
Thanks in advance
To answer my own question: the issue lay in how the Dockerfile entrypoint scripts were called. They would end up starting the instance as a master or a slave according to variables that I had not set properly.
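For anyone hitting the same thing, a rough sketch of what a standby branch in a docker-entrypoint-initdb.d script can look like; the REPLICATION_ROLE variable and the connection details are placeholders, and on PostgreSQL 10 pg_basebackup -R writes a matching recovery.conf for you.
#!/bin/bash
# init-standby.sh (sketch) - only take this branch when the container is a standby
set -e
if [ "$REPLICATION_ROLE" = "slave" ]; then
    rm -rf "$PGDATA"/*
    # -R generates recovery.conf with standby_mode and primary_conninfo, so the
    # standby keeps the system identifier of the base backup it came from
    pg_basebackup -D "$PGDATA" -R -d 'host=postgres-master port=5432 user=foo password=foo'
fi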

Migrating 200GB of Postgres data from 9.0 to 9.6

We have a simple database with just 5 tables, but one table is huge: around 100GB of data by itself, and its indexes together are nearly double that size. The server is an old CentOS 5 box with PG 9.0. I'm moving to a more modern setup with SSDs, CentOS 7, and PG 9.6.
Question: what's the best way to migrate the data in a simple way? pg_dump it on the old server, move it via rsync or something to the new server, and pg_restore it? I could do the pg_dump with the -Fc option so that we can pg_restore it easily (otherwise it's a text format and we have to use psql -f instead). But a trial run suggested that while the pg_dump is OK, the pg_restore on the destination server, which is much faster hardware, just goes on and on. We did a pg_restore --verbose, but there was no verbosity at all. Perhaps the server was stuck doing IO?
Our postgresql.conf settings for the pg_restore are as follows:
maintenance_work_mem = 1500MB
fsync = off
synchronous_commit = off
wal_level = minimal
full_page_writes = off
wal_buffers = 64MB
max_wal_senders = 0
wal_keep_segments = 0
archive_mode = off
autovacuum = off
What should we do to ensure that the pg_restore works? Right now both servers are offline, so I can do pretty much anything needed -- any settings can be changed.
Some more background info:
Old server: CentOS 5, SCSI RAID 1 disks, 4GB RAM (not much), PG 9.0
New server: CentOS 7 (latest), SSD disk, 16GB RAM, PG 9.6
Thank you for any pointers on moving large tables in the best way possible. The usual PG documentation doesn't seem to help. We've tried both the plain-text dump approach and the -Fc one.
I strongly suggest you use pg_upgrade:
Install 9.0.23 on the new server. From source if necessary.
Set up a streaming replica on the new server using pg_basebackup and a suitable recovery.conf. Enable WAL archiving and restore_command too, in case it becomes desynchronised for any reason.
Also install 9.6 on the new server.
Do an upgrade test by stopping the replica and attempting a pg_upgrade to 9.6 (a command sketch follows after this list). Restart the replica, fix any issues, and repeat until you succeed.
When you're confident pg_upgrade will succeed, plan a cut-over time. Stop the 9.0 master and stop the replica. pg_upgrade the replica. Start the new 9.6 server.
See the pg_upgrade documentation for more info.
Remember: KEEP BACKUPS.
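A sketch of the upgrade command itself; the binary and data directory paths are assumptions for a typical CentOS 7 package layout, and --link is optional (it avoids copying data files but ties the new cluster to the old files).
# As the postgres user, with both the old and new servers stopped:
# dry run first, then the real upgrade
pg_upgrade --check -b /usr/pgsql-9.0/bin -B /usr/pgsql-9.6/bin -d /var/lib/pgsql/9.0/data -D /var/lib/pgsql/9.6/data
pg_upgrade --link -b /usr/pgsql-9.0/bin -B /usr/pgsql-9.6/bin -d /var/lib/pgsql/9.0/data -D /var/lib/pgsql/9.6/data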
If you want simple, just pg_dumpall and pipe it to psql. But that'll be slow, and it'll cause problems if your restore fails partway through and you then try to resume, etc.
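A minimal sketch of that simple path, assuming the new host can reach the old server over the network (host names are placeholders):
pg_dumpall -h old-server -U postgres | psql -h new-server -U postgres -d postgres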
Better:
If you don't want to use replication, then use parallel-mode pg_dump and pg_restore with directory format input/output if you want to get things done quickly.
Configure your 9.0 database to accept connections from the 9.6 host (a configuration sketch follows after these steps) and make sure there's a high-performance network connection (gigabit or better).
Using the 9.6 host, running the 9.6 versions of pg_dump and pg_dumpall:
Dump your global objects with pg_dumpall --globals-only -f globals.sql
Dump your database(s) with pg_dump -Fd -j4 -d dbname -f dbname.dumpdir or similar. -j is the number of parallel jobs. You'll need to dump each database separately if there are multiple ones.
Cleanly initdb a new PostgreSQL 9.6 install, removing whatever attempts you have previously made (since I don't know what is or isn't there). Alternatively, DROP any created roles, databases, etc., returning it to a clean state.
Use psql to run the globals script: psql -v ON_ERROR_STOP=1 --single-transaction -f globals.sql -d postgres
Use pg_restore to load the database dumps: pg_restore --create -d template1 -j4 dbname.dumpdir, repeating for each dumped DB. You can restore multiple DBs concurrently.
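For the step about accepting connections from the 9.6 host, the 9.0 side typically needs something like the sketch below; the address is a placeholder for the 9.6 host.
# postgresql.conf on the 9.0 server
listen_addresses = '*'
# pg_hba.conf on the 9.0 server: allow the 9.6 host (placeholder address)
host    all    all    192.0.2.10/32    md5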
Yes, I know the handling of global objects sucks. And yes, it'd be nice if all this were wrapped up in a simple command. But it isn't. Designs and well thought out patches are welcome if you want to try to improve this. So far nobody's wanted to enough to do the work.

Postgres Recovery Failure

What I am trying to accomplish is a recovery using a continuous archive backup.
I am running a VM with CentOS 6.8 and Postgres 9.1; Postgres 9.1 is the same version as the DB that I am pulling from.
I installed Postgres and initialized the DB, and it started up fine.
Then, following these directions: https://www.postgresql.org/docs/9.3/static/continuous-archiving.html
Stopped the destination PostgreSQL server (as root: service postgresql-9.1 stop)
Copied the destination cluster data folder to the side (as postgres)
Removed the cluster data files (as postgres)
Copied in my source data folder (as postgres)
Copied WAL files into a clean pg_xlog folder under the data folder (as postgres)
Created a recovery.conf file which contained:
restore_command = 'cp /var/lib/pgsql/database_sample_backup/wal_archives/0A/%f %p'
This is another location for the WAL files, separate from the copy I placed in pg_xlog (I was not sure if I needed both).
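For comparison, a minimal recovery.conf for an archive recovery on 9.1 needs only the restore_command; a stop point is optional, and the target line below is purely illustrative.
restore_command = 'cp /var/lib/pgsql/database_sample_backup/wal_archives/0A/%f %p'
# optional: stop replay at a point in time instead of the end of the archives
# recovery_target_time = '2017-01-01 12:00:00'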
But when I attempt to restart my server, it fails. (as root: service postgresql-9.1 start)
My pgstartup.log at one point spat out "runuser: cannot set groups: Operation not permitted", but it doesn't do this consistently with every attempt to start.
I've also tried turning off the archiving and replication directives in postgresql.conf (so that it can run standalone) and tried copying over the pg_hba.conf from the new DB I had created, to see if either would resolve the issue. Neither did.
I've also run netstat -ntap | grep 5432, which confirmed that I don't have anything else running on the port.
What else can I provide in the way of details, and what else may I attempt in this restoration process?
Thank you for your help!

WAL level not sufficient for making an online backup

I am trying to set up database replication on Red Hat Enterprise Linux 7.0 using PostgreSQL.
I referred to this URL for configuration: http://blog.terminal.com/postgresql-replication-and-backup-methods/ and completed the steps up to
Configuring the slave server
but at the step
Initial replication
when I executed this command on the master,
-bash-4.2$ psql -c "select pg_start_backup('initial_backup');"
I got an error like this:
ERROR: WAL level not sufficient for making an online backup
HINT: wal_level must be set to "archive" or "hot_standby" at server start.
So kindly let me know where we went wrong.
In your master PostgreSQL server's configuration file <PG_DATA>/postgresql.conf there is a parameter called wal_level; it should be set to either "archive" or "hot_standby".
Both tell Postgres to keep WAL segments for the replication server. The difference between the two is rather simple:
"hot_standby" means that your replication server will allow connections for SELECT statements;
"archive" means you don't want that possibility.
More to read in this tutorial on streaming replication
Also keep in mind that changing the wal_level property requires a server restart to take effect.
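In practice the change on the master looks roughly like the sketch below; hot_standby is shown because it also permits read-only connections on the standby, and the extra settings and the restart command are typical values rather than anything from the question.
# postgresql.conf on the master
wal_level = hot_standby
max_wal_senders = 3
wal_keep_segments = 64
# then restart the server, e.g. on a Red Hat style system:
service postgresql restart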

Where are Postgresql 9.1 logs? (not starting on FreeBSD 10)

I tried finding solutions, but nothing helps.
I need to do a backup of my pgsql data from an app I haven't used for months. I have discovered that the PostgreSQL server is not running, but I cannot start it.
I run pg_ctl -D /usr/local/pgsql/data -l logging.log -w -s start as the pgsql user (su pgsql). The output says that it couldn't start the server and tells me to check the logs, but logging.log is an empty file, and every default log file I have found mentioned on the web is either months old, empty, or doesn't even exist.
I have no idea how to find the error, since the logs are empty or I just don't know where to look for them.
Important note: it was working a few months ago, and there have been almost no changes since then (a possible hostname change).
Postgres is v9.1
System: FreeBSD 10.0-RC4
Some versions of the FreeBSD ports install PostgreSQL with syslog logging enabled. You can confirm this by looking in /usr/local/pgsql/data/postgresql.conf for log_destination = 'syslog'.
If that is the case, the logging output should be visible in /var/log/messages
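A quick way to check both things, as a sketch:
grep '^log_destination' /usr/local/pgsql/data/postgresql.conf
tail -n 100 /var/log/messages | grep -i postgres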
With the default syslog logging enabled (log_destination = 'syslog'), the output goes to /var/log/messages.
If you want to log to a separate file instead:
1) Create the log file:
touch /var/log/postgresql/postgresql.log
2) Edit /etc/syslog.conf and append these lines:
!postgres
*.* /var/log/postgresql/postgresql.log
!*
3) After editing, restart the syslog service:
service syslogd restart
4) Do not forget to rotate postgresql.log (edit /etc/newsyslog.conf).
5) To actually see something, you may need to raise the logging level. As an example, add to your postgresql.conf:
client_min_messages = log
log_min_messages = info
log_checkpoints = on
log_connections = on
log_disconnections = on