How can I improve pg_basebackup speed? I cannot find a parallelism option for pg_basebackup. Is there really no parallel option? I just want to create a slave database quickly. The database is 5 TB, and creating the slave takes a long time. If there is no parallel option, how can I avoid this time problem? Thank you.
Command for creating Slave
pg_basebackup -Xs -h 172.31.34.215 -U repuser --checkpoint=fast -D /var/lib/postgresql/14/ter -R --slot=replication_slot -C
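As far as I know, pg_basebackup has no parallel option; the base backup is taken as a single stream (newer releases, PostgreSQL 15 and up, add server-side compression via --compress, but still no parallelism). If the bottleneck is the network rather than disk, one hedged workaround is to compress the transfer using the tar format; a sketch based on the command above, not a definitive recommendation:

pg_basebackup -Xs -h 172.31.34.215 -U repuser --checkpoint=fast -Ft --gzip -D /var/lib/postgresql/14/ter -R --slot=replication_slot -C

Note that the resulting base.tar.gz (and pg_wal.tar.gz) must be extracted into the data directory before the slave can start, so this only pays off when the network is the limiting factor.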
I am about to back up a 120 GB database. I kept failing when using the pgAdmin backup (because of a VPN disconnection after 7 hours of running) or SQLMaestro (out-of-memory issue after 3 hours of running).
So I want to run it on the server using pg_dump. The command I want to use is: time pg_dump -j 5 -Fc -Z 1 db_profile_20210714 -f /var/lib/postgresql/backup2/ (I want to measure the time as well, so I put time in front). After that I will run pg_dumpall -g.
I have a 30-core server and the backup drive is mounted on NFS. Postgres 12 running on Ubuntu 12.
Questions:
If I use -Z 0, will it undo the default compression of -Fc? (-Fc is compressed by default.)
Are -j 5 and -Z 1 counterproductive to each other? I read in an article that to throttle the pg_dump process so it won't cause I/O spikes, one can use -Z between 3 and 5. But if someone wants to utilize the cores and compress at the same time, is that effective/efficient?
Thanks
Yes, if you use -Z 0, the custom format dump will be uncompressed. -j and -Z are independent of each other, but you cannot use -j with the custom format at all; parallel dumps require the directory format. Whether compression speeds up the dump depends on your bottleneck: if that is the network, compression can help; otherwise, compression usually makes pg_dump slower.
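A minimal sketch of a parallel dump using the directory format (the only format that supports -j), reusing the database name and path from the question:

time pg_dump -j 5 -Fd -Z 1 -f /var/lib/postgresql/backup2/db_profile_20210714.dir db_profile_20210714
pg_dumpall -g > /var/lib/postgresql/backup2/globals.sql

The directory format writes one compressed file per table, which is what lets the five worker processes dump different tables concurrently.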
We are using the below command to take a backup of the database.
$PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar --gzip --compress=1 --pgdata=- -D /opt/rao
While taking the backup we received the below error.
transaction log start point: 285/8F000080
pg_basebackup: could not get transaction log end position from server: FATAL: requested WAL segment 00000001000002850000008F has already been removed
Please guide me on why this error occurs and how to handle it. If you want me to change any of the options in my pg_basebackup command, let me know.
Please also clarify what --pgdata=- -D means in my above pg_basebackup command.
-D directory
--pgdata=directory
This specifies the directory to write the output to.
When the backup is in tar mode, and the directory is specified as - (dash), the tar file will be written to stdout. This parameter is required.
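So the --pgdata=- in the command above asks for the tar archive on stdout, while the second -D /opt/rao names a real directory; presumably the later -D is what took effect. A hedged sketch of the stdout form, assuming you want to handle the stream yourself (port and path are taken from the question, and this works only if the cluster has no additional tablespaces):

pg_basebackup -p 5433 -U postgres --format=tar -D - | gzip -1 > /opt/rao/base.tar.gz

This is equivalent in spirit to the original --gzip --compress=1, just with the compression done by an external gzip instead of pg_basebackup.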
FATAL: requested WAL segment 00000001000002850000008F has already been removed
means that the master hasn't kept enough history to bring the standby back up to date.
You can use pg_basebackup to create a new slave:
pg_basebackup -h masterhost -U postgres -D path --progress --verbose -c fast
If you have a WAL archive, you can also try restore_command. Note that pg_basebackup creates an entirely new slave in an empty directory.
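A minimal sketch of such a recovery setting, assuming a hypothetical archive directory /mnt/wal_archive; it goes in postgresql.conf on PostgreSQL 12 and later (recovery.conf on older versions):

restore_command = 'cp /mnt/wal_archive/%f "%p"'

Alternatively, a physical replication slot keeps the master from removing WAL the standby still needs, for example SELECT pg_create_physical_replication_slot('replica1'); on the master and primary_slot_name = 'replica1' on the standby (the slot name here is just an example).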
I'm on a Windows Server 2016 machine. I have run pg_dump.exe on a 3 GB Postgres 9.4 database using the -Fc format.
When I run pg_restore to a local database (9.6):
pg_restore.exe -O -x -C -v -f c:/myfilename
The command has been running for over 24 hours. (Still running.)
Similar to this issue: Postgres Restore taking ages (days)
I am using the verbose CLI option, which looks to be spitting out a lot of JSON; I'm assuming that's getting inserted into tables. The task manager has the CPU at 0%, using 0.06 MB of memory. It looks like I should add more jobs next time, but this still seems pretty ridiculous.
I prefer using a linux machine, but this is what the client provided. Any suggestions?
pg_restore.exe -d {db_name} -O -x c:/myfilename
Did the trick.
I got rid of the -C and manually created the database prior to running the command. (In hindsight, the bigger problem with the first command was -f: in pg_restore, -f names an output file, not the input dump, so with no filename argument the command sat waiting on stdin, which explains the 0% CPU.) I also realized that connection options should come before other options:
pg_restore [connection-option...] [option...] [filename]
See the Postgres documentation for more.
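A hedged sketch of a faster restore using parallel jobs, assuming the database has been created beforehand as described above (the database name and path are the placeholders from this thread):

pg_restore.exe -d db_name -O -x -j 4 c:/myfilename

-j runs the most time-consuming steps, loading data and creating indexes and constraints, with several concurrent jobs; it works with custom-format (-Fc) archives like this one.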
When trying to perform pg_basebackup on a replica, I always get the following message:
postgres@db1:~/10$ pg_basebackup -h foo.bar.com -U repluser -D /var/lib/postgresql/10/main -v -P
pg_basebackup: initiating base backup, waiting for checkpoint to complete
I've tried waiting, but nothing happens. Is it possible to speed up the process?
Call pg_basebackup with the option --checkpoint=fast to force a fast checkpoint rather than waiting for a spread one to complete.
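Applied to the command from the question:

pg_basebackup -h foo.bar.com -U repluser -D /var/lib/postgresql/10/main -v -P --checkpoint=fast

Without this option, pg_basebackup waits for the next regular spread checkpoint, which can take a long time: up to checkpoint_timeout (5 minutes by default), stretched further by checkpoint_completion_target.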
It's possible to force a checkpoint to complete. To do so, run CHECKPOINT; on the master server:
$ sudo su - postgres
$ psql
postgres=# CHECKPOINT;
I wanted to point out that while the accepted answer of running the CHECKPOINT command on the primary server is correct, it is not meant to be run during normal operation, according to the Postgres documentation:
https://www.postgresql.org/docs/current/sql-checkpoint.html
So be sure not to do this while the server is processing normal operations from your apps, etc.
I am a DBA working for a small company. Our first business project is an eBook. My supervisor asked me to study and deploy MongoDB as the primary DB for the whole project. Our project is still in the development stage.
Last night, we encountered a serious problem. My supervisor asked me to dump data from the database on Server A, then restore it to Server B.
These are my commands:
Server A
mongodump -h 127.0.0.1 -d puppy -o /
tar -cpf /puppy.tar /puppy/
Server B
scp root@hostname:/puppy.tar /
cd /
tar -xf puppy.tar
mongorestore -h 127.0.0.1 -d puppy --directoryperdb /puppy/ --drop
After these actions, I found that the record count of the database “puppy” on Server B did not match the number on Server A. After several trials I dropped the database in the mongo shell:
use puppy;
db.dropDatabase();
and restored again:
mongorestore -h 127.0.0.1 -d puppy --directoryperdb /puppy/ --drop
It still did not work.
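For reference, one way to compare the record counts on the two servers is to print per-collection counts on each and diff them; a minimal sketch using the legacy mongo shell, with the host and database name from the commands above:

mongo 127.0.0.1/puppy --quiet --eval 'db.getCollectionNames().forEach(function(c){ print(c + ": " + db[c].count()); })'

Running this on Server A before the dump and on Server B after the restore would show which collections differ.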
And after that, I tried to restore Server A from its own dump file, and a disaster happened: the record count on Server A is now the same number as on Server B! Because this is a routine action that I have done before without ever hitting such a big problem, it shocked me and my supervisor.
We took a long time to fix this problem, but my supervisor asked me to find the reason, or I'll be fired! This is my worst nightmare! Can anyone give me a clue?
Thanks for your help.