Creating a backup in directory format using
pg_dump -f "sba" -Fdirectory --jobs=32 --verbose sba
throws error
pg_dump: error: could not stat file "sba/282168.data.gz": value too large
How can I create a backup in a format from which tables can be selectively restored?
The file in question is 6.2 GB on NTFS. Was the backup created successfully? Is this only an informational message?
The server is PostgreSQL 12 running on Debian 10 under WSL.
The client is pg_dump from PostgreSQL 15 running on Windows 11.
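For reference, a directory-format dump does support selective restore once it completes. A minimal sketch, with my_table as a hypothetical table name and sba as the target database:

pg_restore --list "sba"
pg_restore --dbname=sba --table=my_table --verbose "sba"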
Related
I'm using a Linux server and want to upgrade PostgreSQL from version 9 to 11. I have created a dump of my database and installed PostgreSQL 11. Now I want to import the dump into PostgreSQL 11, so I run the command
pg_restore -h localhost -d dbLitstUsers -U postgres .dataBasebackup
but get the error
pg_restore: [archiver] unsupported version (1.14) in file header
How can I fix this error?
You must have used pg_dump from v12 or higher. That generates a custom dump in a format (1.14) that is not compatible with older versions of pg_restore. Retake the dump with the pg_dump that comes with the version you want to restore to.
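A sketch of that, assuming a Debian/Ubuntu layout where each major version keeps its binaries under /usr/lib/postgresql/<version>/bin and the old server is still reachable (old_host is a placeholder):

/usr/lib/postgresql/11/bin/pg_dump -h old_host -U postgres -Fc -f dataBasebackup dbLitstUsers
/usr/lib/postgresql/11/bin/pg_restore -h localhost -d dbLitstUsers -U postgres dataBasebackup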
My hoster upgraded my Ubuntu server and it is not booting any more. The only way I can still access my data is read-only via a rescue environment (SSH shell).
I am running a PostgreSQL 9.1 installation on the crashed server. I am not able to start the Postgres server in the rescue environment, and I do not have a database dump created with pg_dump.
However, I was able to copy the whole /var/lib/postgresql folder to a new machine. I installed PostgreSQL 9.1 on that machine and afterwards replaced its /var/lib/postgresql with my old files.
When I start the postgres server, I get something like "incorrect checksum in control file".
Is there any way to restore the database content without using pg_dump (since I don't have a current dump and cannot run it on the defective machine)?
Indeed, it was a 32-bit vs. 64-bit issue. I had another old server running 32-bit Ubuntu. Initially I had tried to restore the data on a 64-bit machine; on the 32-bit machine it simply worked by copying the PostgreSQL main directory. Finally I was able to log into the database and create a dump.
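If you hit the same checksum error, you can check from the rescue shell which architecture wrote the data directory before picking a target machine. A sketch, assuming the Debian/Ubuntu binary layout:

file /usr/lib/postgresql/9.1/bin/postgres
# "ELF 32-bit LSB executable" means the cluster was written by a 32-bit build
# and has to be restored on a 32-bit installation of the same major version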
I am trying to back up my PostgreSQL 10 database running on a CentOS 7 machine and then restore it on a development machine running Windows 10, but I am getting errors during the restore process:
pg_restore: [custom archiver] WARNING: ftell mismatch with expected position -- ftell used
I have made sure that the commands' parameters passed in both dump and restore are the same:
pg_dump --format=c --compress=9 --encoding=UTF-8 -n public --verbose --username=postgres databaseName -W -f /usr/local/production-dump.backup
However, it does not work at all. Even though the schema is restored, the data is not: right before the restore process starts restoring data, it gives a "pipe has ended" error and does not proceed with the full restore. I am using the "custom" format because the plain SQL and tar formats generate huge backup files.
What am I doing wrong? Is there any parameter that I need to pass to the dump or restore commands?
The likely explanation is that the file was modified during file transfer. Could you calculate a checksum of the file before and after transfer and verify that it is the same?
If the file did not change, then you have probably found a PostgreSQL bug. If you have a dump that you can share and that exhibits the problem, please report this problem to PostgreSQL.
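For example, with standard tools on both sides (the file name follows the dump command above):

sha256sum /usr/local/production-dump.backup          # on the CentOS source
certutil -hashfile production-dump.backup SHA256     # on the Windows target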
My goal is to have an automatic database backup that will be sent to my S3 bucket.
Jelastic has good documentation on how to run pg_dump inside the database node/container, but in order to obtain the backup file you have to fetch it manually using an FTP add-on!
But as I said earlier, my goal is to send the backup file to my S3 bucket automatically. What I tried was to run pg_dump from my app node instead of the PostgreSQL node (hoping to have some control from the app side). The command I run looks like this:
PGPASSWORD="my_database_password" pg_dump --host "nodeXXXX-XXXXX.jelastic.XXXXX.net" \
  -U my_db_username -p "5432" -f sql_backup.sql "database_name" 2> $LOG_FILE
The output of my log file is:
pg_dump: server version: 10.3; pg_dump version: 9.4.10
pg_dump: aborting because of server version mismatch
The issue here is that the database node has a different pg_dump version than the nginx/app node, so the backup can't be performed. I looked around but can't find an easy way to solve this. I'm open to any alternative way to achieve my initial goal.
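One way out is to install a pg_dump that matches the server version on the app node and stream the dump straight to S3. A sketch, assuming a Debian/Ubuntu app node with the PGDG apt repository configured, the AWS CLI set up, and a hypothetical bucket name my-backup-bucket:

apt-get install -y postgresql-client-10
PGPASSWORD="my_database_password" pg_dump --host "nodeXXXX-XXXXX.jelastic.XXXXX.net" \
  -U my_db_username -p 5432 "database_name" \
  | gzip | aws s3 cp - "s3://my-backup-bucket/backup-$(date +%F).sql.gz"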
I'm trying to create a hot standby server using PostgreSQL 9.3.5 and Red Hat 6.5
I receive the following error when running pg_basebackup:
$ pg_basebackup -h 172.28.250.10 -D /var/lib/pgsql/9.3/data -U replicador -v -P
pg_basebackup: could not create directory "/var/lib/pgsql/9.3/data/osm_indices": File exists
/var/lib/pgsql/9.3/data exists and is empty when I launch the tool; when it fails, there is data at /var/lib/pgsql/9.3/data/osm_indices. The DB has 5 tablespaces, and 4 are copied completely.
Both servers are running the same O.S. and DB server version.
I've tried the same with 2 different masters and 3 slaves with the same result, but it is not always the same tablespace that fails to copy.
Thanks,
Luis.
It looks like you might have tablespaces inside the data directory.
You should not do that. Tablespaces are meant to be separate paths, and some of the tools assume that they will be.
Move the tablespaces outside the datadir and pg_basebackup should behave, so long as you have corresponding paths on the destination server.
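You can check where each tablespace points with a query like this (pg_tablespace_location is available from PostgreSQL 9.2 on); any location that resolves to a path under /var/lib/pgsql/9.3/data lives inside the data directory and should be moved out:

psql -d postgres -c "SELECT spcname, pg_tablespace_location(oid) FROM pg_tablespace;"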