I'm on a Windows Server 2016 machine. I have run pg_dump.exe on a 3 GB Postgres 9.4 database using the -Fc format.
When I run pg_restore to a local database (9.6):
pg_restore.exe -O -x -C -v -f c:/myfilename
The command runs for over 24 hours. (Still running)
Similar to this issue: Postgres Restore taking ages (days)
I am using the verbose CLI option, which looks to be spitting out a lot of JSON; I'm assuming that's the data being inserted into tables. Task Manager has the CPU at 0%, using 0.06 MB of memory. It looks like I should add more jobs next time, but this still seems pretty ridiculous.
I prefer using a linux machine, but this is what the client provided. Any suggestions?
pg_restore.exe -d {db_name} -O -x c:/myfilename
Did the trick.
I got rid of the -C and manually created the database prior to running the command. I also realized that connection options should come before other options:
pg_restore [connection-option...] [option...] [filename]
See the PostgreSQL documentation for more.
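For completeness, a sketch of the full working sequence (assuming a postgres superuser; the database name and dump path are placeholders, and -j enables parallel restore, which can help with large custom-format dumps):
createdb.exe -U postgres {db_name}
pg_restore.exe -U postgres -d {db_name} -O -x -j 4 c:/myfilename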
Related
I installed Jenkins and Postgres on the same CentOS 7 server.
I also installed and configured the "Database" and "PostgreSQL Database Plugin" plugins (screenshot omitted).
I want to insert data into my database jenkinsdb (the table I want to work on is "builds") after a build is successful, so I can track the history of builds, deployments, etc.
How can I run a query against my database from Jenkins?
Create a script file, let's say build_complete.sh, with the PostgreSQL commands:
#!/bin/bash
# Updated command that solves the bug (the stray outer quotes removed so the commands actually execute). Courtesy: YoussefBoudaya's comment.
export PGPASSWORD='postgres'
sudo -u postgres -H -- psql -d jenkinsdb -c "SELECT * FROM builds"
Please confirm the psql path on your server; it will be similar to /usr/lib/postgresql/10/bin/psql.
Add an execute-shell step at the end of your pipeline and simply run your script.
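To actually record build history rather than just read it, the script can run an INSERT instead of the SELECT; a sketch, assuming hypothetical columns job_name, build_number, and status on the builds table (JOB_NAME and BUILD_NUMBER are standard Jenkins environment variables, expanded by the shell because the SQL is double-quoted):
sudo -u postgres -H -- psql -d jenkinsdb -c "INSERT INTO builds (job_name, build_number, status) VALUES ('$JOB_NAME', '$BUILD_NUMBER', 'SUCCESS')"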
A similar solution can be read here.
I use TablePlus for my general admin.
Currently I am using the Docker postgres image at 10.3 for both production and localhost development.
Because TablePlus upgraded their Postgres 10 drivers to 10.5, I can no longer use pg_restore to restore the backup files, which are now dumped with 10.5 using --format=custom.
(Screenshot omitted: it showed the backup being taken in TablePlus with the 10.5 driver.)
The error message I get is pg_restore: [archiver] unsupported version (1.14) in file header
What I tried
On localhost, I tried simply changing the postgres tag in my Dockerfile from 10.3 to 10.5, but it didn't work.
Original Dockerfile:
FROM postgres:10.3
COPY ./maintenance /usr/local/bin/maintenance
RUN chmod +x /usr/local/bin/maintenance/*
RUN mv /usr/local/bin/maintenance/* /usr/local/bin \
&& rmdir /usr/local/bin/maintenance
to:
FROM postgres:10.5
COPY ./maintenance /usr/local/bin/maintenance
RUN chmod +x /usr/local/bin/maintenance/*
RUN mv /usr/local/bin/maintenance/* /usr/local/bin \
&& rmdir /usr/local/bin/maintenance
My host system for development is macOS.
I have many existing databases and schemas in my development Docker Postgres, so I am currently stumped as to how to upgrade safely without destroying the old data. Any advice?
Also, I think the long-term fix is to figure out how to keep the data files outside Docker (i.e., on my host system), so that every time I want to upgrade my postgres Docker image I can do so safely, without fear. I'd like to ask how to switch to such a setup as well.
If I understand you correctly, you want to restore a custom format dump taken with 10.5 into a 10.3 database.
That won't be possible if the archive format has changed between 10.3 and 10.5.
As a workaround, you could use a “plain format” dump (option --format=plain) which does not have an “archive version”. But any problems during restore are yours to deal with, since downgrading PostgreSQL isn't supported.
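For example (user and database names are placeholders):
pg_dump -U {user} --format=plain {dbname} > backup.sql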
You should always use the same version for development and production, and you should always use the latest minor release (currently 10.13). Everything else is asking for trouble.
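To see which versions are actually in play, compare the client and the server on each side (connection options are placeholders):
pg_dump --version
psql -U {user} -d {dbname} -c 'SHOW server_version;'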
Back up as plain text (--format=plain). Warning: the file will be huge, around 17x larger than the regular custom format; my typical 90 MB is now 1.75 GB.
Copy the backup file into the postgres container: docker cp ~/path/to/dump/in-host-system/2020-07-08-1.dump <name_of_postgres_container>:/backups
Open a bash shell in your postgres container: docker exec -it <name_of_postgres_container> bash
Inside the container's bash, restore it: psql -U username -d dbname < backups/2020-07-08-1.dump
That will work.
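As for keeping the data files outside the container: a minimal sketch, assuming the official image's default data directory /var/lib/postgresql/data. The bind mount keeps the cluster on the macOS host, so the image tag can change without touching the data:
docker run -d --name pg \
  -e POSTGRES_PASSWORD=postgres \
  -v "$HOME/pgdata":/var/lib/postgresql/data \
  postgres:10.5
Note this protects the files across minor-version image bumps like 10.3 to 10.5; a major-version upgrade still needs pg_upgrade or a dump and restore.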
I am quite new to Arch and a total beginner with PostgreSQL, so this may be a very basic question.
I installed postgresql 11.5-4 from extra and pgAdmin 4 from the AUR; both seem to be running well.
I created a test DB with the following command:
initdb -D /home/lg/test-db
I got the answer:
You can start the db-server using:
pg_ctl -D /home/lg/test-db -l logdatei start
I tried that and got:
pg_ctl -D /home/lg/test-db -l logdatei start
waiting for server to start.... stopped
pg_ctl: could not start the server
check the log.
The log only says that the lock file »/run/postgresql/.s.PGSQL.5432.lock« could not be created, because the directory could not be found: under /run there is no folder called "postgresql". I suppose postgresql cannot create this folder because it does not have the permission. Several online posts suggest changing the owner of the DB or running the commands as root with sudo; PostgreSQL prevents this, however. When I try any command with sudo, postgresql tells me that the command can't be run as root. There must be some very basic error in my thinking here, but I have not worked it out in 3 hours.
You'll have to remove /run/postgresql from unix_socket_directories in postgresql.conf before starting the server.
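A sketch of the edit, assuming the Arch default value (the file lives in the data directory from the question):
# /home/lg/test-db/postgresql.conf
# was: unix_socket_directories = '/run/postgresql'
unix_socket_directories = '/tmp'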
Probably you have /var/run symlinked to /run, and /run is on tmpfs. You should add something like d /run/postgresql 0755 postgres postgres - to /usr/lib/tmpfiles.d/postgresql.conf.
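To apply that tmpfiles entry without rebooting, something like this should work:
sudo systemd-tmpfiles --create postgresql.conf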
I have a two-node system running Pacemaker, Corosync, and PostgreSQL 9.4. I am doing pgsql replication using a virtual IP and am able to successfully recover a downed machine using manual commands. Now, to automate things, I want to run the following commands in a script to get my recovered master back into the cluster.
#su -postgres
$rm -rf /var/lib/pgsql/9.4/data/*    # delete old data files
$pg_basebackup -h 192.XX.XX.XX -U postgres -D /var/lib/pgsql/9.4/data -X stream -P    # recover the latest data from the standby running the latest entries
$rm /var/lib/pgsql/tmp/PGSQL.lock
$exit    # exit the postgres shell
#pcs resource cleanup msPostgresql
Now when I run these commands as a script, it hangs right after the first command, su -postgres: the cursor just blinks at the bash prompt, and the commands below are never executed.
I want to automate this process using cron, but the script itself is not working for me. Can someone help me out here?
Regards
As far as I know, "su -postgres" is wrong. You can use either "su postgres" or "sudo -i -u postgres".
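Also, even with the correct syntax, su or sudo opens a new shell, and the lines after it in your script wait until that shell exits, which is why it hangs. A minimal, untested sketch that runs the steps non-interactively, reusing the paths from the question:
#!/bin/bash
# run the recovery steps as the postgres user, without an interactive shell
sudo -u postgres bash -c '
  rm -rf /var/lib/pgsql/9.4/data/*
  pg_basebackup -h 192.XX.XX.XX -U postgres -D /var/lib/pgsql/9.4/data -X stream -P
  rm -f /var/lib/pgsql/tmp/PGSQL.lock
'
pcs resource cleanup msPostgresql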
Regarding the scripts here you can find tested, working scripts. The one you are interested in is called "initiate_replication.sh" there.
I've found a number of similar problems, and even added an answer to one similar non-dup. But I can't see or solve this problem. Here's the core problem:
I have a staging server on Heroku. I want to copy the staging server database to development on Nitrous to solve a problem. Nitrous has Postgres 9.2.4, and Heroku has Postgres 9.3.3.
My boss is away on holiday, and I have no authority to upgrade the Heroku staging service to a paid plan in which I can fork (and then use the forked Heroku database as a remote database for development).
I have used heroku pg:push to send development databases to staging in earlier work, no problem. But I cannot use heroku pg:pull; it fails, saying:
pg_dump: server version: 9.3.3; pg_dump version: 9.2.4
pg_dump: aborting because of server version mismatch
I have tried a rake db:structure:dump; it fails for the same version-mismatch reasons. I'd vaguely hoped that this used the pg gem and would magically work, ignoring rev levels. Hey, if you're ignorant enough, magic does work, sometimes.
I have a Nitrous box for development because the office firewall blocks, well, pretty much everything but 25, 80 and 443. All the useful ports like 22, 5432, 3000, etc, are blocked. So I develop on Nitrous. It's pretty neat. But it never occurred to me that Nitrous would have an old version of Postgres, and no apparent way to update it. Especially given that Nitrous often emphasises using Heroku.
I've tried using the more basic commands:
pg_dump -h ec2-XX-XXX-XX-XXX.compute-1.amazonaws.com -p 5432 -Fc --no-acl --no-owner --compress 3 -o -U ${DBNAME} > dumpfile.gz
But that fails (heroku pg:pull probably uses this command, under the hood) for the same reasons - version mismatch.
I realise that if I'd known more when I started, I could have requested that Heroku used 9.2. But I have data now, in a 9.3.3 instance, and I want that data, not the data I would have had, if only a time machine was available to me, and I could cope with the trousers of time paradoxes.
Possible solutions? Is there another web IDE that has PG 9.3? Is there a flag that I can't find that lets PG Dump 9.2 work with an up-rev DB? Is there a way to upgrade Nitrous to 9.3? At least for the critical pg_dump command?
Browser-based IDEs' versions of Postgres (as of 2014/08/13):
nitrous - 9.2
koding - 9.1
cloud9: 9.3 (Yay! - Pick me! Pick me!)
I spent another couple of hours and worked out a solution, using a different browser-based IDE. Cloud9 offers Postgres 9.3, pre-installed in a new VM.
You'll need to register your Cloud9 ID with Heroku (find the SSH keys in the Cloud9 Console, and paste into your ID SSH Keys in Heroku). And you'll need to sign in to Heroku from Cloud 9.
Then use pg_dump and pg_restore on Cloud9, using Heroku databases as the source and target.
pg_dump -h ec2-XX-XX-XX-XX.compute.amazonaws.com -p 5432 --no-owner --no-acl -Fc -o -U ${HEROKU_STAGING_DATABASE_USER} ${HEROKU_STAGING_DATABASE_NAME} > staging.dump
pg_restore -h ec2-XX-XX-XX-XX.compute.amazonaws.com -p 5432 --no-owner --no-acl --clean --verbose -d ${HEROKU_DEV_DATABASE_NAME} -U ${HEROKU_DEV_DATABASE_USER} < staging.dump
In your dev environment, make sure you update config/database.yml (or whatever it is your web apps need) to use the remote Heroku DB service.
Simples. Eventually.
I ran into precisely this problem, and solved it with blind magic by:
Downloading a recent.dump file from the Heroku postgres dashboard
Moving that file into the Nitrous box (and into the app directory)
Following the instructions here:
https://stackoverflow.com/a/11391586/3850418
pg_restore -O -d MY_APPNAME_DEV recent.dump
I got a bunch of warnings, but it seemed to have worked, at least enough for my dev/testing purposes.
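For reference, the dump can also be fetched from the command line instead of the dashboard; a sketch using the current Heroku CLI (the app name is a placeholder, and older toolbelts used different pgbackups commands):
heroku pg:backups:capture --app my-staging-app
heroku pg:backups:download --app my-staging-app --output recent.dump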