I work with PostgreSQL and pgAdmin, and I had an accident that forced me to format my Mac. That left me two options: recover the whole system via Time Machine, or install everything from scratch.
I chose to install from scratch, but I need to recover the old PostgreSQL database. I am now reinstalling everything (pgAdmin, etc.), but I don't have any backup of my old database; what I do have are the old PostgreSQL data files (thanks to Time Machine).
How can I recover the old database into the new installation?
I tried the file-system-level backup approach from http://www.postgresql.org/docs/8.1/static/backup-file.html:
tar -cf backup.tar /usr/local/pgsql/data
The problem is that the data files are (I think) in /Library/PostgreSQL/9.1/data, and that folder seems to be encrypted or hidden.
I can't get the tar command to work. This is what I run:
tar -cf backup.tar "/Volumes/backup/Backups.backupdb/MacBook, MacBook Pro de Albert/2014-04-30-112220/Macintosh HD/Library/PostgreSQL/9.1/data"
I get this error:
tar: /Volumes/backup/Backups.backupdb/MacBook, MacBook Pro de Albert/2014-04-30-112220/Macintosh HD/Library/PostgreSQL/9.1/data: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors.
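The "Cannot stat: No such file or directory" from tar suggests the path is not exactly right, so I would first check that the folder really exists under that name, and then retry with sudo, since the PostgreSQL data directory is normally only readable by the postgres user:
sudo ls -ld "/Volumes/backup/Backups.backupdb/MacBook, MacBook Pro de Albert/2014-04-30-112220/Macintosh HD/Library/PostgreSQL/9.1/data"
sudo tar -cf backup.tar "/Volumes/backup/Backups.backupdb/MacBook, MacBook Pro de Albert/2014-04-30-112220/Macintosh HD/Library/PostgreSQL/9.1/data"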
Another option was to copy the Time Machine folder /9.1/ next to the new 9.3 installation and follow this:
How to restore a file system level copy of a PostgreSQL database (not dump) to a different PC
but when I ran pg_dump it asked me for a password I didn't have.
The solution for dummies: uninstall the 9.3 installation and install 9.1 instead. The 9.1 installer detects the existing 9.1 data folder and tells me it is going to use it. Perfect, that is exactly what I need.
Now I can either keep 9.1 or upgrade to 9.3.
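If I later decide to move to 9.3, a rough sketch of the dump-and-reload route (assuming the EnterpriseDB layout under /Library/PostgreSQL and that the new 9.3 cluster listens on port 5433; ports and paths may differ on other machines):
# dump every database and role from the old 9.1 cluster
/Library/PostgreSQL/9.1/bin/pg_dumpall -U postgres -p 5432 > all-9.1.sql
# load the dump into the new 9.3 cluster (assumed here to be on port 5433)
/Library/PostgreSQL/9.3/bin/psql -U postgres -p 5433 -d postgres -f all-9.1.sql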
Related
I have the package postgresql11-contrib installed.
$ yum list installed | grep contrib
postgresql11-contrib.x86_64 11.11-1PGDG.rhel7 #pgdg11
Here is the version of Postgres:
psql (PostgreSQL) 11.6
I have the following entries in postgresql.conf, which are causing the error:
shared_preload_libraries = 'pg_stat_statements' # (change requires restart)
pg_stat_statements.max = 10000
pg_stat_statements.track = all
After some online searching I found that I need to "Run CREATE EXTENSION pg_stat_statements in the database(s) of my choice". But to do that I first commented out the three lines above in the conf file, because my Postgres server was failing to start with the error could not access file "pg_stat_statements": No such file or directory.
https://www.oreilly.com/library/view/mastering-postgresql-11/9781789537819/a6a44124-558b-42f9-a0f3-eb52ea2799d4.xhtml
https://www.postgresql.org/docs/11/pgstatstatements.html
Now when I execute the command CREATE EXTENSION pg_stat_statements; I see the error "could not open extension control file "/usr/postgresql/share/extension/pg_stat_statements.control": No such file or directory", and when I look into that directory there is indeed no pg_stat_statements.control file.
What am I missing here? Please help.
You seem to have installed two versions of Postgres on your Redhat/CentOS machine.
You probably have installed postgresql11 from the PGDG repository (yum.postgresql.org) as well as the default postgresql provided by CentOS 7. If you do rpm -aq | grep postgres I'll bet you'll see a v. 11 and a v. 9.2.
Just yum erase postgresql, which will delete the v. 9.2 instance that's in the CentOS repository. However, before you do that you'll need to dump the database(s) and stop Postgres. After you delete v. 9.2, you'll need to do a fresh initdb before starting Postgres, and re-load the data you had dumped.
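A hedged sketch of that sequence, assuming the PGDG v11 packages on CentOS 7 (service names and paths may differ on your machine):
# 1. dump everything from the old instance while it is still running
pg_dumpall -U postgres > all-databases.sql
# 2. stop it and remove the CentOS-supplied packages
systemctl stop postgresql
yum erase postgresql
# 3. initialise and start the PGDG v11 cluster
/usr/pgsql-11/bin/postgresql-11-setup initdb
systemctl enable --now postgresql-11
# 4. reload the dump
psql -U postgres -f all-databases.sql postgres
# 5. re-enable the pg_stat_statements lines in postgresql.conf, restart,
#    then create the extension in the database(s) you want to track
systemctl restart postgresql-11
psql -U postgres -c "CREATE EXTENSION pg_stat_statements;"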
I have a Postgres 12 database in use on Heroku, with Postgres 11 installed on my macOS workstation. When I try to restore the dump file provided to me by Heroku, I get:
$ pg_restore --verbose --no-owner -h localhost -d myapp_development latest-heroku.dump
pg_restore: [archiver] unsupported version (1.14) in file header
According to Heroku's documentation, it sounds like the only option is that a Heroku user who wants to access their data locally must be running Postgres 12. That seems silly.
Digging into the Postgres docs on this topic, they say:
pg_dump can also dump from PostgreSQL servers older than its own version. (Currently, servers back to version 8.0 are supported.)
That certainly sounds like it should be possible to tell pg_dump to target a specific pg_restore version. But nowhere on the internet does there seem to be an example of this in action, including the Postgres docs themselves, which offer no clues about the syntax that would be used to target the "dump versions back to version 8.0".
Has anyone ever managed to use the pg_restore installed with postgres 11 to import a dump from the pg_dump installed with postgres 12?
The best answer to this that I figured out was to upgrade via brew upgrade libpq. This upgrades psql, pg_dump and pg_restore to the latest version (to link them I had to use brew link --force libpq). Once that upgrade was in place, I was able to dump from the Postgres 12 databases on Heroku and import into my Postgres 11 database locally. I thought I might need to dump to raw SQL for that to work, but thankfully the Postgres 12-based pg_restore was able to import into my Postgres 11 database without issue.
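Put together (assuming Homebrew is already installed), the steps look roughly like this:
brew upgrade libpq            # or `brew install libpq` if it is not installed yet
brew link --force libpq       # put the newer psql, pg_dump and pg_restore first on the PATH
pg_restore --version          # sanity check: should now report 12.x
pg_restore --verbose --no-owner -h localhost -d myapp_development latest-heroku.dump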
pg_restore will refuse to handle a dump with a later version than itself - basically, if it encounters a file "from the future", it cannot know how to handle it.
So if you dump a database with pg_dump from v12, pg_restore from v11 cannot handle it.
The solution, as you have found out, is to upgrade the v11 installation.
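On macOS with Homebrew, for example, that could look like this (the postgresql@12 formula name is an assumption; other platforms have their own packages):
brew install postgresql@12
brew link --force postgresql@12   # versioned formulas are keg-only, so they need to be force-linked
pg_restore --version              # should now report 12.x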
My hosting provider upgraded my Ubuntu server and it no longer boots. The only way I can still access my data is read-only, via a rescue environment (SSH shell).
I was running a Postgres 9.1 installation on the crashed server. I am not able to start the Postgres server in the rescue environment, and I do not have a database dump created with pg_dump.
However, I was able to copy the whole /var/lib/postgresql folder to a new machine. I installed Postgres 9.1 on that machine and afterwards replaced its /var/lib/postgresql with my old files.
When I start the postgres server, I get something like "incorrect checksum in control file".
Is there any way to restore the database content without using pg_dump (since I don't have a current dump and I am not able to run it on the defective machine)?
Indeed it was an issue between 32-bit and 64-bit. I had another old server running 32-bit Ubuntu. Initially I had tried to restore the data on a 64-bit machine; on the 32-bit machine it simply worked by copying the Postgres main directory. Finally I was able to log into the database and create a dump.
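For anyone hitting the same thing, a hedged sketch of how to check the architecture and then get a dump out (paths assume the Debian/Ubuntu layout for 9.1):
# "file" shows whether the installed binaries are 32-bit or 64-bit ELF
file /usr/lib/postgresql/9.1/bin/postgres
# pg_controldata on the copied data directory shows what the original build expected (block size, data alignment, ...)
sudo -u postgres /usr/lib/postgresql/9.1/bin/pg_controldata /var/lib/postgresql/9.1/main
# once the architectures match, fix ownership, start the server and take a proper dump
sudo chown -R postgres:postgres /var/lib/postgresql
sudo service postgresql start
sudo -u postgres pg_dumpall > all-databases.sql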
I had PostgreSQL 9.6 installed (on Windows 10) and did a complete uninstall, including the data directory and all old copies of pgAdmin, and there are no environment variables relating to this or any other old Postgres installation either.
I recently installed PostgreSQL 11 and pgAdmin 4 v3.6 using the EnterpriseDB installer. When I run pgAdmin 4 it auto-detects a PostgreSQL instance called 9.6, though the details tell me it is actually my v11 instance, with the same port number, password, etc. The only difference is that it points to the non-existent old data directory.
I have searched for a stray postgresql.conf file (and can't find one, as it was in the deleted data directory). Since there are also no environment variables, no binaries and no data, I can't understand how pgAdmin is auto-detecting this ghost. Any suggestions on how to correct it?
EDIT:
I have tried deleting all cookies relating to pgAdmin and PostgreSQL in Chrome too; this had no effect.
I have also double-checked that there is no PostgreSQL 9.6 service running (but that just confirms the above: pgAdmin tells me the instance is called 9.6 when it is actually a v11 instance).
Try deleting the pgAdmin 4 configuration file pgadmin4.db, located at %APPDATA%\pgAdmin\.
Restart pgAdmin 4 and check.
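For example, from a Windows command prompt (close pgAdmin 4 first; the path assumes the default per-user location):
del "%APPDATA%\pgAdmin\pgadmin4.db"
:: or, to keep a copy instead of deleting it outright:
ren "%APPDATA%\pgAdmin\pgadmin4.db" pgadmin4.db.bak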
I am trying to take a backup of my PostgreSQL 10 database running on a CentOS 7 machine and then restore it on a development machine running Windows 10, but I am getting errors during the restore process:
pg_restore: [custom archiver] WARNING: ftell mismatch with expected position -- ftell used
I have made sure that the parameters passed to the dump and restore commands match; this is the dump command:
pg_dump --format=c --compress=9 --encoding=UTF-8 -n public --verbose --username=postgres databaseName -W -f /usr/local/production-dump.backup
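and the restore side, roughly, looks like this (host, database name and file path here are illustrative, not my real values):
pg_restore --format=c --verbose -n public --username=postgres -W -h localhost -d databaseName production-dump.backup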
However, it does not work at all. The schema is restored, but the data is not: right before the restore process starts restoring data, it gives a "pipe has ended" error and does not proceed with the full restore. I am using the custom format because the plain SQL and tar formats generate huge backup files.
What am I doing wrong? Is there any parameter that I need to pass to the dump or restore commands?
The likely explanation is that the file was modified during file transfer. Could you calculate a checksum of the file before and after transfer and verify that it is the same?
If the file did not change, then you have probably found a PostgreSQL bug. If you have a dump that you can share and that exhibits the problem, please report this problem to PostgreSQL.
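For example (sha256sum on the CentOS side, PowerShell's Get-FileHash on the Windows side; the dump path is the one from the question):
# on the CentOS 7 machine that produced the dump
sha256sum /usr/local/production-dump.backup
# on the Windows 10 machine, in PowerShell, after the transfer
Get-FileHash .\production-dump.backup -Algorithm SHA256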