My hosting provider upgraded my Ubuntu server and it no longer boots. The only way I can still access my data is read-only via a rescue environment (SSH shell).
I was running a Postgres 9.1 installation on the crashed server. I am not able to start the Postgres server in the rescue environment, and I do not have a database dump created with pg_dump.
However, I was able to copy the whole /var/lib/postgresql folder to a new machine. I installed Postgres 9.1 there and afterwards replaced its /var/lib/postgresql with my old files.
When I start the postgres server, I get something like "incorrect checksum in control file".
Is there any way to restore the database content without using pg_dump (since I don't have a current dump and cannot run it on the defective machine)?
Indeed it was a 32-bit vs. 64-bit issue. I had another old server running 32-bit Ubuntu. Initially I had tried to restore the data on a 64-bit machine; on the 32-bit machine, simply copying the Postgres main directory worked. Finally I was able to log into the database and create a dump.
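For anyone in the same spot, the rough sequence that worked for me, assuming a 32-bit machine with the same PostgreSQL 9.1 major version (the rescued-files path and database name are placeholders):

# Stop the freshly installed server, swap in the rescued files,
# fix ownership, restart, then immediately take a proper dump.
sudo service postgresql stop
sudo rsync -a /path/to/rescued/postgresql/ /var/lib/postgresql/
sudo chown -R postgres:postgres /var/lib/postgresql
sudo service postgresql start
sudo -u postgres pg_dump mydatabase > mydatabase.sql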
I have a Kubernetes setup where the PostgreSQL DB runs in a different pod than the executor where my TypeScript code runs, which means I do not have access to pg_dump and psql. Is there a TypeScript npm library that can back up and restore the DB programmatically and remotely, or a special client I can install in the project? I could not find such an npm library or client on the internet.
Another thing I can think of is some "universal" SQL script that performs the backup, and another one for the restore, if such a thing exists at all.
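One direction I am considering is streaming COPY over a normal client connection, which would cover the data but not the schema. A minimal sketch, assuming the pg and pg-copy-streams npm packages (the table name and connection string are placeholders):

// Streams one table out via COPY TO STDOUT and back in via COPY FROM STDIN.
// COPY runs inside the server, but the rows stream over the client
// connection, so no shell access to the database pod is required.
import { Client } from "pg";
import { to as copyTo, from as copyFrom } from "pg-copy-streams";
import { createReadStream, createWriteStream } from "fs";
import { pipeline } from "stream/promises";

async function backupTable(client: Client, table: string): Promise<void> {
  const out = client.query(copyTo(`COPY ${table} TO STDOUT WITH (FORMAT csv, HEADER true)`));
  await pipeline(out, createWriteStream(`${table}.csv`));
}

async function restoreTable(client: Client, table: string): Promise<void> {
  // The target table must already exist: schema is not covered by this sketch.
  const input = client.query(copyFrom(`COPY ${table} FROM STDIN WITH (FORMAT csv, HEADER true)`));
  await pipeline(createReadStream(`${table}.csv`), input);
}

async function main(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    await backupTable(client, "my_table"); // placeholder table name
  } finally {
    await client.end();
  }
}

main().catch((err) => { console.error(err); process.exit(1); });

Is something along these lines the intended approach, or is there a ready-made library that also handles the schema?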
I have an important issue with my database and I don't know how to fix it.
I have PostgreSQL 9.6 running on CentOS. After a system reboot the postgresql service didn't start, so, following the instructions in the shell, I ran "sudo /usr/pgsql-9.6/bin/postgresql96-setup initdb".
To my surprise, when I started PostgreSQL it was a new, empty instance. I had a database with a size of 4 GB and it has disappeared...
However, the data must still be in the files, because the data folder is 4 GB in size. How can I recover from this situation?
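In case it matters, this is what I am planning to try, assuming the old cluster files are still intact somewhere on disk (all paths below are guesses):

# initdb refuses to run in a non-empty directory, so if it succeeded the
# new cluster may live somewhere other than the old one. Find candidates:
sudo find / -name PG_VERSION 2>/dev/null

# If the old data directory turns up, stop the service and start Postgres
# against the old directory instead:
sudo systemctl stop postgresql-9.6
sudo -u postgres /usr/pgsql-9.6/bin/pg_ctl -D /path/to/old/data start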
Thank you very much,
Regards,
I had PostgreSQL 9.6 installed (on Windows 10) and did a complete uninstall, including the data directory and all old copies of pgAdmin; there are no environment variables relating to this or any other old Postgres installation either.
I recently installed PostgreSQL 11 and pgAdmin 4 v3.6 using the EnterpriseDB installer. When I run pgAdmin 4 it auto-detects a PostgreSQL instance called 9.6, though the details tell me it is actually my v11 instance, with the same port number, password, etc. The only difference is that it points to the non-existent old data directory.
I have searched for a stray postgresql.conf file (and can't find one, as it was in the deleted data directory!). As there are also no environment variables, no binaries and no data, I can't understand how pgAdmin is auto-detecting this ghost. Any suggestions on how to correct it?
EDIT:
I have tried deleting all cookies relating to pgAdmin and PostgreSQL in Chrome too; this had no effect.
I have also double-checked that there is no PostgreSQL 9.6 service running (but that just confirms the above: pgAdmin tells me it is called 9.6 but it is actually a v11 instance).
Try deleting the pgAdmin 4 config file pgadmin4.db, located at %APPDATA%\pgAdmin\
Restart pgAdmin 4 and check.
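For example, from a Windows command prompt with pgAdmin 4 closed (renaming rather than deleting, so it can be rolled back):

:: pgAdmin 4 rebuilds this file with default settings on the next start
cd /d %APPDATA%\pgAdmin
ren pgadmin4.db pgadmin4.db.bak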
I work with PostgreSQL and pgAdmin, and I had an accident that forced me to format my Mac. I had two options: recover the whole system via Time Machine, or install everything from scratch.
I chose to install from scratch, but I need to recover the old PostgreSQL database. I am now reinstalling everything, pgAdmin etc., but I don't have any backup of my old database; I do have the old PostgreSQL database files (thanks to Time Machine).
How can I recover the old database into the new installation?
I tried to do this, following http://www.postgresql.org/docs/8.1/static/backup-file.html:
tar -cf backup.tar /usr/local/pgsql/data
The problem is that the data files are (I think) in /Library/PostgreSQL/9.1/data, and that folder seems to be encrypted or hidden(?).
I can't execute the tar command; this is what happens:
tar -cf backup.tar "/Volumes/backup/Backups.backupdb/MacBook, MacBook Pro de Albert/2014-04-30-112220/Macintosh HD/Library/PostgreSQL/9.1/data"
I get this error:
tar: /Volumes/backup/Backups.backupdb/MacBook, MacBook Pro de Albert/2014-04-30-112220/Macintosh HD/Library/PostgreSQL/9.1/data: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors.
Another option was to copy the Time Machine folder /9.1/ next to the new 9.3 installation and try this:
How to restore a file system level copy of a PostgreSQL database (not dump) to a different PC
but when executing pg_dump it asked me for a password I didn't have.
The solution for dummies: uninstall PostgreSQL 9.3 and install PostgreSQL 9.1, which detects the 9.1 data folder and tells me it is going to use it. Perfect! It is exactly what I needed.
Now I can keep 9.1 or upgrade to 9.3.
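For reference, the file-level route would have looked roughly like this before starting the 9.1 server (a sketch only; the Time Machine path is the one from the error above, and ownership and permissions matter):

# Move the fresh cluster aside, copy the old 9.1 cluster out of the
# Time Machine backup, then make sure the postgres user owns it.
sudo mv /Library/PostgreSQL/9.1/data /Library/PostgreSQL/9.1/data.fresh
sudo cp -Rp "/Volumes/backup/Backups.backupdb/MacBook, MacBook Pro de Albert/2014-04-30-112220/Macintosh HD/Library/PostgreSQL/9.1/data" /Library/PostgreSQL/9.1/
sudo chown -R postgres /Library/PostgreSQL/9.1/data
sudo chmod 700 /Library/PostgreSQL/9.1/data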
I would like to back up my production database before and after running a database migration, from my deploy server (not the database server). I have a PostgreSQL 8.4 server sitting on a CentOS 5 machine. The website accessing the database is on a Windows 2008 server running an MVC.NET application; it checks out changes from source control, compiles the project, runs any DB changes, then deploys to IIS.
I have the DB server set up with a crontab job for daily backups, but I also want a way of triggering a backup from the deploy server during the deploy process. From what I can figure out, there isn't a way to tell the database, from a client connection, to back itself up. If I call pg_dump from the web server as part of the deploy script, it will create the backup on the web server (not desirable). I've looked at the COPY command, and it probably won't give me what I want. MS SQL Server lets you call the BACKUP command from within a DB connection, which puts the backup on the database machine.
I found this post about MySQL, where this is said not to be a supported feature in MySQL: Remote backup of MySQL database. Is Postgres the same?
What would be the best way to accomplish this? I thought about creating a small application that makes an SSH connection to the DB server and then calls pg_dump, but this would mean storing SSH connection information on the server, which I'd really rather not do if possible.
Create a database user pgbackup and assign it read-only privileges on all your database tables.
Set up a new OS user pgbackup on the CentOS server with a /bin/bash shell.
Log in as pgbackup, create a pair of SSH authentication keys without a passphrase, and allow this user to log in using the generated private key:
su - pgbackup
ssh-keygen -q -t rsa -f ~/.ssh/id_rsa -N ""
cp -a ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Create a file ~pgbackup/.bash_profile containing:
exec pg_dump databasename --file=`date +databasename-%F-%H-%M-%S-%N.sql`
Set up your script on Windows to connect using ssh and authenticate with the generated private key. Since .bash_profile execs pg_dump, an ordinary login can do nothing besides create a database backup, so it is reasonably safe. Note that a client that passes its own command line bypasses .bash_profile, so for extra protection you can also pin a forced command in authorized_keys with its command="..." option.
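The Windows side of the deploy step then reduces to a single command, something like this (assuming an OpenSSH client on the deploy server; key path and host name are placeholders):

ssh -i C:\deploy\keys\pgbackup_id_rsa pgbackup@centos-server-name

Logging in is enough: .bash_profile execs pg_dump on the DB server, and the session ends when the dump completes.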
I think this could be possible if you create a trigger that uses the PostgreSQL module dblink to make a remote database connection from within PL/pgSQL.
I'm not sure what you mean but I think you can just use pg_dump from your Windows computer:
pg_dump --host=centos-server-name databasename > backup.sql
You'd need to install the Windows version of PostgreSQL there so that pg_dump.exe is available, but you don't need to start the PostgreSQL service or even initialize a database cluster there.
Hi Mike, you are correct.
Using pg_dump we can save the backup only on the local system. In our case we created a script on the DB server for taking the base backup, and an expect script on another server which runs that script on the database server.
All our servers are Linux servers; we did this using shell scripts.