restore db postgres - format .gz - postgresql

I am trying to restore a database on my local machine. I downloaded this database from the server as file.gz; inside the archive is the file file.out. How can I restore the entire structure on my computer, through pgAdmin or via the console?
If I understand correctly, I need to use the pg_restore command, but all my attempts end with a syntax error.
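A typical sequence looks like this (a sketch; the database name is a placeholder, and it assumes file.out is a custom-format dump; if pg_restore says the file is not a valid archive, it is a plain SQL dump and belongs to psql instead):

# Unpack the archive without deleting the original
gunzip -c file.gz > file.out

# List the archive contents; this errors out if the dump is plain SQL
pg_restore -l file.out

# Create an empty database and restore the dump into it
createdb -U postgres mydb
pg_restore -U postgres -d mydb file.out

# If it turned out to be a plain SQL dump instead:
# psql -U postgres -d mydb -f file.out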

Related

How to decompress a .sql extension file on Windows Server

I have taken a full backup of a PostgreSQL cluster that contains 100 databases. The backup format is .sql (e.g. pg_dumpall.exe -U postgres > D:\Backup\fullbkp.sql). Now one of my databases has crashed, and I want to extract that database's backup from the full backup file for restoration.
I have searched a lot but couldn't find any way to decompress the file so that I can get that particular database out of the full backup.
Please suggest!
Regards
Sadam
Such a backup is not compressed. Also, it contains a backup of all databases in the cluster, and there is no easy way to extract a single database from it.
Create a new PostgreSQL cluster with initdb, restore the dump there using psql, then use pg_dump to extract the single database you need.
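A rough sketch of that procedure (shown with Unix-style paths and a scratch port; on Windows use the .exe binaries and a path like D:\scratch; all names here are placeholders):

# Create a scratch cluster and start it on a spare port
initdb -D /tmp/scratch
pg_ctl -D /tmp/scratch -o "-p 5433" start

# Restore the full pg_dumpall output into the scratch cluster
psql -p 5433 -U postgres -f fullbkp.sql postgres

# Extract only the database you need
pg_dump -p 5433 -U postgres crashed_db > crashed_db.sql

# Stop the scratch cluster when done
pg_ctl -D /tmp/scratch stop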

Is it possible to import postgresql custom-format dump file in cloud sql without using pg_restore?

I downloaded a PostgreSQL .dmp file from the ChEMBL database.
I want to import this into GCP Cloud SQL.
When I run the import with the console or the gcloud command, I get the following error:
Importing data into Cloud SQL instance...failed.
ERROR: (gcloud.sql.import.sql) [ERROR_RDBMS] exit status 1
The input is a PostgreSQL custom-format dump.
Use the pg_restore command-line client to restore this dump to a database.
Can I import custom-format dmp files without using the pg_restore command?
https://cloud.google.com/sql/docs/postgres/import-export/importing
That page describes pg_restore, but I couldn't get it to work.
For custom-format files, is it necessary to run pg_restore after uploading them to Cloud Shell?
According to the CloudSQL docs:
Only plain SQL format is supported by the Cloud SQL Admin API.
The custom format is allowed if the dump file is intended for use with pg_restore.
If you cannot use pg_restore for some reason, I would spin up a local Postgres instance (e.g., on your laptop) and use pg_restore to restore the database there.
After loading it into your local database, you can use pg_dump to dump it to a file in plain-text format, then load that into Cloud SQL with the console or the gcloud command.
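A sketch of that workaround (the instance, bucket, and database names are placeholders, and the dump is assumed to be custom format):

# Restore the custom-format dump into a local scratch database
createdb chembl_local
pg_restore -d chembl_local chembl.dmp

# Dump it back out as plain SQL; --no-owner/--no-acl avoid role
# errors on Cloud SQL, which gives you no superuser
pg_dump --format=plain --no-owner --no-acl chembl_local > chembl_plain.sql

# Upload the plain dump to a bucket and import it
gsutil cp chembl_plain.sql gs://my-bucket/
gcloud sql import sql my-instance gs://my-bucket/chembl_plain.sql --database=chembl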

Need to convert a dump.sql to a *fname.dump file for restoration of Odoo

My last working database backup of an Odoo 13 CE system was a full one, including the file store. I'm getting timeouts when trying to restore "a copy" via the Odoo database manager page. I thought I could just do a partial restore (dump.sql & manifest.json), dump the filestore, recompress, and upload, but that brought everything down to its knees (it errored with "no *.dump file found"). So I logged into the server, dropped the failed restore, and restarted the odoo service, and everything is back to somewhat normal, with the database I want to replace active.
Is there a way to convert that .sql to a .dump, or some other way to get my .sql added to my pgdb? I'm fairly green re: psql, so if I'm missing something simple, please feel free to shove it down my throat.
TIA
To restore an SQL backup file to a new database:
psql YOUR_DATABASE_NAME < YOUR_FILENAME
You can read more about backing up and restoring a Postgres DB here: https://www.postgresql.org/docs/11/backup-dump.html
When restoring a heavy database (with file store), you have to raise the server's time limits so the process can finish.
Add these parameters to your Odoo startup command:
--limit-time-cpu=6000 --limit-time-real=12000
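For example (a sketch; the binary and config paths are placeholders):

./odoo-bin -c /etc/odoo/odoo.conf --limit-time-cpu=6000 --limit-time-real=12000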
Restore the SQL file:
psql database_name < your_file.sql
Restore the dump file:
pg_restore -d database_name < your_file.dump
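To answer the conversion question directly: you can turn the plain dump.sql into a custom-format .dump by restoring it into a scratch database and dumping it back out. A sketch (database and file names are placeholders):

# Load the plain SQL dump into a temporary database
createdb odoo_tmp
psql odoo_tmp < dump.sql

# Dump it back out in pg_dump's custom format
pg_dump --format=custom -f fname.dump odoo_tmp

# Drop the scratch database afterwards
dropdb odoo_tmp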

Backup and restore postgresql data folder directly

Until now I've been backing up my PostgreSQL data using pg_dump, which exports the data to an SQL file mydb.sql, and then restoring from that SQL file using psql -U user -d db < mydb.sql.
For one reason or another it would be more convenient to restore the database content more directly, in an environment where psql does not exist: specifically, on a host server where PostgreSQL runs in a Docker container but is not installed on the host itself.
My plan is to back up the content of /var/lib/postgresql/data/ to a tar file, and when required (e.g. when a new server is created that hosts the postgresql container) just restore that to the same path. The folder /var/lib/postgresql/data/ in the docker container is mapped to a folder on the host server, so I would create this backup on the host, not inside the postgres container.
Is this a valid approach? Any "gotchas"? And are there any subfolders within /var/lib/postgresql/data/ that I can exclude from the tar file? I don't want to back up mere 'housekeeping' information.
You can do that, but you have to do it properly if you don't want your database to become corrupted.
Either stop PostgreSQL before copying the data directory or follow the instructions from the documentation for an online backup.
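A minimal sketch of the cold-backup route (the container name and host path are placeholders; keep the whole data directory together, including pg_wal, or the copy will not be consistent):

# Stop the database so the files are in a consistent state
docker stop my_postgres

# Archive the host-mapped data directory; postmaster.pid and the
# contents of pg_stat_tmp are transient housekeeping and safe to
# skip, but excluding anything else risks a corrupt restore
tar -czf pgdata-backup.tar.gz --exclude=postmaster.pid --exclude='pg_stat_tmp/*' -C /srv/postgres/data .

docker start my_postgres

# To restore on a new host, extract to the same path (preserving
# ownership and permissions) before starting the container:
# tar -xzf pgdata-backup.tar.gz -C /srv/postgres/data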

Can PostgreSQL COPY read CSV from a remote location?

I've been using JDBC with a local Postgres DB to copy data from CSV files into the database with the Postgres COPY command. I use Java to parse the existing CSV file into a CSV format that matches the tables in the DB. I then save this parsed CSV to my local disk and have JDBC execute a COPY command using the parsed CSV against my local DB. Everything works as expected.
Now I'm trying to perform the same process on a Postgres DB on a remote server using JDBC. However, when JDBC tries to execute the COPY I get
org.postgresql.util.PSQLException: ERROR: could not open file "C:\data\datafile.csv" for reading: No such file or directory
Am I correct in understanding that the COPY command tells the DB to look locally for this file? I.e., the remote server is looking on its own C: drive (which doesn't exist).
If this is the case, is there any way to tell the COPY command to look on my computer rather than "locally" on the remote machine? Reading through the COPY documentation, I didn't find anything that indicated this functionality.
If the functionality doesn't exist, I'm thinking of just populating the whole database locally then copying to database to the remote server but just wanted to check that I wasn't missing anything.
Thanks for your help.
Create your SQL file as follows on your client machine:
COPY testtable (column1, c2, c3) FROM STDIN WITH CSV;
1,2,3
4,5,6
\.
Then execute, on your client:
psql -U postgres -f /mylocaldrive/copy.sql -h remoteserver.example.com
If you use JDBC, the best solution for you is to use the PostgreSQL COPY API:
http://jdbc.postgresql.org/documentation/publicapi/org/postgresql/copy/CopyManager.html
Otherwise (as already noted by others) you can use \copy from psql, which allows accessing local files on the client machine.
To my knowledge, COPY can only read files from the machine where the database server is running; reading from the client side requires COPY FROM STDIN or psql's \copy.
You could make a shell script where you run your Java conversion, then use psql to issue a \copy command, which reads from a file on the client machine, as sketched below.
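A sketch of that approach (the host, database, and table layout are placeholders taken from the examples above):

# Run the Java CSV conversion first, then push the parsed file from
# the client; \copy streams the local file over the connection
psql -h remoteserver.example.com -U postgres -d mydb -c "\copy testtable (column1, c2, c3) FROM 'C:/data/datafile.csv' WITH CSV"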