Is it possible to get a Postgres database dump (pg_dump) using SQLAlchemy? I can get the dump with pg_dump itself, but I am doing all other DB operations with SQLAlchemy, and thus want to know if this dump operation is also possible through SQLAlchemy. Any suggestions or links would be of great help.
Thanks,
Tara Singh
pg_dump is a system command, so I do not think you can produce a Postgres database dump using SQLAlchemy.
SQLAlchemy does not provide anything like pg_dump. You could probably mimic it with a bunch of queries, but it would be painful.
The easier way is to run pg_dump itself from a Python script with os.system or subprocess.call.
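For example, a minimal sketch using subprocess (the host, credentials, and file names here are placeholders, not anything SQLAlchemy-specific):

```python
import os
import subprocess

# PGPASSWORD in the environment (or a ~/.pgpass file) avoids an interactive prompt.
env = dict(os.environ, PGPASSWORD="secret")

# Plain-format dump of database "mydb" into mydb_dump.sql.
subprocess.run(
    ["pg_dump", "-h", "localhost", "-p", "5432", "-U", "postgres",
     "-f", "mydb_dump.sql", "mydb"],
    check=True,
    env=env,
)
```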
If it's for regular backups, also have a look at the Safekeep project, which talks to your databases on your behalf.
I am new to Docker, and I want to back up my Postgres database running in Docker. All the solutions I have seen suggest generating a SQL dump script and restoring the DB by running that script. But I don't want to do this. Is it possible to back up and restore by migrating the binary files of the DB?
You can build a Postgres image from the plain empty Postgres DB image. In the Dockerfile you add a SQL script which runs on DB initialization (docker-entrypoint-initdb.d). The SQL script contains a dblink to your backed-up DB and commands like create table my_table as select * from my_table@remotedb. After docker build you have an image with a backup of your original database tables.
I do something similar with Oracle, with more complexity (copying only a subset of the original database, preserving indexes, etc.). The Oracle Docker image differs from the PG one in some properties, but I believe the rough idea is applicable. It has been some time since I worked with PG, so I won't advise you on how to migrate the binary files (though I believe that would be possible too).
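If you would rather drive the same table-copy idea from Python instead of an init script, a rough sketch with SQLAlchemy and Postgres's dblink extension might look like this (the connection strings, table, and column names are made up for illustration):

```python
from sqlalchemy import create_engine, text

# Target database that should receive the snapshot (e.g. the new container's DB).
engine = create_engine("postgresql+psycopg2://postgres:secret@localhost:5432/postgres")

with engine.begin() as conn:
    conn.execute(text("CREATE EXTENSION IF NOT EXISTS dblink"))
    # Copy the remote table's contents into a local table; dblink needs an
    # explicit column definition list for the result set.
    conn.execute(text("""
        CREATE TABLE my_table AS
        SELECT *
        FROM dblink('host=remotehost dbname=sourcedb user=postgres password=secret',
                    'SELECT id, payload FROM my_table')
             AS t(id integer, payload text)
    """))
```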
Can someone help me, please? I need to perform a pg_dump of a DB, and the result must be readable by Athena:
pg_dump --> S3 <-- AWS Athena
How can I do this?
The default PostgreSQL pg_dump output is plain SQL that can be run against a PostgreSQL database to create tables and load data. Open it in a text editor and take a look -- you'll see what I mean.
As a result, pg_dump files are not in a format that can be used with Amazon Athena.
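If you want to see this for yourself from a script rather than a text editor, here is a small sketch (the table and database names are made up):

```python
import subprocess

# Dump a single table, then print the first lines: you'll see plain SQL
# statements (CREATE TABLE ..., COPY ... FROM stdin;), not a delimited or
# columnar format that Athena could query.
subprocess.run(["pg_dump", "-U", "postgres", "-t", "my_table",
                "-f", "my_table_dump.sql", "mydb"], check=True)

with open("my_table_dump.sql") as f:
    for line in f.readlines()[:20]:
        print(line, end="")
```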
I have the Hasura GraphQL engine running in a Docker container. My goal is to export all the data from my Postgres database so that my coworker can import it and work with the same data. What is the correct way to do this with Hasura?
You can treat it exactly like a normal database running on Docker; nothing is different, and you can use any kind of dump or import/export.
You should understand the way Hasura works: it just takes a database as an input; Hasura does not manage the database installation itself.
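For instance, assuming the Postgres container behind Hasura is named postgres and uses the default superuser (both assumptions), a rough sketch of the export half looks like this; your coworker would restore the file on their side:

```python
import subprocess

# Pipe pg_dump's output from inside the container into a local file.
with open("hasura_db.sql", "w") as out:
    subprocess.run(
        ["docker", "exec", "postgres", "pg_dump", "-U", "postgres", "postgres"],
        check=True,
        stdout=out,
    )
# Restore on the other machine with something like:
#   docker exec -i postgres psql -U postgres postgres < hasura_db.sql
```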
Is there any way to dump a Postgres DB using psql only (without pg_dump)?
Thanks.
Theoretically you have access to all the data needed. In practice you're more likely to be able to dump/save some data using the COPY command, but not the database schema, etc.
Note that you do not need pg_dump on the same machine as your database server, as long as the server listens on the network. But well, I don't know why you even ask :)
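As a rough sketch of the COPY route, driven from Python but going through psql only (the table names and connection details are made up, and this saves data only, not the schema):

```python
import subprocess

# \copy runs the COPY on the client side, so the CSV files land on your
# machine rather than on the database server.
for table in ["users", "orders"]:
    subprocess.run(
        ["psql", "-h", "localhost", "-U", "postgres", "-d", "mydb", "-c",
         f"\\copy {table} TO '{table}.csv' WITH (FORMAT csv, HEADER)"],
        check=True,
    )
```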
In theory you could run queries to extract the schema and then use those results to extract the data. But it wouldn't be easy to manipulate all of that into something usable for a restore using just psql.
When using Cucumber with Capybara, I have to load the test database data from a SQL data dump.
Unfortunately it takes 10 seconds for each scenario, which slows down the tests.
I have found something like: http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial#How_to_Replicate
Do you think binary replication will be quicker than using SQL files?
Is there anything I can do to make the restore quicker (I restore just data, not structure)?
What approaches would you recommend to try?
You could try putting your test data into a "template" database (e.g. mydb_template).
To prepare the test scenario, you simply drop your database using DROP DATABASE mydb and recreate it based on the template: CREATE DATABASE mydb TEMPLATE = mydb_template;
Of course you'll need to connect to e.g. template0 or the postgres database in order to be able to drop mydb.
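A minimal sketch of that cycle from Python, assuming a prepared mydb_template and a superuser connection (DROP/CREATE DATABASE must run outside a transaction, hence the autocommit engine; credentials are placeholders):

```python
from sqlalchemy import create_engine, text

# Connect to the maintenance database (postgres), not to mydb itself.
admin = create_engine(
    "postgresql+psycopg2://postgres:secret@localhost:5432/postgres",
    isolation_level="AUTOCOMMIT",
)

with admin.connect() as conn:
    conn.execute(text("DROP DATABASE IF EXISTS mydb"))
    conn.execute(text("CREATE DATABASE mydb TEMPLATE mydb_template"))
```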
I think this could be faster than importing a dump.
I recall a discussion on the PG mailing list regarding this approach and some performance problems with large "templates" that were fixed in 9.0.
(I restore just data, not structure)
COPY is always fastest for importing just data. The other answer deals with restoring a whole database.
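For a data-only restore, here is a sketch using psycopg2's copy_expert (the same driver SQLAlchemy typically sits on; the file and table names are made up):

```python
import psycopg2

conn = psycopg2.connect("host=localhost dbname=mydb user=postgres password=secret")
with conn:  # commits on success, rolls back on error
    with conn.cursor() as cur, open("users.csv") as f:
        # Stream the CSV straight into the table via COPY.
        cur.copy_expert("COPY users FROM STDIN WITH (FORMAT csv, HEADER)", f)
conn.close()
```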