Restore only triggers postgres - postgresql

I made a backup of a Postgres database; it's not the first time I've done this. I used this command:
pg_dump db -f /backup/agosto_31.sql
And I did the restore with this:
psql -d August_31 -f August_31.sql
But this time none of the triggers were imported, and there are many. I checked in the file August_31.sql and they are there. How can I import them again? Only the triggers.
Thanks everyone, greetings!

There is no built-in way to import only the triggers. But because your dump (backup file) is in SQL format (plain text), you can cut the trigger definitions out manually in any editor. For this case it is practical to dump data and schema to separate files.
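Since the trigger definitions are ordinary SQL statements in the plain-text dump, a rough sketch of pulling out and replaying just those (this assumes pg_dump's usual layout, where each CREATE TRIGGER statement starts at the beginning of a line and ends with a semicolon at the end of a line, and that the trigger functions already exist in the target database):
# extract every CREATE TRIGGER statement from the plain-text dump
awk '/^CREATE TRIGGER/,/;$/' /backup/agosto_31.sql > triggers.sql
# replay only the triggers into the restored database
psql -d August_31 -f triggers.sql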
It is possible that the import (restore from backup) failed, probably due to broken dependencies. Check the psql output for errors. psql has a nice option to stop on the first error:
psql -v ON_ERROR_STOP=1
Use this option. Without the specific error it is not possible to help more.

Related

Need to convert a dump.sql to a *fname.dump file for restoration of Odoo

My last working database backup of an Odoo 13 CE system was a full one, including the file store. I'm getting timeouts when trying to restore "a copy" via the Odoo database manager page. I thought I could just do a partial restore (dump.sql & manifest.json), dump the filestore, recompress and upload, but that brought everything to its knees (errored with "no *.dump file found"). So I logged into the server, dropped my failed restore and restarted the odoo service, and all is back to somewhat normal, with the database I want to replace active.
Is there a way to convert that .sql to a .dump, or some other way to get my .sql added to my Postgres DB? I'm fairly green re: psql, so if I'm missing something simple, please feel free to shove it down my throat.
TIA
To restore an SQL backup file to a new database:
psql YOUR_DATABASE_NAME < YOUR_FILENAME
You can read more about backing up/restoring a Postgres DB here: https://www.postgresql.org/docs/11/backup-dump.html
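One caveat: psql will not create the target database for you, so if it does not exist yet, something like this may be needed first:
createdb YOUR_DATABASE_NAME
psql YOUR_DATABASE_NAME < YOUR_FILENAME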
When restoring a heavy database (with file store) you have to increase the server limits for the process to complete.
Add these parameters when starting the Odoo server:
--limit-time-cpu=6000 --limit-time-real=12000
Restore the SQL File
psql database_name < your_file.sql
Restore the Dump File
pg_restore -d database_name < your_file.dump

Issues when upgrading and dockerising a Postgres v9.2 legacy database using pg_dumpall and pg_dump

I am using an official postgres v12 docker image that I want to initialise with two SQL dump files that are gathered from a remote legacy v9.2 postgres server during the docker build phase:
RUN ssh $REMOTE_USER@$REMOTE_HOST "pg_dumpall -w -U $REMOTE_DB_USER -h localhost -p $REMOTE_DB_PORT --clean --globals-only -l $REMOTE_DB_NAME" >> dump/a_globals.sql
RUN ssh $REMOTE_USER@$REMOTE_HOST "pg_dump -w -U $REMOTE_DB_USER -h localhost -p $REMOTE_DB_PORT --clean --create $REMOTE_DB_NAME" >> dump/b_db.sql
By placing both a_globals.sql and b_db.sql into the image's docker-entrypoint-initdb.d folder, the database is initialised with the legacy SQL files when the v12 container starts (as described here). Docker is working correctly and the dump files are retrieved successfully. However, I am running into problems initialising the container's database and need guidance:
When the container starts to initialise its DB, it stops with ERROR: role $someDBRole does not exist. This is because the v9.2 dump SQL files DROP roles before reinstating them; the container DB does not like this. Unfortunately, it is not until v9.4 that pg_dumpall and pg_dump have the --if-exists option (see the pg_dumpall v9.2 documentation). What would you suggest I do to remedy this? I could manually edit the SQL dump files, but this would be impractical as the snapshots of the legacy DB need to be automated. Is there a way to suppress this error during container startup?
If I want to convert from ASCII to UTF-8, is it adequate to simply set the encoding option for pg_dumpall and pg_dump? Or do I need to take into consideration other issues when upgrading?
Is there a way to suppress the removal and re-adding of the postgres superuser that is in the dump SQL?
In general, are there any other gotchas when containerising and/or upgrading a Postgres DB?
I'm not familiar with Docker so I don't know how straightforward it'll be to do these things, but in general, pg_dump/pg_dumpall output, when it's in SQL format, will work just fine after having gone through some ugly string manipulation.
Pipe it through sed -e 's/DROP ROLE/DROP ROLE IF EXISTS/', ideally when writing the .sqls, but it's fine to just run sed -i -e <...> to munge the files in place after they're created if you don't have a full shell available. Make it sed -r -e 's/^DROP ROLE/DROP ROLE IF EXISTS/' if you're worried about strings containing DROP ROLE in your data, at the cost of portability (AFAIK -r is a GNU addition to sed).
Yes. It's worth checking the data in pg12 to make sure it got imported correctly, but in the general case, pg_dump has been aware of encoding considerations since time immemorial, and a dump->load is absolutely the best way to change your DB encoding.
Sure. Find the lines that do it in your .sql, copy enough of them to be unique, and pipe the dump through grep -v <what you copied> :D
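Putting the sed and grep suggestions together, the filtering could be applied while the dump is being written, for example along these lines (the grep pattern is only illustrative; adjust it to whatever the postgres superuser lines actually look like in your dump):
RUN ssh $REMOTE_USER@$REMOTE_HOST "pg_dumpall -w -U $REMOTE_DB_USER -h localhost -p $REMOTE_DB_PORT --clean --globals-only -l $REMOTE_DB_NAME" \
    | sed -r -e 's/^DROP ROLE/DROP ROLE IF EXISTS/' \
    | grep -v -E '^(DROP|CREATE|ALTER) ROLE (IF EXISTS )?postgres' \
    >> dump/a_globals.sql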
I can't speak to the containerizing aspect of things, but - and this is more of a general practice, not even really PG-specific - if you're dealing with a large DB that's getting migrated, prepare a small one, as similar as possible to the real one but omitting any bulky data, to test with. Get everything working on that, so that doing the real migration is just a matter of changing some vars (I guess $REMOTE_HOST and $REMOTE_PORT in your case). If it's not large, then just be comfortable blowing away any pg12 containers that failed partway through the import, figure out and fix whatever caused the failure, and start from the top again until it works end-to-end.

Is pg_dump available in AgensGraph?

I know about the "pg_dump" tool for backup and restore.
But I have never tried it because I'm too scared.
The question is simple: can I use that tool for graph data?
Or is there another tool supported for that? There's no information in their documentation.
You may refer to the PostgreSQL pg_dump documentation, because it is no different from doing a backup on PostgreSQL.
I followed the guide to create a dump script with crontab, and both dump and restore worked fine.
In my case, I used pg_dump to create the dump file and restored it with psql. You may choose pg_restore instead if necessary.
agens@karl ~] pg_dump --port=5432 --username=agens --file=agens.dump agens
agens@karl ~] psql --port=5432 --username=agens --dbname=agens2 -f agens.dump
However, I no longer use pg_dump for the backup task because I need incremental backups, so I googled open-source backup tools for PostgreSQL. Among the options I looked at, pg_rman is what I currently use.
It made it easy to build a scheduling script for an archive backup every 6 hours, an incremental backup every day, and a full backup every week, and those jobs have been running properly for more than 2 months so far.
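For reference, a rough crontab sketch of such a schedule, assuming pg_rman's --backup-mode option (full, incremental, archive) and that BACKUP_PATH and PGDATA are configured in the environment or in pg_rman.ini; the times are only placeholders:
# archive backup every 6 hours, incremental daily, full weekly
0 */6 * * *  pg_rman backup --backup-mode=archive && pg_rman validate
0 1 * * *    pg_rman backup --backup-mode=incremental && pg_rman validate
0 2 * * 0    pg_rman backup --backup-mode=full && pg_rman validate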
Restoring the data on other servers has been tested successfully as well.
Hope this is helpful for you.

Best way to make PostgreSQL backups

I have a site that uses PostgreSQL. All the content that I provide on my site is created in a development environment (this is because it is web-crawler content). The only information created in the production environment is information about the users.
I need to find a good way to update the data stored in production. Can I restore only the tables updated in the development environment to production, and will PostgreSQL update those records in production? Or would it be better to back up the user information from production, insert it into development, and restore the whole database to production?
Thank you
You can use pg_dump to export the data just from the non-user tables in the development environment and pg_restore to bring that into prod.
The -t switch will let you pick specific tables.
pg_dump -d <database_name> -t <table_name>
https://www.postgresql.org/docs/current/static/app-pgdump.html
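A possible end-to-end workflow (the table names here are just placeholders for your crawler-content tables): dump only those tables from development in custom format, then restore them into production, replacing the existing copies:
# on the development side: custom-format dump of just the content tables
pg_dump -d dev_db -Fc -t articles -t crawl_results -f content_tables.dump
# on the production side: drop and recreate those tables from the dump
pg_restore -d prod_db --clean content_tables.dump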
There are many tips around this subject here and here.
I'd suggest you take a look at these links before anything else.
If your data is discarded on each update, then a plain dump will be enough. You can redirect the pg_dump output directly to psql connected to production and skip the pg_restore step, something like below:
#Of course you must drop the tables before loading them again,
#so it would be reasonable to make a full backup before this
pg_dump -Fp -U user -h host_to_dev -T user your_db | psql -U user -h host_to_production your_db
You might be asking yourself, "Why is he telling me to drop my tables?"
Bulk loading data on a fresh table is faster than deleting old data and inserting again. A quote from the docs:
Creating an index on pre-existing data is quicker than updating it incrementally as each row is loaded.
PS¹: If you can't connect to both environments at the same time, then you will need to dump to a file and restore it manually.
PS²: I don't recommend it, but you can append the --clean option to pg_dump to generate the DROP statements automatically. Be extremely careful with this option to avoid dropping unexpected objects.
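If the two environments cannot be reached at the same time (PS¹ above), a two-step variant of the same idea might look like this:
#dump the non-user tables to a file on the dev side
pg_dump -Fp -U user -h host_to_dev -T user your_db -f content.sql
#copy content.sql to production, then load it there
psql -U user -h host_to_production -d your_db -f content.sql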

How to restore a database at runtime

I have a test database connected to a test server. I want to run a set of Selenium tests, and I have to restore the database after every test.
I made a backup with the CLI command "createdb", and I just drop the main table every time. But how can I restore the database without turning the whole server off and on (createdb can't be used while there are open connections)? Restarting for every test would take hours or days for a full test run.
I probably won't be given constant admin access to the server, unless it's necessary.
You can kill all connections via SQL (see https://stackoverflow.com/a/5109190/2352344). Instead of dropping the whole database, you can just remove the schema:
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
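The kill-connections step from the linked answer boils down to something like this (assuming PostgreSQL 9.2 or later, where pg_stat_activity exposes a pid column):
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'your_db_name'
  AND pid <> pg_backend_pid();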
I think that instead of dropping the table, how about deleting the rows in the table? When you run the test, you know what entries will be made in the table. With this information, just before the test terminates, invoke a script that deletes the rows created by that test.
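A minimal sketch of that idea, with purely hypothetical table names and a made-up tag that the test would have to write into its rows:
-- clean up only what test run 'selenium_run_42' created
DELETE FROM orders WHERE created_by = 'selenium_run_42';
DELETE FROM users  WHERE username LIKE 'selenium_run_42%';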
You can use a real backup/restore tool (WAL-E, Barman or pgBackRest). In particular, with pgBackRest you can do a delta restore, which restores only the files that have changed.
I solved the problem by making a bash script that I run from Java code:
String[] args = new String[]{"./script.sh"};
Process proc = new ProcessBuilder(args).start();
proc.waitFor();
script.sh:
#!/bin/bash
psql dbname -c "drop schema \"public\" cascade;"
psql dbname -c "create schema \"public\";"
psql dbname < "path/backupname"
I had to use a script rather than putting these commands directly in args, probably because of the "<" sign; I found no flag to replace it.
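For what it's worth, psql's -f flag reads commands from a file and might serve as that replacement, which would let the restore be expressed without shell redirection:
psql dbname -f "path/backupname"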