Currently, when I do a db2 backup I can see "Autonomic backup/restore - using parallelism = 2." in the logs. How can we disable parallelism so that it's at the lowest setting? I am on DB2 version 11.1.9.0.
Try the PARALLELISM 1 clause
db2 backup database test to /dev/null PARALLELISM 1
Backup successful. The timestamp for this backup image is : 20181105183709
db2diag -H 1m | grep "Autonomic backup"
Autonomic backup/restore - using parallelism = 1.
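RESTORE accepts the same clause if you also want a single-process restore. A minimal sketch against the example above (the /backup_dir location is just a placeholder, since /dev/null obviously cannot be restored from):
db2 restore database test from /backup_dir taken at 20181105183709 parallelism 1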
But as you are using Db2 Warehouse, you should use db_backup to back up the database: https://www.ibm.com/support/knowledgecenter/en/SS6NHC/com.ibm.swg.im.dashdb.doc/admin/app_backup.html
According to the WAL point-in-time recovery and incremental backup documentation, pg_basebackup and WALs are used to take backups and incremental backups of a running PostgreSQL cluster.
Is there a way to take incremental backups per database instead of the whole cluster?
Thanks!
We have a simple database with just 5 tables. But 1 table is huge, around 100GB of data by itself, and the indices together are nearly double that size. The server is an old CentOS 5 server with PG 9.0. I'm moving to a more modern setup with SSD hard disks, CentOS 7, and PG 9.6.
Question: what's the simplest reliable way to migrate the data? pg_dump it on the old server, move it via rsync or something to the new server, and pg_restore? I could do the pg_dump with the -Fc option, so that we can pg_restore it easily (otherwise it's a text format and we have to use psql -f instead). But a trial run suggested that while the pg_dump is OK, the pg_restore on the destination server, which has much faster hardware, goes on and on. We did a pg_restore --verbose, but there was no verbosity at all. Perhaps the server was stuck doing IO?
Our postgresql.conf settings for the pg_restore are as follows:
maintenance_work_mem = 1500MB
fsync = off
synchronous_commit = off
wal_level = minimal
full_page_writes = off
wal_buffers = 64MB
max_wal_senders = 0
wal_keep_segments = 0
archive_mode = off
autovacuum = off
What should we do to ensure that the pg_restore works? Right now both servers are offline, so I can do pretty much anything needed -- any settings can be changed.
Some more background info:
Old server: CentOS 5, SCSI RAID 1 disks, 4GB RAM (not much), PG 9.0
New server: CentOS 7 (latest), SSD disk, 16GB RAM, PG 9.6
Thank you for any pointers on moving large tables in the best way possible. The usual PG documentation doesn't seem to be helping. We've tried both the text dump way and the -Fc way.
I strongly suggest you use pg_upgrade:
Install 9.0.23 on the new server. From source if necessary.
Set up a streaming replica on the new server with a base backup and a suitable recovery.conf (pg_basebackup itself only arrived in 9.1, so for a 9.0 master take the base backup manually with pg_start_backup(), rsync and pg_stop_backup()). Enable WAL archiving and a restore_command too, in case it becomes desynchronised for any reason.
Also install 9.6 on the new server.
Do an upgrade test by stopping the replica and attempting a pg_upgrade to 9.6. Restart the replica, fix any issues and repeat until you succeed.
When you're confident pg_upgrade will succeed, plan a cut-over time. Stop the 9.0 master and stop the replica. pg_upgrade the replica. Start the new 9.6 server.
See the pg_upgrade documentation for more info.
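For reference, a rough sketch of what the replica's recovery.conf and the eventual upgrade command might look like; the host name, replication user and PGDG-style paths are assumptions, so adjust them to your layout:
# recovery.conf on the new server (9.0 replica)
standby_mode = 'on'
primary_conninfo = 'host=old-server port=5432 user=replication'
restore_command = 'cp /var/lib/pgsql/wal_archive/%f %p'
# dry-run the upgrade first with --check, then run it for real (optionally with --link)
/usr/pgsql-9.6/bin/pg_upgrade --check -b /usr/pgsql-9.0/bin -B /usr/pgsql-9.6/bin -d /var/lib/pgsql/9.0/data -D /var/lib/pgsql/9.6/data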
Remember: KEEP BACKUPS.
If you want simple, just pg_dumpall and pipe it to psql. But that'll be slow, and it'll cause problems if your restore fails partway through and you then try to resume, etc.
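Something along these lines, run from the new server (the old host name is an assumption):
pg_dumpall -h old-server -U postgres | psql -d postgres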
Better:
If you don't want to use replication, use parallel-mode pg_dump and pg_restore with directory-format input/output to get things done quickly.
Configure your 9.0 database to accept connections from the 9.6 host and make sure there's a high-performance network connection (gigabit or better); a minimal connectivity sketch follows these steps.
Using the 9.6 host, running the 9.6 versions of pg_dump and pg_dumpall:
Dump your global objects with pg_dumpall --globals-only -f globals.sql
Dump your database(s) with pg_dump -Fd -j4 -d dbname -f dbname.dumpdir or similar. -j is the number of parallel jobs. You'll need to dump each database separately if there are multiple ones.
Cleanly initdb a new PostgreSQL 9.6 install, removing whatever attempts you have previously made (since I don't know what is/isn't there). Alternatively, DROP any created roles, databases, etc., returning it to a clean state.
Use psql to run the globals script: psql -v ON_ERROR_STOP=1 --single-transaction -f globals.sql -d postgres
Use pg_restore to load the database dumps: pg_restore --create -d template1 -j4 dbname.dumpdir, repeating for each dumped DB. You can restore multiple DBs concurrently.
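The connectivity step above boils down to something like this on the 9.0 server, followed by a restart; the 9.6 host's address is an assumption:
# postgresql.conf on the 9.0 server
listen_addresses = '*'
# pg_hba.conf on the 9.0 server: let the 9.6 host connect
host    all    all    192.0.2.20/32    md5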
Yes, I know the handling of global objects sucks. And yes, it'd be nice if all this were wrapped up in a simple command. But it isn't. Designs and well thought out patches are welcome if you want to try to improve this. So far nobody's wanted to enough to do the work.
How to take an automatic backup of a PostgreSQL database in Ubuntu?
Or is there a script available for taking periodic PostgreSQL database backups?
You can use the following:
sudo crontab -e
at the end of the file add this:
0 6 * * * sudo pg_dump -U USERNAME -h REMOTE_HOST -p REMOTE_PORT NAME_OF_DB > LOCATION_AND_NAME_OF_BACKUP_FILE
This command will take an automated backup of your selected DB every day at 6:00 AM (after changing the options of the command to fit your DB).
It is advisable to create each backup with a new file name so that you can restore data to a specific date. It is also good practice to send notifications in case the backups fail.
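For example, a dated file name can be generated in the cron line itself; the /backups directory is just a placeholder, and note that % has to be escaped as \% inside crontab:
0 6 * * * pg_dump -U USERNAME -h REMOTE_HOST -p REMOTE_PORT NAME_OF_DB > /backups/NAME_OF_DB_$(date +\%Y\%m\%d).sql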
Here is a good script for automatic backup, as well as general recommendations for automating backups:
How to Automate PostgreSQL Database Backups
I have a DB running with TRACKMOD ON, but whenever I try to take an incremental backup using the command 'backup db online incremental to', it gives me an error:
" SQL2426N The database has not been configured to allow the incremental backup operation. Reason code = reason-code."
When I check the db2diag log, it says "Incremental backup was not enabled for table space SYSCATSPACE because the TRACKMOD configuration parameter was not enabled."
I have taken a backup of SYSCATSPACE and TRACKMOD is also ON, but I still get the same error. Can anyone help?
First do a full backup (online or offline); it establishes the baseline, and then an incremental backup will work:
db2 backup db sampledb to /dev/null
db2 backup db sampledb incremental to /dev/null
For Windows use "NUL:" instead of "/dev/null"
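For completeness, the TRACKMOD parameter itself is set with something like the following (using the same sampledb example); the full backup taken afterwards then serves as the baseline for incremental backups:
db2 update db cfg for sampledb using TRACKMOD YES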
How can I dump my database schema and data in such a way that the usernames, database names and the schema names of the dumped data match these variables on the servers I deploy to?
My current process entails moving the data in two steps. First, I dump the schema of the database (pg_dump --schema-only -C -c) then I dump out the data with pg_dump --data-only -C and restore these on the remote server in tandem using the psql command. But there has to be a better way than this.
We use the following to replicate databases.
pg_basebackup -x -P -D /var/lib/pgsql/9.2/data -h OTHER_DB_IP_ADDR -U postgres
It requires the "master" server at OTHER_DB_IP_ADDR to be running the replication service, and pg_hba.conf must allow replication connections. You do not have to run the "slave" service as a hot/warm standby in order to replicate. One downside of this method compared with a dump/restore is that the restore operation effectively vacuums, re-indexes and resets everything, while replication doesn't, so replicating can use a bit more disk space if your database has been heavily edited. On the other hand, replicating is MUCH faster (15 mins vs 3 hours in our case) since indexes do not have to be rebuilt.
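On the master, that typically amounts to a couple of settings and a pg_hba.conf entry along these lines (the replica's address is an assumption, and the postgresql.conf changes require a restart of the master):
# postgresql.conf on the master
wal_level = hot_standby
max_wal_senders = 3
# pg_hba.conf on the master: allow the replica to connect for replication
host    replication    postgres    192.0.2.21/32    md5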
Some useful references:
http://opensourcedbms.com/dbms/setup-replication-with-postgres-9-2-on-centos-6redhat-el6fedora/
http://www.postgresql.org/docs/9.2/static/high-availability.html
http://www.rassoc.com/gregr/weblog/2013/02/16/zero-to-postgresql-streaming-replication-in-10-mins/