How to recover a tablespace from backup pending state in DB2

Should I go with the command below to recover the tablespace in the DEV environment, or is there a better solution?
db2 "backup database DEV tablespace (xyz) online to /dev/null"

If a table space is in backup pending state, it's usually due to some event that requires a new recovery point, such as a LOAD ... COPY NO. In that situation you are best advised to take a new backup to an actual location, so the image is saved for future recoveries through this point in time.
If you don't, you are exposed to data loss until a new backup that includes this table space is completed.
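For example, a sketch of such a backup (the target path /backups/dev is just a placeholder; point it at whatever backup filesystem or TSM setup you actually use):
db2 "backup database DEV tablespace (xyz) online to /backups/dev"
Backing up to /dev/null also clears the backup pending state, but it leaves you with no usable image for that table space.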
Thanks.

Related

Cloud SQL (Postgres) Backup and Restore

I understand that Cloud SQL (Postgres) on-demand backups are incremental. When you restore an instance using such a backup, the existing data is wiped before the instance is restored with all new data. In other words, the "backup" process is incremental, but there is no way to restore only a specific incremental backup into an instance.
Please can you confirm whether the aforementioned understanding is right?
Indeed, Cloud SQL backups are incremental. Taken from the Documentation:
Backups for Second Generation instances are incremental; they contain only data that has changed since the previous backup was taken. This means that your oldest backup is a similar size to your database, but the sizes of subsequent backups depend on the rate of change of your data. When the oldest backup is deleted, the size of the next oldest backup increases so that a full backup still exists.
Yet, Cloud SQL stores up to seven automated backups for each instance. This in fact allows you to restore to any specific backup, but of course all the data on the instance will be deleted in order to restore the data from that backup.
If you are asking whether it is possible to restore only the incremental differences of a specific backup, then no, it is not possible. That is also implied by the concept of incremental backups: by definition, an incremental backup depends on all the backups before it. So, by restoring a "specific incremental backup into an instance" you are restoring the full backup plus all the incremental backups up to the incremental backup you are requesting.
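If what you actually need is to restore one of the stored backups, a sketch with the gcloud CLI (the instance name my-instance and the BACKUP_ID are placeholders) could look like:
gcloud sql backups list --instance=my-instance
gcloud sql backups restore BACKUP_ID --restore-instance=my-instance
The restore still replaces all current data on the instance with the state captured by that backup.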

Backup postgresql WAL logs

I am trying to configure database backups in PostgreSQL with pg_basebackup and WAL logs.
For now I create a full backup once a week and want to back up the WAL logs too. But, as I understand it, PostgreSQL writes them all the time. So, how can I copy them and be sure that they are not corrupted?
Thanks
You set archive_command to a shell command that copies the WAL file to a safe archive location, so that burden is mostly on you.
When PostgreSQL runs archive_command, it assumes that the WAL file is not corrupted. Only a PostgreSQL bug or a bug in the storage system could cause a corrupted WAL segment.
There is no better protection against PostgreSQL bugs than always running the latest bugfix release, and you can invest in storage hardware that will at least detect failure.
You can also write your archive_command with a certain amount of paranoia, e.g. by comparing the md5sum of the WAL segment and its archive copy.
Another idea is to write two copies of the WAL file to different storage systems.
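As a rough illustration, a paranoid archive_command could call a small script like the one below (the /wal_archive location and the script path are assumptions, not anything PostgreSQL mandates). In postgresql.conf:
archive_mode = on
archive_command = '/usr/local/bin/archive_wal.sh %p %f'
And /usr/local/bin/archive_wal.sh:
#!/bin/sh
# %p is the path of the WAL segment relative to the data directory, %f its file name.
set -e
src="$1"
dst="/wal_archive/$2"
test ! -f "$dst"                                   # never overwrite an existing archive file
cp "$src" "$dst"
test "$(md5sum < "$src")" = "$(md5sum < "$dst")"   # verify the copy by comparing checksums
A non-zero exit status tells PostgreSQL that archiving failed, so it keeps the WAL segment and retries later.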

SQL2059W A device full warning - when trying to bring tablespace online

Trying to do a DB2 import as part of a system copy when the transaction logs filled up. The import was cancelled, a transaction log backup ran, and the number of logs was increased to approximately 90% of the available disk (previously 70%).
I restarted the DB and kicked off the import again, but it now errors due to the tablespace state - running db2 list tablespaces show detail shows I have 4 tablespaces in Backup Pending state.
So I tried db2 backup database <SID> tablespace <SID>#BTABI online but I get the error:
SQL2059W A device full warning was encountered on device "/db2/db2". Do you want to continue(c), terminate this device only(d), abort the utility(t) ? (c/d/t) t
No option works but to terminate.
The thing is, the device isn't full. There's no activity on the DB; running db2 list applications gives:
SQL1611W No data was returned by Database System Monitor.
Running db2 "select log_utilization_percent,dbpartitionnum from sysibmadm.log_utilization order by 2" to show the log utilization returns 0.
There are no logs in use. The filesystem has free space. I even tried reducing the number of logs again to make sure, but I get the same issue.
I tried db2 "alter tablespace <SID>#BTABI switch online" instead, and although this returns a 'success' statement it doesn't actually do anything - my tablespaces are still in Backup Pending state.
Any ideas, please?
You're trying to write the backup images to the /db2/db2 file system, which doesn't have enough space to hold the backup image(s).
Note: when you execute BACKUP DATABASE as in your example above without specifying where to send the backup (i.e., you don't use the to /dir/ectory clause or another option like use TSM), DB2 will write the backup image to the current directory. Make sure you specify where to store the backup image (and that the location has enough free space to hold it). If you don't care about recoverability and are just trying to get the table space out of backup pending state, you can specify /dev/null as your location, as @mustaccio suggests in the comments above.
Also: You may want to look at the COMMITCOUNT option for the import utility so you're not trying to insert all data in a single massive transaction.
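For example, something along these lines (the paths, schema and table names are placeholders for your environment):
db2 "backup database <SID> tablespace (<SID>#BTABI) online to /backup/images"
db2 "import from export_file.del of del commitcount 50000 insert into <schema>.<table>"
The first command writes the image to /backup/images instead of the current directory; the second commits every 50,000 rows so a huge import doesn't fill the transaction logs again.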
As per the above comments - I just kept running the import, resetting the 'pending load' status each time with:
load from /dev/null of del terminate into SAPECD.
A few packages fail each time but the rest process. Letting it finish, resetting again and restarting the import gets through a little more each time.

How can I force drop a broken Postgres database?

I have a database that seems to be broken for some reason. It's a development db for Rails, so I don't have a backup, but I do need to continue development. I tried to just drop it, but that's not working.
$ dropdb "database-name"
dropdb: database removal failed: ERROR: could not open file "global/2964": No such file or directory
Thanks in advance for any help!
There's more wrong here than a "broken" database. Something is badly wrong with your PostgreSQL data directory.
global/2964 looks like it's pg_catalog.pg_db_role_setting, which stores ALTER DATABASE ... SET ... and ALTER ROLE ... SET ... settings. This is not database-specific, it's a global table.
If you have missing files in your data directory your whole PostgreSQL data directory is probably damaged. You should back up what you can, if there's anything you care about, then rename or delete the damaged data directory and initdb a new blank one.
You won't be able to DROP this database (or do much else) because PostgreSQL can't load the files for the pg_db_role_setting table, but it needs to delete entries referring to the dropped database from there.
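A minimal sketch of that salvage-and-rebuild path, assuming a data directory of /var/lib/postgresql/data (adjust to wherever your cluster actually lives):
pg_ctl -D /var/lib/postgresql/data stop -m fast                 # stop the server if it is still running
mv /var/lib/postgresql/data /var/lib/postgresql/data.damaged    # keep the damaged directory for later salvage
initdb -D /var/lib/postgresql/data                              # create a fresh, empty cluster
pg_ctl -D /var/lib/postgresql/data start
Afterwards you recreate the development database from your Rails schema and migrations, since there is no backup to restore.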
As for how this happened:
Have you ever run with fsync = off in postgresql.conf?
Do you have SSD storage? If so, have you had any recent sudden power loss?
Have you ever done any direct modifications of any kind inside the PostgreSQL data directory?
Is the PostgreSQL data directory on external storage that might have been suddenly removed?
Have you ever deleted postmaster.pid?
See also https://wiki.postgresql.org/wiki/Corruption

DB2 Online Restore but Without Roll Forward?

I have read a lot of documentation for db2 restore but I could not find how to perform an online restore from the last database backup without rolling the logs forward.
I would appreciate a command example.
For example, my last online backup was made on 1st February. I want to do an ONLINE RESTORE of that backup but without the logs after 1st February (similar to the offline restore option WITHOUT ROLL FORWARD).
I am using db2 9.7
Thank you in advance
The database backup contains a snapshot of the table spaces, and they may not be in a stable state. Roll-forward is always required (unless you want to take insane risks by forcing DB2 to start using potentially corrupt data) to reach the nearest stable state.
If you are asking because you want manageable database backup dumps without having to worry about shipping logs etc., use the INCLUDE LOGS option when taking the backup. It will include in the backup image the minimum set of transaction logs required to reach a stable state. When restoring you can then use the LOGTARGET option to extract those logs and then ROLLFORWARD DATABASE for the typically required 0-x seconds (depending on your database transactions).
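A sketch of that flow (the paths and the <timestamp> are placeholders):
db2 "backup db SAMPLE online to /backups include logs"
db2 "restore db SAMPLE from /backups taken at <timestamp> logtarget /restored_logs"
db2 "rollforward db SAMPLE to end of backup and stop overflow log path (/restored_logs)"
TO END OF BACKUP stops the roll-forward at the earliest point at which the database is consistent, which is the closest you can get to a "without roll forward" restore of an online backup.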
A lazy DBA would probably just use RECOVER DB SAMPLE TO 2013-02-01-00.00.00 and allow DB2 to worry about all the details. It will automatically fetch the required database backup images and transaction log files (even from backup tapes etc. if you set them up correctly) and do everything for you - as long as you don't attempt to manage them manually.