DB2 SQL3706N A disk full error was encountered

I have nearly 600 files to load into a DB2 database (version 10.5.9). Each file is roughly 200 MB, and I have a batch script that loads each file in a loop.
My disk "/mnt/blumeta0/db2/copy" is 16 GB.
If I run the load in NONRECOVERABLE mode it works, but I can't do that in my production database.
I tried a fresh db2 connect and db2 terminate after each file was loaded, but that did not help.
I manually cleaned up the /mnt/blumeta0/db2/copy disk, but the total size of all the files is more than 16 GB, so I hit the same error.
I cannot clean the folder from the script, as the cleanup can only be done by a superuser.
db2 "LOAD FROM $i OF DEL INSERT INTO <table_name>"
SQL3706N A disk full error was encountered on "/mnt/blumeta0/db2/copy".
How does the DB2 server clean up the copy folder? Is there any other alternative I can try?

You mentioned that the Load succeeds when using NONRECOVERABLE mode but fails otherwise with the error SQL3706N "A disk full error was encountered" on /mnt/blumeta0/db2/copy.
I'm guessing that the Load is being performed using the COPY YES option. Since the Load command that you pasted does not show the COPY YES option, I'm guessing that you have a special configuration setting enabled that forces Load operations to use COPY YES in order to prevent the table from becoming inaccessible after a rollforward recovery or an HADR standby takeover. The name of this configuration setting (registry variable) is DB2_LOAD_COPY_NO_OVERRIDE.
When the Load is performed with COPY YES, a copy of the table pages/extents that were generated during the Load operation is written into a copy image file.
I suspect that you have the registry variable "DB2_LOAD_COPY_NO_OVERRIDE=COPY YES /mnt/blumeta0/db2/copy" configured (you can use db2set -all on the database server to display all configured registry variables). If so, the copy image files are being stored in this path, which at 16GB appears to be too small to contain them all.
You can consider changing this path to a location with more disk space; however, the path must remain accessible in the event of a database rollforward recovery or HADR standby takeover, otherwise the table will not be accessible after such an event.
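If the registry variable turns out to be the cause, a minimal sketch of checking it and pointing it at a larger filesystem might look like the following; the target path /largefs/db2/loadcopy is only a placeholder, and an instance restart is assumed to be acceptable:
db2set -all                                                        # check whether DB2_LOAD_COPY_NO_OVERRIDE is configured
db2set DB2_LOAD_COPY_NO_OVERRIDE="COPY YES /largefs/db2/loadcopy"  # redirect the forced copy images to a larger filesystem
db2stop                                                            # restart the instance so the change takes effect
db2start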

Link mongo-data to /data/db folder to a volume Mongodb Docker

I accidentally deleted the Docker volume mongo-data:/data/db. I have a copy of that folder. The problem is that when I run docker-compose up, the mongodb container doesn't start and exits with "mongo_1 exited with code 14". Below are more details of the error and the mongo-data folder. Can someone help me please?
in docker-compose.yml
volumes:
- ./mongo-data:/data/db
Restore from backup files
A step-by-step process to repair the corrupted files from a failed MongoDB in a Docker container:
Before you start, make a copy of the files!
Make sure you know which version of the image was running in the container.
Spawn a new container to run the repair process, as follows:
docker run -it -v <data folder>:/data/db <image-name>:<image-version> mongod --repair
Once the files are repaired, you can start the containers from the docker-compose.
If the repair fails, it usually means that the files are corrupted beyond repair. There is still a chance to recover the data by exporting it, as described here.
How to secure proper backup files
The database is constantly working with its files, so the files on disk are constantly changing. In addition, the database keeps some of the changes in internal memory buffers before they are flushed to the filesystem. Although database engines do a very good job of ensuring that the database can recover from an abrupt failure by using a two-stage commit process (first update the transaction log, then the data file), copying the files while this is going on can produce a corruption that prevents the database from recovering.
The reason for such corruption is that the copy process is not aware of the progress of the database's write process, which creates a race condition. In very simple words, while the database is in the middle of writing, the copy process creates a copy of the file(s) that is half-updated, and hence corrupted.
When the database writer is in the middle of writing to the files, we call them hot files. "Hot files" is a term from the OS perspective; MongoDB also uses the term "hot backup", which is a term from the MongoDB perspective. A hot backup means that the backup was taken while the database was running.
To take a proper snapshot (ensuring the files are cold) you need to follow the procedure explained here. In short, the db.fsyncLock() command issued during this process tells the database engine to flush all buffers and stop writing to the files. This makes the files cold, while the database itself remains hot, hence the difference between the terms hot files and hot backup. Once the copy is done, the database is told to resume writing to the filesystem by issuing db.fsyncUnlock().
Note that the process is more complex and can change between database versions. I give a simplification of it here in order to illustrate the point about the problems with file snapshots. To secure a proper and consistent backup, always follow the documented procedure for the database version that you use.
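As a rough illustration only (not the full documented procedure), the lock / copy / unlock sequence could be scripted from the shell roughly as below; the mongo shell invocation and the paths are assumptions that may differ for your deployment:
mongo --eval "db.fsyncLock()"                  # flush buffers and block writes, making the files cold
cp -a /data/db /backup/mongo-data-snapshot     # copy the quiesced data files
mongo --eval "db.fsyncUnlock()"                # let the database resume writing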
Suggested backup method
The preferred backup method should always be a data dump, since this ensures that you can restore even across upgraded or downgraded database engines. MongoDB provides a very useful tool called mongodump that creates database backups by dumping the data instead of copying the files.
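For example, a minimal dump-and-restore round trip might look like the sketch below; the host, port and the /backup/dump path are placeholders:
mongodump --host localhost --port 27017 --out /backup/dump    # dump all databases as BSON plus metadata
mongorestore --host localhost --port 27017 /backup/dump       # restore from that dump directory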
For more details on how to use the backup tools, as well as the other backup methods, read the MongoDB Backup Methods chapter of the MongoDB documentation.

Interbase Backup Validation

We have a custom backup solution utilizing the IBX controls in Delphi to perform nightly automatic backups. As part of our current validation for a successful backup, we read the output logs generated by the backup, looking for the "closing file, committing and finishing" verbiage that appears last in the log file. Additionally, we perform a full restore to a separate area to ensure the ibk file is valid. That is turning out to be problematic in terms of available drive space, so we are looking for other ideas to make sure the backup is successful.
How else might we ensure that our ibk file is valid?
Jeff,
Not sure what your database size or backup file size is, and if they are too big for the remaining disk space. Can you share the database and backup size details?
Older InterBase (2017 and earlier) had a way for the command line tool, gbak, to pipe the output from the backup to another gbak process restoring from the backup. This would allow you to save the disk space on the backup file. But, since you are using the IBX backup/restore service, this is not possible. Also, InterBase 2020 has a different backup format which requires random (not sequential) write access to the backup file, thereby not allowing any pipe output even via the 'gbak' command line tool.
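For reference only, since it applies neither to the IBX services nor to the InterBase 2020 backup format, the piped form is commonly written roughly as follows; the stdout/stdin spelling, paths and credentials are assumptions and may differ for your gbak version:
gbak -b -user SYSDBA -password masterkey /db/employee.ib stdout | gbak -c -user SYSDBA -password masterkey stdin /restore/employee_check.ib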
Here are a couple of ways to "reduce" the disk storage requirements that may work for you.
Backup file
You can have the InterBase backup service (from your application) store the target backup file in an external storage medium (HDD, USB stick etc., or a SAN disk/network file share). The backup/restore service can read/write backup file(s) from network shares/external medium.
Restored database
When restoring the database, you can use the service parameter option UseAllSpace (http://docwiki.embarcadero.com/Libraries/Sydney/en/IBX.IBServices.TRestoreOptions), equivalent to the gbak option "-use_all_space". This will save you about 20% of space on restored data pages.
You can also turn off index creation, thereby reducing page consumption (possibly quite a bit, depending on your index definitions), but you will lose index validation because of this. This is the "DeactivateIndexes" option (gbak option "-inactive") on the same page linked above.
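For the command-line equivalent of those two restore options, a gbak invocation would look roughly like the sketch below; the backup file, target path and credentials are placeholders:
gbak -c -use_all_space -inactive -user SYSDBA -password masterkey /backups/nightly.ibk /validate/restored_check.ib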
Restore the database to a remote InterBase server with its own storage medium, or, to an attached USB stick or SAN disk. Since you are using the restored database only for validating the backup file, you can have this restored database on a slower I/O medium or a slower server over the network.

SQL2059W A device full warning - when trying to bring tablespace online

Trying to do a DB2 import as part of a system copy, the transaction logs filled up. The import was cancelled, a transaction log backup ran, and the number of logs was increased to approximately 90% of the available disk (previously 70%).
Restarted the DB and kicked off the import again, but it now errors because of the tablespace state: running db2 list tablespaces show detail shows I have 4 tablespaces in Backup Pending state.
So I tried db2 backup database <SID> tablespace <SID>#BTABI online but I get the error:
SQL2059W A device full warning was encountered on device "/db2/db2". Do you want to continue(c), terminate this device only(d), abort the utility(t) ? (c/d/t) t
No option works except terminating.
The thing is, the device isn't full. There's no activity on the DB; running db2 list applications gives:
SQL1611W No data was returned by Database System Monitor.
Running db2 "select log_utilization_percent,dbpartitionnum from sysibmadm.log_utilization order by 2" to show the log utilization returns 0.
There are no logs in use. The filesystem has free space. I even tried reducing the number of logs again to make sure, but I get the same issue.
I tried db2 "alter tablespace <SID>#BTABI switch online" instead, and although this returns a success statement it doesn't actually do anything: my tablespaces are still in Backup Pending state.
Any ideas, please?
You're trying to write the backup images to the /db2/db2 file system, which doesn't have enough space to hold the backup image(s).
Note: When you execute BACKUP DATABASE as in your example above without specifying where to send the backup (i.e. you don't use the to /dir/ectory or another option like use TSM), DB2 will write the backup image to the current directory. Make sure you specify where to store the backup image (and that it has enough free space to hold the backup image). If you don't care about recoverability and are just trying to get the table space out of backup pending state, you can specify /dev/null as your location as #mustaccio suggests in the comments above.
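For illustration, with the tablespace name from the question and an assumed /backups/db2 location that has enough free space, the two variants would look something like:
db2 "BACKUP DATABASE <SID> TABLESPACE (<SID>#BTABI) ONLINE TO /backups/db2"   # real, recoverable backup image
db2 "BACKUP DATABASE <SID> TABLESPACE (<SID>#BTABI) ONLINE TO /dev/null"      # throwaway image, only to clear the backup pending state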
Also: You may want to look at the COMMITCOUNT option for the import utility so you're not trying to insert all data in a single massive transaction.
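As a sketch, assuming a delimited input file and a placeholder table name, that would look something like:
db2 "IMPORT FROM data.del OF DEL COMMITCOUNT 10000 INSERT INTO SAPECD.SOMETABLE"   # commit every 10000 rows instead of one huge transaction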
As per the above comments, I just kept running the import, resetting the 'pending load' status each time with:
load from /dev/null of del terminate into SAPECD.
A few packages fail each time but the rest process. Letting it finish, resetting again, and restarting the import gets through a little more each time.

Postgres 9.2 pg_largeobject tablespace

I am currently moving some data around and I am running into an interesting issue.
I have a CentOS (6.3) server up and running with Postgres 9.2. It has limited built-in disk space; however, I do have a large amount of extremely reliable external network disk space available.
I have set the tablespace for my database to a directory on this storage device and everything seems to be working well, until...
I realized that I have a large amount of BLOB data that needs to be stored in pg_largeobject.
I have been googling how to set the tablespace of pg_largeobject and I did find some results, but they are horribly outdated.
I did find one article that looks promising, but I'm hesitant because the thread also references that things will/should have changed.
I have two questions...
In an ideal world, I would like to move all of postgres (including pg_largeobject) onto this external storage for ease of maintenance. Is this possible?
If not, how can I get pg_largeobject to use my network storage?
As you alluded to, your best bet is to move the entirety of PostgreSQL onto the remote storage, assuming that storage is a reliable network block device like iSCSI, ATAoE or NBD. I wouldn't recommend running Pg on NFS, and running it on CIFS/SMBFS just won't work.
Just:
Make a backup
Take a note of the output of SHOW data_directory; in psql
Shut PostgreSQL down
Move the data directory (the folder containing pg_xlog, pg_clog, etc) to the remote storage
Adjust the permissions on the parent directories of the datadir's new location so that the postgres user (via the user, group or other permission bits) has at least execute on each parent directory and can traverse the tree.
Adjust your system startup scripts to set the new location as the PostgreSQL datadir or symlink the old datadir location (output by SHOW data_directory) to the new location.
Start PostgreSQL
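A condensed sketch of those steps on a Red Hat-style system might look like the following; the old and new paths and the service name are assumptions that will differ per installation:
psql -U postgres -c "SHOW data_directory;"            # note the current datadir before stopping
service postgresql-9.2 stop                           # stop the server (service name varies by package)
mv /var/lib/pgsql/9.2/data /mnt/netstore/pgdata       # move the datadir (contains pg_xlog, pg_clog, ...)
chown -R postgres:postgres /mnt/netstore/pgdata
chmod o+x /mnt /mnt/netstore                          # let the postgres user traverse the parent directories
ln -s /mnt/netstore/pgdata /var/lib/pgsql/9.2/data    # or point the startup script's PGDATA at the new path
service postgresql-9.2 start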
Unfortunately, different systems and packages find the datadir in different ways. Debian/Ubuntu use pg_wrapper, for example.

disk I/O error with SQLite

I have a (tiny) dynamic website that is (roughly) a Perl CGI script using a SQLite database. Package DBI is the abstraction layer used in Perl.
About one week ago, I started to see this error message:
disk I/O error(10) at dbdimp.c line 271
Since this is a hosted site running Apache, I cannot see if the hard disk is (nearly) full; access to the "df" command is disabled... but I used the (UNIX) shell command "yes > blah" to test that the disk can still create new files. My database is very tiny, less than 50 kilobytes.
I checked file and directory permissions: Directory and all parents are a+r,a+x (all + read/executable). The directory containing my SQLite database file is also a+w (all + write). The database file itself has a+w,a+r (all + read/write).
I wrote a simple Perl program to test I can run the failing select query: It runs fine.
I ran query "VACUUM" on the database. I tried my tests again -- no improvement.
I dumped the SQLite database to raw SQL (using SQLite shell command ".dump") and rebuilt. I tried my tests again -- no improvement.
Any suggestions? I am so confused... Normally, the above list can catch most programming/setup errors.
Another cause for this:
Database file is writeable
Database journal file (ending in -journal) is not writable
When the database file isn't writable, you get a "readonly database" error. When it's writable, but the journal file is not, you get "I/O error" instead.
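A quick way to check this from the shell is sketched below; the paths and file names are placeholders for a typical hosted setup:
ls -ld /path/to/dbdir /path/to/dbdir/site.db /path/to/dbdir/site.db-journal   # the journal may not exist while the db is idle
# Both the db file and its directory must be writable by the CGI user,
# because SQLite creates and deletes the -journal file in that directory.
chmod a+w /path/to/dbdir /path/to/dbdir/site.db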
Unfortunately, sqlite3.h isn't very descriptive about what the specific issue is. Error code 10 is defined here:
#define SQLITE_IOERR 10 /* Some kind of disk I/O error occurred */
You may have an issue with /tmp being full at certain points or sqlite not having access to memory to write its page cache. This is unlikely though if your db is 50kb as sqlite should be able to hold your page cache in memory.
You could try making a copy of the db in the hope that sqlite can read the copied database, and update your code to point at the copy:
$ sqlite3 your.db
sqlite> begin immediate;             -- take a write lock so the file cannot change during the copy
<press CTRL+Z to suspend sqlite3>
$ cp your.db copyofyour.db
$ fg                                 # resume the suspended sqlite3 session
sqlite> rollback;                    -- release the lock
You should also check the logs to see whether this is happening with every request or only intermittently. You may want to see if you have access to other commands to monitor server health (top, free). Being able to reproduce the issue seems to be your first task at hand. If you can't reproduce it consistently, it's likely a memory-related issue.
A possible, and maybe hard to detect, source of this error is file locking failure. You can test whether your file system currently supports file locking with:
flock testfile touch testfile
NFS file systems for example may exhibit this behavior depending on NFS server configuration.