I have a (tiny) dynamic website that is (roughly) a Perl CGI script using an SQLite database, with the Perl DBI module as the database abstraction layer.
About one week ago, I started to see this error message:
disk I/O error(10) at dbdimp.c line 271
Since this is a hosted site running Apache, I cannot check whether the hard disk is (nearly) full. Access to the "df" command is disabled... but I used the Unix shell command "yes > blah" to confirm that the disk can still create new files. My database is very tiny -- less than 50 kilobytes.
I checked file and directory permissions: the directory and all its parents are a+r,a+x (world readable/executable). The directory containing my SQLite database file is also a+w (world writable). The database file itself is a+r,a+w (world readable/writable).
I wrote a simple Perl program to test whether I can run the failing SELECT query: it runs fine.
I ran query "VACUUM" on the database. I tried my tests again -- no improvement.
I dumped the SQLite database to raw SQL (using the SQLite shell command ".dump") and rebuilt it. I tried my tests again -- no improvement.
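In case it matters, the dump and rebuild was roughly the following (file names here are just placeholders):

sqlite3 old.db .dump > dump.sql      # dump schema and data to raw SQL
sqlite3 rebuilt.db < dump.sql        # rebuild a fresh database from the dump
mv rebuilt.db old.db                 # swap the rebuilt file into place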
Any suggestions? I am so confused... Normally, the above list can catch most programming/setup errors.
Another cause for this:
Database file is writable
Database journal file (ending in -journal) is not writable
When the database file isn't writable, you get a "readonly database" error. When it's writable, but the journal file is not, you get "I/O error" instead.
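A quick way to check (and, if needed, fix) this from the shell, assuming your database file is called your.db and lives in the current directory, would be something like:

ls -l your.db your.db-journal     # both must be writable by the web server user
ls -ld .                          # the directory must be writable too, since SQLite creates and removes the journal here
chmod a+w your.db-journal         # only if a stale journal file exists and is not writable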
Unfortunately, sqlite3.h isn't very descriptive about what the specific issue is. Error code 10 is defined there as:
#define SQLITE_IOERR 10 /* Some kind of disk I/O error occurred */
You may have an issue with /tmp being full at certain points, or with SQLite not having enough memory to hold its page cache. This is unlikely though if your DB is 50 KB, as SQLite should be able to keep the whole page cache in memory.
You could try making a copy of the database, in the hope that SQLite can read the copied file, and then update your code to point at the copy:
$ sqlite3 your.db
sqlite> begin immediate;
<press CTRL+Z to suspend sqlite3>
$ cp your.db copyofyour.db
$ fg
sqlite> rollback;
You should also check the logs to see whether this is happening with every request or only intermittently. You may want to see if you have access to other commands to monitor server health (top, free). Being able to reproduce the issue seems to be your first task at hand. If you can't reproduce it consistently, it's likely a memory-related issue.
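For example, something along these lines (the log path and whether these commands are available on your host are assumptions):

tail -n 100 /var/log/apache2/error_log   # recent Apache/CGI errors; the path varies by host
free -m                                  # memory in use vs. available
top -b -n 1 | head -n 20                 # snapshot of the busiest processes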
A possible, and maybe hard to detect, source of this error is failing file locking. You can test whether your file system currently supports file locking with
flock testfile touch testfile
NFS file systems, for example, may exhibit this behavior depending on the NFS server configuration.
We have a custom backup solution utilizing the IBX controls in Delphi to perform nightly automatic backups. As part of our current validation of a successful backup, we read the output logs generated by the backup, looking for the "closing file, committing and finishing" verbiage that appears last in the log file. Additionally, we perform a full restore to a separate area to ensure the ibk file is valid. That's turning out to be problematic in terms of available drive space, so we're looking for other ideas to make sure the backup is successful.
How else might we ensure that our ibk file is valid?
Jeff,
Not sure what your database size or backup file size is, and whether they are too big for the remaining disk space. Can you share the database and backup size details?
Older InterBase (2017 and earlier) had a way for the command line tool, gbak, to pipe the output from the backup to another gbak process restoring from the backup. This would allow you to save the disk space on the backup file. But, since you are using the IBX backup/restore service, this is not possible. Also, InterBase 2020 has a different backup format which requires random (not sequential) write access to the backup file, thereby not allowing any pipe output even via the 'gbak' command line tool.
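On those older versions, the piped approach looked roughly like this (database paths, user, and password are placeholders; again, this does not apply to InterBase 2020 or to the IBX service):

gbak -b -user SYSDBA -password masterkey /data/prod.ib stdout | gbak -c -user SYSDBA -password masterkey stdin /data/restore_check.ib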
Here are a couple of ways to "reduce" the disk storage requirements that may work for you.
** Backup file **
You can have the InterBase backup service (from your application) store the target backup file in an external storage medium (HDD, USB stick etc., or a SAN disk/network file share). The backup/restore service can read/write backup file(s) from network shares/external medium.
** Restored database **
When restoring the database, you can use the service parameter option UseAllSpace (http://docwiki.embarcadero.com/Libraries/Sydney/en/IBX.IBServices.TRestoreOptions), equivalent to the gbak option "-use_all_space". This will save you about 20% space on restored data pages.
Turn off index creation, thereby reducing page consumption (possibly quite a bit, depending on your index definitions). But you will lose index validation because of this. This is the "DeactivateIndexes" option (gbak option "-inactive") documented on the same page linked above; a gbak sketch combining both options follows after these suggestions.
Restore the database to a remote InterBase server with its own storage medium, or to an attached USB stick or SAN disk. Since you are using the restored database only for validating the backup file, you can have this restored database on a slower I/O medium or a slower server over the network.
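For reference, a command-line restore combining the two options above might look roughly like this (file names, user, and password are placeholders):

gbak -c -use_all_space -inactive -user SYSDBA -password masterkey nightly.ibk /validate/restored_check.ib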
I have 600+ files to load into a DB2 database, version 10.5.9. Each file is nearly 200 MB. I have a batch script that loads each file in a loop.
My disk "/mnt/blumeta0/db2/copy" is 16 GB in size.
If I run this load in NONRECOVERABLE mode it works, but I can't do that in my prod database.
I tried running db2 connect reset and db2 terminate after each file was loaded, but that did not work.
I manually cleaned up the disk /mnt/blumeta0/db2/copy, but the total size of all the files is more than 16 GB, so I got the same error.
I cannot clean the folder in the script, as clean-up can only be done by the super user.
db2 "LOAD FROM $i OF DEL INSERT INTO <table_name>"
SQL3706N A disk full error was encountered on "/mnt/blumeta0/db2/copy".
How does the DB2 server clean the copy folder? Is there any other alternative I can try?
You indicated that the Load succeeds when using NONRECOVERABLE mode, but otherwise fails with the error "SQL3706N A disk full error was encountered on /mnt/blumeta0/db2/copy".
I'm guessing that the Load is being performed using the COPY YES option. Since the Load command that you pasted does not show the COPY YES option, I'm guessing that you have a special configuration setting enabled that forces Load operations to use COPY YES in order to prevent the table from becoming inaccessible in a rollforward recovery event or HADR standby takeover event. The name of this configuration setting (registry variable) is "DB2_LOAD_COPY_NO_OVERRIDE".
When the Load is performed with COPY YES, a copy of the table pages/extents that were generated during the Load operation is written into a copy image file.
I suspect that you have the registry variable "DB2_LOAD_COPY_NO_OVERRIDE=COPY YES /mnt/blumeta0/db2/copy" configured (you can use db2set -all on the database server to display all configured registry variables). If so, the copy image files are being stored in this path, which at 16GB appears to be too small to contain them all.
You can consider changing this path to a location with more disk space; however, the path should always be accessible in the event of a database rollforward recovery or HADR standby takeover, otherwise the table will not be accessible after such an event.
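For example, to inspect the current setting and point it at a larger volume (the new path below is a placeholder, and the value mirrors the format quoted above; registry variable changes generally take effect only after an instance restart with db2stop/db2start):

db2set -all                                                      # list all configured registry variables
db2set DB2_LOAD_COPY_NO_OVERRIDE="COPY YES /bigger/volume/db2copy"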
I am getting an error like the following while accessing a Postgres database:
ERROR: could not access status of transaction 69675
DETAIL: Could not open file "pg_clog/0000": No such file or directory.
I didn't do anything to the pg_clog folder, but the 0000 file is not there.
Is there any way to recover that file or in any way to fix this issue?
Any help would be appreciated.
You are experiencing database corruption, and you should restore from a backup. You should try to figure out what happened to the database so you can prevent it in the future.
Is your storage reliable?
Are you using dangerous settings like fsync = off?
Were there any crashes recently?
Are you really running 9.1? If yes, you shouldn't do that, as it is out of support.
Are there any files in the pg_clog directory? There should be.
Did you have an out-of-space problem recently that may have led someone to remove files from a “log” directory?
As stated in the previous response, you're better off restoring from a backup. However, I discovered that the metadata for those transaction files is not stored in the same location as the data: we ran into this when we restored the data on a test server where we were experimenting with full vacuum and needed to return the database to an earlier state. If data integrity isn't critical, as in a test database, you can get away with creating empty files for the missing transaction logs like this:
dd if=/dev/zero of=/path/to/db/pg_clog/xxxx bs=256k count=1
chown postgres.postgres /path/to/db/pg_clog/xxxx
chmod go-rwX /path/to/db/pg_clog/xxxx
There may be multiple missing files, but if it's just a few files this is an alternative to consider.
I am trying to do a DB2 import as part of a system copy, and the transaction logs filled up. The import was cancelled, a transaction log backup ran, and the number of logs was increased to approximately 90% of the available disk (previously 70%).
I restarted the DB and kicked off the import again, but now that errors out due to the tablespace state: running db2 list tablespaces show detail shows I have 4 tablespaces in Backup Pending state.
So I tried db2 backup database <SID> tablespace <SID>#BTABI online but I get the error:
SQL2059W A device full warning was encountered on device "/db2/db2". Do you want to continue(c), terminate this device only(d), abort the utility(t) ? (c/d/t) t
No option works except to terminate.
The thing is, the device isn't full. There is no activity on the DB; running db2 list applications gives:
SQL1611W No data was returned by Database System Monitor.
Running db2 "select log_utilization_percent,dbpartitionnum from sysibmadm.log_utilization order by 2" to show the log utilization returns 0.
There are no logs in use. The filesystem has free space. I even tried reducing the number of logs again to make sure, but I get the same issue.
I tried db2 "alter tablespace <SID>#BTABI switch online" instead, and although this returns a 'success' statement, it doesn't actually do anything -- my tablespaces are still in Backup Pending.
Any ideas, please?
You're trying to write the backup images to the /db2/db2 file system, which doesn't have enough space to hold the backup image(s).
Note: when you execute BACKUP DATABASE as in your example above without specifying where to send the backup (i.e. you don't use the "to /some/directory" clause or another option like "use TSM"), DB2 will write the backup image to the current directory. Make sure you specify where to store the backup image (and that the location has enough free space to hold it). If you don't care about recoverability and are just trying to get the table space out of Backup Pending state, you can specify /dev/null as your location, as @mustaccio suggests in the comments above.
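For example (the tablespace name is taken from your post; the target path is a placeholder, and the /dev/null variant is only for the case where you don't need the image):

db2 "backup database <SID> tablespace (<SID>#BTABI) online to /backups/with/enough/space"
db2 "backup database <SID> tablespace (<SID>#BTABI) online to /dev/null"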
Also: You may want to look at the COMMITCOUNT option for the import utility so you're not trying to insert all data in a single massive transaction.
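A sketch of what that might look like (the input file and table name are placeholders):

db2 "import from data.del of del commitcount 10000 insert into SAPECD.SOME_TABLE"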
As per the comments above -- I just kept running the import, resetting the 'pending load' status each time with:
load from /dev/null of del terminate into SAPECD.
A few packages fail each time, but the rest process. Letting it finish, resetting again, and restarting the import gets through a little more each time.
I know this issue has already been raised by others, but even trying previous suggestions I still get this error...
When I try to populate a table copying from a csv file, I get a permission error.
COPY Eurasia FROM '/Users/Oritteropus/Desktop/eurasia1.csv' CSV HEADER;
ERROR: could not open file "/Users/Oritteropus/Desktop/eurasia1.csv" for reading: Permission denied
SQL state: 42501
As previously suggested in these cases, I changed the permissions of the file (chmod 711 eurasia1.csv or chmod a+r eurasia1.csv), and I also changed the user rights with:
ALTER USER postgres WITH SUPERUSER; -- where postgres is my user
However, I still get the same error.
I also tried to manually change the privileges from pgAdmin, but it seems every privilege is already granted.
I'm working on Mac OS X and I'm using PostgreSQL 9.2.4.
Any suggestion? Thanks
The best option is to change your approach and use COPY FROM STDIN, as that avoids quite a number of permission issues.
Alternatively, you can make sure that the postgres user can access the file. This is rarely better than COPY FROM STDIN, however, for a couple of reasons.
With server-side COPY, the file I/O is done by PostgreSQL itself, so a bug in that code path could conceivably affect your data; with COPY FROM STDIN, the file is read by the client rather than the server.
If you are doing it on the server side because of automation/stored proc concerns, this is rarely a win, as you are combining transactional and non-transactional effects. COPY TO STDOUT and COPY FROM STDIN do not have these issues. (For example, you don't have to wonder whether the atime of the inode actually means the file was properly processed).
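If you are loading from psql, the \copy meta-command is an easy way to get the COPY FROM STDIN behavior, since the file is read by psql with your own permissions (the path below is taken from your example):

\copy Eurasia FROM '/Users/Oritteropus/Desktop/eurasia1.csv' CSV HEADER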