How to use the same SQLite3 database from multiple Perl processes?

I'm in the unfortunate situation where multiple Perl processes write to and read from the same SQLite3 database at the same time.
This often causes the processes to crash, either because two of them try to write at the same time or because one is reading from the database while another tries to update the same record.
Does anyone know how I could coordinate the multiple processes to work with the same SQLite database?
I'll be moving this system to a different database engine, but before I do that, I somehow need to make it work as it is.

SQLite is designed to be used from multiple processes. There are some exceptions if you host the SQLite file on a network drive, and there may be a way to compile it so that it expects to be used from only one process, but I use it from multiple processes regularly. If you are experiencing problems, try increasing the timeout value. SQLite uses filesystem locks to protect the data from simultaneous access, so if one process is writing to the file, a second process might have to wait. I set my timeouts to 3 seconds and have had very few problems with that.
In Perl, the timeout can be set through DBD::SQLite's sqlite_busy_timeout.
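As a minimal sketch (the database file name "shared.db" is a placeholder):

    use strict;
    use warnings;
    use DBI;

    # "shared.db" stands in for the database the processes share.
    my $dbh = DBI->connect("dbi:SQLite:dbname=shared.db", "", "",
        { RaiseError => 1, AutoCommit => 1 });

    # Wait up to 3000 ms for a competing lock to clear instead of
    # failing immediately with "database is locked".
    $dbh->sqlite_busy_timeout(3000);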

Related

Does Firebird have "Group Commit"?

As I understand it, this is where a background thread is responsible for writing transactions to disk in "careful write" order so that the user does not have to wait for the actual writing to disk to occur.
I have seen references to this (e.g. here) from a long time ago relating to InterBase, but I could not see it mentioned in relation to Firebird anywhere.
Using the gfix utility you can set the FORCED WRITES flag on or off for a database file. When turned on, the server will wait until the actual disk write occurs. When turned off, the server will continue execution, leaving it to the OS to decide when to write the data to disk. Performance gains are up to 3x, but there is a possibility that some data will be written in the wrong order if a power failure occurs.
We strongly advise our customers to use a RAID controller with an independent power source for its cache memory, together with FORCED WRITES = ON.
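For reference, the gfix invocation looks roughly like this (the database path and credentials are placeholders):

    # Turn forced writes off (faster, but riskier on power loss):
    gfix -user SYSDBA -password masterkey -write async /data/mydb.fdb

    # Turn forced writes back on (the safe default):
    gfix -user SYSDBA -password masterkey -write sync /data/mydb.fdb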
Based on the comments in this thread and on searching online, it seems that Firebird does not have group commit.

PowerShell holding a lock on a log file

I use Add-Content and Out-File to write information to log files, and to cut down on the number of logs I have, I'd like multiple instances of my script to be able to share log files. Is there any way to make sure that PowerShell doesn't hold a lock on those files when writing to them?
For example, I have a SQLCMD call that restores a database, which can take 20 minutes or so. During this time, it writes its output to a log file and thus holds a lock on that file (so I can't write to it from other scripts).
Ideally I would like both processes to be able to write at the same time. Should I write a test function to see if the file is locked prior to writing, and if it is, sleep for x seconds and check again?
Multiple processes writing to one file is pretty difficult to pull off safely. A better approach is a transactional system: many people have multiple processes log to a transactional database. Another good option is to write to a custom or system event log, which is also transactional and should avoid collisions.
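As a minimal sketch of the event-log option (the source name "MyScript" is a placeholder, and registering it requires an elevated prompt):

    # One-time setup, run as administrator: register an event source.
    New-EventLog -LogName Application -Source "MyScript"

    # From any number of concurrent script instances; the event log
    # serializes writes, so there is no file lock to contend with.
    Write-EventLog -LogName Application -Source "MyScript" `
        -EntryType Information -EventId 1000 `
        -Message "Database restore completed."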

Wait/notify mechanism for multiple readers in Oracle SQL?

We have multiple processes that read one database table, pick up an available record, and work with it. That part works fine.
When there is no record in the table, each process waits 5 seconds and reads it again.
As a result, a record can sit idle in the table for up to 5 seconds, which is not good.
What would be the recommended solution to eliminate this waiting and proceed immediately when a record is created? One solution could be a trigger that does something when a record is created, but the trigger would need knowledge of the worker processes in order to deliver the record to one of the idle ones.
The ideal solution would be for each process to block on a read from something, so that when a record is created, one of the waiting processes receives it while the others continue to wait.
Does Oracle 10g provide such a mechanism, or something similar?
Look at Database Change Notification in 10g, which has since been renamed Continuous Query Notification.
I normally like to include an example, but it's hard to find a 10g instance these days, and even a short example requires a lot of code. The process looks complicated; you might be better off using triggers as you suggested and dealing with the tight coupling.
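As a rough sketch of the trigger route using Oracle's standard DBMS_ALERT package (the table and alert names below are made up): the trigger signals on insert, the alert is delivered when the inserting transaction commits, and every waiting session wakes up, so each worker must still try to claim a record.

    -- Fire an alert whenever new work arrives ("work_queue" and
    -- 'new_work' are invented names).
    CREATE OR REPLACE TRIGGER work_queue_notify
    AFTER INSERT ON work_queue
    BEGIN
      DBMS_ALERT.SIGNAL('new_work', 'record available');
    END;
    /

    -- Each worker registers once, then blocks instead of polling.
    DECLARE
      l_message VARCHAR2(1800);
      l_status  INTEGER;
    BEGIN
      DBMS_ALERT.REGISTER('new_work');
      LOOP
        -- Returns when the alert fires, or after 300 s (status 1 = timeout).
        DBMS_ALERT.WAITONE('new_work', l_message, l_status, 300);
        -- All waiters wake, so each still has to claim a record, e.g.
        -- with an UPDATE of a status column that only one will win.
      END LOOP;
    END;
    /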

MongoDB: mongodump/restore vs. backup up files directly

I'm wondering about experiences people have had with MongoDB backups. Assuming a filesystem snapshot is not an option, what have your experiences been with mongodump/restore versus doing a write lock and backing up the files? Have you run into any bugs with one method that caused you to switch?
From the reading I've done so far, it seems like mongodump/restore has the advantage that it can be run while the server is live, but I'm not sure how well it will scale.
Locking and copying files is only an option when you don't have a heavy write load.
mongodump can be run against a live server. It will create some additional load, so don't do it at peak hours. Also, it is advisable to run it against a secondary node (if you don't use replica sets, you should).
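For illustration, a typical invocation might look like this (the host and output path are placeholders):

    # Dump all databases from a secondary into a backup directory.
    mongodump --host secondary1.example.com --port 27017 --out /backups/dump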
There are some complications when you have a DB so large that no single machine can hold it. See this document.
Also, if you have a replica set, you can take down one of the secondaries and copy its files directly. See http://www.mongodb.org/display/DOCS/Backups:
A simple approach is just to stop the database, back up the data files, and resume. This is safe but of course requires downtime. This can be done on a secondary without requiring downtime, but you must ensure your oplog is large enough to cover the time the secondary is unavailable so that it can catch up again when you restart it.

Access to Apache::DBI from DBI

Is it possible to access an Apache::DBI database handle from a Perl script that isn't running under mod_perl?
What I am looking for is database pooling for my Perl scripts; I have a fair number of database sources (Oracle/MySQL) and a growing number of scripts.
Some ideas, like SQLRelay, using Oracle 10g XE with database links and pooling, or converting all the scripts to SOAP calls, are becoming more and more viable, but if there were a mechanism for reusing Apache::DBI, I could hold this off a bit.
I have no non-Perl requirements, so we don't have a PHP/JDBC implementation or similar to deal with.
Thanks
First off, it helps to remember that DBI/DBD is not a wire protocol, but an API over diverse data sources.
Since you want to connect to a pool of database connections from separate processes, DBIx::Connector is not appropriate for that, and Rose::DB seems an odd choice too (both are wrappers over DBI). You are looking for something like DBD::Proxy or DBD::Gofer, which let you connect multiple processes to a shared database handle.
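As a minimal sketch of the DBD::Proxy route (the host name, port, credentials, and Oracle DSN below are all placeholders):

    # On the pooling host, start the proxy server that will own the
    # real connections (the dbiproxy script ships with DBI):
    #   dbiproxy --localport=3334
    use strict;
    use warnings;
    use DBI;

    # In each client script, connect through the proxy instead of
    # directly to the database.
    my $dbh = DBI->connect(
        "dbi:Proxy:hostname=dbhost;port=3334;dsn=dbi:Oracle:orcl",
        "scott", "tiger",
        { RaiseError => 1 },
    );

DBD::Gofer works similarly but is stateless, so it trades transaction support for robustness and easier scaling.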