I'm currently trying to move my existing MongoDB deployment to the --directoryperdb option, to make use of different mounted volumes. In the docs this is achieved via mongodump and mongorestore; however, on a large database with over 50 GB of compressed data this takes a really long time, and all indexes need to be completely rebuilt.
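For reference, the documented path is roughly the following (a sketch assuming a standalone instance and default paths; indexes are rebuilt during the restore):

    mongodump --out /backup/dump
    # stop mongod, then start it against a fresh dbpath with the new layout
    mongod --dbpath /data/db_new --directoryperdb --fork --logpath /var/log/mongod.log
    mongorestore /backup/dump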
Is there a way to move the WiredTiger files inside the current /data/db path, just as you would when simply changing the db path? I've tried copying the corresponding collection- and index- files into their subdirectories, which doesn't work. Creating dummy collections, replacing them with the old files, and then running --repair works, but I don't know how long it takes, since I only tested it on a collection of a few documents. It also seems very hacky, with a lot of things that could go wrong (for example, data loss).
Any advice on how to do this, or is this something that simply should not be done?
I am stuck with a sys_file_processedfile table with more than 200,000 entries. Is it possible to truncate the table and empty the folder /fileadmin/_processed_ without breaking anything?
Thanks!
It is possible.
In Admin Tools (Install Tool), under Maintenance, there is a card named Remove Temporary Assets, which you should use to do so.
TYPO3 stores processed files and cached images in a dedicated directory. This directory is likely to grow quickly.
With this action you can delete the files in this folder. Afterwards, you should also clear the cache database tables.
The File Abstraction Layer additionally stores a database record for every file it needs to process (e.g. image thumbnails). If you have modified some graphics settings (All Configuration [GFX]) and need all processed files to be regenerated, you can use this tool to remove the "processed" ones.
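If you prefer to do it by hand, the manual equivalent boils down to something like this (a sketch only; the web root, DB credentials, and DB name are assumptions, and you should take a backup first):

    # empty the processed-files folder
    rm -rf /var/www/html/fileadmin/_processed_/*
    # remove the matching records so they get regenerated on demand
    mysql -u typo3 -p typo3_db -e "TRUNCATE TABLE sys_file_processedfile;"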
I've used the Data Compare tool to update the schema between the same DBs on different servers, but what if so many things have changed (including data) that I simply want to REPLACE the target database?
In the past I've just used T-SQL: take a backup, then restore onto the target with the REPLACE option, and/or MOVE if the data and log files are on different drives. I'd rather have an easier way to do this.
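For reference, that T-SQL approach looks roughly like this (database names, paths, and logical file names here are assumptions):

    BACKUP DATABASE SourceDb TO DISK = N'C:\Backups\SourceDb.bak' WITH INIT;

    RESTORE DATABASE TargetDb
    FROM DISK = N'C:\Backups\SourceDb.bak'
    WITH REPLACE,
         MOVE 'SourceDb'     TO N'D:\Data\TargetDb.mdf',
         MOVE 'SourceDb_log' TO N'E:\Logs\TargetDb_log.ldf';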
You can use Schema Compare (also by Red Gate) to compare the schema of your source database against a blank target database (and update), then use Data Compare to compare the data (and update). This should leave the target identical to the source. However, it may well be easier to use the backup/restore method in that case.
I need to know: is there any possibility of restoring the data in a collection or database if it was dropped?
The OS, by default (or in the case of Windows: in any case), will not allow you to restore deleted data. You will need a third-party program that can read raw disk sectors. It is also worth noting that while database drops delete the files, collection drops do not; instead, the collection's storage is nulled out.
Dropping a collection should make it nearly impossible to retrieve the data, since the hard drive sectors that were used have been overwritten with new data (basically a one-pass zero fill).
So the files may be recoverable after a database drop, but even that is questionable.
I have a MongoDB database with files db.ns, db.0, db.1, ... db.7.
I accidentally removed all the data from a collection, but when exploring the database files with vim I can see that all (or part of) the data is still there.
I tried to recover the data by moving the files to another mongod instance, and with mongorestore; I also tried mongodump, but the collection appears empty.
So I tried to recover directly from the files. I tried bsondump on each file, and on a single concatenated file (cat db.ns db.1 ... > bigDB), but nothing.
I don't know what other ways there are to recover the data from MongoDB database files.
Any suggestions? Thanks!
[SOLVED]
I will try to explain what I did to "solve" the problem.
First, the theory.
In this SlideShare you can see a little of how MongoDB database files work:
http://www.slideshare.net/mdirolf/inside-mongodb-the-internals-of-an-opensource-database
Options:
When you accidentally remove a collection:
the first thing you have to do is quickly copy the whole database directory (normally /data/db or /var/lib/mongodb) and stop the service.
Then remove the journal directory from the copy, try to recover from that copy, and pray ;D
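Roughly, a sketch assuming a default Linux package install (your dbpath and service name may differ):

    sudo systemctl stop mongod                        # stop writes as soon as possible
    sudo cp -a /var/lib/mongodb /var/lib/mongodb.bak  # always work on a copy
    sudo rm -rf /var/lib/mongodb.bak/journal          # drop the journal in the copy
    mongod --dbpath /var/lib/mongodb.bak --repair     # attempt recovery from the copy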
You can see more about that here:
mongodb recovery removed records
In my case, this did not work.
With journaling, MongoDB does not update its database files directly, only their indexes.
Because of that, you can open the files (named database.ns, database.0, database.1, ...) and try to recover the data from them.
These files are essentially chunks of BSON mixed with binary data, so you can open them and read the information.
In my case, I wrote a simple function in PHP that first reads a data file and splits it into smaller files.
Then it takes them one by one, applies some regular expressions to strip hexadecimal garbage, splits the content into records (you can look for the "_id" key to do that), and does some other cleanup.
And finally, I had to process all the preprocessed data manually to extract the actual information.
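A rough, hypothetical sketch of that idea (the exact regular expressions depend entirely on your data, so treat this as a starting point, not a BSON parser):

    <?php
    // Split a raw MMAPv1 data file into candidate records, keyed on "_id".
    function extractRecords(string $path): array
    {
        $raw = file_get_contents($path);                   // read the raw data file
        $raw = preg_replace('/[^\x20-\x7E]+/', ' ', $raw); // strip non-printable bytes
        $chunks = preg_split('/(?=_id)/', $raw);           // each document starts near "_id"
        // drop fragments too small to be real records
        return array_values(array_filter($chunks, fn ($c) => strlen($c) > 20));
    }

    @mkdir('recovered');
    foreach (extractRecords('database.0') as $i => $record) {
        file_put_contents("recovered/record_$i.txt", $record); // one file per candidate
    }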
I think I lost at least 15-25% of the information. But I prefer to think that I recovered 75% of the lost data.
Caution:
This is not an easy or safe way to solve this problem. In my case, the database only ever received inserts; documents were never modified or updated.
With this method a lot of information will be lost: Mongo ObjectIds, integers, and dates can't be recovered.
The process is almost 100% manual; you can spend your time automating certain tasks, but that will depend on your database structure.