MongoDB, modify date of data files not updated (Windows)

I just noticed something strange with the MongoDB physical files on Windows.
When I update or insert data in my MongoDB database, the modify date of the related *.0 files in the 'data' folder is not updated.
I understand that the size is pre-allocated, so it stays the same, but why isn't the modify date at least updated?
This is quite an issue for me because I store the Mongo data files in Google Drive, so nothing gets synced due to this odd behavior.
Does anyone have an explanation?
Thank you

To answer my own question, I settled on using **mongodump** to keep my data up to date on Google Drive.
I scheduled a task that frequently dumps my Mongo database into the same synced folder, so that folder is backed up; it just needs some extra work with **mongorestore** on the other machines, but those are fairly easy steps.
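For reference, here is a minimal sketch of that setup. The paths, the database name, and the Google Drive folder are assumptions, not details from the post; the dump command is the part that gets scheduled (Windows Task Scheduler here, cron elsewhere).

```
# Dump the database into a folder that the Google Drive client syncs
# (paths and database name are hypothetical).
mongodump --host localhost --port 27017 --db mydb --out "C:/Users/me/Google Drive/mongo-backup"

# On another machine, once the folder has synced, load the dump back in.
# --drop replaces existing collections with the dumped versions.
mongorestore --drop "C:/Users/me/Google Drive/mongo-backup"
```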

Related

Moving an existing MongoDB deployment to directoryperdb by moving files instead of dump/restore

I'm currently trying to move my existing MongoDB deployment to the --directoryperdb option in order to make use of different mounted volumes. In the docs this is achieved via mongodump and mongorestore; however, on a large database with over 50GB of compressed data this takes a really long time, and the indexes need to be completely rebuilt.
Is there a way to move the WiredTiger files inside the current /data/db path, the way you would when just changing the db path? I've tried copying the corresponding collection- and index- files into their subdirectory, which doesn't work. Creating dummy collections and replacing them with the old files and then running --repair works, but I don't know how long it takes since I only tested it with a collection of a few documents; it also seems very hacky, with a lot of things that could go wrong (for example, data loss).
Any advice on how to do this, or is this something that simply should not be done?
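For comparison, this is roughly what the documented dump/restore route looks like; the paths, port, and log location are assumptions, and the index rebuild the question mentions happens during the restore step.

```
# 1. Dump everything from the running deployment (paths are hypothetical).
mongodump --host localhost --port 27017 --out /backup/full-dump

# 2. Stop mongod, then start it against a fresh dbpath with per-database directories.
mongod --dbpath /mnt/newdata --directoryperdb --fork --logpath /var/log/mongod-new.log

# 3. Restore; mongorestore rebuilds all indexes from the dumped metadata.
mongorestore --drop /backup/full-dump
```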

Update all data in MongoDB or replace MongoDB instance

MongoDB contains data ready for client-side apps. The raw data is stored in Google BigQuery (GBQ). Each day a lot of new data is added to GBQ, and once a day pretty much everything in MongoDB needs to be updated according to the most recent data in GBQ. All outdated (not updated) records must be deleted.
What is the right way to handle the MongoDB update with close to zero downtime?
Among the more drastic solutions: maybe I should have two instances of MongoDB, one in production and another being updated. Once the second DB is updated, I'd run a Google Kubernetes Engine deploy with changed configs, so all clients are smoothly moved from the previous data to the updated data without partially updated data and without downtime. Though I have never heard of such a setup, so I'm not sure it's the right one.
Another solution is to have two versions of each collection under a single instance of MongoDB. Once a collection is updated, the server switches to that collection.
The 2nd solution seems a good option. If you know the trigger for the update, you can keep downtime to a minimum by creating a new collection (named by date, or by a unique serial) and updating your code accordingly.
I had good experience doing this for a fashion website some time back, where we scraped data (using Scrapinghub) and imported it into MongoDB (collections stored by date) and used it accordingly. Our scraping ran early in the morning (5-6 AM), and when our editors/curators came into the office they would start using the current dated collection (via the web interface, of course :) ).
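A rough sketch of that dated-collection approach, assuming the daily BigQuery extract arrives as newline-delimited JSON; the database, collection, and file names are made up, and the final rename is just one way to switch readers over without changing client code.

```
# Import today's extract into a dated collection (names and paths are hypothetical).
TODAY=$(date +%Y%m%d)
mongoimport --db shop --collection "products_$TODAY" --file "/exports/products_$TODAY.json"

# Once the import succeeds, swap readers over by renaming the new collection over
# the live one; dropTarget=true discards the outdated copy in the same step.
mongo shop --eval "db.getCollection('products_$TODAY').renameCollection('products', true)"
```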

Copying a MongoDB database record by record

We have a MongoDB sitting at 600GB. We've deleted a lot of documents and, in the hope of shrinking it, we repaired it onto a 2TB drive.
It ran for hours, eventually running out of the 2TB of space. When I looked at the repair directory, it had created way more files than the original database?
Anyway, I'm trying to look for alternative options. My first thought was to create a new MongoDB and copy each record from the old one to the new one. Is it possible to do this, and what's the fastest way?
I have had a lot of success copying databases using the db.copyDatabase command:
link to mongodb copyDatabase
I have also used MongoVUE, which is software that makes it easy to copy databases from one location to another - MongoVUE (which is just a graphical interface on top of Mongo).
If you have no luck with copyDatabase, I can suggest you try dumping and restoring the database to external files with something like mongodump, or take a filesystem snapshot with lvcreate.
Here is a full read on backup and restore, which should allow you to copy the database easily: http://docs.mongodb.org/manual/core/backups/
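Two hedged variants of that advice, with made-up host and database names. Note that db.copyDatabase was removed in MongoDB 4.2, so it only applies to older servers; the dump/restore route also rewrites the data files compactly, which matters when the goal is shrinking a bloated database.

```
# Variant 1: copyDatabase, run from the destination server (removed in MongoDB 4.2).
mongo --host newhost --eval 'db.copyDatabase("olddb", "olddb", "oldhost:27017")'

# Variant 2: dump from the old server and restore into the new one; the restore
# writes fresh, compact data files and rebuilds the indexes.
mongodump --host oldhost --port 27017 --db olddb --out /backup/olddb-dump
mongorestore --host newhost --port 27017 --db olddb /backup/olddb-dump/olddb
```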

Export from Mongo database files to BSON

I have a Mongo database's files: db.ns, db.0, db.1, ..., db.7.
I accidentally removed all the data from a collection, but the database files (explored with vim) still contain all (or part of) the data.
I tried to recover the data by moving the files to another MongoDB instance, with mongod --restore, and also with mongodump, but the collection appears empty.
So I tried to recover it from scratch, directly from the files. I tried bsondump on each file, and on a single concatenated file (cat db.ns db.1 ... > bigDB), but got nothing.
I don't know what other ways there are to recover the data from Mongo database files.
Any suggestions? Thanks!
[SOLVED]
I will try to explain what I did to "solve" the problem.
First, some theory.
This SlideShare deck shows a little of how MongoDB's database files work:
http://www.slideshare.net/mdirolf/inside-mongodb-the-internals-of-an-opensource-database
Options:
When you accidentally remove a collection:
The first thing you have to do is quickly copy the whole data directory (normally /data/db or /var/lib/mongodb) and stop the service.
Then remove the journal directory, try to recover from this copy, and pray ;D
You can see more about that here:
mongodb recovery removed records
In my case, this did not work.
With journaling, Mongo does not update its database files directly, only their indexes.
Because of that, you can access the files (named database.ns, database.0, database.1, ...) and try to recover from them.
These files are basically chopped-up BSON mixed with binary data, so you can open them and see all the information.
In my case, I created a simple function in PHP that first reads a file and splits it into smaller files.
Then it takes them one by one, applies some regular expressions to remove hexadecimal values, splits the info into records (you can use the "_id" key to do that), and does some other tasks to clean up the info.
And finally, I had to manually process all the preprocessed info to obtain the information.
I think I have lost at least 15-25% of the information, but I prefer to think that I have recovered 75% of the lost info.
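None of this is the original PHP code; it is just a rough sketch of the same starting point, with an assumed data path and service name: freeze the files, work on a copy, and carve readable fragments out of a data file, using the _id field to spot record boundaries.

```
# Stop mongod so the files stop changing (service name and path are assumptions).
sudo service mongod stop
cp -a /var/lib/mongodb /var/lib/mongodb.rescue

# Pull printable fragments out of one raw data file and flag the lines that look
# like document boundaries (the "_id" field), as a starting point for manual carving.
strings -n 8 /var/lib/mongodb.rescue/database.0 > fragments.txt
grep -n '_id' fragments.txt > candidates.txt
```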
Caution:
This is not an easy or safe way to solve this problem. In my case, the DB only received information and never modified or updated it.
With this method a lot of information will be lost: Mongo IDs, integers, and dates can't be recovered.
The process is 100% manual; you can spend time automating certain tasks, but that will depend on your database structure.

Reading data from a Postgres DB while it is being updated and/or written to

I am new to databases and I needed one for a project. My problem is as follows: I have 3 scripts that write to a Postgres DB and another script that runs updates on it. So far I haven't had any issues with that. However, now I need to read that data at the same time. More specifically, I need to read the last minute of data from that DB while the writes are happening, and I have another script for that. But when I run this script, I can't see any of the writes from the scripts that are supposed to be writing. Any suggestions?
Chances are your other scripts haven't COMMITed their data yet, which means that their updates aren't visible to your queries yet.
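A tiny illustration of that visibility rule with psql; the table and column names are invented. A row written inside a transaction that is never committed stays invisible to every other session (and is rolled back when the writer's session ends), while a committed or autocommitted insert shows up immediately.

```
# No COMMIT: the row is invisible to other sessions and is rolled back
# when this psql session ends.
psql -d mydb -c "BEGIN; INSERT INTO readings (ts, value) VALUES (now(), 42);"

# Autocommit (psql's default for a single statement): the row is visible right away.
psql -d mydb -c "INSERT INTO readings (ts, value) VALUES (now(), 42);"
psql -d mydb -c "SELECT count(*) FROM readings WHERE ts > now() - interval '1 minute';"
```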