MongoDB Atlas: back up/export data automatically - mongodb

I'm using MongoDB Atlas.
When I try to set up a backup, it says "Turn on Backup (M10 and up)", which means that I cannot back up (logical size: 928.8 KB).
Is there a different method to back up a MongoDB Atlas database?
I know that I can use Compass to export each collection, but that is tedious; I would like a simple method to back up my data daily, preferably automatically.
Is there a similar service that offers backups for smaller databases as well?
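One workaround that does not depend on Atlas's built-in backups is to schedule mongodump against the cluster's connection string. A minimal sketch, assuming mongodump is installed locally; the URI, credentials, and paths below are placeholders, not values from the question:

    # Dump the whole cluster into a dated folder (URI and paths are placeholders).
    mongodump --uri "mongodb+srv://user:password@cluster0.example.mongodb.net/mydb" \
        --out "/backups/atlas-$(date +%F)"

    # Run it daily at 02:00 via cron (crontab -e); note that % must be
    # escaped as \% inside a crontab entry.
    0 2 * * * mongodump --uri "mongodb+srv://user:password@cluster0.example.mongodb.net/mydb" --out "/backups/atlas-$(date +\%F)"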

Related

Is there a faster way to clone a limited Mongo database

I want to clone a remote Mongo database to my local database with limited data. I know I can do it by exporting each collection and then importing it into my local database. But is there a faster or more efficient way to clone the Mongo database?
Thanks.
To take a database dump, you need to:
connect to the server
issue the query requesting the data you want
receive the query results (as BSON)
perform the minimum number of transformations on that BSON to get to the format you want to save in
mongodump with BSON output and filter conditions should do these operations no less efficiently than any hand-built system you can come up with.
The same goes for importing.
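As a hedged sketch of that approach (the host, database, collection, and filter below are placeholders):

    # Dump only the matching documents from one remote collection.
    mongodump --host remote.example.com --port 27017 \
        --db mydb --collection users \
        --query '{ "active": true }' \
        --out ./dump

    # Import the dump into a local mongod; the same efficiency argument
    # applies in reverse.
    mongorestore --host localhost --port 27017 ./dump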

Should I use different databases or just different collections in MongoDB to store user information and the rest of the data?

I am pretty new to MongoDB. I am creating an application where I will have users and a lot of other data. I have already created a database where I store user information. Now I have to create a new database or collection to store the rest of the data. What are the pros and cons of creating a different database versus a different collection?
I use MongoDB in a very similar way and have already thought a lot about dividing my database. Here are some of the things we considered:
Using two databases is harder to maintain: your application will have to know which database to update, and it can also increase costs (even more if you intend to monitor the databases or host them on different infrastructure).
Mongo 2 used to lock the entire database when updating, so back then it would have been better to separate them, but Mongo 3 with WiredTiger locks only the document, so you won't have the problems we used to have in the past.
One good thing about splitting the data into two databases is that even if your data overwhelms one database, the other will still work.
IMHO, if you use a decent machine to host your databases and monitor them properly, you won't have any trouble keeping just one database until your system is huge, with millions of active users. You can also use replica sets and sharding to increase efficiency.
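To make the maintenance point concrete, here is a hypothetical sketch (all database and collection names invented) of the same writes addressed as two databases versus two collections in one database, using mongosh:

    # Two databases: the application must know which database holds what.
    mongosh "mongodb://localhost:27017" --eval '
      db.getSiblingDB("users_db").profiles.insertOne({ name: "alice" });
      db.getSiblingDB("content_db").posts.insertOne({ title: "hello" });
    '

    # One database, two collections: a single handle covers everything.
    mongosh "mongodb://localhost:27017/app" --eval '
      db.profiles.insertOne({ name: "alice" });
      db.posts.insertOne({ title: "hello" });
    '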

In MongoDB, does a lock apply to a collection, a database, or a server?

In a MongoDB server there may be multiple databases, each database can have multiple collections, and a collection can have multiple documents.
Does a lock apply to a collection, a database, or a server?
I ask because, when designing a MongoDB database, I want to determine what is stored in a database and what in a collection. My data can be partitioned into different parts, and I want to be able to move one part from a MongoDB server to a filesystem without being hindered by the lock that applies to another part, so I wish to store the parts of data in such a way that different parts have different locks.
Thanks.
From the official documentation: https://docs.mongodb.com/manual/faq/concurrency/
Basically, it's global / database / collection.
But with some specific storage engines it can lock at the document level too, for instance with WiredTiger (only with Mongo 3.0+).
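As a quick check (modern mongosh shown, which postdates this answer), you can ask the server which storage engine it runs, since that determines the lock granularity:

    # "wiredTiger" => document-level locking (MongoDB 3.0+);
    # the older MMAPv1 engine locked at the collection level.
    mongosh --eval 'db.serverStatus().storageEngine'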

MongoDB: keep information about sharded collections when restoring

I am using mongodump and mongorestore in a replicated sharded cluster on MongoDB 2.2 to take a backup and restore it.
First, I use mongodump to create a dump of the whole system; then I drop a specific collection and restore it using mongorestore with the output of mongodump. After that, the collection is correct (the data it contains is correct, and so are the indexes), but the information about whether this collection is sharded is lost. Before dropping it, the collection was sharded; after the restore, however, it was not sharded anymore.
I was wondering, then, whether there is a way to keep this information in backups. I thought that the sharding information for a collection might be kept in the admin database, but in the dump the admin folder is empty, and running show collections on that database returns nothing. Then I thought it could be kept in the metadata, but that would be strange, because I know the metadata stores index information, and indexes are correctly restored.
I would also like to know whether it would be possible to keep this information by using filesystem snapshots instead of mongodump + mongorestore, or perhaps still using mongodump and mongorestore but stopping the system or locking writes. I don't think this last point is the cause, because I am not performing write operations while restoring even without locking, but I mention it just to give ideas.
I would also like to know if anyone is completely sure whether this feature is simply unavailable in the current version.
Any ideas?
If you are using mongodump to back up your sharded collection, are you sure it really needs to be sharded? Usually sharded collections are very large, and mongodump would take too long to back them up.
What you can do to back up a large sharded collection is described here.
The key piece is to back up your config server as well as each shard, and to do it as close to "simultaneously" as possible after having stopped the balancer. The config DB is small, so you should probably back it up very frequently anyway. The best way to back up large shards is via file snapshots.
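A hedged sketch of that procedure (hostnames, ports, and paths are placeholders, and modern shell helpers are shown; the exact commands differ on 2.2):

    # 1. Stop the balancer so chunks don't migrate mid-backup.
    mongosh "mongodb://mongos.example.com:27017" --eval 'sh.stopBalancer()'

    # 2. Dump the small config database that holds the sharding metadata.
    mongodump --host cfg1.example.com --port 27019 --db config --out /backups/config

    # 3. Back up each shard as close to simultaneously as possible
    #    (file snapshots are preferred for large shards; mongodump is
    #    shown here only for illustration).
    mongodump --host shard0.example.com --port 27018 --out /backups/shard0
    mongodump --host shard1.example.com --port 27018 --out /backups/shard1

    # 4. Restart the balancer.
    mongosh "mongodb://mongos.example.com:27017" --eval 'sh.startBalancer()'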

Creating a different database for each collection in MongoDB 2.2

MongoDB 2.2 has a write lock per database, as opposed to the global write lock on the server in previous versions. So would it be OK if I stored each collection in a separate database to effectively get a write lock per collection? (That would make it look like MyISAM's table-level locking.) Is this approach faulty?
There's a key limitation to the locking, and that is the local database. That database includes the oplog collection, which is used for replication.
If you're running in production, you should be running with replica sets. If you're running with replica sets, you need to be aware of the write lock's effect on that database.
Breaking out your 10 collections into 10 DBs is useless if they all block waiting for the oplog.
Before taking the large step of rewriting, please ensure that the oplog will not cause issues.
Also, be aware that MongoDB implements DB-level security. If you're using any security features, you will now have more DBs to secure.
Yes, that will work; 10gen actually offers this as an option in their talks on locking.
I probably wouldn't isolate every collection, though. Most databases seem to have 2-5 high-activity collections. For the sake of simplicity, it's probably better to keep the low-activity collections grouped in one DB and put the high-activity collections in their own databases.
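As an illustration of that grouping (all names hypothetical):

    # High-activity collections get their own databases, and therefore
    # their own per-database write locks (MongoDB 2.2 semantics).
    mongosh --eval '
      db.getSiblingDB("events_db").events.insertOne({ t: new Date() });
      db.getSiblingDB("sessions_db").sessions.insertOne({ t: new Date() });
    '

    # Low-activity collections stay grouped in a single database.
    mongosh --eval '
      db.getSiblingDB("misc_db").settings.insertOne({ k: "v" });
      db.getSiblingDB("misc_db").audit.insertOne({ k: "v" });
    '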