Is there any way to have separate servers for the engine and the storage of MongoDB? I am interested in running the MongoDB engine on a local Ubuntu machine but storing the data remotely on another server, i.e. pointing the MongoDB data directory at a remote address.
Whether or not it's possible, it wouldn't be advisable. MongoDB really benefits from fast disk I/O, and if your data is stored on a remote server, the network latency would make that disk I/O very slow.
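Purely to illustrate what the question is asking for (not a recommendation), a minimal sketch might point mongod's dbpath at a network mount; the storage-server hostname, NFS export, and mount point below are hypothetical:

    # Mount a directory exported by the remote storage server over NFS
    sudo mount -t nfs storage-server:/exports/mongo-data /mnt/mongo-data

    # Point mongod's data directory at the mounted path
    mongod --dbpath /mnt/mongo-data

Every read and write then crosses the network, which is exactly why the latency penalty described above makes this setup slow.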
Is there any way to make this operation faster?
I'm trying to restore my DB to AWS DocumentDB, and it will probably take weeks to finish... my overall data is less than 400 MB.
The dump is gzipped.
To resolve this, it was suggested to run the command from an EC2 instance rather than from a remote host.
This enabled a speedy import.
The likely reason is the sheer number of network round trips the restore performs: across the internet each operation carries far more latency than it would against a resource on the local network.
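A minimal sketch of the faster path, run from an EC2 instance close to the DocumentDB cluster; the endpoint, credentials, CA bundle, and dump directory below are placeholders:

    # Restore the gzipped dump from inside AWS so each insert round trip
    # stays on the local network instead of crossing the internet
    mongorestore \
      --host mydocdb.cluster-xxxxxxxx.us-east-1.docdb.amazonaws.com:27017 \
      --username myuser --password '<password>' \
      --ssl --sslCAFile rds-combined-ca-bundle.pem \
      --gzip --dir ./dump

Raising --numInsertionWorkersPerCollection can speed things up further, at the cost of more load on the cluster.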
I want to integrate a MongoDB Atlas cluster with SolarWinds for monitoring. These are the metrics I want to monitor from SolarWinds. Is it possible to do this?
MongoDB metrics
Connections, Memory, DB Storage, Operation Execution Time
Hardware metrics
Disk IOPS (input/output operations per second), Process CPU, System CPU, Disk Space Free, Disk Space Used
The hardware metrics you will get with basic monitoring (agent, WMI, or SNMP), though possibly not disk IOPS via SNMP.
There are two MongoDB monitoring templates provided with Server & Application Monitor (SAM), one for Linux and one for Windows.
Details - http://www.solarwinds.com/documentation/en/flarehelp/sam/content/sam-mongo-sw5644.htm
The Windows version uses PowerShell to access the database, calling the mongo shell executable with --eval switches. It provides connections, global locks, network, messages, operations, database and other statistics, along with a TCP port check and a service/process check.
If you know the correct --eval commands, it should be easy enough.
The Linux version does the same but with shell scripts.
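As a rough illustration of the kind of --eval call such a template makes (the Atlas connection string and credentials are placeholders, and the exact fields the SAM scripts read may differ), the legacy mongo shell can print a few serverStatus() counters that map to the metrics listed above:

    mongo "mongodb+srv://cluster0.example.mongodb.net/admin" \
      --username monitorUser --password '<password>' --quiet \
      --eval 'var s = db.serverStatus();
              print("connections.current=" + s.connections.current);
              print("mem.resident_MB="     + s.mem.resident);
              print("opcounters.query="    + s.opcounters.query);'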
Can I deploy a large database by copying its files (e.g. a testing database with the files testing.0, testing.1, testing.ns found in the MongoDB dbpath) from another server to the target servers (a replica set), to avoid using communication bandwidth for replication (in case it is only deployed to the primary)? Basically, I want to avoid the slow process of replication.
If journaling is enabled, what is the effect on the process?
Yes you can; this is a perfectly valid way of avoiding tedious and time-consuming replication between members of a distant or high-latency network.
If journaling is enabled, nothing really changes: copying via the file system goes around MongoDB entirely.
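A minimal sketch of the file-copy approach, assuming mongod can be stopped on the source and the data directory is /var/lib/mongodb on both machines (hostnames and paths are hypothetical):

    # On the source server: stop mongod so the files on disk are consistent
    sudo systemctl stop mongod

    # Copy the whole data directory (including the journal) to the target
    rsync -av /var/lib/mongodb/ target-host:/var/lib/mongodb/

    # On the target server: fix ownership and start mongod on the copied files
    sudo chown -R mongodb:mongodb /var/lib/mongodb
    sudo systemctl start mongod

Copying with mongod stopped (or against a filesystem snapshot) avoids shipping files that are mid-write.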
There is a production server for real-time data collection in a remote server room. My workshop also has a server for offline data analytics. Is there any way to sync those two databases without a master? It looks like master-slave replication needs the master to be accessible even when I connect to the slave.
I know I can run mongodump and mongorestore, but the data is too big to carry over the network. I need some way to do incremental updates that works like mirroring.
Thanks.
If I populate a MongoDB instance on my local machine, can I wholesale transfer that database to a server and have it work without too much effort?
The reason I ask is that my server is currently an Amazon EC2 Micro instance, and I need to put LOTS of data into MongoDB; I don't think I can spare the transactions and bandwidth on the EC2 instance.
There is a copy database command which I guess should be a good fit for your need.
Alternatively, you can just stop MongoDB, copy the database files to another server, and run an instance of MongoDB there.
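A minimal sketch of the copy-database approach, run on the destination (EC2) server; the hostnames and database name are placeholders, and note that copydb/db.copyDatabase() was deprecated in MongoDB 4.0 and removed in 4.2, so on newer versions mongodump/mongorestore is the replacement:

    # Pull the database from the local machine (which must be reachable
    # from the server) into the mongod running on the EC2 instance
    mongo --eval 'db.copyDatabase("mydb", "mydb", "my-local-machine.example.com:27017")'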