I'm wondering whether there is a way to set a priority among the three config servers for all mongos instances.
We deploy our mongo clusters across IDCs: two of the config servers are in the main data center and the third is in the other one. We want mongos to access the first two, keeping the last one as a hot backup.
Thanks a lot!
The config server that is listed first in the string of three is the one that will be used by all mongos instances. If there is one you would prefer to be used, list it first when you launch your mongos processes.
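For example, if cfgA and cfgB sit in your main IDC and cfgC in the remote one (hostnames and ports here are purely illustrative), every mongos would be started with something like:

mongos --configdb cfgA.example.net:27019,cfgB.example.net:27019,cfgC.example.net:27019

Note that every mongos should list the same three config servers, in the same order; the ordering only expresses the preference described above.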
So I installed MongoDB as a windows service. It starts and works as expected.
Then while I was playing around I decided to add a new data directory and, from the command line, start an instance of mongod.exe pointing at this new data directory.
So at this point I have one instance running as a service and one instance running from a command prompt (or that's how it appears).
I then connected with my gui tool to localhost and got the server instance.
Looking at the details they both appear to be running on the same port (27017).
My questions are:
Are there really two instances running on the same port or is it one instance with two data directories?
If it is two instances how can they share the same port?
If it is two instances is there a way to connect to either one?
If it is one instance then are both data directories being used?
If it is one instance and both data directories are being used what is the second one being used for?
Are there really two instances running on the same port or is it one instance with two data directories?
No, two instances cannot run on the same port. You would have gotten an error when you started the second mongod on the same port.
If it is two instances how can they share the same port?
No, the same port cannot be shared between two different mongod instances on the same machine. Also, a single mongod instance cannot have two data directories.
If it is two instances is there a way to connect to either one?
Check the service status to see whether it is running; you can also check mongod.log for the current state of the service. For the command-prompt instance, you should be able to see the status in the command prompt itself, as long as the --fork option is not enabled.
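If what you actually want is two separate instances, give the second one its own data directory and its own port, and connect to each explicitly. A minimal sketch (the path and port are just examples):

mongod.exe --dbpath C:\data\db2 --port 27018
mongo.exe --port 27018

The service instance stays reachable on the default port 27017.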
If it is one instance then are both data directories being used?
No, there cannot be two data directories for a single instance of mongod.
If it is one instance and both data directories are being used what is the second one being used for?
As above, a single mongod instance cannot have two data directories, so the second directory is not being used by the running instance.
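If you are unsure which instance (and therefore which data directory) your GUI tool actually connected to, you can ask the running mongod for its startup options from the shell:

db.serverCmdLineOpts()

The parsed output includes the dbpath the connected instance was started with, which is the only data directory it uses.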
Can we have this type of configuration?
Two servers, each running the following:
1. Mongo config server.
2. Mongo router.
3. Application.
Four EC2 servers in total:
First server: running the web application & mongos.
Second server: running the web application & mongos.
Third server: running the first shard with the complete DB (say, for example, Demo).
Fourth server: running the second shard with the complete DB (say, for example, Demo).
Should both mongos instances point to one shard named Shard1?
Yes, you can have multiple mongos instances running against a single shard. Think of the mongos instances as clients for the sharded cluster that have to run as daemon processes in order to keep metadata and heartbeats up to date.
Edit: as for having a complete DB, this is only possible for a single DB. You can have one DB on shard1 and the other DB on shard2, for example, but you can never have a single complete DB on two shards. To achieve the goal of having db1 on shard1 and db2 on shard2, you simply make the respective shard the primary shard of the respective database and don't shard any collection. Please read the docs for the movePrimary command for details.
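A minimal sketch of that, assuming a database named db2 that should live on a shard named shard0001 (adjust both names to your deployment), run from a mongos:

use admin
db.runCommand({ movePrimary: "db2", to: "shard0001" })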
A bit off-topic:
However, running a single config server is strongly advised against, and for good reason. If the single config server goes down or gets corrupted, your cluster will be impossible to use, and recreating the sharded cluster will not be an easy task; it is going to be a lengthy process. So please, use three config servers.
I'm trying to figure out how different instances of mongos server work together.
If I have 1 configserver and some shards, for example four, each of them composed of only one node (a master, of course), and four mongos servers... do the mongos servers communicate with each other? Is it possible for one mongos to redirect its load to another mongos?
When you have multiple mongos instances, they do not automatically load-balance between each other. They don't even know about each other's existence.
The MongoDB drivers for most programming languages allow you to specify multiple mongos instances when creating a connection. In that case the driver will usually ping all of them and connect to the one with the lowest latency. This will usually be the one which is closest geographically. When all have the same network distance, the one which is least busy right now will usually respond first. The driver will then stay connected to that one mongos, unless the program explicitly reconnects or the mongos can no longer be reached (in that case the driver will usually automatically pick another one from the initial list).
That means using multiple mongos instances is normally only a valid method for scaling when you have a large number of low-load clients, not one high-load client. When you want your one high-load client to make use of many mongos instances, you need to implement this yourself by creating a separate connection to each mongos instance and implementing your own mechanism to distribute queries among them.
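For reference, the seed list of mongos instances mentioned above goes straight into the connection string; with hypothetical hostnames it looks like this:

mongodb://mongos1.example.net:27017,mongos2.example.net:27017,mongos3.example.net:27017/mydb

The driver then picks one mongos from that list as described above and fails over to another if the current connection is lost.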
Short answer
As of MongoDB 2.4, the mongos servers only provide a routing service to direct read/write queries to the appropriate shard(s). The mongos servers discover the configuration for your sharded cluster via the config servers. You can find out more details in the MongoDB documentation: Sharded Cluster Query Routing.
Longer scoop
I'm trying to figure out how different instances of mongos server work together.
The mongos servers do not currently talk directly to each other. They do coordinate some activity via your config servers:
reading the sharded cluster metadata
initiating a balancing round (any mongos can start a balancing round, but only one round can be active at a time; see the shell example below)
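You can observe that coordination from any mongos in the shell, since the balancer state and lock live in the config database:

sh.getBalancerState()
sh.isBalancerRunning()

The first reports whether the balancer is enabled, the second whether a balancing round is active right now; every mongos reports the same values.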
If I have 1 configserver
You should always have 3 config servers in production. If you somehow lose or corrupt your config server, you will have to combine your data and re-shard your database(s). The sharded cluster metadata saved on the config servers is the definitive source for what sharded data ranges should live on each shard.
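You can inspect that metadata yourself by connecting to a mongos and querying the config database (the namespace below is just an example):

use config
db.chunks.find({ ns: "mydb.mycollection" }).limit(5)

Each document describes one chunk range and the shard it should live on.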
some shards, for example four, each of them composed of only one node (a master, of course)
Ideally each shard should be backed by a replica set if you want optimal uptime. Replica sets provide for auto-failover and can be very useful for administrative purposes (for example, taking backups or adding indexes offline).
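For example, adding a shard that is backed by a replica set (the replica set name and hostnames here are hypothetical) is done from a mongos with:

sh.addShard("rs1/node1.example.net:27018,node2.example.net:27018,node3.example.net:27018")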
Is it possible for one mongos to redirect its load to another mongos?
No, the mongos do not perform any load balancing. The typical recommendation is to deploy one mongos per app server.
From an application/driver point of view you can specify multiple mongos in your connect string for failover purposes. The application drivers will generally connect to the nearest available mongos (by network ping time), and attempt to reconnect in the event the current mongos connection fails.
A MongoDB instance can have different roles:
Config server
Router (mongos)
Data server
Arbiter server (for replica sets)
I know that db.serverStatus() can be used to see if an instance is a router: the process value is mongos.
But for config servers, arbiters and data nodes the process value is mongod.
Is there a simple way of distinguishing between these instance types?
I want to bring attention to one particularly important issue with this question: sharding is a horizontal dimension (several replica sets across which the data is distributed), while a replica set is a high-availability solution composed of different mongod nodes!
So what you are actually trying to figure out is:
Replica set node roles
Shard node members
In the case of a replica set, what you might be interested in knowing is each node's role. You can easily get this information without needing to connect to all the nodes of the replica set; just run the command:
db.isMaster()
With this you will get the replica set members and the role of each member.
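The relevant fields of the output look roughly like this (abridged, with illustrative values):

{
  "setName" : "rs0",
  "ismaster" : false,
  "secondary" : true,
  "hosts" : [ "node1.example.net:27017", "node2.example.net:27017" ],
  "arbiters" : [ "node3.example.net:27017" ],
  "primary" : "node1.example.net:27017",
  "ok" : 1
}

An arbiter additionally reports arbiterOnly: true, and when you run the same command against a mongos it answers with msg: "isdbgrid", which is another quick way to tell the roles apart.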
For shard node members, first of all, you should never try to connect directly to the config servers. These are there to manage the distribution of chunks, chunk splits and other configuration data, relevant only for the sharded cluster's functionality. Avoid using those IPs to connect to from your application.
So if you want a clear view of which members compose your sharded cluster, how many shards you have, etc., you need to run the command:
db.printShardStatus()
or
sh.status()
Please review the documentation here
Cheers,
N.
The rule of thumb is to have the mongos process running on each of your application servers. This keeps your application talking to localhost, which is fast, and your mongos processes scale with your app.
Say we have 2 distinct mongo clusters (sharded), is it possible to configure one mongos process to talk to two different clusters? It would be awesome to abstract away the fact that the databases lived in different places.
Or would you have to launch two different mongos processes on different ports? If this IS possible I still worry that it might be dangerous having two different mongos processes fighting for resources.
Or something completely different? Ideas?
Each mongos belongs to one, and only one, cluster (defined by its config DB servers). The mongos processes don't use many resources; you can run multiple on a machine.
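So for two clusters you would launch two mongos processes on the same machine, each on its own port and pointed at its own cluster's config servers (hostnames and ports below are only placeholders):

mongos --configdb cfg1a.example.net:27019,cfg1b.example.net:27019,cfg1c.example.net:27019 --port 27017
mongos --configdb cfg2a.example.net:27019,cfg2b.example.net:27019,cfg2c.example.net:27019 --port 27018

Your application then simply opens one connection per port, i.e. one per cluster.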
You can have more than one sharded db/collection per cluster.