Adding New Node to an Existing Mongo Replica with Authentication

I have an existing replica set with authentication configured, and I want to add a new node to it.
I now use the following flow to do so:
Start the new node as a standalone with authentication enabled and the same keyfile as all other nodes
Create an admin user locally (with the exact same credentials as on the other nodes in the replica set)
Stop the node
Start it again with the replica set settings
Add it to the replica set
But this feels awkward: having to recreate the user(s) for every node. Is there no other way, i.e. to just specify the keyfile and connect the new node to the replica set?
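For reference, the flow above corresponds roughly to the commands below (a sketch only; the keyfile path, credentials, host names, and replica set name rs0 are placeholders):

# 1. Start the new node as a standalone with auth via the shared keyfile
mongod --port 27017 --dbpath /data/db --keyFile /etc/mongodb/keyfile --fork --logpath /var/log/mongod.log

# 2. Create the admin user locally, with the same credentials as the rest of the set
#    (the localhost exception allows this first user to be created)
mongosh --port 27017 --eval 'db.getSiblingDB("admin").createUser({user: "admin", pwd: "secret", roles: [{role: "root", db: "admin"}]})'

# 3. Stop the node
mongosh --port 27017 -u admin -p secret --authenticationDatabase admin --eval 'db.getSiblingDB("admin").shutdownServer()'

# 4. Start it again with the replica set settings
mongod --port 27017 --dbpath /data/db --keyFile /etc/mongodb/keyfile --replSet rs0 --fork --logpath /var/log/mongod.log

# 5. From the current primary, add the new member
mongosh "mongodb://admin:secret@primary-host:27017/admin" --eval 'rs.add("new-host:27017")'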
Thanks

Related

Jump box to MongoDB Atlas with VPC Peering

I have a MongoDB Atlas database which is set up with VPC peering to a VPC in AWS. This works fine and I'm able to access it from inside the VPC. I was, however, hoping to provide a jumpbox so that developers could use an SSH tunnel to connect to the Atlas database from their workstations outside of the VPC.
Developer workstation --> SSH Tunnel to box in VPC --> Atlas
I'm having trouble with that, however, because I'm not sure what tunnel I need to set up. It looks to me like Mongo connects by looking up replica information in a DNS seed list (mongodb+srv://), so it isn't as simple as doing
ssh user@jumpbox -L 27017:env.somehost.mongodb.net:27017
Is there a way to enable direct connections on Atlas so that I can enable developers to access this database through an SSH tunnel?
For a replica set connection this isn't going to work with just MongoDB and a driver, but you can try running a proxy like https://github.com/coinbase/mongobetween on the jumpbox.
For standalone deployments you can connect through tunnels, since the driver uses the address you supply and that's the end of it. Use the directConnection URI option to force a standalone connection to a node of any deployment. While this allows you to connect to any node, you have to connect to the right node for replica sets (you can't write to secondaries), so this approach has limited utility for replica set deployments.
For mongos deployments that are not on Atlas, the standalone behavior applies. With Atlas there are SRV records published which the driver follows, so for tunneling purposes an Atlas sharded cluster behaves like a replica set and you can't trivially proxy connections to it. mongobetween may also work in this case.
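As a minimal sketch of the standalone/directConnection approach described above (the host names, credentials, and TLS relaxation are assumptions, not from the answer):

# Forward a local port through the jumpbox to one specific Atlas node
ssh -N -L 27017:env-shard-00-00.somehost.mongodb.net:27017 user@jumpbox

# In another terminal, connect to the forwarded port. directConnection stops
# the driver from discovering and dialing the other members; because the node
# is reached as "localhost", TLS hostname checks may need to be relaxed.
mongosh "mongodb://dbuser:dbpass@localhost:27017/test?authSource=admin&tls=true&directConnection=true" --tlsAllowInvalidHostnames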

Tarantool master-master replication

I have a cluster with two nodes in a replica set (a master and a replica).
How should I configure replication master-master?
I tried changing the read_only flag on the replica, but it doesn't work.
You should use the all_rw replica set parameter; it is available via both the UI and the API.

How do you create a mongodb replica set or shard that is externally available in kubernetes?

I have followed tutorials and set up working MongoDB replica sets, however when it comes to exposing them as a service I am stuck with using a LoadBalancer which directs to any pod. In most cases this ends up being a secondary database and is not terribly helpful. I have also managed to get separate MongoDB replicas set up and then tried to connect to those externally, however connections fail because the internal replica set IPs all resolve through local Google Cloud DNS.
What I am hoping for is something like this.
Then (potentially) there would be a single connection URI that could connect you to your MongoDB replica set without needing individual MongoDB connection details.
I'm not sure if this is possible but any help is greatly appreciated!
A LoadBalancer-type service will route traffic to any one pod that matches its selector, which is not how a MongoDB replica set works. The connection string should contain all instances in the set. You probably need to expose each replica instance with type=LoadBalancer. Then you may connect via "mongodb://mongo-0_IP,mongo-1_IP,mongo-2_IP:27017/dbname"
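A sketch of that per-instance LoadBalancer approach (the service names and replica set name are assumptions, not from the answer):

# Give every member its own externally reachable address
kubectl expose pod mongo-0 --type=LoadBalancer --port=27017 --name=mongo-0-external
kubectl expose pod mongo-1 --type=LoadBalancer --port=27017 --name=mongo-1-external
kubectl expose pod mongo-2 --type=LoadBalancer --port=27017 --name=mongo-2-external

# After the external IPs are assigned (kubectl get svc), list them all
# (the replica set config must also advertise these external addresses
# for driver discovery to succeed):
mongosh "mongodb://EXTERNAL_IP_0:27017,EXTERNAL_IP_1:27017,EXTERNAL_IP_2:27017/dbname?replicaSet=rs0"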
If you configure a MongoDB replica set with StatefulSets, you should also create a headless service. Then you can connect to the replica set with a URL like:
"mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname"
Here mongo-0, mongo-1, mongo-2 are the pod names and "mongo" is the headless service name.
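As a sketch of the above (assuming the StatefulSet pods carry the label app=mongo and the replica set is named rs0; both are assumptions):

# Create a headless service named "mongo" (clusterIP: None) selecting app=mongo
kubectl create service clusterip mongo --clusterip="None" --tcp=27017:27017

# From inside the cluster, members are then addressable as <pod>.<service>
mongosh "mongodb://mongo-0.mongo:27017,mongo-1.mongo:27017,mongo-2.mongo:27017/dbname?replicaSet=rs0"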
If you still want to be able to connect to a specific mongo instance, you can create a separate service (of type=NodePort) for each deployment/replica, and then you should be able to connect to a specific mongo instance using <any-node-ip>:<nodeport>.
But you'll not be able to leverage the advantages of having a mongo replica set in this case.
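A sketch of that NodePort variant (names and the placeholder IP/port are assumptions):

# One NodePort service per member, then connect to a single instance directly
kubectl expose pod mongo-0 --type=NodePort --port=27017 --name=mongo-0-nodeport
kubectl get svc mongo-0-nodeport -o jsonpath='{.spec.ports[0].nodePort}'
mongosh "mongodb://ANY_NODE_IP:NODEPORT/dbname?directConnection=true"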

MongoDB replica set in Azure, where do I point the firewall?

I have a MongoDB replica set in Azure.
I have:
server1 Primary
server2 Secondary
server3 Arbiter
I have a dev environment on my local machine that I want to point to this MongoDB instance.
What do I open on my Azure firewall to make sure this configuration is set up following best practices?
Do I create a load balanced endpoint to the Primary and Secondary or do I create a single endpoint to the arbiter, or perhaps even something else?
thanks!
MongoDB will not play well with a load-balanced endpoint (as you might end up sending traffic to a secondary, and you'd have no control over this unless you implemented a custom probe for each VM, and then you'd need to update the probe's status based on the replicaset node's health, for each node). The MongoDB client-side driver is designed to work with a replicaset's topology to make the correct decision on which node to communicate with. Each replicaset node should have a discrete addressable ip:port. If you have all your instances in a single cloud service (e.g. myservice.cloudapp.net) then you'll need one port per instance (since they'd all share a single ip address). If each instance is in a different cloud service, then you can have the same port for each, with different dns name / ip address for each.
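For illustration, the single-cloud-service case ("one port per instance") might look like this; the service name, public ports, and replica set name are placeholders, not from the answer above:

# Azure endpoints (public port -> VM port), one per replica set member:
#   myservice.cloudapp.net:27017 -> server1:27017 (Primary)
#   myservice.cloudapp.net:27018 -> server2:27017 (Secondary)
#   myservice.cloudapp.net:27019 -> server3:27017 (Arbiter)

# The dev machine then lists the data-bearing members explicitly
# (the replica set config must advertise these same externally
# resolvable host:port pairs for driver discovery to succeed):
mongo "mongodb://myservice.cloudapp.net:27017,myservice.cloudapp.net:27018/mydb?replicaSet=rs0"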
An alternative is to use iptables and open the port on each node only to your own IP address with an allow rule; that way the setup stays both reachable and secure.

MongoDB MMS for cluster with keyFile auth

I have a sharded & replicated MongoDB cluster which uses keyFile auth.
I am trying to configure the MongoDB MMS Agent to communicate with all of the cluster members.
I've tried installing MMS on every cluster member and informing mms.10gen.com of each cluster member's IP and port. The agent reports that it is unauthorized and I get no data.
It appears that MMS does not support keyFile auth, but is this not the standard production cluster setup?
How can I set up MMS for this kind of cluster?
I posted this on the 10gen-mms mailing list and found the answer.
keyFile authentication is meant only for intra-cluster communication and communication between MongoS instances and the cluster.
Specifying keyFile auth means that you can access the cluster without a username/password via MongoS (with keyFile) if no users exist.
If a user is created, then user auth is additionally required.
You can create a user locally on each of the MongoD instances and use that to connect directly to them without a keyFile.
So the solution was to create users for MMS and my front-end service and to use user authentication.
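For example, a local monitoring user might be created like this (a sketch; the username, password, and role are assumptions, and on the MongoDB versions contemporary with MMS the command was db.addUser rather than db.createUser):

# Run against each mongod member directly (the port is a placeholder)
mongo --port 27017 admin --eval 'db.createUser({user: "mms-agent", pwd: "secret", roles: [{role: "clusterMonitor", db: "admin"}]})'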