Cannot upgrade sharded MongoDB or stop the balancer - mongodb

mongos is not running to begin with. When I try to start mongos, I see the following log:
Fri Mar 22 17:43:13.383 [mongosMain] ERROR: error upgrading config
database to v4 :: caused by :: newer version 4 of mongo config
metadata is required, current version is 3, need to run mongos with
--upgrade
But with the --upgrade parameter, I see the following log:
Fri Mar 22 17:43:39.273 [mongosMain] ERROR: error upgrading config
database to v4 :: caused by :: balancer must be stopped for config
upgrade
Now the problem is: I cannot stop the balancer with sh.stopBalancer() because I cannot start mongos. It looks like a deadlock to me. Please help.

I found the problem. I should connect to port 27019 on a config server (configsvr). That way I don't need to start mongos; sh.stopBalancer() can be executed directly in the mongo shell.
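A minimal sketch of that session, assuming the config server listens on the conventional port 27019 (replace the host with your own):
mongo --host config_server_host --port 27019
> sh.stopBalancer()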

I had the same problem just now. I updated my mongo database from version 2.0.4 to 2.4.3.
I could not connect to mongos because the config server needed to be upgraded. However, I could not stop the balancer with stopBalancer() because my mongos was inactive. I did not find another solution on Stack Overflow, and I tried many times.
My solution is:
1. ssh to the config server;
2. use the config database (you may need to authenticate);
3. run db.settings.update( { _id: "balancer" }, { $set : { stopped: true } } , true ) to stop the balancer;
4. now I can run mongos with the --upgrade option.

Assuming you followed the recommendation to always run three config servers, I would try these steps:
Make sure all other mongos clients are stopped. If no mongos is running, no balancer should be holding a lock either.
If you still get the error (after being certain that no mongos is connected to the config servers), I would stop all config servers but one and clear any remains of balancer locks in the config database. After a successful attempt with this config server, I would resync the other two from it. If it was not successful, you still have two other copies.
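If a stale balancer lock is what blocks you, it lives in the locks collection of the config database. A hedged sketch of inspecting and releasing it, run against a config server with every mongos stopped (state: 0 marks the lock as free in this era's lock format):
mongo --host config_server_host --port 27019
> use config
> db.locks.find({ _id: "balancer" })
> db.locks.update({ _id: "balancer" }, { $set: { state: 0 } })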

I had the same problem. The solution for me was to connect to the config server with the mongo shell
mongo --host ip_of_config_server_host --port 27019
and turn the balancer off from there with
sh.setBalancerState(false)
After this I could do the config server upgrade with
mongos --port 27017 --configdb ip_of_config_server_host --upgrade

How to shut down mongos server?

I found out that mongod has a --shutdown option to cleanly shut down a server; what is the corresponding command for a mongos server?
The only way I found was to look up the PID of the server and kill -9 it, but that does not seem like the smartest way to do it.
Using MongoDB version 3.0, by the way.
Try the following steps (a sample session is sketched below):
1. log in to mongos;
2. switch to the admin database;
3. run db.shutdownServer().
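A sample session for those steps, assuming mongos listens on the default 27017:
mongo --port 27017
> use admin
> db.shutdownServer()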

Setting up config servers for the mongo query router

I am using mongo v3.2 - I already have the config replica set as well as two shard replica sets running. When I try to launch the mongos (query router) with the test config file setting below, I get the error copied below - any ideas on how to fix this?
sharding:
  configDB: config-set0/127.0.0.1:27019,127.0.0.1:27020,127.0.0.1:27021
The error:
Unrecognized option: sharding.configDB
I can see this setting in the mongodb docs at the URL below:
https://docs.mongodb.com/manual/tutorial/deploy-shard-cluster/
Ensure that the process is being launched as mongos and not mongod if the intent is to run the process as a query router (and not a config server or a shard).
Use this command to start mongos:
sudo mongos -f <-- location of conf file -->
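For reference, a minimal mongos config file under those assumptions (the file path is hypothetical; the point is that sharding.configDB is recognized by mongos but not by mongod):
# /etc/mongos.conf (hypothetical path)
sharding:
  configDB: config-set0/127.0.0.1:27019,127.0.0.1:27020,127.0.0.1:27021
net:
  port: 27017
Then start it with sudo mongos -f /etc/mongos.conf.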

Mongo: network error while attempting to run command 'whatsmyuri' on host

I have been trying to access my mongo instance from another machine, but I get this error. I could not find many references to this whatsmyuri error. This is what I get from the external machine:
$ mongo <IP_ADDRESS>:27017/youtube_advertising -u user -p password
MongoDB shell version: 3.2.0
connecting to: <IP_ADDRESS>:27017/youtube_advertising
2016-02-19T17:10:02.923+0100 E QUERY [thread1] Error: network error while attempting to run command 'whatsmyuri' on host '<IP_ADDRESS>:27017' :
connect#src/mongo/shell/mongo.js:226:14
#(connect):1:6
exception: connect failed
I have already changed the /etc/mongod.conf file, opened connections through port 27017 (with iptables) and restarted mongo. I am able to connect via ssh to that machine.
Searching about this whatsmyuri, I ran this command on mongo:
> db.runCommand( { whatsmyuri: 1 } )
{ "you" : "127.0.0.1:36990", "ok" : 1 }
I do not know if that 36990 port is right or wrong. Just in case, I opened connections from that port too, but still nothing.
Any ideas?
UPDATE
Checking the /var/log/mongodb/mongod.log, this is what I get when I try to connect from remote:
2016-02-19T10:41:07.292-0600 I NETWORK [initandlisten] connection accepted from <EXT_IP_ADDRESS>:51800 #2 (1 connection now open)
2016-02-19T10:41:07.310-0600 I QUERY [conn2] operation isn't supported: 2010
2016-02-19T10:41:07.310-0600 I - [conn2] Assertion: 16141:cannot translate opcode 2010
Check your versions. That may help.
I was having the same problem. In my case, the server was version 3.2.0-rc2, while mongo shell version was 3.2.1.
Upgrading the server to 3.2.1 fixed the problem.
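A quick way to compare the two versions (run the first on the client machine, the second on the server):
mongo --version
mongod --version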
This issue bit me when I was running two versions (3.4 and 4.2) of MongoDB on the same Windows 10 machine. I ran v3.4 mongod with no problems mentioned in the console output, but then running the v3.4 mongo shell produced the above error. Checking the Task Manager, it turned out there was a MongoDB process (I'm not sure, but I think it was for v4.2) running. After ending that process through the Task Manager, the v3.4 mongo shell ran fine with no error.
Using mongodb-community-shell
On macOS: brew install mongodb/brew/mongodb-community-shell
It will ask you to overwrite the link for the mongo command:
brew link --overwrite mongodb-community-shell
The issue arises when the mongo client's and server's versions mismatch.
Uninstall MongoDB from the client, then follow the detailed installation instructions provided here:
https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/
The key step, which needs to be done with attention, is pinning every package to the server's version:
sudo apt-get install -y mongodb-org=4.2.7 mongodb-org-server=4.2.7 mongodb-org-shell=4.2.7 mongodb-org-mongos=4.2.7 mongodb-org-tools=4.2.7
Note: in my case, the mongo version of the server was 4.2.7.
Alternatively, take the necessary backups, uninstall MongoDB from both systems, and install it again.

mongodb replicaset auth not working

I have a problem with replica sets.
After I add the keyFile path to mongodb.conf I cannot connect. This is my mongodb.conf:
logpath=/path/to/log
logappend=true
replSet = rsname
fork = true
keyFile = /path/to/key
And this is what is showed in the command line:
XXXX@XXXX:/etc$ sudo service mongodb restart
stop: Unknown instance:
mongodb start/running, process 10540
XXXX@XXXX:/etc$ mongo
MongoDB shell version: 2.4.6
connecting to: test
Mon Sep 30 18:44:20.984 Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:145
exception: connect failed
XXXX@XXXX:/etc$
If I comment out the keyFile line in mongodb.conf it works fine.
I solved the problem.
It was related to the key file permissions. I fixed the permissions and ownership and it works like a charm.
As the root user I did:
$ chmod 700 keyfile
$ chown mongodb:mongodb keyfile
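If you also need to (re)generate the keyfile, the pattern from the MongoDB docs is a base64 random key plus the ownership and permission fixes above (the path is a placeholder):
openssl rand -base64 741 > /path/to/key
chmod 600 /path/to/key
chown mongodb:mongodb /path/to/key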
If authentication were the problem you would get a different message (and you would still be able to start the shell without an authenticated session; it would just prevent you from running most of the commands).
This one looks more like a socket error: at the address you are trying to connect to, there is no service listening. You can check with netstat whether any process is listening on the ip:port shown in the message. I assume the mongod process has not started, which can happen for several reasons; check the logs for the actual one. One possibility is that the keyfile does not exist at the specified path or does not have the appropriate permissions set.
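Two quick checks along those lines (the log path is an assumption; check your config):
netstat -anp | grep 27017
tail -n 50 /var/log/mongodb/mongodb.log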
Adding a keyfile automatically turns on the auth option too. This means you have to authenticate as a user, but you can bypass this authentication with the localhost exception. Read the documentation.
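For the 2.4-era shell in this question, a hedged sketch of using the localhost exception to create the first admin user (user name and password are placeholders; run the shell on the same host as mongod):
mongo --port 27017
> use admin
> db.addUser({ user: "admin", pwd: "strongpassword", roles: [ "userAdminAnyDatabase" ] })
> db.auth("admin", "strongpassword")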

Issue with persistent mongoDB data on EC2

I'm trying to store data in a MongoDB database on Amazon EC2. I'm using StarCluster to configure and start the EC2 instance. I have an EBS volume mounted at /root/data. I installed MongoDB following the instructions here. When I log in to the EC2 instance I am able to type mongo, which brings me to the mongo shell with the test database. I have then added some data to a database, let's say database1, with various collections in it. When I exit the EC2 instance and terminate it, using starcluster terminate mycluster, and then create a new, different instance, the database1 data is no longer shown in the mongo shell.
I have tried changing the dbpath in the /etc/mongodb.conf file to /root/data/mongodb, which is on the EBS volume, and then stopping and starting the mongodb service using sudo service mongodb stop and sudo service mongodb start. I then try mongo again and receive
MongoDB shell version: 2.2.2
connecting to: test
Sat Jan 19 21:27:42 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91
exception: connect failed
An additional issue is that whenever I terminate the EC2 instance any changes I made to the config file disappear.
So my basic question is: how do I change where MongoDB stores its data on EC2 so that the data remains when I terminate one EC2 instance and then start another?
Edit:
In response to the first answer:
The directory does exist
I changed the owner to mongodb
I then issued the command sudo service mongodb stop
Checked to see if the port is released using netstat -anp | grep 27017. There was no output.
Restarted mongodb using sudo service mongodb start
Checked for port 27017 again and receive no output.
Tried to connect to the mongo shell and received the same error message.
Changed the mongodb.conf back to the original settings, restarted mongodb as in the above steps, and tried to connect again. Same error.
The EBS volume is configured in the starcluster config to be reattached on each startup.
For the "connect failed" after you change /etc/mongodb.conf problem, you can check the log file specified in the /etc/mongodb.conf (probably at /var/log/mongodb/mongodb.log):
Check that the directory specified by dbpath exists.
Make sure it is writable by the "mongodb" user. Perhaps it's best to chown to mongodb.
Make sure mongod actually released the 27017 port before starting it using: netstat -anp | grep 27017
Wait a couple seconds for mongod to restart before launching mongo.
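A sketch of those fixes for this question's setup (paths taken from the question; the service name may differ on your distribution):
sudo mkdir -p /root/data/mongodb
sudo chown -R mongodb:mongodb /root/data/mongodb
# then set dbpath=/root/data/mongodb in /etc/mongodb.conf
sudo service mongodb restart
tail -n 50 /var/log/mongodb/mongodb.log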
It's not clear from your question whether you are using StarCluster EBS volumes for persistent storage. Note that ordinary EBS volumes do not automatically persist and reattach when you terminate an instance and start another; you would need to attach and mount them manually.
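Manual attach-and-mount looks roughly like this; the device name /dev/xvdf is an assumption, so check lsblk or the AWS console for the real one:
sudo mkdir -p /root/data
sudo mount /dev/xvdf /root/data
# to remount automatically on reboot, add a line like this to /etc/fstab:
# /dev/xvdf  /root/data  ext4  defaults,nofail  0  2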
Once you get that working you'll probably want to create a custom StarCluster AMI with mongo properly installed and /etc/mongodb.conf appropriately modified.