MongoDB sharding gives a "config servers not in sync" error?

I have created three instances of MongoDB on localhost, each on a different port:
mongod.exe --configsvr --dbpath C:\MongoDB2.6\mongodb\data --port 2021
mongod.exe --configsvr --dbpath D:\mongodb\data --port 2022
mongod.exe --configsvr --dbpath E:\mongodb\data --port 2023
All three instances start successfully.
Now I am using the mongos command to cluster these three instances:
mongos.exe --configdb 127.0.0.1:2021,127.0.0.1:2022,127.0.0.1:2023
But it gives me this error:
C:\MongoDB2.6\bin>mongos.exe --configdb 127.0.0.1:2021,127.0.0.1:2022,127.0.0.1:2023 --port 2025
2015-02-26T15:51:22.451+0530 [mongosMain] MongoS version 2.6.7 starting: pid=12256 port=2025 64-bit host=triconnode112 (--help for usage)
2015-02-26T15:51:22.451+0530 [mongosMain] db version v2.6.7
2015-02-26T15:51:22.451+0530 [mongosMain] git version: a7d57ad27c382de82e9cb93bf983a80fd9ac9899
2015-02-26T15:51:22.451+0530 [mongosMain] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
2015-02-26T15:51:22.451+0530 [mongosMain] allocator: system
2015-02-26T15:51:22.466+0530 [mongosMain] options: { net: { port: 2025 }, sharding: { configDB: "127.0.0.1:2021,127.0.0.1:2022,127.0.0.1:2023" } }
2015-02-26T15:51:22.482+0530 [mongosMain] warning: config servers 127.0.0.1:2021 and 127.0.0.1:2022 differ
2015-02-26T15:51:22.482+0530 [mongosMain] warning: config servers 127.0.0.1:2021 and 127.0.0.1:2022 differ
2015-02-26T15:51:22.482+0530 [mongosMain] warning: config servers 127.0.0.1:2021 and 127.0.0.1:2022 differ
2015-02-26T15:51:22.498+0530 [mongosMain] warning: config servers 127.0.0.1:2021 and 127.0.0.1:2022 differ
2015-02-26T15:51:22.498+0530 [mongosMain] ERROR: could not verify that config servers are in sync :: caused by :: config servers 127.0.0.1:2021 and 127.0.0.1:2022 differ: { chunks: "d41d8cd98f00b204e9800998ecf8427e", shards: "d41d8cd98f00b204e9800998ecf8427e", version: "9c051057927f3ebf9b0ee68b4b0ff78d" } vs { chunks: "d41d8cd98f00b204e9800998ecf8427e", shards: "d41d8cd98f00b204e9800998ecf8427e", version: "4c0907d023ca1c1216c89d83fcb6a841" }
2015-02-26T15:51:22.498+0530 [mongosMain] configServer connection startup check failed
I don't see where my mistake is. In this example I am trying to implement sharding for MongoDB.
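The repeated "config servers ... differ" warnings mean the three config servers hold diverging metadata (note the different version hashes in the error). For a disposable test cluster like this one, a common sketch is to stop mongos and all three config servers, empty each config dbpath so all three restart from the same blank state, then start everything again. The Git-Bash-style paths below are assumed translations of the Windows directories above:

```shell
# Assumed Git-Bash-style paths for the Windows config dbpaths above.
# WARNING: this discards all cluster metadata -- test clusters only.
for d in /c/MongoDB2.6/mongodb/data /d/mongodb/data /e/mongodb/data; do
    rm -rf "${d:?}"/*   # empty each config server's data directory
done
# Then restart the three config servers and mongos.
```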
I was then able to solve this problem and tried to execute the shard command:
sh.addShard("127.0.0.1:2021,127.0.0.1:2022,127.0.0.1:2023")
Now it gives me this error:
{
"ok" : 0,
"errmsg" : "can't use sync cluster as a shard. for replica set, have to use <setname>/<server1>,<server2>,..."
}
I have no idea why this error appears.
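The error message points at the cause: the addresses passed to sh.addShard here are the config servers, but shards must be separate, data-bearing mongod instances, added either individually or as a replica set in the <setname>/<members> form the message asks for. A hedged sketch (the set name rs0, ports, and dbpaths are assumptions, not from the question; run each mongod in its own terminal):

```shell
# Hypothetical shard members (NOT the config servers from above).
SHARD_RS="rs0"
SHARD_HOSTS="127.0.0.1:27031,127.0.0.1:27032"

# Start two data-bearing mongod's as one replica set (paths assumed):
mongod.exe --replSet "$SHARD_RS" --dbpath C:/shard1 --port 27031
mongod.exe --replSet "$SHARD_RS" --dbpath C:/shard2 --port 27032

# Initiate the set, then register it with mongos (port 2025 from the
# question) in the setname/host,host,... form:
mongo --port 27031 --eval 'rs.initiate()'
mongo --port 2025 --eval "sh.addShard(\"${SHARD_RS}/${SHARD_HOSTS}\")"
```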

Upgrade of metadata is not happening in 2 nodes (mongo 2.6 to mongo 3.0)

First of all, thanks for taking the time to read this question!
I'm testing an upgrade from mongo 2.4 to version 3 (I have upgraded to 2.6 successfully). The issue appears one step before upgrading the binaries: I want to validate the metadata for each node of my replica set, which is sharded, as described in the "upgrade sharded cluster" docs.
I have disabled the balancer, and it seems to have been disabled successfully:
mongos> sh.getBalancerState()
false
Sharding status:
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5eebbde828374ced0f5e432c")
  }
  shards:
    { "_id" : "test_rs1", "host" : "test_rs1/mongodb01:27017,mongodb02:27017,mongodb03:27017" }
  balancer:
    Currently enabled: no
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : false, "primary" : "test_rs1" }
As you can see, the balancer is currently not running. Anyway, when I try the metadata upgrade, it succeeds for node 01 but not for the other two:
[root@mongodb01 tmp]# ./mongos30Up --configdb mongodb02 --upgrade
2020-06-18T21:08:49.706+0000 W SHARDING running with 1 config server should be done only for testing purposes and is not recommended for production
2020-06-18T21:08:49.708+0000 I CONTROL ** WARNING: You are running this process as the root user, which is not recommended.
2020-06-18T21:08:49.708+0000 I CONTROL
2020-06-18T21:08:49.708+0000 I SHARDING [mongosMain] MongoS version 3.0.15 starting: pid=11224 port=27017 64-bit host=mongodb01 (--help for usage)
2020-06-18T21:08:49.708+0000 I CONTROL [mongosMain] db version v3.0.15
2020-06-18T21:08:49.708+0000 I CONTROL [mongosMain] git version: b8ff507269c382bc100fc52f75f48d54cd42ec3b
2020-06-18T21:08:49.708+0000 I CONTROL [mongosMain] build info: Linux ip-10-35-227-214 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49
2020-06-18T21:08:49.708+0000 I CONTROL [mongosMain] allocator: tcmalloc
2020-06-18T21:08:49.708+0000 I CONTROL [mongosMain] options: { sharding: { configDB: "mongodb02" }, upgrade: true }
2020-06-18T21:08:49.714+0000 E - [mongosMain] error upgrading config database to v6 :: caused by :: balancer must be stopped for config upgrade
As you can see, it is not working because of the balancer! The same happens for node 03...
Any clue?
EDIT: (Added bash script)
#!/bin/bash
# Run the metadata upgrade for all config DBs.
# VARS
MONGOS_BIN='./mongos30Up'
MONGO1='mongodb01'
MONGO2='mongodb02'
MONGO3='mongodb03'
# Give exec permission
chmod u+x "$MONGOS_BIN"
## Start the metadata upgrade.
$MONGOS_BIN --configdb $MONGO1 --upgrade
$MONGOS_BIN --configdb $MONGO2 --upgrade
$MONGOS_BIN --configdb $MONGO3 --upgrade
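One thing worth checking (a hedged guess, not a confirmed fix): with mirrored SCCC config servers, the one-time metadata upgrade is normally run through a single mongos that is handed the full three-member config string, rather than once per server, since pointing --upgrade at a single member at a time can leave the three servers disagreeing about state such as the balancer lock. A sketch of that variant of the script:

```shell
#!/bin/bash
# Variant of the script above: one --upgrade pass against all three
# config servers at once (hostnames from the question, default ports).
MONGOS_BIN='./mongos30Up'
CONFIGDB='mongodb01,mongodb02,mongodb03'

chmod u+x "$MONGOS_BIN"
"$MONGOS_BIN" --configdb "$CONFIGDB" --upgrade
```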

Mongo shell on the local machine is not connecting to the MongoDB server running in Docker

I ran the MongoDB server in Docker and the logs look fine:
2019-04-12T10:39:51.334+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=8a03346e57d7
2019-04-12T10:39:51.335+0000 I CONTROL [initandlisten] db version v3.2.22
2019-04-12T10:39:51.335+0000 I CONTROL [initandlisten] git version: 105acca0d443f9a47c1a5bd608fd7133840a58dd
2019-04-12T10:39:51.335+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2019-04-12T10:39:51.335+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-04-12T10:39:51.335+0000 I CONTROL [initandlisten] modules: none
2019-04-12T10:39:51.335+0000 I CONTROL [initandlisten] build environment:
2019-04-12T10:39:51.335+0000 I CONTROL [initandlisten] distmod: rhel70
2019-04-12T10:39:51.335+0000 I CONTROL [initandlisten] distarch: x86_64
2019-04-12T10:39:51.335+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-04-12T10:39:51.335+0000 I CONTROL [initandlisten] options: { net: { port: 27017 }, processManagement: { pidFilePath: "/var/run/mongodb/mongod.pid" }, storage: { dbPath: "/data/db" } }
2019-04-12T10:39:51.339+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1G,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),verbose=(recovery_progress),
2019-04-12T10:39:51.389+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-04-12T10:39:51.389+0000 I CONTROL [initandlisten]
2019-04-12T10:39:51.406+0000 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2019-04-12T10:39:51.406+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-04-12T10:39:51.406+0000 I NETWORK [initandlisten] waiting for connections on port 27017
But when connecting to the MongoDB server from the local shell, the connection times out:
~/mongodb/bin/mongo --host 172.17.0.2
MongoDB shell version v3.6.11
connecting to: mongodb://172.17.0.2:27017/?gssapiServiceName=mongodb
2019-04-12T16:04:45.750+0530 W NETWORK [thread1] Failed to connect to 172.17.0.2:27017 after 5000ms milliseconds, giving up.
2019-04-12T16:04:45.753+0530 E QUERY [thread1] Error: couldn't connect to server 172.17.0.2:27017, connection attempt failed :
connect#src/mongo/shell/mongo.js:263:13
#(connect):1:6
exception: connect failed
where 172.17.0.2 is the IP obtained by running docker inspect <container-id>.
I was able to connect with the mongo shell inside Docker using the command docker exec -it <container-id> bash.
Here's the Dockerfile for reference.
RUN echo -e "\
[mongodb]\n\
name=MongoDB Repository\n\
baseurl=https://repo.mongodb.org/yum/redhat/7Server/mongodb-org/3.2/x86_64/\n\
gpgcheck=0\n\
enabled=1\n" >> /etc/yum.repos.d/mongodb.repo
# Install mongodb
RUN yum update -y && yum install -y mongodb-org
# Set up directory requirements
RUN mkdir -p /data/db /var/log/mongodb /var/run/mongodb
VOLUME ["/data/db", "/var/log/mongodb"]
# Expose port 27017 from the container to the host
EXPOSE 27017
# Start mongodb
ENTRYPOINT ["/usr/bin/mongod"]
CMD ["--port", "27017", "--dbpath", "/data/db", "--pidfilepath", "/var/run/mongodb/mongod.pid"]
Let me know if I am missing something. I am using macOS.
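On macOS, Docker runs inside a VM, so container IPs like 172.17.0.2 are generally not routable from the host. A sketch of the usual workaround: publish the container port with -p and connect via localhost (the image and container names below are placeholders, not from the question):

```shell
# Publish container port 27017 on the host, then connect via localhost.
# "my-mongo-image" and "mongo-test" are hypothetical names.
docker run -d --name mongo-test -p 27017:27017 my-mongo-image
~/mongodb/bin/mongo --host 127.0.0.1 --port 27017
```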

ERROR: dbpath (/data/db) does not exist, while trying to create a replica set in order to use mongo-connector for Elasticsearch

While trying to create a replica set I get an error that dbpath /data/db does not exist. I am currently running MongoDB in Docker as root, version 2.6.10.
I start MongoDB using the service mongodb start command, after which the mongo shell appears. Log below:
root@5936a72e744f:/dbex# mongod --replSet myDevReplSet
2017-07-31T05:13:30.946+0000 [initandlisten] MongoDB starting : pid=679 port=27017 dbpath=/data/db 64-bit host=5936a72e744f
2017-07-31T05:13:30.947+0000 [initandlisten] db version v2.6.10
2017-07-31T05:13:30.947+0000 [initandlisten] git version: nogitversion
2017-07-31T05:13:30.947+0000 [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
2017-07-31T05:13:30.947+0000 [initandlisten] build info: Linux lgw01-12 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 BOOST_LIB_VERSION=1_58
2017-07-31T05:13:30.948+0000 [initandlisten] allocator: tcmalloc
2017-07-31T05:13:30.949+0000 [initandlisten] options: { replication: { replSet: "myDevReplSet" } }
2017-07-31T05:13:30.950+0000 [initandlisten] exception in initAndListen: 10296
ERROR: dbpath (/data/db) does not exist.
Create this directory or give existing directory in --dbpath.
See http://dochub.mongodb.org/core/startingandstoppingmongo
, terminating
If you take a look at your /etc/mongod.conf you will see:
storage:
dbPath: /data/db
So create the directory /data/db and ensure it has the proper permissions.
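A minimal sketch of that fix (the mongodb user/group name is an assumption; some packages use mongod instead):

```shell
# Create the default dbpath and hand it to the service user.
sudo mkdir -p /data/db
sudo chown -R mongodb:mongodb /data/db   # user/group name assumed
```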
Sometimes the data is stored in /var/lib/mongodb, so to start a replica set use:
mongod --port 27017 --dbpath "/var/lib/mongodb" --replSet rs0
Check the appropriate location and change the dbpath accordingly.

MongoDB Self-signed SSL connection: SSL peer certificate validation failed

I have followed this guide Self-signed SSL connection using PyMongo, by Wan Bachtiar to create three .pem files; server.pem, client.pem and ca.pem.
I am using Ubuntu 16.04 and MongoDB v3.2.11.
The purpose is to secure the MongoDB before opening it to the public internet.
Let's start mongod:
$ mongod --auth --port 27017 --dbpath /data/db1
--sslMode requireSSL --sslPEMKeyFile /etc/ssl/server.pem
--sslCAFile /etc/ssl/ca.pem --sslAllowInvalidHostnames &
Output:
root#tim:/etc/ssl# 2017-01-13T12:58:55.150+0000 I CONTROL [initandlisten] MongoDB starting : pid=19058 port=27017 dbpath=/data/db1 64-bit host=tim
2017-01-13T12:58:55.150+0000 I CONTROL [initandlisten] db version v3.2.11
2017-01-13T12:58:55.151+0000 I CONTROL [initandlisten] git version: 009580ad490190ba33d1c6253ebd8d91808923e4
2017-01-13T12:58:55.151+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
2017-01-13T12:58:55.152+0000 I CONTROL [initandlisten] allocator: tcmalloc
2017-01-13T12:58:55.152+0000 I CONTROL [initandlisten] modules: none
2017-01-13T12:58:55.152+0000 I CONTROL [initandlisten] build environment:
2017-01-13T12:58:55.152+0000 I CONTROL [initandlisten] distmod: ubuntu1604
2017-01-13T12:58:55.152+0000 I CONTROL [initandlisten] distarch: x86_64
2017-01-13T12:58:55.152+0000 I CONTROL [initandlisten] target_arch: x86_64
2017-01-13T12:58:55.153+0000 I CONTROL [initandlisten] options: { net: { port: 27017, ssl: { CAFile: "/etc/ssl/ca.pem", PEMKeyFile: "/etc/ssl/server.pem", allowInvalidHostnames: true, mode: "requireSSL" }
}, security: { authorization: "enabled" }, storage: { dbPath: "/data/db1" } }
2017-01-13T12:58:55.211+0000 I - [initandlisten] Detected data files in /data/db1 created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2017-01-13T12:58:55.212+0000 W - [initandlisten] Detected unclean shutdown - /data/db1/mongod.lock is not empty.
2017-01-13T12:58:55.212+0000 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
2017-01-13T12:58:55.212+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1G,session_max=20000,eviction=(threads_max=4)
,config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2017-01-13T12:58:55.886+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2017-01-13T12:58:55.886+0000 I CONTROL [initandlisten]
2017-01-13T12:58:55.895+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db1/diagnostic.data'
2017-01-13T12:58:55.897+0000 I NETWORK [initandlisten] waiting for connections on port 27017 ssl
2017-01-13T12:58:55.897+0000 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2017-01-13T12:58:56.026+0000 I FTDC [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK
After running the mongod, I start the mongo shell:
$ mongo --port 27017 -u "my username" -p "my password"
--authenticationDatabase "" --ssl --sslPEMKeyFile /etc/ssl/client.pem
--sslCAFile /etc/ssl/ca.pem --host tim
The output is similar to the one in the question by Marshall Farrier; let's have a look:
MongoDB shell version: 3.2.11
connecting to: 127.0.0.1:27017/datatest
2017-01-13T12:35:58.247+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:38902 #8 (1 connection now open)
2017-01-13T12:35:58.259+0000 E NETWORK [thread1] SSL peer certificate validation failed: self signed certificate
2017-01-13T12:35:58.259+0000 E QUERY [thread1] Error: socket exception [CONNECT_ERROR] for SSL peer certificate validation failed: self signed certificate :
connect#src/mongo/shell/mongo.js:231:14
#(connect):1:6
2017-01-13T12:35:58.263+0000 E NETWORK [conn8] SSL peer certificate validation failed: self signed certificate
2017-01-13T12:35:58.263+0000 I NETWORK [conn8] end connection 127.0.0.1:38902 (0 connections now open)
What am I doing wrong?
After some searching, it seems this error occurs because the certificate's Common Name (CN) did not match the hostname.
From digitalocean:
Whenever you generate a CSR, you will be prompted to provide information regarding the certificate. This information is known as a Distinguished Name (DN). An important field in the DN is the Common Name (CN), which should be the exact Fully Qualified Domain Name (FQDN) of the host that you intend to use the certificate with.
Also from MongoDB documentation:
If your MongoDB deployment uses SSL, you must also specify the --host option. mongo verifies that the hostname of the mongod or mongos to which you are connecting matches the CN or SAN of the mongod or mongos‘s --sslPEMKeyFile certificate. If the hostname does not match the CN/SAN, mongo will fail to connect.
SOLUTION:
I regenerated the keys, replaced localhost with any other hostname in the CN = <hostname> and completed the guide by Wan Bachtiar.
Running the following command after completion worked:
$ mongo --port 27017 -u '<_username_>' -p '<_password_>'
--authenticationDatabase "<_my db_>" --ssl --sslPEMKeyFile
/etc/ssl/client.pem --sslCAFile /etc/ssl/ca.pem --host localhost
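Before retrying, it can help to confirm which CN the server certificate actually carries and make sure it matches the --host value; a quick sketch with openssl:

```shell
# Print the subject (including CN) of the server certificate:
openssl x509 -in /etc/ssl/server.pem -noout -subject
# If it prints e.g. "subject=CN = localhost", connect with --host localhost.
```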
Note:
MongoDB enforces strict rules about which users have access to which databases. A quick test in the mongo shell:
> show dbs
returns an error. However, my user only has access to the db specified in "<my db>", so looping through the rows in "<my db>" works perfectly.

mongodb cluster upgrade config servers fails

I'm trying to create a MongoDB cluster with 3 machines by following this link: http://docs.mongodb.org/manual/tutorial/deploy-shard-cluster/ . I have downloaded MongoDB 2.6.4 onto all machines and started the config servers with this command:
mongod --configsvr --dbpath /home/sshusr/mongodb/data/configdb/
Now I'm trying to start a mongos instance to upgrade the config servers to v5, because mongos tells me to do so. So I run this command:
mongos --upgrade --configdb 10.122.123.64:27019,10.122.123.65:27019,10.122.123.66:27019
but it gives me this:
2014-09-26T17:07:55.629+0300 [mongosMain] MongoS version 2.6.4 starting: pid=50066 port=27017 64-bit host=tesla (--help for usage)
2014-09-26T17:07:55.629+0300 [mongosMain] db version v2.6.4
2014-09-26T17:07:55.629+0300 [mongosMain] git version: 3a830be0eb92d772aa855ebb711ac91d658ee910
2014-09-26T17:07:55.629+0300 [mongosMain] build info: Linux build7.nj1.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
2014-09-26T17:07:55.629+0300 [mongosMain] allocator: tcmalloc
2014-09-26T17:07:55.629+0300 [mongosMain] options: { sharding: { configDB: "10.122.123.64:27019,10.122.123.65:27019,10.122.123.66:27019" }, upgrade: true }
2014-09-26T17:07:55.633+0300 [mongosMain] SyncClusterConnection connecting to [10.122.123.64:27019]
2014-09-26T17:07:55.633+0300 [mongosMain] SyncClusterConnection connecting to [10.122.123.65:27019]
2014-09-26T17:07:55.633+0300 [mongosMain] SyncClusterConnection connecting to [10.122.123.66:27019]
2014-09-26T17:07:55.718+0300 [mongosMain] scoped connection to 10.122.123.64:27019,10.122.123.65:27019,10.122.123.66:27019 not being returned to the pool
2014-09-26T17:08:06.770+0300 [mongosMain] waited 11s for distributed lock configUpgrade for upgrading config database to new format v5
2014-09-26T17:08:17.816+0300 [mongosMain] waited 22s for distributed lock configUpgrade for upgrading config database to new format v5
2014-09-26T17:08:28.863+0300 [mongosMain] waited 33s for distributed lock configUpgrade for upgrading config database to new format v5
2014-09-26T17:08:39.909+0300 [mongosMain] waited 44s for distributed lock configUpgrade for upgrading config database to new format v5
....
what am I missing here? Any help is appreciated. Thanks.
I have resolved the problem.
I ran the command with verbose logging and saw that there was a time/date difference between the servers. I set up an NTP server for each machine and the problem was solved.
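A quick sketch for spotting that kind of clock skew up front (the host IPs are the ones from the question; passwordless ssh access is assumed):

```shell
# Print each config server's UTC clock; large differences between the
# numbers explain the distributed-lock wait seen above.
for h in 10.122.123.64 10.122.123.65 10.122.123.66; do
    echo "$h: $(ssh "$h" date -u +%s)"
done
# Fix: keep all machines in sync with NTP (e.g. ntpd or chrony on each host).
```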