MongoDB Cluster upgrade to use SSL/TLS failed - mongodb

I set up a MongoDB sharded cluster (replica sets) and added an admin-type user without SSL, following the link below.
Link : https://github.com/arun2pratap/mongodbClusterForWindowsOneClick
Environment :
OS: Windows Server 2019 (all instances run on one Windows server)
1 mongos (port: 26000)
2 shards (ports: sh01: 27011 ~ 27013 / sh02: 27021 ~ 27023)
1 config server replica set (csrs, ports: 26001 ~ 26003)
After rebuilding the cluster without SSL, I tried to upgrade it to use SSL following the MongoDB 4.4 manual and other links, but I couldn't find a clear answer or guide.
These are the links I referred to:
https://www.mongodb.com/docs/v4.4/tutorial/upgrade-cluster-to-ssl/
https://www.mongodb.com/docs/v4.4/tutorial/deploy-replica-set-with-keyfile-access-control/
https://www.mongodb.com/community/forums/t/cannot-start-mongodb-service-after-configuring-tls/2802
MongoDB Shell connection errors using test self signed certificates
https://www.mongodb.com/community/forums/t/creating-openssl-server-certificates-for-testing-failed/109058
I configured the conf files (for example sh011.conf below) following the manuals and guides and started the processes, but it seems only the csrs instances actually started, because I couldn't find the other instances' ports listening.
1. sh011.conf
sharding:
  clusterRole: shardsvr
replication:
  replSetName: sh01
net:
  bindIpAll: true
  port: 27011
  tls:
    mode: requireTLS
    certificateKeyFile: C:\database\MongoDB\Server\4.4\bin\certifications\test-server1.pem
    CAFile: C:\database\MongoDB\Server\4.4\bin\certifications\test-ca.pem
systemLog:
  destination: file
  path: sh01/sh011/log/sh011.log
  logAppend: true
storage:
  dbPath: sh01/sh011/db/
2. mongos.conf
sharding:
  configDB: csrs/WIN-BKEV4AO0KED:26001,WIN-BKEV4AO0KED:26002,WIN-BKEV4AO0KED:26003
net:
  bindIpAll: true
  port: 26000
  tls:
    mode: requireTLS
    certificateKeyFile: C:\database\MongoDB\Server\4.4\bin\certifications\test-server1.pem
    CAFile: C:\database\MongoDB\Server\4.4\bin\certifications\test-ca.pem
systemLog:
  destination: file
  path: router/log/mongos.log
  logAppend: true
security:
  authorization: enabled
  clusterAuthMode: x509
3. "netstat -an" output
C:\database\MongoDB\Server\4.4\bin>netstat -an
Active Connections
Proto Local Address Foreign Address State
TCP 0.0.0.0:22 0.0.0.0:0 LISTENING
TCP 0.0.0.0:135 0.0.0.0:0 LISTENING
TCP 0.0.0.0:445 0.0.0.0:0 LISTENING
TCP 0.0.0.0:5357 0.0.0.0:0 LISTENING
TCP 0.0.0.0:5432 0.0.0.0:0 LISTENING
TCP 0.0.0.0:5985 0.0.0.0:0 LISTENING
TCP 0.0.0.0:26001 0.0.0.0:0 LISTENING
TCP 0.0.0.0:26002 0.0.0.0:0 LISTENING
TCP 0.0.0.0:26003 0.0.0.0:0 LISTENING
TCP 0.0.0.0:47001 0.0.0.0:0 LISTENING
When I checked the log files, each shard node had logged an SSL error like the ones below:
{"t":{"$date":"2022-05-09T14:34:54.933+09:00"},"s":"I", "c":"NETWORK", "id":4712102, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Host failed in replica set","attr":{"replicaSet":"csrs","host":"WIN-BKEV4AO0KED:26001","error":{"code":6,"codeName":"HostUnreachable","errmsg":"Error connecting to WIN-BKEV4AO0KED:26001 (192.168.100.202:26001) :: caused by :: SSL peer certificate validation failed: (80096004)The signature of the certificate cannot be verified."},"action":{"dropConnections":true,"requestImmediateCheck":false,"outcome":{"host":":26001","success":false}}}}
{"t":{"$date":"2022-05-09T14:34:55.164+09:00"},"s":"I", "c":"-", "id":4333222, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"RSM received failed isMaster","attr":{"host":"WIN-BKEV4AO0KED:26003","error":"HostUnreachable: Error connecting to WIN-BKEV4AO0KED:26003 (192.168.100.202:26003) :: caused by :: SSL peer certificate validation failed: (80096004)The signature of the certificate cannot be verified.","replicaSet":"csrs","isMasterReply":"{}"}}
I thought the cause of the issue was related to host names, so I configured the hosts file.
Then I re-created the certificate files for the CA, server, and client following the manual.
1. openssl-test-server.conf
[ alt_names ]
DNS.1 = WIN-BKEV4AO0KED
IP.1 = 192.168.100.202
[ req_dn ]
countryName = Country Name (2 letter code)
countryName_default = AA
countryName_min = 2
countryName_max = 2
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = City
stateOrProvinceName_max = 64
localityName = Locality Name (eg, city)
localityName_default = City
localityName_max = 64
organizationName = Organization Name (eg, company)
organizationName_default = DevCompany
organizationName_max = 64
organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_default = Dev
organizationalUnitName_max = 64
commonName = Common Name (eg, YOUR name)
commonName_default = WIN-BKEV4AO0KED
commonName_max = 64
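For reference, the certificates were regenerated roughly along these lines; this is a simplified sketch that skips the intermediate authority the manual's appendix uses, assumes an openssl-test-ca.cnf CA config like the one in that appendix, and assumes openssl-test-server.conf has a v3_req section referencing the alt_names above:
# test CA: key plus self-signed certificate
$ openssl genrsa -out test-ca.key 4096
$ openssl req -new -x509 -days 365 -key test-ca.key -out test-ca.pem -config openssl-test-ca.cnf
# server key and CSR, signed by the test CA with the alt_names extensions
$ openssl genrsa -out test-server1.key 4096
$ openssl req -new -key test-server1.key -out test-server1.csr -config openssl-test-server.conf
$ openssl x509 -req -sha256 -days 365 -in test-server1.csr -CA test-ca.pem -CAkey test-ca.key -CAcreateserial -extfile openssl-test-server.conf -extensions v3_req -out test-server1.crt
# mongod's certificateKeyFile needs the certificate and the private key in one file
$ copy /b test-server1.crt+test-server1.key test-server1.pem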
But mongos and the other instances still would not start.
So I think some configuration is wrong; I want to know what I missed or got wrong for SSL.

Finally, I found the cause of the issue and how to start the MongoDB cluster with SSL myself.
The root cause was that I was starting the MongoDB instances (mongos, mongod) with SSL enabled but was missing some parameters in the start command, like below:
Start command before:
$ mongod -f csrs1.conf
Start command after (modified):
$ mongod -f csrs1.conf --tlsMode requireTLS --tlsCertificateKeyFile test-server1.pem --tlsCAFile test-ca.pem
Note: I did not set MongoDB up as a Windows service; I just control the processes from the command prompt.
When I generated the certificates based on the default settings and started each MongoDB instance with the new command, everything worked fine.
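Roughly, the start commands per node type looked like the following, each run in its own prompt (the csrs2/csrs3 and remaining shard conf names are assumed to follow the same pattern, and the same test certificate is reused on every node since everything runs on one host):
# config servers (csrs)
$ mongod -f csrs1.conf --tlsMode requireTLS --tlsCertificateKeyFile test-server1.pem --tlsCAFile test-ca.pem
$ mongod -f csrs2.conf --tlsMode requireTLS --tlsCertificateKeyFile test-server1.pem --tlsCAFile test-ca.pem
$ mongod -f csrs3.conf --tlsMode requireTLS --tlsCertificateKeyFile test-server1.pem --tlsCAFile test-ca.pem
# shard members, e.g. the first node of sh01
$ mongod -f sh011.conf --tlsMode requireTLS --tlsCertificateKeyFile test-server1.pem --tlsCAFile test-ca.pem
# router
$ mongos -f mongos.conf --tlsMode requireTLS --tlsCertificateKeyFile test-server1.pem --tlsCAFile test-ca.pem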
I also tried to modify the START.bat file for convenience to use the new command above, but that did not work, so I opened a prompt for each node and executed the start command manually.
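Connecting with the shell afterwards also needs the TLS options, roughly like this (test-client.pem stands for the client certificate created above; the name is assumed):
$ mongo --host WIN-BKEV4AO0KED --port 26000 --tls --tlsCAFile test-ca.pem --tlsCertificateKeyFile test-client.pem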
I hope this information will help.

Related

MongoServerError: "No host described in new configuration with {version: 1, term: 0} for replica set rs0 maps to this node," How do I fix this?

Started mongo server using
mongod --port 27018 --dbpath "/usr/local/var/mongodb/data/db0" --replSet rs0 --bind_ip "locahost,0.0.0.0"
And connected to the instance using
mongosh mongodb://127.0.0.1:27018/vndr
Tried to initiate a replica set using rs.initiate()
Then got the error
---
> rs.initiate()
{
"ok" : 0,
"errmsg" : "No host described in new configuration with {version: 1, term: 0} for replica set rs0 maps to this node",
"code" : 93,
"codeName" : "InvalidReplicaSetConfig"
}
In the server log I noticed these lines saying "Nnadozies-MacBook-Pro.local" is not known:
{"t":{"$date":"2022-08-18T14:52:20.861+01:00"},"s":"W", "c":"NETWORK", "id":21207, "ctx":"conn1","msg":"getaddrinfo() failed","attr":{"host":"Nnadozies-MacBook-Pro.local","error":"nodename nor servname provided, or not known"}}
{"t":{"$date":"2022-08-18T14:52:25.866+01:00"},"s":"I", "c":"NETWORK", "id":4834700, "ctx":"conn1","msg":"isSelf could not connect via connectSocketOnly","attr":{"hostAndPort":"Nnadozies-MacBook-Pro.local:27018","error":{"code":6,"codeName":"HostUnreachable","errmsg":"couldn't connect to server Nnadozies-MacBook-Pro.local:27018, connection attempt failed: HostNotFound: Could not find address for Nnadozies-MacBook-Pro.local:27018: SocketException: Host not found (authoritative)"}}}
{"t":{"$date":"2022-08-18T14:52:25.867+01:00"},"s":"E", "c":"REPL", "id":21425, "ctx":"conn1","msg":"replSetInitiate error while validating config","attr":{"error":{"code":74,"codeName":"NodeNotFound","errmsg":"No host described in new configuration with {version: 1, term: 0} for replica set rs0 maps to this node"},"config":{"_id":"rs0","version":1,"members":[{"_id":0,"host":"Nnadozies-MacBook-Pro.local:27018"}]}}}
I followed the response in this similar question and added the line 127.0.0.1 Nnadozies-MacBook-Pro.local to my /etc/hosts file to fix this.
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
127.0.0.1 figmadaemon.com
127.0.0.1 Nnadozies-MacBook-Pro.local
Posting this because the similar question wasn't really clear on how to solve the problem. I think this makes it clearer that the hostname MongoDB was looking for could not be resolved to an IP address because it wasn't in /etc/hosts.
Tip on editing /etc/hosts: use sudo for write access.
Other than that, I'm curious what other ways there are to run MongoDB instances so they use known hostnames.
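One alternative that should work is to skip the default hostname entirely and pass an explicit member host that already resolves, for example localhost (a sketch, only sensible for a single-node set):
mongosh mongodb://127.0.0.1:27018 --eval 'rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "localhost:27018" } ] })'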

MongoDB bind ip

I want to allow a specific IP address to communicate with the MongoDB instance I have on a server. In order to do that, I opened mongod.conf and edited it as follows:
net:
  port: 27017
  bindIp: localhost,<IpAdres>
security:
  authorization: enable
I tried it with authorization 'enable', but the only way it works is with the initial configuration:
net:
  port: 27017
  bindIp: localhost
#security:
#  authorization: enable
Also, if I set
net:
  port: 27017, <IpAdress>
  bindIp: localhost
#security:
#  authorization: enable
The error I got is:
Job for mongod.service failed because the control process exited with error code. See "systemctl status mongod.service" and "journalctl -xe" for details.
My goal is to only allow a specific set of IP addresses to communicate with the MongoDB on my server (communicate = run queries).
Version: 4.0.18 Running on RedHat 7.8
netstat looks like this
Proto | Recv-Q |Send-Q | Local Address | Foreign Address | State | PID/Program name
tcp | 0 | 0 |127.0.0.1:27017 | 0.0.0.0:* | LISTEN | -
You don't need to enable authentication:
net:
  port: 27017
  bindIp: localhost, <YOUR_IP>
That configuration is correct; you probably have a different communication issue. Check that traffic is allowed to your IP/port.
Command 'netstat -tulnp' can help you.
https://docs.mongodb.com/manual/reference/configuration-options/
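Note that bindIp only controls which local interfaces mongod listens on, not which clients may connect. If the goal is to allow only specific client IPs, that is usually done with the host firewall; a minimal iptables sketch (203.0.113.5 is a placeholder for the allowed client):
sudo iptables -A INPUT -p tcp --dport 27017 -s 203.0.113.5 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 27017 -j DROP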
Apart from that, the error could also have another cause: go to the /run/mongodb folder and delete the file called mongod.pid, then try to restart mongod using:
sudo service mongod restart

Can't connect to MongoDB remote server

I have set up MongoDB on a Google Cloud compute instance and am trying to connect to it remotely. My mongod.conf file looks like this:
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0

#processManagement:

security:
  authorization: 'enabled'
I have setup a firewall rule in my Google Cloud console that tags the instance and opens tcp:27017 for ip ranges 0.0.0.0/0.
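For reference, that rule is roughly equivalent to something like the following (rule name and target tag are placeholders):
gcloud compute firewall-rules create allow-mongodb --allow=tcp:27017 --source-ranges=0.0.0.0/0 --target-tags=mongodb-server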
Checking on the port 27017 it looks like mongo is listening:
sudo netstat -tulpn | grep 27017
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 3781/mongod
Overall it also seems like port 27017 is open:
netstat --listen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:27017 *:* LISTEN
...
On the instance I set up an admin database with an admin user:
> use admin
> db.createUser(
    {
      user: 'admin',
      pwd: 'somepass',
      roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
    }
  )
I also set up a user for a test database:
> use test
> db.createUser(
    {
      user: 'tester',
      pwd: 'apassword',
      roles: [ { role: "readWrite", db: "test" },
               { role: "read", db: "reporting" } ]
    }
  )
This user works locally:
>use test
>db.auth('tester', 'apassword')
1
However when I try to connect remotely it fails:
$ mongo -u tester -p apassword 12.345.67.890/test
MongoDB shell version v3.4.1
connecting to: mongodb://12.345.67.890/test
2017-11-14T12:47:07.369-0700 W NETWORK [main] Failed to connect to 12.345.67.890:27017 after 5000ms milliseconds, giving up.
2017-11-14T12:47:07.370-0700 E QUERY [main] Error: couldn't connect to server 12.345.67.890:27017, connection attempt failed :
connect#src/mongo/shell/mongo.js:234:13
#(connect):1:6
exception: connect failed
I'm not a networking expert, so I've pretty much exhausted my knowledge at this point and don't know how to proceed.
Am I missing something in mongod.conf? Did I set up the firewall incorrectly? Any help is appreciated.
I came up with a solution for this, but I'm unclear why it works and not my original approach.
My instance has http/https enabled. This uses the firewall rules default-allow-http and default-allow-https. These rules enable connections from anywhere (0.0.0.0/0) through tcp ports 80 and 443 respectively. I edited the http rule and added tcp port 27017.
Now I can connect to the server.
As a test, I reset the http rule and added another rule applied to all instances opening up tcp:27017 to 0.0.0.0/0. Essentially everything is the same as the http rule save for the name and the target tags. With this change I cannot connect to the server.
It seems rather strange and doesn't feel like intended behavior, unless my understanding of the firewall rules is incomplete.
In the end it looks like either setting up mongo to use one of the http/https ports or adding port 27017 to those rules is the way to go.

Haproxy + percona 5.7 xtradb error

Hello, I configured HAProxy following a DigitalOcean manual (round-robin for the Percona 5.7 databases), but on the HAProxy server, when I try to connect to the database, I get an error.
On the haproxy server:
mysql -h 127.0.0.1 -u haproxy_root -p -e "SHOW DATABASES"
And I get this error:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2
Haproxy config:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    #log loghost local0 info
    maxconn 1024
    #chroot /usr/share/haproxy
    user haproxy
    group haproxy
    daemon
    #debug
    #quiet

defaults
    log     global
    mode    http
    option  tcplog
    option  dontlognull
    retries 3
    option  redispatch
    maxconn 1024
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

listen galera_cluster
    bind 127.0.0.1:3306
    mode tcp
    option httpchk
    balance leastconn
    server galera-node01 192.168.0.101:3306 check port 9200
    server galera-node02 192.168.0.102:3306 check port 9200
    server galera-node03 192.168.0.103:3306 check port 9200
If I connect directly to the database at 192.168.0.101, everything works and I get a response from the database, but when I make the request through HAProxy at 127.0.0.1 I get this error:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading
initial communication packet', system error: 2
My xinetd config on the MySQL nodes:
# default: on
# description: mysqlchk
service mysqlchk
{
    # this is a config for xinetd, place it in /etc/xinetd.d/
    disable         = no
    flags           = REUSE
    socket_type     = stream
    type            = UNLISTED
    port            = 9200
    wait            = no
    user            = nobody
    server          = /usr/bin/clustercheck
    server_args     = percona percona
    log_on_failure  += USERID
    only_from       = 0.0.0.0/0
    #
    # Passing arguments to clustercheck
    # <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
    # Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
    # Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local"
    # 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra"
    #
    # recommended to put the IPs that need
    # to connect exclusively (security purposes)
    per_source      = UNLIMITED
}
If I telnet to a PXC node on port 9200, I get:
telnet 192.168.0.101 9200
Trying 192.168.0.101...
Connected to 192.168.0.101.
Escape character is '^]'.
HTTP/1.1 503 Service Unavailable
Content-Type: text/plain
Connection: close
Content-Length: 57
Percona XtraDB Cluster Node is not synced or non-PRIM.
Connection closed by foreign host.
The most common reason for this is that all nodes in the cluster are down. If you have enabled HAProxy stats, check whether all nodes are up. If they are not, your mysqlchk service is likely unable to connect to the cluster nodes properly.
Check that your mysqlchk xinetd service has the proper server_args configured. Once these are set, restart xinetd and telnet to port 9200 to validate:
[root@node02 log]# cat /etc/xinetd.d/mysqlchk
# default: on
# description: mysqlchk
service mysqlchk
{
    # this is a config for xinetd, place it in /etc/xinetd.d/
    ...
    server          = /usr/bin/clustercheck
    server_args     = percona percona
    ...
    # Passing arguments to clustercheck
    # <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
    # Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
}
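After fixing server_args (the MySQL user/password that clustercheck should use), a validation sequence could look roughly like this; a healthy node answers the health check with HTTP 200 and reports itself as Synced and part of the Primary component:
# restart xinetd so the new check config is picked up, then re-test the check port
systemctl restart xinetd
telnet 192.168.0.101 9200
# on each PXC node, confirm it is synced and in the primary component
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_local_state_comment'; SHOW STATUS LIKE 'wsrep_cluster_status';"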
UPDATE:
A more thorough procedure, including making sure mysqlchk is configured, can be found here https://www.percona.com/doc/percona-xtradb-cluster/5.6/howtos/virt_sandbox.html

Mongo can't connect to remote instance

I installed MongoDB on a remote server via Vagrant. I can access Postgres from my local system, but Mongo is not available. When I log in via SSH and check the Mongo status, it says Mongo is running, and I can run queries too. When I try to connect from my local system using this command:
mongo 192.168.192.168:27017
I get an error
MongoDB shell version: 2.6.5
connecting to: 192.168.192.168:27017/test
2014-12-27T22:19:19.417+0100 warning: Failed to connect to 192.168.192.168:27017, reason: errno:111 Connection refused
2014-12-27T22:19:19.418+0100 Error: couldn't connect to server 192.168.192.168:27017 (192.168.192.168), connection attempt failed at src/mongo/shell/mongo.js:148
exception: connect failed
It looks like Mongo is not listening for connections from other IPs? I commented out bind_ip in the Mongo settings, but it doesn't help.
Services for 192.168.192.168 according to nmap:
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
111/tcp open rpcbind
5432/tcp open postgresql
9000/tcp open cslistener
It looks like mongod is listening:
sudo lsof -iTCP -sTCP:LISTEN | grep mongo
mongod 1988 mongodb 6u IPv4 5407 0t0 TCP *:27017 (LISTEN)
mongod 1988 mongodb 8u IPv4 5411 0t0 TCP *:28017 (LISTEN)
Firewall rules
sudo iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Update
My mongo config
dbpath=/var/lib/mongodb
#where to log
logpath=/var/log/mongodb/mongodb.log
logappend=true
#bind_ip = 127.0.0.1
#port = 27017
# Enable journaling, http://www.mongodb.org/display/DOCS/Journaling
journal=true
# Enables periodic logging of CPU utilization and I/O wait
#cpu = true
# Turn on/off security. Off is currently the default
#noauth = true
#auth = true
The solution is to change the Mongo configuration:
bind_ip = 0.0.0.0
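After that change, the service has to be restarted (the service name may be mongod or mongodb depending on the package), and the original connection attempt from the local machine can be retried:
sudo service mongodb restart
mongo 192.168.192.168:27017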