Mongo can't connect to remote instance - mongodb

I installed MongoDB on a remote server via Vagrant. I can access Postgres from my local system, but Mongo is not reachable. When I log in via SSH and check the Mongo status, it says that Mongo is running, and I can run queries too. When I try to connect from my local system using this command:
mongo 192.168.192.168:27017
I get an error
MongoDB shell version: 2.6.5
connecting to: 192.168.192.168:27017/test
2014-12-27T22:19:19.417+0100 warning: Failed to connect to 192.168.192.168:27017, reason: errno:111 Connection refused
2014-12-27T22:19:19.418+0100 Error: couldn't connect to server 192.168.192.168:27017 (192.168.192.168), connection attempt failed at src/mongo/shell/mongo.js:148
exception: connect failed
It looks like Mongo is not listening for connections from other IPs. I commented out bind_ip in the Mongo settings, but it didn't help.
Services on 192.168.192.168 according to nmap:
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
111/tcp open rpcbind
5432/tcp open postgresql
9000/tcp open cslistener
It looks like mongod is listening:
sudo lsof -iTCP -sTCP:LISTEN | grep mongo
mongod 1988 mongodb 6u IPv4 5407 0t0 TCP *:27017 (LISTEN)
mongod 1988 mongodb 8u IPv4 5411 0t0 TCP *:28017 (LISTEN)
Firewall rules
sudo iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Update
My Mongo config:
dbpath=/var/lib/mongodb
#where to log
logpath=/var/log/mongodb/mongodb.log
logappend=true
#bind_ip = 127.0.0.1
#port = 27017
# Enable journaling, http://www.mongodb.org/display/DOCS/Journaling
journal=true
# Enables periodic logging of CPU utilization and I/O wait
#cpu = true
# Turn on/off security. Off is currently the default
#noauth = true
#auth = true

The solution is to change the Mongo configuration:
bind_ip = 0.0.0.0
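For completeness, a minimal sketch of the change plus a quick check, assuming the stock Ubuntu-packaged setup from the question (config at /etc/mongodb.conf, service name mongodb):
# /etc/mongodb.conf - listen on all interfaces instead of only loopback
bind_ip = 0.0.0.0
# restart so the new setting takes effect
sudo service mongodb restart
# from the local machine, the remote connection should now succeed
mongo 192.168.192.168:27017
Keep in mind that 0.0.0.0 exposes mongod on every interface; binding only the Vagrant private-network address, or enabling auth, is the safer variant.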

Related

Rancher desktop error when starting kubernetes

My Rancher Desktop was working just fine until today, when I switched the container runtime from containerd to dockerd. When I wanted to change it back to containerd, it says:
Error Starting Kubernetes
Error: unable to verify the first certificate
Some recent logfile lines:
client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUV1eXhYdFYvTDZOQmZsZVV0Mnp5ekhNUmlzK2xXRzUxUzBlWklKMmZ5MHJvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFNGdQODBWNllIVzBMSW13Q3lBT2RWT1FzeGNhcnlsWU8zMm1YUFNvQ2Z2aTBvL29UcklMSApCV2NZdUt3VnVuK1liS3hEb0VackdvbTJ2bFJTWkZUZTZ3PT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
2022-09-02T13:03:15.834Z: Error starting lima: Error: unable to verify the first certificate
at TLSSocket.onConnectSecure (node:_tls_wrap:1530:34)
at TLSSocket.emit (node:events:390:28)
at TLSSocket._finishInit (node:_tls_wrap:944:8)
at TLSWrap.ssl.onhandshakedone (node:_tls_wrap:725:12) {
code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE'
}
I tried reinstalling, a factory reset, etc., but no luck. I am using version 1.24.4.
TL;DR: Try turning off Docker (or whatever else is binding to port 6443), reset Kubernetes in Rancher Desktop, then try again.
Check whether anything else is listening on port 6443, which is needed by kubernetes:rancher-desktop.
In my case, lsof -i :6443 gave me...
~ lsof -i :6443
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 63385 ~~~~~~~~~~~~ 150u IPv4 0x44822db677e8e087 0t0 TCP localhost:sun-sr-https (LISTEN)
ssh 82481 ~~~~~~~~~~~~ 27u IPv4 0x44822db677ebb1e7 0t0 TCP *:sun-sr-https (LISTEN)
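A rough follow-up sequence in that situation (the PIDs are the ones from the lsof output above; yours will differ) might be:
# quit Docker Desktop, or stop whichever process owns the first listener
kill 82481          # clear the stray ssh tunnel as well
lsof -i :6443       # should now return nothing
# then use Reset Kubernetes in Rancher Desktop and start it again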

MongoDB Cluster upgrade to use SSL/TLS failed

I set up a MongoDB sharded cluster (replica sets) and added a user such as admin without SSL, following the link below.
Link : https://github.com/arun2pratap/mongodbClusterForWindowsOneClick
Environment :
OS: Windows Server 2019 (all instances run on one Windows server)
1 mongos (port: 26000)
2 shards (ports: sh01: 27011 ~ 27013 / sh02: 27021 ~ 27023)
1 config server replica set (ports: csrs: 26001 ~ 26003)
After building the cluster without SSL, I tried to upgrade it to use SSL following the MongoDB 4.4 manual and other links, but I couldn't find a clear answer or guide.
Below are my reference links.
https://www.mongodb.com/docs/v4.4/tutorial/upgrade-cluster-to-ssl/
https://www.mongodb.com/docs/v4.4/tutorial/deploy-replica-set-with-keyfile-access-control/
https://www.mongodb.com/community/forums/t/cannot-start-mongodb-service-after-configuring-tls/2802
MongoDB Shell connection errors using test self signed certificates
https://www.mongodb.com/community/forums/t/creating-openssl-server-certificates-for-testing-failed/109058
I configured the conf files like sh011.conf following the manuals and guides and started them, but it seems only the csrs instances started, because I couldn't find the other instances' port numbers listening.
1. sh011.conf
sharding:
  clusterRole: shardsvr
replication:
  replSetName: sh01
net:
  bindIpAll: true
  port: 27011
  tls:
    mode: requireTLS
    certificateKeyFile: C:\database\MongoDB\Server\4.4\bin\certifications\test-server1.pem
    CAFile: C:\database\MongoDB\Server\4.4\bin\certifications\test-ca.pem
systemLog:
  destination: file
  path: sh01/sh011/log/sh011.log
  logAppend: true
storage:
  dbPath: sh01/sh011/db/
2. mongos.conf
sharding:
  configDB: csrs/WIN-BKEV4AO0KED:26001,WIN-BKEV4AO0KED:26002,WIN-BKEV4AO0KED:26003
net:
  bindIpAll: true
  port: 26000
  tls:
    mode: requireTLS
    certificateKeyFile: C:\database\MongoDB\Server\4.4\bin\certifications\test-server1.pem
    CAFile: C:\database\MongoDB\Server\4.4\bin\certifications\test-ca.pem
systemLog:
  destination: file
  path: router/log/mongos.log
  logAppend: true
security:
  authorization: enabled
  clusterAuthMode: x509
3. "netstat -an" output
C:\database\MongoDB\Server\4.4\bin>netstat -an
Active Connections
Proto Local Address Foreign Address State
TCP 0.0.0.0:22 0.0.0.0:0 LISTENING
TCP 0.0.0.0:135 0.0.0.0:0 LISTENING
TCP 0.0.0.0:445 0.0.0.0:0 LISTENING
TCP 0.0.0.0:5357 0.0.0.0:0 LISTENING
TCP 0.0.0.0:5432 0.0.0.0:0 LISTENING
TCP 0.0.0.0:5985 0.0.0.0:0 LISTENING
TCP 0.0.0.0:26001 0.0.0.0:0 LISTENING
TCP 0.0.0.0:26002 0.0.0.0:0 LISTENING
TCP 0.0.0.0:26003 0.0.0.0:0 LISTENING
TCP 0.0.0.0:47001 0.0.0.0:0 LISTENING
When I checked the log files, each shard node had an SSL error like the ones below:
{"t":{"$date":"2022-05-09T14:34:54.933+09:00"},"s":"I", "c":"NETWORK", "id":4712102, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Host failed in replica set","attr":{"replicaSet":"csrs","host":"WIN-BKEV4AO0KED:26001","error":{"code":6,"codeName":"HostUnreachable","errmsg":"Error connecting to WIN-BKEV4AO0KED:26001 (192.168.100.202:26001) :: caused by :: SSL peer certificate validation failed: (80096004)The signature of the certificate cannot be verified."},"action":{"dropConnections":true,"requestImmediateCheck":false,"outcome":{"host":":26001","success":false}}}}
{"t":{"$date":"2022-05-09T14:34:55.164+09:00"},"s":"I", "c":"-", "id":4333222, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"RSM received failed isMaster","attr":{"host":"WIN-BKEV4AO0KED:26003","error":"HostUnreachable: Error connecting to WIN-BKEV4AO0KED:26003 (192.168.100.202:26003) :: caused by :: SSL peer certificate validation failed: (80096004)The signature of the certificate cannot be verified.","replicaSet":"csrs","isMasterReply":"{}"}}
I thought the cause of the issue was related to host names, so I configured the hosts file.
Then I re-created the certificate files for the CA, server, and client following the manual.
1. openssl-test-server.conf
[ alt_names ]
DNS.1 = WIN-BKEV4AO0KED
IP.1 = 192.168.100.202
[ req_dn ]
countryName = Country Name (2 letter code)
countryName_default = AA
countryName_min = 2
countryName_max = 2
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = City
stateOrProvinceName_max = 64
localityName = Locality Name (eg, city)
localityName_default = City
localityName_max = 64
organizationName = Organization Name (eg, company)
organizationName_default = DevCompany
organizationName_max = 64
organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_default = Dev
organizationalUnitName_max = 64
commonName = Common Name (eg, YOUR name)
commonName_default = WIN-BKEV4AO0KED
commonName_max = 64
But mongos and the other instances still do not start.
I think some configuration is wrong. I want to know what I missed or got wrong for SSL.
Finally, I found the cause of the issue and how to start the MongoDB cluster with SSL myself.
First, the root cause is that I couldn't start the MongoDB instances (mongos, mongod) with SSL enabled because I had missed some parameters in the start command, as shown below.
Before (original start command):
$ mongod -f csrs1.conf
Modified start command:
$ mongod -f csrs1.conf --tlsMode requireTLS --tlsCertificateKeyFile test-server1.pem --tlsCAFile test-ca.pem
Note: I did not set up MongoDB as a service; I just control it through the command prompt.
When I generated the certificates based on the default settings and started each MongoDB instance with the new command, it worked fine.
I also tried modifying the START.bat file for convenience to use the new command above, but that did not work. So I opened a prompt for each node and executed the start command manually.
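For reference, the working start command for one config server node, wrapped in a small batch script, would look roughly like this sketch (paths and file names follow this question's layout; as noted, running the command manually in each prompt is what ultimately worked here):
REM start-csrs1.bat - start one config server node with the TLS flags on the command line
cd C:\database\MongoDB\Server\4.4\bin
mongod -f csrs1.conf --tlsMode requireTLS ^
    --tlsCertificateKeyFile certifications\test-server1.pem ^
    --tlsCAFile certifications\test-ca.pem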
I hope this information will help.

Sendmail not working in Google Instance with Sendgrid

I once did this configuration and it worked, but now on another machine it is not working. I followed the instructions from SendGrid:
https://sendgrid.com/docs/for-developers/sending-email/sendmail/
But when I want to send an email, it says:
sm-msp-queue[4439]: 07QAbRM1009651: to=smmsp, delay=01:20:57, xdelay=00:04:22, mailer=relay, pri=301852, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
I think it should not be 127.0.0.1 but smtp.sendgrid.net.
netstat -ntlp shows:
tcp 0 0 127.0.0.1:587 0.0.0.0:* LISTEN -
I can also telnet to smtp.sendgrid.net on port 587.
I found the problem. There was an iptables rule that dropped all loopback connections:
-A OUTPUT -o lo -j DROP
Removed it with:
iptables -D OUTPUT -o lo -j DROP
Make sure to save the new rules:
iptables-save > /etc/iptables/rules.v4
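A quick way to confirm the fix is to list the OUTPUT chain and retry a local SMTP connection (a sketch; the port matches the netstat output above):
iptables -L OUTPUT -n -v     # the "-o lo -j DROP" rule should be gone
telnet 127.0.0.1 587         # the local listener should now answer instead of timing out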

bind multiple IP in mongoDb 4.x.x

For MongoDB 4.0.3, I am unable to add multiple IPs in bindIp.
The following config works for localhost:
net:
  port: 27017
  bindIp: 127.0.0.1
The following works for logging in from other IPs:
net:
  port: 27017
  bindIp: 0.0.0.0
The following do not work:
bindIp:127.0.0.1 10.0.0.10
bindIp:127.0.0.1,10.0.0.10
bindIp:"127.0.0.1,10.0.0.10"
bindIp:"127.0.0.1 10.0.0.10"
bindIp:[127.0.0.1,10.0.0.10]
bindIp:[127.0.0.1, 10.0.0.10]
Any IP other than 0.0.0.0 or 127.0.0.1 gives an error for bindIp.
If I try the following:
bindIp:10.0.0.10
ERROR: child process failed, exited with error number 48
This MongoDB doc doesn't help.
Any help will be appreciated.
Use a semicolon (;). I used this in Mongo version 4.2.2:
net:
  port: 27017
  bindIp: 127.0.0.1;10.0.1.149
The documentation you linked actually does have the answer to this. If you go here, you will see that they indicate:
To bind to multiple addresses, enter a list of comma-separated values.
EXAMPLE
localhost,/tmp/mongod.sock
I applied this in my environment and can see that mongod is listening on localhost and the designated IP.
root@aqi-backup:~# netstat -pano | grep 27017
tcp 0 0 10.0.1.149:27017 0.0.0.0:* LISTEN 12541/mongod off (0.00/0/0)
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 12541/mongod off (0.00/0/0)
Here is my mongod.conf file (relevant section).
# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1,10.0.1.149
The only way that I found was to open MongoDB to all interfaces and restrict access to specific IPs with the firewall (sketched after the config below).
net:
  port: 27017
  bindIp: 0.0.0.0
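A sketch of that firewall side with iptables (the 10.0.1.0/24 source is only an example network; put your trusted clients there):
# allow the trusted subnet to reach mongod, drop everyone else on 27017
iptables -A INPUT -p tcp -s 10.0.1.0/24 --dport 27017 -j ACCEPT
iptables -A INPUT -p tcp --dport 27017 -j DROP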
Here is the simplest configuration of the bindIp value for MongoDB 4.x and above, as mentioned by subhash sautiyal:
vim /etc/mongod.conf
bindIp: 127.0.0.1;182.58.65.45;165.165.66.8
sudo service mongod restart

openldap fails to bind ldaps://127.0.0.1:636

Here is my test case:
[root@192.168.121.130 ~]$ slapd -d 1 -h ldaps://127.0.0.1:636
@(#) $OpenLDAP: slapd 2.4.23 (Apr 29 2013 07:47:08) $
mockbuild@c6b7.bsys.dev.centos.org:/builddir/build/BUILD/openldap-2.4.23/openldap-2.4.23/build-servers/servers/slapd
ldap_pvt_gethostbyname_a: host=centos-6.3, r=0
daemon_init: listen on ldaps://127.0.0.1:636
daemon_init: 1 listeners to open...
ldap_url_parse_ext(ldaps://127.0.0.1:636)
daemon: bind(7) failed errno=98 (Address already in use)
slap_open_listener: failed on ldaps://127.0.0.1:636
slapd stopped.
connections_destroy: nothing to destroy.
But if I change to another port, such as 6361, it works.
My environment:
OS: CentOS 6.4 x86_64
OpenLDAP: 2.4.23 installed by yum
Any suggestions?
It seems that another service is already running on port 636:
daemon: bind(7) failed errno=98 (Address already in use)
You can try the following command to identify this service:
netstat -tulpn | grep ':636 ' | grep 'LISTEN'
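On systems without net-tools, ss or lsof give the same information, for example:
ss -tlnp | grep ':636 '
lsof -iTCP:636 -sTCP:LISTEN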
Old post, but still ...
This error is also displayed when SELinux prevents slapd from starting. Personally, I experienced this after manually copying data (/var/lib/ldap/) from another server to this one. I had to restore the imported files to their default SELinux security contexts:
restorecon -R /var/lib/ldap
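If you want to confirm that SELinux is what is blocking slapd, the audit log usually shows the denial; for example (audit2why comes from policycoreutils-python):
ausearch -m avc -c slapd
grep slapd /var/log/audit/audit.log | audit2why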
And I see this doesn't apply to you, but this might also happen if you're attempting to bind slapd to an out-of-the-ordinary port. By default on CentOS 7, these are the allowed ports:
# semanage port -l | grep ldap
ldap_port_t tcp 389, 636, 3268, 7389
ldap_port_t udp 389, 636
Adding another one to the allowed port range can be done with semanage (you might need to install the policycoreutils-python package):
semanage port -a -t ldap_port_t -p tcp 10389
... if you wish to allow slapd to bind on TCP port 10389 in addition to the four listed above. After this, the previous result would look like:
# semanage port -l | grep ldap
ldap_port_t tcp 10389, 389, 636, 3268, 7389
ldap_port_t udp 389, 636