I am trying to configure MongoDB for SSL. I have the two certs in a directory on Ubuntu, but when I try to restart the service with mongodb.conf set up, the service will not start. If I comment out the lines I added to mongodb.conf, MongoDB starts fine. I think the syntax is wrong, not the certs themselves.
#SSL options
sslMode = requireSSL
#Enable SSL on normal ports
#sslOnNormalPorts = true
# SSL Key file and password
sslPEMKeyFile = /path/to/cert
sslPEMKeyPassword = password
sslCAFile = /path/to/cert
I get this error when I try to start the server with these lines uncommented:
stop: Unknown instance:
mongodb start/running, process 7725
If I try to get into the mongo shell I get this (presumably because the service did not restart properly):
Thu Jul 21 14:32:07.660 Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:145
exception: connect failed
The mongodb.conf file is a YAML file, so you need to format it as such; that means you can't use tabs. It also looks like the option syntax you're using isn't correct.
Try this:
net:
  #SSL options
  ssl:
    mode: requireSSL
    # SSL Key file and password
    PEMKeyFile: /path/to/cert
    PEMKeyPassword: password
    CAFile: /path/to/cert
Also, I know it's commented out, but just to mention: the sslOnNormalPorts option is deprecated. See here: https://docs.mongodb.com/manual/reference/configuration-options/#net.ssl.sslOnNormalPorts
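Since tabs and stray indentation are the usual culprits, it can also help to confirm the file parses as YAML at all before restarting. A quick sanity check, assuming Python with PyYAML is installed (the /etc/mongod.conf path is an assumption; older Ubuntu packages use /etc/mongodb.conf):
python -c "import yaml; yaml.safe_load(open('/etc/mongod.conf')); print('OK')"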
Related
I have set up my MongoDB on an AWS Linux 2 EC2 instance.
I have associated this inbound rule with the instance: SSH | TCP | 22.
I was able to SSH into it through MongoDB Compass using the following settings:
However, as soon as I added a username and password to my database using the following method:
use my_database
db.createUser(
  {
    user: "some_user",
    pwd: "some_password",
    roles: [{ role: "readWrite", db: "my_database" }]
  }
)
and tried to access it using the following parameters:
I got the following error:
Error creating SSH Tunnel: connect EADDRINUSE some_ip:22 - Local (0.0.0.0:29353)
Here is my /etc/ssh/sshd_config file content:
#Port 22
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
# Ciphers and keying
#RekeyLimit default none
# Logging
#SyslogFacility AUTH
SyslogFacility AUTHPRIV
#LogLevel INFO
# Authentication:
#LoginGraceTime 2m
#PermitRootLogin yes
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10
#PubkeyAuthentication yes
# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
# but this is overridden so installations will only check .ssh/authorized_keys
AuthorizedKeysFile .ssh/authorized_keys
#AuthorizedPrincipalsFile none
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
PermitEmptyPasswords no
#PasswordAuthentication no
# Change to no to disable s/key passwords
ChallengeResponseAuthentication yes
#ChallengeResponseAuthentication no
# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no
#KerberosUseKuserok yes
# GSSAPI options
GSSAPIAuthentication yes
GSSAPICleanupCredentials no
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no
#GSSAPIEnablek5users no
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
# WARNING: 'UsePAM no' is not supported in Red Hat Enterprise Linux and may cause several
# problems.
UsePAM yes
#AllowAgentForwarding yes
AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
#PermitTTY yes
#PrintMotd yes
#PrintLastLog yes
#TCPKeepAlive yes
#UseLogin no
#UsePrivilegeSeparation sandbox
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#ShowPatchLevel no
#UseDNS yes
#PidFile /var/run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none
# no default banner path
#Banner none
# Accept locale-related environment variables
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS
# override default of no subsystems
Subsystem sftp /usr/libexec/openssh/sftp-server
# Example of overriding settings on a per-user basis
#Match User anoncvs
# X11Forwarding no
# AllowTcpForwarding no
# PermitTTY no
# ForceCommand cvs server
AuthorizedKeysCommand /opt/aws/bin/eic_run_authorized_keys %u %f
AuthorizedKeysCommandUser ec2-instance-connect
Am I missing anything here?
I was running into the exact same issue when trying to connect through an SSH tunnel, and I found a quirky workaround.
I solved it by installing Studio 3T. Once it opens, create a new connection by clicking Connect -> New Connection.
Set up your connection, save it, and you should be able to connect successfully.
When this is complete, do the following:
Click Connect once again.
Right-click the saved connection and select Edit....
At the bottom left there is an option named To URI... to export the connection string.
Finally, select the Include Passwords option and copy the connection string.
That's it! You can now paste it into MongoDB Compass and you should be good to go.
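For reference, the exported string should look roughly like this (hypothetical values matching the createUser call above; the authSource parameter matters because the user was created in my_database):
mongodb://some_user:some_password@127.0.0.1:27017/my_database?authSource=my_database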
I'm trying to move my bot to an Ubuntu virtual server from Vultr, but it's having a problem connecting to the Postgres database. I've tried editing the config (changing md5 to true, host to local, etc.), but those only give me different errors and also break it on my original machine. It works perfectly fine on my Windows machine. Here is the error I'm facing:
asyncpg.exceptions.InvalidAuthorizationSpecificationError: no pg_hba.conf entry for host "[local]", user "postgres", database "xxx", SSL off
So I've tried to change this line:
async def create_db_pool():
    bot.pg_con = await asyncpg.create_pool(database='xxx', user='postgres', password='???')
to this:
async def create_db_pool():
    bot.pg_con = await asyncpg.create_pool(database='xxx', user='postgres', password='???', ssl=True)
and that gives me this error:
asyncpg.exceptions._base.InterfaceError: `ssl` parameter can only be enabled for TCP addresses, got a UNIX socket path: '/run/postgresql/.s.PGSQL.5432'
So I don't know what else to try. I've been stuck on this for a while. If it's relevant, it connects at the bottom of the bot.py file like this:
bot.loop.run_until_complete(create_db_pool())
Whether ssl is True or not, the database seems to still function on my Windows machine. But I can't get it to work on my Ubuntu virtual server.
If I edit my config to this:
# TYPE DATABASE USER ADDRESS METHOD
# IPv4 local connections:
host all all 0.0.0.0/0 md5
# IPv6 local connections:
host all all ::/0 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
host replication all 0.0.0.0/0 md5
host replication all ::/0 md5
Then I get a call error like this:
OSError: Multiple exceptions: [Errno 111] Connect call failed ('::1', 5432, 0, 0), [Errno 111] Connect call failed ('127.0.0.1', 5432)
This is really driving me crazy. I have no idea what to do. I bought this virtual server to host my bot on but I can't even get it to connect to the database.
When I simply type psql in the terminal, I get this error:
Error: Invalid data directory for cluster 12 main
Postgres is not working as intended in basically any way. I'm using Vultr.com to host the Ubuntu server, if that matters, and connecting with PuTTY.
Your pg_hba.conf has multiple syntax errors. The "localhost" connection type is not allowed at all, and the "local" connection type does not accept an IP address field. The server would refuse to start/restart with the file you show, and if you try to reload a running server it will just keep using the previous settings.
LOG: invalid connection type "localhost"
CONTEXT: line 4 of configuration file "/home/jjanes/pgsql/data/pg_hba.conf"
LOG: invalid authentication method "127.0.0.1/32"
CONTEXT: line 5 of configuration file "/home/jjanes/pgsql/data/pg_hba.conf"
LOG: invalid authentication method "::1/128"
CONTEXT: line 9 of configuration file "/home/jjanes/pgsql/data/pg_hba.conf"
LOG: invalid connection type "localhost"
CONTEXT: line 10 of configuration file "/home/jjanes/pgsql/data/pg_hba.conf"
LOG: invalid authentication method "127.0.0.1/32"
CONTEXT: line 102 of configuration file "/home/jjanes/pgsql/data/pg_hba.conf"
FATAL: could not load pg_hba.conf
LOG: database system is shut down
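For comparison, a minimal pg_hba.conf that allows local Unix-socket connections plus password-authenticated TCP connections from localhost might look like this (a sketch, not a drop-in replacement; pick methods to match your setup, and reload PostgreSQL after editing):
# TYPE  DATABASE  USER  ADDRESS       METHOD
local   all       all                 peer
host    all       all   127.0.0.1/32  md5
host    all       all   ::1/128       md5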
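Separately, the earlier InterfaceError appears because asyncpg defaults to the Unix socket when no host is given, and its ssl parameter only applies to TCP. A minimal sketch forcing a TCP connection, mirroring the question's snippet (the host and port values are assumptions):
import asyncpg

async def create_db_pool():
    # An explicit host forces a TCP connection, which matches the "host"
    # lines in pg_hba.conf; ssl=True is also accepted over TCP if needed.
    bot.pg_con = await asyncpg.create_pool(
        host='127.0.0.1', port=5432,
        database='xxx', user='postgres', password='???')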
Trying to set up a 3-node MongoDB replica set on Ubuntu 18.04, Mongo version 4.0.18.
gl1 192.168.1.30
gl2 192.168.1.31
gl3 192.168.1.33
Using an internal CA on the same network to create certs, I have created two certs per server: one for the server Mongo is installed on (gl1, gl2, gl3), used as the PEMKeyFile, and one for the clusterFile (mongo1, mongo2, mongo3). Each CAFile is set listing the respective RSA key, PEMKeyFile, and root CA for each server. The mongo services run fine (according to systemctl) using the individual certs (PEMKeyFile and clusterFile).
net:
  port: 27017
  bindIp: 0.0.0.0
net:
  ssl:
    mode: requireSSL
    PEMKeyFile: /opt/ssl/MongoDB.pem
    CAFile: /opt/ssl/ca.pem
    clusterFile: /opt/ssl/mongo.pem
    allowConnectionsWithoutCertificates: true
#replication
replication:
  replSetName: rs0
When I try rs.add("192.168.1.31:27017"), I get the following error:
"errmsg" : "Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: 192.168.1.30:27017; the following nodes did not respond affirmatively: gl2.domain.com:27017 failed with stream truncated",
"code" : 74,
"codeName" : "NodeNotFound",
In the mongod.log on node 192.168.1.31 the following is logged:
2020-05-22T18:20:48.161+0000 E NETWORK [conn4] SSL peer certificate validation failed: unsupported certificate purpose
2020-05-22T18:20:48.161+0000 I NETWORK [conn4] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: unsupported certificate purpose. Ending connection from 192.168.1.30:55002 (connection id: 4)
I have read in an old Google Groups post (https://groups.google.com/forum/#!msg/mongodb-user/EmESxx5KK9Q/xH6Ul7fTBQAJ) that the clusterFile and PEMKeyFile had to be different. I did that, and it still throws errors. I have searched a lot and found little to confirm that this is how it's done, but that post is the only place I've found with a similar error message, and it seems logical that it should work. However, I'm not sure how I can verify that my clusterFile is actually being used. It is indeed a separate certificate with an FQDN for each node. All three nodes have hosts files updated to find each other (gl1, mongo1, etc.). I can ping all nodes between themselves, so networking is up. I've also verified the firewall (ufw and iptables) is not blocking 27017. Previously I tried a self-signed CA and certs but kept running into errors because they were self-signed, which is why I went the internal-CA route.
The "purpose" is also known as "extended key usage".
Openssl x509v3 Extended Key Usage gives some example code for setting the purposes.
As pointed out by Joe, the documentation states that the certificates must either have no extended key usage at all, or the one in the PEMKeyFile must have server auth, and the one in the cluster file must have client auth.
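One way to check what purposes a certificate actually carries is to dump it and look for the Extended Key Usage extension, for example (the path is taken from the question's config):
openssl x509 -in /opt/ssl/mongo.pem -noout -text | grep -A 1 "Extended Key Usage"
Per the documentation cited above, the PEMKeyFile cert should list TLS Web Server Authentication, the clusterFile cert TLS Web Client Authentication, or the extension should be absent entirely.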
I'm trying to connect a PHP application to MongoDB, moving from Compose to local, but I get this error (I can, however, connect remotely using MongoChef):
No suitable servers found (serverSelectionTryOnce set): [TLS handshake failed: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed calling ismaster
I think you have 3 options to solve this problem.
1. Disable SSL on your server
Find mongod.conf; if you're using Linux it is normally located at /etc/mongod.conf. Use # to comment out these lines under net. Finally, restart MongoDB for the change to take effect.
net:
  ssl:
    mode: requireSSL
    PEMKeyFile: ./mongodb.pem
    ...
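The restart command depends on how MongoDB was installed (an assumption: the unit is named mongod on most modern packages, mongodb on some older Ubuntu installs):
sudo systemctl restart mongod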
2. Use the option called weak_cert_validation on your client
This is not a safe solution, but it is definitely one of the simplest.
For example, if the MongoDB server has SSL enabled but does not offer a CA certificate (i.e. it uses a self-signed certificate), set weak_cert_validation to true on the client side. An example for the C client:
mongoc_ssl_opt_t ssl_opts = {0};
ssl_opts.weak_cert_validation = true;
mongoc_client_set_ssl_opts(client, &ssl_opts);
3. Follow configure-ssl to create a certificate and get it signed for your program.
Here are some providers: https://en.wikipedia.org/wiki/Certificate_authority#Providers
When trying to run an Elixir (Phoenix) web application using a PostgreSQL database hosted on a third-party "Database-as-a-Service" (Azure Database for PostgreSQL), we attempt to start the app with mix phoenix.server and see the following error:
[info] Running Pxblog.Endpoint with Cowboy using http://localhost:4000
[error] GenServer #PID<0.247.0> terminating
** (FunctionClauseError) no function clause matching in Postgrex.Messages.decode_fields/1
(postgrex) lib/postgrex/messages.ex:339: Postgrex.Messages.decode_fields("")
(postgrex) lib/postgrex/messages.ex:344: Postgrex.Messages.decode_fields/1
(postgrex) lib/postgrex/messages.ex:344: Postgrex.Messages.decode_fields/1
(postgrex) lib/postgrex/messages.ex:131: Postgrex.Messages.parse/3
(postgrex) lib/postgrex/protocol.ex:1842: Postgrex.Protocol.msg_decode/1
(postgrex) lib/postgrex/protocol.ex:1816: Postgrex.Protocol.msg_recv/3
(postgrex) lib/postgrex/protocol.ex:560: Postgrex.Protocol.auth_recv/3
(postgrex) lib/postgrex/protocol.ex:475: Postgrex.Protocol.handshake/2
(db_connection) lib/db_connection/connection.ex:134: DBConnection.Connection.connect/2
(connection) lib/connection.ex:622: Connection.enter_connect/5
(stdlib) proc_lib.erl:247: :proc_lib.init_p_do_apply/3
Last message: nil
State: Postgrex.Protocol
Insight: "Enforce SSL" was Enabled on the Azure DB ...
Through investigation we realised that the error is caused by the Azure PostgreSQL service having Enforce SSL Connection set to Enabled (the default):
We think having "Enforce SSL" Enabled is good for security, but we aren't able to get it working with Phoenix ...
(Temporary) Solution: Disable "Enforce SSL"
So, we have (temporarily) disabled SSL for now:
But we would much prefer a "permanent" solution to this issue.
Preferred Solution: Use SSL when Connecting to PostgreSQL
If anyone can clarify (or point us to) how to connect to PostgreSQL over SSL from Phoenix/Ecto
we would be super grateful! :-)
Does the application (Phoenix) server need to have an SSL certificate configured in order to connect from the app server to the DB server...?
e.g: http://www.phoenixframework.org/docs/configuration-for-ssl ?
Microsoft has the following help guide:
https://learn.microsoft.com/en-us/azure/postgresql/concepts-ssl-connection-security
It seems to suggest we need OpenSSL on the App Server ... can anyone confirm?
Background
I was experiencing the same problem connecting Phoenix/Ecto/Postgrex to Azure Database for PostgreSQL server. Even after setting ssl: true in my Repo configuration, I was still not able to connect to the database with Postgrex even though connecting using psql "postgresql://...?sslmode=require" -U ... on the same machine succeeded. The error returned with ssl: true was:
[error] Postgrex.Protocol (#PID<0.1853.0>) failed to connect: **(DBConnection.ConnectionError) ssl connect: closed
** (DBConnection.ConnectionError) connection not available because of disconnection
(db_connection) lib/db_connection.ex:926: DBConnection.checkout/2
...
After digging through the source code, I discovered that the failing call was actually the ssl.connect/3 call from the Erlang ssl module:
# deps/postgrex/lib/postgrex/protocol.ex:535
defp ssl_connect(%{sock: {:gen_tcp, sock}, timeout: timeout} = s, status) do
  case :ssl.connect(sock, status.opts[:ssl_opts] || [], timeout) do
    {:ok, ssl_sock} ->
      startup(%{s | sock: {:ssl, ssl_sock}}, status)
    {:error, reason} ->
      disconnect(s, :ssl, "connect", reason)
  end
end
Doing some snooping with Wireshark, I was able to see that when connecting successfully with psql, I could see packets with TLSV1.2 as the protocol, but when postgrex was connecting with ssl: true I was seeing packets with SSL as the protocol before failing to connect.
Looking at the Ecto.Adapters.Postgres options docs, you'll see there's an ssl_opts configuration option which ends up getting passed to :ssl.connect/3 in which you can set versions to override the TLS version(s) used to connect.
Solution
I was able to connect to the database by adding the following to my Repo configuration:
ssl_opts: [
  versions: [:"tlsv1.2"]
]
My full configuration ended up looking like this:
config :myapp, Myapp.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "myapp#dev-db",
  password: "...",
  database: "myapp_dev",
  port: 5432,
  hostname: "dev-db.postgres.database.azure.com",
  pool_size: 10,
  ssl: true,
  ssl_opts: [
    versions: [:"tlsv1.2"]
  ]
I'm not really sure why the TLS version needs to be set explicitly, perhaps someone with more expertise in this area can shed some light on this.
You might also need to add your IP (or IP range) to the Postgres firewall in Azure. It's right under the SSL settings.
Erlang is usually built with OpenSSL and requires it for several libraries. You haven't posted the error you get with ssl: true, but if your Erlang was built without OpenSSL, that might be the cause. From the build/install guide:
OpenSSL -- The opensource toolkit for Secure Socket Layer and
Transport Layer Security. Required for building the application
crypto. Further, ssl and ssh require a working crypto application and
will also be skipped if OpenSSL is missing. The public_key application
is available without crypto, but the functionality will be very
limited.
What output do you get if you run :ssl.versions() in an iex shell?
Here is my database init() connection code using SSL and certificates. Maybe check your ssl_opts settings.
def init() do
  case Postgrex.start_link(
         hostname: App.Endpoint.config(:dbhost),
         username: App.Endpoint.config(:username),
         database: App.Endpoint.config(:dbname),
         port: App.Endpoint.config(:dbport),
         ssl: true,
         ssl_opts: [
           keyfile: "priv/cert.key",
           certfile: "priv/cert.crt"
         ]
       ) do
    {:ok, postgrex} ->
      postgrex
    _ ->
      :error
  end
end
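A brief usage sketch of that helper (the query is illustrative; Postgrex.query!/3 takes the connection, a statement, and a parameter list):
case init() do
  :error -> IO.puts("could not connect to the database")
  conn -> Postgrex.query!(conn, "SELECT version()", [])
end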
To add to the above answers, here is my ssl_opts block when using self-signed certificates and the auth option clientcert=verify-full set in pg_hba.conf. My connection uses TLSv1.3.
ssl_opts: [
  verify: :verify_peer,
  versions: [:"tlsv1.3"],
  ciphers: :ssl.cipher_suites(:all, {3, 4}),
  cacertfile: Path.expand("priv/certs/ca-cert.pem"),
  certfile: Path.expand("priv/certs/client-cert.pem"),
  keyfile: Path.expand("priv/certs/client-key.pem")
]
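For completeness, the matching server-side rule lives in pg_hba.conf. A hedged example of such a line (hostssl restricts the rule to TLS connections; clientcert=verify-full additionally checks that the client certificate's CN matches the database user name; the address range is an assumption):
hostssl  all  all  0.0.0.0/0  md5  clientcert=verify-full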