I am trying to connect to RDS running on AWS (Amazon Web Services) using SSL. The PostgreSQL pgAdmin III docs have only limited information about the fields on the SSL tab.
RDS instances are set up to accept SSL connections by default.
I've downloaded the public key from Amazon and converted it from a .pem to a .crt file using OpenSSL. On the SSL tab in pgAdmin III I entered the path to the converted key file in the "Server Root Certificate File" field.
I can connect to the instance without issue, but there is no indication that the data is being transferred over SSL. AWS does not set RDS instances to use SSL exclusively, so I may be connected without using SSL and not know it.
Does pgAdmin III show any indication when it's connected using SSL (like a lock icon)?
Can anyone provide additional info that describes the fields (SSL dropdown, Client Cert File, Client Key) on the SSL tab in pgAdmin III?
Thanks.
I have not used SSL with pgAdmin on AWS, but I have on another server, and I can tell you that you know when you are connected to a server via pgAdmin. I'm not sure where the ambiguity is: can you see the databases and tables?
The quoted post below might help you with connecting to a server via SSL.
On the client, we need three files. For Windows, these files must be in the %appdata%\postgresql\ directory; for Linux, the ~/.postgresql/ directory.
root.crt (trusted root certificate)
postgresql.crt (client certificate)
postgresql.key (private key)
Generate the needed files on the server machine, and then copy
them to the client. We'll generate the needed files in the /tmp/
directory.
First create the private key postgresql.key for the client machine,
and remove the passphrase.
openssl genrsa -des3 -out /tmp/postgresql.key 1024
openssl rsa -in /tmp/postgresql.key -out /tmp/postgresql.key
Then create the certificate postgresql.crt. It must be signed by our
trusted root (which is using the private key file on the server
machine). Also, the certificate common name (CN) must be set to the
database user name we'll connect as.
openssl req -new -key /tmp/postgresql.key -out /tmp/postgresql.csr -subj '/C=CA/ST=British Columbia/L=Comox/O=TheBrain.ca/CN=www-data'
openssl x509 -req -in /tmp/postgresql.csr -CA root.crt -CAkey server.key -out /tmp/postgresql.crt -CAcreateserial
Copy the three files we created from the server /tmp/ directory to the
client machine.
Copy the trusted root certificate root.crt from the server machine to
the client machine (for Windows pgadmin %appdata%\postgresql\ or for
Linux pgadmin ~/.postgresql/). Change the file permission of
postgresql.key to restrict access to just you (probably not needed on
Windows as the restricted access is already inherited). Remove the
files from the server /tmp/ directory.
From: http://www.howtoforge.com/postgresql-ssl-certificates
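For the file-permission step on Linux, a minimal example (assuming the client-side path described above) would be:
chmod 0600 ~/.postgresql/postgresql.key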
First, log in as your PostgreSQL admin user, then run the following to install sslinfo on RDS:
create extension sslinfo;
To verify whether you're connected via SSL, simply run the following query in your session:
select ssl_is_used();
If it returns true (t), then you're connected via SSL.
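The sslinfo extension also exposes a few related functions; for example, to see the negotiated protocol version and cipher in the same session:
select ssl_version(), ssl_cipher();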
Related
I have an application deployed on AWS EKS that uses an RDS PostgreSQL database. I have downloaded the intermediate and root certificates and added them to a trust store, as described in this post: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html
However, I cannot connect via SSL with sslmode=verify-full, and I think it's because I do not have a copy of the certificate generated when RDS creates the DB instance and installs the certificate on it, as described here: https://aws.amazon.com/premiumsupport/knowledge-center/rds-connect-ssl-connection/
The certificate generated when the database is provisioned has the hostname of the server as the Common Name, and I think this is used to verify the host when a client connects.
Does anyone know where I can download this certificate, or, if I have misunderstood how to do this, can you tell me what I am doing wrong?
Thanks
You need to do multiple things:
Download the ca certs from https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem.
Import this cert into the CA certs of the JDK/JRE in your Docker image using this command:
keytool -importcert -alias aws-certs -trustcacerts -file /path/to/global-bundle.pem -storepass changeit -cacerts -noprompt
Note: You might have to run this command as root/sudo depending on the permissions on the cacerts file in the JDK_HOME/lib/security folder.
Make changes to your Postgres JDBC URL as described here, basically adding sslmode=verify-full.
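For reference, the resulting JDBC URL might look something like this (the hostname, port, and database name here are placeholders, not values from the original setup):
jdbc:postgresql://your-db-instance.xxxxxxxx.us-east-1.rds.amazonaws.com:5432/mydb?ssl=true&sslmode=verify-full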
When I go to make a new data source in Data Studio, I'm prompted to enter the client/server certificates to make a secure connection. Where can I get this client certificate and key? How can I allow Google Data Studio to connect to my RDS instance?
Do the following:
Add the Google IP addresses to your RDS security group inbound rule list.
Generate a self-signed cert. This answer provides instructions.
openssl req -newkey rsa:2048 -nodes -keyout client.key -x509 -days 365 -out client.crt
Get the correct server PEM file for your region from AWS (see the example after this list).
Add your database user information to the form and attach your client.crt, client.key and AWS pem files.
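For step 3, one way to fetch a suitable PEM is to download the global bundle referenced in the earlier answer, for example:
curl -O https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem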
I have set up a SonarQube server and configured SSL certificates using Certbot to make the URL always HTTPS. As of now, the PostgreSQL database has a public IP, and below are the values changed in the sonar.properties file:
sonar.jdbc.username=weakusername
sonar.jdbc.password=strongpassword
sonar.web.host=127.0.0.1
sonar.jdbc.url=jdbc:postgresql://xx.xxx.xxx.xxx/sonarqube
sonar.search.javaOpts=-Xms512m -Xmx512m
# Change max limits
sysctl -w vm.max_map_count=262144
I am using Cloud SQL Postgres as the database. I would like to allow only SSL connections to the database, and here is the way to do it: generate a client certificate, etc.
After setting "Allow only SSL connections" to true, I understand there is a way to connect to the database using the client certificate, as described here.
Below is the command to start the psql client:
psql "sslmode=verify-ca sslrootcert=server-ca.pem \
sslcert=client-cert.pem sslkey=client-key.pem \
hostaddr=[INSTANCE_IP] \
user=postgres dbname=[DB_NAME]"
However, SonarQube is not able to connect to the Database (Not sure how to tell SonarQube to use the client certificates).
What changes are required in the configuration file to make SonarQube use appropriate client certificate and connect to the database using SSL?
You should add the following to the URL:
jdbc:postgresql://xx.xxx.xxx.xxx/sonarqube?ssl=true&sslmode=verify-ca&sslrootcert=/path/to/server-ca.pem&sslkey=/path/to/client-key.pem&sslcert=/path/to/client-cert.pem
See the documentation for the available SSL connection parameters and SSL client configuration.
Convert the client key from PEM to PKCS#8 (PK8), the format the PostgreSQL JDBC driver expects for sslkey:
openssl pkcs8 -topk8 -inform PEM -outform DER -in client-key.pem -out client-key.pk8 -nocrypt
Be sure to update the value for the sslkey query param in sonar.jdbc.url with the new path/filename.
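Putting those pieces together, the final property might look something like this (the IP and file paths are illustrative):
# paths below are illustrative; adjust to your environment
sonar.jdbc.url=jdbc:postgresql://xx.xxx.xxx.xxx/sonarqube?ssl=true&sslmode=verify-ca&sslrootcert=/path/to/server-ca.pem&sslcert=/path/to/client-cert.pem&sslkey=/path/to/client-key.pk8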
I'm trying to set up a PostgreSQL db server with ssl. Or more specifically, I've successfully set up the server and ssl is working... as long as there are no intermediate certificates. It's not working if there is an intermediate cert.
Background / Setup:
I have a root CA.cert.
I used the CA to sign an intermediate.csr and create an intermediate.cert.
I used the intermediate.cert to sign a postgres.csr and create a postgres.cert.
The CA.cert, postgres.key and postgres.cert have been installed on the server.
The CA.cert has been set as a trusted certificate.
postgresql.conf has been modified to point to the above files.
I used the intermediate.cert to sign a client_0.csr and create a client_0.cert.
I used the CA.cert to sign a client_1.csr and create a client_1.cert.
I create a client chain.cert: cat client_0.cert intermediate.cert > chain.cert
Proper extensions have been used, and both client certs have their common name set to the (username) of the db user being connected as.
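As a sanity check on that chain (not part of the original setup; filenames as above), one could confirm with openssl which client certificates validate against the trusted root:
openssl verify -CAfile CA.cert client_1.cert
openssl verify -CAfile CA.cert -untrusted intermediate.cert client_0.cert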
Fun, aka The Problem.
psql "sslmode=require hostname=(host) db=(db) sslcert=client_1.cert sslkey=client_1.key" -U (username): Great success!
psql "sslmode=require hostname=(host) db=(db) sslcert=client_0.cert sslkey=client_0.key" -U (username): alert unknown ca. This is expected, client_0.cert is not signed by CA.cert.
psql "sslmode=require hostname=(host) db=(db) sslcert=chain.cert sslkey=client_0.key" -U (username): alert unknown ca. Uh oh.
Confusion
Documentation for connecting to a postgresql instance with ssl enabled and intermediate certificates present:
In some cases, the client certificate might be signed by an
"intermediate" certificate authority, rather than one that is directly
trusted by the server. To use such a certificate, append the
certificate of the signing authority to the postgresql.crt file, then
its parent authority's certificate, and so on up to a certificate
authority, "root" or "intermediate", that is trusted by the server,
i.e. signed by a certificate in the server's root.crt file.
https://www.postgresql.org/docs/9.6/static/libpq-ssl.html
I have also tried cat-ing the full chain (client cert, then intermediate, then CA) into one chain file, but no luck.
Question
What have I done wrong here?
Thank you,
How do I make a connection from the mongo-spark connector to MongoDB when only TLS/SSL is enabled for the MongoDB instance?
How do I pass the URI and collection name in the read config to connect to a TLS/SSL-enabled MongoDB instance?
Thanks in advance.
To make the SSL connection from Spark to the Mongo server, you will need to trust the Mongo certificate, or the CA (certificate authority) that has signed that certificate. This is the most important part, and the trickiest one for me to figure out.
Spark is a Java application, so it gets its certificates from a JKS trustStore. You will need to import the Mongo certificate (only the public part) into a trustStore to make it available to Spark. To do so:
Get the Mongo certificate: ask the DBA or the sysadmin who set up the MongoDB server to provide the certificate to you. Another approach is to get it with openssl:
$ openssl s_client -connect mongodb:27017
CONNECTED(00000003)
depth=0 C = ES, ST = Madrid, L = Madrid, O = HOME, OU = HOME, CN=mongodb mongo.hostname.local
verify error:num=19:self signed certificate in certificate chain
verify return:0
---
Certificate chain
0 s:/C=ES/ST=Madrid/L=Madrid/O=COMPANY/OU=AREA/CN=mongo.hostname.local
i:/C=ES/ST=Madrid/L=Madrid/O=COMPANY/OU=AREA/CN=mongo.hostname.localIssuing CA
---
Server certificate
-----BEGIN CERTIFICATE-----
[..... A bunch of base64 text....]
-----END CERTIFICATE-----
Get the part from the -----BEGIN CERTIFICATE----- to the -----END CERTIFICATE----- and save it in a .cert file.
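If copying from the terminal output is error-prone, the same certificate can be extracted in one step (a sketch assuming the same mongodb:27017 endpoint as above):
openssl s_client -connect mongodb:27017 </dev/null 2>/dev/null | openssl x509 -outform PEM > mongodb.crt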
Import it into a trustStore
$ keytool -import -file /path/to/your/mongodb.crt -alias mongodb -keystore /path/to/your/trustStore.jks
Enter keystore password: 123456
...
...
Trust this certificate? [no]: yes
Certificate was added to keystore
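To double-check the import (using the same alias and the example password from above):
keytool -list -keystore /path/to/your/trustStore.jks -alias mongodb -storepass 123456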
Make sure the keystore is accessible from all your Spark cluster nodes.
Now you have your server certificate imported. If you need mutual TLS, you will also need to provide a valid client certificate. This certificate, and its private key, should be in a JKS keyStore (it could be the same file as the trustStore where you stored the Mongo server certificate, since it uses the same format). If you are not going to use mutual TLS you don't need to do this, but you do have to check that the MongoDB instance is able to accept connections without client certificates; this is controlled with the sslAllowConnectionsWithoutCertificates flag.
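If you do need mutual TLS, one way to build that keyStore from a PEM client certificate and key is to go through PKCS#12 (the client.crt/client.key filenames here are assumptions, not from the original post):
# client.crt / client.key are example names for your client certificate and private key
openssl pkcs12 -export -in client.crt -inkey client.key -name mongo-client -passout pass:yourPassword -out client.p12
keytool -importkeystore -srckeystore client.p12 -srcstoretype PKCS12 -srcstorepass yourPassword -destkeystore /path/to/your/keyStore.jks -deststorepass yourPassword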
The next step is specifying in the connection URI that you want to use TLS. This is fairly simple: just add ?ssl=true to your connection string. So the connection URI will be something like this:
mongodb://user:pw@host:port/db.collection?ssl=true
Now you can submit your job. When submitting the job we also need to specify the location of our trustStore and the libraries for the Mongo connector:
/spark/bin/spark-submit \
--master spark://spark-master:7077 \
--packages org.mongodb.spark:mongo-spark-connector_2.11:2.2.0 \
--conf spark.executor.extraJavaOptions="-Djavax.net.ssl.trustStore=/path/to/your/trustStore.jks -Djavax.net.ssl.trustStorePassword=yourPassword" \
--conf spark.driver.extraJavaOptions="-Djavax.net.ssl.trustStore=/path/to/your/trustStore.jks -Djavax.net.ssl.trustStorePassword=yourPassword" \
/yourJob.jar
We use the extraJavaOptions for the driver and the executor to pass these parameters. If you are using mutual TLS, include the following extra java options:
-Djavax.net.ssl.keyStore=/path/to/your/keyStore.jks
-Djavax.net.ssl.keyStorePassword=yourPassword
The /path/to/your/keyStore.jks is where you have stored your client certificates.
If the Spark connector library is not already installed, you may run into trouble: the Spark process will go to Maven to download the library, but it will not be able to verify the Maven certificates, because we have pointed it at a trustStore containing just our certificate. One workaround is to import our certificate directly into the default trust store located at $JAVA_HOME/jre/lib/security/cacerts. The default password is changeit. Remember to do this on every worker node too.
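That workaround could look like this (same certificate file and default changeit password as mentioned above):
keytool -import -file /path/to/your/mongodb.crt -alias mongodb -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit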
I hope it helps!
Sources:
https://github.com/brunocfnba/spark-mongo-ssl
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.5/bk_spark-component-guide/content/spark-encryption.html
https://community.hortonworks.com/articles/147113/how-to-configure-your-spark-application-to-use-mon.html
https://mapr.com/support/s/article/Unable-to-find-valid-certification-path-to-requested-target-error-while-accessing?language=en_US