AWS RDS SSL - Is the SSL server certificate different for each RDS instance, or the same? - postgresql

I have two AWS accounts, each with its own RDS instance (not publicly accessible) running the PostgreSQL 12.5 engine. I downloaded the RDS certificate bundle from "https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem".
I am using JDBC (the PostgreSQL driver) with the properties ssl=true and sslrootcert="path to the above certificate" to establish secure connections.
My questions:
This certificate is the same for both AWS accounts, whose instances have different hostnames, so how does this work? Does the SSL handshake verify that the client (the JDBC connection) is talking to rds.amazonaws.com, or to the actual RDS instance, which has its own name?
RDS certificates are replaced every 5 years, which means applications also have to update the certificate every 5 years, or sooner once a new certificate is available from RDS. Is this correct?

Q1.
Yes, it's the same for all accounts. You can download it from the docs. It is about the instances, as explained in the docs:
Using a server certificate provides an extra layer of security by validating that the connection is being made to an Amazon RDS DB instance.
Q2.
You can update a few months before the actual expiration. Last year it happened, as explained here.
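A quick way to keep an eye on that deadline is to print the expiry dates of the certificates inside the downloaded bundle. A minimal Java sketch, assuming the bundle was saved to a placeholder path:

    import java.io.FileInputStream;
    import java.security.cert.Certificate;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;

    public class BundleExpiryCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder path to the downloaded RDS CA bundle (PEM format).
            try (FileInputStream in = new FileInputStream("/path/to/global-bundle.pem")) {
                CertificateFactory cf = CertificateFactory.getInstance("X.509");
                // generateCertificates parses every certificate in the PEM bundle.
                for (Certificate cert : cf.generateCertificates(in)) {
                    X509Certificate x509 = (X509Certificate) cert;
                    System.out.println(x509.getSubjectX500Principal() + " expires " + x509.getNotAfter());
                }
            }
        }
    }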

The servers' certificates are each different. Each server sends its own cert when you ask to establish an SSL connection to it. The file you download is the cert for the certificate authority (CA) that signs each of the server certs. You (or your JDBC driver) use it to verify that the per-server certs are genuine.
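To make the handshake question concrete, here is a minimal JDBC sketch of the setup described in the question; the endpoint, credentials, and file path below are placeholders. With sslmode=verify-full the driver checks both that the instance's certificate chains up to the downloaded CA bundle and that the certificate's hostname matches the endpoint you connected to:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class RdsSslExample {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint: each RDS instance has its own hostname, but every
            // instance's cert chains up to the same downloadable CA bundle.
            String url = "jdbc:postgresql://mydb.abc123.us-east-1.rds.amazonaws.com:5432/mydb";

            Properties props = new Properties();
            props.setProperty("user", "dbuser");
            props.setProperty("password", "dbpassword");
            // verify-full = verify the CA chain AND that the cert matches this hostname.
            props.setProperty("sslmode", "verify-full");
            props.setProperty("sslrootcert", "/path/to/global-bundle.pem");

            try (Connection conn = DriverManager.getConnection(url, props)) {
                System.out.println("Connected with verified SSL: " + conn.isValid(5));
            }
        }
    }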

Related

Create custom server CA for GCP databases

For GCP managed SQL databases: we want to enable SSL/TLS using a common client cert across all our GCP databases. From reviewing the official Google docs (https://cloud.google.com/sql/docs/mysql/configure-ssl-instance), it seems the server certificate is not customizable and we need a different client certificate for each database instance.
Is there any solution where we can use a common client cert for multiple database instances?

MongoDB connection security

I have some MongoDB connection security concerns for my environment.
Here is my environment:
one ECS instance hosted in the cloud that has a public IP, but no domain and no SSL certificate either
a MongoDB service installed on this ECS instance that requires username/password authentication
only specific whitelisted IPs can access the ECS instance/MongoDB
I'm wondering whether the data transfer between this MongoDB and my local PC is safe or not.
Will the data be encrypted during transmission, or will it be plain text that anyone on the internet can capture and read? (I don't have HTTPS, so it's not using TLS/SSL.)
Can anyone explain the mechanism or give some doc links?
Thanks!
Since you're not using SSL, your data in flight is not encrypted. You need to use TLS/SSL to encrypt the network transmission. You must have the TLS/SSL certificates as PEM files, which are concatenated certificate containers.
In addition to encrypting connections, TLS/SSL allows for authentication using certificates, both for client authentication and for internal authentication of members of replica sets and sharded clusters.
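As an illustration only (not part of the original setup), this is roughly what an encrypted connection looks like from the MongoDB Java driver once TLS is enabled on the server; the host, credentials, and auth database are placeholders:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;

    public class MongoTlsSketch {
        public static void main(String[] args) {
            // Placeholder host and credentials. tls=true makes the driver encrypt the
            // connection; the server's certificate (or the CA that signed it) must be
            // trusted by the JVM, e.g. by importing the PEM into the Java trust store.
            String uri = "mongodb://appUser:appPassword@203.0.113.10:27017/?authSource=admin&tls=true";

            try (MongoClient client = MongoClients.create(uri)) {
                // List database names just to prove the encrypted, authenticated connection works.
                client.listDatabaseNames().forEach(System.out::println);
            }
        }
    }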

Is it possible to authenticate a DB user in RDS (PostgreSQL) via the certificate used to encrypt SSL connections?

I'm trying to apply security best practices to an AWS RDS PostgreSQL instance, but Amazon seems to have gone out of its way to prevent some fairly common and routine authentication features. I don't seem to be able to authenticate against any external source, which is a pain, since now I have to maintain DB users completely separately from normal user management. And it seems that, despite the fact that I can use SSL to connect, none of the functionality that might actually validate a client's cert against the server's CA is accessible in RDS. Is this true? It seems like the easiest thing in the world for Amazon to sign certs with its CA and then validate those certs against that CA when connections are established, yet I cannot find any mention of how to do it in the documentation or out on the web. Am I really confined ONLY to password authentication of db-internal users? This is almost hard to believe, but after days of research it is the only conclusion I have been able to support.
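For comparison, this is roughly how client-certificate authentication is wired up with the PostgreSQL JDBC driver against a self-managed server; every path and name below is a placeholder. The point of the question is that the matching server-side pieces (a custom CA and cert entries in pg_hba.conf) are not exposed by RDS, so this approach does not appear to be available there:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class ClientCertAuthSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder self-managed server; not an RDS endpoint.
            String url = "jdbc:postgresql://db.example.com:5432/mydb";

            Properties props = new Properties();
            props.setProperty("user", "dbuser");
            // No password: with cert authentication the server maps the client
            // certificate's CN to a database role (requires "cert" lines in
            // pg_hba.conf, which RDS does not let you configure).
            props.setProperty("sslmode", "verify-full");
            props.setProperty("sslrootcert", "/path/to/server-ca.pem");
            props.setProperty("sslcert", "/path/to/client.crt");
            // The JDBC driver expects the client key in PKCS#8 format.
            props.setProperty("sslkey", "/path/to/client.pk8");

            try (Connection conn = DriverManager.getConnection(url, props)) {
                System.out.println("Authenticated via client certificate: " + conn.isValid(5));
            }
        }
    }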

How to update SSL certificate on EC2 instances

Here is my dilemma. We currently run quite a few servers on the AWS EC2 service. Before my time, the server images were configured with the SSL certificate baked in. Now the certificate is about to expire and we need to replace the old one with the new one. I have read the AWS documentation on uploading a new certificate to IAM, but it is very confusing. Is there any way, for example using PowerShell commands, to upload the new certificate to the existing servers?
Thanks in advance.
If the expiring certificates are on existing instances and NOT on an Elastic Load Balancer, then you need to update each server as needed, on that server.
This is not an IAM-type server certificate.
So you need to touch each server and upgrade it. If you have AMIs for each server, you may need to create new AMIs after upgrading the certificate.
See Install certificate with PowerShell on remote server for some suggestion on PowerShell methods of installing a certificate file remotely.
Depending on your budget, you could consider using an ELB even for one instance and installing the SSL cert there. It makes it easier in the long run to manage certs at the ELB level rather than at the server/AMI level.

Npgsql 3.0.3 error with Power BI Desktop

I'm receiving the following error when connecting to an AWS Postgres database that requires SSL. I recently upgraded from Npgsql 2.3.2 (which was buggy) to 3.0.3, which won't connect. Any suggestions would be appreciated.
DataSource.Error: TlsClientStream.ClientAlertException: CertificateUnknown: Server certificate was not accepted. Chain status: A certificate chain could not be built to a trusted root authority.
at TlsClientStream.TlsClientStream.ParseCertificateMessage(Byte[] buf, Int32& pos)
at TlsClientStream.TlsClientStream.TraverseHandshakeMessages()
at TlsClientStream.TlsClientStream.GetInitialHandshakeMessages(Boolean allowApplicationData)
at TlsClientStream.TlsClientStream.PerformInitialHandshake(String hostName, X509CertificateCollection clientCertificates, RemoteCertificateValidationCallback remoteCertificateValidationCallback, Boolean checkCertificateRevocation)
Details:
DataSourceKind=PostgreSQL
I was able to fix the issue by installing the Amazon RDS public certificate on my machine. Once I did this, I was able to connect.
Steps I followed:
Download the AWS RDS public certificate.
Create a .crt file from the downloaded .pem file (sample instructions here).
Install the certificate (.crt file) on the machine.
Connect!
The Npgsql docs give the solution as changing the default Trust Server Certificate value from 'false' to 'true' in the connection string.
Unfortunately, neither Excel (AFAIK) nor Power BI will allow you to edit the connection string. So if you are unable to get the SSL certificate from the DB admin (as suggested in another answer), or the SSL cert has a different server name to the name you connect to (in my case an IP address), there is not much that can be done.
I can see two ways of fixing this. Either Shay & co from npgsql (who are doing an excellent job btw) provide some way for users to change the default settings for the connection string parameters. Or Microsoft allows users to send keywords in the connection dialog of Power BI (and Excel).
Npgsql 2.x didn't perform validation on the server's certificate by default, so self-signed certificates were accepted. The new default is to perform validation, which is probably why your connection is failing. Specify the Trust Server Certificate connection string parameter to get the previous behavior back.
You can read more on the Npgsql security doc page; note also that this change is mentioned in our migration notes.
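Where you do control the connection string directly (outside Power BI and Excel), the relevant Npgsql keywords look something like this; the host, database, and credentials are placeholders:

    Host=mydb.example.com;Database=mydb;Username=dbuser;Password=dbpassword;SSL Mode=Require;Trust Server Certificate=true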
I had the same issue connecting Power BI to a locally hosted PostgreSQL server, and it turned out to be easy to solve if you can get the right information. Recent Npgsql versions will only connect over SSL if they trust the certificate of the server. As a Windows application, Power BI uses the Windows certificate store to decide what to trust. If you can get the SSL cert for the PostgreSQL server (or the CA cert used to sign it) and tell Windows to trust that certificate, Power BI will trust it too.
In the configuration folder for the PostgreSQL server there is a postgresql.conf file; search it for the ssl settings, one of which gives the location of the SSL cert. Note: NOT the key file, which contains the private key, only the cert file, which contains the public key. Copy it or its content to the machine running Power BI and import it using Run | mmc | Add Snap-in... | Certificates (Google it).
Look at the server name once you have imported the cert and connect from Power BI using the same server name (so the cert matches the connection). That solved the problem for me. If PostgreSQL is configured to insist on an SSL connection, you might have to do the same for an ODBC connection too.
It's not the best way, but it worked for me, and it is only an option if you don't need encryption for security reasons.
Go to the Postgres config file on your DB server and change
ssl = true
to
ssl = false
Then open Power BI Desktop, go to File -> Options and settings -> Data source settings; in the Global permissions list you will have your saved connection. Press Edit Permissions and uncheck "Encrypt connections".
Then it will work.
WARNING: THIS IS NOT RECOMMENDED IF YOUR DB IS OPEN TO THE PUBLIC.
Regards,
Davlik