TLS connection to MongoDB in Quarkus - mongodb

I am attempting to connect from Quarkus to a MongoDB instance in the cloud which requires TLS. I have the certificate file for the server but cannot see how to use it with Quarkus.
I currently have the following properties set:
quarkus.mongodb.connection-string = mongodb://blah:blah#mydomain.com:27017
quarkus.mongodb.database=school
quarkus.mongodb.tls=true
There does not appear to be anywhere to set the certificate file.
I cannot get past this error
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Warren

There are no MongoDB-specific TLS settings in Quarkus.
If your certificate is issued by a root CA your JVM already knows (which doesn't seem to be the case here), there is nothing more to do.
If your certificate is not known by your JVM, you need to use the keytool utility of your JVM to import it into the truststore.
Be careful: if you deploy your application as a native executable, there are some extra steps to follow: https://quarkus.io/guides/native-and-ssl
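As a rough sketch (not from the original answer), importing the certificate into the JVM's default truststore could look like this; the alias and file name are placeholders, and changeit is the default cacerts password:
keytool -importcert -alias mongodb-server -file mongodb-server-cert.pem -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit
The cacerts location varies by JDK version (on Java 9+ you can pass -cacerts instead of -keystore <path>). Alternatively, import the certificate into a dedicated truststore and point the application at it with the javax.net.ssl.trustStore and javax.net.ssl.trustStorePassword system properties.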

Related

Add Relying Party Trust is failing in ADFS SAML

I've spent quite a few hours fighting with these issues, so I thought a quick recap might be helpful for somebody else too.
First, when trying to import an RP from a metadata URL, I was getting this error:
An error occurred during an attempt to read the federation metadata. Verify that the specified URL or hostname is a valid federation metadata endpoint.
...
Error message: The underlying connection was closed: An unexpected error occurred on a send.
The problem turned out to be caused by the fact that Windows Server (at least up to 2016) uses TLS 1.0 for the .NET Framework, in which the ADFS configuration wizard is implemented, while my service hosting the metadata document only allowed TLS 1.2 as the minimum version.
Dropping the minimum version to TLS 1.0 is a no-go from a security point of view, so the proper fix would be to enable TLS 1.2 as the default version on the ADFS server.
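(For reference, and not taken from the original post: assuming the wizard runs on .NET Framework 4.x, a commonly documented way to make TLS 1.2 the default is to set the strong-crypto registry values and then restart ADFS or reboot.)
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord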
That would solve the issue (which I confirmed with a test), but then some of the other RPs that only support TLS 1.0 would stop working, so I had to give up on importing metadata directly from a URL and use the file import option instead.
In this case another error popped up, which happened to be:
An error occurred during an attempt to read the federation metadata. Verify that the specified URL or hostname is a valid federation metadata endpoint.
...
Error message: Entity descriptor '...'. ID6018: Digest verification failed for reference '...'.
This one turned out to be caused by me: I had formatted the XML in the metadata file with line breaks and tabs to improve readability (originally it is all on a single line). ADFS won't allow that, so the document must be exactly as it came out of the metadata endpoint.
The same issue might result in different error messages and codes, depending on the Windows and ADFS versions. For example, this one is possibly caused by a failed metadata integrity check as well:
An error occurred during an attempt to read the federation metadata. Verify that the specified URL or hostname is a valid federation metadata endpoint.
...
Error message: Entity descriptor '...'. ID6013: The signature verification failed.
After successfully importing the raw metadata file and adding a suitable Claim Issuance Policy, I finally got it working.

How to pass certificate path with Postgres database url string for SSL connection

I am trying to secure the connection to an AWS RDS instance over SSL for my Spring Boot application. I have looked at several blogs and the official documentation, and accordingly modified my connection string to contain certain parameters related to the SSL connection.
I have my certificate placed inside a cert folder in resources. Below is how I have tried to pass the certificate path:
jdbc:postgresql://myamazondomain.rds.amazonaws.com:5432/db_name?sslmode=verify-full&sslrootcert=/cert/rds-ca-cert_name.p12&password=my_passwrord
Also I have tried:
jdbc:postgresql://myamazondomain.rds.amazonaws.com:5432/db_name?sslmode=verify-full&sslrootcert=/src/main/resources/cert/rds-ca-cert_name.p12&password=mypassword
However, when I try to connect to the RDS from my ECS container, I receive the following error:
ERROR com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Exception during pool initialization.
org.postgresql.util.PSQLException: Could not open SSL root certificate file /cert/rds-ca-cert_name.p12.
at org.postgresql.ssl.LibPQFactory.<init>(LibPQFactory.java:120)
at org.postgresql.core.SocketFactoryFactory.getSslSocketFactory(SocketFactoryFactory.java:61)
at org.postgresql.ssl.MakeSSL.convert(MakeSSL.java:33)
at org.postgresql.core.v3.ConnectionFactoryImpl.enableSSL(ConnectionFactoryImpl.java:435)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:94)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)
at org.postgresql.Driver.makeConnection(Driver.java:454)
Can someone suggest what the error is here? What is the correct way of passing a certificate stored on the classpath to the JDBC connection string?
You need to use the SingleCertValidatingFactory class to specify a certificate file on the classpath (or from the file system, an environment variable, etc.). This factory takes an sslfactoryarg parameter where you can pass the path to the certificate file.
Your URL should look like this:
jdbc:postgresql://myamazondomain.rds.amazonaws.com:5432/db_name?sslmode=verify-full&sslfactory=org.postgresql.ssl.SingleCertValidatingFactory&sslfactoryarg=classpath:cert/rds-ca-cert_name.p12
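In a Spring Boot application, the same URL would typically be supplied through the datasource properties; a minimal sketch, assuming the standard spring.datasource.* property names and placeholder credentials (not taken from the original answer):
spring.datasource.url=jdbc:postgresql://myamazondomain.rds.amazonaws.com:5432/db_name?sslmode=verify-full&sslfactory=org.postgresql.ssl.SingleCertValidatingFactory&sslfactoryarg=classpath:cert/rds-ca-cert_name.p12
spring.datasource.username=db_user
spring.datasource.password=my_password
Keeping the credentials in properties (or environment variables) also avoids appending the password to the URL as in the original attempts.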

Sagemaker certificate issue with Kubernetes

I have created a docker container that is using Sagemaker via the java sdk. This container is deployed on a k8s cluster with several replicas.
The container is making simple requests to Sagemaker to list some models that we have trained and deployed. However, we are now having issues with a Java certificate. I am quite a novice with k8s and certificates, so I would appreciate some help fixing the issue.
Here are some traces from the log when it tries to list the endpoints:
org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:394)
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:353)
at com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:132)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:141)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
at com.amazonaws.http.conn.$Proxy67.connect(Unknown Source)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1236)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
... 70 common frames omitted
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:397)
at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:302)
at sun.security.validator.Validator.validate(Validator.java:262)
at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1621)
... 97 common frames omitted
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:392)
... 103 common frames omitted
This most likely has to do with a custom SSL certification path added to your network by your admin. You might want to inspect the SSL root certificates by opening any secured website in your browser and clicking the Secure link to the left of the address bar (at least this is how it is in Chrome). You will see a popup showing the certificate and certification information. Go to its certification path and look at the root certificate; if it is a custom certificate, then you will need to add it to your cacerts file. Read this link for more details.
I think I have found the answer to my problem. I set up another k8s cluster and deployed the container there as well. It works fine there and the certificate issue does not happen. When investigating further, I noticed that there were some issues with DNS resolution on the first k8s cluster; in fact, the containers with certificate issues could not ping google.com, for example.
I fixed the DNS issue by not relying on CoreDNS and instead setting the DNS configuration in the deployment.yaml file. I am not sure I understand exactly why, but this seems to have fixed the certificate issue.
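The original post doesn't show the manifest, but a pod-level DNS override in a Deployment generally looks something like the sketch below (the resolver address and search domain are placeholders):
spec:
  template:
    spec:
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 10.0.0.2              # placeholder: your upstream/VPC resolver
        searches:
          - default.svc.cluster.local
        options:
          - name: ndots
            value: "2"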
The error message you're receiving occurs when Java does not know about the root certificate returned by a TLS endpoint. This often occurs if the set of available root certificates has been changed.
Per https://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html#Customization:
"If a truststore named <java-home>/lib/security/jssecacerts is found, it is used.
If not, then a truststore named <java-home>/lib/security/cacerts is searched for and used (if it exists).
Finally, if a truststore is still not found, then the truststore managed by the TrustManager will be a new empty truststore."
OpenSSL is a good tool for debugging such certificate issues. You can use the following command to retrieve the certificates returned by an endpoint, which may help you determine what the certificate chain looks like.
openssl s_client -showcerts -connect www.example.com:443 </dev/null
You can view the list of certificates that Java knows about using keytool, a utility vended with the JRE.
keytool -list -cacerts
Some system administrators will override the default certificates by writing an alternative truststore file into the default location. Other times, teams may override the default using the javax.net.ssl.trustStore system property.
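For example, pointing a JVM at a custom truststore on the command line could look like this (the paths, password, and jar name are placeholders):
java -Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit -jar my-app.jar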
Finally, you can use the jps utility, also vended with the JRE, to see the system properties set on a running Java process.
jps -v

Postgresql : SSL certificate error unable to get local issuer certificate

In PostgreSQL, whenever I execute an API URL over a secure connection with a query like the one below
select *
from http_get('https://url......');
I get an error:
SSL certificate problem: unable to get local issuer certificate
For this I have already placed an SSL folder in my Azure database installation directory at the following path:
C:\Program Files\PostgreSQL\9.6\ssl\certs
What should I do to get rid of this? Is there an SSL extension available, or do I need configuration changes or some other effort?
Please let me know the possible solutions for it.
A few questions...
First, are you using this contrib module: https://github.com/pramsey/pgsql-http ?
Is the server that serves https://url....... using a self-signed (or invalid) certificate?
If the answer to those two questions is "yes" then you may not be able to use that contrib module without some modification. I'm not sure how limited your access is to PostgreSQL in Azure, but if you can install your own C-based contrib modules there is some hope...
pgsql-http only exposes certain CURLOPT values (see https://github.com/pramsey/pgsql-http#curl-options), which are settable with http_set_curlopt().
For endpoints using self-signed certificates, the CURLOPT you would want support for, in order to ignore SSL verification errors, is CURLOPT_SSL_VERIFYPEER.
If there are other issues, such as SSL/TLS protocol or cipher mismatches, there are other CURLOPTs that could be patched in, but those are also not available without customization of the contrib module.
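If that option were exposed by your build of the module, the call would follow the same pattern as the other curl options; the example below is purely illustrative and will fail on a stock build:
-- illustrative only: requires a build of pgsql-http that exposes CURLOPT_SSL_VERIFYPEER
SELECT http_set_curlopt('CURLOPT_SSL_VERIFYPEER', '0');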
I don't think anything in your
C:\Program Files\PostgreSQL\9.6\ssl\certs
folder has any effect on the http_get() functionality.
If you don't want to get your hands dirty compiling and installing custom contrib modules, you can create an issue on the GitHub page of the maintainer and see if it gets picked up.
You might also take a peek at https://github.com/pramsey/pgsql-http#why-this-is-a-bad-idea because the author of the module makes several very good points to consider.

Understanding OPC-UA Security using Eclipse Milo

I am new to this OPC-UA world and Eclipse Milo.
I do not understand how security works here.
Looking at the client-examples provided by eclipse-milo, I see a few security-related properties being used to connect to the OPC UA server:
SecurityPolicy,
MessageSecurityMode,
clientCertificate,
clientKeyPair,
setIdentityProvider,
How are the above configurations linked with each other?
I was trying to run client-examples -> BrowseNodeExample.
This example internally runs the ExampleServer.
The ExampleServer is configured to run with the Anonymous and UsernamePassword providers. It is also bound to accept SecurityPolicy None, Basic128Rsa15, Basic256, and Basic256Sha256 with MessageSecurityMode SignAndEncrypt, except for SecurityPolicy.None, where the MessageSecurityMode is None too.
The problem is that with the AnonymousProvider I can connect to the server with every SecurityPolicy and MessageSecurityMode pair mentioned above (without client certificates provided).
But I cannot do the same with the UsernameProvider: only the SecurityPolicy/MessageSecurityMode pair None/None works successfully.
All other pairs throw a security-checks-failed exception (when a certificate is provided) or a user-access-denied error (when no client certificate is provided). How can I make this work?
Lastly, it would be really nice if someone could point me to proper user documentation for Eclipse Milo, since I could not find any documentation except the example code, which is not documented either.
SecurityPolicy and MessageSecurityMode go hand-in-hand. The security policy dictates the set of algorithms that will be used for signatures and encryption, if any. The message security mode determines whether the messages will be signed, signed and encrypted, or neither in the case where no security is used.
clientCertificate and clientKeyPair must be configured if you plan to use security. You can't use encryption or signatures if you don't have a certificate and private key, after all.
The IdentityProvider is used to provide the credentials that identify the user of the session, if any.
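To see how these pieces fit together, here is a rough client-side sketch (the endpoint URL, credentials, and certificate loading are placeholders, and the builder methods shown are from the 0.x client API, so they may differ slightly between Milo versions):

import java.security.KeyPair;
import java.security.cert.X509Certificate;
import java.util.List;

import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.sdk.client.api.config.OpcUaClientConfig;
import org.eclipse.milo.opcua.sdk.client.api.identity.UsernameProvider;
import org.eclipse.milo.opcua.stack.client.DiscoveryClient;
import org.eclipse.milo.opcua.stack.core.security.SecurityPolicy;
import org.eclipse.milo.opcua.stack.core.types.enumerated.MessageSecurityMode;
import org.eclipse.milo.opcua.stack.core.types.structured.EndpointDescription;

public class SecureClientSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: load these from your own keystore instead of using null.
        X509Certificate clientCertificate = null; // e.g. (X509Certificate) keyStore.getCertificate("client-alias")
        KeyPair clientKeyPair = null;             // e.g. new KeyPair(publicKey, privateKey)

        // 1. Discover the endpoints the server offers.
        List<EndpointDescription> endpoints =
            DiscoveryClient.getEndpoints("opc.tcp://localhost:12686/milo").get();

        // 2. Pick the endpoint whose SecurityPolicy/MessageSecurityMode pair you want to use.
        EndpointDescription endpoint = endpoints.stream()
            .filter(e -> SecurityPolicy.Basic256Sha256.getUri().equals(e.getSecurityPolicyUri()))
            .filter(e -> e.getSecurityMode() == MessageSecurityMode.SignAndEncrypt)
            .findFirst()
            .orElseThrow(() -> new IllegalStateException("no matching endpoint"));

        // 3. The certificate and key pair secure the channel; the IdentityProvider
        //    identifies the session user independently of the transport security.
        OpcUaClientConfig config = OpcUaClientConfig.builder()
            .setEndpoint(endpoint)
            .setCertificate(clientCertificate)
            .setKeyPair(clientKeyPair)
            .setIdentityProvider(new UsernameProvider("user", "password"))
            .build();

        OpcUaClient client = OpcUaClient.create(config);
        client.connect().get();
    }
}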
When the ExampleServer starts up it logs that it's using a temporary security directory, something like this: security temp dir: /var/folders/z5/n2r_tpbn5wd_2kf6jh5kn9_40000gn/T/security. When a client connects using any kind of security, its certificate is not initially trusted by the server, resulting in the Bad_SecurityChecksFailed errors you're seeing. Inside this directory you'll find a folder named rejected, where rejected client certificates are stored. If you move the certificate(s) to the trusted folder, the client should then be able to connect using security.