Service Fabric client encryption - certificate

Does the client x.509 certificate encrypt the data as well as handle authorization?
Documentation says it handles authorization and message signing. But does that mean the data is encrypted in transit?

It is NOT encrypted when using a secure cluster with certificates (node-to-node + client-to-node) over the default RPC endpoints. In Wireshark you can see the whole communication; the certificate seems to be used just for authorization.
Endpoints exposed over HTTPS are encrypted, of course.

Yes, a given X.509 certificate will be used to encrypt the data in transit between a client and the cluster. As for authorization, it means you can choose which client certificates possess 'SF Cluster Admin' privileges and which ones only allow querying information about your cluster.
In addition to the cluster certificates, you can add client certificates to perform management operations on a Service Fabric cluster. You can add two kinds of client certificates - Admin or Read-only. These can then be used to control access to the admin operations and query operations on the cluster. By default, the cluster certificates are added to the allowed Admin certificates list.
You can specify any number of client certificates. Each addition/deletion results in a configuration update to the Service Fabric cluster.
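For illustration, if the cluster runs in Azure, one way to register an additional admin client certificate is through the Az.ServiceFabric PowerShell module; a minimal sketch, where the resource group, cluster name, and thumbprint are placeholders:

    # Adds a client certificate (by thumbprint) to the cluster's allowed list.
    # Omit -Admin to register it as a read-only client certificate instead.
    Add-AzServiceFabricClientCertificate `
        -ResourceGroupName "my-resource-group" `
        -Name "my-sf-cluster" `
        -Thumbprint "<client-cert-thumbprint>" `
        -Admin

Each such addition triggers the cluster configuration update mentioned above.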

Support wildcard domain, Kubernetes

My problem is very specific, and I don't know how to achieve it in Kubernetes.
I'm trying to configure an application called Presto:
https://prestodb.io/docs/current/security/internal-communication.html
All my nodes need a certificate to talk to each other; a wildcard certificate would be much simpler, but in Kubernetes I don't have a domain for my pods.
I need a way to configure a certificate for all my pods.
For example, if I have a way to call my pods like this:
pod1.example.com
pod2.example.com
I could generate a certificate with *.example.com.
How can I achieve that in Kubernetes?
Trino is much easier to configure, but I can't use Trino yet because it doesn't work with Metabase.
You can store the certificate in a Secret and use it from there; rather than handling certificates at the pod level, it is better to manage them per service.
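For illustration, a minimal sketch of that approach, with a hypothetical Secret named presto-tls mounted into a Presto pod (names, image, and paths are assumptions):

    apiVersion: v1
    kind: Secret
    metadata:
      name: presto-tls
    type: kubernetes.io/tls
    data:
      tls.crt: <base64-encoded wildcard certificate>
      tls.key: <base64-encoded private key>
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: presto-worker
    spec:
      containers:
        - name: presto
          image: prestodb/presto          # assumed image name
          volumeMounts:
            - name: tls
              mountPath: /etc/presto/tls  # Presto's TLS config then points at these files
              readOnly: true
      volumes:
        - name: tls
          secret:
            secretName: presto-tls

Every replica mounts the same files, so a single wildcard (or SAN) certificate can cover all pods.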
However, it looks like you are looking for something similar to mTLS:
In mTLS, each microservice in a service mesh verifies the other's certificate and uses the public keys to create encryption keys unique to each conversation. This enables the communications between pairs of microservices to be authenticated and encrypted.
You can read more about it here: https://thenewstack.io/mutual-tls-microservices-encryption-for-service-mesh/#:~:text=In%20mTLS%2C%20each%20microservice%20in,to%20be%20authenticated%20and%20encrypted.
Description:
Microservice A sends a request for the certificate of microservice B.
Microservice B replies with its certificate and requests the certificate of Microservice A.
Microservice A checks with the certificate authority that the certificate belongs to Microservice B.
Microservice A sends its certificate to microservice B and also shares a session encryption key (encrypted with the public key of microservice B).
Microservice B checks with the certificate authority that the certificate it received belongs to microservice A.
With both microservices mutually authenticated and a session key created, communication between them can be encrypted and sent via the secure link.
If you are looking for the above scenario of managing service communication with certificates, I would recommend using a service mesh with mTLS.
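For example, if you pick Istio as the mesh, a hedged sketch of enforcing mTLS for a whole namespace (the namespace name is an assumption) looks like this; the mesh sidecars then issue, rotate, and verify the workload certificates for you:

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: presto      # assumed namespace
    spec:
      mtls:
        mode: STRICT         # only mutually authenticated TLS traffic is accepted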

Select proper KafkaUser authentication type?

Maybe I am missing something; if so, forgive my ignorance.
Here is what we have:
We use TLS authentication listeners in the Kafka cluster (this can be changed; we can add a new type of listener).
When connecting to a Kafka topic from Java code, I use the SSL certificate generated for the Kafka user.
If I decide to avoid using an SSL certificate, it is for two reasons:
I will connect to the Kafka topic only from trusted OpenShift cluster pods
To avoid updating the re-generated user SSL certificate on the producer/consumer side every year (the user certificate is generated with a one-year validity period)
Would SCRAM-SHA-512 be a better (and the only?) authentication type for the KafkaUser given the two reasons above? Or does SCRAM-SHA-512 also require SSL certificates?
Another approach I saw was no authentication, but I am not sure how ACLs can be used for such users. How do I pass information to the server about which user is connecting? Is it possible to use ACLs with a Kafka user that is authenticated neither by an SSL certificate nor by a password?
[UPD] Environment is built on Strimzi (Apache Kafka cluster in OpenShift)
Using SCRAM-SHA-512 does not require TLS. So you can just disable the TLS encryption in the Kafka custom resource (.spec.kafka.listeners -> set tls: false), enable the SCRAM-SHA-512 authentication (same place, in the authentication section), and then use the KafkaUser resource to create the user and get the password.
In general, TLS encryption is normally always recommended. But the SCRAM-SHA mechanisms do not send the password over the network directly, so using them without encryption should not leak the password. In the end, it is up to you to decide.
Also, just as a side note - the certificates are valid for one year by default. You can change that in the Kafka CR.
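As a rough sketch of the pieces described above (cluster and user names are placeholders), the listener and KafkaUser would look roughly like this in Strimzi:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        listeners:
          - name: plain
            port: 9092
            type: internal
            tls: false                # no TLS encryption on this listener
            authentication:
              type: scram-sha-512     # SCRAM instead of client certificates
        # ... rest of the Kafka spec unchanged ...
    ---
    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaUser
    metadata:
      name: my-app
      labels:
        strimzi.io/cluster: my-cluster
    spec:
      authentication:
        type: scram-sha-512

Strimzi then generates the password and stores it in a Secret named after the KafkaUser (my-app here), which your producers and consumers can read instead of a yearly-rotated certificate.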

MongoDB connection security

I'm having some MongoDB connection security concerns for my environment.
Here is my environment:
one ECS instance hosted in the cloud that has a public IP, but no domain and no SSL certificate either
a MongoDB service installed on this ECS that requires username/password authentication
only specific whitelisted IPs can access the ECS/MongoDB
I'm wondering whether the data transferred between this MongoDB instance and my local PC is safe or not.
Will the data be encrypted during transmission, or is it just plain text that anyone on the internet can capture and read? (As I don't have HTTPS, it's not using TLS/SSL.)
Can anyone explain the mechanism or give some doc links?
Thanks!
As you're not using SSL, your data is not encrypted in flight. You need to use TLS/SSL to encrypt the network transmission. You must have the TLS/SSL certificates as PEM files, which are concatenated certificate containers.
In addition to encrypting connections, TLS/SSL allows for authentication using certificates, both for client authentication and for internal authentication of members of replica sets and sharded clusters
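For illustration, a minimal mongod configuration sketch that enforces TLS (file paths are placeholders):

    net:
      tls:
        mode: requireTLS                          # reject connections that do not use TLS
        certificateKeyFile: /etc/ssl/mongodb.pem  # server certificate + private key, concatenated PEM
        CAFile: /etc/ssl/ca.pem                   # CA bundle used to validate certificates

Clients then connect with TLS enabled, for example by adding tls=true to the connection string.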

Fabric access with client certificate auth fails

We're using a Service Fabric secure cluster and need a client certificate for CI/CD tools.
I've created both the cluster primary certificate and the client certificate with this script: https://gist.github.com/kagarlickij/d63a4061a1066d3a85abcc658f0856f5
so both have been uploaded to the same Key Vault, and both have been installed into the local certificate store on my machine.
I've added the client certificate to my Fabric security settings (Authentication type = Admin client, Authorization method = Certificate thumbprint).
The problem is that I can connect to the Fabric cluster (using Connect-ServiceFabricCluster in PowerShell) with the cluster primary certificate, but I can't with the client certificate.
I'm getting this error: Connect-ServiceFabricCluster : FABRIC_E_SERVER_AUTHENTICATION_FAILED: 0x800b0109
Please advise what can be done.
Based on this link the corresponding error code for 0x800b0109 is:
A certificate chain processed, but terminated in a root certificate
which is not trusted by the trust provider.
You're using a self-signed certificate as the client cert. I'm not sure that's supported, as explained in the Service Fabric security documentation; moreover, you'll have to make sure the certificate has been added to your local store.
Client X.509 certificates
Client certificates typically are not issued by a third-party CA. Instead, the Personal store of the current user location typically contains client certificates placed there by a root authority, with an Intended Purposes value of Client Authentication. The client can use this certificate when mutual authentication is required.
Note: All management operations on a Service Fabric cluster require server certificates. Client certificates cannot be used for management.
I had the same issue managing my cluster through PowerShell. I only had one cert on the cluster (the one Azure generates when creating the cluster), and I believe it is a client cert since I have to select it in my browser when managing the cluster.
Ultimately I had to add the self-signed cert to my Root certificate store (in addition to the Personal store, where I already had it) to get the PowerShell module to stop complaining about it.
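For reference, a hedged sketch of connecting with the client certificate by thumbprint once the chain is trusted (the endpoint and thumbprints are placeholders):

    Connect-ServiceFabricCluster `
        -ConnectionEndpoint "mycluster.westeurope.cloudapp.azure.com:19000" `
        -X509Credential `
        -ServerCertThumbprint "<cluster-cert-thumbprint>" `
        -FindType FindByThumbprint `
        -FindValue "<client-cert-thumbprint>" `
        -StoreLocation CurrentUser `
        -StoreName My   # the client cert must be installed in CurrentUser\My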

How can I overcome the x509 signed by unknown certificate authority error when using the default Kubernetes API Server virtual IP?

I have a Kubernetes cluster running in High Availability mode with 3 master nodes. When I try to run the DNS cluster add-on as-is, the kube2sky application errors with an x509 signed by unknown certificate authority message for the API server service address (which in my case is 10.100.0.1). Reading through some of the GitHub issues, it looked like Tim Hockin had fixed this type of issue by using the default service account tokens available.
All 3 of my master nodes generate their own certificates for the secured API port, so is there something special I need to do configuration-wise on the API servers to get the CA certificate included in the default service account token?
It would be ideal to have the service IP of the API server (10.100.0.1 in your case) in the SAN field of all your server certificates.
If this is not possible in your setup, set the clusters[].cluster.insecure-skip-tls-verify field to true in your kubeconfig file, or pass the --insecure-skip-tls-verify flag to kubectl.
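A minimal kubeconfig sketch of that workaround (cluster name and server address are placeholders):

    clusters:
      - name: my-ha-cluster
        cluster:
          server: https://<api-server-address>
          insecure-skip-tls-verify: true   # skip server certificate verification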
If you are trying to reach the API from within a pod you could use the secrets mounted via the Service Account. By default, if you use the default secret, the CA certificate and a signed token are mounted to /var/run/secrets/kubernetes.io/serviceaccount/ in every pod, and any client can use them from within the pod to communicate with the API. This would help you solving the unknown certificate authority error and provide you with an easy way to authenticate against your API servers at the same time.
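As a concrete sketch of that from inside a pod (the listing path is just an example):

    SA=/var/run/secrets/kubernetes.io/serviceaccount
    # The mounted CA bundle makes the API server's certificate trusted,
    # and the mounted token authenticates the default service account.
    curl --cacert "$SA/ca.crt" \
         -H "Authorization: Bearer $(cat "$SA/token")" \
         https://10.100.0.1/api/v1/namespaces/default/pods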