My problem is very specific and I don't know how to achieve it in Kubernetes.
I'm trying to configure an application called Presto:
https://prestodb.io/docs/current/security/internal-communication.html
All my nodes need a certificate to talk to each other. A wildcard certificate would be much simpler, but in Kubernetes I don't have a domain per pod.
I need a way to configure a certificate for all my pods.
For example, if I have a way to call my pods like this:
pod1.example.com
pod2.example.com
I could generate a certificate with *.example.com.
How can I achieve that in Kubernetes?
Trino is much easier to configure, but I can't use it yet because Trino doesn't work with Metabase.
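For reference, Kubernetes can provide stable per-pod DNS names like these when the pods are managed by a StatefulSet behind a headless Service: each pod becomes resolvable as <pod>.<service>.<namespace>.svc.cluster.local, so a wildcard certificate for *.<service>.<namespace>.svc.cluster.local would cover all of them. A minimal sketch (all names are made up):

    # Headless Service "presto" + StatefulSet "worker": the pods become
    # worker-0.presto.default.svc.cluster.local, worker-1.presto..., etc.,
    # so one cert for *.presto.default.svc.cluster.local covers them all.
    apiVersion: v1
    kind: Service
    metadata:
      name: presto
    spec:
      clusterIP: None          # headless: DNS resolves to the pod IPs directly
      selector:
        app: presto-worker
      ports:
        - port: 8080
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: worker
    spec:
      serviceName: presto      # ties the pods' DNS entries to the headless Service
      replicas: 2
      selector:
        matchLabels:
          app: presto-worker
      template:
        metadata:
          labels:
            app: presto-worker
        spec:
          containers:
            - name: presto
              image: prestodb/presto   # illustrative image name
              ports:
                - containerPort: 8080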
You can store the certificate in a Secret and use it from there. Instead of issuing a certificate per pod, it is easier to manage one per Service.
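For example, a certificate kept in a TLS Secret can be mounted into every pod that needs it; a minimal sketch, assuming a Secret named tls-cert already holds the cert and key (created e.g. with kubectl create secret tls tls-cert --cert=tls.crt --key=tls.key):

    # Mount the certificate from the Secret into the container; the
    # application then reads /etc/certs/tls.crt and /etc/certs/tls.key.
    apiVersion: v1
    kind: Pod
    metadata:
      name: presto-worker
    spec:
      containers:
        - name: presto
          image: prestodb/presto     # illustrative image name
          volumeMounts:
            - name: certs
              mountPath: /etc/certs
              readOnly: true
      volumes:
        - name: certs
          secret:
            secretName: tls-cert     # hypothetical Secret name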
However, it looks like you are looking for something similar to mTLS:
In mTLS, each microservice in a service mesh verifies the other's certificate and uses the public keys to create encryption keys unique to each conversation. This enables the communications between pairs of microservices to be authenticated and encrypted.
You can read more about it here: https://thenewstack.io/mutual-tls-microservices-encryption-for-service-mesh/#:~:text=In%20mTLS%2C%20each%20microservice%20in,to%20be%20authenticated%20and%20encrypted.
Description:
Microservice A sends a request for the certificate of microservice B.
Microservice B replies with its certificate and requests the certificate of Microservice A.
Microservice A checks with the certificate authority that the certificate belongs to Microservice B.
Microservice A sends its certificate to microservice B and also shares a session encryption key (encrypted with the public key of microservice B).
Microservice B checks with the certificate authority that the certificate it received belongs to microservice A.
With both microservices mutually authenticated and a session key created, communication between them can be encrypted and sent via the secure link.
If you are looking for the above scenario of managing service communication with certificates, I would recommend using a service mesh with mTLS.
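As one concrete example, with Istio installed and the workloads already part of the mesh, namespace-wide mTLS can be enforced with a single resource; a minimal sketch (the namespace name is made up):

    # Enforce mutual TLS for all workloads in the "presto" namespace.
    # The sidecars then issue, rotate and verify the certificates
    # between pods automatically.
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: presto
    spec:
      mtls:
        mode: STRICT   # reject any plain-text traffic between pods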
Related
I have the following scenario:
User red makes an HTTP request to one of the three services in namespace1. Somehow, Kubernetes should verify whether user red has a valid certificate for namespace1 before allowing the call. In this case, user red owns the right certificate for namespace1, so it is allowed to call any service within namespace1. The same rule also applies to user blue.
But when user red tries to call services in namespace2, the requests should be rejected, because it does not own the right certificate for namespace2.
The question is: is it possible to create a per-namespace certificate in Kubernetes? For example, with certificate A I can only access namespace1 but not namespace2.
I think Kubernetes Services don’t offer such features. The authentication should be done in an ingress controller (e.g. nginx-ingress). You just deploy two different ones, one per namespace, with different certificate configurations.
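A sketch of that per-namespace setup, assuming ingress-nginx and made-up names; the CA used to verify user red's client certificate is expected in a Secret in the same namespace:

    # Ingress for namespace1: presents namespace1's server certificate and
    # only accepts clients whose certificate verifies against namespace1's CA.
    # namespace2 gets an equivalent Ingress referencing its own Secrets.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: services
      namespace: namespace1
      annotations:
        nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
        nginx.ingress.kubernetes.io/auth-tls-secret: "namespace1/client-ca"
    spec:
      ingressClassName: nginx
      tls:
        - hosts:
            - ns1.example.com
          secretName: namespace1-tls    # server cert for this namespace
      rules:
        - host: ns1.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: service-a     # one of the three services
                    port:
                      number: 80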
Not exactly what you want, but it's possible to do this per domain. You can use an ingress controller such as Ambassador with SNI support. You supply separate TLS certificates for different domains, instead of using a single TLS certificate for all domains. It is designed to be configured on a per-mapping basis, enabling application developers or service owners to individually manage how their service gets exposed over TLS.
Using SNI instead of multiple ingress controller deployments is more scalable, because separate load balancers or IPs for those ingress controllers can be avoided.
The drawback of SNI is that client library and browser support is limited.
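A sketch of the Ambassador-style SNI configuration, assuming one TLS Secret per domain (all names are made up):

    # One TLSContext per domain: Ambassador selects the matching
    # certificate based on the SNI hostname the client sends.
    apiVersion: getambassador.io/v2
    kind: TLSContext
    metadata:
      name: domain1-context
    spec:
      hosts:
        - domain1.example.com
      secret: domain1-tls        # Secret holding domain1's certificate
    ---
    apiVersion: getambassador.io/v2
    kind: Mapping
    metadata:
      name: service1-mapping
    spec:
      host: domain1.example.com  # routed (and terminated) per domain
      prefix: /
      service: service1          # backend Service for this domain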
All the documentation for Service Fabric mentions that for a production cluster you should use an X.509 certificate from a trusted CA with the common name of the cluster address. The problem is that I can't find any documentation on the process of obtaining the certificate. As far as I can tell, to create a certificate you need to prove you are who you say you are, and to do so you either need to own the domain or expose some sort of file on the specified address.
The problem is that the URL of the cluster is on a domain owned by Microsoft, and my cluster is not exposed to the outside world as a website. Am I missing something? Do I have to create a web service and expose it just to obtain a certificate?
You can use a free solution like Let's Encrypt; for this it's not required to own the domain (specifically, to control the DNS records). They also provide the option to respond to an HTTP-based challenge as proof of control.
To kick off the process, the agent asks the Let’s Encrypt CA what it needs to do in order to prove that it controls example.com. The Let’s Encrypt CA will look at the domain name being requested and issue one or more sets of challenges. These are different ways that the agent can prove control of the domain. For example, the CA might give the agent a choice of either provisioning a DNS record under example.com, or provisioning an HTTP resource under a well-known URI on https://example.com/
An easy way to get started with Let's Encrypt is by using Certbot.
This needs to run on the domain, so it can respond to the HTTP challenge, which results in the issuance of a certificate for your specific cluster endpoint.
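The standalone flow boils down to something like this (a sketch; the cluster endpoint is a placeholder, and port 80 on that address must be reachable):

    # Certbot starts a temporary web server on port 80 to answer the
    # HTTP-01 challenge, then writes the issued certificate to disk.
    sudo certbot certonly --standalone \
      -d mycluster.westus.cloudapp.azure.com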
Maybe this sample project helps.
Does the client x.509 certificate encrypt the data as well as handle authorization?
Documentation says it handles authorization and message signing. But does that mean the data is encrypted in transit?
It is NOT encrypted when using a secure cluster with certificates (node-to-node + client-to-node) on the default RPC endpoints. In Wireshark you can see the whole communication; it seems to be just for authorization.
Endpoints served over HTTPS are encrypted, of course.
Yes, a given X.509 certificate will be used to encrypt the data while communication happens between a client and the cluster. As for authorization, it means that you can choose which client certificates possess 'SF Cluster Admin' privileges, and which ones are only allowed to query info about your cluster.
In addition to the cluster certificates, you can add client certificates to perform management operations on a Service Fabric cluster. You can add two kinds of client certificates - Admin or Read-only. These can then be used to control access to the admin operations and query operations on the cluster. By default, the cluster certificates are added to the allowed Admin certificates list. You can specify any number of client certificates. Each addition/deletion results in a configuration update to the Service Fabric cluster.
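For example, with the Az PowerShell module an extra client certificate can be registered by thumbprint; a sketch with made-up resource names (omitting -Admin registers it as read-only):

    # Register an additional admin client certificate on the cluster.
    Add-AzServiceFabricClientCertificate `
      -ResourceGroupName "my-rg" `
      -Name "my-cluster" `
      -Thumbprint "<client-cert-thumbprint>" `
      -Admin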
We're using a Service Fabric secure cluster and need a client certificate for CI/CD tools.
I've created both the cluster primary certificate and the client certificate with this script https://gist.github.com/kagarlickij/d63a4061a1066d3a85abcc658f0856f5
so both have been uploaded to the same Key Vault and both have been installed to the local certificate store on my machine.
I've added the client certificate to my Fabric security settings (Authentication type = Admin client, Authorization method = Certificate thumbprint).
The problem is that I can connect (I'm using Connect-ServiceFabricCluster in PowerShell) to the Fabric cluster with the cluster primary certificate but can't with the client certificate.
I'm getting this error: Connect-ServiceFabricCluster : FABRIC_E_SERVER_AUTHENTICATION_FAILED: 0x800b0109
Please advise, what can be done?
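For reference, the connection attempt looks roughly like this (endpoint and thumbprints are placeholders):

    # Connect using the client cert from the CurrentUser\My store;
    # -ServerCertThumbprint pins the cluster's primary certificate.
    Connect-ServiceFabricCluster `
      -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000" `
      -X509Credential `
      -ServerCertThumbprint "<cluster-cert-thumbprint>" `
      -FindType FindByThumbprint `
      -FindValue "<client-cert-thumbprint>" `
      -StoreLocation CurrentUser `
      -StoreName My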
Based on this link, the description for error code 0x800b0109 is:
A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider.
You're using a self-signed certificate as the client cert. I'm not sure that's supported, as explained in the Service Fabric security documentation; moreover, you'll have to make sure the certificate has been added to your local store.
Client X.509 certificates
Client certificates typically are not issued by a third-party CA. Instead, the Personal store of the current user location typically contains client certificates placed there by a root authority, with an Intended Purposes value of Client Authentication. The client can use this certificate when mutual authentication is required.
Note: All management operations on a Service Fabric cluster require server certificates. Client certificates cannot be used for management.
I had the same issue managing my cluster through PowerShell. I only had one cert on the cluster (the one Azure generates when creating the cluster), and I believe it is a client cert since I have to select it in my browser when managing the cluster.
Ultimately I had to add the self-signed cert to my Root certificate store (in addition to my Personal store, where I already had it) to get the PowerShell module to stop complaining about it.
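On Windows that amounts to importing the certificate into the trusted root store as well; a sketch, assuming the cert's public part was exported to cluster.cer:

    # Trust the self-signed cluster certificate for the current user.
    Import-Certificate -FilePath .\cluster.cer `
      -CertStoreLocation Cert:\CurrentUser\Root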
I have a Kubernetes cluster running in High Availability mode with 3 master nodes. When I try to run the DNS cluster add-on as-is, the kube2sky application errors with an "x509: certificate signed by unknown authority" message for the API Server service address (which in my case is 10.100.0.1). Reading through some of the GitHub issues, it looked like Tim Hockin had fixed this type of issue via using the default service account tokens available.
All 3 of my master nodes generate their own certificates for the secured API port, so is there something special I need to do configuration-wise on the API servers to get the CA certificate included in the default service account token?
It would be ideal to have the service IP of the API in the SAN field of all your server certificates.
If this is not possible in your setup, set the clusters{}.cluster.insecure-skip-tls-verify field to true in your kubeconfig file, or pass the --insecure-skip-tls-verify flag to kubectl.
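The corresponding kubeconfig fragment would look like this (cluster name and server address are placeholders):

    # Disable server certificate verification for this cluster entry.
    # Only a stop-gap: it removes protection against man-in-the-middle.
    clusters:
      - name: my-cluster
        cluster:
          server: https://10.100.0.1
          insecure-skip-tls-verify: true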
If you are trying to reach the API from within a pod, you could use the secrets mounted via the service account. By default, if you use the default secret, the CA certificate and a signed token are mounted at /var/run/secrets/kubernetes.io/serviceaccount/ in every pod, and any client can use them from within the pod to communicate with the API. This would help you solve the unknown certificate authority error and provide you with an easy way to authenticate against your API servers at the same time.
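From inside a pod this boils down to (a sketch; 10.100.0.1 is the API service address from the question):

    # Use the mounted service-account credentials to call the API server:
    # ca.crt verifies the server, the token authenticates the client.
    SA=/var/run/secrets/kubernetes.io/serviceaccount
    curl --cacert "$SA/ca.crt" \
         -H "Authorization: Bearer $(cat $SA/token)" \
         https://10.100.0.1/api/v1/namespaces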