What is the meaning of the Kubernetes webhook user client-certificate config?

I need to implement a custom authentication and authorisation module for Kubernetes. This is going to have to be done via a webhook.
The documentation for the authentication and authorisation webhooks describes a config file that the API Server needs to be started with.
The config file looks identical for both authentication and authorisation and looks like this:
# clusters refers to the remote service.
clusters:
  - name: name-of-remote-authn-service
    cluster:
      certificate-authority: /path/to/ca.pem         # CA for verifying the remote service.
      server: https://authn.example.com/authenticate # URL of remote service to query. Must use 'https'.

# users refers to the API server's webhook configuration.
users:
  - name: name-of-api-server
    user:
      client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
      client-key: /path/to/key.pem          # key matching the cert

# kubeconfig files require a context. Provide one for the API server.
current-context: webhook
contexts:
  - context:
      cluster: name-of-remote-authn-service
      user: name-of-api-server
    name: webhook
I can see that the clusters section refers to the remote service, i.e. it defines the webhook, thereby answering the questions the API Server needs answered: "what is the URL endpoint to hit when an authn/authz decision is required, and when I connect via HTTPS, who is the CA for the webhook's TLS certificate, so that I know I can trust the remote webhook?"
I'm not sure of the users section. What is the purpose of the client-certificate and client-key fields? The comment in the file says "cert for the webhook plugin to use", but as this config file is given to the API Server, not the web hook, I don't understand what this means. Is this a certificate that will allow the webhook service to authenticate the connection that the API Server will initiate with it? i.e. the client certificate needs to go into the truststore of the webhook server?
Are both of these assumptions correct?

The Kubernetes webhook uses two-way (mutual) TLS authentication, so the fields in the users section configure the certificate that the client side (the API server) presents to authenticate itself.
The clusters section covers the ordinary one-way TLS direction: the certificate-authority configured there is what the client (the API server) uses to validate the certificate presented by the server (your webhook module).
Once you also configure a client-certificate and client-key in the users section, the server (the webhook module) can in turn authenticate the client (the API server), which is the reverse of the one-way TLS direction and completes the mutual authentication. So yes, your assumption is right: the CA that issued the API server's client certificate needs to go into the webhook server's truststore.
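To make the two directions concrete, here is a minimal sketch of the webhook side of that handshake in Go. All file paths are hypothetical: server.pem/server-key.pem is the certificate the API server checks against its certificate-authority entry, and apiserver-client-ca.pem is the CA that issued the API server's client-certificate.

package main

import (
    "crypto/tls"
    "crypto/x509"
    "log"
    "net/http"
    "os"
)

func main() {
    // CA that issued the API server's client-certificate (hypothetical path).
    caPEM, err := os.ReadFile("/etc/webhook/apiserver-client-ca.pem")
    if err != nil {
        log.Fatal(err)
    }
    clientCAs := x509.NewCertPool()
    if !clientCAs.AppendCertsFromPEM(caPEM) {
        log.Fatal("failed to parse client CA certificate")
    }

    server := &http.Server{
        Addr: ":443",
        TLSConfig: &tls.Config{
            // Demand a client certificate and verify it against the CA pool:
            // this is the half of the handshake that authenticates the API server.
            ClientAuth: tls.RequireAndVerifyClientCert,
            ClientCAs:  clientCAs,
        },
        Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // The actual authn/authz decision logic would go here.
            w.Write([]byte(`{"apiVersion": "authentication.k8s.io/v1", "kind": "TokenReview"}`))
        }),
    }

    // The server certificate is the half the API server verifies via
    // the certificate-authority field of the clusters section.
    log.Fatal(server.ListenAndServeTLS("/etc/webhook/server.pem", "/etc/webhook/server-key.pem"))
}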

Related

Self-signed certificates ok for kubernetes validating webhooks?

I'm trying to understand the security implications for using self-signed certificates for a Kubernetes validating webhook.
If I'm understanding correctly, the certificate is simply used to be able to serve the validating webhook server over https. When the Kubernetes api-server receives a request that matches the configuration for a validating webhook, it'll first check with the validating webhook server over https. If your validating webhook server lives on the Kubernetes cluster (is not external) then this traffic is all internal to a Kubernetes cluster. If this is the case is it problematic that the cert is self-signed?
If I'm understanding correctly, the certificate is simply used to be able to serve the validating webhook server over https.
Basically yes.
If your validating webhook server lives on the Kubernetes cluster (is not external) then this traffic is all internal to a Kubernetes cluster. If this is the case, is it problematic that the cert is self-signed?
If the issuing process is handled properly and in a secure manner, self-signed certs shouldn't be a problem at all. Compare with this example.
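One way to see why: the api-server does not consult any public trust store for webhooks; it trusts exactly the CA bundle you hand it in the webhook configuration (caBundle). A rough Go sketch of that pinning idea, assuming a hypothetical webhook-ca.pem (your self-signed CA) and a hypothetical in-cluster service name my-webhook.default.svc:

package main

import (
    "crypto/tls"
    "crypto/x509"
    "log"
    "net/http"
    "os"
)

func main() {
    // The self-signed CA generated for the webhook (hypothetical path);
    // it plays the role of the caBundle in the webhook configuration.
    caPEM, err := os.ReadFile("webhook-ca.pem")
    if err != nil {
        log.Fatal(err)
    }
    roots := x509.NewCertPool()
    if !roots.AppendCertsFromPEM(caPEM) {
        log.Fatal("failed to parse CA certificate")
    }

    client := &http.Client{
        Transport: &http.Transport{
            // Trust only this CA -- no public certificate authority is involved,
            // which is why self-signing is not a problem in itself.
            TLSClientConfig: &tls.Config{RootCAs: roots},
        },
    }

    resp, err := client.Get("https://my-webhook.default.svc/validate")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    log.Println("webhook reachable, cert verified against the pinned CA:", resp.Status)
}

The security question therefore reduces to how the CA key and the caBundle are generated and distributed, not to whether a public CA signed the certificate.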

Keycloak-nodejs-connect grantManager can't validateToken when configured with internal kubernetes keycloak service address

I have an issue when validating tokens using the keycloak-nodejs-connect library deployed to a kubernetes cluster - specifically when using the internal Kubernetes service address for keycloak as the auth-server-url. I am using keycloak version 10.0.1.
Our workflow is as follows - our web app authenticates with a public keycloak client to obtain an access token. This token is attached to requests to the db for data. The db (hasura) uses an auth hook to validate the token before allowing access to its data. This auth hook implements the keycloak-nodejs-connect lib and, through the provided middleware, calls the grantManager's validateToken. However, when the connect lib is configured with the Kubernetes service address (http://keycloak:8080/auth/), it is guaranteed to error on the issuer match, because the issuer property in the JWT token (iss) will be the frontend URL configured in the keycloak server (https://keycloak.public.address.uk/auth/).
Is there a way to provide a frontend and backend URL to the keycloak-nodejs-connect library so that the issuer validation can succeed whilst using the backend URL to speak to keycloak via a Kubernetes service - or should I be configuring keycloak a certain way so that the issuer is different? I specifically need to use a Kubernetes service address here rather than a public address for keycloak communications in my cluster.
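For illustration, the failing check boils down to a string comparison between the token's iss claim and the issuer derived from auth-server-url. A sketch of that comparison in Go (not the library's actual code; the realm name myrealm is hypothetical, the URLs are from the question):

package main

import (
    "encoding/base64"
    "encoding/json"
    "fmt"
    "strings"
)

// issuerOf extracts the iss claim from an (unverified) JWT payload.
func issuerOf(jwt string) (string, error) {
    parts := strings.Split(jwt, ".")
    if len(parts) != 3 {
        return "", fmt.Errorf("not a JWT")
    }
    payload, err := base64.RawURLEncoding.DecodeString(parts[1])
    if err != nil {
        return "", err
    }
    var claims struct {
        Iss string `json:"iss"`
    }
    if err := json.Unmarshal(payload, &claims); err != nil {
        return "", err
    }
    return claims.Iss, nil
}

func main() {
    // Keycloak stamps iss with its configured frontend URL...
    payload := base64.RawURLEncoding.EncodeToString(
        []byte(`{"iss":"https://keycloak.public.address.uk/auth/realms/myrealm"}`))
    token := "header." + payload + ".signature"
    iss, _ := issuerOf(token)

    // ...but the connect lib derives the expected issuer from auth-server-url.
    expected := "http://keycloak:8080/auth/realms/myrealm"
    fmt.Println("issuer matches:", iss == expected) // false -> validateToken rejects the token
}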
The following source location hyperlinks try to highlight the issue in code:
nodejs connect server url config (note only one url is available, used for both keycloak server communication and issuer validation)
Where the config is applied
Where the token issuer is validated against the configured keycloak auth server
Keycloak server's frontend url
One example of how the issuer is set to the frontend url when the token is being generated
Many thanks for any help,
Andy.

Service fabric client encryption

Does the client x.509 certificate encrypt the data as well as handle authorization?
Documentation says it handles authorization and message signing. But does that mean the data is encrypted in transit?
It is NOT encrypted when using a Secure-Cluster with certificates (Node2Node + Client2Node) over the default RPC endpoints. In Wireshark you can see the whole communication. The certificate seems to be used just for authorization.
Endpoints with https are encrypted of course.
Yes, a given x509 certificate will be used to encrypt the data in transit between a client and the cluster. As for authorization, it means that you can set which client certificates possess 'SF Cluster Admin' privileges, and which ones only allow querying info about your cluster.
In addition to the cluster certificates, you can add client certificates to perform management operations on a service fabric cluster. You can add two kinds of client certificates - Admin or Read-only. These then can be used to control access to the admin operations and query operations on the cluster. By default, the cluster certificates are added to the allowed Admin certificates list. You can specify any number of client certificates. Each addition/deletion results in a configuration update to the service fabric cluster.

Forcing all requests to an HTTP endpoint through AWS API Gateway

I have a REST HTTP endpoint that sits outside of AWS, but I want to use AWS API Gateway to proxy through to that endpoint. What would be the best way to ensure that the HTTP endpoint only processes requests that come through the API Gateway?
One possibility would be to make your non-AWS endpoint require a client TLS certificate. AWS API Gateway can generate client certificates, and your non-AWS endpoint can:
require a client certificate (if not provided, then ignore / don't allow)
use the API Gateway cert public key to verify the client is your API Gateway.
This would give you good assurance that traffic to your non-AWS endpoint is only coming through the AWS API Gateway, so long as the client certificate generated by AWS is not compromised.
From the AWS FAQs:
Q: Can I verify that it is API Gateway calling my backend?
Yes. Amazon API Gateway can generate a client-side SSL certificate and make the public key of that certificate available to you. Calls to your backend can be made with the generated certificate, and you can verify calls originating from Amazon API Gateway using the public key of the certificate.
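A sketch of what the non-AWS backend could do with that generated certificate, in Go. The path apigw-client-cert.pem is a hypothetical location for the certificate you download from API Gateway; since the check pins the exact certificate, no CA chain is needed:

package main

import (
    "bytes"
    "crypto/tls"
    "crypto/x509"
    "encoding/pem"
    "errors"
    "log"
    "net/http"
    "os"
)

func main() {
    // The client certificate generated by API Gateway (hypothetical path).
    pemBytes, err := os.ReadFile("apigw-client-cert.pem")
    if err != nil {
        log.Fatal(err)
    }
    block, _ := pem.Decode(pemBytes)
    if block == nil {
        log.Fatal("failed to decode PEM")
    }
    gwCert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        log.Fatal(err)
    }

    server := &http.Server{
        Addr: ":443",
        TLSConfig: &tls.Config{
            // Require a client certificate, then check it ourselves below.
            ClientAuth: tls.RequireAnyClientCert,
            VerifyPeerCertificate: func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
                // Accept only the exact certificate API Gateway presents.
                if len(rawCerts) > 0 && bytes.Equal(rawCerts[0], gwCert.Raw) {
                    return nil
                }
                return errors.New("client is not the API Gateway")
            },
        },
        Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello from the proxied backend\n"))
        }),
    }
    // server.pem / server-key.pem: the backend's own TLS certificate (hypothetical).
    log.Fatal(server.ListenAndServeTLS("server.pem", "server-key.pem"))
}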

How can I overcome the x509 signed by unknown certificate authority error when using the default Kubernetes API Server virtual IP?

I have a Kubernetes cluster running in High Availability mode with 3 master nodes. When I try to run the DNS cluster add-on as-is, the kube2sky application fails with an "x509: certificate signed by unknown authority" message for the API Server service address (which in my case is 10.100.0.1). Reading through some of the GitHub issues, it looked like Tim Hockin had fixed this type of issue via using the default service account tokens available.
All 3 of my master nodes generate their own certificates for the secured API port, so is there something special I need to do configuration-wise on the API servers to get the CA certificate included in the default service account token?
It would be ideal to have the service IP of the API in the SAN field of all your server certificates.
If this is not possible in your setup, set the clusters{}.cluster.insecure-skip-tls-verify field to true in your kubeconfig file, or pass the --insecure-skip-tls-verify flag to kubectl.
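To check whether the service IP already appears in the SANs of the certificate your API server presents, a quick diagnostic sketch in Go (10.100.0.1:443 is the service address from the question; InsecureSkipVerify is used here only so the certificate can be fetched for inspection):

package main

import (
    "crypto/tls"
    "fmt"
    "log"
)

func main() {
    // Connect to the secure API port just to retrieve the presented certificate.
    conn, err := tls.Dial("tcp", "10.100.0.1:443", &tls.Config{InsecureSkipVerify: true})
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    cert := conn.ConnectionState().PeerCertificates[0]
    fmt.Println("DNS SANs:", cert.DNSNames)
    fmt.Println("IP SANs: ", cert.IPAddresses) // should include 10.100.0.1
}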
If you are trying to reach the API from within a pod, you can use the secrets mounted via the service account. By default, if you use the default service account, the CA certificate and a signed token are mounted at /var/run/secrets/kubernetes.io/serviceaccount/ in every pod, and any client can use them from within the pod to communicate with the API. This would help you solve the unknown certificate authority error and provide you with an easy way to authenticate against your API servers at the same time.
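A minimal in-pod sketch of that approach in Go; the mount path and the kubernetes.default.svc DNS name are the standard in-cluster defaults, while /version is just an easy endpoint to query:

package main

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "io"
    "log"
    "net/http"
    "os"
)

const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
    // Trust the cluster CA mounted into every pod: this is what resolves
    // the "unknown certificate authority" error.
    caPEM, err := os.ReadFile(saDir + "/ca.crt")
    if err != nil {
        log.Fatal(err)
    }
    roots := x509.NewCertPool()
    if !roots.AppendCertsFromPEM(caPEM) {
        log.Fatal("failed to parse cluster CA")
    }

    // Authenticate with the pod's service account token.
    token, err := os.ReadFile(saDir + "/token")
    if err != nil {
        log.Fatal(err)
    }

    client := &http.Client{Transport: &http.Transport{
        TLSClientConfig: &tls.Config{RootCAs: roots},
    }}
    req, _ := http.NewRequest("GET", "https://kubernetes.default.svc/version", nil)
    req.Header.Set("Authorization", "Bearer "+string(token))

    resp, err := client.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}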