I'm a bit confused about how to secure the Kubernetes API for calls and access; Kube-ui is also available to everybody.
How can I set up credentials to secure all of these services?
Thank you
The Kubernetes API supports multiple forms of authentication: HTTP basic auth, bearer tokens, and client certificates. When launching the apiserver, you can enable or disable each of these authentication methods with command-line flags.
You should also run the apiserver so that the insecure port is only accessible on localhost, meaning all connections coming across the network use HTTPS. By having your API clients verify the TLS certificate presented by the apiserver, they can confirm that the connection is both encrypted and not susceptible to man-in-the-middle attacks.
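As a minimal sketch of what that looks like from the client side (Python with the requests library; the apiserver address, token, and CA-certificate paths below are placeholders, not values from any real cluster), a bearer-token request that verifies the apiserver's certificate against the cluster CA:

```python
# Hypothetical example: list namespaces over HTTPS with bearer-token auth.
# All addresses and file paths are placeholders for your own cluster.
import requests

APISERVER = "https://203.0.113.10:6443"        # placeholder apiserver address
TOKEN = open("/path/to/token").read().strip()  # bearer token for your user or service account
CA_CERT = "/path/to/ca.crt"                    # the cluster CA certificate

resp = requests.get(
    f"{APISERVER}/api/v1/namespaces",
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=CA_CERT,  # reject any server certificate not signed by this CA (prevents MITM)
)
resp.raise_for_status()
for ns in resp.json()["items"]:
    print(ns["metadata"]["name"])
```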
By default, anyone who has access credentials to the apiserver has full access to the cluster. You can also configure more fine-grained authorization policies, which will become more flexible and configurable in future Kubernetes releases.
Related
I'm trying to understand the security implications for using self-signed certificates for a Kubernetes validating webhook.
If I'm understanding correctly, the certificate is simply used to be able to serve the validating webhook server over https. When the Kubernetes api-server receives a request that matches the configuration for a validating webhook, it'll first check with the validating webhook server over https. If your validating webhook server lives on the Kubernetes cluster (is not external), then this traffic is all internal to the Kubernetes cluster. If this is the case, is it problematic that the cert is self-signed?
If I'm understanding correctly, the certificate is simply used to be able to serve the validating webhook server over https.
Basically yes.
If your validating webhook server lives on the Kubernetes cluster (is not external), then this traffic is all internal to the Kubernetes cluster. If this is the case, is it problematic that the cert is self-signed?
If the issuing process is handled properly and in a secure manner, self-signed certs shouldn't be a problem at all. Compare with this example.
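To illustrate the point, here is a small, hypothetical Python sketch of what the verifying side does: it is told to trust exactly that self-signed certificate (for a webhook, this is what the caBundle field in the webhook configuration achieves), so the handshake still gives you encryption and peer authentication. The host name and file path are made up for the example:

```python
# Hypothetical sketch: connect over TLS while trusting only a specific
# self-signed certificate, analogous to the apiserver trusting the webhook's
# cert via caBundle. Host and path are placeholders.
import socket
import ssl

WEBHOOK_HOST = "my-webhook.my-namespace.svc"   # placeholder in-cluster service name
SELF_SIGNED_PEM = "/path/to/webhook-cert.pem"  # the self-signed cert (or the CA that issued it)

ctx = ssl.create_default_context(cafile=SELF_SIGNED_PEM)  # trust only this cert/CA
with socket.create_connection((WEBHOOK_HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=WEBHOOK_HOST) as tls:
        # If the handshake succeeds, traffic is encrypted and the peer has
        # proven possession of the key for the certificate we chose to trust.
        print(tls.version(), tls.getpeercert().get("subject"))
```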
Does the client x.509 certificate encrypt the data as well as handle authorization?
Documentation says it handles authorization and message signing. But does that mean the data is encrypted in transit?
It is NOT encrypted when using a secure cluster with certificates (Node2Node + Client2Node) and the default RPC endpoints. In Wireshark you can see the whole communication; the certificate seems to be used just for authorization.
Endpoints served over HTTPS are encrypted, of course.
Yes, a given x.509 certificate will be used to encrypt the data while communication happens between a client and the cluster. As for authorization, it means you can choose which client certificates possess 'SF Cluster Admin' privileges and which ones only allow querying information about your cluster; see the sketch after the quoted docs below.
In addition to the cluster certificates, you can add client certificates to perform management operations on a Service Fabric cluster. You can add two kinds of client certificates - Admin or Read-only. These then can be used to control access to the admin operations and query operations on the cluster. By default, the cluster certificates are added to the allowed Admin certificates list. You can specify any number of client certificates. Each addition/deletion results in a configuration update to the Service Fabric cluster.
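As a rough illustration of both points (encryption and certificate-based access control), here is a hypothetical Python sketch of calling an HTTPS management endpoint while presenting a client certificate; the endpoint URL, API version, and file paths are placeholder values, not taken from any real cluster:

```python
# Hypothetical sketch: the TLS handshake that presents the client certificate
# also negotiates the session keys, so everything sent over this connection is
# encrypted; the cluster then maps the certificate to Admin or Read-only rights.
import requests

CLUSTER = "https://mycluster.example.com:19080"               # placeholder HTTPS management endpoint
CLIENT_CERT = ("/path/to/client.crt", "/path/to/client.key")  # a cert from the allowed list
CLUSTER_CA = "/path/to/cluster-ca.crt"                        # to verify the cluster's own certificate

resp = requests.get(
    CLUSTER + "/$/GetClusterHealth?api-version=3.0",          # example query operation
    cert=CLIENT_CERT,
    verify=CLUSTER_CA,
)
print(resp.status_code)
```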
Objective
I'm seeking clarification around the nuances of accessing the Kubelet API.
Context
I have the IP of the node (the physical host's IP) that a pod is on. I would like to make calls to the Kubelet API (running on the node), e.g. to ${node_ip}:10255
Question(s)
Can the protocol be HTTP?
If it can be HTTP, do I need provide any form of authentication e.g. a bearer token?
If it must be HTTPS, what forms of authentication must I provide?
Bearer token?
Certificates?
There are two ports the kubelet may listen on.
--read-only-port is the http read-only port for the Kubelet to serve on with no authentication/authorization (defaults to 10255, can set to 0 to disable). If enabled, this only serves read-only data, and doesn't expose the APIs that allow pod exec/attach/proxy, etc.
--port is the https port for the Kubelet to serve all its APIs on, with optional authentication/authorization. (defaults to 10250)
See http://kubernetes.io/docs/admin/kubelet-authentication-authorization/ for the authentication/authorization options for the secure port.
Authentication options include client certificates, API bearer tokens, and allowing anonymous requests.
Authorization options include allowing all requests, or delegating authorization to the API server via the SubjectAccessReview API.
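As a concrete (hypothetical) illustration of the difference between the two ports, in Python with placeholder addresses and file paths:

```python
# Hypothetical sketch of the two kubelet ports described above.
# The node IP, token, and CA paths are placeholders.
import requests

NODE_IP = "10.0.0.5"  # placeholder node address

# Read-only port (default 10255): plain HTTP, no authentication, read-only data.
pods = requests.get(f"http://{NODE_IP}:10255/pods").json()
print(len(pods.get("items", [])), "pods reported by the read-only endpoint")

# Secure port (default 10250): HTTPS; here assuming the kubelet is configured
# for bearer-token authentication and its serving cert is signed by this CA.
resp = requests.get(
    f"https://{NODE_IP}:10250/pods",
    headers={"Authorization": "Bearer " + open("/path/to/token").read().strip()},
    verify="/path/to/kubelet-ca.crt",
)
print(resp.status_code)
```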
GKE currently exposes the Kubernetes UI publicly, and by default it is only protected by basic auth.
Is there a better method for securing access to the UI? It appears to me this should be accessed behind a secure VPN to prevent various types of attacks. If someone could access the Kubernetes UI, they could cause a lot of damage to the cluster.
GKE currently exposes the Kubernetes UI publicly, and by default it is only protected by basic auth.
The UI is running as a Pod in the Kubernetes cluster with a service attached so that it is accessible from inside of the cluster. If you want to access it remotely, you can use the service proxy running in the apiserver, which means that you would authenticate with the apiserver to access the UI.
The apiserver accepts three forms of client authentication: basic auth, bearer token, and client certificate. The basic auth password should have high entropy, and it is only transmitted over SSL. It is provided to make access via a browser simpler, since OAuth integration does not yet exist (although you should only pass your credentials over the SSL connection if you have verified the server certificate in your web browser, so that your credentials aren't stolen by a man-in-the-middle attack).
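For instance, a hypothetical Python sketch of reaching the UI through the apiserver's service proxy with basic auth over a verified HTTPS connection (the proxy path shown assumes the older /api/v1/proxy/... form and depends on your Kubernetes version; the address, credentials, and CA path are placeholders):

```python
# Hypothetical sketch: access the UI's service via the apiserver service proxy.
# Master address, credentials, CA path, and the exact proxy path are placeholders.
import requests

MASTER = "https://203.0.113.20"                      # placeholder master endpoint
CA_CERT = "/path/to/cluster-ca.crt"                  # verify the server certificate first
UI_PATH = "/api/v1/proxy/namespaces/kube-system/services/kube-ui/"  # assumed older-style path

resp = requests.get(
    MASTER + UI_PATH,
    auth=("admin", "high-entropy-password"),         # basic auth, sent only over verified SSL
    verify=CA_CERT,
)
print(resp.status_code)
```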
Is there a better method for securing access to the UI?
There isn't a way to tell GKE to disable the service proxy in the master, but if an attacker had credentials, then they could access your cluster using the API and do as much harm as if they could get to the UI. So I'm not sure why you are particularly concerned with securing the UI via the service proxy vs. securing the apiserver's API endpoint.
It appears to me this should be accessed behind a secure VPN to prevent various types of attacks.
Which types of attacks are you concerned about specifically?
I have some APIs on my laptop. They are visible on the internet through a secure gateway service.
The secure gateway destination is configured with the TLS mutual authentication option, so the APIs require TLS mutual authentication.
I would like to add those APIs to API Management.
I could not bind an SSL profile on the Proxy tab, but I could bind an SSL profile to an HTTP GET operation on the Implementation tab.
Does this mean I have to implement an assembly operation to bind an SSL profile?
The "Proxy" tab is meant as a "simple" get you going proxy setup. For more advanced "proxies", you should use an "Assembly" implementation with a "Proxy" policy. On the settings for the Proxy policy you can specify an SSL profile.