Best practice for managing the cluster from the outside - kubernetes

I have a best-practice question:
I have a Django application running somewhere (non-k8s) where my end-user accounts are tracked. Separately, I have a k8s cluster that has a service for every user. When a new user signs up in Django, a new service should be created in the cluster.
What is the best practice for doing this? Two options I see are:
Have a long-lived service in the cluster, something like user-pod-creator, which exposes an API to the Django side, allowing it to ask for a pod to be created.
Give the Django permissions to use the cluster's API directly; have it create (and delete) pods as it wishes.
Intuitively I prefer the first because of the separation of concerns it creates and for security reasons. But the second would give the Django app a lot of flexibility: it could not only create and delete pods but also gain more visibility into the cluster if need be through direct API calls, instead of me having to expose new API endpoints in user-pod-creator or some other service.

Option 2 is a valid approach and can be implemented with a service account.
Create a ServiceAccount for your Django app:
kubectl create serviceaccount django
This ServiceAccount points to a Secret, and this Secret contains a token. (Note that since Kubernetes 1.24 a token Secret is no longer created automatically; on newer clusters you must create one manually or mint a token with kubectl create token django.)
Find out the Secret associated with the ServiceAccount:
kubectl get serviceaccount django -o yaml
Get the token from the Secret (it is stored base64-encoded, so decode it before use):
kubectl get secret django-token-d2tz4 -o jsonpath='{.data.token}' | base64 --decode
Now you can use this token as an HTTP bearer token in the Kubernetes API requests from your Django app outside the cluster.
That is, include the token in the HTTP Authorization header of the Kubernetes API requests like this:
Authorization: Bearer <TOKEN>
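For example, a minimal request from the Django host might look like this (the API server address and the CA file path are placeholders for this sketch):
# TOKEN holds the decoded service account token; the host and CA path are assumptions
curl --cacert /path/to/cluster-ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://<api-server-host>:6443/api/v1/namespaces/default/pods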
In this way, your request passes the authentication stage in the API server. However, the service account has no permissions yet (authorisation).
You can assign the required permissions to your service account with Roles and RoleBindings:
kubectl create role django --verb <...> --resource <...>
kubectl create rolebinding django --role django --serviceaccount <namespace>:django
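For example, to let the Django service account manage Pods in the default namespace (the verbs, resource, and namespace here are illustrative; grant only what you actually need):
kubectl create role django --verb get,list,watch,create,delete --resource pods
kubectl create rolebinding django --role django --serviceaccount default:django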
Regarding security: grant only the minimum permissions the service account needs (principle of least privilege), and if you suspect the service account token has been stolen, you can simply delete the service account with kubectl delete serviceaccount django to invalidate the token.
See also the Kubernetes authentication documentation for an example of this approach. Especially:
Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API.

Related

K8s RBAC needed when no API calls?

My pod is running with the default service account. It uses Secrets through mounted files and ConfigMaps, but these are defined in YAML and the pod does not contain kubectl or any similar component.
Is there a point of using RBAC for anything if I don't call the API? The best practices state "Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means)."
Only things that call the Kubernetes API, like the kubectl command and the various Kubernetes SDK libraries, use RBAC. For your basic application, you as the user need permission to create Deployments, create Secrets, etc., but if you have cluster-administrator permissions you don't need any special setup.
You could imagine an orchestrator application that wanted to farm out work by creating Kubernetes Jobs. In this case the orchestrator itself would need an RBAC setup; typically its Helm chart or other deployment YAML would contain a Role (to create Jobs), a ServiceAccount, and a RoleBinding, and set its own Deployment to run using that ServiceAccount. This isn't the "normal" case of a straightforward HTTP-based application (Deployment/Service/Ingress) with a backing database (StatefulSet/Service).
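As a rough sketch of what such a setup might look like (all names are made up for the example; the orchestrator's Deployment would then set serviceAccountName: orchestrator in its Pod spec):
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: orchestrator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]      # allow managing Jobs in this namespace only
  verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: orchestrator-job-creator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: job-creator
subjects:
- kind: ServiceAccount
  name: orchestrator
EOF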
... restrict reading data in Secrets ...
If you can kubectl get secret -o yaml then the Secret values are effectively there to read; they are base64-encoded but not encrypted at all. It's good practice to limit the ability to do this. That said, you can also create a Pod that mounts the Secret and make the main container's command dump the Secret value somewhere readable, so even then Secrets aren't that secret. It's still a good practice, but not required per se, particularly in an evaluation or test cluster.
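To illustrate that last point, a minimal sketch (the Secret name my-secret and its password key are assumptions):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-reader      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/secrets/password"]   # dumps the Secret value to the Pod logs
    volumeMounts:
    - name: the-secret
      mountPath: /secrets
  volumes:
  - name: the-secret
    secret:
      secretName: my-secret
EOF
kubectl logs secret-reader   # prints the decoded Secret value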

Backup and restore a Kubernetes Serviceaccount Token

Does anyone know if there is a way to back up and restore a ServiceAccount in Kubernetes so that it always has the same token? We use service account tokens quite often (for example when they act as an OAuth client in OpenShift), and it would be nice if we could reproduce a ServiceAccount with the same token in the event it's deleted.
I noticed that there is a way to manually create a service account token, as described in the Kubernetes docs. But as far as I know, this method will still auto-generate the secret contents.
Fundamentally, the ability to recreate a ServiceAccount with the same old token poses a security risk.
Ideally you should be taking periodic backups of etcd, which would include the secret token along with all other Kubernetes resources. In case of an unforeseen disaster you can restore etcd from the backup and get the secret token back. You can also use Velero to back up Kubernetes cluster resources.
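For example, a snapshot on a kubeadm-style control plane might look like this (the endpoint and certificate paths are assumptions; adjust them for your cluster):
# take an etcd v3 snapshot; paths match a typical kubeadm layout
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints https://127.0.0.1:2379 \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key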

GOCD agent registration with kubernetes

I want to register kubernetes-elastic-agents with gocd-server. According to the doc https://github.com/gocd/kubernetes-elastic-agents/blob/master/install.md
I need a Kubernetes security token and the cluster CA certificate. My Kubernetes cluster is running. How do I create a security token? Where can I find the cluster CA cert?
There are two answers:
The first is that it's very weird that one would need to manually input those things since they live in a well-known location on disk of any Pod (that isn't excluded via the automountServiceAccountToken field) as described in Accessing the API from a Pod
The second is that if you really do need a statically provisioned token belonging to a ServiceAccount, then you can either retrieve an existing token from the Secret that is created by default for every ServiceAccount, or create a second Secret as described in Manually create a service account API token
The CA cert you requested is present in every Pod in the cluster at the location mentioned in the first link, as well as in the ~/.kube/config of anyone who wishes to access the cluster. kubectl config view --raw -o yaml will show it to you (without --raw the certificate data is redacted).
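For reference, inside any Pod with token automounting enabled, both pieces sit at a well-known path:
# bearer token for the Pod's ServiceAccount
cat /var/run/secrets/kubernetes.io/serviceaccount/token
# cluster CA certificate
cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt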

How do I authenticate with Kubernetes kubectl using a username and password?

I've got a username and password, how do I authenticate kubectl with them?
Which command do I run?
I've read through: https://kubernetes.io/docs/reference/access-authn-authz/authorization/ and https://kubernetes.io/docs/reference/access-authn-authz/authentication/ though can not find any relevant information in there for this case.
kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_config_set-credentials/
The above does not seem to work:
kubectl get pods
Error from server (Forbidden): pods is forbidden: User "client" cannot list pods in the namespace "default": Unknown user "client"
Kubernetes provides a number of different authentication mechanisms. Providing a username and password directly to the cluster (as opposed to using an OIDC provider) would indicate that you're using basic authentication, which hasn't been the default option for a number of releases (static password files were removed entirely in Kubernetes 1.19).
The syntax you've listed appears right, assuming that the cluster supports basic authentication.
The error you're seeing suggests that the cluster you're using doesn't currently support the authentication method you're attempting.
Additional information about what Kubernetes distribution and version you're using would make it easier to provide a better answer, as there is a lot of variety in how k8s handles authentication.
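One other thing worth checking: kubectl config set-credentials only stores the user entry; the active context must also reference it. A sketch (the cluster and context names are placeholders):
kubectl config set-context my-context --cluster my-cluster --user cluster-admin
kubectl config use-context my-context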
You should have a group set for the authenticating user in the API server's static password file (each line is password,username,uid,"group1,group2,...").
Example:
password1,user1,userid1,system:masters
password2,user2,userid2
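These lines belong in the static password file handed to the API server at startup, assuming a cluster old enough to still support basic auth (the file path is an assumption, and the flag was removed in Kubernetes 1.19):
kube-apiserver --basic-auth-file=/etc/kubernetes/basic-auth.csv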
Reference:
"Use a credential with the system:masters group, which is bound to the cluster-admin super-user role by the default bindings."
https://kubernetes.io/docs/reference/access-authn-authz/rbac/

Kubernetes 1.6+ RBAC: Gain access as role cluster-admin via kubectl

Kubernetes 1.6+ sees a lot of changes revolving around RBAC and ABAC. However, what is a little quirky is that the dashboard etc. can no longer be accessed by default, as was previously possible.
Access will result in
User "system:anonymous" cannot proxy services in the namespace "kube-system".: "No policy matched."
Documentation in the k8s docs is plentiful, but it does not really state how, as the creator of a cluster, to practically gain access as cluster-admin.
What is a practical way to authenticate me as cluster-admin?
By far the easiest method is to use the credentials from /etc/kubernetes/admin.conf (this is on your master if you used kubeadm). Run kubectl proxy --kubeconfig=admin.conf on your client and then you can visit http://127.0.0.1:8001/ui from your browser.
You might need to change the master address in admin.conf after you copy it to your client machine.
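For example (the master hostname is a placeholder):
# copy the kubeadm admin kubeconfig from the master to the client
scp root@<master>:/etc/kubernetes/admin.conf .
# proxy the API server locally using those credentials
kubectl proxy --kubeconfig=admin.conf
# then browse to http://127.0.0.1:8001/ui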