Back up and restore a Kubernetes ServiceAccount token

Does anyone know if there is a way to back up and restore a ServiceAccount in Kubernetes so that it always has the same token? We use service account tokens quite often (for example, when they act as an OAuth client in OpenShift), and it would be nice if we could reproduce a ServiceAccount with the same token in the event it is deleted.
I noticed that there is a way to manually create a service account token, as described here. But as far as I know, this method will still auto-generate the secret contents.

Fundamentally, the ability to recreate a ServiceAccount with the same old token poses a security risk.
Ideally you should be taking periodic backups of etcd, which contain the secret token along with all other Kubernetes resources. In the event of an unforeseen disaster you can restore etcd from the backup and you will get the secret token back. You can also use Velero to back up Kubernetes cluster resources.
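As a rough sketch, assuming a kubeadm-style control plane (the certificate paths and backup locations below are typical defaults, not universal), an etcd snapshot can be taken and restored with etcdctl:
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
# Restore into a fresh data directory after a disaster
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
  --data-dir=/var/lib/etcd-restored
With Velero installed, a namespace-scoped backup looks roughly like this (the backup name and namespace are illustrative):
velero backup create sa-backup --include-namespaces my-namespace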

Related

K8s RBAC needed when no API calls?

My pod is running with the default service account. My pod uses Secrets through mounted files and ConfigMaps, but this is defined in YAML and the pod does not contain kubectl or a similar component.
Is there any point in using RBAC if I don't call the API? The best practices state: "Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means)."
Only things that call the Kubernetes API, like the kubectl command and the various Kubernetes SDK libraries, use RBAC. For your basic application, you as the user need permission to create Deployments, create Secrets, and so on, but if you have cluster-administrator permissions you don't need any special setup.
You could imagine an orchestrator application that wanted to farm out work by creating Kubernetes Jobs. In this case the orchestrator itself would need an RBAC setup; typically its Helm chart or other deployment YAML would contain a Role (to create Jobs), a ServiceAccount, and a RoleBinding, and set its own Deployment to run using that ServiceAccount. This isn't the "normal" case of a straightforward HTTP-based application (Deployment/Service/Ingress) with a backing database (StatefulSet/Service).
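As a rough sketch of what that orchestrator's RBAC objects might look like (every name here is made up; its Deployment would then set serviceAccountName: orchestrator in the pod spec):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: orchestrator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: orchestrator-job-creator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: job-creator
subjects:
- kind: ServiceAccount
  name: orchestrator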
... restrict reading data in Secrets ...
If you can kubectl get secret -o yaml then the Secret values are right there to read; they are base64-encoded but not encrypted at all. It's good practice to limit the ability to do this. That said, you can also create a Pod that mounts the Secret and make the main container command dump the Secret value out somewhere readable, so even then Secrets aren't that secret. It's still a good practice, but not strictly required, particularly in an evaluation or test cluster.
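As a sketch of that second point (my-secret and the other names are placeholders), a throwaway Pod like this prints every key of a mounted Secret, readable afterwards with kubectl logs secret-dumper:
apiVersion: v1
kind: Pod
metadata:
  name: secret-dumper
spec:
  restartPolicy: Never
  volumes:
  - name: target
    secret:
      secretName: my-secret
  containers:
  - name: dump
    image: busybox
    # cat every key of the mounted Secret to stdout
    command: ["sh", "-c", "cat /mnt/secret/*"]
    volumeMounts:
    - name: target
      mountPath: /mnt/secret
      readOnly: true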

Is a Kubernetes Secret really secret?

While developing an API server, I needed to give some account information to the API server, and this information should not be shown to anyone.
K8s recommends a Secret for this kind of situation, so I used one.
But I wonder whether the Secret is really secret.
A Secret is just base64 "encoded" text, not "encrypted".
When I see an arbitrary Secret like the one below,
namespace: ZGVmYXVsdA==
I can easily recover the real value just by decoding it.
namespace: default
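For example, with nothing more than the base64 tool (using the value shown above):
echo ZGVmYXVsdA== | base64 -d
# prints: default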
In a situation like this, is a Secret really helpful for security?
The only security advantage of a Secret that I know of is that it is kept in memory rather than on the node's file system.
But I think that is not enough for security.
Thank you.
From Kubernetes Secrets documentation:
Risks
In the API server, secret data is stored in etcd (by default, etcd data is not encrypted); therefore:
Administrators should enable encryption at rest for cluster data (requires v1.13 or later).
Administrators should limit access to etcd to admin users.
Administrators may want to wipe/shred disks used by etcd when no longer in use.
If running etcd in a cluster, administrators should make sure to use SSL/TLS for etcd peer-to-peer communication.
If you configure the secret through a manifest (JSON or YAML) file which has the secret data encoded as base64, sharing this file or checking it in to a source repository means the secret is compromised. Base64 encoding is not an encryption method and is considered the same as plain text.
Applications still need to protect the value of secret after reading it from the volume, such as not accidentally logging it or transmitting it to an untrusted party.
A user who can create a Pod that uses a secret can also see the value of that secret. Even if the API server policy does not allow that user to read the Secret, the user could run a Pod which exposes the secret.
Currently, anyone with root permission on any node can read any secret from the API server, by impersonating the kubelet. It is a planned feature to only send secrets to nodes that actually require them, to restrict the impact of a root exploit on a single node.
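As a sketch of the first recommendation (enabling encryption at rest), the API server is pointed at an EncryptionConfiguration file via the --encryption-provider-config flag; the key below is a placeholder and must be replaced with your own randomly generated, base64-encoded 32-byte key:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}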
Also check out the great post Can Kubernetes Keep a Secret? It all depends what tool you're using, especially the "What's wrong with Kubernetes plain Secrets?" part.
I hope that answered your question, but generally @Harsh Manvar is right: you need access to that Secret in the first place.
You should limit access using authorization policies such as RBAC.
You'll need to create a Role/ClusterRole with the appropriate permissions and then bind it (using a RoleBinding/ClusterRoleBinding) to a user and/or a service account (which can then be used in a pod definition), depending on your use case.
You can look at the documentation here to create Role & ClusterRole and the docs here for RoleBinding and ClusterRoleBinding.
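As an illustration (all names here are made up), a namespaced Role that permits reading Secrets, bound only to the single ServiceAccount that genuinely needs it, could look like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader
subjects:
- kind: ServiceAccount
  name: my-api-server
  namespace: default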

Best practice for managing the cluster from the outside

I have a best-practice question:
I have a Django application running somewhere (non-k8s) where my end-user accounts are tracked. Separately, I have a k8s cluster that has a service for every user. When a new user signs up in Django, a new service should be created in the cluster.
What is the best practice for doing this? Two options I see are:
Have a long-lived service in the cluster, something like user-pod-creator, which exposes an API to the Django side, allowing it to ask for a pod to be created.
Give the Django permissions to use the cluster's API directly; have it create (and delete) pods as it wishes.
Intuitively I prefer the first because of the separation of concerns it creates and for security reasons. But the second would give the Django app a lot of flexibility: it could not only create and delete pods, but also gain more visibility into the cluster if need be through direct API calls, instead of me having to expose new API endpoints in user-pod-creator or some other service.
Option 2 is a valid approach and can be solved with a service account.
Create a ServiceAccount for your Django app:
kubectl create serviceaccount django
This ServiceAccount points to a Secret, and this Secret contains a token.
Find out the Secret associated with the ServiceAccount:
kubectl get serviceaccount django -o yaml
Get the token from the Secret (the value is stored base64-encoded, so decode it):
kubectl get secret django-token-d2tz4 -o jsonpath='{.data.token}' | base64 --decode
Now you can use this token as an HTTP bearer token in the Kubernetes API requests from your Django app outside the cluster.
That is, include the token in the HTTP Authorization header of the Kubernetes API requests like this:
Authorization: Bearer <TOKEN>
In this way, your request passes the authentication stage in the API server. However, the service account has no permissions yet (authorisation).
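A quick way to test this from the Django host is a plain curl call; the API server address below is a placeholder, and --cacert should point at your cluster's CA certificate:
TOKEN=$(kubectl get secret django-token-d2tz4 -o jsonpath='{.data.token}' | base64 --decode)
curl --cacert ca.crt -H "Authorization: Bearer $TOKEN" https://<api-server>:6443/api/v1/namespaces/default/pods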
You can assign the required permissions to your service account with Roles and RoleBindings:
kubectl create role django --verb <...> --resource <...>
kubectl create rolebinding django --role django --serviceaccount <namespace>:django
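For example, if the app only needs to manage Pods and Services in the default namespace (the verbs and resources here are purely illustrative):
kubectl create role django --verb=get,list,create,delete --resource=pods,services
kubectl create rolebinding django --role=django --serviceaccount=default:django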
Regarding security: grant only the minimum permissions needed to the service account (principle of least privilege), and if you think someone has stolen the service account token, you can simply delete the service account with kubectl delete serviceaccount django to invalidate the token.
See also here for an example of the presented approach. Especially:
Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API.

GOCD agent registration with kubernetes

I want to register kubernetes-elastic-agents with gocd-server. According to the doc https://github.com/gocd/kubernetes-elastic-agents/blob/master/install.md
I need a Kubernetes security token and the cluster CA certificate. My Kubernetes cluster is running. How do I create a security token? Where can I find the cluster CA cert?
Jake
There are two answers:
The first is that it's very weird that one would need to manually input those things since they live in a well-known location on disk of any Pod (that isn't excluded via the automountServiceAccountToken field) as described in Accessing the API from a Pod
The second is that if you really do need a statically provisioned token belonging to a ServiceAccount, then you can either retrieve an existing token from the Secret that is created by default for every ServiceAccount, or create a second Secret as described in Manually create a service account API token
The CA cert you requested is present in every Pod in the cluster at the location mentioned in the first link, as well as in the ~/.kube/config of anyone who wishes to access the cluster. kubectl config view --raw -o yaml will show it to you.
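Concretely, something like the following should work (the Secret name is whatever kubectl describe serviceaccount shows for your ServiceAccount; the in-Pod paths are the standard mount locations):
# Token for a ServiceAccount, taken from its Secret (name is illustrative)
kubectl get secret <sa-secret-name> -o jsonpath='{.data.token}' | base64 --decode
# Inside any Pod, the token and CA cert are mounted here by default
cat /var/run/secrets/kubernetes.io/serviceaccount/token
cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# From your kubeconfig (assumes the first cluster entry is the one you want)
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 --decode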

Kubernetes 1.6+ RBAC: Gain access as role cluster-admin via kubectl

Kubernetes 1.6+ sees a lot of changes revolving around RBAC and ABAC. However, what is a little quirky is that the dashboard etc. can no longer be accessed by default, as was previously possible.
Access will result in
User "system:anonymous" cannot proxy services in the namespace "kube-system".: "No policy matched."
There is plenty of documentation in the k8s docs, but it doesn't really state how, as the creator of a cluster, to practically gain access as cluster-admin.
What is a practical way to authenticate me as cluster-admin?
By far the easiest method is to use the credentials from /etc/kubernetes/admin.conf (this is on your master node if you used kubeadm). Run kubectl proxy --kubeconfig=admin.conf on your client and then you can visit http://127.0.0.1:8001/ui from your browser.
You might need to change the master address in admin.conf after you copy it to your client machine.
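For instance (the cluster name kubernetes is the kubeadm default and may differ in your setup; the address is a placeholder):
kubectl --kubeconfig=admin.conf config set-cluster kubernetes --server=https://<master-ip>:6443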