I'd like to know if kubectl offers an easy way to list all the secrets that a certain pod/deployment/statefulset is using, or if there is some way to cleanly retrieve this info. When doing a kubectl describe for a pod, I see I can get a list of mounted volumes which include the ones that come from secrets that I could extract using jq and the like, but this way feels a bit clumsy. I have been searching a bit to no avail. Do you know if there is anything like that around? Perhaps using the API directly?
To list all Secrets currently referenced by your pods through environment variables, use:
kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
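Note that this only catches Secrets referenced through environment variables. A sketch of companion queries for Secrets mounted as volumes or injected via envFrom (same approach, assuming jq is available):
# Secrets mounted as volumes
kubectl get pods -o json | jq -r '.items[].spec.volumes[]? | select(.secret) | .secret.secretName' | sort -u
# Secrets injected wholesale via envFrom
kubectl get pods -o json | jq -r '.items[].spec.containers[].envFrom[]? | select(.secretRef) | .secretRef.name' | sort -u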
On the other hand, if you want to access secrets stored in the API:
Kubernetes Secrets are, by default, stored unencrypted in the API
server's underlying data store (etcd). Anyone with API access can
retrieve or modify a Secret, and so can anyone with access to etcd.
Additionally, anyone who is authorized to create a Pod in a namespace
can use that access to read any Secret in that namespace; this includes
indirect access such as the ability to create a Deployment.
In order to safely use Secrets, take at least the following steps:
Enable Encryption at Rest for Secrets.
Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means).
Where appropriate, also use mechanisms such as RBAC to limit which principals are allowed to create new Secrets or replace existing ones.
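As a quick sanity check on such RBAC rules, you can ask the API server what a given principal may do (a sketch; the service account name is hypothetical):
# Can this service account read Secrets in the default namespace?
kubectl auth can-i get secrets --as=system:serviceaccount:default:my-app
kubectl auth can-i list secrets --as=system:serviceaccount:default:my-app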
If you want more information about secrets in kubernetes, follow this link.
Related
My pod is running with the default service account. My pod uses secrets through mounted files and config maps but this is defined in yaml and the pod does not contain kubectl or similar component.
Is there any point in using RBAC if I don't call the API? The best practices state "Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means)."
Only things that call the Kubernetes API, like the kubectl command and the various Kubernetes SDK libraries, use RBAC. For your basic application, you as the user need permission to create deployments, create secrets, etc., but if you have cluster-administrator permissions you don't need any special setup.
You could imagine an orchestrator application that wanted to farm out work by creating Kubernetes Jobs. In this case the orchestrator itself would need an RBAC setup; typically its Helm chart or other deployment YAML would contain a Role (to create Jobs), a ServiceAccount, and a RoleBinding, and set its own Deployment to run using that ServiceAccount. This isn't the "normal" case of a straightforward HTTP-based application (Deployment/Service/Ingress) with a backing database (StatefulSet/Service).
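A minimal sketch of that RBAC setup using kubectl's imperative commands (all names here are hypothetical; the equivalent YAML would go in the chart):
# Service account the orchestrator Deployment runs as
kubectl create serviceaccount orchestrator
# Role allowing it to manage Jobs
kubectl create role job-creator --verb=create --verb=get --verb=list --verb=watch --resource=jobs
# Bind the role to the service account
kubectl create rolebinding orchestrator-job-creator --role=job-creator --serviceaccount=default:orchestrator
The orchestrator's Deployment would then set serviceAccountName: orchestrator in its pod spec.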
... restrict reading data in Secrets ...
If you can kubectl get secret -o yaml then the Secret values are essentially there to read; they are base64 encoded but not encrypted at all. It's good practice to limit the ability to do this. That said, you can also create a Pod that mounts the Secret and make the main container's command dump the Secret value out somewhere readable, so even then Secrets aren't that secret. It's still a good practice, but not strictly required, particularly in an evaluation or test cluster.
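To illustrate that last point, a hypothetical throwaway Pod that dumps a mounted Secret (the Secret name is made up):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-dumper
spec:
  restartPolicy: Never
  containers:
  - name: dump
    image: busybox
    command: ["sh", "-c", "cat /secret/*"]   # prints the decoded Secret values
    volumeMounts:
    - name: s
      mountPath: /secret
  volumes:
  - name: s
    secret:
      secretName: my-app-secrets             # hypothetical Secret name
EOF
kubectl logs secret-dumper                   # the values appear in plain text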
Such as system:masters, system:anonymous, system:unauthenticated.
Is there a way to get all the system groups that are not created externally (just the built-in ones), via a kubectl command or from some list?
I searched the Kubernetes documentation but didn't find a list or a way to get it.
There is no built-in command to list all the default user groups in a Kubernetes cluster.
However, you can work around this in several ways:
You can create your own custom script (e.g. in Bash) based on the kubectl get clusterrole command; see the sketch after this list.
You can try installing a plugin. The rakkess plugin could help you:
Have you ever wondered what access rights you have on a provided kubernetes cluster? For single resources you can use kubectl auth can-i list deployments, but maybe you are looking for a complete overview? This is what rakkess is for. It lists access rights for the current user and all server resources, similar to kubectl auth can-i --list.
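For the script route, a sketch that pulls the system: groups out of the cluster role bindings (it queries clusterrolebindings, since that is where group subjects appear):
kubectl get clusterrolebindings -o json \
  | jq -r '.items[].subjects[]? | select(.kind == "Group") | .name' \
  | grep '^system:' | sort -u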
See also more information about:
kubelet authentication / authorization
anonymous requests
How can we obtain the count of pods running in a GKE cluster? I found there are ways to get the node count, but we need the pod count as well. It would be better if we could use something that doesn't require logging in GCP Operations.
You can do it with the Kubernetes Python client library, as shown in this question posted by Pradeep Padmanaban C, where he was looking for a more effective way of doing it. His example is actually the best you can do to perform such an operation, as there is no specific method that would allow you to just count pods without retrieving their entire JSON manifests:
from kubernetes import client, config

# Load credentials from your kubeconfig file
config.load_kube_config()
v1 = client.CoreV1Api()

# Retrieve all pods across all namespaces and count them
ret_pod = v1.list_pod_for_all_namespaces(watch=False)
print(len(ret_pod.items))
You can also use a different method, which retrieves pods only from a specific namespace, e.g.:
ret_pod = v1.list_namespaced_pod("default")
print(len(ret_pod.items))
With kubectl, you can do it as follows (as proposed here by RammusXu):
kubectl get pods --all-namespaces --no-headers | wc -l
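Note that this counts pods in every phase; if you only want pods that are actually running, a field selector narrows it down:
kubectl get pods --all-namespaces --field-selector=status.phase=Running --no-headers | wc -l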
You can directly access the Kubernetes API using a RESTful API call. You will need to make sure you authenticate the call by including a bearer token.
Once you are able to query the API server directly, you can use GET <master_endpoint>/api/v1/pods to list all the pods in the cluster. You can also search within a specific namespace by specifying it in the path: /api/v1/namespaces/<namespace>/pods.
Keep in mind that the kubectl CLI tool is just a wrapper for API calls: each kubectl command forms a RESTful API call in a format similar to the one listed above, so any interaction you have with the cluster using kubectl can also be achieved through RESTful API calls.
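A sketch of such a call with curl, assuming $APISERVER holds the master endpoint, $TOKEN a valid bearer token, and ca.crt the cluster's CA certificate:
curl -sS --cacert ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/api/v1/pods" | jq '.items | length'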
I can easily get the secrets stored in Kubernetes.
$ kubectl get secret my-app-secrets -o yaml
Then I select the secret value from the output that I want to decode.
Example ZXhwb3NlZC1wYXNzd29yZAo=
$ echo ZXhwb3NlZC1wYXNzd29yZAo= | base64 --decode
> exposed-password
I'm not sure I understand the effectiveness of the Secret resource in the Kubernetes ecosystem, since it's this easy to obtain the stored values.
base64 is encoding, not encryption; it simply lets you represent information in a convenient way.
The data you store may contain unrecognized characters, line feeds, etc., so it is convenient to encode it.
In Kubernetes, you can enable encryption at rest using this instruction.
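For reference, a sketch of what such an encryption-at-rest configuration file can look like (the key below is a placeholder; the file is passed to the API server via its --encryption-provider-config flag):
cat > encryption-config.yaml <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>
  - identity: {}
EOF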
But Kubernetes should not be the only source of truth; rather, Kubernetes should load these secrets from an external vault of your choice, such as HashiCorp's Vault, as indicated in the comments.
In addition to HashiCorp Vault, there are various ways to store secrets in git:
Helm secrets
Kamus
Sealed secrets
git-crypt
You may also be interested in the kubesec project, which can be used to analyze kubernetes resources for security risks.
The point is that in Kubernetes, a Secret lets you protect your password by controlling access to it, instead of by encrypting it (which is what you were trying to achieve).
There are several mechanisms for this:
Secrets can only be accessed by Pods in their own namespace (see the example after this list).
Secret files have permissions like any other file, so you choose who has access to them.
They are only sent to a node when a Pod on that node requires them, not before.
They are not written to local disk storage on the node (they are kept in tmpfs).
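For example, the namespace restriction means a Secret simply isn't visible from another namespace (hypothetical names):
kubectl get secret my-app-secrets --namespace other-team   # fails with NotFound: Secrets are namespaced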
That said, in case something goes wrong, solutions such as Sealed Secrets, created by Bitnami, and others (see Mokrecov's answer) have arisen to give even more robustness to the matter, just in case someone undesired gains access to your Secret.
Secrets in kubernetes are separate manifests NOT to protect your secret data, but to separate your secret data from your deployment/pod configuration.
Then it's up to you how to secure your secrets; there are many options, each with its pros and cons (see Mokrecov's answer). There are also some advantages of Secrets compared to other types, like namespace restriction, separate access management, not being available on a node before a Pod needs them, and not being written to local disk storage.
Let's think the other way around, and imagine there weren't any Secrets in Kubernetes. Now your secret data will be inside your deployment/pod/configmap, and you have several problems. For example:
You want to give access to deployment manifest to all users but restrict access to Secrets to person A and B only. How do you do that?
If you want to encrypt secrets, you will have to encrypt all of the data together with the deployment data, which will make maintenance impossible. Or you can encrypt each secret value individually, but then you have to come up with a decryption mechanism for each of them, and the decryption keys will be revealed at that stage anyway.
You can use a ConfigMap to separate secret data from configuration. But when you then want to add an encryption mechanism or access restrictions, you will be limited by the characteristics of ConfigMap, because its intention is only to store non-secret data. With Secrets you have easy options to add encryption/restrictions.
I am trying to use Helm charts to install applications in Kubernetes clusters. Can someone please suggest what could be a better solution to manage secrets? Using helm secrets would be a good idea or Hashicorp Vault?
Vault is technically awesome, but it can be an administrative burden. You get strong protection of "secrets", whatever they may be; you can avoid ever sharing magic secrets like your central database password by generating single-use passwords; if you need something signed or encrypted, you can ask Vault to do that for you and avoid ever having to know the cryptographic secret yourself. The big downsides are that it's a separate service to manage, getting secrets out of it is not totally seamless, and you occasionally need to have an administrator party to unseal it if you need to restart the server.
Kubernetes secrets are really just ConfigMaps with a different name. With default settings it's very easy for an operator to get out the value of a Secret (kubectl get secret ... -o yaml, then base64 decode the strings), so they're not actually that secret. If you have an interesting namespace setup, you generally can't access a Secret in a different namespace, which could mean being forced to copy around Secrets a lot. Using only native tools like kubectl to manage Secrets is also a little clumsy.
Pushing credentials in via Helm is probably the most seamless path – it's very easy to convert from a Helm value to a Secret object to push into a container, and very easy to push in values from somewhere like a CI system – but also the least secure. In addition to being able to dump out the values via kubectl, you can also run helm get values on a Helm release to find them out.
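For example, anyone with Helm access to the release can recover those values (the release name is hypothetical):
helm get values my-release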
So it's a question of how important keeping your secrets really secret is, and how much effort you want to put in. If you want seamless integration and can limit access to your cluster to authorized operators and effectively use RBAC, a Helm value might be good enough. If you can invest in the technically best and also most complex solution and you want some of its advanced capabilities, Vault works well. Maintaining a plain Kubernetes Secret is kind of a middle ground: it's a little more secure than using Helm but not nearly as manageable.