While developing an API server, I needed to provide it with some account information that should not be visible to anyone.
Kubernetes recommends a Secret for this kind of situation, so I used one.
But I wonder if the secret is really secret.
A Secret is just base64-"encoded" text, not "encrypted".
When I see an arbitrary Secret value like the one below,
namespace: ZGVmYXVsdA==
I can easily recover its real value by decoding it:
namespace: default
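The round trip above can be reproduced with standard tools, which is the asker's point: base64 is reversible encoding, not encryption, so anyone holding the encoded value can recover the plaintext.

```shell
# Encode and decode the value shown above; no key is involved.
printf 'default' | base64            # prints ZGVmYXVsdA==
printf 'ZGVmYXVsdA==' | base64 --decode   # prints default
```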
In a situation like this, is a Secret really helpful for security?
The security advantage of Secrets that I know of is that they are kept in memory rather than on the node's file system.
But I don't think that is enough for security.
Thank you.
From Kubernetes Secrets documentation:
Risks
In the API server, secret data is stored in etcd (by default, etcd data is not encrypted); therefore:
Administrators should enable encryption at rest for cluster data (requires v1.13 or later).
Administrators should limit access to etcd to admin users.
Administrators may want to wipe/shred disks used by etcd when no longer in use.
If running etcd in a cluster, administrators should make sure to use SSL/TLS for etcd peer-to-peer communication.
If you configure the secret through a manifest (JSON or YAML) file which has the secret data encoded as base64, sharing this file or checking it in to a source repository means the secret is compromised. Base64 encoding is not an encryption method and is considered the same as plain text.
Applications still need to protect the value of secret after reading it from the volume, such as not accidentally logging it or transmitting it to an untrusted party.
A user who can create a Pod that uses a secret can also see the value of that secret. Even if the API server policy does not allow that user to read the Secret, the user could run a Pod which exposes the secret.
Currently, anyone with root permission on any node can read any secret from the API server, by impersonating the kubelet. It is a planned feature to only send secrets to nodes that actually require them, to restrict the impact of a root exploit on a single node.
Also check the great post Can Kubernetes Keep a Secret? It all depends what tool you’re using, especially the "What’s wrong with Kubernetes plain Secrets?" part.
I hope that answered your question, but generally @Harsh Manvar is right: you need access to that Secret in the first place.
You should limit access using authorization policies such as RBAC.
You'll need to create a Role/ClusterRole with appropriate permissions and then bind (using RoleBinding/ClusterRoleBinding) that to a user and/or a service account (can be used in pod definition then), depending on your use case.
You can look at the documentation here to create Role & ClusterRole and the docs here for RoleBinding and ClusterRoleBinding.
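As a minimal sketch of what that looks like (the names `secret-reader`, `read-secrets`, and the ServiceAccount `my-app` are illustrative assumptions, not from the question):

```yaml
# Role granting read access to Secrets in one namespace,
# bound to a hypothetical ServiceAccount "my-app".
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-secrets
subjects:
- kind: ServiceAccount
  name: my-app
  namespace: default
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```

Pods that run under that ServiceAccount (via `spec.serviceAccountName`) can then read Secrets in `default`, while other users cannot.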
Related
My pod is running with the default service account. My pod uses secrets through mounted files and config maps but this is defined in yaml and the pod does not contain kubectl or similar component.
Is there a point of using RBAC for anything if I don't call the API? The best practices state "Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means)."
Only things that call the Kubernetes API, like the kubectl command and the various Kubernetes SDK libraries, use RBAC. For your basic application, you as the user need permission to create deployments, create secrets, etc., but if you have cluster-administrator permissions you don't need any special setup.
You could imagine an orchestrator application that wanted to farm out work by creating Kubernetes Jobs. In this case the orchestrator itself would need an RBAC setup; typically its Helm chart or other deployment YAML would contain a Role (to create Jobs), a ServiceAccount, and a RoleBinding, and set its own Deployment to run using that ServiceAccount. This isn't the "normal" case of a straightforward HTTP-based application (Deployment/Service/Ingress) with a backing database (StatefulSet/Service).
... restrict reading data in Secrets ...
If you can kubectl get secret -o yaml then the Secret values are effectively there to read; they are base64-encoded but not encrypted at all. It's good practice to limit the ability to do this. That said, you can also create a Pod, mount the Secret into it, and make the main container's command dump the Secret value out somewhere readable, so even then Secrets aren't that secret. It's still a good practice, but not required per se, particularly in an evaluation or test cluster.
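The Pod-based escalation described above can be sketched like this (the Secret name `my-secret` is a hypothetical placeholder):

```yaml
# Any user allowed to create Pods that mount "my-secret" can read it,
# regardless of whether RBAC lets them read the Secret directly.
apiVersion: v1
kind: Pod
metadata:
  name: secret-dumper
spec:
  restartPolicy: Never
  containers:
  - name: dump
    image: busybox
    command: ["sh", "-c", "cat /secrets/*"]   # prints every key's value to the log
    volumeMounts:
    - name: secret-vol
      mountPath: /secrets
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret
```

This is why the best practice also says "including via indirect means": restricting `get secret` alone is not enough if Pod creation is unrestricted.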
Does anyone know if there is a way to back up and restore a ServiceAccount in Kubernetes so that it always has the same token? We use service account tokens quite often (for example, when they act as an OAuth client in OpenShift), and it would be nice if we could reproduce a ServiceAccount with the same token in the event it's deleted.
I noticed that there is a way to manually create a service account token, as described here. But as far as I know, this method will still auto-generate the secret contents.
Fundamentally, the ability to reproduce a new ServiceAccount with the same old token poses a security risk.
Ideally you should be taking periodic backups of etcd, which would include the secret token along with all other Kubernetes resources. In case of an unforeseen disaster you can restore etcd from the backup and you will get the secret token back. You can also use Velero to back up Kubernetes cluster resources.
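A minimal sketch with the Velero CLI (the backup name and resource filter are illustrative assumptions; this requires a cluster with Velero installed):

```shell
# Back up Secrets and ServiceAccounts cluster-wide, then restore them later
$ velero backup create sa-tokens --include-resources secrets,serviceaccounts
$ velero restore create --from-backup sa-tokens
```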
I can easily get the secrets stored in Kubernetes.
$ kubectl get secret my-app-secrets -o yaml
Then I select the secret value I want to decode from the output.
Example ZXhwb3NlZC1wYXNzd29yZAo=
$ echo ZXhwb3NlZC1wYXNzd29yZAo= | base64 --decode
> exposed-password
I'm not sure I understand the effectiveness of the Secret resource in the Kubernetes ecosystem, since it's this easy to obtain the values.
base64 is an encoding, not encryption; it simply represents information in a convenient form.
The data you encode may contain unprintable characters, line feeds, and so on, so it is convenient to encode it.
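For example, a value containing line feeds encodes to a single printable token and decodes back losslessly:

```shell
# base64 turns arbitrary bytes (including newlines) into one printable
# token; decoding restores them exactly.
secret=$(printf 'line1\nline2' | base64)
echo "$secret"
printf '%s' "$secret" | base64 --decode
```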
In Kubernetes, you can enable encryption at rest using this instruction.
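A sketch of what that instruction boils down to: an EncryptionConfiguration file passed to the kube-apiserver via `--encryption-provider-config` (the key below is a placeholder; generate your own, e.g. with `head -c 32 /dev/urandom | base64`):

```yaml
# Encrypt Secrets at rest in etcd with an AES-CBC key held on the control plane.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64_ENCODED_32_BYTE_KEY>
      - identity: {}   # fallback so existing unencrypted data stays readable
```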
But Kubernetes should not be the only source of truth; rather, Kubernetes should load these secrets from an external vault of your choice, such as HashiCorp's Vault, as indicated in the comments.
In addition to HashiCorp Vault, there are various ways to store secrets in git:
Helm secrets
Kamus
Sealed secrets
git-crypt
You may also be interested in the kubesec project, which can be used to analyze kubernetes resources for security risks.
The point is that in Kubernetes, a Secret protects your password by controlling access to it, rather than by encrypting it.
There are several mechanisms for it:
Secrets can only be accessed by workloads in their very same namespace.
Secrets have permissions like any other file, so you choose who has access to them.
They are only sent to pods whenever required, not before.
They're not written in local disk storage.
That said, in case something goes wrong, solutions such as Bitnami's Sealed Secrets and others (see Mokrecov's answer) have arisen to give even more robustness to the matter, in case someone undesired gains access to your secret.
Secrets in Kubernetes are separate manifests NOT to protect your secret data, but to separate your secret data from your deployment/pod configuration.
It is then up to you how to secure your secrets; there are many options, each with its pros and cons (see Mokrecov's answer). Secrets also have some advantages over other resource types: namespace restriction, separate access management, not being available to a pod before they're needed, and not being written to local disk storage.
Let's think about it the other way around: imagine there weren't any Secrets in Kubernetes. Your secret data would then live inside your deployment/pod/ConfigMap, and you would have several problems. For example:
You want to give access to deployment manifest to all users but restrict access to Secrets to person A and B only. How do you do that?
If you want to encrypt secrets, you will have to encrypt all the data together with the deployment data, which makes maintenance impossible. Or you can encrypt each secret value individually, but then you have to come up with a decryption mechanism for each of them, and the decryption keys will be exposed at that stage anyway.
You can use a ConfigMap to separate secret data from configuration. But then, when you want to add an encryption mechanism or access restrictions, you will be limited by the characteristics of ConfigMap, because its intention is only to store non-secret data. With Secrets you have easy options to add encryption and restrictions.
When Kubernetes creates Secrets, does it encrypt the given user name and password with a certificate?
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
It depends, but it can be encrypted at rest. Secrets are stored in etcd (the database used to store all Kubernetes objects), and you can enable a Key Management Service (KMS) provider that will be used to encrypt them. You can find all the relevant details in the documentation.
Please note that this does not protect the manifest files, which are not encrypted. Secrets are only encrypted in etcd; when you retrieve them with kubectl or through the API, you get them back decoded.
If you wish to encrypt also the manifest files, there are multiple good solutions to that, like Sealed Secrets, Helm Secrets or Kamus. You can read more about them on my blog post.
Secrets are stored in etcd, which is a highly-available key-value store for cluster data. Data can be encrypted at rest, but by default the identity provider is used to protect secrets in etcd, and it provides no encryption.
EncryptionConfiguration was introduced to encrypt secrets locally, with a locally managed key.
Encrypting secrets with a locally managed key protects against an etcd compromise, but it fails to protect against a host compromise.
Since the encryption keys are stored on the host in the EncryptionConfig YAML file, a skilled attacker can access that file and extract the encryption keys. This was a stepping stone in development to the kms provider, introduced in 1.10, and beta since 1.12. Envelope encryption creates dependence on a separate key, not stored in Kubernetes.
In this case, an attacker would need to compromise etcd, the kubeapi-server, and the third-party KMS provider to retrieve the plaintext values, providing a higher level of security than locally-stored encryption keys.
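A sketch of the kms provider configuration described above (the plugin name and socket path are placeholders; the actual values depend on which KMS plugin you deploy):

```yaml
# Envelope encryption: the data encryption key is itself encrypted by an
# external KMS reached through a local gRPC socket, so no master key
# lives in this file.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: myKmsPlugin
          endpoint: unix:///var/run/kms-plugin.sock
          cachesize: 1000
          timeout: 3s
      - identity: {}
```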
You can find more information here: secrets, encryption.
I hope it helps.
I want to register kubernetes-elastic-agents with the gocd-server. Per the doc https://github.com/gocd/kubernetes-elastic-agents/blob/master/install.md
I need a Kubernetes security token and the cluster CA certificate. My Kubernetes cluster is running. How do I create a security token? Where can I find the cluster CA cert?
Jake
There are two answers:
The first is that it's very weird that one would need to manually input those things, since they live in a well-known location on the disk of any Pod (that isn't excluded via the automountServiceAccountToken field), as described in Accessing the API from a Pod.
The second is that if you really do need a statically provisioned token belonging to a ServiceAccount, then you can either retrieve an existing token from the Secret that is created by default for every ServiceAccount, or create a second Secret as described in Manually create a service account API token
The CA cert you requested is present in every Pod in the cluster at the location mentioned in the first link, as well as in the ~/.kube/config of anyone who wishes to access the cluster. kubectl config view -o yaml will show it to you.
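A sketch of retrieving both values from the token Secret auto-created for a ServiceAccount (the `default` ServiceAccount is used here as an assumption; substitute the Secret name printed by the first command, and note that inside a Pod the same files are mounted under /var/run/secrets/kubernetes.io/serviceaccount/):

```shell
# Find the token Secret auto-created for the "default" ServiceAccount
$ kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}'
# Extract and decode the token and the CA certificate from that Secret
$ kubectl get secret <secret-name> -o jsonpath='{.data.token}' | base64 --decode
$ kubectl get secret <secret-name> -o jsonpath='{.data.ca\.crt}' | base64 --decode
```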