How to write secrets to HashiCorp Vault or Azure Key Vault from Kubernetes?

I have come across injectors/drivers/et cetera for Kubernetes for most major secret providers, but the common theme with those solutions is that they only sync one way, i.e., only from the vault to the cluster. I want to be able to update the secrets too, from my Kubernetes cluster.
What is the recommended pattern for doing this? (Apart from the obvious solution of writing a custom service that communicates with the vault)

I'd say that this is an anti-pattern, meaning you shouldn't do that.
If you create your secret in k8s from a file, that means you either have it in version control, something you should never do, or you don't have it in version control (or create it from a literal), which is better, but then you have neither a change history/log nor real documentation of your secret. I guess that explains why the major secret providers don't support it.
You should instead set up the secret in the key vault and apply it to your cluster using Terraform, for example.
Terraform supports both Azure Key Vault secrets (https://www.terraform.io/docs/providers/azurerm/r/key_vault_secret.html) and Kubernetes Secrets (https://www.terraform.io/docs/providers/kubernetes/r/secret.html).
You can simply read the key vault secret and use it in the k8s Secret. Every time you update the key vault secret, you apply the changes with Terraform, as sketched below.
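A minimal sketch of that workflow, assuming the Azure CLI and a Terraform configuration that reads the Key Vault secret through a data source and writes it into a kubernetes_secret resource (all names here are hypothetical):

$ az keyvault secret set --vault-name my-vault --name db-password --value 'new-value'
$ terraform apply   # re-reads the Key Vault secret and patches the k8s Secret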

Related

How to pass configuration via argocd and crossplane

We are trying to create an environment using Crossplane and ArgoCD. Crossplane generates the database and saves the credentials to a Secret on the management cluster. We then deploy the credentials from the management cluster into a Secret on our destination cluster.
Now we need to pass the credentials from Secret A to Secret B, which the application knows about. The issue is that ArgoCD does not use helm install but helm template, so Helm's lookup function doesn't work. We thought about using Vault as a middleman, but we are not sure how to load values from a Secret into Vault.
Anyway, if you have encountered such an issue or have some sort of solution, we'll be very happy to hear it.
Thank you
You need to commit the (encrypted) secrets somewhere for ArgoCD to pick them up. That is the whole point of GitOps.
Alternatively, you can try using parameter overrides (https://argo-cd.readthedocs.io/en/stable/user-guide/parameters/), but this is considered a temporary workaround.
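For the "commit encrypted secrets" route, one common option is Bitnami's Sealed Secrets (mentioned elsewhere on this page). A rough sketch with hypothetical names (kubeseal encrypts against the controller's public key, so the resulting manifest is safe to commit for ArgoCD to apply):

$ kubectl create secret generic db-creds --from-literal=password=s3cr3t \
    --dry-run=client -o yaml | kubeseal --format yaml > db-creds-sealed.yaml
$ git add db-creds-sealed.yaml && git commit -m "Add sealed db credentials"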

What is the point of Kubernetes secrets if I can decode them?

I can easily get the secrets stored in Kubernetes.
$ kubectl get secret my-app-secrets -o yaml
Then I select the secret value from the output that I want to decode, for example ZXhwb3NlZC1wYXNzd29yZAo=:
$ echo ZXhwb3NlZC1wYXNzd29yZAo= | base64 --decode
> exposed-password
I'm not sure I understand the point of the Secret resource in the Kubernetes ecosystem, since it's this easy to obtain the value.
base64 is encoding, not encryption; it simply lets you represent information in a convenient way.
The data that you store may contain unrecognized characters, line feeds, etc., so it is convenient to encode it.
In Kubernetes, you can enable encryption of Secrets at rest by following the official instructions.
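As a rough sketch of what enabling encryption at rest involves (the file path and key below are placeholders): you write an EncryptionConfiguration and point the API server at it with --encryption-provider-config:

$ cat <<'EOF' > /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}   # fallback so existing unencrypted data stays readable
EOF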
But Kubernetes should not be the only source of truth; rather, Kubernetes should load these secrets from an external vault of your choosing, such as HashiCorp's Vault, as indicated in the comments.
In addition to HashiCorp Vault, there are various ways to store secrets in Git:
Helm secrets
Kamus
Sealed secrets
git-crypt
You may also be interested in the kubesec project, which can be used to analyze Kubernetes resources for security risks.
The point is that in Kubernetes, a Secret protects your password (which is what you were trying to do by encrypting it) by controlling access to the secret, rather than by encrypting it.
There are several mechanisms for this (a minimal RBAC sketch follows the list):
Secrets can only be accessed from within their own namespace.
Mounted Secrets have file permissions like any other file, so you choose who has access to them.
They are only sent to a pod when required, not before.
They're not written to local disk storage (they're kept in tmpfs on the node).
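Here is a minimal RBAC sketch of that access control (namespace and names are hypothetical): a Role that allows reading Secrets only in one namespace, which you then bind to the specific users who should have access.

$ kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: secret-reader
rules:
  - apiGroups: [""]            # "" means the core API group
    resources: ["secrets"]
    verbs: ["get", "list"]
EOF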
That said, in case something goes wrong, solutions such as Sealed Secrets, created by Bitnami, and others (see Mokrecov's answer) have arisen to give even more robustness to the matter, just in case someone undesired gains access to your secret.
Secrets in Kubernetes are separate manifests NOT to protect your secret data, but to separate it from your deployment/pod configuration.
Then it's up to you how to secure your secrets; there are many options, each with its pros and cons (see Mokrecov's answer). Secrets also have some advantages over other resource types: namespace restriction, separate access management, not being available in a pod before they're needed, and not being written to local disk storage.
Let's think about it the other way around and imagine there were no Secrets in Kubernetes. Your secret data would then live inside your deployment/pod/ConfigMap, and you would have several problems. For example:
You want to give all users access to the deployment manifest but restrict access to the secret data to persons A and B only. How do you do that?
If you want to encrypt the secrets, you have to encrypt everything together with the deployment data, which makes maintenance impossible. Or you can encrypt each secret value individually, but then you have to come up with a decryption mechanism for each of them, and the decryption keys will be unveiled at that point anyway.
You could use a ConfigMap to separate secret data from configuration, but when you then want to add an encryption mechanism or access restrictions, you are limited by the characteristics of ConfigMaps, whose intention is only to store non-secret data. With Secrets you have easy options to add encryption/restrictions.

Best practice for shared K8s Secrets in Helm 3?

I have a couple of Charts which all need access to the same Kubernetes Secret. My initial plan was to create a Chart just for those Secrets, but it seems Helm doesn't like that. I am thinking this must be a common problem and am wondering what folks generally do to solve it?
Thanks!
Best practice is not to store any sensitive secrets in Kubernetes clusters: a Kubernetes Secret is encoded, not encrypted.
You can reference the secret via AWS SSM/Secrets Manager, HashiCorp Vault, or similar services.
https://github.com/aws-samples/aws-workshop-for-kubernetes/tree/master/04-path-security-and-networking/401-configmaps-and-secrets
Most charts that follow the common chart development practices allow you to use an existing Secret instead of creating one for you. This way, you can create your common secrets normally (without Helm) and refer to them from the charts that need them, via a config key like existingSecret.
Take the minio Helm chart, for example: it accepts an existingSecret key as an alternative to passing an accessKey and a secretKey.
As you can see in the main charts repo, this is a pretty common practice.
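As a sketch of that pattern with the minio chart (the secret name is hypothetical, and the exact key names the chart expects may differ between chart versions):

$ kubectl create secret generic minio-creds \
    --from-literal=accesskey=myaccesskey \
    --from-literal=secretkey=mysecretkey
$ helm install my-minio minio/minio --set existingSecret=minio-creds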

Populate kubernetes Configmap from hashicorp vault

I want to populate ConfigMaps from data inside Vault in Kubernetes. I have just completed the setup of Vault, with Kubernetes (service account) and userpass as auth methods.
Can someone suggest an easy way to integrate these variables into an application? What do I add to the YAML file? If I can populate a ConfigMap, then I can easily use it in YAML.
Also, how will changes be reflected if a variable changes in Vault?
You can try using Vault CRD: when you create a custom resource of type Vault, it will create a Secret using the data from the vault.
You can use Vault CRD as Xavier Adaickalam mentioned.
Regarding the subject of variable changes, you have two ways of exposing variables inside Pods: using volumes and using environment variables. Volumes are updated automatically when the secrets are modified. Unfortunately, environment variables do not receive updates even if you modify your secrets; you have to restart your container if the values are modified.
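To illustrate the difference, here is a sketch of a Pod consuming the same (hypothetical) Secret both ways; the file under /etc/creds is refreshed by the kubelet when the Secret changes, while DB_PASSWORD keeps its value until the container is restarted:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      env:
        - name: DB_PASSWORD          # snapshot taken at container start
          valueFrom:
            secretKeyRef:
              name: db-creds
              key: password
      volumeMounts:
        - name: creds
          mountPath: /etc/creds      # updated in place when the Secret changes
  volumes:
    - name: creds
      secret:
        secretName: db-creds
EOF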

Secret management in Helm Charts

I am trying to use Helm charts to install applications in Kubernetes clusters. Can someone please suggest what would be a better solution for managing secrets: would helm-secrets be a good idea, or HashiCorp Vault?
Vault is technically awesome, but it can be an administrative burden. You can get strong protection of "secrets", whatever they may be; you can avoid ever sharing magic secrets like your central database password by generating single-use passwords; and if you need something signed or encrypted, you can ask Vault to do that for you and avoid ever having to know the cryptographic secret yourself. The big downsides are that it's a separate service to manage, getting secrets out of it is not totally seamless, and you occasionally need to get the administrators together to unseal it if you need to restart the server.
Kubernetes secrets are really just ConfigMaps with a different name. With default settings it's very easy for an operator to get out the value of a Secret (kubectl get secret ... -o yaml, then base64 decode the strings), so they're not actually that secret. If you have an interesting namespace setup, you generally can't access a Secret in a different namespace, which could mean being forced to copy around Secrets a lot. Using only native tools like kubectl to manage Secrets is also a little clumsy.
Pushing credentials in via Helm is probably the most seamless path – it's very easy to convert from a Helm value to a Secret object to push into a container, and very easy to push in values from somewhere like a CI system – but it is also the least secure. In addition to being able to dump out the values via kubectl, you can also run helm get values on a Helm release to find out the values.
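For example (release and secret names hypothetical):

$ helm get values my-release --all
$ kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 --decode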
So it's a question of how important keeping your secrets really secret is, and how much effort you want to put in. If you want seamless integration and can limit access to your cluster to authorized operators and effectively use RBAC, a Helm value might be good enough. If you can invest in the technically best but also most complex solution and you want some of its advanced capabilities, Vault works well. Maintaining a plain Kubernetes Secret is kind of a middle ground; it's a little more secure than using Helm but not nearly as manageable.