I'm using a Kubernetes operator that generates the original secret data. I need to use this secret, with some value mapping, in another secret that is consumed by a deployment. I can't/won't modify the Deployment to use the generated secret directly. How can this be implemented?
There are few good solutions for accessing cluster-external secrets. Maybe there is a custom operator/controller, etc., that can generate a secret from a template and values taken from another secret?
Details: https://github.com/kubernetes/kubernetes/issues/97062
Related
I am using Helm w/ Kubernetes and am trying to add data that I have in an existing Configmap to an existing secret. The reason for this, is that there is a property on a CRD that I need to set which only takes in a single secret key ref. The existing secret is created by Vault, and the existing Configmap is configured in the Helm chart in plain text. For reasons that I won't get into, we cannot include the content of the configmap into the Vault secret entry, so I MUST be able to merge these two into a secret.
I've tried searching for this, but most answers I see involve creating an initContainer and setting up a volume, but unfortunately I don't think this will work for my situation. I just need a single secret that I can reference in a CRD and problem solved. Is this possible using Kubernetes/Helm?
My fallback plan is to create my own CRD and associated controller to merge the configmap data and the secret's data and basically create a new secret, but it seems like overkill.
As far as I am aware, there is no way to do this natively in Kubernetes.
The only solution that I can see would be to implement some tooling yourself. With something like kopf you could implement a simple operator that listens for the creation/update of a specific Secret and ConfigMap, gets their data, and merges it into a new Secret.
Using an operator allows you to handle all the cases that might occur during the life of your resources, such as when your new secret is deleted or updated, etc.
I have come across injectors/drivers/et cetera for Kubernetes for most major secret providers, but the common theme with those solutions is that they only sync one way, i.e., only from the vault to the cluster. I want to be able to update the secrets too, from my Kubernetes cluster.
What is the recommended pattern for doing this? (Apart from the obvious solution of writing a custom service that communicates with the vault)
I'd say that this is an anti-pattern, meaning you shouldn't do that.
If you create your secret in k8s from a file, that means you either have it in version control, which you should never do, or you don't have it in version control (or create it from a literal), which is good, but then you have neither a change history/log nor real documentation of your secret. I guess that explains why the major secret providers don't support this.
You should set up the secret using the key vault and apply it to your cluster using Terraform for example.
Terraform supports both Azure Key Vault secrets (https://www.terraform.io/docs/providers/azurerm/r/key_vault_secret.html) and Kubernetes secrets (https://www.terraform.io/docs/providers/kubernetes/r/secret.html).
You can simply import the key vault secret and use it in the k8s secret. Every time you update the key vault secret, you apply the changes with Terraform.
I want to populate ConfigMaps from data inside Vault in Kubernetes. I have just completed the setup of Vault, with Kubernetes (service account) and userpass as the auth methods.
Can someone suggest an easy way to integrate these variables into an application? What should I add to the YAML file? If I can populate a ConfigMap, then I can easily use it in the YAML.
How will changes be reflected if a variable changes in Vault?
You can try using Vault CRD: when you create a custom resource of type Vault, it will create a Secret using the data from Vault.
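As a rough sketch only (the exact apiVersion and spec fields depend on which Vault CRD project and version you install, so treat the names below as assumptions and check its documentation), such a custom resource could look something like this:
# Hypothetical Vault custom resource; apiVersion, kind and field names are
# assumptions and may differ in your installed Vault CRD version.
apiVersion: "koudingspawn.de/v1"
kind: Vault
metadata:
  name: app-credentials
spec:
  path: "secret/app-credentials"   # key/value path inside Vault (assumed layout)
  type: "KEYVALUE"
The controller would then keep a Kubernetes Secret (here app-credentials) in sync with the data stored at that Vault path.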
You can use Vault CRD as Xavier Adaickalam mentioned.
Regarding the subject of variable changes, you have 2 ways of exposing variables inside Pods, using volumes and using environment variables. Volumes are updated automatically when the secrets are modified. Unfortunately, environment variables do not receive updates even if you modify your secrets. You have to restart your container if the values are modified.
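As a minimal sketch (names are illustrative), mounting the secret as a volume looks roughly like this; the files under the mount path are refreshed by the kubelet when the Secret changes, whereas an env var populated from the same Secret keeps its old value until the container restarts:
# Illustrative pod spec fragment; the Secret "app-credentials" is assumed to exist.
spec:
  containers:
    - name: app
      image: my-app:latest            # placeholder image
      volumeMounts:
        - name: vault-secret
          mountPath: /etc/secrets     # files here are updated when the Secret changes
          readOnly: true
  volumes:
    - name: vault-secret
      secret:
        secretName: app-credentials
Note that volumes mounted with subPath are the exception and do not receive updates.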
When Kubernetes creates secrets, does it encrypt the given username and password with a certificate?
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
It depends, but yes - secrets can be encrypted at rest. The secrets are stored in etcd (the database used to store all Kubernetes objects) and you can enable a Key Management System that will be used to encrypt them. You can find all the relevant details in the documentation.
Please note that this does not protect the manifest files, which are not encrypted. The secrets are only encrypted in etcd; when getting them with kubectl or with the API you will get them decrypted.
If you wish to also encrypt the manifest files, there are multiple good solutions for that, like Sealed Secrets, Helm Secrets or Kamus. You can read more about them in my blog post.
Secrets are stored in etcd, which is a highly-available key-value store for cluster information data. Data can be encrypted at rest. By default, the identity provider is used to protect secrets in etcd, which provides no encryption.
EncryptionConfiguration was introduced to encrypt secrets locally, with a locally managed key.
Encrypting secrets with a locally managed key protects against an etcd compromise, but it fails to protect against a host compromise.
Since the encryption keys are stored on the host in the EncryptionConfig YAML file, a skilled attacker can access that file and extract the encryption keys. This was a stepping stone in development to the kms provider, introduced in 1.10, and beta since 1.12. Envelope encryption creates dependence on a separate key, not stored in Kubernetes.
In this case, an attacker would need to compromise etcd, the kubeapi-server, and the third-party KMS provider to retrieve the plaintext values, providing a higher level of security than locally-stored encryption keys.
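For reference, a minimal EncryptionConfiguration for the kube-apiserver looks roughly like this (the key material is a placeholder); with the kms provider you would list a kms entry instead of aescbc:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # new and updated Secrets are encrypted with this locally managed key
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      # identity allows reading Secrets that were stored before encryption was enabled
      - identity: {}
The file is passed to the kube-apiserver with the --encryption-provider-config flag.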
You can find more information here: secrets, encryption.
I hope it helps.
I have pods that are deployed to a Kubernetes cluster (hosted with Google Cloud Kubernetes). Those pods use some secrets, which are plain text files. I added the secrets to the YAML file and deployed the deployment. The application is working fine.
Now, let's say that someone compromised my code and somehow got access to all my files on the container. In that case, the attacker can find the secrets directory and print all the secrets written there. It's all plain text.
Question:
Why is it more secure to use Kubernetes Secrets instead of just plain text?
There are different levels of security and, as @Vishal Biyani says in the comments, it sounds like you're looking for a level of security you'd get from a project like Sealed Secrets.
As you say, out of the box Secrets don't give you encryption at the container level. But they do give you controls on access through kubectl and the Kubernetes APIs. For example, you could use role-based access control so that specific users could see that a secret exists without seeing (through the k8s APIs) what its value is.
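As a small sketch of such a control (all names are illustrative), a Role can limit read access to one named Secret and be bound to specific users:
# Illustrative RBAC: only subjects bound to this Role can read the named Secret.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-app-secret
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["app-secret"]   # hypothetical secret name
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-app-secret-binding
  namespace: default
subjects:
  - kind: User
    name: jane                      # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: read-app-secret
  apiGroup: rbac.authorization.k8s.io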
In this case, you can create the secrets using a command instead of having them in the YAML file:
example:
kubectl create secret generic cloudsql-user-credentials --from-literal=username=[your user] --from-literal=password=[your pass]
You can also read it back with:
kubectl get secret cloudsql-user-credentials -o yaml
I also use the secret at two levels; the first is the Kubernetes one:
env:
  - name: SECRETS_USER
    valueFrom:
      secretKeyRef:
        name: cloudsql-user-credentials
        key: username
SECRETS_USER is an env var whose value I use with Jasypt:
spring:
  datasource:
    password: ENC(${SECRETS_USER})
On app startup you use the parameter: -Djasypt.encryptor.password=encryptKeyCode
To generate the encrypted value, run the Jasypt CLI from the jar (adjust the jar path to your local Maven repository), e.g.:
java -cp ~/.m2/repository/org/jasypt/jasypt/1.9.2/jasypt-1.9.2.jar org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI input="[pass user]" password=encryptKeyCode algorithm=PBEWithMD5AndDES