Injecting vault secrets into Kubernetes Pod Environment variable - postgresql

I'm trying to install SonarQube in a Kubernetes environment, which needs PostgreSQL.
I'm using an external Postgres instance and I have the credentials stored as a KV secret in Vault.
The SonarQube Helm chart creates environment variables in the container that take the username and password for Postgres.
How can I inject the secret from Vault into the environment variables of the SonarQube pod running on Kubernetes?
Creating a Kubernetes Secret and using it in the Helm chart works, but we manage all our secrets in Vault and need the Vault secrets to be injected into the pods.
Thanks

There are two ways to inject Vault secrets into a Kubernetes pod as environment variables.
1) Use the Vault Agent Injector
A template should be created that exports the Vault secret as an environment variable.
spec:
  template:
    metadata:
      annotations:
        # Environment variable export template
        vault.hashicorp.com/agent-inject-template-config: |
          {{ with secret "secret/data/web" -}}
            export api_key="{{ .Data.data.payments_api_key }}"
          {{- end }}
And the application container should source those files during startup.
args:
  ['sh', '-c', 'source /vault/secrets/config && <entrypoint script>']
Reference: https://www.vaultproject.io/docs/platform/k8s/injector/examples#environment-variable-example
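For context, the template annotation above is normally paired with a few more annotations. A minimal sketch, assuming a Vault Kubernetes auth role named web and a KV v2 secret at secret/data/web holding username/password fields (role name, secret path and field names are placeholders, not from the question):
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        # role, path and field names below are placeholders
        vault.hashicorp.com/role: "web"
        # renders the secret to /vault/secrets/config
        vault.hashicorp.com/agent-inject-secret-config: "secret/data/web"
        vault.hashicorp.com/agent-inject-template-config: |
          {{ with secret "secret/data/web" -}}
            export POSTGRES_USER="{{ .Data.data.username }}"
            export POSTGRES_PASSWORD="{{ .Data.data.password }}"
          {{- end }}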
2) Use Banzai Cloud bank-vaults
Reference: https://banzaicloud.com/blog/inject-secrets-into-pods-vault-revisited/
Comments:
Both methods sidestep Kubernetes Secrets entirely, so the secret values are never stored in etcd.
In addition, in both methods the application is unaware of Vault.
So either one can be adopted without a deep comparison.
For vault-k8s and vault-helm users, I recommend the first method.

If you are facing issues injecting secrets using the Vault sidecar container and find it difficult to set up, you can use this: https://github.com/DaspawnW/vault-crd
This is a Vault custom resource definition which syncs Vault secrets directly into Kubernetes Secrets, so you can then reference the secret in your pod with a secretKeyRef.
vault-crd runs a single pod to which you pass the Vault service name or URL; it connects to Vault and, whenever a value changes in Vault, automatically syncs that value to the Kubernetes Secret.
https://vault.koudingspawn.de/
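A minimal sketch of the custom resource that vault-crd watches, based on its README (the apiVersion/kind come from the project; the name and Vault path are placeholders):
apiVersion: "koudingspawn.de/v1"
kind: Vault
metadata:
  name: postgres-credentials
spec:
  # placeholder path; check the project docs for the current schema
  path: "secret/postgres"
  type: "KEYVALUE"
The controller then keeps a Kubernetes Secret of the same name in sync, which your pod can consume via secretKeyRef or envFrom.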

You need to use a parent process that talks to Vault, retrieves the value, and then runs your real process. https://github.com/hashicorp/envconsul is the semi-official tool for this from the Vault team, but there are many other options if you go looking.
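A rough sketch of that wrapper pattern, assuming Vault address and auth are already configured in the environment and reusing the placeholder secret path from the Vault docs example above:
# placeholder path and binary name; envconsul exports the secret's fields as env vars
envconsul -secret="secret/data/web" -upcase ./your-app
In Kubernetes you would make this wrapper the container's command/args so the real process starts with the variables already set.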

Here's a method that could provide some insight: https://banzaicloud.com/blog/inject-secrets-into-pods-vault-revisited/#kubernetes-mutating-webhook-for-injecting-secrets

Related

Same secret for multiple deployments in Helm chart

I am using supabase-community/supabase-kubernetes to deploy Supabase in Kubernetes.
For the Studio, Storage, Kong, Realtime, Rest and Auth services, you need to define at least the JWT secret, and in some cases the anon or service key.
However, I have two problems with this kind of configuration:
You need to configure the same secret information multiple times in values.yaml
The secrets won't be stored in a K8s secret
To improve these two aspects, I propose to configure those values in a dedicated section, e.g.:
jwtSecrets:
  anonKey: "JWT_ANON_KEY"
  serviceKey: "JWT_SERVICE_KEY"
  key: "YOUR_SUPER_SECRET_JWT_TOKEN_WITH_AT_LEAST_32_CHARACTERS_LONG"
When rendered with the templates, a "global" secret gets created and every service (Studio, Storage, Kong, etc.) references this secret in its configuration:
env:
  ...
  - name: SUPABASE_ANON_KEY
    valueFrom:
      secretKeyRef:
        name: my-jwt-secret
        key: anonKey
However, I am unsure whether it is best practice for Helm charts to have such global configuration sections. Besides, I would like to know where to define this global secret creation (in _helpers.tpl?).
Any help is appreciated! :)
As pointed out by David Maze, there is no single best practice regarding one secret for multiple deployments in the values.yaml of a Helm chart.
For convenience, the secret name can be referenced in values.yaml like this:
jwtSecretName: my-secret
While the secret must be created by the user beforehand:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  jwtSecret: YWRtaW4=
  serviceKey: MWYyZDFlMmU2N2Rm
  anonKey: MWYyZDFlMmU2N2Rm
This allows the secret data to be stored according to Kubernetes best practice and simplifies the configuration of the Helm chart.
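For example, the user-managed secret could be created once like this (the key names must match what the templates reference; the values shown are the placeholder strings from the question, not real keys):
kubectl create secret generic my-secret \
  --from-literal=jwtSecret='YOUR_SUPER_SECRET_JWT_TOKEN_WITH_AT_LEAST_32_CHARACTERS_LONG' \
  --from-literal=anonKey='JWT_ANON_KEY' \
  --from-literal=serviceKey='JWT_SERVICE_KEY'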

How to manage google service accounts from helm chart

I am in the learning phase of Kubernetes and able to set up deployments, services, etc. However, I have got stuck on how to manage secrets.
Context
I am using GKE for the Kubernetes cluster
I am using Helm charts for managing all deployment operations
I have created a Google service account that has access to, say, Google Cloud Storage.
My application uses Helm to create deployments and services. However, how do I manage the Google service account credentials I have created in an automated way, given that:
I do not want to create the secrets manually like this - kubectl create secret generic pubsub-key --from-file=key.json=PATH-TO-KEY-FILE.json
I want to do it through Helm, because if tomorrow I move to another Kubernetes cluster I would have to do it manually again.
Is there any way to push my Helm charts to repos without worrying about exposing my secrets as plain objects?
Apart from this, any other guidelines and best practices would be really helpful.
I do not want to create the secrets manually like this - kubectl create secret generic pubsub-key --from-file=key.json=PATH-TO-KEY-FILE.json, I want to do it through helm because say tomorrow if I move to another k8s cluster then I have to do it manually again
You can create a Secret template in your Helm chart which will create the secret for you at helm install time.
Helm will take the service account JSON from your values and create the secret based on that.
For example, service-account.yaml:
{{- $all := . -}}
{{ range $name, $secret := .Values.serviceAccountSecrets }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ $name }}
  labels:
    app: {{ $name }}
    chart: {{ template "atlantis.chart" $all }}
    component: service-account-secret
    heritage: {{ $all.Release.Service }}
    release: {{ $all.Release.Name }}
data:
  service-account.json: {{ $secret }}
---
{{ end }}
values.yaml
serviceAccountSecrets:
  # credentials: <json file as base64 encoded string>
  # credentials-staging: <json file as base64 encoded string>
Alternatively, you can use this GCP service account controller, which creates the service account and the secret for you.
https://github.com/kiwigrid/gcp-serviceaccount-controller
Is there anyway to push my helm charts to repos without concerning of exposing my secrets as plain objects.
To keep sensitive files out of the packaged chart you can use the .helmignore file.
Read more at : https://helm.sh/docs/chart_template_guide/helm_ignore_file/
So in Git, you should commit only values.yaml, not values-dev.yaml or values-stag.yaml; see the sketch below.
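Note that .helmignore keeps files out of the packaged chart, while a matching .gitignore entry keeps them out of the repository itself. A minimal sketch (file names are illustrative):
# .helmignore / .gitignore entries for environment-specific value files
values-dev.yaml
values-stag.yaml
*-secrets.yaml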
Thanks Harsh for the answer. I have made it work in a slightly different way, like this:
I want my creds to be in the Helm values files
I want to commit my Helm values to Git so that I can use GitOps to its full potential
I want to use just Helm for deployment, without manual intervention during the CI/CD process
So this is what I did:
I have made use of Helm's (Sprig) encryptAES/decryptAES functions.
I encrypt the sensitive fields in the values file with the AES function and commit it to Git.
When installing the chart, I use --set aesKey=myEncryptedKey with the helm install command.
Here is how it goes:
I have google-service-account-creds.json; I create the base64 of the JSON in this file.
In values.yaml I choose a field, say encrypt_account_info, set to the base64 data from above.
I encrypt the above field with AES.
Now I am able to commit it to Git, as my secret is encrypted.
In the secret template I use google-cloud-service-account: {{ .Values.encrypt_account_info | decryptAES (.Values.aesKey | b64dec) }}
While installing the chart I use the following command: helm install google-account-cred-release google-account-service/ --set aesKey=mykey1
Values file (it contains my Google service account credentials encrypted using the AES method):
encrypt_account_info: fvCx82aMlEKgDP3t01lw4FnziI0pK55e9ESanx1ThGJMm+TJfO1fsLElYuTmYFkwvKhaQGuuDNI2TNBvYBch6G3yPcwbQ/LuhbUOgTFp8YopCVGo24mS/OA8GB7W8nL2N/NxF190e3LSIWU1mKkbsaZhAklKNs7kzxYzb+kUKoeIqEsGwIjcqQt96FhZYy9PcM6ysfl+ktHb07+rITiVK8UIQSXW/ZZ3zirnjJIF1ImmskXaeCWRcil3lZ59EQk1wevTomRGqyywQG3HDnrzLdWYE82Qk8eHNcGFIHW7wma9duXGUea3K5C5y6Psza76nrNwid7BGVGph3fHJDGqMrEQVrzhLUaJusqsgi24bJmz2Kb+a623g+4z9WjOBYUIcLnZVTq9nyr6xtnhpwaW/Dx8fK1ZzRUHfcxJQJfalCsLZhxvlw4tVOxnFZl587PHrX9pOUycNSHXJ9QS+22It1m5JUJM7MFGa+YKUpI578CWn31cCxM40prkcPR4mMB0Eo4qXnxDN4pBqUBJ3O9hqCxbBlsGdA9DzUVSTII/l8Q63H9D8MDHSGpUryb/raSV4/xD1uHnh61yKuM0RGq2GHK603sKZsbXnXdbMuzyINgnbf+zsy3vaYm3lh3778yPt4qFpDI30NR+g/SMEwr+yt8J6ud16sl1IyX21V7Txx8wUdxW7n5319Kq8AMtGbvNFuBWPE7uY9o6HC8GNPw2BQhGrYX+tHWfUGYvYAjkvFCU8ucs6xOmLFBe5hsywoKKgk7uPiJFt1Pf/vB1yyjzm7SKSaCYBvWYk7q1yJVZzpn5vd/5/pNODz6Y5nwZmYpMa3HTUg29qLv5vB4ua57MJbEsmXS7FpWi0QwlE/MSNQcOgqJE+VBqpUYluJYMgG4tyYNUck/yp0s2JWlyp9PZeGC6OMkOZeNDuD8sEqSvWGVdwzjLoTKbARI7QnqWVuLjpKnP7Y5vQ+v2nY0gkZUpdqZwALki3tje3BVAOXL5K8jsD3DjoQaxCkQ/PgeSlou7t6itinS8uL4kaSPSC+K3jntBdPpOTiu6NvZwc2ZMTJlyfKC6CDgK9C7k3i8H3TBCoahOzHqYQU32JcmtP6x7j5VXKqWlI1OUv3uajy6zH4oPxtw9btkSw3VaA5J7cj3Y+nVXBR17414ZYSILxlHQCm5F/XooLQRuUDTdvb4ORphdzH2EVgw8aJANLT6wRG3mvwIltoyLhiIES1AcnmZ4THeZv4Z03GFZCwBs6kKNfPeXyy5HxIdnChFdV4+3ggwvuiNUqXaja3xrm/K03pwpImjfV+T4coVKxwvwsz/e18fjREp9ZCauJTgSCNk+Dr7mAH4ReN3g5fSOcKeGZTTW3gCG896bySGLfvzoM3IpNf2GnX5EUUtFxac8MELAIrjtwTbcPHGe40V2Ymt666IpcCHMQoPshKQ7DEw2TzslIF6v0Pv6gO+/t8ALL88g8EY9OVGwNPot25zMChMstwKbF1gbMvGkFizS5yo2HienoltXJ7QOPZl7gpBfDu78mjtb1phtIltz7WJ/u/r/QBd2Dk9CGAWTGPKBKsAnyoYBJVFlVZLJomRT0BBWn8x97sw0aGH+ArZMvn0iIN6zBUJnD2rnL8+adbeGQJVXiQ9Tv5f2+Z8W/sE0Pr4KoahssTIlsPdHOToyHewsWxsg2339qcUHeHCoaWb5M3AzT9W+7kPg1OKYnTLug5gHFWWfjTu1Pq1INxX4s73ntlIH7Gfmgt8xVbuTvdyeQfT8r0yVboOcGrg302oFuxw2Wh+64e4fXVqTs31MMS+VvBwOXJL7V0VSZj0fv5ecvLiz2GIWS6vQsjcbu63+MoJcG6OG3BpJr7mV+vFBMZUlGbTUZPoCOMZX8ceU4nP1D/E7j5AkKQgxpZzzPoHYLi1MspxPaqgFU+bYDvl24T3CggS1VIM4ezINLOf63r8+MG1oFV1itzMlUuY0yCzHxMyjyurT5aZ/4PBJ//Gcpp9ZGoOgi92GObVjrw4uRjXXDGHAG749Jpo7RV0mFqURXmG3fx2y9FU25A2ZMxY//7ZB7Gy6mt9kOjtRkbRXRyhuCIS4Od2I9KKY7BZ/NqNB7TY5muTLws72Yjp+1FqDfxkXQyDUnX5cxRtjKsbBe6CYSjpX+pOr7yowZ87Z8gcj4LM32njVt50R70Y3C4FcIde70GwtjjnR4gc4FoGe5muR00/qTiUkhXqXTFyZE9Ecxp8xcA4aQ8ath1iKYhz2Hnp2VJpLvmSGss47fMBiagbHV3oIzGVpA+WnrPxICpTqsPyNfwaI2WN3mpuGOu7zgbOnpbsxb4but7e0L38erl546RMqoG/AQG9bisYCMYWVE2L+IFqgbW4h1HBfl050Ullj4R0Ryn6qLoX0WeoT1nTeb19NwN4I+EIubPj5/0SLOEBgmmN94G5WsFydQ3+oUIv5h770oLM5tK+ZiKqJA4PYJp0fVqYo8M5wCEECgVq54oTD64BONp2JjzCv3F6YOXuP3Eq1HHi9UIRNRv/c1QOQJGZVrBfTHjA2js+erfO4gF+is+gPltjcQ1N6akvB8p0Xv4KCALT2Y1ZjTjA5n90TncbUpk+Rl90ICH3jlN74EWsrgCIiTdtaZaO/WZENxblUZCaenRIZxB2dfe+xnidmJYGGhBFGecLhe+3DHWB6pkUNZ05j7wbtvSYqDcksjXlTQsXGh95rvDeJ+RNqImW9W9PXam+nsEr6NcCxrvRSCgh2uLHhsctp6CONS8L91jnbU5gM/Vg5dgfzqW43MepNBZbi0hOT3SFFWlaRsRbZcAThQxXkzJxDVulWN0YlUzsk5ktBVj8cqkFgz1CRFa2STnNbm/SXj/ZWfJbxjfouR64GrKMtX8vO+pySCQDXDmH/f0CoM0PqKvxU58t89uv6YHJMZG0W2gVF3X3LKzUX2fpYBbNlzRFLWbbhwRY2ihWSfhPcmeUXNuPHefBTv0J4CFhIo2AduK6jWthVukUHBRFeRVEFvLobXThp4/PnlqdVsryCLqZRPcino0H5XGgFfjNlJDPSuEDRZzJhdOwO4UWpDG8MZaJPmhHl3iYvB3n/e1vsFl4u93Z2qmdyhDF0bXhAlfVznAgGc5+x8FAi4nwOomeO+riwEiPHtNj60rBpyex42mE4z0fBFQ+VM+pJXkZWDoS7j2Z55NGH+TC//fxvI0pCB9pbT8slCLEmpiv0rDOj1Yhvm5PGDkNqnd0Yxs1fA6/G1EmQ7GsSOIqm17S9UHwBQbR33v2nbvGo/ZOdYDYGTtFq3KWRfTXP1O637XRLYFKGit7riVvWL825P0orpSOhgPC/C7+WAw7/Feh1O8dzaQgYK4Ili7TtJ2i4nBNR3aOp/VHGkKL5mVj79mm7wdi9ymJGs1uWUvV/zdsNRy6/Urpfr8fH39zJ91fw8N5AQL6ohAYCLgtdiuMLB5Yqq3etplCR2X+bzIYf/Y3t37xeEYFvO5wZ4tTrsB1VnyjKBAJ5M17XLrIFaAV8zLgXSW3gA1crYy0LBiseoOohi
0auiJjL/m6wSYXZFF6WAXQ9KHobIGIluE0ZS/rkvRJQfmRBRxHC4ZvsOLf6+T5qUIydvLuPvXxMBCJecaqNf0u0RzFcSZUYovEj+zu30qlDAo/OIMHhpwyjNqJgfEEj4TDydQieOnDqYNxRI5Qz0Eo0oF7T2u6H4/6KDOA+PX7I6OCAbiEqUMccE+C+tWzLS+8IJ4OuDyFSUPFR9yQZN6aU6iIsAnNB3827C+4zCERF7YY7++1fpCtiXpG4tIsYBEIf8wJVaZWQjJE9YtZBgMm4dhaMg1RxcMMovw8GKjhI1MAoT3D5UbD0nBIdouOQAkNyJMbQ8m0t4lil+1jMawOfcxxyzjpjIwVDyQi9uSLctsm7TQ3D57Z1mtv6nv3CqUCL5uf42jQICiA7vD2GkJD1jVZc6g3K7UoYHE1KDuxemaDriZfgVJP3E1tHXv71Tk9pDxVDlB5R1wolJx8BDmbUHbwVQnQGYvVaElKM4Uwx1TfetgUyd7EHwDuiAw3W6z0tTWef286uoSUCC+odGl3lXy6un3pfKgN3MHb+4HdFjb/2vLmn6Pa5r68v3Z7IrAW6vWYCPv6O2ctXarcxpViepfcxciCh1l7T9D/gLS2qCyiByiM1gMmX1n3lHkabGrKUhpnK==
Secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: google-cloud-service-account
  namespace: default
type: Opaque
data:
  google-cloud-service-account: {{ .Values.encrypt_account_info | decryptAES (.Values.aesKey | b64dec) }}
Command to install:
helm install google-service-account-release google-service --set aesKey=myykey
It is all inspired by this - https://itnext.io/helm-3-secrets-management-4f23041f05c3
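For reference, the encrypted value can be produced with the matching Sprig function. A hedged sketch using a throwaway helper template (the helper chart and value names are hypothetical, not part of the chart described above):
# encrypt-helper/templates/encrypt.yaml - hypothetical helper, rendered once with `helm template`
# to produce the string that goes into encrypt_account_info
encrypted: {{ .Values.plainAccountInfo | encryptAES (.Values.aesKey | b64dec) }}
Render it with helm template, passing the base64-encoded service account JSON as plainAccountInfo and the same aesKey you will use at install time, then copy the output into values.yaml.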
Secret management is a complex topic, and there are many approaches possible, like using the Secret Manager in GCP.
However, for the specific problem of managing Google Service Account credentials in GKE, the recommended approach is to use Workload Identity.
This way, you don't even have to create keys. You activate Workload Identity and map the Kubernetes service account to the GCP service account. Once this is set up, you just set the Deployment's Kubernetes service account to the mapped account.
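As a rough sketch of that mapping (project, namespace and account names are placeholders):
# Allow the Kubernetes service account to impersonate the Google service account
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

# Point the Kubernetes service account at the Google service account
kubectl annotate serviceaccount KSA_NAME --namespace NAMESPACE \
  iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com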

How to define Kubernetes Secret Map in a declarative way in CI without committing secrets to git?

I want to define a Kubernetes secrets map as part of my deployment pipeline. According to the Kubernetes documentation, there are two ways to define a secret.
Declarative: using a .yml file with a Secret object
Imperative: using kubectl create secret generic
The declarative approach requires writing a YAML similar to the one below.
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  username: bXktYXBw
  password: Mzk1MjgkdmRnN0pi
I want to be able to check all the Kubernetes YAML files into Git so I can run them on a CI server. Checking in the YAML means the secret is stored in Git, which is not good for security. I can put the secret into my CI system's secret store, but then how do I create a secrets YAML that references the OS environment variable at the time kubectl is called?
Questions:
How to define a Kubernetes Secret from within a CI pipeline without having to commit the secrets into source control?
Is there a best practice approach for defining secrets as part of CI for K8s?
There is no really good way to manage secrets securely with vanilla Kubernetes. If you decrypt the secret or inject an unencrypted secret in your CI/CD pipeline and create a Kubernetes Secret, you'll end up with an unencrypted, merely Base64-encoded string stored in your Kubernetes cluster (etcd).
Most companies I've worked with recently decide either to keep the secret in their key vault and use a Kubernetes controller to inject it at runtime, or to use a controller that can manage encrypted secrets, like sealed-secrets or Kamus. Using encrypted secrets might be a good option if you want to keep your secrets in Git.
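With sealed-secrets, for example, only the encrypted resource ever reaches Git. A minimal sketch (file names are placeholders):
# seal a plain Secret manifest; only the in-cluster controller can decrypt the result
kubeseal --format yaml < test-secret.yaml > sealed-test-secret.yaml
git add sealed-test-secret.yaml
kubectl apply -f sealed-test-secret.yaml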
First-class support for Hashicorp Vault and Kubernetes: https://github.com/hashicorp/vault-k8s
Take a look at this blog post from Banzai Cloud for a more detailed explanation : Inject secrets directly into Pods from Vault revisited
I ended up hacking this with a bash script that outputs a secret YAML, secret.yaml.sh:
#!/bin/sh
# usage: ./secret.yaml.sh <base64 username> <base64 password> <base64 jdbcUrl>
cat <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
type: Opaque
data:
  username: $1
  password: $2
  jdbcUrl: $3
EOF
Then in my CI pipeline I invoke secret.yaml.sh, pass in the base64-encoded values stored in the CI system's credentials store, and pipe the output to kubectl like so: ./secret.yaml.sh $USERNAME $PASSWORD $URL | kubectl apply -f -
This hack makes it possible for me to run the CI pipeline and update the secrets based on what is stored in the CI system.
As others have noted, secrets in Kubernetes etcd are not secure, and it's better to use a key management system with k8s. However, I don't have access to a key vault for this project.
You can encrypt the secret and commit the encrypted secret to Git; during deployment it needs to be decrypted. For example, Ansible Vault can be used if you are using Ansible as your CI tool.
If you are using Jenkins, you can use the Credentials plugin or the HashiCorp Vault plugin for storing the secret.
If you are on a public cloud, then AWS KMS, Azure Key Vault, etc. are available.
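For the Ansible Vault route mentioned above, a minimal sketch (file and password-file paths are placeholders):
# commit the encrypted file to Git
ansible-vault encrypt secret.yaml --vault-password-file ~/.vault_pass
# run in the pipeline before kubectl apply
ansible-vault decrypt secret.yaml --vault-password-file ~/.vault_pass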

Secret appregistry-mw-proxy-secret not found after deploying HCL Connections Customizer Helm chart

I'm installing Component Pack 6.5.0.0 for HCL Connections. Orient Me works, but after deploying the Customizer my mw-proxy pods got stuck at ContainerCreating. They show the following event log error:
MountVolume.SetUp failed for volume "appregistry-mw-proxy-secret-vol" : secrets "appregistry-mw-proxy-secret" not found
I had never heard of this secret, so I looked inside the chart. mw-proxy-cloud-deployment.yaml tries to mount this secret:
volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: customizernfsclaim
  - name: appregistry-mw-proxy-secret-vol
    secret:
      secretName: appregistry-mw-proxy-secret
The problem is that I could not find any information about what this secret is for and how it should be mounted. The documentation just requires the bootstrap, connections-env and infrastructure charts, and all of them were installed. I just tried creating a secret from an arbitrary file:
echo Test123 > pwd-test
kubectl create secret generic appregistry-mw-proxy-secret --from-file=pwd-test
After deleting all the pods, they came up running. But I don't know what this secret is for and what the Customizer expects. Maybe this breaks some functionality of the application.
My questions are:
What is this secret for?
How do I create it correctly? (User, password, certificate, whatever)
Is there any documentation about it?
Have you tried adding the parameter
env.force_regenerate=true
to the bootstrap Helm chart?
There's also createSecret=true in the connections-env Helm chart; both are sketched below.
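A rough sketch of how those flags could be applied (release and chart archive names depend on your Component Pack install, so treat them as placeholders):
helm upgrade bootstrap bootstrap-*.tgz --reuse-values --set env.force_regenerate=true
helm upgrade connections-env connections-env-*.tgz --reuse-values --set createSecret=true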
If you used this documentation, the order of the Helm deployments is wrong.
The infrastructure deployment creates the secret "appregistry-mw-proxy-secret". So deploy infrastructure first, then mw-proxy, and the pods will start.

Standard way of keeping Dockerhub credentials in Kubernetes YAML resource

I am currently implementing a CI/CD pipeline using Docker, Kubernetes and Jenkins for my microservices deployment, and I am testing the pipeline using a public repository that I created on Dockerhub.com. When I tried the deployment using a Kubernetes Helm chart, I was able to add all my credentials in the values.yaml file - the default file for adding all the configuration when creating a Helm chart.
Confusion
Now I have removed my Helm chart, and I am only using plain deployment and service YAML files. So how can I add my Dockerhub credentials here?
Do I need to use an environment variable? Or do I need to create a separate YAML file for the credentials and reference it in the Deployment.yaml file?
If I am using the imagePullSecrets way, how can I create a separate YAML file for the credentials?
From the Kubernetes point of view (see Pull an Image from a Private Registry), you can create a Secret and add the necessary information to your YAML (Pod/Deployment).
Steps:
1. Create a Secret by providing credentials on the command line:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
2. Create a Pod that uses your Secret (example pod):
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred
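Since the question mentions Deployment YAMLs rather than bare Pods, the same reference goes under the pod template spec. A sketch with placeholder names:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service   # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: <your-private-image>
      imagePullSecrets:
      - name: regcred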
You can pass the Dockerhub creds as environment variables in Jenkins itself, and the imagePullSecrets are to be created as per the Kubernetes docs; since they are one-time things, you can add them directly to the required clusters.