How to manage Google service accounts from a Helm chart - Kubernetes

I am in the learning phase of Kubernetes and am able to set up deployments, services, etc. However, I have gotten stuck on how to manage secrets.
Context
I am using GKE for the Kubernetes cluster
I am using Helm charts for managing all deployment operations
I have created a Google service account that has access to, say, Google Cloud Storage.
My application uses Helm to create deployments and services. However, how do I manage the Google service account credentials I have created, in an automated way?
I do not want to create the secrets manually like this - kubectl create secret generic pubsub-key --from-file=key.json=PATH-TO-KEY-FILE.json
I want to do it through Helm, because if tomorrow I move to another Kubernetes cluster, I would have to do it all manually again
Is there any way to push my Helm charts to repos without worrying about exposing my secrets as plain objects?
Apart from this, any other guidelines and best practices would be really helpful.

I do not want to create the secrets manually like this - kubectl
create secret generic pubsub-key
--from-file=key.json=PATH-TO-KEY-FILE.json, I want to do it through Helm because say tomorrow if I move to another k8s cluster then I have
to do it manually again
You can create a secret template in Helm, which will create the secret for you at install time.
Helm will find the service-account.json and create the secret based on it.
For example, service-account.yaml:
{{- $all := . -}}
{{ range $name, $secret := .Values.serviceAccountSecrets }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ $name }}
  labels:
    app: {{ $name }}
    chart: {{ template "atlantis.chart" $all }}
    component: service-account-secret
    heritage: {{ $all.Release.Service }}
    release: {{ $all.Release.Name }}
data:
  service-account.json: {{ $secret }}
---
{{ end }}
values.yaml
serviceAccountSecrets:
  # credentials: <json file as base64 encoded string>
  # credentials-staging: <json file as base64 encoded string>
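To populate one of these entries without committing the key file, the base64 string can be passed at install time. A minimal sketch, assuming illustrative release and chart names (base64 -w 0 is the GNU coreutils form; macOS uses base64 -i):
helm install my-release ./my-chart \
  --set serviceAccountSecrets.credentials="$(base64 -w 0 service-account.json)"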
Alternatively, you can use this GCP service account controller, which creates the service account and the secret for you:
https://github.com/kiwigrid/gcp-serviceaccount-controller
Is there any way to push my Helm charts to repos without worrying
about exposing my secrets as plain objects?
To keep sensitive files out of the packaged chart, you can use the .helmignore file.
Read more at : https://helm.sh/docs/chart_template_guide/helm_ignore_file/
So in Git, you commit only values.yaml, not values-dev.yaml or values-stag.yaml.
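A minimal .helmignore sketch for that layout (file names as in the answer; note that .helmignore keeps files out of the packaged chart, while a .gitignore with the same entries keeps them out of Git):
# .helmignore
values-dev.yaml
values-stag.yaml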

Thanks Harsh for the answer. I have made it work in a slightly different way, like this:
I want my creds to be in Helm values files
I want to commit my Helm values to Git so that I can use GitOps to its full potential
I want to use just Helm for deployment, without manual intervention during the CI/CD process
So this is what I did:
I made use of Helm's AES encryption and decryption functions (encryptAES/decryptAES).
I encrypt the fields in the values file with the encryptAES function and commit them to Git.
When installing the chart, I use --set aesKey=myEncryptedKey with the helm install command.
Here is how it goes:
I have google-service-account-creds.json; I create the base64 encoding of the JSON in this file.
In values.yaml I choose a field, say encrypt_account_info, and set it to the base64 data from above.
I encrypt the above field with encryptAES.
Now I am able to commit it to Git, as my secret is encrypted.
In the secret template (Secret.yaml below), I use google-cloud-service-account: {{ .Values.encrypt_account_info | decryptAES (.Values.aesKey | b64dec) }}
While installing the secret I use the following command: helm install google-account-cred-release google-account-service/ --set aesKey=mykey1
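For reference, the matching ciphertext can be produced with the encryptAES counterpart. A minimal sketch, assuming a throwaway helper template rendered once with helm template (the plainB64 field name is illustrative, and the key must be passed base64-encoded because the chart decodes it with b64dec):
# encrypt-helper.yaml (hypothetical one-off template)
# helm template . --set plainB64="$(base64 -w 0 google-service-account-creds.json)" --set aesKey=<base64 passphrase>
ciphertext: {{ .Values.plainB64 | encryptAES (.Values.aesKey | b64dec) }}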
Values file (these are my Google service account credentials, encrypted using the AES method):
encrypt_account_info: fvCx82aMlEKgDP3t01lw4FnziI0pK55e9ESanx1ThGJMm+TJfO1fsLElYuTmYFkwvKhaQGuuDNI2TNBvYBch6G3yPcwbQ/LuhbUOgTFp8YopCVGo24mS/OA8GB7W8nL2N/NxF190e3LSIWU1mKkbsaZhAklKNs7kzxYzb+kUKoeIqEsGwIjcqQt96FhZYy9PcM6ysfl+ktHb07+rITiVK8UIQSXW/ZZ3zirnjJIF1ImmskXaeCWRcil3lZ59EQk1wevTomRGqyywQG3HDnrzLdWYE82Qk8eHNcGFIHW7wma9duXGUea3K5C5y6Psza76nrNwid7BGVGph3fHJDGqMrEQVrzhLUaJusqsgi24bJmz2Kb+a623g+4z9WjOBYUIcLnZVTq9nyr6xtnhpwaW/Dx8fK1ZzRUHfcxJQJfalCsLZhxvlw4tVOxnFZl587PHrX9pOUycNSHXJ9QS+22It1m5JUJM7MFGa+YKUpI578CWn31cCxM40prkcPR4mMB0Eo4qXnxDN4pBqUBJ3O9hqCxbBlsGdA9DzUVSTII/l8Q63H9D8MDHSGpUryb/raSV4/xD1uHnh61yKuM0RGq2GHK603sKZsbXnXdbMuzyINgnbf+zsy3vaYm3lh3778yPt4qFpDI30NR+g/SMEwr+yt8J6ud16sl1IyX21V7Txx8wUdxW7n5319Kq8AMtGbvNFuBWPE7uY9o6HC8GNPw2BQhGrYX+tHWfUGYvYAjkvFCU8ucs6xOmLFBe5hsywoKKgk7uPiJFt1Pf/vB1yyjzm7SKSaCYBvWYk7q1yJVZzpn5vd/5/pNODz6Y5nwZmYpMa3HTUg29qLv5vB4ua57MJbEsmXS7FpWi0QwlE/MSNQcOgqJE+VBqpUYluJYMgG4tyYNUck/yp0s2JWlyp9PZeGC6OMkOZeNDuD8sEqSvWGVdwzjLoTKbARI7QnqWVuLjpKnP7Y5vQ+v2nY0gkZUpdqZwALki3tje3BVAOXL5K8jsD3DjoQaxCkQ/PgeSlou7t6itinS8uL4kaSPSC+K3jntBdPpOTiu6NvZwc2ZMTJlyfKC6CDgK9C7k3i8H3TBCoahOzHqYQU32JcmtP6x7j5VXKqWlI1OUv3uajy6zH4oPxtw9btkSw3VaA5J7cj3Y+nVXBR17414ZYSILxlHQCm5F/XooLQRuUDTdvb4ORphdzH2EVgw8aJANLT6wRG3mvwIltoyLhiIES1AcnmZ4THeZv4Z03GFZCwBs6kKNfPeXyy5HxIdnChFdV4+3ggwvuiNUqXaja3xrm/K03pwpImjfV+T4coVKxwvwsz/e18fjREp9ZCauJTgSCNk+Dr7mAH4ReN3g5fSOcKeGZTTW3gCG896bySGLfvzoM3IpNf2GnX5EUUtFxac8MELAIrjtwTbcPHGe40V2Ymt666IpcCHMQoPshKQ7DEw2TzslIF6v0Pv6gO+/t8ALL88g8EY9OVGwNPot25zMChMstwKbF1gbMvGkFizS5yo2HienoltXJ7QOPZl7gpBfDu78mjtb1phtIltz7WJ/u/r/QBd2Dk9CGAWTGPKBKsAnyoYBJVFlVZLJomRT0BBWn8x97sw0aGH+ArZMvn0iIN6zBUJnD2rnL8+adbeGQJVXiQ9Tv5f2+Z8W/sE0Pr4KoahssTIlsPdHOToyHewsWxsg2339qcUHeHCoaWb5M3AzT9W+7kPg1OKYnTLug5gHFWWfjTu1Pq1INxX4s73ntlIH7Gfmgt8xVbuTvdyeQfT8r0yVboOcGrg302oFuxw2Wh+64e4fXVqTs31MMS+VvBwOXJL7V0VSZj0fv5ecvLiz2GIWS6vQsjcbu63+MoJcG6OG3BpJr7mV+vFBMZUlGbTUZPoCOMZX8ceU4nP1D/E7j5AkKQgxpZzzPoHYLi1MspxPaqgFU+bYDvl24T3CggS1VIM4ezINLOf63r8+MG1oFV1itzMlUuY0yCzHxMyjyurT5aZ/4PBJ//Gcpp9ZGoOgi92GObVjrw4uRjXXDGHAG749Jpo7RV0mFqURXmG3fx2y9FU25A2ZMxY//7ZB7Gy6mt9kOjtRkbRXRyhuCIS4Od2I9KKY7BZ/NqNB7TY5muTLws72Yjp+1FqDfxkXQyDUnX5cxRtjKsbBe6CYSjpX+pOr7yowZ87Z8gcj4LM32njVt50R70Y3C4FcIde70GwtjjnR4gc4FoGe5muR00/qTiUkhXqXTFyZE9Ecxp8xcA4aQ8ath1iKYhz2Hnp2VJpLvmSGss47fMBiagbHV3oIzGVpA+WnrPxICpTqsPyNfwaI2WN3mpuGOu7zgbOnpbsxb4but7e0L38erl546RMqoG/AQG9bisYCMYWVE2L+IFqgbW4h1HBfl050Ullj4R0Ryn6qLoX0WeoT1nTeb19NwN4I+EIubPj5/0SLOEBgmmN94G5WsFydQ3+oUIv5h770oLM5tK+ZiKqJA4PYJp0fVqYo8M5wCEECgVq54oTD64BONp2JjzCv3F6YOXuP3Eq1HHi9UIRNRv/c1QOQJGZVrBfTHjA2js+erfO4gF+is+gPltjcQ1N6akvB8p0Xv4KCALT2Y1ZjTjA5n90TncbUpk+Rl90ICH3jlN74EWsrgCIiTdtaZaO/WZENxblUZCaenRIZxB2dfe+xnidmJYGGhBFGecLhe+3DHWB6pkUNZ05j7wbtvSYqDcksjXlTQsXGh95rvDeJ+RNqImW9W9PXam+nsEr6NcCxrvRSCgh2uLHhsctp6CONS8L91jnbU5gM/Vg5dgfzqW43MepNBZbi0hOT3SFFWlaRsRbZcAThQxXkzJxDVulWN0YlUzsk5ktBVj8cqkFgz1CRFa2STnNbm/SXj/ZWfJbxjfouR64GrKMtX8vO+pySCQDXDmH/f0CoM0PqKvxU58t89uv6YHJMZG0W2gVF3X3LKzUX2fpYBbNlzRFLWbbhwRY2ihWSfhPcmeUXNuPHefBTv0J4CFhIo2AduK6jWthVukUHBRFeRVEFvLobXThp4/PnlqdVsryCLqZRPcino0H5XGgFfjNlJDPSuEDRZzJhdOwO4UWpDG8MZaJPmhHl3iYvB3n/e1vsFl4u93Z2qmdyhDF0bXhAlfVznAgGc5+x8FAi4nwOomeO+riwEiPHtNj60rBpyex42mE4z0fBFQ+VM+pJXkZWDoS7j2Z55NGH+TC//fxvI0pCB9pbT8slCLEmpiv0rDOj1Yhvm5PGDkNqnd0Yxs1fA6/G1EmQ7GsSOIqm17S9UHwBQbR33v2nbvGo/ZOdYDYGTtFq3KWRfTXP1O637XRLYFKGit7riVvWL825P0orpSOhgPC/C7+WAw7/Feh1O8dzaQgYK4Ili7TtJ2i4nBNR3aOp/VHGkKL5mVj79mm7wdi9ymJGs1uWUvV/zdsNRy6/Urpfr8fH39zJ91fw8N5AQL6ohAYCLgtdiuMLB5Yqq3etplCR2X+bzIYf/Y3t37xeEYFvO5wZ4tTrsB1VnyjKBAJ5M17XLrIFaAV8zLgXSW3gA1crYy0LBiseoOohi
0auiJjL/m6wSYXZFF6WAXQ9KHobIGIluE0ZS/rkvRJQfmRBRxHC4ZvsOLf6+T5qUIydvLuPvXxMBCJecaqNf0u0RzFcSZUYovEj+zu30qlDAo/OIMHhpwyjNqJgfEEj4TDydQieOnDqYNxRI5Qz0Eo0oF7T2u6H4/6KDOA+PX7I6OCAbiEqUMccE+C+tWzLS+8IJ4OuDyFSUPFR9yQZN6aU6iIsAnNB3827C+4zCERF7YY7++1fpCtiXpG4tIsYBEIf8wJVaZWQjJE9YtZBgMm4dhaMg1RxcMMovw8GKjhI1MAoT3D5UbD0nBIdouOQAkNyJMbQ8m0t4lil+1jMawOfcxxyzjpjIwVDyQi9uSLctsm7TQ3D57Z1mtv6nv3CqUCL5uf42jQICiA7vD2GkJD1jVZc6g3K7UoYHE1KDuxemaDriZfgVJP3E1tHXv71Tk9pDxVDlB5R1wolJx8BDmbUHbwVQnQGYvVaElKM4Uwx1TfetgUyd7EHwDuiAw3W6z0tTWef286uoSUCC+odGl3lXy6un3pfKgN3MHb+4HdFjb/2vLmn6Pa5r68v3Z7IrAW6vWYCPv6O2ctXarcxpViepfcxciCh1l7T9D/gLS2qCyiByiM1gMmX1n3lHkabGrKUhpnK==
Secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: google-cloud-service-account
  namespace: default
type: Opaque
data:
  google-cloud-service-account: {{ .Values.encrypt_account_info | decryptAES (.Values.aesKey | b64dec) }}
Command to install:
helm install google-service-account-release google-service --set aesKey=myykey
This was all inspired by this post: https://itnext.io/helm-3-secrets-management-4f23041f05c3

Secret management is a complex topic, and many approaches are possible, such as using Secret Manager in GCP.
However, for the specific problem of managing Google service account credentials in GKE, the recommended approach is to use Workload Identity.
This way, you don't even have to create keys. You activate Workload Identity and create a mapping from a Kubernetes service account to the GCP service account. Once this is set up, you just set the Deployment's Kubernetes service account to that account, and it authenticates via Workload Identity.
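A minimal sketch of that mapping, assuming illustrative names (my-gsa, my-project, my-ksa in namespace default), following the documented GKE Workload Identity flow:
# Allow the Kubernetes service account to impersonate the GCP service account
gcloud iam service-accounts add-iam-policy-binding my-gsa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[default/my-ksa]"
# Annotate the Kubernetes service account with the GCP service account
kubectl annotate serviceaccount my-ksa --namespace default \
  iam.gke.io/gcp-service-account=my-gsa@my-project.iam.gserviceaccount.com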

Related

How to automatically roll deployments in Helm when the secret being used is created via some other Helm chart

I want to automatically roll my deployment when there is a change in a secret created via the same Helm chart. For this I used the sha256sum function and it worked well; I followed this link for the same.
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
[...]
Now, I want to automatically roll my deployment (say, created via chart1) when the secret was created via some other Helm chart, say chart2. Whenever there is a change in the secret created via chart2, it should roll the deployment created via chart1 as well. The secret created via chart2 is mounted into the chart1 deployment. Is there a way to achieve this?
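One possible sketch (not from the original thread): Helm 3's lookup function can read the secret as it currently exists in the cluster, so chart1 can checksum data it does not template itself. The secret name chart2-secret is illustrative, and lookup returns an empty map during helm template and --dry-run, so the annotation is only rendered on a real install or upgrade:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        {{- with (lookup "v1" "Secret" .Release.Namespace "chart2-secret") }}
        # Re-render (and thus roll) the pod template whenever the external secret's data changes
        checksum/external-secret: {{ .data | toYaml | sha256sum }}
        {{- end }}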

Same secret for multiple deployments in Helm chart

I am using supabase-community/supabase-kubernetes to deploy Supabase in Kubernetes.
For the Studio, Storage, Kong, Realtime, Rest, and Auth services, you need to define at least the JWT secret, or in some cases the anon or service key.
However, I have two problems with this kind of configuration:
You need to configure the same secret information multiple times in values.yaml
The secrets won't be stored in a K8s secret
To improve these two aspects, I propose to configure those values in a dedicated section, e.g.:
jwtSecrets:
  anonKey: "JWT_ANON_KEY"
  serviceKey: "JWT_SERVICE_KEY"
  key: "YOUR_SUPER_SECRET_JWT_TOKEN_WITH_AT_LEAST_32_CHARACTERS_LONG"
When the templates are rendered, a "global" secret is created, and every service (Studio, Storage, Kong, etc.) references this secret in its configuration:
env:
  ...
  - name: SUPABASE_ANON_KEY
    valueFrom:
      secretKeyRef:
        name: my-jwt-secret
        key: anonKey
However, I am unsure whether it is best practice for Helm charts to have such global configuration sections. Besides, I would like to know where to define this global secret creation: in _helpers.tpl?
Any help is appreciated! :)
As stated by David Maze, there is no best practice regarding one secret for multiple deployments in the values.yaml of a Helm chart.
For convenience, the secret name should be referenced in values.yaml like this:
jwtSecretName: my-secret
while the secret must be created by the user beforehand:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  jwtSecret: YWRtaW4=
  serviceKey: MWYyZDFlMmU2N2Rm
  anonKey: MWYyZDFlMmU2N2Rm
This allows the secret data to be stored according to Kubernetes best practice and simplifies the configuration of the Helm chart.
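Equivalently, the secret can be created imperatively; a sketch with the same placeholder values (YWRtaW4= and MWYyZDFlMmU2N2Rm are just base64 for admin and 1f2d1e2e67df):
kubectl create secret generic my-secret \
  --from-literal=jwtSecret=admin \
  --from-literal=serviceKey=1f2d1e2e67df \
  --from-literal=anonKey=1f2d1e2e67df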

Using the same spec across different deployments in ArgoCD

I am currently using Kustomize. We have multiple deployments and services that share the same spec but have different names. Is it possible to store the spec in individual files and refer to them across all the deployment files?
Helm is a good fit for this problem.
However, since we were already using Kustomize and migrating to Helm would have taken time, we solved the problem using the namePrefix and label modifiers in Kustomize, as sketched below.
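A minimal sketch of that approach, with illustrative names: a shared base holds the common Deployment/Service spec, and each overlay renames its copy.
# overlays/app-a/kustomization.yaml
resources:
  - ../../base          # the shared Deployment/Service spec
namePrefix: app-a-      # prefixes every resource name from the base
commonLabels:
  app: app-a            # distinguishes this copy's labels and selectors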
Use Helm. In ArgoCD, create a pipeline with a helm:3 container and create a helm-chart directory or repository. Pull the chart repository and deploy with Helm. Use values.yaml for the dynamic values you want to use. You will also need to add a kubeconfig file to your pipeline, but that is another issue.
This is the best offer I can give. For further information I would need to inspect ArgoCD.
I was faced with this problem and I resolved it using Helm 3 charts:
A Chart.yaml file, where I indicate my release name and version.
values.yaml, where I define all the variables to use for a specific environment.
values-test.yaml, a file to use, for example, in a test environment, where you only put the variables that must change from one environment to another.
I hope that can help you to resolve your issue.
I would also suggest using Helm. However, a restriction of Helm is that you cannot create dynamic values.yaml files (https://github.com/helm/helm/issues/6699); this can be very annoying, especially for multi-environment setups. However, ArgoCD provides a very nice way to do this with its Application type.
The solution is to create a custom Helm chart for generating your ArgoCD applications (which can be called with different config for each environment). The templates in this Helm chart will generate ArgoCD Application resources. This type supports a source.helm.values field where you can dynamically set the values.yaml.
For example, the values.yaml for HashiCorp Vault can be highly complex, and this is a scenario where a dynamic values.yaml per environment is highly desirable (it avoids maintaining multiple values.yaml files per environment that are large but very similar).
If your custom ArgoCD Helm chart is my-argocd-application-helm, then the following are an example values.yaml and the template which generates your Vault application, i.e.:
values.yaml
server: 1.2.3.4 # Target Kubernetes server for all applications
vault:
  name: vault-dev
  repoURL: https://git.acme.com/myapp/vault-helm.git
  targetRevision: master
  path: helm/vault-chart
  namespace: vault
  hostname: 5.6.7.8 # target server for Vault
...
templates/vault-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ .Values.vault.name }}
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: 'vault'
    server: {{ .Values.server }}
  project: 'default'
  source:
    path: '{{ .Values.vault.path }}'
    repoURL: {{ .Values.vault.repoURL }}
    targetRevision: {{ .Values.vault.targetRevision }}
    helm:
      # Dynamically generate `values.yaml`
      values: |
        vault:
          server:
            ingress:
              activeService: true
              hosts:
                - host: {{ required "Please set 'vault.hostname'" .Values.vault.hostname | quote }}
                  paths:
                    - /
            ha:
              enabled: true
              config: |
                ui = true
...
These values will then override any base configuration residing in the values.yaml of the chart at {{ .Values.vault.repoURL }}, which can contain config that doesn't change between environments.
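The generator chart is then installed once per environment; a sketch, with illustrative release and file names:
helm install argocd-apps ./my-argocd-application-helm -n argocd -f values-production.yaml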

Helm best practices

I am new to Helm and liked the idea of using Helm to create versions of the deployments and package them as artifacts in JFrog Artifactory, but one thing I am unclear about is the ease of creating charts.
I am comfortable with Kubernetes manifests, and creating one is very simple: you don't have to handcraft the YAML.
You can simply run a kubectl command in dry-run mode and export most of the YAML tags, as below:
kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx-manifest.yaml
Now, to create a Helm chart, I need to run helm create and key in all the values needed by the Helm YAML files.
I am curious whether Helm has shortcuts like those kubectl provides, to key in the required values through the command line while generating charts.
Also, is there a migration utility available that supports converting deployment manifests to Helm charts?
helm create does what you are looking for. It creates a directory with all the basic scaffolding so that you don't need to create each file/directory manually. However, it can't create the content of a chart it has no clue about.
But there is no magic behind the scenes: a chart consists of templates and values. The templates are the same YAML files you are used to working with, except that you can replace whatever you want to make "dynamic" with the placeholders used by Helm. That's it.
So, in other words, just keep exporting as you are (though I strongly suggest you stop doing this and create proper files suited to your needs) and add placeholders ({{ .Values.foo }}).
For example, this is the template for a service I have:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name | default .Chart.Name }}
spec:
  ports:
    - port: {{ .Values.port }}
      protocol: TCP
      targetPort: {{ .Values.port }}
  selector:
    app: {{ .Values.name | default .Chart.Name }}
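A matching values.yaml for that template might look like this (values are illustrative):
name: my-service
port: 8080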

Injecting Vault secrets into a Kubernetes pod environment variable

I'm trying to install SonarQube in a Kubernetes environment, and it needs PostgreSQL.
I'm using an external Postgres instance, and I have the credentials in a KV secret in Vault.
The SonarQube Helm chart creates environment variables in the container for the Postgres username and password.
How can I inject the secret from my Vault into an environment variable of the SonarQube pod running on Kubernetes?
Creating a Kubernetes secret and using it in the Helm chart works, but we manage all secrets in Vault and need Vault secrets to be injected into pods.
Thanks
There are two ways to inject Vault secrets into a Kubernetes pod as environment variables.
1) Use the Vault Agent Injector
A template should be created that exports a Vault secret as an environment variable:
spec:
  template:
    metadata:
      annotations:
        # Environment variable export template
        vault.hashicorp.com/agent-inject-template-config: |
          {{ with secret "secret/data/web" -}}
            export api_key="{{ .Data.data.payments_api_key }}"
          {{- end }}
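For completeness, the injector normally also needs the companion annotations that enable injection and name the secret and Vault role; a sketch following the linked Vault example (the web role name is illustrative):
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-config: "secret/data/web"
vault.hashicorp.com/role: "web"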
And the application container should source that file during startup:
args:
  ['sh', '-c', 'source /vault/secrets/config && <entrypoint script>']
Reference: https://www.vaultproject.io/docs/platform/k8s/injector/examples#environment-variable-example
2) Use Banzai Cloud's bank-vaults
Reference: https://banzaicloud.com/blog/inject-secrets-into-pods-vault-revisited/
Comments:
Both methods bypass Kubernetes Secrets entirely, because the secrets are never stored in etcd.
In addition, the pods are unaware of Vault in both methods.
So either one can be adopted without a deep comparison.
For vault-k8s and vault-helm users, I recommend the first method.
If you are facing issues injecting secrets using the Consul sidecar container and find it very difficult to set up, you can use this: https://github.com/DaspawnW/vault-crd
This is a Vault custom resource definition that syncs Vault values directly to Kubernetes secrets, so you can add the secret to the pod directly with a secretRef.
Vault-CRD runs as a pod to which you pass the Vault service name or URL, so it can connect to Vault; when a value changes in Vault, it automatically syncs the value to the Kubernetes secret.
https://vault.koudingspawn.de/
You need to use a parent process that talks to Vault, retrieves the value, and then runs your real process. https://github.com/hashicorp/envconsul is the marginally official tool for this from the Vault team, but there are many other options if you go looking.
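A minimal envconsul sketch (the secret path and command are illustrative): it fetches the Vault secret, exports its fields as environment variables, and then executes the child process:
envconsul -secret="secret/data/web" -upcase ./my-app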
Here's a method that could provide some insight: https://banzaicloud.com/blog/inject-secrets-into-pods-vault-revisited/#kubernetes-mutating-webhook-for-injecting-secrets