Helm Deployment of multiple microservices with single configmap - kubernetes-helm

We are deploying multiple microservices using Helm charts, but all those microservices use a shared object such as a ConfigMap.
The requirement is that when we do a Helm deployment, the microservices will be installed/upgraded using "helm upgrade --install", but it should not try to deploy the shared ConfigMap for every microservice.
The shared ConfigMap has to be deployed only once, but all the microservices have to use it. How can we achieve this with Helm, and what Helm structure and concepts fit this case?

Please be more specific in your question: are you deploying each of your microservices separately?
Let's suppose you have 3 microservices - A, B and C. If you want to use a shared ConfigMap for all of them, you have the following options:
Create a top-level chart which includes A, B and C as subcharts, as well as the ConfigMap definition. This way, when you deploy your top-level chart, the ConfigMap is created once (see the sketch after this list).
If you need to deploy each chart independently, create a separate chart which includes only the ConfigMap, deploy it first, and then deploy the A, B and C charts.
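A minimal sketch of the first option's layout, with hypothetical chart names (subcharts placed under charts/ are picked up automatically by Helm):
platform/                  # top-level (umbrella) chart
  Chart.yaml
  templates/
    shared-configmap.yaml  # rendered exactly once per release
  charts/
    a/                     # subchart for microservice A
    b/
    c/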
Update:
Here is an example of how you can perform a lookup to check whether a resource already exists, and create it only if it doesn't:
{{- $secretName := (tpl .Values.your.secret.secretName .) }}
{{- $secret := (lookup "v1" "Secret" .Release.Namespace $secretName) }}
{{- if not $secret }}
{{- $defUsername := (b64enc "test") }}
{{- $defPassword := (b64enc "test") }}
apiVersion: v1
kind: Secret
metadata:
  name: "{{ $secretName }}"
  namespace: "{{ .Release.Namespace }}"
type: Opaque
data:
  username: {{ $defUsername }}
  password: {{ $defPassword }}
{{- end }}
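One caveat: lookup returns an empty result when rendering with helm template or --dry-run (newer Helm versions offer --dry-run=server to allow cluster lookups), so this guard only takes effect against a live cluster.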

Related

ArgoCD multiple files into argocd-rbac-cm configmap data

Is it possible to pass a csv file to the data of the "argocd-rbac-cm" config map? Since I've deployed argo-cd through GitOps (with the official argo-cd helm chart), I would not like to hardcode a large csv file inside the configmap itself; I'd prefer instead to reference a csv file directly from the git repository where the helm chart is located.
Also, is it possible to pass more than one file-like key?
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    <<< something to have this append many files
    <<< https://gitlab.custom.net/proj_name/-/blob/master/first_policy.csv # URL from the first csv file in the git repository >>
    <<< https://gitlab.custom.net/proj_name/-/blob/master/second_policy.csv # URL from the second csv file in the git repository >>
Thanks in advance!
Any external evaluation in a policy.csv would lead to unpredictable behaviour in the cluster and would complicate the argocd codebase without obvious gains. That's why this configmap should be set statically before deploying anything.
You basically have two options:
Correctly set .server.rbacConfig as per https://github.com/argoproj/argo-helm/blob/master/charts/argo-cd/templates/argocd-configs/argocd-rbac-cm.yaml - create your configuration with some bash scripting, assign it to a variable, e.g. RBAC_CONFIG, then pass it in your CI/CD pipeline as helm upgrade ... --set "server.rbacConfig=$RBAC_CONFIG"
Extend the chart with your own template and use the .Files.Get function to create the cm from files that already exist in your repository (see https://helm.sh/docs/chart_template_guide/accessing_files/), with something like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
data:
  policy.csv: |-
    {{- $files := .Files }}
    {{- range tuple "first_policy.csv" "second_policy.csv" }}
    {{ $files.Get . }}
    {{- end }}
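Note that .Files.Get can only read files packaged inside the chart itself (and nothing under templates/), so the csv files have to live in the chart directory rather than elsewhere in the repository.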

How to manage google service accounts from helm chart

I am in the learning phase of Kubernetes and am able to set up deployments, services, etc. However, I have got stuck on how to manage secrets.
Context
I am using GKE for Kubernetes cluster
I am using helm charts for managing all deployment operations
I have created a google service account that has access to say google cloud storage.
My application uses Helm to create deployments and services; however, how do I manage the google service account creds I have created in an automated way?
I do not want to create the secrets manually like this - kubectl create secret generic pubsub-key --from-file=key.json=PATH-TO-KEY-FILE.json; I want to do it through helm, because if tomorrow I move to another k8s cluster then I would have to do it manually again.
Is there any way to push my helm charts to repos without worrying about exposing my secrets as plain objects?
Apart from this, any other guidelines and best practices would be really helpful.
I do not want to create the secrets manually like this - kubectl create secret generic pubsub-key --from-file=key.json=PATH-TO-KEY-FILE.json; I want to do it through helm, because if tomorrow I move to another k8s cluster then I would have to do it manually again.
You can add a Secret template to your chart, and Helm will create the secret for you at release time.
Helm will take the service account JSON you pass in through values and create the secret based on it.
For example, service-account.yaml:
{{- $all := . -}}
{{ range $name, $secret := .Values.serviceAccountSecrets }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ $name }}
  labels:
    app: {{ $name }}
    chart: {{ template "atlantis.chart" $all }}
    component: service-account-secret
    heritage: {{ $all.Release.Service }}
    release: {{ $all.Release.Name }}
data:
  service-account.json: {{ $secret }}
---
{{ end }}
values.yaml:
serviceAccountSecrets:
  # credentials: <json file as base64 encoded string>
  # credentials-staging: <json file as base64 encoded string>
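To populate that value without committing the key file, you can pass it at install time; a sketch, assuming GNU coreutils base64 and hypothetical release/chart names:
helm upgrade --install my-release ./my-chart \
  --set serviceAccountSecrets.credentials="$(base64 -w0 key.json)"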
Alternatively, you can use this GCP service account controller, which creates the ServiceAccount and the secret for you:
https://github.com/kiwigrid/gcp-serviceaccount-controller
Is there any way to push my helm charts to repos without worrying about exposing my secrets as plain objects?
For the concern about committing secrets, you can use the .helmignore file.
Read more at: https://helm.sh/docs/chart_template_guide/helm_ignore_file/
So in Git, commit only values.yaml, not values-dev.yaml or values-stag.yaml.
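A minimal .helmignore sketch, assuming environment-specific values files named as above (note that .helmignore keeps files out of the packaged chart; keeping them out of Git itself is the job of .gitignore):
# .helmignore
values-dev.yaml
values-stag.yaml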
Thanks Harsh for the answer. I have made it work in a slightly different way, like this:
I want my creds to be in helm values files
I want to commit my helm values to git so that I can use GitOps to its full potential
I want to use just helm for deployment, without manual intervention during the CI/CD process
So this is what I did:
I made use of Helm's AES functions (Sprig's encryptAES/decryptAES).
I encrypt the fields in the values file with the AES function and commit it to git.
When installing the chart, I pass the key with --set aesKey=<my key> on the helm install command.
Here is how it goes:
I have google-service-account-creds.json; I create the base64 of the json in this file.
In values.yaml I choose a field, say encrypt_account_info, and set it to the base64 data from above.
I encrypt the above field with AES.
Now I am able to commit it to git, as my secret is encrypted.
In secret.yaml, I use google-cloud-service-account: {{ .Values.encrypt_account_info | decryptAES (.Values.aesKey | b64dec) }}
While installing the secret I use the following command: helm install google-account-cred-release google-account-service/ --set aesKey=mykey1
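The answer doesn't show how the encrypted value is produced; one way (a sketch, with hypothetical file and value names) is a throwaway helper template rendered once with helm template, whose output you paste into values.yaml:
{{/* templates/encrypt-helper.yaml -- render once, do not install:
     helm template . -s templates/encrypt-helper.yaml \
       --set aesKey=<same key you will pass at install> \
       --set plaintext="$(base64 -w0 google-service-account-creds.json)"
   The key is b64-decoded here to mirror the decryptAES call in secret.yaml. */}}
encrypted: {{ .Values.plaintext | encryptAES (.Values.aesKey | b64dec) }}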
Values file (this is my google service account credentials, encrypted using the AES method):
encrypt_account_info: fvCx82aMlEKgDP3t01lw4FnziI0pK55e9ESanx1ThGJMm+TJfO1fsLElYuTmYFkwvKhaQGuuDNI2TNBvYBch6G3yPcwbQ/LuhbUOgTFp8YopCVGo24mS/OA8GB7W8nL2N/NxF190e3LSIWU1mKkbsaZhAklKNs7kzxYzb+kUKoeIqEsGwIjcqQt96FhZYy9PcM6ysfl+ktHb07+rITiVK8UIQSXW/ZZ3zirnjJIF1ImmskXaeCWRcil3lZ59EQk1wevTomRGqyywQG3HDnrzLdWYE82Qk8eHNcGFIHW7wma9duXGUea3K5C5y6Psza76nrNwid7BGVGph3fHJDGqMrEQVrzhLUaJusqsgi24bJmz2Kb+a623g+4z9WjOBYUIcLnZVTq9nyr6xtnhpwaW/Dx8fK1ZzRUHfcxJQJfalCsLZhxvlw4tVOxnFZl587PHrX9pOUycNSHXJ9QS+22It1m5JUJM7MFGa+YKUpI578CWn31cCxM40prkcPR4mMB0Eo4qXnxDN4pBqUBJ3O9hqCxbBlsGdA9DzUVSTII/l8Q63H9D8MDHSGpUryb/raSV4/xD1uHnh61yKuM0RGq2GHK603sKZsbXnXdbMuzyINgnbf+zsy3vaYm3lh3778yPt4qFpDI30NR+g/SMEwr+yt8J6ud16sl1IyX21V7Txx8wUdxW7n5319Kq8AMtGbvNFuBWPE7uY9o6HC8GNPw2BQhGrYX+tHWfUGYvYAjkvFCU8ucs6xOmLFBe5hsywoKKgk7uPiJFt1Pf/vB1yyjzm7SKSaCYBvWYk7q1yJVZzpn5vd/5/pNODz6Y5nwZmYpMa3HTUg29qLv5vB4ua57MJbEsmXS7FpWi0QwlE/MSNQcOgqJE+VBqpUYluJYMgG4tyYNUck/yp0s2JWlyp9PZeGC6OMkOZeNDuD8sEqSvWGVdwzjLoTKbARI7QnqWVuLjpKnP7Y5vQ+v2nY0gkZUpdqZwALki3tje3BVAOXL5K8jsD3DjoQaxCkQ/PgeSlou7t6itinS8uL4kaSPSC+K3jntBdPpOTiu6NvZwc2ZMTJlyfKC6CDgK9C7k3i8H3TBCoahOzHqYQU32JcmtP6x7j5VXKqWlI1OUv3uajy6zH4oPxtw9btkSw3VaA5J7cj3Y+nVXBR17414ZYSILxlHQCm5F/XooLQRuUDTdvb4ORphdzH2EVgw8aJANLT6wRG3mvwIltoyLhiIES1AcnmZ4THeZv4Z03GFZCwBs6kKNfPeXyy5HxIdnChFdV4+3ggwvuiNUqXaja3xrm/K03pwpImjfV+T4coVKxwvwsz/e18fjREp9ZCauJTgSCNk+Dr7mAH4ReN3g5fSOcKeGZTTW3gCG896bySGLfvzoM3IpNf2GnX5EUUtFxac8MELAIrjtwTbcPHGe40V2Ymt666IpcCHMQoPshKQ7DEw2TzslIF6v0Pv6gO+/t8ALL88g8EY9OVGwNPot25zMChMstwKbF1gbMvGkFizS5yo2HienoltXJ7QOPZl7gpBfDu78mjtb1phtIltz7WJ/u/r/QBd2Dk9CGAWTGPKBKsAnyoYBJVFlVZLJomRT0BBWn8x97sw0aGH+ArZMvn0iIN6zBUJnD2rnL8+adbeGQJVXiQ9Tv5f2+Z8W/sE0Pr4KoahssTIlsPdHOToyHewsWxsg2339qcUHeHCoaWb5M3AzT9W+7kPg1OKYnTLug5gHFWWfjTu1Pq1INxX4s73ntlIH7Gfmgt8xVbuTvdyeQfT8r0yVboOcGrg302oFuxw2Wh+64e4fXVqTs31MMS+VvBwOXJL7V0VSZj0fv5ecvLiz2GIWS6vQsjcbu63+MoJcG6OG3BpJr7mV+vFBMZUlGbTUZPoCOMZX8ceU4nP1D/E7j5AkKQgxpZzzPoHYLi1MspxPaqgFU+bYDvl24T3CggS1VIM4ezINLOf63r8+MG1oFV1itzMlUuY0yCzHxMyjyurT5aZ/4PBJ//Gcpp9ZGoOgi92GObVjrw4uRjXXDGHAG749Jpo7RV0mFqURXmG3fx2y9FU25A2ZMxY//7ZB7Gy6mt9kOjtRkbRXRyhuCIS4Od2I9KKY7BZ/NqNB7TY5muTLws72Yjp+1FqDfxkXQyDUnX5cxRtjKsbBe6CYSjpX+pOr7yowZ87Z8gcj4LM32njVt50R70Y3C4FcIde70GwtjjnR4gc4FoGe5muR00/qTiUkhXqXTFyZE9Ecxp8xcA4aQ8ath1iKYhz2Hnp2VJpLvmSGss47fMBiagbHV3oIzGVpA+WnrPxICpTqsPyNfwaI2WN3mpuGOu7zgbOnpbsxb4but7e0L38erl546RMqoG/AQG9bisYCMYWVE2L+IFqgbW4h1HBfl050Ullj4R0Ryn6qLoX0WeoT1nTeb19NwN4I+EIubPj5/0SLOEBgmmN94G5WsFydQ3+oUIv5h770oLM5tK+ZiKqJA4PYJp0fVqYo8M5wCEECgVq54oTD64BONp2JjzCv3F6YOXuP3Eq1HHi9UIRNRv/c1QOQJGZVrBfTHjA2js+erfO4gF+is+gPltjcQ1N6akvB8p0Xv4KCALT2Y1ZjTjA5n90TncbUpk+Rl90ICH3jlN74EWsrgCIiTdtaZaO/WZENxblUZCaenRIZxB2dfe+xnidmJYGGhBFGecLhe+3DHWB6pkUNZ05j7wbtvSYqDcksjXlTQsXGh95rvDeJ+RNqImW9W9PXam+nsEr6NcCxrvRSCgh2uLHhsctp6CONS8L91jnbU5gM/Vg5dgfzqW43MepNBZbi0hOT3SFFWlaRsRbZcAThQxXkzJxDVulWN0YlUzsk5ktBVj8cqkFgz1CRFa2STnNbm/SXj/ZWfJbxjfouR64GrKMtX8vO+pySCQDXDmH/f0CoM0PqKvxU58t89uv6YHJMZG0W2gVF3X3LKzUX2fpYBbNlzRFLWbbhwRY2ihWSfhPcmeUXNuPHefBTv0J4CFhIo2AduK6jWthVukUHBRFeRVEFvLobXThp4/PnlqdVsryCLqZRPcino0H5XGgFfjNlJDPSuEDRZzJhdOwO4UWpDG8MZaJPmhHl3iYvB3n/e1vsFl4u93Z2qmdyhDF0bXhAlfVznAgGc5+x8FAi4nwOomeO+riwEiPHtNj60rBpyex42mE4z0fBFQ+VM+pJXkZWDoS7j2Z55NGH+TC//fxvI0pCB9pbT8slCLEmpiv0rDOj1Yhvm5PGDkNqnd0Yxs1fA6/G1EmQ7GsSOIqm17S9UHwBQbR33v2nbvGo/ZOdYDYGTtFq3KWRfTXP1O637XRLYFKGit7riVvWL825P0orpSOhgPC/C7+WAw7/Feh1O8dzaQgYK4Ili7TtJ2i4nBNR3aOp/VHGkKL5mVj79mm7wdi9ymJGs1uWUvV/zdsNRy6/Urpfr8fH39zJ91fw8N5AQL6ohAYCLgtdiuMLB5Yqq3etplCR2X+bzIYf/Y3t37xeEYFvO5wZ4tTrsB1VnyjKBAJ5M17XLrIFaAV8zLgXSW3gA1crYy0LBiseoOohi
0auiJjL/m6wSYXZFF6WAXQ9KHobIGIluE0ZS/rkvRJQfmRBRxHC4ZvsOLf6+T5qUIydvLuPvXxMBCJecaqNf0u0RzFcSZUYovEj+zu30qlDAo/OIMHhpwyjNqJgfEEj4TDydQieOnDqYNxRI5Qz0Eo0oF7T2u6H4/6KDOA+PX7I6OCAbiEqUMccE+C+tWzLS+8IJ4OuDyFSUPFR9yQZN6aU6iIsAnNB3827C+4zCERF7YY7++1fpCtiXpG4tIsYBEIf8wJVaZWQjJE9YtZBgMm4dhaMg1RxcMMovw8GKjhI1MAoT3D5UbD0nBIdouOQAkNyJMbQ8m0t4lil+1jMawOfcxxyzjpjIwVDyQi9uSLctsm7TQ3D57Z1mtv6nv3CqUCL5uf42jQICiA7vD2GkJD1jVZc6g3K7UoYHE1KDuxemaDriZfgVJP3E1tHXv71Tk9pDxVDlB5R1wolJx8BDmbUHbwVQnQGYvVaElKM4Uwx1TfetgUyd7EHwDuiAw3W6z0tTWef286uoSUCC+odGl3lXy6un3pfKgN3MHb+4HdFjb/2vLmn6Pa5r68v3Z7IrAW6vWYCPv6O2ctXarcxpViepfcxciCh1l7T9D/gLS2qCyiByiM1gMmX1n3lHkabGrKUhpnK==
Secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: google-cloud-service-account
  namespace: default
type: Opaque
data:
  google-cloud-service-account: {{ .Values.encrypt_account_info | decryptAES (.Values.aesKey | b64dec) }}
Command to install:
helm install google-service-account-release google-service --set aesKey=myykey
It is all inspired by this - https://itnext.io/helm-3-secrets-management-4f23041f05c3
Secret management is a complex topic, and there are many approaches possible, like using the Secret Manager in GCP.
However, for the specific problem of managing Google Service Account credentials in GKE, the recommended approach is to use Workload Identity.
This way, you don't even have to create keys. You activate Workload Identity, create a mapping from a Kubernetes service account to the GCP service account, and then set the Deployment's Kubernetes service account to the mapped one.
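For concreteness, a sketch of the two sides of that mapping, with hypothetical account, namespace and project names:
# Kubernetes side: annotate the service account the Deployment uses
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-ksa
  annotations:
    iam.gke.io/gcp-service-account: my-gsa@my-project.iam.gserviceaccount.com
# GCP side: allow the KSA to impersonate the GSA
gcloud iam service-accounts add-iam-policy-binding \
  my-gsa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[default/my-ksa]"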

How to easily duplicate helm charts

We have 10 microservices on Kubernetes with Helm 3 charts, and I noticed that all of them have a similar standard structure: deployment, service, HPA, network policies, etc. Basically the <helm_chart_name>/templates directory is 99% the same in all of them, with some if statements at the top of each file controlling whether we want to deploy that resource,
{{ if .Values.hpa.create }}
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Values.deployment.name }}
  ...
spec:
  scaleTargetRef:
    ...
{{ end }}
and in values we pass yes/no for whether we want it. Is there some tool to easily create a template for these helm charts? That is, to create a Helm chart with these 5 manifests pre-populated with references to values as above?
What you need is a Library Chart:
A library chart is a type of Helm chart that defines chart primitives or definitions which can be shared by Helm templates in other charts. This allows users to share snippets of code that can be re-used across charts, avoiding repetition and keeping charts DRY.
You can find more details and examples in the linked documentation.
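A minimal sketch of the idea, with hypothetical chart and template names: the library chart exposes the HPA as a named template, and each microservice chart includes it.
# Chart.yaml of the library chart
apiVersion: v2
name: common
type: library
version: 0.1.0
# templates/_hpa.tpl in the library chart
{{- define "common.hpa" -}}
{{- if .Values.hpa.create }}
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Values.deployment.name }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Values.deployment.name }}
  maxReplicas: {{ .Values.hpa.maxReplicas }}
{{- end }}
{{- end -}}
# templates/hpa.yaml in each microservice chart, after declaring
# the library chart as a dependency in its Chart.yaml
{{ include "common.hpa" . }}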
I think the closest thing to what I want is https://helm.sh/docs/topics/library_charts/

Templating external files in helm

I want my application.yaml file to be passed as a ConfigMap, so I have written this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
{{ (.Files.Glob "foo/*").AsConfig | indent 2 }}
My application.yaml is present in the foo folder and contains a service name which I need to be dynamically populated via helm interpolation:
foo:
  service:
    name: {{ .Release.Name }}-service
When I dry-run, I am getting this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
  application.yaml: "ei:\r\n  service:\r\n    name: {{.Release.Name}}-service"
but I want name: {{.Release.Name}}-service to contain the actual helm release name.
Is it possible to do templating for external files using helm, and if yes, how?
I have gone through https://v2-14-0.helm.sh/docs/chart_template_guide/#accessing-files-inside-templates but didn't find anything that solves my use case.
I could also copy the content into the ConfigMap YAML and do the interpolation there, but I don't want to; I want application.yaml to be a separate file, so that config changes stay simple to deal with.
Helm includes a tpl function that can be used to expand an arbitrary string as a Go template. In your case the output of ...AsConfig is a string that you can feed into the template engine.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-conf
data:
{{ tpl (.Files.Glob "foo/*").AsConfig . | indent 2 }}
Once you do that you can invoke arbitrary template code from within the config file. For example, it's common enough to have a defined template that produces the name prefix of the current chart as configured, and so your config file could instead specify
foo:
  service:
    name: {{ template "mychart.name" . }}-service
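For reference, a typical definition of such a helper, along the lines of what helm create generates (the name "mychart.name" is whatever your chart's _helpers.tpl defines):
{{/* templates/_helpers.tpl */}}
{{- define "mychart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}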
As best I can tell, there is no recursive template evaluation available in helm (nor in Sprig), likely by design.
However, in your specific case, if you aren't expecting the full power of golang templates, you can cheat and use Sprig's regexReplaceAllLiteral:
kind: ConfigMap
data:
  {{/* here I have used character classes rather than a sea of backslashes;
       you can use the style you find most legible */}}
  {{ $myRx := "[{][{] *[.]Release[.]Name *[}][}]" }}
  {{ regexReplaceAllLiteral $myRx (.Files.Glob "foo/*").AsConfig .Release.Name }}
If you genuinely need the full power of golang templates for your config files, then helm, itself, is not the mechanism for doing that -- but helmfile has a lot of fancy tricks for generating the ultimate helm chart that helm will install.
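If that route appeals, a minimal helmfile sketch (release and path names hypothetical; helmfile renders *.gotmpl values files through its own template engine before invoking helm):
# helmfile.yaml
releases:
  - name: myapp
    chart: ./mychart
    values:
      - config/values.yaml.gotmpl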

Create kubernetes resources with helm only if custom resource definition exists

I have a helm chart that deploys a number of Kubernetes resources. One of them is a resource that is of a Custom Resource Definition (CRD) type (ServiceMonitor used by prometheus-operator).
I am looking for a way to "tell" helm that I'd like to create this resource only if such a CRD is defined in the cluster, OR to ignore errors caused only by the fact that such a CRD is missing.
Is that possible, and how can I achieve it?
Helm's Capabilities object can tell you if an entire API class is installed in the cluster. I don't think it can test for a specific custom resource type.
In your .tpl files, you can wrap the entire file in a {{ if }}...{{ end }} block. Helm doesn't especially care if the rendered version of a file is empty.
That would lead you to a file like:
{{ if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  ...
{{ end -}}
That would get installed if the operator is installed in the cluster, and skipped if not.
If you are on Helm 3 you can put your CRD in the crds/ directory. Helm will treat it differently, see the docs here.
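A sketch of that layout, with hypothetical chart and file names; note that files under crds/ are installed before templates are rendered, are not templated, and are not touched on upgrade or delete:
mychart/
  Chart.yaml
  crds/
    servicemonitor-crd.yaml
  templates/
    servicemonitor.yaml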
In Helm 2 there is another mechanism using the crd-install hook. You can add the following to your CRD:
annotations:
  "helm.sh/hook": crd-install
There are some limitations with this approach so if you are using Helm 3 that would be preferred.
In Helm v3, you can test for specific resources:
{{ if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1/ServiceMonitor" -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  ...
spec:
  ...
{{- end }}
{{- end }}
https://helm.sh/docs/chart_template_guide/builtin_objects/