Prevent creating a secret if it already exists - kubernetes

At the moment I have the following secret set up:
apiVersion: v1
kind: Secret
metadata:
  name: my-repository-key
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "imagePullSecret" . }}
Unfortunately, I have 2 subcharts using the same secret, which causes an issue when I try to install them using Helm.
Per the Stack Overflow answer, I've tried using the following line to prevent the re-creation of the secret:
{{- if not (lookup "v1" "Secret" "" "my-repository-key") }}
Unfortunately it did not work, and I'm unable to debug the lookup for the time being.
How do I prevent the creation with a lookup? Is there a better way?

In Helm charts, Kubernetes objects are often named with a prefix that's the name of the current release plus the name of the current chart. That will make the name unique, even if there are related subcharts that declare similar secrets. (A secret is pretty small and duplicating it between two subcharts shouldn't be an operational problem.)
metadata:
  name: "{{ .Release.Name }}-{{ .Chart.Name }}-key"
If you created the chart with helm create, this pattern is common enough that the new-chart template includes a helper template that generates this. If the chart only has a single secret, you can use the default name:
metadata:
  name: "{{ include "chartname.fullname" . }}"
Or, up to some corner cases around naming, you can add a suffix to it:
metadata:
  name: "{{ include "chartname.fullname" . }}-key"

Related

Approach for configmap and secret for a yaml file

I have a YAML file that needs to be loaded into my pods. This file will have both sensitive and non-sensitive data, and it needs to be present at a path that I have included as an env variable in the containers.
env:
  - name: CONFIG_PATH
    value: /myapp/config/config.yaml
If my understanding is right, a ConfigMap is the right choice, but then I am forced to put the sensitive data, like passwords, as plain text in the Helm chart's values.yaml.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
  labels:
    app: {{ .Release.Name }}-config
data:
  config.yaml: |
    configuration:
      settings:
        Password: "{{.Values.config.password}}"
        Username: myuser
values.yaml:
config:
  password: "mypassword"
I mounted the above ConfigMap as follows:
volumeMounts:
  - name: {{ .Release.Name }}-config
    mountPath: /myapp/config/
So I wanted to try a Secret. If I use a Secret, it gets loaded as environment variables inside the pod, but it does not end up in this config.yaml file.
If I convert the above YAML file into a Secret instead of a ConfigMap, should I convert the entire config.yaml into a base64-encoded secret? My YAML file has more entries, so that would look cumbersome, and I don't think it's a solution.
If I use stringData in the Secret, then the base64 is taken as-is.
How do I make sure that config.yaml is loaded into the pods without the passwords being exposed in values.yaml? Is there a way to combine a ConfigMap and a Secret?
I read about projected volumes, but I don't see a use case for merging ConfigMaps and Secrets into a single config.yaml.
Any help would be appreciated.
Kubernetes has no real way to construct files out of several parts. You can embed an entire (small) file in a ConfigMap or a Secret, but you can't ask the cluster to assemble a file out of parts in multiple places.
In Helm, one thing you can do is to put the configuration-file data into a helper template:
{{- define "config.yaml" -}}
configuration:
  settings:
    Password: "{{.Values.config.password}}"
    Username: myuser
{{ end -}}
In the ConfigMap you can use this helper template rather than embedding the content directly:
apiVersion: v1
kind: ConfigMap
metadata: { ... }
data:
  config.yaml: |
{{ include "config.yaml" . | indent 4 }}
If you move it to a Secret you do in fact need to base64 encode it. But with the helper template that's just a matter of invoking the template and encoding the result.
apiVersion: v1
kind: Secret
metadata: { ... }
data:
  config.yaml: {{ include "config.yaml" . | b64enc }}
If it's possible to set properties in this file directly via environment variables (as with Spring properties), or to insert environment-variable references into the file (as with a Ruby ERB file), that would let you put the bulk of the file into a ConfigMap and use a Secret only for specific values; you would need a little more wiring to also make the environment variables available.
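A rough sketch of that wiring, assuming the application expands an APP_PASSWORD environment variable when it reads the file (the variable and Secret names here are illustrative, not from your chart):
# config.yaml in the ConfigMap references the variable instead of the literal value
configuration:
  settings:
    Password: "${APP_PASSWORD}"
    Username: myuser
# and the container is given the variable from a Secret
env:
  - name: APP_PASSWORD
    valueFrom:
      secretKeyRef:
        name: myapp-credentials
        key: password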
You briefly note a concern around passing the credential as a Helm value. This does in fact require having it in plain text at deploy time, and an operator could helm get values later to retrieve it. If this is a problem, you'll need some other path to inject or retrieve the secret value.

Setup a Kubernetes Namespace and Roles Using Helm

I am trying to set up Kubernetes for my company. In that process I am trying to learn Helm.
One of the tasks I have is to set up automation that takes a supplied namespace name parameter, creates the namespace, and sets up the correct permissions in that namespace for the deployment user account.
I can do this simply with a script that uses kubectl like this:
kubectl create namespace $namespaceName
kubectl create rolebinding deployer-edit --clusterrole edit --user deployer --namespace $namespaceName
But I am wondering if I should set up things like this using Helm charts. As I look at Helm charts, it seems that everything is a deployment. I am not sure that this fits the model of "deploying" things. It is more just a general setup of a namespace that will then allow deployments into it. But I want to try it out as a Helm chart if it is possible.
How can I create a Kubernetes namespace and rolebinding using Helm?
A Namespace is a Kubernetes object and it can be described in YAML, so Helm can create one. @mdaniel's answer describes the syntax for doing it for a single Namespace and the corresponding RoleBinding.
There is a chicken-and-egg problem if you are trying to use this syntax to create the Helm installation namespace, though. In Helm 3, metadata about the installation is stored in Kubernetes objects, usually in the same namespace you're installing into:
helm install release-name ./a-chart-that-creates-a-namespace --namespace ns
If the namespace doesn't already exist, then Helm can't retrieve the installation metadata; or, if it does exist, then the declaration of the Namespace object in the chart will conflict with an existing object in the cluster. You can create other objects this way (like RoleBindings), but the installation Namespace itself is a problem.
But! You can create other namespaces safely. You can also use Helm's templating constructs to create multiple objects based on what's present in the .Values configuration. So if your values.yaml file (possibly environment-specific) has
namespaces: [service-a, service-b]
clusterRole: edit
user: deploy
Then you can write a template file like:
{{- $top := . }}
{{- range $namespace := .Values.namespaces -}}
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ $namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: {{ $namespace }}
  name: deployer-edit
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ $top.Values.clusterRole }}
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: {{ $top.Values.user }}
{{ end -}}
This will create two YAML documents for each item in .Values.namespaces. Since the range looping construct overwrites the . special variable, we save its value in a $top local variable before we start, and then use $top.Values where we'd otherwise need to reference .Values. We also need to make sure to explicitly name the metadata: { namespace: } of each object we create, since we're not using the default installation namespace.
You need to make sure the helm install --namespace name isn't any of the namespaces you're managing with this chart.
This would let you have a single chart that manages all of the per-service namespaces. If you need to change the set of services, you can just update the chart values and helm upgrade. The one other caution is that this will happily delete namespaces with no warning if you remove a value from the .Values.namespaces list, and take everything in those namespaces with it (notably, any PersistentVolumeClaims that have data you might need).
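For example, after editing the namespace list in the values file, a hypothetical upgrade (release and chart names are placeholders) would be:
helm upgrade namespaces ./namespaces-chart -f values.yaml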
Almost any chart for an install that needs to interact with Kubernetes itself will include RBAC resources, so it is for sure not just Deployments:
# templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: {{ .Release.Namespace }}
  name: {{ .Values.bindingName }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ .Values.clusterRole }}
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: {{ .Values.user }}
then a values.yaml isn't strictly required, but helps folks know what values could be provided:
# values.yaml
bindingName: deployment-edit
clusterRole: edit
user: deployer
Helm v3 has --create-namespace, which will create the provided --namespace if it doesn't already exist. That isn't very declarative, but it does achieve the end result, just like the kubectl version.
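For example (release and chart names are placeholders):
helm install my-release ./my-chart --namespace team-a --create-namespace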
It's also theoretically possible to have the chart create the Namespace, but I would not trust that helm uninstall the-namespaced-rolebinding will do the right thing, since the order of item removal matters a lot:
# templates/00namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.theNamespace }}
and then run helm install --namespace kube-system ... (or any namespace other than the real one, since it doesn't yet exist).
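A hypothetical invocation, with placeholder names, might look like:
helm install namespace-setup ./the-chart --namespace kube-system --set theNamespace=team-a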

Templating external files in helm

I want my application.yaml file to be passed as a ConfigMap, so I have written this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
{{ (.Files.Glob "foo/*").AsConfig | indent 2 }}
My application.yaml is present in the foo folder and contains a service name that I need to be dynamically populated via Helm interpolation:
foo:
  service:
    name: {{.Release.Name}}-service
When I do a dry run, I get this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
  application.yaml: "ei:\r\n service:\r\n name: {{.Release.Name}}-service"
but I want name: {{.Release.Name}}-service to contain the actual Helm release name.
Is it possible to do templating for external files using Helm, and if yes, how?
I have gone through https://v2-14-0.helm.sh/docs/chart_template_guide/#accessing-files-inside-templates
but didn't find anything that solves my use case.
I could also copy the content into the ConfigMap YAML and do the interpolation there, but I don't want to do that. I want application.yaml to be in a separate file so that config changes are simple to deal with.
Helm includes a tpl function that can be used to expand an arbitrary string as a Go template. In your case the output of ...AsConfig is a string that you can feed into the template engine.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-conf
data:
{{ tpl (.Files.Glob "foo/*").AsConfig . | indent 2 }}
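For illustration, assuming a release named myrelease and the foo/application.yaml shown above, the rendered ConfigMap would come out roughly like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: myrelease-conf
data:
  application.yaml: |
    foo:
      service:
        name: myrelease-service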
Once you do that you can invoke arbitrary template code from within the config file. For example, it's common enough to have a defined template that produces the name prefix of the current chart as configured, and so your config file could instead specify
foo:
  service:
    name: {{ template "mychart.name" . }}-service
As best I can tell, there is no recursive template evaluation available in Helm (nor in Sprig), likely by design.
However, in your specific case, if you aren't expecting the full power of Go templates, you can cheat and use Sprig's regexReplaceAllLiteral:
kind: ConfigMap
data:
{{/* here I have used character classes rather than a sea of backslashes;
     you can use the style you find most legible */}}
{{ $myRx := "[{][{] *[.]Release[.]Name *[}][}]" }}
{{ regexReplaceAllLiteral $myRx (.Files.Glob "foo/*").AsConfig .Release.Name }}
If you genuinely need the full power of Go templates for your config files, then Helm itself is not the mechanism for doing that -- but helmfile has a lot of fancy tricks for generating the ultimate chart that helm will install.

Create kubernetes resources with helm only if custom resource definition exists

I have a helm chart that deploys a number of Kubernetes resources. One of them is a resource that is of a Custom Resource Definition (CRD) type (ServiceMonitor used by prometheus-operator).
I am looking for a way, how to "tell" helm that I'd want to create this resource only if such a CRD is defined in the cluster OR to ignore errors only caused by the fact that such a CRD is missing.
Is that possible and how can I achieve that?
Helm's Capabilities object can tell you if an entire API class is installed in the cluster. I don't think it can test for a specific custom resource type.
In your .tpl files, you can wrap the entire file in a {{ if }}...{{ end }} block. Helm doesn't especially care if the rendered version of a file is empty.
That would lead you to a file like:
{{ if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  ...
{{ end -}}
That would get installed if the operator is installed in the cluster, and skipped if not.
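If you want to check what Capabilities.APIVersions will see in your cluster, you can list the registered API groups from the command line (just a sanity check, not part of the chart):
kubectl api-versions | grep monitoring.coreos.com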
If you are on Helm 3 you can put your CRD in the crds/ directory. Helm will treat it differently; see the Helm documentation on custom resource definitions.
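For example, a chart using the crds/ directory might be laid out like this (file names are illustrative); note that files under crds/ are plain YAML and are not run through the template engine:
mychart/
  Chart.yaml
  values.yaml
  crds/
    servicemonitor-crd.yaml
  templates/
    deployment.yaml
    servicemonitor.yaml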
In Helm 2 there is another mechanism using the crd-install hook. You can add the following to your CRD:
annotations:
  "helm.sh/hook": crd-install
There are some limitations with this approach so if you are using Helm 3 that would be preferred.
In Helm v3, you can test for specific resources:
{{ if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1/ServiceMonitor" -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  ...
spec:
  ...
{{- end }}
https://helm.sh/docs/chart_template_guide/builtin_objects/

How to use Kubeseal to seal a helm-templated secret?

Imagine a secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "test-cicd.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "test-cicd.name" . }}
    helm.sh/chart: {{ include "test-cicd.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
type: Opaque
data:
  secret.yaml: |
    {{ if eq .Values.env "prod" }}
    foo: bar-prod
    foo2: bar2_prod
    {{ else if eq .Values.env "dev" }}
    foo: bar-dev
    {{ end }}
Is it possible to seal this using Kubeseal?
Upon doing it now, I get invalid map key: map[interface {}]interface {}{"include \"test-cicd.fullname\" .":interface {}(nil)} which is probably because it is not a "valid" yaml file.
One thing that I tried was:
1. Removing the helm templating lines
2. Generating the sealedsecret
3. Templating the sealedsecret using helm
But by doing this, the SealedSecret could not be decrypted by the cluster-side operator at deployment time.
mkmik gave an answer to my question on GitHub, so I'm quoting it here as well, just for the record.
So, you're composing a secret value with client-side templating.
Parts of your secret.yaml file are secret, yet parts must be templating directives (the if) and hence cannot be encrypted.
You have two options:
1. You encrypt your secrets somehow using some client-side vault software, possibly with helm integration (e.g. https://github.com/futuresimple/helm-secrets). That requires every user (and CI environment) that applies that helm chart to be able to decrypt the secrets.
2. You re-factor your secrets so that secrets are "atomic", and use sealed-secrets to benefit from its "one-way encryption" approach, which allows your devops users (and CI automation) to apply the helm charts without ever seeing the secret values themselves.
The rest of this answer assumes you picked option (2)
Now, since you decided to use Helm, you have to deal with the fact that helm templates are not json/yaml files; instead they are Go templates, and hence they cannot be manipulated by tools designed to manipulate structured data formats.
Luckily, kubeseal has a --raw option that allows you to encrypt individual secret values and put them manually in whatever file format you're using to describe your k8s resources.
So, assuming you want to create a Helm template for a SealedSecret resource, which takes the name and label values as parameters, and also chooses which secrets to include based on a boolean prod/dev parameter, this example might work for you:
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: {{ include "test-cicd.fullname" . }}
  annotations:
    # this is because the name is a deployment time parameter
    # consider also using "cluster-wide" if the namespace is also a parameter
    # please make sure you understand the implications, see README
    sealedsecrets.bitnami.com/namespace-wide: "true"
  labels:
    app.kubernetes.io/name: {{ include "test-cicd.name" . }}
    helm.sh/chart: {{ include "test-cicd.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
type: Opaque
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "test-cicd.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/managed-by: {{ .Release.Service }}
  encryptedData:
    {{ if eq .Values.env "prod" }}
    foo: AgASNmKx2+QYbbhSxBE0KTa91sDBeNSaicvgBPW8Y/q/f806c7lKfF0mnxzEirjBsvF67C/Yp0fwSokIpKyy3gXtatg8rhf8uiQAA3VjJGkl5VYLcad0t6hKQyIfHsD7wrocm36uz9hpH30DRPWtL5qy4Z+zbzHj8AvEV+xTpBHCSyJPF2hyvHXTr6iQ6KJrAKy04MDwjyQzllN5OQJT2w4zhVgTxXSg/c7m50U/znbcJ1x5vWLXLSeiDRrsJEJeNoPQM8OHmosf5afSOTDWQ4IhG3srSBfDExSFGBIC41OT2CUUmCCtrc9o61LJruqshZ3PkiS7PqejytgwLpw/GEnj2oa/uNSStiP9oa9mCY6IUMujwjF9rKLIT456DlrnsS0bYXO2NmYwSfFX+KDbEhCIVFMbMupMSZp9Ol2DTim5SLIgIza/fj0CXaO3jGiltSQ0aM8gLSMK9n3c1V+X5hKmzMI3/Xd01QmhMmwqKp+oy21iidLJjtz67EiWyfIg1l7hiD5IIVlM9Gvg3k67zij5mOcXPkFnMmUQhQWxVKgAf4z8qEgprt03C+q+Wwwt25UDhQicpwoGtVQzU5ChJi09ja5LeW4RrvDf2B5KRp9HXoj1eu93MMl1Kcnx+X7uVT5OqQz28c4wOLT4FDItFzh8zREGZbiG/B3o1vI8MmwvxXj++pQ7SfBxoz9Xe8gmQ7BuXno=
    foo2: AgAkaTBYcESwogPiauZ15YbNldmk4a9esyYuR2GDt7hNcv+ycPLHmnsJcYs0hBtqucmrO3HbgCy/hQ6dMRCY12RA7w7XsFqNjZy3kavnhqwM6YkHntK2INwercRNQpO6B9bH6MxQTXcxfJbPqaPt30iTnTAhtpN47lueoyIoka4WWzwG/3PAikXhIlkTaq0hrclRJHRqg4z8Kmcaf5A/BRL2xX8syHbjA7MK9/OoK+zytv+LGrbLLHUtuhNNNQ2PG9u05rP6+59wRduQojEDtB9FTCa+daS+04/F4H1vi6XUNnjkK+Xna1T2Eavyuq2GieKj/7ig96et/4HoTAz44zwVhh8/pk0IFC8srcH3p+rFtZZmjvbURrFahEjFZbav3BDMBNhrU8SI3MDN0Abiyvz4vJJfSxIYcyLD1EQ507q7ZXrqYN/v1EiYgYUACi0JGxSWHB9TlCkZOAdCl+hroXEhBN2u5utLJ12njBQJ8ACNQDOYf+CmtV0y7foCZ6Aaap0pV7a8twyqK8c17kImzfi102Zel8ALfLAzdAXBV9c1+1pH76turnTCE33aSMQlaVF3VTmFQWqB8uIO/FQhZDPo8u/ki3L8J31nepup4/WE7i59IT0/9qGh2LKql4oAv6v4D7qtKziN6DvG7bsJlj14Dln0roiTfTWEEnBqdDER+GKZJlKayOWsPQdN0Wp+2KVfwLM=
    {{ else if eq .Values.env "dev" }}
    foo: AgAkaTBYcESwogPi..........
    {{ end }}
An alternative approach would be to have two templates, one for prod and one for dev and use Helm templating logic to pick the right file depending on which environment you're deploying to.
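A minimal sketch of that idea, assuming you keep pre-sealed per-environment files such as sealed-prod.yaml and sealed-dev.yaml in the chart (outside templates/, so .Files can read them):
spec:
  encryptedData:
{{ .Files.Get (printf "sealed-%s.yaml" .Values.env) | indent 4 }}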
Anyway, each of those base64 blobs can be produced with:
$ kubeseal --raw --scope namespace-wide --from-file=yoursecret.txt
Pro-tip, you can pipe the secret if it's not in a file:
$ echo -n yoursecret | kubeseal --raw --scope namespace-wide --from-file=/dev/stdin
Then you have to paste the output of that command into your Helm Go template.
My approach:
1. Use different .values.yml files for different environments
2. Create .secrets.yml files to store secret values (include them in .gitignore)
3. Make a git pre-commit hook that uses kubeseal --raw to encrypt the individual secrets and then write them to the values file
4. Store the values file in git.
I wrote a gist on this: https://gist.github.com/foogunlana/b75175b4ff62bc07258ea78274c698cd
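A very rough sketch of the pre-commit idea (the file layout and names are my own illustration, not taken from the gist):
# for each raw secret value kept out of git, print a sealed key/value pair
# that the hook can write into the environment's values file
for f in .secrets/*.txt; do
  key=$(basename "$f" .txt)
  sealed=$(kubeseal --raw --scope namespace-wide --from-file="$f")
  echo "$key: $sealed"
done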
I would not put credentials for different environments into a single secret, as it can be deployed into different clusters with different sealed-secrets controllers.
Why not just use separate secret files for each environment?
To seal a secret I use the following command:
kubeseal --name=name-of-the-config --controller-namespace=fluxcd \
--controller-name=sealed-secrets --format yaml \
< secret.yaml > sealedsecret.yaml
You can detect the controller-name and controller-namespace of the helm release by:
kubectl get HelmRelease -A -o jsonpath="{.items[?(@.spec.chart.name=='sealed-secrets')]}"