How to use Kubeseal to seal a helm-templated secret? - kubernetes

Imagine a secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "test-cicd.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "test-cicd.name" . }}
    helm.sh/chart: {{ include "test-cicd.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
type: Opaque
data:
  secret.yaml: |
    {{ if eq .Values.env "prod" }}
    foo: bar-prod
    foo2: bar2_prod
    {{ else if eq .Values.env "dev" }}
    foo: bar-dev
    {{ end }}
Is it possible to seal this using Kubeseal?
Upon doing it now, I get invalid map key: map[interface {}]interface {}{"include \"test-cicd.fullname\" .":interface {}(nil)}, which is probably because it is not a "valid" YAML file.
One thing that I tried was:
1. Removing the helm templating lines
2. Generating the sealedsecret
3. Templating the sealedsecret using helm
But by doing this, the SealedSecret could not be decrypted by the cluster-side operator at deployment time.

mkmik gave an answer to my question on GitHub, so I'm quoting it here as well just for the record.
So, you're composing a secret value with client-side templating.
Parts of your secret.yaml file are secret, yet parts must be templating directives (the if) and hence cannot be encrypted.
You have two options:
1. You encrypt your secrets somehow using some client-side vault software, possibly with Helm integration (e.g. https://github.com/futuresimple/helm-secrets). That requires every user (and CI environment) that applies that Helm chart to be able to decrypt the secrets.
2. You re-factor your secrets so that they are "atomic", and use sealed-secrets to benefit from its "one-way encryption" approach, which allows your devops users (and CI automation) to apply the Helm charts without ever seeing the secret values themselves.
The rest of this answer assumes you picked option (2).
Now, since you decided to use Helm, you have to deal with the fact that Helm templates are not JSON/YAML files; they are Go templates, and hence they cannot be manipulated by tools designed to manipulate structured data formats.
Luckily, kubeseal has a --raw command, that allows you to encrypt individual secret values and put them manually in whatever file format you're using to describe your k8s resources.
So, assuming you want to create a Helm template for a SealedSecret resource, which takes the name and label values as parameters, and also chooses which secrets to include based on a boolean prod/dev parameter, this example might work for you:
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: {{ include "test-cicd.fullname" . }}
  annotations:
    # this is because the name is a deployment time parameter
    # consider also using "cluster-wide" if the namespace is also a parameter
    # please make sure you understand the implications, see README
    sealedsecrets.bitnami.com/namespace-wide: "true"
  labels:
    app.kubernetes.io/name: {{ include "test-cicd.name" . }}
    helm.sh/chart: {{ include "test-cicd.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
type: Opaque
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "test-cicd.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/managed-by: {{ .Release.Service }}
  encryptedData:
    {{ if eq .Values.env "prod" }}
    foo: AgASNmKx2+QYbbhSxBE0KTa91sDBeNSaicvgBPW8Y/q/f806c7lKfF0mnxzEirjBsvF67C/Yp0fwSokIpKyy3gXtatg8rhf8uiQAA3VjJGkl5VYLcad0t6hKQyIfHsD7wrocm36uz9hpH30DRPWtL5qy4Z+zbzHj8AvEV+xTpBHCSyJPF2hyvHXTr6iQ6KJrAKy04MDwjyQzllN5OQJT2w4zhVgTxXSg/c7m50U/znbcJ1x5vWLXLSeiDRrsJEJeNoPQM8OHmosf5afSOTDWQ4IhG3srSBfDExSFGBIC41OT2CUUmCCtrc9o61LJruqshZ3PkiS7PqejytgwLpw/GEnj2oa/uNSStiP9oa9mCY6IUMujwjF9rKLIT456DlrnsS0bYXO2NmYwSfFX+KDbEhCIVFMbMupMSZp9Ol2DTim5SLIgIza/fj0CXaO3jGiltSQ0aM8gLSMK9n3c1V+X5hKmzMI3/Xd01QmhMmwqKp+oy21iidLJjtz67EiWyfIg1l7hiD5IIVlM9Gvg3k67zij5mOcXPkFnMmUQhQWxVKgAf4z8qEgprt03C+q+Wwwt25UDhQicpwoGtVQzU5ChJi09ja5LeW4RrvDf2B5KRp9HXoj1eu93MMl1Kcnx+X7uVT5OqQz28c4wOLT4FDItFzh8zREGZbiG/B3o1vI8MmwvxXj++pQ7SfBxoz9Xe8gmQ7BuXno=
    foo2: AgAkaTBYcESwogPiauZ15YbNldmk4a9esyYuR2GDt7hNcv+ycPLHmnsJcYs0hBtqucmrO3HbgCy/hQ6dMRCY12RA7w7XsFqNjZy3kavnhqwM6YkHntK2INwercRNQpO6B9bH6MxQTXcxfJbPqaPt30iTnTAhtpN47lueoyIoka4WWzwG/3PAikXhIlkTaq0hrclRJHRqg4z8Kmcaf5A/BRL2xX8syHbjA7MK9/OoK+zytv+LGrbLLHUtuhNNNQ2PG9u05rP6+59wRduQojEDtB9FTCa+daS+04/F4H1vi6XUNnjkK+Xna1T2Eavyuq2GieKj/7ig96et/4HoTAz44zwVhh8/pk0IFC8srcH3p+rFtZZmjvbURrFahEjFZbav3BDMBNhrU8SI3MDN0Abiyvz4vJJfSxIYcyLD1EQ507q7ZXrqYN/v1EiYgYUACi0JGxSWHB9TlCkZOAdCl+hroXEhBN2u5utLJ12njBQJ8ACNQDOYf+CmtV0y7foCZ6Aaap0pV7a8twyqK8c17kImzfi102Zel8ALfLAzdAXBV9c1+1pH76turnTCE33aSMQlaVF3VTmFQWqB8uIO/FQhZDPo8u/ki3L8J31nepup4/WE7i59IT0/9qGh2LKql4oAv6v4D7qtKziN6DvG7bsJlj14Dln0roiTfTWEEnBqdDER+GKZJlKayOWsPQdN0Wp+2KVfwLM=
    {{ else if eq .Values.env "dev" }}
    foo: AgAkaTBYcESwogPi..........
    {{ end }}
An alternative approach would be to have two templates, one for prod and one for dev and use Helm templating logic to pick the right file depending on which environment you're deploying to.
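A rough sketch of that two-template layout (the file names and the truncated blobs are illustrative only):
{{/* templates/sealedsecret-prod.yaml -- a sibling templates/sealedsecret-dev.yaml
     would carry the dev blobs behind the opposite condition */}}
{{- if eq .Values.env "prod" }}
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: {{ include "test-cicd.fullname" . }}
  annotations:
    sealedsecrets.bitnami.com/namespace-wide: "true"
spec:
  encryptedData:
    foo: AgASNmKx2+QYbbhSxBE0KTa9...    # prod blob produced with kubeseal --raw
    foo2: AgAkaTBYcESwogPiauZ15YbN...   # prod blob produced with kubeseal --raw
{{- end }}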
Anyway, each of those base64 blobs can be produced with:
$ kubeseal --raw --scope namespace-wide --from-file=yoursecret.txt
Pro-tip, you can pipe the secret if it's not in a file:
$ echo -n yoursecret | kubeseal --raw --scope namespace-wide --from-file=/dev/stdin
Then you have to paste the output of that command into your Helm Go template.
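For example, using the plain values from the question above, the two prod entries could each be produced like this:
$ echo -n bar-prod | kubeseal --raw --scope namespace-wide --from-file=/dev/stdin    # value for "foo"
$ echo -n bar2_prod | kubeseal --raw --scope namespace-wide --from-file=/dev/stdin   # value for "foo2"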

My approach
1. Use different .values.yml files for different environments
2. Create .secrets.yml files to store secret values (include them in .gitignore)
3. Make a git pre-commit hook that uses kubeseal --raw to encrypt the individual secrets and then write them to the values file (see the sketch below)
4. Store the values file in git.
I wrote a gist on this: https://gist.github.com/foogunlana/b75175b4ff62bc07258ea78274c698cd
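A minimal sketch of such a pre-commit hook, assuming a flat "key: value" secrets file (the file names here are made up):
#!/usr/bin/env sh
# .git/hooks/pre-commit -- sketch only
set -e
SECRETS_FILE=".secrets.yml"        # plain-text "key: value" lines, listed in .gitignore
VALUES_FILE="values.sealed.yaml"   # committed file that receives the encrypted blobs

[ -f "$SECRETS_FILE" ] || exit 0
echo "secrets:" > "$VALUES_FILE"
while IFS= read -r line; do
  key=${line%%:*}                  # text before the first colon
  value=${line#*: }                # text after "key: "
  sealed=$(printf '%s' "$value" | kubeseal --raw --scope namespace-wide --from-file=/dev/stdin)
  printf '  %s: %s\n' "$key" "$sealed" >> "$VALUES_FILE"
done < "$SECRETS_FILE"
git add "$VALUES_FILE"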

I would not put credentials from different environments into a single secret, as it can be deployed into different clusters with different sealed-secrets controllers.
Why don't you just use separate secret files for each environment?
To seal a secret I use the following command:
kubeseal --name=name-of-the-config --controller-namespace=fluxcd \
--controller-name=sealed-secrets --format yaml \
< secret.yaml > sealedsecret.yaml
You can detect the controller-name and controller-namespace of the helm release by:
kubectl get HelmRelease -A -o jsonpath="{.items[?(@.spec.chart.name=='sealed-secrets')]}"
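For example, with one plain Secret manifest per environment (the file and secret names below are placeholders), the same command yields one SealedSecret per environment:
kubeseal --name=myapp-dev --controller-namespace=fluxcd \
  --controller-name=sealed-secrets --format yaml \
  < secret-dev.yaml > sealedsecret-dev.yaml

kubeseal --name=myapp-prod --controller-namespace=fluxcd \
  --controller-name=sealed-secrets --format yaml \
  < secret-prod.yaml > sealedsecret-prod.yaml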

Related

Approach for configmap and secret for a yaml file

I have a YAML file which needs to be loaded into my pods. This file will have both sensitive and non-sensitive data, and it needs to be present at a path which I have included as an env in the containers.
env:
  - name: CONFIG_PATH
    value: /myapp/config/config.yaml
If my understanding is right, a ConfigMap was the right choice, but I am forced to give sensitive data like the password as plain text in the values.yaml of the Helm chart.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
  labels:
    app: {{ .Release.Name }}-config
data:
  config.yaml: |
    configuration:
      settings:
        Password: "{{.Values.config.password}}"
        Username: myuser
Values.yaml
config:
  password: "mypassword"
I mounted the above ConfigMap as follows:
volumeMounts:
  - name: {{ .Release.Name }}-config
    mountPath: /myapp/config/
So I wanted to try a Secret. If I use a Secret, it is loaded as environment variables inside the pod, but it does not go into this config.yaml file.
If I convert the above YAML file into a Secret instead of a ConfigMap, should I convert the entire config.yaml into a base64 secret? My YAML file has more entries, and it would look cumbersome; I don't see that as a solution.
If I use the Secret's stringData field, then the base64 will be taken as it is.
How do I make sure that config.yaml is loaded into the pods without the password being exposed in values.yaml? Is there a way to combine a ConfigMap and a Secret?
I read about projected volumes, but I don't see a use case for merging ConfigMaps and Secrets into a single config.yaml.
Any help would be appreciated.
Kubernetes has no real way to construct files out of several parts. You can embed an entire (small) file in a ConfigMap or a Secret, but you can't ask the cluster to assemble a file out of parts in multiple places.
In Helm, one thing you can do is to put the configuration-file data into a helper template
{{- define "config.yaml" -}}
configuration:
  settings:
    Password: "{{.Values.config.password}}"
    Username: myuser
{{ end -}}
In the ConfigMap you can use this helper template rather than embedding the content directly
apiVersion: v1
kind: ConfigMap
metadata: { ... }
data:
  config.yaml: |
{{ include "config.yaml" . | indent 4 }}
If you move it to a Secret you do in fact need to base64 encode it. But with the helper template that's just a matter of invoking the template and encoding the result.
apiVersion: v1
kind: Secret
metadata: { ... }
data:
  config.yaml: {{ include "config.yaml" . | b64enc }}
If it's possible to set properties in this file directly via environment variables (like Spring properties) or to insert environment-variable references in the file (like a Ruby ERB file) that could let you put the bulk of the file into a ConfigMap, but use a Secret for specific values; you would need a little more wiring to also make the environment variables available.
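A rough sketch of that wiring, mounting the non-secret ConfigMap as the file and injecting only the sensitive value as an environment variable (the names myapp-config, myapp-secret and APP_PASSWORD are placeholders):
containers:
  - name: myapp
    env:
      - name: APP_PASSWORD          # e.g. mapped onto the file's Password property by the framework
        valueFrom:
          secretKeyRef:
            name: myapp-secret
            key: password
    volumeMounts:
      - name: config
        mountPath: /myapp/config/
volumes:
  - name: config
    configMap:
      name: myapp-config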
You briefly note a concern around passing the credential as a Helm value. This does in fact require having it in plain text at deploy time, and an operator could helm get values later to retrieve it. If this is a problem, you'll need some other path to inject or retrieve the secret value.
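For example (the release name is a placeholder):
$ helm get values my-release          # user-supplied values, password included if it was set there
$ helm get values my-release --all    # computed values, including chart defaults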

Yaml dynamic variables

So, I'm just starting with YAML and k8s, and maybe this question comes from a lack of understanding of how YAML and Helm work together.
But I was wondering if I can declare a variable inside the values.yaml file that will be changed during the run of the scripts?
I was thinking about accumulating a value for each pod I am starting, which would be saved as an environment variable in each pod. I can manually create a different value for each pod, but I was wondering if there is an automatic way to do so?
Hope my question is clear :)
Helm allows for conditionals using its templating. For example, I can have this in my values.yaml
environment: preprod
And then this inside a yaml within my helm chart
{{ if eq .Values.environment "preprod" }}
## Do preprod stuff here
{{ end }}
{{ if eq .Values.environment "prod" }}
## Do prod stuff here
{{ end }}
This means if I ran helm install, then .Values.environment would resolve to "preprod" and the block within the {{ if eq .Values.environment "preprod" }}...{{ end }} would be printed in the yaml.
If I wanted to override that default, I can do so by adding the --set switch (details here):
helm install --set environment=prod
Which would cause the .Values.environment variable to resolve to "prod" instead, and the block within {{ if eq .Values.environment "prod" }} ... {{ end }} would be output instead.
Helm templates are stateless, and the variables structure is immutable.
Can I declare a variable inside the values.yaml file that will be changed during the run of the scripts?
That's not possible, no.
If you have experience with functional programming, there are some related tricks you can use in the context of Helm templates. A template only takes one parameter, but you can make that parameter be a list, and then pass some state forward through a series of recursive template calls. If you're tempted to do this, consider writing a Kubernetes operator instead: even if you have to learn Go to do it, the language is much more mainstream and practical than the template language, and it's much easier to test.
This having been said:
... accumulating value for each pod I am starting ...
If all you're asking for is a set of Pods that are very similar except that they have sequential names, this is one of the things a StatefulSet provides.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: some-name
spec:
  replicas: 5
The Pods generated by this StatefulSet will be named some-name-0, some-name-1, and so on. Your application code can see these names via the hostname command and language-specific equivalents. That could meet your needs.
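For example, a container entrypoint script could derive a per-pod value from that name (a sketch; MY_POD_INDEX and my-application are placeholders):
#!/bin/sh
# some-name-3 -> 3: take the ordinal suffix from the pod name
POD_NAME="$(hostname)"
export MY_POD_INDEX="${POD_NAME##*-}"
exec my-application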
If you need something more complex, you can also use a template range loop to generate a series of documents. Each document needs to begin with a --- YAML start-of-document marker. You need to be aware that range rebinds the . special template variable that appears in constructs like .Values, and I tend to save that away first.
{{- $top := . }}
{{- range $i := until 5 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "mychart.fullname" $top }}-{{ $i }}
spec: { ... }
{{- end }}
You should almost always use higher-level constructs like Deployments, StatefulSets, or Jobs instead of creating bare Pods. Trying to fit within their patterns will usually be a little easier than trying to manually create several very slightly different Pods.

Helm best practices

I am new to Helm and liked the idea of Helm to create versions for the deployments and package them as an artifact in JFrog Artifactory, but one thing that I am unclear about is how easy it is to create a chart.
I am comfortable with Kubernetes manifests, and creating one is very simple: you don't have to handcraft the YAML.
You can simply run a kubectl command in dry-run mode and export most of the YAML fields, as below:
kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx-manifest.yaml
Now, for creating a Helm chart, I need to run helm create and key in all the values needed by the Helm YAML files.
I'm curious whether Helm has shortcuts like the ones kubectl provides, so that charts can be created easily by keying in the required values on the command line.
Also, is there a migration utility available that supports converting a deployment manifest into a Helm chart?
helm create does what you are looking for. It creates a directory with all the basic stuff so that you don't need to manually create each file/directory. However, it can't create the content of a Chart it has no clue about.
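Roughly, running helm create mychart scaffolds a layout like this (the exact file list varies by Helm version):
mychart/
  Chart.yaml          # chart name, version, description
  values.yaml         # default values consumed by the templates
  charts/             # dependency charts
  templates/
    _helpers.tpl      # shared named templates
    deployment.yaml
    service.yaml
    NOTES.txt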
But there is no magic behind the scenes: a chart consists of templates and values. The templates are the same YAML files you are used to working with, except that you can replace whatever you want to make "dynamic" with the placeholders used by Helm. That's it.
So, in other words, just keep exporting as you are (I strongly suggest you stop doing this and create proper files suited to your needs) and add placeholders ({{ .Values.foo }}).
For example, this is the template for a service I have:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name | default .Chart.Name }}
spec:
  ports:
    - port: {{ .Values.port }}
      protocol: TCP
      targetPort: {{ .Values.port }}
  selector:
    app: {{ .Values.name | default .Chart.Name }}
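The matching values.yaml for that template is then just as small (the values shown are examples):
name: my-service
port: 8080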

Prevent creating a secret if it already exists

At the moment I have the following secret set up:
apiVersion: v1
kind: Secret
metadata:
  name: my-repository-key
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "imagePullSecret" . }}
Unfortunately, I have 2 subcharts using the same secret, which causes an issue when I try to install them using Helm.
Per a Stack Overflow answer, I've tried using the following line to prevent re-creation of the secret:
{{- if not (lookup "v1" "Secret" "" "my-repository-key") }}
Unfortunately it did not work, and I'm unable to debug the lookup, as that's impossible for the time being.
How do I prevent the creation with a lookup? Is there a better way?
In Helm charts, Kubernetes objects are often named with a prefix that's the name of the current release plus the name of the current chart. That will make the name unique, even if there are related subcharts that declare similar secrets. (A secret is pretty small and duplicating it between two subcharts shouldn't be an operational problem.)
metadata:
  name: "{{ .Release.Name }}-{{ .Chart.Name }}-key"
If you created the chart with helm create, this pattern is common enough that the new-chart template includes a helper template that generates this. If the chart only has a single secret, you can use the default name:
metadata:
  name: "{{ include "chartname.fullname" . }}"
Or, up to some corner cases around naming, you can add a suffix to it
metadata:
  name: "{{ include "chartname.fullname" . }}-key"

Templating external files in helm

I want an application.yaml file to be passed as a ConfigMap.
So I have written this.
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
{{ (.Files.Glob "foo/*").AsConfig | indent 2 }}
My application.yaml is present in the foo folder and contains a service name which I need to be dynamically populated via Helm interpolation:
foo:
  service:
    name: {{.Release.Name}}-service
When I do a dry run, I get this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
  application.yaml: "ei:\r\n service:\r\n name: {{.Release.Name}}-service"
but I want name: {{.Release.Name}}-service to contain the actual Helm release name.
Is it possible to do templating for external files using Helm? If yes, how do I do it?
I have gone through https://v2-14-0.helm.sh/docs/chart_template_guide/#accessing-files-inside-templates
I didn't find anything that solves my use case.
I could also copy the content into the ConfigMap YAML and do the interpolation there, but I don't want to do that. I want application.yaml to be in a separate file, so that it is simple to deal with config changes.
Helm includes a tpl function that can be used to expand an arbitrary string as a Go template. In your case the output of ...AsConfig is a string that you can feed into the template engine.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-conf
data:
{{ tpl (.Files.Glob "foo/*").AsConfig . | indent 2 }}
Once you do that you can invoke arbitrary template code from within the config file. For example, it's common enough to have a defined template that produces the name prefix of the current chart as configured, and so your config file could instead specify
foo:
  service:
    name: {{ template "mychart.name" . }}-service
As best I can tell, there is no recursive template evaluation available in Helm (nor in Sprig), likely by design.
However, in your specific case, if you aren't expecting the full power of golang templates, you can cheat and use Sprig's regexReplaceAllLiteral:
kind: ConfigMap
data:
{{/* here I have used character classes rather than a sea of backslashes;
     you can use the style you find most legible */}}
{{ $myRx := "[{][{] *[.]Release[.]Name *[}][}]" }}
{{ regexReplaceAllLiteral $myRx (.Files.Glob "foo/*").AsConfig .Release.Name }}
If you genuinely need the full power of golang templates for your config files, then Helm itself is not the mechanism for doing that -- but helmfile has a lot of fancy tricks for generating the ultimate chart that helm will install.