Using SecretProviderClass with Ingress basic Auth - kubernetes

I'm trying to set up basic auth in ingress. The secret referenced by "nginx.ingress.kubernetes.io/auth-secret" is stored in K8s secrets using a SecretProviderClass, and the secret is mounted correctly. As per this documentation (https://kubernetes.github.io/ingress-nginx/examples/auth/basic/), the secret must contain the htpasswd content under a data key named "auth". Hence, in my deployment file I created an environment variable named "BASIC_AUTH_VALUE" to try to achieve this.
env:
  - name: SECRET_AUTH
    valueFrom:
      secretKeyRef:
        name: {{ include "ui.fullname" . }}-azure-csi
        key: FRONTEND_BASIC_AUTH
        optional: false
  - name: BASIC_AUTH_VALUE
    value: data.auth:$(SECRET_AUTH)
Then in my ingress file, I set the annotations as below
nginx.ingress.kubernetes.io/auth-secret: BASIC_AUTH_VALUE
Even then I still get a 503 error. The pod is up and running and there isn't anything in the logs that I can find.
I have tried several options, but all in vain so far. Any guidance would be of great help. Thanks.

I found a solution. I had to adapt the SecretProviderClass's secretObjects as below
secretObjects:
  - data:
    {{- range $secret := .Values.azureSecretsCSI.secrets }}
      - key: {{ $secret.k8sName }}
        objectName: {{ $secret.azName }}
    {{- end }}
    secretName: {{ include "ui.fullname" . }}-auth-azure-csi
    type: Opaque
Where "{{ $secret.k8sName }}" must be "auth" is derived from values.yaml file as below
azureSecretsCSI:
  tenantId: XXX
  kvName: XXX
  secrets:
    - azName: XXX
      k8sName: auth
And then, in the ingress annotation, use the name of the Kubernetes Secret synced by the SecretProviderClass (the secretName above) instead of an environment variable (which is what I was trying to do and which wasn't working):
nginx.ingress.kubernetes.io/auth-secret: {{ include "ui.fullname" . }}-auth-azure-csi
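For completeness: the secrets-store CSI driver only creates the synced Kubernetes Secret while a pod actually mounts the SecretProviderClass volume (which the question says is already in place). A minimal sketch of that mount in the deployment's pod spec, with an illustrative container name and mount path, and assuming the SecretProviderClass resource shares the name used above:
volumes:
  - name: secrets-store
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        # assumed to match the SecretProviderClass resource name
        secretProviderClass: {{ include "ui.fullname" . }}-auth-azure-csi
containers:
  - name: ui
    volumeMounts:
      - name: secrets-store
        mountPath: /mnt/secrets-store
        readOnly: true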

Related

Can we define an env variable in the Helm charts only if secrets are provisioned?

We have the below env variable in the Helm hooks YAML file:
env:
  - name: PASSWORD
    value: {{ .Values.cmpassword | default "User#1234" | quote }}
I need to check whether a particular secret has already been created, and if so, the password should take its value from that secret, something like:
{{ if kubectl get secret passwdsecret1 -o yaml | grep password }}
valueFrom:
  secretKeyRef:
    name: cmpasswdsecret
    key: password
{{ else }}
value: {{ .Values.global.cmpassword | default "User#1234" | quote }}
{{ end }}
Is it possible to check whether the secret has been created? If the secret is available, derive the value from it; otherwise take the value from the values file.
The Bitnami Helm charts generally address this by making the related Secret name a configuration option. I'd suggest this as a useful general approach here, since you'll get a consistent deployment regardless of what's actually in the cluster. If the administrator says the secret is supposed to exist, but it doesn't, the Pods will fail to start up; that's probably a better outcome than falling back to an insecure default.
The values.yaml configuration for this could look like
# existingSecret, if set, has the name of a Secret to use for
# credentials. This must contain a key `password`.
# existingSecret:
# cmpassword specifies the password. Ignored if `existingSecret`
# is provided.
cmpassword: User#1234
Then in your actual template code you can generate the secretKeyRef if existingSecret is specified, or fall back to the cmpassword if not.
env:
  - name: PASSWORD
    {{- with .Values.existingSecret }}
    valueFrom:
      secretKeyRef:
        name: {{ . }}
        key: password
    {{- else }}
    value: {{ .Values.cmpassword | required "either existingSecret or cmpassword is required" | quote }}
    {{- end }}
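For instance (the release name and chart directory here are just placeholders), installation could then go either way:
# use an existing Secret that already contains a `password` key
helm install my-release . --set existingSecret=cmpasswdsecret
# or fall back to the plain value from values.yaml / --set
helm install my-release . --set cmpassword='User#1234'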

Empty variable when using `status.hostIP` as reference field for my env variable in kubernetes

I'm deploying a Kubernetes pod using Helm v3; my kubectl client and server are above 1.7, so reference fields should be supported. However, when I deploy, the value is just empty.
using
environment:
  - name: DD_AGENT_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
Where the DD_AGENT_HOST is my env variable that should be given the host ip.
Any idea on why this might be happening?
I had to add this to the container specification directly, rather than passing it through the chart's env values with an include from Helm, as that doesn't work.
The issue is related to the Helm deployment template (if you use one). For instance, if you have a deployment.yaml with:
env:
  {{- range .Values.env }}
  - name: {{ .name }}
    value: {{ .value | quote }}
  {{- end }}
and one of the env values needs valueFrom, you have to add it explicitly (unless there is a nicer way of doing it):
- name: DD_AGENT_HOST
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
Otherwise the range above only renders name/value pairs, never the valueFrom block, and as a result DD_AGENT_HOST will be empty.
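One possible "nicer way" (a sketch, assuming each entry in .Values.env carries either a value or a valueFrom field) is a range that handles both cases:
env:
  {{- range .Values.env }}
  - name: {{ .name }}
    {{- if .valueFrom }}
    valueFrom:
      {{- toYaml .valueFrom | nindent 6 }}
    {{- else }}
    value: {{ .value | quote }}
    {{- end }}
  {{- end }}
With this, the values file can keep the original list, including the status.hostIP entry, and the fieldRef is carried through untouched.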

Helm Deployment: Connecting Kubernetes to Postgres DB in Cloud SQL

So I am deploying my Spring Boot app using Helm. I am following a pre-existing formula used by our company to try to accomplish this task, but for some reason I am unable to.
My postgresql-secrets.yml file contains the following:
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "codes-chart.fullname" . }}-postgresql
  labels:
    app: {{ template "codes-chart.name" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  SPRING_DATASOURCE_URL: {{ .Values.secrets.springDatasourceUrl | b64enc }}
  SPRING_DATASOURCE_USERNAME: {{ .Values.secrets.springDatasourceUsername | b64enc }}
  SPRING_DATASOURCE_PASSWORD: {{ .Values.secrets.springDatasourcePassword | b64enc }}
This picks up the values in the values.yaml file
secrets:
  springDatasourceUrl: PLACEHOLDER
  springDatasourceUsername: PLACEHOLDER
  springDatasourcePassword: PLACEHOLDER
The placeholders are being overwritten in Helm using a variable override in the environment.
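(For illustration, the override looks something like the following; the release name, chart path, and environment variable names here are placeholders.)
helm upgrade --install codes ./helm/codes \
  --set secrets.springDatasourceUrl="$SPRING_DATASOURCE_URL" \
  --set secrets.springDatasourceUsername="$SPRING_DATASOURCE_USERNAME" \
  --set secrets.springDatasourcePassword="$SPRING_DATASOURCE_PASSWORD"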
The secrets are referenced in the envFrom: section of codes-deployment.yaml:
envFrom:
  - configMapRef:
      name: {{ template "codes-chart.fullname" . }}-application
  - secretRef:
      name: {{ template "codes-chart.fullname" . }}-postgresql
My Helm file structure is as follows:
|helm
|-codes
|--configmaps
|---manifest
|----manifest-codes-configmap.yaml
|--templates
|---application-deploy-job.yaml
|---application-manifest-configmap.yaml
|---application-register-job.yaml
|---application-unregister-job.yaml
|---codes-application-configmap.yaml
|---codes-deployment.yaml
|---codes-hpa.yaml
|---codes-ingress.yaml
|---codes-service.yaml
|---postgresql-secret.yaml
|--values.yaml
|--Chart.yaml
The issue seems to be with the SPRING_DATASOURCE_URL:
If I use the private IP of the Cloud SQL DB, then it says it is not accepting connections.
If I use the JDBC URL format, e.g.
(jdbc:postgresql://google/<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=<POSTGRESQL_USER_NAME>&password=<POSTGRESQL_USER_PASSWORD>)
then I get a 403 authentication error.
What am I doing wrong?
403 Forbidden:
The server understood the request, but is refusing to fulfill it.
A 403 is returned for authenticated users with insufficient permissions: it indicates that the resource cannot be provided, either because no level of authentication would be sufficient, or because the user is already authenticated but does not have the required authority.
Let me add some examples:
https://www.baeldung.com/kubernetes-helm
https://medium.com/zoom-techblog/from-zero-to-kubernetes-4fd354423e6a

Kubernetes - config map templated variables

I have two env vars in a pod (or a config map):
- TARGET_URL=http://www.example.com
- TARGET_PARAM=param
Is there any way for me to provide a third env var which is derived from both of these vars, something like ${TARGET_URL}/mysite/${TARGET_PARAM}?
Thanks!
For environment variables (and a couple of other fields in the pod spec, including args and command) there is a Make-like $(VARIABLE) syntax that will get expanded; see for example the documentation for .env.value. This could look like:
env:
  - name: TARGET_URL
    valueFrom:
      configMapKeyRef:
        name: cm
        key: TARGET_URL
  - name: TARGET_PARAM
    valueFrom:
      configMapKeyRef:
        name: cm
        key: TARGET_PARAM
  - name: TARGET_DETAIL_URL
    value: $(TARGET_URL)/mysite/$(TARGET_PARAM)
If you are depending on mounting a ConfigMap into a container as files, then it can only contain static content; this trick won't work.
I don't think it is possible right now without a third-party tool; going by the API reference, plain YAML does not support composing variables like this. But I will tell you about a third-party tool: Helm.
It is possible to achieve this using Helm. Your template will look like:
containers:
  - name: {{ .Values.Backend.name }}
    image: "{{ .Values.Backend.image.repository }}:{{ .Values.Backend.image.tag }}"
    imagePullPolicy: "{{ .Values.Backend.image.pullPolicy }}"
    env:
      - name: TARGET_URL
        value: {{ .Values.URL }}
      - name: TARGET_PARAM
        value: {{ .Values.PARAM }}
      - name: URL
        value: {{ .Values.URL }}/mysite/{{ .Values.PARAM }}
and you will add the URL and PARAM parameters to the values.yaml file:
URL: http://www.example.com
PARAM: param
You can also do this with an init script that you call from the Docker entrypoint.
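A minimal sketch of that entrypoint approach (the file name and the final exec are assumptions):
#!/bin/sh
# entrypoint.sh: compose the derived variable from the two provided ones,
# then hand over to the container's main command
export TARGET_DETAIL_URL="${TARGET_URL}/mysite/${TARGET_PARAM}"
exec "$@"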

How to pull environment variables with Helm charts

I have my deployment.yaml file within the templates directory of my Helm chart, with several environment variables for the container I will be running using Helm.
Now I want to be able to pull the environment variables locally from whatever machine helm is run on, so I can hide the secrets that way.
How do I pass this in and have Helm grab the environment variables locally when I use Helm to run the application?
Here is some part of my deployment.yaml file
...
...
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        - name: "USERNAME"
          value: "app-username"
        - name: "PASSWORD"
          value: "28sin47dsk9ik"
...
...
How can I pull the value of USERNAME and PASSWORD from local environment variables when I run helm?
Is this possible? If yes, then how do I do this?
You can export the variable and use it while running helm install.
Before that, you have to modify your chart so that the value can be set at installation time.
Skip this part if you already know how to set up template fields.
As you don't want to expose the data, it's better to have it saved as a Secret in Kubernetes.
First of all, add these two lines to your values file, so that the two values can be set from outside:
username: root
password: password
Now add a secret.yaml file inside your templates folder and copy this code snippet into that file:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  password: {{ .Values.password | b64enc }}
  username: {{ .Values.username | b64enc }}
Now tweak your deployment YAML template and change the env section like this:
...
...
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        - name: "USERNAME"
          valueFrom:
            secretKeyRef:
              key: username
              name: {{ .Release.Name }}-auth
        - name: "PASSWORD"
          valueFrom:
            secretKeyRef:
              key: password
              name: {{ .Release.Name }}-auth
...
...
If you have modified your template correctly for the --set flag,
you can set this using an environment variable.
$ export USERNAME=root-user
Now use this variable while running helm install,
$ helm install --set username=$USERNAME ./mychart
If you run this helm install in dry-run mode, you can verify the changes,
$ helm install --dry-run --set username=$USERNAME --debug ./mychart
[debug] Created tunnel using local port: '44937'
[debug] SERVER: "127.0.0.1:44937"
[debug] Original chart version: ""
[debug] CHART PATH: /home/maruf/go/src/github.com/the-redback/kubernetes-yaml-drafts/helm-charts/mychart

NAME: irreverant-meerkat
REVISION: 1
RELEASED: Fri Apr 20 03:29:11 2018
CHART: mychart-0.1.0
USER-SUPPLIED VALUES:
username: root-user

COMPUTED VALUES:
password: password
username: root-user

HOOKS:
MANIFEST:

---
# Source: mychart/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: irreverant-meerkat-auth
data:
  password: password
  username: root-user
---
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: irreverant-meerkat
  labels:
    app: irreverant-meerkat
spec:
  replicas: 1
  template:
    metadata:
      name: irreverant-meerkat
      labels:
        app: irreverant-meerkat
    spec:
      containers:
        - name: irreverant-meerkat
          image: alpine
          env:
            - name: "USERNAME"
              valueFrom:
                secretKeyRef:
                  key: username
                  name: irreverant-meerkat-auth
            - name: "PASSWORD"
              valueFrom:
                secretKeyRef:
                  key: password
                  name: irreverant-meerkat-auth
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
  selector:
    matchLabels:
      app: irreverant-meerkat
You can see that the username data in the Secret has changed to root-user.
I have added this example to a GitHub repo.
There is also some discussion in the kubernetes/helm repo regarding this; see that issue to learn about the other ways to use environment variables.
You can pass env key/value pairs from the values YAML by setting up the deployment YAML as below:
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        {{- range $name, $value := .Values.env }}
        - name: {{ $name }}
          value: {{ $value | quote }}
        {{- end }}
In the values.yaml (env is a map here, so that the range and the --set flags below work):
env:
  USERNAME: ""
  PASSWORD: ""
When you install the chart, you can pass the username and password values:
helm install chart_name --name release_name --set env.USERNAME="app-username" --set env.PASSWORD="28sin47dsk9ik"
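With the map form, a dry run would render the env block along these lines (Go template map iteration is sorted by key, so the order is alphabetical):
env:
  - name: PASSWORD
    value: "28sin47dsk9ik"
  - name: USERNAME
    value: "app-username"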
For those looking to use maps instead of lists for their env variables, this has worked for me:
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      env:
        {{- range $key, $val := .Values.env }}
        - name: {{ $key }}
          value: {{ $val | quote }}
        {{- end }}
values.yaml:
env:
  FOO: "BAR"
  USERNAME: "CHANGEME"
  PASSWORD: "CHANGEME"
That way I can access specific values by name in other parts of the helm chart and pass the sensitive values via helm command line.
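For example (the release name is just a placeholder), the sensitive values can then be supplied on the command line:
helm install my-release . --set env.USERNAME=app-username --set env.PASSWORD=28sin47dsk9ik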
To get away from having to set each secret manually, you can use:
export MY_SECRET=123
envsubst < values.yaml | helm install my-release . --values -
where ${MY_SECRET} is referenced in your values.yaml file like:
mychart:
  secrets:
    secret_1: ${MY_SECRET}
Helm 3.1 supports post rendering (https://helm.sh/docs/topics/advanced/#post-rendering), which passes the manifest to a script before it is actually sent to the Kubernetes API. Post rendering allows manipulating the manifest in multiple ways (e.g. using kustomize on top of Helm).
The simplest form of a post renderer which replaces predefined environment values could look like this:
#!/bin/sh
envsubst <&0
Note this will replace every occurrence of $VARNAME / ${VARNAME}, which could collide with variables in the templates, like shell scripts in liveness probes. So it's better to explicitly list the variables you want replaced: envsubst '${USERNAME} ${PASSWORD}' <&0
Define your env variables in the shell:
export USERNAME=john PASSWORD=my-secret
In the templates (e.g. secret.yaml), use the values defined in the values.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  username: {{ .Values.username }}
  password: {{ .Values.password }}
Note that you cannot apply string transformations like b64enc on the strings, as they get injected into the manifest after Helm has already processed all YAML files. Instead, you can encode them in the post renderer if required.
In the values.yaml use the variable placeholders:
...
username: ${USERNAME}
password: ${PASSWORD}
The parameter --post-renderer is supported in several Helm commands e.g.
helm install --dry-run --post-renderer ./my-post-renderer.sh my-chart
By using the post renderer the variables/placeholders automatically get replaced by envsubst without additional scripting.
I guess the question is how to look up an env variable inside the chart from the environment itself, not by passing it with --set.
For example: I have set a key "my_db_password" and want to derive its value from an environment variable; that is not supported.
I am not very sure about Go templates, but I guess this is disabled for the reason explained in the Helm documentation: "We removed two for security reasons: env and expandenv (which would have given chart authors access to Tiller’s environment)." https://helm.sh/docs/developing_charts/#know-your-template-functions
I think one simple way is just to set the value directly. For example, in your values.yaml you want to pass the service name:
...
myapp:
  service:
    name: ""
...
Your service.yaml just uses this value as usual:
{{ .Values.myapp.service.name }}
Then to set the value, use --set, like: --set myapp.service.name=hello
Then, for example, if you want to use the environment variable, do export before that:
# set your env variable
export MYAPP_SERVICE=hello
# pass it to helm
helm install myapp --set myapp.service.name=$MYAPP_SERVICE
If you do debug like:
helm install myapp --set myapp.service.name=$MYAPP_SERVICE --debug --dry-run ./myapp
you can see at the beginning of the output that your "hello" was set:
USER-SUPPLIED VALUES:
myapp:
  service:
    name: hello
As an alternative to passing local environment variables, I like to store these kinds of sensitive values in a folder ignored by your VCS, and use Helm's .Files object to read them and provide the values to your templates.
In my opinion, the advantage is that it doesn't require the host operating the Helm chart to set any OS-specific environment variables, and it makes the chart self-contained while not exposing these values.
# In a folder not committed, e.g. <chart_base_directory>/secrets
username: app-username
password: 28sin47dsk9ik
Then in your chart templates:
# In deployment.yaml file
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
stringData:
{{ .Files.Get "<chart_base_directory>/secrets" | indent 2 }}
As a result, everything the chart needs is accessible from within the directory where you define everything else. And instead of setting system-wide env vars, it just needs a file.
This file can be generated automatically, or copied from a committed template with dummy values. Helm will also fire an error early on install/upgrade if this isn't defined, as opposed to creating your Secret with username="" and password="" when your env vars haven't been defined, which only becomes obvious once your changes are applied to the cluster.
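For instance (the file names here are purely illustrative), the committed template with dummy values and the VCS ignore rule might look like:
# <chart_base_directory>/secrets.example (committed; copy to "secrets" and fill in real values)
username: CHANGEME
password: CHANGEME

# .gitignore
<chart_base_directory>/secrets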