Expanding a manifest secret variable into a volume section - kubernetes

The chart is efs-provisioner. I am trying to tweak it a little and inject some of the fields by consuming existing k8s secrets; however, for the volumes part I must provide a value referencing the secrets. I tried the approach from Combining multiple k8s secrets into an env variable, but it seems it is not applicable to a volume section.
Is there a way to do this?
The pod is not able to start and fails with:
mount.nfs: Failed to resolve server $(FILE_SYSTEM_ID).efs.$(AWS_REGION).amazonaws.com: Name or service not known
containers:
  - name: {{ template "efs-provisioner.fullname" . }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    env:
      - name: FILE_SYSTEM_ID
        valueFrom:
          secretKeyRef:
            name: efs-id-secret
            key: String
      - name: AWS_REGION
        valueFrom:
          secretKeyRef:
            name: aws-region-secret
            key: String
volumes:
  - name: pv-volume
    nfs:
      server: $(FILE_SYSTEM_ID).efs.$(AWS_REGION).amazonaws.com
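Note that Kubernetes only performs $(VAR) expansion in a handful of container fields such as env values, command, and args; volume definitions are not expanded, which is why mount.nfs receives the literal string. A minimal sketch of one possible workaround, assuming the file system ID and region can also be supplied as (hypothetical) chart values efs.fileSystemId and efs.awsRegion, is to render the server name at template time instead:
volumes:
  - name: pv-volume
    nfs:
      # Rendered by Helm at install/upgrade time, so no runtime $(VAR)
      # expansion is needed inside the volume definition.
      server: {{ .Values.efs.fileSystemId }}.efs.{{ .Values.efs.awsRegion }}.amazonaws.com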

Related

How to refer to a secret object holding environment variables inside a container

I have a question and I hope someone can help me.
I have a deployment YAML file with a pod for an application, and this app must connect to a Redis DB using environment variables. I have already set the environment variables on the pod, as you can see here:
spec:
  containers:
  - name: app
    image: nix/python
    ports:
    - containerPort: 8000
    imagePullPolicy: Always
    env:
    - name: ENVIRONMENT
      value: "DEV"
    - name: HOST
      value: "localhost"
    - name: PORT
      value: "8000"
    - name: REDIS_HOST
      value: "nix"
    - name: REDIS_PORT
      value: "6379"
    - name: REDIS_DB
      value: "0"
But I don't think this is a secure best practice, so I am thinking of defining those environment variables in a secret object and referring to it under the container's env. I just want to refer to the name of the secret, and the container should read all the variables at once, not one by one. How can I do that, please?
Replace the env field with this:
envFrom:
  - secretRef:
      name: {{ .name }}
      optional: false
Set {{ .name }} to the name of the secret object you create.
Your secret object should look like this:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .name }}
type: Opaque
stringData:
  ENVIRONMENT: "DEV"
  HOST: "localhost"
  PORT: "8000"
  REDIS_HOST: "nix"
  REDIS_PORT: "6379"
  REDIS_DB: "0"
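If some of these values are not actually sensitive (for example HOST and PORT), a ConfigMap can be combined with the secret in the same way; a small sketch, where app-config is a hypothetical ConfigMap holding the non-sensitive values:
envFrom:
  - configMapRef:
      name: app-config    # hypothetical ConfigMap with ENVIRONMENT, HOST, PORT
  - secretRef:
      name: {{ .name }}   # the secret defined above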

Mount (add) files to existing directory using configmap volume mount

I have a ConfigMap with multiple files, and I want to add these files to an already existing directory. The tricky part here is that the filenames (keys) can change, so I can't mount them individually using subPath.
Is there any way this can be achieved from the Deployment manifest?
Configmap:
config-files-configmap
└── newFile1.yml
└── newFile2.yml
Existing directory after adding files from configmap:
config/
└── existingFile1.yml
└── existingFile2.yml
└── newFile1.yml
└── newFile2.yml
PS: I have tried mounting the configmap as a directory, but that overrides the existing contents of the directory.
Thanks
You can use an init container with the configmap as a volume mount.
I'm not sure about your actual deployment architecture, but I would suggest mounting the configmap files into another directory and copying them into place when the main container starts, using an init container or a pod lifecycle hook.
Since we can't use subPath here, this is the option I see for now.
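A minimal sketch of that pattern (image names and paths are hypothetical, and it assumes the application image contains a shell and already ships the existing files under /config): the init container runs the same image as the app, copies the baked-in files plus the ConfigMap files into a shared emptyDir, and the main container mounts that emptyDir at /config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      initContainers:
        - name: merge-config
          image: my-app:latest               # hypothetical; same image as the main container
          command:
            - sh
            - -c
            - cp /config/*.yml /merged/ && cp /from-configmap/*.yml /merged/
          volumeMounts:
            - name: configmap-files          # newFile1.yml, newFile2.yml from the ConfigMap
              mountPath: /from-configmap
            - name: merged-config            # writable scratch directory shared with the app
              mountPath: /merged
      containers:
        - name: app
          image: my-app:latest               # hypothetical
          volumeMounts:
            - name: merged-config            # existing files plus ConfigMap files end up here
              mountPath: /config
      volumes:
        - name: configmap-files
          configMap:
            name: config-files-configmap
        - name: merged-config
          emptyDir: {}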
Example helm template from RabbitMQ
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Release.Name }}-rabbitmq
  labels: &RabbitMQDeploymentLabels
    app.kubernetes.io/name: {{ .Release.Name }}
    app.kubernetes.io/component: rabbitmq-server
spec:
  selector:
    matchLabels: *RabbitMQDeploymentLabels
  serviceName: {{ .Release.Name }}-rabbitmq-discovery
  replicas: {{ .Values.rabbitmq.replicas }}
  updateStrategy:
    # https://www.rabbitmq.com/upgrade.html
    # https://cloud.google.com/kubernetes-engine/docs/how-to/updating-apps
    type: RollingUpdate
  template:
    metadata:
      labels: *RabbitMQDeploymentLabels
    spec:
      serviceAccountName: {{ .Values.rabbitmq.serviceAccount }}
      terminationGracePeriodSeconds: 180
      initContainers:
      # This init container copies the config files from the read-only ConfigMap to a writable location.
      - name: copy-rabbitmq-config
        image: {{ .Values.rabbitmq.initImage }}
        imagePullPolicy: Always
        command:
        - /bin/bash
        - -euc
        - |
          # Remove cached erlang cookie since we are always providing it,
          # that opens the way to recreate the application and access to existing data
          # as a new erlang cookie will be regenerated again.
          echo ${RABBITMQ_ERLANG_COOKIE} > /var/lib/rabbitmq/.erlang.cookie
          chmod 600 /var/lib/rabbitmq/.erlang.cookie
          # Copy the mounted configuration to both places.
          cp /rabbitmqconfig/rabbitmq.conf /etc/rabbitmq/rabbitmq.conf
          # Change permissions to allow adding more configuration via variables.
          chown :999 /etc/rabbitmq/rabbitmq.conf
          chmod 660 /etc/rabbitmq/rabbitmq.conf
          cp /rabbitmqconfig/enabled_plugins /etc/rabbitmq/enabled_plugins
        volumeMounts:
        - name: configmap
          mountPath: /rabbitmqconfig
        - name: config
          mountPath: /etc/rabbitmq
        - name: {{ .Release.Name }}-rabbitmq-pvc
          mountPath: /var/lib/rabbitmq
        env:
        - name: RABBITMQ_ERLANG_COOKIE
          valueFrom:
            secretKeyRef:
              name: {{ .Release.Name }}-rabbitmq-secret
              key: rabbitmq-erlang-cookie
      containers:
      - name: rabbitmq
        image: "{{ .Values.rabbitmq.image.repo }}:{{ .Values.rabbitmq.image.tag }}"
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: RABBITMQ_USE_LONGNAME
          value: 'true'
        - name: RABBITMQ_NODENAME
          value: 'rabbit@$(MY_POD_NAME).{{ .Release.Name }}-rabbitmq-discovery.{{ .Release.Namespace }}.svc.cluster.local'
        - name: K8S_SERVICE_NAME
          value: '{{ .Release.Name }}-rabbitmq-discovery'
        - name: K8S_HOSTNAME_SUFFIX
          value: '.{{ .Release.Name }}-rabbitmq-discovery.{{ .Release.Namespace }}.svc.cluster.local'
        # User name to create when RabbitMQ creates a new database from scratch.
        - name: RABBITMQ_DEFAULT_USER
          value: '{{ .Values.rabbitmq.user }}'
        # Password for the default user.
        - name: RABBITMQ_DEFAULT_PASS
          valueFrom:
            secretKeyRef:
              name: {{ .Release.Name }}-rabbitmq-secret
              key: rabbitmq-pass
        ports:
        - name: clustering
          containerPort: 25672
        - name: amqp
          containerPort: 5672
        - name: amqp-ssl
          containerPort: 5671
        - name: prometheus
          containerPort: 15692
        - name: http
          containerPort: 15672
        volumeMounts:
        - name: config
          mountPath: /etc/rabbitmq
        - name: {{ .Release.Name }}-rabbitmq-pvc
          mountPath: /var/lib/rabbitmq
        livenessProbe:
          exec:
            command:
            - rabbitmqctl
            - status
          initialDelaySeconds: 60
          timeoutSeconds: 30
        readinessProbe:
          exec:
            command:
            - rabbitmqctl
            - status
          initialDelaySeconds: 20
          timeoutSeconds: 30
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/bash
              - -c
              - |
                # Wait for RabbitMQ to be ready.
                until rabbitmqctl node_health_check; do
                  sleep 5
                done
                # By default, RabbitMQ does not have Highly Available policies enabled;
                # the following command enables them.
                rabbitmqctl set_policy ha-all "." '{"ha-mode":"all", "ha-sync-mode":"automatic"}' --apply-to all --priority 0
      {{ if .Values.metrics.exporter.enabled }}
      - name: prometheus-to-sd
        image: {{ .Values.metrics.image }}
        ports:
        - name: profiler
          containerPort: 6060
        command:
        - /monitor
        - --stackdriver-prefix=custom.googleapis.com
        - --source=rabbitmq:http://localhost:15692/metrics
        - --pod-id=$(POD_NAME)
        - --namespace-id=$(POD_NAMESPACE)
        - --monitored-resource-type-prefix=k8s_
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      {{ end }}
      volumes:
      - name: configmap
        configMap:
          name: {{ .Release.Name }}-rabbitmq-config
          items:
          - key: rabbitmq.conf
            path: rabbitmq.conf
          - key: enabled_plugins
            path: enabled_plugins
      - name: config
        emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: {{ .Release.Name }}-rabbitmq-pvc
      labels: *RabbitMQDeploymentLabels
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: {{ .Values.rabbitmq.persistence.storageClass }}
      resources:
        requests:
          storage: {{ .Values.rabbitmq.persistence.size }}
Example reference: https://github.com/GoogleCloudPlatform/click-to-deploy/blob/master/k8s/rabbitmq/chart/rabbitmq/templates/statefulset.yaml

How can I reuse common configuration across different kubernetes manifests?

Assume I have this manifest:
apiVersion: batch/v1
kind: Job
metadata:
  name: initialize-assets-fixtures
spec:
  template:
    spec:
      initContainers:
      - name: wait-for-minio
        image: bitnami/minio-client
        env:
        - name: MINIO_SERVER_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: minio
              key: access-key
        - name: MINIO_SERVER_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: minio
              key: secret-key
        - name: MINIO_SERVER_HOST
          value: minio
        - name: MINIO_SERVER_PORT_NUMBER
          value: "9000"
        - name: MINIO_ALIAS
          value: minio
        command:
        - /bin/sh
        - -c
        - |
          mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
      containers:
      - name: initialize-assets-fixtures
        image: bitnami/minio
        env:
        - name: MINIO_SERVER_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: minio
              key: access-key
        - name: MINIO_SERVER_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: minio
              key: secret-key
        - name: MINIO_SERVER_HOST
          value: minio
        - name: MINIO_SERVER_PORT_NUMBER
          value: "9000"
        - name: MINIO_ALIAS
          value: minio
        command:
        - /bin/sh
        - -c
        - |
          mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
          for category in `ls`; do
            for f in `ls $category/*` ; do
              mc cp $f ${MINIO_ALIAS}/$category/$(basename $f)
            done
          done
      restartPolicy: Never
You see I have here one initContainer and one container. In both containers, I have the same configuration, i.e. the same env section.
Assume I have yet another Job manifest where I use the very same env section again.
It's a lot of duplicated configuration that I bet I can simplify drastically, but I don't know how to do it. Any hint? Any link to some documentation? After some googling, I was not able to come up with anything useful. Maybe with kustomize, but I'm not sure. Or maybe I'm doing it the wrong way with all those environment variables, but I don't think I have a choice, depending on the service I'm using (here it's minio, but I want to do the same kind of stuff with other services which might not be as flexible as minio).
Based on my knowledge, you have these 3 options:
Kustomize
Helm
ConfigMap
ConfigMap
You can use either kubectl create configmap or a ConfigMap generator in kustomization.yaml to create a ConfigMap.
The data source corresponds to a key-value pair in the ConfigMap, where
key = the file name or the key you provided on the command line
value = the file contents or the literal value you provided on the command line.
More about how to use it in a pod here.
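For example, a ConfigMap holding the shared minio settings could be produced by a generator like this (a sketch; the names and job.yaml are hypothetical):
# kustomization.yaml
configMapGenerator:
  - name: minio-common-env
    literals:
      - MINIO_SERVER_HOST=minio
      - MINIO_SERVER_PORT_NUMBER=9000
      - MINIO_ALIAS=minio
resources:
  - job.yaml
The generated ConfigMap can then be pulled into both containers with envFrom/configMapRef, while the access and secret keys stay in the existing minio secret; kustomize appends a content hash to the generated name and rewrites references in the managed resources automatically.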
Helm
As @Matt mentioned in the comments, you can use Helm:
helm lets you template the yaml with values. Also once you get into it there are ways to create and include partial templates – Matt
By the way, Helm has its own minio chart; you might take a look at how it is done there.
Kustomize
It's well described here and here how you could do that in Kustomize.
Let me know if you have any more questions.
So, long story short: to solve my problem, I first created a new chart for my service and transformed the k8s manifests I had into helm templates. Then, I completed the _helpers.tpl with the following code:
{{/*
Common minio environment variables setup
*/}}
{{- define "minio.envvarsblock" -}}
- name: MINIO_SERVER_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      name: {{ .Values.minio.fullname }}
      key: access-key
- name: MINIO_SERVER_SECRET_KEY
  valueFrom:
    secretKeyRef:
      name: {{ .Values.minio.fullname }}
      key: secret-key
- name: MINIO_SERVER_HOST
  value: {{ .Values.minio.fullname }}
- name: MINIO_SERVER_PORT_NUMBER
  value: {{ .Values.minio.server.port | quote }}
- name: MINIO_ALIAS
  value: {{ .Values.minio.client.alias }}
{{- end -}}
{{/*
Wait for minio init container definition
*/}}
{{- define "wait-for-minio" -}}
- name: wait-for-minio
  image: {{ .Values.minio.client.image }}
  env: {{- include "minio.envvarsblock" . | nindent 4 }}
  command:
    - /bin/sh
    - -c
    - |
      mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
{{- end -}}
The first section above allows me to reuse the env section throughout all my templates, and the second allows me to reuse an initContainer that I use all over the place too. I was then able to inject those partial templates into my Helm templates like so (to take the example I put in my original post):
{{- if .Values.fixtures.enabled -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "chart.fullname" . }}-init-fixtures
  labels:
{{ include "chart.labels" . | indent 4 }}
spec:
  template:
    spec:
      initContainers:
      {{- include "wait-for-minio" . | nindent 6 }}
      containers:
      - name: {{ .Chart.Name }}-init-fixtures
        image: {{ .Values.image }}
        env: {{- include "minio.envvarsblock" . | nindent 10 }}
        command:
        - /bin/sh
        - -c
        - |
          mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
          for category in `ls`; do
            for f in `ls $category/*` ; do
              mc cp $f ${MINIO_ALIAS}/$category/$(basename $f)
            done
          done
      restartPolicy: OnFailure
{{- end -}}

helm reference secret in deployment yaml

I'm looking for a possible way to reference the secrets in my deployment.yaml (1 liner)
Currently I'm using the following:
containers:
  - name: {{ template "myapp.name" . }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: Always
    env:
      - name: COUCHDB_USER
        valueFrom:
          secretKeyRef:
            name: {{ .Release.Name }}-secrets
            key: COUCHDB_USER
      - name: COUCHDB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: {{ .Release.Name }}-secrets
            key: COUCHDB_PASSWORD
With the minimal modification possible, I want to achieve something like this:
containers:
  - name: {{ template "myapp.name" . }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: Always
    env:
      - name: COUCHDB_URL
        value: http://${COUCHDB_USER}:${COUCHDB_PASSWORD}@{{ .Release.Name }}-couchdb:5984
Just curious if I can do this in one step during the deployment, instead of passing 2 env vars and parsing them in my application.
I don't see any way to achieve it without setting COUCHDB_USER and COUCHDB_PASSWORD in the container env.
One workaround is: you can specify your secret in the container's envFrom, and all your secret keys will be converted to environment variables. Then you can use those environment variables to create your composite env var (i.e., COUCHDB_URL).
FYI, to create an env var from another env var in Kubernetes, $( ) is used. Curly braces {} won't work at this very moment.
One sample is:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  COUCHDB_USER: YWRtaW4=
  COUCHDB_PASSWORD: MWYyZDFlMmU2N2Rm
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      envFrom:
        - secretRef:
            name: mysecret
      env:
        - name: COUCHDB_URL
          value: http://$(COUCHDB_USER):$(COUCHDB_PASSWORD)rest-of-the-url
You can confirm the output with:
$ kubectl exec -it secret-env-pod bash
root@secret-env-pod:/data# env | grep COUCHDB
COUCHDB_URL=http://admin:1f2d1e2e67dfrest-of-the-url
COUCHDB_PASSWORD=1f2d1e2e67df
COUCHDB_USER=admin
In your case, the YAML for the container is:
containers:
  - name: {{ template "myapp.name" . }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: Always
    envFrom:
      - secretRef:
          name: {{ .Release.Name }}-secrets
    env:
      - name: COUCHDB_URL
        value: http://$(COUCHDB_USER):$(COUCHDB_PASSWORD)@{{ .Release.Name }}-couchdb:5984

Host specific volumes in Kubernetes manifests

I am fairly sure this isn't possible, but I wanted to check.
I am using Kubernetes stateful sets, so my hosts get obvious hostnames.
I'd like them to provision a hostPath mount that is mapped to their hostname.
An example helm chart that I'm using might look like this:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: app
  namespace: '{{ .Values.name }}'
  labels:
    chart: '{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}'
spec:
  serviceName: "app"
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: app
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}/{{ .Values.image.version}}"
        imagePullPolicy: '{{ .Values.image.pullPolicy }}'
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - containerPort: {{ .Values.baseport | add 80 }}
          name: app
        volumeMounts:
        - mountPath: /NAS/$(POD_NAME)
          name: store
          readOnly: true
      volumes:
      - name: store
        hostPath:
          path: /NAS/$(POD_NAME)
Essentially, instead of hardcoding a volume, I'd like to have some kind of dynamic variable as the path. I don't mind using helm or the downward API for this, but ideally it would work when I scale the stateful set outwards.
Is there any way of doing this? All my reading of the docs suggests it's not... :(
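One direction that may get close, assuming a cluster version where subPathExpr is available: unlike hostPath.path, subPathExpr does support $(VAR) expansion from downward-API environment variables, so the hostPath can stay fixed at the parent directory while each pod mounts its own subdirectory. A sketch reusing the POD_NAME variable already defined above:
        volumeMounts:
        - mountPath: /NAS
          name: store
          subPathExpr: $(POD_NAME)   # resolves per pod, e.g. app-0, app-1
          readOnly: true
      volumes:
      - name: store
        hostPath:
          path: /NAS
Inside the container the path stays fixed at /NAS, but each replica of the stateful set sees the host's /NAS/<pod-name> directory, which also keeps working as the set is scaled out.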