Import data to config map from kubernetes secret - kubernetes

I'm using a Kubernetes ConfigMap that contains database configurations for an app, and there is a Secret that holds the database password.
I need to use this Secret in the ConfigMap, so I tried adding an environment variable to the ConfigMap and setting its value in the pod deployment from the Secret. But I'm not able to connect to MySQL with the password, because the value in the ConfigMap keeps the literal string of the variable.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  APP_CONFIG: |
    port: 8080
    databases:
      default:
        connector: mysql
        host: "mysql"
        port: "3306"
        user: "root"
        password: "$DB_PASSWORD"
and the deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  labels:
    app: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: app
        image: simple-app-image
        ports:
        - name: "8080"
          containerPort: 8080
        env:
        - name: APP_CONFIG
          valueFrom:
            configMapKeyRef:
              name: config
              key: APP_CONFIG
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: "mysql-secret"
              key: "mysql-root-password"
Note: the Secret exists, and I'm able to get the "mysql-root-password" value and use it to log in to the database.

Kubernetes can't make that substitution for you; you have to do it with a shell in the container's entrypoint.
Here is a working example. It modifies the default entrypoint to create a new variable with that substitution applied; after this command you should chain the image's desired entrypoint.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  labels:
    app: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: app
        image: simple-app-image
        command:
        - /bin/bash
        - -c
        args:
        # Quoting $APP_CONFIG preserves the newlines of the multi-line config.
        - 'NEW_APP_CONFIG=$(echo "$APP_CONFIG" | envsubst) && echo "$NEW_APP_CONFIG" && <INSERT IMAGE ENTRYPOINT HERE>'
        ports:
        - name: "app"
          containerPort: 8080
        env:
        - name: APP_CONFIG
          valueFrom:
            configMapKeyRef:
              name: config
              key: APP_CONFIG
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: "mysql-secret"
              key: "mysql-root-password"
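As an aside on what envsubst does here: it scans its input for $VAR references and replaces them with the values of the corresponding environment variables. A minimal illustration of the same substitution in Python (using the stdlib string.Template; this is just to show the mechanism, not part of the deployment):

```python
import os
from string import Template

# The ConfigMap value as the container receives it in $APP_CONFIG:
# "$DB_PASSWORD" is still a literal placeholder at this point.
app_config = 'password: "$DB_PASSWORD"'

# The secret value that secretKeyRef injects into the environment.
os.environ["DB_PASSWORD"] = "s3cret"

# envsubst-style substitution: replace $VAR references with env values.
new_app_config = Template(app_config).substitute(os.environ)
print(new_app_config)  # password: "s3cret"
```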

You could do something like this in HELM:
{{- define "getValueFromSecret" }}
{{- $len := (default 16 .Length) | int -}}
{{- $obj := (lookup "v1" "Secret" .Namespace .Name).data -}}
{{- if $obj }}
{{- index $obj .Key | b64dec -}}
{{- else -}}
{{- randAlphaNum $len -}}
{{- end -}}
{{- end }}
Then you could do something like this in the ConfigMap template:
{{- include "getValueFromSecret" (dict "Namespace" .Release.Namespace "Name" "<secret_name>" "Length" 10 "Key" "<key>") -}}
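Put together, the helper can be used in a ConfigMap template along these lines (a sketch; the ConfigMap name app-config and the secret/key names are illustrative placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  db-password: {{ include "getValueFromSecret" (dict "Namespace" .Release.Namespace "Name" "mysql-secret" "Length" 16 "Key" "mysql-root-password") | quote }}
```

Note that lookup returns an empty result during helm template and --dry-run (there is no cluster to query), so the randAlphaNum fallback is taken in those cases.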
The Secret must already be present at deploy time; alternatively, you can control the order of deployment using https://github.com/vmware-tanzu/carvel-kapp-controller

I would transform the whole ConfigMap into a Secret and put the database password directly in there.
Then you can mount the Secret as a file into a volume and use it like a regular config file in the container.
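A minimal sketch of that approach, reusing the config from the question (the Secret name, file name, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-config
stringData:
  app-config.yaml: |
    port: 8080
    databases:
      default:
        connector: mysql
        host: "mysql"
        port: "3306"
        user: "root"
        password: "the-real-password"
---
# In the pod spec: mount the Secret as a file instead of using env vars.
spec:
  containers:
  - name: app
    image: simple-app-image
    volumeMounts:
    - name: app-config
      mountPath: /etc/app
      readOnly: true
  volumes:
  - name: app-config
    secret:
      secretName: app-config
```

The app then reads /etc/app/app-config.yaml like a regular config file.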

Related

How can I reference env variables in Job from the values.yaml file

I have a values file whose values I would like to reference in a Job; the values are then used in a ConfigMap. I get the error: "ERROR: zero-length delimited identifier at or near """"",
which means the values cannot be found. Does anyone have any idea how to solve this?
values.yaml
env:
  - name: rds_db_name
    value: test
  - name: rds_host_name
    valueFrom:
      secretKeyRef:
        name: my-pg-db
        key: endpoint
Job.yaml
kind: Job
metadata:
spec:
  template:
    spec:
      volumes:
        configMap:
      containers:
        command:
        volumeMounts:
        env:
          {{- toYaml .Values.env | nindent 12}}
ConfigMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
data:
  psql << EOF
  SELECT 'CREATE DATABASE "$rds_db_name"' WHERE NOT EXISTS (SELECT FROM pg_database
  WHERE datname = '$rds_db_name'); \gexec

How to set the value in envFrom as the value of env?

I have a configMap that stores a json file:
apiVersion: v1
data:
  config.json: |
{{- toPrettyJson $.Values.serviceConfig | nindent 4 }}
kind: ConfigMap
metadata:
  name: service-config
  namespace: {{ .Release.Namespace }}
In my deployment.yaml, I use a volume and envFrom:
spec:
  ... ...
  volumes:
  - name: config-vol
    configMap:
      name: service-config
  containers:
  - name: {{ .Chart.Name }}
    envFrom:
    - configMapRef:
        name: service-config
    ... ...
    volumeMounts:
    - mountPath: /src/config
      name: config-vol
After I deployed the Helm chart and ran kubectl describe pods, I got this:
Environment Variables from:
  service-config  ConfigMap  Optional: false
I wonder how I can get/use this service-config in my code? My values.yaml can extract values under env, but I don't know how to extract the value of the ConfigMap. Is there a way I can expose this service-config, or the JSON file stored in it, as an env variable in values.yaml? Thank you in advance!

helm values.yaml - use value from another node

So, for example, I have:
database:
  name: x-a2d9f4
  replicaCount: 1
  repository: mysql
  tag: 5.7
  pullPolicy: IfNotPresent
  tier: database
app:
  name: x-576a77
  replicaCount: 1
  repository: wordpress
  tag: 5.2-php7.3
  pullPolicy: IfNotPresent
  tier: frontend
global:
  namespace: x-c0ecdb9f
env:
  name: WORDPRESS_DB_HOST
  value:
and I want to do something like this
env:
  name: WORDPRESS_DB_HOST
  value: {{ .Values.database.name | lower }}
All these are examples from the same values.yaml.
Is this possible in Helm?
Yes, you can achieve this using the tpl function.
The tpl function allows developers to evaluate strings as templates inside a template. This is useful for passing a template string as a value to a chart or rendering external configuration files. Syntax: {{ tpl TEMPLATE_STRING VALUES }}
values.yaml
database:
  name: x-a2d9f4
env:
  name: WORDPRESS_DB_HOST
  value: "{{ .Values.database.name | upper }}"
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  some: {{ tpl .Values.env.value . }}
output:
> helm template .
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-configmap
data:
  some: X-A2D9F4

How to convert k8s yaml to helm chart

Right now I'm deploying applications on k8s using YAML files, like the one below:
apiVersion: v1
kind: Service
metadata:
  name: serviceA
  namespace: flow
spec:
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: serviceA
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: serviceA-ingress
  namespace: flow
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - serviceA.xyz.com
    secretName: letsencrypt-prod
  rules:
  - host: serviceA.xyz.com
    http:
      paths:
      - path: /
        backend:
          serviceName: serviceA
          servicePort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: serviceA-config
  namespace: flow
data:
  application-dev.properties: |
    spring.application.name=serviceA-main
    server.port=8080
    logging.level.org.springframework.jdbc.core=debug
    lead.pg.url=serviceB.flow.svc:8080/lead
    task.pg.url=serviceB.flow.svc:8080/task
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: serviceA-deployment
  namespace: flow
spec:
  selector:
    matchLabels:
      app: serviceA
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: serviceA
    spec:
      containers:
      - name: serviceA
        image: xyzaccount.dkr.ecr.eu-west-1.amazonaws.com/flow/test:serviceA-v1
        command: [ "java", "-jar", "-agentlib:jdwp=transport=dt_socket,address=9098,server=y,suspend=n", "serviceA-service.jar", "--spring.config.additional-location=/config/application-dev.properties" ]
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: serviceA-application-config
          mountPath: "/config"
          readOnly: true
      volumes:
      - name: serviceA-application-config
        configMap:
          name: serviceA-config
          items:
          - key: application-dev.properties
            path: application-dev.properties
      restartPolicy: Always
Is there any automated way to convert this YAML into a Helm chart?
Or any other workaround or sample template that I can use to achieve this?
Even if there is no generic way, I would like to know how to convert this specific YAML into a Helm chart.
I also want to know which things I should keep configurable (I mean, convert into variables), as I can't just put this resource YAML into a separate templates folder and call it a Helm chart.
At heart a Helm chart is still just YAML, so to make that a chart, just drop the file under templates/ and add a Chart.yaml.
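For instance, a minimal Chart.yaml next to the templates/ directory could look like this (name and versions are placeholders; apiVersion: v2 is the Helm 3 chart format):

```yaml
apiVersion: v2
name: mychart
description: Chart wrapping the existing serviceA manifests
version: 0.1.0
appVersion: "1.0"
```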
There is no unambiguous way to map k8s YAML to a Helm chart, because the app/chart maintainer has to decide:
which app parameters can be modified by chart users
which of these parameters are mandatory
what the default values are, etc.
So creating a Helm chart is a manual process. But it contains a lot of routine steps. For example, most chart creators want to:
remove the namespace from templates to set it with helm install -n
remove fields generated by k8s
add helm release name to resource name
preserve correct links between templated names
move some obvious parameters to values.yaml like:
container resources
service type and ports
configmap/secret values
image repo:tag
I've created a CLI called helmify to automate the steps listed above.
It reads a list of k8s objects from stdin and creates a Helm chart from them.
You can install it with Homebrew: brew install arttor/tap/helmify. Then use it to generate a chart from a YAML file:
cat my-app.yaml | helmify mychart
or from a directory <my_directory> containing YAMLs:
awk 'FNR==1 && NR!=1 {print "---"}{print}' /<my_directory>/*.yaml | helmify mychart
Both commands will create a mychart Helm chart directory from your k8s objects, similar to the helm create command.
Here is a chart generated by helmify from the yaml published in the question:
mychart
├── Chart.yaml
├── templates
│   ├── _helpers.tpl
│   ├── config.yaml
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── serviceA.yaml
└── values.yaml
#values.yaml
config:
  applicationDevProperties:
    lead:
      pg:
        url: serviceB.flow.svc:8080/lead
    logging:
      level:
        org:
          springframework:
            jdbc:
              core: debug
    server:
      port: "8080"
    spring:
      application:
        name: serviceA-main
    task:
      pg:
        url: serviceB.flow.svc:8080/task
deployment:
  replicas: 1
image:
  serviceA:
    repository: xyzaccount.dkr.ecr.eu-west-1.amazonaws.com/flow/test
    tag: serviceA-v1
serviceA:
  ports:
  - port: 8080
    targetPort: 8080
  type: ClusterIP
---
# templates/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "mychart.fullname" . }}-config
  labels:
  {{- include "mychart.labels" . | nindent 4 }}
data:
  application-dev.properties: |
    spring.application.name={{ .Values.config.applicationDevProperties.spring.application.name | quote }}
    server.port={{ .Values.config.applicationDevProperties.server.port | quote }}
    logging.level.org.springframework.jdbc.core={{ .Values.config.applicationDevProperties.logging.level.org.springframework.jdbc.core | quote }}
    lead.pg.url={{ .Values.config.applicationDevProperties.lead.pg.url | quote }}
    task.pg.url={{ .Values.config.applicationDevProperties.task.pg.url | quote }}
---
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}-deployment
  labels:
  {{- include "mychart.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.deployment.replicas }}
  selector:
    matchLabels:
      app: serviceA
    {{- include "mychart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app: serviceA
      {{- include "mychart.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - command:
        - java
        - -jar
        - -agentlib:jdwp=transport=dt_socket,address=9098,server=y,suspend=n
        - serviceA-service.jar
        - --spring.config.additional-location=/config/application-dev.properties
        image: {{ .Values.image.serviceA.repository }}:{{ .Values.image.serviceA.tag | default .Chart.AppVersion }}
        name: serviceA
        ports:
        - containerPort: 8080
        resources: {}
        volumeMounts:
        - mountPath: /config
          name: serviceA-application-config
          readOnly: true
      restartPolicy: Always
      volumes:
      - configMap:
          items:
          - key: application-dev.properties
            path: application-dev.properties
          name: {{ include "mychart.fullname" . }}-config
        name: serviceA-application-config
---
# templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ include "mychart.fullname" . }}-ingress
  labels:
  {{- include "mychart.labels" . | nindent 4 }}
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: serviceA.xyz.com
    http:
      paths:
      - backend:
          serviceName: serviceA
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - serviceA.xyz.com
    secretName: letsencrypt-prod
---
# templates/serviceA.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "mychart.fullname" . }}-serviceA
  labels:
  {{- include "mychart.labels" . | nindent 4 }}
spec:
  type: {{ .Values.serviceA.type }}
  selector:
    app: serviceA
  {{- include "mychart.selectorLabels" . | nindent 4 }}
  ports:
  {{- .Values.serviceA.ports | toYaml | nindent 2 -}}
https://github.com/mailchannels/palinurus/tree/master
This git repo contains a Python script which will convert basic YAMLs to Helm charts.
You can also use the tool helmtrans: https://github.com/codeandcode0x/helmtrans . It can transform YAML to Helm charts automatically.
example:
➜ helmtrans yamltohelm -p [source path] -o [output path]

SchedulerPredicates failed due to PersistentVolumeClaim is not bound

I am using Helm with Kubernetes on Google Cloud Platform.
I get the following error for my Postgres deployment:
SchedulerPredicates failed due to PersistentVolumeClaim is not bound
It looks like it can't connect to the persistent storage, but I don't understand why, because the persistent storage loaded fine.
I have tried deleting the Helm release completely, then on google-cloud-console > compute-engine > disks I deleted all persistent disks, and finally tried to install from the Helm chart again, but the Postgres deployment still doesn't connect to the PVC.
My database configuration:
{{- $serviceName := "db-service" -}}
{{- $deploymentName := "db-deployment" -}}
{{- $pvcName := "db-disk-claim" -}}
{{- $pvName := "db-disk" -}}
apiVersion: v1
kind: Service
metadata:
  name: {{ $serviceName }}
  labels:
    name: {{ $serviceName }}
    env: production
spec:
  type: LoadBalancer
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
    name: http
  selector:
    name: {{ $deploymentName }}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: {{ $deploymentName }}
  labels:
    name: {{ $deploymentName }}
    env: production
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: {{ $deploymentName }}
        env: production
    spec:
      containers:
      - name: postgres-database
        image: postgres:alpine
        imagePullPolicy: Always
        env:
        - name: POSTGRES_USER
          value: test-user
        - name: POSTGRES_PASSWORD
          value: test-password
        - name: POSTGRES_DB
          value: test_db
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        ports:
        - containerPort: 5432
        volumeMounts:
        - mountPath: "/var/lib/postgresql/data/pgdata"
          name: {{ $pvcName }}
      volumes:
      - name: {{ $pvcName }}
        persistentVolumeClaim:
          claimName: {{ $pvcName }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ $pvcName }}
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      name: {{ $pvName }}
      env: production
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.gcePersistentDisk }}
  labels:
    name: {{ $pvName }}
    env: production
  annotations:
    volume.beta.kubernetes.io/mount-options: "discard"
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    fsType: "ext4"
    pdName: {{ .Values.gcePersistentDisk }}
Is this config for Kubernetes correct? I have read the documentation and it looks like this should work. I'm new to Kubernetes and Helm, so any advice is appreciated.
EDIT:
I have added a PersistentVolume and linked it to the PersistentVolumeClaim to see if that helps, but it seems that when I do this, the PersistentVolumeClaim status becomes stuck in "pending" (resulting in the same issue as before).
You don't have a bound PV for this claim. What storage do you use for this claim? You need to specify it in the PVC file.
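For example, the claim can name its storage explicitly, either through a storageClassName for dynamic provisioning or by binding to a specific pre-created PV (a sketch using the names from the question; the class name standard is a common GKE default, not something taken from the chart):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-disk-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # Option 1: let the cluster provision a disk from a storage class.
  storageClassName: standard
  # Option 2: bind directly to a pre-created PV by name instead.
  # volumeName: db-disk
```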