Right now I'm deploying applications on k8s using YAML files, like the one below:
apiVersion: v1
kind: Service
metadata:
  name: serviceA
  namespace: flow
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: serviceA
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: serviceA-ingress
  namespace: flow
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - serviceA.xyz.com
      secretName: letsencrypt-prod
  rules:
    - host: serviceA.xyz.com
      http:
        paths:
          - path: /
            backend:
              serviceName: serviceA
              servicePort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: serviceA-config
  namespace: flow
data:
  application-dev.properties: |
    spring.application.name=serviceA-main
    server.port=8080
    logging.level.org.springframework.jdbc.core=debug
    lead.pg.url=serviceB.flow.svc:8080/lead
    task.pg.url=serviceB.flow.svc:8080/task
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: serviceA-deployment
  namespace: flow
spec:
  selector:
    matchLabels:
      app: serviceA
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: serviceA
    spec:
      containers:
        - name: serviceA
          image: xyzaccount.dkr.ecr.eu-west-1.amazonaws.com/flow/test:serviceA-v1
          command: [ "java", "-jar", "-agentlib:jdwp=transport=dt_socket,address=9098,server=y,suspend=n", "serviceA-service.jar", "--spring.config.additional-location=/config/application-dev.properties" ]
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: serviceA-application-config
              mountPath: "/config"
              readOnly: true
      volumes:
        - name: serviceA-application-config
          configMap:
            name: serviceA-config
            items:
              - key: application-dev.properties
                path: application-dev.properties
      restartPolicy: Always
Is there any automated way to convert this YAML into a Helm chart, or any other workaround or sample template I can use to achieve this?
Even if there is no generic way, I would like to know how to convert this specific YAML into a Helm chart.
I also want to know which things I should keep configurable (i.e. turn into variables), since I can't just drop these resources into a templates folder and call it a Helm chart.
At heart a Helm chart is still just YAML, so to make this a chart, just drop that file under templates/ and add a Chart.yaml.
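A minimal Chart.yaml for that layout might look like this (the chart name and versions are placeholders), with the manifest from the question saved under mychart/templates/:
# mychart/Chart.yaml
apiVersion: v2
name: mychart
description: Chart wrapping the serviceA manifests
version: 0.1.0
appVersion: "1.0"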
There is no unambiguous way to map k8s YAML to a Helm chart, because the app/chart maintainer has to decide (see the sketch after this list):
which app parameters can be modified by chart users
which of these parameters are mandatory
what the default values are, etc.
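For instance, a mandatory parameter can be enforced with required while an optional one gets a default; a template fragment might look like this (the value names are illustrative, not from the question):
# templates/deployment.yaml (fragment)
image: {{ required "image.repository is required" .Values.image.repository }}:{{ .Values.image.tag | default "latest" }}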
So creating a Helm chart is a manual process, but it contains a lot of routine steps. For example, most chart creators want to:
remove the namespace from templates to set it with helm install -n
remove fields generated by k8s
add helm release name to resource name
preserve correct links between templated names
move some obvious parameters to values.yaml like:
container resources
service type and ports
configmap/secret values
image repo:tag
I've created a CLI called helmify to automate the steps listed above.
It reads a list of k8s objects from stdin and creates a helm chart from it.
You can install it with Homebrew (brew install arttor/tap/helmify) and then use it to generate a chart from a YAML file:
cat my-app.yaml | helmify mychart
or from a directory <my_directory> containing YAML files:
awk 'FNR==1 && NR!=1 {print "---"}{print}' /<my_directory>/*.yaml | helmify mychart
Both commands will create a mychart Helm chart directory from your k8s objects, similar to the helm create command.
Here is a chart generated by helmify from the yaml published in the question:
mychart
├── Chart.yaml
├── templates
│   ├── _helpers.tpl
│   ├── config.yaml
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── serviceA.yaml
└── values.yaml
# values.yaml
config:
  applicationDevProperties:
    lead:
      pg:
        url: serviceB.flow.svc:8080/lead
    logging:
      level:
        org:
          springframework:
            jdbc:
              core: debug
    server:
      port: "8080"
    spring:
      application:
        name: serviceA-main
    task:
      pg:
        url: serviceB.flow.svc:8080/task
deployment:
  replicas: 1
image:
  serviceA:
    repository: xyzaccount.dkr.ecr.eu-west-1.amazonaws.com/flow/test
    tag: serviceA-v1
serviceA:
  ports:
    - port: 8080
      targetPort: 8080
  type: ClusterIP
---
# templates/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "mychart.fullname" . }}-config
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
data:
  application-dev.properties: |
    spring.application.name={{ .Values.config.applicationDevProperties.spring.application.name | quote }}
    server.port={{ .Values.config.applicationDevProperties.server.port | quote }}
    logging.level.org.springframework.jdbc.core={{ .Values.config.applicationDevProperties.logging.level.org.springframework.jdbc.core | quote }}
    lead.pg.url={{ .Values.config.applicationDevProperties.lead.pg.url | quote }}
    task.pg.url={{ .Values.config.applicationDevProperties.task.pg.url | quote }}
---
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}-deployment
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.deployment.replicas }}
  selector:
    matchLabels:
      app: serviceA
      {{- include "mychart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app: serviceA
        {{- include "mychart.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - command:
            - java
            - -jar
            - -agentlib:jdwp=transport=dt_socket,address=9098,server=y,suspend=n
            - serviceA-service.jar
            - --spring.config.additional-location=/config/application-dev.properties
          image: {{ .Values.image.serviceA.repository }}:{{ .Values.image.serviceA.tag | default .Chart.AppVersion }}
          name: serviceA
          ports:
            - containerPort: 8080
          resources: {}
          volumeMounts:
            - mountPath: /config
              name: serviceA-application-config
              readOnly: true
      restartPolicy: Always
      volumes:
        - configMap:
            items:
              - key: application-dev.properties
                path: application-dev.properties
            name: {{ include "mychart.fullname" . }}-config
          name: serviceA-application-config
---
# templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ include "mychart.fullname" . }}-ingress
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: serviceA.xyz.com
      http:
        paths:
          - backend:
              serviceName: serviceA
              servicePort: 8080
            path: /
  tls:
    - hosts:
        - serviceA.xyz.com
      secretName: letsencrypt-prod
---
# templates/serviceA.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "mychart.fullname" . }}-serviceA
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  type: {{ .Values.serviceA.type }}
  selector:
    app: serviceA
    {{- include "mychart.selectorLabels" . | nindent 4 }}
  ports:
  {{- .Values.serviceA.ports | toYaml | nindent 2 -}}
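Once generated, chart users can override the extracted values per environment. A minimal sketch of an override file (the file name my-values.yaml and the new tag are illustrative) that could be passed with helm install mychart ./mychart -f my-values.yaml:
# my-values.yaml (hypothetical override file)
deployment:
  replicas: 2
image:
  serviceA:
    tag: serviceA-v2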
https://github.com/mailchannels/palinurus/tree/master
This git repo contains a Python script which will convert basic YAML files into Helm charts.
You can use the tool helmtrans: https://github.com/codeandcode0x/helmtrans . It can transform YAML into Helm charts automatically.
Example:
➜ helmtrans yamltohelm -p [source path] -o [output path]
Related
I have a Kubernetes cluster with Elasticsearch currently deployed.
The Elasticsearch coordinator node is accessible behind a service via a ClusterIP over HTTPS. It uses a self-signed TLS certificate.
I can retrieve the value of the CA:
kubectl get secret \
-n elasticsearch elasticsearch-coordinating-only-crt \
-o jsonpath="{.data.ca\.crt}" | base64 -d
-----BEGIN CERTIFICATE-----
MIIDIjCCAgqgAwIBAgIRANkAx51S
...
...
I need to provide this as a ca.crt to other app deployments.
Note: The Elasticsearch deployment is in an elasticsearch Kubernetes namespace. New deployments will be in different namespaces.
An example of this is a deployment of kafka that includes a kafka-connect-elasticsearch/ sink. The sink connector uses configuration such as:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "kafka.fullname" . }}-connect
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    app.kubernetes.io/component: connector
data:
  connect-standalone-custom.properties: |-
    bootstrap.servers={{ include "kafka.fullname" . }}-0.{{ include "kafka.fullname" . }}-headless.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}:{{ .Values.service.port }}
    key.converter.schemas.enable=false
    value.converter.schemas.enable=false
    offset.storage.file.filename=/tmp/connect.offsets
    offset.flush.interval.ms=10000
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    plugin.path=/usr/local/share/kafka/plugins
  elasticsearch.properties: |-
    name=elasticsearch-sink
    connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
    tasks.max=4
    topics=syslog,nginx
    key.ignore=true
    schema.ignore=true
    connection.url=https://elasticsearch-coordinating-only.elasticsearch:9200
    type.name=kafka-connect
    connection.username=elastic
    connection.password=xxxxxxxx
    elastic.security.protocol=SSL
    elastic.https.ssl.truststore.location=/etc/ssl/certs/elasticsearch-ca.crt
    elastic.https.ssl.truststore.type=PEM
Notice the elastic.https.ssl.truststore.location=/etc/ssl/certs/elasticsearch-ca.crt; that's the file I need to put inside the kafka-based container.
What's the optimal way to do that with Helm templates?
Currently I have a fork of https://github.com/bitnami/charts/tree/master/bitnami/kafka. It adds 3 new templates under templates/:
kafka-connect-elasticsearch-configmap.yaml
kafka-connect-svc.yaml
kafka-connect.yaml
The configmap is shown above. The kafka-connect.yaml Deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "kafka.fullname" . }}-connect
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    app.kubernetes.io/component: connector
spec:
  replicas: 1
  selector:
    matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
      app.kubernetes.io/component: connector
  template:
    metadata:
      labels: {{- include "common.labels.standard" . | nindent 8 }}
        app.kubernetes.io/component: connector
    spec:
      containers:
        - name: connect
          image: REDACTED.dkr.ecr.REDACTED.amazonaws.com/kafka-connect-elasticsearch
          imagePullPolicy: Always
          command:
            - /bin/bash
            - -ec
            - bin/connect-standalone.sh custom-config/connect-standalone-custom.properties custom-config/elasticsearch.properties
          ports:
            - name: connector
              containerPort: 8083
          volumeMounts:
            - name: configuration
              mountPath: /opt/bitnami/kafka/custom-config
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: configuration
          configMap:
            name: {{ include "kafka.fullname" . }}-connect
How can I modify these Kafka Helm charts so that they retrieve the value of kubectl get secret -n elasticsearch elasticsearch-coordinating-only-crt -o jsonpath="{.data.ca\.crt}" | base64 -d and write its content to /etc/ssl/certs/elasticsearch-ca.crt?
Got this working and learned a few things in the process:
Secret resources reside in a namespace, and Secrets can only be referenced by Pods in that same namespace (ref). Therefore, I switched to using a shared namespace for elasticsearch + kafka.
The secret can be used in a straightforward way as documented at https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets. This is not a Helm-specific feature but a core Kubernetes one.
In my case this looked like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "kafka.fullname" . }}-connect
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    app.kubernetes.io/component: connector
spec:
  replicas: 1
  selector:
    matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
      app.kubernetes.io/component: connector
  template:
    metadata:
      labels: {{- include "common.labels.standard" . | nindent 8 }}
        app.kubernetes.io/component: connector
    spec:
      containers:
        - name: connect
          image: REDACTED.dkr.ecr.REDACTED.amazonaws.com/kafka-connect-elasticsearch
          imagePullPolicy: Always
          command:
            - /bin/bash
            - -ec
            - bin/connect-standalone.sh custom-config/connect-standalone-custom.properties custom-config/elasticsearch.properties
          ports:
            - name: connector
              containerPort: 8083
          volumeMounts:
            - name: configuration
              mountPath: /opt/bitnami/kafka/custom-config
            - name: ca
              mountPath: /etc/ssl/certs
              readOnly: true
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: configuration
          configMap:
            name: {{ include "kafka.fullname" . }}-connect
        - name: ca
          secret:
            secretName: elasticsearch-coordinating-only-crt
This gets the kafka-connect pod up and running, and I can validate the certs are written there also:
$ kubectl exec -it -n elasticsearch kafka-connect-c4f4d7dbd-wbxfq \
-- ls -1 /etc/ssl/certs
ca.crt
tls.crt
tls.key
I'm trying to implement a couple of subcharts in our CI/CD pipeline, but my helm upgrade --install command keeps returning the error message:
Release "testing" does not exists. Installing now.
Error: no objects visisted
It seems like this is a pretty general error message so I'm not quite sure what it's pointing to. Any suggestions/hints/tips etc would be greatly appreciated.
The folder structure of my helm directory is as follows:
deployment
└── helm
    └── testing
        ├── charts
        │   ├── application
        │   │   ├── templates
        │   │   │   └── deployment.yaml
        │   │   ├── .helmignore
        │   │   ├── Chart.yaml
        │   │   └── value.yaml
        │   └── configuration
        │       ├── templates
        │       │   ├── configmap.yaml
        │       │   └── secrets.yaml
        │       ├── .helmignore
        │       ├── Chart.yaml
        │       └── values.yaml
        ├── Chart.yaml
        └── values.yaml
Dependencies are defined as follows:
testing/Chart.yaml
apiVersion: v2
name: testing
description: subchart demo
type: application
version: 0.1.0
appVersion: "1.16.0"
testing/values.yaml
application:
  enabled: true
# disable configuration for easier debugging
configuration:
  enabled: false
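For the enabled flags above to actually control whether a subchart is rendered, the parent testing/Chart.yaml would normally declare the subcharts as dependencies with matching condition fields. A sketch of what that usually looks like (versions assumed to match the subcharts shown below):
# testing/Chart.yaml (dependencies fragment, assumed)
dependencies:
  - name: application
    version: 0.1.0
    condition: application.enabled
  - name: configuration
    version: 0.1.0
    condition: configuration.enabled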
configuration/Chart.yaml
apiVersion: v2
name: configuration
description: subchart demo
type: application
version: 0.1.0
appVersion: "1.16.0"
application/Chart.yaml
apiVersion: v2
name: application
description: subchart demo
type: application
version: 0.1.0
appVersion: "1.16.0"
application/deployment.yaml
{{- range $k, $v := .Values.region }}
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  labels:
    app: {{ $.Release.Name }}-{{ .countryCode | lower }}-deployment
  name: {{ $.Release.Name }}-{{ .countryCode | lower }}-deployment
  namespace: {{ $.Values.global.namespace }}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ $.Release.Name }}-{{ .countryCode | lower }}-deployment
    spec:
      containers:
        - image: {{ $.Values.image_name }}
          imagePullPolicy: Always
          name: {{ $.Release.Name }}-{{ .countryCode | lower }}-deployment
---
{{- end }}
application/values.yaml
image_name: image_name
config:
  enabled: true
region:
  - countryCode: US
volumeMounts:
  - mountPath: /app/config
    name: app-config-volume
volumes:
  - name: app-config-volume
    configMap:
      defaultMode: 420
      name: app-configmap
Finally below is the command I'm running in the pipeline:
helm upgrade --install testing deployment/helm/testing -f deployment/helm/testing/values.yaml --set known_hosts="***" --set image_name=$DOCKER_TAG
So, for example, I have:
database:
  name: x-a2d9f4
  replicaCount: 1
  repository: mysql
  tag: 5.7
  pullPolicy: IfNotPresent
  tier: database
app:
  name: x-576a77
  replicaCount: 1
  repository: wordpress
  tag: 5.2-php7.3
  pullPolicy: IfNotPresent
  tier: frontend
global:
  namespace: x-c0ecdb9f
env:
  name: WORDPRESS_DB_HOST
  value:
and I want to do something like this
env:
  name: WORDPRESS_DB_HOST
  value: {{ .Values.database.name | lower }}
All these are examples from the same values.yaml
Is this possible in Helm?
Yes, you can achieve this using the 'tpl' function
The tpl function allows developers to evaluate strings as templates inside a template. This is useful to pass a template string as a value to a chart or render external configuration files. Syntax: {{ tpl TEMPLATE_STRING VALUES }}
values.yaml
database:
  name: x-a2d9f4
env:
  name: WORDPRESS_DB_HOST
  value: "{{ .Values.database.name | upper }}"
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  some: {{ tpl .Values.env.value . }}
output:
> helm template .
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-configmap
data:
  some: X-A2D9F4
I am trying to set the Pod name, i.e. via name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod in the deployment.yaml below:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ template "project1234.name" . }}
    chart: {{ template "project1234.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  name: {{ template "project1234.module5678.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "project1234.name" . }}
  template:
    metadata:
      labels:
        app: {{ template "project1234.name" . }}
    spec:
      containers:
        - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
          name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod
          ports:
            - containerPort: 1234
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
I am expecting the pod name to be:
pod/project1234-module5678-pod
Instead, the resulting Pod name is:
pod/chartname-project1234-module5678-dc7db787-skqvv
...where (in my understanding):
chartname is from: helm install --name chartname -f values.yaml .
project1234 is from:
# Chart.yaml
apiVersion: v1
appVersion: "1.0"
description: project1234 Helm chart for Kubernetes
name: project1234
version: 0.1.0
module5678 is from:
# values.yaml
rbac:
  create: true
serviceAccounts:
  module5678:
    create: true
    name:
image:
  name: <image location>
  tag: 1.5
  pullSecret: <pull secret>
gitlab:
  secretName: <secret name>
  username: foo
  password: bar
module5678:
  enabled: true
  name: module5678
ingress:
  enabled: true
replicaCount: 1
resources: {}
I've tried changing name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod to a plain string value like "podname1234" and it isn't honored. I even tried removing the name setting entirely, and the resulting pod name remains the same.
Pods created from a Deployment always have a generated name based on the Deployment's name (and also the name of the intermediate ReplicaSet, if you go off and look for it). You can't override it.
Given the YAML you've shown, I'd expect that this fragment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "project1234.module5678.fullname" . }}
expands out to a Deployment name of chartname-project1234-module5678; the remaining bits are added in by the ReplicaSet and then the Pod itself.
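Laid out, the naming chain from the question looks like this (the hash and suffix parts are generated by Kubernetes, not by the chart):
# Deployment: chartname-project1234-module5678
# ReplicaSet: chartname-project1234-module5678-dc7db787        (Deployment name + pod-template hash)
# Pod:        chartname-project1234-module5678-dc7db787-skqvv  (ReplicaSet name + random suffix)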
If you do look up the Pod and run kubectl describe pod chartname-project1234-module5678-dc7db787-skqvv, you will probably see that it has a single container with the expected name project1234-module5678-pod. Pretty much the only time you need the container name is when you run kubectl logs (or, more rarely, kubectl exec) against a multi-container pod; if you are in that case, you'll appreciate having a shorter name, and since container names are always scoped to the specific pod in which they appear, there's nothing wrong with using a short fixed name here:
spec:
  containers:
    - name: container
I'm using a kubernetes ConfigMap that contains database configurations for an app and there is a secret that has the database password.
I need to use this secret in the ConfigMap. When I try to add an environment variable in the ConfigMap and set its value in the pod deployment from the secret, I'm not able to connect to MySQL with the password, because the value in the ConfigMap stays as the literal string of the variable.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  APP_CONFIG: |
    port: 8080
    databases:
      default:
        connector: mysql
        host: "mysql"
        port: "3306"
        user: "root"
        password: "$DB_PASSWORD"
and the deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  labels:
    app: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: app
          image: simple-app-image
          ports:
            - name: "8080"
              containerPort: 8080
          env:
            - name: APP_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: APP_CONFIG
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "mysql-secret"
                  key: "mysql-root-password"
Note: the secret exists, and I'm able to get the "mysql-root-password" value and use it to log in to the database.
Kubernetes can't make that substitution for you; you should do it with a shell in the entrypoint of the container.
Here is a working example. I modify the default entrypoint to create a new variable with that substitution; after this command you should add the desired entrypoint.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  labels:
    app: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: app
          image: simple-app-image
          command:
            - /bin/bash
            - -c
          args:
            - "NEW_APP_CONFIG=$(echo $APP_CONFIG | envsubst) && echo $NEW_APP_CONFIG && <INSERT IMAGE ENTRYPOINT HERE>"
          ports:
            - name: "app"
              containerPort: 8080
          env:
            - name: APP_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: APP_CONFIG
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "mysql-secret"
                  key: "mysql-root-password"
You could do something like this in Helm:
{{- define "getValueFromSecret" }}
{{- $len := (default 16 .Length) | int -}}
{{- $obj := (lookup "v1" "Secret" .Namespace .Name).data -}}
{{- if $obj }}
{{- index $obj .Key | b64dec -}}
{{- else -}}
{{- randAlphaNum $len -}}
{{- end -}}
{{- end }}
Then you could do something like this in configmap:
{{- include "getValueFromSecret" (dict "Namespace" .Release.Namespace "Name" "<secret_name>" "Length" 10 "Key" "<key>") -}}
The secret should already be present when deploying, or you can control the order of deployment using https://github.com/vmware-tanzu/carvel-kapp-controller
I would transform the whole configMap into a secret and deploy the database password directly in there.
Then you can mount the secret as a file to a volume and use it like a regular config file in the container.
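A minimal sketch of that approach, reusing the app config from the question (the Secret name app-config-secret, the file name, and the mount path are illustrative):
apiVersion: v1
kind: Secret
metadata:
  name: app-config-secret
type: Opaque
stringData:
  app-config.yaml: |
    port: 8080
    databases:
      default:
        connector: mysql
        host: "mysql"
        port: "3306"
        user: "root"
        password: "actual-db-password"
---
# Deployment pod spec fragment: mount the Secret as a regular config file
spec:
  containers:
    - name: app
      image: simple-app-image
      volumeMounts:
        - name: app-config
          mountPath: /etc/app
          readOnly: true
  volumes:
    - name: app-config
      secret:
        secretName: app-config-secret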