I'm trying to implement a couple of subcharts in our CI/CD pipeline, but my helm upgrade --install command keeps returning the error message:
Release "testing" does not exist. Installing it now.
Error: no objects visited
This seems like a pretty general error message, so I'm not quite sure what it's pointing to. Any suggestions/hints/tips etc. would be greatly appreciated.
The folder structure of my helm directory is as follows:
deployment
|- helm
   |- testing
      |- charts
      |  |- application
      |  |  |- templates
      |  |  |  |- deployment.yaml
      |  |  |- .helmignore
      |  |  |- Chart.yaml
      |  |  |- value.yaml
      |  |- configuration
      |     |- templates
      |     |  |- configmap.yaml
      |     |  |- secrets.yaml
      |     |- .helmignore
      |     |- Chart.yaml
      |     |- values.yaml
      |- Chart.yaml
      |- values.yaml
Dependencies are defined as follows:
testing/Chart.yaml
apiVersion: v2
name: testing
description: subchart demo
type: application
version: 0.1.0
appVersion: "1.16.0"
testing/values.yaml
application:
  enabled: true
# disable configuration for easier debugging
configuration:
  enabled: false
configuration/Chart.yaml
apiVersion: v2
name: configuration
description: subchart demo
type: application
version: 0.1.0
appVersion: "1.16.0"
application/Chart.yaml
apiVersion: v2
name: application
description: subchart demo
type: application
version: 0.1.0
appVersion: "1.16.0"
application/deployment.yaml
{{- range $k, $v := .Values.region }}
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  labels:
    app: {{ $.Release.Name }}-{{ .countryCode | lower }}-deployment
  name: {{ $.Release.Name }}-{{ .countryCode | lower }}-deployment
  namespace: {{ $.Values.global.namespace }}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ $.Release.Name }}-{{ .countryCode | lower }}-deployment
    spec:
      containers:
        - image: {{ $.Values.image_name }}
          imagePullPolicy: Always
          name: {{ $.Release.Name }}-{{ .countryCode | lower }}-deployment
---
{{- end }}
application/values.yaml
image_name: image_name
config:
  enabled: true
region:
  - countryCode: US
volumeMounts:
  - mountPath: /app/config
    name: app-config-volume
volumes:
  - name: app-config-volume
    configMap:
      defaultMode: 420
      name: app-configmap
Finally below is the command I'm running in the pipeline:
helm upgrade --install testing deployment/helm/testing -f deployment/helm/testing/values.yaml --set known_hosts="***" --set image_name=$DOCKER_TAG
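One thing worth double-checking (this is an assumption about the intended setup, not something shown in the question): the application.enabled / configuration.enabled flags in the parent values.yaml only take effect if the parent Chart.yaml declares the subcharts as dependencies with matching condition fields, roughly like this:
# testing/Chart.yaml (sketch)
apiVersion: v2
name: testing
description: subchart demo
type: application
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: application
    version: 0.1.0
    condition: application.enabled
  - name: configuration
    version: 0.1.0
    condition: configuration.enabled
Since the subcharts already live under testing/charts/, no repository field should be needed; helm dependency list can confirm that Helm sees them.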
I am trying to use a constant in skaffold and to access it in a skaffold profile, for example:
export SOME_IP=199.99.99.99 && skaffold run -p dev
skaffold.yaml
...
deploy:
  helm:
    flags:
      global:
        - "--debug"
    releases:
      - name: ***
        chartPath: ***
        imageStrategy:
          helm:
            explicitRegistry: true
        createNamespace: true
        namespace: "***"
        setValueTemplates:
          SKAFFOLD_SOME_IP: "{{.SOME_IP}}"
In the dev.yaml profile I need to somehow access it,
something like:
{{ .Template.SKAFFOLD_SOME_IP }}, and it should be rendered as 199.99.99.99.
I tried to use the skaffold envTemplate and setValueTemplates fields, but could not get it to work, and could not find any example on the web.
Basically, I found a solution which I truly don't like, but it works:
In the dev profile (values.dev.yaml) I added a placeholder:
_anchors_:
  - &_IPAddr_01 "<IPAddr_01_TAG>" # will be replaced with SOME_IP
The <IPAddr_01_TAG> placeholder will be replaced with the SOME_IP constant, which becomes 199.99.99.99 when skaffold runs.
Now, to run skaffold, I do:
export SOME_IP=199.99.99.99
sed -i "s/<IPAddr_01_TAG>/$SOME_IP/g" values/values.dev.yaml
skaffold run -p dev
So after the above sed, the dev profile (values.dev.yaml) contains the SOME_IP constant instead of the placeholder:
_anchors_:
  - &_IPAddr_01 "199.99.99.99"
To use the SKAFFOLD_SOME_IP variable that you have set in your skaffold.yaml, you can write the chart template for a Kubernetes Deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
  labels:
    app: {{ .Chart.Name }}
spec:
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: {{ .Values.image }}
          env:
            - name: SKAFFOLD_SOME_IP
              value: "{{ .Values.SKAFFOLD_SOME_IP }}"
This will create an environment variable SKAFFOLD_SOME_IP in the Kubernetes pods, and you can read it from Go, for example, like this:
os.Getenv("SKAFFOLD_SOME_IP")
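If the chart also gets rendered outside skaffold (plain helm template or helm install), it can help to give that key a default in the chart's values.yaml so the template never renders an empty value. A minimal sketch matching the Deployment template above (the image value is a placeholder, not taken from the question):
# values.yaml (sketch)
replicaCount: 1
image: my-registry/my-app:latest   # placeholder image reference
SKAFFOLD_SOME_IP: "127.0.0.1"      # default; skaffold's setValueTemplates overrides it at deploy time
skaffold passes setValueTemplates to helm as --set-style overrides, so the value templated from SOME_IP in skaffold.yaml takes precedence over this default.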
I am trying to create an umbrella Helm chart for a complex solution that is made up of a few components. One of them is a database, for which I use the mariadb-galera chart. The problem I face is that I want to execute a Flyway migration once my DB is available, and I can't find a way to do it properly.
First, I want to use a version range, and I don't know how to make my hook match the DB subchart without hard-coding the full version.
Second, I recently added an alias for my subcharts, and I haven't been able to trigger the hook properly since: it just triggers at installation and fails again and again until the DB eventually becomes available.
My Chart.yaml looks a bit like this:
apiVersion: v2
name: myApp
description: umbrella chart
type: application
version: 0.1.0
appVersion: "0.1-dev"
dependencies:
  - name: "portal"
    version: "0.1-dev"
    alias: "portal"
  - name: "mariadb-galera"
    version: "~5.11"
    repository: "https://charts.bitnami.com/bitnami"
    alias: "database"
  # ...More dependencies...
My hook is defined as follows:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "myApp.fullname" . }}-migration
  labels:
    {{- include "myApp.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}-migration"
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "database-5.11.2"
    spec:
      containers:
        - name: flyway-migration
          image: flyway/flyway
          args:
            - "migrate"
            - "-password=$(DB_PASS)"
          volumeMounts:
            - name: migration-files
              mountPath: /flyway/sql/
            - name: flyway-conf
              mountPath: /flyway/conf/
          env:
            - name: DB_PASS
              valueFrom:
                secretKeyRef:
                  name: mariadb-secret
                  key: mariadb-password
      volumes:
        - name: migration-files
          configMap:
            name: migration-files
        - name: flyway-conf
          configMap:
            name: flyway-conf
Before using aliases, the helm.sh annotation used to look like:
helm.sh/chart: "mariadb-galera-5.11.2"
As you can see, it requires a full version, which I don't want to hard-code.
I tried to use something like:
{{ template ".Chart.name" .Subcharts.database }}
but it seems that it can't access the .Chart values of the subchart.
Is there something I missed?
I found a way to get the proper name for the dependency, thanks to variables embedded inside the package:
helm.sh/chart: "{{ template "common.names.chart" .Subcharts.database }}"
This produces the exact line I needed, but the job still starts without waiting for mariadb to be ready.
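One common workaround for the ordering problem (a sketch, not part of the original chart) is to add an initContainer to the hook Job that blocks until the database answers, so the Flyway container only starts once MariaDB is reachable. The service name {{ .Release.Name }}-database is an assumption based on the database alias; the actual name depends on the galera subchart's fullname template:
spec:
  template:
    spec:
      initContainers:
        - name: wait-for-db
          image: busybox:1.36
          command:
            - sh
            - -c
            - until nc -z {{ .Release.Name }}-database 3306; do echo "waiting for db"; sleep 5; done
      containers:
        - name: flyway-migration
          image: flyway/flyway
          # ...args, env, and volumeMounts as in the Job above...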
So, for example, I have:
database:
  name: x-a2d9f4
  replicaCount: 1
  repository: mysql
  tag: 5.7
  pullPolicy: IfNotPresent
  tier: database
app:
  name: x-576a77
  replicaCount: 1
  repository: wordpress
  tag: 5.2-php7.3
  pullPolicy: IfNotPresent
  tier: frontend
global:
  namespace: x-c0ecdb9f
env:
  name: WORDPRESS_DB_HOST
  value:
and I want to do something like this
env:
  name: WORDPRESS_DB_HOST
  value: {{ .Values.database.name | lower }}
All of these are examples from the same values.yaml.
Is this possible in Helm?
Yes, you can achieve this using the tpl function.
The tpl function allows developers to evaluate strings as templates inside a template. This is useful for passing a template string as a value to a chart or rendering external configuration files. Syntax: {{ tpl TEMPLATE_STRING VALUES }}
values.yaml
database:
  name: x-a2d9f4
env:
  name: WORDPRESS_DB_HOST
  value: "{{ .Values.database.name | upper }}"
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  some: {{ tpl .Values.env.value . }}
output:
> helm template .
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-configmap
data:
  some: X-A2D9F4
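Applied to the Deployment env block from the question, the same pattern would look like this (a sketch reusing the questioner's values; quote just keeps the rendered value a proper string):
# fragment of a container spec in templates/deployment.yaml
env:
  - name: {{ .Values.env.name }}
    value: {{ tpl .Values.env.value . | quote }}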
Right now I'm deploying applications on k8s using yaml files.
Like the one below:
apiVersion: v1
kind: Service
metadata:
  name: serviceA
  namespace: flow
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: serviceA
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: serviceA-ingress
  namespace: flow
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - serviceA.xyz.com
      secretName: letsencrypt-prod
  rules:
    - host: serviceA.xyz.com
      http:
        paths:
          - path: /
            backend:
              serviceName: serviceA
              servicePort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: serviceA-config
  namespace: flow
data:
  application-dev.properties: |
    spring.application.name=serviceA-main
    server.port=8080
    logging.level.org.springframework.jdbc.core=debug
    lead.pg.url=serviceB.flow.svc:8080/lead
    task.pg.url=serviceB.flow.svc:8080/task
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: serviceA-deployment
  namespace: flow
spec:
  selector:
    matchLabels:
      app: serviceA
  replicas: 1
  template:
    metadata:
      labels:
        app: serviceA
    spec:
      containers:
        - name: serviceA
          image: xyzaccount.dkr.ecr.eu-west-1.amazonaws.com/flow/test:serviceA-v1
          command: [ "java", "-jar", "-agentlib:jdwp=transport=dt_socket,address=9098,server=y,suspend=n", "serviceA-service.jar", "--spring.config.additional-location=/config/application-dev.properties" ]
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: serviceA-application-config
              mountPath: "/config"
              readOnly: true
      volumes:
        - name: serviceA-application-config
          configMap:
            name: serviceA-config
            items:
              - key: application-dev.properties
                path: application-dev.properties
      restartPolicy: Always
Is there any automated way to convert this YAML into a Helm chart?
Or is there any other workaround or sample template that I can use to achieve this?
Even if there is no generic way, I would like to know how to convert this specific YAML into a Helm chart.
I also want to know which things I should keep configurable (I mean convert into variables), as I can't just drop these resources as-is into a templates folder and call it a Helm chart.
At heart a Helm chart is still just YAML, so to make that a chart, just drop that file under templates/ and add a Chart.yaml.
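A minimal Chart.yaml for that could look like the following (a sketch; the chart name and versions are placeholders, not taken from the question):
apiVersion: v2
name: flow-services
description: Chart wrapping the existing flow manifests
type: application
version: 0.1.0
appVersion: "1.0.0"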
There is no unambiguous way to map k8s YAML to a Helm chart, because the app/chart maintainer has to decide:
which app parameters can be modified by chart users
which of these parameters are mandatory
what the default values are, etc.
So creating a Helm chart is a manual process. But it contains a lot of routine steps. For example, most chart creators want to:
remove the namespace from templates to set it with helm install -n
remove fields generated by k8s
add helm release name to resource name
preserve correct links between templated names
move some obvious parameters to values.yaml like:
container resources
service type and ports
configmap/secret values
image repo:tag
I've created a CLI called helmify to automate the steps listed above.
It reads a list of k8s objects from stdin and creates a Helm chart from them.
You can install it with brew: brew install arttor/tap/helmify. Then use it to generate a chart from a YAML file:
cat my-app.yaml | helmify mychart
or from a directory <my_directory> containing YAMLs:
awk 'FNR==1 && NR!=1 {print "---"}{print}' /<my_directory>/*.yaml | helmify mychart
Both commands will create a mychart Helm chart directory from your k8s objects, similar to the helm create command.
Here is a chart generated by helmify from the yaml published in the question:
mychart
├── Chart.yaml
├── templates
│   ├── _helpers.tpl
│   ├── config.yaml
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── serviceA.yaml
└── values.yaml
# values.yaml
config:
  applicationDevProperties:
    lead:
      pg:
        url: serviceB.flow.svc:8080/lead
    logging:
      level:
        org:
          springframework:
            jdbc:
              core: debug
    server:
      port: "8080"
    spring:
      application:
        name: serviceA-main
    task:
      pg:
        url: serviceB.flow.svc:8080/task
deployment:
  replicas: 1
image:
  serviceA:
    repository: xyzaccount.dkr.ecr.eu-west-1.amazonaws.com/flow/test
    tag: serviceA-v1
serviceA:
  ports:
    - port: 8080
      targetPort: 8080
  type: ClusterIP
---
# templates/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "mychart.fullname" . }}-config
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
data:
  application-dev.properties: |
    spring.application.name={{ .Values.config.applicationDevProperties.spring.application.name | quote }}
    server.port={{ .Values.config.applicationDevProperties.server.port | quote }}
    logging.level.org.springframework.jdbc.core={{ .Values.config.applicationDevProperties.logging.level.org.springframework.jdbc.core | quote }}
    lead.pg.url={{ .Values.config.applicationDevProperties.lead.pg.url | quote }}
    task.pg.url={{ .Values.config.applicationDevProperties.task.pg.url | quote }}
---
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}-deployment
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.deployment.replicas }}
  selector:
    matchLabels:
      app: serviceA
      {{- include "mychart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app: serviceA
        {{- include "mychart.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - command:
            - java
            - -jar
            - -agentlib:jdwp=transport=dt_socket,address=9098,server=y,suspend=n
            - serviceA-service.jar
            - --spring.config.additional-location=/config/application-dev.properties
          image: {{ .Values.image.serviceA.repository }}:{{ .Values.image.serviceA.tag | default .Chart.AppVersion }}
          name: serviceA
          ports:
            - containerPort: 8080
          resources: {}
          volumeMounts:
            - mountPath: /config
              name: serviceA-application-config
              readOnly: true
      restartPolicy: Always
      volumes:
        - configMap:
            items:
              - key: application-dev.properties
                path: application-dev.properties
            name: {{ include "mychart.fullname" . }}-config
          name: serviceA-application-config
---
# templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ include "mychart.fullname" . }}-ingress
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: serviceA.xyz.com
      http:
        paths:
          - backend:
              serviceName: serviceA
              servicePort: 8080
            path: /
  tls:
    - hosts:
        - serviceA.xyz.com
      secretName: letsencrypt-prod
---
# templates/serviceA.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "mychart.fullname" . }}-serviceA
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  type: {{ .Values.serviceA.type }}
  selector:
    app: serviceA
    {{- include "mychart.selectorLabels" . | nindent 4 }}
  ports:
  {{- .Values.serviceA.ports | toYaml | nindent 2 -}}
https://github.com/mailchannels/palinurus/tree/master
This git repo contains a Python script which will convert basic YAMLs to Helm charts.
You can use the tool helmtrans: https://github.com/codeandcode0x/helmtrans . It can transform YAML to Helm charts automatically.
example:
➜ helmtrans yamltohelm -p [source path] -o [output path]
I.e., given name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod in the deployment.yaml below,
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ template "project1234.name" . }}
    chart: {{ template "project1234.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  name: {{ template "project1234.module5678.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "project1234.name" . }}
  template:
    metadata:
      labels:
        app: {{ template "project1234.name" . }}
    spec:
      containers:
        - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
          name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod
          ports:
            - containerPort: 1234
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
I am expecting the pod name to be:
pod/project1234-module5678-pod
Instead, the resulting Pod name is:
pod/chartname-project1234-module5678-dc7db787-skqvv
...where (in my understanding):
chartname is from: helm install --name chartname -f values.yaml .
project1234 is from:
# Chart.yaml
apiVersion: v1
appVersion: "1.0"
description: project1234 Helm chart for Kubernetes
name: project1234
version: 0.1.0
module5678 is from:
# values.yaml
rbac:
  create: true
serviceAccounts:
  module5678:
    create: true
    name:
image:
  name: <image location>
  tag: 1.5
  pullSecret: <pull secret>
gitlab:
  secretName: <secret name>
  username: foo
  password: bar
module5678:
  enabled: true
  name: module5678
ingress:
  enabled: true
replicaCount: 1
resources: {}
I've tried changing name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod into a plain string value like "podname1234", and it isn't used. I even tried removing the name setting entirely, and the resulting pod name stays the same.
Pods created from a Deployment always have a generated name based on the Deployment's name (and also the name of the intermediate ReplicaSet, if you go off and look for it). You can't override it.
Given the YAML you've shown, I'd expect that this fragment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "project1234.module5678.fullname" . }}
expands out to a Deployment name of chartname-project1234-module5678; the remaining bits are added in by the ReplicaSet and then the Pod itself.
If you do look up the Pod and kubectl describe pod chartname-project1234-module5678-dc7db787-skqvv, you will probably see that it has a single container with the expected name project1234-module5678-pod. Pretty much the only time you need that container name is when you run kubectl logs (or, more rarely, kubectl exec) against a multi-container pod; if you are in that situation, you'll appreciate having a shorter name, and since container names are always scoped to the specific pod in which they appear, there's nothing wrong with using a short fixed name here:
spec:
  containers:
    - name: container
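Applied to the template from the question, that means leaving the generated Deployment/Pod name alone and only shortening the container name, for example (a sketch; module5678 as the short name is just a suggestion):
      containers:
        - name: module5678
          image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 1234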