Why doesn't helm use the name defined in the deployment template? - kubernetes-helm

i.e., from the line name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod below:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ template "project1234.name" . }}
    chart: {{ template "project1234.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  name: {{ template "project1234.module5678.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "project1234.name" . }}
  template:
    metadata:
      labels:
        app: {{ template "project1234.name" . }}
    spec:
      containers:
        - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
          name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod
          ports:
            - containerPort: 1234
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
I am expecting the pod name to be:
pod/project1234-module5678-pod
Instead, the resulting Pod name is:
pod/chartname-project1234-module5678-dc7db787-skqvv
...where (in my understanding):
chartname is from: helm install --name chartname -f values.yaml .
project1234 is from:
# Chart.yaml
apiVersion: v1
appVersion: "1.0"
description: project1234 Helm chart for Kubernetes
name: project1234
version: 0.1.0
module5678 is from:
# values.yaml
rbac:
  create: true
serviceAccounts:
  module5678:
    create: true
    name:
image:
  name: <image location>
  tag: 1.5
  pullSecret: <pull secret>
gitlab:
  secretName: <secret name>
  username: foo
  password: bar
module5678:
  enabled: true
  name: module5678
ingress:
  enabled: true
replicaCount: 1
resources: {}
I've tried changing name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod to a plain string value like "podname1234", and it isn't used. I even tried removing the name setting entirely, and the resulting pod name stays the same.

Pods created from a Deployment always have a generated name based on the Deployment's name (and also the name of the intermediate ReplicaSet, if you go off and look for it). You can't override it.
Given the YAML you've shown, I'd expect that this fragment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "project1234.module5678.fullname" . }}
expands out to a Deployment name of chartname-project1234-module5678; the remaining bits are added in by the ReplicaSet and then the Pod itself.
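The exact expansion depends on your chart's _helpers.tpl, which isn't shown here; a conventional definition of that helper (an assumption about your chart, not something from the question) would look something like:
{{/* _helpers.tpl (hypothetical) */}}
{{- define "project1234.module5678.fullname" -}}
{{- printf "%s-%s-%s" .Release.Name .Chart.Name .Values.module5678.name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
With the release named chartname, that renders exactly the chartname-project1234-module5678 prefix you are seeing.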
If you do look up the Pod and run kubectl describe pod chartname-project1234-module5678-dc7db787-skqvv, you will probably see that it has a single container with the expected name project1234-module5678-pod. Pretty much the only time you need the container name is when you run kubectl logs (or, more rarely, kubectl exec) against a multi-container pod. If you are in that situation you'll appreciate having a shorter name, and since container names are always scoped to the specific pod in which they appear, there's nothing wrong with using a short fixed name here:
spec:
  containers:
    - name: container
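For illustration, a pod template with two containers using short fixed names (the app and sidecar names and the busybox image are just examples, not part of the original chart) could look like the sketch below, and you would then pick a container with kubectl logs <pod-name> -c app:
spec:
  containers:
    - name: app        # main application container
      image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
    - name: sidecar    # e.g. a log shipper or proxy
      image: busybox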

Related

pass constant to skaffold

I am trying to use a constant in skaffold and to access it in a skaffold profile, for example:
export SOME_IP=199.99.99.99 && skaffold run -p dev
skaffold.yaml
...
deploy:
  helm:
    flags:
      global:
        - "--debug"
    releases:
      - name: ***
        chartPath: ***
        imageStrategy:
          helm:
            explicitRegistry: true
        createNamespace: true
        namespace: "***"
        setValueTemplates:
          SKAFFOLD_SOME_IP: "{{.SOME_IP}}"
and in the dev.yaml profile I need to access it somehow, something like:
{{ .Template.SKAFFOLD_SOME_IP }}, which should be rendered as 199.99.99.99.
I tried to use skaffold's envTemplate and setValueTemplates fields, but could not get it to work, and could not find any example on the web.
Basically I found a solution which I truly don't like, but it works:
In the dev profile (values.dev.yaml) I added a placeholder:
_anchors_:
  - &_IPAddr_01 "<IPAddr_01_TAG>" # will be replaced with SOME_IP
The <IPAddr_01_TAG> placeholder will be replaced with the SOME_IP constant, which becomes 199.99.99.99 when skaffold runs.
Now to run skaffold I will do:
export SOME_IP=199.99.99.99
sed -i "s/<IPAddr_01_TAG>/$SOME_IP/g" values/values.dev.yaml
skaffold run -p dev
so after the above sed, the dev profile values file (values.dev.yaml) contains the value of SOME_IP instead of the placeholder:
_anchors_:
  - &_IPAddr_01 "199.99.99.99"
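The anchor only helps if it is referenced elsewhere in the same values file through a YAML alias; a hypothetical consumer (the someService key is purely illustrative, not from the actual chart) would look like:
someService:
  externalIP: *_IPAddr_01   # resolves to "199.99.99.99" via the anchor above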
To use the SKAFFOLD_SOME_IP value that you have set via setValueTemplates in your skaffold.yaml, you can write the chart's Deployment template like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
  labels:
    app: {{ .Chart.Name }}
spec:
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: {{ .Values.image }}
          env:
            - name: SKAFFOLD_SOME_IP
              value: "{{ .Values.SKAFFOLD_SOME_IP }}"
This will create an environment variable SKAFFOLD_SOME_IP in the Kubernetes pods, and you can then read it from your application, for example in Go:
os.Getenv("SKAFFOLD_SOME_IP")

Does helm support Endpoints object type?

I've created the following two objects:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.serviceName }}
  namespace: {{ .Values.global.namespace }}
  labels:
    chart: {{ template "chartName" . }}
    env: {{ .Values.global.env }}
  annotations:
    "helm.sh/hook": "pre-install"
    "helm.sh/hook-weight": "10"
    "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
  ports:
    - port: {{ .Values.postgres.port }}
  selector: {}
for a Service and its Endpoints:
kind: Endpoints
apiVersion: v1
metadata:
  name: {{ .Values.serviceName }}
  namespace: {{ .Values.global.namespace }}
  labels:
    chart: {{ template "chartName" . }}
    env: {{ .Values.global.env }}
  annotations:
    "helm.sh/hook": "pre-install"
    "helm.sh/hook-weight": "10"
    "helm.sh/hook-delete-policy": "before-hook-creation"
subsets:
  - addresses:
      - ip: "{{ .Values.external.ip }}"
    ports:
      - name: "db"
        port: {{ .Values.external.port }}
When I run helm, even in dry-run mode, I can see the Service object but I can't see the Endpoints object.
Why? Doesn't helm support all Kubernetes object types?
Helm is just a "templating" tool, so technically it supports everything that your underlying Kubernetes cluster supports.
In your case, please check that both files are in the templates directory.
Actually it does work. The problem was that the Service and the Endpoints MUST have the same name (which I knew) and MUST have exactly the same port names.
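A minimal sketch of the fields that have to line up (the external-db name, IP, and port here are illustrative, not values from the chart above):
apiVersion: v1
kind: Service
metadata:
  name: external-db          # must match the Endpoints name
spec:
  ports:
    - name: db               # port name must match the Endpoints port name
      port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db          # same name as the Service
subsets:
  - addresses:
      - ip: 10.0.0.5
    ports:
      - name: db             # same port name as the Service
        port: 5432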

helm values.yaml - use value from another node

So, for example, I have:
database:
  name: x-a2d9f4
  replicaCount: 1
  repository: mysql
  tag: 5.7
  pullPolicy: IfNotPresent
  tier: database
app:
  name: x-576a77
  replicaCount: 1
  repository: wordpress
  tag: 5.2-php7.3
  pullPolicy: IfNotPresent
  tier: frontend
global:
  namespace: x-c0ecdb9f
env:
  name: WORDPRESS_DB_HOST
  value:
and I want to do something like this
env:
  name: WORDPRESS_DB_HOST
  value: {{ .Values.database.name | lower }}
All these are examples from the same values.yaml.
Is this possible in Helm?
Yes, you can achieve this using the tpl function.
The tpl function allows developers to evaluate strings as templates inside a template. This is useful for passing a template string as a value to a chart or rendering external configuration files. Syntax: {{ tpl TEMPLATE_STRING VALUES }}
values.yaml
database:
  name: x-a2d9f4
env:
  name: WORDPRESS_DB_HOST
  value: "{{ .Values.database.name | upper }}"
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  some: {{ tpl .Values.env.value . }}
output:
> helm template .
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-configmap
data:
  some: X-A2D9F4
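The same pattern covers the original WORDPRESS_DB_HOST case: keep the templated string in values.yaml and pipe it through tpl wherever it is consumed. A sketch of just the container env block (the surrounding Deployment fields are omitted, and the quoting is my choice, not from the answer above):
env:
  - name: {{ .Values.env.name }}
    value: {{ tpl .Values.env.value . | quote }}
With value: "{{ .Values.database.name | lower }}" in values.yaml, this renders as x-a2d9f4.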

How to get a pod index inside a helm chart

I'm deploying a Kubernetes stateful set and I would like to get the pod index inside the helm chart so I can configure each pod with this pod index.
For example in the following template I'm using the variable {{ .Values.podIndex }} to retrieve the pod index in order to use it to configure my app.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          imagePullPolicy: Always
          name: {{ .Values.name }}
          command: ["launch"]
          args: ["-l", "{{ .Values.podIndex }}"]
          ports:
            - containerPort: 4000
      imagePullSecrets:
        - name: gitlab-registry
You can't do this in the way you're describing.
Probably the best path is to change your Deployment into a StatefulSet. Each pod launched from a StatefulSet has an identity, and each pod's hostname gets set to the name of the StatefulSet plus an index. If your launch command looks at hostname, it will see something like name-0 and know that it's the first (index 0) pod in the StatefulSet.
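A minimal sketch of that approach (the headless Service referenced by serviceName is assumed to exist, and the field values mirror the question's values, not a real chart):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.name }}
spec:
  serviceName: {{ .Values.name }}          # headless Service the StatefulSet attaches to
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - name: {{ .Values.name }}
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          # Pods are named <statefulset-name>-0, <statefulset-name>-1, ...,
          # so the entrypoint can derive its index from the hostname at startup.
          command: ["launch"]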
A second path would be to create n single-replica Deployments using Go templating. This wouldn't be my preferred path, but you can:
{{- range $podIndex := until (int .Values.replicaCount) }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $.Values.name }}-{{ $podIndex }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ $.Values.name }}-{{ $podIndex }}
  template:
    metadata:
      labels:
        app: {{ $.Values.name }}-{{ $podIndex }}
    spec:
      containers:
        - name: {{ $.Values.name }}
          command: ["launch"]
          args: ["-l", "{{ $podIndex }}"]
{{- end }}
The actual flow here is that Helm reads in all of the template files and produces a block of YAML files, then submits these to the Kubernetes API server (with no templating directives at all), and the Kubernetes machinery acts on it. You can see what's being submitted by running helm template. By the time a Deployment is creating a Pod, all of the template directives have been stripped out; you can't make fields in the pod spec dependent on things like which replica it is or which node it got scheduled on.

Deploying a kubernetes job via helm

I am new to helm and I have tried to deploy a few tutorial charts. I have a couple of queries:
I have a Kubernetes Job which I need to deploy. Is it possible to deploy a Job via helm?
Also, my Kubernetes Job currently runs from my custom Docker image and executes a bash script to complete the job. I want to pass a few parameters to this chart/job so that the bash commands take those input parameters. That's the reason I decided to move to helm: it provides more flexibility. Is that possible?
You can use helm. Helm installs all the Kubernetes resources (Jobs, Pods, ConfigMaps, Secrets, and so on) defined inside the templates folder. You can control the order of installation with Helm hooks. Helm offers hooks such as pre-install, post-install, and pre-delete relative to the deployment. If two or more jobs use the same hook (for example pre-install), their hook weights decide the order in which they are installed.
|-scripts/runjob.sh
|-templates/post-install.yaml
|-Chart.yaml
|-values.yaml
Often you need to change the variables in the script per environment, so instead of hardcoding variables in the script you can pass parameters to it by setting them as environment variables on the container built from your custom Docker image. Change the values in values.yaml instead of changing your script.
values.yaml
key1:
  someKey1: value1
key2:
  someKey2: value1
post-install.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: post-install-job
  labels:
    provider: stackoverflow
    microservice: {{ template "name" . }}
    release: "{{ .Release.Name }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade,pre-rollback
    "helm.sh/hook-delete-policy": before-hook-creation
    "helm.sh/hook-weight": "3"
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}"
      labels:
        provider: stackoverflow
        microservice: {{ template "name" . }}
        release: "{{ .Release.Name }}"
        app: {{ template "fullname" . }}
    spec:
      restartPolicy: Never
      containers:
        - name: post-install-job
          image: "custom-docker-image:v1"
          command: ["/bin/sh", "-c", {{ .Files.Get "scripts/runjob.sh" | quote }}]
          env:
            # KEY1 is set as an environment variable in the container;
            # its value (value1) is read from values.yaml.
            - name: KEY1
              value: {{ .Values.key1.someKey1 }}
            - name: KEY2
              value: {{ .Values.key2.someKey2 }}
runjob.sh
# the script can read the variables from its environment
echo $KEY1
echo $KEY2
# some stuff
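Since the parameters come from values.yaml, you can also override them per environment at install time instead of editing the file; for example (the release name myjob and the override value are just placeholders):
helm install --name myjob -f values.yaml --set key1.someKey1=some-other-value .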
You can use Helm Hooks to run jobs. Depending on how you set up your annotations you can run a different type of hook (pre-install, post-install, pre-delete, post-delete, pre-upgrade, post-upgrade, pre-rollback, post-rollback, crd-install). An example from the doc is as follows:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{.Release.Name}}"
  labels:
    app.kubernetes.io/managed-by: {{.Release.Service | quote }}
    app.kubernetes.io/instance: {{.Release.Name | quote }}
    helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        app.kubernetes.io/managed-by: {{.Release.Service | quote }}
        app.kubernetes.io/instance: {{.Release.Name | quote }}
        helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    spec:
      restartPolicy: Never
      containers:
        - name: post-install-job
          image: "alpine:3.3"
          command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
You can pass your parameters as Secrets or ConfigMaps to your Job as you would to a Pod.
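For instance, a sketch of wiring a ConfigMap into the Job's container with envFrom (only a fragment of the Job spec; the job-params ConfigMap name is hypothetical):
spec:
  template:
    spec:
      containers:
        - name: post-install-job
          image: "custom-docker-image:v1"
          envFrom:
            - configMapRef:
                name: job-params   # hypothetical ConfigMap holding the script's parameters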
I had a similar scenario where I had a job I wanted to pass a variety of arguments to. I ended up doing something like this:
Template:
apiVersion: batch/v1
kind: Job
metadata:
  name: myJob
spec:
  template:
    spec:
      containers:
        - name: myJob
          image: myImage
          args: {{ .Values.args }}
Command (powershell):
helm template helm-chart --set "args={arg1\, arg2\, arg3}" | kubectl apply -f -
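If you want the rendered args to stay a proper YAML list rather than a single stringified value, a variant worth considering (my suggestion, not part of the original answer) is to pipe the value through toJson:
args: {{ .Values.args | toJson }}
With --set "args={arg1,arg2,arg3}" this renders as args: ["arg1","arg2","arg3"].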