Where do I put my NodePort spec in my deployment.yaml?

I am a total newbie with Helm charts, but I have managed to get a pod with ApacheDS (an LDAP server) running on it. I can exec a shell into it, and I can log in and get responses from the LDAP server.
However, from outside the cluster I get a connection refused. Looking this up, I "think" I need a NodePort (per the Kube documentation), but I cannot see where to put that spec. I have tried many things but just can't get it. According to the documentation I need something like this:
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - port: 10389
      targetPort: 10389
      nodePort: 30007
Here is my deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "buildchart.fullname" . }}
  labels:
    {{- include "buildchart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "buildchart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "buildchart.selectorLabels" . | nindent 8 }}
    spec:
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
        - name: {{ .Values.imagePullSecrets }}
      {{- end }}
      serviceAccountName: {{ include "buildchart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: admin-port
              containerPort: 8080
              hostPort: 8080
              protocol: TCP
            - name: ldap-port
              containerPort: 10389
              hostPort: 10389
              protocol: UDP
          livenessProbe:
            exec:
              command:
                - curl ldap://localhost:10389/
            initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - curl ldap://localhost:10389/
            initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
How do I open this port to the rest of the world? Or at least to the box the container is on?

Yes, you need to create a Service for your Deployment.
Also, I suggest you do it without hardcoding, because it is easier to change a value in the values.yaml file than to edit template files to add new hardcoded values.
In the deployment.yaml, set:
...
{{ if .Values.ports }}
ports:
{{ range .Values.ports }}
- name: {{ .name }}
  containerPort: {{ .containerPort }}
  protocol: {{ .protocol }}
{{ end }}
{{ end }}
...
In values.yaml, set:
ports:
  - name: admin-port
    containerPort: 8080
    nodePort: 8080
    protocol: TCP
  - name: ldap-port
    containerPort: 10389
    nodePort: 10389
    protocol: UDP
Create a service.yaml file and set:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "buildchart.fullname" . }}
  labels:
    {{- include "buildchart.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
  {{ range .Values.ports }}
  - port: {{ .nodePort }}
    targetPort: {{ .containerPort }}
    protocol: {{ .protocol }}
    name: {{ .name }}
  {{ end }}
  selector:
    {{- include "buildchart.selectorLabels" . | nindent 4 }}
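Two hedged footnotes on this approach (the key names below are assumptions carried over from the snippets above, not something your chart already defines). First, the Service template reads .Values.service.type, so values.yaml also needs a block like:

service:
  type: NodePort

Second, as written the template maps .nodePort from values onto the Service's port: field and never sets a literal nodePort: field, so Kubernetes will auto-assign the externally reachable node port from its default 30000-32767 range; the 8080/10389 values above could not be pinned as real nodePorts anyway, since they fall outside that range. After installing, kubectl get svc shows which node port was assigned.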

You should be adding that as a Service object's spec, not in your Deployment object.
For example:
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - port: 10389
      targetPort: 10389
      nodePort: 30007
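If you go this route, a rough way to verify it from outside the cluster (assuming an OpenLDAP client is installed; the base DN dc=example,dc=com is a placeholder for whatever your ApacheDS instance actually serves):

kubectl apply -f service.yaml
kubectl get svc my-app-service
ldapsearch -x -H ldap://<node-ip>:30007 -b "dc=example,dc=com"

where <node-ip> is any node address from kubectl get nodes -o wide. The selector app: MyApp must match your pod's labels; with the chart above, that means using buildchart.selectorLabels instead. Also note that nodePort must fall in the cluster's NodePort range (30000-32767 by default), which is why this example uses 30007 rather than 10389.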

Related

Passing values to include function inside range, using defaults with merge

Given this deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  revisionHistoryLimit: 5
  template:
    spec:
      containers:
        {{- include "app.container" (merge .Values.app $) | nindent 8 }}
        {{- include "ports" (merge .Values.app ) | nindent 8 }}
        {{- range $k, $v := .Values.extraContainers }}
        {{- $nameDict := dict "name" $k -}}
        {{- include "app.container" (mustMergeOverwrite $.Values.app $nameDict $v) | nindent 8 }}
        {{- include "ports" (merge $nameDict $v ) | nindent 8 }}
        {{- end }}
This helpers file...
{{/* vim: set filetype=helm: */}}
{{/*
app container base
*/}}
{{- define "app.containerBase" -}}
- name: {{ .name | default "app" }}
  image: {{ .image.name }}:{{ .image.tag }}
  {{- if .command }}
  command:
  {{- range .command }}
  - {{ . | quote }}
  {{- end }}
  {{- end }}
  {{- if .args }}
  args:
  {{- range .args }}
  - {{ . | quote }}
  {{- end }}
  {{- end }}
  {{- range .envVars }}
  - name: {{ .name }}
    {{- with .value }}
    value: {{ . | quote }}
    {{- end }}
    {{- with .valueFrom }}
    valueFrom:
      {{- toYaml . | nindent 8 }}
    {{- end }}
  {{- end }}
  {{- if or .Values.configMaps .Values.secrets .Values.configMapRef }}
  envFrom:
  {{- if .Values.configMaps }}
  - configMapRef:
      name: "{{- include "app.fullname" $ }}"
  {{- end }}
  {{- range .Values.configMapRef }}
  - configMapRef:
      name: {{ . }}
  {{- end }}
  {{- range $name, $idk := $.Values.secrets }}
  - secretRef:
      name: "{{- include "app.fullname" $ }}-{{ $name }}"
  {{- end }}
  {{- end }}
{{- end -}}
{{/*
app container
*/}}
{{- define "app.container" -}}
{{- include "app.containerBase" . }}
  resources:
    limits:
      cpu: {{ .resources.limits.cpu | default "100m" | quote }}
      memory: {{ .resources.limits.memory | default "128Mi" | quote }}
      {{- if .resources.limits.ephemeralStorage }}
      ephemeral-storage: {{ .resources.limits.ephemeralStorage }}
      {{- end }}
    requests:
      cpu: {{ .resources.requests.cpu | default "100m" | quote }}
      memory: {{ .resources.requests.memory | default "128Mi" | quote }}
      {{- if .resources.requests.ephemeralStorage }}
      ephemeral-storage: {{ .resources.requests.ephemeralStorage }}
      {{- end }}
  securityContext:
    runAsNonRoot: true
    runAsUser: {{ .securityContext.runAsUser }}
    runAsGroup: {{ .securityContext.runAsGroup }}
    allowPrivilegeEscalation: false
{{- end -}}
{{/*
ports
*/}}
{{- define "ports" -}}
{{- if .port }}
ports:
- containerPort: {{ .port }}
  protocol: {{ .protocol | default "tcp" | upper }}
{{- range .extraPorts }}
- containerPort: {{ required ".port is required." .port }}
  {{- if .name }}
  name: {{ .name }}
  {{- end }}
  {{- if .protocol }}
  protocol: {{ .protocol | upper }}
  {{- end }}
{{- end }}
{{- end }}
{{- end -}}
This values.yaml
extraContainers:
  extra-container2:
    image:
      name: xxx.dkr.ecr.eu-west-1.amazonaws.com/foobar-two
      tag: test
    command:
      - sleep
      - 10
    envVars:
      - name: FOO
        value: BAR
    probes:
      readinessProbe:
        path: /ContainerTwoReadinessPath
      livenessProbe:
        path: /ContainerTwoLivenessPath
    resources:
      limits:
        cpu: 200Mi
        memory: 2Gi
      requests:
        cpu: 200Mi
        memory: 2Gi
  extra-container3:
    image:
      name: xx.dkr.ecr.eu-west-1.amazonaws.com/foobar-three
      tag: latest
    command:
      - sleep
      - 10
    envVars:
      - name: FOO
        value: BAZ
    probes:
      readinessProbe:
        enabled: false
      livenessProbe:
        enabled: false
app:
  image:
    name: xx.dkr.ecr.eu-west-1.amazonaws.com/foobar
    tag: latest
  probes:
    readinessProbe:
      path: /_readiness
      enabled: true
    livenessProbe:
      path: /_liveness
      enabled: true
  port: 100
  resources:
    limits:
      cpu: 100Mi
      memory: 1Gi
    requests:
      cpu: 100Mi
      memory: 1Gi
Why does helm template result in the below:
---
# Source: corp-service/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  revisionHistoryLimit: 5
  template:
    spec:
      containers:
        - name: app
          image: xx.dkr.ecr.eu-west-1.amazonaws.com/foobar:latest
          resources:
            limits:
              cpu: "100Mi"
              memory: "1Gi"
            requests:
              cpu: "100Mi"
              memory: "1Gi"
          securityContext:
            runAsNonRoot: true
            runAsUser: 2000
            runAsGroup: 2000
            allowPrivilegeEscalation: false
          ports:
            - containerPort: 100
              protocol: TCP
        - name: extra-container2
          image: xx.dkr.ecr.eu-west-1.amazonaws.com/foobar-two:test
          command:
            - "sleep"
            - "10"
            - name: FOO
              value: "BAR"
          resources:
            limits:
              cpu: "200Mi"
              memory: "2Gi"
            requests:
              cpu: "200Mi"
              memory: "2Gi"
          securityContext:
            runAsNonRoot: true
            runAsUser: 2000
            runAsGroup: 2000
            allowPrivilegeEscalation: false
        - name: extra-container3
          image: xx.dkr.ecr.eu-west-1.amazonaws.com/foobar-three:latest
          command:
            - "sleep"
            - "10"
            - name: FOO
              value: "BAZ"
          resources:
            limits:
              cpu: "200Mi"
              memory: "2Gi"
            requests:
              cpu: "200Mi"
              memory: "2Gi"
          securityContext:
            runAsNonRoot: true
            runAsUser: 2000
            runAsGroup: 2000
            allowPrivilegeEscalation: false
i.e. why does extra-container3 have resources: set with values from extra-container2, instead of taking them from the default set of resources found under .Values.app?
Essentially, I want extraContainers to use default values from .Values.app unless I explicitly set them inside the extraContainers section. However, it seems that if a previous iteration of the loop over extraContainers defines some values, they are used in the next iteration if they are not overridden.
The resources I am expecting for extra-container3 are:
resources:
  limits:
    cpu: 100Mi
    memory: 1Gi
  requests:
    cpu: 100Mi
    memory: 1Gi
What am I doing wrong?
So there are a couple of things going on here, but mostly the answer is that mustMergeOverwrite mutates the $dest map, which causes your range to "remember" the last value it saw; according to your question, that isn't the behavior you want. The simplest answer is to use (deepCopy $.Values.app) as the $dest, but there's an asterisk to that due to another bug we'll cover in a second:
  {{- $nameDict := dict "name" $k -}}
- {{- include "app.container" (mustMergeOverwrite $.Values.app $nameDict $v) | nindent 8 }}
+ {{- include "app.container" (mustMergeOverwrite (deepCopy $.Values.app) $nameDict $v) | nindent 8 }}
  {{- include "ports" (merge $nameDict $v ) | nindent 8 }}
  {{- end }}
You see, (deepCopy $.Values.app) stack-overflows helm, because your inexplicable use of (merge .Values.app $) drags in Chart, Files, Capabilities, and the whole world. I'm guessing your _helpers.tpl must have been copy-pasted from somewhere else, which explains the erroneous relative .Values references inside that define. One way to fix that is to remove the .Values. prefix from those references, so they track the .app context as I expect you meant; or you can change the first (merge) to artificially create a "Values": {} item, just to keep the template from blowing up when it tries to resolve .Values.configMaps. The correct one will depend on what you were intending, but (merge .Values.app $) is almost certainly not it.
so, either:
--- a/templates/_helpers.tpl
+++ b/templates/_helpers.tpl
@@ -28,13 +28,13 @@ app container base
   {{- toYaml . | nindent 8 }}
   {{- end }}
 {{- end }}
-{{- if or .Values.configMaps .Values.secrets .Values.configMapRef }}
+{{- if or .configMaps .secrets .configMapRef }}
 envFrom:
-  {{- if .Values.configMaps }}
+  {{- if .configMaps }}
   - configMapRef:
       name: "{{- include "app.fullname" $ }}"
   {{- end }}
-  {{- range .Values.configMapRef }}
+  {{- range .configMapRef }}
   - configMapRef:
       name: {{ . }}
   {{- end }}
or
   containers:
-    {{- include "app.container" (merge .Values.app $) | nindent 8 }}
+    {{- include "app.container" (merge .Values.app (dict "Values" (dict))) | nindent 8 }}
     {{- include "ports" (merge .Values.app ) | nindent 8 }}

Helm - Check if value not exists OR part of list

Assuming I have this values.yaml under my helm chart -
tasks:
  - name: test-production-dev
    env:
      - production
      - dev
  - name: test-dev
    env:
      - dev
  - name: test-all
environment_variables:
  STAGE: dev
I would like to run my cronjob based on these values -
if .env doesn't exist - run any time.
if .env exists - run only if environment_variables.STAGE is in the .env list.
This is what I've done so far (with no luck) -
{{- range $.Values.tasks}}
# check if $value.env not exists OR contains stage
{{if or .env (hasKey .env "$.Values.environment_variables.STAGE") }}
apiVersion: batch/v1
kind: CronJob
...
{{- end}}
---
{{- end}}
values.yaml
tasks:
  - name: test-production-dev
    env:
      - production
      - dev
  - name: test-dev
    env:
      - dev
  - name: test-all
  - name: test-production
    env:
      - production
environment_variables:
  STAGE: dev
template/xxx.yaml
plan a
...
{{- range $.Values.tasks }}
{{- $flag := false }}
{{- if .env }}
{{- range .env }}
{{- if eq . $.Values.environment_variables.STAGE }}
{{- $flag = true }}
{{- end }}
{{- end }}
{{- else }}
{{- $flag = true }}
{{- end }}
{{- if $flag }}
apiVersion: batch/v1
kind: CronJob
meta:
  name: {{ .name }}
{{- end }}
{{- end }}
...
plan b
...
{{- range $.Values.tasks }}
{{- if or (not .env) (has $.Values.environment_variables.STAGE .env) }}
apiVersion: batch/v1
kind: CronJob
meta:
  name: {{ .name }}
{{- end }}
{{- end }}
...
output
...
apiVersion: batch/v1
kind: CronJob
meta:
  name: test-production-dev
apiVersion: batch/v1
kind: CronJob
meta:
  name: test-dev
apiVersion: batch/v1
kind: CronJob
meta:
  name: test-all
...
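For reference, plan b works because sprig's has reports list membership and not covers the missing-env case, so (or (not .env) (has $.Values.environment_variables.STAGE .env)) is a direct transcription of the two rules in the question. One caveat with both plans as sketched: when one template file emits several CronJobs, each document needs its own --- separator (as in the question's original attempt), roughly:

{{- range $.Values.tasks }}
{{- if or (not .env) (has $.Values.environment_variables.STAGE .env) }}
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ .name }}
---
{{- end }}
{{- end }}

otherwise the concatenated output is not valid multi-document YAML. (This sketch uses metadata:, the real Kubernetes key, where the plans above abbreviate it as meta:.)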

helm helpers file can't evaluate field type interface array/string

I am rather new to Helm, and I am trying to create a chart, but I am running into values not making it from the values.yaml file into my generated chart.
Here is my values.yaml:
apiVersion: security.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
  name: name01
  namespace: ns-01
spec:
  selector:
    matchLabels:
      app: app-label
  jwtRules:
    - issuer: foo
      jwksUri: bar
      forwardOriginalToken: true
      audiences:
        - user1
        - user2
Then with my helm template:
apiVersion: networking.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
  name: name01
spec:
  selector:
    matchLabels:
      app: {{ .Values.spec.selector.matchLabels.app }}
  jwtRules:
    - issuer: foo
      jwksUri: bar
      forwardOriginalToken: true
      audiences: |-
        {{- range .Values.spec.jwtRules.audiences }}
        - {{ . | title | quote }}
        {{ end }}
---
I also have a helpers file.
_helpers.tpl
{{/* vim: set filetype=mustache: */}}
{{- define "jwtRules.audiences" -}}
{{- range $.Values.spec.jwtRules.audiences }}
audiences:
- {{ . | quote }}
{{- end }}
{{- end }}
The error it's producing: at <.Values.spec.jwtRules.audiences>: can't evaluate field audiences in type interface {}
This one is simple: you don't have a spec.jwtRules.audiences in your values file! jwtRules contains an array, so you'll have to use some index or iterate over it. Also, I don't think either your indentation or your use of |- for audiences is correct; per the docs it should be an array of strings.
So I came up with this example (your values are unchanged):
apiVersion: networking.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
  name: name01
spec:
  selector:
    matchLabels:
      app: {{ .Values.spec.selector.matchLabels.app }}
  jwtRules:
    - issuer: foo
      jwksUri: bar
      forwardOriginalToken: true
      audiences:
        {{- with (first .Values.spec.jwtRules) }}
        {{- range .audiences }}
        - {{ . | title | quote -}}
        {{- end }}
        {{- end }}
renders into:
apiVersion: networking.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
  name: name01
spec:
  selector:
    matchLabels:
      app: app-label
  jwtRules:
    - issuer: foo
      jwksUri: bar
      forwardOriginalToken: true
      audiences:
        - "User1"
        - "User2"
In this case it uses the first element of the array.
Thank you @andrew. I also came to a simple solution but would like feedback on it.
I removed the helpers file, then modified my helm chart with the following.
values.yaml (kept the same as above)
helm chart:
apiVersion: networking.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
  name: name01
spec:
  selector:
    matchLabels:
      app: {{ .Values.spec.selector.matchLabels.app }}
  jwtRules:
    - issuer: foo
      jwksUri: bar
      forwardOriginalToken: true
      {{- with index .Values.spec.jwtRules 0 }}
      audiences:
      {{- range $a := .audiences }}
        - {{ $a -}}
      {{ end }}
      {{ end }}
---
This produces the following:
apiVersion: networking.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
  name: name01
spec:
  selector:
    matchLabels:
      app: app-label
  jwtRules:
    - issuer: foo
      jwksUri: bar
      forwardOriginalToken: true
      audiences:
        - user1
        - user2
Should I continue to use a helpers file?
Thanks again for all the help.
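If you do want to keep a helpers file, a minimal corrected version of the original define might look like this (a sketch mirroring the accepted answer's use of first, not an authoritative pattern):

{{- define "jwtRules.audiences" -}}
audiences:
{{- range (first .Values.spec.jwtRules).audiences }}
  - {{ . | quote }}
{{- end }}
{{- end }}

included from the template as {{ include "jwtRules.audiences" . | nindent 6 }} so the block lands at the right depth under the jwtRules entry. Whether the indirection is worth it for a three-line list is a judgment call; named templates mostly pay off once the same block is shared by several templates.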

How to compare a value to a string with go templating

I want to loop through a values file to create a namespace and a NetworkPolicy in/for that namespace, except for default: I only want to create the policy, and not the namespace, for default, since it is there by default.
values file:
namespaces:
  - name: default
  - name: test1
  - name: test2
template file:
# Loop through the namespace names and create the namespaces
{{- range $namespaces := .Values.namespaces }}
{{- if ne "default" }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ $namespaces.name }}
---
{{- end }}
{{- end }}
# Loop through the namespace names and create a network policy for those namespaces
{{- range $namespaces := .Values.namespaces }}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: {{ $namespaces.name }}-networkpolicy
  namespace: {{ $namespaces.name }}
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: {{ $namespaces.name }}
---
{{- end }}
The error I get is:
Error: UPGRADE FAILED: template: namespaces/templates/namespaces.yaml:3:7: executing "namespaces/templates/namespaces.yaml" at <ne>: wrong number of args for ne: want 2 got 1
It's probably something simple, but I'm not seeing it. Hope someone can help.
This worked for me:
# Loop through the namespace names and create the namespaces
{{- range $namespaces := .Values.namespaces }}
{{- if ne $namespaces.name "default" }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ $namespaces.name }}
---
{{- end }}
{{- end }}
# Loop through the namespace names and create a network policy for those namespaces
{{- range $namespaces := .Values.namespaces }}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: {{ $namespaces.name }}-networkpolicy
  namespace: {{ $namespaces.name }}
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: {{ $namespaces.name }}
---
{{- end }}
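The one-line explanation: ne, like eq, is a function that takes two arguments, so {{- if ne "default" }} supplies only one, which is exactly what "wrong number of args for ne: want 2 got 1" is complaining about. Inside the range, the current item must be named explicitly:

{{- if ne $namespaces.name "default" }}

This compares the current entry's name against the literal "default" and skips Namespace creation for it, while the second loop still emits a NetworkPolicy for every entry.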

{{ If }} clause inside of range scope doesn't see values

The task is to range over a workers collection and, if the current worker has autoscaling.enabled=true, create an HPA for it.
I've tried to compare .autoscaling.enabled to "true", but it returned "error calling eq: incompatible types for comparison". Here people say that it actually means .autoscaling.enabled is nil. So {{ if .autoscaling.enabled }} somehow doesn't see the variable and assumes it doesn't exist.
Values:
...
workers:
  - name: worker1
    command: somecommand1
    memoryRequest: 500Mi
    memoryLimit: 1400Mi
    cpuRequest: 50m
    cpuLimit: 150m
    autoscaling:
      enabled: false
  - name: worker2
    command: somecommand2
    memoryRequest: 512Mi
    memoryLimit: 1300Mi
    cpuRequest: 50m
    cpuLimit: 150m
    autoscaling:
      enabled: false
  - name: workerWithAutoscaling
    command: somecommand3
    memoryRequest: 600Mi
    memoryLimit: 2048Mi
    cpuRequest: 150m
    cpuLimit: 400m
    autoscaling:
      enabled: true
      minReplicas: 1
      maxReplicas: 5
      targetCPUUtilization: 50
      targetMemoryUtilization: 50
...
template:
...
{{- range .Values.workers }}
{{- if .autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    ...
  name: "hpa-{{ .name }}-{{ $.Realeas.Name }}"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .name }}
  minReplicas: {{ .minReplicas }}
  maxReplicas: {{ .maxReplicas }}
  metrics:
    {{- with .targetCPUUtilization }}
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ . }}
    {{- end }}
    {{- with .targetMemoryUtilization }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ . }}
    {{- end }}
---
{{- end }}
{{- end }}
I expect the manifest for one HPA that targets workerWithAutoscaling, but the actual output is totally empty.
Your use of {{- range .Values.workers }} and {{- if .autoscaling.enabled }} is fine. You are not getting any values because .minReplicas, .maxReplicas, etc. are inside the .autoscaling scope.
See Modifying scope using with.
Adding {{- with .autoscaling }} will solve the issue.
{{- range .Values.workers }}
{{- if .autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
  name: "hpa-{{ .name }}-{{ $.Release.Name }}"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .name }}
  {{- with .autoscaling }}
  minReplicas: {{ .minReplicas }}
  maxReplicas: {{ .maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .targetCPUUtilization }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ .targetMemoryUtilization }}
  {{- end }}
{{- end }}
{{- end }}
helm template .
---
# Source: templates/hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
  name: "hpa-workerWithAutoscaling-release-name"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: workerWithAutoscaling
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 50
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: 50
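A hedged version note for anyone applying this today: autoscaling/v2beta1 was removed in Kubernetes 1.25, and on the GA autoscaling/v2 API each metric's utilization moves under a target: block, so the CPU metric above would become roughly:

metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

The scoping fix with {{- with .autoscaling }} is unchanged either way.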