I have a Secret template that is meant to pick up two files, a .crt and a .key, and make them available under /etc/ssl in my pod. I have everything set up, but when I deploy with Helm, both the mounted files and the Secret's data come out blank, even though the files exist.
Below is my secrets file:
---
apiVersion: v1
kind: Secret
metadata:
name: "{{ .Values.secret_certs }}"
labels:
app: {{ template "connect-mw.name" . }}
chart: {{ template "connect-mw.chart" . }}
draft: {{ .Values.draft | default "draft-app" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
connect.crt: |-
{{ .Files.Get "connect.crt" | b64enc | indent 4 }}
connect.key: |-
{{ .Files.Get "connect.key" | b64enc | indent 4 }}
And my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "connect-mw.fullname" . }}
labels:
app: {{ template "connect-mw.name" . }}
chart: {{ template "connect-mw.chart" . }}
draft: {{ .Values.draft | default "draft-app" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
revisionHistoryLimit: 5
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "connect-mw.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "connect-mw.name" . }}
draft: {{ .Values.draft | default "draft-app" }}
release: {{ .Release.Name }}
annotations:
buildID: {{ .Values.buildID | default "" | quote }}
container.apparmor.security.beta.kubernetes.io/{{ .Chart.Name }}: runtime/default
spec:
containers:
- name: {{ .Chart.Name }}
securityContext:
runAsNonRoot: {{ .Values.securityContext.runAsNonRoot }}
runAsUser: {{ .Values.securityContext.runAsUser }}
runAsGroup: {{ .Values.securityContext.runAsGroup }}
allowPrivilegeEscalation: {{ .Values.securityContext.allowPrivilegeEscalation }}
seccompProfile:
type: RuntimeDefault
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.deployment.containerPort }}
protocol: TCP
volumeMounts:
- name: {{template "connect-mw.name" .}}
mountPath: /etc/ssl
readOnly: true
volumes:
- name: {{template "connect-mw.name" .}}
projected:
sources:
- secret:
name: "{{ .Values.secret_certs }}"
The two files sit in the chart like this when I deploy:
connect-mw
├── Chart.yaml
├── Chart.yaml.j2
├── connect.crt
├── connect.key
├── templates
│ ├── _helpers.tpl
│ ├── deployment.yaml
│ ├── secrets.yaml
│ └── service.yaml
└── values.yaml
I'm just trying to figure out what I'm doing wrong.
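A couple of hedged things worth checking, since the rendered output isn't shown: .Files.Get resolves paths relative to the chart root and silently returns an empty string when a file is missing or excluded by .helmignore, so an empty Secret usually means the files were not packaged where the template expects them. Also, b64enc already yields a single line, so wrapping it in a |- block scalar plus indent 4 is fragile; a minimal sketch of the data section without the block scalars, assuming the files sit at the chart root as in the tree above:
data:
  connect.crt: {{ .Files.Get "connect.crt" | b64enc }}
  connect.key: {{ .Files.Get "connect.key" | b64enc }}
Running helm template ./connect-mw and inspecting the rendered Secret is a quick way to confirm whether the file contents are actually being picked up.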
I need to take the new environment values from the manifest file and copy them into the deployment.yml file. Here are my deployment and the new YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.appname }}
namespace: {{ .Values.namespace }}
labels:
app: {{ .Values.appname }}
env: {{ .Values.ingress.cluster_name }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ .Values.appname }}
appname: {{ .Values.namespace }}
template:
metadata:
labels:
app: {{ .Values.appname }}
env: {{ .Values.ingress.cluster_name }}
appname: {{ .Values.namespace }}
spec:
containers:
- env:
- name: SPRING_CLOUD_CONFIG_URI
value: {{ .Values.env.SPRING_CLOUD_CONFIG_URI }}
- name: SPRING_CLOUD_CONFIG_LABEL
value: {{ .Values.env.SPRING_CLOUD_CONFIG_LABEL }}
optional: false
{{- end }}
name: {{ .Chart.Name }}
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
ports:
- containerPort: 8080
manifest.yaml
applications:
- name: update-sample
path: build/libs/update-sample.jar
diego: true
instances: 1
memory: 2048M
services:
- udpate-sample
env:
SPRING_PROFILES_ACTIVE: development,canary
SPRING_CLOUD_CONFIG_URI: ""
JAVA_OPTS: ''
SPRING_CLOUD_CONFIG_LABEL: update-sample_1.0.0
testing: commonhelm
Role: devops
I need to copy testing: commonhelm and Role: devops from the manifest into the deployment.yml file.
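Helm only sees values supplied through values.yaml or -f/--set, so the manifest keys have to be copied (or scripted) into the chart's values first. A hedged sketch, assuming the extra keys land under env in values.yaml and should be rendered as environment variables like the existing ones:
# values.yaml (sketch)
env:
  SPRING_CLOUD_CONFIG_URI: ""
  SPRING_CLOUD_CONFIG_LABEL: update-sample_1.0.0
  testing: commonhelm
  Role: devops
# deployment.yaml env block (sketch)
          env:
          {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
          {{- end }}
With a range over .Values.env, any key added to the map (including testing and Role) shows up in the Deployment without further template changes.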
I just created a new Helm chart that adds a curl command to the liveness and readiness probes, but when I run helm lint I get:
[ERROR] templates/deployment.yaml: unable to parse YAML: error converting YAML to JSON: yaml: line 46: did not find expected key
However, when I run helm install with the same chart it works fine and deploys successfully; only helm lint throws the error.
This is my YAML file; any help is much appreciated:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "nodered.fullname" . }}-deployment
labels:
app: {{ template "nodered.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
tier: application
spec:
replicas: {{ .Values.nodered.replicaCount }}
selector:
matchLabels:
app: {{ template "nodered.fullname" . }}
tier: application
template:
metadata:
labels:
app: {{ template "nodered.fullname" . }}
tier: application
release: "{{ .Release.Name }}"
annotations:
timestamp: "{{ now }}"
{{- if .Values.nodered.istio.enabled }}
sidecar.istio.io/inject: "true"
traffic.sidecar.istio.io/excludeOutboundPorts: "7600,8889"
traffic.sidecar.istio.io/excludeInboundPorts: "7600,8889"
{{- end }}
spec:
securityContext:
runAsUser: 1000
fsGroup: 1000
containers:
- name: kube-{{ template "nodered.fullname" . }}
{{- if .Values.registry }}
image: {{ .Values.registry}}/{{ .Values.nodered.image.name}}:{{ .Values.nodered.image.tag}}
{{- else }}
image: "{{ .Values.nodered.image.name}}:{{ .Values.nodered.image.tag}}"
{{- end }}
imagePullPolicy: {{ .Values.nodered.image.pullPolicy}}
livenessProbe:
{{- if .Values.nodered.istio.enabled }}
exec:
command:
- curl
- -f
- http://127.0.0.1:1880/nodered/admin
{{- else if .Values.nodered.securebackend.enabled }}
httpGet:
scheme: HTTPS
port: 1880
path: /nodered/admin
{{- else }}
httpGet:
scheme: HTTP
port: 1880
path: /nodered/admin
{{- end }}
{{ toYaml ( .Values.nodered.livenessProbeTimeouts ) | indent 12 }}
readinessProbe:
{{- if .Values.nodered.istio.enabled }}
exec:
command:
- curl
- -f
- http://127.0.0.1:1880/nodered/admin
{{- else if .Values.nodered.securebackend.enabled }}
httpGet:
scheme: HTTPS
port: 1880
path: /nodered/admin
{{- else }}
httpGet:
scheme: HTTP
port: 1880
path: /nodered/admin
{{- end }}
{{ toYaml ( .Values.nodered.readinessProbeTimeouts ) | indent 12 }}
env:
{{- if hasKey .Values.nodered "dependencyWaitTimeoutInSeconds" }}
- name: DEPENDENCY_WAIT_TIMEOUT_INSECONDS
value: {{ .Values.nodered.dependencyWaitTimeoutInSeconds | quote }}
{{- end }}
- name: NODE_RED_BIND_ADDRESS
valueFrom:
fieldRef:
fieldPath: status.podIP
envFrom:
- configMapRef:
name: {{ template "nodered.fullname" . }}-config
ports:
- name: http
containerPort: 1880
protocol: TCP
- name: https
containerPort: 1880
protocol: TCP
volumeMounts:
{{- if .Values.nodered.securebackend }}
{{- if .Values.nodered.securebackend.enabled }}
- mountPath: /usr/src/node-red/certificates/
name: certificates
readOnly: true
{{- end }}
{{- end }}
{{- with .Values.nodered.resources }}
resources:
{{ toYaml . | indent 12 }}
{{- end }}
{{- if and .Values.nodered.istio.enabled (eq .Values.nodered.istio.enabled true) }}
serviceAccountName: {{ template "-nodered.fullname" . }}-servicename
{{- end }}
volumes:
{{- if .Values.nodered.securebackend }}
{{- if .Values.nodered.securebackend.enabled }}
- name: certificates
secret:
{{- if .Values.nodered.securebackend.certificate }}
secretName: {{ .Values.nodered.securebackend.certificate }}
{{- else }}
secretName: {{ template "nodered.fullname" . }}-secret-{{ (index .Values.tlsConfig 0).name | default 0 }}
{{- end }}
{{- end }}
{{- end }}
{{- if hasKey .Values.nodered.image "imagePullSecretName" }}
imagePullSecrets:
- name: {{ .Values.nodered.image.imagePullSecretName }}
{{- end }}
{{- if hasKey .Values.nodered "nodeSelector" }}
{{- with .Values.nodered.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}
{{- if hasKey .Values.nodered "affinity" }}
{{- with .Values.nodered.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}
{{- if hasKey .Values.nodered "tolerations" }}
{{- with .Values.nodered.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}
Again, the strange thing is that helm install of this chart deploys fine; it is only helm lint that throws the error.
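By default helm lint renders the chart with only the chart's own values.yaml, so a plausible cause (an assumption, since the values file isn't shown) is that nodered.livenessProbeTimeouts / nodered.readinessProbeTimeouts are empty or missing there: toYaml then emits null or {} at indentation 12 under the probe, which breaks the YAML that lint parses, while an install that supplies those values renders cleanly. A sketch that guards the block and handles the indentation with nindent (securebackend branch omitted for brevity):
          livenessProbe:
            {{- if .Values.nodered.istio.enabled }}
            exec:
              command: ["curl", "-f", "http://127.0.0.1:1880/nodered/admin"]
            {{- else }}
            httpGet:
              scheme: HTTP
              port: 1880
              path: /nodered/admin
            {{- end }}
            {{- with .Values.nodered.livenessProbeTimeouts }}
            {{- toYaml . | nindent 12 }}
            {{- end }}
The same guard applies to readinessProbeTimeouts; alternatively, giving both keys sensible defaults in values.yaml should make lint and install agree.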
I am trying to add a zip file to our ConfigMap because the number of files exceeds the 1 MB limit. I deploy our charts with Helm and have been looking at binaryData, but I cannot get it to work properly. I would also like suggestions on how to integrate this with Helm so that the ConfigMap is deleted once the job finishes.
Here is my configmap:
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "db-migration.fullname" . }}
labels:
app: {{ template "db-migration.name" . }}
chart: {{ template "db-migration.chart" . }}
draft: {{ .Values.draft | default "draft-app" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
binaryData:
{{ .Files.Get "migrations.zip" | b64enc }}
immutable: true
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "db-migration.fullname" . }}-test
labels:
app: {{ template "db-migration.name" . }}
chart: {{ template "db-migration.chart" . }}
draft: {{ .Values.draft | default "draft-app" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
binaryData:
{{ .Files.Get "test.zip" | b64enc }}
immutable: true
The two zip files live inside the chart, and I have a command that unzips them and then runs the migration afterwards.
binaryData expects a map, but you are passing it a string.
When debugging the template, we can see:
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(ConfigMap.binaryData): invalid type for io.k8s.api.core.v1.ConfigMap.binaryData: got "string", expected "map"
The way to fix this is to add a key in front of {{ .Files.Get "test.zip" | b64enc }}:
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "db-migration.fullname" . }}
labels:
app: {{ template "db-migration.name" . }}
chart: {{ template "db-migration.chart" . }}
draft: {{ .Values.draft | default "draft-app" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
binaryData:
migrations: {{ .Files.Get "migrations.zip" | b64enc }}
immutable: true
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "db-migration.fullname" . }}-test
labels:
app: {{ template "db-migration.name" . }}
chart: {{ template "db-migration.chart" . }}
draft: {{ .Values.draft | default "draft-app" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
binaryData:
test: {{ .Files.Get "test.zip" | b64enc }}
immutable: true
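For the cleanup part of the question (removing the ConfigMap once the migration job is done), one hedged option, assuming the migration runs as a Helm hook Job, is to mark the ConfigMap as a hook resource with a delete policy, for example:
metadata:
  name: {{ template "db-migration.fullname" . }}
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
With hook-succeeded, Helm deletes the annotated resource once the hooks for that event have completed successfully, and the negative weight just ensures the ConfigMap exists before the Job starts. Both the Job and the ConfigMap need to be hooks for the ordering to hold, and hook-deletion behaviour has shifted between Helm versions, so treat this as a sketch to verify rather than a drop-in change.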
I'm trying to use sha256sum to detect changes in a ConfigMap and trigger a pod restart.
The ConfigMap is deployed to the cluster by the parent chart. Everything works fine in the env section, but there is a problem with the second checksum: I don't understand what the right path should look like.
deployment.yaml of the subchart:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "test-project.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "test-project.name" . }}
helm.sh/chart: {{ include "test-project.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
strategy:
rollingUpdate:
maxUnavailable: 0
maxSurge: 1
selector:
matchLabels:
app.kubernetes.io/name: {{ include "test-project.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "test-project.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
checksum/config1: {{ include (print $.Files.Get "configmaps/myproduct.yaml") . | sha256sum }} //error line
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
{{- if not .Values.security.useSSLConnection }}
- name: http
containerPort: {{ .Values.springboot.server.port | default 8080 }}
protocol: TCP
{{- else }}
- name: https
containerPort: {{ .Values.springboot.server.port | default 8080 }}
protocol: TCP
{{- end }}
readinessProbe:
httpGet:
path: {{ .Values.readiness.path | default "/actuator/ready" }}
port: {{ .Values.springboot.actuator.port | default "http" }}
scheme: HTTP{{ if $.Values.security.useSSLConnection }}S{{ end }}
initialDelaySeconds: {{ .Values.readiness.initialDelaySeconds | default 10 }}
periodSeconds: {{ .Values.readiness.periodSeconds | default 10 }}
timeoutSeconds: {{ .Values.readiness.timeoutSeconds | default 1 }}
successThreshold: {{ .Values.readiness.successThreshold | default 1 }}
failureThreshold: {{ .Values.readiness.failureThreshold | default 3 }}
livenessProbe:
httpGet:
path: {{ .Values.liveness.path | default "/actuator/alive" }}
port: {{ .Values.springboot.actuator.port | default "http" }}
scheme: HTTP{{ if $.Values.security.useSSLConnection }}S{{ end }}
initialDelaySeconds: {{ .Values.liveness.initialDelaySeconds | default 300 }}
periodSeconds: {{ .Values.liveness.periodSeconds | default 10 }}
timeoutSeconds: {{ .Values.liveness.timeoutSeconds | default 1 }}
successThreshold: {{ .Values.liveness.successThreshold | default 1 }}
failureThreshold: {{ .Values.liveness.failureThreshold | default 3 }}
env:
- name: MY_PRODJECT
valueFrom:
configMapKeyRef:
name: my-product
key: prj-version
- name: ACTUATOR_PORT
value: "{{ .Values.springboot.actuator.port }}"
- name: USE_SSL
value: "{{ .Values.security.useSSLConnection }}"
- name: CRL_ENABLED
value: "{{ .Values.security.CRLEnabled }}"
ConfigMap myproduct.yaml from parent chart:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-product
namespace: {{ .Release.Namespace }}
data:
prj-version: "0.3"
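A hedged note on the failing annotation: include expects the name of a defined template, and print $.Files.Get "configmaps/myproduct.yaml" is not a valid way to call Files.Get (unlike (print $.Template.BasePath "/configmap.yaml"), which just builds a path string), so the expression fails before include can do anything. If the goal is simply to hash the raw file, and assuming that file really lives under configmaps/ in the chart that renders this deployment, a minimal sketch is:
        checksum/config1: {{ .Files.Get "configmaps/myproduct.yaml" | sha256sum }}
Note that a subchart's .Files only sees the subchart's own files; since the ConfigMap is shipped by the parent chart, the checksum generally has to be computed in a parent-chart template (mirroring the working checksum/config line) or the relevant content has to be passed down to the subchart through values.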
I want to add a preStop hook to a public Helm chart and then distribute that chart to different teams. What's the best way to achieve this programmatically?
source template example:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: {{ include "test.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "test.name" . }}
helm.sh/chart: {{ include "test.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "test.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "test.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: some-static-name
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
I want to add a lifecycle.preStop block after - name: some-static-name, e.g.:
...
...
spec:
containers:
- name: some-static-name
lifecycle:
preStop:
exec:
command: [
"sh", "-c",
"sleep 5",
]
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
...
...
What would be the best way to achieve this without resorting to sed or similar tools?
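One commonly used option (a sketch, not the only way) is to leave the upstream chart untouched and patch the rendered manifests with Kustomize, either standalone against helm template output or wired into helm install via --post-renderer. The file names below are hypothetical:
# kustomization.yaml (sketch)
resources:
  - rendered.yaml            # output of `helm template`
patches:
  - path: prestop-patch.yaml
    target:
      kind: Deployment
# prestop-patch.yaml (sketch) -- strategic merge patch adding the hook
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: placeholder          # ignored when target is set; verify on your kustomize version
spec:
  template:
    spec:
      containers:
        - name: some-static-name
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 5"]
If you control how the chart is redistributed, another option is to fork the template and make the block values-driven ({{- with .Values.lifecycle }} ... {{- end }} under the container), so teams can opt in through their values files without editing templates.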