I am trying to build a template to deploy openrmf
helm template 1.8.3 chart/openrmf > ./openrmf.yaml
I downloaded their repo from GitHub (https://github.com/Cingulara/openrmf-docs/tree/master/deployments/chart/openrmf) and one of the files is causing a parse error:
Error: YAML parse error on openrmf/templates/checklistmsg.yaml: error converting YAML to JSON: yaml: line 26: mapping values are not allowed in this context.
Any ideas on how to debug this? Thanks.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openrmf-msg-checklist
  namespace: {{.Values.namespace}}
  labels:
    app.kubernetes.io/name: openrmf
    helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    app.kubernetes.io/component: checklist-nats-message-client
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/revision: "{{ .Release.Revision }}"
    app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
    app.kubernetes.io/managed-by: helm
spec:
  selector:
    matchLabels:
      run: openrmf-msg-checklist
  replicas: 1
  template:
    metadata:
      labels:
        run: openrmf-msg-checklist
    spec:
      containers:
      - name: openrmf-msg-checklist
        image: cingulara/openrmf-msg-checklist:{{.Values.checklistmsgImage}}
        env:
        - name: NATSSERVERURL
          value: nats://natsserver:4222
        - name: DBCONNECTION
          valueFrom:
            secretKeyRef:
              name: checklistdbsecret
              key: appConnection
        - name: DB
          valueFrom:
            secretKeyRef:
              name: checklistdbsecret
              key: initDBName
        - name: DBTYPE
          value: {{.Values.checklistdbtype}}
        resources:
          limits:
            memory: "750M"
            cpu: "250m"
          requests:
            memory: "250M"
            cpu: "100m"
I have tried using the --debug flag, but it fails on the same invalid YAML.
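One way to narrow this down is to lint the chart and to render it with --debug, which prints the rendered manifests even when the final YAML fails to parse, so you can read the rendered checklistmsg.yaml around the reported line 26. A sketch, assuming the chart lives at ./chart/openrmf:

```shell
# Lint the chart first; helm lint reports template and YAML problems
# with file names and line numbers. "|| true" keeps the script going
# even if linting fails.
helm lint ./chart/openrmf || true

# --debug makes helm print the rendered manifests even when the
# resulting YAML fails to parse; the redirect captures whatever
# was rendered.
helm template openrmf ./chart/openrmf --debug > ./openrmf.yaml || true

# Inspect the lines around the one the parser complained about.
sed -n '20,30p' ./openrmf.yaml
```

A common cause of "mapping values are not allowed in this context" is an unquoted templated value that itself contains a colon, which you will see immediately in the rendered output.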
I am running a Micronaut application on Kubernetes, with configuration loaded from a ConfigMap.
My configmap.yml looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: data-loader-service-config
data:
  application-devcloud.yml: |-
    data.uploaded.event.queue: local-datauploaded-event-queue
    data.uploaded.event.consumer.concurrency: 1-3
    base.dir: basedir
    aws:
      region: XXX
    datasources:
      default:
        dialect: POSTGRES
        driverClassName: org.postgresql.Driver
    micronaut:
      config:
        sources:
          - file:/data-loader-service-config
      debug: true
      jms:
        sqs:
          enabled: true
My deployment.yml looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.dekorate.io/vcs-url: <<unknown>>
    app.dekorate.io/commit-id: c041d22bc8a1a69a4c9016b77d9df465c8ca9d83
  labels:
    app.kubernetes.io/name: data-loader-service
    app.kubernetes.io/version: 0.1-SNAPSHOT
  name: data-loader-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: data-loader-service
      app.kubernetes.io/version: 0.1-SNAPSHOT
  template:
    metadata:
      annotations:
        app.dekorate.io/vcs-url: <<unknown>>
        app.dekorate.io/commit-id: c041d22bc8a1a69a4c9016b77d9df465c8ca9d83
      labels:
        app.kubernetes.io/name: data-loader-service
        app.kubernetes.io/version: 0.1-SNAPSHOT
    spec:
      containers:
      - env:
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MICRONAUT_ENVIRONMENTS
          value: "devcloud"
        - name: aws.region
          value: xxx
        image: mynamespace/data-loader-service:0.1-SNAPSHOT
        imagePullPolicy: Always
        name: data-loader-service
        volumeMounts:
        - name: data-loader-service-config
          mountPath: /data-loader-service-config
      volumes:
      - configMap:
          defaultMode: 384
          name: data-loader-service-config
          optional: false
        name: data-loader-service-config
When my Micronaut app in the pod starts up, it is not able to resolve base.dir. I am not sure what is missing here.
Here is what I ended up doing. It works, but I don't think it's the cleanest way; I am looking for a better one.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.dekorate.io/vcs-url: <<unknown>>
    app.dekorate.io/commit-id: c041d22bc8a1a69a4c9016b77d9df465c8ca9d83
  labels:
    app.kubernetes.io/name: data-loader-service
    app.kubernetes.io/version: 0.1-SNAPSHOT
  name: data-loader-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: data-loader-service
      app.kubernetes.io/version: 0.1-SNAPSHOT
  template:
    metadata:
      annotations:
        app.dekorate.io/vcs-url: <<unknown>>
        app.dekorate.io/commit-id: c041d22bc8a1a69a4c9016b77d9df465c8ca9d83
      labels:
        app.kubernetes.io/name: data-loader-service
        app.kubernetes.io/version: 0.1-SNAPSHOT
    spec:
      containers:
      - env:
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MICRONAUT_ENVIRONMENTS
          value: "devcloud"
        - name: MICRONAUT_CONFIG_FILES
          value: "/config/application-common.yml,/config/application-devcloud.yml"
        - name: aws.region
          value: xxx
        image: xxx/data-loader-service:0.1-SNAPSHOT
        imagePullPolicy: Always
        name: data-loader-service
        volumeMounts:
        - name: data-loader-service-config
          mountPath: /config
      volumes:
      - configMap:
          defaultMode: 384
          name: data-loader-service-config
          optional: false
        name: data-loader-service-config
I do not want to hard-code the values for MICRONAUT_ENVIRONMENTS and MICRONAUT_CONFIG_FILES in my deployment.yml. Is there a way to parameterise/externalise them so that I have a single deployment.yml for all environments, and can decide at deploy time which environment I am deploying to? I do not want to create multiple yml files (one per environment/profile).
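One common approach, sketched here under the assumption that the deployment is packaged as a Helm chart (the value keys micronautEnvironments and micronautConfigFiles are made-up names for illustration), is to template the env entries and supply a per-environment values file at install time:

```yaml
# In the deployment template (hypothetical Helm chart):
        - name: MICRONAUT_ENVIRONMENTS
          value: {{ .Values.micronautEnvironments | quote }}
        - name: MICRONAUT_CONFIG_FILES
          value: {{ .Values.micronautConfigFiles | quote }}

# values-devcloud.yaml (one small values file per environment):
# micronautEnvironments: devcloud
# micronautConfigFiles: /config/application-common.yml,/config/application-devcloud.yml
```

Then `helm install data-loader-service ./chart -f values-devcloud.yaml` selects the environment at deploy time while the deployment template itself stays single-sourced.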
I am currently using emissary-ingress in my namespace newcluster. I am setting up its CRDs using the following crds.yaml file, in which I have only changed the hardcoded namespace.
From the following section of the file, I can see that the service name is emissary-apiext:
apiVersion: v1
kind: Service
metadata:
  name: emissary-apiext
  namespace: newcluster
  labels:
    app.kubernetes.io/instance: emissary-apiext
    app.kubernetes.io/managed-by: kubectl_apply_-f_emissary-apiext.yaml
    app.kubernetes.io/name: emissary-apiext
    app.kubernetes.io/part-of: emissary-apiext
spec:
  type: ClusterIP
  ports:
  - name: https
    port: 443
    targetPort: https
  selector:
    app.kubernetes.io/instance: emissary-apiext
    app.kubernetes.io/name: emissary-apiext
    app.kubernetes.io/part-of: emissary-apiext
Now, in the following Job, which applies a manifest of kind KubernetesEndpointResolver, I simply set serviceAccountName: emissary-apiext and installed it with Helm.
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app: {{ template "name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    component: "{{ .Release.Namespace }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ .Release.Namespace }}-job-kube-resolvers
spec:
  template:
    spec:
      serviceAccountName: emissary-apiext
      containers:
      - name: cert-manager-setup-certificates-crd
        image: "bitnami/kubectl:1.23.10-debian-11-r9"
        volumeMounts:
        - name: cert-manager-setup-certificates-crd
          mountPath: /etc/cert-manager-setup-certificates-crd
          readOnly: true
        command: ["kubectl", "apply", "-f", "/etc/cert-manager-setup-certificates-crd", "-n", "newcluster"]
      volumes:
      - name: cert-manager-setup-certificates-crd
        configMap:
          name: cert-manager-setup-certificates-crd
      restartPolicy: OnFailure
The ConfigMap containing the YAML file:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: {{ .Release.Namespace }}
  name: cert-manager-setup-certificates-crd
data:
  crds.yaml: |-
    apiVersion: getambassador.io/v3alpha1
    kind: KubernetesEndpointResolver
    metadata:
      name: {{ .Release.Namespace }}-endpoint
      labels:
        app.kubernetes.io/instance: emissary-apiext
        app.kubernetes.io/managed-by: kubectl_apply_-f_emissary-apiext.yaml
        app.kubernetes.io/name: emissary-apiext
        app.kubernetes.io/part-of: emissary-apiext
The error I get from the Job is:
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "getambassador.io/v3alpha1, Resource=kubernetesendpointresolvers", GroupVersionKind: "getambassador.io/v3alpha1, Kind=KubernetesEndpointResolver"
Name: "newcluster-endpoint", Namespace: "newcluster"
from server for: "/etc/cert-manager-setup-certificates-crd/crds.yaml":
kubernetesendpointresolvers.getambassador.io "newcluster-endpoint" is forbidden: User "system:serviceaccount:newcluster:emissary-apiext"
cannot get resource "kubernetesendpointresolvers" in API group "getambassador.io" in the namespace "newcluster"
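The error says the emissary-apiext service account has no RBAC permission on kubernetesendpointresolvers in that namespace. A minimal sketch of a Role and RoleBinding that would grant it (the name endpoint-resolver-editor is illustrative, and the verb list is an assumption about what `kubectl apply` needs):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: endpoint-resolver-editor   # illustrative name
  namespace: newcluster
rules:
- apiGroups: ["getambassador.io"]
  resources: ["kubernetesendpointresolvers"]
  # kubectl apply reads the current object before patching, hence "get"
  verbs: ["get", "list", "create", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: endpoint-resolver-editor
  namespace: newcluster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-resolver-editor
subjects:
- kind: ServiceAccount
  name: emissary-apiext
  namespace: newcluster
```

Alternatively, use a dedicated service account for the Job instead of reusing emissary-apiext's, and bind the Role to that.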
I have a deployment .yaml file that basically creates a pod with MariaDB, as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: {{ .Release.Name }}-pod
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        pod: {{ .Release.Name }}-pod
    spec:
      containers:
      - env:
        - name: MYSQL_ROOT_PASSWORD
          value: {{ .Values.db.password }}
        image: {{ .Values.image.repository }}
        name: {{ .Release.Name }}
        ports:
        - containerPort: 3306
        resources:
          requests:
            memory: 2048Mi
            cpu: 0.5
          limits:
            memory: 4096Mi
            cpu: 1
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: dbsvr-claim
        - mountPath: /etc/mysql/conf.d/my.cnf
          name: conf
          subPath: my.cnf
        - mountPath: /docker-entrypoint-initdb.d/init.sql
          name: conf
          subPath: init.sql
      restartPolicy: Always
      volumes:
      - name: dbsvr-claim
        persistentVolumeClaim:
          claimName: {{ .Release.Name }}-claim
      - name: conf
        configMap:
          name: {{ .Release.Name }}-configmap
status: {}
Upon success of
helm install abc ./abc/ -f values.yaml
I have a Job that generates a mysqldump backup file, and it completes successfully (showing only the relevant code):
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-job
spec:
  template:
    metadata:
      name: {{ .Release.Name }}-job
    spec:
      containers:
      - name: {{ .Release.Name }}-dbload
        image: {{ .Values.image.repositoryRoot }}/{{.Values.image.imageName}}
        command: ["/bin/sh", "-c"]
        args:
        - mysqldump -p$(PWD) -h{{.Values.db.source}} -u$(USER) --databases xyz > $(FILE);
          echo "done!";
        imagePullPolicy: Always
      # Do not restart containers after they exit
      restartPolicy: Never
So, here's my question: is there a way to automatically start the Job after helm install abc ./ -f values.yaml finishes successfully?
You can use the kubectl wait command (see kubectl wait -h) to run the Job once the Deployment reaches the desired condition.
The wait-for-condition article demonstrates a quite similar situation.
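Alternatively, since the Job is already part of the chart, Helm's hook annotations can make Helm itself run the Job only after the release's other resources are installed. A sketch of just the metadata to add to the existing Job (the helm.sh/hook annotations are standard Helm hook syntax):

```yaml
metadata:
  name: {{ .Release.Name }}-job
  annotations:
    # Run this Job only after the release's other resources are installed/upgraded
    "helm.sh/hook": post-install,post-upgrade
    # Remove the previous hook Job before creating a new one on the next upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
```

With this, `helm install abc ./abc/ -f values.yaml` will not report success until the hook Job has completed, so no external orchestration is needed.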
I'm trying to create a number of pods from a YAML loop in Helm. If I run with --debug --dry-run, the output matches my expectations, but when I actually deploy to a cluster, only the last iteration of the loop is present.
Some YAML for you:
{{ if .Values.componentTests }}
{{- range .Values.componentTests }}
apiVersion: v1
kind: Pod
metadata:
  name: {{ . }}
  labels:
    app: {{ . }}
    chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
spec:
{{ toYaml $.Values.global.podSpec | indent 2 }}
  restartPolicy: Never
  containers:
  - name: {{ . }}
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/{{ . }}:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
{{- end }}
{{ end }}
When I run helm upgrade --install --set componentTests="{a,b,c}" --debug --dry-run,
I get the following output:
# Source: <path-to-file>.yaml
apiVersion: v1
kind: Pod
metadata:
  name: a
  labels:
    app: a
    chart: integrationtests-0.0.1
    release: funny-ferret
    heritage: Tiller
spec:
  restartPolicy: Never
  containers:
  - name: content-tests
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/a:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
apiVersion: v1
kind: Pod
metadata:
  name: b
  labels:
    app: b
    chart: integrationtests-0.0.1
    release: funny-ferret
    heritage: Tiller
spec:
  restartPolicy: Never
  containers:
  - name: b
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/b:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
apiVersion: v1
kind: Pod
metadata:
  name: c
  labels:
    app: users-tests
    chart: integrationtests-0.0.1
    release: funny-ferret
    heritage: Tiller
spec:
  restartPolicy: Never
  containers:
  - name: c
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/c:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
---
(Some parts have been edited/removed due to sensitivity/irrelevance.)
This looks to me like it does what I want: create one pod for a, another for b, and a third for c.
However, when actually installing this into a cluster, I always end up with only the pod corresponding to the last element in the list (in this case, c). It's almost as if they overwrite each other, but given that they have different names, I don't think they should. Even running with --debug but not --dry-run, the output tells me I should have 3 pods, but using kubectl get pods I can see only one.
How can I iteratively create pods from a list using Helm?
Found it!
Apparently, Helm uses --- as a separator between specifications of pods/services/what have you.
Specifying the same fields multiple times in a single chart is valid; YAML simply uses the last specified value for any given field. To avoid overwriting values and instead have multiple pods created, simply add the separator at the end of the loop:
{{ if .Values.componentTests }}
{{- range .Values.componentTests }}
apiVersion: v1
kind: Pod
metadata:
  name: {{ . }}
  labels:
    app: {{ . }}
    chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
spec:
{{ toYaml $.Values.global.podSpec | indent 2 }}
  restartPolicy: Never
  containers:
  - name: {{ . }}
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/{{ . }}:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
---
{{- end }}
{{ end }}
I am using Helm with Kubernetes on Google Cloud Platform.
I get the following error for my Postgres deployment:
SchedulerPredicates failed due to PersistentVolumeClaim is not bound
It looks like it can't connect to the persistent storage, but I don't understand why, because the persistent storage loaded fine.
I have tried deleting the Helm release completely, then on google-cloud-console > compute-engine > disks I deleted all persistent disks, and finally tried to install from the Helm chart again, but the Postgres deployment still doesn't bind to the PVC.
My database configuration:
{{- $serviceName := "db-service" -}}
{{- $deploymentName := "db-deployment" -}}
{{- $pvcName := "db-disk-claim" -}}
{{- $pvName := "db-disk" -}}
apiVersion: v1
kind: Service
metadata:
  name: {{ $serviceName }}
  labels:
    name: {{ $serviceName }}
    env: production
spec:
  type: LoadBalancer
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
    name: http
  selector:
    name: {{ $deploymentName }}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: {{ $deploymentName }}
  labels:
    name: {{ $deploymentName }}
    env: production
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: {{ $deploymentName }}
        env: production
    spec:
      containers:
      - name: postgres-database
        image: postgres:alpine
        imagePullPolicy: Always
        env:
        - name: POSTGRES_USER
          value: test-user
        - name: POSTGRES_PASSWORD
          value: test-password
        - name: POSTGRES_DB
          value: test_db
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        ports:
        - containerPort: 5432
        volumeMounts:
        - mountPath: "/var/lib/postgresql/data/pgdata"
          name: {{ $pvcName }}
      volumes:
      - name: {{ $pvcName }}
        persistentVolumeClaim:
          claimName: {{ $pvcName }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ $pvcName }}
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      name: {{ $pvName }}
      env: production
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.gcePersistentDisk }}
  labels:
    name: {{ $pvName }}
    env: production
  annotations:
    volume.beta.kubernetes.io/mount-options: "discard"
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    fsType: "ext4"
    pdName: {{ .Values.gcePersistentDisk }}
Is this Kubernetes config correct? I have read the documentation and it looks like this should work. I'm new to Kubernetes and Helm, so any advice is appreciated.
EDIT:
I have added a PersistentVolume and linked it to the PersistentVolumeClaim to see if that helps, but when I do this, the PersistentVolumeClaim status becomes stuck in "pending" (resulting in the same issue as before).
You don't have a bound PV for this claim. What storage are you using for this claim? You need to specify it in the PVC file.
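For example, on GKE a claim is usually bound by requesting a storage class and letting the cluster provision the disk, rather than label-matching a manually created PV. A sketch (the class name "standard" is GKE's historical default; treat it as an assumption and check `kubectl get storageclass` for your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-disk-claim
spec:
  storageClassName: standard   # assumed default class; verify on your cluster
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

If you do want to bind to your hand-made PV instead, the PV and PVC must agree on storage class (or both omit it) and the PVC's requested size must fit the PV's capacity, in addition to the label selector matching.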