OpenSearch blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized] - Kubernetes

I'm trying to set up an OpenSearch cluster on Kubernetes.
When setting up my nodes nothing fails, but at a certain point I get an error.
This is my StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ .Values.global.name }} # resolves to: opensearch
namespace: {{ .Values.global.namespace }}
clusterName: {{ .Values.global.clusterName }}
labels:
app: {{ .Values.global.name }}
annotations:
majorVersion: "{{ include "opensearch.majorVersion" . }}"
spec:
serviceName: "opensearch"
selector:
matchLabels:
app: {{ .Values.global.name }}
replicas: {{ .Values.replicas }} # resolves to: 3
template:
metadata:
name: {{ .Values.global.name }}
labels:
app: {{ .Values.global.name }}
role: master
spec:
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
imagePullPolicy: IfNotPresent
command: [ "sh", "-c", "ulimit -n 65536" ]
containers:
- name: "{{.Values.global.name }}-master"
image: opensearchproject/opensearch
imagePullPolicy: IfNotPresent
resources:
limits:
memory: '8Gi'
cpu: "1"
requests:
memory: '8Gi'
cpu: "1"
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
env:
- name: node.name
value: opensearch
- name: cluster.name
value: "{{ .Values.global.clusterName }}"
- name: node.master
value: "true"
- name: node.data
value: "true"
- name: node.ingest
value: "true"
- name: cluster.initial_master_nodes
value: "opensearch-0"
- name: discovery.seed_hosts
value: "opensearch-0"
- name: ES_JAVA_OPTS
value: "-Xms4g -Xmx4g"
volumeMounts:
- name: {{ .Values.global.name }}
mountPath: /etc/opensearch/data
- name: config
mountPath: /usr/share/opensearch/config/opensearch.yml
subPath: opensearch.yml
- name: node-key
mountPath: {{ .Values.privateKeyPathOnMachine }}
subPath: node-key.pem
readOnly: true
- name: node
mountPath: {{ .Values.certPathOnMachine }}
subPath: node.pem
readOnly: true
- name: root-ca
mountPath: {{ .Values.rootCertPathOnMachine }}
subPath: root-ca.pem
- name: admin-key
mountPath: {{ .Values.adminKeyCertPathOnMachine }}
subPath: admin-key.pem
readOnly: true
- name: admin
mountPath: {{ .Values.adminCertPathOnMachine }}
subPath: admin.pem
readOnly: true
- name: client
mountPath: {{ .Values.clientCertPathOnMachine }}
subPath: client.pem
readOnly: true
- name: client-key
mountPath: {{ .Values.clientKeyCertPathOnMachine }}
subPath: client-key.pem
readOnly: true
volumes:
- name: config
configMap:
name: opensearch-config
- name: config-opensearch
configMap:
name: config
- name: node
secret:
secretName: node
items:
- key: node.pem
path: node.pem
- name: node-key
secret:
secretName: node-key
items:
- key: node-key.pem
path: node-key.pem
- name: root-ca
secret:
secretName: root-ca
items:
- key: root-ca.pem
path: root-ca.pem
- name: admin-key
secret:
secretName: admin-key
items:
- key: admin-key.pem
path: admin-key.pem
- name: admin
secret:
secretName: admin
items:
- key: admin.pem
path: admin.pem
- name: client-key
secret:
secretName: client-key
items:
- key: client-key.pem
path: client-key.pem
- name: client
secret:
secretName: client
items:
- key: client.pem
path: client.pem
volumeClaimTemplates:
- metadata:
name: {{ .Values.global.name }}
labels:
app: {{ .Values.global.name }}
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: "20Gi"
When I use this definition, at some point I get this error:
[ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch] Exception while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, AUDIT] (index=.opendistro_security)
org.opensearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
Now, if I try to set the nodes like this:
- name: cluster.initial_master_nodes
value: "opensearch-0.opensearch.search.svc.cluster.local,opensearch-1.opensearch.search.svc.cluster.local,opensearch-2.opensearch.search.svc.cluster.local"
- name: discovery.seed_hosts
value: "opensearch-0.opensearch.search.svc.cluster.local,opensearch-1.opensearch.search.svc.cluster.local,opensearch-2.opensearch.search.svc.cluster.local"
It fails with the same error, only this time this warning appears first:
[opensearch] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [opensearch-0.opensearch.search.svc.cluster.local, opensearch-1.opensearch.search.svc.cluster.local, opensearch-2.opensearch.search.svc.cluster.local] to bootstrap a cluster: have discovered [{opensearch}{SKON7g98RnyQsz6SAYqWRg}{GkUCV8mISZqITHiU0LDEzQ}{10.20.1.103}{10.20.1.103:9300}{dimr}{shard_indexing_pressure_enabled=true}, {opensearch}{qRuv6YgYQjGVatLGRGfPtQ}{62EmR4a_Sb-nhV9_7F05aA}{10.20.2.137}{10.20.2.137:9300}{dimr}{shard_indexing_pressure_enabled=true}, {opensearch}{8flMQsmxQEGN4LeBMemHsQ}{6zNV_pTZRnO6YneCzvOA4Q}{10.20.3.204}{10.20.3.204:9300}{dimr}{shard_indexing_pressure_enabled=true}]; discovery will continue using [10.20.2.137:9300, 10.20.3.204:9300] from hosts providers and [{opensearch}{SKON7g98RnyQsz6SAYqWRg}{GkUCV8mISZqITHiU0LDEzQ}{10.20.1.103}{10.20.1.103:9300}{dimr}{shard_indexing_pressure_enabled=true}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
When I try to run the security setup script inside the pod:
/usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh -cd ../securityconfig/ -icl -nhnv -cacert /usr/share/opensearch/config/certificates/root-ca.pem -cert /usr/share/opensearch/config/certificates/admin.pem -key /usr/share/opensearch/config/certificates/admin-key.pem
it fails too. Output:
Cannot retrieve cluster state due to: null. This is not an error, will keep on trying ...
Root cause: MasterNotDiscoveredException[null] (org.opensearch.discovery.MasterNotDiscoveredException/org.opensearch.discovery.MasterNotDiscoveredException)
kubectl get svc opensearch -o yaml
apiVersion: v1
kind: Service
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"clusterName":"gloat-dev","labels":{"app.kubernetes.io/instance":"opensearch-gloat-dev-search"},"name":"opensearch","namespace":"search"},"spec":{"clusterIP":"None","ports":[{"name":"http","port":9200},{"name":"transport","port":9300}],"publishNotReadyAddresses":true,"selector":{"app":"opensearch"},"type":"ClusterIP"}}
creationTimestamp: "2022-01-17T12:21:56Z"
labels:
app.kubernetes.io/instance: opensearch-gloat-dev-search
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:labels:
.: {}
f:app.kubernetes.io/instance: {}
f:spec:
f:clusterIP: {}
f:ports:
.: {}
k:{"port":9200,"protocol":"TCP"}:
.: {}
f:name: {}
f:port: {}
f:protocol: {}
f:targetPort: {}
k:{"port":9300,"protocol":"TCP"}:
.: {}
f:name: {}
f:port: {}
f:protocol: {}
f:targetPort: {}
f:publishNotReadyAddresses: {}
f:selector:
.: {}
f:app: {}
f:sessionAffinity: {}
f:type: {}
manager: argocd-application-controller
operation: Update
time: "2022-01-17T12:21:56Z"
name: opensearch
namespace: search
resourceVersion: "173096782"
selfLink: /api/v1/namespaces/search/services/opensearch
uid: ec2a49a1-f4e8-4419-9324-1761b892aeca
spec:
clusterIP: None
ports:
- name: http
port: 9200
protocol: TCP
targetPort: 9200
- name: transport
port: 9300
protocol: TCP
targetPort: 9300
publishNotReadyAddresses: true
selector:
app: opensearch
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
error log trace:
https://pastebin.com/MtJp9iwf (loops)

Try with:
- name: cluster.initial_master_nodes
value: "opensearch-0,opensearch-1,opensearch-2" # opensearch master node names
- name: discovery.seed_hosts
value: "opensearch" # headless service dns which points to master nodes, in your case it's "opensearch".

Related

AKS - Pods created by HPA trigger are getting terminated immediately after they are created

When we looked at the events in AKS, we observed the error below for all the pods that were created and terminated:
2m47s Warning FailedMount pod/app-fd6c6b8d9-ssr2t Unable to attach or mount volumes: unmounted volumes=[log-volume config-volume log4j2 secrets-app-inline kube-api-access-z49xc], unattached volumes=[log-volume config-volume log4j2 secrets-app-inline kube-api-access-z49xc]: timed out waiting for the condition
We already have 2 replicas running for the application, so we don't think the error is due to the access modes of the volumes.
Below is the HPA config:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: app-cpu-hpa
namespace: namespace-dev
spec:
maxReplicas: 5
minReplicas: 2
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: app
metrics:
- type: Resource
resource:
name: cpu
targetAverageValue: 500m
Below is the deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
name: app
labels:
app: app
group: app
obs: appd
spec:
replicas: 2
selector:
matchLabels:
app: app
template:
metadata:
annotations:
container.apparmor.security.beta.kubernetes.io/app: runtime/default
labels:
app: app
group: app
obs: appd
spec:
containers:
- name: app
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 2000
imagePullPolicy: {{ .Values.image.pullPolicy }}
resources:
limits:
cpu: {{ .Values.app.limits.cpu }}
memory: {{ .Values.app.limits.memory }}
requests:
cpu: {{ .Values.app.requests.cpu }}
memory: {{ .Values.app.requests.memory }}
env:
- name: LOG_DIR_PATH
value: /opt/apps/
volumeMounts:
- name: log-volume
mountPath: /opt/apps/app/logs
- name: config-volume
mountPath: /script/start.sh
subPath: start.sh
- name: log4j2
mountPath: /opt/appdynamics-java/ver21.9.0.33073/conf/logging/log4j2.xml
subPath: log4j2.xml
- name: secrets-app-inline
mountPath: "/mnt/secrets-app"
readOnly: true
readinessProbe:
failureThreshold: 3
httpGet:
path: /actuator/info
port: {{ .Values.metrics.port }}
scheme: "HTTP"
httpHeaders:
- name: Authorization
value: "Basic XXX50aXXXXXX=="
- name: cache-control
value: "no-cache"
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
initialDelaySeconds: 60
livenessProbe:
httpGet:
path: /actuator/info
port: {{ .Values.metrics.port }}
scheme: "HTTP"
httpHeaders:
- name: Authorization
value: "Basic XXX50aXXXXXX=="
- name: cache-control
value: "no-cache"
initialDelaySeconds: 300
periodSeconds: 5
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
volumes:
- name: log-volume
persistentVolumeClaim:
claimName: {{ .Values.apppvc.name }}
- name: config-volume
configMap:
name: {{ .Values.configmap.name }}-configmap
defaultMode: 0755
- name: secrets-app-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "app-kv-secret"
nodePublishSecretRef:
name: secrets-app-creds
- name: log4j2
configMap:
name: log4j2
defaultMode: 0755
restartPolicy: Always
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
Can someone please let me know where the config might be going wrong?

Secret not loading in k8s

I am learning to use k8s and I have a problem. I have been able to perform several deployments with the same YAML without problems. My problem is that when I mount the secret volume, it loads the directory with the variables, but they are not detected as environment variables.
My secret:
apiVersion: v1
kind: Secret
metadata:
namespace: insertmendoza
name: authentications-sercret
type: Opaque
data:
DB_USERNAME: aW5zZXJ0bWVuZG96YQ==
DB_PASSWORD: aktOUDlaZHRFTE1tNks1
TOKEN_EXPIRES_IN: ODQ2MDA=
SECRET_KEY: aXRzaXNzZWd1cmU=
My deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: insertmendoza
name: sarys-authentications
spec:
replicas: 1
selector:
matchLabels:
app: sarys-authentications
template:
metadata:
labels:
app: sarys-authentications
spec:
containers:
- name: sarys-authentications
image: 192.168.88.246:32000/custom:image
imagePullPolicy: Always
resources:
limits:
memory: "500Mi"
cpu: "50m"
ports:
- containerPort: 8000
envFrom:
- configMapRef:
name: authentications-config
volumeMounts:
- name: config-volumen
mountPath: /etc/config/
readOnly: true
- name: secret-volumen
mountPath: /etc/secret/
readOnly: true
volumes:
- name: config-volumen
configMap:
name: authentications-config
- name: secret-volumen
secret:
secretName: authentications-sercret
> microservice#1.0.0 start
> node dist/index.js
{
ENGINE: 'postgres',
NAME: 'insertmendoza',
USER: undefined, <-- not loaded
PASSWORD: undefined, <-- not loaded
HOST: 'db-service',
PORT: '5432'
}
If I add them manually, they are recognized:
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: authentications-sercret
key: DB_USERNAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: authentications-sercret
key: DB_PASSWORD
> microservice#1.0.0 start
> node dist/index.js
{
ENGINE: 'postgres',
NAME: 'insertmendoza',
USER: 'insertmendoza', <-- works
PASSWORD: 'jKNP9ZdtELMm6K5', <-- works
HOST: 'db-service',
PORT: '5432'
}
listening queue
listening on *:8000
In the directory where I mount the secret, the files do exist!
/etc/secret # ls
DB_PASSWORD DB_USERNAME SECRET_KEY TOKEN_EXPIRES_IN
/etc/secret # cat DB_PASSWORD
jKNP9ZdtELMm6K5/etc/secret #
EDIT
My quick solution is:
envFrom:
- configMapRef:
name: authentications-config
- secretRef: <<--
name: authentications-sercret <<--
I hope this helps you. Greetings from Argentina, Insert Mendoza.
If I understand the problem correctly, you aren't getting the secrets loaded into the environment. It looks like you're loading them incorrectly; use the envFrom form as documented here.
Using your example it would be:
apiVersion: v1
kind: Secret
metadata:
namespace: insertmendoza
name: authentications-sercret
type: Opaque
data:
DB_USERNAME: aW5zZXJ0bWVuZG96YQ==
DB_PASSWORD: aktOUDlaZHRFTE1tNks1
TOKEN_EXPIRES_IN: ODQ2MDA=
SECRET_KEY: aXRzaXNzZWd1cmU=
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: insertmendoza
name: sarys-authentications
spec:
replicas: 1
selector:
matchLabels:
app: sarys-authentications
template:
metadata:
labels:
app: sarys-authentications
spec:
containers:
- name: sarys-authentications
image: 192.168.88.246:32000/custom:image
imagePullPolicy: Always
resources:
limits:
memory: "500Mi"
cpu: "50m"
ports:
- containerPort: 8000
envFrom:
- configMapRef:
name: authentications-config
- secretRef:
name: authentications-sercret
volumeMounts:
- name: config-volumen
mountPath: /etc/config/
readOnly: true
volumes:
- name: config-volumen
configMap:
name: authentications-config
Note that the volume and mount were removed and a secretRef section was added. Those keys should now be exported as environment variables in your pod.
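To double-check that the variables actually reach the container, a generic verification (not specific to this chart) is to list the environment of a running pod, for example:
kubectl exec -n insertmendoza deploy/sarys-authentications -- env | grep DB_
The deploy/ shorthand assumes a reasonably recent kubectl; targeting the pod name directly works just as well.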

Kubernetes CronJob - Multiple CronJob configuration is not working

I have to run two CronJobs in Kubernetes (AWS EKS) and I have the configuration below. When I apply the template, only one CronJob gets created, and it is always the second one, so it looks like the first one is being overwritten by the second. I am unable to figure out what I am doing wrong.
# Source: deploy-k8s-app/templates/multicron.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
labels:
app: my-app
name: my-app
namespace: commercial
spec:
schedule: '5/15 * * * *'
concurrencyPolicy: Forbid
jobTemplate:
spec:
parallelism: 1
completions: 1
activeDeadlineSeconds: 900
template:
metadata:
labels:
app: my-app
name: my-app
namespace: commercial
spec:
containers:
- env:
- name: SERVER_SERVLET_CONTEXT_PATH
value: "/my-app"
- name: IS_JACOCO_ENABLED
value: "false"
- name: SPRING_PROFILES_ACTIVE
value: "int-dc4"
- name: METRICS_ADDRESS
value: "NA"
- name: APP_MODULE
value: "expand"
- name: JAVA_TOOL_OPTIONS
value: "-Xms256M -Xmx512M"
image: "xxxxx.dkr.ecr.us-east-1.amazonaws.com/my-ecr:my-app-latest-10"
imagePullPolicy: IfNotPresent
name: my-app
ports:
- name: http
containerPort: 8080
protocol: TCP
resources:
limits:
cpu: 160m
memory: 1024Mi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: apps-logs
mountPath: /var/log/containers
- name: fluentdconf
mountPath: /fluentd/etc
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.11.2-debian-cloudwatch-1.0
env:
- name: REGION
value: us-east-1
- name: AWS_REGION
value: us-east-1
- name: CLUSTER_NAME
value: MY-EKS-Cluster
- name: CI_VERSION
value: "k8s/1.0.1"
- name: LOG_GROUP_NAME
value: /aws/containerinsights/MY-EKS-Cluster/springapp
resources:
limits:
cpu: 160m
memory: 1024Mi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: fluentdconf
mountPath: /fluentd/etc
- name: apps-logs
mountPath: /var/log/containers
volumes:
- name: fluentdconf
configMap:
name: fluentd-spring-config
- name: apps-logs
emptyDir: {}
- name: my-app-shared
emptyDir: {}
restartPolicy: OnFailure
apiVersion: batch/v1beta1
kind: CronJob
metadata:
labels:
app: my-app
name: my-app-addl
namespace: commercial
spec:
schedule: '15/30 * * * *'
concurrencyPolicy: Forbid
jobTemplate:
spec:
parallelism: 1
completions: 1
activeDeadlineSeconds: 1800
template:
metadata:
labels:
app: my-app
name: my-app
namespace: commercial
spec:
containers:
- env:
- name: SERVER_SERVLET_CONTEXT_PATH
value: "/my-app"
- name: IS_JACOCO_ENABLED
value: "false"
- name: SPRING_PROFILES_ACTIVE
value: "int-dc4"
- name: METRICS_ADDRESS
value: "NA"
- name: APP_MODULE
value: "expand"
- name: JAVA_TOOL_OPTIONS
value: "-Xms256M -Xmx512M"
image: "xxxxx.dkr.ecr.us-east-1.amazonaws.com/my-ecr:my-app-latest-10"
imagePullPolicy: IfNotPresent
name: my-app
ports:
- name: http
containerPort: 8080
protocol: TCP
resources:
limits:
cpu: 160m
memory: 1024Mi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: apps-logs
mountPath: /var/log/containers
- name: fluentdconf
mountPath: /fluentd/etc
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.11.2-debian-cloudwatch-1.0
env:
- name: REGION
value: us-east-1
- name: AWS_REGION
value: us-east-1
- name: CLUSTER_NAME
value: MY-EKS-Cluster
- name: CI_VERSION
value: "k8s/1.0.1"
- name: LOG_GROUP_NAME
value: /aws/containerinsights/MY-EKS-Cluster/springapp
resources:
limits:
cpu: 160m
memory: 1024Mi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: fluentdconf
mountPath: /fluentd/etc
- name: apps-logs
mountPath: /var/log/containers
volumes:
- name: fluentdconf
configMap:
name: fluentd-spring-config
- name: apps-logs
emptyDir: {}
- name: my-app-shared
emptyDir: {}
restartPolicy: OnFailure
kubectl apply -f multicron.yaml
cronjob.batch/my-app-addl created
(Expectation: Two CronJobs to be created. Actual: Only one is created, and that is the second one)
kubectl get cronjob -n commercial
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
my-app-addl 15/30 * * * * False 0 <none> 9s
Thanks!
Abhilash
I was able to solve this by separating the documents with --- between the CronJob entries.
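A minimal sketch of the layout that works, with the fields trimmed and the names taken from the question:
# multicron.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-app
spec:
  schedule: '5/15 * * * *'
  # ...
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-app-addl
spec:
  schedule: '15/30 * * * *'
  # ...
Without the --- separator the two blocks are parsed as a single YAML document and the later keys win, which would explain why only the second CronJob showed up.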

HELM UPGRADE ISSUE: spec.template.spec.containers[0].volumeMounts[2].name: Not found: "NAME"

I have been trying to create a POD with HELM UPGRADE:
helm upgrade --values=$(System.DefaultWorkingDirectory)/_NAME-deploy-CI/drop/values-NAME.yaml --namespace sda-NAME-pro --install --reset-values --debug --wait NAME .
but running into below error:
2020-07-08T12:51:28.0678161Z upgrade.go:367: [debug] warning: Upgrade "NAME" failed: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
2020-07-08T12:51:28.0899772Z Error: UPGRADE FAILED: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
YML part
volumeMounts:
- name: secretvol
mountPath: "/etc/secret-vol"
readOnly: true
volumes:
- name: jks
secret:
secretName: {{ .Values.secret.jks }}
- name: secretvol
secret:
secretName: {{ .Values.secret.secretvol }}
Maybe the first deploy needs another command the first time? How can I specify these values to test it?
TL;DR
The issue you've encountered:
2020-07-08T12:51:28.0899772Z Error: UPGRADE FAILED: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
is caused by the fact that the value {{ .Values.secret.secretvol }} is missing.
To fix it you will need to set this value in either:
the Helm command that you are using, or
the file that stores your values in the Helm chart; see the example below.
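A rough example of both options (my-secret-name is a placeholder for whatever Secret actually exists in your cluster):
# on the command line
helm upgrade --install --namespace sda-NAME-pro NAME . --set secret.secretvol=my-secret-name
# or in values-NAME.yaml
secret:
  jks: NAME-jks
  jssecacerts: jssecacerts
  secretvol: my-secret-name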
A tip!
You can run your Helm command with --debug --dry-run to output the generated YAML. This should show you where the errors are located.
There is official documentation about values in Helm. Please take a look here:
Helm.sh: Docs: Chart template guide: Values files
Based on:
I have been trying to create a POD with HELM UPGRADE:
I've made an example based on your issue and how you can fix it.
Steps:
Create a helm chart with correct values
Edit the values to reproduce the error
Create a helm chart
To keep the setup simple, I created a basic Helm chart.
Below is the structure of files and directories:
❯ tree helm-dir
helm-dir
├── Chart.yaml
├── templates
│ └── pod.yaml
└── values.yaml
1 directory, 3 files
Create Chart.yaml file
Below is the Chart.yaml file:
apiVersion: v2
name: helm-pod
description: A Helm chart for spawning pod with volumeMount
version: 0.1.0
Create a values.yaml file
Below is the simple values.yaml file that will be used by default by the $ helm install command:
usedImage: ubuntu
confidentialName: secret-password # name of the secret in Kubernetes
Create a template for a pod
This template is stored in the templates directory with the name pod.yaml.
The YAML definition below will be the template for the spawned pod:
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.usedImage }} # value from "values.yaml"
labels:
app: {{ .Values.usedImage }} # value from "values.yaml"
spec:
restartPolicy: Never
containers:
- name: {{ .Values.usedImage }} # value from "values.yaml"
image: {{ .Values.usedImage }} # value from "values.yaml"
imagePullPolicy: Always
command:
- sleep
- infinity
volumeMounts:
- name: secretvol # same name as in spec.volumes.name
mountPath: "/etc/secret-vol"
readOnly: true
volumes:
- name: secretvol # same name as in spec.containers.volumeMounts.name
secret:
secretName: {{ .Values.confidentialName }} # value from "values.yaml"
With the above example you should be able to run $ helm install --name test-pod .
You should get output similar to this:
NAME: test-pod
LAST DEPLOYED: Thu Jul 9 14:47:46 2020
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod
NAME READY STATUS RESTARTS AGE
ubuntu 0/1 ContainerCreating 0 0s
Disclaimer!
The ubuntu pod is in the ContainerCreating state as there is no secret named secret-password in the cluster.
You can get more information about your pods by running:
$ kubectl describe pod POD_NAME
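If you wanted the example ubuntu pod to actually start, you could first create a Secret with the expected name; the key and value here are arbitrary placeholders:
$ kubectl create secret generic secret-password --from-literal=password=changeme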
Edit the values to reproduce the error
The error you got, as described earlier, is most probably caused by the missing value {{ .Values.secret.secretvol }}.
If you were to edit the values.yaml file to:
usedImage: ubuntu
# confidentialName: secret-password # name of the secret in Kubernetes
Notice the added #.
You should get below error when trying to deploy this chart:
Error: release test-pod failed: Pod "ubuntu" is invalid: [spec.volumes[0].secret.secretName: Required value, spec.containers[0].volumeMounts[0].name: Not found: "secretvol"]
I previously mentioned the --debug --dry-run parameters for Helm.
If you run:
$ helm install --name test-pod --debug --dry-run .
You should get the output similar to this (this is only the part):
---
# Source: helm-pod/templates/pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: ubuntu # value from "values.yaml"
labels:
app: ubuntu # value from "values.yaml"
spec:
restartPolicy: Never
containers:
- name: ubuntu # value from "values.yaml"
image: ubuntu # value from "values.yaml"
imagePullPolicy: Always
command:
- sleep
- infinity
volumeMounts:
- name: secretvol # same name as in spec.volumes.name
mountPath: "/etc/secret-vol"
readOnly: true
volumes:
- name: secretvol # same name as in spec.containers.volumeMounts.name
secret:
secretName: # value from "values.yaml"
As you can see, the value of secretName is missing. That's the reason the above error shows up.
secretName: # value from "values.yaml"
Thank you Dawik, here is the output:
2020-07-10T11:34:26.3090526Z LAST DEPLOYED: Fri Jul 10 11:34:25 2020
2020-07-10T11:34:26.3091661Z NAMESPACE: sda-NAME
2020-07-10T11:34:26.3092410Z STATUS: pending-upgrade
2020-07-10T11:34:26.3092796Z REVISION: 13
2020-07-10T11:34:26.3093182Z TEST SUITE: None
2020-07-10T11:34:26.3093781Z USER-SUPPLIED VALUES:
2020-07-10T11:34:26.3105880Z affinity: {}
2020-07-10T11:34:26.3106801Z containers:
2020-07-10T11:34:26.3107446Z port: 8080
2020-07-10T11:34:26.3108124Z portName: http
2020-07-10T11:34:26.3108769Z protocol: TCP
2020-07-10T11:34:26.3109440Z env:
2020-07-10T11:34:26.3110613Z APP_NAME: NAME
2020-07-10T11:34:26.3112959Z JAVA_OPTS_EXT: -Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts
2020-07-10T11:34:26.3115219Z -Djavax.net.ssl.trustStorePassword=changeit
2020-07-10T11:34:26.3116160Z SPRING_CLOUD_CONFIG_PROFILE: pro
2020-07-10T11:34:26.3116974Z TZ: Europe/Madrid
2020-07-10T11:34:26.3117647Z WILY_MOM_PORT: 5001
2020-07-10T11:34:26.3119640Z spring_application_name: NAME
2020-07-10T11:34:26.3121048Z spring_cloud_config_uri: URI
2020-07-10T11:34:26.3122038Z envSecrets: {}
2020-07-10T11:34:26.3122789Z fullnameOverride: ""
2020-07-10T11:34:26.3123489Z image:
2020-07-10T11:34:26.3124470Z pullPolicy: Always
2020-07-10T11:34:26.3125908Z repository: NAME-REPO
2020-07-10T11:34:26.3126955Z imagePullSecrets: []
2020-07-10T11:34:26.3127675Z ingress:
2020-07-10T11:34:26.3128727Z enabled: ***
2020-07-10T11:34:26.3129509Z livenessProbe: {}
2020-07-10T11:34:26.3130143Z nameOverride: ""
2020-07-10T11:34:26.3131148Z nameSpace: sda-NAME
2020-07-10T11:34:26.3131820Z nodeSelector: {}
2020-07-10T11:34:26.3132444Z podSecurityContext: {}
2020-07-10T11:34:26.3133135Z readinessProbe: {}
2020-07-10T11:34:26.3133742Z replicaCount: 1
2020-07-10T11:34:26.3134636Z resources:
2020-07-10T11:34:26.3135362Z limits:
2020-07-10T11:34:26.3135865Z cpu: 150m
2020-07-10T11:34:26.3136404Z memory: 1444Mi
2020-07-10T11:34:26.3137257Z requests:
2020-07-10T11:34:26.3137851Z cpu: 100m
2020-07-10T11:34:26.3138391Z memory: 1024Mi
2020-07-10T11:34:26.3138942Z route:
2020-07-10T11:34:26.3139486Z alternateBackends: []
2020-07-10T11:34:26.3140087Z annotations: null
2020-07-10T11:34:26.3140642Z enabled: true
2020-07-10T11:34:26.3141226Z fullnameOverride: ""
2020-07-10T11:34:26.3142695Z host:HOST-NAME
2020-07-10T11:34:26.3143480Z labels: null
2020-07-10T11:34:26.3144217Z nameOverride: ""
2020-07-10T11:34:26.3145137Z path: ""
2020-07-10T11:34:26.3145637Z service:
2020-07-10T11:34:26.3146439Z name: NAME
2020-07-10T11:34:26.3147049Z targetPort: http
2020-07-10T11:34:26.3147607Z weight: 100
2020-07-10T11:34:26.3148121Z status: ""
2020-07-10T11:34:26.3148623Z tls:
2020-07-10T11:34:26.3149162Z caCertificate: null
2020-07-10T11:34:26.3149820Z certificate: null
2020-07-10T11:34:26.3150467Z destinationCACertificate: null
2020-07-10T11:34:26.3151091Z enabled: true
2020-07-10T11:34:26.3151847Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3152483Z key: null
2020-07-10T11:34:26.3153032Z termination: edge
2020-07-10T11:34:26.3154104Z wildcardPolicy: None
2020-07-10T11:34:26.3155687Z secret:
2020-07-10T11:34:26.3156714Z jks: NAME-jks
2020-07-10T11:34:26.3157408Z jssecacerts: jssecacerts
2020-07-10T11:34:26.3157962Z securityContext: {}
2020-07-10T11:34:26.3158490Z service:
2020-07-10T11:34:26.3159127Z containerPort: 8080
2020-07-10T11:34:26.3159627Z port: 8080
2020-07-10T11:34:26.3160103Z portName: http
2020-07-10T11:34:26.3160759Z targetPort: 8080
2020-07-10T11:34:26.3161219Z type: ClusterIP
2020-07-10T11:34:26.3161694Z serviceAccount:
2020-07-10T11:34:26.3162482Z create: ***
2020-07-10T11:34:26.3162990Z name: null
2020-07-10T11:34:26.3163451Z tolerations: []
2020-07-10T11:34:26.3163836Z
2020-07-10T11:34:26.3164534Z COMPUTED VALUES:
2020-07-10T11:34:26.3165022Z affinity: {}
2020-07-10T11:34:26.3165474Z containers:
2020-07-10T11:34:26.3165931Z port: 8080
2020-07-10T11:34:26.3166382Z portName: http
2020-07-10T11:34:26.3166861Z protocol: TCP
2020-07-10T11:34:26.3167284Z env:
2020-07-10T11:34:26.3168046Z APP_NAME: NAME
2020-07-10T11:34:26.3169887Z JAVA_OPTS_EXT: -Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts
2020-07-10T11:34:26.3175782Z -Djavax.net.ssl.trustStorePassword=changeit
2020-07-10T11:34:26.3176587Z SPRING_CLOUD_CONFIG_PROFILE: pro
2020-07-10T11:34:26.3177184Z TZ: Europe/Madrid
2020-07-10T11:34:26.3177683Z WILY_MOM_PORT: 5001
2020-07-10T11:34:26.3178559Z spring_application_name: NAME
2020-07-10T11:34:26.3179807Z spring_cloud_config_uri: https://URL
2020-07-10T11:34:26.3181055Z envSecrets: {}
2020-07-10T11:34:26.3181569Z fullnameOverride: ""
2020-07-10T11:34:26.3182077Z image:
2020-07-10T11:34:26.3182707Z pullPolicy: Always
2020-07-10T11:34:26.3184026Z repository: REPO
2020-07-10T11:34:26.3185001Z imagePullSecrets: []
2020-07-10T11:34:26.3185461Z ingress:
2020-07-10T11:34:26.3186215Z enabled: ***
2020-07-10T11:34:26.3186709Z livenessProbe: {}
2020-07-10T11:34:26.3187187Z nameOverride: ""
2020-07-10T11:34:26.3188416Z nameSpace: sda-NAME
2020-07-10T11:34:26.3189008Z nodeSelector: {}
2020-07-10T11:34:26.3189522Z podSecurityContext: {}
2020-07-10T11:34:26.3190056Z readinessProbe: {}
2020-07-10T11:34:26.3190552Z replicaCount: 1
2020-07-10T11:34:26.3191030Z resources:
2020-07-10T11:34:26.3191686Z limits:
2020-07-10T11:34:26.3192320Z cpu: 150m
2020-07-10T11:34:26.3192819Z memory: 1444Mi
2020-07-10T11:34:26.3193319Z requests:
2020-07-10T11:34:26.3193797Z cpu: 100m
2020-07-10T11:34:26.3194463Z memory: 1024Mi
2020-07-10T11:34:26.3194975Z route:
2020-07-10T11:34:26.3195470Z alternateBackends: []
2020-07-10T11:34:26.3196028Z enabled: true
2020-07-10T11:34:26.3196556Z fullnameOverride: ""
2020-07-10T11:34:26.3197601Z host: HOST-NAME
2020-07-10T11:34:26.3198314Z nameOverride: ""
2020-07-10T11:34:26.3198828Z path: ""
2020-07-10T11:34:26.3199285Z service:
2020-07-10T11:34:26.3200023Z name: NAME
2020-07-10T11:34:26.3233791Z targetPort: http
2020-07-10T11:34:26.3234697Z weight: 100
2020-07-10T11:34:26.3235283Z status: ""
2020-07-10T11:34:26.3235819Z tls:
2020-07-10T11:34:26.3236787Z enabled: true
2020-07-10T11:34:26.3237479Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3238168Z termination: edge
2020-07-10T11:34:26.3238800Z wildcardPolicy: None
2020-07-10T11:34:26.3239421Z secret:
2020-07-10T11:34:26.3240502Z jks: NAME-servers-jks
2020-07-10T11:34:26.3241249Z jssecacerts: jssecacerts
2020-07-10T11:34:26.3241901Z securityContext: {}
2020-07-10T11:34:26.3242534Z service:
2020-07-10T11:34:26.3243157Z containerPort: 8080
2020-07-10T11:34:26.3243770Z port: 8080
2020-07-10T11:34:26.3244543Z portName: http
2020-07-10T11:34:26.3245190Z targetPort: 8080
2020-07-10T11:34:26.3245772Z type: ClusterIP
2020-07-10T11:34:26.3246343Z serviceAccount:
2020-07-10T11:34:26.3247308Z create: ***
2020-07-10T11:34:26.3247993Z tolerations: []
2020-07-10T11:34:26.3248511Z
2020-07-10T11:34:26.3249065Z HOOKS:
2020-07-10T11:34:26.3249600Z MANIFEST:
2020-07-10T11:34:26.3250504Z ---
2020-07-10T11:34:26.3252176Z # Source: NAME/templates/service.yaml
2020-07-10T11:34:26.3253107Z apiVersion: v1
2020-07-10T11:34:26.3253715Z kind: Service
2020-07-10T11:34:26.3254487Z metadata:
2020-07-10T11:34:26.3255338Z name: NAME
2020-07-10T11:34:26.3256318Z namespace: sda-NAME
2020-07-10T11:34:26.3256883Z labels:
2020-07-10T11:34:26.3257666Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3258533Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3259785Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3260503Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3261383Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3261955Z spec:
2020-07-10T11:34:26.3262427Z type: ClusterIP
2020-07-10T11:34:26.3263292Z ports:
2020-07-10T11:34:26.3264086Z - port: 8080
2020-07-10T11:34:26.3264659Z targetPort: 8080
2020-07-10T11:34:26.3265359Z protocol: TCP
2020-07-10T11:34:26.3265900Z name: http
2020-07-10T11:34:26.3266361Z selector:
2020-07-10T11:34:26.3267220Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3268298Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3269380Z ---
2020-07-10T11:34:26.3270539Z # Source: NAME/templates/deployment.yaml
2020-07-10T11:34:26.3271606Z apiVersion: apps/v1
2020-07-10T11:34:26.3272400Z kind: Deployment
2020-07-10T11:34:26.3273326Z metadata:
2020-07-10T11:34:26.3274457Z name: NAME
2020-07-10T11:34:26.3275511Z namespace: sda-NAME
2020-07-10T11:34:26.3276177Z labels:
2020-07-10T11:34:26.3277219Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3278322Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3279447Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3280249Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3281398Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3282289Z spec:
2020-07-10T11:34:26.3282881Z replicas: 1
2020-07-10T11:34:26.3283505Z selector:
2020-07-10T11:34:26.3284469Z matchLabels:
2020-07-10T11:34:26.3285628Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3286815Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3287549Z template:
2020-07-10T11:34:26.3288192Z metadata:
2020-07-10T11:34:26.3288826Z labels:
2020-07-10T11:34:26.3289909Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3291596Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3292439Z spec:
2020-07-10T11:34:26.3293109Z serviceAccountName: default
2020-07-10T11:34:26.3293774Z securityContext:
2020-07-10T11:34:26.3294666Z {}
2020-07-10T11:34:26.3295217Z containers:
2020-07-10T11:34:26.3296338Z - name: NAME
2020-07-10T11:34:26.3297240Z securityContext:
2020-07-10T11:34:26.3297859Z {}
2020-07-10T11:34:26.3299353Z image: "REGISTRY-IMAGE"
2020-07-10T11:34:26.3300638Z imagePullPolicy: Always
2020-07-10T11:34:26.3301358Z ports:
2020-07-10T11:34:26.3302491Z - name:
2020-07-10T11:34:26.3303380Z containerPort: 8080
2020-07-10T11:34:26.3304479Z protocol: TCP
2020-07-10T11:34:26.3305325Z env:
2020-07-10T11:34:26.3306418Z - name: APP_NAME
2020-07-10T11:34:26.3307576Z value: "NAME"
2020-07-10T11:34:26.3308757Z - name: JAVA_OPTS_EXT
2020-07-10T11:34:26.3311974Z value: "-Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts -Djavax.net.ssl.trustStorePassword=changeit"
2020-07-10T11:34:26.3313760Z - name: SPRING_CLOUD_CONFIG_PROFILE
2020-07-10T11:34:26.3314842Z value: "pro"
2020-07-10T11:34:26.3315890Z - name: TZ
2020-07-10T11:34:26.3316777Z value: "Europe/Madrid"
2020-07-10T11:34:26.3317863Z - name: WILY_MOM_PORT
2020-07-10T11:34:26.3318485Z value: "5001"
2020-07-10T11:34:26.3319421Z - name: spring_application_name
2020-07-10T11:34:26.3320679Z value: "NAME"
2020-07-10T11:34:26.3321858Z - name: spring_cloud_config_uri
2020-07-10T11:34:26.3323093Z value: "https://config.sda-NAME-pro.svc.cluster.local"
2020-07-10T11:34:26.3324190Z resources:
2020-07-10T11:34:26.3324905Z limits:
2020-07-10T11:34:26.3325439Z cpu: 150m
2020-07-10T11:34:26.3325985Z memory: 1444Mi
2020-07-10T11:34:26.3326739Z requests:
2020-07-10T11:34:26.3327305Z cpu: 100m
2020-07-10T11:34:26.3327875Z memory: 1024Mi
2020-07-10T11:34:26.3328436Z volumeMounts:
2020-07-10T11:34:26.3329476Z - name: jks
2020-07-10T11:34:26.3330147Z mountPath: "/etc/jks"
2020-07-10T11:34:26.3331153Z readOnly: true
2020-07-10T11:34:26.3332053Z - name: jssecacerts
2020-07-10T11:34:26.3332739Z mountPath: "/etc/truststore"
2020-07-10T11:34:26.3333356Z readOnly: true
2020-07-10T11:34:26.3334402Z - name: secretvol
2020-07-10T11:34:26.3335565Z mountPath: "/etc/secret-vol"
2020-07-10T11:34:26.3336302Z readOnly: true
2020-07-10T11:34:26.3336935Z volumes:
2020-07-10T11:34:26.3338100Z - name: jks
2020-07-10T11:34:26.3338724Z secret:
2020-07-10T11:34:26.3339946Z secretName: NAME-servers-jks
2020-07-10T11:34:26.3340817Z - name: secretvol
2020-07-10T11:34:26.3341347Z secret:
2020-07-10T11:34:26.3341870Z secretName:
2020-07-10T11:34:26.3342633Z - name: jssecacerts
2020-07-10T11:34:26.3343444Z secret:
2020-07-10T11:34:26.3344103Z secretName: jssecacerts
2020-07-10T11:34:26.3344866Z ---
2020-07-10T11:34:26.3345846Z # Source: NAME/templates/route.yaml
2020-07-10T11:34:26.3346641Z apiVersion: route.openshift.io/v1
2020-07-10T11:34:26.3347112Z kind: Route
2020-07-10T11:34:26.3347568Z metadata:
2020-07-10T11:34:26.3354831Z name: NAME
2020-07-10T11:34:26.3357144Z labels:
2020-07-10T11:34:26.3358020Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3359360Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3360306Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3361002Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3361888Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3362463Z spec:
2020-07-10T11:34:26.3363374Z host: HOST
2020-07-10T11:34:26.3364364Z path:
2020-07-10T11:34:26.3364940Z wildcardPolicy: None
2020-07-10T11:34:26.3365630Z port:
2020-07-10T11:34:26.3366080Z targetPort: http
2020-07-10T11:34:26.3366496Z tls:
2020-07-10T11:34:26.3367144Z termination: edge
2020-07-10T11:34:26.3367630Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3368072Z to:
2020-07-10T11:34:26.3368572Z kind: Service
2020-07-10T11:34:26.3369571Z name: NAME
2020-07-10T11:34:26.3369919Z weight: 100
2020-07-10T11:34:26.3370115Z status:
2020-07-10T11:34:26.3370287Z ingress: []
2020-07-10T11:34:26.3370419Z
2020-07-10T11:34:26.3370579Z NOTES:
2020-07-10T11:34:26.3370833Z 1. Get the application URL by running these commands:
2020-07-10T11:34:26.3371698Z export POD_NAME=$(kubectl get pods --namespace sda-NAME -l "app.kubernetes.io/name=NAME,app.kubernetes.io/instance=NAME" -o jsonpath="{.items[0].metadata.name}")
2020-07-10T11:34:26.3372278Z echo "Visit http://127.0.0.1:8080 to use your application"
2020-07-10T11:34:26.3373358Z kubectl --namespace sda-NAME port-forward $POD_NAME 8080:80
2020-07-10T11:34:26.3373586Z
2020-07-10T11:34:26.3385047Z ##[section]Finishing: Helm Install/Upgrade NAME
It looks fine and doesn't show any errors... but if I run it without --dry-run it crashes at the same point...
On the other hand, if I try it without this volume and secret... it works perfectly! I don't understand it.
Thank you for your patience and guidance.
UPDATE & FIX:
Finally, the problem was in the file values-NAME.yml:
secret:
jks: VALUE
jssecacerts: VALUE
It needed the following line under secret:
secretvol: VALUE
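So the complete secret block in values-NAME.yml ends up looking like this (VALUE kept as a placeholder, as above):
secret:
  jks: VALUE
  jssecacerts: VALUE
  secretvol: VALUE # name of the Secret that gets mounted at /etc/secret-vol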

Kubernetes Helm Chart - Debugging

I'm unable to find good information describing these errors:
[sarah#localhost helm] helm install statefulset --name statefulset --debug
[debug] Created tunnel using local port: '33172'
[debug] SERVER: "localhost:33172"
[debug] Original chart version: ""
[debug] CHART PATH: /home/helm/statefulset/
Error: error validating "": error validating data: [field spec.template for v1beta1.StatefulSetSpec is required, field spec.serviceName for v1beta1.StatefulSetSpec is required, found invalid field containers for v1beta1.StatefulSetSpec]
I'm still new to Helm; I've built two working charts that were similar to this template and didn't have these errors, even though the code isn't much different. I'm thinking there might be some kind of formatting error that I'm not noticing. Either that, or it's due to the different type (the others were Pods, this is StatefulSet).
The YAML file it's referencing is here:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.PrimaryName}}"
labels:
name: "{{.Values.PrimaryName}}"
app: "{{.Values.PrimaryName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
#serviceAccount: "{{.Values.PrimaryName}}-sa"
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
protocol: TCP
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}"
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
Would someone be able to a) point me in the right direction to find out how to implement the required spec.template and spec.serviceName fields, b) help me understand why the field 'containers' is invalid, and/or c) mention any tool that can help debug Helm charts? I've tried 'helm lint' and the '--debug' flag, but 'helm lint' shows no errors, and the flag output is shown with the errors above.
Is it possible the errors are coming from a different file, also?
StatefulSet objects have a different structure than Pods. You need to modify your YAML file a little:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.PrimaryName}}"
labels:
name: "{{.Values.PrimaryName}}"
app: "{{.Values.PrimaryName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
selector:
matchLabels:
app: "" # has to match .spec.template.metadata.labels
serviceName: "" # put your serviceName here
replicas: 1 # by default is 1
template:
metadata:
labels:
app: "" # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
protocol: TCP
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
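As for tooling, two generic checks that usually catch this kind of schema problem (the commands assume roughly the Helm 2 CLI used above; newer versions differ slightly):
helm lint ./statefulset
helm install --name statefulset --dry-run --debug ./statefulset
helm lint mostly catches templating and chart-structure issues (which is why it reported nothing here), while the --dry-run render lets you read the final manifest and spot missing required fields such as spec.serviceName and spec.template before the API server rejects them.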