How can I add a password to Redis in a Helm template - kubernetes

I have described a Helm template for Redis:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Values.rds.service.name }}"
  namespace: {{ .Values.environment.namespace }}
spec:
  selector:
    matchLabels:
      app: "{{ .Values.rds.service.name }}"
  template:
    metadata:
      labels:
        app: "{{ .Values.rds.service.name }}"
        component_type: "{{ .Values.component_type.name }}"
    spec:
      containers:
        - image: "{{ .Values.rds.docker.hub }}{{ .Values.rds.docker.image }}"
          name: "{{ .Values.rds.service.name }}"
          env:
            - name: REDIS_PASSWORD
              value: "9dtjger"
          ports:
            - containerPort: {{ toYaml .Values.rds.service.port | indent 5 }}
          resources:
            requests:
              memory: "{{ .Values.rds.resources.requests.memory }}"
              cpu: "{{ .Values.rds.resources.requests.cpu }}"
            limits:
              memory: "{{ .Values.rds.resources.limits.memory }}"
              cpu: "{{ .Values.rds.resources.limits.cpu }}"
---
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Values.rds.service.name }}"
  namespace: "{{ .Values.environment.namespace }}"
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Department=sre,Team=engage-devops,Environment=ev-lab
    external-dns.alpha.kubernetes.io/hostname: {{ .Values.rds.service.name }}.{{ .Values.environment.namespace }}.lab.engage.ringcentral.com
spec:
  ports:
    - protocol: TCP
      port: {{ toYaml .Values.rds.service.port | indent 5 }}
  selector:
    app: "{{ .Values.rds.service.name }}"
  type: LoadBalancer
Then I deployed it via kubectl apply:
kubectl -n mybanespace describe pod rds-5b6996bf-m6pbr
Name:          rds-5b6996bf-m6pbr
Namespace:     okta-cc-6
Priority:      0
Node:          ip-10-8-29-49.eu-central-1.compute.internal/10.8.29.49
Start Time:    Mon, 03 Aug 2020 21:55:09 +0300
Labels:        app=rds
               component_type=evt
               pod-template-hash=5b6996bf
Annotations:   kubernetes.io/psp: eks.privileged
Status:        Running
IP:            10.8.29.39
IPs:           <none>
Controlled By: ReplicaSet/rds-5b6996bf
Containers:
  rds:
    Container ID:   docker://3be73237324f8ba8c0a38420fceffcee65eb386e93afd8efa309212527761c74
    Image:          redis:6.0.6
    Image ID:       docker-pullable://redis@sha256:d86d6739fab2eaf590cfa51eccf1e9779677bd2502894579bcf3f80cb37b18d4
    Port:           6379/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 03 Aug 2020 21:55:14 +0300
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     40m
      memory:  100Mi
    Requests:
      cpu:     2m
      memory:  10Mi
    Environment:
      REDIS_PASSWORD:  9dtjger
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-78c7b (ro)
When I try to connect to Redis with the password, I get the error 'Warning: AUTH failed'; if I connect without a password, the connection succeeds. I don't know why the password doesn't work when I use:
env:
  - name: REDIS_PASSWORD
    value: "9dtjger"
In docker-compose locally I cannot connect without a password.
How can I set a password for Redis in Kubernetes?

You didn't specify how you installed Redis (which Helm chart). Essentially, adding --requirepass ${REDIS_PASSWORD} and --masterauth ${REDIS_PASSWORD} should do it.
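For reference, the stock redis image used here does not act on a REDIS_PASSWORD environment variable by itself; the flag has to reach redis-server. Below is a minimal sketch of the container section, reusing the env var from the question (sourcing it from a Secret via valueFrom would be better):
containers:
  - image: "{{ .Values.rds.docker.hub }}{{ .Values.rds.docker.image }}"
    name: "{{ .Values.rds.service.name }}"
    env:
      - name: REDIS_PASSWORD
        value: "9dtjger"                             # ideally valueFrom.secretKeyRef instead of a literal
    command: ["redis-server"]
    args: ["--requirepass", "$(REDIS_PASSWORD)"]     # $(VAR) is expanded by Kubernetes from the env list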
If, for example, you used the Bitnami Helm chart, you can use the usePassword parameter, which the chart's templates then wire into the Redis configuration.
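If you go the Bitnami route, the values would look roughly like this (a sketch; parameter names have changed between chart versions, so check the chart's own values.yaml):
# values for bitnami/redis (older chart versions)
usePassword: true
password: "9dtjger"       # or set existingSecret to reference a pre-created Secret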

Related

AKS - Pods created by HPA trigger are getting terminated immediately after they are created

When we looked into the events in AKS, we observed the below error for all the pods that were created and terminated:
2m47s Warning FailedMount pod/app-fd6c6b8d9-ssr2t Unable to attach or mount volumes: unmounted volumes=[log-volume config-volume log4j2 secrets-app-inline kube-api-access-z49xc], unattached volumes=[log-volume config-volume log4j2 secrets-app-inline kube-api-access-z49xc]: timed out waiting for the condition
We already have 2 replicas of the application running, so we don't think the error is due to the AccessModes of the volumes.
Below is the HPA config:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: app-cpu-hpa
  namespace: namespace-dev
spec:
  maxReplicas: 5
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageValue: 500m
Below is the deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
    group: app
    obs: appd
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/app: runtime/default
      labels:
        app: app
        group: app
        obs: appd
    spec:
      containers:
      - name: app
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
          runAsGroup: 2000
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        resources:
          limits:
            cpu: {{ .Values.app.limits.cpu }}
            memory: {{ .Values.app.limits.memory }}
          requests:
            cpu: {{ .Values.app.requests.cpu }}
            memory: {{ .Values.app.requests.memory }}
        env:
        - name: LOG_DIR_PATH
          value: /opt/apps/
        volumeMounts:
        - name: log-volume
          mountPath: /opt/apps/app/logs
        - name: config-volume
          mountPath: /script/start.sh
          subPath: start.sh
        - name: log4j2
          mountPath: /opt/appdynamics-java/ver21.9.0.33073/conf/logging/log4j2.xml
          subPath: log4j2.xml
        - name: secrets-app-inline
          mountPath: "/mnt/secrets-app"
          readOnly: true
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/info
            port: {{ .Values.metrics.port }}
            scheme: "HTTP"
            httpHeaders:
            - name: Authorization
              value: "Basic XXX50aXXXXXX=="
            - name: cache-control
              value: "no-cache"
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
          initialDelaySeconds: 60
        livenessProbe:
          httpGet:
            path: /actuator/info
            port: {{ .Values.metrics.port }}
            scheme: "HTTP"
            httpHeaders:
            - name: Authorization
              value: "Basic XXX50aXXXXXX=="
            - name: cache-control
              value: "no-cache"
          initialDelaySeconds: 300
          periodSeconds: 5
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
      volumes:
      - name: log-volume
        persistentVolumeClaim:
          claimName: {{ .Values.apppvc.name }}
      - name: config-volume
        configMap:
          name: {{ .Values.configmap.name }}-configmap
          defaultMode: 0755
      - name: secrets-app-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "app-kv-secret"
          nodePublishSecretRef:
            name: secrets-app-creds
      - name: log4j2
        configMap:
          name: log4j2
          defaultMode: 0755
      restartPolicy: Always
      imagePullSecrets:
      - name: {{ .Values.imagePullSecrets }}
Can someone please let me know where the config might be going wrong?

Kubernetes deployment stuck on pending after create pvc

I'm trying to create persistent storage to share with all of my applications in the K8s cluster.
storageClass.yaml file:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
persistentVolume.yaml file:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-local-pv
spec:
  capacity:
    storage: 50Mi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-storage
  local:
    path: /base-xapp/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - juniper-ric
persistentVolumeClaim.yaml file:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      name: my
and finally, this is the deployment yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}-deployment
  labels:
    app: {{ .Values.appName }}
    xappRelease: {{ .Release.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  template:
    metadata:
      labels:
        app: {{ .Values.appName }}
        xappRelease: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Values.appName }}
          image: "{{ .Values.image }}:{{ .Values.tag }}"
          imagePullPolicy: IfNotPresent
          ports:
            - name: rmr
              containerPort: {{ .Values.rmrPort }}
              protocol: TCP
            - name: rtg
              containerPort: {{ .Values.rtgPort }}
              protocol: TCP
          volumeMounts:
            - name: app-cfg
              mountPath: {{ .Values.routingTablePath }}{{ .Values.routingTableFile }}
              subPath: {{ .Values.routingTableFile }}
            - name: app-cfg
              mountPath: {{ .Values.routingTablePath }}{{ .Values.vlevelFile }}
              subPath: {{ .Values.vlevelFile }}
            - name: {{ .Values.appName }}-persistent-storage
              mountPath: {{ .Values.appName }}/data
          envFrom:
            - configMapRef:
                name: {{ .Values.appName }}-configmap
      volumes:
        - name: app-cfg
          configMap:
            name: {{ .Values.appName }}-configmap
            items:
              - key: {{ .Values.routingTableFile }}
                path: {{ .Values.routingTableFile }}
              - key: {{ .Values.vlevelFile }}
                path: {{ .Values.vlevelFile }}
        - name: {{ .Values.appName }}-persistent-storage
          persistentVolumeClaim:
            claimName: my-claim
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.appName }}-rmr-service
  labels:
    xappRelease: {{ .Release.Name }}
spec:
  selector:
    app: {{ .Values.appName }}
  type: NodePort
  ports:
    - name: rmr
      protocol: TCP
      port: {{ .Values.rmrPort }}
      targetPort: {{ .Values.rmrPort }}
    - name: rtg
      protocol: TCP
      port: {{ .Values.rtgPort }}
      targetPort: {{ .Values.rtgPort }}
When I deploy the chart, the pod stays in Pending:
base-xapp-deployment-6799d6cbf6-lgjks   0/1     Pending   0          3m25s
This is the output of the describe:
Name:           base-xapp-deployment-6799d6cbf6-lgjks
Namespace:      near-rt-ric
Priority:       0
Node:           <none>
Labels:         app=base-xapp
                pod-template-hash=6799d6cbf6
                xappRelease=base-xapp
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/base-xapp-deployment-6799d6cbf6
Containers:
  base-xapp:
    Image:        base-xapp:0.1.0
    Ports:        4565/TCP, 4561/TCP
    Host Ports:   0/TCP, 0/TCP
    Environment Variables from:
      base-xapp-configmap  ConfigMap  Optional: false
    Environment:  <none>
    Mounts:
      /rmr_route from app-cfg (rw,path="rmr_route")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rxmwm (ro)
      /vlevel from app-cfg (rw,path="vlevel")
      base-xapp/data from base-xapp-persistent-storage (rw)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  app-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      base-xapp-configmap
    Optional:  false
  base-xapp-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  my-claim
    ReadOnly:   false
  kube-api-access-rxmwm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  10s (x6 over 4m22s)  default-scheduler  0/1 nodes are available: 1 persistentvolumeclaim "my-claim" not found.
This is the output of kubectl for these resources:
get pv:
dan@linux$ kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS       REASON   AGE
my-local-pv   50Mi       RWO            Retain           Available           my-local-storage            6m2s
get pvc:
dan@linux$ kubectl get pvc
NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS       AGE
my-claim   Pending                                      my-local-storage   36m
You're missing spec.volumeName in your PVC manifest.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
spec:
  volumeName: my-local-pv # this line was missing
  accessModes:
    - ReadWriteOnce
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      name: my
I can see that your deployment has the namespace near-rt-ric, but your PVC doesn't specify a namespace, so it was probably placed in the default namespace.
Use this command to check: kubectl get pvc -A
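If the claim is meant to sit next to the deployment, giving it an explicit namespace avoids that mismatch. A minimal sketch using the near-rt-ric namespace from the pod description (remaining fields as in the original manifest):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
  namespace: near-rt-ric   # same namespace the deployment is installed into
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 50Mi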

HELM UPGRADE ISSUE: spec.template.spec.containers[0].volumeMounts[2].name: Not found: "NAME"

I have been trying to create a POD with HELM UPGRADE:
helm upgrade --values=$(System.DefaultWorkingDirectory)/_NAME-deploy-CI/drop/values-NAME.yaml --namespace sda-NAME-pro --install --reset-values --debug --wait NAME .
but running into below error:
2020-07-08T12:51:28.0678161Z upgrade.go:367: [debug] warning: Upgrade "NAME" failed: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
2020-07-08T12:51:28.0899772Z Error: UPGRADE FAILED: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
YML part
volumeMounts:
  - name: secretvol
    mountPath: "/etc/secret-vol"
    readOnly: true
volumes:
  - name: jks
    secret:
      secretName: {{ .Values.secret.jks }}
  - name: secretvol
    secret:
      secretName: {{ .Values.secret.secretvol }}
Maybe the first deploy needs another command the first time? How can I specify these values to test it?
TL;DR
The issue you've encountered:
2020-07-08T12:51:28.0899772Z Error: UPGRADE FAILED: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
is connected with the fact that the variable: {{ .Values.secret.secretvol }} is missing.
To fix it you will need to set this value in either:
the Helm command that you are using, or
the file that stores your values in the Helm chart (see the sketch below).
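A minimal sketch of the values-file route, assuming the Secret already exists in the cluster; the secretvol value below is a placeholder for your real Secret name:
# values-NAME.yaml
secret:
  jks: NAME-jks
  jssecacerts: jssecacerts
  secretvol: NAME-secret-vol   # the value that was missing, causing "secretName: Required value"
The same value can also be supplied on the command line with --set secret.secretvol=<secret-name>.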
A tip!
You can run your Helm command with --debug --dry-run to output the generated YAML. This should show you where the errors could be located.
There is official documentation about values in Helm. Please take a look here:
Helm.sh: Docs: Chart template guide: Values files
Based on:
I have been trying to create a POD with HELM UPGRADE:
I've made an example of your issue and how you can fix it.
Steps:
Create a helm chart with correct values
Edit the values to reproduce the error
Create a helm chart
For the simplicity of the setup I created a basic Helm chart.
Below is the structure of files and directories:
❯ tree helm-dir
helm-dir
├── Chart.yaml
├── templates
│   └── pod.yaml
└── values.yaml
1 directory, 3 files
Create Chart.yaml file
Below is the Chart.yaml file:
apiVersion: v2
name: helm-pod
description: A Helm chart for spawning pod with volumeMount
version: 0.1.0
Create a values.yaml file
Below is the simple values.yaml file which will be used by default in the $ helm install command
usedImage: ubuntu
confidentialName: secret-password # name of the secret in Kubernetes
Create a template for a pod
This template is stored in the templates directory under the name pod.yaml.
The below YAML definition will be the template for the spawned pod:
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.usedImage }} # value from "values.yaml"
  labels:
    app: {{ .Values.usedImage }} # value from "values.yaml"
spec:
  restartPolicy: Never
  containers:
    - name: {{ .Values.usedImage }} # value from "values.yaml"
      image: {{ .Values.usedImage }} # value from "values.yaml"
      imagePullPolicy: Always
      command:
        - sleep
        - infinity
      volumeMounts:
        - name: secretvol # same name as in spec.volumes.name
          mountPath: "/etc/secret-vol"
          readOnly: true
  volumes:
    - name: secretvol # same name as in spec.containers.volumeMounts.name
      secret:
        secretName: {{ .Values.confidentialName }} # value from "values.yaml"
With the above example you should be able to run $ helm install --name test-pod .
You should get output similar to this:
NAME: test-pod
LAST DEPLOYED: Thu Jul 9 14:47:46 2020
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod
NAME    READY  STATUS             RESTARTS  AGE
ubuntu  0/1    ContainerCreating  0         0s
Disclaimer!
The ubuntu pod is in the ContainerCreating state as there is no secret named secret-password in the cluster.
You can get more information about your pods by running:
$ kubectl describe pod POD_NAME
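If you want the example pod to actually reach Running, the Secret it references can be created first. A minimal sketch (the key and value are placeholders; only the name has to match confidentialName from values.yaml):
apiVersion: v1
kind: Secret
metadata:
  name: secret-password   # matches confidentialName in values.yaml
type: Opaque
stringData:
  password: changeme      # placeholder content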
Edit the values to reproduce the error
The error you got as described earlier is most probably connected with the fact that the value: {{ .Values.secret.secretvol }} was missing.
If you were to edit the values.yaml file to:
usedImage: ubuntu
# confidentialName: secret-password # name of the secret in Kubernetes
Notice the added #.
You should get below error when trying to deploy this chart:
Error: release test-pod failed: Pod "ubuntu" is invalid: [spec.volumes[0].secret.secretName: Required value, spec.containers[0].volumeMounts[0].name: Not found: "secretvol"]
I previously mentioned the --debug --dry-run parameters for Helm.
If you run:
$ helm install --name test-pod --debug --dry-run .
You should get output similar to this (this is only part of it):
---
# Source: helm-pod/templates/pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu # value from "values.yaml"
  labels:
    app: ubuntu # value from "values.yaml"
spec:
  restartPolicy: Never
  containers:
    - name: ubuntu # value from "values.yaml"
      image: ubuntu # value from "values.yaml"
      imagePullPolicy: Always
      command:
        - sleep
        - infinity
      volumeMounts:
        - name: secretvol # same name as in spec.volumes.name
          mountPath: "/etc/secret-vol"
          readOnly: true
  volumes:
    - name: secretvol # same name as in spec.containers.volumeMounts.name
      secret:
        secretName: # value from "values.yaml"
As you can see, the value of secretName was missing. That's the reason the above error was showing up.
secretName: # value from "values.yaml"
Thank you Dawik, here we have the output:
2020-07-10T11:34:26.3090526Z LAST DEPLOYED: Fri Jul 10 11:34:25 2020
2020-07-10T11:34:26.3091661Z NAMESPACE: sda-NAME
2020-07-10T11:34:26.3092410Z STATUS: pending-upgrade
2020-07-10T11:34:26.3092796Z REVISION: 13
2020-07-10T11:34:26.3093182Z TEST SUITE: None
2020-07-10T11:34:26.3093781Z USER-SUPPLIED VALUES:
2020-07-10T11:34:26.3105880Z affinity: {}
2020-07-10T11:34:26.3106801Z containers:
2020-07-10T11:34:26.3107446Z port: 8080
2020-07-10T11:34:26.3108124Z portName: http
2020-07-10T11:34:26.3108769Z protocol: TCP
2020-07-10T11:34:26.3109440Z env:
2020-07-10T11:34:26.3110613Z APP_NAME: NAME
2020-07-10T11:34:26.3112959Z JAVA_OPTS_EXT: -Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts
2020-07-10T11:34:26.3115219Z -Djavax.net.ssl.trustStorePassword=changeit
2020-07-10T11:34:26.3116160Z SPRING_CLOUD_CONFIG_PROFILE: pro
2020-07-10T11:34:26.3116974Z TZ: Europe/Madrid
2020-07-10T11:34:26.3117647Z WILY_MOM_PORT: 5001
2020-07-10T11:34:26.3119640Z spring_application_name: NAME
2020-07-10T11:34:26.3121048Z spring_cloud_config_uri: URI
2020-07-10T11:34:26.3122038Z envSecrets: {}
2020-07-10T11:34:26.3122789Z fullnameOverride: ""
2020-07-10T11:34:26.3123489Z image:
2020-07-10T11:34:26.3124470Z pullPolicy: Always
2020-07-10T11:34:26.3125908Z repository: NAME-REPO
2020-07-10T11:34:26.3126955Z imagePullSecrets: []
2020-07-10T11:34:26.3127675Z ingress:
2020-07-10T11:34:26.3128727Z enabled: ***
2020-07-10T11:34:26.3129509Z livenessProbe: {}
2020-07-10T11:34:26.3130143Z nameOverride: ""
2020-07-10T11:34:26.3131148Z nameSpace: sda-NAME
2020-07-10T11:34:26.3131820Z nodeSelector: {}
2020-07-10T11:34:26.3132444Z podSecurityContext: {}
2020-07-10T11:34:26.3133135Z readinessProbe: {}
2020-07-10T11:34:26.3133742Z replicaCount: 1
2020-07-10T11:34:26.3134636Z resources:
2020-07-10T11:34:26.3135362Z limits:
2020-07-10T11:34:26.3135865Z cpu: 150m
2020-07-10T11:34:26.3136404Z memory: 1444Mi
2020-07-10T11:34:26.3137257Z requests:
2020-07-10T11:34:26.3137851Z cpu: 100m
2020-07-10T11:34:26.3138391Z memory: 1024Mi
2020-07-10T11:34:26.3138942Z route:
2020-07-10T11:34:26.3139486Z alternateBackends: []
2020-07-10T11:34:26.3140087Z annotations: null
2020-07-10T11:34:26.3140642Z enabled: true
2020-07-10T11:34:26.3141226Z fullnameOverride: ""
2020-07-10T11:34:26.3142695Z host:HOST-NAME
2020-07-10T11:34:26.3143480Z labels: null
2020-07-10T11:34:26.3144217Z nameOverride: ""
2020-07-10T11:34:26.3145137Z path: ""
2020-07-10T11:34:26.3145637Z service:
2020-07-10T11:34:26.3146439Z name: NAME
2020-07-10T11:34:26.3147049Z targetPort: http
2020-07-10T11:34:26.3147607Z weight: 100
2020-07-10T11:34:26.3148121Z status: ""
2020-07-10T11:34:26.3148623Z tls:
2020-07-10T11:34:26.3149162Z caCertificate: null
2020-07-10T11:34:26.3149820Z certificate: null
2020-07-10T11:34:26.3150467Z destinationCACertificate: null
2020-07-10T11:34:26.3151091Z enabled: true
2020-07-10T11:34:26.3151847Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3152483Z key: null
2020-07-10T11:34:26.3153032Z termination: edge
2020-07-10T11:34:26.3154104Z wildcardPolicy: None
2020-07-10T11:34:26.3155687Z secret:
2020-07-10T11:34:26.3156714Z jks: NAME-jks
2020-07-10T11:34:26.3157408Z jssecacerts: jssecacerts
2020-07-10T11:34:26.3157962Z securityContext: {}
2020-07-10T11:34:26.3158490Z service:
2020-07-10T11:34:26.3159127Z containerPort: 8080
2020-07-10T11:34:26.3159627Z port: 8080
2020-07-10T11:34:26.3160103Z portName: http
2020-07-10T11:34:26.3160759Z targetPort: 8080
2020-07-10T11:34:26.3161219Z type: ClusterIP
2020-07-10T11:34:26.3161694Z serviceAccount:
2020-07-10T11:34:26.3162482Z create: ***
2020-07-10T11:34:26.3162990Z name: null
2020-07-10T11:34:26.3163451Z tolerations: []
2020-07-10T11:34:26.3163836Z
2020-07-10T11:34:26.3164534Z COMPUTED VALUES:
2020-07-10T11:34:26.3165022Z affinity: {}
2020-07-10T11:34:26.3165474Z containers:
2020-07-10T11:34:26.3165931Z port: 8080
2020-07-10T11:34:26.3166382Z portName: http
2020-07-10T11:34:26.3166861Z protocol: TCP
2020-07-10T11:34:26.3167284Z env:
2020-07-10T11:34:26.3168046Z APP_NAME: NAME
2020-07-10T11:34:26.3169887Z JAVA_OPTS_EXT: -Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts
2020-07-10T11:34:26.3175782Z -Djavax.net.ssl.trustStorePassword=changeit
2020-07-10T11:34:26.3176587Z SPRING_CLOUD_CONFIG_PROFILE: pro
2020-07-10T11:34:26.3177184Z TZ: Europe/Madrid
2020-07-10T11:34:26.3177683Z WILY_MOM_PORT: 5001
2020-07-10T11:34:26.3178559Z spring_application_name: NAME
2020-07-10T11:34:26.3179807Z spring_cloud_config_uri: https://URL
2020-07-10T11:34:26.3181055Z envSecrets: {}
2020-07-10T11:34:26.3181569Z fullnameOverride: ""
2020-07-10T11:34:26.3182077Z image:
2020-07-10T11:34:26.3182707Z pullPolicy: Always
2020-07-10T11:34:26.3184026Z repository: REPO
2020-07-10T11:34:26.3185001Z imagePullSecrets: []
2020-07-10T11:34:26.3185461Z ingress:
2020-07-10T11:34:26.3186215Z enabled: ***
2020-07-10T11:34:26.3186709Z livenessProbe: {}
2020-07-10T11:34:26.3187187Z nameOverride: ""
2020-07-10T11:34:26.3188416Z nameSpace: sda-NAME
2020-07-10T11:34:26.3189008Z nodeSelector: {}
2020-07-10T11:34:26.3189522Z podSecurityContext: {}
2020-07-10T11:34:26.3190056Z readinessProbe: {}
2020-07-10T11:34:26.3190552Z replicaCount: 1
2020-07-10T11:34:26.3191030Z resources:
2020-07-10T11:34:26.3191686Z limits:
2020-07-10T11:34:26.3192320Z cpu: 150m
2020-07-10T11:34:26.3192819Z memory: 1444Mi
2020-07-10T11:34:26.3193319Z requests:
2020-07-10T11:34:26.3193797Z cpu: 100m
2020-07-10T11:34:26.3194463Z memory: 1024Mi
2020-07-10T11:34:26.3194975Z route:
2020-07-10T11:34:26.3195470Z alternateBackends: []
2020-07-10T11:34:26.3196028Z enabled: true
2020-07-10T11:34:26.3196556Z fullnameOverride: ""
2020-07-10T11:34:26.3197601Z host: HOST-NAME
2020-07-10T11:34:26.3198314Z nameOverride: ""
2020-07-10T11:34:26.3198828Z path: ""
2020-07-10T11:34:26.3199285Z service:
2020-07-10T11:34:26.3200023Z name: NAME
2020-07-10T11:34:26.3233791Z targetPort: http
2020-07-10T11:34:26.3234697Z weight: 100
2020-07-10T11:34:26.3235283Z status: ""
2020-07-10T11:34:26.3235819Z tls:
2020-07-10T11:34:26.3236787Z enabled: true
2020-07-10T11:34:26.3237479Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3238168Z termination: edge
2020-07-10T11:34:26.3238800Z wildcardPolicy: None
2020-07-10T11:34:26.3239421Z secret:
2020-07-10T11:34:26.3240502Z jks: NAME-servers-jks
2020-07-10T11:34:26.3241249Z jssecacerts: jssecacerts
2020-07-10T11:34:26.3241901Z securityContext: {}
2020-07-10T11:34:26.3242534Z service:
2020-07-10T11:34:26.3243157Z containerPort: 8080
2020-07-10T11:34:26.3243770Z port: 8080
2020-07-10T11:34:26.3244543Z portName: http
2020-07-10T11:34:26.3245190Z targetPort: 8080
2020-07-10T11:34:26.3245772Z type: ClusterIP
2020-07-10T11:34:26.3246343Z serviceAccount:
2020-07-10T11:34:26.3247308Z create: ***
2020-07-10T11:34:26.3247993Z tolerations: []
2020-07-10T11:34:26.3248511Z
2020-07-10T11:34:26.3249065Z HOOKS:
2020-07-10T11:34:26.3249600Z MANIFEST:
2020-07-10T11:34:26.3250504Z ---
2020-07-10T11:34:26.3252176Z # Source: NAME/templates/service.yaml
2020-07-10T11:34:26.3253107Z apiVersion: v1
2020-07-10T11:34:26.3253715Z kind: Service
2020-07-10T11:34:26.3254487Z metadata:
2020-07-10T11:34:26.3255338Z name: NAME
2020-07-10T11:34:26.3256318Z namespace: sda-NAME
2020-07-10T11:34:26.3256883Z labels:
2020-07-10T11:34:26.3257666Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3258533Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3259785Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3260503Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3261383Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3261955Z spec:
2020-07-10T11:34:26.3262427Z type: ClusterIP
2020-07-10T11:34:26.3263292Z ports:
2020-07-10T11:34:26.3264086Z - port: 8080
2020-07-10T11:34:26.3264659Z targetPort: 8080
2020-07-10T11:34:26.3265359Z protocol: TCP
2020-07-10T11:34:26.3265900Z name: http
2020-07-10T11:34:26.3266361Z selector:
2020-07-10T11:34:26.3267220Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3268298Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3269380Z ---
2020-07-10T11:34:26.3270539Z # Source: NAME/templates/deployment.yaml
2020-07-10T11:34:26.3271606Z apiVersion: apps/v1
2020-07-10T11:34:26.3272400Z kind: Deployment
2020-07-10T11:34:26.3273326Z metadata:
2020-07-10T11:34:26.3274457Z name: NAME
2020-07-10T11:34:26.3275511Z namespace: sda-NAME
2020-07-10T11:34:26.3276177Z labels:
2020-07-10T11:34:26.3277219Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3278322Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3279447Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3280249Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3281398Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3282289Z spec:
2020-07-10T11:34:26.3282881Z replicas: 1
2020-07-10T11:34:26.3283505Z selector:
2020-07-10T11:34:26.3284469Z matchLabels:
2020-07-10T11:34:26.3285628Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3286815Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3287549Z template:
2020-07-10T11:34:26.3288192Z metadata:
2020-07-10T11:34:26.3288826Z labels:
2020-07-10T11:34:26.3289909Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3291596Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3292439Z spec:
2020-07-10T11:34:26.3293109Z serviceAccountName: default
2020-07-10T11:34:26.3293774Z securityContext:
2020-07-10T11:34:26.3294666Z {}
2020-07-10T11:34:26.3295217Z containers:
2020-07-10T11:34:26.3296338Z - name: NAME
2020-07-10T11:34:26.3297240Z securityContext:
2020-07-10T11:34:26.3297859Z {}
2020-07-10T11:34:26.3299353Z image: "REGISTRY-IMAGE"
2020-07-10T11:34:26.3300638Z imagePullPolicy: Always
2020-07-10T11:34:26.3301358Z ports:
2020-07-10T11:34:26.3302491Z - name:
2020-07-10T11:34:26.3303380Z containerPort: 8080
2020-07-10T11:34:26.3304479Z protocol: TCP
2020-07-10T11:34:26.3305325Z env:
2020-07-10T11:34:26.3306418Z - name: APP_NAME
2020-07-10T11:34:26.3307576Z value: "NAME"
2020-07-10T11:34:26.3308757Z - name: JAVA_OPTS_EXT
2020-07-10T11:34:26.3311974Z value: "-Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts -Djavax.net.ssl.trustStorePassword=changeit"
2020-07-10T11:34:26.3313760Z - name: SPRING_CLOUD_CONFIG_PROFILE
2020-07-10T11:34:26.3314842Z value: "pro"
2020-07-10T11:34:26.3315890Z - name: TZ
2020-07-10T11:34:26.3316777Z value: "Europe/Madrid"
2020-07-10T11:34:26.3317863Z - name: WILY_MOM_PORT
2020-07-10T11:34:26.3318485Z value: "5001"
2020-07-10T11:34:26.3319421Z - name: spring_application_name
2020-07-10T11:34:26.3320679Z value: "NAME"
2020-07-10T11:34:26.3321858Z - name: spring_cloud_config_uri
2020-07-10T11:34:26.3323093Z value: "https://config.sda-NAME-pro.svc.cluster.local"
2020-07-10T11:34:26.3324190Z resources:
2020-07-10T11:34:26.3324905Z limits:
2020-07-10T11:34:26.3325439Z cpu: 150m
2020-07-10T11:34:26.3325985Z memory: 1444Mi
2020-07-10T11:34:26.3326739Z requests:
2020-07-10T11:34:26.3327305Z cpu: 100m
2020-07-10T11:34:26.3327875Z memory: 1024Mi
2020-07-10T11:34:26.3328436Z volumeMounts:
2020-07-10T11:34:26.3329476Z - name: jks
2020-07-10T11:34:26.3330147Z mountPath: "/etc/jks"
2020-07-10T11:34:26.3331153Z readOnly: true
2020-07-10T11:34:26.3332053Z - name: jssecacerts
2020-07-10T11:34:26.3332739Z mountPath: "/etc/truststore"
2020-07-10T11:34:26.3333356Z readOnly: true
2020-07-10T11:34:26.3334402Z - name: secretvol
2020-07-10T11:34:26.3335565Z mountPath: "/etc/secret-vol"
2020-07-10T11:34:26.3336302Z readOnly: true
2020-07-10T11:34:26.3336935Z volumes:
2020-07-10T11:34:26.3338100Z - name: jks
2020-07-10T11:34:26.3338724Z secret:
2020-07-10T11:34:26.3339946Z secretName: NAME-servers-jks
2020-07-10T11:34:26.3340817Z - name: secretvol
2020-07-10T11:34:26.3341347Z secret:
2020-07-10T11:34:26.3341870Z secretName:
2020-07-10T11:34:26.3342633Z - name: jssecacerts
2020-07-10T11:34:26.3343444Z secret:
2020-07-10T11:34:26.3344103Z secretName: jssecacerts
2020-07-10T11:34:26.3344866Z ---
2020-07-10T11:34:26.3345846Z # Source: NAME/templates/route.yaml
2020-07-10T11:34:26.3346641Z apiVersion: route.openshift.io/v1
2020-07-10T11:34:26.3347112Z kind: Route
2020-07-10T11:34:26.3347568Z metadata:
2020-07-10T11:34:26.3354831Z name: NAME
2020-07-10T11:34:26.3357144Z labels:
2020-07-10T11:34:26.3358020Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3359360Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3360306Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3361002Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3361888Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3362463Z spec:
2020-07-10T11:34:26.3363374Z host: HOST
2020-07-10T11:34:26.3364364Z path:
2020-07-10T11:34:26.3364940Z wildcardPolicy: None
2020-07-10T11:34:26.3365630Z port:
2020-07-10T11:34:26.3366080Z targetPort: http
2020-07-10T11:34:26.3366496Z tls:
2020-07-10T11:34:26.3367144Z termination: edge
2020-07-10T11:34:26.3367630Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3368072Z to:
2020-07-10T11:34:26.3368572Z kind: Service
2020-07-10T11:34:26.3369571Z name: NAME
2020-07-10T11:34:26.3369919Z weight: 100
2020-07-10T11:34:26.3370115Z status:
2020-07-10T11:34:26.3370287Z ingress: []
2020-07-10T11:34:26.3370419Z
2020-07-10T11:34:26.3370579Z NOTES:
2020-07-10T11:34:26.3370833Z 1. Get the application URL by running these commands:
2020-07-10T11:34:26.3371698Z export POD_NAME=$(kubectl get pods --namespace sda-NAME -l "app.kubernetes.io/name=NAME,app.kubernetes.io/instance=NAME" -o jsonpath="{.items[0].metadata.name}")
2020-07-10T11:34:26.3372278Z echo "Visit http://127.0.0.1:8080 to use your application"
2020-07-10T11:34:26.3373358Z kubectl --namespace sda-NAME port-forward $POD_NAME 8080:80
2020-07-10T11:34:26.3373586Z
2020-07-10T11:34:26.3385047Z ##[section]Finishing: Helm Install/Upgrade NAME
It looks fine and doesn't show any error... but if I run it without --dry-run it crashes in the same place.
On the other hand, if I try it without this volume and secret, it works perfectly! I don't understand it.
Thank you for your patience and guidance.
UPDATE & FIX:
Finally, the problem was in the file values-NAME.yml:
secret:
  jks: VALUE
  jssecacerts: VALUE
It needed the following line under secret:
  secretvol: VALUE

Azure Devops Error : "unknown field "imagePullPolicy" in io.k8s.api.core.v1.PodSpec"

I am using Azure DevOps, and getting unknown field "imagePullPolicy" in io.k8s.api.core.v1.PodSpec while doing helm install:
2019-07-05T10:49:11.0064690Z ##[warning]Can't find command extension for ##vso[telemetry.command]. Please reference documentation (http://go.microsoft.com/fwlink/?LinkId=817296)
2019-07-05T09:56:41.1837910Z Error: validation failed: error validating "": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "imagePullPolicy" in io.k8s.api.core.v1.PodSpec
2019-07-05T09:56:41.1980030Z ##[error]Error: validation failed: error validating "": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "imagePullPolicy" in io.k8s.api.core.v1.PodSpec
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "clusterfitusecaseapihelm.fullname" . }}
  labels:
{{ include "clusterfitusecaseapihelm.labels" . | indent 4 }}
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "clusterfitusecaseapihelm.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "clusterfitusecaseapihelm.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Chart.Name }}
        env:
        - name: ASPNETCORE_ENVIRONMENT
          value: {{ .Values.environment }}
        resources:
          requests:
            cpu: {{ .Values.resources.requests.cpu }}
            memory: {{ .Values.resources.requests.memory }}
          limits:
            cpu: {{ .Values.resources.limits.cpu }}
            memory: {{ .Values.resources.limits.memory }}
        livenessProbe:
          httpGet:
            path: /api/version
            port: 80
          initialDelaySeconds: 90
          timeoutSeconds: 10
          periodSeconds: 15
        readinessProbe:
          httpGet:
            path: /api/version
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 10
          periodSeconds: 15
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
        - mountPath: /app/config
          name: {{ include "clusterfitusecaseapihelm.name" . }}
          readOnly: true
      volumes:
      - name: {{ include "clusterfitusecaseapihelm.name" . }}
      imagePullPolicy: Always
      imagePullSecrets:
      - name: regsecret
Tried this also but failed:
imagePullPolicy is a property of a Container object, not a Pod object, so you need to move this setting inside the containers: list (next to image:).
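Applied to the deployment above, the move would look roughly like this (a sketch of just the relevant part of the pod template):
    spec:
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Chart.Name }}
        imagePullPolicy: Always   # container-level field, next to image:
        # ... env, resources, probes, ports and volumeMounts unchanged ...
      volumes:
      - name: {{ include "clusterfitusecaseapihelm.name" . }}
      imagePullSecrets:
      - name: regsecret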

Data is empty when accessing config file in k8s configmap with Helm

I am trying to use a configmap in my deployment with helm charts. Now seems like files can be accessed with Helm according to the docs here: https://github.com/helm/helm/blob/master/docs/chart_template_guide/accessing_files.md
This is my deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: "{{ template "service.fullname" . }}"
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: "{{ template "service.fullname" . }}"
    spec:
      containers:
      - name: "{{ .Chart.Name }}"
        image: "{{ .Values.registryHost }}/{{ .Values.userNamespace }}/{{ .Values.projectName }}/{{ .Values.serviceName }}:{{.Chart.Version}}"
        volumeMounts:
        - name: {{ .Values.configmapName}}configmap-volume
          mountPath: /app/config
        ports:
        - containerPort: 80
          name: http
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 5
      volumes:
      - name: {{ .Values.configmapName}}configmap-volume
        configMap:
          name: "{{ .Values.configmapName}}-configmap"
My configmap is accessing a config file. Here's the configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.configmapName}}-configmap"
  labels:
    app: "{{ .Values.configmapName}}"
data:
{{ .Files.Get "files/{{ .Values.configmapName}}-config.json" | indent 2}}
The charts directory looks like this:
files/
--runtime-config.json
templates/
--configmap.yaml
--deployment.yaml
--ingress.yaml
--service.yaml
Chart.yaml
values.yaml
And this is what my runtime-config.json file looks like:
{
  "GameModeConfiguration": {
    "command": "xx",
    "modeId": 10,
    "sessionId": 11
  }
}
The problem is, when I install my chart (even in dry-run mode), the data for my configmap is empty. It doesn't add the data from the config file into my configmap declaration. This is what it looks like when I do a dry run:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: "runtime-configmap"
  labels:
    app: "runtime"
data:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: "whimsical-otter-runtime-service"
  labels:
    chart: "runtime-service-unknown/version"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: "whimsical-otter-runtime-service"
    spec:
      containers:
      - name: "runtime-service"
        image: "gcr.io/xxx-dev/xxx/runtime_service:unknown/version"
        volumeMounts:
        - name: runtimeconfigmap-volume
          mountPath: /app/config
        ports:
        - containerPort: 80
          name: http
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 5
      volumes:
      - name: runtimeconfigmap-volume
        configMap:
          name: "runtime-configmap"
---
What am I doing wrong that I don't get data?
The replacement of the variable within the string does not work:
{{ .Files.Get "files/{{ .Values.configmapName}}-config.json" | indent 2}}
But you can generate the string using the printf function like this:
{{ .Files.Get (printf "files/%s-config.json" .Values.configmapName) | indent 2 }}
Apart from the syntax problem pointed out by @adebasi, you still need to set this content under a key to get a valid configmap yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.configmapName}}-configmap"
  labels:
    app: "{{ .Values.configmapName}}"
data:
  my-file: |
{{ .Files.Get (printf "files/%s-config.json" .Values.configmapName) | indent 4}}
Or you can use the handy configmap helper:
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.configmapName}}-configmap"
  labels:
    app: "{{ .Values.configmapName}}"
data:
{{ (.Files.Glob "files/*").AsConfig | indent 2 }}
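With the files/ directory shown in the question, the Glob helper should render to something roughly like this (a sketch of the expected output, with each file's content placed under its filename as the key):
apiVersion: v1
kind: ConfigMap
metadata:
  name: "runtime-configmap"
  labels:
    app: "runtime"
data:
  runtime-config.json: |
    {
      "GameModeConfiguration": {
        "command": "xx",
        "modeId": 10,
        "sessionId": 11
      }
    }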