I'm trying to create persistent storage shared by all of my applications in the K8s cluster.
storageClass.yaml file:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
persistentVolume.yaml file:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-local-pv
spec:
  capacity:
    storage: 50Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-storage
  local:
    path: /base-xapp/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - juniper-ric
persistentVolumeClaim.yaml file:

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      name: my
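Note that the claim's selector requires the bound PV to carry the label name: my, while my-local-pv declares no labels at all, so the selector can never match. If the selector is kept, the PV needs the label too; a minimal sketch, assuming the label is intentional:

```yaml
# Hypothetical addition: label the PV so the PVC's spec.selector.matchLabels can match it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-local-pv
  labels:
    name: my  # must equal the PVC's selector.matchLabels entry
```

Alternatively, dropping spec.selector from the PVC removes the requirement entirely.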
And finally, this is the deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}-deployment
  labels:
    app: {{ .Values.appName }}
    xappRelease: {{ .Release.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  template:
    metadata:
      labels:
        app: {{ .Values.appName }}
        xappRelease: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Values.appName }}
          image: "{{ .Values.image }}:{{ .Values.tag }}"
          imagePullPolicy: IfNotPresent
          ports:
            - name: rmr
              containerPort: {{ .Values.rmrPort }}
              protocol: TCP
            - name: rtg
              containerPort: {{ .Values.rtgPort }}
              protocol: TCP
          volumeMounts:
            - name: app-cfg
              mountPath: {{ .Values.routingTablePath }}{{ .Values.routingTableFile }}
              subPath: {{ .Values.routingTableFile }}
            - name: app-cfg
              mountPath: {{ .Values.routingTablePath }}{{ .Values.vlevelFile }}
              subPath: {{ .Values.vlevelFile }}
            - name: {{ .Values.appName }}-persistent-storage
              mountPath: {{ .Values.appName }}/data
          envFrom:
            - configMapRef:
                name: {{ .Values.appName }}-configmap
      volumes:
        - name: app-cfg
          configMap:
            name: {{ .Values.appName }}-configmap
            items:
              - key: {{ .Values.routingTableFile }}
                path: {{ .Values.routingTableFile }}
              - key: {{ .Values.vlevelFile }}
                path: {{ .Values.vlevelFile }}
        - name: {{ .Values.appName }}-persistent-storage
          persistentVolumeClaim:
            claimName: my-claim
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.appName }}-rmr-service
  labels:
    xappRelease: {{ .Release.Name }}
spec:
  selector:
    app: {{ .Values.appName }}
  type: NodePort
  ports:
    - name: rmr
      protocol: TCP
      port: {{ .Values.rmrPort }}
      targetPort: {{ .Values.rmrPort }}
    - name: rtg
      protocol: TCP
      port: {{ .Values.rtgPort }}
      targetPort: {{ .Values.rtgPort }}
When I deploy, the pod stays in Pending:

base-xapp-deployment-6799d6cbf6-lgjks   0/1     Pending   0          3m25s
This is the output of kubectl describe pod:
Name: base-xapp-deployment-6799d6cbf6-lgjks
Namespace: near-rt-ric
Priority: 0
Node: <none>
Labels: app=base-xapp
pod-template-hash=6799d6cbf6
xappRelease=base-xapp
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/base-xapp-deployment-6799d6cbf6
Containers:
base-xapp:
Image: base-xapp:0.1.0
Ports: 4565/TCP, 4561/TCP
Host Ports: 0/TCP, 0/TCP
Environment Variables from:
base-xapp-configmap ConfigMap Optional: false
Environment: <none>
Mounts:
/rmr_route from app-cfg (rw,path="rmr_route")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rxmwm (ro)
/vlevel from app-cfg (rw,path="vlevel")
base-xapp/data from base-xapp-persistent-storage (rw)
Conditions:
Type Status
PodScheduled False
Volumes:
app-cfg:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: base-xapp-configmap
Optional: false
base-xapp-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-claim
ReadOnly: false
kube-api-access-rxmwm:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 10s (x6 over 4m22s) default-scheduler 0/1 nodes are available: 1 persistentvolumeclaim "my-claim" not found.
This is the output of the relevant kubectl commands:
get pv:
dan@linux$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-local-pv 50Mi RWO Retain Available my-local-storage 6m2s
get pvc:
dan@linux$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-claim Pending my-local-storage 36m
You're missing spec.volumeName in your PVC manifest.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
spec:
  volumeName: my-local-pv # this line was missing
  accessModes:
    - ReadWriteOnce
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      name: my
I can see your deployment has the namespace near-rt-ric,
but your PVC doesn't specify a namespace, so it was probably created in the default namespace.
Use this command to check: kubectl get pvc -A
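A PersistentVolumeClaim is a namespaced object, so it must live in the same namespace as the pod that mounts it. A minimal sketch, assuming the near-rt-ric namespace from the describe output:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
  namespace: near-rt-ric  # same namespace as the deployment that mounts it
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 50Mi
```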
I'm managing a Kubernetes cluster and there is a duplicate pod that keeps coming back; the duplicate ReplicaSet controlling it also keeps coming back after deletion. It's very strange. I also can't set the ReplicaSet to desire 0 pods, but that might be by design.
I can't really think of more information to share.
Anyone recognise this issue and know how to fix it?
Here's the ReplicaSet:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: cash-idemix-ca-6bc646cbcc
  namespace: boc-cash-portal
  uid: babc0236-2053-4088-b8e8-b8ae2ed9303c
  resourceVersion: '25466073'
  generation: 1
  creationTimestamp: '2022-11-28T13:18:42Z'
  labels:
    app.kubernetes.io/instance: cash-idemix-ca
    app.kubernetes.io/name: cash-idemix-ca
    pod-template-hash: 6bc646cbcc
  annotations:
    deployment.kubernetes.io/desired-replicas: '1'
    deployment.kubernetes.io/max-replicas: '2'
    deployment.kubernetes.io/revision: '7'
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: cash-idemix-ca
      uid: 2a3300ed-f666-4a30-98b7-7ab2ebcb2a0d
      controller: true
      blockOwnerDeletion: true
  managedFields:
    - manager: kube-controller-manager
      operation: Update
      apiVersion: apps/v1
      time: '2022-11-28T13:18:42Z'
      fieldsType: FieldsV1
    - manager: kube-controller-manager
      operation: Update
      apiVersion: apps/v1
      time: '2022-11-29T13:27:37Z'
      fieldsType: FieldsV1
      subresource: status
  selfLink: >-
    /apis/apps/v1/namespaces/boc-cash-portal/replicasets/cash-idemix-ca-6bc646cbcc
status:
  replicas: 1
  fullyLabeledReplicas: 1
  observedGeneration: 1
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: cash-idemix-ca
      app.kubernetes.io/name: cash-idemix-ca
      pod-template-hash: 6bc646cbcc
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: cash-idemix-ca
        app.kubernetes.io/name: cash-idemix-ca
        pod-template-hash: 6bc646cbcc
      annotations:
        kubectl.kubernetes.io/restartedAt: '2022-07-19T11:45:27Z'
    spec:
      volumes:
        - name: fabric-ca-server-home
          persistentVolumeClaim:
            claimName: cash-idemix-ca-fabric-ca-server-home
      containers:
        - name: cash-idemix-ca
          image: ca.icr.io/samara-dev-container-images/cash-idemix-ca:0.3.0
          command:
            - sh
          args:
            - '-c'
            - >-
              sleep 1 && fabric-ca-server start -b
              $(IDEMIX_ADMIN_USERNAME):$(IDEMIX_ADMIN_PASSWORD) --port 7054
              --idemix.curve gurvy.Bn254 --loglevel debug
          ports:
            - name: api
              containerPort: 7054
              protocol: TCP
          env:
            - name: FABRIC_CA_SERVER_HOME
              value: /idemix-config/fabric-ca-gurvy
            - name: IDEMIX_ADMIN_USERNAME
              valueFrom:
                secretKeyRef:
                  name: cash-idemix-ca-admin-credentials
                  key: IDEMIX_ADMIN_USERNAME
            - name: IDEMIX_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cash-idemix-ca-admin-credentials
                  key: IDEMIX_ADMIN_PASSWORD
          resources: {}
          volumeMounts:
            - name: fabric-ca-server-home
              mountPath: /idemix-config/fabric-ca-gurvy
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: cash-idemix-ca
      serviceAccount: cash-idemix-ca
      securityContext: {}
      imagePullSecrets:
        - name: all-icr-io
      schedulerName: default-scheduler
Edit: The Fool was correct; there is a deployment that recreates the ReplicaSet. Though its settings seem to say that it only wants to create 1 replica, so I still don't see why it wants to create two of them.
I'm using Lens to manage my cluster, and it shows that the desired number of replicas is indeed 2. I can set it to 1, but the change won't persist. Anywhere else I could look?
cash-idemix-ca Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cash-idemix-ca
  namespace: boc-cash-portal
  uid: 2a3300ed-f666-4a30-98b7-7ab2ebcb2a0d
  resourceVersion: '25467341'
  generation: 10
  creationTimestamp: '2022-07-18T14:13:57Z'
  labels:
    app.kubernetes.io/instance: cash-idemix-ca
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: cash-idemix-ca
    app.kubernetes.io/version: manual-0.2.1
    helm.sh/chart: cash-idemix-ca-0.12.0
  annotations:
    deployment.kubernetes.io/revision: '7'
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"cash-idemix-ca","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"cash-idemix-ca","app.kubernetes.io/version":"manual-0.2.1","helm.sh/chart":"cash-idemix-ca-0.12.0"},"name":"cash-idemix-ca","namespace":"boc-cash-portal"},"spec":{"replicas":1,"revisionHistoryLimit":0,"selector":{"matchLabels":{"app.kubernetes.io/instance":"cash-idemix-ca","app.kubernetes.io/name":"cash-idemix-ca"}},"template":{"metadata":{"labels":{"app.kubernetes.io/instance":"cash-idemix-ca","app.kubernetes.io/name":"cash-idemix-ca"}},"spec":{"containers":[{"args":["-c","sleep
      1 \u0026\u0026 fabric-ca-server start -b
      $(IDEMIX_ADMIN_USERNAME):$(IDEMIX_ADMIN_PASSWORD) --port 7054
      --idemix.curve gurvy.Bn254 --loglevel
      debug"],"command":["sh"],"env":[{"name":"FABRIC_CA_SERVER_HOME","value":"/idemix-config/fabric-ca-gurvy"},{"name":"IDEMIX_ADMIN_USERNAME","valueFrom":{"secretKeyRef":{"key":"IDEMIX_ADMIN_USERNAME","name":"cash-idemix-ca-admin-credentials"}}},{"name":"IDEMIX_ADMIN_PASSWORD","valueFrom":{"secretKeyRef":{"key":"IDEMIX_ADMIN_PASSWORD","name":"cash-idemix-ca-admin-credentials"}}}],"image":"ca.icr.io/samara-dev-container-images/cash-idemix-ca:0.3.0","imagePullPolicy":"IfNotPresent","name":"cash-idemix-ca","ports":[{"containerPort":7054,"name":"api","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/idemix-config/fabric-ca-gurvy","name":"fabric-ca-server-home","readOnly":false}]}],"imagePullSecrets":[{"name":"all-icr-io"}],"serviceAccountName":"cash-idemix-ca","volumes":[{"name":"fabric-ca-server-home","persistentVolumeClaim":{"claimName":"cash-idemix-ca-fabric-ca-server-home"}}]}}}}
status:
  observedGeneration: 10
  replicas: 2
  updatedReplicas: 1
  readyReplicas: 1
  availableReplicas: 1
  unavailableReplicas: 1
  conditions:
    - type: Available
      status: 'True'
      lastUpdateTime: '2022-11-28T13:53:56Z'
      lastTransitionTime: '2022-11-28T13:53:56Z'
      reason: MinimumReplicasAvailable
      message: Deployment has minimum availability.
    - type: Progressing
      status: 'False'
      lastUpdateTime: '2022-11-29T13:37:38Z'
      lastTransitionTime: '2022-11-29T13:37:38Z'
      reason: ProgressDeadlineExceeded
      message: ReplicaSet "cash-idemix-ca-6bc646cbcc" has timed out progressing.
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: cash-idemix-ca
      app.kubernetes.io/name: cash-idemix-ca
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: cash-idemix-ca
        app.kubernetes.io/name: cash-idemix-ca
      annotations:
        kubectl.kubernetes.io/restartedAt: '2022-07-19T11:45:27Z'
    spec:
      volumes:
        - name: fabric-ca-server-home
          persistentVolumeClaim:
            claimName: cash-idemix-ca-fabric-ca-server-home
      containers:
        - name: cash-idemix-ca
          image: ca.icr.io/samara-dev-container-images/cash-idemix-ca:0.3.0
          command:
            - sh
          args:
            - '-c'
            - >-
              sleep 1 && fabric-ca-server start -b
              $(IDEMIX_ADMIN_USERNAME):$(IDEMIX_ADMIN_PASSWORD) --port 7054
              --idemix.curve gurvy.Bn254 --loglevel debug
          ports:
            - name: api
              containerPort: 7054
              protocol: TCP
          env:
            - name: FABRIC_CA_SERVER_HOME
              value: /idemix-config/fabric-ca-gurvy
            - name: IDEMIX_ADMIN_USERNAME
              valueFrom:
                secretKeyRef:
                  name: cash-idemix-ca-admin-credentials
                  key: IDEMIX_ADMIN_USERNAME
            - name: IDEMIX_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cash-idemix-ca-admin-credentials
                  key: IDEMIX_ADMIN_PASSWORD
          resources: {}
          volumeMounts:
            - name: fabric-ca-server-home
              mountPath: /idemix-config/fabric-ca-gurvy
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: cash-idemix-ca
      serviceAccount: cash-idemix-ca
      securityContext: {}
      imagePullSecrets:
        - name: all-icr-io
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 0
  progressDeadlineSeconds: 600
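For what it's worth, the status section above hints at why two pods exist even though spec.replicas is 1: status.replicas is 2 with unavailableReplicas: 1 and ProgressDeadlineExceeded, i.e. a RollingUpdate surge pod that never became ready, so the old pod is kept around. If the workload cannot run two copies at once (for example because of the single ReadWriteOnce claim), a Recreate strategy avoids the surge pod entirely; a sketch, assuming that constraint:

```yaml
# Hypothetical change: stop the old pod before starting the new one, so no surge pod is created.
spec:
  strategy:
    type: Recreate
```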
I'm unable to create a VerneMQ pod in an AWS EKS cluster with a persistent volume claim for authentication and SSL. Below is my YAML file:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vernemq-storage
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: verne-aws-pv
spec:
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: xfs
    volumeID: aws://ap-south-1a/vol-xxxxx
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: vernemq-storage
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: mysql
  name: verne-aws-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp2-retain
  volumeMode: Filesystem
  volumeName: verne-aws-pv
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vernemq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vernemq
  serviceName: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      terminationGracePeriodSeconds: 200
      containers:
        - name: vernemq
          image: vernemq/vernemq:latest
          imagePullPolicy: Always
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/bash
                  - -c
                  - /usr/sbin/vmq-admin cluster leave node=VerneMQ@${MY_POD_NAME}.vernemq.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local ; sleep 60 ; /usr/sbin/vmq-admin cluster leave node=VerneMQ@${MY_POD_NAME}.vernemq.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local -k; sleep 60;
          ports:
            - containerPort: 1883
              name: mqtt
              hostPort: 1883
            - containerPort: 8883
            - containerPort: 4369
              name: epmd
            - containerPort: 44053
              name: vmq
            - containerPort: 8888
              name: health
            - containerPort: 9100
            - containerPort: 9101
            - containerPort: 9102
            - containerPort: 9103
            - containerPort: 9104
            - containerPort: 9105
            - containerPort: 9106
            - containerPort: 9107
            - containerPort: 9108
            - containerPort: 9109
            - containerPort: 8888
          resources:
            limits:
              cpu: "2"
              memory: 3Gi
            requests:
              cpu: "1"
              memory: 1Gi
          env:
            - name: DOCKER_VERNEMQ_ACCEPT_EULA
              value: "yes"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
              value: "1"
            - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
              value: "vernemq"
            - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
              value: "9100"
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
              value: "9109"
            - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
              value: "on"
            - name: DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT
              value: "0.0.0.0:1883"
            - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__POOL_timeout
              value: "6000"
            - name: DOCKER_VERNEMQ_LISTENER__HTTP__DEFAULT
              value: "0.0.0.0:8888"
            - name: DOCKER_VERNEMQ_LISTENER__MAX_CONNECTIONS
              value: "infinity"
            - name: DOCKER_VERNEMQ_LISTENER__NR_OF_ACCEPTORS
              value: "10000"
            - name: DOCKER_VERNEMQ_MAX_INFLIGHT_MESSAGES
              value: "0"
            - name: DOCKER_VERNEMQ_ALLOW_MULTIPLE_SESSIONS
              value: "off"
            - name: DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT
              value: "on"
            - name: DOCKER_VERNEMQ_ALLOW_PUBLISH_DURING_NETSPLIT
              value: "on"
            - name: DOCKER_VERNEMQ_ALLOW_SUBSCRIBE_DURING_NETSPLIT
              value: "on"
            - name: DOCKER_VERNEMQ_ALLOW_UNSUBSCRIBE_DURING_NETSPLIT
              value: "on"
            - name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
              value: "/etc/vernemq/vmq.passwd"
            - name: DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT
              value: "0.0.0.0:8883"
            - name: DOCKER_VERNEMQ_LISTENER__SSL__CAFILE
              value: "/etc/ssl/ca.crt"
            - name: DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE
              value: "/etc/ssl/server.crt"
            - name: DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE
              value: "/etc/ssl/server.key"
          volumeMounts:
            - mountPath: /etc/ssl
              name: vernemq-certifications
              readOnly: true
            - mountPath: /etc/vernemq-passwd
              name: vernemq-passwd
              readOnly: true
      volumes:
        - name: vernemq-certifications
          persistentVolumeClaim:
            claimName: verne-aws-pvc
          secret:
            secretName: vernemq-certifications
        - name: vernemq-passwd
          persistentVolumeClaim:
            claimName: verne-aws-pvc
          secret:
            secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
    - port: 4369
      name: empd
    - port: 44053
      name: vmq
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
    - name: mqtt
      port: 1883
      targetPort: 1883
    - name: health
      port: 8888
      targetPort: 8888
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["endpoints", "deployments", "replicasets", "pods", "statefulsets", "persistentvolumeclaims"]
    verbs: ["get", "patch", "list", "watch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
subjects:
  - kind: ServiceAccount
    name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
I created an AWS EBS volume in the same region and subnet as the node group and added it to the persistent volume.
The pod is not getting created; instead, kubectl describe statefulset vernemq shows the error below:
Volumes:
vernemq-certifications:
Type: Secret (a volume populated by a Secret)
SecretName: vernemq-certifications
Optional: false
vernemq-passwd:
Type: Secret (a volume populated by a Secret)
SecretName: vernemq-passwd
Optional: false
Volume Claims: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 2m2s (x5 over 2m2s) statefulset-controller create Pod vernemq-0 in StatefulSet vernemq failed error: pods "vernemq-0" is forbidden: error looking up service account default/vernemq: serviceaccount "vernemq" not found
Warning FailedCreate 40s (x10 over 2m2s) statefulset-controller create Pod vernemq-0 in StatefulSet vernemq failed error: Pod "vernemq-0" is invalid: [spec.volumes[0].persistentVolumeClaim: Forbidden: may not specify more than 1 volume type, spec.volumes[1].persistentVolumeClaim: Forbidden: may not specify more than 1 volume type, spec.containers[0].volumeMounts[0].name: Not found: "vernemq-certifications", spec.containers[0].volumeMounts[1].name: Not found: "vernemq-passwd"]
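The second event names the actual problem: each entry under volumes specifies both a persistentVolumeClaim and a secret, but a Kubernetes volume may have exactly one source. Assuming the certificates and password file really live in the Secrets, a sketch of the corrected section drops the PVC from those two volumes:

```yaml
# Sketch: one source per volume; keep the PVC (if needed) as a separate, third volume.
volumes:
  - name: vernemq-certifications
    secret:
      secretName: vernemq-certifications
  - name: vernemq-passwd
    secret:
      secretName: vernemq-passwd
```

The first event usually just means the ServiceAccount was created after the StatefulSet controller first tried to create the pod; it resolves on retry once vernemq exists in the same namespace.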
I'm trying to set up an OpenSearch cluster on Kubernetes.
When setting up my nodes nothing fails, but I get an error at a certain point.
This is my StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.global.name }} # --> opensearch
  namespace: {{ .Values.global.namespace }}
  clusterName: {{ .Values.global.clusterName }}
  labels:
    app: {{ .Values.global.name }}
  annotations:
    majorVersion: "{{ include "opensearch.majorVersion" . }}"
spec:
  serviceName: "opensearch"
  selector:
    matchLabels:
      app: {{ .Values.global.name }}
  replicas: {{ .Values.replicas }} # ---> 3
  template:
    metadata:
      name: {{ .Values.global.name }}
      labels:
        app: {{ .Values.global.name }}
        role: master
    spec:
      initContainers:
        - name: init-sysctl
          image: busybox
          imagePullPolicy: IfNotPresent
          command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          imagePullPolicy: IfNotPresent
          command: [ "sh", "-c", "ulimit -n 65536" ]
      containers:
        - name: "{{ .Values.global.name }}-master"
          image: opensearchproject/opensearch
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: '8Gi'
              cpu: "1"
            requests:
              memory: '8Gi'
              cpu: "1"
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          env:
            - name: node.name
              value: opensearch
            - name: cluster.name
              value: "{{ .Values.global.clusterName }}"
            - name: node.master
              value: "true"
            - name: node.data
              value: "true"
            - name: node.ingest
              value: "true"
            - name: cluster.initial_master_nodes
              value: "opensearch-0"
            - name: discovery.seed_hosts
              value: "opensearch-0"
            - name: ES_JAVA_OPTS
              value: "-Xms4g -Xmx4g"
          volumeMounts:
            - name: {{ .Values.global.name }}
              mountPath: /etc/opensearch/data
            - name: config
              mountPath: /usr/share/opensearch/config/opensearch.yml
              subPath: opensearch.yml
            - name: node-key
              mountPath: {{ .Values.privateKeyPathOnMachine }}
              subPath: node-key.pem
              readOnly: true
            - name: node
              mountPath: {{ .Values.certPathOnMachine }}
              subPath: node.pem
              readOnly: true
            - name: root-ca
              mountPath: {{ .Values.rootCertPathOnMachine }}
              subPath: root-ca.pem
            - name: admin-key
              mountPath: {{ .Values.adminKeyCertPathOnMachine }}
              subPath: admin-key.pem
              readOnly: true
            - name: admin
              mountPath: {{ .Values.adminCertPathOnMachine }}
              subPath: admin.pem
              readOnly: true
            - name: client
              mountPath: {{ .Values.clientCertPathOnMachine }}
              subPath: client.pem
              readOnly: true
            - name: client-key
              mountPath: {{ .Values.clientKeyCertPathOnMachine }}
              subPath: client-key.pem
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: opensearch-config
        - name: config-opensearch
          configMap:
            name: config
        - name: node
          secret:
            secretName: node
            items:
              - key: node.pem
                path: node.pem
        - name: node-key
          secret:
            secretName: node-key
            items:
              - key: node-key.pem
                path: node-key.pem
        - name: root-ca
          secret:
            secretName: root-ca
            items:
              - key: root-ca.pem
                path: root-ca.pem
        - name: admin-key
          secret:
            secretName: admin-key
            items:
              - key: admin-key.pem
                path: admin-key.pem
        - name: admin
          secret:
            secretName: admin
            items:
              - key: admin.pem
                path: admin.pem
        - name: client-key
          secret:
            secretName: client-key
            items:
              - key: client-key.pem
                path: client-key.pem
        - name: client
          secret:
            secretName: client
            items:
              - key: client.pem
                path: client.pem
  volumeClaimTemplates:
    - metadata:
        name: {{ .Values.global.name }}
        labels:
          app: {{ .Values.global.name }}
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: "20Gi"
When I use this definition, at some point I get this error:
[ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch] Exception while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, AUDIT] (index=.opendistro_security)
org.opensearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
Now if I try to set the nodes:
- name: cluster.initial_master_nodes
  value: "opensearch-0.opensearch.search.svc.cluster.local,opensearch-1.opensearch.search.svc.cluster.local,opensearch-2.opensearch.search.svc.cluster.local"
- name: discovery.seed_hosts
  value: "opensearch-0.opensearch.search.svc.cluster.local,opensearch-1.opensearch.search.svc.cluster.local,opensearch-2.opensearch.search.svc.cluster.local"
it fails with the same error, only this time this warning comes first:
[opensearch] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [opensearch-0.opensearch.search.svc.cluster.local, opensearch-1.opensearch.search.svc.cluster.local, opensearch-2.opensearch.search.svc.cluster.local] to bootstrap a cluster: have discovered [{opensearch}{SKON7g98RnyQsz6SAYqWRg}{GkUCV8mISZqITHiU0LDEzQ}{10.20.1.103}{10.20.1.103:9300}{dimr}{shard_indexing_pressure_enabled=true}, {opensearch}{qRuv6YgYQjGVatLGRGfPtQ}{62EmR4a_Sb-nhV9_7F05aA}{10.20.2.137}{10.20.2.137:9300}{dimr}{shard_indexing_pressure_enabled=true}, {opensearch}{8flMQsmxQEGN4LeBMemHsQ}{6zNV_pTZRnO6YneCzvOA4Q}{10.20.3.204}{10.20.3.204:9300}{dimr}{shard_indexing_pressure_enabled=true}]; discovery will continue using [10.20.2.137:9300, 10.20.3.204:9300] from hosts providers and [{opensearch}{SKON7g98RnyQsz6SAYqWRg}{GkUCV8mISZqITHiU0LDEzQ}{10.20.1.103}{10.20.1.103:9300}{dimr}{shard_indexing_pressure_enabled=true}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
When I try to run the security setup script in the pod:
/usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh -cd ../securityconfig/ -icl -nhnv -cacert /usr/share/opensearch/config/certificates/root-ca.pem -cert /usr/share/opensearch/config/certificates/admin.pem -key /usr/share/opensearch/config/certificates/admin-key.pem
it fails too; output:
Cannot retrieve cluster state due to: null. This is not an error, will keep on trying ...
Root cause: MasterNotDiscoveredException[null] (org.opensearch.discovery.MasterNotDiscoveredException/org.opensearch.discovery.MasterNotDiscoveredException)
kubectl get svc opensearch -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"clusterName":"gloat-dev","labels":{"app.kubernetes.io/instance":"opensearch-gloat-dev-search"},"name":"opensearch","namespace":"search"},"spec":{"clusterIP":"None","ports":[{"name":"http","port":9200},{"name":"transport","port":9300}],"publishNotReadyAddresses":true,"selector":{"app":"opensearch"},"type":"ClusterIP"}}
  creationTimestamp: "2022-01-17T12:21:56Z"
  labels:
    app.kubernetes.io/instance: opensearch-gloat-dev-search
  managedFields:
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/instance: {}
        f:spec:
          f:clusterIP: {}
          f:ports:
            .: {}
            k:{"port":9200,"protocol":"TCP"}:
              .: {}
              f:name: {}
              f:port: {}
              f:protocol: {}
              f:targetPort: {}
            k:{"port":9300,"protocol":"TCP"}:
              .: {}
              f:name: {}
              f:port: {}
              f:protocol: {}
              f:targetPort: {}
          f:publishNotReadyAddresses: {}
          f:selector:
            .: {}
            f:app: {}
          f:sessionAffinity: {}
          f:type: {}
      manager: argocd-application-controller
      operation: Update
      time: "2022-01-17T12:21:56Z"
  name: opensearch
  namespace: search
  resourceVersion: "173096782"
  selfLink: /api/v1/namespaces/search/services/opensearch
  uid: ec2a49a1-f4e8-4419-9324-1761b892aeca
spec:
  clusterIP: None
  ports:
    - name: http
      port: 9200
      protocol: TCP
      targetPort: 9200
    - name: transport
      port: 9300
      protocol: TCP
      targetPort: 9300
  publishNotReadyAddresses: true
  selector:
    app: opensearch
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
error log trace:
https://pastebin.com/MtJp9iwf (loops)
Try with:

- name: cluster.initial_master_nodes
  value: "opensearch-0,opensearch-1,opensearch-2" # opensearch master node names
- name: discovery.seed_hosts
  value: "opensearch" # headless service DNS which points to the master nodes; in your case it's "opensearch"
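Also worth noting: in the warning above, every discovered node calls itself {opensearch}, because node.name is hard-coded in the StatefulSet; names like opensearch-0 can only match if each pod reports its own pod name. A sketch using the downward API, assuming the StatefulSet's generated pod names:

```yaml
# Sketch: give each pod its own node.name instead of the shared literal "opensearch".
- name: node.name
  valueFrom:
    fieldRef:
      fieldPath: metadata.name  # opensearch-0, opensearch-1, opensearch-2
```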
I have a helm chart which involves a loop over a range of values. The chart includes a statefulset, pvc and cronjob. If I pass it a list with 4 values, all is well, but if I pass it a list of 12 values, most of the cronjobs just don't appear in the final template (i.e. using helm install --dry-run --debug).
Can anyone explain what might be causing this? I googled to see if I could find information about maximum length of templates but couldn't find anything...
helm template creates the manifest just fine, so maybe Kubernetes is rejecting the cronjobs for some reason?
Is there a recommended approach for when you need to create many almost-duplicates of a manifest?
EXAMPLE: The chart template looks something like this:
{{ $env := .Release.Namespace }}
{{ $image_tag := .Values.image_tag }}
{{ $aws_account_id := .Values.aws_account_id }}
{{- range $collector := .Values.collectors }}
apiVersion: v1
kind: Service
metadata:
  name: {{ $colname }}
  labels:
    app: {{ $colname }}
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: {{ $colname }}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ $colname }}
  labels:
    app: {{ $colname }}
spec:
  selector:
    matchLabels:
      app: {{ $colname }}
  serviceName: {{ $colname }}
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ $colname }}
    spec:
      securityContext:
        fsGroup: 1000
      containers:
        - name: {{ $colname }}
          imagePullPolicy: Always
          image: {{ $aws_account_id }}.dkr.ecr.eu-west-1.amazonaws.com/d:{{ $image_tag }}
          volumeMounts:
            - name: {{ $colname }}-a-claim
              mountPath: /home/me/a
            - name: {{ $colname }}-b-claim
              mountPath: /home/me/b
            - name: {{ $colname }}-c-claim
              mountPath: /home/me/c
          env:
            - name: COLLECTOR
              value: {{ $collector }}
            - name: ENV
              value: {{ $env }}
  volumeClaimTemplates:
    - metadata:
        name: {{ $colname }}-a-claim
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: gp2
        resources:
          requests:
            storage: 50Gi
    - metadata:
        name: {{ $colname }}-b-claim
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: gp2
        resources:
          requests:
            storage: 10Gi
    - metadata:
        name: {{ $colname }}-c-claim
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: gp2
        resources:
          requests:
            storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ $colname }}-c-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 20Gi
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ $colname }}-cron
spec:
  schedule: {{ $update_time }}
  jobTemplate:
    spec:
      template:
        spec:
          securityContext:
            fsGroup: 1000
          containers:
            - name: {{ $colname }}
              image: {{ $aws_account_id }}.dkr.ecr.eu-west-1.amazonaws.com/d:{{ $image_tag }}
              env:
                - name: COLLECTOR
                  value: {{ $collector_name }}
              volumeMounts:
                - name: c-storage
                  mountPath: /home/me/c
          restartPolicy: Never
          volumes:
            - name: c-storage
              persistentVolumeClaim:
                claimName: {{ $colname }}-c-claim
---
{{ end }}
and I'm passing values like:

collectors:
  - name: a
  - name: b
  - name: c
  - name: d
  - name: e
  - name: f
  - name: g
  - name: h
  - name: i
  - name: j
  - name: k
  - name: l
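One thing to check in the real chart: the excerpt references $colname, $update_time, and $collector_name, none of which are defined by the range (which binds $collector, a map with a name key). If the deployed chart does the same, the rendered names collapse to empty strings and same-named documents silently overwrite one another, which would look exactly like resources going missing. A sketch of a loop header that derives the name explicitly (the variable names are illustrative):

```yaml
{{- range $collector := .Values.collectors }}
{{- $colname := $collector.name }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ $colname }}
{{- end }}
```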
I have described a helm template for Redis:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Values.rds.service.name }}"
  namespace: {{ .Values.environment.namespace }}
spec:
  selector:
    matchLabels:
      app: "{{ .Values.rds.service.name }}"
  template:
    metadata:
      labels:
        app: "{{ .Values.rds.service.name }}"
        component_type: "{{ .Values.component_type.name }}"
    spec:
      containers:
        - image: "{{ .Values.rds.docker.hub }}{{ .Values.rds.docker.image }}"
          name: "{{ .Values.rds.service.name }}"
          env:
            - name: REDIS_PASSWORD
              value: "9dtjger"
          ports:
            - containerPort: {{ toYaml .Values.rds.service.port | indent 5 }}
          resources:
            requests:
              memory: "{{ .Values.rds.resources.requests.memory }}"
              cpu: "{{ .Values.rds.resources.requests.cpu }}"
            limits:
              memory: "{{ .Values.rds.resources.limits.memory }}"
              cpu: "{{ .Values.rds.resources.limits.cpu }}"
---
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Values.rds.service.name }}"
  namespace: "{{ .Values.environment.namespace }}"
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Department=sre,Team=engage-devops,Environment=ev-lab
    external-dns.alpha.kubernetes.io/hostname: {{ .Values.rds.service.name }}.{{ .Values.environment.namespace }}.lab.engage.ringcentral.com
spec:
  ports:
    - protocol: TCP
      port: {{ toYaml .Values.rds.service.port | indent 5 }}
  selector:
    app: "{{ .Values.rds.service.name }}"
  type: LoadBalancer
Then I deployed it via kubectl apply. This is the output of kubectl -n mynamespace describe pod rds-5b6996bf-m6pbr:
Name: rds-5b6996bf-m6pbr
Namespace: okta-cc-6
Priority: 0
Node: ip-10-8-29-49.eu-central-1.compute.internal/10.8.29.49
Start Time: Mon, 03 Aug 2020 21:55:09 +0300
Labels: app=rds
component_type=evt
pod-template-hash=5b6996bf
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 10.8.29.39
IPs: <none>
Controlled By: ReplicaSet/rds-5b6996bf
Containers:
rds:
Container ID: docker://3be73237324f8ba8c0a38420fceffcee65eb386e93afd8efa309212527761c74
Image: redis:6.0.6
Image ID: docker-pullable://redis@sha256:d86d6739fab2eaf590cfa51eccf1e9779677bd2502894579bcf3f80cb37b18d4
Port: 6379/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 03 Aug 2020 21:55:14 +0300
Ready: True
Restart Count: 0
Limits:
cpu: 40m
memory: 100Mi
Requests:
cpu: 2m
memory: 10Mi
Environment:
REDIS_PASSWORD: 9dtjger
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-78c7b (ro)
When I try to connect to Redis with the password, I get 'Warning: AUTH failed'; if I connect without a password, the connection succeeds. I don't know why the password doesn't work when I use:
env:
  - name: REDIS_PASSWORD
    value: "9dtjger"
In docker-compose locally I cannot connect without a password.
How can I set a password for Redis in Kubernetes?
You didn't specify how you installed Redis (which Helm chart). Essentially, adding --requirepass ${REDIS_PASSWORD} and --masterauth ${REDIS_PASSWORD} should do it.
If, for example, you used the Bitnami Helm chart, you can use the usePassword parameter. Then in the template, it gets used here.
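The stock redis image does not read REDIS_PASSWORD on its own; something has to hand it to redis-server. A sketch of wiring it through the container args, assuming the plain redis image from the describe output (Kubernetes expands the $(VAR) form from the env entries):

```yaml
# Sketch: pass the password to redis-server via --requirepass.
containers:
  - name: "{{ .Values.rds.service.name }}"
    image: "{{ .Values.rds.docker.hub }}{{ .Values.rds.docker.image }}"
    args: ["--requirepass", "$(REDIS_PASSWORD)"]  # expanded from the env var below
    env:
      - name: REDIS_PASSWORD
        value: "9dtjger"
```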