Kubernetes Error: secret "mongo-config" not found

I'm learning Kubernetes and below are the yaml configuration files:
mongo-config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  mongo-url: mongo-service
mongo-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  mongo-user: bW9uZ291c2Vy
  mongo-password: bW9uZ29wYXNzd29yZA==
mongo.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
  labels:
    app: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongodb
        image: mongo:5.0
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-user
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-service
spec:
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
webapp.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nanajanashia/k8s-demo-app:v1.0
        ports:
        - containerPort: 3000
        env:
        - name: USER_NAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-user
        - name: USER_PWD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-password
        - name: DB_URL
          valueFrom:
            secretKeyRef:
              name: mongo-config
              key: mongo-url
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30100
After starting the test webapp I came across the error below:
NAME READY STATUS RESTARTS AGE
mongo-deployment-7875498c-psn56 1/1 Running 0 100m
my-go-app-664f7475d4-jgnsk 1/1 Running 1 (7d20h ago) 7d20h
webapp-deployment-7dc5b857df-6bx4s 0/1 CreateContainerConfigError 0 29m
When I try to get more details about the CreateContainerConfigError, I get:
~/K8s/K8s-demo$ kubectl describe pod webapp-deployment-7dc5b857df-6bx4s
Name: webapp-deployment-7dc5b857df-6bx4s
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Thu, 06 Jan 2022 12:20:02 +0200
Labels: app=webapp
pod-template-hash=7dc5b857df
Annotations: <none>
Status: Pending
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/webapp-deployment-7dc5b857df
Containers:
webapp:
Container ID:
Image: nanajanashia/k8s-demo-app:v1.0
Image ID:
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CreateContainerConfigError
Ready: False
Restart Count: 0
Environment:
USER_NAME: <set to the key 'mongo-user' in secret 'mongo-secret'> Optional: false
USER_PWD: <set to the key 'mongo-password' in secret 'mongo-secret'> Optional: false
DB_URL: <set to the key 'mongo-url' in secret 'mongo-config'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wkflh (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-wkflh:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30m default-scheduler Successfully assigned default/webapp-deployment-7dc5b857df-6bx4s to minikube
Warning Failed 28m (x12 over 30m) kubelet Error: secret "mongo-config" not found
Normal Pulled 27s (x142 over 30m) kubelet Container image "nanajanashia/k8s-demo-app:v1.0" already present on machine
which suggests that the configuration issue is:
Warning Failed 28m (x12 over 30m) kubelet Error: secret "mongo-config" not found
I don't have a Secret named "mongo-config", but there is a ConfigMap with that name:
>:~/K8s/K8s-demo$ kubectl get secret
NAME TYPE DATA AGE
default-token-gs25h kubernetes.io/service-account-token 3 5m57s
mongo-secret Opaque 2 5m48s
>:~/K8s/K8s-demo$ kubectl get configmap
NAME DATA AGE
kube-root-ca.crt 1 6m4s
mongo-config 1 6m4s
Could you please advise what the issue is here?

You have secretKeyRef in:
- name: DB_URL
  valueFrom:
    secretKeyRef:
      name: mongo-config
      key: mongo-url
You have to use configMapKeyRef instead, because mongo-config is a ConfigMap, not a Secret:
- name: DB_URL
  valueFrom:
    configMapKeyRef:
      name: mongo-config
      key: mongo-url
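To confirm the fix took effect, you can re-apply the manifest and inspect the environment of the new pod once it is running (the pod name below is a placeholder, and grepping only works if the image ships grep):
# Re-apply and watch the pod get recreated
kubectl apply -f webapp.yaml
kubectl get pods -w
# Once the webapp pod is Running, check the rendered environment
kubectl exec webapp-deployment-xxxxx-yyyyy -- env | grep DB_URL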

Basically, your ConfigMap is what's called a ConfigMap created from literals. Another way to use that ConfigMap data as environment variables is envFrom:
- name: webapp
  image: nanajanashia/k8s-demo-app:v1.0
  envFrom:
  - configMapRef:
      name: mongo-config
Modify your webapp deployment YAML this way. Also modify the ConfigMap itself: use DB_URL instead of mongo-url as the key, because envFrom turns each ConfigMap key directly into an environment variable name; see the sketch below.
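A minimal sketch of that variant, assuming the rest of the manifests stay as they are; only the ConfigMap key and the webapp container's env handling change:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  DB_URL: mongo-service
---
# webapp.yaml container fragment: DB_URL now comes from envFrom;
# the existing USER_NAME / USER_PWD secretKeyRef entries stay as they are
      containers:
      - name: webapp
        image: nanajanashia/k8s-demo-app:v1.0
        ports:
        - containerPort: 3000
        envFrom:
        - configMapRef:
            name: mongo-config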

The mongo secret also has to be in the same namespace as the Deployment that references it, for example:
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
  namespace: mongodb
  labels:
    app.kubernetes.io/component: mongodb
type: Opaque
data:
  mongodb-root-password: ""
  mongodb-passwords: ""
  mongodb-metrics-password: ""
  mongodb-replica-set-key: ""
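A Pod can only reference Secrets and ConfigMaps in its own namespace. To see which namespace each object actually landed in, you can list them explicitly (plain kubectl, using the names from the manifests above):
kubectl get secrets --all-namespaces | grep mongo-secret
kubectl get deployments --all-namespaces | grep webapp-deployment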

Related

Routing not working in Istio ingress gateway

I have a VM on Hyper-V running Kubernetes. I set up the istio-ingressgateway as you can see below.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tech-ingressgateway
  namespace: tech-ingress-ns
spec:
  selector:
    istio: ingressgateway # default istio ingressgateway defined in istio-system namespace
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - '*'
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
    hosts:
    - '*'
I open two ports: one for HTTP and one for HTTPS. I have two backend services, whose VirtualService definitions are:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: activemq-vs
  namespace: tech-ingress-ns
spec:
  hosts:
  - '*'
  gateways:
  - tech-ingressgateway
  http:
  - match:
    - uri:
        prefix: '/activemq'
    route:
    - destination:
        host: activemq-svc.tech-ns.svc.cluster.local
        port:
          number: 8161
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: userprofile-vs
  namespace: tech-ingress-ns
spec:
  hosts:
  - '*'
  gateways:
  - tech-ingressgateway
  http:
  - match:
    - uri:
        prefix: '/userprofile'
    route:
    - destination:
        host: userprofile-svc.business-ns.svc.cluster.local
        port:
          number: 7002
The istio-ingressgateway responds successfully when hit with curl 192.168.x.x, but when I try to hit the activemq backend service it fails, and I don't know why. I face the same issue while accessing the userprofile service. Here are my curl commands:
curl -i 192.168.x.x/activemq
curl -i 192.168.x.x/userprofile
curl -i 192.168.x.x/userprofile/getUserDetails
The userProfile service has three endpoints: getUserDetails, verifyNaturalOtp and updateProfile.
EDIT
Here are the Deployment and Service manifests:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: userprofile-deployment
  namespace: business-ns
  labels:
    app: userprofile
spec:
  replicas: 1
  selector:
    matchLabels:
      app: userprofile
  template:
    metadata:
      labels:
        app: userprofile
    spec:
      containers:
      - env:
        - name: APPLICATION_PORT
          valueFrom:
            configMapKeyRef:
              name: configmap-cf
              key: userProfilePort
        - name: APPLICATION_NAME
          valueFrom:
            configMapKeyRef:
              name: configmap-cf
              key: userProfileName
        - name: IAM_URL
          valueFrom:
            configMapKeyRef:
              name: configmap-cf
              key: userProfileIam
        - name: GRAPHQL_DAO_LAYER_URL
          valueFrom:
            configMapKeyRef:
              name: configmap-cf
              key: userProfileGraphqlDao
        - name: EVENT_PUBLISHER_URL
          valueFrom:
            configMapKeyRef:
              name: configmap-cf
              key: userProfilePublisher
        name: userprofile
        image: 'abc/userprofilesvc:tag1'
        imagePullPolicy: IfNotPresent
        resources: {}
        ports:
        - containerPort: 7002
---
apiVersion: v1
kind: Service
metadata:
  name: userprofile-svc
  namespace: business-ns
  labels:
    app: userprofile
spec:
  selector:
    app: userprofile
  ports:
  - name: http
    protocol: TCP
    port: 7002
    targetPort: 7002
EDIT: for kubectl describe pod activemq-deployment-5fb57d5f7c-2v9x5 -n tech-ns the output is:
Name: activemq-deployment-5fb57d5f7c-2v9x5
Namespace: tech-ns
Priority: 0
Node: kworker-2/192.168.18.223
Start Time: Fri, 27 May 2022 06:11:50 +0000
Labels: app=activemq
pod-template-hash=5fb57d5f7c
Annotations: cni.projectcalico.org/containerID: e2a5a843ee02655ed3cfc4fa538abcccc3dae34590cc61dab341465aa78565fb
cni.projectcalico.org/podIP: 10.233.107.107/32
cni.projectcalico.org/podIPs: 10.233.107.107/32
kubesphere.io/restartedAt: 2022-05-27T06:11:42.602Z
Status: Running
IP: 10.233.107.107
IPs:
IP: 10.233.107.107
Controlled By: ReplicaSet/activemq-deployment-5fb57d5f7c
Containers:
activemq:
Container ID: docker://94a1f07489f6d2db51d2fe3bfce0ed3654ea7150eb17223696363c1b7f355cd7
Image: vialogic/activemq:cluster1.0
Image ID: docker-pullable://vialogic/activemq#sha256:f3954187bf1ead0a0bc91ec5b1c654fb364bd2efaa5e84e07909d0a1ec062743
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 01 Jun 2022 06:51:19 +0000
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Mon, 30 May 2022 08:21:52 +0000
Finished: Wed, 01 Jun 2022 06:21:43 +0000
Ready: True
Restart Count: 20
Environment: <none>
Mounts:
/home/alpine/apache-activemq-5.16.4/conf/jetty-realm.properties from active-creds (rw,path="jetty-realm.properties")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nvbwv (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
active-creds:
Type: Secret (a volume populated by a Secret)
SecretName: creds
Optional: false
kube-api-access-nvbwv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 55m (x5 over 58m) kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 51m kubelet Container image "vialogic/activemq:cluster1.0" already present on machine
Normal Created 51m kubelet Created container activemq
Normal Started 51m kubelet Started container activemq
As per the pod description shared, neither the istio-init nor the istio-proxy container is injected into the application pod, so routing from the gateway to the application pod cannot happen. The namespace / deployment has to be Istio-enabled so that the Istio sidecar is injected at pod creation time. After that, routing to the application will work.
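A quick way to check whether injection is enabled, and whether the sidecar is present, with plain kubectl (the namespace names come from the manifests above):
# Namespaces need the istio-injection=enabled label
kubectl get namespace tech-ns business-ns -L istio-injection
# An injected pod shows 2/2 (app + istio-proxy) in the READY column
kubectl get pods -n tech-ns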
Your Istio sidecar is not injected into the pod. You would see an output like this in the Annotations section for a pod that has been injected:
pod-template-hash=546859454c
security.istio.io/tlsMode=istio
service.istio.io/canonical-name=userprofile
service.istio.io/canonical-revision=latest
Annotations: kubectl.kubernetes.io/default-container: svc
kubectl.kubernetes.io/default-logs-container: svc
prometheus.io/path: /stats/prometheus
prometheus.io/port: 15020
prometheus.io/scrape: true
sidecar.istio.io/status:
{"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istio-token","istiod-...
You can enforce the injection at the namespace level with the following command:
kubectl label namespace <namespace> istio-injection=enabled --overwrite
Since the injection takes place at the pod's creation, you will need to kill the running pods after the command is issued.
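One way to do that is to restart the deployments so the pods are recreated and the webhook can inject the sidecar; the names below are taken from the question:
kubectl rollout restart deployment activemq-deployment -n tech-ns
kubectl rollout restart deployment userprofile-deployment -n business-ns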
Check out the Sidecar Injection Problems document as a reference.

Should the Vault Service Account Be Using the Default Service Account API Token to Authenticate to Kubernetes?

I've been following this tutorial to set up vault and kubernetes on minikube with helm.
It seems to me the vault service account is using the default service account JWT token to access the API to authenticate. Should the vault service account instead be using its own token, not the default one?
Here are the relevant configuration steps:
# Configure the auth.
vault write auth/kubernetes/config \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    disable_iss_validation=true
# Configure the policy.
vault policy write webapp - <<EOF
path "secret/data/webapp/config" {
  capabilities = ["read"]
}
EOF
# Create the role.
vault write auth/kubernetes/role/webapp \
    bound_service_account_names=vault \
    bound_service_account_namespaces=default \
    policies=webapp \
    ttl=24h
The client (expressjs) on the webapp pod reads the mounted token to authenticate:
...
await axiosInst.post("/auth/kubernetes/login",
{jwt: fs.readFileSync(process.env.JWT_PATH, {encoding: "utf-8"}),
role: "webapp"});
...
However, the token at /var/run/secrets/kubernetes.io/serviceaccount/token on the webapp pod is the default service account token, not the vault service account token, right?
Am I doing something wrong here?
web-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-pod
  template:
    metadata:
      labels:
        app: web-pod
    spec:
      serviceAccountName: vault
      containers:
      - name: web
        image: kahunacohen/hello-k8s
        imagePullPolicy: IfNotPresent
        env:
        - name: VAULT_ADDR
          value: 'http://vault:8200'
        - name: JWT_PATH
          value: '/var/run/secrets/kubernetes.io/serviceaccount/token'
        - name: SERVICE_PORT
          value: '8080'
        envFrom:
        - configMapRef:
            name: web-configmap
        ports:
        - containerPort: 3000
          protocol: TCP
I describe the vault sa:
$ kubectl describe serviceaccounts vault
Name: vault
Namespace: default
Labels: app.kubernetes.io/instance=vault
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=vault
helm.sh/chart=vault-0.14.0
Annotations: meta.helm.sh/release-name: vault
meta.helm.sh/release-namespace: default
Image pull secrets: <none>
Mountable secrets: vault-token-jhp5q
Tokens: vault-token-jhp5q
Events: <none>
I describe the secret associated with this sa:
$ kubectl describe secret vault-token-jhp5q
Name: vault-token-jhp5q
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: vault
kubernetes.io/service-account.uid: 9a091a35-9292-4d77-abac-a7d62f312d27
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1111 bytes
namespace: 7 bytes
token: eyJhb...
But when I cat the token on my pod, it doesn't match the one above:
$ kubectl exec web-deployment-66968775cb-8td6c -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
...
pod yaml:
$ kubectl get pods web-deployment-66968775cb-8td6c -o yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2021-08-14T19:32:54Z"
generateName: web-deployment-66968775cb-
labels:
app: web-pod
app.kubernetes.io/managed-by: skaffold
pod-template-hash: 66968775cb
skaffold.dev/run-id: f58f771b-258a-4304-baf6-1cc4d9bc3029
name: web-deployment-66968775cb-8td6c
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: web-deployment-66968775cb
uid: 6e62ba08-8b56-402a-ae10-e5d95f93dc39
resourceVersion: "1444983"
uid: 991199f5-0e53-45b9-a4d7-ab1d3cc44079
spec:
containers:
- env:
- name: VAULT_ADDR
value: http://vault:8200
- name: JWT_PATH
value: /var/run/secrets/kubernetes.io/serviceaccount/token
- name: SERVICE_PORT
value: "8080"
envFrom:
- configMapRef:
name: web-configmap
image: kahunacohen/hello-k8s:fb961e142e46220f11853eba6f50eaf94510a95dc08046ad8a1517d92fcb2d85
imagePullPolicy: IfNotPresent
name: web
ports:
- containerPort: 3000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-zjkvp
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: minikube
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: vault
serviceAccountName: vault
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-zjkvp
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2021-08-14T19:32:54Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2021-08-15T15:01:02Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2021-08-15T15:01:02Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2021-08-14T19:32:54Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://4759183417a8ef0bbee8f28bf345aed373371f7ca462b9a5f5a966eba33117d2
image: kahunacohen/hello-k8s:b65b899-dirty
imageID: docker://sha256:fb961e142e46220f11853eba6f50eaf94510a95dc08046ad8a1517d92fcb2d85
lastState:
terminated:
containerID: docker://87636058668d022160149f176cd3c565bf17c239e97c55d873e42c9c074215d6
exitCode: 255
finishedAt: "2021-08-15T15:00:01Z"
reason: Error
startedAt: "2021-08-14T19:32:55Z"
name: web
ready: true
restartCount: 1
started: true
state:
running:
startedAt: "2021-08-15T15:01:01Z"
hostIP: 192.168.49.2
phase: Running
podIP: 172.17.0.10
podIPs:
- ip: 172.17.0.10
qosClass: BestEffort
startTime: "2021-08-14T19:32:54Z"
"/var/run/secrets/kubernetes.io/serviceaccount/token" on the webapp pod is the default service account token, right?
No, it depends on the Pod YAML.
Case 1: If you didn't set the service account for your pod, it uses the default service account.
Case 2: If you set the service account for your pod, it will use the jwt token of that service account. In your case, it's vault.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: vault
  ... ... ...
ServiceAccount Admission Controller
Kubernetes Auth & Vault:
You need a service account (say admin) that is used to configure Vault's Kubernetes authentication (i.e. its token becomes token_reviewer_jwt=...). It has the system:auth-delegator ClusterRole bound to it, which gives it the power to validate other service accounts' JWT tokens.
Your application runs with another service account (say client), and the Vault admin has created a Kubernetes auth role for it (i.e. bound_service_account_names=client).
Your app sends the client's JWT to Vault; Vault then sends it to kube-api together with the admin's JWT (to prove to kube-api that it is the reviewer) so that the client's JWT can be verified.
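A minimal sketch of that separation, assuming an illustrative reviewer service account called vault-auth and an application service account called webapp (neither name comes from the tutorial):
# Reviewer service account: Vault uses its token to call the TokenReview API
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-auth
  namespace: default
---
# Allow the reviewer to validate other service accounts' tokens
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-auth-tokenreview
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault-auth
  namespace: default
The Kubernetes auth role would then use bound_service_account_names=webapp (the application's own service account), and the Deployment would set serviceAccountName: webapp, so the token mounted into the pod is the one Vault expects.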

HPA showing unknown in k8s

I configured HPA using a command as shown below
kubectl autoscale deployment isamruntime-v1 --cpu-percent=20 --min=1 --max=3 --namespace=default
horizontalpodautoscaler.autoscaling/isamruntime-v1 autoscaled
However, the HPA cannot identify the CPU load.
pranam#UNKNOWN kubernetes % kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
isamruntime-v1 Deployment/isamruntime-v1 <unknown>/20% 1 3 0 3s
I read a number of articles which suggested installing metrics server. So, I did that.
pranam#UNKNOWN kubernetes % kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator configured
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader configured
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io configured
serviceaccount/metrics-server configured
deployment.apps/metrics-server configured
service/metrics-server configured
clusterrole.rbac.authorization.k8s.io/system:metrics-server configured
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server configured
I can see the metrics server.
pranam#UNKNOWN kubernetes % kubectl get pods -o wide --namespace=kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-7d88b45844-lz8zw 1/1 Running 0 22d 10.164.27.28 10.164.27.28 <none> <none>
calico-node-bsx6p 1/1 Running 0 8d 10.164.27.39 10.164.27.39 <none> <none>
calico-node-g229m 1/1 Running 0 8d 10.164.27.46 10.164.27.46 <none> <none>
calico-node-slwrh 1/1 Running 0 22d 10.164.27.28 10.164.27.28 <none> <none>
calico-node-tztjg 1/1 Running 0 8d 10.164.27.44 10.164.27.44 <none> <none>
coredns-7d6bb98ccc-d8nrs 1/1 Running 0 25d 172.30.93.205 10.164.27.28 <none> <none>
coredns-7d6bb98ccc-n28dm 1/1 Running 0 25d 172.30.93.204 10.164.27.28 <none> <none>
coredns-7d6bb98ccc-zx5jx 1/1 Running 0 25d 172.30.93.197 10.164.27.28 <none> <none>
coredns-autoscaler-848db65fc6-lnfvf 1/1 Running 0 25d 172.30.93.201 10.164.27.28 <none> <none>
dashboard-metrics-scraper-576c46d9bd-k6z85 1/1 Running 0 25d 172.30.93.195 10.164.27.28 <none> <none>
ibm-file-plugin-7c57965855-494bz 1/1 Running 0 22d 172.30.93.216 10.164.27.28 <none> <none>
ibm-iks-cluster-autoscaler-7df84fb95c-fhtgv 1/1 Running 0 2d23h 172.30.137.98 10.164.27.46 <none> <none>
ibm-keepalived-watcher-9w4gb 1/1 Running 0 8d 10.164.27.39 10.164.27.39 <none> <none>
ibm-keepalived-watcher-ps5zm 1/1 Running 0 8d 10.164.27.46 10.164.27.46 <none> <none>
ibm-keepalived-watcher-rzxbs 1/1 Running 0 8d 10.164.27.44 10.164.27.44 <none> <none>
ibm-keepalived-watcher-w6mxb 1/1 Running 0 25d 10.164.27.28 10.164.27.28 <none> <none>
ibm-master-proxy-static-10.164.27.28 2/2 Running 0 25d 10.164.27.28 10.164.27.28 <none> <none>
ibm-master-proxy-static-10.164.27.39 2/2 Running 0 8d 10.164.27.39 10.164.27.39 <none> <none>
ibm-master-proxy-static-10.164.27.44 2/2 Running 0 8d 10.164.27.44 10.164.27.44 <none> <none>
ibm-master-proxy-static-10.164.27.46 2/2 Running 0 8d 10.164.27.46 10.164.27.46 <none> <none>
ibm-storage-watcher-67466b969f-ps55m 1/1 Running 0 22d 172.30.93.217 10.164.27.28 <none> <none>
kubernetes-dashboard-c6b4b9d77-27zwb 1/1 Running 2 22d 172.30.93.218 10.164.27.28 <none> <none>
metrics-server-79d847cf58-6frsf 2/2 Running 0 3m23s 172.30.93.226 10.164.27.28 <none> <none>
public-crbro6um6l04jalpqrsl5g-alb1-8465f75bb4-88vl5 4/4 Running 0 11h 172.30.93.225 10.164.27.28 <none> <none>
public-crbro6um6l04jalpqrsl5g-alb1-8465f75bb4-vx68d 4/4 Running 0 11h 172.30.137.104 10.164.27.46 <none> <none>
vpn-58b48cdc7c-4lp9c 1/1 Running 0 25d 172.30.93.193 10.164.27.28 <none> <none>
I am using Istio and sysdig. Not sure if that breaks anything. My k8s versions are shown below.
pranam#UNKNOWN kubernetes % kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:34:02Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.7+IKS", GitCommit:"3305158dfe9ee1f89f596ef260135dcba881848c", GitTreeState:"clean", BuildDate:"2020-06-17T18:32:22Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
My YAML file is
#Assumes create-docker-store-secret.sh used to create dockerlogin secret
#Assumes create-secrets.sh used to create key file, sam admin, and cfgsvc secrets
apiVersion: storage.k8s.io/v1beta1
# Create StorageClass with gidallocate=true to allow non-root user access to mount
# This is used by PostgreSQL container
kind: StorageClass
metadata:
name: ibmc-file-bronze-gid
labels:
kubernetes.io/cluster-service: "true"
provisioner: ibm.io/ibmc-file
parameters:
type: "Endurance"
iopsPerGB: "2"
sizeRange: "[1-12000]Gi"
mountOptions: nfsvers=4.1,hard
billingType: "hourly"
reclaimPolicy: "Delete"
classVersion: "2"
gidAllocate: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldaplib
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldapslapd
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldapsecauthority
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgresqldata
spec:
storageClassName: ibmc-file-bronze-gid
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: isamconfig
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openldap
labels:
app: openldap
spec:
selector:
matchLabels:
app: openldap
replicas: 1
template:
metadata:
labels:
app: openldap
spec:
volumes:
- name: ldaplib
persistentVolumeClaim:
claimName: ldaplib
- name: ldapslapd
persistentVolumeClaim:
claimName: ldapslapd
- name: ldapsecauthority
persistentVolumeClaim:
claimName: ldapsecauthority
- name: openldap-keys
secret:
secretName: openldap-keys
containers:
- name: openldap
image: ibmcom/isam-openldap:9.0.7.0
ports:
- containerPort: 636
env:
- name: LDAP_DOMAIN
value: ibm.com
- name: LDAP_ADMIN_PASSWORD
value: Passw0rd
- name: LDAP_CONFIG_PASSWORD
value: Passw0rd
volumeMounts:
- mountPath: /var/lib/ldap
name: ldaplib
- mountPath: /etc/ldap/slapd.d
name: ldapslapd
- mountPath: /var/lib/ldap.secAuthority
name: ldapsecauthority
- mountPath: /container/service/slapd/assets/certs
name: openldap-keys
# This line is needed when running on Kubernetes 1.9.4 or above
args: [ "--copy-service"]
# useful for debugging startup issues - can run bash, then exec to the container and poke around
# command: [ "/bin/bash"]
# args: [ "-c", "while /bin/true ; do sleep 5; done" ]
# Just this line to get debug output from openldap startup
# args: [ "--loglevel" , "trace","--copy-service"]
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: openldap
labels:
app: openldap
spec:
ports:
- port: 636
name: ldaps
protocol: TCP
selector:
app: openldap
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgresql
labels:
app: postgresql
spec:
selector:
matchLabels:
app: postgresql
replicas: 1
template:
metadata:
labels:
app: postgresql
spec:
securityContext:
runAsNonRoot: true
runAsUser: 70
fsGroup: 0
volumes:
- name: postgresqldata
persistentVolumeClaim:
claimName: postgresqldata
- name: postgresql-keys
secret:
secretName: postgresql-keys
containers:
- name: postgresql
image: ibmcom/isam-postgresql:9.0.7.0
ports:
- containerPort: 5432
env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: Passw0rd
- name: POSTGRES_DB
value: isam
- name: POSTGRES_SSL_KEYDB
value: /var/local/server.pem
- name: PGDATA
value: /var/lib/postgresql/data/db-files/
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgresqldata
- mountPath: /var/local
name: postgresql-keys
# useful for debugging startup issues - can run bash, then exec to the container and poke around
# command: [ "/bin/bash"]
# args: [ "-c", "while /bin/true ; do sleep 5; done" ]
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: postgresql
spec:
ports:
- port: 5432
name: postgresql
protocol: TCP
selector:
app: postgresql
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamconfig
labels:
app: isamconfig
spec:
selector:
matchLabels:
app: isamconfig
replicas: 1
template:
metadata:
labels:
app: isamconfig
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
persistentVolumeClaim:
claimName: isamconfig
- name: isamconfig-logs
emptyDir: {}
containers:
- name: isamconfig
image: ibmcom/isam:9.0.7.1_IF4
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamconfig-logs
env:
- name: SERVICE
value: config
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: ADMIN_PWD
valueFrom:
secretKeyRef:
name: samadmin
key: adminpw
readinessProbe:
tcpSocket:
port: 9443
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 9443
initialDelaySeconds: 120
periodSeconds: 20
# command: [ "/sbin/bootstrap.sh" ]
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamconfig
spec:
# To make the LMI internet facing, make it a NodePort
type: NodePort
ports:
- port: 9443
name: isamconfig
protocol: TCP
# make this one statically allocated
nodePort: 30442
selector:
app: isamconfig
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamwrprp1-v1
labels:
app: isamwrprp1
spec:
selector:
matchLabels:
app: isamwrprp1
version: v1
replicas: 1
template:
metadata:
labels:
app: isamwrprp1
version: v1
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamwrprp1-logs
emptyDir: {}
containers:
- name: isamwrprp1
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamwrprp1-logs
env:
- name: SERVICE
value: webseal
- name: INSTANCE
value: rp1
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamwrprp1-v2
labels:
app: isamwrprp1
spec:
selector:
matchLabels:
app: isamwrprp1
version: v2
replicas: 1
template:
metadata:
labels:
app: isamwrprp1
version: v2
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamwrprp1-logs
emptyDir: {}
containers:
- name: isamwrprp1
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamwrprp1-logs
env:
- name: SERVICE
value: webseal
- name: INSTANCE
value: rp1
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamwrprp1
spec:
type: NodePort
sessionAffinity: ClientIP
ports:
- port: 443
name: isamwrprp1
protocol: TCP
nodePort: 30443
selector:
app: isamwrprp1
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamwrpmobile
labels:
app: isamwrpmobile
spec:
selector:
matchLabels:
app: isamwrpmobile
replicas: 1
template:
metadata:
labels:
app: isamwrpmobile
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamwrpmobile-logs
emptyDir: {}
containers:
- name: isamwrpmobile
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamwrpmobile-logs
env:
- name: SERVICE
value: webseal
- name: INSTANCE
value: mobile
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamwrpmobile
spec:
type: NodePort
sessionAffinity: ClientIP
ports:
- port: 443
name: isamwrpmobile
protocol: TCP
nodePort: 30444
selector:
app: isamwrpmobile
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamruntime-v1
labels:
app: isamruntime
spec:
selector:
matchLabels:
app: isamruntime
version: v1
replicas: 1
template:
metadata:
labels:
app: isamruntime
version: v1
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamruntime-logs
emptyDir: {}
containers:
- name: isamruntime
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamruntime-logs
env:
- name: SERVICE
value: runtime
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamruntime-v2
labels:
app: isamruntime
spec:
selector:
matchLabels:
app: isamruntime
version: v2
replicas: 1
template:
metadata:
labels:
app: isamruntime
version: v2
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamruntime-logs
emptyDir: {}
containers:
- name: isamruntime
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamruntime-logs
env:
- name: SERVICE
value: runtime
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
apiVersion: v1
kind: Service
metadata:
name: isamruntime
spec:
ports:
- port: 443
name: isamruntime
protocol: TCP
selector:
app: isamruntime
---
I am not sure why the CPU load is shown as unknown. Have I missed a step or made any mistake? Can someone help?
Regards
Pranam
Based on the issue shown, it appears that you have not set resource requests and limits in the deployment YAML.
If you run kubectl explain deployment and drill down to the container spec, you will see:
resources:
  limits:
    cpu:
    memory:
  requests:
    cpu:
    memory:
If you add values for these keys (the CPU request in particular, since the HPA computes utilisation against it), the HPA issue should be resolved; see the sketch below.
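For example, a sketch of what that could look like on the isamruntime container; the values are placeholders to illustrate the shape, not tuned numbers:
    spec:
      containers:
      - name: isamruntime
        image: ibmcom/isam:9.0.7.1_IF4
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: "1"
            memory: 1Gi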
Did you specify the resources block when you defined your app's deployment?
I don't remember where it was stated, but I encountered this case once when I forgot it.
More information about managing resources for containers: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
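Once the requests are in place and the metrics server is healthy, you can confirm that metrics are being collected before re-checking the HPA (standard kubectl commands, nothing specific to this setup):
kubectl top pods                  # should show CPU/memory usage per pod
kubectl get hpa isamruntime-v1    # TARGETS should change from <unknown>/20% to an actual percentage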

pod has unbound immediate PersistentVolumeClaims (repeated 3 times)

What is wrong with the configuration below?
# config for es data node
apiVersion: v1
kind: ConfigMap
metadata:
namespace: infra
name: elasticsearch-data-config
labels:
app: elasticsearch
role: data
data:
elasticsearch.yml: |-
cluster.name: ${CLUSTER_NAME}
node.name: ${NODE_NAME}
discovery.seed_hosts: ${NODE_LIST}
cluster.initial_master_nodes: ${MASTER_NODES}
network.host: 0.0.0.0
node:
master: false
data: true
ingest: false
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
---
# service for es data node
apiVersion: v1
kind: Service
metadata:
namespace: infra
name: elasticsearch-data
labels:
app: elasticsearch
role: data
spec:
ports:
- port: 9300
name: transport
selector:
app: elasticsearch
role: data
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
namespace: infra
name: elasticsearch-data
labels:
app: elasticsearch
role: data
spec:
serviceName: "elasticsearch-data"
replicas: 1
template:
metadata:
labels:
app: elasticsearch-data
role: data
spec:
containers:
- name: elasticsearch-data
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
env:
- name: CLUSTER_NAME
value: elasticsearch
- name: NODE_NAME
value: elasticsearch-data
- name: NODE_LIST
value: elasticsearch-master,elasticsearch-data,elasticsearch-client
- name: MASTER_NODES
value: elasticsearch-master
- name: "ES_JAVA_OPTS"
value: "-Xms300m -Xmx300m"
ports:
- containerPort: 9300
name: transport
volumeMounts:
- name: config
mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
readOnly: true
subPath: elasticsearch.yml
- name: elasticsearch-data-persistent-storage
mountPath: /data/db
volumes:
- name: config
configMap:
name: elasticsearch-data-config
initContainers:
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: elasticsearch-data-persistent-storage
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
kubectl describe output for the StatefulSet pod:
Name: elasticsearch-data-0
Namespace: infra
Priority: 0
Node: <none>
Labels: app=elasticsearch-data
controller-revision-hash=elasticsearch-data-76bdf989b6
role=data
statefulset.kubernetes.io/pod-name=elasticsearch-data-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/elasticsearch-data
Init Containers:
increase-vm-max-map:
Image: busybox
Port: <none>
Host Port: <none>
Command:
sysctl
-w
vm.max_map_count=262144
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9nhmg (ro)
Containers:
elasticsearch-data:
Image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
Port: 9300/TCP
Host Port: 0/TCP
Environment:
CLUSTER_NAME: elasticsearch
NODE_NAME: elasticsearch-data
NODE_LIST: elasticsearch-master,elasticsearch-data,elasticsearch-client
MASTER_NODES: elasticsearch-master
ES_JAVA_OPTS: -Xms300m -Xmx300m
Mounts:
/data/db from elasticsearch-data-persistent-storage (rw)
/usr/share/elasticsearch/config/elasticsearch.yml from config (ro,path="elasticsearch.yml")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9nhmg (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
elasticsearch-data-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: elasticsearch-data-persistent-storage-elasticsearch-data-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: elasticsearch-data-config
Optional: false
default-token-9nhmg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9nhmg
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 46s (x4 over 3m31s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
kubectl get sc
NAME PROVISIONER AGE
standard (default) kubernetes.io/gce-pd 5d19h
kubectl get pv
No resources found in infra namespace.
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
elasticsearch-data-persistent-storage-elasticsearch-data-0 Pending gp2 8h
It looks like there is some issue with your PVC.
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
elasticsearch-data-persistent-storage-elasticsearch-data-0 Pending gp2 8h
As you can see, your PV is also not created. I think there is an issue with your storage class; it looks like the gp2 storage class is not available in your cluster.
Either apply this YAML if you are on AWS EKS:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
or simply change the storage class to standard on GCP GKE.
From the docs here
The storage for a given Pod must either be provisioned by a
PersistentVolume Provisioner based on the requested storage class, or
pre-provisioned by an admin.
There should be a StorageClass which can dynamically provision the PV, and its storageClassName should be referenced in the volumeClaimTemplates; otherwise there needs to be a pre-provisioned PV which can satisfy the PVC.
volumeClaimTemplates:
- metadata:
    name: elasticsearch-data-persistent-storage
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: "standard"
    resources:
      requests:
        storage: 10Gi
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  namespace: default
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: "elasticsearch-data"
  replicas: 1
  template:
    metadata:
      labels:
        app: elasticsearch-data
        role: data
    spec:
      containers:
      - name: elasticsearch-data
        image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-data
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms300m -Xmx300m"
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: elasticsearch-data-persistent-storage
          mountPath: /data/db
      volumes:
      - name: config
        configMap:
          name: elasticsearch-data-config
      initContainers:
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "standard"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
---
This worked for me. Like Avinash said, I simply changed the storage class to standard on GCP GKE.

What is Environment: <none> in a kubectl describe pod command?

When I do a kubectl describe pod, I can see
Environment: <none>
just after the secrets. I wonder what it is. Is it possible to assign secrets to an environment (local, dev, staging, prod, for instance)?
➜ espace-client git:(master) ✗ kubectl describe pod -n espace-client espace-client-client-6b7b994b4c-gx58t
Name: espace-client-client-6b7b994b4c-gx58t
Namespace: espace-client
Priority: 0
Node: minikube/192.168.0.85
Start Time: Fri, 27 Sep 2019 11:37:06 +0200
Labels: app=espace-client-client
pod-template-hash=6b7b994b4c
Annotations: kubectl.kubernetes.io/restartedAt: 2019-09-27T11:37:06+02:00
Status: Running
IP: 172.17.0.21
IPs: <none>
Controlled By: ReplicaSet/espace-client-client-6b7b994b4c
Containers:
espace-client-client:
Container ID: docker://b3ee1efe45bb8ed9f27aca60e3bfecc1d7e29bc12600787d8d674ffb62ffc3f4
Image: espace_client_client:local
Image ID: docker://sha256:4cf73af7615ebfd30e7a8b0126154fa12b605dd34ead7cb0eefc43cd3ccc869b
Port: 3000/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 27 Sep 2019 11:37:09 +0200
Ready: True
Restart Count: 0
Environment Variables from:
espace-client-client-env Secret Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lzb8h (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-lzb8h:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lzb8h
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
The environment section contains any environment variables defined as part of the PodSpec:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"
Most likely it shows <none> because no env vars were defined for the Pod. You can also assign Secrets to the environment; they would show up in the Environment section like this:
SECURITY_JWT_PRIVATEKEY: <set to the key 'privateKey' in secret 'tokens'> Optional: false
For example:
apiVersion: v1
kind: Pod
metadata:
  name: secrets-demo
  labels:
    purpose: demonstrate-secrets-in-env
spec:
  containers:
  - name: secret-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
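In the describe output you posted, the secret is wired in with envFrom rather than env, which is why it appears under "Environment Variables from" while "Environment" itself stays <none>. A sketch of that pattern, reusing the secret name from your output (the pod and container names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-demo
spec:
  containers:
  - name: client
    image: espace_client_client:local
    envFrom:
    - secretRef:
        name: espace-client-client-env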