Unable to scale application - ContainerCreating - Kubernetes

I am using this deployment template (is that what it's called?). Two pods are running but two are stuck in ContainerCreating. If I scale to 2 replicas, one runs and one is stuck in ContainerCreating. How can I get all 4 pods running?
The manifest below creates a PV in AWS and binds the PVC. Data is persistent, but I cannot get past the ContainerCreating issue.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
activemq-deployment-58cc677d85-497xt 1/1 Running 0 2m
activemq-deployment-58cc677d85-b4tpx 0/1 ContainerCreating 0 1m
activemq-deployment-58cc677d85-hprpv 1/1 Running 0 1m
activemq-deployment-58cc677d85-vdtcs 0/1 ContainerCreating 0 1m
Describe gives this:
$ kubectl describe deployments activemq-deployment
Name: activemq-deployment
Namespace: default
CreationTimestamp: Wed, 27 Feb 2019 21:49:11 -0800
Labels: app=activemq
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=activemq
Replicas: 4 desired | 4 updated | 4 total | 2 available | 2 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=activemq
Containers:
activemq:
Image: activemq:1.0
Port: 8161/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/opt/apache-activemq-5.15.6/data from activemq-data (rw)
Volumes:
activemq-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: amq-pv-claim
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available False MinimumReplicasUnavailable
OldReplicaSets: <none>
NewReplicaSet: activemq-deployment-58cc677d85 (4/4 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 3m8s deployment-controller Scaled up replica set activemq-deployment-58cc677d85 to 1
Normal ScalingReplicaSet 103s deployment-controller Scaled up replica set activemq-deployment-58cc677d85 to 4
Declaration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: activemq-deployment
  labels:
    app: activemq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      securityContext:
        fsGroup: 2000
      containers:
      - name: activemq
        image: activemq:1.0
        ports:
        - containerPort: 8161
        volumeMounts:
        - name: activemq-data
          mountPath: /opt/apache-activemq-5.15.6/data
          readOnly: false
      volumes:
      - name: activemq-data
        persistentVolumeClaim:
          claimName: amq-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: amq-nodeport-service
spec:
  selector:
    app: activemq
  ports:
  - port: 8161
    targetPort: 8161
  type: NodePort
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: amq-pv-claim
spec:
  #storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Use a StatefulSet for containers that need persistent state; a Deployment is not recommended for running stateful containers. With this Deployment all four replicas share the single amq-pv-claim PVC, and an EBS-backed ReadWriteOnce volume can only be attached to one node at a time, so replicas scheduled on other nodes stay stuck in ContainerCreating.
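For reference, a minimal sketch of the StatefulSet approach (the names, replica count, and storage size simply mirror the Deployment above; adjust as needed). The volumeClaimTemplates block creates a separate PVC, and therefore a separate EBS volume, for each replica:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: activemq
spec:
  serviceName: activemq          # requires a headless Service with this name
  replicas: 4
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      securityContext:
        fsGroup: 2000
      containers:
      - name: activemq
        image: activemq:1.0
        ports:
        - containerPort: 8161
        volumeMounts:
        - name: activemq-data
          mountPath: /opt/apache-activemq-5.15.6/data
  volumeClaimTemplates:          # one 2Gi PVC is created per replica
  - metadata:
      name: activemq-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 2Gi

Each pod then gets a stable identity (activemq-0 through activemq-3) and its own volume, so scaling no longer contends for a single ReadWriteOnce claim.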

Related

Kubernetes and mongo, PV, PVC

Hi, just a newbie question.
I managed to implement a PV and PVC for MongoDB. I am using a local PV, not one in the cloud.
Is there a way to keep the data after a container restart when Kubernetes runs on my PC?
I'm not sure I'm describing this correctly, but what I need is for the Mongo data to survive a restart. What is the best way to do this? (No MongoDB Atlas.)
UPDATE:
I managed to make the tickets service database work, but two other services just won't work. I have updated the YAML files below so you can see the current state. auth-mongo is set up just the same as tickets-mongo, so why doesn't it work?
The tickets-mongo-depl YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tickets-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tickets-mongo
  template:
    metadata:
      labels:
        app: tickets-mongo
    spec:
      containers:
        - name: tickets-mongo
          image: mongo
          args: ["--dbpath", "data/auth"]
          livenessProbe:
            exec:
              command:
                - mongo
                - --disableImplicitSessions
                - --eval
                - "db.adminCommand('ping')"
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          volumeMounts:
            - mountPath: /data/auth
              name: tickets-data
      volumes:
        - name: tickets-data
          persistentVolumeClaim:
            claimName: tickets-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: tickets-mongo-srv
spec:
  selector:
    app: tickets-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017
auth-mongo-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          args: ["--dbpath", "data/db"]
          livenessProbe:
            exec:
              command:
                - mongo
                - --disableImplicitSessions
                - --eval
                - "db.adminCommand('ping')"
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          volumeMounts:
            - mountPath: /data/db
              name: auth-data
      volumes:
        - name: auth-data
          persistentVolumeClaim:
            claimName: auth-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-auth 1Gi RWO Retain Bound default/auth-pvc auth 78m
pv-orders 1Gi RWO Retain Bound default/orders-pvc orders 78m
pv-tickets 1Gi RWO Retain Bound default/tickets-pvc tickets 78m
I'm using Mongo containers for the tickets, orders, and auth services.
Just adding some info to make things clearer.
NAME READY STATUS RESTARTS AGE
auth-depl-66c5d54988-ffhwc 1/1 Running 0 36m
auth-mongo-depl-594b98fcc5-k9hj8 1/1 Running 0 36m
client-depl-787cf6c7c6-xxks9 1/1 Running 0 36m
expiration-depl-864d846445-b95sh 1/1 Running 0 36m
expiration-redis-depl-64bd9fdb95-sg7fc 1/1 Running 0 36m
nats-depl-7d6c7dc46-m6mcg 1/1 Running 0 36m
orders-depl-5478cf4dfd-zmngj 1/1 Running 0 36m
orders-mongo-depl-5f974847d7-bz9s4 1/1 Running 0 36m
payments-depl-78f85d94fd-4zs55 1/1 Running 0 36m
payments-mongo-depl-5d5c47494b-7zjrl 1/1 Running 0 36m
tickets-depl-84d59fd47c-cs4k5 1/1 Running 0 36m
tickets-mongo-depl-66798d9874-cfbqb 1/1 Running 0 36m
Example PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-tickets
  labels:
    type: local
spec:
  storageClassName: tickets
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp"
All I had to do was change the hostPath path in each PV. Using the same path for every PV makes the app fail.
pv1:
  hostPath:
    path: "/path/x1"
pv2:
  hostPath:
    path: "/path/x2"
Like so, just not the same path.
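To make the pattern concrete, here is a minimal sketch of two such PVs; the names and storageClassName values follow the kubectl get pv output above, while the /tmp/... paths are only illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-auth
spec:
  storageClassName: auth
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/auth"        # each PV gets its own directory
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-tickets
spec:
  storageClassName: tickets
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/tickets"     # different from the auth PV's path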

Error: 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims

I am deploying a CouchDB cluster on Kubernetes.
It worked until I tried to scale it.
When I scale my StatefulSet and describe couchdb-3, I get this error:
0/3 nodes are available: 3 pod has unbound immediate
PersistentVolumeClaims.
And this error when I describe the HPA:
invalid metrics (1 invalid out of 1), first error is: failed to get
cpu utilization: missing request for cpu
failed to get cpu utilization: missing request for cpu
Running "kubectl get pod -o wide" gives this result:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
couchdb-0 1/1 Running 0 101m 10.244.2.13 node2 <none> <none>
couchdb-1 1/1 Running 0 101m 10.244.2.14 node2 <none> <none>
couchdb-2 1/1 Running 0 100m 10.244.2.15 node2 <none> <none>
couchdb-3 0/1 Pending 0 15m <none> <none> <none> <none>
How can I fix it?
Kubernetes Version: 1.22.4
Docker Version 20.10.11, build dea9396
Ubuntu 20.04
My hpa file:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-couchdb
spec:
  maxReplicas: 16
  minReplicas: 6
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: couchdb
  targetCPUUtilizationPercentage: 50
pv.yaml:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couch-vol-0
  labels:
    volume: couch-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.100
    path: "/var/couchnfs/couchdb-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couch-vol-1
  labels:
    volume: couch-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.100
    path: "/var/couchnfs/couchdb-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couch-vol-2
  labels:
    volume: couch-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.100
    path: "/var/couchnfs/couchdb-2"
I set up the NFS export in /etc/exports: /var/couchnfs 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
statefulset.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: couchdb
  labels:
    app: couch
spec:
  replicas: 3
  serviceName: "couch-service"
  selector:
    matchLabels:
      app: couch
  template:
    metadata:
      labels:
        app: couch # pod label
    spec:
      containers:
        - name: couchdb
          image: couchdb:2.3.1
          imagePullPolicy: "Always"
          env:
            - name: NODE_NETBIOS_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NODENAME
              value: $(NODE_NETBIOS_NAME).couch-service # FQDN in vm.args
            - name: COUCHDB_USER
              value: admin
            - name: COUCHDB_PASSWORD
              value: admin
            - name: COUCHDB_SECRET
              value: b1709267
            - name: ERL_FLAGS
              value: "-name couchdb#$(NODENAME)"
            - name: ERL_FLAGS
              value: "-setcookie b1709267" # the "password" used when nodes connect to each other
          ports:
            - name: couchdb
              containerPort: 5984
            - name: epmd
              containerPort: 4369
            - containerPort: 9100
          volumeMounts:
            - name: couch-pvc
              mountPath: /opt/couchdb/data
  volumeClaimTemplates:
    - metadata:
        name: couch-pvc
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
        selector:
          matchLabels:
            volume: couch-volume
You have 3 persistent volumes and 3 pods, each claiming one. A PV can only be bound to a single PVC, and volumeClaimTemplates creates a new PVC for every StatefulSet replica, so the fourth pod's claim has no matching PV left and stays unbound.
Since you are using NFS as the backend, you can use dynamic provisioning of persistent volumes, for example:
https://github.com/openebs/dynamic-nfs-provisioner
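Alternatively, with static provisioning you can simply add one more PV that matches the volumeClaimTemplates selector. A minimal sketch following the existing pattern (the couchdb-3 export path is an assumption that mirrors the other three; the directory must exist on the NFS server):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: couch-vol-3
  labels:
    volume: couch-volume     # matches the volumeClaimTemplates selector
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.100
    path: "/var/couchnfs/couchdb-3"   # assumed path, mirroring couchdb-0..2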

"Must specify limits.cpu" error during pod deployment even though cpu limit is specified

I am trying to run a test pod with the OpenShift CLI:
$ oc run nginx --image=nginx --limits=cpu=2,memory=4Gi
deploymentconfig.apps.openshift.io/nginx created
$ oc describe deploymentconfig.apps.openshift.io/nginx
Name: nginx
Namespace: myproject
Created: 12 seconds ago
Labels: run=nginx
Annotations: <none>
Latest Version: 1
Selector: run=nginx
Replicas: 1
Triggers: Config
Strategy: Rolling
Template:
Pod Template:
Labels: run=nginx
Containers:
nginx:
Image: nginx
Port: <none>
Host Port: <none>
Limits:
cpu: 2
memory: 4Gi
Environment: <none>
Mounts: <none>
Volumes: <none>
Deployment #1 (latest):
Name: nginx-1
Created: 12 seconds ago
Status: New
Replicas: 0 current / 0 desired
Selector: deployment=nginx-1,deploymentconfig=nginx,run=nginx
Labels: openshift.io/deployment-config.name=nginx,run=nginx
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal DeploymentCreated 12s deploymentconfig-controller Created new replication controller "nginx-1" for version 1
Warning FailedCreate 1s (x12 over 12s) deployer-controller Error creating deployer pod: pods "nginx-1-deploy" is forbidden: failed quota: quota-svc-myproject: must specify limits.cpu,limits.memory
I get "must specify limits.cpu,limits.memory" error, despite both limits being present in the same describe output.
What might be the problem and how do I fix it?
I found a solution!
Part of the error message was "Error creating deployer pod". That means the problem is not with my pod, but with the deployer pod that performs my pod's deployment.
It seems the quota in my project affects deployer pods as well.
I couldn't find a way to set deployer pod limits with the CLI, so I made a DeploymentConfig:
kind: "DeploymentConfig"
apiVersion: "v1"
metadata:
name: "test-app"
spec:
template:
metadata:
labels:
name: "test-app"
spec:
containers:
- name: "test-app"
image: "nginxinc/nginx-unprivileged"
resources:
limits:
cpu: "2000m"
memory: "20Gi"
ports:
- containerPort: 8080
protocol: "TCP"
replicas: 1
selector:
name: "test-app"
triggers:
- type: "ConfigChange"
- type: "ImageChange"
imageChangeParams:
automatic: true
containerNames:
- "test-app"
from:
kind: "ImageStreamTag"
name: "nginx-unprivileged:latest"
strategy:
type: "Rolling"
resources:
limits:
cpu: "2000m"
memory: "20Gi"
As you can see, two sets of limits are specified here: one for the container and one for the deployment strategy.
With this configuration it worked fine!
It looks like you have a resource quota in place and the limits you specified exceed what it allows. Describe the resource quota with oc describe quota quota-svc-myproject and adjust your configs accordingly.
A good reference is https://docs.openshift.com/container-platform/3.11/dev_guide/compute_resources.html
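If the quota is what blocks the deployer pod, one further option (my suggestion, not part of the original answers) is to add a LimitRange to the project, so that pods which do not set their own limits, such as the deployer pod, receive defaults and pass the quota check. A minimal sketch with assumed values:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits          # hypothetical name
  namespace: myproject          # the project from the describe output above
spec:
  limits:
  - type: Container
    default:                    # applied as limits when a container sets none
      cpu: 500m
      memory: 512Mi
    defaultRequest:             # applied as requests when a container sets none
      cpu: 100m
      memory: 256Mi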

GKE Pod not scheduled in different namespace

I want to deploy a monstache deployment in my already existing namespace "test-namespace". When I deploy it in the "default" namespace it works, but when I deploy it in "test-namespace" the pod does not get scheduled.
kubectl get pods -n test-namespace monstache-74466dc7-5tnrr -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
monstache-74466dc7-5tnrr 0/1 Pending 0 57m <none> <none> <none> <none>
and:
kubectl describe pods -n test-namespace monstache-74466dc7-5tnrr
Name: monstache-74466dc7-5tnrr
Namespace: test-namespace
Priority: 0
Node: <none>
Labels: app=monstache
pod-template-hash=74466dc7
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/monstache-74466dc7
Containers:
monstache:
Image: rwynn/monstache:latest
Port: <none>
Host Port: <none>
Command:
/bin/monstache
-f
/app/monstache.test.config.toml
Environment:
MONSTACHE_DIRECT_READ_NS: xxx.XXX
MONSTACHE_CHANGE_STREAM_NS: xxx.XXX
MONSTACHE_MONGO_URL: mongodb://xxx?replicaSet=rs0
MONSTACHE_ES_USER: elastic
MONSTACHE_ES_PASS: XXX
Mounts:
/app from monstache-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qmcwm (ro)
Volumes:
monstache-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: monstache-config
Optional: false
default-token-qmcwm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qmcwm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
and:
kubectl get events -n test-namespace
LAST SEEN TYPE REASON OBJECT MESSAGE
55m Normal SuccessfulCreate replicaset/monstache-74466dc7 Created pod: monstache-74466dc7-snrdb
55m Normal SuccessfulCreate replicaset/monstache-74466dc7 Created pod: monstache-74466dc7-5tnrr
55m Normal ScalingReplicaSet deployment/monstache Scaled up replica set monstache-74466dc7 to 1
55m Normal ScalingReplicaSet deployment/monstache Scaled up replica set monstache-74466dc7 to 1
and:
This is my monstache deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monstache
  namespace: test-namespace
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: monstache
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: monstache
    spec:
      containers:
      - command:
        - /bin/monstache
        - -f
        - /app/monstache.test.config.toml
        env:
        - name: MONSTACHE_DIRECT_READ_NS
          value: xxx.xxx
        - name: MONSTACHE_CHANGE_STREAM_NS
          value: xxx.xxx
        - name: MONSTACHE_MONGO_URL
          value: mongodb://mongodb-service:27017/xxx?replicaSet=rs0
        - name: MONSTACHE_ES_USER
          value: elastic
        - name: MONSTACHE_ES_PASS
          value: XXXX
        image: rwynn/monstache:latest
        imagePullPolicy: Always
        name: monstache
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /app
          name: monstache-config
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: monstache-config
        name: monstache-config
---
apiVersion: v1
data:
  monstache.test.config.toml: |
    resume = true
    gzip = true
    elasticsearch-urls = ["https://elasticsearch:9200"]
    elasticsearch-max-conns = 10
    elasticsearch-max-seconds = 1
    elasticsearch-max-docs = 1
    #namespace-regex = '*'
    verbose = false
    enable-http-server = true
    elasticsearch-validate-pem-file = false
    [[mapping]]
    namespace = "XXX.XXX"
    index = "XXX"
kind: ConfigMap
metadata:
  name: monstache-config
  namespace: test-namespace
A few more things to know:
- I already have pods scheduled in that namespace.
- I tried deleting the deployment and re-creating it.
- I even created a new node pool and tried to schedule the deployment there; that didn't work either.
- I searched for a pod count limit and pod quota, and there is no conflict.
- I have 12 namespaces in that GKE cluster.
- I have 113 pods in total in that GKE cluster.
- I have some successfully scheduled monstache deployments in other namespaces in that cluster.
- It happens in the 2 most recent namespaces I've created.
Any clues?
This turned out to be a bug. Re-deploying on GKE version 1.17.14-gke.1200 solved the problem.

kubernetes volume mount timeout

I am using a PVC to attach a volume to a Grafana deployment, but it times out and the container stays stuck in the creating stage.
Storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: grafana-storagetest
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  iopsPerGB: "10"
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-claimtest
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  storageClassName: grafana-storagetest
  resources:
    requests:
      storage: 10G
Service
apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: grafana-proxytest_mapping
      prefix: /v1/anonymous/grafana-proxytest
      rewrite: /
      service: grafana-proxytest.grafana-proxytest:8080
      timeout_ms: 20000
      connect_timeout_ms: 20000
  labels:
    app: grafana-proxytest
  name: grafana-proxytest
  namespace: grafana-proxytest
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: grafana-proxytest
  type: ClusterIP
status:
  loadBalancer: {}
Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "20"
  labels:
    version: v1
  name: grafana-proxytest-v1
  namespace: grafana-proxytest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana-proxytest
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: grafana-proxytest
        version: v1
    spec:
      containers:
      - image: <aws_ecr>
        imagePullPolicy: Always
        name: grafana-proxytest
        ports:
        - containerPort: 3000
          protocol: TCP
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafanapdtest
      - image: grafana/grafana:latest
        imagePullPolicy: Always
        name: grafanatest
        ports:
        - containerPort: 3000
          protocol: TCP
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      volumes:
      - name: grafanapdtest
        persistentVolumeClaim:
          claimName: grafana-claimtest
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
Pod status
grafana-proxytest-v1-7cb5b6b6cf-z5zml 0/2 ContainerCreating 0 4m18s
Describe Pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4m14s (x2 over 4m14s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 7 times)
Normal Scheduled 4m12s default-scheduler Successfully assigned grafana-proxytest/grafana-proxytest-v1-7cb5b6b6cf-z5zml to ip-10-10-107-59.ap-southeast-1.compute.internal
Normal SuccessfulAttachVolume 4m10s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-f8ad51be-74c5-11ea-8623-068411799338"
Warning FailedMount 2m9s kubelet, ip-10-10-107-59.ap-southeast-1.compute.internal Unable to mount volumes for pod "grafana-proxytest-v1-7cb5b6b6cf-z5zml_grafana-proxytest(fcf38cab-74c5-11ea-8623-068411799338)": timeout expired waiting for volumes to attach or mount for pod "grafana-proxytest"/"grafana-proxytest-v1-7cb5b6b6cf-z5zml". list of unmounted volumes=[grafanapdtest]. list of unattached volumes=[grafanapdtest default-token-pzxdk]
Expected result: the volume should attach properly and the pod should be in Running status.
I checked, and the StorageClass, PVC, and PV are all created.
Edit 1
I added the namespace to the PVC as below, but it still fails:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-claimtest
  namespace: grafana-proxytest
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  storageClassName: grafana-storagetest
  resources:
    requests:
      storage: 10G
I am still getting the error below:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m18s default-scheduler Successfully assigned grafana-proxytest/grafana-proxytest-v1-7cb5b6b6cf-gpk4l to ip-10-10-107-59.ap-southeast-1.compute.internal
Normal SuccessfulAttachVolume 2m16s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-5bbc63a3-755f-11ea-920e-0a280935f44c"
Warning FailedMount 15s kubelet, ip-10-10-107-59.ap-southeast-1.compute.internal Unable to mount volumes for pod "grafana-proxytest-v1-7cb5b6b6cf-gpk4l_grafana-proxytest(62c50717-755f-11ea-8623-068411799338)": timeout expired waiting for volumes to attach or mount for pod "grafana-proxytest"/"grafana-proxytest-v1-7cb5b6b6cf-gpk4l". list of unmounted volumes=[grafanapdtest]. list of unattached volumes=[grafanapdtest default-token-pzxdk]
One thing I notice in the logs above: the attach succeeded for the volume, but mounting grafanapdtest then fails, which I am not able to understand.
PVC details:
get pvc -n grafana-proxytest
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
grafana-claimtest Bound pvc-5bbc63a3-755f-11ea-920e-0a280935f44c 10Gi RWO grafana-storagetest 15m