I get the following error when creating a PVC and I have no idea what it means.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalExpanding 1m (x3 over 5m) volume_expand Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
The PV for it is there and seems to be fine.
Here are the specs for my PV and PVC.
apiVersion: v1
kind: PersistentVolume
metadata:
creationTimestamp: null
finalizers:
- kubernetes.io/pv-protection
labels:
app: projects-service
app-guid: design-center-projects-service
asset: service
chart: design-center-projects-service
class: projects-service-nfs
company: mulesoft
component: projects
component-asset: projects-service
heritage: Tiller
product: design-center
product-component: design-center-projects
release: design-center-projects-service
name: projects-service-nfs
selfLink: /api/v1/persistentvolumes/projects-service-nfs
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 30Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: projects-service-nfs-block
namespace: design-center
resourceVersion: "7932052"
uid: d114dd38-f411-11e8-b7b1-1230f683f84a
mountOptions:
- nfsvers=3
- hard
- sync
nfs:
path: /
server: 1.1.1.1
persistentVolumeReclaimPolicy: Retain
volumeMode: Block
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
finalizers:
- kubernetes.io/pvc-protection
labels:
app: projects-service
app-guid: design-center-projects-service
asset: service
chart: design-center-projects-service
company: mulesoft
component: projects
component-asset: projects-service
example: test
heritage: Tiller
product: design-center
product-component: design-center-projects
release: design-center-projects-service
name: projects-service-nfs-block
selfLink: /api/v1/namespaces/design-center/persistentvolumeclaims/projects-service-nfs-block
spec:
accessModes:
- ReadWriteOnce
dataSource: null
resources:
requests:
storage: 20Gi
selector:
matchLabels:
class: projects-service-nfs
storageClassName: ""
volumeMode: Block
volumeName: projects-service-nfs
Version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:31:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:36:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
It looks like at some point you updated/expanded the PVC, which triggers this call:
func (expc *expandController) pvcUpdate(oldObj, newObj interface{})
...
Then, inside that function, it tries to find a plugin capable of expansion and fails here:
volumePlugin, err := expc.volumePluginMgr.FindExpandablePluginBySpec(volumeSpec)
if err != nil || volumePlugin == nil {
err = fmt.Errorf("didn't find a plugin capable of expanding the volume; " +
"waiting for an external controller to process this PVC")
...
return
}
If you look at this document, it shows that the following volume types support PVC expansion with in-tree plugins: AWS EBS, GCE PD, Azure Disk, Azure File, GlusterFS, Cinder, Portworx, and Ceph RBD. NFS is not one of them, which is why you are seeing that event. It may be supported in the future, or it can already be handled by a custom external plugin.
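Note also that even for the supported volume types, expansion only works when the PVC's StorageClass has it enabled. A minimal sketch, assuming a GCE PD class (the class name here is illustrative):

```yaml
# Sketch: StorageClass for a volume type with in-tree expansion support (GCE PD).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc          # illustrative name
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true     # required before PVC resize requests are honored
```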
If you haven't updated the PVC, I would recommend using the same capacity for the PV and PVC, as described here.
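Concretely, matching the PVC request to the PV's 30Gi capacity would look like this (only the relevant fields from your manifest shown):

```yaml
# Sketch: keep spec.resources.requests.storage equal to the PV's capacity (30Gi above).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: projects-service-nfs-block
  namespace: design-center
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  volumeName: projects-service-nfs
  storageClassName: ""
  resources:
    requests:
      storage: 30Gi   # match the PV capacity instead of 20Gi
```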
Related
I am trying to use the VolumeSnapshot backup mechanism, which was promoted to beta in Kubernetes 1.17.
Here is my scenario:
Create the nginx deployment and the PVC used by it
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
volumeMounts:
- name: my-pvc
mountPath: /root/test
volumes:
- name: my-pvc
persistentVolumeClaim:
claimName: nginx-pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
finalizers: null
labels:
name: nginx-pvc
name: nginx-pvc
namespace: default
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
storageClassName: premium-rwo
Exec into the running nginx container, cd into the PVC mounted path and create some files
▶ k exec -it nginx-deployment-84765795c-7hz5n bash
root@nginx-deployment-84765795c-7hz5n:/# cd /root/test
root@nginx-deployment-84765795c-7hz5n:~/test# touch {1..10}.txt
root@nginx-deployment-84765795c-7hz5n:~/test# ls
1.txt 10.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt lost+found
root@nginx-deployment-84765795c-7hz5n:~/test#
Create the following VolumeSnapshot using as source the nginx-pvc
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
namespace: default
name: nginx-volume-snapshot
spec:
volumeSnapshotClassName: pd-retain-vsc
source:
persistentVolumeClaimName: nginx-pvc
The VolumeSnapshotClass used is the following
apiVersion: snapshot.storage.k8s.io/v1beta1
deletionPolicy: Retain
driver: pd.csi.storage.gke.io
kind: VolumeSnapshotClass
metadata:
creationTimestamp: "2020-09-25T09:10:16Z"
generation: 1
name: pd-retain-vsc
and wait until it becomes readyToUse: true
apiVersion: v1
items:
- apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
creationTimestamp: "2020-11-04T09:38:00Z"
finalizers:
- snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
generation: 1
name: nginx-volume-snapshot
namespace: default
resourceVersion: "34170857"
selfLink: /apis/snapshot.storage.k8s.io/v1beta1/namespaces/default/volumesnapshots/nginx-volume-snapshot
uid: ce1991f8-a44c-456f-8b2a-2e12f8df28fc
spec:
source:
persistentVolumeClaimName: nginx-pvc
volumeSnapshotClassName: pd-retain-vsc
status:
boundVolumeSnapshotContentName: snapcontent-ce1991f8-a44c-456f-8b2a-2e12f8df28fc
creationTime: "2020-11-04T09:38:02Z"
readyToUse: true
restoreSize: 8Gi
kind: List
metadata:
resourceVersion: ""
selfLink: ""
Delete the nginx deployment and the initial PVC
▶ k delete pvc,deploy --all
persistentvolumeclaim "nginx-pvc" deleted
deployment.apps "nginx-deployment" deleted
Create a new PVC, using the previously created VolumeSnapshot as its dataSource
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
finalizers: null
labels:
name: nginx-pvc-restored
name: nginx-pvc-restored
namespace: default
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
dataSource:
name: nginx-volume-snapshot
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
▶ k create -f nginx-pvc-restored.yaml
persistentvolumeclaim/nginx-pvc-restored created
▶ k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-pvc-restored Bound pvc-56d0a898-9f65-464f-8abf-90fa0a58a048 8Gi RWO standard 39s
Set the name of the new (restored) PVC in the nginx Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
volumeMounts:
- name: my-pvc
mountPath: /root/test
volumes:
- name: my-pvc
persistentVolumeClaim:
claimName: nginx-pvc-restored
and create the Deployment again
▶ k create -f nginx-deployment-restored.yaml
deployment.apps/nginx-deployment created
cd into the PVC's mounted directory. It should contain the previously created files, but it is empty:
▶ k exec -it nginx-deployment-67c7584d4b-l7qrq bash
root@nginx-deployment-67c7584d4b-l7qrq:/# cd /root/test
root@nginx-deployment-67c7584d4b-l7qrq:~/test# ls
lost+found
root@nginx-deployment-67c7584d4b-l7qrq:~/test#
▶ k version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.12", GitCommit:"5ec472285121eb6c451e515bc0a7201413872fa3", GitTreeState:"clean", BuildDate:"2020-09-16T13:39:51Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.12-gke.1504", GitCommit:"17061f5bd4ee34f72c9281d49f94b4f3ac31ac25", GitTreeState:"clean", BuildDate:"2020-10-19T17:00:22Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}
This is a community wiki answer posted for better clarity of the current problem. Feel free to expand on it.
As mentioned by @pkaramol, this is an ongoing issue registered under the following thread:
Creating an intree PVC with datasource should fail #96225
What happened: In clusters that have intree drivers as the default
storageclass, if you try to create a PVC with snapshot data source and
forget to put the csi storageclass in it, then an empty PVC will be
provisioned using the default storageclass.
What you expected to happen: PVC creation should not proceed and
instead have an event with an incompatible error, similar to how we
check proper csi driver in the csi provisioner.
This issue has not yet been resolved at the moment of writing this answer.
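Until it is resolved, a workaround (an inference from the issue description above, not an officially documented fix) is to pin the CSI storage class explicitly on the restored PVC, so the in-tree default class is never used. A sketch based on the manifests above, where premium-rwo is the CSI class the original PVC used:

```yaml
# Sketch: restored PVC with the CSI storageClassName set explicitly.
# Omitting storageClassName is what triggers the empty-PVC bug described above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc-restored
  namespace: default
spec:
  storageClassName: premium-rwo   # same CSI class as the original nginx-pvc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  dataSource:
    name: nginx-volume-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```

Note that in the transcript above the restored PVC bound with storage class standard rather than premium-rwo, which is consistent with the default in-tree class having been used.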
EDIT: SEE BELOW
I am new to this, trying to build a local cluster with 2 physical machines using kubeadm. I am following the steps in https://github.com/mongodb/mongodb-enterprise-kubernetes and everything is OK. First I installed the Kubernetes operator, but when I tried to install Ops Manager I got:
0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims ops manager.
The YAML I used to install Ops Manager is:
---
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
name: opsmanager1
spec:
replicas: 2
version: 4.2.0
adminCredentials: mongo-db-admin1 # Should match metadata.name
# in the Kubernetes secret
# for the admin user
externalConnectivity:
type: NodePort
applicationDatabase:
members: 3
version: 4.4.0
persistent: true
podSpec:
persistence:
single:
storage: 1Gi
I can't figure out what the problem is. I am at a testing phase, and my goal is to build a scalable Mongo database. Thanks in advance.
Edit: I made a few changes. I created a StorageClass like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: localstorage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
allowVolumeExpansion: True
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mongo-01
labels:
type: local
spec:
storageClassName: localstorage
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/home/master/mongo01"
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mongo-02
labels:
type: local
spec:
storageClassName: localstorage
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/home/master/mongo02"
And now my YAML for Ops Manager is:
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
name: ops-manager-localmode
spec:
replicas: 2
version: 4.2.12
adminCredentials: mongo-db-admin1
externalConnectivity:
type: NodePort
statefulSet:
spec:
# the Persistent Volume Claim will be created for each Ops Manager Pod
volumeClaimTemplates:
- metadata:
name: mongodb-versions
spec:
storageClassName: localstorage
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 2Gi
template:
spec:
containers:
- name: mongodb-ops-manager
volumeMounts:
- name: mongodb-versions
# this is the directory in each Pod where all MongoDB
# archives must be put
mountPath: /mongodb-ops-manager/mongodb-releases
backup:
enabled: false
applicationDatabase:
members: 3
version: 4.4.0
persistent: true
But I get a new error: Warning ProvisioningFailed 44s (x26 over 6m53s) persistentvolume-controller no volume plugin matched name: kubernetes.io/no-provisioner
At a quick glance, it looks like you don't have any volume plugin on your cluster that can satisfy a PVC. See https://v1-15.docs.kubernetes.io/docs/concepts/storage/volumes/
Your app needs a PersistentVolume to be provisioned, but your cluster doesn't know how to do that.
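As a side note on the ProvisioningFailed warning: kubernetes.io/no-provisioner deliberately matches no plugin, so nothing is ever provisioned dynamically and the PVC can only bind to a pre-created PV with a matching storageClassName. For such static local volumes the class is commonly declared with delayed binding; a sketch of that variant (a suggestion, not something from the linked guide):

```yaml
# Sketch: StorageClass for manually created local/hostPath PVs.
# No dynamic provisioning happens; PVs like mongo-01/mongo-02 above
# must exist already with storageClassName: localstorage.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localstorage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer   # delay binding until a pod is scheduled
```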
UPDATE: In the end this has nothing to do with Azure File Shares. It is actually the same with Azure Disk, NFS, or HostPath.
I have mounted an Azure File Shares volume into a MongoDB pod with the mountPath /data. Everything seems to work as expected. When I exec into the pod, I can see the Mongo data in /data/db. But on the Azure File Share I can only see the folders /db and /dbconfig, not the files. Any idea? I have granted permission 0777 to the volume.
These are my YAML files.
StorageClass
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=999
- gid=999
parameters:
storageAccount: ACCOUNT_NAME
skuName: Standard_LRS
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: azurefile
spec:
accessModes:
- ReadWriteMany
storageClassName: azurefile
resources:
requests:
storage: 20Gi
Mongo deployment file
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: mongo
labels:
app: mongo
namespace: development
spec:
replicas: 1
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongo
image: "mongo"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 27017
protocol: TCP
volumeMounts:
- mountPath: /data
name: mongovolume
subPath: mongo
imagePullSecrets:
- name: secret-acr
volumes:
- name: mongovolume
persistentVolumeClaim:
claimName: azurefile
Kubernetes version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.6", GitCommit:"a21fdbd78dde8f5447f5f6c331f7eb6f80bd684e", GitTreeState:"clean", BuildDate:"2018-07-26T10:04:08Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Solved the problem by switching the Mongo image to docker.io/bitnami/mongodb:4.0.2-debian-9. With this image, Mongo data is written to the file share and is now persistent.
This setup doesn't work with Azure Files nor with Azure Disks.
I am working on a project where I faced a similar issue and contacted Azure support, but they don't have a specific resolution for it.
Root cause provided by Azure Support: the data/files that do not remain persistent are the ones owned by the mongodb user.
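Given that root cause, one mitigation sometimes suggested (an assumption on my part, not something Azure confirmed) is to give the mongodb user group ownership of the mounted volume via a pod-level fsGroup matching the gid used in the StorageClass mount options. A sketch, trimmed to the relevant fields of the Deployment above:

```yaml
# Sketch: align volume group ownership with the mongodb user.
# gid 999 matches the gid=999 mount option in the azurefile StorageClass;
# whether the image's mongodb user actually runs as gid 999 is an assumption.
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mongo
  namespace: development
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      securityContext:
        fsGroup: 999      # mounted volumes become group-writable by gid 999
      containers:
        - name: mongo
          image: "mongo"
```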
In this repository https://github.com/mappedinn/kubernetes-nfs-volume-on-gke I am trying to share a volume through an NFS service on GKE. The NFS file sharing works if a hard-coded IP address is used.
But, in my view, it would be better to use a DNS name instead of a hard-coded IP address.
Below is the declaration of the NFS service being used for sharing a volume in Google Cloud Platform:
apiVersion: v1
kind: Service
metadata:
name: nfs-server
spec:
ports:
- name: nfs
port: 2049
- name: mountd
port: 20048
- name: rpcbind
port: 111
selector:
role: nfs-server
Below is the definition of the PersistentVolume with hard coded IP address:
apiVersion: v1
kind: PersistentVolume
metadata:
name: wp01-pv-data
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
nfs:
server: 10.247.248.43 # with hard coded IP, it works
path: "/"
Below is the definition of the PersistentVolume with DNS name:
apiVersion: v1
kind: PersistentVolume
metadata:
name: wp01-pv-data
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
nfs:
server: nfs-service.default.svc.cluster.local # with DNS, it does not work
path: "/"
I am using https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ to get the DNS name of the service. Is there anything I missed?
Thanks
The problem is DNS resolution on the node itself. Mounting the NFS share into the pod is a job of the kubelet, which runs on the node, so DNS resolution happens according to /etc/resolv.conf on the node as well. Adding a nameserver <your_kubedns_service_ip> entry to the node's /etc/resolv.conf could suffice, but it can become a chicken-and-egg problem in some corner cases.
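As a sketch, that workaround would mean something like the following on each node. The IP addresses here are placeholders (assumptions): use your cluster's actual kube-dns Service ClusterIP and the node's original upstream resolver.

```
# /etc/resolv.conf on the node (sketch; IPs are placeholders)
nameserver 10.3.240.10       # kube-dns service IP, so *.svc.cluster.local resolves
nameserver 169.254.169.254   # the node's original upstream resolver
search default.svc.cluster.local svc.cluster.local cluster.local
```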
I solved the problem by upgrading my GKE cluster from version 1.7.11-gke.1 to 1.8.6-gke.0.
kubectl version
# Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
# Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.6-gke.0", GitCommit:"ee9a97661f14ee0b1ca31d6edd30480c89347c79", GitTreeState:"clean", BuildDate:"2018-01-05T03:36:42Z", GoVersion:"go1.8.3b4", Compiler:"gc", Platform:"linux/amd64"}
Actually, this is the final version of the YAML files:
apiVersion: v1
kind: Service
metadata:
name: nfs-server
spec:
# clusterIP: 10.3.240.20
ports:
- name: nfs
port: 2049
- name: mountd
port: 20048
- name: rpcbind
port: 111
selector:
role: nfs-server
# type: "LoadBalancer"
and
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs
spec:
capacity:
storage: 1Mi
accessModes:
- ReadWriteMany
nfs:
# FIXED: Use internal DNS name
server: nfs-server.default.svc.cluster.local
path: "/"
I deployed Heketi/GlusterFS on a Kubernetes 1.6 cluster. Then I followed the guide to create a StorageClass for dynamic persistent volumes, but no PV is created when I create a PVC.
The heketi and glusterfs pods are running and work as expected when I use heketi-cli and create a PV manually; such PVs are also claimed by PVCs.
It feels like I'm missing a step, but I don't know which one. I followed the guides and assumed that dynamic persistent volumes should "just work":
install heketi-cli and glusterfs-client
use ./gk-deploy -g
create StorageClass
create PVC
Did I miss a step?
StorageClass
$ kubectl get storageclasses
NAME TYPE
slow kubernetes.io/glusterfs
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
creationTimestamp: 2017-06-07T06:54:35Z
name: slow
resourceVersion: "82741"
selfLink: /apis/storage.k8s.io/v1/storageclasses/slow
uid: 2aab0a5c-4b4e-11e7-9ee4-001a4a3f1eb3
parameters:
restauthenabled: "false"
resturl: http://10.2.35.3:8080/
restuser: ""
restuserkey: ""
provisioner: kubernetes.io/glusterfs
PVC
$ kubectl -nkube-system get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
gluster1 Bound glusterfs-b427d1f1 1Gi RWO 15m
influxdb-persistent-storage Pending slow 14h
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"slow"},"labels":{"k8s-app":"influxGrafana"},"name":"influxdb-persistent-storage","namespace":"kube-system"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}}}}
volume.beta.kubernetes.io/storage-class: slow
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
creationTimestamp: 2017-06-06T16:48:46Z
labels:
k8s-app: influxGrafana
name: influxdb-persistent-storage
namespace: kube-system
resourceVersion: "87638"
selfLink: /api/v1/namespaces/kube-system/persistentvolumeclaims/influxdb-persistent-storage
uid: 021b69c4-4ad8-11e7-9ee4-001a4a3f1eb3
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
status:
phase: Pending
Sources:
https://github.com/gluster/gluster-kubernetes
http://blog.lwolf.org/post/how-i-deployed-glusterfs-cluster-to-kubernetes/
Environment:
$ kubectl cluster-info
Kubernetes master is running at https://andrea-master-0.muellerpublic.de:443
KubeDNS is running at https://andrea-master-0.muellerpublic.de:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://andrea-master-0.muellerpublic.de:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
$ heketi-cli cluster list
Clusters:
24dca142f655fb698e523970b33238a9
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4+coreos.0", GitCommit:"8996efde382d88f0baef1f015ae801488fcad8c4", GitTreeState:"clean", BuildDate:"2017-05-19T21:11:20Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
The problem was the trailing slash in the StorageClass resturl:
resturl: http://10.2.35.3:8080/ must be resturl: http://10.2.35.3:8080
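So the working StorageClass would look like this (the other fields copied unchanged from the question):

```yaml
# Corrected StorageClass: resturl without the trailing slash.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  restauthenabled: "false"
  resturl: http://10.2.35.3:8080   # no trailing slash
  restuser: ""
  restuserkey: ""
```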
PS: o.O ....