I have a manifest file with a PersistentVolume, a Service, and a 2-replica StatefulSet that uses a dynamic PVC (volumeClaimTemplates).
When I deployed the file, a problem appeared in the PVC status.
# kubectl get pvc
NAME        STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS       AGE
www-web-0   Bound     pv-test   10Gi       RWO            my-storage-class   7m19s
www-web-1   Pending                                       my-storage-class   7m17s
One PVC's status is "Pending", and the reason given is that the storage class was not found.
The other PVC was created and bound normally.
Below is the content of the file.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
  labels:
    type: local
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: "my-storage-class"
  capacity:
    storage: 10Gi
  hostPath:
    path: /tmp/data
    type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 2 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "my-storage-class"
        resources:
          requests:
            storage: 1Gi
If someone knows the cause, let me know.
Thanks in advance.
Here is the describe output for the PV, the PVC (www-web-1), and the Pod (web-1):
kubectl describe pv pv-test
Name: pv-test
Labels: type=local
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"pv-test"},"spec":{"accessModes...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: my-storage-class
Status: Bound
Claim: default/www-web-0
Reclaim Policy: Delete
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /tmp/data
HostPathType: DirectoryOrCreate
Events: <none>
# kubectl describe pvc www-web-1
Name: www-web-1
Namespace: default
StorageClass: my-storage-class
Status: Pending
Volume:
Labels: app=nginx
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 20s (x7 over 2m) persistentvolume-controller storageclass.storage.k8s.io "my-storage-class" not found
Mounted By: web-1
# kubectl describe po web-1
Name: web-1
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: app=nginx
controller-revision-hash=web-6596ffb49b
statefulset.kubernetes.io/pod-name=web-1
Annotations: <none>
Status: Pending
IP:
Controlled By: StatefulSet/web
Containers:
nginx:
Image: k8s.gcr.io/nginx-slim:0.8
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/usr/share/nginx/html from www (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lnfvq (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
www:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: www-web-1
ReadOnly: false
default-token-lnfvq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lnfvq
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 3m43s (x183 over 8m46s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
Your volume pv-test has accessModes: ReadWriteOnce and is already bound to www-web-0, so I think you need to create one more PersistentVolume for the second pod. If www-web-1 is also trying to bind to pv-test, it won't be able to.
You are also using a hostPath volume (/tmp/data on the host) to store the data. Ensure that the /tmp/data directory exists on all the nodes in the cluster.
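For example, a second hostPath PersistentVolume for www-web-1 could look like the sketch below (just an illustration: the name pv-test-2 and the path /tmp/data2 are made up, and it assumes the same my-storage-class name that the volumeClaimTemplates request):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-2              # hypothetical name for the additional volume
  labels:
    type: local
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: "my-storage-class"
  capacity:
    storage: 10Gi
  hostPath:
    path: /tmp/data2           # assumed path; it must exist (or be creatable) on the node
    type: DirectoryOrCreate
With one matching PV available per claim, www-web-1 can bind the same way www-web-0 did.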
Related
I am migrating an application that consists of one or more Tomcat servers and one or more WildFly servers to Kubernetes. I have it up and working with a Deployment for Tomcat and one for WildFly, with a ClusterIP Service for each. I am now trying to combine log files from the multiple nodes into a single log directory on the host (for now, a single-host deployment). I have created a StorageClass, PV, and PVC with the following YAML files:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /C/k8s/local-pv/logs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: local-storage
  resources:
    requests:
      storage: 2Gi
I have the Tomcat deployment mapped to the volumes by:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      component: tomcat-deployment
  template:
    metadata:
      labels:
        component: tomcat
    spec:
      initContainers:
        - name: volume-mount-hack
          image: busybox
          command: ["sh", "-c", "chmod -R 777 /usr/local/tomcat/logs"]
          volumeMounts:
            - name: local-persistent-storage
              mountPath: /usr/local/tomcat/logs
      containers:
        - name: tomcat
          image: tomcat-full:latest
          volumeMounts:
            - name: local-persistent-storage
              mountPath: /usr/local/tomcat/logs
              subPath: tomcat
      volumes:
        - name: local-persistent-storage
          persistentVolumeClaim:
            claimName: local-claim
I start up all the components and the system functions as expected. However, when I look in the host directory C/k8s/local-pv/logs/tomcat, no files show up. I connect to the Tomcat pod with docker exec and I see the log files from both Tomcat servers. The files shown in /usr/local/tomcat/logs survive a deployment restart, so I know they are written somewhere. I searched my entire hard drive and the files are not anywhere.
I checked the PVC, PV, and StorageClass:
kubectl describe pvc local-claim
Name: local-claim
Namespace: default
StorageClass: local-storage
Status: Bound
Volume: local-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWX
VolumeMode: Filesystem
Mounted By: tomcat-deployment-644698fdc6-jmxz9
tomcat-deployment-644698fdc6-qvx9s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 12m (x3 over 14m) persistentvolume-controller waiting for first consumer to be created before binding
kubectl describe pv local-pv
Name: local-pv
Labels: <none>
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Bound
Claim: default/local-claim
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /C/k8s/pv/logs
HostPathType:
Events: <none>
kubectl get storageclass local-storage
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-storage kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 19m
What piece am I missing? It appears that the local StorageClass is not correctly connecting the volume claim to the Tomcat application.
I'm having some trouble configuring my Kubernetes deployment on minikube to use local-storage. I'm trying to set up a RethinkDB instance that will mount a directory from the minikube VM into the RethinkDB Pod. My setup is the following:
Storage
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rethinkdb-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rethinkdb-pv-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
So I define a StorageClass of the local-storage type as described in the tutorials online. I then make a PersistentVolume that offers 10Gi of storage from the /mnt/data path on the underlying host. I have created this directory on the minikube VM:
$ minikube ssh
$ ls /mnt
data sda1
This PersistentVolume has the local-storage storage class and, via the nodeAffinity section, is restricted to nodes whose hostname is 'minikube'.
I then make a PersistentVolumeClaim that asks for the local-storage type and requests 5Gi.
Everything is good here, right? Here is the output of kubectl:
$ kubectl get pv,pvc,storageClass
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/rethinkdb-pv 10Gi RWO Delete Available local-storage 9m33s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/rethinkdb-pv-claim Pending local-storage 7m51s
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/local-storage kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 9m33s
storageclass.storage.k8s.io/standard (default) k8s.io/minikube-hostpath Delete Immediate false 24h
RethinkDB Deployment
I now attempt to make a Deployment with a single replica of the standard RethinkDB container.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    name: database
  name: rethinkdb
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  selector:
    matchLabels:
      service: rethinkdb
  template:
    metadata:
      creationTimestamp: null
      labels:
        service: rethinkdb
    spec:
      containers:
        - name: rethinkdb
          image: rethinkdb:latest
          volumeMounts:
            - mountPath: /data
              name: rethinkdb-data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: rethinkdb-data
          persistentVolumeClaim:
            claimName: rethinkdb-pv-claim
This asks for a single replica of RethinkDB, references the rethinkdb-pv-claim PersistentVolumeClaim as a volume named rethinkdb-data, and mounts that volume at /data in the container.
This is what shows up, though:
Name: rethinkdb-6dbf4ccdb-64gk5
Namespace: development
Priority: 0
Node: <none>
Labels: pod-template-hash=6dbf4ccdb
service=rethinkdb
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/rethinkdb-6dbf4ccdb
Containers:
rethinkdb:
Image: rethinkdb:latest
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/data from rethinkdb-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-d5ncp (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
rethinkdb-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: rethinkdb-pv-claim
ReadOnly: false
default-token-d5ncp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-d5ncp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 73s (x7 over 8m38s) default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
"1 node(s) didn't find available persistent volumes to bind". I'm not sure how that is because the PVC is available.
$ kubectl describe pvc
Name: rethinkdb-pv-claim
Namespace: development
StorageClass: local-storage
Status: Pending
Volume:
Labels: <none>
Annotations:
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: rethinkdb-6dbf4ccdb-64gk5
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 11s (x42 over 10m) persistentvolume-controller waiting for first consumer to be created before binding
I think one hint is that the Node field for the Pod is <none>. Does that mean it isn't assigned to a node?
I think the issue was that one of mine was ReadWriteOnce and the other one was ReadWriteMany. I then had trouble getting the permissions right when running minikube mount /tmp/data:/mnt/data, so I just got rid of mounting it to the underlying filesystem, and now it works.
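In other words, the claim's accessModes have to be satisfiable by the PV it should bind to. A minimal sketch of the claim with the access mode aligned to the ReadWriteOnce rethinkdb-pv above (only the accessModes entry differs from the original manifest):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rethinkdb-pv-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce   # matches the access mode offered by rethinkdb-pv
  resources:
    requests:
      storage: 5Gi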
With the storage add-on for MicroK8s, PersistentVolumeClaims are by default given storage under /var/snap/microk8s/common/default-storage on the host system. How can that be changed?
Viewing the declaration for the hostpath-provisioner pod shows that there is an environment variable called PV_DIR pointing to /var/snap/microk8s/common/default-storage, which seems like what I'd like to change, but how can that be done?
I'm not sure whether I'm asking a MicroK8s-specific question or whether this is something that applies to Kubernetes in general.
$ microk8s.kubectl describe -n kube-system pod/hostpath-provisioner-7b9cb5cdb4-q5jh9
Name: hostpath-provisioner-7b9cb5cdb4-q5jh9
Namespace: kube-system
Priority: 0
Node: ...
Start Time: ...
Labels: k8s-app=hostpath-provisioner
pod-template-hash=7b9cb5cdb4
Annotations: <none>
Status: Running
IP: ...
IPs:
IP: ...
Controlled By: ReplicaSet/hostpath-provisioner-7b9cb5cdb4
Containers:
hostpath-provisioner:
Container ID: containerd://0b74a5aa06bfed0a66dbbead6306a0bc0fd7e46ec312befb3d97da32ff50968a
Image: cdkbot/hostpath-provisioner-amd64:1.0.0
Image ID: docker.io/cdkbot/hostpath-provisioner-amd64@sha256:339f78eabc68ffb1656d584e41f121cb4d2b667565428c8dde836caf5b8a0228
Port: <none>
Host Port: <none>
State: Running
Started: ...
Last State: Terminated
Reason: Unknown
Exit Code: 255
Started: ...
Finished: ...
Ready: True
Restart Count: 3
Environment:
NODE_NAME: (v1:spec.nodeName)
PV_DIR: /var/snap/microk8s/common/default-storage
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from microk8s-hostpath-token-nsxbp (ro)
/var/snap/microk8s/common/default-storage from pv-volume (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
pv-volume:
Type: HostPath (bare host directory volume)
Path: /var/snap/microk8s/common/default-storage
HostPathType:
microk8s-hostpath-token-nsxbp:
Type: Secret (a volume populated by a Secret)
SecretName: microk8s-hostpath-token-nsxbp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
HostPath
If you want to add your own path to your PersistentVolume, you can use the spec.hostPath.path value.
Example YAMLs:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: base
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: base
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: base
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
A kind reminder:
Depending on the installation method, your Kubernetes cluster may be deployed with an existing StorageClass that is marked as default. This default StorageClass is then used to dynamically provision storage for PersistentVolumeClaims that do not require any specific storage class. See PersistentVolumeClaim documentation for details.
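For reference, a PVC that relies on the default StorageClass simply omits the storageClassName field; a minimal sketch (the name default-claim is made up):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-claim   # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi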
You can check your StorageClasses with:
kubectl get storageclass
If none of them is shown as <your-class-name> (default), you need to set your own default storage class.
Mark a StorageClass as default:
kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
After you make a default StorageClass, you can use these YAMLs to create the PV and PVC:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume3
  labels:
    type: local
spec:
  storageClassName: ""
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim3
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
One PV for each PVC
Based on the Kubernetes documentation:
Once bound, PersistentVolumeClaim binds are exclusive, regardless of how they were bound. A PVC to PV binding is a one-to-one mapping.
I have my own Kubernetes cluster and I'm trying to link it to OpenStack / Cinder.
When I create a PVC, I can see a PV in Kubernetes and the volume in OpenStack.
But when I link a pod to the PVC, I get the message "0/x nodes are available: x node(s) had volume node affinity conflict".
My test YAML:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: classic
provisioner: kubernetes.io/cinder
parameters:
  type: classic
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-infra-consuldata4
  namespace: infra
spec:
  storageClassName: classic
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul
  namespace: infra
  labels:
    app: consul
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      containers:
        - name: consul
          image: consul:1.4.3
          volumeMounts:
            - name: data
              mountPath: /consul
          resources:
            requests:
              cpu: 100m
            limits:
              cpu: 500m
          command: ["consul", "agent", "-server", "-bootstrap", "-ui", "-bind", "0.0.0.0", "-client", "0.0.0.0", "-data-dir", "/consul"]
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: pvc-infra-consuldata4
The result:
kpro describe pvc -n infra
Name: pvc-infra-consuldata4
Namespace: infra
StorageClass: classic
Status: Bound
Volume: pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"pvc-infra-consuldata4","namespace":"infra"},"spec":...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ProvisioningSucceeded 61s persistentvolume-controller Successfully provisioned volume pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c using kubernetes.io/cinder
Mounted By: consul-85684dd7fc-j84v7
kpro describe po -n infra consul-85684dd7fc-j84v7
Name: consul-85684dd7fc-j84v7
Namespace: infra
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: app=consul
pod-template-hash=85684dd7fc
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/consul-85684dd7fc
Containers:
consul:
Image: consul:1.4.3
Port: <none>
Host Port: <none>
Command:
consul
agent
-server
-bootstrap
-ui
-bind
0.0.0.0
-client
0.0.0.0
-data-dir
/consul
Limits:
cpu: 2
Requests:
cpu: 500m
Environment: <none>
Mounts:
/consul from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nxchv (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvc-infra-consuldata4
ReadOnly: false
default-token-nxchv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nxchv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 36s (x6 over 2m40s) default-scheduler 0/6 nodes are available: 6 node(s) had volume node affinity conflict.
Why does Kubernetes successfully create the Cinder volume but fail to schedule the pod?
Try finding out the nodeAffinity of your persistent volume:
$ kubectl describe pv pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [xxx]
Then try to figure out if xxx matches the node label yyy that your pod is supposed to run on:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
yyy Ready worker 8d v1.15.3
If they don't match, you will have the "x node(s) had volume node affinity conflict" error, and you need to re-create the persistent volume with the correct nodeAffinity configuration.
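As a rough sketch, the relevant nodeAffinity part of a PV spec looks like the snippet below, where yyy stands for the kubernetes.io/hostname label of the node the pod should run on (a placeholder, not a value from this cluster):
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - yyy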
I also ran into this issue when I forgot to deploy the EBS CSI driver before I tried to get my pod to connect to it.
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
I am having issues deploying JupyterHub on a Kubernetes cluster. The issue I am getting is that the hub pod is stuck in Pending.
Stack:
kubeadm
flannel
weave
helm
jupyterhub
Runbook:
$kubeadm init --pod-network-cidr="10.244.0.0/16"
$sudo cp /etc/kubernetes/admin.conf $HOME/ && sudo chown $(id -u):$(id -g) $HOME/admin.conf && export KUBECONFIG=$HOME/admin.conf
$kubectl create -f pvc.yml
$kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-aliyun.yml
$kubectl apply --filename https://git.io/weave-kube-1.6
$kubectl taint nodes --all node-role.kubernetes.io/master-
Helm installations as per https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-helm.html
Jupyter installations as per https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-jupyterhub.html
config.yml
proxy:
  secretToken: "asdf"
singleuser:
  storage:
    dynamic:
      storageClass: local-storage
pvc.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: standard
spec:
  capacity:
    storage: 100Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /dev/vdb
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - example-node
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: standard
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
The warning is:
$kubectl --namespace=jhub get pod
NAME READY STATUS RESTARTS AGE
hub-fb48dfc4f-mqf4c 0/1 Pending 0 3m33s
proxy-86977cf9f7-fqf8d 1/1 Running 0 3m33s
$kubectl --namespace=jhub describe pod hub
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 35s (x3 over 35s) default-scheduler pod has unbound immediate PersistentVolumeClaims
$kubectl --namespace=jhub describe pv
Name: standard
Labels: type=local
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Bound
Claim: default/standard
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /dev/vdb
HostPathType:
Events: <none>
$kubectl --namespace=kube-system describe pvc
Name: hub-db-dir
Namespace: jhub
StorageClass:
Status: Pending
Volume:
Labels: app=jupyterhub
chart=jupyterhub-0.8.0-beta.1
component=hub
heritage=Tiller
release=jhub
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 13s (x7 over 85s) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Mounted By: hub-fb48dfc4f-mqf4c
I tried my best to follow the local-storage volume configuration on the official Kubernetes website, but with no luck.
-G
Managed to fix it using the following configuration.
Key points:
- I forgot to add the node in nodeAffinity.
- It works without setting volumeBindingMode.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: standard
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /temp
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - INSERT_NODE_NAME_HERE
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: local-storage
provisioner: kubernetes.io/no-provisioner
config.yaml
proxy:
  secretToken: "token"
singleuser:
  storage:
    dynamic:
      storageClass: local-storage
Make sure your storage/PV looks like this:
root@asdf:~# kubectl --namespace=kube-system describe pv
Name: standard
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"standard"},"spec":{"accessModes":["ReadWriteOnce"],"capa...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Bound
Claim: jhub/hub-db-dir
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 2Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [asdf]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /temp
Events: <none>
root@asdf:~# kubectl --namespace=kube-system describe storageclass
Name: local-storage
IsDefaultClass: Yes
Annotations: storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/no-provisioner
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
Now the hub pod looks something like this:
root@asdf:~# kubectl --namespace=jhub describe pod hub
Name: hub-5d4fcd8fd9-p6crs
Namespace: jhub
Priority: 0
PriorityClassName: <none>
Node: asdf/192.168.0.87
Start Time: Sat, 23 Feb 2019 14:29:51 +0800
Labels: app=jupyterhub
component=hub
hub.jupyter.org/network-access-proxy-api=true
hub.jupyter.org/network-access-proxy-http=true
hub.jupyter.org/network-access-singleuser=true
pod-template-hash=5d4fcd8fd9
release=jhub
Annotations: checksum/config-map: --omitted
checksum/secret: --omitted--
Status: Running
IP: 10.244.0.55
Controlled By: ReplicaSet/hub-5d4fcd8fd9
Containers:
hub:
Container ID: docker://d2d4dec8cc16fe21589e67f1c0c6c6114b59b01c67a9f06391830a1ea711879d
Image: jupyterhub/k8s-hub:0.8.0
Image ID: docker-pullable://jupyterhub/k8s-hub@sha256:e40cfda4f305af1a2fdf759cd0dcda834944bef0095c8b5ecb7734d19f58b512
Port: 8081/TCP
Host Port: 0/TCP
Command:
jupyterhub
--config
/srv/jupyterhub_config.py
--upgrade-db
State: Running
Started: Sat, 23 Feb 2019 14:30:28 +0800
Ready: True
Restart Count: 0
Requests:
cpu: 200m
memory: 512Mi
Environment:
PYTHONUNBUFFERED: 1
HELM_RELEASE_NAME: jhub
POD_NAMESPACE: jhub (v1:metadata.namespace)
CONFIGPROXY_AUTH_TOKEN: <set to the key 'proxy.token' in secret 'hub-secret'> Optional: false
Mounts:
/etc/jupyterhub/config/ from config (rw)
/etc/jupyterhub/secret/ from secret (rw)
/srv/jupyterhub from hub-db-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hub-token-bxzl7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hub-config
Optional: false
secret:
Type: Secret (a volume populated by a Secret)
SecretName: hub-secret
Optional: false
hub-db-dir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hub-db-dir
ReadOnly: false
hub-token-bxzl7:
Type: Secret (a volume populated by a Secret)
SecretName: hub-token-bxzl7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>