K8s Volume doesn't detach from host

We're using Kubernetes on-premise and it's currently running on VMware. So far, we have been successful in provisioning volumes for the apps that we deploy. The problem comes if the pods - for whatever reason - switch to a different worker node. When that happens, the disk fails to mount on the second worker because it's still present on the first worker where the pod was originally running. See below:
As it stands, we have no app on either worker1 or worker2:
[root@worker01 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 199.5G 0 part
├─vg_root-lv_root 253:0 0 20G 0 lvm /
├─vg_root-lv_swap 253:1 0 2G 0 lvm
├─vg_root-lv_var 253:2 0 50G 0 lvm /var
└─vg_root-lv_k8s 253:3 0 20G 0 lvm /mnt/disks
sr0 11:0 1 1024M 0 rom
[root@worker02 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 199.5G 0 part
├─vg_root-lv_root 253:0 0 20G 0 lvm /
├─vg_root-lv_swap 253:1 0 2G 0 lvm
├─vg_root-lv_var 253:2 0 50G 0 lvm /var
└─vg_root-lv_k8s 253:3 0 20G 0 lvm /mnt/disks
sr0 11:0 1 4.5G 0 rom
Next we create our PVC with the following:
[root@master01 ~]$ cat app-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: thin-disk
  namespace: tools
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
[root@master01 ~]$ kubectl create -f app-pvc.yaml
persistentvolumeclaim "app-pvc" created
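(Aside: on current clusters the beta annotation has been superseded by the spec.storageClassName field; an equivalent claim, assuming the same thin-disk class, would look like:)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-pvc
  namespace: tools
spec:
  storageClassName: thin-disk
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi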
This works fine as the disk is created and bound:
[root@master01 ~]$ kubectl get pvc -n tools
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
app-pvc Bound pvc-d4bf77cc-294e-11e9-9106-005056a4b1c7 10Gi RWO thin-disk 12s
[root@master01 ~]$ kubectl get pv -n tools
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-d4bf77cc-294e-11e9-9106-005056a4b1c7 10Gi RWO Delete Bound tools/app-pvc thin-disk 12s
Now we can deploy our application, which creates the pod and sorts out storage etc.:
[centos@master01 ~]$ cat app.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  namespace: tools
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - image: sonatype/app3:latest
          imagePullPolicy: IfNotPresent
          name: app
          ports:
            - containerPort: 8081
            - containerPort: 5000
          volumeMounts:
            - mountPath: /app-data
              name: app-data-volume
      securityContext:
        fsGroup: 2000
      volumes:
        - name: app-data-volume
          persistentVolumeClaim:
            claimName: app-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
  namespace: tools
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8081
      protocol: TCP
      name: http
    - port: 5000
      targetPort: 5000
      protocol: TCP
      name: docker
  selector:
    app: app
[centos@master01 ~]$ kubectl create -f app.yaml
deployment.extensions "app" created
service "app-service" created
This deploys fine:
[centos@master01 ~]$ kubectl get pods -n tools
NAME READY STATUS RESTARTS AGE
app-6588cf4b87-wvwg2 0/1 ContainerCreating 0 6s
[centos@neb-k8s02-master01 ~]$ kubectl describe pod app-6588cf4b87-wvwg2 -n tools
Events:
  Type    Reason                  Age  From                     Message
  ----    ------                  ---  ----                     -------
  Normal  Scheduled               18s  default-scheduler        Successfully assigned nexus-6588cf4b87-wvwg2 to neb-k8s02-worker01
  Normal  SuccessfulMountVolume   18s  kubelet, worker01        MountVolume.SetUp succeeded for volume "default-token-7cv62"
  Normal  SuccessfulAttachVolume  15s  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-d4bf77cc-294e-11e9-9106-005056a4b1c7"
  Normal  SuccessfulMountVolume   7s   kubelet, worker01        MountVolume.SetUp succeeded for volume "pvc-d4bf77cc-294e-11e9-9106-005056a4b1c7"
  Normal  Pulled                  7s   kubelet, worker01        Container image "acme/app:latest" already present on machine
  Normal  Created                 7s   kubelet, worker01        Created container
  Normal  Started                 6s   kubelet, worker01        Started container
We can also see that the disk has been created and mounted in VMware for Worker01 and not for Worker02:
[root@worker01 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 199.5G 0 part
├─vg_root-lv_root 253:0 0 20G 0 lvm /
├─vg_root-lv_swap 253:1 0 2G 0 lvm
├─vg_root-lv_var 253:2 0 50G 0 lvm /var
└─vg_root-lv_k8s 253:3 0 20G 0 lvm /mnt/disks
sdb 8:16 0 10G 0 disk /var/lib/kubelet/pods/1e55ad6a-294f-11e9-9175-005056a47f18/volumes/kubernetes.io~vsphere-volume/pvc-d4bf77cc-294e-11e9-9106-005056a4b1c7
sr0 11:0 1 1024M 0 rom
[root@worker02 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 199.5G 0 part
├─vg_root-lv_root 253:0 0 20G 0 lvm /
├─vg_root-lv_swap 253:1 0 2G 0 lvm
├─vg_root-lv_var 253:2 0 50G 0 lvm /var
└─vg_root-lv_k8s 253:3 0 20G 0 lvm /mnt/disks
sr0 11:0 1 4.5G 0 rom
If Worker01 falls over then Worker02 kicks in and we can see the disk being attached to the other node:
[root@worker02 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 199.5G 0 part
├─vg_root-lv_root 253:0 0 20G 0 lvm /
├─vg_root-lv_swap 253:1 0 2G 0 lvm
├─vg_root-lv_var 253:2 0 50G 0 lvm /var
└─vg_root-lv_k8s 253:3 0 20G 0 lvm /mnt/disks
sdb 8:16 0 10G 0 disk /var/lib/kubelet/pods/a0695030-2950-11e9-9175-005056a47f18/volumes/kubernetes.io~vsphere-volume/pvc-d4bf77cc-294e-11e9-9106-005056a4b1c7
sr0 11:0 1 4.5G 0 rom
However, since the disk is now attached to both Worker01 and Worker02, Worker01 will no longer start, citing the following error in vCenter:
Cannot open the disk '/vmfs/volumes/5ba35d3b-21568577-efd4-469e3c301eaa/kubevols/kubernetes-dynamic-pvc-e55ad6a-294f-11e9-9175-005056a47f18.vmdk' or one of the snapshot disks it depends on.
This error occurs because (I assume) Worker02 has access to the disk and is reading from/writing to it. Shouldn't Kubernetes detach the disk from nodes that no longer need it once it's been attached to another node? How can we go about fixing this issue? If a pod moves to another host due to node failure, we have to detach the disk manually and then start the other worker manually.
Any and all help appreciated.

First, I'll assume you're running in-tree vSphere disks.
Second, in this case (and more so with CSI), Kubernetes doesn't have control over all volume operations. The VMware functionality for managing attachment and detachment of a disk is implemented in the volume plugin you are using. Kubernetes doesn't strictly control all volume attachment/detachment semantics as a generic function.
To see the in-tree implementation details, check out:
https://kubernetes.io/docs/concepts/storage/volumes/#vspherevolume
Overall, I think the way you are doing failover means that when your worker1 pod dies, worker2 can schedule. At that point, worker1 should not be able to grab the same PVC, and it should not schedule until the worker2 pod dies.
However, if worker1 is scheduling, it means that vSphere is trying to (erroneously) let worker1 start, and the kubelet is failing.
There is a chance that this is a bug in the VMware driver, in that it will bind a persistent volume even though it is not ready to.
To elaborate further, details about how worker2 is being launched would help. Is it a separate replication controller, or is it running outside of Kubernetes? If the latter, then the volumes won't be managed the same way, and you can't use the same PVC as the locking mechanism.
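(A hedged aside on the manual workaround: with the in-tree attach/detach controller, a volume attached to a NotReady node is normally force-detached only after a timeout of roughly 6 minutes, and only once the stuck pod has been removed from the API server. A sketch, reusing the pod name from the example above:)
# Confirm the pod is stuck on the failed node (e.g. Terminating/Unknown)
kubectl get pods -n tools -o wide
# Force-delete it so the attach/detach controller can detach the vSphere
# disk from the dead worker and re-attach it where the new pod runs
kubectl delete pod app-6588cf4b87-wvwg2 -n tools --grace-period=0 --force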


Getting "The node was low on resource: ephemeral-storage"?

I am trying to understand the master/node deployment concept in labs.play-with-k8s.com (https://labs.play-with-k8s.com/).
I have two nodes and one master.
It has the following configuration:
[node1 ~]$ kubectl describe pod myapp-7f4dffc449-qh7pk
Name:         myapp-7f4dffc449-qh7pk
Namespace:    default
Priority:     0
Node:         node3/192.168.0.16
Start Time:   Tue, 07 Feb 2023 12:31:23 +0000
Labels:       app=myapp
              pod-template-hash=7f4dffc449
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
Controlled By:  ReplicaSet/myapp-7f4dffc449
Containers:
  myapp:
    Container ID:
    Image:          changan1111/newdocker:latest
    Image ID:
    Port:           3000/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:                500m
      ephemeral-storage:  1Gi
      memory:             1Gi
    Requests:
      cpu:                500m
      ephemeral-storage:  1Gi
      memory:             1Gi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-t4nf7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-t4nf7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-t4nf7
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason               Age  From               Message
  ----     ------               ---  ----               -------
  Normal   Scheduled            34s  default-scheduler  Successfully assigned default/myapp-7f4dffc449-qh7pk to node3
  Normal   Pulling              31s  kubelet            Pulling image "changan1111/newdocker:latest"
  Warning  Evicted              25s  kubelet            The node was low on resource: ephemeral-storage.
  Warning  ExceededGracePeriod  15s  kubelet            Container runtime did not kill the pod within specified grace period.
My yaml file is here: https://raw.githubusercontent.com/changan1111/UserManagement/main/kube/kube.yaml
It looks like I am not seeing anything wrong, but I am still getting The node was low on resource: ephemeral-storage.
How do I resolve this?
Disk Usage:
overlay 10G 130M 9.9G 2% /
tmpfs 64M 0 64M 0% /dev
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sdb 64G 29G 36G 44% /etc/hosts
shm 64M 0 64M 0% /dev/shm
shm 64M 0 64M 0% /var/lib/docker/containers/403c120b0dd0909bd34e66d86c58fba18cd71468269e1aaa66e3244d331c3a1e/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/56dd63dad42dd26baba8610f70f1a0bd22fdaea36742c32deca3c196ce181851/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/50c4585ae8cc63de9077c1a58da67cc348c86a6643ca21a06b8998f94a2a2daf/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/6e9529ad6e6a836e77b17c713679abddf861fdc0e86946484dc2ec68a00ca2ff/mounts/shm
tmpfs 16G 12K 16G 1% /var/lib/kubelet/pods/8e56095e-b0ec-4f13-a022-d29d04897410/volumes/kubernetes.io~secret/kube-proxy-token-j7sl8
shm 64M 0 64M 0% /var/lib/docker/containers/2b84d6dfebd4ea0c379588985cd43b623004632e71d63d07a39d521ddf694e8e/mounts/shm
tmpfs 16G 12K 16G 1% /var/lib/kubelet/pods/1271ca18-97d0-48d2-9280-68eb8c57795f/volumes/kubernetes.io~secret/kube-router-token-rmpqv
shm 64M 0 64M 0% /var/lib/docker/containers/c4506095bf36356790795353862fc13b759d72af8edc0e4233341f2d3234fa02/mounts/shm
tmpfs 16G 12K 16G 1% /var/lib/kubelet/pods/39885a73-d724-4be8-a9cf-3de8756c5b0c/volumes/kubernetes.io~secret/coredns-token-ckxbw
tmpfs 16G 12K 16G 1% /var/lib/kubelet/pods/8f137411-3af6-4e44-8be4-3e4f79570531/volumes/kubernetes.io~secret/coredns-token-ckxbw
shm 64M 0 64M 0% /var/lib/docker/containers/c32431f8e77652686f58e91aff01d211a5e0fb798f664ba675715005ee2cd5b0/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/3e284dd5f9b321301647eeb42f9dd82e81eb78aadcf9db7b5a6a3419504aa0e9/mount
Events:
  Type     Reason     Age    From               Message
  ----     ------     ----   ----               -------
  Normal   Scheduled  3m16s  default-scheduler  Successfully assigned default/myapp-b5856bb-4znkj to node4
  Normal   Pulling    3m15s  kubelet            Pulling image "changan1111/newdocker:latest"
  Normal   Pulled     83s    kubelet            Successfully pulled image "changan1111/newdocker:latest" in 1m51.97169753s
  Normal   Created    28s    kubelet            Created container myapp
  Normal   Started    27s    kubelet            Started container myapp
  Warning  Evicted    1s     kubelet            Pod ephemeral local storage usage exceeds the total limit of containers 500Mi.
  Normal   Killing    1s     kubelet            Stopping container myapp
The yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets:
        - name: dockercreds
      containers:
        - name: myapp
          image: changan1111/newdocker:latest
          resources:
            limits:
              memory: "2Gi"
              cpu: "500m"
              ephemeral-storage: "2Gi"
            requests:
              ephemeral-storage: "1Gi"
              cpu: "500m"
              memory: "1Gi"
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
      nodePort: 31110
  type: LoadBalancer
Worker nodes may be running out of disk space, in which case you should see something like no space left on device or The node was low on resource: ephemeral-storage.
The mitigation is to specify a larger disk size for the node VMs during environment creation.
Pod eviction and scheduling problems are side effects of Kubernetes limits and requests, usually caused by a lack of planning. See Understanding Kubernetes pod evicted and scheduling problems for more information.
Refer to the similar SO question on how to set requests.ephemeral-storage and limits.ephemeral-storage quotas to limit this; otherwise any container can write any amount of storage to its node filesystem.
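For example, a minimal ResourceQuota sketch (the name and sizes below are illustrative, not taken from the question):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ephemeral-storage-quota
  namespace: default
spec:
  hard:
    requests.ephemeral-storage: 2Gi
    limits.ephemeral-storage: 4Gi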
Warning: Pod ephemeral local storage usage exceeds the total limit of containers 500Mi.
This may be because you're putting an upper limit on ephemeral-storage usage by setting resources.limits.ephemeral-storage to 500Mi. Try removing limits.ephemeral-storage if that is safe, or change the value to fit your requirements.
Also see How to determine kubernetes pod ephemeral storage request and limit and how to Avoid running out of ephemeral storage space on your Kubernetes worker nodes for more information.

What does Kubelet use to determine the ephemeral-storage capacity of the node?

I have a Kubernetes cluster running on a VM. A truncated overview of the mounts is:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 20G 4.5G 15G 24% /
/dev/mapper/vg001-lv--docker 140G 33G 108G 23% /var/lib/docker
As you can see, I added an extra disk to store the docker images and its volumes. However, when querying the node's capacity, the following is returned
Capacity:
  cpu:                12
  ephemeral-storage:  20145724Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             65831264Ki
  nvidia.com/gpu:     1
  pods:               110
ephemeral-storage is 20145724Ki, which is roughly 20G, referring to the disk mounted at /.
How does Kubelet calculate its ephemeral-storage? Is it simply looking at the disk space available at /? Or is it looking at another folder like /var/log/containers?
This is a similar post where the user eventually succumbed to increasing the disk mounted at /.
Some theory
By default, Capacity and Allocatable for ephemeral-storage in a standard Kubernetes environment are sourced from the filesystem mounted at /var/lib/kubelet.
This is the default location for the kubelet directory.
The kubelet supports the following filesystem partitions:
nodefs: The node's main filesystem, used for local disk volumes, emptyDir, log storage, and more. For example, nodefs contains /var/lib/kubelet/.
imagefs: An optional filesystem that container runtimes use to store container images and container writable layers.
Kubelet auto-discovers these filesystems and ignores other filesystems. Kubelet does not support other configurations.
From the Kubernetes website about volumes:
The storage media (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet).
The location of the kubelet directory can be configured by providing:
a command line parameter during kubelet initialization:
--root-dir string
Default: /var/lib/kubelet
or via kubeadm with a config file, e.g.:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    root-dir: "/data/var/lib/kubelet"
Customizing the kubelet:
To customize the kubelet you can add a KubeletConfiguration next to the ClusterConfiguration or InitConfiguration separated by --- within the same configuration file. This file can then be passed to kubeadm init.
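A minimal sketch of such a combined file (the eviction thresholds shown are just the documented defaults, repeated for illustration):
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  nodefs.available: "10%"
  imagefs.available: "15%"
  memory.available: "100Mi"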
When bootstrapping a Kubernetes cluster using kubeadm, the Capacity reported by kubectl get node is equal to the capacity of the disk mounted at /var/lib/kubelet.
However, Allocatable will be reported as:
Allocatable = Capacity - 10% nodefs
with the standard kubeadm configuration, since the kubelet has the following default hard eviction threshold:
nodefs.available<10%
It can be configured during kubelet initialization with:
--eviction-hard mapStringString
Default: imagefs.available<15%,memory.available<100Mi,nodefs.available<10%
Example
I set up a test environment for Kubernetes with a master node and two worker nodes (worker-1 and worker-2).
Both worker nodes have volumes of the same capacity: 50Gb.
Additionally, I mounted a second volume with a capacity of 20Gb for the Worker-1 node at the path /var/lib/kubelet.
Then I created a cluster with kubeadm.
Result
From worker-1 node:
skorkin@worker-1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 49G 2.8G 46G 6% /
...
/dev/sdb 20G 45M 20G 1% /var/lib/kubelet
and
Capacity:
  cpu:                2
  ephemeral-storage:  20511312Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4027428Ki
  pods:               110
Size of ephemeral-storage is the same as volume mounted at /var/lib/kubelet.
From worker-2 node:
skorkin@worker-2:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 49G 2.7G 46G 6% /
and
Capacity:
  cpu:                2
  ephemeral-storage:  50633164Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4027420Ki
  pods:               110
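(A quick way to see the 10% reserve in action, assuming the node name above; Allocatable should come out at roughly 90% of Capacity:)
kubectl get node worker-2 -o jsonpath='{.status.allocatable.ephemeral-storage}'
# roughly 90% of 50633164Ki, given the default nodefs.available<10% threshold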

Minio Kubernetes installation no memory error

I am trying to install the Minio storage operator on my local machine using Kubernetes.
I am following the link below; however, I am facing a no-memory error with every type of install.
I am not sure how to set up the persistent volume in my case.
https://github.com/minio/operator/blob/master/README.md
I am trying to create a persistent volume so that enough memory will be available on the path I am selecting.
cat pv.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
kubectl create -f pv.yaml
kubectl get sc
NAME                 PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
hostpath (default)   docker.io/hostpath             Delete          Immediate              false                  131m
local-storage        kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  56m
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-node
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storage-class: local-storage
  local:
    path: /mnt/d/minio
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
kubectl create -f pvc.yaml
error: error parsing pvc.yaml: error converting YAML to JSON: yaml: line 8: mapping values are not allowed in this context
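(For what it's worth, a hedged guess at the parse failure: the indentation of the nested fields is broken, and storage-class is not a valid PersistentVolume field; the standard spelling is storageClassName. A corrected fragment of the spec:)
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # was: storage-class
  local:
    path: /mnt/d/minio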
:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
docker-desktop Ready control-plane,master 126m v1.21.2
:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-558bd4d5db-j72z4 1/1 Running 1 128m
kube-system coredns-558bd4d5db-vw98z 1/1 Running 1 128m
kube-system etcd-docker-desktop 1/1 Running 1 128m
kube-system kube-apiserver-docker-desktop 1/1 Running 1 128m
kube-system kube-controller-manager-docker-desktop 1/1 Running 1 128m
kube-system kube-proxy-tqfnr 1/1 Running 1 128m
kube-system kube-scheduler-docker-desktop 1/1 Running 1 128m
kube-system storage-provisioner 1/1 Running 2 127m
kube-system vpnkit-controller 1/1 Running 12 127m
minio-operator console-6b6cf8946c-vxcqh 1/1 Running 0 76m
minio-operator minio-operator-69fd675557-s62nl 1/1 Running 0 76m
:/$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb 251G 1.9G 237G 1% /
tmpfs 6.2G 401M 5.8G 7% /mnt/wsl
tools 477G 69G 409G 15% /init
none 6.1G 0 6.1G 0% /dev
none 6.2G 12K 6.2G 1% /run
none 6.2G 0 6.2G 0% /run/lock
none 6.2G 0 6.2G 0% /run/shm
none 6.2G 0 6.2G 0% /run/user
tmpfs 6.2G 0 6.2G 0% /sys/fs/cgroup
C:\ 477G 69G 409G 15% /mnt/c
D:\ 932G 132M 932G 1% /mnt/d
/dev/sdd 251G 2.7G 236G 2% /mnt/wsl/docker-desktop-data/isocache
none 6.2G 12K 6.2G 1% /mnt/wsl/docker-desktop/shared-sockets/host-services
/dev/sdc 251G 132M 239G 1% /mnt/wsl/docker-desktop/docker-desktop-proxy
/dev/loop0 396M 396M 0 100% /mnt/wsl/docker-desktop/cli-tools
I believe creating a persistent volume, using it in a namespace, and using that namespace while creating a tenant should solve this issue. But I am stuck with the error of no memory available.
As per the code:
if (memReqSize < minMemReq) {
  return {
    error: "The requested memory size must be greater than 2Gi",
    request: 0,
    limit: 0,
  };
}
You need 2GB of RAM per node. Since you have 4 nodes, you need 8GB of RAM for Minio alone. It's likely that you don't have enough RAM to run this.
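(A quick way to check what the single docker-desktop node can actually allocate, using the node name from the question:)
kubectl get node docker-desktop -o jsonpath='{.status.allocatable.memory}'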

How does OpenEBS determine the location of my volume?

I am experimenting with OpenEBS as a storage provider for our Kubernetes cluster. OpenEBS is installed via helm on a cluster consisting of 5 nodes, created by Rancher. It seems to work; however, I don't really understand how the volume itself is provisioned.
Each node is created with 2 disks, with logical volumes spanning the disks. For example:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─centos_intern--rancher--node05-root 253:0 0 50G 0 lvm /
└─centos_intern--rancher--node05-swap 253:1 0 7,9G 0 lvm [SWAP]
sdb 8:16 0 80G 0 disk
└─sdb1 8:17 0 80G 0 part
├─centos_intern--rancher--node05-root 253:0 0 50G 0 lvm /
└─centos_intern--rancher--node05-home 253:2 0 41,1G 0 lvm /home
The node device manager (NDM) is configured with a filter excluding loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md. So far, so good.
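(For reference, this filter typically lives in the openebs-ndm-config ConfigMap shipped with the OpenEBS operator; the sketch below assumes the standard layout of that manifest rather than quoting the cluster in question:)
filterconfigs:
  - key: path-filter
    name: path filter
    state: true
    include: ""
    exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"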
When we list the block device resources created by NDM, it lists 2 resources for this node (other nodes are omitted)
> kubectl get blockdevice --all-namespaces
NAMESPACE NAME NODENAME SIZE CLAIMSTATE STATUS AGE
openebs blockdevice-d7d2b90b000a8b2268faf07c9e0f7cc5 intern-rancher-node05 85899345920 Unclaimed Active 18h
openebs sparse-e4ea6423e7d139104049e67566a2b634 intern-rancher-node05 10737418240 Unclaimed Active 18h
Exploring the created blockdevice, we see that it uses /dev/sdb as disk:
> kubectl describe blockdevice blockdevice-d7d2b90b000a8b2268faf07c9e0f7cc5 -n openebs
Name: blockdevice-d7d2b90b000a8b2268faf07c9e0f7cc5
...
Node Attributes:
  Node Name:  intern-rancher-node05
Partitioned:  No
Path:         /dev/sdb
Status:
  Claim State:  Unclaimed
  State:        Active
Events:         <none>
So here my understanding stops. Why did NDM pick /dev/sdb and not /dev/sda? What is the difference between the disks such that one is used and the other is not? Should /dev/sdb not be skipped because it is in use by the logical volumes? If I create a persistent volume, does this limit the size of my logical volumes (/home)?
Also, if I create a persistent volume claim (using jiva), a persistent volume is created in /var/openebs, for example /var/openebs/pvc-cdc4c5a2-89e1-41ed-b9e7-c672f27a8bed. Does this mean it doesn't use the disk at all but stores everything in the filesystem in the logical volume?

How to scale a daemon set in Kubernetes using kubectl

I currently only have a terminal to access the Kubernetes cluster, and I check the ingress controller like this:
$ k get daemonset --all-namespaces
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system traefik-ingress-controller 0 0 0 0 0 IngressProxy=true 60d
logging fluentd-es 0 0 0 0 0 beta.kubernetes.io/fluentd-ds-ready=true 28d
I am now using kubectl (v1.15.2) to scale the daemon set like this:
kubectl scale --replicas=1 DaemonSet/traefik-ingress-controller -n kube-system
but it shows:
Error from server (NotFound): the server could not find the requested resource
What should I do to start Traefik from the terminal using the command line? This is my daemon set describe output:
~/Library/Mobile Documents/com~apple~CloudDocs/Document/k8s/work/traefik-deployment-yaml/k8s-backup ⌚ 17:49:58
$ k describe daemonset traefik-ingress-controller -n kube-system
Name: traefik-ingress-controller
Selector: app=traefik
Node-Selector: IngressProxy=true
Labels: app=traefik
Annotations: deprecated.daemonset.template.generation: 18
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"app":"traefik"},"name":"traefik-ingress-controller","na...
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=traefik
  Service Account:  traefik-ingress-controller
  Containers:
    traefik-ingress-lb:
      Image:       traefik:v2.1.6
      Ports:       80/TCP, 443/TCP, 8080/TCP
      Host Ports:  80/TCP, 443/TCP, 0/TCP
      Args:
        --configfile=/config/traefik.yaml
        --logLevel=INFO
        --metrics=true
        --metrics.prometheus=true
        --entryPoints.metrics.address=:8080
        --metrics.prometheus.entryPoint=metrics
        --metrics.prometheus.addServicesLabels=true
        --metrics.prometheus.addEntryPointsLabels=true
        --metrics.prometheus.buckets=0.100000,0.300000,1.200000,5.000000
      Limits:
        cpu:     2
        memory:  1Gi
      Requests:
        cpu:     1
        memory:  1Gi
      Environment:  <none>
      Mounts:
        /config from config (rw)
  Volumes:
    config:
      Type:      ConfigMap (a volume populated by a ConfigMap)
      Name:      traefik-config
      Optional:  false
Events:
  Type     Reason            Age    From                  Message
  ----     ------            ----   ----                  -------
  Warning  FailedDaemonPod   3h32m  daemonset-controller  Found failed daemon pod kube-system/traefik-ingress-controller-wdpsq on node azshara-k8s03, will try to kill it
  Normal   SuccessfulDelete  3h32m  daemonset-controller  Deleted pod: traefik-ingress-controller-wdpsq
  Normal   SuccessfulCreate  3h32m  daemonset-controller  Created pod: traefik-ingress-controller-qmttl
  Warning  FailedDaemonPod   3h32m  daemonset-controller  Found failed daemon pod kube-system/traefik-ingress-controller-qmttl on node azshara-k8s03, will try to kill it
  Normal   SuccessfulDelete  3h32m  daemonset-controller  Deleted pod: traefik-ingress-controller-qmttl
  Normal   SuccessfulCreate  3h32m  daemonset-controller  Created pod: traefik-ingress-controller-nlxwc
You don not need to scale a deamon set on K8s.
A Daemon Set ensures that all eligible nodes run a copy of a Pod..
As nodes are added to the cluster, Pods are added to them. So you need to add new node to cluster and deamon set will be scheduled there unless you have a very unique taint to disallow given deamon set.