k8s 1.2 deployed locally, single-node docker
Am I doing something wrong? Is this working for everyone else or is something broken in my k8s deployment?
Following the example in the ConfigMaps guide, /etc/config/special.how should exist in the container below, but it does not:
[root@totoro brs-kubernetes]# kubectl create -f example.yaml
configmap "special-config" created
pod "dapi-test-pod" created
[root@totoro brs-kubernetes]# kubectl exec -it dapi-test-pod -- sh
/ # cd /etc/config/
/etc/config # ls
/etc/config # ls -alh
total 4
drwxrwxrwt 2 root root 40 Mar 23 18:47 .
drwxr-xr-x 7 root root 4.0K Mar 23 18:47 ..
/etc/config #
example.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
---
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: ["sleep", "100"]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: special-config
        items:
          - key: special.how
            path: how.file
  restartPolicy: Never
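As a sanity check, this is how the volume contents can be inspected from outside the pod; note that with the items mapping above, the special.how key would be projected to /etc/config/how.file rather than /etc/config/special.how (commands assume the pod is still running):

kubectl exec dapi-test-pod -- ls -alh /etc/config
kubectl exec dapi-test-pod -- cat /etc/config/how.file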
A summary of the conformance test failures follows (jayunit100 asked me to run them); the full run is in this gist.
Summarizing 7 Failures:
[Fail] ConfigMap [It] updates should be reflected in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:262
[Fail] Downward API volume [It] should provide podname only [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
[Fail] Downward API volume [It] should update labels on modification [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:82
[Fail] ConfigMap [It] should be consumable from pods in volume with mappings [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
[Fail] Networking [It] should function for intra-pod communication [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:121
[Fail] Downward API volume [It] should update annotations on modification [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:119
[Fail] ConfigMap [It] should be consumable from pods in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
Ran 93 of 265 Specs in 2875.468 seconds
FAIL! -- 86 Passed | 7 Failed | 0 Pending | 172 Skipped --- FAIL: TestE2E (2875.48s)
FAIL
Output of findmnt:
[schou@totoro single-node]$ findmnt
TARGET SOURCE FSTYPE OPTIONS
/ /dev/mapper/fedora-root
│ ext4 rw,relatime,data=ordere
├─/sys sysfs sysfs rw,nosuid,nodev,noexec,
│ ├─/sys/kernel/security securityfs securit rw,nosuid,nodev,noexec,
│ ├─/sys/fs/cgroup tmpfs tmpfs ro,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/net_cls,net_prio cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/memory cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/hugetlb cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/perf_event cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,
│ │ └─/sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,
│ ├─/sys/fs/pstore pstore pstore rw,nosuid,nodev,noexec,
│ ├─/sys/firmware/efi/efivars efivarfs efivarf rw,nosuid,nodev,noexec,
│ ├─/sys/kernel/debug debugfs debugfs rw,relatime
│ ├─/sys/kernel/config configfs configf rw,relatime
│ └─/sys/fs/fuse/connections fusectl fusectl rw,relatime
├─/proc proc proc rw,nosuid,nodev,noexec,
│ ├─/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=32,pgrp=
│ └─/proc/fs/nfsd nfsd nfsd rw,relatime
├─/dev devtmpfs devtmpf rw,nosuid,size=8175208k
│ ├─/dev/shm tmpfs tmpfs rw,nosuid,nodev
│ ├─/dev/pts devpts devpts rw,nosuid,noexec,relati
│ ├─/dev/mqueue mqueue mqueue rw,relatime
│ └─/dev/hugepages hugetlbfs hugetlb rw,relatime
├─/run tmpfs tmpfs rw,nosuid,nodev,mode=75
│ ├─/run/user/42 tmpfs tmpfs rw,nosuid,nodev,relatim
│ │ └─/run/user/42/gvfs gvfsd-fuse fuse.gv rw,nosuid,nodev,relatim
│ └─/run/user/1000 tmpfs tmpfs rw,nosuid,nodev,relatim
│ └─/run/user/1000/gvfs gvfsd-fuse fuse.gv rw,nosuid,nodev,relatim
├─/tmp tmpfs tmpfs rw
├─/boot /dev/sda2 ext4 rw,relatime,data=ordere
│ └─/boot/efi /dev/sda1 vfat rw,relatime,fmask=0077,
├─/var/lib/nfs/rpc_pipefs sunrpc rpc_pip rw,relatime
├─/var/lib/kubelet/pods/fd20f710-fb82-11e5-ab9f-0862662cf845/volumes/kubernetes.io~secret/default-token-qggyv
│ tmpfs tmpfs rw,relatime
├─/var/lib/kubelet/pods/2f652e15-fb83-11e5-ab9f-0862662cf845/volumes/kubernetes.io~configmap/config-volume
│ tmpfs tmpfs rw,relatime
└─/var/lib/kubelet/pods/2f652e15-fb83-11e5-ab9f-0862662cf845/volumes/kubernetes.io~secret/default-token-6bzfe
tmpfs tmpfs rw,relatime
[schou@totoro single-node]$
Thanks to @Paul Morie for helping me diagnose and fix this (from the GitHub issue):
bingo, the mount propagation mode of /var/lib/kubelet is private. try changing the mount flag for the kubelet dir to -v /var/lib/kubelet:/var/lib/kubelet:rw,shared
I also had to change MountFlags=slave to MountFlags=shared in my Docker systemd unit file.
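For anyone else hitting this with the containerized kubelet, a minimal sketch of the two changes (the exact docker run arguments and the systemd unit path will vary with your setup):

# 1. Run the kubelet container with shared mount propagation on the kubelet dir
docker run ... \
  -v /var/lib/kubelet:/var/lib/kubelet:rw,shared \
  ...

# 2. In the Docker systemd unit (e.g. /usr/lib/systemd/system/docker.service),
#    change MountFlags=slave to MountFlags=shared, then reload and restart Docker:
sudo systemctl daemon-reload
sudo systemctl restart docker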
Related
I am facing this issue when mounting a static Ceph volume in K8s:
MountVolume.MountDevice failed for volume "test1-pv" : rpc error: code = Internal desc = an error (exit status 32) occurred while running mount args: [-t ceph 10.107.127.65:6789,10.98.28.166:6789,10.96.128.54:6789:/volumes/sharedvg/sharedvolume/8a370586-60e6-4ec7-9d5b-c8c7ce7786c6 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/test1-pv/globalmount -o name=csi-cephfs-provisioner,secretfile=/tmp/csi/keys/keyfile-1586083215,mds_namespace=myfs,_netdev] stderr: mount error 13 = Permission denied
But I'm not sure if this is the right thing to do, so please point me in the right direction. Here is my use case: I want to set up a shared file system that can be accessed from all Pods in all namespaces. Concurrent writes are not a big concern, since most of the Pods will only read from this shared location (Python packages, etc.).
Re-using the same PVC across namespaces is not possible, because a PVC is a namespaced object.
So what I did was create a static volume in Ceph under a SubVolumeGroup, and create one PV/PVC pair for each namespace, expecting each pair to access the same files in the Ceph volume.
Here is the volume that I mounted to a Pod:
ubuntu@host1:~$ ls -l /mnt/ceph/volumes/sharedvg/sharedvolume/
total 0
drwxrwxrwx 2 root root 0 Mar 9 11:22 8a370586-60e6-4ec7-9d5b-c8c7ce7786c6
ubuntu@host1:~$ ls -l /mnt/ceph/volumes/sharedvg/sharedvolume/8a370586-60e6-4ec7-9d5b-c8c7ce7786c6/
total 0
ubuntu@host1:~$
Here is the PV and PVC YAML file. I copied the adminID and adminKey values from the secret 'rook-csi-cephfs-provisioner' in the rook-ceph namespace.
apiVersion: v1
kind: Secret
metadata:
  name: rook-csi-cephfs-static-provisioner
type: Opaque
data:
  userID: "XXX"
  userKey: "XXX"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test1-pv
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 128Gi
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: rook-csi-cephfs-static-provisioner
      namespace: default
    volumeAttributes:
      clusterID: rook-ceph
      fsName: "myfs"
      staticVolume: "true"
      rootPath: /volumes/sharedvg/sharedvolume/8a370586-60e6-4ec7-9d5b-c8c7ce7786c6
    volumeHandle: test1-pv
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  claimRef:
    name: test-pvc-1
    namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-1
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 128Gi
  storageClassName: ""
  volumeMode: Filesystem
  volumeName: test1-pv
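For reference, a sketch of how the values above can be pulled out of the Rook provisioner secret (assuming a standard Rook deployment where rook-csi-cephfs-provisioner carries adminID/adminKey; the decoded values then need to be base64-encoded again before going under data: here):

kubectl -n rook-ceph get secret rook-csi-cephfs-provisioner -o jsonpath='{.data.adminID}' | base64 -d
kubectl -n rook-ceph get secret rook-csi-cephfs-provisioner -o jsonpath='{.data.adminKey}' | base64 -d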
This is the busybox deployment yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  namespace: default
  labels:
    app: test
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: busybox
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: test1
              mountPath: /test1
          command: ['sh', '-c', 'echo Container 1 is Running ; sleep 3600']
      volumes:
        - name: test1
          persistentVolumeClaim:
            claimName: test-pvc-1
This is the event log of the Pod:
Warning  FailedMount  29m  kubelet  MountVolume.MountDevice failed for volume "test1-pv" : rpc error: code = Internal desc = an error (exit status 32) occurred while running mount args: [-t ceph 10.107.127.65:6789,10.98.28.166:6789,10.96.128.54:6789:/volumes/sharedvg/sharedvolume/8a370586-60e6-4ec7-9d5b-c8c7ce7786c6 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/test1-pv/globalmount -o name=csi-cephfs-provisioner,secretfile=/tmp/csi/keys/keyfile-973267258,mds_namespace=myfs,_netdev] stderr: mount error 13 = Permission denied
Warning  FailedMount  27m  kubelet  MountVolume.MountDevice failed for volume "test1-pv" : rpc error: code = Internal desc = an error (exit status 32) occurred while running mount args: [-t ceph 10.107.127.65:6789,10.98.28.166:6789,10.96.128.54:6789:/volumes/sharedvg/sharedvolume/8a370586-60e6-4ec7-9d5b-c8c7ce7786c6 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/test1-pv/globalmount -o name=csi-cephfs-provisioner,secretfile=/tmp/csi/keys/keyfile-2348945139,mds_namespace=myfs,_netdev] stderr: mount error 13 = Permission denied
Warning  FailedMount  25m  kubelet  MountVolume.MountDevice failed for volume "test1-pv" : rpc error: code = Internal desc = an error (exit status 32) occurred while running mount args: [-t ceph 10.107.127.65:6789,10.98.28.166:6789,10.96.128.54:6789:/volumes/sharedvg/sharedvolume/8a370586-60e6-4ec7-9d5b-c8c7ce7786c6 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/test1-pv/globalmount -o name=csi-cephfs-provisioner,secretfile=/tmp/csi/keys/keyfile-3861388178,mds_namespace=myfs,_netdev] stderr: mount error 13 = Permission denied
Warning  FailedMount  23m  kubelet  MountVolume.MountDevice failed for volume "test1-pv" : rpc error: code = Internal desc = an error (exit status 32) occurred while running mount args: [-t ceph 10.107.127.65:6789,10.98.28.166:6789,10.96.128.54:6789:/volumes/sharedvg/sharedvolume/8a370586-60e6-4ec7-9d5b-c8c7ce7786c6 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/test1-pv/globalmount -o name=csi-cephfs-provisioner,secretfile=/tmp/csi/keys/keyfile-4165129570,mds_namespace=myfs,_netdev] stderr: mount error 13 = Permission denied
Warning  FailedMount  7m14s (x10 over 34m)  kubelet  Unable to attach or mount volumes: unmounted volumes=[test1], unattached volumes=[test1 kube-api-access-fwr79]: timed out waiting for the condition
Warning  FailedMount  3m3s (x13 over 21m)  kubelet  (combined from similar events): MountVolume.MountDevice failed for volume "test1-pv" : rpc error: code = Internal desc = an error (exit status 32) occurred while running mount args: [-t ceph 10.107.127.65:6789,10.98.28.166:6789,10.96.128.54:6789:/volumes/sharedvg/sharedvolume/8a370586-60e6-4ec7-9d5b-c8c7ce7786c6 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/test1-pv/globalmount -o name=csi-cephfs-provisioner,secretfile=/tmp/csi/keys/keyfile-2075406143,mds_namespace=myfs,_netdev] stderr: mount error 13 = Permission denied
Warning  FailedMount  27s (x2 over 29m)  kubelet  Unable to attach or mount volumes: unmounted volumes=[test1], unattached volumes=[kube-api-access-fwr79 test1]: timed out waiting for the condition
The Pod is in 'ContainerCreating' state.
Any suggestions? Thanks
I want to rm -rf /var/run/secrets/kubernetes.io/serviceaccount/ to delete the default Kubernetes service account credentials from the container, in order to test anonymous API access.
However, running the above command shows that many of the files are on a read-only filesystem, so I want to temporarily remount the filesystem (mount -o remount) to delete the files.
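Roughly, what I have in mind is something like this (hypothetical; I still need to identify the correct mount target, and the container would need enough privileges to remount at all):

# remount the tmpfs backing the service account files read-write, then delete them
mount -o remount,rw <mount-target-for-the-serviceaccount-tmpfs>
rm -rf /var/run/secrets/kubernetes.io/serviceaccount/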
Now, how can I tell which filesystem in the output of mount I need to remount? None of the filesystems below are mounted on a /var/run/ path; the closest match is the /var/lib/ that appears in the options of the overlay filesystem mounted on /.
How can I safely delete the files under /var/run/secrets/kubernetes.io/serviceaccount/?
root@ctf1-deploy1-6fd44cbcd6-vrckg:~# rm -rf /var/run/secrets/kubernetes.io/serviceaccount/
rm: cannot remove '/var/run/secrets/kubernetes.io/serviceaccount/..data': Read-only file system
rm: cannot remove '/var/run/secrets/kubernetes.io/serviceaccount/token': Read-only file system
rm: cannot remove '/var/run/secrets/kubernetes.io/serviceaccount/namespace': Read-only file system
rm: cannot remove '/var/run/secrets/kubernetes.io/serviceaccount/ca.crt': Read-only file system
rm: cannot remove '/var/run/secrets/kubernetes.io/serviceaccount/..2020_03_25_08_59_25.631059710/namespace': Read-only file system
rm: cannot remove '/var/run/secrets/kubernetes.io/serviceaccount/..2020_03_25_08_59_25.631059710/ca.crt': Read-only file system
rm: cannot remove '/var/run/secrets/kubernetes.io/serviceaccount/..2020_03_25_08_59_25.631059710/token': Read-only file system
root@ctf1-deploy1-6fd44cbcd6-vrckg:~# mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/ZLOIVO6AXDQFZ3NU3O4VYG5QJC:/var/lib/docker/overlay2/l/MZY5Y3MJC6IVDUFSNOQEU55JZJ:/var/lib/docker/overlay2/l/E5HAR5VEWTG6MCFYN22KDJNMK3:/var/lib/docker/overlay2/l/5Z2WGKVJRNGPXV5QIFR5CWHJSE:/var/lib/docker/overlay2/l/U5HNVHXGGWRIGBX3XJV5CW5VIZ:/var/lib/docker/overlay2/l/TI5WPYBQSWQJXBMOY7DT5DN26Z:/var/lib/docker/overlay2/l/XOZNIEPFIZTLP2HEHFKL66H4EO,upperdir=/var/lib/docker/overlay2/4ea51426b9eb47af0faf53e60a25b6cffae64c3338130663efd1068c7d2ffb20/diff,workdir=/var/lib/docker/overlay2/4ea51426b9eb47af0faf53e60a25b6cffae64c3338130663efd1068c7d2ffb20/work)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (ro,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (ro,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (ro,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (ro,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/perf_event type cgroup (ro,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/memory type cgroup (ro,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (ro,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (ro,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/pids type cgroup (ro,nosuid,nodev,noexec,relatime,pids)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
/dev/nvme0n1p2 on /dev/termination-log type ext4 (rw,relatime,data=ordered)
/dev/nvme0n1p2 on /etc/resolv.conf type ext4 (rw,relatime,data=ordered)
/dev/nvme0n1p2 on /etc/hostname type ext4 (rw,relatime,data=ordered)
/dev/nvme0n1p2 on /etc/hosts type ext4 (rw,relatime,data=ordered)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
tmpfs on /run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime)
proc on /proc/bus type proc (ro,relatime)
proc on /proc/fs type proc (ro,relatime)
proc on /proc/irq type proc (ro,relatime)
proc on /proc/sys type proc (ro,relatime)
proc on /proc/sysrq-trigger type proc (ro,relatime)
tmpfs on /proc/acpi type tmpfs (ro,relatime)
tmpfs on /proc/kcore type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/keys type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/timer_list type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/sched_debug type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /sys/firmware type tmpfs (ro,relatime)
Our server runs CentOS 7.2, and today it seems to have failed.
When I connect to it using ssh, I get
-bash: /share/home/MGI/.bash_profile: Input/output error
and end up logged in like this:
-bash-4.2$
ls fails:
ls: cannot open directory .: Input/output error
Trying to edit a file with vi gives Permission denied, but cd and pwd still work.
I googled it and found that the disk might be damaged, so I tried some of the suggestions. mount gives me this:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=32831312k,nr_inodes=8207828,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/md126p2 on / type ext4 (rw,relatime,data=ordered)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=35,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
/dev/mapper/mpatha1 on /share type xfs (rw,relatime,attr2,inode64,noquota)
/dev/md126p1 on /boot type ext4 (rw,relatime,data=ordered)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
tmpfs on /run/user/1038 type tmpfs (rw,nosuid,nodev,relatime,size=6569364k,mode=700,uid=1038,gid=1039)
tmpfs on /run/user/1016 type tmpfs (rw,nosuid,nodev,relatime,size=6569364k,mode=700,uid=1016,gid=1016)
tmpfs on /run/user/1008 type tmpfs (rw,nosuid,nodev,relatime,size=6569364k,mode=700,uid=1008,gid=1008)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=6569364k,mode=700)
gvfsd-fuse on /run/user/0/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
tmpfs on /run/user/1019 type tmpfs (rw,nosuid,nodev,relatime,size=6569364k,mode=700,uid=1019,gid=1020)
dmesg gives a long output and these two lines appear frequently:
XFS (dm-1): xfs_log_force: error -5 returned.
scsi 11:0:0:0: alua: rtpg failed with 8000002
What should I do now to find out what is actually going wrong, and how can I fix it? Many thanks.
After starting Kubernetes from this guide: http://kubernetes.io/docs/getting-started-guides/docker/ I get a lot of unused mount points on my node. The number seems to depend on how many pods are running; just now I had to unmount over 2600 mount points. When these build up, findmnt takes a lot of resources to run. The mount entries look like this:
tmpfs on /var/lib/kubelet/pods/599d6157-081e-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha type tmpfs (rw)
Does anybody know why these are not getting unmounted automatically? The tutorial does anticipate that you will have to clean some of these up (see the 'Turning down your cluster' section), but this seems excessive. A few days ago I had to clean up 22,000 or so of them because I had a mongo cluster and redis running for a while.
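For scale, this is roughly how I count the leftover entries (the pattern just matches the kubernetes.io~secret tmpfs mounts shown above):

mount | grep 'kubernetes.io~secret' | wc -l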
--- UPDATE ---
After purging my system of the unused mounts, and waiting a few minutes, findmnt produces entries like this:
├─/var/lib/kubelet/pods/02929977-0812-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/02929977-0812-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/02929977-0812-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/02929977-0812-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/02929977-0812-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/02929977-0812-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/02929977-0812-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
├─/var/lib/docker/containers/c84ad9b0f2ec580bedef394aa46bb147ed6c4f1e9454cd3729459d9127c0986e/shm shm tmpfs rw,nosuid,nodev,noe
├─/var/lib/kubelet/pods/0eb8631e-0810-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-2vjia tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/0eb8631e-0810-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-2vjia tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/0eb8631e-0810-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-2vjia tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/0eb8631e-0810-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-2vjia tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/0eb8631e-0810-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-2vjia tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/0eb8631e-0810-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-2vjia tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/0eb8631e-0810-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-2vjia tmpfs tmpfs rw,relatime
├─/var/lib/kubelet/pods/fae71387-08aa-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/fae71387-08aa-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/fae71387-08aa-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/fae71387-08aa-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/fae71387-08aa-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/fae71387-08aa-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/fae71387-08aa-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
├─/var/lib/docker/containers/5392d49f5140274ddcfbe757cf6a07336aa60975f3ea122d865a3b80f5540c1f/shm
-- Update #2 -- This is how I am starting kubelet:
ARCH=amd64
DNS_IP=10.0.0.10
K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/stable.txt)
docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
--name=kubelet \
-d \
gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
/hyperkube kubelet \
--containerized \
--hostname-override="127.0.0.1" \
--address="0.0.0.0" \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster-dns=$DNS_IP \
--cluster-domain=cluster.local \
--allow-privileged=true --v=2
Looking at some other suggestions (thanks Thibault Deheurles), I tried removing --containerized and --volume=/:/rootfs:ro, but that caused k8s not to start at all.
-- UPDATE #3 --
I tried adding the ,shared mount flag to my /var/lib/kubelet volume option; it now looks like this:
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw,shared
This didn't make a difference.
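As a sanity check (not something from the tutorial), one way to confirm whether the propagation change actually took effect on the host is to look at /proc/self/mountinfo; a shared mount carries a shared:N tag in its optional fields (newer util-linux can also show this via findmnt -o TARGET,PROPAGATION):

# print the mountinfo entries for / and /var/lib/kubelet (field 5 is the mount point)
awk '$5 == "/" || $5 == "/var/lib/kubelet"' /proc/self/mountinfo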
However, while tailing my kubelet docker container's logs, I noticed that this message occurs every time I get a new mount:
2016-04-26T20:30:52.447842722Z I0426 20:30:52.447559 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/6bc8072c-0be9-11e6-b3e6-0090f5ea551f/volumes/kubernetes.io~empty-dir/etcd-storage: exit status 1
A failed garbage-collection message also appears in the log. Here are a few more entries:
2016-04-26T20:38:11.436858475Z E0426 20:38:11.436757 21740 kubelet.go:956] Image garbage collection failed: non-existent label "docker-images"
2016-04-26T20:38:12.448049454Z I0426 20:38:12.447852 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/1df6a8b4d6e129d5ed8840e370203c11/volumes/kubernetes.io~empty-dir/varetcd: exit status 1
2016-04-26T20:38:52.448175137Z I0426 20:38:52.447949 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/d95a6048198f747c5fcb74ee23f1f25c/volumes/kubernetes.io~empty-dir/data: exit status 1
2016-04-26T20:39:14.447892769Z I0426 20:39:14.447649 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/6bc8072c-0be9-11e6-b3e6-0090f5ea551f/volumes/kubernetes.io~empty-dir/etcd-storage: exit status 1
2016-04-26T20:39:28.441137221Z I0426 20:39:28.440920 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/1df6a8b4d6e129d5ed8840e370203c11/volumes/kubernetes.io~empty-dir/varetcd: exit status 1
2016-04-26T20:40:20.441118739Z I0426 20:40:20.441018 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/d95a6048198f747c5fcb74ee23f1f25c/volumes/kubernetes.io~empty-dir/data: exit status 1
2016-04-26T20:40:22.447832573Z I0426 20:40:22.447590 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/6bc8072c-0be9-11e6-b3e6-0090f5ea551f/volumes/kubernetes.io~empty-dir/etcd-storage: exit status 1
2016-04-26T20:40:53.447612605Z I0426 20:40:53.447534 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/1df6a8b4d6e129d5ed8840e370203c11/volumes/kubernetes.io~empty-dir/varetcd: exit status 1
2016-04-26T20:41:27.449053007Z I0426 20:41:27.448820 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/d95a6048198f747c5fcb74ee23f1f25c/volumes/kubernetes.io~empty-dir/data: exit status 1
2016-04-26T20:41:30.440974280Z I0426 20:41:30.440889 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/6bc8072c-0be9-11e6-b3e6-0090f5ea551f/volumes/kubernetes.io~empty-dir/etcd-storage: exit status 1
2016-04-26T20:41:58.441001603Z I0426 20:41:58.440906 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/1df6a8b4d6e129d5ed8840e370203c11/volumes/kubernetes.io~empty-dir/varetcd: exit status 1
-- UPDATE #4 --
@PaulMorie asked about the mount/findmnt versions:
$ which findmnt
/bin/findmnt
$ uname -a
Linux andromeda 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.4 LTS
Release: 14.04
Codename: trusty
$ dpkg -L mount | grep findmn
/usr/share/man/man8/findmnt.8.gz
/bin/findmnt
$ dpkg -l mount
ii mount 2.20.1-5.1ubuntu20.7 amd64 Tools for mounting and manipulating filesystems
-- UPDATE #5 -- @tsaarni asked what I did to work around this problem. Here is my hack:
[eric@andromeda [feature/k8s-packaging-openvpn]util]$ cat clean-mounts.sh
#!/bin/bash
# Unmount every leftover kubernetes.io~secret tmpfs and report how many were cleaned.
counter=0
for m in $( mount | grep secret | awk '{print $3}' ); do
  sudo umount "$m"
  counter=$((counter + 1))
done
echo "cleaned $counter mounts"
[eric@andromeda [feature/k8s-packaging-openvpn]util]$ cat clean-mounts-watcher.sh
#!/bin/bash
# Re-run clean-mounts.sh from this script's directory every 60 seconds.
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
while : ; do "$DIR"/clean-mounts.sh ; sleep 60; done
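I just leave the watcher running in the background on the node, e.g. something like:

nohup ./clean-mounts-watcher.sh >/tmp/clean-mounts.log 2>&1 &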
The Docker tutorial is outdated for now; you need to add some elements on how to run the different services.
The last working version for me was this one.