Secret default-token mount leak while running Kubernetes in Docker

After starting Kubernetes from this guide: http://kubernetes.io/docs/getting-started-guides/docker/, I get a lot of unused mount points on my node. The number seems to depend on how many pods are running; just now I had to unmount over 2600 mount points. When these build up, findmnt takes a lot of resources to run. The mount entries look like this:
tmpfs on /var/lib/kubelet/pods/599d6157-081e-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha type tmpfs (rw)
Does anybody know why these are not getting unmounted automatically? The tutorial anticipates that you will have to clean some of these up (see the "Turning down your cluster" section), but this seems excessive. A few days ago I had to clean up 22,000 or so of them because I had a MongoDB cluster and Redis running for a while.
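For reference, a quick way to count how many of these leaked secret mounts are present is to grep the kernel mount table (a minimal sketch, nothing more):
# Count the leaked kubernetes.io~secret tmpfs mounts on the node.
grep -c 'kubernetes.io~secret' /proc/mounts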
--- UPDATE ---
After purging my system of the unused mounts, and waiting a few minutes, findmnt produces entries like this:
├─/var/lib/kubelet/pods/02929977-0812-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/02929977-0812-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/02929977-0812-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/02929977-0812-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/02929977-0812-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/02929977-0812-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/02929977-0812-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
├─/var/lib/docker/containers/c84ad9b0f2ec580bedef394aa46bb147ed6c4f1e9454cd3729459d9127c0986e/shm shm tmpfs rw,nosuid,nodev,noe
├─/var/lib/kubelet/pods/0eb8631e-0810-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-2vjia tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/0eb8631e-0810-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-2vjia tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/0eb8631e-0810-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-2vjia tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/0eb8631e-0810-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-2vjia tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/0eb8631e-0810-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-2vjia tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/0eb8631e-0810-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-2vjia tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/0eb8631e-0810-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-2vjia tmpfs tmpfs rw,relatime
├─/var/lib/kubelet/pods/fae71387-08aa-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/fae71387-08aa-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/fae71387-08aa-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/fae71387-08aa-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/fae71387-08aa-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/fae71387-08aa-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
│ └─/var/lib/kubelet/pods/fae71387-08aa-11e6-a512-0090f5ea551f/volumes/kubernetes.io~secret/default-token-kkzha tmpfs tmpfs rw,relatime
├─/var/lib/docker/containers/5392d49f5140274ddcfbe757cf6a07336aa60975f3ea122d865a3b80f5540c1f/shm
-- UPDATE #2 -- This is how I am starting kubelet:
ARCH=amd64
DNS_IP=10.0.0.10
K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/stable.txt)
docker run \
  --volume=/:/rootfs:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:rw \
  --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
  --volume=/var/run:/var/run:rw \
  --net=host \
  --pid=host \
  --privileged=true \
  --name=kubelet \
  -d \
  gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
  /hyperkube kubelet \
    --containerized \
    --hostname-override="127.0.0.1" \
    --address="0.0.0.0" \
    --api-servers=http://localhost:8080 \
    --config=/etc/kubernetes/manifests \
    --cluster-dns=$DNS_IP \
    --cluster-domain=cluster.local \
    --allow-privileged=true --v=2
Following some other suggestions (thanks Thibault Deheurles), I tried removing --containerized and --volume=/:/rootfs:ro, but that caused Kubernetes not to start at all.
-- UPDATE #3 --
I tried adding the shared mount flag to my /var/lib/kubelet volume option; it now looks like this:
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw,shared
This didn't make a difference.
However, while tailing my kubelet Docker container's logs, I noticed that this message occurs every time I get a new mount:
2016-04-26T20:30:52.447842722Z I0426 20:30:52.447559 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/6bc8072c-0be9-11e6-b3e6-0090f5ea551f/volumes/kubernetes.io~empty-dir/etcd-storage: exit status 1
A failed garbage-collection message also appears in the log. Here are a few more entries:
2016-04-26T20:38:11.436858475Z E0426 20:38:11.436757 21740 kubelet.go:956] Image garbage collection failed: non-existent label "docker-images"
2016-04-26T20:38:12.448049454Z I0426 20:38:12.447852 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/1df6a8b4d6e129d5ed8840e370203c11/volumes/kubernetes.io~empty-dir/varetcd: exit status 1
2016-04-26T20:38:52.448175137Z I0426 20:38:52.447949 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/d95a6048198f747c5fcb74ee23f1f25c/volumes/kubernetes.io~empty-dir/data: exit status 1
2016-04-26T20:39:14.447892769Z I0426 20:39:14.447649 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/6bc8072c-0be9-11e6-b3e6-0090f5ea551f/volumes/kubernetes.io~empty-dir/etcd-storage: exit status 1
2016-04-26T20:39:28.441137221Z I0426 20:39:28.440920 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/1df6a8b4d6e129d5ed8840e370203c11/volumes/kubernetes.io~empty-dir/varetcd: exit status 1
2016-04-26T20:40:20.441118739Z I0426 20:40:20.441018 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/d95a6048198f747c5fcb74ee23f1f25c/volumes/kubernetes.io~empty-dir/data: exit status 1
2016-04-26T20:40:22.447832573Z I0426 20:40:22.447590 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/6bc8072c-0be9-11e6-b3e6-0090f5ea551f/volumes/kubernetes.io~empty-dir/etcd-storage: exit status 1
2016-04-26T20:40:53.447612605Z I0426 20:40:53.447534 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/1df6a8b4d6e129d5ed8840e370203c11/volumes/kubernetes.io~empty-dir/varetcd: exit status 1
2016-04-26T20:41:27.449053007Z I0426 20:41:27.448820 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/d95a6048198f747c5fcb74ee23f1f25c/volumes/kubernetes.io~empty-dir/data: exit status 1
2016-04-26T20:41:30.440974280Z I0426 20:41:30.440889 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/6bc8072c-0be9-11e6-b3e6-0090f5ea551f/volumes/kubernetes.io~empty-dir/etcd-storage: exit status 1
2016-04-26T20:41:58.441001603Z I0426 20:41:58.440906 21740 nsenter_mount.go:185] Failed findmnt command for path /var/lib/kubelet/pods/1df6a8b4d6e129d5ed8840e370203c11/volumes/kubernetes.io~empty-dir/varetcd: exit status 1
-- UPDATE #4 --
@PaulMorie asked about the mount/findmnt versions:
$ which findmnt
/bin/findmnt
$ uname -a
Linux andromeda 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.4 LTS
Release: 14.04
Codename: trusty
$ dpkg -L mount | grep findmn
/usr/share/man/man8/findmnt.8.gz
/bin/findmnt
$ dpkg -l mount
ii mount 2.20.1-5.1ubuntu20.7 amd64 Tools for mounting and manipulating filesystems
-- UPDATE #5 -- @tsaarni asked what I did to solve this problem. Here is my hack:
[eric@andromeda [feature/k8s-packaging-openvpn]util]$ cat clean-mounts.sh
#!/bin/bash
# Unmount every leaked kubernetes secret tmpfs mount left behind by kubelet.
counter=0
for m in $(mount | grep secret | awk '{print $3}'); do
  sudo umount "$m"
  counter=$((counter + 1))
done
echo "cleaned $counter mounts"
[eric@andromeda [feature/k8s-packaging-openvpn]util]$ cat clean-mounts-watcher.sh
#!/bin/bash
# Re-run clean-mounts.sh every 60 seconds.
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
while :; do "$DIR/clean-mounts.sh"; sleep 60; done

The Docker tutorial is currently outdated. You need to add some elements describing how to run the different services.
The last version that worked for me was this one.

Related

Disk Usage in kubernetes pod

I am trying to debug the storage usage in my Kubernetes pod. I have seen the pod get evicted because of DiskPressure. When I log in to the running pod, I see the following:
Filesystem Size Used Avail Use% Mounted on
overlay 30G 21G 8.8G 70% /
tmpfs 64M 0 64M 0% /dev
tmpfs 14G 0 14G 0% /sys/fs/cgroup
/dev/sda1 30G 21G 8.8G 70% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 14G 12K 14G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 14G 0 14G 0% /proc/acpi
tmpfs 14G 0 14G 0% /proc/scsi
tmpfs 14G 0 14G 0% /sys/firmware
root@deploy-9f45856c7-wx9hj:/# du -sh /
du: cannot access '/proc/1142/task/1142/fd/3': No such file or directory
du: cannot access '/proc/1142/task/1142/fdinfo/3': No such file or directory
du: cannot access '/proc/1142/fd/4': No such file or directory
du: cannot access '/proc/1142/fdinfo/4': No such file or directory
227M /
root@deploy-9f45856c7-wx9hj:/# du -sh /tmp
11M /tmp
root@deploy-9f45856c7-wx9hj:/# du -sh /dev
0 /dev
root@deploy-9f45856c7-wx9hj:/# du -sh /sys
0 /sys
root@deploy-9f45856c7-wx9hj:/# du -sh /etc
1.5M /etc
root@deploy-9f45856c7-wx9hj:/#
As we can see, 21G is consumed, but when I run du -sh it returns just 227M. I would like to find out what (which directory) is consuming the space.
According to the Node Conditions docs, DiskPressure has to do with conditions on the node that cause the kubelet to evict the pod. It doesn't necessarily mean the pod caused those conditions.
DiskPressure
Available disk space and inodes on either the node’s root filesystem
or image filesystem has satisfied an eviction threshold
You may want to investigate what's happening on the node instead.
It looks like process 1142 is still running and holding file descriptors, and perhaps some space (you may have other processes and other file descriptors not being released either). Is it the kubelet? To alleviate the problem you can verify that it's running and then kill it:
$ ps -Af | grep 1142
$ kill -9 1142
P.S. You need to provide more information about the processes and what's running on that node.
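One common reason for df reporting far more usage than du is a process holding deleted files open. As a rough sketch (assuming lsof is installed on the node), you could check for that before killing anything:
# On the node (not inside the pod): list open files whose link count is zero,
# i.e. files that were deleted but are still held open and still use space.
sudo lsof +L1
# Inspect what the suspect PID from the du errors still has open.
sudo ls -l /proc/1142/fd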

How to attach extra volume on Centos7 server

I have created an additional volume on my server.
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 19G 3.4G 15G 19% /
devtmpfs 874M 0 874M 0% /dev
tmpfs 896M 0 896M 0% /dev/shm
tmpfs 896M 17M 879M 2% /run
tmpfs 896M 0 896M 0% /sys/fs/cgroup
tmpfs 180M 0 180M 0% /run/user/0
/dev/sdb 25G 44M 24G 1% /mnt/HC_Volume_1788024
How can I attach /dev/sdb either to the whole server (I mean merge it with /dev/sda1) or assign it to a specific directory on the server, "/var/lib", without overwriting the current /var/lib?
You will not be able to "merge" them, as you are using standard sdX devices and not something like LVM for your file systems.
As root, you can manually run:
mount /dev/sdb /var/lib/
The original content of /var/lib will still be there (taking up space on your / filesystem).
To make this permanent, (carefully) edit your /etc/fstab and add a line like:
/dev/sdb /var/lib FILESYSTEM_OF_YOUR_SDB_DISK defaults 0 0
You will need to replace "FILESYSTEM_OF_YOUR_SDB_DISK" with the correct filesystem type ("df -T", "blkid" or "lsblk -f" will show the type).
You should test the "correctness" of your /etc/fstab by first unmounting (if you previously mounted):
umount /var/lib
Then run:
mount -a
df -T
You should see the mount point listed correctly, and "mount -a" should not have produced any errors.
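A minimal sketch of the whole sequence (the ext4 type below is an assumption; substitute whatever blkid reports for /dev/sdb):
# Check the filesystem type and UUID of the new disk.
sudo blkid /dev/sdb
# Mount it manually first to confirm it works.
sudo mount /dev/sdb /var/lib
# Persist it in /etc/fstab (ext4 is assumed here; use the type blkid showed).
echo '/dev/sdb /var/lib ext4 defaults 0 0' | sudo tee -a /etc/fstab
# Test the fstab entry: unmount, remount everything, and check the result.
sudo umount /var/lib
sudo mount -a
df -T /var/lib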

/dev/mapper/centos_server2-root is full

I have a problem with CentOS and DirectAdmin:
I can't log in to the panel because there is no space available to create a session.
When I run the df -h command, I get the following result:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos_server2-root 50G 50G 20K 100% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 8.6M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda1 1014M 143M 872M 15% /boot
/dev/mapper/centos_server2-home 192G 124G 68G 65% /home
tmpfs 1.6G 0 1.6G 0% /run/user/0
How can I free up /dev/mapper/centos_server2-root?
You can remove the old logs from your system.
For removing old log files, are your logs being rotated, e.g. do you have /var/log/messages, /var/log/messages.1, /var/log/messages.2.gz, etc, or maybe /var/log/messages-20101221, /var/log/messages-20101220.gz, etc?
The obvious way to remove those is by age, e.g.
# find /var/log -type f -mtime +14 -print
# find /var/log -type f -mtime +14 -exec rm '{}' \;
Found out here https://serverfault.com/questions/215179/centos-100-disk-full-how-to-remove-log-files-history-etc (by mikel)
Note: Use the rm command with caution.
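Before deleting anything, it may also help to confirm that logs are in fact what is filling the root filesystem. A quick sketch (the -x flag keeps du on the root filesystem, so the separate /home mount is not counted):
# Show the largest directories on the root filesystem only.
sudo du -xh --max-depth=2 / 2>/dev/null | sort -hr | head -20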

Populating Kubernetes volume from ConfigMap [duplicate]

k8s 1.2 deployed locally, single-node docker
Am I doing something wrong? Is this working for everyone else or is something broken in my k8s deployment?
Following the example in the ConfigMaps guide, /etc/config/special.how should be created below but is not:
[root@totoro brs-kubernetes]# kubectl create -f example.yaml
configmap "special-config" created
pod "dapi-test-pod" created
[root@totoro brs-kubernetes]# kubectl exec -it dapi-test-pod -- sh
/ # cd /etc/config/
/etc/config # ls
/etc/config # ls -alh
total 4
drwxrwxrwt 2 root root 40 Mar 23 18:47 .
drwxr-xr-x 7 root root 4.0K Mar 23 18:47 ..
/etc/config #
example.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
---
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: ["sleep", "100"]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: special-config
        items:
          - key: special.how
            path: how.file
  restartPolicy: Never
A summary of the conformance test failures follows (jayunit100 asked me to run them); the full run is in this gist.
Summarizing 7 Failures:
[Fail] ConfigMap [It] updates should be reflected in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:262
[Fail] Downward API volume [It] should provide podname only [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
[Fail] Downward API volume [It] should update labels on modification [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:82
[Fail] ConfigMap [It] should be consumable from pods in volume with mappings [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
[Fail] Networking [It] should function for intra-pod communication [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:121
[Fail] Downward API volume [It] should update annotations on modification [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:119
[Fail] ConfigMap [It] should be consumable from pods in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
Ran 93 of 265 Specs in 2875.468 seconds
FAIL! -- 86 Passed | 7 Failed | 0 Pending | 172 Skipped --- FAIL: TestE2E (2875.48s)
FAIL
Output of findmnt:
[schou@totoro single-node]$ findmnt
TARGET SOURCE FSTYPE OPTIONS
/ /dev/mapper/fedora-root
│ ext4 rw,relatime,data=ordere
├─/sys sysfs sysfs rw,nosuid,nodev,noexec,
│ ├─/sys/kernel/security securityfs securit rw,nosuid,nodev,noexec,
│ ├─/sys/fs/cgroup tmpfs tmpfs ro,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/net_cls,net_prio cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/memory cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/hugetlb cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/perf_event cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec,
│ │ ├─/sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,
│ │ └─/sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,
│ ├─/sys/fs/pstore pstore pstore rw,nosuid,nodev,noexec,
│ ├─/sys/firmware/efi/efivars efivarfs efivarf rw,nosuid,nodev,noexec,
│ ├─/sys/kernel/debug debugfs debugfs rw,relatime
│ ├─/sys/kernel/config configfs configf rw,relatime
│ └─/sys/fs/fuse/connections fusectl fusectl rw,relatime
├─/proc proc proc rw,nosuid,nodev,noexec,
│ ├─/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=32,pgrp=
│ └─/proc/fs/nfsd nfsd nfsd rw,relatime
├─/dev devtmpfs devtmpf rw,nosuid,size=8175208k
│ ├─/dev/shm tmpfs tmpfs rw,nosuid,nodev
│ ├─/dev/pts devpts devpts rw,nosuid,noexec,relati
│ ├─/dev/mqueue mqueue mqueue rw,relatime
│ └─/dev/hugepages hugetlbfs hugetlb rw,relatime
├─/run tmpfs tmpfs rw,nosuid,nodev,mode=75
│ ├─/run/user/42 tmpfs tmpfs rw,nosuid,nodev,relatim
│ │ └─/run/user/42/gvfs gvfsd-fuse fuse.gv rw,nosuid,nodev,relatim
│ └─/run/user/1000 tmpfs tmpfs rw,nosuid,nodev,relatim
│ └─/run/user/1000/gvfs gvfsd-fuse fuse.gv rw,nosuid,nodev,relatim
├─/tmp tmpfs tmpfs rw
├─/boot /dev/sda2 ext4 rw,relatime,data=ordere
│ └─/boot/efi /dev/sda1 vfat rw,relatime,fmask=0077,
├─/var/lib/nfs/rpc_pipefs sunrpc rpc_pip rw,relatime
├─/var/lib/kubelet/pods/fd20f710-fb82-11e5-ab9f-0862662cf845/volumes/kubernetes.io~secret/default-token-qggyv
│ tmpfs tmpfs rw,relatime
├─/var/lib/kubelet/pods/2f652e15-fb83-11e5-ab9f-0862662cf845/volumes/kubernetes.io~configmap/config-volume
│ tmpfs tmpfs rw,relatime
└─/var/lib/kubelet/pods/2f652e15-fb83-11e5-ab9f-0862662cf845/volumes/kubernetes.io~secret/default-token-6bzfe
tmpfs tmpfs rw,relatime
[schou@totoro single-node]$
Thanks to @Paul Morie for helping me diagnose and fix this (from the GitHub issue):
bingo, the mount propagation mode of /var/lib/kubelet is private. try changing the mount flag for the kubelet dir to -v /var/lib/kubelet:/var/lib/kubelet:rw,shared
I also had to change MountFlags=slave to MountFlags=shared in my docker systemd file.
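Putting the two changes together, here is a rough sketch of what they look like on the host (the drop-in file path below is an assumption; the essential pieces are the :rw,shared suffix on the kubelet volume and MountFlags=shared for the Docker daemon):
# 1. Switch the Docker daemon to shared mount propagation via a systemd
#    drop-in (the file name/path is an assumption; any drop-in that
#    overrides MountFlags works).
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/mount-flags.conf <<'EOF'
[Service]
MountFlags=shared
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
# 2. If Docker still rejects the shared flag, the host directory itself may
#    need shared propagation (this step is an assumption, not from the post):
# sudo mount --bind /var/lib/kubelet /var/lib/kubelet
# sudo mount --make-shared /var/lib/kubelet
# 3. Start the kubelet container with the shared flag on its state directory:
#    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw,shared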

Auto mount Google cloud computing disk in Centos (using /etc/fstab)

I'm currently setting up a server on Google Cloud; everything works fine except that I'm having problems automatically mounting the secondary disk to the system.
I'm running CentOS 6 / CloudLinux 6. I can mount the secondary disk without problems after boot with the following command:
mount /dev/sdb /data
Please find below the contents of /etc/fstab:
UUID=6c782ac9-0050-4837-b65a-77cb8a390772 / ext4 defaults,barrier=0 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
UUID=10417d68-7e4f-4d00-aa55-4dfbb2af8332 / ext4 default 0 0
Output of df -h (after manual mount):
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.8G 1.2G 8.2G 13% /
tmpfs 3.6G 0 3.6G 0% /dev/shm
/dev/sdb 99G 60M 94G 1% /data
Thank you in advance,
~ Luc
Our provisioning scripts look something like this for our secondary disks:
# Mount the appropriate disk
sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/sdb /your_path
# Add the disk UUID to fstab
DISK=$(sudo blkid -s UUID -o value /dev/sdb)
echo "UUID=$DISK /your_path ext4 defaults 0 0" | sudo tee -a /etc/fstab