Auto-mount a Google Compute Engine disk in CentOS (using /etc/fstab)

I'm currently setting up a server on Google Compute Engine, and everything works fine except that I'm having trouble getting the secondary disk to mount automatically.
I'm running CentOS 6 / CloudLinux 6. I can mount the secondary disk without problems after boot with the following command:
mount /dev/sdb /data
Please find below the contents of /etc/fstab:
UUID=6c782ac9-0050-4837-b65a-77cb8a390772 / ext4 defaults,barrier=0 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
UUID=10417d68-7e4f-4d00-aa55-4dfbb2af8332 / ext4 default 0 0
Output of df -h (after manual mount):
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.8G 1.2G 8.2G 13% /
tmpfs 3.6G 0 3.6G 0% /dev/shm
/dev/sdb 99G 60M 94G 1% /data
Thank you already in advance,
~ Luc

Our provisioning scripts look something like this for our secondary disks:
# Mount the appropriate disk
sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/sdb /your_path
# Add the disk's UUID to fstab
DISK=$(sudo blkid -s UUID -o value /dev/sdb)
echo "UUID=$DISK /your_path ext4 defaults 0 0" | sudo tee -a /etc/fstab

Related

MySQL pod on minikube failing (mkdir: cannot create directory '/bitnami/mysql/data': No space left on device)

Friends,
I'm trying to deploy a MySQL cluster on minikube using the Bitnami Helm chart. It's apparently not working because of a lack of space, since I'm getting the following error: mkdir: cannot create directory '/bitnami/mysql/data': No space left on device.
I am running minikube (version v1.15.0) on macOS with 500 GB of storage, more than half of which is still free. Any ideas about how I could solve this problem?
I SSH'd into the minikube environment and ran df -h. This is the result:
$ df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.4G 487M 3.0G 15% /
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 18M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 1.9G 176K 1.9G 1% /tmp
/dev/vda1 17G 16G 0 100% /mnt/vda1
It seems minikube really is out of space. What can be done in this case?
Here are the complete logs of my pod:
mysql 17:08:49.22 Welcome to the Bitnami mysql container
mysql 17:08:49.22 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mysql
mysql 17:08:49.22 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mysql/issues
mysql 17:08:49.23
mysql 17:08:49.23 INFO ==> ** Starting MySQL setup **
mysql 17:08:49.24 INFO ==> Validating settings in MYSQL_*/MARIADB_* env vars
mysql 17:08:49.24 INFO ==> Initializing mysql database
mkdir: cannot create directory '/bitnami/mysql/data': No space left on device
Run:
minikube stop && minikube delete
then:
minikube start --disk-size 50000mb
Since you are using minikube on macOS, minikube runs virtualized. Thus, even if you have free space on your device as a whole, you need to allocate more space to the VM, as Abhijit previously mentioned.
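If you'd rather make the larger disk the default for future VMs, minikube can also persist the setting in its config; a sketch, noting that the exact value format may vary by minikube version:
# Persist a larger disk size for newly created minikube VMs
minikube config set disk-size 50g
# The new size only applies to a freshly created VM
minikube delete
minikube start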

Disk usage in a Kubernetes pod

I am trying to debug the storage usage in my Kubernetes pod. I have seen that the pod was evicted because of disk pressure. When I log in to the running pod, I see the following:
Filesystem Size Used Avail Use% Mounted on
overlay 30G 21G 8.8G 70% /
tmpfs 64M 0 64M 0% /dev
tmpfs 14G 0 14G 0% /sys/fs/cgroup
/dev/sda1 30G 21G 8.8G 70% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 14G 12K 14G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 14G 0 14G 0% /proc/acpi
tmpfs 14G 0 14G 0% /proc/scsi
tmpfs 14G 0 14G 0% /sys/firmware
root@deploy-9f45856c7-wx9hj:/# du -sh /
du: cannot access '/proc/1142/task/1142/fd/3': No such file or directory
du: cannot access '/proc/1142/task/1142/fdinfo/3': No such file or directory
du: cannot access '/proc/1142/fd/4': No such file or directory
du: cannot access '/proc/1142/fdinfo/4': No such file or directory
227M /
root@deploy-9f45856c7-wx9hj:/# du -sh /tmp
11M /tmp
root@deploy-9f45856c7-wx9hj:/# du -sh /dev
0 /dev
root@deploy-9f45856c7-wx9hj:/# du -sh /sys
0 /sys
root@deploy-9f45856c7-wx9hj:/# du -sh /etc
1.5M /etc
root@deploy-9f45856c7-wx9hj:/#
As we can see, 21G is consumed, but when I run du -sh / it returns just 227M. I would like to find out which directory is consuming the space.
According to the Node Conditions docs, DiskPressure relates to conditions on the node that cause the kubelet to evict pods. It doesn't necessarily mean it's the pod that caused those conditions.
DiskPressure
Available disk space and inodes on either the node’s root filesystem
or image filesystem has satisfied an eviction threshold
You may want to investigate what's happening on the node instead.
It looks like process 1142 is still running and holding file descriptors, and perhaps some disk space (you may have other processes with unreleased file descriptors too). Is it the kubelet? To alleviate the problem, you can verify that the process is running and then kill it:
$ ps -Af | grep 1142
$ kill -9 1142
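If the space is held by files that were deleted while still open (a common reason for df reporting far more usage than du), lsof can list them; a quick check, assuming lsof is available on the node:
$ lsof +L1
This lists open files whose link count is 0, i.e. files that have been deleted but are still held open by some process.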
P.S. You may need to provide more information about the processes and what's running on that node.

How to attach an extra volume on a CentOS 7 server

I have created an additional volume on my server.
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 19G 3.4G 15G 19% /
devtmpfs 874M 0 874M 0% /dev
tmpfs 896M 0 896M 0% /dev/shm
tmpfs 896M 17M 879M 2% /run
tmpfs 896M 0 896M 0% /sys/fs/cgroup
tmpfs 180M 0 180M 0% /run/user/0
/dev/sdb 25G 44M 24G 1% /mnt/HC_Volume_1788024
How can I attach /dev/sdb either to the whole server (I mean, merge it with /dev/sda1) or assign it to a specific directory on the server, /var/lib, without overwriting the current /var/lib...
You will not be able to "merge", as you are using standard sdX devices and not something like LVM for your file systems.
As root, you can manually run:
mount /dev/sdb /var/lib/
The original content of /var/lib will still be there, hidden by the new mount and still taking up space on your / filesystem.
To make this permanent, (carefully) edit your /etc/fstab and add a line like:
/dev/sdb /var/lib FILESYSTEM_OF_YOUR_SDB_DISK defaults 0 0
You will need to replace FILESYSTEM_OF_YOUR_SDB_DISK with the correct filesystem type (df -T, blkid, or lsblk -f will show the type).
You should test the correctness of your /etc/fstab by first unmounting (if you previously mounted):
umount /var/lib
Then run:
mount -a
df -T
You should see the correct mount point, and mount -a should not have produced any errors.
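If you want the existing /var/lib content to end up on the new disk rather than being hidden under the mount, one common approach is to copy it over first. A minimal sketch, assuming /dev/sdb is already formatted and any services writing to /var/lib are stopped:
# Temporarily mount the new disk and copy the existing data onto it
mount /dev/sdb /mnt
rsync -a /var/lib/ /mnt/
# Remount the disk at its final location
umount /mnt
mount /dev/sdb /var/lib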

CentOS: Add unpartitioned space to root

My main partition mounted on / is 14GB.
The same drive has an additional 47 GB of free, unpartitioned space.
How do I add the space to the root partition?
Thank you very much for your help.
df -h output:
[root@dev-dla /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 14G 13G 968M 94% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.5M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sdb1 99G 9.2G 85G 10% /data
/dev/sda1 497M 255M 243M 52% /boot
tmpfs 380M 0 380M 0% /run/user/1294246044
lsblk output:
[root@dev-dla /]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 60G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 15.5G 0 part
├─centos-root 253:0 0 13.9G 0 lvm /
└─centos-swap 253:1 0 1.6G 0 lvm [SWAP]
sdb 8:16 0 100G 0 disk
└─sdb1 8:17 0 100G 0 part /data
sr0 11:0 1 1024M 0 rom
parted print free output:
(parted) print free
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 64.4GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 525MB 524MB primary xfs boot
2 525MB 17.2GB 16.7GB primary lvm
17.2GB 64.4GB 47.2GB Free Space
Found the answer myself; it took a little while. The solution is as follows:
Create Partition:
[$]# fdisk /dev/sda
Command (m for help): n
Partition type:
p primary (2 primary, 0 extended, 2 free)
e extended
Select (default p): p
Partition number (3,4, default 3): 3
First sector (33554432-125829119, default 33554432):
Using default value 33554432
Last sector, +sectors or +size{K,M,G} (33554432-125829119, default 125829119):
Using default value 125829119
Partition 3 of type Linux and of size 44 GiB is set
Command (m for help): w
The partition table has been altered!
Initialize the new partition as an LVM physical volume (if the kernel did not pick up the new partition table after fdisk, run partprobe /dev/sda or reboot first):
[$]# pvcreate /dev/sda3
Extend the volume group:
[$]# vgextend centos /dev/sda3
Extend the logical volume:
[$]# lvextend /dev/centos/root /dev/sda3
Grow the XFS filesystem:
[$]# xfs_growfs /dev/centos/root
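As a side note, lvextend can extend the logical volume and grow the filesystem in one step with the -r/--resizefs flag, which calls the appropriate resize tool (xfs_growfs for XFS):
[$]# lvextend -r /dev/centos/root /dev/sda3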

Which device is a Docker container writing to?

I am trying to throttle the disk I/O of a Docker container using the blkio controller (without destroying the container), but I am unsure which device to apply the throttling to.
The Docker container is running Mongo. Running df -h inside a bash shell in the container gives the following:
root@82e7bdc56db0:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-202:1-524400-916a3c171357c4f0349e0145e34e7faf60720c66f9a68badcc09d05397190c64 10G 379M 9.7G 4% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/xvda1 32G 3.2G 27G 11% /data/db
shm 64M 0 64M 0% /dev/shm
Is there a way to find out which device to limit on the host machine? Thanks!
$ docker info
Containers: 9
Running: 9
Paused: 0
Stopped: 0
Images: 6
Server Version: 1.12.1
Storage Driver: devicemapper
Pool Name: docker-202:1-524400-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 1.694 GB
Data Space Total: 107.4 GB
Data Space Available: 30.31 GB
Metadata Space Used: 3.994 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.143 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.110 (2015-10-30)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-38-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.675 GiB
Name: ip-172-31-6-72
ID: 4RCS:IMKM:A5ZT:H5IA:6B4B:M3IG:XGWK:2223:UAZX:GHNA:FUST:E5XC
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8
df -h on the host machine:
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 377M 6.1M 371M 2% /run
/dev/xvda1 32G 3.2G 27G 11% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 377M 0 377M 0% /run/user/1001
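Judging from the docker info output above (Storage Driver: devicemapper, Data file: /dev/loop0, backed by /var/lib/docker/devicemapper/devicemapper/data), writes to the container's / go through the loopback-backed thin pool, which ultimately lives on whatever device holds /var/lib/docker; from the host df, that appears to be /dev/xvda1. You can confirm which files back the loop devices on the host with:
$ losetup -a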