CentOS: Add unpartitioned space to root

My main partition mounted on / is 14GB.
The same drive has an additional 47 GB of free, unpartitioned space.
How do I add the space to the root partition?
Thank you very much for your help.
df -h output:
[root@dev-dla /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 14G 13G 968M 94% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.5M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sdb1 99G 9.2G 85G 10% /data
/dev/sda1 497M 255M 243M 52% /boot
tmpfs 380M 0 380M 0% /run/user/1294246044
lsblk output:
[root@dev-dla /]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 60G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 15.5G 0 part
├─centos-root 253:0 0 13.9G 0 lvm /
└─centos-swap 253:1 0 1.6G 0 lvm [SWAP]
sdb 8:16 0 100G 0 disk
└─sdb1 8:17 0 100G 0 part /data
sr0 11:0 1 1024M 0 rom
parted print free output:
(parted) print free
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 64.4GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 525MB 524MB primary xfs boot
2 525MB 17.2GB 16.7GB primary lvm
17.2GB 64.4GB 47.2GB Free Space

Found the answer myself; it took a little while. The solution is as follows:
Create the new partition:
[$]# fdisk /dev/sda
Command (m for help): n
Partition type:
p primary (2 primary, 0 extended, 2 free)
e extended
Select (default p): p
Partition number (3,4, default 3): 3
First sector (33554432-125829119, default 33554432):
Using default value 33554432
Last sector, +sectors or +size{K,M,G} (33554432-125829119, default 125829119):
Using default value 125829119
Partition 3 of type Linux and of size 44 GiB is set
Command (m for help): w
The partition table has been altered!
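One note not in the original answer: on a disk that is already in use, the kernel may keep the old partition table until it is re-read, so /dev/sda3 may not appear immediately. A quick check and re-read might look like this (partprobe ships with parted on CentOS 7):
lsblk /dev/sda      # if sda3 is not listed yet, re-read the partition table
partprobe /dev/sda  # or reboot if the table cannot be re-read while the disk is busy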
Initialize the partition as an LVM physical volume:
[$]# pvcreate /dev/sda3
Extend the volume group:
[$]# vgextend centos /dev/sda3
Extend the logical volume:
[$]# lvextend /dev/centos/root /dev/sda3
Grow the filesystem:
[$]# xfs_growfs /dev/centos/root
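For completeness, a sketch of how each step could be verified afterwards (standard LVM/XFS tools; exact sizes and output will vary per system). An equivalent, commonly used variant of the extend step is lvextend -l +100%FREE /dev/centos/root.
pvs          # /dev/sda3 should now appear as a physical volume in VG "centos"
vgs centos   # VFree grows by roughly 44 GiB until the lvextend consumes it
lvs centos   # the root LV should end up at roughly 58 GiB
df -h /      # the mounted filesystem reflects the new size only after xfs_growfs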

Related

Why does it show I have no disk space left while I still have a lot of space available?

I am on a CentOS system, and df shows that I have a lot of disk space available:
See this command:
$ git pull
fatal: write error: No space left on device
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 30G 4.2G 24G 15% /
devtmpfs 63G 0 63G 0% /dev
tmpfs 63G 0 63G 0% /dev/shm
tmpfs 63G 435M 63G 1% /run
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/sda2 30G 28G 0 100% /usr
/dev/sda7 148G 24G 118G 17% /data0
/dev/sda6 30G 1.3G 27G 5% /var
/dev/sda5 30G 45M 28G 1% /tmp
/dev/sdc1 3.9T 462G 3.3T 13% /data1
/dev/sdb1 274G 107G 154G 42% /data2
tmpfs 13G 0 13G 0% /run/user/60422
And I am currently running the git pull command under /data1, which has 87% of its space free.
Why is that?
EDIT:
df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 1.9M 14K 1.9M 1% /
devtmpfs 16M 610 16M 1% /dev
tmpfs 16M 1 16M 1% /dev/shm
tmpfs 16M 1022 16M 1% /run
tmpfs 16M 16 16M 1% /sys/fs/cgroup
/dev/sda2 1.9M 344K 1.6M 18% /usr
/dev/sda7 9.5M 58K 9.4M 1% /data0
/dev/sda6 1.9M 14K 1.9M 1% /var
/dev/sda5 1.9M 35 1.9M 1% /tmp
/dev/sdc1 251M 160K 251M 1% /data1
/dev/sdb1 18M 1.2K 18M 1% /data2
tmpfs 16M 1 16M 1% /run/user/60422
Maybe you are running out of inodes? Check with df -ih.
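A quick way to narrow this down is to check which filesystem the directory you are writing to actually resolves to, and then look at both blocks and inodes for that filesystem. A minimal sketch (the repository path is a placeholder):
cd /data1/your-repo   # hypothetical path to the repository being pulled
df -h .               # free blocks on the filesystem this directory really lives on
df -ih .              # free inodes on the same filesystem
findmnt -T .          # (if available) shows the mount point and device backing this directory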

Is it possible to decrease swap partition space in CentOS 7?

Some of my friends told me that a large swap partition is very bad for a server that should be able to handle thousands of web hits per minute.
The swap space is 16 GB, and I installed CentOS 7 with CWP (Control Web Panel) with CSF enabled.
Should I consider decreasing the swap partition space, and is that possible without reformatting the server? If so, how? Or is there a way to keep this space without it harming the server?
[root@server ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 201M 16G 2% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda2 1.8T 256G 1.5T 15% /
/dev/sdb1 1.8T 275G 1.5T 16% /backup
/dev/sda5 16G 83M 15G 1% /tmp
/dev/sda1 969M 187M 716M 21% /boot
tmpfs 3.2G 0 3.2G 0% /run/user/0
tmpfs 3.2G 0 3.2G 0% /run/user/1075
[root@server ~]#
I found that a swap file is more effective and easier to set up in my case, so I disabled the swap partition and created a swap file instead.
Here is how you do it exactly.
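Since that last sentence only points at an external write-up, here is a minimal sketch of the usual switch from a swap partition to a swap file on CentOS 7 (the 16 GiB size and the /swapfile path are assumptions; check which device your swap partition is and remove its /etc/fstab line before disabling it):
swapoff -a                                        # stop using the existing swap partition
# remove or comment out the swap partition's line in /etc/fstab so it stays off after reboot
dd if=/dev/zero of=/swapfile bs=1M count=16384    # create a 16 GiB file (fallocate also works)
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo "/swapfile swap swap defaults 0 0" >> /etc/fstab
swapon -s                                         # verify the swap file is active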

Which device is the Docker container writing to?

I am trying to throttle the disk I/O of a Docker container using the blkio controller (without destroying the container), but I am unsure how to find out which device to run the throttling on.
The Docker container is running Mongo. Running df -h inside a bash shell in the container gives the following:
root@82e7bdc56db0:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-202:1-524400-916a3c171357c4f0349e0145e34e7faf60720c66f9a68badcc09d05397190c64 10G 379M 9.7G 4% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/xvda1 32G 3.2G 27G 11% /data/db
shm 64M 0 64M 0% /dev/shm
Is there a way to find out which device to limit on the host machine? Thanks!
$ docker info
Containers: 9
Running: 9
Paused: 0
Stopped: 0
Images: 6
Server Version: 1.12.1
Storage Driver: devicemapper
Pool Name: docker-202:1-524400-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 1.694 GB
Data Space Total: 107.4 GB
Data Space Available: 30.31 GB
Metadata Space Used: 3.994 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.143 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.110 (2015-10-30)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-38-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.675 GiB
Name: ip-172-31-6-72
ID: 4RCS:IMKM:A5ZT:H5IA:6B4B:M3IG:XGWK:2223:UAZX:GHNA:FUST:E5XC
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8
df -h on the host machine:
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 377M 6.1M 371M 2% /run
/dev/xvda1 32G 3.2G 27G 11% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 377M 0 377M 0% /run/user/1001
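No accepted answer is shown here, but based on the docker info and df output above, one way to trace the I/O path might look like the sketch below (assumptions: the pool is loopback-backed as the warning says, and the 202:1 embedded in the pool name docker-202:1-524400-pool is the major:minor of the device holding /var/lib/docker):
ls -l /dev/xvda1        # major 202, minor 1 on Xen guests, matching the pool name
losetup -a              # /dev/loop0 and /dev/loop1 are backed by files under /var/lib/docker
df -h /var/lib/docker   # those backing files live on /dev/xvda1, so container writes end up there
# with the cgroupfs driver, a cgroup-v1 blkio throttle for the container could then target 202:1, e.g.:
# echo "202:1 10485760" > /sys/fs/cgroup/blkio/docker/<container-id>/blkio.throttle.write_bps_device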

Apache Kafka - Uneven load across disks and brokers

I have a 2-broker Kafka setup running on EC2, each broker with 4x 4GB GP2 SSDs; the topic has 6 partitions and 1 replica. The drives are mounted and I have set them up in server.properties. But when I was load testing my system and watching what was happening with the drives, 1 of the 4 drives on broker 1 had stored most of the data. Here is an example of what I got:
Broker 1: (NOTE: I manually reproduced the figures for mount /a for this post)
Filesystem Size Used Avail Use% Mounted on
udev 16G 12K 16G 1% /dev
tmpfs 3.2G 344K 3.2G 1% /run
/dev/xvda1 7.8G 1.3G 6.1G 17% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 16G 0 16G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/xvdg 3.9G 8.0M 3.6G 1% /b
/dev/xvdf 3.9G 600M 3.2G 17% /a
/dev/xvdh 3.9G 8.0M 3.6G 1% /c
/dev/xvdi 3.9G 8.0M 3.6G 1% /d
Broker 2:
Filesystem Size Used Avail Use% Mounted on
udev 16G 12K 16G 1% /dev
tmpfs 3.2G 344K 3.2G 1% /run
/dev/xvda1 7.8G 1.3G 6.1G 17% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 16G 0 16G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/xvdg 3.9G 8.0M 3.6G 1% /b
/dev/xvdf 3.9G 8.0M 3.6G 1% /a
/dev/xvdh 3.9G 8.0M 3.6G 1% /c
/dev/xvdi 3.9G 8.0M 3.6G 1% /d
Can someone explain what is happening and whether I have set something up wrong? I thought the data was supposed to be approximately even across all drives.
When you send load over Kafka, the producer uses a Partitioner implementation over the set of keys being sent, in order to work out which partition to write the message into. The default Partitioner implementation uses a hashing function. If you send all of your messages with the same key, then they will all hash into the same partition. The same can be true of a small set of keys - hashing often produces uneven distributions.
Your best bet is to use a larger key set, or configure the producer with a Partitioner that performs a more even distribution of messages - via round-robin for example. Whether this is something you want to do depends on whether you have a requirement to ensure that some messages are processed in order, in which case you should ensure that related messages use the same key, and take this into account in your Partitioner.
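One way to confirm this on the broker is to compare how much data each partition directory holds under each log directory (a sketch; the log.dirs paths and the topic name are assumptions, adjust them to your server.properties):
# on broker 1, assuming log.dirs=/a/kafka,/b/kafka,/c/kafka,/d/kafka and a topic named "mytopic"
du -sh /a/kafka/mytopic-* /b/kafka/mytopic-* /c/kafka/mytopic-* /d/kafka/mytopic-*
# if one or two partition directories dominate, the producer keys are hashing into few partitions;
# partition-to-broker placement itself can be checked with:
kafka-topics.sh --describe --zookeeper localhost:2181 --topic mytopic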

Auto-mount a Google Cloud Compute disk in CentOS (using /etc/fstab)

I'm currently setting up a server at Google Cloud Compute; everything works fine except that I'm having problems automatically mounting the secondary disk on the system.
I'm running CentOS 6 / CloudLinux 6. I can mount the secondary disk without problems after boot with the following command:
mount /dev/sdb /data
Please find below the contents of /etc/fstab:
UUID=6c782ac9-0050-4837-b65a-77cb8a390772 / ext4 defaults,barrier=0 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
UUID=10417d68-7e4f-4d00-aa55-4dfbb2af8332 / ext4 default 0 0
Output of df -h (after a manual mount):
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.8G 1.2G 8.2G 13% /
tmpfs 3.6G 0 3.6G 0% /dev/shm
/dev/sdb 99G 60M 94G 1% /data
Thank you already in advance,
~ Luc
Our provisioning scripts look something like this for our secondary disks:
# Mount the appropriate disk
sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/sdb /your_path
# Add disk UID to fstab
DISK=$(sudo blkid -s UUID -o value /dev/sdb)
echo "UUID=$DISK /your_path ext4 defaults 0 0" | sudo tee -a /etc/fstab
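Applied to the question's setup, and assuming the second UUID already present in the pasted fstab does belong to /dev/sdb, the corrected entry would look something like this (mount point /data instead of a second /, and "defaults" rather than "default"):
UUID=10417d68-7e4f-4d00-aa55-4dfbb2af8332 /data ext4 defaults 0 0
# then test without rebooting:
sudo mount -a
df -h /data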