CentOS Disk LVM Extension

I am having a problem with a VMware VM that has CentOS 7 installed on it.
The lsblk command gives something like below.
df -h gives this.
I am trying to extend the root LVM volume onto the partition, but I have not been able to do it no matter what I tried.
I tried fdisk /dev/sda to create a new partition and extend the LVM onto it, but fdisk gets stuck after asking for the partition number.
Some other useful commands give these results, in case they are helpful.
Any help would be appreciated. Thanks in advance.

From your screenshot, your sda2 partition is 199G and sda1 takes 1G, while your sda is 200G in total, so you cannot create a new partition on sda; that is why fdisk gets stuck there.
To resolve your issue I can offer two options for reference, but before doing anything please make sure you back up all your important data or VM files.
Option 1: this one is just my own thought, unverified.
From your vgs and pvs output you can see that sda2 only contributes <29G to the whole VG (centos), so a very simple way of extending your root is:
1) pvresize /dev/sda2
After it runs, please run pvs to check whether the PV size increased; if not, stop here.
2) vgextend centos /dev/sda2
After it runs, please check vgs to see whether the VG size increased; if so, go on to the next step.
3) lvextend -l +100%FREE /dev/mapper/centos-root
After this, check lvs; even if the LV grew, the filesystem still has to be grown before df shows the extra space, so go on.
4) Grow the filesystem with one of the following, depending on its type (see the check after these steps):
xfs_growfs /dev/mapper/centos-root
or
resize2fs /dev/mapper/centos-root
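To know which of the two applies, check the root filesystem type first (CentOS 7 defaults to XFS):
df -T /                  # or: lsblk -f /dev/mapper/centos-root
# xfs  -> use xfs_growfs /dev/mapper/centos-root
# ext4 -> use resize2fs /dev/mapper/centos-root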
Option 2: best practice is to use pvmove; this is strongly recommended for a production environment. You can learn from
https://askubuntu.com/questions/161279/how-do-i-move-my-lvm-250-gb-root-partition-to-a-new-120gb-hard-disk
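For reference, a rough sketch of what the pvmove route looks like, assuming you attach a second virtual disk to the VM and it shows up as /dev/sdb (the device name is hypothetical):
pvcreate /dev/sdb             # initialise the new disk as a physical volume
vgextend centos /dev/sdb      # add it to the existing volume group
pvmove /dev/sda2 /dev/sdb     # migrate all extents off the old PV (this can take a while)
vgreduce centos /dev/sda2     # remove the emptied PV from the VG
After that the root LV lives entirely on the new disk and can be extended there with lvextend, as in option 1.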

I executed the pvresize /dev/sda2 command and afterwards pvs looks like below.
After this, I tried to execute vgextend centos /dev/sda2 but I got this error.
However, the vgs and vgdisplay centos commands now give something different than before, as shown below.

Related

Adding OSDs to Ceph with WAL+DB

I'm new to Ceph and am setting up a small cluster. I've set up five nodes and can see the available drives, but I'm unsure exactly how to add an OSD and specify the locations for the WAL+DB.
Maybe my Google-fu is weak, but the only guides I can find refer to ceph-deploy, which, as far as I can see, is deprecated. The guides that mention cephadm only cover adding a drive, not specifying the WAL+DB locations.
I want to add HDDs as OSDs and put the WAL and DB onto separate LVs on an SSD. How?!
It seems that for the more advanced cases, like using a dedicated WAL and/or DB, you have to use the concept of drivegroups.
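For example, with cephadm such a layout can be described in an OSD service specification and applied with ceph orch. A hedged sketch, with field names as I recall them from the Octopus-era drivegroups documentation (double-check them against your release); rotational devices become data devices, the SSDs hold the DBs, and BlueStore places the WAL together with the DB when no separate WAL device is given:
cat > osd_spec.yml <<'EOF'
service_type: osd
service_id: hdd_data_ssd_db
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
EOF
ceph orch apply osd -i osd_spec.yml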
If your Ceph version is Octopus (in which ceph-deploy is deprecated), I suppose you could try this:
sudo ceph-volume lvm create --bluestore --data /dev/data-device --block.db /dev/db-device
I built Ceph from source, but I think this method should be supported, and you could try
ceph-volume lvm create --help
to see more parameters.
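If you want the DB (and optionally the WAL) on pre-created LVs of an SSD, as the question describes, ceph-volume also accepts vg/lv notation. A rough sketch with hypothetical device names (/dev/sdb as the data HDD, /dev/nvme0n1 as the SSD):
sudo vgcreate ceph-db /dev/nvme0n1        # a VG on the SSD for the DB/WAL volumes
sudo lvcreate -L 60G -n db-sdb ceph-db    # one DB LV per OSD; the size is only an example
sudo ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db/db-sdb
# add --block.wal <vg/lv> in the same way if the WAL should live on its own LV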

HowTo zdb -e poolname to recover data from a single ZFS device

I have the following situation:
1 x 10TB drive, full of data, on a ZFS pool
I wanted to add a 100GB NVMe partition as a cache.
Instead of using zpool add poolname cache nvmepartition, I wrote zpool add poolname nvmepartition.
I did not notice my mistake and exported this pool.
The NVMe drive is no longer available, and the system has no information about this pool in the ZFS cache (due to the export).
Current status:
zpool import shows the pool, but I cannot import the pool using any method I found on the internet.
zdb -e poolname shows me what I already know: the pool, its name, that it (sadly) has two children, one of which is no longer available, and that the system has no information about the missing child (so all the tricks I found on the internet involving linking in a ghost device etc. won't work either).
As far as I know, the only way is to use zdb to generate all the files from the "journal" and pipe/save them to another path.
But how? I have not found any documentation on that anywhere.
Note: the 10TB drive was 90% full when I added the NVMe partition as a sibling. Since ZFS is not a real RAID 0, and because the siblings were so unequal in size and I did not write much data after my mistake happened, I am quite sure that most of my data is still there.

How to merge unallocated space in Raspberry Pi?

I have a 300 GB external drive connected to a Raspberry Pi. I changed the file boot/cmdline.txt in order to use the SD card only to boot up the system. The /root partition is therefore located on /dev/sda2 (the external drive). How can I increase the size of the root partition? I want to merge all the unallocated space with /dev/sda2.
I tried by using GParted, but it is necessary to unmount before merging:
sudo umount /dev/sda2
umount: /: device is busy.
Gparted Image:
I used fdisk on Ubuntu to achieve something similar. Raspbian is very similar to Ubuntu and should work the same way.
You have to actually remove the original partition and the extended partition before creating a new, larger partition. This doesn't lose any data as long as you don't write the changes back to the disk. Still, I would take a backup. :)
I wrote a wiki on it here: http://headstation.com/archives/resize-ubuntu-filesystem/
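Roughly, the fdisk route looks like this; a sketch only, assuming the root partition is /dev/sda2 with an ext4 filesystem (the Raspbian default) and that the recreated partition starts at the same sector as the old one:
sudo fdisk /dev/sda       # p: note the start sector of sda2; d: delete partition 2;
                          # n: recreate it with the same start sector and a larger end; w: write
sudo partprobe /dev/sda   # or reboot so the kernel re-reads the partition table
sudo resize2fs /dev/sda2  # grow the ext4 filesystem to fill the enlarged partition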

Lustre hangs while mounting the OSS

I have installed the parallel file system Lustre from RPMs, following this slide.
I have set up nodes A and B.
I installed the MDS and MDT on node A; mounting them was successful.
But after formatting the OSS on node B using mkfs.lustre, I mounted it and it started waiting indefinitely.
And it reports this error every 120 seconds.
INFO: task mount.lustre:1541 blocked for more than 120 seconds.
Not tainted 2.6.32-504.8.1.el6_lustre.x86_64 #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Why does this occur? Or can you point me to a better tutorial or share your experience? The Lustre version is 2.7.0.
Thanks a lot.
It is an informational message. As mentioned in the message, you can echo 0 to hung_task_timeout_secs to stop it from showing up, but I still would not recommend that.
Try lowering the thresholds at which the cache is flushed to disk by setting vm.dirty_ratio=5 and vm.dirty_background_ratio=5 in /etc/sysctl.conf. Activate them with the sysctl -p command; there is no need to reboot the system.
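A minimal sketch of that tuning, using the values suggested above:
cat >> /etc/sysctl.conf <<'EOF'
vm.dirty_ratio = 5
vm.dirty_background_ratio = 5
EOF
sysctl -p    # apply the new settings without a reboot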

Flashing an internal SD card that is mounted as root

I am working on an embedded linux device that has an internal SD card. This device needs to be updatable without opening the device and taking out the SD card. The goal is to allow users to update their device with a USB flash drive. I would like to completely overwrite the internal SD card with a new SD card image.
My first thought was to unmount the root filesystem and use something to the effect of:
dd if=/mnt/flashdrive/update.img of=/dev/sdcard
However, it appears difficult to actually unmount a root filesystem correctly, as processes like "login" and "systemd" are still using resources on root. As soon as you kill login, for example, the update script is killed as well.
Of course, we could always use dd without unmounting root. That seems rather foolish, however. :P
I was also thinking of modifying the system init script to perform this logic before the system actually mounts the root filesystem.
Is there a correct/easy way to perform this type of update? I would imagine it has been done before.
Thank you!
Re-imaging a mounted file system doesn't sound like a good idea, even if the mount is read-only.
Consider:
Use a ramdisk (initialized from a compressed image) as your actual root filesystem, but perhaps keep all but the most essential tools in filesystems mounted beneath it, which you can drop for the upgrade. Most Linux implementations do this early in their boot process anyway, before they mount the main disk filesystems, so rebooting to do the upgrade may be an option.
SD cards are likely larger than you need anyway. Have two partitions and alternate between them each time you upgrade (see the sketch after this list), or have a maintenance partition which you boot into to perform upgrades/recovery.
Don't actually image the file system, but instead upgrade individual files.
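A very rough sketch of the two-partition idea, with hypothetical partition names (/dev/mmcblk0p2 as the currently active root, /dev/mmcblk0p3 as the spare):
dd if=/mnt/flashdrive/update.img of=/dev/mmcblk0p3 bs=4M conv=fsync   # write the new image to the inactive partition
sync
# then point the bootloader's root= argument at /dev/mmcblk0p3 and reboot;
# how the boot arguments are switched depends on the bootloader in use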
Try one or both of the following:
Bring the system down to single-user mode first: telinit 1
and/or
Remount / as read-only: mount -o remount,ro /
before running the dd.
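Put together, that could look something like this; a sketch only, with hypothetical paths (the internal SD card appearing as /dev/mmcblk0 and the image on the already-mounted USB stick):
telinit 1                   # single-user mode, so as few processes as possible hold the root fs
mount -o remount,ro /       # stop anything from writing to the disk about to be overwritten
dd if=/mnt/flashdrive/update.img of=/dev/mmcblk0 bs=4M conv=fsync
sync
reboot -f                   # force the reboot; a normal shutdown would try to touch the overwritten filesystem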
Personally I would never do something like this, but it is possible.
Your Linux system does it every time it boots. In fact, what happens is that your kernel initially mounts the initrd, loads all the modules, and after that calls pivot_root to mount the real /.
pivot_root is also a command that can be used from the shell; you had better run man 8 pivot_root, but just to give you an idea, you can do something like this:
mount /dev/hda1 /new-root                           # mount the filesystem that will become the new root
cd /new-root
pivot_root . old-root                               # make the current directory the new /, moving the old root to /old-root
exec chroot . sh <dev/console >dev/console 2>&1     # replace the current shell with one rooted in the new /
umount /old-root                                    # finally detach the old root
One last thing: this way of performing a software upgrade is extremely fragile. Please consider other solutions.