How to merge unallocated space in Raspberry Pi?

I have a 300 GB external drive connected to a Raspberry Pi. I changed the file /boot/cmdline.txt so that the SD card is used only to boot the system. The root partition / is therefore located on /dev/sda2 (the external drive). How can I increase the size of the root partition? I want to merge all of the unallocated space into /dev/sda2.
I tried using GParted, but it requires the partition to be unmounted before merging:
sudo umount /dev/sda2
umount: /: device is busy.
Gparted Image:

I used fdisk on Ubuntu to achieve something similar. Raspbian is very similar to Ubuntu, so the same approach should work.
You have to actually remove the original partition (and the extended partition, if there is one) before creating a new, larger partition. This doesn't lose any data as long as the new partition starts at the same sector as the old one and you don't write anything else to the disk in between. Still, I would take a backup. :)
I wrote a wiki on it here: http://headstation.com/archives/resize-ubuntu-filesystem/
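For illustration, a rough sketch of that fdisk approach against this question's layout. It assumes /dev/sda2 is the last partition on the disk, there is no extended partition involved, and a backup has been taken; the steps shown at the fdisk prompt are typed interactively.
sudo fdisk /dev/sda        # interactive; the lines below are typed at the fdisk prompt
#   p            - print the table and note the start sector of partition 2
#   d, 2         - delete partition 2 (the data on disk is untouched)
#   n, p, 2      - recreate it with the SAME start sector and the default (maximum) end
#                  (if fdisk asks about removing an existing ext4 signature, answer No)
#   w            - write the new table
sudo partprobe /dev/sda    # re-read the partition table (reboot if the kernel refuses)
sudo resize2fs /dev/sda2   # grow the ext4 filesystem to fill the enlarged partition
Note that resize2fs can grow a mounted ext4 filesystem online, so the "device is busy" problem from umount does not apply to this last step.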

Truncated file in XFS filesystem using dd - how to recover

Disk layout
My HDD RAID array is reaching the end of its life, and I bought some new disks to replace it.
I used the old array as storage for raw disk images for KVM/QEMU virtual machines.
The RAID array was built with mdadm. On the md device there is an LVM physical volume, and on that physical volume an XFS file system which stores the raw disk images.
Every raw disk image was made with qemu-img and contains an LVM physical volume: one PV = one VG = one LV inside each raw disk image.
Action
When I tried to use cp to move the data, I ran into bad blocks and I/O errors on the RAID array, so I switched from cp to dd with the noerror,sync flags.
I ran dd if=/mnt/old/file.img of=/mnt/old/file.img bs=4k conv=noerror,sync — note that of= points back at the same file on the old array instead of at the new destination, which truncated the source file as soon as dd opened it for output.
Problem
Now the file /mnt/old/file.img has zero size on the XFS file system.
Is there a simple solution to recover it?
My sense is your RAID array has failed. You can see the RAID state with...
cat /proc/mdstat
Since you are seeing I/O errors, that is likely the source of your problem. The best path forward would be to make sector-level copies of each RAID member (or at a minimum the member(s) that are throwing I/O errors). See GNU ddrescue; it is designed to copy failing hard drives. Then perform the recovery work on the copies.
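A minimal ddrescue sketch, assuming the failing member is /dev/sdb and the copy goes to a file on healthy storage (device and paths are examples, not from the question):
ddrescue -d -r3 /dev/sdb /mnt/healthy/sdb.img /mnt/healthy/sdb.mapfile
# -d uses direct disc access for the input, -r3 retries bad sectors three times;
# the mapfile lets you stop and resume the copy safely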
Finally I found a solution, but it isn't very simple.
xfs_undelete did not fit my case because it does not support the B+tree extent storage format (V3) used for very big files.
The semi-manual procedure that eventually worked for me consists of these main steps:
Unmount the filesystem immediately and make a full partition backup to a file using dd
Inspect the XFS log entries for the truncated file
Manually revert the inode core header using xfs_db in expert mode (see the rough sketch after this answer)
NB: recovering the inode core does not mark the extents as allocated again, so if you try to copy data from the file with the recovered inode header in the usual way you will get an I/O error.
That is why I had to develop a Python script.
Use the script to extract the extent data from the inode's B+tree and write it to disk
I have published the recovery script under the LGPL license on GitHub.
P.S. Some data was lost because of corrupted inode B+tree extent records, but that data was not important to me.
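A rough sketch of the backup and xfs_db steps. The device, inode number, and field value below are placeholders; the exact fields to restore depend on what the XFS log shows for your file.
umount /mnt/old                              # stop all writes first
dd if=/dev/md0 of=/backup/md0.img bs=4M      # full partition backup before touching anything
xfs_db -x /dev/md0                           # -x enables expert (write) mode
# at the xfs_db prompt:
#   inode 133                 - select the damaged inode (placeholder number)
#   print                     - inspect fields such as core.size and core.nextents
#   write core.size 10737418240   - restore the original file size (placeholder value)
After the inode header is restored, a normal read will still fail as described above, which is why the published script walks the B+tree itself and copies the extent data out.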

CentOS Disk LVM Extension

I am having a problem with a VMware VM that has CentOS 7 installed on it.
The lsblk command gives something like the output below.
df -h gives this:
I am trying to extend the root LVM volume onto that space, but I have not been able to no matter what I tried.
I tried fdisk /dev/sda to create a new partition and extend the LVM onto it, but fdisk gets stuck after asking for the partition number.
Some other useful commands give these results just in case they are helpful.
Any help would be appreciated. Thanks in advance.
From your screenshot, your sda2 partition is 199G and sda1 takes 1G, so the total 200G of sda is already fully allocated; you cannot make a new partition on sda, and that is why fdisk gets stuck there.
To resolve your issue I can offer two options, but before doing anything please make sure you back up all your important data or VM files.
Option 1 (just my own thoughts, unverified):
From your vgs and pvs output, your sda2 contributes only <29G to the whole VG (centos), so a very simple way of extending your root is:
1) pvresize /dev/sda2
After it executes, run pvs to check whether the PV size increased; if not, stop here.
2) vgextend centos /dev/sda2
After it executes, check vgs to see whether the VG size increased; if so, go on to the next step.
3) lvextend -l +100%FREE /dev/mapper/centos-root
After this, check lvs; the LV should now be larger, but if the mounted root filesystem still shows the old size, go on.
4) Grow the filesystem with whichever tool matches it:
xfs_growfs /dev/mapper/centos-root
or
resize2fs /dev/mapper/centos-root
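To decide which of those two grow commands applies, it can help to check the filesystem type first (a quick check of my own, not part of the original answer; on a default CentOS 7 install the root filesystem is XFS):
df -Th /                    # the Type column shows xfs or ext4
# xfs  -> xfs_growfs /dev/mapper/centos-root
# ext4 -> resize2fs /dev/mapper/centos-root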
Option 2: the best practice is to use pvmove; this is strongly recommended for a production environment. You can learn from:
https://askubuntu.com/questions/161279/how-do-i-move-my-lvm-250-gb-root-partition-to-a-new-120gb-hard-disk
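A rough sketch of that approach, assuming a new, larger virtual disk has been added to the VM and shows up as /dev/sdb (the device name is an assumption):
pvcreate /dev/sdb                          # initialise the new disk as an LVM physical volume
vgextend centos /dev/sdb                   # add it to the existing volume group
pvmove /dev/sda2 /dev/sdb                  # migrate all extents off the old PV (can run while mounted)
vgreduce centos /dev/sda2                  # optionally drop the old PV from the VG afterwards
lvextend -l +100%FREE /dev/mapper/centos-root
xfs_growfs /dev/mapper/centos-root         # or resize2fs for ext4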
I executed the pvresize /dev/sda2 command, and afterwards pvs showed the output below.
After this, I tried to execute vgextend centos /dev/sda2 but got this error.
However, the vgs and vgdisplay centos commands now give something different than before, as shown below.

HowTo zdb -e poolname to recover data from a single ZFS device

I have the following situation:
1 x 10 TB drive, full of data, on ZFS
I wanted to add a 100 GB NVMe partition as a cache
Instead of using zpool add poolname cache nvmepartition, I wrote zpool add poolname nvmepartition
I did not notice my mistake and exported this pool.
The NVMe drive is no longer available, and the system has no information about this pool in the ZFS cache (due to the export).
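For clarity, the difference between the intended command and the one actually run (the partition name is a placeholder):
zpool add poolname cache /dev/nvme0n1p4    # intended: attach the partition as an L2ARC cache device
zpool add poolname /dev/nvme0n1p4          # actually run: adds it as a new top-level data vdev, striped with the 10 TB drive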
current status:
zpool import shows the pool, but I cannot import it using any method I have found on the internet.
zdb -e poolname shows me what I already know: the pool, its name, that it (sadly) has two children, one of which is no longer available, and that the system has no information about the missing child (so all the tricks I found on the internet about linking in a ghost device etc. won't work either).
As far as I know, the only way left is to use zdb to generate all the files via the "journal" and pipe/save them to another path.
But how? I have found no documentation on that anywhere.
Note: the 10 TB drive was 90% full when I added the NVMe partition as a sibling vdev. Since ZFS does not stripe like a real RAID 0, the two vdevs were very unequal in size, and I wrote hardly any data after the mistake happened, I am quite sure that most of my data is still there.

How to increase the boot partition of hdd.img in Yocto

I created an hdd.img using Yocto and copied it to a pendrive using the dd command.
On my PC it is mounted as a single partition, "boot".
When I checked on the command line using sudo fdisk -l, it showed four partitions.
I booted my hardware from the pendrive; now I want to remove rootfs.img from the boot partition mounted at /media/realroot and copy the same rootfs.img back to the same partition.
It gives a "No disk space" error, even though I am copying back the same rootfs.img I just deleted.
Why is it using only 700 MB for the boot partition of a 16 GB pendrive, and how can I increase the boot partition size in Yocto to make use of the full 16 GB?
I'm not sure I understood the real problem but if you are looking for a way to increase the free space on the rootfs, take a look at IMAGE_ROOTFS_EXTRA_SPACE as well as IMAGE_OVERHEAD_FACTOR.
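A minimal sketch of how those variables might be set in conf/local.conf or the image recipe (the values are examples, not recommendations):
# Scale the computed rootfs size by 50% instead of the default 30% overhead
IMAGE_OVERHEAD_FACTOR = "1.5"
# Add a fixed amount of extra free space, in KiB (here roughly 1 GiB)
IMAGE_ROOTFS_EXTRA_SPACE = "1048576"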

Flashing an internal SD card that is mounted as root

I am working on an embedded linux device that has an internal SD card. This device needs to be updatable without opening the device and taking out the SD card. The goal is to allow users to update their device with a USB flash drive. I would like to completely overwrite the internal SD card with a new SD card image.
My first thought was to unmount the root filesystem and use something to the effect of:
dd if=/mnt/flashdrive/update.img of=/dev/sdcard
However, it appears difficult to actually unmount a root filesystem correctly, as processes like login and systemd are still using resources on root. As soon as you kill login, for example, the update script is killed as well.
Of course, we could always use dd without unmounting root. This seems rather foolish, however. :P
I was also thinking of modifying the system init script to perform this logic before the system actually mounts the root filesystem.
Is there a correct/easy way to perform this type of update? I would imagine it has been done before.
Thank you!
Re-imaging a mounted file system doesn't sound like a good idea, even if the mount is read-only.
Consider:
Use a ramdisk (initialized from a compressed image) as your actual root filesystem, and keep all but the most essential tools in file systems mounted beneath it, which you can unmount to upgrade. Most Linux implementations do this early in their boot process anyway, before they mount the main disk filesystems; rebooting to do the upgrade may be an option.
SD cards are likely larger than you need anyway. Have two partitions and alternate between them each time you upgrade (see the sketch after this list). Or have a maintenance partition which you boot into to perform upgrades/recovery.
Don't actually image the file system, but instead upgrade individual files.
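A rough sketch of the two-partition (A/B) idea from the second bullet, with device names that are purely illustrative:
# Assume the system currently runs from /dev/mmcblk0p2 and /dev/mmcblk0p3 is the spare slot
dd if=/mnt/flashdrive/update.img of=/dev/mmcblk0p3 bs=4M conv=fsync
# then point the bootloader at partition 3 (how depends on your bootloader) and reboot;
# the next update writes to partition 2, and so on, so the running root is never overwritten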
Try one or both of:
Bring down to single user mode first: telinit 1
or/and
Remount / as readonly: mount -o remount,ro /
before running the dd
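Put together, the suggestion looks roughly like this (update.img and /dev/sdcard are the placeholder names from the question):
telinit 1                          # drop to single-user mode to stop most services
mount -o remount,ro /              # make the root filesystem read-only
dd if=/mnt/flashdrive/update.img of=/dev/sdcard bs=4M conv=fsync
sync
reboot -f                          # reboot without touching the now-overwritten root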
Personally I would never do it the way you describe, but it is possible.
Your Linux system does something similar every time it boots: the kernel initially mounts the initrd, loads all the modules, and after that calls pivot_root to mount the real /.
pivot_root is also a command that can be used from a shell; you'd better read man 8 pivot_root, but just to give you an idea, you can do something like this:
mount /dev/hda1 /new-root                         # mount the new root filesystem
cd /new-root
pivot_root . old-root                             # old-root must already exist as a directory on the new root
exec chroot . sh <dev/console >dev/console 2>&1   # replace the current shell with one running on the new root
umount /old-root                                  # the old root can now be detached
One last thing: this way of performing a software upgrade is extremely fragile. Please consider other solutions.