Yocto separate home partition; best practice for fstab generation

I've started experimenting with Yocto, starting from my evaluation kit manufacturer's repository. Since I want to have a read-only root, I want to move /home to a separate read/write partition.
I can create the partitions on the SD image with a custom .wks.in file:
part u-boot --source rawcopy --sourceparams="file=imx-boot" --ondisk mmcblk --no-table --align ${IMX_BOOT_SEEK}
part / --source rootfs --ondisk mmcblk --fstype=ext4 --label root --align 8192 --fixed-size=4096M --exclude-path=home/
part /home --source rootfs --rootfs-dir=${IMAGE_ROOTFS}/home --ondisk mmcblk --fstype=ext4 --label home --align 8192 --fixed-size=2048M
bootloader --ptable msdos
But the home partition is not used as /home; it is mounted as a normal data partition under /run/media.
root@imx8mp-var-dart:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
mmcblk2 179:0 0 14.6G 0 disk
`-mmcblk2p1 179:1 0 14.6G 0 part /run/media/mmcblk2p1
mmcblk2boot0 179:32 0 4M 1 disk
mmcblk2boot1 179:64 0 4M 1 disk
mmcblk1 179:96 0 14.8G 0 disk
|-mmcblk1p1 179:97 0 4G 0 part /
`-mmcblk1p2 179:98 0 4G 0 part /run/media/mmcblk1p2
The default fstab (shipped by the base-files recipe in poky) doesn't have the new home entry, so that's the expected behavior.
# stock fstab - you probably want to override this with a machine specific one
/dev/root / auto defaults 1 1
proc /proc proc defaults 0 0
devpts /dev/pts devpts mode=0620,ptmxmode=0666,gid=5 0 0
tmpfs /run tmpfs mode=0755,nodev,nosuid,strictatime 0 0
tmpfs /var/volatile tmpfs defaults 0 0
# uncomment this if your device has a SD/MMC/Transflash slot
#/dev/mmcblk0p1 /media/card auto defaults,sync,noauto 0 0
I've borrowed the .wks.in configuration from another thread. A comment in that thread states that "WIC will automatically add an entry in fstab", but that doesn't seem to be the case...
How can I get Yocto/WIC to populate fstab correctly, so that the home partition is actually mounted as /home?
Another approach is to add a manually generated fstab in a custom layer.
/dev/root / auto defaults 1 1
proc /proc proc defaults 0 0
devpts /dev/pts devpts mode=0620,ptmxmode=0666,gid=5 0 0
tmpfs /run tmpfs mode=0755,nodev,nosuid,strictatime 0 0
tmpfs /var/volatile tmpfs defaults 0 0
/dev/mmcblk1p2 /home auto defaults 0 0
I've done that, and it works. However, I don't really like having to hard-code the SD card partition name: for example, if I add a third partition between root and home, the home partition becomes mmcblk1p3 and I have to edit fstab too.
It looks like the root partition is just referred to as /dev/root. I suppose that works because the partition is labeled root. (On a side note, /dev/root doesn't exist in the root filesystem; I searched with ls -la /dev.) I tried /dev/home, but it didn't work.
Is there a way to declare the home partition in fstab without using the "low level" SD partition name, but something more generic/portable?
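One portable option, since the wic configuration above already sets --label home, is to mount by filesystem label. This relies on standard fstab semantics (mount resolves LABEL= via blkid), not on anything wic generates; a minimal sketch:
# /etc/fstab: mount /home by label instead of by device node
LABEL=home /home ext4 defaults 0 2
Such an entry keeps working even if the partition index changes; PARTUUID= is another option if you need to pin one specific partition.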

Related

How to set /dev/root filesystem size to the partition size

# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 4.3G 1.9G 2.2G 47% /
devtmpfs 980M 0 980M 0% /dev
tmpfs 981M 0 981M 0% /dev/shm
tmpfs 981M 33M 948M 4% /run
tmpfs 981M 0 981M 0% /sys/fs/cgroup
tmpfs 981M 0 981M 0% /tmp
tmpfs 981M 16K 981M 1% /var/volatile
# fdisk -l
Disk /dev/mmcblk1: 7.3 GiB, 7818182656 bytes, 15269888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier:
Device Start End Sectors Size Type
/dev/mmcblk1p1 16384 24575 8192 4M unknown
/dev/mmcblk1p2 24576 32767 8192 4M unknown
/dev/mmcblk1p3 32768 69859 37092 18.1M unknown
/dev/mmcblk1p4 81920 15269854 15187935 7.2G unknown
As far as I know, the /dev/root filesystem size matches the size of the content that was copied into it, not the size of the partition.
My goal is to have /dev/root the same size as /dev/mmcblk1p4, which is 7.2G.
How can I instruct Yocto to give the /dev/root filesystem the size of the partition it is mounted on?
I can see two possible solutions to your issue.
The first one is to tell Yocto to generate an image with a specific IMAGE_ROOTFS_SIZE value. As stated in the Yocto Mega-Manual, the size is specified in KBytes. Modify your machine.conf or local.conf to add this parameter.
In your case, the value works out to:
15269854 - 81920 + 1 = 15187935 sectors (fdisk's sector count is End - Start + 1)
sectors are 512 Bytes on your system (see verification below)
15187935 * 512 = 7776222720 Bytes
7776222720 / 1024 = 7593967.5, so roughly 7593967 KBytes
To verify the sector size of 512 B:
7593967 / (1024 * 1024) = 7.242 GB
With a 512-Byte sector size, the partition size comes out to 7.2 GB, as stated by fdisk.
I think it is a good idea to reduce it a little; a value like 7230000 KBytes (~7.23 GB) would be a good candidate.
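For example, in local.conf (a minimal sketch using the candidate value above; IMAGE_ROOTFS_SIZE is expressed in KBytes):
# local.conf: force the root filesystem image to ~7.23 GB
IMAGE_ROOTFS_SIZE = "7230000"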
The other method is to use the resize2fs program, if your partition type is ext2/3/4. This program can be executed on mounted or unmounted filesystems. If you are using an SD card, the simplest method is to insert it into your computer, unmount it, and run resize2fs /dev/<mysdcarddevice>. You can also execute it directly on your embedded board. In that case you will need to add the package to the image with IMAGE_INSTALL += "e2fsprogs-resize2fs", then run resize2fs /dev/mmcblk1p4.
As PierreOlivier proposed, I used the resize2fs program.
Because I am using Yocto to build the custom image, I use pkg_postinst_ontarget, which runs only once, at the first boot on the target machine.
In one of my recipes I put:
pkg_postinst_ontarget_${PN}() {
    # Runs on the target, once, at first boot
    resize2fs /dev/mmcblk1p4
}
This resizes the selected partition on the first boot.
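(As noted above, this requires resize2fs on the target, e.g. via IMAGE_INSTALL += "e2fsprogs-resize2fs".)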
Thank you PierreOlivier

ddrescue read non tried blocks

I'm trying to rescue a 1TB disk which has read errors. Because I didn't have a free 1TB drive, I created a RAID 0 of two 500GB drives.
I used the command line from Wikipedia for the first run:
sudo ddrescue -f -n /dev/sdk /dev/md/md_test /home/user/rescue.map
ddrescue already completed this run after approximately 20 hours and more than 7000 read errors.
Now I'm trying to do a second run
sudo ddrescue -d -f -v -r3 /dev/sdk /dev/md/md_test /home/user/rescue.map
to read the non-tried blocks, but ddrescue gives me this:
GNU ddrescue 1.23
About to copy 1000 GBytes from '/dev/sdk' to '/dev/md/md_test'
Starting positions: infile = 0 B, outfile = 0 B
Copy block size: 128 sectors Initial skip size: 19584 sectors
Sector size: 512 Bytes
Press Ctrl-C to interrupt
Initial status (read from mapfile)
rescued: 635060 MB, tried: 0 B, bad-sector: 0 B, bad areas: 0
Current status
ipos: 1000 GB, non-trimmed: 0 B, current rate: 0 B/s
opos: 1000 GB, non-scraped: 0 B, average rate: 0 B/s
non-tried: 365109 MB, bad-sector: 0 B, error rate: 0 B/s
rescued: 635060 MB, bad areas: 0, run time: 0s
pct rescued: 63.49%, read errors: 0, remaining time: n/a
time since last successful read: n/a
Copying non-tried blocks... Pass 1 (forwards)
ddrescue: Write error: Invalid argument
I can't figure out what this write error means; I've already searched the manual for answers.
Any help is appreciated! Thx!
After a while I found the cause of the write error: the capacity of the corrupt drive is 931.5G, but the total capacity of the RAID 0 was just 931.3G.
I realized it while taking a closer look at the output of the lsblk command.
So I rebuilt the RAID 0 array with three 500G drives, and ddrescue now works as expected.
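To catch this kind of mismatch up front, you can compare the exact byte sizes of source and destination before starting the copy (a sketch using the device names from this thread):
# The destination must be at least as large as the source
blockdev --getsize64 /dev/sdk          # source: the failing 1TB disk
blockdev --getsize64 /dev/md/md_test   # destination: the RAID 0 array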

VMWare Ubuntu warns me on no disk space, but GParted shows there is lots of space

I have a VMware Ubuntu VM allocated 300G of disk space, but recently I got a disk space warning.
I ran df -h and got this:
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 797M 88M 710M 11% /run
/dev/mapper/vgroot-root 25G 18G 5.8G 76% /
tmpfs 3.9G 106M 3.8G 3% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 945M 75M 806M 9% /boot
/dev/mapper/vgroot-home 15G 14G 56K 100% /home
vmhgfs-fuse 239G 200G 40G 84% /mnt/hgfs
tmpfs 797M 0 797M 0% /run/user/999
tmpfs 797M 64K 797M 1% /run/user/500
Yes, I see the /home directory is 100% full, but how can I enlarge it?
I tried running GParted, and it shows there is lots of free space.
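Since /home lives on an LVM logical volume (/dev/mapper/vgroot-home), one option is to extend it with the LVM tools. A minimal sketch, assuming the volume group vgroot still has free extents (e.g. the unallocated space GParted shows has been added to it):
# Grow the home logical volume by 10G and resize its filesystem in one step
sudo lvextend -r -L +10G /dev/vgroot/home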
What worked for me was lowering the space reserved for Previous Versions of files (http://windows.microsoft.com/en-us/windows/previous-versions-files-faq#1TC=windows-7):
1. Click the Start icon
2. Left-click on "Computer" and click on Properties
3. Click on "System Protection" on the left side
4. Click on the disk and "Configure"
5. Lower your quota
As a quick & easy fix, try a different data (SATA?) cable, maybe even the power cable.
Check the syslog (usually /var/log/syslog) & dmesg for any messages about the drive when it disappears/reappears.
You might even be able to hear the drive working while testing / reading / writing, or even just spinning idle. So if it suddenly goes quiet while it should be working that's bad, especially if it disappears from all Linux tools / listings.
It's a little weird that the self-test didn't log anything...
You could try running the short self-test again, optionally in -C, --captive mode (some of my older drives would always abort the test at ~90% if captive).
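For reference, a sketch of that invocation (the device name is a placeholder):
# Run the short self-test in captive (foreground) mode
smartctl -t short -C /dev/sdX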
While testing (not in captive mode) you could check the test's status with smartctl -c /dev/sdX to see "Self-test execution status:" and the next line has % remaining. Or just cut out those lines with:
smartctl -c /dev/sdX | grep "^Self" -A1
-c will also show what tests are supported.
Try the other self-tests (conveyance, offline, long).
I like smartctl --xall to see all the results.
I believe the "SMART Attributes Data Structure -> Vendor Specific SMART Attributes with Thresholds" table flags "problem" attributes with a "VALUE" of 100 or less (higher numbers being better).
The "RAW_VALUE"s are very vendor-specific & might be a code & might not have any direct relation to the attribute (Power_On_Minutes & Power_Cycle_Count should be actual minutes & a count, but there may be no guarantees).
Drives can put themselves to sleep after a while, but they should still remain connected & listed in Linux. smartctl can get & set the drive's power management settings; see the power-mode options in its man page.
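A sketch of the relevant invocations (hedged; exact option support varies by smartctl version and drive):
# Query the drive's Advanced Power Management level
smartctl -g apm /dev/sdX
# Set APM (1-254, or off) and a standby (spin-down) timer
smartctl -s apm,127 /dev/sdX
smartctl -s standby,242 /dev/sdX
# Check identity info without waking a drive that is in standby
smartctl -i -n standby /dev/sdX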

Mongodb build/compile error: not enough memory on Ubuntu

Preface so this isn't marked as a duplicate: I've seen lots of MongoDB memory issues posted on Stack Overflow, but none that have to do with errors during compilation.
I just freshly downloaded and ran Ubuntu on VirtualBox (on a Mac), so I feel like there should be enough memory. However, when I try to compile MongoDB from the source code, I've gotten the following errors about an hour into the compilation (I have done this a few times now):
scons: *** [<whatever file it was working on>] No space left on device
scons: building terminated because of errors
and on a separate occasion
IOError: [Errno 28] No space left on device:
File "/usr/lib/scons/SCons/Script/Main.py", line 1359:
_exec_main(parser, values)
File "/usr/lib/scons/SCons/Script/Main.py", line 1323:
_main(parser)
File "/usr/lib/scons/SCons/Script/Main.py", line 1072:
nodes = _build_targets(fs, options, targets, target_top)
File "/usr/lib/scons/SCons/Script/Main.py", line 1281:
jobs.run(postfunc = jobs_postfunc)
File "/usr/lib/scons/SCons/Job.py", line 113:
postfunc()
File "/usr/lib/scons/SCons/Script/Main.py", line 1278:
SCons.SConsign.write()
File "/usr/lib/scons/SCons/SConsign.py", line 109:
syncmethod()
File "/usr/lib/scons/SCons/dblite.py", line 117:
self._pickle_dump(self._dict, f, 1)
Exception IOError: (28, 'No space left on device') in <bound method dblite.__del__ of <SCons.dblite.dblite object at 0x7fbe2a577dd0>> ignored
I've tried both of the following build commands:
scons all --dbg=on -j1
scons --dbg=on -j1
According to VirtualBox the virtual size is 8 GB and the actual size is 4.09 GB. Also, if it makes a difference, the odds that the storage on my Mac is actually full are slim to none.
Any help would be greatly appreciated, thanks in advance.
EDIT: I've tried allocating more virtual disk space (24 GB) and resizing partitions, but I still cannot complete a build.
Here is the output of the df -T command:
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda1 ext4 15345648 14304904 238184 99% /
none tmpfs 4 0 4 0% /sys/fs/cgroup
udev devtmpfs 1014316 12 1014304 1% /dev
tmpfs tmpfs 205012 860 204152 1% /run
none tmpfs 5120 0 5120 0% /run/lock
none tmpfs 1025052 152 1024900 1% /run/shm
none tmpfs 102400 40 102360 1% /run/user
When you say memory, I believe you mean disk space. Try running the command df -T to see what % usage you really have. You will probably need to resize the amount of space VirtualBox has assigned to your image, as well as resize your root partition. It may be simpler to just create a new VirtualBox image with 16 or 24GB of disk space.
If you decide to go the resize partition route, here is a helpful resource: https://askubuntu.com/questions/126153/how-to-resize-partitions
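If you go the VirtualBox route, the host-side step looks roughly like this (a sketch; the .vdi filename is a placeholder, and the size is given in MB):
# Grow the virtual disk to 24 GB; the guest partition must still be
# enlarged afterwards, e.g. with GParted
VBoxManage modifymedium disk "Ubuntu.vdi" --resize 24576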

Saving a file in Eclipse makes the processor work hard

I am using Eclipse Juno. Every time I save a file, Eclipse consumes 100% of the processor.
Here is a snapshot from the top command:
Tasks: 303 total, 1 running, 301 sleeping, 1 stopped, 0 zombie
%Cpu(s): 31,2 us, 1,4 sy, 0,0 ni, 65,6 id, 0,4 wa, 0,0 hi, 1,4 si, 0,0 st
KiB Mem: 8077332 total, 5122068 used, 2955264 free, 509476 buffers
KiB Swap: 8252412 total, 0 used, 8252412 free, 2242736 cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3816 iwan 20 0 1141m 410m 35m S 100,9 5,2 59:00.47 eclipse
3882 iwan 20 0 594m 162m 52m S 2,3 2,1 6:09.30 skype
2646 iwan 20 0 309m 82m 32m S 2,0 1,0 9:05.18 compiz
3894 iwan 20 0 851m 171m 42m S 2,0 2,2 3:00.66 thunderbird
1305 root 20 0 266m 68m 55m S 1,3 0,9 7:55.87 Xorg
Any ideas?
Apply any available updates. If problems continue, keep an eye on http://bugs.eclipse.org/402018.