How to set the /dev/root filesystem size to the partition size - Yocto

# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 4.3G 1.9G 2.2G 47% /
devtmpfs 980M 0 980M 0% /dev
tmpfs 981M 0 981M 0% /dev/shm
tmpfs 981M 33M 948M 4% /run
tmpfs 981M 0 981M 0% /sys/fs/cgroup
tmpfs 981M 0 981M 0% /tmp
tmpfs 981M 16K 981M 1% /var/volatile
# fdisk -l
Disk /dev/mmcblk1: 7.3 GiB, 7818182656 bytes, 15269888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier:
Device Start End Sectors Size Type
/dev/mmcblk1p1 16384 24575 8192 4M unknown
/dev/mmcblk1p2 24576 32767 8192 4M unknown
/dev/mmcblk1p3 32768 69859 37092 18.1M unknown
/dev/mmcblk1p4 81920 15269854 15187935 7.2G unknown
As far as I know, the /dev/root filesystem size is the size of the content that is copied to it.
My goal is to have the /dev/root filesystem the same size as /dev/mmcblk1p4, which is 7.2G.
How can I instruct Yocto to give the /dev/root filesystem the size of the partition it is mounted on?

I can see two possible solutions to your issue.
The first one is telling Yocto to generate an image with a specific IMAGE_ROOTFS_SIZE value, as documented in the Yocto Mega-Manual. The size is specified in KBytes. Add this variable to your machine.conf or local.conf.
In your case, the value works out to:
15269854 - 81920 + 1 = 15187935 sectors (the Sectors count fdisk prints)
15187935 * 512 = 7776222720 Bytes (sectors are 512 Bytes on your system; see the check below)
7776222720 / 1024 = 7593967 KBytes (rounded down)
To check this against fdisk's output:
7593967 / (1024 * 1024) = 7.24 GiB
which matches the 7.2G fdisk reports for the partition.
I think it is a good idea to reduce it a little bit to leave headroom for filesystem overhead; a value like 7230000 KBytes (~6.9 GiB) can be a good candidate.
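For example (a minimal sketch; the variable comes from the Yocto manual, and the value is the one computed above):

# local.conf or machine.conf: size of the generated root filesystem, in KBytes
IMAGE_ROOTFS_SIZE = "7230000"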
The other method is to use the resize2fs program, if your partition type is ext2/3/4. It can be run on mounted or unmounted filesystems. If you are using an SD card, the simplest method is to insert it into your computer, unmount it, and run resize2fs /dev/<mysdcarddevice>. You can also run it directly on the embedded board; in that case, add the package to the image with IMAGE_INSTALL += "e2fsprogs-resize2fs", then run resize2fs /dev/mmcblk1p4.

As PierreOlivier proposed, I used the resize2fs program.
Because I am using Yocto to build the custom image, I used pkg_postinst_ontarget, which runs only once, at the first boot on the target machine.
In one of my recipes I put:
pkg_postinst_ontarget_${PN}() {
    # grow the root filesystem to fill its partition; runs once, at first boot
    resize2fs /dev/mmcblk1p4
}
This resizes the selected partition on the first boot.
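Note that resize2fs has to be present on the target for this to work; a one-line sketch for the same recipe (package name taken from PierreOlivier's answer):

# same recipe: make sure the tool is installed on the image
RDEPENDS_${PN} += "e2fsprogs-resize2fs"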
Thank you PierreOlivier

Related

Yocto separate home partition; best practice for fstab generation

I've started experimenting with Yocto, starting from my evaluation kit manufacturer's repository. Since I want a read-only root, I want to move /home to a separate read/write partition.
I can create the partitions in the SD image with a custom .wks.in file:
part u-boot --source rawcopy --sourceparams="file=imx-boot" --ondisk mmcblk --no-table --align ${IMX_BOOT_SEEK}
part / --source rootfs --ondisk mmcblk --fstype=ext4 --label root --align 8192 --fixed-size=4096M --exclude-path=home/
part /home --source rootfs --rootfs-dir=${IMAGE_ROOTFS}/home --ondisk mmcblk --fstype=ext4 --label home --align 8192 --fixed-size=2048M
bootloader --ptable msdos
But the home partition is not used as HOME; it is mounted as a normal data partition under /run/media:
root@imx8mp-var-dart:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
mmcblk2 179:0 0 14.6G 0 disk
`-mmcblk2p1 179:1 0 14.6G 0 part /run/media/mmcblk2p1
mmcblk2boot0 179:32 0 4M 1 disk
mmcblk2boot1 179:64 0 4M 1 disk
mmcblk1 179:96 0 14.8G 0 disk
|-mmcblk1p1 179:97 0 4G 0 part /
`-mmcblk1p2 179:98 0 4G 0 part /run/media/mmcblk1p2
The default fstab (the file located in poky's layer) doesn't have the new home entry, so that's the expected behavior:
# stock fstab - you probably want to override this with a machine specific one
/dev/root / auto defaults 1 1
proc /proc proc defaults 0 0
devpts /dev/pts devpts mode=0620,ptmxmode=0666,gid=5 0 0
tmpfs /run tmpfs mode=0755,nodev,nosuid,strictatime 0 0
tmpfs /var/volatile tmpfs defaults 0 0
# uncomment this if your device has a SD/MMC/Transflash slot
#/dev/mmcblk0p1 /media/card auto defaults,sync,noauto 0 0
I've "stolen" the wic.in configuration from [here][1]. A comment in the same thread states that "WIC will automatically add an entry in fstab". But that doesn't seems the case...
How to let Yocto/WIC populate fstab correctly, so that the home partition is actually used as HOME?
Another approach is to add a manually generated fstab in a custom layer:
/dev/root / auto defaults 1 1
proc /proc proc defaults 0 0
devpts /dev/pts devpts mode=0620,ptmxmode=0666,gid=5 0 0
tmpfs /run tmpfs mode=0755,nodev,nosuid,strictatime 0 0
tmpfs /var/volatile tmpfs defaults 0 0
/dev/mmcblk1p2 /home auto defaults 0 0
I've done it, and it works, although I don't really like that I have to hard-code the SD card partition name. For example, if I add a third partition between root and home, the home partition becomes mmcblk1p3, and I have to edit fstab too.
It looks like the root partition is just referred to as /dev/root; I suppose that works because the partition is labeled root. (On a side note, /dev/root doesn't exist in the root filesystem; I searched with ls -la /dev.) I tried /dev/home, but it didn't work.
Is there a way to declare the home partition in fstab without using the "low level" SD partition name, but something more generic/portable?
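One portable option follows from the label observation above: the .wks file already gives each partition a filesystem label (--label home), and fstab can mount by label instead of by device node. A sketch (assuming the ext4 label is actually written to the filesystem; verify with blkid):

# /etc/fstab: mount by filesystem label instead of device name
LABEL=home /home ext4 defaults 0 0

This keeps working even if the partition gets renumbered, because the kernel resolves the label at mount time.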

CentOS not using available memory

I have CentOS installed on a server with 64 GB of memory, and it seems as if memory usage is being suppressed.
I came to this conclusion by running an insert statement that inserts 10 million rows into a Postgres table, in both a TimescaleDB and a standard Postgres instance hosted on Docker.
I monitored the insert process in three different ways:
Docker stats timescaledb:
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
timescaledb 73.14% 10.42 MiB / 62.75 GiB 0.02% 8.46 kB / 8.39 kB 0 B / 15.1 GB 12
top gives the following:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16298 avahi 20 0 16.2g 762356 759908 R 41.5 1.2 0:22.72 postgres
16127 avahi 20 0 16.2g 693080 691968 S 4.3 1.1 0:01.29 postgres
16129 avahi 20 0 16.2g 17748 16712 S 2.3 0.0 0:00.87 postgres
1578 root 30 10 1232780 86976 11568 S 0.7 0.1 0:46.34 osqueryd
17014 root 20 0 162264 2480 1596 R 0.7 0.0 0:00.03 top
928 root 20 0 90608 3212 2352 S 0.3 0.0 0:03.47 rngd
16128 avahi 20 0 16.2g 132064 131016 S 0.3 0.2 0:00.18 postgres
free -h gives the following
total used free shared buff/cache available
Mem: 62G 1.0G 58G 1.1G 3.1G 56G
Swap: 62G 0B 62G
I know that TimescaleDB is an extension of Postgres which comes with its own memory configuration, but the TimescaleDB Docker container configures this automatically (for instance, effective_cache_size is set to 48 GB as opposed to the 4 GB default that Postgres ships with). I also ran a similar process with Apache Spark with 16 GB assigned to the worker, and it ran into an OOM error. Additionally, I did a similar test on a different, smaller VM, and memory usage increased as expected. All of this leads me to believe that it's a CentOS config setting I am missing, and nothing to do with Timescale/Postgres.
I have added vm.overcommit_memory = 2 and vm.overcommit_ratio = 95 to /etc/sysctl.conf and ran sysctl -p to apply the settings, but this didn't make a difference. The relevant kernel settings are now:
kernel.shmall = 8224280
kernel.shmmax = 33686650880
kernel.shmmni = 4096
vm.overcommit_memory = 2
vm.overcommit_ratio = 95
Below is the output from cat /proc/meminfo
MemTotal: 65794240 kB
MemFree: 61098656 kB
MemAvailable: 59252660 kB
Buffers: 2120 kB
Cached: 3467144 kB
SwapCached: 0 kB
Active: 2817620 kB
Inactive: 884816 kB
Active(anon): 1109220 kB
Inactive(anon): 234708 kB
Active(file): 1708400 kB
Inactive(file): 650108 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 65535996 kB
SwapFree: 65535996 kB
Dirty: 88 kB
Writeback: 0 kB
AnonPages: 233188 kB
Mapped: 1175120 kB
Shmem: 1110756 kB
Slab: 204044 kB
SReclaimable: 142700 kB
SUnreclaim: 61344 kB
KernelStack: 7232 kB
PageTables: 14672 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 128040524 kB
Committed_AS: 18709300 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 408824 kB
VmallocChunk: 34325399548 kB
Percpu: 9216 kB
HardwareCorrupted: 0 kB
AnonHugePages: 96256 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 133604 kB
DirectMap2M: 66965504 kB
Is there maybe something I can try to increase my memory usage? Is there a config setting that I am missing somewhere?
Thanks in advance for any help.
PostgreSQL also uses the "unused" memory, because it does buffered I/O. That "unused" memory is used by the kernel to cache files; in the case of a database server, these will mostly be database files. That way, I/O requests by PostgreSQL can be served from the kernel cache rather than causing disk I/O requests.
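A quick way to see this (a sketch; run it on the host while one of the inserts is going):

# the buff/cache column grows as the kernel caches the database files;
# the kernel, not the postgres process, is holding that memory
watch -n 5 free -h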

ddrescue read non-tried blocks

I'm trying to rescue a 1 TB disk which has read errors. Because I didn't have a spare 1 TB drive, I created a RAID 0 of two 500 GB drives.
I used the command line from Wikipedia for the first run:
sudo ddrescue -f -n /dev/sdk /dev/md/md_test /home/user/rescue.map
ddrescue completed this run after approximately 20 hours and more than 7000 read errors.
Now I'm trying to do a second run to read the non-tried blocks:
sudo ddrescue -d -f -v -r3 /dev/sdk /dev/md/md_test /home/user/rescue.map
but ddrescue gives me this:
GNU ddrescue 1.23
About to copy 1000 GBytes from '/dev/sdk' to '/dev/md/md_test'
Starting positions: infile = 0 B, outfile = 0 B
Copy block size: 128 sectors Initial skip size: 19584 sectors
Sector size: 512 Bytes
Press Ctrl-C to interrupt
Initial status (read from mapfile)
rescued: 635060 MB, tried: 0 B, bad-sector: 0 B, bad areas: 0
Current status
ipos: 1000 GB, non-trimmed: 0 B, current rate: 0 B/s
opos: 1000 GB, non-scraped: 0 B, average rate: 0 B/s
non-tried: 365109 MB, bad-sector: 0 B, error rate: 0 B/s
rescued: 635060 MB, bad areas: 0, run time: 0s
pct rescued: 63.49%, read errors: 0, remaining time: n/a
time since last successful read: n/a
Copying non-tried blocks... Pass 1 (forwards)
ddrescue: Write error: Invalid argument
I can't figure out what this write error means; I already searched the manual for answers.
Any help is appreciated! Thanks!
After a while I found the cause of the write error: the capacity of the corrupt drive is 931.5G, but the total capacity of the RAID 0 was just 931.3G.
I realized it while taking a closer look at the output of the lsblk command.
So I rebuilt the RAID 0 array with three 500G drives, and ddrescue now works as expected.
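For anyone hitting the same error, comparing the exact sizes up front avoids the surprise; a sketch using the device names from the question (the RAID member disks are placeholders):

# compare exact byte sizes of source and destination before starting
blockdev --getsize64 /dev/sdk /dev/md/md_test

# rebuild the array with a third member so the destination is at least
# as large as the source
sudo mdadm --create /dev/md/md_test --level=0 --raid-devices=3 \
    /dev/sdb /dev/sdc /dev/sdd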

VMware Ubuntu warns me about no disk space, but GParted shows there is lots of space

I have a VMware Ubuntu VM allocated 300G of disk space, but recently I got a disk space warning.
I ran df -h:
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 797M 88M 710M 11% /run
/dev/mapper/vgroot-root 25G 18G 5.8G 76% /
tmpfs 3.9G 106M 3.8G 3% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 945M 75M 806M 9% /boot
/dev/mapper/vgroot-home 15G 14G 56K 100% /home
vmhgfs-fuse 239G 200G 40G 84% /mnt/hgfs
tmpfs 797M 0 797M 0% /run/user/999
tmpfs 797M 64K 797M 1% /run/user/500
Yes, I see the /home directory is 100% full, but how can I enlarge it?
I tried gparted, and there seems to be plenty of space available.
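Since /home lives on the LVM logical volume /dev/mapper/vgroot-home, the usual route is to grow that LV and then the filesystem. A sketch (assuming the space GParted shows is, or can be made, free in the vgroot volume group, and that /home is ext4; the names come from the df output above, and the 20G amount is an example):

# check for free extents in the volume group
sudo vgs vgroot

# grow the home LV, then grow the filesystem inside it (ext4 grows online)
sudo lvextend -L +20G /dev/vgroot/home
sudo resize2fs /dev/mapper/vgroot-home

lvextend -r does both steps in one command, if you prefer.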
What worked for me was lowering the space reserved for Previous Versions of files (http://windows.microsoft.com/en-us/windows/previous-versions-files-faq#1TC=windows-7):
1. Click the Start icon.
2. Right-click "Computer" and click Properties.
3. Click "System Protection" on the left side.
4. Click the disk and "Configure".
5. Lower your quota.
As a quick & easy fix, try a different data (SATA?) cable, maybe even the power cable.
Check the syslog (usually /var/log/syslog) and dmesg for any messages about the drive when it disappears/reappears.
You might even be able to hear the drive working while testing / reading / writing, or even just spinning idle. If it suddenly goes quiet while it should be working, that's bad, especially if it disappears from all Linux tools / listings.
It's a little weird that the self-test didn't log anything...
You could try running the short self-test again, optionally in -C, --captive mode (some of my older drives would always abort the test at ~90% if captive), e.g.:
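# start the short self-test; with -C the drive is busy until it finishes
# (/dev/sdX is a placeholder for your drive)
sudo smartctl -t short -C /dev/sdX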
While testing (not in captive mode) you can check the test's status with smartctl -c /dev/sdX and look for "Self-test execution status:"; the next line has the % remaining. Or just cut out those lines with:
smartctl -c /dev/sdX | grep "^Self" -A1
-c will also show what tests are supported.
Try the other self-tests (conveyance, offline, long).
I like smartctl --xall to see all the results.
I believe the "SMART Attributes Data Structure -> Vendor Specific SMART Attributes with Thresholds" has "problem" attributes showing a "VALUE" of 100 or less (higher numbers being better)
the "RAW_VALUES" are very vendor specific & might be a code & might not have any direct relation to the attribute (Power_On_Minutes & Power_Cycle_Count should be actual minutes & a count, but there may be no guarantees)
Drives can put themselves to sleep after a while, but they should still remain connected & listed in Linux. There's a smartctl command to get & set it, here's the relevant section from the man page:

Mongodb build/compile error: not enough memory on Ubuntu

Preface, so this isn't marked as a duplicate: I've seen lots of MongoDB memory issues posted on Stack Overflow, but none that have to do with errors during compilation.
I just freshly downloaded and ran Ubuntu in VirtualBox (on a Mac), so I feel like there should be enough memory. However, when I try to compile MongoDB from the source code, I've gotten the following errors about an hour into the compilation (I have done this a few times now):
scons: *** [<whatever file it was working on>] No space left on device
scons: building terminated because of errors
and on a separate occasion
IOError: [Errno 28] No space left on device:
File "/usr/lib/scons/SCons/Script/Main.py", line 1359:
_exec_main(parser, values)
File "/usr/lib/scons/SCons/Script/Main.py", line 1323:
_main(parser)
File "/usr/lib/scons/SCons/Script/Main.py", line 1072:
nodes = _build_targets(fs, options, targets, target_top)
File "/usr/lib/scons/SCons/Script/Main.py", line 1281:
jobs.run(postfunc = jobs_postfunc)
File "/usr/lib/scons/SCons/Job.py", line 113:
postfunc()
File "/usr/lib/scons/SCons/Script/Main.py", line 1278:
SCons.SConsign.write()
File "/usr/lib/scons/SCons/SConsign.py", line 109:
syncmethod()
File "/usr/lib/scons/SCons/dblite.py", line 117:
self._pickle_dump(self._dict, f, 1)
Exception IOError: (28, 'No space left on device') in <bound method dblite.__del__ of <SCons.dblite.dblite object at 0x7fbe2a577dd0>> ignored
I've tried both of the following build commands:
scons all --dbg=on -j1
scons --dbg=on -j1
According to VirtualBox, the virtual size is 8 GB and the actual size is 4.09 GB. Also, if it makes a difference, the odds that the memory on my Mac is actually full are slim to none.
Any help would be greatly appreciated, thanks in advance.
EDIT: I've tried creating more memory (24 GB) and resizing partitions, but I still cannot complete a build.
Here is the output of the df -T command:
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda1 ext4 15345648 14304904 238184 99% /
none tmpfs 4 0 4 0% /sys/fs/cgroup
udev devtmpfs 1014316 12 1014304 1% /dev
tmpfs tmpfs 205012 860 204152 1% /run
none tmpfs 5120 0 5120 0% /run/lock
none tmpfs 1025052 152 1024900 1% /run/shm
none tmpfs 102400 40 102360 1% /run/user
When you say memory, I believe you mean disk space. Try running df -T to see what % usage you really have. You will probably need to resize the amount of space VirtualBox has assigned to your image, as well as resize your partition. It may be simpler to just create a new VirtualBox image with 16 or 24 GB of disk space.
If you decide to go the resize partition route, here is a helpful resource: https://askubuntu.com/questions/126153/how-to-resize-partitions
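If you'd rather grow the existing image, the virtual disk can be resized from the host; a sketch (the .vdi path is a placeholder, and the partition inside the guest still has to be grown afterwards, e.g. per the linked answer):

# grow the virtual disk to 24 GB (the size is given in MB); run on the host
VBoxManage modifyhd ~/VirtualBox\ VMs/ubuntu/ubuntu.vdi --resize 24576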