I'm trying to partition my Yocto image using wic and a .wks file, but I'm running into issues. This is what my current .wks file looks like:
part /boot --source bootimg-partition --ondisk mmcblk0 --fstype=vfat --label boot --active --size 20M
part / --source rootfs --ondisk mmcblk0 --fstype=ext4 --label root --align 4096 --size 1G
part /update --ondisk mmcblk0 --fstype=ext4 --label update --align 4096 --size 20M
part /user-data --ondisk mmcblk0 --fstype=ext4 --label user-data --align 4096 --size 20M
I generate the .direct image using the wic create command and write it to the card using the wic write command. However, this results in a kernel panic when the board boots, and the GNOME Disks application shows the contents of the update and user-data partitions as unknown.
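For reference, the generation and flashing steps look roughly like this (the kickstart, image, and device names are placeholders rather than my exact ones):
wic create my-image.wks -e core-image-minimal   # produce the .direct image
wic write ./my-image.direct /dev/mmcblk0        # flash it to the SD card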
Removing the part commands for the update and user-data partitions produces an image which, when written to the board, does not cause a kernel panic, so my assumption is that the issue lies in those two lines. The question is: what am I doing wrong?
Related
I need to create an image using Yocto that includes a rescue partition. That is, another root partition which is selectable in grub menu.
I am currently creating an image which I dd to our target Intel board's internal SSD chip after booting from USB. This is working, but I now need to duplicate the current root partition as a "rescue" partition in case something ever goes wrong with the root partition.
So currently I have a working wks (kickstart) file with 5 partitions defined as:
part /boot --source bootimg-efi --sourceparams="loader=grub-efi" --ondisk sda --label boot --active --align 1024
part / --source rootfs --ondisk sda --fstype=ext4 --label root --align 1024 --use-uuid --extra-space 1024
part /rescue --source rootfs --ondisk sda --fstype=ext4 --label rescue --align 1024 --use-uuid --extra-space 1024
part swap --ondisk sda --size 1024 --label swap --fstype=swap
part /home --ondisk sda --fstype=ext4 --label home --align 1024 --use-uuid --size=1024
bootloader --configfile="grub.cfg"
The grub configuration has two menu options and I am able to boot from the default first menu option.
menuentry 'boot'{
linux /bzImage root=/dev/sda2 3 console=ttyS2,115200n8 rootfstype=ext4
}
menuentry 'rescue'{
linux /bzImage root=/dev/sda3 3 console=ttyS2,115200n8 rootfstype=ext4
}
The grub configuration selects the second or the third partition as root. This works with the fixed order of partitions in the wks file.
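For what it's worth, since the partitions are created with --use-uuid, the entries could presumably also reference the root partition by PARTUUID instead of the device node; a rough sketch (the value is taken from the partition listing in the log below and changes with every build):
menuentry 'rescue'{
linux /bzImage root=PARTUUID=9f293dfb-03 3 console=ttyS2,115200n8 rootfstype=ext4
}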
My problem is that if I select the 'rescue' partition in grub, the system starts to boot but the Linux kernel fails to find a root partition. Here are the last kernel messages before the kernel panic:
:<snip>
:
ata1.00: ATA-8: 32GB NANDrive, D A431F4, max UDMA/133
ata1.00: 62533296 sectors, multi 0: LBA48
ata1.00: configured for UDMA/133
scsi 0:0:0:0: Direct-Access ATA 32GB NANDrive 31F4 PQ: 0 ANSI: 5
sd 0:0:0:0: [sda] 62533296 512-byte logical blocks: (32.0 GB/29.8 GiB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sd 0:0:0:0: Attached scsi generic sg0 type 0
sda: sda1 sda2 sda3 sda4 < sda5 sda6 >
:
:<snip>
:
List of all partitions:
0100 16384 ram0
(driver?)
0101 16384 ram1
(driver?)
0102 16384 ram2
(driver?)
0103 16384 ram3
(driver?)
0800 31266648 sda
driver: sd
0801 24571 sda1 9f293dfb-01
0802 4151722 sda2 9f293dfb-02
0803 4151722 sda3 9f293dfb-03
0804 1 sda4
0805 1048576 sda5 9f293dfb-05
0806 1048576 sda6 9f293dfb-06
No filesystem could mount root, tried:
ext4
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,3)
:
:
I suspect Yocto does something when building the image that I just need to control, but I am unable to find the source of this problem.
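One way to sanity-check what the build actually produced, before dd-ing it, is to dump the partition table of the generated image (the file name below is only an example):
wic ls ./my-image.wic      # list the partitions inside the image
fdisk -l ./my-image.wic    # cross-check the partition table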
Any help is appreciated
I am trying to install Kubernetes with kubeadm on my laptop, which runs Ubuntu 16.04. I have disabled swap, since kubelet does not work with swap on. The command I used is:
swapoff -a
I also commented out the reference to swap in /etc/fstab.
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=1d343a19-bd75-47a6-899d-7c8bc93e28ff / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
#UUID=d0200036-b211-4e6e-a194-ac2e51dfb27d none swap sw 0 0
I confirmed swap is turned off by running the following:
free -m
total used free shared buff/cache available
Mem: 15936 2108 9433 954 4394 12465
Swap: 0 0 0
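swapon --show (a standard util-linux command) is another quick check; it prints nothing when no swap areas are active:
swapon --show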
When I start kubeadm, I get the following error:
kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I also tried restarting my laptop, but I get the same error. What could the reason be?
The root cause was the following:
detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
You need to update the Docker cgroup driver. Follow the fix below:
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
systemctl daemon-reload
systemctl restart docker
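Afterwards you can confirm that the driver changed, for example:
docker info | grep -i "cgroup driver"
# expected output: Cgroup Driver: systemd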
You could try kubeadm reset, then kubeadm init --ignore-preflight-errors=Swap.
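That is, roughly (the pod CIDR below is the one from the question):
sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap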
First, try with sudo:
sudo swapoff -a
Then check whether anything is still swapped:
cat /proc/swaps
and
free -h
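If swap comes back after a reboot, it may also be worth checking whether a systemd swap unit re-activates it (this is only a guess, not something visible in the output above):
systemctl --type swap --all          # list swap units
sudo systemctl mask dev-sda5.swap    # example unit name; use whatever is listed on your system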
Hi, I'm trying to resize a disk for a pod in my Kubernetes cluster. Following the steps in the docs, I SSH into the instance node, but the command gives me an error:
sudo growpart /dev/sdb 1
WARN: unknown label
failed [sfd_dump:1] sfdisk --unit=S --dump /dev/sdb
/dev/sdb: device contains a valid 'ext4' signature; it is strongly recommended to wipe the device with wipefs(8)
if this is unexpected, in order to avoid possible collisions
sfdisk: failed to dump partition table: Success
FAILED: failed to dump sfdisk info for /dev/sdb
I tried running the commands from inside the pod, but it doesn't even locate the disk, even though it's there:
root@rc-test-r2cfg:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 59G 2.5G 56G 5% /
/dev/sdb 49G 22G 25G 47% /var/lib/postgresql/data
root@rc-test-r2cfg:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 96G 0 disk /var/lib/postgresql/data
sda 8:0 0 60G 0 disk
└─sda1 8:1 0 60G 0 part /etc/hosts
root@rc-test-r2cfg:/# growpart /dev/sdb 1
FAILED: /dev/sdb: does not exist
where /dev/sdb is the disk location
This can now be done easily by updating the storage specification of the PersistentVolumeClaim directly. See these posts for reference:
https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/
https://dev.to/bzon/resizing-persistent-volumes-in-kubernetes-like-magic-4f96 (GKE example)
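For example, assuming the claim is named postgres-data and its StorageClass has allowVolumeExpansion enabled (both are assumptions here), the request can be bumped with a patch like:
kubectl patch pvc postgres-data -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'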
I have this Docker command:
docker run -d mongo
This should pull the mongo image and run a MongoDB server in a Docker container.
However, I get an error:
no space left on device
I am on MacOS, and using the newer versions of Docker which use hyper-v instead of VirtualBox (I think that's correct).
Here is the exact error message from the mongo container:
$ docker logs efee16702c5756659d563b98d4ae0f58ecf1f1bba8a54f63443c0ae4b520ab4e
about to fork child process, waiting until server is ready for connections.
forked process: 21
2017-05-04T20:23:51.412+0000 I CONTROL [main] ***** SERVER RESTARTED *****
2017-05-04T20:23:51.430+0000 I CONTROL [main] ERROR: Cannot write pid file to /tmp/tmp.Lo035QkbfL: No space left on device
ERROR: child process failed, exited with error number 1
Any idea how to fix this and prevent it from happening in future?
As suggested, the output of df -h is:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1 465Gi 116Gi 349Gi 25% 1963838 4293003441 0% /
devfs 183Ki 183Ki 0Bi 100% 634 0 100% /dev
map -hosts 0Bi 0Bi 0Bi 100% 0 0 100% /net
map auto_home 0Bi 0Bi 0Bi 100% 0 0 100% /home
Output of docker info is:
$ docker info
Containers: 5
Running: 0
Paused: 0
Stopped: 5
Images: 741
Server Version: 17.03.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.13-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.952 GiB
Name: moby
ID: OR4L:WYWW:FFAP:IDX3:B6UK:O2AN:UVTO:EPH6:GYSV:4GV4:L5WP:BQTH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 17
Goroutines: 30
System Time: 2017-05-04T20:45:27.056157913Z
EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
As you state in the comments to the question, ls -altrh ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 returns the following:
-rw-r--r--# 1 alexamil staff 53G
This is a known bug on macOS (and not only there), and an official dev comment can be found here. One caveat: I have read that different people hit different size limits. In the comment it is 64 GB, but for another person it was 20 GB.
There are a couple of workarounds, but no definitive solution that I could find.
The manual one
Run docker ps -a and manually remove all unused containers. Then run docker images and manually remove all the intermediate and unused images.
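For example (standard Docker CLI commands; adjust the filters to your situation):
# Remove all stopped containers
docker rm $(docker ps -aq --filter status=exited)
# Remove images not used by any container (-a goes beyond just dangling images)
docker image prune -a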
The simplest one
Delete the Docker.qcow2 file entirely. But you will lose all images and containers. Completely.
The less simple one
Another way is to run docker volume prune, which will remove all unused volumes.
The resizing one (keeps the data)
Another idea that comes to me is to expand the disk image size with QEMU or something like it:
$ brew install qemu
$ /Applications/Docker.app/Contents/MacOS/qemu-img resize ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 +5G
After you have expanded the image, you will need to boot a VM and run GParted against Docker.qcow2 to grow the partition into the added space. You could use the GParted Live ISO for that:
$ qemu-system-x86_64 -drive file=Docker.qcow2 -m 512 -cdrom ~/Downloads/gparted-live.iso -boot d -device usb-mouse -usb
Some people report this either doesn't work or doesn't help.
Yet another resizing one (wipes the data)
Create a substitute image with the desired size (120G):
$ qemu-img create -f qcow2 ~/data.qcow2 120G
$ cp ~/data.qcow2 /Applications/Docker.app/Contents/Resources/moby/data.qcow2
$ rm ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
data.qcow2 is copied to ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 when you restart docker.
This workaround comes from this comment.
Hope this helps. Good luck!
On CentOS, a strange thing has recently started happening: the user cannot create files in their own home directory:
[deployer@server ~]$ echo test > test.file
-bash: echo: write error: Disk quota exceeded
Although no quotas have been set up for the user:
[deployer@server ~]$ quota
Disk quotas for user deployer (uid 500): none
Disk space is sufficient:
[deployer@server ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vzfs 9.6G 6.9G 2.8G 72% /
none 256M 4.0K 256M 1% /dev
There should be no problem with inodes either:
[deployer@server ~]$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/vzfs 10000000 130959 9869041 2% /
none 65536 95 65441 1% /dev
Can you please tell me what the problem could be?
You also have a group quota enabled for that user. Check with:
repquota -avug|grep username
repquota -avug|grep groupname
Edit the quota for the group:
edquota -g groupname
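If you prefer a non-interactive way to change the limits, setquota can set them directly (the group name, limits, and filesystem below are placeholders; zeros mean no limit):
# usage: setquota -g <group> <block-soft> <block-hard> <inode-soft> <inode-hard> <filesystem>
setquota -g groupname 0 0 0 0 /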