I am using Buildroot to build an initramfs image for my i.MX board. On the board, I run a custom Linux 4.19.35 kernel (4.19.35-gexxxxxx) and a custom U-Boot bootloader, so I do not need Buildroot to provide these. My use case only requires the rootfs.cpio (initramfs) image that gets built.
I am able to load the above initramfs into memory and execute my custom init and post-init scripts. However, I am unable to spawn an interactive shell: on reaching the /bin/sh command in the init script, I am greeted with a shell prompt, but the serial console does not seem to register any keyboard input. Note that all other shell utilities and commands execute just fine, but only when run from a script. Since one of my objectives is a minimal image, I am using BusyBox (1.32.0).
This gets even more confusing when I run the same initramfs with the kernel image generated by Buildroot. In that case, I do get an interactive shell prompt and can enter input as in a regular terminal.
I suspect this might happen because of the different kernels: the Buildroot kernel image is 4.19.35, but the kernel I use is 4.19.35-gexxxxx. However, I am not sure how the initramfs could depend on the kernel version string.
Any directions on what might be going wrong would be very helpful.
Edit 1: Below is my init code:
#!/bin/sh
/bin/mount -t devtmpfs devtmpfs /dev
export PATH=/sbin:/usr/sbin:/bin:/usr/bin
[ -d /dev ] || mkdir -m 0755 /dev
[ -d /root ] || mkdir -m 0700 /root
[ -d /sys ] || mkdir /sys
[ -d /proc ] || mkdir /proc
[ -d /tmp ] || mkdir /tmp
[ -d /run ] || mkdir /run
mkdir -p /dev/pts
mkdir -p /var/lock
/bin/mount -t sysfs -o nodev,noexec,nosuid sysfs /sys
/bin/mount -t proc -o nodev,noexec,nosuid proc /proc
/bin/mknod -m 666 /dev/ttyS0 c 4 64
/bin/mknod -m 622 /dev/console c 5 1
/bin/mknod -m 666 /dev/null c 1 3
/bin/mknod -m 666 /dev/tty c 5 0
/bin/mknod -m 666 /dev/zero c 1 5
/bin/mknod -m 666 /dev/ttymxc3 c 5 1 # note: 5,1 is /dev/console's major/minor; i.MX UARTs (ttymxc*) typically use major 207
/bin/sh # --------------------> Spawning a shell
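For reference, a common pattern when an initramfs shell comes up but seems to ignore input is to give the shell a controlling terminal using BusyBox's setsid and cttyhack applets. A minimal sketch, not necessarily the fix here, assuming both applets are enabled in this BusyBox build:

# instead of the bare /bin/sh at the end of the script:
exec setsid cttyhack /bin/sh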
Try using a 5 V serial adapter instead of a 3.3 V one. With the lower voltage you can still see what the board is sending, but your adapter's transmissions don't get registered by the device.
What is the significance of -- in the command line of commands like lxc-create or lxc-start?
I tried using Google to find an answer, but without success.
// Example 1
lxc-create -t download -n u1 -- -d ubuntu -r DISTRO-SHORT-CODENAME -a amd64
// Example 2
application="/root/app.out"
start="/root/lxc-app/lxc-start"
$start -n LXC_app -d -f /etc/lxc/lxc-app/lxc-app.conf -- $application &
As explained in the references provided in the comments, the "--" marks the end of the options for the command itself. The parameters/options that follow are passed on, eventually to be interpreted by a sub-command that the command invokes.
In your example:
lxc-create -t download -n u1 -- -d ubuntu -r DISTRO-SHORT-CODENAME -a amd64
The lxc-create command will interpret "-t download -n u1", and the remaining "-d ubuntu -r DISTRO-SHORT-CODENAME -a amd64" will be passed to the template script, which will configure/populate the container.
In this specific example, the "-t download" makes lxc-create run a template script named something like "/usr/share/lxc/templates/lxc-download" to which it will pass "-d ubuntu -r DISTRO-SHORT-CODENAME -a amd64".
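The same convention is easy to demonstrate outside of LXC. Below is a minimal, hypothetical wrapper script (not part of LXC): POSIX getopts stops parsing at "--", so everything after it can be forwarded untouched to a sub-command:

#!/bin/sh
# parse this script's own options
while getopts "t:n:" opt; do
    case "$opt" in
        t) template="$OPTARG" ;;
        n) name="$OPTARG" ;;
    esac
done
shift $((OPTIND - 1))   # getopts stops at "--"; OPTIND points just past it
echo "our options: template=$template name=$name"
echo "forwarded to sub-command: $*"

Running it as ./wrapper -t download -n u1 -- -d ubuntu -a amd64 keeps -t/-n for the script itself and prints the remaining arguments unparsed, just as lxc-create hands them to the template script.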
Using VS Code and WSL2, I have tried to launch a container using the default method with no customization. This generated the same error as below.
So, following the VS Code docs, I set a "workspaceMount" in devcontainer.json:
"workspaceMount": "source=${localWorkspaceFolder},target=/workspaces/myRepo,type=bind,consistency=delegated",
"workspaceFolder": "/workspaces",
I select "Reopen in Container"; the launch sequence happens, but an error is generated:
a mount config is invalid, make sure it has the right format and a source folder that exists on the machine where the Docker daemon is running
The error in the log is:
Command failed: docker run -a STDOUT -a STDERR --mount source=d:\git\myRepo,target=/workspaces/myRepo,type=bind,consistency=delegated --mount type=volume,src=vscode,dst=/vscode -l vsch.quality=stable -l vsch.remote.devPort=0 -l vsch.local.folder=d:\git\myRepo --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --entrypoint /bin/sh vsc-myRepo-a878aa9edbcf04f717c76e764dabcde6 -c echo Container started ; trap "exit 0" 15; while sleep 1 & wait $!; do :; done
By launching the container from Docker Desktop I can confirm:
cd /workspaces
ls -l
drwxr-xr-x 2 root root 4096 Dec 3 11:48 myRepo
Is this issue due to the owner being root:root?
Should this be changed with chown in the Dockerfile? If so, could you provide sample code to do this, e.g. with RUN chown ...?
I guess you followed the documentation in https://code.visualstudio.com/docs/remote/containers-advanced
The source should contain the subfolder "myRepo", and the target should be only "/workspaces":
"workspaceMount": "source=${localWorkspaceFolder}/myRepo,target=/workspaces,type=bind,consistency=delegated",
"workspaceFolder": "/workspaces",
I'm trying to populate a disk image in a container environment (podman) on CentOS 8. I had originally run into issues with accessing the loop device from the container, until finding on SO and other sources that I needed to run podman as root and with the --privileged option.
While this did solve my problem in general, I noticed that after rebooting my host, my first attempt to set up a loop device in the container would fail (failed to set up loop device: No such file or directory), but after exiting and relaunching the container it would succeed (/dev/loop0). If for some reason I needed to set up a second loop device (/dev/loop1) in the container (after having gotten a first one working), it too would fail until I exited and relaunched the container.
Experimenting a bit further, I found I could avoid the errors in the container entirely if I first ran losetup --find --show <file created with dd> enough times to attach the maximum number of loop devices I would need, then detached all of those with losetup -D.
I suspect I'm missing something obvious about what losetup does on the host that it is apparently not able to do entirely within a container, or maybe this is more specifically a CentOS+podman+losetup issue. Any insight into what is going on, and why I have to pre-attach/detach the loop devices after a reboot to avoid problems inside my container, would be appreciated.
Steps to reproduce on a CentOS 8 system (after having attached/detached once following a reboot):
$ dd if=/dev/zero of=file bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.00826706 s, 1.3 GB/s
$ cp file 1.img
$ cp file 2.img
$ cp file 3.img
$ cp file 4.img
$ sudo podman run -it --privileged --rm -v .:/images centos:8 bash
[root@2da5317bde3e /]# cd images
[root@2da5317bde3e images]# ls
1.img 2.img 3.img 4.img file
[root@2da5317bde3e images]# losetup --find --show 1.img
/dev/loop0
[root@2da5317bde3e images]# losetup --find --show 2.img
losetup: 2.img: failed to set up loop device: No such file or directory
[root@2da5317bde3e images]# losetup -D
[root@2da5317bde3e images]# exit
exit
$ sudo podman run -it --privileged --rm -v .:/images centos:8 bash
[root@f9e41a21aea4 /]# cd images
[root@f9e41a21aea4 images]# losetup --find --show 1.img
/dev/loop0
[root@f9e41a21aea4 images]# losetup --find --show 2.img
/dev/loop1
[root@f9e41a21aea4 images]# losetup --find --show 3.img
losetup: 3.img: failed to set up loop device: No such file or directory
[root@f9e41a21aea4 /]# losetup -D
[root@f9e41a21aea4 images]# exit
exit
$ sudo podman run -it --privileged --rm -v .:/images centos:8 bash
[root@c93cb71b838a /]# cd images
[root@c93cb71b838a images]# losetup --find --show 1.img
/dev/loop0
[root@c93cb71b838a images]# losetup --find --show 2.img
/dev/loop1
[root@c93cb71b838a images]# losetup --find --show 3.img
/dev/loop2
[root@c93cb71b838a images]# losetup --find --show 4.img
losetup: 4.img: failed to set up loop device: No such file or directory
I know this is a little old, but I've stumbled across a similar problem, and here is what I've discovered:
After my VM boots up it does not have any loop devices configured, and that is OK, since mount can create additional devices if needed. But:
it seems that Docker puts an overlay over /dev, so the container won't see any changes made in /dev after the container was started. So even if mount requested new loop devices to be created, and they actually were created, my running container won't see them and will fail to mount because no loop device is available.
Once you restart the container it will pick up the new state of /dev, see the loop devices, and mount successfully, until it runs out of them and has to request more again.
So what I tried (and it seems to work): I passed /dev to Docker as a volume mount, like this:
docker run -v /dev:/dev -it --rm <image> <command>
and it did work.
If you still have this setup, I was wondering if you could try it too, to see if it helps.
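Applied to the podman command from the question, that suggestion would presumably look like this (an untested sketch):

$ sudo podman run -it --privileged --rm -v /dev:/dev -v .:/images centos:8 bash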
The only other method I can think of, beyond what you've already found, is to create the /dev/loop devices yourself on boot. Something like this should work:
modprobe loop # This may not be necessary, depending on your kernel build, but is harmless.
major=$(grep loop /proc/devices | cut -c3)   # block major for loop, normally 7
for index in 0 1 2 3 4 5
do
    mknod "/dev/loop$index" b "$major" "$index"   # loop devices are block ("b") devices
done
Put this in /etc/rc.local, your system's equivalent or otherwise arrange for it to run on boot.
I use the openocd script below to dump the flash memory of a STM32 microcontroller.
mkdir -p dump
openocd -f board/stm3241g_eval_stlink.cfg \
        -c "init" \
        -c "reset halt" \
        -c "dump_image dump/image.bin 0x08000000 0x100000" \
        -c "shutdown"
FILENAME=dump/image.bin
FILESIZE=$(stat -c%s "$FILENAME")
echo "Size of $FILENAME = $FILESIZE bytes."
The script is supposed to read the whole memory, which is 1 MB in my case, but it rarely does. Generally it stops reading the memory before the end.
Why can't I obtain 1 MB each time I execute this script? What is the problem that causes OpenOCD to stop dumping the rest of the memory?
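As a stopgap while the root cause is investigated, the dump can be retried until the expected size is read; a sketch building on the script above, assuming the flash really is 1 MB (1048576 bytes):

EXPECTED=1048576
while [ "$(stat -c%s "$FILENAME" 2>/dev/null || echo 0)" -ne "$EXPECTED" ]; do
    openocd -f board/stm3241g_eval_stlink.cfg \
            -c "init" \
            -c "reset halt" \
            -c "dump_image $FILENAME 0x08000000 0x100000" \
            -c "shutdown"
done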
You can use dfu-util to reflash your STM32 micros.
On Ubuntu/Debian distros you can install dfu-util with apt:
$ sudo apt-get install dfu-util
$ sudo apt-get install fwupd
Boot your board in DFU mode (check datasheet). Once in DFU mode, you should see something similar to this:
$ lsusb | grep DFU
Bus 003 Device 076: ID 0483:df11 STMicroelectronics STM Device in DFU Mode
Once booted in DFU mode, reflash your binary:
$ sudo dfu-util -d 0483:df11 -a 0 -s 0x08000000:leave -D build/$(PROJECT).bin
With the -d option you choose the vendor:product ID pair, as listed by lsusb in DFU mode.
With the -a 0 option you select alternate setting 0; check the options available as in the following example:
$ sudo dfu-util -l
Found DFU: [0483:df11] ver=2200, devnum=101, cfg=1, intf=0, alt=1, name="#Option Bytes /0x1FFFF800/01*016 e", serial="FFFFFFFEFFFF"
Found DFU: [0483:df11] ver=2200, devnum=101, cfg=1, intf=0, alt=0, name="#Internal Flash /0x08000000/064*0002Kg", serial="FFFFFFFEFFFF"
As you can see, alt=0 is for internal flash memory.
With the -s option you specify the flash memory address where your binary will be written. Check the memory map in your datasheet.
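Though the question was about dumping rather than flashing, dfu-util can also read flash back out with -U (upload). A sketch, assuming a DfuSe-capable device that accepts the address:length form of -s:

$ sudo dfu-util -d 0483:df11 -a 0 -s 0x08000000:0x100000 -U dump.bin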
Hope this helps! :-)
I'm trying to get a Varnish container running as part of a multicontainer Docker environment.
I'm using https://github.com/newsdev/docker-varnish as a base.
My Dockerfile looks like:
FROM newsdev/varnish:4.1.0
COPY start-varnishd.sh /usr/local/bin/start-varnishd
ENV VARNISH_VCL_PATH /etc/varnish/default.vcl
ENV VARNISH_PORT 80
ENV VARNISH_MEMORY 64m
EXPOSE 80
CMD [ "exec /usr/local/sbin/varnishd -j unix,user=varnishd -F -f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384" ]
When I run this as part of a docker-compose setup, I get:
ERROR: for eventsapi_varnish_1 Cannot start service varnish: oci
runtime error: container_linux.go:262: starting container process
caused "exec: \"exec /usr/local/sbin/varnishd -j unix,user=varnishd -F
-f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384\": stat exec
/usr/local/sbin/varnishd -j unix,user=varnishd -F -f
/etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p
http_req_hdr_len=16384 -p http_resp_hdr_len=16384: no such file or
directory"
I get the same if I try
CMD ["start-varnishd"]
(as it is in the base newsdev/docker-varnish)
or
CMD [/usr/local/bin/start-varnishd]
But if I run a bash shell on the container directly:
docker run -t -i eventsapi_varnish /bin/bash
and then run the varnishd command from there, varnish starts up fine (and starts complaining that it can't find the web container, obviously).
What am I doing wrong? What file can't it find? Again, looking around the running container directly, it seems that Varnish is where it thinks it should be, and the VCL file is where it thinks it should be... what's stopping it from running within docker-compose?
Thanks!
I didn't get to the bottom of why I was getting this error, but "fixed" it by using the (more recent?) fork: https://hub.docker.com/r/tripviss/varnish/. My Dockerfile is now just:
FROM tripviss/varnish:5.1
COPY default.vcl /usr/local/etc/varnish/
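For what it's worth, the original error message itself hints at the cause: with the exec (JSON-array) form of CMD, the whole string is treated as a single executable name, so Docker tries to stat a file literally called "exec /usr/local/sbin/varnishd ...". The shell form, which runs the line through /bin/sh -c and splits it into arguments normally, would presumably have worked with the original base image; a sketch:

FROM newsdev/varnish:4.1.0
COPY start-varnishd.sh /usr/local/bin/start-varnishd
EXPOSE 80
# shell form: the line is run via /bin/sh -c, so the arguments are split as usual
CMD /usr/local/sbin/varnishd -j unix,user=varnishd -F -f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384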