I'm currently using a slightly modified version of the script https://github.com/JasperE84/root-ro for booting the system from a squashfs image. It works almost as expected.
It does boot into the new read-only filesystem from the image; however, it boots with the kernel from the "main" system, i.e. the system the initramfs was built on. I tried the switch_root command from the initramfs, but I can't get it working; in fact, since this script creates an overlay, I don't think I should be using switch_root at all.
Could somebody help me with an idea or solution for booting the kernel that is inside the read-only image, instead of the one the initramfs was built with?
Uros
If you want to use the kernel inside a squashfs file, you either need a boot loader that can read squashfs files, or you need to use kexec: boot a kernel that your boot loader can read, then jump into a kernel stored on any filesystem that the first kernel can read.
To elaborate on the kexec option, you would have:
A kernel and initramfs stored normally in the boot partition on a common filesystem
A simple init script in the initramfs that would mount the squashfs file and then find the new kernel
Call kexec to switch to the new kernel
Run another initramfs that again mounts the squashfs (because the mount was lost during kexec), initializes the overlay as in your example, and finishes booting the system
switch_root might still be needed in the second initramfs, but it only changes the view of the filesystem for userspace. It doesn't change kernels.
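For concreteness, a rough sketch of what the first-stage init script might do; the device names, the squashfs path, and the kernel/initrd paths inside the image are all illustrative, so adjust them to your layout:

mount -t proc none /proc
mount -t devtmpfs none /dev
mkdir -p /boot /squash
mount /dev/mmcblk0p1 /boot                      # partition the boot loader already reads
mount -o loop,ro /boot/root.squashfs /squash    # the read-only image holding the new kernel
kexec -l /squash/boot/vmlinuz \
      --initrd=/squash/boot/initrd.img \
      --command-line="$(cat /proc/cmdline)"     # stage the new kernel
kexec -e                                        # jump into it; the second initramfs takes over from here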
U-Boot could simplify this by loading the initial kernel directly from the squashfs file, but I've never used it and don't know whether it is compatible with the Raspberry Pi, so I can't make recommendations.
I am working in Google Vertex AI, which has a two-disk system of a boot disk and a data disk, the latter of which is mounted to /home/jupyter. I am trying to expose Python venv environments with kernelspec files, and then keep those environments exposed across repeated stop-start cycles. All of the default locations for kernelspec files are on the boot disk, which is ephemeral and recreated each time the VM is started (i.e., the exposed kernels vaporize each time the VM is stopped).

Conceptually, I want to use a VM start-up script to add a persistent data-disk path to the JUPYTER_PATH variable, since, according to the documentation, "Jupyter uses a search path to find installable data files, such as kernelspecs and notebook extensions." During interactive testing in the Terminal, I have not found this to be true. I have also tried setting the data directory variable, but it does not help.
export JUPYTER_PATH=/home/jupyter/envs
export JUPYTER_DATA_DIR=/home/jupyter/envs
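(For reference, my understanding is that Jupyter looks for kernelspecs in a kernels/ subdirectory of each data directory, and that these variables must be visible to the Jupyter server process itself, not just to an interactive terminal. The commands below are only illustrative; myenv is a made-up environment name.)

# registering a venv's kernel onto the persistent data disk:
/home/jupyter/envs/myenv/bin/python -m ipykernel install \
    --prefix=/home/jupyter/envs --name myenv
# this writes /home/jupyter/envs/share/jupyter/kernels/myenv/kernel.json,
# so the search path would then need to point at the matching share/jupyter dir:
export JUPYTER_PATH=/home/jupyter/envs/share/jupyter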
I have a beginner's understanding of Jupyter and of the important ramifications of using two-disk systems. Could someone please help me understand:
(1) Why is Jupyter failing to search for kernelspec files on the JUPYTER_PATH or in the JUPYTER_DATA_DIR?
(2) If I am mistaken about how the search paths work, what is the best strategy for maintaining virtual environment exposure when Jupyter is installed on an ephemeral boot disk? (Note, I am aware of nb_conda_kernels, which I am specifically avoiding)
A related post focused on the start-up script can be found at this url. Here I am more interested in the general Jupyter + two-disk use case.
Cloud platforms like Linode.com often provide hot-pluggable storage volumes that you can easily attach and detach from a Linux virtual machine without restarting it.
I am looking for a way to install Postgres so that its data and configuration end up on a volume that I have mounted to the virtual machine. The end result should allow me to shut down the machine, detach the volume, spin up another machine with an identical version of Postgres already installed, attach the volume, and have Postgres work just like it did on the old machine, with all the data, file system permissions and server-wide configuration intact.
Is such a thing possible? Is there a reliable way to move installations (i.e. databases and configuration, not the actual binaries) of Postgres across machines?
CLARIFICATION: the virtual machine has two disks:
the "built-in" one which is created when the VM is created and mounted to /. That's where Postgres gets installed to and you can't move this disk.
the hot-pluggable disk which you can easily attach and detach from a running VM. This is where I want Postgres data and configuration to be so I can just detach the disk (after shutting down the VM to prevent data loss/corruption) and attach it to another VM when I want my data to move so it behaves like it did on the old VM (i.e. no failures to start Postgres, no errors about permissions or missing files, etc).
This works just fine. It is not really any different from starting and stopping PostgreSQL without removing the disk. There are a couple of things to consider, though.
You have to make sure PostgreSQL is stopped and all writes are synced before unmounting the volume. Obvious enough, and I doubt you could unmount before the sync completed, but it is worth repeating.
You will want the same version of PostgreSQL, probably on the same version of operating system with the same locales too. Different distributions might compile it with different options.
Although you can put configuration and data in the same directory hierarchy, most distros tend to put config in /etc. If you compile from source yourself this won't be a problem. Alternatively, you can usually override the default locations or, and this is probably simpler, bind-mount the data and config directories into the places your distro expects.
Note that if your storage allows you to connect the same volume to multiple hosts in some sort of "read-only" mode, that won't work.
Edit: steps from comment moved into body for easier reading.
1. Start up PG, create a table, and put one row in it.
2. Stop PG.
3. Mount your volume at /mnt/db.
4. rsync /var/lib/postgresql/NN/main to /mnt/db/pg_data and /etc/postgresql/NN/main to /mnt/db/pg_etc.
5. Rename /var/lib/postgresql/NN/main, adding .OLD to the name, and do the same with the /etc directory.
6. Bind-mount the directories from /mnt to replace them (a shell sketch of steps 4-6 follows this list).
7. Restart PG.
8. Test.
9. Repeat: return to step 8 until you are happy.
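A rough shell sketch of steps 4-6, assuming a Debian/Ubuntu layout and keeping NN as the placeholder for your PostgreSQL major version:

rsync -a /var/lib/postgresql/NN/main/ /mnt/db/pg_data/      # data directory; -a preserves ownership and permissions
rsync -a /etc/postgresql/NN/main/ /mnt/db/pg_etc/           # cluster configuration
mv /var/lib/postgresql/NN/main /var/lib/postgresql/NN/main.OLD
mv /etc/postgresql/NN/main /etc/postgresql/NN/main.OLD
mkdir /var/lib/postgresql/NN/main /etc/postgresql/NN/main   # empty mount points
mount --bind /mnt/db/pg_data /var/lib/postgresql/NN/main
mount --bind /mnt/db/pg_etc /etc/postgresql/NN/main

On the new machine you would only need the mkdir and mount --bind steps (plus matching entries in /etc/fstab if you want the bind mounts to survive reboots).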
I have a number of legacy services running which read their configuration files from disk, and a separate daemon which updates these files as they change in ZooKeeper (somewhat similar to confd).
For most of these types of configuration we would love to move to a more environment-variable-like model, where the config is fixed for the lifetime of the pod. However, we need to keep the outside config files as the source of truth while services transition from the legacy model to Kubernetes. I'm curious whether there is a clean way to do this in Kubernetes.
A simplified version of the current model that we are pursuing is:
Create a Docker image which has a utility for fetching config files and writing them to disk. Once done, it writes a /donepath/done file.
The main image waits until the done file exists, then allows the normal service startup to proceed (a sketch of such a wait loop follows this list).
Use an emptyDir volume and volume mounts to get the config from the helper image into the main image.
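For example, the wait in the main image can be as small as a wrapper entrypoint like the one below; the /donepath emptyDir mount, the config path, and the service binary name are all illustrative:

#!/bin/sh
# block until the helper container signals that the config files are in place
until [ -f /donepath/done ]; do
    sleep 1
done
exec /usr/local/bin/legacy-service --config /config/service.conf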
I keep seeing instances of this problem where I "just" need to get a couple of files into the container at startup (to allow per-env/canary/etc. variance), and running all of this machinery each time seems like a burden to throw on devs. I'm curious whether there is a simpler way to do this already in Kubernetes or on the horizon.
You can use the ADD command in your Dockerfile. It is used as ADD <src> /path/in/image. This lets you quickly add a file to the image at build time. The file you want to add needs to be in the build context (typically the same directory as the Dockerfile) when you build the image. You can also add a tar archive this way, and it will be unpacked during the build.
Another option is the ENV command in your Dockerfile. This sets the data as an environment variable.
I am installing Ubuntu 14.04 on an Acer machine, and I realize that the OS can't boot if the boot files are lost.
I would really appreciate it if somebody could explain how these files work.
Thank you very much.
There are several stages of booting in GRUB, and each of them uses different file(s):
Stage 1: boot.img is stored in the master boot record (MBR), or optionally in any of the volume boot records (VBRs), and addresses the next stage. At installation time it is configured to load the first sector of core.img.
Stage 2: core.img is by default written to the sectors between the MBR and the first partition, when these sectors are free and available. Once executed, core.img will load its configuration file and any other modules needed, particularly file system drivers; at installation time, it is generated from diskboot.img and configured to load the stage 3 by its file path.
This is just a brief overview; for full information, check Wikipedia.
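If those files are ever lost or overwritten, they can normally be rewritten rather than reinstalling the OS; a minimal sketch, assuming a BIOS/MBR setup booting from /dev/sda:

sudo grub-install /dev/sda   # rewrites boot.img into the MBR and embeds core.img after it
sudo update-grub             # regenerates /boot/grub/grub.cfg used by the later stage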
I am working on an embedded linux device that has an internal SD card. This device needs to be updatable without opening the device and taking out the SD card. The goal is to allow users to update their device with a USB flash drive. I would like to completely overwrite the internal SD card with a new SD card image.
My first thought was to unmount the root filesystem and use something to the effect of:
dd if=/mnt/flashdrive/update.img of=/dev/sdcard
However, it appears difficult to actually unmount a root filesystem correctly, as processes like "login" and "systemd" are still using resources on root. As soon as you kill login, for example, the update script is killed as well.
Of course, we could always use dd without unmounting root. This seems rather foolish, however. :P
I was also thinking of modifying the system init script to perform this logic before the system actually mounts the root filesystem.
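Roughly what I have in mind, as a sketch only; the device names, the vfat USB stick, and the image path are all illustrative:

# run from the initramfs, before the real root filesystem is mounted
mkdir -p /mnt/usb
if mount -o ro /dev/sda1 /mnt/usb 2>/dev/null && [ -f /mnt/usb/update.img ]; then
    dd if=/mnt/usb/update.img of=/dev/mmcblk0 bs=4M conv=fsync   # overwrite the internal SD card
    umount /mnt/usb
    reboot -f
fi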
Is there a correct/easy way to perform this type of update? I would imagine it has been done before.
Thank you!
Re-imaging a mounted file system doesn't sound like a good idea, even if the mount is read-only.
Consider:
Use a ramdisk (initialized from a compressed image) as your actual root filesystem, but perhaps have all but the most essential tools in file systems mounted beneath, which you can drop to upgrade. Most Linux implementations do this early in their boot process anyway before they mount the main disk filesystems: rebooting to do the upgrade may be an option.
SD cards are likely larger than you need anyway. Have two partitions and alternate between them each time you upgrade (see the sketch after this list). Or have a maintenance partition which you boot into to perform upgrades/recovery.
Don't actually image the file system, but instead upgrade individual files.
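A rough sketch of the two-partition (A/B) idea, assuming the running root is /dev/mmcblk0p2 and /dev/mmcblk0p3 is the idle copy; the device names and the boot-selection step are illustrative and depend on your boot loader:

dd if=/mnt/flashdrive/update.img of=/dev/mmcblk0p3 bs=4M conv=fsync   # write the new image to the idle partition
# then switch the boot loader's root= (or its A/B flag) to point at p3 and reboot;
# the previously active partition stays intact as a fallback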
Try one or both of the following:
Bring the system down to single-user mode first: telinit 1
and/or
Remount / as read-only: mount -o remount,ro /
before running the dd.
Personally I would never do it the way you are doing it, but it is possible.
Your Linux system does it every time it boots. In fact, what happens is that your kernel initially mounts the initrd, loads all the modules, and after that calls pivot_root to mount the real /.
pivot_root is also a command that can be used from the shell; you'd better read man 8 pivot_root, but just to give you an idea, you can do something like this:
mount /dev/hda1 /new-root                          # mount the filesystem that will become the new /
cd /new-root
pivot_root . old-root                              # . becomes the new root; the old root moves to old-root (i.e. /old-root after the pivot)
exec chroot . sh <dev/console >dev/console 2>&1    # re-exec a shell chrooted into the new root, reattaching the console
umount /old-root                                   # detach the old root filesystem
One last thing: this way of performing a software upgrade is extremely fragile. Please consider other solutions.