Flashing an internal SD card that is mounted as root

I am working on an embedded linux device that has an internal SD card. This device needs to be updatable without opening the device and taking out the SD card. The goal is to allow users to update their device with a USB flash drive. I would like to completely overwrite the internal SD card with a new SD card image.
My first thought was to unmount the root filesystem and use something to the effect of:
dd if=/mnt/flashdrive/update.img of=/dev/sdcard
However, it appears difficult to actually unmount a root filesystem correctly, as processes like "login" and "systemd" are still using resources on root. As soon as you kill login, for example, the update script is killed as well.
Of course, we could always use dd without unmounting root. This seems rather foolish, however. :P
I was also thinking of modifying the system init script to perform this logic before the system actually mounts the root filesystem.
Is there a correct/easy way to perform this type of update? I would imagine it has been done before.
Thank you!

Re-imaging a mounted file system doesn't sound like a good idea, even if the mount is read-only.
Consider:
Use a ramdisk (initialized from a compressed image) as your actual root filesystem, but perhaps have all but the most essential tools in file systems mounted beneath, which you can drop to upgrade. Most Linux implementations do this early in their boot process anyway before they mount the main disk filesystems: rebooting to do the upgrade may be an option.
SD cards are likely larger than you need anyway. Have two partitions and alternate between them each time you upgrade (see the sketch after this list). Or have a maintenance partition which you boot into to perform upgrades/recovery.
Don't actually image the file system, but instead upgrade individual files.
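For the two-partition idea, here is a minimal sketch, assuming a hypothetical layout with root on /dev/mmcblk0p2 or /dev/mmcblk0p3 and a U-Boot environment variable (boot_slot, set with fw_setenv) selecting the slot; your partition names and bootloader mechanism will almost certainly differ:
#!/bin/sh
# A/B update sketch: write the new image to the partition we are NOT running from,
# then tell the bootloader to boot it next time. All names here are assumptions.
CURRENT_ROOT=$(findmnt -n -o SOURCE /)      # e.g. /dev/mmcblk0p2; requires util-linux findmnt
if [ "$CURRENT_ROOT" = "/dev/mmcblk0p2" ]; then
    TARGET=/dev/mmcblk0p3
    NEXT_SLOT=b
else
    TARGET=/dev/mmcblk0p2
    NEXT_SLOT=a
fi
dd if=/mnt/flashdrive/update.img of="$TARGET" bs=4M conv=fsync
fw_setenv boot_slot "$NEXT_SLOT"            # U-Boot example; adapt to your bootloader
reboot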

Try one or both of the following:
Bring the system down to single-user mode first: telinit 1
and/or
Remount / as readonly: mount -o remount,ro /
before running the dd
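If you go that route, the whole sequence might look roughly like this; note there is no guarantee this is safe on every system, since a read-only root is still mounted while dd overwrites the device underneath it:
telinit 1                      # drop to single-user mode so most daemons release the root fs
mount -o remount,ro /          # stop further writes to the root filesystem
sync                           # flush anything still buffered
dd if=/mnt/flashdrive/update.img of=/dev/sdcard bs=4M conv=fsync
reboot -f                      # hard reboot; don't let shutdown scripts touch the old root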

Personally I would never do it the way you describe, but it is possible.
Your Linux system does something similar every time it boots: the kernel initially mounts the initrd, loads all the modules, and after that calls pivot_root to mount the real /.
pivot_root is also a command that can be used from the shell. You should read man 8 pivot_root, but just to give you an idea, you can do something like this:
mkdir -p /new-root
mount /dev/hda1 /new-root
cd /new-root
mkdir -p old-root                                  # mount point that will receive the old root
pivot_root . old-root
exec chroot . sh <dev/console >dev/console 2>&1    # get a shell on the new root
umount /old-root
One last thing: this way of performing a software upgrade is extremely fragile. Please consider other solutions.

Related

Install Postgres on removable volume on linux?

Cloud platforms like Linode.com often provide hot-pluggable storage volumes that you can easily attach and detach from a Linux virtual machine without restarting it.
I am looking for a way to install Postgres so that its data and configuration ends up on a volume that I have mounted to the virtual machine. The end result should allow me to shut down the machine, detach the volume, spin up another machine with an identical version of Postgres already installed, attach the volume and have Postgres work just like it did on the old machine with all the data, file system permissions and server-wide configuration intact.
Is such a thing possible? Is there a reliable way to move installations (i.e databases and configuration, not the actual binaries) of Postgres across machines?
CLARIFICATION: the virtual machine has two disks:
the "built-in" one which is created when the VM is created and mounted to /. That's where Postgres gets installed to and you can't move this disk.
the hot-pluggable disk which you can easily attach and detach from a running VM. This is where I want Postgres data and configuration to be so I can just detach the disk (after shutting down the VM to prevent data loss/corruption) and attach it to another VM when I want my data to move so it behaves like it did on the old VM (i.e. no failures to start Postgres, no errors about permissions or missing files, etc).
This works just fine. It is not really any different to starting and stopping PostgreSQL and not removing the disk. There are a couple of things to consider though.
You have to make sure it is stopped and all writes are synced before unmounting the volume. Obvious enough (and I can't believe you'd be able to unmount before the sync completed), but worth repeating.
You will want the same version of PostgreSQL, probably on the same version of operating system with the same locales too. Different distributions might compile it with different options.
Although you can put configuration and data in the same directory hierarchy, most distros tend to put config in /etc. If you compile from source yourself this won't be a problem. Alternatively, you can usually override the default locations or, and this is probably simpler, bind-mount the data and config directories into the places your distro expects.
Note that if your storage allows you to connect the same volume to multiple hosts in some sort of "read only" mode, that won't work.
Edit: steps from comment moved into body for easier reading.
1. Start up PG, create a table and put one row in it.
2. Stop PG.
3. Mount your volume at /mnt/db.
4. rsync /var/lib/postgresql/NN/main to /mnt/db/pg_data and /etc/postgresql/NN/main to /mnt/db/pg_etc.
5. Rename /var/lib/postgresql/NN/main to add .OLD to the name, and do the same with the /etc path.
6. Bind-mount the directories from /mnt to replace them (sketched below).
7. Restart PG.
8. Test.
9. Repeat.
10. Return to step 8 until you are happy.
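A rough sketch of steps 2 through 7, assuming a Debian/Ubuntu-style layout managed with pg_ctlcluster; /dev/sdX1 is a placeholder for the hot-plug volume and NN is your PostgreSQL major version:
pg_ctlcluster NN main stop                         # or: systemctl stop postgresql
mkdir -p /mnt/db
mount /dev/sdX1 /mnt/db                            # placeholder device for the hot-plug volume
rsync -a /var/lib/postgresql/NN/main/ /mnt/db/pg_data/
rsync -a /etc/postgresql/NN/main/     /mnt/db/pg_etc/
mv /var/lib/postgresql/NN/main /var/lib/postgresql/NN/main.OLD
mv /etc/postgresql/NN/main     /etc/postgresql/NN/main.OLD
mkdir /var/lib/postgresql/NN/main /etc/postgresql/NN/main
mount --bind /mnt/db/pg_data /var/lib/postgresql/NN/main
mount --bind /mnt/db/pg_etc  /etc/postgresql/NN/main
pg_ctlcluster NN main start                        # or: systemctl start postgresql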

Google jib - Change owner of all files and folders

All the app files and extraDirectories are owned by root.
/app/libs/
/app/resources/
/app/classes/
/app/logs
I want to run the application as a non-root user and I want these files/folders to be owned by that user only, not root.
Is there any way to do this? I found the Jib Maven extension linked below that can alter the owner, but its documentation recommends against doing so. Is there a better way?
https://github.com/GoogleContainerTools/jib-extensions/tree/master/first-party/jib-ownership-extension-maven
The reason you want to change the ownership of some part of the app directory is that your app wants to modify some files or create new ones inside it at runtime. Generally speaking, it is considered a good practice to build an image to be immutable as much as possible.
Since you mentioned /app/logs, I suspect that your app generates log files while it is running. On some modern container orchestration platforms (such as Kubernetes), apps are usually designed to output logs to stdout and stderr.
The best practice is to write your application logs to the standard output (stdout) and standard error (stderr) streams.
Think about it: if your app generates log files at /app/logs inside a container (there will be multiple containers of the same image running), how would you collect and monitor them in a unified way? What if different apps generate log files at different file system locations? But more importantly, if your container crashes, you'll just lose the log files. By writing logs to stdout and stderr, the platform (say, Kubernetes) will take care of all the complexities of managing and correlating logs from all pods.
If you cannot change how your app handles log files, you should at least mount a volume at /app/logs at runtime. For any container runtime (be it k8s or Docker), this is easily configurable. The mounted directory will usually be world-writable, so you won't need to change the ownership. But you'll still have to think about how to collect and manage the log files.
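With plain Docker, for example, you could run the Jib-built image as a non-root user and put a tmpfs over /app/logs; Docker's tmpfs mounts are world-writable by default, so no chown is needed. The image name and UID below are placeholders, and on Kubernetes the equivalent would be an emptyDir volume:
# Run as a non-root user with an ephemeral, writable tmpfs mounted over /app/logs.
docker run --user 1000:1000 \
    --tmpfs /app/logs:rw,size=64m \
    myapp:latest                     # placeholder image name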
Likewise, if it is not about log files but your app needs to create temporary files inside the app directory and you cannot change that location for some reason, you should at least try to mount an ephemeral volume there before falling back to the last resort of the Jib ownership extension you mentioned.
Conclusively, give a careful assessment of why you have to change the ownership first. If the app wants to mutate itself at runtime, usually it's not a good practice for containerization and there must be some root cause that you may need to resolve in a proper way.

Change kernel switch_root with JasperE84/root-ro script

I'm currently using a slightly modified version of the script https://github.com/JasperE84/root-ro for booting the system from a squashfs image. It works almost as expected.
It does boot into the new read-only filesystem from the image; however, it boots with the kernel from the "main" system, the system the initramfs was built on. I tried the switch_root command from the initramfs, but I can't get it working; actually, since this script creates an overlay, I don't think I should use switch_root at all.
Could somebody help me with an idea or a solution for how to boot the kernel that is in the read-only image instead of the one that the initramfs was built with?
Uros
If you want to use the kernel inside a squashfs file, you either need a boot loader that can read squashfs files, or you need to use kexec: start from a kernel that your boot loader can read, then jump into a kernel on any filesystem that the first kernel can read.
To elaborate on the kexec option, you would have:
A kernel and initramfs stored normally in the boot partition on a common filesystem
A simple init script in the initramfs that would mount the squashfs file and then find the new kernel
Call kexec to switch to the new kernel
Run another initramfs that again mounts the squashfs (it was lost during the kexec), initializes the overlay as in your script, and finishes booting the system. A rough sketch of the first stage follows.
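This is roughly what the first-stage init might do; the device and file names are placeholders, and the kexec flags are the standard kexec-tools ones:
#!/bin/sh
# First-stage init sketch: mount the boot partition and the squashfs it holds,
# load the kernel that lives inside the image, then jump into it.
mkdir -p /mnt /squash
mount -t vfat /dev/mmcblk0p1 /mnt                  # boot partition holding root.sqsh (placeholder)
mount -t squashfs -o loop,ro /mnt/root.sqsh /squash
kexec -l /squash/boot/vmlinuz \
      --initrd=/squash/boot/initrd.img \
      --command-line="console=tty1 quiet"          # second-stage initramfs sets up the overlay
umount /squash /mnt
kexec -e                                           # execute the loaded kernel; does not return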
switch_root might still be needed in the second initramfs, but it only changes the view of the filesystem for userspace. It doesn't change kernels.
U-Boot could simplify this by loading the initial kernel directly from the squashfs file, but I've never used it that way and don't know if it is compatible with the Raspberry Pi, so I can't make recommendations.

Google Compute Engine snapshot of instance with persistent disks attached failed

I have a working VM instance that I'm trying to copy to allow redundancy behind google load balancer.
A test run with a dummy instance worked fine, creating a new instance from a snapshot of a running one.
Now, the real "original" instance has a persistent disk attached, and this causes a problem when starting up the cloned instance because of the (obviously) missing persistent disk mount.
The serial console output logs look like this:
* Stopping cold plug devices [ OK ]
* Stopping log initial device creation [ OK ]
* Starting enable remaining boot-time encrypted block devices [ OK ]
The disk drive for /mnt/XXXX-log is not ready yet or not present.
Continue to wait, or Press S to skip mounting or M for manual recovery
As I understand it, there is no way to send any of these keystrokes to the instance. Is there any other way to overcome this issue? I know that I could unmount the disk before the snapshot, but the workflow I would like to put in place is taking periodic snapshots of production servers, so unmounting disks every time would require instance downtime (plus all the unnecessary risks of an action that would otherwise seem pointless).
Is there a way to boot this type of cloned instance successfully, and attach a new persistent disk afterwards?
Is this happening because the original persistent disk is in use, or would the same problem occur even if the original instance were offline (for example due to a failure, in which case I would try to create a new instance from a snapshot)?
One workaround that I am using to get around the same issue is that I don't actually unmount the disk; I just comment out the mount line in /etc/fstab and take the snapshot. This way my instance has no downtime and no unmounted disks while snapshotting. (I am using Ubuntu 14.04 as the OS, if that matters.)
Later I fix and uncomment it when I use that snapshot on a new instance.
However, you can also look into adding the nofail option to that mount line for a better solution.
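For example, a corresponding /etc/fstab entry might look like this (the UUID is a placeholder); nofail lets a clone finish booting even when the extra disk is absent:
# nofail: continue booting if the device is missing or not yet attached
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/XXXX-log  ext4  defaults,nofail  0  2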
By the way, I am doing a similar task, building a load-balanced setup with multiple webserver nodes, each cloned from the said snapshot with extra persistent disks mounted for e.g. uploads, data and logs.
I'm a little unclear as to what you're trying to accomplish. It sounds like you're looking to periodically snapshot the data volumes of a production server so you can clone them later.
In all likelihood, you simply need to sync and fsfreeze the filesystem before you make your snapshot, rather than unmounting/remounting it. The GCP documentation has a basic example of this in the Snapshots documentation.
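Something along these lines, for example; the disk name, zone and mount point are placeholders, and fsfreeze comes from util-linux:
sync                                               # flush dirty pages
fsfreeze --freeze /mnt/XXXX-log                    # quiesce the data filesystem
gcloud compute disks snapshot my-data-disk \
    --zone=us-central1-a \
    --snapshot-names=my-data-disk-$(date +%Y%m%d%H%M)
fsfreeze --unfreeze /mnt/XXXX-log                  # resume writes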

How to scale MongoDB?

I know that MongoDB can scale vertically. But what if I am running out of disk?
I am currently using EC2 with EBS. As you know, I have to provision an EBS volume at a fixed size.
What if MongoDB grows bigger than the EBS size? Do I have to create a larger EBS volume and copy the files over?
Or should we start more MongoDB instances, each connected to a different EBS disk? In that case, I could connect to a different instance for different databases.
If you're running out of disk, you obviously need to get a bigger disk.
There are several ways to migrate your data; it really depends on the kind of uptime you need. The first steps of course involve bundling the machine and creating the new volume.
These tips go from easiest to hardest.
Can you take the database completely off-line for several minutes?
If so, do this (migration by copy; a sketch follows the steps):
Mount new EBS on the server.
Stop your app from connecting to Mongo.
Shut down mongod and wait for everything to write (check the logs)
Copy all of the data files (and probably the logs) to the new EBS volume.
While the copy is happening, update your mongod start script (or config file) to point to the new volume.
Start mongod and check connection
Restart your app.
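A sketch of the copy variant, with placeholder device names and paths; adjust the init script and config file locations for your setup:
# Stop the database and copy its files to the new volume.
/etc/init.d/mongod stop                    # or however mongod is managed on your system
mkfs.ext4 /dev/xvdf                        # new, larger EBS volume (placeholder device name)
mkdir -p /data/db-new
mount /dev/xvdf /data/db-new
cp -a /data/db/. /data/db-new/             # data files (and logs, if they live alongside)
# Point dbpath at /data/db-new in your mongod config or start script, then:
/etc/init.d/mongod start                   # check you can connect before re-enabling the app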
Can you take the database off-line for just a few minutes?
If so, do this (slaving and switch; sketched below):
Start up a new instance and mount the new EBS on that server.
Install / start mongod as a --slave pointing at the current database. (you may need to re-start the current as --master)
The slave will do a fresh synchronization. Once the slave is up-to-date, you'll do a "switch" (next steps).
Turn off writes from the system.
Shut down the original mongod process.
Re-start the "new" mongod as a master instead of the slave.
Re-activate system writes pointing at the new master.
Done correctly, those last three steps can happen in minutes or even seconds.
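With the legacy master/slave options available at the time, the switch looks roughly like this; host names and paths are placeholders:
# New server: start mongod as a slave of the current master.
mongod --dbpath /data/db --slave --source old-host:27017
# ...wait for the initial sync to finish and pause application writes, then:
# Old server: shut the original master down cleanly.
mongod --dbpath /data/db --shutdown
# New server: stop the slave, then bring it back up as the master.
mongod --dbpath /data/db --shutdown
mongod --dbpath /data/db --master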
Can you not afford any down-time?
If so, do this (master-master):
Start up a new instance and mount the new EBS on that server.
Install / start mongod as a master and a slave against the current database. (may need to re-start current as master, minimal down-time?)
The new computer should do a fresh synchronization.
Once the new computer is up-to-date, switch the system to point at the new server.
I know it seems like this last version is actually the best, but it can be a little dicey (as of this writing). The reason is simply that I've honestly had a lot of issues with "Master-Master" replication, especially if you don't start with both active.
If you plan on using this method, I highly suggest a smaller practice run first. If something bombs here, Mongo might simply wipe all of your data files which will have the effect of taking more stuff down.
If you get a good version of this please post the commands, I'd like to see it in action.
Doesn't the E in EBS stand for "elastic", meaning something like resizing on the fly?
Currently the MongoDB team is working on finishing sharding, which will allow horizontal scaling by partitioning data across different servers. Give it a month or two and it will work fine. The developers are quite good at keeping their promises.
http://api.mongodb.org/wiki/current/Sharding%20Introduction.html
http://api.mongodb.org/wiki/current/Sharding%20Limits.html
You could slave the bigger disk off the smaller until it's caught up
or
fsync+lock and take a file system snapshot and copy it onto the bigger disk.
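For the fsync+lock route, a sketch using the mongo shell helpers (older shells use db.runCommand({fsync: 1, lock: 1}) and the corresponding unlock instead):
mongo --eval 'db.fsyncLock()'      # flush to disk and block further writes
# ...take the EBS or LVM snapshot of the data volume here...
mongo --eval 'db.fsyncUnlock()'    # release the lock once the snapshot exists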
Well, I am using MongoDB now. I am pretty amazed by the performance, especially on some simple sorting.
I believe it's a good tool for simple web application logic. The remaining concern for me is how to scale and back up. I will continue to explore.
The only disadvantage I have is that I don't have any good tools to inspect the data stored inside. For example, I want to put my logging from MySQL into Mongo as well. However, it's pretty difficult for me to view the logs. Previously, I could use a MySQL query to fetch what I wanted easily.
Anyway, it's a good tool and I will continue to use it.