How to create a custom partition layout using yocto wic?

This might be a fairly simple question, but there are a few things I'm missing. I'm trying to use wic as a replacement for a custom script that lays out a boot partition. The device is an i.MX6 with U-Boot written at offset 0x400, plus a FAT32 boot partition to load from, containing a /boot folder with these files:
/boot
uImage
root.squashfs
splash.bmp
devicetree.dts
6x_bootscript
I briefly looked into the bootimg-partition source plugin for wic. It seems like a simple way to include files, but it doesn't appear to give enough control over the file names. It can take an entire folder, but I'm not sure how to create a directory with those specific files; the files must have the correct names after copying.
# Copied from https://community.nxp.com/thread/389816
# Image Creator .wks
part u-boot --source rawcopy --sourceparams="file=u-boot.imx" --ondisk mmcblk --no-table --align 1
# Boot partition
part /boot --source bootimg-partition --ondisk mmcblk --fstype=vfat --label boot --active --align 4096 --size 8M --extra-space 0

You can rename the files with the bootimg-partition wic plugin. You need to specify the new name after a semicolon in the IMAGE_BOOT_FILES variable. Here is an example, taken from the documentation, that renames uImage to kernel (and also adds u-boot.img as-is):
IMAGE_BOOT_FILES = "u-boot.img uImage;kernel"
You can also pick several files using a glob pattern and save them into a directory (renaming individual files is not possible in that case). Again, an example from the docs:
IMAGE_BOOT_FILES = "bcm2835-bootfiles/*;boot/"
See the documentation for the IMAGE_BOOT_FILES variable for a full explanation and more examples.
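Applied to the files from the question, a sketch might look like the following. The names on the left are assumptions; check what your kernel and device-tree recipes actually put into DEPLOY_DIR_IMAGE, and use the semicolon syntax to rename anything that deploys under a different name:

```conf
IMAGE_BOOT_FILES = "uImage \
                    root.squashfs \
                    splash.bmp \
                    your-board.dtb;devicetree.dts \
                    6x_bootscript"
```

Here your-board.dtb is a placeholder for the device tree your machine configuration deploys; the entry renames it to the name the bootscript expects.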
Build-time dependencies for wic images (e.g. native tools, bootloaders) are defined in the WKS_FILE_DEPENDS variable (see the docs for more information). Make sure the files listed in IMAGE_BOOT_FILES have proper dependencies on their respective recipes.
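For example, for the u-boot.imx rawcopy in the .wks above, something like this in the image recipe should work (the recipe name u-boot-imx is an assumption; use whichever recipe actually deploys your file):

```conf
# ensure u-boot.imx is deployed before wic assembles the image
WKS_FILE_DEPENDS += "u-boot-imx"
```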
Of course, you can also rename your files during do_deploy, so you don't need to handle renaming in wic. And you can create a new wic plugin in case you need something very specific.

Related

Why does Yocto use absolute paths in TMPDIR?

Changing the path of a Yocto environment is not a good idea, as I found out. This also explains why e.g. bitbake can be run regardless of the current working directory. Absolute paths are stored in many places during the build process, and even subdirectory structures are created inside the tmp directory tree. I ended up rebuilding from scratch, which takes a long time.
Here is a record of how I tried to modify all the paths:
find . -name '*.conf' -exec sed -i 's/media\/rob\/3210bcd4-49ef-473e-97a6-e4b7a2c1973e/home/g' {} +
This step replaces the absolute paths within many dynamically generated conf files (from xx/xx/linux to /home/linux, where linux was chosen for historical reasons; I could also have mounted the partition as /home/yocto or any other name).
Next I deleted the subdirectory structures containing the old path, hoping that the build process would recognize these deletions and still rebuild quickly:
find . -name '*3210bcd4-49ef-473e-97a6-e4b7a2c1973e*' -exec fakeroot rm -r {} +
It was not recognized. Then I gave up.
(This comes from a user new to Yocto, familiar with classic cross-build environments based on make menuconfig and the like.)
My question is:
Why are absolute paths generated and used throughout tmp instead of treating everything as relative?
Or, asked differently:
Why not use something like ${TOPDIR}/tmp throughout the build configuration, instead of hardcoding the absolute path to tmp?
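For what it's worth, the stock configuration does define tmp relatively; poky's conf/bitbake.conf contains a default along these lines (a sketch from memory, so verify against your own checkout). The catch is that ${TOPDIR} is expanded to an absolute path during the build, and that expanded value is then baked into sstate, scripts, and binaries under tmp:

```conf
# default in conf/bitbake.conf (sketch):
TMPDIR ?= "${TOPDIR}/tmp"
```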

Snakemake: how to realize a mechanism to copy input/output files to/from tmp folder and apply rule there

We use Slurm workload manager to submit jobs to our high performance cluster. During runtime of a job, we need to copy the input files from a network filesystem to the node's local filesystem, run our analysis there and then copy the output files back to the project directory on the network filesystem.
While the workflow management system Snakemake integrates with Slurm (by defining profiles) and allows each rule/step in the workflow to be run as a Slurm job, I haven't found a simple way to specify per rule whether a tmp folder should be used (with all the implications stated above) or not.
I would be very happy about simple solutions for realising this behaviour.
I am not entirely sure I understand correctly. I am guessing you do not want to copy the input of each rule to a certain directory, run the rule, and then copy the output back to another filesystem, since that would move a lot of unnecessary files around. So for the first half of the answer I assume that before execution you move your files to /scratch/mydir.
I believe you could use the --directory option (https://snakemake.readthedocs.io/en/stable/executing/cli.html). However, I find this works poorly, since Snakemake then has difficulty finding the config.yaml and samples.tsv.
The way I solve this is simply by prepending a working directory to the paths in each rule:
rule example:
    input:
        config["cwd"] + "{sample}.txt"
    output:
        config["cwd"] + "processed/{sample}.txt"
    shell:
        """
        touch {output}
        """
All you then have to do is change cwd in your config.yaml:
local:
    cwd: ./
slurm:
    cwd: /scratch/mydir
You would then have to copy the results back to your long-term filesystem manually, or make a rule that does that for you.
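Such a copy-back rule could be sketched like this (the paths and the sample wildcard are placeholders, not part of the original setup):

```
rule copy_back:
    input:
        "/scratch/mydir/processed/{sample}.txt"
    output:
        "/project/networkfs/processed/{sample}.txt"
    shell:
        "cp {input} {output}"
```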
If, however, you do want to copy your files from filesystem A to B, run your rule, and then move the result back from B to A, then I think you want to make use of shadow rules. I think the docs explain properly how to use them, so I'll just give a link :).
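For completeness, a minimal shadow-rule sketch (rule name and paths are placeholders; the location of the shadow directory is controlled separately with --shadow-prefix, which you could point at the node-local disk):

```
rule analyse:
    input:
        "data/{sample}.txt"
    output:
        "results/{sample}.out"
    shadow: "minimal"
    shell:
        "touch {output}"
```

With shadow: "minimal", Snakemake links the inputs into an isolated working directory, runs the rule there, and moves the declared outputs back afterwards.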

(Yocto Raspberry Pi) How to write a file with extension .tar.xz onto a micro SD card?

I'm building a custom Qt5 image using the Yocto tools. Once the build finishes, there is a file named qt5-image-raspberrypi2.tar.xz in the folder ...build/tmp/deploy/images/raspberrypi2/.
How can I write it to an SD card for the Raspberry Pi?
Your question is a bit vague.
If you just want to write the contents of the qt5-image-raspberrypi2.tar.xz file to your SD card, you can use the dd command. Given that it is a tarball, you will need to unpack it first before writing it to the SD card.
Example (where /dev/sdX is your mounted SD card on your computer):
tar -xJOf qt5-image-raspberrypi2.tar.xz | dd of=/dev/sdX bs=1M
If you already have an SD card image that has been generated within Yocto and you wish to include the qt5-image-raspberrypi2.tar.xz tarball in the SD card image, then you will need to modify your recipe to add the file to your SRC_URI list.
Example:
SRC_URI += "file://${DEPLOY_DIR}/qt5-image-raspberrypi2.tar.xz"
Replace the tarball file name with the appropriate variables used when generating the name, or hard-code it as above.

Path Issue in Matlab 2014b MCR Deployments with Added Folders

The compile command mcc -m app.m -a file.ext -a ./dir creates a directory /app containing app.m and file.ext, and a directory /dir on the same level as /app containing all the files from /dir. What is the solution to put /dir inside the /app directory rather than at the same directory level (i.e. /app/dir)?
(Here is why I want to do this: the directory /dir stores the images used by app.m, such as the splash screen, button icons, and default images. app.m accesses them using imread('./dir/img.jpg'). Since the compiler adds the /dir directory one level below where it appears in the Matlab structure at development time, the images are no longer accessible when the standalone software is deployed. Hence I need to use an isdeployed switch to select the correct image path for the development and deployment cases. I would rather avoid this, mainly on the aesthetic grounds that added files are treated inconsistently depending on whether they are on the same directory level as the compiled application: file.ext is put in /app, while the image files from /dir are moved to the same level as /app.)
Instead of adding the path with the compile command, you may be able to use mkdir at run time.
Your command would look like this:
status = mkdir('dir');
The advantage to this is that the dir path is now below your app path, i.e. /app/dir.
The disadvantage is that the user of your program will need the privileges to create that directory.
If you need to compile the directory with the app, you could still use mkdir and then movefile to move all of your files to the new directory from the packaged one.
A possible solution is to use two paths depending on whether the application is deployed or not. Code example:
% path to images directory 'pix'
apppath = mfilename('fullpath');
idx = strfind(apppath, filesep);
if isdeployed
    pixdir = [apppath(1:idx(end-1)), 'pix', filesep];
else
    pixdir = [apppath(1:idx(end)), 'pix', filesep];
end
% read image
img = imread(fullfile(pixdir, 'logo.jpg'));

How to use Archive::Extract safely, against zip bombs and the like?

Problem outline:
I need to allow uploading ZIP files (and tgz and other compressed directory trees) via a web form
the zip files should be extracted so their content can be handled
I'm planning to use Archive::Extract for the extraction
but there are things like ZIP BOMBS and the like...
From the manual
Archive::Extract can use either pure perl modules or command line
programs under the hood. Some of the pure perl modules (like
Archive::Tar and Compress::unLZMA) take the entire contents of the
archive into memory, which may not be feasible on your system.
Consider setting the global variable $Archive::Extract::PREFER_BIN to
1 , which will prefer the use of command line programs and won't
consume so much memory.
The questions are:
When I set $Archive::Extract::PREFER_BIN = 1, am I sufficiently protected against ZIP-BOMB-like attacks?
$Archive::Extract::PREFER_BIN protects me against excessive memory usage, but are the standard unzip, tar -z, and unrar binaries safe against zip-bomb-like attacks?
If not, how do I safely handle an uploaded compressed directory tree? (So there is not only one file inside the e.g. zip archive.)
$Archive::Extract::PREFER_BIN = 1 doesn't protect you against zip bombs; you are just passing the problem to your system's unzip binary.
This SO question may help you. I like the idea of running a second process with ulimit.
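A minimal sketch of the ulimit idea: run whatever extraction command you end up with in a subshell that has resource limits applied, so a zip bomb can at most exhaust its quota rather than the whole machine. The limit values below are placeholders; tune them for your workload.

```shell
# Run any extraction command under resource limits in a subshell.
limited() {
    (
        ulimit -t 60                    # max CPU seconds
        ulimit -f 204800                # max output file size, 512-byte blocks (~100 MB)
        ulimit -v 1048576 2>/dev/null   # max virtual memory in KB, where supported
        exec "$@"
    )
}

# usage, e.g.: limited unzip -qq upload.zip -d ./extracted
```

Because the limits are set inside a subshell, the parent web-handling process keeps its own limits untouched.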