yocto do_unpack() - same directory names in different layers

During experiments with Yocto's Advanced Kernel Metadata I got into a situation where I would like to have the .scc, .patch and .cfg files organized into a directory structure as mentioned in the syntax section, and to keep that structure in all the layers (consider 2 layers here - meta-main_layer and meta-bsp_layer):
meta-main_layer
└── recipes-kernel
    └── linux
        ├── linux-yocto.bb
        └── linux-yocto
            ├── cfg
            │   ├── main_frag_1.cfg
            │   └── main_frag_1.scc
            ├── features
            │   ├── main_feature_1.cfg
            │   ├── main_feature_1.patch
            │   └── main_feature_1.scc
            └── patches
                ├── main_patch_1.patch
                └── main_patch_1.scc
meta-bsp_layer
└── recipes-kernel
    └── linux
        ├── linux-yocto.bbappend
        └── linux-yocto
            ├── bsp
            │   └── bsp_definition.scc
            ├── cfg
            │   ├── bsp_frag_1.cfg
            │   └── bsp_frag_1.scc
            ├── features
            │   ├── bsp_feature_1.cfg
            │   ├── bsp_feature_1.patch
            │   └── bsp_feature_1.scc
            └── patches
                ├── bsp_patch_1.patch
                └── bsp_patch_1.scc
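For reference, each .scc file just ties its fragment and patch together; features/main_feature_1.scc, for example, looks roughly like this (contents illustrative):
define KFEATURE_DESCRIPTION "Example feature"
patch main_feature_1.patch
kconf non-hardware main_feature_1.cfg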
meta-main_layer/recipes-kernel/linux/linux-yocto.bb contains:
FILESEXTRAPATHS_prepend := "${THISDIR}:"
...
SRC_URI = "file://linux-yocto;type=kmeta \
"
Edit:
And meta-bsp_layer/recipes-kernel/linux/linux-yocto.bbappend:
FILESEXTRAPATHS_prepend := "${THISDIR}:"
...
SRC_URI += " file://linux-yocto;type=kmeta \
"
... end of edit
This means that after parsing the recipes, SRC_URI will contain file://linux-yocto;type=kmeta twice, and FILESEXTRAPATHS will be meta-bsp_layer/recipes-kernel/linux:meta-main_layer/recipes-kernel/linux:. The way I understand it, the do_unpack() task goes through the entries in SRC_URI and, for each of them, searches the FILESEXTRAPATHS paths for that file and takes the first match it finds. That means that in my case only the files from meta-bsp_layer are taken, as its path comes first in FILESEXTRAPATHS, and the files from meta-main_layer are ignored.
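The resolved values can be double-checked after parsing with something like this (assuming the recipe name is linux-yocto):
bitbake -e linux-yocto | grep -E '^(SRC_URI|FILESEXTRAPATHS|FILESPATH)='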
What I would like to achieve is that, instead of taking the files only from the first path found in FILESEXTRAPATHS, do_unpack() would go through all of the paths in FILESEXTRAPATHS and merge the same-named directories (cfg, features and patches) from both layers. Without that, I don't see much benefit in using the Advanced Kernel Metadata mechanism. Could anyone advise how to get this done?
PS: I'm using Yocto Zeus.

Related

How to change the default directory structure of dh_make so that dpkg-buildpackage does not throw any errors

I am trying to create a Debian package for a PostgreSQL extension, Apache-age release 1.1.1, and I created the directory structure using the dh_make command.
The directory structure is as follows:
age-1.1.1 (project root)
├── debian
│   ├── changelog
│   ├── compat
│   ├── control
│   ├── docs
│   ├── examples
│   ├── links
│   ├── manpages
│   ├── menu
│   ├── postinst
│   ├── postrm
│   ├── preinst
│   ├── prerm
│   ├── rules
│   ├── source
│   └── watch
├── src
└── Makefile
When dpkg-buildpackage -b is run from the project-root folder, it looks for the debian folder, reads the rules file, and then reads the Makefile located in the project root to build the package.
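For reference, the debian/rules file is usually just a thin make wrapper around dh, and dh in turn runs the package's own Makefile from the directory dpkg-buildpackage was started in. A minimal, illustrative rules file (the one generated by dh_make_pgxs will look different; recipe lines must be indented with a tab):
#!/usr/bin/make -f
# dh drives the build; for a makefile-based package it ends up calling
# $(MAKE) and $(MAKE) install on the top-level Makefile it finds.
%:
	dh $@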
I want to change the directory structure to the following:
.project root
├── packaging
│ ├── debian
│ │ ├── control
│ │ ├── control.in
│ │ ├── changelog
│ │ ├── copyright
│ │ ├── pgversions
│ │ ├── rules
│ │ └── ...
│ └──
├── src
├── LICENSE
├── README.md
├── Makefile
└── ...
I want to change the directory structure so that the dpkg-buildpackage -b command can be run from the packaging folder and still build the package.
Inside your Makefile, modify the install paths accordingly: they should point to packaging/debian/*, where * is the filename.
This way the Makefile points to the correct file path targets inside the new folder structure.
I'm not sure if this is the best way to do this but it's working for me:
Here are the steps:
First run the dh_make_pgxs command from the project root directory.
Create a packaging directory in the project root and move the debian directory created in step 1 into it, along with the Makefile, age.control and the age--1.1.1.sql file.
Your file structure should look like this:
.project root
├── packaging
│   ├── debian
│   │   ├── control
│   │   ├── control.in
│   │   ├── changelog
│   │   ├── copyright
│   │   ├── pgversions
│   │   ├── rules
│   │   └── ...
│   ├── age--1.1.1.sql
│   ├── age.control
│   ├── Makefile
│   └── ...
├── src
├── LICENSE
├── README.md
└── ...
Change the file paths in the Makefile like:
src/backend/age.o should be ../src/backend/age.o.
./tools/ should be ./../tools/.
and so on.
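A hypothetical sketch of what those adjusted paths could look like inside the Makefile (the variable names here are made up for illustration; the real AGE Makefile differs):
# The Makefile now lives in packaging/, one level below the project root,
# so everything that used to be relative to the root gets a ../ prefix.
OBJS      = ../src/backend/age.o        # was src/backend/age.o
TOOLS_DIR = ./../tools/                 # was ./tools/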
Now you can simply run the dpkg-buildpackage -b command from the packaging directory to build the debian package.
Note: in step 1 we run dh_make_pgxs in the project root first; this is to make sure that the project name in the control files and the version in the changelog file are correct. In this case the name/source in the control, control.in and changelog files should be apache-age, and the version number in the changelog file should be 1.1.1-1.
Alternatively, you can run the command from the packaging directory and manually change the name and version in the control and changelog files.

Using GitHub Actions in a Single Repository with Multiple Projects

I am fairly competent in using GitHub Actions to build a variety of languages and orchestrate deployments, and I've even done cross-repository actions using webhooks, so I'd say that I'm pretty familiar with working with them.
I often find myself doing a lot of scratch projects to test out an API or make a demo. These don't usually merit their own repositories, but I'd like to save them for posterity rather than just making Gists out of them, since Gists are largely impossible to search. I'd like to create a scratch repository with folders per language, like:
.
└── scratch
    ├── go
    │   ├── dancing
    │   │   ├── LICENSE-APACHE
    │   │   ├── LICENSE-MIT
    │   │   ├── main.go
    │   │   └── README.md
    │   ├── gogettur
    │   │   ├── LICENSE-APACHE
    │   │   ├── LICENSE-MIT
    │   │   ├── main.go
    │   │   └── README.md
    │   └── streeper
    │       ├── LICENSE-APACHE
    │       ├── LICENSE-MIT
    │       ├── main.go
    │       └── README.md
    ├── node
    │   └── javawhat
    │       ├── index.js
    │       ├── LICENSE-APACHE
    │       ├── LICENSE-MIT
    │       └── README.md
    └── rust
        ├── logvalanche
        │   ├── Cargo.toml
        │   ├── LICENSE-APACHE
        │   ├── LICENSE-MIT
        │   ├── README.md
        │   └── src
        ├── streamini
        │   ├── Cargo.toml
        │   ├── LICENSE-APACHE
        │   ├── LICENSE-MIT
        │   ├── README.md
        │   └── src
        └── zcini
            ├── Cargo.toml
            ├── LICENSE-APACHE
            ├── LICENSE-MIT
            ├── README.md
            └── src
I'd like to generalize the GitHub Actions per language: for Go, use go test ./... and go build; for Rust, cargo test and cargo build; and so on.
I know that I could have a workflow for each project, but that would be tedious, I'd end up copying and pasting most of the time, and every build would run on every change in the entire repository; I don't want to be building node/javawhat if only rust/zcini has changed.
Therefore I have a few questions:
Is it possible to have a workflow only run when certain files have changed, rather than running everything every single time?
Is there a way to generalize my workflows so that every dir in rust/ uses the same generic workflow, or will I need one workflow per project in the repository?
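For the first point, path filters on the push trigger look like they should cover it; something along these lines is what I have in mind for the rust/ tree (untested sketch, project names taken from the layout above):
# .github/workflows/rust.yml
name: rust
on:
  push:
    paths:
      - "rust/**"            # only run when something under rust/ changes
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        project: [logvalanche, streamini, zcini]   # one entry per Rust project
    defaults:
      run:
        working-directory: rust/${{ matrix.project }}
    steps:
      - uses: actions/checkout@v2
      - run: cargo test
      - run: cargo build
But I'd still have to keep the matrix list in sync by hand, hence the question about generalizing this.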

Importing json resources inside .pex (Python Executable (format by Twitter))

I'm using Pants, a build tool engineered at Twitter, to manage many projects inside my monorepo. It outputs .pex files when I complete a build; a .pex is a "binary" (actually an archive that's decompressed at runtime) that packages the bare minimum dependencies each project needs. My issue is that a utility my code has used for a long time fails to detect some .json files (now that I'm using Pants) that I have stored under my environments library; all my other code seems to run fine. I'm pretty sure it has to do with my config, perhaps I'm not declaring the resources properly so my code can find them, although when I unzip my_app.pex the resources I need are in the package and located in the proper location (dir). Here is the method my utility uses to load the JSON resources:
import importlib.resources
import json
import os
# (imports shown for completeness; logger and test_env come from the surrounding module)

if test_env:
    file_name = "test_env.json"
elif os.environ["ENVIRONMENT_TYPE"] == "PROD":
    file_name = "prod_env.json"
else:
    file_name = "dev_env.json"
try:
    json_file = importlib.resources.read_text("my_apps.environments", file_name)
except FileNotFoundError:
    logger.error(f"my_apps.environments->{file_name} was not found")
    exit()
config = json.loads(json_file)
Here is the BUILD file I currently use for these resources:
python_library(
    dependencies=[
        ":dev_env",
        ":prod_env",
        ":test_env"
    ]
)
resources(
    name="dev_env",
    sources=["dev_env.json"]
)
resources(
    name="prod_env",
    sources=["prod_env.json"]
)
resources(
    name="test_env",
    sources=["test_env.json"]
)
And here is the BUILD file for the utility that loads these resources (the Python code shown above):
python_library(
    name="environment_handler",
    sources=["environment_handler.py"],
    dependencies=[
        "my_apps/environments:dev_env",
        "my_apps/environments:prod_env",
        "my_apps/environments:test_env"
    ]
)
I always get a FileNotFoundError exception, and I'm confused because the files are available at runtime. What's causing these files to not be accessible, and is there a different format I need to set the JSON resources up as?
Also, for context, here is the decompressed .pex file (actually just the source-code dir):
├── apps
│   ├── __init__.py
│   └── services
│       ├── charts
│       │   ├── crud
│       │   │   ├── __init__.py
│       │   │   └── patch.py
│       │   ├── __init__.py
│       │   └── main.py
│       └── __init__.py
├── environments
│   ├── dev_env.json
│   ├── prod_env.json
│   └── test_env.json
├── __init__.py
├── models
│   ├── charts
│   │   ├── base.py
│   │   └── __init__.py
│   └── __init__.py
└── utils
    ├── api_queries
    │   ├── common
    │   │   ├── connections.py
    │   │   └── __init__.py
    │   └── __init__.py
    ├── calculations
    │   ├── common
    │   │   ├── __init__.py
    │   │   └── merged_user_management.py
    │   └── __init__.py
    ├── environment_handler.py
    ├── __init__.py
    ├── json_response_toolset.py
    └── security_toolset.py
I figured it out: I changed the way I access the files within the library, and it works perfectly both before and after building to the .pex format. I used:
import pkgutil
#json_file = importlib.resources.read_text("my_apps.environments", file_name)
json_file = pkgutil.get_data("my_apps.environments", file_name).decode("utf-8")
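(pkgutil.get_data() goes through the package's loader rather than opening a path directly, so it also works when the package sits inside a zip-style archive such as a .pex, which is presumably why this approach succeeds here.)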

If I have a local rpm in my ansible-playbook can I do yum install in one step?

I have downloaded an rpm into my Ansible playbook:
(djangoenv)~/P/c/apache-installer ❯❯❯ tree .
.
├── defaults
│   └── main.yml
├── files
│   ├── apache2latest.tar
│   ├── httpd_final.conf
│   ├── httpd_temp.conf
│   └── sshpass-1.05-9.1.i686.rpm
├── handlers
│   └── main.yml
├── hosts
├── meta
│   └── main.yml
├── README.md
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml
My question is: why can't I just install it using the following?
- yum: name=files/sshpass-1.05-9.1.i686.rpm
It complains that files/sshpass-1.05-9.1.i686.rpm is not found on the system. Now I am doing it in two steps:
- copy: src=files/sshpass-1.05-9.1.i686.rpm dest=/tmp/sshpass-1.05-9.1.i686.rpm force=no
- yum: name=/tmp/sshpass-1.05-9.1.i686.rpm state=present
No, there is no simple way around copying the package to the remote host before installing it. The Ansible yum module expects a file that is local to the managed host when you give a file path in the name parameter.
IMHO it is not a good idea to keep packages inside the Ansible code base, because they are binary and not really part of the actual Ansible code. It would be cleaner to set up a private repository and store those files there. That is the only way around copying the package in this situation that I'm aware of.
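A rough sketch of that repository-based approach (repo name and URL are placeholders):
- name: Add the internal repo that hosts the extra packages
  yum_repository:
    name: internal
    description: Internal packages
    baseurl: http://repo.example.com/centos/$releasever/$basearch/
    gpgcheck: no

- name: Install sshpass from that repo
  yum:
    name: sshpass
    state: present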

Including directories for doxygen documentation

I have a collection of .dox documentation files for my project
in a dox directory as illustrated below
In the INPUT section I have included ../ so that doxygen picks up the source code. However, when I put ./ it does not pick up my documentation files, and I have to include each file individually. Is there a way to include them automatically?
Here are the docs and lib directories. In lib I have the source code, whereas in docs I have the documentation.
../
├── docs
│ ├── dox
│ └── Doxyfile
└── lib
Here is the contents of the dox directory
./dox/
├── gnu_affero_gpl.dox
├── gnu_fdl.dox
├── gnu_gpl.dox
├── larsa
│   └── larsa_core.dox
├── larsa.dox
├── meidum
│   ├── lattices
│   ├── lattices.dox
│   ├── lattices.dox~
│   ├── polyhedra
│   └── polyhedra.dox
├── meidum.dox
├── modules.dox
└── vikingr.dox
I have now fixed the problem. The solution was to add *.dox to the FILE_PATTERNS variable in the Doxyfile.
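For reference, the relevant Doxyfile settings end up looking something like this (the source patterns you need will vary; note that setting FILE_PATTERNS replaces doxygen's built-in default list, so the source extensions have to be listed too):
INPUT          = ../ ./
RECURSIVE      = YES
FILE_PATTERNS  = *.h *.hpp *.cpp *.dox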